Multiple Gateways -> One Database Alert Table

Hey Guys,

Do you see any problem with having multiple distributed gateways whose alert storage profiles point to the same database connection and table? Is there a way each system can identify itself in the “SYSTEM” column? I’m not sure where I can set that, and everything shows up as “SQLTags.default”.

Thanks,

Dan

You’re right, it should include the system ID. There isn’t a way to do that right now; however, that “default” part is the name of the tag provider, so you could change it on each system. In other words, a new install creates an internal provider named “default”. You should be able to rename that to anything you want, and then use it to identify the system.

We’ll have to figure out how to work the system name in, in a backwards-compatible way. Across the board, we’re trying to look at how all the SQLTags pieces play together and ensure that multiple systems coexist cleanly; that’s kind of the benefit of having a tag system that can live in a central database.

Regards,

Thanks for the response, Colby.

So, in a nutshell, I can at least have multiple systems log to one alert table centrally? I can use tag paths to identify the system in the meantime; I just don’t want the gateways stepping on each other.

On the topic of multiple systems and SQLTags, one issue I’ve found recently: when a DB Driving Provider is executing tags in one time zone (say, the gateway and DB are in Central) and a gateway in the Eastern time zone has a DB Provider watching that database, all the tags are stale. I think there needs to be something that identifies time zones and accounts for them… unless the problem is that I’m using Oracle DATEs instead of TIMESTAMPs…

Thanks,

Dan

So yes, you can have multiple gateways writing to the alert log table.

For the SQLTags, there is probably a solution based on DB column types. In Oracle, using “TIMESTAMP WITH TIME ZONE” should store the time zone info, and I would suspect that the JDBC driver uses it when reading dates from the table. I haven’t tested that, though. The time is stored simply using CURRENT_TIMESTAMP, which does contain time zone data as well, according to this.
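To make the idea concrete, here’s a minimal sketch of what a time-zone-aware alert table might look like. The table and column names are purely illustrative, not Ignition’s actual schema:

```sql
-- Hypothetical alert-log table; names are illustrative, not Ignition's real schema.
CREATE TABLE alert_log_tz (
    system     VARCHAR2(64),
    tag_path   VARCHAR2(255),
    -- TIMESTAMP WITH TIME ZONE preserves the writer's UTC offset, so
    -- gateways running in different time zones can compare times correctly
    event_time TIMESTAMP WITH TIME ZONE
);

-- CURRENT_TIMESTAMP returns a TIMESTAMP WITH TIME ZONE in the session's zone
INSERT INTO alert_log_tz (system, tag_path, event_time)
VALUES ('GW-Central', 'Plant1/Line2/TempHigh', CURRENT_TIMESTAMP);
```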

Regards,

That may be our issue. We changed our Oracle translator to use DATE instead of TIMESTAMP. The reason is so we can use interval partitioning on our large historical tables.

For any Oracle users out there, this is an awesome feature: it automatically creates new partitions for any date range (day, week, 12 hours, etc.) when new rows are inserted that don’t fit in an existing partition, but it requires a DATE field to partition on. Using daily interval partitions with local bitmap indexes (bitmaps have improved a lot for high-transaction support in recent Oracle releases), we can query a few hours of a specific plant code and piece of equipment, in a table with hundreds of millions of rows covering all factories and equipment, in a few hundred milliseconds. Anyway, that’s a whole different post. Sorry, I get carried away when talking about query speeds :smiley:
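For anyone who wants to try this, here’s a hedged sketch of what the DDL looks like; the table and column names are made up for illustration, and you’d adjust the interval and initial partition boundary to suit your data:

```sql
-- Sketch of Oracle interval partitioning on a DATE column (names illustrative).
-- Oracle creates a new daily partition automatically whenever an inserted
-- row's t_stamp falls outside every existing partition.
CREATE TABLE history_data (
    plant_code   VARCHAR2(16),
    equipment_id VARCHAR2(32),
    t_stamp      DATE NOT NULL,
    value        NUMBER
)
PARTITION BY RANGE (t_stamp)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2011-01-01'));

-- Local bitmap indexes on the low-cardinality columns keep the
-- plant/equipment lookups fast within each daily partition.
CREATE BITMAP INDEX idx_hist_plant ON history_data (plant_code) LOCAL;
```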

Let me try using timestamp for the sqlt_core tables and we’ll see if the issue disappears.

Thanks!

Dan

Also, make sure to use it on the time columns of “sqlt_sci” as well; that’s the table used to determine staleness. The core table will need to be changed in order to properly detect value changes.
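In practice that would mean something like the following; note the column names here are guesses, so check your actual sqlt_sci / sqlt_core definitions before running anything:

```sql
-- Hedged sketch: the column names below are hypothetical placeholders.
-- Verify the real time columns in your sqlt_sci and sqlt_core tables first.
ALTER TABLE sqlt_sci  MODIFY (lastexec    TIMESTAMP WITH TIME ZONE);
ALTER TABLE sqlt_core MODIFY (valuechange TIMESTAMP WITH TIME ZONE);
```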

Very interesting info about the DB partitioning. There’s a tremendous amount of power hidden in these database systems, and hopefully some day we can start putting together guides to help people leverage it (without having to become a certified DBA :slight_smile: )

Regards,

Now if only PostgreSQL would support bitmapped/clustered indexes… :cry:

I’m no Oracle DBA, and I think Oracle users are still in the minority for Ignition, but I’d be happy to help out with some techniques we’ve picked up over time. Some Oracle features we’ve used with our Ignition systems are:

- interval partitions
- bitmap indexes (not an Oracle-exclusive thing, but they now work well with real-time systems)
- dbms_alert (for synchronous DB calls from non-Ignition clients that trigger asynchronous SQL Bridge downloads; basically, existing IT applications can download to PLCs, with confirmation, using regular stored procedure calls)
- table-valued functions
- analytic/window SQL functions (probably the most powerful functions we’ve used for analyzing time-series data)

If I ever get the time, I’ll put some things up on the Knowledge Base…

Dan

Robert,

It looks like Postgres has window functions. I never realized it:

http://developer.postgresql.org/pgdocs/postgres/functions-window.html

Have you ever used them? You can derive tons of information from simple on/off states, counts, scale weights, etc. AFTER you’ve logged them. We’ve come back to the same table and extracted new information multiple times using these types of functions.
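As a small taste of the idea, here’s a sketch of deriving state durations after the fact from a raw on/off log. The schema is made up for illustration; the same query shape works in both Oracle and Postgres:

```sql
-- Illustrative schema: state_log(machine, t_stamp, state).
-- LAG() pulls the previous row's timestamp within each machine's history,
-- so every row can be paired with the state change that preceded it,
-- turning a plain event log into per-state durations.
SELECT machine,
       t_stamp,
       state,
       t_stamp - LAG(t_stamp) OVER (PARTITION BY machine
                                    ORDER BY t_stamp) AS time_in_prev_state
FROM state_log
ORDER BY machine, t_stamp;
```

With a DATE column the difference comes back as a number of days in Oracle; with TIMESTAMP (or in Postgres) it comes back as an interval.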

Dan

Hi Dan,

Never used window functions before. They look pretty cool. I’ll have to have a play with them some day. :slight_smile:

Eh, so a kink in the plan. Apparently “TIMESTAMP WITH TIME ZONE” in the Oracle JDBC driver gets mapped to Oracle’s own custom type, which requires some work to convert to a normal Java Date. This causes queries in Ignition to throw the error “invalid typecode -101”.

We’ll have to figure out something else using a normal timestamp. I’m going to try to mock it up, but is there any chance that changing it back to a normal TIMESTAMP (or DATE), and then switching the translator to use “SYSTIMESTAMP” instead of “CURRENT_TIMESTAMP”, could work?
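For anyone following along, the distinction being proposed here is this: SYSTIMESTAMP returns the database server’s own clock (with the server’s UTC offset), while CURRENT_TIMESTAMP reflects the session time zone, which can differ per connecting gateway. A quick way to see them side by side:

```sql
-- Compare the two on any Oracle connection
SELECT SYSTIMESTAMP, CURRENT_TIMESTAMP FROM dual;

-- Force a different session zone and re-run the SELECT to watch
-- CURRENT_TIMESTAMP shift while SYSTIMESTAMP stays put
ALTER SESSION SET TIME_ZONE = 'America/New_York';
```

Writing with SYSTIMESTAMP would anchor every gateway’s writes to the one server clock, sidestepping per-session zone differences.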

Regards,

Colby,

Sorry to dredge this back up after so long, but did anything come of this? Every configuration I’ve tried with SYSTIMESTAMP and DATE types seemed to either make my tags stale or totally hose my DB tag provider. It looks like you’ve made some software changes regarding DB-driven tags in 7.3, but I don’t know if any of it relates to the Oracle time zone issue.

Thanks,

Dan

Ah! That’s my fault for not posting back here. Well, let’s see… probably the key point is that in 7.2.9 we fixed the problem with not being able to handle the Oracle “timestamp with time zone” type. So I would give it another shot with 7.2.9 (or 7.3) and those column types.

Regards,