Query Tag Data Stored by Transaction Groups per server/machine

Ignition 8.1.x nightly build

What I have: I've created a way to enter a host or IP to create an OPC UA server connection. Tags are autogenerated for that server, and my screens then pick up the tags and show screens and details on a per-server basis. This is working nicely. I also have a nice screen where I can add the tags I want to collect data on; this adds them to a list.

What I want to do now is add these tags to a transaction group whose trigger is set only by that server, for those tags. I can use the functions in system.groups to do this. However, I'm unsure where to store my source XML data. Is there a location in the Ignition directory where files we manually add get backed up? Or should I generate the XML data on the fly, write it to a temporary file, and then load the groups?
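The generate-on-the-fly approach can be sketched roughly as below. Note the XML schema shown is illustrative only, not Ignition's actual group export schema — export a real group once and use that file as your template. `system.groups.loadFromFile` is a real SQL Bridge function, but check its exact argument order against the docs for your 8.1 build; the project name and mode value here are assumptions.

```python
# Sketch: build transaction-group XML per server, write it to a temp
# file, load it, then delete the file. Runs in a gateway script.
import tempfile, os

# ILLUSTRATIVE schema only -- replace with XML exported from a real group.
GROUP_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<groups>
  <group name="{server}_group">
    <trigger tagPath="[default]{server}/trigger"/>
    {items}
  </group>
</groups>
"""

def build_group_xml(server, tag_paths):
    """Render a transaction-group XML string for one server."""
    items = "\n    ".join(
        '<item tagPath="[default]%s/%s"/>' % (server, t) for t in tag_paths
    )
    return GROUP_TEMPLATE.format(server=server, items=items)

def write_group_file(server, tag_paths):
    """Write the XML to a temp file and return its path; nothing cleans
    it up automatically, so delete it after loading."""
    xml = build_group_xml(server, tag_paths)
    fd, path = tempfile.mkstemp(suffix=".xml")
    with os.fdopen(fd, "w") as f:
        f.write(xml)
    return path

# In the gateway you would then do something like:
#   path = write_group_file("Machine01", ["Speed", "Temp"])
#   system.groups.loadFromFile(path, "MyProject", 0)  # verify arg order in docs
#   os.remove(path)
```

This avoids the question of a backed-up storage location entirely: the template lives in a project script, and the XML file is disposable.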

Once I have my groups defined, I assume the stored data includes the paths of the tags themselves, so I can query on them? I'd like the queries that display historical data to show only the data for each server. The tags generated for each server do have a unique tag path, so I'm thinking I can query the database matching on those tag paths.
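Since each server's tags share a unique path prefix, the per-server query reduces to filtering on that prefix. A minimal sketch, assuming tags are stored through the Tag Historian (if your groups write to their own tables instead, you'd filter on the table or a server column): the helper builds the path list, and the `system.tag.queryTagHistory` call (a real Ignition function) is shown commented because it needs a running gateway. The server and tag names are placeholders.

```python
# Sketch: per-server history query built from the server's unique
# tag-path prefix. "[default]" and the path layout are assumptions --
# match them to how your tags are actually provisioned.
def server_history_paths(server, tag_names):
    """Fully-qualified historical tag paths for one server/machine."""
    return ["[default]%s/%s" % (server, t) for t in tag_names]

paths = server_history_paths("Machine01", ["Speed", "Temp"])
# ds = system.tag.queryTagHistory(
#         paths=paths,
#         rangeHours=8,
#         returnSize=500,
#         aggregationMode="Average")
```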

It is important that each server have its own trigger in the transaction group. The reason is that we want to take fine-grained data during operations and sparse data while each server is idle. Each server represents a separate machine, so storing and retrieving data for graphing and reports will be done per machine/server.

I really would like more scripting control over Transaction Groups and Tag Groups for this, but that is not where Ignition is at right now. I did upvote the suggestions for this in the Features forum.

For such a dynamic environment, consider using gateway timer scripts to emulate transaction groups. The value sources, trigger evaluation, database connections, and table structures can all be dynamic. One timer script to rule them all… and in the darkness bind them.

{ I think Discourse needs a :sauron: emoji. }

3 Likes

How fast is reasonable? My tags' fastest read rate from the server is 100 ms, and I'd like to read at that speed, but only when the process is actually running. So I think one timer for low-res and one for high-res would work, with a tag determining which one collects. It looks like there is lots of scripting support for databases; I'll have to look at how Ignition normally stores data.

Edit:
I could separate the server data into separate tables this way. That would be a very good thing.

I would use a single timer script running at the fastest pace desired. It can then decide, based on any desired conditions, which transactions to run in that cycle. Any transaction can be deferred for a slower pace or not recorded at all depending on any desired conditions. I would cache the last rows recorded for any particular transaction in order to generically permit decisions based on changes (or variable paces).
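The single-timer dispatcher described above might look something like this sketch. `read_values` and `record_row` are hypothetical stand-ins for your tag reads and database insert; the idle pacing and change detection are one way to implement the "defer or skip" decision, not Ignition's.

```python
# Sketch: one timer script at the fastest pace (e.g. 100 ms) decides
# per machine whether to record this cycle. Running machines record
# every cycle; idle ones every Nth cycle. Unchanged rows are skipped.
_last_rows = {}   # machine -> last recorded row, for change detection
_counters = {}    # machine -> cycles since last record

def run_cycle(machines, read_values, record_row, slow_every=10):
    for m in machines:
        row = read_values(m)                      # dict of current values
        _counters[m] = _counters.get(m, 0) + 1
        running = row.get("running", False)
        due = running or _counters[m] >= slow_every
        changed = row != _last_rows.get(m)
        if due and changed:
            record_row(m, row)                    # your DB insert
            _last_rows[m] = row
            _counters[m] = 0
```

Because `_last_rows` caches the last recorded row per machine, "record only on change" and variable paces both fall out of the same comparison.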

1 Like

If you need to store state in a gateway event script, is there a standard way to do this?
I could create a script library and ensure it's only instantiated once so I can store state there, but I don't know what kind of retention that has. An alternative would be to store a dataset or similar in a tag.

The first piece of state I'd like to store is a count of how many times the event has fired; then just use a modulus to fire at different rates. I guess I could use time instead. I'll look into the granularity of the date/time functions.

I commonly use a Python dictionary at the top level of a project script module. It'll be empty after project saves that cause script restarts. On any call from the one timer script, start by checking whether the dictionary is empty; load the last stored rows for all transactions into the cache (dictionary keys) at that point. From then on, update the cache as you go.

1 Like