Is this a fair test of Tag Change Events?

This is a follow-on to my question Scripting architecture choices. In that question it was suggested that, due to the short Tag Change Event queue, there was potential for me to lose events as I scaled up my project. I thought about this for a bit and then decided “Challenge Accepted!!” and set out to see if I could force the system to lose events. That way I could get a feel for when problems would start to occur. To that end I mocked up a test system that I hoped would stress Ignition while mimicking my real-world system.

The system architecture is Ignition 8.1.15 running in an Ubuntu VM, with an MS SQL database running in the same VM. The VM runs on a typical corporate laptop, which is not overly powerful. The PLC is an Emerson RX3i sitting on my local network.

On the PLC side of things, my test system comprises a data block that cycles a mixture of real and int data over a set of repeating values. It also has 99 Boolean trigger variables that I can set to True in groups within a single PLC scan cycle, e.g. groups of 10, 15, 30, 50, 75, and 99 trigger variables. (Ideally I would have 99 separate PLCs, each triggered individually, but hey, the petty cash is low this week.)

On the Ignition side, I created a single connection to the PLC. I then created a UDT that reads the cyclic data and, based on a parameter, points at a single trigger variable out of the 99 in the PLC. I then created 99 instances of this UDT to match the 99 trigger variables in the PLC (which resulted in over 2100 tags according to the gateway).
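For anyone reproducing this, the instances can be built from the Script Console in a few lines; the folder, UDT, and parameter names below are placeholders for whatever you use:

```python
# Sketch: build the 99 UDT instances via scripting (placeholder names).
# Assumes a UDT definition "StressTest" with an integer parameter
# "TriggerIndex" that selects which of the 99 PLC trigger bits to watch.
instances = []
for i in range(1, 100):
    instances.append({
        "name": "Stress%02d" % i,           # Stress01 .. Stress99
        "typeId": "StressTest",             # the UDT definition to instantiate
        "parameters": {"TriggerIndex": i},  # which PLC trigger bit to watch
    })

# Create them all under one folder in the default tag provider.
system.tag.configure("[default]Stress", instances, "o")  # "o" = overwrite
```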

Next was the scripting. I created a tag change script on the trigger value in the UDT that did two things (a sketch follows the list):

  1. Collect the cyclic data from the instance and write it to a DB table via a Named Query
  2. Set a flag in the UDT instance that marked that the tag change event script had executed.
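A minimal sketch of that value-changed script, with placeholder tag, project, and query names:

```python
# Tag event script (Value Changed) on the Trigger member of the UDT.
# Ignition supplies: tag, tagPath, previousValue, currentValue,
# initialChange, missedEvents.
def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    # Skip the event fired at subscription time, and only act on a rising edge.
    if initialChange or not currentValue.value:
        return

    # The instance folder is the parent of the trigger tag.
    instance = str(tagPath).rsplit("/", 1)[0]

    # 1. Collect the cyclic data from the instance and insert it via a named query.
    values = system.tag.readBlocking([instance + "/CyclicReal", instance + "/CyclicInt"])
    # Gateway scope: the project name is required unless a gateway
    # scripting project is configured.
    system.db.runNamedQuery("MyProject", "StressTest/InsertCycle", {
        "instance": instance,
        "realVal": values[0].value,
        "intVal": values[1].value,
    })

    # 2. Flag that the script ran; async so we don't block the event thread.
    system.tag.writeAsync([instance + "/ScriptRan"], [True])
```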

I also created a second script, a Gateway Timer script, that collects some data from every instance of the UDT and writes it to a second DB table, also via a Named Query. This script runs every 1000 ms (and takes about 190 ms to run).
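For completeness, a sketch of that timer script (placeholder names again; 99 serial inserts would also account for the ~190 ms run time):

```python
# Gateway timer script, fixed rate 1000 ms. Reads a status value from
# every UDT instance and inserts one row per instance (placeholder names).
paths = ["[default]Stress/Stress%02d/Status" % i for i in range(1, 100)]
values = system.tag.readBlocking(paths)

for path, qv in zip(paths, values):
    system.db.runNamedQuery("MyProject", "StressTest/InsertStatus",
                            {"instance": path, "status": qv.value})
```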

Finally I made a Perspective screen that shows the status of each UDT instance: the value of the trigger variable in the PLC alongside the flag that the tag change script sets. By comparing the two values I should be able to see whether a tag change event has been missed, e.g. the PLC trigger variable is set but the tag change flag is not.
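The same comparison can also be run in one pass from the Script Console as a sanity check alongside the screen (paths are placeholders):

```python
# Sketch: flag any instance where the PLC trigger is set but the
# tag change script never marked itself as having run.
triggers = ["[default]Stress/Stress%02d/Trigger" % i for i in range(1, 100)]
flags    = ["[default]Stress/Stress%02d/ScriptRan" % i for i in range(1, 100)]

trigVals = system.tag.readBlocking(triggers)
flagVals = system.tag.readBlocking(flags)

missed = [t for t, tv, fv in zip(triggers, trigVals, flagVals)
          if tv.value and not fv.value]
print "missed: %d" % len(missed)
for path in missed:
    print path
```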

And this is where I was hoping the fun would start. By setting a group of triggers in the PLC during a single scan cycle, I could simultaneously invoke the tag change event script on the same number of UDT instances, and to me that meant I could control how much stress I was applying to Ignition.

And when the results came in I was surprised in a disappointed way. Even when I simultaneously set all 99 trigger variables in the PLC, the Perspective screen indicated that all 99 tag change scripts had successfully run, and looking in the DB I saw all 99 rows of data. There was supposed to be an earth-shattering kaboom, or at least some lost events.

Thus my question is:

Is my test system a fair test for stressing the Ignition Tag Change Event queue? And if not, how could I change it in order to get an earth-shattering kaboom, er, events to be lost?

You can’t. It will run out of memory and crash first.

Are you saying that Ignition is going to be robust in this situation as long as the system it is running on has enough memory?

Yup. As long as you don’t introduce races with tags written from two directions, and handle the initialChange flag intelligently.
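“Intelligently” meaning, at minimum, bail out when initialChange is true: that event fires once per tag on subscription (gateway restart, tag edits), not from a real transition in the PLC:

```python
# Inside valueChanged(): the first event after subscription is not a
# real transition from the PLC, so bail out immediately.
if initialChange:
    return
```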

That sort of negates my whole other thread :face_with_symbols_over_mouth:

Not really. You got this answer in that other thread. You just didn’t believe it.

I believed it. I just wanted to prove it.

If you want to overflow the tag event queue (not the project-level tag change scripts), it’s easy.

Make an OPC tag and continually increment it in the PLC. Subscribe at 1s. Create an event script something like this:

print "missedEvents=" + missedEvents
sleep(6)
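The event script queue only holds five pending events per tag, so a six-second sleep against a 1 s subscription guarantees overflow, and missedEvents comes in true on the next execution.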

I don’t want to overflow the tag event queue per se. I was hoping to overflow it organically by simultaneously triggering a lot of tag change event scripts. It seems that in my test, triggering 99 scripts didn’t do it. I’m trying to establish an upper limit so I can determine when to do a project rewrite.

Maybe I didn’t describe things as clearly in that other thread :man_shrugging: