Scripting architecture choices

This may be a stupid question, but is there any way to automate adding the tag paths when a new instance of a UDT is created?

No. Perhaps in v8.2 if the gateway events turn into text-based resources (currently they are binary).

Some more questions

By "fifth event" do you mean the fifth time the same event is triggered on a single instance. Or do you mean the fifth global tag change event across all instances of the UDT?

Is the size of the global thread pool related to the 5 events above? (Which is basically the same question as above.)

So you can tune the GW Events, but not the Tag Events? And where do you tune the GW Event thread pool?

And bonus question. What thread pool do the GW Timer Events run on?

I'm trying to decide how much this will impact my architecture. The end goal is to support monitoring about 100 or so machines.

Of the GW timer events, one runs once per second and one every 3 seconds.

Of the UDT value change scripts:

  • One runs only on UDT instance creation (so I don't really care about the impact of that one)
  • One runs once when there is an operator intervention
  • One runs on the machine completing its current task

Each machine is probably only capable of 20-30 tasks per hour, and operator interventions can only occur on half of the machines. So on average that seems OK, but the machines are not synchronized, so there is always the possibility of multiple machines completing their tasks simultaneously and overloading the system.
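As a rough sanity check on those numbers (the intervention rate and per-script execution time below are just assumptions I'm plugging in for illustration):

```python
# Back-of-envelope average event rate for the value change scripts above.
machines = 100               # machines being monitored
tasks_per_hour = 30          # worst-case task completions per machine per hour
interventions_per_hour = 2   # assumed interventions per machine per hour (half the machines)

task_rate = machines * tasks_per_hour / 3600.0                         # ~0.83 events/sec
intervention_rate = (machines / 2) * interventions_per_hour / 3600.0   # ~0.03 events/sec
print("average events/sec: %.2f" % (task_rate + intervention_rate))

# A 3-thread pool can sustain very roughly 3 / script_time events per second,
# so even an assumed 50 ms script leaves plenty of headroom on average --
# the worry is bursts, not the average.
script_time = 0.050
print("sustained capacity, events/sec: %.0f" % (3 / script_time))
```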

Thinking out loud here, one possible way to help improve the system would be to allocate the tags into tag groups whose execution times are based on prime numbers. I've done that before in other systems to ensure that events stay separated in time.

It's per-tag, not global.

It's not directly related, but since the pool size is only 3 by default, having a lot of slow scripts executing can cause each tag's queue (5 by default) to start backing up, assuming changes to that tag are coming in fast enough to fill it.

There's nothing to tune on gateway tag change scripts, just on gateway timer scripts.

The tag event scripts do allow tuning of both the per-tag queue size and thread pool size with some obscure JVM params added to ignition.conf.
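For reference, those tunings go into the gateway's ignition.conf as extra JVM properties. The wrapper.java.additional numbering is the standard mechanism; the property name below is from memory and should be treated as a placeholder until confirmed (there is a similar property for the per-tag queue size whose exact name I'd also confirm with support first):

```
# data/ignition.conf -- add under the "Java Additional Parameters" section.
# NOTE: property name is a placeholder from memory; confirm the exact names
# for both the thread pool size and the per-tag queue size before relying on them.
wrapper.java.additional.10=-Dignition.tags.scriptthreads=6
```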

No pool; each is configured to use either a shared thread or a dedicated thread.


No, this is mostly wishful thinking and nonsense. Ignition is non-realtime software running on consumer hardware. Don't be fooled into thinking about things like PLC scan times where you have some kind of guarantee.

Drivers with multiple requests see their poll times eventually settle into as regular a rhythm as possible to support the requested sampling interval, and most of Ignition's tag system no longer functions cyclically at the tag group rate anyway.


Just to clarify then (in one-syllable words for me, because I am being really stupid today): for a value change script on a UDT member, which of these two conditions, or both, could potentially exhaust the thread pool? (And is that even a valid question?)

  1. The value change event for a single UDT instance is triggered 5 or more times before the script itself can complete.
  2. 5 or more instances of the UDT trigger their value change events within the timeframe of the script's typical execution time?

And thank you to everyone for their patience in helping me with my ignorance.

It's the first one, if you mean the "change event for a single UDT instance member", because your script is on a member tag, not the instance itself.

If this member value changed quickly 5+ times without ever getting a chance to execute, either because the current execution is very slow, or because 3 or more other tags (any other tags, globally, with a tag event script on them) are currently executing very slowly (blocking the size-3 thread pool that event scripts execute on), then you would start seeing the missedEvents flag set.

You never exhaust the thread pool that event scripts execute on; it's a fixed size (3). You might overflow the queue (size = 5) that each tag's events are placed in while they await execution.
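For concreteness, the flag shows up as an argument to the value-change script defined on the tag; a minimal sketch (the logger name and what you do on a miss are up to you):

```python
# Tag Events > Value Changed script on the UDT member.
# Ignition passes these arguments to every tag value-change event script.
def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    if initialChange:
        return  # fired once when the tag first subscribes; usually nothing to record

    if missedEvents:
        # The per-tag queue (size 5) overflowed; older events were dropped and
        # only the most recent values are available to this execution.
        system.util.getLogger("TagEvents").warn("Missed events on %s" % tagPath)

    # ... keep the actual work here very fast (single-digit ms) ...
```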


I think I interpreted "exhaust" wrong in my first pass - if by "exhaust" you mean all 3 are currently busy, then that's when the per-tag queues start filling.

I was thinking exhaust as in an unlimited thread pool creating threads until you run out of memory for some reason.

OK. I’m resorting to low syllable words because I think I am confused by the terminology and feel I am still missing something.

I have a UDT that has a value change script attached to one member.

I have N instances of the UDT defined as tags in the project.

Is the issue with the value change script being triggered multiple times on a single instance of the UDT?

Or is the issue if a number of the N instances all trigger the change events simultaneously?

Or are they both the same thing because the script is on the UDT and not on the instance of the UDT?

(I’m trying to build up a mental picture of the underlying architecture of UDTs and scripts attached to UDT members - if you’ll let me take a peek at your source code, I’m sure I can figure it out myself sometime :wink: )

Well... it could be both.

Every member of every UDT instance has its own event queue.

But globally only as many as 3 events are executed at a time, so in this way they are all at least loosely related.

But it only really matters if the scripts execute relatively slowly and the tag change events for each tag are coming in relatively quickly, filling up that tag's queue.

@pturmel prefers the gateway tag change script mechanism because there's no queue: for each script, everything eventually executes no matter how backed up you are. But there's also the possibility of backing yourself up into running out of memory and crashing your gateway.
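For comparison, a gateway tag change script (Gateway Events > Tag Change) looks roughly like this; the table and column names are placeholders, and it assumes a default database connection is configured:

```python
# Gateway Events > Tag Change script, subscribed to the relevant member tag paths.
# Ignition provides these variables to the script: initialChange, event, newValue, previousValue.
if not initialChange:
    path = str(event.getTagPath())
    value = newValue.getValue()
    stamp = newValue.getTimestamp()

    # Placeholder table/column names -- adapt to the real schema.
    system.db.runPrepUpdate(
        "INSERT INTO task_results (tag_path, task_value, t_stamp) VALUES (?, ?, ?)",
        [path, value, stamp],
    )
```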

That’s starting to make sense. Of course Murphy says that with a large enough pool of potential events you will always exhaust your queue.

Ok if each UDT instance has its own event queue, what causes events to be missed? Do they get dropped if the overall global system can’t get them processed fast enough?

I'm debating which value change event mechanism to use, because I need to guarantee as best as possible that the data is pushed into the DB without anything being dropped. I understand the tradeoff with memory usage during execution, but I don't think the overall system requirements would challenge my current GW if I were using GW Tag Events.

OTOH I don’t trust the maintainers so minimizing maintenance tasks is a valid concern.

Yes, that is one reason. But not the only reason.

This is easy to monitor, and hardware resources can be adjusted proactively. And it's easy to explain to a pointy-haired boss after they decline to spend the money.

Compared to checking the missed event flag in every single script just in case you land in a tight corner and data mysteriously doesn't get recorded or state machines mysteriously get into strange states. Uh huh. Not for me, thank you very much.

@pturmel The client I am doing this for is a multi-billion $$ business and they still send us Autocad drawings made with a pirated version of the Student Edition.

Totally understand. Depends on your needs.

When the queue starts overflowing, the oldest events are discarded, so you eventually execute with the most recent set of values up to the current value. For some applications this is okay; for others it might not be.


@pturmel converted me to gateway tag change event scripts

Tag Value Change Scripts: Are They Run Asynchronously? - Ignition - Inductive Automation Forum

And there is no way to extend this queue?

In my case the change event signals the completion of a task. The machine has written its final values and is now waiting for the next command. As such, I can easily accept a 1, 2, or 3 second delay before the data is actually recorded in the DB.

And because each task is unique, I can’t just use the values of the latest event.


Note: I only skimmed this topic; it pertains to tag events, so it may not apply to the gateway limit.

This is significant. If the event can't fire until the machine runs another cycle, and the command to run another cycle is part of or otherwise linked to the data recording you do in your event script, then the architecture itself prevents more than one event being "in flight". Effectively, that is a handshake. That one almost certainly can be defined on the tag (in the UDT).

If you have events that record data repetitively and relatively quickly during a machine cycle, that's where you can get trouble. Also note my advice in the post @dkhayes linked: anything over single digit milliseconds is just too long for an event defined on a tag. Even if you don't have a problem with that tag, you could push another tag's requirements over the edge.

IMNSHO, tag events have too much opportunity to interfere with each other to handle any complicated task. Use gateway tag change events for anything important.

The machine commands aren't linked to my data recording, so the machine will happily start the next command without me recording the data from the just-finished command. Murphy says that with enough machines being monitored this will occur, eventually leading to a missed-event situation.

I do record repetitive data, but that is handled by a GW Timer script.

I guess I should add a third goal of being more robust in my recording of data.

And I believe all of my scripts are running in single-digit ms.
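One easy way to turn that belief into a number is to time the script body and log anything over budget; a simple sketch (the logger name and the 10 ms budget are arbitrary choices of mine):

```python
# Wrap the body of any event script to measure how long it actually takes.
import time

start = time.time()

# ... existing script body goes here ...

elapsed_ms = (time.time() - start) * 1000.0
if elapsed_ms > 10:  # arbitrary budget; tag event scripts should stay in single-digit ms
    system.util.getLogger("ScriptTiming").warn("Slow event script: %.1f ms" % elapsed_ms)
```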