Scripting architecture choices

I am looking at cleaning up my first Ignition project that was released into the wild (under v8.1). While it works, I didn’t make the best design choices, which has resulted in a complicated (i.e., ugly) maintenance process.

The application itself is basically a data logging system that monitors multiple machines (each defined by an instance of a single UDT) and writes values to a DB on both value-change and timed events. It has no display screens of any sort.

The current architecture has 3 value-change scripts in the UDT that package up data and call separate named queries (using a hard-coded named query context). The system also has 2 scripts that run from a Gateway Timed Event. These scripts iterate over all the UDT instances and call two other named queries (extracting the named query context from each UDT instance).

I have two goals in my redesign effort:

  1. Eliminate the use of named queries (thus eliminating the need for the SQL Bridge module)
  2. Locate the scripts in a common location (and hence minimize maintenance effort)

I can see two possible architectures to achieve my goals. And while I believe there won’t be any performance differences, I’m not sure if there are tradeoffs that I don’t know about.

Option #1

  • Move all the scripts into the Project Library
  • Rewrite the scripts to do direct DB calls
  • Set the project as the Gateway Scripting Project
  • Change the timed event and value changed scripts into calls into the Project Library
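A minimal sketch of what Option #1 could look like. Everything here (the function name, table, and columns) is hypothetical, and a small stub stands in for `system.db` since Ignition's scripting API isn't available outside a gateway; in the real Project Library script you would call `system.db.runPrepUpdate` directly:

```python
# Sketch of an Option #1 Project Library script (hypothetical names throughout).
# A tiny stub stands in for system.db so the example is self-contained.

class _DbStub(object):
    """Stand-in for system.db when running outside a gateway."""
    def __init__(self):
        self.calls = []

    def runPrepUpdate(self, query, args, database):
        # In Ignition: system.db.runPrepUpdate(query, args, database)
        self.calls.append((query, args, database))
        return 1  # rows affected

db = _DbStub()  # inside the gateway, this would just be system.db

def log_task_complete(machine, task_id, value, database="ProcessDB"):
    """Insert one row per completed task; called from the UDT member's
    value-change event, which shrinks to a one-line library call."""
    query = ("INSERT INTO task_log (machine, task_id, value) "
             "VALUES (?, ?, ?)")
    return db.runPrepUpdate(query, [machine, task_id, value], database)
```

The tag event script itself then reduces to something like `myLib.log_task_complete("Press01", taskId, currentValue.value)`, so all the SQL lives in one place.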

Option #2

  • Leave the timed event scripts where they are
  • Move the UDT value changed scripts to Gateway Value Changed scripts
  • Rewrite the scripts to do direct DB calls

The pros of option #1 are that all that is needed to add a new machine is to simply create a new UDT instance. The cons are that, to me, the Gateway Scripting Project configuration is not obvious (but it only has to be done once).

The pros of option #2 are that everything is contained within the project itself. The cons are that when adding a new machine I need to create a new UDT instance and then manually add paths to each of the 3 Gateway Tag Change Scripts.

After writing all of this out I think I am leaning towards option #1, but I am curious to know if I have missed any other obvious architecture choices, or pros and cons of my current choices.

Named Queries are part of the platform. SQL Bridge module is not required to use them.

I would also say don’t get rid of named queries. It’s nice to have a single source of truth for your queries. Also, in my testing, it is faster to execute runNamedQuery than runPrepUpdate or runUpdateQuery. So another reason I would keep them.

@bkarabinchak.psi The problem I have with named queries is that if I need to add another entry to my DB calls, I need to edit in two places: the named query, and the script that calls it.

I’m OK with doing that, but I am aiming to have systems that can be maintained by other (less knowledgeable) people.

I don’t much care for tag events. Putting them in UDT definitions is a help, and v8+'s global scripting project eliminates the worst side effect (restarting all projects), but UDT instances can have their scripts overridden and you simply will not know. Tag events are also queued and discarded in unfriendly ways.


I am turned off tag events by the extra step needed to configure a new machine. But if someone is making changes that override my UDT scripts, then I have much bigger problems.

Careful with terminology! “Tag events” are the ones defined on the tags in the UDT. I prefer the “Gateway Tag Change Events” where there is one piece of code with a list of tags it applies to.
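For illustration, a single Gateway Tag Change Event script can serve every machine by parsing the machine name out of the tag path. The path layout and member names below are assumptions; in Ignition the event object supplies the path and value, modeled here as plain parameters:

```python
# Sketch: one Gateway Tag Change Event dispatching for all machines.
# Assumes tag paths shaped like "[default]Machines/<machine>/<member>".

def on_tag_change(tag_path, new_value):
    """Parse the machine and member names from the path and route."""
    parts = tag_path.split("/")
    machine, member = parts[-2], parts[-1]
    if member == "TaskComplete":
        return ("task", machine, new_value)
    elif member == "OperatorIntervention":
        return ("intervention", machine, new_value)
    return ("ignored", machine, new_value)
```

The trade-off described in this thread is visible here: the script is one shared piece of code, but its list of subscribed tag paths is maintained by hand in the gateway event configuration.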


Uggg … terminology. Tag Events in the UDT mean a single step when adding a new machine. Gateway Tag Change Events mean having to add explicit (and different) tag paths to each of the 3 scripts, thus requiring 4 steps to add a new machine. And one of my goals is simplification of maintenance practices.

But how/when/why do UDT Tag Events get discarded? Is it a performance issue of an underpowered GW?

It can be. Or a bunch of events happening faster than the attached script can run. Anything past the fifth queued event while the first is running will be discarded. Tag events also run on a global thread pool; too many events at once and scripts will be delayed. Events can be discarded before the first of that tag’s events even gets to run. Meanwhile, gateway tag change events have separate thread pools and can be configured to use dedicated threads.
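That queue-and-discard behavior can be modeled outside Ignition. This toy sketch uses the per-tag queue depth of 5 mentioned in this thread (the size-3 global pool is left out for simplicity), so it is an illustration of the described behavior, not Ignition's actual implementation:

```python
# Toy model of per-tag event queuing: one event may be executing while up
# to 5 more wait in that tag's queue; anything beyond that is discarded.

class TagEventQueue(object):
    QUEUE_SIZE = 5  # default per-tag queue depth, per this thread

    def __init__(self):
        self.executing = None
        self.pending = []
        self.missed = 0

    def fire(self, event):
        if self.executing is None:
            self.executing = event        # runs immediately on the pool
        elif len(self.pending) < self.QUEUE_SIZE:
            self.pending.append(event)    # waits in this tag's queue
        else:
            self.missed += 1              # discarded; missedEvents would be set

q = TagEventQueue()
for i in range(8):   # 8 rapid changes while the first is still running
    q.fire(i)
# result: 1 executing, 5 queued, 2 missed
```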

Managing the list of tag paths is the only downside, IMNSHO. Compared to the other negatives of tag events, I’ll take this minor hassle of gateway tag change events every time.


This may be a stupid question, but is there any way to automate adding the tag paths when a new instance of a UDT is created?

No. Perhaps in v8.2 if the gateway events turn into text-based resources (currently they are binary).

Some more questions

By “fifth event” do you mean the fifth time the same event is triggered on a single instance? Or do you mean the fifth tag change event globally, across all instances of the UDT?

Is the length of the global thread pool related to the 5 events above (which is basically the same question as above)?

So you can tune the GW Events, but not the Tag Events? And where do you tune the GW Event thread pool?

And bonus question. What thread pool do the GW Timer Events run on?

I’m trying to decide how much this will impact on my architecture. The end goal is to support monitoring about 100 or so machines.

Of the GW timer events, one runs once per second, and one every 3 seconds.

Of the UDT value change scripts:

  • One runs only on UDT instance creation (so I don’t really care about the impact of that one)
  • One runs once when there is an operator intervention
  • One runs on the machine completing its current task

Each machine is probably only capable of 20–30 tasks per hour, and operator interventions can only occur on half the machines. So on average that seems OK, but the machines are not synchronized, so there is always the possibility of multiple machines completing their tasks simultaneously and overloading the system.
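As a quick sanity check, the steady-state arithmetic for the 100-machine target looks like this (using the worst-case figures above):

```python
# Rough average event-rate estimate for the target deployment.
machines = 100
tasks_per_hour = 30                   # worst case per machine
intervention_capable = machines // 2  # interventions possible on half

# ~0.83 task-complete events per second on average across all machines
task_events_per_sec = machines * tasks_per_hour / 3600.0
```

The average is well under one event per second; as the post above notes, the real risk is bursts from unsynchronized machines, not the steady-state rate.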

Thinking out loud here, one possible way to help improve the system would be to allocate the tags into tag groups whose execution times are based on prime numbers. I’ve done that before in other systems to ensure that events stay separated in time.
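For what it’s worth, the arithmetic behind the prime-period idea can be sketched: two periodic schedules coincide only at the least common multiple of their periods, so coprime (e.g. prime) periods push coincidences far apart. The millisecond values below are illustrative, not suggested tag group rates:

```python
# Coincidence interval of two periodic schedules is lcm(period_a, period_b).

def gcd(a, b):
    """Euclid's algorithm."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a * b // gcd(a, b)

# Non-coprime rates collide often; coprime (prime) rates rarely:
lcm(1000, 2000)   # ms -> schedules coincide every 2 seconds
lcm(997, 1009)    # ms -> schedules coincide every ~1006 seconds
```

Whether Ignition's scheduler actually honors this spacing in practice is a separate question; the math only describes the idealized periodic case.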

It’s per-tag, not global.

It’s not directly related, but since the pool size is only 3 by default if you have a lot of slow scripts executing then it can cause each tag’s queue (5 by default) to start to back up. Assuming there are changes to that tag coming in fast enough to fill it.

There’s nothing to tune on gateway tag change scripts, just on gateway timer scripts.

The tag event scripts do allow tuning of both the per-tag queue size and thread pool size with some obscure JVM params added to ignition.conf.

No pool; each is configured to use either a shared thread or a dedicated thread.


No, this is mostly wishful thinking and nonsense. Ignition is non-realtime software running on consumer hardware. Don’t be fooled into thinking about things like PLC scan times where you have some kind of guarantee.

Drivers with multiple requests see their poll times eventually settle into as regular a rhythm as possible to support the requested sampling interval, and most of Ignition’s tag system no longer functions cyclically at the tag group rate.


Just to clarify then (explain it in one-syllable words for me, because I am being really stupid today): for a value-change script on a UDT member, which of these two conditions, or both, could potentially exhaust the thread pool? (And is that a valid question?)

  1. The value-change event for a single UDT instance is triggered 5 or more times before the script itself can complete.
  2. 5 or more instances of the UDT trigger their value-change events within the timeframe of the script’s typical execution time.

And thank you to everyone for their patience in helping me with my ignorance.

It’s this one, if you mean the “change event for a single UDT instance member”, because your script is on a member tag, not the instance itself.

If this member’s value changed quickly 5+ times without ever getting a chance to execute, either because the current execution is very slow, or because 3 or more other tags (any other tags, globally, with a tag event script on them) are executing very slowly and blocking the size-3 thread pool that event scripts run on, then you would start seeing the missedEvents flag set.

You never exhaust the thread pool that event scripts execute on; it’s a fixed size (3). You might overflow the queue (size 5) that each tag’s events are placed in while they await execution.


I think I interpreted “exhaust” wrong in my first pass - if by “exhaust” you mean all 3 are currently busy, then that’s when the per-tag queues start filling.

I was thinking exhaust as in an unlimited thread pool creating threads until you run out of memory for some reason.

OK. I’m resorting to low syllable words because I think I am confused by the terminology and feel I am still missing something.

I have a UDT that has a value change script attached to one member.

I have N instances of the UDT defined as tags in the project.

Is the issue with the value change script being triggered multiple times on a single instance of the UDT?

Or is the issue if a number of the N instances all trigger the change events simultaneously?

Or are they both the same thing because the script is on the UDT and not on the instance of the UDT?

(I’m trying to build up a mental picture of the underlying architecture of UDTs and scripts attached to UDT members - if you’ll let me take a peek at your source code, I’m sure I can figure it out myself sometime :wink: )

Well… it could be both.

Every member of every UDT instance has its own event queue.

But globally only as many as 3 events are executed at a time, so in this way they are all at least loosely related.

But it only really matters if the scripts execute relatively slow and the tag change events for each tag are coming in relatively fast and filling up that tag’s queue.

@pturmel prefers the gateway tag change script mechanism because there’s no queue limit; for each script, everything eventually executes no matter how backed up you are. But there’s also the possibility of backing yourself up into running out of memory and crashing your gateway.

That’s starting to make sense. Of course Murphy says that with a large enough pool of potential events you will always exhaust your queue.

Ok, if each UDT instance has its own event queue, what causes events to be missed? Do they get dropped if the overall global system can’t process them fast enough?

I’m debating about what value change event mechanism to use because I need to guarantee as best as possible that the data is pushed into the DB without anything being dropped. I understand the tradeoff with memory usage during execution, but I don’t think that the overall system requirements would challenge my current GW if I was using GW Tag Events.

OTOH I don’t trust the maintainers so minimizing maintenance tasks is a valid concern.