Standard transaction group missed trigger events

I have a standard transaction group whose sole purpose is to execute a query tag at the top of the hour. I know there are similar posts asking how to accomplish this; that is not the question, and I'm not looking for suggestions for schedule-based approaches and so on. I simply want to know why this setup fails to execute roughly 1 out of every 100 hours. I have the same setup on 2 servers across 5 projects. It has been running for 5+ years, and no one noticed the random failures until recently, when we missed 4 executions back-to-back, enough to create gaps in the plot that displays this data and alert my clients to the issue.

After exhausting every option that would explain this failure as a new problem, I went back 8 years and found that missed executions had been occurring since start-up. This spans multiple Ignition versions, Java builds, server settings, etc.

Group Settings:
Execution Scheduling: Timer, 1 minute(s)
Execute this group on Trigger
Only execute once while trigger is active
Trigger conditions - is = 0 (or false)

Trigger:
Expression tag, run always
dateExtract(now(), 'min')

Items:
1 triggered expression item, which runs an INSERT INTO ... SELECT. The query takes less than 100 ms to run.

For testing purposes, I set up the exact same group, only I stored the value of the minute extract along with the current timestamp, and instead of executing on trigger = 0, I executed on trigger value change. I let this run for 12 hours. I didn't have a single disagreement between the dateExtract value and the timestamp, but I did miss around 1 out of every 100 minutes, on both servers, across all projects.
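If anyone wants to reproduce the test, a quick gap scan over the logged rows makes the misses easy to spot. This is only a sketch: the table, column, and datasource names (trigger_log, t_stamp, "mydb") are hypothetical stand-ins for whatever your test group writes to, and it assumes Ignition's system.db.runQuery and system.date.secondsBetween script functions:

```python
# Sanity-check sketch: scan the test table for missed minutes.
# trigger_log / t_stamp / "mydb" are placeholder names -- substitute your own.
rows = system.db.runQuery("SELECT t_stamp FROM trigger_log ORDER BY t_stamp", "mydb")

prev = None
for row in rows:
    ts = row[0]
    # Anything well past one minute between consecutive rows is a miss.
    if prev is not None and system.date.secondsBetween(prev, ts) > 90:
        print("missed execution(s) between %s and %s" % (prev, ts))
    prev = ts
```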

Very simple problem, yet after 3 rounds with tech support, no one can offer an explanation. No clock drift, no server issues, no DB/server sync issues. When I look at it from a DSP perspective, this group 'undersamples' the trigger tag (checking for a change in minute only once a minute), but I am assured that without clock drift, this shouldn't miss.

Any thoughts? Not a big deal for this particular dataset, but highly concerning given the failure rate I have observed.

It depends on what else might be scheduled. Each scheduled task executor has a thread pool, and if all threads are busy when a scheduled event is due, the event can be delayed. I'm surprised that the delay could be as much as a minute (which would push back the next execution), and I'm not sure how Ignition assigns transaction groups to executors.

I never trust timer events, so when I need something like this, I always compute a 'next due' timestamp and trigger when now() >= due, using a scheduled event that fires more often than necessary to keep latency low. If something is absolutely critical, I use a sleeping thread set to wake at precisely the right time, with suitable error trapping to ensure the thread restarts if anything fails.
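Something along these lines, as a minimal sketch of the 'next due' pattern in a gateway timer script that runs every few seconds. The tag path and doHourlyWork() are hypothetical, and the system.tag.readBlocking/writeBlocking calls assume Ignition 8.x (7.x would use system.tag.read/write):

```python
# 'Next due' pattern sketch for a gateway timer script (every few seconds).
# [default]Hourly/NextDue is a hypothetical DateTime memory tag.

def doHourlyWork():
    # Placeholder for the real work, e.g. the hourly INSERT INTO ... SELECT:
    # system.db.runUpdateQuery("INSERT INTO hourly_tbl SELECT ...", "mydb")
    pass

def topOfNextHour(d):
    # Truncate d to the start of its hour, then add one hour.
    return system.date.addHours(
        system.date.setTime(d, system.date.getHour24(d), 0, 0), 1)

now = system.date.now()
due = system.tag.readBlocking(["[default]Hourly/NextDue"])[0].value

if due is None:
    # First run: seed the schedule and wait for the next top of the hour.
    system.tag.writeBlocking(["[default]Hourly/NextDue"], [topOfNextHour(now)])
elif not system.date.isBefore(now, due):  # i.e. now >= due
    try:
        doHourlyWork()
    finally:
        # Advance 'due' even if the work failed, so one bad run can't stall
        # the schedule; loop in case the gateway was down for several hours.
        while not system.date.isAfter(due, now):
            due = system.date.addHours(due, 1)
        system.tag.writeBlocking(["[default]Hourly/NextDue"], [due])
```

Because the due timestamp lives in a tag rather than in the timer's own cadence, a delayed or skipped timer tick only adds a few seconds of latency instead of losing the execution entirely.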


Try adding this line to your ignition.conf file, located in the data directory of your Ignition installation:
wrapper.java.additional.6=-Dignition.sqlbridge.maxthreads=10

The default pool for transaction groups is set to 5. On a few systems with very high numbers of transaction groups I’ve seen this help.
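One caveat for anyone applying this: the .6 index above is just an example. The Java Service Wrapper generally expects the wrapper.java.additional.N indices to form an unbroken sequence, so use the next unused number in your own ignition.conf, and restart the gateway for the JVM argument to take effect. Roughly:

```
# data/ignition.conf -- Java Additional Parameters section
wrapper.java.additional.1=-Ddata.dir=data
# ...existing entries...
wrapper.java.additional.6=-Dignition.sqlbridge.maxthreads=10
```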


What is a very high number? This pool size is per SQL Bridge, correct?

Also, is there any drawback to increasing this?

This is a total pool for the entire gateway. I would start with small increases (e.g., 5 to 10) rather than jumping to something drastic. The biggest drawback will be increased CPU usage. You'll also have to make sure that you increase this in parallel with the number of connections available in the DB pool.