Gateway timer script question

I’ve inherited a gateway timer script that is supposed to write to a tag once every second starting at 0, getting reset every so often in response to some condition being met. The issue I’m having is that the timer isn’t keeping accurate time. You can see it count up with each execution but it stumbles every so often and gains time. It ends up counting 60 every 50 seconds or so. I’ve compensated for it by having the script execute a little less frequently which gets us closer to 60 counts in 60 seconds but I’d like it to be as accurate as possible. Is there a way to set the timer script to execute reliably every 1000ms? I realize that taking a time stamp and a current time then doing a timediff would be the best way to do this but, as I said, I inherited this and the timediff option would require a lot of rewriting of the rest of the project. That’s an option but I’d like to exhaust other possibilities first.

Thanks

Have you set the timer to fixed delay or fixed rate?

It’s set to fixed rate.

Is it on a dedicated thread?

Some other things to try:

  • Gateway Tag Change Script with an expression tag using now() as the trigger (rough sketch after this list).

  • Use a transaction group.
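
For the first one, a minimal sketch of what I mean, with placeholder tag paths: make an expression tag whose expression is now(1000) so it updates once a second, then point a Gateway Tag Change Script at it and do the counting there.

# Gateway Tag Change Script body, triggered by an expression tag set to now(1000).
# 'initialChange' is supplied by the gateway; the counter tag path is a placeholder.
if not initialChange:
    count = system.tag.read('Timers/Idle Seconds').value
    system.tag.write('Timers/Idle Seconds', count + 1)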

It’s a dedicated thread.

Is this running on a virtual machine instead of real hardware? Is there any kind of time sync set up?

The gateway server is virtual but I can’t say if there’s a time sync on it. I would assume that it’s getting its time directly from the underlying hardware.

You said you saw 60 executions in 50 seconds, which implies time is running too fast rather than too slow (too slow being the more common problem).

I can’t imagine why this would happen other than the system time running too fast.

Actually… I guess since you’re using fixed rate, another possibility is that some previous execution of the timer stalled for a while and then a bunch of executions “stacked” up, which could explain seeing 60 within a 50-second window.

What’s this script doing?

Yeah, based on what I know about the timer settings, too slow makes more sense. I would think a system clock running too quickly would cause a uniform speed-up, though. If you watch the tag, the count goes more like 1… 2… 3… 4… 5 6 7… 8… 9… 10 11 12… The seconds mostly count out normally, then every so often they race: it picks up about 3 seconds in the space of 1, then counts normally again for a while. Very odd.

That pattern would mean execution 4 took ~3 seconds to execute for some reason.

Fixed rate timers stack up - you have to be very careful that executions never take longer than the rate.

It could be GC activity if this is a busy system and/or starved for memory, or it could be related to whatever the script is doing.

I gotcha. I think I have a better understanding of what he was doing now. Here’s a little sample:

# (actualIdleTimeS2 is set earlier in the script, not shown in this snippet)
if system.tag.read('Idle Active S2').value > 1:
    system.tag.write('Timer Idle Time S2', actualIdleTimeS2 + 1)
else:
    system.tag.write('Timer Idle Time S2', 0)

This is repeated 10 times, one block for each station he’s monitoring. He’s relying on the script executing exactly once every second to increment his counters, but that kind of precision is impossible to guarantee with a timer script because you can’t always account for performance burps elsewhere on the server.
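
In other words, the repeated blocks boil down to something like this loop (the tag-name pattern for the other stations is my assumption):

# Rough equivalent of the ten copy-pasted blocks; 'S1'..'S10' naming is assumed.
for station in range(1, 11):
    idlePath = 'Idle Active S%d' % station
    timerPath = 'Timer Idle Time S%d' % station
    if system.tag.read(idlePath).value > 1:
        count = system.tag.read(timerPath).value
        system.tag.write(timerPath, count + 1)
    else:
        system.tag.write(timerPath, 0)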

is that basically correct?

Yeah, pretty much. This is not a realtime system, and you've got another entire layer of timing uncertainty in the picture because you're running on a virtual machine too.

If your Ignition install isn't already set up to use G1GC, switching to it may help. You can check in the ignition.conf file whether it's still configured for the old CMS GC.
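
For reference, the collector is set through the wrapper.java.additional entries in ignition.conf; the exact index numbers vary from install to install, but the change looks roughly like this:

# old CMS line (the index number will differ in your file)
wrapper.java.additional.2=-XX:+UseConcMarkSweepGC
# replace with G1
wrapper.java.additional.2=-XX:+UseG1GC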

I think I’d be better off just creating a couple more memory tags, one with the current time, and another with the stop time of the station, then just displaying the time difference every execution cycle. If the difference doesn’t display every second exactly, that’s fine. At least when it does display, the number will be right.
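
Something like this, assuming a new memory tag ('Idle Start S2' is a name I'm making up) that gets a timestamp written when the station goes idle:

# When the station first goes idle, record the start time in a memory tag:
system.tag.write('Idle Start S2', system.date.now())

# Then the display/timer logic just computes the elapsed time; it no longer
# matters whether this runs exactly once per second:
if system.tag.read('Idle Active S2').value > 1:
    start = system.tag.read('Idle Start S2').value
    elapsed = system.date.secondsBetween(start, system.date.now())
    system.tag.write('Timer Idle Time S2', elapsed)
else:
    system.tag.write('Timer Idle Time S2', 0)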