Counting uptime techniques

Wondering what the best practice is for tracking uptime during a shift.

I have a tag that is a 1 when the machine is running and a 0 when it is not running.
I want to have a tag that shows the seconds.

I thought a fixed rate expression tag would be burdensome.

Would a script that runs every second and writes the uptimes for all 60 machines, with one writeBlocking call per second, be better than other options?
If running then add 1.
Reset at end of shift.
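A minimal sketch of that "every second, add 1 if running" idea, with the tag I/O replaced by plain dicts so the counting logic stands alone (in Ignition this would be a 1-second gateway timer script using system.tag.readBlocking and one system.tag.writeBlocking call; the machine names here are made up):

```python
def tick(running, uptimes):
    """Add one second to every machine whose running bit is true."""
    for machine, is_running in running.items():
        if is_running:
            uptimes[machine] = uptimes.get(machine, 0) + 1
    return uptimes

def reset(uptimes):
    """Zero all counters at shift end."""
    for machine in uptimes:
        uptimes[machine] = 0
    return uptimes

# Example: two ticks with machine A running and machine B stopped.
uptimes = {}
running = {"A": True, "B": False}
tick(running, uptimes)
tick(running, uptimes)
print(uptimes)  # {'A': 2}
```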

I feel that it's best to track this information within the machine's PLC itself. Then, simply read the value from there.


I tried tracking in the PLC.
However, the PLC clocks suck for multiple reasons.

I am leaning toward one script to rule them all.

I'm not familiar with these reasons. What are they?

I don't have time to list them all, or figure out how to word them.
(Lots of PLCs, firmware updates, power interruptions, etc.)

The Gateway clock is standard across all the data I collect though, so that is easy to point out.

This video is pretty good for showing how the clock works in the SLC, and its dependence on the instruction timing.
https://www.youtube.com/watch?v=dRlIUVoqKNc

I think that previously I had suggested to someone to use the PLC clock, because I was using it myself.
After months of data, though, that did not work out in my experience.

One method is to calculate the time between events. You have two flavors of event sources, shifts and machine states.

You want to find all the time between machine states, within the boundary of the shift in question. The place to be careful is finding the machine states at the time of shift start and stopping at now/shift end.

Find the most recent machine state event before the shift start; that is usually the starting state. It's rare for a state event and the shift start to share the same timestamp, but check for that too.

This method assumes you are able to log event data to a database. That's my approach as a Wonderware guy, maybe with Ignition there's a better way.
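A hedged sketch of that between-events calculation, assuming the event log is a time-sorted list of (timestamp, running) pairs that includes at least one event at or before shift start (the schema and shift times here are illustrative, not the poster's actual setup):

```python
from datetime import datetime, timedelta

def uptime_between(events, shift_start, shift_end):
    """Seconds the machine was running inside [shift_start, shift_end).

    events: list of (timestamp, running_bool), sorted by timestamp,
    with at least one event at or before shift_start so the starting
    state is known.
    """
    state = False          # state in effect at shift start
    cursor = shift_start   # start of the interval being accumulated
    total = timedelta(0)
    for ts, running in events:
        if ts <= shift_start:
            # The newest event at or before shift start gives the
            # state the machine was in when the shift began.
            state = running
            continue
        if ts >= shift_end:
            break
        if state:
            total += ts - cursor
        cursor = ts
        state = running
    # Close out the final interval at the shift end (or "now").
    if state:
        total += shift_end - cursor
    return total.total_seconds()

# 06:00-08:00 running, 08:00-09:00 down, 09:00-14:00 running = 7 h.
shift_start = datetime(2024, 1, 1, 6, 0)
shift_end = datetime(2024, 1, 1, 14, 0)
events = [
    (datetime(2024, 1, 1, 5, 0), True),
    (datetime(2024, 1, 1, 8, 0), False),
    (datetime(2024, 1, 1, 9, 0), True),
]
print(uptime_between(events, shift_start, shift_end))  # 25200.0
```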

You gave me a good idea of a way to fix one method I tried.

I was doing something similar to:

Script sets the tag to 0 at shift start.
Running boolean tag has an on-change script that adds time to the uptime for running time.
I collect data at -1 minutes to shift end.

I can change that to:

Script sets the tag to 0 at shift start.
Running boolean tag has an on-change script that adds time to the uptime for running time.
Script toggles the boolean at -2 minutes to shift end to ensure I get the time.
I collect data at -1 minutes to shift end.

Then I am only short 2 minutes out of a whole shift's accuracy, and my processing overhead is low: only twice a shift from the script, plus the on-change of the running boolean.
Whereas if I used one script to rule them all, it would have to run at some sampling rate.
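A sketch of that on-change accumulation, with the tags modeled as a dict (in Ignition these would be memory tags read and written from the running bit's tag-change script; the names "uptime" and "runStart" are assumptions):

```python
def on_running_change(tags, new_value, now):
    """Tag-change handler for the running boolean.

    tags holds 'uptime' (accumulated seconds) and 'runStart'
    (epoch seconds of the last rising edge, or None).
    now is the current time in epoch seconds.
    """
    if new_value:
        # Rising edge: remember when this run began.
        tags["runStart"] = now
    elif tags["runStart"] is not None:
        # Falling edge: bank the elapsed run time.
        tags["uptime"] += now - tags["runStart"]
        tags["runStart"] = None

tags = {"uptime": 0, "runStart": None}
on_running_change(tags, True, 100)
on_running_change(tags, False, 160)   # banks 60 s
on_running_change(tags, True, 200)
# Forcing a falling edge shortly before shift end flushes the open
# run, which is what the "toggle at -2 minutes" step accomplishes.
on_running_change(tags, False, 260)   # banks another 60 s
print(tags["uptime"])  # 120
```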

Thanks

Edit:
I found I did this already on some tags earlier last year.
Just didn't remember. Thanks again for help.

I would use tag history


How?

I thought there might be a way to use the historian, but I wasn't sure how for this.

Probably system.tag.queryTagHistory() using the DurationOn aggregation
system.tag.queryTagHistory - Ignition User Manual 8.1 - Ignition Documentation (inductiveautomation.com)


In the PLC, set a bit when you are up (or down, your choice), then log it to history. Set the max time between samples to whatever inaccuracy you can tolerate. I.e., if you need resolution down to a minute, set it to 30 seconds and you will always get a sample every 30 seconds.

Then use @dkhayes117's solution with DurationOn or DurationOff to get the amount of time that bit was on/off.


I don't understand what you are saying to do.

How do you get the uptime on a tag in seconds with that?
I use system.tag.queryTagHistory() sometimes. I don't know how you get that to the tag for uptime over shifts.

I think I might understand it a little. I will try it tomorrow.

The manual shows you the parameter options for this function, so you would need the tag path, a start date, end date, aggregationMode = DurationOn, and maybe ignore bad quality. You could also use a start date in conjunction with rangeSeconds/Minutes/Hours parameter.

From the manual I linked

"DurationOn" - The time, in seconds, that a value has been boolean true.
"DurationOff" - The time, in seconds, that a value has been boolean false.
system.tag.queryTagHistory([tagPath], startDate=shiftStart, aggregationMode='DurationOn', rangeMinutes=3600, ignoreBadQuality=True)
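One practical piece is computing the startDate and range for the current shift before making that call. A small helper for that (the 06:00 start and 8-hour length are assumptions; the commented-out query shows how the result might feed queryTagHistory):

```python
from datetime import datetime, timedelta

def shift_window(now, shift_start_hour=6, shift_hours=8):
    """Return (shiftStart, rangeMinutes) for the shift containing now.

    Assumes one daily shift starting at shift_start_hour; adjust for
    real shift schedules.
    """
    start = now.replace(hour=shift_start_hour, minute=0,
                        second=0, microsecond=0)
    if now < start:
        # Before today's start time: we are still in yesterday's shift.
        start -= timedelta(days=1)
    return start, int(shift_hours * 60)

# In Ignition, the result could feed the history query, e.g.:
# start, mins = shift_window(system.date.now())
# ds = system.tag.queryTagHistory(['[default]Machine1/Running'],
#          startDate=start, rangeMinutes=mins,
#          aggregationMode='DurationOn', ignoreBadQuality=True)
# (ds is a dataset; the aggregated seconds-on value is read from it.)
```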

You can setup a server with the Logix5000 Clock Update Tool to periodically update all your PLCs clocks so that they are in sync.


Not needed if you spend a few hundred dollars on a proper time source. Then the native clock synchronization in Logix processors yields superior accuracy, with no code:


I wrote a wall of text, but I tagged each of you on the part I wanted to ask about or thank you for.
I bolded my questions.
I hope it is easy to read.

@dkhayes117 @bschroeder @jlandwerlen
First, thanks for helping me. I really appreciate the help, effort, and patience with me.

Irritatingly, some of you are telling me the parts I do know and not the parts I don't.
I know about DurationOn and historian queries.
I don't know how anyone meant to implement that.

You mean instead of this?:
Script sets the uptime tag to 0 at shift start.
Running boolean tag's on-change script adds time to the uptime for running time.
Script toggles the boolean at -2 minutes to shift end to ensure I get the time.
I collect data at -1 minutes to shift end.

This?:
Script sets the uptime tag to 0 at shift start.
Running boolean tag's on-change script queries DurationOn for the shift so far.
Script toggles the boolean at -2 minutes to shift end to ensure I get the time.
I collect data at -1 minutes to shift end.

If so, then I probably will stick with the calculation script, because I don't think it will ding the database at all and a script has to run anyway.
Or am I missing something?
As I get better with Ignition, it gets harder for me to pinpoint the parts that I am missing, since there are more parts that I already know. Sorry I wasn't more articulate beyond "how?".


Why isn't there a historian query tag type?


@lcardenas2 Thanks, does that work with SLC 500s and ControlLogix?
I think you mean for it to be connected to the internet and we avoid that for the machines.


@pturmel
Forgive me, what does "For ControlLogix prior to L8x," mean?
It won't work with SLC 500s?
That TM2000B also has to be connected to the internet or can you set time on it and all devices sync?

Thanks for all the help. I really do appreciate the help prior and to help me fill in these gaps of information.

Turn history on for this tag, then run a tag history query on that tag.


No. The "grandmaster" clock synchronization technology was introduced with ControlLogix and CompactLogix processors. It is not available on the ancient SLC-500 or PLC-5 platforms.

Modern Logix (those above) requires clock synchronization if using servo motion control, so many users are already there. Those just need an accurate time source added to the mix. GPS is as good as it gets, short of installing your own Cesium atomic clock.

Otherwise, you turn on the time sync checkbox in the controllers, and adjust processor connections to 1756-ENxT devices to include time synchronization.

You do not connect it to the internet. You install it on the same subnet as your PLCs and wire its antenna in a spot with clear GPS reception. You do not set the clock on the TM2000B. It gets the time from the U.S. military's atomic clock.

In the TM2000B, you turn on the "multicast" part of IEEE-1588 Precision Time Protocol, as that is what the Logix processors use.

After that, the Logix processors automatically locate the best (most accurate) clock on the subnet and make it the grand master. And they all sync to it.

Every device in the Logix family that does hardware timestamping will use this grandmaster clock. Sequence-of-events analysis with such hardware would be accurate across the whole plant to within single-digit microseconds. Hardware that can schedule output transitions can be made to turn on with that accuracy. Two digital outputs driven by two different processors on opposite sides of a plant can be made to turn on or off within a few microseconds of each other.


It seems there may have been a bug in the past, or a current issue, where if the PLCs are synced remotely, the time updates negatively impact the servos.

I really don't know what to say here; as a PLC programmer, I routinely work with PLC-5, SLC, and Logix 5000 programs. The former two less so nowadays, because most of the PLC-5 and SLC-500 processors I work with are now remote I/O for upgraded controllers. When they fail, they will likely be replaced with a 1794 rack set up as remote I/O. If the things you mention were ever a problem, the solution must be built into the standardization models I now use, so I don't notice them, and I guess I have simply forgotten about them.