I would like the machine cycle count to be recorded in the historian on each change from run to stop and from stop to run. This would allow me to create accurate XY charts for each machine with minimal data handling. How should I do this efficiently?
- The tag editor allows history sample mode to be set to Tag Group but this would require me to create individual tag groups for each machine’s run-status signal. I’d also need to create a duplicate tag for the live count. This doesn’t seem right.
- Transaction groups would require one per machine with the trigger using my run-status tag.
Both seem messy. What trick am I missing?
Another option is to use a gateway tag change script to write the data to your own tables. Personally I like this method; you get a lot of flexibility this way.
Ooooh, @josborn! That’s something I hadn’t considered and can see the advantage.
Can you give me a kick-start on this?
- I’ve created a gateway Tag Change event script triggered by the machine’s run-status.
- I’ve added the code,
paths = ["[default]path/to/tag"]
values = system.tag.readBlocking(paths)  # returns a list of qualified values, one per path
Now how do I write that to my history table? (I’ve had a quick look through the manual but haven’t found an example yet.)
Can I make a performance gain by making a dedicated historian for this data? (I’d like my production charts to be fairly snappy.)
Well, doing it this way doesn't involve the historian at all, which is a downside since you don't have access to any of the nice things the historian has built in. I can give you an example where I am logging multiple tags on each event trigger; it will probably get you started.
tagPath = [
    "[default]ION/ION_Admin_KWH",
    # ...additional meter tag paths...
]
values = system.tag.readBlocking(tagPath)
for x in range(len(tagPath)):
    system.db.runPrepUpdate("INSERT INTO kwh_log (tag, value) VALUES (?, ?)", [tagPath[x], values[x].value], "ION_MeterData")
I think there is actually a way to use the historian; I've never tried it, though.
What y’all are describing is a scripted re-implementation of the SQL Bridge module’s transaction group. The key is the use of a wide table, where multiple values are recorded together. In this case, triggered by a state change.
I'm curious about this, as I use this approach but with tall tables.
To do this with tall tables, you must ensure the values are actually collected together in a synchronized way (like a transaction group's OPC read mode) and stored with identical timestamps. Then you pivot (or use a scripted pseudo-pivot) to re-assemble them into correlated rows later.
With a wide table, storage and retrieval are natural SQL insert and select.
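For illustration, here is a minimal sketch of the wide-table approach described above: one row per state-change event, with the run state and cycle count stored together under a single timestamp. The table, column, and database names are my assumptions, not anything from this thread, and the Ignition call is commented out so the snippet stands alone.

```python
# Hypothetical wide-table insert for one state-change event.
# Table/column names (machine_cycles, t_stamp, run_state, cycle_count)
# and the database name are assumptions for illustration only.

run_state = 0        # stand-in for the run-status tag's new value
cycle_count = 4321   # stand-in for the cycle-count tag's value

query = ("INSERT INTO machine_cycles (t_stamp, run_state, cycle_count) "
         "VALUES (CURRENT_TIMESTAMP, ?, ?)")
args = [run_state, cycle_count]

# In a gateway tag change script you would then call:
# system.db.runPrepUpdate(query, args, "MyDatabase")
```

Retrieval is then a natural SELECT over one table, with both values already correlated per row.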
Search this forum for “tall versus wide” for more discussions. Each has their strengths.
That makes sense, appreciate the insight.
Though if @Transistor intends to use system.tag.storeTagHistory, that would mean sticking with tall tables, I suppose.
If you need to retrieve multiple values with identical timestamps, don't use the tag historian.
My usual nitpick: skip the index loop and zip the paths with their results:
paths = [...]
tags = system.tag.readBlocking(paths)
for path, tag in zip(paths, tags):
    system.db.runPrepUpdate("INSERT INTO kwh_log (tag, value) VALUES (?, ?)", [path, tag.value], "ION_MeterData")
And then it's @pturmel's turn to tell you he'd build a prep query and insert everything in just one call. Listen to Phil.
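In case it helps, here is a hedged sketch of that "one call" idea: build a single multi-row prepared INSERT and pass all arguments in one runPrepUpdate. The kwh_log table and columns come from the earlier example; the second tag path and all values are stand-ins, and the Ignition call is commented out so the snippet runs on its own.

```python
# Build one multi-row prepared INSERT instead of calling runPrepUpdate
# once per tag. The second path and both values are invented stand-ins
# for what readBlocking would return in the gateway script.
paths = ["[default]ION/ION_Admin_KWH", "[default]ION/ION_Other_KWH"]
values = [12345.6, 789.0]

# One "(?, ?)" group per row, with all arguments flattened into one list.
placeholders = ", ".join(["(?, ?)"] * len(paths))
query = "INSERT INTO kwh_log (tag, value) VALUES " + placeholders
args = []
for path, value in zip(paths, values):
    args.extend([path, value])

# Then a single database round trip:
# system.db.runPrepUpdate(query, args, "ION_MeterData")
```

One round trip instead of one per tag also means all the rows land in the same transaction.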
Ah, @pascal.fragnoud, I wouldn’t want to deprive you of the opportunity to expound on efficient code techniques…
Thanks, Phil. You explained this to me well in a previous question.
In this case I don’t need simultaneous timestamps. Each machine will start and stop independently so a tall table layout works fine.
But you do need simultaneous timestamps, to keep the run/stop state and count together. Yes, two variables and a timestamp is not very “wide”, but the concept is the same.
This is more of a philosophical question, but if the machine is stopped, how can the run count change?
So why is there a need to record the run count on the transition from stop to run? Or is it that you don't trust your communications to the machine and are trying to account for the machine running when you are not aware of it running?
Note that personally I'd still record the count on both transitions, because I like redundancy.
Processor uploads/downloads disturbing the PLC’s contents.
My implementation of this logs every 15 minutes, and I truly only care about the first and last reading each month; this is sufficient.
It doesn’t, but recording the count at both transitions makes the XY chart very simple to implement.
Mock-up of a Perspective template (done in Word). Recording counts at points 1 and 2 makes graphing simple and accurate.