Looked at the comments Here and Here; rather than replying on those older topics, I started a new one.
So my understanding from those is that Ignition, when connecting to AB, reads the whole UDT tag at one time, but I wanted to clarify something about tag groups and splitting a UDT into sub-UDTs.
I have 10 PLCs, each with several UDTs, and I will only be reading from the controller tags / UDTs.
Everything is connected on a gigabit LAN dedicated to SCADA comms, with a separate network for PLC/RIO comms.
Just looking at a single UDT for this example: it will have 1000 instances on each PLC, with ~30 tags read by Ignition in each UDT. Is there any significant benefit to separating tag groups based on data needs by splitting the main UDT into sub-UDTs? In total I am currently estimating around 1 million tags read by Ignition, using 3 backend I/O servers.
For example, a main UDT named "Input" currently has
Input.raw
Input.value1
Input.value2
Input.value3
... up to value25
Instead, place them in nested UDTs:
Input.fast.raw
Input.fast.value1
Input.fast.value2
Input.slow.value3
Input.slow.value4
... etc.
Then set Input.fast.x (5 tags in this one) to a rate of 1 sec and Input.slow.x (20 tags in this one) to a rate of 1 min.
Is it worthwhile to do a split like this on the PLC and split the reads into tag groups accordingly?
This is for a safety-critical system, and there are certain points that we want to capture as close as possible to the actual event time. Most of the other data is stagnant in the PLC 99% of the time (it only changes when we do updates/upgrades/changes).
The drivers (both IA's native Logix and my new alternative) don't care so much about tag groups themselves, but about the number of different rates. Tags at the same rate in the same device will optimize together. Unique tags cannot be optimized. Arrays, structures, and arrays of structures can be optimized. All OPC tags pointing into such a top-level large tag should be at the same rate; otherwise, there will be extra messages on the wire.
For maximum efficiency, try to separate your arrays of structures containing tags of interest from tags/arrays/structures that you aren't going to read from Ignition at all.
How much reduction in size would you consider worth the effort of reorganizing tags in the PLC?
For example, if the UDT being read is 240 bytes of data: if splitting it in two only reduces the fast read to 230 bytes, it probably isn't worth the optimization, but if it drops to 100 bytes, it probably is, I think.
Or is there a minimum size below which there are no significant gains from shrinking the UDT (a diminishing-returns sort of thing, because a higher percentage of the message size is the message framing itself)?
I know each situation is going to depend on the user / system, etc.; just looking to see if you have some rough ideas / breakpoints you might look at in general.
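For a rough feel of those breakpoints, here is a back-of-the-envelope sketch in Python (illustrative only). The 4000-byte connection buffer and ~8 bytes of response overhead per chained read are the figures quoted elsewhere in this thread; your connection size may differ:

```python
import math

# Assumed figures: 4000-byte connection buffer, ~8 bytes of response
# overhead per chained read of a contiguous array.  Illustrative only.
BUFFER = 4000
RESP_OVERHEAD = 8

def chained_requests(n_instances, bytes_per_instance):
    """Requests needed to read an array of structures with chained reads."""
    payload_per_response = BUFFER - RESP_OVERHEAD
    return math.ceil(n_instances * bytes_per_instance / payload_per_response)

# 1001 instances, as in the example discussed below
for size in (240, 230, 100):
    print(f"{size} bytes/instance -> {chained_requests(1001, size)} requests")
```

Under these assumptions, trimming 240 to 230 bytes only drops the fast read from 61 to 58 requests, while dropping to 100 bytes cuts it to 26, consistent with the intuition that only substantial reductions pay off.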
If the missing items won't be polled at all, and you do this for many different structures, or many instances of these structures, then almost any reduction will show up as a reduced number of requests, and therefore better performance.
If some of the split-off items are polled in other (presumably slower) tag groups, factor in about 16 bytes each as new message overhead. But keep in mind that consecutive array elements only pay that overhead in batches that fit in the channel's message buffer (connection size).
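To put rough numbers on that overhead, here is a quick sketch in Python. The ~16-byte per-message figure and 4000-byte buffer are the rules of thumb from this thread, not exact protocol accounting:

```python
import math

def slow_group_overhead(n_items, item_bytes, buffer=4000, per_msg=16):
    """Approximate extra message overhead (bytes) for split-off items
    polled in another tag group.

    Scattered unique tags pay ~16 bytes each; consecutive array elements
    only pay it once per batch that fits the connection's message buffer.
    """
    scattered = n_items * per_msg
    batches = math.ceil(n_items * item_bytes / buffer)
    consecutive = batches * per_msg
    return scattered, consecutive

# e.g. 1001 slow items of 96 bytes each (as in the UDT_Slow example below)
print(slow_group_overhead(1001, 96))
```

The gap between the two results is why keeping the slow-polled data in consecutive array elements matters: the per-message overhead amortizes across each buffer-sized batch.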
One further nuance: if message batching is dominated by request size instead of response size, then you won't gain much; this splitting task reduces response sizes only. I recommend using a Wireshark capture to collect some stats on the packet sizes for the "Multiple Service Request" messages and their responses. (Wireshark will match them up for you.)
OK, and for structures I would need to have everything at the base tag level, and a nested UDT wouldn't do me any good, correct?
For example, what exists now in the PLC:
Input[1001] of type UDTInput (144 bytes)
where UDTInput is just a UDT with a bunch of REALs/STRINGs/DINTs (etc.)
The only way I can think of to keep the PLC program relatively intact would be to make
Input[1001] of type UDTInput
where UDTInput consists of 3 UDTs:
UDT_Slow (data read hourly) - 96 bytes
UDT_Fast (data read as fast as possible) - 36 bytes
UDT_PLC (data that isn't read by SCADA) - 12 bytes
But from some of your other comments, it looks like I would need to have each as a base-level tag in the controller, and nesting them in UDTInput wouldn't do anything to optimize. (I am not sure why this wouldn't work, though.)
So to optimize, I would need to end up with these as base controller tags in the PLC:
UDT_SLOW[1001]
UDT_Fast[1001]
UDT_PLC[1001]
and then have to heavily rework the whole PLC program to reference the 3 different base UDTs as appropriate.
I understand that splitting into 3 different base tag arrays is the most optimal solution for communication efficiency, but will nesting instead give some performance improvement, or would the nested structure not have any significant benefit at all?
Assuming this is entirely readable (nothing has ExternalAccess="None"), then it is certainly being read relatively efficiently using chained requests (not batched). If your comms buffer is 4000, unbatched response overhead is 8 bytes, so ~37 requests. (But watch out--IA's driver only reports the first request in a chained batch; mine reports them all.) The PLC's class 3 workload should be low, as it doesn't really have to "think" about this.
For the nested fast members (36 bytes each), there will be 1001 batchable requests. Batched responses will be 2+8+36 bytes per item, plus batch overhead of 6. So 86 per batch, and ~12 requests at that pace. Class 3 messaging load on the PLC will climb, as it has to parse and locate each nested UDT separately.
For the nested slow members (96 bytes each), there will be 1001 batchable requests. Batched responses will be 2+8+96 bytes per item, plus batch overhead of 6. So 37 per batch, and ~28 requests at that pace. Ditto on the class 3 messaging load.
So, more total requests with this separation within the UDT, but likely a win by having a substantially slower pace for the latter part.
With a separate UDT_Fast[1001] array, there will be 36036 bytes transferred in 10 chained requests. Very low class 3 messaging load.
With a separate UDT_Slow[1001] array, there will be 96096 bytes transferred in 25 chained requests. Also very low class 3 messaging load.
This combination has fewer requests in total compared to the original UDT, and will also benefit from the slow pace, and will barely touch the PLC's messaging processor.
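The arithmetic in those scenarios can be collected into one small Python sketch. The constants (4000-byte buffer, 8 bytes of chained-response overhead, 2+8 bytes per batched item, 6 bytes of batch overhead) are the figures quoted above and will vary with connection size:

```python
import math

BUFFER = 4000  # assumed connection (comms buffer) size

def chained_requests(n, item_bytes, resp_overhead=8):
    """Chained reads of a contiguous array: responses packed back-to-back."""
    return math.ceil(n * item_bytes / (BUFFER - resp_overhead))

def batched_requests(n, item_bytes, per_item=2 + 8, batch_overhead=6):
    """Batched (multiple-service) reads of nested members: one service per item."""
    items_per_batch = (BUFFER - batch_overhead) // (item_bytes + per_item)
    return math.ceil(n / items_per_batch)

print(chained_requests(1001, 144))  # whole UDTInput[1001]        -> ~37
print(batched_requests(1001, 36))   # nested fast members (36 B)  -> ~12
print(batched_requests(1001, 96))   # nested slow members (96 B)  -> ~28
print(chained_requests(1001, 36))   # separate UDT_Fast[1001]     -> 10
print(chained_requests(1001, 96))   # separate UDT_Slow[1001]     -> 25
```

This reproduces the request counts above and makes it easy to plug in your own structure sizes and instance counts.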
So a chained batch is a request for the whole Tag[0] - Tag[1001] range, but when nesting, it has to do individual batched services for Tag[1].fast, Tag[2].fast, etc., because they aren't all in the same contiguous array. Batched grabs are more resource-intensive because the PLC has to parse and locate each memory address, while a chained batch is more efficient since it reads sequential memory locations.
So I'm thinking the simplest solution would be to leave the Input[1001] array as is.
Add a new UDT array Input_Fast[1001] and put in it just the information that needs to be read fast. In the PLC, just do moves/copies from Input to the matching Input_Fast elements. Since this is read-only from the PLC side, I don't need to do anything with the code besides the move. I then have a smaller array that I can optimize for fast reads. Then I can set data in the Input array to be read at a slow rate, and/or faster as needed based on a trigger from the Input_Fast data.
I think this should work well for optimizing fast data flow but still keeping the PLC structure intact.
Normally, for a simple PLC program or one written from scratch, I would. This program is quite large and complex and uses the information as-is in that UDT structure extensively. The program is also in use at other facilities with different SCADA/HMI systems. A change like that would require significant work and significant QA testing, while just duplicating the fast data doesn't really change the program and doesn't impact anything else.