Edge 8.1.7 with OPC UA Omron FINS/TCP connection experiencing overloading and slow reads

Hi All, I’m using Edge 8.1.7 with an Omron FINS/TCP connection through OPC UA.
On the Status > Connections > Devices page, it shows the PLC connection is overloaded.
Writing values to the PLC is instantaneous, but reading from the PLC to the gateway is slow, with delays of 3-6+ seconds.

While reading the Omron FINS driver manual, I noticed the sections on FINS Settings and Request Optimization.

  1. For FINS Settings, all the values are currently at the default of 0 in the configuration.
    But on the actual PLC, we have set the UNIT No. and NODE No. based on the IP address configured.
    Do I need to enter the FINS Settings values to match the actual configuration on the PLC side? Communication is currently working fine with the default values.

  2. For Request Optimization, I am using the default values as well.
    I would like more information about the four settings so I can experiment with the values and see if they help speed up reads.

Concurrent Requests: 2 (range: 1-8)

  • Is 8 a safe value to test with? The manual mentions: “increasing this too much can overwhelm the device and hurt communications with the device”. The setting on my PLC side is 100 Mbps full duplex.

Max Request Size: 240

  • Tried saving the setting 1,000,000 without error.
    Is there a recommended value for it?
    Does it mean that the higher the value I set, the more requests will be sent from Ignition, and if the PLC is able to handle them, it will send back accordingly, and hence result in faster reads?..

Max Gap Size: 0

  • “The maximum address ‘gap’ to span when optimizing requests to contiguous address locations.” Sorry, I don’t really get the concept…

Write Priority Ratio: 5 (range: 1-10)

  • If I reduce the ratio, does it mean more read requests will be executed, and hence an overall faster read? And at the same time, will the writing speed be compromised?

Any advice is appreciated! Many thanks in advance!

Can you provide screenshots of the diagnostics page for this device from the Ignition Gateway? It would be helpful to see all the numbers.

Not sure if you should change this yet or not. But it's easy to experiment and see how it affects the load.

This is going to be limited by the device you're connected to. It would only matter if you are requesting large contiguous chunks of data from a memory area - it might allow you to get a little bit more data in a single request.

With a gap size of 0, if you ask for D0 and D2 that's going to be 2 requests, the driver won't combine them and ask for D0-D2 in a single request. If the allowed gap size was 1 then the driver would look at the gap between requested offsets and decide it's okay for this to be requested all at once, by asking for D0-D2 in a single request, even though you didn't ask for D1 explicitly.
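
If it helps to picture that, here is a rough sketch of the idea (not the actual driver code, just an illustration of how a max gap size decides whether neighbouring addresses get merged into one block read):

```python
# Illustrative only -- not the actual Ignition driver code.
# Shows how a max gap size decides whether requested word offsets
# get merged into a single block read.

def coalesce(offsets, max_gap):
    """Group sorted word offsets into (start, end) read blocks,
    merging neighbours whose gap is <= max_gap."""
    blocks = []
    for off in sorted(offsets):
        if blocks and off - blocks[-1][1] - 1 <= max_gap:
            blocks[-1][1] = off          # extend the current block
        else:
            blocks.append([off, off])    # start a new block
    return [tuple(b) for b in blocks]

# Asking for D0 and D2:
print(coalesce([0, 2], max_gap=0))  # [(0, 0), (2, 2)] -> two separate requests
print(coalesce([0, 2], max_gap=1))  # [(0, 2)]         -> one request; D1 is read but discarded
```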

It does mean more reads would be executed, but this only matters if you are totally saturating the driver with constant write requests, such that it always has write requests it has to prioritize over read requests. This is not common.
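
A toy way to think about that ratio (my own guess at the behaviour, not the driver's real scheduling code): when both queues always have work pending, a ratio of 5 just means roughly 5 writes get serviced for every 1 read.

```python
# Illustrative only -- a toy queue, not the driver's real scheduler.
# With both queues always full, a write:read priority ratio of 5 means
# roughly 5 writes get serviced for every 1 read.

from collections import deque

def schedule(reads, writes, ratio, budget):
    """Interleave pending requests: up to `ratio` writes per read."""
    order = []
    writes_since_read = 0
    while budget and (reads or writes):
        if writes and writes_since_read < ratio:
            order.append(writes.popleft())
            writes_since_read += 1
        elif reads:
            order.append(reads.popleft())
            writes_since_read = 0
        else:
            order.append(writes.popleft())
        budget -= 1
    return order

reads = deque("r%d" % i for i in range(10))
writes = deque("w%d" % i for i in range(10))
print(schedule(reads, writes, ratio=5, budget=12))
# ['w0', 'w1', 'w2', 'w3', 'w4', 'r0', 'w5', 'w6', 'w7', 'w8', 'w9', 'r1']
```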

Hi Kevin, thank you for answering question 2! Would you be able to provide some info on question 1 as well?

Below are the screenshots taken from the Status > Connections > Devices page:



I have also tried to play with tag groups (scan classes) to reduce the load.
Example:

  1. I have a system start status tag “tag_start” → used to create a driven tag group “taggroup_started” → if the tag is active, Leased/Driven rate = 1,000 ms, otherwise Rate = 20,000 ms.
  2. When the system starts, 10 processes run subsequently, each having a process-started tag, i.e. “tag_process1_started”, “tag_process2_started”, etc. “taggroup_started” is assigned to these 10 tags.
  3. Each of these process-started tags is also used to create a driven tag group → if the tag is active, Leased/Driven rate = 1,000 ms, otherwise Rate = 10,000 ms. They are called “taggroup_process1_started”, “taggroup_process2_started”, etc.
  4. Each process has 20 message tags, which are assigned to their respective process tag group. For example, for process 4, the 20 message tags are under “taggroup_process4_started”. (A rough sketch of this chain is below.)
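
Roughly, the chain looks like this (labels and layout are just for illustration, not exported Ignition configuration):

```python
# The dependency chain as described above (labels/layout are mine, purely
# for illustration -- this is not exported Ignition configuration).

chain = [
    # driving tag              driven tag group              members of that group
    ("tag_start",              "taggroup_started",           "tag_process1_started ... tag_process10_started"),
    ("tag_process4_started",   "taggroup_process4_started",  "the 20 message tags for process 4"),
]
# Note: tag_process4_started is itself a member of taggroup_started, so
# taggroup_process4_started only gets the fast rate if the group one level
# up is actually polling its driving tag.
```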

I used the above method to reduce the scanning load on the PLC. But I realised that sometimes the message tags don’t seem to be scanned: when the process has started and the message tag is active on the PLC side, it is not reflected in the Ignition tag browser. Only when I assign it to the default Direct 1,000 ms tag group does it reflect the status.
So I tried to go “down one level”: instead of using their respective process-start tag groups, I assigned them all to “taggroup_started” instead. That helps, and most message tags now reflect correctly, but there are still some random ones not reflecting/scanning once in a while. So for those few specific tags, I have to assign the default Direct 1,000 ms tag group instead.

I am not sure whether what I observed above is also due to the overloading, or whether I have done something wrong…
It seems like, if I use tag_A active as the driving expression for taggroup_A → assign taggroup_A to tag_B → use tag_B active as the driving expression for taggroup_B → assign taggroup_B to tag_C, the scanning of tag_C doesn’t seem to be “stable”?

Sorry, I’m not sure if the above is drifting off from the main topic…

Look at your mean response time. It is a very reliable ~22 ms. That means your PLC is processing somewhat less than 50 requests per second. You will have to figure out what tags are important enough to poll quickly, and the rest will have to be polled slow enough to keep the total requests per second below 50.

You currently have 155 requests trying to be read at a 1-second pace. You have a lot of chopping to do.
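
For anyone following along, the arithmetic behind that (a sketch of my own, assuming only one request is outstanding at a time):

```python
# Back-of-the-envelope behind that figure, assuming only one request is
# in flight at a time (numbers taken from the diagnostics above).

mean_response_ms = 22.0
budget_per_second = 1000.0 / mean_response_ms       # ~45 requests/s the PLC can service

requests_at_1s_pace = 155
overload = requests_at_1s_pace / budget_per_second  # ~3.4x more than the budget

print(round(budget_per_second, 1), round(overload, 1))  # 45.5 3.4
```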

You don't need to change these for FINS/TCP, they get negotiated when the connection is opened. They do need to be set correctly when using FINS/UDP.

Other than that, @pturmel is correct, you probably need to start prioritizing what data needs to be acquired fast (probably no faster than 1000ms) and what can be acquired slower.

It's possible that increasing the Concurrent Requests setting will help you also. Try 4, 6, and 8, and see what impact it has on the diagnostics.


Forgot about that. Concurrent requests generally help if the PLC is lightly loaded otherwise. That is, the 22ms response time is dominated by network time, not PLC message processing time.
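
A toy model of why that distinction matters (the split between wire time and PLC processing time below is invented, just to show the shape of it):

```python
# Toy model only -- the network/PLC split is made up, not measured.

def throughput(concurrency, network_ms, plc_ms):
    """Requests/s with `concurrency` requests outstanding: pipelining can
    hide wire time, but the PLC still processes one request at a time."""
    round_trip = network_ms + plc_ms
    return min(concurrency * 1000.0 / round_trip,  # limited by round trips in flight
               1000.0 / plc_ms)                    # limited by the PLC's serial processing

# If most of the 22 ms is wire time, extra concurrency helps a lot:
print(throughput(1, network_ms=20, plc_ms=2))  # ~45 requests/s
print(throughput(8, network_ms=20, plc_ms=2))  # ~364 requests/s

# If most of the 22 ms is PLC processing time, it barely helps:
print(throughput(8, network_ms=2, plc_ms=20))  # ~50 requests/s
```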

Hi @Kevin.Herron and @pturmel, thank you for replying.
I have previously tried to reduce the requests by playing with tag groups, which is why I used the method described above, but I am not sure if it is also due to the overloading issue; the relevant tags don’t seem to be reacting accordingly.

I think you simply have too many tags. Experiment with fewer, then add more a few at a time to see how things behave. If your hardware is simply not capable of the desired bandwidth, juggling driven scan classes is unlikely to succeed.

(I would not expect the decision tag for a driven tag group to be itself in a driven tag group. That seems especially unlikely to work.)

What model PLC is this, anyways?

The PLC is OMRON CJ2M-CPU35.

Regarding the decision tag for a driven tag group itself being in a driven tag group: from my point of view, tag_A (overall system start) triggers tag_B (process X), and only once tag_B is triggered is tag_B used to trigger tag_C (one of the process message tags under process X). If tag_A is not active at all, tag_C should not be triggered either (if the system is not started, the process messages should not be triggered either).
So from this flow, it should work, right?..

I'm not convinced. I recommend you try that after you trim down to the number of tags you can support without driven tag groups. Then you can re-introduce them a little at a time and see how it goes.

What you have now is overloading the driver. By a lot.

Hi all, I have managed to settle the delay issue.
I realized that playing with a small number of tags, slowly increasing the number, and then adjusting the tag groups accordingly only mitigated the issue by a tiny bit; there were still overloading issues, so that did not really help.

Instead, I went to try the Request Optimization settings:
Concurrent Requests: set to the max, which is 8.
Max Request Size: set to the max for the SYSMAC NET configuration, which is 1,980 bytes (some manuals mention 1,986, 1,990, etc.; to prevent overloading/damaging the communication, I set a safer value of 1,980).
(Just a doubt: why did Ignition set the default to 240? Even for the SYSMAC LINK and Host Link types, the max is 542 bytes.)
Max Gap Size: set to 10.
Write Priority Ratio: left at the default of 5.

Before applying the settings, under Task Manager > Performance > Ethernet port, the max Receive speed was around 72 Kbps. After applying the settings, it reached 80-88 Kbps, and occasionally 96 Kbps (I tried increasing the max request size to 6,000 and the Receive speed still stayed around 80-88 Kbps).
Just wondering: if this Receive speed were increased further, would it speed up the communication even more? If yes, are there any settings I can try from Ignition’s side?
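
As a rough sanity check on those numbers (my own assumptions, ignoring TCP/IP and FINS framing overhead), the Receive figure can be converted back into a response rate:

```python
# Rough sanity check (assumptions are mine -- ignores TCP/IP and FINS
# framing overhead): convert the Task Manager "Receive" figure back into
# an approximate response rate.

def responses_per_second(receive_kbps, avg_response_bytes):
    return (receive_kbps * 1000.0 / 8.0) / avg_response_bytes

# ~88 Kbps of mostly small (~240 byte) responses is on the order of
# 45 responses/s, which lines up with the ~22 ms mean response time:
print(round(responses_per_second(88, 240)))  # ~46
```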

Your ethernet speed indication suggests that the limit is the PLC. Which is basically what we already knew. You may need a PLC with more communication bandwidth. Or perhaps an alternate protocol. (I’m biased towards EtherNet/IP I/O connections for troublesome applications–via the CJ1W-EIP21 card in your case.)


Not by any means an expert with Omron PLCs, but is this PLC communicating with other devices over the built-in Ethernet port? Have a look at Section 8-7 of the CS/CJ Unit Ethernet Operation Manual (Cat. W465). My assumption is that this is all solely over the built-in Ethernet port. Also, what is the current cycle time of the controller?

Hi jdehart,

Nope, the PLC cable is directly connected to my HMI/gateway's Ethernet port.

Hmm, at first I thought so too. I have 2 cables connected to my Edge device: 1 PLC cable and 1 from the server. I swapped the cables between the 2 ports, and regardless of which port is used, the max receive speed coming from the server can always reach 100 Kbps and above, whereas the PLC port remains low…

When it was in Monitoring mode, I saw the cycle time was about 10 ms.