Omron FINS TCP Slow Tag Write Times

Replying from here: When to use system.tag.writeAsync instead of system.tag.writeBlocking? - #10 by pturmel

I have an Omron CS1 processor connected to Ignition with the Omron FINS TCP driver. Tag writes (using writeBlocking) are fairly slow: it takes about 1.2-1.5 seconds to write about 30 tags, which I am told is indicative of a larger problem. Any clue as to what may be causing this? Latency has historically been an issue on this line, and we are trying Ignition as an alternative to Wonderware, which has been used there for the last 20+ years.

First thing I'd check is your device diagnostics in Ignition, particularly the request counts per pace, and the mean response times per request. An overloaded device connection will have poor write performance, too.

Branded PLC protocols like FINS are request-response, and the device can typically handle only one request at a time, or at most a small number of concurrent requests. That means network latency between Ignition and the device is catastrophic for throughput. If Ignition isn't on the same LAN as the device, you should fix that first.
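To see why latency dominates, here's a back-of-envelope sketch. The round-trip times and the 60-request count below are illustrative, not measured from this system:

```python
import math

def poll_time_ms(requests, round_trip_ms, concurrency=1):
    """Approximate one poll cycle when the device services at most
    `concurrency` requests at a time, each taking round_trip_ms."""
    return math.ceil(requests / concurrency) * round_trip_ms

# 60 serialized requests at 5 ms on a local LAN vs 25 ms across a routed link:
print(poll_time_ms(60, 5))    # 300 ms -- fits comfortably in a 1000 ms pace
print(poll_time_ms(60, 25))   # 1500 ms -- overloads a 1000 ms pace
```

The same math shows why raising the driver's concurrent-request limit helps: two requests in flight roughly halves the cycle time.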

Are you referring to this page?

It is on the same LAN with only two network switches between Ignition and the PLC.

Overload should always be zero. (1% blips are ok.)

You are asking for more than your device can deliver.

That you have only 137 items with 60 requests suggests that you are polling information that is scattered all over the PLC memory space. Don't do that. Place as much as possible into adjacent memory locations, so the Ignition driver can get it all with one or two requests.
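The coalescing the driver does can be sketched roughly like this. This is a hypothetical illustration of the general technique, not the actual FINS driver code; the gap allowance mirrors the driver's "Gap size" optimization setting:

```python
# Hypothetical sketch: coalescing word addresses into block-read requests.
# Nearly-adjacent addresses (within max_gap) share a request; scattered
# addresses each cost their own round trip.

def coalesce(addresses, max_gap=0, max_len=100):
    """Group word addresses into (start, length) request ranges."""
    ranges = []
    for addr in sorted(addresses):
        if ranges:
            start, length = ranges[-1]
            end = start + length - 1
            if addr - end <= max_gap + 1 and addr - start + 1 <= max_len:
                ranges[-1] = (start, addr - start + 1)  # extend current range
                continue
        ranges.append((addr, 1))  # start a new range
    return ranges

print(coalesce([100, 101, 102, 103]))        # [(100, 4)]   -> one request
print(coalesce([100, 200, 300]))             # three separate requests
print(coalesce([100, 102, 104], max_gap=2))  # [(100, 5)]   -> gaps absorbed
```

Scattered tags turn 137 items into 60 requests; packing them into adjacent memory lets one or two block reads cover everything.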

I think the FINS driver does a poor job optimizing writes... I kind of remember it being a thing to follow up on after the initial release... but I don't think that ever happened.

Yeah, but I doubt it's the writes' fault in this case.

Well, I think it's kind of our fault. It looks like writing multiple values = multiple requests, and any kind of batching only occurs if you're writing to an array address. I need to look at protocol docs to see what kind of batching is available.


I'll bet there is none, just concatenating consecutive addresses.

Hope so :grimacing:

Otherwise we're just leaving free write performance on the floor.

So I am writing to addresses that are all next to each other, in this case E0515430-E0515460. Is there a way to view the items to see if anything is hanging out somewhere it shouldn't be?

No gaps? If so, that is ideal. A wireshark capture while writing would be helpful.

Your reads are definitely not all next to each other.

Ok, so the Memory Area Write command is for consecutive addresses only as expected.

But... we are ONLY grouping the writes for arrays, even if they are next to each other. So there's still something that could be done in the driver.


I just wrote up a ticket for this... IGN-15395 if it needs to be referenced.

Sorry @mike.field, it doesn't really help you out right now.

edit: though since you are writing from a script, you could just assemble an array value and write it to the corresponding array address. The address would be something like E05<Int16[30]>15430? You'd need to use system.opc.writeValue, not a system.tag function. Full OPC Item Path something like [MyDevice]E05<Int16[30]>15430.
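A sketch of that workaround, assuming the item path syntax shown above. The `array_item_path` helper is hypothetical, and `system.opc.writeValue` only exists inside an Ignition gateway script, so here we just build the path:

```python
# Hypothetical helper that builds an array-style OPC item path like the
# one suggested above: [MyDevice]E05<Int16[30]>15430

def array_item_path(device, area, dtype, count, offset):
    """Build an OPC item path addressing `count` words as one array."""
    return "[%s]%s<%s[%d]>%d" % (device, area, dtype, count, offset)

path = array_item_path("MyDevice", "E05", "Int16", 30, 15430)
print(path)  # [MyDevice]E05<Int16[30]>15430

# Inside Ignition, the single batched write would then look something like
# (server name is an assumption; adjust to your gateway's OPC connection):
# system.opc.writeValue("Ignition OPC UA Server", path, listOf30Values)
```

One array write replaces 30 individual Memory Area Write requests, which is exactly the batching the driver isn't doing on its own yet.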


So I removed two gaps I found and there was no change to write times. Item count actually went up. Just to be clear, I have multiple tags in different areas, but the tags I am writing to are all consecutive memory locations.

Thanks Kevin,

Would changing the request optimization section of the connection settings help?

Update: Changing the Gap size option seems to help as now Items is down to 88 and overload is down to 0-1. Tag update times have still not changed, but at least it seems like an improvement.

I also broke out the tags that I do not need updated very quickly to poll at 2500ms instead of the default 1000ms.

Also, changing the concurrent requests setting reduced my write times to sub 1 sec.