What is the practical limit for data acquisition using Modbus? I have a device that we're trying to capture a peak value from, and I've set the tag group to 50ms to see whether I can retrieve data at that rate. When I look at the Wireshark logs, it looks like I'm getting data about every 100ms. Are there settings I can use to improve that rate? What I can see from the Wireshark logs is that the gateway sends a request (function 3) to the device, and the response from the device arrives less than 1ms later. The gateway then waits ~40ms and sends an ACK, then waits another ~40ms before sending another Read Holding Registers request. Is there something I can do to speed this up?
Can you share the Wireshark capture?
Yes. The device I'm talking to is at 192.168.9.40, so use 'ip.addr == 192.168.9.40' in Wireshark to filter out other traffic.
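As a side note, if you export the request timestamps from the capture (e.g. with tshark's `-T fields -e frame.time_relative` options), a few lines of Python will confirm the inter-request spacing. The timestamp values below are made up to mirror the ~100ms spacing described above:

```python
# Hypothetical timestamps (seconds) of successive Read Holding Registers
# requests, as exported from a Wireshark/tshark capture. Values are
# invented for illustration, mirroring ~100 ms request spacing.
request_times = [0.000, 0.101, 0.199, 0.302, 0.400]

# Inter-request deltas in milliseconds.
deltas_ms = [(b - a) * 1000 for a, b in zip(request_times, request_times[1:])]
avg_ms = sum(deltas_ms) / len(deltas_ms)
print(f"average spacing: {avg_ms:.1f} ms")
```

If the average comes out near 100ms instead of the 50ms you configured, the gateway is the pacing element, not the device.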
How do I upload a file?
You may not have enough permissions yet, I sent you a Dropbox link.
Makes sense. File sent to your dropbox.
Hmm. Well, it could depend on system specs and load; I'd expect a little closer to 50ms between requests than you're getting, but not by much. 100ms is usually the realistic floor.
Can you look at status > diagnostics for the OPC connection and make sure the requested sampling interval is 50ms?
Edit: Status > Connections > OPC Connections, then click the Client button. Look for the tag group on that page and make sure the rate is 50ms, then click the Nodes button and check the requested/revised sampling intervals.
That appears to show only the tag group default. I have a high speed tag group for the high speed data. Do you know how to look at those values?
It should show all tag groups that have at least one OPC tag belonging to the server you are looking at diagnostics for. You might call support and have them look over your shoulder if that's not the case.
OK, I think I have something figured out. The Status > Connections > OPC Connections page (after clicking the Client button) only shows the default tag group from a given tag provider. So I added a new tag provider called HighSpeed and set its default refresh rate to 50ms. Now, instead of having a high-speed tag group in the default tag provider, I have a HighSpeed tag provider that defaults to high speed. With that change, the page shows both the default and HighSpeed tag providers, with a rate of 500ms for the default provider and 50ms for HighSpeed. Now, when I look at the Wireshark capture, I am getting 50ms between requests. I was also able to write to the local database at 50ms, and could see several instances where I had three different values for consecutive data points at a 50ms record rate.
That said, what do you think the practical limit might be for Modbus data acquisition? I understand it depends on a lot of factors, including the target's response time, but I was hoping you had some idea. This system is running Windows 10 IoT LTSC on a Core i7 computer with 16 GB of RAM, and all devices are connected to a local hub capable of 10 Gb Ethernet. Task Manager on the gateway computer shows low CPU usage, moderate memory usage, and about 200 kbps of network usage. It seems like I could increase the rate by quite a bit.
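For what it's worth, a back-of-envelope calculation suggests bandwidth is nowhere near the limiting factor at these rates; per-transaction latency is. The frame sizes below are approximate and assumed for illustration (a small FC3 read of 10 registers, typical Ethernet/IP/TCP header overhead):

```python
# Back-of-envelope Modbus TCP throughput at a 50 ms poll rate.
# All frame sizes are approximate and assumed for illustration only.
poll_interval_s = 0.050
requests_per_s = 1 / poll_interval_s           # 20 polls/second

registers = 10                                  # assumed registers per FC3 read
request_adu = 12                                # 7-byte MBAP header + 5-byte PDU
response_adu = 9 + 2 * registers                # MBAP + FC + byte count + data
overhead = 54                                   # approx Ethernet + IP + TCP headers

bytes_per_poll = (request_adu + overhead) + (response_adu + overhead)
kbps = bytes_per_poll * requests_per_s * 8 / 1000
print(f"{requests_per_s:.0f} polls/s, ~{kbps:.1f} kbps on the wire")
```

Even at 20 polls per second that's in the tens of kbps, so the ~80ms of gateway-side idle time you saw in the capture is scheduling overhead, not a bandwidth constraint.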
Anyway, is setting up a high speed tag provider more efficient? It appears to have solved the problem.
As long as you aren't trying to sample super fast to get fast screen updates, here's the approach I'd try:
- tag providers should be irrelevant here, only tag groups matter
- make a dedicated high speed tag group
-- set the group rate to 50ms (if this is a very recent version there's also a dedicated OPC UA sampling interval property you can use instead of the group rate)
-- set the OPC UA Publishing Interval to 100ms
-- set the OPC UA Queue Size to 10
This will request that the driver sample very quickly, while the OPC UA subscription queues potential changes and reports them at a slightly slower rate.
The drivers can generally poll as fast as the PLC can handle. It's the OPC UA subscription mechanism that isn't meant for this kind of high-speed polling unless you set it up as above, so that the subscription can execute more slowly and publish multiple queued values per item.
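The interplay between those three settings is simple arithmetic; this isn't an Ignition API call, just an illustration of what the numbers above imply:

```python
# How the suggested OPC UA subscription settings interact.
sampling_ms = 50      # tag group rate / sampling interval
publishing_ms = 100   # OPC UA publishing interval
queue_size = 10       # values buffered per monitored item

# Samples that accumulate between publishes and get delivered together.
samples_per_publish = publishing_ms // sampling_ms

# How long publishing can stall (e.g. a GC pause) before values are dropped.
max_stall_ms = queue_size * sampling_ms

print(f"{samples_per_publish} samples delivered per publish")
print(f"queue absorbs up to {max_stall_ms} ms of publish delay")
```

So with a queue of 10 at a 50ms sample rate, the subscription can ride out roughly half a second of delay without losing samples, which is why the queue size matters more than the publishing interval for data completeness.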
I'd say start from 50ms and try lowering by 5ms or 10ms at a time to see how it behaves. I doubt anything below 10ms is realistic, and at any of these rates you're going to see interruptions when garbage collection happens.
When you start going fast, the issue becomes java's garbage collection pauses. You can configure java to try to minimize these pauses, but you can't completely get rid of them, and you cannot make java guarantee anything. These GC pauses will definitely interfere with very fast polling, even using the settings Kevin recommends.
Consider batching samples in the PLC, paired with multi-row inserts to the DB. If you simply cannot lose any of this data, use a ring buffer holding many samples in the PLC, with handshaking between the PLC and Ignition for each batch successfully inserted into the DB.
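The DB side of that batching pattern can be sketched in a few lines. This is a minimal illustration, not production code: the table and column names are hypothetical, and an in-memory SQLite database stands in for whatever database you're actually using:

```python
import sqlite3

# Sketch of a multi-row insert for a batch of samples handed over by a
# PLC ring buffer. Table/column names are hypothetical; a real system
# would target the production database, not in-memory SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (t_ms INTEGER, value REAL)")

# Pretend this batch arrived from the PLC in one handshake cycle.
batch = [(0, 1.2), (50, 1.4), (100, 1.9), (150, 1.7)]

# One statement inserts the whole batch instead of one round trip per row.
conn.executemany("INSERT INTO samples (t_ms, value) VALUES (?, ?)", batch)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(f"inserted {count} rows in one statement")

# After a successful commit, the handshake would signal the PLC to
# release that section of the ring buffer for reuse.
```

The key point is that the commit/handshake happens once per batch, so a transient DB or network hiccup only delays the handshake; the PLC keeps buffering and nothing is lost.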
Thank you for your insights and suggestions. It's great to have real-time feedback on my projects with Ignition. One of the devices is an SSI device that's not connected to the PLC, so I need Ignition to sample it. Fortunately, I only need to record the peak value, so I don't have to write to the database that fast. Now that you mention it, I can record the values that come through the PLC using PLC logic and keep that as a separate value to sample. That should give me a reliable option, at least for those devices.