Store and Forward Throughput

I am using store and forward for some large data processing tasks (>100k DB inserts). The data is going into the cache quickly (~300/second), but the forward to the DB is VERY slow (<10/second). How can I debug this and make sure the data is inserting as fast as possible? There don't seem to be any settings for it. I'm trying to determine what the max throughput to the DB is.

Max throughput is whatever your DB can handle - whenever store and forward is pushing pending data, it will try to saturate all available DB connections. You can increase the connection pool size in the database connection's advanced options, but often that's just a band-aid solution.
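
One way to put a number on "whatever your DB can handle", independent of store and forward, is to time inserts from a gateway script. Below is a minimal sketch, assuming a throwaway test table named `throughput_test` and a connection named `MyDB` (both placeholders); it uses the standard `system.db.runPrepUpdate` scripting call:

```python
# Rough single-row insert benchmark, run from the Gateway Scripting Console.
# Assumes a throwaway table -- e.g. CREATE TABLE throughput_test (t_stamp DATETIME, val FLOAT) --
# and a database connection named "MyDB". Both names are placeholders.
import time

ROWS = 1000
start = time.time()
for i in range(ROWS):
    system.db.runPrepUpdate(
        "INSERT INTO throughput_test (t_stamp, val) VALUES (?, ?)",
        [system.date.now(), i * 0.1],
        "MyDB")
elapsed = time.time() - start
print("%d single-row inserts in %.1f s (~%.0f rows/sec)" % (ROWS, elapsed, ROWS / elapsed))
```

If single-row inserts from the gateway land near the same ~10/second you see from store and forward, the bottleneck is per-statement round-trip latency rather than raw database power.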

Status -> Databases -> Details should give you your ‘longest recent queries’ - you can look at these to see if queries to a certain table are taking up a majority of the time spent. If they are, then you may need to look into building/rebuilding indexes or other general DB optimization strategies.
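
If the slow statements all hit one table, an index covering the columns in their WHERE/ORDER BY clauses is the usual first fix. A hedged sketch below; the table, column, index, and connection names are placeholders, and you may prefer to hand the equivalent DDL straight to your DBA rather than run it through Ignition:

```python
# One-off index creation against the slow table identified in
# Status -> Databases -> Details. All names here are placeholders.
system.db.runUpdate(
    "CREATE INDEX ix_history_tagid_tstamp ON my_history_table (tagid, t_stamp)",
    "MyDB")
```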

If it's specifically historical data that's slow to insert, the likely cause is simply tables that are too large. Tables with more than 1,000,000 rows require a very powerful database to manage, and it's pretty common for feature creep in Ignition projects to end up with lots and lots of historical tags storing data more frequently than is strictly necessary, leading to ballooning table sizes over time.
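
To check whether table size is the culprit, a quick row count per history partition is enough. A minimal sketch, assuming the default Ignition historian schema (where `sqlth_partitions` lists the partition table names in a `pname` column) and a connection named `MyDB` (placeholder):

```python
# Print the row count of each tag-history partition table.
# Assumes the default historian schema and a connection named "MyDB" (placeholder).
partitions = system.db.runQuery("SELECT pname FROM sqlth_partitions", "MyDB")
for row in partitions:
    pname = row["pname"]
    count = system.db.runScalarQuery("SELECT COUNT(*) FROM %s" % pname, "MyDB")
    print("%s: %s rows" % (pname, count))
```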

Thanks for the info. I have 30 connections available, but it still appears to only be utilizing one… is this because it’s all going to the same table?

I know it's not a DB power issue; it's a large SQL cluster with >200 GB of RAM, managed by a dedicated DBA. I sent some of the SQL over to one of our DB guys to check the performance directly against the DB.

May I know if you have solved the issue? I am facing the same issue now, as the forwarding is very slow. May I know what steps to take to improve it?

I'm with Ryan on this one. We lost connection to a remote device for 4 days, which left about 200,000 records queued in the store and forward local cache. That's only 20% full. Upon reconnecting the gateway network, store and forward began sending data to the remote database (full disclosure: the DB is not local on the same PC, but located in another state over a cellular/VPN connection…but still, we can remote in and experience zero lag, so network speed is not an issue). But holy cow it is slow, with the same 10/sec on the database storage status window that Ryan saw. It's going to take 24 hours for the data to fully get sent to the database. We've checked the load on the database server, and it is barely breaking a sweat. There's got to be a setting that increases the push of these old records so we can generate reports. Is this a Gateway Network > Send Buffer Limit or a Store and Forward > Write Size and Write Time setting?
I noticed the database status showed 500 ms insert statements that seem to be batches of about 100 rows at a time. Does store and forward work on a timer, sending about 100 rows every second?
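
For a sense of why batch size matters so much over a high-latency link, here is a hedged sketch (not the forwarder's actual code path) of the same 100 rows written as one multi-row INSERT instead of 100 separate statements; table and connection names are placeholders:

```python
# Multi-row insert: one round trip instead of 100.
# Table and connection names are placeholders; this is only an illustration,
# not how the store-and-forward engine itself builds its batches.
rows = [(i, i * 0.1) for i in range(100)]

placeholders = ", ".join(["(?, ?)"] * len(rows))
args = [value for row in rows for value in row]

system.db.runPrepUpdate(
    "INSERT INTO throughput_test (tagid, val) VALUES " + placeholders,
    args,
    "MyDB")
```

On a link with, say, 100 ms of round-trip latency, per-statement overhead dominates, so throughput scales roughly with rows per statement; that is likely why the Write Size / Write Time settings (and the effective batch size you observed) matter far more than raw database horsepower.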

Paul, we have the default 8 Max DB connections…but I’ve never seen it go over 1 used at any given time. I don’t think it utilizes all available DB connections.

Any thoughts?

Here I am bringing up old posts again, but I have a solution to slow store and forward issues. PGriffith mentioned that max throughput is whatever your DB can handle, but that's not nearly as important as HOW you are connecting to the database.

In a hub-and-spoke architecture, you'll have multiple sites in remote locations, and you typically establish a "Database" connection on each remote gateway that travels over a VPN to a SQL server in a central location. It turns out a direct JDBC database connection is very sensitive to high-latency links like VPN. Instead, leverage the Ignition Gateway Network: first create the database connection on your central server (this creates both a store and forward engine and a historical tag provider on the central server). Don't forget to change the central gateway's Gateway Network Service Security so that History Provider Access is set to Query and Storage. Then remove the remote site's database connection and create a Remote Historical Tag Provider pointing to the central server's historical tag provider that was just created automagically (not misspelled). Creating a Remote Historical Tag Provider automatically creates a corresponding store and forward engine as well. All history data from the remote sites is now buffered in the local store and forward engine and sent through the GAN. The speed is much faster and more reliable, despite still running over VPN and spotty connections like cellular.

There is a big caveat, though, and I think it's a bug or oversight in Ignition (as of 7.9.12). Alarm Journals and Audit Profiles only use local database connections and don't support remote providers (Ignition Edge has a fix for this already, but it hasn't been implemented for Ignition 7.9). This means that without the slow, unreliable direct database connection, you lose out on alarm journal data at the central server. So, instead of deleting the connection, just rename it and keep it for the less-active alarm events and audit events…and pray the database connection doesn't get faulted permanently (as I have seen happen often). Hope this helps others running into the same issue.
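
Once the remote historical tag provider exists, one way to confirm data is flowing through the local store and forward engine and across the GAN is to push a test record to it from a script. A hedged sketch, assuming a version that includes `system.tag.storeTagHistory`; the provider names and tag path are placeholders:

```python
# Store one test history record to the central provider over the GAN.
# Provider names and tag path are placeholders.
system.tag.storeTagHistory(
    "CentralHistoryProvider",   # remote historical tag provider on this gateway
    "default",                  # realtime tag provider the path belongs to
    ["Site1/Line1/Flow"],       # tag path(s)
    [42.0],                     # value(s)
    [192],                      # quality code(s); 192 = Good
    [system.date.now()])        # timestamp(s)
```

If the record shows up in the central database (and sits in the local store and forward status while the link is down), the GAN path is doing its job.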

For what it’s worth, remote alarm journals and remote audit profiles were added in 8.0.something, I think 8.0.6. It was a significant enough effort, with enough risk of regression, that the decision was made not to backport to 7.9.

Paul, do you think the decision would be reversed if there is a flaw in how Tag History Providers are created in the Gateway? That is, it is easy to create orphaned THP entries by first creating remote history providers and then creating database connections with the same name. It also automatically changes the Type of the tag provider in the process and begins generating quarantined items because of that change. Just wanted to put that out there for those who are still on 7.9 and need a fix.

Have you found any way to increase the throughput after using the GAN? In a hub-and-spoke setup with Edge gateways at the spokes, using Data Sync pointing to the hub's historical tag provider is still extremely slow.

Note, I am on 8.0.10 for Edge and 8.0.12 for the hub, but I would assume the same concept applies.

Keith, we didn’t have to do anything after utilizing the GAN. We saw instant improvement. Are your edge gateways connected via cellular?

Actually, I didn't update here, but we realized that the issue was SSL inspection being performed on the traffic over port 8060; switching it to not use SSL, on port 80, made a drastic improvement. Thank you for checking, though!