I have two Ignition Gateways. One sits in the USA (Linux, Ubuntu, Java 1.8.0_171, Ignition 7.9.9, MySQL 5.5.54); this is the client. The other sits in Europe (Windows Server 2016, Java 1.8.0_181_b3, Ignition 7.9.9, MySQL 5.7.23); this is the server.
I have around 70 tags. A few of them update maybe once a second, but most of them update maybe once an hour.
I would like to synchronize historical data from the client to the server. I do this by using the “Tag History Splitter” on the client, with a database connection to each database: one local on the client, and one on the server reached through the Gateway Network.
For reasons outside my control, the connection may be interrupted for some period of time (hours, days, months), and I am trying to handle this with the store and forward mechanism. However, I am having trouble setting it up. I did some testing by blocking the connection between client and server for 8 hours, then re-allowing the connection and looking at the data that had been stored on the server:
Test 1: With default parameters in the store and forward engine on both server and client, I only got around one hour of data: the first hour after the connection was disrupted, plus the last 5 minutes.
Test 2: I thought the store and forward engine of the client was to blame; maybe the buffer wasn’t big enough. I tried changing “Max records” to 1000 and “Memory buffer size” to 100,000. It changed nothing.
Test 3: I reverted the settings on the client and changed the settings on the server instead, again with “Max records” at 1000 and “Memory buffer size” at 100,000. This did it: now the server seemed to get all the data.
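For reference, this is roughly how I judged whether the server got all 8 hours of data: I pulled the stored timestamps for a tag and looked for gaps larger than the expected sample interval. A minimal sketch of that check in plain Python (how you extract the timestamps from your history tables is up to you; the sample data below is made up):

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap):
    """Return (start, end) pairs of consecutive timestamps that are
    further apart than max_gap (a timedelta), i.e. missing data."""
    gaps = []
    ordered = sorted(timestamps)
    for earlier, later in zip(ordered, ordered[1:]):
        if later - earlier > max_gap:
            gaps.append((earlier, later))
    return gaps

# Example: one sample per minute, with a simulated 3-hour outage.
base = datetime(2018, 9, 1, 0, 0)
ts = [base + timedelta(minutes=m) for m in range(60)]            # first hour
ts += [base + timedelta(hours=4, minutes=m) for m in range(60)]  # resumes 3 h later

for start, end in find_gaps(ts, timedelta(minutes=5)):
    print("gap from %s to %s" % (start, end))
```

Any gap reported between the moment the connection was blocked and the moment it was restored means store and forward lost data in that window.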
These tests left me with some questions (in order of importance):
1: Why is it the settings on the server that matter? I would expect the server to “block” the client from sending more data when the server is congested.
2: How come it is the first hour of data that survives, and not the last hour? I would expect that when the sender’s buffer is full, it deletes the oldest data to make room for new data, rather than simply dropping new data.
3: What are the units of “Max records” and “Memory buffer size”? They are not explained anywhere in the manual, and neither are their implications. If I increase “Max records”, does that consume memory, or is it just an upper limit? Likewise for “Memory buffer size”. And how well does this scale? Right now I have one client, but at some point I expect maybe 100 clients on the same server. Does increasing these parameters for all clients cost a lot of memory or disk space? What if I had 500 tags?
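To make the scaling question concrete, here is my rough back-of-envelope estimate of how many history records a single 8-hour outage could generate with my tag setup (the split between fast and slow tags is my own guess at my configuration, not a measured number):

```python
# Rough estimate of records buffered during one outage.
# Assumed tag mix (my guess, not from any documentation):
fast_tags = 5        # tags updating ~once per second
slow_tags = 65       # tags updating ~once per hour
outage_hours = 8

records = fast_tags * 3600 * outage_hours + slow_tags * outage_hours
print(records)  # -> 144520
```

So even a handful of once-per-second tags dominate, and an 8-hour outage easily produces far more records than the buffer sizes I was experimenting with, which is why I am worried about what these parameters actually cost per client.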
Can anybody answer these? Has anybody played around with the Store and Forward settings to achieve something similar? Or should I drop the idea of using Store and Forward altogether? This thread has some similar questions, this thread suggests not setting “Max records” too high, and finally this thread suggests forgetting Store and Forward and using database replication instead.
Thanks in advance