Using Store and Forward as database replication

Hi all

I have two Ignition Gateways: One sits in USA (Linux, Ubuntu, Java 1.8.0_171, Ignition 7.9.9, MySQL 5.5.54). This is the client. The other sits in Europe (Windows Server 2016, Java 1.8.0_181_b3, Ignition 7.9.9, MySQL 5.7.23). This is the server.

I have around 70 tags. A few of them update maybe once a second, but most of them update maybe once an hour.

I would like to synchronize historical data from the client to the server. I do this by using the “Tag History Splitter” on the client with a database connection to each database, one on the client and one on the server, through the Gateway Network.

For reasons outside my control, the connection might be interrupted for some period of time (hours, days, months), and I am trying to handle this with the store and forward mechanism. However, I am having some trouble setting it up. I did some testing by blocking the connection between client and server for 8 hours, then restoring the connection and looking at the data that was stored on the server:

Test 1: With default parameters in the store and forward engine on both server and client, I only got around 1 hour of data: the first hour after the connection was disrupted, plus the last 5 minutes.

Test 2: I thought the store and forward engine on the client was to blame. Maybe the buffer wasn’t big enough. I tried changing “Max records” to 1000 and “Memory buffer size” to 100,000. It changed nothing.

Test 3: I reverted the settings on the client and changed the settings on the server, again with “Max records” at 1000 and “Memory buffer size” at 100,000. This did it. Now the server seemed to get all the data.

These tests left me with some questions (in order of importance):

1: Why are the settings on the server the ones that matter? I would expect the server to “block” the client from sending more data when the server is congested.

2: Why is it the first hour of data that I see, and not the last hour? I would expect the sender buffer, once full, to delete the oldest data to make room for new data instead of simply discarding new data.

3: What are the units of “Max records” and “Memory buffer size”? They are not explained anywhere in the manual. What are their implications? If I increase “Max records”, does it pre-allocate memory, or is it just an upper limit? Likewise for “Memory buffer size”. And how well does it scale? Right now I have 1 client, but at some point I expect there to be maybe 100 clients on the same server. Will increasing these parameters for all clients use a lot of memory or disk space? What if I had 500 tags?

Can anybody answer these? Has anybody played around with the Store and Forward settings to achieve something similar? Or should I drop the idea of using Store and Forward altogether? This thread has some similar questions, and this thread suggests not setting “Max records” too high. Finally, this thread suggests forgetting Store and Forward and using database replication instead.

Thanks in advance

I can’t directly answer your questions, but have you tried watching some of the videos? I noticed you mentioned the manual but not the training website. There are typically details in the videos that are easily missed when reading through the manuals.

Yeah, I have been through the videos. They don’t say much about this either. They do indicate that the times are in milliseconds, but they do not define the “Max records” setting, for instance, only that it is in records. But what is a record? I thought it might be one change in one tag, but I found some indications in the forums that it might be a number of tags changing together. Generally, I must say the documentation around Store and Forward is inadequate, as others have pointed out.

I see. Like I said, I’m not well versed in this area, but from the outside looking in, a record in a database should mean one row of DB information, so each time the transaction group runs, that should be one record.

I’m sure you’ve already checked out the Configuring Store and Forward manual page, which might help with some of your questions. I could be wrong about some of this, but I’ll try to help based on how I understand things.

  1. I’m pretty sure that what matters here is which gateway has the history provider you’re storing to. The store and forward settings for each database have to be set on the gateway where the history provider actually exists (not in the splitter settings). So, if your splitter and realtime tag providers are on the client, but you want to set up S&F with the history provider on the server, then you would have to change the S&F settings on the server.

  2. So, this has to do with the “Max Records” and “Memory Buffer” settings. (Check the Configuring Store and Forward link above). Basically, S&F fills the max records, then fills the memory buffer, and when those are both full it just drops new data. It’s by design.
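If that description is accurate, the buffer behaves like a bounded queue that rejects new items once full, rather than a ring buffer that evicts the oldest items, which would explain why you saw the first hour of the outage instead of the last. Here is a minimal Python sketch of the difference. This is purely a conceptual model of the two policies, not Ignition’s actual implementation:

```python
from collections import deque

def drop_new(capacity, events):
    """Bounded buffer that ignores new data once full
    (the behavior described above: oldest data survives)."""
    buf = []
    for e in events:
        if len(buf) < capacity:
            buf.append(e)
    return buf

def drop_old(capacity, events):
    """Ring buffer that evicts the oldest data to make room
    (the behavior the original poster expected: newest data survives)."""
    buf = deque(maxlen=capacity)
    for e in events:
        buf.append(e)
    return list(buf)

# Simulate an 8-hour outage with one event per minute and a buffer
# that only holds one hour's worth of data.
events = list(range(8 * 60))                 # minutes 0..479 of the outage
print(drop_new(60, events))                  # keeps minutes 0..59 (first hour)
print(drop_old(60, events))                  # keeps minutes 420..479 (last hour)
```

Under the drop-new policy, the 5 minutes of fresh data seen in Test 1 would simply be whatever arrived after the connection came back and the buffer started draining.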

  3. Again, I could be wrong about this, but I think I remember someone explaining this to me once… and they said that the units of size for both of those is in records, where one record might actually include historical data about several tags. The size of a record is determined by your historical scanclasses. If several tags are on the same scanclass, they can be stored in one record each time the scanclass stores history. So, calculating the physical space associated with any given value for Max Records or Memory Buffer Size can get quite complicated. I definitely recommend testing this explanation before you bank on it.
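Building on that explanation of records, here is a hedged back-of-envelope sizing sketch in Python for the scaling question (question 3). The record count follows from “one record per scan class execution, bundling every tag in that scan class”; the bytes-per-value figure is a pure assumption for illustration, so measure on a real system before relying on any of these numbers:

```python
def buffer_estimate(clients, tags_per_scan_class, scan_classes,
                    stores_per_hour, outage_hours, bytes_per_tag_value=50):
    """Rough sizing: how many records an outage generates, and a ballpark
    of the space they occupy. bytes_per_tag_value is an assumed figure."""
    # One record per scan-class execution, per the explanation above.
    records = scan_classes * stores_per_hour * outage_hours
    # Each record may bundle every tag in its scan class.
    values = records * tags_per_scan_class
    approx_bytes = values * bytes_per_tag_value
    per_client = (records, approx_bytes)
    total = (records * clients, approx_bytes * clients)
    return per_client, total

# Example: 100 clients, 70 tags in a single scan class storing once a
# minute, riding out a 24-hour outage.
per_client, total = buffer_estimate(clients=100, tags_per_scan_class=70,
                                    scan_classes=1, stores_per_hour=60,
                                    outage_hours=24)
print(per_client)   # records and approx. bytes one client must buffer
print(total)        # aggregate records and bytes across all clients
```

The point of the sketch is that “Max records” needed for an outage depends mostly on scan-class execution rate and outage length, not directly on tag count, while the memory and disk footprint scales with tags per record as well.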

I hope someone from Ignition clarifies here. I don’t want to give misinformation… I hope this helps!