Tag history splitter functionality

Hi,

I am trying to use the tag history splitter with two different databases and a primary and standby gateway. Each gateway shares the same hardware as its database. The goal is redundant history storage.

However, when I shut down the primary and restart it, the history is missing for the period the primary was down. My understanding from the manual is that the store and forward system should cache the history for the database that is down, and forward it once that database comes back up.

It doesn’t seem to work this way in my case. I have also noticed on the store and forward status page in the gateway that the history splitter is listed but its activity is shown as unavailable, which looks like a problem, but I can’t find any more information there.

I have also noticed that everything DOES function as I want if I only shut down the primary DB server service and not the primary gateway service.

I’d appreciate it if someone could help me understand how the history splitter is supposed to work and, if my understanding is correct, what may be going wrong.

How is your history splitter set up? Is one connection localhost and the other the IP of the backup? Are they both direct IP addresses?

[quote=“Andrew_P, post:1, topic:19875”]
However, when I shut down the primary, and restart it,
[/quote]

Do you mean that you restarted the computer? If so, then you will probably miss data for roughly the time it took the backup to take over, which is usually a few seconds.

[quote=“Andrew_P, post:1, topic:19875”]
This doesn’t seem to function this way in my case. Also, I have noticed in the store and forward status page in the gateway, that the history splitter is listed but activity is unavailable, which looks like a problem, but I can’t see any more information in there.
[/quote]

The manual states that the store and forward engines reside in each individual provider, not in the tag splitter itself. Check there and see if it has what you are looking for.
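If you want to double-check from the script console whether that data ever made it into either database, you can query the historian for the window the primary was down. A rough sketch, assuming a recent gateway so the system.date functions are available; the tag path and dates are placeholders:

```
# Query the tag historian for the window the primary was down.
# Replace the tag path and dates with your own; these are placeholders.
start = system.date.parse("2017-11-01 10:00:00")  # when the primary went down
end = system.date.parse("2017-11-01 10:30:00")    # when it came back up

results = system.tag.queryTagHistory(
    paths=["[default]Folder/MyHistorizedTag"],
    startDate=start,
    endDate=end,
    returnSize=-1,          # -1 = return values as they changed (raw records)
    returnFormat="Wide",
)

print("rows returned in the window: %d" % results.getRowCount())
```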

I would strongly suggest you look into doing database replication instead of trying to leverage the tag history splitter to achieve redundant history storage.


Hey, thanks for the reply. To answer your questions:

Both are separate direct IP addresses.

I killed the processes for the gateway and the database to simulate a machine shutdown, but I needed the computer itself to stay up so I could VNC into it; I'm working remotely, so I'd have no connection once it goes down. (This machine has two NICs, so it's the only PC I can access on the network; I can't reach the secondary without this connection.) During the time the processes were down (about half an hour), no data could be retrieved once they came back up.

Yes, I agree; I thought this might be the case, but I didn't understand why it is listed under store and forward status at all if it isn't relevant, so I had doubts about my understanding.

I wish I could do this, but I am using PostgreSQL on Windows 10 and there are no reasonable solutions for managing failover, even though it supports replication. I could attempt to roll my own, but it would be highly risky.

So I was hoping to rely on store and forward. The requirements aren't super strict and I can handle a little data loss, but it just doesn't seem to capture anything at all when the primary machine goes down. It probably does capture it on the backup server, but once my primary server is back up, I can't see that data.
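One way I could confirm whether the backup actually captured anything would be to count historian rows in both databases directly for that half hour. A rough sketch; the hostnames, credentials, and partition table name are placeholders (the real partition names are listed in the sqlth_partitions table):

```
# Rough check: count historian rows on both databases for the outage window.
# Hostnames, credentials, and the partition table name are placeholders.
import psycopg2

WINDOW_START_MS = 1509530400000  # outage start, epoch milliseconds
WINDOW_END_MS = 1509532200000    # outage end, epoch milliseconds

for host in ("192.168.1.10", "192.168.1.11"):   # primary and backup DB servers
    conn = psycopg2.connect(host=host, dbname="ignition",
                            user="postgres", password="password")
    cur = conn.cursor()
    cur.execute(
        "SELECT count(*) FROM sqlt_data_1_2017_11 WHERE t_stamp BETWEEN %s AND %s",
        (WINDOW_START_MS, WINDOW_END_MS),
    )
    print("%s: %d rows in the outage window" % (host, cur.fetchone()[0]))
    conn.close()
```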

I'm not familiar with Postgres replication, but if it can do master-master replication, then just point a single database connection at localhost and Postgres should do the rest of the work to keep the databases in sync.

Also, is your backup gateway running in cold mode or warm mode? If it is running in warm mode, Ignition would be writing tag values to both databases at the same time, which would double the number of values written under master-master replication.

Hope this helps.


PostgreSQL does not do master-master replication out of the box. Its base replication functionality is a master with any number of hot-standby/load-balancing slaves. The master can be configured to wait for slaves to commit before reporting a transaction as committed. Slaves can be chained instead of starred to tailor this behavior. A slave can be promoted to master on the fly, and slaves can re-sync on the fly after a comms disruption.
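For checking which node is currently the master and whether a standby is keeping up, PostgreSQL exposes pg_is_in_recovery() and the pg_stat_replication view. A minimal sketch, assuming psycopg2 and placeholder connection details:

```
# Check replication roles and standby status on a PostgreSQL pair.
# Hostnames and credentials are placeholders.
import psycopg2

def check_node(host):
    conn = psycopg2.connect(host=host, dbname="postgres",
                            user="postgres", password="password")
    cur = conn.cursor()

    # True on a hot-standby slave, False on the master.
    cur.execute("SELECT pg_is_in_recovery()")
    in_recovery = cur.fetchone()[0]
    print("%s is %s" % (host, "a standby" if in_recovery else "the master"))

    if not in_recovery:
        # On the master, pg_stat_replication lists each connected standby,
        # its streaming state, and whether its commits are sync or async.
        cur.execute("SELECT application_name, state, sync_state FROM pg_stat_replication")
        for name, state, sync_state in cur.fetchall():
            print("  standby %s: state=%s, sync_state=%s" % (name, state, sync_state))

    conn.close()

for host in ("192.168.1.10", "192.168.1.11"):
    check_node(host)
```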

EnterpriseDB, the commercial sponsor of PostgreSQL, has extended the base to provide multi-master replication, with some support for non-PostgreSQL participants. The EnterpriseDB solution appears to be the path most users choose.


Wow, that looks good; I had no idea this was out there. My only concern is licensing: is it free to use in a SCADA application? I can't see where it says anything about that, unless you are using their cloud services.

It is not free. PostgreSQL itself is entirely free, but EnterpriseDB sells commercial add-ons and enhancements.