Server(s) redundancy - Cluster/Failover/High Availability/Compellent

I’m part of a project where the customer wants to ensure no data is lost. So, the following system was proposed:

I have never dealt with a system set up like this. I was hoping people might chime in on whether there is anything wrong with doing it this way, especially for a control system (maybe this is great for an office building - but what about automation??). For starters, does anyone know if this will even work? They are leaning toward this approach since they didn’t like relying on the tag splitter, as it doesn’t “catch” everything. They didn’t want clustering due to a single point of failure; I’m not sure if that is true or not. I think replication was ruled out because it isn’t dynamic, requiring the tables to be set up ahead of time. So, if that’s true, then the historian wouldn’t work either.

This is still in the design stage, so we have time to make changes if necessary. Appreciate any/all help.

I haven’t used Ignition on any systems at this level, so I can’t speak to how well it will work there, but this looks fairly similar to the system we used with Honeywell Experion at a previous role. It worked out well for failing over. We were storing data to a RAID 5 SAN with two application servers that could fail over if needed, though I don’t think we had a dedicated controller; I believe the Experion software was entirely responsible for that failover.

Sorry, I can’t speak to the implementation very much; I inherited the system.

That describes the kind of infrastructure needed when deploying a clustered database product (all the major commercial databases offer one). You will need a clustered DB product in your spec. Ignition would indeed make a connection to a single database; all of the HA functionality would be invisible to Ignition. In such a setup, consider buffering the most critical data in the PLCs and transferring it from a PLC FIFO to the DB with store-and-forward disabled. This was discussed in a recent thread; a minimal sketch of the idea is below.
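Something like this, as a gateway timer script (the tag paths, handshake tag, table, and connection name are all hypothetical; adjust for however your PLC implements its FIFO):

```python
# Gateway timer script sketch: drain one staged record from a PLC FIFO
# into the database, and acknowledge to the PLC only after the DB commit.
# All tag paths, the table, and the connection name are made up here.

logger = system.util.getLogger("PlcFifoTransfer")

vals = system.tag.readBlocking([
    "[PLC]FIFO/RecordReady",  # PLC sets this when a record is staged
    "[PLC]FIFO/Timestamp",
    "[PLC]FIFO/Value",
])

if vals[0].value:
    try:
        # Direct insert against the HA database connection. Store-and-forward
        # is not involved, so success here means the record truly committed.
        system.db.runPrepUpdate(
            "INSERT INTO critical_data (t_stamp, val) VALUES (?, ?)",
            [vals[1].value, vals[2].value],
            "HA_Cluster",  # hypothetical connection to the clustered DB
        )
        # Only now tell the PLC it may drop this record and advance the FIFO.
        system.tag.writeBlocking(["[PLC]FIFO/Ack"], [True])
    except:
        # Leave the record in the PLC FIFO; it will be retried next tick.
        logger.warn("DB insert failed; record stays in the PLC FIFO")
```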

Curious why store-and-forward wouldn’t be adequate here?

The concept of buffering it on the PLC side is so that the line can proceed if the connection to Ignition is lost. Store-and-forward wouldn’t capture anything in that case, since the gateway can’t see the device.

Ok, I understand that part; I was curious if there was another point to it. I’ve done that before, where I queue data, but in this project no one piece of data is less important than the next, and I can’t queue everything; there is just too much. Thanks for the reply.

The store-and-forward buffer becomes a single point of failure, and any data stuck in the buffer is subject to loss if the Ignition gateway crashes. With store-and-forward enabled, a transaction group’s success/fail handshake only reflects that a record was placed in the S&F buffer. With it disabled, the handshake reflects success or failure at the DB level, which is the HA layer in such a system. In this case you don’t want the PLC dropping a record from its FIFO until it is truly in the DB (that’s why the sketch above only acknowledges after the insert succeeds).

To clarify one point on the data: what they prize most is data entered from the UI/operator stations. Data from the PLC, like pressures, levels, etc., while still important, isn’t as critical. A lot of decisions will be made from data that operators enter, and it’s that data that matters most right now. So store-and-forward and a PLC queue really won’t apply much here. If the system is down, they can’t enter anything, so they will just have to wait. What’s important is that data already entered must be the same across the primary and backup in case we fail over. Then if they enter data on the backup, that data must exist on the primary when we switch back, and vice versa. If we lose “historian” data during this process, they can live with that. Hope that made sense.

With that constraint, you should be fine with a commercial HA clustered DB on top of the infrastructure described.
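For the operator entries themselves, the key is to write through the single HA connection and treat the insert’s success as the commit signal; replication to the other node is then the cluster’s responsibility. A rough sketch, assuming a Vision button script, with the table, columns, and connection name all hypothetical:

```python
# Vision button script sketch: save an operator entry against the HA
# connection and only confirm it to the operator once the insert succeeds.
# Replicating the committed row to the other node is the cluster's job.

def saveOperatorEntry(operator, comment):
    try:
        rows = system.db.runPrepUpdate(
            "INSERT INTO operator_entries (operator, comment, t_stamp) "
            "VALUES (?, ?, CURRENT_TIMESTAMP)",
            [operator, comment],
            "HA_Cluster",  # hypothetical clustered DB connection
        )
        return rows == 1  # committed on the cluster
    except:
        # DB unreachable: tell the operator to retry rather than silently
        # queuing the entry somewhere it could be lost in a failover.
        system.gui.errorBox("Entry not saved - database unavailable. Please retry.")
        return False
```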