Deployment advice

I’d like some advice on deployment.

Currently I have a mission-critical setup running one project that has several department-specific areas to it. (Mining business.)

The mission critical setup is based in our mill with both servers being located close to each other. This is useful as we can take one down for maintenance and all the clients continue to function as they should.

One of the departments (water treatment) is based in another building about a half-hour drive from the mill. They use Ignition, in conjunction with some other HMIs, to control the plant they are based at, as well as several facilities in and around the mill.

The problem is that if the fiber link goes down between that building and the mill, they would have to rely on local HMIs (i.e. PanelViews) to control the plant until we could shift one of the mission-critical servers down there for them to run Ignition off (or install a Panel Edition in that building, though this would prevent use of the database functionality). Shifting one of the mission-critical servers would mean SCADA outages every time we needed to do maintenance while the fiber link was down.

I would like for the same project to remain accessible on both sides of the link if it breaks down and would appreciate your thoughts on the best way to do this. I’m guessing the answer may lie in retargeting of some sort.
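For what it's worth, retargeting in Ignition is done from client scripting with `system.util.retarget(project, address)`, which switches a running client over to a project on another gateway. A minimal sketch of how a client could pick its target based on link status follows; the project name and gateway addresses are hypothetical placeholders, and only the helper function is plain Python (the retarget call itself only exists inside a live client, so it is shown as a comment):

```python
# system.util.retarget(project, address) is Ignition's client-side
# retargeting call; it only runs inside a live client, so the call
# itself appears below as a comment.  All names are placeholders.

def retarget_args(link_up):
    """Build the (project, gateway address) pair to hand to retarget."""
    if link_up:
        return ("WaterTreatment", "mill-gateway:8088")
    return ("WaterTreatment", "wt-local-gateway:8088")

# In a client event or timer script you would then do something like:
#   project, address = retarget_args(link_up)
#   system.util.retarget(project, address)
```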

When the system is working they should be able to view alarms from all facilities on one screen as well as trends from these different facilities which I am not sure how to do with retargeting.

Thoughts?

The user manual says Ignition supports two node redundancy, can this be expanded to three?

Idle thoughts off the top of my head…

I would have thought that each server in your current redundant setup would be located in different locations. If you have a fiber between the two buildings, taking one down for update/maintenance shouldn’t affect the other building. Only when the fiber breaks.

Given that you have two sites, each site gathering different data, but you have to be able to see the data in both sites, I would look at two systems (one in each location) and a clustered/shared database.

Or run another fiber, down a different route of course (all it takes is one dump truck backing into the one pole that both fibers are on…).

Thanks Robert, the problem with separating the servers is that there is a third location (underground paste plant) where communication could also be lost.

I’d like to keep the project as centralised as possible (i.e. one project) so that we don’t get into the practice of doing everything twice (adding the same tags to two different SCADA projects) and to prevent divergence (ending up with different tag names referencing the same addresses, etc.). We could copy the project out once changes are made, as would have to be done with redundant RSView32 servers, but that is a pain too.

I appreciate the idea though!

Is it possible to have screens from different servers in the same project?

[quote=“AlexW”]

Is it possible to have screens from different servers in the same project?[/quote]

You mean like this feature?

[quote=“AlexW”]Thanks Robert, the problem with separating the servers is that there is a third location (underground paste plant) where communication could also be lost.

I’d like to keep the project as centralised as possible (i.e. one project) so that we don’t get into the practice of doing everything twice (adding the same tags to two different SCADA projects) and to prevent divergence (ending up with different tag names referencing the same addresses, etc.). We could copy the project out once changes are made, as would have to be done with redundant RSView32 servers, but that is a pain too.

I appreciate the idea though!

[/quote]

So the main (only?) project collects data from three locations? So if the cable is broken, no data is collected from that site?

As for tag names, if you use SQLTags, they can be shared between gateways. That was where I was going with the DB cluster. Put one node of the cluster at each site. Each site has its own gateway that collects the data for that site and updates its node. Through the power and magic of clustering, each gateway can ‘see’ the data at the other sites. If a cable is broken, the site becomes isolated but local data is still collected and stored. When comms are restored, everything syncs up again.

Rather than one project, I would create different projects for each department and use retargeting to view stuff from a different site/department.

All data is visible to all gateways but only data that is ‘interesting’ to a department/user is used for their specific project.

Never mind, Robert covered clustered databases, I see.

You could also use replication instead of clustering: lower overhead, at the cost of the copies not necessarily being exact duplicates.

Mind you, with replication all gateways have to write to a single (main) database. If the link went down, the cache system would hold the data until the link was restored.

You could read from the local copy. ( :scratch: how to work failover into this scheme?) I think this could be made to work with what’s available today. To properly support* this would require a bunch of work on Ignition.

*Proper support would mean having local and central databases, with local data replicated to the central database, central data of interest to the local station replicated to the local DB, and the option of a redundant fallback database, which could be a different local DB at a different site. (I have done this for a C++ (ugh) SCADA app that used Oracle.)
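The caching behaviour described above (write through while the link is up, queue locally while it is down, flush on restore) can be sketched as a toy store-and-forward queue. Ignition has its own store-and-forward engine that does this for real; this is just an illustration of the idea, with the `send` callable standing in for a write to the central database:

```python
from collections import deque

class StoreAndForward:
    """Toy sketch of store-and-forward caching: writes go straight to
    the central DB while the link is up; while it is down they queue
    locally and are flushed, in order, when comms are restored."""

    def __init__(self, send):
        self.send = send       # callable that writes one row to the central DB
        self.queue = deque()   # local cache used while the link is down
        self.link_up = True

    def record(self, row):
        if self.link_up:
            try:
                self.send(row)
                return
            except IOError:
                self.link_up = False   # link just dropped; start caching
        self.queue.append(row)

    def link_restored(self):
        self.link_up = True
        while self.queue:              # flush cached rows in order
            self.send(self.queue.popleft())
```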

Agreed, that’s a head scratcher.

Maybe a virtualized environment with clustered databases and network RAID. Put portions of the cluster at each location so it can be brought up using HA from anywhere, and let the network RAID handle the redundancy? Gets pricey quick…

This is currently the case.

If I did use retargeting then I would still need two copies of the same project: one at the remote plant on another gateway, and one at the central mill where the rest of their systems are located. This is not too bad a setup, but it does involve maintaining duplicate projects and ensuring that they remain the same.

This would be where we keep our mission-critical setup at the mill and purchase another gateway to reside at the remote plant(s).

We’d also have to educate the operators on what to do if the comms link to the primary project fails, since there will be no automatic failover mechanism (or is there a way of setting this up for non-redundant systems?).
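As far as I know there is no built-in failover between independent (non-redundant) gateways, but a crude client-side liveness check is easy to script and could drive a retarget or at least an operator alert. A sketch, with the probe injectable so it can be exercised without a real gateway (hostnames and the port are placeholders; 8088 is just Ignition's usual default web port):

```python
import socket

def gateway_reachable(host, port=8088, timeout=2.0,
                      connect=socket.create_connection):
    """Crude liveness probe: True if a TCP connection to the gateway
    opens within the timeout.  Host/port values are placeholders."""
    try:
        connect((host, port), timeout).close()
        return True
    except OSError:
        return False

def failover_target(primary, backup, reachable=gateway_reachable):
    """Pick which gateway address the client should (re)target."""
    return primary if reachable(primary) else backup
```

A client timer script could run `failover_target` periodically and prompt the operator (or retarget) when the result changes.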

So you can share the same tag database between projects? I didn’t know this.

[quote=“AlexW”]
So you can share the same tag database between projects? I didn’t know this.[/quote]
As long as it’s a DBTag. You can’t share memory tags