FSQL Redundancy

Hi,

We are proposing a “redundancy” system for one application, and I have the following questions. By the way, I read your news article about redundancy; it was really good.

Our application is a batch-based system with confirmation back to the PLC when we have logged the information. So my question is: do you recommend using the built-in FSQL redundancy, or do you think it would be better to use the feedback of the logged information and have different triggers for the secondary FSQL?

How does the FSQL redundancy work once it fails over to the secondary? Does it automatically fail back to the primary? I am just thinking through the actions I will have to take to copy the data back to the master once it is back up and running.

Thanks

Hi Julio,

As you probably read, the difficulty in redundancy often comes down to getting the database set up.

In terms of setting up FactorySQL, you have a number of choices that can affect how the system acts. On the redundancy side, you can assign each node a rank so that you have one “preferred master” that runs whenever possible, taking over again when it comes back up, or you can leave the nodes unranked, in which case the best/longest-running node will be master. After a failure, the secondary will start executing, and will continue executing until it goes down.
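If it helps to picture the two modes, here is a rough sketch of that election logic in plain Python. It is only an illustration of the behaviour described above, not FactorySQL's actual code, and the node fields are made up:

```python
# Illustration only: how a "preferred master" rank vs. an unranked setup
# decides which live node should be master.

def pick_master(nodes):
    """nodes: list of dicts like {"name": "A", "rank": 1, "uptime": 3600, "alive": True}.
    A lower rank means more preferred; rank=None means the node is unranked."""
    alive = [n for n in nodes if n["alive"]]
    if not alive:
        return None
    ranked = [n for n in alive if n["rank"] is not None]
    if ranked:
        # Ranked mode: the best-ranked live node always wins, so the
        # preferred master takes over again as soon as it comes back up.
        return min(ranked, key=lambda n: n["rank"])["name"]
    # Unranked mode: the longest-running live node stays master,
    # so there is no automatic fail-back when the original returns.
    return max(alive, key=lambda n: n["uptime"])["name"]

# Example: the primary (rank 1) comes back after a failure and immediately
# reclaims mastership from the secondary (rank 2).
print(pick_master([
    {"name": "primary", "rank": 1, "uptime": 10, "alive": True},
    {"name": "secondary", "rank": 2, "uptime": 5000, "alive": True},
]))  # -> primary
```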

Usually when people set up redundant systems, they are focused on keeping control working. This usually only requires one-way database mirroring instead of full clustering. Then they put history on a different data connection, which can write to the data cache while that machine is down. That way they don’t have to worry about synchronizing the history back to the main DB: when it comes back up, the data cache will be written.
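To illustrate the store-and-forward idea, here is a generic sketch in Python with SQLite as the local cache. This is not how the FSQL data cache is actually implemented; the table and helper names are placeholders:

```python
# Generic store-and-forward sketch: history writes fall back to a local
# cache while the main DB is down, then get flushed when it returns.

import sqlite3

def log_history(main_db_insert, row, cache_path="history_cache.db"):
    """Try the main history connection first; on failure, park the row locally."""
    cache = sqlite3.connect(cache_path)
    cache.execute("CREATE TABLE IF NOT EXISTS pending (t_stamp TEXT, tag TEXT, value REAL)")
    try:
        main_db_insert(row)                                        # normal path: straight to the main DB
    except Exception:
        cache.execute("INSERT INTO pending VALUES (?, ?, ?)", row)  # main DB down: cache it
        cache.commit()
    cache.close()

def flush_cache(main_db_insert, cache_path="history_cache.db"):
    """Once the main DB is back, push the cached rows and clear them."""
    cache = sqlite3.connect(cache_path)
    cache.execute("CREATE TABLE IF NOT EXISTS pending (t_stamp TEXT, tag TEXT, value REAL)")
    for row in cache.execute("SELECT t_stamp, tag, value FROM pending").fetchall():
        main_db_insert(row)
    cache.execute("DELETE FROM pending")
    cache.commit()
    cache.close()
```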

Your idea of using separate triggers would probably work, but I’m not sure if it would really solve any problems.

Maybe I should ask for a bit more information as to what you were planning to do with the database: a mirrored pair? Clustered? Single db and datacaching as a backup? Located on one of the FSQL nodes, or a separate machine?

The particular settings for redundancy aren’t difficult, but there are a huge number of possible configurations.

Regards,

One clustering option that I have been looking at is the open-source continuent.org software. I plan to build VMware and Xen guests once Tungsten is released on August 8th. I know this will work for MySQL, but it should also work for other DBs. Once they are built, I will post a link so other IA users can utilize them. For now, we are just running a replicated approach with manual recovery, and we can schedule a re-sync of the DBs if we lose a database for a while.
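For what it's worth, the re-sync itself is nothing fancy. It roughly follows the outline below; this is just a sketch that assumes stock MySQL master/slave replication, and the hosts and credentials are placeholders:

```python
# Rough outline of the scheduled re-sync when a slave has been out for a while.
# Assumes plain MySQL master/slave replication; host names and credentials
# below are placeholders.

import subprocess

MASTER, SLAVE, USER, PW = "db-master", "db-slave", "repl", "secret"

def run(cmd):
    subprocess.check_call(cmd, shell=True)

def resync_slave():
    # 1. Stop replication on the slave while we rebuild it.
    run('mysql -h %s -u %s -p%s -e "STOP SLAVE"' % (SLAVE, USER, PW))
    # 2. Take a consistent dump of the master; --master-data embeds a
    #    CHANGE MASTER TO statement with the binlog coordinates to resume from.
    run("mysqldump -h %s -u %s -p%s --single-transaction --master-data "
        "--all-databases > /tmp/master.sql" % (MASTER, USER, PW))
    # 3. Reload the dump on the slave (this also repositions its replication).
    run("mysql -h %s -u %s -p%s < /tmp/master.sql" % (SLAVE, USER, PW))
    # 4. Start catching up from the master again.
    run('mysql -h %s -u %s -p%s -e "START SLAVE"' % (SLAVE, USER, PW))

if __name__ == "__main__":
    resync_slave()
```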

Kyle,

Thanks for the info on continuent.org. It looks really good and you got me excited reading it, but :frowning: it looks like they don't support Windows. Keep us informed of your tests. I am getting a new server at the office with VMware ESXi, so I will soon have more testing hardware than just my laptop with VMware Workstation running VMs on an external drive.

[quote=“Colby.Clegg”]Hi Julio,

As you probably read, the difficulty in redundancy often comes down to getting the database set up.

In terms of setting up FactorySQL, you have a number of choices that can affect how the system acts. On the redundancy side, you can assign each node a rank so that you have one “preferred master” that runs whenever possible, taking over again when it comes back up, or you can leave the nodes unranked, in which case the best/longest-running node will be master. After a failure, the secondary will start executing, and will continue executing until it goes down.

Usually when people set up redundant systems, they are focused on keeping control working. This usually only requires one-way database mirroring instead of full clustering. Then they put history on a different data connection, which can write to the data cache while that machine is down. That way they don’t have to worry about synchronizing the history back to the main DB: when it comes back up, the data cache will be written.

Your idea of using separate triggers would probably work, but I’m not sure if it would really solve any problems.

Maybe I should ask for a bit more information as to what you were planning to do with the database: a mirrored pair? Clustered? Single db and datacaching as a backup? Located on one of the FSQL nodes, or a separate machine?

The particular settings for redundancy aren’t difficult, but there are a huge number of possible configurations.

Regards,[/quote]

This is what I am thinking. A little background: our system buffers data on the PLC, and we keep that data in the PLC until we get confirmation from the DB, via the FSQL handshake, that we logged the data. So this is our first requirement for the replication: if the system fails, they will be able to continue to run without worrying about the PLC buffers filling up, because the redundancy keeps the logging going.
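In rough terms, the handshake works like the sketch below. The read_plc_tag/write_plc_tag helpers are just stand-ins for the real OPC interface, the table and tag names are made up for illustration, and it assumes a MySQL-style DB connection:

```python
# Illustrative handshake: log one buffered batch record, then acknowledge
# it back to the PLC so the PLC can drop it from its buffer.

def log_batch(db_conn, read_plc_tag, write_plc_tag):
    """Pull one batch record out of the PLC buffer and confirm it back."""
    batch = read_plc_tag("BatchBuffer/Head")          # next buffered record, or None
    if batch is None:
        return                                        # nothing to log
    cur = db_conn.cursor()
    cur.execute("INSERT INTO batch_log (batch_id, qty, t_stamp) VALUES (%s, %s, NOW())",
                (batch["id"], batch["qty"]))
    db_conn.commit()                                  # only confirm after the commit succeeds
    write_plc_tag("BatchBuffer/AckId", batch["id"])   # PLC drops the record from its buffer
```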

So it is good that I can set up FSQL not to switch back automatically, since I am sure I am going to have to write some manual procedures to sync the data from the slave to the master. I know I have a lot of reading to do on how to accomplish this, since I know syncing the slave back to the master could be an issue.
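My first rough idea for that manual procedure is something like the sketch below. The table and column names are placeholders, and it assumes MySQL and that the rows logged on the slave carry a unique batch_id, so the copy can be re-run safely:

```python
# Sketch of a slave-to-master catch-up: copy rows that were logged on the
# slave while the master was down back to the master.

def copy_missing_rows(slave_conn, master_conn, last_good_id):
    """Copy rows with batch_id greater than the last one the master saw."""
    s = slave_conn.cursor()
    m = master_conn.cursor()
    s.execute("SELECT batch_id, qty, t_stamp FROM batch_log WHERE batch_id > %s",
              (last_good_id,))
    for row in s.fetchall():
        # INSERT IGNORE skips anything the master already has, so the
        # procedure is safe to re-run.
        m.execute("INSERT IGNORE INTO batch_log (batch_id, qty, t_stamp) "
                  "VALUES (%s, %s, %s)", row)
    master_conn.commit()
```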

I don’t think the data caching will work for us, since our app writes status information back to the PLC after each group trigger.

I know the ideal would be a cluster, but the customer doesn’t want to pay $$$. Kyle, your software solution reads really well, but I am running on a Windows-based system.

Below is my first draft of the system architecture.

[quote]Kyle,

Thanks for the info on continuent.org. It looks really good and you got me excited reading it, but :frowning: it looks like they don't support Windows. Keep us informed of your tests. I am getting a new server at the office with VMware ESXi, so I will soon have more testing hardware than just my laptop with VMware Workstation running VMs on an external drive.[/quote]
The actual middleware runs on Linux; we use SLES 10.2, but openSUSE 10.3 or 11 should work fine. The actual DB can be on any OS.