Redundancy questions

In the system we are building, we want to use redundant servers, so that if one goes on the fritz the other will pick up where the first left off. We also want to use redundant SQL, so that if the connection to the SQL server is lost for any reason, the data will have somewhere to go until the connection is re-established.

My question is: what, in your opinion, would be the best scenario for redundancy on the SQL side?

I believe that our SQL is housed on two servers. If that is the case, would it be best to log data to one and, if that connection is lost, log to the other? Or would it be better to log to the main server and use the MySQL on the IA server as the redundant one?

Thanks and have a great day.

Well, there are really two things to consider here.

First, the network connection to the database. If this goes down temporarily, FactorySQL will automatically log to a local data cache, and then send that data over to the main database when the connection comes back up. So, you don’t need to worry too much about that kind of failure.
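
Conceptually, that store-and-forward behavior looks something like this (just a rough Python sketch to illustrate the idea, not FactorySQL's actual code; the cache file name and the db object are made up):

import os

CACHE_FILE = "forward_cache.sql"  # hypothetical local cache file

def log_row(db, query):
    """Try to insert directly; if the database is unreachable, append the query to the local cache."""
    try:
        db.execute(query)
    except ConnectionError:
        with open(CACHE_FILE, "a") as f:
            f.write(query + "\n")

def flush_cache(db):
    """Once the connection is back, replay the cached queries in order, then clear the cache."""
    if not os.path.exists(CACHE_FILE):
        return
    with open(CACHE_FILE) as f:
        for query in f:
            db.execute(query.strip())
    os.remove(CACHE_FILE)

Here db is assumed to be any connection object with an execute() method; the point is just that nothing is lost while the network is down, and the cached rows land in the main database afterward.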

The kind of failure you need to worry about is if the database machine crashes for some reason. By far the best solution for this is to use a highly available database, like the High Availability feature in SQL Server 2005 (microsoft.com/sql/technologi … fault.mspx).

If for some reason you don’t want to use SQL Server’s built-in high availability, and want two independent side-by-side SQL Server instances, you can have FactorySQL log to an aggregate data connection. This is a data connection that will fail over to a secondary one if the first one becomes unavailable. The problem with this approach is that in the event of a failure, your history is going to be fractured across two servers.
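
The failover idea behind an aggregate connection is roughly this (illustrative sketch only, not how FactorySQL implements it internally):

def execute_with_failover(query, primary, secondary):
    """Run the query against the primary connection; if it is unavailable, fail over to the secondary.
    This is where the downside shows up: rows written during an outage end up on the secondary
    server, so the history gets split across the two databases."""
    try:
        return primary.execute(query)
    except ConnectionError:
        return secondary.execute(query)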

Hope this helps,

If you don't mind my asking, how big is this cache that it will log to? I ask because when this system is up and running at full scale it will be logging up to 50 machines, and each machine will have approximately 50 tags being saved in the history. So the worst-case scenario would be 2,500 tags per fault at the machine level. The tags range across DINT, INT, Boolean, and String. If we know how much we can save in that cache, we will know the best approach to proceed with.

I know for a fact that our IT department is getting (or may by now have) SQL Server 2005, so I will tell them about the feature you mention as well, since that is their territory.

Thanks for the reply.

[quote=“Carl.Gould”]Well, there are really two things to consider here.

First, the network connection to the database. If this goes down temporarily, FactorySQL will automatically log to a local data cache, and then send that data over to the main database when the connection comes back up. So, you don’t need to worry too much about that kind of failure.

The kind of failure you need to worry about is if the database machine crashes for some reason. By far the best solution for this is to use a highly available database, like the High Availability feature in SQL Server 2005 (microsoft.com/sql/technologi … fault.mspx).

If for some reason you don’t want to use SQL Server’s built-in high availability, and want two independent side-by-side SQL Server instances, you can have FactorySQL log to an aggregate data connection. This is a data connection that will fail over to a secondary one if the first one becomes unavailable. The problem with this approach is that in the event of a failure, your history is going to be fractured across two servers.

Hope this helps,[/quote]

The cache works similarly to how many SQL database backups work: they just store the INSERT queries as text. I’ll look into whether FactorySQL compresses it. You’ll see a lot of very similar-looking lines like this:

INSERT INTO compressor_history (t_stamp, temp, disch_press, alarm) VALUES ('2007-08-21 07:31:20', 124, 17, 0);

It’s not the most efficient format for data storage, but it’s a cache. What’s your worst-case time for the PC and PLC to be up with the network down? The queries have already been formatted for the particular SQL database, so once a query runs successfully it is deleted from the cache. You can easily do a back-of-the-envelope calculation for cache capacity. This example logs a date/time and 3 data points. Suppose each statement takes about 100 bytes; then 1 GB of hard drive space should hold about 10 million entries. Wider tables should yield better results per tag. You can calculate cache time from your update rate, hard drive size, and number of data points. I’d be surprised if it came out to a scale of less than days.
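
For example, plugging some round numbers into that estimate (the row size and one-row-per-second update rate below are assumptions for illustration, not measurements of your system):

# Rough cache-capacity estimate; all numbers are illustrative assumptions.
bytes_per_row   = 100                 # approximate size of one cached INSERT statement
cache_space     = 1 * 1024**3         # 1 GB of hard drive space set aside for the cache
machines        = 50                  # from the worst case described above
rows_per_second = machines * 1        # assume each machine logs one row per second

max_rows = cache_space / bytes_per_row            # about 10 million cached entries
seconds  = max_rows / rows_per_second
print(f"~{max_rows:,.0f} rows, or roughly {seconds / 3600:.0f} hours of buffering")
# prints: ~10,737,418 rows, or roughly 60 hours of buffering

With those assumptions you would get a couple of days of buffering out of a single gigabyte, which is why I’d expect the cache time to come out on a scale of days.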

Better yet, give it a shot. Use a serial PLC and unplug the network connection (or use a database on a remote computer and disconnect that). You can get a feel for how quickly the cache is growing.