If you don't mind my asking, how big is the cache that it will log to? I ask because when this system is up and running at full scale it will be logging up to 50 machines, and each machine will have approximately 50 tags being saved in the history. So, worst case, that's 2500 tags per fault at the machine level. The tags are a mix of DINT, INT, Boolean, and String types. If we know how much data that cache can hold, we will know which approach is best to proceed with.
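To put a rough number on the worst case above, here is a back-of-envelope sizing sketch. The per-value byte sizes and row overhead are assumptions for illustration (common PLC sizes: DINT = 4 bytes, INT = 2, BOOL = 1, STRING = 82), not anything FactorySQL actually guarantees:

```python
# Hypothetical per-value sizes in bytes -- illustrative assumptions only.
TAG_SIZES = {"DINT": 4, "INT": 2, "BOOL": 1, "STRING": 82}

machines = 50
tags_per_machine = 50
total_tags = machines * tags_per_machine  # 2500 tags per fault

# Assume an even mix of the four types, plus ~20 bytes of per-row
# overhead (timestamp, tag id) -- again, just an estimate.
avg_value_size = sum(TAG_SIZES.values()) / len(TAG_SIZES)
row_overhead = 20
bytes_per_fault = total_tags * (avg_value_size + row_overhead)

print(f"{total_tags} tags, ~{bytes_per_fault / 1024:.0f} KiB per fault")
```

Even under these assumptions a single fault is only on the order of 100 KiB, so the real question is how many faults could pile up in the cache during an outage.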
I know for a fact that our IT department has, or may by now have, SQL Server 2005, so I will tell them about the feature you mention as well, since that is their territory.
Thanks for the reply.
[quote=“Carl.Gould”]Well, there are really two things to consider here.
First, the network connection to the database. If this goes down temporarily, FactorySQL will automatically log to a local data cache, and then send that data over to the main database when the connection goes back up. So, you don’t need to worry too much about that kind of failure.
The kind of failure you need to worry about is if the database machine crashes for some reason. By far the best solution for this is to use a highly available database, like the High Availability feature in SQL Server 2005 (microsoft.com/sql/technologi … fault.mspx).
If for some reason you don’t want to use SQL Server’s built in high availability, and want two independent side-by-side SQL Server instances, you can have FactorySQL logging to an aggregate data connection. This is a data connection that will fail over to a secondary one if the first one becomes unavailable. The problem with this approach is that in the event of a failure, your history is going to be fractured across two servers.
Hope this helps,[/quote]
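The aggregate (failover) connection described in the quote can be sketched roughly like this. This is a hypothetical wrapper to illustrate the concept, not FactorySQL's actual implementation; `connect_primary` and `connect_secondary` are placeholders for whatever real connection factories would be used:

```python
# Minimal sketch of a failover ("aggregate") connection: try the primary
# database first, and fall back to the secondary if it is unreachable.
class AggregateConnection:
    def __init__(self, connect_primary, connect_secondary):
        # Ordered list of connection factories: primary first.
        self._factories = [connect_primary, connect_secondary]

    def execute(self, run_query):
        last_error = None
        for connect in self._factories:
            try:
                return run_query(connect())
            except ConnectionError as exc:
                last_error = exc  # this server is down -> try the next one
        raise last_error  # both servers unavailable


# Usage sketch: while the primary is down, writes land on the secondary --
# which is exactly why the history ends up fractured across two servers.
def primary_down():
    raise ConnectionError("primary unreachable")

agg = AggregateConnection(primary_down, lambda: "secondary-conn")
result = agg.execute(lambda conn: f"logged via {conn}")
```

The fracturing problem follows directly from the design: each row goes to whichever server was reachable at write time, so reassembling a complete history later means merging both databases.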