Typical Database Size

I already know the answer I'm going to get to this question, but I told our IT department that I'd ask, so I apologize ahead of time for the dumb question… :wink:

We run a clustered MS SQL Server instance at our plant, and it's usually a good idea to rough out a database size ahead of time for new projects so the DBAs can assign the appropriate amount of hard drive space to each database. That being said, is there some kind of standard way to figure out a starting database size for a project with about 1,000 historical tags, mostly real values, logging at about one value per second? I'd rather oversize the database right off the bat, but I don't want to give them some ridiculously huge number, and since I'm not a database guy I don't know what's reasonable. Any advice would be much appreciated. Thanks!

As you can probably guess, this topic is difficult to answer outright. Varying factors such as tag quantity, logging frequency, partitioning, etc. can affect the storage requirements. According to Ignition's Server Sizing and Architecture Guide (https://s3.amazonaws.com/files.inductiveautomation.com/s3fs-production/test_folder/Ignition%20Server%20Sizing%20and%20Architecture%20Guide_1.pdf?VersionId=kDUhFefNXtqCORpqOzk2qQSLitTDG6bL):

Medium Historian

- 4 cores (4 GHz+), 8 GB memory, SSD
- 500 – 2,500 value changes per second
- Requires approximately 7 TB of disk space/year if 100% of the values are changing every second, sustained (approximately 150 GB with 2% change, smaller with slower rates)
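
For the numbers in the original question, you can get a back-of-envelope estimate by scaling the guide's figure linearly with the value-change rate. Here's a minimal sketch in Python, assuming storage grows linearly with row count; the ~89 bytes/row it implies is just what falls out of the guide's numbers, and actual row size will vary with datatype, indexes, and partitioning:

```python
# Scale the sizing guide's figure linearly with the value-change rate.
# Assumption: 7 TB/year corresponds to 2,500 sustained value changes/second,
# and storage grows linearly with row count.

GUIDE_RATE = 2_500              # value changes/second in the guide's example
GUIDE_SIZE_TB = 7.0             # disk space/year at that rate
SECONDS_PER_YEAR = 365 * 24 * 3600

def yearly_size_tb(tags: int, changes_per_sec_per_tag: float = 1.0) -> float:
    """Estimate yearly historian disk usage by linear scaling from the guide."""
    return GUIDE_SIZE_TB * (tags * changes_per_sec_per_tag) / GUIDE_RATE

# Per-row storage implied by the guide's own numbers (overhead/indexes included)
implied_bytes_per_row = GUIDE_SIZE_TB * 1e12 / (GUIDE_RATE * SECONDS_PER_YEAR)
print(f"Implied storage per row: ~{implied_bytes_per_row:.0f} bytes")  # ~89 bytes
print(f"1,000 tags @ 1/s: ~{yearly_size_tb(1_000):.1f} TB/year")       # ~2.8 TB/year
```

So 1,000 tags changing once per second works out to roughly 2.8 TB/year as a worst case. In practice, deadbands and change-only logging mean far fewer than 100% of values actually change every second, which is why the guide's 2%-change figure drops to about 150 GB.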


Welcome, Ryan!

{ I suspect @Duffanator figured it out some time in the past twelve years. }


Nope, I've been waiting all this time for an answer. Now I can finally start my project! :rofl:
