How big is your system?

I have a potential customer that wants to know if Ignition can handle 250,000 tags.

Does anyone have a system that big? I’ve heard of systems with 100,000 tags.

I'm about to find out! I won't be up around 250k though, more like 100-125k.

We’re actually redesigning our software to handle systems up to 1M points with ease. Nobody has asked for visualization on that though (yet :smiling_imp: )

Robert: What kind of hardware config does it take to do 1 million tags? Are you dividing the load and using some kind of load balancing?

Just curious.


At this point I don’t know. The system we are developing is a data collection and processing application typically used as a front end to a SCADA application like Ignition.

The target hardware is a 2 GHz dual-core CPU with 2 to 4 GB of RAM and no moving parts (no fans, SSD, etc.).

We don't do load balancing, but we do do redundancy.

Where Ignition comes in is for those customers that want a local view into their network, or that don't have any SCADA application and don't want to spend $,$$$,$$$ to get one.

I have 51,000 tags currently, and will expand to about six times that. The updates are very fast as well (more or less real time). I will be running some tests this fall to see how it scales to even larger systems, but I know it easily handles what it does now.

We currently have around 230,000 tags defined, and we're really just getting started: we're not a fixed-size factory layout, and we continuously add more. We're running under Windows on a pretty beefy VMware server, with MySQL on a separate server.

About 10% of the data points are actively being updated at any given time; the others are configured to update only when the relevant screens are open. Most noisy process tags are updated at a 3-5 second rate, with forced updates every 1-2 minutes if there is no change. We started having problems with the HMI getting behind when the database reached 600-1000 queries a second. We also wrote our own tag server to update the database separately, due to a custom data format. We normally have 30 or so clients connected.

We have a second one running directly on a Dell Xeon 2 GHz machine with 4 GB of RAM, and that is plenty of power for 75,000 tags, about 12,000 of them continuously updated. I think it would happily run 8 times that before starting to show strain. It has standard drives in a RAID configuration.

My boss likes Linux and likes having the database sit between the tag server and the HMI, but it's been an inconvenience to me from time to time. This is one of the few HMIs supporting Linux. I think the massive number of updates to the database limits how well it scales, and we have problems with some of the indexes causing deadlocks when we reconfigure the scan classes. But most of that is probably just a matter of more DB tuning.

The biggest bottleneck is really the frequency of tag data updates: the faster the updates, the worse the behavior, and in our case the bottleneck is largely the writes into the database. We applied a lot of deadbands to cut down on updates from noise, and that helps tremendously.
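The deadband-plus-forced-update pattern described above can be sketched roughly like this. This is a minimal illustration in Python, not code from any actual tag server; the class name, threshold, and interval values are all made up:

```python
import time

class DeadbandTag:
    """Publish a tag value only when it moves outside a deadband,
    or when a forced-update interval elapses with no new publish."""

    def __init__(self, deadband=0.5, force_interval=60.0):
        self.deadband = deadband              # minimum change worth publishing
        self.force_interval = force_interval  # heartbeat even if unchanged
        self.last_value = None
        self.last_publish = 0.0

    def update(self, value, now=None):
        """Return True if this sample should be written to the database."""
        now = time.time() if now is None else now
        if self.last_value is None:
            publish = True    # first sample always goes out
        elif abs(value - self.last_value) >= self.deadband:
            publish = True    # moved outside the deadband
        elif now - self.last_publish >= self.force_interval:
            publish = True    # forced heartbeat update
        else:
            publish = False   # noise inside the deadband: suppress the write
        if publish:
            self.last_value = value
            self.last_publish = now
        return publish
```

With a noisy signal jittering inside the deadband, this drops most of the writes, while the heartbeat keeps the stored value from going stale between real changes.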

I'd prefer a more dynamic configuration for what we do. It depends on your use cases, I suppose, but this HMI works fairly well at this point with that number of tags, and our throughput bottlenecks are elsewhere for the time being. I'm not saying it's completely smooth sailing at this size, but it works.


Are the tags in the database only because you need to drive them yourself? Is there any other way you use the data in the database? I ask because it might make sense down the road to convert your tag driver into a module that provides a SQLTags provider, thus taking the data out of the database. That would probably help your performance considerably.

This probably wouldn't be something to try to tackle today, but I wanted to throw it out there as an idea for the future. Also, just curious: what's the driving program written in? I think I remember python being mentioned, but I'm not sure.


I'm sure bypassing the database would improve performance substantially, since we would just use your internal cache instead of that cache plus the database. The database is really the weak link, in my opinion, because it's a middleman and adds very little to the real-time streaming part of the system. While the database is there to support external driving, it also supports extracting always-available tags that are displayed on a separate web portal. It's also used so we can add tags programmatically, which I don't think we can do with the existing internal tag driving system. The tag mirroring of KPIs could probably be done easily with transaction groups or something, I'm sure.

Anyway, I can think of two dozen ways I'd rather do the whole system that would work better for our business process, but the project is 'complete', so I just thought I'd share the size of our system.

I'm curious how the MySQL MEMORY storage engine would work in terms of performance and memory usage for a large number of (realtime) tags. The system would probably need to recreate the entire database every time you reboot, which is an operation that will take some amount of time.

Regarding memory tables, we tried them. The first thing is that the default max size is pretty small, though it's changeable. We considered having a definition table and then using an INSERT INTO … SELECT statement to populate the memory table, but to our surprise we found the performance of the memory table to be worse than either InnoDB or MyISAM. We also tried MySQL Cluster, which is supposedly faster, but we found it to still be comparable to or slower than a single instance. The cluster is in theory a way to scale, but I have trouble believing that on an update-heavy table it would beat a standalone database.
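The repopulate-on-startup pattern mentioned above (a persistent definition table feeding a volatile realtime table via INSERT INTO … SELECT) can be sketched like this. Here Python's sqlite3 in-memory database merely stands in for a MySQL MEMORY table, and all table and column names are illustrative, not from the actual system:

```python
import sqlite3

# An in-memory DB stands in for a MySQL MEMORY table: contents vanish on restart.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Persistent tag definitions (in MySQL this would live in an InnoDB table).
cur.execute("CREATE TABLE tag_def (tag_id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO tag_def VALUES (?, ?)",
                [(1, "pump1.flow"), (2, "pump1.pressure")])

# Volatile realtime table, rebuilt at every startup from the definitions.
cur.execute("CREATE TABLE tag_rt (tag_id INTEGER PRIMARY KEY, name TEXT, value REAL)")
cur.execute("INSERT INTO tag_rt (tag_id, name, value) "
            "SELECT tag_id, name, NULL FROM tag_def")
conn.commit()

# One realtime row per defined tag, ready for the tag server to update.
rows = cur.execute("SELECT COUNT(*) FROM tag_rt").fetchone()[0]
```

The INSERT INTO … SELECT itself is cheap; as the post notes, the surprise was that steady-state update throughput on the MEMORY table was still worse than on the disk-backed engines.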

We did find MyISAM to be faster than InnoDB for most use cases, but table-level locking tended to cause some bottlenecks when we were moving tags from one scan class to another. We chose InnoDB for consistency, since everything else we do uses transactions. Having said that, some of the indexes on our InnoDB tables have also caused table-level deadlocks when a large number of tag updates is going on while adding tags.

All in all I'm impressed with the performance, given what's actually being thrown at it.