Transaction Group Data Size


I’m trying to understand how data collected by a transaction group is stored in MySQL. I store different data types in separate groups in the database. However, the result does not match what it should be…

For example, Ignition collects an int value every 1 ms for 9 min + 1 sec.
The ideal number of items should be 541,000, but I got 269,012.
The ideal data size should be about 2.064 MiB, but I got 8.5 MiB.

I understand that some data was lost due to the fast collection rate. But I’m confused about how the data size could be greater, rather than smaller. :scratch:
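A quick sanity check of the numbers above, assuming a bare 4-byte INT per sample, also shows how many bytes each stored row is actually taking:

```python
# Expected sample count and size vs. the observed table size,
# assuming each sample were stored as a bare 4-byte INT.
duration_s = 9 * 60 + 1          # 9 min + 1 sec = 541 s
rate_hz = 1000                   # one sample per millisecond
expected_rows = duration_s * rate_hz
expected_mib = expected_rows * 4 / 1024 / 1024   # 4 bytes per INT

observed_rows = 269012
observed_mib = 8.5
bytes_per_row = observed_mib * 1024 * 1024 / observed_rows

print(expected_rows)             # 541000
print(round(expected_mib, 3))    # 2.064
print(round(bytes_per_row, 1))   # ~33 bytes actually stored per row
```

So each stored row is taking roughly 33 bytes, not 4, which is where the size discrepancy comes from.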



You’re assuming exactly 4 bytes per integer. That ignores the overhead that each row carries, not to mention that by default other columns are logged as well, usually an index (another int) and a timestamp. To get a better idea of how data is stored, check out this page in MySQL’s user manual.
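As a rough back-of-the-envelope sketch (the exact on-disk size also depends on row format, page fill factor, and MySQL version), the documented InnoDB per-row bookkeeping plus the default extra columns already account for most of the observed size:

```python
# Approximate per-row cost in InnoDB's COMPACT row format.
record_header = 5        # per-row record header
transaction_id = 6       # hidden DB_TRX_ID field
roll_pointer = 7         # hidden DB_ROLL_PTR field
index_col = 4            # the default auto-increment INT index
timestamp_col = 8        # DATETIME (8 bytes pre-5.6.4; 5+ later)
value_col = 4            # the INT actually being logged

row_bytes = (record_header + transaction_id + roll_pointer
             + index_col + timestamp_col + value_col)
print(row_bytes)         # 34 -- close to the ~33 bytes/row observed
```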

Now, collecting data every millisecond is ambitious. You won’t be able to do it with such a straightforward approach. If you really need to collect data this fast (and honestly, I’m not sure I’ve seen such a case, though I’m sure some exist), and assuming it would only be for short periods of time, like 9 minutes, you’re probably going to have to do the following:

  • Create arrays of data in the PLC that represent blocks of data, e.g. 1000 rows for 1 second of samples. Create a few of them, and for each one, create a separate trigger bit that indicates the buffer has been filled.
  • For each array, create a block group that contains all 1000 rows. Add the trigger to the group. All of the groups are identical, in that they are set to:
  1. Insert rows
  2. Store row id
  3. Trigger on the trigger tag > 0, reset trigger
  4. OPC data mode = Read on the options tab
  5. All targeting the same table and columns

When the trigger goes high, the group will read the data and insert it as a batch. Depending on the OPC server/driver, the read should only take 50–150 ms, and the insert should take about 100 ms. With 5 buffers or so, you’ve actually got 5 seconds to do all of this before the array needs to be re-used.
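The rotation described above can be sketched as a simple ring of buffers with trigger bits. This is a hypothetical simulation (the buffer count, names, and `flush_buffer` helper are all illustrative, standing in for the PLC arrays and the block group's read-and-insert):

```python
# Sketch of the buffer rotation: the "PLC" fills one array per
# second and raises its trigger; the "block group" flushes any
# triggered array as a batch and resets the trigger.
BUFFERS = 5
ROWS_PER_BUFFER = 1000

buffers = [[0] * ROWS_PER_BUFFER for _ in range(BUFFERS)]
triggers = [False] * BUFFERS

def fill_buffer(i, samples):
    """PLC side: copy 1 s of samples in, then raise the trigger."""
    buffers[i][:] = samples
    triggers[i] = True

def flush_buffer(i, inserted):
    """Group side: on trigger, batch-insert the block and reset."""
    if triggers[i]:
        inserted.extend(buffers[i])   # stands in for the SQL insert
        triggers[i] = False

inserted = []
for second in range(12):              # simulate 12 s of acquisition
    i = second % BUFFERS              # rotate through the buffers
    fill_buffer(i, [second] * ROWS_PER_BUFFER)
    flush_buffer(i, inserted)

print(len(inserted))                  # 12000 rows, none overwritten
```

As long as each flush finishes within the 5-second window, no buffer is overwritten before it has been inserted.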

Hope this helps,


First of all, thank you so much for taking the time to answer my question.

Secondly, I understand that the table contains an index; I already deleted the index and timestamp columns. The page you linked is useful; it explains some of the details, but not very clearly.

Third, it is true that collecting data every 1 ms is ambitious; I wanted to test the capabilities of Ignition Designer, which is why I tried it.

Anyway, thank you a lot!



The size of the rows is one thing, but you might also want to read about InnoDB Compression if you really want to reduce the disk space used.

I understand if you just want to test performance. However, in that case, it’s worth noting what I suggested with the block groups: using transactions and batching, Ignition can insert many rows in roughly the same amount of time as one row. So a block group with 100 or 1000 rows could insert just as quickly as a standard group inserting 1 row at a time!
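To illustrate why batching wins: one `executemany` call inside a single transaction issues all the inserts with one commit, instead of one round-trip and commit per row. This sketch uses Python's built-in sqlite3 as a stand-in for MySQL; the table name is illustrative:

```python
# Batch insert (block-group style): 1000 rows, one statement
# preparation, one transaction, one commit.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (value INTEGER)")

rows = [(v,) for v in range(1000)]
conn.executemany("INSERT INTO samples (value) VALUES (?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(count)   # 1000
```

A standard group doing `conn.execute(...)` plus `conn.commit()` once per row pays the per-statement and per-commit cost 1000 times over.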

Good luck,