Turn on History on a Dataset


I have an MQTT engine publishing a two-column dataset to Ignition. I would like to turn on history for this tag, but in “Edit Tag” history is not listed and hence cannot be enabled. I am able to turn on history for my other tags and save their values in a MySQL database.

My end goal is to view the chart made up of the values in the database at different times.

Is there any way history can be enabled for datasets, or is there a better method to do this?

Datasets indeed can’t be stored easily in SQL (they could be stored as text, but then they’ll be even harder to plot into a chart).

The solution is probably to make some expression tags where you extract the values you want from the dataset, and then log those values to the historian.
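A minimal sketch of that extraction step, in plain Python standing in for Ignition's scripting environment (the tag paths and the `flatten_dataset` helper are hypothetical; inside Ignition you would instead make expression tags that index the dataset, with something like `{[.]MyDataset}[0, "Value"]`, and enable history on each one):

```python
# Sketch: break a dataset (headers + rows) into individual scalar values,
# one per tag path, so each can be logged to the historian on its own.
# "Sensors" and the generated paths are made-up examples.

def flatten_dataset(headers, rows, base_path):
    """Map each cell of a dataset to its own scalar tag path."""
    tags = {}
    for i, row in enumerate(rows):
        for col, value in zip(headers, row):
            tags["%s/%s_%d" % (base_path, col, i)] = value
    return tags

# Example: a two-column dataset of sensor names and readings.
headers = ["Sensor", "Reading"]
rows = [["TT101", 21.5], ["TT102", 22.1]]
print(flatten_dataset(headers, rows, "Sensors"))
```

Each entry in the resulting dict would correspond to one history-enabled tag.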


Hmm. I can’t speak to the difficulty of implementing this in SQL from Ignition’s perspective, but from the perspective of a user, I would definitely prefer having the option to do this.

We have several multi-point sensor arrays that make sense to have set up as lists, or ‘datasets’ in Ignition parlance. We’d also like time-series plots for each sensor in a single chart. Trying to implement this led to me finding this thread.

Beyond signal-boosting the demand for this currently unimplemented use case, I’d be interested in whether you can give some background on why this is difficult to implement in SQL. In what way is a dataset of several adjacent points in PLC memory represented differently from those same points, entered one by one as separate tags?

As it stands, implementing a separate expression tag to pull from the dataset is more work than simply implementing the dataset as separate tags, though any python code would be much more compact and easy to write if the tags were a dataset.

Datasets are serialized into a single compressed byte array for transport, and for storage of the most recent value in the internal database. This format is not query-friendly and basically cannot be made so. Combining multiple sensor values into a single object is only wise for one purpose: atomicity. Sending datasets over MQTT breaks the report-by-exception optimization of the transport, and as you’ve seen, yields an entity that cannot be recorded in history as-is. It also prevents you from configuring alarms or change events on individual values. The most useful aspects of SCADA need to see these sensors individually. You really – really – should configure your engine to send them individually from the get-go.
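To make the "not query-friendly" point concrete, here is an illustration (not Ignition's actual wire format) of what serializing a dataset into one compressed blob does to queryability:

```python
import json
import zlib

# Illustration only: pack a dataset into a single compressed byte array,
# the way a transport might. The result is an opaque BLOB.
rows = [["TT101", 21.5], ["TT102", 22.1]]
blob = zlib.compress(json.dumps(rows).encode("utf-8"))

# A SQL table could store `blob` in a BLOB column, but you cannot
# WHERE-filter or chart individual sensor values from it.
print(type(blob), len(blob))

# Getting any single value back requires deserializing the whole thing:
recovered = json.loads(zlib.decompress(blob).decode("utf-8"))
print(recovered[0][1])
```

Individual tags, by contrast, land as plain numeric columns the historian can index and query directly.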

Interesting, thanks.

I’m coming to this from a Python background, so my reflex is to assign values like this to a single array and iterate over it. In my head that reduces both the amount of work and the potential for error: fill the array correctly and write one correct block of code for one object, versus creating, configuring, and potentially maintaining several independent tags without error.

Not intending the following as a criticism, just curiosity and attempting to fill in the gaps in my knowledge. I’m new to industrial automation, and my programming background has been mostly data analysis and simulation for my PhD.

The total PLC memory that we’re using amounts to a few (<10) thousand bools and a few thousand more ints and floats. We poll a small subset of that, but were we to poll the entire system at about 1 query per second (on average, sufficient for our monitoring), we’re talking perhaps hundreds of kbps, most likely much less. I can imagine that if the frequency or the number of points or both were to rise, we could eventually outstrip the capacity of our OPC network. I’m assuming that we are a somewhat edge-case SCADA user, in the sense that it is not a concern for us at the moment?

Is the use of compression and MQTT to transmit datasets as opposed to other data a necessity of OPC architecture, or is it a choice on the part of Ignition? If it’s a necessity of OPC architecture, does it have its roots in limitations of historical computers/networking and the need for backward compatibility, or is there a fundamental aspect of SCADA/networking as it exists today I don’t understand yet? To be fair, I don’t understand almost all aspects of both at this point :).

My point of essential confusion is why an array of values can’t be transmitted over an OPC network as such, and why report-by-exception couldn’t be applied, even to the elements of a list/array/dataset.

Also, can you explain ‘atomicity’ as it pertains to SCADA and what the benefits of atomicity are?


There’s a ridiculous amount of protocol overhead in the polling of PLC values by OPC drivers. Most industrial control devices (PLCs and I/O devices) have underpowered CPUs by modern standards, because they are built for durability (esp. vibration), survivability (extremes of temperature/humidity/radiation/voltages), and extremely long life (20+ years is common). And similarly underpowered communications links. Some of this is changing, but there’s a great deal of installed hardware that isn’t getting replaced anytime soon. On top of that, many remote applications are stuck with expensive rural metered bandwidth. Architectures and protocols that can minimize the cost of bandwidth are attractive. This is true even when uplinks are cheap, as any cheap bandwidth is prone to repurposing for other IT consumers at any given location. A light SCADA bandwidth footprint typically won’t be penalized in such cases.

Hundreds of kbps would not be considered a light load, as lots of rural areas are stuck with 1.5Mbps T1 lines or only marginally better DSL lines. And 3G or older cell modems can be much worse.

MQTT is popular today in large part because it doesn’t poll. Polling stays local to the facility, and the local engine notes which values have changed enough to trigger reporting; only those go to the uplink. Similarly, interested parties (other facilities) subscribed to the value don’t poll either. The change is pushed down from the MQTT broker only when a report arrives. The bandwidth savings are due more to the non-polling behaviour than to any compression offered by a specific protocol or implementation.
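The "changed enough to trigger reporting" logic is usually a deadband filter. A minimal sketch of report-by-exception (the deadband value and class name are illustrative, not any particular engine's implementation):

```python
# Sketch of report-by-exception: publish a value only when it has moved
# more than a deadband away from the last *reported* value. Everything
# else stays local and never touches the uplink.

class RbeFilter:
    def __init__(self, deadband):
        self.deadband = deadband
        self.last_reported = None

    def should_report(self, value):
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return True
        return False

f = RbeFilter(deadband=0.5)
samples = [20.0, 20.1, 20.2, 21.0, 21.1, 20.3]
reported = [v for v in samples if f.should_report(v)]
print(reported)  # [20.0, 21.0, 20.3] -- only significant changes
```

Note that this works per value: publishing a whole dataset defeats it, because any one cell changing forces the entire array to be re-reported.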

Atomicity is an important property when multiple values, possibly of various data types, are closely related to each other in the time dimension. Like a snapshot of many process variables when a machine cycle finishes. Those values as a group belong to that cycle’s output. Quality recording and/or subcomponent traceability are primary consumers of such data. OPC and MQTT and similar subscription-based protocols generally do not guarantee any particular order of arrival when multiple values change at the same or nearly the same time.
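A small sketch of what an atomic cycle-end snapshot looks like as a payload (the field names and cycle number here are made up):

```python
import json
import time

# Sketch of atomicity: bundle all end-of-cycle process variables into ONE
# payload with one timestamp, so consumers (quality records, traceability)
# never see a half-updated set of values arriving out of order.

def cycle_snapshot(cycle_id, values):
    """All values in `values` belong to the same machine cycle."""
    return json.dumps({
        "cycle": cycle_id,
        "t": time.time(),
        "values": values,
    })

payload = cycle_snapshot(1042, {"temp": 181.4, "pressure": 2.06, "good_part": True})
decoded = json.loads(payload)
print(decoded["cycle"], decoded["values"]["good_part"])
```

Published as separate tags, the same three values could arrive in any order, and a subscriber snapshotting at the wrong moment would mix values from two different cycles.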

Atomicity is also important when a collection of values is used as input to a state machine. If the ‘C’ in SCADA applies, there may be a need for careful design of the data traffic. As you might suppose, I highly recommend not controlling anything (important) remotely.

Thanks, that was very informative.

So, one nice thing (before I learned all this) about the array was that the code to scale the incoming measurements, compute derived values, and compare those values to setpoints could all be done in less than 100 lines of ST code.

It seems to me, as a neophyte, that the only way to do this once I’ve broken the 72 memory points out of their arrays is to cycle through many more lines of ladder logic. The previous iteration of this code for instance, was several hundred mostly identical lines packed into a few special functions.

Is this just the cost of doing business, or is there a clever labor-saving method I haven’t discovered? For instance, if it were possible to work with memory addresses rather than variables (I’m thinking something akin to working with pointers in C), it would be simple to rig up a resetting counter in ladder logic to cycle through each memory address sequentially. This might require more information (which program am I running, which version), but I’m just asking in general.

Nothing I’ve advised would make you eliminate the arrays from your PLC code. That would indeed be inefficient. Just don’t publish to the world as an array.
However, beware of looping constructs in PLC code. You will irritate your customers’ technicians and potentially create scan time issues. Use add-on instructions (if you can) or subroutines (if you must) for repetitive code, with individual instances (or calls) for each chunk of data. Where a looping construct is not time-critical, use indirects to execute one or just a few per scan.
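The "one or just a few per scan" pattern can be sketched like this, in plain Python standing in for ST/ladder with indirect addressing (the array size, step count, and scaling logic are placeholders):

```python
# Sketch: instead of looping over all points inside one PLC scan, keep a
# persistent index and advance it a few elements per scan, wrapping around.

def scan_step(values, results, state, per_scan=4):
    """Process up to `per_scan` elements per call (per 'scan')."""
    i = state["index"]
    for _ in range(per_scan):
        results[i] = values[i] * 2.0  # stand-in for scale/compare logic
        i = (i + 1) % len(values)
    state["index"] = i

values = list(range(8))
results = [None] * 8
state = {"index": 0}
scan_step(values, results, state)  # first "scan": elements 0-3
scan_step(values, results, state)  # second "scan": elements 4-7
print(results)
```

Each call touches only a bounded slice of the array, so scan time stays predictable no matter how many points there are.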

Can’t you directly subscribe individual tags to the MQTT payload?

If not:


Check the above link. You have to extract each value into another history-enabled tag, but you will get new timestamps (even replacing the remote timestamp). Finally, you have to trigger and run the script whenever the payload arrives. Watching for a payload to arrive strangely defeats the very purpose of the MQTT publish/subscribe protocol. To me, it looks like some sort of crude data butchering with a blunt knife :slight_smile: This is not a viable solution for production deployment.
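For reference, the extraction step being criticized looks roughly like this (the payload shape, tag paths, and `payload_to_records` helper are hypothetical; the Ignition-side calls that would actually write the records to history-enabled tags are omitted):

```python
import json

# Sketch of "extract each value when the payload arrives": parse a JSON
# payload into (tag path, value, timestamp) records. Note the remote
# timestamp must be carried along explicitly, or the historian will
# stamp the values with the arrival time instead.

def payload_to_records(payload, base_path):
    msg = json.loads(payload)
    ts = msg["timestamp"]  # remote timestamp from the publisher
    return [("%s/%s" % (base_path, name), value, ts)
            for name, value in msg["values"].items()]

payload = '{"timestamp": 1700000000, "values": {"TT101": 21.5, "TT102": 22.1}}'
for record in payload_to_records(payload, "Sensors"):
    print(record)
```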

The best solution is to build your own Java MQTT driver module for Ignition.

If you want to save the JSON payload into a text file and import it directly into the historian database along with remote timestamps, check this topic. Later, you can consume that data through query tags. I think this can very well be done outside Ignition through some PHP scripting. Just an “other way around” solution.

Very interesting. Which MQTT driver do you use?

If you don’t have a reliable MQTT driver for Ignition, I would recommend this solution:

  1. Log MQTT payload to a database.
  2. Consume it with query tags into Ignition.
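A minimal sketch of steps 1–2, using SQLite for brevity (the table, column names, and payload shape are hypothetical; in practice an MQTT client such as paho-mqtt would call `store_payload` from its message callback, and Ignition query tags would read from the table):

```python
import json
import sqlite3

# Step 1 sketch: persist each arriving payload's values as rows, keeping
# the remote timestamp so history charts line up correctly.

def store_payload(conn, topic, payload):
    msg = json.loads(payload)
    with conn:
        conn.executemany(
            "INSERT INTO mqtt_data (topic, tag, value, t_stamp) VALUES (?, ?, ?, ?)",
            [(topic, name, value, msg["timestamp"])
             for name, value in msg["values"].items()])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mqtt_data (topic TEXT, tag TEXT, value REAL, t_stamp INTEGER)")
store_payload(conn, "plant/sensors",
              '{"timestamp": 1700000000, "values": {"TT101": 21.5}}')

# Step 2 sketch: a query tag in Ignition would run something like this.
print(conn.execute("SELECT value FROM mqtt_data WHERE tag = 'TT101'").fetchone())
```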


Personally, I think this is far better than loading an “Ignition MQTT engine” because:

  1. You have excellent open source MQTT clients which are well supported and used by a large community.

  2. You can eliminate the extra load and risk of running another driver inside Ignition and getting stuck with it forever.

  3. MQTT is an open standard and evolving rapidly. It’s not a proprietary PLC driver. Why should you cough up a big amount for an open-standards driver to a proprietary company and be at their mercy forever?

I would never recommend this approach. But… take a look at CrateDB.

Awesome. I will test it and post my feedback.