Export Ignition tag historian to CSV to provide data to central site (custom) historian

Hi Forum,

We are currently trying to get data from multiple edge gateways into a redundant Ignition gateway, and from there we want to provide the data to our site historian/DB (Postgres in the backend).
Our site historian/DB can import tags via OPC-UA (subscribing to or polling an OPC-UA server) or by reading CSV files.

  1. Using the internal OPC-UA server. Issue: data is not backfilled if the connection between the edges and the central gateway is lost.
  2. Exporting the historian to CSV. Issue: the idea was to query all tags every minute or so, but the script takes very long and the CSV is not user friendly. It repeats the same timestamp across all tags. Is this the normal behavior?
    We tried it with 500 tags that change every second in a simulation.
Csv file structure:
t_stamp, Tag1, Tag2, Tag3, ..., Tag n-1, Tag n
"2025-02-28 12:37:53.000","670"
"2025-02-28 12:37:53.000","670","670"
"2025-02-28 12:37:53.000","670","670","670"
"2025-02-28 12:37:53.000","670","670","670","670"
"2025-02-28 12:37:53.000","670","670","670","670","670"
"2025-02-28 12:37:53.000","670","670","670","670","670","670"
"2025-02-28 12:37:53.000","670","670","670","670","670","670","670"

The epoch (timestamp) is exactly the same for all tags, also in the historian.

# Query the last 60 seconds of tag history.
# 'paths' is a list of historical tag paths defined elsewhere.
endTime = system.date.now()
startTime = system.date.addSeconds(endTime, -60)
historicalData = system.tag.queryTagHistory(paths=paths,
                                            startDate=startTime,
                                            endDate=endTime)

# Convert the result dataset to CSV and write it to disk.
csv = system.dataset.toCSV(historicalData)
system.file.writeFile(r"C:/myExportsTest.csv", csv)

Are the edge gateways actual Ignition Edge?

If so, use sync services to replicate the data to the central gateway.

If full Ignition, the supported way to do this is to set up remote history providers in your spoke gateways alongside the local historian, and use a tag history splitter to cause your tags to push to both.

Another possibility (my preference) is to use native database replication outside of Ignition to push to read-only replicas in your central location, and point the central gateway at those.

Hi @pturmel,

    • Yes, we use Ignition Edge gateways and the GAN with Tag sync (besides Audit sync).
    • We do not plan to use full gateways for the "edge" locations, but if this is the only way, we will add it as an option in our evaluation. Would we need to script this if we do not want to use the SQL Bridge module in addition at all "edge" locations?
    • Thank you for your recommendation, although I don't fully understand it. We can replicate the Ignition historian (we need it, as this is the only way to store-and-forward from Edge, besides MQTT), but I'm not sure how this would help. We might be able to extract the data from there and put it into a CSV, which would most likely improve performance. But we only tried it with 500 tags at a 100% change rate every second; the production environment will have ~20k tags.

Do you know if the CSV export behaves as intended? I would expect only one row per timestamp.

  • We will check if the MQTT/Sparkplug modules help with S&F, maybe by exposing reference tags to the OPC-UA server, but we are not sure if this works for S&F data.

Thank you,
André

Look at additional settings on the system.tag.queryTagHistory function. It does not look like you are telling it to return only one row.
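
Something along these lines (just a sketch, assuming the keyword arguments of system.tag.queryTagHistory in Ignition 8.1; the one-second interval and LastValue aggregation are illustrative, not a recommendation) should collapse the result to one row per interval in the default Wide format:

# Sketch: one row per one-second interval, Wide format (one column per tag).
# 'paths' is the same list of historical tag paths as in the script above.
endTime = system.date.now()
startTime = system.date.addSeconds(endTime, -60)

historicalData = system.tag.queryTagHistory(
    paths=paths,
    startDate=startTime,
    endDate=endTime,
    returnFormat="Wide",         # one column per tag, shared t_stamp column
    intervalSeconds=1,           # one row per second instead of one per sample
    aggregationMode="LastValue"  # which value to keep within each interval
)

csv = system.dataset.toCSV(historicalData)
system.file.writeFile(r"C:/myExportsTest.csv", csv)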

If you already have the data in your central historian, I don't understand your problem. Just connect the central Ignition to your PostgreSQL database. Perhaps just use PostgreSQL as the historian target. In any case, CSV is not the answer at all.

Finally, if you really want to manually transfer history, make sure your calls to system.tag.queryTagHistory() specify the Tall format and "as stored" data: no aggregation, no fixed return size.

But don't convert to CSV. Just insert directly to a PostgreSQL DB connection.
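
A rough sketch of that approach (assuming a gateway database connection named "SiteHistorian" and a target table "tag_data"; both names, and the column order of the Tall dataset, are placeholders to verify against your setup):

# Sketch: pull raw (as-stored) history in Tall format and insert it directly
# into a PostgreSQL table through an Ignition database connection.
# "SiteHistorian" and "tag_data" are placeholder names.
endTime = system.date.now()
startTime = system.date.addSeconds(endTime, -60)

tall = system.tag.queryTagHistory(
    paths=paths,
    startDate=startTime,
    endDate=endTime,
    returnFormat="Tall",   # one row per sample
    returnSize=-1,         # return values as they changed (no fixed row count)
    noInterpolation=True   # do not synthesize interpolated samples
)

insertSql = "INSERT INTO tag_data (tag_path, tag_value, quality, t_stamp) VALUES (?, ?, ?, ?)"
for row in range(tall.getRowCount()):
    # Column order assumed here to be path, value, quality, timestamp;
    # check tall.getColumnNames() before relying on these indexes.
    args = [tall.getValueAt(row, col) for col in range(4)]
    system.db.runPrepUpdate(insertSql, args, "SiteHistorian")

At ~20k tags you would want to batch these inserts, or route them through system.db.runSFPrepUpdate so the writes get store-and-forward protection.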

@Chris_Bingham
I checked and tested the options, but I can't find one that returns only one row per timestamp.

@pturmel
We do not have the data in our central historian. We want to collect data from Ignition (edges via central) and store it in the central historian.

Okay, thank you!
Unfortunately, we can't directly use the native PostgreSQL connection.

You have Edge connected to a central Ignition gateway, with sync set up per comment #3, item #1. So you have your data in that gateway's historian, just not in your PostgreSQL custom historian. (If you don't actually have S&F working, pushing to the central Ignition gateway, get that working first.)

From the central gateway? This is what you need to fix, then. Really.

  • We do have S&F running with GAN sync to the central Ignition. That's not the issue. Sorry if I expressed myself incorrectly here.
  • Okay, we can ask our supplier to open the interface to the historian. Currently it's only CSV and OPC-UA. If we can call their internal stored procedures, my approach would be to test this with the SQL Bridge. From my understanding this will not work, as the data from Edge to central is only store-and-forwarded into the Ignition historian. Another approach we will try is to script it in the central gateway; here, too, we are not sure if we have access to the S&F tags. We are not super experienced with Ignition, but we are trying to learn and will eventually work with an integrator if our test scenarios work.

Thank you!

A Tag History Splitter IS a Remote History Provider. If you wish to push data to two historians from an Edge gateway, create a Tag History Splitter on the Full gateway, then configure the Edge sync to push data to that same (splitter) history provider. The Full gateway will then send the Edge data to BOTH historians.

Then you have a database that your central Ignition is using. (That's how Ignition stores long-term history.)

Ideal. Ask them to allow Ignition to create its standard history tables there, and then all data exchange can be inside their database.

If they do not permit a direct connection, an option would be to set up two new PostgreSQL database servers: one adjacent to your central gateway, controlled by you, and one in their data center under their control. Configure yours to be your history storage instead of the current database connection. Have them configure theirs as a replication slave to yours. Then the connection is from them to you, instead of from you to them, and they can move data on their side without exposing their main database.

@pturmel
Yes, we have a database which Ignition central is using. The question was how to extract the data and bring it to the historian collector (e.g. OPC-UA or CSV).
As this seems not to be that easy, it might be better to replicate the Ignition database and give the historian provider read access.

Thank you both!

For those who are interested.

Configuring history on MQTT Engine tags - MQTT Modules for Ignition 8.x - Confluence

There is an option to backfill data from the edge via MQTT into Engine tags.
These tags can then be exposed via OPC-UA.

Our test setup:
• Ignition Edge sends 500 incrementing tags via MQTT Transmission
• We activated "flush in order" and "rolling buffer" plus the memory history. We flush 5000 data points every 1000 ms after a reconnect.
• Broker for the tests: Chariot
• MQTT Engine receives the tags at the central Ignition gateway
• We deactivated the flag to write history tags directly to the DB.
• We activated history on the Engine tags
• We expose the same tags via the Ignition OPC-UA server
• We then subscribe to the OPC-UA server with a queue size of 65000 (not sure if this really works) and a publishing interval of 500 ms
• We log the data from OPC-UA to a CSV file (with the Prosys OPC UA Browser)

  1. Now we disconnect the MQTT broker from the Edge and the gateway by activating the firewall.
  2. Everything is okay so far. The Edge stores the data in the memory history.
  3. Now we connect the MQTT broker again.
  4. The Edge flushes the 5000 data points until it is done (we can see this in the logs).

Verification:

  • We download the messages from the Broker via the Chariot client.
  • We download the csv files from the OPC-UA client logger
  • We check the Ignition history

Unfortunately, we have data loss in all cases, but only via OPC-UA are there significant gaps in between.

  1. The CSV file from the OPC-UA client logger: we have missing data between 3 and 22, i.e. between 10:50:08 and 10:50:27, and there are other gaps, almost always 10-20 seconds long. But apart from that it works well...

The data itself is available via MQTT and is also historized



Does anybody know how the tags are provided to the internal Ignition OPC-UA server?
When the edge flushes data, it has to be queued somewhere in Ignition. Maybe this runs into some limit?

Thank you,

PS, more CL-related: some data is not historized in the Ignition historian but was visible via OPC-UA. I think this is somewhat strange :slight_smile:

This is the Ignition history. Data is missing between 86 and 91, or from 10:51:31 to 10:51:36.

But this data is available in the OPC-UA client's log.

By default, the Ignition historian handles floating point tags with "Analog" deadband style, which tries to reduce DB writes by omitting samples that fall on a recent trend line. (As documented.) If deliberately set to "Discrete" deadband style, the historian will record new points strictly according to deadband and min/max time interval settings.

If your OPC client loses its connection while the field device is connected to the brokers, you will lose history. Ignition's OPC UA server does not implement any history methods.

You should generate your CSV files from Ignition instead, with scripts that directly access the historian.
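
For example (a sketch only; the export folder, file naming, and one-minute window are arbitrary placeholders), a gateway timer script could dump the last minute of as-stored history to a timestamped CSV:

# Sketch for a gateway timer script: export the last minute of as-stored
# history to a timestamped CSV file. Folder and file name are placeholders.
endTime = system.date.now()
startTime = system.date.addSeconds(endTime, -60)

data = system.tag.queryTagHistory(
    paths=paths,            # list of historical tag paths
    startDate=startTime,
    endDate=endTime,
    returnFormat="Tall",
    returnSize=-1,
    noInterpolation=True
)

fileName = "C:/exports/history_%s.csv" % system.date.format(endTime, "yyyyMMdd_HHmmss")
system.file.writeFile(fileName, system.dataset.toCSV(data))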

Hi @pturmel

Thank you for your comments

  • The tags are set to discrete for the historian
  • The OPC-UA client never lost its connection; only the edge lost its connection to the central gateway (which hosts the OPC-UA client). We understand that OPC-UA HA (historical access) is not implemented, but queues are implemented and should buffer the data as specified and requested by the client.
  • That's the option we checked, but it was not working as expected. We found something in the documentation which explains the issue, and we will now check whether the interval sample mode without interpolation works:

Does anybody know why there are gaps in the tags exposed via OPC-UA after the data is flushed from the edge? Most of the historical data can be exposed via OPC-UA, but there are these longer gaps (15-20 seconds) which we cannot explain.
Is there maybe a queue between an Ignition tag and an OPC-UA node which fills up before the OPC-UA server is able to publish the data? Just a guess :slight_smile:

The exposed tags feature of Ignition's OPC Server does not use Ignition's history subsystem at all. Interval settings on queries only apply to the historian.

The OPC Server simply collects and forwards tag changes at the tag. When MQTT delivers data in order, the OPC Server will see and be able to buffer those changes. If any comms hiccup causes MQTT to deliver any samples out of order, the late-arriving samples with earlier timestamps will be sent to backfill history, but will not be delivered as a tag value change. Thus, the OPC Server will never get them. They will only be available to history queries within Ignition.

Your chain of data flows is not robust for reliable history collection.

Good luck.

Yeah, you are most likely right. We will do another test, but it looks like the only viable option is to use standard gateways and make use of the S&F mechanisms with SQL scripts.

Semantics here... Edge vs. Standard gateway does not change the S&F mechanisms that are in place. Any data in the Ignition historian that you see (or don't see) with your current configuration (on Edge) should be the same with a matching Full/Standard gateway setup.

You really need to challenge the requirement for the site historian to inject data only via OPC UA & CSV. Pursue an option that either:

  • Permits Ignition to write directly to the site historian
  • Permits the site historian to sync with (or query) the Ignition historian directly

Thank you for your feedback.

I'm not sure what you mean by semantics. If we want to use scripted SQL queries, this is only possible with a standard gateway, as far as I know.

We will try our best to challenge the requirements, but having direct SQL access is, in my experience, not the standard for site/enterprise historians. CSV generally works fine, but we are not yet experienced enough with Ignition to make this work seamlessly.
In the long run there will be a Kafka and MQTT interface to the historian, but we need an intermediate solution :slight_smile:

What do you mean by this?

In my experience, having at a minimum read-only access to historical data is not only the standard for a SCADA system, it is a requirement.

Also, it should be noted that the SQL Bridge module, despite the name, has nothing to do with database connectivity or "bridging" data from one database to another in the way that you might think.

That module gives you access to Transaction Groups, which do not work with CSVs but do require a database connection.


If you have access to import data from a CSV, then you should probably also have access to INSERT rows into the same tables. I'm not sure why IT/security requirements would differentiate between those two. That said, the type of access should absolutely be scoped: a user should be created which has only the permissions needed to mimic the CSV import capability (insert rows into existing tables and, optionally, create new tables, etc.).

This is true, but you already have a full gateway, correct?
Is this a representation of your current architecture?
Edge Ignition --> Standard Ignition --> Ignition DB --???--> Enterprise DB
...where you are trying to figure out how to get data from the Ignition DB to the Enterprise DB?