Does Tag Change Script Trigger for Replayed Edge Data After Gateway Network Reconnection?

Hi everyone,

I’d like to confirm my understanding of how Ignition Edge Data Sync (Tag History) behaves after a network outage.

My scenario is:

  • Edge collects event-based data (one message per event).
  • Tag History and Data Sync are enabled.
  • During a connection loss, Edge buffers data locally.
  • When the connection is restored, historical values are sent to the Standard Gateway with original timestamps.

On the Standard Gateway:

  • A Tag Change Script listens to the synced tag.
  • The script parses each incoming value and writes one row per event to a database.

**My question is:** Can you confirm that each replayed historical value will still trigger the Tag Change Script normally, allowing all buffered events to be processed and stored correctly, assuming the script uses event.timestamp and does not rely on real-time logic?
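For reference, the gateway-side script I have in mind is roughly shaped like this (a simplified sketch only; the table, columns, delimiter, and database connection name are placeholders, not my actual code):

```python
# Gateway Event Script -> Tag Change (sketch only).
# `newValue` is one of the variables Ignition provides to this script type;
# the table, columns, delimiter, and "process_db" connection are placeholders.
raw = newValue.value          # the string message synced from the Edge tag
ts = newValue.timestamp       # original timestamp carried by the qualified value
fields = raw.split(";")       # simplified parsing; the real message layout differs

system.db.runPrepUpdate(
    "INSERT INTO quality_events (event_ts, product_id, result) VALUES (?, ?, ?)",
    [ts, fields[0], fields[1]],
    "process_db",
)
```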

Thanks!

I could infer from your post, but more information is required:
Are you using MQTT or Ignition GAN / Remote Tag Provider to deliver live data from Edge to Full?
Do you have the Historian module installed & configured on the full gateway?

Data is being delivered from Edge to Full using Gateway Network / Remote Tag Provider.
The Historian module is installed and configured on the Full Gateway.

I think the answer is No, because it is Edge Sync that will deliver the history, not the remote tag provider.

You are already at risk of missing updates as an RTP cannot deliver values closer than one second apart.

But you should test if you wish to be sure.

Given this scenario, what would you consider the best practice for persisting data coming from an Edge gateway into a Full Gateway database, especially when the goal is analysis and traceability? More specifically, I’m looking at storing the incoming data split into columns, for example:

  • Value
  • Quality
  • Gateway receive timestamp
  • Edge timestamp (when available)
  • Error / parsing flag (in case the Edge timestamp is invalid or missing)

In this context:

  • Would Edge Sync → Historian → DB still be the recommended approach?

Curious to hear how others usually structure this to avoid data loss and still keep good diagnostics.

Edge sync's whole purpose is to reliably store all of those values, except the extra timestamps you seem to want.

I don't know of any system that will capture those extra timestamps. You may need standard Ignition with a database connection to do this.

I'm still not certain of your end-goal here. But I'll pull from this statement as the basis for my response.
In general, Edge gateway contains its own historian, local to the gateway. It stores all historical data locally, regardless of comm status to remote / full gateway. If you have visualization on your Edge gateway, you may query that internal database for historical data, etc.
Within your Edge sync settings, you define the rate at which Edge will publish historical data to the full gateway. This process publishes data directly from the Edge gateway to the store-and-forward engine of your choosing within the full gateway, and operates outside of any tag change script in your project. There is a confirmation handshake between the two gateways which ensures that ALL data is received. If anything goes wrong with the handshake, the data is published again from the Edge gateway.
Put another way, ALL historical data that is stored in the Edge historian (timestamps & values) WILL be published to the full gateway's historian (provided that comms are sufficient to publish data out faster than data arrives, the outage is less than 10M rows of data AND 35 days, etc.)
While these details are related to historical data, it sounds like you might have a separate process that you need to handle. Are you able to query the historian (on the full gateway) to serve data to your other process?


On the Edge gateway, I have a single tag that updates every few seconds, containing a string message with the quality test results of a product moving through the line. This string embeds multiple fields (IDs, measurements, status flags, timestamps, etc.).

Current setup:

  • This string tag is historized on the Edge.
  • Through Edge Sync, I have access to that history on the Full (Standard) gateway historian.
  • My goal on the Full gateway is not just to store the raw string, but to:
    • Parse each update
    • Split the message into individual fields
    • Persist those fields into a custom SQL table with multiple columns for later analysis.

What I initially tried:

  • I created a tag on the [default] provider in the Full gateway, bound to the Edge tag.
  • I attached a Tag Change Script to this tag to parse the string and insert the values into the database.

The issue:

  • When the Gateway Network connection drops, the Edge historian continues collecting data correctly.
  • However, during that outage, the Tag Change Script on the Full gateway does not execute, so none of those intermediate updates are parsed or written to the database.
  • When the connection is restored, only the latest value is seen by the tag, and the historical updates are effectively skipped for my custom table.

So my core question is:
Is there a recommended pattern to reliably parse and persist historized Edge data into structured database columns on the Full gateway, without losing data during communication outages?

In other words:

  • Is querying the historian on the Full gateway (rather than relying on Tag Change Scripts) the correct approach here?
  • Or is there another architecture better suited for transforming historized string data into relational records?

I’ve attached images for context to make the data flow clearer.


[Image: the tag with the Tag Change Script in Ignition Standard]

I trust Edge Sync services to deliver all data that you have historized.
As such, I would create a view on the historian (or a script within Ignition) which returns the data in the format you're expecting. The query should be able to select the string tag, parse out the values into different columns, etc.

Side note: parsing your string tag's value into meaningful fields is the kind of task where any LLM could generate a good starting point.

If you require the records in a separate database, then a periodic process to query the full gateway historian, grab any new values, parse them, and inject to your other tables would be my option #2.
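Very roughly, that option #2 could be a gateway timer script along these lines (a sketch under assumptions: the synced tag path, target table, connection name, and the memory tag used as a watermark are all placeholders):

```python
# Gateway Timer Script (sketch). The `system.*` functions are provided by the
# Ignition scripting environment; paths, table, and connection are placeholders.
watermarkPath = "[default]Sync/LastProcessed"      # hypothetical DateTime memory tag
start = system.tag.readBlocking([watermarkPath])[0].value
end = system.date.now()
if start is None:
    start = system.date.addHours(end, -1)          # first run: pick a sensible window

# returnSize=-1 asks the historian for raw, as-changed values.
ds = system.tag.queryTagHistory(
    paths=["[default]Edge/QualityMessage"],        # hypothetical synced string tag
    startDate=start,
    endDate=end,
    returnSize=-1,
)

for row in range(ds.getRowCount()):
    tStamp = ds.getValueAt(row, 0)                 # historian timestamp
    raw = ds.getValueAt(row, 1)                    # the raw string message
    if raw is None:
        continue
    fields = raw.split(";")                        # simplified parsing
    system.db.runPrepUpdate(
        "INSERT INTO quality_events (event_ts, product_id, result) VALUES (?, ?, ?)",
        [tStamp, fields[0], fields[1]],
        "process_db",
    )

system.tag.writeBlocking([watermarkPath], [end])
```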

Do the parsing on the Edge system and store the results in memory tags. Historize the decoded parts, not the string. Make sure they are set to Discrete deadband style.

No buffering or special code needed on the central Ignition gateway at all.
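Something like this on the Edge string tag, assuming pre-created memory tags (paths and message layout are placeholders):

```python
# Tag Event Script -> Value Changed on the Edge string tag (sketch only).
# The decoded memory tags below are hypothetical; historize them (not the
# string) with Discrete deadband style.
def valueChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
    if initialChange or currentValue.value is None:
        return
    fields = currentValue.value.split(";")   # simplified parsing
    system.tag.writeBlocking(
        ["[default]Line1/Decoded/ProductId",
         "[default]Line1/Decoded/Result",
         "[default]Line1/Decoded/MessageTimestamp"],
        [fields[0], fields[1], fields[2]],
    )
```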

I understand the suggestion, but in my case I need the data to be persistently and consistently stored in the database, with all values properly correlated in rows for auditing and later analysis. My focus is on ensuring the pipeline reliably saves the decoded data in an organized way at the database level, rather than relying only on in-memory tags. Given this requirement, what would be the recommended approach to guarantee this kind of structured, reliable storage?

Don't use Edge. Parse and store to your database with Standard Ignition in place of Edge.

Ignition historian technology is fundamentally non-synchronized among multiple values. To have synchronized storage, you need wide database tables, and Edge cannot do databases. :man_shrugging:

There is no recommended approach for your situation in Edge.

In my case, I'm using Ignition Edge because it runs locally on the line and continues collecting data even if the connection to the central gateway is lost. Given this, is there really no other option to achieve structured, reliable storage while still taking advantage of Edge’s local buffering and store-and-forward capabilities?

Edge's store-and-forward capabilities do not include traditional database functionality. That's the primary reason Edge is so cheap.

I strongly discourage users from deploying Edge in the same facility as Standard Ignition. Your better and usually cheaper solution to comms reliability in a single facility is multipath links and Rapid Spanning Tree Protocol, not Edge. (Also dramatically reduced engineering effort.)

  • How many data points are included in this...transaction?
  • What is causing these values to update (OPC tags via rate defined by tag group? Ignition driver? An event trigger? ...?)
  • How often is the data expected to update in your desired database (Approx: #/sec? #/min? #/hr?)
  • What is the maximum acceptable time of comms outage before data loss is expected?
  • What is the acceptable deviation between timestamps of values between points (# ms, # sec, etc.)?

Thanks everyone for the insights and questions. After reviewing the suggestions and rethinking the architecture, I found an alternative approach that I’m now testing: I configured the Standard Gateway database as a Remote History Provider on the Edge. With this setup, every new tag update is already persisted directly into the Standard Gateway’s database, while still benefiting from Edge’s local buffering and store-and-forward behavior during comms outages.

From this point on, my plan is simply to handle the structuring on the Standard side: a script will periodically process the newly inserted records (including batches that arrive after an outage) and reorganize them into another table designed specifically for reporting and analysis purposes. This keeps the persistence concern solved at the Standard level, while allowing me to maintain a clean, query-friendly schema. So at this stage, data persistence on the Standard Gateway is already guaranteed, and the remaining work is just organizational.
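For the structuring step, this is roughly the shape I have in mind (a sketch only: however the new rows are fetched, each raw message is split into the columns listed earlier; the table, columns, and message layout are placeholders):

```python
# Sketch of the restructuring pass on the Standard gateway. `rawRows` stands in
# for however the newly arrived records are read (historian query, landing
# table, etc.); the reporting table and message layout are placeholders.
def restructure(rawRows):
    for gwTs, quality, raw in rawRows:
        edgeTs = None
        parseError = False
        try:
            fields = raw.split(";")                                   # simplified layout
            value = fields[1]
            edgeTs = system.date.parse(fields[3], "yyyy-MM-dd HH:mm:ss")
        except:
            # bare except so Java exceptions from parsing are also caught (Jython)
            parseError = True
            value = raw                                               # keep raw payload for diagnostics
        system.db.runPrepUpdate(
            "INSERT INTO quality_report (value, quality, gw_received_ts, edge_ts, parse_error) "
            "VALUES (?, ?, ?, ?, ?)",
            [value, quality, gwTs, edgeTs, parseError],
            "process_db",
        )
```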

Thanks again to everyone for the contributions — they definitely helped steer me toward a more robust solution.

Very good idea.

Would MQTT be a better solution here? While it does require a broker as part of the architecture and an additional module on the standard gateway, there is an option in MQTT to send all buffered data in sequential order when a connection is restored. I believe this is for exactly this purpose.

  • Only one data point (single tag).
  • Values are updated as they arrive via a TCP connection.
  • Data is expected to update as fast as possible, approximately every second.
  • The maximum acceptable outage time corresponds to the Edge buffer capacity, which is around 35 days.
  • Each message contains its own timestamp, so in the final database table the timestamp comes directly from the message payload itself.