Tag Change Event on Write when tag value is unchanged

Is there a way to generate a tag change event "on write", such that if the exact same value a tag already holds is written to it again at a later point in time, a tag change event is still generated?

The application: we use a Kafka history sink to send data to a central repository, and we have been asked to send the value four times per hour, at 0, 15, 30, and 45 minutes past the hour.

If we can make the tag change event fire each time we write to the tag, regardless of whether the value changed, I will always end up with four values per hour in the central cloud DB.

If we can't do that, we will do interpolation in the query that pulls from the central repo.
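
For reference, the brute-force approach we were considering is a gateway scheduled script that fires at 0, 15, 30, and 45 past the hour, reads each tag, and writes the same value straight back; whether that fires a change event when the value is unchanged is exactly the open question. A minimal sketch (the tag path is hypothetical):

# Gateway scheduled script (runs at 0, 15, 30, 45 past the hour) - hedged sketch.
# Reads the current value of each tag and writes it straight back.
paths = ["[default]Dev/nr/testInt"]  # hypothetical tag path
values = [qv.value for qv in system.tag.readBlocking(paths)]
system.tag.writeBlocking(paths, values)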

History Provider

Transmission Mechanism

@Override
public void storeData(HistoricalData data) throws IOException {
    int pathIndex = 0;

    for (HistoricalData row : BasicDataTransaction.class.cast(data).getData()) {
        BasicScanclassHistorySet scanset = BasicScanclassHistorySet.class.cast(row);
        if (scanset.size() == 0) continue;

        String gatewayName = this.hostName;
        String provider = scanset.getProviderName();
        // Doubles as a "processed at least one non-empty scan set" flag below.
        pathIndex = provider.length() + 2;

        for (HistoricalTagValue tagValue : scanset) {
            try {
                // Flatten each historical value into a single JSON message.
                String json = new JSONObject()
                        .put("gatewayName", gatewayName)
                        .put("provider", provider)
                        .put("tagPath", tagValue.getSource().toString().replace("[" + provider + "]", ""))
                        .put("type", tagValue.getTypeClass())
                        .put("quality", tagValue.getQuality())
                        .put("value", String.valueOf(tagValue.getValue()))
                        .put("epochms", tagValue.getTimestamp().getTime())
                        .toString();

                SinkData value = new SinkData(topic, json, this.getPipelineName());
                this.sendDataWithProducer(value);

            } catch (JSONException e) {
                logger.error("Error sending tag: " + e.toString());
            }
        }
    }
    if (pathIndex > 0) {
        setLastMessageTime(this.name);
    }
}
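
For reference, a message produced by this method looks roughly like the following; all field values here are illustrative, and the actual output is a compact single-line string:

{
    "gatewayName": "gw01",
    "provider": "default",
    "tagPath": "Dev/nr/testInt",
    "type": "Integer",
    "quality": "GOOD_DATA",
    "value": "69420",
    "epochms": 1672531200000
}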

Thanks,

Nick

I had not noticed its existence until now, but I am going to evaluate this function.

https://docs.inductiveautomation.com/display/DOC81/system.tag.storeTagHistory

On the surface, it looks like it would allow us to store to the history provider of our choice at the time of our choosing. The devil is in the details, so I will run some tests and post the results back here, as always, in case they can be of use to others in the future.

Meanwhile, if anyone has any other silver bullets, let me know.

Thanks,

Nick

Use of `system.tag.storeTagHistory` looks like it may be sufficient for our needs. I ran a quick test and it easily propagated data from the gateway, to Kafka, and into our central DB with the timestamps of our choosing.

Note that we only transmit the most important values :slight_smile:

Test Code

historyprovider = "kafka-tag-history"  # our Kafka history sink provider
tagprovider = "default"
paths = ["Dev/nr/testInt"]
values = [69420]
qualities = [192]  # 192 = Good
time = common.datetime.getRecentHourEnding()  # project helper for the most recent hour boundary
timestamps = [system.date.addMinutes(time, 45)]

system.tag.storeTagHistory(
	historyprovider,
	tagprovider,
	paths,
	values,
	qualities,
	timestamps
)
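
If we end up needing to backfill all four quarter-hour marks in one pass, a loop over offsets in the same style should work; a hedged sketch reusing the variables above, with the offsets and values purely illustrative:

# Hedged sketch: store one value per quarter-hour mark, one call per timestamp.
base = common.datetime.getRecentHourEnding()
for offset, v in [(0, 100), (15, 101), (30, 102), (45, 103)]:
	system.tag.storeTagHistory(
		historyprovider,
		tagprovider,
		paths,
		[v],
		[192],  # Good
		[system.date.addMinutes(base, offset)]
	)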

Data At The End of the Pipeline

Nick

I'm not 100% sure on your setup, but tags have a Quality Changed event that will trigger on anything that changes in the qualified value. This means the event fires for value, timestamp, or quality changes.

If the tag is written to but the value does not change, the quality change event still fires because the timestamp will update.
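
If that behavior holds, a Quality Changed tag event script could forward every write regardless of whether the value changed; a minimal sketch, reusing the provider names from the test above and assuming the standard tag event script signature:

def qualityChanged(tag, tagPath, previousValue, currentValue, initialChange, missedEvents):
	# Hedged sketch: forward every qualified-value update, including
	# timestamp-only updates, to the history provider used in this thread.
	system.tag.storeTagHistory(
		"kafka-tag-history",
		"default",
		[str(tagPath)],
		[currentValue.value],
		[192],  # Good
		[currentValue.timestamp]
	)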


What triggers it is the tag history configured on the tag, and that only fires when the value itself changes. In summary, exactly how Ignition stores tag history.

Our setup is a Kafka producer module we wrote that can pick up and transmit tags, alarms, and audit events.

In the case of tags, it is a history provider that can be used directly or with a splitter connection.

Using this method is advantageous because I can control which timestamp is sent. Some of the things we compute and send are pulled from a local database, but what writes into that database is buffered and written in batches, so if I query exactly on the quarter-hour times, data will be missed. For this reason I wait until a bit later to execute, but the actual query is for an earlier time, and this function allows me to represent that earlier time in the timestamp sent in the Kafka message.
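
A rough sketch of that pattern; the query, table, and tag path are hypothetical, and common.datetime.getRecentHourEnding is the project helper from the test above:

# Runs a few minutes after the quarter-hour mark so the batched database
# writes have landed; the timestamp we store is the mark itself.
mark = system.date.addMinutes(common.datetime.getRecentHourEnding(), 45)
value = system.db.runScalarPrepQuery(
	"SELECT computed_value FROM local_buffer WHERE t_stamp <= ? ORDER BY t_stamp DESC",  # hypothetical query
	[mark]
)
system.tag.storeTagHistory(
	"kafka-tag-history",
	"default",
	["Dev/nr/computedValue"],  # hypothetical tag path
	[value],
	[192],  # Good
	[mark]
)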

Cheers,

Nick