Gateway Tag Change Events triggered by PLC Boolean tags

Hello,

I would like to get some feedback on the following scenario that I have implemented:

I have over 100 UDT instances (1 per PLC), each with an OPC tag pointing to a "Part Produced" pulse from the PLC. The pulse duration is programmed in the PLC to be 1 second. In Ignition, I am using the default scan rate of 1 second in subscribed mode.

I have a Gateway Tag Change event script that enters a row into a database each time any of the instances reports a "Part Produced". There is no handshake between Ignition and the PLC, as I just put this Ignition system in place and the PLCs were already programmed.

The problem we are having is that the actual number of parts produced does not match the number of records in the database. What's worse, the offset is in both directions! Our database shows more parts than actual for some shifts and fewer parts than actual for others.

I am not sure if Ignition is missing pulses. And if it is missing pulses, how can it have more records in the database than triggers?

I appreciate any feedback on this issue. Thank you.

How are you currently recording the # of parts to your database? Why not just point Ignition to your database and query from there?

  • It matters how long the signal will be off, as well as the pulse width when on. You have to catch both directions. To catch both directions, you typically need a poll rate that is half the shorter of the on time and the off time.

  • That poll rate has to be sustained--if you have any overload on your device, you will miss transitions. If there are any comms hiccups, you will miss more.

  • Consider not using booleans. The most reliable way to count production is to use non-resetting (odometer-style) counters in the PLC.
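
As a rough illustration of the half-period rule above (plain Python with a made-up function name, not an Ignition API):

```python
# Sketch of the sampling rule: to catch both edges of a pulse, poll at
# no more than half the shorter of the on-time and the off-time.
def max_poll_rate_ms(on_time_ms, off_time_ms):
    """Slowest poll rate (ms) that still catches both transitions."""
    return min(on_time_ms, off_time_ms) // 2

# A 1 s pulse that stays off for ~60 s between parts:
print(max_poll_rate_ms(1000, 60000))  # 500 -> poll at 500 ms or faster
```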


The database gets its data from Ignition, so the database would not have anything to query. The source of production is not the database; it is the PLC signal.

@pturmel Thank you very much for your quick response!

  • Your first point makes perfect sense. I agree with the poll rate you have described. In my case, that would mean I need to poll at 500 ms, because the ON time (one second) is the shorter of the two; the OFF time is around a minute.
  • I can see how we might miss the pulse if there are any hiccups, even if the poll rate is half of the pulse.
  • I have just checked the PLC logic. And luckily, it does have a non-resetting odometer-style counter tied to the pulse. So if I start pointing to that counter, Ignition should always be able to read each count-up even if there is a delay.
  • I agree with using the counter instead. Luckily, I am using a UDT so I can just change the tag address in my UDT and all 100 of my instances should automatically start pointing to the counter. Of course, I will also have to change the data type of the tag in the UDT to integer.

Although it is clear that this should take care of the issue, one of my colleagues thinks that we cannot trust Ignition to always catch a change in a PLC tag value. He says that with OPC-UA, there will always be some triggers missed. Something about this communication not being deterministic? I would like to get some feedback on how reliable event script execution would be if we correct the current trigger situation.

Forget the trigger.

Just monitor the counter on change, or periodically, or on a gateway schedule at shift start/end, on the hour, or every five minutes. You can then calculate the parts produced over any period of time by taking the difference between readings. If you lose comms or the gateway is restarted, it will get the new count when it comes back online, and you can still calculate totals (provided you don't miss readings at the boundaries of your search conditions).

Ideally your counter (in the PLC) never resets.

You might want to log to a dedicated table.
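
A minimal sketch of that differencing idea, in plain Python rather than Ignition scripting (the 32-bit counter width is an assumption; adjust to your PLC's actual data type):

```python
# Parts produced between two readings of a free-running counter.
# The modulo arithmetic tolerates a single rollover between readings.
COUNTER_MODULUS = 2**32  # assumption: 32-bit unsigned PLC counter

def parts_between(earlier_count, later_count, modulus=COUNTER_MODULUS):
    """Difference between two counter readings, modulo the counter width."""
    return (later_count - earlier_count) % modulus

print(parts_between(100, 175))                 # 75 parts in the interval
print(parts_between(COUNTER_MODULUS - 5, 10))  # 15, across a rollover
```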

For the timing situation you describe, your problem will be duplicate triggers on quality changes and/or tag restarts, not missed triggers. I recommend you write the count value to the DB and use a unique index in the DB to reject duplicate counts. That will be very robust.

If you have ancillary data you wish to capture for each cycle completed, use system.opc.readValues() to explicitly get them. (Subscribed tags might have values from prior to completion after the trigger fires. Subscriptions have no delivery order control.)

@Transistor ,

That sounds like a more reliable solution because it will not depend on catching each trigger. But my goal is not just to count parts at the end of the day or shift; it is also to provide a live, running cycle time of the production line, which is used for calculating metrics as these cycles come into the database. So I do need to provide a more live solution, rather than calculating the difference in the number of parts produced after a certain period of time.

You have the live count from the OPC tag.
You should have the most recent count and timestamp available from the database.
Create an expression tag to use those bits of information and now() to calculate the rate of change (parts per minute) or whatever.
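
Sketched in plain Python (the Ignition expression tag would use `now()` and tag/query bindings; all names here are illustrative):

```python
from datetime import datetime, timedelta

def parts_per_minute(last_db_count, last_db_time, live_count, now):
    """Average production rate since the most recent database reading."""
    minutes = (now - last_db_time).total_seconds() / 60.0
    if minutes <= 0:
        return 0.0  # avoid dividing by zero right at a reading
    return (live_count - last_db_count) / minutes

t0 = datetime(2023, 1, 1, 8, 0, 0)
print(parts_per_minute(500, t0, 520, t0 + timedelta(minutes=10)))  # 2.0
```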

I do have a UID set up in my table, as shown below. How exactly are you recommending I use the UID to see if the count value that I am inserting is already present? I imagine doing something like this before the INSERT: "SELECT id where int_value = new_int_value". If the number of rows returned is greater than 0, don't insert that record; otherwise move forward with the insert.

Is that what you have in mind?

No, selecting then conditionally inserting is terribly racy and will double the workload on your DB. Just create a unique index in the DB for your unique ID column. Then any insert that would clash will be rejected by the DB. Catch DB errors and if you get that one, discard and carry on. (If using a competent DB, you could use an ON CONFLICT clause to have the DB proactively discard without throwing an exception.)
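
For illustration, a standalone sketch using SQLite (table and column names are invented; SQLite's `INSERT OR IGNORE` plays the role of PostgreSQL's `ON CONFLICT DO NOTHING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (count_value INTEGER UNIQUE)")

def log_count(count_value):
    # The unique index rejects duplicates inside the database itself,
    # with no SELECT-then-INSERT race and no exception to handle.
    conn.execute(
        "INSERT OR IGNORE INTO production (count_value) VALUES (?)",
        (count_value,),
    )
    conn.commit()

for c in (41, 42, 42, 43):  # 42 arrives twice, e.g. after a tag restart
    log_count(c)
rows = conn.execute("SELECT COUNT(*) FROM production").fetchone()[0]
print(rows)  # 3 -- the duplicate insert was silently discarded
```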

I am only triggering my script on value change, as shown below, so I assume quality changes will not trigger the script. I am also filtering out the initial change, so restarts should not trigger it either. Please see the code below.


# Create logger to debug code
logger = system.util.getLogger("Production")
if not initialChange:
	# Get tagPath
	tagPath = str(event.getTagPath())
	# log tagPath
	logger.info(tagPath)
	# Get the tag that changed
	trigger = str(event.tagPath.getItemName())
	# Log the tag name that triggered the script
	logger.info(trigger)
	# Get UDT instance name
	parentPath = str(event.tagPath.getParentPath())
	parentPathList = parentPath.split("/")
	udtInstance = parentPathList[-1]
	# Log the instance name
	logger.info(udtInstance)
	# Get trigger time
	dateTime = newValue.getTimestamp()
	logger.info(str(dateTime))
	# Get trigger oldValue
	prevValue = str(previousValue.getValue())
	logger.info(prevValue)
	# Get trigger newValue
	triggerNewValue = str(newValue.getValue())
	logger.info(str(triggerNewValue))
	# Get trigger quality
	quality = str(newValue.getQuality())
	logger.info(str(quality))
	
	# Logic
	if trigger == "PartCountPulse" and triggerNewValue == "True":
		logger.info("Unit Produced")
		
		# Get sourceIDs
		(plantID,zoneID) = GDH.Operations.getSourceIDs(tagPath)
		logger.info(str(plantID)+" "+str(zoneID))
		
		# data entry
		result = GDH.Operations.logProduction(plantID,zoneID,dateTime)
		if result:
			logger.info("Production Logged")
					
		

Please excuse my limited DB knowledge, but if I just create a unique index for my id column, wouldn't that just ensure unique values in the id column, which is already the case? Based on what I have just looked up, it seems like I need to create a unique index covering all the columns, so there is no duplicate combination of all the columns.

I thought you meant you were getting a unique production ID from the PLC. If not, record the count value to the DB and give that column a unique index.

The PLC just has a counter that counts up each time a pulse is generated. That counter value will eventually max out and probably require a reset. Even though it might be years before that happens, it will repeat count values when it eventually gets reset. That is my first concern with the database not accepting repeat counter values. The second concern is that there are over 100 counters altogether in the plant, all going into the same SQL table, so it will most likely contain common counter values.

With that said, if I apply the unique index to all columns so that the combination is always unique, that might work... but I need to see how to apply it.

If you have 100 counters, then include a code for which counter it is in another column, and make the unique index include both columns. So it enforces a unique counter&count pair.
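
A sketch of that composite unique index with SQLite (invented names; the same idea works in any SQL database), including the catch-and-discard handling of duplicate-key errors suggested earlier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE production ("
    " counter_id TEXT,"
    " count_value INTEGER,"
    " UNIQUE (counter_id, count_value))"  # one row per counter&count pair
)

def log_count(counter_id, count_value):
    try:
        conn.execute(
            "INSERT INTO production (counter_id, count_value) VALUES (?, ?)",
            (counter_id, count_value),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate counter&count pair: discard and carry on

print(log_count("PLC_01", 100))  # True
print(log_count("PLC_02", 100))  # True  -- same count, different counter
print(log_count("PLC_01", 100))  # False -- duplicate pair rejected
```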

Don't let counters reset. At once per minute, a 32 bit counter will last over seven thousand years. I think you'll be OK.
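
The arithmetic behind that estimate:

```python
# A 32-bit counter incremented once per minute:
minutes_per_year = 60 * 24 * 365
years = 2**32 / minutes_per_year
print(round(years))  # 8172 -- comfortably over seven thousand years
```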

To avoid PLC screw ups, use a trigger on PLC mode change that will cause an Ignition script to run, fetching the most recent count for each counter, and writing those to the PLC.


I just realized that the PLC counter can last longer than the lifetime of this system, so that should not be an issue. However, there are other factors that could still end up resetting the counter, accidentally or otherwise. I am thinking maybe I can add some handling logic in my script to write the last good value back to the counter if I get an unexpected counter value, and still insert a record into the DB in that case.

LOL. I think we were literally just proposing the same idea to each other!

This is very interesting. Are you referring to special tags in AB PLCs? How would you detect mode change?

There are lots of ways to solve these problems. Personally, for slow processes where you wish to record every cycle, I prefer to use an LINT microsecond timestamp (Rockwell's UTC wall clock) as the trigger. It is easy to check for validity.