Transaction groups triggered expression item updating when trigger is false

I have a transaction group with triggered expression/SQL items and a run-always expression item used as the trigger for the group. When the trigger (the run-always item) references any of the triggered items, those items update even though the trigger is false. Is there any way to stop this behavior? The triggered items store relevant information I need between triggers.

Please, some of you Ignition people: is this the expected behavior for triggered items?

Try setting up a transaction group with a false trigger, then create a triggered expression item that just references some tag. Now start the group and see that its value is ‘N/A’ (as expected). Next, create a run-always item that references the triggered item, still with the trigger set to false, and watch the triggered item update when the group is started!

To me (at least), this is unexpected. Running Ignition Platform 7.9.5.

Makes sense to me. It needs that value in your expression, so it’ll execute to get it. I don’t see how it could work any other way.

The problem is that the triggered item is then updated every time the transaction group runs; e.g., with the trigger false for 10 seconds and the group running every second, that one triggered item behaves like a run-always item. The expected behavior is for the triggered item to hold the value from the time the group was last actually triggered.

I know the value will be ‘N/A’ or None when the group is started, but I still think this is wrong.

Another question, then: do you have an approach for making a group run at dynamic time intervals? I tried doing it with a triggered item storing the system time every time it was triggered, and a run-always item as the trigger comparing the last trigger time with the current system time. My approach fails because the triggered item containing the last trigger time updates every time my run-always trigger looks at that value.

I would use the state dictionary in an objectScript() expression…
{ Available in the Simulation Aids Module }
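For readers unfamiliar with the module: objectScript() evaluates a Python script inside an expression binding and hands it a `state` dictionary that persists across evaluations of that binding. A rough plain-Python analogy of that persistence (the `object_script` helper here is a hypothetical stand-in for illustration, not the module's actual implementation):

```python
def object_script(script, state):
	# Evaluate the script body with 'state' in scope; the caller keeps
	# the same dict alive between calls, so values survive evaluations.
	env = {'state': state}
	exec(script, env)
	return env.get('result')

state = {}  # one dict per binding, reused on every evaluation
object_script("state['n'] = state.get('n', 0) + 1\nresult = state['n']", state)
r = object_script("state['n'] = state.get('n', 0) + 1\nresult = state['n']", state)
# r == 2: the counter carried over between evaluations
```

Because the dict is only touched when the script says so, it can hold a "last trigger time" that does not change merely because something reads it.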

1 Like

Thank you!

I downloaded the Simulation Aids Module and must say it provides some much-needed features :slight_smile:

1 Like

Now that @pturmel and the people behind it were so kind as to make the Simulation Aids Module free for use, I thought I would share my solution for making my transaction group trigger at variable intervals.

Note: the solution depends on the Simulation Aids Module, and the group's timer must be set faster than the fastest desired interval.

First, these are the script modules I created:

from __future__ import with_statement
from shared.util import CleanUp

def setTriggerTime(state):
	"""
		Defined as a separate function because the source of the time value was not fully decided at creation.
	"""
	
	state['triggerTime'] = system.date.now()



def getTriggerByInterval(state, interval=5000, firstTrigger=False):
	"""
		Used by transaction groups to trigger on variable intervals.
		Intended to be used on a 'run-always' boolean expression item as trigger for the group, used like
		this with the minimum allowed arguments specified:
		
			objectScript("shared.transactiongroups.getTriggerByInterval(state, *args)")
			
		This method adds 'triggerTime' to the 'state' dictionary.
		
		Arguments:
			state: The object script's internal state dictionary.
			interval: The interval between triggers. Time in milliseconds.
			firstTrigger: Trigger on first group execution.
	"""

	with CleanUp(lambda: setTriggerTime(state)) as m:
	
		# If no trigger time, set the trigger to the state of the first trigger argument.
		if state.get('triggerTime') is None:
			trigger = firstTrigger
		
		# If the current time since last trigger is greater than the interval specified,
		#	Set the trigger to true.
		elif system.date.millisBetween(state['triggerTime'], system.date.now()) >= interval:
			trigger = True
			
		else:
		
			# At this point just set the trigger to false and cancel setting the trigger time.
			trigger = False
			m.cancel()
	
	# Finally, return the state of the trigger.
	return trigger
	
	
def getTriggerByIntervalAndIntervalChange(state, interval=5000, firstTrigger=False):
	"""
		Extends the 'getTriggerByInterval' function adding a trigger on interval change.
		This method is called the same way as the trigger by interval function.
		
		This method adds 'interval' to the state dictionary.
		
		Arguments:
			See 'getTriggerByInterval'.
	"""
	
	def f_cleanup():
		state['interval'] = interval
		
	
	with CleanUp(f_cleanup) as m:
	
		# Return true if triggered by interval.	
		if getTriggerByInterval(state, interval, firstTrigger):
			return True
		
		# If the stored interval is greater than the current interval (i.e. the
		#	interval was shortened), set the trigger and update the trigger time.
		#	Use .get() so the first call, before 'interval' is stored, doesn't raise a KeyError.
		if state.get('interval', interval) > interval:
			setTriggerTime(state)
			return True
		
		# At this point, just return false.
		return False
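To sanity-check the interval logic outside Ignition, here is a plain-Python rendition of getTriggerByInterval with the clock passed in as a parameter (instead of Ignition's system.date calls) so it runs anywhere. This is an illustrative sketch, not the production code above:

```python
def get_trigger_by_interval(state, interval=5000, first_trigger=False, now_ms=0):
	# Mirrors the posted logic: 'state' persists between calls,
	# 'now_ms' stands in for system.date.now() in epoch milliseconds.
	last = state.get('triggerTime')
	if last is None:
		trigger = first_trigger          # first execution: use firstTrigger
	elif now_ms - last >= interval:
		trigger = True                   # interval elapsed since last trigger
	else:
		trigger = False                  # still inside the interval
	if trigger or last is None:
		state['triggerTime'] = now_ms    # (re)stamp, mirroring the CleanUp path
	return trigger

state = {}
get_trigger_by_interval(state, 5000, False, now_ms=0)   # False; stamps time 0
get_trigger_by_interval(state, 5000, now_ms=3000)       # False; only 3 s elapsed
get_trigger_by_interval(state, 5000, now_ms=6000)       # True; 6 s >= 5 s interval
```

Note that the trigger time is stamped on the very first call even when the result is false, matching the CleanUp-based version, which only cancels the stamp in the "inside the interval" branch.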

The code depends on an old manager class I created long ago (the CleanUp class).

class CleanUp(object):
	"""
		Manager class for running a cleanup function on exit.
	"""
	
	class Cancel(Exception):
		"""
			Used for stopping the manager without cleanup.
		"""
		pass
	
		
	class Stop(Exception):
		"""
			Used for stopping the manager with cleanup.
		"""
		pass
		
		
	def __init__(self, f_cleanup):
	
		# When initialized, store the internal cleanup function
		self.f_cleanup = f_cleanup


	def __enter__(self):
	
		# Return self as the manager.
		return self


	def __exit__(self, ex_type, ex_value, traceback):

		# Detect if the cleanup was cancelled. Cancelled if exception of 
		#	type 'CleanUp.Cancel' is raised
		cancelled = ex_type and issubclass(ex_type, self.__class__.Cancel)

		# If clean up wasn't cancelled, run the clean up function. This will run
		#	even though execution was stopped by an exception.
		if not cancelled:
			self.f_cleanup()

		# Return 'True' if no exception or the exception stopping the manager was
		#	the 'CleanUp.Cancel'.
		return (not ex_type or issubclass(ex_type, self.__class__.Stop)) or cancelled


	def stop(self):
		"""
			Stops the manager, finishing with the cleanup function.
		"""
		
		raise self.__class__.Stop
		

	def cancel(self):
		"""
			Cancels the manager, stopping execution of logic without doing clean up.
		"""
		
		raise self.__class__.Cancel
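To show the stop/cancel semantics in isolation, here's a condensed standalone copy of the class with a quick demonstration (behavior matches the code above; condensed purely for illustration):

```python
class CleanUp(object):
	# Condensed copy of the manager above, for demonstration only.
	class Cancel(Exception): pass   # abort the block, skip cleanup
	class Stop(Exception): pass     # abort the block, still run cleanup

	def __init__(self, f_cleanup):
		self.f_cleanup = f_cleanup

	def __enter__(self):
		return self

	def __exit__(self, ex_type, ex_value, tb):
		cancelled = ex_type is not None and issubclass(ex_type, CleanUp.Cancel)
		if not cancelled:
			self.f_cleanup()        # cleanup runs unless cancel() was called
		# Swallow Stop/Cancel; let any other exception propagate.
		return (ex_type is None or issubclass(ex_type, CleanUp.Stop)) or cancelled

	def stop(self):
		raise CleanUp.Stop

	def cancel(self):
		raise CleanUp.Cancel

log = []
with CleanUp(lambda: log.append('cleanup')) as m:
	log.append('body')              # normal exit: cleanup runs afterwards

with CleanUp(lambda: log.append('cleanup')) as m:
	m.cancel()                      # cancel(): block aborts, cleanup skipped
	log.append('never reached')

# log is now ['body', 'cleanup']
```

This is why the trigger functions can "decide late" whether to stamp the trigger time: the stamp is the cleanup, and cancel() vetoes it.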

Finally, a run-always expression item is made in the transaction group, using the following expression:

objectScript("shared.transactiongroups.getTriggerByIntervalAndIntervalChange(state, *args)",
	{[~]Interval},
	True
)

The interval can now be set using the {[~]Interval} tag.

Edit:

Forgot to mention: the group should of course be set up to use this item as its trigger, executing on a True condition.

Consider not using the shared.* namespace for your scripts. The examples in the objectScript() docs were written with expression tags in mind, which require that. Transaction Groups always exist in a project, so the project.* namespace is available – I would highly recommend it.

Is there any difference other than that, with a shared script, the code could be changed from another project? Since the script is generic, using 'state' as storage between calls, I find it fits perfectly in the shared namespace.

Editing and saving a shared script restarts gateway scripting in all projects, and prompts or triggers client restarts for all projects (unless that’s been turned off). It is incredibly disruptive on large systems, and unless you are really careful with state, can really scramble an operation. I recently posted some comprehensive advice for consideration. You might find the whole thread helpful.

@pturmel, I’ve been toying a lot with the object script from your module, having one thing I don’t understand though. For demonstration I have an expression that looks like this:

runScript("str(hasattr(self, 'someCustomMethod'))") + '/' +
objectScript("str(hasattr(binding.target, 'someCustomMethod'))")

which outputs True/False. Why is that?
I've tried a lot of things to figure out why; however, my recent tests

runScript("str(self.hashCode())") + '/' + 
objectScript("str(binding.target.hashCode())")

outputs 1728264954/1728264954

AND

runScript("str(id(self))") + '/' + 
objectScript("str(id(binding.target))")

outputs 2/2.

Since self from the runScript function and binding.target from the objectScript function seem to be the same object, I've given up figuring out why I can't call the custom methods through objectScript's binding target.

Would really appreciate if you could explain what is going on :slight_smile:

Interesting. That means custom methods behave like custom properties in java-to-jython conversions.
So: Java objects have to be wrapped in jython shells to be used within the jython interpreter. The standard java-to-jython shell exposes the methods as-is, and also takes any methods that fit the JavaBeans getter/setter naming conventions and exposes those as the corresponding properties. Ignition overrides the standard conversions so that its components have their custom properties and methods injected into the instance dictionary. This override must have some explicit call requirement, as I've noted that once wrapped in a jython shell, most java methods that return a component won't inject that component's custom properties. So the binding's getTarget() method yields a bare component instead of invoking Ignition's extended wrapper.
I may have to experiment with Jython’s .toPy() type registry to see if I can figure it out. Or maybe someone from IA will pipe up with a hint. @Carl.Gould ?
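A plain-Python analogy of that hypothesis (this is an illustration of instance-dict injection, not Ignition code; `Wrapper` and `someCustomMethod` are made-up names):

```python
# If custom methods are injected into one wrapper instance's __dict__ rather
# than into a shared class, a second "bare" wrapper around the same underlying
# target won't expose them, even though both delegate to the same object.

class Wrapper(object):
	def __init__(self, target):
		self.target = target
		self.hashCode = lambda: id(target)  # both wrappers delegate identically

java_obj = object()                     # stands in for the Java component

w1 = Wrapper(java_obj)
w1.someCustomMethod = lambda: 'hi'      # injected into w1 only (the runScript self case)

w2 = Wrapper(java_obj)                  # re-wrapped bare (the binding.target case)

print(w1.hashCode() == w2.hashCode())   # True: same underlying object
print(hasattr(w1, 'someCustomMethod'))  # True
print(hasattr(w2, 'someCustomMethod'))  # False
```

That would explain identical hashCode()/id() results alongside differing hasattr() results.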

It would be really great if this were possible. I'm trying to make a template with an in/out parameter that also needs to bind to a property inside the template. Using objectScript() seems to be the only way to accomplish this without using the propertyChanged event.

Ok, so I’ve been down the rabbit hole and found Ignition’s (undocumented) PyComponentWrapper. I haven’t deciphered it enough to be sure, but it seems to be invoked only for particular situations, based on the string constants in the class file (like “getComponent” and “getComponents”).
To play with this, I constructed a conversion adapter like so:

public class PyComponentAdapter implements PyObjectAdapter {

	public PyComponentAdapter() {}

	@Override
	public boolean canAdapt(Object jObj) {
		return JComponent.class.isInstance(jObj);
	}

	@Override
	public PyObject adapt(Object jObj) {
		return new PyComponentWrapper(jObj);
	}
}

Followed by this initializer in my hook class:

static {
	Py.getAdapter().addPostClass(new PyComponentAdapter());
}

Which yielded this beta version of Simulation Aids. Your first expression above yields True/True with it. Give it a spin. I’m not entirely sure there isn’t a gotcha here, but it appears to work.
I’ll bet PyComponentWrapper could be simplified if this was adopted by IA in the base product.

Edit: See the newer version below. The March 15 version has been removed.

1 Like

Thank you very much for the quick fix. I've been testing this new beta on a few use cases and haven't hit any gotchas yet :slight_smile:

1 Like

So, I’m concerned that catching all JComponents is way too aggressive and will hurt performance on complex UIs. I’ve adjusted the adapter like so:

	public boolean canAdapt(Object jObj) {
		return DynamicPropertyProvider.class.isInstance(jObj)
			|| FPMIWindow.class.isInstance(jObj);
	}

and created a new version of the beta. Please give it a whirl instead.

1 Like

Will try it, thank you.

Tried it, and it seems like it's working as well :slight_smile:

1 Like

So with some feedback from IA, I’ve spun up a new version, and I consider this one to be a release candidate.
Two changes were made:

  • The classes/interfaces to be checked within the adapter itself were adjusted to include the VisionComponent interface. I left the DynamicPropertyProvider interface in the checklist in case there are third-party modules supplying components without the VisionComponent interface.

  • The static block in my module hook was changed to first check if Py.java2py() yields a properly wrapped generic component. Like so:

static {
	PyObject pyo = Py.java2py(new PMILabel());
	if (!PyComponentWrapper.class.isInstance(pyo))
		Py.getAdapter().addPostClass(new PyComponentAdapter());
}

This is to avoid doubling up the adapter when IA adds this functionality to the base install. :grin:
( A doubled adapter wouldn’t really hurt anything, but for an itty-bitty conversion delay. )