Automation Professionals' Integration Toolkit Module

Hey @pturmel,

I'm trying to figure out how I can calculate the time-weighted rate of change from a dataset with "t_stamp" and "value" columns using your toolkit expression functions, but I'm a bit lost with the summation index starting at i = 1 and ending at n-1 while still using element n.

sample dataset:

"#NAMES"
"t_stamp","value"
"#TYPES"
"date","F"
"#ROWS","6"
"2025-04-11 14:00:00.000","10.0"
"2025-04-11 14:01:00.000","20.0"
"2025-04-11 14:02:00.000","30.0"
"2025-04-11 14:03:00.000","50.0"
"2025-04-11 14:10:00.000","100.0"
"2025-04-11 14:15:00.000","100.0"

Without adding "now" it should return "90". Edit: that would only be the result if it wasn't working properly; if it was working properly it would return "6".

What a terrible example. That equation simplifies to (v_n - v_1) / (t_n - t_1). No need for the intermediate rows. (It does not yield 90, fwiw.)
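Worked through on the sample dataset above: (v_n - v_1) / (t_n - t_1) = (100.0 - 10.0) / (14:15 - 14:00) = 90 / 15 min = 6 per minute.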

The lag() function yields x_(r-1), with a null for the first row. Wrap lag() in a coalesce() to substitute in that first row. Or include an extra row as first, then cut it off with a where().
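A minimal sketch of that coalesce() idea, reusing the {path.to.source.dataset} placeholder from the expression further down and just computing per-row deltas:

forEach(
	{path.to.source.dataset},
	it()[1] - coalesce(lag(), it())[1] // no predecessor on row one, so its delta is zero
)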

1 Like

Whoops, I had a rough day :sweat_smile: I think my wife might have cooked her car engine, my son got nits from school, and I obviously forgot how numbers work :face_with_crossed_out_eyes:

Oops, I typed the wrong value, should have been 6!

Correct, yes!

OK, I'll give this a go, cheers!

1 Like

Try this:

transform(
	{path.to.source.dataset},
	sum(
		forEach( // value() here is the dataset handed to transform()
			value(),
			if(
				isNull(lag()),
				0.0,
				(it()[1] - lag()[1]) * (toMillis(it()[0]) - toMillis(lag()[0])) // delta value times interval in millis; first row contributes zero
			)
		)
	) / (toMillis(value()[len(value()) - 1, 0]) - toMillis(value()[0, 0])) // divided by the total time span in millis
)

{Untested.}

I think we're both overcooking it:

transform(
	{this.custom.key},
	(value()[len(value())-1, 1] - value()[0, 1]) / 
	(millisBetween(value()[0, 0], value()[len(value())-1, 0])/1000/60)
)
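With the sample dataset from the top of the thread, that works out to (100.0 - 10.0) / 15 min = 6 per minute, matching the corrected expectation.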

Hah! Indeed.

Tuck away what I show, though: if you replace the denominator with a time unit constant expressed in millis, the expression yields the integral of v dt in that timebase.

2 Likes

:frowning:

IGN-8204, someday, will let you do negative subscripts on ordered sequences.

Maybe I can convince someone to let me do a shotgun pass to the expression system...

3 Likes

Ooooo! I should add that to the qvAt() function.

That was easy. :grin:

Automation Professionals is pleased to announce a new Production release of this Integration Toolkit. New features and changes since v2.1.0:

  • Tag Actors
  • tags() refactored to return good quality even when some tag values are not good.
  • New qvAt() function to make the quality of list and dataset nested values accessible, with Python-like negative index support.

For Ignition v8.1: v2.1.1.251011523

4 Likes

...huh, is the expression system not running through jython -- is there a custom lexer/parser under there?

Expressions are pure Java, except where a function author deliberately calls out to jython. In IA's stuff, only the runScript() expression hands off to jython.

In my toolkit, objectScript() is a souped-up version of runScript(), and view() uses jython, too.

2 Likes

The reason users are advised to use expressions instead of scripts where possible is that the jython interpreter has significant overhead. And it is an interpreter, with all that implies.

Yep. It's generated code, but pure Java.

LR(1) lookahead, I'm gathering from the docs :slight_smile:
(I don't suppose y'all have a BNF for the expression language?)

Is there a set() equivalent (specifically, the list deduplication “feature”)? (Before I try to derive a creative one.)

You could do something funky like where({array}, indexOf({array}, it()) = idx()).
Seems to work in Perspective, at least.
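Spelled out with a comment, {array} being whatever list property you point it at:

where(
	{array},
	indexOf({array}, it()) = idx() // true only for the first occurrence of each element
)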

Pre-Expression:

[
  {
    "key": "value",
    "key_1": 2
  },
  "value2",
  {
    "key": "value",
    "key_1": 3
  },
  "value",
  {
    "key": "value",
    "key_1": 3
  },
  "value"
]

Post-Expression:

[
  {
    "key_1": 2,
    "key": "value"
  },
  "value2",
  {
    "key_1": 3,
    "key": "value"
  },
  "value"
]
1 Like

Just curious, what's the use case you have for this?

Deriving a single working tag path from “fuzzy matching” on unordered/unspecified path parts passed into a view. They could come from flex repeater instance parameters, implicit rowData parameters, a cell value parameter, etc. Each method has limitations and I don't want to hardcode full paths, but I could end up with duplicates. I could just iterate over the duplicates, but thought I might as well deduplicate them.

Speaking of which, I'm sure I've figured out why folder tag bindings return '2', but I can't remember at the moment.

For instance, each highlight is the exact same embedded view, but:

  • subview components are using full tag paths (results of system.tag.query from each row's hidden fullpath column)
  • most table columns are using derived paths based on cell value, rowData, and column viewparams with an “altPaths” list of possible subpaths/tag names
  • the % Valid column is using a completely different path that can't be derived entirely from rowData, so it uses string replacement (indirection) on a “parameterized” “full path” coming from the same column viewparams “altPaths” list

Yes, you can emulate set() with something like this:

forEach(
	asMap( // Keys of a map are forced to be unique.
		forEach( // Convert simple list to list of pairs where the 2nd value is a dummy
			{path.to.list.of.strings},
			it(),
			""
		)
	),
	it()[0] // Extract the unique keys back out of the map.
)
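So a plain list of strings with repeats, say ["a", "b", "a", "b"], comes back as just the distinct values.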
2 Likes

Phil,

Just wanted to say that this module has once again saved me a bunch of work.

Needed to filter a dataset because it couldn't be done in a Named Query, and I really didn't want to have to script the query just because Named Queries cannot utilize a list to facilitate an IN clause.

where() to the rescue.
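A rough sketch of that kind of IN-style filter, with {path.to.source.dataset} and {path.to.allowed.values} as placeholder bindings, and assuming indexOf() on a list returns -1 when the element isn't present (earlier in the thread it was only shown returning match positions):

where(
	{path.to.source.dataset},
	indexOf({path.to.allowed.values}, it()[1]) > -1 // keep rows whose value in column index 1 appears in the list
)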

Sometimes, I wonder how much time I would waste without this module.

6 Likes