Ignition Extensions - Convenience utilities for advanced users

Hi Paul,

Can you also add a simple WebSocket server to the extensions?
You can use system.util.getGlobals() to pass data to Perspective.

Probably not as part of Ignition Extensions, but I might put up a standalone example module to do that at some point in the future.

How about:

system.dataset.builder(colName1=int, colName2=str).addRow(1, 'abc').build()

Too 'cute' to embed column names as keyword arguments?
One downside is embedding column names with spaces:

from collections import OrderedDict
columns = OrderedDict()
columns["Column 1"] = int
columns["Column 2"] = str
system.dataset.builder(**columns).addRow(1, 'abc').build()

I suppose you could call builder(), and then I could add setters for column names and column types to the builder object; that way you can get the terser syntax if you don't need exotic column names, but it's available if you do.

Meh. DatasetBuilder gives you total control in a very terse package. Maybe just expose it in system.* so we don't have to import it (with the Java class names suited to datasets, for use with its colTypes method). Its methods are chainable, so one-liners are easy.

The keyword args are optional, so you can either use system.dataset.builder().colNames("colName").colTypes(colType) or system.dataset.builder(colName=colType).

Hmm. Are you suggesting exposing e.g. system.dataset.String, system.dataset.Integer? Or something else?

Those, or the abbreviations used in CSV files. Or both. But limited to the classes in the dataset tag whitelist.

Or maybe re-implement .colTypes to also accept strings of the simple class names or the abbreviations.

0.6.0 available now:


  • system.dataset.equals(dataset, dataset): Boolean
    Returns True if both input datasets have the same columns, with the same types, and the same values.

  • system.dataset.valuesEqual(dataset, dataset): Boolean
    Returns True if both input datasets have the same number of rows/columns, and those rows/columns have the same values in the same places.

  • system.dataset.columnsEqual(dataset, dataset, ignoreCase=False, includeTypes=True): Boolean
    Returns True if both input datasets have the same column definitions. Use the optional keyword arguments to make the behavior more lenient.

  • system.dataset.builder(**columns): DatasetBuilder
    Returns a wrapped DatasetBuilder. Provided keyword arguments (if any) are used as column names; the values should be Java or Python types, or the 'short codes' accepted by system.dataset.fromCSV:

    alias     class
    "byt"     byte.class
    "s"       short.class
    "i"       int.class
    "l"       long.class
    "f"       float.class
    "d"       double.class
    "b"       bool.class
    "Byt"     Byte.class
    "S"       Short.class
    "I"       Integer.class
    "L"       Long.class
    "F"       Float.class
    "D"       Double.class
    "B"       Boolean.class
    "O"       Object.class
    "clr"     Color.class
    "date"    Date.class
    "cur"     Cursor.class
    "dim"     Dimension.class
    "rect"    Rectangle.class
    "pt"      Point.class
    "str"     String.class
    "border"  Border.class

    In addition, the colTypes function can now be called with the same short codes, or common Python types (e.g. str instead of java.lang.String).
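For anyone who wants to try the chainable shape outside Ignition, here's a minimal pure-Python sketch of the builder API described above. The class name, the dict returned by build(), and the small subset of short codes are all illustrative assumptions; the real system.dataset.builder wraps Ignition's DatasetBuilder and accepts the full whitelist.

```python
# Illustrative sketch only -- NOT the real system.dataset.builder.
class SketchDatasetBuilder:
    # A small, assumed subset of the short codes from system.dataset.fromCSV
    SHORT_CODES = {"i": int, "l": int, "d": float, "b": bool, "str": str}

    def __init__(self, **columns):
        # Keyword arguments (if any) become column names; values are
        # Python types or short-code strings.
        self._names = list(columns.keys())
        self._types = [self._resolve(t) for t in columns.values()]
        self._rows = []

    def _resolve(self, t):
        # Accept either a short-code string or a type directly.
        return self.SHORT_CODES[t] if isinstance(t, str) else t

    def colNames(self, *names):
        self._names = list(names)
        return self  # chainable

    def colTypes(self, *types):
        self._types = [self._resolve(t) for t in types]
        return self  # chainable

    def addRow(self, *values):
        self._rows.append(list(values))
        return self  # chainable

    def build(self):
        return {"columns": self._names, "types": self._types, "rows": self._rows}
```

Both spellings from the thread then work the same way: `SketchDatasetBuilder(id="i", name="str").addRow(1, 'abc').build()` or `SketchDatasetBuilder().colNames("id", "name").colTypes("i", "str").addRow(1, 'abc').build()`.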


Great stuff @PGriffith. The string for column types is better than my original suggestion and more consistent with what Ignition already uses.

One question re: system.dataset.equals vs. system.dataset.valuesEqual - the first checks for the same columns/types/values, and the second checks for the same rows/columns and values, but not types. Is it possible, then, for two datasets with differing row counts to return True from system.dataset.equals? I'm just trying to understand a situation where system.dataset.equals would return True but system.dataset.valuesEqual would return False, and vice versa.

The implementation of equals is literally: return ds1 === ds2 || (columnsEqual(ds1, ds2) && valuesEqual(ds1, ds2))

So, the only way you'd have equals return false but valuesEqual return true is if the second dataset had the same column types, rows, data, but different column names. I figure that's unlikely, but potentially useful.
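To make that distinction concrete, here's a hedged pure-Python analogue of the three comparison functions, modeling a dataset as a (names, types, rows) tuple. The function names and tuple layout are illustrative assumptions, not the module's actual implementation.

```python
# Illustrative analogues; a "dataset" here is (names, types, rows).
def columns_equal(ds1, ds2, ignore_case=False, include_types=True):
    n1, n2 = ds1[0], ds2[0]
    if ignore_case:
        n1 = [n.lower() for n in n1]
        n2 = [n.lower() for n in n2]
    if n1 != n2:
        return False
    return not include_types or ds1[1] == ds2[1]

def values_equal(ds1, ds2):
    # Same values in the same places; column names/types are ignored.
    return ds1[2] == ds2[2]

def dataset_equals(ds1, ds2):
    # Mirrors the quoted implementation: identity, else columns AND values.
    return ds1 is ds2 or (columns_equal(ds1, ds2) and values_equal(ds1, ds2))

a = (["id"], [int], [[1], [2]])
b = (["rowId"], [int], [[1], [2]])  # same data, different column name
```

Here `values_equal(a, b)` is True while `dataset_equals(a, b)` is False, which is exactly the "same data, different column names" case described above.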


This was the idea I was talking about. Is it possible to add this change to the extensions module? Or at least another function to return just the non-default config on the UDT instance, not the definition? This is a big issue when trying to script tag changes.

I could probably add something like system.tag.getLocalConfiguration? Assuming I'm understanding what you're asking for properly.

Basically, I need a function to get the JSON exactly as 'Copy JSON' in the tag browser returns it. Local config sounds about right! That'd be great to have :slightly_smiling_face:

Alright, it's on the list - feel free to add any input you might have.


Sounds good!

Coincidentally, why does the getConfiguration doco say "will attempt to retrieve all the tags" instead of just "will retrieve all the tags"? When would this fail?

Most likely, it's a verbatim transcription of what a developer sent when initially explaining the feature when 8.0.X launched that no one's ever amended.

Recently I had to convert something to ISO format. Currently you have to do it manually with system.date.format, but other date libraries (datetime, for instance) have an .isoformat() function, since ISO is so common. So maybe add that as a date function? Something like system.date.isoFormat(someDate)?

I know it's simple and only saves a handful of lines of code, but we are talking about convenience functions. Just a thought; I don't know how others feel about this one.
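For comparison, here's what this looks like in plain Python, plus a sketch of what such a wrapper might do. The iso_format helper below is an assumption for illustration, not an existing Ignition function.

```python
from datetime import datetime

# Plain Python already provides this on datetime objects:
d = datetime(2023, 5, 17, 13, 45, 30)
d.isoformat()  # -> '2023-05-17T13:45:30'

# A hypothetical system.date.isoFormat could be a thin wrapper around a
# fixed format pattern, roughly equivalent to:
def iso_format(some_date):
    # Assumed helper; mirrors the common ISO 8601 pattern.
    return some_date.strftime("%Y-%m-%dT%H:%M:%S")
```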

Sorry for the sin of double posting, but I ran into a dataset one that I would love to have.

I'm trying to write tests now, and I have to refer to datasets that come in and get used in my functions. In my test function, I don't want to have to go out to the window and grab a dataset. This falls under metaprogramming, I guess, but it would be nice if we could reverse-engineer any dataset into a script that recreates it. For example, say I have a dataset:

id    name
1     Test1
2     Test2
3     Name1

Then running something like system.dataset.getScript(ds) would produce:

cols = ['id','name']
rows = [[1, 'Test1'], [2, 'Test2'], [3, 'Name1']]
ds = system.dataset.toDataSet(cols, rows)

so that I could then copy and paste this into my unit test scripts.

Perhaps this is too easy to do in Jython already, though, so maybe it's not worth it. Just throwing it out there in case others also think it would be useful.

To be reliable, it would need to use DatasetBuilder. With that, Python's repr() is your friend. I don't think you need Java for this.
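As a rough illustration of the repr() approach, here's a hedged plain-Python sketch. dataset_to_script and its (cols, rows) inputs are assumptions standing in for a real Ignition dataset; repr() takes care of quoting the values.

```python
# Assumed helper for illustration -- not an existing Ignition function.
def dataset_to_script(cols, rows, var_name="ds"):
    # Emit a script that rebuilds the dataset, in the shape proposed above.
    lines = [
        "cols = %r" % (cols,),
        "rows = %r" % (rows,),
        "%s = system.dataset.toDataSet(cols, rows)" % var_name,
    ]
    return "\n".join(lines)

print(dataset_to_script(["id", "name"],
                        [[1, "Test1"], [2, "Test2"], [3, "Name1"]]))
```

The printed script can be pasted straight into a unit test; a DatasetBuilder-based version would additionally need to emit explicit column types.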

This is the only datetime format I use, since it's the most logical (sorry you US folk :laughing:) and importantly, it's also inherently sortable :slight_smile: (e.g. filenames)