Ignition Extensions - Convenience utilities for advanced users

How about:

system.dataset.builder(colName1=int, colName2=str).addRow(1, 'abc').build()

Too 'cute' to embed column names as keyword arguments?
One downside is column names with spaces, which can't be passed as keyword arguments directly:

from collections import OrderedDict
columns = OrderedDict()
columns["Column 1"] = int
columns["Column 2"] = str
system.dataset.builder(**columns).addRow(1, 'abc').build()

I suppose you could call builder() with no arguments, and then I could add setters for column names and column types to the builder object; that way you get the terser syntax when you don't need exotic column names, but it's still available when you do.


Meh. DatasetBuilder gives you total control in a very terse package. Maybe just expose it in system. so we don't have to import it. (With the java class names suited to datasets for use with its colTypes method.) Its methods are chainable so one-liners are easy.


The keyword args are optional, so you can either use system.dataset.builder().colNames("colName").colTypes(colType) or system.dataset.builder(colName=colType).

Hmm. Are you suggesting exposing e.g. system.dataset.String, system.dataset.Integer? Or something else?

Those, or the abbreviations used in CSV files. Or both. But limited to the classes in the dataset tag whitelist.

Or maybe re-implement .colTypes to also accept strings of the simple class names or the abbreviations.
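That dual acceptance could be sketched roughly like this (pure Python, illustrative only - the function name and the mapping excerpt here are assumptions, not the module's actual code):

```python
# Illustrative sketch only: a colTypes argument resolver that accepts
# actual types, simple class names, or the CSV-style abbreviations.
# These mappings are a made-up excerpt, not the real whitelist.
ABBREVIATIONS = {'i': int, 'str': str, 'd': float, 'b': bool}
SIMPLE_NAMES = {'Integer': int, 'String': str, 'Double': float, 'Boolean': bool}

def resolveColumnType(spec):
    if isinstance(spec, type):
        return spec                   # already a type: pass it through
    if spec in ABBREVIATIONS:
        return ABBREVIATIONS[spec]    # e.g. 'i' -> int
    if spec in SIMPLE_NAMES:
        return SIMPLE_NAMES[spec]     # e.g. 'Integer' -> int
    raise ValueError('Unknown column type: %r' % (spec,))
```

The nice part of resolving inside colTypes is that existing callers passing real types keep working unchanged.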

0.6.0 available now:

Adds:

  • system.dataset.equals(dataset, dataset): Boolean
    Returns True if both input datasets have the same columns, with the same types, and the same values.

  • system.dataset.valuesEqual(dataset, dataset): Boolean
    Returns True if both input datasets have the same number of rows/columns, and those rows/columns have the same values in the same places.

  • system.dataset.columnsEqual(dataset, dataset, ignoreCase=False, includeTypes=True): Boolean
    Returns True if both input datasets have the same column definitions. Use the optional keyword arguments to make the behavior more lenient.

  • system.dataset.builder(**columns): DatasetBuilder
    Returns a wrapped DatasetBuilder. Provided keyword arguments (if any) are used as column names; the values should be Java or Python types, or the 'short codes' accepted by system.dataset.fromCSV:

    alias     class
    "byt"     byte.class
    "s"       short.class
    "i"       int.class
    "l"       long.class
    "f"       float.class
    "d"       double.class
    "b"       bool.class
    "Byt"     Byte.class
    "S"       Short.class
    "I"       Integer.class
    "L"       Long.class
    "F"       Float.class
    "D"       Double.class
    "B"       Boolean.class
    "O"       Object.class
    "clr"     Color.class
    "date"    Date.class
    "cur"     Cursor.class
    "dim"     Dimension.class
    "rect"    Rectangle.class
    "pt"      Point.class
    "str"     String.class
    "border"  Border.class

    In addition, the colTypes function can now be called with the same short codes, or common Python types (e.g. str instead of java.lang.String).


Great stuff @PGriffith. The string short codes for column types are better than my original suggestion and more consistent with what Ignition already uses.

One question re: system.dataset.equals vs system.dataset.valuesEqual - the first is same columns/types/values, and the second is same rows/columns with the same values, but not types. Is it possible then for two datasets with differing row counts to return True from system.dataset.equals? Just trying to understand a situation where system.dataset.equals would return True but system.dataset.valuesEqual would return False, and vice versa.

The implementation of equals is literally: return ds1 === ds2 || (columnsEqual(ds1, ds2) && valuesEqual(ds1, ds2))

So, the only way you'd have equals return false but valuesEqual return true is if the second dataset had the same column types, rows, data, but different column names. I figure that's unlikely, but potentially useful.
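The relationship can be illustrated with a toy pure-Python model, with plain tuples standing in for datasets (illustrative only, not the module's actual implementation):

```python
# Toy model: a "dataset" is (column_names, column_types, rows).
# This mirrors how equals() composes columnsEqual() and valuesEqual();
# it is NOT the Ignition Extensions implementation.

def columns_equal(ds1, ds2):
    # Same column names and same column types.
    return ds1[0] == ds2[0] and ds1[1] == ds2[1]

def values_equal(ds1, ds2):
    # Same row data in the same places; names/types ignored.
    return ds1[2] == ds2[2]

def equals(ds1, ds2):
    return ds1 is ds2 or (columns_equal(ds1, ds2) and values_equal(ds1, ds2))

a = (['id', 'name'], [int, str], [[1, 'abc']])
b = (['id', 'label'], [int, str], [[1, 'abc']])  # renamed column

print(equals(a, b))        # False: column names differ
print(values_equal(a, b))  # True: same data in the same places
```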


This was the idea I was talking about. Is it possible to add this change to the extensions module? Or at least another function that returns just the non-default config on the UDT instance, not the definition? This is a big issue when trying to script tag changes.

I could probably add something like system.tag.getLocalConfiguration? Assuming I'm understanding what you're asking for properly.


Basically I need a function to get the JSON exactly as 'Copy JSON' in the tag browser returns it. Local config sounds about right! That'd be great to have :slightly_smiling_face:

Alright, it's on the list - feel free to add any input you might have.


Sounds good!

Coincidentally, why does the getConfiguration doco say "will attempt to retrieve all the tags" instead of just "will retrieve all the tags"? When would this fail?

Most likely, it's a verbatim transcription of what a developer sent when initially explaining the feature when 8.0.X launched, and no one's ever amended it.


Recently I had to convert something to an ISO format. Currently you would have to do it manually with system.date.format, but other date libraries (datetime, for instance) have an .isoformat() function, since ISO is so common. So maybe that as an additional date function? Something like system.date.isoFormat(someDate)?

I know it's simple and only saves a handful of lines of code, but we are talking about convenience functions. Just a thought. I don't know how others feel about this one.
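For what it's worth, Jython 2.7 (which Ignition 8.x scripting runs on) bundles Python's datetime module, so a hypothetical system.date.isoFormat(someDate) would roughly boil down to:

```python
from datetime import datetime

# Plain-Python illustration; a java.util.Date coming from a tag or query
# would first need converting to a datetime (or formatting via
# system.date.format with a pattern like "yyyy-MM-dd'T'HH:mm:ss").
stamp = datetime(2023, 5, 17, 14, 30, 45)
iso = stamp.isoformat()
print(iso)  # 2023-05-17T14:30:45
```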


Sorry for the sin of double posting, but I ran into a dataset one that I would love to have.

I am trying to write tests now, and I have to refer to datasets that come in and get used in my functions. In my test function I don't want to have to go out to the window and grab a dataset. This, I guess, falls under metaprogramming, but it would be nice if we could reverse-engineer any dataset into a script that recreates it. For example, say I have a dataset

id    name
====  =====
1     Test1
2     Test2
3     Name1

Then running something like system.dataset.getScript(ds) would produce

cols = ['id','name']
rows = [[1, 'Test1'], [2, 'Test2'], [3, 'Name1']]
ds = system.dataset.toDataSet(cols, rows)

so that I could then copy and paste this into my unit test scripts.

Perhaps this is too easy to do in Jython already, though, so maybe it's not worth it. Just throwing it out there in case others also think it would be useful.

To be reliable, it would need to use DatasetBuilder. With that, Python's repr() is your friend. I don't think you need Java for this.
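As a minimal sketch of the repr() approach (assuming only plain ints and strings in the rows - dates and other Java values would need extra handling):

```python
cols = ['id', 'name']
rows = [[1, 'Test1'], [2, 'Test2'], [3, 'Name1']]

# repr() of plain lists, ints, and strings is already valid Python source,
# so %r yields a line you can paste straight into a test script.
code = 'ds = system.dataset.toDataSet(%r, %r)' % (cols, rows)
print(code)
```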


This is the only datetime format I use, since it's the most logical (sorry, you US folk :laughing:) and, importantly, it's also inherently sortable :slight_smile: (e.g. filenames)


Just want to give back to the community a little bit, since you all have helped me so much. I know it's something I specifically asked for, but maybe it will help someone else too - a way to take a dataset and convert it to code that would generate an identical dataset. I am using this so I can take datasets that are in windows and convert them to code for my testing functions easily (I have a lot of datasets to do this for).

def makeToDatasetScriptFromDS(ds):
	# Column headers, coerced from java.lang.String to Python str
	columnNames = [str(column) for column in ds.columnNames]
	rowData = []
	for cur_row in range(ds.getRowCount()):
		row = []
		for cur_col in range(ds.getColumnCount()):
			value = ds.getValueAt(cur_row, cur_col)
			if isinstance(value, unicode) or isinstance(value, str):
				value = str(value)
			row.append(value)
		rowData.append(row)
	# List reprs of the names and rows are valid Python source
	code = 'system.dataset.toDataSet(%s, %s)' % (columnNames, rowData)
	return code

It works pretty well for me, and the types seem to come through as well, at least in my initial tests, which is nice. I'm sure there's room for improvement, but this works well enough for me atm.


You should be able to change this line to:
if isinstance(value, (unicode, str)):
or even
if isinstance(value, basestring):
