Convert column type in a table to integer

Dear all,

I’ve been racking my brain over this one, but I think there’s a simple solution I’m overlooking…

On a page I have a table and a button. Pressing the button runs a script that empties the table using:

Headers = ["Header1", "Header2"]
blankdataset = []
table.data = system.dataset.toDataSet(Headers, blankdataset)

Then, using a for loop and “system.db.runQuery”, I run a SQL query to fetch some data, which gets added to the table using:

table.data = system.dataset.addRow(table.data, Row)

The challenge I am facing is that doing this leaves every column type as String, while the data I am fetching is a mix of integers, strings and dates, and I would like to (for instance…) change the row background color based on one of the columns.

Does anyone know of a solution and/or a script command to change the column type?

• Edit: forgot to mention I am using multiple for loops to get data from different tables in the database; this is the main reason I am building a complete row and adding it to the table (roughly as sketched below)…
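Roughly, the whole script does something like this (the queries and component path here are just placeholders, not my real code):

# Clear the table; with no rows, the dataset has nothing to infer the real column types from.
Headers = ["Header1", "Header2"]
table = event.source.parent.getComponent("Table")
table.data = system.dataset.toDataSet(Headers, [])

# Fetch from several database tables and append each row to the table dataset.
for query in ["SELECT a, b FROM table1", "SELECT a, b FROM table2"]:
    for result_row in system.dataset.toPyDataSet(system.db.runQuery(query)):
        table.data = system.dataset.addRow(table.data, list(result_row))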

There is a simple solution.

table.data = system.db.runQuery(query)

In this case trying to append the dataset is unnecessary.

Edit: I forgot about the for loop you are using. In that case, I think you should build the rows in the loop and, once that is done, set the table data.

# pseudocode
rows = []
headers = ["col_1", "col_2"]
for new_query in queries:          # queries = your list of SQL statements
    data = system.db.runQuery(new_query)
    for row in data:
        rows.append(list(row))     # collect each row as a plain list
table.data = system.dataset.toDataSet(headers, rows)

Dear JGJohnson,

Thank you for your quick reply. Unfortunately, my opening post was incomplete…

The reason I am using addRow is that I am getting data from multiple tables in the database, so I am using multiple for loops… and I really have no idea how to merge multiple datasets into one.

Still hope you have an answer for me :wink:

Do you absolutely have to do this in script?

Seems to me that the simplest solution would be to build a view in the database that gathers data from all of the tables, and then run a query against that view in Ignition.
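For instance (the view name and component path here are just placeholders; the view itself would be created in the database, not in Ignition):

# Hypothetical view that already joins the relevant tables, so the query
# comes back with correctly typed columns.
table = event.source.parent.getComponent("Table")  # hypothetical component path
table.data = system.db.runQuery("SELECT * FROM combined_data_view")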

Alternatively, if all of the queries return the same number of columns with the same data types, you could skip clearing the table dataset. Set the table data to the result of the first query, and then append the rows from each subsequent query to the table dataset. As far as I know, there is no way to change the data type of a column in an existing dataset.
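Something along these lines (the queries and component path are placeholders), letting the first query's result establish the column names and types:

# All queries must return the same columns with the same data types.
queries = ["SELECT id, value, stamp FROM table_a",
           "SELECT id, value, stamp FROM table_b"]
table = event.source.parent.getComponent("Table")  # hypothetical component path

# The first result sets the column types for the table dataset.
table.data = system.db.runQuery(queries[0])

# Rows from the remaining queries are appended and take on those types.
for query in queries[1:]:
    for row in system.dataset.toPyDataSet(system.db.runQuery(query)):
        table.data = system.dataset.addRow(table.data, list(row))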

Use system.dataset.fromCSV to specify data types and column headers for your blank dataset like this:

blankDataset = """#NAMES
Header1,Header2
#TYPES
d,I
#ROWS,0"""

See Ignition Manual for data types, etc.
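For example, assuming the 'd' and 'I' codes above map to date and integer as described in the manual (the component path and row values are placeholders):

# Start the table with the typed blank dataset, then append rows whose
# values match those column types.
table = event.source.parent.getComponent("Table")  # hypothetical component path
table.data = system.dataset.fromCSV(blankDataset)
table.data = system.dataset.addRow(table.data, [system.date.now(), 42])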

Consider using the platform’s DatasetBuilder class. Something like:

from java.lang import Integer, String
from com.inductiveautomation.ignition.common.util import DatasetBuilder

def someFunction():
    b = DatasetBuilder.newBuilder().colNames("col1", "col2").colTypes(Integer, String)
    for x in range(5):
        b.addRow(x, "V%d" % x)
    return b.build()
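Adapted to the multi-query loop from the opening post, it might look roughly like this (the queries, column layout and component path are placeholders):

from java.lang import Integer, String
from com.inductiveautomation.ignition.common.util import DatasetBuilder

# Hypothetical queries; each must return columns matching the builder's layout.
queries = ["SELECT id, name FROM table_a",
           "SELECT id, name FROM table_b"]

b = DatasetBuilder.newBuilder().colNames("id", "name").colTypes(Integer, String)
for query in queries:
    for row in system.dataset.toPyDataSet(system.db.runQuery(query)):
        b.addRow(row[0], row[1])

table = event.source.parent.getComponent("Table")  # hypothetical component path
table.data = b.build()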

Thanks Phil (was Pat). My posted solution worked okay for a really simple application where I needed to store data in correct types, but your solution is superior. For x in range fits in nicely to create a dataset with ten of this column, ten of that column, and ten of another bunch of columns.

Pat? I see no Pat here in this topic. (-:

Oh Pat, you are so silly!


Oh, you’re going by Phil now! :wink: Somehow I had Pat in my mind for your avatar; sorry 'bout that.


Well, they do sound a lot alike… :wink:


@pturmel,

How do you dynamically add column names and types?

I mean, I am building a dataset from another dataset, so I can get the columns of the original dataset with .getColumnNames().
Can I directly pass a list?

Also, I am doing this in component scripting.
Thanks!

I'd use a PyDataSet for that. It will preserve datatypes for you.

def addColumns(datasetIn, newColNames, newColData):
    colNames = list(datasetIn.getColumnNames())
    pyData = system.dataset.toPyDataSet(datasetIn)
    # newColData is a list of columns; zip(*newColData) yields one tuple of new values per row.
    newRows = [list(row) + list(addCols) for row, addCols in zip(pyData, zip(*newColData))]
    return system.dataset.toDataSet(colNames + newColNames, newRows)

# Create test dataset
colNames =['c1', 'c2', 'c3']
data = [
		[1, 2.0, '3'], 
		[4, 5.0, '6']
       ]
dataset = system.dataset.toDataSet(colNames, data)

# New columnNames as List
newColNames = ['c4', 'c5']
# New column data as list of lists
newColData  = [
			   ['X', 'Y'],
			   [23, 42]
			  ]			   

dataOut = addColumns(dataset, newColNames, newColData)

print dataOut.getColumnNames()
print dataOut.getColumnTypes()

Output:

[c1, c2, c3, c4, c5]
[class java.lang.Integer, class java.lang.Double, class java.lang.String, class java.lang.String, class java.lang.Integer]

Resultant dataset:

row | c1 c2  c3 c4 c5
---------------------
 0  | 1  2.0 3  X  23
 1  | 4  5.0 6  Y  42

Is there something you can’t do with datasets?

What I am trying to do is manipulate the data to add a header and a sub-header. But since I have integer and double columns, I can’t really do it.
So the whole process for me goes like this (roughly as sketched below):

Get the data from the table,

Copy it to a new dataset and change the columns to string,

Add a row at the beginning as the header,

Add another row at index 1 as the sub-header,

Print that data!

Depending on the header requirement, set the cell span!

Also, hide the original header!
Lol! That’s the dumbest way to express the data.
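A rough sketch of those steps (the component path and header text are placeholders), casting every value to a string and inserting the two header rows at the top:

table = event.source.parent.getComponent("Table")  # hypothetical component path
data = table.data
colNames = list(data.getColumnNames())

# Cast every value to a string so the header rows fit the column types.
stringRows = [[str(v) for v in row] for row in system.dataset.toPyDataSet(data)]
stringData = system.dataset.toDataSet(colNames, stringRows)

# Insert the header and sub-header rows at positions 0 and 1.
stringData = system.dataset.addRow(stringData, 0, ["Main header"] * len(colNames))
stringData = system.dataset.addRow(stringData, 1, ["Sub header"] * len(colNames))
table.data = stringData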