Insert into the database from 2 different sources

Hello

I have a database table with 14 columns, and I want to insert into it from two different sources. (I am using the Perspective module.)

Source 1:

The data from 4 entry boxes:

Source 2:

The data from a dataset table (2 rows and 10 columns)

Final Database:

The database table after inserting these 2 sources would be:

I have tried different solutions but none of them were successful.

I am able to insert the 1st source into the database, but I am not able to combine it with source 2 and send that back to the database. I am using a message handler.

I greatly appreciate any information you could share with me.

Assuming you are using system.db.runPrepUpdate (which you should be using over system.db.runUpdateQuery), that you want to do this for every single row in your dataset, and that the values are in the same order as the columns in your query:

dataResource = somePath.to.dataResource.text # or whatever field you are getting from this component
dataPartSN = some.path.to.dataPartSN.text
DataPassFail = somepath
datausername = someusername

for row in range(someData.rowCount):
    values = [dataResource, dataPartSN, DataPassFail, datausername]
    for col in range(someData.colCount):
        values.append(someData.getValueAt(row, col))
    system.db.runPrepUpdate("INSERT INTO someTable (col1, ..., col14) VALUES (?, ... ?)", values)

I think this should do it. Just make sure this part

dataResource = somePath.to.dataResource.text # or whatever field you are getting from this component
dataPartSN = some.path.to.dataPartSN.text
DataPassFail = somepath
datausername = someusername

you set to the actual paths of your component fields that you want to save.
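As a plain-Python sketch (with toy, made-up stand-ins for the component fields and the dataset, since the real values come from Perspective), each row's parameter list comes out as the four entry-box values followed by that row's ten dataset columns:

```python
# Toy stand-ins (hypothetical values) for the four entry fields:
dataResource = "line1"
dataPartSN = "SN-001"
dataPassFail = "PASS"
dataUsername = "operator"

# Toy stand-in for the dataset: 2 rows x 10 columns, as in the question.
someData = [
    ["specA"] + [0] * 9,
    ["specB"] + [0] * 9,
]

# Each row's parameter list is the four fixed values plus that row's
# columns, matching the 14-column order of the INSERT statement.
paramLists = []
for row in someData:
    values = [dataResource, dataPartSN, dataPassFail, dataUsername]
    values.extend(row)
    paramLists.append(values)
    # In Ignition, this is where system.db.runPrepUpdate(...) runs per row.
```

Each list ends up with exactly 14 parameters, one per column in the table.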

Edit: Just saw you said you were using a message handler. Any specific reason for that? Perspective scripts run on the gateway already, and if you’re just trying to securely run a SQL query, you could achieve that by making a Named Query.


Thank you very much for your reply!
Appreciate it!!!

I tried your suggestion here:

	dataResource = 			"datasource.user"
	newPartSN= 				self.view.custom.part.id.sn
	newPassFail= 			self.view.custom.part.id.overall_pass_fail
	username=				self.session.props.auth.user.userName
	
	data= self.view.custom.part.spec.data			#Dataset [12R x 10C]
	pds = system.dataset.toPyDataSet(data)		#<PyDataset rows:12 cols:10>
	for row in range(pds.rowCount):
	    values = [dataResource, newPartSN, newPassFail, username]
	    for col in range(pds.colCount):
	        values.append(pds.getValueAt(row, col))
	    system.db.runPrepUpdate("INSERT INTO 000_process_data (resource, part_sn, overall_pass_fail, username, spec, spec_rev, name, description, state, findings, low_limit, process_value, high_limit, units) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)", values)

But I get this error:
the object has no attribute ‘colCount’

My fault, it should be pds.columnCount. I make that typo all the time in my own scripting, and I passed it on to you.


So that did not work; I got the error again.

I changed it to this:

	dataResource = 			"melt.user"
	newPartSN= 				self.view.custom.part.id.sn
	newPassFail= 			self.view.custom.part.id.overall_pass_fail
	username=				self.session.props.auth.user.userName
    values = [dataResource, newPartSN, newPassFail, username]

    data= self.view.custom.part.spec.data										#sample Dataset [12R x 10C]
	pds = system.dataset.toPyDataSet(data)
	for index in range(len(pds)):
		rowData=list(pds[index])
		rowData.append([values,rowData])
		system.db.runPrepUpdate("INSERT INTO process_data (resource, part_sn, overall_pass_fail, username, spec, spec_rev, name, description, state, findings, low_limit, process_value, high_limit, units) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)", rowData)

the only problem I have is to append values with rowData:

data= self.view.custom.part.spec.data										#sample Dataset [12R x 10C]
pds = system.dataset.toPyDataSet(data)
**rowData=list(pds[index])**
**values = [dataResource, newPartSN, newPassFail, username]**
**rowData.append([values,rowData])**

Don’t append it, ADD it.

values = [1,2,3]
rowData = [4,5,6]

print values + rowData

result:
[1,2,3,4,5,6]

values = [1,2,3]
rowData = [4,5,6]

rowData += values
print rowData

result:
[4,5,6,1,2,3]
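For contrast, this is what append does, and it is the shape behind the earlier error: it inserts the whole list as a single nested element instead of merging the items.

```python
values = [1, 2, 3]
rowData = [4, 5, 6]

rowData.append(values)
# rowData is now [4, 5, 6, [1, 2, 3]] -- a nested list, not the flat
# parameter list that runPrepUpdate expects
```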

The way you add them together is important and must match the order of your columns; otherwise your data will not land in the columns you want, or you will get an error because the datatypes don’t match.

Also, your script needs some adjusting: the whole point of converting to a PyDataset is to take advantage of its data structure. Using range(len(pds)) in your for loop defeats the purpose.

Your code should look like this:

	dataResource = 			"melt.user"
	newPartSN= 				self.view.custom.part.id.sn
	newPassFail= 			self.view.custom.part.id.overall_pass_fail
	username=				self.session.props.auth.user.userName
    values = [dataResource, newPartSN, newPassFail, username]

    data= self.view.custom.part.spec.data										#sample Dataset [12R x 10C]
	pds = system.dataset.toPyDataSet(data)
	for row in pds:
		rowData= values + row
		system.db.runPrepUpdate("INSERT INTO process_data (resource, part_sn, overall_pass_fail, username, spec, spec_rev, name, description, state, findings, low_limit, process_value, high_limit, units) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)", rowData)

Thank you very much! APPRECIATE it :slight_smile:
I got an error with that, but I wrote this script based on your suggestion and it is now working:

	dataResource = 			"melt.user"
	newPartSN= 				self.view.custom.part.id.sn
	newPassFail= 			self.view.custom.part.id.overall_pass_fail
	username=				self.session.props.auth.user.userName

	data= self.view.custom.part.spec.data			#Dataset [12R x 9C]
	pds = system.dataset.toPyDataSet(data)		#<PyDataset rows:12 cols:9>
	
	for index in range(len(pds)):
		values = [dataResource, newPartSN, newPassFail, username]
		rowData=list(pds[index])
		rowData=values+rowData
		system.db.runPrepUpdate("INSERT INTO process_data (resource, part_sn, overall_pass_fail, username, spec, spec_rev, name, description, state, findings, low_limit, process_value, high_limit, units) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)", rowData)

Glad you’ve got it working.

I would be remiss if I didn’t mention that rewriting the for loop will make this much more readable in the future, as it will actually say what you are doing. It will still function the same, but:

for row in pds is much more readable (and pythonic) than for index in range(len(pds))
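With a plain list of row tuples standing in for the PyDataset (a real PyDataset iterates the same way), the two styles produce identical rows; direct iteration just drops the index bookkeeping:

```python
pds = [("a", 1), ("b", 2)]  # stand-in rows, not a real PyDataset

# Index-based loop:
byIndex = [list(pds[index]) for index in range(len(pds))]

# Direct iteration:
direct = [list(row) for row in pds]

assert byIndex == direct  # same rows either way
```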

Also, I couldn’t help myself: you can optimize the loop by removing un-needed function calls and by not making a runPrepUpdate call for each row.

I would refactor the code to this:

	dataResource = 			"melt.user"
	newPartSN= 				self.view.custom.part.id.sn
	newPassFail= 			self.view.custom.part.id.overall_pass_fail
	username=				self.session.props.auth.user.userName

	data= self.view.custom.part.spec.data
    values = [dataResource,newPartSN,newPassFail,username]
    pds = system.dataset.toPyDataSet(data)
    query = """INSERT INTO process_data
                        (resource,
                         part_sn,
                         overall_pass_fail,
                         username, spec,
                         spec_rev,
                         name,
                         description,
                         state,
                         findings,
                         low_limit,
                         process_value,
                         high_limit,
                         units)
                     VALUES %s"""
    valueArgs = ','.join(['(?,?,?,?,?,?,?,?,?,?,?,?,?,?)']  * data.rowCount)

    argList = [values + row for row in pds]

    system.db.runPrepUpdate(query % valueArgs,argList)

Thank you very much @lrose
Appreciate the great information.

Although I got an error that the code cannot be compiled with this version:

org.python.core.PyIndentationError
File “function:onMessageReceived”, line 8
values = [dataResource, newPartSN, newPassFail, username]
^
IndentationError: unindent does not match any outer indentation level

Ignition v8.1.10 (b2021090812)
Java: Azul Systems, Inc. 11.0.11

@lrose used 4-space indenting at the beginning of his lines, which is different from the tabs you are using. All indents need to match, whatever you use. :slight_smile:


Thank you, Jordan!
I fixed that; however, I get another error:

line 26, in onMessageReceived
TypeError: can only concatenate list (not “com.inductiveautomation.ignition.common.script.builtin.DatasetUtilities$PyDataSet$PyRow”) to list

Line 26: argList = [values + row for row in pds]

You need to cast the row to a list:

argList = [values + list(row) for row in pds]

But, that won’t work, as the above returns a list of lists, and you need a flat list. Try:

argList = []
for row in pds:
	argList.extend(values + list(row))
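With toy data (a hypothetical `values` prefix and two stand-in rows), the nested and flat shapes compare like this:

```python
values = [1, 2]
pds = [(3, 4), (5, 6)]  # stand-in for the PyDataset rows

# List comprehension: a list of lists -- the wrong shape for the batched query.
nested = [values + list(row) for row in pds]
# nested == [[1, 2, 3, 4], [1, 2, 5, 6]]

# extend: one flat list -- the shape the multi-row VALUES clause needs.
argList = []
for row in pds:
    argList.extend(values + list(row))
# argList == [1, 2, 3, 4, 1, 2, 5, 6]
```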

It worked now!!! Thank you very much, Jordan. Appreciate it.


Hello, I am facing a similar issue: I want to insert an entire dataset into a database table.
I followed the code by @lrose but am getting the error below.

Code:

data1 = system.tag.read("[default]TEST/Tag").value
pyData = system.dataset.toPyDataSet(data1)
query = "INSERT INTO  MyTable (C1, C2, C3, C4, C5, C6, C7) VALUES %s"
                            
valueArgs = ','.join(['(?,?,?,?,?,?,?)']  * data1.getRowCount())
argList = [row for row in pyData]
system.db.runPrepUpdate(query % valueArgs, argList, 'MyDB')

Error:

Caused by: com.inductiveautomation.ignition.client.gateway_interface.GatewayException: Unable to decode arguments

	at com.inductiveautomation.ignition.client.gateway_interface.GatewayInterface.newGatewayException(GatewayInterface.java:351)

	at com.inductiveautomation.ignition.client.gateway_interface.GatewayInterface.sendMessage(GatewayInterface.java:325)

	at com.inductiveautomation.ignition.client.gateway_interface.GatewayInterface.sendMessage(GatewayInterface.java:278)

	at com.inductiveautomation.ignition.client.script.ClientDBUtilities._runPrepStmt(ClientDBUtilities.java:288)

	at com.inductiveautomation.ignition.common.script.builtin.AbstractDBUtilities.runPrepUpdate(AbstractDBUtilities.java:261)

	... 23 more

Caused by: java.io.InvalidClassException: failed to read class descriptor

I am not sure what I am missing here.

This yields a list of lists. "Prep" queries need a flat list. Try this:

argList = [v for row in pyData for v in row]

Or this:

argList = [v for v in row for row in pyData]

(I can never remember which way they go. :man_shrugging: )


Awesome!!!
That worked.
Thank you so much for your help @pturmel

Keep in mind that the number of question marks allowed in a query is limited by JDBC drivers; some limits are as low as ~2000. You should include logic to make multiple inserts when a dataset is large.
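A sketch of that batching logic (plain Python; `buildBatches`, the batch size, and the toy data are all made up for illustration, and the commented-out line marks where `system.db.runPrepUpdate` would go in Ignition):

```python
def buildBatches(rows, fixedValues, batchSize):
    """Split rows into batches and build (valueClause, flatArgs) pairs,
    keeping each statement under the driver's placeholder limit."""
    nCols = len(fixedValues) + len(rows[0])
    placeholder = "(" + ",".join(["?"] * nCols) + ")"
    batches = []
    for start in range(0, len(rows), batchSize):
        chunk = rows[start:start + batchSize]
        # One "(?,...,?)" group per row in this batch:
        valueClause = ",".join([placeholder] * len(chunk))
        # Flat argument list: fixed values + columns, repeated per row.
        flatArgs = []
        for row in chunk:
            flatArgs.extend(fixedValues + list(row))
        batches.append((valueClause, flatArgs))
        # In Ignition, for each batch you would run:
        # system.db.runPrepUpdate(query % valueClause, flatArgs, 'MyDB')
    return batches

# Example: 5 rows of 2 columns, 1 fixed value, 2 rows per batch.
batches = buildBatches([(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)], ["x"], 2)
# -> 3 batches; the last one holds the leftover row
```

Pick `batchSize` so that `batchSize * nCols` stays comfortably below your driver's placeholder limit.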

Noted.
Thank you very much again. That helped me a lot.

First one. Same order as if you were nesting for loops:

argList = []
for row in pyData:
    for v in row:
        argList.append(v)

argList = [v for row in pyData for v in row]

Or you could do this:

from itertools import chain

argList = list(chain(*pyData))
# or
argList = list(chain.from_iterable(pyData))
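With toy stand-in rows, all three forms flatten to the same list:

```python
from itertools import chain

pyData = [(1, 2), (3, 4)]  # stand-in rows

flat = [v for row in pyData for v in row]
assert flat == [1, 2, 3, 4]
assert list(chain(*pyData)) == flat
assert list(chain.from_iterable(pyData)) == flat
```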