I'm trying to create a script to combine the results of two different queries. I can't seem to append the results into a single dataset. Here is my script with what it is returning in the script console.
Please post preformatted code, and not pictures of code. Makes it more difficult to help when we have to retype the script.
Normally I would say write a query that returns all of the results that you want. The database is going to be much better at this than a script, however, I don't know what the data looks like or what the stored procedures you're running are actually doing.
You can use the DatasetBuilder to accomplish this.
from com.inductiveautomation.ignition.common.util import DatasetBuilder

stackId = '3316'
stackDataQuery = 'EXEC dbo.PIR_GetStackData @stackId = ?'
serialDataQuery = 'EXEC dbo.DET_GetSerialData @serial = ?'

stackDataResults = system.db.runPrepQuery(stackDataQuery, [stackId])

builder = DatasetBuilder.newBuilder()
builder.colNames(['column names for your dataset'])
builder.colTypes(['column types for your columns'])  # list of column types must be the same length as the headers

for stack in stackDataResults:
    for row in system.db.runPrepQuery(serialDataQuery, [stack]):
        builder.addRow(row)

print builder.build()
lrose,
I think what you have here will work, however, I can't figure out what to put in for colTypes. Here is a screenshot of a returned value from the stored procedure.
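For colTypes, DatasetBuilder expects Java class objects, one per column and in the same order as the headers. This only runs inside Ignition's Jython scripting environment, and the column names and types below are assumptions read off your screenshot, not the actual proc output:

    from com.inductiveautomation.ignition.common.util import DatasetBuilder
    from java.lang import String, Integer
    from java.util import Date

    builder = DatasetBuilder.newBuilder()
    builder.colNames(['serial', 'stackId', 'timestamp'])  # hypothetical column names
    builder.colTypes([String, Integer, Date])             # one Java class per column, same order

If a column can hold NULLs, stick with the object wrapper classes (Integer, Double, etc.) rather than primitives.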
I agree with @lrose here: seems to me like the whole thing could be done in just one query, which would make it simpler and more efficient.
Can you share that stored proc?
The stored procedure is looking at tables that I don't have access to. The client is very strict with the development vs live database, and is writing the stored procedures themselves for me to use.
Well if they want to write the queries themselves, ask them for a proper query that does what you need it to do ;p
If you haven't solved the dataset merging thing, try this:
from itertools import chain

data = list(chain.from_iterable(
    system.dataset.toPyDataSet(system.db.runPrepQuery(sp, [stack]))
    for stack in stackDataResults
))
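The chain.from_iterable part is plain Python, so you can see what the flattening does outside of Ignition. The nested lists here are hypothetical stand-ins for the per-stack PyDataSet results:

```python
from itertools import chain

# hypothetical stand-ins for the per-stack PyDataSet results
per_stack_results = [
    [('SN001', 10), ('SN002', 12)],  # rows returned for stack 1
    [('SN003', 7)],                  # rows returned for stack 2
]

# flatten the list of row-lists into one list of rows
data = list(chain.from_iterable(per_stack_results))
print(data)  # [('SN001', 10), ('SN002', 12), ('SN003', 7)]
```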
if you need it in dataset format:
data = [system.dataset.toPyDataSet(system.db.runPrepQuery(sp, [stack])) for stack in stackDataResults]
headers = list(data[0].columnNames)
ds = system.dataset.toDataSet(
    headers,
    list(chain.from_iterable(data))
)
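The headers-from-the-first-result trick can also be sketched in plain Python, with hypothetical (headers, rows) pairs standing in for the PyDataSets:

```python
from itertools import chain

# hypothetical (headers, rows) pairs standing in for the PyDataSet results
datasets = [
    (['serial', 'qty'], [('SN001', 10), ('SN002', 12)]),
    (['serial', 'qty'], [('SN003', 7)]),
]

headers = list(datasets[0][0])  # take column names from the first result
rows = list(chain.from_iterable(r for _, r in datasets))
print(headers)  # ['serial', 'qty']
print(rows)     # [('SN001', 10), ('SN002', 12), ('SN003', 7)]
```

This assumes every proc call returns the same columns; if they can differ, check each result's headers against the first before merging.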
Though the builder way is probably better. I just never used it.
Possibly, though technically you don't have the final dataset until the builder is finished: you must call colNames first, colTypes second, and only then add rows. You don't really want to call those functions multiple times.
Indeed, but you could run the queries first, get the names and types from one of them, then build the final dataset from those.
Kinda like this:
from itertools import chain
from com.inductiveautomation.ignition.common.util import DatasetBuilder

data = [system.db.runPrepQuery(sp, [stack]) for stack in stackDataResults]

builder = DatasetBuilder.newBuilder()
builder.colNames(data[0].columnNames)
builder.colTypes(data[0].columnTypes)

for row in chain.from_iterable(data):
    builder.addRow(list(row))

ds = builder.build()
Not sure it's really simpler, but at least you don't have to deal with types and names yourself.