So I need some help on where to get started here. I have 26 identically set up tables, each with a yield column, a standard column, and a timestamp column. I want to run a for loop that takes data from the standard column and the yield column based on the timestamp being compared, then adds up all the standard entries from the 26 tables and all the yield entries, and writes them to a new dataset that would be graphed. I was thinking of running this in the timer script so it periodically adjusts the dataset being graphed, and in case mistakes were made it would readjust the total automatically. Does anyone have any ideas how to do this?
It will look something like this:
# Define header names for the resultant dataset
headers = ['Standard Values', 'Yield Values']
# Create a list to store all filtered value rows
data = []
# Iterate through all 26 tables
for index in range(26):
    # Get the data from each table, and iterate through the rows
    # ...Assumes the 26 table names are 'Table 0' through 'Table 25'
    dataset = event.source.parent.getComponent('Table {}'.format(index)).data
    for row in range(dataset.rowCount):
        # Get the timestamp value of each row, and check to see if it meets the requirements
        if dataset.getValueAt(row, 'timestamp') == someCondition: # replace someCondition with your own comparison
            # Get the value from the yield column
            yieldValue = dataset.getValueAt(row, 'yieldColumn')
            # Get the value from the standard column
            standardValue = dataset.getValueAt(row, 'standardColumn')
            # Add the values as a row list to the data list
            data.append([standardValue, yieldValue])
# Create the chart dataset using the data collected from the 26 tables
chartDataset = system.dataset.toDataSet(headers, data)
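Since the original goal was the grand totals rather than the individual pairs, the collected rows can also be summed up before building the dataset. A minimal plain-Python sketch (the sample rows here are made up purely for illustration):

```python
# Running totals across all matching (standard, yield) pairs
standardTotal = 0.0
yieldTotal = 0.0

# 'rows' stands in for the value pairs collected in the loop above;
# these numbers are illustrative only
rows = [(10.0, 8.5), (12.0, 11.0), (9.5, 9.5)]

for standardValue, yieldValue in rows:
    standardTotal += standardValue
    yieldTotal += yieldValue

# A single-row dataset holding the grand totals
totalsData = [[standardTotal, yieldTotal]]
```

That single row can then be passed to system.dataset.toDataSet the same way as the per-row data above.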
Where does the tables' data come from?
If the data is coming from a database, then it may be more performant to use a query to do this for you.
What would that look like to do it through a query?
Also, I should have specified earlier: it's a database table, not a Vision component table.
With 26 tables, it would probably look a little redundant, but I agree with them that a single query would probably be more performant than 26 separate queries followed by a scripting function.
I imagine the query will look something like this:
SELECT `Standard`, `Yield` FROM `Table 1`
WHERE `timeStamp` -- define your limiting condition here
UNION ALL
SELECT `Standard`, `Yield` FROM `Table 2`
WHERE `timeStamp` -- define your limiting condition here
UNION ALL
SELECT `Standard`, `Yield` FROM `Table 3`
WHERE `timeStamp` -- define your limiting condition here
-- ...Repeat until you have the data from all 26 tables
Pretty much what I had in mind, though I was trying to figure out if there was a way to get rid of the redundancy...
I guess you could build the query through scripting? Either once, then paste it into a named query, or dynamically and use a runQuery
with the resulting string, depending on how often it needs to change?
You could use this to generate the query instead of typing it all yourself:
q = """
select `Standard`, `Yield` from `Table {}`
where `timeStamp` between :start and :end
"""
print "union all".join(q.format(n) for n in xrange(1, 27))
Run this in the script console or a Python interpreter, then copy/paste the string it prints?
Eww, no! If you are going to script the generation of the query, script it with ?
placeholders and construct a matching params list at the same time.
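A minimal sketch of that approach (plain Python; the table and column names and the start/end bounds are assumptions carried over from the earlier examples):

```python
# Build one UNION ALL query with ? placeholders and a matching params list
template = "SELECT `Standard`, `Yield` FROM `Table %d` WHERE `timeStamp` BETWEEN ? AND ?"

# One SELECT per table, joined with UNION ALL
query = "\nUNION ALL\n".join(template % n for n in range(1, 27))

# Illustrative range bounds; two params per SELECT, in placeholder order
start = '2024-01-01 00:00:00'
end = '2024-01-02 00:00:00'
params = []
for n in range(1, 27):
    params += [start, end]

# In Ignition this could then be run with something like:
#   result = system.db.runPrepQuery(query, params)
```

Because the params list is built in the same loop order as the placeholders, the two always stay in sync no matter how many tables you add.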
I should have worded this better. I was thinking about a system.db.run*
function, one of those, not specifically system.db.runQuery.