Reading from dataset query tag

Hey everyone,
Just want to make sure I’m not being dumb and doing things the hard way…
Anyway, I have a query tag with a dataset data type, and I’m trying to read specific columns into other tags. For example, the dataset has 10 columns and I want to make 10 tags that grab each of those values.
After some research, I didn’t really find anything besides the scripting option. I basically have an “update” BOOL tag; when it becomes true, it triggers its Value Changed event, and in that event I write the dataset values to the tags. It works fine:

 if currentValue.quality.isGood() and currentValue.value == 1:
     if not initialChange:
         # paths is the list of target tag paths, defined elsewhere in the script
         ds = system.tag.read("datasetPath").value
         values = [ds.getValueAt(0, col) for col in ds.getColumnNames()]
         system.tag.writeAll(paths, values)

I just want to make sure: is this the only way to do it, or is there a simpler way that I totally missed… Thanks as usual!!!

Another way to do it would be to make your single-value tags expression tags and use something like this for the expression:

{[~]myDatasetTag}[2,1] //ordering is [row,column] 

…but if you’re trying to write the data back to an outside tag provider (like a PLC) then I think you’re doing it the best way.

ETA:

On second thought, depending on how you’re driving your update tag, it might be just as effective to run the script on a timer (e.g., as a gateway timer script).
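For example, a minimal sketch of what that gateway timer script could look like (the dataset tag path and the target folder of per-column tags are assumptions, not anything from your post):

    # Read the dataset tag once, then fan out row 0 into one tag per column
    ds = system.tag.read("[default]myDatasetTag").value
    cols = ds.getColumnNames()
    paths = ["[default]Columns/%s" % c for c in cols]
    values = [ds.getValueAt(0, c) for c in cols]
    system.tag.writeAll(paths, values)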

Well, I know I was being stupid… lol
I’ve used that expression syntax MANY times and it has saved my behind quite a lot lol.
I thought I had tried it earlier since it seemed like the obvious choice, but I guess I messed something up that kept giving me an error.
In this specific situation I’m only reading values, so it’s probably better to just go with the expression rather than scripting. I’m sure I’ll use the scripting down the road.
Thanks for the help

I’ve been trying something similar, but in the other direction (many single-value tags to one dataset tag). I’m driving each single-value memory tag with a transaction group (XG) that updates the whole group on selective trigger logic, sourced from a number of existing tables filled with bulk time-series data that is fetched externally and minimally structured/indexed. (I’m trying to limit the number of client SQL calls hitting the db at the project’s base polling rate directly from multiple components, while still having individual updated values pushed to the component when they’ve changed.)

I’ve not found a way to push values to a dataset tag from an XG, so I believe I’m limited to binding a component’s dataset property to either:

  1. a memory dataset tag sculpted with values from the individual tags via scripting (see the sketch after this list);
  2. an expression dataset tag sculpted with values from the individual tags with expressions.
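
For option 1, here’s a rough sketch of the kind of script I mean (the tag paths, headers, and dataset tag name are placeholders):

    # Rebuild a one-row memory dataset tag from ten individual tags
    tagPaths = ["[default]Group/Value%d" % i for i in range(10)]
    values = [qv.value for qv in system.tag.readAll(tagPaths)]
    headers = ["col%d" % i for i in range(10)]
    ds = system.dataset.toDataSet(headers, [values])  # one wide row
    system.tag.write("[default]myDatasetTag", ds)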

This feels like the long way around to achieve optimal db performance while getting asynchronously updated values to all relevant components in use by multiple clients. If this works, I’m considering widespread use across my entire project due to the ability to transpose data to both horizontal (wide) and vertical (tall) datasets.

Am I overlooking features of Ignition that were intended to abstract away this complexity?

Does anyone see potential unintended consequences from such an approach (i.e., transferring undue burden to the gateway’s internal db/memory)?

It seems like you are pulling live tag values from SQL, and that’s why you’re using the transaction group: so that you won’t have a bunch of query tags polling at the base poll rate. You might want to double-check whether the SQL queries from your transaction group are going out the way you expect, because Ignition doesn’t always formulate its queries in a very efficient way.

I do have a hard time imagining an industrial system where it is beneficial to put live data into SQL before pulling it into Ignition (and not the other way around). If I were in that scenario, my first choice would be to change the external source of data so that it can host the most recent data in a set of OPC tags. I recognize that this might not be an option for you, though. And you’re not the only person I’ve heard of who wanted to drive real-time tags in Ignition with SQL data.

Another thing to think about is that, depending on your specific needs, it can sometimes be more efficient to just use a bunch of tags organized into folders rather than a dataset. You could set those tags to different poll rates if you wanted to use query tags, or you could update them all in a script using system.tag.writeAll, the same way you would with the dataset. If the script and the folder naming structure are done well, you can get very useful results with similar efficiency; I have had a few projects where client functions depended on procedurally generated folder paths. There is a practical limit to dataset sizes to think about, too. I once had a project which managed a dataset of something like 10000x8000 items, and its contents didn’t always survive gateway restarts because of the large size (that project wasn’t for an actual client, and we weren’t too worried about reliability).
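
As a sketch of the folder idea (the table, column, and tag names here are made up):

    # Push query results into procedurally generated folder paths
    rows = system.db.runQuery("SELECT name, value FROM helper_table")
    paths = ["[default]Lines/%s/Value" % row["name"] for row in rows]
    values = [row["value"] for row in rows]
    system.tag.writeAll(paths, values)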

Anyway, I don’t think there’s necessarily a problem with your method. I’ll be curious to see if you come up with a better structure for what you’re doing. Be sure to post about it.

Thx for the reply, Zac!

The values are indeed from a live industrial system, but it's a remote system, users are not controlling it, and snapshots update every ~5 minutes. I have a crontab job running to fetch and insert this bulk data into a large, minimally structured table in SQL, as part of an overall larger external data-collection effort.

I'm using the transaction group primarily to transform a subset of this bulk data from horizontal to vertical, triggered on a timestamp value change, and stored for a limited window into the past (using "Delete records older than..."). So yes, the bulk table sees limited traversal: it's only used to build a smaller "helper" table of transposed data covering maybe the last week or month. Compare that to multiple Vision components per window (from which users could potentially ask for the moon), or query tags hitting the bulk table every 5, 10, or 15 seconds and filling up my historian with duplicate, repeating data, never knowing exactly when the new 5-minute data will arrive, in a costly effort to keep things snappy on the client side.

The end goal is for each of many clients to be able to sort, reorder columns, and time-slice vertical data feeding a set of stacked bar charts. I'm trying to create the effect of a synchronized time-series Sankey diagram that each user can shuffle asynchronously according to their own objectives, without flooding the database with crontab-like queries. Also, when a chart updates, it re-initializes the component, potentially interrupting the user's analysis every 5, 10, or 15 seconds.

I actually started out with this approach (mostly because I couldn't get a transaction group to work with only expression tags that existed solely in the group). Ultimately, I need to write down to components' datasets in Vision and give the user some ability to individually manipulate each dataset and control whether the data auto-updates. It comes back to my initial reply: building a dataset from the ground up with a transaction group and individual tags. I'm experimenting with the DatasetBuilder library that Phil has suggested in many of his posts; it seems extensible enough. I just need to sharpen my Python looping and data structure skills, go buy some, or study his example closely.
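
Roughly what I understand DatasetBuilder usage to look like, assuming it's available in your Ignition version (the column names, types, and placeholder rows are mine, not from Phil's examples):

    # Build a tall (two-column) dataset row by row
    from com.inductiveautomation.ignition.common.util import DatasetBuilder
    from java.lang import Double
    from java.util import Date

    fetchedRows = [(Date(), 1.0), (Date(), 2.0)]  # placeholder for helper-table rows
    builder = DatasetBuilder.newBuilder().colNames("t_stamp", "value").colTypes(Date, Double)
    for ts, val in fetchedRows:
        builder.addRow(ts, val)
    ds = builder.build()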

For the time being, I'm settling for a client memory dataset tag fed by a set of client tags (populated from user inputs) that parameterize a SQL query against the helper table, with polling turned off. When a user changes their input values, it fires another query. I just need to figure out how to fire that same query when new data arrives in the helper table... probably a client tag "value change" event script, triggered from the same gateway tag that triggers the transaction group, which takes the last SELECT query and runs it again.
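
Something like this in that client tag's "value change" script, maybe (the query, tag paths, and column names are all guesses at my own structure):

    # Re-run the last parameterized SELECT whenever the gateway trigger fires
    start = system.tag.read("[Client]StartDate").value
    pyData = system.db.runPrepQuery("SELECT * FROM helper_table WHERE t_stamp >= ?", [start])
    system.tag.write("[Client]ChartData", system.dataset.toDataSet(pyData))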

Does it ever end? Sure seems like there should be a shortcut. :sweat_smile:

I see. Yeah, I guess your application is pretty unique, but it sounds conceptually simple enough, and useful enough, that I agree there should be an easier way to do it. I’m a little jealous that I don’t get to work on that project myself! It sounds like a really cool task.