CSV Auto-Import to Tag History

Looking for some collaboration help from the community :slight_smile:

The setup...
My customer utilizes a device/instrument that has only two options for external communication/integration with Ignition.

  1. It can automatically export data to a semicolon-delimited CSV file.
  2. It does have a proprietary protocol/API, but it uses a C DLL, and I'm not an expert at wrapping that in an Ignition SDK module. I wouldn't even know where to start with something like that, and it would probably take me months to figure out.

So my focus has been on the CSV option. A couple more characteristics to consider:

  1. The columns in the CSV will vary per "Batch".
  2. The exporting software holds the CSV open and continuously writes to it (typically at 1-second scan rates); that doesn't prevent someone from opening or copying the CSV <-- I've verified that piece.
  3. These "Batches" may run multiple days (200k+ rows in the CSV).
  4. Multiple "Batches" could be running at the same time - the device/instrument in question has 5 channels and they use 2 of those devices, so they could potentially run 10 "Batches" simultaneously.

The goal...
I need to figure out the best/most efficient way to ingest those CSV files into Ignition's Historian. Here are a few of my thoughts that I'd like feedback on. I've done a small proof-of-concept for each, but I'm wondering if anyone has other ideas.

  1. Wait until the batch is finished and call system.tag.storeTagHistory() when a user clicks a button. The user would have to select/identify the CSV to import; there is no automatic detection of when a batch is complete other than no more rows being written to the CSV. This is already mostly scripted for a different piece of their system, so it would be the fastest solution to implement.
  2. Continuous ingestion - every X seconds Ignition would make copies of the CSVs, store the new data to history, record the last timestamp, then remove the copied CSVs. On the next iteration it would "find" the newest rows based on that recorded last timestamp (rough sketch after this list). This could get messy if my logic isn't spot-on with the timestamps.
  3. Do something similar to #2, but instead of storing to history it would write the most recent values to memory tags that have history enabled. This would require a very frequent timer script. (I'm talking with them about whether they really need 1-second data and might be able to use 5- or 10-second scan rates; also, some of their data doesn't change frequently, so Ignition's Historian tag configuration would help reduce the amount of data being saved.)
  4. Utilize a 3rd-party tool like Node-RED to parse the CSV and write the values to Ignition memory tags. I've used very little Node-RED, but it seems doable and would take the load off of Ignition.
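
To make #2 concrete, here's roughly what I've been prototyping - everything named below (file layout, tag paths, providers, date format) is a placeholder, not their real configuration:

```python
# Rough sketch of option #2 - not production code. Assumes the first CSV
# column is a timestamp, the header row names the variables, and a DateTime
# memory tag holds the last timestamp that was stored. The file path, date
# format, tag paths, and provider names below are all placeholders.
import csv
from java.text import SimpleDateFormat

def ingestNewRows(csvPath, lastTsTagPath):
    fmt = SimpleDateFormat("yyyy-MM-dd HH:mm:ss")  # assumed timestamp format
    lastTs = system.tag.readBlocking([lastTsTagPath])[0].value

    paths, values, timestamps = [], [], []
    newestTs = None
    with open(csvPath, 'rb') as f:
        reader = csv.reader(f, delimiter=';')
        header = reader.next()  # e.g. ['Timestamp', 'TempPV', 'pH', ...]
        for row in reader:
            ts = fmt.parse(row[0])
            if lastTs is not None and not ts.after(lastTs):
                continue  # row was already stored on a previous pass
            for col, raw in zip(header[1:], row[1:]):
                paths.append('Batch1/%s' % col)  # placeholder tag naming
                values.append(float(raw))
                timestamps.append(ts)
            newestTs = ts

    if paths:
        # storeTagHistory writes straight to the historian - the "tags" never
        # need to exist in the tag browser (hence the ghost-tag issue below)
        system.tag.storeTagHistory(historyprovider='MyHistoryProvider',
                                   tagprovider='default', paths=paths,
                                   values=values, timestamps=timestamps)
        system.tag.writeBlocking([lastTsTagPath], [newestTs])
```

I'd run that from a timer script against a copy of the live file, per the copy-then-delete idea above.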

The biggest problem with #1 and #2 is that the CSV tags don't actually exist in the Tag Browser - they're "ghost" tags, or purely historical tags. I'd like to give them a solution that is more real-time and doesn't require operator intervention - so I don't like option #1.

I see options #3 and #4 as being closer to real-time but also vastly more complex - especially when you consider that the number of CSV columns may vary (the column names never change, but the CSVs for different batches may have different numbers of columns). Because of that, the solution would need to automatically detect new columns and create those memory tags for the specific process cell. Pre-creating all possible columns isn't an option - the device offers close to 100 variables, but they'll only configure the CSV export to use 5-10 of those variables, and it may not always be the same 5-10. A few of the variables/columns will always be the same; it's the remaining ones that are unknown until the batch starts <-- that key point might be an issue with Node-RED unless all possible variables are created as memory tags.
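
For the auto-detection piece, I'm picturing something like this at batch start - the folder layout and Float8 data type are just my assumptions:

```python
# Sketch of the column-detection piece of option #3: given a batch CSV's
# header row, create any memory tags that don't already exist under that
# process cell's folder. Folder naming and data type are assumptions.
def ensureColumnTags(cellFolder, columnNames):
    tagDefs = [{
        'name': name,
        'tagType': 'AtomicTag',
        'valueSource': 'memory',
        'dataType': 'Float8',
    } for name in columnNames]
    # the 'm' (merge) collision policy only modifies existing tags with the
    # properties given here, so calling this at every batch start is harmless
    system.tag.configure('[default]' + cellFolder, tagDefs, 'm')
```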

Any thoughts or ideas or concerns are greatly appreciated! I am leaning towards option #3.

I would do a combination of #2 and #3, with pre-creation of all columns. (Perhaps a UDT with all columns.) I would not turn on history, or even use Ignition's historian. This is a classic "wide" table application and should have exactly that. Store nulls to columns that a particular run isn't using. Most databases store nulls efficiently with bitmaps.
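
In sketch form, assuming a hypothetical batch_data table with a t_stamp column plus one column per possible variable (the datasource name is made up):

```python
# Wide-table insert sketch: columns this run doesn't use get None, which the
# prepared statement binds as NULL.
def insertRow(ts, rowDict, allColumns):
    # rowDict maps column name -> value for the columns present in this
    # batch's CSV; allColumns is the fixed, known-safe list of column names
    cols = ['t_stamp'] + allColumns
    vals = [ts] + [rowDict.get(c) for c in allColumns]
    query = 'INSERT INTO batch_data (%s) VALUES (%s)' % (
        ', '.join(cols), ', '.join(['?'] * len(cols)))
    system.db.runPrepUpdate(query, vals, 'process_db')
```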

I did consider importing directly into a non-Ignition wide table, as you mentioned, but I haven't thought through all the UI changes that would be needed to support both types of trending (Ignition Historian and flat tables).

Maybe I'm making this way too complicated for myself by trying to use Ignition's historian...

That might make the downstream work less complicated, as they have several calculations and aggregations that run against the values (right now they do that in Excel), and pulling those values out of Ignition's historian has proven sluggish and very complex.

Do you think it's more efficient to just insert new rows directly into that wide table and then use Query Tags (with no history enabled)? Or to insert the new rows AND also write the values directly to memory tags for on-screen display?

Thanks for the idea - that is making me rethink my original options/plan.

Ignition has a read-only "Wide table historian" specifically for exposing data in wide tables to components that expect an Ignition historian.

This. Only write that last row to the tags after storing all new rows.
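
Something like this, reusing the hypothetical names from the sketches above:

```python
# After all new rows are inserted into the wide table, push just the newest
# row's values to the per-cell memory tags for on-screen display.
def writeLastRow(cellFolder, lastRow):
    # lastRow: {column name: value} for the final CSV row that was stored
    items = lastRow.items()
    paths = ['[default]%s/%s' % (cellFolder, col) for col, _ in items]
    system.tag.writeBlocking(paths, [val for _, val in items])
```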

Just for further reference:

:man_facepalming: I didn't even realize that option was available... time to do some learning. Thank you!