Power Table column data: search a separate DB table for a match

I have an application that reads a CSV file, and we then insert that data into a database. One of the columns references vendors, but the CSV stores the vendor as hand-typed text. I’m trying to come up with a good way to take that text, search our vendors table, and insert the vendor’s ID instead of the text, so we can make use of keys and manage vendors in one spot instead of several. One issue is what happens when there is no exact match, or when there are several matches.

After I read the CSV, I load it into a power table so users can preview the data before it is inserted into the DB. I was wondering if I can add another column to the table that shows the vendor matches as a dropdown, so if users don’t like what they see they can select the correct one.

Wondering your thoughts on this. I don’t want to fall down the rabbit hole. :slight_smile:

I have done something similar before using a table and Python scripting, and it may work for your case as well (or any component that uses a dataset? I haven’t tested that). With Python scripting you can show, hide, or alter a dataset as needed. It’s important to note that datasets are immutable, so the dataset functions return a new dataset rather than modifying the original; you must assign that returned dataset to a variable (or back to the component) to be able to use the altered values.
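A quick sketch of what I mean (the component name and the 'Vendor' column are made up; setValue is just one of the system.dataset functions that behave this way):

table = event.source.parent.getComponent('tblPreview')   # made-up component name

# setValue does NOT change table.data in place; it returns a brand-new dataset
newData = system.dataset.setValue(table.data, 0, 'Vendor', 'ACME Corp')

# Nothing visible happens until the returned dataset is assigned back
table.data = newData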

Another thing: it is easy to forget that PyDataSets and Ignition Datasets need to be converted before being used in place of each other, otherwise they may not play nicely. I once stared at my script wondering why it only worked with static values… Turns out I just hadn’t converted the Python table built from tag values into an Ignition Dataset.
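For instance (another rough sketch, component name made up again):

# A query run from scripting comes back as a PyDataSet, which is easy to loop over...
pyData = system.db.runQuery('SELECT VendorID, VendorName FROM tblVendors', 'MyDatabaseConn')

# ...but a table's data property expects an Ignition Dataset, so convert before assigning
table = event.source.parent.getComponent('tblPreview')   # made-up component name
table.data = system.dataset.toDataSet(pyData)

# Going the other way, toPyDataSet makes an Ignition Dataset easy to script against
pyRows = system.dataset.toPyDataSet(table.data)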

It’s still a rabbit hole, just a smaller one, since you have to script it yourself, but it can be done. One benefit is that you customize how the dataset is altered for your own use case. The downside is that you customize how the dataset is altered for your own use case.

Official 8.0 Docs for Vision Table, Datasets, and Python Scripting for the lazy.

I would build a dictionary keyed by vendor name before loading the CSV.
Then, for each line read from the CSV, you can just check whether that vendor is in the dictionary.
Then add the result to the dataset.

existsDic = {}
vDS = system.db.runQuery('SELECT VendorName, VendorID FROM tblVendors', 'MyDatabaseConn')
# Build a lookup of vendor name -> vendor ID (runQuery returns a PyDataSet)
for row in vDS:
    existsDic[row['VendorName']] = row['VendorID']

import csv
with open(csvPath, 'rb') as f:            # csvPath: path to the uploaded CSV file
    for line in csv.reader(f):
        vendorName = line[0]              # assuming the vendor text is in the first column
        if vendorName in existsDic:
            vendorId = existsDic[vendorName]
            # Do something for an existing vendor (insert the ID instead of the text)
        else:
            pass
            # Do something for a new vendor

Something along those lines
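If you want to surface the match in the preview table itself, one option (rough sketch, assuming the power table is named 'tblPreview' and the CSV text lands in a column called 'Vendor') is to append the matched IDs as a new column:

table = event.source.parent.getComponent('tblPreview')   # made-up component name
previewData = table.data

# Look up each row's vendor text; -1 marks a vendor with no match
matchedIds = [existsDic.get(row['Vendor'], -1) for row in system.dataset.toPyDataSet(previewData)]

# addColumn returns a NEW dataset, so assign it back to the table to see it
table.data = system.dataset.addColumn(previewData, matchedIds, 'VendorID', int)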