Custom Datasets Efficiency

I am curious as to the most efficient way to create a custom dataset from a UDT that comes from OPC. Currently I have this script in a project library:

def createTable(dryer_data):  # Renamed DryerType to dryer_data for clarity

    # Create Header
    header = ['Name', 'Set', 'Actual']
    
    SetName = dryer_data['Set']['Name']
    ActualName = dryer_data['Actual']['Name']

    SetNumber = str(dryer_data['Set']['Number'])
    ActualNumber = str(dryer_data['Actual']['Number'])

    SetDryingTime = str(dryer_data['Set']['DryingTime'])
    ActualDryingTime = str(dryer_data['Actual']['DryingTime'])

    SetCoolingTime = str(dryer_data['Set']['CoolingTime'])
    ActualCoolingTime = str(dryer_data['Actual']['CoolingTime'])

    SetInletTemp = str(dryer_data['Set']['InletTemp'])
    ActualInletTemp = str(dryer_data['Actual']['InletTemp'])

    SetOutletTemp = str(dryer_data['Set']['OutletTemp'])
    ActualOutletTemp = str(dryer_data['Actual']['OutletTemp'])

    SetCoolingTemp = str(dryer_data['Set']['CoolingTemp'])
    ActualCoolingTemp = str(dryer_data['Actual']['CoolingTemp'])

    SetReversingTime = str(dryer_data['Set']['ReversingTime'])
    ActualReversingTime = str(dryer_data['Actual']['ReversingTime'])

    SetUnloadingTime = str(dryer_data['Set']['UnloadingTime'])
    ActualUnloadingTime = str(dryer_data['Actual']['UnloadingTime'])

    SetInfaredTemp = str(dryer_data['Set']['InfaredTemp'])
    ActualInfaredTemp = str(dryer_data['Actual']['InfaredTemp'])

    SetInfaredTime = str(dryer_data['Set']['InfaredTime'])
    ActualInfaredTime = str(dryer_data['Actual']['InfaredTime'])

    SetAfterDryingTime = str(dryer_data['Set']['AfterDryingTime'])
    ActualAfterDryingTime = str(dryer_data['Actual']['AfterDryingTime'])
    
    # Create Rows
    rows = [
        ['Program Name', SetName, ActualName],
        ['Program Number', SetNumber, ActualNumber],
        ['Drying Time', SetDryingTime, ActualDryingTime],
        ['Cooling Time', SetCoolingTime, ActualCoolingTime],
        ['Inlet Temp', SetInletTemp, ActualInletTemp],
        ['Outlet Temp', SetOutletTemp, ActualOutletTemp],
        ['Cooling Temp', SetCoolingTemp, ActualCoolingTemp],
        ['Reversing Time', SetReversingTime, ActualReversingTime],
        ['Unloading Time', SetUnloadingTime, ActualUnloadingTime],
        ['Infared Temp', SetInfaredTemp, ActualInfaredTemp],
        ['Infared Time', SetInfaredTime, ActualInfaredTime],
        ['After Drying Time', SetAfterDryingTime, ActualAfterDryingTime]
    ]

    # Create the dataset
    dataset = system.dataset.toDataSet(header, rows)

    # Return the dataset; the caller can display it or write it to a tag
    return dataset

I then call this through a Tag History binding on a table, like this:

I was wondering, is this the most efficient way to do this, or is there something better, like a transaction group, that could accomplish this? The idea is to have this visualized in real time in Perspective and stored to a database in a way that is easily read. Also keep in mind that in this UDT there are a lot of different data types that I am converting to strings so that the table structure can look like this:

I'm very confused.

What do you think your tag history binding is doing, currently?

Because it's not actually used in your transform, meaning you're running the binding for literally no purpose.

As a secondary tangent: don't explicitly import project library scripts (from createData import createTable); just use them directly (createData.createTable(x)).

Ok, that makes sense. Would this be better served by just calling this script from a gateway timer script and writing the result to a dataset tag?

To be clear, the only reason I used the tag history binding was to enable polling so that the table would update at a specified rate.

Okay, yeah, so don't do that. If you need a binding to poll, use an expression binding: now(updateRateMilliseconds). At least if you're throwing that away, it's going to run in a few microseconds, and not require a full network I/O hop to retrieve history data that's then immediately discarded.
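
For instance, here's a minimal sketch of that pattern (the tag provider, instance path, and 5000 ms rate are hypothetical; createData.createTable is the library function from the original post): bind the table's data property to the expression now(5000), then attach a script transform that ignores the timestamp and rebuilds the dataset:

# Script transform on an expression binding of now(5000). The incoming
# 'value' is just the timestamp from now() and is deliberately ignored -
# the binding exists only to re-run this transform every five seconds.
def transform(self, value, quality, timestamp):
    base = '[default]Dryers/Dryer1'  # hypothetical UDT instance path
    members = ['Name', 'Number', 'DryingTime', 'CoolingTime', 'InletTemp',
               'OutletTemp', 'CoolingTemp', 'ReversingTime', 'UnloadingTime',
               'InfaredTemp', 'InfaredTime', 'AfterDryingTime']
    paths = (['%s/Set/%s' % (base, m) for m in members] +
             ['%s/Actual/%s' % (base, m) for m in members])
    values = [qv.value for qv in system.tag.readBlocking(paths)]
    n = len(members)
    dryer_data = {
        'Set': dict(zip(members, values[:n])),
        'Actual': dict(zip(members, values[n:])),
    }
    # Project library scripts are used directly - no import needed.
    return createData.createTable(dryer_data)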

But all that's a tangent. Let's go back a step:
You have a bunch of individual tags here that you're trying to assemble into a dataset for display on a view. That's fine.
Are each of these individual tags supposed to be stored into a database? You almost certainly want them to be logged individually - not in some kind of pseudo dataset format. If you just need trends over time, that's a fine job for the tag historian. If you want event driven storage, such as "when I change X, do Y", then you need either transaction groups or to just script database updates. This is a totally separate problem from the live display in a table problem, and should be treated as such.
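
If you go the scripting route, a minimal sketch of the event-driven version might look like this as a Value Changed tag event script (the dryer_log table and its columns are made up for illustration):

# Value Changed tag event script on e.g. the Set/Number member.
# Skip the initial subscription event, then log the new value to a
# hypothetical 'dryer_log' table in the default database connection.
if not initialChange:
    system.db.runPrepUpdate(
        "INSERT INTO dryer_log (t_stamp, tag_path, value) VALUES (?, ?, ?)",
        [system.date.now(), str(tagPath), currentValue.value]
    )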

The "live display of a sequence of values" problem is best handled with bindings, so that you don't have to do the polling yourself and can reuse our tag subscription mechanisms for efficiency. Unfortunately it's slightly convoluted to do this - you need to configure an inner view to use for each row of the table, and provide it tag paths to use internally in indirect bindings:


This is not gonna solve anything, but it will certainly make things easier:

def create_table(dryer_data):  # Renamed DryerType to dryer_data for clarity
    # A list of pairs rather than a dict, so the row order is preserved
    # (plain dicts are unordered in Jython 2.7).
    key_map = [
        ('Program Name', 'Name'),
        ('Program Number', 'Number'),
        ('Drying Time', 'DryingTime'),
        ('Cooling Time', 'CoolingTime'),
        ('Inlet Temp', 'InletTemp'),
        ('Outlet Temp', 'OutletTemp'),
        ('Cooling Temp', 'CoolingTemp'),
        ('Reversing Time', 'ReversingTime'),
        ('Unloading Time', 'UnloadingTime'),
        ('Infared Temp', 'InfaredTemp'),
        ('Infared Time', 'InfaredTime'),
        ('After Drying Time', 'AfterDryingTime'),
    ]

    data = [
        [k, dryer_data['Set'][v], dryer_data['Actual'][v]]
        for k, v in key_map
    ]
    return system.dataset.toDataSet(
        ['Name', 'Set', 'Actual'],
        data
    )
  • This is without assuming anything about the data coming in.
  • Keep your variable names' case consistent: you're mixing snake_case (dryer_data) and PascalCase (SetNumber). Ignition uses camelCase. PEP 8 recommends snake_case for function and variable names; PascalCase is reserved for classes.
  • # Create Rows: please don't. Comments need to be useful, or they only add clutter and make things less readable.
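
For instance, a quick smoke test from the Script Console (assuming the library is named createData, as in the import example above; the input values are placeholders, not real dryer data):

# Build a placeholder input with every member the key_map expects,
# then confirm the column layout of the resulting dataset.
members = ['Name', 'Number', 'DryingTime', 'CoolingTime', 'InletTemp',
           'OutletTemp', 'CoolingTemp', 'ReversingTime', 'UnloadingTime',
           'InfaredTemp', 'InfaredTime', 'AfterDryingTime']
dryer_data = {
    'Set': dict((m, 0) for m in members),
    'Actual': dict((m, 0) for m in members),
}
ds = createData.create_table(dryer_data)
print ds.getColumnNames()  # [Name, Set, Actual]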

Ok, I think I understand what you are saying. These are two separate issues. For storing the tags to the database, I should connect the UDT's individual tags to the historian. Then for the live view, I should create the table as a custom component that I can pass the UDT into as a parameter, which will connect to each of the inner view rows like this: