JSON to Ignition Dataset

Hi, and good day. I would like some advice on converting the contents of this JSON file to an Ignition Dataset. The goal is to plot either the average or max measurement of the parameters in results. Thanks.

PLP8-20231120101704.json (4.5 KB)

"results": {
    "1": {                ------> test condition
        "chgRate": [      ------> parameter
            3.5,          ---> average
            5.0,          ---> max
            2.0           ---> min

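The annotated mapping above can be read back with plain dict/list indexing once the file has been decoded. A minimal sketch using only the annotated values (the dict below is a hand-built stand-in, not the full file):

```python
# Sketch of the structure annotated above: condition -> parameter ->
# [average, max, min]. Values copied from the annotation only.
results = {
    "1": {                # test condition
        "chgRate": [      # parameter
            3.5,          # average
            5.0,          # max
            2.0,          # min
        ],
    },
}

avg, mx, mn = results["1"]["chgRate"]
```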
{
    "fileName": "PLP8-20231120101704",
    "testerID": "14.3.2",
    "testStart": "2023-08-09 15:26:33",
    "testEnd": "2023-08-11 17:12:39",
    "activeMaterial": 11382,
    "cellBuilder": "NN",
    "builRemarks": "MPC",
    "chgCondition": {
        "1": [
            "1138.2",
            "2.4"
        ],
        "2": [
            "569.1",
            "2.4"
        ],
        "3": [
            "285",
            "2.4"
        ],
        "4": [
            "285",
            "2.4"
        ]
    },
    "dchgCondition": {
        "1": [
            "1138.2",
            "0.5"
        ],
        "2": [
            "569.1",
            "0.5"
        ],
        "3": [
            "285",
            "0.5"
        ]
    },
    "cycleCountSet": {
        "1": "5",
        "2": "10",
        "3": "10"
    },
    "cycleCompleted": 23,
    "cRate": {
        "1": "3.4C",
        "2": "1.4C",
        "3": "2.9C"
    },
    "results": {
        "1": {
            "cycleID": [
                3.5,
                5.0,
                2.0
            ],
            "chargeCap": [
                331.92,
                343.63,
                305.85
            ],
            "dchargeCap": [
                333.3,
                341.19,
                319.81
            ],
            "dchgSpecCap": [
                29.28,
                29.98,
                28.1
            ],
            "chgDchgEff": [
                100.52,
                104.57,
                98.66
            ],
            "dchgEnergy": [
                0.55,
                0.56,
                0.52
            ],
            "dchgSpecEnergy": [
                48.07,
                49.6,
                45.53
            ],
            "medianVoltage": [
                1.68,
                1.7,
                1.66
            ],
            "chgTime": [
                1049750.0,
                1086800.0,
                967300.0
            ],
            "dchgTime": [
                1054125.0,
                1079100.0,
                1011500.0
            ]
        },
        "2": {
            "cycleID": [
                11.0,
                15.0,
                7.0
            ],
            "chargeCap": [
                431.47,
                442.71,
                420.57
            ],
            "dchargeCap": [
                398.22,
                404.77,
                393.49
            ],
            "dchgSpecCap": [
                34.99,
                35.56,
                34.57
            ],
            "chgDchgEff": [
                92.32,
                93.96,
                89.8
            ],
            "dchgEnergy": [
                0.67,
                0.68,
                0.66
            ],
            "dchgSpecEnergy": [
                59.18,
                60.02,
                58.19
            ],
            "medianVoltage": [
                1.72,
                1.73,
                1.71
            ],
            "chgTime": [
                2728762.5,
                2799900.0,
                2659800.0
            ],
            "dchgTime": [
                2518125.0,
                2559500.0,
                2488200.0
            ]
        },
        "3": {
            "cycleID": [
                20.33,
                23.0,
                17.0
            ],
            "chargeCap": [
                485.01,
                663.54,
                444.61
            ],
            "dchargeCap": [
                382.44,
                441.0,
                190.06
            ],
            "dchgSpecCap": [
                33.6,
                38.75,
                16.7
            ],
            "chgDchgEff": [
                79.76,
                94.46,
                42.75
            ],
            "dchgEnergy": [
                0.66,
                0.76,
                0.36
            ],
            "dchgSpecEnergy": [
                57.66,
                66.41,
                32.04
            ],
            "medianVoltage": [
                1.75,
                1.89,
                1.71
            ],
            "chgTime": [
                6126950.0,
                8382400.0,
                5616600.0
            ],
            "dchgTime": [
                4829333.33,
                5568800.0,
                2400000.0
            ]
        }
    }
}

How are you wanting to plot/display the data? Vision or Perspective? That will inform how you'll need to transform this JSON data.


Please list the column headers you want on your dataset.

Just having a bit of fun on my computer, I was able to convert your results to a dataset using this script:

# filepath = wherever your file is stored

# Get the file as a string
jsonString = system.file.readFileAsString(filepath)

# Load the file as a json object
jsonFile = system.util.jsonDecode(jsonString)

# Get the first key of the results as a reference for getting the headers and iteration lengths
firstKey = next(iter(jsonFile['results']))

# Get the headers from the first key's keys
headers = jsonFile['results'][firstKey].keys()

# "data =" could be done with comprehension, but nested comprehensions can get convoluted
# Initialize a data list for the row lists that will comprise the data in the dataset
data = []

# Iterate over the indexes of the first value list in the 'results' dictionary
for index in range(len(next(iter(jsonFile['results'][firstKey].values())))):
	
	# For each key in the 'results' dictionary
	for key in jsonFile['results'].keys():
	
		# Create a row by extracting the 'index'-th element from each value list
		rowData = [value[index] for value in jsonFile['results'][key].values()]
		
		# Append the created row to the 'data' list
		data.append(rowData)
		
# Convert the headers and data into a dataset
dataset = system.dataset.toDataSet(headers, data)
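The loop's row-building behavior can be checked outside Ignition with a stand-in dict (the final system.dataset.toDataSet call is omitted since the system module only exists in a gateway/designer scope; the two-condition dict below is a hypothetical, trimmed sample):

```python
# Stand-in for jsonFile['results']: two conditions, two parameters each
# (values trimmed from the sample file for brevity)
results = {
    "1": {"chargeCap": [331.92, 343.63, 305.85],
          "dchargeCap": [333.3, 341.19, 319.81]},
    "2": {"chargeCap": [431.47, 442.71, 420.57],
          "dchargeCap": [398.22, 404.77, 393.49]},
}

firstKey = next(iter(results))
headers = list(results[firstKey].keys())

data = []
# One row per (index, condition) pair, matching the script above:
# 3 statistics x 2 conditions -> 6 rows, one column per parameter
for index in range(len(next(iter(results[firstKey].values())))):
    for key in results:
        data.append([value[index] for value in results[key].values()])

# In Ignition: dataset = system.dataset.toDataSet(headers, data)
```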

Result: (screenshot of the resulting dataset)


Using the same jsonFile as Justin:

from itertools import chain, izip_longest as zipl

... 

rows = jsonFile['results'].values()
headers = rows[0].keys()
data = [list(row) for row in zipl(*[list(chain.from_iterable(row[k] for row in rows)) for k in headers])]
dataset = system.dataset.toDataSet(headers, data)
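For the curious: the chain/zip_longest combination flattens each parameter's lists across all conditions into one long column, then transposes the columns into rows. A sketch on toy data (izip_longest is the Jython 2 spelling; Python 3 calls it zip_longest; the two dicts are hypothetical):

```python
from itertools import chain, zip_longest as zipl  # izip_longest on Jython 2

# Two hypothetical conditions with matching parameter keys
rows = [
    {"chargeCap": [1.0, 2.0], "dchargeCap": [10.0, 20.0]},
    {"chargeCap": [3.0, 4.0], "dchargeCap": [30.0, 40.0]},
]
headers = list(rows[0].keys())

# One flattened column per header: condition 1's values, then condition 2's
cols = [list(chain.from_iterable(row[k] for row in rows)) for k in headers]
# Transpose columns into rows; zipl pads with None if columns differ in length
data = [list(row) for row in zipl(*cols)]
# data == [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
```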

edit: Don't do this in prod if you want to stay friends with the people you work with!
Note that it could be simpler if toDataSet accepted tuples...

edit2: :grimacing:

ds = system.dataset.toDataSet(
	next(iter(jsonFile['results'].itervalues())).keys(),
	[list(row) for row in zipl(*[list(chain.from_iterable(row[k] for row in jsonFile['results'].itervalues())) for k in next(iter(jsonFile['results'].itervalues())).keys()])]
)

Sorry, I'm trying to quit smoking and I might need to compensate a bit.


It's my second week of using Ignition and I really appreciate the advice. Thanks Justin and Pascal. I'll try using Justin's script.
I'll be using Perspective for visualization.

I also tried translating the JSON into a "linear" format, so I ended up making 2 datasets: one for attributes and another for test results. The production test conditions aren't fixed yet, hence the multiple conditions, but 3 seems to be the max for now. I changed my backend parser script to append a batteryID into the results so I could reference it as a primary key.

Could I ask for some advice on inserting the data into MySQL? I already have the driver, and the DB status is valid.

The first visualization goal is a line chart of dchargeCap(1).

PLP36-20231127161159.json (3.2 KB)

# For creating the attribute dataset, single-row format
# Added a batteryID attribute
import collections
attribKeys = 	[
				"batteryID",
				"fileName",
				"activeMaterial",
				"buildRemarks",
				"cellBuilder",
				"testerID",
				"testStart",
				"testEnd",
				"cRate",
				"cycleCompleted"
				]
attrib = dict((key, data[key]) for key in attribKeys)
sortedAttribKeys = sorted(attrib.keys())
datasetAttrib = system.dataset.toDataSet(sortedAttribKeys, [[attrib[key] for key in sortedAttribKeys]])

# For creating the results dataset, single-row format
# Condition number gets added to each column (param) name
# Added a batteryID attribute
newD = {}
newD2 = {}
for condNum in range(len(data["results"])):
	cndNum = condNum + 1
	d = data["results"][str(cndNum)]
	d.update({"batteryTestID":str(attrib["batteryID"])})
	od = collections.OrderedDict(sorted(d.items()))
	
	for key,value_list in od.iteritems():
		newD[key+str(cndNum)] = [value_list]
		
	newD2.update(newD)

# newD2 keys sorted once; the OrderedDict keeps keys() aligned with values()
onewD2 = collections.OrderedDict(sorted(newD2.items()))
sortedResultdset = system.dataset.toDataSet(onewD2.keys(), [onewD2.values()])

datasetAttrib
sortedResultdset

To insert data to a database you would call one of the database functions designed for update queries.

Preferably a Named Query, but if not, then one of the system.db.run*PrepUpdate() functions. Here are links to those in the manual; I'd need more information to help beyond that.

https://docs.inductiveautomation.com/display/DOC81/system.db.runNamedQuery
https://docs.inductiveautomation.com/display/DOC81/system.db.runPrepUpdate
https://docs.inductiveautomation.com/display/DOC81/system.db.runSFPrepUpdate
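As a sketch of the prepared-statement shape (the table and column names here are hypothetical; adjust to your schema), the query string can be built from the dataset headers and the row values passed as arguments:

```python
# Hypothetical table/columns for the attribute dataset; "?" placeholders keep
# the query safe from injection and let the driver handle types.
headers = ["batteryID", "fileName", "cycleCompleted"]
row = ["B-001", "PLP8-20231120101704", 23]

columns = ", ".join(headers)
placeholders = ", ".join(["?"] * len(headers))
query = "INSERT INTO battery_attrib (%s) VALUES (%s)" % (columns, placeholders)
# query == "INSERT INTO battery_attrib (batteryID, fileName, cycleCompleted) VALUES (?, ?, ?)"

# In Ignition (not runnable outside a gateway/designer scope):
# system.db.runPrepUpdate(query, row, database="MyDB")
```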

For creating a dataset, you really don't need the overhead of an OrderedDict. Instead, just use a list; lists are already ordered.

A dataset is, honestly, just a fancy wrapper around a list of lists, so your script should build a list of lists.
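A sketch of the list-based approach on a hypothetical subset of the attributes (system.dataset.toDataSet is shown only as a comment, since it is unavailable outside Ignition):

```python
# 'data' stands in for the decoded JSON (hypothetical subset of attributes)
data = {
    "batteryID": "B-001",
    "fileName": "PLP8-20231120101704",
    "cycleCompleted": 23,
}
attribKeys = ["batteryID", "fileName", "cycleCompleted"]

# Sorted headers and a single row pulled in the same order: no OrderedDict
headers = sorted(attribKeys)
row = [data[key] for key in headers]
# In Ignition: datasetAttrib = system.dataset.toDataSet(headers, [row])
```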

If your goal is to translate JSON into a dataset, then do that. However, if your goal is to display data in a Perspective XY Chart, I would not bother first converting the JSON into a dataset; instead, convert it into the format needed by the XY Chart component, which by default is a list of objects where each object represents an intersection of the X and Y axes on a per-series basis.

You have stated that you need a Line Chart of the column `dchargeCap`. I'll assume that for this sample data you ran 3 tests, and you want to chart the Average, Min, and Max for each of those conditions.

You could do something like this:

###########################################################
###########################################################
###  note that because you're using perspective, getting a file is different 
###  than if you're using vision and depending on the method used to 
###  obtain the file, then how you generate a JSON string from that file 
###  will change
###########################################################
###########################################################


#Once you have the string, then we must decode it.
json = system.util.jsonDecode(jsonString)

#I only care about the results for this case, and for that I only
#care about the dchargeCap parameter for each condition
#This will build a list of objects, with only the dchargeCap
measures = ['average','max','min']
retResults = []

for condition,params in json['results'].iteritems():
    values = {measure:value for measure,value in zip(measures,params['dchargeCap'])}
    values['condition'] = condition
    retResults.append(values)

return {'results':retResults}
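The same loop can be sanity-checked outside Ignition (iteritems is Jython 2; items works in both; the two-condition dict is a hypothetical stand-in for the decoded file, trimmed to dchargeCap):

```python
# Stand-in for json['results'], trimmed to the dchargeCap parameter
results = {
    "1": {"dchargeCap": [333.3, 341.19, 319.81]},
    "2": {"dchargeCap": [398.22, 404.77, 393.49]},
}
measures = ["average", "max", "min"]

retResults = []
for condition, params in sorted(results.items()):
    # Pair each measure name with its value: the file stores [avg, max, min]
    values = dict(zip(measures, params["dchargeCap"]))
    values["condition"] = condition
    retResults.append(values)

# retResults[0] == {'average': 333.3, 'max': 341.19, 'min': 319.81, 'condition': '1'}
```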

NOTE: There are more concise and probably more performant ways of writing this script; however, I tried to keep it simple and readable while maintaining some semblance of Pythonic code.

You can use this in a library script called from an expression binding using runScript() on the dataSources property of an XY Chart.



Strongly disagree. Datasets carry metadata (column names and data types), optionally per-value quality data, and are efficiently serializable.

While the simplest tools to construct datasets may not use all of these features, they are crucial for performance in many applications.
