PyDataset to Dictionary

Hi,

I have created a PyDataset, 2R x 2C, and I would like to convert it to a dictionary and then to JSON format.
An example for the pyDataset,
Header = [Fruit, Cost]
Row1 = [Apple, 5]
Row2 = [Pear, 1]

I found a similar topic as a reference, but I am still unable to turn the dataset into a dictionary: https://forum.inductiveautomation.com/t/bug-15901-json-encoding-and-decoding-a-dataset-datatype-property/31617

I have managed to do it manually with the single line below. Is there an alternative way to do it for any number of rows and columns in my dataset?

NewDict = {"Data1": {Header[0]: pyData[0][0], Header[1]: pyData[0][1]}, "Data2": {Header[0]: pyData[1][0], Header[1]: pyData[1][1]}}

So the end result will be as follows.
{
	"Data1": {
		"Fruit": "Apple",
		"Cost": "5"
	},
	"Data2": {
		"Fruit": "Pear",
		"Cost": "1"
	}
}
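The fixed 2x2 literal generalizes with a loop. A plain-Python sketch of the target structure, with `Header` and `rows` standing in for the real PyDataset contents (an assumption, since `system.*` only exists inside Ignition):

```python
# Plain lists standing in for the PyDataset contents (assumption).
Header = ["Fruit", "Cost"]
rows = [["Apple", 5], ["Pear", 1]]

# Build "Data1", "Data2", ... keys, pairing each header with the row value.
NewDict = {}
for i, row in enumerate(rows):
    NewDict["Data" + str(i + 1)] = dict(zip(Header, row))

# NewDict["Data1"] is {"Fruit": "Apple", "Cost": 5}
```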

Thanks in advance!

I managed to solve the conversion; below is the code:

MainData = []
i = 0
while i < pyData.getRowCount():
	MainData.append("Data" + str(i))
	i = i + 1
	
NewDict = {}
row = 0
col = 0

for j in MainData:
	NewDict[j] = {}
	for k in Header:	
		NewDict[j][k] = pyData.getValueAt(row, col)
		col = col + 1
	row = row + 1
	col = 0

jsonNewStr = system.util.jsonEncode(NewDict, 5)
print jsonNewStr

Just FYI, to be more succinct, you can also use:
for i in range(pyData.getRowCount()):
Then you don’t need the i=0 or the i+=1 :slightly_smiling_face:
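With `range()` the index bookkeeping from the earlier `while` loop collapses; here `row_count` stands in for `pyData.getRowCount()` (an assumption for a runnable sketch):

```python
row_count = 2  # stands in for pyData.getRowCount() (assumption)

# One pass over range() replaces the manual i = 0 / i = i + 1 bookkeeping.
MainData = []
for i in range(row_count):
    MainData.append("Data" + str(i))

# MainData is ["Data0", "Data1"]
```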


Also, if you use zip you can iterate through multiple objects at once.
Also-also, list and dictionary comprehensions are useful to learn.

Header = pyData.getColumnNames()

NewDict = {}
for row, i in zip(pyData, range(pyData.getRowCount())):
	NewDict['Data'+str(i)] = {colName:value for colName, value in zip(Header, list(row))}

I never thought about using zip like that with a for loop to get eaches of both lists… Interesting!
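For anyone else who hasn't seen it: `zip` simply pairs items from each sequence positionally, stopping at the shortest one. A minimal demo with plain lists standing in for the dataset rows (assumption):

```python
rows = [["Apple", 5], ["Pear", 1]]

# zip pairs each row with its index, element by element.
pairs = list(zip(rows, range(len(rows))))
# pairs is [(["Apple", 5], 0), (["Pear", 1], 1)]
```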


I mean, as long as we’re golfing, you can also nest the comprehensions:

NewDict = {
    'Data %s' % i: {
        colName: value
        for colName, value in zip(pyData.columnNames, list(row))
    }
    for i, row in enumerate(pyData)
}

Now you’re just showing off :smile:

I was going to do that, but I wanted to leave something to nerd snipe Paul. :stuck_out_tongue:


I wrote and use the following:

def dictListToDataset(dictValues, defColumns=None):
	"""
	This will take a python list of dictionaries.  
	The dictionaries will contain Name/Value pairs where the Name field will become the Column Name for the resulting Ignition Dataset.
	Each dictionary row MUST match, it must contain the same Name/Value pairs with the Values being the same data type.
		
	Arguments:
		dictValues: The input list of dictionaries
	
	Results:
		Dataset: A built dataset containing all the Rows and Columns extracted from the dictionaries.
	"""
	logger = shared.MWES.Logging.getLog(lib=libName, src=srcName, autoMethod=True )

	data = []
	columns = defColumns
	for obj in dictValues:
		if columns is None:
			columns = obj.keys()
		row = []
		for column in columns:
			row.append(obj[column])
		data.append(row)
	if columns is not None:
		logger.debug("Columns %s, Data %s" % (str(columns), str(data)))
		return system.dataset.toDataSet(columns,data)
	else:
		raise ValueError('could not find dictionary keys in %s' % (str(dictValues)))
		
def datasetToDictList(dataset):
	"""
	This will take an Igntion Dataset and convert it to a real python list of dictionaries.  Unlike the PyDataset type.  
	The resulting dictionaries will contain Name/Value pairs where the Name field is derived from the Column Name in the Ignition Dataset.
		
	Arguments:
		dataset: The input Ignition Dataset
	
	Results:
		List of Dictionary Elements: A built list of dictionary rows containing all the Rows and Columns extracted from the dataset.
	"""
#	logger = shared.MWES.Logging.getLog(lib=libName, src=srcName, autoMethod=True )

	columns = list(system.dataset.getColumnHeaders(dataset))
	pyDataset = system.dataset.toPyDataSet(dataset)

	return pyDatasetToDictList(pyDataset, columns)
	
def pyDatasetToDictList( *args, **kwargs ):
	"""
	This will take an Igntion PyDataset and convert it to a real python list of dictionaries.  Unlike the PyDataset type.  
	The resulting dictionaries will contain Name/Value pairs where the Name field is derived from the Column Name in the Ignition PyDataset.
		
	Arguments:
		pyDataset: The input Ignition PyDataset
		columns: Optional list of column names to extract
	
	Results:
		List of Dictionary Elements: A built list of dictionary rows containing all the Rows and Columns extracted from the dataset.
	"""
#	logger = shared.MWES.Logging.getLog(lib=libName, src=srcName, autoMethod=True )

	pyDataset = None
	columns = None
	
	idx = 0
	sz = len(args)
	pyDataset = shared.MWES.Args.getParam( sz, idx, "PyDataset", False, True, args, kwargs )
	idx+=1
	columns = shared.MWES.Args.getParam( sz, idx, "Columns", None, False, args, kwargs )
	idx+=1

	data = []
	if pyDataset is None:
		return data
		
	if columns is None:
		columns = list(system.dataset.getColumnHeaders(system.dataset.toDataSet(pyDataset)))
	for row in pyDataset:
		rowMap = {}
		for col in columns:
			rowMap[col] = row[col]
		data.append(rowMap)
	
	return data

It would have been really NICE if the pyDataset did the above for us!!!
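The two conversions above can be sketched in plain Python, with a `(columns, rows)` pair standing in for the Ignition dataset; `dict_list_to_table` and `table_to_dict_list` are made-up names for illustration only:

```python
def dict_list_to_table(dict_values, columns=None):
    # Mirror of dictListToDataset: derive columns from the first dict's keys.
    data = []
    for obj in dict_values:
        if columns is None:
            columns = list(obj.keys())
        data.append([obj[c] for c in columns])
    if columns is None:
        raise ValueError("could not find dictionary keys in %s" % dict_values)
    return columns, data

def table_to_dict_list(columns, data):
    # Mirror of datasetToDictList: one dict per row, keyed by column name.
    return [dict(zip(columns, row)) for row in data]

cols, rows = dict_list_to_table([{"Fruit": "Apple", "Cost": 5}])
round_trip = table_to_dict_list(cols, rows)
# round_trip is [{"Fruit": "Apple", "Cost": 5}]
```

The round trip returning the original dicts is what makes the pair useful for shuttling data in and out of dataset-shaped properties.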


Thanks everyone for all the comments. I am still new to Python, so I'm glad to learn all the different ways to do it.

This isn't exactly your question, but I just figured it out recently, and I think it's relevant enough that it might help someone:

If you have a python dictionary that you want to save for later you can store it in a memory document datatype tag like this:

myDict = {'some':'stuff'}
myJsonDict = system.util.jsonEncode(myDict)
system.tag.writeAsync(["Document Tag Path"], [myJsonDict])

and then later when you want to use it you can get your dictionary back like this:

myDict = dict(system.tag.read("Document Tag Path").value)
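Outside Ignition you can sketch the same round trip with the stdlib `json` module in place of `system.util.jsonEncode` (an assumption for illustration; inside Ignition you'd use the `system.util` calls above):

```python
import json

myDict = {"some": "stuff"}

# Encode before writing to the Document tag, decode after reading it back.
encoded = json.dumps(myDict)
restored = json.loads(encoded)
# restored == myDict
```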

You don’t need the inner comprehension. You can turn the zip directly into a dict.

NewDict = { 
 "Data %d" % i : dict(zip(pyData.columnNames,list(row))) 
     for i,row in enumerate(pyData)
}
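For reference, `dict(zip(...))` on its own already builds one row's dictionary; plain lists stand in for the column names and row here (assumption):

```python
Header = ["Fruit", "Cost"]
row = ["Apple", 5]

# zip pairs header with value; dict() turns the pairs into a mapping.
row_dict = dict(zip(Header, row))
# row_dict is {"Fruit": "Apple", "Cost": 5}
```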