I want to show only rows with valid onChange events. Imagine we inserted these tags into a SQL table manually, and only did so when one of them changed. I want the same behavior from Ignition history when I query the data and show it in a table. The problem is that the History Query binding generates a new row for each event, even when the events share the same timestamp.
Consider a valve UDT with four boolean tags and one string tag:
Open FB, Close FB, Open Cmd, Close Cmd, and Username (who issued the command).
All tags have history enabled with the On Change sample mode.
The returned table should look like this:

|  | Time | Open FB | Close FB | Open Cmd | Close Cmd | User |
| --- | --- | --- | --- | --- | --- | --- |
|  |  | 1 | 0 | Null | Null | Null |
| Issue Cls cmd → |  | Null | Null | Null | 1 | Jordan |
|  |  | 0 | 1 | Null | Null | Null |
| Issue Opn cmd → |  | Null | Null | 1 | Null | John |
|  |  | 1 | 0 | 0 | Null | Null |

(The cmd tag also triggers a script that uses system.tag.storeHistory to log who issued the command.)
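For context, that logging script does something like the sketch below. I'm using system.tag.storeTagHistory, the full name of the scripting function; the provider names and tag paths are placeholders, not the real ones from my project:

# Hypothetical tag-change script on the Close Cmd tag: when the command fires,
# store the operator's name into the Username tag's history at the same timestamp.
# Provider names and tag paths below are illustrative only.
now = system.date.now()
system.tag.storeTagHistory(
	'MyHistoryProvider',          # history provider to store into
	'default',                    # realtime tag provider the paths belong to
	['Valves/V101/Username'],     # tag path(s) to record history for
	['Jordan'],                   # value(s): who issued the command
	[192],                        # quality code, 192 = good
	[now])                        # timestamp matching the command event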
But in reality there are always separate rows for the username and the cmd, even though they share the same timestamp.
The first priority for merging is that Not Null values win.
The second is that when two rows both have a valid (Not Null) value, the last value wins. In practice this case never happens, because I filter out repeated rows. (Each time the gateway restarts, it re-logs the values in history even though they haven't changed at all.)
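In pseudocode, the merge rule for two rows sharing a timestamp would look something like this (column names match the table above; the helper function is just an illustration):

# Sketch of the merge rule for two rows that share a t_stamp.
# Rule 1: a Not Null value beats a Null value.
# Rule 2: if both values are Not Null, the later row wins.
def mergeRows(earlier, later):
	merged = {}
	for col in earlier:
		merged[col] = later[col] if later[col] is not None else earlier[col]
	return merged

# e.g. merging the cmd/user row with the feedback row at the same timestamp:
rowA = {'Open FB': None, 'Close FB': None, 'Open Cmd': None, 'Close Cmd': 1, 'User': 'Jordan'}
rowB = {'Open FB': 0, 'Close FB': 1, 'Open Cmd': None, 'Close Cmd': None, 'User': None}
merged = mergeRows(rowA, rowB)
# -> Open FB 0, Close FB 1, Close Cmd 1, User 'Jordan'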
              If you send me a sample dataset, I’ll see what I can come up with. I’ve been on the road a lot lately, so I can’t guarantee when I can put something together.
Thanks, Jordan, no hurry on that.
This Perspective view includes the binding, the dataset, and all of the functions:
filter dataset.zip (70.6 KB)
Had a chance to look at this. Going row by row through the sample data, it appears that rows with duplicate timestamps are changes that arrived late to the party, so this works as last-change-wins per column.
As I may have mentioned before, when manipulating data like this, I prefer to use a dictionary. And with the improvements in PyDataSet performance, I find them easier to use, with little impact on performance. YMMV, depending on the size of the dataset.
def collapseDataset(dataIn):
	# Convert to pyDataSet, if needed.
	if 'PyDataSet' not in str(type(dataIn)):
		dataIn = system.dataset.toPyDataSet(dataIn)
	
	# Initialize a dictionary
	dataDict = {}
	# Get the column names for later 
	headers = list(dataIn.getColumnNames())
	# create a list of dictionary subkeys (t_stamp will be our primary key)
	subKeys = [col for col in headers if col != 't_stamp']
	for row in dataIn:
		t_stamp = row['t_stamp']
		
		# Check if t_stamp doesn't already exist
		if t_stamp not in dataDict:
			# Insert into the dictionary
			dataDict[t_stamp] = {}
			# Insert the remaining values into the nested dictionary
			for subKey in subKeys:
				dataDict[t_stamp][subKey] = row[subKey]
		else:
			# Otherwise, update the nested dictionary values if they have changed.
			for subKey in subKeys:
				if row[subKey] != dataDict[t_stamp][subKey]:
					dataDict[t_stamp][subKey] = row[subKey]
	# Create our output dataset, ordered by timestamp.
	dataOut = []
	for t_stamp in sorted(dataDict.keys()):
		dataOut.append([t_stamp] + [dataDict[t_stamp][subKey] for subKey in subKeys])
	# Build the header list explicitly so the column order matches the rows,
	# regardless of where t_stamp sat in the source dataset.
	return system.dataset.toDataSet(['t_stamp'] + subKeys, dataOut)
# I had previously exported the dataset to a tag called 'Nader' ;)
tagValue = system.tag.readBlocking(['[Test]Nader'])[0].value
dataset = collapseDataset(tagValue)
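If you drop collapseDataset into a project library script, you can call it straight from a script transform on the History Query binding instead of going through a tag. A sketch, assuming the script module is named history:

# Hypothetical Perspective script transform on the History Query binding,
# assuming collapseDataset lives in a project script module named 'history'.
def transform(self, value, quality, timestamp):
	# 'value' is the dataset from the History Query binding;
	# collapse rows sharing a t_stamp before the table renders it.
	return history.collapseDataset(value)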
Thank you, Jordan, for your help.