Back Up and Restore Tag Values?

I need to be able to back up and restore tag values to disk. These would be OPC-type tags that point to my PLC.

I can’t find any built-in way to do this. Does anyone have a preexisting solution for doing this?

I have several PLCs that share a common set of set points. I would like to be able to save them to disk and then write them back in, or use this feature to copy set points to other PLCs. These PLCs do not live on the same network, which is why I need to back up to disk.

This would also be helpful when I do downloads to Allen-Bradley PLCs and want to keep the setpoints in a given PLC the same.

I'm not sure if there is a prebuilt solution (other than the history module, if it meets your requirements!). However, you can certainly write a custom script for this that reads the values of a set of tags (corresponding to the set points, etc.) and writes them to a file or sends them to other PLCs.
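
For the copy-to-another-PLC case, the script can be as small as a paired read/write. A rough sketch, assuming the setpoints live under similarly named tag folders on both sides (the provider and folder names below are made up):

# Copy a common set of setpoints from one PLC's tags to another's.
setpointNames = ["SetPoint", "Limit_Hi", "Limit_Lo", "Limit_HiHi"]
srcPaths = ["[default]PLC_A/Setpoints/%s" % name for name in setpointNames]
dstPaths = ["[default]PLC_B/Setpoints/%s" % name for name in setpointNames]

# Read the current values from the source PLC's tags...
values = [qv.value for qv in system.tag.readBlocking(srcPaths)]
# ...and write them straight to the matching tags on the destination PLC.
system.tag.writeBlocking(dstPaths, values)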

This is one of the things transaction groups are for: save values to the database, and vice versa.

I asked technical support the same question last year (2024). I was attempting to create a save/restore feature that would allow me to save tag values to a file and then be able to restore them, but support was not able to provide a solution or an example script, so I put the project on hold. If anyone has figured out a simple way to do this or has any ideas, I would be interested in how it would be done.

Overview of save:

  • Read all the tags with system.tag.readBlocking()
  • Construct a dataset with columns for tag path, perhaps the value timestamp, and value columns by type (boolean, integer, long, string, float, real).
  • Use Ignition's system.dataset.toCSV() to make a file that Ignition can read back later, and write those bytes to your target file.
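
A minimal sketch of that save flow, using hypothetical setpoint paths and IA's DatasetBuilder helper (which comes up again later in this thread) to get the typed columns:

# Save sketch (Jython). The tag paths here are made-up examples.
from com.inductiveautomation.ignition.common.util import DatasetBuilder
from java.lang import Boolean, Integer, Long, Float, Double, String
from java.util import Date

paths = [
	"[default]PLC_A/Setpoints/SetPoint",
	"[default]PLC_A/Setpoints/Limit_Hi",
	"[default]PLC_A/Setpoints/SignalOverrideCommand",
]
qvals = system.tag.readBlocking(paths)

# One column for the path, one for the timestamp, and one value column per basic type.
builder = DatasetBuilder.newBuilder() \
	.colNames('tagpath', 't_stamp', 'boolValue', 'intValue', 'floatValue', 'stringValue') \
	.colTypes(String, Date, Boolean, Long, Double, String)

for path, qv in zip(paths, qvals):
	v = qv.value
	row = [path, qv.timestamp, None, None, None, None]
	# Drop the value into the column matching its type; leave the others None.
	if isinstance(v, (bool, Boolean)):
		row[2] = v
	elif isinstance(v, (int, long, Integer, Long)):
		row[3] = v
	elif isinstance(v, (float, Float, Double)):
		row[4] = v
	else:
		row[5] = str(v)
	builder.addRow(*row)

ds = builder.build()

# forExport=True adds the extra header lines that system.dataset.fromCSV() can parse later.
csvText = system.dataset.toCSV(ds, True, True)
system.file.writeFile("C:\\CSV Files\\setpoints.csv", csvText)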

Overview of restore:

  • Read the CSV file into a dataset.
  • Construct lists for system.tag.writeBlocking() containing the dataset's tag path column, and the first non-null value from each row.
  • Write all.
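
A matching sketch of the restore side, assuming the file was written with the forExport flag as in the save sketch above:

# Restore sketch (Jython): read the CSV back and write values to the saved tag paths.
csvText = system.file.readFileAsString("C:\\CSV Files\\setpoints.csv")
ds = system.dataset.fromCSV(csvText)

# The value columns written by the save sketch; only one of them is non-null per row.
valueColumns = ['boolValue', 'intValue', 'floatValue', 'stringValue']

paths = []
values = []
for r in range(ds.getRowCount()):
	paths.append(ds.getValueAt(r, 'tagpath'))
	# Take the first non-null value column in this row.
	v = None
	for col in valueColumns:
		v = ds.getValueAt(r, col)
		if v is not None:
			break
	values.append(v)

results = system.tag.writeBlocking(paths, values)
print results   # one QualityCode per write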

For saving, the critical task is knowing what tags to save. You'll have to share more about what you really want.

I ran into a lot of issues trying to write the dataset and then read the .csv file back into a dataset. I eventually got it working, but I had to trim the excess characters in the CSV data. I have the write and read to file working now, but I can't figure out how to put the data back into the tags. I could use some guidance on what I am doing wrong. I attached my test script, exported tags, and a copy of the .csv file it generates.
DAS_Config.csv (2.2 KB)
tags.json (10.3 KB)
Test Code.txt (2.7 KB)

Please edit your post to use preformatted text blocks instead of making me download those files. See Wiki - how to post code on this forum.

import time
from datetime import date

# update the timestamp value 
t_stamp = int( time.time() )
# And the ordinal date for the filename
d = date.fromtimestamp(t_stamp)

#filePath = "C:\\CSV Files\\" + d.isoformat() + "DAS_Config" + ".csv"
filePath = "C:\\CSV Files\\DAS_Config" + ".csv"
values = ""

############# Write tags to CSV file #############

#Collect the values to be sent 
paths = ["[Beckhoff CTU]Devices/BOP Pressure",
		"[Beckhoff CTU]Devices/Chain Front Stretch",
		"[Beckhoff CTU]Devices/Casing Pressure"]						
values = system.tag.readBlocking(paths)
#print values

if system.file.fileExists(filePath):
	dataOut = []
	# The csv reader needs a header, so let's just use column count numbers for now
	#system.file.writeFile(filePath,"1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39\r\n",0) # Write file with "append" flag = 1 (TRUE)
	dataOut = ["1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39'}'"]
	
	# Put the values into a data list dataOut
	for i in range(len(values)):
		dataOut.append(values[i].value)
	
	# Add the timestamp
#	dataOut.append(t_stamp)
#	print dataOut
	
	# convert to CSV format
	csvData = system.dataset.toCSV(dataOut,0)
	trimmed = csvData
	# trim the csvData removing unnecessary characters as needed
	trimmed = csvData.replace('"',"").replace("[","").replace("]","").replace(" ","").replace("'","").replace("\r","").replace("\n","").replace("{","").replace("},","\n").replace("}","")

	# I only want to write rows of data (no header)
	try:
		system.file.writeFile(filePath,trimmed,0) # Write file with "append" flag = 0 (FALSE)
	except:
		system.gui.messageBox("CSV File write error " + filePath + "\r\nMake sure that the file is not being accessed externally")
else:
	system.gui.messageBox("CSV File DAS_Config error\r\n" + filePath + "\r\nConfig file does not exist so create a new file")
	system.file.writeFile(filePath,"DAS_Config",1) # Write file with "append" flag = 1 (TRUE)

	
############# Read tag data from CSV and put back into tag #############
import csv
# Prompt for CSV File
path = system.file.openFile("csv")
file = open(path)

# Read file content to reader object
csvData = csv.reader(file)
print(csvData)

# Convert file content to dataset
headers = next(csvData)
data = []
for row in csvData:
	data.append(row)
ds = system.dataset.toDataSet(headers, data)
print ds

# convert to PyDataset
pds = system.dataset.toPyDataSet(ds)
print pds
#print pds[1][26]

# copy data back into tags ???? This is where I am having trouble figuring out how to put the data back into the tags.
for row in pds:
	system.tag.writeBlocking(paths, pds)

I'm surprised you actually got a CSV to output, since system.dataset.toCSV() expects a dataset, not a list. You need to construct a dataset with defined column types--the best tool for this is IA's DatasetBuilder helper class. (Lots of examples on this forum.)

As you add rows to that dataset, you need to make sure the tag value is dropped into the column that matches its data type, and the other value columns are given a None.

.toCSV() (with its forExport flag set) will put header lines into the file that Ignition can use later to reconstruct the exact data types.
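
For reference, a tiny round-trip sketch (arbitrary column names, not tied to any real tags) showing the forExport flag carrying the column types through toCSV()/fromCSV():

from com.inductiveautomation.ignition.common.util import DatasetBuilder
from java.lang import Double, String

# A tiny two-column dataset, just to demonstrate the round trip.
builder = DatasetBuilder.newBuilder() \
	.colNames('tagpath', 'doubleValue') \
	.colTypes(String, Double)
builder.addRow('[default]Example/SetPoint', 123.4)
ds = builder.build()

# showHeaders=True, forExport=True writes the header block that fromCSV() expects.
csvText = system.dataset.toCSV(ds, True, True)
restored = system.dataset.fromCSV(csvText)
print restored.getColumnTypes()   # the column classes survive the round trip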

Ok, I figured I was heading down a rabbit hole. I don't do a lot of work using datasets and I am not familiar with the DatasetBuilder you mentioned, so this is new territory for me. If you can kindly point me to a few good links for the examples you mentioned, that would speed up my research. Example code helps me the most in these situations.

Ok, after a bit of searching I found some information and examples on how to use the DatasetBuilder. But most of the examples I can find are very simplistic and nothing like what I am attempting to do, so I have a lot of unanswered questions. My tags are based on complex UDT structures, which would require 39 columns, and I am not clear on how to build my dataset using the builder.
For example, as you can see, my UDT type has a lot of values and mixed types (see below). Do I have to manually define ALL of those columns and types? If so, does everything need to match the UDT verbatim? For example, does my first column need to be named "CountsPerRevolution", or can it be anything I want, such as "Column1"?

The following is a printout of the readBlocking values. I also sent you a copy of the tag export earlier, along with a screenshot of the printout below.

}, Good, Wed Jan 15 12:30:06 CST 2025 (1736965806190)], [{
    "CountsPerRevolution": 0,
    "Diagnostic": "",
    "ElectricalValue": 0.0,
    "Fault_Hi": false,
    "Fault_HiHi": false,
    "Fault_Lo": false,
    "Fault_LoLo": false,
    "Fault_Status": false,
    "FaultTroubleshooting": "",
    "FaultTxt": "",
    "Fltr": 200,
    "Help": "Override value in engineering units only SignalSelection =2 , ex 500 psi. ",
    "IndicatiorTitleLine1": "",
    "IndicatiorTitleLine2": "",
    "IndicatiorTitleLine3": "",
    "Limit_Hi": 0.0,
    "Limit_HiHi": 0.0,
    "Limit_Lo": 0.0,
    "Limit_LoLo": 0.0,
    "MilliampMaximum": 20.0,
    "MilliampMinimum": 4.0,
    "Offset": 0.0,
    "ProcessValue": -3750.0,
    "RawValue": 0,
    "Resolution": 0.0,
    "ScaleMaximum": 15000.0,
    "ScaleMinimum": 0.0,
    "SetPoint": 0.0,
    "SignalOverrideCommand": false,
    "SignalOverrideSetPoint": 0.0,
    "SignalSelection": 2,
    "SignalType": "CTU Das Slot6 12bit 0…20 mA, differential Analog Input Channel3",
    "Status": false,
    "TagDescription": "",
    "TagName": "",
    "Unit": "",
    "UnitPerRevolution": 0.0

No, just define the columns I recommended in comment #5: tagpath, optional timestamp (type java.util.Date), and one of each of the basic data types.

Then add one row for each tag path with a value, placing the actual value in the column of that value's datatype, and nulls (None) in the other value columns. This is basically how the historian stores values, if you look at its data tables.

Using this method makes restore quite simple.

Fundamentally you're hitting an impedance mismatch between the data you're trying to represent (Ignition tags, which are arbitrarily deeply nested collections of arbitrary numbers of properties with arbitrary data types) and your chosen data format (CSV, which is explicitly a fixed 'flat' format with no nesting).

Phil's suggestion is the best general purpose "solution", but you'll have a better time (in my opinion) if you drop the CSV requirement altogether, instead of trying to pass a square peg through a round hole.
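
One non-CSV route, sketched here under the assumption that only flat scalar setpoint values need to be preserved: dump a path-to-value dictionary to JSON with system.util.jsonEncode() and read it back with jsonDecode(). The leaf paths and file location below are made-up examples.

# JSON alternative sketch (Jython). Adjust the paths to your own setpoint tags.
paths = [
	"[Beckhoff CTU]Devices/BOP Pressure/SetPoint",
	"[Beckhoff CTU]Devices/BOP Pressure/Limit_Hi",
	"[Beckhoff CTU]Devices/BOP Pressure/SignalOverrideCommand",
]

# Save: snapshot the current values as a JSON object keyed by tag path.
qvals = system.tag.readBlocking(paths)
snapshot = dict(zip(paths, [qv.value for qv in qvals]))
system.file.writeFile("C:\\Tag Backups\\setpoints.json", system.util.jsonEncode(snapshot))

# Restore: decode the file and write each value back to its tag path.
snapshot = system.util.jsonDecode(system.file.readFileAsString("C:\\Tag Backups\\setpoints.json"))
restorePaths = list(snapshot.keys())
restoreValues = [snapshot[p] for p in restorePaths]
system.tag.writeBlocking(restorePaths, restoreValues)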

PGriffith, I don't mind going another route; I am not married to the .csv idea. Ideally, I would like to store the tags as entire structure objects in a file, but I don't know how to do that. The thing I am trying to avoid is storing them in the DB. That is not an option.

pturmel, BTW, I am only interested in the values; timestamps and any other information about the tag are not important. I am only trying to save the data that the user changes.

That's why I suggested the timestamp column is optional. The rest of my advice applies.
