Mass import of UDT tags via simple csv and script (example)

There are SO many posts on here regarding tag imports, csv formats, xml schemas, and the issues around them, so I figured I would post a very simple solution that was born out of my own struggles with tag imports.
The problem with the legacy csv import is that the format is quite unintuitive when you are trying to write parameter values and tag values (overrides) along with the tag. The format becomes multi-line and somewhat cryptic.
The issue with using xml is that unless you’re used to working with xml, or have a schema to work from, the xml file (even with excel as an editor) is multi-line and, again, somewhat cryptic.

I finally settled on the fact that the only solution (for me) was to create a script (thanks to a quick post reply by @PGriffith ) that uses a SIMPLE, single-line-per-tag csv file to do the heavy lifting. Here is an example of what I’ve put together for anyone who wants to do the same (you can also read my blog post about it).

import csv	# import the Python csv library
# Script tools to open a csv file and import tags of a specified UDT type

# Open csv file through file selector
def openFile():
	filePath = system.file.openFile("csv","%user%")
	if filePath is None:
		return	# user cancelled the file dialog
	csvFile = open(filePath, 'r')
	try:
		reader = csv.DictReader(csvFile)
		print reader.fieldnames
		for row in reader:
			# Call the addUDT function based on UDTType
			if row['UDTType'] == 'AIn': addAIn( row )
			# add more conditions for more UDTTypes...
	finally:
		csvFile.close()

# Function to add instance tag of UDT type 'AIn'
def addAIn( csvRow ):
	if csvRow['TagProvider'] == "": tagProv = '[default]'
	else: tagProv = '[' + csvRow['TagProvider'] + ']'
	tagPath = tagProv + csvRow['TagPath']
	tagName = tagPath + '/' + csvRow['Name']
	
	if system.tag.exists(tagName):
		system.tag.removeTag(tagName)
		print tagName + ' removed'

	system.tag.addTag( 	parentPath = tagPath,
						name = csvRow['Name'],
						tagType = 'UDT_INST',
						attributes={'UDTParentType':csvRow['UDTType']},
						parameters={'Description':csvRow['Description'],
									'EquipGp':csvRow['EquipGp'],
									'ItemName':csvRow['ItemName'],
									'RawLocation':csvRow['RawLocation'],
									'EngUnits':csvRow['EngUnits']
									},
						overrides={
										'PriorityHH':{'value':csvRow['PriorityHH']},
										'PriorityH':{'value':csvRow['PriorityH']},
										'PriorityL':{'value':csvRow['PriorityL']},
										'PriorityLL':{'value':csvRow['PriorityLL']}
									}
						)
	print 'AIn tag ' + tagName + ' added'

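For reference, here is what a one-line-per-tag csv matching the column names the script above reads could look like. This is just a sketch: the header names come straight from the keys the script uses, but the example values (FT101, gpm, etc.) are made up.

```python
import csv
import io

# A csv whose headers match the keys the import script reads:
# TagProvider, TagPath, Name, UDTType, the AIn parameters, and the overrides
sample = "\n".join([
    "TagProvider,TagPath,Name,UDTType,Description,EquipGp,ItemName,RawLocation,EngUnits,PriorityHH,PriorityH,PriorityL,PriorityLL",
    "default,Station1/Analog,FT101,AIn,Feed flow,GP1,FT-101,N7:0,gpm,2,3,3,2",
    "default,Station1/Analog,PT102,AIn,Feed pressure,GP1,PT-102,N7:1,psi,1,2,2,1",
])

# Each row is one complete tag: provider, path, type, parameters and overrides
for row in csv.DictReader(io.StringIO(sample)):
    print(row['Name'] + ' / ' + row['UDTType'] + ' / HH=' + row['PriorityHH'])
```

DictReader keys each value by header name, so the script can stay one simple line per tag no matter how many parameters the UDT carries.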
The advantage with this is that the csv stays very simple. I imported a csv with 351 instances of 9 different UDTs, using just 16 columns, and in the process updated 2457 values (parameters & member tag values).
It’s just a way to keep things simple while doing it efficiently and accurately.

[Edit]: I put this in a Project Library script and run it from the Script Console.


Thanks for sharing!

I think with some minor tweaks this can be really helpful for me!

Are you using the standard csv format output when you export tags to csv?


Nope. That’s the sweet part. You simply create a csv of your own format with only the information you need to create your tags.
For example:
Based on what your script handles internally, you may have the following headers in your csv:

    [provider]   ,  [path]              ,  [tagname]  ,  [UDTtype]  ,  [param1]  ,  [param2]

so your csv would look like:

    [provider]   ,  [path]              ,  [tagname]  ,  [UDTtype]  ,  [param1]  ,  [param2]
    '[default]'  ,  root/folder/folder  ,  tagname1   ,  myUDT      ,  blue      ,  red
    '[default]'  ,  root/folder/folder  ,  tagname2   ,  myUDT      ,  green     ,  red
    '[default]'  ,  root/folder/folder  ,  tagname3   ,  myUDT2     ,  blue      ,  red
    '[default]'  ,  root/folder/folder  ,  tagname4   ,  myUDT      ,  blue      ,  red

You’re in charge of what you need/want in there, whatever makes your life easier.
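As a sketch of what the script side of that looks like, the rows can be read with `csv.DictReader` and dispatched by the UDTtype column, just like the AIn example in the first post (the headers and values here are the ones from the sample above):

```python
import csv
import io

# A csv shaped like the example above; you choose the headers
sample = "\n".join([
    "provider,path,tagname,UDTtype,param1,param2",
    "[default],root/folder/folder,tagname1,myUDT,blue,red",
    "[default],root/folder/folder,tagname2,myUDT,green,red",
    "[default],root/folder/folder,tagname3,myUDT2,blue,red",
])

# Group the rows by UDT type so each type can be handled by its own add function
by_type = {}
for row in csv.DictReader(io.StringIO(sample)):
    by_type.setdefault(row['UDTtype'], []).append(row['tagname'])

print(by_type['myUDT'])   # the two myUDT instances
print(by_type['myUDT2'])  # the single myUDT2 instance
```

In Ignition, each group would then feed a type-specific function that calls the tag-creation API with that type's parameters.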


Tim,

Thanks for these posts. Am definitely going to build some tools in-house using these examples.

Also, sounds like you’ve been in touch a bit with the developers on this. Any idea if they plan to include some tools like this as part of an install, or make some available on their website? To me this is a bit of a gap at the moment; just a basic supported tool to edit parameters on UDTs (like you’ve provided) would be a huge help. Maybe they figure everyone’s configs are so unique that providing a bespoke tool is difficult, or won’t appeal to a wide enough audience to be worth it. Just curious if you had any insights on that.

-Greg

Ignition has phased out the csv tag import in favor of json, and for good reason. The json format is very powerful; the problem is that a lot of integrators haven’t had much exposure to json yet and don’t have past experience with it. They’re used to working with a simple csv.
While csv files are great, and they’ve afforded us a huge time-saver over the years, the truth is they’re quite restrictive: the format of the csv file has to be exactly what the software expects. Most of us would start by exporting what we wanted to duplicate and use that csv as a template for the import.
JSON and XML are much more powerful in that each item carries the context of what it belongs to, but they can look very scary when formatted in certain ways. Carriage returns and line feeds are only there to make things more readable to the human eye, so you can really format them however you want. It’s the syntax of curly and square brackets (braces), commas and colons that matters. Then, once you know what’s required, you can still use excel as a tool to create the json ‘lines’ that you need.
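That point about whitespace is easy to demonstrate: the same definition parses identically whether it sits on one line or twenty. (The keys below are just illustrative, not a verbatim Ignition tag export.)

```python
import json

# The same tag definition, once compact and once pretty-printed:
# line breaks and indentation are purely cosmetic in json
compact = '{"name":"tagname1","typeId":"myUDT","parameters":{"param1":"blue"}}'
pretty = """
{
  "name": "tagname1",
  "typeId": "myUDT",
  "parameters": {
    "param1": "blue"
  }
}
"""

print(json.loads(compact) == json.loads(pretty))  # True
```

Only the brackets, braces, commas and colons carry meaning, which is why you can collapse an export to one line per item and edit it in excel.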

Good work with the csv import. Regarding the json format, how would you work with it? Let’s say you initially have your tags in an excel workbook or similar?

What I’ve done that works pretty well is to build an example of what I need in Ignition, export it as json, then remove the carriage returns to get whatever item I want to duplicate onto a single line.
Once that is done, I now have an exact format to duplicate in excel (using whatever formulas, sequential data ‘dragging’ and concatenation I need to do).
Then I could take all those cells and insert them in the json file where my example line was.
If I’m wanting to import it as a new, separate folder structure, I can Search & Replace the json file before I paste my new tags in there.
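The duplicate-and-rename step can be done in a script instead of excel. A minimal sketch, assuming a one-line template with made-up keys (the real keys come from your own export):

```python
import json

# One exported definition flattened to a single line (keys illustrative)
template = '{"name": "tagname1", "typeId": "myUDT", "parameters": {"param1": "blue"}}'

# Duplicate the template with sequential names, like a fill-down in excel
lines = []
for i in range(1, 5):
    tag = json.loads(template)
    tag["name"] = "tagname" + str(i)
    lines.append(json.dumps(tag, sort_keys=True))

# These lines are what gets pasted back into the json file
for line in lines:
    print(line)
```

Parsing the template with `json.loads` and re-serializing it guarantees each generated line is valid json, which concatenating strings in excel does not.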

I hope that all makes sense.

Cheers!

I guess it’s pretty much what I do myself when working offsite.

But if I am on my development system, I read the clipboard and upload all the tags into a SQL table (the DB and table are selected by dropdown).

Normally the clipboard upload isn’t needed, as we always have the object and signal lists in the DB during project creation.

Then we use the SQL table to generate tags. It takes some time to develop, though (user interface + code).
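The table-driven approach can be sketched roughly like this, with sqlite3 standing in for the project database and made-up table/column names; in Ignition the loop would call the tag-creation API rather than print:

```python
import sqlite3

# In-memory database stands in for the real project DB
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_import (name TEXT, udt_type TEXT, param1 TEXT)")
conn.executemany(
    "INSERT INTO tag_import VALUES (?, ?, ?)",
    [("tagname1", "myUDT", "blue"), ("tagname2", "myUDT", "green")],
)

# One tag-creation call per row of the signal list
for name, udt_type, param1 in conn.execute(
        "SELECT name, udt_type, param1 FROM tag_import"):
    print("create " + name + " as " + udt_type + " (param1=" + param1 + ")")
```

Keeping the signal list in a table means the same source drives both documentation and tag generation, which is the appeal over a one-off csv.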

Great Post!
I was able to take this and modify it to my needs.
I can now export my tags from Logix as a CSV file and use that file to import all of my Data Type Instance tags while ignoring everything else. And I don’t have to make any modifications to the file that Logix exports!
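The filtering idea is simple enough to sketch. The column names below (TYPE, NAME, DATATYPE) are only assumptions about a Logix-style export, not the exact format; the point is just that rows whose data type isn't one of your UDTs get skipped:

```python
import csv
import io

# Assumed shape of a controller tag export; real column names may differ
sample = "\n".join([
    "TYPE,SCOPE,NAME,DESCRIPTION,DATATYPE",
    "TAG,,FT101,Feed flow,AIn",
    "TAG,,Motor1,Main drive,DINT",
    "TAG,,PT102,Feed pressure,AIn",
])

# Keep only the rows whose data type matches a UDT we want to import
wanted = []
for row in csv.DictReader(io.StringIO(sample)):
    if row['DATATYPE'] == 'AIn':
        wanted.append(row['NAME'])

print(wanted)  # only the AIn instances survive the filter
```

Because the script does the filtering, the exported file can be fed in untouched, exactly as the post describes.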
