Writing data from CSV file to Dataset Tag in Ignition Perspective with Ignition Edge

I am using Ignition Edge version 8.1.21 hosted on a Groov EPIC from Opto 22.

I am using Node-RED (also hosted on the Groov EPIC) to read data from a serial device and add the data to a .CSV file that is stored on the EPIC.

I want to know if there is a way to query the CSV file in Ignition Perspective and store the rows and columns of the CSV file in a Dataset Tag. The Dataset Tag would not need to become larger as the CSV file gets larger. I only need to store the 3000 most recent entries of the CSV file in the Dataset Tag.

Does this seem like something that would be possible with Ignition Edge?

I don't have a test environment that matches yours, but I may be able to point you in the right direction. I would start by looking at system.file.readFileAsString and system.dataset.fromCSV.

You'll probably have to massage the output of system.file.readFileAsString to make it compatible with system.dataset.fromCSV. You can see what that format looks like with system.dataset.toCSV.
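
If it helps, here's a minimal sketch of that inspection step (assuming you run it in the Designer Script Console or any gateway-scoped script): build a throwaway dataset with system.dataset.toDataSet, then dump it with system.dataset.toCSV with the forExport argument set, which adds the #NAMES / #TYPES / #ROWS header lines that fromCSV expects.

headers = ['Col 1', 'Col 2', 'Col 3']                        # made-up columns
rows = [[44, 'Test Row 2', 1.87], [86, 'Test Row 3', 97.49]]
ds = system.dataset.toDataSet(headers, rows)

# Third argument (forExport) = True adds the #NAMES / #TYPES / #ROWS lines
print system.dataset.toCSV(ds, True, True)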

Perspective is web-based and has web-based limitations on handling files. You'll want to do this in the gateway scope in a script. You might be able to do it in a message handler or something along those lines.
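
For example, here's a rough sketch of a Gateway Timer Script (or gateway message handler) that reads the file, keeps only the most recent 3000 rows, and writes them to a memory Dataset tag. The file path, the tag path, and the assumption that the CSV's first line is a header row are all placeholders; this sketch parses the rows itself and uses system.dataset.toDataSet rather than fromCSV, so the raw CSV doesn't need the special header format.

import csv

csvPath = '/path/to/data.csv'      # placeholder file path
tagPath = '[edge]CsvData'          # placeholder memory Dataset tag
maxRows = 3000

raw = system.file.readFileAsString(csvPath, 'UTF-8')
rows = list(csv.reader(raw.splitlines()))

headers = rows[0]            # assumes the first line holds column names
data = rows[1:][-maxRows:]   # keep only the 3000 most recent entries

ds = system.dataset.toDataSet(headers, data)
system.tag.writeBlocking([tagPath], [ds])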


Thank you.

Where should I store a file if I want it to be accessible to the Ignition Edge gateway?

I had seen a post by @PGriffith that I thought outlined a spot on the server where you can dump files for server-side processing, but I'm having trouble finding it now. I tagged him in case he is available to help.

There were a few threads in this search that looked like they might be helpful:
https://forum.inductiveautomation.com/search?q=perspective%20csv%20file

I don't think there's (currently) any officially supported way to host arbitrary files with Edge, since neither Webdev nor Phil's blob server module run on Edge. In some future version, no sooner than 8.3.0, we're planning to introduce arbitrary file storage inside of projects, but no specific timeline on that.

What you're alluding to, Steve, can be found by searching these forums for "webserver/webapps", along with associated advice that it's not officially supported and only works by a coincidence of our Jetty configuration, that it causes file locking issues on Windows, etc.


And my Blob server would be especially pointless in Edge with no database. :man_shrugging:

Thanks for the clarification.

Hi Steve,

Thank you for your help (and everyone else's input) so far.

So, I was able to use system.file.readFileAsString in a Gateway Timer Script and specify the filepath for the file stored on my Groov Epic, but I'm still getting an error that I can't diagnose.

Originally, I had been specifying a secured file path as follows:

path = '/home/dev/secured/fx-227.csv'
data = system.file.readFileAsString(path)
system.perspective.print(data)

And getting the following error message in my Gateway log:
IOError: File '/home/dev/secured/fx-227.csv' doesn't exist or isn't a file.

I moved the same file to an unsecured location in my Groov Epic directory, and I think this at least allowed Ignition to recognize the filepath...

So my Timer Script (note the change in the filepath):

path = '/home/dev/unsecured/fx-227.csv'
data = system.file.readFileAsString(path)
system.perspective.print(data)

Error message:
IOError: File '/home/dev/unsecured/fx-227.csv' isn't readable.

Then I thought maybe it was due to a mismatch in the encoding of the csv file and the default encoding, so I added the encoding parameter, but am still getting the same "file isn't readable" error in the Gateway log.

path = '/home/dev/unsecured/fx-227.csv'
data = system.file.readFileAsString(path, 'UTF-8')
system.perspective.print(data)

"Isn't readable" suggests it's a relatively straightforward Linux permissions error. Have you tried to chmod the file to make it world-readable?


I have not. Someone on the Opto 22 forums also suggested it's a permissions-related issue.

I think I have to do that in the shell of the Groov Epic.

Linux doesn't use backslashes as folder delimiters. :man_shrugging:


Beat you to the delete :wink: About the only thing I have done before you.... :smiley:
It was irrelevant anyway as it seems to have made it to the file with the IOError. Multitasking while trying to respond just doesn't work...


Well if it makes you feel any better I did still try that out because I didn't know that about Linux, LOL. But yes, Opto 22 products are all Linux-based I believe. Or at least the EPIC is.

I would try using Node-RED to write the data directly to a database and query it in Ignition from there. It is an easy setup in Node-RED if the Groov has network access to the database server. We do this on one of our gateways running Ubuntu to write to SQL Server.

That's what I want to do, but I don't think Ignition Edge allows you to query a database (I would need full Ignition for this, and don't yet have approval from my company to buy it).


This might be a dumb question, but I'm confused how system.dataset.fromCSV can require a string formatted such that certain information is conveyed on different lines in the string.

This is from the docs.inductiveautomation.com page for system.dataset.fromCSV, showing the required format of the string passed to the function:

#NAMES
Col 1,Col 2,Col 3
#TYPES
I,str,D
#ROWS,6
44,Test Row 2,1.8713151369491254
86,Test Row 3,97.4913421614675
0,Test Row 8,20.39722542161364
78,Test Row 9,34.57127071614745
20,Test Row 10,76.41114659745085
21,Test Row 13,13.880548366871926

The first line must be #NAMES

Does the string need to convey a new line with \n? Like, how exactly would I delimit lines in a string? I tried using system.file.readFileAsString to read a text file with 2 lines, and the output I got was just the 2 lines concatenated together with no spaces.

I haven't used that method, but it looks like you need to pass it a "dataset CSV", not just a raw CSV.

The difference is that you're defining the name and datatype of each column, along with a row count. It looks like you have to keep the CSV portion of your input as-is, initialize a string with the column definitions (#NAMES, #TYPES, and #ROWS), and then append the CSV data to it.
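
On the newline question: the lines in that string are just separated by '\n' characters, and system.file.readFileAsString should preserve the file's newlines (if they looked concatenated, it may just be how the output was displayed). A rough sketch of the massaging, with made-up column names and types, assuming the raw file has two comma-separated fields per line and no header row of its own:

raw = system.file.readFileAsString('/home/dev/unsecured/fx-227.csv', 'UTF-8')
lines = [ln for ln in raw.splitlines() if ln.strip()]   # drop blank lines

header  = '#NAMES\n'
header += 'Timestamp,Value\n'        # made-up column names
header += '#TYPES\n'
header += 'str,str\n'                # everything read from a file is a string
header += '#ROWS,%d\n' % len(lines)

ds = system.dataset.fromCSV(header + '\n'.join(lines))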