Showing a .csv table from the server in Perspective

I have a Power Automate script that refreshes a table in C:\SCADA\orders.csv with new orders every hour. The CSV file just has a simple table with the list of orders: date, number, description, qty, etc.
I want to show this table in a Perspective view. I've been trying with ChatGPT to get a script that reads the CSV file and puts the contents into a dataset tag, but as is typical with ChatGPT, the script doesn't work, and it doesn't seem to be able to find a solution.

The biggest problem seems to be that Ignition can't access C:\SCADA at all. The user running the service on the computer is a local admin, and the folder is shared with the Ignition service, but it still reports that there's no file.

I've been trying to find information about this, but I can't. Is there anything special that needs to be done for the gateway to be able to read local files?

Thank you.

PS: This is the script:

import csv

def getOrders():
    path = r"C:\SCADA\orders.csv"
    rows = []
    headers = []
    with open(path, 'r') as f:
        reader = csv.reader(f)
        for i, row in enumerate(reader):
            if i == 0:
                headers = row
            else:
                rows.append(row)
    return system.dataset.toDataSet(headers, rows)

I also tried path = r"C:\Program Files\Inductive Automation\Ignition\data\orders.csv", with no luck.
This function is called by a timer script:

def handleTimerEvent():
    try:
        dataset = project.readorders.getOrders()
        system.tag.writeBlocking(["[default]orders/OrdersDataset"], [dataset])
    except Exception as e:
        system.util.getLogger("getorderstimer").error("Timer error: " + str(e))

Are you calling it from a gateway timer event?

If not, move it there.

Also double the backslashes in the path, and then write the results to a dataset tag in the call itself.

path = r"C:\\SCADA\\orders.csv"

blah blah

system.tag.writeBlocking(["[default]orders/OrdersDataset"], [system.dataset.toDataSet(headers, rows)])
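
Putting it together, the whole thing in the timer event might look something like this (untested sketch; the tag path and logger name are taken from your post):

import csv

def handleTimerEvent():
    path = "C:\\SCADA\\orders.csv"
    logger = system.util.getLogger("getorderstimer")
    headers = []
    rows = []
    try:
        with open(path, 'r') as f:
            reader = csv.reader(f)
            for i, row in enumerate(reader):
                if i == 0:
                    headers = row
                else:
                    rows.append(row)
        # Write straight to the dataset tag from the same script
        system.tag.writeBlocking(["[default]orders/OrdersDataset"], [system.dataset.toDataSet(headers, rows)])
    except Exception as e:
        logger.error("Timer error: " + str(e))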

This is how it looks right now; as you can see, the tag is still null. The double \\ didn't seem to help. I don't know if this is more of a file access problem than a code problem.

As a sanity check, see if you can write a file.

It should at least let you know if it’s a permissions issue.
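
For example, something along these lines from the same gateway scope (untested sketch; the test path and logger name are just examples). It also logs which account the gateway is actually running as, since that's the account the file permissions apply to:

import io
from java.lang import System

def checkFileAccess():
    logger = system.util.getLogger("fileaccess")
    # The gateway runs as a Windows service, so permissions apply
    # to the service account, not your interactive login
    logger.info("Gateway running as: " + System.getProperty("user.name"))
    testPath = "C:\\SCADA\\write_test.txt"
    try:
        with io.open(testPath, 'w', encoding='utf-8') as f:
            f.write(u"test")
        logger.info("Write OK: " + testPath)
    except Exception as e:
        logger.error("Write failed: " + str(e))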


Tip: csv.reader returns an iterator, so you can use

headers = next(reader)
rows = list(reader)

Using next() consumes the header line, so it won't appear in rows.
I usually do something like

headers = next(reader)
data = [dict(zip(headers, row)) for row in reader]

This builds a list of dicts that you can pass directly to a Perspective table.
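
For example, as a script transform on a Table's props.data binding, something like this should work (untested sketch; the path and file layout are assumed from this thread):

import csv

def transform(self, value, quality, timestamp):
    # Return a list of dicts; the Perspective Table uses the
    # dict keys as column names
    path = "C:\\SCADA\\orders.csv"
    with open(path, 'r') as f:
        reader = csv.reader(f)
        headers = next(reader)
        return [dict(zip(headers, row)) for row in reader]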

As another sanity check, is the project enabled?


The r indicates a raw string, so you don't need to double the slashes.
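
That is, these two spellings produce the same path:

path1 = r"C:\SCADA\orders.csv"    # raw string: backslashes taken literally
path2 = "C:\\SCADA\\orders.csv"   # regular string: backslashes escaped
print path1 == path2              # True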

After some debugging, following the tips you all gave me, I found some indentation problems from copy-pasting code into the timer. Now the function is properly called, and after checking that I can write a file (I can), it turned out the problem was something as stupid as replacing csv.reader(f) with csv.reader(f, delimiter=';')
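
If the delimiter ever changes again, csv.Sniffer can guess it from a sample of the file instead of hard-coding it. A minimal sketch, assuming the first couple of lines are representative of the file:

import csv
import io

def readOrdersRows(path):
    # Sniff the dialect (delimiter, quoting) from the first 2 KB
    with io.open(path, 'r', encoding='utf-8-sig') as f:
        sample = f.read(2048)
        dialect = csv.Sniffer().sniff(sample, delimiters=';,\t')
        f.seek(0)
        return list(csv.reader(f, dialect))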

After some cleanup, I'll leave the script that ChatGPT ended up making, in case someone finds this post in the future:

# spaces-only indentation (no tabs)

import csv
import io

def getOrders():
    logger = system.util.getLogger("readorders")
    path = r"C:\SCADA\orders.csv"

    encodings = ["utf-8-sig", "utf-8", "cp1252", "latin-1"]
    used_encoding = None
    headers = []
    rows = []

    # Try multiple encodings until one succeeds
    for enc in encodings:
        try:
            with io.open(path, 'r', encoding=enc, newline='') as f:
                reader = csv.reader(
                    f,
                    delimiter='\t',        # TAB-delimited
                    quotechar='"',
                    escapechar='\\'
                )
                headers = []
                rows = []
                for i, row in enumerate(reader):
                    # Trim whitespace in cells
                    row = [c.strip() for c in row]
                    if i == 0:
                        headers = row
                    else:
                        rows.append(row)
            used_encoding = enc
            break
        except UnicodeDecodeError:
            # Try next encoding
            continue

    # Last resort: decode with replacement to avoid crashes
    if used_encoding is None:
        # Discard any partial results from a failed attempt
        headers = []
        rows = []
        with io.open(path, 'r', encoding='latin-1', errors='replace', newline='') as f:
            reader = csv.reader(
                f,
                delimiter='\t',
                quotechar='"',
                escapechar='\\'
            )
            for i, row in enumerate(reader):
                row = [c.strip() for c in row]
                if i == 0:
                    headers = row
                else:
                    rows.append(row)
        used_encoding = "latin-1(replace)"

    # Clean headers: keep content inside [...] when present; ensure uniqueness
    cleaned = []
    seen = set()
    for h in headers:
        h = h.strip()
        if '[' in h and ']' in h:
            inside = h[h.find('[') + 1:h.rfind(']')].strip()
            if inside:
                h = inside
        base = h or "Column"
        name = base
        idx = 1
        while name in seen:
            idx += 1
            name = "%s_%d" % (base, idx)
        seen.add(name)
        cleaned.append(name)

    # Normalize rows to header length
    colcount = len(cleaned)
    normalized = []
    for r in rows:
        if len(r) < colcount:
            r = r + [''] * (colcount - len(r))
        elif len(r) > colcount:
            r = r[:colcount]
        normalized.append(r)

    ds = system.dataset.toDataSet(cleaned, normalized)
    logger.info(
        "getOrders: path=%s, rows=%d, cols=%d, encoding=%s" %
        (path, ds.getRowCount(), ds.getColumnCount(), used_encoding)
    )
    return ds

I'm pretty sure this code is unnecessarily cluttered, but for now it just works for my .csv file.