We are working on importing data from a different system into our Enterprise Historian.
That particular system can generate CSV files for us in the following format:
I have the code written and debugged; I am just wondering if there is a more efficient way of doing this.
The CSV files can have up to 150,000 lines. The code I am using parses and inserts them in under 5 seconds.
But it just seems like a clunky way of doing this.
Our Historians partition by month, which is why I am going the scripting route instead of inserting directly into the database.
```python
import csv

path = system.file.openFile("csv")
reader = csv.reader(open(path, 'rb'))

def gen_chunks(reader, chunksize=1000):
    chunk = []
    for i, line in enumerate(reader):
        if i % chunksize == 0 and i > 0:
            yield chunk
            del chunk[:]
        chunk.append(line)
    yield chunk

prov = 'EntHistory'
# Tag path entered on the screen; split off the '[provider]' prefix
selectedPath = event.source.parent.getComponent('tagPath').text.split(']')[1]
x = 0

for chunk in gen_chunks(reader, chunksize=1000):
    vals = []
    paths = []
    quals = []
    ts = []
    # Delete the first row of the CSV as it is just headers
    if x < 1:
        del chunk[0]
    for row in chunk:
        paths.append(selectedPath)
        # Column 0 holds the timestamp, column 1 the value
        d = system.date.parse(row[0], "MM-dd-yyyy HH:mm:ss")
        gd = system.date.format(d, "yyyy-MM-dd HH:mm:ss")
        vals.append(float(row[1]))
        quals.append(192)  # 192 = Good quality
        ts.append(gd)
    x += 1
    system.tag.storeTagHistory(prov, 'default', paths, vals, quals, ts)

# Put the dataset in a table on the screen, just for debugging and
# confirmation of values (note: this only shows the last chunk)
hdr = ['tag', 'val', 'qual', 'date']
fullDS = zip(paths, vals, quals, ts)
event.source.parent.getComponent('Power Table 3').data = system.dataset.toDataSet(hdr, fullDS)
```
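For the chunking itself, a slightly tidier alternative (a sketch, not Ignition-specific; it should run the same under Jython) is to slice the reader with `itertools.islice`, which avoids the manual modulo bookkeeping and the shared-list `del chunk[:]` pattern. The sample rows below are illustrative, not from the real CSV:

```python
from itertools import islice

def gen_chunks(reader, chunksize=1000):
    # Pull up to `chunksize` rows at a time from any iterator;
    # each chunk is a fresh list, and the loop stops cleanly
    # when the reader is exhausted.
    while True:
        chunk = list(islice(reader, chunksize))
        if not chunk:
            break
        yield chunk

# Quick check with fake rows standing in for csv.reader output:
rows = iter([["01-15-2024 00:00:00", "1.5"]] * 2500)
sizes = [len(c) for c in gen_chunks(rows, chunksize=1000)]
print(sizes)  # -> [1000, 1000, 500]
```

Because each yielded chunk is an independent list, the caller can also hold chunks past the next iteration, which the clear-and-reuse version does not allow.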