I am using a variant on the script found in the manual, linked below, to manually export historian data on a ~daily basis while we sort out some broader network issues.
If I have to run the script two or more times in a row, such as when I'm catching up on a few days, it starts throwing Java out-of-memory errors. To get around this, I have to close both the Script Console and the Designer, restart them, and try again. Any thoughts on how to avoid this? I'm worried it will continue to be a problem if I try to schedule this instead of running it manually every day. Thank you!
The code is below; it's pretty close to what I took from the docs, except for some added date manipulation.
One 24-hour export CSV file is about 104 MB in size, with 86,400 rows (the number of seconds in 24 hours) and 166 columns, although a fair number of columns have no data because they are just portions of the path to the historical tag.
# Our browse function that will browse for historical tags.
# By setting this up as a function, it allows us to recursively call it to dig down through the specified folder.
# Pass in an empty list that we can add historical paths to, and the path to the top level folder.
def browse(t, path):
	# Loop through the results of the historical tag browse, and append the path to the empty list.
	for result in system.tag.browseHistoricalTags(path).getResults():
		t.append(result.getPath())
		# If the result is a folder, run it through the browse function as well.
		# This will continue until we are as deep as possible.
		if result.hasChildren():
			browse(t, result.getPath())
# Start with an empty list to store our historical paths in.
historyPaths = []
# Call the browse function, passing in an empty list, and the folder that we want to browse for historical tags.
# This path is a placeholder. It should be replaced with a valid path for your system.
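# (A filled-in path typically ends up looking something like 'histprov:Edge Historian:/drv:my_gateway:default';
# the gateway and provider names in that example are hypothetical, not real ones.)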
browse(historyPaths, path='histprov:Edge Historian:/drv:<folder>:<tag_provider>')
# Create another empty list to store our tag paths that we will pull out of the historical paths.
tagPaths = []
# Loop through the list of historical tag paths, split out just the tag path part,
# and push it into our tag path list.
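# The historical path contains the tag path after a "tag:" segment, so splitting on "tag:" and keeping
# the second piece leaves just the tag path, which we then prefix with the realtime tag provider.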
for tag in historyPaths:
	tagPaths.append("[<tag_provider>]" + str(tag).split("tag:")[1])
# Now that we have a list of tag paths, we need to grab the historical data from them.
# Start by creating a start and end time.
endTime = system.date.midnight(system.date.addDays(system.date.now(), 0))
startTime = system.date.addHours(endTime, -24)
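# With an offset of 0 in addDays, endTime is midnight this morning and startTime is midnight the day
# before, so the query covers the previous full calendar day. Changing the offset (for example to -1
# or -2) shifts the window back a day at a time when catching up.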
# Then we can make our query to tag history, specifying the various parameters.
# The parameters listed for this function can be altered to fit your need.
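# For this export: returnSize=86400 requests one row per second across the 24-hour window,
# aggregationMode="Average" averages the values within each one-second interval, and
# returnFormat='Wide' returns one column per tag path alongside the timestamp column.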
data = system.tag.queryTagHistory(paths=tagPaths, startDate=startTime, endDate=endTime, returnSize=86400, aggregationMode="Average", returnFormat='Wide')
# Turn that history data into a CSV.
csv = system.dataset.toCSV(data)
# Export that CSV to a specific file path. The r prefix makes this a raw string, so the backslashes
# in the Windows path aren't treated as escape characters; the final argument of 1 appends to the
# file if it already exists.
system.file.writeFile(r"C:\Users\<path>\<filename>.csv", csv, 1)
print("Done")