Storage after running script

Hi all,

I am running a script that opens .csv files and stores their contents in the database. The script runs fine and all the data comes in nicely.
But when I check the storage capacity of the machine I am working on, I see that it uses up to 2 GB after running one CSV file. My personal storage shrinks accordingly. I have to run about 300 CSV files, so this would take up to 600 GB, which we don’t have.
Is the script storing data somewhere? I noticed that when I restart my client (and thus also the gateway, since they are on the same machine), the storage is back to normal.
Anyone had experience with this?

Thanks in advance!


Sounds like your script is creating temporary files, directly or indirectly. Share your script. (After pasting, highlight the code and click the “preformatted” button, this one: </>.)

trigger = system.tag.read("[graphs]General/ImportOldData").value
if trigger == 1:
	import csv
	store = "Metingen"
	# Get the csv filepath
	path = system.tag.read('[graphs]ImportTagsOBViews/FileName').value
	csvData = csv.reader(open(path))
	# The first row of the CSV is the header.
	header = csvData.next()
	# Create a dataset with the header and the rest of our CSV.
	dataset = system.dataset.toDataSet(header, list(csvData))

	tagPath = system.tag.read('[graphs]ImportTagsOBViews/MeasurementName').value
	if system.tag.exists(tagPath):
		for row in range(dataset.getRowCount()):
			# Each cell holds "yyyyMMddHHmmss value"; split on the last space.
			input = dataset.getValueAt(row, 0)
			lastSpaceIndex = input.rfind(" ")
			timestampStr = input[:lastSpaceIndex]
			timestamp = system.date.parse(timestampStr, "yyyyMMddHHmmss")
			value = float(input[lastSpaceIndex+1:])
			# Store into history. Arguments reconstructed: the original
			# storeTagHistory call was lost in the paste, and the tag
			# provider name here is an assumption.
			system.tag.storeTagHistory(store, "default", [tagPath], [value], None, [timestamp])

	# Reset the trigger tag.
	system.tag.writeBlocking(["[graphs]General/ImportOldData"], [0])

Hope this helps!

I see a syntax error (indentation) at your first call to storeTagHistory, but I don’t otherwise see any temp-space consumers, unless storeTagHistory itself is one. I wouldn’t bother creating a dataset, by the way. There’s no reason you shouldn’t just insert into history as you read each row from the CSV. Your script will use much less RAM if you don’t pull the whole CSV into a dataset.
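That row-by-row approach can be sketched in plain Python (a minimal sketch; `store_row` is a hypothetical stand-in for the actual history-storage call):

```python
import csv
import io

def import_csv(fileobj, store_row):
    """Stream a CSV row by row instead of materializing a full dataset."""
    reader = csv.reader(fileobj)
    next(reader)  # skip the header row
    for row in reader:
        # Each cell holds "yyyyMMddHHmmss value"; split on the last space.
        ts, _, value = row[0].rpartition(" ")
        store_row(ts, float(value))

# Usage with an in-memory file and a collector instead of real storage:
rows = []
sample = io.StringIO(u"measurement\n20200101120000 1.5\n20200101120100 2.5\n")
import_csv(sample, lambda ts, v: rows.append((ts, v)))
# rows == [("20200101120000", 1.5), ("20200101120100", 2.5)]
```

Memory usage then stays flat regardless of how many rows the file has.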


Hmm, OK.
The likely explanation for the error is that I deleted a few print statements; maybe I removed an indentation along with them.
Do you have any idea where I can find these temp files (if they are created)?
I can clear some space by restarting the machine, but I cannot do that while it is in production.


I would use FileLight or WinDirStat or similar tools to do a before & after comparison of your disk usage.


Call me stupid, but I’ve noticed I still had some print statements active in the script.
So I had a ton of prints being stored in the wrapper log files. I did not notice it, since the Vision client and the gateway are both running on the same machine.
I still need to test whether this is the root cause, of course.

Wrapper log files don’t grow unbounded (absent some kind of bug); by default they are capped at 10 MB each and 5 files.

Sounds like either the actual history storage is taking up way more space than you think (how many rows?), or your script is a memory hog and you’re swapping large amounts of memory to disk.

Or maybe the Jython/Python CSV functions use a temp file in some way?

I had 10 GB worth of wrapper log files, so I think something went wrong with the installation, or someone adjusted the settings.
There are 800,000 lines in one file. Could this be a problem?


Seems suspicious. What does your ignition.conf file (in the data/ directory) look like? There should be some fairly clear settings around log file rotation and maximum size there.
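For comparison, the relevant entries in ignition.conf typically look something like this (a sketch of the stock Java Service Wrapper rotation settings, not your actual file):

```properties
# Wrapper log rotation (typical Ignition defaults; check your own file)
wrapper.logfile=logs/wrapper.log
wrapper.logfile.maxsize=10m
wrapper.logfile.maxfiles=5
```

With those values, the wrapper should never keep more than 5 files of 10 MB each.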

I can give this information tomorrow, when I have access to the system again.
But I was pretty stunned by the number of log files in the folder.
The installation was done by a coworker, so I’ll ask him tomorrow as well.
As long as there is no reason to believe it is the script, it is OK for now.
But if anyone sees anything suspicious in the script, please let me know :slight_smile:

OK, I managed to get the file. I can also see that I have 60 wrapper log files at this moment, which is strange, because I can find the following information in the conf file:

Are there any other parameters I should check?

I don’t think you did anything wrong; it sounds like the “absent some kind of bug” caveat doesn’t hold here for some reason.

Delete them and keep going, maybe? If you want, call support and open a case to share your script, in case there is something particular about it that triggers this bug.

I’ll open a ticket and put the script in there.
I am running the script now and the storage usage is going up and down, so I guess these are temp files. In the bigger picture, though, I would expect it only to go up.


Also submit your complete ignition.conf file.

Thanks for the advice. I’ll add it :slight_smile:

So I’ve made a support ticket, and it looks like the store & forward system takes up a lot of space.
That makes sense, since the storage is back to normal once the store & forward is done.
Are there ideal settings? I see that this gateway has some strange settings:
Memory buffer size = 250
Store settings:
Max records = 25000000
Write size = 10000
Write time = 1000

The forward settings are the same as the store settings.

Do these make sense? I’ve reduced Max records by a factor of 10 because it is way too much.

Best regards,

Any advice on this? There has to be a lot of data.

We have edited the gateway script and added a sleep after every storeTagHistory call.
The script runs slower, but it is stable and we no longer have problems with the store & forward.
This seems to be OK for now.
I think we just wanted to go too fast and thus overloaded the system :crazy_face:
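Generically, that throttling fix looks something like this (a sketch with assumed names and delay value; in the real gateway script the store function would be the system.tag.storeTagHistory call):

```python
import time

def throttled_store(records, store_fn, delay=0.05):
    """Store records one at a time, pausing between writes so a
    downstream buffer (e.g. store & forward) has time to drain."""
    for rec in records:
        store_fn(rec)
        time.sleep(delay)

# Usage with a simple collector standing in for the history call:
stored = []
throttled_store([1, 2, 3], stored.append, delay=0.001)
# stored == [1, 2, 3]
```

The delay trades total runtime for a bounded store & forward backlog.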

Thanks for all the input, guys!