Script error when running in gateway

I am getting the same error in version 8.1.43. One of my colleagues has 8.1.33 running in a production environment and he gets the same error.

When I run my script in the Script Console it works fine, but when I run the same script in the gateway timer scope it throws the error below.

Hello @Hitesh_Rathod1, could you post the actual error? Hit the plus sign in the bottom right corner. When running a script in the Script Console, it runs in the client scope. Some system functions require different arguments when running in the gateway scope.
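
For instance, a minimal sketch: the Script Console executes in the Designer's JVM, while a gateway timer script executes in the gateway service's JVM, and a single log line makes the difference visible:

from java.lang import System

logger = system.util.getLogger("ScopeCheck")
# Reports your Windows login in the Script Console, but the service
# account when run from a gateway timer script.
logger.info("Running as: " + System.getProperty("user.name"))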

Here is the error.

com.inductiveautomation.ignition.common.script.JythonExecException:
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 79, in main
  File "", line 28, in insert_data_into_database
IOError: File 'C:\Users\Hitesh\OneDrive - Adient\Desktop\Mindtrace\10474124220091639_ed486bee-62b7-47c0-a14b-6e40e1c4b1ef.json' doesn't exist or isn't a file.

	at org.python.core.Py.IOError(Py.java:185)
	at com.inductiveautomation.ignition.common.script.builtin.FileUtilities.readFileAsString(FileUtilities.java:121)
	at com.inductiveautomation.ignition.common.script.builtin.FileUtilities.readFileAsString(FileUtilities.java:113)
	at jdk.internal.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:190)
	at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:208)
	at org.python.core.PyObject.__call__(PyObject.java:461)
	at org.python.core.PyObject.__call__(PyObject.java:465)
	at org.python.pycode._pyx51.insert_data_into_database$3(:70)
	at org.python.pycode._pyx51.call_function()
	at org.python.core.PyTableCode.call(PyTableCode.java:173)
	at org.python.core.PyBaseCode.call(PyBaseCode.java:134)
	at org.python.core.PyFunction.__call__(PyFunction.java:416)
	at org.python.pycode._pyx51.main$4(:79)
	at org.python.pycode._pyx51.call_function()
	at org.python.core.PyTableCode.call(PyTableCode.java:173)
	at org.python.core.PyBaseCode.call(PyBaseCode.java:119)
	at org.python.core.PyFunction.__call__(PyFunction.java:406)
	at org.python.pycode._pyx50.f$0(:1)
	at org.python.pycode._pyx50.call_function()
	at org.python.core.PyTableCode.call(PyTableCode.java:173)
	at org.python.core.PyCode.call(PyCode.java:18)
	at org.python.core.Py.runCode(Py.java:1703)
	at com.inductiveautomation.ignition.common.script.ScriptManager.runCode(ScriptManager.java:804)
	at com.inductiveautomation.ignition.gateway.project.ProjectScriptLifecycle$TrackingProjectScriptManager.runCode(ProjectScriptLifecycle.java:859)
	at com.inductiveautomation.ignition.common.script.ScriptManager.runCode(ScriptManager.java:752)
	at com.inductiveautomation.ignition.gateway.project.ProjectScriptLifecycle$TrackingProjectScriptManager.runCode(ProjectScriptLifecycle.java:840)
	at com.inductiveautomation.ignition.common.script.TimerScriptTask.run(TimerScriptTask.java:92)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.runAndReset(Unknown Source)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.python.core.PyException: IOError: File 'C:\Users\Hitesh\OneDrive - Adient\Desktop\Mindtrace\10474124220091639_ed486bee-62b7-47c0-a14b-6e40e1c4b1ef.json' doesn't exist or isn't a file.
	... 36 common frames omitted

Is the gateway on the same computer as the designer? It looks like you are trying to access a file that doesn't exist on the gateway, or the gateway does not have permission to access that file.

Yes, the gateway and designer are both on the same computer.

My script runs against JSON files that are stored in a source folder. It reads those JSON files, stores their data in a database, and then moves them out to another archive folder.

Those folders are also on the same computer where the gateway and designer reside.

The source folder receives JSON files from third-party software. My Ignition script executes on a timer and checks the source folder; if the source folder has files, the script picks them up, inserts the data into the database, and then moves the files out to the archive folder.

My assumption is that my script stores the last file name from that batch of files, and when a new file arrives in the folder, it first runs against that stored file, which is no longer available in the folder.

But this error does not appear when I run my script in the Script Console. It only appears when I run the script in the gateway timer scope.
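
If that's the case, a check right before each read should at least avoid the crash - something like this (just a sketch, based on my script below):

import os

def insert_data_into_database(list_of_files):
	for path in list_of_files:
		# The file may have been archived since the folder was listed;
		# skip it instead of letting readFileAsString raise IOError.
		if not os.path.isfile(path):
			logger.warn("Skipping missing file: " + path)
			continue
		jsonString = system.file.readFileAsString(path)
		# ... rest of the processing ...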

Below is my script

import os
import shutil
from datetime import datetime

logger = system.util.getLogger("Mindtrace")

source_folder = r"C:\Users\Hitesh\Desktop\Mindtrace"
archive_folder = r"C:\Users\Hitesh\Desktop\Mindtrace_Archive"

def get_json_files(source_folder):
	list_of_json_files = []
	for filename in os.listdir(source_folder):
		file_path = os.path.join(source_folder, filename)
		if os.path.isfile(file_path):
			list_of_json_files.append(file_path)
	return list_of_json_files

list_of_files = get_json_files(source_folder)

print list_of_files

def move_to_archive(source_folder):
	for filename in os.listdir(source_folder):
		file_path = os.path.join(source_folder, filename)
		shutil.move(file_path, archive_folder)
		logger.info(file_path + " has been successfully moved into Archive Folder")

def insert_data_into_database(list_of_files):
	for files in list_of_files:
		print files
		jsonString = system.file.readFileAsString(files)
		data = system.util.jsonDecode(jsonString)
		welds_flat = []
		row_id = -1
		for weld_name, weld_info in data['Welds'].items():
			if weld_info['WeldStatus'] == 1:
				query = "Select max(block_id) from WeldInspection"
				result = system.db.runScalarQuery(query, 'WeldInspectionDB')
				next_block_id = int(result or 0) + 1
				row_id += 1
				welds_flat.append({
					'SerialNumber': data['SerialNumber'],
					'PartNo': data['PartNo'],
					'Badge': data['Badge'],
					'WeldName': weld_name,
					'WeldStatus': weld_info['WeldStatus'],
					'WeldDefect': weld_info['WeldDefect'],
					'TimeStamp': datetime.now().strftime('%Y-%m-%d %H:%M:%S:%f')[:-3],
					'block_id': next_block_id,
					'row_id': row_id
				})

		for record in welds_flat:
			values = (
				record['PartNo'],
				record['Badge'],
				record['WeldStatus'],
				record['WeldName'],
				record['SerialNumber'],
				record['WeldDefect'],
				record['TimeStamp'],
				record['block_id'],
				record['row_id']
			)

			query_template = """
				INSERT INTO WeldInspection(PartNo, Badge, WeldStatus, WeldName, SerialNumber, WeldDefect, TimeStamp, block_id, row_id)
				VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
			"""

			result = system.db.runPrepUpdate(query_template, values, 'WeldInspectionDB')
		logger.info("The Mindtrace query has been successfully run against " + files)
	move_to_archive(source_folder)

def main():
	if not os.path.isdir(source_folder):
		logger.info("The Path is not a directory")
	files = os.listdir(source_folder)
	if not files:
		logger.info("The folder is empty.")
	get_json_files(source_folder)
	insert_data_into_database(list_of_files)
print main()

Take a look at this thread: Wiki - how to post code on this forum
edit: And remember that you can edit your post; just click on the little pen icon:
[image]

Your code is unreadable as it is.

But in any case, don't put it in a gateway event script, put it in your library and call it from the event.
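
For example, with the logic in a project library script (hypothetically named Mindtrace here), the timer event body shrinks to a single call:

# Gateway timer event script: delegate to the project library
Mindtrace.main()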

Please consult this topic to fix your formatting when posting code:

As for your actual problem: Ignition by default runs as a "system" service account. This limited-privilege account probably doesn't have access to your user folder. If this is a test-only scenario, you can change the Ignition service to run as your user - see steps 4-9 in this section; ignore the MSSQL-related pieces:
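
One quick way to test this (a sketch, with a hypothetical logger name) is to run a throwaway gateway-scoped script - a temporary gateway timer script works - and log what the service account can actually reach:

import os

logger = system.util.getLogger("AccessCheck")
path = r"C:\Users\Hitesh\Desktop\Mindtrace"
# Both checks run as the account the Ignition service is logged on as.
logger.info("exists=%s readable=%s" % (os.path.exists(path), os.access(path, os.R_OK)))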


I edited my previous post and made the code more readable, as you suggested.

Yes, I am doing the same thing. I have put my code in the library and I'm calling it from the gateway timer event script.

See the image below.

@PGriffith Ignition has access to that source folder. The script can read the JSON files and store the data into the database. But I don't know the root cause of that error.

There's a mix of tabs and spaces here:
[image]

And in a few other places.

The code is pretty clean, but there are a few things that are a bit off, for example how you're returning a list of files from the get_json_files function but not actually using that return value when you call it.
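
In main() that looks like this:

get_json_files(source_folder)				# return value silently discarded
json_files = get_json_files(source_folder)	# what was probably intended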

I've changed a few things, not that it will solve the issue, but it might make things easier to debug:

click for code
import os
import shutil

logger = system.util.getLogger("Mindtrace")

source_folder = r"C:\Users\Hitesh\Desktop\Mindtrace"
archive_folder = r"C:\Users\Hitesh\Desktop\Mindtrace_Archive"


def build_insert_string(table, data, columns=None):
	"""
	Build the query string and the values list for a multi-row insert query.
	Use with `system.db.runPrepUpdate`.
	params:
		table (string):				The table to insert into
		data (list of dicts):		A list containing a dict for each row to insert, where the keys are the column names
		columns (list of strings):	The columns to be inserted. If None, they're deduced from the first row's keys.
	return:
		a tuple where:
			- the first element is the query string formatted for `runPrepUpdate` (with question marks as placeholders)
			- the second element is the list of values
		or None if there is no data to insert
	"""
	if not data:
		return None
	if columns is None:
		columns = data[0].keys()
	marks = "({})".format(','.join("?" for _ in columns))
	marks = ",".join(marks for _ in data)
	col_names = ','.join(columns)
	q = "insert into {} ({}) values {}".format(table, col_names, marks)
	values = [row[c] for row in data for c in columns]
	return q, values
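
# Usage sketch (hypothetical data; dict key order is arbitrary in
# Jython 2, so pass `columns` explicitly if the order matters):
#   q, v = build_insert_string('T', [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}])
#   q -> "insert into T (a,b) values (?,?),(?,?)"
#   v -> [1, 2, 3, 4]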


def get_json_files(source_folder):
	list_of_json_files = []
	for filename in os.listdir(source_folder):
		file_path = os.path.join(source_folder, filename)
		if os.path.isfile(file_path):
			list_of_json_files.append(file_path)
	return list_of_json_files


def move_to_archive(source_folder):
	for filename in os.listdir(source_folder):
		file_path = os.path.join(source_folder, filename)
		shutil.move(file_path, archive_folder)
		logger.info(file_path + " has been successfully moved into Archive Folder")


def insert_data_into_database(list_of_files):
	row_q = "Select max(block_id) from WeldInspection"
	for files in list_of_files:
		jsonString = system.file.readFileAsString(files)
		data = system.util.jsonDecode(jsonString)
		
		# Build the rows as a list (not a generator) so build_insert_string
		# can index into it and iterate over it more than once.
		welds_flat = [
			{
				'SerialNumber': data['SerialNumber'],
				'PartNo': data['PartNo'],
				'Badge': data['Badge'],
				'WeldName': weld_name,
				'WeldStatus': weld_info['WeldStatus'],
				'WeldDefect': weld_info['WeldDefect'],
				'TimeStamp': system.date.format(system.date.now(), 'yyyy-MM-dd HH:mm'),
				'block_id': (system.db.runScalarQuery(row_q, 'WeldInspectionDB') or 0) + 1,
				'row_id': row_id
			} for row_id, (weld_name, weld_info) in enumerate(data['Welds'].items())
				if weld_info['WeldStatus'] == 1
		]
		
		prepared = build_insert_string('WeldInspection', welds_flat)
		if prepared:  # None when the file had no welds with WeldStatus == 1
			q, v = prepared
			system.db.runPrepUpdate(q, v, 'WeldInspectionDB')
		logger.info("The Mindtrace query has been successfully run against " + files)
	move_to_archive(source_folder)


def main():
	try:
		json_files = get_json_files(source_folder)
	except OSError:  # Jython 2.7: a bad path raises OSError (NotADirectoryError is Python 3 only)
		logger.info("The path is not a directory")
		raise
	if json_files:
		insert_data_into_database(json_files)
	else:
		logger.info("The folder is empty.")

  • wrapped the json files function in a try/except instead of testing if the path is a directory.
  • not a dir / empty dir aborts the execution instead of running the rest of the function - even if nothing actually happens and the end result would be the same, it's easier to follow. You don't have to worry about "what happens in that function if the list is empty?"
  • used enumerate to get the row_id instead of maintaining it manually
  • bulk insertion for each file with the build_insert_string function. Could bulk all of them, but let's take things one step at a time.

a few notes:

  • there's probably a way to bulk move your files to the archive folder, but I didn't investigate this. I don't do many file-system operations with Python.
  • if block_id is the primary key of your table, which it seems to be, you should configure it to be automatically generated by your db.
  • if you can't, it could be safe (depending on the possibility of race conditions) to run the select query once, then increment it using row_id and another counter for the file loop. You can use enumerate here as well - see the sketch after this list. If nothing but that timer script calls the function, it should be safe.
  • I suggest you put the build_insert_string function somewhere in your library, so that it doesn't add clutter to that particular script. And then you can also use it for other things, it's made to be reused.
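
For the "select once, then increment" note above, a minimal sketch (assuming nothing else inserts into WeldInspection while the timer script runs):

def insert_data_into_database(list_of_files):
	base_q = "Select max(block_id) from WeldInspection"
	# Hit the database once, then hand out block ids in memory.
	last_block_id = int(system.db.runScalarQuery(base_q, 'WeldInspectionDB') or 0)
	for file_index, path in enumerate(list_of_files):
		block_id = last_block_id + file_index + 1  # one block per file
		# ... build welds_flat with 'block_id': block_id and
		# 'row_id' from enumerate, as in the version above ...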