We have three dedicated fixed-delay timer scripts in our project, each of which internally calls methods from a Java gateway scripting module. One runs at a fixed delay of 1 second and the other two at a fixed delay of 10 seconds. The 1-second script pushes data to the DB, and the other two query that data and store the results in tags.
Also, some of the Perspective screens call the same methods for viewing reports.
Note: when we run these methods from outside Ignition, they complete in less than a minute.
The scripting module does not use Ignition's JDBC connection details. The Java scripting module has its own JDBC utils that create the DuckDB connection, and that connection is used by the Java methods.
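For illustration, here is a minimal Python sketch of the pattern described: a utility creates one shared connection, the 1-second writer inserts through it, and the 10-second readers query through it. This uses the stdlib `sqlite3` module purely as a stand-in for the DuckDB JDBC connection, and all names (`readings`, `push_data`, `query_latest`) are hypothetical, not the actual module's API.

```python
import sqlite3
import threading

# Hypothetical stand-in for the Java "JDBC utils": one connection created
# once and shared by the 1 s writer and the 10 s readers.
_conn = None
_lock = threading.Lock()

def get_connection(db_path=":memory:"):
    """Return the single shared connection, creating it on first use."""
    global _conn
    with _lock:
        if _conn is None:
            _conn = sqlite3.connect(db_path, check_same_thread=False)
            _conn.execute(
                "CREATE TABLE IF NOT EXISTS readings (ts INTEGER, value REAL)")
        return _conn

def push_data(ts, value):
    """What the 1-second timer does: insert and commit."""
    conn = get_connection()
    with _lock:
        conn.execute("INSERT INTO readings VALUES (?, ?)", (ts, value))
        conn.commit()

def query_latest(n=10):
    """What the 10-second timers do: read back recent rows."""
    conn = get_connection()
    with _lock:
        return conn.execute(
            "SELECT ts, value FROM readings ORDER BY ts DESC LIMIT ?",
            (n,)).fetchall()
```

The point of the sketch is the shape of the problem: every timer and every Perspective call funnels through the same single connection, so the lock around it is where contention would show up.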
Again, I urge you to share your actual script, especially since you are not doing things the standard way (using a packaged JDBC driver like MySQL/Postgres/SQL Server and leveraging the system functions).
That's the only way to fix this kind of issue. It could be caused by a number of problems on both the script side and the database side, so it's hard to help without knowing what you're doing.
1 minute seems like a long response time to me. Do you get the same response time running your queries directly against the database?
I'm not familiar with DuckDB, but a Google search tells me it's a high-performance relational DB. You may want to look at the execution plan your query is getting and see whether an index could speed it up significantly. You can query millions of records in MSSQL in seconds, so a 1-minute execution time likely means a bad execution plan or a deadlock. You could be getting full table scans because of missing indexes, or something along those lines.
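As a quick illustration of checking the plan (using stdlib `sqlite3` as a stand-in, since DuckDB has its own `EXPLAIN` syntax and I can't speak to it), here is how an index turns a full table scan into an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column
    # that says whether the query scans the table or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT value FROM readings WHERE ts = 1234"
print(plan(query))   # without an index this typically reports a full SCAN

conn.execute("CREATE INDEX idx_readings_ts ON readings (ts)")
print(plan(query))   # now it should report a SEARCH using idx_readings_ts
```

Whatever the DuckDB equivalent is, the exercise is the same: run the plan for the exact query your timer scripts execute and confirm it isn't scanning the whole table every second.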
I don't know about DuckDB, but with MSSQL you generally avoid these deadlocks by making queries set-based and avoiding anything that creates an SGAM lock. You can also add a tempdb data file per core on the DB server to allow more concurrent SGAM allocations. If there's something like that in DuckDB, you want to look at what's happening there. That said, you generally want to avoid SGAM contention like the plague on MSSQL, especially in systems with multiple clients calling the same queries concurrently.
2.5 s is generally fine for query performance. If it sometimes executes almost instantly and other times takes a long time, it's likely a lock-contention situation. You can probably test this by making a form with a bunch of bindings that populate datasets using the same query, simulating multiple clients pounding it simultaneously. Don't do that while the production system is using the database (I'm assuming these are only SELECT queries).
The next step I would recommend is copying your script to a test page and calling it from a button with sample data. Isolate each part of the process to see where the chunky performance is, and figure out why those parts are taking so long.
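A cheap way to do that isolation is a timing wrapper around each step. This is a generic sketch, not anything specific to Ignition or your module; `fetch_report` is a placeholder for one of your real steps:

```python
import time
from functools import wraps

def timed(fn):
    """Print how long each call to fn takes, so slow steps stand out."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            print("%s took %.3f s" % (fn.__name__, time.time() - start))
    return wrapper

@timed
def fetch_report():
    time.sleep(0.01)  # placeholder for the real query / processing step
    return "rows"
```

Wrap each stage (connect, execute, fetch, post-process) separately; the printout tells you which one owns the minute.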
If you can't make things perform faster, spin up a thread to execute them so the processing isn't locking up your main thread.
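In Ignition scripting that usually means `system.util.invokeAsynchronous`; the same idea in plain Python (so it runs anywhere) looks like this, with a callback receiving the result when the slow work finishes. All names here are illustrative:

```python
import threading

def run_report(params, on_done):
    """Run the slow work off the calling thread; hand the result to on_done."""
    def work():
        result = "report for %s" % params  # placeholder for the slow query
        on_done(result)
    t = threading.Thread(target=work)
    t.start()
    return t  # caller can join() or just let it finish on its own

results = []
t = run_report("line-1", results.append)
t.join()
print(results[0])
```

The caller returns immediately instead of blocking for the duration of the query; just make sure the callback writes its result somewhere thread-safe (a tag, a queue) rather than touching UI state directly.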
What happens if you run just a single instance of your query from a button, with all the timer scripts that hit the DB stopped (so there is only one query on the connection)?
What happens if you try to run several at once from the same connection, using the JDBC tool you've been testing your queries with?