We are trying to tune our Ignition deployment to reduce CPU and memory utilization. We currently don’t have a setup for isolated load testing, stress testing, or other performance benchmarking, nor do I think a robust strategy is feasible in the short term.
I’m looking for any built-in or simple configurations to do my own testing, or even better, the wisdom gleaned from your own testing and a walkthrough of the setup. I have read other threads inside and outside this forum that cover garbage collection, ZGC, garbage collection tuning, and JVM service tuning, so feel free to be as detailed and technical in responses as possible. If you’ve done benchmarking across a scaled topology, feel free to outline those results as well.
Use case: we are currently trying to replace database queries/named query execution when a user launches a Perspective session. We’ve written some tags with inventory & user entitlement datasets stored in them, which are updated periodically (e.g. on DB write). We read from those tags to display dynamic data to our users in Perspective, as opposed to doing a DB query every time. I thought this would yield a marked decrease in resource consumption, but we’re still seeing CPU spikes when these component bindings are in use (i.e. every time the script that reads from the tag is executed). By “we’re still seeing”, I mean our measurement is primitive: my coworker toggled the tab that executes my scripts while watching Gateway Status → Performance. So I’m going to do some of my own benchmarking, but I’d like to timebox it to a few days with this forum’s help.
We have many more use cases and I can outline what we’re running on if you’d like specifics, but ideally, I’m looking for testing configuration and outlines.
Using script transforms in bindings could be part of the problem. Always use Expressions when possible, and if you must use scripting, put it in a library function and call it using the runScript expression. Script transforms are very inefficient.
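To illustrate the pattern above: a minimal sketch of a project library function that an expression binding could call via `runScript` instead of a script transform. The library name `util`, the function, and the bound property are all hypothetical examples, not anything from the original post.

```python
# Hypothetical project library script named "util".
# Keeping the logic here (rather than in a script transform) lets an
# expression binding invoke it, e.g.:
#   runScript("util.formatLabel", 0, {view.params.rawName})
# where 0 means "no polling" (re-evaluate only when the argument changes).

def formatLabel(rawName):
    """Normalize a raw name for display (illustrative logic only)."""
    if rawName is None:
        return ""
    return rawName.strip().title()
```

The function itself is plain Jython-compatible Python, so it can also be unit-tested outside the gateway.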
That’s a good question–I’m not sure. When I have converted script transforms into Expressions, the improvement has been most noticeable in the snappiness of the UI. Views render and update much quicker. I’ve also seen browser memory use decrease significantly. But given those improvements, it’s hard to imagine that resource use on the server isn’t also improved.
Yes, but do so with care. When I learned that rule of thumb, I started marking everything private willy-nilly, including view parameters and custom properties that I was passing as parameters to other views. There’s a reason properties aren’t private by default.
UI snappiness is definitely important. We work with large datasets and while I’ve tried to optimize for time/space complexity, it’s still noticeable. I’ll keep this in mind.
For the script you run on these tag datasets, could you run it once, cache the result, and have the UI fetch the cached result instead of doing a transform every time? Also, if you post the script, people may be able to help you optimize it.
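As a sketch of the caching idea: a simple TTL cache that recomputes an expensive result only after it expires. The function names and the cache-key scheme are assumptions for illustration, not Ignition APIs; in Ignition you might put this in a project library, or precompute into a memory tag from a gateway timer script instead.

```python
import time

# Illustrative TTL cache: key -> (expires_at, value).
# Not an Ignition API; a hypothetical helper for a project library.
_cache = {}

def getFiltered(key, loader, ttl=30):
    """Return the cached value for `key`, recomputing via `loader()`
    only after `ttl` seconds have elapsed."""
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]  # still fresh: no recomputation
    value = loader()
    _cache[key] = (now + ttl, value)
    return value
```

The tradeoff is staleness: results can be up to `ttl` seconds old, which mirrors the stale-data question raised later in this thread about Named Query caching.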
We don’t display large datasets to users, so I’d think the computation of the payload is more intensive than encode/decode operations. I’ll keep this in mind, though, because I haven’t done any client-side analysis on this either and I think that’s what you’re getting at.
Not sure the code needs optimization - it reads a dataset and filters the output with O(N) complexity.
However, I am new to Ignition, so I would like to be able to see how my code operates within the more complex JVM. Some of the design and implementation choices are foreign to me, so outlines of test suites, key performance indicators, and/or any additional heuristic insight are all helpful.
Could be true but we can’t assume or know that without seeing it. As you say -
I am new to Ignition
So there may be some Ignition-specific gotchas you’re not aware of bogging down your script, because patterns that are fine in CPython can behave differently here. For this reason alone, I think it’s probably worth posting the script. Bad scripts can certainly bog down a server.
For my own performance testing, I have never really gone beyond a simple timer decorator (applied when a script seems slow) to get execution time in ms, improving the big-O of slow or complicated scripts, and optimizing database queries directly when they are slow.
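The timer decorator mentioned above can be sketched like this; it is Jython-2.7-compatible, and the logging target is an assumption (in Ignition you would likely swap `print` for `system.util.getLogger(...)`).

```python
import time
from functools import wraps

def timed(fn):
    """Report wall-clock execution time in ms for each call.
    In an Ignition gateway, replace print with a logger, e.g.
    system.util.getLogger("perf").info(...)."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.time() - start) * 1000.0
            print("%s took %.2f ms" % (fn.__name__, elapsed_ms))
    return wrapper
```

Decorating a library function with `@timed` lets you watch its execution time in the gateway logs while a coworker exercises the UI, which is roughly the primitive benchmarking loop described earlier in the thread.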
Regarding your architecture: you say that you are just filtering a user/inventory dataset. How many rows are these? It might be significantly more costly to filter this with Jython than to ask the database for the specified rows at the moment you need them. Databases are much more appropriate for WHERE filtering than a Jython transform script. You can use Named Queries with caching, so queries with the same params share results up to the TTL. This would absolutely be faster for the case where a user opens your view, calls the NQ with some params, gets a result, leaves, and comes back: the second NQ call is a cache hit served from the gateway (no DB hit), versus running a Jython transform over the full dataset on every visit.
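The contrast can be sketched as follows. The first function mirrors what a Jython transform does (an O(N) scan of the whole dataset on the gateway for every execution), while the commented query shows the database-side equivalent, where only matching rows ever leave the DB. The column and table names are made up for illustration.

```python
# What a jython transform typically does: scan the full dataset on the
# gateway every time the binding executes.
def filterRows(rows, userId):
    """rows: list of dicts with a 'user_id' key; returns this user's rows."""
    return [r for r in rows if r["user_id"] == userId]

# Database-side equivalent: a parameterized Named Query pushes the
# filtering into the DB (and cached results can be shared per-params):
#   SELECT item, qty
#   FROM inventory
#   WHERE user_id = :userId
```

With caching enabled on the Named Query, repeated executions with the same `userId` within the TTL never touch the database at all.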
Optimizing Perspective is tricky. Almost everything that happens is executed on the gateway, so things you wouldn't expect can have an impact. If you're running your DB on the same machine as the gateway, or using VMs without ensuring that your hypervisor is configured correctly, you can see poor performance.
I would avoid storing large datasets in a tag. Databases are built for that type of workload, and in my experience there is no performance gain here. I am not surprised that you haven't seen much, if any, improvement.
Also, remember that Ignition uses Jython: in many cases where, outside of Ignition, you would reach for a Python package, inside Ignition there is almost always a Java alternative. This also means that some optimizations are built in. For instance, `chartComponent.chart` is more performant than `chartComponent.getChart()`.
Here is a link to a post with some good information:
Apologies for being evasive. I’m very experienced with (non-Ignition) full-stack, high-throughput, high-frequency development, with less experience in a JVM environment. However, the question was how to measure performance, not to do a code review. I’m not above code review; that’s just not what I’m looking for.
Yes, databases are optimized for data operations. I understand Named Queries > Expression > Script Transform, but how would I measure that? I can parse logs, but I’d ideally like a benchmarking toolkit for this type of discovery.
I didn’t know you could enable caching with Named Queries, this looks like a better implementation than what I’m doing. Have you noticed any issues with stale data, where the client session doesn’t refresh automatically if a view isn’t reloaded?
We use a separate database server within our cloud provider, where we don’t control network routing (presumably it’s over LAN, but I know some cloud providers virtualize that and only guarantee regional affinity). Given this architecture, my goal was to minimize queries and offload processing to the client where possible.
Exactly. While I’m impressed by Ignition’s overall performance, understanding where and how to change my coding paradigms to offload work from the gateway remains elusive.
I don’t configure my hypervisor; I am at the mercy of our cloud provider there. While we aren’t seeing terrible performance, it’s still a cause for concern, because we see CPU spikes over 50% on the gateway performance page for a single query that, frankly, isn’t that large (dataset tag of ~1000 rows, filtered result of ~100 items). Our usage sits at 2% otherwise.
Thanks for that optimization list, I’m about to go through my library and rework it with those in mind.