What are some ways to instrument scripts?

Sometimes my code doesn't do what I expect, and sometimes it isn't completely obvious why.

Over the years I've used a variety of strategies to figure out what the code is doing and why:

a. set a breakpoint and step through the code
b. sprinkle hardcoded messages ("here", "here1", "here2") through the code and watch the results in the console/log file/database log table...
c. create some global variables and bind those variables to a display
d. add an assert
e. create a test harness / unit test suite to run the code in a controlled environment
f. build a logging system that allows me to turn logging on and off by category, source, and severity, route it to a variety of destinations, and then add lots of logging calls throughout the code
g. bang my head against the desk next to the keyboard until the answer appears in flashing lights

I've been trying to develop effective strategies for Ignition and would like some input and discussion. (Maybe even have a thread that people could link to ...)

So far,
a. not an option
b. system.util.getLogger("Test").info("Here1") seems to be working ok. print works in the console.
c. system.tag.writeBlocking(["[Test_Provider]variable"], [variableInQuestion]) seems to be working ok
d. assert would work, as long as the message ends up somewhere that I know to look for it (generally I would force this with a try/except and system.util.getLogger(...).error(...)). Rough sketches of b, c, and d follow this list.
e. the console seems to work for some things, but anything that requires a specific context doesn't work. Stripped-down test views are working for some things. What about scripts run from "the pipeline", client scripts, or other places scripts could run that I'm not thinking about?
f. there are already several ways that things are logged. So far I've been doing Perspective, so everything is a gateway script, and the gateway Status > Diagnostics > Logs page seems to be the right place for most of it.
g. plenty of space on the desk. Nothing has changed there.
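
For reference, here's roughly what b, c, and d look like in one of my gateway scripts (a sketch; "rows" is just a stand-in for whatever I'm checking, and "[Test_Provider]variable" is a scratch memory tag I created for debugging):

```python
logger = system.util.getLogger("Test")
logger.info("Here1")  # (b) breadcrumb message, shows up in the gateway logs

# (c) mirror a value onto a scratch tag that a display is bound to
system.tag.writeBlocking(["[Test_Provider]variable"], [variableInQuestion])

# (d) assert, with the failure routed somewhere I know to look
try:
    assert len(rows) > 0, "query returned no rows"
except AssertionError as e:
    logger.error("assertion failed: %s" % e)
```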

Comments, suggestions?

1 Like

I use b+f. Once in a great while I use c.

I keep a tail -f /path/to/ignition/install/logs/wrapper.log open in a console while working with anything in gateway scope.

4 Likes

I also use b + f for most everything.

When using scripts in Perspective on components or in script transforms, I use system.perspective.print(). I'm not sure if that's what you meant by "print" or if you meant the Python built-in print.

A good reference for how "print" works in all of Ignition.
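
As a quick illustration (a sketch; the component event and property names are made up), this is where system.perspective.print fits in a Perspective component event script:

```python
# onActionPerformed of a Perspective Button (Perspective scope only)
def runAction(self, event):
    # unlike the plain Python print, this works from Perspective scope
    system.perspective.print("button clicked, view param = %s" % self.view.params.machineId)
```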

4 Likes

That should be a sticky...

3 Likes

Lots of b for me. Sometimes I'll do c in the form of a temporary label or table, for watching a given custom prop or dataset. Much easier to read a dataset from a table component than from a hundred lines of logs.
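
A minimal sketch of what that looks like (assuming a view custom property named debugWatch with a temporary Table component bound to it; the names are made up):

```python
def transform(self, value, quality, timestamp):
    # stash the intermediate dataset on a custom prop so the temp Table shows it
    self.view.custom.debugWatch = value
    return value
```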

As the person nominally "in charge" of the development of scripting in Ignition, I'm very open to ideas on how to improve things. There are a lot of reasons we can't do straightforward breakpoints in Ignition, but there may be other "good enough" solutions that help folks out.

How hard would it be to implement a kind of "watch window" for scripts? I often find myself looking in the logs on the gateway to see if a script actually executed or not.

1 Like

@Ryan_McLaughlin's Perspective Utility Scripts are another way of doing this. While not designed with debugging in mind, the statusbar type of display can be used to display variables very nicely.


Ryan has done an excellent job on styling with fades, etc.

1 Like

I've been working on this for the last few projects in perspective.
Log_2023-05-19_1453.zip (1.9 KB)

If anyone wants to improve on this and post it back that would be great.

Mostly I call Log.Err(self, sys.exc_info()) after 'try'ing something fails. It uses Python's error object to peel out where your try failed, so you know the line number, not just the subroutine. I haven't taken the time to dive into traceback, but I plan to in the next project....
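
For anyone who wants the same thing without the attached script, a rough equivalent using traceback looks like this (a sketch, not the Log module itself; riskyOperation is a stand-in):

```python
import sys, traceback

def logFailure(logger):
    # Call from inside an except block; pulls the file/line of the failure.
    excType, excValue, tb = sys.exc_info()
    filename, line, func, text = traceback.extract_tb(tb)[-1]  # innermost frame
    logger.error("%s at %s:%s in %s(): %s" % (excValue, filename, line, func, text))

logger = system.util.getLogger("Test")
try:
    riskyOperation()  # stand-in for whatever is being 'try'ed
except Exception:
    logFailure(logger)
```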

See PythonAsJavaException in my later.py script module. You can hand it directly to a logger to get a pretty stack trace from your Python code.

Check out the Python Code Assistant - chatGPT Exchange Resource. That resource is only a demonstration and not a finished product, but it does work for analyzing Python scripts in the script library.

I used chatGPT to write most of the parsing scripts to dig into the folders/files.

Python Code Assistant

Other than that new tool, I mostly do "b" - sprinkle messages at key points. I'll even include timestamps throughout to see if any part is executing slower than the others. I do that in the script console; once I get the exact results, I copy it over to either a screen component or into the script libraries and do my final testing.
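
The timestamp sprinkling is nothing fancy; in the script console it's roughly this (doFirstPart/doSecondPart stand in for whatever is being measured):

```python
import time

t0 = time.time()
doFirstPart()
print "first part: %.3f s" % (time.time() - t0)

t0 = time.time()
doSecondPart()
print "second part: %.3f s" % (time.time() - t0)
```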

I've also done "f" but didn't enjoy the end result - it makes the code much longer and messier.

I do use this. By making functions that do only one specific thing, you can test most of them in the script console.
I made a unit test framework for this; I can share it if you're interested. It's not an advanced thing at all, but it helps to make tests quickly and simply.
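
Nothing fancy is needed to get started; even something like this in the script console goes a long way (a sketch, not the framework; utils.scaleValue is a made-up project library function):

```python
def testScaleValue():
    assert utils.scaleValue(0, 0, 100) == 0.0
    assert utils.scaleValue(50, 0, 100) == 0.5
    assert utils.scaleValue(100, 0, 100) == 1.0
    print "scaleValue: all tests passed"

testScaleValue()
```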

It's not enough to test everything; as you said, some things require a context that you can't (yet? **) get in the script console... But if you can make sure that most of your bricks are straight and robust, building the wall becomes much simpler.

** I think some people have asked for the possibility to simulate other contexts in the script console. I'm not sure what the answer was. I have to admit I'd really like something like this. @PGriffith ?

1 Like

Well, the script console "simulates" the Vision Client context because that's what it is: it runs in a non-gateway process with Vision's model of accessible jars.

Even if you could load gateway jars into the designer, it still wouldn't be the gateway process underneath, so it would still be useless.

Test gateway context scripts with message handlers. Full stop. (If you get creative, I'm sure you could make a generic one in your dev environment.)
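
For example (a sketch; the project, handler, and function names are all made up), a gateway message handler can wrap whatever you need to exercise:

```python
# Gateway Events > Message > handler named "debugRunner"
def handleMessage(payload):
    # runs in true gateway scope; call whatever you're trying to test
    return myLibrary.somethingGatewayOnly(payload["arg"])

# ...then poke it from the Designer script console (or anywhere else):
result = system.util.sendRequest("MyProject", "debugRunner", {"arg": 42})
print result
```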