Sometimes my code doesn't do what I expect, and sometimes it isn't completely obvious why.
Over the years I've used a variety of strategies to figure out what the code is doing and why:
a. set a breakpoint and step through the code
b. sprinkle hardcoded messages ("here", "here1", "here2") through the code and watch the results in the console/log file/database log table...
c. create some global variables and bind those variables to a display
d. add an assert
e. create a test harness / unit test suite to run the code in a controlled environment
f. build a logging system that allows me to turn logging on and off by category, source, severity and route it to a variety of destinations and then add lots of logging calls to it
g. bang my head against the desk next to the keyboard until the answer appears in flashing lights
I've been trying to develop effective strategies for Ignition and would like some input and discussion. (Maybe even have a thread that people could link to ...)
a. not an option
b. system.util.getLogger("Test").info("Here1") seems to be working ok. print works in the console.
c. system.tag.writeBlocking("[Test_Provider]variable", variableInQuestion) seems to be working ok
d. assert would work, as long as the message ends up somewhere that I know to look for it (generally I would force this with a try/except and system.util.getLogger(...).error(...))
e. the console seems to work for some things, but anything that requires a specific context doesn't work. Stripped down test views are working for some things. What about scripts run from "the pipeline" or client scripts or other places that scripts could run that I am not thinking about?
f. there are already several ways that things are logged. So far I've been doing Perspective so everything is a gateway script and the gateway Status > Diagnostics > Logs seems to be the right place for most of it.
g. plenty of space on the desk. Nothing has changed there.
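Item (d) above might look like the sketch below. `log_error` is a stand-in I've added so the snippet runs anywhere; inside Ignition you'd replace its body with `system.util.getLogger("Test").error(msg)`:

```python
import traceback

def log_error(msg):
    # Stand-in destination; in Ignition this would be
    # system.util.getLogger("Test").error(msg)
    print(msg)

def checked(value):
    """Assert a precondition, but route the failure (with file and
    line number) somewhere visible before re-raising."""
    try:
        assert value > 0, "value must be positive, got %r" % value
        return value
    except AssertionError:
        log_error(traceback.format_exc())
        raise
```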
Lots of b for me. Sometimes I'll do c in the form of a temporary label or table, for watching a given custom prop or dataset. Much easier to read a dataset from a table component than from a hundred lines of logs.
As the person nominally "in charge" of the development of scripting in Ignition, I'm very open to ideas on how to improve things. There are a lot of reasons we can't do straightforward breakpoints in Ignition, but there may be other "good enough" solutions that help folks out.
If anyone wants to improve on this and post it back that would be great.
Mostly call Log.Err(self, sys.exc_info()) after 'Try'ing something fails. It uses Python's error object to peel out where your try failed - so you know the line number, not just the subroutine. I haven't taken the time to dive into traceback - but I plan to in the next project....
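A guess at what a Log.Err-style helper does with sys.exc_info() (the helper name and output format here are my own, not the poster's actual code) — walking the traceback to the innermost frame gives you the file and line number where the try block actually broke:

```python
import sys

def describe_failure(exc_info):
    """Return 'file:line: ExcType: message' for the innermost frame
    of the failure, so logs show exactly where the try block broke."""
    exc_type, exc_value, tb = exc_info
    # Walk to the deepest frame, where the exception was raised.
    while tb.tb_next is not None:
        tb = tb.tb_next
    return "%s:%d: %s: %s" % (
        tb.tb_frame.f_code.co_filename,
        tb.tb_lineno,
        exc_type.__name__,
        exc_value,
    )

try:
    {}["missing"]  # something that fails
except Exception:
    print(describe_failure(sys.exc_info()))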
Other than that new tool....I mostly do "b" - sprinkle messages at key points...I'll even include timestamps throughout to see if any part is executing slower than others. I do that in the script console, and once I get the exact results I copy it over to either a screen component or into the script libraries and do my final testing.
I've also done "f" but didn't enjoy the end result - makes the code much longer and messier.
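Sprinkling timestamps as described above can be tidied into a small helper; this `Checkpoints` class is my own sketch of the pattern, not anything built into Ignition:

```python
import time

class Checkpoints:
    """Collect named timestamps so a report shows which section
    of a script is slow - a tidier version of sprinkling
    timestamp prints through the code."""
    def __init__(self):
        self.marks = [("start", time.time())]

    def mark(self, label):
        self.marks.append((label, time.time()))

    def report(self):
        lines = []
        for (_, prev_t), (label, t) in zip(self.marks, self.marks[1:]):
            lines.append("%-12s %.3f s" % (label, t - prev_t))
        return "\n".join(lines)
```

Call `mark("query")`, `mark("transform")`, etc. between sections, then print `report()` once at the end instead of reading interleaved log lines.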
I do use this. By making functions that do only one specific thing, you can test most of them in the script console.
I made a unit test framework for this, I can share it if you're interested. It's not an advanced thing at all, but it helps making tests quickly and simply.
It's not enough to test everything, as you said some things require a context that you can't (yet ? **) get in the script console... But if you can make sure that most of your bricks are straight and robust, building the wall becomes much simpler.
** I think some people have asked for the possibility to simulate other contexts in the script console. I'm not sure what the answer was. I have to admit I'd really like something like this. @PGriffith ?
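Not the framework mentioned above, but a minimal sketch of the same idea — a runner small enough to paste into the script console for testing pure "brick" functions that need no gateway context:

```python
def run_tests(tests):
    """Run a list of (name, zero-argument function) pairs and return
    a summary string; each function should raise (e.g. via assert)
    on failure."""
    passed, failed = 0, []
    for name, fn in tests:
        try:
            fn()
            passed += 1
        except Exception as e:
            failed.append("%s: %s: %s" % (name, type(e).__name__, e))
    lines = ["%d passed, %d failed" % (passed, len(failed))]
    lines.extend(failed)
    return "\n".join(lines)

# Example: testing a pure function with no gateway context.
def add(a, b):
    return a + b

def test_add_ints():
    assert add(1, 2) == 3

def test_add_strings():
    assert add("a", "b") == "ab"

print(run_tests([
    ("adds ints", test_add_ints),
    ("concats strings", test_add_strings),
]))
```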