is it possible to use the standard Python/Jython unittest modules with Ignition? My goal is to [eventually] incorporate these unittests in with Jenkins and potentially then EAM, but right now I’m struggling to get even a trivial example to work inside the script console.
The error message I’m receiving:
Traceback (most recent call last):
  File "<buffer>", line 72, in <module>
  File "C:\Users\<username>\.ignition\cache\<gwname>_8088_443_main\C0\pylib\unittest.py", line 766, in __init__
    self.progName = os.path.basename(argv[0])
IndexError: index out of range: 0
class Person:
    name = []

    def set_name(self, user_name):
        self.name.append(user_name)
        return len(self.name) - 1

    def get_name(self, user_id):
        if user_id >= len(self.name):
            return 'There is no such user'
        else:
            return self.name[user_id]
import unittest

class Test(unittest.TestCase):
    """
    The basic class that inherits unittest.TestCase
    """
    person = Person()  # instantiate the Person class
    user_id = []  # variable that stores obtained user_id
    user_name = []  # variable that stores person name

    # test case function to check the Person.set_name function
    def test_0_set_name(self):
        print("Start set_name test\n")
        """
        Any method which starts with ``test_`` will be considered as a test case.
        """
        for i in range(4):
            # initialize a name
            name = 'name' + str(i)
            # store the name into the list variable
            self.user_name.append(name)
            # get the user id obtained from the function
            user_id = self.person.set_name(name)
            # check if the obtained user id is null or not
            self.assertIsNotNone(user_id)  # null user id will fail the test
            # store the user id to the list
            self.user_id.append(user_id)
        print("user_id length = ", len(self.user_id))
        print(self.user_id)
        print("user_name length = ", len(self.user_name))
        print(self.user_name)
        print("\nFinish set_name test\n")

    # test case function to check the Person.get_name function
    def test_1_get_name(self):
        print("\nStart get_name test\n")
        """
        Any method that starts with ``test_`` will be considered as a test case.
        """
        length = len(self.user_id)  # total number of stored user information
        print("user_id length = ", length)
        print("user_name length = ", len(self.user_name))
        for i in range(6):
            # if i does not exceed the total length, verify the returned name
            if i < length:
                # if the two names do not match, it will fail the test case
                self.assertEqual(self.user_name[i], self.person.get_name(self.user_id[i]))
            else:
                print("Testing for get_name no user test")
                # if i exceeds the length, check for the 'no such user' message
                self.assertEqual('There is no such user', self.person.get_name(i))
        print("\nFinish get_name test\n")

if __name__ == '__main__':
    # begin the unittest.main()
    unittest.main()
This (along with if __name__ == '__main__':) implies that the unittest module assumes it's running directly from the command line. Unfortunately, that's not true anywhere in Ignition - in all cases, including the script console, the "raw" script you write is passed off to a Java method that compiles and executes the code, passing in particular values for Python's locals() and globals(). At a guess, the unittest module may well work in a pure Jython environment (which is why it's included in our distribution of the standard library), but you would probably have to do some significant re-engineering to get it to work within an Ignition environment.
Thanks, I appreciate it. From the script console I was able to get an alternative call working, as I suspected the problem was something along the lines of the command line/main call.
import unittest

def fib(n):
    """ Calculates the n-th Fibonacci number iteratively

    >>> fib(0)
    0
    >>> fib(1)
    1
    >>> fib(10)
    55
    >>> fib(40)
    102334155
    >>>
    """
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a

class FibonacciTests(unittest.TestCase):
    def test_Dummy(self):
        self.assertEqual([0, 3.3, 'FOIL', 5.01], [0, 3.3, 'FOIL', 5.01])

    def test_Calculation(self):
        self.assertEqual(fib(0), 0)
        self.assertEqual(fib(1), 1)
        self.assertEqual(fib(5), 5)
        self.assertEqual(fib(10), 55)
        self.assertEqual(fib(20), 6765)

suite = unittest.TestLoader().loadTestsFromTestCase(FibonacciTests)
output = unittest.TextTestRunner(verbosity=2).run(suite)

if output.wasSuccessful():
    print 'Passed tests.'
else:
    number_failed = len(output.failures) + len(output.errors)
    print "Failed " + str(number_failed) + " test(s)"
Thanks Sander - that is a good idea. May I pick your brain a bit - how do you decide when code is ‘big enough’? Just trying to get a feel for the dev ops and QA side of Ignition development vs traditional development, and some best practices.
If it’s a stand-alone function, and the effects are immediately visible in the client, I usually don’t test it, since it’s easy enough to confirm the validity of the function.
By stand-alone, I mean the data comes either from the global scope (tags) or from visible client state, and the function only gets called in a specific context.
When functions get linked together, or use data that’s hard to consult, it usually makes debugging harder, so I put it in a library and write tests for it. To me, it’s not about the single function being small or large, it’s about it being isolated or not.
@PGriffith : We're experimenting with continuous integration and unit testing. Using the WebDev Python resource, we've got a REST API running and can use the Python unit testing framework to test utility functions. But...
Running a test against a function which uses an Ignition function like system.tag.readBlocking(), we receive the error "NameError: global name 'system' is not defined".
Do you know any way to resolve or circumvent this issue?
This is because the "system" library is built into Ignition, not Python. In order to write pytest or unittest code against your code, you will unfortunately have to try one of a few things:
Trigger it with web dev endpoints so that it executes on the gateway
Jump around any system functions for testing purposes; this sort of defeats part of the purpose of testing
Build a perspective screen to trigger this code and use selenium to trigger the parts of the screen that call it and monitor state through labels
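For the second option, one lightweight variant is to install a stub "system" package in sys.modules before the code under test imports it. This is only a sketch: FakeQualifiedValue, the fixed value 42, and the tag path are illustrative stand-ins, not Ignition's real types.

```python
import sys
import types

# Fake stand-in for Ignition's qualified-value results (illustrative only)
class FakeQualifiedValue(object):
    def __init__(self, value):
        self.value = value
        self.quality = 'Good'

def fake_readBlocking(paths):
    # Return one fake qualified value per requested tag path
    return [FakeQualifiedValue(42) for _ in paths]

# Register the stub modules so 'import system' resolves to them
fake_system = types.ModuleType('system')
fake_tag = types.ModuleType('system.tag')
fake_tag.readBlocking = fake_readBlocking
fake_system.tag = fake_tag
sys.modules['system'] = fake_system
sys.modules['system.tag'] = fake_tag

# Code under test that does 'import system' now gets the stub
import system
result = system.tag.readBlocking(['[default]Line1/Speed'])
print(result[0].value)
```

Obviously the stub only covers the calls your code actually makes, and the values it returns have to be chosen per test case.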
Can you show any screenshots or more detail about exactly how you're setting things up?
Also, more generally for the topic at hand, is anyone who's interested in unit testing within Ignition willing to talk through what they're looking to do in more detail? Building better tools is something I'm very interested in, either first party or as part of Ignition extensions.
My interest is primarily in reducing the cost of upgrading applications. I have multiple vendors across dozens of gateways. When we talk about platform version upgrades, getting each application tested to ensure it meets compatibility is usually a work order. If I had a native, automated way to provide this testing capability, we could build that into our application standards.
My ideal state would be TDD through feature branches in GitLab without having to use 3rd-party modules or custom-developed workflow.
You and I talked about UI testing in perspective, that's a huge one for us. We had to build an entire framework library to interact with perspective through selenium just for this, and everything we do is subject to change whenever the application is upgraded.
As the tool grows further into the software space, unit testing and regression testing will become more and more a client requirement. The two main requests would be:
The ability to unit test your project scripts, that include system functions
The ability to test Perspective UIs without needing to maintain an entire library of code, since I know you guys have the exact same setup and would love to share
Of course I'm willing to try and explain our setup:
(Future) Setup:
We use subversion (internal server) for version control
We use Jenkins for CI
Using Jenkins, we want to start up a Docker container with the project to test, run the (unit) tests, and get the results back as a nice report. We interface through a REST API with the WebDev module.
For other (.NET) projects, we've got this up and running.
Where do we stand now:
Prototyping unit testing and the REST API, starting with an abstract library of "utility functions" (project independent)
A) For each script-library a "mirror" library in the unitTesten-package has been created.
This unit-test version of the library contains the function to call and start tests for this lib.
And for each function in the original library, a class with testcases for the equivalent function e.g. "BepaalInstantieDeelVanStringTests" has been created.
Using the WebDev module, I've got some Python code running that can be accessed using the Jenkins JSON plugin or, for testing purposes, with Postman. (see screenshot below)
The get-command contains the action to execute (runTest) and the name of the library/package to test (e.g. cwdStrings)
As you can see, the results come in nicely..
So far so good!
The problems arise when we try to use this setup to test functions utilizing Ignition functions (e.g. system.tag.readBlocking); this results in an error. (Running the code from the script console to eliminate possible problems, the results using Postman are the same.)
As you can see, the system.tag.readBlocking call in the class itself works fine. But when the call to the library is executed by the unittest framework, the error occurs.
I know that unittest typically is executed as a module, and that it uses some trickery to figure out its own script and import it for testing.
I wonder if under the hood it’s doing something to the interpreter it’s executed in that’s causing it to lose visibility to the system library provided by Ignition?
In this particular case, I'm willing to bet that the problem has something to do with unittest's dynamic construction of test cases. I suspect you could subclass TestLoader or TestSuite and do something to pass in the required context; possibly as simple as passing in system=system in the parameters of one of the methods, in order to pass the system variable from the calling scope into the constructed methods.
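A sketch of that idea, using a fake in place of Ignition's system object and a throwaway module standing in for a project script library (both names here are hypothetical): pushing the caller's system object into the library module's globals before the loader builds and runs the cases makes the name resolvable inside the tested functions.

```python
import sys
import types
import unittest

# Stand-in for a project script library whose code references a global
# 'system' name (hypothetical; yours would be a real Ignition script module)
mylib = types.ModuleType('mylib')
exec("def read_value():\n    return system.value", mylib.__dict__)
sys.modules['mylib'] = mylib

# Stand-in for Ignition's system object (in the gateway you would pass
# the real 'system' from the calling scope instead)
class FakeSystem(object):
    value = 99

# The fix: inject 'system' into the library's globals before running tests
mylib.system = FakeSystem()

class InjectionTest(unittest.TestCase):
    def test_read_value(self):
        self.assertEqual(mylib.read_value(), 99)

suite = unittest.TestLoader().loadTestsFromTestCase(InjectionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```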
For anyone interested in this, using @tim.schultz's code we were able to run tests using the system library with one minor adjustment; we're setting the stream argument to sys.stdout instead of the default sys.stderr on unittest.TextTestRunner(stream=sys.stdout, verbosity=2).run(suite).
import importlib
import sys
import unittest

_SYS_MODULES = sys.modules.keys()

def create_import_test(module_name):
    def test(self):
        self.assertModuleAvailable(module_name)
    return test

def pop_imported_modules():
    for module in sys.modules.keys():
        if module not in _SYS_MODULES:
            sys.modules.pop(module)

class TestPackageImports(unittest.TestCase):
    modules_to_test = ["system.alarm", "system.db", "system.zzz"]

    for module_name in modules_to_test:
        test_name = "test_import_{}".format(module_name)
        locals()[test_name] = create_import_test(module_name)
    pop_imported_modules()

    def assertModuleAvailable(self, module_name):
        try:
            importlib.import_module(module_name)
        except ImportError:
            self.fail("Failed to import module: {}".format(module_name))

def run():
    suite = unittest.TestLoader().loadTestsFromTestCase(TestPackageImports)
    unittest.TextTestRunner(stream=sys.stdout, verbosity=2).run(suite)
It uses PrettyUnit (https://github.com/beatsbears/prettyunit), from which I removed the "generate_json_and_send_http" function and the "import requests" line at the top, as they are not used in this case.
def __getMembers(baseTypeName, baseType, callback):
    import inspect
    result = {
        "test-cases": {},
        "project": baseTypeName,
        "start-timestamp": system.date.now(),
        "end-timestamp": system.date.now()
    }
    for name, obj in inspect.getmembers(baseType):
        if "Package" in str(type(obj)):
            __mergeResults(result, __getMembers(baseTypeName + '.' + name, obj, callback))
        elif "Module" in str(type(obj)):
            __mergeResults(result, callback(baseTypeName + '.' + name, obj), baseTypeName + '.')
    return result

def __mergeResults(mergedResults, results, baseTypeName=''):
    for prop in results:
        if prop not in mergedResults:
            mergedResults[prop] = results[prop]
        elif isinstance(results[prop], (int, long)):
            mergedResults[prop] = mergedResults[prop] + results[prop]
    for prop in results["test-cases"]:
        propDestination = (baseTypeName + prop).replace('.', '-')
        if propDestination in mergedResults["test-cases"]:
            mergedResults["test-cases"][propDestination] = mergedResults["test-cases"][propDestination] + results["test-cases"][prop]
        else:
            mergedResults["test-cases"][propDestination] = results["test-cases"][prop]
    mergedResults["end-timestamp"] = system.date.now()
    return mergedResults

def runAll():
    """Discover and run all tests from the specified package

    https://docs.python.org/2.7/library/unittest.html

    Args:
        None

    Returns:
        Test results formatted using PrettyUnit
    """
    # Let's say I want to discover all tests defined in MyPackage.test
    return __getMembers('MyPackage', MyPackage.test, runModule)

def runModule(moduleName, module):
    import unittest
    from prettyunit import PrettyUnit

    PU = PrettyUnit(moduleName)
    suite = unittest.TestLoader().loadTestsFromModule(module)
    results = unittest.TextTestRunner(verbosity=0).run(suite)
    # Populate the PU dictionary and generate json
    PU.populate_json(moduleName, suite._tests)
    PU.add_results_json(results)
    return PU.data
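To expose a runner like runAll() over the WebDev REST API for Jenkins or Postman, a doGet handler can return the results under the 'json' key so WebDev serializes them as the HTTP response body. A sketch, with runAll stubbed here since the real call (e.g. MyPackage.test.runAll()) only exists on the gateway:

```python
# Stub for the real discovery runner; on the gateway this would be
# something like MyPackage.test.runAll()
def runAll():
    return {"test-cases": {}, "project": "MyPackage"}

def doGet(request, session):
    # WebDev serializes the value under 'json' as the HTTP response body
    return {'json': runAll()}

# Simulated invocation; in the gateway WebDev supplies request/session
response = doGet(request=None, session=None)
print(response['json']['project'])
```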