I am trying to build a service, living on a separate server from my gateway, that can handle processing requests somewhat outside the normal operating range of Ignition. For example, running complicated regression analysis of data in real time, using packages that are unavailable in Jython. Effectively, I want to use it as an augmentative "brain" for the devices we are operating through Ignition.
Overly simplified, I want to be able to bind a JSON-like thing in a Python service to Ignition, masquerading as a more typical device.
I have been trying to figure out how to implement this so that it communicates as robustly as possible with Ignition. So far, my best-sounding idea is to have the service run an OPC-UA server and present itself as a device attached to Ignition, rather than setting up a long-running thread that handles I/O and loads data from JSON packets into memory tags.
The Python-side package I would want to use to run the external server is opcua-asyncio (FreeOpcUa/opcua-asyncio on GitHub, an OPC UA library for Python >= 3.7). It would just be used as an I/O layer on top of an arbitrary processing system (like Airflow, for example). I foresee mapping something like an appropriately formatted JSON object to the OPC-UA nodes.
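To make the JSON-to-nodes idea concrete, here is a rough sketch of what I have in mind, assuming the `asyncua` package (opcua-asyncio) and a made-up namespace URI and endpoint; the flattening helper and node names are my own illustration, not anything prescribed by the library:

```python
import asyncio


def flatten_json(obj, prefix=""):
    """Flatten a nested dict into dotted-path -> scalar value pairs,
    so each leaf can become one OPC-UA variable node."""
    items = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            items.update(flatten_json(value, path))
        else:
            items[path] = value
    return items


async def serve(payload):
    # asyncua import kept local so the mapping helper above can be
    # exercised without the package installed
    from asyncua import Server

    server = Server()
    await server.init()
    # Hypothetical endpoint/namespace; Ignition would point its OPC-UA
    # client connection at this URL
    server.set_endpoint("opc.tcp://0.0.0.0:4840/brain/server/")
    idx = await server.register_namespace("urn:example:brain")
    root = await server.nodes.objects.add_object(idx, "Brain")

    nodes = {}
    for path, value in flatten_json(payload).items():
        node = await root.add_variable(idx, path, value)
        # Writable, so Ignition can push values back on the same channel
        await node.set_writable()
        nodes[path] = node

    async with server:
        while True:
            # The processing system (Airflow, etc.) would update node
            # values here via node.write_value(...)
            await asyncio.sleep(1)
```

A payload like `{"model": {"r2": 0.97}, "status": "ok"}` would flatten to the variables `model.r2` and `status` under the `Brain` object, which Ignition should then be able to browse and subscribe to like any other OPC-UA device.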
I have two questions for anyone who might have opinions/information to share:

1. Does the OPC-UA device approach sound like the right way to go about this?
2. How would I handle the OPC driver? The UDP/TCP driver is evidently read-only for simple devices, and I would like a full device link so that I can handle I/O on a single channel.
I am hoping I can make this all happen without going deep into the JDK + Maven/Gradle tooling needed to work with the Ignition SDK, because that is a whole can of worms I have not yet opened.