Bring Your Own Machine Learning Model in Ignition?

Hi,

I have a pre-trained ML model. I want to bring it into my Perspective project and run predictions on a dataset generated live from the machine from a bunch of tags (either as a dataset tag or a PyDataset). Then I want to invoke my model on certain tag triggers against that dataset and use the inference from my model to write to certain tags. How do I achieve this? I have explored the forum and also the ML Manager from the Ignition Exchange. ML Manager has its own algorithms for training and storing models, but that does not quite fit my use case.

Any help will be much appreciated. Thanks in advance!

How do you run this model? If it isn't Jython-compatible, you will need to run it in its own process: either as a parallel system service with an API, or using Java's ProcessBuilder tooling (from Jython). Neither is trivial.
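To illustrate the first option, here is a minimal sketch of a parallel prediction service, written with only the Python 3 standard library. The `/predict` endpoint, port 8750, the `{"features": [...]}` payload shape, and the stub `predict()` function are all assumptions for the sketch; a real service would load your actual pre-trained model (e.g. via joblib or a framework's own loader) and run next to the gateway.

```python
# Hypothetical external model service: accepts a JSON list of features over
# HTTP POST and returns a JSON prediction. Stdlib-only so the sketch runs
# anywhere; swap predict() for your real model's inference call.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stub standing in for your model's inference; returns the mean here.
    return sum(features) / len(features)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(
            {"prediction": predict(payload["features"])}
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

def run(host="127.0.0.1", port=8750):
    # Blocks forever, serving predictions until the process is stopped.
    HTTPServer((host, port), PredictHandler).serve_forever()
```

Binding to localhost (or a firewalled interface) keeps the service reachable only from the gateway machine, which matters for the security concern discussed later in the thread.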

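The child-process route mentioned above can be sketched as well: spawn the model runtime as a separate process and exchange one JSON line per request over its stdin/stdout. This is shown with Python's `subprocess` so it is runnable outside Ignition; from Ignition's Jython you would do the same thing with `java.lang.ProcessBuilder` and the process's input/output streams. The line-per-request protocol and the stand-in child script are assumptions, not anything Ignition prescribes.

```python
# Sketch: drive a model living in a child process over stdin/stdout.
import json
import subprocess
import sys

# Stand-in "model process": reads one JSON line, writes one JSON prediction.
# In practice this would be your own script that loads the pre-trained model.
CHILD = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    feats = json.loads(line)['features']\n"
    "    print(json.dumps({'prediction': sum(feats) / len(feats)}))\n"
    "    sys.stdout.flush()\n"
)

def start_model_process():
    # Launch the model runtime once and keep it alive between requests,
    # so model load time is not paid on every prediction.
    return subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

def ask(proc, features):
    # Send one request line, read one response line.
    proc.stdin.write(json.dumps({"features": features}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())["prediction"]
```

Keeping the child alive and streaming requests avoids a process start (and model reload) per prediction, which is usually the deciding factor between this and spawning a fresh process each time.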

Any thoughts on secure implementations of something like the parallel system API? Giving external servers the right to autonomously write to tags sounds risky.

The parallel system service with an API, the way I meant it, would be where the model lives. Ignition's own internal scripting would drive the model and retrieve results via the external process's API. The external API would not be calling into Ignition to write to tags; all tag writes would still be controlled by Ignition's scripts.
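A minimal sketch of that Ignition-side helper, assuming the external service accepts a JSON `{"features": [...]}` POST and answers with `{"prediction": ...}` (the URL, port, and payload shape are assumptions matching whatever API your model service exposes, not anything Ignition ships with):

```python
# Jython-compatible helper: post tag values to the external model service
# and return its prediction. Ignition never grants the service write access.
import json
try:
    from urllib2 import Request, urlopen          # Jython 2.7 / Python 2
except ImportError:
    from urllib.request import Request, urlopen   # Python 3, for local testing

def get_prediction(features, url="http://127.0.0.1:8750/predict"):
    req = Request(
        url,
        json.dumps({"features": features}).encode("utf-8"),
        {"Content-Type": "application/json"},
    )
    return json.loads(urlopen(req).read().decode("utf-8"))["prediction"]

# In a gateway tag-change script, the tag writes stay inside Ignition,
# e.g. (hypothetical tag paths):
#   values = [qv.value for qv in system.tag.readBlocking(
#       ["[default]Line1/Temp", "[default]Line1/Flow"])]
#   system.tag.writeBlocking(["[default]Line1/Prediction"],
#                            [get_prediction(values)])
```

Because the external process only answers questions and never initiates anything, locking it down reduces to ordinary network hygiene: bind it to localhost or firewall it to the gateway's address.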