Basically, we want to use a simple web call, passing a "tagpath", to get a JSON document with each tag and its properties (value, quality and timestamp).
We are currently using a script we developed in WebDev, but reading a folder with a bunch of tags can take a while; e.g. if I read 20,000 tags it takes about 45 seconds, returning something like {"name":"Status","path":"tagpath","dataType":"Int4","qualifiedValue":{"value":"12384","quality":"Good","timestamp":"2018/07/04 14:07:31"}} for each tag.
I know the case above may be a little extreme. We currently have subfolders; on average, one of our production lines uses around 2,500 tags that we need to monitor constantly, and the current process takes about 6 seconds per line.
We are exploring methods to read this data from Ignition, recommendations are welcome.
Cache your data structures so you don’t have to browse the tag hierarchy on every call. Use readAll with a list of tagpaths generated from your cache to minimize synchronization overhead.
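A minimal sketch of that caching pattern, with the browse and read functions injected so it can run outside a gateway. The class and parameter names are illustrative; in an Ignition script the injected functions would be `system.tag.browseTags` (collecting the full paths) and `system.tag.readAll`.

```python
import time

class TagPathCache:
    """Cache browsed tag paths so each web call skips the browse step.

    browse_fn and read_fn stand in for Ignition's system.tag.browseTags
    and system.tag.readAll; the names here are assumptions for the sketch.
    """
    def __init__(self, browse_fn, read_fn, ttl_seconds=300):
        self.browse_fn = browse_fn
        self.read_fn = read_fn
        self.ttl = ttl_seconds
        self._paths = None
        self._browsed_at = 0.0

    def paths(self, folder):
        # Re-browse only when the cache is empty or stale; a production
        # version would key the cache per folder.
        now = time.time()
        if self._paths is None or now - self._browsed_at > self.ttl:
            self._paths = self.browse_fn(folder)
            self._browsed_at = now
        return self._paths

    def read_all(self, folder):
        # One bulk read over every cached path, so the expensive
        # hierarchy browse is amortized across many web calls.
        return self.read_fn(self.paths(folder))
```

The key point is that the browse runs once per TTL window, while reads happen on every call.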
As part of my Ignition modules, viz. AR-SCADA, RM-SCADA and mSCADA, I have an Ignition gateway communication module (GCM) written in Java which provides a communication link between the Ignition server and my modules through a Node.js server (called the NJSCADA server) on a TCP/IP socket, to read and write tag values from and to Ignition. The Node.js server acts as a SCADA server for my modules using the Socket.IO and HTTP protocols.
The GCM module (Java) reads the list of tags to be read from Ignition from a TagPaths file once at start-up, and sends their values to NJSCADA as a delimited string cyclically. The metadata about the tags is sent only once; subsequently only delimited values are sent, for efficiency. However, the communication from NJSCADA to Ignition (writing values to Ignition tags) is based on delimited name/value pairs (though not JSON).
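A small sketch of that "metadata once, then values only" scheme. The field layout and the `|` delimiter are assumptions for illustration, not the actual GCM wire format:

```python
# Metadata (name and type per tag) is sent once; afterwards only
# positional values travel, which keeps each cyclic packet small.

def encode_metadata(tags):
    # Sent once at start-up, fixing the tag order for later packets.
    return "|".join("%s,%s" % (t["name"], t["dataType"]) for t in tags)

def encode_values(values):
    # Sent cyclically: values only, in the same fixed order.
    return "|".join(str(v) for v in values)

def decode_values(metadata, payload):
    # The receiver pairs positional values back with the cached metadata.
    names = [field.split(",")[0] for field in metadata.split("|")]
    return dict(zip(names, payload.split("|")))
```

Compared with repeating a full JSON object per tag per cycle, this trades self-describing packets for much less bandwidth.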
I will be releasing the evaluation version of the modules pretty soon. This is just to serve me as a reminder for this post.
If you want to build a JSON API server for the latest values of your tag database, create a transaction group for the tag folders and configure it to update (not insert) the database. Data logging is very fast and happens almost instantaneously. You can check this post here, where I am serving the Ignition historical table as a JSON API and consuming it in ReactJS and Python.
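The idea can be sketched end to end with SQLite standing in for the MySQL table the transaction group keeps updated. The table and column names (`latest_values`, `tag_name`, `tag_value`, `t_stamp`) are assumptions for the example:

```python
import json
import sqlite3

# Stand-in for the table a transaction group keeps in UPDATE mode:
# one always-current row per tag.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE latest_values (tag_name TEXT PRIMARY KEY, tag_value REAL, t_stamp TEXT)"
)
conn.execute(
    "INSERT INTO latest_values VALUES ('Line1/Status', 12384, '2018-07-04 14:07:31')"
)

def latest_as_json():
    # One SELECT over the always-current table replaces a live tag
    # browse; this is what the JSON API endpoint would serve.
    rows = conn.execute(
        "SELECT tag_name, tag_value, t_stamp FROM latest_values"
    ).fetchall()
    return json.dumps([{"path": n, "value": v, "timestamp": t} for n, v, t in rows])
```

Because the transaction group does the writing, the API side is a plain read-only query and scales independently of the tag count being polled.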
Just wondering: when I can log the latest values of the entire tag database with transaction groups, serve them as a JSON API and consume them on any platform seamlessly, why should I buy your non-standard proprietary GCM module and get stuck with it forever?
My strategy is to use the JSON API server to build DV&R web servers for hundreds of consumers, and to use the Ignition native client for normal command-and-control operations.
My GCM is not a standalone product; it's primarily built as an Ignition module to communicate with NJSCADA, a Node.js server that caters to my other modules. You can use whatever method or technology you like to communicate with Ignition; no one is forcing anything, and there are various methods of solving a problem. (Even I would like to try other methods instead of GCM later on.) If someone finds Node.js technology easier to work with, they can try NJSCADA; GCM is bundled with it. If not, they can use whatever they want. NJSCADA acts as a mini SCADA server for my other modules. A generic client module will also be supported if users want to build their specific application using Node.js and JavaScript technologies. A trial version will be made available soon for people to evaluate it and make a decision.
As a tag server, how is it superior to the zero-cost open-standards solution I have demonstrated, which any Ignition SI can implement within 20 minutes and consume anywhere?
If you want to sell your product, you must convince people with solid performance benchmarks to decide in your favour. Ironically, I was the first person on this forum who came to you as a potential client. After digging a bit deeper, I found that my humble solution is far better and outperforms any other solution for web consumers.
I have created transaction groups for 3,000 high-speed simulated double-precision floating-point tags and could successfully update all the latest values in less than 1 second and consume them as a JSON API seamlessly. The only limitation was the 6 GB of memory on my laptop. If I upgrade it to 32/64 GB, I can easily do it for 100,000 tags, which is a very good ROI benchmark per Ignition server. I am sure we are not going to have 100,000 high-speed tags in any SCADA project. If required, I can even have a dedicated Ignition server for transaction groups.
I could easily distribute the JSON payload across many MongoDB servers for further consumption and provide a high availability solution.
My client is very much impressed with my DV&R analytics solution. His engineer has built an awesome analytics server with this boilerplate code. He gave me the paycheck and asked me to go ahead with an Apache Kafka pilot project.
Open standards are a monster; it's not possible for proprietary solutions to stand up against them. When I can scale Ignition horizontally and consume the data on the web as a JSON REST API with minimal load, why should I piggyback a closed solution vertically on top of it and open the door to instability and system crashes?
I am sharing my experience to ignite the imagination and passion of other users to build better Ignition projects.
Can you post a bit more information about your solution thus far? Understanding what you are doing currently can help us figure out what to try next. Things such as using system.tag.readAll, avoiding multiple iterations, implementing a tag listener, etc. are all good ideas.
Thank you, I really like this idea. I will do some tests, try to measure speeds and compare performance against our current setup. I will create a DB table where all the tags are kept updated, and a "data" server that reads the values and answers the web calls.
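The write side of that table can be sketched with SQLite standing in for the real database. The table name, columns, and `INSERT OR REPLACE` upsert style are assumptions for illustration; the actual schema would come from the transaction group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tag_values (path TEXT PRIMARY KEY, value TEXT, quality TEXT, t_stamp TEXT)"
)

def upsert(path, value, quality, t_stamp):
    # UPDATE-style logging keeps exactly one row per tag, so the
    # web-facing "data" server only ever reads current values.
    conn.execute(
        "INSERT OR REPLACE INTO tag_values VALUES (?, ?, ?, ?)",
        (path, value, quality, t_stamp),
    )
```

Keying the table on the tag path is what makes the update (rather than insert) behavior work: repeated writes for the same tag overwrite one row instead of growing the table.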
Long story short, we have an internally developed Shop Floor Management system to control our entire factory. Basically, our system manages the whole process from materials all the way to shipping, and the idea is to interact with all the different stations in between. We have different ways of "talking" to all the machines on the floor; for some of them we use Ignition as middleware, and for some tasks we have rule engines monitoring tags and triggering actions. For example, in some of our most advanced lines we have RFID pallets, with rule engines monitoring tag values so that when we detect a unit arriving at a specific station, we trigger a web call to perform some actions: printing labels, showing specific instructions to operators, etc.
Currently our rule engines call a script developed in WebDev. We have a few flavors: one to read multiple tags at the same time using "system.tag.browseTags", and another to read a single tag using "system.tag.read". In both cases the WebDev script converts the tag data to JSON.
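The per-tag serialization step can be sketched like this. The function name and parameters are illustrative (the payload shape matches the example earlier in the thread; in the real script the fields would come from the qualified value returned by `system.tag.read`):

```python
import json

def qualified_value_to_json(name, path, data_type, value, quality, timestamp):
    # Builds the same per-tag payload shape the WebDev script returns.
    return json.dumps({
        "name": name,
        "path": path,
        "dataType": data_type,
        "qualifiedValue": {
            "value": str(value),
            "quality": quality,
            "timestamp": timestamp,
        },
    })
```

For the multi-tag flavor, calling this once per result of a single bulk read is much cheaper than issuing one `system.tag.read` per tag.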
We have different Ignition instances to isolate some main areas, and multiple rule engines; as we continue growing, performance is being impacted.
So which script is giving you the issue, the browseTags or the read implementation? Any browse will take some time; browseConfig might be faster.
We are not having an issue per se; our current architecture works. We have some performance concerns as we continue to add more equipment. There are some internal requirements that will lead us to read all the tags in a "Main folder", and as stated above, that currently takes about 45 seconds.
We are trying to explore options to read the data from Ignition.
Some tips that might help: create a test DB and configure it in a transaction group to UPDATE the values. All tables will be created automatically.
To quickly test your JSON API server, download the ignition.php file from the folder on Git and change the MySQL query according to your requirements.
Ignition-React-Django-JSON-Realtime/src/backend/apache_php/ignition.php
To configure rights for your Apache server, follow this link. Feel free to ask for any clarification if required. I am also excited to get your feedback.
Travis Cox, in one of his webinars on Ignition's SQL database module, did mention using the SQL database for inter-process communication with external applications as well. Another option is to use Jython server-side scripts to communicate over TCP sockets. However, I took the Java module approach to communicate over TCP sockets, which is a very fast inter-process communication approach for my application, being memory-to-memory (a few milliseconds to read a few hundred tags, once the tag paths are read and the data structures are initialized). I am going to try all the other possible approaches for my applications eventually; each approach has its pluses and minuses depending upon your requirements and applications. As I said, GCM is not my main module; it's glue between Ignition and my other modules, viz. AR-SCADA, mSCADA, UNISEMS, etc.
Is it taking 45s to read all the tags, or 45s to browse the tag tree to get all the tag paths?
Browsing the tag tree and caching the results would probably be the easiest implementation to save some time. Without seeing your code, however, we can’t analyze what you are doing.
You can't make confusing statements about your own product. Will the other modules work without GCM? Back up your claims with proper performance benchmarks and an online demo. I can confidently say that it's impossible to beat my humble solution with your GCM.
There is nothing confusing. I have always said GCM is not a standalone module; it's bundled with NJSCADA, the Node.js server which serves my other modules like AR-SCADA etc., and it meets their specific requirements. I will be making the whole NJSCADA bundle, with GCM and AR-SCADA, available for download and trial. Nobody is challenging your module.
But of course GCM can also emit JSON to third-party applications, as it does when writing to PubNub dashboards. The packets that need to be sent out can be configured. The demos and documentation will explain it completely. There are some testing issues that I am resolving right now. Have patience!