How to decode an OPC tag to have useable data


First time posting on here, and I've been doing Inductive University to learn more about Ignition. I'm using a Pi camera to detect AprilTags, and the hope is to stream the data to Ignition so someone can search for objects marked with AprilTags via a browser app.

I've hit a roadblock. I have an OPC device connected and I am receiving data, but I don't know what type of data it is. It comes from a Python script that uses `socket` to push a data packet. The packet has 5 variables in it that I need to extract and use. The tag's value is just a bunch of number pairs, and I'm really just stabbing in the dark trying to figure out what I need to do to make it present usable data.

The python code to send the packet is this:

while True:
    if not self.data_queue.empty():
        dataFrame = self.data_queue.get()

Do I need to put "receive.decode()" somewhere in the tag to decipher the data?

It is sending this data:

[{'ID': 0, 'Confidence': 14.714971542358398, 'CenterX': 571.9390390981096, 'CenterY': 299.1089590611948, 'Time': datetime.datetime(2024, 2, 12, 13, 17, 58, 928563)}]

I'm trying to set it up so the streaming data is added to a SQLite DB, and then the program will query that and pull the information.

I'm pretty lost and I need to move forward with this project. I feel like once I can get the data into Ignition I can stumble through the rest fairly quickly. Thanks in advance.

It seems likely that you're sending encoded binary data that you'll have to decode back into a usable string and then JSON-decode.

Can you share the script you're currently using to read and what the received value actually looks like?
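As a sketch of that decode-then-parse step (the byte string below is a made-up stand-in for what the socket might deliver):

```python
import json

# Hypothetical raw bytes as they might arrive over the socket.
raw = b'[{"ID": 0, "Confidence": 14.71, "CenterX": 571.9, "CenterY": 299.1}]'

# First decode the bytes into a string, then JSON-decode into Python objects.
text = raw.decode("utf-8")
tags = json.loads(text)
print(tags[0]["ID"], tags[0]["CenterX"])  # prints: 0 571.9
```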


Welcome to the forum.

Tip: See Wiki - how to post code on this forum. The formatting / indentation is messed up on your code snippet.

import threading
import kritter
import vizy
import cv2
from pupil_apriltags import Detector
import numpy as np
import socket
import queue
from datetime import datetime
class AprilDetector:
    def __init__(self):
        # Set up Vizy class, Camera, etc.
        self.kapp = vizy.Vizy() = kritter.Camera(hflip=True, vflip=True)
        self.stream =
        # Put video window in the layout
        self.video = kritter.Kvideo(width=1000, overlay=True)
        brightness = kritter.Kslider(name="Brightness",,
                                     mxs=(0, 100, 1),
                                     format=lambda val: '{}%'.format(val), grid=False)
        self.kapp.layout = [self.video, brightness]

        @brightness.callback()
        def func(value):
   = value

        # Define detector (family argument assumed; the original call was truncated)
        self.at_detector = Detector(families="tag36h11")

        # Define queue for socket streaming
        self.data_queue = queue.Queue(1)
        # Run camera grab thread.
        self.run_grab = True
        videoThread = threading.Thread(target=self.grab).start()
        socketThread = threading.Thread(target=self.listen_for_connections).start()
        # Run Vizy webserver, which blocks.
        self.run_grab = False

    def handle_connection(self, client_socket, client_address):
        try:
            while True:
                if not self.data_queue.empty():
                    dataFrame = self.data_queue.get()
                    client_socket.sendall(dataFrame.encode())
        except (EOFError, BrokenPipeError, ConnectionResetError):
            # The client disconnected, close the socket and exit the thread
            print(f"Client {client_address} disconnected")
            client_socket.close()

    def listen_for_connections(self):
        # Create a server socket
        server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server_address = ('XXX.XXX.XXX.XXX', 8000)
        server_socket.bind(server_address)
        server_socket.listen()

        # Listen for incoming connections
        print(f"Listening for incoming connections on {server_address}")

        # Handle incoming connections in a loop
        while True:
            # Wait for a client connection
            client_socket, client_address = server_socket.accept()
            print(f"New connection from {client_address}")

            # Start a new thread to handle the connection
            client_thread = threading.Thread(target=self.handle_connection,
                                             args=(client_socket, client_address))
            client_thread.start()

    def processImage(self, results, image):
        tagInstances = []
        for r in results:
            # extract the bounding box (x, y)-coordinates for the AprilTag
            # and convert each of the (x, y)-coordinate pairs to integers
            (ptA, ptB, ptC, ptD) = r.corners
            ptA = (int(ptA[0]), int(ptA[1]))
            ptB = (int(ptB[0]), int(ptB[1]))
            ptC = (int(ptC[0]), int(ptC[1]))
            ptD = (int(ptD[0]), int(ptD[1]))
            # draw the bounding box of the AprilTag detection
            cv2.line(image, ptA, ptB, (255, 0, 0), 2)
            cv2.line(image, ptB, ptC, (255, 0, 0), 2)
            cv2.line(image, ptC, ptD, (0, 255, 0), 2)
            cv2.line(image, ptD, ptA, (0, 0, 255), 2)
            # draw the center (x, y)-coordinates of the AprilTag
            (cX, cY) = (int([0]), int([1]))
  , (cX, cY), 5, (0, 255, 255), -1)
            # draw the tag family on the image
            tagFamily = r.tag_family.decode("utf-8")
            tagId = r.tag_id
            cv2.putText(image, str(tagId), (ptA[0], ptA[1] - 15),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 0), 2)
            current_time =
            # Prepares data packet (keys match the packet shown earlier)
            tagInstance = {"ID": r.tag_id,
                           "Confidence": r.decision_margin,
                           "CenterX":[0],
                           "CenterY":[1],
                           "Time": current_time}
            tagInstances.append(tagInstance)

        if len(tagInstances) > 0:
            packet = str(tagInstances) + "\n"
            if not self.data_queue.full():
                self.data_queue.put(packet)

    def grab(self):
        while self.run_grab:
            # Get frame
            frame = self.stream.frame()[0]

            # Process frame
            grayImage = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            detections = self.at_detector.detect(grayImage)
            self.processImage(detections, frame)

            # Send frame
            self.video.push_frame(frame)

if __name__ == "__main__":
    AprilDetector()

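One caveat with the script above: `str(tagInstances)` produces a Python repr (single quotes, `datetime.datetime(...)`), which is not strictly valid JSON. A hedged sketch of producing real JSON instead, converting the timestamp to an ISO string:

```python
import json
from datetime import datetime

tagInstance = {
    "ID": 0,
    "Confidence": 14.714971542358398,
    "CenterX": 571.9390390981096,
    "CenterY": 299.1089590611948,
    # datetime objects are not JSON-serializable, so send an ISO string.
    "Time": datetime(2024, 2, 12, 13, 17, 58).isoformat(),
}
packet = json.dumps([tagInstance]) + "\n"
print(packet)
```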
This looks like Python code you're running outside of Ignition.

What exactly are you hoping to accomplish here?

There's a camera that is going to be sending a data packet to Ignition. That will give me the tag data and location. This will be used for inventory tracking. We place an April Tag on an item, and the camera will detect it, and I will build an Ignition app that lets people enter a tag number and the app will return a location.

Edit: I forgot to say this code is running on the camera.

Ok, and... how are you hoping to achieve this? How are you planning to receive the data in Ignition?

Yes, I have the camera connected as an OPC device. It has a good connection and I am receiving data. It's just that the data is number pairs, not the info I need. I feel like since I am encoding it, I need to decode it; I just don't know where.

What does this mean? What driver are you using? Does the camera have an OPC server?

You haven't really explained the connection between your Python script and the data somehow being in Ignition.

I'm pretty new to this, so I don't know how else to explain it. The Python code on the camera is streaming to Ignition. I just went into devices and added the camera by its IP address; it said the connection was good and I started receiving data.

And it's getting data as a tag in Perspective

Ok, so you're using the TCP driver.

In your camera code it looks like you are just sending newline-delimited JSON strings. Make sure the driver is configured to use newline as the packet delimiter, and then try just subscribing to the "Message" tag instead.

Add a tag change event script to the message tag if you want to do some processing on the JSON string you receive.
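As a rough sketch of what the delimiter setting does (the byte stream below is made up): the driver buffers incoming bytes and splits a complete message off at each `\n`, so each tag update is one whole JSON string.

```python
# Sketch of how a newline-delimited byte stream splits into messages,
# roughly what the TCP driver does when "\n" is the message delimiter.
buffer = b""
chunk = b'[{"ID": 0}]\n[{"ID": 1}]\n[{"ID'   # made-up partial stream
buffer += chunk

*messages, buffer = buffer.split(b"\n")      # last piece may be incomplete
complete = [m.decode("utf-8") for m in messages]
print(complete)   # two complete messages; the partial one stays buffered
```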


Ok, so in the Device settings in the Message Delimiter field put "\n" correct?

Also, I don't know what "Data Type" to use for the camera data.

And in the tag I tried this for the Value changed but I'm pretty sure I'm wrong.

Again, bring in the regular "Message" tag instead, not "MessageBytes", and leave it as a String.

I don't think you need to "decode" anything because it looks like you are sending Strings containing JSON data.


I don't see anything when viewing the "Message" value in OPC Quick Client, should I be seeing something?

I am looking for data that looks like,

" [{Tag ID:6, Confidence: 65.7764435, XCoord: 223.4443556.......etc}]".

And I need to be able to pick apart the different categories like ID, Confidence Interval, x coordinate, y coordinate, etc. I tried to set up different fields in the Device settings and use a comma as a field delimiter, but it didn't work.

Thanks for your help, btw. I'm mainly a device-side guy who started programming PLCs a few years ago. The most interaction I've had with Ignition is adding some devices and moving buttons around. This is all new to me, especially when it's outside of Allen-Bradley and simple IO devices.

Can you show the driver delimiter settings?

Once you get the delimiter working right you should end up with String values containing JSON coming into your event script.

From there, you can parse the JSON using system.util.jsonDecode and start picking out the values you need.
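A minimal sketch of that event-script body. Plain `json.loads` stands in here for `system.util.jsonDecode`, which only exists inside Ignition, and the field names follow the packet shown earlier:

```python
import json

def handle_message(message):
    # In an Ignition tag change script this line would be:
    #   tags = system.util.jsonDecode(currentValue.value)
    tags = json.loads(message)
    # Pick out the values you need from each detection.
    return [(t["ID"], t["CenterX"], t["CenterY"]) for t in tags]

sample = '[{"ID": 0, "Confidence": 14.7, "CenterX": 571.9, "CenterY": 299.1}]'
print(handle_message(sample))  # [(0, 571.9, 299.1)]
```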


And I will look into the link you sent.

Don't bother with the fields, you're not going to be able to split any out.

You may need to use \r or \r\n as your message delimiter.

Ok, I have set the fields to 0, and I tried \r and \r\n in the message delimiter and still see nothing. I should be seeing the data in the Quick Client when I read "Message", though, right? So far, nothing. I read the link you sent me, but I don't know how to use that info. Do I put that in the tag scripting under Value Events -> Value Changed?

It's working! I'm not quite sure what I did exactly, but I'm going to show my settings in case this may help someone else later.

I'm getting the data I need!

So here are my driver settings.

I put that JSON decode script in the tag scripting under Value Events --> Value Changed.

And the Tag Value in the Tag Editor is as follows:

Value Source: OPC
Data Type: String
OPC Server: Ignition OPC UA Server
OPC Item Path: ns=1;s=[Cam_1]8000/Message

Ok, thanks a lot! Now I'm going to try and figure out how to parse the data to get the individual data points out I need.
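For the SQLite side mentioned earlier, the insert would look something like this. Plain `sqlite3` stands in for an Ignition database connection and `system.db.runPrepUpdate`, and the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real SQLite DB
conn.execute("""CREATE TABLE IF NOT EXISTS tag_sightings (
    tag_id INTEGER, confidence REAL, center_x REAL, center_y REAL, seen_at TEXT)""")

tag = {"ID": 0, "Confidence": 14.71, "CenterX": 571.9,
       "CenterY": 299.1, "Time": "2024-02-12T13:17:58"}
conn.execute("INSERT INTO tag_sightings VALUES (?, ?, ?, ?, ?)",
             (tag["ID"], tag["Confidence"], tag["CenterX"], tag["CenterY"], tag["Time"]))

# Lookup by tag number, like the planned Perspective search screen.
row = conn.execute("SELECT center_x, center_y FROM tag_sightings WHERE tag_id = ?",
                   (0,)).fetchone()
print(row)  # (571.9, 299.1)
```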