API Call on alarm trigger

Hey all,
Just looking for potential ideas and best solution.

Currently we have several tags that, when their alarms are triggered, should send an alarm message to an API.

These API calls are written using Ignition's system.net.httpClient, with the functions kept in our project library.
Our first idea was to put these calls in the tag alarm event scripts for active, acknowledged, and cleared.

This worked okay, as the comms completed quickly, so there were few thread pooling issues. Now we have a requirement for it to keep retrying if it fails. My concern is that if comms fail while several alarms go off, and we just use a while loop to keep retrying, we will have some issues.

What would be the best way of doing this, other than buying the Alarm Notification module? Can we detect alarm acknowledgement in the tag change event script, or can we call our function in the tag change event but have it execute on a gateway thread?

Any ideas would be appreciated.

Sounds like you need a persistent queue and dictionary to hold information about API calls that are needed, and a gateway timer event to perform the calls, whether original or retries. The tag events themselves would just push the needed information onto the persistent queue.

I tend to use a Java concurrent queue and Python dictionaries for such things, set up globally via .setdefault() on the dictionary from system.util.getGlobals(). There are many discussions here about such things.

I've had a go at getting it working based on the thread:

I had some issues where my .offer didn't seem to actually append anything to my queue.

I made a project library script called AlarmQueue:

from java.util.concurrent import LinkedBlockingDeque, TimeUnit

logger = system.util.getLogger("Alarm Event Queue")

# Capacity = 100.  setdefault() returns the instance already in globals,
# if any, so the new instance is thrown away after the first load.
deque = system.util.getGlobals().setdefault('MyAlarmQueue', LinkedBlockingDeque(100))

def someTimerEvent():
	# poll() *removes* the head of the queue, waiting up to 15 seconds
	# for an item to arrive before returning None.
	nextItem = deque.poll(15000, TimeUnit.MILLISECONDS)
	if nextItem:
		pass  # process the item here

# Run the above at a fixed **rate** matching the poll timeout.

def addDeque(message):
	# offer() returns True if the item was accepted, False if the queue is full.
	response = deque.offer(str(message))
	logger.warn(str(response))

I call someTimerEvent() every 15000 ms in a timer script (using a dedicated thread, fixed rate), and I added addDeque() to the alarm active script on a testing tag, with the below script:

	path = str(alarmEvent.getDisplayPath())
	
	AlarmQueue.addDeque(path)

The response value always returns True, but whenever I try querying the queue using:
print AlarmQueue.deque
it just returns a single value, even though I trigger my alarm several times. The entries also disappear after some time; does the timer event erase the queue?

Where am I going wrong? Or is it okay and I just don't know how to get the data back out of it?

That example's timing and handling conditions are probably not what you want, but it's a good model for how to set up such a queue.

For your application, I'd probably use two separate Deques. One for new messages, and one for retries needed. I'd probably also define the deques to take a tuple of (timestamp, message).

Your timer event should use the .drainTo method of the deque to empty these into local variables (I like LinkedList for such), and assemble a batch API call with all of them. If it fails, push all of the tuples back into the retry deque.
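A rough sketch of that pattern, assuming a Jython project library script called from a gateway timer event; the queue names, batch endpoint URL, and payload shape are illustrative assumptions, not a known API:

from java.util.concurrent import LinkedBlockingDeque
from java.util import LinkedList

g = system.util.getGlobals()
# Two deques: fresh messages and messages awaiting a retry.
newDeque = g.setdefault('NewAlarmMessages', LinkedBlockingDeque(500))
retryDeque = g.setdefault('RetryAlarmMessages', LinkedBlockingDeque(500))

# httpClient instances are designed to be created once and reused.
client = system.net.httpClient()

def sendBatch():
	batch = LinkedList()
	# drainTo() empties each deque into the local list without blocking;
	# unlike poll(), it takes no timeout parameters.
	retryDeque.drainTo(batch)
	newDeque.drainTo(batch)
	if batch.isEmpty():
		return
	try:
		# Hypothetical batch endpoint accepting a JSON array of objects.
		response = client.post(
			"https://example.com/api/alarms/batch",
			data=[{"timestamp": str(ts), "message": msg} for ts, msg in batch],
		)
		if not response.good:
			raise Exception("HTTP %d" % response.statusCode)
	except Exception:
		# On any failure, push every tuple back into the retry deque.
		for item in batch:
			retryDeque.offer(item)

The alarm event scripts would just call newDeque.offer((system.date.now(), message)) and return immediately, keeping all network work on the timer thread.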

That sounds like a good idea; all of the APIs for our alarming have a separate field for the timestamp. I guess the drainTo method would put everything into a list that I assign to a variable. Does it still need the parameters ...(15000, TimeUnit.MILLISECONDS)?

But just wondering about the batch calling: currently we use system.net.httpClient and then client.post() for the API call. These APIs are made by another team and are designed to accept a single JSON object sent to the URL via POST.

How would we send these as a single batch using the Ignition http client?

You would ask the other team to make a batch submission API endpoint. Not having one seems like an invitation to inefficiency. It didn't occur to me that it wouldn't exist.

Ah, that might be an issue then. I'll have a chat with them and see what we can figure out.

Thanks for the help!

If you must, you can always loop through the entries, submitting one-by-one, and requeuing the failures.
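A sketch of that one-by-one fallback, reusing the same illustrative deques and client as the batch sketch above (the per-entry endpoint is equally hypothetical):

from java.util.concurrent import LinkedBlockingDeque
from java.util import LinkedList

g = system.util.getGlobals()
newDeque = g.setdefault('NewAlarmMessages', LinkedBlockingDeque(500))
retryDeque = g.setdefault('RetryAlarmMessages', LinkedBlockingDeque(500))
client = system.net.httpClient()

def sendOneByOne():
	batch = LinkedList()
	retryDeque.drainTo(batch)
	newDeque.drainTo(batch)
	for ts, msg in batch:
		try:
			response = client.post(
				"https://example.com/api/alarms",
				data={"timestamp": str(ts), "message": msg},
			)
			if not response.good:
				raise Exception("HTTP %d" % response.statusCode)
		except Exception:
			# Requeue only the entries that failed.
			retryDeque.offer((ts, msg))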

What we might consider doing is making a separate set of queues and loops for the more common API calls, and then looping through them one-by-one each second.

That should work well enough for now.