AlarmManager / AlarmListener – Good Practices

Background

I’ve been working with the standard Alarm Status Table component in Perspective and identified several performance bottlenecks that become critical in bandwidth-constrained environments:

  • Full table retransmission on every update
    Every alarm event triggers a complete table refresh.
    In our scenario:
    • 5 remote screens × ~500 KB table payload × ~15 updates/minute → ~625 KB/s (≈0.6 MB/s, or about 5 Mbps) of sustained bandwidth. This is extremely inefficient for remote sites with limited connectivity.
  • Polling-based architecture
    The component polls the AlarmManager at fixed intervals (we use 1 second).
    Even with the shared polling engine cache, this still results in unnecessary queries when no alarms change.
  • Missing interaction features
    • No onRowDoubleClick / onRowRightClick

Proposed solution: Delta-based custom Alarm Table component

I’m developing an alternative component that relies on delta updates instead of full snapshots.

Architecture overview:

  1. Initial load
    The client requests an initial snapshot and receives it via a fetchable URL.
  2. Gateway subscription
    On the Gateway side, a single AlarmListener is registered with the AlarmManager and shared across all component instances.
  3. Delta updates only
    On alarm changes (ALARM_ADDED, ALARM_REMOVED, etc.), incremental messages are sent to the component instances, each carrying a sequence counter for synchronization (a rough sketch of the message shape follows this list).
  4. Heartbeat / sync check
    The Gateway periodically sends its current sequence number so the client can detect missed messages and request a resync.
  5. Frontend sorting
    Sorting is performed client-side on the rows already delivered.
  6. Backend filtering
    Filtering is applied both to the initial snapshot query and to new events received via AlarmListener.
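
To make steps 3 and 4 concrete, here is a rough sketch of the delta payload I have in mind. The class, field names, and enum values are illustrative placeholders only; nothing here comes from the Ignition SDK:

import java.util.List;

// Illustrative payload shape - all names are placeholders, not SDK types.
public class AlarmDeltaMessage {
    public enum MessageType { SNAPSHOT, ALARM_ADDED, ALARM_REMOVED, HEARTBEAT }

    private final MessageType type;
    private final long sequence;          // monotonically increasing per Gateway subscription
    private final List<String> alarmIds;  // affected alarm event ids (empty for HEARTBEAT)
    private final String rowJson;         // row data for added/updated alarms, null otherwise

    public AlarmDeltaMessage(MessageType type, long sequence,
                             List<String> alarmIds, String rowJson) {
        this.type = type;
        this.sequence = sequence;
        this.alarmIds = alarmIds;
        this.rowJson = rowJson;
    }

    // Client-side sync rule on receipt:
    //   sequence == local + 1 -> apply the delta
    //   sequence >  local + 1 -> a message was lost, request a fresh snapshot
    //   sequence <= local     -> stale or duplicate, ignore
}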

Technical question: Do I need a dedicated thread pool?

To avoid blocking the AlarmListener callback, I implemented an asynchronous architecture:

private static AlarmListener createSharedListener() {
    return new AlarmListener() {
        @Override
        public void onActive(AlarmEvent alarmEvent) {
            enqueueToAllInstances(alarmEvent);
        }
        // (... other AlarmListener callbacks handled the same way ...)
    };
}

private static ExecutorService sharedProcessorPool = 
    Executors.newFixedThreadPool(4);
private final BlockingQueue<AlarmEvent> eventQueue = 
    new LinkedBlockingQueue<>(1000);

// AlarmListener callback - enqueue only
private static void enqueueToAllInstances(AlarmEvent event) {
    List<AlarmTableModelDelegate> delegates;
    synchronized (LOCK) {
        delegates = new ArrayList<>(activeInstances);
    }
    
    for (AlarmTableModelDelegate delegate : delegates) {
        delegate.eventQueue.offer(event); // non-blocking; silently drops the event if the 1000-entry queue is full
    }
}

// Async processing in shared thread pool
private void processEventQueue() {
    while (running) {
        try {
            AlarmEvent event = eventQueue.poll(1, TimeUnit.SECONDS);
            if (event != null) {
                processAlarmEvent(event); // Filter, convert to JSON, fireEvent
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status and exit
            break;
        }
    }
}

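One piece not shown above is how the per-instance queues actually meet the 4-thread pool: if every delegate parks a long-running processEventQueue() loop on the pool, only four delegates ever get served. A minimal sketch of one way around that (drain-on-demand scheduling; the drainScheduled flag and method names are just illustrative) could be:

// Illustrative wiring (needs java.util.concurrent.atomic.AtomicBoolean):
// instead of one permanent loop per delegate, each enqueue schedules a short
// drain task, so any number of delegates can share the four pool threads.
private final AtomicBoolean drainScheduled = new AtomicBoolean(false);

private void enqueue(AlarmEvent event) {
    if (!eventQueue.offer(event)) {
        // Queue full (1000 entries): drop, log, or trigger a snapshot resync.
    }
    if (drainScheduled.compareAndSet(false, true)) {
        sharedProcessorPool.submit(this::drainQueue);
    }
}

private void drainQueue() {
    try {
        AlarmEvent event;
        while ((event = eventQueue.poll()) != null) {
            processAlarmEvent(event); // filter, convert to JSON, fireEvent
        }
    } finally {
        drainScheduled.set(false);
        // An event may have been enqueued after poll() returned null but before
        // the flag was cleared; re-schedule a drain if so.
        if (!eventQueue.isEmpty() && drainScheduled.compareAndSet(false, true)) {
            sharedProcessorPool.submit(this::drainQueue);
        }
    }
}

With this wiring the listener callback still only does an offer() plus a cheap compare-and-set, so any per-instance processing cost stays off the AlarmManager's notification thread.
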
Alternative (simpler) architecture – process directly in the listener:

private static void processAllInstances(AlarmEvent event) {
    synchronized (LOCK) {
        for (AlarmTableModelDelegate delegate : activeInstances) {
            delegate.processAlarmEvent(event);
        }
    }
}

Questions:

  1. Is the thread pool architecture necessary, or am I overengineering this?
    I estimate per-instance processing time at ~0.5 ms (filtering + JSON conversion + event dispatch).
    With 100 instances (a 5× safety margin), that’s ~50 ms total per alarm event on the callback thread.
    Is this acceptable, or could it impact the AlarmManager’s ability to notify other listeners?
  2. What is the AlarmManager threading model?
    • Does it invoke listeners using a thread pool?
    • Are listener callbacks executed in parallel?
    • If my callback blocks for ~50 ms, could this delay notifications to other listeners?
    • Or are listener callbacks serialized anyway?
  3. Is there a specific reason the standard Alarm Status Table uses a query-based architecture that I might be overlooking?

Any insights or best practices around AlarmListener usage and threading would be greatly appreciated.

I can answer some of your questions.

There is no thread pool used. Listeners are invoked serially, not in parallel, so yes, if your callback blocks for 50 ms you are delaying notifications to the other listeners.


I can probably at least guess about this.

We build a lot of Ignition under the guiding principle of "Simple, correct, fast; in that order".

Often by the time you've finished building something simple and correct you've already landed at "fast enough", so unless there are certain performance goals that are part of the design criteria from the start, that's where it stops.

Your use case, where the table is being used over a remote, bandwidth-sensitive connection, probably was simply not considered. I bet this component was built in something of a rush while a remarkably small team was building a hundred other Perspective components and systems, so it was done enough and fast enough.
