Background
I’ve been working with the standard Alarm Status Table component in Perspective and identified several performance bottlenecks that become critical in bandwidth-constrained environments:
- Full table retransmission on every update
Every alarm event triggers a complete table refresh.
In our scenario: 5 remote screens × ~500 KB table payload × ~15 updates/minute ≈ 37.5 MB/minute, i.e. roughly 0.6 MB/s (≈5 Mbps). This is extremely inefficient for remote sites with limited connectivity.
- Polling-based architecture
The component polls the AlarmManager at fixed intervals (we use 1 second).
Even with the shared polling engine cache, this still results in unnecessary queries when no alarms change.
- Missing interaction features
- No onRowDoubleClick / onRowRightClick events
Proposed solution: Delta-based custom Alarm Table component
I’m developing an alternative component that relies on delta updates instead of full snapshots.
Architecture overview:
- Initial load
The client requests an initial snapshot and receives it via a fetchable URL.
- Gateway subscription
The Gateway subscribes to an AlarmListener (shared across all component instances).
- Delta updates only
On alarm changes (ALARM_ADDED, ALARM_REMOVED, etc.), incremental messages are sent to component instances, including a sequence counter for synchronization (see the sketch after this list).
- Heartbeat / sync check
Periodically sends the current sequence to ensure no messages were lost.
- Frontend sorting
- Backend filtering
Filtering is applied both to the initial snapshot query and to new events received via AlarmListener.
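For reference, here is a minimal sketch of the delta protocol I have in mind. All names here (DeltaMessage, DeltaType, ClientSequenceTracker) are my own working names, not existing Ignition APIs:

import java.util.concurrent.atomic.AtomicLong;

// One incremental update pushed to a component instance.
public final class DeltaMessage {
    public enum DeltaType { ALARM_ADDED, ALARM_REMOVED, ALARM_UPDATED }

    public final long seq;          // monotonically increasing per session
    public final DeltaType type;
    public final String alarmJson;  // pre-filtered, pre-serialized alarm row

    public DeltaMessage(long seq, DeltaType type, String alarmJson) {
        this.seq = seq;
        this.type = type;
        this.alarmJson = alarmJson;
    }
}

// Client side: detect lost messages via sequence continuity.
// Assumes a single consumer thread per component instance.
final class ClientSequenceTracker {
    private final AtomicLong lastSeen = new AtomicLong(-1);

    /** Returns true if the delta can be applied; false means resync needed. */
    boolean accept(long seq) {
        long prev = lastSeen.get();
        if (prev >= 0 && seq != prev + 1) {
            return false; // gap detected -> re-request a full snapshot
        }
        lastSeen.set(seq);
        return true;
    }

    /** Heartbeat path: compare the Gateway's current seq with ours. */
    boolean isInSync(long gatewaySeq) {
        return gatewaySeq == lastSeen.get();
    }
}

The idea is that the same backend filter used for the initial snapshot query is applied before a DeltaMessage is ever built, so clients never receive rows they would have filtered out anyway.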
Technical question: Do I need a dedicated thread pool?
To avoid blocking the AlarmListener callback, I implemented an asynchronous architecture:
private static AlarmListener createSharedListener() {
    return new AlarmListener() {
        @Override
        public void onActive(AlarmEvent alarmEvent) {
            enqueueToAllInstances(alarmEvent);
        }
        (...)
    };
}

private static final ExecutorService sharedProcessorPool =
    Executors.newFixedThreadPool(4);

private final BlockingQueue<AlarmEvent> eventQueue =
    new LinkedBlockingQueue<>(1000);

// AlarmListener callback - enqueue only, never block
private static void enqueueToAllInstances(AlarmEvent event) {
    List<AlarmTableModelDelegate> delegates;
    synchronized (LOCK) {
        // copy so LOCK is not held during the fan-out
        delegates = new ArrayList<>(activeInstances);
    }
    for (AlarmTableModelDelegate delegate : delegates) {
        // offer() returns false when the queue is full; a dropped event
        // shows up as a sequence gap and is recovered by the heartbeat/sync check
        delegate.eventQueue.offer(event);
    }
}

// Async processing in shared thread pool
private void processEventQueue() {
    while (running) {
        try {
            AlarmEvent event = eventQueue.poll(1, TimeUnit.SECONDS);
            if (event != null) {
                processAlarmEvent(event); // Filter, convert to JSON, fireEvent
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status and exit
            return;
        }
    }
}
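What the snippet above doesn't show is how each delegate's worker loop gets onto the shared pool. Here is a minimal sketch of the lifecycle wiring I'm using; startup()/shutdown() are placeholder names for wherever the component registers and unregisters itself (java.util.concurrent.Future import assumed):

private volatile boolean running;
private Future<?> workerHandle;

public void startup() {
    running = true;
    synchronized (LOCK) {
        activeInstances.add(this);
    }
    // One long-lived task per instance: note that with a fixed pool of 4
    // threads this caps concurrently draining instances at 4, so a larger
    // pool, or a single task draining all queues, may fit better if the
    // instance count grows.
    workerHandle = sharedProcessorPool.submit(this::processEventQueue);
}

public void shutdown() {
    synchronized (LOCK) {
        activeInstances.remove(this);
    }
    running = false;
    if (workerHandle != null) {
        workerHandle.cancel(true); // interrupts the blocking poll() wait
    }
}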
Alternative (simpler) architecture – process directly in the listener:
// Runs the full fan-out on the AlarmManager's notification thread,
// holding LOCK for the duration of all processAlarmEvent() calls
private static void processAllInstances(AlarmEvent event) {
    synchronized (LOCK) {
        for (AlarmTableModelDelegate delegate : activeInstances) {
            delegate.processAlarmEvent(event); // filter, JSON, fireEvent inline
        }
    }
}
Questions:
- Is the thread pool architecture necessary, or am I overengineering this?
I estimate per-instance processing time at ~0.5 ms (filtering + JSON conversion + event dispatch).
With 100 instances (5x security margin), that’s ~50 ms total per alarm event on the callback thread.
Is this acceptable, or could it impact the AlarmManager’s ability to notify other listeners?
- What is the AlarmManager threading model?
- Does it invoke listeners using a thread pool?
- Are listener callbacks executed in parallel?
- If my callback blocks for ~50 ms, could this delay notifications to other listeners?
- Or are listener callbacks serialized anyway?
- Is there a specific reason the standard Alarm Status Table uses a query-based architecture that I might be overlooking?
Any insights or best practices around AlarmListener usage and threading would be greatly appreciated.