Ignition redundancy issue: "[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data"

Ignition 8.1.52 and before

On a redundant gateway pair, we periodically get this message: "[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data." and many devices are disconnected at the same time.

What does it mean?

|Time|Event|
|---|---|
|04Feb2026 21:45:29|[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data.|
|04Feb2026 20:00:13|[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.|
|04Feb2026 19:38:58|Redundancy state changed: Role=Master, Activity level=Active, Project state=Good, History level=Full|
|04Feb2026 19:38:58|[internaldb] Remote needs to do full pull. Reason: major change detected between nodes (sync provider UUIDs did not match), this node is the redundant master, and this redundant provider does not use bidirectional sync.|
|04Feb2026 19:38:57|System restore initiated for backup node. A data-only restore file will be provided. The backup node will then restart.|
|04Feb2026 19:38:57|Redundancy state changed: Role=Master, Activity level=Active, Project state=OutOfDate, History level=Full|
|04Feb2026 19:38:35|Redundancy state changed: Role=Master, Activity level=Active, Project state=Good, History level=Full|

Based on what is in those log entries, the redundant master gateway may have restarted. The 21:45:29 message means that the alarm redundant provider on the master gateway is in a “just started” state. That’s what “redundant provider on local side has no data” means in the message. The backup gateway has recent alarm data that the master needs to pull over so that it can be brought up to date. That is all I can tell you about that log snippet though.
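If you want to quantify how often these pulls happen before digging further, you can summarize an exported system events log with a short script. This is only a sketch: the regex assumes the pipe/tab-delimited format and `04Feb2026 21:45:29` timestamps shown in the snippets above, so adjust it to your actual export.

```python
# Sketch: list full-pull events from an exported system events log.
# The line format (pipe- or tab-delimited, "04Feb2026 21:45:29" timestamps)
# is an assumption based on the snippets in this thread.
import re

LINE = re.compile(r"(\d{2}[A-Za-z]{3}\d{4} \d{2}:\d{2}:\d{2})[|\s]*\[(.+?)\]\s*(.*)")

def full_pull_events(lines):
    """Yield (timestamp, provider, reason) for each full-pull entry."""
    for line in lines:
        m = LINE.search(line)
        if m and "full pull" in m.group(3).lower():
            ts, provider, msg = m.groups()
            # Keep only the text after "Reason:", trimming table debris.
            reason = msg.split("Reason:", 1)[-1].strip(" .|") if "Reason:" in msg else msg
            yield ts, provider, reason

sample = [
    "|04Feb2026 21:45:29|[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data.|",
    "|04Feb2026 19:38:58|Redundancy state changed: Role=Master, Activity level=Active, Project state=Good, History level=Full|",
]
for ts, provider, reason in full_pull_events(sample):
    print(ts, provider, "->", reason)
```

Run it over both nodes' exports; clusters of full pulls at the same timestamps on both sides are what you want to correlate with restarts or connection faults.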

That's strange, both nodes were up and running.
I will check whether something caused an unexpected restart of the active master node...

on the backup side:

System Events
Time	Event
04Feb2026 21:45:26	[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.
04Feb2026 21:45:25	[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.
04Feb2026 21:45:23	[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.
04Feb2026 21:45:22	[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.
04Feb2026 21:45:21	[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.
04Feb2026 21:45:21	[tagprovider-ligneA] Changes detected on both sides! Will need to perform full pull from master. This will clear all pending changes for this sync provider on this node.
04Feb2026 21:45:19	[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data.
04Feb2026 21:08:19	[tagprovider-ligneA] Changes detected on both sides! Will need to perform full pull from master. This will clear all pending changes for this sync provider on this node.
04Feb2026 20:00:14	[_alarm_system_] Will perform full pull. Reason: redundant provider on local side has no data.
04Feb2026 19:42:11	Redundancy state changed: Role=Backup, Activity level=Cold, Project state=Good, History level=Full
04Feb2026 19:41:49	Redundancy state changed: Role=Backup, Activity level=Cold, Project state=OutOfDate, History level=Full
04Feb2026 19:41:49	[client_auth_token_runtime_provider] Will perform full pull. Reason: major change detected between nodes (sync provider UUIDs did not match).
04Feb2026 19:41:49	[_alarm_system_] Will perform full pull. Reason: major change detected between nodes (sync provider UUIDs did not match).
04Feb2026 19:41:49	[_alarmshelf_] Will perform full pull. Reason: major change detected between nodes (sync provider UUIDs did not match).
04Feb2026 19:41:49	[tagprovider-ligneD] Will perform full pull. Reason: major change detected between nodes (sync provider UUIDs did not match).

Are there any settings in the gateway to adjust when the system uses a lot of alarms?
We have many diagnostic alarms used to compute indicators.

On the passive backup node I have an alarm file of about 22 MB.

I'm thinking about increasing the send/receive threads for the redundancy gateway network connection
(I use 8.1.52).
But the redundancy connection is not visible under the gateway network incoming or outgoing connections on either node, so where can I change those settings?

The send/receive threads are configurable for the redundancy gateway network connection. On the backup gateway, you have to open the redundancy configuration and the send and receive thread settings are in there under the Backup Settings section. But I think what is happening in your situation is that the gateway network connection is faulting between the master and backup gateways, and this is causing both gateways to become active. This is what the “Changes detected on both sides” message means. Both gateways became active at one point, reconnected to each other, and now they both have their own conflicting data. I would look at the backup gateway and see if the gateway network connection to the master is becoming faulted, and then start investigating that.
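To confirm (or rule out) the both-active scenario from logs you already have, you can merge the "Redundancy state changed" lines exported from both nodes and look for overlap. A rough sketch, assuming the timestamp format from your snippets:

```python
# Sketch: detect moments where both redundancy nodes report themselves Active
# at the same time, from exported "Redundancy state changed" log lines.
# The date format and line layout are assumptions based on this thread.
import re
from datetime import datetime

STATE = re.compile(r"(\d{2}[A-Za-z]{3}\d{4} \d{2}:\d{2}:\d{2}).*Activity level=(\w+)")

def timeline(lines, node):
    """Return (timestamp, node, activity) tuples for state-change lines."""
    events = []
    for line in lines:
        m = STATE.search(line)
        if m:
            ts = datetime.strptime(m.group(1), "%d%b%Y %H:%M:%S")
            events.append((ts, node, m.group(2)))
    return events

def both_active(master_lines, backup_lines):
    """Yield timestamps at which both nodes' last-known activity is Active."""
    state = {"master": None, "backup": None}
    merged = sorted(timeline(master_lines, "master") + timeline(backup_lines, "backup"))
    for ts, node, activity in merged:
        state[node] = activity
        if state["master"] == "Active" and state["backup"] == "Active":
            yield ts
```

If this ever yields a timestamp, both nodes genuinely believed they were active at once, which is consistent with the "Changes detected on both sides" messages.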


We still get this message quite frequently:

[_alarm_system_] Remote needs to do full pull. Reason: redundant provider on remote side has no data.

A continuous ping between backup and master doesn't show any network issue.
Are there settings to avoid full pulls?
Our system uses a lot of "alarms" to track some data changes.

I’m not sure what is going on in your situation. Full pulls should not happen regularly unless something got disrupted. I think this is a case where you have to call in to Support and have them check over everything.

Yes, I have a ticket open with Support. I need to schedule a remote session as soon as operations allow.

While waiting to be able to do a remote session with Support (ticket #175761),

we've noticed a strange exception message:

Caused by: java.io.IOException: An established connection was aborted by the software in your host machine

Could EDR software be responsible for the disconnections between master and backup?

ReadCoilsRequest	13Feb2026 16:44:25	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:25	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:25	Request failed. FailureType==DISCONNECTED
MasterStateManager	13Feb2026 16:44:24	Peer node information has been updated: RedundancyNode(address=10.29.50.211, httpAddresses=[http://10.29.50.211:8088], activityLevel=Cold, projectState=OutOfDate)
MasterStateManager	13Feb2026 16:44:24	Checking if backup has existing session...
MasterStateManager	13Feb2026 16:44:24	Peer node information has been updated: RedundancyNode(address=10.29.50.211, httpAddresses=[http://10.29.50.211:8088], activityLevel=Active, projectState=OutOfDate)
MasterStateManager	13Feb2026 16:44:24	Scheduling session initialization...
MasterStateManager	13Feb2026 16:44:24	Peer address changed to '_0:2:Ignition-BREST-GTC-SVR1'
MasterStateManager	13Feb2026 16:44:24	Registering redundancy session from '_0:2:Ignition-BREST-GTC-SVR1', peer state=RedundancyNode(address=10.29.50.211, httpAddresses=[http://10.29.50.211:8088], activityLevel=Active, projectState=OutOfDate)
MasterStateManager	13Feb2026 16:44:24	Checking if backup has existing session...
MasterStateManager	13Feb2026 16:44:24	Scheduling session initialization...
WebSocketConnection	13Feb2026 16:44:24	[100] ignition-brest-gtc-svr1-backup connection status has been updated from Initialized to Running: Web socket session established for ignition-brest-gtc-svr1-backup|c9ae8131-fed9-471f-950f-83f9fa80c563
MetroWebSocket	13Feb2026 16:44:24	<- incoming local='ignition-brest-gtc-svr1-master' remote='ignition-brest-gtc-svr1-backup' method=onMessage: Session id now set to [1616494698]
SocketIODelegate	13Feb2026 16:44:24	[hostname=10.29.50.80,port=502] Socket connection closed, DriverState was Connected.
ReadHoldingRegistersRequest	13Feb2026 16:44:24	Request failed. FailureType==DISCONNECTED
ReadCoilsRequest	13Feb2026 16:44:24	Request failed. FailureType==DISCONNECTED
SocketIODelegate	13Feb2026 16:44:24	[hostname=10.29.50.95,port=502] Socket connection closed, DriverState was Connected.
ReadHoldingRegistersRequest	13Feb2026 16:44:24	Request failed. FailureType==DISCONNECTED
BREST_GTC_LOG.bre.eclairage	13Feb2026 16:44:24	[<module:bre.eclairage>, line 366, in eventPiloterQuality] Message: 'QualityChanged tagPath=[ligneB]GTC/STA/Station1/ECL01/01001/TC/ALLUMER_ABRIS_QUAI, value=False'
WebSocketConnection	13Feb2026 16:44:23	[100] ignition-brest-gtc-svr1-backup connection status has been updated from Unknown to Initialized
CentralManager	13Feb2026 16:44:23	Registering connection: ignition-brest-gtc-svr1-backup
WebSocketConnection	13Feb2026 16:44:23	[1196556094] ignition-brest-gtc-svr1-backup connection status has been updated from Faulted to Shutdown: Remote is reconnecting
WebSocketConnection	13Feb2026 16:44:23	<- incoming local='ignition-brest-gtc-svr1-master' remote='ignition-brest-gtc-svr1-backup' method=shutdown: [1196556094] Shutting down connection 'ignition-brest-gtc-svr1-master' to https://10.29.50.211:8060/system: Remote is reconnecting
ReadCoilsRequest	13Feb2026 16:44:20	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:20	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:20	Request failed. FailureType==DISCONNECTED
BREST_GTC_LOG.shared.tag.alarmByCategory	13Feb2026 16:44:19	[<module:shared.tag.alarmByCategory>, line 1090, in updateAlarmeMenus] Message: '12 values updated [vues]'
SocketIODelegate	13Feb2026 16:44:19	[hostname=10.29.50.80,port=502] Socket connection closed, DriverState was Connected.
ReadHoldingRegistersRequest	13Feb2026 16:44:19	Request failed. FailureType==DISCONNECTED
ReadCoilsRequest	13Feb2026 16:44:19	Request failed. FailureType==DISCONNECTED
SocketIODelegate	13Feb2026 16:44:19	[hostname=10.29.50.95,port=502] Socket connection closed, DriverState was Connected.
ReadHoldingRegistersRequest	13Feb2026 16:44:19	Request failed. FailureType==DISCONNECTED
BREST_GTC_LOG.bre.eclairage	13Feb2026 16:44:19	[<module:bre.eclairage>, line 366, in eventPiloterQuality] Message: 'QualityChanged tagPath=[ligneB]GTC/STA/Station1/ECL01/01001/TC/ALLUMER_MLUM, value=False'
BREST_GTC_LOG.bre.eclairage	13Feb2026 16:44:19	[<module:bre.eclairage>, line 366, in eventPiloterQuality] Message: 'QualityChanged tagPath=[ligneB]GTC/STA/Station1/ECL01/01001/TC/ALLUMER_ABRIS_QUAI, value=False'
BREST_GTC_LOG.bre.eclairage	13Feb2026 16:44:19	[<module:bre.eclairage>, line 366, in eventPiloterQuality] Message: 'QualityChanged tagPath=[ligneB]GTC/STA/Station1/ECL01/01001/TC/ALLUMER_PUB, value=False'
ReadHoldingRegistersRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
ReadCoilsRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
SocketIODelegate	13Feb2026 16:44:14	[hostname=10.29.50.80,port=502] Socket connection closed, DriverState was Connected.
ReadCoilsRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
ReadHoldingRegistersRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
SocketIODelegate	13Feb2026 16:44:14	[hostname=10.29.50.95,port=502] Socket connection closed, DriverState was Connected.
ReadHoldingRegistersRequest	13Feb2026 16:44:14	Request failed. FailureType==DISCONNECTED
Routes	13Feb2026 16:44:14	Could not find project 'lost-connection'. Verify url is correct and project is published.
SyncManager	13Feb2026 16:44:14	Error executing task 'syncRequest'.
java.lang.reflect.UndeclaredThrowableException: null
	at jdk.proxy2/jdk.proxy2.$Proxy108.synchronize(Unknown Source)
	at com.inductiveautomation.ignition.gateway.redundancy.state.SynchronizationManager$SyncRequest.run(SynchronizationManager.java:1270)
	at com.inductiveautomation.ignition.gateway.redundancy.state.SynchronizationManager$CollapsingRunnable.run(SynchronizationManager.java:1235)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: org.eclipse.jetty.io.EofException: null
	at org.eclipse.jetty.io.SocketChannelEndPoint.flush(SocketChannelEndPoint.java:118)
	at org.eclipse.jetty.io.ssl.SslConnection.networkFlush(SslConnection.java:536)
	at org.eclipse.jetty.io.ssl.SslConnection$SslEndPoint.flush(SslConnection.java:1171)
	at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:419)
	at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:272)
	at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:251)
	at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:368)
	at org.eclipse.jetty.websocket.core.internal.FrameFlusher.process(FrameFlusher.java:345)
	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:253)
	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:232)
	at org.eclipse.jetty.websocket.core.WebSocketConnection.enqueueFrame(WebSocketConnection.java:763)
	at org.eclipse.jetty.websocket.core.WebSocketCoreSession$OutgoingAdaptor.sendFrame(WebSocketCoreSession.java:707)
	at org.eclipse.jetty.websocket.core.ExtensionStack.sendFrame(ExtensionStack.java:250)
	at org.eclipse.jetty.websocket.core.WebSocketCoreSession$Flusher.forwardFrame(WebSocketCoreSession.java:798)
	at org.eclipse.jetty.websocket.core.util.FragmentingFlusher.onFrame(FragmentingFlusher.java:51)
	at org.eclipse.jetty.websocket.core.util.TransformingFlusher$Flusher.process(TransformingFlusher.java:161)
	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:253)
	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:232)
	at org.eclipse.jetty.websocket.core.util.TransformingFlusher.sendFrame(TransformingFlusher.java:78)
	at org.eclipse.jetty.websocket.core.WebSocketCoreSession.sendFrame(WebSocketCoreSession.java:516)
	at org.eclipse.jetty.ee8.websocket.common.JettyWebSocketRemoteEndpoint.sendBytes(JettyWebSocketRemoteEndpoint.java:65)
	at com.inductiveautomation.metro.impl.protocol.websocket.WebSocketConnection.notifyMessageWaiting(WebSocketConnection.java:414)
	at com.inductiveautomation.metro.impl.protocol.websocket.WebSocketConnection.storeForDownload(WebSocketConnection.java:365)
	at com.inductiveautomation.metro.impl.protocol.websocket.WebSocketConnection.handle(WebSocketConnection.java:1199)
	at com.inductiveautomation.metro.impl.protocol.websocket.WebSocketConnection.handle(WebSocketConnection.java:103)
	at com.inductiveautomation.metro.impl.transport.AbstractTransportAdapter.handle(AbstractTransportAdapter.java:23)
	at com.inductiveautomation.metro.impl.transport.AbstractTransportAdapter.handle(AbstractTransportAdapter.java:23)
	at com.inductiveautomation.metro.impl.MessageSendManager.sendNext(MessageSendManager.java:197)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	... 3 common frames omitted
Caused by: java.io.IOException: An established connection was aborted by the software in your host machine
	at java.base/sun.nio.ch.SocketDispatcher.writev0(Native Method)
	at java.base/sun.nio.ch.SocketDispatcher.writev(Unknown Source)
	at java.base/sun.nio.ch.IOUtil.write(Unknown Source)
	at java.base/sun.nio.ch.IOUtil.write(Unknown Source)
	at java.base/sun.nio.ch.SocketChannelImpl.write(Unknown Source)
	at java.base/java.nio.channels.SocketChannel.write(Unknown Source)
	at org.eclipse.jetty.io.SocketChannelEndPoint.flush(SocketChannelEndPoint.java:112)
	... 32 common frames omitted


WebSocketConnection	13Feb2026 16:44:14	<- incoming local='ignition-brest-gtc-svr1-master' remote='ignition-brest-gtc-svr1-backup' method=notifyMessageWaiting.writeFailed: Write to session web socket failed, received header:[opCode=MSG_SEND, subCode=0, flags=0, messageId=15629, senderId='ignition-brest-gtc-svr1-master', targetAddress='_0:2:Ignition-BREST-GTC-SVR1'] header to send:[opCode=MSG_SEND, subCode=0, flags=1, messageId=15629, senderId='ignition-brest-gtc-svr1-master', targetAddress='_0:2:Ignition-BREST-GTC-SVR1']
MetroWebSocket	13Feb2026 16:44:14	<- incoming local='ignition-brest-gtc-svr1-master' remote='ignition-brest-gtc-svr1-backup' method=onClose: Connection ignition-brest-gtc-svr1-backup|c9ae8131-fed9-471f-950f-83f9fa80c563 has been set to Faulted:null
Routing	13Feb2026 16:44:14	Route disconnected between server '_0:2:Ignition-BREST-GTC-SVR1' and connection 'ignition-brest-gtc-svr1-backup|c9ae8131-fed9-471f-950f-83f9fa80c563'
WebSocketConnection	13Feb2026 16:44:14	[1196556094] ignition-brest-gtc-svr1-backup connection status has been updated from Running to Faulted: onClose has been called on web socket:null
MasterStateManager	13Feb2026 16:44:14	Destroying session.
MetroWebSocket	13Feb2026 16:44:12	<- incoming local='ignition-brest-gtc-svr1-master' remote='ignition-brest-gtc-svr1-backup' method=onConnect: Could not create new web socket for 'ignition-brest-gtc-svr1-backup|c9ae8131-fed9-471f-950f-83f9fa80c563' at https://10.29.50.211:8060/system: A connection with system name or id 'ignition-brest-gtc-svr1-backup' already exists on the GatewayNetwork! The new connection from https://10.29.50.211:8060/system has been rejected