Websocket connection closed unexpectedly. code=1009, reason=Text message too large: (actual) 2,098,113 > (configured max text message size) 2,097,152, codeMeaning=Message Too Big, codeDescription=The endpoint is terminating the connection because a data frame was received that is too large.
There are a few forum topics about this. Our Perspective pages and views haven't changed in about a year and a half, and in that time we've seen perhaps two lost-connection banners. Now it's happening several times a day. What could be causing this when nothing has changed in Designer?
Unlike the related topics below, our pages/views are relatively simple: no images, no tables, just tag values, gauges, and power charts.
I should have mentioned: we're on v8.1.31.
No OS updates in a year.
No firewall changes (directly connected on the LAN, not going through the firewall).
Chrome.
The web inspector console shows more detail:
store.Channel: Websocket connection closed. code=1009, wasClean=false, reason=Text message too large: (actual) 2,104,643 > (configured max text message size) 2,097,152, codeMeaning=Message Too Big, codeDescription=The endpoint is terminating the connection because a data frame was received that is too large.
PerspectiveClient.18a5e9a5f5264ddeb349.js:2
store.Channel: Websocket connection closed. code=1009, wasClean=false, reason=Text message too large: (actual) 2,101,610 > (configured max text message size) 2,097,152, codeMeaning=Message Too Big, codeDescription=The endpoint is terminating the connection because a data frame was received that is too large.
PerspectiveClient.18a5e9a5f5264ddeb349.js:2
store.Channel: Websocket connection closed. code=1006, wasClean=false, reason=No reason given, codeMeaning=Normal Closure, codeDescription=Reserved. Indicates that a connection was closed abnormally (that is, with no close frame being sent) when a status code is expected.
The alarm status table had about 1,000 unacknowledged alarms, and the issue cleared up after acknowledging them. It took some time because I kept losing the connection, but it improved as more and more alarms were acknowledged.
Another note on this: it also happened on views that didn't include the alarm status table, just some query-driven alarm counts in a view header.
Here is a script that can be used to acknowledge all alarms from the script console.
alarms = system.alarm.queryStatus(state=[0, 2])  # 0 = Clear Unacked, 2 = Active Unacked
print "Unacked alarms before:", len(alarms)
notes = None  # optional acknowledgement note; None leaves it blank
alarmIds = [str(alarm.eventid) for alarm in alarms]  # acknowledge() expects string event IDs
system.alarm.acknowledge(alarmIds, notes)
alarms = system.alarm.queryStatus(state=[0, 2])
print "Unacked alarms after:", len(alarms)
Encountered this issue again. We use a lot of embedded views for display, and each one queries the alarm status to find out whether its tag is in alarm and, if so, displays an alarm indication. We may need to revisit and optimize that implementation; 740 unacked alarms shouldn't take down the UI.
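For reference, the per-view check looks roughly like this; this is a sketch of the pattern, not our exact code, and tagPath is a hypothetical view parameter holding the tag's path. Running one of these queries in every embedded view is what makes the approach expensive:

# Sketch of a per-view alarm check; 'tagPath' is a hypothetical view param
results = system.alarm.queryStatus(
    state=[2, 3],                   # Active Unacked, Active Acked
    source=["*" + tagPath + "*"]    # wildcard filter on the alarm source path
)
inAlarm = len(results) > 0  # drives the alarm indication in the view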
You will likely encounter this issue if you remove all filters on the alarm status table; maybe you also have to search, I can't really remember, but I found a consistent way to break it by interacting with the alarm component.
I'm also pretty sure this was the fix:
In ignition.conf, add the line below, replacing the index 8 with an unused index if 8 is already taken: wrapper.java.additional.8=-Dperspective.websocket.max-message-size=4096
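For context, that section of ignition.conf would look something like this (a sketch; the index 8 and the 4096 value come from the post above, and my understanding is that the value is in KB, so 4096 roughly doubles the ~2 MB default, but treat the unit as an assumption):

# data/ignition.conf, under the Java Additional Parameters section:
# wrapper.java.additional.1=...   (existing entries)
# wrapper.java.additional.2=...
# Pick the next unused index in place of 8:
wrapper.java.additional.8=-Dperspective.websocket.max-message-size=4096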
Gateway has to be restarted for the change to take effect.
I'm not really sure how many alarms this is good for, but it's enough for me.