Maximum active connection reached

We are seeing this after only about 250 connections

INFO | jvm 1 | 2018/09/10 08:43:32 | E [i.m.s.i.ProtocolProcessor ] [12:43:32]: Maximum active connection reached. Connection refused, server unavailable. CId=mimic/mqtt/00251

How can we increase the limit?

Can you give us a little more context? What kind of connections are these? Something from the MQTT modules?

Yes, MQTT Edge module with trial license. Is that a limitation of the trial?

Paging @wes0johnson or anybody else who might know :slight_smile:

Yes - MQTT Distributor is limited in the number of client connections. In trial mode it runs as Distributor Plus, which supports 250 clients. When licensed, it can be licensed as 'Plus' or 'Standard': as Plus it keeps the 250-client limit, and as Standard it has a 50-client limit.

Your page at https://inductiveautomation.com/whats-new-ignition-edge says

Supported MQTT Servers

The MQTT Distributor Module and any server compliant with the 3.1.1 MQTT protocol OASIS standard.

By "any server compliant with the 3.1.1 MQTT protocol OASIS standard" I understand
that there are alternatives to MQTT Distributor.
Are there instructions for linking to a server other than MQTT Distributor so that we can
overcome this limit? We want to reach at least 10,000 connections.

You want Chariot – explicitly designed for deployments bigger than Distributor’s limits. It is not an Ignition module itself.

Ok, we are running with IBM MessageSight and are trying to reach 10,000 EON nodes,
but are unable to do so. We have tried three times (each cycle taking about half an hour,
including bookkeeping, evidence gathering, and rebooting Ignition), and so far the failure
has been 100% reproducible.

It failed at 7500, 8300+, and 4000+ nodes, so it does not appear to be a hard limit.
This wrapper.log message accompanies the failure:

INFO | jvm 1 | 2018/09/17 09:59:38 | Exception in thread "Thread-22" java.util.ConcurrentModificationException
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.util.HashMap.putMapEntries(HashMap.java:512)
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.util.HashMap.putAll(HashMap.java:785)
INFO | jvm 1 | 2018/09/17 09:59:38 | at com.cirruslink.mqtt.engine.gateway.EdgeNodeManager.getAllEdgeNodes(EdgeNodeManager.java:84)
INFO | jvm 1 | 2018/09/17 09:59:38 | at com.cirruslink.mqtt.engine.gateway.EngineWorker.run(EngineWorker.java:136)
INFO | jvm 1 | 2018/09/17 09:59:38 | at java.lang.Thread.run(Thread.java:748)

We have screenshots and videos that we can share to diagnose this, and of course we can
reproduce it at any time. Everything looks fine on the MIMIC and MessageSight side
(i.e. 10k connections established and maintained).
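The stack trace is consistent with a classic Java hazard: `HashMap.putAll` iterates the source map's entries, and `HashMap` iterators are fail-fast, so a structural modification from another thread (here, presumably an edge node connecting or dropping while the engine worker snapshots the node map) throws `ConcurrentModificationException`. A minimal sketch of that failure mode and the usual remedy (`ConcurrentHashMap`, whose iterators are weakly consistent) — the class and method names below are illustrative, not Cirrus Link's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.ConcurrentModificationException;

public class CmeDemo {
    // Copies `source` into a snapshot while also mutating it mid-iteration,
    // simulating another thread registering an edge node during the copy.
    // Returns true if the copy survived, false if a CME was thrown.
    static boolean copyWhileMutating(Map<String, String> source) {
        try {
            Map<String, String> snapshot = new HashMap<>();
            for (Map.Entry<String, String> e : source.entrySet()) {
                snapshot.put(e.getKey(), e.getValue());
                // Structural modification during iteration: on a plain
                // HashMap the fail-fast iterator detects this and the next
                // call to next() throws ConcurrentModificationException.
                source.put("node-added-mid-iteration", "online");
            }
            return true;
        } catch (ConcurrentModificationException ex) {
            return false;
        }
    }

    public static void main(String[] args) {
        Map<String, String> edgeNodes = new HashMap<>();
        for (int i = 0; i < 100; i++) edgeNodes.put("node" + i, "online");
        System.out.println("HashMap survived: " + copyWhileMutating(edgeNodes));

        // ConcurrentHashMap iterators are weakly consistent: they tolerate
        // concurrent modification and never throw CME.
        Map<String, String> safeNodes = new ConcurrentHashMap<>();
        for (int i = 0; i < 100; i++) safeNodes.put("node" + i, "online");
        System.out.println("ConcurrentHashMap survived: " + copyWhileMutating(safeNodes));
    }
}
```

A `synchronized` wrapper around the map, with the same lock held during the copy, would also fix this; `ConcurrentHashMap` simply avoids the explicit locking.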

That is almost always a bug. Good catch.

We can reproduce it with 1000 nodes as well; it's just a little harder. It looks like a typical race condition.

Is there a better place to report this bug?
Once it happens, the MQTT client is dead, which prevents us from doing further work.

Yes, reach out to Cirrus Link.

Paging @wes0johnson :wink:

Yes - send a query to support@cirrus-link.com and we’ll take care of it from there. I agree it looks like a bug.

Thanks. It’s working well.
See screenshot with 10,000 EON nodes connected to Ignition.


2000 EON nodes, 20k devices, 200k tags