400 concurrent clients


One of our clients has a very small project (3 devices and about 1000 variables) but many clients that will connect to the visualization (up to 400 concurrent clients). I've read the server sizing and architecture guide, but what do you think? Is 16 cores (4 GHz+), 32 GB memory, and an SSD enough, or can we go with lower specs because of the small tag count?

You don't say whether this is Vision or Perspective. That server would likely support that many clients in Vision. Based on what I've seen so far, unlikely to be enough for that many Perspective clients. (Like, not even close.)

We have a customer with 70-odd Perspective clients and 90 Vision clients running on two split gateways: 20 GB front / 16 GB back, both with 6 cores.
So I'd guess 36 GB and 12 cores for a single server. They have 650k tags, though, so it's hard to compare apples to apples.

Hmm, I wonder how well this response from 2019 holds up.

Probably amended somewhat; as long as you take care to use Perspective's mechanisms for sharing repetitive query results across sessions, you can trade memory for CPU to a pretty good extent. That said, Perspective does consume more resources than we'd like it to on the gateway, and we're always working to improve that.
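The "share repetitive query results" idea is worth spelling out, since it's what keeps 400 sessions from meaning 400x the database load. A minimal sketch of the pattern in plain Python (this is a hypothetical illustration, not the Ignition API — on a real gateway you'd get this from named query caching or a gateway-scoped store):

```python
import time

class SharedQueryCache:
    """Cache query results so N sessions trigger 1 query, not N.

    Hypothetical sketch of result-sharing across sessions; the TTL
    plays the role of a named query's cache expiry.
    """

    def __init__(self, ttl_seconds=5.0):
        self.ttl = ttl_seconds
        self._cache = {}   # key -> (timestamp, result)
        self.db_hits = 0   # how many times we actually "ran" the query

    def run(self, query, params):
        key = (query, tuple(sorted(params.items())))
        now = time.monotonic()
        hit = self._cache.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # served from cache: no extra DB load
        result = self._execute(query, params)
        self._cache[key] = (now, result)
        return result

    def _execute(self, query, params):
        # Stand-in for the real database round trip.
        self.db_hits += 1
        return {"query": query, "params": dict(params)}

cache = SharedQueryCache(ttl_seconds=5.0)
# 400 sessions asking for the same view's data within the TTL:
for _ in range(400):
    cache.run("SELECT * FROM alarms", {"area": "line1"})
print(cache.db_hits)  # 1 -- one round trip serves all 400 sessions
```

The memory-for-CPU trade is visible here: the cache holds result sets in gateway memory so the CPU (and the database) only pays once per TTL window, no matter how many sessions are watching the same screen.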


Thanks for your response.

The project will be in Perspective and we are going to use Azure cloud.

What about scalable resources? Has anyone tried that? Would it work for 400 concurrent clients?


Meh. I don't agree with that at all. Vision's polling is trivial compared to Perspective's UI scripting and bindings. Not to mention that Vision takes delivery of dense binary representations of the original Java objects, while every similar item for Perspective has to be converted to very non-dense JSON.
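The density gap is easy to demonstrate. A quick sketch (hypothetical payload, not actual Vision or Perspective wire formats): pack 1000 full-precision double values as raw binary versus serializing the same list to JSON text:

```python
import json
import struct

# Hypothetical payload: 1000 double-precision tag values, roughly
# what a client subscription update might carry.
values = [i / 3 for i in range(1000)]

# Dense binary: exactly 8 bytes per double, little-endian.
binary = struct.pack("<%dd" % len(values), *values)

# JSON: each value becomes decimal text plus punctuation.
as_json = json.dumps(values).encode("utf-8")

print(len(binary))   # 8000 bytes (1000 * 8)
print(len(as_json))  # larger -- full-precision doubles need ~17 digits
```

Exactly how much larger the JSON is depends on the values (small integers compress well as text; full-precision floats do not), and that's before counting the CPU spent encoding and decoding the text on every update.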