Vision Client Request/sec too high

Previously, I worked with only four clients, and the client response time was almost instantaneous for everything.
The client asked me to add more stations; now we have 11 clients, and I've noticed some delay in the clients.
The client application is simple; it basically consists of screen changes so station operators can follow instructions (there are a lot of steps for operators, so I have a lot of screens). Among the 11 clients, I also control 15 Atlas tools using Open Protocol, and all 11 clients write data to a database simultaneously.

Things I think are wrong:

  • I have a lot of system.tag.readBlocking and system.tag.writeBlocking on all clients. I've read that this isn't ideal.
  • I have a client running on the gateway. Ideally, the gateway should be installed on a computer without clients.
  • I don't have an SSD in the gateway. I don't know if installing one would improve it. Should I install SSDs on the clients as well?

Any other tips would be greatly appreciated. Thanks.




Definitely not ideal, especially if you're calling it from the single GUI thread (the event dispatch thread or EDT). Blocking that thread is the most likely cause of "delay with clients". While 10000 requests/sec is extremely high, it's not necessarily a problem on its own - your gateway seems to be dealing with it okay.

Is there a reason you arrived at "a lot of readBlocking and writeBlocking" calls over, say, bindings (bidirectional if necessary)? Bindings are vastly more efficient for reading - they set up subscriptions on the gateway so that only changed values are delivered, as well as running the IO work in a background thread and only delivering updated values on the EDT.
If you must use scripting, then at least ensure that you're batching reads and writes together into as few operations as possible. Ten readBlocking calls in a row in a script implies ten sequential network hops to the gateway and back to receive a value.
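
For illustration, a minimal sketch of the batching idea (the tag paths here are hypothetical, and the `system.tag` calls only exist inside an Ignition scripting context, so they're shown commented):

```python
# Hypothetical tag paths, for illustration only
paths = [
	"[default]Station1/Temperature",
	"[default]Station1/Pressure",
	"[default]Station1/Speed",
]

# One batched call -> one round trip to the gateway:
# values = [qv.value for qv in system.tag.readBlocking(paths)]

# versus three sequential calls -> three round trips:
# temp  = system.tag.readBlocking([paths[0]])[0].value
# pres  = system.tag.readBlocking([paths[1]])[0].value
# speed = system.tag.readBlocking([paths[2]])[0].value
```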

Okay, understood, thanks!
I think this is the script where readBlocking is most abused (it runs in one station every 4 to 5 minutes).
I have four data arrays in the PLC, each with max 100 values. So I run a query to save those 400 data items to the database.
Should I create 400 bindings?

	if(pos2 > pos1):
		pos3 = pos2
	else:
		pos3 = pos1
	
	for i in range(pos3):

		GraphData1 = system.tag.readBlocking([("[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_{}_".format(i))])[0].value
		GraphData2 = system.tag.readBlocking([("[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C2/TesterCurrent_{}_".format(i))])[0].value
		GraphData3 = system.tag.readBlocking([("[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C1/FIFO_ARRAY_{}_".format(i))])[0].value
		GraphData4 = system.tag.readBlocking([("[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C2/FIFO_ARRAY_{}_".format(i))])[0].value
		query = "INSERT INTO `amvian_db`.`graph_slidetest` (`Fecha`, `VIN`, `Model`, `C1`, `C2`, `Pos_C1`, `Pos_C2`) VALUES (?, ?, ?, ?, ?, ?, ?);"
		args = [date, VIN, model, GraphData1, GraphData2, GraphData3, GraphData4]
		system.db.runPrepUpdate(query, args)

I used system.tag.readBlocking once and had the same trouble. In my case I was using an OPC server with the PLC, and instead of tag.readBlocking I used system.opc.readValue(), which worked better. I don't know if you are using OPC too.
I hope it helps.


Consolidate your read blocking calls into a single call as much as possible:

		GraphData1, GraphData2, GraphData3, GraphData4 = [
			qv.value for qv in system.tag.readBlocking([
				"[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_%d_" % i,
				"[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C2/TesterCurrent_%d_" % i,
				"[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C1/FIFO_ARRAY_%d_" % i,
				"[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C2/FIFO_ARRAY_%d_" % i
			])
		]

You could go even further and use the for loop to build all the paths for every position, then read all the tags at once in a single readBlocking call and insert. One round trip to the gateway and back instead of four times however many positions you have.


runPrepUpdate is also blocking.

The easiest (but definitely not best) thing to do, if you just want the bandaid solution, is to wrap all this logic up into a function (preferably in the project library) and call it in an asynchronous thread (system.util.invokeAsynchronous). If you don't need any return value wherever you're calling the script, that's a pretty trivial change to make.
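
A rough sketch of that bandaid (the function name is hypothetical, and the `system.util` call only exists inside Ignition, so it's shown commented):

```python
# Project library function (hypothetical name); the existing
# readBlocking/runPrepUpdate logic moves here unchanged.
def logGraphData(date, VIN, model):
	# ... read tags, build args, insert rows ...
	pass

# In the client event handler, hand the work to a background thread so
# the event dispatch thread returns immediately:
# system.util.invokeAsynchronous(lambda: logGraphData(date, VIN, model))
```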

But you're leaving a huge amount of optimization on the table. Assuming pos3 is ~100 or whatever, you could issue a single readBlocking call, assemble all your parameters, and issue a single runPrepUpdate for all 100 rows in the DB at once too. That's probably going to run more than 100x faster, because you're eliminating extra network hops and making it easier on your database (bulk inserts make rebuilding indexes easier).

Maybe something like this (untested):

if pos2 > pos1:
	pos3 = pos2
else:
	pos3 = pos1

def paths(i):
	yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_{}_".format(i)
	yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C2/TesterCurrent_{}_".format(i)
	yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C1/FIFO_ARRAY_{}_".format(i)
	yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C2/FIFO_ARRAY_{}_".format(i)

def chunks(lst, n):
	"""Yield successive n-sized chunks from lst."""
	for i in xrange(0, len(lst), n):
		yield lst[i:i + n]

# Flatten: four paths per position, all in a single list
allPaths = [path for i in xrange(pos3) for path in paths(i)]
allValues = [qv.value for qv in system.tag.readBlocking(allPaths)]

# Materialize as a list so len() works below
chunked = list(chunks(allValues, 4))

query = """
	INSERT INTO `amvian_db`.`graph_slidetest`
	(`Fecha`, `VIN`, `Model`, `C1`, `C2`, `Pos_C1`, `Pos_C2`)
	VALUES {}
	""".format(", ".join("(?, ?, ?, ?, ?, ?, ?)" for _ in xrange(len(chunked))))

baseArgs = [date, VIN, model]

args = []
for chunk in chunked:
	args.extend(baseArgs)
	args.extend(chunk)

system.db.runPrepUpdate(query, args)

And since the return value isn't being used, replace runPrepUpdate with runSFPrepUpdate (store-and-forward) to avoid waiting for the statement to execute.


I would only add to be cautious here, as it may run up against the JDBC driver's maximum number of parameters per query.

That's generally around 2k, but it is still a limit, so in some cases multiple updates will be required, depending on the situation.
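
A sketch of that split, assuming a 2000-parameter cap (an assumption; check your driver's actual limit) and the 7-column insert from the script above (the `system.db` call only exists inside Ignition, so it's shown commented):

```python
PARAM_LIMIT = 2000     # assumed cap; depends on the JDBC driver
PARAMS_PER_ROW = 7     # Fecha, VIN, Model, C1, C2, Pos_C1, Pos_C2
rowsPerBatch = PARAM_LIMIT // PARAMS_PER_ROW  # 285 rows per statement

def batches(rows, n):
	"""Yield successive n-row batches from rows."""
	for i in range(0, len(rows), n):
		yield rows[i:i + n]

# Dummy rows standing in for the assembled insert data
rows = [["date", "vin", "model", 1, 2, 3, 4] for _ in range(400)]

for batch in batches(rows, rowsPerBatch):
	placeholders = ", ".join("(?, ?, ?, ?, ?, ?, ?)" for _ in batch)
	args = [value for row in batch for value in row]
	assert len(args) <= PARAM_LIMIT  # each statement stays under the cap
	# query = ("INSERT INTO `amvian_db`.`graph_slidetest` "
	#          "(`Fecha`, `VIN`, `Model`, `C1`, `C2`, `Pos_C1`, `Pos_C2`) "
	#          "VALUES " + placeholders)
	# system.db.runPrepUpdate(query, args)  # one call per batch
```

With 400 rows and a 285-row batch size, this issues two statements instead of one oversized one.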

Am I missing something? Why are you running this in a client at all and not on the gateway? I don't see any need for this to run on the client.

This right here.
Clients shouldn't be doing logging to a database. Gateways should.