Previously, I worked with only four clients, and the client response time was almost instantaneous for everything.
The client asked me to add more stations; now we have 11 clients, and I've noticed some delay with clients.
The client application is simple; it basically consists of screen changes so station operators can follow instructions (there are a lot of steps for operators, so I have a lot of screens). Among the 11 clients, I also control 15 Atlas tools using Open Protocol, and all 11 clients write data to a database simultaneously.
Things I think are wrong:
I have a lot of system.tag.readBlocking and system.tag.writeBlocking on all clients. I've read that this isn't ideal.
I have a client running on the gateway. Ideally, the gateway should be installed on a computer without clients.
I don't have an SSD in the gateway. I don't know if installing one would improve it. Should I install SSDs on the clients as well?
Any other tips would be greatly appreciated. Thanks.
Definitely not ideal, especially if you're calling it from the single GUI thread (the event dispatch thread or EDT). Blocking that thread is the most likely cause of "delay with clients". While 10000 requests/sec is extremely high, it's not necessarily a problem on its own - your gateway seems to be dealing with it okay.
Is there a reason you arrived at "a lot of readBlocking and writeBlocking" calls over, say, bindings (bidirectional if necessary)? Bindings are vastly more efficient for reading - they set up subscriptions on the gateway so that only changed values are delivered, as well as running the IO work in a background thread and only delivering updated values on the EDT.
If you must use scripting, then at least ensure that you're batching reads and writes together into as few operations as possible. Ten readBlocking calls in a row in a script implies ten sequential network hops to the gateway and back to receive a value.
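As a minimal sketch (with made-up tag paths), batching a handful of reads and writes into one call each instead of one call per tag:

# Instead of one readBlocking call per tag (one round trip each)...
paths = ["[default]Station/TagA", "[default]Station/TagB", "[default]Station/TagC"]
# ...pass all the paths at once; the results come back in the same order.
valA, valB, valC = [qv.value for qv in system.tag.readBlocking(paths)]

# Same idea for writes: one writeBlocking call with parallel lists of
# paths and values instead of one call per tag.
system.tag.writeBlocking(
    ["[default]Station/Step", "[default]Station/Message"],
    [5, "Load part"]
)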
Okay, understood, thanks!
I think this is the script where readBlocking is most abused (it runs in one station every 4-5 minutes).
I have four data arrays in the PLC, each with max 100 values. So I run a query to save those 400 data items to the database.
Should I create 400 bindings?
I used system.tag.readBlocking once and had the same trouble. In my case I was using an OPC server with the PLC, and instead of system.tag.readBlocking I used system.opc.readValue(), which worked better. I don't know if you are using OPC too.
I hope it helps.
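If you go that route, there is also a batch form, system.opc.readValues(), which takes a list of item paths and does one request to the OPC server. A minimal sketch, where the server name and item paths are just assumptions to substitute with your own:

# Batch OPC read instead of many single readValue calls.
# Server name and item path format below are assumptions.
server = "Ignition OPC UA Server"
itemPaths = [
    "ns=1;s=[DeviceName]Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_%d_" % i
    for i in range(10)
]

# Returns one QualifiedValue per item path, in the same order.
qualifiedValues = system.opc.readValues(server, itemPaths)
values = [qv.value for qv in qualifiedValues]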
Consolidate your readBlocking calls into a single call as much as possible:
GraphData1, GraphData2, GraphData3, GraphData4 = [
    qv.value for qv in system.tag.readBlocking([
        "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_%d_" % i,
        "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C2/TesterCurrent_%d_" % i,
        "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C1/FIFO_ARRAY_%d_" % i,
        "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C2/FIFO_ARRAY_%d_" % i
    ])
]
You could go even further and use the for loop to build all the paths for every position, then read all the tags at once in a single readBlocking call and insert: one round trip to the gateway and back instead of 4x however many positions you have.
The easiest (but definitely not best) thing to do, if you just want the bandaid solution, is to wrap all this logic up into a function (preferably in the project library) and call it in an asynchronous thread (system.util.invokeAsynchronous). If you don't need any return value wherever you're calling the script, that's a pretty trivial change to make.
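A minimal sketch of that bandaid, assuming the existing logic is moved into a hypothetical project library script named station (the script and function names are placeholders):

# In a project library script called "station":
def saveGraphData():
    # ... existing readBlocking + runPrepUpdate logic moved here ...
    pass

# In the component event script, hand the work off to a background thread
# so the GUI thread (EDT) never waits on the gateway or the database:
system.util.invokeAsynchronous(station.saveGraphData)

If that background function ever needs to update a Vision component when it finishes, push just that final part back onto the GUI thread with system.util.invokeLater.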
But you're leaving a huge amount of optimization on the table. Assuming pos3 is ~100 or whatever, you could issue a single readBlocking call, assemble all your parameters, and issue a single runPrepUpdate for all 100 rows in the DB at once too. That's probably going to run more than 100x faster, because you're eliminating extra network hops and making it easier on your database (bulk inserts make rebuilding indexes easier).
Maybe something like this (untested):
# Use the larger of the two position counters.
pos3 = max(pos1, pos2)

def paths(i):
    yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C1/FIFO_ARRAY_{}_".format(i)
    yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlideTest_C2/TesterCurrent_{}_".format(i)
    yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C1/FIFO_ARRAY_{}_".format(i)
    yield "[Amvian]_PLC_/Program:TESTER/Graph_ST8/SlidePos_C2/FIFO_ARRAY_{}_".format(i)

def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in xrange(0, len(lst), n):
        yield lst[i:i + n]

# Flatten the four paths per position into one list, then read everything
# in a single readBlocking call (one round trip to the gateway).
allPaths = [path for i in xrange(pos3) for path in paths(i)]
allValues = [qv.value for qv in system.tag.readBlocking(allPaths)]

# Group the values back into sets of four, one set per position.
chunked = list(chunks(allValues, 4))

# Build one multi-row INSERT with a placeholder group per position.
query = """
    INSERT INTO `amvian_db`.`graph_slidetest`
        (`Fecha`, `VIN`, `Model`, `C1`, `C2`, `Pos_C1`, `Pos_C2`)
    VALUES {}
""".format(", ".join("(?, ?, ?, ?, ?, ?, ?)" for _ in xrange(len(chunked))))

# date, VIN and model are assumed to be defined earlier in the script.
baseArgs = [date, VIN, model]
args = []
for chunk in chunked:
    args.extend(baseArgs)
    args.extend(chunk)

system.db.runPrepUpdate(query, args)