Latency and retransmission to DB2

Ok, having a weird issue that has so far eluded solution by us...
We have Ignition connected to an AS400/DB2, with a firewall between them.
We are getting high latency on queries, i.e. 60-120 ms for a simple "SELECT 1 FROM sysibm.sysdummy1".
Using Wireshark on the Ignition server, we see a bunch of TCP retransmits triggered by no ACK from the DB2 side.
Trying hard to determine whether this is a driver issue, infrastructure, or something else.
Any input appreciated.

Following up on this, and trying to understand query timing.
I have a simple script that runs a query multiple times and times it:

host = system.tag.read('[System]Gateway/SystemName').value
if host == 'DB2':
    query = 'SELECT 1 FROM sysibm.sysdummy1'
else:
    query = 'SELECT 1'

header = ['i', 'milliseconds']
rows = []
reps = 50
start = system.date.toMillis(system.date.now())
for i in range(reps):
    t0 = system.date.toMillis(system.date.now())
    system.db.runQuery(query)
    t1 = system.date.toMillis(system.date.now())
    # Flat row per iteration: index and per-query round-trip time in ms.
    rows.append([i, t1 - t0])
ds = system.dataset.toDataSet(header, rows)

csv = system.dataset.toCSV(dataset=ds, showHeaders=True)
#print csv
end = system.date.toMillis(system.date.now())
diff = end - start
print ("total, average")
print (str(diff) + ', ' + str(float(diff) / reps))

This script reports ~120 ms per query, running on the gateway itself.
At the same time, I have the Designer DB query browser open, running the same query on auto-refresh, and it reports ~65 ms for query duration.
If I turn off auto-refresh, I get the same ~120 ms that my script reports.
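As an aside, when the same query alternates between two timing regimes like this (~65 ms vs ~120 ms), a plain average hides it; min/median/max over the sample is more telling. A small stand-alone Python sketch (outside Ignition, over a hypothetical list of recorded per-query times) showing the kind of summary that exposes the split:

```python
import statistics

# Hypothetical per-query round-trip times in ms, mixing the two observed regimes.
timings_ms = [65, 64, 66, 121, 119, 65, 120, 122, 64, 118]

print('min    :', min(timings_ms))            # fast regime
print('max    :', max(timings_ms))            # slow regime
print('median :', statistics.median(timings_ms))
print('mean   : %.1f' % statistics.mean(timings_ms))
```

A bimodal sample like this shows a median and mean sitting between the two clusters, which is a hint that two distinct code paths (e.g. warm vs cold connection) are in play rather than one noisy one.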

Meanwhile, if I look at the DB query performance, I see this:
[screenshot: DB connection query performance metrics]
which tells me that my queries are typically under 0.1ms.
Why is there such a discrepancy between the timings reported by the Designer query browser, my script, and the performance page? And why does the performance page's connection counter always show 0 connections? This is on 8.1.29, still in trial mode because we can't apply the license until we resolve this issue.

I would look really closely at the JDBC driver. Consider using an external JDBC-based management workspace, like JetBrains DataGrip[1], to test that part in a neutral setting.


  1. Happy customer!

Thanks for the suggestion. I have run the same query with SQuirreL SQL, using the same JDBC driver, and get similar speed.
Is that the basic function you were suggesting with DataGrip, or does it have more functionality in terms of JDBC performance analysis?

I'm not familiar with SQuirreL SQL, but if it is showing JDBC execution times similar to Ignition's, then the problem lies between the DB2 JDBC driver and DB2. At that point I would throw this into your IT group's lap.

Yeah, they have been baffled so far, but this week the guy who can help trace/diagnose on the DB2 side is back, so maybe we can make some progress.
Any ideas why the Ignition performance times are so different from the actual experienced times? (I.e. Ignition is reporting a mean time of 0.1 ms, but my script and the query browser are reporting so much more.)
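One thing worth separating here is what each number actually measures. An execution metric typically times only the statement running on an already-open connection, while a scripted loop sees the whole round trip: connection checkout, statement prep, network hops, and fetch. A rough stdlib-only sketch of that distinction, using sqlite3 purely as a stand-in since it needs no server (the absolute numbers are meaningless; the point is what each phase includes):

```python
import sqlite3
import time

# Phase A: statement execution on an already-open connection -- roughly
# what a per-query execution metric measures.
conn = sqlite3.connect(':memory:')
t0 = time.perf_counter()
conn.execute('SELECT 1').fetchall()
exec_ms = (time.perf_counter() - t0) * 1000.0
conn.close()

# Phase B: connection setup plus execution -- closer to what a timing
# loop that pays checkout/handshake overhead observes.
t0 = time.perf_counter()
c2 = sqlite3.connect(':memory:')
c2.execute('SELECT 1').fetchall()
c2.close()
total_ms = (time.perf_counter() - t0) * 1000.0

print('execute only     : %.3f ms' % exec_ms)
print('connect + execute: %.3f ms' % total_ms)
```

Against a real networked database the gap between the two phases is dominated by network round trips, which is consistent with a sub-0.1 ms server-side execution time coexisting with a 60-120 ms wall-clock time per query.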

Ok, I have at least figured out how to increase the open connection counter.
If I put a system.db.beginTransaction(timeout=1000) in the loop, the number of connections increases. It also adds ~40 ms to each iteration.
Hopefully this week we can observe from DB2 side.

Update, for anyone interested. The root cause was NOT the firewall or JDBC inherently. The Ignition server is Windows Server 2019, and the root issue traces to the implementation of Nagle's algorithm for TCP queuing. Adding 'tcp no delay=true' to the JDBC connection properties disables it and brought our query time down to the expected ~2 ms.
Nagle's algorithm - Wikipedia
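For anyone wanting to see what that property maps to underneath: the 'tcp no delay' name is specific to the JDBC driver's property list, but the underlying socket option is TCP_NODELAY on every platform, which tells the OS to send small writes immediately instead of batching them until the previous segment is ACKed. A minimal Python illustration of toggling it on a raw socket:

```python
import socket

# Nagle's algorithm holds back small writes until the prior segment is
# ACKed; with request/response query traffic this can stall each round
# trip. TCP_NODELAY disables that batching on the socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('TCP_NODELAY before:', s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print('TCP_NODELAY after :', s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```

A JDBC driver that exposes a 'tcp no delay' style property is just setting this same option on the sockets it opens to the database.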
