Hi
I am using the WebDev module to send progress messages from a Python program (running on the same machine as the Ignition gateway). In total there are 7-10 such messages.
Without these messages the Python program takes around 5 seconds to run.
With the messages, the same program takes 1-2 minutes.
The messages are written to an Ignition UDT (with the first element named Msg1), a structure of 10 text tags, with FIFO code to clock the messages through the list. These are then displayed in a Perspective table. A simplified sketch of the endpoint logic is below.
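For reference, the gateway-side logic is roughly along these lines (a simplified sketch, not my exact code; the "ProgressMsgs" UDT instance path and the "message" JSON key are placeholders):

```python
# Body of the WebDev doPost handler (sketch). `request` is supplied by WebDev;
# the UDT instance path "[default]ProgressMsgs" stands in for my real one.
msg = request["data"].get("message", "")            # JSON body is parsed into a dict

paths = ["[default]ProgressMsgs/Msg%d" % i for i in range(1, 11)]  # Msg1..Msg10

# FIFO shift: the newest message goes into Msg1, every older one moves down a slot
current = [qv.value for qv in system.tag.readBlocking(paths)]
system.tag.writeBlocking(paths, [msg] + current[:-1])

return {"json": {"status": "ok"}}
```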
Have I done something wrong? It seems like a complex way to do something that 'should be' simple (but it works).
Is there a more efficient way to accomplish this, i.e., display messages from external code in a Perspective display?
oops...
Thanks Kevin
I don't understand your responses:
WebDev script inefficient.
a. Why/how is the WebDev script inefficient?
b. The latency appears to be on the Python side?? Does Python wait for an acknowledgement from Ignition? If so, can I modify the strategy?
WebDev script being called far more than I think.
a. How is the WebDev script being called 'far more than I think'? How can I measure this? (A possible gateway-side check is sketched after item (b).)
b. I have a 'print' statement right before the requests.post(url=url, json=payload) line in the Python script. The print statement writes to the console exactly once per message, so I assume requests.post is also called exactly once per message.
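For (a), I assume I could also count the calls on the gateway side by logging at the top of the WebDev doPost body, something like this (the logger name is arbitrary):

```python
# Top of the WebDev doPost body (sketch); `request` is provided by WebDev.
logger = system.util.getLogger("ProgressEndpoint")   # arbitrary logger name
logger.info("doPost called, payload = %s" % str(request["data"]))
```

Each call would then show up as one entry in the gateway logs, which I could compare against my Python-side prints.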
The 'HttpChannel' log message is 2000+ lines that look like:

    at org.json.JSONObject.populateMap(JSONObject.java:847)
    at org.json.JSONObject.<init>(JSONObject.java:260)
The timestamps on the gateway log entries show the same latency I see when printing to the console from Python with log messaging enabled. With logging disabled in Python, the messages appear on the console about 20x faster.
The only thing I can conclude (with my limited experience) is that the 'post' from Python hangs until a response from Ignition is received.
Is this correct?
Can I prevent it?
Is there an alternative method?
Yes, probably. AFAIK the requests library is synchronous. Most HTTP libraries probably are.
If you truly don't care about the response, you might be able to modify your code to make the post in another thread, or find another HTTP library that will let you make requests asynchronously.
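Something along these lines, for example (an untested sketch; the URL and JSON key are placeholders for whatever your endpoint actually expects):

```python
import threading
import requests

# Placeholder WebDev URL; substitute your real project/resource path.
URL = "http://localhost:8088/system/webdev/MyProject/progress"

def post_progress(msg):
    """Fire-and-forget: do the POST on a worker thread so the main program
    never blocks waiting for the gateway's response."""
    def _worker():
        try:
            requests.post(URL, json={"message": msg}, timeout=5)
        except requests.RequestException:
            pass  # progress messages are best-effort; don't crash the job

    threading.Thread(target=_worker, daemon=True).start()

post_progress("Step 4 of 10 complete")
```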
2000 lines? Literally, or are you exaggerating? That suggests an enormously deep object, which is not going to serialize to JSON well, and you might (for instance) be paying a lot of reflection overhead or something else "invisible".
That section of the stacktrace being repeated a lot means you've got a map that contains a map that contains a map, over and over and over again. That's a very inefficient data structure in general, but especially to represent in JSON, and also just bad for the old, crusty org.json.JSONObject class that backs the system.util.json* functions.
It doesn't necessarily explain why it takes minutes, but it's definitely not helping performance.
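To illustrate (made-up payloads, not yours): the flatter you can make the body you POST, the less work org.json has to do on the gateway side.

```python
import requests

# Placeholder WebDev URL; substitute the real project/resource path.
URL = "http://localhost:8088/system/webdev/MyProject/progress"

# Deeply nested: every level becomes another org.json.JSONObject on the
# gateway, which is where the repeated populateMap frames come from.
nested = {"job": {"stage": {"detail": {"msg": "step 3 of 10"}}}}

# Flat: same information, a single JSONObject, far cheaper to parse.
flat = {"step": 3, "total": 10, "msg": "step 3 of 10"}

requests.post(URL, json=flat, timeout=5)
```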