Communication via WebDev is slow

Hi,
I am using WebDev to send progress messages from a Python program (running on the same machine as the Ignition gateway). In total there are 7-10 such messages.
Without these messages the Python program takes around 5 seconds to run.
With the messages it takes 1-2 minutes.

My Python code is:

    import requests

    url = 'http://localhost:8088/system/webdev/....../WriteTag'  # path abbreviated in the original post
    payload = {'path': '[default]Message/Msg1', 'value': message}
    requests.post(url=url, json=payload)

The messages are written to an Ignition UDT (with the first element named Msg1): a structure of 10 text tags with FIFO code that clocks the messages through the list. These are then displayed in a Perspective table.
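The FIFO behaviour described above can be sketched in plain Python; this only models the 10-slot tag structure (the actual implementation lives in Ignition tag scripts), but it shows the intended clocking:

```python
from collections import deque

# Model of the 10-slot message list: newest message in slot 1 (Msg1),
# older messages shifted down, and the oldest dropped past slot 10.
messages = deque(maxlen=10)

def push_message(msg):
    """Insert a new message at the front, shifting the rest down one slot."""
    messages.appendleft(msg)

for i in range(12):
    push_message("message %d" % i)

print(list(messages))  # newest first; only the last 10 survive
```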

  1. Have I done something wrong? It seems a complex way to do something that 'should be' simple (but it works).
  2. Is there a more efficient way to accomplish this, i.e. to display messages from external code in a Perspective view?

Thanks
R

The obvious reasons it would run that much slower are:

  1. your WebDev script is very inefficient
  2. your WebDev script is being called far more often than you think

Can you add some logging and share some code?
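On the Python side, one simple way to add that logging is to time each request. Here is a minimal sketch with a hypothetical `timed_call` helper; in the real script you would pass `requests.post` instead of the stand-in:

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn, print how long it took, and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print("%s took %.3fs" % (fn.__name__, elapsed))
    return result, elapsed

# Stand-in for requests.post so this sketch runs anywhere; in the real script:
#   timed_call(requests.post, url=url, json=payload)
def fake_post(**kwargs):
    time.sleep(0.05)  # simulate network + server round trip
    return "ok"

result, elapsed = timed_call(fake_post, json={"path": "[default]Message/Msg1"})
```

If each call reports several seconds, the bottleneck is the round trip to the gateway rather than the Python program itself.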


tHANKS FORTHE RESPONSE kEVIN


oops...
Thanks Kevin
I don't understand your responses:

  1. WebDev script inefficient.
    a. Why/how is the WebDev script inefficient?
    b. The latency appears to be on the Python side. Does Python wait for an acknowledgement from Ignition? If so, can I modify the strategy?
  2. WebDev script being called far more often than I think.
    a. How would the WebDev script be called 'far more often than I think'? How can I measure this?
    b. I have a print statement right before the requests.post(url=url, json=payload) line in the Python script. The print statement writes to the console exactly once per message, so I assume requests.post is also called exactly once per message.
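Rather than inferring the call count from print statements, the Python side can count calls directly. A hypothetical counting decorator (the real script would wrap `requests.post` once and use the wrapper everywhere):

```python
import functools

def counted(fn):
    """Wrap a function so each call increments a counter stored on the wrapper."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

# In the real script:
#   post = counted(requests.post)
#   ... call post(url=url, json=payload) everywhere ...
#   print(post.calls) at the end
@counted
def send(msg):
    return msg

for m in ["a", "b", "c"]:
    send(m)

print(send.calls)  # 3
```

Comparing this count against the number of `HttpChannel` entries in the gateway log would confirm whether the endpoint really fires once per message.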

The Ignition log looks like:

HttpChannel	12Mar2025 09:05:26	/system/webdev/F3_Scheduler/WriteTag
msg	12Mar2025 09:05:21	[default]Message/Msg1 9:05:21 . . . . Starting

The 'HttpChannel' message is 2000+ lines that look like:

at org.json.JSONObject.populateMap(JSONObject.java:847)

at org.json.JSONObject.<init>(JSONObject.java:260)

The timestamps on the log entries indicate the same latency I see when printing to the console from Python with log messaging enabled. With logging disabled in Python, the messages appear about 20x faster on the console.

The only thing I can conclude (with my limited experience) is that the 'post' from Python hangs until a response from Ignition is received.
Is this correct?
Can I prevent it?
Is there an alternative method?

Thanks for your patience.
R

Yes, probably. AFAIK the requests library is synchronous. Most HTTP libraries probably are.

If you truly don't care about the response you might be able to modify your code to make the post in another thread, or find another HTTP library to use that will let you make requests asynchronously.
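One way to sketch that threaded approach: push each POST onto a background thread so the main program returns immediately. This uses a hypothetical `fire_and_forget` helper and a stand-in for the HTTP call so the sketch runs anywhere; error handling is omitted, and for 7-10 messages spawning a thread per call is fine (a worker queue would scale better):

```python
import threading

def fire_and_forget(fn, *args, **kwargs):
    """Run fn on a daemon thread and return without waiting for the result.
    In the real script: fire_and_forget(requests.post, url=url, json=payload)."""
    t = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t

# Demonstration with a stand-in for the HTTP call:
done = threading.Event()

def fake_post(**kwargs):
    done.set()  # simulate the request completing in the background

fire_and_forget(fake_post, json={"value": "Starting"})
done.wait(timeout=2)
print(done.is_set())  # True
```

The trade-off is that failures (e.g. the gateway being unreachable) happen silently in the background, so some logging inside the posted function is advisable.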

Thank you for the clarification

2000 lines? Actually, or are you exaggerating? That suggests an enormously deep object, which is not going to JSON-serialize well, and you might (for instance) be paying a lot of reflection overhead or something else "invisible".

Yes; 2047 in the example I selected.
What do you mean by a 'deep' object?

That section of the stacktrace being repeated a lot means you've got a map that contains a map that contains a map, over and over and over again. That's a very inefficient data structure in general, but especially to represent in JSON, and also just bad for the old, crusty org.json.JSONObject class that backs the system.util.json* functions.

It doesn't necessarily explain why it takes minutes, but it's definitely not helping performance.
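For illustration, a "deep" object in Python terms is a dictionary nested inside a dictionary many levels down. Serializing it recurses one stack frame per level, which is exactly what a long run of repeated stacktrace lines reflects. A hypothetical sketch:

```python
import json

def make_nested(depth):
    """Build a dict nested `depth` levels deep: {'k': {'k': {... 'leaf' ...}}}."""
    obj = {"value": "leaf"}
    for _ in range(depth):
        obj = {"k": obj}
    return obj

def nesting_depth(obj):
    """Count how many 'k' dict levels the object has."""
    depth = 0
    while isinstance(obj, dict) and "k" in obj:
        obj = obj["k"]
        depth += 1
    return depth

deep = make_nested(200)
print(nesting_depth(deep))  # 200
# json.dumps descends one frame per level; very deep objects serialize
# slowly and can even exhaust the interpreter's recursion limit.
text = json.dumps(deep)
```

A flat payload (one level of keys and values) avoids this entirely, which is why the two-key `{'path': ..., 'value': ...}` payload in the original snippet should not itself produce a 2000-line trace; something in the handler is building a much deeper structure.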

Thanks Paul
What do you mean by maps?

A Map data structure; in Python, a dictionary.

His point is that whatever you're doing in the handler is doing some deep recursion for some reason.