Profiling Python scripts

I’ve been trying to profile bits of our Python scripting with the goal of learning what’s slowing it down. (Hint: It almost always ends up being an excessive number of tag interactions that we forgot about or didn’t realize were happening because our scripts are too deep, but that’s not what I’m here to talk about.)

In many cases, I’ve been able to get a good view of our execution by using Python’s profile library. I know it’s not super fast or accurate, but it’s good for a rough analysis.

What I’m finding is that while I can get it to run in the Designer scope, it fails in other scopes. I had one particular function I had analyzed a little bit from the Designer, then wanted to re-analyze in a native Perspective session, but it just silently dies every time. The logging lines below it never get run, and there are no errors or exceptions in any logs that I can find.

For example, one of our Perspective events calls a script which has a block like this in it:

import os
import tempfile
import profile

logger = system.util.getLogger('mytest')
fhandle, fpathname = tempfile.mkstemp(suffix='.pstats', prefix='mytest_')
os.close(fhandle)  # mkstemp returns an OS-level file descriptor, not a file object
logger.info('Starting profile to {}'.format(fpathname))
profile.run('my_slow_function(foo, bar, baz)', fpathname)
logger.info('Ending profile to {}'.format(fpathname))

I always see the “Starting profile to” line but not the “Ending profile to” line in the log.

I’m assuming this is going to be one of those “you can’t use that library in this scope” sort of moments, but I figured I’d ask here.

Is there some trick to getting this to work?

Or is there some other library/tool that I should be using instead of Python’s profile?
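For what it’s worth, CPython also ships cProfile, which exposes the same interface as profile with much lower overhead; I’m not certain Jython ships it, and whether it behaves any differently in Perspective scope is an open question. A minimal sketch, where `my_slow_function` is a stand-in for the real workload:

```python
import cProfile
import pstats

def my_slow_function(n):
    # Stand-in for the real workload being profiled.
    total = 0
    for i in range(n):
        total += i * i
    return total

# runctx() takes explicit namespaces, so the statement resolves names
# even when the script isn't running in __main__.
cProfile.runctx('my_slow_function(100000)', globals(), locals(), 'mytest.pstats')

# The same pstats reader works for both profile and cProfile output.
stats = pstats.Stats('mytest.pstats')
stats.sort_stats('cumulative').print_stats(10)
```
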

EDIT: And I checked that it’s not the mkstemp file that’s the issue: I removed that bit, used a fixed filename instead, and got the same result.


I prefer to collect timing information in production using Java’s System.nanoTime().

I really prefer the simple setup of the profiler. By just wrapping one higher level function, it automatically gives me a report of all the sub-functions it called, how many times it called each one, and (roughly) how much time it spent in each one.
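That report can be pulled out of the .pstats file after the fact with the standard pstats module. A sketch of the wrap-one-function workflow, using invented function names (`helper`, `my_slow_function`) as stand-ins:

```python
import profile
import pstats

def helper(x):
    return x * x

def my_slow_function(n):
    # Calls helper() repeatedly so the call count shows up in the report.
    return sum(helper(i) for i in range(n))

# Same behavior as profile.run(), but with explicit namespaces.
profile.runctx('my_slow_function(2000)', globals(), locals(), 'wrapped.pstats')

# The report lists each sub-function, how many times it was called,
# and (roughly) how much time was spent in each one.
stats = pstats.Stats('wrapped.pstats')
stats.sort_stats('cumulative').print_stats()
```
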

Your way is definitely workable, but requires adding wrappers at every point of interest and slowly digging down. Don’t get me wrong, it works far more reliably, but it’s more time-consuming to set up.

But I didn’t even think to look at Java profilers. I assume they’re going to be trickier since they’d probably have to hook in in ways that I can’t turn on/off inside the Jython instance… :confused:

I don’t like the overhead of the Python profiler’s intrusion into the call stack, plus the overhead of the file operations. I want timing to be as close to non-profiled real production as possible.


Your way is definitely better for tightening up code after you’ve identified which functions are your bottlenecks. I’m still at the “why the heck did we call funcX 1000 times?” stage.

Hmmm. Haven’t had to profile to find those. Usually it’s a general search followed by a face-palm.
