Better way to keep transaction alive?

I just wrote what feels like a very hacky script. I don't know that there's a better way to do this, and maybe there isn't, but I wanted to check with the pros here.

I have a function that runs a bunch of named queries in a transaction.

At one point, I have to compare two datasets to each other to determine what the next query to run is.

The issue is that sometimes these datasets can be large, so I'm not doing anything in the transaction for an extended amount of time. That leads to the transaction closing itself, and then I get an error when I finally determine the next query - the transaction's already closed.

Right now my "fix" during the dataset comparison is to do this:

	for row in range(savedList.rowCount):
		# ping the database every 10 rows so the transaction doesn't time out
		if row % 10 == 0:
			system.db.runNamedQuery('Util/keepTransactionAlive', tx=tx)
		for row2 in range(initialList.rowCount):
			# do stuff

where 'Util/keepTransactionAlive' is just `SELECT 1`.

Is this a normal pattern for this sort of scenario or is there a better way to handle this?

Change the transaction timeout, perhaps?

Yeah, I thought about that. These datasets can be of undefined length, so they could be millions of rows (though realistically maybe 10,000 max). My n^2 algorithm isn't helping either. I should look up what the maximum allowable timeout is - maybe that would be sufficient.
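
If I remember right, the timeout can be set when the transaction is opened with `system.db.beginTransaction`. Something like this is what I have in mind (the connection name, named query path, and timeout value are placeholders, not what I'm actually running):

	# Rough sketch - open the transaction with a longer timeout (milliseconds)
	tx = system.db.beginTransaction(
		database='MyDatabase',                    # placeholder connection name
		isolationLevel=system.db.READ_COMMITTED,
		timeout=10 * 60 * 1000                    # 10 minutes
	)
	try:
		# run the named queries against tx here
		system.db.runNamedQuery('Some/NamedQuery', tx=tx)
		system.db.commitTransaction(tx)
	finally:
		system.db.closeTransaction(tx)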

This was a case of shooting myself in the foot.

I was iterating through dataset 1 and dataset 2 and turning each row into a Python dictionary (realizing now I was unnecessarily doing it to dataset 1 inside the second loop as well). That was a very costly operation, done just so I could do stuff like rowData['blah']. I could have just turned each dataset into a PyDataSet to do that. What I ended up doing instead was sorting both datasets on the same column first, then using only one loop and getValueAt. That reduced my time from 30 seconds for 500 items to 0.05 seconds for the same dataset. Now the transaction timing out is no longer an issue.
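
Roughly what the one-loop version looks like (with a placeholder key column name, and assuming both datasets share a unique, sortable key):

	# Sketch of the single-pass comparison after sorting both datasets on the key column
	savedSorted = system.dataset.sort(savedList, 'id')      # 'id' is a placeholder column
	initialSorted = system.dataset.sort(initialList, 'id')

	i = 0
	j = 0
	onlyInSaved = []
	onlyInInitial = []
	inBoth = []
	while i < savedSorted.rowCount and j < initialSorted.rowCount:
		savedKey = savedSorted.getValueAt(i, 'id')
		initialKey = initialSorted.getValueAt(j, 'id')
		if savedKey == initialKey:
			inBoth.append(savedKey)        # present in both; compare other columns here
			i += 1
			j += 1
		elif savedKey < initialKey:
			onlyInSaved.append(savedKey)   # only in the saved list
			i += 1
		else:
			onlyInInitial.append(initialKey)  # only in the initial list
			j += 1
	# whatever is left over only exists in one of the two datasets
	onlyInSaved.extend(savedSorted.getValueAt(r, 'id') for r in range(i, savedSorted.rowCount))
	onlyInInitial.extend(initialSorted.getValueAt(r, 'id') for r in range(j, initialSorted.rowCount))

That turns the n^2 comparison into a single pass over each dataset after the sorts.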

Whoops!
