Long-running transaction

Isn’t this supposed to time out?

SELECT st.tagpath AS "Point",
       COALESCE(CASE st.datatype WHEN 0 THEN sd.intvalue WHEN 1 THEN sd.floatvalue END, 0) AS "State",
       CASE sd.dataintegrity WHEN 192 THEN 'Good' WHEN 32 THEN 'Not Reported' WHEN 24 THEN 'Comm Failed' ELSE 'BAD' END AS "Quality",
       substring(sd.stringValue, 'Operator:\\s(\\S+) ') AS "Operator",
       TIMESTAMP WITH TIME ZONE 'epoch' + sd.t_stamp * INTERVAL '1 millisecond' || ' ' AS "Timestamp"
FROM sqlth_te st, sqlt_data_3_20121113 sd
WHERE st.id = sd.tagid
  AND st.datatype = 0
  AND sd.t_stamp > 1352834875059
  AND sd.t_stamp < 1352924875059
  AND st.tagpath LIKE '%%'
UNION
SELECT st.tagpath AS "Point",
       COALESCE(CASE st.datatype WHEN 0 THEN sd.intvalue WHEN 1 THEN sd.floatvalue END, 0) AS "State",
       CASE sd.dataintegrity WHEN 192 THEN 'Good' WHEN 32 THEN 'Not Reported' WHEN 24 THEN 'Comm Failed' ELSE 'BAD' END AS "Quality",
       substring(sd.stringValue, 'Operator:\\s(\\S+) ') AS "Operator",
       TIMESTAMP WITH TIME ZONE 'epoch' + sd.t_stamp * INTERVAL '1 millisecond' || ' ' AS "Timestamp"
FROM sqlth_te st, sqlt_data_3_20121114 sd
WHERE st.id = sd.tagid
  AND st.datatype = 0
  AND sd.t_stamp > 1352834875059
  AND sd.t_stamp < 1352924875059
  AND st.tagpath LIKE '%%'
ORDER BY "Timestamp" DESC

12 minutes, 5 seconds

There are 1.1M rows in this table

It is supposed to time out. Are you seeing this on the database status page? We have seen issues (especially with MySQL) where the query never completes and keeps the connection in the pool open. If you edit the connection and press OK, it should remove those queries. Is that what happened to you?

It’s Postgres.

I restarted the db :smiling_imp:

Yes, this was copied from the status page. There were 12 of these queries running. The box was maxed out at 100% CPU usage. I will try editing the connection next time.

The explain plan for that query is:

"Sort (cost=31933059.77..31960682.89 rows=11049250 width=576)" " Sort Key: ((('1969-12-31 17:00:00-07'::timestamp with time zone + ((sd.t_stamp)::double precision * '00:00:00.001'::interval)) || ' '::text))" " -> Unique (cost=16312439.92..16478178.67 rows=11049250 width=576)" " -> Sort (cost=16312439.92..16340063.04 rows=11049250 width=576)" " Sort Key: st.tagpath, (COALESCE(CASE st.datatype WHEN 0 THEN (sd.intvalue)::double precision WHEN 1 THEN sd.floatvalue ELSE NULL::double precision END, 0::double precision)), (CASE sd.dataintegrity WHEN 192 THEN 'Good'::text WHEN 32 THEN 'Not Reported'::text WHEN 24 THEN 'Comm Failed'::text ELSE 'BAD'::text END), ("substring"((sd.stringvalue)::text, 'Operator:\\s(\\S+) '::text)), ((('1969-12-31 17:00:00-07'::timestamp with time zone + ((sd.t_stamp)::double precision * '00:00:00.001'::interval)) || ' '::text))" " -> Append (cost=104.87..857558.81 rows=11049250 width=576)" " -> Hash Join (cost=104.87..648029.65 rows=9590049 width=576)" " Hash Cond: (sd.tagid = st.id)" " -> Seq Scan on sqlt_data_3_20121113 sd (cost=0.00..240347.70 rows=9590049 width=548)" " Filter: ((t_stamp > 1352834875059::bigint) AND (t_stamp < 1352924875059::bigint))" " -> Hash (cost=79.73..79.73 rows=2011 width=36)" " -> Seq Scan on sqlth_te st (cost=0.00..79.73 rows=2011 width=36)" " Filter: (((tagpath)::text ~~ '%%'::text) AND (datatype = 0))" " -> Hash Join (cost=104.87..99036.67 rows=1459201 width=576)" " Hash Cond: (sd.tagid = st.id)" " -> Seq Scan on sqlt_data_3_20121114 sd (cost=0.00..36915.75 rows=1459201 width=548)" " Filter: ((t_stamp > 1352834875059::bigint) AND (t_stamp < 1352924875059::bigint))" " -> Hash (cost=79.73..79.73 rows=2011 width=36)" " -> Seq Scan on sqlth_te st (cost=0.00..79.73 rows=2011 width=36)" " Filter: (((tagpath)::text ~~ '%%'::text) AND (datatype = 0))"

Given the final sort cost, I guess I shouldn’t be surprised it’s taking so long to return (at least 14 minutes).

Oops, I didn’t realize this was a query on two tables. The first one has 9.9M rows, the second has 1.6M (not bad for 24 hours per table).

I guess things will be a little slow when you’re sorting 6.9M rows (the returned dataset).

OK, the cause of the long-running transaction is my SQL. I can and will deal with that.
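
For reference, one change that should trim that plan considerably, assuming the two per-day partition tables can never contain duplicate rows: switch UNION to UNION ALL, which lets Postgres skip the Unique step and the second big Sort shown above. A rough sketch only (it also drops the trailing || ' ' so the ORDER BY compares timestamps instead of text, and the no-op LIKE '%%' filter):

SELECT st.tagpath AS "Point",
       COALESCE(CASE st.datatype WHEN 0 THEN sd.intvalue WHEN 1 THEN sd.floatvalue END, 0) AS "State",
       CASE sd.dataintegrity WHEN 192 THEN 'Good' WHEN 32 THEN 'Not Reported' WHEN 24 THEN 'Comm Failed' ELSE 'BAD' END AS "Quality",
       substring(sd.stringValue, 'Operator:\\s(\\S+) ') AS "Operator",
       TIMESTAMP WITH TIME ZONE 'epoch' + sd.t_stamp * INTERVAL '1 millisecond' AS "Timestamp"
FROM sqlth_te st
JOIN sqlt_data_3_20121113 sd ON sd.tagid = st.id
WHERE st.datatype = 0
  AND sd.t_stamp > 1352834875059
  AND sd.t_stamp < 1352924875059
UNION ALL                     -- no duplicate elimination, so no extra Sort/Unique pass
SELECT st.tagpath,
       COALESCE(CASE st.datatype WHEN 0 THEN sd.intvalue WHEN 1 THEN sd.floatvalue END, 0),
       CASE sd.dataintegrity WHEN 192 THEN 'Good' WHEN 32 THEN 'Not Reported' WHEN 24 THEN 'Comm Failed' ELSE 'BAD' END,
       substring(sd.stringValue, 'Operator:\\s(\\S+) '),
       TIMESTAMP WITH TIME ZONE 'epoch' + sd.t_stamp * INTERVAL '1 millisecond'
FROM sqlth_te st
JOIN sqlt_data_3_20121114 sd ON sd.tagid = st.id
WHERE st.datatype = 0
  AND sd.t_stamp > 1352834875059
  AND sd.t_stamp < 1352924875059
ORDER BY "Timestamp" DESC;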

Can you make sure that it does time out correctly? I do get the timeout exception after 60 seconds. It’s just not killing the query.

Yes, we are working on that.
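
In the meantime, a server-side statement timeout can act as a safety net so Postgres itself cancels anything the client fails to kill. A minimal sketch, where the role and database names are just placeholders for whatever the gateway actually connects with, and 60s mirrors the client-side timeout:

-- Cancel any statement from this login role after 60 seconds (role name is illustrative).
ALTER ROLE ignition SET statement_timeout = '60s';

-- Or apply it to the whole database (database name is illustrative).
ALTER DATABASE history SET statement_timeout = '60s';

New connections pick the setting up automatically; existing pooled connections keep their old value until they reconnect.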