Fundamentally, there’s no reason why the memory should just grow and never be recovered. On the other hand, I’ve noticed that SQL Server (especially newer versions) isn’t shy about using memory. If you’re not actually running into a problem with memory, there may be nothing to worry about - SQL Server might just be holding on to it for performance, since it thinks it can.
The “delete old records” option does the following:
- Queries the table for the max index id whose timestamp is less than the appropriate time.
- Deletes where the id is less than the returned value.
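The two steps above can be sketched like this - a minimal stand-in using Python's sqlite3 module, with an illustrative table/column naming (`tablename`, `tablename_ndx`, `t_stamp`) rather than the actual SQL Bridge schema:

```python
import sqlite3

# In-memory stand-in for the history table; names are illustrative,
# not the real SQL Bridge schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tablename (tablename_ndx INTEGER PRIMARY KEY, t_stamp TEXT)"
)
conn.executemany(
    "INSERT INTO tablename VALUES (?, ?)",
    [(i, f"2011-01-{i:02d} 00:00:00") for i in range(1, 11)],
)

cutoff = "2011-01-06 00:00:00"

# Step 1: find the max primary-key id among rows older than the cutoff.
(max_ndx,) = conn.execute(
    "SELECT max(tablename_ndx) FROM tablename WHERE t_stamp < ?", (cutoff,)
).fetchone()

# Step 2: delete on the primary key, as described above - this is cheap
# even if t_stamp itself has no index.
conn.execute("DELETE FROM tablename WHERE tablename_ndx < ?", (max_ndx,))
conn.commit()
```

The point of the two-step dance is that step 2 only ever touches the clustered primary key, which is the fastest column to delete on.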
This might seem convoluted, but it’s based on the idea that it’s faster to delete based on the primary key column of the table. At any rate, if the memory usage really is coming from this, it would have to come from either the “select” query or the “delete” query. In neither case can I imagine a permanent memory leak, but I could imagine a fair amount of memory getting used in these operations. For the select, if t_stamp isn’t indexed, the database probably needs to load a lot of data into memory and sort it. For the delete, it probably loads all matching records so that it can roll back if the transaction fails (even a single query is a transaction). In the case of “older than 1 year”, the query is running every hour. That means it would very likely be the “select” that’s the problem, as the “delete” probably wouldn’t match many rows - only about an hour’s worth of data from a year ago.
So, first things first, check the table in SQL Manager - is there an index on t_stamp? The SQL Bridge creates one, but didn’t always. It’s possible this table doesn’t have one, and I think that could lead to a fair amount of overhead.
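If you want to script that check rather than eyeball it, something like the following would do it - again a sqlite3 sketch, and the index name is just a made-up placeholder, not whatever the SQL Bridge would actually pick:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tablename (tablename_ndx INTEGER PRIMARY KEY, t_stamp TEXT)"
)

def has_tstamp_index(conn):
    # Walk every index on the table and see if any one covers t_stamp.
    for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='tablename'"
    ):
        cols = [row[2] for row in conn.execute(f"PRAGMA index_info({name})")]
        if "t_stamp" in cols:
            return True
    return False

if not has_tstamp_index(conn):
    # Placeholder index name - use whatever naming convention your DB uses.
    conn.execute("CREATE INDEX tablename_t_stamp_ndx ON tablename (t_stamp)")
```

On SQL Server you’d look under the table’s Indexes node in the management tool instead, but the idea is the same: make sure some index includes t_stamp.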
You could also try running this query in “explain” mode in the query manager, and see what it says:
select max(tablename_ndx) from tablename where t_stamp<'2011-02-08 00:00:00'
with your correct tablename and index, of course.