High SQL Server Memory Usage

I’ve noticed that SQL Server is slowly grabbing all available memory on our server, which is also running Ignition among other things. After some analysis I noticed that one particular database appears to be the culprit. It is only being accessed via the Ignition SQL Bridge. The database in question is quite large. Other database projects with Ignition, some of which have a lot more going on, don’t seem to use nearly as much memory. What’s going on here?

Kurt

Hi,

Fundamentally, there’s no reason why the memory should just grow and never be recovered. On the other hand, I’ve noticed that SQL Server (especially newer versions) isn’t shy about using memory. If you’re not actually running into a problem with memory, there may be nothing to worry about: SQL Server might just be holding on to it for performance, since it thinks it can.

The “delete old records” option does the following (see the sketch after the list):

  1. Queries the table for the max index id whose timestamp is less than the appropriate time.
  2. Deletes all rows where the id is less than the returned value.
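
For illustration, the two statements might look roughly like this (the table and column names follow the query further down, and 12345 is a made-up stand-in for the value step 1 returns):

-- step 1: find the highest index id older than the cutoff
select max(tablename_ndx) from tablename where t_stamp < '2011-02-08 00:00:00'

-- step 2: delete everything below that id
delete from tablename where tablename_ndx < 12345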

This might seem convoluted, but it’s based on the idea that it’s faster to delete based on the primary id column of the table. At any rate, if the memory usage really is coming from that, it would have to come from either the “select” query or the “delete” query. In neither case can I imagine that this would be a permanent memory leak, but I could imagine that a fair amount of memory could get used in these operations. For the select, if t_stamp isn’t indexed, SQL Server probably needs to load a lot of data into memory and sort it. For the delete, it probably loads all matching records in order to roll back if the transaction fails (even a single query is a transaction). In the case of “older than 1 year”, the query runs every hour, which means it would very likely be the “select” that’s the problem; the “delete” probably wouldn’t match many rows, only about an hour’s worth of data from last year.

So, first things first, check the table in SQL Manager: is there an index on t_stamp? The SQL Bridge creates one now, but didn’t always. It’s possible this table doesn’t have one, and I think that could lead to a fair amount of overhead.
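
You can also check from a query window; something like this should work (tablename is a placeholder, and the index name is just a suggestion):

-- list the existing indexes on the table
exec sp_helpindex 'tablename'

-- if t_stamp isn't covered, an index could be added like this
create index ix_tablename_tstamp on tablename (t_stamp)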

You could also try running this query in “explain” mode in the query manager, and see what it says:

select max(tablename_ndx) from tablename where t_stamp < '2011-02-08 00:00:00'

with your correct tablename and index, of course.
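
If the query manager doesn’t have an explain mode, SQL Server can print the estimated plan itself; a minimal sketch for a tool like Management Studio or sqlcmd:

-- show the estimated plan without actually running the query
set showplan_text on
go
select max(tablename_ndx) from tablename where t_stamp < '2011-02-08 00:00:00'
go
set showplan_text off
go

If the plan shows a table scan instead of a seek on a t_stamp index, that’s a good hint the select is doing the sorting in memory.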

Regards,

Without the ‘delete old records’ option, memory still grows. Currently the page count for the db in question is 6253, whereas everything else is in the ‘hundreds’ but also growing. Each table has a primary key called ‘ndx’. At some point SQL Server will grab all the server’s memory, so this will become a problem.
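
For anyone checking the same thing, a query along these lines breaks the buffer pool down per database, which should show where those pages are going (this assumes SQL Server 2005 or later for the DMV):

-- pages currently held in the buffer pool per database; each page is 8 KB
select db_name(database_id) as database_name,
       count(*) as page_count,
       count(*) * 8 / 1024 as buffer_mb
from sys.dm_os_buffer_descriptors
group by database_id
order by page_count desc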

Look at the properties for your SQL Server instance. Do you have AWE enabled to allocate memory? What are the max/min memory limits set to (compared to your server’s memory level)?
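
A quick way to read those settings back from a query window is sp_configure (‘show advanced options’ has to be on first):

-- enable display of the advanced settings
exec sp_configure 'show advanced options', 1
reconfigure

-- print the current values without changing anything
exec sp_configure 'awe enabled'
exec sp_configure 'min server memory (MB)'
exec sp_configure 'max server memory (MB)'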

The server is set to increase memory according to its needs. I believe the minimum ‘max’ figure you can set is 4GB, which is the total amount on our server (hopefully this will be upgraded shortly).
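
If it does become a problem before the upgrade, I assume the interim fix would be capping the buffer pool with ‘max server memory’; a rough sketch (3072 MB is just an example figure, leaving roughly 1GB for Ignition and the OS on a 4GB box):

-- cap how much memory SQL Server may claim for its buffer pool
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'max server memory (MB)', 3072
reconfigure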