How to run a project using fewer SQL Server resources

What confuses me in this case is that the table has well under a million records. Proper table design is important, yes, but this points to severe resource starvation, since queries against even an unindexed table of that size should return within a few milliseconds. MSSQL tends to be rather good about generating temporary indexes as long as it’s not completely out of disk space and RAM, after all.

Just to add one slight detail to this key/index discussion (no intent to start a flamewar): usually, the primary key (when talking about MS SQL) is CLUSTERED - i.e. the table itself gets sorted based on the key column(s), and there is no “copy” of a subset of the data aside from the table - the table is then the index itself. If the index is NON-CLUSTERED, a copy of a subset of the data is created. A primary key can, of course, be either clustered or nonclustered.
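
To illustrate (the table and constraint names below are made up), this is roughly how the two variants look in T-SQL:

-- Default behaviour: the primary key becomes the clustered index, the table is sorted by Id
CREATE TABLE dbo.OrdersClustered (
    Id INT NOT NULL,
    CustomerId INT NOT NULL,
    CONSTRAINT PK_OrdersClustered PRIMARY KEY CLUSTERED (Id)
);

-- Explicitly nonclustered: the key lives in a separate index structure, the table itself stays a heap
CREATE TABLE dbo.OrdersHeap (
    Id INT NOT NULL,
    CustomerId INT NOT NULL,
    CONSTRAINT PK_OrdersHeap PRIMARY KEY NONCLUSTERED (Id)
);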

Best regards,
szcz

Could you try a few maintenance activities on your DB?
1- An index rebuild job (DeFragIndex) that runs weekly to rebuild the indexes - a minimal sketch of what such a job executes is shown after the link below.
2- Purging and archiving rules for your tables.
Check the link below for more details about SQL index maintenance, which will improve performance.
https://www.sqlshack.com/sql-index-maintenance/
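
As a rough sketch of the core statement such a weekly job runs (dbo.BigTable is a placeholder name - a real job like DeFragIndex would typically loop over all tables or check fragmentation levels first):

ALTER INDEX ALL ON dbo.BigTable REBUILD;
-- for lightly fragmented indexes a reorganize is cheaper than a full rebuild:
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;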

From where I stand it looks like you have issues (aside from having your tables improperly indexed) with readers blocking writers and vice versa. Fifty-four waiting tasks is a considerable number of tasks.
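
If you want to see who is blocking whom, something along these lines against the built-in DMVs should show it (no custom objects assumed):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;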

If your MS SQL is at least 2016 SP2, I would suggest starting to use optimistic locking (RCSI and SNAPSHOT isolation - one preserves consistency during a single statement, the other during the whole transaction). The only question is whether your application supports optimistic locking (under some very specific scenarios, you could get inconsistent results when querying).
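
Roughly, enabling both looks like this (YourDb is a placeholder database name; note that switching RCSI on needs a moment when no other connections are open against the database):

ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;   -- RCSI: statement-level row versioning
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- lets sessions use SET TRANSACTION ISOLATION LEVEL SNAPSHOT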

Also, about the indexing - as long as you use only equality (“=”) predicates in your queries, the order of the columns in the index is not that important. It all changes when you start using less than or greater than - the usual rule of thumb is to put the equality columns first and the column used in < or > comparisons last in the index.
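
A sketch with made-up table and column names - StatusId is filtered with “=”, CreatedAt with a range, so the equality column leads:

CREATE NONCLUSTERED INDEX IX_Orders_Status_CreatedAt
    ON dbo.Orders (StatusId, CreatedAt);

-- served well by the index above:
-- SELECT ... FROM dbo.Orders WHERE StatusId = 3 AND CreatedAt >= '20240101';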

Also, at some point of your indexing effort, you have to decide whether you are indexing for sorting or for looking up records.
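
For example (again hypothetical names), these two serve different purposes:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);        -- looking up a customer's orders
CREATE NONCLUSTERED INDEX IX_Orders_CreatedAt
    ON dbo.Orders (CreatedAt DESC);    -- returning the newest orders without a sort step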

The hard-to-swallow pill here is that you cannot index your tables so that each and every query runs fast. Indexing is usually done for a specific query or set of queries.

If I were you, I would look into transaction isolation levels and optimistic locking - however, indexing is equally important, because proper indexing can minimize the locking issues that you have right now.

HTH, take care
szcz

I am afraid rebuilding the indexes won’t get him over the finish line - the application does not seem to do many deletes or updates, so the indexes are unlikely to be badly fragmented in the first place. It seems to only add new rows. What an index rebuild will do is invalidate the cached execution plans for those tables, and the server will have to compile the plans of the queries all over again (CPU intensive). From what I have seen it looks like there isn’t proper indexing in the first place - hence a rebuild won’t do much good, I am afraid.

I support your argument for making the tables smaller.

What could also help is to update SQL statistics:

USE Database;  -- replace "Database" with the actual database name
EXEC sp_msforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';  -- undocumented but widely used helper; runs the command against every user table

The server could then choose better execution plans for the queries, given that the statistics haven’t been updated for a while.
The author of the original post could run it once and check whether it makes at least a tiny difference. If yes, I’d run it again after every 100K-400K new records in the biggest tables.

HTH, Take care,
szcz