To answer your questions and provide feedback, here is what I can say:
The reason the IT dept is looking at SQLTags is that we have tried two products: one is SQL Diagnostic and the other is SQL Deadlock Detector. Using these two products, we noticed a lot of deadlocks and deadlock wait time on this server. When we compared its performance to our other SQL servers, those servers showed no deadlocks and no deadlock wait time. When we ran Deadlock Detector, the vast majority of what we saw flagged in red were queries involving the table SQLT_CORE. Looking at this forum, the IT dept and I saw that this is the core table for SQLTags, so it was believed that SQLTags was causing the deadlocks, since that is what was reported.
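Rather than relying only on a third-party tool's red flags, it may be worth having IT capture the actual deadlock graphs. A minimal sketch, assuming SQL Server 2005 and sysadmin rights (the trace flag and DMV below are standard in that version):

```sql
-- Write full deadlock graphs to the SQL Server error log (trace flag 1222,
-- SQL Server 2005). This shows exactly which statements and resources
-- participate in each deadlock, not just which table appears most often.
DBCC TRACEON (1222, -1);

-- Separately, inspect current blocking: any row with blocking_session_id <> 0
-- is a session waiting on another session's locks.
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```

If SQLT_CORE shows up in the deadlock graphs only as the victim of locks taken by other, slower queries, that would support the idea that the SQLTags table is flagged without being the root cause.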
There are a number of tables and views in this SQL instance, and some of these tables “jump” across to other SQL instances to get data from there. One example is the JobTask view. We have a SQL server dedicated to MTSoft, which is our MES system; all order entry goes into it, so all mail jobs are placed there, and the information is then filtered over to our data collection server for use there. Another good example is the employee database, which is fed from the CeleriTime server, our timeclock system.

As for the views and tables: yes, there are quite a few queries that use three or more views and/or tables. These are the really sluggish queries, and it is while they run that we get TIMEOUT and SQL SERVER NOT CONNECTED errors in the IA software. As I mentioned, quite a few reports come off the data in this SQL instance. Some of the reports are in Crystal and some are in IA, but all use extensive joins and unions, and some use WHERE clauses, some on the server and some in FPMI, depending on what is needed out of the table. One thing we did do was look at what data was needed and where, and then, instead of creating many, many views, we pull the information into one table and separate out what we need at the FPMI level, since most of the data is the same.
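Since the sluggish multi-view joins are what coincide with the timeouts, one cheap check is to ask SQL Server itself which indexes those queries are missing. A sketch, assuming SQL Server 2005 (these missing-index DMVs exist from that version on; the table names in the output will be your own):

```sql
-- Top missing-index suggestions, ranked by a rough estimate of impact.
-- This only reflects queries run since the last restart.
SELECT TOP 10
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.avg_user_impact,
    s.user_seeks
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
    ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```

If the big report joins are scanning unindexed columns, they will hold locks far longer than they need to, which is exactly the situation that turns a busy table like SQLT_CORE into a deadlock hotspot.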
As for the transaction log, I know our IT staff has been watching it closely, mainly because of what happened before we went live with the system. If you look on this forum you will see a posting from many months back where our transaction log grew so large it caused all kinds of problems.
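For ongoing monitoring, SQL Server can report log usage directly, and if point-in-time recovery isn't required for the high-churn history data, the SIMPLE recovery model keeps the log from growing between backups. A sketch (the database name is a placeholder, not one of yours):

```sql
-- Report log file size and percent used for every database on the instance.
DBCC SQLPERF (LOGSPACE);

-- Hypothetical example: switch a history database to the SIMPLE recovery
-- model so committed transactions are truncated from the log automatically.
-- Trade-off: you lose point-in-time restore for that database.
ALTER DATABASE YourHistoryDB SET RECOVERY SIMPLE;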
Now, on to the paging and RAM issues: the software products we ran on the SQL instance also have nice little graphs depicting memory usage, hard drive usage, etc. One of the things they showed was that there was no paging and all processes were staying within their memory allocation. Right now the server is running Windows Server 2005 (I think it might be 2003) and SQL Server 2005; it has plenty of hard drive space, and although it only sees 4 GB of memory, the system actually has 6 GB installed.
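If the installation is 32-bit (an assumption; check with IT), the 4 GB ceiling is simply the 32-bit address space, and the remaining 2 GB is invisible to SQL Server until AWE is enabled. A sketch of the standard SQL Server 2005 procedure, assuming 32-bit SQL Server on 32-bit Windows:

```sql
-- Enable AWE so the buffer pool can use memory above 4 GB.
-- Also requires the /PAE boot switch in Windows and the
-- "Lock pages in memory" right for the SQL Server service account.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```

On a 64-bit installation none of this applies and the missing 2 GB would point to a max server memory setting or another consumer instead.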
As for the bottlenecks: don't these deadlocks create bottlenecks, or aren't they themselves created by bottlenecks? And do you have any better software products that might be faster or more efficient than the ones we are currently evaluating? Because if SQLT_CORE is not the root cause, then why is every query touching that table red-flagged with deadlocks?
Thanks and have a great day.
[quote=“nathan”]There’s no reason that you can’t get a project of your size to perform well. My thoughts:
I doubt that the SQLTags tables are the cause of your deadlocks, even though they may be getting flagged. The reason is that the SQLTags system frequently performs a fast query - look at the gateway page for the numbers. My guess is that external problems will show up as apparent problems in the SQLTags queries.
One thing that can hammer database systems is really slow queries, typically written without “LIMIT” or proper “WHERE” clauses. This can often happen if you don’t have appropriate fields keyed. Places to worry here are with reports and graphs. FSQL logging is lightweight on an SQL server, but poorly formed large reads are not.
Another thing to look at with SQL server is the transaction log that allows undo-ing queries. It’s a great feature for low volume, high importance applications, but irrelevant when you have lots of little updates (like temperature changes). Look to disable this feature on tables where appropriate.
All that said, I’m a huge fan of the “Scaling out” approach of moving databases to other computers. This is totally seamless to the user. I suspect that your history table is a good candidate to move to a separate server. SQLTags maybe…
Bottom line - you need to determine exactly what’s slowing down your system. Check RAM, the drive where data is stored, etc. Excessive paging is bad. The best place to investigate is by watching queries including how long they take to run. Do any other applications use that database?
Figure out where your bottleneck is, then you can expand to accommodate. IT should be able to help you - be sure to “de-mythify” the software; there’s nothing magical going on in the background.[/quote]
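On the quoted advice about watching queries and how long they take to run: SQL Server 2005 keeps aggregate timings for cached query plans, so you can rank statements by elapsed time without installing anything. A sketch (results only cover queries run since the plan cache was last cleared):

```sql
-- Top statements by total elapsed time (microseconds), with their text.
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_elapsed_time DESC;
```

If the top of this list is the big Crystal/IA report joins rather than anything touching SQLT_CORE, that would answer the root-cause question fairly directly.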