I have a table with half a million records that my application uses continuously, with several UPDATE and SELECT statements (about 5 requests/sec).
After several (4-5) hours performance degrades, but if I update the statistics on this table everything goes back to normal.
For now I have created a job to maintain this table by updating its statistics twice a day ....
Is this normal? Shouldn't SQL Server update the statistics by itself?
Have I chosen the wrong approach, or ... what can I do?
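For reference, this is roughly what the twice-daily job runs; dbo.MyTable is just a placeholder for the real table name, and the sp_autostats check is only something I am considering, not something I am sure is needed:

-- refresh all statistics on the table, reading every row
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
-- check whether auto-update of statistics is switched on for the table's statistics
EXEC sp_autostats 'dbo.MyTable';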
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense - but when I create the missing statistics, Query Analyzer creates the statistics as what amounts to an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this and knows a good explanation.

Rgds
Jesper
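PS - in case it matters, this is how I create the missing statistic when I test this by hand (st_Customers_Country is just a name I made up):

-- create the single-column statistic the optimizer is asking for
CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country);
-- inspect the histogram it produces
DBCC SHOW_STATISTICS ('dbo.Customers', st_Customers_Country);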
What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0's, but if I deliberately slow a query down I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
Any general tips and suggestions on this? Say I'm running 2000 with a database and later discover that some parts of the application/system only work in 7.0. So I want to downgrade the SQL Server version from 2000 back to the old 7.0. Reinstall with attach/detach of the db? Backup and restore? Is 2000 backward compatible with 7.0 in any way?
I would like to share with you the following performance issue:
Configuration-Server
SQL 2005 workgroup edition
Windows 2003 server Small business
2 cpu
3.5 Gbytes Ram
RAID 0
boot.ini /3Gb userva=2560
Page file 2-4 Gbytes on each drive (two drives)
mdf file ~ 300 Mbytes
SQL dedicated server
Configuration-Clients
Windows XP
1 Gbyte RAM
Visual studio 2005 application
9 users
The problem
The users work smoothly for several days, and the SQL service runs continuously. After a few days we get complaints from the users that the system is slow, ESPECIALLY when they execute specific queries.
What we have done
1. We refined several queries.
2. We monitored several counters, especially those that reveal performance problems. They all return reasonable values.
3. We defragmented the hard disks.
4. We stopped the services that are not required.
5. We reindex and update statistics every night.
What we found
When the users are complaining we see CPU spikes for sqlservr.exe. We tried to find the reason but failed. The page file usage is around 2.1 Gbytes, and SQL Server memory is around 1.5 Gbytes.
What we do not understand is why the system is not slow right after we restart the SQL Server service, and why it becomes slow again after several days.
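If it helps, this is the kind of check we are thinking of running the next time the CPU spikes appear, to see which statements are the heaviest consumers (it assumes the SQL 2005 DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text; we have not relied on it yet):

-- top statements by average CPU time since they were cached
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_microseconds DESC;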
I have a database of about 5 GB in size. Some queries were taking more than 1 minute to complete (all of them are stored procedures). Because of that lack of performance, I ran DBCC DBREINDEX for each table, executed the sp_updatestats system stored procedure, and finally executed the sp_recompile system stored procedure for each stored procedure in my database.
After all these tasks, queries completed in a matter of seconds instead of minutes. Strangely enough, some hours later (about 6 hrs), after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.
I am assuming that the indexes are degrading so fast that they require another reindex, but I am not sure.
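For clarity, this is roughly the maintenance pass I ran (the table and procedure names below are placeholders; in reality I looped over every table and every stored procedure):

-- rebuild all indexes on a table (repeated per table)
DBCC DBREINDEX ('dbo.MyTable');
-- refresh statistics for all tables in the current database
EXEC sp_updatestats;
-- mark a stored procedure for recompilation on its next execution (repeated per procedure)
EXEC sp_recompile 'dbo.MyProc';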
Hello guys, I have a question. I want to set up a SQL alert that will monitor a particular counter (e.g. Memory: Pages/sec) and send me an email when it reaches a particular threshold. My question is: if I set this up, will SQL Server start running perfmon in the background and degrade the server's performance, or will it just read the values from some system file or table?
I don't know whether the perfmon counter values are stored in any system files or not.
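For what it's worth, this is the sort of definition I had in mind, sketched with an instance-level counter since I am not sure Memory: Pages/sec is even available to SQL Agent performance alerts; the alert name, counter, threshold, and operator name are all placeholders:

-- define a performance-condition alert (format: object|counter|instance|comparator|value)
EXEC msdb.dbo.sp_add_alert
    @name = N'Low page life expectancy',
    @performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300',
    @enabled = 1;
-- email an operator when the alert fires (the operator must already exist)
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Low page life expectancy',
    @operator_name = N'DBA Team',
    @notification_method = 1;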
I'm seeing a few deadlocks on my SQL 2005 production database and wondered what I could expect if I switch on trace flag 1222. Can this cause more deadlocks to occur? Will SQL Server's performance degrade? Is it safe to have this flag set on a production server, or is there another method you would recommend?
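For context, my understanding is that the flag would be enabled like this (the -1 makes it instance-wide; I gather the -T1222 startup parameter is the way to make it survive restarts, but I have not tried either yet):

DBCC TRACEON (1222, -1);   -- write deadlock graphs to the SQL Server error log
DBCC TRACESTATUS (1222);   -- confirm the flag is active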
We recently implemented merge replication. The replication is between 2 SQL Servers (2005) on the same network, and since we introduced the replication, performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that it is a unidirectional setup: the flow of changes is from publisher to subscriber only (one publisher, which is also the distributor, and one subscriber).
2) Updates are higher than inserts; a single article, let's say "Article1", has up to 2000 updates per day, and I am seeing that dbo.MSmerge_upd_sp_Article1_GUID is taking more CPU time. What should we do?
On the subscriber database, response time is getting slow and I am experiencing a large number of lock timeouts on the application end.
Can anyone also suggest server-level settings for avoiding lock timeouts?
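To show what I am seeing, this is a sketch of the check I intend to run on the subscriber while the timeouts are happening (it assumes the SQL 2005 DMV sys.dm_exec_requests); I mention it in case it helps answer the question:

-- sessions currently blocked by another session
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;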
As part of my automagical nightly index maintenance application, I am seeing a fairly regular (3-4 failures out of 5 attempts per week) failure on one particular table in my database. The particular line which seems to be failing is this one:
DBCC SHOWCONTIG (WON_Staging_EPSEst) WITH FAST, TABLERESULTS, ALL_INDEXES
The log reports the following transgression(s):

Msg 2767, Sev 16: Could not locate statistics 'WON_Staging_EpsEst' in the system catalogs. [SQLSTATE 42000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Simple ReIndex for [WON_Staging_EpsEst].[IX_WON_Staging_EpsEst] [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Post-Maintenance Statistics Report for WON_Staging_EpsEst [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, IX_WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2768, Sev 16: Statistics for INDEX 'IX_WON_Staging_EpsEst'. [SQLSTATE 01000]

Updated            Rows    Rows Sampled  Steps  Density       Average key length
Aug 3 2007 3:22AM  674609  674609        196    2.0958368E-4  8.0

(1 row(s) affected)
This table is dropped and recreated each day during a data import job. After the table is recreated and repopulated with data (using a bulk import from a flat file), the index is also recreated using the following code:

CREATE INDEX [IX_WON_Staging_EpsEst] ON [dbo].[WON_Staging_EpsEst](OSID, [Year], Period) ON [PRIMARY]

Yet more often than not, that evening, when the index maintenance job runs, it fails with the aforepasted messages complaining of being unable to find table/index statistics.
Worth noting, perhaps, is that this same process is used on roughly 10 data staging tables in this database each day, and none of the other tables fail during the index maintenance job.
Also worth noting, perhaps, is that this IDENTICAL table/code is processed in exactly the same way on TWO other servers, and the failure has not occurred in any of the jobs on those other two servers (these other two servers are identical mirrors of the one failing, and contain all the same data, indices, and everything else).
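One thing I plan to check the next time it fails is whether the statistics rows are actually present in the catalog at that moment - just a sketch, using the old sysindexes view:

-- list the index and statistics entries for the staging table
SELECT name, indid, rowmodctr, rowcnt
FROM sysindexes
WHERE id = OBJECT_ID('dbo.WON_Staging_EpsEst');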
Any thoughts, suggestions for where to look, or unrestrained abusive comments regarding my ancestry?
I have a small doubt. If we run a statistics command on a particular table, what exactly does it update? And are statistics normally created automatically by the server, or do we have to create them ourselves?
Anybody know how many companies worldwide use SQL Server and how many individual servers this amounts to? Also, at what rate is SQL Server use growing? Can someone at least point me to a source where I could find close to exact numbers?
Here is my data sample on which I need to perform some stats:

Time(Sec)  Result
1          2
2          8
3          6
4          2
5          2
6          4
7          2
8          7
9          8

What I need from this is a result set that looks as follows:

GroupNo  Value
1        5.33
2        2.67
3        5.67
This is a grouping of the result data in threes by time; Value is the average of the group. I need to write a SELECT statement to do this. Note that the group size could be anywhere from 1 to 10.
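This is as far as I have got - a sketch of the integer-division grouping I have in mind (dbo.Results and the column names are just what I called them from the sample above, and the hard-coded 3 would need to change to support group sizes from 1 to 10) - but I am not sure it is the right approach:

-- group rows into buckets of 3 consecutive Time values and average each bucket
SELECT ([Time] - 1) / 3 + 1              AS GroupNo,
       AVG(CAST(Result AS DECIMAL(9,2))) AS Value
FROM dbo.Results
GROUP BY ([Time] - 1) / 3 + 1
ORDER BY GroupNo;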
The end result of this is to display a range chart which shows results grouped according to requirements. Any help would be nice. Pargat
I would like to know how I can drop statistics from tables. My user tables have two indexes and some statistics created on them. I would like to drop the statistics, and I would appreciate it if someone could please advise.
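In case it clarifies what I am after, this is the sort of thing I have pieced together so far (the table and statistic names are placeholders):

-- list the statistics (and indexes) that exist on the table
EXEC sp_helpstats 'dbo.MyTable', 'ALL';
-- drop a single named statistic without touching the indexes
DROP STATISTICS dbo.MyTable.MyStatistic;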
What are some ways to analyze index coverage and usage? I have an 18 GB database; half is data, the other half is indexes, and I want to cut that number down as much as I can without affecting performance. Thanks
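If it helps frame answers, I have been toying with the sketch below, which assumes SQL 2005's sys.dm_db_index_usage_stats view (I realise it will not exist on 2000):

-- how often each index has been read versus written since the last service restart
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY (s.user_seeks + s.user_scans + s.user_lookups) ASC;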
Does anyone have any generic scripts that drop all the statistics that SQL Server auto-generates? I have hundreds of '_WA_...' statistics that were auto-created by SQL Server 7 and I just want to get rid of ALL of them. Thanks!
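To show what I mean, this is the rough generator I have so far; it only prints the DROP STATISTICS statements, and the INDEXPROPERTY/IsAutoStatistics check plus the single-part object names are assumptions on my part:

-- build DROP STATISTICS commands for every auto-created ('_WA_...') statistic
SELECT 'DROP STATISTICS ' + OBJECT_NAME(id) + '.[' + name + ']'
FROM sysindexes
WHERE name LIKE '[_]WA[_]%'
  AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 1;
-- review the output, then run the generated statements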
I have been monitoring some indexes on a table with a lot of inserts, no updates and no deletes. I want to determine when to update the statistics on the index. Does anyone know what would be a good target range for the density when you run DBCC SHOW_STATISTICS?
When the "create statistics" command is run, what table entries are made into system tables?
I want to check for the existence of statistics on certain columns and if they are not there, create them. What is a good way to see if they are already created?
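For what it's worth, this is the sketch I have so far; it assumes SQL 2005's sys.stats catalog views, and dbo.MyTable, MyColumn, and st_MyTable_MyColumn are placeholders:

-- create a column statistic only if no statistic already leads on that column
IF NOT EXISTS (
    SELECT 1
    FROM sys.stats AS s
    JOIN sys.stats_columns AS sc
      ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
    JOIN sys.columns AS c
      ON c.object_id = sc.object_id AND c.column_id = sc.column_id
    WHERE s.object_id = OBJECT_ID('dbo.MyTable')
      AND c.name = 'MyColumn'
      AND sc.stats_column_id = 1          -- leading column of the statistic
)
    CREATE STATISTICS st_MyTable_MyColumn ON dbo.MyTable (MyColumn);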
I was wondering how often you should reindex. Looking at DBCC SHOWCONTIG and the statistics, I see that I am heavily fragmented and scan density is between 10-30% on my important indexes. I'm thinking of scheduling this to be done nightly. Any help is much appreciated.
I am contemplating creating a job to execute every 5 minutes which will update index statistics if they are more than, say, 8% out. I would like to know what thoughts people have on this, i.e. pros and cons.
I look forward to what you have to say.
I have auto stats on. Our stats are often more than 10% out. At what level do you reckon the query plan might be affected by out-of-date stats?
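For completeness, the "8% out" test I have in mind is based on rowmodctr in sysindexes, roughly as below; I am aware rowmodctr is only an approximation, which is partly why I am asking:

-- indexes whose rows modified since the last statistics update exceed 8% of the row count
SELECT OBJECT_NAME(id) AS table_name,
       name            AS index_name,
       rowmodctr, rowcnt,
       100.0 * rowmodctr / rowcnt AS pct_modified
FROM sysindexes
WHERE indid BETWEEN 1 AND 254
  AND rowcnt > 0
  AND 100.0 * rowmodctr / rowcnt > 8;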
It seems to me there are many ways to update statistics for a table, e.g. sp_updatestats, sp_recompile, and DBCC UPDATEUSAGE.
Can somebody tell me the difference between those commands and what the best way is to update your statistics? Does reindexing update the statistics?
Can I copy statistics in SQL Server from one environment to another without copying the actual data, for example from production to development? It is possible to copy statistics in other databases, like DB2/UDB and Oracle. The reason is to execute some poorly performing SQL and analyze its execution plans.
I did not find anything on this subject in BOL. Since statistics are stored in the statblob column of sysindexes, I tried updating the statblob column of the index, and the rowcnt columns of the table and index, to mimic the copy. After my updates, the 'TO' table showed results identical to the statistics of my 'FROM' table.
But when I execute a small SELECT on the FROM table (which contains the original, required stats) and on the TO table (where the statistics are now copied), I get 2 different execution plans. This means I have not succeeded in my attempt to cheat the optimizer.
I am supporting a SQL Server 6.5 database that users query using pre-configured reports. The reports use views, stored procedures, and triggers set up by the programmers and are accessed through a client on the workstation.
I need to be able to count which users log in (SQL Server security), how often, and either which reports they use or which tables they select from.
I do not have access to the Windows NT server, so the solution has to work with SQL Server SA rights.
Sawmill Enterprise (www.sawmill.net) has IIS log support directly from an SQL server. There are other products out there as well, like AWStats (http://www.sawmill.net/) and WebLog Expert, that do this, but they use the actual log files.
Does anyone have suggestions on where I can look? Thank you.
To update statistics for the entire DB I have taken the script from the link below, but I need to know: 1) what is the sample percent on UPDATE STATISTICS, and 2) will it be applicable for 2005?
script taken from : http://weblogs.sqlteam.com/tarad/archive/2006/08/14/11194.aspx
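While waiting for answers, my reading is that the sample percent simply controls how many rows are read to build each statistic, along these lines (dbo.MyTable is a placeholder; I have not confirmed the behaviour on 2005 myself):

-- read roughly a quarter of the rows when rebuilding the statistics
UPDATE STATISTICS dbo.MyTable WITH SAMPLE 25 PERCENT;
-- read every row (slower, but the most accurate histogram)
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;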
Scenario: For the most part we let SQL Server (2005) maintain our statistics for us. However, we do have several large processes written in stored procedures. There is one main controller procedure that can call any number of other procedures. This process can take anywhere from 5 minutes to an hour+ to run (based on the size of the client). Back in the days of SQL Server 2000 we found that the performance of this procedure would diminish over time (while it was running). We implemented a queued concept of issuing UPDATE STATISTICS commands. This was done by adding a SQL Server job that ran every 10 minutes looking for new records in a table. Records were inserted at key points in these stored procedures (after large deletes, updates, and inserts).
Goal: Now, with all that background and with 2005, I'd like to review this concept and remove this implementation if possible, or at least decouple the statistics maintenance from the business jobs. In 2005, are there better ways to monitor and maintain statistics in a more administrative (but automated) way?
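For example (and this is just a sketch of what I am evaluating, not something we have adopted), on 2005 I can at least see when each statistic was last updated, and there is the new asynchronous auto-update option:

-- last update time for every statistic on a table
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.MyTable');   -- placeholder table name

-- let auto-updates of statistics happen in the background (new 2005 option)
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;   -- placeholder database name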
Hello all,

I've written a stored procedure which runs OK for the first few thousand records it processes, then around about the 10,000th record it suffers a sudden and dramatic drop in performance (from about 40 records per second to about 1 per second).

I've found that when this happens, if I run an UPDATE STATISTICS query on the affected tables, performance picks up again, which is good. However, this is a query that will be running unattended, so I thought that a good idea would be to put the UPDATE STATISTICS statement in the stored procedure and have it run after about eight thousand records have been processed.

However, I find that even though the statement is run after the 8,000th record, the performance drop *still* occurs after the 10,000th record. I find this odd because the statistics have just recently been updated. Is there anything else I should be looking at?

TIA,
--
Aki
naknak at aksoto dot idps dot co dot uk
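PS - in case it is relevant, the mid-run refresh currently looks roughly like this (dbo.MyWorkTable is a placeholder); I added the sp_recompile call on the theory that the already-compiled plan might not pick up the new statistics otherwise, but that is only a guess:

-- refresh the statistics on the table being loaded
UPDATE STATISTICS dbo.MyWorkTable;
-- ask SQL Server to recompile plans that reference the table on their next use
EXEC sp_recompile 'dbo.MyWorkTable';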