We are using SQL Server 2005. Auto update statistics and auto create statistics are set to ON for a database that carries a very heavy workload. When I checked the individual statistics, the last updated date is still months old.
Is it necessary to manually update the statistics for this database, or can we rely on "auto update statistics" itself?
At what frequency should manual UPDATE STATISTICS be run on a production system with heavy transactions?
I'm using SQL 7. There is a setting in DB properties called "Auto update statistics". What kind of statistics does this refer to, and how can these stats be accessed?
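For context, that setting governs the distribution statistics (histograms and densities) SQL Server keeps for each index and for each auto-created _WA_Sys_... column statistic. A minimal way to inspect them on SQL 7, assuming a hypothetical table named Orders with an index named IX_Orders_Customer:

-- List every statistics object on the table, including auto-created _WA_Sys_... ones
EXEC sp_helpstats 'Orders'

-- Dump the density and histogram information behind one index or statistic
DBCC SHOW_STATISTICS ('Orders', 'IX_Orders_Customer')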
Over the past week and a half we started experiencing a sporadic slowdown in our production x64 SQL 2005 Ent. Edition server. Users started complaining of slowness, then they started getting timeouts. In looking at sp_who2 and perfmon we saw the following during the slow/frozen periods:

* Dramatic increase in Perfmon Active Transactions
* CPU higher than norm, but not dramatically so
* sp_who2 shows a number of spids in SUSPENDED state (and not running waits)
* no blocking indicated from sp_who2
* active connections slowly increasing
* no disk queuing (or at most some spikes to 1)

After a couple of minutes of this we would then see the following:

* no more spids in SUSPENDED state
* Logins per second spikes dramatically
* Active transactions spikes down to "normal levels"
* CPU goes high then levels out at moderately higher than normal
* active connections slowly decreases back towards normal levels
* large spike in lock wait time
We turned on the Async Auto Update Statistics option (after testing in our staging environment) on the primary database about a week before we saw this problem. By turning it off we can visually see the problem go away by watching the above metrics. So my question is: what metrics can I use to see the "blocking" or resource locking that is causing these problems? Anyone? Thx Ron
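The 2005 DMVs are the most direct way to see what suspended sessions are actually waiting on; sp_who2 only surfaces lock blocking, not other resource waits. A sketch, to be run during one of the slow periods:

-- What each suspended request is waiting on right now
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE status = 'suspended'

-- Cumulative waits since the last restart; sort to find the dominant wait type
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC

If the async-stats change is the culprit, the snapshot taken during a freeze should show the dominant wait type tied to the statistics refresh rather than ordinary lock waits, but that is something the queries above have to confirm.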
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT, the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why? (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense, but if creating the missing statistics, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this and knows a good explanation.

Rgds
Jesper
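For what it's worth, creating the suggested statistic does not change the plan shape here (a scan is the only access path without an index on Country); what it changes is the optimizer's row estimate for the predicate, which matters once that estimate feeds joins, sort buffers, or memory grants in larger queries. A minimal illustration against Northwind, using a hypothetical statistics name:

-- Build the column statistic the optimizer reported as missing
CREATE STATISTICS st_Customers_Country ON Customers (Country)

-- Inspect the histogram the optimizer now has for Country
DBCC SHOW_STATISTICS ('Customers', 'st_Customers_Country')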
I know that statistics called _WA_... are created on tables when auto create statistics is set on a database. Is this an indication that queries against the table would perform better if indexes were created on the columns in question? (The tables I'm interested in optimising are used equally for transactional querying and reporting)
My question: Is it okay to drop all the auto generated column statistics? (for the following scenario)
- I am cleaning up unnecessary objects (tables, unused indexes, overlapping statistics, etc.) from databases and found there are more than 1400 auto-generated column statistics on one database (let's call it A).
- Database A used to be our reporting database, but for the last several years we have been using database B for reporting. DB A has all the historical data while DB B only has valid records.
- We are updating all the column statistics with full scan nightly on database A and it is taking almost 2.5 hours. Now I want to drop all the "unnecessary" statistics that were created when DB A was the reporting database and are no longer in use. There is no way that I know of to find the creation date of a column statistic, and the "last updated" date is of no use because of our nightly job. So I was thinking of dropping all the auto-generated column statistics and letting SQL Server create what it needs from now on.
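A sketch of how that cleanup could be scripted on SQL 2005 or later (review the generated statements before running them; SQL Server will simply re-create any statistic the optimizer needs, at the cost of a one-time sampling hit per statistic):

-- Generate DROP STATISTICS statements for every auto-created column statistic
SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.'
     + QUOTENAME(o.name) + '.' + QUOTENAME(s.name)
FROM sys.stats s
JOIN sys.objects o ON o.object_id = s.object_id
WHERE s.auto_created = 1
  AND o.type = 'U'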
We have implemented a very small reporting database which has a main table that started off small and has now grown to around half a million rows. Initially, there were no indexes on the table apart from a clustered index, but as the data has grown, performance has dropped and so we have added a number of indexes. This has resolved the performance issues.
Before creating the indexes, SQL Server had auto-created a number of statistics objects (_WA_Sys_000... etc). After creating the indexes, new statistics objects were created for the new indexes. In some cases there are duplicate statistics (auto and index) for the same columns. Should I go through and drop the duplicate auto statistics? Will having duplicates cause issues at all?
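One way to find the overlaps, assuming SQL 2005-style catalog views: match each auto-created statistic's column against the leading key column of the table's indexes.

-- Auto-created stats whose column is already the leading key of an index
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS auto_stat,
       c.name AS column_name,
       i.name AS index_name
FROM sys.stats s
JOIN sys.stats_columns sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id AND sc.stats_column_id = 1
JOIN sys.columns c
  ON c.object_id = sc.object_id AND c.column_id = sc.column_id
JOIN sys.index_columns ic
  ON ic.object_id = s.object_id AND ic.column_id = c.column_id AND ic.key_ordinal = 1
JOIN sys.indexes i
  ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE s.auto_created = 1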
I am contemplating creating a job to execute every 5 minutes which will update index statistics if they are more than, say, 8% out. I would like to know what thoughts people have on this, i.e. pros and cons.
I look forward to what you have to say.
I have auto stats on. Our stats are often more than 10% out. At what level do you reckon the query plan might be affected by out-of-date stats?
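On SQL 2000, and approximately on 2005, sysindexes.rowmodctr gives a rough measure of how far a table's statistics have drifted, which is a cheaper basis for a threshold job than guessing. A sketch:

-- Rows modified since the last statistics update, as a percentage of the table
SELECT OBJECT_NAME(id) AS table_name, name,
       rowcnt, rowmodctr,
       CAST(100.0 * rowmodctr / NULLIF(rowcnt, 0) AS DECIMAL(6,1)) AS pct_modified
FROM sysindexes
WHERE indid IN (0, 1)
  AND rowcnt > 0
  AND OBJECTPROPERTY(id, 'IsUserTable') = 1
ORDER BY pct_modified DESC

Bear in mind that rowmodctr is maintained per statistics object, and on 2005 it is only an approximation derived from column-level counters.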
It seems to me there are many ways to update statistics for a table, e.g. sp_updatestats, sp_recompile, DBCC UPDATEUSAGE.
Can somebody tell me the difference between those commands and the best way to update statistics? Does reindexing update the statistics?
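Roughly, only the first of those three actually maintains optimizer statistics. A side-by-side sketch, with a hypothetical table name dbo.Orders:

-- sp_updatestats: runs UPDATE STATISTICS over every user table with default
-- sampling (on 2005 it skips statistics that have had no modifications)
EXEC sp_updatestats

-- UPDATE STATISTICS: one table (or one statistic), with control over sampling
UPDATE STATISTICS dbo.Orders WITH FULLSCAN

-- sp_recompile: touches no statistics; it only marks cached plans that
-- reference the object so they are recompiled on next use
EXEC sp_recompile 'dbo.Orders'

-- DBCC UPDATEUSAGE: corrects the page/row counts behind sp_spaceused,
-- not the histograms the optimizer uses
DBCC UPDATEUSAGE (0)

As for reindexing: a full index rebuild (DBCC DBREINDEX, or ALTER INDEX ... REBUILD on 2005) rebuilds the index statistics with the equivalent of a full scan, while a defrag/reorganize does not touch statistics at all.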
To update statistics for an entire DB I have taken the script from the link below. But I need to know: 1) what is the sample percent on UPDATE STATISTICS, and 2) will it be applicable to 2005?
script taken from : http://weblogs.sqlteam.com/tarad/archive/2006/08/14/11194.aspx
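On the sample question: SAMPLE n PERCENT controls how much of the table SQL Server reads to build the histogram, trading accuracy for speed; and yes, the same syntax works on 2005. A sketch with a hypothetical table name:

-- Let SQL Server choose the sample size (the default)
UPDATE STATISTICS dbo.BigTable

-- Read roughly half the table to build the histogram
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 50 PERCENT

-- Read every row: the most accurate and most expensive option
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN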
Hi All, I update statistics for three tables every day at 2:00 AM. The job calls one stored procedure, and in that stored procedure only three statements are written, to update statistics like this:

Exec('update statistics TBL1 with fullscan')
Exec('update statistics TBL2 with fullscan')
Exec('update statistics TBL3 with fullscan')

This job had been working fine for many months, but for the last two days it has been failing with an error message like: "Could not continue scan with NOLOCK due to data movement". Could you help me with what the solution for this is?
I would like to know: when we upgrade a SQL Server 2000 database to SQL Server 2005, is it required to update the statistics even if we rebuild all the indexes or create new indexes?
I am planning to change our current UPDATE STATISTICS strategy, which is auto stats ON. Our database is terabytes in size, with some tables of millions of rows and over 200 indexes on one table. Some of these indexes are not really used. Most of the tables are very small.
Dropping and creating new indexes happens quite often in our environment, so a static script may not help.
How can I identify most frequently used indexes in a table?
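On 2005 the index usage DMV answers this directly, with the caveat that its counters reset at every service restart. A sketch:

-- Reads vs. writes per index since the last restart
SELECT OBJECT_NAME(u.object_id) AS table_name,
       i.name AS index_name,
       u.user_seeks + u.user_scans + u.user_lookups AS reads,
       u.user_updates AS writes
FROM sys.dm_db_index_usage_stats u
JOIN sys.indexes i ON i.object_id = u.object_id AND i.index_id = u.index_id
WHERE u.database_id = DB_ID()
  AND OBJECTPROPERTY(u.object_id, 'IsUserTable') = 1
ORDER BY reads DESC

Indexes that never appear in this view, or show writes but no reads after a representative period, are the candidates for dropping.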
With the Microsoft-recommended auto stats ON, what other best practices can I include to improve efficiency?
Any help would be appreciated. It would be really great if any of you could share some scripts that generate dynamic scripts.
On a SQL 7 SP2 server, I have a database with about 77,000 records. With automatic update statistics on, inserting 1000 records took 43 minutes; with automatic update off, it took 23 minutes to insert the same 1000 records. On the same machine, I inserted 1000 records into 2 other databases with the same database structure and automatic update statistics on. In the second database, which has about 174,000 records, it took 35 minutes to insert 1000 records. In the third database, which has about 93,000 records, it took 19 minutes to insert 1000 records.
I have a query that retrieves a single record by searching on two tables. The statement goes like this:

select sum(amount) from Table1 A union Table2 B on a.id = b.id where date < ### and date > #### and account = ###
As people run a particular report, this statement is executed time and time again to pull up the numbers necessary for the report. When the report gets slow, I can speed it up by updating the statistics. My concern is that I'm having to update the statistics every hour; otherwise, the query becomes slow. I have noticed that users are inserting data into one of the tables listed above while others are running the report. I'm sure that's making it more fragmented and ultimately slowing down the query. Do you have any suggestions on how I can make the union of these two tables faster? Or is there anything I could do to speed up the query besides creating clustered indexes? Any help would be appreciated.... thank you
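The statement as posted mixes UNION and JOIN syntax, so what follows is one guess at the intent: summing amounts drawn from both tables for an account and date range. Under that assumption, a UNION ALL derived table plus a covering index on each side lets both branches seek rather than scan, which also makes the plan far less sensitive to stale statistics. The variables and index names below are hypothetical stand-ins:

-- Stand-ins for the ### literals in the post
DECLARE @from datetime, @to datetime, @account int
SELECT @from = '2005-01-01', @to = '2005-02-01', @account = 42  -- hypothetical values

-- One reading of the intended query
SELECT SUM(u.amount)
FROM (SELECT amount, date, account FROM Table1
      UNION ALL
      SELECT amount, date, account FROM Table2) AS u
WHERE u.date > @from AND u.date < @to AND u.account = @account

-- Covering indexes so each branch is a narrow range seek
CREATE INDEX IX_Table1_account_date ON Table1 (account, date) INCLUDE (amount)
CREATE INDEX IX_Table2_account_date ON Table2 (account, date) INCLUDE (amount)

(INCLUDE requires SQL 2005 or later; on older versions, make amount a trailing key column instead.)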
I am maintaining a large table with millions of rows that has two nonclustered indexes and frequently changing data, and I need to keep the indexes fresh. UPDATE STATISTICS runs much quicker than a reindex. What is the appropriate situation for each, and why? Thanks in advance.
We are upgrading from SQL 7 to 2000. During the upgrade process, do we have to reindex all tables, or will update statistics take care of that?
Or do we have to do both? What is the difference between reindexing and update statistics?
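In short, they maintain different things. A sketch with a hypothetical table name:

-- DBCC DBREINDEX rebuilds the physical indexes (removing fragmentation) and,
-- as a side effect, rebuilds their statistics with a full scan
DBCC DBREINDEX ('dbo.Orders')

-- UPDATE STATISTICS only refreshes the histograms; the physical index is untouched
UPDATE STATISTICS dbo.Orders

So rebuilding the indexes makes a separate statistics update for those indexes redundant, though column statistics on unindexed columns are not covered by a rebuild.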
I have recently defragged my SQL server using INDEXDEFRAG. Can somebody please tell me how to update the statistics on all the tables? Thanks in advance.
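For context, DBCC INDEXDEFRAG does not update statistics (unlike a full rebuild), so a follow-up statistics pass is reasonable. Two common approaches:

-- Lighter: update statistics database-wide with default sampling
EXEC sp_updatestats

-- Heavier: full scan on every user table, via the undocumented but widely used helper
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'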
Below is the script that I executed to defrag all the tables in my database if anyone needs this.
/* Perform a 'USE <database name>' to select the database in which to run the script. */

-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 20.0

-- Work table for the DBCC SHOWCONTIG output (missing from the script as posted;
-- the column list matches DBCC SHOWCONTIG ... WITH TABLERESULTS on SQL 2000)
CREATE TABLE #fraglist (
   ObjectName CHAR (255), ObjectId INT, IndexName CHAR (255), IndexId INT,
   Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
   AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
   AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL, BestCount INT,
   ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

-- Declare cursor
DECLARE tables CURSOR FOR
   SELECT TABLE_NAME
   FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE'

-- Loop through all the tables in the database
-- (OPEN tables was missing from the script as posted)
OPEN tables
FETCH NEXT FROM tables INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
   -- Do the showcontig of all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT FROM tables INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
   SELECT ObjectName, ObjectId, IndexId, LogicalFrag
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
     AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

-- Open the cursor
OPEN indexes

-- Loop through the indexes
-- (the post was cut off here; the remainder follows the well-known Books Online
-- DBCC SHOWCONTIG/INDEXDEFRAG sample that this script is based on)
FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag

WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', ' + RTRIM(CONVERT(VARCHAR(15), @indexid)) + ')'
   SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(CONVERT(VARCHAR(15), @objectid)) + ', ' + RTRIM(CONVERT(VARCHAR(15), @indexid)) + ')'
   EXEC (@execstr)
   FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
END

-- Close and deallocate the cursor, and clean up
CLOSE indexes
DEALLOCATE indexes
DROP TABLE #fraglist
I am looking to run UPDATE STATISTICS for the first time (don't ask why it wasn't done prior, please :( ) on a set of large tables in our 346 GB database, which has been populated with transactional data for the past 4 years. The tables contain 1.2, 35, 64, and 92 million rows. I have used the STATS_DATE function to determine that none of these tables have ever had UPDATE STATISTICS run on them.
My question is, how should I go about this process and what options should I select when issuing the command? I assume that I must first run with the FULLSCAN parameter in order to initially generate statistics for the table, and that following this initial population I could run without any parameters nightly against the tables to keep statistics up to date. Any guidance you all could provide to a newb would be greatly appreciated.
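A sketch of that plan, using a hypothetical table name. One note: each index already carries a statistics object built when the index was created, so the FULLSCAN pass is a (first) refresh rather than an initial generation:

-- One-off baseline: most accurate, but expect a long run on the 92-million-row table
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN

-- Ongoing nightly maintenance: default sampling is usually adequate
UPDATE STATISTICS dbo.BigTable

-- Confirm the refresh took, per statistic on the table
SELECT name, STATS_DATE(id, indid) AS last_updated
FROM sysindexes
WHERE id = OBJECT_ID('dbo.BigTable')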
Hi. I have automatic statistic update turned on for all my databases. Is this an overhead I can do without? Could I update them overnight when the database is hardly in use? Thanks -- Chris Weston
I am using the Maintenance Plan wizard, but it only allows me to select either the "reorganize data and indexes" option or the "update statistics" option (in the Optimizations tab). I can't select both of them. What is the reason for this?
Dear SQL Server experts: First off, I am no SQL Server expert :)

A few months ago I put a database into a production environment. Recently, it was brought to my attention that a particular query that executed quite quickly in our dev environment was painfully slow in production. I analyzed the plan on the production server (it looked good), and then tried quite a few tips that I'd gleaned from reading newsgroups. Nothing worked. Then on a whim I performed an UPDATE STATISTICS on a few of the tables that were being queried. The query immediately went from executing in 61 seconds to under 1 second. I checked to make sure that statistics were being "auto updated", and they were.

Why did I need to run UPDATE STATISTICS? Will I need to again?

A little more background info: the database started empty, and has grown quite rapidly in the last few months. One particular table grows at a rate of about 300,000 records per month. I get fast query times due to a few well placed indexes.

A quick question: if I add an index, do statistics get automatically updated for this new index immediately?

Thanks in advance for any help,
Felix
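On the two questions: yes, creating an index builds its statistics from all rows as part of the index build, so a new index starts with fresh statistics. And the 61-second mystery is consistent with how auto-update works on 2000/2005: it generally fires only after about 500 plus 20% of a table's rows have changed, so on a table growing by 300,000 rows a month the histogram can lag well behind the data between automatic refreshes. A quick check, with a hypothetical table name (sys.stats is the SQL 2005 catalog; on 2000, query sysindexes instead):

-- When was each statistic on the table last updated?
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE s.object_id = OBJECT_ID('dbo.MyTable')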
Is it necessary to schedule a statistics update on indexes in SQL Server 2005 on a daily basis? Is it necessary to schedule an index rebuild in SQL Server 2005 on a daily basis?