I would like to know how I can drop statistics from tables. My user table has two indexes and some statistics created on them. I would like to drop the statistics, and I would appreciate it if someone could please advise.
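For what it's worth, a minimal sketch of the relevant command, assuming a hypothetical table dbo.MyTable carrying a statistic named MyStat. DROP STATISTICS removes only the statistic; statistics that belong to an index cannot be dropped this way, only by dropping the index:

-- drop a single named statistic from a table (hypothetical names)
DROP STATISTICS dbo.MyTable.MyStat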
I have recently defragged my SQL Server database using DBCC INDEXDEFRAG. Can somebody please tell me how to update the statistics on all the tables? Thanks in advance.
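A minimal sketch of the usual answer, assuming default sampling is acceptable; this built-in procedure runs UPDATE STATISTICS against every user table in the current database:

-- refresh statistics for all user tables in the current database
EXEC sp_updatestats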
Below is the script that I executed to defrag all the tables in my database, in case anyone needs it.
/* Perform a 'USE <database name>' to select the database in which to run the script. */
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr   VARCHAR (255)
DECLARE @objectid  INT
DECLARE @indexid   INT
DECLARE @frag      DECIMAL
DECLARE @maxfrag   DECIMAL

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 20.0

-- Declare cursor
DECLARE tables CURSOR FOR
   SELECT TABLE_NAME
   FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE'

-- Create the temporary table that holds the DBCC SHOWCONTIG output
CREATE TABLE #fraglist (
   ObjectName CHAR (255), ObjectId INT, IndexName CHAR (255), IndexId INT,
   Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
   AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
   AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL, BestCount INT,
   ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT FROM tables INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
   -- Do the showcontig of all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
      WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT FROM tables INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
   SELECT ObjectName, ObjectId, IndexId, LogicalFrag
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
      AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

-- Open the cursor
OPEN indexes

-- Loop through the indexes
FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag

WHILE @@FETCH_STATUS = 0
BEGIN
   -- Defragment the index
   PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
      + RTRIM(@indexid) + ') - fragmentation currently '
      + RTRIM(CONVERT(VARCHAR(15), @frag)) + '%'
   SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ', '
      + RTRIM(@indexid) + ')'
   EXEC (@execstr)
   FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
END

-- Close and deallocate the cursor
CLOSE indexes
DEALLOCATE indexes

-- Delete the temporary table
DROP TABLE #fraglist
GO
How can I find all the statistics on all the tables and drop them? Any script anyone can help with? When we try to make datatype changes in a few related tables, we get an error saying that some statistics are dependent on the column.
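A sketch of such a script, assuming SQL Server 2005 or later: it generates a DROP STATISTICS statement for every auto-created and user-created statistic on user tables (index statistics cannot be dropped this way), so you can review the output and then run it:

SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.'
     + QUOTENAME(o.name) + '.' + QUOTENAME(s.name)
FROM sys.stats s
JOIN sys.objects o ON o.object_id = s.object_id
WHERE o.type = 'U'
  AND (s.auto_created = 1 OR s.user_created = 1)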
I'm working on databases where the statistics of some indexes (tables) change too frequently. Once I update them manually, one minute later they show a 10-20% change, and five minutes later the change is over 100%. The tables are updated very frequently (multiple times a second).
When I run a query against sys.stats, sys.dm_db_stats_properties, and other dynamic views, I see that the statistics were last updated when I did it manually, even though the change rate has passed the 500 + 20% threshold (the tables have tens of thousands of rows). Auto create statistics and auto update statistics are set to true on all databases, and I don't know why SQL Server does not update them automatically.
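For what it's worth, a sketch that surfaces the mismatch being described, using the same DMV the post mentions; rows where modification_counter already exceeds the classic 500 + 20% threshold but last_updated still shows the manual run are the ones to investigate:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 500 + 0.20 * sp.rows   -- classic auto-update threshold
ORDER BY sp.modification_counter DESC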
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: in Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense - but if creating the missing index, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this, and knows a good explanation.

Rgds
Jesper
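For reference, a sketch of the experiment described, assuming the Northwind sample database. The plan shape will not change (still a clustered index scan), but the column statistic feeds the optimizer's cardinality estimate, and those estimates drive join order, operator choice, and memory grants in larger plans, even when they are invisible in a single-table scan:

CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country)
SELECT * FROM Customers WHERE Country = 'Sweden'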
What is the unit of the numbers you get in the Time Statistics part when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0's, but if I try and *** up a query on purpose I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?

I would also like to know if it's possible to change the precision.
As part of my automagical nightly index maintenance application, I am seeing a fairly regular (3-4 failures out of 5 attempts per week) failure on one particular table in my database. The particular line which seems to be failing is this one:
DBCC SHOWCONTIG (WON_Staging_EPSEst) WITH FAST, TABLERESULTS, ALL_INDEXES
The log reports the following transgression(s):

Msg 2767, Sev 16: Could not locate statistics 'WON_Staging_EpsEst' in the system catalogs. [SQLSTATE 42000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Simple ReIndex for [WON_Staging_EpsEst].[IX_WON_Staging_EpsEst] [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: [SQLSTATE 01000]
Msg 0, Sev 16: -------------------- Post-Maintenance Statistics Report for WON_Staging_EpsEst [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2528, Sev 16: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
Msg 0, Sev 16: Statistics for WON_Staging_EpsEst, IX_WON_Staging_EpsEst [SQLSTATE 01000]
Msg 2768, Sev 16: Statistics for INDEX 'IX_WON_Staging_EpsEst'. [SQLSTATE 01000]

Updated             Rows    Rows Sampled  Steps  Density       Average key length
------------------  ------  ------------  -----  ------------  ------------------
Aug 3 2007 3:22AM   674609  674609        196    2.0958368E-4  8.0
(1 row(s) affected)
This table is dropped and recreated each day during a data import job. After the table is recreated and repopulated with data (using a bulk import from a flat file), the index is also recreated using the following code:

CREATE INDEX [IX_WON_Staging_EpsEst]
ON [dbo].[WON_Staging_EpsEst] (OSID, [Year], Period)
ON [PRIMARY]

Yet more often than not, that evening, when the index maintenance job runs, it fails with the aforepasted messages complaining of being unable to find table/index statistics.
Worth noting, perhaps, is that this same process is used on roughly 10 data staging tables in this database each day, and none of the other tables fail during the index maintenance job.
Also worth noting, perhaps, is that this IDENTICAL table/code is processed in exactly the same way on TWO other servers, and the failure has not occurred in any of the jobs on those other two servers (these other two servers are identical mirrors of the one failing, and contain all the same data, indices, and everything else).
Any thoughts, suggestions for where to look, or unrestrained abusive comments regarding my ancestry?
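One defensive sketch, assuming the maintenance job can be edited: verify that the index row actually exists in sysindexes before issuing the DBCC, so that a missing-statistics condition after the daily drop/recreate fails soft instead of raising Msg 2767:

IF EXISTS (SELECT 1 FROM sysindexes
           WHERE id = OBJECT_ID('dbo.WON_Staging_EpsEst')
             AND name = 'IX_WON_Staging_EpsEst')
    DBCC SHOWCONTIG ('WON_Staging_EpsEst') WITH FAST, TABLERESULTS, ALL_INDEXES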
I have a small doubt: if we run a statistics command on a particular table, what will it update? Normally, are statistics created automatically by the server, or do we have to create them?
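For illustration, a minimal sketch with a hypothetical table name. UPDATE STATISTICS refreshes the histogram and density information for every statistic on the table; SQL Server creates statistics automatically when auto create statistics is on, and you create them manually only when you want one on a specific column or column combination:

UPDATE STATISTICS dbo.MyTable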
Anybody know how many companies worldwide use SQL Server, and how many individual servers this amounts to? Also, at what rate is SQL use growing? Can someone at least point me to a source where I could find close-to-exact numbers?
Here is my data sample on which I need to perform some stats:

Time(Sec)  Result
1          2
2          8
3          6
4          2
5          2
6          4
7          2
8          7
9          8
What I need from this is a result set that looks as follows:

GroupNo  Value
1        5.33
2        2.67
3        5.67
This is a grouping of the result data in threes, by time; Value is the average of the group. I need to write a SELECT statement to do this. Note that the group size could be anything from 1 to 10.

The end result of this is to display a range chart which shows results grouped according to requirements. Any help would be nice. Pargat
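A sketch of one way to do it, assuming the sample lives in a table dbo.Results([Time] INT, Result INT) and that the group size is parameterized:

DECLARE @GroupSize INT
SET @GroupSize = 3

SELECT ([Time] - 1) / @GroupSize + 1 AS GroupNo,
       CAST(AVG(CAST(Result AS DECIMAL(9,4))) AS DECIMAL(9,2)) AS Value
FROM dbo.Results
GROUP BY ([Time] - 1) / @GroupSize + 1
ORDER BY GroupNo

Integer division puts rows 1-3 in group 1, rows 4-6 in group 2, and so on; the inner CAST avoids integer averaging, which would return 5 instead of 5.33.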
What are some ways to analyze index coverage and usage? I have an 18 GB database; half is data, the other half is indexes, and I want to cut that number down as much as I can without affecting performance. Thanks
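If the server is SQL Server 2005 or later, a sketch of the usual starting point: sys.dm_db_index_usage_stats shows how often each index has been read versus maintained since the last service restart, and indexes with many updates but no seeks, scans, or lookups are the first candidates to review before dropping:

SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats us
JOIN sys.indexes i ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks + us.user_scans + us.user_lookups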
Does anyone have a generic script that drops all the statistics that SQL Server auto-generates? I have hundreds of '_WA_...' statistics that were auto-created by SQL Server 7, and I just want to get rid of ALL of them. Thanks!
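A sketch along those lines for the SQL Server 7 era: auto-created statistics appear in sysindexes with names beginning '_WA_Sys_'. The [_] escapes matter, since a bare underscore is a single-character wildcard in LIKE; review the generated statements before running them:

SELECT 'DROP STATISTICS ' + OBJECT_NAME(id) + '.[' + name + ']'
FROM sysindexes
WHERE name LIKE '[_]WA[_]Sys[_]%'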
I have been monitoring some indexes on a table with a lot of inserts, no updates, and no deletes. I want to determine when to update the statistics on the index. Does anyone know what would be a good target range for the density when you run DBCC SHOW_STATISTICS?
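For reference, the command in question, with placeholder names; the density values appear in the header and density-vector result sets of the output:

DBCC SHOW_STATISTICS ('dbo.MyTable', IX_MyTable)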
When the CREATE STATISTICS command is run, what entries are made in the system tables?
I want to check for the existence of statistics on certain columns and, if they are not there, create them. What is a good way to see if they have already been created?
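On SQL Server 2005 or later, a sketch of the existence check, using a hypothetical table dbo.Orders and column CustomerId; sys.stats_columns ties each statistic to its columns:

IF NOT EXISTS (
    SELECT 1
    FROM sys.stats s
    JOIN sys.stats_columns sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
    JOIN sys.columns c ON c.object_id = sc.object_id AND c.column_id = sc.column_id
    WHERE s.object_id = OBJECT_ID('dbo.Orders')
      AND c.name = 'CustomerId'
      AND sc.stats_column_id = 1   -- leading column of the statistic
)
    CREATE STATISTICS st_Orders_CustomerId ON dbo.Orders (CustomerId)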
I was wondering how often you should reindex. Looking at DBCC SHOWCONTIG and the statistics, I see that I am heavily fragmented, and scan density is between 10-30% on my important indexes. I'm thinking of scheduling this to be done nightly. Any help is much appreciated.
I am contemplating creating a job, executing every 5 minutes, which will update index statistics if they are more than, say, 8% out. I would like to know what thoughts people have on this, i.e. pros and cons.
I look forward to what you have to say.
I have auto stats on. Our stats are often more than 10% out. At what level do you reckon the query plan might be affected by out-of-date stats?
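A sketch of how the "more than 8-10% out" test could be expressed against the legacy sysindexes view (the threshold and filters are assumptions, and rowmodctr is only approximate on 2005 and later):

SELECT OBJECT_NAME(i.id) AS table_name, i.name AS index_name,
       i.rowmodctr, t.rowcnt
FROM sysindexes i
JOIN sysindexes t ON t.id = i.id AND t.indid IN (0, 1)   -- heap or clustered row carries the row count
WHERE OBJECTPROPERTY(i.id, 'IsUserTable') = 1
  AND i.indid BETWEEN 1 AND 250
  AND t.rowcnt > 0
  AND i.rowmodctr > 0.08 * t.rowcnt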
It seems to me there are many ways to update statistics for a table, e.g. sp_updatestats, sp_recompile, and DBCC UPDATEUSAGE.

Can somebody tell me the difference between those commands, and what's the best way to update your statistics? Does reindexing update the statistics?
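A sketch contrasting the commands (table name hypothetical); note that rebuilding an index, e.g. with DBCC DBREINDEX, also rebuilds that index's statistics with the equivalent of a full scan:

UPDATE STATISTICS dbo.MyTable WITH FULLSCAN  -- rebuilds distribution statistics for one table
EXEC sp_updatestats                          -- runs UPDATE STATISTICS on every user table
EXEC sp_recompile 'dbo.MyTable'              -- marks plans for recompilation; touches no statistics
DBCC UPDATEUSAGE (0)                         -- corrects page/row counts in sysindexes, not statistics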
Can I copy statistics in SQL Server from one environment to another without copying the actual data - for example, from production to development? It is possible to copy statistics in other databases, such as DB2/UDB and Oracle. The reason is to execute some poorly performing SQL statements and analyze their execution plans.
I did not find anything on this subject in BOL. Since the statistics are stored in the statblob column of sysindexes, I tried updating the statblob column of the index, and the rowcnt columns of the table and index, to mimic the copy. After my updates, DBCC SHOW_STATISTICS on the 'TO' table showed results identical to the statistics of my 'FROM' table.
But when I execute a small SELECT on the 'FROM' table (which contains the original, required stats) and on the 'TO' table (where the statistics are now copied), I get two different execution plans. This means I have not succeeded in my attempt to cheat the optimizer.
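As an aside, one route worth trying (an assumption about the tooling, not something from the post): newer versions of Management Studio can script statistics, including histograms, via Generate Scripts > Advanced > Script Statistics; the output is undocumented UPDATE STATISTICS ... WITH STATS_STREAM statements that recreate production statistics on an empty development table. Either way, the copy can be verified by dumping both sides (names assumed):

DBCC SHOW_STATISTICS ('dbo.FromTable', IX_FromTable)
DBCC SHOW_STATISTICS ('dbo.ToTable', IX_ToTable)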
I am supporting a SQL Server 6.5 database that users query using pre-configured reports. The reports use views, stored procedures, and triggers set up by the programmers, and are accessed through a client on the workstation.

I need to be able to count which users log in (SQL Server security), how often, and either which reports they use or which tables they select from.
I do not have access to the WindowsNT server, so the solution has to work with SQL Server SA rights.
I have a table with half a million records that my application uses continuously, with several UPDATE and SELECT statements (about 5 requests/sec). After several (4-5) hours I see a degradation in performance, but if I update the statistics on this table, everything returns to normal.
Now the situation is that I have created a job to maintain this table, updating statistics twice a day...
Is this normal? Shouldn't SQL Server update the statistics by itself? Have I chosen the wrong approach, or... what can I do?
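For context: the automatic update threshold for a table this size is roughly 500 rows plus 20% of the table, i.e. around 100,500 modifications for a 500,000-row table, so at a few writes per second it can take many hours before auto-update fires; a scheduled refresh in between is a common workaround. A minimal sketch of the job step, with a hypothetical table name:

UPDATE STATISTICS dbo.MyBusyTable WITH FULLSCAN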
Sawmill Enterprise (www.sawmill.net) has IIS log support directly from a SQL server. There are other products out there as well, like AWStats (http://www.sawmill.net/) and WebLog Expert, that do this, but they use the actual log files.
Does anyone have suggestions on where I can look? Thank you.
To update statistics for the entire DB, I have taken the script from the link below. But I need to know: 1) what is the sample percent on UPDATE STATISTICS? 2) will it be applicable for 2005?
Script taken from: http://weblogs.sqlteam.com/tarad/archive/2006/08/14/11194.aspx
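On the two questions: the sample percent controls how much of the table is read to build the histogram, and both forms below run unchanged on 2000 and 2005. A sketch with a hypothetical table name:

UPDATE STATISTICS dbo.MyTable WITH SAMPLE 25 PERCENT  -- reads roughly a quarter of the data
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN           -- reads every row; most accurate, slowest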
Scenario: For the most part we let SQL Server (2005) maintain our statistics for us. However, we do have several large processes written in stored procedures. There is one main controller procedure that can call any number of other procedures. This process can take anywhere from 5 minutes to an hour+ to run (based on the size of the client). Back in the days of SQL Server 2000, we found that the performance of this procedure would diminish over time (while it was running). We implemented a queued concept of issuing UPDATE STATISTICS commands: a SQL Server job ran every 10 minutes looking for new records in a table, and records were inserted at key points in these stored procedures (after large deletes, updates, and inserts).
Goal: Now, with all that background and with 2005, I'd like to review this concept and remove this implementation if possible, or at least decouple statistics maintenance from the business jobs. In 2005, are there better ways to monitor and maintain statistics in a more administrative (but automated) way?
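One administrative-style sketch that works on 2005 (before sys.dm_db_stats_properties existed): pair each index statistic's last update time with the legacy modification counter, and feed the worst offenders to UPDATE STATISTICS from a scheduled job instead of from the business procedures; rowmodctr is approximate on 2005, so treat it as a hint:

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS stats_last_updated,
       si.rowmodctr
FROM sys.indexes i
JOIN sys.sysindexes si ON si.id = i.object_id AND si.indid = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.index_id > 0
ORDER BY si.rowmodctr DESC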
Hello all,

I've written a stored procedure which runs OK for the first few thousand records it processes, then around about the 10,000th record it suffers a sudden and dramatic drop in performance (from about 40 records per second to about 1 per second).

I've found that when this happens, if I run an UPDATE STATISTICS query on the affected tables, performance picks up again, which is good. However, this is a query that will be running unattended, so I thought that a good idea would be to put the UPDATE STATISTICS statement in the stored procedure and have it run after about eight thousand records have been processed.

However, I find that even though the statement is run after the 8,000th record, the performance drop *still* occurs after the 10,000th record. I find this odd because the statistics have just recently been updated. Is there anything else I should be looking at?

TIA,
--
Akin
aknak at aksoto dot idps dot co dot uk
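One thing worth checking, sketched below with hypothetical names: updating statistics does not help the plan the procedure is already executing, because that plan was compiled when the table was small. Marking the objects for recompilation after the statistics refresh, so that the next invocation plans against the new distribution, is a cheap experiment:

UPDATE STATISTICS dbo.WorkTable
EXEC sp_recompile 'dbo.WorkTable'  -- flush cached plans that reference the table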