Is there any way to determine index usage statistics for a given table?
For example, I have a table with three indexes. I need to know how many times each index was used. Is that possible?
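A minimal sketch, assuming SQL Server 2005 or later ('dbo.MyTable' is a placeholder name): the index usage DMV counts seeks, scans, and lookups per index since the last service restart.

-- Per-index usage counts for one table; counters reset when the instance restarts.
SELECT i.name AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MyTable');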
And the second part of the question: I need to know which user is overloading my database with gigantic queries. Is there any way to determine how many system resources each user's session uses?
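For the second part, a sketch assuming SQL Server 2005 or later: sys.dm_exec_sessions accumulates CPU, memory, and I/O per session, so sorting it surfaces the heaviest users.

SELECT session_id,
       login_name,
       cpu_time,          -- cumulative CPU in milliseconds
       memory_usage,      -- 8 KB pages currently allocated
       reads, writes, logical_reads
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY cpu_time DESC;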
We do have plenty of information about index usage in DMVs, and I was wondering if there was any way for us to tell which of the user-created statistics for a table were in use.
As part of my data warehouse nightly build, I truncate my tables in my target database. As an example, I find it is much quicker to do a bulk API load of 13M records than to do an update/insert of 100K rows. I also drop the indexes before the builds and reindex after. That's an aside. What I am wondering is: how is this impacting the statistics? Do I need to update them? I'm not well versed in statistics, and any information is welcome. Thanks, Rob
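A hedged sketch of the post-load step (table name is hypothetical). Recreating the indexes already rebuilds their statistics from all rows, so what is left stale after a truncate-and-reload is the column statistics:

-- Refresh only the non-index (column) statistics after the bulk load;
-- the reindex step has already rebuilt the index statistics with a full scan.
UPDATE STATISTICS dbo.MyFactTable WITH FULLSCAN, COLUMNS;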
What are some ways to analyze index coverage and usage? I have an 18 GB database; half is data, the other half is indexes, and I want to cut that number down as much as I can without affecting performance. Thanks.
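One way to start, sketched under the assumption of SQL Server 2005 or later: list each nonclustered index's size in MB next to its read counts, so the large-but-unread ones stand out.

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS size_mb,   -- 8 KB pages to MB
       MAX(us.user_seeks + us.user_scans + us.user_lookups) AS total_reads  -- NULL = never read
FROM sys.indexes AS i
JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id AND ps.index_id = i.index_id
LEFT JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
 AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
GROUP BY i.object_id, i.name
ORDER BY size_mb DESC;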
I have been monitoring some indexes on a table with a lot of inserts, no updates, and no deletes. I want to determine when to update the statistics on the index. Does anyone know what would be a good target range for the density when you run DBCC SHOW_STATISTICS?
I was wondering how often you should reindex. Looking at DBCC SHOWCONTIG and statistics, I see that I am heavily fragmented and scan density is between 10-30% on my important indexes. I'm thinking of scheduling this to be done nightly. Any help is much appreciated.
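Nightly may be heavier than needed; a common pattern of that era was to check scan density first and rebuild only when it drops below a threshold. A rough sketch in SQL 2000-style syntax (table name is a placeholder):

DBCC SHOWCONTIG ('MyBigTable') WITH ALL_INDEXES   -- inspect scan density / fragmentation
DBCC DBREINDEX ('MyBigTable', '', 90)             -- '' = rebuild all indexes, 90% fillfactor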
In SQL Server 6.5's index management there is a Distribution button. I have a database on a production server; one of the tables has 20 indexes. When I press the Distribution button, it reveals that most of the indexes have very poor selectivity, from 30% to 99%; six of them are very good. Based on these statistics, do you think I should remove those indexes? The book says that if this figure is higher than 5%, the optimizer will not use the index but will do a table scan instead. Removing those indexes should not affect performance, is that right? Your suggestions are very much appreciated.
I have a question regarding updating statistics for a primary key.
Background: an UPDATE STATISTICS with FULLSCAN sometimes takes 30 minutes - the table is 80 million rows with only 4 columns. The table is truncated, and then the 80 million rows are inserted all in one go.
Now, why the update stats takes that long is another question (I have no idea - any thoughts?), but my question is this: since you can't turn on the "do not automatically recompute statistics" option for a primary key, and you would think it would be imperative for a PK's stats to be kept up to date for inserts, does this mean the stats are kept up to date automatically, and an UPDATE STATISTICS WITH FULLSCAN isn't required?
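A sketch, assuming SQL Server 2005 or later, to check both points directly: whether recompute is disabled on any of the table's statistics, and when each was last updated (table name is a placeholder). Bear in mind that auto-update only fires once enough rows change (roughly 20% on older versions) and it samples rather than scans, so it is not a substitute for FULLSCAN accuracy.

SELECT s.name,
       s.no_recompute,                        -- 1 = auto-recompute disabled
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyBigTable');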
I ran the DBCC SHOW_STATISTICS command for all of my indexes; I was told that high density numbers are bad and low numbers are good. I have some questions about my results, though; I'm not sure how to interpret them.
Of 48 indexes, 14 have a density of 0. Does this mean that the indexes are not selective enough? Does it mean they're garbage and I should toss them?
6 have a density of NULL. They are all primary keys. I suppose this just means that they're never used because these tables are rarely queried. Would this assumption be correct?
13 have a density of 1. I have no idea what this means.
The others have densities ranging from 0.01210491 to 0.5841165. I was told that the lower this number is, the more selective and thus more useful an index is. I think 0.5841165 is too high a number. Would this be correct?
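For reference, the "All density" figure that DBCC SHOW_STATISTICS reports is roughly 1 / (number of distinct values): a density of 1 means every row shares a single value, numbers near 0 mean high selectivity, and 0 or NULL often just means the statistics have never been populated (for example, on an empty or rarely-queried table) rather than that the index is garbage. A sketch for inspecting a single index (names are placeholders):

-- Density of 1.0  => one distinct value (not selective)
-- Density near 0  => many distinct values (highly selective)
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex');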
Is it necessary to schedule an update statistics job on indexes in SQL Server 2005 on a daily basis? Is it necessary to schedule an index rebuild in SQL Server 2005 on a daily basis?
We have a 20 GB database, and the reorganize indexes and update statistics maintenance takes about 4 hours, and the log file grows out of control, which is a serious problem since it cannot be truncated (database mirroring).
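One way to shrink that window, sketched for SQL Server 2005 or later: query fragmentation first and rebuild or reorganize only the indexes that need it, which also cuts the log volume shipped to the mirror.

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- common rebuild threshold
  AND ips.page_count > 1000;                  -- ignore tiny indexes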
I have an index that shows distribution statistics of 98.20%, which is very poor. I set SHOWPLAN and STATISTICS IO on. This table has 1,113,675 rows of data.
*************
select orderID, custId, intertcsi from tblorders where intertcsi = '2815'

STEP 1
The type of query is SELECT
FROM TABLE tblorders
Nested iteration
Index : indxInterTCSI

orderID     custId      intertcsi
----------- ----------- ---------
1015245     1011313     2815
2556392     2556392     2815
....

Table: tblOrders  scan count 1,  logical reads: 104,  physical reads: 58,  read ahead reads: 0
***************

Then I use the same select statement to force a table scan:

select orderID, custId, intertcsi from tblorders (index=0) where intertcsi = '2815'

STEP 1
The type of query is SELECT
FROM TABLE tblorders
Nested iteration
Table Scan

orderID     custId      intertcsi
----------- ----------- ---------
60472       61084       2815
102184      102333      2815
...

Table: tblOrders  scan count 1,  logical reads: 110795,  physical reads: 6891,  read ahead reads: 103980
When the index is not used, the logical reads and physical reads increase dramatically. Does this tell me that I should keep the index even though its selectivity is poor? Is it that a huge table like this makes the optimizer use the index anyway? The query without the index takes longer to run. Any ideas or comments would be very much appreciated.
We have implemented a very small reporting database which has a main table that started off small and has now grown to around half a million rows. Initially, there were no indexes on the table apart from a clustered index, but as the data has grown, performance has dropped and so we have added a number of indexes. This has resolved the performance issues.
Before creating the indexes, SQL Server had auto-created a number of statistics objects (_WA_Sys_000..., etc.). After creating the indexes, new statistics objects were created for the new indexes. In some cases, there are duplicate statistics (auto-created and index) for the same columns. Should I go through and drop the duplicate auto statistics? Will having duplicates cause issues at all?
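The duplicates are generally harmless but add maintenance overhead. A sketch, assuming SQL Server 2005 or later, to list auto-created statistics whose column is also the leading key of an index:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS auto_stat,
       i.name                   AS overlapping_index
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id AND sc.stats_column_id = 1
JOIN sys.index_columns AS ic
  ON ic.object_id = s.object_id AND ic.key_ordinal = 1 AND ic.column_id = sc.column_id
JOIN sys.indexes AS i
  ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE s.auto_created = 1;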
Hi all, I need to drop some of my indexes to keep the size of my DB manageable. I know they're not all being used, but what is the best way to determine how often they are being used? Statistics? I haven't come across any text referring to this so any help is appreciated.
Does SQL Server store somewhere (in a table that I can query) when last an index was used by any queries? Or does it store which query plans it's a part of?
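Yes, since SQL Server 2005 the index usage DMV records the last read of each index, though only since the last service restart; it does not retain the query plans themselves. A minimal sketch:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.last_user_seek,
       s.last_user_scan,
       s.last_user_lookup
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();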
I see that for some indexes the columns user_seeks = 0 and user_scans = 0.
Does this mean that those indexes are not being used?
I wanted to know the best way and the best criteria to look at in order to find out whether a particular index is being used or not. (We could probably use the DTA, but I believe there should be some way through the DMVs as well.)
This matters because keeping unnecessary indexes can hammer performance on a table that is 170 GB in size with 9 nonclustered indexes and 1 clustered index - see the sketch below.
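A sketch of one common criterion, assuming SQL Server 2005 or later: an index whose user_updates keeps climbing while seeks, scans, and lookups stay at zero is paying write cost for no read benefit. The counters reset on restart, so judge over a full business cycle; the table name is a placeholder.

SELECT i.name AS index_name,
       us.user_updates,
       us.user_seeks + us.user_scans + us.user_lookups AS total_reads
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
 AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MyBigTable')
  AND us.user_seeks + us.user_scans + us.user_lookups = 0
  AND us.user_updates > 0;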
I am supporting a SQL Server 6.5 database that users query using pre-configured reports. The reports use views, stored procedures, and triggers set up by the programmers and are accessed through a client on the workstation.
I need to be able to count which users log in (SQL Server security), how often, and either which reports they use or what tables they select.
I do not have access to the WindowsNT server, so the solution has to work with SQL Server SA rights.
At one of our client sites we have configured Always On in synchronous mode. We have also scheduled a rebuild index and update statistics job, which runs at night every other day. The issue is that there are more than 100 sleeping sessions blocking the update statistics job.
I have to stop the update statistics job manually once I get to the office.
Once, I killed the blocking sleeping session, but then another sleeping session blocked the job, and so on.
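A sketch, assuming SQL Server 2005 or later, to map the chain before killing anything: blocked requests expose the blocker's session id, and a sleeping blocker almost always means an application left a transaction open.

SELECT r.session_id AS blocked_session,
       r.blocking_session_id,
       s.login_name AS blocked_login,
       r.wait_type,
       r.wait_time            -- ms spent waiting so far
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.session_id
WHERE r.blocking_session_id <> 0;
-- Cross-check the blocker in sys.dm_tran_session_transactions to confirm an open transaction.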
I am really puzzled by an apparent difference between a table's index key column order and its statistics order. I was under the impression that index statistics mirror the index definition. However, in my database 2,470 index ordinal definitions match their statistics definitions, but 66 do not. I can also reproduce this discrepancy on 2008 R2, 2012, and 2014.
As per the definition in sys.stats_columns:

stats_column_id (int) - 1-based ordinal within set of stats columns
This script reproduces the discrepancy for me.
BEGIN TRAN
GO
use tempdb
GO
CREATE TABLE [dbo].[ItemProperties](
    [itmID] [int] NOT NULL,
    [cpID] [smallint] NOT NULL,
    [ipuID] [tinyint] NOT NULL,
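A sketch to list only the disagreements, relying on the convention that an index's statistics object shares the index's name; the table name comes from the script above:

SELECT i.name AS index_name,
       COL_NAME(ic.object_id, ic.column_id) AS column_name,
       ic.key_ordinal,
       sc.stats_column_id
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.key_ordinal > 0
JOIN sys.stats AS st
  ON st.object_id = i.object_id AND st.name = i.name    -- index statistics share the index name
JOIN sys.stats_columns AS sc
  ON sc.object_id = st.object_id AND sc.stats_id = st.stats_id
 AND sc.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.ItemProperties')
  AND sc.stats_column_id <> ic.key_ordinal;              -- rows returned = ordinals disagree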
I've got a table with a PK (bigint, no autoincrement) that has a clustered index. The same table has an integer field with a non-unique index on it.
When I do a count(*) on the table, the non-unique index is used (20m rows, 12 secs). When I force the count(*) to use the clustered index, it takes 43 secs. When selecting rows, usually the clustered index is used.
So I'm curious as to why the count(*) uses the non-unique index while other queries don't. I've noticed it's faster, but why? Any ideas/considerations?
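One likely explanation: COUNT(*) only needs to touch every row once, and the optimizer picks the narrowest structure that contains all rows. The nonclustered index carries just the integer key plus a row locator, so it spans far fewer pages than the clustered index, which carries entire rows. A sketch (assuming SQL Server 2005 or later, placeholder table name) to compare page counts:

SELECT i.name, i.type_desc, SUM(ps.used_page_count) AS pages
FROM sys.indexes AS i
JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')
GROUP BY i.name, i.type_desc;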
I'm trying to establish the MB usage of a series of nonclustered indexes. I'm used to using the Manage Indexes GUI in 6.5, and SHOWCONTIG doesn't quite give me what I want. Any suggestions?
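A rough sketch against the 6.x-era sysindexes table (hedged: I'd verify the column semantics on your build; pages were 2 KB before SQL Server 7.0). Table name is a placeholder:

SELECT name,
       used,                              -- pages used by the index
       (used * 2) / 1024.0 AS approx_mb   -- 2 KB pages to MB
FROM sysindexes
WHERE id = OBJECT_ID('MyTable')
  AND indid BETWEEN 1 AND 250             -- 1 = clustered (includes data pages), 2-250 = nonclustered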
How can I query for the following management information (in SQL Server 2008)?
1. Last user login?
2. How long (i.e. duration) has the user been online for the day?
3. How many times has the user logged in or out per day?
4. Which users logged into the system on a certain day?
5. Which users were still logged in after 11pm?
6. Any other statistics that could be useful to management?
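SQL Server 2008 does not keep a login history by default; sys.dm_exec_sessions covers only currently connected sessions (its login_time column answers items 1 and 5 for live sessions). For the per-day counts, one sketch is a server-level logon trigger writing to an audit table. The names here are hypothetical, there is no logoff trigger (so durations still need the sessions DMV or a trace), and a broken logon trigger can lock everyone out, so test it carefully:

USE master;
GO
CREATE TABLE dbo.LoginAudit (
    login_name sysname       NOT NULL,
    host_name  nvarchar(128) NULL,
    event_time datetime      NOT NULL DEFAULT GETDATE()
);
GO
-- Every login must be able to write the row, or logins will start failing.
GRANT INSERT ON dbo.LoginAudit TO public;
GO
CREATE TRIGGER trg_LoginAudit ON ALL SERVER
FOR LOGON
AS
BEGIN
    INSERT INTO master.dbo.LoginAudit (login_name, host_name)
    VALUES (ORIGINAL_LOGIN(), HOST_NAME());
END;
GO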
Can anyone tell me a good way to monitor which indexes are not being used? Over time, I'm sure there are extraneous indexes in our database, which I would like to get rid of.
We're using slowly changing dimensions to control a number of data tables in our system. Each table has five or six business keys, but the indexes of the tables are built so they're as efficient as possible (i.e. the fields with the highest diversity are listed first). How does the SCD wizard determine the order of the business key fields? Is there a way I can view or manipulate the statement the SCD task is using to make sure either (a) the indexes match the statement, or (b) the statement matches the indexes?
My company is currently migrating from Interbase to SQL Server 2005. During the migration we have come across a rather peculiar issue and are wondering if anyone can advise.
We have a table named "prospect" which holds client information.
We have a stored procedure which hangs on the following statement.
DECLARE @surname char(25);
SET @surname='BLAH%';
SELECT * FROM Prospect WHERE c_surname LIKE @surname;
The above takes 28 seconds to run. The following statement returns a result inside a second.
SELECT * FROM Prospect WHERE c_surname LIKE 'BLAH%';
In Interbase, the original returned the answer within a second too. The schema in both databases is the same.
The first statement does not use an index! Its execution plan is different from the second statement's. I am aware that I can create the index recommended by the Database Engine Tuning Advisor, which solves the issue, or specify the index to use in the original statement, but why does the engine not use the correct index when a variable is involved? I need to know, as we have just started looking at the code.
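What's likely happening: with a local variable the optimizer cannot see the value at compile time, so it cannot tell that 'BLAH%' is a seekable prefix and estimates the LIKE generically, whereas the literal lets it choose the index seek. A sketch of one common fix, compiling the statement with the runtime value:

DECLARE @surname char(25);
SET @surname = 'BLAH%';
-- OPTION (RECOMPILE) compiles the statement with the actual value of @surname,
-- letting the optimizer pick the same seek plan it uses for the literal.
SELECT * FROM Prospect WHERE c_surname LIKE @surname OPTION (RECOMPILE);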
I can easily find user-created stats in a database:

SELECT * FROM DB.sys.stats WHERE user_created = 1

But how do I determine which tables those stats are on? With over 6,000 tables, I don't feel like looking through all of them.
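A minimal sketch, run in the context of that database: OBJECT_NAME and OBJECT_SCHEMA_NAME resolve each statistics object to its table.

USE DB;   -- the database from the question
GO
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id)        AS table_name,
       s.name                          AS stats_name
FROM sys.stats AS s
WHERE s.user_created = 1
ORDER BY schema_name, table_name;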
We are trying to load flat text files with upwards of 7 million records into a table in SQL Server. The table has a clustered index on 3 fields. We set up the indexes prior to importing the data. We are sometimes able to complete smaller tables (500,000-750,000 records); however, when we try the larger tables, an error occurs:
Error at Destination for row number 6785496. Errors encountered so far in this task: 1
Location: somerge.c:1573 Expression: mrP->mrStatus!=MERGERUN::NONE SPID: 11 Process ID: 173
The destination row number is the same number as the total number of rows that we are trying to load.
None of the records end up importing. The row number it gives is always the total number of records in the text file I was trying to import. I tried to import the text files first and then build the clustered indexes, but a table with only 300,000 records ran for nearly 4 days without completing before we killed it. Before we try to load a file, we always delete whatever is there. Some of the files that we try to load are new, and we have to set up the indexes from scratch. We are using the DTS wizard. Someone told me to find a way to get it to commit every 1,000 rows or so, but I can't find a way to do it. I have looked and looked but can't find it!
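Two hedged pointers. In the DTS Transform Data Task, the Options tab has an "Insert batch size" setting (with "Use fast load" enabled) that, if memory serves, does exactly this. Alternatively, BULK INSERT commits in batches natively; the table name, path, and terminators below are placeholders:

BULK INSERT dbo.BigTable
FROM 'C:\loads\bigfile.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 1000,   -- commit every 1,000 rows
    TABLOCK
);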
I am using SSIS to replace a set of tables daily. One of the tables has primary, unique, and foreign keys and a full-text index. Before truncating, I drop the foreign key constraints (to truncate the parent table), truncate the tables, and recreate the foreign keys.
I have few questions:
1) Do I need to drop and recreate the unique key as well? (I am not dropping the primary key.) The unique key is an identity column created just for the full-text indexing, since it was mentioned that a key on an integer is better than a key on a varchar, and my PK is a varchar.
2) Do I need to drop and recreate the full-text index, or just rebuild/repopulate it every time the table is loaded?
This is the first time I am using a full-text index, and I was able to learn a lot about it from various sites. I would like to understand the correct approach for loading the tables.
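On question 2, a sketch of one approach that avoids dropping the full-text index entirely (table name is hypothetical; worth verifying that truncate is permitted with your setup): turn change tracking off around the load, then kick off a full population afterwards.

ALTER FULLTEXT INDEX ON dbo.MyTable SET CHANGE_TRACKING OFF;   -- before the truncate/reload
-- ... drop FKs, truncate, bulk load, recreate FKs ...
ALTER FULLTEXT INDEX ON dbo.MyTable START FULL POPULATION;     -- repopulate after the load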