I have an index whose distribution statistics show 98.20%, which is very poor. I turned on Show Query Plan and Show Stats I/O. This table has 1,113,675 rows of data.
*************
select orderID, custId, intertcsi from tblorders
where intertcsi = '2815'
STEP 1
The type of query is SELECT
FROM TABLE
tblorders
Nested iteration
Index : indxInterTCSI
orderID custId intertcsi
----------- ----------- ---------
1015245 1011313 2815
2556392 2556392 2815
....
Table: tblOrders scan count 1, logical reads: 104, physical reads: 58, read ahead reads: 0
***************
Then I use the same select statement to force a table scan:
select orderID, custId, intertcsi from tblorders (index=0)
where intertcsi = '2815'
STEP 1
The type of query is SELECT
FROM TABLE
tblorders
Nested iteration
Table Scan
orderID custId intertcsi
----------- ----------- ---------
60472 61084 2815
102184 102333 2815
...
Table: tblOrders scan count 1, logical reads: 110795, physical reads: 6891, read ahead reads: 103980
When the index is not used, the logical reads and physical reads increase dramatically. Does this tell me that I should keep that index even though it has poor selectivity? Is it because the table is so large that the optimizer uses the index anyway? The query takes much longer to run without the index.
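One way to sanity-check the optimizer's choice is to measure the actual selectivity of the predicate - a rough sketch, reusing the table and column from the query above:

-- How many of the 1,113,675 rows actually match '2815'?
SELECT COUNT(*) AS matching_rows
FROM tblorders
WHERE intertcsi = '2815'

If only a few hundred rows match, the predicate is highly selective regardless of what the overall distribution figure says, and the index is well worth keeping.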
Any ideas or comments would be much appreciated.
In Manage Indexes in SQL Server 6.5, there is a Distribution button. I have a database on a production server where one of the tables has 20 indexes. When I press the Distribution button, it reveals that most of the indexes have very poor selectivity, from 30% to 99%. Six of them are very good. Based on these statistics, do you think I should remove the poor indexes? The book says that if the statistic is higher than 5%, the optimizer will not use the index but will do a table scan instead. Removing those indexes should not affect performance, is that right? Your suggestions are much appreciated.
Hi All,

I'm a relative newbie to SQL Server, so please forgive me if this is a daft question...

When I set "Show Execution Plan" on in Query Analyzer, and execute a (fairly complex) sproc, I note that a particular query is reported as having a query cost of "71% relative to the batch" - however, this is nowhere near the slowest executing query in the batch - other queries which take over twice as long are reported as having costs in the order of a few percent each.

Am I misreading the execution plan? Note that I'm looking at the graphical plan, and am not reading the 'estimated' plan - I'm using the one generated from executing the sproc. My expectation was that this would be based on the execution times of the queries within the sproc, however, this does not appear to be the case. (Note - I determined execution times from PRINT statements, using GETDATE() to determine the current time, down to milliseconds.)

Any feedback would be of great assistance... I may well have to change the way I approach optimizing queries based on these findings.

Thanks,
LemonSmasher.
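For reference, the PRINT-based timing described above looks roughly like this (a minimal sketch; the query in the middle is a placeholder):

DECLARE @t datetime
SET @t = GETDATE()
-- ... query under test ...
PRINT 'Elapsed ms: ' + CONVERT(varchar(12), DATEDIFF(ms, @t, GETDATE()))

The cost percentages in the graphical plan are the optimizer's estimated costs, not measured times, even in the plan produced by actually executing the sproc - which is why they can disagree with this kind of wall-clock measurement.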
I have been asked to create a report for one of our clients. The report is pretty basic, but I am concerned about the overhead of my planned approach. The report is at a table and column grain, and includes the following values:
* Min column value
* Max column value
* Number of discrete values
* Number of populated values (not NULL)
My current plan is to have a cursor over a limited view of sys.tables and sys.columns that runs a dynamic SQL query to load the results into a table that I can then output. There must be a better way of doing this, and I don't have access to any DQS services.
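For what it's worth, the cursor can be avoided by building one dynamic batch from the catalog views - a rough sketch, where dbo.ColumnProfile is a hypothetical results table with TableName, ColumnName, MinValue, MaxValue, DiscreteValues, and PopulatedValues columns:

-- Build one INSERT...SELECT per column, then run the whole batch at once.
-- Types that don't support MIN/MAX/COUNT(DISTINCT) are filtered out.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'
INSERT INTO dbo.ColumnProfile
SELECT ''' + t.name + N''', ''' + c.name + N''',
       CONVERT(nvarchar(100), MIN(' + QUOTENAME(c.name) + N')),
       CONVERT(nvarchar(100), MAX(' + QUOTENAME(c.name) + N')),
       COUNT(DISTINCT ' + QUOTENAME(c.name) + N'),
       COUNT(' + QUOTENAME(c.name) + N')
FROM ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name) + N';'
FROM sys.tables AS t
JOIN sys.columns AS c
  ON c.object_id = t.object_id
WHERE TYPE_NAME(c.user_type_id) NOT IN
      ('text','ntext','image','xml','bit','timestamp','sql_variant');

EXEC sys.sp_executesql @sql;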
I had some SQL queries that use department ID for joins, filters, GROUP BY, and SELECT, so I have an index on DEPARTMENT_ID of my FILE_MASTER table, whose schema is:
CREATE TABLE [dbo].[FILE_MASTER](
    [FILE_ID] [INT] IDENTITY(1,1) NOT NULL,
    [DEPARTMENT_ID] [INT] NULL,
    [CLIENT_ID] [INT] NULL,
    [LEAD_DETAIL_ID] [INT] NULL
[code]....
The above index only works when there is a condition or GROUP BY on department ID, and when I query:
SELECT DISTINCT CL.CLIENT_ID, CL.LOAN_SANCTION_DATE, MIN(CL.INWARD_DATE) AS Inward
FROM dbo.FILE_MASTER AS CL
GROUP BY CL.CLIENT_ID, CL.LOAN_SANCTION_DATE
the plan shows an index scan on index Indx_FM_department_ID. Why is it not using an index seek? I believe I have both GROUP BY columns as included columns in the covering index - what is the use of a covering index then?
And if I add a WHERE condition before the GROUP BY for a specific client ID and loan sanction date, the execution plan's missing index detail tells me to create a separate index on CLIENT_ID and LOAN_SANCTION_DATE.
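For context: included columns exist only at the leaf level of an index, so they can never drive a seek - a seek needs the searched columns as index keys. And with no WHERE clause at all, a scan is the best any index can do; the covering index still helps because the scan reads the narrow index instead of the whole table. For the filtered version, a sketch of the kind of index the missing-index hint is asking for (column choice taken from your query):

CREATE NONCLUSTERED INDEX IX_FILE_MASTER_Client_LoanDate
ON dbo.FILE_MASTER (CLIENT_ID, LOAN_SANCTION_DATE)
INCLUDE (INWARD_DATE);

With CLIENT_ID and LOAN_SANCTION_DATE as keys, a WHERE on those columns becomes a seek, and the GROUP BY can consume the index's sort order without a separate sort.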
I run a nightly full backup of the databases and then back up the transaction log. After completing the full database backup and the transaction log, I then update all existing statistics.
Is this a good practice to do on a nightly basis? I drop and re-create indexes on all the databases once a week. Does updating the statistics really help?
What are some ways to analyze index coverage and usage? I have an 18 GB database - half is data, the other half indexes - and I want to cut that number down as much as I can without affecting performance. Thanks
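On SQL Server 2005 and later, one starting point is the index-usage DMV - a sketch (note the counters reset when the instance restarts):

-- Indexes with many writes but few or no reads are candidates for removal.
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id
 AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_updates DESC;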
I have been monitoring some indexes on a table with a lot of inserts, no updates, and no deletes. I want to determine when to update the statistics on the index. Does anyone know a good target range for the density when you run DBCC SHOW_STATISTICS?
I was wondering how often you should reindex. Looking at DBCC SHOWCONTIG and statistics, I see that I am heavily fragmented and scan density is between 10-30% on my important indexes. I'm thinking of scheduling this to be done nightly. Any help is much appreciated.
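For reference, a minimal nightly rebuild sketch in the pre-2005 style the post implies (the table name is a stand-in):

DBCC SHOWCONTIG ('dbo.MyBigTable')          -- check Scan Density before
DBCC DBREINDEX ('dbo.MyBigTable', '', 90)   -- rebuild all indexes, 90% fill factor
DBCC SHOWCONTIG ('dbo.MyBigTable')          -- confirm the density improved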
I have a question regarding updating statistics for a primary key.
Background: An update statistics with fullscan is sometimes taking 30 minutes - the table is 80 million rows, with only 4 columns. The table is truncated, and then 80 million rows inserted all in one go.
Now, why the update stats takes that long is another question (I have no idea - any thoughts?), but my question is this: since you can't disable the "not automatically recompute statistics" option for a primary key, and you would think it would be imperative for the stats on a PK to be kept up to date for inserts... does this mean the stats are kept up to date automatically, and an update stats with fullscan isn't required?
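One way to see whether auto-recompute is keeping up - a sketch, with a stand-in table name:

-- When were the PK's statistics last updated? (stats_id 1 = the clustered PK here)
SELECT STATS_DATE(OBJECT_ID('dbo.BigTable'), 1) AS pk_stats_last_updated

-- Approximate row modifications since the last statistics update:
SELECT name, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('dbo.BigTable')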
I ran the DBCC SHOW_STATISTICS command for all of my indexes; I was told that high density numbers are bad, low numbers good. I have some questions about my results, though; I'm not sure how to interpret them.
Of 48 indexes, 14 have a density of 0. Does this mean that the indexes are not selective enough? Does it mean they're garbage and I should toss them?
6 have a density of NULL. They are all primary keys. I suppose this just means that they're never used because these tables are rarely queried. Would this assumption be correct?
13 have a density of 1. I have no idea what this means.
The others have densities ranging from 0.01210491 to 0.5841165. I was told that the lower this number is, the more selective and thus more useful an index is. I think 0.5841165 is too high a number. Would this be correct?
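For context, the density reported by DBCC SHOW_STATISTICS is roughly 1 / (number of distinct values), which gives the extremes above natural readings - a sketch with stand-in names:

-- Density ~0     -> very many distinct values (a unique key on 1,000,000 rows
--                   gives 0.000001, which displays as ~0): highly selective.
-- Density = 1    -> a single distinct value: useless for seeks on its own.
-- Density = 0.58 -> fewer than two distinct values on average: very unselective.
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex')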
Is it necessary to schedule an update statistics job on indexes in SQL Server 2005 on a daily basis? Is it necessary to schedule an index rebuild in SQL Server 2005 on a daily basis?
Is there any way to determine index usage statistics for a given table? For example, I have a table with three indexes. I need to know how many times each index was used. Is that possible?
And the second part of the question: I need to know which user is overloading my database with gigantic queries. Is there any way to determine how many system resources each user session consumes?
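On 2005 and later, both questions have DMV answers: sys.dm_db_index_usage_stats counts seeks, scans, and lookups per index, and the session view gives per-session resource use - a sketch of the latter:

-- CPU, I/O, and memory consumed by each current session, busiest first.
SELECT s.session_id, s.login_name, s.cpu_time, s.reads, s.writes,
       s.logical_reads, s.memory_usage
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
ORDER BY s.cpu_time DESC;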
We have a 20 GB database, and the reorganize-indexes and update-statistics maintenance takes about 4 hours while the log file grows out of control, which is a serious problem since it cannot be truncated (database mirroring).
We have implemented a very small reporting database which has a main table that started off small and has now grown to around half a million rows. Initially, there were no indexes on the table apart from a clustered index, but as the data has grown, performance has dropped and so we have added a number of indexes. This has resolved the performance issues.
Before we created the indexes, SQL Server had auto-created a number of statistics objects (_WA_Sys_000... etc). After creating the indexes, new statistics objects were created for the new indexes. In some cases, there are duplicate statistics (auto-created and index) on the same columns. Should I go through and drop the duplicate auto statistics? Will having duplicates cause issues at all?
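A rough sketch of a query to spot the overlaps - auto-created statistics whose leading column is also the leading key column of an index on the same table:

SELECT s.name AS auto_stat, i.name AS index_name, c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS stc
  ON stc.object_id = s.object_id
 AND stc.stats_id  = s.stats_id
 AND stc.stats_column_id = 1
JOIN sys.index_columns AS ic
  ON ic.object_id  = s.object_id
 AND ic.key_ordinal = 1
 AND ic.column_id  = stc.column_id
JOIN sys.indexes AS i
  ON i.object_id = ic.object_id
 AND i.index_id  = ic.index_id
JOIN sys.columns AS c
  ON c.object_id = stc.object_id
 AND c.column_id = stc.column_id
WHERE s.auto_created = 1;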
Our backups by default go to a network location, but I'd like to modify our maintenance plans to back up to an alternative location if the primary location isn't available. I've set up two Backup Database Tasks where the second one runs only if the first one fails, and if the second one runs (on the first one's failure) it then sends me a notification so I know this occurred.
The Plan is running as expected: when I simulate a bad path in the first Backup Task, the second one runs and the notification is sent, but the Job shows failure. I'd like the job to show as Successful when this occurs, since I'm handling the issue and notification within the Plan, but I've been unable to find out how. I've set FailParentOnFailure to False on the Plan and changed the MaximumErrorCount to 2 on the assumption that this would work, but neither did.
Also I'm running into this in both SQL 2008 and SQL 2012.
At one of our client sites we have configured AlwaysOn in synchronous mode. We have also scheduled rebuild-index and update-statistics jobs that run at night every alternate day. The issue is that there are more than 100 sleeping sessions blocking the update statistics job.
I have to stop the update statistics job manually once I get to the office.
Once, I killed the blocking sleeping session, but then another sleeping session blocked the job, and so on.
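A sketch of how the blockers can be found (and then killed) in one pass rather than one at a time:

-- Sessions that are blocking someone but are themselves idle.
SELECT DISTINCT r.blocking_session_id AS blocker,
       s.login_name, s.status, s.last_request_end_time
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.blocking_session_id
WHERE r.blocking_session_id <> 0
  AND s.status = 'sleeping';
-- Each blocker can then be terminated with KILL <session_id>.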
I am really puzzled by an apparent difference between a table's index key column order and its statistics column order. I was under the impression that index statistics mirror the index definition. However, in my db 2470 index ordinal definitions match their statistics definitions, but 66 do not. I can also reproduce this discrepancy on 2008 R2, 2012, and 2014.
As per definition,
stats_column_id int
1-based ordinal within set of stats columns
This script duplicates this for me.
BEGIN TRAN
GO
use tempdb
GO
CREATE TABLE [dbo].[ItemProperties](
    [itmID] [int] NOT NULL,
    [cpID] [smallint] NOT NULL,
    [ipuID] [tinyint] NOT NULL,
[Code] ....
The result I get is this:
object_id   stats_name                           stats_column_list
1525580473  PK_ItemProperties_itmID_ipuID_cpID   itmID, cpID, ipuID,
and
object_id   index_name                           index_column_list
1525580473  PK_ItemProperties_itmID_ipuID_cpID   itmID, ipuID, cpID,
Also a query I used to discover this in my db is:
WITH stat AS (
    SELECT s.object_id
          ,s.name as stats_name
          ,( SELECT c.name + ', ' as [data()]
             FROM sys.stats_columns as sc
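The query above is cut off; a completed version along the same lines (a reconstruction, not necessarily the original) might be:

WITH stat AS (
    SELECT s.object_id,
           s.name AS stats_name,
           (SELECT c.name + ', ' AS [data()]
            FROM sys.stats_columns AS sc
            JOIN sys.columns AS c
              ON c.object_id = sc.object_id
             AND c.column_id = sc.column_id
            WHERE sc.object_id = s.object_id
              AND sc.stats_id  = s.stats_id
            ORDER BY sc.stats_column_id
            FOR XML PATH('')) AS stats_column_list
    FROM sys.stats AS s
)
SELECT object_id, stats_name, stats_column_list
FROM stat
WHERE object_id = OBJECT_ID('dbo.ItemProperties');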
I recently upgraded SQL Server 2005 x64 to SP2 and up to build 3159. Since then, the Maintenance Plan for index reorgs has failed with a System.OutOfMemoryException error. No other errors are logged anywhere, and the plan report file has no information either.
I'm in the process of trying to optimize a stored procedure with many queries. The execution plan suggests a missing non-clustered index for nearly every query, and they're all fairly similar. The only real difference between them is what's in the INCLUDE list; the two key columns are listed in every missing index. Let's say each query is approximately 5% of the total batch, and 90% of the queries fall into the category described above. How should I go about creating the missing indexes? Create all of them? Create one generic index that has all the INCLUDE columns? Create a minimal index with just a few of the common INCLUDE columns?
Here's an example of what I'm talking about with the missing indexes:
/*
The Query Processor estimates that implementing the following index could improve the query cost by 99.9044%.
*/
/*
USE [DB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ARTX],[C4ASTX],[C4ADNB],[C4CZST])
GO
*/

/*
The Query Processor estimates that implementing the following index could improve the query cost by 99.5418%.
*/
/*
USE [DB]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ABCD],[C4ARTX],[C4ASTX],[C4ADNB],[C4AFNB],[C4BKVA])
GO
*/
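If the key columns really are identical across the suggestions, one option is a single consolidated index whose INCLUDE list is the union of the suggested columns - a sketch built from the two suggestions above (wider leaf rows, but one index to maintain instead of several):

CREATE NONCLUSTERED INDEX IX_TABLE_1_ColA_ColB
ON [dbo].[TABLE_1] ([COLUMN_A],[COLUMN_B])
INCLUDE ([C4ABCD],[C4ARTX],[C4ASTX],[C4ADNB],[C4AFNB],[C4BKVA],[C4CZST]);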
select col1, col2, col3, col4 from Table where col2=5 order by col1
I have a primary key on the table. The execution plan shows the clustered index scan costing 30% and the sort 70%. When I run the query I get a missing index hint on col2 with 95% impact, so I created a non-clustered index on col2. The total execution time decreased by around 80 ms, but I don't see that index name used anywhere in the execution plan - even after creating the index I see the same plan.
The execution plan still shows the clustered index scan at 30% and the sort at 70%, yet the total time and the logical reads on the table are both decreasing. I am sure the index is useful, but why is there no change in the execution plan?
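For what it's worth, an index on col2 alone can only feed the filter; the sort on col1 and the lookups for col3/col4 still push the optimizer toward the clustered scan. A sketch of an index shaped to serve the whole query:

-- Seek on col2 = 5, rows already ordered by col1, no lookups needed.
CREATE NONCLUSTERED INDEX IX_Table_col2_col1
ON dbo.[Table] (col2, col1)
INCLUDE (col3, col4);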
If you display the execution plan and run the following:

SET STATISTICS IO ON
go
SELECT ProductID, SupplierID
FROM Products
WHERE SupplierID = 1

I don't understand how come there is no Bookmark Lookup operation happening to get the ProductID? I only see an Index Seek happening on SupplierID. There is no composite index SupplierID + ProductID, so what am I not understanding here?

Thank you
I have query with an expensive Key Lookup on a joined table. The predicate is the column that I'm joining on, and the output list contains two columns from the joined table.
I've created a basic non-clustered index covering the predicate column and INCLUDE-ing the two output columns. However, the execution plan ignores it and insists on using the primary key of the joined table to do the expensive key lookup. I've tried adding the included columns to the index keys directly, and there's no change. I've also tried running DBCC FREEPROCCACHE - no change.
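One way to test whether the index can serve the query at all is to force it - a sketch with stand-in table, index, and column names:

DECLARE @value int = 42;  -- stand-in predicate value

SELECT t.JoinCol, t.OutCol1, t.OutCol2
FROM dbo.JoinedTable AS t WITH (INDEX (IX_JoinedTable_Covering))
WHERE t.JoinCol = @value;

If the hinted plan is genuinely cheaper, the optimizer's estimates are off (stale statistics are a common cause); if it is slower, the index doesn't cover what the plan actually needs.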
On SQL Server 2005 SP2, with the Publisher and Distributor on the same instance, my old snapshots are not being cleaned up.
The following error is in the agent history:
Executed as user: Domain\MyUser. Could not remove directory '\\vmsql01\ReplData\unc\Publication_TRANSACTIONAL\20070702104416'. Check the security context of xp_cmdshell and close other processes that may be accessing the directory. [SQLSTATE 42000] (Error 20015). The step failed.
xp_cmdshell is enabled and I can run commands like:

exec master.dbo.xp_cmdshell 'md c:\TestFolder'
The permissions on the snapshot share and file system give Domain\MyUser full control.

I have logged into the machine as this user and can remove snapshots, so it does not seem to be a permissions issue.
On other machines I do not get any errors but the snapshot folder still is not cleaned up.
Hello group.

I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: from Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists for Country. BUT, the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this.

I cannot see why? (and I've been trying for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - this would make sense, but if creating the missing index, query analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this, but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hope somebody has already delved into this, and knows a good explanation.

Rgds
Jesper
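For reference, the statistic in question can be created by hand (the statistic name is arbitrary):

CREATE STATISTICS st_Customers_Country ON dbo.Customers (Country)

Even when a scan is unavoidable, a column statistic lets the optimizer estimate how many rows the predicate will return, and that cardinality estimate feeds join-order, operator, and memory decisions further up the plan - which is why the optimizer asks for it even though it never changes this simple plan.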
What is the unit of the numbers you get in the Time Statistics section when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0's, but if I mess up a query on purpose I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
I'm trying to finish an application that works like Query Analyzer in terms of returning query plans and statistics.
Problem the co-author is having:
>In using ADO to connect to SQL Server, I'm trying to retrieve multiple
>datasets AND statistics that are usually returned via the OnInfoMessage
>event. For those that are familiar with SQL Server, I need the results
>returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options.
>Anyone had any luck doing this before?
Can anyone shed any light on this please?
Thanks.
BTW if anyone wants to take a look at the tool so far - to see what I'm delving into: http://81.130.213.94/myforum/forum_posts.asp?TID=78&PN=1
I'm creating a site for a national league and am having difficulty querying for a particular type of statistic, which I'm hoping an expert on here can help me with.
My data structure is such that an Event is a league meeting (all games take place on the same day) which has Fixtures. These Fixtures have Fixture_Events (essentially someone scoring a goal, a timeout being called, a penalty being awarded). I also have a Teams table, a Players table, a TeamRoster table (all players registered to a team) and a FixtureAttendees table (players from team who played in game). I'm trying to return 2 statistics.
The first is "How many Shutouts has a goalie had?" The rule for this in English, how many games has a player participated in, where they have played in goals, and the score has been x-0 in their favour.
The second is "How many game winning goals has a player scored?" The rule for this is essentially, how many goals has a player scored where the game was previously tied (e.g. 2-2) and they have scored the last goal in the fixture (e.g. 3-2).
My tables
CREATE TABLE [dbo].[Fixture_Events](
    [FixtureEventID] [int] IDENTITY(1,1) NOT NULL,
    [FixtureID] [int] NOT NULL,
    [EventType] [nvarchar](50) NOT NULL,
    [EventTime] [nchar](5) NOT NULL,
    [TeamID] [int] NOT NULL,
    [Player1] [int] NOT NULL,
    [Player2] [int] NULL,
    [EventCode] [nvarchar](50) NULL,
    [PenaltyMinutes] [int] NULL,
    CONSTRAINT [PK_Fixture_Events] PRIMARY KEY CLUSTERED ([FixtureEventID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Events_Fixtures](
    [FixtureID] [int] IDENTITY(1,1) NOT NULL,
    [EventID] [int] NOT NULL,
    [LocationID] [int] NOT NULL,
    [FixtureDate] [smalldatetime] NOT NULL,
    [HomeTeam] [int] NOT NULL,
    [AwayTeam] [int] NOT NULL,
    CONSTRAINT [PK_Seasons_Fixtures] PRIMARY KEY CLUSTERED ([FixtureID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Fixture_Attendees](
    [FixtureID] [int] NOT NULL,
    [PlayerID] [int] NOT NULL,
    [Goalkeeper] [bit] NOT NULL CONSTRAINT [DF_Fixture_Attendees_Goalkeeper] DEFAULT ((0)),
    CONSTRAINT [PK_Fixture_Attendees] PRIMARY KEY CLUSTERED ([FixtureID] ASC, [PlayerID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
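For the shutout count, something along these lines might work. It is only a sketch: it assumes goals appear in Fixture_Events with EventType = 'Goal', and that TeamRoster has TeamID and PlayerID columns - both guesses, since those details aren't shown above.

-- Shutouts per goalkeeper: fixtures the player attended in goal where the
-- opposing team never scored.
SELECT fa.PlayerID, COUNT(*) AS Shutouts
FROM dbo.Fixture_Attendees AS fa
JOIN dbo.TeamRoster AS tr
  ON tr.PlayerID = fa.PlayerID
WHERE fa.Goalkeeper = 1
  AND NOT EXISTS (
        SELECT 1
        FROM dbo.Fixture_Events AS fe
        WHERE fe.FixtureID = fa.FixtureID
          AND fe.EventType = 'Goal'
          AND fe.TeamID <> tr.TeamID)   -- no goal by the other team
GROUP BY fa.PlayerID;

The game-winning-goal query would follow the same shape, comparing each goal's running score against the final score.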
It's a complicated database, as I've tried to model it as accurately as I can. It may actually be easier if I made a backup of the database available rather than trying to post code.

If anybody can help me and requires further info, please let me know.
I am working on an application where I am migrating a legacy application to SQL Server. Whenever I make changes in the legacy application, I immediately move those changes to SQL Server using INSERT/DELETE/UPDATE statements.
I also want all of this data on 2 to 3 other SQL Servers, which are on different remote machines. For this I thought of using Service Broker to send the queries as messages.
So on the initiator (where the legacy application resides), I create 2 to 3 queues (based on the number of receivers), and each time a query succeeds on the initiator, I put those messages (queries) on the initiator queues and send them to the various targets. On each target I just pop the messages and execute them, so that the query runs on all the targets and all my databases stay in sync.
I need some info on various operations on Service Broker queues:
1. How do I insert/update/delete from the queue?
2. Can I set a size constraint on the queue?
3. Can I get info such as when the queue is full?
4. Can I set custom flags on the queue, like marking the queue valid or invalid (setting and getting these flag values)?
5. Clearing the entire queue.
These are the operations I need on the queue for various use cases (failure cases).
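For orientation, the basic pattern looks roughly like this - a minimal sketch where every object name is made up, and note that SEND and RECEIVE take the place of direct INSERT/DELETE (queue contents aren't manipulated with ordinary DML):

-- One-time setup per database: message type, contract, queues, services.
CREATE MESSAGE TYPE [//legacy/Sql] VALIDATION = NONE;
CREATE CONTRACT [//legacy/SqlContract] ([//legacy/Sql] SENT BY INITIATOR);
CREATE QUEUE dbo.InitiatorQueue;
CREATE QUEUE dbo.TargetQueue;
CREATE SERVICE [//legacy/Initiator] ON QUEUE dbo.InitiatorQueue ([//legacy/SqlContract]);
CREATE SERVICE [//legacy/Target] ON QUEUE dbo.TargetQueue ([//legacy/SqlContract]);

-- Sending a query text as a message:
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//legacy/Initiator]
    TO SERVICE '//legacy/Target'
    ON CONTRACT [//legacy/SqlContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//legacy/Sql] (N'UPDATE dbo.T SET x = 1');

-- On the target, receiving ("popping") and executing it:
DECLARE @sql nvarchar(max);
RECEIVE TOP (1) @sql = CAST(message_body AS nvarchar(max)) FROM dbo.TargetQueue;
IF @sql IS NOT NULL EXEC sys.sp_executesql @sql;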
Can you point me to some documentation/tutorials which shed more light on these queue operations? Thanks,
I need to create multiple reports with each report being sent to the specified user. For example, 3 Managers should each get their own report. A Stored Procedure receives a ManagerID parameter, so I need 3 instances of the report. I also need each report to be sent to its associated Manager:
Mgr ID   Manager Name
------   --------------
1        John Q Manager
2        Jane Q Manager
3        Jack O Lantern
I set up an rss file to run the report and can run it multiple times, passing in a different ID value each time, but it will not be data driven because I must hard-code the parameters in the ReportParameters object of the report for each execution.
Is there a way, either by designing the report in VS2005 or on the Reporting Services server, to accomplish what I'm intending to do?