I tried some tests and one thing puzzles me: why doesn't the optimizer use a Constant Scan on normal tables? For instance:
--drop table #tmp
create table #tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into #tmp(name) values(NEWID())
insert into #tmp(name) values(NEWID())
go
set statistics profile on
go
-- Execution plan creates a Constant Scan
select * from #tmp where id = 1 and id = 5
go
set statistics profile off
GO
--drop table tmp
create table tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into tmp(name) values(NEWID())
insert into tmp(name) values(NEWID())
go
set statistics profile on
-- Why doesn't the execution plan create a Constant Scan in this case?
select * from tmp where id = 1 and id = 5
go
set statistics profile off
I have a SQL command which I run on two separate servers. Both servers are configured and built the same. On server 1 it takes mere seconds, but on server 2 it takes over 5 minutes.
I have checked the execution plans on both servers and they are completely different. I ran UPDATE STATISTICS WITH FULLSCAN on both servers, but the execution plans were still different.
My question is: why are the execution plans so different, and how do I get them to execute with the same plan?
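For comparing the two, one low-tech check (a sketch; the stand-in statement below should be replaced with the actual slow command) is to capture the textual showplan on each server and diff the output:

SET SHOWPLAN_TEXT ON
GO
-- stand-in statement for the sketch; substitute the real command here
SELECT COUNT(*) FROM sysobjects
GO
SET SHOWPLAN_TEXT OFF
GO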
I tried this:

use northwind
go
SELECT OrderDate
FROM Orders
WHERE OrderDate > '19950101'

See the query plan? OK. Now:

SELECT OrderDate, EmployeeId
FROM Orders
WHERE OrderDate > '19950101'

See the query plan? What happened? The only way to get an index seek instead of an index scan is to force the index usage ( with(index=orderdate) ), but I don't like this solution. Also try this:

SELECT *
FROM Orders
WHERE employeeId > 9

and

SELECT *
FROM Orders
WHERE employeeId > 8

Can someone explain why this happens, and how can I get around the performance loss? (Well, not in the Orders table, but in my table there are 300K records, and making a scan to retrieve 50 records is not exactly what I want.) Thanks to all.
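A hedged guess at the first pair (an assumption, not verified against Northwind here): once EmployeeId is in the select list, the orderdate index no longer covers the query, so each seek match needs a bookmark lookup, and beyond a small fraction of the table the optimizer prices a scan as cheaper. A composite index covering both columns avoids the lookups (SQL 2000-style syntax; the INCLUDE clause only arrived in 2005; the index name is hypothetical):

CREATE INDEX ix_orders_orderdate_employeeid ON Orders (OrderDate, EmployeeId)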
I'm looking for an in-depth book, article, FAQ, whatever, regarding the query optimizer...
I've read Books Online pretty thoroughly and have been writing SQL for a number of years. The system I work on relies heavily on real-time access to data, and the number-crunching procedures we use are a critical part of the design. For the most part, sometimes through trial and error, I have been able to find ways to achieve the performance we need, but I'm often surprised by the methods that prove most effective.
For example, I have cases where I can only get the performance I'm looking for using table functions, and other cases where indexed temporary tables are the only way. I have statements that run fast as a select statement, but when converted to an update statement limp along, forcing me to resort to cursors, temp tables, or table hints with varying degrees of success.
I'm wondering if anyone has come across material that takes an in-depth look at the various technologies available and how to tweak queries. I want to get away from hours of testing and hacking.
Way back when, and at least in version 7 IIRC, the query optimizer gave up when the where clause in a statement contained more than 4 search conditions. Does anyone know if such a limitation still exists in MS SQL 2005? The BOL seems to be silent on the issue. Boa
I'm very puzzled by the choice of NC index being made by the optimizer in this example. I don't actually think it should use an NC index at all.

I have:

Table: CustomerStatus_T
Single data page
19 records
Clustered index on CustomerStatusID:

CREATE TABLE [CustomerStatus_T] (
[CustomerStatusID] [int] NOT NULL ,
[Name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Description] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Code] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CodeAlt] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Ordinal] [int] NULL ,
[Default] [int] NULL ,
[Display] [bit] NOT NULL ,
[StatusType] [varchar] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DateCreated] [smalldatetime] NULL ,
[DateUpdated] [smalldatetime] NULL ,
[DateArchived] [smalldatetime] NULL ,
CONSTRAINT [PK_ROMS_CustomerStatus] PRIMARY KEY CLUSTERED ([CustomerStatusID]) ON [PRIMARY]
) ON [PRIMARY]

If I run the following query, it does exactly what I expect and scans the clustered index:

SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

and gives the following QEP and IO statistics:

|--Clustered Index Scan(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[PK_ROMS_CustomerStatus]), WHERE:(([CustomerStatus_T].[DateArchived]=NULL AND [CustomerStatus_T].[StatusType]='Q') AND Convert([CustomerStatus_T].[Display])=1))

Table 'CustomerStatus_T'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

If I now put a NC index on the statustype column:

create index ix_nci_statustype on customerstatus_t(statustype)

the query plan changes to:

SELECT customerStatusID, [Name] FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

|--Filter(WHERE:([CustomerStatus_T].[DateArchived]=NULL AND Convert([CustomerStatus_T].[Display])=1))
  |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T]))
    |--Index Seek(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[ix_nci_statustype]), SEEK:([CustomerStatus_T].[StatusType]='Q') ORDERED FORWARD)

Table 'CustomerStatus_T'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0.

For some bizarre reason, the optimizer thinks that a NC index lookup on a single-page table, which ultimately costs 7 IOs, is cheaper than a table (or clustered index) scan of a single page. Why? The showplan cost also shows that it expects the NC index to be cheaper (which is presumably why it goes and uses it), but even after running UPDATE STATISTICS on the table it still chooses the same idiotic query plan.

Any thoughts? Has anyone seen similar behaviour before, and can anyone please explain it to me?

p.s. I don't actually WANT to put a NC index on this table, but I noticed the behaviour by accident, which is why I'm asking the question :-)
Is there a DMV or similar in SQL 2012, or SQL 2008, that shows when a statistic was last used by the optimizer? I would like to clean up some of the auto-generated stats, assuming it's possible to do so. In particular I'm looking to drop those statistics that were created by one-off queries, data loads, etc., and are now doing nothing but adding to the execution time of Update Statistics jobs.
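As a starting point for the cleanup side, a sketch that inventories the auto-created stats and their last update time (to my knowledge there is no documented DMV for when the optimizer last read a statistic; STATS_DATE only reports the last update):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.auto_created = 1      -- only the _WA_Sys_* style auto stats
ORDER BY table_name, stats_name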
declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId OR @ContactId = -1

If you run this in SQL 2005 on the AdventureWorks database, why is the number of logical reads 561:

Table 'Contact'. Scan count 1, logical reads 561

and not 2, as when you run it without the second OR condition:

declare @ContactId as integer
set @ContactId = 5

select *
from Person.Contact
where ContactId = @ContactId

How can I use the same SP and either get one record returned by passing the ID of the field, or pass a dummy parameter like -1 in order to get ALL the records returned? In this case, even when I pass a parameter like ContactID = 5, there is still a table scan (clustered index scan in this case) happening for the other OR condition. There seems to be no way to tell SQL to check the first condition and only evaluate the second OR condition if the first is false. On the same topic, does this mean all OR conditions are ALWAYS evaluated, regardless of whether one of them has already been determined to be true? Thank you
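One common workaround for this catch-all pattern (a sketch, offered as a suggestion rather than the definitive answer) is to split the two cases so each branch gets its own plan:

IF @ContactId = -1
    SELECT * FROM Person.Contact
ELSE
    SELECT * FROM Person.Contact WHERE ContactID = @ContactId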
Guys, what I need is a tool which gives details on the choice of an execution plan by SQL Server. For example, the cost for a hash join might be 200 and 100 for a nested loop, and therefore a nested loop is used. Same thing for the access paths for each table/view involved. In Oracle, we turn on event 10053 to see this kind of info. Thanx, Daniel
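The closest built-in equivalent I can vouch for (a sketch, not a full 10053-style optimizer trace): SET SHOWPLAN_ALL returns the estimated rows, IO, CPU and total subtree cost the optimizer assigned to each operator, and SET STATISTICS PROFILE adds actual row counts after execution:

SET SHOWPLAN_ALL ON
GO
-- stand-in statement for the sketch; substitute the query of interest
SELECT COUNT(*) FROM sysobjects
GO
SET SHOWPLAN_ALL OFF
GO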
I am trying to resolve performance issues in a third-party application. I have run the Profiler and found a transaction that performs a table scan against a 6 million row table. This transaction occurs repeatedly, so I thought: just add an index on the columns in the where clause used here. After adding the index, I looked at the estimated execution plan in Query Analyzer, and I find that it is still performing the table scan. If I run the query it takes over 60 seconds to run; if I add an index hint, it runs in under a second (a sketch of the hinted form follows the statistics output below). I ran DBCC SHOW_STATISTICS to see if the statistics were up to date:
Statistics for INDEX 'IX_Finish_dept'.
Updated              Rows     Rows Sampled  Steps  Density       Average key length
-------------------  -------  ------------  -----  ------------  ------------------
Jun 26 2007  5:18PM  6832336  6832336       150    2.1415579E-7  18.0

(1 row(s) affected)

All density   Average Length  Columns
------------  --------------  ------------
2.1875491E-7  8.0             finish
1.9796084E-7  18.0            finish, dept
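For reference, the hinted form that runs in under a second looks roughly like this (a sketch; the table name and predicates are stand-ins for the application's actual query):

SELECT *
FROM dbo.BigTable WITH (INDEX (IX_Finish_dept))   -- stand-in table name
WHERE finish = @finish AND dept = @dept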
Hello All, I have a series of stored procedures with a query joining 5 tables. These tables are quite large, with a couple of them having around 10 million rows. As this is a DSS application with periodic data loads, I thought of creating an Indexed View on top of these tables. Now the problem is that the Indexed View is not used directly by the optimizer. I need to change my queries and add a WITH (NOEXPAND) query hint to make sure the indexed views are used, in spite of getting a dramatic improvement in query timings (from 64 secs down to 3 secs) after using the Indexed Views. I would like to know the possible reason for the optimizer not using the Indexed View by itself. Is it because my Indexed View caters to multiple queries, or am I missing out on something basic?
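For concreteness, the hinted form looks like this (a sketch; the view and column names are stand-ins). One frequently cited cause, which may or may not apply here, is that automatic matching of indexed views by the optimizer is an Enterprise Edition feature; other editions only use an indexed view when NOEXPAND is specified:

SELECT v.GroupKey, v.TotalAmount      -- stand-in column names
FROM dbo.MyIndexedView AS v WITH (NOEXPAND)
WHERE v.GroupKey = 42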
An interesting discussion yesterday. One of the programmers asked about the use of the NOLOCK optimizer hint with an iterator table aka table of numbers. His comment was that this optimizer hint was not efficient. Rather than give a knee-jerk response I thought it would be better to ask. The main circumstance is that the iterator table is completely static with a fill factor of 100%. My purpose is to eliminate lock contention if I can.
Are there reasons to not use the NOLOCK hint in this case to potentially improve performance?
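For concreteness, the pattern being asked about looks something like this (a sketch; the table and column names are stand-ins for the actual iterator table):

SELECT n.Number
FROM dbo.Numbers AS n WITH (NOLOCK)
WHERE n.Number BETWEEN 1 AND 1000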
I have SQL 7.0 SP2 on NT 4.0 SP5. My database is 180GIG. 23 Tables. It has been up and running for 2 years without any problems. All of a sudden my queries have started taking a long time to run. The optimizer has decided that table scans are better than indexes. If I use query hints they work just fine, but I can't modify all of our code to make these changes.
This is happening on all tables. Record counts are in the same range they have always been.
Statistics and indexes are all fine and current. Have dropped and rebuilt both.
Our app has been distributed to more than 300 different sites. On one of the sites we get the error "Could not continue scan with NOLOCK due to data movement", indicating that the query optimizer takes a NOLOCK for our select statement (it has been opened with adOpenDynamic, adLockOptimistic).
Changing the source is not an option; we have to solve this without touching the code.
Is there any way to tweak the query optimizer so that our app works correctly? I know that there will be a reduction in performance, but it's our only choice.
I've got this query that runs in 30 seconds and returns about 24,000 rows. The table variable returns about 145 rows (no performance issue here), and the TransactionTbl table has 14.2 million rows, a compound clustered primary key, and 6 non-clustered indexes, none of which meet the needs of the query.
Actual execution plan shows SQL is doing an index seek, then a nested loop join, and then fetching the remaining data from the TransactionTbl using a Key Lookup.
I designed a new index based on the query, which, when I force its usage via an index hint, reduces the run time to sub-second. But without the index hint the SQL optimiser won't use the new index, which looks like this:
CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date) include (Ledger_Code, Amount, CurrencyID, AssetID)

and I tried this:

CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date, Ledger_Code, CurrencyID, AssetID) include (Amount)

and even a full covering index!
I did some testing, including disabling all indexes but the PK, and the optimiser tells me I've got a missing index and recommends I create one EXACTLY like the one I designed, but when I put mine back it doesn't use it.
I thought this might be due to fragmentation and/or stats being out of date, so I rebuilt the PK and my index, and the optimiser started using my index, doing an index seek and running sub-second. Thinking I had solved the problem, I rebuilt all the indexes, testing after each one, and my index was used; BUT as soon as I flushed the related query plan, the optimiser went back to using a less optimal index, with a seek and key lookup plan, taking 30 seconds.
For now I've resorted to using OPTION (TABLE HINT(G, INDEX(IX_Test))) to force this, but it's a workaround only. Why would the optimiser select a less optimal query plan?
I am having an issue with large queries using Microsoft SQL Server 2005 - 9.00.2221.00 (X64).
I have a query with many INNER/LEFT OUTER/RIGHT OUTER joins which is taking very, very long to run. This looks exactly like the problem described in http://support.microsoft.com/kb/318530. However, that doc says it was fixed in SP1, which is already installed.
Basically I have a query:
SELECT .... FROM TABLEA
INNER JOIN TABLEB ...
LEFT OUTER JOIN TABLEC ...
LEFT OUTER JOIN TABLED ...
RIGHT OUTER JOIN TABLEF ...
LEFT OUTER JOIN TABLEJ ...
LEFT OUTER JOIN TABLEH ...
LEFT OUTER JOIN TABLEI ...
RIGHT OUTER JOIN TABLEK ...
LEFT OUTER JOIN TABLEM ...
... 17 joined tables in all ...
WHERE TABLEB.field1 = 'abc'
The query plan for this is using TABLEA as the "main" table and joining everything else to it. The problem is, TABLEA has 117 MILLION records. TABLEB has 10,000 records which match the WHERE. I stopped this query after it ran for 62 HOURS.
If I simply change the query to:
SELECT .... FROM TABLEB
INNER JOIN TABLEA ...
LEFT OUTER JOIN TABLEC ...
LEFT OUTER JOIN TABLED ...
RIGHT OUTER JOIN TABLEF ...
LEFT OUTER JOIN TABLEJ ...
LEFT OUTER JOIN TABLEH ...
LEFT OUTER JOIN TABLEI ...
RIGHT OUTER JOIN TABLEK ...
LEFT OUTER JOIN TABLEM ...
... 17 joined tables in all ...
WHERE TABLEB.field1 = 'abc'
The query runs in 15 mins. The query plan now uses TABLEB and the WHERE clause to join all the other tables.
The problem is, this query is generated by a report writer, and I have no control over the way it creates the SQL code.
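If the generated SQL text is stable, a plan guide might fit the "can't change the code" constraint (a sketch; the guide name and hint are placeholders, and @stmt must match the generated statement text exactly, which is the fiddly part):

EXEC sp_create_plan_guide
    @name = N'pg_report_query',            -- hypothetical name
    @stmt = N'SELECT ... the exact generated statement text ...',  -- must match verbatim
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (FORCE ORDER)'       -- placeholder hint; pick whatever steers the plan in testing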
I cannot find this info anywhere. We have Hyperion 8.x running against SQL 2000. We will upgrade side-by-side in the coming weeks. It appears that Hyperion 8.x is assured to continue working without issue against a 2K5 database in 80 compatibility level.
I have seen the BOL table on differences between 80 and 90 (what specifically affects us is WITH on index hints), but what I wonder about are 3 specific features.
1. Do you still get DMV capabilities in 80 mode? (I ran a test on sys.dm_db_index_usage_stats for an 80 compat DB, creating index and using query to hit index, and yes - it does look as if this DMV does get populated).
2. Can you still Table Partition in 80 mode? (Ran a test in DEV, and yes - this does look possible - any gotchas?)
3. Does the optimizer behave quite differently in 80 mode?
I have tested in DEV moving a DB from 80 to 90 and back, and this seems to work with no problems.
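For reference, the round trip can be scripted (a sketch; the database name is a stand-in, and SQL 2005 still uses the old procedure for this — ALTER DATABASE ... SET COMPATIBILITY_LEVEL only arrived in 2008):

EXEC sp_dbcmptlevel 'MyDatabase', 90
EXEC sp_dbcmptlevel 'MyDatabase', 80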
I had a view in which I did something like this: isnull(fld,val) as 'alias'
When I assign a value to this in the client (VB 6.0), it works OK in SQL 2000 but fails in 2005. When I change the query to fld as 'alias', it works OK in SQL 2005. Why? I still have SQL 2000 (8.0) compatibility.
Also, some queries which are pretty badly written run on SQL 2000 but don't run at all in SQL 2005.
Any clues or answers? Is it some configuration issue?
I am writing a pgm that attaches to a SQL Server database. I have an Add stored procedure and an Update stored procedure. The two are almost identical, except for a couple parameters. However, the Add function works and the Update does not. Can anyone see why? I can't seem to find what the problem is...
This was my test:
Dim cmd As New SqlCommand("pContact_Update", cn)
'Dim cmd As New SqlCommand("pContact_Add", cn)
Catch ex As Exception
    Label1.Text = ex.Message
End Try
When I use the Add procedure, a record is added correctly and I receive the "done" message. When I use the Update procedure, the record is not updated, but I still receive the "done" message.
I have looked at the stored procedures and the syntax is correct according to SQL Server.
The original suggestion for my problem was:

Select * from TableA where ID not in ( Select ID from TableB)

When I run the query below using the IN operator I get 227 records returned, but when I use NOT IN I get zero records, when I expect well over 10,000. What am I missing? (Using SQL 2000 server.)

SELECT LinksInfo.L_ID, LinksInfo.C_ID, Companies.C_CompanyName, Companies.C_Email, Companies.C_CompanyEmail
FROM LinksInfo
INNER JOIN Companies ON LinksInfo.C_ID = Companies.C_ID
WHERE (LinksInfo.L_ID IN (SELECT ZL_ID FROM Location_Zip))

ZL_ID is not a primary key in Location_Zip.
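A hedged guess at the NOT IN half (an assumption about the data, easy to check): if Location_Zip.ZL_ID contains even one NULL, every NOT IN comparison evaluates to UNKNOWN and zero rows come back. NOT EXISTS is the usual NULL-safe rewrite:

SELECT LinksInfo.L_ID, LinksInfo.C_ID, Companies.C_CompanyName, Companies.C_Email, Companies.C_CompanyEmail
FROM LinksInfo
INNER JOIN Companies ON LinksInfo.C_ID = Companies.C_ID
WHERE NOT EXISTS (SELECT 1 FROM Location_Zip WHERE Location_Zip.ZL_ID = LinksInfo.L_ID)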
Hello again, I think I'm missing something here, I just can't find out what it is. I have a temp table:

CREATE TABLE #tempSearch(
tempID BIGINT IDENTITY(1,1) PRIMARY KEY,
username NVARCHAR(20) COLLATE Finnish_Swedish_CI_AS,
lastlogin DATETIME,
signupdate DATETIME)

Now I am trying to retrieve some data for each user that is inside this tempSearch list and has an id over xxx (xxx = the value of the parameter @first_id):

SELECT @sql = 'SELECT profile_publicinfo.username, profile_publicinfo.gender, profile_publicinfo.signupdate, profile_profilephoto.imageurl, profile_profilephoto.alttext, settings_username.color, profile_publicinfo.lastlogin
FROM #tempSearch
INNER JOIN dbo.profile_publicinfo ON profile_publicinfo.username = #tempSearch.username
INNER JOIN dbo.settings_privateinfo ON settings_privateinfo.username = profile_publicinfo.username
FULL OUTER JOIN dbo.profile_coolfacts ON profile_coolfacts.username = profile_publicinfo.username
FULL OUTER JOIN dbo.profile_profilephoto ON profile_profilephoto.username = profile_publicinfo.username
FULL OUTER JOIN dbo.settings_username ON settings_username.username = profile_publicinfo.username
WHERE (profile_publicinfo.username IN (SELECT username FROM #tempSearch))
AND #tempSearch.tempID >= @first_id'

SELECT @paramlist = '@first_id int'
EXEC sp_executesql @sql, @paramlist, @first_id

I need to get the tempID from the tempSearch table in order to compare it with @first_id. When I run this I get the same username repeated like 30 times before it moves on to the next one; when I debug #tempSearch it looks fine, just the users that are supposed to be there.
Hi, I think I have installed MSDE successfully. The new service is running, but how can I test if it is working? Can I place my files anywhere on the system? All help appreciated.
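A quick smoke test (a sketch; osql ships with MSDE, -E uses Windows authentication, and the instance name may differ on your machine):

osql -E -S (local) -Q "SELECT @@VERSION"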
OK, so I'm using dynamic SQL to enable me to have a variable for the order by, but now it's giving me the following error:
Syntax error converting character string to smalldatetime data type.
Any help is very much appreciated
Code follows
CREATE PROCEDURE spBattingAve1 @cid datetime, @cid1 datetime, @sid nvarchar(10) AS
DECLARE @query nvarchar(4000)
SET @query = 'SELECT MAX(t_batting.runs) AS BestRuns, COUNT(t_batting.dnb) AS DidNotBat, t_player.surname, t_player.firstname, t_batting.player_id, Sum(t_batting.runs) as SumOfruns, COUNT(*) AS Games, COUNT(t_batting.notout) as nout, (Sum(runs)/(COUNT(*)-(COUNT(notout)+COUNT(dnb)))) as AverageRun, (COUNT(*)-COUNT(dnb)) AS Innings,
(SELECT COUNT(*) FROM t_batting inner join t_game on t_batting.game_id = t_game.game_id WHERE (t_batting.player_id = t_player.player_id) AND runs >= 50 AND runs < 100 and t_game.date >= '+@cid+' And t_game.date <= '+@cid1+' GROUP BY t_batting.player_id) AS Overfifty,
(SELECT COUNT(*) FROM t_batting inner join t_game on t_batting.game_id = t_game.game_id WHERE (t_batting.player_id = t_player.player_id) AND runs >= 100 and t_game.date >= '+@cid+' And t_game.date <= '+@cid1+' GROUP BY t_batting.player_id) AS Overton
FROM t_batting
INNER JOIN t_game ON t_batting.game_id = t_game.game_id
INNER JOIN t_player ON t_batting.player_id = t_player.player_id
WHERE t_batting.player_id = t_player.player_id
AND t_game.date >= '+@cid+' And t_game.date <= '+@cid1+'
GROUP BY t_batting.player_id, t_player.surname, t_player.firstname, t_player.player_id
HAVING (COUNT(*)-(COUNT(notout)+COUNT(dnb))) <> 0
ORDER BY '+@sid+' DESC'
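The error comes from the '+@cid+' concatenations: because @cid is a date type, SQL Server tries to convert the surrounding string fragments to the date type rather than the other way around. A sketch of one fix (assuming the rest of the procedure stays as-is): keep the dates as real parameters via sp_executesql and only concatenate the ORDER BY column:

SET @query = 'SELECT ... same query as above, with @from/@to in place of the concatenated dates ... ORDER BY ' + @sid + ' DESC'
EXEC sp_executesql @query, N'@from datetime, @to datetime', @from = @cid, @to = @cid1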
In SQL 7.0, when I use the query below in Query Analyzer, I get an error message. Why?
======================
Set Identity_insert on
UPDATE Table1 SET no = 3 WHERE no = 4
Set Identity_insert off
go
Error message
=============
Server: Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'on'.
Server: Msg 156, Level 15, State 1, Line 7
Incorrect syntax near the keyword 'off'.
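The parser is objecting because SET IDENTITY_INSERT requires a table name. A sketch of the accepted form (note it only permits explicit values on INSERT; an identity column still cannot be changed with UPDATE, so the usual route is to delete and re-insert the row):

Set Identity_insert Table1 on
go
-- IDENTITY_INSERT allows INSERTs with explicit identity values;
-- UPDATE Table1 SET no = 3 ... would still fail on an identity column
Set Identity_insert Table1 off
go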
I am on a box with 4 GB of memory that only runs SQL Server. SQL Server is using 1.7 GB as per task manager. There are a lot of databases on this box. I looked at the boot.ini and /3GB was not set. I have done this at least 50 times before: turn on the /3GB switch and SQL Server will use up to 2.7 GB per task manager. (I know it is actually 2GB and 3GB but it seems that in task manager it shows up as 1.7-1.8 and 2.7-2.8)
Anyway, back to the story. So I set /3GB, rebooted the server, and SQL Server still only consumes up to 1.7GB. I realize that SQL Server might not actually need more memory, but I have a distinct feeling that it wants more but is being constrained; that is, the /3GB switch is not working. More relevant information:
1) the boot.ini line: multi(0)disk(0)rdisk(0)partition(1)WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect /3GB
2) max server memory (MB) = 3000
3) min server memory (MB) = 3000
4) awe enabled = 0
5) @@version = 8.00.2039 which is SQL Server 2000 SP4
Has anyone seen this before? Any ideas on how to troubleshoot this?
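One thing worth ruling out (a hedged guess, not a diagnosis): SQL Server 2000 Standard Edition caps the buffer pool at 2 GB regardless of the /3GB switch, so it may be worth confirming the edition:

SELECT SERVERPROPERTY('Edition')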