SQL Server 2005 Choosing A Wrong Execution Plan
Jan 15, 2007
Hi
I have the following query:
SELECT Dur1.rootId
FROM DurableEventTab Dur1
WHERE (Dur1.dev_ReferenceClusterRoot = 'iyrwd.52')
AND (Dur1.dev_Action = 'Order:Ordered')
AND (Dur1.dev_Active = 1) AND (Dur1.dev_PurgeState = 0)
AND (Dur1.dev_PartitionNumber = 0)
This table has a primary key, aribapk11, and indexes on dev_ReferenceClusterRoot,
dev_Action and dev_PurgeState.
Now when I run this query, the execution plan is actually doing a clustered index scan on the PK aribaPK11. What I was expecting was an index seek on the index defined on dev_ReferenceClusterRoot. Please note that an index seek is the behaviour in SQL Server 2000.
Any idea what is going wrong?
Clustered Index Scan(OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]), WHERE:([typhoon1902].[dbo].[DurableEventTab].[dev_Active] as [Dur1].[dev_Active]=(1.) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PurgeState] as [Dur1].[dev_PurgeState]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PartitionNumber] as [Dur1].[dev_PartitionNumber]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_ReferenceClusterRoot] as [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52' AND [typhoon1902].[dbo].[DurableEventTab].[dev_Action] as [Dur1].[dev_Action]=N'Order:Ordered'))
PhysicalOp/LogicalOp: Clustered Index Scan | DefinedValues/OutputList: [Dur1].[rootId] | EstimateRows: 1 | EstimateIO: 0.00386574 | EstimateCPU: 0.0002263 | AvgRowSize: 71 | TotalSubtreeCost: 0.00409204 | Type: PLAN_ROW | Parallel: 0 | EstimateExecutions: 1
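For comparison, a minimal diagnostic sketch (the index name IX_DurableEventTab_RefClusterRoot is hypothetical; substitute whatever the index on dev_ReferenceClusterRoot is actually called): refresh the statistics, then force that index and compare the two plans and the logical reads.
UPDATE STATISTICS dbo.DurableEventTab WITH FULLSCAN
GO
SET STATISTICS IO ON
GO
SELECT Dur1.rootId
FROM DurableEventTab Dur1 WITH (INDEX(IX_DurableEventTab_RefClusterRoot))  -- hypothetical index name
WHERE (Dur1.dev_ReferenceClusterRoot = 'iyrwd.52')
AND (Dur1.dev_Action = 'Order:Ordered')
AND (Dur1.dev_Active = 1) AND (Dur1.dev_PurgeState = 0)
AND (Dur1.dev_PartitionNumber = 0)
GO
SET STATISTICS IO OFF
If the forced-index plan is cheaper in reads, stale statistics or the index definition are the usual suspects; if it is not, the optimizer's choice may simply be correct for the current row counts.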
View 3 Replies
Jul 6, 2001
I have SQL 7.0 SP2 on NT 4.0 SP5. My database is 180GIG. 23 Tables. It has been up and running for 2 years without any problems. All of a sudden my queries have started taking a long time to run. The optimizer has decided that table scans are better than indexes. If I use query hints they work just fine, but I can't modify all of our code to make these changes.
This is happening on all tables. Record counts are in the same range they have always been.
Statistics and indexes are all fine and current. I have dropped and rebuilt both.
Has anybody else seen this behavior?
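Since the hinted version behaves, one low-effort way to capture evidence on SQL Server 7.0 (table, column and index names below are placeholders) is to put the two textual plans side by side without executing the statements:
SET SHOWPLAN_TEXT ON
GO
SELECT col1 FROM dbo.SomeTable WHERE keycol = 42                                  -- optimizer's choice
SELECT col1 FROM dbo.SomeTable (INDEX = IX_SomeTable_keycol) WHERE keycol = 42    -- hinted version
GO
SET SHOWPLAN_TEXT OFF
GO
Comparing the two plans (and the estimated costs in SHOWPLAN_ALL, if needed) usually shows whether the optimizer thinks the scan is genuinely cheaper or is working from bad statistics.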
View 1 Replies
View Related
Jul 7, 2006
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
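For reference, a minimal sketch of one way to put actual and estimated row counts next to each other per operator (the query below is a placeholder): SET STATISTICS PROFILE ON executes the statement and returns a plan result set whose Rows column is measured while EstimateRows and the cost columns remain optimizer estimates.
SET STATISTICS PROFILE ON
GO
SELECT col1, col2 FROM dbo.SomeTable WHERE col2 = 5   -- placeholder query
GO
SET STATISTICS PROFILE OFF
GO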
View 1 Replies
View Related
Mar 25, 2008
Hi,
I have created a few packages and I want to execute them in a sequence, so I created a wrapper/parent package and added all the other packages as child packages using the Execute Package Task. These are file-system based packages. I am executing the wrapper/parent package from a web page, which executes all the child packages. All is well and works fine when I choose the TransactionOption "Supported" in my wrapper/parent package, but when I choose the TransactionOption "Required" in my wrapper/parent package I get the following error:
Error Occurred: The package is failed due to following: The SSIS Runtime has failed to enlist the OLE DB connection in a distributed transaction with error 0x8004D024 "The transaction manager has disabled its support for remote/network transactions.".
What I am doing is connecting to 3 databases on the same server and doing some data manipulation. MSDTC is running on the target SQL Server, and the DTC service on my local machine is also started and running. What else could be the problem?
Thanks in advance for any and all help
View 1 Replies
View Related
Feb 22, 2008
Can anyone explain, in simple and easy-to-understand language, how to read a query execution plan? I am a fresher assigned to read and evaluate execution plans. I do not understand where the problem is: what percentage is considered good SQL and what percentage is considered bad SQL?
How do I understand whether the problem is in the SQL, the joins, an index, or anything else? Please explain step by step what should be considered and what recommendation I should give for each problem.
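As a hedged starting point rather than a full answer (the object names below are placeholders): the plan percentages are only relative cost estimates within the batch, so measured reads and elapsed time are often more useful when deciding whether a statement is actually a problem.
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT o.OrderId, c.CustomerName              -- placeholder for the query under review
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '20080101'
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
High logical reads, scans on large tables, and big differences between estimated and actual row counts are the usual places to start looking.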
View 6 Replies
View Related
Feb 19, 2006
I'm trying to insert a record in my SQL Server 2005 Express database. The following function tries that and, without an error, returns true. However, no data is inserted into the database... I'm not sure whether my insert statement is correct: I saw other examples with the syntax insert into table values(@value1,@value2)....so I'm not sure about that. Also, I haven't defined the parameter types (e.g. varchar), but I reckoned that could not make the difference.... Here's my code:
Function CreateNewUser(ByVal UserName As String, ByVal Password As String, _
    ByVal Email As String, ByVal Gender As Integer, _
    ByVal FirstName As String, ByVal LastName As String, _
    ByVal CellPhone As String, ByVal Street As String, _
    ByVal StreetNumber As String, ByVal StreetAddon As String, _
    ByVal Zipcode As String, ByVal City As String, _
    ByVal Organization As String _
    ) As Boolean
    'returns true with success, false with failure
    Dim MyConnection As SqlConnection = GetConnection()
    Dim bResult As Boolean
    Dim MyCommand As New SqlCommand("INSERT INTO tblUsers(UserName,Password,Email,Gender,FirstName,LastName,CellPhone,Street,StreetNumber,StreetAddon,Zipcode,City,Organization) VALUES(@UserName,@Password,@Email,@Gender,@FirstName,@LastName,@CellPhone,@Street,@StreetNumber,@StreetAddon,@Zipcode,@City,@Organization)", MyConnection)
    MyCommand.Parameters.Add(New SqlParameter("@UserName", SqlDbType.NChar, UserName))
    MyCommand.Parameters.Add(New SqlParameter("@Password", Password))
    MyCommand.Parameters.Add(New SqlParameter("@Email", Email))
    MyCommand.Parameters.Add(New SqlParameter("@Gender", Gender))
    MyCommand.Parameters.Add(New SqlParameter("@FirstName", FirstName))
    MyCommand.Parameters.Add(New SqlParameter("@LastName", LastName))
    MyCommand.Parameters.Add(New SqlParameter("@CellPhone", CellPhone))
    MyCommand.Parameters.Add(New SqlParameter("@Street", Street))
    MyCommand.Parameters.Add(New SqlParameter("@StreetNumber", StreetNumber))
    MyCommand.Parameters.Add(New SqlParameter("@StreetAddon", StreetAddon))
    MyCommand.Parameters.Add(New SqlParameter("@Zipcode", Zipcode))
    MyCommand.Parameters.Add(New SqlParameter("@City", City))
    MyCommand.Parameters.Add(New SqlParameter("@Organization", Organization))
    Try
        MyConnection.Open()
        MyCommand.ExecuteNonQuery()
        bResult = True
    Catch ex As Exception
        bResult = False
    Finally
        MyConnection.Close()
    End Try
    Return bResult
End Function
Thanks!
View 1 Replies
View Related
May 29, 2004
As developers, we always say "using a stored procedure, instead of a client-side SQL statement, provides performance benefits". However, it seems this has no longer been true since SQL Server 7.0.
See SQL online "Execution Plan Caching and Reuse" at http://msdn.microsoft.com/library/default.asp?url=/nhp/default.asp?contentid=28000409
I am quite confused with the following questions:
1. It seems that since SQL 7.0, a client-side SQL statement reuses an existing execution plan just as a stored procedure does. That means a stored procedure doesn't have much advantage over a SQL statement in terms of performance.
2. It seems a stored procedure is not always compiled ONLY once. If a stored procedure is not used for a long time, it can be kicked out of the procedure cache.
3. In order to reuse an existing execution plan, it seems we have to use a fully qualified identifier, such as
SELECT * FROM Northwind.dbo.Employees
instead of
SELECT * FROM Employees
However, I rarely see anyone use this kind of fully qualified reference for objects in either SQL statements or stored procedures. For example, the sample databases pubs and Northwind don't use fully qualified expressions; I only see them used in the master database.
I guess I might be missing something in the issues above. I would like to get an explanation from a SQL guru or anybody. Thanks a lot.
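As a hedged way to observe plan reuse for yourself on SQL 7.0/2000 (a sketch, not an authoritative test harness): run the same ad hoc statement a couple of times and then look at the use counts in the plan cache.
SELECT * FROM Northwind.dbo.Employees
GO
SELECT * FROM Northwind.dbo.Employees
GO
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%Northwind.dbo.Employees%'
  AND sql NOT LIKE '%syscacheobjects%'   -- exclude this diagnostic query itself
If the second execution bumps usecounts instead of adding a second cache row, the ad hoc plan was reused.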
View 7 Replies
View Related
Nov 27, 2014
I have 2 tables,
1. Need
2. NeedCategory
The Need table has a clustered index on the needid column and NeedCategory has a composite clustered index on needid and categoryid.
Now take a look at the following query and the execution plan for the query.
SELECT N.NeedId,N.NeedName,N.ProviderName
FROM dbo.Need N
JOIN dbo.NeedCategory NC
ON nc.NeedId = n.NeedId
WHERE IsActive=1
AND CategoryId= 2
ORDER BY NeedName
* A clustered index scan on the Need table happens for IsActive = 1.
* A clustered index scan on the NeedCategory table happens for CategoryId = 2 (see the index sketch after the questions below).
My questions are:
1. Why does the scan happen before the join occurs? If it happened after the join, the filter would have less work to do, even if the optimizer chooses to execute the scan first.
2. Is there any chance to rearrange the execution plan manually?
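Regarding the scans, a hedged sketch of indexes that are sometimes tried for this query shape (column placement is an assumption: IsActive is taken to be on Need, and NeedName/ProviderName are assumed worth covering):
-- Lets the CategoryId = 2 predicate become a seek; NeedId is available for the join
CREATE NONCLUSTERED INDEX IX_NeedCategory_CategoryId
ON dbo.NeedCategory (CategoryId, NeedId)
-- Optional: narrow index for the IsActive filter on Need (assumes IsActive is selective enough to be worth it)
CREATE NONCLUSTERED INDEX IX_Need_IsActive
ON dbo.Need (IsActive)
INCLUDE (NeedName, ProviderName)
Whether the optimizer uses these depends on the actual row distribution; comparing estimated versus actual row counts on each scan is usually the first check.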
View 6 Replies
View Related
Nov 11, 2014
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which increased performance. Now when I look at the execution plan, the only costly query is a DELETE statement from the main table; it shows a SORT at 71% of the cost. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
[QryNum] ASC,
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
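One experiment sometimes tried on a narrow two-column table like this, sketched here with no guarantee it helps (test it on a copy first): make the primary key clustered so deleted rows are already located in key order, which can remove a sort introduced for index maintenance on a heap.
ALTER TABLE dbo.Qry DROP CONSTRAINT Qry_PK
GO
ALTER TABLE dbo.Qry ADD CONSTRAINT Qry_PK PRIMARY KEY CLUSTERED (QryNum ASC, ID ASC)
GO
Compare the DELETE's actual plan and logical reads before and after to see whether the sort disappears and whether overall duration actually improves.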
View 5 Replies
View Related
Jun 17, 2015
Is it possible to check the query execution plan of a stored procedure from its create script (before creating it)?
Basically the developers want to know how a newly developed procedure will perform in the production environment. I don't want to create it in production just for checking the execution plan; however, they've provided the SQL script for the procedure. Is there any way to look at the execution plan for this procedure from the script provided?
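One hedged approach (a sketch that assumes the procedure body is plain queries you can paste out; the table, column and parameter values below are hypothetical): turn on an estimated-plan setting and submit the body's statements with representative values, so nothing is created and nothing executes.
SET SHOWPLAN_XML ON
GO
-- Hypothetical stand-in for the procedure body: paste the real statements here
DECLARE @CustomerId int
SET @CustomerId = 42
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
GO
SET SHOWPLAN_XML OFF
GO
Note this compiles with the variable treated as unknown, which can differ from how a real parameter is sniffed on first execution, so treat it as an approximation of the production plan.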
View 8 Replies
View Related
Jun 17, 2015
I run a query
select col1, col2, col3, col4
from Table
where col2=5
order by col1
I have a primary key on the column. The execution plan shows the clustered index scan costing 30% and the sort costing 70%. When I run the query I get a missing-index hint on col2 with 95% impact, so I created a nonclustered index on col2. The total execution time decreased by around 80 ms, but I don't see that index name being used anywhere in the execution plan; after creating the index I am seeing the same execution plan.
The execution plan still shows the clustered index scan at 30% and the sort at 70%, but I can see the total time reducing and the logical reads on that table reducing. I am sure the index is useful, but why is there no change in the execution plan?
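A hedged sketch of what is often suggested in this situation (table and column names copied from the query above, index name made up): widen the new index so it covers every column the query touches and matches the ORDER BY, which gives the optimizer a reason to switch away from the clustered index scan and can remove the sort as well.
-- Seek on col2, rows already ordered by col1 within that seek, all selected columns covered
CREATE NONCLUSTERED INDEX IX_Table_col2_col1
ON dbo.[Table] (col2, col1)
INCLUDE (col3, col4)
If the plan still ignores it, comparing the actual plan (not the estimated one) and verifying that statistics are current are the usual next checks.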
View 7 Replies
View Related
Jun 25, 2015
Is there a way to leave the graphical 'Include Execution Plan' on by default in SSMS? I don't know how many times I run a long-running query, say to myself, "wow, that took a while; I wonder what the execution plan looks like?" only to realize that I left it turned off. Now I have to turn it on, and wait for the query to run again. I'm guessing there's a setting in the options somewhere to always leave it on, but I'm not sure where
View 2 Replies
View Related
Sep 15, 2015
I have a query with an expensive Key Lookup on a joined table. The predicate is the column that I'm joining on, and the output list contains two columns from the joined table.
I've created a basic non-clustered index covering the predicate column and including the two output columns. However, the execution plan ignores this and insists on using the primary key of the joined table to do the expensive key lookup. I've tried adding the included columns to the index key directly and there's no change. I've also tried running DBCC FREEPROCCACHE and there's no change.
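As a diagnostic sketch only (object and index names are placeholders for the real ones), forcing the new index for one run and comparing actual plans and logical reads can show whether the optimizer is skipping it on cost grounds or cannot use it at all, for example because of a data type mismatch on the join column:
SET STATISTICS IO ON
GO
SELECT p.ParentId, c.ColA, c.ColB            -- placeholder shape of the original query
FROM dbo.ParentTable p
JOIN dbo.ChildTable c WITH (INDEX (IX_ChildTable_JoinCol_covering))
    ON c.JoinCol = p.ParentId
GO
SET STATISTICS IO OFF
If the hinted plan is much more expensive than the key-lookup plan, the optimizer's choice was deliberate; if the hint is rejected outright, the index definition or column types are worth a second look.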
View 3 Replies
View Related
Sep 11, 2015
I have some simple maintenance plans, but they are failing because the delete history task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so the maintenance plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
View 0 Replies
View Related
Jul 23, 2005
I have one query that executes many times a week. I created one maintenance plan that rebuilds all indexes in my database; it starts at 23:40 on Saturday and finishes on Sunday. However, in the middle of the week (Wednesday or Thursday), that query stops returning the results it should: the time limit is exceeded and the results are totally wrong. I compared the normal execution plan and the "crazy" one that SQL creates to build the result. The normal one is a nested loop with index seeks (very fast); the wrong one is a merge join with a hash aggregate (very slow). After the index rebuild, the plan brings back the results it should, but once many updates have run against those tables (SAM_GUIA_EVENTO and SAM_GUIA) in the middle of the week, the merge plan comes back and the results are totally wrong, with many rows returned. I worked around it by forcing an index seek with a coalesce function on one aggregate column, but everyone here is very worried by this SQL Server behaviour. Please, can anyone help me explain it? Krisnamourt
P.S. Attachments:
-- Forced index query with coalesce
SELECT count(*)
FROM SAM_GUIA_EVENTOS E, SAM_GUIA G
WHERE G.PEG = 736740
AND E.GUIA = coalesce(G.HANDLE, G.HANDLE) AND E.CLASSEGERENCIALPAGTO is NULL
-- Normal query
SELECT count(*)
FROM SAM_GUIA_EVENTOS E, SAM_GUIA G
WHERE G.PEG = 736740
AND E.GUIA = G.HANDLE AND E.CLASSEGERENCIALPAGTO is NULL
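For what it's worth, a hedged alternative to the coalesce trick that makes the intent explicit (a sketch only; whether it is appropriate depends on the real data and update pattern): keep the clean join and either state the join strategy or refresh the statistics that the weekend rebuild is effectively fixing.
-- Alternative 1: keep the natural predicate, request the fast join shape explicitly
SELECT count(*)
FROM SAM_GUIA_EVENTOS E, SAM_GUIA G
WHERE G.PEG = 736740
AND E.GUIA = G.HANDLE AND E.CLASSEGERENCIALPAGTO is NULL
OPTION (LOOP JOIN)
-- Alternative 2: refresh statistics mid-week instead of waiting for the weekend rebuild
UPDATE STATISTICS SAM_GUIA_EVENTOS WITH FULLSCAN
UPDATE STATISTICS SAM_GUIA WITH FULLSCAN
If the mid-week statistics refresh alone brings the nested-loop plan back, the root cause is most likely stale statistics rather than the indexes themselves.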
View 5 Replies
View Related
Aug 21, 2001
I'm new to SQL server but familiar enough with databases to know this doesn't seem right.
Here's the situation:
I have a table with real estate property information. There are about 650,000 rows in it. I have a nonclustered non-unique index on the city where the property is located. There are about 40 unique values in this index.
I do a simple query like:
SELECT city, address FROM propinfo WHERE city = 'CARLSBAD'. The query will return about 4,000 rows. The problem is that the execution plan it chooses is a full table scan, i.e. even though there is an index on city, it chooses to look through 650,000 rows rather than do an index seek. Something sounds inefficient here. BTW, this happens in both SQL 7 and SQL 2000. Can anyone explain why this happens? I've got to think that SQL Server is more efficient than this.
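One commonly cited explanation is the cost of roughly 4,000 bookmark lookups against a non-covering index; a hedged sketch that sidesteps this on SQL 7/2000 (index name invented) is to let the index cover both selected columns:
CREATE NONCLUSTERED INDEX IX_propinfo_city_address
ON propinfo (city, address)
With city and address both in the index, the query can be answered from the index leaf pages without touching the base table, so a seek usually wins even when thousands of rows qualify.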
View 5 Replies
View Related
Feb 2, 2006
Hello, I have been looking at the execution plan for a procedure call. The select, compute scalar, stream aggregate, constant scan, nested loops and assert operators are all at 0% cost, the PK operators cost 2% apart from a rogue 7% and a few at 20%, and the table scans are all at 23%. The query cost relative to the batch is 100%. What does this all mean?
I have put non-clustered indexes on all the table columns that are involved in the select statements, but this has made no difference. I am guessing this is because my tables are not heavily populated, and I might have seen a difference if I had thousands of rows in the tables the select statements act on; is this assumption correct?
Does anyone else bother using the execution plan to tweak their DB, or is it a negligible tool?
Jill
View 5 Replies
View Related
Aug 29, 2007
In SQL Server 2005 Management Studio, where do I find the option to run the SQL query in the query window and also show the execution plan?
At present I see the option under the Query menu called "Display Estimated Execution Plan", which only shows the plan but does not execute the query.
Thanks
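In the graphical tools, the equivalent toggle is "Include Actual Execution Plan" on the same Query menu; it runs the query and adds an Execution Plan tab to the results. If the menu toggle is hard to find, there is also a T-SQL route that executes the statement and returns the runtime plan alongside the results (placeholder query below):
SET STATISTICS XML ON
GO
SELECT TOP 10 name FROM sys.objects   -- placeholder: the query you want the actual plan for
GO
SET STATISTICS XML OFF
GO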
View 2 Replies
View Related
Feb 29, 2008
Does anyone know of a good way to copy the execution plan when using "Include Actual Execution Plan"?
I often need to copy this and mail it.
I know I can use the PrintScreen button, but I need a more efficient way to do this.
If I could just right-click the execution plan, select "Copy" and get the complete plan, that would be great.
Mladen?
E 12°55'05.25"
N 56°04'39.16"
View 14 Replies
View Related
Jul 20, 2005
Which of the following does NOT cause the execution plan of a query to be recompiled?
- a new column is added to a table accessed by the query, OR
- an index used by the query has been dropped from the database, OR
- the query performs a join to return data from multiple tables, OR
- a significant amount of data in a table has been modified
View 1 Replies
View Related
Jul 20, 2005
Hi, I have a table-valued user-defined function (UDF), my_fnc. The execution of the statement "select * from my_fnc" takes much longer than running the code inside my_fnc (with the necessary changes). What can be the reason? How can I see the execution plan used for the UDF?
Thanks a lot
Martin
View 1 Replies
View Related
Jul 9, 2007
Hi, I want to access the real execution plan via my web application after I have executed an SQL statement. I know how to get the estimated execution plan:
1  cmd.CommandText = "SET SHOWPLAN_XML ON";
2  cmd.ExecuteNonQuery();
3
4  cmd.CommandText = myStatement;
5  SqlDataReader dataReader = cmd.ExecuteReader();
6
7  String plan = String.Empty;
8
9  while (dataReader.Read()) {
10     plan += dataReader.GetSqlString(0).ToString();
11 }
12
13 cmd.CommandText = "SET SHOWPLAN_XML OFF";
14 cmd.ExecuteNonQuery();
I want to compare the estimated costs with the real costs of the same statement. If I change code lines 1 and 13 to "SET STATISTICS XML [ON|OFF]", the string "plan" will contain the result of the submitted SELECT statement, but I just need to get the plan and not the result itself.
Thanks in advance,
Dominik
View 6 Replies
View Related
Jul 28, 1999
What does 'tablename. index... cost: 100%' mean when I use display estimated execution plan?
View 1 Replies
View Related
Sep 24, 2002
Hello ,
I wanted to know whether there is an execution plan feature in SQL 6.5 as there is in SQL 7.0 and SQL 2000.
I.e., when we execute a query and enable 'Show Execution Plan', it creates a map and shows the vital statistics.
If that is available in SQL 6.5, then I am missing that tool.
How can I have it installed on my SQL 6.5 server?
Thanks.
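To the best of my knowledge (treat this as a hedged note rather than a definitive answer), 6.5 has no graphical execution plan; the closest equivalent is the textual showplan, which can be switched on in ISQL/w before running the query:
SET SHOWPLAN ON
GO
SELECT au_lname FROM pubs..authors WHERE au_lname = 'Green'
GO
SET SHOWPLAN OFF
GO
SET STATISTICS IO ON can be added the same way to see the reads per table, which together give most of what the later graphical plan shows.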
View 3 Replies
View Related
Jul 9, 2003
Hi,
I want to know how to analyze the query execution plan for complex queries and what information from it is useful for improving performance. I have gone through the details on some sites like sql-server-performance.com (http://www.sql-server-performance.com/query_execution_plan_analysis.asp), where it was fairly generic. I want more info regarding this.
Can any one tell about the resources for this or do you have any white papers or documents, which you can share with me.
Thanks in advance,
sekhar
View 1 Replies
View Related
Jun 12, 2006
Hi ,
When the operator is =, I get an index SEEK.
When the operator is <>, I get an index SCAN.
Is this normal?
Example
SELECT *
FROM dbo.Batch
WHERE (Status = 'Batch Completed')
(1 row(s) affected)
StmtText
---------------------------------------------------------------------------------------------------------------------------------
|--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([PriceAvisPr].[dbo].[Batch]))
|--Index Seek(OBJECT:([PriceAvisPr].[dbo].[Batch].[IX_Batch]), SEEK:([Batch].[Status]='Batch Completed') ORDERED FORWARD)
StmtText
---------------------------------------------------------------------------------
SELECT *
FROM dbo.Batch
WHERE (Status <> 'Batch Completed')
(1 row(s) affected)
StmtText
------------------------------------------------------------------------------------------------------------------------
|--Clustered Index Scan(OBJECT:([PriceAvisPr].[dbo].[Batch].[PK_Batch]), WHERE:([Batch].[Status]<>'Batch Completed'))
View 2 Replies
View Related
Mar 24, 2008
Hi all,
I am experiencing performance problems with one of my stored procedures. When the stored procedure is first compiled and executed, it behaves as expected (it usually takes 1 or 2 seconds to complete). But its performance degrades, so within a day it can take 120 seconds to complete!!! Once the stored procedure is recompiled, its performance is again as expected.
It is a complex stored procedure with two integer parameters and only one select, but composed of multiple views and sub-queries. We have been trying to break the query into small pieces using temporary tables, but without success. SQL Profiler shows an unusual number of reads when it goes wrong (more than a million).
I think the problem is in the execution plan. I know that recompiling the stored procedure fixes the problem, but I do not know exactly when and why it starts to happen.
The stored procedure is running under the following configuration:
- Microsoft SQL Server Standard Edition (64-bit).
- Version: 9.00.1399.06
- RAM 16 MB
- 8 CPUs
Does anyone have any ideas or possible solutions?
Thanks in advance,
Carlos.
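What is described (fast after a recompile, then degrading once a plan compiled for unrepresentative parameter values sticks in cache) is commonly attributed to parameter sniffing. A hedged sketch of the usual mitigations, with made-up procedure, parameter and table names standing in for the real ones:
-- Option 1: copy parameters into local variables so the plan is built from average
-- density statistics instead of the first caller's specific values.
CREATE PROCEDURE dbo.usp_Example_LocalVars
    @p1 int,
    @p2 int
AS
BEGIN
    DECLARE @lp1 int, @lp2 int
    SELECT @lp1 = @p1, @lp2 = @p2
    SELECT o.OrderId, o.OrderDate           -- stand-in for the real complex SELECT
    FROM dbo.Orders o
    WHERE o.CustomerId = @lp1
      AND o.RegionId = @lp2
END
GO
-- Option 2: CREATE PROCEDURE ... WITH RECOMPILE, or OPTION (RECOMPILE) on the big SELECT
-- (supported on SQL Server 2005), trades a compile on each call for a plan matched to the values.
Capturing the good plan and the bad plan from the plan cache and comparing their parameter values usually confirms whether this is actually what is happening.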
View 2 Replies
View Related
Aug 5, 2005
Is the cost of a query that uses functions the same as that of one without functions?
In the code below, the query cost of the insert is 0.02% and the two select statements each cost the same, 0.04%:
Declare @t table(mydate datetime)
Declare @i int
set @i=1
while @i<=5000
Begin
insert into @t values(getdate())
set @i=@i+1
End
Select mydate from @t
Select convert(varchar,mydate,112) from @t
But I thought usage of the convert function would add more query cost. What do you think of this?
Madhivanan
View 5 Replies
View Related
Mar 9, 2006
We've got a slightly unusual scenario happening whereby a statement is passed to SQL which consists of two parts:
BEGIN TRANSACTION
DELETE FROM Whatever
BULK INSERT Whatever FROM ... (etc)
COMMIT TRANSACTION
The first is a deletion of the data and the second is the bulk insert of replacement data into that table. The error that we see is a violation of the primary key (composite).
The violation only happens if we run both processes together. If we run one, then the other, it works fine. If we do a line-by-line insert, it works fine.
My suspicion is that the execution plan being run is most likely working the two parts in parallel and that the records still exist at the point the insert is happening. Truncate is not an option. The bulk insert was added for performance reasons. There is an option of trying the bulk insert and, if that fails, doing the line-by-line insert, but it's far from ideal.
I think we can probably wrap this into two individual transactions within the one statement as follows:
BEGIN TRANSACTION
DELETE FROM Whatever
COMMIT TRANSACTION
BEGIN TRANSACTION
BULK INSERT Whatever FROM ... (etc)
COMMIT TRANSACTION
Will this give a sufficient hint to SQL about the order in which to process them, so that it completes as we intend and not as it sees being the most efficient method? Or is there a better approach to this?
I've seen that some hints can be passed to SQL for optimizing, but my understanding was that it is always better to trust the optimiser and re-work the query as needed.
With the server having two processors, is it feasible that one is doing one part and the other processor the other part in parallel? Would telling it to use a single processor be worth looking at? MAXDOP 1?
Finally, I'd imagine that the insert is quicker to process than the deletion. Is this correct?
Thanks
Ryan
View 14 Replies
View Related
May 2, 2006
Using SQL Server 2000 SP4.
There is a relatively complex stored procedure that usually completes in less than 20 seconds. Occasionally it times out after 180 seconds. The SP is called via ADO 2.8, using the adCmdStoredProc command type. If I use Profiler to capture the EXEC that ADO sends to run the procedure, and run that from QA, the procedure completes in less than 20 seconds as it should.
The procedure is created WITH RECOMPILE. One additional twist is that sp_setapprole is called from the client before running the procedure in question. This may be irrelevant, because even if I include the same sp_setapprole call when running the procedure from QA, it still executes quickly, and even if I comment out the call to sp_setapprole in the client code, the proc still times out when run from the client.
The only thing that fixes it, at least for a day or two, is DBCC FREEPROCCACHE. So it appears that a bad plan is somehow stuck in memory and is only used when the procedure is called from the client app, and is not flushed even though the procedure was created WITH RECOMPILE.
Other than scheduling the DBCC call to run every night, is there anything else I could try to get this resolved? Thanks.
View 5 Replies
View Related
May 31, 2006
I was hoping someone could shed some light on a weird situation I'm experiencing. I have the following query:
select count(*) LeadCount
from auto_leads al
where received > dbo.GetDay(GetDate())
dbo.GetDay simply returns a smalldatetime value of today's date.
Now I recently got thrown into a data mess, and for some reason this query takes 8 seconds to run. The first thing I did was update the stats on the Received column of the auto_leads table. I re-ran the query and I'm still getting 8 seconds. Looking at the execution plan, I can't figure out why this is happening.
I then changed the above query so the filter received > dbo.GetDay(GetDate()) is now just received > '5/31/2006', and the query comes back immediately. This doesn't make sense to me because the GetDay function is really simple and returns immediately. I then tried the following query to confirm it isn't a problem with the GetDay function:
declare @Today DateTime
set @Today = dbo.getday(GetDate())
select count(*) leads
from auto_leads al
join type_lead_status tls on (tls.type_lead_status_id = al.type_lead_status_id)
where received > @Today
Sure enough, the query came back immediately. The next thing to go through my mind was that the query execution plan had been cached by SQL Server from before I updated the stats on the received column, so I executed sp_recompile 'auto_leads' and tried the original query again. It is still taking 8-10 seconds to come back.
So my question is: why is the query slow when the GetDay function call is in the query filter, as opposed to just passing a variable into the query? Thanks!
- James
View 6 Replies
View Related
Apr 27, 2007
I'm new to sql server 2005 and was reviewing the execution plan on one of my queries.
I have a query that selects about 62,000 rows from a table of about 20 million
I see there is an index seek indicated, but further down the execution plan I see that a large percentage is being assigned to a RID LOOKUP on the same table.
Should I be concerned with this and if so, what would you recommend I do to correct it?
View 12 Replies
View Related
Sep 5, 2006
Hi,
We migrated our database from SQL Server 2000 to Yukon last week. Now, when we run our application, it has slowed down. We analyzed some stored procedures and they seem to have degraded; the execution plans have changed. It looks like a lot of work if we have to tune each query against the new execution plan, and our application has around 4,000 stored procs. Is anyone aware of a generic pattern or solution so that these execution plan problems can be resolved? Also, does the new execution plan ensure that once we tune the stored procs they will perform better than on SQL Server 2000?
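One hedged, generic first step often suggested after a 2000-to-2005 migration, before tuning procedures one by one (run it during a quiet window, as it touches every table; the database name is a placeholder): refresh usage counters and statistics so the 2005 optimizer is not working from stale 2000-era information.
USE YourDatabase   -- placeholder database name
GO
DBCC UPDATEUSAGE (0)
GO
EXEC sp_updatestats
GO
-- Optionally clear cached plans afterwards (server-wide) so procedures recompile against fresh statistics
DBCC FREEPROCCACHE
If specific procedures still regress after that, comparing their old and new plans individually is hard to avoid, but the number needing hand-tuning is usually far smaller than the full 4,000.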
Need help on this, otherwise it seems we might have to move back to 2000.
Thanks in Advance
Ritesh
View 2 Replies
View Related