T-SQL (SS2K8) :: Query Not Use Same Execution Plan?
Jun 5, 2015
If a T-SQL query runs perfectly in development, I want production to use the same execution plan that development uses when I execute it there. How can I do this using the plan cache? I know we can use the USE PLAN query hint, but I want to do it through the cache.
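If editing the query to add USE PLAN is not an option, a plan guide can attach the hint from outside the query text. A minimal sketch, assuming a hypothetical statement and parameter; the @hints value would carry the full ShowPlan XML captured in development (elided here):

EXEC sp_create_plan_guide
    @name = N'PG_ForceDevPlan',                 -- hypothetical guide name
    @stmt = N'SELECT * FROM dbo.Orders WHERE CustomerID = @cust;',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@cust int',
    @hints = N'OPTION (USE PLAN N''<ShowPlanXML ...>'')';  -- paste the captured plan XML here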
I have been trying to get the query optimizer to generate a parallel execution plan, but no matter the MAXDOP (0) or cost threshold for parallelism (5) settings I use, it will only execute in serial.
UPDATE [dbo].[Targus_201412_V7_B] SET [URBAN] = (CASE WHEN [METRO_STATUS] = 'Urban' THEN 1 ELSE 0 END)
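For reference, a sketch of the two instance-level settings mentioned above, as applied via sp_configure. Note also that, as far as I know, the update operator itself always runs serially in SQL Server 2008; at best the read side of the plan can be parallelized, and a statement this cheap is unlikely to cross the cost threshold at all:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;       -- 0 = use all available schedulers
EXEC sp_configure 'cost threshold for parallelism', 5;  -- default value
RECONFIGURE;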
I wanted to know whether we have an execution plan feature in SQL 6.5 as we have in SQL 7.0 and SQL 2000, i.e., when we execute a query with 'Show Execution Plan' enabled, it creates a map and shows the vital statistics. If that is available on SQL 6.5, then I am missing that tool.
How can I have it installed on my SQL 6.5 server?
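As far as I recall, SQL 6.5 has no graphical plan; the graphical display arrived with Query Analyzer in 7.0. What 6.5 does offer is a textual plan, along these lines (combined with NOEXEC so the query is compiled but not run; the query itself is hypothetical):

SET SHOWPLAN ON
SET NOEXEC ON
GO
SELECT au_lname FROM authors   -- hypothetical query against the old pubs sample
GO
SET NOEXEC OFF
SET SHOWPLAN OFF
GO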
This is probably a very stupid question. I have been out of the SQL Server arena for a while and am now getting re-acclimated. It was my understanding that displaying a query's estimated execution plan in Query Analyzer does not really execute the query against the query's database tables. Is this right? Tom.
I am developing an application (VB) that should present a query's estimated execution plan.
Using SQL Server Management Studio, I can execute the following commands to see the query's estimated execution plan:
SET SHOWPLAN_XML ON
go
MyQuery
go
SET SHOWPLAN_XML OFF
go
The query is not executed. The result is the query execution plan.
In my application, I call Connection.Execute to execute the 'SET SHOWPLAN_XML ON'. Then, I use a Recordset to submit the query. The query is executed and the execution plan is not returned.
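One detail worth checking here: per the documentation, SET SHOWPLAN_XML must be the only statement in its batch, and it only affects the session it runs on. A sketch of the batch layout that has to reach the server, each batch sent as a separate Execute on the same connection (the query is hypothetical):

SET SHOWPLAN_XML ON;        -- batch 1: must contain nothing else
GO
SELECT * FROM dbo.MyTable;  -- batch 2: returns the XML plan as its result set, not rows
GO
SET SHOWPLAN_XML OFF;       -- batch 3
GO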
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
I have SQL 7.0 SP2 on NT 4.0 SP5. My database is 180 GB across 23 tables. It has been up and running for 2 years without any problems. All of a sudden my queries have started taking a long time to run. The optimizer has decided that table scans are better than indexes. If I use query hints they work just fine, but I can't modify all of our code to make these changes.
This is happening on all tables. Record counts are in the same range they have always been.
Statistics and indexes are all fine and current. Have dropped and rebuilt both.
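For what it's worth, one sanity check alongside the rebuilds is to look at what the optimizer actually sees in the statistics, and to refresh them from every row rather than a sample. Object and index names below are hypothetical:

-- Inspect the histogram and density the optimizer is working from.
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_KeyCol');
-- Rebuild statistics from a full scan instead of a sample.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;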
I am trying to tune a very long running query (18 minutes on an Axim X51, 8secs on my laptop), but I can't get the query plan file that is generated on the device to load in the Sql Server Management Studio. I am using the Sql Everywhere CTP on the device, and version 9.00.2047 of the management studio shell.
FWIW, when I try to create the execution plan by running the same query on a .sdf file local on my laptop, I get a similar error trying to view the query plan.
Apart from the query plan issues, it would appear (just from the query execution time) that the indexes defined on the sdf file are not being used when executing the query on the device, but are being used when executing the query on the laptop. This is pure SWAG on my part, though.
I can't figure out how to attach a file to the post, unfortunately.
Does anyone know if there is any way to get an execution plan for parameterized queries?
The application sends queries that are mostly parameterized, and the values used vary so widely that I cannot even make a guess.
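Assuming SQL Server 2005 or later, the plans that the application's parameterized statements actually compiled can be pulled out of the plan cache, which sidesteps having to guess values. A sketch (the LIKE filter is hypothetical):

SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE N'%my_table%';  -- narrow to the statements of interest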
Hi, I have a question about estimated query execution plans that are generated in QA of MSSQL. If I point at an icon/physical operator in the estimated QEP, it shows me some statistics about the operator. Is there a way to retrieve these statistics through a query, i.e., can these statistics be made available to the user? Also, is there a way to generate these statistics on my own? Thanks in advance -TC.
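One possible route, assuming the Query Analyzer era tooling: SHOWPLAN_ALL returns the same per-operator estimates (rows, I/O, CPU, subtree cost) as an ordinary tabular result set that any client can read. The query below is hypothetical and is compiled but not executed:

SET SHOWPLAN_ALL ON
GO
SELECT * FROM dbo.MyTable
GO
SET SHOWPLAN_ALL OFF
GO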
Hi All, I'm a relative newbie to SQL Server, so please forgive me if this is a daft question... When I set "Show Execution Plan" on in Query Analyzer and execute a (fairly complex) sproc, I note that a particular query is reported as having a query cost of "71% relative to the batch"; however, this is nowhere near the slowest executing query in the batch. Other queries which take over twice as long are reported as having costs in the order of a few percent each.
Am I misreading the execution plan? Note that I'm looking at the graphical plan, and am not reading the 'estimated' plan; I'm using the one generated from executing the sproc. My expectation was that this would be based on the execution times of the queries within the sproc; however, this does not appear to be the case. (Note: I determined execution times from PRINT statements, using GETDATE() to determine the current time, down to milliseconds.)
Any feedback would be of great assistance... I may well have to change the way I approach optimizing queries based on these findings. Thanks, LemonSmasher.
Does anyone know if it is possible to access the execution plan results programmatically through a stored procedure or .NET assembly? I have the code sample
SET SHOWPLAN_XML ON;
GO
query...
GO
SET SHOWPLAN_XML OFF;
GO
but it can only be run from the interface. I have tried a couple of solutions including dynamic sql to try to capture the results in a variable or file with no luck.
Does anyone know of a way to programmatically capture this information? We are doing some research with distributed query processing of dynamically generated queries using multiple processing nodes, and it would be helpful to know an estimate of how large the query is before sending it away to be processed.
I have looked at the dm_exec_query_stats view; but it can only be run on a query that has already been executed. I need to know the execution plan before the query is executed. If there is a way to get a query to show up in this view without being executed, then that would work as well.
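A note on why the dynamic SQL attempts likely failed: SET SHOWPLAN_XML must be the only statement in its batch and changes the state of the whole session, so it cannot be wrapped inside a single EXEC string. Sent as three separate batches on one connection, the middle batch returns the estimated plan without executing the query, which would give a size estimate before shipping the query off. A sketch with a stand-in query:

SET SHOWPLAN_XML ON;
GO
SELECT o.name, c.name
FROM sys.objects AS o
JOIN sys.columns AS c ON c.object_id = o.object_id;  -- compiled only; the XML plan comes back as the result
GO
SET SHOWPLAN_XML OFF;
GO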
Is it possible to check the query execution plan of a stored procedure from its create script (before creating it)?
Basically the developers want to know how a newly developed procedure will perform in the production environment. Now, I don't want to create it in production just to check the execution plan. However, they've provided the SQL script for the procedure. So I'm wondering: is there any way to look at the execution plan for this procedure from the script provided?
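For what it's worth, a plan cannot be compiled for a procedure that doesn't exist anywhere, but a common compromise is to create it in a non-production copy of the database (plan shape depends heavily on statistics, so the copy should have production-like data or stats) and request the estimated plan without executing it. A sketch with a hypothetical procedure name:

SET SHOWPLAN_XML ON;
GO
EXEC dbo.usp_NewlyDevelopedProc @SomeParam = 1;  -- compiled only, not run
GO
SET SHOWPLAN_XML OFF;
GO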
I'm new to SQL server but familiar enough with databases to know this doesn't seem right. Here's the situation: I have a table with real estate property information. There are about 650,000 rows in it. I have a nonclustered non-unique index on the city where the property is located. There are about 40 unique values in this index.
I do a simple query like: SELECT city, address FROM propinfo WHERE city = 'CARLSBAD'. The query will return about 4,000 rows. The problem is that the execution plan it chooses is a full table scan. I.e., even though there is an index on city, it chooses to look through 650,000 rows rather than do an index seek. Something sounds inefficient here. BTW, this happens in both SQL 7 and SQL 2000. Can anyone explain why this happens? I've got to think that SQL Server can be more efficient here.
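A plausible reading: each of the ~4,000 matching rows would need a bookmark lookup back to the base table to fetch address, and the optimizer often costs one scan as cheaper than thousands of lookups. A covering index makes the seek self-sufficient; a sketch using a composite key, since INCLUDE columns only arrived in SQL Server 2005 (index name is hypothetical):

-- Covers SELECT city, address ... WHERE city = '...' entirely from the index.
CREATE NONCLUSTERED INDEX IX_propinfo_city_address
    ON propinfo (city, address);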
Hello, I have been looking at the execution plan for a procedure call. The select, compute scalar, stream aggregate, constant scan, nested loops, and assert operators are all at 0% cost; the PK costs are 2%, apart from a rogue 7% and a few at 20%; table scans are all at 23%. The query cost relative to the batch is 100%. What does this all mean? I have put non-clustered indexes on all the table attributes that are involved in the select statements, but this has made no difference. I am guessing this is because my tables are not heavily populated, and I may have seen a difference if I had thousands of entries in the tables the select statements acted on; is this assumption correct? Does anyone else bother using the execution plan to tweak their DB, or is it a negligible tool?
In SQL Server 2005 Management Studio, where do I find the option to run the SQL query in the query editor and also show the execution plan? At present I see the option under the Query menu, "Display Estimated Execution Plan", which only shows the plan but does not execute the query.
Does anyone know of a good way to copy the execution plan when using "Include Actual Execution Plan"? I often need to copy this and mail it.
I know I can use the Print Screen button, but I need a more efficient way to do this. If I could just right-click the execution plan, select "Copy", and get the complete plan, it would be great.
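One workaround, assuming SQL Server 2005 or later: capture the plan in its XML form and mail that; saved with a .sqlplan extension, the file reopens in Management Studio as the full graphical plan. A sketch:

SET STATISTICS XML ON;
GO
SELECT TOP (10) name FROM sys.objects;  -- the query of interest; its plan arrives as an extra XML result set
GO
SET STATISTICS XML OFF;
GO
-- Save the returned ShowPlanXML to a file named e.g. plan.sqlplan and attach it.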
Which of the following does NOT cause the execution plan of a query to be recompiled?
- a new column is added to a table accessed by the query, OR
- an index used by the query has been dropped from the database, OR
- the query performs a join to return data from multiple tables, OR
- a significant amount of data in a table has been modified
Hi, I have a table-valued user-defined function (UDF), my_fnc. The execution of the statement "select * from my_fnc()" takes much longer than running the code inside my_fnc (with the necessary changes). What can be the reason? How can I see the execution plan used for the UDF? Thanks a lot, Martin
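A possible explanation, if my_fnc is a multi-statement TVF: the optimizer treats its result as an opaque table with a tiny fixed row estimate, so plans built around it can be far worse than the expanded query. Where the logic allows it, an inline TVF lets the optimizer fold the function's query into the outer plan. A sketch with hypothetical names:

CREATE FUNCTION dbo.my_fnc_inline ()
RETURNS TABLE
AS
RETURN
(
    SELECT col1, col2
    FROM dbo.MyTable        -- hypothetical source table
    WHERE col2 > 0
);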
Hi, I want to access the real execution plan via my web application after I have executed a SQL statement. I know how to get the estimated execution plan:
1 cmd.CommandText = "SET SHOWPLAN_XML ON";
2 cmd.ExecuteNonQuery();
3
4 cmd.CommandText = myStatement;
5 SqlDataReader dataReader = cmd.ExecuteReader();
6
7 String plan = String.Empty;
8
9 while (dataReader.Read()) {
10     plan += dataReader.GetSqlString(0).ToString();
11 }
12
13 cmd.CommandText = "SET SHOWPLAN_XML OFF";
14 cmd.ExecuteNonQuery();
I want to compare the estimated costs with the real costs of the same statement. If I change code lines 1 and 13 to "SET STATISTICS XML [ON|OFF]", the string "plan" will contain the result of the submitted SELECT statement, but I just need to get the plan and not the result itself. Thanks in advance, Dominik
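Worth noting for the STATISTICS XML case: the statement then returns its normal rows as the first result set and the actual plan as a second XML result set, so the reader has to advance past the first set (NextResult in ADO.NET) before reading the plan. The server-side behavior, sketched in T-SQL with a stand-in query:

SET STATISTICS XML ON;
GO
SELECT TOP (5) name FROM sys.objects;  -- result set 1: the data rows
                                       -- result set 2: the actual ShowPlanXML
GO
SET STATISTICS XML OFF;
GO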
I want to know how to analyze the query execution plan for complex queries and what information from it is useful for improving performance. I have gone through the details on some sites like sql-server-performance.com (http://www.sql-server-performance.com/query_execution_plan_analysis.asp), where the coverage was fairly generic. I want more info regarding this.
Can anyone suggest resources for this, or do you have any white papers or documents you can share with me?
I am experiencing performance problems with one of my stored procedures. When the stored procedure is first compiled and executed, it behaves as expected (it usually takes 1 or 2 seconds to complete). But its performance degrades, so that within a day it is taking 120 seconds to complete!!! Once the stored procedure is recompiled, its performance is again as expected.
It is a complex stored procedure with two integer parameters and only one select, but composed of multiple views and sub-queries. We have been trying to break the query into small pieces using temporary tables, but without success. SQL Profiler shows an unusual number of reads when it goes wrong (more than a million).
I think the problem is in the execution plan. I know that recompiling the stored procedure fixes the problem, but I do not know exactly when and why it starts to happen.
The stored procedure is running under the following configuration:
- Microsoft SQL Server Standard Edition (64-bit)
- Version: 9.00.1399.06
- RAM: 16 GB
- 8 CPUs
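The pattern described (fast right after compilation, then progressively awful until a recompile) is consistent with parameter sniffing: a plan compiled for one pair of parameter values gets reused for values it fits badly. A hedged diagnostic sketch, with hypothetical names, is to recompile on every call or to shield the parameters behind local variables:

ALTER PROCEDURE dbo.usp_MyComplexProc
    @Param1 int,
    @Param2 int
WITH RECOMPILE   -- compile a fresh plan on every execution, for diagnosis
AS
BEGIN
    -- Alternative to WITH RECOMPILE: copy parameters into locals so the
    -- optimizer uses average-density estimates instead of sniffed values.
    DECLARE @p1 int, @p2 int;
    SELECT @p1 = @Param1, @p2 = @Param2;

    SELECT *
    FROM dbo.vwComplexView          -- hypothetical; stands in for the real select
    WHERE KeyCol1 = @p1 AND KeyCol2 = @p2;
END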
The cost of a query that uses functions is the same as one without functions. In the code below, the query cost of the insert is 0.02% and the two select statements cost the same, 0.04% each:
Declare @t table(mydate datetime)
Declare @i int
set @i=1
while @i<=5000
Begin
    insert into @t values(getdate())
    set @i=@i+1
End
Select mydate from @t
Select convert(varchar,mydate,112) from @t
But I thought usage of the convert function would add to the query cost. What do you think of this? Madhivanan
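This matches my understanding: a scalar function in the SELECT list is just a cheap compute-scalar operator. Where functions do change cost is in predicates, because wrapping an indexed column in a function usually forces a scan instead of a seek. A sketch against a hypothetical indexed table:

-- Seekable: the column is compared directly, so an index on mydate can be used.
SELECT mydate FROM dbo.MyDates WHERE mydate >= '20060101';
-- Not seekable: the function on the column must be evaluated for every row.
SELECT mydate FROM dbo.MyDates WHERE convert(varchar, mydate, 112) >= '20060101';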
We've got a slightly unusual scenario happening whereby a statement is passed to SQL which consists of two parts.
BEGIN TRANSACTION
DELETE FROM Whatever
BULK INSERT Whatever...(etc)
COMMIT TRANSACTION
The first is a deletion of the data and the second is the bulk insert of replacement data into that table. The error that we see is a violation of the primary key (composite).
The violation only happens if we run both processes together. If we run one, then the other, it works fine. If we use a line-by-line insert, it works fine.
My suspicion is that the execution plan being run is most likely working the two parts in parallel, and that the records still exist at the point the insert happens. Truncate is not an option. The bulk insert was added for performance reasons. There is an option of trying the bulk insert, and if that fails, doing the line-by-line insert, but it's far from ideal.
I think we can probably wrap this into two individual transactions within the one statement as follows:
BEGIN TRANSACTION
DELETE FROM Whatever
COMMIT TRANSACTION
BEGIN TRANSACTION
BULK INSERT Whatever...(etc)
COMMIT TRANSACTION
Will this give a sufficient hint to SQL about the order in which to process it, so that it completes as we intend and not as it sees being the most efficient method? Or is there a better approach to this?
I've seen that some hints can be passed to SQL for optimizing, but my understanding was that it is always better to trust the optimiser and re-work the query as needed.
With the server having two processors, is it feasible that one is doing one part and the other processor the other part in parallel? Would telling it to use a single processor be worthwhile looking at? MAXDOP 1?
Finally, I'd imagine that the insert is quicker to process than the deletion. Is this correct?
Thanks, Ryan
Using SQL Server 2000 SP4. There is a relatively complex stored procedure that usually completes in less than 20 seconds. Occasionally it times out after 180 seconds. The SP is called via ADO 2.8, using the adCmdStoredProc command type. If I use Profiler to capture the EXEC that ADO sends to run the procedure, and run that from QA, the procedure completes in less than 20 seconds as it should.
The procedure is created WITH RECOMPILE. One additional twist is that sp_setapprole is called from the client before running the procedure in question. This may be irrelevant, because even if I include the same sp_setapprole call when running the procedure from QA, it still executes quickly, and even if I comment out the call to sp_setapprole in the client code, the proc still times out when run from the client.
The only thing that fixes it, at least for a day or two, is DBCC FREEPROCCACHE. So it appears that a bad plan is somehow stuck in memory, is only used when the procedure is called from the client app, and is not flushed even though the procedure was created WITH RECOMPILE.
Other than scheduling the DBCC call to run every night, is there anything else I could try to get this resolved? Thanks.
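One avenue worth ruling out, since "fast from QA, slow from the app" is the classic symptom: Query Analyzer connects with SET ARITHABORT ON while ADO typically leaves it OFF, so the two paths can compile and cache different plans even for identical EXEC text. A hedged repro from QA, with a hypothetical procedure name:

SET ARITHABORT OFF;     -- mimic the ADO connection's session settings
EXEC dbo.MyProcedure;   -- compare timing against the same call with ARITHABORT ON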
I was hoping someone could shed some light on a weird situation I'm experiencing. I have the following query:
select count(*) LeadCount from auto_leads al where received > dbo.GetDay(GetDate())
dbo.GetDay simply returns a smalldatetime value of today's date.
Now I recently got thrown into a data mess, and for some reason this query takes 8 seconds to run. The first thing I did was update the stats on the received column of this auto_leads table. I re-ran the query and I'm still getting 8 seconds. I look at the execution plan but I can't figure out why this is happening.
I then change the above query so the filter received > dbo.GetDay(GetDate()) is now just received > '5/31/2006' and the query comes back immediately. This doesn't make sense to me because the GetDay function is really simple and comes back immediately. I then try the following query to confirm it isn't a problem with the GetDay function:
declare @Today DateTime
set @Today = dbo.getday(GetDate())
select count(*) leads from auto_leads al join type_lead_status tls on (tls.type_lead_status_id = al.type_lead_status_id) where received > @Today
Sure enough, the query came back immediately. The next thing to go through my mind was that the query execution plan had been cached by SQL Server from before I updated the stats on the received column. So I executed sp_recompile 'auto_leads' and tried the original query again. Still taking 8-10 seconds to come back.
So my question is: why is the query slow when the GetDay function call is in the query filter, as opposed to me just passing a variable into the query? Thanks!