I am writing a client application that shows estimated query plans and statistics. I know how to obtain estimated plans by using SQL Server Management Studio. But is it possible to obtain them by using database functions?
I have found sys.dm_exec_query_plan, but it seems that this function can only be used for executed (or executing) queries...
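For what it's worth, a minimal sketch of the approach I'd try from a client connection (the table and predicate below are placeholders, not from the original question): turning on SET SHOWPLAN_XML makes the server return the estimated plan as an XML result set instead of executing the batch, so a client can read it like any other result.

SET SHOWPLAN_XML ON;
GO
-- Placeholder query; the server returns its estimated plan as XML instead of running it.
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42;
GO
SET SHOWPLAN_XML OFF;
GO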
Hi, I have a question about estimated query execution plans that are generated in QA of MSSQL. If I point at an icon/physical operator in the estimated QEP, it shows me some statistics about the operator. Is there a way to retrieve these statistics through a query, i.e., can these statistics be made available to the user? Also, is there a way to generate these statistics on my own? Thanks in advance -TC.
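A hedged sketch of what I'd try in Query Analyzer (the query itself is only a placeholder): SET SHOWPLAN_ALL returns the estimated per-operator statistics as an ordinary rowset you can copy or query, without executing the statement.

SET SHOWPLAN_ALL ON;
GO
-- The resulting rowset includes EstimateRows, EstimateIO, EstimateCPU, AvgRowSize
-- and TotalSubtreeCost for every operator in the estimated plan.
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42;
GO
SET SHOWPLAN_ALL OFF;
GO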
When viewing an estimated query plan for a stored procedure with multiple query statements, two things stand out to me and I wanted to get confirmation if I'm correct.
1. Under <ParameterList><ColumnReference...>, does the XML attribute "ParameterCompiledValue" represent the parameter value that was used when the query plan was generated?
2. Does each query statement that makes up the stored procedure have its own execution plan? In other words, is the stored procedure made up of multiple query plans that could have been generated at different times from other parts of that same procedure?
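For reference, a rough sketch of how I'd pull those compiled parameter values out of the cached plan XML (the procedure name is a placeholder, and this assumes the plan is still in cache):

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text AS statement_text,
       pl.query_plan.query('//ParameterList') AS compiled_parameters  -- ParameterCompiledValue lives in here
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) pl
WHERE st.text LIKE '%MyStoredProcedure%';   -- placeholder procedure name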
When I generate an estimated execution plan from Management Studio, one of the things I often see in the generated plan is an 'Index Scan'. When I hover the mouse over the 'Index Scan' icon, a tooltip window appears with something called 'Output List' at the bottom. Do I understand correctly that SQL Server will scan my index looking for values in each of the fields included in this output list?
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
I am running an update query and it is taking a long time. To find the estimated completion time I checked sys.dm_exec_requests, sys.dm_exec_sessions, and sp_who2, but there is no clue; the value is showing as zero.
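For context, a minimal sketch of the check I assume is being run here. Note that percent_complete and estimated_completion_time in sys.dm_exec_requests are only populated for a handful of operations (backup/restore, DBCC checks, index reorganization, rollbacks and the like), which would explain the zero for an ordinary UPDATE.

-- percent_complete stays 0 for ordinary DML; it is only filled in for operations
-- such as BACKUP, RESTORE, DBCC CHECKDB, ALTER INDEX REORGANIZE and rollbacks.
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.estimated_completion_time,   -- milliseconds; same restriction applies
       r.total_elapsed_time
FROM sys.dm_exec_requests r
WHERE r.session_id = 53;               -- placeholder session id of the UPDATE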
We have an issue with an MDS server that has been troubling us for a couple of days. The original error message from the SQL Server engine is "The query processor could not produce a query plan", but the ones we get in the Excel add-in are "Sequence contains no elements" or "The value cannot be null".
• Using Microsoft SQL Server 2012 (SP1) - 11.0.3393.0 (X64) for 6 months on this server without issues
• Two weeks ago we started to have 2 errors: "Sequence Contains No Elements" | "The Value Cannot Be Null"
• We are using the latest version of the Excel Add-in
• We tried reinstalling the MDS feature
• If I backup/restore the MDS database to another server, it works
• We updated to SQL 2012 SP2 + CU4 but the error persisted ...
Looking at the MDSTraceLog we are routed to this message:
SQL Error Debug Info: Number: 8624, Message: Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services., Server: bbdvsql03inst01, Proc: udpMetadataEntityGetDetailsXML, Line: 28
At line 28 udpMetadataEntityGetDetailsXML is calling udfMetadataEntityGetDetailsXML … and here is where we stopped
** Error found when trying to get data from an entity using the Excel add-in **
===================================
Sequence contains no elements
------------------------------
Program Location:
   at Microsoft.MasterDataServices.AsyncEssentials.AsyncResultBase.EndInvoke()
   at Microsoft.MasterDataServices.ExcelAddInCore.AsyncProviderBase`1.EndOperation(IAsyncResult ar)
*Before* I actually call up Microsoft SQL Customer Support Services and ask them, I wanted to ping other people to see if you have ever run into this exact error:
"Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I would have searched the forums myself, but at this moment in time, search is broken :(
If anyone has run into this error before, what conditions could cause it to happen? That is, if I can sniff this out with suggestions from the community, I would be happy to do so.
It is an oddity because if I alter a couple of subqueries in the WHERE clause [i.e., where tab.Col = (select val from tab2 where id='122')] to not have subqueries [hand-coded values], then the T-SQL result is fine. It's not as if subqueries are oddities... I've used them when appropriate.
fwiw - Not a newbie t-sql guy. ISV working almost daily with t-sql since MS SQL 2000. I have never seen this message before...at least I don't recall ever seeing it.
Thanks in advance for other suggested examination paths.
I am trying to optimize a stored procedure in SQL 2008. When I look at the actual execution plan generated when I run it in SSMS, it shows a table being used in the plan that has no relation to what is actually in the query script, and this is where the biggest performance hit occurs.
I've never seen a table show up before that wasn't part of the query. Why might this occur, and how can I correct it? I can't just change the query script because the table in question isn't there.
I have a stored procedure that will execute with less than 1,000 reads one time (with a specified set of parameters), then with a different set of parameters the procedure executes with close to 500,000 reads (according to Profiler).

In comparing the execution plans, they are the same except for the actual and estimated number of rows. When the proc runs with parameters that produce fewer than 1,000 reads, the actual and estimated number of rows both equal 1. When the proc runs with parameters that produce reads near 500,000, the actual rows are approximately 85,000 and the estimated rows equal 1.

Then I run:

DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

If I then reverse the order of execution by executing the procedure that initially executed with close to 500,000 reads first, the reads drop to less than 2,000. The execution plan shows the actual number of rows equal to 1 and the estimated rows equal to 2.27. Then when I run the procedure that initially executed with less than 1,000 reads, it continues to run at less than 1,000 reads, and the actual number of rows is equal to 1 and the estimated rows equal 2.27. When run in this order, there is consistency in the actual and estimated number of rows, and the reads for both executions with differing parameters are within reason.

Do I need to run DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE on production, and then ensure that the procedure that ran close to 500,000 reads is run first to ensure the proper plan, as well as using a KEEP PLAN option? Or what other options might you recommend?

I am running SQL 2000 SP4.
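For anyone reproducing this, a short sketch of the test sequence described above (the procedure name and parameter values are placeholders, and clearing the caches this way on a production box is usually a bad idea):

-- Clear data and plan caches so the next execution compiles a fresh plan.
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
GO
-- Execute the "expensive" parameter set first so its values drive the compiled plan.
EXEC dbo.MyProc @Param = 'heavy-value';   -- placeholder
GO
-- The "cheap" parameter set now reuses the plan compiled above.
EXEC dbo.MyProc @Param = 'light-value';   -- placeholder
GO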
I am noticing a discrepancy in query plans when a process is run in Query Analyzer either as a proc or as straight SQL.
I have a query that uses a view over 5 tables that have a check constraint on the year. When I run my query in Query Analyzer and specify year = 1999 along with other parameters, the query plan only looks at the one table.
When I take that query and make it a stored proc, and run the process passing year = 1999 along with the other parameters, the plan states that it is looking at all of the tables in the partitioned view.
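To illustrate the difference being described (table, view and parameter names below are placeholders, not from the original post): with a literal the optimizer can eliminate partitioned-view members at compile time, whereas with a parameter it has to build a plan that works for any year, so every member table shows up in it.

-- Ad hoc batch: the literal lets the optimizer prune the partitioned view to one member table.
SELECT * FROM dbo.vSalesByYear WHERE SalesYear = 1999   -- placeholder view over 5 yearly tables
GO
-- Stored procedure: the plan has to cover any @Year, so all member tables appear in it.
CREATE PROC dbo.GetSales @Year int
AS
    SELECT * FROM dbo.vSalesByYear WHERE SalesYear = @Year
GO
EXEC dbo.GetSales @Year = 1999
GO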
Hello, I am doing a full-text search on MS SQL Server 2005 (indexed, with CONTAINS). For performance reasons I am only showing the first 200 rows SQL Server finds ("select top 200 ..."). Is there any possibility of getting the estimated total number of all matching rows? I have heard that it is possible to get this from SQL Server; the server then estimates how many rows with that search word could be in the whole database. Google, for example, does the same thing. Is that true? What do I have to do to get this? Greetings and thanks, cpt.oneeye
I wanted to know whether we have an execution plan feature in SQL 6.5 as we have in SQL 7.0 and SQL 2000, i.e. when we execute a query with 'Show Execution Plan' enabled, it creates a map and shows the vital statistics. If that is available in SQL 6.5 then I am missing that tool.
How can I have it installed on my SQL 6.5 server?
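As far as I recall (this is from memory, so treat it as an assumption), SQL 6.5's tools have no graphical plan display; the closest equivalent is the textual showplan output, along the lines of the sketch below (the query is a placeholder against the pubs sample database).

SET SHOWPLAN ON
GO
SELECT au_lname, au_fname FROM authors WHERE au_lname = 'Ringer'   -- placeholder query
GO
SET SHOWPLAN OFF
GO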
Client database has a complex view with eight nested subqueries used to return "dashboard" information. The application code uses NHibernate to call and filter the view with three parameters, one of which is the CustomerID.
A certain customer, (the biggest client), has more than ten times the number of records of the next largest customer.
Occasionally, the database reaches a state where when this particular customer tries to run the dashboard view, the application times out.
If I open up the view and re-save it, all is well again for a few days.
What gives?
Views are supposedly not pre-compiled, though I know that 2000 stores bits and pieces of query plans.
Any ideas on what causes this and what to do about it?
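A sketch of what I'd script to mimic the "re-save the view" workaround without opening the designer; sp_recompile is my own suggestion here, not something from the original post, and the view name is a placeholder.

-- Marks the view (and cached plans that reference it) for recompilation on next use,
-- which is roughly what re-saving the view in the designer achieves.
EXEC sp_recompile 'dbo.vwCustomerDashboard'   -- placeholder view name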
I have a query like the one below. If I add "where Served = 1", the query takes forever... if I remove it, it takes only 6 seconds.
I am not sure why this is happening.
select distinct
    a.Episode_Key,
    case when ag.Category IN ('ASMI', 'COOC', 'SPCL') then 'SMI'
         when ag.Category = 'SEDC' then 'SED'
         when ag.Category = 'ACCA' then 'SA'
         when ag.Category like 'CGA%' then 'Gam'
    end as [Category],
    ag.Agreement_Type_Name as [Agreement],
    p.ServiceProvider,
    s2.Served
from dbo.Assessment a
INNER JOIN (
    select distinct Episode_Key, p.ServiceProvider, max(CSDS_Object_Key) as [Sequence]
    from dbo.Assessment a
    INNER JOIN dbo.CD_Provider_Xref p ON a.Provider_CD = p.Provider_CD
    where Creation_DT >= '07/01/2007'
      and Reason_CD = 1
    group by Episode_Key, p.ServiceProvider
) as s1 ON a.CSDS_Object_Key = s1.Sequence
INNER JOIN dbo.CD_Provider_XREF p ON a.Provider_CD = p.Provider_CD
INNER JOIN dbo.CD_Agreement_Type ag ON ag.Agreement_Type_CD = a.Agreement_Type_CD
LEFT OUTER JOIN (
    select distinct Episode_Key, p.ServiceProvider, 1 as [Served]
    from dbo.Encounters e
    INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
    where Encounter_Begin_DT between '01/01/2008' and '01/31/2008'
      and Procedure_CD is not null
      and Encounter_Units > 0
) as s2 ON a.Episode_Key = s2.Episode_Key
       and p.ServiceProvider = s2.ServiceProvider
-- ????--- where Served = 1
group by a.Episode_Key, ag.Agreement_Type_Name, p.ServiceProvider, Served,
    case when ag.Category IN ('ASMI', 'COOC', 'SPCL') then 'SMI'
         when ag.Category = 'SEDC' then 'SED'
         when ag.Category = 'ACCA' then 'SA'
         when ag.Category like 'CGA%' then 'Gam'
    End
I would like to save a query plan (estimated or actual) created in Query Analyzer -- paste it into a document, or simply print it. It doesn't seem to be possible to select and copy the Execution Plan window, and printing it creates microscopic gibberish which is a waste of paper. Is it possible to do this?

Set showplan_text is of limited help for the SP I'm looking at -- while analyzing the SP, it reads ahead and complains that a temp table created inside the SP doesn't exist (yet) and exits. Using Ctrl-K to capture the query plan allows the SP to complete, but saving the plan is the problem.

Thanks,
Jim Geissman
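One textual workaround I'd try (my suggestion, not from the post, and the procedure name is a placeholder): SET STATISTICS PROFILE actually executes the batch, so the temp table exists by the time the plan rows are produced, and the output is an ordinary result grid that can be copied or saved to a file from Query Analyzer.

SET STATISTICS PROFILE ON
GO
-- Runs the procedure and returns the executed plan (with actual row counts) as a rowset.
EXEC dbo.MyProblemProc   -- placeholder procedure name
GO
SET STATISTICS PROFILE OFF
GO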
I have a SQL 2000 table containing 2 million rows of trade data. Here are some of the columns:

[TradeId] INT IDENTITY(1,1) -- PK, non-clustered
[LoadDate] DATETIME -- clustered index
[TradeDate] DATETIME -- non-clustered index
[Symbol] VARCHAR(10)
[Account] VARCHAR(10)
[Position] INT
etc.

I have a view which performs a join against a security master table (to gather more security data). The purpose of the view is to return all the rows where [TradeDate] is within the last trading days.

The query against the view takes around 30 minutes. When I view the query plan, it is not using the index on the [TradeDate] column but is instead using the clustered index on the [LoadDate] column... The odd thing is, the [LoadDate] column is not used anywhere in the view! For testing purposes, I decided to do a straight SELECT against the table (minus the joins) and that one ALSO uses the clustered index scan against a column not referenced anywhere in the query.

There is a reason why I have not posted my WHERE clause until now. The reason is that I am doing what I think is a very inefficient clause:

WHERE [TradeDate] >= fGetTradeDateFromThreeDaysAgo(GetDate())

The function calculates the proper trade date based on the specified date (in this case, the current date). It is my understanding that the function will be called for all rows. (Which COULD explain the performance issue...)

However, this view has been around for ages and never before caused any sort of problems. The issue actually started the day after I had to recreate the table. (I had to recreate the table because some columns were added and others were renamed.)

On a side note, if I replace the WHERE clause with a hard-coded date (as in WHERE [TradeDate] >= '20060324'), the query performs fine but STILL uses the clustered index on the [LoadDate] column.
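A quick sketch of the test I'd run to separate the UDF from the index choice (my own suggestion, reusing the function and column names from the post; the table name is a placeholder): evaluate the function once into a variable so the optimizer sees a plain range predicate.

DECLARE @cutoff DATETIME
SET @cutoff = dbo.fGetTradeDateFromThreeDaysAgo(GETDATE())

-- If this still scans the clustered index on [LoadDate], the UDF is probably not the culprit.
SELECT [TradeId], [TradeDate], [Symbol], [Account], [Position]
FROM dbo.Trades                  -- placeholder table name
WHERE [TradeDate] >= @cutoff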
I'm hoping somebody can explain exactly what's going on here - I can't find it documented anywhere.

Go to the Northwind database, and run the following SQL:

create index IX_UnitPrice on [order details](unitprice)

Now, turn on SHOWPLAN (either graphical or text, it doesn't matter), and run the following query:

select * from [order details]
where unitprice = 2

Output:

StmtText
|--Index Seek(OBJECT: ([Northwind].[dbo].[Order Details].[IX_UnitPrice]), SEEK: ([Order Details].[UnitPrice]=Convert([@1])) ORDERED FORWARD)

Now, alter the SARG slightly by making it a float:

select unitprice from [order details]
where unitprice = 2.000

Output:

StmtText
|--Nested Loops(Inner Join, OUTER REFERENCES: ([Expr1003], [Expr1004], [Expr1005]))
     |--Compute Scalar(DEFINE: ([Expr1003]=Convert(Convert([@1]))-1.00, [Expr1004]=Convert(Convert([@1]))+1.00, [Expr1005]=If(Convert(Convert([@1]))-1.00=NULL) then 0 else 6|If(Convert(Convert([@1]))+1.00=NULL) then 0 else 10))
     |    |--Constant Scan
     |--Index Seek(OBJECT: ([Northwind].[dbo].[Order Details].[IX_UnitPrice]), SEEK: ([Order Details].[UnitPrice] > [Expr1003] AND [Order Details].[UnitPrice] < [Expr1004]), WHERE: (Convert([Order Details].[UnitPrice])=Convert([@1])) ORDERED FORWARD)

Right. I understand that in both cases the SARG datatype is different from the column datatype (which is money), and that in the first example the SARG constant gets implicitly converted from int -> money (following the datatype hierarchy rules), and so the index can still be used.

In the second example, the datatype hierarchy dictates that money is lower than float, so the table column gets implicitly converted from money -> float, which strictly speaking disallows the use of the index on that column.

What I DON'T understand is what exactly all that gubbins about the expressions (especially the definition of [Expr1005]) is all about; how does that statement decide whether Expr1005 is going to be NULL, 6, or 10?

I'm soon going to be giving some worked tutorials on index selection and use of Showplan to our developers, and being a bolshy lot they're bound to want to know exactly what all that output means. I'd rather be able to tell them than to say I don't actually know!

How about it someone?

Thanks,
Phil
I may just be completely missing something here, but when I view a query plan for a SQL statement that involves a join with a synonym, I do not see any reference to the synonym, or to the underlying table it points to, in the query plan. Any thoughts?
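For a concrete repro of what I think is being described (all object names below are placeholders I made up): synonyms are resolved to their base objects when the statement is compiled, so I'd expect the plan to show the base table rather than the synonym itself.

CREATE SYNONYM dbo.TradeSyn FOR dbo.Trades;   -- placeholder synonym and base table
GO
SET SHOWPLAN_XML ON;
GO
-- In the resulting estimated plan I'd expect to see dbo.Trades, not dbo.TradeSyn.
SELECT t.*
FROM dbo.TradeSyn t
JOIN dbo.Accounts a ON a.AccountId = t.AccountId;   -- placeholder join
GO
SET SHOWPLAN_XML OFF;
GO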
I was doing a demo last night, something that I've done hundreds of times already. Last night was the first time that it has failed to work. I was trying to show what the sys.dm_db_missing_index_* DMVs can provide.
AdventureWorks database
I'm running the following query:
select city from person.address where city like 'A%'
This is supposed to produce a table scan, which in turn will obviously cause SQL Server to detect that an index could be beneficial. However, it does a clustered index scan (yes, I know, basically the same thing) instead, and I see absolutely nothing appear in the DMVs. I pulled the data out into a dummy table that did not have a primary key either, using the following:

select * into person.tmpaddress from person.address
I then execute the same query against the dummy table and get a table scan, which is expected:
select city from person.tmpaddress where city like 'A%'
However, it does not matter how often I execute that query, or any other permutation of explicit queries: absolutely nothing at all gets logged to the sys.dm_db_missing_index_* DMVs. I have also tried the same type of thing with several other tables in the AW database and cannot find a single query which will cause anything to be logged to these DMVs. It seems that something is broken, but for the life of me I can't figure out what is wrong. No weird settings, I'm running as sa, etc.
I can run queries like this in other databases and stuff gets immediately logged to the DMVs as expected. Any ideas?
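For reference, this is the kind of check I assume is being run against the missing-index DMVs (a straightforward join of the three views, nothing exotic):

-- Lists every missing-index suggestion the optimizer has recorded since the last restart.
SELECT d.statement            AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks DESC;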
How do I calculate the estimated completion time of a job, and the variance/difference in time based on previous job history? I am looking for a T-SQL query which can accomplish this. For example: daily, a job takes 10 minutes to complete. However, today, for some reason, the job has been running for over an hour and is still running. It could be a blocking issue or some performance issue on the server due to which the job is still running.
In such cases, I want a T-SQL query or a stored procedure which monitors these jobs every 3 minutes (a configurable value). So every 3 minutes the query has to check whether there are any jobs which are taking more time than their usual/average completion time, and in that case shoot an email using the Database Mail functionality, i.e. sp_send_dbmail. From there, the DBA can dig further using waits or a SQL trace, etc.
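A rough sketch of the core check, as I would write it (my own take, assuming msdb job history is retained; the threshold is arbitrary and the sp_send_dbmail call is left out):

-- Compare the current run time of in-flight Agent jobs against their average historical duration.
;WITH hist AS (
    SELECT j.job_id, j.name,
           AVG(h.run_duration / 10000 * 3600
             + h.run_duration / 100 % 100 * 60
             + h.run_duration % 100) AS avg_seconds      -- run_duration is stored as HHMMSS
    FROM msdb.dbo.sysjobs j
    JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id AND h.step_id = 0  -- job outcome rows
    GROUP BY j.job_id, j.name
)
SELECT h.name,
       DATEDIFF(SECOND, a.start_execution_date, GETDATE()) AS running_seconds,
       h.avg_seconds
FROM msdb.dbo.sysjobactivity a      -- ideally also filter to the latest Agent session
JOIN hist h ON h.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL
  AND DATEDIFF(SECOND, a.start_execution_date, GETDATE()) > h.avg_seconds * 2;  -- arbitrary threshold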
I'm running the same query on two computers but getting a different query plan. On one of the servers, the query returns in 10 seconds, on the other server it takes over a minute. What the heck!
1. same hardware configuration on both computers
2. same user databases on both computers
3. same NT setup on both computers
4. same software installed on both computers
I can only imagine that MSSQL7 was set up differently and the query optimizer is making a stupid choice. I've compared numerous SQL Server options and they are the same on both.
1. sp_configure (same on both computers)
2. properties sheet on each server from Enterprise Manager (same on both computers)
I'm trying to test some queries in Query Analyzer without reusing the already-cached query plan. I know that there is a way to avoid that, but I can't remember it right now. Another option would be to restart the MS SQL service, but I don't want to do that. Any thoughts...?
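For what it's worth, these are the commands I suspect are being half-remembered here (be careful on a shared server, since they flush the caches for everyone):

-- Remove all cached execution plans so the next run compiles fresh ones.
DBCC FREEPROCCACHE
GO
-- Optionally also drop clean data pages to test against a cold buffer cache.
DBCC DROPCLEANBUFFERS
GO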
If a T-SQL query runs perfectly in development, then when I execute it in production I want it to use the execution plan from development. How can I do that using the cache? I know we can use the USE PLAN hint, but I want to do it with the cache.
Hi. Coming from PostgreSQL, I am used to textual references to most of the things I do with the database. I feel a little lost with all the graphical tools. I have a few questions regarding MS SQL 2000.

1. What is the best (or easiest) way of getting a table definition in text? It could be either a CREATE TABLE SQL query or just a definition, something like:

TABLE thisTable
id integer
value varchar(10)
etc. etc.

2a. How do I get a query plan, and how do I get it in text?
2b. Are there planner modes that show more or less of what actually happened, a verbose mode perhaps?
2c. If I ask for a query plan, will SQL Server actually run the query or will it only produce a plan? If the query is run, does it commit or roll back by default?

stig
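A minimal sketch of the textual tools I'd point to in SQL 2000 (my own suggestions, with a placeholder table name). With the SHOWPLAN settings on, the statements are compiled but not executed, so nothing is committed or rolled back.

-- 1. Table definition as a rowset (columns, types, indexes, constraints).
EXEC sp_help 'dbo.thisTable'
GO
-- 2a/2b. Textual estimated plans; SHOWPLAN_ALL is the verbose form, SHOWPLAN_TEXT the compact one.
SET SHOWPLAN_ALL ON
GO
SELECT id, value FROM dbo.thisTable WHERE id = 1
GO
SET SHOWPLAN_ALL OFF
GO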