Query Analyzer -> Subtree Cost Vs. Execution Time
Jul 14, 2004
I am using a stored procedure that is behaving badly - the subtree cost is about 2000 and it usually takes between 3 and 4 seconds to run, though sometimes it takes over a minute. I have made some optimizations that get the stored procedure to run generally in under 1 second (at most under 2 seconds), but its subtree cost jumps to 4000!! All of this while the server was experiencing similar load (the tests were done within minutes of each other).
I know that the subtree cost is a way to gauge the performance of a query against other queries, but I have typically seen the cost go in the same direction as the execution time (they both go up or they both go down).
How does SQL Server determine the cost (I know that it is based on statistics, but I was wondering if anyone had more details)? Is it more important to have a lower subtree cost or a lower execution time? Am I going to get into trouble later with this high subtree cost?
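Not a full answer, but one place to see what the number is made of: the cost is purely the optimizer's estimate, built from per-operator I/O and CPU guesses that are in turn derived from statistics and cardinality estimates. A minimal sketch that exposes those components (the table and filter are just examples borrowed from a later post):
<code>
-- Each SET must be the only statement in its batch.
SET SHOWPLAN_ALL ON;
GO
-- Not executed while SHOWPLAN_ALL is ON; returns one row per plan operator.
SELECT * FROM mp_Contacts WHERE FirstName = 'Jason';
GO
SET SHOWPLAN_ALL OFF;
GO
-- Columns of interest in the output: EstimateIO, EstimateCPU, EstimateRows,
-- and TotalSubtreeCost (the root row's value is the number Query Analyzer shows).
</code>
Because the cost is only an estimate, a plan whose cost doubled while its measured time halved usually just means the estimates are off for the new plan shape; most people treat measured time under realistic load as the figure that matters in production.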
Am I right in thinking that if the estimated subtree cost is higher than the cost threshold for parallelism, then it will use a parallel plan? If so, I've read the cost threshold is measured in minutes, but is the subtree cost measured in something else, the mysterious cost number? And if so, how are the two compared?
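Close, with one correction I'm fairly sure of: both numbers are in the same unitless cost currency (Books Online describes the threshold as the estimated elapsed time in seconds to run the serial plan on a reference machine, not minutes), so the comparison is direct: if the cheapest serial plan's estimated cost exceeds the threshold, the optimizer considers a parallel plan. A sketch for inspecting the setting:
<code>
-- 'cost threshold for parallelism' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Show the current threshold (the default is 5 cost units):
EXEC sp_configure 'cost threshold for parallelism';
</code>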
Hi, I have an interesting situation. I have created a stored procedure which has a SELECT UNION query and accepts some parameters. When I execute this procedure it takes 8 minutes. When I copy the script out of the stored procedure and run it directly in Query Analyzer, it takes 2 1/2 minutes?? The same number of rows, about 13,000, is returned either way in the result set.
I cannot figure this out; it is almost the same thing, except that in Query Analyzer I declare the parameters as variables and assign their values.
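One commonly cited cause for exactly this pattern (procedure slow, same text with DECLAREd variables fast) is parameter sniffing: the procedure's plan is compiled for whatever parameter values it happened to see first, while local variables are optimized from average density, which is what the ad hoc Query Analyzer test gets. A hedged sketch of the usual workaround, with hypothetical names throughout:
<code>
-- Hypothetical procedure shape; copying the parameter into a local
-- variable makes the optimizer use the same average-density estimates
-- as the ad hoc script, which often evens out the two timings.
CREATE PROCEDURE dbo.usp_Search
    @Param1 varchar(50)
AS
BEGIN
    DECLARE @p1 varchar(50);
    SET @p1 = @Param1;      -- defeats sniffing of @Param1

    SELECT *
    FROM dbo.SomeTable      -- placeholder table
    WHERE SomeColumn = @p1;
END
</code>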
I'm trying to get the subtree cost (SQL Server 2000) back from a dynamic SQL query that I build on the fly. The SQL query is:
SET SHOWPLAN_ALL ON
GO
select * from mp_Contacts Where FirstName = 'Jason' --my query
GO
SET SHOWPLAN_ALL OFF
GO
the C# code is:
<code>
Decimal HoldSubTree;

// Make Connection
SqlConnection FindSubTree = new SqlConnection(Utilities.GetCRMEnvironment()); // Sql Connection

System.Text.StringBuilder SHOWPLANALLON = new System.Text.StringBuilder("SET SHOWPLAN_ALL ON");
System.Text.StringBuilder SHOWPLANALLOFF = new System.Text.StringBuilder("SET SHOWPLAN_ALL OFF");
System.Text.StringBuilder SubTreeSql = new System.Text.StringBuilder(MySqlString.ToString() + " ");

// Grab the data and fill the dataset
SqlCommand SubTreemyCommand = new SqlCommand(SHOWPLANALLON.ToString(), FindSubTree);
SubTreemyCommand.CommandTimeout = 90;
SubTreemyCommand = new SqlCommand(SubTreeSql.ToString(), FindSubTree);
SubTreemyCommand.CommandTimeout = 90;

SqlDataAdapter SubTreemyDA = new SqlDataAdapter();
SubTreemyDA.SelectCommand = SubTreemyCommand;
DataSet SubTreemyDS = new DataSet();

SubTreemyCommand = new SqlCommand(SHOWPLANALLOFF.ToString(), FindSubTree);
SubTreemyCommand.CommandTimeout = 90;

try
{
    SubTreemyDA.Fill(SubTreemyDS);
</code>
and the result: it brings back the rows of data from the dynamic query instead of the subtree cost. How could I send 3 statements to the SQL database, but all in the same transaction? For example: first the server would switch modes: "SET SHOWPLAN_ALL ON", then I would send it my query: "select * from mp_Contacts Where FirstName = 'Jason' ", and then switch the server back over: "SET SHOWPLAN_ALL OFF". Is this possible?? Ideas??
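A hedged reading of what's going wrong: the two SHOWPLAN commands above are constructed but never executed, and what matters is the same session rather than the same transaction. The usual pattern is to open the SqlConnection once, run the first and last batches with ExecuteNonQuery on that open connection, and Fill only the middle one; with SHOWPLAN_ALL on, the "result set" the adapter fills is the plan itself. In T-SQL terms, the server needs to see three separate single-statement batches:
<code>
-- Batch 1: sent alone (e.g. ExecuteNonQuery on the already-open connection).
SET SHOWPLAN_ALL ON;
-- Batch 2: sent alone (e.g. SqlDataAdapter.Fill). With SHOWPLAN_ALL on this
-- returns plan rows, not contact rows; read the TotalSubtreeCost column of
-- the first row of the filled DataTable to get the number wanted.
select * from mp_Contacts Where FirstName = 'Jason';
-- Batch 3: sent alone, to put the session back to normal.
SET SHOWPLAN_ALL OFF;
</code>
If the adapter is allowed to open and close the connection itself, the SETs land on a different (or recycled) session, which would also explain seeing plain data rows come back.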
This is probably a very stupid question. I have been out of the SQL Server arena for a while and am now getting re-acclimated. It was my understanding that displaying the execution plan in Query Analyzer does not really execute the query against the query's database tables. Is this right? Tom.
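That matches my understanding: "Display Estimated Execution Plan" only compiles the batch, while the "actual" plan options do run it. A quick, hedged way to convince yourself, using a hypothetical table name:
<code>
SET SHOWPLAN_ALL ON;
GO
-- While SHOWPLAN_ALL is ON, this DELETE is compiled but not executed;
-- it returns plan rows and removes nothing.
DELETE FROM dbo.TestTable WHERE 1 = 1;
GO
SET SHOWPLAN_ALL OFF;
GO
-- The table is still intact:
SELECT COUNT(*) FROM dbo.TestTable;
</code>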
I have two queries yielding the same result that I wanted to compare for performance. I entered both queries in one Management Studio query window and executed them as one batch with the actual query plan included. Query 1 took 8.2 seconds to complete, and the query plan said that its cost was 21% of the batch. Query 2 took 2.3 seconds to complete, and the query plan said that its cost was 79% of the batch. The queries were run on my local development machine. I was the only user. No other programs were running at the time of this test. The results are repeatable. I understand that the query with the lowest cost is not necessarily the fastest query. On the other hand, the difference is quite big: the query that has approx. 80% of the cost takes 20% of the time, and the other way around. I have two questions:
Is such a discrepancy normal? Can conclusions be drawn from the cost distribution? For instance, does the query that takes 8.2 seconds but only costs 21% scale better?
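For what it's worth: the percentages are derived from estimated costs even in an "actual" plan, so they can disagree with the clock this badly without the plan misreporting anything it actually measured. When comparing candidates, measured figures are usually the safer guide, along these lines:
<code>
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- Run query 1 here, then query 2, and compare elapsed ms and
-- logical reads per table rather than the plan's cost percentages.
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
</code>
As for scaling, I wouldn't conclude much from the 21/79 split alone; logical reads tend to be a steadier predictor of how a query will scale than either the estimate or a single timing.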
Hello, I need some help. I am in school right now, taking a SQL Server class. We have been working in Query Analyzer making a database. Well, I have to print out everything that I have typed, but I want to view it first. How do I do that?
Sorry, I did a search and couldn't find anything, probably because I don't really know what to search for or look under. Thanks for your time, guys.
Hello All, the following query takes 7 minutes to execute with the search criterion shown in query 1) below (i.e. IN ('2006','2007')); if the criterion changes to = '2006' as shown in query 2), it takes 2 minutes.
But I want the output of query 1) in less time. How can I optimize the following query's execution time?
1)
select sum(PB.CONSN_QTY) Consumption, Count(*), PB.BillPro_Year
from tbtrans_prowaterbill PB
INNER JOIN MIDC_AREA MA ON PB.Area_cd = MA.Area_cd
INNER JOIN MIDC_Division MD ON MA.Div_CD = MD.Division_CD
INNER JOIN MIDC_Circle MC ON MD.Circle_CD = MC.Circle_CD
INNER JOIN TBMST_SubDiv TS ON MA.SubDiv_CD = TS.SubDiv_CD
INNER JOIN MIDC_Zone MZ ON MD.Zone_CD = MZ.Zone_CD
INNER JOIN tbmst_consumer TC ON PB.cons_no = TC.Cons_No
INNER JOIN TBMST_CONSTYPE TCT ON TCT.Cons_Type = TC.Cons_Type
where pb.billpro_year IN ('2006','2007')
  and MTR_Size = 15
  and TCT.Cons_Type = '1A2'
  and MZ.Zone_Name = 'MUMBAI'
  and MC.Circle_NAME = 'MMR'
  and MD.Division_Name = 'Dombivli'
  and TS.SubDiv_DESC = 'THANE DIVISION STAFF'
group by PB.BillPro_Year
2)
select sum(PB.CONSN_QTY) Consumption, Count(*), PB.BillPro_Year
from tbtrans_prowaterbill PB
INNER JOIN MIDC_AREA MA ON PB.Area_cd = MA.Area_cd
INNER JOIN MIDC_Division MD ON MA.Div_CD = MD.Division_CD
INNER JOIN MIDC_Circle MC ON MD.Circle_CD = MC.Circle_CD
INNER JOIN TBMST_SubDiv TS ON MA.SubDiv_CD = TS.SubDiv_CD
INNER JOIN MIDC_Zone MZ ON MD.Zone_CD = MZ.Zone_CD
INNER JOIN tbmst_consumer TC ON PB.cons_no = TC.Cons_No
INNER JOIN TBMST_CONSTYPE TCT ON TCT.Cons_Type = TC.Cons_Type
where pb.billpro_year = '2006'
  and MTR_Size = 15
  and TCT.Cons_Type = '1A2'
  and MZ.Zone_Name = 'MUMBAI'
  and MC.Circle_NAME = 'MMR'
  and MD.Division_Name = 'Dombivli'
  and TS.SubDiv_DESC = 'THANE DIVISION STAFF'
group by PB.BillPro_Year
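Untested against this schema, but one rewrite that sometimes helps when an IN list flips the plan: run the fast single-year form once per year and stack the results with UNION ALL, so each branch keeps the plan shape of query 2). Since each branch already groups by year, the two result rows simply concatenate:
<code>
select sum(PB.CONSN_QTY) Consumption, Count(*), PB.BillPro_Year
from tbtrans_prowaterbill PB
INNER JOIN MIDC_AREA MA ON PB.Area_cd = MA.Area_cd
INNER JOIN MIDC_Division MD ON MA.Div_CD = MD.Division_CD
INNER JOIN MIDC_Circle MC ON MD.Circle_CD = MC.Circle_CD
INNER JOIN TBMST_SubDiv TS ON MA.SubDiv_CD = TS.SubDiv_CD
INNER JOIN MIDC_Zone MZ ON MD.Zone_CD = MZ.Zone_CD
INNER JOIN tbmst_consumer TC ON PB.cons_no = TC.Cons_No
INNER JOIN TBMST_CONSTYPE TCT ON TCT.Cons_Type = TC.Cons_Type
where pb.billpro_year = '2006' and MTR_Size = 15 and TCT.Cons_Type = '1A2'
  and MZ.Zone_Name = 'MUMBAI' and MC.Circle_NAME = 'MMR'
  and MD.Division_Name = 'Dombivli' and TS.SubDiv_DESC = 'THANE DIVISION STAFF'
group by PB.BillPro_Year
UNION ALL
select sum(PB.CONSN_QTY), Count(*), PB.BillPro_Year
from tbtrans_prowaterbill PB
INNER JOIN MIDC_AREA MA ON PB.Area_cd = MA.Area_cd
INNER JOIN MIDC_Division MD ON MA.Div_CD = MD.Division_CD
INNER JOIN MIDC_Circle MC ON MD.Circle_CD = MC.Circle_CD
INNER JOIN TBMST_SubDiv TS ON MA.SubDiv_CD = TS.SubDiv_CD
INNER JOIN MIDC_Zone MZ ON MD.Zone_CD = MZ.Zone_CD
INNER JOIN tbmst_consumer TC ON PB.cons_no = TC.Cons_No
INNER JOIN TBMST_CONSTYPE TCT ON TCT.Cons_Type = TC.Cons_Type
where pb.billpro_year = '2007' and MTR_Size = 15 and TCT.Cons_Type = '1A2'
  and MZ.Zone_Name = 'MUMBAI' and MC.Circle_NAME = 'MMR'
  and MD.Division_Name = 'Dombivli' and TS.SubDiv_DESC = 'THANE DIVISION STAFF'
group by PB.BillPro_Year
</code>
It is also worth confirming that tbtrans_prowaterbill has an index leading on billpro_year (or whichever predicate is most selective).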
Hello, I ran a query that I thought would take an hour, but instead it took 14 hours to run. The consequence was that it bogged down our data warehouse and the overnight build was adversely impacted. Is there a local setting I can set to limit the execution time my query will take? I don't want a server setting that impacts other queries, just the one I am running. I know there will be people asking about the 14-hour build, what it is doing, and so forth. I will address that, but I also look at these situations as a learning opportunity. Thanks in advance. Rob
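There is a session-scoped knob that comes close, with one caveat: it works on the optimizer's cost estimate, not wall-clock time, so it refuses expensive plans up front rather than killing a query at the N-hour mark:
<code>
-- Applies only to the current connection; other sessions are untouched.
-- 1800 here is in estimated cost units, NOT seconds or minutes.
SET QUERY_GOVERNOR_COST_LIMIT 1800;
-- Any statement after this point whose estimated cost exceeds 1800
-- fails immediately instead of starting to run.
</code>
A true elapsed-time cap has to come from the client side (e.g. the command timeout of whatever tool submits the query).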
I have created 2 tables in a database which are mostly similar. Queries against table1 execute quickly (less than or equal to 1 sec), but the same query against table2 takes 4 or 5 secs, even though similar data is present in both tables. For example, I executed select max(c_code) from table1 and select max(c_code) from table2; the first takes less than 1 sec and the second takes 4 or 5 secs. There is also a procedure I have written to update both tables, and I get a timeout error: if the procedure updates table1 alone it is OK, but if it updates table2 alone, the timeout error is shown. Please reply with the reason for this problem as early as possible; I would be grateful for any reply.
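Hard to diagnose remotely, but one hypothesis worth checking first: MAX(c_code) can be answered from the end of an index on c_code in a single touch, and falls back to scanning the whole table when no such index exists. A sketch, using the names from the post:
<code>
-- If table1 has an index like this and table2 does not,
-- the timing difference would follow directly.
CREATE INDEX IX_table2_c_code ON table2 (c_code);
</code>
The update timeout may then be a separate issue (blocking, or the client's command timeout), so that part deserves its own look.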
I have a query, a rather complex one dealing with more than 1 million rows, that used to run in 40 minutes in SQL Server 2000's Query Analyzer. Now it has been running for 10 hours in SQL Server 2005's Management Studio, and it still has not finished! What could be going wrong here? Basically nothing changed, except that I upgraded my server from SQL Server 2000 to SQL Server 2005. It seems something is badly wrong in SQL Server 2005. Any suggestions?
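A common first step after a 2000-to-2005 upgrade, offered as a hint rather than a diagnosis: the 2005 optimizer can pick very different plans from stale statistics, so refreshing them is cheap to try before anything more invasive:
<code>
-- Refresh statistics database-wide after the upgrade:
EXEC sp_updatestats;
-- Or per large table, with a full scan (hypothetical table name):
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;
</code>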
If I have 6-8 queries running in parallel, does having a single Connection Manager (for the same source) for all of the extracts perform faster, or does having a distinct Connection Manager for each extract perform faster?
I want to save every query executed from a given piece of software, let's say Multi Script for example, recording the query text, execution time, and row count, among other possibly useful information, in a table. Right now I've created an sp and a job that runs every 1 millisecond, but I can't figure out how to get the execution time and row count. Another problem with this is that if the query takes too long, I end up with several rows in my table.
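If polling with a job, one hedged alternative on SQL 2005 and later is to read the plan cache's accumulated numbers instead of timing things yourself. Note that per-query row counts are not available in this DMV on older builds, so this sketch only covers text, timings, and execution counts:
<code>
SELECT SUBSTRING(st.text, 1, 200)        AS query_text,
       qs.execution_count,
       qs.total_elapsed_time / 1000      AS total_elapsed_ms,  -- DMV reports microseconds
       qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;
</code>
Snapshotting this periodically and diffing the counters also avoids the duplicate rows that a fast-polling job inserts while one long query is still running.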
I need to build a TSQL query to return the last unit cost from my table of movement of goods SL (in a CTE), but the MAX(Datalc) must be less than or equal to the date of my HeaderInvoice.
This is my script:
With MaxDates as (
    SELECT ref, MAX(epcpond) [Unitcostprice], MAX(datalc) MaxDate
    FROM sl
[code]....
The problem I have right now is that the Unitcostprice from my table of goods movements has a top date greater than the date of my bill.
Example:
invoice date: 29.01.2015
unit cost on invoice line = 13,599722
MaxDate (CTE): 19.03.2015
unit cost from my table of movement of goods = 14,075
That's not correct, because MaxDate > invoice date, and the unit cost of 14,075 is the cost on 19.03.2015, not the cost just before my invoice date.
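A MAX computed over the whole movement table can't respect a per-invoice cutoff, and pairing MAX(epcpond) with MAX(datalc) doesn't guarantee the cost actually belongs to the latest row either. A hedged sketch of the usual shape; HeaderInvoice's column names (InvoiceID, InvoiceDate) and the assumption that the header carries the article ref are guesses on my part:
<code>
SELECT hi.InvoiceID,
       hi.InvoiceDate,
       (SELECT TOP 1 s.epcpond               -- cost of the single latest row...
        FROM sl AS s
        WHERE s.ref = hi.ref                 -- ...for this article (adjust the
          AND s.datalc <= hi.InvoiceDate     -- join if ref lives on the lines)
        ORDER BY s.datalc DESC) AS LastUnitCost
FROM HeaderInvoice AS hi;
</code>
The key line is the datalc <= InvoiceDate filter inside the correlated lookup, which is what the plain MAX over the table was missing.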
I have a view in SQL Server 2005. It took 30 sec. to finish. Then I deleted 4500 records from one table that is used in the view, and now it takes 90 sec. to finish. I compared the Actual Execution Plan from before and after deleting the data, and they are almost the same; the only difference is that the Actual Number of Rows became less after the delete. So I wonder why less data takes more time. Looking closely at the Actual Execution Plan, the ridiculous thing is that there is only an Estimated Operator Cost on each step, no Actual Operator Cost. I guess something is wrong with the optimizer reusing the same execution plan, but how can I tell which step is wrong without an Actual Operator Cost?
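For what it's worth, SQL Server never shows an "actual operator cost": even in an actual plan, the cost numbers are the compile-time estimates. The measured per-operator figures that do exist are row counts, and they can be pulled out in tabular form to compare against the estimates:
<code>
SET STATISTICS PROFILE ON;
GO
SELECT * FROM dbo.MyView;   -- hypothetical view name
GO
SET STATISTICS PROFILE OFF;
GO
-- The extra result set lists each operator with Rows (actual) alongside
-- EstimateRows; large gaps point at the step whose estimate went stale,
-- which is usually where a reused plan has stopped fitting the data.
</code>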
Is there a way to keep track in real time of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over 5 minutes.
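On SQL 2005 and later, one hedged way to do the watching from the outside (say, a job polling every minute) instead of from inside the procedure:
<code>
-- Sessions whose current request has been running longer than 5 minutes:
SELECT r.session_id,
       r.start_time,
       DATEDIFF(SECOND, r.start_time, GETDATE()) AS elapsed_seconds,
       st.text AS running_batch
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE DATEDIFF(SECOND, r.start_time, GETDATE()) > 300
  AND r.session_id <> @@SPID;
</code>
The polling job can then start the trace (or just log the offender) whenever this returns rows.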
I have 2 different queries which produce the same result. I want to know which query is better and why. The query is used to display the details of the employee who is handling the maximum number of projects.
The queries are the following:
Query A
Code Snippet
SELECT EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails ON LUP_EmpProject.Empid = EmployeeDetails.Empid
GROUP BY EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName, LUP_EmpProject.Empid
HAVING COUNT(LUP_EmpProject.Empid) > 0
   AND COUNT(LUP_EmpProject.Empid) = (SELECT MAX(Number_Of_Projects)
                                      FROM (SELECT COUNT(LUP_EmpProject.Empid) Number_Of_Projects
                                            FROM LUP_EmpProject
                                            GROUP BY LUP_EmpProject.Empid) AS sub)
Query B
Code Snippet
SELECT LUP_EmpProject.EmpID,
       EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(*) AS NumberOfProjects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails ON LUP_EmpProject.EmpID = EmployeeDetails.EmpID
GROUP BY LUP_EmpProject.EmpID, EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName
HAVING COUNT(*) = (SELECT MAX(Number_Of_Projects)
                   FROM (SELECT COUNT(LUP_EmpProject.Empid) Number_Of_Projects
                         FROM LUP_EmpProject
                         GROUP BY LUP_EmpProject.Empid) AS sub)
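Not a verdict on which of the two is better, but for comparison's sake, a third formulation of the same requirement that avoids the nested MAX subquery entirely (TOP 1 WITH TIES keeps every employee tied for the maximum):
<code>
SELECT TOP 1 WITH TIES
       ed.FirstName + ' ' + ed.LastName AS EmpName,
       COUNT(*) AS NumberOfProjects
FROM LUP_EmpProject AS ep
INNER JOIN EmployeeDetails AS ed ON ep.EmpID = ed.EmpID
GROUP BY ep.EmpID, ed.FirstName + ' ' + ed.LastName
ORDER BY COUNT(*) DESC;
</code>
Running all three with the actual plan and SET STATISTICS IO ON is the fair way to rank them on a real data distribution.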
I have a query I have been optimizing. It now runs in about 15 minutes, but I was wondering if there is any way to reduce the SORT cost.
Currently the high costs left are the table insert, at 58%, and the sort, at 36%.
The inner query is around 400 million rows and aggregates to around 15,000,000 rows.
SELECT @1 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -1, GETDATE() - 1) as DATE)
SELECT @2 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -2, GETDATE() - 1) as DATE)
SELECT @3 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -3, GETDATE() - 1) as DATE)
SELECT @4 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -4, GETDATE() - 1) as DATE)
SELECT @5 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -5, GETDATE() - 1) as DATE)
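Without the full plan this is only a guess, but the usual levers for a large Sort feeding a table insert are: an index on the source that already delivers rows in the required order, or loading into a heap and building the index afterwards so the insert itself doesn't have to sort into key order. A sketch with placeholder names throughout:
<code>
-- If the Sort exists to satisfy the target's clustered key, inserting into
-- a heap first and indexing afterwards moves the sort into an index build,
-- which is often cheaper (and can be minimally logged, recovery model permitting):
CREATE TABLE dbo.StagingAgg (DateKey int NOT NULL, Measure decimal(18, 4));

INSERT INTO dbo.StagingAgg (DateKey, Measure)
SELECT DateKey, SUM(SomeCol)          -- placeholder aggregate
FROM dbo.SomeBigSource                -- placeholder 400M-row source
GROUP BY DateKey;

CREATE CLUSTERED INDEX CIX_StagingAgg ON dbo.StagingAgg (DateKey);
</code>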
I have enabled the query governor on our SQL 2000 SP2 server with a threshold of 3600. Now some of the maintenance jobs fail due to the limit being too low (e.g. one of the user databases' integrity checks fails nightly). I have tried to put the command 'SET QUERY_GOVERNOR_COST_LIMIT 0' just before the line in the step which reads 'EXECUTE master.dbo.xp_sqlmaint N'-Plan etc', but it has no effect. Does anyone know how to get around this situation without using sp_configure to change the query governor settings at a systemwide level? GC.
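A hedged explanation for the SET having no effect: SET QUERY_GOVERNOR_COST_LIMIT only changes the current session, and xp_sqlmaint hands the work to the sqlmaint utility, which connects back to the server on its own separate session, so the setting never reaches it. One workaround sketch is to run the failing check as plain T-SQL in the job step, where the SET and the work share a session:
<code>
-- Same batch, same session, so the 0 (= no limit) actually applies:
SET QUERY_GOVERNOR_COST_LIMIT 0;
DBCC CHECKDB ('MyUserDatabase');   -- hypothetical database name
</code>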
I want to turn on the query governor cost limit option with a value of 20 minutes, so that queries do not run longer than 20 minutes, but I could not find any info on whether this option would also prevent database backup jobs or other maintenance jobs from running more than 20 minutes. We have backups (via RedGate) that run over 3.5 hours, and others, like index rebuilds and integrity checks, take even more than 3.5 hours...
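One caveat I'm fairly confident of before flipping this on: the value is in estimated cost units, not minutes, and it is checked before a statement starts, so nothing already running gets cut off at the 20-minute mark; whether a given backup or maintenance command is even subject to the limit is worth testing on your own setup. Setting it server-wide looks like:
<code>
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 1200 here means 1200 estimated cost units, NOT 20 minutes:
EXEC sp_configure 'query governor cost limit', 1200;
RECONFIGURE;
</code>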
Hello anybody! I want to get the execution time of a query. I mean, I will run one SQL statement like "SELECT * FROM tblname WHERE field1 = '009'", and then I want to get the execution time of this query from my program. I could just keep the system time before running it and compare it with the system time when it finishes, but I don't like that approach. So, can I get the execution time from SQL Server by running a system stored procedure or something like that? Thanks.
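There is a session setting for exactly this, hedged only in that the timings come back as informational messages rather than a result set (client APIs generally surface them through their message events, e.g. InfoMessage in ADO.NET):
<code>
SET STATISTICS TIME ON;
SELECT * FROM tblname WHERE field1 = '009';
SET STATISTICS TIME OFF;
-- The messages output then reports, per statement:
--   SQL Server Execution Times:
--      CPU time = ... ms,  elapsed time = ... ms.
</code>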
Is it possible that a stored procedure runs slower when called by an application, and runs faster when executed as 'exec xxxxx' in Query Analyzer? It's actually happening to us. Any clue?? Thanks. Di.
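Yes, it's possible, and besides parameter sniffing, one classic cause is that Query Analyzer connects with SET ARITHABORT ON while most application drivers connect with it OFF, so the two get different cached plans for the same procedure. A hedged way to reproduce the application's behavior from Query Analyzer:
<code>
-- Mimic the application's likely connection setting, then run the procedure;
-- if it is suddenly slow here too, the suspect is the plan cached for the
-- application's SET options (or for its sniffed parameter values).
SET ARITHABORT OFF;
EXEC xxxxx;   -- the procedure name as given in the post
</code>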
I observed a strange problem in my production setup. I have a job which updates usage metrics (for reporting) and is scheduled to run once a day. (The job invokes an sp to do this; the sp refers to two tables, say TableA and TableB, to retrieve/update information.)
The job normally takes an average of 25 seconds to complete. All of a sudden, the job execution time increased to 6 minutes and 52 seconds; now the average job execution time is 8 minutes. There has been no table/sp change in the DB.
The only thing I observed is that one of the tables referred to by the sp had 30,000 records added to it on the day the job execution time increased to 6 minutes.
I have updated the statistics on the table, but the execution time remains unchanged. Can anyone suggest possible causes for such a scenario?
I expect a few hints with which I can explore my production DB and find out the causes for the increased execution time of the sp.
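Two cheap things to look at, offered as hints rather than a diagnosis (the sp name is a placeholder):
<code>
-- 1) Throw away the sp's cached plan: a plan compiled before the 30,000
--    rows arrived may simply no longer suit the table.
EXEC sp_recompile 'dbo.UsageMetricsSp';
-- 2) Check whether the inserts fragmented the table badly (SQL 2000 syntax):
DBCC SHOWCONTIG ('TableA');
</code>
If the job is fast again after the recompile, the longer-term fix is usually either creating the sp WITH RECOMPILE or keeping the statistics fresher than autostats manages on its own.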