DB Engine :: Cost Vs Time When Including Actual Query Plan In SSMS
Nov 11, 2015
I have two queries yielding the same result that I wanted to compare for performance. I entered both queries in one Management Studio query window and executed them as one batch with the actual query plan included. Query 1 took 8.2 seconds to complete and the query plan said that its cost was 21% of the batch. Query 2 took 2.3 seconds to complete and the query plan said that its cost was 79% of the batch. The queries were run on my local development machine, I was the only user, and no other programs were running at the time of this test. The results are repeatable. I understand that the query with the lowest cost is not necessarily the fastest query. On the other hand, the difference is quite big: the query with approximately 80% of the cost takes 20% of the time, and the other way around. I have two questions:
Is such a discrepancy normal? Can conclusions be drawn from the cost distribution? For instance, does the query that takes 8.2 seconds but only costs 21% scale better?
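Worth noting: the cost percentages shown in an actual plan are still the optimizer's estimates, so measured figures are a steadier basis for comparison. A minimal sketch of a measured comparison, run in the same query window instead of relying on the plan's percentages:

SET STATISTICS IO ON;   -- report logical reads per statement
SET STATISTICS TIME ON; -- report CPU and elapsed time per statement

-- Query 1 goes here
-- Query 2 goes here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;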
I have a view in SQL Server 2005. It took 30 seconds to finish. Then I deleted 4,500 records from one table that is used in the view, and now it takes 90 seconds to finish. I compared the Actual Execution Plans from before and after the delete; they are almost the same, the only difference being that the Actual Number of Rows became smaller after the delete. So I wonder why less data takes more time. When I look closely at the Actual Execution Plan, the ridiculous thing is that there are only Estimated Operator Costs on each step, no Actual Operator Costs. I guess something is wrong with the optimizer because it reuses the same execution plan, but how can I tell which step is wrong without Actual Operator Costs?
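For what it's worth, SSMS never displays actual operator costs: the per-operator percentages in an actual plan are always estimates. A minimal sketch for ruling out stale statistics and a stale cached plan after a large delete; dbo.MyTable is a placeholder for the affected table:

UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;  -- refresh statistics after the delete
EXEC sp_recompile N'dbo.MyTable';             -- invalidate cached plans that reference the table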
Having a SQL Server 2012 Enterprise (x64) on Windows 2012 R2. We need a reliable way to know the number of processors SQL Server is using at a given time. We already know how many total processors are available to SQL Server by getting that info from sys.dm_os_sys_info.
For instance, a server has 40 processors and we want to know how many of those are being used at a given time. Since the load on the server may not be that high, we would like to know how many processors we can remove while leaving the load unaffected.
After watching the server performance for a while, we predict we may only need 16, but we would like to get some statistics before we reduce it to this number.
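A sketch using sys.dm_os_schedulers, which exposes one row per CPU scheduler; sampling these counts over time gives a picture of how many schedulers are actually busy:

SELECT scheduler_id,
       current_tasks_count,    -- tasks currently assigned to this scheduler
       runnable_tasks_count,   -- tasks waiting for CPU on this scheduler
       active_workers_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';  -- only schedulers mapped to CPUs SQL Server can use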
I am using a stored procedure that is behaving badly: the subtree cost is about 2000 and it takes 3-4 seconds to run, and sometimes it takes over a minute. I have made some optimizations that cause the stored procedure to run generally in under 1 second (at most under 2 seconds), but its subtree cost jumps to 4000! All of this while the server was experiencing similar load (the tests were done within minutes of each other).
I know that the subtree cost is a way to gauge the performance of a query against other queries, but I have typically seen the cost go in the same direction as the execution time (they both go up or they both go down).
How does SQL Server determine the cost (I know it is based on statistics, but I was wondering if anyone had more details)? Is it more important to have a lower subtree cost, or a lower execution time? Am I going to get into trouble later with this high subtree cost?
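Subtree cost is the optimizer's pre-execution estimate of CPU and I/O in abstract units, derived from statistics and cardinality estimates, so it need not track real execution time. A hedged, illustrative sketch for lining the two up across the plan cache (the DMVs and the StatementSubTreeCost attribute are standard; the query itself is only a sketch):

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS est_subtree_cost,
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY est_subtree_cost DESC;

Generally, the measured execution time is what matters to users; the subtree cost mainly drives optimizer decisions such as whether to consider a parallel plan.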
I was tuning a query testing out SARG with these two queries:
select col1 from table1 (nolock) where col1 like '#,%ABC%' or col1 like 'BC,%ABC%'
select col1 from table1 (nolock) where col1 like '%ABC%'
I flushed out the cache, added an index to col1, then ran those two together. Provided below are the actual query plans and the STATISTICS TIME output:
Query 1: Query cost (relative to the batch): 0%
select col1 from table1 (nolock) where col1 like '#,%ABC%' or col1 like 'BC,%ABC%'
SELECT (Cost: 0%) <-- Index Seek on [DB1].[dbo].[table1].[Idx..] (Cost: 100%)

Query 2: Query cost (relative to the batch): 100%
select col1 from table1 (nolock) where col1 like '%ABC%'
SELECT (Cost: 0%) <-- Index Scan on [DB1].[dbo].[table1].[Idx..] (Cost: 100%)
------------------------------------------------ STAT TIME ------------------------------------------------
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 1 ms.
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 7 ms.
As expected, the SARGable Query 1 did a nonclustered index seek and the non-SARGable Query 2 did an index scan instead. According to the query plan, Query 1 consumed 0% of the batch cost whereas Query 2 consumed 100%. When I checked the CPU time, I was a bit confused because Query 1 showed a CPU time of 938 ms whereas Query 2 showed 515 ms. I triple-checked and got similar results every time. I am sure I'm missing something; could someone please tell me what it is? Thanks a bunch!
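The batch cost percentages are estimates, so they can disagree with measured CPU exactly as seen here. For repeatable cold-cache comparisons, a sketch of the usual flush between runs (development servers only, never production):

CHECKPOINT;              -- write dirty pages so DROPCLEANBUFFERS empties the cache fully
DBCC DROPCLEANBUFFERS;   -- start each test from a cold buffer cache
DBCC FREEPROCCACHE;      -- clear cached plans so each query compiles fresh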
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
There is a stored procedure that uses a linked server. As we will be migrating to the Amazon cloud, our architect instructed us not to replace the linked server with OPENQUERY.
I need to build a T-SQL query to return the last unit cost from my table of movement of goods, SL (in a CTE), but the MAX(Datalc) must be less than or equal to my HeaderInvoice date.
This is my script:
With MaxDates as (
    SELECT ref, MAX(epcpond) [Unitcostprice], MAX(datalc) MaxDate
    FROM sl
[code]....
The problem I have right now is that the Unitcostprice from my table of goods movements has a top date greater than the date of my bill.
Example:
invoice date: 29.01.2015
unit cost on invoice line = 13,599722
MaxDate (CTE): 19.03.2015
unit cost from my table of movement of goods = 14,075
That's not correct, because MaxDate > invoice date: the unit cost of 14,075 is the cost on 19.03.2015 and not the cost just before my invoice date.
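A hedged sketch of one way to get the latest unit cost on or before each invoice date, using OUTER APPLY with TOP (1); the HeaderInvoice column names (ref, InvoiceDate) are guesses and will need adjusting to the real schema:

SELECT h.ref,
       h.InvoiceDate,
       lc.Unitcostprice,
       lc.MaxDate
FROM HeaderInvoice AS h
OUTER APPLY (
    SELECT TOP (1)
           s.epcpond AS Unitcostprice,
           s.datalc  AS MaxDate
    FROM sl AS s
    WHERE s.ref = h.ref              -- same goods reference
      AND s.datalc <= h.InvoiceDate  -- only movements on or before the invoice date
    ORDER BY s.datalc DESC           -- latest such movement wins
) AS lc;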
I just installed SQL Server 2014 on a second PC I will be using to VPN into our parent company's domain. When I use Explorer to look at file system folders on servers in the parent domain on which I have permissions, I am prompted correctly for my Windows creds in their domain (which are different from the creds in my originating domain) and I can see those files. But when I open SSMS and try to connect using server name or IP address, I see two problems:
1) SSMS is using my originating domain creds without giving me the opportunity to change them.
2) Whether I use a server name or an IP address, I get error 18452 saying a login from an untrusted domain cannot be used with Windows auth.
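One workaround that often helps in this cross-domain situation, as a hedged sketch: launch SSMS with runas /netonly, so network connections present the parent domain's creds while the process runs locally under your own account. PARENTDOMAIN\yourname is a placeholder, and the SSMS path varies by version and install location:

runas /netonly /user:PARENTDOMAIN\yourname "C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Ssms.exe"

SSMS will still display your local account name in the connection dialog, but Windows auth against servers in the parent domain should then use the /netonly creds.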
Am I right in thinking that if the estimated subtree cost is higher than the cost threshold for parallelism then it will use a parallel plan? If so, I've read the cost threshold is measured in minutes, but is the subtree cost measured in something else, the mysterious cost number? And if so, how are the two compared?
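Both numbers are actually in the same unitless cost units (not minutes): the estimated cost of the serial plan is compared directly against the threshold, and the optimizer only considers a parallel plan when the serial cost exceeds it. A sketch for inspecting the current setting:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism';  -- default is 5 cost units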
SQL 2008 R2... I have a SPID that started yesterday and contained a command to drop a server audit. This did not finish (after 3 hrs) so I killed the SPID. Problem is it's still there (sp_who2), and I think it's now blocking access to viewing/editing logins within SSMS, as I'm getting "Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)" when double-clicking a login. Do I need to restart the instance, given that killing the "alter server" transaction (SPID = 62) is not working? Code to show blocking transactions returns this.
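Two standard checks before resorting to a restart, since the rollback of a killed session can itself take a very long time; a sketch:

KILL 62 WITH STATUSONLY;  -- reports rollback progress and estimated completion for the killed SPID

SELECT session_id, status, command, percent_complete, wait_type
FROM sys.dm_exec_requests
WHERE session_id = 62;    -- percent_complete is populated while a rollback is running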
I've two instances (default and named [dynamicsFINANCE]) running on SQL Server 2014. However, when I try to connect to the named instance (dynamicsFINANCE) using SQL authentication from local SSMS, I get the below error message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

I assigned a static port number, 1450, to the named instance [dynamicsFINANCE]. I also set up a firewall rule to allow access to port 1450.
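Error 26 means the client could not resolve the instance name, which the SQL Browser service handles over UDP 1434; opening TCP 1450 alone does not cover that. A hedged sketch of the two usual fixes, either allowing UDP 1434 through the firewall or bypassing Browser by appending the port (host name is a placeholder):

netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434

-- or, in the SSMS Server name box:
--   YOURHOST\dynamicsFINANCE,1450
--   tcp:YOURHOST,1450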
Note: even the connection string "serverB\serverA,2040" also connects to instance "serverB". When I try to connect from SSMS, SSMS shows the wrong instance name, "serverB\serverB", but it actually connects to instance "serverA".
i. What is the reason behind this?
ii. Is there any way to avoid this connection by changing server properties, if any?
I have built a Sales Forecast model to predict the sales value. Along with making historic predictions for previous time periods I also want to retrieve the actual sales values for those periods.
How can I achieve this in a time series model?
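A hedged DMX sketch with placeholder model and column names: PredictTimeSeries can return historic predictions when given a negative start offset, provided the model was built with the HISTORIC_MODEL_COUNT and HISTORIC_MODEL_GAP algorithm parameters set:

SELECT
    [Region],
    PredictTimeSeries([Sales Amount], -4, 2) AS SalesWithHistory  -- 4 historic steps plus 2 future steps
FROM [Sales Forecast Model];

Comparing those historic predictions against the actual sales values usually means joining back to the relational source (or the data source view) for the same periods.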
I also would like to know how mining models store the data.
Do they store the data in the same table/view format as their respective data source view, or in the Model Content format?
Connecting to a SQL Server instance: I have a SQL Server DB on a server with an instance not running on the default port. Let's say ServerA\InstanceA has the services exposed on port 12345. When I try to connect to ServerA\InstanceA without specifying the port number, I can connect from one client machine (ClientA) but am unable to connect from a different client machine (ClientB). On a side note, when I provide the port number, I can connect from both clients A and B. My question is: when I do not provide the port number, why is one client machine able to connect while the other isn't?
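A hedged way to narrow this down from each client: the instance-name form depends on the SQL Browser service (UDP 1434) or a locally configured client alias, while the port form goes straight to TCP. If the first command fails only from ClientB, suspect UDP 1434 being blocked for that client, or an alias existing on ClientA:

sqlcmd -S ServerA\InstanceA -E     -- resolved via SQL Browser (UDP 1434) or a client alias
sqlcmd -S tcp:ServerA,12345 -E     -- direct TCP, no name resolution needed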
I have 2 different queries which produce the same result. I want to know which query is better and why. The query is used to display the details of the employee who is handling the maximum number of projects.
The queries are the following.
Query A
Code Snippet
SELECT EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails
        ON LUP_EmpProject.Empid = EmployeeDetails.Empid
GROUP BY EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName, LUP_EmpProject.Empid
HAVING COUNT(LUP_EmpProject.Empid) > 0
   AND COUNT(LUP_EmpProject.Empid) = (SELECT MAX(Number_Of_Projects)
                                      FROM (SELECT COUNT(LUP_EmpProject.Empid) Number_Of_Projects
                                            FROM LUP_EmpProject
                                            GROUP BY LUP_EmpProject.Empid) AS sub)
Query B
Code Snippet
SELECT LUP_EmpProject.EmpID,
       EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(*) AS NumberOfProjects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails
        ON LUP_EmpProject.EmpID = EmployeeDetails.EmpID
GROUP BY LUP_EmpProject.EmpID, EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName
HAVING COUNT(*) = (SELECT MAX(Number_Of_Projects)
                   FROM (SELECT COUNT(LUP_EmpProject.Empid) Number_Of_Projects
                         FROM LUP_EmpProject
                         GROUP BY LUP_EmpProject.Empid) AS sub)
I have one maintenance plan for a full backup to run at midnight daily, but somehow another run happens at 11:40 PM which I have no plan for. I can see it happened twice by opening the job history. They all use the same maintenance plan.
The only difference I can see is in the message: one is "The job succeeded. The job was invoked by Schedule 112 (Daily Backup.subplan_1)", while the one I did not expect has the message "The job succeeded. The job was invoked by User sa". How do I find this job that was invoked by user sa and delete it? Again, I can see only one job for the full backup, but I can see it ran twice in the job history.
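A sketch for tracking down the 23:40 run in msdb; the job-outcome rows (step_id = 0) carry the "invoked by" text:

SELECT j.name AS job_name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = h.job_id
WHERE h.step_id = 0
  AND h.message LIKE '%invoked by User sa%';

A run invoked by "User sa" usually means someone or something started the job manually (e.g., via sp_start_job) rather than a second schedule firing.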
How can I eliminate a key lookup from the execution plan for this query?
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT ON
SELECT COUNT(ph.lid) AS Total
FROM PLB ph
WHERE ph.lPhysician = @Physician
  AND ph.BSF = CAST(0 AS bit)
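A hedged sketch of the standard fix: a covering index whose key matches the WHERE clause and which INCLUDEs the counted column, so the query can be answered without touching the clustered index (the index name is a placeholder):

CREATE NONCLUSTERED INDEX IX_PLB_lPhysician_BSF
ON dbo.PLB (lPhysician, BSF)
INCLUDE (lid);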
I’ve been looking at a bug in an application stored proc. This merely inserts 7 million rows from one table into another (empty) table.
INSERT dbo.TradeDataPrimary (Id, SinaiTrade)
SELECT ts.Id, CAST(ts.XmlTrade AS varbinary(MAX))
FROM dbo.TradePrimary ts
Both tables have a clustered index with the same key order (on the first column), but the query optimizer insisted on placing a SORT step in the query plan. This used a lot of tempdb, which I wanted to avoid.
I discovered that the bug was a data type mismatch. Fixing the bug and/or dropping the clustered index from the target table before the insert resulted in the plan I expected, that is, with no sort.
I was hoping it would run faster but, unfortunately, as can be seen from the new plan below, a serial plan is used. This results in the insert taking nearly three times as long as the original.
I’ve been looking at Paul White’s article: [URL] .... and Adam Machanic’s: [URL] ....
Both are excellent articles, but neither approach (i.e., trace flag 8649 or make_parallel()) gives me a parallel plan. Am I missing a trick here? How can I force parallelism? I know there are no features that require a serial zone in the plan, otherwise the plan with the sort would not be parallel.
This is SQL Server 2012 Enterprise 11.0.5522.0, 512 GB of RAM and 16 procs.
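For reference, a sketch of how trace flag 8649 is usually applied per query (undocumented and unsupported, test systems only, requires sysadmin). One possible explanation for what's seen here: on SQL Server 2012 the insert operator itself always runs serially, and only the reading side of an INSERT...SELECT can be parallel (parallel insert with TABLOCK arrived in SQL Server 2016), so no fully parallel plan may be available:

INSERT dbo.TradeDataPrimary (Id, SinaiTrade)
SELECT ts.Id, CAST(ts.XmlTrade AS varbinary(MAX))
FROM dbo.TradePrimary ts
OPTION (QUERYTRACEON 8649);  -- undocumented: biases the optimizer toward parallel plans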
I have a query I have been optimizing. It now runs in about 15 minutes, but I was wondering if there is any way to reduce the SORT cost.
Currently the high costs left are the table insert, at 58%, and the sort, at 36%.
(The inner query below is around 400 million rows and aggregates to around 15,000,000 rows.)
SELECT @1 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -1, GETDATE() - 1) AS DATE)
SELECT @2 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -2, GETDATE() - 1) AS DATE)
SELECT @3 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -3, GETDATE() - 1) AS DATE)
SELECT @4 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -4, GETDATE() - 1) AS DATE)
SELECT @5 = DateKey FROM dbo.DimDate WHERE TheDate = CAST(DATEADD(WEEK, -5, GETDATE() - 1) AS DATE)
I have enabled the query governor on our SQL 2000 SP2 server with a threshold of 3600. Now some of the maintenance jobs fail due to the limit being too low (e.g., one of the user databases' integrity checks fails nightly). I have tried to put the command 'SET QUERY_GOVERNOR_COST_LIMIT 0' just before the line in the step which reads 'EXECUTE master.dbo.xp_sqlmaint N'-Plan etc', but it has no effect. Does anyone know how to get around this situation without using sp_configure to change the query governor settings at a system-wide level? GC.
I need to write my own application to start SQL Server Management Studio. What's more, when SSMS appears, the "SQL Server Maintenance Plan Wizard" is also required to appear. Could I get some suggestions on implementing this scenario? Thanks
On SSMS 9.00.3042.00, any query that runs longer than 10 minutes gets the following error message:

The statement has been terminated.
Msg -2, Level 11, State 0, Line 0
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

I have changed the setting for Execution time-out to 0 in Tools > Options > Query Execution.