Guys,
what I need is a tool which gives details on the choice of an execution plan by SQL Server. For example, the cost for a hash join might be 200 and 100 for a nested loop, and therefore the nested loop is used. The same goes for the access paths for each table/view involved. In Oracle, we turn on event 10053 to see this kind of info.
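For reference (this is not from the original question): the closest supported facility on the SQL Server side that I know of is SHOWPLAN output, which exposes the optimizer's estimated costs for the plan it actually chose, though not for the rejected alternatives. A minimal sketch, with a stand-in query:

SET SHOWPLAN_ALL ON
GO
-- any query of interest; Orders is a stand-in table name here
SELECT * FROM Orders WHERE OrderID = 1
GO
SET SHOWPLAN_ALL OFF
GO
-- The TotalSubtreeCost column in the output gives the estimated cost
-- of each operator's subtree in the chosen plan.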
I tried some tests and I have one doubt: why doesn't the optimizer use a Constant Scan on normal tables? For instance:
Code Snippet
--drop table #tmp
create table #tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into #tmp(name) values(NEWID())
insert into #tmp(name) values(NEWID())
go
set statistics profile on
go
-- Execution plan creates a Constant Scan
select * from #tmp where id = 1 and id = 5
go
set statistics profile off
GO
--drop table tmp
create table tmp (id Int Identity(1,1) Primary key, name VarChar(250))
go
insert into tmp(name) values(NEWID())
insert into tmp(name) values(NEWID())
go
set statistics profile on
-- Why does the execution plan not create a Constant Scan in this case?
select * from tmp where id = 1 and id = 5
go
set statistics profile off
I would greatly appreciate any help with this problem, as I've been digging through every resource I can find looking for a solution with no luck.
I'm going to be monitoring a database for all SQL statements containing INSERT, DELETE, or UPDATE. I'm grabbing the user name, time, and the entire text of the query. I can already do this programmatically, no problem.

The problem lies in this: when I set up a trace on SQL Server 2000 using the system stored procedures sp_trace_create, sp_trace_setfilter, etc., and set the trace to save to a trace file, I find that I must first stop the trace and then close it before I can use fn_trace_gettable to get the information that I want. However, this is undesirable, because this database may be accessed worldwide, and stopping the trace to read the data could cause the trace to miss some users making changes.

Does anyone know how I could get my trace data into a table so that I can just run queries on that table to get my data? It's very important that I not stop the trace to do this. Thanks for your help!

JR Rickerson
Software Engineer
Infinite Software Solutions, Inc.
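One avenue worth sketching (an untested outline, not a confirmed fix): create the trace with the TRACE_FILE_ROLLOVER option, so SQL Server closes each file once it reaches the maximum size and starts a new one; the already-closed rollover files can then be loaded with fn_trace_gettable while the trace itself keeps running. The path C:\Traces\ and the table name trace_data below are placeholders:

DECLARE @TraceID int
DECLARE @maxsize bigint
SET @maxsize = 50                    -- MB per file before rolling to a new one
-- option 2 = TRACE_FILE_ROLLOVER: full files are closed and a new file is opened
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\dml_trace', @maxsize
DECLARE @on bit
SET @on = 1
-- event 12 = SQL:BatchCompleted; columns: 1 = TextData, 11 = LoginName, 14 = StartTime
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 11, @on
EXEC sp_trace_setevent @TraceID, 12, 14, @on
EXEC sp_trace_setstatus @TraceID, 1  -- 1 = start the trace
-- Later, without stopping the trace, load the trace files into a table.
-- (Reading the file the trace is currently writing may fail; the closed
-- rollover files are what this technique relies on.)
SELECT * INTO trace_data FROM ::fn_trace_gettable(N'C:\Traces\dml_trace.trc', DEFAULT)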
I have a SQL command which I run on two separate servers. Both servers are configured and built the same. On server 1 it takes mere seconds, but on server 2 it takes over 5 minutes.
I have checked the execution plan on both servers and they are completely different. I ran UPDATE STATISTICS WITH FULLSCAN on both servers, but the execution plans were still different.
My question is: why are the execution plans so different, and how do I get them to execute with the same plan?
I tried this:

use northwind
go
SELECT OrderDate
FROM Orders
WHERE OrderDate > '19950101'

see the query plan? ok

SELECT OrderDate, EmployeeId
FROM Orders
WHERE OrderDate > '19950101'

see the query plan? what happened? The only way to get an index seek instead of an index scan is to force the index usage (with(index=orderdate)), but I don't like this solution. Also try this:

SELECT *
FROM Orders
WHERE employeeId > 9

and

SELECT *
FROM Orders
WHERE employeeId > 8

Can someone explain why this happens, and how can I overcome the performance-loss problem? (Well, not in the Orders table, but in my table there are 300K records, and making a scan to retrieve 50 records is not exactly what I want.) Thanks to all.
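Not part of the original post, but for what it's worth: the usual non-hint fix for this pattern is a covering index, so the seek alone satisfies the query and no bookmark lookups are needed (the cost of those lookups is what pushes the optimizer toward a scan once EmployeeId is added to the select list). A sketch against Northwind:

CREATE INDEX ix_Orders_OrderDate_EmployeeId ON Orders (OrderDate, EmployeeId)

-- With both columns in the index, an index seek can answer this without lookups:
SELECT OrderDate, EmployeeId
FROM Orders
WHERE OrderDate > '19950101'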
Hi, I need to trace deadlocks. One article mentioned using SQL Server Profiler's Create Trace Wizard to run the "Identify The Cause of a Deadlock" trace for SQL Server 7.0. Is there any way I can do this in SQL Server 2000?
How would I go about tracing UDF performance in profiler? I'd like to specifically know the impact of the UDF without having to dig into the execution plan of the statement containing it. Is this possible?
I am fairly new to SQL Server. I am writing a tool as a stored procedure to identify locks on a table. I have already written the basic framework of the SP. It will reside in the master database and take two inputs: database name and table name. From those it will show all locks, at that instant, on that table of that database. If the table name is omitted, it will show locks on all tables. I am using the syslockinfo and spt_values tables and joining with the SP_WHO procedure to get the table name, user name and session id. Now what I need is to find out which SQL is causing the lock, and since when the lock has been held on the table. Which tables in the master database hold the required information?
TIA.
Ravi
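A sketch of the kind of join involved, for reference (SQL Server 2000 system tables; the lock-mode decoding mirrors what sp_lock itself does). For "which SQL is causing the lock", DBCC INPUTBUFFER is the usual tool, and sysprocesses.last_batch gives an approximation of "since when":

SELECT l.req_spid AS spid,
       DB_NAME(l.rsc_dbid) AS db_name,
       l.rsc_objid AS object_id,     -- resolve with OBJECT_NAME() in that database
       v.name AS lock_mode,
       p.loginame,
       p.last_batch                  -- time of the session's last batch (approximate)
FROM master.dbo.syslockinfo l
JOIN master.dbo.spt_values v
  ON v.type = 'L' AND v.number = l.req_mode + 1
JOIN master.dbo.sysprocesses p
  ON p.spid = l.req_spid
WHERE l.rsc_type = 5                 -- 5 = OBJECT (table-level) locks
-- DBCC INPUTBUFFER(<spid>) shows the last statement that session sent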
Hi,
I want to trace all the selects/deletes/modifies, whatever, on a database that are performed in a separate application. I need to look into this; any ideas?
- Can triggers do this kind of thing?
- Can you somehow access the Profiler via OLE or similar to do this?
- Anything else?
TaF
I am using sp_executesql to get some data, but it is not working. Is there a way to see the actual statement with the substituted variables replaced by their actual values? The statement is assembled from fragments like these:
+ Case @MatchAmount When 1 Then N' and Amount = @BillingAmount ' Else N'' End
+ Case @MatchTicket When 1 Then N' and LTrim(TicketNumber) = STUFF(STUFF(@TicketNumber,Len(@TicketNumber)-@RemoveRight+1,@RemoveRight,''''),1,@RemoveLeft,'''') ' Else N'' End
+ Case @DaysDiff When 0 Then N'' Else N' and DATEDIFF(d,@BillingDate , InvoiceDate) <= @DaysDiff ' End
+ Case @MatchName When 1 Then N' and Left(Name,@CharsToMatch) = Left(@PassengerName, @CharsToMatch) ' Else N'' End ;
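For what it's worth (not from the original post): sp_executesql never textually substitutes parameter values into the statement, so there is no "final" string to retrieve; a common debugging workaround is to PRINT the assembled SQL and the parameter values side by side. A minimal sketch; dbo.Billing is a hypothetical table:

DECLARE @sql nvarchar(4000)
DECLARE @BillingAmount money
SET @BillingAmount = 100.00
SET @sql = N'select * from dbo.Billing where Amount = @BillingAmount'
-- the parameter name stays in the text; print the value separately
PRINT @sql
PRINT '@BillingAmount = ' + CONVERT(varchar(30), @BillingAmount)
EXEC sp_executesql @sql, N'@BillingAmount money', @BillingAmount = @BillingAmount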
I'm looking for an in-depth book, article, FAQ, whatever, regarding the query optimizer...
I've read the books online pretty thoroughly and have been sql coding for a number of years. The system I work on relies heavily on real time access to data and the number crunching procedures we use are a critical part of the design. For the most part, sometimes through trial and error, I have been able to find ways to achieve the performance we need, but I'm often surprised by the methods that prove most effective.
For example, I have cases where I can only get the performance I'm looking for using table functions, and other cases where indexed temporary tables are the only way. I have statements that run fast as a select statement, but when converted to an update statement limp along, forcing me to resort to cursors, temp tables, or table hints with varying degrees of success.
I'm wondering if anyone has come across material that takes an in-depth look at the various technologies available and how to tweak queries. I want to get away from hours of testing and hacking.
Way back when, and at least in version 7 IIRC, the query optimizer gave up when the where clause in a statement contained more than 4 search conditions. Does anyone know if such a limitation still exists in MS SQL 2005? The BOL seems to be silent on the issue.
Boa
I'm very puzzled by the choice of NC index being made by the optimizer in this example. I don't actually think it should use an NC index at all.

I have:
Table: CustomerStatus_T
Single data page
19 records
Clustered index on CustomerStatusID:

CREATE TABLE [CustomerStatus_T] (
[CustomerStatusID] [int] NOT NULL ,
[Name] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Description] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Code] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[CodeAlt] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Ordinal] [int] NULL ,
[Default] [int] NULL ,
[Display] [bit] NOT NULL ,
[StatusType] [varchar] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DateCreated] [smalldatetime] NULL ,
[DateUpdated] [smalldatetime] NULL ,
[DateArchived] [smalldatetime] NULL ,
CONSTRAINT [PK_ROMS_CustomerStatus] PRIMARY KEY CLUSTERED ([CustomerStatusID]) ON [PRIMARY]
) ON [PRIMARY]

If I run the following query, it does exactly what I expect and scans the clustered index:

SELECT customerStatusID, [Name]
FROM CustomerStatus_T
WHERE dateArchived IS NULL
AND Display = 1
AND StatusType = 'Q'

and gives the following QEP and IO statistics:

|--Clustered Index Scan(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[PK_ROMS_CustomerStatus]),
     WHERE:(([CustomerStatus_T].[DateArchived]=NULL AND [CustomerStatus_T].[StatusType]='Q')
     AND Convert([CustomerStatus_T].[Display])=1))

Table 'CustomerStatus_T'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

If I now put a NC index on the statustype column:

create index ix_nci_statustype on customerstatus_t(statustype)

the query plan for the same query changes to:

|--Filter(WHERE:([CustomerStatus_T].[DateArchived]=NULL AND Convert([CustomerStatus_T].[Display])=1))
     |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T]))
          |--Index Seek(OBJECT:([Reach_Roms].[dbo].[CustomerStatus_T].[ix_nci_statustype]), SEEK:([CustomerStatus_T].[StatusType]='Q') ORDERED FORWARD)

Table 'CustomerStatus_T'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0.

For some bizarre reason, the optimizer thinks that a NC index lookup on a single-page table, which ultimately costs 7 IOs, is cheaper than a table (or clustered index) scan of a single page. Why? The showplan cost also shows that it expects the NC index to be cheaper (which is presumably why it goes and uses it), but even after running UPDATE STATISTICS on the table it still chooses the same idiotic query plan.

Any thoughts, or has anyone seen similar behaviour before, and can anyone please explain it to me?

p.s. I don't actually WANT to put a NC index on this table, but I noticed the behaviour by accident, which is why I'm asking the question :-)
There is a bug in one of the service packs where Profiler (7.0) only traces one server (regardless of the server you tell it to trace). Can anyone tell me how to fix this or point me to a KB article? I thought this was fixed in SQL 7 SP3; however, I'm experiencing this problem with SP3 installed.
SQL 7 Profiler has a Failed Login event in the Misc. category. It does not produce any output when a failed login occurs, or at least I cannot get it to. Any hints?
I tried this because every week or so I get this in the error log: Login failed for user 'Admin'. It occurs several hundred times within a minute or so. It obviously has to be an automated process as you couldn't click a button or press a key 13 times a second.
The login does not exist as a SQL login so I can't tell which database it is trying to get at. Any suggestions gratefully received.
Hi there, I hope someone can assist me in tracing the cause of a problem I am experiencing. I have a web server with ASPs querying the SQL Server 6.50.416 database. There is only one user db on this machine, and yet I am running out of user connections (current setting 2000) and memory (128MB RAM). Also, NT repeatedly experiences stack dumps. I have used the PRINTDMP utility to try and trace the cause of the error. The "Input Buffer" section of the stack dump (symptom dump) contains the following:
SELECT FK_SUB_PRODUCT_GROUPING, FK_SUB_PRODUCT_NAME FROM RESOLUTION_PRODUCT_SUB_PRODUCT WHERE FK_PRODUCT_CODE=1072 order by FK_SUB_PRODUCT_GROUPING
Is this SQL statement the cause of the stack dump? Does anyone have any other ideas on what may be causing my problem? Any help would be greatly appreciated.
I have a very simple piece of code (see below) which when executed sometimes takes around 7 minutes and sometimes around 3.5 hours. The difference is that during the 3.5 hours there is a lot of querying of the table being updated, but I don't know how to confirm that this is the case. How can I find out whether my process is waiting (for locks or for any other reason)? Is there a trace or debug facility within the standard Microsoft toolset which I can use?
Regards Colin
Problem code below
==================

print 'Updating stm_brnline - Start time is ' + convert(char(25),getdate(),113)
--
update m
set m.branchpgrade = s.branchgrade
from stm_brnline m, tmp_brngrades s
where m.traddiv = s.traddiv
and m.contcode = s.contcode
and m.merchsect = s.merchsect
and m.branch = s.branchcode
--
print 'Updating stm_brnline - End time is ' + convert(char(25),getdate(),113)
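One way to check for waiting with the standard toolset (a sketch using the SQL Server 2000-era system tables): while the UPDATE runs, query sysprocesses from a second connection; a nonzero blocked column names the spid doing the blocking:

SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master.dbo.sysprocesses
WHERE blocked <> 0
-- sp_who2 gives a similar picture; DBCC INPUTBUFFER(<spid>) shows
-- what the blocking session last executed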
I am running SQL 6.5 and work with traffic and billing software (called NOvar) from another company (Encoda Systems), which does a lot of scheduling, reporting, etc.
I don't know the contents of the tables (100 tables) and their columns, or which tables it queries to produce reports.
Can I create a trace to see the syntax each time something is executed?
I also need to create customized reports. Can this be done with SQL Reporting, or do I need to go with Crystal Reports or something else? I don't know any language except SQL and HTML.
As a newbie to DBA-type tasks, how can I trace who has accessed the server/database? I know there is a SPID in the server log, but what does this represent?
I have a stored procedure that executes some queries on a linked server. It takes so long to complete that my application gets a timeout error. There was no problem until last week. I suspect the remote queries that run on the linked server are taking long. How can I trace the timing of the queries? Any ideas about linked server timeout problems?
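A couple of things worth trying (a sketch with placeholder names, not a diagnosis): time the remote portion in isolation, and check the two timeout settings that govern linked server queries:

SET STATISTICS TIME ON
SELECT COUNT(*) FROM LNKSRV.RemoteDb.dbo.RemoteTable   -- placeholder linked server/table
SET STATISTICS TIME OFF

-- per-linked-server timeout (seconds; 0 = use the instance-wide setting):
EXEC sp_serveroption 'LNKSRV', 'query timeout', '600'
-- instance-wide remote query timeout:
EXEC sp_configure 'remote query timeout', 600
RECONFIGURE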
Is there a DMV or similar in SQL 2012, or SQL 2008, that shows when a statistic was last used by the optimizer? I would like to cleanup some of the auto-generated stats, assuming it's possible to do so. In particular I'm looking to drop those statistics that were created by one-off queries, data loads, etc, and are now doing nothing but adding to the execution time of Update Statistics jobs.
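As far as I know there is no DMV in 2008 or 2012 that records when the optimizer last used a statistic; that information simply isn't exposed. What is easy to list is the auto-created statistics and when each was last updated, which at least narrows the candidates; a sketch:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.auto_created = 1             -- the _WA_Sys_... statistics
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name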
declare @ContactId as integer
set @ContactId = 5
select *
from Person.Contact
where ContactId = @ContactId
OR @ContactId = -1

If you run this in SQL 2005 on the AdventureWorks database, why are the logical reads 56:

Table 'Contact'. Scan count 1, logical reads 56

and not 2, as when you run it without the second OR condition:

declare @ContactId as integer
set @ContactId = 5
select *
from Person.Contact
where ContactId = @ContactId

How can I use the same SP and either get one record returned by passing the ID of the field, or pass a dummy parameter like -1 in order to get ALL the records returned? In this case, even when I pass a parameter like ContactID = 5, there is still a table scan (a clustered index scan in this case) happening for the other OR condition. There's no method to tell SQL to check the first condition, and only if it is false check the second OR condition. On the same topic, does this mean all OR conditions are ALWAYS verified, regardless of whether one of them has already been determined to be True?
Thank you
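Not from the original post, but a common workaround for this "ID or all rows" pattern is to branch so each form gets its own plan; a sketch against AdventureWorks:

IF @ContactId = -1
    SELECT * FROM Person.Contact                               -- intended full scan
ELSE
    SELECT * FROM Person.Contact WHERE ContactID = @ContactId  -- clustered index seek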
I am trying to resolve performance issues in a third-party application. I have run the Profiler and found a transaction that performs a table scan against a 6 million row table. This transaction occurs repeatedly, so I thought: just add an index on the columns in the where clause used here. After adding the index, I looked at the estimated execution plan in Query Analyzer, and I find that it is still performing the table scan. If I run the query, it takes over 60 seconds; if I add an index hint, it runs in under a second. I ran DBCC SHOW_STATISTICS to see if the statistics were up to date:
Statistics for INDEX 'IX_Finish_dept'.

Updated              Rows      Rows Sampled  Steps  Density       Average key length
-------------------- --------- ------------- ------ ------------- ------------------
Jun 26 2007  5:18PM  6832336   6832336       150    2.1415579E-7  18.0
(1 row(s) affected)
All density   Average Length  Columns
------------- --------------- -------------
2.1875491E-7  8.0             finish
1.9796084E-7  18.0            finish, dept
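For reference, the index hint mentioned above looks like the following; the table name and predicate values are hypothetical stand-ins for the third-party query:

SELECT *
FROM dbo.WorkOrders WITH (INDEX = IX_Finish_dept)   -- hypothetical table name
WHERE finish = 'F100' AND dept = 'D20'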