Hi, I want to trace deadlock information, so I am enabling trace flags 1204 and 1205. Is there any difference between setting these trace flags with DBCC TRACEON and setting them at the command prompt by starting SQL Server with the SQLSERVR command? I don't want to bring the server down, and I want the information to be logged to the error log. Any help is greatly appreciated.
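For what it's worth, the usual answer is that DBCC TRACEON with the -1 argument enables a flag globally on the running instance without a restart, while -T on the sqlservr command line sets it at startup; the DBCC version does not survive a restart. A minimal sketch, using the flag numbers from the question:

    -- Enable the deadlock trace flags globally on the running instance (no restart needed).
    -- Note: flags enabled this way are lost the next time the instance restarts.
    DBCC TRACEON (1204, 1205, -1);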
To solve a problem I encountered with restoring from backups in 6.5, I had to install a hotfix and thereafter do the load using trace flag 3282. I need help on using the trace flag (syntax etc.). Also, there is no mention of this particular trace flag in Books Online. Please help.
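The syntax is the same as for any other trace flag; since 3282 is undocumented, follow whatever instructions came with the hotfix. A hedged sketch only, with a placeholder database name and backup path and 6.5-era LOAD syntax:

    -- Turn the flag on for this connection, do the load, then turn it off again.
    DBCC TRACEON (3282)
    LOAD DATABASE MyDb FROM DISK = 'C:\backup\mydb.dat'   -- placeholder name and path
    DBCC TRACEOFF (3282)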
I see only a few trace flags and their descriptions in BOL, but I see a lot of references to various flags like 1211 and so on. Where can I find all the flags and their descriptions?
To get a deadlock victim alert, do we need to turn on the deadlock trace flags, or, if I create an alert, will it fire whenever a deadlock incident happens even with no deadlock trace flag set?
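Not an authoritative answer, but the alert itself is just a SQL Agent alert on error 1205; a minimal sketch is below (the alert name is made up). Keep in mind that Agent alerts only fire for errors that are written to the Windows application log, which is one reason trace flags (or making the message logged) usually come into the picture.

    -- A sketch: define a SQL Agent alert that fires on error 1205 (deadlock victim).
    EXEC msdb.dbo.sp_add_alert
         @name       = N'Deadlock victim detected',   -- hypothetical alert name
         @message_id = 1205,
         @severity   = 0,
         @enabled    = 1;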
We have the following trace flags present at startup in SQL Server 2000:
809 1204 3605 3913
I need to understand whether these are still required in SQL Server 2005 + SP2. I have run the Upgrade Advisor tool, which indicates that the behaviour of some flags has changed and that some other trace flags are no longer applicable. Hence, I want to know about the above-mentioned trace flags.
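One thing that may help while deciding: DBCC TRACESTATUS shows what is actually enabled on the instance, so you can compare before and after the upgrade. A small sketch:

    -- List every trace flag currently enabled globally on the instance.
    DBCC TRACESTATUS (-1);

    -- Or check the specific flags from the startup list.
    DBCC TRACESTATUS (809, 1204, 3605, 3913);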
In one of my views I am having trouble finding where to put it in my existing statement:
USE [pec_prod]
GO
/****** Object: View [dbo].[PEC_Claim_Export_All] Script Date: 8/10/2015 9:18:26 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER VIEW [dbo].[PEC_Claim_Export_All]
[Code] ....
Msg 156, Level 15, State 1, Procedure PEC_Claim_Export_All, Line 56 Incorrect syntax near the keyword 'OPTION'.
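In case it helps, that error usually means an OPTION(...) query hint has been placed inside the view definition, which the syntax does not allow; the OPTION clause has to go on the query that selects from the view. A sketch (the hint shown is only an example):

    -- OPTION(...) cannot appear inside a view; put it on the outer query instead.
    SELECT *
    FROM dbo.PEC_Claim_Export_All
    OPTION (MAXDOP 1);   -- example hint only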
During a new setup of one of our SQL Server 2012 instances:
We had enabled trace flags 1117 and 1118 as a good practice, using DBCC TRACEON(1117, -1) and similarly for 1118.
We have been baselining the server, and it came to our notice that the trace flags are no longer enabled.
Property            Value                                           CaptureDate
DBCC_TRACESTATUS    TF 1117: Status = 1, Global = 1, Session = 0    2015-10-20 00:00:00
DBCC_TRACESTATUS    TF 1118: Status = 1, Global = 1, Session = 0    2015-10-20 00:00:00

After reboot:

Property            Value                                           CaptureDate
DBCC_TRACESTATUS    No trace flags enabled                          2015-10-21 00:00:02.340
What can be the reason? And what can be done to turn them on permanently, if enabling them is actually a good bet?
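DBCC TRACEON(..., -1) only lasts until the next restart, which would explain the behaviour. To make 1117/1118 persistent they are normally added as -T1117;-T1118 startup parameters in SQL Server Configuration Manager. If a T-SQL-only route is preferred, one sketch (the procedure name is made up) is a startup procedure that re-enables them:

    -- A sketch: re-enable the flags automatically at every instance start.
    USE master;
    GO
    CREATE PROCEDURE dbo.usp_EnableTraceFlags    -- hypothetical name
    AS
        DBCC TRACEON (1117, 1118, -1);
    GO
    -- Mark the procedure for automatic execution at startup.
    EXEC sp_procoption @ProcName   = N'usp_EnableTraceFlags',
                       @OptionName = 'startup',
                       @OptionValue = 'on';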
Is the practice of using integer data types to represent multiple values via bit flags bad practice? It seems to go against the rules of normalization, in that a single field can represent multiple values. On the other hand, since these values can be tested for via bitwise operations, perhaps it's not entirely bad.
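For illustration, a minimal sketch of the pattern being described (the flag meanings and values are hypothetical); the normalization objection is that the single value below really encodes three separate facts:

    -- Hypothetical bit flags: 1 = read, 2 = write, 4 = delete.
    DECLARE @Flags int
    SET @Flags = 5   -- read + delete

    -- Bitwise AND tests whether a particular flag is set.
    SELECT CASE WHEN @Flags & 4 = 4 THEN 'delete allowed' ELSE 'delete denied' END AS DeleteRight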
Can someone explain to me how these flags work?
# "Auto Close" flag
# "Auto Create Statistics" flag
# "Auto Shrink" flag
# "Auto Update" flag
OK, I suppose that they close, shrink, update statistics and so on automatically, but... when? Every second? Is it OK to leave them all set to true?
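These are per-database options rather than timed jobs: auto close closes the database when the last user disconnects, auto shrink runs periodically in the background, and auto create/update statistics fire when the optimizer decides statistics are missing or stale. A sketch of inspecting and changing them (the database name is a placeholder):

    -- Check the current settings for a database.
    SELECT DATABASEPROPERTYEX('MyDb', 'IsAutoClose')             AS AutoClose,
           DATABASEPROPERTYEX('MyDb', 'IsAutoCreateStatistics')  AS AutoCreateStats,
           DATABASEPROPERTYEX('MyDb', 'IsAutoShrink')            AS AutoShrink,
           DATABASEPROPERTYEX('MyDb', 'IsAutoUpdateStatistics')  AS AutoUpdateStats;

    -- Auto shrink in particular is usually left off on busy systems.
    ALTER DATABASE MyDb SET AUTO_SHRINK OFF;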
I have a scenario where I have 3 columns, and all 3 of them are used in the WHERE clauses of simple queries or of queries having joins.
TABLE (
    Column1 int,
    FLAG1 bit,
    FLAG2 bit
)
Sample queries :
Select * from TABLE where FLAG1 = 1 and FLAG2 = 0
-- (any combination of these flags)

Select * from TABLE
inner join SOMEOTHERTABLE on TABLE.Column1 = SOMEOTHERTABLE.Column1
where FLAG1 = 1 and FLAG2 = 0
-- (any join and combination of flags)
Questions :
What would be the best nonclustered index strategy:
Column1 as the index key with FLAG1 and FLAG2 included, or Column1, FLAG1 and FLAG2 all in the index key? (Both options are sketched after this question.)
Points to note :
The queries are part of an ETL process and are used to track new records vs. old records. The flags switch states within the same job, so if we create an index on all 3 columns, the index has to be reorganized more than once based on the flag states. If we keep the flags in the include list, then it is only about updating the leaf data with the latest flag values.
On the other hand, an index keyed on all 3 columns will result in an index seek alone, whereas with the included-column version there will be an index seek plus a residual predicate.
Does the predicate cause more overhead than reorganizing the index, or is it the opposite?
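For reference, the two candidate indexes sketched out (object names come from the placeholder schema above):

    -- Option A: flags only in the INCLUDE list; they live only at the leaf level,
    -- so flipping a flag touches only the leaf rows.
    CREATE NONCLUSTERED INDEX IX_Table_Column1_IncludeFlags
        ON dbo.[TABLE] (Column1)
        INCLUDE (FLAG1, FLAG2);

    -- Option B: flags as key columns; the flag values participate in the seek,
    -- but changing a flag can move the row within the index.
    CREATE NONCLUSTERED INDEX IX_Table_Column1_Flags
        ON dbo.[TABLE] (Column1, FLAG1, FLAG2);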
I wanted to know which of these options is better and why, or whether there are scenarios where we would opt for one over the other:
a) flags passed from code to control the execution of queries within a stored procedure, i.e. the queries within a single stored procedure are controlled by flags passed to them, or
b) break the individual queries into separate stored procedures.
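A tiny sketch of option (a) for concreteness (procedure, table and column names are invented); the usual trade-off is that branching on a flag inside one procedure can hurt plan reuse, which is the common argument for option (b):

    -- Option (a): one procedure whose behaviour is switched by a flag parameter.
    CREATE PROCEDURE dbo.usp_GetOrders      -- hypothetical
        @IncludeClosed bit = 0
    AS
    BEGIN
        IF @IncludeClosed = 1
            SELECT OrderId, Status FROM dbo.Orders;
        ELSE
            SELECT OrderId, Status FROM dbo.Orders WHERE Status <> 'Closed';
    END;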
I have a data output with many rows. In order to group things with flags, I do this in Excel using 2 formulas which put a flag of 0 or 1 in 2 new columns.
This takes a long, long time as I have hundreds of thousands of rows, and I wondered if I could do it in SQL instead?
It's Transact-SQL, and the formulas I use in Excel are:
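Since the Excel formulas themselves aren't shown, this is only a generic sketch of how 0/1 flag columns are typically derived in T-SQL with CASE expressions (the table and column names are made up):

    -- Derive two flag columns from existing data in a single set-based pass.
    SELECT d.*,
           CASE WHEN d.Amount > 0           THEN 1 ELSE 0 END AS Flag1,
           CASE WHEN d.Category = 'Grouped' THEN 1 ELSE 0 END AS Flag2
    FROM dbo.MyData AS d;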
What happens when you add the Ignore Case flag into the mix?
I'm having a hell of a time. I'm dealing with an SCD situation using the TableDifference component, and I have both existing dimensions and new data coming in, each going through an identical case-insensitive sort with remove duplicates, but I'm getting identical new and deleted records detected, I think because of ordering issues. I'm still trying to whittle the test case down, but data from all around the records I'm investigating seems to get sorted in between them, so I'm having trouble building a small test case.
I think the mixed-case data is the root of the problem, and I think the design is bad, but before I go back to the technical lead, I need to understand enough to show that you cannot take two pipelines sorted and de-duplicated case-insensitively and then do a case-sensitive table difference operation.
If I don't use a trace to record all the SQL transactions, and someone executes a DELETE command against one of the tables (of course the person has high enough permissions to do that), is there any way I can find out the user ID of whoever ran the delete? Thanks.
I'm trying to debug a vendor package and would like to turn on a JDBC trace (either client- or server-side). The only information I found is DBCC TRACE, which seems not very useful (I don't even know where the trace output is located). Any help is appreciated!
Env: Windows Server 2003 & SQL Server 2000 8.00.818.
How do I trace or find out who has dropped a database from my QA environment? Unfortunately we hadn't enabled a trace on this server, and we haven't found any useful information in the SQL Server logs either.
Can anyone tell me how to find out who dropped the DB, and when? Is there any query/SP/command/tool available?
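One avenue worth checking, if this is SQL Server 2005 or later: the default trace is enabled out of the box and records object and database drops, provided its rollover files haven't aged out yet. A sketch:

    -- Read the default trace and look for drop events (SQL Server 2005+ only).
    SELECT t.StartTime, t.LoginName, t.HostName, t.ApplicationName,
           t.DatabaseName, te.name AS EventName
    FROM sys.traces AS st
    CROSS APPLY fn_trace_gettable(st.[path], DEFAULT) AS t
    JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
    WHERE st.is_default = 1
      AND te.name = 'Object:Deleted'
    ORDER BY t.StartTime DESC;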
-- Prepare sample data
DECLARE @tbl1 TABLE (box varchar(10), loc varchar(5))

INSERT @tbl1
SELECT 'P1', 'aa' UNION ALL
SELECT 'P1', 'bb' UNION ALL
SELECT 'P1', 'aa' UNION ALL
SELECT 'P3', 'cc'

DECLARE @tbl2 TABLE (box varchar(10), loc varchar(5))

INSERT @tbl2
SELECT 'P1', 'aa' UNION ALL
SELECT 'P3', 'cc'
-- Expected result
SELECT 'P1' as box, 'aa' as Location, 'aa' as HeaderLoc UNION ALL
SELECT 'P1' as box, 'bb' as Location, 'aa' as HeaderLoc

How do I find, from @tbl1, a box that has more than 1 location? In this case P1 has 2 distinct locations (aa and bb). Then I want to left join on the loc from @tbl2 just to retrieve the header loc for that box.
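One approach that seems to give the expected result (a sketch against the sample data above, run in the same batch as the table variables): group @tbl1 to find boxes with more than one distinct location, then join back for the detail rows and left join @tbl2 for the header location.

    SELECT DISTINCT t1.box,
           t1.loc AS Location,
           t2.loc AS HeaderLoc
    FROM @tbl1 AS t1
    LEFT JOIN @tbl2 AS t2 ON t2.box = t1.box
    WHERE t1.box IN (SELECT box
                     FROM @tbl1
                     GROUP BY box
                     HAVING COUNT(DISTINCT loc) > 1);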
Hello: I am working with SQL Server 2000. I have a stored procedure that creates 3 temporary tables (#temp1, #temp2, #temp3). When I view the trace of the stored procedure, I see entries for Object Created #temp1, Object Created #temp2 and Object Created #temp3. How can I fix the trace so it does not report objects that get created in tempdb? Any help will be appreciated. Alee
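A hedged sketch, assuming a server-side trace created with sp_trace_create and assuming the Object Created events for temp tables report tempdb (database id 2): filter that database id out so they stop showing up. In Profiler the equivalent is a column filter of DatabaseID not equal to 2.

    DECLARE @TraceID int
    SET @TraceID = 1   -- id of your existing server-side trace (placeholder)

    -- Column 3 = DatabaseID; logical operator 0 = AND; comparison operator 1 = "not equal"; value 2 = tempdb.
    EXEC sp_trace_setfilter @TraceID, 3, 0, 1, 2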
Hi, I'm trying to use trace flag 1204 to get some detailed deadlock information. In EM, I add the startup parameter -T1204 and then stop and start the server. I run two jobs that I have set up to deadlock, and they do, but no information about the deadlock appears in my SQL error log. Does anyone know what I am doing wrong? Thanks.
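Not a definitive fix, but the combination commonly suggested is to also enable trace flag 3605, which redirects trace output to the error log; a sketch, either as an extra startup parameter (-T3605) or on the running instance:

    -- Enable deadlock reporting (1204) and route the output to the error log (3605), globally.
    DBCC TRACEON (1204, 3605, -1);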
I was wondering how much overhead running this trace flag (1204) would put on the system. Will my performance degrade significantly? If yes, by what percentage?
The trace flag is 1204, to keep a watch on deadlocking.
I am running this command: C:\mssql7\binn\sqlservr -T1204 /dc:\mssql7\data\master.mdf
How much performance degradation are we talking about here? The application is PeopleSoft and the database size is about 10 GB.
Suppose a third person modifies the structure of a table or makes some modifications to a certain column. How will the SQL Server DBA find out who has done this? Will these modifications be stored in the system tables? That is my question. Can anybody give a solution to this, please? Thanks. Srinivasan.
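After the fact this is hard without some auditing already in place, but going forward, if this is SQL Server 2005 or later, a DDL trigger can record who altered which table. A minimal sketch (the audit table and trigger names are made up; EVENTDATA() carries the login and the statement):

    -- A sketch (SQL Server 2005+): log every ALTER TABLE in this database.
    CREATE TABLE dbo.DdlAudit            -- hypothetical audit table
    (
        EventTime datetime NOT NULL DEFAULT GETDATE(),
        LoginName sysname  NOT NULL,
        EventXml  xml      NOT NULL
    );
    GO
    CREATE TRIGGER trg_AuditAlterTable
    ON DATABASE
    FOR ALTER_TABLE
    AS
        INSERT dbo.DdlAudit (LoginName, EventXml)
        VALUES (ORIGINAL_LOGIN(), EVENTDATA());
    GO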
Hello, I have to create a trace to monitor how the stored procedures are performing. The SPs do updates, selects and inserts. Which counters should I monitor? The SPs are called from Java applications through a WebLogic server, and some of them are taking as long as 30 seconds to fetch 80 records! Any help/thoughts appreciated.
One of my SQL servers (v6.5 with SP5a) has begun to display 'DBCC TRACEON 208' in the error log. I have not been able to find any reference to trace flag 208. Can someone please help me?
I've just joined a project to improve performance on MS SQL Server 6.5. I was a Sybase DBA a couple of years ago, and I am unsure whether MS SQL Server works in the same way as Sybase. I want to change some configuration options (sp_compile) and modify the table and index definitions to improve performance and avoid deadlocks.
Is it true that when a field which allows NULL (default value NULL) is a variable-length field and you execute an update on it, you get 2 operations: first a delete and then an insert? (On Sybase there is the concept of index covering to avoid that overhead; what about MS SQL Server 6.5?)
To avoid deadlocks I want to use FILLFACTOR. I think this option is useful only on clustered indexes; is that right?
For huge tables the order of fields is important: in the first position put the primary key field, then the FK and secondary key fields, then the short fixed-length fields (tinyint, int, datetime, ...), then the alpha (char) NOT NULL fields, and last the NULL fields. Is that design efficient for MS SQL Server?
Is there any (easy!) way in which I can see the SQL that has been executed? I'm using stored procs that create and execute other stored procs (via sp_executesql). At the early stages there are often trivial errors in the created procs that cause rather general exception messages that do not give much of a clue as to where the error is (it's tedious to find these in the debugger), and I was wondering if there's any trace output option I can turn on to see what SQL has been presented for execution.
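One low-tech option, if it fits, is to have the generating procedure PRINT (or log) the dynamic SQL just before executing it, behind a debug switch; a sketch with a hypothetical @Debug variable. Profiler's SP:StmtStarting/SP:StmtCompleted events are the trace-based alternative.

    DECLARE @sql nvarchar(4000), @Debug bit
    SET @sql = N'SELECT 1 AS Demo;'     -- in practice this is built dynamically
    SET @Debug = 1                      -- hypothetical debug switch

    IF @Debug = 1
        PRINT @sql                      -- see exactly what is about to be executed

    EXEC sp_executesql @sql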