SQL 2012 :: Impact Of Trace Flag 272 On Server Apart From Disabling Identity Jump
Sep 5, 2013
I am upgrading my application's SQL Server from 2008 R2 to 2012.
As discussed in the URL below, I can see the identity jump after the upgrade once the server is restarted.
Since I cannot afford this, and at the moment I do not have time to create a sequence with NOCACHE and test it again, I have to go ahead and add trace flag 272 as a startup parameter, since this is the only solution I can implement and also roll back without much hassle.
[URL] ....
From what I have learned, this flag disables the new IDENTITY behavior introduced in SQL Server 2012 and makes IDENTITY work as it did in SQL Server 2008 R2.
But I want to know whether enabling this flag would impact any other feature or the performance of SQL Server (apart from the performance of IDENTITY).
After a SQL Server service restart, a column set to auto-increment jumped by 1000. To fix the issue, I added the T272 trace flag to the SQL startup parameters. However, the column was not reseeded after the service restart; it still shows the 1000 jump. Am I doing something wrong?
Below is the error log output showing the flag being added:
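For what it's worth, -T272 only prevents future identity jumps; it does not undo a jump that has already happened. A minimal sketch of closing an existing gap manually, assuming a hypothetical table dbo.Orders whose identity column jumped:

-- Report the current identity value and the current maximum column value
DBCC CHECKIDENT ('dbo.Orders', NORESEED);

-- Reseed so the next insert continues from the real maximum (here 1000)
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000);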
After carefully analyzing the situation for almost a month, I pulled the trigger and made an exception at work: we enabled trace flag 2371. We have some tables with billions of rows whose outdated statistics were causing horrible plans. I tried several methods to update them, but they either did not solve the problem or were too CPU intensive, causing other issues.
Anyway, one of the side effects I am seeing so far is that average vCPU usage went down by almost 40%. Nothing out of the usual (besides the flag) has been enabled or executed, so my assumption is that the CPU-hungry plans are now gone or reduced.
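For anyone trying the same thing, a quick sketch of enabling and verifying the flag without a restart (it can also be added as a -T2371 startup parameter so it survives restarts):

-- Enable trace flag 2371 (dynamic statistics update threshold) globally
DBCC TRACEON (2371, -1);

-- Verify which trace flags are currently active
DBCC TRACESTATUS (-1);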
Using SQL Server 2000 Standard Edition, I was bitten by the bug described in KB 818671 and KB 289149: the query optimizer's Hash Match Team operators would sometimes fail. I added -T8679 to the SQL Server startup parameters.
Now that I'm upgrading to SQL Server 2005, is this trace flag still required?
I see that "this was fixed in SQL 2000, SP1." However, I would like more precise confirmation that this flag is no longer needed in SQL 2005. Sometimes no news is not necessarily good news.
The error is intermittent, and at least partially dependent on data conditions not available to me for exhaustive regression testing (or else of course I would do that).
Hi, I'm trying to use trace flag 1204 to get some detailed deadlock information. In Enterprise Manager I add the startup parameter -T1204, and then I stop and start the server. I run two jobs that I have set up to deadlock, and they do, but no information about the deadlock appears in my SQL error log. Does anyone know what I am doing wrong? Thanks.
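One thing worth checking: a startup -T parameter takes effect only after a full service restart, and DBCC TRACEON without the -1 argument enables the flag only for the current session. A quick way to turn 1204 on globally and confirm it is active:

-- Report deadlock details to the error log for all sessions
DBCC TRACEON (1204, -1);

-- Status = 1 and Global = 1 means the flag is really on
DBCC TRACESTATUS (1204, -1);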
In order to use Microsoft Dynamics NAV with SQL Server 2005, I need to set trace flag 4616, and I know there is a setting somewhere where I can add this flag number, but I can't remember where. Please help?
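One common answer: add -T4616 to the service's startup parameters in SQL Server Configuration Manager (the same place the other posts here add their -T flags), or set it immediately without a restart:

-- Enable trace flag 4616 globally (the Dynamics NAV requirement on SQL 2005)
DBCC TRACEON (4616, -1);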
I am using SQL Server 2005 SP2 and trying to configure mirroring. The issue is that after configuring mirroring, when I press the START MIRRORING button, I get this error:
Database mirroring is disabled by default. Database mirroring is currently provided for evaluation purposes only and is not to be used in production environments. To enable database mirroring for evaluation purposes, use trace flag 1400 during startup. For more information about trace flags and startup options, see SQL Server Books Online. (Microsoft SQL Server, Error: 1498)
I used DBCC TRACEON (1400), and I also altered the startup parameters in SQL Server Configuration Manager, adding ;-T1400.
But I am still getting the error above.
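Two things might be worth checking here, since DBCC TRACEON (1400) without -1 only affects the current session and a startup parameter needs a full service restart:

-- Enable the mirroring evaluation flag for all sessions
DBCC TRACEON (1400, -1);

-- Status = 1 and Global = 1 means the flag is actually in effect
DBCC TRACESTATUS (1400, -1);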
I know that several hotfixes need trace flag 4199 turned on to activate them. But after reading an article on that trace flag, given the way it was worded, I couldn't tell whether you still need the trace flag once the hotfix is incorporated into a service pack.
I have a request where I would like to get the start date/time and end date/time and flag (with an int) which hours (24-hour clock) have values between the two dates. For example, a car comes into service on 2013-12-25 at 0800 and leaves on 2013-12-25 at 1400; the difference is 6 hours, and I need my table to show that.
As I'm working away at it, I'm trying to figure out how I could use a time dimension table for this but don't really see much. So far I have the difference between the two times in hours (hour_diff) and the start hour (min_hour), so I would like to do something where I update the first hour (min_hour) and then update columns based on the number of hours (hour_diff); see the sketch after this.
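Rather than updating one hour column at a time, a sketch that derives a flag for each of the 24 clock hours from the two timestamps (the variable names are stand-ins for your columns, and it assumes start and end fall on the same day):

DECLARE @start datetime = '2013-12-25 08:00', @end datetime = '2013-12-25 14:00';

-- Generate hours 0-23 and flag the ones inside the service window
SELECT h.hr,
       CASE WHEN h.hr >= DATEPART(HOUR, @start)
             AND h.hr <  DATEPART(HOUR, @end)
            THEN 1 ELSE 0 END AS in_service
FROM (SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS hr
      FROM sys.all_objects) AS h;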
I want to join two tables, table a and table b, where b is a lookup table, using a left outer join. My question is: how can I generate a flag that shows whether or not each row matched the join condition?
In lookup table b, the id and country columns are always non-null, and both of them are the keys for joining to table a. This is because the same id and country can have multiple rows in table a due to the update date and posting date fields.
example table a
id  country   area
1   China     Asia
2   Thailand  Asia
3   Jamaica   SouthAmerica
4   Japan     Asia

example table b
id  country   area
1   China     Asia
2   Thailand  SouthEastAsia
3   Jamaica   SouthAmerica
5   USA       America

Expected output
id  country   area           Match
1   China     Asia           Y
2   Thailand  SouthEastAsia  Y
3   Jamaica   SouthAmerica   Y
4   Japan     Asia           N
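A sketch of the join, deriving the flag from whether a lookup row was found (table names are placeholders; b.id can safely signal a match because it is never null in b):

SELECT a.id,
       a.country,
       COALESCE(b.area, a.area) AS area,  -- take b's area when matched
       CASE WHEN b.id IS NOT NULL THEN 'Y' ELSE 'N' END AS Match
FROM   table_a AS a
LEFT OUTER JOIN table_b AS b
       ON  a.id = b.id
       AND a.country = b.country;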
Somehow someone turned on an audit on the SQL server, and it is filling up our hard drive and eventually shutting down SQL Server. I've been trying to google how to shut this audit off but have come up with no viable solution yet. How can I turn this trace off? Each file is named AuditTrace plus a date, and they appear every other minute. I went into SQL Profiler and can pull up the files, but it does not say how to shut the trace off.
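If those files are being written by a server-side trace, a sketch of finding and stopping it (the trace id of 2 below is only an example; use whatever id the first query returns for the AuditTrace path, and note that id 1 is normally the built-in default trace):

-- List running traces and the file each one writes to
SELECT id, path, max_size, status FROM sys.traces;

-- Stop the offending trace, then close and delete its definition
EXEC sp_trace_setstatus @traceid = 2, @status = 0;
EXEC sp_trace_setstatus @traceid = 2, @status = 2;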
I am attempting to create a new trace but I get the following error message: "failed to start a new trace".
I have been doing some digging, and as I understand it, I had to find the directory Profiler uses for temporary files. So I typed "SET TMP" in a command window and received the following reply:
C:\Users\Ross\AppData\Local\Temp
Now, according to the forum: [URL] ...
I am supposed to check that the system folder pointed to by the TMP environment variable exists and is not crammed with files.
Well, when I went to the directory C:\Users\Ross\AppData\Local\Temp, it is indeed full of both files and directories. The size is 16.3 MB, with 133 files and 63 folders.
When I had a look at the Environment Variables window and chose TMP, the value was "%USERPROFILE%\AppData\Local\Temp", which, according to my limited understanding, is equivalent to C:\Users\Ross\AppData\Local\Temp.
So what I am wondering is: am I supposed to totally clear out this directory? I am not too keen on doing this because I don't want to stuff my PC up.
Set up a trace with the events RPC:Completed, SQL:BatchCompleted, SQL:BatchStarting, and SQL:StmtCompleted.
When I issue the statement SELECT * FROM XyzView, nothing is captured in Profiler. If I script out the view and then execute the SELECT statement that defines it, it does show up in Profiler.
I've tried adding a lot of the other events, e.g. SP:StmtCompleted and the various other StmtStarting events, and the trace still does not capture anything.
Am I capturing the wrong events, or is this known behavior? My goal is to see what the overhead is of using a view versus persisting the view's results as a table and referencing that instead. The view in question is against static data, joins 9 tables, and is referenced a lot.
I can use the stats generated when I execute the SELECT that defines the view, but I still find this to be curious behavior, so I assume I'm doing something wrong.
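For the overhead question specifically, Profiler aside, comparing the two forms directly may be simpler. A sketch, where XyzTable is a hypothetical table persisting the view's output:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM XyzView;   -- expanded inline into the 9-table join
SELECT * FROM XyzTable;  -- hypothetical persisted copy of the view's results

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;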
I have a stored procedure, and I altered it by adding some logic. How can I test whether altering the stored procedure has any functional impact? How can I prove to the team that the modifications don't impact any functionality?
What is the impact on users of dropping an index on a table while the table is in use? I will recreate the index afterwards. The table is used constantly by three processes/users at all times.
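While the index is gone, queries that depended on it will fall back to scans, and recreating it takes locks on the table. If you are on Enterprise Edition, one hedged alternative is rebuilding in place so there is no window without the index (index, table, and column names here are placeholders):

-- Recreate over the existing index; ONLINE = ON keeps the table available
CREATE INDEX IX_MyIndex ON dbo.MyTable (Col1)
WITH (DROP_EXISTING = ON, ONLINE = ON);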
I have been investigating the number of connections, active and inactive, to a certain database server, and I have stumbled across an application which seems not to be clearing its database connections. For one instance of a client there were >70 SQL connections, which resulted from closing and reopening a single screen in the culprit app. Once the application was closed, all of the connections were recycled, but it's evident that the application itself is not correctly reusing its already open connections.
I have raised the point with the main programmer that we need to investigate how the application is managing, or not managing, its ADO.NET connections to SQL.
I am starting with doing some reading here (URL...) and I was hoping to get some more information about the possible impact of excessive SQL connections on the SQL Server itself. Our organization is quite lucky in that our SQL Servers are over-specced given their workload; bearing that in mind, I would like to dig a bit deeper and get some stats to highlight the scope of the issue to management and the programmers. Our SQL Server peaks at 6500 processes, and a good 70% of those are due to this application's mismanagement of its SQL connections.
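To put numbers behind it, a sketch that counts current sessions per application and host (sys.dm_exec_sessions exists from SQL Server 2005 onwards):

-- Show which applications/hosts hold the most sessions right now
SELECT program_name,
       host_name,
       COUNT(*) AS session_count
FROM   sys.dm_exec_sessions
WHERE  is_user_process = 1
GROUP  BY program_name, host_name
ORDER  BY session_count DESC;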
I'm trying to improve the loading of some tables with large amounts of data as part of an ETL. I was going to try removing the indexes before inserting to speed up the process, but I had some questions about whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable only the non-clustered indexes?
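Disabling the clustered index makes the whole table inaccessible (the clustered index is the data itself), so only the non-clustered indexes should be disabled. A sketch, with hypothetical index and table names:

-- Disable each non-clustered index before the load; the clustered stays enabled
ALTER INDEX IX_MyTable_NC1 ON dbo.MyTable DISABLE;
ALTER INDEX IX_MyTable_NC2 ON dbo.MyTable DISABLE;

-- ... bulk load here ...

-- REBUILD re-enables the disabled indexes in one pass
ALTER INDEX ALL ON dbo.MyTable REBUILD;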
Here's my predicament: I changed the filter on an article within a subscription, but not in a meaningful way. I just added "database.dbo." before a column name, but now my subscription is flagged for reinitialization and won't replicate any transactions unless I start over with a new snapshot. I don't want that to happen. How can I reset the article properties to get it back the way it was and continue replicating transactions?
(I needed to add "database.dbo" so that another job that looks at the filter info would point to the correct database. Next time maybe I'll just modify the table that stores the filter info)
The reason I don't want to snapshot from the beginning is that I have procedures on the subscribing server that fire triggers from the transactions, so all the jobs "downstream" would get all out of sync.
We have a typical issue with a columnstore index. We have a procedure which does two activities, a switch and a reverse switch:
Switch:
1. Fetch the partitions that need to be switched
2. Switch the data from the main table to the switch table
3. Disable the columnstore index on the switch table
SSIS package:
4. Load data into the switch table (insert/update)
Reverse switch:
5. Enable the columnstore index on the switch table
6. Switch the data back from the switch table to the main table
Issue: sometimes the columnstore index does not get disabled, and the package fails, complaining that the columnstore index must be disabled before loading data.
If we re-run the procedure, the columnstore index gets disabled.
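One defensive option is to verify the index state before the load and retry the disable if it did not stick. A sketch, with the table and index names as placeholders:

-- Disable the columnstore index only if sys.indexes still shows it enabled
IF EXISTS (SELECT 1 FROM sys.indexes
           WHERE object_id = OBJECT_ID('dbo.SwitchTable')
             AND name = 'CSI_SwitchTable'
             AND is_disabled = 0)
BEGIN
    ALTER INDEX CSI_SwitchTable ON dbo.SwitchTable DISABLE;
END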
I've started using a SEQUENCE in a table instead of an identity.
I seem to be experiencing a problem where the sequence periodically gets reset to a lower value. Inserting into the table works, producing the next bigint in the sequence as the primary key, for days, and then all of a sudden duplicate primary key errors show up. When I check, the last primary key value in the table is higher than the current value of the sequence.
For example: right now I have primary key values 6000 through 7032 contiguously in the table, all of which were generated with the sequence. Suddenly I'm getting duplicate primary key errors. A quick check of the sequence shows it's at 7002, but the last inserted row has a primary key of 7032!
I'm populating this table in one place (in the application layer), leaving the primary key null, which allows the default constraint to fetch the next sequence value.
When the problem shows up, I reset the sequence to the higher number, and all is well for many days; then the problem occurs again.
The definition for the sequence is:
CREATE SEQUENCE [dbo].[IntegrationQueueSEQ] AS [bigint] START WITH 1 INCREMENT BY 1 MINVALUE 0 MAXVALUE 9223372036854775807 CYCLE CACHE 50
The default constraint for the primary key on the table is defined as:
ALTER TABLE [dbo].[IntegrationQueue] ADD CONSTRAINT [DF_IntegrationQueue_IntegrationQueueID] DEFAULT (NEXT VALUE FOR [dbo].[IntegrationQueueSEQ]) FOR [IntegrationQueueID]
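Two properties of that definition are worth questioning while troubleshooting: CYCLE deliberately restarts the sequence at MINVALUE 0 once it wraps, and CACHE 50 keeps up to 50 unissued values only in memory. A hedged sketch of making the sequence as conservative as possible to isolate the cause:

-- Remove caching and cycling while investigating the backwards jumps
ALTER SEQUENCE [dbo].[IntegrationQueueSEQ] NO CACHE NO CYCLE;

-- Check the sequence's current value as recorded in metadata
SELECT current_value FROM sys.sequences WHERE name = 'IntegrationQueueSEQ';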
Can a primary key column also be an identity column? The reason I ask is that I have created a table, and each time I insert data into the Address table I am also inserting the AddressID. How do I get the primary key (the AddressID column) to self-generate ID values?
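Yes, a primary key can also be an identity column; it is the most common pattern. A minimal sketch for an Address table (all columns besides the key are made up):

CREATE TABLE dbo.Address (
    AddressID int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- self-generating key
    Street    varchar(100) NOT NULL,
    City      varchar(50)  NOT NULL
);

-- Omit AddressID from the insert and SQL Server generates it
INSERT INTO dbo.Address (Street, City) VALUES ('1 Main St', 'Springfield');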
So for years I was using int identity(1,1) as the primary key for all the tables I created, and then in this project I decided, you know, I like uniqueidentifier using newsequentialid() to ensure a distinctly unique primary key.
Then, after working with the PHP sqlsrv driver, I realized: huh, no matter what, I am unable to retrieve the scope_identity() of the insert.
So of course I cruised back to SSMS and realized, crap, I can't even make the uniqueidentifier an identity.
So now I'm wondering 2 things...
1: Can I take a shortcut and pull the uniqueidentifier of a newly inserted record, even though scope_identity() will return null? Or 2: do I now have to keep the uniqueidentifier (as all my tables are related through it) and also add a column to each table, a PK field as an int identity(1,1) primary key, in order to be able to pull the scope_identity() on insert?
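On option 1: the OUTPUT clause can return the generated newsequentialid() value straight from the INSERT, and the sqlsrv driver can read it like any result set, so no identity column is required. A sketch with made-up names:

-- Table keyed on a uniqueidentifier with a newsequentialid() default
CREATE TABLE dbo.Widget (
    WidgetID uniqueidentifier NOT NULL
        CONSTRAINT DF_Widget_ID DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
    Name varchar(50) NOT NULL
);

-- OUTPUT returns the new key as a result set the driver can fetch
INSERT INTO dbo.Widget (Name)
OUTPUT inserted.WidgetID
VALUES ('gadget');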
I am in the middle of capturing a workload to try to tune a SQL instance, and I was wondering what sizes of traces people capture. I am only one day into the capture, I believe a typical workload would need a week-long capture, and I am already at 10GB of files. I am only capturing rpc_completed and sql_batch_completed.
What sizes of workloads do other people capture, and where do you analyse them? Do you have a dedicated server for this kind of thing? At present I am looking to use my local PC. Also, what rollover file sizes do people tend to use? I am currently using 1GB.
I'm wondering what the ideal CPU count and max degree of parallelism are for a 3rd-party database server. The server has 12 CPUs and 32GB RAM, and all the database sizes add up to < 30GB, so they can all fit in memory (I tried to force this by doing a SELECT * from every table). On certain payroll days, the CPU maxes out at 100% for a few seconds.
MAXDOP was originally set to the default of 0. We later changed it to 8 based on several 'best practices' articles. However, the vendor suggests changing it to 1 (no parallelism), while others suggest changing it to 4 so that one runaway query doesn't hog most of the CPUs.
I'd like to find out how many CPUs are actually being used by queries. There is a Degree of Parallelism event described at URL.... Its BinaryData column is documented as follows:
0x00000000 indicates a serial plan running in serial.
0x01000000 indicates a parallel plan running in serial.
>= 0x02000000 indicates a parallel plan running in parallel.
What does "parallel plan running in serial" mean?
I see a lot of 0x01000000 values, and a few 0x08000000 values, in my trace. How can I determine whether one query is hogging CPUs and whether reducing MAXDOP to 4 will work?
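One rough way to see parallelism on live requests, without decoding BinaryData, is to count worker tasks per executing session; more than one task generally means the request is genuinely running in parallel:

-- Requests currently using more than one worker task
SELECT r.session_id,
       COUNT(t.task_address) AS task_count
FROM   sys.dm_exec_requests r
JOIN   sys.dm_os_tasks t ON t.session_id = r.session_id
GROUP  BY r.session_id
HAVING COUNT(t.task_address) > 1;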
I have a problem, described as follows: I have a table with an INSTEAD OF INSERT trigger:
create table TMessage (ID int identity(1,1), dscp varchar(50))
GO
Alter trigger tr_tmessage on tmessage instead of insert
as
--Set NoCount On
insert into tmessage
[code]....
When I execute P1, it returns 0 for the ID field of @T1.
How can I get the Identity in this case?
PS: I cannot use IDENT_CURRENT because the insert rate on the table is very high and inserts can happen concurrently. There are also some inserts into other tables inside the trigger code, so I cannot use @@IDENTITY either.
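SCOPE_IDENTITY() comes back empty here because the real insert happens inside the trigger's scope, not the caller's. One hedged workaround is to capture the new IDs with an OUTPUT clause inside the trigger, writing into a temp table the caller creates first (a temp table created by the calling session is visible inside the trigger, and an insert against the same table from within an INSTEAD OF trigger performs the actual insert rather than re-firing the trigger). This sketch assumes every caller creates #NewIds before inserting:

-- Inside the trigger body:
INSERT INTO tmessage (dscp)
OUTPUT inserted.ID INTO #NewIds (ID)
SELECT dscp FROM inserted;

-- In the caller:
CREATE TABLE #NewIds (ID int);
INSERT INTO tmessage (dscp) VALUES ('hello');
SELECT ID FROM #NewIds;  -- identities generated by this insert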
I am getting deadlocks in production. I took the deadlock information from a trace file and found the deadlock graph, but I am unable to work out the exact scenario. I am attaching the deadlock trace file.
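For the next occurrence, trace flag 1222 writes a much more detailed deadlock report (resources, lock modes, and the statements involved) straight to the error log, which is often easier to read than the graph alone:

-- Write detailed deadlock reports to the SQL Server error log
DBCC TRACEON (1222, -1);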
Is there a way to set up a trace that shows only direct T-SQL statements fired at my server? Note that I don't want to capture procedure calls or the statements executed within the procs.
Actually, many people are firing direct SQL statements at the server, and some are coming from Entity Framework as well. I just want to capture those.
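Ad hoc batches fire SQL:BatchCompleted, while statements inside procedures fire SP:StmtCompleted, so a trace that includes only the batch event already excludes proc internals. The same idea as an Extended Events sketch (session name is arbitrary; note that Entity Framework usually sends sp_executesql over RPC, so add sqlserver.rpc_completed if those calls must be included):

-- Capture completed ad hoc batches only
CREATE EVENT SESSION [AdHocBatches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.sql_text)
)
ADD TARGET package0.event_file (SET filename = N'AdHocBatches');

ALTER EVENT SESSION [AdHocBatches] ON SERVER STATE = START;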