SQL 2012 :: Does Trace Flag 2371 Indirectly Reduce CPU Usage
Oct 13, 2015
After carefully analyzing the situation for almost a month, I pulled the trigger and made an exception at work: we enabled trace flag 2371. We have some tables with billions of rows and outdated statistics causing horrible plans. I tried several methods to update those statistics, but they either did not solve the problem or were too CPU intensive, causing other issues.
Anyway, one of the side effects I am seeing so far is that average vCPU usage went down by almost 40%. Nothing out of the usual (besides the flag) has been enabled or executed, so my assumption is that the CPU-hungry plans are now gone or reduced.
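For reference, a minimal sketch of how the flag can be enabled on the fly (the -T2371 startup parameter is what makes it survive restarts):

-- Enable trace flag 2371 (dynamic auto-update-statistics threshold) for all sessions.
DBCC TRACEON (2371, -1);
-- Confirm it is active globally.
DBCC TRACESTATUS (2371, -1);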
I am upgrading my application's SQL Server from 2008 R2 to 2012.
As discussed in the URL below, I am able to see the identity jump after I upgrade and the server is restarted.
Since I cannot afford this, and at this moment I do not have the time to create a sequence with NOCACHE and test it again, I have to go ahead and add trace flag 272 to the startup parameters, as this is the only solution I can implement and even roll back without much hassle.
[URL] ....
From what I have learned, this flag disables the new caching of the IDENTITY property that was implemented as part of SQL Server 2012 and makes it work like it did in SQL Server 2008 R2.
But I want to know whether implementing this flag would impact any other feature or the performance of SQL Server (apart from the performance of IDENTITY itself).
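For when there is time to test the sequence route, a minimal sketch (dbo.MySeq is a placeholder name):

-- NO CACHE trades generation speed for restart-safe values, much as -T272 does for IDENTITY.
CREATE SEQUENCE dbo.MySeq AS INT START WITH 1 INCREMENT BY 1 NO CACHE;
SELECT NEXT VALUE FOR dbo.MySeq;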
Hi, I'm trying to use trace flag 1204 to get some detailed deadlock information. In EM, I added the startup parameter -T1204, and then stopped and started the server. I run two jobs that I have set up to deadlock, and they do, but no information about the deadlock shows up in my SQL error log. Does anyone know what I am doing wrong? Thanks.
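One quick check, as a sketch: the flag has to be in effect globally, and you can verify that (or enable it without a restart) from a query window:

-- Enable deadlock reporting for all sessions without restarting.
DBCC TRACEON (1204, -1);
-- Status = 1 and Global = 1 means it is actually in effect.
DBCC TRACESTATUS (1204, -1);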
In order to use Microsoft Dynamics NAV with SQL Server 2005, I need to set trace flag 4616, and I know there is a setting somewhere where I can add this flag number, but I can't remember where. Please help?
I am using SQL Server 2005 SP2 and I am trying to configure mirroring. The issue is that after configuring mirroring, when I press the START MIRRORING button, I get this error:
Database mirroring is disabled by default. Database mirroring is currently provided for evaluation purposes only and is not to be used in production environments. To enable database mirroring for evaluation purposes, use trace flag 1400 during startup. For more information about trace flags and startup options, see SQL Server Books Online. (Microsoft SQL Server, Error: 1498)
I used DBCC TRACEON (1400) and I altered the startup parameters in SQL Server Configuration Manager to add -T1400.
But I am still getting the error shown above in red.
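A hedged sketch of what usually resolves this: DBCC TRACEON without the -1 argument only affects your own session, so either restart the service after adding -T1400 or set the flag globally:

-- DBCC TRACEON (1400) alone is session-scoped; mirroring needs the flag globally.
DBCC TRACEON (1400, -1);
DBCC TRACESTATUS (1400, -1);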
I know that several hotfixes need you to turn on trace flag 4199 to activate them. But after reading an article on that trace flag, and the way it was worded, I couldn't tell whether you still need the trace flag once the hotfix is incorporated into a service pack.
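For testing the behaviour on a single statement rather than server-wide, a minimal sketch (dbo.SomeTable is a placeholder):

-- QUERYTRACEON applies the optimizer-hotfix flag to just this query.
SELECT TOP (10) * FROM dbo.SomeTable OPTION (QUERYTRACEON 4199);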
Using SQL Server 2000 std. edition, I was bitten by the bug described in KBs 818671 and 289149. Query optimizer using Hash Match Team operators would sometimes fail. I added -T8679 at SQL Server startup.
Now that I'm upgrading to SQL Server 2005, is this trace flag still required?
I see that "this was fixed in SQL 2000, SP1." However, I would like a more precise confirmation that this flag is no longer needed in SQL 2005. Sometimes, no news is not necessarily good news.
The error is intermittent, and at least partially dependent on data conditions not available to me for exhaustive regression testing (or else of course I would do that).
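Either way, once the upgrade is done it is easy to audit what is still switched on; a quick sketch:

-- List every trace flag currently active on the instance.
DBCC TRACESTATUS (-1);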
Here's my predicament. I changed the filter on an article within a subscription, but not in a meaningful way: I just added "database.dbo." before a column name. Now my subscription is flagged for reinitialization and won't replicate any transactions unless I start over with a new snapshot. I don't want that to happen. How can I reset the article properties to get them back the way they were and continue replicating transactions?
(I needed to add "database.dbo" so that another job that looks at the filter info would point to the correct database. Next time maybe I'll just modify the table that stores the filter info)
The reason I don't want to snapshot from the beginning is that I have procedures on the subscribing server that fire triggers from the transactions, so all the jobs "downstream" would get out of sync.
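In case it's useful, a hedged sketch of the sp_changearticle route I would try first; whether replication accepts the change with @force_reinit_subscription = 0 depends on the filter change (publication, article, and filter text are placeholders):

EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'MyArticle',
    @property = N'filter_clause',
    @value = N'col1 > 0',                -- put the original filter text back here
    @force_invalidate_snapshot = 0,
    @force_reinit_subscription = 0;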
I have a request where I would like to get the start date/time and end date/time and flag (with an int) which hours (24-hour clock) have values between the two dates. Example: a car comes into service on 2013-12-25 at 0800 and leaves 2013-12-25 at 1400; the difference is 6 hours and I need my table to show
As I'm working away at it, I'm trying to figure out how I could use a time dimension table for this but don't really see much. So far I have the difference between the two times in hours (hour_diff) and the start hour (min_hour), so I would like to do something where I take the first hour (min_hour) and update columns based on the number of hours (hour_diff) — something like the sketch below.
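A minimal sketch of the tally-table idea, assuming hypothetical @start_dt/@end_dt values (stays that cross midnight need extra handling):

-- One row per hour between start and end, each flagged 1; join this back to
-- the hour columns of the reporting table as needed.
DECLARE @start_dt DATETIME = '2013-12-25 08:00', @end_dt DATETIME = '2013-12-25 14:00';
SELECT DATEPART(HOUR, DATEADD(HOUR, n.n, @start_dt)) AS hour_of_day,
       1 AS in_service_flag
FROM (SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
      FROM sys.all_objects) AS n
WHERE DATEADD(HOUR, n.n, @start_dt) < @end_dt;
-- Returns hours 8 through 13: the six in-service hours from the example.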
I want to join two tables, table a and table b, where b is a lookup table, via a left outer join. My question is: how can I generate a flag that shows whether each row matched the join condition?
**In lookup table b the id and country columns are always non-null, and both of them are the keys for joining to table a. This is because the same id and country can have multiple rows in table a due to the update date and posting date fields.
example table a
id  country   area
1   China     Asia
2   Thailand  Asia
3   Jamaica   SouthAmerica
4   Japan     Asia

example table b
id  country   area
1   China     Asia
2   Thailand  SouthEastAsia
3   Jamaica   SouthAmerica
5   USA       America

Expected output
id  country   area           Match
1   China     Asia           Y
2   Thailand  SouthEastAsia  Y
3   Jamaica   SouthAmerica   Y
4   Japan     Asia           N
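A minimal sketch of the usual pattern, using the names from the example: left join, then derive the flag from whether the lookup side is NULL (the expected output shows b's area when matched and a's otherwise, hence the COALESCE):

SELECT a.id,
       a.country,
       COALESCE(b.area, a.area) AS area,
       CASE WHEN b.id IS NULL THEN 'N' ELSE 'Y' END AS Match
FROM table_a AS a
LEFT OUTER JOIN table_b AS b
    ON b.id = a.id
   AND b.country = a.country;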
I am in the middle of capturing a workload to try to tune a SQL instance and was wondering what sizes of traces people capture. I am only one day into the capture, and I believe a typical workload would need a week-long capture, yet I am already at 10GB of files. I am only capturing rpc_completed and sql_batch_completed.
What sizes of workloads do other people capture, and where do you analyse them? Do you have a dedicated server for this kind of thing? At present I am looking to use my local PC. Also, what rollover file sizes do people tend to use? I am currently using 1GB.
I'm trying to work out what the ideal CPU count and max degree of parallelism are for a 3rd-party database server. The server has 12 CPUs, 32GB RAM, and all database sizes add up to < 30GB, so they can all fit in memory (I tried to force this by doing a SELECT * from every table). On certain payroll days, the CPU gets maxed out at 100% for a few seconds.
MAXDOP was originally set to the default of 0. We later changed it to 8 based on several 'best practices' articles. However, the vendor suggests changing it to 1 (no parallelism), while others suggest 4, so that one runaway query doesn't hog most of the CPUs.
I'd like to find out how many CPUs are actually being used by queries. There is a Degree of Parallelism event in URL.... The BinaryData column says :
0x00000000 indicates a serial plan running in serial.
0x01000000 indicates a parallel plan running in serial.
>= 0x02000000 indicates a parallel plan running in parallel.
What does "parallel plan running in serial" mean? I see a lot of 0x01000000 and a few 0x08000000's in my trace. How can I determine whether one query is hogging CPUs, and whether reducing MAXDOP to 4 will work?
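If it comes to changing the setting, a minimal sketch (the value 4 is just the one under discussion, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;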
Hi all, I have a question here and wish to explain my situation first. I created three tables in MSSQL: a main table, a pc table, and a notebook table. The attributes in the main table are user name, pc brand, and notebook brand. The attributes in the pc table are user name, pc brand, and pc model, and the notebook table has user name, notebook brand, and notebook model. Right now, when I open a GridView it shows the main table, and when I click the hyperlink shown in the GridView it automatically links to the pc or notebook table. But for the edit part, e.g. editing pc brand, pc model, and user name in the pc table, what kind of command should I write to automatically update the main table while I update the pc or notebook table? Likewise, when I insert new records into the pc table, the main table should get the new pc brand and user name too. I guess that needs a link between the tables, right? I did that, but it didn't work.
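A hedged sketch of the trigger approach, with hypothetical table and column names (dbo.main(user_name, pc_brand, ...) and dbo.pc(user_name, pc_brand, pc_model)):

CREATE TRIGGER trg_pc_sync ON dbo.pc
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Push the new pc brand into the main table for the same user.
    UPDATE m
    SET m.pc_brand = i.pc_brand
    FROM dbo.main AS m
    JOIN inserted AS i ON i.user_name = m.user_name;
    -- Add a main-table row for users inserted into pc but missing from main.
    INSERT INTO dbo.main (user_name, pc_brand)
    SELECT i.user_name, i.pc_brand
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.main AS m WHERE m.user_name = i.user_name);
END;

A matching trigger on the notebook table would keep the notebook columns in sync the same way.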
Somehow someone turned on an audit on the SQL Server and it is filling up our hard drive, eventually shutting SQL Server down. I've been trying to google how to shut this audit off but have come up with no viable solution yet. How can I turn this trace off? Each file says AuditTrace plus a date, and they appear every other minute. I went into SQL Profiler and can pull up the files, but it does not say how to shut the trace off.
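A sketch of how to find and stop a server-side trace from a query window (replace 2 with the id whose path points at your AuditTrace files; id 1 is normally the default trace, which should be left alone):

-- List running traces and their output files.
SELECT id, path, status FROM sys.traces;
-- Stop the offending trace, then close and delete its definition.
EXEC sp_trace_setstatus @traceid = 2, @status = 0;
EXEC sp_trace_setstatus @traceid = 2, @status = 2;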
I am getting deadlocks in production. I took the deadlock information from a trace file and found the deadlock graph, but I am unable to work out the exact scenario. I am attaching the deadlock trace file.
Is there a way to set up a trace to show only direct T-SQL statements fired at my server? Note that I don't want to capture procedure calls or the statements called within the procs.
Many people are firing direct SQL statements at the server, and some are coming from Entity Framework as well. I just want to capture those.
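A hedged sketch using Extended Events instead of a trace: sql_batch_completed catches ad-hoc batches, and rpc_completed filtered to sp_executesql catches parameterized SQL of the kind Entity Framework sends, while named stored procedure calls fall outside both (session name and file name are placeholders):

CREATE EVENT SESSION AdHocSQL ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text)
     WHERE (object_name = N'sp_executesql'))
ADD TARGET package0.event_file (SET filename = N'AdHocSQL');
ALTER EVENT SESSION AdHocSQL ON SERVER STATE = START;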
I am attempting to create a new trace but I get the following error message: "failed to start a new trace".
I have been doing some digging and, as I understand it, I had to find the directory Profiler uses for temporary files. So I typed "SET TMP" in the command window and received the following reply:
C:\Users\Ross\AppData\Local\Temp
Now, according to the forum: [URL] ...
I am supposed to check that the system folder pointed to by the TMP environment variable exists and is not crammed with files.
Well, when I went to the directory C:\Users\Ross\AppData\Local\Temp, it is indeed full of both files and directories. The size is 16.3 MB and it has 133 files and 63 folders.
When I had a look at the Environment Variables window and chose TMP, the value is "%USERPROFILE%\AppData\Local\Temp", which according to my limited understanding is equivalent to C:\Users\Ross\AppData\Local\Temp.
So, what I am wondering is am I supposed to totally clear out this directory? I am not too keen on doing this because I don't want to stuff my PC up.
Dear all, I have built an SSIS package using BI Dev Studio and enabled its XML configuration file. The package has a variable called TranDate, and I want to set it dynamically from a calendar on my website (just like assigning a variable). I have successfully changed the value of that variable in the configuration file (the change is reflected in the XML file). Then I loaded the package and executed it (through my web app), but it still gets the old value (which I assigned while creating the package). I don't understand where else the package could be getting the value of that variable, since the old value no longer appears anywhere in the XML configuration file.
Thanks for reading this; I am looking forward to any help from you guys.
Set up a trace with the events RPC:Completed, SQL:BatchCompleted, SQL:BatchStarting, and SQL:StmtCompleted.
When I issue the statement SELECT * FROM XyzView, nothing is captured in Profiler. If I script out the view and then execute the SELECT statement that defines the view, it does show up in Profiler.
I've tried adding a lot of the other events, e.g. SP:StmtCompleted and the various other StmtStarting events, and the trace still does not capture anything.
Am I capturing the wrong events or is this known behavior? My goal is to see what the overhead is for using a view versus persisting the results of the view as a table and referencing that instead. The view in question is against static data, joins 9 tables, and is referenced a lot.
I can use the stats generated when I execute the SELECT that defines the view, but I still find this to be curious behavior, so I assume I'm doing something wrong.
I can get a snapshot of tables in tempdb, but I would like to track which procs are causing the load in tempdb.
I think I can sample and record objects in tempdb, but I would like to record the procs creating the most tempdb usage, and the disk reads/writes associated with those procs.
The DMVs give usage in the individual DBs, but what's a good way to correlate procs in the DBs to tempdb usage?
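A minimal sketch of one way to correlate them while the load is happening: tie tempdb page allocations per request to the statement text via the task-space DMV:

-- Currently executing requests ranked by tempdb allocations.
SELECT r.session_id,
       t.text AS running_sql,            -- proc or batch that owns the request
       u.user_objects_alloc_page_count,
       u.internal_objects_alloc_page_count,
       r.reads, r.writes
FROM sys.dm_db_task_space_usage AS u
JOIN sys.dm_exec_requests AS r
    ON  u.session_id = r.session_id
    AND u.request_id = r.request_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE u.user_objects_alloc_page_count + u.internal_objects_alloc_page_count > 0
ORDER BY u.internal_objects_alloc_page_count DESC;

Sampling this on a schedule into a table gives the history that a one-off snapshot doesn't.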
I am trying to find out CPU utilization from history using Process: % Processor Time. I have a dual-CPU box with 2 NUMA nodes, each having 16 logical CPUs bound to it.
I want to know how to calculate CPU utilization using PerfMon. I tried a SQL query which gives CPU history using the SQL DMVs, but I am unable to get the exact value: I used the same query to capture my CPU usage on the run day, and the value seen on the run day differs from what the same query returns later when I pull the history for that date.
-- Get CPU Utilization History (SQL Server 2008 and above)
DECLARE @ts BIGINT;
SELECT @ts = (SELECT cpu_ticks/(cpu_ticks/ms_ticks) FROM sys.dm_os_sys_info);

SELECT SQLProcessUtilization AS [SQLServer_Process_CPU_Utilization],
       SystemIdle AS [System_Idle_Process],
       100 - SystemIdle - SQLProcessUtilization AS [Other_Process_CPU_Utilization],
       DATEADD(ms, -1 * (@ts - [timestamp]), GETDATE()) AS [Event_Time]
FROM (SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
             record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle,
             record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization,
             [timestamp]
      FROM (SELECT [timestamp], CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
              AND record LIKE N'%<SystemHealth>%') AS x) AS y
ORDER BY record_id DESC;
Recently I needed to find all processes connected to a particular database, let's call it Test_db. I have a simple query to find all connections to my database:
select * from sys.databases d join sys.sysprocesses p on d.database_id = p.dbid where d.name = 'test_db'
But there was a process that was connected to another database (it had run USE another_db_name;) but was actually selecting from tables in test_db. Is it possible to catch such connections?
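A hedged sketch of one way to spot them, using locks as a proxy: while a session is actually reading test_db it holds at least a shared database lock there, so look for sessions whose lock sits in test_db but whose connection context is a different database (this only catches them while the query or transaction is active):

SELECT p.spid, p.loginame, DB_NAME(p.dbid) AS connected_db
FROM sys.sysprocesses AS p
JOIN sys.dm_tran_locks AS l
    ON l.request_session_id = p.spid
WHERE l.resource_database_id = DB_ID('test_db')
  AND p.dbid <> DB_ID('test_db')
GROUP BY p.spid, p.loginame, p.dbid;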
I am struggling to figure out the tokens for a CmdExec job (as opposed to a T-SQL job). It is not an option to execute the command by enabling the execution of CMDs via T-SQL, which is why I am using the Agent. I have seen the Microsoft page on tokens, but all the examples seem to be oriented to the T-SQL job type.
I am trying to delete a particular trace file while at the same time keeping the SQL directory dynamic. Taking it a step further, I also want to delete only if the file exists.
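A hedged sketch of one approach, assuming xp_cmdshell is enabled and using the default trace to locate the instance LOG directory dynamically (MyTrace.trc is a placeholder file name):

DECLARE @dir NVARCHAR(260), @cmd NVARCHAR(400);
-- The default trace lives in the instance LOG folder; strip the file name to get the directory.
SELECT @dir = LEFT(path, LEN(path) - CHARINDEX('\', REVERSE(path)))
FROM sys.traces
WHERE is_default = 1;
-- IF EXIST gives the "delete only if the file exists" behaviour.
SET @cmd = N'IF EXIST "' + @dir + N'\MyTrace.trc" DEL "' + @dir + N'\MyTrace.trc"';
EXEC master..xp_cmdshell @cmd;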
After a SQL Server service restart, a column which is set to auto-increment jumped by 1000. To fix the issue, I added the T272 trace flag to the SQL startup parameters. However, I did not see the column being reseeded after the service restart; it is still showing the 1000 jump. Am I doing something wrong?
Below is the log showing the flag being added, from the error log:
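On the behaviour itself: T272 only prevents future jumps; it does not roll back values already consumed, so the existing gap stays unless you reseed. A hedged sketch (dbo.MyTable and its id column are placeholders):

-- Re-point the identity at the highest value actually present in the table.
DECLARE @max BIGINT = (SELECT ISNULL(MAX(id), 0) FROM dbo.MyTable);
DBCC CHECKIDENT ('dbo.MyTable', RESEED, @max);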
I am trying to find out what could be causing this issue: why would we be waiting on CPU when it is barely being used? Signal waits are varying from 35 to 55%, yet CPU usage is only at 5%. We are using Windows Server 2012 with SQL Server 2012 Standard Edition (CU5). There are 3 instances on the server, each with max memory set to 50GB, and the server has a total of 190GB of memory. The machine is a 12-core machine with hyperthreading enabled.
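For reference, a minimal sketch of the signal-wait calculation, in case others want to compare numbers:

-- Share of total wait time that is signal wait (runnable, waiting for a scheduler).
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5,2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;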