Sarfaraz writes "I have captured SQL Profiler data. I was reviewing top running CPU intensive SQL statements. The Duration (in seconds) 1.39, 1.09, 0.16 and CPU (in seconds) 0.97, 0.95, 0.16 respectively for some SQL statements. How do I know what is the normal baseline for duration and CPU in order to determine the CPU intensive SQL statements.
Secondly same question for long running procedure duration 0.14, 0.11. What is the normal baseline here. Is this normal or too long.
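There is no universal baseline; CPU and duration only mean something relative to your own workload measured over time. As one starting point, here is a minimal sketch (assuming SQL Server 2005 or later, where these DMVs exist) that ranks cached statements by average CPU so the expensive ones stand out against the rest of the workload:

-- Hedged sketch: rank cached statements by average CPU (total_worker_time and
-- total_elapsed_time are reported in microseconds, so divide by 1000 for ms).
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time  / qs.execution_count / 1000.0 AS avg_cpu_ms,
    qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_duration_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;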
If monitoring for duration with SQL Profiler, what does the number represent? E.g. for 2733906, is it milliseconds or thousandths? I looked in BOL but found no clear definition.
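In SQL Server 2005 and later, the Duration value written to a trace file or table is in microseconds, although the Profiler GUI displays milliseconds by default, so 2733906 is roughly 2.7 seconds. A minimal sketch that reads a saved trace and converts the value (the file path is hypothetical):

-- Hedged sketch: Duration is microseconds in the saved trace; CPU is already milliseconds.
SELECT TextData,
       Duration / 1000.0 AS duration_ms,
       CPU               AS cpu_ms,
       StartTime,
       EndTime
FROM fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
WHERE Duration IS NOT NULL
ORDER BY Duration DESC;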
We have a VLDB (a few tables with over 200 million records). This database is used for performance testing by simulating 150 users and executing all the necessary functional flows.
When I examined the Profiler results, I could see some very high values, shown below, in the Duration column for many events.
1521729 3462142 1624325 3211255 1248276 3903998
Does it mean that the SP or the T-SQL statement is taking this much time, in milliseconds, to execute and return its output?
I have a procedure in a history database that inserts into 3 tables inside a transaction. Users complain that the proc sometimes takes too long during heavy usage. I ran some traces to see what is taking up the time and found that the RPC duration was averaging > 500 ms (it should only take 50 ms). I checked whether one of the statements was taking too much time, but I only see the COMMIT TRANSACTION statement taking around 500 ms. I checked the average disk queue length, which is around 30 (this is on a single local disk).
So is this definitely a disk issue, or is there something else I need to check?
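A slow COMMIT plus a high disk queue on a single disk does point at the transaction log write path, but it is worth confirming with wait statistics before blaming the disk. A minimal sketch, assuming SQL Server 2005 or later:

-- Hedged sketch: a large share of WRITELOG waits alongside the slow commits
-- supports the theory that the log disk is the bottleneck.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX', 'IO_COMPLETION')
ORDER BY wait_time_ms DESC;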
I have a very strange problem that I need to understand. In the last few days I ran a SQL 2005 Profiler trace to review T-SQL duration. When reviewing the results, I found that one SP shows a value of 4037312 in the Duration field, which does not look normal. Could you help me identify why this happens?
I've set the Duration filter of my trace to "Greater than or equal to: 1000". However, when I start my trace the Duration column is now empty. Prior to setting the filter, there were values showing in this column. Any ideas on how to fix this?
Profiler was run against a database looking for "long running" queries. I used the Duration column to filter out the queries that I didn't want. When reviewing the output, I noticed that for some queries the StartTime was equal to the EndTime even though the Duration was set higher. My question is, "What can account for this discrepancy, and what inferences should I draw about the difference?" Does the difference represent a resource being locked or some other type of blocking (Duration), and once the query was allowed to run, it completed quickly? TIA
That works fine, but now I need to calculate the total duration of a request based on the duration of the tasks completed in that request, keyed on Req_ID. I would like to use the CASE statement I have to determine the SLA_Mins for each task and add them together to get the total request SLA_Mins.
Below are the CREATE TABLE schema and data:
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
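A minimal sketch of the grouping approach, assuming the CASE expression yields minutes per task (the branch values below are hypothetical, since the full schema and the original CASE are not shown above): wrap the per-task CASE in SUM() and group by Req_ID to roll task-level SLA minutes up to the request level.

-- Hedged sketch: total SLA minutes per request.
SELECT
    Req_ID,
    SUM(CASE
            WHEN Service_Name = N'SomeTaskType' THEN 30   -- hypothetical per-task SLA minutes
            ELSE 60
        END) AS Total_SLA_Mins
FROM dbo.MidrangeOtherSourceControl
GROUP BY Req_ID;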
I have a 5 GB database with a 1 GB log file running on SQL Server 6.5. I would like to establish a short list of Performance Monitor objects (NT & SQL Server) that is adequate to capture application performance. I would like some advice:
1. What kind of objects should I monitor on my list?
2. How can I automate the process (via a resource kit utility?) to save a new log file off daily and manage an archive process?
3. How can I set alerts/notifications (no SQL Mail is available right now)?
Any help would be appreciated and thanks in advance.
Hello, I'm running SQL 2000 SP3 on a Win2K server. I ran the Baseline Security Analyzer and got yellow X's for "Unchecked buffer in Windows help facility" (MS02-055) and "Flaw in SMB signing could enable group policy to be modified" (MS02-070).
These warnings are telling me that my file versions are greater than expected: the file I have is version 5.2.3718.0 and the one it wants is 5.2.3669.0, and it's the same for the other file as well. Do I need to fix this?
I want to create a server performance baseline report for my database server. I know from reading SQL Server Books Online that I can use System Monitor and SQL Profiler to monitor the server. However, there are too many counters; I don't know which ones I should use, and if I do choose a counter, I don't know what its expected value is.
For example, the SQL Server:Buffer Manager object has almost 22 counters. Which counters should I monitor? If I monitor AWE lookup maps/sec, what is the expected value for good performance?
Does anyone know a good reference or textbook that can help me create a baseline report?
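There is no single "expected value" for most counters; the point of a baseline is to capture the same counters over time on your own server and compare against that history. As a small sketch (assuming SQL Server 2005 or later), you can also read a few commonly baselined Buffer Manager counters from inside SQL Server instead of System Monitor:

-- Hedged sketch: a handful of Buffer Manager counters often tracked in baselines.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Checkpoint pages/sec');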
I am training to be a DBA in a company running about 30 machines with MS SQL Server (2000 and 2005). Last week I went to a class where the instructor recommended establishing a performance baseline using Windows Performance Monitor. He also advised running perfmon remotely so as not to affect the servers' performance.
What I am wondering is, since I have so many different machines to baseline, can I run perfmon on one box, using a separate counter log for each server? I would like to get a nice week-long baseline for each machine, but I also don't want to get bad data by running too many logs at the same time.
My plan is to do a small set of counters for processor, memory, disk, and the SQL server instance(about 10 counters total).
If anyone has experience in this area, I would appreciate any advice that you might have.
What is the best method of creating schema creation scripts that can be stored in a version control system? The process of using EM (Enterprise Manager) to generate a script is not an appealing option. I am still learning the MS SQL sys tables and have not found a useful list of all the codes & types needed to join the tables, etc.
mike
--Posted via http://dbforums.com
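For the sys-table side of this, a minimal sketch for SQL Server 2000-era system tables that joins sysobjects, syscolumns, and systypes to list user tables with their column names and data types:

-- Hedged sketch: user tables, columns, data types, and lengths on SQL Server 2000.
SELECT o.name  AS table_name,
       c.name  AS column_name,
       t.name  AS type_name,
       c.length
FROM sysobjects AS o
JOIN syscolumns AS c ON c.id = o.id
JOIN systypes   AS t ON t.xusertype = c.xusertype
WHERE o.xtype = 'U'
ORDER BY o.name, c.colid;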
I downloaded MBSA and ran it against my SQL 2005 Server. It tells me that I have a severe risk because
'The following databases have public access. Remove the public access if it is not required - tempdb, model, msdb, ReportServer, ReportServerTempDB'
I have checked these databases and each have the Guest User but it is disabled. If I check the database properties the public role has no permissions against the listed databases.
Is this a bug with MBSA? If not how do I remove Public Access?
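To see for yourself what, if anything, the public role has been granted in one of the flagged databases, here is a minimal sketch (assuming SQL Server 2005). Note that some grants in the system databases exist by design, so MBSA's check can flag a default configuration.

-- Hedged sketch: run in each flagged database (e.g. msdb) to list explicit
-- permissions granted to the public role.
SELECT dp.class_desc,
       OBJECT_NAME(dp.major_id) AS object_name,
       dp.permission_name,
       dp.state_desc
FROM sys.database_permissions AS dp
WHERE dp.grantee_principal_id = DATABASE_PRINCIPAL_ID('public');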
I've been put in a situation where I have a number of SQL databases running on 2005/2008 for which I have responsibility. I've been given limited information, so I am looking for a starting point to determine where I go from here.
I have of course ensured there is a backup strategy in place to secure the data.
We are adding 4-5 indexes to one database and dropping 2 unused indexes. I don't have a proper testing environment. How do I monitor these index changes? Do we need to capture a baseline? We don't get the same load all the time or on all days.
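Even without a test environment, you can watch whether the new indexes are actually used (and confirm the dropped ones weren't missed) over a representative period. A minimal sketch, assuming SQL Server 2005 or later:

-- Hedged sketch: seeks/scans/lookups vs. updates for each index in the current
-- database since the last restart. New indexes showing only user_updates are
-- costing writes without helping reads.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY table_name, index_name;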
I'm trying to run the SQL Server 2012 Best Practices Analyzer. After learning that I first had to install the Microsoft Baseline Configuration Analyzer, I did that.
When I tried to run the Microsoft Baseline Configuration Analyzer/Best Practices Analyzer remotely, though, I got an extremely verbose error message and want to confirm if I really need to do all of the steps involved on each target server that I want to analyze.
Here is what I tried after launching the Microsoft Baseline Configuration Analyzer application (I'm using 'MYSERV' as the target server name):
1. Clicked on "Connect to another computer" 2. Clicked "Another computer:" entered: MYSERV 3. Checked the "Connect as another user:" box 4. Clicked "Set User..." added by Windows credentials. 5. Checked the Use CredSSP box 6. Clicked OK.
After a second or two, the error below came back. Is that what I have to do on each remote computer to run the analyzers?
Microsoft Baseline Configuration Analyzer
Connecting to the remote server failed with the following error message: The WinRM client cannot process this request. CredSSP authentication is currently disabled in the client configuration. Change the client configuration and try the request again. CredSSP authentication must also be enabled in the server configuration. Also, Group Policy must be edited to allow credential delegation to the target computer.
Use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Delegating Fresh Credentials. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. For more information
I found this nifty code on Stack Overflow that works well, but I'm trying to send the results to a text file and the column lengths are huge. I used CAST for the first line and it worked great, but I can't seem to make it work with Duration. Here's the original code:
I am trying to get a query that will allow me to report the time taken to complete a certain training module.
The database itself does not have a duration field, so I am trying to get the duration by using MIN and MAX. I can get the time when the module was opened and the time of the last mouse click on it; from these I need to be able to calculate the time taken to complete.
The query I am using to get the basic info pulls from 3 tables, so I have only attached the relevant output. The query used is as follows:
SELECT *
FROM PPS_SCOS, PPS_TRANSCRIPTS, PPS_TRANSCRIPT_DETAILS, PPS_PRINCIPALS
WHERE PPS_SCOS.SCO_ID = PPS_TRANSCRIPTS.SCO_ID
  AND PPS_TRANSCRIPTS.TRANSCRIPT_ID = PPS_TRANSCRIPT_DETAILS.TRANSCRIPT_ID
  AND PPS_TRANSCRIPTS.PRINCIPAL_ID = PPS_PRINCIPALS.PRINCIPAL_ID
  AND PPS_SCOS.NAME LIKE 'MTM-106 The Dangers of Smoking'
  AND PPS_PRINCIPALS.NAME LIKE 'Nigel Cordiner'
  AND PPS_TRANSCRIPTS.TICKET NOT LIKE 'l-%'
ORDER BY PPS_TRANSCRIPT_DETAILS.DATE_CREATED
Output:
Source tables: PPS_SCOS, PPS_SCOS, PPS_TRANSCRIPT_DETAILS, PPS_PRINCIPALS, PPS_PRINCIPALS

SCO_ID  NAME                            DATE_CREATED  PRINCIPAL_ID  NAME
136850  MTM-106 The Dangers of Smoking  08:17:25      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:17:25      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:17:40      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:18:25      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:18:57      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:19:14      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:19:47      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:20:21      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:20:44      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:21:26      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:22:13      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:24:55      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:25:12      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:25:29      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:26:49      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:27:02      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:27:29      16287         Nigel Cordiner
136850  MTM-106 The Dangers of Smoking  08:27:43      16287         Nigel Cordiner
Have added the column heading and the tables the output comes from.
Relatively new to SQL so any help would be greatly received.
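A minimal sketch of the MIN/MAX approach, grouping by module and learner (it assumes DATE_CREATED is a datetime and reuses the same joins as the query above):

-- Hedged sketch: duration per learner per module = last click minus first open.
SELECT s.NAME              AS module_name,
       p.NAME              AS learner_name,
       MIN(d.DATE_CREATED) AS first_event,
       MAX(d.DATE_CREATED) AS last_event,
       DATEDIFF(minute, MIN(d.DATE_CREATED), MAX(d.DATE_CREATED)) AS duration_minutes
FROM PPS_SCOS AS s
JOIN PPS_TRANSCRIPTS AS t        ON t.SCO_ID = s.SCO_ID
JOIN PPS_TRANSCRIPT_DETAILS AS d ON d.TRANSCRIPT_ID = t.TRANSCRIPT_ID
JOIN PPS_PRINCIPALS AS p         ON p.PRINCIPAL_ID = t.PRINCIPAL_ID
WHERE t.TICKET NOT LIKE 'l-%'
GROUP BY s.NAME, p.NAME;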
SELECT h.JobNum,
       (CASE WHEN MONTH(h.JobCompletionDate) = 1
             THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) ELSE 0 END) AS JAN,
       (CASE WHEN MONTH(h.JobCompletionDate) = 2
             THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) ELSE 0 END) AS FEB,
       ...
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE JobCompletionDate >= '20070101'
  AND JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate
The query shows, for each job, the month in which the job completed, and the number of hours it took to complete. I'm calculating the number of days' duration by doing a DATEDIFF between the oldest and newest ClockInDates. I need to ignore adjustment transactions in the LaborDtl table; these rows are easily identified as they have ClockInTime values of 0. So far, so good. Now here's my problem.
There are some jobs which have only one "real" labor transaction; this could happen if the job only took one day to complete. Other labor transactions may exist for that job, but let's say they are adjustments which we can ignore: the date they were entered should not extend the duration of the job. In this situation, my DATEDIFF between the oldest valid transaction and the newest returns 0. I don't have to count hours between ClockInTime and ClockOutTime. The rule is simply that if there is only one "real" labor transaction, I need to count this as a 1-day job.
I thought a nested CASE statement or expression might be the way to go but I didn't make any real progress.
Any ideas to solve this problem would be appreciated.
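One way to express the rule is to make the duration expression itself conditional on the number of qualifying rows in the group. A hedged sketch of just the January column, following the structure of the query above:

-- Hedged sketch: if a job has only one "real" labor row (ClockInTime <> 0),
-- count it as a 1-day job; otherwise use the usual DATEDIFF.
SELECT h.JobNum,
       CASE WHEN MONTH(h.JobCompletionDate) = 1 THEN
            CASE WHEN COUNT(l.ClockInDate) = 1 THEN 1
                 ELSE DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
            END
            ELSE 0
       END AS JAN
FROM JobHead AS h
INNER JOIN LaborDtl AS l ON h.JobNum = l.JobNum
WHERE h.JobCompletionDate >= '20070101'
  AND h.JobCompletionDate <  '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate;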
I'm developing a web app that displays the running packages and the total elapsed time. I'm calling GetRunningPackages() method and using the ExecutionDuration property of the returned package. The duration seems to be only for the currently executing container and not the entire package. Is there a way to get the duration of the entire package? Thanks.
Hi all, I want to find the working duration between two datetimes in C#. I'm using the following code:
DateTime starttime = Convert.ToDateTime(Session["StartTime"]);
DateTime endtime = DateTime.Now;
TimeSpan duration = endtime - starttime;
DateTime period = new DateTime(duration.Ticks);
I want to store this duration in the database through a stored procedure. I've given the duration column a datetime data type, but it gives an error converting TimeSpan to DateTime. Please help. Thanks
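A TimeSpan is an elapsed interval rather than a point in time, so one common approach is to store it as an integer number of seconds (or minutes) instead of a datetime. A minimal sketch of the table column and procedure (the names are hypothetical); from C# you would pass (int)duration.TotalSeconds as the parameter value:

-- Hedged sketch: persist the elapsed time as an integer number of seconds.
CREATE TABLE dbo.WorkLog
(
    WorkLogID       int IDENTITY(1,1) PRIMARY KEY,
    StartTime       datetime NOT NULL,
    DurationSeconds int      NOT NULL
);
GO
CREATE PROCEDURE dbo.SaveWorkDuration
    @StartTime       datetime,
    @DurationSeconds int
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.WorkLog (StartTime, DurationSeconds)
    VALUES (@StartTime, @DurationSeconds);
END;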
SQL 6.5 - run duration 6-7 hours
SQL 7.0 - run duration 12-13 hours
175+ columns with total record size = 570
4.2M records with table size 2.5 GB
It's a simple 'SELECT INTO...' with some embedded logic, going from a work table with all char fields into the actual table and converting the char fields into various data types (int, datetime, real, etc.). Here is a sample of the code:
SELECT
    LoanNum = CASE WHEN ISNUMERIC(ACCT#) = 1 THEN CONVERT(int, ACCT#) ELSE NULL END,
    PaidToDt = CASE
                   WHEN PAIDDT = '0001-01-01' THEN NULL
                   WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '19' THEN CONVERT(smalldatetime, PAIDDT)
                   WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '20' AND SUBSTRING(PAIDDT,3,2) < '79' THEN CONVERT(smalldatetime, PAIDDT)
                   ELSE NULL
               END,
    . . .
INTO db.owner.tablename
FROM db.owner.wrktablename (NOLOCK)
Hearing complaints from users about speed on the DB server (I have almost no control over the design; it just has to work). I ran Profiler looking for all SQL statements over 4000 milliseconds, and in one hour it returned over 715 T-SQL statements. Over 300 of these were over 10000 milliseconds. This is on an 8-way Dell with 8 GB of RAM. Looking for opinions: how bad does this look compared to other servers you are taking care of? Cache hit ratio is at 99% and system queue length is still under 1, but this does not look good.
There is a trigger that monitors modifications on a table, and it is turned on. For a certain duration, I need to turn off this trigger so I can modify the table, and then turn the trigger on again.
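A minimal sketch (the table, trigger, and column names are hypothetical; DISABLE/ENABLE TRIGGER requires SQL Server 2005 or later, while the ALTER TABLE form also works on SQL Server 2000):

-- Hedged sketch: temporarily switch off a monitoring trigger around a manual change.
DISABLE TRIGGER trg_MyTable_Audit ON dbo.MyTable;

UPDATE dbo.MyTable
SET SomeColumn = 'new value'
WHERE SomeKey = 42;            -- the special-case modification

ENABLE TRIGGER trg_MyTable_Audit ON dbo.MyTable;

-- SQL Server 2000 equivalent:
-- ALTER TABLE dbo.MyTable DISABLE TRIGGER trg_MyTable_Audit;
-- ALTER TABLE dbo.MyTable ENABLE TRIGGER trg_MyTable_Audit;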
I have a table called Tickets which contains ticket information for a machine. Each machine can have more than one ticket number open at the same time. The ticket record contains the start date/time and end date/time of the ticket. Therefore the table looks something like this:
I want to be able to calculate the total duration time (in hours) that EACH MACHINE had a ticket open... but here is the tricky part. The total duration time that a machine had a ticket open has to encompass any tickets that may fall in the same time period. For example: if Machine A has a ticket open at 8:30 and the ticket is closed at 10:00, and meanwhile Machine A had another separate ticket open at 9:30 which was closed at 10:30, then the total duration time for this machine would be from 8:30 to 10:30, for a total of 2 hours.
Can anyone help me get started in tackling this problem or provide any examples?
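This is an interval-merging ("gaps and islands") problem: overlapping tickets for the same machine have to be collapsed into one continuous open period before the durations are summed. A hedged sketch, assuming SQL Server 2012 or later and columns named MachineID, StartTime, and EndTime:

-- Hedged sketch: collapse overlapping ticket intervals per machine, then sum.
;WITH Ordered AS
(
    SELECT MachineID, StartTime, EndTime,
           MAX(EndTime) OVER (PARTITION BY MachineID
                              ORDER BY StartTime, EndTime
                              ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS PrevMaxEnd
    FROM dbo.Tickets
),
Islands AS
(
    SELECT MachineID, StartTime, EndTime,
           SUM(CASE WHEN StartTime > ISNULL(PrevMaxEnd, '19000101') THEN 1 ELSE 0 END)
               OVER (PARTITION BY MachineID ORDER BY StartTime, EndTime
                     ROWS UNBOUNDED PRECEDING) AS IslandNo
    FROM Ordered
)
SELECT MachineID,
       SUM(DATEDIFF(minute, IslandStart, IslandEnd)) / 60.0 AS TotalOpenHours
FROM (
        SELECT MachineID, IslandNo,
               MIN(StartTime) AS IslandStart,
               MAX(EndTime)   AS IslandEnd
        FROM Islands
        GROUP BY MachineID, IslandNo
     ) AS Merged
GROUP BY MachineID;

For the example above, the two Machine A tickets fall into one island (8:30 to 10:30), giving 2.0 hours.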
Is there any way to tell how long this will run for, or how far it has got? I have a large database that has just had most of the data removed. The command has been running for 8 hours and I have just stopped it to let something else run quickly. Is there any way of telling how much longer it will take?
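For some long operations you can at least see progress from another connection. A hedged sketch, assuming SQL Server 2005 or later; note that percent_complete is only populated for certain commands (backup/restore and DBCC operations such as SHRINKFILE), not for an ordinary DELETE, and a cancelled DELETE still has to roll back (KILL <spid> WITH STATUSONLY reports rollback progress) before anything else can safely proceed.

-- Hedged sketch: check progress and status of currently running requests.
SELECT session_id,
       command,
       status,
       percent_complete,          -- populated only for some commands
       estimated_completion_time, -- milliseconds, where available
       wait_type,
       total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;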
/* This SP has 2 functions. a) if @method='duration' gives the average run duration in minutes for successful jobs b) if @method='failures' displays failures/cancels/still executing jobs It defaults to today's date. Specify @xdate for a different date -- Louis Nguyen */
CREATE PROCEDURE UtilityJobsHistory
(
    @method varchar(100) = 'duration',
    @xdate  datetime     = null
)
AS
set nocount on
set transaction isolation level read uncommitted
if @method='duration' begin
select @xdate=isnull(@xdate,getdate())
/*run_duration is in HHMMSS format; drop SS*/
/*run_status: 1 complete, 2 retry*/
/*step_id: 0 is final job outcome*/
/*run_date: yyyymmdd format*/

/*today's performance*/
select a.name,
       minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
into #today
from msdb..sysjobs as a
join msdb..sysjobhistory as b on a.job_id = b.job_id
where run_status in ('1','2')
  and step_id = 0
  and run_date = convert(varchar, @xdate, 112)
group by a.name

/*7 day average performance*/
/*populate #D with dates in yyyymmdd format*/
create table #D (run_date varchar(50))
declare @idate datetime
set @idate = @xdate
while @idate > dateadd(day, -7, @xdate)
begin
    insert into #D select run_date = convert(varchar, @idate, 112)
    select @idate = dateadd(day, -1, @idate)
end

/*Avg7Days*/
select a.name,
       minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
into #avg7Days
from msdb..sysjobs as a
join msdb..sysjobhistory as b on a.job_id = b.job_id
join #D as c on b.run_date = c.run_date
where run_status in ('1','2')
  and step_id = 0
group by a.name

/*output*/
select name = cast(a.name as varchar(35)),
       OneDayAvg = a.minutes,
       SevenDayAvg = b.minutes
from #today as a
join #avg7days as b on a.name = b.name
order by a.name
return end
if @method='failures' begin
select @xdate=isnull(@xdate,getdate())
select status = case run_status
                    when 0 then 'FAILED'
                    when 3 then 'CANCELED'
                    when 4 then 'EXECUTING'
                end,
       name = cast(a.name as varchar(35)),
       step_name,
       time = replace(convert(varchar, @xdate, 107), ' ', '') + ' '
              + right('0000' + cast(b.run_time / 100 as varchar), 4),
       b.message
from msdb..sysjobs as a
join msdb..sysjobhistory as b on a.job_id = b.job_id
where run_status in ('0','3','4')
  and run_date = convert(varchar, @xdate, 112)
order by run_status, a.name

return
end
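A usage sketch for the procedure above (the date is just an example):

exec UtilityJobsHistory                                            -- average run duration (minutes) for today's successful jobs
exec UtilityJobsHistory @method = 'failures'                       -- today's failed/cancelled/still-executing jobs
exec UtilityJobsHistory @method = 'duration', @xdate = '20240101'  -- a specific date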