T-SQL (SS2K8) :: Agent Job High Run-duration Alert?
May 11, 2015
How can I set up an e-mail alert for a specific Agent job, such that it sends an e-mail report when the run duration exceeds 30 minutes?
The Agent job in question kicks off our ETL, and it runs every hour. It's happened before that a job running elsewhere (not best practice, but we run reports from our data warehouse) creates a deadlock on one of the tables being updated in the ETL, but I only get notified when I get calls from end users saying their reports aren't returning results.
I was thinking of creating a new job that queries either the sysjobhistory or sysjobactivity table in msdb, but I would need it to refer to the run duration of the job while it's still running (I think I can assume it's not updating that value every second...).
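One approach (a minimal sketch, not the poster's code) avoids run_duration entirely and compares the current time against start_execution_date in msdb.dbo.sysjobactivity, which keeps stop_execution_date NULL while the job is running. The job name and 30-minute threshold are placeholders; schedule it in its own Agent job and wrap it in IF EXISTS plus sp_send_dbmail to get the e-mail:

SELECT j.name,
       a.start_execution_date,
       DATEDIFF(minute, a.start_execution_date, GETDATE()) AS minutes_running
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)  -- current Agent session only
  AND j.name = 'ETL Job'                                                 -- placeholder job name
  AND a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL                                      -- still running
  AND DATEDIFF(minute, a.start_execution_date, GETDATE()) > 30;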
I have a very strange problem that I need to understand... In recent days I ran a SQL 2005 Profiler trace to review T-SQL duration. Reviewing the results, I found that one SP shows a value of 4037312 in the DURATION field, which does not look normal. Could you help me identify why this happens?
/* This SP has 2 functions. a) if @method='duration' gives the average run duration in minutes for successful jobs b) if @method='failures' displays failures/cancels/still executing jobs It defaults to today's date. Specify @xdate for a different date -- Louis Nguyen */
CREATE PROCEDURE UtilityJobsHistory
    ( @method varchar(100) = 'duration', @xdate datetime = null )
AS
set nocount on
set transaction isolation level read uncommitted

if @method = 'duration'
begin
    select @xdate = isnull(@xdate, getdate())

    /* run_duration is in HHMMSS format; drop SS */
    /* run_status: 1 complete, 2 retry */
    /* step_id: 0 is final job outcome */
    /* run_date: yyyymmdd format */

    /* today's performance */
    select a.name, minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
    into #today
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    where run_status in ('1','2') and step_id = 0 and run_date = convert(varchar, @xdate, 112)
    group by a.name

    /* 7 day average performance */
    /* populate #D with dates in yyyymmdd format */
    create table #D (run_date varchar(50))
    declare @idate datetime
    set @idate = @xdate
    while @idate > dateadd(day, -7, @xdate)
    begin
        insert into #D select run_date = convert(varchar, @idate, 112)
        select @idate = dateadd(day, -1, @idate)
    end

    /* Avg7Days */
    select a.name, minutes = avg((b.run_duration / 100) / 100 * 60 + (b.run_duration / 100) % 100)
    into #avg7Days
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    join #D as c on b.run_date = c.run_date
    where run_status in ('1','2') and step_id = 0
    group by a.name

    /* output */
    select name = cast(a.name as varchar(35)), OneDayAvg = a.minutes, SevenDayAvg = b.minutes
    from #today as a
    join #avg7days as b on a.name = b.name
    order by a.name

    return
end

if @method = 'failures'
begin
    select @xdate = isnull(@xdate, getdate())

    select status = case run_status when 0 then 'FAILED' when 3 then 'CANCELED' when 4 then 'EXECUTING' end
        ,name = cast(a.name as varchar(35))
        ,step_name
        ,time = replace(convert(varchar, @xdate, 107), ' ', '') + ' ' + right('0000' + cast(b.run_time / 100 as varchar), 4)
        ,b.message
    from msdb..sysjobs as a
    join msdb..sysjobhistory as b on a.job_id = b.job_id
    where run_status in ('0','3','4') and run_date = convert(varchar, @xdate, 112)
    order by run_status, a.name
end
Hi, does anyone know of a script that will give "weighted job duration"? I want to use it to identify which jobs are hogging the CPU. That is, for a given server, list the SQL Agent jobs ordered by (avg job duration in minutes) times (avg number of times the job runs in a given day).
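A rough sketch of one way to compute that from job history (assumptions: step_id = 0 rows are the final job outcome, run_duration is HHMMSS, and the averaging window is the last 30 days):

SELECT j.name,
       AVG((h.run_duration / 10000) * 60 + (h.run_duration / 100) % 100) AS avg_minutes,
       COUNT(*) / 30.0 AS avg_runs_per_day,
       AVG((h.run_duration / 10000) * 60 + (h.run_duration / 100) % 100) * (COUNT(*) / 30.0) AS weighted_minutes_per_day
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE h.step_id = 0
  AND h.run_date >= CONVERT(varchar(8), DATEADD(day, -30, GETDATE()), 112)   -- last 30 days
GROUP BY j.name
ORDER BY weighted_minutes_per_day DESC;

Note this weights by elapsed duration, not actual CPU consumption, so a job that sits blocked looks just as heavy as one that burns CPU.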
I have a publisher, remote distributor and subscriber all running SS2000.
Under Replication Monitor on the distributor, under replication alerts, I have enabled the "replication agent failure" alert. All I need to know is: should this alert trigger if the distribution agent that runs on the subscriber (not the distributor) fails?
I have had occurrences of the distribution agent failing and the alert is not triggered. Is this because it only triggers for the agents that run on the distributor, i.e. the snapshot and log reader agents?
I have six stored procedures that I have to run hourly during the day. When I schedule them into separate SQL jobs (they must run concurrently), the percent CPU of Sqlagent90.exe goes way up, eventually pegging out the overall CPU usage on the server. When I run the six stored procedures concurrently from query windows in SQL Server Management Studio, the overall CPU usage stays very low.
Is there something about the SQL Server Agent that inherently adds CPU overhead? Thanks in advance ....
Which works fine, but what I need is to calculate the total duration of a request based on the durations of the tasks completed in that request, grouped by Req_ID. I would like to use the CASE statement I have to determine the SLA_Mins for each task and add them together to get the total request SLA_Mins.
Below is the create table schema and data
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
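The DDL above is cut off, but assuming the per-task SLA_Mins comes out of a CASE over columns such as Service_Name, one hedged way to get the request total is to wrap that CASE in SUM and group by Req_ID. The CASE branch below is a placeholder, not the real SLA rules:

SELECT Req_ID,
       SUM(CASE WHEN Service_Name = 'SomeTask' THEN 30   -- placeholder per-task minutes
                ELSE 60
           END) AS Total_SLA_Mins
FROM dbo.MidrangeOtherSourceControl
GROUP BY Req_ID;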
I have a report I set up as an Agent job in SSMS; it's set up to run every 7 minutes and it only sends the report if data is present. I'd like to add the ability to omit any rows that were sent in the previous report.
This is what the script looks like:
if exists (
    select o.ord_billto, o.ord_refnum, o.ord_hdrnumber, o.mov_number, o.ord_status, o.ord_cmdvalue, o.ord_startdate
    from orderheader o
    where ord_billto in ('A','B','C','D')
      and DATEDIFF(minute, o.ord_datetaken, GETDATE()) <= 7
[Code] ....
Also, why can't I seem to use IF (SELECT [...]) > 0? When I try using that instead of IF EXISTS I get this error: "Msg 116, Level 16, State 1, Line 7 Only one expression can be specified in the select list when the subquery is not introduced with EXISTS."
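Error 116 appears because the subquery returns several columns and a scalar comparison like > 0 needs exactly one. Either keep IF EXISTS (which can also stop at the first matching row), or reduce the select list to a single value, e.g. this sketch based on the query above:

IF (SELECT COUNT(*)
    FROM orderheader o
    WHERE o.ord_billto IN ('A','B','C','D')
      AND DATEDIFF(minute, o.ord_datetaken, GETDATE()) <= 7) > 0
BEGIN
    PRINT 'rows found';  -- placeholder for the report/send logic
END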
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not High Protection despite both of them being Synchronous.
If not, what are the recommended steps to configure the mirror once it already has been configured? Is just like initially setting up the mirror or would there be any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
I realise this is a stupid question but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, when failover occurs does this automatically change to High Performance mode? Since, for failover to occur, something has happened to the primary, it will be impossible to commit transactions on the new primary and mirror them synchronously, since one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
I'm running into the following message, "String or binary data would be truncated. [SQLSTATE 22001] (Error 8152)" when running a sql agent job. I'm attempting to execute a stored procedure through the job. Keep in mind that when I run the stored procedure in a normal query window, it works fine and only fails when running it as a scheduled job. My guess is that it has to do with how SQL Jobs execute procedures (especially long procedures). If I use Set Ansi_Warnings OFF, the job will work fine, however, I don't know what other issues this may cause.
Can the query below be used to set up a high CPU utilisation alert for both a named instance and the default instance, or do I need to make any changes here (@wmi_namespace=N'.ROOTCIMV2')?
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert
    @name = N'CPU_WM_Utilization_Check',
    @message_id = 0,
    @severity = 0,
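For reference, a heavily hedged sketch of what a WMI-based CPU alert definition can look like; the namespace, WQL query, and 90% threshold here are assumptions rather than the poster's full script, and the alert (with its own response/notification) has to be created in the msdb of each instance, default and named, that should react to it. Because ROOT\CIMV2 reports machine-wide CPU, the condition itself is the same for both instances:

EXEC msdb.dbo.sp_add_alert
    @name = N'CPU_WM_Utilization_Check',
    @message_id = 0,
    @severity = 0,
    @enabled = 1,
    @wmi_namespace = N'\\.\ROOT\CIMV2',
    @wmi_query = N'SELECT * FROM __InstanceModificationEvent WITHIN 60
                   WHERE TargetInstance ISA ''Win32_Processor''
                   AND TargetInstance.LoadPercentage > 90';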
when I run a package from a command window using dtexec, the job immediately says success. DTExec: The package execution returned DTSER_SUCCESS (0). Started: 3:37:41 PM Finished: 3:37:43 PM Elapsed: 2.719 seconds
However, the job is still in the Agent and the status is Executing. The implications of this are not good. Is this how the SQL Server Agent job task is supposed to work by design?
If monitoring for duration with SQL Profiler, what does the number represent? e.g. 2733906 - is it milliseconds, thousandths? I looked in BOL but found no clear definition.
I found this nifty code on stackoverflow that works well but I'm trying to send the results to a text file and the column lengths are huge. I used CAST for the first line and it worked great but I can't seem to make it work with duration. Here's the original code:
I am trying to get a query that will allow me to report the time taken to complete a certain training module.
The database itself does not have a duration field, so I am trying to get the duration by using MIN and MAX. I can get the time when the module was opened and the time of the last mouse click on it; from this I need to be able to calculate the time taken to complete.
The query I am using to get the basic info comes from 3 tables, so I have only attached the relevant output. The query used is as follows:
SELECT *
FROM PPS_SCOS, PPS_TRANSCRIPTS, PPS_TRANSCRIPT_DETAILS, PPS_PRINCIPALS
WHERE PPS_SCOS.SCO_ID = PPS_TRANSCRIPTS.SCO_ID
  AND PPS_TRANSCRIPTS.TRANSCRIPT_ID = PPS_TRANSCRIPT_DETAILS.TRANSCRIPT_ID
  AND PPS_TRANSCRIPTS.PRINCIPAL_ID = PPS_PRINCIPALS.PRINCIPAL_ID
  AND PPS_SCOS.NAME LIKE 'MTM-106 The Dangers of Smoking'
  AND PPS_PRINCIPALS.NAME LIKE 'Nigel Cordiner'
  AND PPS_TRANSCRIPTS.TICKET NOT LIKE 'l-%'
ORDER BY PPS_TRANSCRIPT_DETAILS.DATE_CREATED
Output:
pps_scos  pps_scos                        pps_transcript_details  pps_principals  pps_principals
SCO_ID    NAME                            DATE_CREATED            PRINCIPAL_ID    NAME
136850    MTM-106 The Dangers of Smoking  08:17:25                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:17:25                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:17:40                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:18:25                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:18:57                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:19:14                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:19:47                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:20:21                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:20:44                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:21:26                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:22:13                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:24:55                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:25:12                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:25:29                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:26:49                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:27:02                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:27:29                16287           Nigel Cordiner
136850    MTM-106 The Dangers of Smoking  08:27:43                16287           Nigel Cordiner
Have added the column heading and the tables the output comes from.
Relatively new to SQL so any help would be greatly received.
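A hedged starting point, reusing the joins above: group the transcript detail rows and take the gap between the first and last DATE_CREATED as the completion time. This assumes DATE_CREATED is a datetime and that one TRANSCRIPT_ID represents one attempt:

SELECT PPS_PRINCIPALS.NAME,
       PPS_SCOS.NAME AS MODULE_NAME,
       MIN(PPS_TRANSCRIPT_DETAILS.DATE_CREATED) AS FIRST_CLICK,
       MAX(PPS_TRANSCRIPT_DETAILS.DATE_CREATED) AS LAST_CLICK,
       DATEDIFF(second,
                MIN(PPS_TRANSCRIPT_DETAILS.DATE_CREATED),
                MAX(PPS_TRANSCRIPT_DETAILS.DATE_CREATED)) / 60.0 AS DURATION_MINUTES
FROM PPS_SCOS
JOIN PPS_TRANSCRIPTS ON PPS_SCOS.SCO_ID = PPS_TRANSCRIPTS.SCO_ID
JOIN PPS_TRANSCRIPT_DETAILS ON PPS_TRANSCRIPTS.TRANSCRIPT_ID = PPS_TRANSCRIPT_DETAILS.TRANSCRIPT_ID
JOIN PPS_PRINCIPALS ON PPS_TRANSCRIPTS.PRINCIPAL_ID = PPS_PRINCIPALS.PRINCIPAL_ID
WHERE PPS_SCOS.NAME LIKE 'MTM-106 The Dangers of Smoking'
  AND PPS_PRINCIPALS.NAME LIKE 'Nigel Cordiner'
  AND PPS_TRANSCRIPTS.TICKET NOT LIKE 'l-%'
GROUP BY PPS_PRINCIPALS.NAME, PPS_SCOS.NAME, PPS_TRANSCRIPTS.TRANSCRIPT_ID;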
SELECT h.JobNum,
       (CASE WHEN MONTH(h.JobCompletionDate) = 1
             THEN datediff(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) ELSE 0 END) AS JAN,
       (CASE WHEN MONTH(h.JobCompletionDate) = 2
             THEN datediff(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) ELSE 0 END) AS FEB,
       ...
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE JobCompletionDate >= '20070101'
  AND JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate
The query shows, for each job, the month in which the job completed, and the number of hours it took to complete. I'm calculating the number of days' duration by doing a datediff between the oldest and newest clockindates. I need to ignore adjustment transactions in the labordtl table - these rows are easily identified as they have clockintime values of 0. So far, so good. Now here's my problem.
There are some jobs which have only one "real" labor transaction - this could happen if the job only took one day to complete. Other labor transactions may exist for that job, but let's say they are adjustments which we can ignore -- the date they were entered should not extend the duration of the job. In this situation, my datediff between the oldest valid transaction and the newest returns 0. I don't have to count hours between clockintime and clockouttime. The rule is simply that if there is only one "real" labor transaction, I need to count this as a 1 day job.
I thought a nested CASE statement or expression might be the way to go but I didn't make any real progress.
Any ideas to solve this problem would be appreciated.
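One hedged way to do it is exactly the nested CASE idea: compute the datediff and floor it at 1, so a job whose only valid transactions fall on a single day counts as one day. A sketch against the same tables, showing only the JAN expression:

SELECT h.JobNum,
       (CASE WHEN MONTH(h.JobCompletionDate) = 1
             THEN CASE WHEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) = 0
                       THEN 1   -- single-day (or single-transaction) job counts as 1 day
                       ELSE DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
                  END
             ELSE 0 END) AS JAN
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE h.JobCompletionDate >= '20070101'
  AND h.JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate;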
I'm developing a web app that displays the running packages and the total elapsed time. I'm calling GetRunningPackages() method and using the ExecutionDuration property of the returned package. The duration seems to be only for the currently executing container and not the entire package. Is there a way to get the duration of the entire package? Thanks.
Hi all, I want to find the working duration between two datetimes in C#. I'm using the following code:

DateTime starttime = Convert.ToDateTime(Session["StartTime"]);
DateTime endtime = DateTime.Now;
TimeSpan duration = endtime - starttime;
DateTime period = new DateTime(duration.Ticks);

I want to store this duration in the database through a stored procedure. I've given duration a datetime datatype, but it gives an error converting TimeSpan to DateTime. Please help... Thanks
SQL 6.5 - run duration 6-7 hours
SQL 7.0 - run duration 12-13 hours
175+ columns with total record size = 570
4.2M records with table size 2.5G
It's a simple 'select into...' with some embedded logic from a work table with all char fields into the actual table converting char fields into various data types (int, datetime, real, etc.) Here is a sample of the code:
SELECT
    LoanNum = CASE WHEN ISNUMERIC(ACCT#) = 1 THEN CONVERT(int, ACCT#) ELSE NULL END,
    PaidToDt = CASE WHEN PAIDDT = '0001-01-01' THEN NULL
                    WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '19' THEN CONVERT(smalldatetime, PAIDDT)
                    WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '20' AND SUBSTRING(PAIDDT,3,2) < '79' THEN CONVERT(smalldatetime, PAIDDT)
                    ELSE NULL END,
    . . .
INTO db.owner.tablename
FROM db.owner.wrktablename (NOLOCK)
Hearing complaints from users about speed on the db server (I have almost no control over the design; it just has to work). Ran Profiler looking for all SQL statements over 4000 milliseconds, and in one hour it returned over 715 T-SQL statements. Over 300 of these were over 10000 milliseconds. This is on an 8-way Dell with 8 GB of RAM. Looking for opinions: how bad does this look compared to other servers you are taking care of? Cache hit ratio is at 99% and the system queue length is still under 1, but this does not look good.
There is a trigger that monitors modifications on a table, and it is turned on. For a certain period I need to turn off this trigger, modify the table, and then turn the trigger back on.
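A minimal sketch (the trigger and table names are placeholders; DISABLE/ENABLE TRIGGER needs SQL Server 2005 or later, while older versions use ALTER TABLE ... DISABLE TRIGGER):

DISABLE TRIGGER trg_MyTable_Audit ON dbo.MyTable;   -- stop the trigger firing

UPDATE dbo.MyTable                                  -- the modification to make while it is off
SET SomeColumn = 'new value'
WHERE SomeKey = 1;

ENABLE TRIGGER trg_MyTable_Audit ON dbo.MyTable;    -- turn it back on

Worth doing in a maintenance window or inside a transaction, since other sessions' changes also bypass the trigger while it is disabled.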
I have a table called Tickets which contains ticket information for a machine. Each machine can have more than one ticket number opened at the same time. The ticket record contains the start date/time and end date/time of the ticket. Therefore the table looks something like this:
I want to be able to calculate the total duration time (in hours) that EACH MACHINE had a ticket open... but here is the tricky part. The total duration time that a machine had a ticket open has to encompass any tickets that may fall in the same time period. For example: if Machine A has a ticket opened at 8:30 and the ticket is closed at 10:00, and meanwhile Machine A had another separate ticket opened at 9:30 which was closed at 10:30, then the total duration time for this machine would be from 8:30 to 10:30, for a total of 2 hrs duration time.
Can anyone help me get started in tackling this problem or provide any examples?
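One way to get started is the classic interval-packing pattern: find ticket starts that are not covered by any other ticket for the same machine, pair each with the first uncovered end, then sum the merged durations. A sketch assuming a table shaped like Tickets(MachineID, StartTime, EndTime); the column names are guesses:

SELECT MachineID,
       SUM(DATEDIFF(minute, IslandStart, IslandEnd)) / 60.0 AS TotalOpenHours
FROM (
    SELECT s.MachineID,
           s.StartTime AS IslandStart,
           MIN(e.EndTime) AS IslandEnd
    FROM Tickets AS s
    JOIN Tickets AS e
      ON e.MachineID = s.MachineID
     AND e.EndTime >= s.StartTime
    WHERE NOT EXISTS (SELECT 1 FROM Tickets AS x          -- s.StartTime not inside another ticket
                      WHERE x.MachineID = s.MachineID
                        AND x.StartTime < s.StartTime
                        AND x.EndTime >= s.StartTime)
      AND NOT EXISTS (SELECT 1 FROM Tickets AS y          -- e.EndTime not inside another ticket
                      WHERE y.MachineID = e.MachineID
                        AND y.StartTime <= e.EndTime
                        AND y.EndTime > e.EndTime)
    GROUP BY s.MachineID, s.StartTime
) AS islands
GROUP BY MachineID;

With the example above (8:30-10:00 and 9:30-10:30 on Machine A) this collapses both tickets into one 8:30-10:30 block, i.e. 2 hours.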
Is there any way to tell how long this will run for, or how far it has got? I have a large database that has just had most of the data removed. The command has been running for 8 hours and I have just stopped it to let something else run quickly. Any way of telling how much longer it will take?
Sarfaraz writes: "I have captured SQL Profiler data. I was reviewing the top CPU-intensive SQL statements. The Duration (in seconds) is 1.39, 1.09, 0.16 and the CPU (in seconds) is 0.97, 0.95, 0.16 respectively for some SQL statements. How do I know what the normal baseline is for duration and CPU, in order to determine the CPU-intensive SQL statements?
Secondly, the same question for long-running procedures: duration 0.14, 0.11. What is the normal baseline here? Is this normal or too long?"
Hi guys, I am having difficulty calculating the duration from the receiving process to the shipping process. I have a table that consists of: Order#, Processes, Time_In, Time_Out. Order# can be 1, 2, 3, 4, 5, while at the same time Order# 1 can go through more than one process, i.e.: Receiving, VisualTest, MechanicalTest, ..., Shipping. Not every Order# goes through all processes, but they will all go through the receiving process and the shipping process. For each process we record the time the order# comes in and the time it finishes with that process. I need to calculate the length of time from Time_In in Receiving to Time_Out in Shipping.
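Since every order has exactly one Receiving row and one Shipping row, a hedged sketch is a self-join on the order number (the table name ProcessLog is a placeholder for the table described; the columns come from the post):

SELECT r.[Order#],
       r.Time_In  AS ReceivedAt,
       s.Time_Out AS ShippedAt,
       DATEDIFF(minute, r.Time_In, s.Time_Out) AS TotalMinutes
FROM ProcessLog AS r
JOIN ProcessLog AS s
  ON s.[Order#] = r.[Order#]
 AND s.Processes = 'Shipping'
WHERE r.Processes = 'Receiving';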
I'm using RDA (Remote Data Access) to pull 20 tables to my Pocket PC. It took quite a long time so I ran a trace to see what happened. Everything looks fine except for when it runs:
exec [mydb]..sp_primary_keys_rowset N'Person',NULL The duration is: 18446744073!!!
A couple more tables have this enormous duration; others have about 5000, which seems more normal.
I want to get the execution duration of the SSIS package and insert it into a table, so I used a variable, "duration", and set its expression to getdate(). I think that if I can get the start value of the variable and the finish value, then I can work out the duration. But can I get these two values, or is there a better way I should try?
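One hedged option that avoids the extra variable: SSIS already exposes System::StartTime (the moment the package began), so an Execute SQL Task placed at the very end of the control flow can log start, end, and duration in one insert. The log table name and the parameter mapping below are assumptions, not part of the original package:

-- Execute SQL Task over an OLE DB connection; map parameter 0 to System::PackageName,
-- parameter 1 to System::StartTime, and parameter 2 to System::StartTime again.
INSERT INTO dbo.PackageRunLog (PackageName, StartTime, EndTime, DurationSeconds)
VALUES (?, ?, GETDATE(), DATEDIFF(second, ?, GETDATE()));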