OnPostExecuteEvent And OnTaskFailed Event Execution
May 30, 2008
Hello,
I have a SQL task, and I would like to write to a log file at the end of the task execution.
If execution OK -> SUCCESS Task XXXX
If execution KO -> FAILURE Task XXXX
I use OnPostExecuteEvent and OnTaskFailed with a script task. The problem is that OnPostExecuteEvent is always fired, so when the task fails I get two lines in my log file:
SUCCESS Task XXXX
FAILURE Task XXXX
Is there a way to block OnPostExecuteEvent when the task fails? Or is there a way, in the OnPostExecuteEvent script task, to test whether there is an error message (e.g. if there is an error message, do nothing; otherwise write to the log file)?
I have a package which has two sequence containers: the first container is used to transfer data to a staging area, and the second sequence container is used to transfer from that staging area to the destination. I also set the transaction option to Required on the second sequence container. There are several Execute SQL Tasks and several Data Flow Tasks inside the two sequence containers: first sequence container (1. Execute SQL Task -> 2. Data Flow -> 3. Execute SQL Task) -> second sequence container (4. Execute SQL Task -> 5. Execute SQL Task -> 6. Data Flow -> 7. Data Flow -> 8. Execute SQL Task -> 9. Data Flow ...).
I created an ExecutionLog table which is used to log the status of this package on our SQL Server. At first the status field is null; then, while the package runs, it changes to 'in process', and after the package finishes it changes to 'success' or 'failure' depending on whether the package ran successfully or not. The package can be run only if the status is 'success', 'failure' or null, so I need to change this status field during package execution. To update the status to 'failure', I need to add an event handler that changes the status using an Execute SQL Task.
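For reference, the status change the event handler's Execute SQL Task performs is just a plain UPDATE along these lines (ExecutionLog is the table described above; the column names and the key used to find the package's row are placeholders, since I haven't listed the real schema here):

    UPDATE dbo.ExecutionLog
    SET PackageStatus = 'failure'          -- 'in process' and 'success' are written the same way at the other points
    WHERE PackageName = 'MyStagingLoad'    -- placeholder key identifying this package's row
      AND PackageStatus = 'in process'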
The first time, I put the Execute SQL Task on the OnError event handler tab (the event handler is applied at the package level). To test the event handler I use an old schema version to make sure I get a failure on '8. Execute SQL Task'. But the package seems to get stuck at '8. Execute SQL Task' inside the second sequence container (it stays yellow when I run it from SSIS) and never fires the event handler. '8. Execute SQL Task' is used to update a related table.
The second time, I removed the OnError event handler and switched to the OnTaskFailed event handler tab (again applied at the package level). But everything is the same as with the OnError event handler, except that I get the error output and the event still does not fire. Why can '8. Execute SQL Task' not fire OnError or OnTaskFailed? For this case, what kind of event handler do I need to use, and at what level do I need to apply it?
I'd like to alter the OnInformation event in order to add more parameters (such as TaskHost). Is it possible? I've tried, but the following error appears:
'OnInformation' cannot implement 'OnInformation' because it does not exist on 'Microsoft.SqlServer.Dts.Runtime.IDTSEvents'
Sub OnInformation(ByVal taskHost As TaskHost, ByVal [source] As DtsObject, _
        ByVal informationCode As Integer, ByVal subComponent As String, _
        ByVal description As String, ByVal helpFile As String, _
        ByVal helpContext As Integer, ByVal idofInterfaceWithError As String, _
        ByRef fireAgain As Boolean) Implements IDTSEvents.OnInformation
I suppose I must add an overloaded method, but how?
We know we can use the lock_deadlock and xml_deadlock_report events to capture deadlock info; however, I also want to capture the execution plans for all of the SPIDs in the deadlock graph. How can I output the execution plans to the Extended Events trace results as well? Is there an action for the execution plan, or a workaround for it? If there is no built-in action for the execution plan, can we add customized info to the Extended Events results file? For example, when a deadlock-related event happens, could we run a query to get some info and add it, along with other info such as sql_text, database name, etc., to the event trace results file? The reason is that if we also know the execution plans when the deadlock happens, it is useful for tuning the queries based on those plans to reduce deadlocks.
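For context, the kind of session I'm referring to is roughly this (a minimal sketch; the file path is a placeholder, and on SQL Server 2008 the file target is package0.asynchronous_file_target rather than package0.event_file):

    CREATE EVENT SESSION [deadlock_capture] ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report
        (ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.session_id))
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\deadlock_capture.xel');
    GO
    ALTER EVENT SESSION [deadlock_capture] ON SERVER STATE = START;
    GO

What I have not found is an action in this list that attaches the execution plans themselves, which is what the question above is about.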
If an error occurs in ScriptTaskA from PackageB, the OnError event handler in PackageA fires once and catches the event from ScriptTaskA; that is, the output of the SourceName system variable is "ScriptTaskA" from PackageB. So far so expected.
Now, the same error is handled differently by PackageA's OnTaskFailed handler. The OnTaskFailed handler fires twice - once for ScriptTaskA and once for ExecutePackageTaskA; that is, two outputs are returned - one for "ScriptTaskA" and the second one for "ExecutePackageTaskA". That's strange to me.
Why does the OnError handler only fire once and the OnTaskFailed twice? Is there a setting that does this?
My SQL Server 2005 SP4 on Windows 2008 R2 is flooded with the errors below:
Date: 10/25/2011 10:55:46 AM
Log: SQL Server (Current - 10/25/2011 10:55:00 AM)
Source: spid
Message: Event Tracing for Windows failed to send an event. Send failures with the same error code may not be reported in the future. Error ID: 0, Event class ID: 54, Cause: (null).
Is there a way I can trace where this is coming from? When I check the input buffer for these SPIDs, it looks like they are tracing everything. All the general application DMLs are coming in on these SPIDs.
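For reference, the check I'm doing against those SPIDs is roughly this (53 is just an example SPID taken from the errors):

    DBCC INPUTBUFFER (53);   -- last statement submitted by that SPID

    SELECT session_id, login_name, host_name, program_name
    FROM sys.dm_exec_sessions
    WHERE session_id = 53;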
I have been testing with the WMI Event Watcher Task, so that I can identify a change to a file. The WQL is thus:
SELECT * FROM __InstanceModificationEvent within 30 WHERE targetinstance isa 'CIM_DataFile' AND targetinstance.name = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
This polls every 30 secs, and on the SSIS event (ActionAtEvent in the WMI task is set to fire the SSIS event) I have a simple script task that shows a message box.
My understanding is that the task polls every 30 seconds and, if there is a change to the AdventureWorks.bak file, the event is triggered and the script task runs, producing the message. However, when I run the package the message appears every 30 seconds, meaning the event is continually firing even though there has been NO change to the AdventureWorks.bak file.
Am I correct in my understanding of how this should work, and if so, why is the event firing when it should not?
Windows Server 2003 SE SP1 5.2.3790, SQL Server 2000 SP4, 8.00.2187 (latest hotfix rollup). We fixed one issue, but it brought up another. The fix we applied stopped the ServicesActive access failure, but now we have a failure on MSSEARCH. The users this is affecting do NOT have admin rights on the machine; they are SQL developers. We were having:
Event Type: Failure Audit
Event Source: Security
Event Category: Object Access
Event ID: 560
Date: 5/23/2007
Time: 6:27:15 AM
User: domain\user
Computer: MACHINENAME
Description: Object Open:
    Object Server: SC Manager
    Object Type: SC_MANAGER OBJECT
    Object Name: ServicesActive
    Handle ID: -
    Operation ID: {0,1623975729}
    Process ID: 840
    Image File Name: C:\WINDOWS\system32\services.exe
    Primary User Name: MACHINE$
    Primary Domain: Domain
    Primary Logon ID: (0x0,0x3E7)
    Client User Name: User
    Client Domain: Domain
    Client Logon ID: (0x0,0x6097C608)
    Accesses: READ_CONTROL, Connect to service controller, Enumerate services, Query service database lock state
We recently upgraded to SQL 2005 from SQL 2000. We have most of our issues ironed out; however, about every minute there is a message in the Application event log and the SQL log that states:
Event ID 18456: Login failed for user DOMAIN\ACCOUNT [CLIENT: <local machine>]
This is a state 16 message, which I thought meant that the account does not have access to its default database. The account is actually the account that the SQL services run under.
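For context, the check I have in mind for the default-database angle is roughly this (DOMAIN\ACCOUNT stands in for the service account):

    SELECT name, default_database_name
    FROM sys.server_principals
    WHERE name = 'DOMAIN\ACCOUNT';

    -- If the default database no longer exists after the rebuild, repoint the login, e.g.:
    ALTER LOGIN [DOMAIN\ACCOUNT] WITH DEFAULT_DATABASE = master;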
Any ideas? We can't seem to figure this one out. We actually upgraded to 2005 from 2000 and had an error appear after every reboot that prevented the SQL Agent from running ("This application has failed to start because GAPI32.dll was not found. Re-installing the application may fix this problem."). We did a full uninstall of SQL, reinstalled fresh, and restored the databases from .bak files, and that is when the Event ID 18456 messages started occurring every minute.
We don't have any SQL heavy hitters here, so please be detailed with any possible solutions. Thank you very much for any help you can provide!
After moving off the VS debugger and into Management Studio to exercise our SQLCLR stored procedure, we notice that the second execution gets an error suggesting that our static SqlCommand object is getting reused from the first execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but we would first like to know if this is expected. Is the fact that the debugger doesn't show this behavior also expected?
Hi, I am slowly getting to grips with SQL Server. As a part of this, I have been attempting to work on producing more efficient queries. This post is regarding what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run. My brief is to produce an attendance system for an education establishment (I presume you know I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum; testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is a part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.

Table:

    CREATE TABLE [dbo].[AttendanceDets] (
        [College_Year] [smallint] NOT NULL,
        [Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
        [Session_Date] [datetime] NOT NULL,
        [Start_Time] [datetime] NOT NULL,
        [Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
    ) ON [PRIMARY]
    GO

    CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets] ON [dbo].[AttendanceDets]
        ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
    GO

    CREATE INDEX [All] ON [dbo].[AttendanceDets]
        ([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
    GO

    CREATE INDEX [IX_AttendanceDets] ON [dbo].[AttendanceDets]([Att_Code]) ON [PRIMARY]
    GO

ALL inserts are via an overnight sproc - data comes from a third-party system. Group_Code is 12 chars (no more, no less), Student_ID 8 chars (no more, no less). I have created a simple sproc. I am using this as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades - it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:

    CREATE PROCEDURE [dbo].[CAMsp_Att]
        @College_Year AS SmallInt,
        @Student_ID AS VarChar(8) = '________',
        @Group_Code AS VarChar(12) = '____________',
        @Start_Date AS DateTime = '1950/01/01',
        @End_Date AS DateTime = '2020/01/01',
        @Att_Code AS VarChar(1) = '_'
    AS
    IF @Start_Date = '1950/01/01'
        SET @Start_Date = CAST(CAST(@College_Year AS Char(4)) + '/08/31' AS DateTime)
    IF @End_Date = '2020/01/01'
        SET @End_Date = CAST(CAST(@College_Year + 1 AS Char(4)) + '/07/31' AS DateTime)
    SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
    FROM dbo.AttendanceDets
    WHERE College_Year = @College_Year
        AND Group_Code LIKE @Group_Code
        AND Student_ID LIKE @Student_ID
        AND Session_Date <= @End_Date
        AND Session_Date >= @Start_Date
        AND Att_Code LIKE @Att_Code
    GO

My confusion lies with running the below script with Show Execution Plan:

    --SET SHOWPLAN_TEXT ON
    --GO
    DECLARE @Time AS DateTime
    SET @Time = GETDATE()

    SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
    FROM AttendanceDets
    WHERE College_Year = 2005
        AND Group_Code LIKE '____________'
        AND Student_ID LIKE '________'
        AND Session_Date <= '2005-11-16'
        AND Session_Date >= '2005-11-16'
        AND Att_Code LIKE '_'

    PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'

    SET @Time = GETDATE()

    EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

    PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'
    GO
    --SET SHOWPLAN_TEXT OFF
    --GO

The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is that the cached plan substitutes literals for parameters anyway. In any case, the first query's cost is listed as 99.52% of the batch and the sproc's as 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:

    (10639 row(s) affected)
    First query took: 596 milli-Seconds
    (10639 row(s) affected)
    Second query took: 2856 milli-Seconds

I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested, but I assumed it would not shed much light. BTW - I know that additional indexes can bring the plans and execution times closer together - my question is more about the concept. If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
Here's my case: I have written a stored procedure which performs the following: 1. Grab data from a table using a cursor, 2. Process the data, 3. Write the result into another table.
If I execute the stored procedure directly (through VS.NET or Query Analyzer), it runs, but when I try to execute it via a scheduled job, it fails.
I used the same record, same parameters, and the same statements to call the stored procedure.
The benefit of the actual execution plan is that you can see the actual number of rows passing through each step, compared to the estimated number of rows. But what about the "cost percentages"? I believe I've read somewhere that these percentages are still just estimates and are not based on the real execution. Does anyone know, and preferably have a link to something that documents it? Thanks
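For what it's worth, when I want real numbers to set against those percentages I run the statements with statistics switched on, roughly like this (the SELECT is only a placeholder for the queries being compared):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.SomeTable;   -- placeholder; substitute the statements whose plans are being compared

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;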
The following two events are frequently observed in the system event log. Has anybody come across these errors?
1. The description for Event ID (318) in Source (SQLServerAgent$XYZ) could not be found. It contains the following insertion string(s):
2. The Open Procedure for service "MSSQLServer" in DLL "SQLCTR70.DLL" failed. Performance data for this service will not be available. Status code returned is DWORD 0.
I get this error in the event log on my WSUS server. I am running Windows 2000 with SQL 2000 SP4.
Connection to database failed. Reason=Cannot open database requested in login 'SUSDB'. Login fails. Login failed for user 'WSUSASPNET'. Connection string: Data Source=WSUS;Initial Catalog=SUSDB;Connection Timeout=60;Application Name=WSUS SQL Connection;Trusted_Connection=Yes;Pooling='true';Max Pool Size=100
I heard somewhere that there is a "time event wizard" or something like that in SQL, and I would like to know where I can find it. Any link or answer will be really appreciated. Thanks.
Hello all, I have a nested GridView. How do I add a Selecting event for this data source for my inner GridView? I need to increase the CommandTimeout. Thanks.

    private SqlDataSource ChildDataSource(string strCustno)
    {
        string strQRY = "Select ...";
        SqlDataSource dsTemp = new SqlDataSource();
        dsTemp.ConnectionString = ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString;
        dsTemp.SelectCommand = strQRY;
        return dsTemp;
    }
I need some clarification here. I changed the CommandField for "Delete" to a template field like this, and it works well with the extender:

    <asp:TemplateField>
        <ItemTemplate>
            <asp:LinkButton ID="lnkDelete" runat="server" Text="Delete" OnCommand="onCommand_deleter" CommandArgument="Deleting" CommandName="Delete"></asp:LinkButton>
            <ajax:ConfirmButtonExtender ID="cnfDelete" TargetControlID="lnkDelete" runat="server" ConfirmText="Are you sure you want to delete this ?"></ajax:ConfirmButtonExtender>
        </ItemTemplate>
    </asp:TemplateField>

I am confused because earlier I was handling the delete of a GridView row in the OnRowDeleting event. Now it is being handled by "onCommand_deletePlayer". Then why do I still need to handle the OnRowDeleting event? It gave me an error when I tried to avoid it. What exactly should I code in OnRowDeleting? I am checking something like this right now in OnRowDeleting, but it doesn't cancel the deletion despite raising the message:

    if (gridView.Rows.Count <= 1)
    {
        e.Cancel = true;
        lblMessage.Text = "You must keep at least one record";
        // e.Cancel does NOT work; the row gets deleted despite the message if the delete link is clicked
    }

Is there a way to stop the user from deleting the LAST and ONLY row from the db for a given condition? Thanks
We are trying to schedule a DTS package. If we run the package while the server is logged in, the job works. If we log off the server, the job fails. We looked in the NT event log and we get a message from the SQLServerAgent that the job failed with Event 208. If we check the job's run history, in the Next Run we get the message 'Date and Time is not Available'. I should mention that the DTS package contains some mappings to drives on the server. Any ideas? We have been struggling with this for a long time. Thanks.
Why would I get an Application event viewer error when I truncate the transaction logs on my databases? All the error tells me is that I truncated the database log.
I need to run a SQL job if a specific file exists. That specific file gets created at a different time on different days, and I need to run the job when the file arrives. Is there any way to do this in SQL? I have done this with the Seagate scheduler, but I need to do it in SQL. Please help!!!!!!!
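For context, the kind of check I'm imagining as the first step of a scheduled job is something like this (the file path and job name are placeholders, and xp_fileexist is an undocumented procedure, so treat this as a sketch rather than a supported approach):

    DECLARE @FileExists int
    EXEC master.dbo.xp_fileexist 'C:\Incoming\trigger.txt', @FileExists OUTPUT
    IF @FileExists = 1
    BEGIN
        -- the file has arrived, so kick off the real load job
        EXEC msdb.dbo.sp_start_job @job_name = 'LoadIncomingFile'
    END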
Hi, we would like to capture events in our system. There seem to be three obvious capture points for us - application, triggers, transaction log. The latter seems the most attractive, since we're looking for a solution with minimal performance impact. In general, our problem is similar to populating data warehouses from on-line databases. Can anyone proffer some advice? In particular, being quite new to SQL Server, I am not sure how difficult/possible it is to read the transaction log in order to cull events. Some direction here would be greatly appreciated. Thanks, Karl
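To make the trigger option concrete, a minimal sketch of what I mean (the table and column names are made up purely for illustration):

    CREATE TABLE dbo.OrderEvents (
        EventTime datetime NOT NULL DEFAULT GETDATE(),
        OrderID   int      NOT NULL,
        EventType char(1)  NOT NULL   -- 'I', 'U' or 'D'
    )
    GO

    CREATE TRIGGER trg_Orders_CaptureEvents ON dbo.Orders
    FOR INSERT, UPDATE, DELETE
    AS
        -- rows only in inserted = inserts
        INSERT INTO dbo.OrderEvents (OrderID, EventType)
        SELECT OrderID, 'I' FROM inserted
        WHERE OrderID NOT IN (SELECT OrderID FROM deleted)

        -- rows in both inserted and deleted = updates
        INSERT INTO dbo.OrderEvents (OrderID, EventType)
        SELECT OrderID, 'U' FROM inserted
        WHERE OrderID IN (SELECT OrderID FROM deleted)

        -- rows only in deleted = deletes
        INSERT INTO dbo.OrderEvents (OrderID, EventType)
        SELECT OrderID, 'D' FROM deleted
        WHERE OrderID NOT IN (SELECT OrderID FROM inserted)
    GO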
I want to experiment with setting the Truncate Log on Checkpoint option to True to see if this will lessen the chance of my transaction log running out of space. Before I do this I want to be sure that the Transaction Log is not tied to the NT Event Log, SQL Error Log, etc. For example, does the NT Event Log (or any other log) use the Transaction Log? Thanks, Kevin.
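For reference, the option I'm talking about is the database-level one set like this (the database name is a placeholder):

    EXEC sp_dboption 'MyDatabase', 'trunc. log on chkpt.', 'true'
    GO
    -- verify the current settings
    EXEC sp_dboption 'MyDatabase'
    GO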
I see these 4 events posted in my event log every second. I am using SQL 6.5.
The first event says "Login succeeded - User: probe Connection: Non-Trusted".
The second event in the log has this description: "DB-LIBRARY - SQL Server message: EXECUTE permission denied on object sp_replcounters, database master, owner dbo".
The third event has "DB-LIBRARY error - General SQL Server error: Check messages from the SQL Server."
The fourth event contains "CollectSQLPerformanceData : dbsqlsend failed".
These messages are being posted repeatedly. Could someone shed some light on this please. Thanks Sudarshan
Does anyone know if there is any software available that notifies specified people when an error above a certain level occurs? I'm thinking along the lines of email and text message.
I'm running SQL Server 2000 at the SP3a level, and the software will have to be compatible with it so as to get the event notification to pass the information on.
If there isn't any software, would anyone know of any scripts that will do this?
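The only built-in route I know of so far is a SQL Agent alert on high-severity errors with an operator notification, but I don't know whether that covers everything I need. A rough sketch of what I mean (the operator name and address are placeholders, and SQL Mail has to be configured for the email side to work):

    EXEC msdb.dbo.sp_add_operator
        @name = 'OnCallDBA',
        @enabled = 1,
        @email_address = 'dba-team@example.com'
    GO

    EXEC msdb.dbo.sp_add_alert
        @name = 'Severity 17 errors',
        @severity = 17,
        @enabled = 1
    GO

    EXEC msdb.dbo.sp_add_notification
        @alert_name = 'Severity 17 errors',
        @operator_name = 'OnCallDBA',
        @notification_method = 1   -- 1 = email
    GO

Text messages would presumably have to go through an email-to-SMS gateway, which is outside SQL Server itself.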