I am currently working on writing a progress log for my SSIS packages. So far I am able to write a new log entry and update it using the OnProgress and OnError event handlers. I'd like to take it one step further: whenever the package ends, whether it was cancelled or finished normally, I'd like to write COMPLETED_ABNORMALLY (on cancellation) or COMPLETED_NORMALLY (on a normal finish) to my logging table. I'm not sure where to begin with this process. I'd like to use a simple method and event handler.
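For reference, this is roughly the status update I have in mind; it is only a sketch, and the PackageLog table, its columns, and the variable mapping are placeholders I made up. The idea is to run it from an Execute SQL Task, with the status text and an execution id supplied from package variables:

-- Hypothetical log table: dbo.PackageLog (ExecutionId, Status, EndTime)
-- The ? parameters would be mapped to package variables holding the
-- status text ('COMPLETED_NORMALLY' or 'COMPLETED_ABNORMALLY') and the
-- execution id written when the log entry was first created.
UPDATE dbo.PackageLog
SET    Status  = ?,
       EndTime = GETDATE()
WHERE  ExecutionId = ?;

What I cannot figure out is which event handler (if any) reliably fires for both a cancel and a normal finish, so that this update always runs.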
I'd like to know if there is a way to catch the error messages when a task fails, because I'd like to store every message in a user variable so I can log all of them later. I was thinking it might be possible with the event handlers. Could it be?
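As an illustration of what I mean by logging them later: something like the statement below is what I am picturing. The ErrorLog table is just a made-up placeholder, and instead of (or in addition to) a user variable, each OnError firing could write the message straight to a table via an Execute SQL Task, with the system variables mapped to the ? parameters:

-- Hypothetical table: dbo.ErrorLog (TaskName, ErrorMessage, LoggedAt)
-- In the OnError event handler, map System::SourceName and
-- System::ErrorDescription to the two ? parameters.
INSERT INTO dbo.ErrorLog (TaskName, ErrorMessage, LoggedAt)
VALUES (?, ?, GETDATE());

My question is really whether the OnError handler fires once per error message, so that nothing gets lost.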
We are currently facing an issue in ensuring restartability of an SSIS package. The scenario is explained below.
Context: the SSIS package has two Data Flow tasks. The Data Flow task named DFT1 is the predecessor of DFT2 and is chained to it with an OnSuccess precedence constraint.
OnPreExecute and OnPostExecute event handlers have been implemented for DFT1. Every task in both event handlers, as well as DFT1 and DFT2, has FailPackageOnFailure set to True.
Scenario 1: The task in DFT1's OnPreExecute fails. DFT1 was still attempted and succeeded. DFT1's OnPostExecute was not attempted. DFT2 was not attempted. The checkpoint file was created; however, no entries were made to it.
When restarted, execution started from the first step in the control flow.
Scenario 2: The task in DFT1's OnPostExecute fails. DFT1 and its OnPreExecute event were executed. DFT2 was not attempted. The checkpoint file was created and entries were made. The entries had DTS:result of 0 for the OnPreExecute task and for DFT1.
When restarted, DFT2 was executed. The OnPostExecute event, which failed during the previous execution, was not attempted.
Every task in the package, whether it sits in the control flow or in an event handler, is crucial for seamless execution. But apparently, as explained above, the event handlers cannot be relied on in failure scenarios. Has anyone encountered a similar scenario? Is this behavior by design of the runtime engine?
I have an SSIS Package that loads data to a SQL Server table and also logs package statistics along the way with individual SQL statements. In the event of failure, I want the data loaded to the target table rolled back but I want the statistics updates saved to the database. My package consists of several Execute SQL tasks that handle the logging and a Data Flow task that loads the data to the target table along with a couple of event handlers to handle errors. I have the Transaction Option property on the Package set to Required, to Supported on the Data Flow, and to Not Supported on the Execute SQL tasks and the OnError Event Handlers.
When we run the package (and cause an error), everything runs fine until it gets to the OnError event handler for the Data Flow task. This handler hangs and never finishes. If we set the Transaction Option for the event handler to Supported (allowing it to enlist in the parent transaction), it works, but the updates it makes roll back along with the data from the Data Flow.
Is there a problem with having Event Handlers stay out of a transaction started by the parent package?
I would like to create an event handler that catches any errors that result from a sys.* catalog view (such as sys.tables) not existing. The package is designed to run on both SQL Server 2000 and SQL Server 2005, and the query against sys.tables fails when it is run on SQL Server 2000. I just need a good starting point... I would like something that, when the server isn't 2005, simply skips that server without failing the package and without counting toward the maximum error count. Thanks for any help. -Kyle
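As a rough idea of the kind of check I am after (just a sketch that I have not tested against every build), an Execute SQL Task could return a flag like the one below, and a precedence constraint expression could then skip the sys.tables query on SQL Server 2000:

-- Returns 1 when the instance is SQL Server 2005 or later, 0 otherwise.
-- SERVERPROPERTY('ProductVersion') is available on both 2000 and 2005;
-- the major version is 8 for SQL Server 2000 and 9 for SQL Server 2005.
DECLARE @version varchar(20);
SET @version = CAST(SERVERPROPERTY('ProductVersion') AS varchar(20));

SELECT CASE
         WHEN CAST(LEFT(@version, CHARINDEX('.', @version) - 1) AS int) >= 9
         THEN 1 ELSE 0
       END AS IsSql2005OrLater;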
My event handlers are not executed when my package fails; it goes directly to OnError and does not execute OnPreExecute. It was working fine previously, and I haven't changed anything. I ran it again and got this issue.
My SQL Server 2005 SP4 instance on Windows 2008 R2 is flooded with the errors below:
Date: 10/25/2011 10:55:46 AM
Log: SQL Server (Current - 10/25/2011 10:55:00 AM)
Source: spid
Message: Event Tracing for Windows failed to send an event. Send failures with the same error code may not be reported in the future. Error ID: 0, Event class ID: 54, Cause: (null).
Is there a way I can trace where this is coming from? When I check the input buffer for these SPIDs, it looks like it is tracing everything; all the general application DML statements are coming in on these SPIDs.
I have been testing with the WMI Event Watcher Task, so that I can identify a change to a file. The WQL is thus:
SELECT * FROM __InstanceModificationEvent within 30 WHERE targetinstance isa 'CIM_DataFile' AND targetinstance.name = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
This polls every 30 seconds, and in the SSIS event (ActionAtEvent in the WMI Event Watcher Task is set to fire the SSIS event) I have a simple Script Task that shows a message box.
My understanding is that the task polls every 30 seconds and, if there is a change to the AdventureWorks.bak file, the event is triggered and the Script Task runs, producing the message. However, when I run the package the message appears every 30 seconds, meaning the event fires continually even though there has been NO change to the AdventureWorks.bak file.
Am I correct in my understanding of how this should work, and if so, why is the event firing when it should not?
Windows Server 2003 SE SP1 (5.2.3790), SQL Server 2000 SP4, 8.00.2187 (latest hotfix rollup). We fixed one issue, but it brought up another: the fix we applied stopped the ServicesActive access failure, but now we have a failure on MSSEARCH. The users this is affecting do NOT have admin rights on the machine; they are SQL developers. We were having the following:
Event Type: Failure Audit
Event Source: Security
Event Category: Object Access
Event ID: 560
Date: 5/23/2007
Time: 6:27:15 AM
User: domain\user
Computer: MACHINENAME
Description:
Object Open:
  Object Server: SC Manager
  Object Type: SC_MANAGER OBJECT
  Object Name: ServicesActive
  Handle ID: -
  Operation ID: {0,1623975729}
  Process ID: 840
  Image File Name: C:\WINDOWS\system32\services.exe
  Primary User Name: MACHINE$
  Primary Domain: Domain
  Primary Logon ID: (0x0,0x3E7)
  Client User Name: User
  Client Domain: Domain
  Client Logon ID: (0x0,0x6097C608)
  Accesses: READ_CONTROL
            Connect to service controller
            Enumerate services
            Query service database lock state
We recently upgraded to SQL 2005 from SQL 2000. We have most of our issues ironed out; however, about once every minute there is a message in the Application event log and the SQL log that states:
Event ID 18456: Login failed for user 'DOMAIN\ACCOUNT'. [CLIENT: <local machine>]
This is a state 16 message which I thought meant that the account does not have access to the default database. The account is actually the account that the SQL services run under.
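If the state 16 explanation is right, I assume something along these lines would show (and, if needed, change) the default database for that login; the login name below is just the placeholder from the message above:

-- Check the default database recorded for the service account's login.
SELECT name, dbname AS default_database
FROM   master.dbo.syslogins
WHERE  name = 'DOMAIN\ACCOUNT';

-- If it points at a database that no longer exists, repoint it
-- (SQL Server 2005 syntax).
ALTER LOGIN [DOMAIN\ACCOUNT] WITH DEFAULT_DATABASE = master;

But since the account is the one the SQL services run under, I am not sure that is really the cause.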
Any ideas? We can't seem to figure this one out. We actually upgraded to 2005 from 2000 and had an error appear after every reboot that prevented the SQL Agent from running ("This application has failed to start because GAPI32.dll was not found. Re-installing the application may fix this problem."). We did a full uninstall of SQL, reinstalled fresh, and restored the databases from .bak files, and that is when the Event ID 18456 errors started occurring every minute.
We don't have any SQL heavy hitters here, so please be detailed with any possible solutions. Thank you very much for any help you can provide!
The following two events are frequently observed in the system event log. Has anybody come across these errors?
1. The description for Event ID (318) in Source (SQLServerAgent$XYZ) could not be found. It contains the following insertion string(s):
2. The Open Procedure for service "MSSQLServer" in DLL "SQLCTR70.DLL" failed. Performance data for this service will not be available. Status code returned is DWORD 0.
I get this error in my event log on my WSUS server. I am running Windows 2000 with SQL Server 2000 SP4.
Connection to database failed. Reason=Cannot open database requested in login 'SUSDB'. Login fails. Login failed for user 'WSUSASPNET'. Connection string: Data Source=WSUS;Initial Catalog=SUSDB;Connection Timeout=60;Application Name=WSUS SQL Connection;Trusted_Connection=Yes;Pooling='true';Max Pool Size=100
I heard somewhere that there is a "time event wizard" or something like that in SQL Server, and I would like to know where I can find it. Any link or answer will be really appreciated. Thanks.
Hello all, I have a nested GridView. How do I add a Selecting event handler to the data source for my inner GridView? I need it because I have to increase the CommandTimeout. Thanks.

private SqlDataSource ChildDataSource(string strCustno)
{
    string strQRY = "Select ...";
    SqlDataSource dsTemp = new SqlDataSource();
    dsTemp.ConnectionString = ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString;
    dsTemp.SelectCommand = strQRY;
    return dsTemp;
}
I need some clarification here. I changed the CommandField for 'Delete' to a template field like this, and it is working well with the extender:

<asp:TemplateField>
  <ItemTemplate>
    <asp:LinkButton ID="lnkDelete" runat="server" Text="Delete" OnCommand="onCommand_deleter" CommandArgument="Deleting" CommandName="Delete"></asp:LinkButton>
    <ajax:ConfirmButtonExtender ID="cnfDelete" TargetControlID="lnkDelete" runat="server" ConfirmText="Are you sure you want to delete this ?"></ajax:ConfirmButtonExtender>
  </ItemTemplate>
</asp:TemplateField>

I am confused because earlier I was handling the deletion of a GridView row in the OnRowDeleting event, and now it is being handled by "onCommand_deletePlayer". Why do I still need to handle the OnRowDeleting event? It gave me an error when I tried to leave it out. What exactly should I code in OnRowDeleting? Right now I am checking something like the following in OnRowDeleting, but it does not cancel the deletion despite raising the message:

if (gridView.Rows.Count <= 1)
{
    e.Cancel = true;
    lblMessage.Text = "You must keep at least one record";
    // e.Cancel does NOT work; the row still gets deleted despite the message when the Delete link is clicked.
}

Is there a way to stop the user from deleting the LAST and ONLY row from the database for a given condition? Thanks.
We are trying to schedule a DTS package. If we run the package while logged in to the server, the job works. If we log off the server, the job fails. We looked in the NT event log and we get a message from SQLServerAgent that the job failed with Event 208. If we check the job's run history, the Next Run column shows 'Date and Time is not Available'. I should mention that the DTS package contains some mappings to drives on the server. Any ideas? We have been struggling with this for a long time. Thanks.
Why would I get an application event viewer error when I truncate the transaction logs on my databases? All the error tells me is that I truncated the database log.
I need to run a SQL job if a specific file exists. That file gets created at a different time on different days, and I need to run the job when the file arrives. Is there any way to do this in SQL Server? I have done this in the Seagate scheduler, but I need to do it in SQL Server. Please help!
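The rough idea I keep coming back to is a small polling job step like the sketch below, scheduled every few minutes by SQL Server Agent; the file path and job name are placeholders, and I am not sure this is the best approach:

-- xp_fileexist reports whether the file is present (1 = exists);
-- if it is, start the job that actually processes it.
DECLARE @exists int;
EXEC master.dbo.xp_fileexist 'C:\Inbound\datafile.csv', @exists OUTPUT;

IF @exists = 1
    EXEC msdb.dbo.sp_start_job @job_name = 'Load inbound file';

If there is a cleaner built-in way to do this, I would rather use that.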
Hi, we would like to capture events in our system. There seem to be three obvious capture points for us: the application, triggers, and the transaction log. The latter seems the most attractive, since we're looking for a solution with minimal performance impact. In general, our problem is similar to populating data warehouses from on-line databases. Can anyone proffer some advice? In particular, being quite new to SQL Server, I am not sure how difficult (or possible) it is to read the transaction log in order to cull events. Some direction here would be greatly appreciated. Thanks, Karl
I want to experiment with setting the Truncate Log on Checkpoint option to True to see if this will lessen the chance of my transaction log running out of space. Before I do this I want to be sure that the Transaction Log is not tied to the NT Event Log, SQL Error Log, etc. For example, does the NT Event Log (or any other log) use the Transaction Log? Thanks, Kevin.
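For what it's worth, this is how I was planning to flip the option (the database name is a placeholder), so the question is purely about whether doing so touches any of the other logs:

-- Turn on 'trunc. log on chkpt.' for one database; the transaction log
-- will then be truncated automatically at each checkpoint.
EXEC sp_dboption 'MyDatabase', 'trunc. log on chkpt.', 'true';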
I see these four events posted in my event log every second. I am using SQL Server 6.5. The first event says "Login succeeded - User: probe Connection: Non-Trusted". The second event in the log has this description: "DB-LIBRARY - SQL Server message: EXECUTE permission denied on object sp_replcounters, database master, owner dbo". The third event has "DB-LIBRARY error - General SQL Server error: Check messages from the SQL Server." and the fourth event contains "CollectSQLPerformanceData : dbsqlsend failed".
These messages are being posted repeatedly. Could someone shed some light on this, please? Thanks, Sudarshan
Does anyone know if there is any software available that notifies specified people when an error above a certain severity level occurs? I'm thinking along the lines of email and text messages.
I'm running SQL Server 2000 at the SP3a level, and the software will have to be compatible with it in order to receive the event notification and pass the information on.
If there isn't any software, would anyone know of any scripts that will do this?
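To frame what I mean by scripts: the sort of starting point I have been looking at is the built-in SQL Server Agent alerting shown below. The operator name and address are placeholders, it needs SQL Mail/SQLAgentMail configured for the e-mail to actually go out, and I do not think it covers text messages on its own:

-- Define an operator to receive notifications.
EXEC msdb.dbo.sp_add_operator
     @name = 'OnCallDBA',
     @email_address = 'dba@example.com';

-- Raise an alert for any severity 19 error (one alert per severity level
-- would be needed to cover a range) and notify the operator by e-mail.
EXEC msdb.dbo.sp_add_alert
     @name = 'Severity 19 errors',
     @severity = 19;

EXEC msdb.dbo.sp_add_notification
     @alert_name = 'Severity 19 errors',
     @operator_name = 'OnCallDBA',
     @notification_method = 1;  -- 1 = e-mail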
We have a data import t-sql job which runs every morning to extract data from a large Unix database. This has worked fine on an old server which ran the same SQL version and service pack.
The t-sql uses a linked server and simply truncates the local tables and reimports the whole lot. Appreciate there are other ways to do this but it works fine for a medium sized business and is not mission critical.
However, since we moved this onto the new server, it throws an error in the event log when the job finishes. The job actually completes and is reported as successful, but at exactly the same time the following error message appears in the event log:
Error: 0, Severity: 19, State: 0 SqlDumpExceptionHandler: Process 14 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
INSERT INTO arista_caclient SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM caclient WHERE cl_datopn>=''01/01/1900'' OR cl_datopn is null')
INSERT INTO arista_camatgrp SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM camatgrp WHERE (mg_datcls>=''01/01/1900'' OR mg_datcls is null) AND (mg_datopn>=''01/01/1900'' OR mg_datopn is null)')
INSERT INTO arista_camatter SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM camatter WHERE (mt_estcmp>=''01/01/1900'' OR mt_estcmp is null)')
INSERT INTO arista_cabilhis SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cabilhis WHERE (bh_bildat>=''01/01/1900'' OR bh_bildat is null) AND (bh_laspay>=''01/01/1900'' OR bh_laspay is null) AND (bh_rundat>=''01/01/1900'' OR bh_rundat is null) AND (bh_remdat>=''01/01/1900'' OR bh_remdat is null)')
INSERT INTO arista_cablaloc SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cablaloc')
INSERT INTO arista_cafintrn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cafintrn WHERE (tr_trdate>=''01/01/1900'' OR tr_trdate is null)')
INSERT INTO arista_cadescrp SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cadescrp')
INSERT INTO arista_catimtrn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM catimtrn WHERE (tt_trndat>=''01/01/1900'' OR tt_trndat is null)')
INSERT INTO arista_cafeextn SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cafeextn')
INSERT INTO arista_fmsaddr SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM fmsaddr')
INSERT INTO arista_cabilloi SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM cabilloi')
INSERT INTO arista_caferate SELECT * FROM OPENQUERY(arista_ODBClink,'SELECT * FROM caferate')
I want to monitor updates on a certain table using an alert. When the table gets updated (any update), the alert will trigger a job.
By the way, I'm not allowed to add a trigger on the table, hence my idea to use an alert. Somehow I think I should use a WMI event alert, but I'm not sure which server event I should choose. In fact, I'm not even sure I should use a WMI event alert.
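One fallback I have considered, in case a WMI event alert turns out not to fit, is a plain polling approach instead: a small Agent job that checks the table for changes on a schedule and starts the real job when it sees one. The sketch below assumes a rowversion column on the table and a one-row tracking table, and every name in it is a placeholder:

-- Polling step scheduled by SQL Server Agent.
DECLARE @current binary(8), @last binary(8);

-- Latest rowversion value in the watched table.
SELECT TOP 1 @current = row_ver FROM dbo.WatchedTable ORDER BY row_ver DESC;

-- Last value seen, stored in a one-row tracker table.
SELECT @last = LastRowVersion FROM dbo.WatchedTableTracker;

IF @current IS NOT NULL AND (@last IS NULL OR @current > @last)
BEGIN
    UPDATE dbo.WatchedTableTracker SET LastRowVersion = @current;
    EXEC msdb.dbo.sp_start_job @job_name = 'React to table update';
END

I would still prefer a genuinely event-driven alert if one exists for plain updates.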