I have a simple SSIS package running under SQL 2005 SP2a.
In the data flow, I have several lookup transforms with error outputs. Each error output links to its own audit transform and then writes data to its own flat file destination.
After the data flow is complete, my control flow will then use a ForEach file container to mail the flat files to the source system data owners with a message about incomplete/inconsistent source data. I am effectively providing feedback to the source owners so that we may improve the quality of data that gets sent to us.
My problem is that the flat file (which contains the offending rows) seems to get created every time, even when there were no errors in the lookup in question. Thus my ForEach container will always send a mail to the data source teams even if there were no errors, as the error flat file will always exist, albeit empty.
How can I stop this happening? How can I only create flat files when there really were errors? How can I prevent the source teams from receiving feedback emails when there is no reason to?
I have a SQL2K/VB.NET 2005-based website that uses a complex search query, whose results contain additional logic to be evaluated. There are thousands of records and growing, so it is not feasible to code this within the program... it must be evaluated inline or after the query, and it is also not feasible to set up additional fields and tables to handle the logic.
For a very general example, in the .NET code the following variables are recognized: Sex="M", Paid=4350.00, Outstanding=28000.50.
One of the query result fields will contain the additional logic to evaluate, and another will give the type of expression:

TYPE  EVAL EXPR
BOOL  Sex='F' and Paid/Outstanding < 27.50
BOOL  Sex='M' and Paid/Outstanding < 38 or Sex='F'
INT   Paid*52.33
In other words, the thousands of records being returned each have their own additional logic to evaluate. Is there a way this can be done by importing the variables into SQL Server and testing them during the query?
If not, is there a way I can run the code in the middle of .NET? I know I could run scripted code in classic ASP, but ASP.NET is compiled, so I don't know if it can be done there...
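If the evaluation can stay in SQL Server, one route is dynamic SQL: pass the record's variable values in as parameters and splice the stored expression into a CASE. A minimal sketch, assuming a hypothetical Rules table holding the expressions (with the usual caveat that @Expr is concatenated into the statement, so it must come from a trusted source):

DECLARE @Sex char(1), @Paid money, @Outstanding money
SELECT @Sex = 'M', @Paid = 4350.00, @Outstanding = 28000.50

DECLARE @Expr nvarchar(4000), @Sql nvarchar(4000), @Result bit
SELECT @Expr = Expr FROM Rules WHERE RuleId = 1   -- e.g. Sex='F' and Paid/Outstanding < 27.50

SET @Sql = N'SELECT @Result = CASE WHEN ' + @Expr + N' THEN 1 ELSE 0 END ' +
           N'FROM (SELECT @Sex AS Sex, @Paid AS Paid, @Outstanding AS Outstanding) AS v'

EXEC sp_executesql @Sql,
     N'@Sex char(1), @Paid money, @Outstanding money, @Result bit OUTPUT',
     @Sex, @Paid, @Outstanding, @Result OUTPUT

SELECT @Result   -- 1 when the stored BOOL expression holds for these values

This evaluates one expression per call, so for thousands of rows it would have to run row by row; slow, but it keeps the logic in the database.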
I have a Lookup task to determine whether source data should be updated in or inserted into the customer table. After the Lookup task, the error output pipeline redirects rows to insert new data into the table, and the match output pipeline updates the customer table. But these two branches process at the same time, which causes the process to stall and never end.
The job is similar to what the Slowly Changing Dimension transform does, but it won't update the table at the same time.
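One workaround, if the two branches are blocking each other on the destination, is to land all rows in a staging table in the data flow and do the insert/update split in T-SQL afterwards, so the two statements run one after the other. A minimal sketch with hypothetical dbo.Customer and dbo.CustomerStaging tables:

BEGIN TRAN

UPDATE c
SET    c.CustomerName = s.CustomerName
FROM   dbo.Customer AS c
JOIN   dbo.CustomerStaging AS s ON s.CustomerKey = c.CustomerKey

INSERT INTO dbo.Customer (CustomerKey, CustomerName)
SELECT s.CustomerKey, s.CustomerName
FROM   dbo.CustomerStaging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Customer AS c
                   WHERE c.CustomerKey = s.CustomerKey)

COMMIT TRAN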
Declare @mx int
Set @mx = (select max(score) from ... where ... and substring(city,1,1) = 'g')

select score, @mx
from ...
where ... and substring(city,1,1) = 'g'
group by score
go

This seems to work for small tables, but when the table is big and the substring(city,1,1) = 'g' condition is not met, it takes an awful amount of time, as if it goes into an infinite loop.
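A single-statement rewrite may also behave better here, since the max is computed once in a derived table instead of a separate query. A sketch, with a hypothetical MyTable standing in for the elided table and filter:

select t.score, m.mx
from MyTable as t
cross join (select max(score) as mx
            from MyTable
            where substring(city,1,1) = 'g') as m
where substring(city,1,1) = 'g'
group by t.score, m.mx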
I am having a little trouble with something very simple:
I have a stored procedure with the following sql in it:
UPDATE mfsWS_CustNewCustomer SET Result_Id = 0 WHERE QueryGUID = @Guid
IF (@@ERROR <> 0)
BEGIN
RAISERROR ('Error setting Result_Id', 16, 1)
UPDATE mfsWS_CustNewCustomer SET Result_Id = 1 WHERE QueryGUID = @Guid
RETURN
END
To test whether this works, I set Result_Id = 'X'; this is a smallint column, so the update should fail.
My issue is that all I get is the error:
cannot convert char to numeric etc etc
However, the @@ERROR logic is not executed: my error is not raised and the second update does not happen. It is as though the entire stored procedure aborts at the point where I try to do the bogus update.
Is there something I should SET at the beginning of my stored procedure or something? How do I ensure that any error will be passed to the logic that checks @@ERROR and performs the appropriate actions? This is SS2000, so I cannot use TRY...CATCH.
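For what it's worth, in SQL Server 2000 a conversion error like this one is batch-aborting rather than statement-terminating, so execution never reaches the @@ERROR test, and there is no SET option that makes it trappable. The usual workaround is to validate the value before attempting the UPDATE; a sketch, with a hypothetical @NewResultId standing in for the value being assigned:

IF ISNUMERIC(@NewResultId) = 0
BEGIN
    RAISERROR ('Error setting Result_Id: value is not numeric', 16, 1)
    UPDATE mfsWS_CustNewCustomer SET Result_Id = 1 WHERE QueryGUID = @Guid
    RETURN
END

UPDATE mfsWS_CustNewCustomer SET Result_Id = @NewResultId WHERE QueryGUID = @Guid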
I have a SQL Server automatic job set up containing two steps. The first step is an Operating System Command (CmdExec) type step which runs a .bat file that copies a backup file across to this machine. The second step is a Transact-SQL Script (TSQL) type step which runs the RESTORE DATABASE ... REPLACE command to restore the backup file copied across in step 1.

Step 1's advanced options say: On Success Action: Goto Step [2] Restore the database. Step 2's advanced options say: On Success Action: Quit the job reporting success.

On clicking OK I receive the error message: "WARNING: The following job step(s) cannot be reached with current flow logic: [1] Transfer the Backup File. Are you sure this is what you want". The job fails to run.

Both steps do run successfully when set up as two separate jobs - not like this, with the two steps in one job. Anyone have any ideas as to what the flow logic problem is?

Thiko!
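That warning usually means the job's flow can never reach step 1, most often because the job's start step is set to step 2 or the steps were created in the wrong order. If the GUI keeps fighting you, the same flow logic can be set explicitly in T-SQL; a sketch with a hypothetical job name:

EXEC msdb.dbo.sp_update_job
     @job_name = N'Restore Nightly Backup',
     @start_step_id = 1

EXEC msdb.dbo.sp_update_jobstep
     @job_name = N'Restore Nightly Backup',
     @step_id = 1,
     @on_success_action = 3    -- 3 = go to the next step

EXEC msdb.dbo.sp_update_jobstep
     @job_name = N'Restore Nightly Backup',
     @step_id = 2,
     @on_success_action = 1    -- 1 = quit the job reporting success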
Every few days I'm getting an error in the application log, event ID 824, saying 'SQL Server detected a logical consistency-based I/O error: incorrect checksum'. It then recommends running a full database consistency check (DBCC CHECKDB).
I have run DBCC CHECKDB ('databasename') and it returns this error:

Msg 8697, Level 16, State 215, Line 1
An internal error occurred in DBCC which prevented further processing.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.

I have oodles of disk space left. Any ideas on where to go from here?
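Msg 8921 usually points at the drive hosting tempdb rather than the drive hosting the database being checked. Two hedged checks worth running, assuming a database named MyDatabase:

-- How much tempdb space will CHECKDB need for this database?
DBCC CHECKDB ('MyDatabase') WITH ESTIMATEONLY

-- Where does tempdb live, and can it grow?
USE tempdb
EXEC sp_helpfile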
I'd like to deploy something across my environment, which is a mix of 2008R2 and 2012 servers, to give some information on log files. I'm running into a silly issue right off the bat: the table that DBCC LogInfo() conjures out of magic differs between the two; in 2012 it gained the RecoveryUnitID column. So I'm trying to write some logic to create a temp table based on which version is running, and I would like to avoid a global temp table if possible. Here's what I've tried:
sp_executesql creates the table outside of the scope of my session:

DECLARE @PrVers NVARCHAR(128)
      , @PrVersNum DECIMAL(10,2)
      , @StageTable NVARCHAR(1024) = N''
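One way to sidestep the scoping problem entirely is to create #loginfo once in the outer scope using the wider 2012 shape, then drop RecoveryUnitID when the server is older; no sp_executesql involved. A sketch (verify the column list against your builds before trusting it):

CREATE TABLE #loginfo (
    RecoveryUnitID INT NULL,
    FileID         INT,
    FileSize       BIGINT,
    StartOffset    BIGINT,
    FSeqNo         INT,
    Status         INT,
    Parity         TINYINT,
    CreateLSN      NUMERIC(25, 0)
)

DECLARE @major INT =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128)), 4) AS INT)

IF @major < 11    -- pre-2012: DBCC LOGINFO has no RecoveryUnitID column
    ALTER TABLE #loginfo DROP COLUMN RecoveryUnitID

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO()')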
I have a weird situation here. I tried to load a Unicode file with a Flat File Source component. One of the file's lines has data like any other line but also contains the character "ÿ", which I can't see, or find and replace with an empty string. The source component parses the line correctly, but if there is a data-type error in this line, the error output for that line gives me this character "ÿ" instead of the original line.
Simply put, the error output of the Flat File Source component fails to return the original line when the line contains a hidden "ÿ".
Hi guys, I'm a newbie to these forums, so hopefully I've posted in the right place; here goes...
My company has been given a CMS to look at. Everything seems to be working except for this beastly stored procedure. Its purpose is to gather data and output it as XML, but for some reason I haven't a clue about, I am getting the error stated below. I have also attached the stored procedure in the hope that some kind guru can help me.
Appreciate any and all help, guys. Andy
ERROR MESSAGE: Microsoft OLE DB Provider for SQL Server error '80040e21' Parent tag ID 1 is not among the open tags. FOR XML EXPLICIT requires parent tags to be opened first. Check the ordering of the result set.
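That error means a child row arrived in the result set before its parent row was open: FOR XML EXPLICIT consumes rows strictly in order, so the ORDER BY must guarantee that every parent row precedes its children. A minimal sketch of the required shape, with hypothetical Customers and Orders tables:

SELECT 1            AS Tag,
       NULL         AS Parent,
       c.CustomerID AS [Customer!1!Id],
       NULL         AS [Order!2!Id]
FROM   Customers AS c
UNION ALL
SELECT 2, 1, o.CustomerID, o.OrderID
FROM   Orders AS o
ORDER BY [Customer!1!Id], [Order!2!Id]    -- NULL sorts first, so parents come first
FOR XML EXPLICIT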
I would like to get the actual name of the column that has the error. Using the ErrorColumn (int value), I thought there would be some type of lookup collection based on the input (like column names). If there is, can someone tell me how to get to it?
I have my error output writing to a stored proc, but instead of "32226" as the column name, I need to have the actual name of the column. I am going from Flat File to OLE DB Destination. I have a Script Component getting the output to write to my sproc, and I just need to get the column name.
I used the oTable.Script method to output the DDL for a Sql Server 2000 user defined table. The result is DDL with an error. I don't think the problem is with DMO itself so I posted this here. Note the 'TEXTIMAGE_ON ' clause. The table does not have a text column, and the DDL will not execute.
CREATE TABLE [MyTable] (
    [FilterID] [int] IDENTITY (1, 1) NOT NULL ,
    [LoanAgentID] [int] NOT NULL ,
    [Key] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [IsActive] [int] NOT NULL ,
    [tStamp] [timestamp] NOT NULL ,
    [Formula] [varchar] (4000) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [_AuditAddDt] [datetime] NOT NULL ,
    [_AuditUpdateDt] [datetime] NOT NULL ,
    CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ( [FilterID] ) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Server: Msg 1709, Level 16, State 1, Line 1 Cannot use TEXTIMAGE_ON when a table has no text, ntext, or image columns.
What is the purpose of the error output on an OLE DB Source component? Any SQL that causes an error, such as converting a character to a number or division by zero, makes the OLE DB Source component fail regardless of the error output settings. It works perfectly for the OLE DB Destination, but I cannot come up with any scenario where it works for the OLE DB Source component.
I am using SSIS to load a lot of Excel and CSV files. Some of the files fail for various formatting/validation reasons. Is there a good way to capture the errors and generate a readable error report, so the providers can easily find and correct the data files?
The error log of the package is difficult to read.
In the Input and Output Properties tab under the Advanced Editor for an OLE DB Source, I cannot remove columns. I copied this source from a standard template and have made the normal changes to make it work. However, I keep getting this error:

Error: 0xC020837B at Load Server Security, OLE DB Source [1]: The output column "DBName" (1632) on the error output has no corresponding output column on the non-error output.
Error: 0xC004706B at Load Server Security, DTS.Pipeline: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_ISBROKEN".

DBName, of course, is one of the columns that no longer exists, but I can't remove it. Whenever I try to remove one of the columns, I get this error:

Error at Load Server Security [OLE DB Source[1]]: The column cannot be deleted. The component does not allow columns to be deleted from this input or output.

Is there anything that I can do to remove the columns? Is there just a simple setting I can change to make this work?
I realize I have a question about what constitutes an "error" for an error output.
For example, a flat file source has an error output for "bad rows", that is, when it encounters "unexpected data". What specifically is "unexpected data"? Is this documented somewhere?
Another example would be an OLE DB source that uses a query to retrieve rows. This, too, has an error output, but I realize I have no clue what would constitute bad data from a table. I mean, data in a table is just data, so what would constitute an error from an OLE DB source? I can't think of one thing. Where are these "rules" documented, if anywhere?
I want to automate the dbcc checkdb process. I create a temp table called #CheckDbTbl and run the following command:
INSERT INTO #CheckDBTbl EXEC ('DBCC CHECKDB (MyDbName) WITH TABLERESULTS')
I plan to send myself an email if any problems are found.
Does anyone know what Error numbers or Levels or anything else I should look for in the #CheckDBTbl that will tell me a problem exists? Right now I'm only checking for: Level >= 16.
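A hedged sketch of the whole pattern: the INSERT target must exactly match the column list that WITH TABLERESULTS emits, and that list varies by version, so run the DBCC ad hoc and check the output shape on your build before relying on this. On SQL 2000 it is roughly:

CREATE TABLE #CheckDbTbl (
    Error INT, [Level] INT, [State] INT,
    MessageText VARCHAR(7000), RepairLevel VARCHAR(7000),
    [Status] INT, DbId INT, Id INT, IndId INT,
    [File] INT, Page INT, Slot INT,
    RefFile INT, RefPage INT, RefSlot INT, Allocation INT
)

INSERT INTO #CheckDbTbl
EXEC ('DBCC CHECKDB (MyDbName) WITH TABLERESULTS')

-- Real corruption generally shows up as severity >= 16 and/or a non-null RepairLevel
IF EXISTS (SELECT 1 FROM #CheckDbTbl
           WHERE [Level] >= 16 OR RepairLevel IS NOT NULL)
    PRINT 'Problems found - send the mail'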
I have a C# app. This is a piece of code out of a stored procedure, and it is erroring: "Procedure or function getTopParentDealerFromChildDealer has too many arguments specified", or "@dealerID is not a parameter for procedure getTopParentDealerFromChildDealer" (if I put ", @dealerID = @parentID").
I have tried all combinations: "@dealerID", "@dealerID = @parentID", etc.
BEGIN
    -- get the top parent dealerID
    DECLARE @parentID INT
    SET @parentID = 0
    EXEC getTopParentDealerFromChildDealer @dealerID, @parentID OUTPUT
    IF (@parentID > 0)
    BEGIN

------------------------------------------------------
-- here is getTopParentDealerFromChildDealer as called
------------------------------------------------------
ALTER PROCEDURE getTopParentDealerFromChildDealer
    @childDealerID INT
AS

SET NOCOUNT ON
DECLARE @dealerID INT
DECLARE @parentID INT
SET @dealerID = 0
SELECT @dealerID = dealerParentID FROM dealerRelations WHERE dealerChildID = @childDealerID

WHILE @dealerID <> 0
BEGIN
    DECLARE @temp INT
    SET @temp = @dealerID
    IF (SELECT COUNT(dealerParentID) FROM dealerRelations WHERE dealerChildID = @temp) >= 1
    BEGIN
        SELECT @dealerID = dealerParentID FROM dealerRelations WHERE dealerChildID = @temp
    END
    ELSE
    BEGIN
        SET @dealerID = 0
        SET @parentID = @temp
    END
END

IF (@parentID IS NULL)
BEGIN
    SET @parentID = 0
    -- SET @parentID = @dealerID
END

RETURN @parentID
I don't usually use stored procedures but the job I have taken over previously used them. Any help would be much appreciated.
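As posted, getTopParentDealerFromChildDealer declares only @childDealerID, so any two-argument call fails with exactly those errors. Since the proc already ends with RETURN @parentID, there are two straightforward fixes; a sketch based on the code shown:

-- Option 1: capture the RETURN value; no change to the proc needed
DECLARE @parentID INT
EXEC @parentID = getTopParentDealerFromChildDealer @dealerID

-- Option 2: add a real OUTPUT parameter, matching the existing call
-- ALTER PROCEDURE getTopParentDealerFromChildDealer
--     @childDealerID INT,
--     @parentID      INT OUTPUT
-- AS ... (and drop the proc's local DECLARE @parentID)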
I have the following stored procedure working with an Access 2000 front end. The output parameters returned to Access are both Null when the record is successfully updated (ie when @@Rowcount = 1), but the correct parameters are returned when the update fails. I'm a bit new to using output parameters, but I have them working perfectly with an insert sproc, and they look basically the same. What bonehead error have I made here? The fact that the record is updated indicates to me that the Commit Trans line is being executed, so why aren't the 2 output parameters set?
TIA
EDIT: Solved, sort of. I found that dropping the "@ResNum +" from "@ResNum + ' Updated'" resolved the problem (@ResNum is an input parameter). This implies that the variable lost its value between the SQL statement and the If/Then, since the SQL correctly updates only the appropriate record from the WHERE clause. Is this supposed to happen? I looked in BOL, and if it's addressed there I missed it.
CREATE PROCEDURE [procResUpdate]
Various input parameters here,
@RetCode as int Output, @RetResNum as nvarchar(15) Output
AS
Declare @RowCounter int
Begin Tran
UPDATE tblReservations
SET    -- various SET statements here,
       LastModified = @LastModified + 1
WHERE  ResNum = @ResNum AND LastModified = @LastModified

SELECT @RowCounter = @@ROWCOUNT

If @RowCounter = 1
Begin
    Commit Tran
    Select @RetCode = 1
    Select @RetResNum = @ResNum + ' Updated'
End
Else
Begin
    Rollback Tran
    Select @RetCode = 0
    Select @RetResNum = 'Update Failed'
End
GO
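One hedged guess at the cause: under the default CONCAT_NULL_YIELDS_NULL behavior, if @ResNum arrives as NULL (or an implicit conversion nulls it out), the whole concatenation becomes NULL even though the update itself succeeded. A defensive version of the assignment:

-- Guard: without ISNULL, a NULL @ResNum makes the entire result NULL
Select @RetResNum = ISNULL(@ResNum, N'') + N' Updated'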
I am new to SSIS and am developing a component which pulls data from a staging table and drops the rows into another table in the same database.
I am using:
1) an OLE DB Source to get the data from the staging table;
2) an OLE DB Destination to insert the data into another table in the same database;
3) a Script Component to get the error rows and to update a staging table column with a flag value.
Rows that throw an error, such as a primary key violation or any other error, should be redirected to the Script Component, and then the process should run to completion.
The Error Output of the OLE DB Destination doesn't show any columns to be selected for the Redirect Row option.
The script executes without any error and the records are shown in the error path, but the records are not updated in the DB.
This is what I have in the script:
Imports System.Data.SqlClient
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Dim sqlConn As SqlConnection
    Dim sqlCmd As SqlCommand
    Dim connMgr As IDTSConnectionManager90
    Dim txnIdParam As SqlParameter
    Dim errorDescParam As SqlParameter

    Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
        ' Standard pattern for an ADO.NET connection manager; assumes the
        ' component has a connection named "Connection".
        connMgr = Me.Connections.Connection
        sqlConn = CType(connMgr.AcquireConnection(Transaction), SqlConnection)
    End Sub
When configuring error output, I want everything that is good in the row to make it to the destination, and then the offending column that is causing an error to be set to NULL, and then sent to the destination as well. In addition, I want to take the offending column's data, and route it over to an error holding table. I know about the ability to redirect the whole row, but I just sort of want to redirect just that column. For example....
Have a table with 5 columns
col1 int null,
col2 int null,
col3 char(3) null,
col4 bit null,
col5 int null
My data flow loads data from a flat file and has a record that looks like this
1 5 ABC R 3
I want the row to make it to the destination as follows....
1 5 ABC NULL 3
Then the offending data needs to go over to my error table
I am developing a custom destination component and I have encountered a few areas where there seems to be a lack of helpful documentation and examples.
1. I have not been able to find any information on or examples of creating custom destinations with an error output. The OLE DB Destination has an error output so I investigated the input and error output properties in the advanced editor and found that the OLE DB Destination error output is synchronous with the input (its SynchronousInputID matches the input's ID) and has its ExclusionGroup value set to 1. Using this information, I modeled my error output after the OLE DB Destination.
Shortly after I start my SSIS package and it encounters an error row, I get the following exception:

[My Destination Adapter 1 [3512]] Error: System.ArgumentException: Value does not fall within the expected range.
   at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSBuffer90.DirectErrorRow(Int32 hRow, Int32 lOutputID, Int32 lErrorCode, Int32 lErrorColumn)
   at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.DirectErrorRow(Int32 outputID, Int32 errorCode, Int32 errorColumn)
   at MyDestination.ProcessInput(Int32 inputID, PipelineBuffer buffer)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper90 wrapper, Int32 inputID, IDTSBuffer90 pDTSBuffer, IntPtr bufferWirePacket)
2. My custom destination component is used for writing a file with a fixed schema. I followed the means by which source component examples add their output columns, but applied this to my external metadata columns. In my Validate() I check if the ExternalMetadataColumnCollection.Count == 0 and return DTSValidationStatus.VS_NEEDSNEWMETADATA; to force a call to ReinitializeMetaData(). In ReinitializeMetaData() I call a method that creates the input's external metadata columns that reflect my external data source.
This works fine except every time I add my custom destination component to a SSIS Package and go to edit the component I am greeted with a dialog box that states: "The component is not in a valid state. ... Do you want the component to fix these errors automatically?" Pressing the Yes button, I assume, makes the call to ReinitializeMetaData() and I have my external metadata columns. Where is the correct place to add the external metadata columns so the user does not have to take this extra step every time they add my component to their package?
When setting an output's "IsErrorOut" property to true, is it also possible to add additional columns to that error output?
I'd like to add a message beyond the standard errorCode and errorColumn columns, a column which is the "specific error message", not just a lookup on the errorCode.
I have a data flow that takes an OLE DB Source, transforms it and then uses an OLE DB Command as a destination. The OLE DB Command executes a call to a stored procedure and I have the proper wild cards indicated. The entire process runs great and does exactly what is intended to do.
However, I need to know when a SQL insert fails what record failed and I need to log this in a file somewhere. I added a Flat File Destination object and configured appropriately. I created 3 column names for the headers in the flat file and matched them with column names existing for output. When I run this package the flat file log is created ok, but no data is ever pumped into the file when a failure of the OLE DB Command occurs.
I checked the Advanced Editor for the OLE DB Command object and under the OLE DB Command Error Output node on the Input and Output Properties tab I notice that the ErrorCode and ErrorColumn output columns both have ErrorRowDisposition set to RD_NotUsed. I would guess this is the problem and why no data is written to my log file, but I cannot figure out how to get this changed (fields are greyed out so no access).
I am redirecting the error output of an OLE DB Destination component to a Script Component. My aim is to create an HTML report with information about the bad records: the error occurring in the rows and the name of the column that fails. The error output provides two new columns, ErrorCode and ErrorColumn; the ErrorColumn value for a bad record gives the lineage ID for the column. Is there a way to derive the name of the column from the lineage ID?
I'm in the process of running some tests to determine which method is faster...
I created a data flow task OleDB Source -> Data Conversion -> OleDb Destination. Error outputs from the OleDB destination is sent to a flat file destination. This works great.
I'm importing millions of rows and found that using the SQL Server Destination (local) is much faster than the OLE DB Destination. However, I have not figured out how to output errors to a flat file destination like I did when using the OLE DB Destination.
Is there any way to trap errors in a flat file when using a SQL Server Destination?
I've got a Derived Columns component as part of a data flow. On this I've set the error output for my columns to Redirect Row in all cases. I've set a data watcher on the error output and then ran the package.
There are several rows which I'm expecting to fail - about 3 of them. These fail but there are also another 697 which seem to have no problem. So I fixed one of the problem columns in the source data and then re-ran the package. I only updated one row in the source so this row no longer appeared in the error output, but neither did several hundred of the other rows.
Is it possible that the error output has been tripped for that one row but for some reason it sends several hundred more rows? The ids on these additional rows follow on from the erroneous row, and when I fix that row the rows following it no longer appear in the error output.
When I try to redirect a row, I get the following error:
Error 4 Validation error. Data Flow Task: OLE DB Destination [535]: The error row disposition on "input "OLE DB Destination Input" (548)" cannot be set to redirect the row when the fast load option is turned on, and the maximum insert commit size is set to zero. PCKG_MPP.dtsx
Hi, in a mapping I have two lookups. Before setting the configured error output to "Ignore failure", the job stopped with the error message "Row yielded no match during lookup". Now that I have changed the error handling to ignore failures, the job succeeds; my only concern is whether the destination table will get inserted with those records which found no match in the lookups.
I am trying to transfer data from one table to the other using a Data Flow Task of SSIS (SQL Server Integration Services) I am using an OLE DB Source and an OLE DB Destination.
Source table TABLE1:
    ID - int, not null
    EmployeeName - nvarchar(50), null
Destination table TABLE2:
    ID - int, not null
    EmployeeName - nvarchar(50), not null
There are 10 rows in TABLE1, of which 2 have a null value in the EmployeeName column. If I try to populate all TABLE1 row values into TABLE2, the data flow will fail, as TABLE2.EmployeeName will not accept a null value. So I have inserted a Flat File Destination into the data flow, with the OLE DB Destination's error output set as its input. In the OLE DB Destination Editor, the error output's error property is set to Redirect Row. When I do this, the 8 correct rows from TABLE1 are inserted into TABLE2 and the two rows with null values are written to the flat file.
My requirement is this: I don't want any data to be inserted into TABLE2, but I still want the two erroneous records to be written to the flat file.