I used the oTable.Script method to output the DDL for a SQL Server 2000 user-defined table. The result is DDL with an error. I don't think the problem is with DMO itself, so I posted this here. Note the 'TEXTIMAGE_ON' clause: the table does not have a text column, and the DDL will not execute.
CREATE TABLE [MyTable] (
[FilterID] [int] IDENTITY (1, 1) NOT NULL ,
[LoanAgentID] [int] NOT NULL ,
[Key] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[IsActive] [int] NOT NULL ,
[tStamp] [timestamp] NOT NULL ,
[Formula] [varchar] (4000) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[_AuditAddDt] [datetime] NOT NULL ,
[_AuditUpdateDt] [datetime] NOT NULL ,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[FilterID]
) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Server: Msg 1709, Level 16, State 1, Line 1
Cannot use TEXTIMAGE_ON when a table has no text, ntext, or image columns.
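For what it's worth, the generated statement runs once the TEXTIMAGE_ON clause is dropped; a trimmed sketch of the corrected DDL (remaining column definitions elided):

CREATE TABLE [MyTable] (
[FilterID] [int] IDENTITY (1, 1) NOT NULL ,
-- ... remaining column definitions unchanged ...
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[FilterID]
) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]  -- TEXTIMAGE_ON [PRIMARY] removed: it is only valid when the table has text, ntext, or image columns
GO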
I have a Lookup task that determines whether source data should be updated in or inserted into the customer table. After the Lookup, the error output pipeline redirects rows to insert new data into the table, while the regular output pipeline updates the customer table. But these two branches run at the same time, which stalls the process; it never ends.
The job is similar to what a Slowly Changing Dimension does, except an SCD won't update the table at the same time.
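One way to avoid the two branches hitting the customer table at the same time is to land the unmatched (new) rows in a staging table from the data flow and run a set-based upsert afterwards in an Execute SQL task. A minimal sketch, assuming hypothetical tables dbo.Customer and dbo.Customer_Staging keyed on CustomerID:

UPDATE c
SET c.CustomerName = s.CustomerName     -- columns to refresh
FROM dbo.Customer AS c
JOIN dbo.Customer_Staging AS s ON s.CustomerID = c.CustomerID

INSERT INTO dbo.Customer (CustomerID, CustomerName)
SELECT s.CustomerID, s.CustomerName
FROM dbo.Customer_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer AS c WHERE c.CustomerID = s.CustomerID)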
I am writing a unit test in which one stored proc calls data from another stored proc. Each time I run dbo.ut_wbTestxxxxReturns_EntityTest I get a severe uncatchable error; the most common cause reported is a trigger error. I have checked and rechecked the columns in both of the temp tables created. Any ideas as to why the error is occurring?
--Procedure being called.
ALTER PROCEDURE dbo.wbGetxxxxxUserReturns
@nxxxxtyId smallint,
@sxxxxxxxxUser varchar(32),
@sxxxxName varchar(32)
AS
SET NOCOUNT ON
CREATE TABLE #Scorecard_Returns
( NAME_COL varchar(64), ACCT_ID int,
ACCT_NUMBER varchar(10),
ENTITY_ID smallint,
NAME varchar(100),
ID int,
NUM_ACCOUNT int,
A_OFFICER varchar(30),
I_OFFICER varchar(30),
B_CODE varchar(30),
I_OBJ varchar(03),
LAST_MONTH real,
LAST_3MONTHS real,
IS int
)
ALTER PROCEDURE dbo.ut_wbTestxxxxReturns_EntityTest
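For reference, the usual way a test procedure consumes the inner procedure's result set is INSERT ... EXEC into a temp table whose column count and types match that result set exactly (and INSERT ... EXEC cannot be nested). A minimal sketch, with hypothetical parameter values:

CREATE TABLE #Actual_Returns
( NAME_COL varchar(64), ACCT_ID int,
  ACCT_NUMBER varchar(10), ENTITY_ID smallint,
  NAME varchar(100), ID int, NUM_ACCOUNT int,
  A_OFFICER varchar(30), I_OFFICER varchar(30),
  B_CODE varchar(30), I_OBJ varchar(03),
  LAST_MONTH real, LAST_3MONTHS real, [IS] int )

-- Column count and data types must line up with the procedure's SELECT list
INSERT INTO #Actual_Returns
EXEC dbo.wbGetxxxxxUserReturns @nxxxxtyId = 1, @sxxxxxxxxUser = 'someuser', @sxxxxName = 'somename'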
I have a weird situation here. I tried to load a Unicode file with a Flat File Source component. One of the file's lines has data like any other line but also contains the character "ÿ", which I can't see, find, or replace with an empty string. The source component parses the line correctly, but if there is a data type error in this line, the error output for that line gives me this character "ÿ" instead of the original line.
Simply put, the error output of the Flat File Source component fails to return the original line when the line contains a hidden "ÿ".
Hi, I have an application in which I am performing synchronization between SQL Server 2000 and SQL Server 2005 CE. I have one table "ItemMaster" in my database. There is no relationship on this table; it is standalone. I am updating its values from a Windows Mobile device.
I am performing the operations below. Step 1: Pull to mobile
moSqlCeRemoteDataAccess.Pull("ItemMaster", "SELECT * FROM ItemMaster", lsConnectString, RdaTrackOption.TrackingOn);
Step 2: Using one device form, I update the "ItemMaster" table's values.
Step 3: Push. This is where I get the error. When I try to push, it says, "The Push method returned one or more error rows. See the specified error table. [ Error table name = ]". I have tried it in different ways but I still get this error.
Note: Synchronization itself is working fine. There is no issue with my IIS, SQL CE, or SQL Server 2000.
Can anyone help me? I have been trying to solve this for the last 3 days.
Hi everyone. While creating a stored procedure, I know that it is possible to use an output parameter which is a table. However, I do not know how to do it. In other words, I would like to return a table as an output parameter; how can this be done?
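As far as I know, a table cannot actually be passed back through an OUTPUT parameter in SQL Server 2000/2005; the usual pattern is to SELECT the rows inside the procedure and let the caller capture that result set with INSERT ... EXEC. A minimal sketch, with hypothetical names:

CREATE PROCEDURE dbo.GetActiveCustomers
AS
SET NOCOUNT ON
-- the result set acts as the "output table"
SELECT CustomerID, CustomerName
FROM dbo.Customer
WHERE IsActive = 1
GO

-- caller side: capture the procedure's result set
CREATE TABLE #ActiveCustomers (CustomerID int, CustomerName varchar(100))
INSERT INTO #ActiveCustomers
EXEC dbo.GetActiveCustomers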
Hi guys, I'm a newbie to these forums, so hopefully I've posted in the right place. So here goes...
My company has been given a CMS to look at. Everything seems to be working except for this beastly stored procedure. The purpose of the stored procedure is to gather data and output it as XML, but for some reason, and I haven't a clue why, I am getting the error stated below. I have also attached the stored procedure in the hope that some kind guru can help me.
Appreciate any and all help, guys. Andy
ERROR MESSAGE: Microsoft OLE DB Provider for SQL Server error '80040e21' Parent tag ID 1 is not among the open tags. FOR XML EXPLICIT requires parent tags to be opened first. Check the ordering of the result set.
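For context, FOR XML EXPLICIT needs every child row to appear after a row that opens its parent tag, which in practice means ordering the result set by the parent key columns. A minimal sketch of the required shape, using hypothetical Category/Product tables:

SELECT 1 AS Tag,
       NULL AS Parent,
       c.CategoryID AS [Category!1!Id],
       NULL AS [Product!2!Name]
FROM dbo.Category AS c
UNION ALL
SELECT 2, 1, p.CategoryID, p.ProductName
FROM dbo.Product AS p
ORDER BY [Category!1!Id], [Product!2!Name]  -- parent rows sort before their children
FOR XML EXPLICIT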
I would like to get the actual name of the column that has the error. Using the ErrorColumn (int value), I thought there would be some type of lookup collection based on the input (like column names). If there is, can someone tell me how to get to it?
I have my error output writing to a stored proc, but instead of "32226" as the column name, I need the actual name of the column. I am going from a Flat File source to an OLE DB Destination. I have a Script Component taking the error output to write to my sproc, and I just need to get the column name.
What is the purpose of the error output for an OLE DB Source component? Any SQL that would cause an error, such as converting a character to a number or division by zero, causes the OLE DB Source component to fail regardless of the settings for the error output. It works perfectly for the OLE DB Destination, but I cannot come up with any scenario where it would work for the OLE DB Source component.
I am using SSIS to load a lot of Excel and CSV files. Some of the files fail for various formatting/validation reasons. Is there a good way to capture the errors and generate a readable error report so the provider can review it easily and correct the data files?
The error log of the package is difficult to read.
In the Input and Output Properties tab under the Advanced Editor for the OLE DB Source, I cannot remove columns. I copied this source from a standard template and have made the normal changes to make it work. However, I keep getting this error:
Error: 0xC020837B at Load Server Security, OLE DB Source [1]: The output column "DBName" (1632) on the error output has no corresponding output column on the non-error output.
Error: 0xC004706B at Load Server Security, DTS.Pipeline: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_ISBROKEN".
DBName, of course, is one of the columns that no longer exists, but I can't remove it. Whenever I try to remove one of the columns, I get this error:
Error at Load Server Security [OLE DB Source [1]]: The column cannot be deleted. The component does not allow columns to be deleted from this input or output.
Is there anything I can do to remove the columns? Is there a simple setting I can change to make this work?
I have a question about what constitutes an "error" for an error output.
For example, a Flat File Source has an error output for "bad rows", that is, rows where it encounters "unexpected data". What specifically is "unexpected data"? Is this documented somewhere?
Another example would be an OLE DB Source that uses a query to retrieve rows. This too has an error output, but I have no clue what would constitute bad data from a table. Data in a table is just data, so what would count as an error from an OLE DB Source? I can't think of a single thing. Where are these "rules" documented, if anywhere?
I have a simple SSIS package running under SQL 2005 SP2a.
In the data flow, I have several lookup transforms with error outputs. Each error output links to its own audit transform and then writes data to its own flat file destination.
After the data flow is complete, my control flow will then use a ForEach file container to mail the flat files to the source system data owners with a message about incomplete/inconsistent source data. I am effectively providing feedback to the source owners so that we may improve the quality of data that gets sent to us.
My problem is that the flat file (which contains the offending rows) seems to get created every time, even when there were no errors in the lookup in question. Thus my ForEach container will always send a mail to the data source teams, even if there were no errors, because the error flat file will always exist, albeit empty.
How can I stop this happening? How can I only create flat files when there really were errors? How can I prevent the source teams from receiving feedback emails when there is no reason to?
First, some background info: SQL Server Express, Visual Studio 2005, VB.Net, and the Data.SqlClient namespace.
Well, I was using the OUTPUT clause in my program and it was great: I was returning my primary key from the newly inserted row. (I am using parameters in my program and running sqlCommand.ExecuteScalar.)
I added a trigger to tblInfo so that after an insert, update, or delete it performs an audit on the table, saving the old info that was overwritten, the type of action taken (insert, update, or delete), and who did it.
Well, my OUTPUT didn't like that. It says it can't do an OUTPUT on a table with a trigger.
--Error msg Msg 334, Level 16, State 1, Line 1 The target table 'tblInfo' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
So what am I supposed to do? I want the audit to occur, but I also want the primary key without doing a SELECT MAX(ID), which could potentially get the wrong ID with multiple users.
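The usual workaround for Msg 334 is OUTPUT ... INTO a table variable, which is allowed even when the target table has enabled triggers and keeps the multi-user safety of OUTPUT. A minimal sketch, assuming a hypothetical identity column InfoID and column SomeColumn on tblInfo:

DECLARE @NewIds TABLE (InfoID int)

INSERT INTO dbo.tblInfo (SomeColumn)
OUTPUT inserted.InfoID INTO @NewIds (InfoID)  -- OUTPUT ... INTO is permitted with triggers
VALUES ('some value')

SELECT InfoID FROM @NewIds  -- read this with ExecuteScalar instead of the bare OUTPUT

SCOPE_IDENTITY() is another option here (unlike @@IDENTITY, it is not skewed by the trigger's own inserts, and unlike MAX(ID) it is safe with multiple users).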
I run a SQL query to select a few columns of data using a SELECT statement. I want the output to be stored in a new table that can be defined in the SQL statement itself. How can I do it?
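If the new table should be created by the statement itself, SELECT ... INTO does exactly that; a minimal sketch with hypothetical names:

SELECT Col1, Col2, Col3
INTO dbo.NewTable         -- created by this statement; must not already exist
FROM dbo.SourceTable
WHERE Col1 > 0

-- to append to a table that already exists, use INSERT ... SELECT instead
INSERT INTO dbo.ExistingTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM dbo.SourceTable WHERE Col1 > 0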
I've been looking into the ability to select the table of an OLE DB Destination based on a package variable, and would like to know if it could be used in the following example.
I have a CSV file (potentially there are many) with a number of rows. The rows do not necessarily contain the same number of elements. Each row has an EventID, and this is used to determine the database table that the row is to be inserted into.
Initially I thought about using a Conditional Split to send each row down the path relating to its EventID, with multiple destinations depending on the table. With the possibility of 60 to 70 different EventIDs, this would very quickly get out of hand in the designer. I'd like to know if it would be possible to set up a case statement (or similar) in a Script Component and set the table name in a variable depending on the EventID, then use this variable to set the destination's table property.
Does this sound like a possibility, or do I have to go down the route of multiple paths?
I want to automate the DBCC CHECKDB process. I create a temp table called #CheckDbTbl and run the following command:
INSERT INTO #CheckDBTbl dbcc checkdb(MyDbName) with tableresults
I plan to send myself an email if any problems are found.
Does anyone know what error numbers or levels or anything else I should look for in #CheckDBTbl that will tell me a problem exists? Right now I'm only checking for Level >= 16.
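For reference, a minimal sketch of the capture-and-check step, assuming #CheckDBTbl's columns match the WITH TABLERESULTS output and using the Error, Level, and MessageText columns that output contains:

-- DBCC output has to be captured through EXEC; INSERT cannot take a DBCC statement directly
INSERT INTO #CheckDBTbl
EXEC ('DBCC CHECKDB (MyDbName) WITH TABLERESULTS')

-- flag anything at severity 16 or above; lower levels are mostly informational
IF EXISTS (SELECT 1 FROM #CheckDBTbl WHERE Level >= 16)
BEGIN
    SELECT Error, Level, MessageText
    FROM #CheckDBTbl
    WHERE Level >= 16
    -- send the alert mail here (e.g. xp_sendmail on SQL Server 2000)
END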
I have a C# app. This is a piece of code out of a stored proc that is erroring: "Procedure or function getTopParentDealerFromChildDealer has too many arguments", or "@dealerID is not a parameter for procedure getTopParentDealerFromChildDealer" if I pass ", @dealerID = @parentID".
I have tried all combinations ("@dealerID", "@dealerID = @parentID", etc.).
BEGIN
    --get the top parent dealerID
    DECLARE @parentID INT
    SET @parentID = 0
    EXEC getTopParentDealerFromChildDealer @dealerID, @parentID OUTPUT
    IF (@parentID > 0)
    BEGIN

------------------------------------------------------
-- here is the getTopParentDealerFromChildDealer as called
------------------------------------------------------
ALTER PROCEDURE getTopParentDealerFromChildDealer
    @childDealerID INT
AS
SET NOCOUNT ON
DECLARE @dealerID INT
DECLARE @parentID INT
SET @dealerID = 0
SELECT @dealerID = dealerParentID FROM dealerRelations WHERE dealerChildID = @childDealerID

WHILE @dealerID <> 0
BEGIN
    DECLARE @temp INT
    SET @temp = @dealerID
    IF (SELECT COUNT(dealerParentID) FROM dealerRelations WHERE dealerChildID = @temp) >= 1
    BEGIN
        SELECT @dealerID = dealerParentID FROM dealerRelations WHERE dealerChildID = @temp
    END
    ELSE
    BEGIN
        SET @dealerID = 0
        SET @parentID = @temp
    END
END

IF (@parentID IS NULL)
BEGIN
    SET @parentID = 0
    --set @parentID = @dealerID
END

RETURN @parentID
I don't usually use stored procedures, but the job I have taken over uses them. Any help would be much appreciated.
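The "too many arguments" message is consistent with the procedure declaring only @childDealerID, so the second value in the EXEC has nothing to bind to. A hedged sketch of the signature the call site expects, with @parentID promoted to an OUTPUT parameter (the body stays as above, minus the local DECLARE of @parentID):

ALTER PROCEDURE getTopParentDealerFromChildDealer
    @childDealerID INT,
    @parentID INT OUTPUT   -- the second argument the caller passes with the OUTPUT keyword
AS
SET NOCOUNT ON
DECLARE @dealerID INT
SET @dealerID = 0
SET @parentID = 0
-- ... walk up dealerRelations exactly as in the existing WHILE loop,
--     assigning the top-level ID to @parentID ...

With that in place, EXEC getTopParentDealerFromChildDealer @dealerID, @parentID OUTPUT resolves both arguments.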
I have the following stored procedure working with an Access 2000 front end. The output parameters returned to Access are both Null when the record is successfully updated (i.e., when @@ROWCOUNT = 1), but the correct parameters are returned when the update fails. I'm a bit new to using output parameters, but I have them working perfectly with an insert sproc, and they look basically the same. What bonehead error have I made here? The fact that the record is updated indicates to me that the COMMIT TRAN line is being executed, so why aren't the two output parameters set?
TIA
EDIT: Solved, sort of. I found that dropping the "@ResNum +" from "@ResNum + ' Updated'" resolved the problem (@ResNum is an input parameter). This implies that the variable lost its value between the SQL statement and the IF block, since the SQL correctly updates only the appropriate record from the WHERE clause. Is this supposed to happen? I looked in BOL, and if it's addressed there, I missed it.
CREATE PROCEDURE [procResUpdate]
Various input parameters here,
@RetCode as int Output, @RetResNum as nvarchar(15) Output
AS
Declare @RowCounter int
Begin Tran
UPDATE tblReservations
SET /* various SET clauses here, */ LastModified = @LastModified + 1
WHERE ResNum = @ResNum AND LastModified = @LastModified
SELECT @RowCounter = @@ROWCOUNT
If @RowCounter = 1
Begin
    Commit Tran
    Select @RetCode = 1
    Select @RetResNum = @ResNum + ' Updated'
End
Else
Begin
    Rollback Tran
    Select @RetCode = 0
    Select @RetResNum = 'Update Failed'
End
GO
I am new to SSIS and am developing a component which pulls data from a staging table and drops it into another table in the same database.
I am using: 1) an OLE DB Source to get the data from the staging table; 2) an OLE DB Destination to insert or push the data into another table of the same database; 3) a Script Component to get the error rows and to update the staging table column with a flag value.
The rows that throw an error, such as a primary key violation or any other error, should be redirected to the Script Component and the process should then complete.
The error output of the OLE DB Destination doesn't show any columns to be selected for the Redirect Row option.
The script executes without any error and the records are shown in the error path, but the records are not updated in the DB.
This is what I have in the script:
Public Class ScriptMain
Inherits UserComponent
Dim sqlConn As SqlConnection
Dim sqlCmd As SqlCommand
Dim connMgr As IDTSConnectionManager90
Dim txnIdParam As SqlParameter
Dim errorDescParam As SqlParameter
Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
When configuring error output, I want everything that is good in the row to make it to the destination, with the offending column that is causing the error set to NULL and sent to the destination as well. In addition, I want to take the offending column's data and route it over to an error holding table. I know about the ability to redirect the whole row, but I just want to redirect that one column. For example...
Have a table with 5 columns
col1 int null,
col2 int null,
col3 char(3) null,
col4 bit null,
col5 int null
My data flow loads data from a flat file and has a record that looks like this
1 5 ABC R 3
I want the row to make it to the destination as follows....
1 5 ABC NULL 3
Then the offending data needs to go over to my error table
I am developing a custom destination component and I have encountered a few areas where there seems to be a lack of helpful documentation and examples.
1. I have not been able to find any information on or examples of creating custom destinations with an error output. The OLE DB Destination has an error output so I investigated the input and error output properties in the advanced editor and found that the OLE DB Destination error output is synchronous with the input (its SynchronousInputID matches the input's ID) and has its ExclusionGroup value set to 1. Using this information, I modeled my error output after the OLE DB Destination.
Shortly after I start my SSIS package and it encounters an error row, I get the following exception:
[My Destination Adapter 1 [3512]] Error: System.ArgumentException: Value does not fall within the expected range.
   at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSBuffer90.DirectErrorRow(Int32 hRow, Int32 lOutputID, Int32 lErrorCode, Int32 lErrorColumn)
   at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.DirectErrorRow(Int32 outputID, Int32 errorCode, Int32 errorColumn)
   at MyDestination.ProcessInput(Int32 inputID, PipelineBuffer buffer)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper90 wrapper, Int32 inputID, IDTSBuffer90 pDTSBuffer, IntPtr bufferWirePacket)
2. My custom destination component is used for writing a file with a fixed schema. I followed the means by which source component examples add their output columns, but applied this to my external metadata columns. In my Validate() I check if the ExternalMetadataColumnCollection.Count == 0 and return DTSValidationStatus.VS_NEEDSNEWMETADATA; to force a call to ReinitializeMetaData(). In ReinitializeMetaData() I call a method that creates the input's external metadata columns that reflect my external data source.
This works fine except every time I add my custom destination component to a SSIS Package and go to edit the component I am greeted with a dialog box that states: "The component is not in a valid state. ... Do you want the component to fix these errors automatically?" Pressing the Yes button, I assume, makes the call to ReinitializeMetaData() and I have my external metadata columns. Where is the correct place to add the external metadata columns so the user does not have to take this extra step every time they add my component to their package?
When setting an output's "IsErrorOut" property to true, is it also possible to add additional columns to that error output?
I'd like to add a column beyond the standard ErrorCode and ErrorColumn columns: a column carrying the specific error message, not just a lookup on the error code.
I have a data flow that takes an OLE DB Source, transforms it, and then uses an OLE DB Command as a destination. The OLE DB Command executes a call to a stored procedure, and I have the proper wild cards indicated. The entire process runs great and does exactly what it is intended to do.
However, I need to know, when a SQL insert fails, which record failed, and I need to log this in a file somewhere. I added a Flat File Destination and configured it appropriately. I created 3 column names for the headers in the flat file and matched them with existing output columns. When I run this package the flat file log is created OK, but no data is ever pumped into the file when a failure of the OLE DB Command occurs.
I checked the Advanced Editor for the OLE DB Command object, and under the OLE DB Command Error Output node on the Input and Output Properties tab I noticed that the ErrorCode and ErrorColumn output columns both have ErrorRowDisposition set to RD_NotUsed. I would guess this is the problem and why no data is written to my log file, but I cannot figure out how to change it (the fields are greyed out, so no access).
I am redirecting the error output of an OLE DB Destination component to a Script Component. My aim is to create an HTML report with information about the bad records: the error occurring in each row and the name of the column that fails. The error output provides two new columns, ErrorCode and ErrorColumn; the ErrorColumn value for a bad record gives the lineage ID for the column. Is there a way to derive the name of the column from the lineage ID?
I'm in the process of running some tests to determine which method is faster...
I created a data flow task: OLE DB Source -> Data Conversion -> OLE DB Destination. Error output from the OLE DB Destination is sent to a Flat File Destination. This works great.
I'm importing millions of rows and found that using the SQL Server Destination (local) is much faster than the OLE DB Destination. However, I have not figured out how to output errors to a flat file destination like I did when using the OLE DB Destination.
Is there any way to trap errors in a flat file when using a SQL Server Destination?
I've got a Derived Column component as part of a data flow. On it I've set the error output for my columns to Redirect Row in all cases. I've set a data viewer on the error output and then ran the package.
There are several rows which I'm expecting to fail, about 3 of them. These fail, but there are also another 697 which seem to have no problem. So I fixed one of the problem columns in the source data and re-ran the package. I only updated one row in the source, so this row no longer appeared in the error output, but neither did several hundred of the other rows.
Is it possible that the error output has been tripped for that one row but for some reason it sends several hundred more rows with it? The IDs on these additional rows follow on from the erroneous row, and when I fix that row, the rows following it no longer appear in the error output.
When I try to redirect a row, I get the following error:
Error 4 Validation error. Data Flow Task: OLE DB Destination [535]: The error row disposition on "input "OLE DB Destination Input" (548)" cannot be set to redirect the row when the fast load option is turned on, and the maximum insert commit size is set to zero. PCKG_MPP.dtsx
Hi, in a mapping I have two lookups. Before setting the configured error output to "Ignore failure", the job stopped with the error message "Row yielded no match during lookup". Now that I have changed the error handling to Ignore failure, the job succeeds; my only concern is whether the destination table will get inserted with those records which are not matched in the lookups.
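Ignore failure sends unmatched rows down the normal output with NULL in the columns the lookup would have supplied, so they do reach the destination table. A quick check after the load, assuming a hypothetical destination table and lookup column name:

-- rows that were inserted without a lookup match
SELECT *
FROM dbo.DestinationTable
WHERE LookupKeyColumn IS NULL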