I am having a strange problem that I have been looking at for a day now, and my head is starting to hurt from banging it on the desk so many times. I have written an extensible set of classes that allow me to build SSIS packages dynamically via a web front end. The code is working OK, but I have this silly bug.
The code tries to generate an SSIS package that does something very simple: transfer data from a 10-column table with a mix of data types, move it through another component that adds a couple of extra columns based on some variables, then map it onto an OLE DB destination. This works fine until I start using strings of various lengths.
When the package runs, it fails validation with errors saying that truncation may occur because I am trying to put a 100-character string into a 50-character string. The error is logical, as you wouldn't want to do that, but it is not what I am doing. I am actually transferring data from a 50-character string into a 100-character string. When I try it with a table where the strings are the same length at both ends, or where no strings are involved, everything works fine and the data goes from the source to the destination.
I must be setting something slightly wrong that only triggers this problem when the sizes don't match, even though the data flow direction is fine and the data types match. I have included the piece of code that 'writes' the output part of the package. If anyone has any idea what might be going wrong, I would be forever in their debt!
IDTSInput90 input = _Destination.InputCollection[0];
IDTSVirtualInput90 vInput = input.GetVirtualInput();
int iIndex = 0;

foreach (IDTSVirtualInputColumn90 vColumn in vInput.VirtualInputColumnCollection)
{
    // Call the SetUsageType method of the destination
    // to add each available virtual input column as an input column.
    _InstanceOfDestination.SetUsageType(input.ID, vInput, vColumn.LineageID, DTSUsageType.UT_READWRITE);
}

IDTSExternalMetadataColumn90 exInputColumn;
foreach (IDTSInputColumn90 inColumn in _Destination.InputCollection[0].InputColumnCollection)
{
    // Create the map: state which source column goes to which destination column.
    // We read by index, and we map the specifics; we could map just 3 of 20 columns
    // to whatever columns we wanted in the destination.
    // We know the name of inColumn (inColumn.Name); we find the column to map to by its name.
    if (inColumn.Name == "BatchID" || inColumn.Name == "ValidationStatus")
    {
        exInputColumn = _Destination.InputCollection[0].ExternalMetadataColumnCollection[inColumn.Name];
        // Map it.
        _InstanceOfDestination.MapInputColumn(_Destination.InputCollection[0].ID, inColumn.ID, exInputColumn.ID);
    }
    else if (Mapping.Map.ContainsKey(inColumn.Name))
    {
        exInputColumn = _Destination.InputCollection[0].ExternalMetadataColumnCollection[Mapping.Map[inColumn.Name]];
        // Map it.
        _InstanceOfDestination.MapInputColumn(_Destination.InputCollection[0].ID, inColumn.ID, exInputColumn.ID);
    }
}

_InstanceOfDestination.ReleaseConnections();
}
I have a data file that has numeric data that looks like:
1.123456
And this column is defined as DT_NUMERIC(18,6) in the flat file connection manager.
As an experiment, I changed the destination column to NUMERIC(18,0), hoping this would throw a truncation error at the flat file task level (where I have Truncation on all columns set to "fail component").
Not a peep. It loaded the data into the table, chopping off the 6 digits after the decimal point.
You would THINK that this would cause an error, but no. Why is this? The flat file task complains about all kinds of things, but this is such a gross error, you would think it would catch it!
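This matches how the database engine itself treats scale reduction: converting a numeric to a lower scale is silent rounding, not a length violation, which may be why no truncation disposition fires. A minimal T-SQL illustration:

DECLARE @n NUMERIC(18,6)
SET @n = 1.123456
-- Reducing the scale raises no truncation error; the engine
-- silently rounds, so this returns 1 with no warning at all.
SELECT CAST(@n AS NUMERIC(18,0))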
Hello everyone, and thanks for your help in advance. I am working on importing a flat text file into SQL Server 2005 and am having problems. The flat file is a CSV text file with " used as a text qualifier, and each line is ended by a CrLf combination. When I import this file into a SQL Server 2000 table using the same data types and sizes for each column, it works perfectly fine, with the data importing as expected. However, in SQL Server 2005, again using identical column data types and sizes, the import fails, giving me warnings such as:

* Warning 0x802092a7: Data Flow Task: Truncation may occur due to inserting data from data flow column "Column 0" with a length of 50 to database column "MLS_ID" with a length of 10. (SQL Server Import and Export Wizard)

Virtually every column gives this type of warning, yet I don't understand why, since the columns are all variable in length (every message says a column length of 50) and all are delimited rather than fixed size. Then later in the import, errors occur, something like:

* Error 0xc02020a1: Data Flow Task: Data conversion failed. The data conversion for column "Column 15" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)

* Error 0xc020902a: Data Flow Task: The "output column "Column 15" (70)" failed because truncation occurred, and the truncation row disposition on "output column "Column 15" (70)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)

I haven't got a clue as to why this is happening. For the record, on the flat file source screen I have ensured that delimited has been selected rather than fixed width. Any help on this issue would be greatly appreciated. Thanks.
I am trying to write an SSIS package to move data from an Access database table to a SQL DB table. I have a field whose data is too long for NVARCHAR(255), so I end up with this error: "A truncation error occurred on the specified object of the specified component".
OK, this is starting to drive me nuts; I've been trying to get this to work for two days now. I have a .csv file that I'm reading and importing into my table. I defined derived columns for the types and the lengths as well. I have one column that keeps kicking out this error: the column in the DB has a length of 50, the column in my derived column is set to 50, and there are some strings in the column from the csv file that contain more than 50 characters, so I have this in my derived column section to get the first 50 characters:
(DT_WSTR,50)NAME
[Flat File Source [1]] Error: Data conversion failed. The data conversion for column "NAME" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
[DTS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Flat File Source" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
So to sum it up: my field (NAME) in my .csv file can contain 2 to 80 characters, but I only want the first 50, and if the field contains more than 50 characters I get the error above.
Has anyone else run into something like this? If so how did you get it working?
I am getting really stressed out, because while trying to import an Excel file into a temporary table I get a truncation error...
The max length used in the origin column is about 800 characters. So how the hell can I get the column to load? Jesus... this should be an easy task... :P
Warning 1 Validation warning. Data Flow Task: Destination - TransItem [25]: Truncation may occur due to inserting data from data flow column "Info" with a length of 128 to database column "Column 4" with a length of 50. Package3.dtsx 0 0
I have a tab-delimited file that I'm trying to load into a database using SSIS. The database has a column called Comments that can hold up to 1000 Unicode characters (nvarchar(1000)).
I have appropriately defined the flat file connection and set every field to the intended length, but every time I run it, I get the following error:
Data conversion failed. The data conversion for column "COMMENTS" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
All the columns have a match, and this specific column has no field longer than 1000; the largest record has 528 characters in it, and the fields are defined as Unicode strings in the file connection.
I have already run out of ideas as to what may be giving this error; does anyone have an idea of what else to try?
An SSIS package has worked fine for a long time in 32-bit SQL Server. It now fails in 64-bit SQL Server. I run this same package in my dev environment against the same data and it works, but my dev environment is 32-bit.
System.Data.SqlTypes.SqlTruncateException: Numeric arithmetic causes truncation.
   at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e)
   at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper90 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer90[] buffers, IntPtr ppBufferWirePacket)
SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Parse Into DataSet" (5196) returned error code 0x80131932. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
I have a script source to deal with a source that has different "record types" (the first 3 columns are the same, then the remaining 2 to 30 columns differ based on the record type).
The script source was working fine... then one of the columns that I had set to String with a length of 2 came in with a length of 3 (which is not per spec). Instead of failing, all the columns after the one with the bad value were null, the script just stopped as soon as it hit it, AND it reported success. That means it imported the data incorrectly, and since the script says it succeeded you'd never know anything went wrong; it imported only 30 rows instead of 10k+.
Any ideas on how to capture this error?
Code (shortened with "...." but it should be enough; sorry, the forum butchers the code formatting, so if someone has a tip for pasting code from VS, let me know):
Select Case aCurrentLine(2) ' This is the 3rd column.. "Record Type" which tells us what type of record it is and how many columns etc (Chr(44) is a comma)
Well... here is the error I get:

Msg 4863, Level 16, State 1, Server TRADES, Line 1
Bulk insert data conversion error (truncation) for row 1, column 8 (Splits).
[the same Msg 4863 repeats for rows 2 through 11]
Msg 4865, Level 16, State 1, Server TRADES, Line 1
Could not bulk insert because the maximum number of errors (10) was exceeded.
Msg 7399, Level 16, State 1, Server TRADES, Line 1
OLE DB provider 'STREAM' reported an error. The provider did not give any information about the error.
OLE DB error trace [OLE/DB Provider 'STREAM' IRowset::GetNextRows returned 0x80004005: The provider did not give any information about the error.].
The statement has been terminated.

Is the problem something about the 0x0D 0x0A at the end of each row, rather than just the 0x0A? If so, how the heck do I specify that? Based on my testing so far (I also tried ' ' as the row termination param, but then it gives me a truncation error for the first row only), I am led to think it cannot find the end of the row that way.
I am still looking through archives and on the web, but have not seen anything specific to my issue yet...and cannot believe that I am the first to BULK INSERT this kind of data.
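For reference, a CR+LF row terminator can be spelled out explicitly in the WITH clause of BULK INSERT. A minimal sketch; the table and file names here are invented:

-- Hypothetical names; adjust to the real table and data file.
BULK INSERT dbo.Trades
FROM 'C:\data\trades.txt'
WITH (
    FIELDTERMINATOR = '\t',     -- column delimiter
    ROWTERMINATOR = '\r\n',     -- CR+LF (0x0D 0x0A) ends each row
    MAXERRORS = 10              -- the default that produced "maximum number of errors (10)"
)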
I am trying to use an XML Source on XML data from an XML web service. I am putting the document into a variable, then trying to import the data from there with the XML Source, but I am getting an error telling me that truncation occurred.
The Error is "[XML Source [1]] Error: The "component "XML Source" (1)" failed because truncation occurred, and the truncation row disposition on "output column "linking" (1579)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component."
The linking column mentioned in the error is sometimes quite a long string, but there is nowhere in the XML Source editor to change the size.
I'm unable to find a solution to this truncation error on Google. It happens only on one field, which holds comments. The offending Excel row/column has text that was entered on two lines, i.e. they entered the data, pressed Enter, and wrote a new line in the same row. I'm using an Excel file source in SSIS and an OLE DB destination (SQL Server), but one column keeps erroring out, and I have tried the following:
1) Change output column width in advanced editor (still errors)
2) Data conversion tool between the source and destination (still errors)
I'm trying to export a SQL table as a fixed-length text file with a format file, but I get the following error message:
Error = [Microsoft][ODBC SQL Server Driver][SQL Server] Warning: Server data (61 bytes) exceeds host-file field length (60 bytes) for field (4). Use prefix length, termination string, or a larger host-file field size. Truncation cannot occur for BCP output file
All fields in the SQL input table have the char data type, and I use the format file like this:
"Bulk insert data conversion error (truncation) for row 1, column 1 (id)."
When you get the error above (or similar) in SQL Server 2000, does it continue inserting the data by truncating it, or does it stop? Looking at the data I have got, it seems to continue inserting the data but just truncates the column. I have tried it several times and it seems to be consistent.
I have data that has white spaces after the actual data, e.g. '00093      ', so I am happy as long as I can be sure that it always continues, as I will be loading a lot of data using a similar process.
Hence my question: will it load all the data all the time and just truncate it to fit the column size?
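One detail that matters for data like '00093      ': trailing blanks are not treated as truncation, while cutting real characters is. A quick T-SQL sketch of the difference, assuming the default SET ANSI_WARNINGS ON:

CREATE TABLE #t (code varchar(5))
-- Succeeds: only trailing spaces are trimmed, which does not count as truncation.
INSERT #t VALUES ('00093      ')
-- Fails: "String or binary data would be truncated." because real characters would be lost.
INSERT #t VALUES ('0009399')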
I am having problems reading from an ODBC connection from Oracle RDB into SSIS. I am using a DataReader source that uses an ADO.NET ODBC connection to an Oracle RDB database, and writing the output to a flat file. When I read integers from the source, it works just fine. When I read character data (char(48), for example), it gives me truncation errors. Is the DataReader source capable of reading char data from an ODBC connection?
Here are the errors I receive:
SSIS package "Package.dtsx" starting.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x402090DC at Data Flow Task, Flat File Destination [792]: The processing of file "D:\Documents and Settings\Administrator\Desktop\test.txt" has started.
Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning.
Error: 0xC020902A at Data Flow Task, DataReader Source [575]: The "component "DataReader Source" (575)" failed because truncation occurred, and the truncation row disposition on "output column "REPORT_PART_NUMBER" (789)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
Error: 0xC02090F5 at Data Flow Task, DataReader Source [575]: The component "DataReader Source" (575) was unable to process the data.
Error: 0xC0047038 at Data Flow Task, DTS.Pipeline: The PrimeOutput method on component "DataReader Source" (575) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "SourceThread0" has exited with error code 0xC0047038.
Error: 0xC0047039 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0047039.
Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x402090DD at Data Flow Task, Flat File Destination [792]: The processing of file "D:\Documents and Settings\Administrator\Desktop\test.txt" has ended.
Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "Flat File Destination" (792)" wrote 0 rows.
Task failed: Data Flow Task
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (6) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
I have created a SSIS package programmatically using C#.
The package should do the following: take data from source A and place rows into destination B; if there are any error rows, redirect them to destination C. In my package I have the following components:
DTSAdapter.OLEDBSource.1 - Used as the Source
DTSAdapter.OLEDBDestination.1 - Used for the Destination Output - (let me call this normalOutput)
DTSAdapter.OLEDBDestination.1 - Used for the Destination Error Output - (let me call this errorOutput)
All my mappings appear to be correct, I build and save the package and receive a Successful validation and Success on Execution.
However, When I open the application using the Execute Package Utility I get the warning:
Warning: No rows will be sent to the error output(s). Configure error or truncation dispositions to redirect rows to the error output(s), or delete data flow transformations or destinations that are attached to the error output(s).
How do I get around this?
On the DTSAdapter.OLEDBDestination.1 used for the destination output, I have placed the following on the input collection:
However, the above scenario does not pass package validation; in the Execute Package Utility, I get the warning mentioned above and also the error:
Error: The input "OLE DB Destination Input" (16) has an invalid error or truncation row disposition.
So my question is what are the correct configuration settings to have in this scenario?
I am having some issues bulk inserting from a flat file (CSV) into the database. I have also tried this using the Import and Export Wizard and get the following error:
I don't understand what the issue is. The table that I have created looks like this:
CREATE TABLE IderaPatchAnalyzer (
    IP_Adresse varchar(64) NOT NULL,
    Release_ varchar(50) NOT NULL,
    Level_ varchar(50) NOT NULL,
    Edition_ varchar(50) NOT NULL,
[Code] .....
I have changed the OutputColumnWidth for IP_Adresse to 64. The lengths of the cells are nowhere near 50; however, I want to be sure that's not the issue. When I try to do the same in my SSIS project, I also get an error. I do get a warning: Truncation may occur due to inserting data from data flow column """"KB Available""" with a length o..... In that column there are at most 5 characters: "yes" and "no". """"KB Available""" is the column name in the flat file (CSV); I have put a checkmark in "Column names in the first data row".
I have used the following guide for my SSIS project:
Is there a switch I can use to force the bulk insert, truncating data if need be? I'm good with that: the truncated data, in this case, is not data I can use anyway if it is long enough to be truncated.
I need to keep the field at VARCHAR(23) and if I expand it, I won't be able to join on it after the file load completes. I'd like the data to be inserted (truncated if need be) and then I'll deal with the records that are truncated after I load the file.
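A common workaround, sketched below with invented table and column names, is to bulk insert into a deliberately wide staging column and truncate explicitly on the way into the real table, leaving the join column at VARCHAR(23):

-- Staging table with a column wide enough that nothing truncates on load.
CREATE TABLE dbo.StageFile (raw_key varchar(200))

BULK INSERT dbo.StageFile FROM 'C:\data\file.txt'

-- Truncate explicitly on the way into the real table; key_col stays varchar(23).
INSERT INTO dbo.RealTable (key_col)
SELECT LEFT(raw_key, 23)
FROM dbo.StageFile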
I need to export some data from SQL Server 2012 to an Excel file (.xlsx). A truncation error happens when executing the export task; the error occurs in the conversion from a column of type nvarchar(max) to a column of type LongText. The max length of the source column data is 4303, and the documented length limit of LongText, which is an alias of type Memo, is 64,000. Why does this error happen?
Below is detailed error message:
- Executing (Error)
Messages
Error 0xc02020c5: Data Flow Task 1: Data conversion failed while converting column "extended_info" (59) to column "extended_info" (143). The conversion returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
(SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task 1: The "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[extended_info]" failed because truncation occurred, and the truncation row disposition on "Data Conversion 0 - 0.Outputs[Data Conversion Output].Columns[extended_info]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
I'm having trouble importing a CSV file into SQL using SSIS. The trouble seems to stem from truncation and from quote encapsulated fields.
Firstly it's worth noting that some of the data within the quotes contains the separator. E.g.
12,"some text, and a comma",34
Thankfully SSIS seems to cope with that by specifying " as the Text Qualifier. My next problem was that the SSIS Import and Export Wizard gave an error: "failed because truncation occurred". But I fixed this by specifying the OutputColumnWidth for the NCHAR and NVARCHAR type columns to be the width from the table definition.
But now I have another problem with the length of fields. Consider the following, where the center column is NCHAR(22):
101,"some text, and a comma",303
102,"some ""quoted text"" bye",303
The first row has the correct width (once the encapsulating quotes are stripped out), but the second row does not, because it seems that when I exported the table the export wizard escaped the quote characters within encapsulated strings using quote characters. So I am back to getting the "truncation occurred" error. How can I get around this within SSIS?
I receive the error " [OLE DB Destination [1907]] Warning: Truncation may occur due to inserting data from data flow column "GENDER" with a length of 255 to database column "GENDER" with a length of 2. " on the source flow.
I have done some googling and came up with this post: http://torontosql.dotnetnuke-portal.com/Default.aspx?tabid=32233 which I thought may help, but it does not.
The query against the Access data source features the column: iif([sex]=1, 'm', 'f'). I tried using left(..., 2) as well, but SSIS is determined to treat the field as 255 characters for some reason.
I don't even particularly care that the field is 255 chars and the source is only two; I just want the data in! I have other fields coming up with similar errors.
Can someone please advise?
PS, what is the significance of the "External Columns" vs "Output Columns" on the Input and Output Properties tab in the Advanced Editor?
I am really struggling with SSIS, it is not as intuitive as DTS.
If you are running in Full Recovery Mode and do a full backup every night but never back up the log during the day, does the log file ever truncate? From what I read, this should be in Simple Recovery Mode, but I'm wondering what happens in the case I mention in the first sentence. Thanks.
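(For what it's worth: in the full recovery model a full database backup does not truncate the log; only a transaction log backup marks inactive log space as reusable. Illustrative statements, with placeholder database and file names:)

-- Full backup: the log chain is preserved, but the log keeps growing.
BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb.bak'
-- Log backup: inactive log records become reusable, keeping the file in check.
BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_log.trn'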
Could someone please let me know the exact steps to follow to truncate the transaction log files? These log files grow very fast, and there seems to be no space left on the drive.
Currently I am using the steps below to truncate the log file:
Step 1: Use the syntax: BACKUP LOG <database name> WITH NO_LOG
Step 2: Shrink the log file: right-click the database, choose Shrink File, choose the log, and click OK.
I would be grateful if someone can give me a proper solution.
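Those two steps can also be scripted. A sketch using the SQL Server 2000/2005-era syntax; the logical log file name below is a placeholder, and a full backup should be taken afterwards because WITH NO_LOG breaks the log backup chain:

-- Step 1: discard the inactive log records without backing them up.
BACKUP LOG MyDatabase WITH NO_LOG
-- Step 2: shrink the physical log file (target size in MB).
DBCC SHRINKFILE (MyDatabase_Log, 1)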
Hello all, I am attempting a bulk load of fixed-position flat file data via bcp, and I have noticed that I get a Right Truncation error when trying to load a row where the last column value is NULL.
For example, this flat file row:
0000016M
FMT file:
7.0
3
1 SQLCHAR 0 7 "" 1 RECORD_KEY
2 SQLCHAR 0 1 "" 2 SEX
3 SQLCHAR 0 1 " " 3 HEIGHT
In this row, the height info is null and I get a right truncation error. The row below, with height info, goes in fine:
0000016M510
Let me know what I am doing wrong! Thanks in advance.
How is it possible to avoid truncation errors in MS SQL? For example, if I run the following:
DECLARE @a AS DECIMAL(38,8)
DECLARE @b AS DECIMAL(38,8)
DECLARE @c AS DECIMAL(38,8)
SET @a = 30.0
SET @b = 350.0
SET @c = @a/@b
SELECT @c
SET @c = @c*@b
SELECT @c
I get 29.99990000 instead of 30.0. Is there a way around this?
Thanks
Bruno
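The cause here is the precision/scale arithmetic for decimal operations: dividing two DECIMAL(38,8) values calls for more digits than the 38 maximum, so SQL Server caps the result and cuts its scale down to 6. @a/@b therefore comes back as 0.085714, and multiplying by 350 again yields 29.999900. Declaring only the precision the data actually needs leaves room for more scale; a sketch (the exact output depends on these type-inference rules):

DECLARE @a DECIMAL(18,8), @b DECIMAL(18,8)
SET @a = 30.0
SET @b = 350.0
-- With the smaller declared precision the quotient keeps about 20 decimal
-- places, and the round trip should come back as 30.000000000.
SELECT (@a / @b) * @b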
Hi, I'm trying to upload a large number of log entries currently stored as text files into a database table using bcp. For a few rows I get a "right truncation" error, and the offending rows are not uploaded to the table.
I don't want to increase the size of the table's varchar fields, because it's only about a dozen out of almost a million rows that have this problem... I want to provide an override: i.e. if a row will result in truncated data, truncate but still bulk copy the offending row. Is that possible? I couldn't find such an option in the documentation.
Any help is greatly appreciated.
Thanks,
Mudassir Latif
Hello, I am attempting to write a stored procedure that builds and executes a dynamic SQL statement which can be up to 8000 characters long. Therefore, I have declared a variable of type varchar(8000) which, according to the documentation, is the maximum acceptable length of such a variable. Unfortunately, however, SQL Server seems to allow varchars to be only half this size: the resulting string keeps getting truncated to 4000 characters, as reported by the len function. Is there a setting somewhere that would fix this behavior, or some work-around I can employ that would allow me to execute a dynamic SQL statement longer than 4000 characters?
(Note: I am not using the sp_executesql proc, as it maxes out at 4000; I am simply calling EXEC which, according to the docs, should be fine.)
Thank you.
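Truncation at exactly 4000 usually points at Unicode: if any piece of the concatenation is nvarchar (an N'...' literal, or an nvarchar column or parameter), the whole expression converts to nvarchar, which is capped at 4000 characters, while plain varchar keeps the full 8000. A quick sketch of the difference:

DECLARE @sql varchar(8000)
-- Pure varchar input: the full 8000 characters survive.
SET @sql = REPLICATE('x', 8000)
SELECT LEN(@sql)    -- 8000
-- An nvarchar source is capped at 4000 before the assignment happens.
SET @sql = REPLICATE(N'x', 8000)
SELECT LEN(@sql)    -- 4000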
We use SQL Server 2005 x64 Enterprise, and I have created an SSIS routine to replace a legacy DTS routine that reads from a DataReader source and writes to a SQL Server 2005 database. The field I am receiving the truncation error on is "Description", and it is set as nvarchar(50), which it always has been; the old DTS routine works fine with it. I checked the contents of Description, and the maximum number of characters in any row is 28. I have tried changing it to nvarchar(max), nvarchar(4000), and ntext, but it still fails with a truncation error. Any leads on how I may solve this issue?
I have always assumed that when you back up a SQL Server database, the transaction log is automatically truncated, so there is no need to truncate it explicitly. It makes sense to me: you would not normally need logs from before the most recent backup. BOL, with all its talk about checkpoints etc., seems to hint at this, but I can't find an explicit statement to this effect.
Hello guys, I am using XML files and dumping the data into SQL Server 2005. I have a field called Rate which has money as its data type, and I am getting the following error:
LoadDataXML to XML Source -- LoadDataXML [907]: The value was too large to fit in the output column "RATE" (95245).
Please help me out with the solution to this... The data coming from the XML file is an unsigned integer, single bit, and my database has money. So should I use a conversion task in between? If anybody can give an idea about this, that would be great; if you want more information, tell me... thanks, krish