DataReader Source Error - Cannot Change The Datatype, Precision Or Scale In The Output Columns
Oct 3, 2007
I have a data source that I access via ODBC in a DataReader Source component in SSIS. I can access the data fine. However, I am having problems with certain fields that are numeric (specifically home prices ranging from 100,000.00 to 99,999,999.00). In the advanced editor for my DataReader Source, under the Input and Output Properties tab, in the DataReader output under the external columns and output columns, these fields for some reason default to numeric data types with a precision of 4 and a scale of zero, which is not large enough to hold the data that is coming in. This causes errors that make the data come in as null (after I specify to ignore the errors).
I can change the precision and scale to 18 and 4 in the external columns, but when I try to change the datatype, precision or scale in the output columns I get the following message:
Property Value is not valid.
The details are:
Error at Import DataReader Source: The data type of output columns on the component "DataReader Source" cannot be changed.
Error at DataReader Source: System.Runtime.InteropServices.COMException (0xC020837D)
at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.SetOutputColumnDataTypeProperties(Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostSetOutputColumnDataTypeProperties(IDTSManagedComponentWrapper90 wrapper, Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)
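Since the DataReader Source does not allow its output column types to be edited, a workaround that often helps (a sketch - the table and column names here are invented, and it assumes the ODBC backend accepts an ANSI CAST) is to do the widening in the SQL command itself, so the component picks up the correct precision and scale when it first reads the metadata:

SELECT CAST(HomePrice AS NUMERIC(18,4)) AS HomePrice FROM Listings

After changing the command, deleting and re-adding the component (or refreshing the external metadata) should regenerate the output columns with the new type.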
I am trying to migrate data from MySQL 5.0 to SQL Server 2005. The MySQL database has a table which stores the profile description in different languages (Arabic, Spanish, etc.). I use the MySQL ODBC 5.1 driver for creating the ODBC connection, and I create an ADO.NET connection in SSIS using that ODBC source. The DataReader Source connection is set to this ADO.NET connection. When I view the properties of the columns in the DataReader Source it shows Unicode, which is good. But when I migrate to SQL Server 2005 I get junk data instead of the data in Arabic, Spanish, etc. Am I missing something, or is there any other alternative to do the data transfer correctly?
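One thing worth checking (an assumption on my part, since the post does not show the connection string): the MySQL ODBC driver returns proper Unicode only when the connection character set is a Unicode one, and the SQL Server destination columns must be NVARCHAR rather than VARCHAR. A hypothetical connection string with the charset option set:

Driver={MySQL ODBC 5.1 Driver};Server=myserver;Database=mydb;UID=myuser;PWD=mypassword;CHARSET=utf8;

If either the connection character set or the destination column type is wrong, the Arabic and Spanish text arrives as junk even though the pipeline metadata says Unicode.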
I have configured my DataReader to use ADO.NET (ODBC) connectivity, entered "Select * from AMPFM" in SqlCommand, and can see my database columns listed in the Advanced Editor / Column mappings window. My process needs to perform a straight column-to-column population from the AMPFM table into my dbo.visitfinancials table. How do I point the output to the above table?
I have been trying to develop an automatic way of programmatically accessing data sources and performing some predefined (supported) processing on them.
The question I would like to ask you people has to do with numeric fields. What exactly is precision? Is it the maximum length in digits of a field, or is there more to it? What about a "field's scale", what is it and how does it affect a field's value handling?
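In short: precision is the total number of significant digits a numeric value can hold, and scale is how many of those digits sit to the right of the decimal point, so scale affects how finely a value is stored and when an assignment overflows. A small T-SQL illustration (the variable name is arbitrary):

DECLARE @price NUMERIC(5,2)  -- precision 5, scale 2: at most 999.99
SET @price = 123.45          -- fits: 5 digits in total, 2 after the point
SET @price = 1234.5          -- fails with arithmetic overflow: would need precision 6

So a column typed numeric(4,0) can only hold whole numbers up to 9999, which is why values like 100,000.00 overflow it.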
I am using SQL CLR Integration to create a series of stored procedures.
I am building and deploying from Visual Studio 2005 SP1 and everything is working well, except for my stored procedures that have a SqlDecimal typed input argument. By default, the precision and scale of the SqlDecimal is deployed to SQL Server as (18,0).
How can I change this default?
This is an example of my stored procedure definition:
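Whatever the exact definition looks like, one workaround I know of (a sketch - the assembly, class, and procedure names below are placeholders, not the poster's) is to exclude the procedure from automatic deployment and create its T-SQL wrapper by hand, which lets you pick the precision and scale yourself:

CREATE PROCEDURE dbo.AdjustBalance
    @amount DECIMAL(18,4)
AS EXTERNAL NAME MyClrAssembly.[MyNamespace.StoredProcedures].AdjustBalance;

Since the auto-generated wrapper maps SqlDecimal to DECIMAL(18,0), hand-writing the CREATE PROCEDURE ... EXTERNAL NAME statement is the usual way around the default.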
Our shop recently upgraded to MS SQL 2005 server from the prior SQL 2000 product.
I receive an error during processing related to inserting a value into a field of datatype real. This worked for years under MS SQL 2000 and now this code throws an exception.
The exception states:
The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 15 ("@TEST"): The supplied value is not a valid instance of data type real. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision.
This error is caused by inserting several values that fall outside of the range that the MS SQL 2005 documentation specifies.
The first value that fails is 6.61242e-039. SQL Server 2005 documentation indicates that values for the datatype real must be -3.40E+38 to -1.18E-38, 0, or 1.18E-38 to 3.40E+38. In other words, 6.61242e-039 falls into the documented gap around zero: it is a denormalized float, smaller in magnitude than the smallest normalized real.
Why doesn't 6.61242e-039 just default to 0 like it used to?
I saw an article that might apply, even though I just use a C++ float type and use some ATL templates.
Is my question related to this post? http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=201636&SiteID=1
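If the denormalized value is indeed the culprit, a flush-to-zero guard before binding the parameter would restore the old behavior. A minimal sketch in VB.NET (the post's actual code is C++/ATL and is not shown, so everything here is hypothetical):

Module RealGuard
    ' Smallest normalized positive Single; .NET 2.0 has no built-in constant for it.
    Private Const MinNormalSingle As Single = 1.175494351E-38F

    Function FlushDenormal(ByVal value As Single) As Single
        ' SQL Server 2005 rejects denormalized reals that SQL Server 2000 tolerated,
        ' so clamp anything below the normalized range to zero before binding it.
        If value <> 0.0F AndAlso Math.Abs(value) < MinNormalSingle Then
            Return 0.0F
        End If
        Return value
    End Function
End Module

The equivalent check in C++ (comparing fabsf(value) against FLT_MIN) would do the same thing before the value ever reaches the TDS layer.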
I need to create a bulk upload utility using ASP.Net and SQL Server. Below is the process for the uploads:
1. An Excel template wherein the user will enter the details. A tab-delimited output file will be generated from it using VBA.
2. There are 2 tables - one is a Temp table, which is a replica of the final table, and the second is the final table.
3. Using File.OpenText(filePath).ReadLine(), all the rows from the tab-delimited data file will be inserted into a DataTable.
4. Using SqlBulkCopy, the tab-delimited data file's data will be inserted into the Temp table.
5. Data will be validated based on the data inserted in the Temp table. If the data has errors then the Temp table will be cleared, else the data will be inserted from the Temp table into the final table.
My issue is that in both tables there is a column (Name: PeopleKey (Int, Primary Key)). If the user enters an alphabetic value then the bulk utility fails. Below are the two options in my mind:
1. I can change the datatype in the Temp table from INT to VARCHAR, so the data can be inserted at first and then I can validate and get the data corrected (a sketch of this follows the list). But I am not sure whether it is the right way to fix the issue, as the source and target tables' columns would then differ.
2. I can validate when the data is inserted into the DataTable in Step 3. So, once the data is inserted into the DataTable I can validate it there; that way the source and target tables' datatypes stay the same.
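As a sketch of option 1 (the table names TempPeople and FinalPeople are made up; note that ISNUMERIC also accepts strings like '$' or '1e5', so a stricter check may be needed):

-- PeopleKey is VARCHAR in the Temp table but INT in the final table
IF EXISTS (SELECT 1 FROM TempPeople WHERE ISNUMERIC(PeopleKey) = 0)
    TRUNCATE TABLE TempPeople             -- reject the batch so the user can correct it
ELSE
    INSERT INTO FinalPeople (PeopleKey)
    SELECT CAST(PeopleKey AS INT) FROM TempPeople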
I have a problem with the DataReader Source. I'm trying to get data from a Notes table. I created a Connection Manager and the connection was successful. The SqlCommand in the "Component properties" tab is a simple "select * from <table_name>". When I switch to the "Column mappings" tab, only the first column from the table is displayed. Pressing the "Refresh" button results in the following error: "Error at Data Flow Task [DTS.Pipeline]: The output column <column_name> has a length that is not valid. The length must be between 0 and 4000." When I go to the "Input and Output Properties" tab, the DataType for the output column is not populated, and I get the error message "Error in Data Flow Task [DTS.Pipeline]: The output column <column_name> had an invalid datatype (0) set." The DataType property is not populated at all. Changing the data type to DT_STR results in the error "Property value is not valid". Details: "Error at Data Flow Task [DataReader Source]: The data type of output columns on the component "DataReader Source" cannot be changed."
I read on a previous post to explicitly convert the field, and tried to explicitly convert the datatype of the field in my query (e.g. select convert(varchar(50), fieldname)).
It then gives the following error:
ERROR [42000] [Lotus][ODBC Lotus Notes]Incorrect syntax near ',' [Lotus][ODBC Lotus Notes]Name, constant or expression expected.
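Since the query is parsed by the Lotus Notes ODBC driver rather than by SQL Server, T-SQL's convert() may simply not exist on that side, which would explain the syntax error. The ODBC scalar-function escape sequence is one alternative worth trying (a sketch - whether the Lotus driver implements this escape is an assumption on my part):

SELECT {fn CONVERT(fieldname, SQL_VARCHAR)} FROM tablename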
I get an error at the end of a 47 million row job when I use the DataReader Source. It goes through all the records and then the package fails. The error ([DataReader Source [1]] Error: System.NullReferenceException: Object reference not set to an instance of an object.) occurs at the DataReader Source. I suspect it's because my record set returns a null value at some point. Any ideas?
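One way to test the null theory (a sketch - the names are placeholders, and it assumes the source dialect supports COALESCE) is to blank out nullable columns directly in the source query and see whether the failure at the end of the run disappears:

SELECT COALESCE(SomeNullableColumn, '') AS SomeNullableColumn FROM SomeBigTable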
I'm writing a custom source component that reads data from a SharePoint list with dynamic mapping to output columns. It's my first custom component, and it's based on several samples and tutorials from the Internet.
Output columns are not created by the component itself; they must be added by the user at design time. The component dynamically makes an association between SharePoint fields and the available output columns at run-time (based on a mapping table).
I made a very basic skeleton and I encounter a problem when I add a column to the output: it has no datatype, and when I try to set one I get the error: Property value is not valid. The component xxxxxx does not allow setting output column datatype properties.
Imports System
Imports Microsoft.SqlServer.Dts.Pipeline
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

<DtsPipelineComponent(ComponentType:=ComponentType.SourceAdapter,
    DisplayName:="SharePoint Dynamic Assoc List Source",
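As far as I can tell, that error comes from the PipelineComponent base class, which rejects datatype changes unless the component overrides SetOutputColumnDataTypeProperties. A minimal sketch of such an override, in the same style as the code above (any validation of allowed types is omitted):

Public Overrides Sub SetOutputColumnDataTypeProperties( _
        ByVal iOutputID As Integer, ByVal iOutputColumnID As Integer, _
        ByVal eDataType As DataType, ByVal iLength As Integer, _
        ByVal iPrecision As Integer, ByVal iScale As Integer, _
        ByVal iCodePage As Integer)
    ' Look up the output column the designer is editing and apply the requested type.
    Dim output As IDTSOutput90 = ComponentMetaData.OutputCollection.GetObjectByID(iOutputID)
    Dim column As IDTSOutputColumn90 = output.OutputColumnCollection.GetObjectByID(iOutputColumnID)
    column.SetDataTypeProperties(eDataType, iLength, iPrecision, iScale, iCodePage)
End Sub

With this override in place, the designer's "Property value is not valid" error should go away, and the user can set the datatype on the columns they add.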
Hello, I get the following error when I run my package interactively. From the logs written out by the driver, it appears that all is working well as far as connecting to the data source and pulling data. It seems as if this error occurs when the DataReader source tries to process the received data.
SSIS package "MyPackage.dtsx" starting. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning. Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning. Error: 0xC0047062 at Data Flow Task, DataReader Source [1]: System.Data.Odbc.OdbcException: ERROR [42000] XML parse error at 162:1338: not well-formed (invalid token) at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode) at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod) at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader) at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior) at System.Data.Odbc.OdbcCommand.ExecuteDbDataReader(CommandBehavior behavior) at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.PreExecute() at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPreExecute(IDTSManagedComponentWrapper90 wrapper) Error: 0xC004701A at Data Flow Task, DTS.Pipeline: component "DataReader Source" (1) failed the pre-execute phase and returned error code 0x80131937. Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "OLE DB Destination" (691)" wrote 0 rows. Task failed: Data Flow Task SSIS package "MyPackage.dtsx" finished: Success.
I am not sure where to look next. Any help is much appreciated.
Hi, I am trying to import data from Oracle RDB into SQL Server 2005 using SSIS. I created an ODBC data source to connect to Oracle and used the DataReader Source component with an ADO.NET connection to that ODBC data source.
Under the Component properties tab, the SQL Command looks something like this.
Select ID, ADDRESS, REVISED from ADDRESS
The data types of the source columns are Integer, Varchar(30) and DATE VMS.
Now when I look at the Input and Output Properties window, the External Columns have the following data types:
ID - four-byte signed integer [DT_I4]
ADDRESS - Unicode string [DT_WSTR], length = 0
REVISED - database timestamp [DT_DBTIMESTAMP]
The Output Columns have the same data types:
ID - four-byte signed integer [DT_I4]
ADDRESS - Unicode string [DT_WSTR], length = 0
REVISED - database timestamp [DT_DBTIMESTAMP]
When I tried to change the length of the ADDRESS on the output column, I get the following error.
Error at Data Flow Task [DataReader Source [1]]: The data type of output columns on the component "DataReader Source" (1) cannot be changed.
Is this the default length for the Unicode string type? I am not able to load the ADDRESS column, as it gets truncated before I load it into the destination. Even if I use a Derived Column or Data Conversion transformation, the ADDRESS is getting truncated before it reaches that transformation.
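A workaround that sometimes helps here (an assumption on my part - it depends on the Rdb dialect accepting an explicit CAST) is to size the column in the SQL command itself, so the driver no longer reports length 0:

Select ID, CAST(ADDRESS AS VARCHAR(30)) AS ADDRESS, REVISED from ADDRESS

With an explicit length in the reported metadata, the output column should be created as DT_WSTR with length 30 instead of 0, and the truncation ahead of the Derived Column / Data Conversion transformations should stop.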
Attempting to create a data flow task to copy data from AS/400 (DB2) to SQL2005, using an existing System DSN ODBC connection defined on the SQL2005 host.
Problem:
When adding the DataReader Source component to the package, I cannot assign the Connection Manager. Designer issues the error message:
"The runtime connection manager with the ID "" cannot be found. Verify that the connection manager collection has a connection manager with that ID."
Editing the DataReaderSrc component shows only one row under the Connection Managers tab:
The DataReaderSrc component editor displays the warning message: "Not all connection managers have been set. Set all connection managers." Clicking the Refresh button causes the error message to be displayed: "The runtime connection manager with the ID "" cannot be found. Verify that the connection manager collection has a connection manager with that ID."
I am prevented from assigning my Connection Manager object to the DataReaderSrc.
The package already contains one Connection Manager object:
Provider: .Net Providers/Odbc Data Provider System DSN
As far as I know from the docs and the forum, the DataReader is for .NET data in memory. So if I use a complex data flow to build up some data and want to use it in other data flow components, could I use a DataReader Destination in the first data flow and then use a DataReader Source in the second data flow to read the in-memory data from the first transform and do further calculations?
How do I pass in-memory data from one data flow to the next one (I do not want to rebuild the logic in each data flow to build up the data)?
Is there a way to do this, and is the DataReader the proper component? (Because it's the one and only in-memory option, I guess; otherwise I need to write to a temp table and read from the temp table in the next step.) I have only found examples for .NET VB or C# programs to read a DataReader, but how do I do this in SSIS, directly in the next data flow?
I need help to solve the following error in an SSIS package when I run it:
Error: 0xC0047062 at CTPKPF, DataReader Source [1]: System.NullReferenceException: Object reference not set to an instance of an object.
   at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper90 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer90[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at CTPKPF, DTS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "DataReader Source" (1) returned error code 0x80004003. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Error: 0xC0047021 at CTPKPF, DTS.Pipeline: SSIS Error Code DTS_E_THREADFAILED. Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited.
Information: 0x40043008 at CTPKPF, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at CTPKPF, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at CTPKPF, DTS.Pipeline: "component "OLE DB Destination" (1993)" wrote 0 rows.
Task failed: CTPKPF
I am seeing a particular problem in the XML Source Editor "Columns" configuration where it is not persisting the "Output name" selection.
Control Flow Tab:
1. I use an "Exec SQL Command" to drop, create, or alter the destination tables in the database that I want to be the repository for the inbound XML data. The data types are fairly straightforward.
2. I add a singular "Data Flow".
Data Flow Tab:
1. I add a "XML Source" task, and assign a well-defined XML file. I then use the "Generate XSD" option in the "Connection manager"; and I am fairly satisfied with the generated XSD.
2. I create "OLE DB Destination"
3. I wire the "XML Source" to the "OLE DB Destination". In the "XML Source" in the "Columns".
4. I go to the dropdown list of "Output name" and see the list ordered with the various complex-types that I want to map and transfer to a target table.
For the sake of this report, I select the 5th one down on the list (for which I already have a target table) - let's call this "Mesh"
5. In the "Input Output" dialog, I select the "output" to be the desired 5th item, "Mesh"
6. I check all my mappings so that they map one-to-one ... XML name entries match SQL table destination mapping entries; correct types; correct size
7. Check the metadata and it all looks good.
8. When I hit "Debug" to test the package the failure occurs at the "XML Source". The error report comes back saying that it failed because "field xxx in Contributor was truncated". However, "Contributor" corresponds to the 1st name in the dropdown list presented in "Columns" "Output name:".
If I return to Step 4, when I open up "Columns" I see that my previous selection of the 5th item on the list, named "Mesh", was not persisted; no matter how often I select item #5 "Mesh" and save to ensure that the selection sticks, it does not stick.
I hand-edited the .dtsx file and only then was I able to make this selection stick. However, if I ever re-save the package, this non-persistence pops up again.
Am I doing something wrong here or is this a known defect? As I have several dozen XSD mappings that I want to transfer to tables, hand-editing is not something I relish.
Right now the database I am working with is storing time in an Integer data type, and is storing the time value in seconds. The application does not allow entering seconds. It accepts minutes and hours. I have a report where it is doing:

SELECT SUM(TIMEENTERED)

and the SUM is *blowing* up as the SUM is reaching the BIGINT range. I can fix the problem by changing all the code to:

SELECT SUM(CAST(TIMEENTERED AS BIGINT))

But now that I ran into this problem, I want to find out if storing the time in seconds using the INTEGER datatype is the best solution. I've been searching this newsgroup and other places the whole day. I even ran into my own three year old post. Three years ago my problem was data migration related, and now it is more performance related than anything else. http://groups.google.com/groups?as_...y=2006&safe=off

I could not find this specific topic in SQL books like SQL for Smarties 2005 by Joe Celko (very good stuff on temporal topics but nothing specific to my question) or Inside SQL Server 2000. Which data type would be ideal and why? smalldatetime? integer? decimal? float?

The types of operations that are being done in the database are:

1- Entering time in hours on work done on a task. For the data entry part, the application accepts 2.5 as 2 and a half hours and it is storing 2.5 * 3600 = 9000 seconds. It also accepts entering 2:30 as 2 hours and 30 minutes, and again storing 9000 seconds. I even saw a page where you can enter clock time: I worked from 9:30AM to 12:45PM, as an example. When I checked the underlying table(s) I saw that the ENTEREDTIME is always the duration in seconds. So the data entry can either be 2.5 hours, where ENTEREDTIME = 9000 seconds, or 9:00AM to 11:30AM, where STARTDATE is today's date (for example stored as 1/27/2005 09:00AM) and where ENTEREDTIME = 9000 seconds.

2- All kinds of reports showing total time in hours, for example: Project1 = 18.5 hours. The code in the SPs is all like: SUM(ENTEREDTIME) / CAST(3600 AS DECIMAL(6,2)) AS TOTALTIME

3- I am sure a lot of other arithmetic calculations are being done with this ENTEREDTIME field.

What would be the best way to store hours/minutes based on how we are using time in the database? Either I will stick with Integer but store time in minutes instead of calculating in seconds, and most likely update all the SUM(ENTEREDTIME) to SUM(CAST(ENTEREDTIME AS BIGINT)); or I will switch to storing in decimal/float and maybe avoid doing SUM(ENTEREDTIME) / CAST(3600 AS DECIMAL(6,2)) AS TOTALTIME, since the ENTEREDTIME would already be stored in hours. Or I will use DATETIME, since in the case of "I worked from 9:00AM to 11:30AM" I have to have a separate column to store the date also. I am a little confused. I am hoping I will get some help from you, and maybe if I can't find the best solution, at least eliminate the NOT so good ones I am thinking of. Thank you
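On the overflow itself: an INT tops out at 2,147,483,647, which is roughly 596,523 hours' worth of seconds, so no single duration is at risk - only the SUM across many rows is. Casting before aggregating is enough; a sketch (the table name TIMEENTRIES is made up, since the post never names it):

SELECT SUM(CAST(TIMEENTERED AS BIGINT)) / CAST(3600 AS DECIMAL(6,2)) AS TOTALTIME
FROM TIMEENTRIES

Storing minutes instead of seconds would shrink every stored value by a factor of 60 and push the same overflow 60 times further away, but it would not remove it; the BIGINT cast does.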
I'm importing data from an Oracle database to a SQL Server one through an SSIS package, and I'm getting this error: "The output column "earned_hours" has a precision that is not valid. The precision must be between 1 and 38." The package runs but returns this column as NULL values.
earned_hours is of type NUMBER in Oracle (some of the values are decimals). I tried making it numeric(x,y), float or decimal(x,y), but I'm still getting the same results.
Does anybody know why this is happening, or have a solution for this error?
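One workaround (a sketch - the table name is a placeholder) is to give the column an explicit precision and scale in the source query, since an Oracle NUMBER declared without precision reports metadata that SSIS cannot map to a valid DT_NUMERIC:

SELECT CAST(earned_hours AS NUMBER(18,4)) AS earned_hours FROM some_table

With the cast applied at the source, the DataReader Source should see a precision between 1 and 38 and stop returning the column as NULL.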
The query fails to execute and returns an error: String[1]: the Size property has an invalid size of 0. If I change the SqlDbType.Text parameter type to SqlDbType.VarChar, 100 (or any other fixed varchar length), it works but limits the length of my unlimited field text. Any suggestions on how I can use db type text or varchar(max)? The field I need to retrieve is string characters of unlimited length, hence the datatype varchar(max).
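For what it's worth, ADO.NET treats a Size of -1 as varchar(max)/nvarchar(max), which avoids both the invalid-size-0 error and the fixed-length truncation. A minimal sketch in VB.NET (the parameter name and the surrounding command are hypothetical):

Imports System.Data
Imports System.Data.SqlClient

Module MaxParamSketch
    Sub AddUnlimitedTextParam(ByVal cmd As SqlCommand, ByVal longText As String)
        ' Size = -1 maps the parameter to varchar(max), so the value is sent whole.
        Dim p As New SqlParameter("@Description", SqlDbType.VarChar, -1)
        p.Value = longText
        cmd.Parameters.Add(p)
    End Sub
End Module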
I am using an Execute SQL Task to run a stored procedure in an Oracle database which returns a resultset. This works. Now I need to send the output to a destination table in a SQL database. Should I use a Foreach Loop to pick up the resultset and insert it into the destination one row at a time (which I don't think is a great idea), or is there a better way to accomplish this task (in a Data Flow Task)?
When I use a Data Flow Task instead of an Execute SQL Task, the main issue is that I am not able to see the output columns when I execute an Oracle stored procedure, even though I can see the resultset in the preview. I can see the output columns for a SQL Server stored procedure.
I have a weird situation here. I tried to load a Unicode file with a Flat File Source component. One of the file's lines has data like any other line but also contains the character "ÿ", which I can't see or find and replace with an empty string. The source component parses the line correctly, but if there is a data type error in this line, the error output for that line gives me this character "ÿ" instead of the original line.
Simply put, the error output of the Flat File Source component fails to return the original line when the line contains a hidden "ÿ".
What is the purpose of the error output for an OLE DB Source component? Any SQL that would cause an error, such as converting a character to a number or division by zero, causes the OLE DB Source component to fail regardless of the settings for the error output. It works perfectly for the OLE DB Destination, but I cannot come up with any scenario where it would work for the OLE DB Source component.
In the Input and Output Properties tab under the Advanced Editor for OLE DB Source, I cannot remove columns. I copied this Source from a standard template and have made the normal changes to make it work. However, I keep getting this error...

Error: 0xC020837B at Load Server Security, OLE DB Source [1]: The output column "DBName" (1632) on the error output has no corresponding output column on the non-error output.
Error: 0xC004706B at Load Server Security, DTS.Pipeline: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_ISBROKEN".

DBName of course is one of the columns that no longer exists, but I can't remove it. Whenever I try to remove one of the columns, I get this error...

Error at Load Server Security [OLE DB Source [1]]: The column cannot be deleted. The component does not allow columns to be deleted from this input or output.

Is there anything that I can do to remove the columns? Is there just a simple setting that I can change to make this work?