SSIS OLE DB Source No Columns Displayed Problem
Oct 3, 2007
Hi SSIS Experts
I have a problem: I execute the following code within an OLE DB Source against a SQL Server 2000 database. The results are returned when I press the Preview button, but when I open the Columns tab no columns are listed.
As you will see from my code, I have tried using both a table variable and a # table; both produce the same result.
Any solutions to this are more than welcome.
set nocount on

declare @l_cmd varchar(4000)
       ,@l_db_name varchar(255)

declare @result table (
     SQLInstanceName varchar(255)
    ,DatabaseName varchar(255)
    ,TableName varchar(255)
    ,RecordDate datetime
)

-- Create a temp table to hold the table data
create table #TableList (
     SQLInstanceName varchar(255)
    ,DatabaseName varchar(255)
    ,TableName varchar(255)
)

-- Load the list of databases from master into a cursor
declare db_name_cursor insensitive cursor
for
select name from master..sysdatabases where name <> 'tempdb' -- Exclude tempdb

open db_name_cursor
fetch next from db_name_cursor into @l_db_name

while (@@fetch_status = 0)
begin
    -- Build the select statement to be executed on each database
    set @l_cmd = 'use ' + @l_db_name
    set @l_cmd = @l_cmd + ' insert into #TableList (SQLInstanceName, DatabaseName, TableName)'
    set @l_cmd = @l_cmd + ' select @@servername, db_name(), name from sysobjects where type = ''U'''

    -- Execute the command
    exec (@l_cmd)
    --print @l_cmd

    fetch next from db_name_cursor into @l_db_name
end

-- Clean up the cursor
close db_name_cursor
deallocate db_name_cursor

insert into @result
select *, getdate() as RecordDate from #TableList

drop table #TableList
set nocount off

select * from @result
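Editor's note, hedged: a likely cause of this symptom is that at design time the OLE DB Source discovers its metadata by running the command with SET FMTONLY ON, which parses but does not execute the batch, so the temp table is never created and no columns come back; Preview, by contrast, executes the query for real. One widely used workaround is to hand the parser a dummy result set with the same shape, sketched below; another is to put SET FMTONLY OFF at the top of the command, at the cost of executing the whole batch during validation.

-- Sketch of the shape trick: under FMTONLY both branches of an IF are
-- parsed, so this never-executed SELECT supplies the column metadata
-- the Columns tab needs. The real query then follows unchanged.
if 1 = 0
begin
    select cast(null as varchar(255)) as SQLInstanceName
          ,cast(null as varchar(255)) as DatabaseName
          ,cast(null as varchar(255)) as TableName
          ,cast(null as datetime) as RecordDate
end

set nocount on
-- ... rest of the original query unchanged ...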
View 13 Replies
Dec 1, 2006
Hi, I have an SSIS package that retrieves table data from a database using an OLE DB source component.
This then passes the rows of data to a script destination component.
Within this script my code takes the row data and inserts it into the database (amongst other things).
However, I have noticed that if any of the row's columns contain nulls, the component fails at run time with errors like so:
[Back Up Prize Banks [2875]] Error: System.Data.SqlClient.SqlException: Parameterized Query '(@PASId int,@ScriptName nvarchar(17),@LastModified datetime,@Pri' expects parameter @RegenerateXML, which was not supplied. at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e) at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.ProcessInput(Int32 inputID, PipelineBuffer buffer) at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper90 wrapper, Int32 inputID, IDTSBuffer90 pDTSBuffer, IntPtr bufferWirePacket)
I have solved the problem by checking the column contents before inserting it as follows:
Before:
Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)
After:
If (Row.RegenerateXML_IsNull()) Then
Command.Parameters.AddWithValue("@RegenerateXML",System.DBNull.Value)
Else
Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)
End If
Which is great, but it does turn 1 line into 5 lines. I have 20 parameters, which used to take 20 lines of code and will now take 20 x 5 = 100 lines.
My question is:
Is there a better way to write this code without having to take up so many lines?
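One option that keeps it to a single line per parameter is VB's single-line If/Else, sketched below. It preserves the semantics above and, in particular, still avoids reading Row.RegenerateXML when the column is null (reading a null column would itself throw):

' Single-line If/Else: one line per parameter, same semantics as the 5-line form.
If Row.RegenerateXML_IsNull Then Command.Parameters.AddWithValue("@RegenerateXML", System.DBNull.Value) Else Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)

A helper routine is harder than it looks here: any helper that takes the column value as an argument evaluates Row.RegenerateXML eagerly, which will typically throw when the column is null.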
Thanks
Matt.
View 2 Replies
Dec 7, 2007
Hi,
I am new to SSIS programming and am trying to export data from a flat file source to a SQL Server destination table dynamically. I need to get the table schema info (column length, data type, etc.) from the SQL Server table and then map the source columns from the flat file to the destination table columns.
I am referring to one of the programming samples from Microsoft and another excellent article by Moim Hossain. Can someone help me understand how to map the source columns to the destination table columns depending on the table schema? Please help.
Thanks
View 5 Replies
Nov 29, 2006
(Applies to SQLServer 2005 SP1)
We have found that the SSIS "Import and Export Wizard", using the "Microsoft Excel" data source, appears to impose a maximum column length of 255 characters for any row.
Even when defining the destination table columns as nvarchar(4000), the wizard fails with the errors shown below.
We have found no workaround except manually changing the input data. There don't appear to be any "Advanced" options for the Excel importer as there are for the flat-text importer. So, no question here, just posting the bug so that *next* time someone searches the web for an answer, this post comes up.
Messages:
Error 0xc020901c: Data Flow Task: There was an error with output column "English String" (18) on output "Excel Source Output" (9). The column status returned was: "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task: The "output column "English String" (18)" failed because truncation occurred, and the truncation row disposition on "output column "English String" (18)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task: The PrimeOutput method on component "Source - Sheet1$" (1) returned error code 0xC020902A. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. (SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "SourceThread0" has exited with error code 0xC0047038. (SQL Server Import and Export Wizard)
Error 0xc0047039: Data Flow Task: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown. (SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "WorkThread0" has exited with error code 0xC0047039. (SQL Server Import and Export Wizard)
edit: After searching further, this is documented under "Excel Source" in BOL, which provides a registry-based workaround. I guess the issue is that the wizard considers truncation to be a 'fail' case and there's no easy way to override this behaviour, specify the column types, or determine which line is in error.
Truncated text. When the driver determines that an Excel column contains text data, the driver selects the data type (string or memo) based on the longest value that it samples. If the driver does not discover any values longer than 255 characters in the rows that it samples, it treats the column as a 255-character string column instead of a memo column. Therefore, values longer than 255 characters may be truncated. To import data from a memo column without truncation, you must make sure that the memo column in at least one of the sampled rows contains a value longer than 255 characters, or you must increase the number of rows sampled by the driver to include such a row. You can increase the number of rows sampled by increasing the value of TypeGuessRows under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel registry key.
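For reference, the registry change BOL describes might look like the following .reg sketch (assuming the default Jet 4.0 location; a TypeGuessRows value of 0 makes the driver sample all rows, at some cost in import speed):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel]
; 0 = sample every row before guessing the column type (default is 8)
"TypeGuessRows"=dword:00000000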
View 21 Replies
May 27, 2008
Hi Guys,
I am creating a report in Reporting Services 2000. I have a stored procedure that returns 4 columns, one of which holds 21 weeks of entries. I am using a matrix to group these 21 weeks as 21 dynamically created week columns, but when I preview the report it only displays the columns for the first 5 weeks.
Any idea why Reporting Services is not displaying the rest of the columns? Is there any limitation on the maximum number of columns that can be generated dynamically in SSRS 2000?
Any help will be highly appreciated.
View 3 Replies
Sep 5, 2006
When I run an SSIS package from the MSDB in SQL Server Management Studio it never shows up under 'Running Packages'.
I have confirmed that the package is indeed running.
I've also tried to check the packages running on the server programmatically by using the GetRunningPackages() method. This too returns nothing when clearly packages are running.
Any ideas?
Malcolm Barclay
Disney Consumer Products UK
View 4 Replies
May 3, 2007
I have deployed a report that is configured for landscape printing. It does print in landscape; however, only the first seven columns appear on the first page and the other five columns appear on the next page. Is there a method, as in print preview setup in MS Excel, to scale down the print (say to 80%) so all columns appear on each page?
Thanks!
View 3 Replies
Apr 23, 2008
Hello All,
I am rather new to reporting on SQL Server 2005 so please be patient with me.
I need to create a report that will generate system information for a server. The issue I'm having is that the table I have to gather the information from seems to only let me pull data from one row at a time.
For example, each row contains a different system part (e.g. RAM), represented by an identifier (1), but I need to list each system part as a column in the report.
The table (System Info) looks like:

ID | System Part
---+------------
1  | RAM
2  | Disk Drive
10 | CPU
11 | CD ROM

So basically I need it to look like this:

Name | IP        | RAM   | Disk Drive
-----+-----------+-------+-----------
A    | 127.0.0.1 | 512MB | Floppy

So far my SQL code looks like this for one item:

SELECT [System Part]
FROM [System Info]
WHERE ID = 1

How would I go about displaying the other system parts as columns with their info?
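One common shape for this is a CASE-based cross-tab, sketched below against the post's bracketed names. With only the two columns shown it merely re-labels the part names; once a per-server value column exists, the same MAX(CASE ...) pattern pivots it, with Name and IP presumably joined in from a server table:

-- Turn each part ID into its own column; MAX collapses the rows to one.
SELECT
    MAX(CASE WHEN ID = 1  THEN [System Part] END) AS RAM,
    MAX(CASE WHEN ID = 2  THEN [System Part] END) AS [Disk Drive],
    MAX(CASE WHEN ID = 10 THEN [System Part] END) AS CPU,
    MAX(CASE WHEN ID = 11 THEN [System Part] END) AS [CD ROM]
FROM [System Info]

On SQL Server 2005 the PIVOT operator expresses the same thing more compactly.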
Any help is much appreciated!
View 3 Replies
Feb 13, 2007
Hi,
I am trying to create a program that transfers tables to flat files.
At this point in time, I have succeeded in creating one that produces delimited files.
However, I am now trying to create fixed-width files, as you can with the SSIS designer, but programmatically.
Is there a way to programmatically determine the width of a column from the source table? I cannot seem to find any kind of function or member that stores this information or allows me to retrieve it.
I know what I need to change in order to set a width for a column, but I just don't know how to find the width without asking the user to provide one.
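If the source is a SQL Server table, one way to look the widths up is the catalog itself; a minimal sketch (the table name is hypothetical):

-- CHARACTER_MAXIMUM_LENGTH is the declared width of character columns
-- (NULL for non-character types, -1 for varchar(max)).
SELECT COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'   -- hypothetical table name
ORDER BY ORDINAL_POSITION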
View 5 Replies
Sep 14, 2007
Hi
I have a huge XML file with nested sections.
I also have an XSD file for that XML.
I have a destination table the data from the XML should be loaded into.
I am using the XML Source transformation, but to get all the data I need to use multiple Merge Joins to assemble it into a single row that I can insert into the destination. I was not quite convinced by using so many joins.
So I tried the Script source transformation instead, using XML objects to get the nodes and dynamically construct the data rows; the output is then inserted into the destination.
Comparing the two approaches, the one using the Script source works much faster than the XML Source transformation.
I wanted to know: is there any limitation to using a Script source to parse XML files?
I would also like to know of any better way to get the data from an XML source without using the joins.
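For reference, the Script source approach described above might be shaped like this minimal sketch (the file path, node names, and Output0 column names are all assumptions that depend on how the output is configured):

Public Overrides Sub CreateNewOutputRows()
    ' Load the document once, then emit one pipeline row per repeating node.
    Dim doc As New System.Xml.XmlDocument()
    doc.Load("C:\data\input.xml")   ' hypothetical path

    For Each node As System.Xml.XmlNode In doc.SelectNodes("/Root/Record")
        Output0Buffer.AddRow()
        ' The column names below must match the columns defined on Output 0.
        Output0Buffer.ParentName = node.SelectSingleNode("Name").InnerText
        Output0Buffer.ChildValue = node.SelectSingleNode("Child/Value").InnerText
    Next
End Sub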
Hari
View 7 Replies
Jun 2, 2006
Hi,
I'm finding that the standard components often just don't quite meet my needs, but would only need some fairly minor changes to save me and my team a lot of work (and produce more elegant solutions). So I was just wondering whether the source code is available for the standard components that come with SSIS, or if there is any way to extend their functionality?
Or do you just have to start from scratch?
Thanks,
Lawrie
View 4 Replies
Nov 27, 2011
I need to create a bulk upload utility using ASP.NET and SQL Server. Below is the process for the uploads:
An Excel template wherein the user will enter the details; a tab-delimited output file is then generated using VBA.
There are 2 tables: one is a temp table, which is a replica of the final table, and the second is the final table.
Using File.OpenText(filePath).ReadLine(), all the rows from the tab-delimited data file will be inserted into a DataTable.
Using SqlBulkCopy, the tab-delimited data will be inserted into the temp table.
Data will be validated based on what was inserted in the temp table. If the data has errors, the temp table will be cleared; otherwise the data will be inserted from the temp table into the final table.
My issue is that both tables have a column PeopleKey (int, primary key). If the user enters an alphabetic value, the bulk utility fails. Below are the two options in my mind:
1. I can change the data type in the temp table from int to varchar, so the data can be inserted first and I can then validate and correct it. But I am not sure whether this is the right way to fix the issue, as the source and target table columns would then differ.
2. The data is inserted into the DataTable in step 3 above, so once it is there I can validate it in the DataTable. The source and target table data types then stay the same.
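If option 1 is chosen, a validation pass over the varchar staging column might look like this sketch (the staging table name is hypothetical; the LIKE guard is used because ISNUMERIC accepts some strings that are not valid ints):

-- Rows whose PeopleKey contains anything but digits cannot be moved
-- into the final table's int primary key column.
SELECT *
FROM TempPeople                 -- hypothetical staging table
WHERE PeopleKey LIKE '%[^0-9]%'
   OR PeopleKey = ''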
View 1 Replies
Apr 10, 2007
Hi all,
I want to copy 2 columns from one database to another database.
I managed to do this using an OLE DB source and an OLE DB destination.
Now I want to merge the 2 columns into 1:
Source database: column A: first name; column B: last name.
Destination database: column 1: first and last name of the customer.
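One simple shape for this (a sketch; the table and column names are stand-ins for the post's columns A and B) is to concatenate in the OLE DB source query; a Derived Column transform between source and destination achieves the same thing:

-- ISNULL guards keep a NULL in either name from blanking the whole value.
SELECT ISNULL(FirstName, '') + ' ' + ISNULL(LastName, '') AS CustomerName
FROM dbo.Customers              -- hypothetical source table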
Thanks for your help!
Kind regards,
Marcel Hijnen
eXDe Solutions B.V.
View 4 Replies
Jan 31, 2007
Hi:
I use an SSIS package to loop through a folder and load data from multiple Excel files into a SQL 2005 table. It works fine except when an Excel file has a missing column.
Column names in the xls are always a subset of the column names in the table. The missing columns are random, else I would just have made another package :-)
Once a missing column is found, I get runtime and design-time errors and metadata problems. How can I get SSIS to ignore missing columns?
TIA
View 3 Replies
Apr 23, 2008
I have a query like the one below that I am using as an OLE DB source:

set nocount on

select *
into #temp1
from A

select *
into #temp2
from B

select *
from #temp1 a
join #temp2 b on a.episode_key = b.episode_key

I can see the preview data, but when I click Columns, there are no available external columns.
How can I fix this issue?
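As with the first thread on this page, the likely culprit (an assumption, but a common one) is that design-time metadata discovery runs the command under SET FMTONLY ON, so the SELECT ... INTO statements never execute and the temp tables never exist. One workaround sketch is to force real execution during validation; note that the whole batch then runs every time the package validates the source:

set fmtonly off  -- force actual execution so #temp1/#temp2 get created
set nocount on

select * into #temp1 from A
select * into #temp2 from B

select *
from #temp1 a
join #temp2 b on a.episode_key = b.episode_key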
View 8 Replies
Feb 21, 2007
Can somebody show an example of how to map source and destination columns when uploading a file to SQL Server?
Also, how would the mapping look when I want to map the source to different destination columns?
View 1 Replies
Apr 19, 2007
Hi there,
I was trying to execute an OLE DB Source task with a SQL command and I got the following error: "Only text pointers are allowed in work tables, never text, ntext, or image columns. The query processor produced a query plan that required a text, ntext, or image column in a work table."
I read this forum thread and checked the data types on both sides (source and target), and they are the same data type, TEXT. http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=46086
I already executed my query from SSMS using a linked server to connect to the source, and it worked; I could load the data into my target table. Then I tried to execute it in SSBIMS and it failed. I wanted to try an Execute SQL task, but the problem is that I can only have one connection object assigned to the task, so I cannot pull the data from one db and insert it into another with a single Execute SQL task.
Any ideas why I am getting this error? Do I need to set a property to something in order to run my query using the OLE DB Source task?
I'd appreciate any help/comments/suggestions.
Thanks!
View 1 Replies
Apr 17, 2008
Hi
I have an Excel source which is a 41-column sheet. The Excel file path is stored in a table and captured into a variable. The Excel source import is contained within a Foreach loop, which will loop through each file and continue until all the Excel files are processed. It works fine until it gets to the last file. The import then fails with the following error:
The column "F42" needs to be added to the external metadata column collection.
The column "F43" needs to be added to the external metadata column collection.
The column "F44" needs to be added to the external metadata column collection.
The column "F45" needs to be added to the external metadata column collection.
The column "F46" needs to be added to the external metadata column collection.
The column "F47" needs to be added to the external metadata column collection.
Now, when I open the Excel sheet and hit CTRL+END, the cursor goes to a column 6 to the right of the last column with data in it, effectively column 47, where column 41 is the end of my data.
I guess the Jet engine is trying to import these additional columns, but because I am not expecting them there is no destination set up for them in the OLE DB destination, and subsequently the metadata needs to be added. I do not want to do this, as these are Excel files originating from the client and I cannot control how many additional columns they are going to "add".
Does anyone have any ideas as to how I can solve this? Is there a way of identifying the last column with data and only importing those columns?
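One approach that might help (a sketch; the sheet name and row bound are assumptions) is to switch the Excel source from table mode to a SQL command that pins the import to a fixed range, so trailing formatted-but-empty columns such as F42..F47 are never seen:

-- Jet SQL against the Excel connection: columns A..AO are exactly the
-- 41 data columns, so anything the client adds beyond them is ignored.
SELECT *
FROM [Sheet1$A1:AO65536]   -- hypothetical sheet name; 65536 = max rows in .xls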
Thanks in advance for any help or experience of this issue
View 2 Replies
Aug 17, 2007
I've read about the XML Source sometimes setting error output columns to DT_WSTR(255), but mine is now setting them to DT_NTEXT.
Anyone have any suggestions short of an XML editor? I'm concerned that I might do something to "refresh" the columns and cause the problem again.
View 2 Replies
Aug 22, 2007
Hi,
I have a package that uses an Excel file source. There appears to be no place to modify the column data types as you can with a Flat File connection manager. As such, the source columns do not match the columns in the database.
I believe I must be overlooking something here.
Can someone please tell me how I can modify the Excel column datatypes?
Thanks
View 7 Replies
Dec 14, 2006
Take a Data Flow task with an OLE DB Source component and an OLE DB Destination component in it. The Source component's query is stored in a SQLQuery variable, and the Destination component's table name is stored in a TableName variable. The Data Flow task is put into a For Loop container.
I just want to do one thing:
In the For Loop, every iteration sends a new value to the SQLQuery variable and the TableName variable, so I can use the same logic to transfer many tables.
My question is: I really can send new values to the variables and make it go, but every time it says "The external metadata column collection is out of synchronization with the data source columns," because the external metadata was recorded once and is not changed automatically on each iteration of the loop.
How can the source columns' metadata be updated automatically?
View 3 Replies
Mar 14, 2006
We have a complicated select query that needs to build a couple of temporary work tables that are then used in the final select statement (in an OLE DB Source data flow component). We can click Preview and see the result set, but if we click on the Columns view there are no columns. We can save and close the OLE DB Source component, but downstream from it there are messages saying that there are no input columns. The T-SQL looks something like this (abbreviated):

SELECT fieldlist INTO #temp1 FROM table
SELECT fieldlist INTO #temp2 FROM table
SELECT fieldlist FROM table
    INNER JOIN #temp1 ON ...
    INNER JOIN #temp2 ON ...
DROP TABLE #temp1; DROP TABLE #temp2

Has anyone been able to use temp tables in a source SQL statement in a data flow? Are we doing something wrong or incomplete?
Thanks, Gordy
View 3 Replies
Feb 7, 2007
Q: How do I use calculated columns from a Data Source View in an OLE DB Source adapter?
I took the following steps:
- Created new SSIS project
- Added a Data Source connecting to a SQLServer2005 DB (MyDataSource)
- Added a Data Source View based on MyDataSource (MyDSV)
- Created a Calculated field on Table Object MyTable (MyCalcField)
- Added a Connection Manager based on MyDSV
- Added Data Flow to Project
- Added OLEDB Source Adapter to Data Flow
- Attempting to Access Calculated Field MyCalcField to be used in Data Flow.
ISSUE: I can't seem to find a way to get the Calculated field to pass through. It's as though this metadata is not available to the Flow.
Anyone have any ideas?
Thanks - MikeyNero
View 6 Replies
Dec 12, 2007
I am building an SSIS package that loops through a table in SQL Server and dynamically builds a select statement that I would like to use as an OLE DB source. I have been having a difficult time with this, as the select statement I am generating is over 200,000 characters long, so using a SQL variable is out of the question.
I ended up placing the select statement into a table where each row represents a piece of the select. I then use an Execute SQL task that selects the entire rowset from this table into an object variable. I then use a Foreach loop to shred the variable and concatenate it into one big string variable called user::sql_statement, which is my select.
After setting up the loop and testing that the user::sql_statement variable populates correctly, I then added a data flow with an OLE DB source and destination. I went into the advanced editor for the source, set it to accept a SQL statement from a variable, and pointed it at my user::sql_statement variable. I was forced to set the ValidateExternalMetadata option to false to avoid an error, since there is no way to validate the columns until the Foreach loop runs at run time.
Now that's all fine and good, but what is causing my problem is that at run time, when the package gets to the data flow task, the select statement doesn't seem to be populating the columns of the data source. I have been searching to no avail for a way to tell the data source to update its columns, but every time it gets there the package bombs out, telling me the OLE DB source has no available output columns.
Specifically the error i get is :
[DTS.Pipeline] Error: "output "OLE DB Source Output" (6616)" contains no output columns. An asynchronous output must contain output columns.
Any help with this would be much appreciated.
View 18 Replies
May 30, 2006
Hi All,
With the OLE DB source, is it wrong to use a table or view as a source and only check the columns required, or is it beneficial to write a SELECT col1, col2, etc. as a SQL command?
I cannot see any difference in performance between the two.
Thanks.
View 6 Replies
Nov 4, 2015
I am trying to import data into SQL Server 2008 using Management Studio. The source data is an Access database. I am trying to do this with queries, as the tables do not match, but I do need to copy specific columns from the source to the destination. I'd like a brief example of selecting a column from the source table and just entering a dummy value for the other columns (the other columns of the destination table do not exist in the source table).
For the example:
Source: Access database (T2dbase, Employee table), just two columns, no primary key:

First Name | Description
Tom        | manager

Destination: SQL Server (T1dbase, Employee table), note 4 columns:
NameID (primary key), FirstName nvarchar(20), LastName nvarchar(20), Description nvarchar(20)
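With the wizard's "Write a query" option, the source query might be shaped like this sketch (the 'Unknown' literal is a stand-in dummy value, and NameID is assumed to be generated by the destination):

SELECT [First Name] AS FirstName,
       'Unknown' AS LastName,    -- dummy value; no such column in the source
       Description
FROM Employee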
View 5 Replies
Jan 28, 2008
Hi All,
Hopefully someone can help me with what I'm sure is a very simple question (I'm new to the XML thing). I receive an XML file from "someplace" that I need to parse out using the XML Source in SSIS. I have SSIS generate an XSD document, as one isn't provided for me. However, after I do this, SSIS does not show any available external columns to pull data from; the "Columns" section of the source is just blank. I'm pretty sure this has to do with a syntax error in either the XML file that is being provided to me or the XSD doc that SSIS is generating. Below are both (obviously with data dummied up). Can someone take a look and let me know what needs to be changed in either file to get this up and running? I'm looking to grab the AccountNumber, RecordNumber, ProcessedDate, Status, and StatusMessage elements.
XML File:
Code Snippet
<?xml version="1.0" encoding="utf-16"?>
<AccountResponse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <AccountNumber>S12345678</AccountNumber>
  <RecordNumber>AAA1122</RecordNumber>
  <ProcessedDate>Monday, January 28, 2008 11:07 AM</ProcessedDate>
  <Status>0</Status>
  <StatusMessage>Complete</StatusMessage>
</AccountResponse>
XSD File:
Code Snippet
<?xml version="1.0"?>
<xsd:schema xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsd="http://www.w3.org/2001/XMLSchema" attributeFormDefault="unqualified" elementFormDefault="qualified">
  <xs:element name="AccountResponse">
    <xs:complexType>
      <xs:sequence>
        <xs:element minOccurs="0" name="AccountNumber" type="xs:string" />
        <xs:element minOccurs="0" name="RecordNumber" type="xs:string" />
        <xs:element minOccurs="0" name="ProcessedDate" type="xs:string" />
        <xs:element minOccurs="0" name="Status" type="xs:unsignedByte" />
        <xs:element minOccurs="0" name="StatusMessage" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xsd:schema>
Thanks!
View 1 Replies
Jun 29, 2015
I have a scenario where we have to handle dynamically changing source columns.
For example, sometimes the number of columns in the source files will increase or decrease, or new columns may be added in the middle or at the end of the source file.
How can this kind of scenario be handled in SSIS?
View 9 Replies
Oct 25, 2007
Greetings everyone,
I am seeing a particular problem in the XML Source Editor's "Columns" configuration, where it is not persisting the "Output name" selection.
Control Flow Tab:
1. I use an "Exec SQL Command" task to drop, create, or alter the destination tables in the database that I want to be the repository for the inbound XML data. The data types are fairly straightforward.
2. I add a singular "Data Flow"
Data Flow Tab:
1. I add an "XML Source" task and assign a well-defined XML file. I then use the "Generate XSD" option in the "Connection manager", and I am fairly satisfied with the generated XSD.
2. I create an "OLE DB Destination".
3. I wire the "XML Source" to the "OLE DB Destination", then open the "XML Source" at its "Columns" page.
4. I go to the dropdown list of "Output name" and see the list ordered with the various complex types that I want to map and transfer to a target table.
For the sake of this report, I select the 5th one down on the list (for which I already have a target table); let's call this "Mesh".
5. In the "Input Output" dialog, I select the "output" to be the desired 5th item, "Mesh".
6. I check all my mappings so that they map one-to-one: XML name entries match SQL table destination mapping entries; correct types; correct size.
7. I check the metadata and it all looks good.
8. When I hit "Debug" to test the package, the failure occurs at the "XML Source". The error report comes back saying that it failed because "field xxx in Contributor was truncated". However, "Contributor" corresponds to the 1st name in the dropdown list presented in "Columns" / "Output name:".
If I return to step 4, when I open "Columns" I see that my previous selection of the 5th item on the list, "Mesh", was not persisted; no matter how often I select item #5 "Mesh" and save to ensure that the selection sticks, it is not persisted.
I hand-edited the .dtsx file and only then was I able to make this stick. However, if I ever re-save the package, this non-persistence pops up again.
Am I doing something wrong here, or is this a known defect? As I have several dozen XSD mappings that I want to transfer to tables, hand-editing is not something I relish.
I look forward to your reply.
RudyC
View 1 Replies
Aug 12, 2007
I'm using the XML Source to process a hierarchical set of XML. As such, the XML Source creates keys to maintain the hierarchy. This is very convenient, and keeps me from having to invent my own keys.
The problem is that the datatype of these keys defaults to DT_UI8. Which SQL Server 2005 datatype should I use to store these values in my staging tables? BIGINT corresponds to DT_I8, which can't accept DT_UI8 values.
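Since DT_UI8 is an unsigned 64-bit integer, the upper half of its range does not fit in BIGINT; one option (a sketch, with hypothetical table and column names) is DECIMAL(20,0), which holds the full 0 to 18,446,744,073,709,551,615 range at the cost of wider storage:

-- DECIMAL(20,0) covers the entire DT_UI8 range, unlike BIGINT.
CREATE TABLE dbo.Staging_Xml (                 -- hypothetical staging table
    GeneratedKey DECIMAL(20,0) NOT NULL,       -- key created by the XML Source
    ParentGeneratedKey DECIMAL(20,0) NULL      -- key of the enclosing element
)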
View 8 Replies
Feb 26, 2007
I am writing a package that will process delimited flat files that will come in one of a few different versions. Within each flat file, the number of delimited columns will be the same, but each version of the file has a different number of columns. I have tried configuring the flat file data source to expect the version with the largest number of columns, but it then throws away rows that have fewer than this number of columns (warning: "There is a partial row at the end of the file").
Is it possible to use a single flat file data source that will work with all of the different width files?
View 1 Replies
Oct 3, 2007
I have a data source that I access via ODBC in a DataReader Source component in SSIS. I can access the data fine. However, I am having problems with certain fields that are numeric (specifically home prices ranging from 100,000.00 to 99,999,999.00). In the advanced editor for my DataReader source, under the Input and Output Properties tab, in the DataReader output under the external columns and output columns, these fields for some reason default to numeric data types with a precision of 4 and a scale of zero, which is not large enough to hold the data that is coming in. This causes errors that make the data come in as null (after I specify to ignore the errors).
I can change the precision and scale to 18 and 4 in the external columns, but when I try to change the data type, precision, or scale in the output columns I get the following message:
Property Value is not valid.
The details are:
Error at Import DataReader Source: The data type of output columns on the component "DataReader Source" cannot be changed.
Error at DataReader Source: System.Runtime.InteropServices.COMException (0xC020837D)
at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.SetOutputColumnDataTypeProperties(Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostSetOutputColumnDataTypeProperties(IDTSManagedComponentWrapper90 wrapper, Int32 iOutputID, Int32 iOutputColumnID, DataType eDataType, Int32 iLength, Int32 iPrecision, Int32 iScale, Int32 iCodePage)
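Since the error says output-column types on the DataReader source cannot be changed, one workaround sketch is to push the conversion into the source query so the driver reports a wide-enough type up front (the column and table names are hypothetical):

-- The explicit CAST makes the ODBC driver describe the column as
-- DECIMAL(18,4), giving the DataReader source usable metadata.
SELECT CAST(home_price AS DECIMAL(18,4)) AS home_price
FROM listings                  -- hypothetical source table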
Any help is greatly appreciated.
Dave
View 1 Replies
Apr 17, 2007
Hello,
I have a SSIS package with a Data Flow task. This task transfers the data from SQL Server 2000 to a table in SQL Server 2005.
I deployed and tested this package on the Test Server. Then put this package in a job and executed it - Works fine.
On the production server, if I execute the package through DTEXECUI it works fine. But when I try executing it through a job, the job fails and gives me the following error:
Description: The external metadata column collection is out of synchronization with the data source columns. The "external metadata column "T_FieldName" (82)" needs to be removed from the external metadata column collection....
What I don't understand is why no errors are displayed when I execute the package through DTEXECUI.
Can anyone help me resolve this issue?
Thanks.
View 3 Replies