I need to create an ODBC source script component that outputs into SQL Server. When I debug I get the following error message:
Error at Data Flow Task [Script Component [1]]: System.InvalidCastException: Unable to cast object of type 'System.Data.Odbc.OdbcConnection' to type 'System.Data.SqlClient.SqlConnection'. at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e) at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.AcquireConnections(Object transaction) at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostAcquireConnections(IDTSManagedComponentWrapper90 wrapper, Object transaction)

Error at Data Flow Task [DTS.Pipeline]: component "Script Component" (1) failed validation and returned error code 0x80004002.
Public Class ScriptMain
    Inherits UserComponent

    Dim connMgr As IDTSConnectionManager90
    Dim sqlConn As SqlConnection
    Dim sqlReader As SqlDataReader

    Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
        ' PP is an ODBC connection manager, but the cast below expects SqlClient.
        connMgr = Me.Connections.PP
        sqlConn = CType(connMgr.AcquireConnection(Nothing), SqlConnection)
    End Sub

    Public Overrides Sub PreExecute()
        Dim cmd As New SqlCommand("SELECT Solution_Code_From, Solution_Code_To FROM Solconv", sqlConn)
        sqlReader = cmd.ExecuteReader()
    End Sub

    Public Overrides Sub CreateNewOutputRows()
        Do While sqlReader.Read()
            With SolutionOutputBuffer
                .AddRow()
                .solcodefr = sqlReader.GetString(1)
                .solcodeto = sqlReader.GetString(0)
            End With
        Loop
    End Sub

    Public Overrides Sub PostExecute()
        sqlReader.Close()
    End Sub

    Public Overrides Sub ReleaseConnections()
        connMgr.ReleaseConnection(sqlConn)
    End Sub
End Class
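The cast is the whole problem: Connections.PP is an ODBC connection manager, so AcquireConnection returns a System.Data.Odbc.OdbcConnection, which can never be cast to SqlClient's SqlConnection. A minimal sketch of the fix, swapping in the Odbc types and leaving the rest of the component unchanged (add Imports System.Data.Odbc at the top):

    Dim odbcConn As OdbcConnection
    Dim odbcReader As OdbcDataReader

    Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
        connMgr = Me.Connections.PP
        ' Cast to the type the ODBC connection manager actually returns.
        odbcConn = CType(connMgr.AcquireConnection(Nothing), OdbcConnection)
    End Sub

    Public Overrides Sub PreExecute()
        Dim cmd As New OdbcCommand("SELECT Solution_Code_From, Solution_Code_To FROM Solconv", odbcConn)
        odbcReader = cmd.ExecuteReader()
    End Sub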
I have a package that has a data flow task. This task imports data from a DB2 database (using the IBM OLE DB Provider for DB2) and adds it to a SQL Server database table. The package was created on the server; then, through version control (using TFS source control), I checked the package out on my local machine. When I open the package I get the following 3 errors.
Error 1 Validation error. Import Account Num from BMGP_BDR: DTS.Pipeline: The component metadata for "component "DataReader Source" (1113)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed.
Error 2 Error loading BMAG Download Xref Tables - bmag.dtsx: Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: The version of component "DataReader Source" (1113) is not compatible with this version of the DataFlow. [[The version or pipeline version or both for the specified component is higher than the current version. This package was probably created on a new version of DTS or the component than is installed on the current PC.]] at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostCheckAndPerformUpgrade(IDTSManagedComponentWrapper90 wrapper, Int32 lPipelineVersion)
Error 3 Error loading BMAG Download Xref Tables - bmag.dtsx: The component metadata for "component "DataReader Source" (1113)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed.
I have a package which reads an Access file from a folder. My connection manager to this file uses ".NET Providers for OLE DB\Microsoft Jet 4.0 OLE DB Provider".
The package works from my computer, but when I execute it on the server as a SQL Agent job, I get:
The component metadata for "component "DataReader Source" (1)" could not be upgraded to the newer version of the component. The PerformUpgrade method failed.
I copied the mdb file to a folder on the server which my packages have no problem reading data from.
My packages run under the same domain account as defined in proxies.
At our business we are getting a lot of PDF documents that are being hand-keyed into a database. Has anyone heard of or know of an SSIS Data Flow Source component that I could use to read those documents into a data stream and process them?
I am loading/executing packages from C#, and I need to populate a temp table from user input and pass this table as a variable to the DataReader Source component's SQL command. I am using an expression to build this query, but I am getting a design-time error when I have this command:
"select id, (SysDate + 28) as ExpiresDate from Table1 where id in (Select Id from" +@[User::Table2]+")"..
I have declared Table2 as a variable of type Object; I am creating Table2 in C# and assigning that table to the user variable. But in design mode I am getting an error: the expression cannot be evaluated.
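That design-time failure is expected: the SSIS expression language cannot reference a variable of type Object, so the concatenation can never evaluate. A sketch of a workaround, assuming Table2 is re-declared as a String holding just the table name (and note the space needed after "from"):

"select id, (SysDate + 28) as ExpiresDate from Table1 where id in (select Id from " + @[User::Table2] + ")"

The C# code would then create the temp table and write its name into the String variable before the Data Flow runs.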
We need to download files from an FTP location, so we are using a Script Component as a source. We are using the following code to read a file and write it out as columns in rows:
WebClient myWebClient = new WebClient();
myWebClient.Credentials = new NetworkCredential(Variables.ftpLogin, Variables.ftpPassword);

// Concatenate the domain with the Web resource filename.
string myStringWebResource = Variables.fileURL + Variables.fileName;

string s = myWebClient.DownloadString(myStringWebResource);
StringReader reader = new StringReader(s);
We are getting the following error: "The server returned an error: (404) Not Found." Are we missing anything in the code?
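A 404 from an FTP URI usually means the composed address does not point at the file, rather than anything being wrong with the WebClient pattern itself. One thing worth checking is how fileURL and fileName join up; a small defensive sketch of the two lines that build and download the address (it assumes fileURL already carries the ftp:// scheme and host):

string myStringWebResource = Variables.fileURL.EndsWith("/")
    ? Variables.fileURL + Variables.fileName
    : Variables.fileURL + "/" + Variables.fileName;   // ensure exactly one separator
System.Diagnostics.Debug.WriteLine(myStringWebResource);  // inspect the address actually requested
string s = myWebClient.DownloadString(myStringWebResource);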
I have implemented a custom source component that can be used as the data source in the Data Flow task.
I have also created a custom UI for this component by using IDtsComponentUI.
But my component does not support setting its custom properties from DTS variables via the Expression Builder.
I have looked around for samples on how to do this, but I can only find samples for custom control flow tasks, i.e. IDtsTaskUI.
My question is: how can I implement the Expression Builder in my custom source component and its custom UI? Or do you know of any samples I could look at?
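For pipeline components, the Expression Builder hookup does not live in the UI class at all: a custom property becomes expressionable when its ExpressionType is set at creation time. A minimal sketch for ProvideComponentProperties (the property name is hypothetical):

IDTSCustomProperty90 prop = ComponentMetaData.CustomPropertyCollection.New();
prop.Name = "QueryText";
prop.Description = "Query issued by the source";
// CPET_NOTIFY surfaces the property in the Data Flow task's property
// expressions list and notifies the component when the expression changes.
prop.ExpressionType = DTSCustomPropertyExpressionType.CPET_NOTIFY;

With this in place the property can be driven from package variables through the standard Expression Builder, with no extra work in the IDtsComponentUI implementation.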
Hi all, I am having trouble getting a linked Oracle 9 server in MS SQL Server 2005 Express to work properly. My machine is running Windows XP. The Microsoft and Oracle OLE DB providers have problems dealing with Oracle's numeric data type, so I decided to use Microsoft's OLE DB Provider for ODBC and an Oracle ODBC source. When using the Microsoft ODBC for Oracle driver in my ODBC source I have inconsistent behavior. Sometimes my queries are processed properly, then other times I get the following error:

OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
OLE DB provider "MSDASQL" for linked server "ODBCBEAST" returned message "[Microsoft][ODBC driver for Oracle][Oracle]".
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "ODBCBEAST".

I have no idea why sometimes I can connect to the linked server with no problems and why other times it performs like this. I'm not changing anything about the system that I can think of. When I use an Oracle client (PL/SQL) I have absolutely no problems connecting. TNSPING returns that the connection is good.

This is unacceptable, so I decided to try my luck with the Oracle 10g ODBC driver. However, when I use this and perform an OPENQUERY select against the linked server, I get back only 11 rows, when I know that the database has over 100 rows (in fact, that's what I get when using the Microsoft ODBC driver and it works). I figured maybe the buffer setting needed to be raised in the ODBC configuration, so I took it from 64000 to 600000 (a magnitude of 10), but I still get back only 11 rows. I'm at my wit's end. Any suggestions on resolving one or the other problem would be much appreciated. Thanks much.
In a Data Flow, I need to use an SSIS variable of type "Object" inside a Script Component and assign to it the content of n variables of string type. On exiting from the script, the Object variable should contain something like the following lines:

AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
...

On exiting from the Data Flow I will use the Object variable in a Script Task, reading each element in a cyclic fashion. Has anyone experienced something like this? Could anyone provide an example of it? Thanks in advance!
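A common pattern is to load the strings into an ArrayList and hand it to the Object variable in PostExecute, which is where script-side variable writes are committed. A minimal sketch in C#, assuming a package variable User::StringList of type Object listed under the Script Component's ReadWriteVariables; all names are hypothetical:

// In the Script Component:
public override void PostExecute()
{
    System.Collections.ArrayList list = new System.Collections.ArrayList();
    list.Add("AAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    list.Add("BBBBBBBBBBBBBBBBBBBBBBBBBBBBB");
    Variables.StringList = list;   // the Object variable now carries the whole collection
}

// In the Script Task that follows the Data Flow:
foreach (string s in (System.Collections.ArrayList)Dts.Variables["User::StringList"].Value)
{
    // read each element in turn
    System.Diagnostics.Debug.WriteLine(s);
}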
Hi all, I'm on a project which uses a lot of views for joining two or more tables. Using the Merge component in SSIS would be a huge effort because it only has two inputs and I have to sort the inputs too. Isn't it possible to have a view-like component that joins more than two tables and doesn't need sorting? (I've thought about creating views in the database engine, but that breaks my data flow in SSIS and isn't a practical solution.)
I am using SSIS 2014 with the .NET Framework version below, installed on Windows Server 2012 R2. I have installed my client's ODBC drivers (both 32-bit and 64-bit) on my production server and created ODBC system DSNs for 32-bit and 64-bit.
When I open SSIS 2014 and try to create the ODBC connection, I can only see the 32-bit system DSN connection; I cannot see my 64-bit ODBC system DSN connection.
Microsoft Visual Studio 2012 Shell (Integrated) Version 11.0.50727.1 RTMREL Microsoft .NET Framework Version 4.5.51650
SQL Server Integration Services   Microsoft SQL Server Integration Services Designer Version 12.0.1524.0
I also installed my client's ODBC drivers (32-bit and 64-bit) and created ODBC system DSNs on my local system, and when I open SSIS 2014 there I can see both ODBC system DSN (32 and 64) connections from the SSIS ODBC connection.
My local system uses the .NET Framework version below and runs Windows 7. I have SSIS 2012 installed as well, and I can see both ODBC connections from 2012 too on my local system.
Microsoft Visual Studio 2012 Shell (Integrated) Version 11.0.50727.1 RTMREL Microsoft .NET Framework Version 4.5.50938
SQL Server Integration Services   Microsoft SQL Server Integration Services Designer Version 12.0.1524.0
Why can I not see the 64-bit ODBC system DSN connection from SSIS on my production server?
I have an XML data file and an associated XSD file with properly defined datatypes. However, the XML Source always gives the data elements the "string" datatype. For example, in my current XML file all the data elements are of Decimal datatype, which is properly defined in the XSD file, yet the datatype of every output column is string.
I have a performance problem with my XML Source component. In my XSD file I have over 300 outputs, and for each output of the XML component I generate a raw file (so I generate 300 raw files in the same Data Flow) that I load into a table in another Data Flow. When I debug just the data flow with the XML source, it takes a very long time (over 10 hours) and the XML Source component stays yellow, generating empty raw files; yet when I interrupt the debug process, it generates the right raw files. For information, I use a machine with 8 processors and 4 GB of RAM, and my XML file is 850 KB. So my question is: how can I increase the performance of my package without splitting the XML file? Thank you in advance.
Does anyone know of an SSIS source pipeline component which reads JSON, the data interchange format?
JSON looks pretty tempting for heavy data interchange (somewhat human-readable, name/value pairs plus arrays, nesting, lighter weight than most XML serializers), and if it's gaining momentum, I should think a source component would follow (most likely third party).
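Until such a component appears, a Script Component source can fill the gap. A minimal sketch, assuming .NET 3.5's JavaScriptSerializer (in System.Web.Extensions), a JSON array of flat records, and an output named Output0 with Symbol and Price columns; every name here is hypothetical:

using System.Collections.Generic;
using System.Web.Script.Serialization;

public class Quote   // assumed shape of one JSON record
{
    public string Symbol { get; set; }
    public decimal Price { get; set; }
}

public override void CreateNewOutputRows()
{
    string json = System.IO.File.ReadAllText(@"C:\data\quotes.json");   // hypothetical path
    List<Quote> quotes = new JavaScriptSerializer().Deserialize<List<Quote>>(json);
    foreach (Quote q in quotes)
    {
        Output0Buffer.AddRow();          // one pipeline row per JSON record
        Output0Buffer.Symbol = q.Symbol;
        Output0Buffer.Price = q.Price;
    }
}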
I want to use the Script Component as a source, but I don't know how to code the output rows. Can someone give me a clue or some sample code? Thanks in advance.
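The heart of a script source is overriding CreateNewOutputRows and pushing rows into the output buffer. A minimal sketch, assuming the default output Output0 has been given an Id (four-byte signed integer) column and a Name (Unicode string) column; the names are hypothetical:

public override void CreateNewOutputRows()
{
    // AddRow() starts a new pipeline row; then assign its columns.
    for (int i = 0; i < 3; i++)
    {
        Output0Buffer.AddRow();
        Output0Buffer.Id = i;
        Output0Buffer.Name = "Row " + i.ToString();
    }
}

There is no need to signal end-of-data explicitly; the runtime closes the output when the method returns.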
I need to iterate through a bunch of packages and replace the source component in all data flows with a different type of source component. That much I've got figured out, but I'm wondering about the best way to fix all the LineageID references downstream.
I figured there would be some helper classes to do that for you, but the best I could come up with so far is to build a dictionary: the keys are the LineageIDs from the source component I just deleted, and the values are the corresponding new columns' LineageIDs. Then I run code along the following lines. Is there a better way?
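(The code itself did not survive the post; what follows is a sketch of the remap it describes against the 2005 pipeline object model, with `pipeline` standing for the data flow's MainPipe and all other names hypothetical.)

// lineageMap: key = LineageID of a deleted source column,
//             value = LineageID of its replacement.
Dictionary<int, int> lineageMap = BuildLineageMap();   // hypothetical helper

foreach (IDTSComponentMetaData90 comp in pipeline.ComponentMetaDataCollection)
{
    foreach (IDTSInput90 input in comp.InputCollection)
    {
        foreach (IDTSInputColumn90 col in input.InputColumnCollection)
        {
            int newId;
            if (lineageMap.TryGetValue(col.LineageID, out newId))
                col.LineageID = newId;   // repoint the dangling reference
        }
    }
}

// If the wrapper rejects the direct LineageID assignment, the fallback is to
// drop the stale column and re-add it through the virtual input's SetUsageType.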
Also, Derived Column transforms end up broken because the expression still references the old LineageIDs, so I had to write a RefreshExpression function and run it on each input and output column downstream. That function replaces the Expression (which uses the LineageID) with the FriendlyExpression (which uses the column name). I also had to give the new source component the same name as the old one so that two-part names (i.e. [My Source].[My Col]) in the Derived Column expressions would still be valid. There's got to be a better way! And I'm worried that the Derived Column transform isn't going to be the only special case.
While working on a recent project, I needed to import a number of old mainframe-generated text reports. Since they don't follow the one-record/one-row construct, the Flat File Source won't work.
So, I created a custom component that allows you to specify a Regular Expression pattern, and it will parse a text file and return columns. Each capture in the RegEx pattern is returned as a column, and you can also specify column names in the regex with the standard (?<colname>) syntax. The code is fairly basic really, but if you are comfortable with regular expression syntax, it can handle a huge variety of unusual text file formats. It also includes a UI editor for the RegEx pattern property that will allow you to test the pattern against a sample text file; it will highlight each row in blue and each column in green (see screenshot)
Since this might have applications for other folks, I added it to SourceForge under GPL. So if you might find that functionality useful, please check out http://sourceforge.net/projects/textregexsource/ and send me some feedback. There isn't much there yet, so check out the screenshots/news/release notes for the basics.
If there is demand, I'll create a more robust install package and documentation. Also looking for feature suggestions.
I am using the XML Source component to load an XML file into a SQL Server database. To start, I tried to test only this XML component with a small XML file (43 KB) and its XSD (434 KB) (not generated using the XML component), so my package contains a data flow with only the XML Source component. When I execute the package, in the progress window I get:
1- Validation step 100% (generates warnings, because the outputs are not used)
2- Preparation for execution step 100%
3- Execution step
But when the "Execution step" starts it does not stop and it does not fail, so the XML component keeps the yellow color indifinely and it generate temporary file in "Temp" repository.
I run this package on a box with SQL Server 2005 SP2, 8 processors and 4 GB of RAM.
So is there any solution or explanation for this problem? Is it a limit of SSIS? And how can I increase the RAM available to SSIS?
I also generated an XSD file using the XML component, but I get the same problem.
A colleague of mine has discovered some behaviour in the XML Source component that I am having trouble understanding or explaining. Here is the (obfuscated) XML document that we are looking to parse:
I'd like to draw your attention to the bit I've highlighted in red. SSIS has incorrectly defined this attribute as an unsignedLong. If you look at the data above, you'll see that it has leading zeros, therefore we need it to be interpreted as a string. Unfortunately (and here's the problem) there doesn't seem to be a way for us to change the generated XSD and thus refresh the external column metadata. The only way to change it is to go into the Advanced Editor and manually change the external column and output column metadata.
Is this by design? It seems very limiting, if you ask me, not to let us have control over what the metadata should be.
I am debugging a Data Flow task in my SSIS package. When I run the package in debug mode, one of the OLEDB Data Sources turns red. I have rerouted all Error Output to a flat file, and put a Data Viewer on that path: no rows get sent. When I click the Preview button on this component in Design mode, I see the expected data and get no error messages. The connection does a simple table access...no SQL command. I don't see anything different between this component and other OLEDB sources in the same package that don't trigger any errors. I've tried dropping and re-creating the component with the same results.
In the AcquireConnections method, using the statement below, I can get a connection object:
oledbConnection = cmado.AcquireConnection(transaction) as OleDbConnection;
From the connection object I can get the connection string by reading the
oledbConnection.ConnectionString property, which has details like the database, user name and other information, but there is no password info.
How do I get the password information? I need it to make OCI calls to fetch the data from the Oracle database in my custom source component.
Hi, I have developed a custom source component to get data from a generic ODBC connection with some special characteristics.
The component works fine, getting and mapping the output fields etc.
The first of the only two remaining problems is that when I run the task, it says that the data flow has no components inside. How is this possible, since I have my source mapped to a flat file inside that data flow?
This is the error:
SSIS package "BVEIT000D.dtsx" starting.
Warning: 0x80047034 at BVEIT000D_<EMPRESA> TXT, DTS.Pipeline: The DataFlow task has no components. Add components or remove the task.
Information: 0x4004300C at BVEIT000D_<EMPRESA> TXT, DTS.Pipeline: Execute phase is beginning.
SSIS package "BVEIT000D.dtsx" finished: Success.
The other problem is that if I want to ignore a certain source column, the component shows me an error saying that no column with ID 0 was found.
Anyone with experience in creating custom components?
I have set up a new connection as a connection from a data source, but I cannot see how to use this connection to create my Data Flow source. I have tried using an OLE DB connection, but this is painfully slow! The process of loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source / connection? Will a DSN source be faster?
I have an OLE DB Source component with an Oracle OLE DB connection manager. In my SQL statement I must do something like this:
SELECT * FROM OracleTable WHERE convert(datetime, OracleDate) = parameter
I've dynamically built the statement as described here: http://blogs.conchango.com/jamiethomson/archive/2005/12/09/2480.aspx
Unfortunately this doesn't work because the variable is a datetime value and not a string.
Maybe another way is to store the date in a table in my SQL Server database and include it in my Oracle SQL statement? That would mean multiple connection managers being used in one SQL statement, and I have no idea whether that is possible.
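Before going that far, it may be enough to keep the expression-built statement and turn the datetime variable into a string inside the expression, letting Oracle parse it back with TO_DATE. A sketch of such a SqlCommand property expression, assuming a datetime variable User::LoadDate (the exact cast and format mask may need tweaking to match what DT_WSTR produces):

"SELECT * FROM OracleTable WHERE OracleDate = TO_DATE('" + (DT_WSTR, 30) @[User::LoadDate] + "', 'YYYY-MM-DD HH24:MI:SS')"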
I'm wondering if it is possible to create a flat file source on the fly while bypassing the following step:
On the Connection Managers page, add or create the Flat File connection manager, using a descriptive name such as MyFlatFileSrcConnectionManager. Then close the Script Transformation Editor.
I want to create the connection totally in script, yet I'm having a hard time proving this out. Does anybody have any experience with this?
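One way to bypass the connection manager entirely is to let the script source open the file itself through System.IO. A minimal sketch, assuming a package variable User::SrcPath listed under ReadOnlyVariables and an output Output0 with two string columns; all names are hypothetical:

public override void CreateNewOutputRows()
{
    // No Flat File connection manager at all: the script owns the file handle.
    using (System.IO.StreamReader reader = new System.IO.StreamReader(Variables.SrcPath))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            string[] parts = line.Split(',');
            Output0Buffer.AddRow();
            Output0Buffer.Col1 = parts[0];
            if (parts.Length > 1)
                Output0Buffer.Col2 = parts[1];
            else
                Output0Buffer.Col2_IsNull = true;   // script buffers expose an _IsNull setter per column
        }
    }
}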
I am trying to figure out how the Excel Source component determines that a column in an Excel sheet has a datatype of float when there is no data in the sheet, only column headers.
I created a package and dragged an 'Execute SQL Task' from the toolbox onto the control flow. I set ConnectionType = 'Excel' and Connection = 'New Connection', used the dialog to specify a path, and checked the 'First row has column names' box. In the SqlStatement I put the following:
create table `Tab1` (Id integer, Description nvarchar(255))
Then I ran the package and it successfully created the workbook and the 'Tab1' worksheet.
Now I create a new package and drag a 'Data Flow Task' from the toolbox onto the control flow. I edit the Data Flow task and drag an 'Excel Source' onto the data flow. I edit the 'Excel Source' component, click the 'New Connection' button, and browse to the workbook file I created above. In the 'Name of the Excel sheet' drop-down I select 'Tab1' and click OK. Now, when I open the Advanced Editor for the 'Excel Source' and click on the 'Input and Output Properties' tab, I expand the 'Excel Source Output' node. Then I select the 'External Columns', select the 'Id' column, and see that the datatype is 'double-precision float'.
I am wondering how SSIS knows that it is a float even though there is as yet no data in the table, only the column headers. There does not appear to be any numeric formatting either. Interestingly, if I close the Advanced Editor, then open the workbook and merely click the 'Save' button without having made any change, the Advanced Editor afterwards prompts me that the datatype of the 'Id' column has changed: it is now a Unicode string. Can anyone explain why this happens? Why did simply clicking the 'Save' button in the Excel sheet cause the datatype to change from float to string? What actually changed in the worksheet? I am using Excel 2003, SQL Server 2005 SP2 and Visual Studio 2003 Pro SP1. Thanks for any insight into this.
Can someone please point me to one or more examples of a Source component that has two or more outputs?
I wrote a source which works with a single output, but after adding a second output I see no rows there. I've single-stepped the code in the debugger, and it looks like it should be adding rows to the second output, but the downstream component never sees any.
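For a custom (non-script) source, the usual trap is in PrimeOutput: the runtime hands over one PipelineBuffer per output, matched by position to the outputIDs array rather than to the order the outputs were created, and every buffer needs its own SetEndOfRowset call or downstream components never see the rows. A minimal two-output sketch under those assumptions (real code would also map column positions with BufferManager.FindColumnByLineageID in PreExecute):

public override void PrimeOutput(int outputs, int[] outputIDs, PipelineBuffer[] buffers)
{
    int mainId = ComponentMetaData.OutputCollection[0].ID;
    int secondId = ComponentMetaData.OutputCollection[1].ID;

    // Locate each buffer by its output ID instead of assuming a fixed order.
    PipelineBuffer mainBuffer = null, secondBuffer = null;
    for (int i = 0; i < outputs; i++)
    {
        if (outputIDs[i] == mainId) mainBuffer = buffers[i];
        else if (outputIDs[i] == secondId) secondBuffer = buffers[i];
    }

    mainBuffer.AddRow();
    mainBuffer[0] = "main value";       // first column of the main output

    secondBuffer.AddRow();
    secondBuffer[0] = "second value";   // first column of the second output

    // Close both outputs; a buffer left open produces exactly the
    // "no rows downstream" symptom described above.
    mainBuffer.SetEndOfRowset();
    secondBuffer.SetEndOfRowset();
}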