We are trying to use SSIS to import data from a flat file. The file contains a mixture of two different sets of data - first there are an unknown number of rows that define the column names, and then there are the rows of data themselves, which are delimited and contain the same number of columns as are defined above.
Anyway, the only way we can see to import this data is first via a Script Source task, since for a flat file source we'd need to specify the number of rows to skip, which is unknown at design time. If anyone knows any way of getting round this I'd appreciate your advice.
So in the Script Source task we are looping through the rows in the flat file and writing to two output buffers - one for the metadata and one for the data itself.
The problem is the Script Source task only lets you specify output columns at design time, and we don't know what columns to output until run time as the number of delimited columns is unknown.
So does anyone know of a way of either:
Adding output columns to a Script Source task at run time, OR
Outputting the delimited data from the Script Source as one long column. If we go that way, what would be the best way of then splitting it up into individual columns?
Hi, I have the following two tables in MS SQL 2000:
1. Products: ProdID int, ProName char(10)
2. Orders: OrderID int, ProdID int, OrderDate DateTime, Quantity int
I want to join these two tables to form the following result format:

            Prod1  Prod2  Prod3  Prod4
21/01/07      1      0      1      0
22/01/07      5      1      2      3
23/01/07      8      0      1      2
24/01/07      3      3      4      3
25/01/07      2      0      1      4
26/01/07      1      2      6      2

So the first row would have the product names, the left-hand column has the order date, and each cell is the sum of all the orders within that date. Any pointers on how to achieve this would be great.
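In case it makes the question clearer, a hard-coded cross-tab along these lines is roughly the shape of result I am after (the product names are just examples; ideally I would not have to hard-code them):

    -- Sketch only: product names are hard-coded for illustration
    SELECT CONVERT(char(8), o.OrderDate, 3) AS OrderDate,
           SUM(CASE WHEN p.ProName = 'Prod1' THEN o.Quantity ELSE 0 END) AS Prod1,
           SUM(CASE WHEN p.ProName = 'Prod2' THEN o.Quantity ELSE 0 END) AS Prod2,
           SUM(CASE WHEN p.ProName = 'Prod3' THEN o.Quantity ELSE 0 END) AS Prod3,
           SUM(CASE WHEN p.ProName = 'Prod4' THEN o.Quantity ELSE 0 END) AS Prod4
    FROM   Orders o
           JOIN Products p ON p.ProdID = o.ProdID
    GROUP BY CONVERT(char(8), o.OrderDate, 3)
    ORDER BY MIN(o.OrderDate)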
I have master tables that I will be updating from our ERP system. Some examples I have seen take the approach of dropping a table in SQL Server and then creating it again before importing; some, and probably my choice, append and update; I have not seen an example where all records are deleted and the data is then appended afterwards. Of the three approaches, which is generally regarded as best practice / most efficient?
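For reference, the append-and-update pattern I have in mind is roughly the following (the table, schema and key names are made up for the example; our real tables differ):

    -- Hypothetical names: dbo.Product is the master table, staging.Product holds the ERP extract,
    -- and ProductCode is the business key
    UPDATE p
    SET    p.Description = s.Description,
           p.Price       = s.Price
    FROM   dbo.Product p
           JOIN staging.Product s ON s.ProductCode = p.ProductCode

    INSERT INTO dbo.Product (ProductCode, Description, Price)
    SELECT s.ProductCode, s.Description, s.Price
    FROM   staging.Product s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Product p WHERE p.ProductCode = s.ProductCode)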
Hi, I want to transform an XML stream into an HTML stream. To do this, I create an XmlDataDocument, add everything I want to it, and store it in a file (in a Data Flow task). After that, in an XML Task, I set the input properties like this: OperationType: XSLT; SourceType: File connection; Source: ras.xml
All works fine. But now I want to use a variable to store my XML data. So, instead of storing my data in ras.xml, I store it in a variable like this:
Dim v As Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSVariables90
Me.VariableDispenser.LockOneForWrite("RAS", v)
v("RAS").Value = doc.OuterXml
v.Unlock()
and in the XML Task, I set the configuration to use that variable as the source.
But when I execute it, I get the following exception:
Error: 0xC002F304 at Format HTML Mail, XML Task: An error occurred with the following error message: "Root element is missing.". Error: 0xC002928F at Format HTML Mail, XML Task: Property "New Source" has no source Xml text; Xml Text is either invalid, null or empty string.
Furthermore, when I debug it by replacing the XML Task with a Script Task, I can see that my RAS variable does contain the XML data, and it is well formed.
When I try to set the transaction option to Required on a SQL Task, the package returns the following error:
"[Execute SQL Task] Error: Failed to acquire connection "MYDATABASE.ADONET_CONNECTION". Connection may not be configured correctly or you may not have the right permissions on this connection."
But if the connection is made to a SQL Server on the local machine, I can use the Required transaction option; if I change the SQL Server connection to another machine, the task only runs if I set the transaction option to Supported...
Does anyone have any idea about this problem?
I've googled the problem, but all the possible solutions I've found came to nothing... so I've run out of ideas!
I have a stored procedure that is executed via an Execute SQL task and that returns a full result set. I map this result set to a variable of Object type. Is there a way to use this variable as a data source in a subsequent data flow task?
I'm making a copy of some tables between 2 servers.
Server 1 requires a sql login
Server 2 is using Windows Auth.
I have a user on server 1 named "odbc" able to log in.
However, my copy task fails. When I drill into the error, it lists the first user on server 1 alphabetically as the failed login, even though in my DTS I am specifying the "odbc" user and password.
I think I have a permissions problem on server 1. So my question: what minimum permissions does the "odbc" user need to copy a table?
On server 1 I can copy from Northwind to server 2 just fine, but any other db on server 1 causes the weird failure with the wrong username.
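If it turns out to be purely a rights issue on server 1, I am assuming the minimum would be something along these lines (the table name is just an example), plus whatever rights are needed to create and insert into the copy on server 2:

    -- Hypothetical table name: read access on the source table for the sql login
    GRANT SELECT ON dbo.MyTable TO odbc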
I am trying to sync a database with data from a web service.
In my control flow I have the Web Service Task setup correctly (I can run the project and the expected data is output to a file if I set the output of the web service task to type "File Connection").
What I would like to do is set the OutputType of my Web Service Task to "Variable" and have the Xml Source component in my Data Flow read the variable and process it.
I have already created a variable of type String to link the two parts together, but I have not figured out how to connect them.
Any suggestions? I have been searching the web, but have not found an answer to my question.
I was trying to execute an OLE DB Source task with a SQL command and I got the following error: "Only text pointers are allowed in work tables, never text, ntext, or image columns. The query processor produced a query plan that required a text, ntext, or image column in a work table." I read this forum thread and checked the data types on both sides (source and target), and they are the same data type, TEXT. http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=46086
I already executed my query from SSMS using a linked server to connect to the source, and it worked; I could load the data into my target table. Then I tried to execute it from SSBIMS and it failed. I wanted to try an Execute SQL Task, but the problem is that I can only have one connection object assigned to the task, so I cannot pull the data from one db and insert it into another with a single Execute SQL Task.
Any ideas why I am getting this error? Do I need to set a property to something in order to run my query using the OLE DB Source task?
I am trying to use an XML Source on XML data from an XML web service. I am putting the document into a variable and then trying to import the data from there with the XML Source, but I am getting an error telling me that truncation occurred.
The Error is "[XML Source [1]] Error: The "component "XML Source" (1)" failed because truncation occurred, and the truncation row disposition on "output column "linking" (1579)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component."
The linking column mentioned in the error is sometimes quite a long string, but there is nowhere in the XML Source editor to change the size.
Inside a data flow task, I have an OLE DB source and destination. In my situation, I need to pull data from a table in the source, but also hard-code some columns myself, which means my source is a blend of data from a table and hard-coded data, which will then have to be mapped to columns in the OLE DB destination. Does anyone know which option to choose in the OLE DB source dropdown for the data access mode? Keep in mind, I do need to run a select query as well as get data from a table. Is it possible to use multiple OLE DB sources and connect them to one destination? Because that is really what I intend to do here. I am not sure how it will work, or even if it's possible. Basically my source access mode needs to be a blend of SQL command and table columns; how would that be implemented? Any help or advice is appreciated.
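To make it concrete, the kind of source query I think I am describing would look something like this (the table and column names are invented for the example):

    -- Hypothetical names: the literals become ordinary output columns of the OLE DB source
    SELECT c.CustomerID,
           c.CustomerName,
           'WEB'     AS SourceSystem,  -- hard-coded value
           GETDATE() AS LoadDate       -- derived value
    FROM   dbo.Customer c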
I am trying to execute a package so that it loads data from an Excel file into a SQL Server database. When I execute it, it raises the following error message and one warning. The Excel file has three columns: Week, Item and Value.
Error 4 Validation error. Data Flow Task: OLE DB Source [94]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft OLE DB Provider for Oracle" Hresult: 0x80040E37 Description: "ORA-00942: table or view does not exist ". Test - GET NW PERF 1.dtsx 0 0
Warning
Warning 1 Validation warning. Data Flow Task: OLE DB Destination [36]: The external metadata column collection is out of synchronization with the data source columns. The column "DAY" needs to be added to the external metadata column collection. The column "TCH_AVAIL" needs to be added to the external metadata column collection. The column "PDROP" needs to be added to the external metadata column collection. The column "P_HR" needs to be added to the external metadata column collection. The column "SFAIL" needs to be added to the external metadata column collection. The "external metadata column "VALUE" (90)" needs to be removed from the external metadata column collection. The "external metadata column "ITEM" (89)" needs to be removed from the external metadata column collection. Not in use - GET NW STATS.dtsx 0 0
I want to put the body of the mail that I need to send into a variable. What is the best variable type to use for that, since the message can be quite large? I'm not sure whether String is the best option; I see there is an Object variable type, but I have not used Object variables before.
In many DTS packages I have used parameterised queries for incremental loads from Oracle database sources using the Microsoft ODBC Driver for Oracle.
Now I want to migrate these packages to SSIS, but the OLE DB connection for Oracle does not support parameters.
I cannot use the "SQL command from variable" data access mode because of the 4000 character limitation on the length of string variables and expressions.
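For context, the kind of parameterised incremental query I mean is along these lines (the table and column names are illustrative, not our real ones):

    -- Illustrative only: the ? parameter was bound to a package variable holding the last load date
    SELECT ORDER_ID, CUSTOMER_ID, ORDER_DATE, AMOUNT
    FROM   ORDERS
    WHERE  LAST_UPDATED > ?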
Hi, I am using SQL Server 2005 for SSIS. I want to change the source connection dynamically every time. Let me be clear: I have to extract some columns from Excel to MS Access. I am using a Data Flow Task and am able to complete the job successfully. But the problem is that whenever a new file comes in, I have to reconfigure my Excel Source. The columns in the file are the same every time, so there is no need to worry about the mapping, but how can my package select a file automatically? I have a directory, suppose "C:dpak". I should be able to pick up the file name and sheet name from this directory every time my package executes.
Hello, I am running a package that transfers data from one SQL 2005 server to another. There are multiple schemas associated with the database. Until recently, this package would work. Now I am getting the following error for all the tables not owned by dbo:
[Transfer SQL Server Objects Task] Error: Table "tblAudiocast" does not exist at the source.
Any help on this would be appreciated.
Thanks, sck10
Microsoft SQL Server Management Studio 9.00.3042.00 Microsoft Analysis Services Client Tools 2005.090.3042.00 Microsoft Data Access Components (MDAC) 2000.085.1117.00 (xpsp_sp2_rtm.040803-2158) Microsoft MSXML 2.6 3.0 4.0 5.0 6.0 Microsoft Internet Explorer 7.0.5730.11 Microsoft .NET Framework 2.0.50727.832 Operating System 5.1.2600
Please can anyone tell me whether, and how, I have to cross-reference a user-defined variable from a Script Task in the SQL of a Data Reader Source?
The Script Task creates a variable for the last Sales Ledger session. The SQL in the Data Reader Source updates the DW sales invoice lines file based upon those invoice lines where the session number is greater than the value from the Script Task.
The SQL command in the custom properties doesn't cross reference back to the variable.
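For illustration, the query is shaped roughly like this (the table and column names here are made up), with the literal session number being the value that needs to come from the variable:

    -- Hypothetical names: 1234 stands in for the last Sales Ledger session from the Script Task
    SELECT il.InvoiceNo, il.LineNumber, il.Quantity, il.Value
    FROM   dbo.SalesInvoiceLine il
    WHERE  il.SalesLedgerSession > 1234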
If anyone can help, or needs further detail in order to help, please post a response.
I have a package that does simple exporting from an Excel sheet to a table. I used a Data Flow task with Excel Source and OLE DB Destination components, and I created package configurations for the source and destination components. After that, when I execute the package I get the following error.
Information: 0x40016041 at ProductDetails_Import: The package is attempting to configure from the XML file "D:TEST_ETLLPL_Config2.dtsConfig".
Information: 0x40016041 at ProductDetails_Import: The package is attempting to configure from the XML file "D:TEST_ETLDBCon2.dtsConfig".
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Error: 0xC0202009 at ProductDetails_Import, Connection manager "Excel Connection Manager": SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E21.
An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
Error: 0xC020801C at Data Flow Task, Excel Source [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Error: 0xC0047017 at Data Flow Task, DTS.Pipeline: component "Excel Source" (1) failed validation and returned error code 0xC020801C.
Error: 0xC004700C at Data Flow Task, DTS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Data Flow Task: There were errors during task validation.
How do I use the Foreach Loop container and pass each file found, according to a specified pattern, to a Flat File Source in a Data Flow Task, so I can operate on each file found in the Foreach Loop instead of having to specify a static file name?
Does anyone know what could be causing the error on the Transfer SQL Server Objects Task? I tried to develop an SSIS project in the Business Intelligence studio to transfer tables between databases on the same server. However, I have been getting the following error:
[Transfer SQL Server Objects Task] Error: Table "XXXXXX" does not exist at the source.
Is there a setting that I need to change to make this work? Thank you for your help.
I'm importing a large csv file two different ways - one with Bulk Import Task and the other way with the Data Flow Task (flat file source -> OLE DB destination).
With the Bulk Import Task I'm putting all the csv rows in one column. With the Data Flow Task I'm mapping each csv value to its own column in the SQL table.
I used two different flat file sources and got the following:
We have a single generic SSIS package that is used to import several hundred iSeries tables into SQL. I am not looking to rewrite the process. But I am looking for ways to improve performance.
I have tried retaining the same connection, setting the maximum insert commit size, locking the table (TABLOCK), removing some large columns, and playing with the log file location and size, and now I am working on tweaking DefaultBufferMaxRows.
To describe the data flow task - there are six data flow tasks (dft) working at the same time. Each dft has its own list of iSeries tables and columns and the corresponding generic SQL table names. Each dft determines its list of tables based on the number of columns to import. So there is dft30 (the iSeries table has 1-30 columns to import), dft60 (the iSeries table has 31-60 columns to import), etc. The destination SQL tables are generically called Staging30, Staging60, etc. Each column in the generic Staging tables is varchar(100). The dfts are comprised of an OLE DB Source and an OLE DB Destination.
The OLE DB Source uses a SQL Command from Variable to build a SELECT statement. The OLE DB Source uses a connection manager that uses an IBM iAccess IBMDA400 provider. The SQL command ends up looking like this for dft30. This specific example is importing from the iSeries table TDACLR, and it only has two columns, so it will be copied to the Staging30 table.
select TCREAS AS C1,TCDESC AS C2,0 AS C3,0 AS C4,0 AS C5,0 AS C6,0 AS C7,0 AS C8,0 AS C9,0 AS C10,0 AS C11,0 AS C12,0 AS C13,0 AS C14,0 AS C15,0 AS C16,0 AS C17,0 AS C18,0 AS C19,0 AS C20,0 AS C21,0 AS C22,0 AS C23,0 AS C24,0 AS C25,0 AS C26,0 AS C27,0 AS C28,0 AS C29,0 AS C30,''TDACLR'' AS T0 from Store01.TDACLR
The OLE DB Source variable value looks like the following, but I am not showing the full 30 columns:
select cast(0 AS varchar(100)) AS C1,cast(0 AS varchar(100)) AS C2,cast(0 AS varchar(100)) AS C3,cast(0 AS varchar(100)) AS C4,cast(0 AS varchar(100)) AS C5, ... cast(0 AS varchar(100)) AS C30.
The OLE DB Destination uses OpenRowSet Using FastLoad From Variable. The insert into Staging30 ends up looking like this.
Of course we then copy and transform the Staging30 data to the SQL table that equals T0.
But back to DefaultBufferMaxRows. Previously the dfts had the default values of 10000 for DefaultBufferMaxRows and 10485760 for DefaultBufferSize. I added a SQL task to SUM the iSeries column sizes (TCREAS and TCDESC in this example) and set DefaultBufferMaxRows by dividing 10485760 by the SUM of the columns' max_length. But I did not see a performance improvement. Do you think that redefining the columns as varchar(100) for the insert is significant? Should I instead SUM the actual number of columns (2 x 100), or SUM the full 30 x 100?
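To show the kind of sizing I am doing, here is the arithmetic as a quick query (the figures assume the 10485760 default buffer size; whether to size on the 2 real columns or the full 30 varchar(100) columns is exactly my question):

    -- Candidate DefaultBufferMaxRows = DefaultBufferSize / estimated row size
    SELECT 10485760 / (2 * 100)  AS RowsIfSizedOnTwoRealColumns,   -- 52428
           10485760 / (30 * 100) AS RowsIfSizedOnFullStagingRow    -- 3495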
I have a table that I'm loading as part of a control flow that in turn is copied to a target table by using a data flow task. I am doing this because a different set of fields may be used from the source entry to create the target entry based on a field in the source table. That same field may indicate that multiple entries need to be created in the target table from one source table entry for which I use a multi-cast transformation.
The problem I'm having is that no matter how many entries there are in the source table, the OLE DB Source during execution only shows 7,532 entries being taken from the source table. If there are less than 7,532 entries in the source table, everything processes fine. More than 7,532 and the data flow task only takes 7,532 and then seems to hang. It also seems as though only one path of the multi-cast transformation is taken when the conditional split directs a source entry down that path.
Seems like a strange problem I know, but any insight anyone could provide is appreciated. Thanks.
I am using an Execute SQL task to run a stored procedure in an Oracle database which returns a result set. This works. Now I need to send the output to a destination table in a SQL database. Should I use a Foreach loop to pick up the result set and insert it into the destination row by row (which I don't think is a great idea), or is there a better way to accomplish this (in a data flow task)?
When I use a data flow task instead of the Execute SQL task, the main issue is that I am not able to see the output columns when I execute an Oracle stored procedure, even though the preview shows the result set. I can see the output columns for a SQL Server stored procedure, however.
I have created a File System task which is contained in a Foreach Loop Container. I have .bak files that are populating a directory from a maintenance backup plan.
There is a point where I need to delete the .bak files after I've zipped them all up.
How do I set the SourceVariable to read through the directory and pick up just the .bak files in the directory to delete?
I have a source files folder where files are generated every day. My goal is to pick the latest file and copy this single file to another folder. I used the Foreach Loop container, got the latest file, and stored the file name in a variable, i.e. LatestFile. Then I want to use the File System Task to copy this file to the destination. At the beginning I could not set up LatestFile, since I don't know the file name at that point, so when I set up the Source Connection property of the File System task, it does not allow me to leave the SourceVariable blank!