I've created an SSIS package which takes a matrix from an Excel file and inserts it into a SQL table. It works perfectly! However, if I add a new column to that matrix in Excel, the Unpivot tool should pick it up and process it dynamically. Is there a way to make this happen automatically?
I have a situation where I need to unpivot multiple columns using SSIS. The data looks like this:
Name  Age  products1  products2  orders1  orders2
abc   23   cycle      radio      12       24

and it needs to become:

Name  Age  Products  orders
abc   23   cycle     12
abc   23   radio     24
Is it possible to do this using the Unpivot task in SSIS? My actual data has 18 columns that need to be unpivoted into one column, and another 18 into a second column. When I use the Unpivot task, it gives an error saying only one Pivot Value key is allowed.
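If staging the data in SQL Server is an option, a multi-column unpivot can be done in T-SQL (2008 or later) with CROSS APPLY instead of the Unpivot task. A minimal sketch, with the table name dbo.Source assumed from the sample above; for the real data the VALUES list simply grows to 18 pairs:

SELECT s.Name, s.Age, x.Products, x.Orders
FROM dbo.Source AS s
CROSS APPLY (VALUES
    (s.products1, s.orders1),
    (s.products2, s.orders2)
) AS x (Products, Orders);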
I have an SSRS report that has 80+ columns. I have a requirement to dynamically hide/show columns in the report based on user selection. I was able to do it by setting an expression on the Visibility property and adding a report parameter through which the columns to display can be chosen.
My problem has two parts:
1. For example, if columns 2 and 4 are hidden, there is an empty column left between columns 1, 3, and 5. How do I avoid this?
2. When I export to PDF or Excel, these gaps persist.
I have a scenario where we have to handle dynamically changing source columns.
For example, sometimes the number of columns in the source files increases or decreases, and new columns can be added in the middle or at the end of the source file.
I have a situation where I want to load an Excel file dynamically, and the Excel files have different columns or even different worksheet names. How should I approach this? I believe there's no way to modify the metadata (specifically the mappings) in the data flow.
Every day an application creates new tables and dumps static info into them.
I would like to create a package to dynamically export those database tables to raw files for long term archive, one file per table. Here is what I have so far and the issue I am having.
1) Get a list of un-archived tables.
2) For each table, do the following:
   a. Export the table into a raw file.
   b. Zip the raw file.
   c. Update the archive tracking table.
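For step 1, a minimal T-SQL sketch of the un-archived list; the tracking table name dbo.ArchiveTracking and its TableName column are assumptions, not names from the package:

SELECT t.name AS TableName
FROM sys.tables AS t
LEFT JOIN dbo.ArchiveTracking AS a
       ON a.TableName = t.name
WHERE a.TableName IS NULL;  -- tables that have not been archived yet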
As long as the metadata for each table is the same, this package works fine. However, I have many tables with different metadata. How can I get the package to update the external metadata column collection dynamically when it hits a new table? When it hits a table with different metadata, I get warnings like this:
The column "some_column" needs to be added to the external metadata column collection.
The "external metadata column "someother_column" (103)" needs to be removed from the external metadata column collection.
Then I get this error: Error: 0xC004706B at dump the table into a raw file, DTS.Pipeline: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA"
I have an Excel file that I import into a DB table using a data flow in SSIS, but it contains duplicates and I don't want to load the duplicate records.
So I planned it like below:

Method 1: Use two OLE DB Destinations, one for the good records (without duplicates) and one for the bad records (only duplicates).

Method 2: Add a column (GOOD_RECORD) to the DB table, update it to '1' for the top 1 record of each set (the good record) and to '0' for the remaining records (the duplicates), and later filter on the GOOD_RECORD flag,
i.e. SELECT * FROM DB_TABLE WHERE GOOD_RECORD = '1'.
I think Method 2 is advisable for performance and flexibility, but how can I do this update using SSIS (data flow)?
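For Method 2, an Execute SQL Task run after the load can set the flag in a single statement. A minimal sketch; KeyCol1 and KeyCol2 are placeholders for whatever columns define a duplicate in DB_TABLE:

WITH ranked AS
(
    SELECT GOOD_RECORD,
           ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2  -- the columns that identify a duplicate
                              ORDER BY (SELECT NULL)) AS rn
    FROM dbo.DB_TABLE
)
UPDATE ranked
SET GOOD_RECORD = CASE WHEN rn = 1 THEN '1' ELSE '0' END;  -- first row of each group is the good record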
I am using SSIS 2008 and I am trying to remove all parameters from my Connection manager so that the package will execute without errors. How do I do this?
I had a package that was deployed to the SSIS server. That server went away. I would like to now deploy the package to a file system. What settings do I need to change in the package so that it will not attempt to deploy to a non-existent server?
I have created packages which load data from a flat file into a SQL Server table. Now I want to make my destination table connection dynamic; what is the format of the connection string? I also need to pass the SQL Server user name and password dynamically in this case; what is the format for that connection string?
Also, in the package I used an ADO.NET source for *.mdb files; how can I dynamically set the connection to the .mdb files that are used as a source in my package?
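For reference, these are the usual connection string formats; server, database, credentials, and file path below are placeholders, not values from the package, and the OLE DB provider name varies with the installed Native Client version:

SQL Server (OLE DB, SQL authentication):
    Provider=SQLNCLI10;Data Source=MyServer;Initial Catalog=MyDatabase;User ID=MyUser;Password=MyPassword;

Access .mdb (Jet OLE DB; the same string works from an ADO.NET OleDb connection):
    Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MyDatabase.mdb;

Both can be driven from package variables via an expression on the connection manager's ConnectionString property.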
I have one scenario. I read a result set of all columns into a variable, and inside a Foreach Loop Container I use a Script Task to get a message about how many columns are coming through the loop.
Finally, I use a Send Mail Task to send automated mails to a group of people, but the issue is that it picks up only one person's mail ID and then exits the loop.
When I use a single object variable and pass it to two sequence containers, each with its own Foreach ADO Enumerator, it seems the variable is consumed once sequence container 1 is done, and the second one fails with the error below:
Foreach ADO Enumerator Error: COM error object information is available. Source: "ADODB.Recordset" error code: 0x800A0BCD Description: "Either BOF or EOF is True, or the current record has been deleted. Requested operation requires a current record.".
Do we have to set any property so that the object variable's scope persists until both loops are completed?
It works fine if I push the data into 2 different object variables.
I built a small package two years ago that uses Flat File Sources to copy in small text data files. Each source connection object has a UNC path to flat text files on another server. The source system changed, so I opened the package, updated the UNC path in one Connection Manager object, and clicked OK. The Flat File Source Editor that uses this source seemed to be able to see the new location when I clicked "Preview". But when I went back to the file source, the connection had reverted to the original one. It would not save the new UNC path.
I am using SQL Server 2012 SP2 with SSDT (run as admin). I closed the package in SSDT, edited the connection strings using XMLnotepad, and was then able to open, test, build and deploy the package.
It seems that the Source object will not let itself be changed. The other option is to delete it and recreate it, but I didn't want to remap the fields.
I've got an issue with a query in SSIS. From a table in SQL Server I'm getting over 25,000 different identifiers. These identifiers are associated with many values in a table in an Oracle database. This is the schema that I have implemented for doing this.
The problem is that some days the identifiers can number over 45,000, and at that point running a loop for every one of them is not the best solution (it can take too much time to get the result). Previously I used a different approach: in the SQL statement I create and send a single row with all the values concatenated, then recover this string from an object variable and use it to build the query in the ODBC Source that queries the Oracle table, something like: 'SELECT * FROM Oracle_table ' + @string_values, with @string_values = 'WHERE value IN (........)'. It works well when the number of values is small enough, like 250. But in this case I cannot use that approach, because the number is really big and the Oracle DBA is obviously going to kill the query.
So I wonder: how can I iterate over the object variable taking only a small number of values each time, something like 300 or at most 500, to avoid the query being cancelled while also doing the minimum number of loops?
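One way to build those chunks on the SQL Server side is to number the identifiers and group them into batches of 500, producing one concatenated IN-list per batch for the Foreach loop to iterate over. A sketch with assumed table and column names; STRING_AGG needs SQL Server 2017 or later, and FOR XML PATH can replace it on older versions:

SELECT batch_no,
       STRING_AGG(CONVERT(varchar(20), id), ',') AS id_list  -- one IN-list per iteration
FROM (
    SELECT id,
           (ROW_NUMBER() OVER (ORDER BY id) - 1) / 500 AS batch_no
    FROM dbo.Identifiers
) AS numbered
GROUP BY batch_no
ORDER BY batch_no;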
I have to load 500 columns from an Excel sheet into SQL Server. With Excel 2003 I can read a maximum of 256 columns only. If I use Excel 2007, the SSIS 2005 Excel source does not support Excel 2007. If I use an OLE DB source, it can again read a maximum of 256 columns. How can we read 500 columns from an Excel sheet (around 10,000 rows) efficiently using SSIS 2005?
The only way I know to add a new column to an existing mapping is to go to the Advanced Editor and refresh. However, this keeps only the default mappings (where the field names match); the rest are wiped out, so the mappings need to be restored manually after that. Risky and annoying at the same time. Is there any alternative?
I just installed the VS 2005 and SQL Server 2005 clients on my workstation. When trying to create a new Integration Services project and start work in the designer, I receive the Microsoft Visual Studio 'Object reference not set to an instance of an object.' dialog box with the message "Creating project 'Integration Services project1'... project creation failed."
Previously I had the SQL Server 2000 client installed, with the little VS tool that came with it. I uninstalled these prior to installing the 2005 tools (VS and SQL Server).
I'm not finding any information on corrective action for this error.
In order to update an Oracle target table from a SQL Server source table, I need to use a Foreach Loop Container so I can loop over the rows of the SQL Server source table. This source table has two columns: the old identifier to update and the new identifier to apply. I must use the value of the old identifier to filter the Oracle rows to update, while the new identifier is the new value to assign to the filtered old identifier.
I already know how to use the Foreach Loop Container when looping over a single column of a table/view (using an object variable, a Foreach ADO enumerator, etc.), but here I need to loop over two columns.
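For reference, the Foreach ADO enumerator can hand over both columns at once: in the Variable Mappings page, index 0 maps the first column (the old identifier) to one variable and index 1 maps the second column (the new identifier) to another. Inside the loop, an Execute SQL Task against Oracle can then consume both; a sketch with assumed Oracle object names:

-- Statement in the Execute SQL Task; the two ? markers are mapped
-- to the loop variables (first = new identifier, second = old identifier)
UPDATE target_schema.target_table
SET    identifier = ?
WHERE  identifier = ?;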
I have an import file that I need to insert into a DB table. The file looks like this:

one;two;three;text;moretext
one;two;three;text;moretext;
one;two;three;text;moretext
one;two;three;text;moretext;
one;two;three;text;moretext
one;two;three;text;moretext;
As you can see, some rows contain a trailing delimiter while others don't. There is a programming error in the application that generates the file, and this cannot be changed. Is there a way in Integration Services to remove the delimiter?
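If the file can first be staged as whole lines (one wide column per row), the stray delimiter is easy to strip in T-SQL before splitting the columns. A sketch assuming a staging table dbo.RawImport with a single line column:

UPDATE dbo.RawImport
SET    line = LEFT(line, LEN(line) - 1)
WHERE  RIGHT(line, 1) = ';';  -- drop only the trailing delimiter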
I have a CSV file that I am importing via SSIS into a SQL table. On the Flat File connection, I have specified Line Feed as the row delimiter. The data flow is failing because some of the rows have line feeds before the end of the row. Is there a way to get rid of some line feeds but not others, so that I can run the data flow successfully?
How can I perform this task with SSIS or Transact-SQL? I have rows with the following data and want to keep just the valid one, but I have a lot of combinations of names like the ones below; they can be animal, thing, or personal names:
GABRIEL OBANDO --CORRECT
GABRIEL OVANDO
Gavriel OVANDO
gAbriel OBANDO
GABRIE OBANDO
Gabri OBONDA

MANAGUA --CORRECT
NANAGUA
NAMAGUA
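On the Transact-SQL side, SOUNDEX/DIFFERENCE gives a rough phonetic score against a known-good value (SSIS Fuzzy Grouping is the richer option for this). A minimal sketch, assuming the variants sit in a table dbo.Candidates with a name_value column:

SELECT name_value,
       DIFFERENCE(name_value, 'GABRIEL OBANDO') AS score  -- 4 = closest phonetic match, 0 = farthest
FROM dbo.Candidates
ORDER BY score DESC;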
I have a stored procedure. It can be executed like this:

exec test @p = 1;
exec test @p = 2;
exec test @p = n;

n can be in the hundreds.
I want the stored procedure to be executed in parallel, not in sequence; that is, the three calls above should be able to run at the same time. If I knew the number in advance, say 3, I could create 3 different Execute SQL Tasks, and they would run in parallel.
However, n is not static; it comes from a table.
How can I execute stored procedures in parallel and dynamically?
I am thinking about using a Script Task. In the script, I would get the value of n and the list of p values from the table, then run a loop. In the loop I would create a thread, and in the thread execute the sp like: exec test @p = p. That way the exec test calls might run in parallel. But I am not sure whether it works.
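If staying inside T-SQL is preferred over managing threads, one pattern is to create and start one SQL Agent job per parameter value: sp_start_job returns immediately, so the jobs run concurrently. A sketch, with the parameter table name dbo.ParamTable assumed:

DECLARE @p int, @job sysname, @cmd nvarchar(max);

DECLARE params CURSOR FOR SELECT p FROM dbo.ParamTable;
OPEN params;
FETCH NEXT FROM params INTO @p;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @job = N'run_test_' + CONVERT(nvarchar(10), @p);
    SET @cmd = N'EXEC dbo.test @p = ' + CONVERT(nvarchar(10), @p) + N';';

    EXEC msdb.dbo.sp_add_job       @job_name = @job, @delete_level = 1;  -- self-deletes on success
    EXEC msdb.dbo.sp_add_jobstep   @job_name = @job, @step_name = N'run',
                                   @subsystem = N'TSQL', @command = @cmd;
    EXEC msdb.dbo.sp_add_jobserver @job_name = @job;                     -- run on the local server
    EXEC msdb.dbo.sp_start_job     @job_name = @job;                     -- asynchronous: returns at once

    FETCH NEXT FROM params INTO @p;
END
CLOSE params;
DEALLOCATE params;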
I have developed an SSIS package which extracts and creates 5 flat files and finally, using an Execute Process task, zips the folder. In my dev environment everything works fine, but when I move to SIT and UAT I am not able to set up the jobs dynamically by importing the XML config file. I created variables and assigned values, but it still doesn't pick them up. The variables I created are for the flat file destinations and for the Arguments and Working Directory (for zipping). On UAT, when I go to SQL Agent Jobs to set this up and import the .dtsx file and the XML config file, the new values don't appear. Why? And why does the data source always point to the dev location? How can I set it up to take the dynamic values I specified in the config file?
I am using the following script to check for the existence of a table in the database and create it dynamically.
It works when the table does not exist; it errors when the table already exists.
I am using this script in an Execute SQL Task.
[Execute SQL Task] Error: Executing the query "declare @ODSDB varchar(50) declare @SQLSTMT varcha..." failed with the following error: "There is already an object named 'addressTable' in the database.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
declare @ODSDB varchar(50)
declare @SQLSTMT varchar(max)
set @ODSDB = 'SampleDB'
begin
    set @SQLSTMT = '
        IF NOT EXISTS (SELECT * FROM sys.objects
                       WHERE object_id = OBJECT_ID(''' + @ODSDB + '.dbo.addressTable'') and Type=''U'')
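One likely cause, assuming the task's connection does not point at SampleDB itself: the IF NOT EXISTS check reads sys.objects in the current database, while OBJECT_ID resolves the three-part name inside SampleDB, so the check never finds the table and the subsequent CREATE TABLE fails. Qualifying the catalog view with the database name fixes the test; a sketch, where (...) stands for the column list truncated above:

declare @ODSDB varchar(50)
declare @SQLSTMT varchar(max)
set @ODSDB = 'SampleDB'

set @SQLSTMT = '
    IF NOT EXISTS (SELECT * FROM ' + @ODSDB + '.sys.objects
                   WHERE object_id = OBJECT_ID(''' + @ODSDB + '.dbo.addressTable'') and type = ''U'')
        CREATE TABLE ' + @ODSDB + '.dbo.addressTable (...)'  -- (...) = the original column definitions

exec (@SQLSTMT)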
We are using the Unpivot task to convert columns into rows, with an Excel file as the source. But the Excel file's column names change frequently; sometimes it has only 4 columns, sometimes 10.
Every time, we have to check and uncheck the column list in the Unpivot task.
Can anybody help us solve this issue so that the Unpivot task picks up the column names dynamically?
We have a maintenance plan in place that updates statistics on a daily basis. Now I would like to remove a view from the maintenance plan. How can I remove it from the Update Statistics task?
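If the Update Statistics task's object selection allows it, switching the scope from "Tables and views" to "Tables", or choosing "These specific objects" and deselecting the view, should be enough. Failing that, the task can be replaced with a T-SQL step that touches only tables; a sketch (sp_MSforeachtable is undocumented but ships with SQL Server):

-- Update statistics on every user table; views are never visited
EXEC sp_MSforeachtable 'UPDATE STATISTICS ?;';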