Integration Services :: Dynamically Setting Values Using XMLConfig Files
Sep 29, 2015
I have developed an SSIS package which extracts and creates 5 flat files and finally, using a Process Extraction task, zips the folder. On my Dev environment everything works fine, but when I move to SIT and UAT I am not able to set up the jobs dynamically by importing the XML config file. I created variables and assigned values, but they still do not take effect. Below are the variables I created for the flat file destination, the Arguments, and the Working Directory (for zipping). On UAT, when I go to SQL Agent Jobs to set this up and import the .dtsx file and the XML config file, the new values do not appear. Why? The DataSource always points to the Dev location. Why? How can I set it up to take the dynamic values I specified in the config file? (A sample of the config file format is sketched below.)
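For reference, a package configuration file is just XML shaped like the sketch below; the variable name (User::FlatFileFolder) and the UAT path are placeholders for whatever the package actually exposes, and the Path element must match the package's variable names exactly:

<DTSConfiguration>
  <Configuration ConfiguredType="Property"
                 Path="\Package.Variables[User::FlatFileFolder].Properties[Value]"
                 ValueType="String">
    <ConfiguredValue>\\uatserver\extracts\</ConfiguredValue>
  </Configuration>
</DTSConfiguration>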
Every day an application creates new tables and dumps static info into them.
I would like to create a package to dynamically export those database tables to raw files for long term archive, one file per table. Here is what I have so far and the issue I am having.
1) Get a list of un-archived tables.
2) For each table, do the following:
a. Export the table into a raw file.
b. Zip the raw file.
c. Update the archive tracking table.
As long as the metadata for each table is the same, this package seems to work fine. However, I have many tables with different metadata. How can I get the package to update the external metadata column collection dynamically when it hits a new table? When it hits a table with different metadata I am getting warnings like this:
The column "some_column" needs to be added to the external metadata column collection.
The "external metadata column "someother_column" (103)" needs to be removed from the external metadata column collection.
Then I get this error: Error: 0xC004706B at dump the table into a raw file, DTS.Pipeline: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA"
I have a stored procedure. It can be executed like this:
exec test @p = 1;
exec test @p = 2;
exec test @p = n;
n can be in the hundreds.
I want the stored procedure to be executed in parallel, not in sequence, meaning the three examples above can run at the same time. If I knew the number in advance, say 3, I could create 3 different Execute SQL Tasks and they could run in parallel.
However, n is not static; it comes from a table.
How can I execute Stored Procedures in PARALLEL and DYNAMICALLY ?
I am thinking about using a Script Task. In the script, I get the value of n and the list of p values from the table, then run a loop. In each iteration I create a thread, and in that thread I execute the stored procedure like exec test @p = p, so the executions may run in parallel. But I am not sure if it works (a sketch of the idea follows below).
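A minimal sketch of that idea, assuming a Script Task where the list of p values has already been loaded into a read-only string variable (here User::ParamList, comma-separated); the variable name, connection string, and procedure name are placeholders based on the exec test example above:

// Inside the Script Task's Main() method (C#).
// Assumes: using System.Data; using System.Data.SqlClient; using System.Threading.Tasks;
// Parallel.ForEach needs .NET 4+; on older script-task targets, System.Threading.Thread can be used instead.
string connString = "Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"; // placeholder
string[] pValues = Dts.Variables["User::ParamList"].Value.ToString().Split(',');           // e.g. "1,2,3"

// Run one call to the stored procedure per p value, in parallel.
Parallel.ForEach(pValues, p =>
{
    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand cmd = new SqlCommand("dbo.test", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@p", int.Parse(p.Trim()));
        cmd.CommandTimeout = 0;   // allow long-running executions
        conn.Open();
        cmd.ExecuteNonQuery();
    }
});

Dts.TaskResult = (int)ScriptResults.Success;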
I have a scenario where we have to handle dynamically changing source columns.
For example, sometimes the number of columns in the source files will increase or decrease, and new columns can be added in the middle or at the end of the source file.
I am using the following script to check for the existence of a table in the database and create it dynamically.
It works when the table does not exist, but it errors when the table already exists.
I am using this script in an Execute SQL Task.
[Execute SQL Task] Error: Executing the query "declare @ODSDB varchar(50) declare @SQLSTMT varcha..." failed with the following error: "There is already an object named 'addressTable' in the database.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
declare @ODSDB varchar(50)
declare @SQLSTMT varchar(max)
set @ODSDB = 'SampleDB'
begin
    set @SQLSTMT = ' IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(''' + @ODSDB + '.dbo.addressTable'') and Type=''U'')
I've created an SSIS package which takes a matrix from an Excel file and inserts it into a SQL table. It works perfectly! However, if I add a new column to that matrix in Excel, the Unpivot transformation should pick it up and process it dynamically. Is there a way to make this happen automatically?
I need to generate a CSV file from another CSV file. It seems simple, but here is the tricky part:
The output needs to have a maximum of 1000 lines; if I reach that limit, I need to create another CSV and start filling the new one.
For example:
I have a CSV file called fileA which has 2000 lines, and another CSV called fileB with 1500 lines.
I need to loop over a folder, get fileA, create an output called FileAOutput and start filling it; when I reach 1000 lines, I need to create FileAOutput_2 and fill it with the remaining 1000 lines. Then I'll go to fileB and do the same thing, but in that case the second output will only have 500 lines. A rough sketch of that splitting logic is below.
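This is only a minimal sketch of that splitting logic, assuming a Script Task; the input and output folder paths are placeholders, and the output names follow the FileAOutput / FileAOutput_2 convention described above. (File.ReadLines needs .NET 4; a StreamReader loop works the same way on older versions.)

// Inside a Script Task's Main() method (C#).
// Assumes: using System.IO;
string inputFolder = @"C:\Input";     // placeholder folder
string outputFolder = @"C:\Output";   // placeholder folder
const int maxLines = 1000;

foreach (string inputFile in Directory.GetFiles(inputFolder, "*.csv"))
{
    string baseName = Path.GetFileNameWithoutExtension(inputFile);
    int part = 1, lineCount = 0;
    StreamWriter writer = null;

    foreach (string line in File.ReadLines(inputFile))
    {
        // Open a new output file on the first line, and again every time the limit is reached.
        if (writer == null || lineCount == maxLines)
        {
            if (writer != null) writer.Close();
            string suffix = part == 1 ? "" : "_" + part;
            writer = new StreamWriter(Path.Combine(outputFolder, baseName + "Output" + suffix + ".csv"));
            part++;
            lineCount = 0;
        }
        writer.WriteLine(line);
        lineCount++;
    }
    if (writer != null) writer.Close();
}
Dts.TaskResult = (int)ScriptResults.Success;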
I have a For Each Loop (ADO enumerator) container which executes for each Advertiserid coming from the database. Inside the loop I have to create a new Excel file with the advertiser name, so if the loop executes 7 times there should be seven Excel spreadsheets with seven advertiser names.
How can I create an Excel file dynamically in the Foreach Loop container?
I am using the code below to send an HTML email body to multiple recipients and CC, and it is working fine. Now I have to attach multiple files to that mail. Is there any possibility of attaching multiple files with the code below, or is there any other code to achieve this task? (See the sketch after the code below.)
Code:
/* Microsoft SQL Server Integration Services Script Task
   Write scripts using Microsoft Visual C# 2008.
   The ScriptMain is the entry point class of the script. */
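The script above is truncated, so this is only a minimal sketch of the attachment part, assuming the mail is built with System.Net.Mail (the usual way to send HTML mail from a Script Task); the addresses, SMTP server, and file paths are placeholders:

// Inside the Script Task (C#). Assumes: using System.Net.Mail;
MailMessage message = new MailMessage();
message.From = new MailAddress("sender@example.com");                             // placeholder
message.To.Add("recipient1@example.com,recipient2@example.com");                  // placeholders, comma-separated
message.CC.Add("cc@example.com");                                                 // placeholder
message.Subject = "Report";
message.IsBodyHtml = true;
message.Body = "<html><body>HTML body goes here</body></html>";

// One Attachment object per file - add as many as needed.
string[] attachmentPaths = { @"C:\Reports\file1.csv", @"C:\Reports\file2.csv" };  // placeholders
foreach (string path in attachmentPaths)
{
    message.Attachments.Add(new Attachment(path));
}

SmtpClient client = new SmtpClient("smtp.example.com");                           // placeholder SMTP server
client.Send(message);

If the existing code already builds a MailMessage, only the Attachments loop needs to be added to it.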
I have created packages which load data from a flat file to a SQL Server table. Now I want to make my destination table connection dynamic; what is the format of the connection string? I also need to pass the SQL Server user name and password dynamically in this case, so what is the format for that connection string?
Also, in the package I used ADO.NET as the source for *.mdb files. How can I set the connection to the .mdb files, which are used as a source in my package, dynamically? (Two example formats are sketched below.)
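As a rough sketch only (the server, database, credentials, provider and path are placeholders), an OLE DB connection with SQL Server authentication and a Jet OLE DB connection for the .mdb source typically look like this:

Data Source=MyServer;Initial Catalog=MyDatabase;Provider=SQLOLEDB;User ID=MyUser;Password=MyPassword;
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MySource.mdb;

These are usually supplied at run time through a package configuration or an expression on the connection manager's ConnectionString property.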
I have SQL Server 2012 SSIS with an Excel source and an OLE DB destination. I have a problem importing the CustomerSales column. The CustomerSales values are like 1000.00, 2000.10, 3000.30, NotAvailable, so decimal and nvarchar values are mixed in one Excel column. This is a requirement for the solution. However, SSIS reads only the numeric values correctly and the nvarchar values are set to NULL. Why?
I have a SQL query task that fetches a maximum number, for example an output of 102. After that I have a Foreach container that returns files from a particular location. The files in the location are, for example, KAS.JN.101, KAS.JN.102, KAS.JN.103, KAS.JN.104. I want to loop over only the files whose ending is greater than the maximum number returned by the SQL query; in this case the Foreach container must only loop over 103 and 104. How do I pass this condition to the container? (A sketch of one per-file check is below.)
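One way to check that condition, as a minimal sketch: a Script Task inside the Foreach Loop that parses the numeric ending of the current file and sets a Boolean variable which a precedence constraint or expression can then test. The variable names User::CurrentFile, User::MaxNumber and User::ProcessFile are placeholders:

// Inside a Script Task's Main() method (C#), with User::CurrentFile and User::MaxNumber
// mapped as read-only variables and User::ProcessFile as a read-write Boolean variable.
// Assumes: using System;
string fileName = Dts.Variables["User::CurrentFile"].Value.ToString();
int maxNumber = Convert.ToInt32(Dts.Variables["User::MaxNumber"].Value);

// File names look like KAS.JN.103 - take the part after the last dot.
string ending = fileName.Substring(fileName.LastIndexOf('.') + 1);

int fileNumber;
bool process = int.TryParse(ending, out fileNumber) && fileNumber > maxNumber;
Dts.Variables["User::ProcessFile"].Value = process;

Dts.TaskResult = (int)ScriptResults.Success;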
I have a situation where I want to load Excel files dynamically, and the Excel files have different columns or even different worksheet names. How could I approach this? I believe there's no way to modify the metadata (specifically the mapping) in the data flow.
I need to do something like this in SSIS: from one SQL table I need to get some ID values, using a simple SQL query: Select ID from Identifier where value is not null. The query returns a set of ID values, and as a final result I need to generate and set a variable in SSIS with the final value:
@var = '198','120','ACP','120','PQU'
which I need to use later in an ODBC expression. How can I do this in SSIS? (One approach is sketched below.)
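As a minimal sketch of one approach: run the query in an Execute SQL Task with an OLE DB connection and a Full result set stored in an Object variable (here User::IdResultSet), then build the quoted list in a Script Task and write it to a string variable (here User::IdList); both variable names are placeholders:

// Inside a Script Task's Main() method (C#).
// Assumes: using System.Data; using System.Data.OleDb; using System.Text;
DataTable ids = new DataTable();
OleDbDataAdapter adapter = new OleDbDataAdapter();
adapter.Fill(ids, Dts.Variables["User::IdResultSet"].Value);   // shred the ADO recordset into a DataTable

StringBuilder list = new StringBuilder();
foreach (DataRow row in ids.Rows)
{
    if (list.Length > 0) list.Append(",");
    list.Append("'").Append(row[0].ToString()).Append("'");    // builds '198','120','ACP',...
}

Dts.Variables["User::IdList"].Value = list.ToString();
Dts.TaskResult = (int)ScriptResults.Success;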
I have an SSIS package created on SQL Server 2005. I have moved this to a SQL Server 2008 R2 server and amended the package on this new server to point at the correct databases.
The package runs manually. However, I cannot see the package when trying to schedule a job. The dropdown list does not contain the package.
I imported the package by right clicking on MSDB and selecting the package from the file system. The package then appears under this folder (SQL Server Integration Services). I then create the SQL job but cannot see the package I just created.
I have a main package calling another package through the Execute Package task.
The main package is passing the Job Instance id as a parameter to the other package.
When I execute the Execute Package task, the called package does not show any execution progress. However, when I set the DelayValidation property to True, the package executed instantly and the desired result was obtained.
I am not sure why the DelayValidation property made the difference, as my package has no scenario of a temp table being used or any other temporary objects that would need delayed validation.
I have a report with a subscription enabled, and the default values that are selected for the report frequently change. I have our report server locked down so that users can't change the defaults, but I now want to empower them to maintain this on their own. Here is my dilemma: when you have the available parameters set up to pull from a query, the defaults on the report server have to be keyed in manually, which is not an option. The only way to get a check box there is to explicitly specify the available values. I need my available values to be database driven, and I need to be able to select my defaults on the report server using check boxes.
I am copying files from one server to another, and I have a specific naming format for all the jpg files, which come in 3 formats:
filename_reg.jpg, filename_kat, filename_pag
I want to copy only the _reg files using a File System Task. I have already created the File System Task inside a Foreach Loop and it is copying files, but I want to copy only the _reg files.
I have multiple folders in a directory, and each folder contains multiple files with the same extension but with different formats (columns) and names (e.g. file A and file B). We have a data flow task in which we are joining (merging) both files and loading them into a table. Should I use a Foreach Loop? It takes one file at a time, but I need the other file as well to join it in the data flow.
I need to add double quotes around every value in all the records, from start to end.
Source data:
col1: 1, col2: abdul, col3: this is email, col4: it was very good, and very relative posts
Target data:
col1: "1", col2: "abdul", col3: "this is email", col4: "it was very good, and very relative posts"
I want to load these three files into three different destinations: the customer file should go to one destination table, the employee file to another, and the student file to a third. If tomorrow I get some more files in the same folder, those files should also go to separate destinations. This should all happen dynamically.
I have an SSIS package that imports data from an Excel file, replaces any value in Excel that reads "NULL" with "", then writes the data to a couple of databases.
What I discovered today is that I have two date columns, an admit date and a discharge date, and anywhere I have a null value in the discharge date column I need to replace it with the value from the admit date column.
I have searched around online and tried a few things using the Replace function in Derived Columns, but no dice so far (one possible expression is sketched below).
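As a hedged suggestion, a conditional expression in a Derived Column (rather than REPLACE) is usually what handles this; assuming the columns are named AdmitDate and DischargeDate (substitute the real names), the expression replacing the DischargeDate column would look like:

ISNULL(DischargeDate) ? AdmitDate : DischargeDate

If the discharge dates arrive as empty strings rather than true NULLs after the earlier NULL-to-"" step, the condition would compare against "" instead.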
I have one small requirement: I want to load different types of files (.txt, .csv, .tsv, .xlsx).
Using one Foreach Loop container, how can I load the files into the database? I shouldn't use a Script Task to split the filenames. Is there any other way to load all the files using a Foreach Loop container and an Execute SQL Task?
The client uses an Amazon S3 bucket which they load flat files to, and they also expect files to be delivered there. So at the minute I have an SSIS package (SQL 2012) which I use to generate some files, but I then have to manually import the files to the S3 bucket as well as export others. Now, Mike Yin (for SQL 2008 R2) mentioned that you need to obtain the PostgreSQL ODBC driver so that you can use the .NET Providers\Odbc Data Provider for the ADO.NET Source component to connect to the Amazon cloud storage. After that, you can use an OLE DB Destination to load the data into a SQL Server database.
I installed both the 32-bit and 64-bit 9.03 drivers. I created a new ADO.NET connection manager and dropped the provider down to the ODBC data provider. Then what? Do I put the S3 bucket address within the connection string? Is there an example? Why do I need the PostgreSQL ODBC driver, as I am not connecting to a database, just an S3 bucket?
We run Standard 2008 R2. I need to recreate flat files from their varbinary(max) equivalents in our database. I have a mix of Excel, PDF, Word etc. to recreate. Will SSIS be a good tool for doing this? I'm wondering what transform(s) would be involved.
Perhaps I need to cast to varchar first and then land the data, but if I recall correctly there is a maximum record length for SSIS flat file destination rows. And I'm thinking I would have to map the varbinary (or its cast equivalent) to a row in the destination, once for each file created.
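Since these are binary documents rather than text, one alternative worth sketching (instead of the varchar cast and flat file destination) is a Script Task that reads each varbinary(max) value and writes it straight to disk; the connection string, table, column and output folder names below are placeholders:

// Inside a Script Task's Main() method (C#).
// Assumes: using System.Data.SqlClient; using System.IO;
string connString = "Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"; // placeholder
string outputFolder = @"C:\RecreatedFiles";                                                // placeholder

using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("SELECT FileName, FileContent FROM dbo.StoredDocuments", conn)) // placeholder table/columns
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            string fileName = reader.GetString(0);          // the stored file name
            byte[] contents = (byte[])reader[1];            // the varbinary(max) column
            File.WriteAllBytes(Path.Combine(outputFolder, fileName), contents);
        }
    }
}
Dts.TaskResult = (int)ScriptResults.Success;

The built-in Export Column transformation in a data flow is the no-code alternative for the same job.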