I'm looking to copy multiple files from multiple subfolders within a directory via ftp.
Our FTP server will have a /Logs directory with multiple subfolders representing different machines.
So you may have /Logs/M_987, /Logs/M_123, etc.; the number of machines is variable, as they can be added or removed.
And I'm trying to make an import routine that will go through all the subfolders on a remote machine and FTP the log files contained within them over to a local machine for importing.
The one thing I'm struggling with is making the FTP Task loop through remote folders and copy all *.log files.
I can get it working on a single folder within /Logs but need a way to "wildcard" the subfolders or loop through all subfolders within /Logs?
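For what it's worth, a Script Task can walk the remote tree where the FTP Task's wildcards run out. A minimal C# sketch, assuming a hypothetical FTP connection manager named "FTP Server" and a hypothetical local drop folder C:\LogImport:

using System;
using System.Linq;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    // "FTP Server" is a hypothetical FTP connection manager name.
    ConnectionManager cm = Dts.Connections["FTP Server"];
    FtpClientConnection ftp = new FtpClientConnection(cm.AcquireConnection(null));
    try
    {
        ftp.Connect();
        ftp.SetWorkingDirectory("/Logs");

        string[] machineFolders, rootFiles;
        ftp.GetListing(out machineFolders, out rootFiles);

        // Visit each machine folder (M_987, M_123, ...) and pull its *.log files.
        foreach (string machine in machineFolders)
        {
            ftp.SetWorkingDirectory("/Logs/" + machine);

            string[] subFolders, files;
            ftp.GetListing(out subFolders, out files);

            string[] logs = files
                .Where(f => f.EndsWith(".log", StringComparison.OrdinalIgnoreCase))
                .ToArray();

            if (logs.Length > 0)
                ftp.ReceiveFiles(logs, @"C:\LogImport", true, false); // overwrite, binary
        }

        ftp.Close();
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception)
    {
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}

GetListing returns the folder and file names of the current working directory, so the same pattern nests further if the machine folders ever grow subfolders of their own.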
I'm trying this code in a Script Task inside a Foreach Loop that gets a file with a Foreach File enumerator and maps it to a variable FileName. In the Script Task I'm setting the ReadOnlyVariables to @[User::FileName]:

Imports System
Imports System.IO
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        ' Read the current file path from the package variable.
        Dim fname As String = CType(Dts.Variables("FileName").Value, String)
        ' Mark the file as processed by renaming it.
        File.Move(fname, fname + ".processed")
        Dts.TaskResult = Dts.Results.Success
    End Sub
End Class

But now I'm getting this error:

Error: Failed to lock variable "\xxxxDataf1.csv" for read access with error 0xC0010001 "The variable cannot be found. This occurs when an attempt is made to retrieve a variable from the Variables collection on a container during execution of the package, and the variable is not there. The variable name may have changed or the variable is not being created.".
As you can see from the error text, the variable does appear to be resolving (the file path shows up), yet the lock for read access still fails.
Does anyone know how I can do this? I tried adding the Microsoft.Office.Interop.Excel reference to the script task, but it doesn't allow me to. It only lets me pick from a selection of references that are already there (not allowing me to add new ones).
I do have this reference and have used it in VB and C# apps.
If there is any other way to read an Excel file, please let me know and I'll move on to that.
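In case it helps while waiting on interop: the Jet/ACE OLE DB provider can read a worksheet without adding any reference to the script task. A minimal sketch, assuming a hypothetical file path and a sheet named Sheet1:

using System.Data;
using System.Data.OleDb;

class ExcelReader
{
    static void Main()
    {
        // Jet reads .xls; for .xlsx swap in the ACE provider
        // (Provider=Microsoft.ACE.OLEDB.12.0; Extended Properties="Excel 12.0 Xml;HDR=YES").
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         @"Data Source=C:\Data\Book1.xls;" +
                         "Extended Properties=\"Excel 8.0;HDR=YES\"";

        using (OleDbConnection con = new OleDbConnection(connStr))
        {
            // The worksheet is queried like a table and lands in a DataTable.
            OleDbDataAdapter da = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", con);
            DataTable dt = new DataTable();
            da.Fill(dt);
        }
    }
}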
I have searched on this forum for a similar question but couldn't find it so I apologize if this has been asked. If so, I'd greatly appreciate a link to the question. I have created a custom task and am trying to read an xml package configuration file within my custom task. To be more specific, I have added an ADO.Net Connection to my database onto the package and have generated the appropriate tags within my package's configuration xml file. I'm not seeing the classes I should use to access the configuration file of the package I've created. Does anybody have any ideas how I can accomplish this or a link to a document that might cover the material? Thanks!
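I don't know of a dedicated runtime class for this either, but since the generated configuration file is plain XML, one workaround is to parse it directly. A rough sketch, assuming a hypothetical .dtsConfig path and the usual <Configuration>/<ConfiguredValue> layout:

using System;
using System.Xml;

class ConfigReader
{
    static void Main()
    {
        // Hypothetical path to the package's XML configuration file.
        XmlDocument doc = new XmlDocument();
        doc.Load(@"C:\SSIS\MyPackage.dtsConfig");

        // Each <Configuration> node carries a Path attribute (the property it
        // sets) and a <ConfiguredValue> child (the value, e.g. a connection string).
        foreach (XmlNode node in doc.SelectNodes("//Configuration"))
        {
            string path = node.Attributes["Path"].Value;
            string value = node.SelectSingleNode("ConfiguredValue").InnerText;
            Console.WriteLine("{0} = {1}", path, value);
        }
    }
}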
I used the below code to read Excel files in an SSIS 2008 R2 Script Component and it works fine, but when I copied it into a Script Task in SSIS 2012, the code doesn't work. I have defined one variable,
Var_ExcelFileName, which stores the location of the Excel file.
/* Microsoft SQL Server Integration Services Script Component
 * Write scripts using Microsoft Visual C# 2008.
 * ScriptMain is the entry point class of the script. */

using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
[Code] ....
I am getting errors in the below lines:

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
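If I remember correctly, those two Wrapper namespaces only exist inside a data flow Script Component; a Script Task references Microsoft.SqlServer.Dts.Runtime instead. A sketch of what the 2012 Script Task's Main might look like (Var_ExcelFileName as in the question):

using System;
using Microsoft.SqlServer.Dts.Runtime;   // Script Task runtime (no Pipeline.Wrapper here)

public void Main()
{
    // Var_ExcelFileName must be listed in the task's ReadOnlyVariables.
    string excelFile = (string)Dts.Variables["User::Var_ExcelFileName"].Value;

    // ... open the file here, e.g. via System.Data.OleDb as in the Script Component ...

    Dts.TaskResult = (int)ScriptResults.Success;
}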
Hey guys. I'm having a little trouble and was wondering if you could help me out. I'm trying to create a custom paging control, so I created a stored procedure that returns the appropriate records as well as the total number of records, and that works fine. What I'm having problems with is reading the data from the second select statement within the code. Anyone have any idea how to do this? Also, how can I check how many tables were returned? Here's my code. I'm trying to keep it very generic so I can send it any SQL statement:

public DataTable connect(string sql)
{
    DataTable dt = new DataTable();
    SqlConnection SqlCon = new System.Data.SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings["MyDB"].ToString());
    SqlDataAdapter SqlCmd = new SqlDataAdapter(sql, SqlCon);
    System.Data.DataSet ds = new System.Data.DataSet();
    SqlCmd.Fill(ds);

    dt = ds.Tables[0];

    // Here's where I don't know how to access the second select statement

    return dt;
}

Here's my stored procedure:

ALTER PROCEDURE dbo.MyStoredProcedure
(
    @Page int,
    @AmountPerPage int,
    @TotalRecords int output
)
AS
WITH MyTable AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY ID DESC) AS RowNum
    FROM Table
    WHERE Deleted <> 1
)
SELECT * FROM MyTable
WHERE RowNum > ((@Page - 1) * @AmountPerPage)
  AND RowNum < ((@Page * @AmountPerPage) + 1);

SELECT @TotalRecords = COUNT(*) FROM Table WHERE Deleted <> 1
RETURN
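For reference, a DataSet keeps every result set the batch returns, so a second SELECT would land in ds.Tables[1], and ds.Tables.Count says how many came back. Note that @TotalRecords above is an output parameter rather than a second result set, so it needs an explicit SqlParameter. A minimal sketch calling the proc from the question (connection string supplied by the caller):

using System.Data;
using System.Data.SqlClient;

public static class PagingHelper
{
    public static DataTable GetPage(string connStr, int page, int perPage, out int totalRecords)
    {
        using (SqlConnection con = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("dbo.MyStoredProcedure", con))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Page", page);
            cmd.Parameters.AddWithValue("@AmountPerPage", perPage);

            // @TotalRecords comes back as an OUTPUT parameter, not a second table.
            SqlParameter total = cmd.Parameters.Add("@TotalRecords", SqlDbType.Int);
            total.Direction = ParameterDirection.Output;

            DataSet ds = new DataSet();
            new SqlDataAdapter(cmd).Fill(ds);

            // ds.Tables.Count reports how many result sets the batch returned;
            // a genuine second SELECT would be reachable as ds.Tables[1].
            totalRecords = (int)total.Value;
            return ds.Tables[0];
        }
    }
}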
I have an EXECUTE SQL Task, which gets a result-set from SQLServer using OLEDB Connection.
The result set is mapped to an object variable, say @[User::FilePath].
There are 33 rows in the above result set.
Then I have a For-each loop, inside which I have a Script task.
I am trying to put the above @[User::FilePath] recordset into a DataTable using the DataAdapter.Fill() function, and I perform some read operations on its rows.
The problem is, in the first iteration of the For-Each loop, the number of rows in the data table is 33.
But from the next iteration onwards, it comes out to be 0. (ZERO!!)
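A sketch of the fill, assuming the object variable holds an ADO recordset. The catch is that the recordset is a forward-only cursor, so Fill consumes it; the usual workaround is to fill once before the loop (or re-run the Execute SQL Task) rather than refilling on every iteration:

using System.Data;
using System.Data.OleDb;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    // Fill a DataTable from the ADO recordset stored in User::FilePath.
    DataTable dt = new DataTable();
    OleDbDataAdapter da = new OleDbDataAdapter();
    da.Fill(dt, Dts.Variables["User::FilePath"].Value);

    // The recordset's cursor is now exhausted: a second Fill from the same
    // object returns 0 rows, which is why iteration 2 onwards sees nothing.
    foreach (DataRow row in dt.Rows)
    {
        // read operations on the row go here
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}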
Any idea how to configure/read multiple worksheets from a spreadsheet using a single connection manager? I think we should be able to do it using a SQL command; I'm just not sure how to achieve that. Let me know the other alternatives too.
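One Excel connection can serve every sheet when the source uses a SQL command, since each worksheet is addressed like a table ([Sheet1$]). A sketch that enumerates the sheets through the schema rowset, assuming the Jet provider and a hypothetical path:

using System.Data;
using System.Data.OleDb;

class SheetDump
{
    static void Main()
    {
        // Hypothetical .xls path; use the ACE provider for .xlsx.
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         @"Data Source=C:\Data\Book.xls;" +
                         "Extended Properties=\"Excel 8.0;HDR=YES\"";

        using (OleDbConnection con = new OleDbConnection(connStr))
        {
            con.Open();

            // The schema rowset lists the worksheets; TABLE_NAME ends in $.
            DataTable sheets = con.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
            foreach (DataRow sheet in sheets.Rows)
            {
                string name = (string)sheet["TABLE_NAME"];

                // One connection, one SQL command per sheet.
                DataTable dt = new DataTable();
                new OleDbDataAdapter("SELECT * FROM [" + name + "]", con).Fill(dt);
            }
        }
    }
}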
Hi, I have an FTP task in my SSIS package where I want to select 3 files from a directory. The directory already contains other files, but I only want 3 of them. How do I pull down only those 3 files from the FTP site? I can't seem to find an option on the FTP task to select which files I want from the directory.
I have a situation where I run the same task multiple times during execution. I would like to have one task which runs every time, instead of duplicating the task over and over, 14 times, in my package.
Basically, it is an audit log, which I set variables and then insert into a SQL table the variables.
I would like to do this:
Task1 ------Success-----> Set Vars -----Success--> Log
Task2 ------Success-----> Set Vars -----Success--> (do the Log task again)
Task3 ------Success-----> Set Vars -----Success--> (do the Log task again)
etc.
This works; however, I have to duplicate Log over and over and over. And no, OR-ing the precedence constraints does not work, because the Log task still only executes once.
Another option I thought of, but cannot find a way to implement, is to make the Log task disabled with no dependencies, then in the Set Vars script enable and execute Log, and disable it again.
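For what it's worth, a single Script Task can do the variable setting and the audit insert in one step, which collapses each Set Vars -> Log pair into one reusable piece. A minimal sketch, assuming a hypothetical AuditLog table, a hypothetical ADO.NET connection manager named "AuditDb", and a hypothetical User::TaskName variable:

using System;
using System.Data.SqlClient;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    // Acquire the connection from the hypothetical ADO.NET connection manager.
    SqlConnection con = (SqlConnection)Dts.Connections["AuditDb"].AcquireConnection(Dts.Transaction);
    try
    {
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO dbo.AuditLog (TaskName, LoggedAt) VALUES (@task, @at)", con))
        {
            // Substitute whatever the Set Vars step currently populates.
            cmd.Parameters.AddWithValue("@task", Dts.Variables["User::TaskName"].Value);
            cmd.Parameters.AddWithValue("@at", DateTime.Now);
            cmd.ExecuteNonQuery();
        }
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    finally
    {
        Dts.Connections["AuditDb"].ReleaseConnection(con);
    }
}

The other common route is a package-scoped event handler (e.g. OnPostExecute), which fires once per task without duplicating anything on the control flow.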
I have an execute process task to run Red-Gate's SqlDataCompare synchronization. The normal exit code is 0 indicating successful synchronization. However, if the tables are already identical and require no synchronization the process exit code is 63 and the task fails. I do not want the task to fail if the tables are identical, but it seems I can only specify a single value in the task's SuccessValue property. I tried separating values with a comma, e.g. 0,63. Any suggestions?
I have modified my workflow to take a conditional branch. The workflow terminates after the branched tasks finish, without continuing to the next task, no matter how I set the flow-out condition. I found I have this problem when a task takes more than one input. Does SSIS have a setting for limiting the inputs to be taken?
I have an xml task that I have set up to validate my xml using a XSD. It seems to be working OK. However, I have had to wrap my xml in a SOAP envelope before I send it to the validation task so I need to include an additional schema for the soap message header. That "header" schema has an <xsd:import> of the soap envelope schema. When I try to <xsd:include> the message header schema I get this:
"Warning 313 The targetNamespace 'blah blah' of included/redefined schema should be the same as the targetNamespace ' blah blah blah' of the including schema."
Is it not possible to use the Xml Task to validate the entire document including the SOAP Envelope due to the differing target namespaces? Thanks for any suggestions.
I am stuck on a problem that should be very simple, but it is not. At least not for me.
Here is the initial state. Working folders: E:123DTS and E:DTSArchive. Files: akarmi.zip, akarmi2.zip (each contains one mdb file).
What I want to do with the package:
a) unzip the first file (there is an unzip.exe for this task)
b) rename the unzipped mdb (its name can be anything) to default.mdb
c) use the mdb with another package (that is not yet designed, so I haven't added this one)
d) delete the default.mdb
e) move the first zip file into E:123DTSArchive
f) do a-e with all other zip files in the folder (so only once more in this case)
What I did:
1) I created a Foreach Loop container to do this with all the zip files; I correctly configured the enumerator to *.zip and created a user::zip_files variable
2) put in an unzip Execute Process Task, whose argument was the user::zip_files variable (this worked great)
3) I created another Foreach Loop container, because I don't know how to rename the only existing mdb file in the folder (I wasn't able to use wildcards here)
3.1) this time the enumerator was *.mdb and the variable was user::mdb_files
3.2) I put a File System Task into the container to rename that mdb file to default.mdb. I used variables for source and destination, too. The source variable (SourceRename) was an expression made of a WorkingFolder variable (that was E:123DTS) + user::mdb_files. This was supposed to work, but didn't. When I put the exact name of the mdb in place of the user::mdb_files variable, it worked without any problem.
4) another 2 File System Tasks to move the original zip file into the E:DTSArchive folder and to delete the default.mdb file
5) end of the loop
I had problems with step 3). I even tried to make it run separately, but without success.
I would think that the solution is very simple. I would be grateful even if someone just sent or uploaded me a package that renames every mdb file in a directory to default.mdb (it might sound stupid, but if there is only 1 mdb that's okay, and since this container is inside another, the default.mdb can be overwritten after it has been used by the other package).
I've uploaded my packages with the files here: http://www.sendspace.com/file/mrd2jh
I was dealing with this one all day (with some interruptions) and I can't believe it can't be done.
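In case it helps, here is a minimal C# Script Task sketch of the rename step, assuming a hypothetical User::WorkingFolder variable; the logic translates directly to VB:

using System.IO;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    string folder = (string)Dts.Variables["User::WorkingFolder"].Value;
    string target = Path.Combine(folder, "default.mdb");

    // Rename every mdb in the folder to default.mdb (with one mdb per zip,
    // the loop runs once; a leftover default.mdb is simply replaced).
    foreach (string mdb in Directory.GetFiles(folder, "*.mdb"))
    {
        if (target.Equals(mdb, System.StringComparison.OrdinalIgnoreCase))
            continue;   // already named default.mdb
        if (File.Exists(target))
            File.Delete(target);
        File.Move(mdb, target);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}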
The further I get with my current SSIS package, the more I am starting to wonder about best practices and performance.
My current package loops through CSV files in a specified location and extracts events from these files. Each file contains multiple events, which are a mixture of different types, and depending on the event there is a different number of comma-separated values. In the package I first read each event as one column separated by a comma delimiter. I then create an array for the event, split on the delimiter. In a script I weed out all elements of the array that are common to all events and put the remaining elements into another array.

After some processing I come to my Conditional Split transformation, which splits the processing of each event based on the EventID. This is where I'm having doubts about whether I have approached the package correctly. There are approximately 60 different events, so each one of these has a separate pipeline to process the remaining parameters in the array and output them to the destination table; the destination table is different for each ID. Is it viable to have this number of conditions and paths in the package, and is this likely to have any detrimental effect on performance? Is there possibly another way I could approach this problem?
I have a DTS package that creates multiple .zip files in a folder. Each zip file has a different name. My goal is to attach all the zips to a single email in a Send Mail Task. I have tried something like "C:Folder*.zip", but have had no luck. Any suggestions?
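As far as I know, the Send Mail Task's FileAttachments property takes explicit file names separated by the pipe character and does not expand wildcards, so one option (under SSIS, or once the DTS package is migrated) is a small script that builds that list. A sketch, assuming hypothetical User::ZipFolder and User::Attachments variables, with the latter fed into FileAttachments via an expression:

using System.IO;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    string folder = (string)Dts.Variables["User::ZipFolder"].Value;

    // The Send Mail Task expects multiple attachments as a pipe-delimited list.
    string[] zips = Directory.GetFiles(folder, "*.zip");
    Dts.Variables["User::Attachments"].Value = string.Join("|", zips);

    Dts.TaskResult = (int)ScriptResults.Success;
}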
I'm running this legacy DTS package under SQL Server 2005, and need to make some changes.
I have some (say, 3) stored procedures, and they all have one same parameter. Each SP returns a dataset. I need to import each dataset to a table using DTS. Both the SPs and the destination tables are in the same database.
So, I created an "Execute SQL Task" and typed in "SQL Statement" something like this:
INSERT INTO TABLE TABLE1 EXEC SP1 ?
GO
INSERT INTO TABLE TABLE2 EXEC SP2 ?
GO
INSERT INTO TABLE TABLE3 EXEC SP3 ?
GO
And the execution returns an error: No value given for one or more required parameters.
Is it possible to assign a parameter to all three SPs? What is the workaround if I don't want to change the SPs and I don't want to create 3 Execute SQL Tasks?
I was wondering if it is possible to assign values to multiple variables from within the same Execute SQL Task, i.e. I want to use only one Execute SQL Task, have multiple T-SQL statements within it, and then assign the results of these SQL statements as values to multiple variables.
Typically I would declare variables var1, var2 and var3; can I just add one Execute SQL Task and have multiple SQL statements within it? Something like this
I have a situation where i need to unpivot multiple columns using ssis. The data looks like
Name  Age  products1  products2  orders1  orders2
abc   23   cycle      radio      12       24

as

Name  Age  Products  orders
abc   23   cycle     12
abc   23   radio     24
Is it possible to do this using the Unpivot task in SSIS? My actual data has 18 columns which need to be unpivoted into one, and another 18 into another one. When using the Unpivot task it gives an error saying only one pivot value key is allowed.
I am having difficulty with an SSIS package that is simply to pivot a table, and perform calculations used in reporting.
Background: A sample set of data is being tested to pivot for reporting purposes. The sample set of data being used is 2226 rows that will pivot to 6 rows having nearly 400 columns.
Problem 1: This all seems quite simple, except that when I try to pivot on one set of data it works perfectly; with two at a time, again great. However, as I add three or more groups of data from a view (thus creating 3+ pivoted rows), SSIS throws error "-1071636293" and points to the column containing the values to be pivoted. The error output dumps ALL source data rows with the same error code and message, even for data groups that were successfully pivoted and written to the destination DB with correct mappings (including the one noted with the error code).
Running the package on each group of data one set at a time, creating one pivot output row, always works for all data sets without throwing any errors. How is it possible that SSIS would run perfectly on each data set creating one row without any problems/errors/warnings, but fail when run on all the data at the same time?
Problem 2: When running on multiple sets of data creating many pivoted output rows, the error output viewer and log file contain duplicated data rows from the original query. For example, if I received the row with pivot ID 1001 ONCE in the query from the DB, as it is written out to the error file and viewer I see this ID TWICE for each data group. So if each data set were to contain 100 codes to match and I ran this on three data sets, creating an original 300 rows from the db query, the error file and viewer will have 303 rows. This occurs for different IDs depending on the data set, but one ID in particular is duplicated for ALL. How could a row be duplicated in the error output in one case but not another when the source data does not contain ANY duplicates?
Problem 3: When running the package for three sets of data SSIS will note an error and redirect the row to the file I specified with no problem. However, as I run it on 4 or more data sets that cause more errors I receive the message: "The buffer manager attempted to push an error row to an output that was not registered as an error output. There was a call to DirectErrorRow on an output that does not have the IsErrorOut property set to TRUE." Why would the error logging work for one failure but have problems with more?
I am monitoring the package execution by:
1) logging all available output from SSIS to a file.
2) redirecting all errors from the pivot to a file, which works only on the small data sets where I am expecting 1-2 rows as the pivot result.
3) placing data viewers on all directional arrows from the original db query and the pivot.
Finally, even with all the errors or warnings all data is pivoted as desired and inserted into the destination database as if nothing was wrong, sometimes. Same package, same data, same errors, but delaying the package with breakpoints causes SSIS to either be successful or fail. I'll question that one later as I gather more information.
Any help on one or more of these questions would be a great help...
I have implemented a single lookup and would like to know the optimal approach to implementing multiple lookups within a single "data flow task", i.e. my question is what to do if I had to look up multiple reference tables to obtain surrogate keys. I am oversimplifying for illustration purposes…
My goal is to run a bunch of select statements from different tables in one database and have them all insert into the same columns/table in the new database. Do I need a new data source for each statement, or is there a way to run all the statements in one set, seeing as they all have the same destination? I keep receiving an "SQL statement improperly ended" error when I try.
I have multiple sequence containers in my package. I only want to have one Send Mail task for the failure/completion of the package. If I put the Send Mail task in the last sequence container and the first sequence fails, the Send Mail task will not be reached and therefore no email will be sent out. Is there a way to have one Send Mail task for all the sequence containers and allow it to send mail regardless of which sequence fails/completes?
I have a data flow task in which there is an OLEDB source, a derived column item, and an OLEDB destination. My source is a SQL command that returns some values. I have some values that I define in the derived columns, with default values set under the expression column. My question is: I also have some destination columns in my OLEDB destination which need another SQL command. How would I do that? Can I attach two or more OLEDB sources to one destination? How would I accomplish that? Thanks
OK. I give up and need help. Hopefully it's something minor ...
I have a dataflow which returns email addresses to a recordset.
I pass this recordset into a ForEachLoop configuring the enumerator as (Foreach ADO Enumerator). I also map the email address as a variable with index 0.
I then have a Execute SQL task which receives this email address as a varchar variable (parameter 0) which I then use in my SQL command to limit the rows returned. I have commented out the where clause and returned all rows regardless of email address to try to troubleshoot this problem. In either event, I then use a resultset to store the query result of type object and result name 0.
I then pass this resultset into a script variable to start parsing the sql rows returned as type object. ( I assume this is the correct way to do this from other prior posts ...).
The script appears to throw an exception at the following line. I assume it's because I'm either not passing in the values properly or the query doesn't return anything. However, I am certain the query works as it executes just fine at the command prompt.
My intent is to email the query results to each email address with the following type of data by passing the parsed data from the script to a send mail task. Email works fine and sends out messages but the content is empty. I pass the parsed data as string values to the messagesource and define the messagesourcetype as a variable in the mail task.
part number leadtime
x 5
y 9
....
Does anyone have any idea what I might be doing wrong?
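For reference, a minimal sketch of the parsing script, assuming hypothetical variable names User::QueryResult for the result set and User::MessageSource for the mail body. If the DataTable fills but the mail is empty, the usual culprit is that the variable being populated is not the one the Send Mail Task's MessageSourceType points at:

using System.Data;
using System.Data.OleDb;
using System.Text;
using Microsoft.SqlServer.Dts.Runtime;

public void Main()
{
    // Shape the query result into the body text for the Send Mail Task.
    DataTable dt = new DataTable();
    new OleDbDataAdapter().Fill(dt, Dts.Variables["User::QueryResult"].Value);

    StringBuilder body = new StringBuilder("part number leadtime\n");
    foreach (DataRow row in dt.Rows)
        body.AppendFormat("{0} {1}\n", row[0], row[1]);

    // The mail task reads this variable when MessageSourceType = Variable.
    Dts.Variables["User::MessageSource"].Value = body.ToString();

    Dts.TaskResult = (int)ScriptResults.Success;
}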
I have been struggling to find a solution to convert XLSX files with multiple sheets to CSV files.

Requirements:
>> Convert an XLSX file with multiple sheets to CSV files
>> CSV file names: XLSX filename + '_' + sheet name
>> the script has to be in VB, as I am using SSIS 2005
>> I started developing the script using Microsoft.Office.Interop.Excel.dll; this dll is referenced by the script task
>> I found this web link useful... [URL] ....
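A minimal sketch of the interop loop; the poster needs VB, but the calls carry over one-to-one, and the paths here are hypothetical:

using System.IO;
using Excel = Microsoft.Office.Interop.Excel;

class XlsxToCsv
{
    static void Main()
    {
        string xlsx = @"C:\Data\Book1.xlsx";   // hypothetical input path
        string baseName = Path.GetFileNameWithoutExtension(xlsx);
        string outDir = Path.GetDirectoryName(xlsx);

        var app = new Excel.Application();
        Excel.Workbook wb = app.Workbooks.Open(xlsx);
        try
        {
            // Save each sheet as <workbook>_<sheet>.csv
            foreach (Excel.Worksheet ws in wb.Worksheets)
            {
                string csv = Path.Combine(outDir, baseName + "_" + ws.Name + ".csv");
                ws.SaveAs(csv, Excel.XlFileFormat.xlCSV);
            }
        }
        finally
        {
            wb.Close(false);   // discard the format change SaveAs left behind
            app.Quit();
        }
    }
}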
I ran into a variety of problems trying to set a script task breakpoint in a package containing multiple script tasks. The debugger apparently treats the breakpoint as if it were set in ALL tasks in the package, not just the one in which it is actually set.
At best this results in hitting breakpoints in scripts where they are not set and at worst the debugger brings up the "Send error report" dialog and quits (while the package continues to run). The latter seems to happen most often when the script with the breakpoint has more lines than an earlier script and the breakpoint is set at a line number that exceeds the number of lines in the earlier script--it bombs when the earlier, shorter script starts.
To get the debugger to work under these circumstances I had to add some nonsense code like

While False
    Dim i As Integer = 0
End While

to every script, at the same line number near the beginning of the script (line 40, for example). I then set a breakpoint on the middle statement in one of the scripts (doesn't matter which) to cause the debugger to open at runtime. It doesn't hit the breakpoint because the line is never executed. If the breakpoint is set on a line that can be executed in any script in the package, bad things tend to happen.

I then add a "stop" statement to the script that I want to debug. This only works if the debugger is already open, hence the dummy breakpoint above. This workaround is usable, but I am debugging a package that has quite a few scripts, and having to insert dummy code in all of them at a fixed line number is rather inconvenient. I would really like to see breakpoints work the way one would expect: only in the scripts where they are set. Is there some other, easier way around this problem? Is there at least an easier way to get the debugger to open so that "stop" will work?
Which works fine, but now I need to calculate the total duration of a request based on the durations of the tasks completed in the request, grouped by Req_ID. I would like to use the CASE statement I have to determine the SLA_Mins for each task and add them together to get the total request SLA_Mins.
Below is the create table schema and data
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
I have a data flow task which has around 5 data flows (like the 2nd diagram shown here). These are 5 simple flows with just a Row Count transformation in between. Now, I want to fail the entire task immediately if even one of the data flows fails. Right now if one flow fails, the remaining flows fail after a long time, not immediately. How can I make it fail immediately?
The other thing I would like to do: can I place these 5 data flows in a transaction, so that if one data flow fails, the other data flows also roll back? (I assume that's not possible.)