I have a master package with a sequence container holding around 10 Execute Package Tasks (for child packages), all running in parallel. Checkpoints have been enabled in the master package. For the Execute Package Tasks, FailParentOnFailure is set to true, and for the sequence container, FailPackageOnFailure is set to true.
The problem I am facing is as follows. One of the parallel tasks fails; at the time of failure, some of the parallel tasks (say set S1) have completed successfully and a few are still executing (say set S2), and these eventually complete successfully. The container fails after all the tasks complete execution and fails the package. When the package is restarted, the task which failed is not executed, but the tasks in set S2 are executed.
If FailPackageOnFailure is set to true on the Execute Package Tasks, then whatever the FailParentOnFailure value, on restart the failed task is executed, but the tasks in set S2 are also executed.
Please let me know if there is any setting that ensures only the failed task executes on restart.
I have done a search and have read some of the posts, but am left more confused than before. I am fairly new to SSIS. Here is my situation and what I am trying to accomplish.
I have a package that has a sequence container, in which there are multiple SQL tasks (about 20) running in parallel. I have checkpoints enabled, and FailPackageOnFailure enabled as well. If the package fails, when I re-run it, it will run the last task as well as all the other tasks. What I am looking to accomplish is for the re-run to execute only the SQL tasks that failed, not the previously successful tasks.
I think the best way might be to disable tasks on successful completion, by having each task write its name to a temp table, but I am skeptical.
Can anyone point me in a direction to help me accomplish what I am looking for, please?
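For what it's worth, here is a minimal sketch of that temp-table idea. Everything in it is hypothetical (the dbo.TaskRunLog table, the RunId parameter, and the task name) - each Execute SQL Task appends its name on success, and a start-of-package lookup feeds a variable that disables tasks already logged:

-- Hypothetical restart log; all names are illustrative.
CREATE TABLE dbo.TaskRunLog (
    RunId      int      NOT NULL,  -- identifies one logical run of the package
    TaskName   sysname  NOT NULL,  -- name of the SQL task that completed
    FinishedAt datetime NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_TaskRunLog PRIMARY KEY (RunId, TaskName)
);

-- Appended to the end of each task's statement (? maps to a RunId variable):
INSERT INTO dbo.TaskRunLog (RunId, TaskName) VALUES (?, 'Load Customers');

-- At package start: read back what already completed for this RunId and use
-- the result to drive each task's Disable property via an expression.
SELECT TaskName FROM dbo.TaskRunLog WHERE RunId = ?;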
I have three SQL tasks executing in parallel in an Integration Services package.
  +-B-+
A-+-C-+-E
  +-D-+
It starts with task A; then B, C, and D all execute in parallel; and finally task E runs after BCD are done.
B, C, and D are all Execute SQL tasks, all with the same connection manager. Here is their code:
B) SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM temp_B
C) SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM temp_C
D) SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM temp_D
Each one is setting a binary value to a package variable (using Result Set settings) based on the count of records from different tables.
This works with no problems when I run it against one server (development). But when I switch to the production server, tasks B and D both fail. I've checked to make sure all of the temp tables exist in the database for that connection manager and that all three tasks have the same connection manager - all is okay.
Here's the trickier part. When I'm still pointing to the production server and I run these tasks individually, they are all successful. It is only when they are attempting to run in parallel that they fail.
Here is the Output error: Error: 0xC002F210 at Process Med?, Execute SQL Task: Executing the query "SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM temp_B" failed with the following error: "Invalid object name 'temp_B'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
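One low-risk first check - and this is an assumption on my part, not a confirmed cause: if the tables live under a non-default schema on production, name resolution can differ per login or per pooled connection, and parallel execution opens extra connections. Schema-qualifying the objects rules that out (dbo assumed here):

-- dbo is assumed; substitute the actual owning schema.
SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM dbo.temp_B
SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM dbo.temp_C
SELECT CASE WHEN COUNT(*) = 0 THEN 0 ELSE 1 END AS Process FROM dbo.temp_D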
I have a SQL Server 2000 instance running on a Windows Server 2003 box with 4 processors. SQL Server is configured to use all 4 processors, and use all available processors for parallelism.
I have created a simple DTS package which has 2 "execute external process" tasks with no precedence constraints between them. There are no connections required or defined for the two tasks (sequential processing is forced on tasks sharing connections). The DTS package properties have the "limit the number of tasks to execute in parallel" set to 4.
However, despite the above configuration, the two steps are never executed in parallel, but always sequentially.
Does anyone have any ideas as to why these tasks are not being executed in parallel?
I've created a package that currently uses 5 DataFlow tasks connected in series to get data from 5 different files and place that information into 5 different temp tables. Each DataFlow task contains only an OLE DB source, a row count, and an OLE DB destination. My question is - is it normal practice to keep each of these separate, or should I put them all into a single DataFlow? The package should only continue if all five DataFlow tasks complete successfully.
I have a scenario where I have to run an update task on multiple servers in parallel, and once all of them are completed (success or failure), another task is to be run on another server.
1. In a maintenance plan, if we add tasks which are not joined, will they run in parallel at the same time?
2. If we link the last task to all the tasks with link type 'completed', will the last task run after all tasks are completed, or when any one of them is completed? (I have a big doubt here.)
The business requirement behind this is to bring data from multiple servers into shadow copies locally and then process them together. It's OK if some server's data transfer fails, but it's not OK to start processing centrally while data transfer is going on. Further, we want to run the data transfer from multiple servers in parallel to save time.
I've made a query like the one in MSDN (SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA "CIM_DirectoryContainsFile" AND TargetInstance.GroupComponent = "Win32_Directory.Name=\"e:\\\\temp\""). I have 20 similar tasks watching different folders, but when there are too many tasks in parallel it doesn't work anymore. I changed the number of executables to 128 (in the general properties of the package, to test), but it doesn't seem to work.
I don't understand why it works when there are only 1 or 2 tasks (6 seems to be the maximum) and not when there are more than 6.
Could you help me with this issue?
Configuration: Windows Server 2003, SQL Server 2005, SSIS, SQL Server Agent
I want to use a checkpoint in an SSIS package and need some help.
I have a scenario like this:

Task A ------ Task B ----------- Task C
        \
         ----- Task B1
Task A has a precedence constraint which determines if either Task B or Task B1 runs. Task B is run if the condition is met and Task B1 if the condition is not met.
I would like Task B1 to be a script task that is used to fail Task A so that when the package is restarted it will start from task A based on the checkpoint.
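If the Script Task route proves awkward, a swapped-in alternative (my sketch, not the only way) is to make Task B1 an Execute SQL Task whose statement always raises an error, so the task fails and the checkpoint records the failure:

-- Always fails; severity 16 is enough for the Execute SQL Task to report failure.
RAISERROR ('Condition not met - forcing failure so the checkpoint restarts at Task A.', 16, 1);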
I have a package that has 4 Script Tasks that are placed sequentially.
I have Task1--> Task2-->Task3-->Task4
The arrows between them are OnCompletion arrows, as opposed to the standard OnSuccess arrows. Even if Task2 failed, it would still execute Task3 and Task4.
The catch is this: when I run it the first time and Task2 fails, all the tasks except Task2 should run, which is fine. But when I rerun it, I want it to realise that Task2 had failed earlier, so it runs just Task2. If both Task2 and Task4 had failed, then it should run just Task2 and Task4.
I tried to implement it with checkpoints, but the problem is that if it fails at Task2, it stops at Task2 and does not continue to execute Tasks 3 and 4. When you rerun, it starts at Task2, but as I said, I would like Tasks 3 and 4 to have completed in the previous run.
I have a package that uses SSIS checkpoints. It works well. However, when I try to set up transactions for some tasks, the checkpoints aren't used.
I read BOL and It states: "If a package is configured to use checkpoints, Integration Services captures the restart point in the checkpoint file. The type of container that fails and the implementation of features such as transactions affect the restart point that is recorded in the checkpoint file."
But how are checkpoints affected by transactions? What relation exists between these two components?
Hi people, I have crashed into this problem. I have a sequence container, and on this container I have set "FailPackageOnFailure=true". In this container there are 2 tasks, the first one preceding the second. Both of these tasks have "FailParentOnFailure=true" set. Both tasks are the same and their purpose is to drop table A.
1) I run the package and it fails, because there is no table to drop.
2) I create the table manually and run the package again.
3) I see that the first task is simply OMITTED and the second task runs.
In general, every time any task in a sequence container fails, it is omitted on the next run regardless of its status. How can this be fixed? Thanks.
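Separately from the checkpoint question, the initial failure itself is avoidable by making the drop idempotent. A minimal version for SQL Server 2005 (DROP TABLE IF EXISTS only arrived in 2016), assuming the table is dbo.A:

-- Drop the table only if it exists, so the task succeeds either way.
IF OBJECT_ID('dbo.A', 'U') IS NOT NULL
    DROP TABLE dbo.A;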
I am building a set of packages to load different things, some of which have relationships with the others. Therefore I want them loaded in a certain order. I have built a main package that executes the set of packages to control the flow of the packages.
Now, I want to implement checkpoints. Ultimately, I only want to deal with the main package that controls everything, so I figure the main package needs checkpoints enabled. When packages are nested and checkpoints are on at the top-level package, will the nested package(s) start at the control flow point of failure, or will the entire nested package run? Should checkpoints be implemented within the nested packages as well? Should checkpoints only be implemented within the nested packages? Again, remember that I only want to launch/restart the main package.
Hello everyone, I have been studying the relationship between SSIS checkpoints and SSIS transactions.
What I want to do is create a package with different tasks, where each task creates a new transaction and at the same time each task is a checkpoint, in order to restart the package from the failed task, not from the beginning.
The Transaction-Checkpoint solution contains two packages*: CkeckpointsAndTransactions1.dtsx and CkeckpointsAndTransactions2.dtsx
Package CkeckpointsAndTransactions1 contains four tasks; task three always fails. The package is configured to use checkpoints and each individual task creates a checkpoint. Additionally, each task creates a new transaction. The package has its TransactionOption set to NotSupported.
In the CkeckpointsAndTransactions1 package there is something wrong: when the third task fails and I restart the package, the package starts from the beginning. This is wrong!! The package should restart from the failed task.
In order for the package to work as expected, it's necessary to add a new task between the second and third tasks. It is also necessary that this new task has no transaction support. This is shown in the CkeckpointsAndTransactions2 package: after the package failure, I restart the package and it restarts from the failed task, as expected, but the additional task should not be necessary!!
Does anyone know what is wrong with my packages?? How can I create a package with different tasks, where each task creates a new transaction and at the same time each task is a checkpoint?
*Please download the BIDS solution from hernan93.files-upload.com (Transaction-Checkpoint.zip file)
I am trying to get a situation going here which includes both transactions and checkpoints, to make sure that when something goes wrong a) I don't get data corruption (hence the transactions) and b) I don't have to completely restart my 2hr run (hence the checkpoints). However, I ran into something where I cannot see whether it is intended behaviour or simply a bug.
Here's the deal: I have an SSIS package in which I enable checkpoints (CheckpointUsage: IfExists and SaveCheckpoints: True). I have 2 dataflows which follow each other (the first dataflow prepares data for the second dataflow to edit). Because I want to make sure that my data is secure, I put a separate transaction on each of the dataflows.
And here my problem arises. If I run my package now and the second dataflow breaks, my checkpoint sends me back to the first dataflow and my initial insert is executed again, which isn't meant to happen (I enabled checkpoints to prevent rerunning items). Somehow my checkpoint does not register the fact that the first dataflow has already been executed, and it will execute that one again upon rerun.
However: if I put a random task between the 2 transacted dataflows (for example an empty script task), it works as intended, just as long as this inserted item doesn't have a transaction; if it does, the problem comes back. With the extra task in place, the checkpoint shows that the first dataflow has already been executed, so it does not execute it again and starts at the second dataflow upon re-execution.
I can work around it (with the empty script task), but I am still wondering why this is happening. I am very interested to hear whether this is really a bug or intended behaviour (and if intended, why?).
I am using checkpoints in my packages, but I am not able to restart my packages from exactly where they failed. The scenario: there are 100 rows at the source system; 95 records were loaded into the target, and due to some data formatting issue the load failed at the 96th record. When I try to re-execute the package, surprisingly it starts from the 1st record (i.e., the start of the data flow task).
How can I make it run from exactly where it failed (the 96th record)?? Is it possible using checkpoints, or is there any workaround approach?? Please respond to this post; it would be very helpful for me.
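As far as I know, checkpoints restart at the task level, never mid-data-flow, so a row-level restart needs a workaround. One common pattern - a sketch with hypothetical table and key names - is to make the source query skip rows the target already has:

-- Re-runs pick up only rows not yet loaded; SourceTable, TargetTable
-- and SourceId are illustrative names.
SELECT s.*
FROM SourceTable AS s
WHERE NOT EXISTS (SELECT 1 FROM TargetTable AS t WHERE t.SourceId = s.SourceId);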
I have a package that has a container containing multiple DF Tasks.
The container is set to be Transacted, such that should any of the DF tasks fail the data inserted in any of the previous tasks rolls back.
This works as expected.
However, this container is part of a larger package and so I wanted to have a checkpoint on it, so that should any of the tasks within it fail, the package could be restarted from this container.
However, I would expect the functionality to be that on failure, the checkpoint would cause the whole container to be started again (because the container is transacted, all DF task info would be rolled back), so we would expect it to start at task 1 again.
This is not the functionality I see. The package restarts from the failed task within the container every time.
According to the book Prof SSIS, it should start again from the first task, and as explained, this makes sense on a transacted container, as you would want this to happen.
A previous forum message encountered the same issue it appears:
See SSIS Checkpoints 04 Dec 2006.
This is an extract from it:
"I only experimented a little but my experience was that when I have a transacted container with multiple tasks that are checkpointed, SSIS would try to restart from the task that failed rather than from the first task in the container. The transaction was being rolled back correctly though.
In short, I felt that check points were not aware of transactions.
So, I ended up with this setting and it works for me:
Container is checkpointed and transacted. Tasks within the container are not checkpointed. 'FailParentOnFailure' property set to True on the tasks.
That way, if a task failed, it would fail the container and a checkpoint would be created at that level. Transaction would be rolled back as usual."
While this makes sense to me, these are not the same property settings that the SSIS book says work.
Additionally, this didn't work for me either!!
I have tried every combination of FailPackageOnFailure and FailParentOnFailure that makes sense, but every time the package restarts from the failed task within the container.
The transaction is rolled back correctly every time, but it seems the checkpoint that is created is not used correctly when dealing with transactions within containers.
We are currently facing an issue in ensuring restartability of an SSIS package. The scenario is explained below.
Context: The SSIS package has two Data Flow tasks. The Data Flow task named DFT1 is the predecessor of DFT2 and is chained with an OnSuccess precedence constraint.
OnPreExecute and OnPostExecute event handlers have been implemented for DFT1. Each task in both event handlers, as well as DFT1 and DFT2, has FailPackageOnFailure set to True.
Scenario 1: A task in OnPreExecute of DFT1 fails. DFT1 was attempted and succeeded. OnPostExecute of DFT1 was not attempted. DFT2 was not attempted. A checkpoint file was created; however, no entries were made.
When restarted, execution started from the first step in the Control Flow.
Scenario 2: A task in OnPostExecute of DFT1 fails. DFT1 and its OnPreExecute event were executed. DFT2 was not attempted. A checkpoint file was created and entries were made. Entries had DTS:result as 0 for the OnPreExecute and DFT1 tasks.
When restarted, DFT2 was executed. The OnPostExecute event, which failed during the previous execution, was not attempted.
Each task in the package, whether in the Control Flow or part of an event handler, is crucial for seamless execution. But apparently, as explained above, the event handlers cannot be relied on in case of failures. Has anyone encountered a similar scenario? Is this behavior by design of the runtime engine?
I have a sequence container in my package, and this sequence has more than one control flow task.
Can I create the checkpoints such that only the failed component inside the sequence container runs again and not the other successful components/tasks in the sequence container?
I have an FTP task in my control flow that downloads files from an FTP server. This FTP task is inside a Foreach container that loops over an ADO recordset for the file name. The files that the FTP task pulls are huge. If the FTP task fails, I want it to restart and only download those files that have not yet been downloaded. Is this possible?
What configuration changes would I have to make to the Foreach container and the FTP task?
I am working on SQL Server 7.0. Every weekend we reindex some tables. I want to know if it is possible to run the reindexing of tables in parallel so that I can save time.
Our database is 80GB in size and one table is around 22GB. Rebuilding the index on this table takes a lot of time, and we are unable to index the other tables in the meantime.
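One approach - a sketch assuming SQL Server 7.0's DBCC DBREINDEX and two Agent jobs scheduled at the same time (table names hypothetical) - is to split the work so the big table rebuilds in one job while a second job works through the rest concurrently:

-- Job 1: the 22GB table on its own.
DBCC DBREINDEX ('dbo.BigTable')

-- Job 2, same schedule: everything else.
DBCC DBREINDEX ('dbo.Orders')
DBCC DBREINDEX ('dbo.Customers')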
Hi, we currently use the Database Maintenance Plan to do backups for our SQL Server 2000 databases. I notice that the databases are backed up one after the other.
I would like to know how to run the backups in parallel rather than sequentially. To do this, is there any dependency on the number of CPUs?
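A maintenance plan always runs its databases serially. The usual way to parallelize - a sketch with hypothetical names and paths - is one Agent job per database, all on the same schedule. The limiting factor is normally disk I/O rather than CPU count, so separate target drives matter more than processors:

-- Agent job 1:
BACKUP DATABASE SalesDb TO DISK = 'E:\Backups\SalesDb.bak'
-- Agent job 2, same schedule, different target drive:
BACKUP DATABASE StockDb TO DISK = 'F:\Backups\StockDb.bak'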
I created a package to download 4 FTP files at once. I set MaxConcurrentExecutables for the SSIS package to 4, so in BIDS it downloads 4 files at the same time.
However, when I started the job I noticed that only 3 files were downloaded at a time (looking at temp files in the download directory).
Solution: Sure enough, after digging around for a while - in the step properties for the SSIS package there is an Execution tab, and "Maximum Concurrent Executables" was -1 (which for some reason defaults to 3 concurrent processes even on our dual-CPU server) - so after changing that value to 4, ta-da, all 4 files downloaded in parallel.
Is there any way to run a stored procedure in parallel with another one? I.e., I have a stored procedure that sends an email. I then scan a table and send any unsent emails. I do not want the second part to slow the response to the user.
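One lightweight way to decouple the slow scan-and-send step - my sketch, assuming it is wrapped in an Agent job named SendPendingEmails (a hypothetical name) - is sp_start_job, which returns as soon as the job is queued rather than waiting for it to finish:

-- Returns immediately; the job runs the scan/send procedure asynchronously.
EXEC msdb.dbo.sp_start_job @job_name = 'SendPendingEmails';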
Assuming I have a line, is there a function I can call to create a parallel line at a given distance away? I.e., with the below, I would want to draw a line parallel to the one output.
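I'm not aware of a built-in offset function on SQL Server's geometry type (this sketch assumes SQL Server 2008 or later). For a straight two-point segment, the parallel line falls out of the unit normal; everything below, including the coordinates and @d, is illustrative:

DECLARE @d float = 10;  -- desired distance between the lines
DECLARE @line geometry = geometry::STGeomFromText('LINESTRING (0 0, 30 40)', 0);
DECLARE @x1 float = @line.STPointN(1).STX, @y1 float = @line.STPointN(1).STY,
        @x2 float = @line.STPointN(2).STX, @y2 float = @line.STPointN(2).STY;
DECLARE @len float = @line.STLength();
-- Offset both endpoints along the unit normal (-(y2-y1)/len, (x2-x1)/len):
SELECT @x1 - @d * (@y2 - @y1) / @len AS X1, @y1 + @d * (@x2 - @x1) / @len AS Y1,
       @x2 - @d * (@y2 - @y1) / @len AS X2, @y2 + @d * (@x2 - @x1) / @len AS Y2;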
Hi, all. I need to place the results of two different queries in the same result table, parallel to each other. So if the result of the first query is

1 12
2 34
3 45

and the second query is

1 34
2 44
3 98

the results should be displayed as

1 12 34
2 34 44
3 45 98

If a union is done for both the queries, we get the results in rows. How can the above be done? Thanks in advance, vivekian
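Assuming both result sets share the leading key column, a join lines the values up side by side instead of stacking them the way UNION does. The derived tables below are stand-ins for the real queries:

SELECT q1.id, q1.val AS val1, q2.val AS val2
FROM (SELECT id, val FROM FirstSource) AS q1    -- stand-in for query 1
JOIN (SELECT id, val FROM SecondSource) AS q2   -- stand-in for query 2
  ON q2.id = q1.id
ORDER BY q1.id;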
Hi, all. I'm writing test cases in C# for a few methods that make changes in a database. To prevent making changes I used BeginTransaction/Rollback, and everything was good. But this doesn't work if the tested method has BeginTransaction/Rollback code itself. An error appears in NUnit: System.InvalidOperationException : SqlConnection does not support parallel transactions. Does somebody know how to solve the problem?
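Since the examples in this thread are T-SQL, here is the server-side half of one fix, sketched with hypothetical names: SqlConnection refuses a second BeginTransaction on the same connection, but a savepoint nests inside the outer test transaction and gives the tested method its own rollback scope. The ADO.NET counterparts are SqlTransaction.Save(name) and Rollback(name); wrapping the test in a TransactionScope is another common fix.

BEGIN TRANSACTION;                      -- outer transaction opened by the test
SAVE TRANSACTION MethodUnderTest;       -- savepoint instead of a second BEGIN
-- ... the method's data changes run here ...
ROLLBACK TRANSACTION MethodUnderTest;   -- undoes only the method's changes
ROLLBACK TRANSACTION;                   -- test teardown undoes everything else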
I have several packages within sequence containers inside one main dtsx package with a checkpoint configuration, and when I run it some succeed and some don't. The problem is that when I rerun it, the checkpoint doesn't seem to work, because some of the successful packages are rerun as well (and not skipped as they should be). In other words, the process does not begin at the point of failure.
It seems that packages that finish after the failure point (and succeed) are not registered in the checkpoint file, so when I rerun the main package these succeeded packages are rerun too.