There is a bug in SSIS 2005 concerning the way that checkpoint files behave in concert with Sequence containers. It is documented (at length) here:
Is it possible to execute a container regardless of the checkpoint file?
(http://forums.microsoft.com/msdn/showpost.aspx?postid=1574262&siteid=1&sb=0&d=1&at=7&ft=11&tf=0&pageid=0)
Is anyone from Microsoft yet able to give a definitive answer as to whether this will be fixed or not in Katmai? A yes or no answer would be very much appreciated.
I have about 12 Sequence containers mapped out to execute separately, each based on a precedence constraint with an expression. They all sit at the same level in my control flow, and only one of them will execute, depending on the expression. After whichever one executes, I have a single follow-on Sequence container that I want to run. I've drawn a precedence constraint from all twelve of the prior Sequence containers to this one container that I would like to run after any of the 12. My problem is that the package will only honor one of these twelve constraints at a time. The designer lets me graphically attach the precedence constraint from all 12 to this single Sequence container, but when the package runs, it fails to follow through to it. I'm trying to figure out why this is the case and how to get what I would like to work -- to work. Thanks.
I have a Sequence container (named One) and two Sequence containers (named Two and Three) nested inside container One. In containers Two and Three I have Execute SQL tasks that run stored procedures. Then I have a Send Mail task linked to Sequence container One on a failure constraint.
The failure constraint to the Send Mail task is not working. I want each Sequence container (Two and Three) to execute its SQL tasks; if one fails, it should wait for the other to finish, and then the failure task should execute.
I have multiple sequence containers in my package. I only want to have one Send Mail task for the failure/completion of the package. If I put the Send Mail task in the last sequence container and the first sequence fails, the Send Mail task will never be reached and therefore no email will be sent. Is there a way to have one Send Mail task for all the sequence containers and allow it to send mail regardless of which sequence fails/completes?
Has anyone used Checkpoint files in conjunction with the OnError Event Handler? I'm having a problem getting the OnError event to fire when the SSIS package reruns with the Checkpoint file.
The first run of the package (without a checkpoint file) works fine. The error occurs, the OnError event handler is called, the package stops and the checkpoint file is created.
When the package is restarted it goes to the correct spot (where the error occurred) using the checkpoint file, then it throws an error within the For Loop container and does not call the OnError event handler. The OnError event handler is set up on the For Loop container. The For Loop performs three loops, and each one of these loops raises an error. Not one of these errors within the three loops will trigger the OnError event handler...
Upon completion of a package the OnPostExecute event is fired whether the package was successful or not. As expected.
Then, I set up the package to use CheckPoints. I create a Script Task and set the Dts.TaskResult to fail. When the package runs and fails, the OnPostExecute event for the package does not fire! If I remove the CheckPoints from the package, the OnPostExecute event fires.
What's even stranger is that if I set the Script Task to succeed, and then rerun the package, the OnPostExecute event still doesn't fire!
Has anyone else noticed this? It seems to me that the OnPostExecute event should fire whenever the package completes as it does when CheckPoints are not used. Why would it not fire when using CheckPoints?
I have a series of SQL scripts that contain a mix of DDL and DML. Given the overall size of the scripts (7,000+ lines) I've chosen to split them into several different files (this also promotes different developers working on different pieces in Visual Source Safe).
When it comes time to run them, however, I want to run them all in sequence -- without having to open and run each one individually.
In "The Oracle" you could create a SQL-PLUS script like this: @MyScript1.sql @MyScript2.sql
Which would result in Script1 and Script2 being run, with the output going to a single log file.
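On SQL Server, the closest equivalent I know of is a sqlcmd wrapper script that pulls each file in with the :r command and sends all output to one log. A minimal sketch, assuming the sqlcmd utility that ships with SQL 2005 and placeholder file names:

[code]
-- RunAll.sql -- execute with:
--   sqlcmd -S MyServer -d MyDatabase -E -i RunAll.sql -o RunAll.log
:on error exit      -- stop the whole run if any script fails
:r MyScript1.sql
:r MyScript2.sql
[/code]

Each :r include is executed in sequence and all output lands in RunAll.log, much like the SQL*Plus @ approach.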
I have a package that has many containers that execute in sequence.
If any container fails, the package does not fail, because the subsequent containers must still run.
However, if containers fail I do not want to rerun the entire package. Checkpoint restartability only allows you to start from where your package failed; my package will not fail, and I do not want to start from where it failed but only rerun the containers that failed.
Is this possible? Can one maybe run only certain containers in a package through dtexec or another command-line tool?
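One hedged option: leave everything enabled in the package and disable the containers you don't want at invocation time with dtexec's /SET option. The package path and container names below are placeholders:

[code]
dtexec /F "MyPackage.dtsx" ^
  /SET "\Package\Container That Succeeded.Properties[Disable]";true ^
  /SET "\Package\Container That Failed.Properties[Disable]";false
[/code]

You would have to record which containers failed (e.g., in a table) and build the command line accordingly, since checkpoints won't do this for you when the package itself never fails.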
So I have a few Data Flow tasks that all need to execute successfully before I commit the changes, so I use a few nested Sequence containers with the parent set to Required and all of the children set to Supported. This should work, right? Instead I get: "The AcquireConnection method call to the connection manager "Databasenamehere" failed with error code 0xC0202009." If I switch the parent back to Supported or NotSupported it executes fine.
I have run into a problem! I'm developing an SSIS package programmatically using C#. But when I create and add a container (ForEachLoop and Sequence), the container does not become visible at design time in the Integration Services designer (when I open the .dtsx package afterwards). Does anyone have a solution to this problem? It is only a problem with containers I create myself (it works when I'm adding e.g. Data Flow tasks to existing containers).
I would like to checkpoint my transaction log every night before the full backup. Would this affect the transaction log sequence in the event of a restore?
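For reference, a minimal sketch of the nightly sequence being described (database and path names are placeholders). As far as I know, CHECKPOINT only flushes dirty pages to the data file; it does not truncate the log in full recovery, so it should not break the log backup chain for a restore:

[code]
-- Flush dirty pages, then take the full backup.
CHECKPOINT;
BACKUP DATABASE MyDatabase
    TO DISK = 'E:\Backups\MyDatabase_full.bak'
    WITH INIT;
[/code]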
Has anybody encountered this situation before? A DB on SQL Server 2000 SP4 with the trunc. log on chkpt. option turned on. Checkpoint trace flags were turned on, but I'm noticing no checkpoints are being done on one specific DB, resulting in a growing transaction log. No open transactions.
I see a line in sys.sysprocesses. The process's status is suspended and the command is CHECKPOINT. I have the information here exactly as it is on my monitor. It seems to be consuming high CPU. What should I do?

[code]
spid: 10
kpid: 7416
block: 0
waittype: 0x0081
waittime: 232546
lastwaittype: CHECKPOINT_QUEUE
waitresource:
dbid: 1
uid: 1
cpu: 427046
physical_io: 36695
memusage: 0
login_time: 2007-04-04 10:01:32.787
lastbatch: 2007-04-04 10:01:32.787
ecid: 0
open_tran: 0
status: suspended
sid: 0x0100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
hostname:
program_name:
hostprocess:
cmd: CHECKPOINT
nt_domain:
nt_username:
loginame: sa
[/code]
I have a package that uses checkpoint restart. It is responsible for truncating many sets of tables and then loading them. There are several Execute SQL tasks to truncate the tables and several corresponding data flows to accomplish the loads.
If a load fails I want the corresponding truncate task to be part of the restart; otherwise duplicate data may be loaded. Normally, SSIS will start at the failed task. I read something about containers that led me to think that if I put the truncate & matching load pair in a Sequence container, the container would be the restart point, but either I read it wrong or it's not working that way.
With 2005 SP1, I have built an SSIS package that successfully saves a checkpoint file and sometimes successfully restarts. (I've also built some others that are 100% reliable.)
On an unsuccessful restart it appears as though the failed steps and subsequent steps do not execute. The package appears to "complete", though, and the checkpoint file is removed as though everything is fine.
On a successful restart the failed step re-executes and everything works fine.
The issue appears to be that when a failed step finishes at the same time as a successful step, there is contention in the process that writes out the checkpoint file, and the file ends up corrupt. The failing step runs in parallel with a successful step, and the execution times are very similar, so task A may complete before or after task B.
Contents of a good checkpoint file follow:

[code]
<DTS:Checkpoint xmlns:DTS="www.microsoft.com/SqlServer/Dts" DTS:PackageID="{3BFFF2F9-74BA-4CE9-8435-81CC198E8144}">
  <DTS:Variables DTS:ContID="{3BFFF2F9-74BA-4CE9-8435-81CC198E8144}"/>
  <DTS:Container DTS:ContID="{3655F83D-5EA5-4F16-9B8F-520582A1229A}" DTS:Result="0" DTS:PrecedenceMap=""/>
  <DTS:Container DTS:ContID="{DB2D7A57-D405-4B11-AF4A-41B331EE3F15}" DTS:Result="0" DTS:PrecedenceMap=""/>
  <DTS:Container DTS:ContID="{DFC6A95F-CCFA-4FD9-B604-FCBD722B47D8}" DTS:Result="0" DTS:PrecedenceMap="YYY"/>
</DTS:Checkpoint>
[/code]
I've set up a number of jobs (not a maintenance plan) via a script in SQL 2005. These jobs do the following:
1) Full backup every Sunday night
2) Differential backup every weeknight
3) Log backup every hour
The database is obviously in the full recovery model.
The backups all seem to be running, with one issue - the log file is still growing and is not being truncated. I was under the impression that a log backup should result in the log being truncated after each full backup. However, this does not seem to be the case.
Is there anything obvious I've missed that needs to be set up, or is there a way I can check that the full backup is actually setting the appropriate checkpoint and that the log backups are 'seeing' these checkpoints?
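A couple of quick checks, as a hedged sketch; note that a log backup truncates (frees for reuse) space inside the log file but never shrinks the physical file, so "growing on disk" and "not truncating" are two different symptoms:

[code]
-- How full is each database's log, as a percentage?
DBCC SQLPERF(LOGSPACE);

-- SQL 2005: what, if anything, is preventing log truncation?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';  -- placeholder name
[/code]

If LOGSPACE shows a low used percentage, the log backups are truncating fine and the file has simply grown to accommodate a past peak.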
If I have 3 tasks in my control flow with checkpoints enabled, the TransactionOption of the tasks set to Required, and the TransactionOption of the package set to Supported, then when the second task fails, the package restarts from the first task on the next run instead of using the checkpoint and beginning from the second task.
I have a database that has Truncate on Checkpoint set for the log file. The log file is set to autogrow. Is it necessary to run DBCC SHRINKDATABASE (or the like) to get the log file to contract? Is there any harm in not contracting the log file? I'm looking for the most efficient and least-likely-to-fail path, as the DB sits 'really remote' and there is little opportunity for observation.
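If the file does occasionally need to be contracted, a minimal sketch (the logical file name is a placeholder; truncate on checkpoint frees space inside the file, but only a shrink returns space to the OS):

[code]
USE MyDatabase;
GO
-- Shrink the log file to roughly 100 MB.
-- 'MyDatabase_Log' is the logical file name from sysfiles.
DBCC SHRINKFILE ('MyDatabase_Log', 100);
[/code]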
Does anyone have any recommendations on re-indexing? I have one table that bears the most growth. It has a clustered index. What would be a suitable data point to watch? I run an SP to save DBCC SHOWCONTIG info along with the duration of a test query, but haven't seen a clear breakover point.
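For what it's worth, the SHOWCONTIG numbers most people watch are Scan Density and Logical Scan Fragmentation; a hedged sketch with a placeholder table name:

[code]
-- Fragmentation stats for one table and its indexes.
DBCC SHOWCONTIG ('dbo.MyGrowingTable') WITH TABLERESULTS, ALL_INDEXES;

-- A common rule of thumb: rebuild when Scan Density drops below ~80%
-- or Logical Scan Fragmentation climbs past ~10-15%.
DBCC DBREINDEX ('dbo.MyGrowingTable');
[/code]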
Can anyone assist with this problem? Every now and then my overnight backups (Backup Exec) fail due to the truncate log on checkpoint option being enabled. This occasionally occurs on the master and MSDB databases. I have unchecked the truncate log on checkpoint box numerous times and the backups work fine. Then mysteriously the box is checked again and the backups fail once more. I am stuck as to why this can happen. Is there a generic stored procedure that checks this box?
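In the meantime, one hedged way to watch the setting from a script (so you can at least catch when it flips) instead of relying on the Enterprise Manager checkbox:

[code]
-- 1 = 'trunc. log on chkpt.' is currently ON for the database.
SELECT DATABASEPROPERTY('msdb', 'IsTruncLog');

-- Turn it back off explicitly:
EXEC sp_dboption 'msdb', 'trunc. log on chkpt.', 'false';
[/code]

Scheduling the SELECT in a frequent job and logging the result may tell you when the change happens and help narrow down what is making it.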
The log on one of my databases keeps filling up, even though I have it set to truncate on checkpoint. The only real difference between this database and the others on my server is that it is built from the dump of another database (on another server) where the tables are marked for replication.
I'm wondering if the fact that it is built from a replicating database could be causing this. I've noticed I can't drop any of the tables, even though my database isn't set to replicate (or publish).
Two questions: 1) Any ideas? 2) Is there any way I can make my server realize I'm not replicating so it will let me drop those tables? (Nothing in Enterprise Manager indicates that my database is replicating or publishing.)
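One thing that may apply here: a database loaded from the dump of a published database keeps the replication flags on its tables, and log records marked for replication are held until a (nonexistent, in this case) log reader picks them up, which would explain both symptoms. The usual cleanup, as a hedged sketch (try it on a test copy first):

[code]
-- Strip all leftover replication metadata from the restored database.
EXEC sp_removedbreplication 'MyRestoredDB';  -- placeholder name
[/code]

After that the tables should drop normally and the log should truncate as expected.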
I am a DB developer (not an admin), so excuse me if this is a silly question.
I don't know much about the CHECKPOINT background process, but I feel this process is slowing down the performance of my SPs, which run slower than normal in some cases.
Especially when I see one of my processes go into suspended mode with a wait type of SLEEP_BPOOL_FLUSH, while the CHECKPOINT process is also suspended with a wait type of CHECKPOINT_QUEUE.
More important than anything else: this background process (whose spid I always find to be 11) BLOCKS all other user processes when it goes into suspended mode with the SLEEP_BPOOL_FLUSH wait type.
I don't know whether my analysis is correct (claiming checkpoint as the culprit); I need expert advice and help.
Can someone give info on CHECKPOINT and how it affects server performance?
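For anyone wanting to look at the same thing, a minimal diagnostic sketch against sysprocesses. Worth noting: CHECKPOINT_QUEUE on the checkpoint spid normally just means the background checkpoint process is idle, waiting for work, so that wait by itself is not a sign of trouble:

[code]
-- The checkpoint background process and anything currently blocked.
SELECT spid, blocked, status, lastwaittype, waittime, cmd
FROM master..sysprocesses
WHERE cmd = 'CHECKPOINT' OR blocked <> 0;
[/code]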
We're running the Microsoft product SMS 2003 SP1 for software deployment, patching, hardware inventory, etc. The back end is SQL 2000 Enterprise SP4, which is installed on the same box as the SMS 2003 SP1 product, and the DB is 145 GB.
We started noticing that the server would freeze every minute or so for 30 seconds. We started logging stats via Perfmon and saw that the average disk queue length for the physical F: drive would skyrocket to between 400 and 500 for 30 seconds at the same time the freezing occurred. I have determined that this is occurring during the checkpoint. The recovery interval option is set to the default of 0 in SQL; when I changed the setting to every 5 minutes, the average disk queue length for the F: drive would skyrocket to between 400 and 500 every 5 minutes and would subside after 2 minutes. I understand the need for the checkpoint / recovery interval option, but don't believe this high average disk queue length should be occurring.
Does anyone know why this is happening and how to fix it? The freezing of the box while checkpointing is killing me.
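For reference, the knob involved is the 'recovery interval' server option (in minutes; 0 lets SQL Server manage it, targeting roughly one minute of recovery work). A hedged sketch:

[code]
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Approximate minutes of redo work allowed, i.e. how far apart checkpoints land.
EXEC sp_configure 'recovery interval', 5;
RECONFIGURE;
[/code]

Raising it spaces checkpoints out but makes each one (and crash recovery) heavier, which matches the 5-minute spikes observed; the underlying problem sounds more like the F: drive being unable to absorb the checkpoint write burst.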
This is maybe a dumb question but I couldn't find a definitive answer in BOL. Looking at my backup script: if I issue a CHECKPOINT, does this truly force all transaction log entries to the data file, therefore making it unnecessary to BACKUP LOG (just BACKUP DATABASE is needed)? Louis
I have a package with settings SaveCheckpoints=True and CheckpointUsage = Never. After the package failed I fixed the cause of failure by setting a database column to allow nulls. Then I went to our web app that we built to monitor package execution and clicked on the button to restart the package. The web app loads the package and then sets the CheckpointUsage property of the package object to 'Always'. Then it executes the package in a new thread. The package then produced this error:
Checkpoint file "E:Package1Checkpoint" failed to open due to error 0x80070005 "Access is denied.".
Since there was only one remaining task to run in the package I ran it manually.
Now here is the really interesting part. I then needed to run the same package but with different parameters. When I attempted to run it with the saved checkpoint settings (CheckpointUsage=Never, SaveCheckpoints=True) I got this error:
"An existing checkpoint file is found with contents that do not appear to be for this package, so the file cannot be overwritten to start saving new checkpoints. Remove the existing checkpoint file and try again. This error occurs when a checkpoint file exists, the package is set to not use a checkpoint file, but to save checkpoints. The existing checkpoint file will not be overwritten. "
So I then attempted to rename the checkpoint file so it would not interfere, however, it would not let me, saying that the file was in use.
So what I had to do was add a configuration entry which set SaveCheckpoints to False. Then I was able to run the package.
Hi all, is there a checkpoint at record or row level? I mean, let's say I have 2 million rows I want to transfer from one DB to another, and at about 1 million records the package fails; the next time I run it I want to continue from where it stopped, i.e. since 1 million records were already transferred I don't want to repeat them -- just like checkpoints at the task level. For further information: I am working in SSIS.
I have several packages inside Sequence containers in one main .dtsx package with a checkpoint configuration, and when I run it some succeed and some don't. The problem is that when I rerun it, the checkpoint doesn't seem to work, because some of the successful packages are rerun as well (and not skipped as they should be). In other words, the process does not begin at the point of failure.
It seems that packages that finish after the failure point (and succeed) are not registered in the checkpoint file, so when I rerun the main package these succeeded packages are rerun too.
I have an SSIS solution with 8 packages in it. I have checkpoints turned on with the 'If Exists' option. Each of the 8 packages has its own separate checkpoint file specified.
One out of two runs will fail with one of the below errors:
The checkpoint file \xxxxxxxx is locked by another process. This may occur if another instance of this package is currently executing.
Checkpoint file \xxxxxxxx failed to open due to error 0x80070020 "The process cannot access the file because it is being used by another process."
I have checked all the settings and everything looks fine. It looks like the problem occurs when you have many control flow tasks in a package: if two of them complete at the same time and both try to write to this file, one of them is unable to write and fails.
This is causing the entire job to fail even though the control flow was successful.
Anyone encounter this issue? Any assistance is appreciated.
I'm trying to make sure my database options are set up properly. Our database is used for our decision support system. The data is loaded in once a day. Should I have truncate log on checkpoint on, and should I limit the size that the log grows to? I just had to shrink the log; it grew so much that it took up all the space left. If I have truncate log on checkpoint on, do I just have to issue the CHECKPOINT command in order for it to truncate the transaction log?
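For reference, the pieces involved, as a hedged sketch with a placeholder database name. With the option on, the log is truncated automatically at every checkpoint, so you should not need to issue CHECKPOINT by hand (though doing so forces a truncation immediately):

[code]
-- SQL 7/2000-style: truncate the log at each automatic checkpoint.
EXEC sp_dboption 'MyDSSDatabase', 'trunc. log on chkpt.', 'true';

-- Optional: force a checkpoint (and thus a truncation) right now.
USE MyDSSDatabase;
CHECKPOINT;
[/code]

Keep in mind truncation frees space inside the file; it does not shrink the file itself, so capping growth still requires a size limit or an occasional DBCC SHRINKFILE.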
I am new to SQL Server 7 and have inherited a server built by a consultant who is no longer here. I have noticed that the system databases (master, msdb & model) are completely backed up on a nightly basis and are all set with truncate log on checkpoint. Is this the proper way to have things set up?