I would like to checkpoint my transaction log every night before the full backup.
Would this affect the transaction log sequence in the event of a restore?
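Concretely, something like this is what I have in mind (the database name and backup path are placeholders, not my real ones):
[code]
-- Force a checkpoint, then take the nightly full backup
USE MyDB;
CHECKPOINT;

BACKUP DATABASE MyDB
TO DISK = 'E:\Backups\MyDB_full.bak'
WITH INIT;
[/code]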
Has anybody encountered this situation before? DB on SQL Server 2000 SP4 with the trunc. log on chkpt. option turned on. Checkpoint trace flags were turned on, but I'm noticing no checkpoints are being done on one specific DB, resulting in a growing transaction log. No open transactions.
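In case it helps, here is what I have been running to rule things out (the database name is a placeholder):
[code]
-- Any open transaction pinning the log?
DBCC OPENTRAN ('MyDB');

-- How full is each database's log?
DBCC SQLPERF (LOGSPACE);
[/code]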
I see a line in sys.sysprocesses. The process's status is suspended and the command is CHECKPOINT. I have the information here exactly as it is on my monitor. It seems to be consuming high CPU. What should I do?
[code]
spid:          10
kpid:          7416
block:         0
waittype:      0x0081
waittime:      232546
lastwaittype:  CHECKPOINT_QUEUE
waitresource:
dbid:          1
uid:           1
cpu:           427046
physical_io:   36695
memusage:      0
login_time:    2007-04-04 10:01:32.787
lastbatch:     2007-04-04 10:01:32.787
ecid:          0
open_tran:     0
status:        suspended
sid:           0x0100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
hostname:
program_name:
hostprocess:
cmd:           CHECKPOINT
nt_domain:
nt_username:
loginame:      sa
[/code]
I have a package that uses checkpoint restart. It is responsible for truncating many sets of tables and then loading them. There are several Execute SQL tasks to truncate the tables and several corresponding data flows to accomplish the loads.
If a load fails, I want the corresponding truncate task to be part of the restart; otherwise duplicate data may be loaded. Normally, SSIS will restart at the failed task. I read something about containers that led me to think that if I put the truncate & matching load pair in a Sequence container, the container would be the restart point, but either I read it wrong or it's not working that way.
With 2005 SP1. I have built an SSIS package that successfully saves a checkpoint file and sometimes successfully restarts. (I've also built some others that are 100% reliable.)
On an unsuccessful restart it appears as though the failed steps and subsequent steps do not execute. The package appears to "complete" though, and the checkpoint file is removed as though everything is fine.
On a successful restart the failed step re-executes and everything works fine.
The issue appears to be that when a failed step finishes at the same time as a successful step, there is contention in the process that writes the checkpoint file out, and the checkpoint file ends up corrupt. The failing step runs in parallel with a successful step and the execution times are very similar, so task A may complete before or after task B.
Contents of a good checkpoint file follow:
[code]
<DTS:Checkpoint xmlns:DTS="www.microsoft.com/SqlServer/Dts" DTS:PackageID="{3BFFF2F9-74BA-4CE9-8435-81CC198E8144}">
  <DTS:Variables DTS:ContID="{3BFFF2F9-74BA-4CE9-8435-81CC198E8144}"/>
  <DTS:Container DTS:ContID="{3655F83D-5EA5-4F16-9B8F-520582A1229A}" DTS:Result="0" DTS:PrecedenceMap=""/>
  <DTS:Container DTS:ContID="{DB2D7A57-D405-4B11-AF4A-41B331EE3F15}" DTS:Result="0" DTS:PrecedenceMap=""/>
  <DTS:Container DTS:ContID="{DFC6A95F-CCFA-4FD9-B604-FCBD722B47D8}" DTS:Result="0" DTS:PrecedenceMap="YYY"/>
</DTS:Checkpoint>
[/code]
I've set up a number of jobs (not a maintenance plan) via a script in SQL 2005. These jobs do the following:
1) Full backup every Sunday night
2) Differential backup every weeknight
3) Log backup every hour
The database is obviously in the full recovery model.
The backups all seem to be running, with one issue - the log file is still growing and is not being truncated. I was under the impression that each log backup should result in the log being truncated. However, this does not seem to be the case.
Is there anything obvious I've missed that needs to be set up? Or is there a way I can check that the full backup is actually setting the appropriate checkpoint, and that the log backups are 'seeing' these checkpoints?
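Since this is 2005, one way I know of to check what is holding the log up is the log_reuse_wait_desc column (the database name below is a placeholder):
[code]
-- What is preventing log truncation? (SQL Server 2005)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDB';

-- How full is each log right now?
DBCC SQLPERF (LOGSPACE);
[/code]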
If I have 3 tasks in my control flow with checkpoints enabled, and the TransactionOption of the tasks is Required,
while the TransactionOption of the package is Supported,
then if the second task fails, the package restarts from the first task when it is run again, instead of using the checkpoint and beginning from the second task.
I have a database that has truncate on checkpoint set for the log file. The log file is set to autogrow. Is it necessary to run DBCC SHRINKDATABASE (or the like) to get the log file to contract? Is there any harm in not contracting the log file? I'm looking for the most efficient and least-likely-to-fail path, as the DB sits 'really remote' and there is little opportunity for observation.
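If shrinking does turn out to be necessary, this is the sort of thing I had in mind - just a sketch, with placeholder database and log file names:
[code]
-- Shrink only the log file, to a target of 100 MB
USE MyDB;
DBCC SHRINKFILE (MyDB_Log, 100);
[/code]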
Does anyone have any recommendations on re-indexing? I have one table that bears most of the growth. It has a clustered index. What would be a suitable data point to watch? I run an SP to save DBCC SHOWCONTIG info along with the duration of a test query, but haven't seen a clear breakover point.
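For what it's worth, the SP I run captures roughly the following; the thresholds in the comments are my own guesses, not established rules:
[code]
-- Snapshot fragmentation for the fast-growing table (placeholder name)
DBCC SHOWCONTIG ('dbo.MyBigTable') WITH TABLERESULTS;

-- Candidate breakover point: Scan Density well below 100%,
-- or Logical Scan Fragmentation past roughly 10-20%, then:
-- DBCC DBREINDEX ('dbo.MyBigTable');
[/code]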
Can anyone assist with this problem? Every now and then my overnight backups (Backup Exec) fail due to the truncate log on checkpoint option being enabled. This occasionally occurs on the master and msdb databases. I have unchecked the truncate log on checkpoint box numerous times and the backups work fine. Then, mysteriously, the box is checked again and the backups fail once more. I am stuck as to why this can happen. Is there a generic stored procedure that checks this box?
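For now I toggle it back off with sp_dboption rather than the checkbox; this is all I'm doing (shown for msdb, same idea for master):
[code]
-- Check the current setting
EXEC sp_dboption 'msdb', 'trunc. log on chkpt.';

-- Turn the option back off
EXEC sp_dboption 'msdb', 'trunc. log on chkpt.', 'false';
[/code]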
The log on one of my databases keeps filling up, even though I have it set to truncate on checkpoint. The only real difference between this database and the others on my server is that it is built from the dump of another database (on another server) where the tables are marked for replication.
I'm wondering if the fact that it is built from a replicated database could be causing this. I've noticed I can't drop any of the tables, even though my database isn't set to replicate (or publish).
Two questions: 1) Any ideas? 2) Is there any way I can make my server realize I'm not replicating, so it will let me drop those tables? (Nothing in Enterprise Manager indicates that my database is replicating or publishing.)
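One approach I've seen suggested for leftover replication flags on a restored copy - I haven't verified it's safe in every case, so treat it as a sketch with a placeholder database name:
[code]
-- Remove replication metadata left behind by the source server's publications
EXEC sp_removedbreplication 'MyRestoredDB';
[/code]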
I am a DB developer (not an admin), so excuse me if this is a silly question.
I don't know much about the CHECKPOINT background process, but I feel this process is slowing down the performance of my SPs, which run slower than normal in some cases.
Especially when I see one of my processes go into suspended mode with a wait type of SLEEP_BPOOL_FLUSH, while the CHECKPOINT process is also suspended with a wait type of CHECKPOINT_QUEUE.
More important than anything else: this background process (whose spid I always find to be 11) BLOCKS all other user processes when it goes into suspended mode with the SLEEP_BPOOL_FLUSH wait type.
I don't know whether my analysis is correct (claiming checkpoint as the culprit); I need expert advice and help.
Can someone give me some info on checkpoints and how they affect server performance?
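This is the query I've been using to catch it in the act (against the old sysprocesses view, which should work the same on 2000 and 2005; the column choices are mine):
[code]
-- Who is waiting on what, and who is blocked
SELECT spid, status, lastwaittype, waittime, blocked, cmd
FROM master..sysprocesses
WHERE blocked <> 0
   OR lastwaittype IN ('SLEEP_BPOOL_FLUSH', 'CHECKPOINT_QUEUE');
[/code]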
We're running the Microsoft product SMS 2003 SP1 for software deployment, patching, hardware inventory, etc. The back-end is SQL 2000 Enterprise SP4, which is installed on the same box as the SMS 2003 SP1 product, and the DB is 145 GB.
We started noticing that the server would freeze every minute or so for 30 seconds. We started logging stats via Perfmon and saw that the average disk queue length for the physical F: drive would skyrocket to between 400 and 500 for 30 seconds at the same time the freezing occurred. I have determined that this is occurring during the checkpoint. The recovery interval option is set to the default of 0 in SQL; when I changed the setting to every 5 minutes, the average disk queue length for the physical F: drive would skyrocket to between 400 and 500 every 5 minutes and would subside after 2 minutes. I understand the need for the checkpoint / recovery interval option, but I don't believe this high an average disk queue length should be occurring.
Does anyone know why this is happening and how to fix it? The freezing of the box while checkpointing is killing me.
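For reference, this is how I've been changing the recovery interval back and forth while testing (the value is in minutes; 0 is the default):
[code]
-- 'recovery interval' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'recovery interval', 0;  -- minutes; 0 = default
RECONFIGURE WITH OVERRIDE;
[/code]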
This is maybe a dumb question, but I couldn't find a definitive answer on BOL. Looking at my backup script: if I issue a CHECKPOINT, does this truly force all transaction log entries to the data file, therefore making it unnecessary to BACKUP LOG (just BACKUP DATABASE is needed)?
Louis
I have a package with the settings SaveCheckpoints=True and CheckpointUsage=Never. After the package failed, I fixed the cause of the failure by setting a database column to allow nulls. Then I went to the web app we built to monitor package execution and clicked the button to restart the package. The web app loads the package, sets the CheckpointUsage property of the package object to 'Always', and then executes the package in a new thread. The package then produced this error:
Checkpoint file "E:Package1Checkpoint" failed to open due to error 0x80070005 "Access is denied.".
Since there was only one remaining task to run in the package, I ran it manually.
Now here is the really interesting part. I then needed to run the same package but with different parameters. When I attempted to run it with the saved checkpoint settings (CheckpointUsage=Never, SaveCheckpoints=True), I got this error:
"An existing checkpoint file is found with contents that do not appear to be for this package, so the file cannot be overwritten to start saving new checkpoints. Remove the existing checkpoint file and try again. This error occurs when a checkpoint file exists, the package is set to not use a checkpoint file, but to save checkpoints. The existing checkpoint file will not be overwritten. "
So I then attempted to rename the checkpoint file so it would not interfere; however, it would not let me, saying that the file was in use.
So what I had to do was add a configuration entry which set SaveCheckpoints to False. Then I was able to run the package.
Hi all, is there a checkpoint at record or row level? I mean, let's say I have 2 million rows I want to transfer from one DB to another, and at about 1 million records the package fails. The next time I run it I want to continue from where it left off; since 1 million records were already transferred, I don't want to repeat them - just like checkpoints at the task level. Further information: I am working in SSIS.
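There's no row-level checkpoint that I know of, but the workaround I'm considering is a watermark on the source key - a rough sketch, with all table and column names made up:
[code]
-- Resume the transfer from the highest key already copied
DECLARE @last_id INT;
SELECT @last_id = ISNULL(MAX(id), 0) FROM TargetDB.dbo.Orders;

INSERT INTO TargetDB.dbo.Orders (id, amount)
SELECT id, amount
FROM SourceDB.dbo.Orders
WHERE id > @last_id;
[/code]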
I have several packages within Sequence containers inside one main dtsx package with a checkpoint configuration, and when I run it some succeed and some don't. The problem is that when I rerun it, the checkpoint doesn't seem to work, because some of the successful packages are rerun as well (and not skipped as they should be). In other words, the process does not begin at the point of failure.
It seems that packages which finish after the failure point (and succeed) are not registered in the checkpoint file, so when I rerun the main package those succeeded packages are rerun too.
I have an SSIS solution with 8 packages in it. I have checkpoints turned on with the 'If Exists' option. Each of the 8 packages has its own separate checkpoint file specified.
One out of two runs will fail with one of the errors below:
The checkpoint file \xxxxxxxx is locked by another process. This may occur if another instance of this package is currently executing.
Checkpoint file \xxxxxxxx failed to open due to error 0x80070020 "The process cannot access the file because it is being used by another process."
I have checked all the settings and everything looks fine. It looks like the problem is that when you have many control flow tasks in a package and two of them complete at the same time, both try to write to this file; one of them is unable to write, and it fails.
This is causing the entire job to fail even though the control flow was successful.
Anyone encounter this issue? Any assistance is appreciated.
I'm trying to make sure my database options are set up properly. Our database is used for our decision support system. The data is loaded in once a day. Should I have truncate log on checkpoint on, and should I limit the size the log can grow to? I just had to shrink the log; it grew so much that it took up all the space that was left. If I have truncate log on checkpoint on, do I just have to issue the CHECKPOINT command in order for it to truncate the transaction log?
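For the one-off cleanup, this is roughly what I ran (the database and file names are placeholders) - though as I understand it this breaks the log backup chain, so it shouldn't be routine:
[code]
-- Discard the log contents, then shrink the file (SQL 2000/2005 era syntax)
BACKUP LOG MyDSSDB WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (MyDSSDB_Log, 500);  -- target size in MB
[/code]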
I am new to SQL Server 7 and have inherited a server built by a consultant who is no longer here. I have noticed that the system databases (master, msdb & model) are completely backed up on a nightly basis and are all set with truncate log on checkpoint. Is this the proper way to have things set up?
I read recently that: "if a before-image was made before an interruption in database processing, you use the before-image dump & simply begin processing again from just before the last transaction. Any checkpoints are irrelevant." I'm not a DBA here, so bear with me... What I don't understand is, why are checkpoints irrelevant when you did a before-image dump? Why not begin from the last checkpoint? Probably part of my problem is understanding the difference between the before-image dump and a checkpoint. I know that the checkpoint is a db event that suspends processing & writes data to the disk. It represents a moment in time when everything is OK in the db. That sounds a lot like a before-image dump :) Maybe the checkpoint is further back in time (?) Basically, if there's an interruption and you need to restore, do you resume from the transaction just prior to the last one, from the before-image? Or do you resume from the transaction at the checkpoint? Thanks for any help -- jason shohet
I have a situation where I need to make sure a task executes regardless of whether the package starts from a checkpoint or not. Is this possible?
Here's the scenario:
I have a package with 3 tasks {TaskA, TaskB, TaskC} that execute serially using OnSuccess precedence constraints. The package is set up to use checkpoints so that if a task fails, the package will restart from that failed task. TaskA is insignificant here. TaskB fetches some data and puts it in a raw file. TaskC inserts that raw file data into a table.
The problem is that the insertion violates an integrity constraint in the database - so it fails. The problem is easily fixed, but it needs to be fixed in TaskB because that is where the data is sourced.
So, I need to be able to execute TaskB again, even though it was TaskC that failed. Currently the package restarts from TaskC, which reuses the raw file (which has the bad data in it), so the package continues to fail even though the cause of the problem has been fixed.
How do I configure the package in order to execute TaskB again? Is it even possible?
There is a bug in SSIS 2005 concerning the way that checkpoint files behave in concert with Sequence containers. It is documented (at length) here:
Is it possible to execute a container regardless of the checkpoint file? (http://forums.microsoft.com/msdn/showpost.aspx?postid=1574262&siteid=1&sb=0&d=1&at=7&ft=11&tf=0&pageid=0)
Is anyone from Microsoft yet able to give a definitive answer as to whether this will be fixed in Katmai? A yes or no answer would be very much appreciated.
What are DTS:Result & DTS:PrecedenceMap used for? What values can DTS:Result contain, and what do they mean? [Today I have only seen "0".] What values can DTS:PrecedenceMap contain, and what do they mean? [Today I have only seen "Y" & "".]
I am trying to dynamically set the name of the checkpoint file. I have used expressions on the package property and evaluated the following expression correctly.
-The first task in that package (an Exec SQL Task) retrieves a timestamp value from an external source
-That timestamp value is used in data-flows to pull data that has been changed since then
-At the end of the package the value in the db is updated
Now... if something fails, then on the next execution the checkpoint file will kick in. But this means that the first task (which retrieves the timestamp) will not execute, and therefore all the data flows will be pulling the wrong data.
The only way I can think of to get around this problem is to specify that a task should execute regardless of the presence of a checkpoint file. Unfortunately, it seems this cannot be done.
Another option might be to put a task in the package's OnPreExecute event handler that gets the timestamp value.
Has anyone got any advice about how I should proceed? Short of a suggestion from elsewhere, I'm going to go with the tactic of using the OnPreExecute to retrieve the timestamp.
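For context, the timestamp bookkeeping the Exec SQL tasks do is essentially this (the watermark table and column are invented for illustration):
[code]
-- First task: read the high-water mark
SELECT last_extract_time FROM dbo.etl_watermark;

-- ...data flows pull rows changed since that value...

-- Final task: move the mark forward
UPDATE dbo.etl_watermark SET last_extract_time = GETDATE();
[/code]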
Has anyone used checkpoint files in conjunction with the OnError event handler? I'm having a problem getting the OnError event to fire when the SSIS package reruns with the checkpoint file.
The first run of the package (without a checkpoint file) works fine. The error occurs, the OnError event handler is called, the package stops, and the checkpoint file is created.
When the package is restarted, it goes to the correct spot (where the error occurred) using the checkpoint file; then it throws an error within the For Loop container and does not call the OnError event handler. The OnError event handler is set up on the For Loop container. The For Loop performs three loops, and each of these loops creates an error. Not one of the errors within the three loops triggers the OnError event handler...
I still have a problem with DBCC DBREINDEX, which results in a transaction log as large as the backup; even trying the CHECKPOINT command after this job doesn't change anything.
(We cannot change to DBCC INDEXDEFRAG for all the indexes.)
Is it OK to set truncate log on checkpoint to true while this job runs and then set it back to false after the job?
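Concretely, what I'm contemplating is wrapping the job like this (SQL 2000 syntax; the database and table names are placeholders) - with the understanding that log backups can't be taken while the option is on, so a full backup afterwards restarts the chain:
[code]
-- Before the re-index job
EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'true';

DBCC DBREINDEX ('MyDB.dbo.MyBigTable');

-- After the job: turn it back off and restart the backup chain
EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'false';
BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB_full.bak' WITH INIT;
[/code]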