SQL Server 2008 :: Backup Running Long And Backup Threads Show Suspended
Feb 18, 2015
SQL Server 2008 R2, 6 GB memory... I attempted a backup of a 500 GB database, but it was taking far too long. I checked the resources on the box and saw the CPU at 100%. I checked the SQL Server activity log and saw a hung query (the user was not even logged on) that had multiple threads, so I killed it, and the CPU utilization is now back to normal.
The trouble is that all of the threads for the backup now show 'suspended' in Activity Monitor, and the backup appears to be doing nothing.
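A quick way to see whether a backup is actually moving is to poll sys.dm_exec_requests, which exposes percent_complete for BACKUP commands. A minimal sketch:

SELECT r.session_id, r.command, r.status, r.wait_type,
       r.percent_complete,
       r.estimated_completion_time / 60000 AS est_minutes_left
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('BACKUP DATABASE', 'BACKUP LOG');

If percent_complete never advances, the backup is genuinely stalled rather than just slow.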
If I create an ad hoc db backup that takes, say, 30 minutes to complete, should I suspend the transaction log backups that run every 10 minutes until the full backup is complete?
I've written a custom script to delete backup files from a location, but I'm unable to modify it to count the number of files deleted. How do I modify the script?
/* Script to delete older than N days backup from a specific directory */
USE [db_admin]
GO
IF OBJECT_ID('usp_DeleteBackup', 'P') IS NOT NULL
    DROP PROC usp_DeleteBackup
GO
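One way to get the count (a minimal sketch, not the poster's full proc; path and retention are illustrative, and it assumes xp_cmdshell is enabled): capture a directory listing before and after the delete and report the difference.

DECLARE @before int, @after int;
CREATE TABLE #f (fname varchar(260));

INSERT #f EXEC master..xp_cmdshell 'dir /b "D:\Backups\*.bak"';
DELETE #f WHERE fname IS NULL OR fname = 'File Not Found';
SELECT @before = COUNT(*) FROM #f;

-- delete files older than N days (forfiles ships with Windows; /d -7 = older than 7 days)
EXEC master..xp_cmdshell 'forfiles /p "D:\Backups" /m *.bak /d -7 /c "cmd /c del @path"';

TRUNCATE TABLE #f;
INSERT #f EXEC master..xp_cmdshell 'dir /b "D:\Backups\*.bak"';
DELETE #f WHERE fname IS NULL OR fname = 'File Not Found';
SELECT @after = COUNT(*) FROM #f;

PRINT CAST(@before - @after AS varchar(10)) + ' backup file(s) deleted.';
DROP TABLE #f;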
Hello. We recently ran into extremely large transaction log sizes on our SQL 2005 box... it turns out they were not being backed up. After doing some research, I found that they should be backed up very frequently in order to keep their sizes down. I kind of inherited the mess, but got three maintenance plans set up:
FULL backup that runs weekly
DIFFERENTIAL backup that runs nightly
TRANSACTION LOG backup that runs hourly
However, since I implemented the plans there has been high CPU utilization at all times. I finally figured out what was causing it today: the BACKUP LOG process is hanging in "Suspended" mode when I view processes. I have to manually kill them to release the CPU.
How can I trace the cause of the hang? Is there a better way to backup/truncate the log files?
I run an operating-system backup nightly using Veritas Backup Exec (v8); immediately after, I run a SQL-written backup job.
1. Can I use the same tape? Appending to the media that Veritas has written to?
2. How do I run the backup from the cmd line from within Veritas? It has an option to "execute after backup job:" - how do I run a SQL job from the operating system?
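For question 2, one approach (a sketch only; the server and job names below are placeholders) is to have Backup Exec's "execute after backup job" run osql and start a SQL Agent job via msdb's documented sp_start_job procedure:

osql -E -S MYSERVER -Q "EXEC msdb.dbo.sp_start_job @job_name = N'Nightly SQL Backup'"

The -E switch uses a trusted connection, so the account Backup Exec runs under needs rights to start the job.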
Try this script to see what queries are taking over a second. To get some real output, you need a long-running query. Here's one (estimated to take over an hour):

PRINT GETDATE()
select count_big(*)
from sys.objects s1, sys.objects s2, sys.objects s3, sys.objects s4, sys.objects s5
PRINT GETDATE()

Output is:

session_id  elapsed  task_alloc  task_dealloc  runningSqlText  FullSqlText  query_plan
51  32847  0  0  select count_big(*) from sys.objects s1, sys.objects s2, sys.objects s3, sys.objects s4, sys.objects s5  SQL  Plan

Clicking on SQL opens the full SQL batch as a .txt file, including the PRINT statements. Clicking on Plan allows you to see the .sqlplan file in SSMS.

Title: Using a VB Script to show long-running queries, complete with query plans.

Today (July 14th), I found a query running for hours on a development box. Rather than kill it, I decided to use this opportunity to develop a script to show long-running queries, so I could see what was going on. (Reference Roy Carlson's article for the idea.)

This script generates a web page which shows long-running queries with the currently-executing SQL command, full SQL text, and .sqlplan files. The full SQL query text and the sqlplan file are output to files in your temp directory. If you have SQL Management Studio installed on the local computer, you should be able to open the .sqlplan to see the query plan of the whole batch for any statement. (Note: the comparison operators and path backslashes below were stripped in the original posting and have been restored.)

' LongestRunningQueries.vbs
' By Aaron W. West, 7/14/2006
' Idea from:
' http://www.sqlservercentral.com/columnists/rcarlson/scriptedserversnapshot.asp
' Reference: Troubleshooting Performance Problems in SQL Server 2005
' http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

Sub Main()
    Const MinimumMilliseconds = 1000

    Dim srvname
    If WScript.Arguments.Count > 0 Then
        srvname = WScript.Arguments(0)
    Else
        srvname = InputBox("Enter the server name", "Server", ".")
        If srvname = "" Then
            MsgBox "Cancelled"
            Exit Sub
        End If
    End If

    Const adOpenStatic = 3
    Const adLockOptimistic = 3
    Dim i
    ' Make the connection to your SQL Server
    Set conn = CreateObject("ADODB.Connection")
    Set rs = CreateObject("ADODB.Recordset")
    ' This uses a trusted connection; if you use SQL logins,
    ' add a username and password, but then encrypt this file
    ' using the Windows Script Encoder
    conn.Open "Provider=SQLOLEDB;Data Source=" & _
        srvname & ";Trusted_Connection=Yes;Initial Catalog=Master;"

    ' The query goes here
    sql = "select " & vbCrLf & _
          "  t1.session_id, " & vbCrLf & _
          "  t2.total_elapsed_time AS elapsed, " & vbCrLf & _
          "  -- t1.request_id, " & vbCrLf & _
          "  t1.task_alloc, " & vbCrLf & _
          "  t1.task_dealloc, " & vbCrLf & _
          "  -- t2.sql_handle, " & vbCrLf & _
          "  -- t2.statement_start_offset, " & vbCrLf & _
          "  -- t2.statement_end_offset, " & vbCrLf & _
          "  -- t2.plan_handle," & vbCrLf & _
          "  substring(sql.text, statement_start_offset/2, " & vbCrLf & _
          "    CASE WHEN statement_end_offset<1 THEN 8000 " & vbCrLf & _
          "    ELSE (statement_end_offset-statement_start_offset)/2 " & vbCrLf & _
          "    END) AS runningSqlText," & vbCrLf & _
          "  sql.text as FullSqlText," & vbCrLf & _
          "  p.query_plan " & vbCrLf & _
          "from (Select session_id, " & vbCrLf & _
          "        request_id, " & vbCrLf & _
          "        sum(internal_objects_alloc_page_count) as task_alloc, " & vbCrLf & _
          "        sum(internal_objects_dealloc_page_count) as task_dealloc " & vbCrLf & _
          "      from sys.dm_db_task_space_usage " & vbCrLf & _
          "      group by session_id, request_id) as t1, " & vbCrLf & _
          "  sys.dm_exec_requests as t2 " & vbCrLf & _
          "cross apply sys.dm_exec_sql_text(t2.sql_handle) AS sql " & vbCrLf & _
          "cross apply sys.dm_exec_query_plan(t2.plan_handle) AS p " & vbCrLf & _
          "where t1.session_id = t2.session_id and " & vbCrLf & _
          "  (t1.request_id = t2.request_id) " & vbCrLf & _
          "  AND total_elapsed_time > " & MinimumMilliseconds & vbCrLf & _
          "order by t1.task_alloc DESC"

    rs.Open sql, conn, adOpenStatic, adLockOptimistic
    'rs.MoveFirst

    ' Build an HTML table with one header cell per column
    pg = "<html><head><title>Top consuming queries</title></head>" & vbCrLf
    pg = pg & "<table border=1>" & vbCrLf
    If Not rs.EOF Then
        pg = pg & "<tr>"
        For Each col In rs.Fields
            pg = pg & "<th>" & col.Name & "</th>"
            c = c + 1
        Next
        pg = pg & "</tr>"
    Else
        pg = pg & "Query returned no results"
    End If
    cols = c

    Dim filename
    Dim WshShell
    Set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshSysEnv = WshShell.Environment("PROCESS")
    temp = WshShell.ExpandEnvironmentStrings(WshSysEnv("TEMP")) & "\"

    Dim fso, f
    Set fso = CreateObject("Scripting.FileSystemObject")

    i = 0
    Dim c
    Do Until rs.EOF
        i = i + 1
        pg = pg & "<tr>"
        For c = 0 To cols - 3
            pg = pg & "<td>" & RTrim(rs(c)) & "</td>"
        Next
        ' Output full SQL and plan text to files, and provide links to them
        filename = "topplan-sql" & i & ".txt"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols - 2)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>SQL</a>"
        filename = "topplan" & i & ".sqlplan"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols - 1)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>Plan</a>"
        ' We could open them immediately, eg:
        ' WshShell.Run temp & filename
        rs.MoveNext
        pg = pg & "</tr>"
    Loop
    pg = pg & "</table>"

    filename = temp & "topplans.htm"
    Set f = fso.CreateTextFile(filename, True, True)
    f.Write pg
    f.Close

    Dim oIE
    Set oIE = CreateObject("InternetExplorer.Application")
    oIE.Visible = True
    oIE.Navigate filename
    ' Alternate method:
    ' WshShell.Run filename

    ' Clean up
    rs.Close
    conn.Close
    Set WshShell = Nothing
    Set oIE = Nothing
    Set f = Nothing
End Sub

Main
I was running a stored procedure that was suspended for about 11 hours, so I decided to kill it. Now it has been in the KILLED/ROLLBACK state for 12 hours, and when I check the status of the rollback it says "Estimated rollback completion: 0%. Estimated time remaining: 0 seconds." It is using up CPU time of 380000 and disk I/O of 970000. How do I stop this completely?
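The quoted rollback message is what KILL ... WITH STATUSONLY returns, and it can be re-run to poll progress (the SPID below is hypothetical):

KILL 53 WITH STATUSONLY;

When it reports 0% and 0 seconds indefinitely, the rollback is often stuck on something only a service restart will clear, so treat that as the last resort.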
I have a backup that comes to me nightly in the format XXXX_YY_MM_DD.bak, where the date is incremented each night; the previous night's backup is deleted when the next day's is added, so in general I have only one file in that folder. I wanted to set up a restore to run nightly:
RESTORE DATABASE [XXXData] FROM DISK = 'C:\folder\ExtractedData\app_XXX_backup_15_01_28.bak' WITH REPLACE --location of .bak file
I would like to do something like
RESTORE DATABASE [XXXData] FROM DISK = 'C:\folder\ExtractedData\app_XXX_backup*.bak' WITH REPLACE --location of .bak file
While I'm asking: in case there is an older one left behind (which shouldn't happen), I saw something about "LATESTFULL", but I think that's a Redgate command?
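Wildcards aren't valid in FROM DISK, but the restore can be built dynamically. A hedged sketch (assumes xp_cmdshell is enabled; the path matches the one above): list the folder newest-first, take the top .bak, and run the RESTORE. And yes, LATESTFULL appears to be Red Gate SQL Backup syntax rather than native T-SQL.

DECLARE @file varchar(260), @sql nvarchar(max);
CREATE TABLE #d (id int IDENTITY(1,1), fname varchar(260));

INSERT #d (fname)
EXEC master..xp_cmdshell 'dir /b /o-d "C:\folder\ExtractedData\app_XXX_backup*.bak"';

SELECT TOP (1) @file = fname FROM #d WHERE fname LIKE '%.bak' ORDER BY id; -- newest first

SET @sql = N'RESTORE DATABASE [XXXData] FROM DISK = ''C:\folder\ExtractedData\'
         + @file + N''' WITH REPLACE';
EXEC (@sql);
DROP TABLE #d;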
A job runs every morning at 3:00 am to back up a database. Many of the tables have triggers on them to write updates, deletes and inserts to audit tables. A typical trigger looks like this:
USE [ProjectDB_Live]
GO
/****** Object: Trigger [dbo].[trgStakeHolders] Script Date: 03/02/2015 10:23:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[code]....
I added a trigger to a table over the weekend, structured the same as the one above, and when the backup ran last night it copied every row in the table into the audit table, as if every row had been updated.
A question about SQL transactional replication, specifically setting up transactional replication via SQL backup and restore. The scenario:
S1 = Primary Server 1
R1 = T-Replication Server 1
R2 = T-Replication Server 2
So we have S1 replicating to R1, and we want to build another subscriber which is R2.
Can I take the replicated database from R1, back it up, then restore it to R2, and create the publication/subscription?
Will that work? If not, is there an easier way to avoid the snapshot? The reason I ask is that we do have replication snapshots, but they take a long time. One of my colleagues said he tried this, but replication created duplicate rows on R2, which is why he had to use a replication snapshot.
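For what it's worth, SQL Server has a documented alternative to snapshots: initialize from backup. The caveat is that the backup is supposed to be of the publication database on S1, not of R1's subscriber copy, which would explain the duplicate rows the colleague saw. A hedged sketch (all names and paths illustrative), run at the publisher after enabling @allow_initialize_from_backup on the publication:

EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'R2',
    @destination_db = N'ReplDB',
    @sync_type = N'initialize with backup',
    @backupdevicetype = 'disk',
    @backupdevicename = N'\\SomeShare\Backups\ReplDB.bak';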
I am trying to locate the .mdf, log and .bak files, but I am not seeing the folder (MSSQL10_50.MSSQLSERVER); the only folders I see there are 90, 100, 110 and Report Builder.
I located the file paths in the properties of one of the databases by right-clicking it in SSMS, but when I physically go there, I don't find that folder.
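Rather than hunting the disk, you can ask the instance where every file really lives; sys.master_files holds the physical paths for all databases:

SELECT DB_NAME(database_id) AS database_name, name AS logical_name, physical_name
FROM sys.master_files
ORDER BY database_name;

The MSSQL10_50.MSSQLSERVER folder only exists under the drive/path a SQL Server 2008 R2 instance was installed to; the 90/100/110 folders belong to shared components of the various versions.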
I'm getting this message on my third automated transaction-log backup of the day. Both databases are in full recovery mode and both were successfully backed up at 01:00. The transaction logs backed up perfectly happily at 01:30 and 05:30, but failed at 09:30.
The only difference between 05:30 and 09:30's backups is that the log files were shrunk at 08:15 (the databases in question are the ones that sit under ILM2007, and keeping the log files small keeps the system running better).
Is it possible that shrinking the log files causes the database to think that there hasn't been a full database backup?
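Shrinking a log file does not by itself break the log backup chain; a missing full backup or a recovery-model switch does. One way to check what the server thinks happened (database name is a placeholder) is the backup history in msdb:

SELECT TOP (10) type, backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'YourDb'
ORDER BY backup_start_date DESC;

type is 'D' for full, 'I' for differential and 'L' for log; anything odd between the 05:30 and 09:30 entries would explain the failure.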
I am having a serious problem which I need some help with regarding our SQL Server backup.
Basically it has started to take ages (as in 48+ hours) when it should only take about 4 hours. The database is only 380 GB and, as of Monday, our backups have not been completing. When I check Activity Monitor, I can see that the 'BACKUP DATABASE' process is suspended with a huge wait time, and the wait type is ASYNC_IO_COMPLETION.
I am not sure how to solve this, but I am going to have to!
So if anyone has any ideas, please help me! If you need any other info, please let me know.
We use Netbackup for our SQL servers to backup and restore databases. I would like the service account used by Netbackup to have as limited permissions as possible. The account should be able to backup and restore a db without being able to read any of the content. Right now the account jobs fail if the service account is not in the sysadmin role.
I removed the account from sysadmin and limited it to dbcreator and public, but the jobs fail.
How do I set up an account so that people who know the service account password can't log in with that account and read db information?
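A sketch of the smallest built-in grants I know of (the account name is hypothetical): per database, membership in db_backupoperator allows BACKUP without any SELECT rights.

USE [SomeDb];  -- repeat per database
CREATE USER [DOMAIN\svc_netbackup] FOR LOGIN [DOMAIN\svc_netbackup];
EXEC sp_addrolemember N'db_backupoperator', N'DOMAIN\svc_netbackup';

RESTORE is the sticking point: per Books Online, restoring over an existing database requires sysadmin/dbcreator membership or ownership of the database, and restoring a new database requires CREATE DATABASE permission, which is likely why the NetBackup jobs fail below sysadmin.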
"The log in this backup set terminates at LSN 9566000024284900001, which is too early to apply to the database. A more recent log backup that includes LSN 986000002731000001 can be restored."
If I am missing a previous log backup file in the restore sequence, how do I find which LSN maps to which log backup file (.trn file name)? In other words, by looking at the LSN, can we tell which log backup file is missing?
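msdb keeps exactly this mapping; joining backupset to backupmediafamily shows the LSN range inside each physical .trn file (database name is a placeholder):

SELECT bs.backup_start_date, bs.first_lsn, bs.last_lsn, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'YourDb' AND bs.type = 'L'
ORDER BY bs.backup_start_date;

The missing file is the one whose first_lsn/last_lsn range covers the LSN named in the error message.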
Hi, I migrated two 110 GB databases from SQL Server 2000 to SQL Server 2005 some weeks ago. The server with the SQL 2000 databases still exists. It is a data warehouse database, so data is updated once a day by ETL processes. The data is exactly the same in the SQL Server 2000 and SQL Server 2005 databases.
The database backup needs about 2 hours on the SQL 2000 server, using a maintenance plan to do a full backup daily. I tried to do a full backup on the SQL Server 2005 server, but had to stop the process after some hours because no progress was visible. Activity Monitor just showed one process in the database, with 3 elements doing BACKUP DATABASE on one of the databases:
1st one: status is stopped, wait type is ASYNC_IO_COMPLETION, wait time is very high and rising constantly (over 8,000,000 after 2.5 hours)
2nd one: status is stopped, wait type is BACKUPBUFFER, wait time is changing in the range of 20 to 90
3rd one: status is executable, no wait type, no wait time
No users or processes were accessing the database at this time.
The SQL Server 2000 has SP4 installed and SQL Server 2005 has SP2.
The SQL 2005 server is an Itanium dual-core with 2 processors and SATA RAID, and usually all database operations are about 3 or 4 times faster than on the SQL Server 2000 server. The SQL Server 2000 server is a Xeon with two processors and a slower RAID. Why is the backup so slow on the SQL Server 2005 server? How can I speed it up? What do the wait types mean, and how can I avoid them?
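ASYNC_IO_COMPLETION means the backup is waiting on the I/O subsystem (reading the data files or writing the destination), so the SATA RAID is the first suspect. The backup's own buffering can also be tuned; a hedged sketch with illustrative values:

BACKUP DATABASE [WarehouseDB] TO DISK = N'E:\Backups\WarehouseDB.bak'
WITH BUFFERCOUNT = 64, MAXTRANSFERSIZE = 4194304, STATS = 5;

STATS = 5 at least makes progress visible every 5 percent, which addresses the "no progress visible" symptom directly.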
I am trying to create a job using a PowerShell script to move backup files from one folder to another. I am using the Ola Hallengren scripts for backups, which create a common backup folder with sub-folders per database and further sub-folders for full and log backups. My goal is to move full backups that are older than a month to a different drive, keeping the same folder structure. I was able to move the first set of backups without any problem, but I can't move any more files; I keep getting this error even when I try to overwrite the previous file with the -Force parameter:
Move-Item : Cannot create a file when that file already exists.
My backups sometimes fail. My db size is around 400 GB, and we take the backup to a remote server. The free space available on that disk shows 600 GB, but my full database backup ran for more than 10 hours and failed; the failure reason was that there is not enough space on the disk.
What could be the possible failure reasons? The backup server has more free space than the database size, so why does the job failure show that message? I've noticed the same thing a couple of times in the past.
Is there any way to find out how large the backup file will be before we run the backup job?
i.e., if we run a full backup of the test1 database now, how big a .bak file will it generate for test1?
Currently we are using a maintenance plan with compression (2008 R2), and the database has TDE enabled.
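There is no exact built-in predictor, but two approximations work well. A full backup contains roughly the used pages of the database, so sp_spaceused gives an uncompressed estimate; for the compressed size, the ratio from previous backups recorded in msdb is the best guide. (Note that TDE-encrypted data barely compresses under 2008 R2 backup compression, so expect near-uncompressed sizes.) A sketch:

EXEC test1.dbo.sp_spaceused;  -- 'reserved' approximates the uncompressed .bak size

SELECT TOP (5) backup_start_date,
       backup_size / 1048576.0            AS backup_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb
FROM msdb.dbo.backupset
WHERE database_name = N'test1' AND type = 'D'
ORDER BY backup_start_date DESC;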
Wanted to check if there's any option to suppress the messages after a database backup statement is run, like we have in DBCC CHECKDB with NO_INFOMSGS?
Processed 448 pages for database 'master', file 'master' on file 1.
Processed 2 pages for database 'master', file 'mastlog' on file 1.
BACKUP DATABASE successfully processed 450 pages in 0.100 seconds (35.151 MB/sec).
Processed 280 pages for database 'model', file 'modeldev' on file 1.
Processed 2 pages for database 'model', file 'modellog' on file 1.
BACKUP DATABASE successfully processed 282 pages in 0.036 seconds (60.994 MB/sec).
Processed 1992 pages for database 'msdb', file 'MSDBData' on file 1.
Processed 1 pages for database 'msdb', file 'MSDBLog' on file 1.
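As far as I know, BACKUP has no NO_INFOMSGS-style option; the per-file "Processed n pages" lines always come back to the session. What can be suppressed is the success entry each backup writes to the SQL Server error log, via trace flag 3226:

DBCC TRACEON (3226, -1);  -- instance-wide; commonly set as a startup trace flag instead

This is popular on servers with frequent log backups, where the error log otherwise fills up with backup-success noise.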
Having a lot of problems with backup devices creating backups with a new transaction-log backup set for each day. This is causing the backups to grow way too fast. It seems to be random across our clients. I created new backup devices but get the same problem. A manual backup selecting "overwrite all existing backup sets" will fix it, but that starts the cycle all over again.
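If the symptom is each day's backup being appended to the device as a new backup set, the INIT option is the usual fix; it overwrites the existing sets instead of appending (device name illustrative):

BACKUP DATABASE [ClientDb] TO [ClientBackupDevice] WITH INIT;

The default is NOINIT (append), which matches the growth pattern described.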
It's OS/SQL related. I back up to a UNC device and frequently, but not always, get a network-related error. And the error is also OS related...
The scene: 64-bit Windows Server 2008 hosts 64-bit SQL 2008 Standard, and devices are created to point to 64-bit Server 2008 Server B (\\ServerB\SQL_backups\dvc_DB.bak). The database is backed up daily by a job, and when run the following error is returned:
Date: 2009/08/08 06:43:30 AM
Log: Job History (Backup_AssetData_2009-08-08)
Step ID: 3
Server: SQL01
Job Name: Backup_AssetData_2009-08-08
Step Name: Backup Database - Second Attempt
Duration: 00:05:33
Sql Severity: 16
Sql Message ID: 3013
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message: Executed as user: COMPANY\SERVICE_USER. Processed 94096 pages for database 'AssetData', file 'AssetData' on file 1. [SQLSTATE 01000] (Message 4035) Processed 1 pages for database 'AssetData', file 'AssetData_log' on file 1. [SQLSTATE 01000] (Message 4035) The operating system returned the error '64(failed to retrieve text for this error. Reason: 15105)' while attempting 'FlushFileBuffers' on 'dvc_AssetData(\\iscsrv-dcm04\RISCSRV-SQL01_Backups\dvc_AssetData.bak)'. [SQLSTATE 42000] (Error 3634) BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
The file is generated, the same size as a successful backup, but the job terminates unsuccessfully. If I delete the device's file first, the backup usually succeeds. This may point to a permissions-type error, but the user the job runs under is a domain admin. The destination server is not unavailable during this time, and the network shares also remain active throughout the exercise, although I haven't got a way to "prove" this.
I have a few databases that back up to the UNC described above (different files) that don't fail. My SQL 2000 server backs up its databases to the SQL 2008 server, and those backups don't fail with this error.
My research has led me to understand that the 64bit OS tries to buffer the files it receives from another server, and I was wondering if this could be influencing the backups as the destination server is also 64 bit Windows Server 2008.
##20090812## UPDATE
I tried another way to do the backup; with the same results:
BACKUP DATABASE Infovest TO DISK = '\\iscsrv-dcm04\iscsrv-sql01_backups\dvc_Infovest_Z.bak' WITH INIT, STATS = 1
We are executing a stored procedure multiple times at the same time from our application. A few of the processes execute successfully and a few fail with the error "50000: error executing the stored procedure"; if we run the same process again, it executes successfully. Can my SQL Server not handle multiple threads at the same time?
I'm trying to figure out why my transaction log backup is taking up to an hour to complete. I started off with a full recovery model with a Full database back up every Sunday, differential backups every Tuesday/Thursday and log backups every 5 minutes. I would have thought that the log file backups would execute much quicker because I'm backing them up more often.
Here is my backup statement, I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups]
WITH NOINIT, NOUNLOAD, NAME = N'xxxx log backup', SKIP, STATS = 10, NOFORMAT
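Nothing there is wrong as such, but NOINIT appends every 5-minute backup to the same ever-growing device, and appending to a huge backup set can get slower over time. A hedged alternative (path is illustrative) writes each log backup to its own timestamped file instead:

DECLARE @f nvarchar(260);
SET @f = N'E:\LogBackups\xxxx_'
       + REPLACE(REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), N'-', N''), N':', N''), N' ', N'_')
       + N'.trn';
BACKUP LOG [xxxx] TO DISK = @f WITH INIT, STATS = 10;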
Have a server running 14 production databases, 12 of which are replicated (transactional replication) to a second server for reporting. Distributor on same machine as publishers. All set to 'SyncWithBackup' = true, distribution set to Full recovery mode, log backups every 30 minutes, full backup nightly. This generally runs just fine.
Occasionally, the process 'hangs' indefinitely (has gone 12 hours or more before being caught) and I need to stop the backup job, stop one or more of the log reader agents, and restart everything, and it proceeds just fine. Annoying, but not fatal, and not very often.
This time, no matter what, the backup job hangs when it runs. This is true whether it is the FULL backup or just a Transaction Log backup. It hangs on the stored procedure sp_msrepl_backup_start, at the point where it is attempting to update the table 'MSrepl_backup_lsns'. When it is hung like this, all of the log reader agent jobs are also hung, blocked by this stored proc. I've tried stopping ALL of the log reader agents prior to starting the backup, but the backup process still hangs up at the same spot and never ends.
I can run the select statement in that SP that gathers the data for the databases 'manually' in a query window and it finishes in about 10 seconds. It actually seems to be hung up on the 'UPDATE' statement. When it is hung, I cannot SELECT from MSrepl_backup_lsns unless I append WITH (NOLOCK) to the statement. Nothing else I can find indicates that there is anything else locking that table. DBCC OPENTRAN shows that there is a lock on that table held by that stored proc -- but I can't see any reason why it won't update the table (17 total records) and move on.
As I said, normally this runs just fine. Totally baffled by what may be causing this at this time.
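Next time it hangs, sys.dm_os_waiting_tasks should show what the UPDATE on MSrepl_backup_lsns is actually waiting for, including any blocking session:

SELECT blocking_session_id, session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;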
I have an automated process which synchronizes a transactional publication using the initialize-from-backup approach. It drops the subscriptions and puts them back again once the restore on the subscriber is completed.
Dropping the subscriptions causes a lot of blocking and deadlocking. I've decided to remove those steps, but it causes loss of data on the subscriber.
Is it a must to drop and re-create the subscriptions during such process? If not, how can I avoid the loss of data?
Data got deleted on Friday evening. I need the database restored to Friday afternoon, but some data was entered on Monday which also needs to be there.
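The standard pattern for this (a sketch; all names, paths and the STOPAT time are illustrative) is to restore a copy of the database to the Friday-afternoon point in time, then merge the recovered rows back into the live database, preserving Monday's data. It requires the full recovery model and an unbroken log-backup chain covering Friday afternoon.

RESTORE DATABASE [MyDb_Recovered]
FROM DISK = N'E:\Backups\MyDb_Full.bak'
WITH MOVE 'MyDb' TO 'E:\Data\MyDb_Recovered.mdf',
     MOVE 'MyDb_log' TO 'E:\Data\MyDb_Recovered.ldf',
     NORECOVERY;

RESTORE LOG [MyDb_Recovered]
FROM DISK = N'E:\Backups\MyDb_Log.trn'
WITH STOPAT = '2015-02-13 16:00', RECOVERY;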
The Windows 2003 backup utility uses the shadow-copy option, which allows it to copy open files. Can I therefore use this utility to back up the .mdf and .ldf files for my SQL 2000 database? I could then attach the .mdf files if I needed to restore the database to another server. Can anyone tell me if this is safe? I've tried it and it worked, but I'm worried there may be some lurking danger in this approach.
I am working towards automating the process of testing our backups. For the meantime, I do it all manually - I copy the backup files (full + transaction logs) to our test server and then run the restore script. Once database restored I run the DBCC CheckDB. The results of checkdb I manually upload to our Sharepoint portal as proof that the backup file is intact with no errors.
Here are some ideas I have but have not yet tested:
Create a maintenance plan with 3 jobs:
--> PowerShell script to copy the files from the Prod server to the Test server - add this script to Job1
--> PowerShell script to restore the database files - add this script to Job2
--> Run DBCC CHECKDB in PowerShell (yet to find out if that's possible in PS) - add this script to Job3
I would like to use separate jobs so as to get a report on the duration and status of each job.
I would also like to get the results of DBCC CHECKDB as proof that no errors were found, for upload to our SharePoint portal. I don't know if that's possible via the job.
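On the last point: a SQL Agent job step can write its output to a file (the Advanced page of the step has an "Output file" setting), so a step like the sketch below fails visibly on corruption and leaves a text file to upload to SharePoint as proof (database name is a placeholder):

DBCC CHECKDB (N'RestoredDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

With NO_INFOMSGS, a clean run produces no output at all, so any text in the file means something to investigate.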