Scripting Overnight Backup Jobs For SQL Server Databases
Mar 21, 2008
Hi, I've just been given the task of finding out how to implement a backup procedure for our SQL Server databases. Most are running 2000, some 2005.
I'm a programmer, and I'm used to having a DBA to help me! I've seen a few methods on the web involving a stored procedure and running the task from the Windows task scheduler.
I need to back up and restore all the databases in SQL Server 2000 and work out a way of displaying whether or not it was successful.
Can anybody please point me in the right direction, as I've no idea how to do any of this really. I guess if I could set up a sproc to loop through the databases that would help, but I'm not sure where to start.
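For the looping idea, here is a minimal sketch of such a sproc, assuming a hypothetical backup folder of D:\Backups\ and skipping tempdb; it prints a success or failure message per database so the output can be used to show whether the run worked:

CREATE PROCEDURE dbo.usp_BackupAllDatabases
    @BackupPath varchar(260) = 'D:\Backups\'        -- hypothetical target folder
AS
BEGIN
    DECLARE @dbname sysname, @sql nvarchar(1000)

    -- Loop over every database except tempdb (works on SQL Server 2000 and 2005)
    DECLARE db_cur CURSOR FOR
        SELECT name FROM master.dbo.sysdatabases WHERE name <> 'tempdb'
    OPEN db_cur
    FETCH NEXT FROM db_cur INTO @dbname
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'BACKUP DATABASE [' + @dbname + N'] TO DISK = '''
                 + @BackupPath + @dbname + N'.bak'' WITH INIT'
        EXEC (@sql)

        -- @@ERROR is non-zero if the backup failed; report it so the output shows the outcome
        IF @@ERROR <> 0
            PRINT 'Backup FAILED for database ' + @dbname
        ELSE
            PRINT 'Backup succeeded for database ' + @dbname

        FETCH NEXT FROM db_cur INTO @dbname
    END
    CLOSE db_cur
    DEALLOCATE db_cur
END

The sproc can then be scheduled as a SQL Server Agent job step, or from the Windows scheduler with something like osql -E -Q "EXEC dbo.usp_BackupAllDatabases".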
While attempting to script out some multi-server jobs I am receiving this error:
TITLE: Microsoft SQL Server Management Studio
------------------------------
Script failed for Job 'my job name'. (Microsoft.SqlServer.Smo)
For help, click: <link removed to keep message width to a normal size :)>
------------------------------
ADDITIONAL INFORMATION:
Unable to cast object of type 'System.DBNull' to type 'System.String'. (Microsoft.SqlServer.Smo)
------------------------------
BUTTONS:
OK
------------------------------
This happens on any multi-server job. The reason behind needing to script this is the current database server that acts as our master server for SQL Server jobs is being replaced. I figured it would be easy enough to script out the job on the old server and run it (with minimal modifications) on the new server. Now, if I create any of these jobs manually on the new server and try to script it, it works just fine. Any ideas what might be the issue here?
I've tried doing it via SSMS from multiple servers and the issue persists.
I would like to move all of my DTS packages and corresponding jobs onto another MS SQL Server machine. The problem I have is that I haven't been able to find any straightforward information on how to script these jobs, move them onto another machine and execute them. I have scanned through previous posts written by people with the same problem, but every response is simply "script up the jobs and move them onto another server."

Maybe I'm missing something, but how exactly do I get a job to show up on the destination machine in EM, in the Management / SQL Server Agent / Jobs folder? I did Jobs / All Tasks / Script Job..., but then what? I saved the script on the destination machine, but am unclear as to what to do next. The MS SQL Server Administrator's Companion book is of no help on this subject, and I do not understand how this simple task can be so confusing and so poorly documented. Any help would be greatly appreciated. Thanks in advance! TB
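For what it's worth, a sketch of what the scripted job file generally contains, assuming nothing beyond the standard msdb procedures: Script Job produces plain T-SQL that calls msdb.dbo.sp_add_job, sp_add_jobstep, sp_add_jobschedule and sp_add_jobserver, roughly like the skeleton below (job name, command and schedule here are hypothetical). Running that script in Query Analyzer while connected to the destination server is what makes the job appear under Management / SQL Server Agent / Jobs there.

USE msdb
GO
-- Skeleton only; the script EM generates carries many more parameters
EXEC sp_add_job @job_name = N'My nightly backup'                  -- hypothetical job name
EXEC sp_add_jobstep @job_name = N'My nightly backup',
     @step_name = N'Run backup',
     @subsystem = N'TSQL',
     @command   = N'BACKUP DATABASE [MyDb] TO DISK = N''D:\Backups\MyDb.bak'''
EXEC sp_add_jobschedule @job_name = N'My nightly backup',
     @name = N'Nightly',
     @freq_type = 4, @freq_interval = 1,                          -- daily
     @active_start_time = 010000                                  -- 01:00:00
EXEC sp_add_jobserver @job_name = N'My nightly backup', @server_name = N'(LOCAL)'
GO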
I am trying to copy a job by scripting it out and running the script. I am getting these warnings:
Warning: Non-existent step referenced by @on_success_step_id.
Warning: Non-existent step referenced by @on_fail_step_id.
What's up with that?
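A hedged guess at what is happening there: the generated script adds the steps in order, so when sp_add_jobstep runs for an early step whose @on_success_step_id or @on_fail_step_id points at a later step, that later step does not exist yet and SQL Server emits the warning, even though the reference usually resolves once all the steps have been created. If the flow really does end up wrong, one sketch of a fix is to re-point the steps afterwards with sp_update_jobstep (job name and step numbers below are hypothetical):

USE msdb
GO
-- Make step 1 go to step 3 on success and step 4 on failure
EXEC sp_update_jobstep
     @job_name = N'My copied job',      -- hypothetical job name
     @step_id  = 1,
     @on_success_action = 4,            -- 4 = go to the step named in @on_success_step_id
     @on_success_step_id = 3,
     @on_fail_action = 4,               -- 4 = go to the step named in @on_fail_step_id
     @on_fail_step_id = 4
GO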
Is it possible to copy a job onto the same server?
Is there any way that I can script out all the jobs at once (instead of doing them one by one)? I have more than 100 jobs to transfer and was just wondering if there were a way to do them all at once.
Is there an easy way to script the tables from within a database to include the indexes etc.? Basically, if I want to script a number of tables, I can right-click on them and generate the script that way. What I'd like to do is this same process in T-SQL if possible so I can build an SP which I can then run.

It's really just a discussion point. Say we have a number of very similar databases whereby there is a database per client and the structure is almost the same. This was done for simplicity (rightly or wrongly) and there is a fair amount of data. When we take on a new client, we would need to take a copy of the structure and create a new database with the tables from a known 'base'.

We would likely need a table to hold the names of the tables to include from each database. We would then create an SP passing the name of the database to copy from and one to copy to. This would then create a script we can then automatically run to create a new database. Could this be done in an SP where you pass these parameters, and it creates another SP with the script and then runs it?

I've simplified some of this, so I hope it makes sense. It's not something I would have normally done, but I'm curious how this problem would be approached. Often people would take a script (or use a saved one) and run that after manually creating the database.

Thanks in advance
Ryan
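There is no built-in T-SQL statement that emits full CREATE TABLE scripts with indexes (EM/SSMS and DMO/SMO do that part), but as a rough sketch of the copy-structure idea in pure T-SQL, SELECT ... INTO with a false predicate copies a table's columns, though not its indexes, constraints or defaults, into another database. Database and table names here are hypothetical:

-- Copy just the column structure (no rows, no indexes/constraints) of one table
-- from a 'base' database into a new client database
SELECT *
INTO   NewClientDb.dbo.Clients          -- hypothetical target database/table
FROM   BaseDb.dbo.Clients               -- hypothetical source database/table
WHERE  1 = 0

Indexes and constraints would still have to be scripted separately (for example with EM's Generate SQL Script), which is probably why most people keep a saved script of the base structure and run it after creating the new database.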
The backup jobs on one of our SQL Servers have started hanging. The job appears to be complete because the backups are on the disk, but the job never shows as completed when viewed in the activity manager. sqlmaint.exe had numerous instances running all night long, but the job never completes.
SQL Server 2000 Standard Edition SP3a, Windows Server 2000.
The jobs on this server have run successfully for well over a year and only recently started hanging. We've done everything short of rebooting the server. We've restarted the Agent and shut down sqlmaint.exe, but alas nothing we do lets the job complete.
Other jobs that are not backup-related are working as they should.
There is plenty of disk space.
Any ideas?
Thanks in advance, because we are lost as to the cause and resolution.
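One thing that sometimes helps narrow a hang like this down (a diagnostic sketch, not a fix) is checking whether the backup SPIDs are still doing anything or just sitting there:

-- Show any sessions still running a backup or launched by sqlmaint (SQL Server 2000)
SELECT spid, status, blocked, waittype, lastwaittype, cmd, program_name, cpu, physical_io
FROM   master.dbo.sysprocesses
WHERE  cmd LIKE 'BACKUP%'
   OR  program_name LIKE 'SQLMAINT%'

If cpu and physical_io stop increasing between runs of the query, the SPID is genuinely stuck rather than just slow.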
I am looking for a way to capture only the database structure for my team's development server. I know that I can generate the SQL script manually, but I really need to capture this information nightly and I don't have sufficient tape space to do full backups. I worked out a method for scheduling the copying of only the structure of the databases into a new database and then scheduling backups of the empty databases (this required creating and dropping the databases that would only hold structure), but I'm hoping someone has an idea about just scheduling the script generation (without my having to write an .exe that uses DMO).
We currently use the backup scripts from Ola Hallengren. The documentation says full (non-copy-only) and differential backups are performed on the primary replica, and full (copy-only) backups and transaction log backups are performed on the preferred replica.
We currently do a FULL (COPY_ONLY) backup every day and log backups every 15 minutes. Is there any performance benefit to running the FULL (non-copy-only) backup on the preferred replica?
Also, all these databases are running in the 'Simple' recovery model, so obviously I need to change this to 'Full' to enable incremental and log backups - is this possible using the same script?
Hope someone can help as the thought of doing all of this individually for each database scares me silly!!! :)
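On the recovery model part, switching a database from simple to full is a one-liner, but note (hedging, since it depends on your setup) that the log backup chain only starts after the next full backup, so log backups taken before that point will fail. A sketch with a hypothetical database name:

-- Switch to the full recovery model so log backups become possible
ALTER DATABASE [YourDatabase] SET RECOVERY FULL
GO
-- A full backup is needed afterwards to start the log backup chain
BACKUP DATABASE [YourDatabase] TO DISK = N'D:\Backups\YourDatabase.bak' WITH INIT
GO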
We have no certified SQL DBAs, mainly because 95% of our production data is on Oracle and DB2 databases.
One of our Oracle DBAs is trying to write scripts that can be run by the AS400 robot to export data and back up the databases. I (being the only person with any SQL experience) have been asked to ensure that the scripts she wrote will not only work but also not bring down the server.
Can anyone point me to the correct place (web, book, etc.) to find out how this is done? In years past my SQL servers have been stand-alone, dedicated units, with database backups scheduled through the SQL manager and tape backups handled by the sysadmin.
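As a starting point (just a sketch, with hypothetical database and path names), the kind of statement her scripts would need to issue is a plain BACKUP DATABASE, which any external scheduler can fire through osql or sqlcmd without touching SQL Server Agent:

-- backup_one_db.sql : run as  osql -S MYSERVER -E -i backup_one_db.sql
BACKUP DATABASE [Payroll]                          -- hypothetical database
TO DISK = N'E:\Backups\Payroll.bak'                -- hypothetical path
WITH INIT, STATS = 10                              -- overwrite the previous file, report progress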
I need a backup script to take backups of all the databases. We have a maintenance plan, but our database names are up to 98 characters long, and when we take the backups through the maintenance plan it appends the date and time zone information while storing the backup history, pushing the length past 128 characters, so the information is not written to msdb.
So we want to take the backups using a script, and it has to create a subfolder for each database. Also, if any of the databases fails, it should continue with the others.
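A rough sketch of that kind of script, assuming SQL Server 2005 or later (it relies on xp_create_subdir and TRY/CATCH) and a hypothetical root folder; each database gets its own subfolder, and a failure on one database just prints a message and moves on to the next:

DECLARE @root nvarchar(260), @db sysname, @dir nvarchar(400), @sql nvarchar(max)
SET @root = N'E:\Backups\'                         -- hypothetical root folder

DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE  name <> 'tempdb' AND state_desc = 'ONLINE'
OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @dir = @root + @db
        -- Create the per-database subfolder (no-op if it already exists)
        EXEC master.dbo.xp_create_subdir @dir

        SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
                 + N' TO DISK = N''' + @dir + N'\' + @db + N'_'
                 + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak'' WITH INIT'
        EXEC (@sql)
    END TRY
    BEGIN CATCH
        -- Log the failure and carry on with the next database
        PRINT 'Backup failed for ' + @db + ': ' + ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs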
I am working towards automating the process of testing our backups. For the meantime I do it all manually - I copy the backup files (full + transaction logs) to our test server and then run the restore script. Once the database is restored I run DBCC CHECKDB. The results of CHECKDB I manually upload to our SharePoint portal as proof that the backup file is intact with no errors.
Here are some ideas I have but have not yet tested:
Create a maintenance plan with three jobs:
--> PowerShell script to copy the files from the Prod server to the Test server - add this script to Job 1
--> PowerShell script to restore the database files - add this script to Job 2
--> Run DBCC in PowerShell (yet to find out whether this is possible in PS) - add this script to Job 3
I would like to use separate jobs so that I get a report on the duration and status of each job.
I would also like to get the results of DBCC CHECKDB as proof that no errors were found, for upload to our SharePoint portal. I don't know if that is possible via the job.
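On getting proof out of CHECKDB without parsing job output, one sketch (table, database and schema names are hypothetical) is to run it with NO_INFOMSGS inside TRY/CATCH and log the outcome to a table, which a job step or PowerShell call can then export for the SharePoint upload:

-- One-time setup: a log table to hold the outcome of each integrity check
CREATE TABLE dbo.IntegrityCheckLog (
    CheckDate    datetime       NOT NULL DEFAULT GETDATE(),
    DatabaseName sysname        NOT NULL,
    Outcome      nvarchar(20)   NOT NULL,
    ErrorMessage nvarchar(4000) NULL
)
GO
BEGIN TRY
    -- NO_INFOMSGS suppresses the informational rows; any corruption raises an error instead
    DBCC CHECKDB (N'RestoredDb') WITH NO_INFOMSGS
    INSERT dbo.IntegrityCheckLog (DatabaseName, Outcome)
    VALUES (N'RestoredDb', N'CLEAN')
END TRY
BEGIN CATCH
    INSERT dbo.IntegrityCheckLog (DatabaseName, Outcome, ErrorMessage)
    VALUES (N'RestoredDb', N'ERRORS FOUND', ERROR_MESSAGE())
END CATCH
GO

The Agent job history then gives the duration and status of each step, and this table gives the evidence that no errors were found.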
A scheduled backup job which usually completes in 1 to 2 minutes has now been running for over three hours. Meanwhile, the transaction log backup job executed and it is hanging also.
I stopped the jobs, which now show a last run status of 'cancelled'. But the SPIDs still exist in the 'current activity' pane with status 'runnable'. No locks or blocks!
Is there any way I can get rid of these SPIDs so that I can start the backup job again?
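A small diagnostic sketch for those leftover SPIDs (a backup SPID that is rolling back cannot simply be removed, so this mostly tells you what it is doing):

-- See what the orphaned SPIDs are actually doing
SELECT spid, status, blocked, waittype, lastwaittype, cmd, open_tran
FROM   master.dbo.sysprocesses
WHERE  cmd LIKE 'BACKUP%' OR cmd LIKE 'KILLED%'

-- If a KILL was issued, this reports how far any rollback has progressed (replace 65 with the actual SPID)
KILL 65 WITH STATUSONLY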
I have 3 separate jobs to back up my database: one for a full backup, one for differentials, and the third for the transaction log. Each is on its own schedule. The problem I have is that sometimes the transaction log job will try to start while one of the other two is in process. When this happens, an error is generated and I get paged (which is very annoying at midnight). Is there any way to block the transaction log job until the other two are complete? It should still run, just not at the same time.
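One sketch for serialising them without merging the jobs: add a first step to the transaction log job that simply waits while any backup is still running, for example (SQL 2000 flavour; the polling interval is arbitrary):

-- Step 1 of the log backup job: wait until no other backup is in progress
WHILE EXISTS (SELECT 1 FROM master.dbo.sysprocesses WHERE cmd LIKE 'BACKUP%')
    WAITFOR DELAY '00:00:30'     -- re-check every 30 seconds
-- Step 2 then runs the actual BACKUP LOG command

The log backup still runs every time, just delayed until the full or differential backup has finished.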
I have a DB Maintenance plan created that performs optimizations, integrity checks, and a database backup. The optimizations and integrity checks work fine, but the backup job fails. The backup job is to write the .bak file to a network share. When I change the owner of the job to "sqluser", the job fails at step 0 with the following error: The job failed. Unable to determine if the owner (rsn\sqluser) of job DB Backup Job for DB Maintenance Plan 'Online Database Server Maintenance Plan' has server access (reason: Could not obtain information about Windows NT group/user 'rsn\sqluser'. [SQLSTATE 42000] (Error 8198)). I checked with the system admin for this server and sqluser has privileges to do everything, including write to the network share.
So, now I change the owner of the same maintenance job from sqluser to "sa". I now get this error: Executed as user: RSN\sqluser. sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
Checked the settings for MSSQLSERVER and SQLSERVERAGENT in Control Panel. Everything seems to be in order according to the other threads I have read.
For grins, I tried to ensure the sqluser password on the SQL Server Agent was correct. When I try to enter the new password I get this: "SQL Server Agent startup account could not be verified". I click Yes (it asks if I want to continue anyway). Is the password not being registered properly in the agent?
One other thing: the sqluser user is listed under Security | Logins as connecting with Windows Authentication. We have several other servers that are set up the same way and work fine. What am I missing? I'm pulling my hair out! :eek:
Hi guys, I can't understand what's going on on the server. I have created two jobs to take full backups of my databases. One is running perfectly, but the other is giving me the following error.
Can anyone tell me where I am making a mistake?
Message: Executed as user: Domain\user. Cannot open backup device 'Database_Full(\\192.0.0.1\Backups\FULL_Backups\database_Full.BAK)'. Operating system error 53 (error not found). [SQLSTATE 42000] (Error 3201) BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
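Operating system error 53 generally means the network path could not be found by the account the job runs under. A quick hedged check (assuming xp_cmdshell is enabled and the path below is the real share) is to list the share from inside SQL Server, so the test runs under the same service account as the backup:

-- Run a directory listing as the SQL Server service account to verify the share is reachable
EXEC master.dbo.xp_cmdshell 'dir \\192.0.0.1\Backups\FULL_Backups'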
Our sysadmin accidentally uninstalled SQL Server, started to panic, and reinstalled it. Thankfully the data and transaction log files for our important databases were still present and I simply reattached them, but our DTS packages are gone.
However, we've done weekly backups of the msdb database. How do I get the DTS jobs out of these backups?
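A common way to dig DTS packages out of an msdb backup (a sketch, assuming SQL Server 2000; file paths and the restored database name are hypothetical) is to restore the backup side by side under a different name and copy the rows from its sysdtspackages table back into the live msdb; the packages then reappear under Data Transformation Services in EM:

-- Restore the old msdb backup under a new name
RESTORE DATABASE msdb_old
FROM DISK = N'D:\Backups\msdb_weekly.bak'                 -- hypothetical backup file
WITH MOVE 'MSDBData' TO N'D:\Data\msdb_old.mdf',          -- logical names may differ; check with RESTORE FILELISTONLY
     MOVE 'MSDBLog'  TO N'D:\Data\msdb_old.ldf'
GO
-- Copy the DTS package definitions into the live msdb
INSERT INTO msdb.dbo.sysdtspackages
SELECT *
FROM   msdb_old.dbo.sysdtspackages p
WHERE  NOT EXISTS (SELECT 1 FROM msdb.dbo.sysdtspackages m
                   WHERE m.id = p.id AND m.versionid = p.versionid)
GO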
This Maintenance Cleanup Task is set to delete all BAK and TRN files it has made (in separate maintenance plans) in the given path, with the given file extension, deleting files based on the age of the file at task run time - delete files older than 4 days.
Backup files going back months and months are now piling up. I shouldn't have to take care of this by hand - I've got a computer to do it for me, every time it runs its jobs, every 4 hours and once overnight.
The task is ignoring its settings and refusing to do as it's told. I've checked the settings in here over and over. It's so simple - what could be wrong? I've checked the path, the file age, the extensions... The disks are getting full!
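For comparison, the cleanup task ultimately calls xp_delete_file behind the scenes. Running the equivalent call by hand can show whether the deletion itself works outside the plan - but this is an undocumented procedure, so treat the parameters below as an assumption to verify, and note it expects the extension without a leading dot:

-- Delete .bak files under D:\Backups (hypothetical path) older than the given date, including first-level subfolders
EXECUTE master.dbo.xp_delete_file
    0,                           -- 0 = backup files (1 = maintenance report files)
    N'D:\Backups\',              -- folder
    N'bak',                      -- extension, no leading dot
    N'2008-03-17T00:00:00',      -- delete files older than this date
    1                            -- 1 = include first-level subfolders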
Hi guys, I am developing this site http://www.onlineacademicadvisor.com and having DB problems for the 3rd time in a row. Whenever the traffic on the site gets bigger, the transaction log becomes full and no user can log in. This problem is described at http://support.microsoft.com/kb/317375. From there, I got the feeling that the problem occurs if transactions are not committed and last for too long. However, I do not have any explicit transactions, just the usual SELECT, INSERT and UPDATE statements in stored procedures. I do not call COMMIT (or RETURN) explicitly at the end of my stored procedures, though. My stored procedures are short. Have you got any ideas about what could cause the problem? I really have no idea what it could be. Your help is much appreciated.
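Two quick diagnostics worth running when this happens (a sketch, with no assumptions about your schema) are to see how full each log really is and whether an old open transaction is pinning it:

-- How much of each database's transaction log is actually in use
DBCC SQLPERF (LOGSPACE)
GO
-- Oldest active transaction in the current database, if any
DBCC OPENTRAN
GO

Also worth checking: if the database is in the full recovery model and no transaction log backups are being taken, the log will keep growing until it fills, no matter how short the individual transactions are.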
Over the past few days we noticed severe performance issues on some of our more complicated queries. I ran a DBCC ShowContig on the problematic tables, and noted that the Logical Scan Fragmentation was very high, like over 90%. I ran a DBCC DBREINDEX on the tables, the Logical Scan Fragmentation reduced down to between 0% and 10%, and the queries ran instantly.
However...the next day, the queries were causing problems again. Running ShowContig showed the fragmentation was up to over 90% again. Now, these are very static tables I'm dealing with...absolutely no UPDATE, INSERT or DELETE commands have been run against them (we import the data once a month). I set up a job to monitor the state of the index fragmentation overnight. All is well until 0100, when the LSF hits 90% again. I can't figure out what could be causing this, we have no jobs that run on, or affect, this database overnight, except the backup, which runs at 2100. Has anyone experienced anything like this before, or does SQL Server do something on the fly that could cause it to happen?
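For pinning down the exact time window, a small sketch that can be scheduled at intervals overnight and whose output is easier to log and compare than the default report (the table name is hypothetical):

-- Row-style fragmentation output, one row per index level
DBCC SHOWCONTIG ('dbo.MyLargeTable') WITH TABLERESULTS, ALL_INDEXES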
Hi all, any advice or help would be greatly appreciated. I need to process dimensions and cubes overnight - what is the best and most reliable way of achieving this?
During the day I frequently watch the current activity window under Enterprise Manager to see who is doing what and when.
However, overnight there are various users running various jobs that I am not always informed about. Watching the current activity isn't an option here.
Does anyone have a job that I could run periodically overnight to perform the same function as the current activity box? Which system tables does the current activity functionality use?
Thanks. Any information that can be provided here will be appreciated.
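The current activity window in EM reads mostly from master.dbo.sysprocesses (with syslockinfo supplying the lock detail), so a sketch of an overnight snapshot job can be as simple as copying that table into a history table every few minutes (the history table name is hypothetical):

-- One-time setup: create an empty history table with the same columns plus a timestamp
SELECT GETDATE() AS capture_time, *
INTO   dbo.ActivityHistory
FROM   master.dbo.sysprocesses
WHERE  1 = 0
GO
-- Scheduled job step (e.g. every 5 minutes overnight): append the current activity
INSERT dbo.ActivityHistory
SELECT GETDATE(), *
FROM   master.dbo.sysprocesses
WHERE  spid > 50              -- skip most system SPIDs
GO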
We have an MIS system which has approx 100 reports. Each of these reports can take up to several minutes to run due to the complexity of the queries (hundreds of lines each in most cases). Each report can be run by many users, so in effect we have a slow system.

I want to separate the complex part of the queries into a process that is generated each night. Then the reports will only have to query pre-formatted data with minimal parameters, as the hard part will have been completed for the users while they are not in. Ideally we will generate (via a stored procedure possibly) a set of data for each report and hold this on the server. We can then query with simpler parameters such as by date and get the data back quite quickly.

The whole process of how we obtain the data is very complex. There are various views which gather data from the back office system. These are very complex, and when queries are run against them including other tables to bring in more data, it gets nicely complicated.

The only problem is that the users want to have access to LIVE data from the back office system, specifically the Sales team who want to access this remotely. My method only allows for data from the night before, so is there an option available to me which will allow me to do this? The queries can't be improved on an awful lot, so they will take as long as they take. The idea of running them once is the only way I can see to improve the performance in any significant way.

True, I could just let them carry on as they are and let them suffer with the performance on live data, but I'd like to do something to improve the situation for them. Any advice would be appreciated.

Thanks
Ryan
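A sketch of the nightly pre-aggregation idea, with hypothetical view and table names: the heavy query runs once in an overnight Agent job, and the reports then just filter the summary table by date.

-- Overnight job step: rebuild the report snapshot from the complex views
TRUNCATE TABLE Reporting.dbo.SalesSummary                 -- hypothetical summary table
INSERT INTO Reporting.dbo.SalesSummary (ClientId, TradeDate, TotalValue, TradeCount)
SELECT ClientId, TradeDate, SUM(Value), COUNT(*)
FROM   dbo.vw_ComplexSalesView                            -- hypothetical complex view
GROUP  BY ClientId, TradeDate
GO

For the users who need genuinely live data, one common compromise is to union the summary table with the same query restricted to just today's rows from the live views, so only a small slice is computed on demand.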
I have this SP that backs up all our databases and the master. I want the backup files to be put in D:\sqlbackups. I get a return message stating that all the files were successfully backed up, but only master.bak shows up and the other database files are nowhere to be found?!?
(Processed 2000 pages for database 'master', file 'master' on file 1.
Processed 1 pages for database 'master', file 'mastlog' on file 1.
BACKUP DATABASE successfully processed 2001 pages in 0.121 seconds (135.413 MB/sec).
Processed 828272 pages for database 'MARS_SYS', file 'MARS_SYS_Data' on file 1.
Processed 1 pages for database 'MARS_SYS', file 'MARS_SYS_Log' on file 1.
BACKUP DATABASE successfully processed 828273 pages in 84.552 seconds (80.248 MB/sec).
Processed 148736 pages for database 'MSystem', file 'MSystem_Data' on file 1.
Processed 1 pages for database 'MSystem', file 'MSystem_Log' on file 1.)
Here is the SP:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
I'm trying to set up a backup plan for a number of databases. I initially set up one plan to include all user databases, which worked fine - or so I thought. When I checked a few days later I noticed that some of the databases were not appearing in the backup set. The only way I could get them to appear was to set the compatibility level to 90, but then certain applications throw an error when we run them; when I return the compatibility level to 70, the applications work fine. Is there a reason I cannot back up a database on SQL 2005 without it being at compatibility level 90?
Hi - I use the Windows scheduler to execute an SP for my SQL Express instance on a server. The SP loops through the SQL Express databases and makes a backup - however, it will only back up the attached databases. Is there any way I can modify it to back up the databases in my various web sites, within the App_Data folder? These aren't attached, but I think they get attached when the web application needs them and detached afterwards - so they are never permanently attached! Thanks for any help, Mark
The SP I use is:
/**
File Name: BackupExpress.sql
Description: Backs up all databases. This script is mainly meant for SQL Express instances.
The script requires a C:\Backups directory by default to backup to, but this can be changed with the @OutputPath variable.
Accompanying file is BackupExpress.cmd, which is used to schedule the script.
Author: Brian Knight
SELECT @dayofweek = CASE datepart(dw, getdate())
    WHEN 1 THEN 'Sunday'
    WHEN 2 THEN 'Monday'
    WHEN 3 THEN 'Tuesday'
    WHEN 4 THEN 'Wednesday'
    WHEN 5 THEN 'Thursday'
    WHEN 6 THEN 'Friday'
    WHEN 7 THEN 'Saturday'
END
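On the detached App_Data databases: a database has to be attached, at least briefly, for BACKUP DATABASE to see it, so one hedged sketch is to attach each MDF, back it up, then detach again. Paths and names below are hypothetical, and this assumes the sites use the main Express instance (not user instances) and that the web application is not holding the file open at the time:

-- Attach the site's database file briefly
EXEC sp_attach_single_file_db
     @dbname   = N'MySiteDb',
     @physname = N'C:\Sites\MySite\App_Data\MySiteDb.mdf'
GO
-- Back it up like any other attached database
BACKUP DATABASE [MySiteDb] TO DISK = N'C:\Backups\MySiteDb.bak' WITH INIT
GO
-- Detach again so the web application can reclaim the file
EXEC sp_detach_db @dbname = N'MySiteDb'
GO

An alternative, since the files are detached most of the time anyway, is simply to copy the .mdf files themselves as the backup.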
I need to back up about 50 databases that are currently sitting on MARS. Every Saturday they will be backed up so that they can be moved to HOMER.
Please help, as I don't know where to start.
BTW, I have also set up the SQL maintenance wizard backup procedure, which copies to the local server, but I want to be able to move the database backup files to HOMER (another server) so that eventually they are backed up to tape.
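Since this is essentially the looping-backup problem from the first post, a minimal sketch is the same cursor over master.dbo.sysdatabases with the target pointed at a share on HOMER (the share name is hypothetical, and the SQL Server service account on MARS needs write access to it), scheduled as a Saturday Agent job:

-- Issued once per database by the loop, or repeated for each of the ~50 databases
BACKUP DATABASE [SomeClientDb]                            -- hypothetical database name
TO DISK = N'\\HOMER\SQLBackups\SomeClientDb.bak'
WITH INIT

The .bak files then sit on HOMER ready for the tape backup to pick up.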