Recovery :: Maintenance Task To Copy Bak File To Secondary Drive After Backup Is Complete?
Nov 6, 2015
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox that handles file system operations like this, but it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.
An alternate approach would be to run the backup again after the main backup, with the destination changed to the alternate location. But I was thinking that another backup job would probably put more overhead on the server than a simple file copy operation. If I do end up taking this approach, I could also use the cleanup task to toss older .bak files in the alternate directory.
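For what it's worth, the kind of thing I'm picturing is an extra job step (or an Execute T-SQL Statement task) that shells out to a plain file copy once the backup step succeeds; the paths are placeholders and this assumes xp_cmdshell is enabled:

-- Hypothetical follow-on step after the backup task; paths are placeholders
EXEC master.dbo.xp_cmdshell 'copy /Y "D:\Backups\MyDatabase_Full.bak" "E:\BackupCopies"';

If xp_cmdshell is off limits, the same copy could presumably live in an Operating System (CmdExec) job step appended to the backup job.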
There is a lot of documentation concerning data(base) backups, but I could not find information about how to back up a complete SQL Server instance (data + configuration) or the SQL Server configuration (= everything that is not user payload). Do you always re-configure your system after restoring, especially because you will have forgotten some settings? I found the following things necessary to back up for configuration:
Server-wide: security (logins, roles, ...), server objects, server properties (e.g. collation, connection or memory settings; see SSMS "Server - Properties").
Per database: database settings (SSMS - database - properties), schema.
1. Is there anything missing? In case of a severe system crash I would like to be able to recover the system using a full system image of the virtual machine. However, I only create these images after significant changes (e.g. a Windows service pack). The SQL Server should be backed up independently of these system images.
2. Even if I use a SQL Server maintenance plan, choose "back up all databases", and additionally back up the Resource database manually, I think some configuration is still missing, right? Where are the security configuration and the server properties saved? In my system databases there are some tables, but there is very little content in them, so the data behind the INFORMATION_SCHEMA.* and sys.* views is obviously not saved there. "SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database." URL.... What does that mean: are all DB/table schemas, logins, ... saved in the Resource database?
3. Does a usual SQL Server full database backup also contain all settings concerning this database (database properties, logins, ...)?
4. Is there at least a way to back up ONLY the configuration (server-wide and per database) without data? The only tool I could find is DACPAC, which exports a database's schema and some other configuration, but the database properties, for example, are not included: "By default, the database created during the deployment will have the default settings from the CREATE DATABASE statement. The exception is that the database collation and compatibility level are set to the values from the source database." URL..
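For what it's worth, the kind of thing I had in mind for capturing at least the server-wide pieces is plain catalog-view queries like these (only a partial picture; all the names are standard system views, nothing custom):

-- Server-wide settings (sp_configure values currently in use)
SELECT name, value_in_use FROM sys.configurations ORDER BY name;

-- Logins and server roles
SELECT name, type_desc, default_database_name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G', 'R');

-- Per-database options (collation, recovery model, compatibility level)
SELECT name, collation_name, recovery_model_desc, compatibility_level
FROM sys.databases;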
We have an encrypted drive (it can be mounted and dismounted; a third-party tool encrypts the drive path). I want to store the secondary file on that encrypted drive path. The secondary file stores confidential information. I separated the table from the primary file into the secondary file. Per-column encryption is not advisable for that table, so we decided to separate the table and put it on a secondary filegroup. The physical file is stored in the mounted drive path.
I can read and write in that mounted drive path. I can also read and write when the drive is unmounted (and I believe the reads and writes really are being done). When the drive is unmounted, the physical secondary file (.ndf) is not visible to any user logging in to the server itself (this is actually the goal of this encrypted drive setup). It is kept virtually somewhere on the machine. To mount it back, a password is needed.
I'm a bit confused; can somebody advise or give their insight on this setup? I believe that when the drive is dismounted, SQL Server stores the transactions in cache until it finds that the drive has been mounted back. This would mean that none of those transactions are committed yet. When the drive is mounted back, I think SQL Server is smart enough to detect that the drive is physically present and will flush all the pending transactions from the cache to the hard drive.
Is my assumption correct? Is there anything I need to know about transactions, commits, and how data is flushed to the hard drive?
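For what it's worth, while testing I can check what SQL Server itself reports for the secondary file while the drive is dismounted, with something like this (the database name is a placeholder):

-- State of this database's files as SQL Server sees them
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'MyConfidentialDb');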
Hi, I have created a SQL Server 2005 maintenance plan for a daily backup. The plan has two 'Back Up Database' tasks: one backs up to the local drive and the second to a network drive. When the plan executes, a backup is created on the local drive but not on the network drive. If I check the log, it says "Access Denied", whereas I have full access to the network drive with complete permissions to read, write and delete. Can anyone help me understand how to take a backup on both a local and a network drive at the same time using a maintenance plan? I shall be obliged... Regards...
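If it helps, this is the kind of ad-hoc test I was going to run from a query window to see whether the SQL Server service account itself can write to the share (the share name is a placeholder):

-- Ad-hoc test backup straight to the UNC path rather than a mapped drive
BACKUP DATABASE [MyDatabase]
TO DISK = N'\\FileServer\SQLBackups\MyDatabase_test.bak'
WITH INIT, STATS = 10;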
We're new users of SQL Server 2005. I created two maintenance plans: one to back up the database at 2 AM daily and one to back up transaction logs every 30 minutes. These maintenance plans write to a local disk. What we want to do, within the maintenance plan, is copy the files to a remote server as soon as they are written. Is that possible?
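What I had in mind, if it is possible at all, is an Execute T-SQL Statement task at the end of the plan that shells out to robocopy; this is only a sketch and assumes xp_cmdshell is enabled (paths are placeholders):

-- Copy any new .bak/.trn files from the local backup folder to the remote server
EXEC master.dbo.xp_cmdshell 'robocopy "D:\SQLBackups" "\\RemoteServer\SQLBackups" *.bak *.trn /Z /R:2 /W:5';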
Executing the FTP Task: the execution starts and after 3 or more minutes it stops with the red X but with no errors, and the file is not transferred. I use the same entries in the FTP connection manager as I do for Dreamweaver... The variable that I created for the file on the site is FileName1, the site directory tree is ..., the local path is ..., and the File Transfer page is filled in like this: ... After the execution stops I get ..., and the file was not transferred. The same also happens when I try to specify the variable expression.
I have to perform disk maintenance on the current drive, drive D, which holds the SQL data (.mdf file), and I have added a new drive, drive E. (Drive C holds the program files for SQL Server 2008 R2.) What is the correct process to transfer the SQL data (.mdf file) from drive D to drive E and later remove drive D from the server?
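The sequence I have pieced together so far is below; please correct me if it is wrong. The database name and logical file name are placeholders:

-- 1. Point the catalog at the new location on drive E
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = MyDatabase_Data, FILENAME = 'E:\SQLData\MyDatabase.mdf');

-- 2. Take the database offline
ALTER DATABASE [MyDatabase] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- 3. Copy the .mdf from D: to E: in Windows, then bring the database back online
ALTER DATABASE [MyDatabase] SET ONLINE;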
I am able to run SSIS packages as SQL Server Agent jobs with a Control Flow "File System Task" if I move a file (test.txt) from a drive (C:) on the server (where the SQL Agent jobs run) to a subdirectory on the same drive. But if I try to move a file on a network drive, the package fails.
Has anyone been able to successfully delete old backup files (*.bak) and tran logs (*.trn) TOGETHER using the Maintenance Cleanup Task in SQL 2005 SP2?
This is the Transact-SQL running in the background: EXECUTE master.dbo.xp_delete_file 0,N'F:\MSSQL.2\MSSQL\Backup\ibmdir',N'"bak" & "trn"',N'2007-03-26T22:21:14',1
This does not work.
It works if I try to delete only one of the two, i.e. only the .trn or only the .bak files.
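The workaround I'm leaning toward is simply calling it once per extension (or using two Maintenance Cleanup Tasks); a sketch with the same folder and cutoff date:

-- One call per extension instead of trying to combine them
EXECUTE master.dbo.xp_delete_file 0, N'F:\MSSQL.2\MSSQL\Backup\ibmdir', N'bak', N'2007-03-26T22:21:14', 1;
EXECUTE master.dbo.xp_delete_file 0, N'F:\MSSQL.2\MSSQL\Backup\ibmdir', N'trn', N'2007-03-26T22:21:14', 1;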
What is the method to execute backups from batch (.bat) files on the server running SQL Server? I have tried the sqlmaint command, which doesn't seem to execute, and looked into xp_sqlmaint with no luck. I'm sure the problem lies in my lack of DOS batch programming skills. If anyone has an example of a batch file that executes a backup, would you mind sharing? Thanks.
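In case it helps anyone answer, the sort of one-line batch file I've been trying to get working just hands a BACKUP statement to sqlcmd (server, database and path are placeholders; on older versions osql takes the same -S/-E/-Q switches):

rem backup_mydb.bat - run via Task Scheduler; -E uses Windows authentication
sqlcmd -S MYSERVER -E -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'D:\SQLBackups\MyDatabase_Full.bak' WITH INIT, STATS = 10"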
We have set up log shipping between a primary and a secondary DB. The secondary DB is currently in the Standby/Read-Only option. I cannot take a backup of the secondary DB now.
Shall we disable log shipping, change the DB option to multi-user mode and take the backup? Or is there a different method, without disabling log shipping?
Historically I've always written a VB script to copy a file from a SharePoint library. I don't like this method because I have to put a username and password in the script and maintain a config file.
Yesterday I was playing around with using a File System Task. The SharePoint file has a UNC path, so why not? I created a simple test package with a single File System Task that copies the SharePoint file (addressed via UNC) to another network location. The package runs fine locally.
When I try running it on our utility server I get a "The file name [SHAREPOINT UNC PATH] specified in the connection was not valid" error. The package is running with a proxy on the server, and the proxy account has the same permissions to the SharePoint site (as far as I can tell) as I do.
I have a service (Windows service that is) that's set to automatic startup. The service makes use of one or more SQL server databases and is configured with a startup dependency on SQL Server.
Problem is, the service control manager considers that startup dependency to be satisfied once the SQL server service process successfully starts, but connections from within my service to my databases won't actually succeed until recovery on at least those databases is complete.
Of course, I can do something mundane like retry failed connections a few times, or ensure that I don't attempt to connect to a database until some hard-wired amount of system uptime has passed, but those solutions are just band-aids. The Right Answer, it seems to me, would be to detect when SQL Server has finished recovery and not attempt to connect to my databases before that time.
So, is there any way to determine programmatically whether SQL Server has completed recovery? On a server-wide basis or on an individual database basis? I'm assuming that my service would need to connect to SQL Server with some high-powered credentials (sa or equivalent) to get that information, but that's not a problem in my application.
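For completeness, the kind of check I'm imagining the service could poll is just the database state, something like the following (the database name is a placeholder):

-- Returns 'ONLINE' once recovery has finished for this database
SELECT DATABASEPROPERTYEX(N'MyServiceDb', 'Status') AS db_status;

-- Or, server-wide: anything still recovering shows up here
SELECT name, state_desc FROM sys.databases WHERE state_desc <> 'ONLINE';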
I have the following issue with maintenance plan backups that write BAK, DIF and TRN files to a remote server share. When I try to remove the old files with a cleanup task, I get an error and the files don't get deleted.
The version is as follows Microsoft SQL Server 2005 - 9.00.3054.00 (X64) Mar 23 2007 18:41:50 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
The error result is as follows,
Failed (-1073548784). Executing the query "EXECUTE master.dbo.xp_delete_file 0,N'\\ABCD-A1\BACKUPS\ABCD_BACKUP\ABC_DAILY\ABCD',N'trn',N'2008-01-13T12:52:49'" failed with the following error: "xp_delete_file() returned error 2, 'The system cannot find the file specified.'". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
The maintenance plan seems to be adding extra "\" characters, though when I enter the code directly in a query window I get the same error.
xp_delete_file() returned error 2, 'The system cannot find the file specified.'
The servers belong to the same domain and use the same service account, which has all the necessary rights to the share and the file directory location. The backups work, but I get the error on the cleanup task.
I am trying to figure out how to get the cleanup task to delete old files. The same thing happens for all file extensions, and I have tried other locations with simpler file paths - same error.
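To rule out the path itself I was also going to check what the instance can actually see on the share, using the undocumented xp_fileexist procedure (so this is just a diagnostic sketch):

-- Reports whether the instance can see the target folder
EXEC master.dbo.xp_fileexist N'\\ABCD-A1\BACKUPS\ABCD_BACKUP\ABC_DAILY\ABCD';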
We have a SQL AG configured with 3 servers. One server is in the Azure cloud and 2 are on-premises. The one in Azure is the primary copy; the copies on-premises are secondaries. The databases are more than 6 TB in total. We have already synced one copy (one SQL Server) on-premises with the Azure DB. The connection was over a VPN tunnel, so it took almost 8-10 days, with some failures in between. Now, can I sync my other SQL 2012 server on-premises from the secondary copy?
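What I'm hoping is possible is to seed the new replica from a (copy-only) full backup taken on the existing on-prem secondary instead of pulling everything over the VPN again. My rough understanding of the steps on the new server, once the replica has been added to the AG on the primary, is something like this (names and paths are placeholders):

-- Restore a full backup taken on the existing on-prem secondary, without recovery
RESTORE DATABASE [BigDb]
FROM DISK = N'\\OnPremSecondary\Backups\BigDb_Full.bak'
WITH NORECOVERY;
-- ...restore subsequent log backups WITH NORECOVERY, then join the availability group
ALTER DATABASE [BigDb] SET HADR AVAILABILITY GROUP = [MyAG];

Is that a supported way to do it, or does the seeding have to come from the primary?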
On the SQL Server, the Event Viewer shows the same messages and errors every evening between 22:05:00 and 22:08:00. The following informational messages are shown for every database:
"I/O is frozen on database <database name>. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup."
"I/O was resumed on database <database name>. No user action is required."
"Database backed up. Database: <database name>, creation date(time): 2003/04/08(09:13:36), pages dumped: 306, first LSN: 44:148:37, last LSN: 44:165:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{A79410F7-4AC5-47CE-9E9B-F91660F1072B}4'}). This is an informational message only. No user action is required."
After the 3 messages the following error message is shown for every database:
"BACKUP failed to complete the command BACKUP LOG <database name>. Check the backup application log for detailed messages."
I have added a maintenance plan, but those jobs run after 02:00:00 at night.
Where can I find the command or setup that backs up all databases and log files at 22:00:00 in the evening?
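I suspect the TYPE=VIRTUAL_DEVICE part means something outside SQL Server (e.g. a VSS-based backup tool) is involved, so to see what is performing these 22:00 backups I was going to look at the backup history in msdb, e.g.:

-- Recent backups with the device and account that ran them
SELECT bs.database_name, bs.type, bs.backup_start_date, bs.user_name,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
WHERE bs.backup_start_date >= DATEADD(DAY, -1, GETDATE())
ORDER BY bs.backup_start_date DESC;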
Hi, I am using the isp_backup stored procedure to take the daily backup of the database, and this works fine, but I have to copy the backup file to a network share folder using the following cmd: EXEC master..xp_cmdshell 'D:\filename M:\foldernam'. M: is a network folder on a Windows 2003 box, but nothing gets copied to this folder.
We have an AAG environment. For failover to be transparent, we have to ensure that any login added on the primary node is also added on the secondary node. Currently we are adding the logins to the secondary node manually. Is there a way to automate this process so that a login added on the primary node is automatically created on the secondary node?
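As a stop-gap, would a scheduled job on the secondary that at least reports the gap make sense? Something like the following, assuming a linked server to the primary (the linked server name PRIMARYNODE is made up):

-- Logins that exist on the primary but not on this secondary
SELECT p.name, p.type_desc
FROM [PRIMARYNODE].master.sys.server_principals AS p
WHERE p.type IN ('S', 'U', 'G')
  AND p.name NOT IN (SELECT name FROM sys.server_principals);

Actually creating the logins with matching SIDs and password hashes is the part I would still have to script.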
We have a valid full backup of a database. We know it is valid: we have restored it twice from the network with no problems. But we do not have access to the network location from our sandbox environment.
The .bak file is sizable, at about 9 GB. The .bak file resides on our internal network, on a SAN drive. No problems there. When we copy the file from there into the sandbox environment to attempt the restore, it gets corrupted. We've tried three times and all three times it got corrupted. The first time we copied the file over to the sandbox, the restore got to 50% and failed. The second time we copied the file and attempted the restore again, it failed at 70%.
The third time it failed at 60%. The error message we get during the restore is "...Read on ... failed: 13(The data is invalid) Msg 3013, Level 16, State 1, Line 1 Restore database is terminating abnormally."
Now some background. To move the file, our network team does this: they have a .vmdk file that they mount in our production environment (which has access to the network location where the .bak file is), copy the file onto that drive, then move the .vmdk file into the sandbox environment (which does not have access to the network location), mount the drive in the sandbox environment, and then I have access to the .bak file from within the sandbox environment.
Something in the process of using the .vmdk file to move the .bak file from production into the sandbox is causing the file to get corrupted.
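One sanity check I plan to run on the copied file in the sandbox before attempting another restore (the path is a placeholder):

-- Quick validity check of the copied backup file
RESTORE VERIFYONLY
FROM DISK = N'E:\Staging\MyDatabase_Full.bak';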
What is the syntax for using xp_cmdshell to copy a file from one server to another? Our report server is restored from our production server, and I want to copy the .dat file from the production server to a folder on the report server.
But I am running this command as the sa user... What kind of permission is missing to execute this copy? When I execute the same command to copy the backup from the server to itself, it works fine! Does someone have an idea to solve this problem?
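For reference, the shape of the command I'm running is roughly the following (the paths and the admin share are placeholders, not my real ones):

-- Copy the .dat file from the production server to a share on the report server
EXEC master.dbo.xp_cmdshell 'copy /Y "D:\Exports\report.dat" "\\ReportServer\D$\Imports"';

My guess is that the copy actually runs under the SQL Server service account rather than my sa login, so it would be that account that needs rights on the target share - but I'd like confirmation.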
I would like to copy the SQL Server Express database .mdf and .ldf files for backup. Is this OK? Autoclose = true and recovery model = simple.
Must I detach the database before copying the two files, or can I copy the two files without detaching, at any time? Even when connections are open (including remote connections)? Can I copy at any time, even when transactions are active?
I would like to write a copy program which copies the two files every 30 minutes. Only 30 minutes of work could be lost.
This would be enough for me, and I wouldn't have to care about the BACKUP and RESTORE stuff. In the past I used BACKUP, and when I needed that backup it did not work; it returned some error message.
Is copying OK? When is it possible? At any time, or must all transactions be committed? Must all connections (remote ones too) be closed? Must the database be detached?
Is this enough to have a valid backup? Restoring would simply be attaching the .mdf file.
Or must I use the BACKUP and RESTORE stuff? Why? If so, what is the AUTO_CLOSE property there for?
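For comparison, the BACKUP route I would be replacing is just a single statement like the one below (database name and path are placeholders), so if copying the files turns out to be unsafe I could schedule this every 30 minutes instead:

-- Simple full backup to a file
BACKUP DATABASE [MyExpressDb]
TO DISK = N'D:\Backups\MyExpressDb.bak'
WITH INIT;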
As per our client's requirement, we want synchronization from the primary replica to the secondary replica to happen after 20 minutes. Is this possible with AlwaysOn Availability Groups in SQL Server 2012?
I noticed that after a SQL AlwaysOn failover, one of the databases on the secondary replica is stuck in the Restoring state. The primary replica shows that it is in a synchronized state. These are the error logs from SSMS. How do I trace the cause of the problem?
Error: 5901, Severity: 16, State: 1. Nonqualified transactions are being rolled back in database for an AlwaysOn Availability Groups state change. Estimated rollback completion: 0%. This is an informational message only. No user action is required Error: 18400, Severity: 16, State: 1.
One or more recovery units belonging to database failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
The background checkpoint thread has encountered an unrecoverable error. The checkpoint process is terminating so that the thread can clean up its resources. This is an informational message only. No user action is required.
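Would the per-database replica state DMV be the right place to start? Something along these lines:

-- Per-database synchronization state on this replica
SELECT DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc,
       drs.is_local
FROM sys.dm_hadr_database_replica_states AS drs;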
I recently configured a SQL Server 2012 AlwaysOn availability group using two nodes: a primary and one secondary read-only replica. The group resides on a Windows 2012 cluster with an SMB file share as the quorum. I am able to successfully fail over through SQL and through the Windows 2012 cluster. When I look at the group dashboard on the primary server and view the operational state of each node, I notice an odd value: the secondary-role server is listed as Unknown. I also noticed that the availability replica node icons in Object Explorer display the expected icons on the primary server, but on the secondary server the primary server is shown as a server with a question mark.
Am I missing a permissions setting, or is this normal behavior?
For example:
ServerA is the primary; ServerB is the secondary.
ServerA lists the servers in Object Explorer as: ServerA (Primary), ServerB (Secondary).
ServerB lists the servers in Object Explorer as: ServerA, ServerB (Secondary).
The primary is never listed as the primary on the secondary server. Again, failovers are working properly, but I want to be sure I am not missing a setting somewhere.
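If it would help diagnose this, I can run something like the following on each server and post the output (it is just the standard AlwaysOn catalog view joined to its DMV):

-- Role and operational state of each replica as seen from the current server
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.operational_state_desc,
       ars.connected_state_desc
FROM sys.availability_replicas AS ar
JOIN sys.dm_hadr_availability_replica_states AS ars
     ON ar.replica_id = ars.replica_id;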
Currently in my environment we are using SQL Server 2012. We set up AlwaysOn with synchronous commit. Details of the existing AlwaysOn setup: one primary and two secondaries.
Primary: on-premises server. Secondary1: on-premises server. Secondary2: Azure VM. Requirement: we need to add Secondary3, a new Azure VM, to the same AG in asynchronous or synchronous mode. Or we need to create one more AG on the same DB and add the new replica as asynchronous. Are these two options possible in this scenario? My cluster environment is manual failover only, not automatic failover.
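For the first option, my understanding is that it would boil down to a statement like the one below run on the primary, followed by joining the databases on the new VM; the AG name, server name and endpoint URL are all placeholders:

-- Add the new Azure VM as an asynchronous, manual-failover replica
ALTER AVAILABILITY GROUP [MyAG]
ADD REPLICA ON N'AZUREVM3'
WITH (
    ENDPOINT_URL = N'TCP://azurevm3.mydomain.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL
);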
I am testing some maintenance-task SQL commands such as index rebuild, index reorg, update statistics and DB integrity check on a SQL Server 2014 database. This is a new non-production vendor database (data size 500 GB, log size 25 GB) which eventually will be created in production. Currently it is in the full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance tasks will take and the space needed in the log file to complete them. I would like to execute these tasks manually (the first time) to gather the duration and space information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GB. I killed the process and truncated and shrank the log file back down.
1. Do the index rebuild, index reorg, update statistics and DB integrity check commands all use the log file?
2. Does an index reorg have less impact on the log file than an index rebuild?
3. Should the log be truncated and the log file shrunk after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before them?
I have read and understand that shrinking is not good for the database (it can lead to more fragmentation and more data file growth when data is added), and I know about rebuilding indexes when fragmentation is greater than 30% and reorganizing indexes when fragmentation is between 5% and 30%.
Since this is a non-production database, maybe I should set the recovery model to simple, run the maintenance commands, and leave the database in the simple recovery model unless the vendor needs it in full recovery for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
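For reference, the fragmentation check I plan to use to drive the rebuild/reorganize decision is roughly the following (thresholds as above):

-- Indexes over the usual 5% / 30% thresholds in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
     ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 1000          -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;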
How can I copy a complete database (tables, views, stored procedures) with/in the SQL Server 2005 Management Studio? The Import/Export function only copies the data (tables). SQL Server 2000 had a nice tool for that (Import/Export Data), but how can I do that with SQL Server 2005? I can't find anything...
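The workaround I've been considering is just a backup/restore under a new name, roughly as below (database, logical file names and paths are placeholders), but I was hoping there is something built into Management Studio:

BACKUP DATABASE [SourceDb] TO DISK = N'D:\Backups\SourceDb_copy.bak' WITH INIT;

RESTORE DATABASE [SourceDb_Copy]
FROM DISK = N'D:\Backups\SourceDb_copy.bak'
WITH MOVE 'SourceDb'     TO N'D:\SQLData\SourceDb_Copy.mdf',
     MOVE 'SourceDb_log' TO N'D:\SQLData\SourceDb_Copy_log.ldf';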
We tried to configure log shipping using the script generated by the GUI, and when we executed the specific script meant for the secondary server, the database was not created and it threw the error below.
Msg 15010, Level 16, State 1, Procedure sp_add_log_shipping_secondary_database, Line 50
The database 'BUBALLO' does not exist. Supply a valid database name. To see available databases, use sys.databases.
Note: only the copy, restore and alert jobs have been created.
The account I'm using to configure log shipping is the service account under which the SQL and Agent services are running, and the folder where the data and log files are intended to be created is open to all (everybody has read/write permissions), so I believe the issue is not with permissions.
I have a SQL 2014 SP1 set of servers with two asynchronous copies of an availability group. One of the asynchronous sites is down and SQL can no longer replicate the changes. I need to understand how long SQL Server can continue this way before the secondary replica will no longer be able to catch up. I assume this is really tied to the transaction log on the primary replica, but I would like that clarified.
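I assume the things to watch in the meantime are the send queue and the log usage on the primary, e.g.:

-- Log send queue per database (KB waiting to be shipped to each secondary)
SELECT DB_NAME(drs.database_id) AS database_name,
       ar.replica_server_name,
       drs.log_send_queue_size,     -- in KB
       drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
     ON ar.replica_id = drs.replica_id;

-- Transaction log usage on the primary
DBCC SQLPERF(LOGSPACE);

My understanding is that the primary cannot truncate log records the down replica has not yet received, so the limit is effectively how far the primary's log can grow. Is that right?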