I have inherited a new SQL Server 2008 database server and cannot figure out how my user databases are being backed up. This database server is running in a VM.
All the user databases are being backed up nightly per the SQL Server log. The backups are written to a virtual disk and are kicked off by the NT AUTHORITY\SYSTEM user. I cannot see the virtual disk. A restore task does not provide any information about the last backup. I have created a new database, and it was automatically included in the next set of backups.
I have looked at the Windows Event Viewer without any luck. There are no SQL Server maintenance plans or Agent jobs that call a backup. I have also checked the Windows Task Scheduler and cannot find any task that does a backup. Could the backups be called from another server?
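For anyone digging into the same thing, a query along these lines against the standard msdb backup history tables (a sketch, nothing instance-specific) shows where each backup was written and which login ran it:

SELECT bs.database_name,
       bs.backup_start_date,
       bs.user_name,             -- login that ran the backup
       bmf.physical_device_name  -- file, device, or VDI path written to
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_start_date DESC;

If physical_device_name looks like a GUID rather than a file path, the backup came in through the virtual device interface (VDI), which is typical of VM-level or third-party backup tools rather than a job scheduled on the server itself.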
Currently there are various teams accessing the database. For costing reasons, we need to track usage. Is there an efficient way to monitor user access to the database? Can we track which user has executed which query (SELECT, INSERT, etc.), the login time, and other such parameters?
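As a starting point (a minimal sketch using the standard session and request DMVs, not a full auditing solution), something like this shows who is connected, when they logged in, and what they are running right now:

SELECT s.session_id,
       s.login_name,
       s.login_time,
       s.host_name,
       s.program_name,
       t.text AS current_query   -- NULL when the session is idle
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;

Note these DMVs only show current activity; historical per-query tracking needs SQL Audit, a server-side trace, or Extended Events.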
I am using Ola Hallengren's scripts to do backups. He uses @CleanupTime to delete backups older than a certain number of hours. My situation is I do a full backup of a database on Sunday, then a few differentials, and then log backups for the rest of the week. When Sunday rolls around again and my full backup is finished, I would like to delete all the differential and log backups. Is there any way I could accomplish this using Ola's scripts?
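For context, the jobs call dbo.DatabaseBackup roughly like this (a sketch with an assumed directory; @CleanupTime = 168 keeps one week of files). One caveat: @CleanupTime only removes files of the same backup type in the same directory structure, so the differential and log jobs each clean up their own prior week rather than being purged the moment the new full finishes:

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'D:\Backup',      -- assumed path
    @BackupType = 'DIFF',          -- same pattern for 'LOG'
    @CleanupTime = 168;            -- delete files of this type older than 168 hours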
I have tried doing SQL Server backups to a file share and that has been taking too long. So I've decided to back up locally and then copy those backups off the server. For those of you doing this, what do you use to get your backups off the server?
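One common pattern (a sketch with hypothetical paths) is a SQL Agent job with an operating-system (CmdExec) step that runs robocopy after the backup step finishes:

REM Copy finished .bak files to the share, then delete the local copies
REM /Z = restartable mode in case the network drops; /R and /W = retry settings
robocopy D:\Backup \\fileserver\sqlbackups *.bak /MOV /Z /R:3 /W:30

That way the backup itself runs at local-disk speed and the slow network copy happens afterward without holding the database.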
I am sure I have seen in the past, in a monitoring tool, that PLE drops to 0 whenever we do a backup. I was doing some reading around this, however, and found something that said backups use a portion of memory external to the buffer pool (outside the min/max server memory settings).
Is this correct, and how can I tell how much memory will be required for a backup?
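The backup buffer memory is roughly BUFFERCOUNT x MAXTRANSFERSIZE, and both can be set explicitly on the BACKUP statement (a sketch with hypothetical names; trace flags 3605 plus 3213 are the commonly cited, undocumented pair that write the values a backup actually chose to the error log):

-- Roughly 50 * 4 MB = 200 MB of backup buffer memory for this backup
BACKUP DATABASE [MyDatabase]           -- hypothetical database name
TO DISK = N'D:\Backup\MyDatabase.bak'  -- hypothetical path
WITH BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304;        -- 4 MB per buffer

-- To log the values a backup uses, enable (then later disable) these flags:
DBCC TRACEON(3605, 3213, -1);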
I just completed a copy-only compressed backup of a DB (with the FULL recovery model) on SQL Server 2012, and the resulting backup (the .bak file) is 1/100th the size of the data & log files. Is the compression in SQL Server 2012 just that good, or did something else happen that I did not catch? Below is the T-SQL to re-create the backup. The size of the data file is 750MB, and the log file is 75GB and is 95% used according to the SQLPERF command.
Is the compression in SQL Server 2012 simply that good?
BACKUP DATABASE [MYBIGOLEDB] TO DISK = N'Z:\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\MYBIGOLEDB_20150611.bak' WITH COPY_ONLY, NOFORMAT, INIT, NAME = N'MYBIGOLEDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO
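One thing worth checking: a full backup contains only used data pages plus the portion of the log needed to recover, not the whole log file, so most of that 75GB log never enters the backup at all. Comparing the uncompressed and compressed sizes recorded in msdb (standard history columns, a sketch) separates the two effects:

SELECT database_name,
       backup_finish_date,
       backup_size,            -- uncompressed bytes actually backed up
       compressed_backup_size  -- bytes written to the .bak file
FROM msdb.dbo.backupset
WHERE database_name = 'MYBIGOLEDB'
ORDER BY backup_finish_date DESC;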
So I'm testing some things on our new servers and was trying to restore a database from some striped backup sets. We have 4 files for our backups, and restoring the FULL backups with no recovery worked beautifully via SSMS. But when I tried to restore the differentials (also striped across 4 files), the GUI gave me this error:
Unable to create restore plan due to break in the LSN chain.
I searched for how to locate when the break happened and came across an article describing how this is an SSMS 2012 bug.
So I tried the advice in that article to attempt a restore via Files and Filegroups, and ended up with another error (a screenshot of it is attached to the original post).
I was able to restore via T-SQL, but I want to also be able to restore through the GUI.
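For reference, the T-SQL restore of a striped set looks roughly like this (a sketch with hypothetical file names; each stripe is listed as its own DISK clause):

-- Restore the striped full backup, leaving the database ready for more restores
RESTORE DATABASE [MyDB]
FROM DISK = N'D:\Backup\MyDB_Full_1.bak',
     DISK = N'D:\Backup\MyDB_Full_2.bak',
     DISK = N'D:\Backup\MyDB_Full_3.bak',
     DISK = N'D:\Backup\MyDB_Full_4.bak'
WITH NORECOVERY;

-- Then the striped differential, and recover
RESTORE DATABASE [MyDB]
FROM DISK = N'D:\Backup\MyDB_Diff_1.bak',
     DISK = N'D:\Backup\MyDB_Diff_2.bak',
     DISK = N'D:\Backup\MyDB_Diff_3.bak',
     DISK = N'D:\Backup\MyDB_Diff_4.bak'
WITH RECOVERY;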
First of all, when I choose to pick a folder to back up to, no mapped drives I make are even THERE.
I realize this is probably related to the account being used. Okay, I thought, let me change the service account to a network admin account... I still cannot see the drive.
Can't this thing just accept whatever path I tell it to access, like any other program?
You would think they would at least keep the standard Open File dialog so we can use the network browser or something...
I've changed my accounts all to NETWORK SERVICE, then LOCAL SYSTEM, then a DOMAIN ADMIN...
I can't get this to work correctly on this freshly installed server... can someone please help?
I'm at the point where I don't care if I have to just re-install the damn thing...
Just someone please tell me what to pick for the accounts.
Bonus: I have this same issue with Reporting Services and Services for Unix NFS mapped drives.
How can I map a drive with NETWORK SERVICE credentials so it finds the data source path?
I've only been able to do something like this with psexec and Local System.
When logged in as Domain Admin it will show a disconnected network drive that you can't get rid of, but the system account can use it.
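Worth noting as background: services never see per-user mapped drive letters, because drive mappings belong to a logon session, not to the machine. The usual workaround is to skip the drive letter entirely and hand SQL Server a UNC path that the service account has share and NTFS permissions on (a sketch with hypothetical names):

-- Back up straight to the share; the service account needs read/write on it
BACKUP DATABASE [MyDB]                          -- hypothetical database name
TO DISK = N'\\fileserver\sqlbackups\MyDB.bak'   -- hypothetical UNC path
WITH INIT, STATS = 10;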
We have a SQL 2012 server instance that has log shipping set up to another SQL 2012 server to provide a warm standby for a forward-facing application. The databases on the primary server occasionally need to be backed up and restored to a development environment on a completely different server. Is there a way to schedule full backups with log shipping enabled?
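Full backups do not break the log backup chain, so they coexist with log shipping; taking the ad-hoc backup as COPY_ONLY also leaves the differential base untouched. A sketch with hypothetical names:

BACKUP DATABASE [ShippedDB]                  -- hypothetical database name
TO DISK = N'D:\Backup\ShippedDB_ForDev.bak'  -- hypothetical path
WITH COPY_ONLY, INIT, STATS = 10;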
Native backups to NAS do not complete. We have been experiencing an issue whereby our native backups are hanging with a status of SUSPENDED or RUNNABLE. I ran select * from sys.sysprocesses, and all of the backup SPIDs show waits of BACKUPTHREAD or PREEMPTIVE_OS_FILEOPS.
This first occurred last Wednesday evening. When I discovered this on Thursday, I attempted to kill the backup jobs. This also hung with 0% completed / 0% time remaining. Backups hung on more than one instance. That evening, I attempted to restart the instance, which also failed with something along the lines of: could not start, master file in use.
I then restarted the server--which I really did not want to do--and this cleared it. I was also able to manually kick off maintenance plans (DBCC CHECKDB and full backup) without issue. I was off Friday and the weekend. I came in this morning and found the maintenance plans (diff/tlog backups) did not complete on some of the instances--in one case, an instance affected now was not affected before. They appear to have hung on their next scheduled kickoff, which was later that night after I went home.
Remembering the "file in use" error, I ran Process Monitor to see if anything unusual had a lock on any files. I saw only SQL Server and Double-Take processes accessing log files. Being a relatively new DBA, I am unsure where to go next in trying to track down the cause of this issue. This is fairly urgent, as one of the instances that has had this problem both times is our production SharePoint environment.
ENVIRONMENT:
SQL version: Microsoft SQL Server 2012 (SP1) - 11.0.3368.0 (X64) May 22 2013 17:10:44 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
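When it happens again, a request-level view (a sketch using standard DMVs) may show exactly what the backup threads are waiting on at the OS boundary, which would point at the NAS or the network path rather than SQL Server itself:

SELECT r.session_id,
       r.command,
       r.status,
       r.wait_type,
       r.last_wait_type,
       r.wait_time,          -- ms spent in the current wait
       r.percent_complete
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'BACKUP%';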
I am planning to implement the following for the backup of one of our databases: enable the full recovery model, then
1 - Create a full backup nightly
2 - Create a transaction log backup every 25 minutes
As I am taking a full backup every night, I think I can remove the transaction log backups at the time of the full backup, since we can apply transaction log backups over a full backup. My question is regarding the removal of transaction log backups.
- Should I remove all transaction log backups and then execute the full backup?
- Should I execute the full backup and then remove all transaction log backups older than 24 hours?
- Do I have to consider the LSN or related info before deleting any transaction log backup?
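To be safe, rather than deleting by age alone, the backup history can confirm which log backups predate the newest full (a sketch; in msdb, type 'D' is a full backup and 'L' a log backup, and the database name here is a placeholder):

-- Log backups finished before the most recent full backup of this database
SELECT bs.backup_set_id,
       bs.backup_finish_date,
       bs.first_lsn,
       bs.last_lsn
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'MyDB'
  AND bs.type = 'L'
  AND bs.backup_finish_date <
      (SELECT MAX(backup_finish_date)
       FROM msdb.dbo.backupset
       WHERE database_name = 'MyDB' AND type = 'D');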
Up until a couple of days ago all my maintenance plans were working.
Now they are all failing with the following errors -
Message Executed as user: MHS2\sql2008. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 10:07:02 Progress: 2014-05-29 10:07:03.20 Source: {CE851AA2-045F-429A-885E-E60E75A38639} Executing query "DECLARE @Guid UNIQUEIDENTIFIER
[Code] ....
The package execution failed. The step failed. Now I have changed the SQL Server Agent service account to use a network account called "sql2008". Stopped and started the Agent service.
The account has access to the folders it is trying to back up to. It's a full admin on the SQL box and on the box where I am trying to back up to.
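To confirm which accounts the services are actually running under after the change (a sketch; sys.dm_server_services is available from SQL Server 2008 R2 SP1 onward, so it applies to this 11.0 instance):

SELECT servicename,
       service_account,
       status_desc
FROM sys.dm_server_services;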
We have just implemented a SQL 2012 AlwaysOn environment. We have a primary and a secondary server. I am confused about how to set up the backup plans. The application team was happy to tell me that with SQL 2012 AlwaysOn we can offload the backups to the secondary, thus reducing overhead on the primary server.
However, the secondary only supports copy-only full backups. I am unsure how these would be useful in a disaster event; my understanding is that I could not apply any trx log backups on top of a copy-only backup. Does this mean I need to run my full backups on the primary server?
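It is worth noting that a copy-only full backup does not break the log chain, so log backups taken afterward restore on top of it exactly as they would on a regular full (a sketch with hypothetical file names):

RESTORE DATABASE [AGDB]
FROM DISK = N'D:\Backup\AGDB_CopyOnlyFull.bak'
WITH NORECOVERY;

RESTORE LOG [AGDB]
FROM DISK = N'D:\Backup\AGDB_Log1.trn'
WITH RECOVERY;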
We had our backups backing up to the server where the databases reside. Now I have modified the backups to back up to a file share. When we try to restore from the file share, the restore fails, so we have to copy the backup to a drive on the server and recover from there. Should I be able to restore directly from the file share (using the GUI)? Do I need to change something else to modify the default backup drive?
I am setting up Availability Groups and I want to use the secondary replica to perform the full copy_only backups to reduce the load on the primary replica. But what is the best way to check for successful full backups on Availability Group databases?
Previously I could check the system table msdb.dbo.backupset, but this is not available for copy_only backups. So I wonder how people are monitoring that their full backups have been successful?
Do you just check that the SQL Agent job that runs the backup was successful?
Or do you search the SQL Server Error Log for entries like "Database backed up. Database: xxx" where database xxx is in an Availability Group?
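One option (a sketch): backupset does in fact record copy-only backups via the is_copy_only flag, but only in msdb on the replica where the backup actually ran, so a check like this has to run on each replica or be collected centrally:

SELECT database_name,
       backup_finish_date,
       is_copy_only
FROM msdb.dbo.backupset
WHERE type = 'D'               -- full backups
  AND is_copy_only = 1
ORDER BY backup_finish_date DESC;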
Can we back up our cluster databases directly to tape using native backups (without using any third-party tool)? It's a SQL Server 2012 two-node active/passive cluster. One of the DBs will be huge in size, hence checking if we can directly back up from the cluster instance to a tape.
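Native BACKUP ... TO TAPE still exists in SQL Server 2012 (though it is deprecated), provided the tape drive is physically attached to the node the instance is running on; a sketch with an assumed device name:

BACKUP DATABASE [HugeDB]        -- hypothetical database name
TO TAPE = N'\\.\Tape0'          -- assumed locally attached tape device
WITH INIT, STATS = 10;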
I would like to know if there is a way to find out who changed a user's roles/access WITHOUT using the audit function. For example, if a user account was created and given SA access, then changed to read-only, how can I find out who made that change? I tried searching for an answer, but kept getting no results. I'm thinking this may tie into the sys.sysusers view?
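One avenue (a sketch; the default trace is enabled out of the box on most instances and captures security-change events, though only for a limited rolling window of trace files):

-- Read security-related events from the default trace
SELECT t.StartTime,
       t.LoginName,        -- who made the change
       t.TargetUserName,   -- whose access was changed
       t.TextData
FROM sys.traces AS tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) AS t
WHERE tr.is_default = 1
  AND t.EventClass IN (108, 109, 110);  -- server role, DB user, and DB role membership changes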
I am relatively new to SQL Developer. There is a new user that just joined our organization. I am trying to grant him the same direct table grants that an existing user has. The existing user has a ton of direct table access privileges, and it would take days if I had to do each grant one by one, like: grant select, insert, delete, update on 'table name' to 'user id'. Is there a way of copying an existing user's privileges and granting them to a new user?
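Since this sounds like an Oracle environment (SQL Developer, and the grant syntax above), one common trick (a sketch; dba_tab_privs is the standard Oracle privilege view, and OLD_USER/NEW_USER are placeholders) is to generate the GRANT statements rather than type them:

SELECT 'GRANT ' || privilege || ' ON ' || owner || '.' || table_name
       || ' TO NEW_USER;' AS grant_stmt
FROM dba_tab_privs
WHERE grantee = 'OLD_USER';
-- Spool or copy the output and run it as a script.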
I have an SSIS package built by another developer, and now that I'm running it under my login the passwords won't save. The solution and packages are set up with ProtectionLevel EncryptSensitiveWithUserKey, but how do I get the user key to reset so I can now save passwords? I can re-enter them, but whenever I enter the password and test it, then click OK, it still has the red arrow next to the connection as if there's an error. I can create new connections and those passwords save fine, but with 40-50 items in this package I hate the thought of having to go into each and change the connection.
I tried changing the package and solution to DontSaveSensitive, then rebuilding, closing, and reopening; I hoped there was some option to reset the user key just as if I had created the solution. If this option doesn't exist, why not?
The space allocated to the log in question is 180 GB. During this time period I was running TLog backups every 5 minutes, yet the log continued to chew through to 80 GB used, even after the process was complete and a final TLog backup had been taken. It stayed very large until the full backup was complete -- or until something else I'm unaware of completed. Like every other DBA, I typically take a TLog backup to truncate the log, but what appeared to be the case here was that the full backup completed and then the used log space was released. All said, will transaction log backups not free up the log during full backups?
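This matches documented behavior: log truncation is deferred while a full (or differential) backup is running, because the backup needs that log to bring the database to a consistent point on restore. The wait reason is visible while it happens (a standard catalog query; the database name is a placeholder):

-- ACTIVE_BACKUP_OR_RESTORE here means the log cannot be truncated
-- until the running backup finishes, even if log backups succeed
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDB';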
I know that I have read not to back up a database over a network. So I am curious as to what others are doing out there. Back up to the local hard drive on the server and then move the backup files to a repository somewhere on the network? Do others have a file structure out on another server that stores all of the backups from all of the different servers that run SQL 7.0? We are a small company and are just starting to migrate data to SQL Server 7.0.
I have to perform a backup for disaster recovery purposes before an application upgrade. The upgrade will alter the database and stored procedures. My current schedule takes a backup of master and msdb weekly. The user database uses the full recovery model and is backed up daily at 21:00 and the logs daily at 13:00. Assuming the database is modified between the last backup and the upgrade starting at 9:00am, what should my backup strategy be for rollback purposes?
1) Back up master, msdb and the user database to a different location than the normal backups, and use these to restore if required.
2) Back up master, msdb and the user databases using the same jobs, therefore overwriting the original evening backups.
3) Do nothing, and just restore master and msdb from a backup and replay the logs to a given point in time for the user database should the upgrade fail.
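If the instance supports it (COPY_ONLY exists from SQL Server 2005 onward, an assumption here since the version isn't stated; the paths are placeholders), a one-off backup to a separate location along the lines of option 1 avoids disturbing the regular backup chain:

BACKUP DATABASE [UserDB]                         -- hypothetical name
TO DISK = N'E:\PreUpgrade\UserDB_PreUpgrade.bak' -- separate location
WITH COPY_ONLY, INIT, STATS = 10;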
Can anyone tell me what impact dynamic database backups in SQL 6.0/6.5 will have on users using the database?
Will their user processes be blocked? Will their queries run slower than normal (and how much slower)? Will there be a lot of locking activity as SQL tries to back up? Will the server/database run slower?
I am looking for the best method to backup SQL Server databases. Currently we are running a dump database statement to disk and backing up the files to tape through Arcserve.
One problem that I am having is the statement to dump the database. I would like to retain the dump for at least three days and be able to restore the database from any one of those three days. My current statement is: "DUMP DATABASE CHOISDAT TO DISK='D:\BACKUP\CHOIS.BAK' WITH NOUNLOAD, STATS = 10, INIT, RETAINDAYS = 3, NOSKIP"
but, every other day I receive the message from SQL Executive: "Can't open dump device 'D:\BACKUP\CHOIS.BAK', device error or device off line. Please consult the SQL Server error log for more details. (Message 3201)"
What am I doing wrong? Any suggestions?
P.S.
Is there any way to tell the Maintenance Wizard to delete old backups? I tried using the wizard, but the backup files still remain on the disk and I have to delete them every week.
I have a database which is 72GB, which is backed up every night as part of the maintenance plan. I have plenty of storage space, and the server that runs the database is fairly powerful (quad-processor 3.2GHz, 64-bit, 48GB RAM) and is part of an active-passive cluster. The database backup is also copied to a SAN location.
My issue is with the size of the backup file. As part of the disaster recovery plan, I need to copy this database backup file across the network to a remote site, so that in the event of a disaster at the site, business can continue at the remote site after restoring the database backup file. However, my database backup file is so big that I cannot copy it across the network in time for the next morning. I have tried using WinRAR and have managed to achieve a file about 20% of its original size, but it takes 2 hours to produce this file.
Is there any recommended reading for this type of issue? Log shipping / mirroring has been investigated and will be part of the DR model, but the 'powers that be' insist on having a full copy sent to the remote site.
Any suggestions? Thanks in advance guys n gals :-)
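If the instance is SQL Server 2008 Enterprise or later (an assumption, since the version isn't stated; earlier versions would need a third-party backup tool for this), native backup compression produces the smaller file during the backup itself, with no separate WinRAR pass:

BACKUP DATABASE [BigDB]           -- hypothetical name
TO DISK = N'D:\Backup\BigDB.bak'  -- hypothetical path
WITH COMPRESSION, INIT, STATS = 10;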