SQL Server Admin 2014 :: Is It Realistic To Base Recovery On Mdf And Ldf Backups
Dec 15, 2014
My recommendation to my manager is to base recovery on .bak and .trn files. However, the .mdf and .ldf files can be backed up without being detached, and I am wondering about the value of these file backups. He has set up backups of the .mdf and .ldf files to an external drive, but not of the .bak and .trn files that I have set up. His recovery strategy is based on a complete backup of the machine and bare-metal recovery. I am finding it hard to defend my preference for recovering with a new SQL Server installation and then restoring the .bak files. Has anyone tried recovery from .mdf and .ldf files? Please note that all the database files (system and user) are on a single drive; I think this was set up to allow the machine to be clustered. We are intending to do a test rebuild of the machine from his backups.
I have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully. Running the command manually shows successful completion, but the backups are still not removed.
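For reference, the cleanup task ultimately calls the undocumented xp_delete_file extended procedure, so it can be tested by hand; a minimal sketch, assuming a hypothetical backup folder of D:\Backup:

EXEC master.dbo.xp_delete_file
    0,                        -- file type: 0 = backup files
    N'D:\Backup',             -- folder to clean (hypothetical path)
    N'bak',                   -- extension, with no leading dot
    N'2014-12-01T00:00:00',   -- delete backups older than this cutoff
    1;                        -- 1 = include first-level subfolders

One quiet failure mode worth knowing: xp_delete_file only deletes files whose headers identify them as SQL Server backups, so third-party or renamed .bak files are skipped without any error.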
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they will take ages to copy over and restore, so it would be much easier to just use the DB files. Should I restore the master DB first (easy, as it's small)?
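If the files are intact, attaching them is usually the quickest route; a minimal sketch, with hypothetical paths and a hypothetical database name:

CREATE DATABASE MyDB
ON (FILENAME = N'E:\Data\MyDB.mdf'),
   (FILENAME = N'E:\Logs\MyDB_log.ldf')
FOR ATTACH;

Note that attaching the 2008 R2 files to a 2014 instance upgrades them in place (a one-way step), and the old master cannot be used on the new version; user databases are attached or restored against the new instance's own system databases.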
I am trying to create a job that would back up a DB dynamically to a different folder based on the day of the week. So if it's Saturday, back up to the Saturday folder.
I have this
DECLARE @BackupLoc nvarchar(100);
DECLARE @DayOfWeek nvarchar(100);
SET @BackupLoc = N'D:\Backup\';
SET @DayOfWeek = DATENAME(dw, GETDATE());   -- e.g. 'Saturday'
SET @BackupLoc = @BackupLoc + @DayOfWeek;
[Code] ....
But it's giving me permission errors, although the Agent and MSSQL service accounts are members of the local Administrators group.
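For comparison, a minimal sketch of the rest of such a job, continuing from the variables above with a hypothetical database name; note that BACKUP DATABASE will not create a missing folder (the per-day directories have to exist first), and the file is written by the Database Engine service account, not the Agent account or the caller:

DECLARE @BackupFile nvarchar(260);
SET @BackupFile = @BackupLoc + N'\MyDB_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';   -- e.g. D:\Backup\Saturday\MyDB_20141215.bak

BACKUP DATABASE MyDB
TO DISK = @BackupFile
WITH INIT, CHECKSUM;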
This is my first deployment of an Always On availability group on SQL 2014 and I'm trying to get my custom backup procedure to handle all databases appropriately depending on which node is the primary. Basically, I want the system databases and all databases that don't participate in the availability group to be backed up on both nodes, and those that do participate backed up ONLY on the primary server. I've looked at the sys.fn_hadr_backup_is_preferred_replica function, but would like to only have to test for a single database's existence in the availability group. If that one database is in the group, only back up the system databases and those that don't participate; otherwise back up every database. This would be the case for both full backups and transaction logs.
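A minimal sketch of that single-database test, with a hypothetical AG member database named AGDatabase; note that sys.fn_hadr_backup_is_preferred_replica also returns 1 for any database that is not in an availability group, and it follows the AG's automated backup preference setting, so primary-only behavior assumes that preference is set to Primary:

IF sys.fn_hadr_backup_is_preferred_replica(N'AGDatabase') = 1
BEGIN
    -- Preferred (primary) replica: back up every database.
    PRINT 'Back up all databases on this node.';
END
ELSE
BEGIN
    -- Secondary: back up only system databases and non-AG databases.
    PRINT 'Back up system and non-AG databases only.';
END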
I'm performing SQL Server native backups, and Commvault backups are breaking the backup chain. I can work around them if they only perform one backup per day. We have a 3rd party hosting our servers, so I do not have much control over what they do. I have concerns about whether my databases could actually be restored from Commvault.
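One way to see exactly what is landing in the chain is the backup history in msdb; a minimal sketch with a placeholder database name (Commvault backups typically show a virtual device GUID in physical_device_name rather than a file path):

SELECT bs.database_name,
       bs.backup_start_date,
       bs.type,              -- D = full, I = differential, L = log
       bs.is_copy_only,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'MyDB'
ORDER BY bs.backup_start_date DESC;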
On one of our SQL Server 2014 boxes, each database has a copy-only full backup made every night, in addition to the maintenance plan schedule of a weekly full backup, daily differential backups and log backups.
When performing a point-in-time (PIT) restore in SSMS, the restore file list offers the most recent copy-only backup as the full backup to use, not the most recent plan full backup. I noticed that using SSMS 2008 to start a PIT restore on the 2014 box does not have this problem and lists the correct restore file sequence (it ignores the copy-only backups).
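To double-check which full backup a PIT restore should actually start from, the copy-only flag can be inspected directly; a small sketch with a placeholder database name:

SELECT TOP (5)
       backup_finish_date,
       is_copy_only
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB'
  AND type = 'D'              -- full backups only
ORDER BY backup_finish_date DESC;
-- The newest row with is_copy_only = 0 is the full backup the restore plan should use.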
The DB is in simple recovery mode. There are no open transactions (used dbcc opentran).
The server is running SQL Server 2014 and the DB is in compatibility mode SQL Server 2008 (100). It was upgraded to 2014 a month or two ago.
I have tried to resize the log to 100 MB, but no matter how I have tried (none gave errors), the log file remains the same size. I have tried to shrink the log file (through the UI and via DBCC commands) without success; no errors, but also no change in file size.
I have checked Log Reuse Waits, just in case, and as expected it showed “NOTHING” (select log_reuse_wait_desc, name from sys.databases)
I tried running a checkpoint, but that did not allow any resize or shrink to work.
I have tried creating large transactions to move the used point in the log file, in case this was the issue. I did this by creating tables that I drop after large inserts. While it shows me that the log space % used increased, the log file still does not allow the space to be reduced.
The following is what I was using for the transactions to drive log usage:
BEGIN TRAN;
    SELECT a.*
    INTO testtable
    FROM sysobjects a, sysobjects b, sysobjects c;   -- large cross join to generate log records
ROLLBACK TRAN;
Do I just need to continue running large transactions until the log space used gets high enough for the "end point" in the log to really move? Is there an easier way to accomplish this (I have several DBs with almost identical problems)? What I am using moves the log space percent used by about one percent per execution.
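For what it's worth, a common pattern is to check where the active VLFs sit and alternate checkpoints with shrink attempts; a minimal sketch, assuming a hypothetical database and a logical log file name of MyDB_log (the real name is in sys.database_files):

USE MyDB;
GO
DBCC LOGINFO;                      -- Status = 2 marks active VLFs; a shrink stops at the last active one
CHECKPOINT;                        -- in simple recovery, lets the active portion move toward the front
GO
DBCC SHRINKFILE (MyDB_log, 100);   -- target size in MB
GO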
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid-state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role, but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null), and it shows minimal permissions like I'd expect.
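A minimal sketch of that workaround on 2012+, with a placeholder login (on 2008, sp_addsrvrolemember does the role part, and the EXECUTE grant requires the login to have a user in master):

USE master;
GO
ALTER SERVER ROLE securityadmin ADD MEMBER [DOMAIN\ErrorLogReader];   -- placeholder login
DENY ALTER ANY LOGIN TO [DOMAIN\ErrorLogReader];                      -- take back the dangerous part
GO
-- Optional: a direct grant for scripts that call the extended procedure instead of sp_readerrorlog
GRANT EXECUTE ON master.dbo.xp_readerrorlog TO [DOMAIN\ErrorLogReader];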
How can I make sure that I don't have any data loss in the event of a DB crash? I take a daily full database backup to disk and a transaction log backup every 4 hours.
What should the backup strategy be to recover the database 100% from backups, i.e. to get point-in-time recovery?
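The piece that closes the gap between the last log backup and the crash is the tail-log backup; a minimal restore sketch with placeholder names and paths:

-- After the failure, capture whatever is still in the live log (works if the log file survived):
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_tail.trn' WITH NO_TRUNCATE, NORECOVERY;

-- Then restore the chain up to the moment you want:
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_full.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backup\MyDB_1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backup\MyDB_tail.trn'
    WITH STOPAT = '2014-12-15T10:30:00', RECOVERY;   -- hypothetical point in time

With a successful tail-log backup, the only loss is whatever had not reached the log at the moment of the crash, i.e. effectively nothing for committed transactions.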
I do not insert/update/delete on the view directly.
For every insert/update in table A or table B, the values should get inserted/updated in the view respectively, and I expected that insert/update on the view to invoke the trigger.
But I am unable to see this trigger fire on the view when any insert/update occurs at the base-table level.
The trigger works only if an operation is done directly on the view.
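That matches the documented behavior: a trigger created on a view is an INSTEAD OF trigger, and it fires only for DML statements that target the view itself, never when the underlying tables change. A minimal illustration with hypothetical objects:

CREATE TABLE dbo.TableA (id int, val int);
GO
CREATE VIEW dbo.vA AS SELECT id, val FROM dbo.TableA;
GO
CREATE TRIGGER dbo.trg_vA ON dbo.vA
INSTEAD OF INSERT
AS
BEGIN
    PRINT 'View trigger fired';
    INSERT dbo.TableA (id, val) SELECT id, val FROM inserted;   -- pass the rows through
END;
GO
INSERT dbo.vA VALUES (1, 10);       -- fires the view trigger
INSERT dbo.TableA VALUES (2, 20);   -- does NOT fire it; that needs a trigger on TableA itself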
We have a 3-replica AG setup: two replicas are in synchronous/automatic failover mode, and the other (a DR server on a different subnet) is in asynchronous/manual mode. All these replicas were on SQL Server 2012; recently we upgraded the DR server to 2014. Since then we have a problem: the AG databases on the 2014 instance went into a 'Synchronizing / In Recovery' state. The SQL Server error log has the message that recovery couldn't start for database 'XYZ'. We tried to create a new database and add it to the AG; it works fine for the other two 2012 replicas, but on 2014 we see the same issue.
If I install an instance with Windows-only authentication and then change it to Mixed Mode, the password for the sa login has already been set when I go to enable it. What is the default? Is the password generated, and if so, how secure is it and what algorithm is used to generate it?
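For what it's worth, when setup runs in Windows-only mode it assigns sa a generated random password and leaves the login disabled, so the usual practice is to set a known strong password before enabling it; a minimal sketch:

USE master;
GO
ALTER LOGIN sa WITH PASSWORD = N'<strong-password-here>';   -- replace the setup-generated one
ALTER LOGIN sa ENABLE;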
My SQL databases in SQL Server 2014 have the status "Suspect", as shown in SQL Server Management Studio. I can't get the databases back to a serviceable condition through standard procedures. I need to recover from the .mdf files.
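As an absolute last resort, when no usable backup exists, a suspect database can sometimes be brought back through emergency mode; a heavily hedged sketch with a placeholder name, noting that REPAIR_ALLOW_DATA_LOSS can silently discard damaged data:

ALTER DATABASE MyDB SET EMERGENCY;                -- makes the database readable for diagnosis
ALTER DATABASE MyDB SET SINGLE_USER;
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS);   -- may rebuild the log and delete broken pages
ALTER DATABASE MyDB SET MULTI_USER;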
I am using a monitoring system where I can monitor a numeric SQL result, provided the result is one field in one row. I would like to use this to monitor, say, the free space or free percentage of the master database. DBCC SQLPERF gives me several columns and a row for every database on the server.
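FILEPROPERTY can collapse this to one row and one column; a minimal sketch for the master data file (the logical file name on a default install is assumed to be 'master'; check sys.database_files if it was renamed):

USE master;
SELECT (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM sys.database_files
WHERE name = N'master';   -- size and SpaceUsed are in 8 KB pages, hence * 8 / 1024 for MB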
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point DNS at the listener IPs? If a failover happens, can DNS point to the exact listener IP address of the current primary node?
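For reference, the listener registers its own DNS name with both IPs itself (with the default RegisterAllProvidersIP behavior, only the address in the current primary's subnet is online), so applications point at the listener name and, ideally, set MultiSubnetFailover=True. A minimal creation sketch with hypothetical names and addresses:

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyAGListener'
(
    WITH IP
    (
        (N'10.10.1.50', N'255.255.255.0'),   -- subnet A
        (N'10.10.2.50', N'255.255.255.0')    -- subnet B
    ),
    PORT = 1433
);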
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What filters should we select in Profiler to monitor the SQL Server?
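Independent of either tool, the scheduler DMVs show whether workers are actually stuck when a non-yielding message like the one above appears; a small sketch (the scheduler_id filter for regular schedulers is a common convention):

SELECT scheduler_id,
       is_online,
       current_tasks_count,
       runnable_tasks_count,   -- persistently high values suggest CPU pressure
       work_queue_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;      -- regular schedulers only; hidden/internal ones have higher IDs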
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally, if I browse to [URL], I should be presented with the report server interface, the same as when I browse to http://xxx.xxx.xxx.xxx/reports internally.
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL onto the read-only server, but that process is not going well, so we are looking at other options that let SQL Server make the copy.
The primary database is on Windows 2012 R2 with SQL 2012 or 2014, a 2-node active/passive failover cluster.
The read-only copy will be on Windows 2012 R2 with SQL 2012 or 2014. It is not a requirement to fail over to the read-only copy if the primary goes down.
What would be the best approach to accomplish this end result?
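One option that fits "readable copy, not a failover target" is log shipping with the secondary kept in STANDBY; a minimal sketch of the restore side with placeholder paths (users must be disconnected during each restore):

-- One-time initialization of the read-only copy:
RESTORE DATABASE MyDB
FROM DISK = N'\\primary\backups\MyDB_full.bak'
WITH STANDBY = N'D:\Standby\MyDB_undo.dat', REPLACE;

-- Repeated after each log backup on the primary (e.g. nightly, after the ETL):
RESTORE LOG MyDB
FROM DISK = N'\\primary\backups\MyDB_log.trn'
WITH STANDBY = N'D:\Standby\MyDB_undo.dat';   -- database stays read-only between restores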
I have 10 databases which are configured as principals in mirroring. As part of a failover I need to fail over all of the databases; instead of writing an ALTER DATABASE ... SET PARTNER FAILOVER statement for each database, is there a script which will find the principal databases and generate the failover commands?
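A small generator along those lines, using the mirroring catalog view; it prints the commands for review (they must be run on the principal):

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER DATABASE ' + QUOTENAME(d.name) + N' SET PARTNER FAILOVER;'
    + CHAR(13) + CHAR(10)
FROM sys.database_mirroring AS dm
JOIN sys.databases AS d
    ON d.database_id = dm.database_id
WHERE dm.mirroring_role = 1       -- 1 = principal
  AND dm.mirroring_state = 4;     -- 4 = synchronized, required for a manual failover

PRINT @sql;   -- review the output, then execute it (or EXEC (@sql) directly)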
After installing SSMS on some computers, the only way we can get SSMS to run correctly is to run it as administrator. Is there a way to avoid that? These end users are logging in as themselves and have accounts in SQL Server all set up, but SSMS will only launch for them if we right-click and select "run as administrator". After doing some digging, it seems that this is a common problem out there.
When I execute a RESTORE LOG command, the Messages window shows how many seconds the restore took; at the same time, the status bar shows how many seconds the whole command took.
The two values are different and can be very different; please see the examples below (pics a and b): one restore takes 1.8 seconds while the command takes 4 seconds in total to complete, the other takes 8.1 seconds versus 12 seconds.
What does SQL Server or Windows do after the restore?
pic a:
pic b:
I captured an xperf trace. I can see that after the restore completed, SQL Server did a garbage collect and a log write, which ran very quickly, but storage was busy reading the log file for nearly 2.2 seconds (4 - 1.8) in one case and about 4 seconds (12 - 8.1) in the other.
pic 1:
pic 2:
See pic 1 above: from second 13 to 17 the restore operation is finished, but the storage jumps to 100% active to do some reads, only reads, no writes. Zooming into that period gives pic 2: it reads 4096 (I don't know the unit size) for about 4 seconds. What is this doing?
The data file, log file and backup file are not on different drives; they are all on the same local drive. The interesting point is that the reads jump right after the restore finishes. I tested it on a different server with the same result...