2 boxes in HA, identical hardware (VMs), both using identical storage systems (Nimble SANs). I needed to do a copy-only backup of a few databases for an escrow environment we have, so I decided to perform this on the secondary box. I started with a small database (~10GB) and it seemed really slow; I opened Resource Monitor and noticed that disk read/write was being limited to about 15MB/s, so it took a little while (the big one is over 1TB).

Tried this again on the primary and it was done in seconds, with a disk rate in the order of 250MB/s (same settings re compression etc.). I then made a copy of the backup file on the secondary, and the disk-to-disk copy ran in the region of 150MB/s, so it appears that SQL Server is the bottleneck?
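For reference, a minimal sketch of the kind of copy-only backup being described (the database name, path and compression option below are placeholder assumptions, not details from the post):

-- Hypothetical copy-only backup run on the secondary replica; names and paths are placeholders.
-- COPY_ONLY keeps the backup out of the normal backup chain.
BACKUP DATABASE [EscrowDb]
TO DISK = N'D:\Backups\EscrowDb_CopyOnly.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;   -- STATS = 10 reports progress, handy when throughput looks slow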
I was working on a job to send me info each morning about database file free space and noticed some odd things when looking at the log file VLFs for one of my databases in an AlwaysOn availability group. When I run DBCC LOGINFO on the secondary replica for this database, I get what I expect and most VLFs have a status of 0 (indicating the VLFs are reusable or unused). When I run DBCC LOGINFO on the primary replica, all of the VLFs have a status of 2 (active or recoverable).

Since log backups taken on the secondary replica in AlwaysOn still truncate the log on the primary replica, I would expect the VLFs on the primary replica to also be mostly in a reusable or unused state. My log files are the same size on each server and my backups are completing successfully. What might be causing the VLFs on the primary replica to have a status of 2 in DBCC LOGINFO when the log backups are taken from the secondary replica?
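For reference, a sketch of the checks being described, run on each replica (the database name is a placeholder); the log_reuse_wait_desc column often explains why VLFs are still marked active:

-- Status column: 0 = reusable/unused, 2 = active or recoverable
DBCC LOGINFO ('MyAgDatabase');

-- Common reasons the log cannot be reused show up here (e.g. LOG_BACKUP, AVAILABILITY_REPLICA)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyAgDatabase';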
I have a scenario where a customer is going to be using log shipping to the DR site; however, we need to maintain the normal backup strategy on the current system (i.e. nightly full, every-6-hours differential and hourly transaction log backups). I know how to set up transaction log shipping and fail over to DR, but now the local backup strategy is going to be an issue. I currently use the [URL] .... maintenance solution.

Is it even possible to do regular backups locally, keeping the integrity of your backup strategy, with transaction log shipping enabled?
I have created a new login on the primary server and granted it db_owner permission on the primary database. How do I transfer this login to the secondary server and assign the same permission on the secondary database?
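A minimal sketch of one common approach, assuming a SQL login (the login name, password and SID below are placeholders): create the login on the secondary with the same SID it has on the primary, so the db_owner user already present in the synchronised database maps to it automatically.

-- On the primary: capture the login's SID.
SELECT name, sid
FROM sys.server_principals
WHERE name = N'app_login';

-- On the secondary: recreate the login with the SID copied from the primary.
CREATE LOGIN [app_login]
WITH PASSWORD = N'SamePasswordAsPrimary',
     SID = 0x1234567890ABCDEF1234567890ABCDEF;   -- hypothetical value; use the SID returned above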
We have set up log shipping between a primary and a secondary DB. The secondary DB is currently in Standby/Read-Only mode, and I cannot take a backup of the secondary DB.

Should we disable log shipping and change the DB option to multi-user mode to take the backup? Or is there a different method that works without disabling log shipping?
I have 2 SQL Server replicas configured on SQL Server 2012 AlwaysOn. e.g. SQL1 & SQL2.
I have configured a backup job on both SQL Servers with the following statement, and the job runs every 10 minutes.
declare @DBNAME sysname, @sqlstr varchar(500)
set @DBNAME = 'dba'
IF (sys.fn_hadr_backup_is_preferred_replica(@DBNAME) = 0)
BEGIN
    --Select 'This is not the preferred replica, exiting with success';
[Code] ...
I turned off SQL2 for Windows maintenance, so only SQL1 was online. Afterwards, I checked the backup folder and didn't see any new backup files created after SQL2 went offline. I reran the job and it still doesn't back up the database on the primary replica. Then I searched SQL Server Books Online, which says:
Prefer Secondary
Specifies that backups should occur on a secondary replica except when the primary replica is the only replica online. In that case, the backup should occur on the primary replica. This is the default option.
According to what it says, it should back up on the primary replica.
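For comparison, a hedged sketch of the usual shape of this kind of job step (the database name and backup path are placeholders, and the elided part of the original script is not reproduced here):

-- Back up only on the replica that the group's backup preference currently points at.
DECLARE @DBNAME sysname = 'dba';

IF sys.fn_hadr_backup_is_preferred_replica(@DBNAME) = 0
BEGIN
    PRINT 'This is not the preferred replica, exiting with success';
    RETURN;   -- leave the batch so the Agent job still reports success
END

BACKUP DATABASE @DBNAME
TO DISK = N'\\BackupShare\SQL\dba.bak'
WITH COPY_ONLY, COMPRESSION, INIT;   -- full backups taken on a secondary must be COPY_ONLY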
I have set up log shipping between two SQL 2005 servers, and everything seems to be working well. The files are transferring and restoring correctly.

My question is whether I need to add any backup procedures on the secondary server to prevent the secondary server's log file from growing continuously. Should I be doing a transaction log backup on the secondary server, or will that break the log chain?
If it makes a difference, the secondary server is in Standby mode after applying the logs.
On my secondary server, the database is stuck in a restoring state, and when I checked the AlwaysOn dashboard it says "This secondary database is not joined to the availability group".
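A minimal sketch of the usual fix, assuming the database has already been restored WITH NORECOVERY on the secondary (the database and group names are placeholders):

-- Run on the secondary replica to join the restoring database to the group.
ALTER DATABASE [MyDatabase]
SET HADR AVAILABILITY GROUP = [MyAvailabilityGroup];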
I am after T-SQL code which will simply load the next T-log backup file from a network share folder to a warm standby db on a secondary server. What is needed is a third server (SERVER X) to participate in log shipping (multiple targets).

Primary server (SERVER A). Secondary server (SERVER B), log shipped to via the GUI. Third server (SERVER X), which will contain the same log shipped db from SERVER A.
This will simply restore the logs from a network share to keep the db up to date.
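A minimal sketch of the restore statement involved, with placeholder names and paths (picking the correct "next" file from the share would still need to be scripted around this):

-- Apply one log backup from the share to the warm standby copy on SERVER X.
-- WITH STANDBY keeps the database readable between restores; use NORECOVERY if that isn't needed.
RESTORE LOG [MyShippedDb]
FROM DISK = N'\\FileShare\LogShipping\MyShippedDb_20150601120000.trn'
WITH STANDBY = N'D:\LogShipping\MyShippedDb_undo.tuf',
     STATS = 10;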
"If we fail over a SQL AG group on a failover cluster from one node to another making the secondary the new primary, is there any reason why we would have to fail it back over to the old primary node?"
In AlwaysOn, under the availability group's replica properties, you can see the Readable Secondary option. For the secondary server the Readable Secondary option is Yes, and for the primary it is Read-Intent. I believe Read-Intent allows only read-only connections and Yes allows all user connections.
What exactly does this mean for the primary and the secondary?
I have been working with a BI colleague to access the readable secondary through SSRS. For some reason it keeps complaining that ApplicationIntent is not a recognized keyword. I am starting to think it's something to do with the driver SSRS is using.
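For context, ApplicationIntent is only understood by the newer client libraries (SQL Server Native Client 11.0 or a recent ADO.NET, as far as I know); the older SQLOLEDB provider does not recognise the keyword. A hedged example of the sort of connection string that accepts it, with placeholder server and database names:

Provider=SQLNCLI11;Data Source=MyAgListener;Initial Catalog=MyReportDb;
Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True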
Windows 2012 R2, SQL 2012 (primary replica), SQL 2012 (secondary replica), SQL 2012 (secondary replica over a WAN site).
There are databases replicating across the three SQL Servers. The WAN line is having performance issues because of limited bandwidth, so I have to remove the SQL secondary replica at the WAN site temporarily and add it again later, when the WAN line is upgraded with better bandwidth. What is the best practice for removing the secondary replica and its replicating databases, and adding them back later from SQL Server Management Studio, without interruptions to the databases?
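A minimal sketch of the T-SQL equivalent, with placeholder names (run on the primary; adding the replica back later means an ADD REPLICA plus re-restoring/re-joining the databases on the WAN-site server):

-- Run on the primary replica: drop the WAN-site secondary from the group.
ALTER AVAILABILITY GROUP [MyAG]
REMOVE REPLICA ON N'WANSQL01';

-- Later, when bandwidth allows, add it back (asynchronous commit assumed for a WAN site):
-- ALTER AVAILABILITY GROUP [MyAG]
-- ADD REPLICA ON N'WANSQL01'
-- WITH (ENDPOINT_URL = N'TCP://WANSQL01.domain.local:5022',
--       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
--       FAILOVER_MODE = MANUAL);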
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox which works with file system operations like this, but it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.

An alternate approach would be to run the backup again, after the main backup, and change the destination to the alternate location. But I was thinking that another backup job would probably put more overhead on the server than a simple file copy operation. If I do end up taking this approach, I could also use the cleanup task to remove older .bak files in the alternate directory.
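A hedged sketch of one file-copy option from a T-SQL job step (paths are placeholders); it requires xp_cmdshell to be enabled, so an Agent job step of type Operating system (CmdExec) or PowerShell doing the same copy is often the cleaner choice:

-- Copy the freshly written backup file to a second drive.
DECLARE @cmd varchar(500) =
    'copy /Y "D:\Backups\MyDb_Full.bak" "E:\BackupCopies\MyDb_Full.bak"';
EXEC master.dbo.xp_cmdshell @cmd;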
I have an AlwaysOn availability group configured between 2 nodes (synchronous).

Automatic failover was working fine until recently.

I can fail over between the nodes manually, but automatic failover doesn't seem to be working. In my earlier tests, I would shut down the SQL service on the primary and, within seconds, the secondary replica would take over. Recently I performed the same test and the secondary replica enters the Resolving state and the DB is unavailable.
I have tried everything here: [URL] ....
The only change I made was switching the availability mode from Synchronous to Asynchronous. Could that be the cause?
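If the mode change is the culprit: automatic failover is only supported between synchronous-commit replicas, so switching back would be needed. A hedged sketch, with placeholder group and replica names (repeat for each replica involved):

-- Run on the primary replica.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (FAILOVER_MODE = AUTOMATIC);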
Do we have any way to insert, update and delete data in one table and apply the changes to a second table? Also, while updating records in the second table, can the data be encrypted?

I tried using a view, and it can insert, update and delete without any issues. But when I try to encrypt any fields after inserting data through the view, I am unable to do it.
CREATE TRIGGER Tableb_vw ON TableB
INSTEAD OF INSERT
AS
BEGIN
    --UserName = 'User' + substring(convert(varchar(32), UsersTrID), 1, 8)
    UPDATE TableA
    SET Lname = REPLACE(LEFT(Lname, 2), '''', 'Z');
END
What I would like to get:

1. Can we update the base tables and encrypt the second table's data while inserting or updating data?
2. If that is not supported using base tables, can we do it using views to encrypt the view data (some fields)?
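One hedged sketch of the general idea, using ENCRYPTBYPASSPHRASE inside an INSTEAD OF INSERT trigger; the table, column and passphrase names are made up for illustration, and the encrypted column is assumed to be varbinary (certificates or symmetric keys would be the more usual choice than a passphrase):

-- Hypothetical tables: TableA keeps the clear text, TableB_Enc keeps an encrypted copy.
CREATE TRIGGER trg_TableA_Encrypt ON TableA
INSTEAD OF INSERT
AS
BEGIN
    -- The insert into the same table does not re-fire this INSTEAD OF trigger.
    INSERT INTO TableA (Lname)
    SELECT Lname FROM inserted;

    INSERT INTO TableB_Enc (Lname_Encrypted)
    SELECT ENCRYPTBYPASSPHRASE('MySecretPassphrase', Lname)
    FROM inserted;
END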
I have an SP that runs in 18 minutes on the primary and 45 minutes on the secondary (poorly written cursor, I am trying to fix it). Both machines are exactly the same. I ran it in the middle of the night when no one was on the secondary node, which we use for reporting.
PLE: 7,000+
Avg Disk sec/Write: below 0.01
Avg Disk sec/Read: below 0.01
CPU: below 5%
Both machines are set to MAXDOP 4
The secondary replica database (set up in async mode) of our AlwaysOn group went into restricted mode during the weekly reindex operation.

So I tried the steps below:

1) Executed the following statement on the secondary replica database where the issue exists:
alter database <DBNAME> set multi_user with rollback immediate
but it failed with the error saying "The operation cannot be performed on database "dbname" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group. ALTER DATABASE statement failed."
2) The primary database is MULTI_USER, but I still tried the following command on the primary replica database (thinking it would replicate):
alter database <DBNAME> set multi_user
but no luck. The secondary AlwaysOn database shows (Synchronizing), as AlwaysOn is set to async mode, but the command doesn't replicate across to the secondary.

So we are left with the only option of setting AlwaysOn up again, but I want to avoid that as the database size is huge.
What happens when an automatic failover occurs in a two-server AlwaysOn availability group configuration where the secondary replica is configured as read-only?

Will it only allow read-only connections, or will it become read-write and accept INSERTs, UPDATEs and DELETEs when it is assigned the new role as primary?

Is it correct that adding a third server/node that just acts as a passive replica, used only for automatic failover to support true HADR, would NOT need another license, and that licenses would only be required for the previous primary and secondary (read-only) replicas?
I checked the server and found that the LS restore job is failing, while the backup and copy jobs run fine without any issue. I also observed that the .trn files exist in the copy folder on the secondary server. When I try to restore a .trn file I get the error below, and I noticed that the last log backup file restored to the secondary database was on May 2nd, 2015.
2015-06-02 12:25:00.72*** Error: The log in this backup set begins at LSN 761571000000022500001, which is too recent to apply to the database. An earlier log backup that includes LSN 721381000002384200001 can be restored.
From the restore job history, the details are below.
Message
2015-06-02 12:25:00.72 *** Error: The file 'xxxx\_20150530104503.trn' is too recent to apply to the secondary database 'database'.(Microsoft.SqlServer.Management.LogShipping) ***
2015-06-02 12:25:00.72 *** Error: The log in this backup set begins at LSN 761571000000022500001, which is too recent to apply to the database. An earlier log backup that includes LSN 721381000002384200001 can be restored. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider) ***
2015-06-02 12:25:00.73 Searching for an older log backup file. Secondary Database: 'database'
2015-06-02 12:25:00.73 *** Error: Could not find a log backup file that could be applied to secondary database 'database'.(Microsoft.SqlServer.Management.LogShipping) ***
2015-06-02 12:25:00.74 Deleting old log backup files. Primary Database: 'database'
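A hedged sketch of how to find which log backups on the primary cover the missing LSN range (the database name is a placeholder), by querying msdb's backup history:

-- Run on the primary: list log backups with their LSN ranges and file locations.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'database'
  AND bs.type = 'L'                 -- log backups only
ORDER BY bs.backup_start_date;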
In my environment AlwaysOn is in use. Today I observed that the primary server failed over to the secondary server, and now the secondary server is acting in the primary role.

Can I find out when the failover happened and who performed it? Is there any script to find this?
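A hedged sketch of one way to see when the role change happened, reading the built-in AlwaysOn_health extended events session (the session, event and field names below are the usual defaults but may differ by version; working out who initiated a manual failover may still require the SQL error log or cluster log):

-- Read replica state-change events from the AlwaysOn_health session's .xel files.
WITH xe AS (
    SELECT CAST(event_data AS XML) AS event_xml
    FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL)
)
SELECT
    event_xml.value('(/event/@timestamp)[1]', 'datetime2')                                      AS event_time_utc,
    event_xml.value('(/event/data[@name="availability_group_name"]/value)[1]', 'nvarchar(128)') AS ag_name,
    event_xml.value('(/event/data[@name="previous_state"]/text)[1]', 'varchar(64)')             AS previous_state,
    event_xml.value('(/event/data[@name="current_state"]/text)[1]', 'varchar(64)')              AS current_state
FROM xe
WHERE event_xml.value('(/event/@name)[1]', 'varchar(128)') = 'availability_replica_state_change'
ORDER BY event_time_utc DESC;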
I've configured log shipping for DR purposes. I'm concerned that the physical location of the secondary is misreported by SQL Server Management Studio.

Viewing the secondary's file locations (in Management Studio, under the database's Properties > Files page) shows the path of the primary DB (I expected it to show the path of the secondary).

This SQL command shows the correct/actual paths of both the primary and secondary DBs when run on their host servers:
SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files
Is this just cosmetic?
Here is an Example of how the Studio shows the incorrect path for the secondary.
I know now that the AlwaysOn feature HAS to be installed/configured in a Windows clustering environment, BUT do the secondary replicas, like a disaster recovery replica residing in a different data center, also HAVE to be in a Windows clustering environment, or can one reside on a single (standalone) SQL Server instance?
I have created a Test SSIS Package within BIDS (VS 2K8, v 9.0.30729.4462 QFE; .NET v 3.5 SP1) that connects to our Test Listener.
There is only 1 Connection Manager object, an OLE DB Provider for SQL Server.
The ConnectionString lists: Provider=SQLOLEDB.1;Integrated Security=SSPI
The Test Connection within BIDS works.
The package control flow has just 1 object, an Execute SQL Task that performs an EXEC on an SP that contains only a SELECT (read).
The Package runs within BIDS.
I've placed this package within a job on the primary node. I've run the job successfully with the 32-bit runtime both on and off. The location of the file on the server happens to be on a share that resides on what is currently the secondary node.

When I try to run an exact copy of this job on the secondary node (which has been set up with Readable Secondary: Yes, read all connections), I get an error regardless of the 32-bit runtime option. At this point, the location of the file is on the secondary node.

The error is: "Login failed for user 'OurDomain\Agent_Account'".

The Agent account is a member of NT Service\SQLServerAgent on both instances, and that account is a member of sysadmin. Explicitly adding the Agent account as well, and giving that account sysadmin, makes no difference either.
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:\SQL\MS SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName' (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.

Starting a few minutes later, the Agent job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary, and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries only exist for DR purposes; no one runs reports against them or uses them at all. I had a similar problem last weekend with a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
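For reference, a hedged sketch of the re-initialisation steps described above (database name and paths are placeholders):

-- On the primary: copy-only full backup, so the existing backup chain is untouched.
BACKUP DATABASE [MyDatabaseName]
TO DISK = N'\\BackupShare\MyDatabaseName_Reinit.bak'
WITH COPY_ONLY, COMPRESSION;

-- On the secondary: replace the broken copy and leave it restoring so log shipping can resume.
RESTORE DATABASE [MyDatabaseName]
FROM DISK = N'\\BackupShare\MyDatabaseName_Reinit.bak'
WITH NORECOVERY, REPLACE;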
Hi, does anyone have an idea why the backup to disk takes 6 hours while the backup to tape takes 1 hour? Also, what is the difference between creating a backup device and just creating the file on the fly when I set up the backup? Thanks, Felix.
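On the second question, a hedged sketch of the two approaches with placeholder names and paths; the backup contents end up the same, a backup device is just a reusable logical alias for the physical file or tape:

-- Option 1: a named backup device.
EXEC sp_addumpdevice 'disk', 'MyDbBackupDevice', 'D:\Backups\MyDb.bak';
BACKUP DATABASE [MyDb] TO MyDbBackupDevice;

-- Option 2: backing up straight to a file path "on the fly".
BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb.bak';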
For the last five days, the SQL Server production backups to tape have been incredibly slow. Actually, they don't complete; they have been running for 60 hours. Everything on the servers (clustered production environment) seems fine, and nothing looks untoward or unusual. Any ideas what could be causing it? We're thinking of rebooting tonight.