SQL Server 2012 :: Restore Database With Minimal Downtime
Oct 12, 2015
I have a process that restores a production DB, overwriting the existing copy each night. I'd like to keep the solution up for as long as possible, and this will matter more if I want to refresh it during the day (when there are more queries) too. The system sees about 20 queries per hour; it underpins a reporting system, not an OLTP system.
It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.
But I need to think about detecting and waiting for queries to complete on the prod DB before removing/demoting it (actually, I thought to rename it, then reuse it as the next copy to update).
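A minimal sketch of that swap, assuming hypothetical database names ProdDB (live), ProdDB_Staging (the freshly restored copy) and ProdDB_Old (the demoted copy); the wait loop polls sys.dm_exec_requests so running reports can drain before the cutover:

-- Wait for in-flight queries against the live DB to finish (names are placeholders).
WHILE EXISTS (SELECT 1 FROM sys.dm_exec_requests
              WHERE database_id = DB_ID(N'ProdDB'))
    WAITFOR DELAY '00:00:05';

-- Swap: demote the old copy, promote the fresh restore.
ALTER DATABASE ProdDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ProdDB MODIFY NAME = ProdDB_Old;
ALTER DATABASE ProdDB_Staging MODIFY NAME = ProdDB;
-- ProdDB_Old stays in SINGLE_USER, which is harmless if it is only ever the
-- next restore target; otherwise set it back to MULTI_USER.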
I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. Some tables have over 100 million rows, so based on that earlier timing, if my calculations are correct it will take close to an hour to apply the primary key to a table with around 100 million rows.
My current solution is to create another table (named table2) with no indexes or primary key, pump over only about 5 days' worth of data, then apply the primary key. Then a script would gradually populate table2 with the rest of the data, inserting something like 100k rows per hour. Keep in mind that table2 is heavily updated with new records.
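One hedged sketch of that gradual backfill, assuming a key column Id and a payload column (both purely illustrative) so each batch picks up where the last one stopped; the NOT EXISTS guard skips the 5 days of rows already seeded:

DECLARE @LastId bigint = 0, @rows int = 1;
DECLARE @batch TABLE (Id bigint);

WHILE @rows > 0
BEGIN
    DELETE FROM @batch;

    INSERT INTO dbo.table2 (Id, Payload)        -- column list is illustrative
    OUTPUT inserted.Id INTO @batch
    SELECT TOP (100000) t1.Id, t1.Payload
    FROM dbo.table1 AS t1
    WHERE t1.Id > @LastId
      AND NOT EXISTS (SELECT 1 FROM dbo.table2 AS t2 WHERE t2.Id = t1.Id)
    ORDER BY t1.Id;                             -- lowest remaining rows first

    SET @rows = @@ROWCOUNT;

    IF @rows > 0
    BEGIN
        SELECT @LastId = MAX(Id) FROM @batch;   -- advance the watermark
        WAITFOR DELAY '01:00:00';               -- ~100k rows per hour, per the post
    END
END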
The task:
- restore a backup of a 3rd party database onto one of our servers (it has no users that I can use)
- there is some ETL processing, so we're using Control-M to manage the process
- create a database user and grant it db_datareader
I'd like to do this without granting any users elevated privileges if possible.
What I've done so far is grant the Control-M user (a domain user) dbcreator rights and make it the owner of our copy of the database that is being refreshed.
The refresh is completing, but Control-M is not able to log onto the database to create the user.
What is the best way to accomplish this task without granting the control-m user sysadmin rights?
Would I be able to do it if I used a SQL Agent job for the restore and user creation?
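That Agent-job route can work; here is a sketch under assumptions (placeholder names throughout, and it presumes a login for the new user already exists). The job runs under a privileged context, and the Control-M login only needs rights to start it:

-- Job step body (runs as the job owner / Agent service account):
RESTORE DATABASE VendorDb
    FROM DISK = N'\\etlshare\VendorDb.bak'   -- path is a placeholder
    WITH REPLACE;
GO
USE VendorDb;
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'etl_reader')
    CREATE USER etl_reader FOR LOGIN etl_reader;  -- login assumed to exist
ALTER ROLE db_datareader ADD MEMBER etl_reader;
GO

-- Let the Control-M login start jobs without sysadmin:
USE msdb;
CREATE USER [DOMAIN\ControlM_Svc] FOR LOGIN [DOMAIN\ControlM_Svc];
ALTER ROLE SQLAgentOperatorRole ADD MEMBER [DOMAIN\ControlM_Svc];
-- Control-M then just calls: EXEC msdb.dbo.sp_start_job N'Refresh VendorDb';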
Hello, I'm upgrading from SQL 7 to SQL 2000 on another box. To minimize the downtime I would like to:
1) back up my SQL 7 database,
2) copy it to the new box with SQL 2000 already installed,
3) restore the database on the SQL 2000 box,
4) shut down my SQL 7 database,
5) copy the transaction logs to the SQL 2000 database,
6) restore the transaction logs to the SQL 2000 database,
7) bring up SQL 2000.
My only concern with this is restoring the transaction logs that were created on SQL 7 to SQL 2000. Do you know if I can do this? Do you see any (other) problem(s) with my plan? Thanks, Scott
We use Netbackup for our SQL servers to backup and restore databases. I would like the service account used by Netbackup to have as limited permissions as possible. The account should be able to backup and restore a db without being able to read any of the content. Right now the account jobs fail if the service account is not in the sysadmin role.
I removed the account from sysadmin and limited it to dbcreator and public, but the jobs fail.
How do I set up an account so that people who know the service account password can't log in with it and read DB information?
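For the backup half, db_backupoperator grants BACKUP without any SELECT rights; restore is the hard part, since it effectively needs dbcreator plus ownership of the target DB. A sketch with a placeholder account name (sp_MSforeachdb is undocumented):

USE master;
CREATE LOGIN [DOMAIN\NetbackupSvc] FROM WINDOWS;          -- placeholder name
ALTER SERVER ROLE dbcreator ADD MEMBER [DOMAIN\NetbackupSvc];

-- Grant backup rights in every user database, with no read access:
EXEC sp_MSforeachdb N'
USE [?];
IF DB_ID() > 4   -- skip system DBs in this sketch
BEGIN
    CREATE USER [DOMAIN\NetbackupSvc] FOR LOGIN [DOMAIN\NetbackupSvc];
    ALTER ROLE db_backupoperator ADD MEMBER [DOMAIN\NetbackupSvc];
END';

As for people logging in interactively with the service account: that is better handled outside SQL Server, e.g. a deny-interactive-logon policy on the AD account, since any SQL-side permission the backup software can use is equally usable by a human who knows the password.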
I am having issues restoring a backup of a database onto the same server. I know many of you will ask why I need to restore on the same server; well, the need came about that way. I think I now know the problem: the original DB is still there and I am restoring the same DB again on that server, so the .mdf and .ldf file names collide.
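The usual way out of that collision is to restore under a new database name and move the files, along these lines (logical file names below are assumptions; check them first with RESTORE FILELISTONLY):

RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';  -- shows the logical names

RESTORE DATABASE MyDb_Copy
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE N'MyDb'     TO N'D:\Data\MyDb_Copy.mdf',        -- new physical file names
     MOVE N'MyDb_log' TO N'D:\Data\MyDb_Copy_log.ldf';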
I'm preparing a checklist for myself before getting ready to migrate from 2005 to 2012. Our largest database is a nice one at over 250GB. I'm thinking my best bet to minimize downtime would be to restore the DB (NORECOVERY) on the new server and keep rolling it forward with transaction logs. Eventually I'll need to take the old DB offline, do one last backup, and apply it to the new server, but that should be a small window given that the whole process could take several hours.
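The roll-forward sequence would look roughly like this (paths and names are placeholders):

-- On the new server: seed from a full backup, leave it restoring:
RESTORE DATABASE BigDb FROM DISK = N'\\share\BigDb_full.bak' WITH NORECOVERY;

-- Keep applying log backups as they are taken on the old server:
RESTORE LOG BigDb FROM DISK = N'\\share\BigDb_log_001.trn' WITH NORECOVERY;

-- Cutover: a tail-log backup on the old server also leaves the old database
-- in a restoring state, so nothing can keep writing to it:
BACKUP LOG BigDb TO DISK = N'\\share\BigDb_tail.trn' WITH NORECOVERY;

-- Apply the tail on the new server and bring it online:
RESTORE LOG BigDb FROM DISK = N'\\share\BigDb_tail.trn' WITH RECOVERY;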
I'm currently working on a project at work to test the effects of database compression, trying to obtain measurable data on the impact of the compression on other server resources, and therefore whether the reduction in space used is worth the extra overhead. This has involved taking a trace of a production customer's workload for a period of time and replaying it against a backup using Distributed replay in synchronised mode.
I'm then taking a trace of that replay, as well as using perfmon to record useful data about the server, before and after compression is enabled. Finally, I'm loading the traces into a tool called Qure to analyse the impact of the compression on reads, writes, CPU, overall duration etc.
What I'm finding is that even across two different 'baseline' runs, which replay the exact same workload against the exact same database, performance differs to a significant enough degree that it calls the validity of the test into question. I can only put this down to the fact that this server is a VM, which affects available resources, which in turn affects the execution plans the workload generates and causes different replays of the same workload. I'm therefore looking at doing this on a standalone server, but I still can't be sure the differences will go away.
How can I make tests like this as consistent as possible across multiple runs, when elements outside SQL Server are effectively out of my control?
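One thing that can help (a suggestion, not from the post) is forcing each run to start from the same cache state on the test box, so plan and buffer-pool differences at least start from zero:

CHECKPOINT;               -- flush dirty pages so the next command can drop them
DBCC DROPCLEANBUFFERS;    -- cold buffer pool for every run (test servers only)
DBCC FREEPROCCACHE;       -- force fresh plan compilation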
I understand that minimal logging can occur on a heap with a nonclustered index as long as ([URL] ...):
* not replicated
* TABLOCK is used
* table is empty
The following test seems to contradict this.
In the test I create a non-indexed heap, insert some records and check the log, then repeat the test on an indexed heap.
The results suggest that even though the conditions for minimal logging into an indexed heap are met, minimal logging is not happening, although it does happen on a non-indexed heap. What am I doing wrong?
CREATE DATABASE logtest
GO
USE logtest
GO
CREATE TABLE test (field varchar(100))
GO
CHECKPOINT
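The rest of the test presumably looks something like this sketch (fn_dblog is undocumented, and the database must be in SIMPLE or BULK_LOGGED recovery for minimal logging to apply at all):

-- Insert with TABLOCK into the empty heap, then count the log records it generated.
INSERT INTO test WITH (TABLOCK) (field)
SELECT TOP (10000) REPLICATE('x', 100)
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

SELECT COUNT(*) AS log_records
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE '%test%';   -- only records touching our table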
The transaction log takes up a lot of space on my database, and even after I try truncating the log, doing a transaction log backup, and then shrinking it, I am not allowed to reduce the size of the transaction log to less than 250MB. Is there some reason why this space is required?
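It is worth checking what the log is waiting on before shrinking, and remember that the file can only shrink back to a virtual-log-file boundary, which may be exactly where that 250MB floor comes from. A sketch (database and logical log file names are placeholders):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDb';              -- NOTHING means the log is free to shrink

DBCC SQLPERF (LOGSPACE);           -- how full each database's log really is

USE MyDb;
DBCC SHRINKFILE (MyDb_log, 100);   -- target size in MB; stops at a VLF boundary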
There is a great book on database refactoring that contains a comprehensive set of recipes on how to revise databases that are supposed to be always online and may have various clients that can't all be upgraded at the same time. I guess this is a typical case with large databases, and I would be surprised if Amazon stopped their servers just to move a column from one table to another. The book describes the necessary steps for such changes. Basically it's all about creating intermediate database schemas that are used during the transition period.
For example, if we need to move a column from one table to another:
Version 1.
Table A columns: Name, Price
Table B columns: Quantity, Date
Let's say we move Price to table B:
Version 2.
Table A columns: Name
Table B columns: Quantity, Date, Price
The book suggests an intermediate version:
Version 1_2.
Table A columns: Name, Price
Table B columns: Quantity, Date, Price
Additional trigger that synchronizes the "Price" columns between A and B.
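A minimal sketch of that synchronization trigger, assuming a shared key column ItemId that the excerpt omits (the mirror trigger on B is analogous, with a nesting guard so the pair don't fire each other forever):

CREATE TRIGGER trg_A_SyncPrice ON A
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF TRIGGER_NESTLEVEL() > 1 RETURN;   -- don't ping-pong with B's trigger
    IF UPDATE(Price)
        UPDATE b
        SET    b.Price = i.Price
        FROM   B AS b
        JOIN   inserted AS i ON i.ItemId = b.ItemId;  -- shared key is an assumption
END;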
Version 1_2 can be used both by clients written for version 1 and by clients written for version 2. Software developers don't need to rush their upgrades; the transition can last months and include several changes.
This technique requires discipline in version control management, but looks like a very good way to implement non-interruptible database schema upgrades. I wonder if this is the only option available for schema upgrades with no downtime. I can't think of anything else; is this how large data warehouses update their databases?
I need a query to find the uptime and downtime of a server from the MOM database; I don't know which tables MOM actually stores this information in.
I need this very urgently.
Thanks in advance
You can use this code to find out where the information is stored in the MOM tables:

CREATE PROC [dbo].[SearchMyTables]
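The proc body didn't survive the post; a plausible shape for such a helper (purely a sketch) just walks the metadata so you can spot likely uptime/downtime tables:

CREATE PROC dbo.SearchMyTables
    @Pattern sysname          -- e.g. N'avail' or N'state'
AS
BEGIN
    SET NOCOUNT ON;
    SELECT t.name AS TableName, c.name AS ColumnName
    FROM sys.tables AS t
    JOIN sys.columns AS c ON c.object_id = t.object_id
    WHERE t.name LIKE '%' + @Pattern + '%'
       OR c.name LIKE '%' + @Pattern + '%';
END;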
I am on SQL 2012 standard version and I am writing a script to restore database from .bak files on a network.
ALTER DATABASE DB1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- Restore database
RESTORE DATABASE DB1
FROM DISK = 'N:\SQLBackup\Daily\DB1_backup_2015_06_22_194002_0500494.bak'
WITH REPLACE
GO

ALTER DATABASE DB1 SET MULTI_USER
GO
Since I have to restore about 100 databases, I am planning to put the script in a cursor. However, my problem is how to get the .bak file name dynamically.
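One hedged way to do it, using the undocumented xp_dirtree to list the folder and assuming the DBName_backup_<timestamp>.bak naming shown above:

DECLARE @files TABLE (subdirectory nvarchar(260), depth int, isfile bit);
INSERT INTO @files
EXEC master.sys.xp_dirtree N'N:\SQLBackup\Daily', 1, 1;    -- path is an assumption

DECLARE @db sysname, @bak nvarchar(260), @sql nvarchar(max);
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE database_id > 4;  -- filter to suit
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @bak = NULL;
    SELECT TOP (1) @bak = subdirectory        -- newest file sorts last by name
    FROM @files
    WHERE isfile = 1 AND subdirectory LIKE @db + N'[_]backup[_]%.bak'
    ORDER BY subdirectory DESC;

    IF @bak IS NOT NULL
    BEGIN
        SET @sql =
            N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET SINGLE_USER WITH ROLLBACK IMMEDIATE; ' +
            N'RESTORE DATABASE ' + QUOTENAME(@db) +
            N' FROM DISK = N''N:\SQLBackup\Daily\' + @bak + N''' WITH REPLACE; ' +
            N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET MULTI_USER;';
        EXEC (@sql);
    END
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs; DEALLOCATE dbs;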
Sometime during the night last night, some user account permissions were "lost". Am I right to think that restoring the master database would be the way to go? We have a 2-node 2012 cluster; I stop the cluster resource and start the DB in single-user mode from the active node, but somehow the SharePoint farm is still trying to connect, so I can't get logged in as the single user. What method could I use to stop users from connecting when I don't have access to the SharePoint farm?
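One trick (a suggestion, not from the post): restrict single-user mode to a named application, so the farm can't grab the only connection before you do:

-- From an elevated prompt on the active node (default instance shown; on a
-- cluster, add -m"SQLCMD" to the SQL Server startup parameters instead):
--     NET START MSSQLSERVER /m"SQLCMD"
--     sqlcmd -S YourServer -E
-- Only a client whose application name is SQLCMD can now connect. Then, from
-- that sqlcmd session (backup path is a placeholder):
RESTORE DATABASE master
FROM DISK = N'X:\Backups\master_full.bak'
WITH REPLACE;
-- The instance stops automatically once master is restored; restart it normally.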
I'm working on a project where I need to build a small database and then copy it to a server at the client's site. I can't connect directly, so I have to use a VPN connection and Remote Desktop, copy the database backup from my machine to the cloud, then download it to the client machine. The project is still in the early stages, and the client is still sending me data in CSV files and Excel spreadsheets. I periodically need to do a complete refresh of the database at the client. I've hacked my way through it a couple of times, but I need to know the proper way to do it. I get errors on the restore step telling me the file is in use.
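"File is in use" on restore just means sessions are still attached to the target database; the standard sequence is (ClientDb and the path are placeholders):

USE master;
ALTER DATABASE ClientDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kick everyone off
RESTORE DATABASE ClientDb
    FROM DISK = N'D:\Transfer\ClientDb.bak'
    WITH REPLACE;
ALTER DATABASE ClientDb SET MULTI_USER;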
If one is regularly taking backups of the system databases, when does it become necessary to rebuild the master database? I am looking for a situation where rebuilding master is preferred to restoring it from backup.
I have an SQL .bak file and would only like to restore specific columns, as one of the columns is a free-text field that substantially increases the size of the file. I can't restore it due to disk space constraints, so dropping the column isn't possible if I can't get the table into a database locally.
I'm using SQL Server 2012 R2 and am working on configuring vendor access to a particular DB. I have a test DB and (what will eventually be) the production DB. I've configured security for the test DB and want to back that up, then restore it (including all settings) as the prod one, renaming it to the prod DB name.
We have one database with Filestream enabled. There is one table "dbo.files" which uses Filestream.
We created a filestream filegroup Filegroup1 and added 3 data containers to it. (3 filestream data containers within the same filegroup.)
We have three LUNs F:, G:, H: each with a capacity of 2TB (That is the limitation). F: and G: are almost full. So, I restricted their growth so inserts do not happen into these data containers. Inserts are now going into H: drive which has lots of free space. Our application code prevents any sort of deletes or updates to this table. So data in the growth restricted containers will never change.
Now the database is around 6 TB in size, and backups are a challenge. We are contemplating migrating storage to NetApp and using their SnapManager console, which is much faster.
However, until then, we need a solution with native SQL backups. We tried partial backups and piecemeal restore.
We tried this on a test server:
1) Partial backup only the read-only data containers first, (F: and G:) (The plan is to back these up just once a month as this data never changes).
2) Partial backup the primary filegroup plus the third data container in the Filestream filegroup which is subject to inserts (H:)
While restoring, we tried the online restore. First, I restored the backup obtained from step 2 above with the recovery option. Then I restored the backup obtained from step 1 with recovery. I see that the database was brought online; however, when I try to query the dbo.files table, I get an error stating that some files of the Filestream filegroup are offline.
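A likely culprit, hedged: a growth-restricted filegroup is not the same as a READ_ONLY one, and files from a read-write filegroup that are restored from an older backup only come online after rolling forward the log backups taken since. The sequence would look roughly like this (full recovery model assumed; logical file names and paths are placeholders):

-- 1) Restore the primary + read-write portion and recover the database:
RESTORE DATABASE FilesDb READ_WRITE_FILEGROUPS
    FROM DISK = N'X:\FilesDb_RW.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG FilesDb FROM DISK = N'X:\FilesDb_tail.trn' WITH RECOVERY;
-- Database is online now, but dbo.files is only partly available.

-- 2) Restore the older containers, then roll them forward with every log
--    backup taken since that monthly backup; until then they stay offline:
RESTORE DATABASE FilesDb FILE = N'FS_F', FILE = N'FS_G'
    FROM DISK = N'X:\FilesDb_RO.bak'
    WITH NORECOVERY;
RESTORE LOG FilesDb FROM DISK = N'X:\FilesDb_log_001.trn' WITH NORECOVERY;
RESTORE LOG FilesDb FROM DISK = N'X:\FilesDb_log_002.trn' WITH RECOVERY;

-- Marking the filegroup READ_ONLY before its monthly backup would remove the
-- log-chain requirement, but that isn't possible while H: still takes inserts
-- in the same filegroup.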
We have a bunch of SQL 2012 databases which use SQL Server authentication (essentially local dev instances). Is it possible to take a backup of one of these databases and then push it onto a (central) server which uses integrated security (based on Active Directory authentication), using a script to change and map the authentication model in the process?
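There is no direct remap from a SQL user to a Windows login, so a post-restore script typically recreates the principals; a sketch with placeholder names:

USE RestoredDb;
-- Server-level login for the AD principal, if it isn't there yet:
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'DOMAIN\AppUser')
    CREATE LOGIN [DOMAIN\AppUser] FROM WINDOWS;

-- Database user for it, copying role membership from the old SQL user:
CREATE USER [DOMAIN\AppUser] FOR LOGIN [DOMAIN\AppUser];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\AppUser];  -- repeat per role held
ALTER ROLE db_datawriter ADD MEMBER [DOMAIN\AppUser];

-- Finally drop the orphaned SQL-auth user from the restored copy:
DROP USER OldSqlUser;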
I have backed up databases from a 2008 server and now need to restore them to a 2012 server. The only issue is that I need a script, because I have over a hundred databases.
Last week my database crashed, and somehow I managed to restore it on SQL 2K12, but after the restoration all the relationships are gone, and SQL Server shows the message below when I open a diagram of the database: "Table(s) were removed from the diagram because privileges were removed to these table(s) or the table(s) were dropped." How do I get back all the relationships between the tables?
My SQL Server 2005 database was down last night. From the logs I can find out only the details below.
Date: 11/30/2007 1:01:34 AM
Log: SQL Server (Archive #1 - 11/30/2007 1:01:00 AM)
Source: spid4s
Message: SQL Server is terminating in response to a 'stop' request from Service Control Manager. This is an informational message only. No user action is required.
Now I want to find the root cause. Who stopped the service? And if the service was stopped automatically, which program/service is responsible? Please suggest how to do root-cause analysis for this downtime.
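A starting point (xp_readerrorlog is undocumented; the parameters are log number, log type where 1 = SQL error log, then up to two search strings):

EXEC master.dbo.xp_readerrorlog 1, 1, N'Service Control Manager', NULL;
EXEC master.dbo.xp_readerrorlog 1, 1, N'shutdown', NULL;
-- For who issued the stop, check the Windows System event log around the same
-- time: Service Control Manager events 7035/7036 record the stop request and,
-- depending on the Windows version, the account that sent it.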
I'm trying to run a query to check the downtime on production lines, but if a line has more than one cause assigned for the downtime, it repeats the info for each cause.
This is the code.
SELECT D.Line AS Line,
       D.ProductionLine AS ProductionLine,
       D.Shift AS Shift,
       D.DownTime,
       CONVERT(VARCHAR(10), D.DatePacked, 101) AS DatePacked,
       AssignedDowntime,
       (D.DownTime - AssignedDowntime) AS NOASSIGNED,
       R.Enviromental, R.Equipment, R.IT_Systems,
       R.Material_External, R.Quality, R.Material_Internal,
       R.Method, R.PreProduction, R.People
FROM (
    SELECT Line, Shift, DatePacked, SUM(CAST(Downtime AS INT)) AS AssignedDowntime,
[Code] ....
I'm expecting that if there is more than one Down Reason, it will be included on the same line. At the moment, if I have more than one reason, it creates a line for each one. For example:
If I have a total downtime of 50 minutes, and 10 minutes are assigned to itequipment, 30 to testequipment and 10 to quality issues, I will get an output like this:
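Conditional aggregation is the usual fix: pivot each reason into its own column so a line/shift/date collapses to one row. A sketch against guessed names from the fragment above:

SELECT  Line,
        Shift,
        CONVERT(VARCHAR(10), DatePacked, 101) AS DatePacked,
        SUM(CAST(Downtime AS INT)) AS AssignedDowntime,
        SUM(CASE WHEN Reason = 'IT_Systems' THEN CAST(Downtime AS INT) ELSE 0 END) AS IT_Systems,
        SUM(CASE WHEN Reason = 'Equipment'  THEN CAST(Downtime AS INT) ELSE 0 END) AS Equipment,
        SUM(CASE WHEN Reason = 'Quality'    THEN CAST(Downtime AS INT) ELSE 0 END) AS Quality
FROM    dbo.DowntimeReasons            -- hypothetical source table/column names
GROUP BY Line, Shift, CONVERT(VARCHAR(10), DatePacked, 101);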
I have a TDE backup from Server A, but there is no backup of the certificates or keys from Server A, and no one knows the password used when creating those backups. How do you restore database XYZ on Server B in that situation?
I have a .bak file downloaded from the internet, and I have installed SQL Server 2012. My problem is that I cannot restore this .bak file; I get this error message:
System.Data.SqlClient.SqlError: The operating system returned the error '5(Access is denied.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SRO_VT_SHARD.mdf'. (Microsoft.SqlServer.SmoExtended)
- .bak file version = 661 10 50 1600 = SQL Server 2008 R2
- my SQL version = Microsoft SQL Server 2012 - 11.0.2100.60 (Intel X86)
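Two things are going on there: the target path is the 2008 R2 instance's data folder (which the 2012 service account may not even be able to write to), and the restore isn't relocating the files. A sketch of the fix (logical names are assumptions; RESTORE FILELISTONLY shows the real ones):

RESTORE FILELISTONLY FROM DISK = N'C:\Backups\SRO_VT_SHARD.bak';

RESTORE DATABASE SRO_VT_SHARD
FROM DISK = N'C:\Backups\SRO_VT_SHARD.bak'
WITH MOVE N'SRO_VT_SHARD'     TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\SRO_VT_SHARD.mdf',
     MOVE N'SRO_VT_SHARD_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\SRO_VT_SHARD_log.ldf';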