I have backed up databases from a 2008 server and now I need to restore them to a 2012 server. The only issue is that I need a script because I have over a hundred databases.
I am on SQL Server 2012 Standard and I am writing a script to restore databases from .bak files on a network share.
ALTER DATABASE DB1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- Restore database
RESTORE DATABASE DB1
FROM DISK = 'N:\SQLBackup\Daily\DB1_backup_2015_06_22_194002_0500494.bak'
WITH REPLACE

ALTER DATABASE DB1 SET MULTI_USER
GO
Since I have to restore about 100 databases, I am planning to put the script in a cursor. However, my problem is how to get the .bak file name dynamically.
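One possible approach, sketched below: list the .bak files with the undocumented xp_dirtree procedure, then cursor over them and build each RESTORE statement dynamically. The folder path and the file-naming convention (everything before '_backup_' is the database name) are assumptions; adjust both to match the share. If the new server's drive layout differs, MOVE clauses built from RESTORE FILELISTONLY would be needed as well.

DECLARE @BackupDir nvarchar(260) = N'N:\SQLBackup\Daily\';  -- assumed share path

-- xp_dirtree with the third argument = 1 lists files as well as folders
DECLARE @Files TABLE (FileName nvarchar(260), Depth int, IsFile bit);
INSERT INTO @Files (FileName, Depth, IsFile)
EXEC master.sys.xp_dirtree @BackupDir, 1, 1;

DECLARE @FileName nvarchar(260), @DbName sysname, @sql nvarchar(max);

DECLARE file_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT FileName FROM @Files
    WHERE IsFile = 1 AND FileName LIKE N'%[_]backup[_]%.bak';

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @FileName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- assumes maintenance-plan naming: <DbName>_backup_<timestamp>.bak
    SET @DbName = LEFT(@FileName, CHARINDEX(N'_backup_', @FileName) - 1);

    SET @sql = N'RESTORE DATABASE ' + QUOTENAME(@DbName)
             + N' FROM DISK = N''' + @BackupDir + @FileName + N''' WITH REPLACE;';
    PRINT @sql;      -- review the generated statements first
    -- EXEC (@sql);  -- uncomment to actually run the restores

    FETCH NEXT FROM file_cursor INTO @FileName;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;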
Sometime during the night last night, some user account permissions were "lost". Am I right to think that restoring the master database would be the way to go? We have a two-node 2012 cluster; I stop the cluster resource and start the instance in single-user mode from the active node. Somehow the SharePoint farm is still grabbing the connection, so I can't get logged in single-user. What method could I use to stop users from connecting when I don't have access to the SharePoint farm?
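One trick that usually works here: qualify the single-user startup with a client application name, so only that application can take the one available connection and SharePoint is kept out. A sketch for a default instance; on a clustered instance, add -m"SQLCMD" to the SQL Server service startup parameters and bring the resource online rather than using net start. The backup path is hypothetical:

REM Start SQL Server in single-user mode; only a client identifying
REM itself as SQLCMD may take the single connection
net start MSSQLSERVER /m"SQLCMD"

REM Connect from the same node and restore master
sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK = N'N:\SQLBackup\master.bak' WITH REPLACE"

Note that after a successful restore of master the instance shuts itself down; restart it normally (without -m) afterwards.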
I'm working on a project where I need to build a small database and then copy it to a server at the client's site. I can't connect directly, so I have to use a VPN connection and Remote Desktop: copy the database backup from my machine to the cloud, then download it to the client machine. The project is still in the early stages, and the client is still sending me data in CSV files and Excel spreadsheets, so I periodically need to do a complete refresh of the database at the client. I've hacked my way through it a couple of times, but I need to know the proper way to do it. I get errors on the restore step, telling me the file is in use.
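The "file is in use" error usually means the target database still has connections, or the restore is trying to write over data files that belong to another database. A minimal refresh sketch, assuming the target database is named ClientDB and the path shown (both hypothetical): kick out connections, restore over the existing copy, then reopen it.

USE master;

-- Drop existing connections so the restore can take exclusive access
ALTER DATABASE ClientDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Overwrite the existing copy; add MOVE clauses if the client's
-- drive layout differs from the source (check RESTORE FILELISTONLY)
RESTORE DATABASE ClientDB
FROM DISK = N'C:\Temp\ClientDB.bak'   -- hypothetical path
WITH REPLACE;

ALTER DATABASE ClientDB SET MULTI_USER;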
If one is regularly taking backups of the system databases, when does it become necessary to rebuild the master database? I am looking for a situation where rebuilding master is preferred to restoring it from backup.
I am having issues restoring a backup of a database onto the same server it came from. I know many of you will ask why I need to restore to the same server; well, the need arose that way. I think I now know the problem: the original DB is still there and I am restoring the same DB again on that server, so the .mdf and .ldf file names collide.
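When restoring a copy alongside the original, the copy needs a new database name and its physical files have to be relocated with MOVE. A sketch, assuming the database is called MyDB with logical file names MyDB and MyDB_log (all hypothetical; check the real ones with RESTORE FILELISTONLY first):

-- Inspect the logical file names inside the backup
RESTORE FILELISTONLY FROM DISK = N'N:\SQLBackup\MyDB.bak';  -- hypothetical path

-- Restore under a new name, moving the files so they don't
-- collide with the original database's .mdf/.ldf
RESTORE DATABASE MyDB_Copy
FROM DISK = N'N:\SQLBackup\MyDB.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Copy.mdf',
     MOVE N'MyDB_log' TO N'L:\Logs\MyDB_Copy.ldf';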
I have a SQL .bak file and I would like to restore only specific columns, as one of the columns is a free-text field that substantially increases the size of the file. I can't restore the whole thing due to disk space constraints, so dropping the column isn't possible if I can't first get the table into a database locally.
I'm using SQL Server 2012 and am working on configuring vendor access to a particular DB. I have a test DB and (what will eventually be) the production DB. I've configured security for the test DB and want to back that up, then restore it (including all settings) as the prod one, renaming it to the prod DB name.
We have one database with Filestream enabled. There is one table "dbo.files" which uses Filestream.
We created a filestream filegroup Filegroup1 and added 3 data containers to it. (3 filestream data containers within the same filegroup.)
We have three LUNs F:, G:, H:, each with a capacity of 2 TB (that is the limitation). F: and G: are almost full, so I restricted their growth so that inserts no longer happen in those data containers. Inserts now go to the H: drive, which has lots of free space. Our application code prevents any deletes or updates to this table, so data in the growth-restricted containers will never change.
Now the database is around 6 TB in size and backups are a challenge. We are contemplating migrating the storage to NetApp and using their SnapManager console, which is much faster.
However, until then, we need a solution with native SQL backups. We tried partial backups and piecemeal restore.
We tried this on a test server:
1) Partial backup of only the read-only data containers first (F: and G:). (The plan is to back these up just once a month, as this data never changes.)
2) Partial backup of the primary filegroup plus the third data container in the Filestream filegroup, the one subject to inserts (H:).
While restoring, we tried an online restore. First, I restored the backup from step 2 above with the RECOVERY option. Then I restored the backup from step 1 with recovery. I see that the database was brought online. However, when I try to query the dbo.files table, I get an error stating that some files of the filestream filegroup are offline.
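For reference, the statement shapes involved look roughly like this, sketched with hypothetical logical file names (FSData1/FSData2 for the full containers, FSData3 for the active one). One caveat that may explain the offline files: bringing files from a read-write filegroup online in a later piecemeal stage generally requires the covering log backups; only filegroups marked READ_ONLY before being backed up can skip that, and growth-restricted is not the same as read-only.

-- Monthly: back up the two full containers (F: and G:)
BACKUP DATABASE MyDB
    FILE = N'FSData1', FILE = N'FSData2'
TO DISK = N'N:\SQLBackup\MyDB_static.bak';

-- Regularly: back up PRIMARY plus the active container (H:)
BACKUP DATABASE MyDB
    FILEGROUP = N'PRIMARY', FILE = N'FSData3'
TO DISK = N'N:\SQLBackup\MyDB_active.bak';

-- Piecemeal restore, stage 1: PRIMARY and the active container
RESTORE DATABASE MyDB
    FILEGROUP = N'PRIMARY', FILE = N'FSData3'
FROM DISK = N'N:\SQLBackup\MyDB_active.bak'
WITH PARTIAL, RECOVERY;

-- Stage 2: bring the remaining files online from their own backup
RESTORE DATABASE MyDB
    FILE = N'FSData1', FILE = N'FSData2'
FROM DISK = N'N:\SQLBackup\MyDB_static.bak'
WITH RECOVERY;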
We have a bunch of SQL 2012 databases which use SQL Server authentication (essentially local dev instances). Is it possible to take a backup of one of these databases and then push it onto a (central) server which uses integrated security (based on Active Directory authentication), using a script to change and remap the authentication model in the process?
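Restoring the backup carries the database users with it, but not the server-level logins, so the remapping happens after the restore. A sketch, assuming a SQL-auth user named app_user that should be replaced by a Windows login DOMAIN\AppUser in a database MyRestoredDB (all names hypothetical):

USE master;
-- Create the Windows login on the central server if it doesn't exist
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'DOMAIN\AppUser')
    CREATE LOGIN [DOMAIN\AppUser] FROM WINDOWS;

USE MyRestoredDB;
CREATE USER [DOMAIN\AppUser] FOR LOGIN [DOMAIN\AppUser];

-- Give the Windows user the roles the old SQL-auth user had (repeat per role)
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\AppUser];

-- Optionally remove the old SQL-auth user once nothing references it
-- DROP USER app_user;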
Last week my database crashed, and somehow I managed to restore it on SQL Server 2012, but after the restoration all the relationships are gone, and SQL Server shows the message below when I open a diagram of the database: "Table(s) were removed from the diagram because privileges were removed to these table(s) or the table(s) were dropped." How do I get back all the relationships between the tables?
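Before rebuilding anything, it's worth checking whether the foreign keys actually survived the restore, since diagrams only display them. A quick catalog query (standard views, nothing environment-specific):

-- List every foreign key with its parent and referenced table
SELECT fk.name AS constraint_name,
       OBJECT_SCHEMA_NAME(fk.parent_object_id) + '.' + OBJECT_NAME(fk.parent_object_id) AS parent_table,
       OBJECT_SCHEMA_NAME(fk.referenced_object_id) + '.' + OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
ORDER BY parent_table;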
I have a process that restores a production DB, overwriting the existing copy, each night. I'd like to keep the solution "up" for as long as possible, and this will be more important if I want to refresh it during the day (when there are more queries) too. The nature of the queries thrown at the system is that there are about 20 per hour; it's underpinning a reporting system, not an OLTP system.
It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.
But I need to think about detecting and waiting for queries to complete on the prod DB before removing/demoting it (actually, I thought to rename it, then reuse it as the next copy to update).
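A sketch of the swap, assuming the fresh copy is restored as Reporting_Staging and production is named Reporting (hypothetical names). SET SINGLE_USER WITH ROLLBACK IMMEDIATE does not wait for in-flight queries, so to let running reports finish, poll sys.dm_exec_sessions first (its database_id column requires SQL Server 2012+); the renames themselves are metadata-only and effectively instant.

USE master;

-- Optionally wait until no one is using the prod copy
WHILE EXISTS (SELECT 1
              FROM sys.dm_exec_sessions
              WHERE database_id = DB_ID(N'Reporting')
                AND is_user_process = 1)
BEGIN
    WAITFOR DELAY '00:00:05';
END

-- Kick out stragglers and swap names
ALTER DATABASE Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE Reporting MODIFY NAME = Reporting_Old;
ALTER DATABASE Reporting_Staging MODIFY NAME = Reporting;
ALTER DATABASE Reporting SET MULTI_USER;

Reporting_Old is left in single-user mode, which is convenient if it becomes the next staging copy to overwrite.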
Not sure this is the correct forum, but I'll give it a go. I am in the process of deploying a series of databases from a development environment to a production environment. I've done some searching around best practices but haven't found anything that specifically calls out what is best. I am looking for the best approach for moving the code/tables/views/SPs (well, everything) from my dev environment to the production environment. Due to the complexity and large number of objects to be created, would a backup/restore to the production server be more prudent than a batch-file type thing that creates all the objects through a series of scripts?
- restore a backup of a 3rd party database onto one of our servers
- this has no users that I can use
- there is some ETL processing, so we're using Control-M to manage the process
- create a database user and grant it db_datareader
I'd like to do this without granting any users elevated privileges if possible.
What I've done so far is grant the Control-M user (this is a domain user) the dbcreator role and make it the owner of our copy of the database that is being refreshed.
The refresh is completing, but Control-M is not able to log onto the database to create the user.
What is the best way to accomplish this task without granting the Control-M user sysadmin rights?
Would I be able to do it if I used a SQL Agent job for the restore and user creation?
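A sketch of the Agent-job route, with hypothetical names (RefreshDB, DOMAIN\etl_reader): the job steps run under the Agent's context, so Control-M itself never needs elevated rights.

-- Job step 1: restore (runs under the SQL Agent job owner's context)
RESTORE DATABASE RefreshDB
FROM DISK = N'N:\SQLBackup\RefreshDB.bak'   -- hypothetical path
WITH REPLACE;

-- Job step 2: recreate the reader user after each refresh
USE RefreshDB;
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'DOMAIN\etl_reader')
    CREATE USER [DOMAIN\etl_reader] FOR LOGIN [DOMAIN\etl_reader];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\etl_reader];

Control-M would then only need permission to start the job (msdb.dbo.sp_start_job), for example via one of the SQLAgent fixed database roles in msdb, rather than dbcreator or sysadmin.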
I am looking for a SQL backup/restore tool which can restore to multiple environments. Here are the high-level requirements.
1. We have 4 DBs, ranging from 1 TB to 1.5 TB each. When we restore to QA, DEV, or Staging, we usually restore all 4 of them.
2. I am looking for the restore of all 4 DBs to complete within 1 to 2 hours.
I am evaluating the Delphix software, but the setup is very complex and it has given us a lot of issues with Windows authentication and failures in the middle of the backup. I used Quest software many years ago but can't find it on their web site any more. Speed is very important for us, meaning completing the restore as fast as possible. We are on SQL 2012 and SQL 2008 R2. We are currently using NetApp technology, and I have the Redgate backup tool, but I am mainly looking for a fast restore process.
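For what it's worth, native backups and restores can be sped up considerably by striping across multiple files on separate drives and compressing; a sketch with hypothetical paths and tuning values:

-- Striped, compressed backup: multiple files allow more parallel I/O
BACKUP DATABASE BigDB TO
    DISK = N'N:\Backup1\BigDB_1.bak',
    DISK = N'O:\Backup2\BigDB_2.bak',
    DISK = N'P:\Backup3\BigDB_3.bak',
    DISK = N'Q:\Backup4\BigDB_4.bak'
WITH COMPRESSION, BUFFERCOUNT = 64, MAXTRANSFERSIZE = 4194304;

-- The restore reads all stripes in parallel as well
RESTORE DATABASE BigDB FROM
    DISK = N'N:\Backup1\BigDB_1.bak',
    DISK = N'O:\Backup2\BigDB_2.bak',
    DISK = N'P:\Backup3\BigDB_3.bak',
    DISK = N'Q:\Backup4\BigDB_4.bak'
WITH REPLACE;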
Scenario: to keep a very large system running optimally in a VM cluster, take PR01 and make PR02, PR03, PR04, distributing the 45 databases and 9 TB+ of disk across multiple VM guests.
Each PR## is a SQL Server 2012 Enterprise guest on a VMware 5.1 cluster.
So instead of PR01 needing 16 cores and 128 GB of memory, each one will have 4 cores and 32 GB of memory, making VM HA more manageable (yes, DRS rules will apply). It also provides more HBA paths and distributes I/O over more physical disks on the SAN.
Instead of a connection string having to know PR01.dbo.UserDB01, PR02.dbo.UserDB03, etc., the connection would be PRDB.dbo.UserDB01. That way, if needed: 1) a user DB can be moved to any of the PR## guests; 2) new PR05, PR06 can be added as needed. End users and processes are not allowed to touch system databases, and no two PR## guests will have a user DB with the same name.
There are separate VM guests on other VM clusters and Citrix servers that need access to PRDB. As things expand and move around, none of the connection strings need to change.
I am looking into RadWare and modifying layer 7 information, but that is iffy and $$$$$$.
I face a task to edit the definitions of a bunch of stored procedures: cleaning out the <IF EXISTS THEN DROP> block and changing CREATE to ALTER.
Do you think there is a good way to do this in batch, taking the original source from the system tables (even though we have it as physical files in VSS)?
Sure enough, <IF EXISTS THEN DROP> is not uniform; there could be variations in syntax and in the number of lines: IF EXISTS ... vs. OBJECT_ID ... IS NOT NULL, etc.
CREATE__sp_PROCEDUREA vs. CREATE_spPROcedure (with different spacing).
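A sketch of pulling the current definitions from the catalog and doing a naive CREATE-to-ALTER swap. T-SQL has no regular expressions, so the irregular IF EXISTS/DROP preambles are easier to strip in an outside script; the query below only illustrates the batch shape and only handles the first occurrence of CREATE, not a complete solution:

-- Pull the stored source for every procedure and swap CREATE for ALTER
SELECT o.name,
       m.definition,
       STUFF(m.definition,
             CHARINDEX('CREATE', m.definition),
             LEN('CREATE'),
             'ALTER') AS altered_definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P';  -- stored procedures only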
I have been using the software and it has been working fine (as Windows user A). Now I have created another Windows user (user B) and would like to use the same software/database. The software launches fine (as user B), but it cannot access the SQL database that was created under user A.
How do I set up the database to allow access from all users on the same PC?
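This usually means user A connects via Windows authentication and user B has no login or database user. A sketch, assuming the machine is named MYPC and the database MyAppDB (both hypothetical):

USE master;
-- A login for the second Windows account
CREATE LOGIN [MYPC\UserB] FROM WINDOWS;

USE MyAppDB;  -- hypothetical database name
CREATE USER [MYPC\UserB] FOR LOGIN [MYPC\UserB];

-- Grant whatever the application needs; reader + writer shown here
ALTER ROLE db_datareader ADD MEMBER [MYPC\UserB];
ALTER ROLE db_datawriter ADD MEMBER [MYPC\UserB];

Repeat per account, or create the login for a local Windows group that all of the PC's users belong to.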
Is it possible to create a cluster configuration where 2 (or more) SQL Server instances running on different servers simultaneously serve the same database attached to the same storage? Something like this:
Instance A     Instance B     Instance C
 (active)       (active)       (active)
      \             |             /
       \            |            /
        SAN for Database and Quorum
The point would be to improve reliability and performance at the same time. All nodes would share load. If a node fails, the other nodes still work and take over the load.
I know a failover cluster can be done, where 1 instance is active and the others are passive (active/passive), which improves reliability, but in that configuration just 1 instance is serving at any given time.
(As far as I understand, the active/active configuration is meant to run different databases on 2 (or more) instances such that the databases on a failing node are taken over by any other instances in the cluster.)
I have an interesting problem. A number of spids are being blocked by a single SELECT statement. The SELECT statement is the same whether returned from sp_who2, sysprocesses, sp_whoisactive, or DBCC INPUTBUFFER. It is not waiting on anything and its status is sleeping.
Clearly it is not "just" a sleeping SELECT statement, as I can see over a thousand locks held by the spid on 2 user databases and tempdb. I'm working on the theory that there is a BEGIN TRANSACTION with a bunch of statements and no closing COMMIT, but I want to be able to prove that. How can I show what statements were previously executed as part of this transaction?
Additional info: SQL 2012 Enterprise Edition. This is a test server, but it is a reproduction of a live issue. At this point the application team cannot isolate the code causing the problem, only the set of processes the code resides in.
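A couple of DMV queries that help make the case; the sleeping session's open transaction and its most recent statement are visible even though nothing is currently running (replace 53 with the actual blocking spid, hypothetical here; open_transaction_count requires SQL 2012+):

-- Confirm the session is sleeping with an open transaction
SELECT s.session_id, s.status, s.open_transaction_count,
       t.transaction_id, t.name, t.transaction_begin_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_tran_session_transactions AS st ON st.session_id = s.session_id
JOIN sys.dm_tran_active_transactions AS t ON t.transaction_id = st.transaction_id
WHERE s.session_id = 53;

-- The last statement the session sent (the SELECT you keep seeing)
SELECT text
FROM sys.dm_exec_connections AS c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle)
WHERE c.session_id = 53;

Statements that already completed inside the transaction are not kept in the DMVs; to see what the transaction has changed, one option is reading the active portion of the log with fn_dblog filtered on the transaction ID, or capturing a future occurrence with Extended Events.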
I have a database which has 25 tables. All tables have a ProductID column. I need to find the total number of records for ProductID = 20003 across all the tables in the database.
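A sketch using dynamic SQL: build one COUNT(*) per table that has the column, UNION them together, and run the result. The column name and value are taken from the question; wrap the result in a SUM if a single grand total is wanted.

DECLARE @sql nvarchar(max);

-- One SELECT per table containing a ProductID column
SELECT @sql = STUFF((
    SELECT ' UNION ALL SELECT ''' + QUOTENAME(OBJECT_SCHEMA_NAME(c.object_id))
         + '.' + QUOTENAME(OBJECT_NAME(c.object_id)) + ''' AS table_name,'
         + ' COUNT(*) AS total FROM '
         + QUOTENAME(OBJECT_SCHEMA_NAME(c.object_id)) + '.' + QUOTENAME(OBJECT_NAME(c.object_id))
         + ' WHERE ProductID = 20003'
    FROM sys.columns AS c
    JOIN sys.tables AS t ON t.object_id = c.object_id
    WHERE c.name = 'ProductID'
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 11, '');

EXEC sp_executesql @sql;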
We have a Silverlight-based application which currently supports only one production version. The idea is to support three concurrent versions of the same application; users will switch to the newer versions based on their interest, or they can continue with the older version.
We still have to use the existing database for all three of these versions.
What is the best way to architect this so that we can differentiate the code between the versions, still keep the data in sync, and run all the versions in parallel?
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the resulting log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use that temporary table to load a sub temporary table with 100,000 rows, and join this temporary table to the table I want to delete from, looping through 10 times to get 1 million. The Logging.EventLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime

CREATE TABLE #BatchProcess
(
    EventDate datetime,
    ApplicationID int
)
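For comparison, here is a more compact batched-delete sketch that avoids the temp tables entirely: delete in fixed-size chunks directly and stop when a chunk comes back empty. The batch size, the pause, and the week interpretation (7 calendar days via DATEADD(WEEK, ...), sidestepping any first-day-of-week ambiguity) are assumptions to tune:

DECLARE @BatchSize int = 100000;  -- tune to what the log can absorb
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    -- Delete one chunk of expired rows; DATEADD(WEEK, n, d) = d + 7*n days
    DELETE TOP (@BatchSize)
    FROM Logging.EventLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    SET @Rows = @@ROWCOUNT;

    -- Pause between chunks so log backups can truncate the log
    WAITFOR DELAY '00:00:05';
END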
I am trying to restore multiple .bak backup files onto a new server. However, I have found that it will not allow me to restore multiple databases at once. Is there a way to do this so that I do not have to manually restore them one at a time? I tried adding all the .bak files at once to the backup device window, but it only restored the first one listed. It would be so much easier to restore them all at once so that I do not have to continue this manual process. I am restoring them via device.
I am testing a blank database created over two physical files on two separate disks, with one table called Data which has one column called Values of type nvarchar(max).
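For reference, a sketch of that test setup; database name, file paths, and sizes are hypothetical:

-- Two data files on separate physical disks, one filegroup
CREATE DATABASE SpreadTest
ON PRIMARY
    (NAME = N'SpreadTest1', FILENAME = N'D:\Data\SpreadTest1.mdf', SIZE = 1GB),
    (NAME = N'SpreadTest2', FILENAME = N'E:\Data\SpreadTest2.ndf', SIZE = 1GB)
LOG ON
    (NAME = N'SpreadTest_log', FILENAME = N'L:\Logs\SpreadTest_log.ldf', SIZE = 512MB);
GO

USE SpreadTest;
CREATE TABLE dbo.Data ([Values] nvarchar(max));  -- Values is a keyword, hence the brackets
GO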
I filled the table up with a whole load of data and ran a SELECT * against it. If I run Perfmon at the same time, I can see that the read load is spread over both disks, as each of them is read from in parallel. If I create the same database on a single file and run the same SELECT * again, it takes much longer, confirming that in the two-file case the read load was distributed across multiple disks.
Now moving on to writes; this is where the confusion lies. I understand that SQL Server fills the files proportionally until they need to grow, after which it grows them one at a time and fills each in a round-robin fashion, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed across the files while it is filling them.
I ran a continual insert into my table with GO 1000000 to monitor how the files fill up, and watched where SQL Server was physically placing the rows as they were inserted by running the following query:
;WITH CTE AS
(
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DATAID
[Code] ....
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, and so on. In other words, it would hit one disk, then the other, then go back to disk one, filling the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are written to in tandem?
By the looks of it, multiple disks only scale reads; with writes, only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?