I have a Windows Server 2012 R2 two-node cluster with a SQL Server 2014 FCI installed. The data files are on a separate Windows Server 2012 R2 file server. The data file share has been permissioned with Full Control for the SQL Server and SQL Server Agent service accounts. NTFS permissions are Full Control.
When I try to attach a database:

CREATE DATABASE AdventureWorksDW2012 ON (FILENAME = '\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf') FOR ATTACH

I get this error:

Msg 5120, Level 16, State 101, Line 4
Unable to open the physical file "\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf". Operating system error 5: "5(Access is denied.)".
If I log into the file server (called APRICOT) and look at the NTFS permissions they all look good. I have also reapplied the NTFS permissions from the root folder down.
EDIT: If I log on to one of the nodes in the cluster as the SQL Server service account and navigate to \\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA, I can copy and paste the data file just fine.
EDIT2: If I log on to the file server and Enable Inheritance at the root level, then Replace all child objects with inheritable permission entries from this object, I get this error:
User Account Control settings on all nodes and the file server are set to Never notify
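For what it's worth, one way to confirm which account the SQL Server service actually presents to the file server is to query sys.dm_server_services (available on SQL Server 2008 R2 SP1 and later):

SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;

If the service runs under a virtual account or service SID, the file server sees the machine account (DOMAIN\NODENAME$) instead, so the share and NTFS ACLs would need to include the computer accounts of both cluster nodes as well.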
I proposed that on a new server we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure it for optimal performance.
Is there a better way to deal with virtual log files? I see several approaches to dealing with/decreasing the virtual log files for a database, and I want to know the best and safest approach, from the masters here.
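For reference, the usual approach is to check the VLF count, shrink the log, and then regrow it to its target size in one step so SQL Server lays down a small number of large VLFs. A minimal sketch, assuming a hypothetical database MyDb with a logical log file name MyDb_log and a target size of 8 GB:

USE MyDb;
-- list the current VLFs (one row per VLF)
DBCC LOGINFO;
-- shrink the log as far as possible, then regrow it in one operation
DBCC SHRINKFILE (MyDb_log, 0);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 8GB);

The shrink may need to be repeated after a log backup if active VLFs sit at the end of the file.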
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDB – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states: “Use a single log file in most situations as log files are written sequentially.”
I want to use service SIDs for my SQL Service accounts but also want to have the data files on a NetApp filer CIFS share. The 2014 installer prevents installation if CIFS and Service SIDs are used. I tried to install with domain account on CIFS, and then to swap back to Service SIDs afterwards, but couldn't find a way to do it.
I granted the AD Computer account Full Control to the CIFS share, so it should work, but I just can't work it out at the moment.
I've been trying to get a definitive answer to this question, but alas I have conflicting and patchy answers so far from other sources. I have an index that, let's say, requires 10 GB of data space to rebuild. This index resides on a filegroup that spans two files on two separate drives (i.e. an .mdf and an .ndf).
When I rebuild this index, how will each of these data files grow as the rebuild proceeds to completion? For the time being, let's remove the caveat of any other activity hitting the example index/database in question. My tests seem to show that only the .mdf grows (or the file with the lowest ID in that filegroup), provided there is enough space available in that particular file to complete the operation. The secondary .ndf data file doesn't grow at all if the .mdf has enough space.
Is this expected behavior? I.e. will the index be rebuilt in a contiguous manner relative to the files contained within the filegroup, i.e. file ID 1 grows until its limit is reached, then the next file ID grows, and so on?
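One way to observe this for yourself is to snapshot file sizes and used space before and after the rebuild. A small sketch, with the database, table, and index names as placeholders:

SELECT name, physical_name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;

ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (SORT_IN_TEMPDB = ON);

Running the first query before and after the REBUILD shows which file absorbed the new copy of the index; SORT_IN_TEMPDB moves the sort spill out of the user filegroup, which changes how much each file has to grow.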
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no extra filegroups are created and multiple secondary files are added to the database (i.e. they all belong to PRIMARY), is data stored and written across the files by the same algorithm, or in a different way?
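For illustration, a sketch of adding secondary files without naming a filegroup (names and paths are made up); since no TO FILEGROUP clause is given, both files join the default filegroup (PRIMARY unless it has been changed), and proportional fill then applies across all files of that filegroup:

ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_Data2, FILENAME = 'D:\Data\MyDb_Data2.ndf', SIZE = 10GB);
ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_Data3, FILENAME = 'E:\Data\MyDb_Data3.ndf', SIZE = 10GB);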
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the BLOB/LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB/BLOB data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and just redistribute it evenly in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP it back into the table, which I imagine would have the desired effect of distributing the data evenly over the files.
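A rough sketch of that round trip from a command prompt, with the server, database, and table names as placeholders (native format preserves the LOB data exactly; test on a copy first):

bcp MyDb.dbo.MyTable out C:\temp\MyTable.dat -n -T -S MyServer
sqlcmd -S MyServer -E -Q "TRUNCATE TABLE MyDb.dbo.MyTable"
bcp MyDb.dbo.MyTable in C:\temp\MyTable.dat -n -T -S MyServer -E -b 100000

The -E flag on the import keeps identity values and -b batches the load; because the insert is effectively a fresh load, proportional fill should spread the LOB allocations across all files in the filegroup.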
I have 6 tables which are very large in row count and need to be partitioned for better manageability.
A little info: every day, 300 million records are inserted into and 300 million records deleted from the tables below. We maintain only 8 days' worth of data in these tables, which is why records older than 8 days are continuously deleted.
Master table, which has [ID] and [Timestamp]. Table name: Sample – 2,578,106
Child tables (the foreign key [ID] is common to all the tables; there is no timestamp column in the child tables):
dbo.ConnectionDB – 1,147,578,048
dbo.ConnectionSS – 876,458,321
dbo.ConnectionRT – 118,133,857
dbo.ConnectionSample – 100,038,535
dbo.Command – 100,032,235
I would like to partition the child tables above based on the IDs that are inserted every 4 hours. Meaning, all IDs inserted within a given 4-hour window should be in the same partition.
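A minimal sketch of the partitioning objects, assuming [ID] is a bigint and the boundary values are looked up from the master table every 4 hours (the boundary IDs below are invented):

CREATE PARTITION FUNCTION pf_ConnectionID (bigint)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

CREATE PARTITION SCHEME ps_ConnectionID
AS PARTITION pf_ConnectionID ALL TO ([PRIMARY]);

New boundaries would then be added on a schedule with ALTER PARTITION FUNCTION ... SPLIT RANGE, and 8-day-old partitions removed with SWITCH and MERGE RANGE instead of deletes, which is what makes the purge cheap.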
I have a master table with an AFTER INSERT trigger on it. When a record is inserted into the master table, the trigger fires and the record is captured in the back-office table. If the trigger fails, my record is in neither the master table nor the back-office table.
Is there any way to capture the record either in the master table or in a separate table?
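One pattern worth testing is to wrap the trigger body in TRY/CATCH so a back-office failure does not roll back the base insert. A sketch with invented table names; note that if the caught error has doomed the transaction (XACT_STATE() = -1), the trigger cannot safely do further work, so this is not bulletproof:

CREATE TRIGGER trg_Master_AfterInsert ON dbo.MasterTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO dbo.BackOffice (ID, CreatedAt)
        SELECT ID, SYSDATETIME() FROM inserted;
    END TRY
    BEGIN CATCH
        -- swallow the error so the insert into MasterTable still commits,
        -- but only if the transaction is still usable
        IF XACT_STATE() = 1
            INSERT INTO dbo.TriggerErrorLog (ErrorMessage, LoggedAt)
            VALUES (ERROR_MESSAGE(), SYSDATETIME());
    END CATCH;
END;

A more robust alternative is to have the trigger do as little as possible, e.g. insert into a staging table and process the back-office work asynchronously.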
I have chosen the destination: an unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others from the list are not exported. How can I export ALL the data into one file? I need to do this to edit the syntax in the editor and then import this data and database structure into PostgreSQL.
I'm looking for a way to list the target servers associated with a master server. The reason is that we're moving to another master server, and I'd prefer not to move the targets manually.
I've got most of the T-SQL already (sp_msx_enlist, sp_add_jobserver), but I'd like a scripted solution instead of a wizard.
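In case it helps, the list of enlisted targets lives in msdb on the master server, so it can be scripted out directly; a minimal sketch:

SELECT server_name, location, last_poll_date
FROM msdb.dbo.systargetservers
ORDER BY server_name;

The result set could then be used to generate the sp_msx_enlist calls against the new master (MSX) server, one per target.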
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they will take ages to copy over and restore, so it would be much easier to just use the DB files. Should I restore the master DB first (easy, as it's small)?
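For the user databases, attaching the existing files is the usual route once the drives are visible on the new server; a sketch with hypothetical paths:

CREATE DATABASE MyDb
ON (FILENAME = 'E:\SQLData\MyDb.mdf'),
   (FILENAME = 'F:\SQLLogs\MyDb_log.ldf')
FOR ATTACH;

Attaching to a 2014 instance upgrades the database in place, so it can no longer be reattached to 2008 R2 afterwards. The old master database itself cannot be restored onto a different version, so server-level objects (logins, jobs) have to be scripted or recreated separately.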
I cannot see the file created in the directory. The account under which the SQL Server Agent job runs has full privileges on the directory and is sysadmin. Then I run the command in SSMS:
BACKUP DATABASE [F1SB]
TO DISK = N'F:\SqlBackup2014\<server>\F1SB\FULL\IGS-DB01_F1SB_FULL_20150510_214455.bak'
WITH NO_CHECKSUM, COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [serverCertificate])
and I get this error message:
Msg 3013, Level 16, State 1, Line 13
BACKUP DATABASE is terminating abnormally.
I have the environment set for AutoRecover (save every 3 minutes, and keep information for 7 days) under the SSMS 2014 menu: Tools -> Options -> Environment -> AutoRecover.
I've rebooted the box and restarted the SQL Server service and nothing seems to create the files.
What is the best method to restore DBTest1 (with one .mdf and one .ldf) into DBTest2 (with one .mdf, multiple .ndf data files, and 4 filegroups associated with specific data files)? I do not see how the one .mdf file (in DBTest1) can be separated into the other 4 filegroups (in DBTest2). This does not sound like it is possible with backup DBTest1/restore to DBTest2 or detach/attach, because the underlying filegroup and file structure is different.
What method should be used to get the data and structure from DBTest1 (which includes 1,100 tables and 550 GB of data) into DBTest2 (with 4 filegroups)? Is the following possible:
1) First, in DBTest2, execute a script to create tables/indexes on appropriate filegroups.
2) In DBTest2, use scripts to pull the data from DBTest1 into DBTest2, for example INSERT INTO DBTest2.dbo.tables with SELECT FROM DBTest1.dbo.tables, or use SELECT ... INTO DBTest2.dbo.tables FROM DBTest1.dbo.tables (see the sketch after this list).
Or, is it possible to use the BULK INSERT or BULK COPY Options? Export/Import Wizard?
Does the create-index step need to be done after the data is loaded into DBTest2?
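A sketch of steps 1 and 2 for a single hypothetical table, creating it on a specific filegroup and then loading it with a bulk-friendly hint:

CREATE TABLE DBTest2.dbo.Orders
(
    OrderID   int           NOT NULL,
    OrderDate datetime      NOT NULL,
    Amount    decimal(18,2) NOT NULL
) ON [FG_Orders];

INSERT INTO DBTest2.dbo.Orders WITH (TABLOCK)
SELECT OrderID, OrderDate, Amount
FROM DBTest1.dbo.Orders;

Creating nonclustered indexes after the load is generally faster for 550 GB of data; the clustered index (or the ON [filegroup] clause of the table) is what pins the data to the right filegroup, so that part has to exist before the load.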
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
I have configured Windows failover clustering 2012 on 4 of my test nodes.
I am trying to add another node into this cluster, but it's not happening. I am not even able to start the Cluster service in services.msc.
After installing Windows failover clustering, when I go to the C:\Windows\Cluster folder, I am unable to find the CLUSDB, CLUSDB.1.container, CLUSDB.2.container and CLUSDB.blf files in the folder.
These files are very much present on the other nodes where cluster service is running.
I tried copying these files manually to the server where they are missing, but still no luck.
I tried out encryption (TDE) on server "A" for the database "AdventureWorks2012". Then I tried to restore it to server "B". There was the certificate issue, and I thought "of course: it's encrypted! Let's deactivate it". So here I go: ALTER DATABASE AdventureWorks2012 SET ENCRYPTION OFF. I look at sys.databases: not encrypted. I back up using no encryption, I verify using msdb.dbo.backupset: not encrypted.
I move my backup to my other server where encryption was never configured (so no certificate, nothing...), and I get this error:

Msg 33111, Level 16, State 3, Line 1
Cannot find server certificate with thumbprint '0xFA130E58C999C4919B8975999C83A75A403B11D8'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
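The usual fix is to back up the certificate (and its private key) from server A and create it on server B before restoring; a minimal sketch, with the certificate name, file paths, and passwords invented. TDE certificates live in master, and the restore can keep demanding the certificate until the log has been fully cleared of encrypted VLFs, even after SET ENCRYPTION OFF:

-- on server A
USE master;
BACKUP CERTIFICATE MyTDECert
TO FILE = 'C:\Certs\MyTDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\Certs\MyTDECert.pvk',
                  ENCRYPTION BY PASSWORD = 'StrongPassword!1');

-- on server B
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'AnotherStrongPassword!1';
CREATE CERTIFICATE MyTDECert
FROM FILE = 'C:\Certs\MyTDECert.cer'
WITH PRIVATE KEY (FILE = 'C:\Certs\MyTDECert.pvk',
                  DECRYPTION BY PASSWORD = 'StrongPassword!1');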
I have a requirement to implement CDC for 50+ tables to capture incremental data changes for warehouse/reporting purposes rather than exporting the whole table data. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP DB (a daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?
Is there any performance impact if we enable CDC on the OLTP DB?
Can we make use of the CDC tables in the environment where we do the daily DB refresh, so that queries don't hit the OLTP database?
What is the best way to implement CDC to take incremental changes for reporting?
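For reference, enabling CDC is two stored-procedure calls per database/table; a sketch with placeholder names. The capture relies on the log reader, so there is some log-scan overhead on the OLTP side, and the change tables live in the same database:

USE MyOltpDb;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MyLargeTable',
     @role_name     = NULL;

Consumers then read changes between two LSNs with the generated cdc.fn_cdc_get_all_changes_dbo_MyLargeTable function, which is what makes incremental extraction possible.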
I have two similar databases, one of which is a backup of the other. Each DB has 2 filegroups. I want to replace one filegroup in one DB with the corresponding filegroup from the other. How do I do this? Or how do I back up and then restore?
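For what it's worth, SQL Server can back up and restore at the filegroup level, but only within the same database lineage; a sketch with invented names (this does not transplant a filegroup between two independent databases, which is not supported directly):

BACKUP DATABASE MyDb
FILEGROUP = 'FG2'
TO DISK = 'C:\Backups\MyDb_FG2.bak';

RESTORE DATABASE MyDb
FILEGROUP = 'FG2'
FROM DISK = 'C:\Backups\MyDb_FG2.bak'
WITH NORECOVERY;
-- then restore the subsequent log backups to bring the filegroup current

Moving the contents of a filegroup between two different databases generally means copying the data itself (scripts, INSERT ... SELECT, or export/import).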
Query to show logins that don't have any permissions within the SQL instance? I'm tasked with doing some cleanup and have found some cases where the database was deleted or moved to another server, but the logins that used it were not deleted. I'd like to identify them to research.
For instance, a query to show logins that have no permissions in any of the existing databases would be handy. I'm thinking it would be complicated by the need to loop through all of the existing databases and then outer join to the list of instance-level logins. I'm going to try to write something like that, but was hoping that a script already exists.
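A rough sketch of that loop-and-join approach, assuming it is enough to treat "no database user mapped in any database" as "no permissions" (server-level roles and permissions would still need a separate check):

DECLARE @dbusers TABLE (sid varbinary(85));

INSERT INTO @dbusers (sid)
EXEC sp_MSforeachdb
    N'SELECT sid FROM [?].sys.database_principals
      WHERE type IN (''S'', ''U'', ''G'') AND sid IS NOT NULL';

SELECT sp.name, sp.type_desc
FROM sys.server_principals AS sp
WHERE sp.type IN ('S', 'U', 'G')
  AND sp.name NOT LIKE '##%'          -- skip internal certificate logins
  AND NOT EXISTS (SELECT 1 FROM @dbusers AS d WHERE d.sid = sp.sid);

sp_MSforeachdb is undocumented and has been known to skip databases, so a cursor over sys.databases would be the more reliable variant of the same idea.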
I have multiple SQL Server 2008 servers with databases, and also 1 mirroring server in place.
Since my database count is increasing, can I keep having only 1 mirroring server? Is there any limit on the number of databases on a mirroring server? I would have approx. 150 databases.
I want to replace the big log database with a new one (a database with the same structure), but the current DB has many connections.
This is my plan :
1- Create a new database with same structure.
2- Rename the current database to OldDataBase with this code:
USE master
GO
EXEC sp_dboption CurDataBase, 'Single User', True
EXEC sp_renamedb 'CurDataBase', 'OldDataBase'
GO

3- Rename NewDataBase to the current DB:

USE master
GO
EXEC sp_renamedb 'NewDataBase', 'CurDataBase'
Is this right, and is the T-SQL code OK? (Don't forget the many connections to CurDataBase (it is a log DB); losing a few seconds of data is not a problem.)
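As an aside, sp_dboption was removed in SQL Server 2012, so on newer versions the same swap might be sketched with ALTER DATABASE (names as in the question; SET SINGLE_USER WITH ROLLBACK IMMEDIATE also kicks out the existing connections, which the original code does not do):

USE master;
ALTER DATABASE CurDataBase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE CurDataBase MODIFY NAME = OldDataBase;
ALTER DATABASE OldDataBase SET MULTI_USER;
ALTER DATABASE NewDataBase MODIFY NAME = CurDataBase;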
My database went into suspect mode. After we ran some scripts, it came out of suspect mode, but we encountered this error while opening a table in the database:
2009-11-02 15:46:42.90 spid51 Error: 824, Severity: 24, State: 2.
2009-11-02 15:46:42.90 spid51 SQL Server detected a logical consistency-based I/O error: incorrect pageid (expected 1:43686; actual 0:0). It occurred during a read of page (1:43686) in database ID 23 at offset 0x0000001554c000 in file 'H:\MSSQL.SQL2008\MSSQL\DATA\my_db.mdf'.
Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
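As the message suggests, the first step is a full consistency check; a minimal example (NO_INFOMSGS just trims the output, ALL_ERRORMSGS makes sure every error is listed):

DBCC CHECKDB (N'my_db') WITH NO_INFOMSGS, ALL_ERRORMSGS;

The output indicates the minimum repair level; restoring the damaged pages or the database from a clean backup is generally safer than REPAIR_ALLOW_DATA_LOSS.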