SQL Server Admin 2014 :: Separate Transaction Log Files For Multiple Databases?
May 15, 2015
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays and the like. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
I have a requirement to delete all the orphaned users in the databases. The issue I am having is that when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer the schema to dbo while looping over multiple databases? This is what I have so far:
declare @is_read_only bit

select @is_read_only = is_read_only
from master.sys.databases
where name = 'test' /* This should be a parameter value */

if @is_read_only = 0
begin
    declare @SQL as varchar (200)
    -- build and execute the DROP USER statements here
end
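Not the original poster's full solution, but a minimal sketch of the missing schema-transfer step, assuming it runs inside each database the loop visits (the orphan-detection rule and the principal_id cutoff are my assumptions; adjust as needed):

declare @fix nvarchar(max) = N'';

-- build one ALTER AUTHORIZATION per schema owned by an orphaned user
select @fix = @fix
    + N'ALTER AUTHORIZATION ON SCHEMA::' + quotename(s.name) + N' TO dbo; '
from sys.schemas as s
join sys.database_principals as p
    on p.principal_id = s.principal_id
where p.principal_id > 4                 -- skip dbo, guest, sys, INFORMATION_SCHEMA
  and p.type in ('S', 'U')               -- SQL and Windows users
  and not exists (select 1
                  from sys.server_principals as sp
                  where sp.sid = p.sid); -- no matching login = orphaned

exec (@fix);
-- once the schemas are owned by dbo, DROP USER on each orphan succeeds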
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no extra filegroups are created and multiple secondary files are simply added to the database, is data stored and written across the files by the same algorithm, or in a different way?
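For what it's worth, a file added without a TO FILEGROUP clause goes to the PRIMARY filegroup, so such files still share one filegroup and proportional fill applies across all of them. A small sketch with placeholder names and paths that also shows how to watch the fill:

-- add a second data file; no TO FILEGROUP clause, so it lands in PRIMARY
ALTER DATABASE MyDb
    ADD FILE (NAME = N'MyDb_data2',
              FILENAME = N'E:\Data\MyDb_data2.ndf',
              SIZE = 4096MB, FILEGROWTH = 512MB);

-- free space per data file in the current database; proportional fill
-- sends more writes to whichever file shows more free space
SELECT name,
       size / 128.0 AS size_mb,
       size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';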
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or blob column on the table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the LOB or blob data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB or blob data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides and redistribute it evenly, the same way I can with the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP back into the table, which I imagine would have the desired effect of distributing the data evenly over the files.
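A rough sketch of that route, with placeholder server, database, and table names; native format (-n) round-trips LOB columns, -E preserves identity values on the way back in, and -b bounds how much log each committed batch generates:

bcp MyDb.dbo.BigTable out C:\temp\BigTable.dat -S MyServer -T -n

-- then, from SSMS:
TRUNCATE TABLE dbo.BigTable;

bcp MyDb.dbo.BigTable in C:\temp\BigTable.dat -S MyServer -T -n -E -b 50000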
I've got Reporting Services on a different box from the database, and I can see all the reports, but when I try to set up a subscription I get this weird error:
The SQL Agent service is not running. This operation requires the SQL Agent service. (rsSchedulerNotResponding)
The same error happens when I connect to the database server via management studio and try to run a job.
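One quick check, assuming the database server is at least 2008 R2 SP1 (when this DMV appeared): ask the instance which of its services it sees running, since both subscriptions and jobs need SQL Server Agent up on the server hosting the ReportServer catalog:

SELECT servicename, status_desc, startup_type_desc
FROM sys.dm_server_services;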
I have a full database backup up to the previous day and the transaction log of today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential database backup and no transaction log backup.
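A sketch of the usual sequence with placeholder names, assuming the crashed database's .ldf is still readable; the tail of the log has to be captured before the full restore, because restoring the full backup overwrites it:

-- 1) salvage today's transactions from the damaged database's log
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_tail.trn'
    WITH NO_TRUNCATE;

-- 2) restore yesterday's full backup, leaving the database restoring
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;

-- 3) roll forward through the tail and bring the database online
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_tail.trn'
    WITH RECOVERY;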
I've got an old version of SQL Server 2008 R2 Developer Edition on an old PC which is failing. I've got a new PC and have put SQL Server 2014 Developer Edition onto it. Before the old machine completely dies, I've gotten into SSMS on the old machine and done a backup of the databases I want to save. I've moved the .BAK files to where I could get at them from SSMS on the new machine. I've gotten into SSMS and tried to restore the database to my new machine. However, I'm getting an error that does not make any sense to me.
The database I've backed up is named JobSearch. When I backed it up, that was the only database I had selected. Like I said, I copied the .BAK to the new machine, got into SSMS, told it that I wanted to restore the JobSearch database and where I wanted to put it, and it then immediately fails with:
"Restore of database 'JobSearch' failed. System.Data.SqlClient.SqlError: Logical file 'VideoLibrary_Data' is not part of the database 'JobSearch'. Use RESTORE FILEISTONLY to list the logical file names."
Well of course VideoLibrary isn't "the logical file". But neither did I select VideoLibrary (which is a database I also want to move, but I'm doing one at a time). So what in heck is going on here? Why is it complaining about a database I haven't even selected to back up? Why, when I check everything on the old machine, it's backing up JobSearch, but on the new machine it sees VideoLibrary?
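One common cause of exactly this error is a .bak file that holds more than one backup set: if the file ever received a VideoLibrary backup before the JobSearch one, RESTORE reads the first set in the file unless told otherwise. Worth ruling out first, with hypothetical paths and logical names:

-- list every backup set inside the file, and the logical files of set 2
RESTORE HEADERONLY   FROM DISK = N'C:\Backups\JobSearch.bak';
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\JobSearch.bak' WITH FILE = 2;

-- restore the correct set by position, relocating the files
RESTORE DATABASE JobSearch
FROM DISK = N'C:\Backups\JobSearch.bak'
WITH FILE = 2,
     MOVE N'JobSearch_Data' TO N'D:\Data\JobSearch.mdf',
     MOVE N'JobSearch_Log'  TO N'E:\Logs\JobSearch_log.ldf';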
I need a backup script to take all the database backups. We have a maintenance plan, but our database name is 98 characters long, and when the maintenance plan stores the backup history information it appends the date and time zone information, which pushes the length past 128 characters, so it fails to write the history to msdb.
So we want to take the backups using a script, and it has to create a subfolder for each database. Also, if any database fails, it should continue with the others.
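A minimal sketch of such a script, assuming a root path of D:\Backups and per-database subfolders created on the fly; the TRY/CATCH keeps one failed database from stopping the rest:

DECLARE @db sysname, @dir nvarchar(260), @file nvarchar(400);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4 AND state_desc = 'ONLINE';   -- user databases only

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @dir = N'D:\Backups\' + @db;
        EXEC master.dbo.xp_create_subdir @dir;         -- no-op if it exists
        SET @file = @dir + N'\' + @db + N'_'
                  + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';
        BACKUP DATABASE @db TO DISK = @file WITH INIT, CHECKSUM;
    END TRY
    BEGIN CATCH
        PRINT N'Backup failed for ' + @db + N': ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;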
Is there a formula for calculating how expensive a transaction will be in terms of disk space used, before it's run? I don't need it accurate to the MB, just rough enough to determine how much additional space to assign to a transaction log or SAN volume.
Currently we're reindexing ~25 billion rows, nothing too wide, say 12 columns consisting of 1 varchar(50) and the rest ints, bits, and money. Roughly speaking, if I rebuild the clustered index on an int identity (with SORT_IN_TEMPDB), how would I calculate the disk space used?
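There is no exact formula, but in the FULL recovery model a rebuild logs roughly the size of the index it produces, so measuring the current index gives a workable baseline (the table name is a placeholder):

SELECT i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 / 1024.0 AS index_size_gb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID(N'dbo.BigTable')   -- hypothetical table
GROUP BY i.name;

With SORT_IN_TEMPDB the sort work goes to tempdb rather than the user database, so tempdb needs comparable headroom; under SIMPLE or BULK_LOGGED recovery the rebuild is minimally logged and the log demand drops sharply.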
I have a database that is part of an AlwaysOn availability group that is filling up the transaction log drive, even though I have a daily full backup and transaction log backups set for every 2 hours. The backups run from both the primary and secondary replicas to the shared disk, and I have the backup preference set to the primary.
When I try to shrink the log I get 'The transaction log for database 'DB' is full due to 'LOG_BACKUP''. I have to manually back up the transaction log and then shrink it. Why aren't the maintenance plan backups doing this, even though they are "working"?
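Worth checking what the engine says is actually pinning the log; on an availability group member the wait often turns out to be AVAILABILITY_REPLICA (a secondary has not hardened the log yet) rather than LOG_BACKUP:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DB';   -- the database from the error message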
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they would take ages to copy over and restore, so it would be much easier to just use the database files. Should I restore the master DB first (easy, as it's small)?
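Restoring a 2008 R2 master onto a 2014 instance isn't supported, so the usual route is to attach the user databases directly to the fresh instance. A sketch with placeholder paths and names:

CREATE DATABASE MyDb
ON (FILENAME = N'E:\ArrayData\MyDb.mdf'),
   (FILENAME = N'E:\ArrayLogs\MyDb_log.ldf')
FOR ATTACH;
-- the attach upgrades the internal database version from 2008 R2 to 2014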
I cannot see the file created in the directory. The account under which the SQL Server Agent job runs has full privileges on it and is sysadmin. Then I ran the command in SSMS:
BACKUP DATABASE [F1SB]
TO DISK = N'F:\SqlBackup2014\<server>\F1SB\FULL\IGS-DB01_F1SB_FULL_20150510_214455.bak'
WITH NO_CHECKSUM, COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [serverCertificate])
and I get this error message:
Msg 3013, Level 16, State 1, Line 13 BACKUP DATABASE is terminating abnormally.
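Msg 3013 is just the generic terminator; the specific cause normally arrives in a companion message (missing folder, permissions, certificate trouble). Given the ENCRYPTION clause, one guess worth ruling out is the certificate itself:

-- confirm the certificate exists, is not expired, and its private key
-- is protected by the database master key (required for backup encryption)
SELECT name, expiry_date, pvt_key_encryption_type_desc
FROM master.sys.certificates
WHERE name = N'serverCertificate';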
Is there a better way to deal with virtual log files? I see several approaches to decreasing the number of virtual log files for a database; I want to know the best and safest approach, from the masters here.
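For what it's worth, the commonly recommended sequence on 2012/2014 is: count the VLFs, back up the log, shrink the log file, then regrow it in one large step so it is rebuilt with few VLFs. A sketch with a hypothetical logical file name:

DBCC LOGINFO;                           -- one row per VLF; row count = VLF count

BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb.trn';
DBCC SHRINKFILE (N'MyDb_log', 1024);    -- shrink the log file to ~1 GB

ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 16384MB);  -- regrow in one step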
I have just upgraded a test server from SQL Server 2008 SP3 to SQL Server 2014 via an in-place upgrade. The compatibility level of the master database was not upgraded: it still shows 90, while the rest of the system databases got updated to 120. Is it fine to update the compatibility level of the master database? Do any precautions need to be taken?
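Checking it is safe; the ALTER is the part I would try on this test box first, since raising the compatibility level changes query behavior:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'master';

ALTER DATABASE master SET COMPATIBILITY_LEVEL = 120;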
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise and 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads, and the 60 mirrored databases would consume 240 of those threads. What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
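Two easy data points to collect on the proposed mirror box before committing, to validate the 512-thread assumption and see current consumption:

SELECT max_workers_count FROM sys.dm_os_sys_info;       -- configured ceiling
SELECT COUNT(*) AS workers_now FROM sys.dm_os_workers;  -- threads in use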
I have the environment set for AutoRecover (every 3 minutes, keep information for 7 days) under the SSMS 2014 menu: Tools -> Options -> Environment -> AutoRecover.
I've rebooted the box and restarted the SQL Server service and nothing seems to create the files.
What is the best method to restore DBTest1 (with one .mdf and one .ldf) into DBTest2 (with one .mdf, multiple .ndf data files, and 4 filegroups associated with specific data files)? I do not see how the one .mdf file (in DBTest1) can be separated into the other 4 filegroups (in DBTest2). This does not sound possible with backup DBTest1/restore to DBTest2 or with detach/attach, because the underlying filegroup and file structure is different.
What method should be used to get the data and structure from DBTest1 (1100 tables and 550 GB of data) into DBTest2 (with 4 filegroups)? Is the following possible:
1) First, in DBTest2, execute a script to create tables/indexes on appropriate filegroups.
2) In DBTest2, use scripts to pull the data from DBTest1, for example INSERT INTO DBTest2.dbo.tables ... SELECT ... FROM DBTest1.dbo.tables, or use SELECT ... INTO DBTest2.dbo.tables FROM DBTest1.dbo.tables.
Or is it possible to use the BULK INSERT or bulk copy (bcp) options? The Import/Export Wizard?
Does the CREATE INDEX step need to be done after the data is loaded into DBTest2? (See the sketch below.)
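A sketch of steps 1 and 2 for a single table, with hypothetical table, column, and filegroup names; loading into a heap WITH (TABLOCK) can be minimally logged under the SIMPLE or BULK_LOGGED recovery model, and building the indexes after the load is generally cheaper than loading into indexed tables:

USE DBTest2;
GO
-- step 1: create the table on the filegroup it should live on
CREATE TABLE dbo.Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON FG_Orders;
GO
-- step 2: pull the rows across from the source database
INSERT INTO dbo.Orders WITH (TABLOCK)
SELECT OrderID, OrderDate, Amount
FROM DBTest1.dbo.Orders;
GO
-- step 3 (per the question): indexes after the load, on their filegroups
CREATE CLUSTERED INDEX CX_Orders ON dbo.Orders (OrderID) ON FG_Orders;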
I'm working on a large scale project that is currently in production. We have a big process that recently changed to use In-Memory Tables with SQL 2014 for performance efficiency.
The Process uses:
51 In-Memory SQL tables. 50 stored procedures (not natively compiled) that load data (INSERT) from about 150 regular and In-Memory tables. 300 validations (short stored procedures, not natively compiled) that select from those 50 In-Memory tables (and insert into an In-Memory table that stores the validation errors, if any).
At the end of this process we clean the tables of the data relevant to each process (DELETE FROM ... WHERE).
By the way, no UPDATE STATISTICS is run on the In-Memory tables; when we tested it, it slowed the process down and caused some locks.
We are calling this process from ADO.Net, loading stored procedures first and then validations; each SP uses a different SQL connection. In normal use everything works fine and takes about 1.5 seconds.
Under a stress test (6 clients x 100 tasks) for 30 minutes, after several minutes we start to get this SQL exception (1 exception for every ~20 tasks):
41301. A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.
Transactions in Memory-Optimized Tables
The exception is not clear. We are not using BEGIN TRANSACTION in the process, and the SQL exception occurs in a different stored procedure each time.
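Error 41301 is one of the transient conflict errors that In-Memory OLTP can raise even for the implicit transaction around a single statement, and the documented mitigation is retry logic. A minimal T-SQL sketch (the procedure name is a placeholder; the same pattern can live in the ADO.Net layer instead):

DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_RunValidation;    -- placeholder for one of the 300 SPs
        SET @retries = 0;              -- success: leave the loop
    END TRY
    BEGIN CATCH
        -- transient in-memory conflicts: dependency, write, validation
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325) AND @retries > 1
            SET @retries -= 1;
        ELSE
            THROW;                     -- anything else, or out of retries
    END CATCH;
END;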
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDB – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it stated: “Use a single log file in most situations as log files are written sequentially.”
I have system database and user database files on the G, H, and W drives. The process is going to be: copy data from G to S, H to T, and W to U. Rename G to X, H to Y, and W to Z. Then rename S to G, T to H, and U to W. Reboot the servers. The original G, H, and W will then be X, Y, and Z; the old S will be the new G, the old T will be H, and the old U will be W. My question is whether SQL Server will start after doing this.
I've recently started working with a public sector organisation that has 4 clustered SQL instances, with 80% of the databases mirrored.
Looking at the transaction log, it seems that a transaction log backup is a good idea, as the log is 4x larger than the data file. But I'm not allowed access to the physical server to check onto which drive I could write the .trn file. No RDP, no VMware; let's be honest, I'm not even allowed to launch a command line. Also, the server manager informs me: "We will need to carefully look at database backups if you guys want to start doing these backups on box, as that will break our off box backup routine (it will screw the transaction chain)."
I don't understand how backing up the transaction log could break the "transaction chain"?
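The concern is that log backups form a chain: each one starts at the LSN where the previous one ended, so an extra on-box log backup truncates log that the off-box tool expected to capture and leaves a hole in its sequence. The chain is visible in msdb (the database name is a placeholder):

SELECT database_name, type, backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyMirroredDb'
  AND type IN ('D', 'L')        -- 'D' = full, 'L' = log
ORDER BY backup_start_date;

For ad-hoc needs, BACKUP LOG ... WITH COPY_ONLY does not truncate the log, so it leaves the existing chain intact.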