SQL Server Admin 2014 :: Re-balancing Blob And Lob Data In Multiple Files On Primary File Group
May 7, 2014
A little background on what I am trying to achieve first. We are moving to Azure virtual machines and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
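(For reference, the rebuild step per table is just something like the following; the table name is a placeholder.)

ALTER INDEX ALL ON dbo.YourTable REBUILD;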
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the LOB or BLOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB or BLOB data, because all the articles say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides, and redistribute it evenly in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP the data back in, which I imagine would have the desired effect of distributing the data evenly over the files.
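A hedged sketch of that round trip (all names and paths are placeholders; -n keeps bcp's native format and -T uses a trusted connection):

-- from a command prompt:
--   bcp YourDb.dbo.BigLobTable out D:\temp\BigLobTable.dat -n -T -S YourServer
-- then, in SSMS:
TRUNCATE TABLE dbo.BigLobTable;
-- then load the data back, keeping identity values if the table has one:
--   bcp YourDb.dbo.BigLobTable in D:\temp\BigLobTable.dat -n -T -S YourServer -E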
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly in the primary filegroup with multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create a database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Add another file called temp to the PRIMARY filegroup, size 200MB.
5) SHRINKFILE('FGTest', EMPTYFILE) so that all data is transferred from FGTest into the temp file.
6) Add another two files called DATA2 and DATA3, both 200MB.
7) We now have three empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3.
8) SHRINKFILE('temp', EMPTYFILE) to move all the data from temp over the three files evenly.
(A T-SQL sketch of these steps follows.)
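For reference, a minimal T-SQL sketch of those steps (paths and the insert volume are assumptions; note that temp is an additional file inside the PRIMARY filegroup, not a separate filegroup):

CREATE DATABASE [FGTest]
ON PRIMARY (NAME = 'FGTest', FILENAME = 'C:\Data\FGTest.mdf', SIZE = 200MB);
GO
USE [FGTest];
CREATE TABLE dbo.TEST (id INT IDENTITY PRIMARY KEY, filler CHAR(4000) NOT NULL DEFAULT 'x');
GO
INSERT dbo.TEST DEFAULT VALUES;   -- ~4KB per row, so 10,000 rows is roughly 40MB
GO 10000
ALTER DATABASE [FGTest] ADD FILE
    (NAME = 'temp', FILENAME = 'C:\Data\temp.ndf', SIZE = 200MB) TO FILEGROUP [PRIMARY];
DBCC SHRINKFILE ('FGTest', EMPTYFILE);   -- moves the user data into temp
                                         -- (the primary file keeps its system pages)
ALTER DATABASE [FGTest] ADD FILE
    (NAME = 'DATA2', FILENAME = 'C:\Data\DATA2.ndf', SIZE = 200MB) TO FILEGROUP [PRIMARY];
ALTER DATABASE [FGTest] ADD FILE
    (NAME = 'DATA3', FILENAME = 'C:\Data\DATA3.ndf', SIZE = 200MB) TO FILEGROUP [PRIMARY];
DBCC SHRINKFILE ('temp', EMPTYFILE);     -- spread the data back over FGTest, DATA2, DATA3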
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data spread over three files should be roughly 13MB in each file.)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
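For reference, pre-allocating the extra file would look something like this (a minimal sketch; name, path, and sizes are placeholders):

ALTER DATABASE [YourDb]
ADD FILE (NAME = 'YourDb_Data5',
          FILENAME = 'E:\Data\YourDb_Data5.ndf',
          SIZE = 70GB,
          FILEGROWTH = 4GB)
TO FILEGROUP [YourFileGroup];

One caveat: proportional fill favors the file with the most free space, so a brand-new empty file will absorb the bulk of new writes until it catches up with the existing 70GB files.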
On one server we had file growth, so we had to add a new hard drive and a new data file on it. Now we have a new server with a huge hard drive, but all of the old files remain. Can I consolidate these files into one data file, or not?
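For reference, the usual consolidation pattern is to empty each extra file into the remaining files of the filegroup and then drop it (a hedged sketch; all names are placeholders, and the primary mdf itself can never be removed):

USE [YourDb];
DBCC SHRINKFILE ('YourDb_Data2', EMPTYFILE);        -- migrate its data to the other files
ALTER DATABASE [YourDb] REMOVE FILE [YourDb_Data2]; -- then drop the now-empty file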
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional-fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no additional filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by that same algorithm, or in a different way?
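One way to watch how the writes are actually spread is to compare per-file space as data is loaded (a minimal sketch; run in the database in question):

SELECT name,
       size / 128                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- data files only; sizes are stored in 8KB pages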
I set up an availability group (only two servers, a primary and a secondary, with IPs ending in .21 and .22).
I also defined a listener, with an IP ending in .23.
1- In the first step I connected to the listener (.23) and, in a loop, inserted records into a table:
WHILE 1 = 1 INSERT INTO Tbl_T1 (f1, f2) VALUES (1, 2);
2- In the second step, I stopped the primary.
- I expected the loop to continue without disconnecting.
3- The loop stopped with this message:
Msg 64, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
4- I executed the script again, and it worked on the new primary.
My questions:
1- Does the listener disconnect while the primary and secondary switch roles? Or do we lose data during the switch?
2- I did some huge updates on the primary that filled the log file space, and on the last update I got this error:
Msg 9002, Level 17, State 2, Line 27
The transaction log for database 'Your_DB' is full due to 'LOG_BACKUP'.
Is this (filling all the log space) a reason for the primary to fail over, or not?
We are having a conversation at work and the subject of load balancing with SQL Server came up. Right now we are running SQL Server 2014 on four (4) machines, using AlwaysOn with Availability Groups (AG). Now I know that we can scale out reads in an AG by allowing the secondary servers to receive read traffic.
Is there a way to do this with writes? Can I have, in essence, two masters that somehow reconcile with each other? We are expecting a huge amount of writes in the near future and we need a way for SQL Server to handle the amount of traffic we are expecting without any issues.
I explored the possibility of peer-to-peer replication; however, it seems that it would be more work if we are constantly making updates to the database schema.
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure it for optimal performance.
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDB – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
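As a concrete illustration of that separation (purely hypothetical names, paths, and sizes, with each component on its own volume):

CREATE DATABASE [YourOLTP]
ON PRIMARY
    (NAME = 'YourOLTP_Data', FILENAME = 'D:\Data\YourOLTP.mdf', SIZE = 1TB),
FILEGROUP [INDEXES]
    (NAME = 'YourOLTP_Ix', FILENAME = 'I:\Index\YourOLTP_Ix.ndf', SIZE = 200GB)
LOG ON
    (NAME = 'YourOLTP_Log', FILENAME = 'L:\Log\YourOLTP.ldf', SIZE = 1TB);
-- tempdb would similarly get its own volume, e.g. (takes effect at restart):
-- ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdev.mdf');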
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states: “Use a single log file in most situations as log files are written sequentially.”
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently, in the past the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table, and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table so that TEXTIMAGE_ON points to the new filegroup in SQL Server 2014? We are on Standard edition. The technique I am using is the DROP CONSTRAINT (with the MOVE option) and ADD CONSTRAINT (to the new filegroup) commands. The data and indexes move, but not the text data (it is still in the primary filegroup).
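For reference, the drop/add pattern described above looks like this (table, column, constraint, and filegroup names are placeholders):

ALTER TABLE dbo.YourTable
DROP CONSTRAINT PK_YourTable WITH (MOVE TO [NewFG]);

ALTER TABLE dbo.YourTable
ADD CONSTRAINT PK_YourTable PRIMARY KEY CLUSTERED (id) ON [NewFG];

This moves the clustered index, and with it the in-row data, but as far as I can tell TEXTIMAGE_ON cannot be changed after the table is created, which is why the rebuild-and-copy advice keeps coming up.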
I have been creating databases in SQL 2008 with a primary filegroup for the system objects and a secondary, marked Default, for the data.
We are preparing a migration to SQL 2014, and the administrator is complaining that he won't adopt this structure on the new servers because 'there is no benefit' and 'a backup cannot be restored (!?)'.
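For context, the structure in question is just this (a minimal sketch; names, paths, and sizes are placeholders):

CREATE DATABASE [YourDb];  -- system objects live in PRIMARY
ALTER DATABASE [YourDb] ADD FILEGROUP [DATA];
ALTER DATABASE [YourDb] ADD FILE
    (NAME = 'YourDb_Data1', FILENAME = 'D:\Data\YourDb_Data1.ndf', SIZE = 10GB)
TO FILEGROUP [DATA];
ALTER DATABASE [YourDb] MODIFY FILEGROUP [DATA] DEFAULT;  -- user objects now land here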
I want to use service SIDs for my SQL Server service accounts but also want to have the data files on a NetApp filer CIFS share. The 2014 installer prevents installation if CIFS and service SIDs are used together. I tried to install with a domain account on CIFS and then swap back to service SIDs afterwards, but couldn't find a way to do it.
I granted the AD computer account Full Control on the CIFS share, so it should work, but I just can't work it out at the moment.
I've been trying to get a definitive answer to this question, but alas I have had conflicting and patchy answers so far from other sources. I have an index that, let's say, requires 10GB of data space to rebuild. This index resides on a filegroup that spans two files on two separate drives (i.e. an mdf and an ndf).
When I rebuild this index, how will each of these data files grow as the rebuild proceeds to completion? Let's for the time being remove the caveat of any other activity hitting the example index/database in question. My tests seem to show that only the mdf grows (or the file with the lowest id in that filegroup), provided there is enough space available in that particular file to complete the operation. The secondary ndf data file doesn't grow at all if the mdf has enough space.
Is this expected behavior? i.e. will the index be rebuilt in a contiguous manner relative to the files contained within the filegroup, i.e. file id 1 grows until its limit is reached, then the next file id grows, and so on?
I have a Windows Server 2012 R2 2 node cluster with SQL Server 2014 FCI installed. Data files are on a separate Windows Server 2012 R2 file server. Data files share has been permissioned to the SQL Server service and SQL Server Agent service accounts as Full Control. NTFS Permissions are Full Control.
When I try to attach a database:

CREATE DATABASE AdventureWorksDW2012
ON (FILENAME = '\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf')
FOR ATTACH

I get this error:

Msg 5120, Level 16, State 101, Line 4
Unable to open the physical file "\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf". Operating system error 5: "5(Access is denied.)".
If I log into the file server (called APRICOT) and look at the NTFS permissions they all look good. I have also reapplied the NTFS permissions from the root folder down.
EDIT: If I log on to one of the nodes in the cluster as the SQL Server service account, navigate to \\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA, and copy and paste the data file, it works fine.
EDIT2: If I log on to the file server and Enable Inheritance at the root level, then Replace all child objects with inheritable permission entries from this object, I get this error:
User Account Control settings on all nodes and the file server are set to Never notify
I was running an operation to shrink a data file with EMPTYFILE and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the EMPTYFILE completed (and that the engine isn't going to put more data in that file before I schedule the removal again, a week or more from now).
How does the engine know not to put any more data in there, and how long does that situation last?
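A hedged way to confirm that the EMPTYFILE completed is to check the file's used space (the file and database names are placeholders):

SELECT name,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files
WHERE name = 'YourDataFile';
-- If used_mb stays at (or near) zero, the data really was migrated out;
-- the deferred removal would then be:
-- ALTER DATABASE [YourDb] REMOVE FILE [YourDataFile];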
I recently installed a standalone version of SQL Server 2014 Standard on my work computer. I used Access before, but I want to use SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).
Here is what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for the package to run daily at a certain time.
At first the jobs would fail with an Access is Denied error. I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I made.
Jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can’t get it to work.
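For reference, the credential-and-proxy wiring described above looks roughly like this (a hedged sketch; every name and the account are placeholders):

CREATE CREDENTIAL [FileShareCred]
    WITH IDENTITY = N'DOMAIN\YourAccount', SECRET = N'YourPassword';

EXEC msdb.dbo.sp_add_proxy
     @proxy_name = N'SSIS_Proxy',
     @credential_name = N'FileShareCred';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = N'SSIS_Proxy',
     @subsystem_id = 11;  -- 11 = SSIS package execution

The job step that runs the package then has to be set to run as that proxy, and the Windows account behind the credential still needs share and NTFS permissions on the folder where the file lands.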
I have a two-node cluster with two standalone 2014 instances in an AlwaysOn setup. As per client requirements, we created a client access point with a CNAME alias in DNS to connect to the secondary replica. Now, every time the roles switch over, someone has to manually move this resource from the previous secondary node to the new secondary node. This is tedious and should not be done manually, so I am looking for a way to automate it so that as soon as the roles switch over, the resource group shortly afterwards also switches over to the current secondary.
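One candidate approach, sketched below, is to hook a SQL Agent alert to error 1480 (the replica role-change message) and have it fire a job that moves the client access point. This assumes error 1480 reaches the event log, and every name here is hypothetical; the job step itself would run something like PowerShell's Move-ClusterGroup:

EXEC msdb.dbo.sp_add_alert
     @name = N'AG role change',
     @message_id = 1480,
     @include_event_description_in = 1,
     @job_name = N'Move client access point';  -- a job whose step moves the CAP group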
Recently, after turning on a trace, I restarted the SQL Server services on a box that is configured for automatic-failover availability groups. The AG did not fail over to the other node; the other node was in the RESOLVING state. When the restarted server came back, the AG went back to that server. I checked the failure condition level property in sys.availability_groups; it is set to 1, which I understood to mean that a service restart should initiate a failover.
What I asked for: Three Windows Server 2012 R2 machines with independent storage running a SQL Server 2014 AlwaysOn Availability Group. DB1 would be the primary, DB2 would be a synchronous replica, and DB3 would be a remote asynchronous replica.
What I was given: a two-node Windows Server 2012 R2 WSFC to run SQL Server 2014 Enterprise with shared storage and a third (remote) Windows Server 2012 R2 machine with independent storage, also with SQL Server 2014 Enterprise, to host an AlwaysOn Availability Groups asynchronous replica.
DB1 and DB2 (as Cluster1) share an E: drive. The remote DB3 has its own E: drive. Initially, DB3’s E: drive was claimed as a cluster resource and I couldn’t even see it. I’ve had several ugly days trying to make this work and have temporarily given up, installing DB3 as a standalone SQL Server that is no longer part of the WSFC and pointing everything towards that (it was originally a third node in the WSFC).
Is it possible to create an AlwaysOn Availability Group with nested clusters (i.e. create the AOAG with Cluster1 and DB3 and somehow ignore the individual nodes that comprise Cluster1)?
I need some clarification about adding a file to a mirrored database on the principal server without downtime and without breaking the mirror.
In our environment we are using mount disks on both servers. On the principal, for example, we have an F drive for data files under mount disk 3; on the mirror server we also have the same drive, but under mount drive 2.
As far as I know, if the drives are the same we can add ndf files on the principal and that will be reflected on the mirror, but in the current situation I am confused about mount points with different names.
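For context, the straightforward case looks like this (a hedged sketch; names and paths are placeholders). The catch is that the mirror replays the file creation using the principal's path:

ALTER DATABASE [YourDb]
ADD FILE (NAME = 'YourDb_Data2',
          FILENAME = 'F:\Data\YourDb_Data2.ndf',
          SIZE = 1GB)
TO FILEGROUP [PRIMARY];
-- If that path does not exist on the mirror, the redo fails and the
-- mirroring session is suspended, which is why differing mount-point
-- layouts are a legitimate concern here.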
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they would take ages to copy over and restore, so it would be much easier to just use the DB files. Should I restore the master DB first (easy, as it's small)?
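If the files are intact, attaching each database is just this (a hedged sketch; paths and names are placeholders):

CREATE DATABASE [YourDb]
ON (FILENAME = 'E:\Data\YourDb.mdf'),
   (FILENAME = 'E:\Logs\YourDb_log.ldf')
FOR ATTACH;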
I cannot see the file created in the directory. The account under which the SQL Server Agent job runs has full privileges on it and is sysadmin. Then I ran the command in SSMS:
BACKUP DATABASE [F1SB]
TO DISK = N'F:\SqlBackup2014\<server>\F1SB\FULL\IGS-DB01_F1SB_FULL_20150510_214455.bak'
WITH NO_CHECKSUM, COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [serverCertificate])
and I get this error message:
Msg 3013, Level 16, State 1, Line 13
BACKUP DATABASE is terminating abnormally.
Is there a better way to deal with virtual log files? I see several approaches to reducing the number of virtual log files for a database, and I want to know the best and safest approach from the masters here.
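For reference, the approach that keeps coming up is to shrink the log and regrow it in one step, which rebuilds it with a small number of larger VLFs (a hedged sketch; names and sizes are placeholders):

DBCC LOGINFO;                                    -- one row per VLF, so the row count is the VLF count
DBCC SHRINKFILE ('YourDb_log', 1);               -- shrink the log file as far as possible
ALTER DATABASE [YourDb]
MODIFY FILE (NAME = 'YourDb_log', SIZE = 8GB);   -- regrow in one operation to get few, larger VLFs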
I have a user who is trying to log into the server, but every time he gets an error saying something about Group Policy denying him access.
This user needs access, and I'm trying to understand how to grant it to him.
I have been looking into how I can access the Group Policy editor, but the farthest I can get is the Local Group Policy Editor. How do I make sure this specific user has access?