SQL 2012 :: Read And Write Speeds Not Increased With Additional Disks
Mar 13, 2014
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on one disk. Each database has a table called DATA and a column called VALUE. Value is NVARCHAR(200). I have filled each table up in both databases with 20,000 rows.
I then perform a select on each table in each database, using CHECKPOINT and DBCC DROPCLEANBUFFERS before each query to ensure I am reading from disk, and the execution times are identical in both databases.
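For reference, a minimal sketch of that cold-cache test (the exact SELECT is not given in the post, so the query below is only a stand-in against the DATA/VALUE table described above):

CHECKPOINT;                    -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS;         -- empty the buffer pool, forcing the next read from disk
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.DATA WHERE [VALUE] LIKE N'%x%';   -- stand-in query that touches every row
SET STATISTICS TIME OFF;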
I then ran the same queries against each database using a load testing tool and the batch requests per second on each DB is identical under load.
Surely the database with data spread over 8 disks should be far faster than the single-file database, as you have the combined reading power of 8 disks as opposed to just one?
Also, the same is happening for write speeds. When I create the data on both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI this is on an Azure virtual machine and each disk is a locally redundant data disk that I have attached to the virtual machine.
Should write speeds increase with multiple disks, or just read speeds?
I am testing out a blank database created over two physical files on two separate disks with one table called data which has one column called values nvarchar(max).
I filled the table up with a whole load of data and ran a SELECT * against it. If I run Perfmon at the same time, I can see that the read load has been spread over multiple disks, as each of these disks is being read from in parallel. If I create the same database on a single file and run the same SELECT * again, it takes much longer, proving that the read load has been distributed across multiple disks.
Now, moving on to writes, this is where the confusion lies. I understand that SQL Server fills files evenly until they need to grow, after which it grows and fills them individually, in a round-robin fashion, until they are full, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed across the files while it is filling them.
I ran a continual insert into my table with GO 1000000 to monitor how the files are being filled up. I monitored where SQL Server was physically placing the rows as they were being inserted by running the following query:
;WITH CTE AS (SELECT sys.fn_PhysLocFormatter (%%physloc%%) col1, RIGHT(LEFT(sys.fn_PhysLocFormatter (%%physloc%%),2),1) AS [Physical RID], DATAID
[Code] ....
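The snippet above is cut off; a sketch of a complete query along those lines (assuming the table is dbo.DATA with a DATAID column, as in the post; sys.fn_PhysLocFormatter is undocumented, so treat it as a diagnostic aid only):

SELECT DATAID,
       sys.fn_PhysLocFormatter(%%physloc%%) AS [Physical RID],   -- formatted as (file:page:slot)
       PARSENAME(REPLACE(REPLACE(REPLACE(
           sys.fn_PhysLocFormatter(%%physloc%%), '(', ''), ')', ''), ':', '.'), 3) AS FileId   -- first element = file id
FROM dbo.DATA;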
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, and so on. In other words, it would hit one disk, then the other disk, then go back to disk one, to fill the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads, as with writes only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
I am trying to build two 2-node clusters with AlwaysOn.
Here is the landscape.
2-node PROD failover cluster (running one instance)
2-node DR failover cluster (running 2 instances - DR and PRE-PROD)
Both clusters are in different geographies.
PRE-PROD needs to stay editable, so it is out of scope for AlwaysOn.
One instance on PROD -> DR on the other box. [Want to achieve this through AlwaysOn]
Now my questions:
1) Do I need to have all 4 nodes in the same failover cluster group? If yes, then this would become a multi-subnet cluster. Or is there any way those 2 different failover clusters (one DR and one PROD) can be part of AlwaysOn?
2) Can I use the clustered disks as in the above landscape for AlwaysOn?
I'm trying to do SharePoint DR with log shipping and everything is configured except one thing, which is switching the WSS_Content (Standby / Read-Only) database to read and write.
I tried from the GUI and with:
ALTER DATABASE [WSS_Content] SET READ_WRITE WITH NO_WAIT
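One thing worth checking, as a log-shipped standby is normally still in a restoring/standby state, is whether the database needs to be recovered before the read-only flag can be cleared. A sketch of that sequence (an assumption about the cause, not something the post confirms):

-- Bring the standby database fully online (this ends the log shipping restore chain).
RESTORE DATABASE [WSS_Content] WITH RECOVERY;
-- Then clear the read-only flag if it is still set.
ALTER DATABASE [WSS_Content] SET READ_WRITE WITH NO_WAIT;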
I have two database files, one .mdf and one .ndf. The creator of these files has marked them read-only. I want to attach these files to a new database, but cannot do so because they are read-only. I get this message:
Server: Msg 3415, Level 16, State 2, Line 1
Database 'TestSprintLD2' is read-only or has read-only files and must be made writable before it can be upgraded.
What command(s) are needed to make these files read_write?
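Msg 3415 usually means the read-only attribute is set on the physical files themselves, so it has to be cleared at the operating system level before SQL Server can make the database writable. A sketch of the usual sequence (the file path and filegroup name below are assumptions):

-- 1. From a command prompt, clear the read-only attribute on the .mdf/.ndf files, e.g.:
--      attrib -r "D:\Data\TestSprintLD2*"
-- 2. Attach the database, then clear the database-level read-only flag if it is still set:
ALTER DATABASE [TestSprintLD2] SET READ_WRITE WITH NO_WAIT;
-- 3. If a secondary filegroup was marked read-only, switch it back as well:
ALTER DATABASE [TestSprintLD2] MODIFY FILEGROUP [SecondaryFG] READ_WRITE;   -- placeholder filegroup name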
I am getting following errors in my Cluster Validation report when trying to create a Windows Cluster.
I have 2 nodes, DB01 and DB02. Each has 1 public IP, 1 private IP (for the heartbeat), and 2 private IPs for SAN1 and SAN2. The private IPs to the SANs are directly connected via network adapters in DB01 and DB02.
Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 9/9/2014 1:57:52 PM.
No disks were found on which to perform cluster validation tests.
Stop: 9/9/2014 1:57:53 PM.
I am wondering what would be the best disk/RAID setup for a Windows server 2008 R2 OS and SQL Server 2012 database that has heavy read/write. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. I am thinking I should do RAID 10 with the 15k drives for SQL and RAID 1 with the 10k drives for the OS.
I wanted to remove an extra transaction log file that was no longer required, and ran the following against the database...
DBCC SHRINKFILE (DB_Name_log2, EMPTYFILE);
GO
ALTER DATABASE [Db_Name] REMOVE FILE DB_Name_log2;
GO
I got a successful removal message. But if I go into the properties of the database, and click on files, it still shows up. Why is this and how can I get rid of it?
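One quick check is to ask the engine directly rather than the SSMS dialog, which can show stale information; a sketch (Db_Name is the placeholder name from the post):

USE [Db_Name];
GO
-- Lists the files the engine itself still considers part of the database, with their current state.
SELECT name, type_desc, state_desc
FROM sys.database_files;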
I have a query that runs for 10 seconds on one database (A) and 5 minutes on another database (B), even though the two databases have identical schemas, tables, indexes and statistics.
I ran Profiler and got the following information:
             CPU     Reads      Writes
Database A:  92051   711956     8774
Database B:  91812   7621589    315822
The query running on database B has significantly larger reads and writes. I don't understand why this is happening, even though these two databases have the same structure and the query has the same execution plan on both.
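One way to narrow down where the extra I/O comes from is to compare per-table reads on both databases while running the same query (the query itself is whatever the post refers to, so only the wrapper is shown):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the same query here against database A and then against database B,
-- and compare the logical/physical reads reported per table in the Messages output
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;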
Hi, we're trying to read from a table and write back to the same table and are having a lot of trouble with blocking. What could we do to prevent our application from hanging due to blocking of this type?
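One option that is often suggested for reader/writer blocking is row versioning via READ_COMMITTED_SNAPSHOT; whether it fits this workload is an assumption, and the statement needs exclusive access to the database to complete:

-- Readers then see the last committed version of a row instead of waiting on writers.
ALTER DATABASE [YourDatabase]              -- placeholder database name
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;                   -- disconnects other sessions so the change can apply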
Hi, I want to write to files and read from files. For example, I have My_File.txt. I need syntax that I can call from my stored procedure to write, for example, "Hello Word" into My_File.txt, and another piece of syntax that reads, for example, "Word" back from My_File.txt. What syntax does that?
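A sketch of two common approaches (the C:\Temp path is an assumption; reading uses OPENROWSET(BULK ...), which needs bulk permissions, while writing here relies on xp_cmdshell, which is disabled by default and has security implications):

-- Read the whole file back as text.
SELECT BulkColumn AS FileContents
FROM OPENROWSET(BULK 'C:\Temp\My_File.txt', SINGLE_CLOB) AS f;

-- Append a line of text to the file (>> appends, > overwrites).
EXEC master..xp_cmdshell 'echo Hello Word >> C:\Temp\My_File.txt';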
We currently run sql 2005 server and also sql express in our dev environments. We use sql express as an offline store (smart client). We have a similar/exact schema on the sql 2005 server and also the express.
We use the auto attach feature to connect to the express version of the database. Both the developer machines and the one that is running the sql 2005 server have exactly the same hardware configuration. The only difference may be that the server box is not running the VS.Net environment. The disk space etc is pretty much the same. Actually we run another database server(DB2) on the 2005 server machine.
We have observed that SQL Express is much slower and queries execute much slower as well. For example (this may not be a totally scientific way of checking), a long-running query on the server took only 2 minutes, while on Express it took longer than 9 minutes. The schema and data etc. are the same.
Is there something we need to look into as far as read write speed/performance goes ?
I am using a Script Component and I have a Read/Write variable, varStatusCase (as assigned in the Custom Properties of my Script Component). I used this inside my script to get a specific value. However, when I ran it I got this error:
The collection of variables locked for read and write access is not available outside of PostExecute.
I have some transactions with the same card number that need to add value amounts to the existing balance. For example:
Card Number         Balance Amount   Issue Date   Issue Branch
4000111122223333    $100.00          10/1/2015    123    <= This is an existing row in the Card Number SQL table.
Now, the same card number has an additional $50 that I want to add to it, making the total $150. This additional $50 comes from another transaction table. On the other hand, I will have -$20 for the same card number in a different transaction, so I will need to deduct it: $150 - $20 = $130. How can I update the card number table with debit/credit transactions to keep the outstanding balance?
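A minimal sketch of applying the net of the debit/credit transactions to the balance; every table and column name here (dbo.Card, dbo.CardTransaction, Amount) is a guess at the schema, not something the post confirms:

UPDATE c
SET    c.BalanceAmount = c.BalanceAmount + t.NetAmount
FROM   dbo.Card AS c
JOIN  (SELECT CardNumber,
              SUM(Amount) AS NetAmount      -- credits stored as positive amounts, debits as negative
       FROM   dbo.CardTransaction
       GROUP  BY CardNumber) AS t
  ON   t.CardNumber = c.CardNumber;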
I have a database which gets refreshed every day from the client's data, and we need to pull heavy data from it every day for reports, so only SELECTs happen on that database.
We do a daily population of some tables in some other databases from this daily-refreshed DB.
Will READ UNCOMMITTED or NOLOCK with the SELECT queries retrieve the data faster?
There will be no dirty reads, as there are no DML operations in that database, so for the SELECTs that happen concurrently on these tables, will NOLOCK work?
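For reference, the two equivalent ways to request that behaviour (whether it actually speeds things up on a read-only workload is the open question of the post; the table name is a placeholder):

-- Per-query table hint.
SELECT *
FROM dbo.SomeReportTable WITH (NOLOCK);

-- Or per-session isolation level, which applies to every table in the query.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT *
FROM dbo.SomeReportTable;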
Okay, maybe I'm getting ahead of myself. Using SQL Server Express, VWD and .NET 2.0, I've figured out how to drop a table column constraint or default value/binding and then create it again using a stored procedure. What I can't figure out is how to retrieve that column's constraint value and write it to, say, a label in an aspx page, simply for reference. Is it possible? In this case the data type of the column is money. I'm using it to perform a calculation against a value that the user inserts into another column (Column1 (user input) minus Column2 (with default value) = Column3 (difference)). I just want to read Column2's default value for reference so I know whether to change it or not.
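A sketch of reading a column's default definition back out of the catalog views (dbo.MyTable is a placeholder for the poster's actual table):

SELECT dc.name       AS ConstraintName,
       dc.definition AS DefaultValue        -- e.g. ((0.00)) for a money default
FROM   sys.default_constraints AS dc
JOIN   sys.columns AS c
  ON   c.object_id = dc.parent_object_id
 AND   c.column_id = dc.parent_column_id
WHERE  dc.parent_object_id = OBJECT_ID('dbo.MyTable')
  AND  c.name = 'Column2';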
Hi, I was reading that many of these high-traffic websites actually have 2 databases: 1 database is used ONLY for reads, while the other is for writes.
How does one go about creating such a setup? How does the database where writes are allowed replicate the data to the read-only database server?
Hi, I would like to use a database locking mechanism to control access to an external resource (like the file system). What I need is:
1. an exclusive (write) lock conflicting with any access to the resource (both for read and write)
2. a non-exclusive (read) lock conflicting with writes only
How could this be done? I'd appreciate any reply.
Vadim
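One mechanism inside SQL Server that matches this shared/exclusive pattern is an application lock; whether it suits the poster's setup is an assumption, and 'MyExternalResource' is just a made-up resource name:

-- Writer: exclusive lock, blocks readers and other writers.
EXEC sp_getapplock  @Resource = 'MyExternalResource', @LockMode = 'Exclusive', @LockOwner = 'Session';
-- ... work with the external resource ...
EXEC sp_releaseapplock @Resource = 'MyExternalResource', @LockOwner = 'Session';

-- Reader: shared lock, compatible with other readers, blocked by the exclusive lock.
EXEC sp_getapplock  @Resource = 'MyExternalResource', @LockMode = 'Shared', @LockOwner = 'Session';
-- ... read the external resource ...
EXEC sp_releaseapplock @Resource = 'MyExternalResource', @LockOwner = 'Session';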
Hello guys, in SSIS I want to get a set of data and do some modifications on it before I insert it into the destination. So far so good. Some of the modifications will include comparisons between two columns, and if a certain field is NULL then I want to take the value from the other column I am comparing it to. When using a Conditional Split, I only get the rows redirected so that I can do whatever comes next. However, I would like to use variables as storage, so that I can put the value of one of the two columns into a variable, which is what will actually be loaded into the destination. Any help? Thanks
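If the source of the data flow is a SQL query (an assumption; the post does not say), the NULL fallback can be pushed into the source SELECT instead of juggling variables, for example:

-- Take ColumnA unless it is NULL, otherwise fall back to ColumnB
-- (all names here are placeholders).
SELECT COALESCE(ColumnA, ColumnB) AS ValueToLoad,
       OtherColumn
FROM   dbo.SourceTable;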
I need to create a flat file as a Word document. How can I write text to it from a stored procedure, and if the file already exists, how can I append the text to it?
I use Excel as an interface to write queries that retrieve data from a database on a network drive. My problem is that everyone can open and edit my query. Of course, the universal database access user name and password are prompted for, but everyone knows them. How can I set a password to prevent any query modification?
Is there a way to get the last read/write time of a table?
I want to have a few tables, but only allow them to exist if they have been used in the last 30 days. I want to set up a "purge" job to clear out any tables that have not been used in 30 days.
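A sketch of getting approximate last-used times from sys.dm_db_index_usage_stats; note the caveat that these counters reset whenever the instance restarts, so they only cover activity since the last restart:

SELECT OBJECT_NAME(us.object_id)   AS TableName,
       MAX(us.last_user_seek)      AS LastSeek,
       MAX(us.last_user_scan)      AS LastScan,
       MAX(us.last_user_update)    AS LastWrite
FROM   sys.dm_db_index_usage_stats AS us
WHERE  us.database_id = DB_ID()
GROUP  BY us.object_id;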