SQL 2012 :: Blank Database Created On Two Separate Disks - Write To Multiple Files
Mar 17, 2014
I am testing out a blank database created over two physical files on two separate disks with one table called data which has one column called values nvarchar(max).
I filled the table up with a whole load of data and ran a SELECT * against it. If I run Perfmon at the same time I can see that the read load has been spread over multiple disks, as each of these disks is being read from in parallel. If I create the same database on a single file and run the same SELECT * again it takes much longer, which shows that in the two-file case the read load really was distributed across the disks.
Now moving on to writes, and this is where the confusion lies. I understand that SQL Server fills the files evenly (proportional fill) until they need to grow, after which it grows and fills them one at a time in a round-robin fashion unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't spread across the files while the filegroup is being filled.
I ran a continual insert into my table with GO 1000000 to monitor how the files were being filled up. I monitored where SQL Server was physically placing the rows as they were being inserted by running the following query:
;WITH CTE AS
(SELECT
    sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
    RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
    DATAID
[Code] ....
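For reference, a self-contained version of that check might look like this (a sketch only; the table name dbo.DATA and the DATAID column are assumed from the description, and the RIGHT(LEFT(...)) expression pulls the file ID out of the (file:page:slot) string that fn_PhysLocFormatter returns):

;WITH CTE AS
(
    SELECT
        sys.fn_PhysLocFormatter(%%physloc%%) AS PhysicalLocation,           -- e.g. (1:153:0)
        RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS FileId,  -- file portion of the locator
        DATAID
    FROM dbo.DATA
)
SELECT FileId, COUNT(*) AS RowsInFile
FROM CTE
GROUP BY FileId
ORDER BY FileId;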
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so back into file 1, and so on. In other words it would hit one disk, then the other disk, then go back to disk one, filling the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads; with writes, only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
View 6 Replies
Mar 13, 2014
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on one disk. Each database has a table called DATA and a column called VALUE. Value is NVARCHAR(200). I have filled each table up in both databases with 20,000 rows.
I then perform a select on each table in each database, using CHECKPOINT and DBCC DROPCLEANBUFFERS to ensure I am reading from disk before each query, and the execution times are identical in both databases.
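For reference, the cold-cache read test follows this pattern (a sketch; dbo.DATA is the table described above, CHECKPOINT flushes dirty pages and DBCC DROPCLEANBUFFERS then empties the buffer pool so the SELECT has to come from disk):

CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SELECT * FROM dbo.DATA;   -- timed in each database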
I then ran the same queries against each database using a load testing tool and the batch requests per second on each DB is identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single file database as you have the combined reading power of 8 disks as opposed to 2??
Also, the same is happening for write speeds. When I create the data on both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI this is on an Azure virtual machine and each disk is a locally redundant data disk that I have attached to the virtual machine.
Should write speeds increase with multiple disks, or just read speeds?
View 8 Replies
View Related
Apr 7, 2008
I've read that if particular tables are frequently queried together through a join then these tables should be placed on different devices on different physical disks.
What does this mean exactly and how would you configure this?
Is this a common practice in high-performance real-world environments (or should it be)?
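For what it's worth, the way this is usually configured (a hedged sketch; database, table, filegroup names and paths below are all made up) is to add filegroups backed by files on different physical disks and place the frequently joined tables on different filegroups:

ALTER DATABASE MyDb ADD FILEGROUP FG_Disk1;
ALTER DATABASE MyDb ADD FILEGROUP FG_Disk2;
ALTER DATABASE MyDb ADD FILE (NAME = 'MyDb_FG1', FILENAME = 'D:\Data\MyDb_FG1.ndf', SIZE = 1GB) TO FILEGROUP FG_Disk1;
ALTER DATABASE MyDb ADD FILE (NAME = 'MyDb_FG2', FILENAME = 'E:\Data\MyDb_FG2.ndf', SIZE = 1GB) TO FILEGROUP FG_Disk2;
CREATE TABLE dbo.Orders     (OrderId int PRIMARY KEY, CustomerId int) ON FG_Disk1;
CREATE TABLE dbo.OrderLines (OrderLineId int PRIMARY KEY, OrderId int) ON FG_Disk2;

That way a join between the two tables can read from both disks at the same time.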
View 3 Replies
View Related
Jan 16, 2008
We just upgraded from SQL 2000 to 2005. Under 2000, I could export multiple stored procs to separate windows files.
Is there a way to do this under 2005 without exporting 1 proc at a time?
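One workaround (a sketch, not the SSMS wizard route): pull all the procedure definitions out with T-SQL and save each row to its own file from the client side, for example via sqlcmd or a small script:

SELECT p.name,
       m.definition        -- full CREATE PROCEDURE text
FROM sys.procedures AS p
JOIN sys.sql_modules AS m
  ON m.object_id = p.object_id
ORDER BY p.name;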
View 5 Replies
View Related
May 15, 2015
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
View 3 Replies
View Related
Nov 12, 2014
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file compared to the other files in the filegroup.
So if no extra filegroups are created and multiple secondary files are simply added to the database, is data written to those files by the same algorithm, or in a different way?
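For context, a secondary file added without naming a filegroup lands in the PRIMARY filegroup, e.g. (a sketch with made-up names and path):

ALTER DATABASE MyDb
ADD FILE (NAME = 'MyDb_Data2', FILENAME = 'E:\Data\MyDb_Data2.ndf', SIZE = 10GB);
-- no TO FILEGROUP clause, so the file joins PRIMARY and is filled by the same proportional fill algorithm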
View 2 Replies
View Related
Apr 15, 2014
I am currently investigating a high avg write time (ms) issue (145ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb in that:
1. Data files = number of physical cores
2. Data files and logfiles are on separate partitions away from the other databases.
3. Tempdb is presized and no incremental file increases look like they are happening with frequency.
We have SharePoint 2012 set up on other SQL servers with tempdb configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why it's confusing. File I/O auditing on the partitions themselves shows that the I/O is very fast on the partitions holding the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excess use of tempdb with operations taking a long time to resolve.
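For reference, the per-file write latency can be confirmed from the DMVs along these lines (a sketch; io_stall_write_ms divided by num_of_writes gives the average write time per file):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
WHERE vfs.database_id = DB_ID('tempdb');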
View 3 Replies
View Related
Apr 8, 2007
Hi all (newbie @ asp.net)(oldie @ ASP 3). What is the purpose of using an attached MDF database file in the App_Data folder on a web site, as opposed to importing it into the SQL Server directly or creating it on the SQL Server? Does an attached MDF database file purely use the SQL Server as a connection interface? Is it something similar to DSN (ODBC) connections for MS Access databases?
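For context, the App_Data approach typically uses a connection string along these lines (a sketch; the file name is a placeholder), where SQL Server Express attaches the file on demand as a user instance rather than the database being registered on the server up front:

Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDatabase.mdf;Integrated Security=True;User Instance=True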
View 2 Replies
View Related
Mar 13, 2014
I am trying to build two 2-node clusters with AlwaysOn.
Here is the landscape.
2 nodes PROD failover cluster (running once instance)
2 nodes DR failover cluster (running 2 instances - DR and PRE-PROD)
Both clusters are in different geographies.
PRE-PROD can be editable. So out of scope of Always On.
One instance on PROD -> DR of the other box. [Want to achive thru AlwaysON]
Now my Question:
1) Do I need to have all 4 nodes in the same failover cluster group? If yes, then this would become a multi-subnet cluster. Or is there any way those 2 different failover clusters (one DR and one PROD) can be part of AlwaysOn?
2) Can I use the clustered disks in the above landscape for AlwaysOn?
View 1 Replies
View Related
May 12, 2015
OS Windows Server 2012
SQL 2012
I have a software that presents a drive to a 2 node cluster.
I am not sure how I would go about presenting this newly presented drive as a cluster resource and making SQL depend on this resource.
View 5 Replies
View Related
Jan 9, 2015
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it did not work the same way as it did when you had RAID 5 arrays, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure it for optimal performance.
View 2 Replies
View Related
Sep 9, 2014
I am getting following errors in my Cluster Validation report when trying to create a Windows Cluster.
I have 2 nodes, DB01 and DB02. Each has 1 public IP, 1 private IP (for heartbeat), and 2 private IPs for SAN1 and SAN2. The private IPs to the SANs are directly connected via network adapters in DB01 and DB02.
Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 9/9/2014 1:57:52 PM.
No disks were found on which to perform cluster validation tests.
Stop: 9/9/2014 1:57:53 PM.
[Code] ...
View 9 Replies
View Related
Jun 27, 2015
I am wondering what would be the best disk/RAID setup for a Windows server 2008 R2 OS and SQL Server 2012 database that has heavy read/write. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. My thinking is to do RAID 10 with the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
View 6 Replies
View Related
Mar 7, 2007
I have a VS2005 C# winforms application that reads a SSRS request from a table and using the ReportViewer control produces a report and then exports in one of a number of formats via a specified path to a share on another server. This normally works without issue, however today I have had three instances of invalid or blank PDF's being produced. A sample error from the Acrobat Viewer is "There was an error processing the page. There was a problem reading this document (109)."
The software version are as follows:
Host Server: Windows Server 2003 with Sp1.
SQL 2005 with Sp1.
Acrobat reader: Version 7.0
By deleting the PDF file, resetting the processed flag to un-processed, the report was run again, and this time a perfectly readable PDF file was produced. As neither the source data nor the report definition file was updated during this time period, how it works at one time but not previously is currently inexplicable. I have run the report manually with the same input parameters using Internet Explorer and exported it successfully to another location.
Any ideas as to what is going on?
A fix to the winforms application will be to delete any existing file before exporting a new one.
View 3 Replies
View Related
Feb 5, 2008
Hello there. How can I take data out of my database, put it into a textbox and then separate the values with a comma? An example: a column Email contains the rows mail1@email.com, mail2@email.com and mail3@email.com. I want to put them into a textbox separated by commas, like this: mail1@email.com, mail2@email.com, mail3@email.com. Does anybody know how I can do that?
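One way to build the comma-separated string on the SQL side (a sketch; dbo.Subscribers and the Email column are made-up names standing in for the real table) uses FOR XML PATH:

SELECT STUFF(
         (SELECT ', ' + Email
          FROM dbo.Subscribers
          FOR XML PATH('')),
         1, 2, '') AS EmailList;
-- the single resulting string can then be bound to the textbox in the page code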
View 4 Replies
View Related
Nov 12, 2014
I need a query to find the date/time when a database schema was created and who created it.
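For the "when" part, the catalog keeps creation timestamps (a sketch below; 'MyDatabase' and 'MySchema' are placeholders). The "who" is not stored in the catalog views, so it has to come from the default trace or a DDL trigger if one was in place at the time:

-- database creation time
SELECT name, create_date FROM sys.databases WHERE name = 'MyDatabase';
-- creation/modification times of the objects in a schema
SELECT s.name AS schema_name, o.name AS object_name, o.create_date, o.modify_date
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE s.name = 'MySchema';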
View 2 Replies
View Related
May 21, 2015
So we have new servers that are going to be installed with SQL 2012 and I'm debating the wisdom of splitting tempdb with multiple files.
I know it's a myth that performance automatically improves if you split it into a number of files based on processors, but I'm debating the wisdom of putting a file on each of my data / log file drives.
For instance, I have a server with a C: drive (OS), D: drive (Data for system DBs and install of programs - 458 GB), an F: drive for user DB data files (767 GB), and a J: drive for log files (255 GB).
Obviously no files are going on C:. I'm debating whether we should even leave the system DBs on the D: drive, given that on our current 2K8 servers we end up with Memory.dmp files overflowing the D: drives, as well as .cabs and other install/update files that tend to collect on that drive over the years.
But if we leave the system DBs on D:, I'm wondering if adding a second tempdb file to F: and a third to J: will improve query performance or not.
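For reference, adding the extra tempdb data files would look like this (a sketch; paths, sizes and growth increments are placeholders):

ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2', FILENAME = 'F:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev3', FILENAME = 'J:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);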
View 9 Replies
View Related
Mar 24, 2015
How do I identify whether the files are read-write or read-only?
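For context, the catalog views expose a flag for this (a sketch; the first query runs in the database in question, the second covers all databases on the instance):

SELECT name, physical_name, is_read_only
FROM sys.database_files;
-- or across all databases:
SELECT DB_NAME(database_id) AS database_name, name, physical_name, is_read_only
FROM sys.master_files;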
View 1 Replies
View Related
Sep 27, 2007
Hi All
I am in process of moving a SQL 2005 solution from a development box that used local storage to UAT environment with SAN attached storage. The solution uses database snapshots
The database files are on the SAN storage but during testing I was unable to create a Database snapshot on the SAN disk. Creating snapshots on the local disk worked fine.
Is their some restriction/problem in using the database snapshot technology with SAN storage?
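For reference, the snapshot syntax in use is along these lines (a sketch; the database name and path are placeholders, and the NAME must match the logical data file name of the source database):

CREATE DATABASE MyDb_Snapshot
ON (NAME = 'MyDb_Data', FILENAME = 'S:\Snapshots\MyDb_Data.ss')   -- sparse file on the SAN volume
AS SNAPSHOT OF MyDb;

Snapshots rely on NTFS sparse files, so the target volume has to be formatted as NTFS.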
View 19 Replies
View Related
Oct 6, 2015
How do I determine the read/write frequency on a database table? I am trying to do this on a 2012 and 2008 R2 servers.
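For what it's worth, a rough read/write breakdown per table can be pulled from the index usage DMV along these lines (a sketch; the counters reset when the instance restarts):

SELECT OBJECT_NAME(ius.object_id) AS table_name,
       SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
       SUM(ius.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.object_id
ORDER BY writes DESC;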
View 2 Replies
View Related
Mar 2, 2005
Hello,
I am managing a sqlserver 6.5 database in my company. I get the message that the datafiles should be expanded but whenever I try to expand it the following message appears:
Could not find enough space on disks to extend the database. Meanwhile, I have about 6 gigabytes free space on my disks. Please help me out.
Thanks,
Albert
View 2 Replies
View Related
Jun 29, 2015
I am trying to get a new database created and then run a script to create the tables, relationships, indexes and insert default data. All of this I'm making happen during the installation of my Windows application. I'm installing SQL 2012 Express as a prerequisite of my application and then opening a connection to that installed SQL Server using Windows Authentication.
E.g.: Data Source=ComputerName\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI; Then I run a query from my code to create the database, e.g.: "CREATE DATABASE [MyDatabaseName]".
From this point I run a script using a batch file containing "SQLCMD....... Myscriptname.sql". In my script I have my tables being created using "USE [MyDatabaseName] GO CREATE TABLE [dbo].[MyTableName] .....". So the question is, should I have [dbo]. as part of my CREATE TABLE T-SQL commands? Can I remove "[dbo]."? Who would be the owner of the database? If I can remove the [dbo]., should I also remove dbo. from any query string within my code?
View 3 Replies
View Related
Sep 13, 2000
Usually, our in-house ERP software has 1 database and 1 database file. After an upgrade from MS SQL 6.5 to MS SQL 7.0 I have a database whose properties show that it is made up of multiple data files. What is the easiest and safest method to return this database to having only 1 data file?
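The usual approach (a sketch; the logical file name is a placeholder, and the primary file itself cannot be removed) is to empty each extra data file into the remaining ones and then drop it:

DBCC SHRINKFILE ('MyDb_Data2', EMPTYFILE);   -- moves its pages into the other files in the filegroup
ALTER DATABASE MyDb REMOVE FILE MyDb_Data2;
-- repeat for each additional data file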
View 11 Replies
View Related
Sep 26, 2000
Hi everybody,
At installation time SQL Server asks me where I want to locate the DATA files and the PROGRAM files. It gives me the choice to put database AND log files on one disk and program files on a separate one, but what about separating LOG and DATA files? I have RAID 1 created especially on the F: drive for LOG files and RAID 5 on E: for DATABASE files. How do I separate them, if not at installation time? How can I do that?
Thanks,
Miriam
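For context, the location chosen at install time mainly affects the system databases and the default directories; for each user database the data and log locations can be set explicitly in CREATE DATABASE, e.g. (a sketch with placeholder names):

CREATE DATABASE MyDb
ON (NAME = 'MyDb_Data', FILENAME = 'E:\Data\MyDb_Data.mdf')
LOG ON (NAME = 'MyDb_Log', FILENAME = 'F:\Logs\MyDb_Log.ldf');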
View 3 Replies
View Related
May 16, 2007
Hi there
It is obvious that putting multiple database files on different physical disks is better for performance, but what about splitting the data into different files on the same disk?
I have a database of about 20GB and only a single data file. Will I benefit from splitting this file into multiple files on the same disk?
View 10 Replies
View Related
Jan 18, 2008
I have two database files, one .mdf and one .ndf. The creator of these files has marked them read-only. I want to attach these files to a new database, but cannot do so because they are read-only. I get this message:
Server: Msg 3415, Level 16, State 2, Line 1
Database 'TestSprintLD2' is read-only or has read-only files and must be made writable before it can be upgraded.
What command(s) are needed to make these files read_write?
thanks
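In case it helps: the read-only state here is often the NTFS read-only attribute on the .mdf/.ndf files themselves, which has to be cleared in Windows (file properties, or attrib -r on each file) before the attach will upgrade them. If the database is then still marked read-only after attaching, it can be switched with a sketch like this:

ALTER DATABASE TestSprintLD2 SET READ_WRITE;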
View 7 Replies
View Related
Apr 13, 2006
Hi all,
I receive data via FTP to our webserver nightly as .txt files and .dic (if anybody is familiar with idx realtor websites, that's what this data is).
I've learned recently that I'm not going to be able to use Access to import or link to this data, so I'm trying to get my feet wet with SQL.
I have been practicing importing text files into a SQL db, but I notice that DTS imports everything as varchar 8000, though you can edit that. I've got a .dic file that accompanies every .txt file and contains definitions of each field name, field type and length, and I was wondering how to import that data as well, without having to manually retype everything.
I would be happy to email these text files to anybody willing to take a look.
Thanks,
Carrie
View 2 Replies
View Related
Dec 6, 2007
Hi,
I'm making backups of the database by first making a full backup and then differential backups. The differentials are backed up to separate files.
Restore of the full backup works fine, but I can't restore a differential backup. In Management Studio Express, I first do a full backup restore with the NORECOVERY option and then try to restore a differential backup. But this fails with the message:
"This differential backup cannot be restored because the database has not been restored to the correct earlier state."
Is it possible to restore a differential backup that is backed up to a separate file?
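For reference, the restore sequence being attempted corresponds to this T-SQL (a sketch with placeholder paths):

RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_Full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_Diff.bak' WITH RECOVERY;
-- the differential must be the most recent one taken after that full backup

The error in question typically appears when the full backup that was restored is not the base on which the differential was taken.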
View 8 Replies
View Related
Aug 1, 2006
I have a table with rows of file names and paths. What I'm trying to do is process each file and store it in my SQL database. I want to store the files as binary (they are Word, Excel and PDF files). Does anyone know a way to do this? It would especially be useful if I could do this with a console application so I can schedule it.
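On the T-SQL side, a single file can be pulled into a varbinary(max) column with OPENROWSET ... SINGLE_BLOB (a sketch; the dbo.FileStore table and the path are made up, and because OPENROWSET needs a literal path, looping over the rows of file names means either dynamic SQL or doing the read in the console app instead):

INSERT INTO dbo.FileStore (FilePath, FileData)
SELECT 'C:\Docs\report.pdf', src.BulkColumn
FROM OPENROWSET(BULK 'C:\Docs\report.pdf', SINGLE_BLOB) AS src;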
View 2 Replies
View Related
Sep 12, 2007
I have inherited some responsibilities for which I'm not really qualified, so I'll push on through and maybe not totally fall down.
Assume 10 50GB databases, each in a single MDF file. All these MDF files reside on the C drive (the only drive on the system), running SQL 2005 in a 32-bit Windows 2003 or later, 8GB RAM.
The C drive is 6 physical disks in RAID 5, say about 1.0 TB or so. We have 4 dual-core processors on the box.
We have limited simultaneous users, initially about 8 users doing very heavy writes on all tables in any one database. Later, we have about 15 users connecting via a Web interface and doing very heavy reads and light writes. Each of the 10 or so databases has this lifecycle: heavy writes for about 2 weeks (load data), then heavy reads for about 1 month (research and search data), then nothing ever again (the DB is taken offline).
Of course, this is not enough information to go on, but let's just go on it anyway.
My TempDB, Log (simple recovery), Index etc is all on the same RAID 5 drive (C).
I have two basic questions I'd love to hear feedback on:
1. Is there any real advantage to creating 8 Data files for my database (one per processor core)?
2. Given that the hardware people here REALLY don't want to change anything, what should I fight for first:
a. Separate drive for LOG files?
b. Separate drive for TempDB?
c. Something else
Thanks in advance.
View 1 Replies
View Related
Sep 20, 2007
I'm trying to do something which I hope can be accomplished relatively simply.
I have a report similar to bank statements, let's say. When run, it currently prints out each person's statement into one file, with page breaks separating each person's statement. What I need to do, when the report is run, is save each person's report into a separate file for the purpose of emailing it to them later.
I could easily modify my report to just output for one particular person, but I'm not sure if there's a way to "bulk render" all the reports and have them saved to separate files.
I should also add that I'm using an MS Access Data Project (ADP) as the front end to my app - connected to a SQL Server 2005 DB. I currently display the reports by embedding a web browser object into an Access form and rendering the report via HTML.
Thanks in advance,
H
View 1 Replies
View Related
Jul 21, 2014
I've stepped into a new environment and have never dealt with multiple data files on user databases, only with tempdb. What would be the best way to get all my data files in sync? I have done this before only on databases that aren't that big in size or aren't off in size by a lot. Here is what I have:
Mdf -- 69 GB
ndf -- 3
ndf -- 3
ndf -- 3
ndf -- 3
ndf -- 4
ndf --4
ndf -- 2
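One approach I have seen for evening files out (a sketch with made-up names, and with the caveat that it causes a lot of data movement): grow the small files so every file in the filegroup is the same size, then rebuild the largest indexes so proportional fill redistributes the pages across them.

ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_Data2', SIZE = 20GB);
-- repeat for each .ndf so every file matches, then:
ALTER INDEX ALL ON dbo.BigTable REBUILD;   -- rewritten pages spread across the equally sized files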
View 7 Replies
View Related
Dec 22, 2014
(SQL 2012 Standard)
I have a database with one 320GB .mdf file and one 200GB .ndf file.
I would like to reconfigure my database to have four 200GB .mdf files. How do I get from here to there?
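A possible path (a sketch with made-up logical names; note there can only be one primary .mdf, so "four 200GB files" in practice means one .mdf plus three .ndf files of roughly equal size): add the new files, rebuild the large indexes so proportional fill spreads the pages across all four files, then shrink the oversized primary toward the target.

ALTER DATABASE MyDb ADD FILE (NAME = 'MyDb_Data3', FILENAME = 'E:\Data\MyDb_Data3.ndf', SIZE = 200GB);
ALTER DATABASE MyDb ADD FILE (NAME = 'MyDb_Data4', FILENAME = 'E:\Data\MyDb_Data4.ndf', SIZE = 200GB);
ALTER INDEX ALL ON dbo.BigTable REBUILD;   -- spreads the rebuilt pages over all four files
DBCC SHRINKFILE ('MyDb_Data', 204800);     -- 204800 MB = 200GB target for the primary file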
View 5 Replies
View Related