File Placement For Optimal Performance?
Aug 24, 2007
What is the best-performing file placement for this configuration:
Files:
Data
Log
Indexes
tempdb
Disk:
A - RAID 10
B - RAID 10 (or should this be RAID 1?)
What's best?:
A - Data and Indexes
B - tempdb and Log
Thanks.
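For concreteness, a minimal sketch of what the A/B split would look like at CREATE DATABASE time - the drive letters, paths, and names here are all made up, and tempdb would be relocated to B separately:

-- Data and indexes on array A, log on array B (hypothetical paths)
CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_Data, FILENAME = 'A:\SQLData\SalesDb_Data.mdf'),
FILEGROUP IndexFG
    (NAME = SalesDb_Ix, FILENAME = 'A:\SQLData\SalesDb_Ix.ndf')
LOG ON
    (NAME = SalesDb_Log, FILENAME = 'B:\SQLLog\SalesDb_Log.ldf')
-- indexes then go to the separate filegroup via CREATE INDEX ... ON IndexFG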
Mar 14, 2008
Hey guys
I have someone telling me that you can improve performance in SPs by placing all the DDL at the beginning of the procedure, i.e. doing all your CREATE TABLE #tbl and DECLARE statements before the rest of your code.
Any thoughts on this?
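For reference, a minimal sketch of the layout being described (hypothetical procedure; the usual argument for it is that grouping temp-table DDL up front reduces recompilations mid-procedure):

CREATE PROCEDURE dbo.usp_Demo
AS
BEGIN
    -- all DDL and declarations first...
    CREATE TABLE #tbl (Id int NOT NULL, Val varchar(50) NULL)
    DECLARE @total int

    -- ...then the DML
    INSERT INTO #tbl (Id, Val) VALUES (1, 'a')
    SELECT @total = COUNT(*) FROM #tbl
    SELECT @total AS Total
END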
Feb 20, 2004
I was told in one of my systems classes that the real performance bottleneck in accessing information from the database was the opening of a connection from the application to the database.
To combat that problem I was advised to use a Singleton Factory pattern and to have that Factory instantiate a connection and open it, then pass references to that connection to all of the objects it created. All of those objects passed the connection reference to the objects they created, and so on. Basically that meant I only ever had one connection open at any one time for my entire application. I was able to implement this solution at my previous job, where I was developing in Oracle. I primarily used OracleCommands and OracleDataReaders to get the information into and out of the database. I thought this was a very nice solution. Having this many DataReaders accessing a single connection was not a problem because OracleConnections don't get locked by having more than one DataReader open at once.
At my current job, however, I use SQL Server. I am concerned that the single connection will not work in my new environment, as SQLDataReaders lock up the connection while they are using it. If the information I received about opening connections being the real bottleneck is correct, then I am hesitant to have a connection instantiated and opened for each method, but I am concerned that a whole lot of errors will be generated if I use the single connection method. Also, how do DataAdapters affect my decision of which approach to use?
Any advice would be most helpful. If you have any questions that would help answer just ask. Thanks.
Nov 30, 2004
Can I choose where I want to store my database files when using MSDE?
Jul 20, 2005
Hello All,

I am looking at the performance of our production database. It is 40 GB, and growing reasonably fast. It is placed in one filegroup on a RAID-5 array. The array is made up of 20 (or so) 9 GB disks. The data, the indexes and the transaction log are all on the "one logical disk".

My question, then, is: would it be better to move the transaction log onto a separate device (with RAID-1), and then separate out the indexes and the data and place them onto separate devices (i.e. split the RAID disks into 2 new drives)? Or would it be better to place the table into a larger number of smaller filegroups effectively split across the RAID device and to (strategically) place different tables into the new logical disks? Does this make sense?

Or do I just leave everything as it is?

Cheers
Mike
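To make the first option concrete, a minimal sketch, assuming SQL Server 2005 or later and made-up names and paths (on SQL 2000 the log move would be done with sp_detach_db/sp_attach_db instead):

-- move the transaction log to a RAID-1 volume (takes effect once the
-- database is set OFFLINE, the file is copied, and it is set ONLINE again)
ALTER DATABASE ProdDb MODIFY FILE
    (NAME = ProdDb_Log, FILENAME = 'L:\SQLLog\ProdDb_Log.ldf')

-- give the indexes their own filegroup on a separate volume
ALTER DATABASE ProdDb ADD FILEGROUP IndexFG
ALTER DATABASE ProdDb ADD FILE
    (NAME = ProdDb_Ix, FILENAME = 'I:\SQLData\ProdDb_Ix.ndf')
    TO FILEGROUP IndexFG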
Aug 16, 2007
Can anyone point me to any Microsoft articles giving recommendations for file placement for SQL Server? We are trying to convince our hardware guys that we need separate disks for data/log/tempdb files and need some ammo.
Thanks,
Jason
Jul 21, 2004
Hello,
I have a question about how I can change the database placement on our HP MSA1000 SAN. Basically I'm concerned about the performance of one particular server with 40+ databases. I'm familiar with the standard recommendations such as separating data and log files onto different physical drives, etc. But how is this going to be possible when there are only 14 physical drives available in the MSA1000? I also have to be concerned about the other server that's attached. Any suggestions, besides getting additional storage... :)
Thanks.
Sep 8, 2006
I need some help understanding the benefit of creating tempdb with one file per processor. I believe the benefit has something to do with the way SQL Server utilizes processor threads, but I'm a bit weak on the details.
Thanks, Dave
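The short version: each data file gets its own allocation bitmap pages (PFS/SGAM), so concurrent worker threads creating temp objects contend less on those pages; one file per CPU is the usual rule of thumb from that era. A minimal sketch, assuming a 4-CPU box and a hypothetical T: drive:

ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 512MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 512MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 512MB)
-- keep all files the same size so proportional fill spreads the load evenly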
Feb 22, 2007
Using SQL2000
Is it recommended to put the tempdb data and log files on different drives?
Jan 25, 2008
I have a server with 2 instances of SQL installed. There are 6 physical disks in the server which have been made into 3 mirrors.
The first mirror has the OS on it. Currently, the 2nd disk has all the database and transaction log files from both instances of SQL.
I plan to make use of the 3rd disk. My question is: is it better to move the database and transaction log files from the second instance to the new disk so that all the files for the first instance are on disk 2 and all the files for the 2nd instance are on disk 3 OR is it better to keep all the database files from both instances on disk 2 and move all the log files for both instances to disk 3?
I'm sure I have read somewhere that in this situation, the disks should be separated by instance rather than by file type.
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states "Use a single log file in most situations as log files are written sequentially."
Jul 23, 2005
Please excuse what is probably a no-brainer, but here goes. Is there any difference, in terms of performance or any other pertinent factor, between:

SELECT * FROM tblCustomers INNER JOIN tblCustomerOrders
    ON tblCustomers.fldCustomerID = tblCustomerOrders.fldCustomerID

and

SELECT * FROM tblCustomers, tblCustomerOrders
    WHERE tblCustomers.fldCustomerID = tblCustomerOrders.fldCustomerID

I note that if I type the latter into the SQL pane in a Data window, SQL Server replaces it with the former.

TIA
Edward

--
The reading group's reading group: http://www.bookgroup.org.uk
Mar 2, 2005
What is more efficient for a database design - a lot of tables with only a few records, or a few tables with lots of records?
I'm starting a new site and each user will have numerous records but I'm not sure whether to have a few very large tables (over 100,000 rows) or start a new table for each user which would result in approx 1500 tables most of which would be the same table design with different rows.
I'm using SQL2000.
I guess this is quite a basic question, but I'm a bit unsure.
Any references anyone could point me to would help as well.
Thx
Feb 3, 2004
Hi
I'm fairly new to this, so bear with me...
I have to make a new installation of an MS SQL 2000 EE on a Windows 2003 Std. Edt.
HW:
---------------------
Dual Xeon 2.4 GHz + 1 GB ECC
1 x 32 MB Adaptec 2100S RAID Controller
2 x 18 GB 10K HD
4 x 18 GB 15K HD
---------------------
So far I have made following configuration....
---------------------
2 x 18 GB 10K HD / RAID 1
- C: OS
- D: MSSQL program files + system DBs (master, pubs etc.)
4 x 18 GB 15K HD / RAID 5
- E: TempDB
- F: Data + Logs
---------------------
But I'm not sure that this is the optimal configuration, and I'm willing to start all over :)
So my q's are.......
--------------------
Which RAID configuration would you suggest?
Which partitions on the raids would you suggest?
Which usage would you assign the various partitions?
How do I move the system and tempdb databases? (see the sketch below)
--------------------
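On that last question, a minimal sketch for tempdb, assuming SQL 2000 and the E: partition above (the move takes effect after a service restart; master itself is moved by editing the service startup parameters -d and -l, not with ALTER DATABASE):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf')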
Thanx!
Regards,
Taras Bredel dk
Jul 25, 2007
Hi
I'm trying to find the optimal way of getting the timestamp of the last updated entry in an MS SQL database. The database is updated only about 5 times a minute; however, requests for the time of the last entry could arrive around once per second. For this reason I was thinking of having a separate table with a single row which is updated every time a new entry is added to the main table. I would then only need a simple SELECT statement and very little processing power.
Is this the best method, or can you think of any others I could use?
many thanks
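A minimal sketch of the single-row idea, with made-up table names (MainTable stands in for whatever table receives the entries):

CREATE TABLE dbo.LastUpdate (LastEntry datetime NOT NULL)
INSERT INTO dbo.LastUpdate (LastEntry) VALUES (GETDATE())
GO
CREATE TRIGGER trg_MainTable_Stamp ON dbo.MainTable
AFTER INSERT, UPDATE
AS
    UPDATE dbo.LastUpdate SET LastEntry = GETDATE()
GO
-- readers then pay almost nothing:
-- SELECT LastEntry FROM dbo.LastUpdate

The other common approach is an indexed datetime column on the main table queried with MAX(), which avoids the trigger at the cost of a slightly heavier read.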
Jun 19, 2008
I am wondering if a 100% buffer cache hit ratio is considered not good in general?
Are there instances where it is actually bad and can contribute to server performance degradation?
Any thoughts on the topic most welcome :)
--------------------
keeping it simple...
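For anyone wanting to watch the counter from T-SQL rather than Perfmon, a sketch using the DMV route (SQL 2005 and later; the ratio is the counter value divided by its base counter):

SELECT 100.0 * a.cntr_value / b.cntr_value AS BufferCacheHitRatio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.object_name = b.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'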
Jul 23, 2005
I am working with a report generator that is based on SQL Server 2000 and uses ASP as the UI. Basically we have a set of reports that end users can execute through a web browser. In general the model works fine, but we are running into some scaling issues.

What I'm trying to determine is, what is the optimal configuration for this system. It is currently a 2.4G Pentium with a large RAID and 1G of RAM. We have been using the "fixed" memory configuration, allocating 864M to SQL. This is on a Windows 2003 server box.

This works fine when a "small" query or two is executed, but the performance suffers terribly when several users try to run reports in parallel. A single query might take 10 minutes to run if nothing else is happening on the box, but if additional users log on and run reports, it's almost impossible to predict when the queries will finish.

I am also looking at the effect of database size on performance, running tests against a database with 1 month, 3 months, and say 12 months of data, running the same query against 2 databases in parallel. With the original configuration, the results were all over the place, with the 12 month database sometimes outperforming the smaller dbs, while other times there was little difference. It seems that once the system starts paging, and paging heavily, it's over; the system never "recovers" and queries that previously ran in a few minutes now take hours.

I added 3G more memory to the system, and modified boot.ini to include the /3GB switch. Now when I run the same tests, the results are much more consistent, as the system rarely ever has to swap. Then again I've never seen it go past 1.7G in Task Manager, making me think that any more than say 2.5G of memory is a waste?

Things we are trying to determine are:
- In the SQL Server memory configuration, is Fixed better than Dynamic? We have read that Dynamic is not good at returning memory to the OS once it's been allocated.
- What else can we do to optimize the performance for this application? It seems to me if the indexes are properly designed, the database size shouldn't have that much impact on performance, but this appears to be true only to a point. In comparing the execution plans between say a 12 month and a 3 month database, the plans are sometimes dramatically different. I assume this is due to the optimizer deciding that going directly to the base tables and not using an index will result in better performance, when in reality this doesn't always appear to be true.
- Are there other SQL Server switches we should be tweaking? Is there some number of simultaneous queries that this configuration should be limited to?
- What about other versions of SQL Server (e.g. Enterprise, Data Center, etc.)? Would these buy us anything?

Thanks for any advice,
-Gary
Nov 23, 2005
I have the following table:

CREATE TABLE Readings
(
    ReadingTime DATETIME NOT NULL DEFAULT(GETDATE()) PRIMARY KEY,
    Reading int NOT NULL
)

INSERT INTO Readings (ReadingTime, Reading) VALUES ('20050101', 1)
INSERT INTO Readings (ReadingTime, Reading) VALUES ('20050201', 12)
INSERT INTO Readings (ReadingTime, Reading) VALUES ('20050301', 15)
INSERT INTO Readings (ReadingTime, Reading) VALUES ('20050401', 31)
INSERT INTO Readings (ReadingTime, Reading) VALUES ('20050801', 51)
INSERT INTO Readings (ReadingTime, Reading) VALUES ('20051101', 106)
GO

-- list the table
SELECT ReadingTime, Reading FROM Readings
GO

It is a table of readings of a free-running counter that is time-stamped. I need to determine the value of the reading that corresponds to the closest date to the supplied date. Are there more optimal/efficient ways of accomplishing this than the following?

DECLARE @when DATETIME
SET @when = '20050505'

SELECT TOP 1 ReadingTime, Reading FROM Readings
ORDER BY ABS(DATEDIFF(minute, ReadingTime, @when))

The above gives me the desired result of ('20050401', 31). Any suggestions would be appreciated.
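One alternative worth testing, sketched here: the ABS(DATEDIFF(...)) ordering forces a scan of the whole table, whereas grabbing the nearest reading on each side of @when lets both probes seek on the primary key:

DECLARE @when DATETIME
SET @when = '20050505'

SELECT TOP 1 ReadingTime, Reading
FROM (
    SELECT TOP 1 ReadingTime, Reading FROM Readings
    WHERE ReadingTime <= @when ORDER BY ReadingTime DESC
    UNION ALL
    SELECT TOP 1 ReadingTime, Reading FROM Readings
    WHERE ReadingTime > @when ORDER BY ReadingTime ASC
) AS nearest
ORDER BY ABS(DATEDIFF(minute, ReadingTime, @when))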
May 14, 2007
I would like to know what options are available from BIOS / OS / SQL and server perspective when configuring or tuning a system with SQL Server 2000 or SQL Server 2005.
For example, I have a system with 4 dual-core Opteron CPUs on Windows 2003 Enterprise Edition. However, the OS sees 8 CPUs -- is this the optimal configuration or is it better (if even possible) to configure the system to see only 4 CPUs? The reason for this concern is due to performance problems faced deploying systems with Hyperthreading Technology.
Any documentation or examples in this regard would be very useful. Basically, what are the scenarios that would require a certain type of CPU configuration over another.
Thanks in advance for your help,
Ziggy
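Not documentation, but for experimenting: SQL Server can be restricted to a subset of the visible CPUs with the affinity mask option, which is one way to test "4 CPUs" against "8 CPUs" without touching the BIOS. A sketch, where the bitmask value 15 (binding to logical CPUs 0-3) is just an assumed example:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'affinity mask', 15   -- 0x0F = schedulers on CPUs 0-3 only
RECONFIGURE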
Dec 21, 2006
Is there any thought going into moving these two tables to a filegroup that we can control? Putting them in PRIMARY with the rest of my system tables is quite problematic and hinders my ability to manage space usage on my files. Traditionally, we didn't have to consider a primary filegroup that could grow to large proportions, but now with these two tables it can. If a large volume of messages gets sent through and the system can't keep up, then these tables, and my primary filegroup, will grow, sometimes enormously.
Aug 5, 2014
I have a VM set up for offloading DBCC checks. Specs are below. I've read through this, but I'm not seeing the performance gains from enabling the trace flags and using the PHYSICAL_ONLY switch.
Is the whole drawback that I'm on SATA storage? Is there a VM configuration with the CPU I can/should change? I've been playing with MAXDOP trying to see if I can get any benefits, but I'm not seeing much.
wait_type          wait_time_s    pct      running_pct
CXPACKET           561191.42      28.71    28.71
OLEDB              387136.76      19.81    48.52
PAGEIOLATCH_SH     340674.58      17.43    65.95
TRACEWRITE         321598.84      16.46    82.41
[code].....
Apr 30, 2007
We are creating a company-wide table of ZipCodes, States, GPS info, etc. This table can be used by our development and production servers (many of them). We could place the table on a given server and use linked servers to grant access to that table to the other servers. But is there a better way to handle this globally-useful table?
Barkingdog
P.S. Clearly, we don't want to have multiple copies of this table scattered around on various servers. That introduces synchronization issues.
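For comparison, this is what the linked-server route looks like to a consumer; the server, database, and column names here are made up:

SELECT z.ZipCode, z.State, z.Latitude, z.Longitude
FROM [RefServer].[RefDB].[dbo].[ZipCodes] AS z   -- four-part name through the linked server
WHERE z.ZipCode = '30301'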
May 28, 2014
I would like to build a report with nice functionality like filtering, sorting, and drill-down, something like a PowerPivot table, but with some layout/design/format capabilities. I would also want to publish the report, refresh it let's say once a week, notify users when a new version is available, etc.
If I use PowerPivot, then I am not able to customize the report or to mix data from different sources in one table.
If I convert the cells of the PowerPivot table to workbook formulas I lose the filter, sorting, etc functionalities.
I still have to try Reporting Services, but I suspect that something will always be missing.
Jan 3, 2008
C# .Net Application as front end
Sql Server2000 as back end
I need to merge an external dataset from the .Net app (in XML format) with the information in the database, with one column in a database table as the merging criterion. The situation is similar to a LEFT OUTER JOIN, wherein I need all records from the external dataset and, if matched in the database, the corresponding values from there too; the only difference here is that the join is not between two tables, it's between a table and an external dataset.
There is no need to store the external dataset in the database in persistent form, its just a query - merge - response operation.
So, can anyone suggest the best possible solution for this? A table variable / temporary table / some other schema, what and how?
Thanks in advance..
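Since the back end is SQL Server 2000, OPENXML is one way to treat the external dataset as a rowset and left-join it to the table without persisting it; a minimal sketch, with made-up element, table, and column names:

DECLARE @doc int, @xml nvarchar(4000)
SET @xml = N'<rows><row key="101" val="abc"/><row key="102" val="def"/></rows>'

EXEC sp_xml_preparedocument @doc OUTPUT, @xml

SELECT x.[key], x.val, t.SomeColumn
FROM OPENXML(@doc, '/rows/row', 1)              -- 1 = attribute-centric mapping
     WITH ([key] int, val varchar(50)) AS x
LEFT OUTER JOIN dbo.SomeTable AS t
     ON t.KeyColumn = x.[key]                   -- the merging criterion column

EXEC sp_xml_removedocument @doc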
Dec 10, 2006
Hi,

I want to know the optimal solution to find if all the data was entered. Let's say Table A has a date field, and for a given month I need to verify that all the days in that month are present in Table A. Right now I have two different solutions:

1) a stored procedure which loops through all the days in the given month against a select statement on Table A
2) a stored procedure that creates a temp table containing all the dates in the given month, and a single select statement using a where condition (select * from ... where datefield not in (select * from ...))

I want to know what is the best solution of these two, or any other solution.

Thanks
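A sketch of option 2, assuming the table is called TableA with a datetime column DateField (both names made up), taking December 2006 as the month; a LEFT JOIN against the temp calendar reports the missing days directly:

DECLARE @start datetime, @d datetime
SET @start = '20061201'

CREATE TABLE #AllDays (TheDay datetime NOT NULL)
SET @d = @start
WHILE @d < DATEADD(month, 1, @start)
BEGIN
    INSERT INTO #AllDays (TheDay) VALUES (@d)
    SET @d = DATEADD(day, 1, @d)
END

SELECT a.TheDay AS MissingDay
FROM #AllDays AS a
LEFT JOIN TableA AS t
    ON t.DateField >= a.TheDay AND t.DateField < DATEADD(day, 1, a.TheDay)
WHERE t.DateField IS NULL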
Jul 2, 2015
I've got a feeling that the answer is, "can't be done," but I'll go ahead and ask the august members of this forum, anyway. Is it possible to alter the placement of the Parameter fields when previewing a report?
At the moment, it seems that they form in a column of twos, reading from left to right. I see how the ORDER is affected, by changing the order of the parameters in the Report Data window, but can I change the number of columns?
Nov 4, 2015
I like writing concise and compact SQL code without cursors if possible. My current dilemma has me stuck though. I have 3 tables, but one of them is optionally used and contains a key element, TimeOut, that determines which Anesthesia CrnaID to use. It is the optionally used part that has me stumped.
Surgery table
CaseID
Patient
(Sample data: 101,SallyDoe 102,JohnDoe)
Anesthesia table
CaseID
CrnaID
(Sample data:
101,Melvin
102,Bart
102,Jack)
AnesthesiaTime table (this table is optionally used, only if the CRNAs take a break on long cases)
CaseID
CrnaID
TimeIn
TimeOut
(Sample data:
102,Jack,0800,1030
102,Bart,1030,1130
102,Jack,1130,1215)
Selecting Patient with an INNER JOIN to Anesthesia produced too many case results. So I figured out there is an AnesthesiaTime table that only gets used if the anesthesia guys take time-outs, which doesn't happen all the time. I could use TOP 1 on the Anesthesia table, but technically I need to read the AnesthesiaTime table, locate the last time, and pull that CRNA, Jack. I'm not sure how to deal with an optional table. I believe IF EXISTS will be pertinent, but I'm not sure how to build this query. I've tried a subquery without success.
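A sketch of one cursor-free approach, using the tables above: take the CRNA with the latest TimeOut when AnesthesiaTime rows exist for the case, otherwise fall back to the Anesthesia row (this assumes a single fallback row is acceptable, hence the TOP 1):

SELECT s.CaseID, s.Patient,
       COALESCE(
           (SELECT TOP 1 t.CrnaID
            FROM AnesthesiaTime AS t
            WHERE t.CaseID = s.CaseID
            ORDER BY t.TimeOut DESC),      -- last time-out wins (Jack, for case 102)
           (SELECT TOP 1 a.CrnaID
            FROM Anesthesia AS a
            WHERE a.CaseID = s.CaseID)     -- fallback when no time-outs exist
       ) AS CrnaID
FROM Surgery AS s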
Apr 18, 2015
I can't seem to place "OPTION (RECOMPILE)" in any valid position so that the following procedure executes without a syntax error.
USE [PO]
GO
/****** Object: StoredProcedure [dbo].[npSSUserLoad] Script Date: 4/18/2015 3:57:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ...
-- Generated code - DO NOT MODIFY
-- From Object Schema: 'C:XXXXXX.NetPOPOModel\_ObjectSchema
-- To regenerate this procedure use the 'Open With' option on file _ObjectSchema and select POCodeGen.exe
Declare @SqlCmd nvarchar(max)
Declare @ParamDefinitions nvarchar(1024)
Set @ParamDefinitions = N'@UserId int,NTUser varchar(30), @XmlResult XML OUTPUT'
Set @SqlCmd = N'Set @XmlResult =
(
Select
[UserId] [a],
[UserName] [b],
[code]....
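For what it's worth, OPTION (RECOMPILE) is a query hint, so it can only terminate a query inside the dynamic SQL string; it is not valid after the EXEC sp_executesql call, and not valid on a SET statement at all. A sketch of one placement that parses, assuming the generated SET can be rewritten as a SELECT assignment (dbo.SSUser here is a hypothetical table standing in for the elided query):

Declare @SqlCmd nvarchar(max)
Declare @ParamDefinitions nvarchar(1024)
Declare @XmlResult xml
Set @ParamDefinitions = N'@UserId int, @XmlResult xml OUTPUT'
Set @SqlCmd = N'Select @XmlResult =
(
    Select u.UserId [a], u.UserName [b]
    From dbo.SSUser u
    Where u.UserId = @UserId
    For Xml Path(''user'')
)
Option (Recompile)'    -- the hint attaches to the SELECT assignment, inside the string

Exec sp_executesql @SqlCmd, @ParamDefinitions, @UserId = 1, @XmlResult = @XmlResult Output
Select @XmlResult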
May 24, 2015
I have a table called 'AssetPlacements' that shows the dates when certain objects (AssID) were placed at certain locations (LocID).
ID AssID LocID PlacementDate
1   1   1   2015-05-01
2   1   2   2015-05-06
3   1   3   2015-05-09
4   2   1   2015-05-03
5   2   2   2015-05-07
6   2   3   2015-05-11
I'd like to show the assets with a start date and end date for the placement of the asset.
The start date to be the placement date and the end date to be the next placement date of the asset.
Where there is no next placement date to then show the end date as the current date, so hopefully the table will show as the following.
ID AssID LocID StartDate EndDate
1   1   1   2015-05-01   2015-05-06
2   1   2   2015-05-06   2015-05-09
3   1   3   2015-05-09   [GetDate()]
4   2   1   2015-05-03   2015-05-07
5   2   2   2015-05-07   2015-05-11
6   2   3   2015-05-11   [GetDate()]
I'm guessing some sort of recursion is required here to produce this.
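A window function gets there without recursion; a minimal sketch, assuming SQL Server 2012 or later for LEAD():

SELECT ID, AssID, LocID,
       PlacementDate AS StartDate,
       ISNULL(LEAD(PlacementDate) OVER (PARTITION BY AssID
                                        ORDER BY PlacementDate),
              GETDATE()) AS EndDate      -- no next placement: use the current date
FROM dbo.AssetPlacements

On an earlier version, a correlated subquery or OUTER APPLY selecting the MIN(PlacementDate) greater than the current row's date does the same job.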
Nov 8, 2001
I am developing a new application and am trying to find the best way to deliver XML documents to the app. Here's the scenario: each set of XML docs will have a unique identifier. The app needs to retrieve a known XML doc by using this identifier. So the dilemma I am having is as follows: I could create a folder for each identifier and load the XML docs relevant to each ID into the folder. For the app to retrieve the document, it would simply construct a file path and the Windows B-tree search would find it and return it very quickly. Alternatively, I could create a table with a primary key that stores the unique identifier, cluster it, and store the XML in a blob field.
Does anyone have any experience, references, or thoughts about which would be the best way to do this? I am an avid database developer and the app will be using SQL 2000 anyway; however, it seems in this situation the database might not be the best option in terms of speed, performance, and resource usage. Thoughts?
Thanks,
Mike
Apr 29, 2004
Hello, everyone:
I am not sure whether transaction log file size affects database performance. My SQL 2K suddenly became slow yesterday. The data file is 3 GB, and the transaction log file is 11 GB. Someone suggested I should shrink the transaction log file. Can it work?
Thanks a lot.
ZYT
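If the log really does need to be cut back, a sketch of the usual SQL 2000-era sequence, assuming the database is called MyDb and the log file's logical name is MyDb_Log (check the actual name with sp_helpfile):

BACKUP LOG MyDb WITH TRUNCATE_ONLY     -- SQL 2000 syntax; note it breaks the log backup chain
DBCC SHRINKFILE ('MyDb_Log', 1024)     -- target size in MB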
Nov 4, 1999
Hi,
I logged some of the parameters of my SQL server using the Performance Monitor into a log file - smn5.log - with the log settings in smn5.pml. I started the log, and the log file (smn5.log) started growing in size, indicating that it was collecting data.
I then went to Options button and said Save. After this in the File menu, I selected Export Log and saved it in a .CSV file, expecting it to contain the Logged data. However, it contains only the Log settings as shown below :
--------------------------------
Reported on L&T1362
Log File: C:\MSSQL\LOG\smn5
Interval: 15.000 seconds
Object,Computer
SQLServer-Log,ERMINTRUDE
SQLServer-Procedure Cache,ERMINTRUDE
SQLServer-Locks,ERMINTRUDE
SQLServer,ERMINTRUDE
SQLServer-Log,FLORENCE
SQLServer-Procedure Cache,FLORENCE
SQLServer-Locks,FLORENCE
SQLServer,FLORENCE
---------------------------------
Could someone please tell me how to gather and view the logged information? smn5.log contains 10 MB of data - the Performance Monitor shows that.
Thanks
Satish