It was shipped with a 76 GB array set up as RAID 1 (2 disks) and a 400 GB array set up as RAID 5 (4 disks).
I would like to determine the best way to set up the partitions: what size, and what should be placed on each.
Take the C: drive, for example. Should I just put Windows on there and nothing else? Do I stand to gain something from not using part of that 76 GB as a D: drive for my apps?
Are there any issues with installing SQL 2000 on a server with dynamic disks configured with RAID 1 mirroring? I know that with dynamic disks there are no partitions but rather volumes. Is the drive configuration set up the same way? Would I set up a volume for the OS and a volume for SQL? The reason I need to use dynamic disks is that we will be integrating the server later with an EMC SAN solution.
I have a two-node SQL 2000 cluster running on Windows 2003 Enterprise Server. We need to replace the SAN disks. Can we not disable the SQL service and Cluster service, copy the contents from the existing disks to the target disks, swap the drive letters, and start the services?
What is the best practice for doing this? Appreciate your help.
Is there any performance enhancement to be gained by storing frequently trigger-written-to databases on a separate disk from the source database? In particular, we keep a 'history' database of all inserts/updates/deletes against records, populated by triggers, and I was wondering if I would gain a performance enhancement by locating the two databases on different disks? Any help appreciated! Thanks in advance.
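To illustrate the kind of trigger in question (a simplified sketch; the real names differ, so dbo.Orders, History.dbo.OrdersAudit and their columns are hypothetical stand-ins). If the History database's files sit on a separate physical disk, the trigger's writes land on a different spindle from the source table's:

CREATE TRIGGER trg_Orders_History ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows (updates and deletes)
    INSERT INTO History.dbo.OrdersAudit (OrderID, Action, ChangedAt)
    SELECT OrderID, 'D', GETDATE() FROM deleted;
    -- "inserted" holds the post-change rows (inserts and updates)
    INSERT INTO History.dbo.OrdersAudit (OrderID, Action, ChangedAt)
    SELECT OrderID, 'I', GETDATE() FROM inserted;
END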
I am trying to build two 2-node clusters with AlwaysOn.
Here is the landscape:
2-node PROD failover cluster (running one instance)
2-node DR failover cluster (running 2 instances: DR and PRE-PROD)
Both clusters are in different geographies.
PRE-PROD needs to stay writable, so it is out of scope for AlwaysOn.
One instance on PROD -> DR on the other cluster. [Want to achieve this through AlwaysOn]
Now my questions:
1) Do I need to have all 4 nodes in the same failover cluster group? If yes, this would become a multi-subnet cluster. Or is there any way those 2 different failover clusters (one DR and one PROD) can be part of AlwaysOn?
2) Can I use the clustered disks as in the above landscape for AlwaysOn?
I am in the process of moving a SQL 2005 solution from a development box that used local storage to a UAT environment with SAN-attached storage. The solution uses database snapshots.
The database files are on the SAN storage, but during testing I was unable to create a database snapshot on the SAN disk. Creating snapshots on the local disk worked fine.
Is there some restriction/problem in using the database snapshot technology with SAN storage?
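For reference, the statement I was running looked like this sketch (database name and paths changed; S: is the SAN volume here). One angle worth checking: snapshot files are NTFS sparse files, so as far as I know the target volume must be formatted NTFS:

CREATE DATABASE MyDb_Snapshot
ON (NAME = MyDb_Data,                           -- logical name of the source data file
    FILENAME = 'S:\Snapshots\MyDb_Snapshot.ss') -- target on the SAN volume
AS SNAPSHOT OF MyDb;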
I've read that if particular tables are frequently queried together through a join, then these tables should be placed on different devices on different physical disks. What does this mean exactly, and how would you configure it? Is this a common practice in high-performance real-world environments (or should it be)?
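If I understand it right, the configuration side of this in SQL Server is filegroups: give each table its own filegroup whose file lives on a different physical disk. A sketch, where the database, paths, and tables are all made up:

ALTER DATABASE MyDb ADD FILEGROUP FG_DISK1;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_FG1, FILENAME = 'E:\Data\MyDb_FG1.ndf', SIZE = 500MB)
    TO FILEGROUP FG_DISK1;
ALTER DATABASE MyDb ADD FILEGROUP FG_DISK2;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_FG2, FILENAME = 'F:\Data\MyDb_FG2.ndf', SIZE = 500MB)
    TO FILEGROUP FG_DISK2;

-- Each side of the frequent join then gets its own disk
CREATE TABLE dbo.Orders     (OrderID INT PRIMARY KEY, CustomerID INT) ON FG_DISK1;
CREATE TABLE dbo.OrderLines (LineID  INT PRIMARY KEY, OrderID    INT) ON FG_DISK2;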
I am managing a SQL Server 6.5 database for my company. I get the message that the data files should be expanded, but whenever I try to expand them the following message appears:
Could not find enough space on disks to extend the database. Meanwhile, I have about 6 gigabytes of free space on my disks. Please help me out. Thanks,
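In case it matters, my understanding is that in 6.5 a database can only expand onto existing devices, so presumably a new device has to be created first; something like this sketch (device name, path, and sizes are made up):

DISK INIT
    NAME = 'DATADEV2',
    PHYSNAME = 'D:\MSSQL\DATA\DATADEV2.DAT',
    VDEVNO = 10,
    SIZE = 512000      -- size is in 2 KB pages, so this is roughly 1 GB

ALTER DATABASE MyDatabase
    ON DATADEV2 = 1000 -- expand by 1000 MB onto the new device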
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on a single disk. Each database has a table called DATA with a column called VALUE, which is NVARCHAR(200). I have filled the table in both databases with 20,000 rows.
I then perform a select on the table in each database, using CHECKPOINT and DBCC DROPCLEANBUFFERS to ensure I am reading from disk before each query, and the execution times are identical in both databases.
I then ran the same queries against each database using a load-testing tool, and the batch requests per second on each DB are identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single-file database, as you have the combined reading power of 8 disks as opposed to one??
Also, the same is happening for write speeds. When I create the data in both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI, this is on an Azure virtual machine, and each disk is a locally redundant data disk that I have attached to the virtual machine.
Should write speeds increase with multiple disks, or just read speeds?
I am getting the following errors in my cluster validation report when trying to create a Windows cluster.
I have 2 nodes, DB01 and DB02. Each has 1 public IP, 1 private IP (for heartbeat), and 2 private IPs for SAN1 and SAN2. The private IPs to the SANs are directly connected via network adapters in DB01 and DB02.
Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 9/9/2014 1:57:52 PM.
No disks were found on which to perform cluster validation tests.
Stop: 9/9/2014 1:57:53 PM.
I am wondering what would be the best disk/RAID setup for a Windows Server 2008 R2 OS and a SQL Server 2012 database with heavy read/write activity. I have the following disks to work with:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. My plan is RAID 10 across the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
I am testing a blank database created over two physical files on two separate disks, with one table called data which has one column called values, of type nvarchar(max).
I filled the table with a whole load of data and ran a SELECT * against it. If I run Perfmon at the same time, I can see that the read load is spread over multiple disks, as each of these disks is read from in parallel. If I create the same database on a single file and run the same SELECT * again, it takes much longer, proving that the read load had been distributed across multiple disks.
Now moving on to writes; this is where the confusion lies. I understand that SQL Server fills files evenly until they need growing, after which it fills them individually, round robin, until they are full, unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed across the files while it is filling them.
I ran a continual insert into my table with GO 1000000 to monitor how the files were being filled. I watched where SQL Server was physically placing the rows as they were inserted by running the following query:
;WITH CTE AS (
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DATAID
[Code] ....
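The rest of the query is elided above, but its shape is roughly this sketch (the FROM clause and the final aggregation are my reconstruction): sys.fn_PhysLocFormatter returns '(file:page:slot)', and the RIGHT(LEFT(...)) expression pulls out the single-digit file ID.

;WITH CTE AS (
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS col1,
           RIGHT(LEFT(sys.fn_PhysLocFormatter(%%physloc%%), 2), 1) AS [Physical RID],
           DATAID
    FROM dbo.data
)
-- Count how many rows landed in each physical file
SELECT [Physical RID] AS FileId, COUNT(*) AS RowsInFile
FROM CTE
GROUP BY [Physical RID];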
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, and so on. In other words, it would hit one disk, then the other disk, then go back to disk one to fill the files evenly. Is there any way to make SQL Server distribute the writes in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads; with writes, only one disk is ever written to at a time, which is annoying. Is there any way to harness the write power of multiple disks?
Hi, I need to use TOP and a join in the same SQL statement, to get the top 10 refnr values from orders_refnr. That works fine when I use this:

SELECT TOP 10 refnr, antal = COUNT(refnr)
FROM orders_refnr
INNER JOIN produkter ON (orders_refnr.refnr = produkter.referensnummer)
GROUP BY refnr
ORDER BY antal DESC

But I need to be able to get information from more fields than refnr. How can I specify more fields? I need to get other fields from produkter. Please help, I'm really stuck.
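One common pattern for this (a sketch, not a definitive answer; p.namn stands in for whichever produkter fields are needed): do the TOP/GROUP BY in a derived table first, then join back to produkter for the extra fields:

SELECT t.refnr, t.antal, p.namn   -- p.namn: placeholder for the extra fields
FROM (
    SELECT TOP 10 refnr, COUNT(refnr) AS antal
    FROM orders_refnr
    GROUP BY refnr
    ORDER BY antal DESC
) AS t
INNER JOIN produkter AS p ON p.referensnummer = t.refnr
ORDER BY t.antal DESC;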
I have gotten some criticism from coworkers regarding this test and just wanted to see what you guys think. I realize the wording could use improvement and any criticism towards making it easier to understand is much appreciated. FWIW, I had to solve this problem on the job so I feel it is a real-world test that helps me understand how people think and if they try to find alternate solutions. Thanks!

~~~~~~~~~~~~~~~~~~~~

Given a table that has over 100,000 records…

SUBSIDIARY
==========
PARENT_ID           INT
CHILD_ID            INT
ULTIMATE_PARENT_ID  INT
CLEANUP_IND         BIT

…where each PARENT_ID can have multiple CHILD_ID values, but the PARENT_ID should not equal the CHILD_ID. After an initial data load, the ULTIMATE_PARENT_ID and CLEANUP_IND columns contain NULL values (see page 2 for sample data). ULTIMATE_PARENT_ID is defined as the topmost parent in the chain for the particular CHILD_ID record, so if the chain was only 2 levels deep the ULTIMATE_PARENT_ID is the CHILD_ID's PARENT_ID's PARENT_ID.

Please write an answer for all three questions below:

A) Which of the following queries should you run first?
B) Write an optimized query to identify the ULTIMATE_PARENT_ID for each CHILD_ID and set its value into the ULTIMATE_PARENT_ID column.
C) Write a query to identify ALL of the circular references and mark each record that is a circular reference by updating the CLEANUP_IND column to 1.

~~~~~~~~~ Page 2 ~~~~~~~~~

Sample data. Remember, though, this table has over 100,000 records and the parent-child chain can go n levels deep, where n is not known.

PARENT_ID  CHILD_ID  ULTIMATE_PARENT_ID  CLEANUP_IND
1024       512       NULL                NULL
36         2300      NULL                NULL
887        541       NULL                NULL
1022       1024      NULL                NULL
546        887       NULL                NULL
512        2305      NULL                NULL
112        967       NULL                NULL
697        123       NULL                NULL
901        452       NULL                NULL
2300       666       NULL                NULL
334        445       NULL                NULL
512        903       NULL                NULL
884        554       NULL                NULL
313        313       NULL                NULL
554        884       NULL                NULL
112        119       NULL                NULL
967        555       NULL                NULL
2305       333       NULL                NULL
333        36        NULL                NULL
541        546       NULL                NULL
1030       1020      NULL                NULL
112        999       NULL                NULL
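One way to approach (B), if you're on SQL 2005 or later, is a recursive CTE that walks each chain upward, with a depth guard so the circular references don't recurse forever. A sketch, not a definitive answer:

;WITH Chain AS (
    -- Anchor: start each chain at the child's immediate parent
    SELECT CHILD_ID, PARENT_ID AS CURRENT_PARENT, 1 AS Depth
    FROM SUBSIDIARY
    UNION ALL
    -- Step: climb to the parent of the current parent
    SELECT c.CHILD_ID, s.PARENT_ID, c.Depth + 1
    FROM Chain c
    JOIN SUBSIDIARY s ON s.CHILD_ID = c.CURRENT_PARENT
    WHERE c.Depth < 99    -- guard: stops circular chains; question (C) flags those separately
)
UPDATE t
SET ULTIMATE_PARENT_ID = x.CURRENT_PARENT
FROM SUBSIDIARY t
JOIN (
    SELECT CHILD_ID, CURRENT_PARENT,
           ROW_NUMBER() OVER (PARTITION BY CHILD_ID ORDER BY Depth DESC) AS rn
    FROM Chain
) x ON x.CHILD_ID = t.CHILD_ID AND x.rn = 1;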
Hi, I have an NT server which has a 500 MB C: drive and a 44 GB D: drive.
I know that the person who set up this server did not give enough space to the C drive, and here is the problem: I am running SQL Server 7.0, which has 30 GB of data on the D drive. I need to reconfigure the NT hard drive so I can allocate 2 GB for the C drive and 42 GB for the D drive.
What is the best, safe method to accomplish this task?
After experiencing a hard drive failure, I have reinstalled MSSQL 7 on one drive and have a database which I need to recover on a separate physical drive. How can I go about doing this?
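If the data and log files on the other drive survived the failure, one route is to attach them to the fresh install; a sketch, where the database name and paths are placeholders:

EXEC sp_attach_db
    @dbname    = N'MyDatabase',
    @filename1 = N'E:\MSSQL7\Data\MyDatabase.mdf',
    @filename2 = N'E:\MSSQL7\Data\MyDatabase_log.ldf'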
Hi, I have:
1. Run xp_fixeddrives and got this result:

drive  MB free
-----  -------
C      1708
D      16311

2. Run the Backup Wizard in EM, where I am able to see only the above drives.
3. But if I run a backup in EM, I am able to see more than 10 drives (like C, D, H, I, J, M, N, etc.). Why do I see that difference? And how do I find out exactly how many drives there are on this server without going directly to the server? I appreciate your valuable answers. Thanks, Ravi
Hi, I'm looking for a way to check the free space left on the hard drives and then, if needed, send an alert to notify us when we need to free up some space. I played around with Performance Monitor and realized I could do it that way, but I think you would have to leave Performance Monitor running all the time, and I'm not sure I want to do that. I also read about the xp_fixeddrives proc that displays how much free space is available, but I don't know where to go from there. Does anyone have any recommendations for the best way to do this?
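One option I've been considering is a scheduled job step that captures xp_fixeddrives into a table and raises a logged error, which an operator alert could then pick up. A sketch, with the 1024 MB threshold as an arbitrary placeholder:

CREATE TABLE #drives (drive CHAR(1), MBFree INT);
INSERT INTO #drives EXEC master.dbo.xp_fixeddrives;

IF EXISTS (SELECT 1 FROM #drives WHERE MBFree < 1024)   -- threshold: adjust to taste
    RAISERROR('Low free space on one or more fixed drives', 16, 1) WITH LOG;

DROP TABLE #drives;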
Right, I have this database that I need to sort. I'll give you an example:
ID    Name      Value
2312  Sega      200
5678  Blizzard  215
3412  Bullfrog  210
6798  Nintendo  195
Now, what I need to do is sort it and perform calculations on it, and I need the list to be sorted with a predefined post as the top result, say like this one time:
ID    Name      Value
3412  Bullfrog  210
2312  Sega      200
5678  Blizzard  215
6798  Nintendo  195
As you can see, sorting it alphabetically would lead to
5678  Blizzard  215
3412  Bullfrog  210
6798  Nintendo  195
2312  Sega      200
(or the other way around if you play with ASC/DESC), and by ID it would be
2312  Sega      200
3412  Bullfrog  210
5678  Blizzard  215
6798  Nintendo  195
There aren't any top or bottom values, so to speak, for the posts I want on top, so... how do I sort it like this?
3412  Bullfrog  210
2312  Sega      200
5678  Blizzard  215
6798  Nintendo  195
The order after the top one is irrelevant.
Now...I know I could sort this by doing something like
"Select * FROM blablabla WHERE Name = 'Bullfrog'"
and then doing "Select * FROM blablabla" and just bypassing that post in the ASP/PHP code or whatever, but that would be a pain for me, as I have to perform some massive calculations and the code would be a lot larger than it needs to be.
It's a brain teaser all right... can you help me out?
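One approach that avoids the two-query workaround, sketched against the example table above: sort on a CASE expression that pins the chosen post to the top, then order the rest however you like:

SELECT ID, Name, Value
FROM blablabla
ORDER BY CASE WHEN Name = 'Bullfrog' THEN 0 ELSE 1 END,  -- pinned row first
         Name;                                           -- the rest alphabetically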
I was wondering if anyone has played around with changing the allocation unit size when formatting the hard drive the SQL server runs on. I would think that setting it higher, to account for the larger size of the database files, would help, but I'm not sure.
I have 2 hard disks in my computer, and I have SQL 2005 Express on one of them (let's say C:); however, my C: is going to be full soon! Once it is full, is it possible to create a table on my other hard disk which the server can recognise?
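From what I've read, the usual trick is to add a file on the other disk in its own filegroup and create new tables there; a sketch with made-up names and paths:

ALTER DATABASE MyDb ADD FILEGROUP FG_DRIVE_D;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_D1, FILENAME = 'D:\SQLData\MyDb_D1.ndf', SIZE = 500MB)
    TO FILEGROUP FG_DRIVE_D;

-- New tables can then be placed on the D: filegroup explicitly
CREATE TABLE dbo.NewTable (ID INT PRIMARY KEY, Payload NVARCHAR(200)) ON FG_DRIVE_D;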
I am using this stored procedure in SQL. I have 6 tables; one is called EMPLOYEES. This is what I need to be able to do: a user enters a new employee into a WinForm, picks a role, division, manager, technical skill set and applications from the drop-down lists, and hits save. The EMPLOYEES table should be the only one updated, and it has these columns only: (firstname, lastname, divisionid, managerid, roleid, techskillsid, and appID). At the moment what is happening is that it saves the firstname and lastname correctly, but the rest of the ID columns are null. It is updating the other tables with the strings entered, but what I need is for the EMPLOYEES table to be updated with the corresponding IDs. Is this a lot more complicated than I thought? If I try to replace the role with roleid etc., it just tells me I can't convert string to int, which is understandable. How do I do this?
CREATE PROCEDURE sp_InsertEmployee
    @Firstname nvarchar(50),
    @Lastname nvarchar(50),
    @Role nvarchar(50),
    @Manager nvarchar(50),
    @Division nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO EMPLOYEES (FIRSTNAME, LASTNAME) VALUES (@FIRSTNAME, @LASTNAME)
    INSERT INTO [ROLE] ([ROLE]) VALUES (@ROLE)
    INSERT INTO MANAGER (MANAGER) VALUES (@MANAGER)
    INSERT INTO DIVISION (DIVISION) VALUES (@DIVISION)
END
GO
My C# code is like this:
SqlCommand sqlC = new SqlCommand("sp_InsertEmployee", myConnection);
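For anyone with ideas, this is roughly what I think the procedure needs to become: look the IDs up instead of inserting the names into the lookup tables. The ID column names in [ROLE], MANAGER, and DIVISION below are guesses on my part. (Note too that the SqlCommand typically needs its CommandType set to CommandType.StoredProcedure before the text is treated as a procedure name.)

CREATE PROCEDURE sp_InsertEmployee
    @Firstname nvarchar(50),
    @Lastname  nvarchar(50),
    @Role      nvarchar(50),
    @Manager   nvarchar(50),
    @Division  nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @RoleID INT, @ManagerID INT, @DivisionID INT;

    -- Resolve each value picked in the WinForm to its existing ID
    SELECT @RoleID     = ROLEID     FROM [ROLE]   WHERE [ROLE]   = @Role;
    SELECT @ManagerID  = MANAGERID  FROM MANAGER  WHERE MANAGER  = @Manager;
    SELECT @DivisionID = DIVISIONID FROM DIVISION WHERE DIVISION = @Division;

    -- Only EMPLOYEES is written to; the lookup tables are left alone
    INSERT INTO EMPLOYEES (FIRSTNAME, LASTNAME, ROLEID, MANAGERID, DIVISIONID)
    VALUES (@Firstname, @Lastname, @RoleID, @ManagerID, @DivisionID);
END
GO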
Hello, I am experimenting with indexes and hope people can shed light on some of my problems. I am using SQL 2000 on Win 2000 Server, with the following query for discussion:

SELECT TOP 1000000
    E.EUN_Numeric,   -- Primary key
    E.EUN_CODE,      -- varchar
    E.[timestamp]    --,
    --E.Model        -- Computed column (substring of EUN_CODE)
FROM dbo.Z1_EUNCHK E
--WHERE E.[timestamp] > DATEADD(wk, -48, getdate()) AND
--      E.[timestamp] < DATEADD(wk, -4, getdate())
ORDER BY E.[timestamp] DESC

Problem 1) If I set up a single index on the timestamp (plus the PK on EUN_Numeric), there is no improvement in performance. It is only when I set up an index on (timestamp, EUN_Numeric, EUN_CODE) that I get a good improvement. This is also the case with the WHERE clause added. I am using Query Analyzer. The improvement is 14 secs down to 3 secs (mainly from the removal of the sort step). Why? My expectation is that if my query uses the [timestamp] column, then surely an index on that column alone is adequate.

Problem 2) Introducing the simple computed column into the query takes the time to 15 secs (with sort steps involved). Why does it revert to the sort step when previously the index was used?

Regards, JC
I have a 75 GB hard drive and a 300 GB one. I want to mirror the 75 onto the 300 and use the extra space as data storage. Is this possible if I partition the 300 GB drive and then mirror the hard drives?
How can I do this with VBScript or C#?
- Copy backup files down from a network share into the data directory of my local SQL 2005 instance
- Perform a restore using the files copied above
- Execute a DTS package
More info: our databases are scripted and exist in the typical development and testing environments. So as I get ready to start a new application, I want my local SQL instance to be updated with the structure changes as well as the data. I have to apply the changes from the scripted sources and pull over the data, and I would naturally like to automate this.
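The restore step itself is plain T-SQL that the script could send through osql or SMO once the files are copied; a sketch with placeholder names and paths:

RESTORE DATABASE MyDb
FROM DISK = N'C:\MSSQL\Data\MyDb.bak'
WITH MOVE N'MyDb_Data' TO N'C:\MSSQL\Data\MyDb.mdf',
     MOVE N'MyDb_Log'  TO N'C:\MSSQL\Data\MyDb_log.ldf',
     REPLACE;   -- overwrite the local copy with the freshly copied backup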
Is it possible to force parameters into the reports, enabling me to force a user ID value into every report that is picked from the list? The user ID is a system value, and I don't want end users having any knowledge of it.
Hi! I'm desperate here!!! Two questions:
1. Is line 13 really selecting all the records with the username "samurai" (in this case)?
2. How do I fill the Boolean SeExiste variable with a value from the record?

1  Dim UserIDParameters As New Parameter
3  UserIDParameters.Name = "ProdUserID"
5  UserIDParameters.DefaultValue = "samurai"
7  Dim LoginSource As New SqlDataSource()
9  LoginSource.ConnectionString = ConfigurationManager.ConnectionStrings("ASPNETDBConnectionString1").ToString()
11 LoginSource.SelectCommandType = SqlDataSourceCommandType.Text
13 LoginSource.SelectCommand = "SELECT FROM aspnet_Users (FirstTime) VALUES (@UserIDParameters) "
15 Dim SeExiste As Boolean
17 SeExiste = LoginSource.SelectParameters("FirstTime").DefaultValue

I'm a newbie, and this is such a simple thing that is very easy to do in normal ASP!!! Please help me! Thanks in advance!
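In case it helps the discussion: the command on line 13 mixes SELECT and INSERT syntax, so presumably it isn't selecting anything. A valid version might look like this sketch (assuming FirstTime really is a column on aspnet_Users; it is not one of the standard membership columns):

SELECT FirstTime
FROM aspnet_Users
WHERE UserName = @ProdUserID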
The postage and packing scheme being used at the site I'm working on depends on the customer's location.
If they're in the UK they get one scheme, and if they're in Ireland they get another. Furthermore, if they're anywhere else they get a third scheme.
A customer's country is indicated by a 'countryID' stored in the main customer row in the database. (This ID references a country in the Countries table.)
Thus, I was wondering whether it is acceptable to hard-code the country PKs of the UK and Ireland into the formula which works out the postage and packing.
At present, for a similar issue, I've even hard-coded the PKs of the UK and Ireland into some JavaScript running on the client.
Is it fair design to work with hard-coded PKs like this?
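One alternative I've seen, sketched below: store a stable, human-readable code on the Countries table and look the IDs up by that, so the formula depends on the code rather than on whatever PK a particular database happened to assign (the countryCode column is an assumption about the schema):

SELECT countryID
FROM Countries
WHERE countryCode IN ('GB', 'IE')   -- stable ISO codes rather than hard-coded PKs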