I am trying to set up a test cluster and am having an issue. When I try to create the physical disk resource, it takes both drive E: and drive Q: and doesn't separate them into two physical disk resources. This means that when I try to associate the quorum disk, it links to the physical disk resource containing both drives E: and Q:. Then when I try to install SQL 2005 I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E: and Q: onto two physical disk resources so I can specifically associate the quorum with Q: and SQL with E:, or should I be setting the quorum disk to a majority node set? Thanks in advance.
I'm trying to install a server cluster to implement a SQL Server 2005 cluster. There will be no other services (I think this is important).
I have a dual SCSI channel Smart Array with 4 disks configured as a 400GB RAID 5.
I do not need to move different resource groups from one node to the other; I need only one group with all the resources (IP address, network name, MSDTC, and SQL Server...). When a node fails, all services should fail over to the other node.
Is it possible to have only one physical disk (RAID 5) for both the quorum disk and the shared disk?
It would be the following configuration:
[Groups]
Cluster Group
    IP Address
    Network Name
    Physical Disk (used for quorum and shared storage)
    Distributed Transaction Coordinator
    SQL Server
    SQL Server Agent
    Generic Service (SQL Server Fulltext)
The other option would be having one physical disk (RAID 0) for the quorum (146GB wasted) and another physical disk (RAID 5, 3 disks) for shared storage, but this scheme has a weak point: if the quorum disk fails, the cluster fails....
The quorum disk of my Windows 2003 cluster is full, and I cannot use Cluster Administrator because the Cluster service cannot come up.
Obviously the shared disks (including the quorum) and MSDTC are not visible, and I'm wondering if it is possible to solve the problem without rebuilding the cluster.
I am trying to set up clustering for SQL 2005. I initially want to set up a 2-node cluster in an active/active configuration.
I am trying to understand the quorum disk in SQL 2005. As I understand it, the quorum disk is a shared resource. How is this resource configured? Would I need to have an iSCSI or Fibre Channel connection from each of the nodes to the shared disk?
As well, does each node have a separate data drive? Or do all the nodes use only the shared storage?
I have a Windows 2012 cluster environment that consists of two SQL Server nodes with a quorum disk configured as the witness.
Manual failover between nodes is working fine; however, the virtual SQL instance is not seeing the quorum disk.
Moreover, the quorum disk has the same disk number as another cluster storage disk; is that considered a problem?
When I move the SQL instance from one node to another, should the quorum disk change ownership to the destination node as well? If it is not changing ownership, what would be the problem?
I ran into a problem while trying to install SQL Server 2008 to join another node in a failover cluster; you will find the configuration doc and installation details here: URL...
I have been tasked with moving our SQL Server estate onto new 64-bit SQL 2008 virtual servers on a VM base. Each virtual server will be attached to our SAN, which I will have no control over. Do I ask for multiple LUNs, pretending that there is a C: (OS), E: (temp), F: (data) and G: (log) disk structure, or do I just present a very big space as a single C: drive and let it go? We are consolidating lots of old physical servers onto fewer (more powerful) virtual servers (according to the VM and SAN administrators).
But I'm not sure whether I have to install SQL Server on node 2 first and then add it to the cluster, or whether adding it to the cluster also installs the software.
I have a Windows 2008 R2 AlwaysOn cluster with 3 nodes (two in the primary site and one in the DR site).
Primary Site:
- Primary Site Server1
- Primary Site Server2

DR Site 1 (to be decommed):
- DR Site Server1
Our company is planning on decommissioning the DR site. But before we do this, we want to add a 4th site to the cluster. Migrate the data...and then decommission the original DR Site.
Is it possible to have this configuration:
Primary Site:
- Primary Site Server1
- Primary Site Server2

DR Site 1 (to be decommed):
- DR Site Server1

DR Site 2 (NEW DR Site):
- DR Site Server1
If this is possible, do I simply add the new DR site to the existing cluster (the same steps as adding the first DR node when the cluster was originally configured), or are there special steps?
I am wondering what would be the best disk/RAID setup for a Windows server 2008 R2 OS and SQL Server 2012 database that has heavy read/write. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know if it should be allocated elsewhere. My thinking is RAID 10 on the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
Hello all, I am new to SQL 2000. I installed the SQL 2000 database on the C: disk, but now I find my C: disk space is smaller than before, so I want to move my database (including data and structure) from the C: disk to the D: disk (which has a lot of space). Is it possible to do this? If it can be done, do I need to change my ASP.NET program source code (for example, change my Crystal Reports connection string)? Thanks in advance!
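For what it's worth, a minimal sketch of the usual SQL 2000 approach (detach, copy, re-attach); the database name and D:\ paths here are hypothetical:

-- Detach the database from the instance (no connections can be active)
USE master
EXEC sp_detach_db 'MyDB'

-- ...copy MyDB.mdf and MyDB_log.ldf from C: to D: in Windows Explorer...

-- Re-attach the files from their new location
EXEC sp_attach_db @dbname = 'MyDB',
     @filename1 = 'D:\SQLData\MyDB.mdf',
     @filename2 = 'D:\SQLData\MyDB_log.ldf'

Since the server and database names do not change, connection strings (Crystal Reports included) should not need to change.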
I have a three-tier system using SQL Server 2000, and we are currently experiencing I/O bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical RAID 10 array and separate the logs and data, i.e. data on one array and logs on the other array.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
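Before buying hardware, it may be worth confirming which files are actually stalling. A minimal sketch for SQL 2000, using the fn_virtualfilestats function (passing -1, -1 should return all databases and files):

-- Cumulative I/O stalls per file since the instance started.
-- High IoStallMS on the log file relative to the data files would
-- support splitting logs onto their own array.
SELECT DbId, FileId, NumberReads, NumberWrites, IoStallMS
FROM ::fn_virtualfilestats(-1, -1)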
I have not used log shipping before, and I find myself in a position where I need to reboot the secondary node and then the primary node; I don't actually need to fail over.
Is there anything I need to be aware of? When rebooting the secondary node, I assume the transactions will be held in the primary node's log until the secondary comes back, and log shipping will just carry on once it is back up?
When rebooting the primary node, does nothing need to be done, and will log shipping just start again once it has come back?
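If it helps, a minimal check after the reboots, assuming SQL Server 2005-style log shipping with the monitor tables in msdb:

-- Run on the secondary: once shipping resumes, the copy and restore
-- dates should advance again on their normal schedule.
SELECT secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary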
I'm contemplating running two availability groups on a two-node WSFC. The WSFC is set up with a file share witness (i.e. no shared storage). Can I safely run one AG with its primary on one node, and the other AG with its primary on the other node? Each AG would have a replica on the opposite node. This would effectively allow both servers to be in use at the same time. In a failover event, I understand that both workloads would transfer to a single server, so each box needs to be sized appropriately.
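As a sanity check for that layout, a minimal query using the SQL 2012+ AlwaysOn DMVs to show which node is primary for each AG:

-- One row per replica per availability group; role_desc shows
-- PRIMARY/SECONDARY, so each AG should list a different primary node.
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       rs.role_desc
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_groups AS ag ON ag.group_id = rs.group_id
JOIN sys.availability_replicas AS ar ON ar.replica_id = rs.replica_id
ORDER BY ag.name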
We are in the process of building a 3 node SQL Server Cluster (Server 2012/ SQL Server 2012), and we have configured the quorum so that all 3 nodes have a vote (no file share witness as we already have an odd number of nodes).
As I understand it, this should allow the cluster to run as long as 2 of the nodes remain online.
However, the validation report states that 2 node failures would be acceptable and, when we tested this by powering off two of the nodes, the cluster did indeed continue to run on a single node.
I configured a two-node Windows 2003 R2 and SQL 2005 cluster. When I move the cluster resources from one node to another, they take around 30 seconds to come online, so any query that is running during that time stops responding.
In reading material on the quorum drive in a SQL Server cluster, it mentions that this is the drive the logs are written to. Is this referring to SQL log files that are dumped by a process or scheduled job, or some other log files?
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, and 87 respectively, which value do I use to compute the Avg. Disk Queue Length for a 4-disk array (RAID 10): Average, Minimum, or Maximum? The disk (LUN) is on a SAN.
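For reference, the usual rule of thumb (a common guideline, not something stated in this thread) is to take the sustained Average and divide by the spindle count: 10 / 4 = 2.5 per disk, with sustained values above about 2 per spindle often read as a sign of pressure. The Maximum (87) only captures a momentary spike.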
I invoke the xp_cmdshell proc from inside a stored procedure on a 2-node active/passive SQL 2005 SP2 Standard cluster. Depending on which server xp_cmdshell gets executed on, I need to pass different arguments in the shell command. I thought I could use the host_name() function to get the runtime process server; however, I am finding that it is not behaving correctly. In one example I know my active node is server2, but the host_name() function is returning server1. The only thing that could possibly explain this is that the MSDTC cluster group is not always on the same active node as the SQL Server group, and in the case I am talking about, the cluster groups are in this mode (different nodes). Does xp_cmdshell get executed by the SQL active node or the MSDTC active node? And what is the best way to find out which server is going to run my xp_cmdshell?
Thanks.
Edit:
Perhaps another by-product of this is that if I run select host_name() from the Management Studio query window, I get different results depending on which server I am running Management Studio on. On server1 I get server1 and on server2 I get server2, all the while server2 is the active node. I need a different function that will always let me determine the correct server that'll be running the xp_cmdshell...
Edit 2: I guess I could determine the running host inside the command shell itself, but I am curious to see if I can do it (more cleanly) from SQL.
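For what it's worth, a minimal sketch of one approach: HOST_NAME() returns the client workstation's name, which is why it tracks wherever Management Studio runs, while SERVERPROPERTY with 'ComputerNamePhysicalNetBIOS' (available in SQL 2005) reports the node the clustered instance is currently running on:

-- active_node follows the instance across failovers;
-- client_host is just the machine the query was issued from.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS active_node,
       HOST_NAME() AS client_host

Since xp_cmdshell runs server-side, the SQL active node is the one that executes it, regardless of where the MSDTC group lives.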
We are trying to set up a Windows Server 2003 cluster with 2 systems and a DAV. We intend to install SQL 2005 on this cluster. We purchased a DAV with 3 physical disk arrays, as follows.
73GB RAID 1 (our plan is to use this to store SQL transaction logs)
146GB RAID 1 (SQL backups, temp database & other temp files)
420GB RAID 10 (SQL databases)
Now, as we are setting all this up, we find out we need a shared physical drive on the DAV to store the quorum. It is my understanding that we cannot partition the physical drives and use one of the partitions to store the quorum, because when you create the resource for the quorum, the resource is the physical disk, not the partition.
So my question is: is it in our best interest to buy a separate physical disk for the quorum?
My next question is, with regard to MSDTC: is it in our best interest to buy a separate physical disk for MSDTC, or can we store it on the 146GB RAID 1 and still use that drive for its original purpose?
-- Initialize control mechanism
DECLARE @Drive TINYINT,
        @SQL VARCHAR(100)

SET @Drive = 97

-- Set up staging area
DECLARE @Drives TABLE
        (
            Drive CHAR(1),
            Info VARCHAR(80)
        )

WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''

    INSERT @Drives (Info)
    EXEC (@SQL)

    UPDATE @Drives
    SET Drive = CHAR(@Drive)
    WHERE Drive IS NULL

    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM (
        SELECT Drive, Info
        FROM @Drives
        WHERE Info LIKE 'Total # of %'
     ) AS d
GROUP BY Drive
ORDER BY Drive
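Note the loop walks CHAR(97) through CHAR(122), i.e. drive letters a through z; fsutil simply errors for letters with no volume, and those rows fall out of the LIKE filter. Where xp_cmdshell is disabled (the SQL 2005 default), the undocumented xp_fixeddrives offers a rougher alternative: free megabytes per fixed drive only, with no totals.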
I'm just starting to work with AlwaysOn Availability Groups and WSFC.
I have in my environment (in Azure) a DC, a WSFC server and two SQL instances, so I have 3 nodes in my failover cluster:
WSFC
SQL1
SQL2
If I simulate failure by shutting down one of the SQL boxes my Availability group seamlessly fails over to the other SQL instance - which is great.
However, I'm starting to look into the workings of the quorum. My environment has the default settings, and when I shut down both of my SQL servers I expected the cluster itself to go offline, since 2 out of the 3 votes would be lost, but the cluster is still up. Screenshot below, taken when SQL1 and SQL2 are shut down:
Going through the wizard (but not changing anything), it shows the following config:
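If it's useful, quorum votes can also be inspected from the SQL side; a minimal query assuming SQL Server 2012+:

-- One row per cluster member (nodes and any witness) with its vote count.
SELECT member_name,
       member_type_desc,
       member_state_desc,
       number_of_quorum_votes
FROM sys.dm_hadr_cluster_members

Also worth noting: if the nodes run Windows Server 2012 or later, dynamic quorum is on by default and recalculates votes as nodes go down, which can keep a cluster up past the static-vote arithmetic.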
We have been working on a project to upgrade the servers in our 2-node SQL Server environment. I evicted a node after removing it from the instance, and we added the new node under a new server name. I then started Add/Remove Programs, chose to change the SQL 2005 environment, and typed the virtual server name. I chose to maintain the virtual server and picked "add new node". All seemed well; I answered all the prompted questions, and when the install began I got the error below.
Product: Microsoft SQL Server 2005 -- Error 1706. An installation package for the product Microsoft SQL Server 2005 cannot be found. Try the installation again using a valid copy of the installation package 'SqlRun_SQL.msi'.
So I copied the SQL Server 2005 Enterprise Edition software to the local C: drive and pointed the installer to 'SqlRun_SQL.msi' in the setup folder, but I still get the error.
I run all checks for cluster validation. I get an error on the disk lists and validation fails with this error: Failed to prepare storage for testing on node "server name". The security account manager (SAM) or local security authority (LSA) server was in the wrong state to perform the security operation.
We have a 2-node Windows Server 2012 R2 and SQL Server 2012 Enterprise Edition cluster setup. We can switch roles and move from one node to another and revert back to the previous node without any issues. But we are facing a problem when one node is restarted: we cannot get that node's Cluster service to start in Failover Cluster Manager. The error details are displayed below in double quotes: "Cluster node NODE1 could not join the cluster because it failed to communicate over the network with any other node in the cluster. Verify the network connectivity and configuration of any network firewalls."
I checked Windows Firewall; it is off on Node1, Node2, the SAN and the DC. I have disabled and re-enabled the internal and private network of Node 1. I have validated the cluster; it shows no errors, though.
Node1:
Public IP: 10.10.0.11
Subnet Mask: 255.255.255.0
Default Gateway: 10.10.0.1
Preferred DNS: 10.10.0.10 (IP of the DNS)
[code]....
Private Network: Not configured. Pinging each other's IP from one node to the other is successful.