SQL Server Admin 2014 :: Unable To Add Server In Multi Subnet To Cluster
Oct 6, 2015
I am setting up SQL 2014 AlwaysOn. I was able to set up the replicas between two servers in the same subnet. Their IP addresses are, say, like this:
100.20.200.200
100.20.200.201
When I try to introduce another node into the cluster with an IP address like 100.10.101.102, I get an error that the server isn't reachable.
We have a SQL 2014 AlwaysOn availability group running on two Windows 2012 R2 servers that are in the same subnet. We created a new server in a second subnet, installed SQL, joined the server to the Windows cluster, added a new IP resource for the new cluster, and performed the other remaining steps to add a new AG replica to the SQL instance on this new server. When we try to move the core cluster resources to the new node to test failover, we get an error. Here's the command we've been using:
Move-ClusterGroup "Cluster Group" -Node node3
and it returns the error: "The operation failed because either the specified cluster node is not the owner of the group, or the node is not a possible owner of the group..." I've checked the ownership of the cluster groups and the cluster resources, and it looks like they are set appropriately:
> Get-ClusterGroup | Get-ClusterOwnerNode

Cluster Object      Owner Nodes
--------------      -----------
Available Storage   {}
Cluster Group       {node1, node2, node3}
SQLAG               {node1, node2, node3}
We've double-checked that all IP resources are in the right subnets and that the dependencies for the Cluster Name resource and the Listener Name resource are set appropriately. I'm not sure what else to check, since the PowerShell commands seem to indicate that node3 is an owner of the appropriate resources. What else needs to be checked, or is the ownership that PowerShell reports not the same thing the failover operation is checking?
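Not a fix, but as a cross-check it can help to compare what SQL Server itself sees of the cluster's subnets and node states against what Failover Cluster Manager shows. A minimal sketch using the standard DMVs (nothing environment-specific assumed):

-- Cluster networks/subnets visible to SQL Server
SELECT member_name, network_subnet_ip, network_subnet_prefix_length, is_public
FROM sys.dm_hadr_cluster_networks;

-- Node membership and state as reported to SQL Server
SELECT member_name, member_type_desc, member_state_desc
FROM sys.dm_hadr_cluster_members;

If node3 or its subnet is missing here, the problem is at the WSFC level rather than in the availability group configuration.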
I am trying to build out an AlwaysOn AG with 2 nodes, each in a different subnet (in AWS, if that matters), on Windows 2012 R2 / SQL 2014 RTM.
I created an AG listener with 2 IP addresses, one for each subnet (I checked that neither IP address is in use). But whenever I fail the AG over to the secondary and try to connect via the listener, it fails.
I am trying to connect via SSMS from the primary instance, and it just times out. If I roll back over to the primary I can connect with no issues. I've tried playing with the connection settings, upping the timeout to 30 seconds, adding MultiSubnetFailover=true, etc., but I'm not getting any joy.
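For what it's worth, the listener's IP resources and their online/offline state can be checked from T-SQL; a sketch using the standard catalog views (no custom names assumed):

SELECT l.dns_name, l.port, ip.ip_address, ip.ip_subnet_mask, ip.state_desc
FROM sys.availability_group_listeners AS l
JOIN sys.availability_group_listener_ip_addresses AS ip
    ON l.listener_id = ip.listener_id;

After a failover, only the IP in the new primary's subnet should show as online; clients then need MultiSubnetFailover=true (or a short connection timeout and retries) to find that address quickly.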
When I fail an availability group over between subnets, I find that the old DNS entry stays behind. So the availability group listener ends up with 2 records in DNS, one for each IP. This causes the app to time out at times, since DNS will return either of the two IPs.
I am asking about a virtual IP for SQL Server: is there a way to assign SQL Server a different IP than the server's (host) IP address, the same way we do in a clustered environment?
Thoughts on deploying a 2-node SQL AAG, with one VM in each DC, in synchronous mode, in an active-active DC layout with 1ms RTT and 1Gbps? Similar to a SAN-based geo-cluster: HA and DR in a single tin. I'm trying to minimise license costs; having 2 nodes in DC1 with async to site 2 is double the license cost!
A deadlock is occurring on a SELECT query in the application because of an index. It was identified after enabling a trace on the database and reading the deadlock XML file. After removing the index, the deadlock no longer occurs on that query, but the query's performance is slightly worse. Is removing the index the correct approach when a deadlock is caused by an index?
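Before dropping an index just to avoid a deadlock, it is worth weighing how much that index is actually used. A sketch against the standard DMV, assuming a hypothetical table name dbo.MyTable:

SELECT i.name AS index_name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.object_id = i.object_id AND s.index_id = i.index_id AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MyTable');

If the index shows heavy seek/scan use, reworking the query or the index key order is usually a better option than removing it outright.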
I have installed a 2-node Windows failover cluster successfully, but the quorum configuration is not appearing in Failover Cluster Manager; instead it shows "Witness: Disk (Disk Cluster 4)". I have also configured the quorum from "Configure Cluster Quorum Settings" and attached snapshots of the Windows cluster configuration. Is this an issue or not? I did not get any warnings or errors during cluster validation while installing the Windows failover cluster, so I am assuming it is okay and I can move ahead with the SQL failover cluster installation.
Products used for installation in the virtual machine: Windows Server 2012 R2, SQL Server 2012 R2. Note: no service pack is installed.
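Seeing "Witness: Disk (Disk Cluster 4)" in Failover Cluster Manager is simply how a configured disk witness is displayed, so by itself it is not a red flag. Once the SQL instances are installed, the quorum model the cluster is using can also be double-checked from T-SQL; a minimal sketch (this DMV only returns rows when the instance sits on a WSFC node and, depending on the setup, may need AlwaysOn enabled):

SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM sys.dm_hadr_cluster;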
We had a big issue today during maintenance work in our SQL environment.
So our environment:
- 2x SQL Server 2014 Enterprise on Windows Server 2012 R2 (SRV1 and SRV2)
-- Both Hyper-V VMs on different hosts
-- Both configured in a Windows failover cluster and an AlwaysOn Availability Group (AG1)
-- AG listener: AG1_lis
-- No shared storage (each Hyper-V host has its own local storage)
-- Asynchronous commit mode
-- SRV1 is the primary, SRV2 is the secondary SQL node
What happened?
- Shut down Windows on SRV2 due to hardware maintenance
- The cluster went offline, and AG1 went offline
-- Error message: "Stopped listening on virtual network name 'AG1_lis'."
-- Error message: "The availability group database "DatabaseXY" is changing roles from "PRIMARY" to "RESOLVING" because the mirroring session or availability group failed over due to role synchronization."
Results?
- AG1_lis wasn't available to our applications, and they stopped working properly because the database connection was lost!
I think, I HOPE, this is not the normal behaviour when one node is shut down (especially the secondary node!).
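For what it's worth, a 2-node cluster without a witness can lose quorum when one node goes away (dynamic quorum in 2012 R2 helps, but a witness is still the recommended safeguard), and losing quorum takes the whole cluster, and with it the AG and listener, offline. The vote distribution can be checked from SQL Server; a minimal sketch using the standard DMV:

SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;

If both nodes carry a vote and there is no file share or disk witness listed, that would be consistent with the behaviour described above, and adding a witness is the usual fix.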
NODE1 - 256GB
INST1 - 64GB min / 64GB max
INST2 - 64GB min / 64GB max
NODE2 - 256GB
INST3 - 64GB min / 64GB max
INST4 - 64GB min / 64GB max
With this configuration, even if all instances are running on the same node there will be enough memory for them to run. Knowing that normally I'll have only 2 instances on each node, wouldn't the following config be better?
NODE1 - 256GB
INST1 - 64GB min / 128GB max
INST2 - 64GB min / 128GB max
NODE2 - 256GB
INST3 - 64GB min / 128GB max
INST4 - 64GB min / 128GB max
With this configuration, if all the instances end up running on only 1 node (due to a failure), will SQL adjust all instances down toward the specified min memory?
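If it helps, min/max server memory is a per-instance sp_configure setting: each instance trims its buffer pool back toward its own min only in response to memory pressure, and nothing automatically rebalances the instances to an even share. A sketch of setting the values on one instance, using the 64GB/128GB figures from the example above:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 65536;   -- 64 GB
EXEC sp_configure 'max server memory (MB)', 131072;  -- 128 GB
RECONFIGURE;

With 64/128 on all four instances, two instances fit comfortably on one 256GB node, but if all four land on the same node their max values over-commit it and they will compete until pressure forces them back toward their min values.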
I've recently started working with a public sector organisation that has 4 clustered SQL instances with 80% of its databases mirrored.
Looking at the transaction log, it seems that a transaction log backup is a good idea, as the log is 4x larger than the data file. But I'm not allowed access to the physical server to check which drive I could create the .trn on. No RDP, no VMware; let's be honest, I'm not even allowed to launch a command line. Also, the server manager tells me: "We will need to carefully look at database backups if you guys want to start doing these backups on box, as that will break our off box backup routine (it will screw the transaction chain)."
I don't understand how backing up the transaction log could break the "transaction chain"?
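The concern is that any regular log backup taken on-box truncates the log and becomes part of the restore sequence, so the off-box tool would be missing a link in its chain. One hedged option is a copy-only log backup, which does not affect the log backup sequence (though it also does not truncate the log). A sketch with a hypothetical database name and path:

BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backup\MyDatabase_log_copyonly.trn'  -- hypothetical path
WITH COPY_ONLY;

To actually shrink a 4x-oversized log, though, regular log backups (whether on-box or via the vendor tool) are what allow the log to be truncated and then resized, so that part does need coordinating with whoever owns the off-box routine.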
I have been facing the following error in a failover cluster setup. I have prepared a 2-node, 2-instance SQL Server failover cluster on top of Windows failover clustering. I have deleted MTCBJINS07 in AD and recreated it, but even after that the problem is not solved. MTCBJINS07 is the SQL Server network name of my 2nd SQL instance.
Cluster network name resource 'SQL Network Name (MTCBJINS07)' failed registration of one or more associated DNS name(s) for the following reason:
DNS bad key. Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.
We have a SQL 2014 active/passive cluster with 5 instances. When the cluster was installed, one of the nodes was a virtual machine. As we started to have problems, and this is not a Microsoft-supported configuration, we decided to replace the virtual node with a physical one. On 3 of these 5 instances we configured static ports (1433 TCP); this was required for applications and the firewall implementation.
Now, with the physical node joined to the cluster, we have issues with these three instances. Whenever one of these instances is moved to the passive node, the server authentication changes from mixed to Windows-only. I'm no SQL expert at all, but to me it looks like the configuration of the instances is not replicated to the passive node. I found some similar problems on the net, but these are mostly for SQL 2008.
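As a quick check after each move, the effective authentication mode can be read on the active node; a minimal sketch:

-- Returns 1 for Windows-only authentication, 0 for mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_only;

For a failover cluster instance this setting lives in the registry and is supposed to follow the instance via the cluster's checkpointed registry keys, so if it flips after a move to the new node, the registry checkpoints for those three instances are worth comparing between nodes (that is an assumption about the cause, not a confirmed diagnosis).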
When trying to reinitialize a transactional replication subscription I am unable to select the "Generate the new snapshot now" checkbox. This seems to be happening only with SSMS 2014. When I connect to the same server from SSMS 2008 R2 I am able to select this checkbox.
Due to a SQL 2014 cluster installation failure related to security while setting up the primary node, I had to remove the node and any related installation programs from it and redo the installation. However, I am unable to use the machine name as the SQL network name during instance configuration; I get the following:
Microsoft.SqlServer.Chainer.Infrastructure.InputSettingValidationException: The SQL Server failover cluster instance name "MachineName" already exists as a server on the network. Specify a different failover cluster instance name.
I am still getting the same error, since the node name "MachineName" is listed under the cluster name. I have used the machine name as the SQL network name before without an issue, and there is no existing SQL machine on the network using the same machine name that I want to use for this installation.
I have configured windows failover clustering 2012 on 4 of my test nodes.
I am trying to add another node to this cluster, but it's not happening. I am not even able to start the cluster service in services.msc.
After installing Windows failover clustering, when I go to the C:\Windows\Cluster folder, I am unable to find the CLUSDB, CLUSDB.1.container, CLUSDB.2.container and CLUSDB.blf files.
These files are very much present on the other nodes where cluster service is running.
I tried copying these files manually to the server where they are missing, but still no luck.
I'm using SQL accounting software and I have a problem with the report designer. When I used the report designer to design my customer statement of account, I saved the new design without renaming it for the new statement report, so the name was left empty, and then I exited the report designer. Now when I re-open the report designer, a "field value required" message pops up. What should I do? How can I re-open the report designer again?
The DB is in simple recovery mode. There are no open transactions (used dbcc opentran).
The server is running SQL Server 2014 and the DB is in compatibility mode SQL Server 2008 (100). It was upgraded to 2014 a month or two ago.
I have tried to resize the log to 100MB, but every way I have tried (none gave errors), the log file remains the same size. I have tried to shrink the log file (through the UI and via DBCC commands) without success; no errors, but also no change in file size.
I have checked Log Reuse Waits, just in case, and as expected it showed “NOTHING” (select log_reuse_wait_desc, name from sys.databases)
I tried running a checkpoint, but that did not allow any resize or shrink to work.
I have tried creating large transactions to move the used point in the log file, in case this was the issue. I did this by creating tables that I drop after large inserts. While it shows me that the log space % used increased, the log file still does not allow the space to be reduced.
The following is what I was using for the transactions to get the log used.
BEGIN TRAN
select a.* into testtable from sysobjects a, sysobjects b, sysobjects c
ROLLBACK TRAN
Do I just need to continue running large transactions until the log space used gets high enough for the "end point" in the log to really move? Is there an easier way to accomplish this (I have several DBs with almost the identical problem)? What I am using moves the log space percent used by about one percent per execution.
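A sketch of the usual loop for this (simple recovery, so no log backups are needed): check where the active VLFs sit with DBCC LOGINFO, issue a CHECKPOINT, then shrink, repeating until the active VLFs wrap back to the start of the file. The database and logical log file names below are placeholders.

USE [MyDatabase];
DBCC LOGINFO;                           -- Status = 2 marks active VLFs; note where they sit in the file
CHECKPOINT;
DBCC SHRINKFILE (MyDatabase_log, 100);  -- target size in MB; logical log file name is an assumption
DBCC SQLPERF(LOGSPACE);                 -- confirm percentage used afterwards

If the active VLFs are at the end of the file, the shrink will release nothing until log generation moves them forward, which is exactly what the large-transaction trick above is trying to force.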
1. Once we fail over to the secondary replica, what happens to the connected sessions on the primary node? Can the sessions fail over to the secondary seamlessly, or do they need to re-login? What happens to committed transactions that have not been written to disk?
2. Assume I have an AlwaysOn cluster with three nodes; if the primary fails, how does the second node become read/write?
3. After failover to the 2nd (secondary) node is done, what mode is production in (read-only or read/write)?
4. How do we roll back to the production primary? Will data changed on the secondary get updated in the primary?
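For reference on (2): unless automatic failover is configured (synchronous commit with automatic failover), a failover has to be issued on the secondary that should take over, either a normal one against a synchronized synchronous replica or a forced one that accepts data loss. A sketch, with the AG name as a placeholder:

-- Run on the secondary replica that should become the new primary
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;                            -- requires a synchronized, synchronous-commit replica
-- ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;   -- forced failover; possible data loss

On (1), existing sessions on the old primary are disconnected and have to reconnect (connecting through the listener makes that reconnect land on the new primary); uncommitted in-flight work is rolled back.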
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null), and it shows minimal permissions like I'd expect.
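In case it is useful, a sketch of that setup (the role and login names below are hypothetical):

USE [master];
-- Custom server role that carries the explicit DENY (2012+)
CREATE SERVER ROLE [NoLoginAdmin];
DENY ALTER ANY LOGIN TO [NoLoginAdmin];

-- Give the login securityadmin (so sp_readerrorlog works from SSMS) but block login changes
ALTER SERVER ROLE [securityadmin] ADD MEMBER [DOMAIN\ErrorLogReaders];
ALTER SERVER ROLE [NoLoginAdmin] ADD MEMBER [DOMAIN\ErrorLogReaders];

On 2008 the same effect needs the DENY issued directly against each login, since user-defined server roles do not exist there.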
We are trying to create some alerts in our SQL Server 2014 BI edition. The issue is that after I choose "Type" as "SQL Server performance condition alert", nothing is listed in the "Object" list box. SQL Server event alerts are working; the issue is only with "SQL Server performance condition alert".
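One thing worth checking when that list box is empty is whether the instance's performance counters are registered at all, since the alert dialog is populated from them. A quick hedged check from T-SQL:

-- If this returns 0, the instance's performance counters are not loaded,
-- which would also leave the "Object" list empty
SELECT COUNT(*) AS counter_rows
FROM sys.dm_os_performance_counters;

If it comes back empty, re-registering the counters (lodctr with the instance's counter .ini file, then restarting SQL Server) is the usual next step; that remedy is general guidance, not specific to this environment.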
I ran all checks for cluster validation. I get an error on the disk lists and validation fails, with this error: Failed to prepare storage for testing on node "server name". The security account manager (SAM) or local security authority (LSA) server was in the wrong state to perform the security operation.
We recently had a problem with Database Mail. SQL jobs that send an email succeeded, but the emails in the jobs failed to send. There was a problem with the email server; the error is included below. We fixed the problem with the email server. How can I get an alert when a Database Mail email fails to send?
Date: 4/23/2015 10:01:06 AM
Log: Database Mail (Database Mail Log)
Log ID: 5907
Process ID: 13204
Mail Item ID: 5702
Last Modified: 4/23/2015 10:01:06 AM
Last Modified By: sa
Message: The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2015-04-23T10:01:06). Exception Message: Cannot send mails to mail server. (Insufficient system storage. The server response was: 4.3.1 Unable to accept message because the server is out of disk space.). )
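A hedged starting point for alerting on this: Database Mail records failed sends in msdb, so an Agent job can poll for recent failures and raise a notification. A sketch of the check itself:

SELECT f.mailitem_id, f.recipients, f.subject, f.send_request_date, l.description
FROM msdb.dbo.sysmail_faileditems AS f
LEFT JOIN msdb.dbo.sysmail_event_log AS l
    ON l.mailitem_id = f.mailitem_id
WHERE f.send_request_date > DATEADD(HOUR, -1, GETDATE());

Wrapping this in an Agent job step that runs every few minutes and raises an error (or writes to a monitored table) when rows come back lets the job's own notification go out through an operator whose delivery path is known to be working.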
We have an Oracle linked server created on one of our SQL Server 2008 Standard instances; we are fetching data from Oracle and updating some records in SQL Server. Previously it was working fine, but we are suddenly facing the issue below.
The following error occurred during the process:
OLE DB provider "OraOLEDB.Oracle" for linked server "<linkedservername>" returned message "".
Msg 7346, Level 16, State 2, Line 1
Cannot get the data of the row from the OLE DB provider "OraOLEDB.Oracle" for linked server "<linked server name>".
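Msg 7346 often comes down to the provider choking on a particular row or column when SQL Server pulls data through four-part names. One hedged workaround is to push the whole query to Oracle with OPENQUERY so the provider hands back a finished result set; the linked server, table and column names below are placeholders:

SELECT *
FROM OPENQUERY([ORACLE_LNK], 'SELECT col1, col2 FROM schema_name.table_name WHERE col1 IS NOT NULL');

The "Allow inprocess" option on the OraOLEDB.Oracle provider is also a common factor in these errors and worth reviewing (that last point is general experience, not something confirmed for this case).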
Is it possible to have Analysis Services in both modes, or are they mutually exclusive? I have a machine set up with multidimensional Analysis Services and would like to know if it's possible to add a service in Tabular mode.
I want to write a small script to determine which is the currently active (primary) server in the AG.
Right now, I see that using SELECT * FROM SYS.dm_hadr_availability_replica_states I can determine the role. However, when the server goes down and switches to the secondary node, I don't believe that the role changes (or does it?). How do I determine which is the active node?
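The role in that DMV does change after a failover, but the rows you get back depend on where you query from; filtering to the local replica gives a simple "am I the primary?" check. A minimal sketch using the standard views:

SELECT ag.name AS ag_name,
       ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar ON ar.replica_id = ars.replica_id
JOIN sys.availability_groups  AS ag ON ag.group_id  = ars.group_id
WHERE ars.is_local = 1;

Run through the listener (with a default read/write connection), this always reports the current primary, since the listener connects you to whichever replica holds the primary role.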
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL Server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations have CI and AS
From experience I know databases with different collations can reside next to each other on the same instance.
The only problem could be ('could be!!') heavy use of tempdb, with a high volume of transactions executing in tempdb and snapshot isolation level in play...
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no tempdb issue, I guess).
Are there any real issues with databases that use a different collation running on the same instance?
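Mixing collations at the database level generally works until string columns from the two worlds meet, typically via temp tables (which inherit the server/tempdb collation) or cross-database joins, at which point you get "cannot resolve the collation conflict" errors. A small sketch of the failure mode and the usual workaround, with a hypothetical table name:

-- Temp table columns default to the server/tempdb collation (Latin1_General_CI_AS on the new box)
CREATE TABLE #staging (code varchar(20));

-- Joining against a column collated SQL_Latin1_General_CP1_CI_AS can raise a collation-conflict error:
-- SELECT * FROM dbo.Orders o JOIN #staging s ON o.code = s.code;

-- Workaround: force a common collation in the predicate
SELECT *
FROM dbo.Orders AS o
JOIN #staging AS s
    ON o.code = s.code COLLATE DATABASE_DEFAULT;

So the vendor's objection is usually about avoiding these conflicts (and matching their supported configuration), not about the databases being unable to coexist on one instance.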