SQL Server Admin 2014 :: Deadlock Because Of Non-Cluster Index
Jun 11, 2015
A deadlock is occurring on a SELECT query in the application because of a non-clustered index. It was identified after enabling a trace on the database and reading the deadlock XML file. After removing the index the deadlock no longer occurs on that query, but the query's performance is slightly worse. Is removing the index the correct way to resolve a deadlock that is caused by the index?
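For context, a minimal sketch of the trace approach mentioned above; trace flag 1222 writes the deadlock graph to the SQL Server error log, which is one alternative to reading the deadlock XML from a trace file (a generic sketch, not the exact trace the poster used):

-- Write detailed deadlock information to the error log, instance-wide
DBCC TRACEON (1222, -1);

-- ... reproduce the deadlock, read the error log, then turn the flag off again
DBCC TRACEOFF (1222, -1);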
I have installed a 2-node Windows failover cluster successfully, but the quorum configuration does not appear in Failover Cluster Manager; instead it shows "Witness: Disk (Disk Cluster 4)". I have also configured the quorum via "Configure Cluster Quorum Settings". I have attached snapshots of the Windows cluster configuration. Is this an issue or not? I did not get any warnings or errors during cluster validation while installing the Windows failover cluster, so I am assuming it is okay and I can move ahead with the SQL Server failover cluster installation.
Products used for installation in the virtual machine: Windows Server 2012 R2, SQL Server 2012 R2. Note: no service pack is installed.
We had a big issue today during maintenance work in our SQL environment.
So our environment:
- 2x SQL Server 2014 Enterprise on Windows Server 2012 R2 (SRV1 and SRV2)
-- Both Hyper-V VMs on different hosts
-- Both configured in a Windows failover cluster and an AlwaysOn availability group (AG1)
-- AG listener: AG1_lis
-- No shared storage (each Hyper-V host has its own local storage)
-- Asynchronous commit mode
-- SRV1 is the primary, SRV2 the secondary SQL node
What happened?
- Shut down Windows on SRV2 for hardware maintenance
- The cluster went offline and AG1 went offline
-- Error message: "Stopped listening on virtual network name 'AG1_lis'."
-- Error message: "The availability group database "DatabaseXY" is changing roles from "PRIMARY" to "RESOLVING" because the mirroring session or availability group failed over due to role synchronization."
Results?
- AG1_lis wasn't available for our applications, and they stopped working properly because the database connection was lost!
I think, I HOPE, this is not the normal behaviour when one node shuts down (especially the secondary node!).
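Not from the thread, but since the whole cluster went offline (not just the AG), one hedged thing to check before the next maintenance window is how quorum votes and the witness are configured; losing cluster quorum takes every availability group offline regardless of which replica is primary. A minimal sketch of how to see this from the SQL side:

-- Each WSFC member, its state, and its quorum vote
SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;

-- The overall quorum model of the underlying cluster
SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM sys.dm_hadr_cluster;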
I have been facing the following error in a failover cluster setup. I have prepared a 2-node, 2-instance SQL Server failover cluster on top of a Windows failover cluster. I deleted MTCBJINS07 in AD and recreated it, but even after that the problem is not solved. MTCBJINS07 is the SQL Server network name of my second SQL instance.
Cluster network name resource 'SQL Network Name (MTCBJINS07)' failed registration of one or more associated DNS name(s) for the following reason:
DNS bad key. Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.
We have a SQL 2014 active/passive cluster with 5 instances. When the cluster was installed, one of the nodes was a virtual machine. As we started to have problems, and since this is not a Microsoft-supported configuration, we decided to replace the virtual node with a physical one. On 3 of these 5 instances we configured static ports (TCP 1433); this was required by applications and the firewall implementation.
Now, with the physical node joined to the cluster, we have issues with these three instances. Whenever one of these instances is moved to the passive node, the server authentication is changed from mixed to Windows-only. I'm no SQL expert at all, but to me it looks like the configuration of the instances is not replicated to the passive node? I found some similar problems on the net, but those are mostly about SQL 2008.
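Not an answer from the thread, but the server authentication mode is kept in the instance's registry hive on each node (and synchronized between cluster nodes by registry checkpointing), so one hedged check is to compare the value on both nodes after a failover. A minimal sketch:

-- 1 = Windows authentication only, 0 = mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_only;

-- Read LoginMode from the instance-relative registry path (1 = Windows, 2 = mixed)
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode';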
Due to a SQL 2014 cluster installation failure related to security while setting up the primary node, I had to remove the node and any related installation programs from it and redo the installation. However, I am unable to use the machine name as the SQL network name during the instance configuration; I get the following:
Microsoft.SqlServer.Chainer.Infrastructure.InputSettingValidationException: The SQL Server failover cluster instance name "MachineName" already exists as a server on the network. Specify a different failover cluster instance name.
I keep getting the same error because the node name "MachineName" is listed under the cluster name. I have used the machine name as the SQL network name before without an issue, and there is no existing SQL machine on the network using the same machine name I want to use for this installation.
I have configured windows failover clustering 2012 on 4 of my test nodes.
I am trying to add another node into this cluster, but it is not working. I am not even able to start the cluster service in services.msc.
After installing Windows failover clustering, when I go to the C:\Windows\Cluster folder I am unable to find the CLUSDB, CLUSDB.1.container, CLUSDB.2.container and CLUSDB.blf files.
These files are present on the other nodes where the cluster service is running.
I tried copying these files manually to the server where they are missing, but still no luck.
I am asking about a virtual IP for SQL Server: is there a way to assign SQL Server an IP address different from the server's (host) IP address, like we do in a clustered environment?
I am setting up SQL 2014 AlwaysOn. I was able to set up the replicas between two servers in the same subnet. Their IP addresses are, say, like this:
100.20.200.200 100.20.200.201
When I try to introduce another node with an IP address like 100.10.101.102 into the cluster, I get an error that the server isn't reachable.
We have a database with a table that contains around 180m records, and a further 70k are inserted each day. No records are ever deleted, as this table is used for archiving only. Users are required to run SELECTs against this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily index rebuilds that our software house is telling us we have to do (a fragmentation-check sketch follows the list of options below).
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
    [EAN] [bigint] NOT NULL,
    [Day] [smalldatetime] NOT NULL,
    [State] [int] NOT NULL,
    [Quantity] [int] NULL,
    [StockValue] [float] NULL,
    CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table
3. Remove the non-clustered Indexes and let the clustered index for the primary key do the work.
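As referenced above, a minimal sketch for measuring how fragmented the Inventory indexes actually get between rebuilds, which can help choose between the three options (dbo.Inventory is the table from the DDL; the DMV itself is standard):

SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Inventory'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;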
I would like to put a clustered index on a date column in a current heap, but I have one question/concern. Every month this heap has thousands of rows deleted and even more added later. How much of an issue will this cause for the clustered index in terms of page splits? I was thinking of a fill factor of 70%. I would normally just test it (and still will on my dev box), but my dev box has much less power than production.
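A minimal sketch of the index being considered, with the 70% fill factor mentioned; the table and column names are placeholders, and ONLINE = ON assumes Enterprise edition:

-- Hypothetical names: dbo.ArchiveHeap / LoadDate
CREATE CLUSTERED INDEX CIX_ArchiveHeap_LoadDate
    ON dbo.ArchiveHeap (LoadDate)
    WITH (FILLFACTOR = 70, SORT_IN_TEMPDB = ON, ONLINE = ON);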
We use the SQL Server AlwaysOn feature on our database, and we distribute statements between the main database server and the mirror database server to improve performance.
My policy for splitting statements is that DML statements (INSERT, UPDATE and DELETE) go to the main DB and read statements (SELECT) go to the mirror DB.
I want to know: can I use different indexes on the main DB and the mirror database?
Some indexes are used on the mirror DB that are not used on the main database.
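Not from the thread, but a hedged sketch of how index usage can be compared between the primary and a readable secondary; run it on each replica in the database in question (the DMV resets when the instance restarts):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_seeks + s.user_scans + s.user_lookups DESC;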
NODE1 - 256GB
  INST1 - 64GB min / 64GB max
  INST2 - 64GB min / 64GB max
NODE2 - 256GB
  INST3 - 64GB min / 64GB max
  INST4 - 64GB min / 64GB max
With this configuration, even if all instances end up running on the same node, there will be enough memory for them. Knowing that normally I'll have only two instances on each node, wouldn't the following configuration be better?
NODE1 - 256GB
  INST1 - 64GB min / 128GB max
  INST2 - 64GB min / 128GB max
NODE2 - 256GB
  INST3 - 64GB min / 128GB max
  INST4 - 64GB min / 128GB max
With this configuration, if all the instances (due to a failure) end up running on only one node, will SQL Server adjust all instances down to just the min memory specified?
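A hedged sketch of how the per-instance limits above would be applied; the values are in MB and correspond to 64 GB min / 128 GB max, and it would be run separately on each instance:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 65536;    -- 64 GB
EXEC sp_configure 'max server memory (MB)', 131072;   -- 128 GB
RECONFIGURE;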
We face a slow-performance issue: the same query takes a long time to execute after we rebuild and reorganize indexes, but after executing the query or procedure two or three times, performance gets faster. I have the following questions:
1. Do we need to update statistics after we rebuild and reorganize indexes? (See the sketch below.)
2. Will every query and stored procedure be slow for the first one or two executions after we rebuild and reorganize indexes?
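On question 1, a minimal sketch of the usual companion step: a rebuild refreshes index statistics as a side effect, but a reorganize does not, so a separate statistics update normally follows it (the table name is a placeholder):

ALTER INDEX ALL ON dbo.SomeLargeTable REORGANIZE;      -- placeholder table name
UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;    -- refresh statistics after the reorganize
-- or, database-wide:
EXEC sp_updatestats;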
I'm trying to determine how much space I would need on my data drive and log-file drive to do an index rebuild. I have a 100 GB database in simple recovery mode. What should I look at to determine how much space is needed?
I've been fixing some issues lately where weekly maintenance has been causing logs to grow and fill disks.
Is there any rule of thumb for allocating log space for doing reorgs and rebuilds in a worst case scenario? I'm thinking 3x the largest database size?
I've been watching them run on databases in the range of 50GB where the logs are growing well over that for rebuilds or even reorgs. Once you have a few databases like this on a server, you can suddenly eat through a lot of disk space just for holding logs during maintenance.
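Not a rule from the thread, but since the worst case for an offline rebuild is roughly driven by the largest index, a hedged sketch for listing index sizes as a starting point for sizing data, tempdb and log space:

SELECT TOP (10)
       OBJECT_NAME(ps.object_id)        AS table_name,
       i.name                           AS index_name,
       ps.used_page_count * 8 / 1024    AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
ORDER BY ps.used_page_count DESC;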
We have a SQL 2014 AlwaysOn availability group running on two Windows 2012 R2 servers that are in the same subnet. We created a new server in a second subnet, installed SQL, joined the server to the Windows cluster, added a new IP resource for the new cluster, and performed the other remaining steps to add a new AG replica to the SQL instance on this new server. When we try to move the core cluster resources to the new node to test failover, we get an error. Here's the command we've been using:
Move-ClusterGroup "Cluster Group" -Node node3
and it returns the error: "The operation failed because either the specified cluster node is not the owner of the group, or the node is not a possible owner of the group..." I've checked the ownership of the cluster groups and the cluster resources, and it looks like they are set appropriately:

> Get-ClusterGroup | Get-ClusterOwnerNode

Cluster Object        Owner Nodes
--------------        -----------
Available Storage     {}
Cluster Group         {node1, node2, node3}
SQLAG                 {node1, node2, node3}
[code]....
We've double-checked that all IP resources are in the right subnets and that the dependencies for the Cluster Name resource and the Listener Name resource are set appropriately. I'm not sure what else to check, since the PowerShell commands seem to indicate that node3 is an owner of the appropriate resources. What else should be checked, or is the ownership being reported not the same as what PowerShell is actually checking?
1. Once failover to the secondary replica happens, what happens to the sessions connected to the primary node? Can they fail over to the secondary seamlessly, or do they need to log in again? And what happens to committed transactions that have not yet been written to disk?
2. Assume I have an AlwaysOn cluster with three nodes: if the primary fails, how does the second node switch to read/write mode?
3. After failover to the second (secondary) node is done, what mode is production in (read-only or read/write)?
4. How do we fail back to the original production primary? Will the data changed on the secondary get updated on the primary?
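Not an answer to the four questions, just a hedged sketch of the DMV that shows each database's synchronization state and health on every replica, which is the usual starting point when reasoning about what a failover will and will not preserve:

SELECT ag.name                        AS ag_name,
       ar.replica_server_name,
       DB_NAME(drs.database_id)       AS database_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups   AS ag ON ag.group_id   = drs.group_id;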
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions, and a check of what's reported by fn_my_permissions(null, null) shows minimal permissions, as I'd expect.
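A minimal sketch of that approach; [DOMAIN\SomeUser] is a placeholder login, not a name from the thread:

ALTER SERVER ROLE securityadmin ADD MEMBER [DOMAIN\SomeUser];   -- placeholder login
DENY ALTER ANY LOGIN TO [DOMAIN\SomeUser];

-- Sanity check from that login's context
EXECUTE AS LOGIN = 'DOMAIN\SomeUser';
SELECT permission_name FROM fn_my_permissions(NULL, 'SERVER');
REVERT;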
I am in the process of moving databases from SQL 2005 Standard to a 2-node 2014 cluster. All of my 2005 databases back up successfully. They all restore without issue except for one database that has a full-text catalog. I get this message:

Msg 7610, Level 16, State 1, Line 2
Access is denied to "fileStorage\data\MSSQLSERVER\FullTextCatalog", or the path is invalid.
Msg 3156, Level 16, State 50, Line 2
File 'sysft_FTCatalog' cannot be restored to 'fileStorage\data\MSSQLSERVER\FullTextCatalog'. Use WITH MOVE to identify a valid location for the file.
Msg 3119, Level 16, State 1, Line 2
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.
[code]....
I went as far as giving the folder full access to everyone temporarily and received the same error.
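A hedged sketch of the WITH MOVE form the error message asks for; the database name, backup path and target paths are placeholders, and the logical file names would come from RESTORE FILELISTONLY (only 'sysft_FTCatalog' is taken from the error above):

-- Discover the logical file names in the backup first (placeholder path)
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';

RESTORE DATABASE [MyDb]                                   -- placeholder database name
FROM DISK = N'D:\Backups\MyDb.bak'
WITH MOVE N'MyDb_Data'       TO N'D:\Data\MyDb.mdf',      -- placeholder logical/physical names
     MOVE N'MyDb_Log'        TO N'L:\Logs\MyDb_log.ldf',
     MOVE N'sysft_FTCatalog' TO N'D:\FullTextCatalog',    -- logical name from the error message
     STATS = 10;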
If I install an instance with Windows-only authentication, then later change it to mixed mode and enable the sa login, the password has already been set. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used to generate it?
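Rather than relying on whatever password setup stored, a minimal sketch for setting it explicitly at the moment sa is enabled (the password is obviously a placeholder):

ALTER LOGIN sa WITH PASSWORD = N'<new strong password here>';   -- placeholder
ALTER LOGIN sa ENABLE;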
My databases in SQL Server 2014 have the status "Suspect", as shown in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures; I need to recover them from the .mdf files.
I am using a monitoring system that can monitor a numeric SQL result, as long as the result is one field and one row. I would like to use this to monitor, say, the free space (or free-space percentage) of the master database. DBCC SQLPERF gives me several columns and rows, covering all databases on the server.
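A hedged sketch of a one-row, one-column query such a monitor could poll instead of DBCC SQLPERF; it returns the free space inside the data files of the current database (FILEPROPERTY reports 8 KB pages, hence the conversion to MB):

USE master;   -- or whichever database is being monitored

SELECT CAST(SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8.0 / 1024
            AS decimal(12, 2)) AS free_space_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';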