SQL 2012 :: Monitoring Table In High Transaction Database
Nov 4, 2015
I am developing a process that monitors a table in a high-transaction database. The process will count the number of rows in the table to verify whether it has changed or is stuck. Because the database handles a lot of transactions, I don't want to execute a query against it too often. Is there another suitable way to accomplish this goal?
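One approach worth considering (a hedged sketch; dbo.MyTable is a placeholder name): read the row count from the allocation metadata instead of running COUNT(*), so the check never scans the busy table.

-- Approximate row count from metadata, no table scan
SELECT SUM(ps.row_count) AS approx_row_count
FROM sys.dm_db_partition_stats AS ps
WHERE ps.[object_id] = OBJECT_ID('dbo.MyTable')
  AND ps.index_id IN (0, 1);  -- heap (0) or clustered index (1)

The count is approximate under concurrent activity, but for a changed/stuck check that is usually sufficient.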
View 2 Replies
Feb 3, 2015
I have a Customer running a database in a High Availability Group and I am not familiar with the set up... They have a transaction log that quadrupled in size during a data import and update which was generated by an external application. They have limited server space and would like to shrink the log again now as this import / update only happens once a year. The way this has always been dealt with in the past was by shrinking the DB and logs after the update.
Now however, when attempting to do a log or db shrink, an error message is generated which says that the log cannot be shrunk as the DB is in use as part of an Availability Group....
The more I read up on this subject, the more it looks like the DB has to be removed from the Availability Group before the log can be shrunk, and then the Availability Group has to be re-created or restored in some way. Is there a simple solution to this conundrum?
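For what it's worth, shrinking a log file is normally permitted on the primary replica of an Availability Group (it is restores and detach operations that require removing the database, not shrinks), so a sketch along these lines may be worth trying before pulling the database out of the group. Database, file, and path names are placeholders:

USE MyAGDatabase;
-- back up the log so inactive VLFs can be cleared
BACKUP LOG MyAGDatabase TO DISK = N'X:\Backups\MyAGDatabase_log.trn';
-- find the logical name and current size of the log file
SELECT name, size / 128 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';
-- shrink the log to a target size in MB (here 4 GB)
DBCC SHRINKFILE (MyAGDatabase_log, 4096);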
View 9 Replies
Sep 15, 2014
We are planning to build a monitoring tool using PowerShell and Performance Monitor which could monitor 10 to 20 servers. Do you have any reference to an existing free tool that uses Performance Monitor to monitor SQL Server? I don't want to put in the effort if something is already available.
View 2 Replies
Jul 2, 2015
Why would I see high CXPACKET waits on a database instance with only one CPU? Since the server only has 1 CPU, parallelism can't be used, or am I missing something here?
SQL version
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.3 (X64) (Build 9600: ) (Hypervisor)
max degree of parallelism is set to 0
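Two quick checks, as a sketch: confirm how many schedulers the instance actually sees, and whether CXPACKET really dominates the cumulative waits (stale wait stats can linger from before a CPU configuration change):

-- how many schedulers can run user tasks?
SELECT COUNT(*) AS visible_schedulers
FROM sys.dm_os_schedulers
WHERE [status] = 'VISIBLE ONLINE';

-- top cumulative waits since the last restart
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;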
View 5 Replies
May 19, 2015
Discovered that a geo-spatial AlwaysOn HA database (1 of 4) was not synchronizing as of a point in time earlier in the day. Suspend Data Movement appears to be running perpetually without finishing. The SQL Server service is in a Pending Changes state after an attempt to restart it from SQL Configuration Manager. The Cluster Dashboard says it is in a Not Synchronizing state, with only the one database in question having a yellow triangle; all 3 others show green.
The warning for the cluster is: "At least one availability database on this availability replica has an unhealthy data synchronization state. If this is an asynchronous-commit availability replica, all availability databases should be in the SYNCHRONIZING state. If this is a synchronous-commit availability replica, all availability databases should be in the SYNCHRONIZED state." There is no abnormal data movement from the primary to the secondary. The warnings for the unhealthy database are:
"The data synchronization state of this availability database is unhealthy. On an asynchronous-commit availability replica, every availability database should be in the SYNCHRONIZING state. On a synchronous-commit replica, every availability database should be in the SYNCHRONIZED state." and "Either a database administrator or the system has suspended data synchronization on this availability database." So how do I get this database back to a synchronizing state?
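Since the second warning says synchronization was suspended, the usual first step is to resume data movement on the replica showing the yellow triangle. A hedged sketch, with a placeholder database name:

-- resume data movement for the suspended database
ALTER DATABASE MyGeoDatabase SET HADR RESUME;

-- verify the synchronization state afterwards
SELECT DB_NAME(database_id) AS db_name,
       synchronization_state_desc,
       suspend_reason_desc
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1;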
View 2 Replies
Nov 10, 2001
Hey guys,
I originally wrote a post here regarding some info on setting up a cluster. Upon further analysis of the problem with our system, I noted that at particular times we have tremendous amounts of Update, Insert, Delete, etc. transactions hitting our database.
I thought originally SQL Clustering could solve this problem, but the time and upkeep that will be required to maintain such a configuration might not be feasible and more importantly it may not even fix the problem.
Next week I plan on doing some more specific performance monitoring of the database during normal business activity, but my initial suspicion is that there is a tremendous amount of I/O processing due to the high transaction load, which is slowing down the application.
I was wondering what you have done to alleviate such problems? One of the solutions I have come up with is to create a Master/Slave SQL Server design where the Slave handles most of the database transactions and the Master DB is then updated during a low-load period of the day. How does this sound? Any other ideas would be greatly appreciated...
Thanx
View 1 Replies
Jun 23, 2006
I have an existing database with approx 500,000 rows, accessed by a few hundred users per day creating approx 1,000 new records per day, plus typical reporting - relatively low-volume stuff for SQL Server.

I'm about to add a process that will be importing data daily from legacy databases and summarizing it for reporting purposes, integrating it with the existing database. This volume of data will be considerably higher, perhaps 100,000+ rows per day, which will be deleted once it has been summarized and the results written to some intermediate tables.

Is there any concern about mixing different levels of volume within one database? As I'll be creating lots of rows daily and then deleting them, I was wondering about fragmentation, transaction logging etc., and whether having this processing in a separate database from the main application would be 'better'.
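If the daily delete turns out to be the pain point, one common mitigation (a sketch only; the table and flag column are hypothetical) is to delete in small batches so the log sees many short transactions rather than one huge one:

DECLARE @batch int = 5000;
WHILE 1 = 1
BEGIN
    -- delete already-summarized staging rows in small chunks
    DELETE TOP (@batch) FROM dbo.ImportStaging
    WHERE Summarized = 1;
    IF @@ROWCOUNT < @batch BREAK;  -- nothing (or little) left to delete
END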
View 3 Replies
Jun 27, 2014
I am basically trying to update a table which reflects account transactions. Accounts get paid in full, but occasionally balance payments can be reversed and I want to update the table to show this - I need to show which period the account was previously paid in full. I've created a simplified version of the scenario, and below are a couple of examples of things I've tried that do not work. I understand why they do not work, but I'm struggling to figure out how to update the 'PeriodPrevPaidInFull' field.
create table Trans
(
AccNo int,
Transaction_Period_Index int,
PeriodOpeningBalance money,
DebtBalance money,
PeriodPaidInFull int NULL,
PeriodPrevPaidInFull int NULL,
[code]...
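Since the full DDL and sample data are cut off above, this can only be a guess, but one pattern that fits the description - for each row, find the latest earlier period in which the account was paid in full (assumed here to mean DebtBalance = 0) - looks like this:

UPDATE t
SET t.PeriodPrevPaidInFull = prev.Transaction_Period_Index
FROM Trans AS t
OUTER APPLY (
    -- latest earlier period where this account had no debt
    SELECT TOP (1) p.Transaction_Period_Index
    FROM Trans AS p
    WHERE p.AccNo = t.AccNo
      AND p.Transaction_Period_Index < t.Transaction_Period_Index
      AND p.DebtBalance = 0
    ORDER BY p.Transaction_Period_Index DESC
) AS prev;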
View 9 Replies
Dec 4, 2014
I am getting the error below most of the time:
"Server was unable to process request. ---> The transaction log for database 'xxxx' is full due to 'REPLICATION'."
I am doing the steps below to clear the log every time:
1. EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
2. CHECKPOINT
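Before clearing the log each time, it is worth confirming what is actually pinning it; if log_reuse_wait_desc keeps showing REPLICATION, the log reader agent (or a stuck replication/CDC marker) is the thing to fix rather than the log itself:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'xxxx';  -- the database named in the error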
View 8 Replies
May 7, 2014
What are the events to be monitored for a Active/Passive Cluster?
View 4 Replies
Oct 22, 2015
I am using SQL 2012 with a partitiondb custom installation over 6 DBs on 4 servers, ~200 GB per DB. I am looking for the perfect graphical tool (similar to Perfmon + Activity Monitor) where I can monitor a wide-scope SQL environment. I am interested in data / log / table / index growth, buffer cache hit ratio, average wait time, physical/logical reads/writes and such, as both real-time and time-range metrics. I know I can issue immediate queries against dm_os_performance_counters / dm_os_buffer_descriptors / dm_exec_query_stats and get some of the relevant data, but is this the only way? Also, in your opinion, what are the most crucial metrics to monitor when dealing with multiple DBs?
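Querying the DMVs directly is indeed the usual fallback when no graphical tool fits. As one example sketch, buffer cache hit ratio has to be computed from two counters, since the raw cntr_value is meaningless on its own:

SELECT 100.0 * a.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';

For multiple databases, per-file growth (sys.master_files), page life expectancy, and average wait times are commonly watched first.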
View 1 Replies
Mar 4, 2015
I have a stored procedure that does the following
BEGIN TRANSACTION OUTERTXN
BEGIN TRANSACTION
Copy records from live to archive
END TRANSACTION with commit or rollback
execute sproc to write audit log with success or fail
IF transaction was committed
BEGIN TRANSACTION
Delete the records from live that were copied to the archive
END TRANSACTION with commit or rollback
execute sproc to write audit log with success or fail
End IF
END TRANSACTION OUTERTXN with commit if both inner transactions were successful or rollback if either failed
If either inner transaction rolled back, execute sproc to write audit log saying the whole process is rolling back
End If

My problem is that if the outer transaction rolls back, I lose the two audit records because they are part of the transaction scope. I want these audit writes to commit even if the master transaction fails.
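One classic workaround, sketched below: table variables are not undone by ROLLBACK, so the audit rows can be staged in one and persisted after the transaction has finished. dbo.AuditLog and its columns are hypothetical stand-ins for the real audit sproc.

DECLARE @audit TABLE (msg nvarchar(400), logged_at datetime2 DEFAULT SYSDATETIME());

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... copy records from live to archive, delete archived records ...
    INSERT @audit (msg) VALUES (N'Archive succeeded');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    INSERT @audit (msg) VALUES (N'Whole process rolled back: ' + ERROR_MESSAGE());
END CATCH;

-- the table variable survived the rollback; write it out now
INSERT dbo.AuditLog (msg, logged_at)
SELECT msg, logged_at FROM @audit;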
View 2 Replies
Mar 9, 2015
We have H/A set up on SQL 2012, and it seems to work fine. I thought there was code I could add that would check to see if it is running on the primary node?
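There is. One caveat: sys.fn_hadr_is_primary_replica only arrived in SQL Server 2014, so on 2012 a common check is the DMV below (a sketch):

-- returns 'PRIMARY' when run on the primary replica
SELECT role_desc
FROM sys.dm_hadr_availability_replica_states
WHERE is_local = 1;

A job step or procedure can branch on the result so code only runs on the primary.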
View 3 Replies
Feb 4, 2015
We currently use Zabbix to monitor our SQL Servers. Generally we use Perf_Counters to gather performance information (CPU usage, memory, blocked processes etc.); Zabbix also lets us monitor services as to whether they are up or down. I have searched Google for any information regarding the monitoring of AlwaysOn/Mirroring within Zabbix and to date have found nothing. What I am looking to alert on is an indication that mirroring has stopped or been interrupted, say something like "Not Synchronized" - Zabbix and monitoring SQL Server 2012, specifically monitoring AlwaysOn...
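Zabbix can run a T-SQL probe directly (for example via its ODBC checks); a hedged sketch of a query whose non-zero result could fire the trigger:

-- count local AG databases that are not in a healthy synchronization state
SELECT COUNT(*) AS unhealthy_dbs
FROM sys.dm_hadr_database_replica_states
WHERE is_local = 1
  AND synchronization_state_desc NOT IN ('SYNCHRONIZED', 'SYNCHRONIZING');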
View 9 Replies
Jun 18, 2015
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:\SQL\MS SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName' (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.
Starting a few minutes later, the Agent Job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries only exist for DR purposes - no one runs reports against them or uses them at all. I had a similar problem last weekend on a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
View 0 Replies
Nov 15, 2013
I am trying to find out what could be causing this issue: why would we be waiting on CPU when it's barely being used? Signal waits are varying from 35 to 55% while CPU usage is only at 5%. We are using Windows Server 2012 with SQL Server 2012 Standard Edition CU5. There are 3 instances on the server, each with max server memory set to 50 GB, and the server has a total of 190 GB of memory. The machine is a 12-core machine with hyperthreading enabled.
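Signal wait is the time a task spends waiting for a scheduler after its resource became available, so high signal waits with idle CPUs usually point at scheduler contention rather than raw CPU load. A sketch of the standard calculation, to track the number over time:

-- share of total wait time spent waiting for a scheduler (signal waits)
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / NULLIF(SUM(wait_time_ms), 0)
       AS decimal(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats;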
View 8 Replies
Jul 23, 2014
So we have our HA group servers and databases; now we want to deploy schema changes to the HA group databases.
1.) Can we deploy the changes to Server2.mydatabase whilst still having Server1.mydatabase available to users?
2.) If yes, what is involved in doing so
3.) If no, what is the best way to apply schema changes to HA databases?
View 1 Replies
Nov 13, 2014
I am using SQL Server 2012 and my AlwaysOn High Availability feature is not enabled. What should I do? Does this require any extra system components to be installed?
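Enabling the feature is done outside T-SQL: the node must be a member of a Windows Server Failover Cluster, and the checkbox lives in SQL Server Configuration Manager (SQL Server service properties, AlwaysOn High Availability tab), followed by a service restart. The current state can be verified with:

-- 1 = AlwaysOn Availability Groups enabled on this instance
SELECT SERVERPROPERTY('IsHadrEnabled') AS is_hadr_enabled;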
View 5 Replies
Jun 9, 2015
I inherited a SQL 2012 Ent server sitting on a 2008R2 server using AlwaysOn High Availability, two nodes.
Availability Mode: Synchronous commit
Failover Mode: Manual
Connections in Primary role: Allow all connections
Readable secondary: No
Session timeout: 10
Somebody decided to give SQL server priority boost so I need to change this ASAP. So I plan on doing the following.
1. Manually fail over to the secondary, which does not have the priority boost set to true
2. change the setting
3. restart the service
4. Manually fail over
My question is with the service restart: how does SQL Server handle it if the DB changes on the new primary while the secondary is having its service restarted? Where can I see whether the DBs are in sync again, or if not, where they are in the sync process?
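Step 2 as T-SQL, plus a query to watch the databases catch up after each failover; the queue sizes draining to zero and the state returning to SYNCHRONIZED is what to look for. While the secondary's service is down, the primary keeps accepting changes and queues log records to send.

-- turn off priority boost (takes effect after the service restart)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;

-- monitor synchronization progress
SELECT DB_NAME(database_id) AS db_name,
       synchronization_state_desc,
       log_send_queue_size,  -- KB waiting to be shipped
       redo_queue_size       -- KB waiting to be redone
FROM sys.dm_hadr_database_replica_states;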
View 2 Replies
Apr 28, 2014
I came across an issue while migrating from SQL 2005 to SQL 2012 and using AlwaysOn groups. For some strange reason, whenever I connect to the listener name for each AlwaysOn group, it lists all the databases on the SQL instance, so I am able to see databases that are not part of that availability group. I am not using the default port, so I have to put the port after the name to connect, and the instance and listener are using different ports.
Testing the fail over works fine too, when i perform a manual failover, i can connect to any of the databases in the group from my application with no problem.
Could this be because the listener port is different from the port which the instance is using?
View 8 Replies
Jul 9, 2014
We have a vendor that is exposing our database via a High Availability replica. They are geographically far away from us, though, so we would like to extract portions of the database over to our side for reporting/warehousing purposes. I was curious whether it is possible to set up snapshot replication on a high availability group?
View 0 Replies
Dec 16, 2014
I have a HA Listener which is visible and can be connected to, it has a read only secondary on a different subnet so when connecting to it we use the applicationintent = readonly and multisubnetfailover = true.
Trying to connect to it as a linked server is giving me problems. I tried putting the extra info into the provider string but keep getting the "failure to initialise" error. I am trying to link SQL 2012 to a 2012 HA group, but will also need to connect from a SQL 2008 server as well.
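A hedged sketch of a linked server definition (server name, listener name and port are placeholders). Note that SQLNCLI11 understands MultiSubnetFailover in a provider string but the SQL 2008-era SQLNCLI10 does not, which may be why the 2008 box fails to initialise:

EXEC master.dbo.sp_addlinkedserver
     @server     = N'AGLINK',
     @srvproduct = N'',
     @provider   = N'SQLNCLI11',
     @datasrc    = N'MyListener,14330',
     @provstr    = N'MultiSubnetFailover=Yes;ApplicationIntent=ReadOnly';

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname = N'AGLINK',
     @useself    = N'True';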
View 1 Replies
Jun 11, 2015
I have been having issues with our SQL Server for a while now. It seems to run out of memory every few days, and when I look at the memory dump, the MEMORYCLERK_SQLOPTIMIZER clerk seems to take over memory and eventually causes the server to crash.
Here is the SQL verison we are using: Microsoft SQL Server 2012 (SP1) - 11.0.3460.0 (X64) Jul 22 2014 15:22:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200: ) (Hypervisor)..It is on a VM on Windows 2012 server. It has 20gb of RAM allocated to it and the MAX Server Memory is set to 16.5gb.
I have seen MEMORYCLERK_SQLOPTIMIZER grow to about 11 GB at the time of the server crash. Why is that happening? What is causing memoryclerk_sqloptimizer to get so high? I have looked it up and it seems to have to do with ad hoc requests, but is there something I can do to bring that memory down when it gets so high, so that I can prevent a server crash? Do we just need to add more memory, or is there a memory leak somewhere?
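Two things worth trying, as a sketch: watch the clerk sizes directly, and if the growth tracks ad hoc plan churn, test the 'optimize for ad hoc workloads' setting:

-- current size of each memory clerk (SQL 2012 exposes pages_kb)
SELECT TOP (10) [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;

-- stores only a plan stub for single-use ad hoc queries
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;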
View 2 Replies
Aug 25, 2015
I know I should probably be posting this in the RS section, but I have a Win 2008 R2 server with RS 2012, SSIS, and the database engine installed. I also have a SQL 2005 instance with SQL 2005 SSIS running on the server.
I saw that Reporting services was consuming 9gb of ram a few days ago with no published reports. It's just a default install.
So I investigated the settings and the document: [URL]......
I added this section to the reportserver.config file to restrict memory usage
<WorkingSetMaximum>512000</WorkingSetMaximum>
<WorkingSetMinimum>128000</WorkingSetMinimum>
Does anyone have an example of altering the reportserver.config file in your standard build of SQL Server Reporting Services?
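For reference, a hedged sketch of where those elements sit in rsreportserver.config: they are direct children of the root Configuration element, the WorkingSet values are in KB, and the two threshold settings are percentages of the working set.

<Configuration>
  <!-- ... existing elements ... -->
  <MemorySafetyMargin>80</MemorySafetyMargin>
  <MemoryThreshold>90</MemoryThreshold>
  <WorkingSetMaximum>512000</WorkingSetMaximum>
  <WorkingSetMinimum>128000</WorkingSetMinimum>
</Configuration>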
View 0 Replies
Apr 15, 2014
I am currently investigating a high average write time issue (145 ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb in that:
1. Data files = number of physical cores
2. Data files and logfiles are on separate partitions away from the other databases.
3. Tempdb is presized and no incremental file increases look like they are happening with frequency.
We have SharePoint 2012 set up on other SQL Servers with tempdb configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why it's confusing. FileIO auditing on the partitions themselves shows that file I/O is very fast on the partitions holding the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excess use of tempdb with operations taking a long time to resolve.
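SQL Server's own per-file stall counters give the same latency number from inside the engine, which makes it easier to confirm whether only tempdb's data files suffer. A sketch:

SELECT DB_NAME(vfs.database_id) AS db_name,
       mf.physical_name,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
WHERE vfs.database_id = DB_ID('tempdb');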
View 3 Replies
Mar 26, 2015
We have two servers configured in AlwaysOn mode.
Windows is Server 2012 64-bit and SQL Server is 2012 64-bit.
RAM installed on both servers is around 65 GB, of which 49 GB is the max server memory allocated to SQL services on both servers.
The databases related to Reporting Services are also in the AlwaysOn group.
We have also configured Reporting Services, and both instances are running on their respective servers.
The issue is that on the primary server Reporting Services is using almost 7 GB, while on the secondary it is using 10 GB, even though there are only 5 reports and it is used within our offices.
What is the issue, and how do I check why SSRS is using so much memory - any query or perfmon counters?
Reports are used randomly at the client side.
I have checked memory utilization through Task Manager.
View 0 Replies
Mar 18, 2008
Hi y'all, I have been searching the archived threads, but I need to copy a table in a database to a new table in the same database; the new table will just hold test data. There are several million rows in the table and I want to do the copy without logging the new inserts in the transaction log.
Is there an easy way to do this? I found this in my search efforts so far, but am just wondering if there is an easier/better way to accomplish what I want to do.
BTW, I normally wouldn't care, but the boss is complaining that it is taking too long to do the copy for a different team, so asked if I knew a way to copy data to a new table without logging. I don't, so here I am ;)
Here is what I found so far - the following 3 things need to be done:
1) create table as not logged initially
2) set autocommit=off
3) and activate the not logged initially option
Now the inserts happen without the use of transaction logs
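Note that those steps are DB2 syntax - SQL Server has no unlogged tables, and NOT LOGGED INITIALLY will not parse. The closest SQL Server equivalent is minimal logging: SELECT ... INTO a new table while the database is in SIMPLE or BULK_LOGGED recovery. A hedged sketch with placeholder names:

-- minimal logging requires SIMPLE or BULK_LOGGED recovery
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- SELECT ... INTO creates the new table and is minimally logged
SELECT *
INTO dbo.MyTable_TestCopy
FROM dbo.MyTable;

-- switch back, then take a full (or log) backup to restart the log chain
ALTER DATABASE MyDb SET RECOVERY FULL;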
View 12 Replies
Apr 23, 2007
Hi,
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure database mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not High Protection, despite High Availability and High Protection both being synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or would there be any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
Thanks,
J.
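For what it's worth, High Protection is simply synchronous mirroring without a witness, so removing the witness on the principal should get there without rebuilding the mirror. A hedged sketch, with a placeholder database name:

-- run on the principal; removes the witness (and automatic failover)
ALTER DATABASE MyMirroredDb SET WITNESS OFF;

-- ensure the session stays synchronous (FULL safety)
ALTER DATABASE MyMirroredDb SET PARTNER SAFETY FULL;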
View 3 Replies
Mar 6, 2008
Hi There
I realise this is a stupid question but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, when failover occurs does this automatically change to High Performance mode? Since for failover to occur something has happened to the primary, it will be impossible to commit transactions on the new primary and mirror synchronously, since one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
Thanx
View 4 Replies
Jan 19, 2005
Can anybody help me with the following on my MS SQL Server 2000 database.
1. All tables should have a lastModificationDate column. Any changes and inserts should have the system time updated with a trigger or so. We shouldn’t be inserting the value using SQL statements into this column.
2. There shouldn’t be any deletes on the table. Any deleted records should be marked as inactive or deleted, so it won’t come in queries, but should be physically present in the tables.
3. A modification log table, which will carry the table name, the column identifier, user modified, old value and the timestamp.
Thanks
Kishore
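A hedged sketch for point 1, assuming a hypothetical table dbo.Customer keyed on CustomerID; the same pattern repeats per table (points 2 and 3 would use an INSTEAD OF DELETE trigger that flips an inactive flag, and inserts into the log table, respectively):

CREATE TRIGGER trg_Customer_SetModified
ON dbo.Customer
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp the modified rows with the server time
    UPDATE c
    SET c.lastModificationDate = GETDATE()
    FROM dbo.Customer AS c
    INNER JOIN inserted AS i ON i.CustomerID = c.CustomerID;
END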
View 1 Replies
Nov 11, 2014
I will soon be embarking on an upgrade of a transactionally replicated database (push) from 2005 to 2012 SP2. The publisher, distributor and subscriber are 3 separate machines. The databases will be detached and the MDF and LDF files copied across. Once attached, replication will be set up from scratch.
The names of the new servers are NOT the same as the existing ones. Should I completely remove transactional replication using the wizard and unticking the DB from being a database for transactional replication, or just stop the log reader agent? How can I make sure the last of the transactions have gone across before I detach? Should I detach the subscriber first or the publication? Does it even matter?
Is there a better way of moving the databases across? I for one would have preferred to back up (.bak) and then restore on the other side. I'd love to hear opinions on this as well.
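To confirm the log reader has delivered everything before detaching, one hedged check on the publisher (placeholder database name):

USE MyPublishedDb;
-- lists transactions marked for replication but not yet distributed;
-- an empty result set means nothing is left to go across
EXEC sp_repltrans;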
View 0 Replies
Nov 8, 2007
In any ASP.NET application, what's the simplest way to monitor how many times a page hits the database (opens and closes a connection)?
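From the database side, a hedged starting point is to watch connections grouped by application and host; counts rising with every page hit suggest connection pooling is not working:

SELECT s.program_name, s.host_name, COUNT(*) AS open_connections
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.program_name, s.host_name
ORDER BY open_connections DESC;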
View 1 Replies