Replication Latency
Feb 12, 2007
Hi,
I have a transactional replication that had been running fine for a few months, but ever since a massive update of records at the publisher site, Replication Monitor shows Status: Performance Critical, Subscription: [localservername]:[Orders_Repl], Performance: Critical, Latency: 03:34:47.
I was thinking of redoing the whole replication, but before I proceed, can you explain what happened and what Latency: 03:34:47 means? I don't remember ever seeing a latency that high.
I am hoping to solve this critical performance issue. Hope you guys can help. Thanks
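As a first diagnostic step, here is a hedged sketch (assuming SQL Server 2005 or later and access to the distribution database; the publisher, database and subscriber names below are placeholders) that reports how many commands are still queued for the subscription and the estimated time to deliver them:
-- Run in the distribution database; all names below are placeholders.
USE distribution;
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'MyPublisherServer',
    @publisher_db      = N'Orders',
    @publication       = N'Orders_Repl',
    @subscriber        = N'MySubscriberServer',
    @subscriber_db     = N'Orders',
    @subscription_type = 0;  -- 0 = push, 1 = pull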
View 1 Replies
Jun 12, 2008
CREATE PROCEDURE [dbo].[proc_replication_latency_monitor]
AS
-- Procedure created by Henning Frettem (Lumbago), www.sqlteam.com, 12th of June 2008
BEGIN
  SET NOCOUNT ON

  DECLARE
    @PublishedDBName varchar(200),
    @SQL nvarchar(2000),
    @PublicationPK int,
    @PublicationName varchar(200),
    @PublicationDatabase varchar(200),
    @TracerTokenID int,
    @Parameters nvarchar(500)

  DECLARE @Publications table (
    PublicationPK int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    PublicationDatabase varchar(200),
    PublicationName varchar(200),
    TracerTokenID int,
    TracerTokenPostTime datetime
  )

  DECLARE @Latency table (
    PublicationPK int,
    DistributorLatency bigint,
    Subscriber sysname,
    SubscriberDB sysname,
    SubscriberLatency bigint,
    OverallLatency bigint
  )

  --> Cursor for fetching all publications in all databases
  DECLARE curPublishedDatabases CURSOR LOCAL FAST_FORWARD FOR
    SELECT [name] FROM sys.databases WHERE is_published > 0

  OPEN curPublishedDatabases
  FETCH curPublishedDatabases INTO @PublishedDBName
  WHILE @@FETCH_STATUS = 0
  BEGIN
    SET @SQL = N'SELECT ''' + @PublishedDBName + ''', [name] FROM ' + @PublishedDBName + '.dbo.syspublications'
    INSERT INTO @Publications (PublicationDatabase, PublicationName)
      EXEC sp_executesql @SQL

    FETCH NEXT FROM curPublishedDatabases INTO @PublishedDBName
  END
  CLOSE curPublishedDatabases
  DEALLOCATE curPublishedDatabases

  --> Cursor for posting tracer tokens
  DECLARE curPublications CURSOR LOCAL FAST_FORWARD FOR
    SELECT PublicationPK, PublicationDatabase, PublicationName FROM @Publications

  OPEN curPublications
  FETCH curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
  WHILE @@FETCH_STATUS = 0
  BEGIN
    SET @Parameters = N'@PublicationName varchar(200), @TracerTokenID_OUT int OUTPUT'
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.sys.sp_posttracertoken @publication = @PublicationName, @tracer_token_id = @TracerTokenID_OUT OUTPUT'
    EXEC sp_executesql @SQL, @Parameters, @PublicationName = @PublicationName, @TracerTokenID_OUT = @TracerTokenID OUTPUT

    UPDATE @Publications SET
      TracerTokenID = @TracerTokenID,
      TracerTokenPostTime = GETDATE()
    WHERE PublicationPK = @PublicationPK

    FETCH NEXT FROM curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
  END
  CLOSE curPublications
  DEALLOCATE curPublications

  --> Wait two minutes for all tokens to be committed at all subscribers
  WAITFOR DELAY '000:02:00.000'

  --> Then check the results for each posted token
  DECLARE curTokens CURSOR LOCAL FAST_FORWARD FOR
    SELECT PublicationPK, PublicationDatabase, PublicationName, TracerTokenID FROM @Publications

  OPEN curTokens
  FETCH curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
  WHILE @@FETCH_STATUS = 0
  BEGIN
    --> Insert token history for each token
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.dbo.sp_helptracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS varchar(50))
    INSERT INTO @Latency
      (DistributorLatency, Subscriber, SubscriberDB, SubscriberLatency, OverallLatency)
      EXEC sp_executesql @SQL

    --> Make sure that the PublicationPK is added to the token history
    UPDATE @Latency SET PublicationPK = @PublicationPK WHERE PublicationPK IS NULL

    --> Clean up the tracer token
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.sys.sp_deletetracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS varchar(50))
    EXEC sp_executesql @SQL

    FETCH NEXT FROM curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
  END
  CLOSE curTokens
  DEALLOCATE curTokens

  SELECT
    b.PublicationDatabase,
    b.PublicationName,
    a.Subscriber,
    a.SubscriberDB,
    a.OverallLatency,
    b.TracerTokenPostTime
  FROM @Latency a
    INNER JOIN @Publications b ON a.PublicationPK = b.PublicationPK
END
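A minimal usage sketch: the procedure takes no parameters, so it can simply be executed ad hoc or scheduled (for example from a SQL Server Agent job) to capture latency on a regular basis. Note that it blocks for the two-minute WAITFOR while the tracer tokens propagate.
-- Ad-hoc run (blocks for roughly two minutes while the tracer tokens propagate)
EXEC dbo.proc_replication_latency_monitor;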
--
Lumbago
View 7 Replies
View Related
Oct 19, 2007
Hello!
We have set up transactional replication with a dedicated distributor in a SQL Server 2005 environment. I have noticed that during a particular time of the day latency increases dramatically. I have been checking tracer tokens, and total latency during that time is around 30-40 minutes (both publisher to distributor and distributor to subscriber take much longer than normal). Normally, it is less than 10 seconds. I was wondering if there is a way to pinpoint the exact cause of the latency. This is a pull subscription.
I would appreciate it if someone could share (or point me to) best practices on transactional replication setup/maintenance. My understanding is that only committed transactions are replicated, correct? I checked the database on the publisher and didn't see any outstanding long-running transactions.
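For reference, a hedged sketch of one way to see which leg is accumulating the delay during that window (a sketch, assuming access to the distribution database; these are the standard replication agent history tables):
-- Run in the distribution database; delivery_latency is reported in milliseconds.
USE distribution;

-- Publisher -> distributor leg (Log Reader Agent)
SELECT TOP (50) h.agent_id, h.time, h.delivery_latency, h.comments
FROM dbo.MSlogreader_history AS h
ORDER BY h.time DESC;

-- Distributor -> subscriber leg (Distribution Agent)
SELECT TOP (50) h.agent_id, h.time, h.delivery_latency, h.comments
FROM dbo.MSdistribution_history AS h
ORDER BY h.time DESC;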
Any help is greatly appreciated.
Thanks,
Igor
View 3 Replies
View Related
Jul 31, 2007
I have been working on a project to evaluate transactional replication in SQL Server 2005 to find out if it will be suitable to provide real time reporting without impacting the primary database server. Thus far I have not seen latency better than several seconds replicating a change to a table or by utilizing the "tracer tokens" troubleshooting functionality. It usually takes a couple seconds from the publisher to the distributor and a couple more from the distributor to the subscriber.
I guess my question is actually whether it is possible to achieve sub-second (milliseconds) latency for transactional replication under any circumstances. Someone on another forum told me it was not, so I thought I would ask a separate community to confirm. Regarding the hardware, the latency on ping is less than 1ms between the servers and the servers themselves are decently powered quad processor boxes w/ 4GB of ram each. It seems to have the same latency of 3-5 seconds regardless of whether I publish a single table or all of the tables in the database. I currently have it configured with the publisher on one server and the distributor/subscriber on the second with the log reader and distributor agents set to run continuously. If it is possible, please recommend configuration of the agents or the topology of the PDS to achieve this. Thanks in advance for any insight you provide.
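For what it's worth, the knobs usually involved are the agent command-line parameters (or an agent profile); a hedged sketch of the kind of values people try when chasing low latency follows. The parameter names are real Log Reader / Distribution Agent parameters, but the values are only illustrative assumptions to start testing from.
-- Appended to the Log Reader Agent job step (publisher side):
--   -Continuous -PollingInterval 1 -ReadBatchSize 500

-- Appended to the Distribution Agent job step:
--   -Continuous -PollingInterval 1 -CommitBatchSize 100 -CommitBatchThreshold 1000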
Dan
View 2 Replies
View Related
Jun 2, 2015
What is the measurement for Latency on the replication monitor? Is it hr:mm:ss, or is it mm:ss.ms?
I see
Performance = Critical
Latency = 2:41:52
Last Synchronization = 6/2/2015 9:02:21 AM (current time)
Is the Latency 2 minutes 41 seconds behind or 2 hours 41 minutes behind?
View 4 Replies
View Related
May 25, 2004
Hi Everybody,
Can anybody tell me how to get the number of commands delivered per minute in the case of merge replication between a publisher and subscribers?
This way, we can be sure that even if there is latency (due to high-volume transaction processing), replication is in good shape and will catch up soon.
It would also be great to know of any other similar measures that can be monitored to confirm that replication is healthy.
Please let me know if anyone has information on this.
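A hedged sketch of one way to watch delivery rates on SQL Server 2000: this assumes the merge replication performance counters are exposed through master.dbo.sysperfinfo, and the exact object and counter names should be verified in Performance Monitor on your build.
-- Snapshot the merge replication counters (object/counter names assumed, verify first)
SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Replication Merge%';
-- Sampling this every minute and diffing cntr_value gives changes delivered per minute.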
View 1 Replies
View Related
Feb 22, 2008
Hi,
I have multiple questions and would appreciate any suggestions in resolving them. I'm a novice to these issues.
1) First of all, what is the exact command to turn the trunc. log on chkpt. option on for the transaction log of a SQL Server 2000 database? Is this option on by default? I have noticed for one of the databases I'm managing that the transaction log was over 12 GB, while the db was only 425 MB.
2) How is it possible to run DBCC TRACEON to see the contents of the transaction log, to check whether we are having issues with any uncommitted transactions, i.e. updates, inserts, and/or deletes?
3) What are the commands to truncate inactive transactions and to increase the ReadBatchSize?
4) Lastly, how do I validate transactional replication via checksum and find the valid latency for a small number of changes that need to be committed between publisher and subscriber?
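For question 1, a hedged sketch of the SQL Server 2000 commands typically involved (the database name is a placeholder); trunc. log on chkpt. is the SQL 2000 equivalent of the SIMPLE recovery model and is normally off by default on full editions:
-- Turn the option on (SQL Server 2000 syntax; database name is a placeholder)
EXEC sp_dboption 'MyDatabase', 'trunc. log on chkpt.', 'true'

-- Equivalent recovery-model form
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

-- Useful checks for questions 1 and 2
DBCC SQLPERF(LOGSPACE)        -- log size and percent used per database
DBCC OPENTRAN('MyDatabase')   -- oldest active / oldest replicated transaction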
Thank you so much in advance!
Sincerely,
Alla
View 3 Replies
View Related
Jul 31, 2007
I have 2 servers: 1 in the head office and the 2nd at a remote location, connected over a satellite link. The satellite connection has typical latency and bandwidth: 2 Mb down and 128 Kb up, with an average latency of 700 ms.
Both servers have plenty of processing power and RAM and are very underutilized.
I have 1 push publication at the publisher at the remote location. The publication has 1 article. The table contained in the article only grows; i.e. inserts are the only statements applied, no updates or deletes. The table grows by roughly 5 inserts every second. The table has 5 columns, none of which are blobs etc.: an ID field, 1 datetime, 2 integers and 1 float column.
I have the agents set for continuous with a 2 second polling interval.
I am aware of the following command parameters for the agents but not sure which way to go to maximize the throughput of the data.
MaxCmdsInTran
SubscriptionStreams
ReadBatchSize
CommitBatchSize
PollingInterval
The subscriber table also has an AFTER INSERT trigger that updates another small table with the current value. Under normal network conditions, the trigger works great. Over the sat connection with high latency, the smaller target table seems to get a lock placed on it. I can only guess that the lock comes from the inserts, and perhaps a large number of inserts per transaction?
Under normal network conditions, the latency between publisher - distributor - subscriber is about 4 seconds.
What would be good command parameter settings for the agents to compensate for the latency?
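A hedged sketch of the kind of agent parameters often adjusted for a high-latency, insert-only workload (the parameter names are real agent parameters, the values are only illustrative assumptions to start testing from):
-- Log Reader Agent job step (publisher), appended to the existing command:
--   -PollingInterval 2 -ReadBatchSize 1000 -MaxCmdsInTran 100

-- Distribution Agent job step, appended to the existing command:
--   -PollingInterval 2 -CommitBatchSize 200 -CommitBatchThreshold 2000 -SubscriptionStreams 2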
Thanks,
Kevin
View 5 Replies
View Related
Jul 17, 2007
hi,
I'm building a C# database application that accesses a remote SQL 2005 database. For the moment I am using SQL Express Edition. My application will be running in several REMOTE camps which only have an internet connection via satellite. The satellite connection has very high latency. I am wondering what workarounds or solutions are available for this situation. All applications need to access the same database and preferably be notified when changes take place on the database.
Thanks in advance.
View 1 Replies
View Related
May 4, 2007
Hi,
I have a couple of questions.
How can I trigger notifications to my application (C#) without hanging in a WAITFOR operation?
Is it possible for Service Broker to call some remote object that my application provides?
Can publish/subscribe using Service Broker be used for low-latency notifications (150 ms max) with millions of messages published per second?
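On the first question, a hedged sketch of the usual pattern: RECEIVE with a short TIMEOUT, so the call returns control to the application instead of blocking indefinitely (the queue name is a placeholder assumption):
-- Returns within ~500 ms even if no message arrives; queue name is a placeholder.
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        message_body
    FROM dbo.NotificationQueue
), TIMEOUT 500;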
Thanks in advance
Sérgio
View 2 Replies
View Related
Apr 15, 2008
Hello,
I have a question:
On my WM5 device I run 2 queries right after each other:
DELETE FROM test
ALTER TABLE test ADD COLUMN abc bit NOT NULL
I get an error on the second statement. When I change the statement to "NULL" instead of "NOT NULL", it works.
test is a very large table. It seems as if query 2 is executed while the engine is still processing query 1, and then you probably get the NOT NULL violation. Is this possible?
I know that writing to the registry has some latency because WM5 uses flash instead of RAM. Does SQL Compact have the same latency?
Or is this another problem?
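For what it's worth, a hedged workaround sketch: adding a NOT NULL column to a table that still contains rows generally requires a default value, so supplying one may avoid the error regardless of timing (this syntax is assumed to be accepted by SQL Server Compact; verify on your version):
-- Give the new NOT NULL column a default so any existing rows can be populated
ALTER TABLE test ADD COLUMN abc bit NOT NULL DEFAULT 0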
Thanks in advance.
Marthijn
www.pdaddict.nl
View 7 Replies
View Related
Oct 1, 2014
Is there any way to check if the server is having disk latency or IO issues? I found the following in the SQL error log:
Date: 10/1/2014 8:28:58 AM
Log: SQL Server (Current - 10/1/2014 12:00:00 AM)
Source: spid10s
Message: SQL Server has encountered 8500 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Fin.mdf] in database [Fin] (5). The OS file handle is 0x0000000000001368. The offset of the latest long I/O is: 0x0001104a7da000
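A hedged sketch for checking per-file I/O latency from inside SQL Server (standard DMVs on SQL Server 2005 and later):
-- Average read/write stall per file since the instance started
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;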
View 1 Replies
View Related
May 14, 2014
I have the following:
(a) One dynamic SQL query that takes 37 ms when run as a single query or in an SP.
(b) Three SQL indexed view queries that take 0 ms when run together.
When I add (a) + (b) in the same SP, I would expect 37 ms + 0 ms = 37 ms, but NO, it takes 400 ms.
What is causing the extra 363 ms of latency?
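A hedged first diagnostic step (a sketch, assuming SQL Server 2008 or later for SYSDATETIME; the comments stand in for the actual queries) is to time each part inside the combined procedure and capture the compile/execution split:
SET STATISTICS TIME ON;   -- reports parse/compile and execution time per statement

DECLARE @t0 datetime2 = SYSDATETIME();
-- (a) the dynamic SQL query would go here
DECLARE @t1 datetime2 = SYSDATETIME();
-- (b) the three indexed-view queries would go here
DECLARE @t2 datetime2 = SYSDATETIME();

SELECT DATEDIFF(ms, @t0, @t1) AS part_a_ms,
       DATEDIFF(ms, @t1, @t2) AS part_b_ms;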
View 9 Replies
View Related
Oct 15, 2007
Hello, I'm an absolute newbie when it comes to SQL. I was told that SQL Server does not function well on a WAN where network latency between, say, the SQL server and a front-end server is greater than 250 ms. I can't find any information supporting this claim online, so I was hoping someone here could tell me if this is true or not? Thank you!
View 2 Replies
View Related
Oct 18, 2007
I am looking into various options to improve the latency of our application (we figured the latency is mainly due to data persistence - writes to and reads from the DB). I am also looking into in-memory databases. But before making that decision (of using an in-memory database), I would like to see if there is a way to configure SQL Server 2005 to get performance as close as possible to an in-memory database.
My questions:
1. Is there a way to configure SQL Server 2005 to use a cache that gets loaded on an as-needed basis, so that future database reads/writes hit the cache as opposed to disk?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)
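For context, SQL Server already keeps recently used pages in its buffer pool, so much of this comes down to memory configuration. A hedged sketch of how to size the buffer pool and see how much of each database is currently cached (standard settings and DMVs; the memory values are placeholder assumptions):
-- Give the buffer pool a fixed memory range (values are placeholders)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;

-- How much of each database is currently in the buffer pool
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024   AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;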
Thanks
Murthy
View 1 Replies
View Related
Jan 28, 2015
I am trying to use change data capture to load the data into a second table from table 1, which is populated from the UI.
What will be the minimum latency? Can we use it in cases where the required latency is less than 5 seconds?
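A hedged sketch for observing the actual CDC latency on a given system (standard CDC objects in SQL Server 2008 and later; latency is reported in seconds):
-- Latency of recent capture-job log scans, in seconds
SELECT TOP (20) session_id, start_time, end_time, latency
FROM sys.dm_cdc_log_scan_sessions
ORDER BY start_time DESC;

-- Capture/cleanup job settings that influence latency (pollinginterval, maxtrans, maxscans)
EXEC sys.sp_cdc_help_jobs;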
View 1 Replies
View Related
Aug 27, 2015
We have several read-only nodes in our AlwaysOn cluster, which are set to use Synchronous-commit mode, which ensures that the logs are updated on the read-only nodes before any update statements complete. Even with this option, if we query a read-only node before the logs have been processed, we can read old data. I would like to know a strategy to ensure that a read-only query will definitely return up to date information. I had an idea that if I just used a different transaction type, like Serializable, that it might block the read-only query from actually getting the data until after the log file was processed, but I have not tried it, yet.
I would like to move more queries to the read-only nodes, in an effort to offload CPU utilization from the primary node.
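One caveat worth noting: synchronous commit only guarantees the log is hardened on the secondary, not that it has been redone, which is why stale reads are still possible. A hedged sketch for measuring how far behind a readable secondary is (standard Availability Group DMVs; run on the secondary):
-- Redo backlog per database on this replica
SELECT
    DB_NAME(drs.database_id) AS database_name,
    drs.synchronization_state_desc,
    drs.redo_queue_size,      -- KB of log waiting to be redone
    drs.redo_rate,            -- KB redone per second
    drs.last_redone_time,
    drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;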
View 2 Replies
View Related
Apr 27, 2015
How would you calculate the average read/write latency experienced by a SQL Server instance during a specific time window, in order to monitor this for multiple instances? From this MSDN blog, I know that you have to take multiple samples and do some calculations to get the correct latency.
[URL] ...
However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool and we want to get one number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem.
object_name counter_name instance_name cntr_value cntr_type
SQLServer:Resource Pool Stats Avg Disk Read IO (ms) default 307318919 1073874176
SQLServer:Resource Pool Stats Avg Disk Read IO (ms) Base default 25546724 1073939712
SQLServer:Resource Pool Stats Avg Disk Read IO (ms) internal 2045730 1073874176
SQLServer:Resource Pool Stats Avg Disk Read IO (ms) Base internal 208270 1073939712
I'm thinking I would need to do some sort of weighted average, but I'm not sure that will result in the correct value. Here's the formula I am thinking about using, before doing the calculation over time:
((default * default[base]) + (internal * internal[base]))/(default[base] + internal[base])
Then to do the calculation over time, I'd use the changes in the calculated numerator and denominator to get the average.
Does this sound like the correct way to get this value? Is there a good way to verify it?
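That weighting is effectively what the fraction-type counters already do. A hedged sketch of the two-sample approach (a sketch, assuming sys.dm_os_performance_counters and a temp table for the first sample; over an interval, the average latency is the change in the summed numerators divided by the change in the summed base values):
-- Sample 1
SELECT RTRIM(counter_name) AS counter_name, RTRIM(instance_name) AS instance_name, cntr_value
INTO #s1
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%'
  AND RTRIM(counter_name) IN ('Avg Disk Read IO (ms)', 'Avg Disk Read IO (ms) Base');

WAITFOR DELAY '00:01:00';   -- measurement window

-- Sample 2: weighted average over the window = (sum of numerator deltas) / (sum of base deltas)
SELECT
      SUM(CASE WHEN RTRIM(s2.counter_name) = 'Avg Disk Read IO (ms)'
               THEN s2.cntr_value - s1.cntr_value END) * 1.0
    / NULLIF(SUM(CASE WHEN RTRIM(s2.counter_name) = 'Avg Disk Read IO (ms) Base'
                      THEN s2.cntr_value - s1.cntr_value END), 0) AS avg_read_latency_ms
FROM sys.dm_os_performance_counters AS s2
JOIN #s1 AS s1
  ON s1.counter_name = RTRIM(s2.counter_name)
 AND s1.instance_name = RTRIM(s2.instance_name)
WHERE s2.object_name LIKE '%Resource Pool Stats%';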
View 2 Replies
View Related
Jun 15, 2007
I'm getting this after upgrading from 2000 to 2005: "Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist." The only suggestion I've seen is to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion? Thanks in advance.
View 3 Replies
View Related
Sep 13, 2007
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers.
Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted out the replication module and dropped the publication first, then did a full restore.
When I tried to set up the replication through the script, it created the publication with the following error message:
Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.
It seems the previous replication had set up these system views (SYNC_FCR To GPRPTS_GL00100). I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.
My question is: how do I remove these system views, or how do I make the replication work without using these views or by creating new views? Why is it creating those system views in the first place?
I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.
Regards,
Aravin Rajendra.
View 2 Replies
View Related
Jan 17, 2002
Hi,
My production box is running SQL 7.0 with merge replication. I want to add one more table to the replication, and I want to add one more column to an existing replicated table. Can anybody guide me on how to do this? This is very urgent.
Regards
Don
View 1 Replies
View Related
Aug 22, 2007
Hello,
I have this problem on a Production database.
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
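A hedged sketch of the workaround commonly used when a replicated transaction is stuck in the log of a database that is no longer (or was never properly) configured for replication; this is the sp_repldone/sp_replicationdboption sequence, run against the affected database, and it should be checked against the relevant KB article for your build before use (the database name is a placeholder):
-- Temporarily mark the database as published so sp_repldone can run
EXEC sp_replicationdboption @dbname = N'MyDatabase', @optname = N'publish', @value = N'true';

USE MyDatabase;
-- Mark all pending replicated transactions as already distributed
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1;

-- Remove the replication metadata and un-mark the database
EXEC sp_removedbreplication N'MyDatabase';
EXEC sp_replicationdboption @dbname = N'MyDatabase', @optname = N'publish', @value = N'false';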
Thank you.
View 4 Replies
View Related
Mar 6, 2007
Hello, I'm getting the following error message when I try to add a row using a stored procedure:
"The identity range managed by replication is full and must be updated by a replication agent."
I read up on the subject and have tried the following solutions according to MSDN without any luck (http://support.Microsoft.com/kb/304706). sp_adjustpublisheridentityrange (http://msdn2.microsoft.com/en-us/library/aa239401(SQL.80).aspx) has no effect.
For testing, I've reloaded everything from scratch, created the publications by running the generated SQL scripts, created replication snapshots and started the agents.
I've checked the current identity values in the Agent table:
DBCC CHECKIDENT ('Agent', NORESEED)
Checking identity information: current identity value '18606', current column value '18606'.
I checked the table to make sure there will be no conflicts with the primary key:
SELECT AgentID FROM Agent ORDER BY AgentID DESC
18603 is the largest AgentID in the table.
Using the Table Article Properties in the Publication Properties dialog, I can see values of:
Range Size at Publisher: 100,000
Range Size at Subscribers: 100
New range @ percentage: 80
In my mind this means that the publisher will assign a new range when the current identity value goes over 80,000?
The identity range for this table cannot be exhausted! I'm not sure what to try next.
Please, any insight will be of great help!
Regards,
Bm
View 1 Replies
View Related
May 26, 2015
What are the main differences between snapshot, transactional and merge replication?
View 5 Replies
View Related
Jul 28, 2006
Hi,
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription, it works fine. If I then close my app and go back in, I can access my data without any problem
- if I run the app and try to access data in my SQL Express database, it works fine. I can then close the app, reopen it and run the pull subscription, and it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without first closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.Net 2 code to access my SQL Express data in the app and closing all connections etc.
Any advice appreciated !
Thanks
Ronan
View 2 Replies
View Related
Jun 28, 2007
Hi all,
I have recently set up transactional replication in MS SQL 2000. After setting up the replication, the client's TempDB grew by almost 60 GB. Now the client is blaming me for the TempDB growth and saying that it's because of the replication being set up. I tried to convince them otherwise, but they are not satisfied yet. Can anybody please tell me whether replication causes tempdb to grow, and if yes, how? Can you suggest any good link for getting to know the internal workings of SQL Server replication?
Thanks in advance
Jacx
View 3 Replies
View Related
Aug 30, 2007
Hi all,
I know that adding a column using ALTER TABLE automatically allows SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is being used for merge replication, and I don't want this column to be replicated. Re-initialising the subscriptions is not an option. Help would be appreciated.
I am using SQLSERVER 2005 (SP1).
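A hedged sketch of one approach on SQL Server 2005 merge publications: temporarily turn off schema-change replication for the publication, add the column, then turn it back on. sp_changemergepublication and the replicate_ddl property are standard, but the publication and table names below are placeholders and the behaviour should be verified on a test system first.
-- Stop replicating DDL for this publication (placeholder names)
EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'replicate_ddl',
    @value       = N'0';

-- Add the column; it should not be propagated while replicate_ddl = 0
ALTER TABLE dbo.MyArticleTable ADD LocalOnlyColumn int NULL;

-- Re-enable DDL replication for future schema changes
EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'replicate_ddl',
    @value       = N'1';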
View 3 Replies
View Related
Sep 2, 2015
I have been researching the proper steps, or sequence, to follow to completely remove SQL Server 2012 transactional replication. I have read articles about using SSMS as well as replication stored procedures; some procedures use SQLCMD or just regular T-SQL executed in SSMS. I have also read articles where people said all you really need to do is connect to the publisher instance, find the publication you want to remove and choose "Delete", and everything will be taken care of behind the scenes. I have three SQL servers that participate in transactional replication: SQL-P (publisher), SQL-D (distributor) and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just connecting to the publisher and clicking delete on the publication? I want everything gone, including any metadata, system tables, the distribution db and any other replication objects created during the initial configuration.
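A hedged outline of the script-based teardown (the procedures named are the standard replication stored procedures; publication, database and server names are placeholders, and the exact order should be validated in a test environment):
-- On the publisher (SQL-P), in the published database, for each publication:
EXEC sp_dropsubscription @publication = N'MyPublication', @article = N'all', @subscriber = N'all';
EXEC sp_droppublication @publication = N'MyPublication';
EXEC sp_replicationdboption @dbname = N'MyPublishedDb', @optname = N'publish', @value = N'false';

-- On the subscriber (SQL-S), remove leftover pull-subscription metadata, if any:
EXEC sp_subscription_cleanup @publisher = N'SQL-P', @publisher_db = N'MyPublishedDb';

-- On the distributor (SQL-D), once no publishers remain, drop distribution:
EXEC sp_dropdistpublisher @publisher = N'SQL-P';
EXEC sp_dropdistributiondb @database = N'distribution';
EXEC sp_dropdistributor;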
View 6 Replies
View Related
Apr 17, 2007
Hello everyone, I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps a filtering criterion can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and will be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criterion for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report which relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial of course, but it is sufficient to give you an idea of the current thought process.
I assume that this is a requirement that many of you may have encountered in the past, and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?
Any feedback would be appreciated.
Regards!
Anthony
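For reference, transactional replication does support row filters on an article, so this kind of criterion can live in the publication itself rather than in triggers. A hedged sketch using the standard sp_articlefilter/sp_articleview procedures (publication, article and filter names are placeholders, and the filter clause is just the example criterion from the post above):
-- Define a horizontal (row) filter on article T of an existing publication
EXEC sp_articlefilter
    @publication   = N'ReportingPublication',
    @article       = N'T',
    @filter_name   = N'flt_T_reporting',
    @filter_clause = N'(X = ''WOOD'' AND Y = ''MAHOGANY'') OR (X = ''METAL'' AND Z IN (12, 24, 36))';

-- Recreate the synchronization view so the filter takes effect
EXEC sp_articleview
    @publication   = N'ReportingPublication',
    @article       = N'T',
    @filter_clause = N'(X = ''WOOD'' AND Y = ''MAHOGANY'') OR (X = ''METAL'' AND Z IN (12, 24, 36))';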
View 11 Replies
View Related
Oct 9, 2007
I am working on bringing our disaster recovery site up as a live site. Currently we replicate to one of our servers (server B) with merge replication (from server A). Server A also does one-way transactional replication from some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merge updates also replicate down the transactional path?
Example...
Server B updates a row and merges it to Server A. Will this update then replicate (via transactional replication) to Server C?
thanks...
View 5 Replies
View Related
Nov 9, 2006
I have a weird situation! I set up transactional replication on one of my development servers (SQL 2000 Dev Edition with SP4). It was running fine without any issues and then, all of a sudden, I noticed that in my Replication Monitor tab under Publishers, where I usually see the publication, it is now empty.
I do see the snapshot agent, log reader and distribution agent under my agents inside Replication Monitor. But it was useful to see all 3 agents in one window under the publisher before. What happened? Is there any way to get that back inside the monitor? Has someone encountered this situation before? Please advise.
After that I tried to create a new set of replication on a different database on the same server, and I don't see those either under Replication Monitor - Publishers. All it says is (No Items).
I would appreciate any help to correct this issue. Thanks in advance.
View 2 Replies
View Related
Jan 30, 2007
I set up transactional replication with everything on one box. Later (two or three weeks later), Replication Monitor is showing a red X under my publishers (publications is disconnected). This is SQL 2005.
Does anyone know how to fix this problem?
Thanks,
Frank
View 1 Replies
View Related