Transactional Replication Over Satellite Connection. Set Parameters For High Latency?
Jul 31, 2007
I have 2 servers: one in the head office and one at a remote location connected over a satellite link. The connection has typical satellite latency and bandwidth: 2 Mb down and 128 Kb up, with an average latency of 700 ms.
Both servers have plenty of processing power and RAM and are very underutilized.
I have 1 push publication at the publisher at the remote location. The publication has 1 article. The table contained in the article only grows; i.e. inserts are the only statements applied, no updates or deletes. The table grows by approximately 5 inserts every second. The table has 5 columns, none of which are blobs or other large types: an ID field, 1 datetime, 2 integers and 1 float column.
I have the agents set for continuous with a 2 second polling interval.
I am aware of the command parameters for the agents but am not sure which way to go to maximize the throughput of the data.
The subscriber table also has an AFTER INSERT trigger that updates another small table with the current value. Under normal network conditions the trigger works great. Over the satellite connection with high latency, the smaller target table seems to get a lock placed on it. I can only guess that the lock comes from the inserts, and perhaps from a large number of inserts per transaction?
Under normal network conditions, the latency from publisher to distributor to subscriber is about 4 seconds.
What would be good command parameter settings for the agents to compensate for the latency?
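For reference, a possible starting point on the Distribution Agent side, assuming the knobs in question are the documented agent command-line switches (the values below are only illustrative and would need testing over the satellite link):

-Continuous -PollingInterval 2 -CommitBatchSize 1000 -CommitBatchThreshold 2000 -SubscriptionStreams 2

Larger -CommitBatchSize / -CommitBatchThreshold values reduce how often a commit (and its round trip) is issued, which generally helps throughput over a 700 ms link; smaller values shorten how long each batch holds locks at the subscriber, which may ease the contention you are seeing on the trigger's target table. -SubscriptionStreams (SQL 2005) opens multiple connections to the subscriber, but test it with the trigger in place since commands are then applied in parallel streams.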
--> Declarations assumed from the usage below (not shown in the original snippet)
DECLARE @PublishedDBName sysname
DECLARE @SQL nvarchar(max)
DECLARE @Parameters nvarchar(500)
DECLARE @PublicationPK int
DECLARE @PublicationDatabase sysname
DECLARE @PublicationName sysname
DECLARE @TracerTokenID int

DECLARE @Publications TABLE (
    PublicationPK int IDENTITY(1, 1) PRIMARY KEY,
    PublicationDatabase sysname,
    PublicationName sysname,
    TracerTokenID int NULL,
    TracerTokenPostTime datetime NULL)

DECLARE @Latency TABLE (
    PublicationPK int NULL,
    DistributorLatency bigint NULL,
    Subscriber sysname NULL,
    SubscriberDB sysname NULL,
    SubscriberLatency bigint NULL,
    OverallLatency bigint NULL)

--> Cursor for fetching all publications in all databases
DECLARE curPublishedDatabases CURSOR LOCAL FAST_FORWARD FOR
    SELECT [name] FROM sys.databases WHERE is_published > 0

OPEN curPublishedDatabases
FETCH curPublishedDatabases INTO @PublishedDBName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'SELECT ''' + @PublishedDBName + ''', [name] FROM ' + @PublishedDBName + '.dbo.syspublications'
    INSERT INTO @Publications (PublicationDatabase, PublicationName)
    EXEC sp_executesql @SQL

    FETCH NEXT FROM curPublishedDatabases INTO @PublishedDBName
END

CLOSE curPublishedDatabases
DEALLOCATE curPublishedDatabases

--> Cursor for posting tracer tokens
DECLARE curPublications CURSOR LOCAL FAST_FORWARD FOR
    SELECT PublicationPK, PublicationDatabase, PublicationName FROM @Publications

OPEN curPublications
FETCH curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Parameters = N'@PublicationName varchar(200), @TracerTokenID_OUT int OUTPUT'
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.sys.sp_posttracertoken @publication = @PublicationName, @tracer_token_id = @TracerTokenID_OUT OUTPUT'
    EXEC sp_executesql @SQL, @Parameters, @PublicationName = @PublicationName, @TracerTokenID_OUT = @TracerTokenID OUTPUT

    UPDATE @Publications
    SET TracerTokenID = @TracerTokenID, TracerTokenPostTime = GETDATE()
    WHERE PublicationPK = @PublicationPK

    FETCH NEXT FROM curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
END
CLOSE curPublications
DEALLOCATE curPublications

--> Wait two minutes for all tokens to be committed at all subscribers
WAITFOR DELAY '000:02:00.000'

--> Then check the results for each posted token
DECLARE curTokens CURSOR LOCAL FAST_FORWARD FOR
    SELECT PublicationPK, PublicationDatabase, PublicationName, TracerTokenID FROM @Publications

OPEN curTokens
FETCH curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
WHILE @@FETCH_STATUS = 0
BEGIN
    --> Insert token history for each token
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.dbo.sp_helptracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS varchar(50))
    INSERT INTO @Latency (DistributorLatency, Subscriber, SubscriberDB, SubscriberLatency, OverallLatency)
    EXEC sp_executesql @SQL

    --> Make sure that the PublicationPK is added to the token history
    UPDATE @Latency SET PublicationPK = @PublicationPK WHERE PublicationPK IS NULL

    --> Clean up the tracer token
    SET @SQL = N'EXEC ' + @PublicationDatabase + '.sys.sp_deletetracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS varchar(50))
    EXEC sp_executesql @SQL

    FETCH NEXT FROM curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
END
CLOSE curTokens
DEALLOCATE curTokens

SELECT b.PublicationDatabase, b.PublicationName, a.Subscriber, a.SubscriberDB, a.OverallLatency, b.TracerTokenPostTime
FROM @Latency a
INNER JOIN @Publications b ON a.PublicationPK = b.PublicationPK
We have set up transactional replication with a dedicated distributor in a SQL Server 2005 environment. I have noticed that during a particular time of day the latency increases dramatically. I have been checking tracer tokens, and total latency during that time is around 30-40 minutes (both publisher to distributor and distributor to subscriber are taking much longer than normal). Normally it is less than 10 seconds. I was wondering if there is a way to pinpoint the exact cause of the latency. This is a pull subscription. I would appreciate it if someone could share (or point me toward) best practices on transactional replication setup and maintenance. My understanding is that only committed transactions are replicated, correct? I checked the database on the publisher and didn't see any outstanding long-running transaction.
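One way to narrow down which hop is backing up, as a sketch (run at the SQL 2005 distributor; the server, database, and publication names below are placeholders):

-- How many commands are still queued in the distribution database for this subscriber?
EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
     @publisher         = 'PUBSERVER',
     @publisher_db      = 'PubDB',
     @publication       = 'MyPublication',
     @subscriber        = 'SUBSERVER',
     @subscriber_db     = 'SubDB',
     @subscription_type = 1;   -- 1 = pull, 0 = push

If the pending command count stays near zero while tracer tokens still show 30-40 minutes, the backlog is on the publisher-to-distributor side (Log Reader), which usually points at log or I/O pressure on the published database during that window; if the count climbs into the millions, the Distribution Agent or the subscriber is the bottleneck.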
I have been working on a project to evaluate transactional replication in SQL Server 2005 to find out if it will be suitable to provide real-time reporting without impacting the primary database server. Thus far I have not seen latency better than several seconds, whether replicating a change to a table or using the "tracer tokens" troubleshooting functionality. It usually takes a couple of seconds from the publisher to the distributor and a couple more from the distributor to the subscriber.
I guess my question is actually whether it is possible to achieve sub-second (millisecond) latency for transactional replication under any circumstances. Someone on another forum told me it was not, so I thought I would ask a separate community to confirm. Regarding the hardware, the ping latency is less than 1 ms between the servers, and the servers themselves are decently powered quad-processor boxes with 4 GB of RAM each. It seems to have the same latency of 3-5 seconds regardless of whether I publish a single table or all of the tables in the database. I currently have it configured with the publisher on one server and the distributor/subscriber on the second, with the Log Reader and Distribution Agents set to run continuously. If it is possible, please recommend a configuration of the agents or of the publisher/distributor/subscriber topology to achieve this. Thanks in advance for any insight you provide.
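For what it's worth, delivery is batch driven on both hops (the Log Reader Agent harvests committed transactions from the log, then the Distribution Agent applies them), so the polling intervals put a floor under the latency. A sketch of the switches involved on the agent job steps (values are illustrative only):

Log Reader Agent:    -Continuous -PollingInterval 1
Distribution Agent:  -Continuous -PollingInterval 1

Even with 1-second polling on both hops plus the harvest/apply overhead, a couple of seconds end to end is about the practical floor, which matches what you were told: transactional replication is not designed to guarantee sub-second latency.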
I have multiple questions and would appreciate any suggestions in resolving them. I'm a novice to these issues.
1) First of all, what is the exact command to turn on the trunc. log on chkpt. option for the transaction log of a SQL Server 2000 database? Is this option on by default? I have noticed for one of the databases I'm managing that the transaction log was over 12 GB, while the db was only 425 MB. (See the sketch after question 4.)
2) How is it possible to run a DBCC TRACEON to see the content of the transaction log, to check whether we are having issues with any uncommitted transactions, i.e. updates, inserts, and/or deletes?
3) What are the commands to truncate inactive transactions and to increase the ReadBatchSize?
4) Lastly, how do I validate transactional replication via checksum, and find the actual latency when a small number of changes needs to be committed between publisher and subscriber?
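A sketch for questions 1 and 4, assuming SQL Server 2000 and an existing publication called 'MyPublication' (the database and publication names are placeholders). The option is typically off by default when the database uses the Full recovery model; also note that on a published database the log cannot truncate past transactions the Log Reader has not yet processed, regardless of this setting:

-- 1) Turn on trunc. log on chkpt. (SQL 2000 syntax; equivalent to the SIMPLE recovery model)
EXEC sp_dboption 'MyDatabase', 'trunc. log on chkpt.', 'true'
-- or: ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

-- 4) Validate a publication by rowcount and checksum (run in the publication database)
EXEC sp_publication_validation
     @publication   = 'MyPublication',
     @rowcount_only = 0   -- 0 = rowcount and checksum, 1 = rowcount only

For question 3, ReadBatchSize is a Log Reader Agent command-line parameter (e.g. -ReadBatchSize 1000 on the agent job step), not a stored procedure setting; latency itself is easiest to observe with tracer tokens on SQL 2005 or by timestamping a marker row on SQL 2000.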
I have a transactional replication setup that was running fine for a few months, but ever since there was a massive update of records at the publisher site, when I use the Replication Monitor I notice Status: Performance Critical, Subscription: [localservername]:[Orders_Repl], Performance: Critical, Latency: 03:34:47.
I was thinking of redoing the whole replication, but before I proceed, can you guys explain what happened and what it means when Latency shows 03:34:47? I don't remember ever seeing a latency like that before.
I am hoping to solve this critical performance issue; hope you guys can help. Thanks
Hi everybody, can anybody tell me how to get the number of commands delivered per minute in the case of merge replication between the publisher and subscribers? That way we can be sure that even if there is latency (due to high-volume transaction processing), replication is in good shape and things will catch up soon. Also, if there are any other similar measures which can be monitored to make sure that replication is going on fine, that would be great.
Please let me know if anyone has got information on the same.
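One way to get at per-session delivery numbers, as a sketch: the Merge Agent records session statistics in the distribution database. The query below assumes a SQL 2005 distributor and the MSmerge_sessions table; the column names are from memory of that schema, so verify them against your own distribution database before relying on this:

-- Rough throughput per merge session over the last day (run at the distributor)
SELECT s.agent_id,
       s.start_time,
       s.duration,                                                      -- seconds
       s.upload_inserts + s.upload_updates + s.upload_deletes       AS uploaded_changes,
       s.download_inserts + s.download_updates + s.download_deletes AS downloaded_changes
FROM   distribution.dbo.MSmerge_sessions AS s
WHERE  s.start_time > DATEADD(day, -1, GETDATE())
ORDER BY s.start_time DESC;

Dividing the change counts by the duration gives a rows-per-second figure per sync; these are the same numbers that drive the throughput display in Replication Monitor.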
I'm interested in combining Peer-to-Peer Transactional Replication and standard Transactional Replication to provide a scale-out solution on SQL Server 2005. The situation is as follows:
We may have 10 SQL Server 2005 instances (1 Publisher + 9 Subscribers) running transactional replication in the production environment, with updates allowed at the subscribers. To offload the publisher, we plan to have 2 Publishers (PubNode1 and PubNode2) using Peer-to-Peer Transactional Replication, and the remaining 8 subscribers will be divided into 2 groups. Subscribers 1-4 (SubNode1, SubNode2, SubNode3, and SubNode4) will be standard transactional replication subscribers of PubNode1, and the remaining 4 subscribers (SubNode5, ..., SubNode8) will be standard transactional replication subscribers of PubNode2.
Is it possible to set up the above 2 Publisher + 8 Subscriber topology? Also, could we set the 8 subscribers up with updatable subscriptions so that every node is updatable?
We do not plan to set all 10 nodes up using Peer-to-Peer Transactional Replication, as that would require keeping all n*(n-1)/2 (i.e. 45) peer-to-peer connections reliable. The maintenance cost seems high if the servers are not on a LAN, and the topology is very tightly coupled. So we prefer to divide the 10 nodes into 2 groups and reduce the cost, for each node, of maintaining connections to all the other sites.
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore.

When I try to set up the replication through the script, it creates the publication with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication set up these system views (SYNC_FCR To GPRPTS_GL00100), and I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.

My question is: how do I remove these system views, or how do I make the replication work without using these views or create new views? Why is it creating those system views in the first place? I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance. Regards, Aravin Rajendra.
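If the leftover objects really are just orphaned sync views from the old publication, a sketch of manual cleanup (back the database up first; the object name below is taken from the error message in the post, and DROP VIEW only applies if sysobjects reports the object as type 'V'):

-- See what the previous replication setup left behind
SELECT name, type FROM sysobjects WHERE name LIKE 'SYNC_FCR%'

-- Drop each leftover sync view, then re-run the publication creation script
DROP VIEW [SYNC_FCR To GPRPTS_GL00100]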
I have been researching the proper steps and sequence to completely remove SQL Server 2012 transactional replication. I have read articles about using SSMS as well as using replication stored procedures, with some procedures run through SQLCMD and others as regular T-SQL executed in SSMS. I have also read articles where people said all you really need to do is connect to the publisher instance, find the publication you want to remove, and choose "Delete", and everything will be taken care of behind the scenes. I have three SQL Servers that participate in transactional replication: SQL-P (publisher),
SQL-D (distributor) and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just a matter of connecting to the publisher and clicking Delete on the publication? I want everything gone, including any metadata, system tables, the distribution db and any other replication objects created during the initial configuration.
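A rough order of operations using the replication stored procedures, as a sketch (the publication and database names are placeholders; deleting the publication in SSMS covers roughly the first step, but the distributor teardown still has to be done at SQL-D):

-- 1) At the publisher (SQL-P), in the published database: drop subscriptions, the publication, and the publish option
EXEC sp_dropsubscription @publication = 'MyPub', @article = 'all', @subscriber = 'SQL-S';
EXEC sp_droppublication  @publication = 'MyPub';
EXEC sp_replicationdboption @dbname = 'MyPubDB', @optname = 'publish', @value = 'false';

-- 2) At the subscriber (SQL-S), in the subscription database: clean up any leftover subscription metadata
EXEC sp_subscription_cleanup @publisher = 'SQL-P', @publisher_db = 'MyPubDB', @publication = 'MyPub';

-- 3) At the distributor (SQL-D): drop the publisher registration, the distribution database, and the distributor role
EXEC sp_dropdistpublisher  @publisher = 'SQL-P';
EXEC sp_dropdistributiondb @database  = 'distribution';
EXEC sp_dropdistributor;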
I am working on bringing our disaster recovery site up as a live site. Currently we replicate to one of our servers (Server B) with merge replication (from Server A). Server A also does one-way transactional replication from some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merged updates also replicate down the transactional path?
Example: Server B updates a row and merges it to Server A. Will this update then replicate (via transactional) to Server C?
Here is the situation we are trying to resolve. The client has 2 locations, each with local applications running on a database. The db schema is the same at both locations; the data is different and won't overlap.
The requirements are: 1. at each location the application can read and write; 2. near-zero downtime for applications at each site; 3. the db at one site also has the data from the other site for DR purposes.
The client is running on SQL 2005 STD SP2
We looked at the approach of setting up db mirroring at each location plus two-way transactional replication between the locations. The mirroring was fine, and I was able to set up transactional replication from a mirrored publisher to a non-mirrored subscriber. But from what I experienced and from reading, there is no way to have the subscriber db mirrored, since the Distribution Agent simply doesn't have an option to specify a failover partner, so I guess it is not supported. Any comments on that?
Assuming that's correct, the only way of using SQL's out-of-the-box technology seems to be clustering at each location instead of mirroring; then the two-way transactional replication should work on clustered subscribers, I think (although I haven't tested it). Peer-to-peer replication would have been a good candidate between sites in this case, but the Standard edition of SQL 2005 rules that out.
On my distribution server there is a directory (mssql7\repldata\unc) that houses the .bcp, .sch and .idx files for replication. A directory is created for each publication. Most directories are empty, but some contain all the table .idx, .sch and .bcp files for the publication, and some of these files are over 6 months old. Does anyone know if these files are necessary to keep, and why only some publications have these files? I am wondering if they are kept when you push the publication and it errors out originally and you have to restart it. If anyone can shed some light on this I would appreciate it.
Hi, I am planning to do transactional replication from a production server to a reporting server. One of my databases has 150 tables in it, and only 100 of them have a primary key; the remaining 50 tables have no primary key, so I am not able to include them in transactional replication. Could anyone tell me how I should approach this? I need to have production data at my reporting server at regular intervals, so please advise. I am using SQL 7.0 with Service Pack 2.
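Transactional articles require a primary key, so the usual choices are to add a key to each of the 50 tables (if a unique column or column combination exists) or to move those tables by another mechanism such as snapshot replication or a scheduled DTS/bcp copy. A sketch of the first option (table and column names are placeholders):

-- If the table already has a unique, non-null column (or combination), promote it to a primary key
ALTER TABLE dbo.SomeTable
    ADD CONSTRAINT PK_SomeTable PRIMARY KEY (SomeID);

-- If no natural key exists, an identity column can be added first
ALTER TABLE dbo.OtherTable ADD OtherID int IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.OtherTable ADD CONSTRAINT PK_OtherTable PRIMARY KEY (OtherID);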
I have a problem with transactional replication in SQL Server 7. I have two databases (a publisher/distributor server and a subscriber server). When I do an UPDATE in the publisher database, the Log Reader Agent reads the UPDATE as a DELETE-INSERT pair, and when the subscriber pulls the subscription and tries to apply the DELETE first, the replication stops because of the foreign keys in the subscriber database (which are the same as the publisher's). If anybody knows how to solve this, please tell me.
I have two servers networked via workgroups (no domain) and am trying to install transactional replication. Everything appears to install correctly, but when I run the initial Snapshot Agent I get the following error:
The process could not create file '\\CONCEPCION\C$\MSSQL7\ReplData\unc\CONCEPCION_Erik_Replication_Erik_Replication\19991029153207\Employer.sch'. Access is denied.
The SQL Agent is starting with the same user account on both machines - admin rights and the same password. I have added this user to the ReplData folder permissions as well. I can even see an 'Employer.sch' file in the directory.
I am planning to implement replication between two servers (either snapshot or transactional). The two servers are physically connected, but we don't have a domain; i.e. both servers are connected and can be pinged from each other, but they are not in any domain (not even different domains). How should I configure the SQL Server and Agent services' 'Service startup account'? In this case I cannot start both servers using the same domain account, so how can I relate them...
Hi guys, can anyone help me with this issue? If so, thank you kindly.
1. Created a transactional publication (for tables, stored procs and views)
2. Created an empty database at the subscriber
3. Set up a pull subscription at the subscriber
4. Ran the initial Snapshot Agent at the publisher
5. Checked the schema files and found out it hadn't scripted the primary keys on the tables
Problem: the primary keys aren't going across. Could anyone tell me if I've missed something? Thank you, much appreciated guys.
- Could it be that it is not picking up the primary keys on the tables because I am replicating stored procs and views with that publication?
- Should I break my publication into 3, say 1 for each object type?
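One thing worth checking, as a sketch: each article has a schema_option bitmask that controls what the Snapshot Agent scripts, and primary key constraints are only scripted if the corresponding bit is set. The procedures below exist in SQL 2000/2005, but verify the exact bit values against the documentation for your version before changing anything (the publication, article, and mask values here are placeholders):

-- In the publication database: inspect what the article is set to script
EXEC sp_helparticle @publication = 'MyPub', @article = 'MyTable';

-- If the schema_option mask is missing the primary-key bit (documented as 0x80; verify for your version),
-- it can be changed and a new snapshot generated afterwards
EXEC sp_changearticle
     @publication = 'MyPub',
     @article     = 'MyTable',
     @property    = 'schema_option',
     @value       = '0x00000000000000F3';   -- illustrative value only; OR the PK bit into the existing mask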
Hello all! I am setting up my first Replicated database, and want to get this right.
We are a Newspaper, and I will be Publishing the database that is the back end of the web server. The goal is to load balance the web site as well as to have a second identical server in case something goes wrong. Both the Subscriber and the Publisher need to be kept in sync as new articles are posted, so my client will just be updating the Publisher. But I'd like to keep a single set of usage statistics, so I was thinking of using Immediate Updating so the Subscriber can update the Publisher using two-phase commit.
What will happen if the Publisher shuts down and an update is attempted on the Subscriber? Will it just hang? Let's assume the worst - what if my Publisher is kaput? How do I determine that my other database is a Subscriber, and how do I stop it from being a Subscriber?
I have a requirement to push a transactional publication to various subscribers. Each subscriber will need to receive a different subset of rows from a table, so I need all rows published, but with horizontal filtering at the subscriber level. Is there a way of applying filtering at the subscription level?
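In transactional replication the row filter lives on the article (publication side), not on the subscription, so the usual pattern is one publication per subscriber group, each with its own static filter. A sketch using the documented procedures (publication, article, filter, and column names are placeholders):

-- In the publication database: add a static row filter to the article for subscriber group A
EXEC sp_articlefilter
     @publication   = 'Pub_GroupA',
     @article       = 'Orders',
     @filter_name   = 'flt_Orders_GroupA',
     @filter_clause = 'RegionCode = ''A''';

-- Regenerate the article's synchronization view so the filter takes effect
EXEC sp_articleview
     @publication   = 'Pub_GroupA',
     @article       = 'Orders',
     @view_name     = 'vw_Orders_GroupA',
     @filter_clause = 'RegionCode = ''A''';

If truly per-subscriber parameterized filtering from a single publication is needed, that is a merge replication feature (HOST_NAME()/SUSER_SNAME() filters), not a transactional one.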
I am using transactional replication between two SQL 2005 databases. On Sunday evenings we run a maintenance plan that rebuilds the indexes on the publisher DB. When the plan runs, the replication receives errors and cannot continue. The error appears to be at the publisher level. The publisher agent runs continually and the subscriber runs every 10 minutes. The error states "cannot run sp_replcmds". Other than Sundays, everything runs fine.
I want to set up transactional replication. How many service accounts do we need, and what kind of permissions are required at the Windows level? We are using only two servers: publisher/distributor and subscriber. Which option is better, push or pull?
At the 'specify articles' step of the wizard, some tables have a key icon next to them that doesn't let me select them to be replicated. This happens with transactional replication, but if I do a snapshot publication, the key icons next to the tables are not there.
Need help guys... I'm new to SQL replication, though I successfully created a transactional replication test on my machine using the PUBS DB and a separate DB.
I'm planning to establish/set up transactional replication. There are two SQL Servers, one used by WebApp_1 and the other used by WebApp_2. Both servers are maintained by different administrators and sit in different domains/locations (both use Win2003 Server). We will transfer some data from Server_1 to Server_2. Server_1 will be set up as the Publisher and the other as the Subscriber; then an additional Server_3 will be added and will serve as the Distributor.
My questions are:
1. Do I need to request separate domain accounts from the two different domains to establish connections to the Distributor?
2. Will the user databases on the Publisher be compromised/affected as far as database settings and data are concerned?
I set up a SQL Server 2005 database as the publisher for about 50 SQL Mobile subscriptions. Yet in the Replication Monitor window, the subscriptions column shows 26549. Does that mean I have 26549 subscriptions, or does it mean 26549 synchronizations?
I have a question for anyone who has some tips/pointers for optimizing SQL merge replication publications.
The front end web server is running IIS 6.0 on Windows 2003 x86 Server Standard (Server A). The back end database server is running SQL 2000 Standard on Windows 2003 x86 Standard (Server B). The merge replication clients connect via HTTPS over the Internet from a custom C#.NET 2005 application using SQL 2005 Mobile running on Windows Mobile 5.0 (Client).
The publication itself has several filters on it. The entry point uses the user's Windows username to start the filter. Based on the user, it then filters the records in multiple tables. There are 68 articles and 44 filter statements. The filters extend multiple layers deep; in other words, they are not all filtering off the HOST_NAME() value - some tables filter from records in tables that themselves filter from the HOST_NAME() value. The publication is set to minimize data sent to the clients, and considers a subscription out of date if it has not synced in the last 4 days. All the rowguids are indexed as well.
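For reference, this kind of layered filtering is what parameterized row filters plus join filters do in merge replication; a sketch of the two procedures involved, using SQL 2005 syntax (publication, article, and column names are placeholders, so check parameter availability on a SQL 2000 publisher):

-- Parameterized row filter on the root article: each device sees only its own rows
EXEC sp_addmergearticle
     @publication         = 'FieldPub',
     @article             = 'Assignments',
     @source_object       = 'Assignments',
     @subset_filterclause = 'AssignedTo = HOST_NAME()';

-- Join filter: child rows follow their filtered parent rows
EXEC sp_addmergefilter
     @publication        = 'FieldPub',
     @article            = 'AssignmentItems',
     @filtername         = 'flt_Items_Assignments',
     @join_articlename   = 'Assignments',
     @join_filterclause  = 'AssignmentItems.AssignmentID = Assignments.AssignmentID',
     @join_unique_key    = 1;

Since each extra layer of join filters multiplies the evaluation work per sync, flattening the filter hierarchy (as you plan) and setting @join_unique_key = 1 wherever the join really is on a unique key are usually the first optimizations to try.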
There are approximately 35 clients actively using the application at any given time. On average, a client will initiate a merge replication 3-4 times per hour from 8am-5pm. Generally, a sync will take between 10 seconds and 2 minutes to complete, with most of them being around 30 seconds on average.
When a client starts a sync, there is a spike to about 50% on the server's CPU graph. If multiple clients attempt to sync at the same time the CPU utilization can be pushed to 100% for extended periods (more than 30 seconds).
I recently completed a project to increase the bandwidth available to the clients, and plan to reduce the number of filters significantly (although this will obviously increase the amount of data going to the clients and the storage needs on the individual devices). I also plan on changing the setting to not minimize the amount of data sent to the clients.
Having said all that, does anyone have any information about how to further optimize merge publications for mobile clients? The next publication will be on SQL 2005 x64 Standard if I can solve the issues in the test environment. I would like to enhance the publication as much as possible to make the end-user experience better than it currently is.
I'm still pretty new when it comes to SQL Server and I'd like to get some opinions about replication. I need to set up replication between 10 sites. The database is about 2 GB, and at each site about 10 users are entering data. The databases need to be synchronized at least 3 times a day. Now, our application developer told us to use transactional replication, while I'm thinking of using merge replication.
Can anyone give me some more advice about the pros and cons of either replication type? TIA
We have 3 sites and would like to do replication between them: 1 publisher and 2 subscribers. All 3 sites should be able to do updates, and the subscribers should be able to operate even if the publisher is down. Any advice will be appreciated.