Data Replication Performance.
Jul 20, 2005
I'm searching for information regarding database replication
performance. We need to compare the performance of replication for SQL
Server and Oracle, and it is urgent! Can anyone describe the
performance bottlenecks for each database when performing replication,
or point me to a white paper or web page?
View 1 Replies
Jun 25, 2006
I'm having a tough time finding any good resources on SQL Server replication performance. Are there any benchmarks or stats of any kind? How well does replication scale out?
In my scenario, I have one central publisher and several large tables, all with hundreds of millions of records. Every day I may insert/update millions of records in the publisher, and then I need to replicate the changes (in a few hours at most) onto a pool of subscribers, while they remain online.
Is the replication story robust enough to handle a situation like this?
View 1 Replies
View Related
Oct 26, 1999
Can anyone suggest how I can improve the performance of the replication process and make it faster?
Pran
View 1 Replies
View Related
Mar 9, 2004
Hi all,
I am keen to hear people's perspectives on how much additional load transactional replication will put on a server.
Obviously this will depend greatly on the level of transactions in the database, but a general indication would be great (eg 10% increase in overheads).
I am thinking of incorporating this into a new server structure which we are going to be setting up, and am unsure whether to make the primary server BOTH the publisher and the distributor, or to make the secondary server the distributor so that the load on the primary is reduced to being the publisher only.
Basically the secondary server will simply be a 'hot swap' of the primary - so I/O load on the secondary is not going to be an issue.
There may be two primaries (if that makes sense) replicating to the hot swap, so that if either primary is dropped the hot swap could take over either server's load/responsibilities - not sure if this makes a difference as to where to put the roles?
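For reference, a minimal sketch of what making the secondary server the remote distributor would look like (the server names and password are hypothetical, and the distribution database settings are left at their defaults):

-- On the secondary server: configure it as a distributor and create the distribution database.
EXEC sp_adddistributor @distributor = N'SECONDARY', @password = N'StrongPassword1';
EXEC sp_adddistributiondb @database = N'distribution';

-- On the primary server (the publisher): point it at the remote distributor.
EXEC sp_adddistributor @distributor = N'SECONDARY', @password = N'StrongPassword1';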
Anyone have any thoughts on this?
Thanks in advance.
Cheers
View 6 Replies
View Related
Aug 24, 2006
Hello, could you please advise on how to measure replication performance in Oracle, DB2 and MS SQL Server RDBMSs installed on Windows servers? I've got two servers with databases installed and configured; I prepared a set of data using DBGEN from TPC and I have already imported it into the databases. I also configured the replication. Now I have to test a few kinds of replication methods implemented in these RDBMSs, but I don't know which tools, reports or "v$iews" I should use to measure replication performance. The replication is configured only between the same RDBMS, I mean Oracle <- Oracle, DB2 <- DB2 and MSSQL <- MSSQL. Most applications are great for checking the performance of a local DB, not a replicated/distributed one. I've found a description of CA Unicenter Database Performance Management for distributed RDBMSs, and I think it could be the right one, but I can't find any demo or trial version :( Could you please advise any place to download it, or any other application, script, description - just whatever. Or perhaps any other idea how to check the efficiency of the replication mechanism?
Regards,
Mark
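On the SQL Server side, one hedged starting point (this sketch assumes transactional replication and the default distribution database and table names) is the distribution agent history:

-- Run in the distribution database: recent distribution-agent throughput and latency.
SELECT TOP 20
    h.agent_id,
    h.start_time,
    h.delivered_transactions,
    h.delivered_commands,
    h.delivery_rate,       -- commands per second
    h.delivery_latency     -- milliseconds
FROM dbo.MSdistribution_history AS h
ORDER BY h.start_time DESC;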
View 5 Replies
View Related
Apr 18, 2006
Hello,
I am trying out a replication sample. I create a table with thousands of records on the publisher side; as I create a subscription (the subscription database is on a remote machine),
the whole table is created in the remote database.
I wanted to measure the performance as:
1. How much time was taken to fill the whole table on the subscriber side?
2. If I insert some 10,000 records on the publisher side, how much time is taken to insert the same records on the subscriber?
How do I measure this? Can I use some Log Reader stuff?
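If this is SQL Server 2005 transactional replication, tracer tokens give a direct latency measurement. A minimal sketch (the publication name is hypothetical):

-- Run at the publisher, in the publication database.
DECLARE @token_id int;
EXEC sys.sp_posttracertoken
    @publication = N'MyPublication',
    @tracer_token_id = @token_id OUTPUT;

-- Some time later: how long the token took to reach the distributor and each subscriber.
EXEC sys.sp_helptracertokenhistory
    @publication = N'MyPublication',
    @tracer_id = @token_id;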
thanks in advance
View 3 Replies
View Related
May 15, 2007
Hello to all,
I have a performance question: we have a cluster with 2 SQL instances on 1 node (another instance is on another node, but no link with my current problem!). Let's call them C1SQL1 and C1SQL2.
This node is a Hyperthreaded Xeon 2.8Ghz with 1 gig of memory.
These 2 instances use transactional replication and are configured as the distributor and publisher. C1SQL1 is not using much power; it's a small replication with around 10 agents. C1SQL2 is a bit heavier, with around 100 distribution agents. C1SQL2 has around 50 subscribers across 12 publications, but not all subscribers are used in each publication.
Once in a while, this cluster node impacts our production environment (since it's also a production server), and we're wondering whether, performance-wise, it's simply not powerful enough to be the distributor.
I've isolated C1SQL2 on its own logical CPU, and in idle mode the replication workload (history, checking if new transactions are made) peaks at around 15-50% every 4-10 seconds.
Can I have any input on this?
Thanks!
View 1 Replies
View Related
Feb 25, 2004
Hi
I have successfully set up transactional replication, allowing the subscriber to update the publisher. All works well for a while, but after a couple of weeks or so it fails - and always for a different reason!
My question is: is there anything that can be done to help replication stay healthy? I had thought of doing regular backups of the database and the transaction log, and then truncating the transaction log.
Any advice, or links to other troubleshooting resources, much appreciated.
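For what it's worth, a minimal maintenance sketch along those lines (the database name and paths are hypothetical). Note that with transactional replication the log cannot be truncated past records the Log Reader Agent has not yet processed, so a constantly growing log can also be a sign of a stalled Log Reader rather than a missing backup:

BACKUP DATABASE MyPublishedDb TO DISK = N'D:\Backups\MyPublishedDb.bak';
BACKUP LOG MyPublishedDb TO DISK = N'D:\Backups\MyPublishedDb_log.trn';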
View 1 Replies
View Related
Dec 14, 2006
Hi all.
Any assistance would be greatly appreciated.
We recently set up transactional replication, hoping to address performance issues we were experiencing. The replication is between 2 SQL Servers (2000), and since we introduced the replication, the performance has degraded considerably.
I will try and explain the scenario.
We have a primary db that our internal users use, and we also have the newly replicated db that our website and another application use. The users are complaining that the website and the internal application are extremely slow. Is it possible to do index tuning on both the primary db and the replicated db based on trace files, so as to create new indexes, or would this have an impact on the replication?
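As a general rule, adding nonclustered indexes at the subscriber does not interfere with transactional replication delivery; they are local schema. A hedged, hypothetical example of the kind of index a tuning trace might suggest for the reporting copy (it would be lost if the article were reinitialized with the default drop-and-recreate snapshot option):

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate);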
Thanks in advance.
View 4 Replies
View Related
Dec 18, 2006
Hello,
I have been experiencing some difficulties with poorly performing synchronizations using replication from SQL Server 2005 to SQL Server Mobile running on Windows Mobile 5 devices. Currently there are two main databases (each client will only use one of them), the 1st one has around 500,000 rows, and the 2nd has about 1,200,000 rows. The initial synchronization for the 1st database takes around 45 minutes, and for the 2nd, around 2.5 hours. This is quite long, but we have comforted our clients by saying that this is a one time delay, and that further synchronizations will be much quicker. Well, synchronizing the data after this is usually quite speedy, however, things get bad rather quickly when the number of changes increases.
In normal cases, the client will have at most a few thousand changes and all is well, the synchronization will typically be under a few minutes - no big deal. Once in a while though, there are a substantial number of changes to the database (from the SQL Server 2005 side), perhaps around 50,000 changes. When this happens, the synchronization process doesn't seem to ever finish (I've left it over the weekend and come back to find it still synchronizing). For the record, there seems to be a level at which the database will finish synchronizing, but be agonizingly slow - around 10,000 to 20,000 records will finish eventually (but take a few hours, at which point it's faster to just blow away the database and start again from scratch). This is obviously not acceptable, and I need to find a way to resolve this. Does anyone have any thoughts?
While on this topic, why does this synchronization process take so long anyways? The snapshot creation (even for the database with millions of rows) finishes in a couple minutes, and the actual transfer of data shouldn't take more than a few minutes. The device can't possibly be storing the database content in memory (the SDF file ends up being between 40MB and 100MB), but when I watch network activity, there tends to be an initial busy period, then a periodic and fairly small spike every few seconds until the process completes, so the connection isn't being saturated at all.
At this point, I am almost considering breaking the nice database design I have and creating combined logical records to see if reducing the number of rows may help. I'd really prefer not to have to go this route though, so if anyone has any suggestions, I'd really appreciate some feedback.
Thanks,
Adrien.
View 6 Replies
View Related
Jun 29, 2006
The client production server CPU starts thrashing. Task Manager indicates that SQL Server is gobbling CPU cycles. A look at the Replication Monitor makes it obvious that individual synchronisations to the mobile devices are taking significantly longer than expected.
Observing an individual synchronisation attempt, the "Upload changes to Publisher" rows are resolved very quickly.
The "Download changes to Subscriber" step seems to take a very long time.
Along the way, the estimated completion does a few interesting things, like going from 100% complete with no estimated time to complete, back to something like 77% with 2 minutes left to complete.
This sort of behaviour occurs when there are only a hundred rows to download.
Synchronisations for minimal amounts of data are suddenly taking anywhere from 2 to 15 minutes. Totally unacceptable from the client's perspective, but it seems that 2005 behaves quite differently from 2000 and the tricks are yet to reveal themselves.
Note - it is not a server hardware issue, as there is in excess of 3 GB of RAM, the database is on a SAN and there are four 3 GHz CPUs in operation.
Any possible help appreciated as this issue is beginning to drag on.
View 4 Replies
View Related
Sep 27, 2007
I have just upgraded from 2000 to 2005 and my transactional replication is running very slowly; I already have a latency of 10 minutes and it's getting worse. I'm just using the default agent profile - is there anything I need to change?
Help please.
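One hedged first step (assuming a default transactional replication setup) is to work out whether the Log Reader Agent or the Distribution Agent is the one falling behind, before touching profile parameters such as CommitBatchSize or ReadBatchSize:

-- Run at the publisher: per-database Log Reader throughput and latency.
-- If the Log Reader is keeping up, the backlog is more likely on the Distribution Agent side.
EXEC sp_replcounters;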
View 19 Replies
View Related
Jun 7, 2007
Hi All,
We are developing a system which will have to support more than 3000
subscribers. We will have to support both Transactional replication
and Merge replication.
I checked the following document about SQL 2005 replication <http://
www.microsoft.com/technet/prodtechnol/sql/2005/mergrepl.mspx>. The
document does not clearly specify what is the maximum number of
subscribers supported without a significant performance degradation.
The questions i have are:
1. Given the fact that there will be more than 3000 subscribers, there
will be more than 500-1000 subscribers trying to replicate at the same
time. Will there be a performance degradation in such a scenario?
2. Has anyone used SQL Server 2005 in a scenario involving more than
3000 subscribers?
3. Will it be better if we develop our own system to perform
replication activity instead of relying on SQL Server 2005?
- Ngm
Mail me at narasimha (DOT) gm (AT) gmail (DOT) com
View 6 Replies
View Related
Feb 25, 2008
Wondering if anyone has any experience with SQL Server Express Edition (SSEXP). We're looking at a mobile sales force type model, so a local database on a laptop with no real time network connection. So the users would collect data locally, then connect up to the network every few days to replicate the data to a central server.
So questions.. Has anyone tried anything similar? How stable/mature is SSEXP? Any other thoughts, alternatives or gotchas anyone can think of?
Thanks for the input.
View 1 Replies
View Related
May 14, 2007
Summary: Started replication April 1 of 4M xact / day publishing system to subscribing system.
Performance was good. Latency was ~ 5-7 seconds.
May 10 we noticed that the DB was behind (latency was 12 hours).
All performance counters seem good with the exception of the disk.
. Performance spikes are 8 minutes apart and last from 30 - 60 seconds.
. During this period, Disk % Busy (1 - Disk % Idle) is 100%
The publisher DB publishes about 50-52 xacts/sec.
Rate of distribution (distribution DB to Subscriber DB) is ~ 47 xacts / second, so latency is increasing (currently at 33 hours). Previously my Subscriber system's "capacity" was 150 xacts / sec.
I know this because several weeks ago, when the network went down, we ended up 24 hours behind.
When the network came back up the replication subscriber system was able to catchup at around 150 xacts / sec, or 3X the production system rate.
What has changed between then and now? Not much. We did install Tivoli Storage Manager (IBM's backup system) a couple of weeks ago. It seems to run fine on a nightly basis, and I don't see any periodic heavy disk I/O from it, but just to be sure I've had them shut the TSM services down.
We've also eliminated all extraneous processes other than those I need for performance monitoring (there was a RTVscan, virus scan process).
I've eliminated autogrowth as an issue, as I've bumped the growth increments so that autogrows are very infrequent (several days apart at this point). When we resolve the problem, I'll dial this back down to something more reasonable.
My disk configuration is not ideal, I realize (a single RAID-5 disk with 3 partitions); however, this has not changed in the last 6 weeks.
Thanks for any help on this!
Jack Griffith
Configuration:
Subscribing System:
SQL Server: 2000, SP4 - 8.0.2039
CPU - 2.8GHZ Xeon, Quad Dual-core
Memory - 3.5GB RAM
Disk: 3 partitions on a single RAID-5 disk with 1118 GB of space:
C: 39GB System and Programs
D: 97GB Log space
E: 982 GB Data space
Replication configuration:
- nosynch, continuous Transactional Replication
- Distribution db is on Subscription system
- distribution - Publication of approx. 50 transactions / second
Subscriber DB configuration:
DB size: 64458 MB
Logging: Simple (at this point)
distribution
DB size: 3111 MB
Logging: Simple (at this point)
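For anyone trying to quantify the backlog, a hedged sketch against the distribution database (default system view name); sampled over time it shows whether the subscriber is catching up or falling further behind:

-- Run in the distribution database on the subscribing system.
SELECT agent_id,
       SUM(UndelivCmdsInDistDB) AS undelivered_commands,
       SUM(DelivCmdsInDistDB)   AS delivered_commands
FROM dbo.MSdistribution_status
GROUP BY agent_id;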
View 1 Replies
View Related
Jun 10, 2015
I've been asked to put together an estimation for the performance impact that replication would have on our database server during a particular operation. I know that this depends on a lot of different factors, including:
* Number of articles being replicated
* Types of articles being replicated
* Number of DML transactions that would result in delivery of replicated data
Any way to turn this into a meaningful metric?
View 0 Replies
View Related
Sep 21, 2007
Hello,
We previously had two servers, A and Z. Server A is used for updating data, and the data is then replicated to Server Z, which is used for reporting.
Server A :
purpose : used for database updation/ modification
SQL Server version : SQL Server 2000 SP 2
Server Z :
purpose : used for Reporting
SQL Server version : SQL Server 2000 SP 2
We were doing transactional replication from Server A to Server Z.
Last month we bought another server (Server B) with the same hardware configuration but with SQL Server 2005 installed. This is to speed up our database update process. We have moved some of the databases onto this new server so that we can meet our deadlines.
Server B :
purpose : used for database updation/ modification
SQL Server version : SQL Server 2005
I have set up the transactional replication from Server B to Server Z and replication works fine.
However, the issue is that after we started replicating from this new server (Server B), the performance of all the queries dropped a lot (making my life harder).
I didn't expect this, as our reporting server is still SQL Server 2000.
I have restored the backup of the database which was replicated from Server A (SQL Server 2000) and compared the execution plan for one of our common queries (which is used in most of the reports and which is now taking longer to return results).
I found that the database replicated from Server B (SQL Server 2005) has primary keys, which were not present in the database replicated from Server A (SQL Server 2000).
I have then removed the primary keys and made the indexes the same as in the previous copy of the database (which was replicated from Server A), but the query still takes a long time.
The execution plan now shows a "Table Spool" operator which was not present with the previous copy of the database.
Almost every query against this database is taking longer now.
Can someone suggest what is wrong and what I need to fix?
Am I going in the right direction?
View 14 Replies
View Related
Jan 8, 2007
Hi guys, may I know if there are any examples of stored procedures/scripts for monitoring replication status and performance? I only know about sp_replmonitorhelppublisher. Thanks for the assistance.
From,
Hans
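A hedged sketch of the related monitoring procedures in the same family (run at the distributor, in the distribution database, on SQL Server 2005; the parameter defaults are from memory, so check Books Online):

EXEC sp_replmonitorhelppublication;                         -- per-publication status and latency
EXEC sp_replmonitorhelpsubscription @publication_type = 0;  -- per-subscription status (0 = transactional)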
View 1 Replies
View Related
Jul 28, 2006
Hi,
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database it works fine. I can then close the app, reopen it and run the pull subscription it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without firstly closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.Net 2 code to access my SQL Express data in the app and closing all connections etc
Any advice appreciated !
Thanks
Ronan
View 2 Replies
View Related
Feb 21, 2007
We recently implemented merge replication. The replication is between 2 SQL Servers (2005) on the same network, and since we introduced the replication, the performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that it is unidirectional: the flow of changes is from publisher towards subscriber (only one publisher, which is also the distributor, and one subscriber).
2) Updates outnumber inserts, and a single article, let's say "Article1", has up to 2000 updates per day; I am seeing that dbo.MSmerge_upd_sp_Article1_GUID is taking the most CPU time. What should we do?
On the subscriber database, response time is getting slow and I am experiencing a large number of lock timeouts on the application end.
Can anyone also suggest server-level settings for avoiding lock timeouts?
Looking for any experienced solutions/suggestions.
Thanks in advance.
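On the lock timeouts: the timeout itself is a session setting rather than a server-level one, so there are two hedged options worth testing (the database name below is hypothetical):

-- Option 1: have the application session wait longer for locks instead of failing immediately.
SET LOCK_TIMEOUT 10000;   -- milliseconds

-- Option 2 (SQL Server 2005): row versioning on the subscriber database, so readers do not
-- block behind the merge update procedures. Test the tempdb and merge-agent impact first.
ALTER DATABASE SubscriberDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;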
View 3 Replies
View Related
Oct 8, 2007
We have a SQLServer 2005 Enterprise merge replication publication with SQL Mobile 3.0 subscribers (Windows Mobile 5.0 and 6.0). We do not use pre-computed partitions due to trigger performance issues with an SSIS/ETL application that supplies data to the merge database. We do use the "Optimize" (=true) option, though we have tried this both ways with no significant differences. We use filters and joins for each worker ID (as HOST_ID) from the subscriptions.
The sync times become increasingly worse after we run the snapshot and bring the publication online. I have tried rerunning the snapshots, this helps little, as it often behaves like the subscription was set to reinitialize and forces a big sync (reload of all data) to the subscriber. We have tried much of the obvious (e.g., flattening filters and joins, adding indexes, etc.).
When users are synchronizing, we watch replication monitor and notice that a lot of time is spent processing "enumerating inserts and updates for article [any article]", especially processing the many generations and batches. This is true for any follow-up syncs after the 1st big sync (initializing the subscription).
I read several posts regarding the batches and generations of changes, and decided to try increasing the "DownloadGenerationsPerBatch" parameter. I tried adding this parameter to the snapshot agent job, and the job fails each time with a vague message, even with the default value of 100. How do you change this parameter for SQL Server 2005 Enterprise?
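For what it's worth, DownloadGenerationsPerBatch is a Merge Agent parameter rather than a Snapshot Agent one, which would explain the snapshot job failing. A hedged sketch of setting it through a custom (user-defined) agent profile at the distributor - the profile id below is hypothetical:

-- Run at the distributor, in the distribution database.
EXEC sp_help_agent_profile @agent_type = 4;   -- 4 = Merge Agent; note the id of a user-defined profile

EXEC sp_add_agent_parameter
    @profile_id      = 101,                   -- hypothetical user-defined merge agent profile
    @parameter_name  = N'DownloadGenerationsPerBatch',
    @parameter_value = N'500';

Alternatively, the parameter can be appended to the Merge Agent's job step command line (-DownloadGenerationsPerBatch 500), but a profile keeps the change in one place.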
Any suggestions?
Thanks in advance,
Matt
View 5 Replies
View Related
Nov 17, 2006
Hi,
I am trying to set up transactional replication with updating subscribers on SQL 2000. One column on a few tables contains data with single quotes (').
How do I handle in this case? Did any one come across such case?
Can I change the default QUOTED_IDENTIFIER from ' (single quote) to something else (@@@) on SQL 2000?
If yes, how do I do it?
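As a hedged note: QUOTED_IDENTIFIER only controls whether double quotes delimit identifiers; it cannot be remapped to another character, and it has no bearing on single quotes inside data. Single quotes in data are simply doubled when written as literals, and replication moves them without any special setting. A hypothetical example:

INSERT INTO dbo.Customers (CustomerName)
VALUES ('O''Brien');   -- stored as: O'Brien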
Thanks
mka
View 1 Replies
View Related
Apr 13, 2008
Hi,
I have a table (replica of my original table for test purpose) with half a million records. The structure of the table is as following:
id (int), Name (varchar), DataComplete (DateTime), Age (int), bit1 (bit), bit2 (bit), bit3 (bit), bit4 (bit), bit5 (bit), bit6 (bit), bit7 (bit), bit8 (bit), bit9 (bit), bit10 (bit), bit11 (bit), bit12 (bit)
The data retrieval on this table is (in my opinion) pretty slow. A simple SELECT * FROM MYTABLE takes 5 seconds. How can I improve the retrieval time? I have tried to define some indexes but saw no improvement. Can some guru help me in this regard? At least the SELECT * must return data in 1 second or less.
Thanks in advance...
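A hedged observation: no index can make a full SELECT * over all 500,000 rows much faster, because every page still has to be read and shipped to the client. Indexes pay off when the query filters rows or returns a subset of columns. A hypothetical example of a query that an index can actually serve, using the columns above:

CREATE NONCLUSTERED INDEX IX_MYTABLE_DataComplete
ON dbo.MYTABLE (DataComplete)
INCLUDE (Name, Age);    -- INCLUDE requires SQL Server 2005 or later

SELECT Name, Age, DataComplete
FROM dbo.MYTABLE
WHERE DataComplete >= '20080101';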
View 2 Replies
View Related
Dec 12, 2005
Well, I hope, after my endless search on the web, maybe somebody can help me here. I'm converting an application to a web-based application, so I have to use ADO.NET to access a SQL Server DB. But the performance is extremely poor! Just to compare: with the normal application or with the SQL Query Analyzer, the query needs about 2 seconds; using ADO.NET, it's about 200 seconds! Of course, the query is quite long, but the difference is extreme - too extreme for the same query. I have already tried a lot, actually everything I found on the web. What is really strange is that neither the processor nor the memory is fully used, and the web server uses more of the CPU power than the SQL Server itself (the two servers are still on my machine where I'm programming). So, could it be that the problem is in filling up the DataSet with the SqlDataAdapter or something similar? Thanks for your help!!
View 8 Replies
View Related
Dec 13, 2005
I was wondering if someone could clear this up for me...
I have a table that will be used to store information about products that I will be selling on a new website. Each product row will include a description of around 1000 words. I was deliberating over whether to store this chunk of text in a column with the data type set to 'text', or to store the text in a separate .txt file on the server. The search facility on the site will not be required to query this text, so which would offer the best performance?
View 3 Replies
View Related
Nov 9, 2007
I am running a trace and one of the columns is start time
I want to import the corresponding performance log data into Profiler.
It is a new feature in 2005
however this option is disabled in the profiler
this option is at
File -> Import performance Data
Please advise on how to enable this option.
thanks in advance
View 5 Replies
View Related
Feb 2, 2006
Hi SQL gurus, I have a table structure question. I will have a table 'Models' that has one-to-many 'incomes' and one-to-many 'costs'. These 2 entities have exactly the same structure, which is 7 smallmoney columns and a name. Is it better to create a table 'Incomes' and a table 'Costs', both with the same number of fields, like this:
Incomes: in_idmodel, in_1, in_2, in_3, in_4, in_5, in_6, in_7, in_name
Costs: c_idmodel, c_1, c_2, c_3, c_4, c_5, c_6, c_7, c_name
or is it better to create one single table that will contain both entities, like this:
Incomes_Costs: ic_idmodel, ic_1, ic_2, ic_3, ic_4, ic_5, ic_6, ic_7, ic_name, ic_isIncome
which only differs from the two above by the isIncome field, used to know which row is an income and which row is a cost?
I'd like to know which method is the best in terms of performance and general structure, and I would greatly appreciate it if you explain a little the reasons that drove you to suggest one method over the other. Thanks all for your time!
ibiza
View 4 Replies
View Related
Jul 20, 2005
Hello guys, wonder if any of you could help me out here. I have just created a new empty database and imported data from another database into it. This was done with the import wizard from MMC. The first thing that I noticed was the size difference: the old database was well over 1GB, but the new one was only about 400MB. The second thing I've noticed, and this is the problem, is that accessing the new (smaller) database instead of the old one causes a huge speed degradation, about 5 times slower than the old version. We are using MS SQL Server 7. Any help would be very gratefully received.
Regards
Gethyn
View 1 Replies
View Related
Mar 20, 2007
Hello All,
I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then, the package execution takes a lot of time.
The data in the tables is of the order of 66,000,000 rows, and we were required to kill the package execution after it took more than 24 hours. The CPU usage was more than 13,000 seconds and disk I/O was well above 330,000,000. I am new to the details of SSIS. Can anyone please tell me why the package has become so resource hungry?
Thanks in advance,
Atul
View 3 Replies
View Related
Oct 26, 2007
Guys,
I have 14 databases; the last one, the 14th, will have lookup tables only. The other 13 databases will have these lookup tables plus data tables. At the end of each day I will make updates to the lookup tables on the 14th database, and I want to be able to push the updates to any or some of the 13 databases. Lookup tables will have only up to 100 rows, so I am not concerned about the bandwidth. What is the best way to accomplish this?
Any suggestions and inputs would help
Thanks
View 1 Replies
View Related
Apr 22, 2008
I have an opportunity to rebuild a database model with the express purpose of improving query performance. So given the following I have a few questions.
Table A (~500M records)
Primary Key Field (int)
Field 1 (varchar)
Field 2 (varchar)
Field 3 (varchar)
Field 4 (varchar)
Field 5 (varchar)
Table B (1B+ records)
Primary Key Field (int)
Foreign Key Field (int)
Field 1 (varchar)
Field 2 (varchar)
Field 3 (varchar)
Field 4 (varchar)
Field 5 (varchar)
* Assumed: Tables are inner joined on all queries. The database is readonly.
-- Most of my lookups are based on querying Field 1 of Table A. The data content of Field 1, Table A is 90% unique.
1) Would it be more beneficial to put the clustered index on Field 1 instead of the PK field in Table A?
2) Can an Identity column be non-clustered?
3) Alternatively, would it be beneficial to build a separate lookup table with just the PK & Field 1 of Table A, with a clustered index on the lookup table Field 1 which I join on Table A? (did that make sense?)
-- I have a secondary lookup that performs queries on Fields 1, 2, 3, 4 & 5 of Table B
1) Would it be more beneficial to create an additional indexed lookup column of the concatenated values of Fields 1-5 of Table B versus a covering index of all 5 columns?
2) Does a clustered index have to be unique?
3) Would a clustered index be more beneficial over Fields 1-5 or the special lookup column versus the PK or FK fields?
4) Would creating a special lookup table with just the requisite fields be more beneficial?
An extra question. The existing data model uses the CHAR datatype for all columns less than 9 characters wide and the columns are set to allow nulls. This requires every select statement to COALESCE() and RTRIM() all these columns. I intend to make all (affected) columns VARCHAR, NOT NULL with a default value of a 0-length string.
Will this enhance query performance?
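On questions 1 and 2: an identity column can indeed be left nonclustered so that the clustered index can go on Field 1. A hedged sketch of that layout (column widths are made up, names mirror the example above):

CREATE TABLE dbo.TableA
(
    PKField  int IDENTITY(1,1) NOT NULL,
    Field1   varchar(100) NOT NULL,
    Field2   varchar(100) NOT NULL DEFAULT '',
    Field3   varchar(100) NOT NULL DEFAULT '',
    Field4   varchar(100) NOT NULL DEFAULT '',
    Field5   varchar(100) NOT NULL DEFAULT '',
    CONSTRAINT PK_TableA PRIMARY KEY NONCLUSTERED (PKField)
);

-- A clustered index does not have to be unique; SQL Server adds a hidden 4-byte
-- "uniqueifier" to duplicate key values (relevant since Field 1 is only ~90% unique).
CREATE CLUSTERED INDEX CIX_TableA_Field1 ON dbo.TableA (Field1);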
Thanks in advance for any insight.
View 7 Replies
View Related
Jul 23, 2005
Hi, I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now, I have encountered 2 problems:
1) Sometimes, when I try to query against this table, I get a SQL command timeout. Hence, I did more testing with Query Analyzer, only to find that the same queries do not always take about the same time to execute. Could anyone please tell me what affects the speed of the query, and which is the most important factor of all? (I can think of the open connections, the server's CPU/memory...)
2) I am not sure whether 2 million rows is considered a lot or not; however, it is starting to take 5-10 seconds for me to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance?
Thank you,
Charlie Chang
[Charlies224@hotmail.com]
View 5 Replies
View Related
Apr 11, 2006
I want to write an application monitoring program to collect SQL Server 2000 performance data, such as pages/sec, bytes total/sec, etc., BUT I don't know how to do it. In Oracle, there are the v$ views and DBA views where I can find the information I'm interested in. The question is: is there a similar suite of views in SQL Server 2000 to provide the performance information? Thank you very much - I will be driven mad by this question, as I have googled all day, but in vain.
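A hedged pointer: the closest SQL Server 2000 analogue to Oracle's v$ views for these counters is the master.dbo.sysperfinfo system table, which exposes the SQL Server objects from Performance Monitor (operating-system counters such as Memory\Pages/sec are not in it, and the per-second counters are cumulative, so sample twice and take the difference):

SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Page reads/sec', 'Page writes/sec', 'Batch Requests/sec');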
View 1 Replies
View Related