Problem : Extreme Activity - Slow Merge Replication - From A Schema Update
Jul 4, 2007
Hi there,
We have a big problem here with merge replication, specifically whenever a schema change occurs. We are replicating schema changes as well as triggers/stored procedures. For example, we changed about 150 stored procedures and about 30 triggers. This is then replicated to the subscriber database (which is also a merge publisher for further remote systems that were offline at the time) over a 10Mb link - hardly low bandwidth. However, the replication process takes about an hour and a half - considering the SQL on the primary server took less than a minute to run, this is a big surprise.
We've run a trace to see if we can identify what is going on. There seems to be a great number of calls to sp_MSunmarkschemaobject - we are still waiting for a trace to complete to fully analyse this, but it looks like it is called repeatedly for every stored procedure in the database. We are currently re-testing against one of the remote servers with the merge agent set to the slow profile (not much hope that this will alter the poor performance).
This task looks to be excessive - and certainly does not seem to function in a sensible manner. Has anyone else had similar issues, or any suggestions? This is very infuriating, as it means that the servers are effectively offline for a minimum of an hour and a half (in fact the remote servers take over 4 hours!).
SQL Query Analyzer on my Windows XP machine is very slow. When I run a query on my colleague's computer, it takes about 2 minutes to finish, which suggests the SQL Server itself has no problems. On my computer, the same query took about 3 hours to finish. Can someone give me some suggestions on how to fix this problem?
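One way to narrow this down is to compare the server-side timings on both machines; if the times reported by the server are similar, the bottleneck is on the client (for example, Query Analyzer rendering a very large result set). A minimal check, run before the query on each machine:

    SET STATISTICS TIME ON;   -- report parse/compile and execution times in the Messages output
    SET STATISTICS IO ON;     -- report logical/physical reads per table
    -- run the slow query here, then compare the reported timings on both machines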
I wonder if I can ask for some advice? I wish to use SQL 2005 and merge replication, but I have a few concerns over the schema changes that I am allowed to make on the publisher.
Ideally I want SQL replication to push all my schema changes from the publisher to the subscribers, but I don't know if it can, or even if it is recommended.
I see BOL suggesting that schema replication was mainly intended for ALTER-type statements, but can I use it for CREATE and DROP type DDL too?
For example, if I add a new table at the publisher, I presume I have to add a new article for this table too. Then how can I create a new snapshot? Is this even the right way of doing it? Similarly, if I want to DROP a table at the publisher, I'm guessing I have to also delete any existing articles for it first, then recreate the snapshot.
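For reference, adding a new table as a merge article and regenerating the snapshot is typically done roughly like this; the publication and table names below are made up:

    -- add the new table to the existing merge publication
    EXEC sp_addmergearticle
        @publication = N'MyMergePub',
        @article = N'NewTable',
        @source_object = N'NewTable',
        @type = N'table',
        @force_reinit_subscription = 0;   -- existing articles should not need a full reinitialisation

    -- regenerate the snapshot so the new article can be delivered to subscribers
    EXEC sp_startpublication_snapshot @publication = N'MyMergePub';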
Are there better / safer ways of doing schema changes? Perhaps I'm best disabling publishing and asking each remote site / db to update their schema manually and then re-enabling the publisher?
Hello SQL and .NET gurus :-) I have a problem with my website www.eventguide.it. It's completely developed under .NET 2 and SQL Server 2005 Express. My problem is the following: the server is an Intel 3GHz HT processor with 1GB RAM. No other page on the running system is a CPU-consuming site. We optimized the SQL statements, the code, the caching and many other parts of the website (pooling on SQL access), but SQL Server uses about 50% to 100% of the CPU and about 400MB RAM all the time. The whole site seems to be very, very slow. In fact there are many SQL operations on every page request, but we cache a lot of them in different ways (page output caching, application caching), so I don't understand why we have so many performance problems. Any suggestions for optimised code in general? I read nearly all of the MS .NET performance papers - but real-world experience is the missing part :-) Is it better to cast the values of a SQL reader like this
Dim String1 As String = CType(DataReader.Item(0), String)
Dim Integer1 As Integer = CType(DataReader.Item(1), Integer)
or like this
Dim String1 As String = DataReader.Item(0)
Dim Integer1 As Integer = DataReader.Item(1)
Thanks a lot for your help! FOX
I'm trying to set up merge replication between two servers whose databases have the same schema. The two databases have the majority of their data in common, plus data inserted later independently on the two servers. (The two servers were connected in a merge replication scheme that failed at some point; replication was paused, but users continued to insert data independently at the two servers.)
I need to get them up and running.
I cleaned replication at both servers, I recreated the publication at the publisher/distributor, and all is fine.
When I create a push subscription to the subscriber I get the error: invalid column name 'rowguidcol'.
So far I have managed to get merge replication running on two identical databases (schema and data).
Just some thoughts. After some reading I found that it might be related to identities and identity ranges, or indexes. I set the identity seed and increment to 2, 2 at the publisher and 1, 2 at the subscriber (on the same tables at publisher and subscriber). Is that OK? Is that the way to do it?
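For what it's worth, with merge replication identity ranges are usually managed per article rather than by hand-picking seeds; a sketch along these lines (publication, table and range sizes are illustrative only):

    EXEC sp_addmergearticle
        @publication = N'MyMergePub',
        @article = N'MyTable',
        @source_object = N'MyTable',
        @identityrangemanagementoption = N'auto',   -- let replication hand out ranges to each node
        @identity_range = 10000,                    -- range given to each subscriber
        @pub_identity_range = 100000;               -- range kept by the publisher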
Digging a bit more with SQL Profiler, I can see that the error happens when sp_MSaddmergetriggers executes.
Thanks a lot for any help.
Version:
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
I am using a merge publication set to synchronize schema changes. Why am I getting the following message when I try to sync after having added columns in the publication database using ALTER TABLE statements?
The schema definition of the destination table ... in the subscription database does not match the schema definition of the source table in the publication database. Reinitialize the subscription without a snapshot after ensuring that the schema definition of the destination table is the same as the source table.
Also, why doesn't reinitializing the subscription with @upload_first = false fix the problem?
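For context, the reinitialisation described would normally look something like this; server and database names here are placeholders:

    EXEC sp_reinitmergesubscription
        @publication = N'MyMergePub',
        @subscriber = N'SUBSCRIBERSERVER',
        @subscriber_db = N'SubscriberDB',
        @upload_first = N'false';   -- discard pending subscriber changes rather than uploading them first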
I'm trying to set up merge replication. The publication is created successfully, and the snapshot also, but when I create a pull subscription on the subscriber server and the merge agent starts, after some time I get an error message of this type:
The schema script '\ANILREPLDATAuncANIL_BEJK_BEJK20070625142735dl_HF_vMSCene_3836.sch' could not be propagated to the subscriber. It seems there is a problem with certain Views and SPs, because tables are successfully created, and some Views and SPs also.
I tried to exclude the problematic articles, but every time another one pops up. Up until now I have excluded 7 articles from the publication (1 stored procedure and 6 views), but I still get errors.
I gave up because I can't exclude half of the Views and SPs just to make it work.
Is there something that can be done to solve this problem?
Question re merge replication (pull) and processing order. We have a group of changes associated with an app upgrade; the scripts run fine on the publisher. Part of the change includes creation of a new table, followed by altering a view to use the new table. Following the change at the publisher, when the sync is kicked off from the subscriber, it fails - the alter of the view throws an 'invalid object' error with regard to the new table. It seems as if the view alter is attempted before the dependent table has been created.
I have tried to amend the processing order of the view using sp_changearticle, which executes (quickly) with a 0 return code, but to no avail - the error still occurs. Is it possible to change the processing order for a view article so that it is applied to schema changes?
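For merge articles the processing order is set with sp_changemergearticle rather than sp_changearticle; a sketch worth trying is below, with placeholder names, and it should be verified on your build whether processing_order also governs the order in which replicated schema changes are applied:

    EXEC sp_changemergearticle
        @publication = N'MyMergePub',
        @article = N'vMyView',
        @property = N'processing_order',
        @value = N'100',                    -- articles with higher values are processed later
        @force_invalidate_snapshot = 1,
        @force_reinit_subscription = 0;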
I have a problem with SQL Server merge replication in SQL Server 2000. If my db owner is "dbo" and the replication setup is done under the "sa" account, it works without any problem. But when I use another db owner it does not work properly. For example, I have a Customer table ([dbo].[Customer]). When I set up merge replication under the sa account it works properly. When I set up merge replication using another db owner ([INV].[Customer]) it doesn't work.
Hi, I'm using several SQL 2000 servers with different working merge replications. I wrote a .NET assembly which makes updates to key values in one of the replicated tables:
Ex: UPDATE ReplTable SET [key] = -[key] WHERE [key] BETWEEN 1 AND 100
Loop ... UPDATE ReplTable SET [key] = NewKey WHERE [key] = ... end of loop
The intention is to re-sort the data rows, so I make two updates to every key value in a very short time. In the Conflict Viewer I got messages like:
"The row was inserted at 'ServerX' but could not be inserted at 'ServerY'. Violation of PRIMARY KEY constraint 'PK_xxxx'. Cannot insert duplicate key in object 'ReplTable'."
The replication only delivered one (the last) update - which causes the violation. Is there a way to make the merge agent aware that I want every transaction delivered? Is there a parameter in the agent's profile?
I have created an Integration Services package with a script source reading data from Active Directory. Quite a lot of data is read and written into a SQL 2005 database.
I have noticed that when I run the package by right-clicking it (under Stored Packages / MSDB) and selecting "Run Package", it takes about 45 minutes to complete successfully. The same goes if I run the package from inside Visual Studio. However, when I create a job and put this package in as one of its tasks, the task takes about 1h 40min. This is more than double the time! In neither case is the server occupied with something else, and I have tried it several times, so it wasn't just an "accident".
I am trying to create a replication topology (merge replication) like the one below.
Subscriber1 --> Publisher <-- Subscriber2.
I have created both subscribers with a Subscription Type of Server and a priority of 75. I update Column A of Row_10 in Subscriber1 at, say, 11 am. Then I run the synchronization agent from Subscriber1. The value is propagated to the Publisher, so the Publisher now contains the latest value in Column A. Up to now, Subscriber2 has not been synchronized with the Publisher.
Now Column A of Row_10 is also updated in Subscriber2, say at 11.10 am. So the Publisher now contains the value from Subscriber1 for that column, and in Subscriber2 we have the same column updated.
Now, when I run the synchronization in Subscriber2, I get a result I did not expect: the Publisher's value is propagated to Subscriber2. But in the real scenario Subscriber2 has the latest value, which was updated at 11.10 am.
I don't know what I am missing here. I expected merge replication to look at the timestamps and decide the winner, but it always considers the publisher the winner and pushes its data to the subscriber.
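One point worth checking: by default merge conflicts are resolved by subscription priority, not by the time of the change. If a later-change-wins policy is required, the article can be configured with the DATETIME (Later Wins) resolver against a datetime column the application maintains; names below are placeholders and the exact resolver string should be confirmed with sp_enumcustomresolvers:

    EXEC sp_addmergearticle
        @publication = N'MyMergePub',
        @article = N'MyTable',
        @source_object = N'MyTable',
        @article_resolver = N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
        @resolver_info = N'LastModified';   -- hypothetical datetime column used to pick the winner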
I have got a business logic update conflict handler working, but I have had to work around what appears to be a bug.
Please can someone confirm whether this is indeed a bug - and if so, whether it is a known bug?
My conflict handler needs to take some columns from the publisher row and some from the subscriber row in the event of conflict.
I can quite happily generate a custom dataset which contains the winning row that I want - I can see that because I can step through the conflict handler with debug when a conflict occurs.
However, just returning ActionOnUpdateConflict.AcceptCustomConflictData from the UpdateConflictsHandler method does not set the publisher and subscriber columns correctly. I end up with different values on the two databases.
I have found that the only way to get the correct rows on both publisher and subscriber is to create a new ADO connection to the publisher and actually perform an update - updating all the modified columns. This now works reliably in my testing.
Fortunately, due to business rules, update conflicts are likely to be very infrequent, but I would very much like to avoid having to do the 'unnecessary' update.
Notes:
- I am using column-level tracking, but I have seen the problem with row-level tracking too.
- I have mainly been using SP1, but have repeated the test on a configuration using the SP2 CTP, and the problem occurs there too.
- The problem is not due to complex logic in my code. If the method just sets customDataSet = publisherDataSet.Copy and then returns ActionOnUpdateConflict.AcceptCustomConflictData, the changed and winning publisher values are not sent to the subscriber.
2 users in 2 locations issue updates to the same table, one updating one column and the other updating another column. In reality, the stored procedure issuing the update statement is passed all the possible columns that could change and builds an update statement that updates all columns, even the ones that haven't changed.
Will this break merge replication's conflict tracking? Or does SQL Server 2005 merge replication pick up that the 2 updates actually only changed the values in 2 columns?
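Part of the answer depends on whether the article uses row-level or column-level tracking, which is set per article; a minimal sketch with made-up names:

    EXEC sp_addmergearticle
        @publication = N'MyMergePub',
        @article = N'MyTable',
        @source_object = N'MyTable',
        @column_tracking = N'true';   -- detect conflicts per column instead of per row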
I'm working on a replication topology that is completely merge. We have a single consolidated instance (SQL 2005 SP1 Standard) that holds all data and is a continuous push merge publication, filtered by region, to regional instances (SQL 2005 SP1 Standard). Then we have individual user instances (SQL Express SP1) that pull from the republished regional instances, filtered by user. Both publications have Replicate Schema Changes set to true.
I'm testing out changes to tables and SPs on a test system. I've been using this process:
1-Run Snapshot on the Consolidated instance
2-Verify all published articles have a status of 2 in sysmergearticles
3-Run Regional Snapshot
4-Verify all published articles have a status of 2 in sysmergearticles
5-Run alter table scripts
6-Once all three levels have the table changes, run the alter sp scripts
I've gotten to step 5, and the changes get replicated to the regional instance just fine; however, only the existing column changes get replicated to the SQL Express instance, not the new columns. Looking at the articles in the regional publication, it shows the new columns, but they are not selected. I know I can manually select them (or probably write a script that adds them to the publication, although sp_repladdcolumn has been deprecated), but isn't there a way to make this a completely automated process, since it's just a republished database? Also, is the process I'm using the correct one?
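If scripting does turn out to be necessary, the deprecated-but-still-available procedure mentioned above is called roughly like this; object, column and publication names are placeholders:

    EXEC sp_repladdcolumn
        @source_object = N'dbo.MyTable',
        @column = N'NewColumn',
        @typetext = N'int NULL',
        @publication_to_add = N'MyRegionalPub';   -- or 'all' to add the column to every publication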
I've got the following problem: Our product is delivered with SQL2000 and SQL2005.
Now there are some schema changes which I'd like to deploy with T-SQL on the publishers. With SQL 2000 I do it with sp_addmergecolumn etc., and on SQL 2005 (if the replication compatibility level is 90) with replication of DDL. So far so good.
But how can my T-SQL script determine whether the replication of DDL is on? I know sp_helpmergepublication exists, but how do I get at the replicate_ddl column of its result set?
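One option, assuming the publication database exposes the replicate_ddl column in sysmergepublications (it should on SQL 2005), is to read it directly instead of parsing the procedure's result set; the publication name is a placeholder:

    SELECT replicate_ddl        -- 1 when DDL replication is enabled for the publication
    FROM dbo.sysmergepublications
    WHERE name = N'MyMergePub';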
I know that adding a column using ALTER TABLE automatically causes SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is being used for merge replication without that column being replicated. Reinitialising the subscriptions is not an option. Help would be appreciated.
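One commonly suggested approach, worth testing carefully first since the publisher and subscriber schemas will then differ for that table, is to switch DDL replication off around the change; publication, table and column names below are placeholders:

    -- pause schema change replication for the publication
    EXEC sp_changemergepublication
        @publication = N'MyMergePub',
        @property = N'replicate_ddl',
        @value = N'0',
        @force_invalidate_snapshot = 0,
        @force_reinit_subscription = 0;

    -- add the column that should stay local to the publisher
    ALTER TABLE dbo.MyTable ADD LocalOnlyColumn int NULL;

    -- resume schema change replication
    EXEC sp_changemergepublication
        @publication = N'MyMergePub',
        @property = N'replicate_ddl',
        @value = N'1',
        @force_invalidate_snapshot = 0,
        @force_reinit_subscription = 0;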
I do realize this could be posted in a few places, but I think the answer is in the SQL. I have an ASP.NET page with a SqlDataSource, a text box and a calendar control. The text box and calendar control are both bound (via Eval) to the same DateTime field in the SQL data source. The text box is formatted as short time and the calendar's Eval has no formatting, e.g.: <asp:TextBox ID="START_TIME" Text='<%# Eval("EVENT_START","{0:t}") %>' runat=server Width=200></asp:TextBox>. I want to merge the two controls - one has the date, the other has the time - when I update the page's data to the SqlDataSource field EVENT_START. I've tried a couple of methods, but I would like some other opinions. As SQL Server only supports date and time together, I am storing the two together. I could merge the two in the code-behind in the update button's event handler, or merge the two during the update query using parameters. I can't really get an illegal date from the calendar control, but I could get garbage from the time text field, so I would still have to validate the text field before SQL Server could do the update. There are a few ways to go about this, so I was wondering if anyone else has figured out an elegant way to handle it. wbochar
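If the merge is done in the update query itself, one way on SQL 2005 (which has no separate date and time types) is to rebuild the datetime from two parameters; the table, key and parameter names here are invented for illustration, with the parameters supplied by the SqlDataSource:

    -- @EventDate carries the calendar's date; @EventTime is a parseable time string such as '14:30'
    UPDATE dbo.Events
    SET EVENT_START = DATEADD(minute,
            DATEPART(hour,   CONVERT(datetime, @EventTime)) * 60
          + DATEPART(minute, CONVERT(datetime, @EventTime)),
            CONVERT(datetime, CONVERT(char(8), @EventDate, 112)))   -- the date at midnight
    WHERE EVENT_ID = @EventID;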
I see from the help books that table changes are not allowed using Enterprise Manager after a publication exists; the only way to make database changes is to use T-SQL. I found the correct stored procedure to make the changes below.
I am working on bringing our disaster recovery site up as a live site. Currently we replicate to one of our servers (Server B) with merge replication (from Server A). Server A also does one-way transactional replication of some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merged updates also replicate down the transactional path?
Example... Server B updates a row and merges it to Server A. Will this update then replicate (via transactional replication) to Server C?
Hi all, I am doing replication of a SQL Server 2005 database from one server to another server, and everything worked fine. Now, when I added some fields to this database, the replication stopped. Can I replicate the schema of a SQL database? How? Should replication still work even if I add some fields?
A table is currently set up with Transactional Replication. A new field needs to be added to the table. The system needs to continue to be live. What is the best way to handle this kind of situation?
The database model cannot be frozen once replication is in place, and a fresh snapshot cannot be taken each time this occurs, since that is very time-consuming on large tables, and errors could occur with other replicated tables since there are foreign key constraints in the target database.
I haven't been able to find a way to continue replicating the old records which may have been in the log during the addition of the field, as well as the new records with the new field. I must be missing something, since the model must be able to be altered once replication is in place without shutting down the entire system.
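On SQL 2005, if the publication was created with the default replicate_ddl = 1 setting (an assumption about this setup), the column can simply be added at the publisher and the DDL change flows through the existing log reader and distribution agents without a new snapshot; the table and column names below are made up:

    ALTER TABLE dbo.Orders ADD LastModified datetime NULL;   -- picked up and applied at the subscribers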
Hi, I have two questions/problems I would need help with. First, I am creating a three-level merge replication: I have a main publisher with its distributor, a subscriber/publisher also with its own distributor, and lastly a subscriber. I start by creating the relationship between the subscriber/publisher and the subscriber, and then I move up to the main publisher and subscriber/publisher relationship. After this second merge I get the following errors, and the articles in my subscriber/publisher are lost...
The merge process could not retrieve article information for publication 'I-Test-Publisher'. (Source: Merge Replication Provider (Agent); Error number: -2147201017)
Cannot insert the value NULL into column 'tablename', table '@tmp_table'; column does not allow nulls. INSERT fails. (Source: DEVSERVER3MSDE (Data source); Error number: 515) ---------------------------------------------------------------------------------------------------------------
Failed to allocate new identity range. (Source: DEVSERVER3MSDE (Data source); Error number: 21197) ---------------------------------------------------------------------------------------------------------------
Secondly
I have set up a two-level merge replication with a publisher and a subscriber, and had modified the structure of a table at the publisher with sp_repladdcolumn. During the day all was fine - I continuously tested it by adding more columns, dropping columns, and testing the replication itself with data entry - but when I got back the next morning my test table (and all other tables, I presume) had lost its data and structure. The conflict table for the corresponding test table had kept its structure, but no data were present since no conflict had occurred. This is a recurring problem that some colleagues have had too. Does anyone have any idea what might cause this problem and why it happened? Thanks
When we try to replicate the database from one server to another server it is very, very slow...
We assumed that an indexed view might be causing this issue, but we didn't verify that by removing the index and replicating again.
Can anyone advise on the following questions?
- Are indexed views really bad for replication (of the main table)?
- What is the best alternative to indexed views?
- Would a covering index solve the indexed view issue?
- Would a covering index avoid creating the replication issue?
- Is it a good idea to disable the index on the view during replication and re-create the index afterwards?
How can I keep schema changes in the subscriber's database when I replicate the snapshot from the publisher?
I just want to move data from a remote server, but it seems that the tables are being dropped, which is not good as we use the subscriber as a development box.
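If this is a snapshot or transactional publication, the drop-and-recreate behaviour is controlled per article by @pre_creation_cmd; a sketch of adding an article so that the existing subscriber table is left in place, with placeholder names:

    EXEC sp_addarticle
        @publication = N'MySnapshotPub',
        @article = N'MyTable',
        @source_object = N'MyTable',
        @pre_creation_cmd = N'none';   -- do not drop or truncate the subscriber table when the snapshot is applied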
We have several 2005 servers with "Maximum server memory" set to 214 gig, which I believe is the default at installation time. I am told that this means "use all the memory there is, including paging." Well, this is nuts, but the servers seem to work fine with this setting no matter how much physical memory they have.
One of our 2005 servers recently started paging like crazy, so I reduced "Maximum server memory" to 6000 and the paging disappeared (the server has 8 gig of physical memory) and the server appears happy.
I cannot explain why only this one server has this paging issue and the others do not. Should I be setting "Maximum server memory" on all my servers? Are there other considerations which might cause the server to eat up all the memory? As far as I know, no other applications run on this box.
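For reference, the change described is usually applied with sp_configure; the 6000 MB figure below is just the value from this post, not a general recommendation:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 6000;   -- cap the buffer pool so the OS keeps some memory
    RECONFIGURE;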