Adding A New Table To Replication And Changing One Column In A Replicated Database
Jan 17, 2002
Hi,
My production box is running SQL Server 7.0 with merge replication. I want to add one more table to the publication, and I also want to add one more column to an existing replicated table. Can anybody guide me on how to do this? It is very urgent.
Regards
Don
I'm working with merge replication between SQL Server 2005 and SQL Server 2005 Mobile. I'm using dynamic filtering with the HOST_NAME() function (... where Table.FilteredColumn = HOST_NAME()).
When I try to change values in the filtered column on the client side (Table.FilteredColumn), I get an error message saying the column is read-only.
How can I change data in the filtered column? Is this possible?
Changing a data type in a table used for merge replication?
I'm trying to change the data type of a column in a table that is used for merge replication with another (identical) DB.
I get this error: Cannot alter the table '[Tablename]' because it is being published for replication.
I've tried to remove that specific table from the publication (Publication Properties -> Articles tab) so I can change the data type and then put the table back into the publication, but I can't (probably because it's a merge publication).
I know that adding a column using ALTER TABLE automatically causes SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is used for merge replication without the column being replicated. Re-initialising the subscriptions is not an option. Help would be appreciated.
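One approach that is often suggested (a rough sketch only; the publication, table, and column names below are illustrative and not from the original post): temporarily turn off DDL replication on the publication, add the column, then turn DDL replication back on, so the ALTER TABLE is never propagated to the subscribers.

EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'replicate_ddl',
    @value       = 0;   -- 0 = do not replicate schema changes

ALTER TABLE dbo.MyArticleTable ADD LocalOnlyColumn int NULL;   -- publisher-only column

EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'replicate_ddl',
    @value       = 1;   -- resume replicating schema changes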
Hi,
I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore. When I try to set up the replication through the script, it creates the publication with the following error message:
Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.
It seems the previous replication had set up these system views (SYNC_FCR To GPRPTS_GL00100). I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.
My question is: how do I remove these system views, or how do I make the replication work without using these views, or create new views? Why is it creating those system views in the first place?
I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.
Regards,
Aravin Rajendra.
I have a table used in merge replication. If I add a new column to that table, it gives me an error for tables in which either this table is a key or this table has a key from other tables: "Error validating the default for column 'rowguid'." Also for views I'm getting an error: "View 'dbo.vw_TestCaseBasic': indexes and schema binding will be removed."
If I proceed further I get an error and am not able to add the column: "It is invalid to drop the default constraint on the rowguid column that is used by merge replication. The schema change failed during execution of an internal replication procedure. For corrective action, see the other error messages that accompany this error message. The transaction ended in the trigger. The batch has been aborted."
I got a problem when replicating a table with an image field.
First I started replication and it seemed to work. Then after a day or so I noticed that the replication had stopped and it had a few errors like this:
NULL textptr (text, ntext, or image pointer) passed to UPDATE TEXT function. (Source: MSSQLServer, Error number: 7133) Get help: http://help/7133
I tried adding a filter to prevent rows where [content] is NULL from being replicated, using the filter "[content] is not null", but I got the error message:
Cannot save properties for article "xxxxxxxxx". Additional information:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.Rmo)
Replication filter procedures may not contain columns of large object, large value, XML or UDT type. Cannot generate a filter view or procedure. Verify that the value specified for the @filter_clause parameter of sp_addarticle can be added to the WHERE clause of a SELECT statement to produce a valid query. Changed database context to 'xxxxxxxxxx'. (Microsoft SQL Server, Error: 341)
Now I'm left wondering how to get the replication to work.
We have a requirement to add a not null column to an existing table.
I created the column as NULL, updated it with empty strings (single quotes), and then converted the column to NOT NULL.
Now I can see that on the publisher it went fine, but on the subscribers I'm getting the error "cannot insert NULL value". I then tried to make the column nullable again to resolve the issue, but it still raises the same error and does not proceed further.
Now I have even dropped the column, and it's still the same issue. How can I resolve this? I'm using SQL Server 2008 R2 with SP3, and the replication type is merge replication.
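For reference, a minimal sketch of the way this kind of change is usually made so it can also be applied at the subscribers (the table, column, and constraint names here are illustrative): add the column as NOT NULL together with a named default in a single ALTER TABLE, so the replicated schema change arrives with the column already populated.

ALTER TABLE dbo.MyTable
    ADD NewCol varchar(50) NOT NULL
        CONSTRAINT DF_MyTable_NewCol DEFAULT ('');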
I have the following databases: DB1, DB2, DB3, DB4, DB5, ... I have to loop through each of the databases and find out whether the database has a table name containing the word 'Documents' (like 'tbdocuments' or 'tbemployeedocuments' and so on). If a table name containing 'Documents' is found in that database, I have to add a column named 'IsValid varchar(100)' to that table, and there can be more than one 'Documents' table in a database. Can someone show me the script to do it? Thanks.
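A rough sketch of one way to do this (hedged: it assumes SQL Server 2005 or later, that all the databases live on the same instance, and it uses the undocumented but widely available sp_MSforeachdb procedure; the database list is taken from the question and should be adjusted as needed). It adds IsValid only where the column does not already exist:

EXEC sp_MSforeachdb N'
IF ''?'' IN (''DB1'', ''DB2'', ''DB3'', ''DB4'', ''DB5'')
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'''';
    -- Build one ALTER TABLE per matching table in this database
    SELECT @sql = @sql + N''ALTER TABLE [?].'' + QUOTENAME(s.name) + N''.''
                       + QUOTENAME(t.name) + N'' ADD IsValid varchar(100) NULL;''
    FROM [?].sys.tables t
    JOIN [?].sys.schemas s ON s.schema_id = t.schema_id
    WHERE t.name LIKE ''%Documents%''
      AND NOT EXISTS (SELECT 1 FROM [?].sys.columns c
                      WHERE c.object_id = t.object_id AND c.name = ''IsValid'');
    EXEC (@sql);
END';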
I have a replicated database on 2008 R2 being moved to a new instance with 2014. Application regression testing is complete and all tests are good.
The current transactional replication topology is a publishing subscriber: the distributor is local and pushes to a subscription on an intermediary server. That server then has an identical publication and a local distributor, which pushes to a subscription on a customer's server. There's no business case for this setup; it's just how it was created at the time.
I want to change this to a Central Distributor model when we do the move to speed things up and remove any extra potential points of failure.
The customer requires that during this process their subscription database remain online, all tables stay fully populated, and replication picks up exactly where it left off. This is a completely new scenario for me; I was hoping to just be able to rebuild the subscription from scratch.
Is there a way to do this move and topology change and meet the customer's requirement?
Hello everyone,
I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps a filtering criterion can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and will be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criterion for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report which relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial of course, but it is sufficient to give you an idea of the current thought process.
I assume that this is a requirement that many of you may have encountered in the past, and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?
Any feedback would be appreciated.
Regards!
Anthony
We are doing reporting for a transaction system. Since we do not want to hamper the live database, we are planning to use transactional replication.
A few questions about transactional replication:
1. If we replicate a database, will whatever changes happen in the source DB be transferred automatically?
For example: if I change a column name of a table in the source system, will it be transferred automatically to the replicated DB?
2. If we make any change to any of the tables in the source system, do we need to recreate the replication and reload the entire data?
3. We are also planning to enable CDC on this replicated DB to enable incremental loads to my warehouse. So if we disable CDC and do a full load into the replicated DB, do we need to perform a full refresh of the warehouse?
4. Can we replicate at a table level, so that if we reload only the changed table there won't be any impact on the overall flow of the other tables?
We are currently using continuous transactional replication from an OLTP type database to a Reporting database. We recently changed the 'sa' password for the first time since replication was introduced and now the Replication Log Reader job fails to start.
This is the error from the Job Log:
Message
2008-04-14 15:18:24.894 Copyright (c) 2005 Microsoft Corporation
2008-04-14 15:18:24.894 Microsoft SQL Server Replication Agent: logread
2008-04-14 15:18:24.894
2008-04-14 15:18:24.894 The timestamps prepended to the output lines are expressed in terms of UTC time.
2008-04-14 15:18:24.894 User-specified agent parameter values:
    -Publisher OASV-SQL1
    -PublisherDB georgiaoas
    -Distributor OASV-SQL1
    -DistributorSecurityMode 1
    -Continuous
    -XJOBID 0xCE146F975BC43043AB1ECBF42F801C43
    -XJOBNAME OASV-SQL1-georgiaoas-1
    -XSTEPID 2
    -XSUBSYSTEM LogReader
    -XSERVER OASV-SQL1
    -XCMDLINE 0
    -XCancelEventHandle 00000730
    -XParentProcessHandle 0000075C
2008-04-14 15:18:25.034 Status: 0, code: 20015, text: 'Login failed for user 'sa'.'.
2008-04-14 15:18:25.034 Login failed for user 'sa'.
When we change the 'sa' password back to the previous value, we can start the Log Reader without issue.
We have tried restarting the SQL Server Agent service, without success. We have not restarted the SQL Server process because the server hosts production databases.
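If the Log Reader Agent connects to the publisher with stored SQL Server credentials, one thing worth checking (a hedged sketch, assuming SQL Server 2005 or later; the password value is a placeholder) is updating the password the agent has stored for its publisher connection. This is run at the publisher, in the published database:

EXEC sp_changelogreader_agent
    @publisher_security_mode = 0,                     -- 0 = SQL Server authentication to the publisher
    @publisher_login         = N'sa',
    @publisher_password      = N'<new sa password>';  -- placeholder, not a real value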
After I changed my computer's name (e.g. the original computer name was 'SERVER-1' and I changed it to 'SERVER-2'), I cannot create a new replication in SQL Server 2005. The error message:
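A commonly suggested fix for this situation (hedged: it assumes the problem is that @@SERVERNAME still reports the old machine name after the rename) is to update the server name metadata and restart the SQL Server service before re-creating replication. The server names below are taken from the question:

-- Check what SQL Server thinks its name is
SELECT @@SERVERNAME;

-- If it still shows the old name, update it
EXEC sp_dropserver 'SERVER-1';
EXEC sp_addserver 'SERVER-2', 'local';
-- ...then restart the SQL Server service.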
I want to move a table from one database to another database in the same instance. The table should be migrated with its complete data, the same column data types, all constraints (PK, FK, unique key, check, identity), and its permissions. Which is the right way to achieve this?
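One common approach, sketched below (hedged: it assumes both databases are on the same instance, that the table definition with all its constraints and permissions has already been scripted out, e.g. via SSMS "Script Table As > CREATE To", and run in the target database; the database, table, and column names are illustrative):

-- After creating the empty table (with its constraints and permissions) in the target DB,
-- copy the data across. IDENTITY_INSERT is only needed if the table has an identity column.
SET IDENTITY_INSERT TargetDB.dbo.MyTable ON;
INSERT INTO TargetDB.dbo.MyTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM SourceDB.dbo.MyTable;
SET IDENTITY_INSERT TargetDB.dbo.MyTable OFF;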
We have inherited a remote distribution DB where the password has been lost. I would like to confirm that if you change the password in Distributor Properties -> Publishers -> administrative link password, the password will be changed for all publishers without having to re-snapshot the publications?
I have a server where replication is set up between 2 different databases. It is currently running. I want to add a couple of tables to the replication. I tried using sp_addarticle, but after executing it, the properties of the publication show the new tables while at the database level the tables are still missing.
I tried with sp_addsubscription, but I am getting a strange error:
Server: Msg 14100, Level 16, State 1, Procedure sp_addsubscription, Line 240
Specify all articles when subscribing to a publication using concurrent snapshot processing.
What can I do to publish the tables into the target database?
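For reference, a hedged sketch of the sequence that usually applies when adding a table to an existing transactional publication (the publication, table, and server names are illustrative, not from the original post). Error 14100 typically means sp_addsubscription has to be called for all articles rather than just the new one:

EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'NewTable',
    @source_owner  = N'dbo',
    @source_object = N'NewTable';

EXEC sp_addsubscription
    @publication    = N'MyPublication',
    @subscriber     = N'SubscriberServer',
    @destination_db = N'SubscriberDB',
    @article        = N'all';   -- required with concurrent snapshot processing

-- Then run the Snapshot Agent so the schema and data for the new article are delivered.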
Dear group,
I have the following setting: satellite databases in different locations with the same structure (and sometimes identical data) need to be replicated into one central database. The databases hold information on machine results, and machines are numbered from 1 to n.
Results for machine 1 in location 1 should be entered in the central database, and results for machine 1 in location 2 should also be entered. Thus I would need a copy of the tables in the satellite database which adds one column for "location id" (or something like that).
Is there an easy way to accomplish this? It seems to me that the replication scenarios in the "replication wizard" for SQL Server miss out this case. Some hint at documentation or websites would be extremely helpful.
Thanks + greetings from Vienna
Uli
I've been looking for the right stored procedure to create a push subscription for a publication, but I haven't had much luck so far. So I was wondering if you guys know which SP is appropriate; that'd be super great! Thanks, guys!
Note: the SP should run on the subscriber database.
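For what it's worth, a hedged sketch of the usual procedures (the publication, server, and database names are illustrative). Note that for a push subscription these normally run at the publisher, in the publication database; the variant created from the subscriber side is a pull subscription (sp_addpullsubscription / sp_addpullsubscription_agent):

EXEC sp_addsubscription
    @publication       = N'MyPublication',
    @subscriber        = N'SubscriberServer',
    @destination_db    = N'SubscriberDB',
    @subscription_type = N'push';

EXEC sp_addpushsubscription_agent
    @publication              = N'MyPublication',
    @subscriber               = N'SubscriberServer',
    @subscriber_db            = N'SubscriberDB',
    @subscriber_security_mode = 1;   -- Windows authentication to the subscriber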
We have our replication environment with a separate SQL 2014 publisher, distributor and subscriber. We have a table named DPSP on 14 databases which is to be published via transactional replication to 1 subscription database.
The source tables are identical in name, schema and clustered index. We have created 14 tables in the subscription database with a suffix to the name (see below):
Each table in the subscription database has its own uniquely named primary key and clustered index:
Below are the settings for one of the published articles:
The same process is repeated for each DPSP table, created in 14 separate publications. Snapshots are generated and subscriptions are created successfully. After some time, the log reader starts to show the following errors in Replication Monitor:
2015-08-12 21:48:24.156 7 transaction(s) with 11 command(s) were delivered.
2015-08-12 21:49:04.233 5 transaction(s) with 7 command(s) were delivered.
2015-08-12 21:50:04.355 No replicated transactions are available.
2015-08-12 21:50:09.378 13 transaction(s) with 17 command(s) were delivered.
2015-08-12 21:51:12.168 No replicated transactions are available.
2015-08-12 21:52:12.306 No replicated transactions are available.
2015-08-12 21:52:32.337
We have a database we are replicating to about 8 SQL Express subscribers from a SQL 2012 SP2 publisher. The size of the database grew too large for the 10GB license limit for SQL Express and now replication refuses to replicate any of our deletions on the publisher to reduce the size of the database. I've come up with a few options below.
1) Drop one of the larger table indexes on the subscriber database to get below the size restriction, permit replication to replicate the deleted records, and then rebuild the index. (I'm not sure how important an index is to this table. Is it merely performance related?)
2) "Upsize" SQL Express to SQL Standard on the affected boxes, allow the deletes to replicate, back up the database, downgrade to SQL Express and restore the database to a new SQL Express instance. This would involve a lot of work on each box; I'd like to avoid it if possible.
I have a merge replication that currently works fine. The Publisher and Distributor are on the same server. I need to change the location of the alternate folder for the snapshot files.
I’ll probably just change it through the GUI, but would I use sp_changedistpublisher or sp_changemergepublication if I were scripting everything?
My real concern is the subscribers. Do I have to ‘tell’ the subscribers where the alt folder has been changed to? Do I just run sp_changemergepullsubscription on the subscribers?
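If it helps, a hedged sketch of the scripted equivalent (run at the publisher, in the publication database; the publication name and path are illustrative). Since the alternate snapshot folder is a property of the merge publication, sp_changemergepublication is the one that applies here rather than sp_changedistpublisher:

EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property    = N'alt_snapshot_folder',
    @value       = N'\\FileServer\ReplSnapshots',
    @force_invalidate_snapshot = 1;   -- a new snapshot is typically required after this change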
I have two machines running SQL Express that I am attempting to use as proxies for a client application as the security admin has deemed that outside clients can not connect directly to the main DB servers.
When I have one of the SQL Express boxes subscribed to my SQL 2005 standard server, everything is happy, but when I attempt to add the second SQL Express box as a subscriber, everything seems to blow up.
OK, I'm set up as a merge with updates and no filter, since I want all of the servers to get all of the data. I'm also on a non-standard port, but I have my aliases set up and I don't have any connection issues. When I run the subscription wizard from Management Studio, it comes back with "Success", but about 2 seconds later I get errors on both subscribers from the Replication Monitor. Oh, I am also set up for push subscriptions and set up to run continuously.
Here is the error message from the rep. monitor.
Error messages:
No subscription is on this publication or article. (Source: MSSQLServer, Error number: 14050) Get help: http://help/14050
The merge process was unable to deliver the snapshot to the Subscriber. If using Web synchronization, the merge process may have been unable to create or write to the message file. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001) Get help: http://help/MSSQL_REPL-2147201001
What I find odd is that it only dies when I try to add a second subscriber, and then it kills both of them. Any help or ideas on where I should look would be most welcome.
I have an existing publication in SQL 2012 with 2 articles, and then I add 2 more articles. After that, when I generate a snapshot, will the snapshot be generated for the 2 new articles only or for all 4 articles?
I remember adding 1 new article to an existing publication with 150 articles, and when I generated the snapshot it was generated only for that 1 article. But I don't remember clearly.
Does it behave differently for small and large numbers of articles?
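A hedged note, not from the original thread: whether the next snapshot covers only the newly added articles generally depends on the publication's immediate_sync setting rather than on the number of articles; with immediate_sync off, the Snapshot Agent typically generates a snapshot only for the new articles. One way to check the current setting (the publication name is illustrative):

-- Run in the publication database and inspect the immediate_sync column of the result
EXEC sp_helppublication @publication = N'MyPublication';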
I'm getting this after upgrading from 2000 to 2005:
Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist.
The only suggestions I've seen are to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion?
Thanks in advance.