SQL 2012 :: Script Failure During Snapshot For Transactional Replication
Apr 9, 2014
I built a number of publications on a SQL Server 2005 box to replicate to a SQL Server 2012 subscriber. All the publications except one are fine. During the snapshot phase of schema script generation I get Script Failed for Table 'dbo.MediaDisplayLibraryFileData'. From the Replication monitor for the Snapshot Agent on the Publication I get, "Column FileData in object MediaDisplayLibraryFileData contains type VarBinaryMax, which is not supported in the target server version, SQL Server 2000." This message makes no sense since the target server version is 2012. I have even checked that the compatibility level was set to 110 before I started the process of setting up replication. How do I resolve this error?
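A possible starting point for diagnosing this (the publication name below is a placeholder): run in the publication database on the publisher, sp_helppublication reports the backward-compatibility level the publication is using and sp_helpsubscription shows how each subscriber is registered, which is where the stray "SQL Server 2000" target version may show up.

-- Placeholders: replace N'MyPublication' with the real publication name
EXEC sp_helppublication @publication = N'MyPublication';
EXEC sp_helpsubscription @publication = N'MyPublication';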
I've set up transactional replication pushed from SQL 7 to SQL 2000. All works fine except I have noticed that the snapshot agent is running once a week and storing a lot of data I don't really need in C:\mssql7\repldata\unc\<replicationname>\<datestamp>. This is gradually building up and as C: is a smallish partition it'll only survive for another week.
My questions are: 1) Can the Snapshot schedule be disabled? I'm under the impression that it's only required when a new subscriber comes along which is never! If I do need a new subscriber I could just manually produce another snapshot.
2) Can I delete the previous snapshot folders under ReplData? On a development server I renamed one of the folders and the ground didn't open up below my feet.
3) What effect would changing the working folder used by the Distribution Agent have? The Distributor is on the Publisher (which, as I mentioned, is pushing). I've tried changing the working folder on a development server; I manually ran a snapshot, whose BCP files all ended up in my new location, and the transactional replication didn't seem to care.
I've looked in a few places for a definitive answer and I'm not having much luck, so any help (preferably before Saturday) would be greatly appreciated.
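For the working-folder part of question 3, a sketch only (the publisher name and path are placeholders) of pointing the distributor's default working directory at a drive with more space; it is run at the distributor, and the old date-stamped folders under ReplData are generally only needed to initialize new subscribers.

-- Run at the distributor; publisher name and path are placeholders
EXEC sp_changedistpublisher
     @publisher = N'MYPUBLISHER',
     @property  = N'working_directory',
     @value     = N'D:\ReplData';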
I am in the process of setting up replication between two SQL boxes - the first being the production SQL Server and the second being a secondary server which is to be used as part of a web portal solution. (In fact there are two pairs of SQL boxes - SQL 7 and SQL 2k, one of each on the LAN and one of each at the hosting site.)
We need to have near realtime replication, so transactional replication is the desired mechanism. The trouble is that not all tables have primary keys defined. Oh yes, and the main database is 20G in size and has >900 user tables! :-0 (The other databases are much better behaved!)
When I set up transactional replication, I am allowed only to include tables which have PKs defined. AFAIK, the initial transactional replication snapshot will also only include these same tables.
If I set up snapshot replication (separately), I can include all the tables in the database. However, I cannot then replicate in real time.
Can I combine the two replication schemes to deliver updates to the same target database: - transactional replication delivering realtime updates to the tables with PKs during the day and - snapshot replication updating all the tables once per 24hrs at night?
Or is there a better way of doing this?
I am not sure whether I can modify the existing schemas, as some of the databases are 'maintained' by an external provider. Even if I could, if I had to add a column to have a PK, I would potentially be adding to my diskspace requirement rather significantly...
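For sizing the problem, a quick way to list the user tables that have no primary key and so cannot be transactional articles; it only reads the INFORMATION_SCHEMA views, so it should work on both SQL 7 and SQL 2000.

SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES AS t
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND NOT EXISTS (SELECT 1
                  FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c
                  WHERE c.CONSTRAINT_TYPE = 'PRIMARY KEY'
                    AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
                    AND c.TABLE_NAME = t.TABLE_NAME)
ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME;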
Why would you schedule a Snapshot Agent to run daily, weekly, monthly, etc. for Transactional replication when the transactions are replicated once the change is committed to the Publication database?
Does this help to replicate DDL against the Subscriber?
When you create transactional replication, is there a way to disable the Snapshot Agent schedule? I believe there is a way to disable it when you create the publication.
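One low-tech option, sketched below with a placeholder job name (the real Snapshot Agent job name typically follows the pattern Publisher-PublicationDB-Publication-<n>): leave the agent in place but disable its schedule after the initial snapshot has been delivered, then re-enable it or run it manually when a new subscriber needs initializing.

-- Run at the distributor; the job name is a placeholder
EXEC msdb.dbo.sp_update_job
     @job_name = N'MYPUBLISHER-MyPubDB-MyPublication-1',
     @enabled  = 0;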
We are running transactional replication in a high volume environment and we are starting to see high IO on the distribution database when the distribution cleanup runs. We only build the snapshot once a week, and I was wondering whether rebuilding the snapshot more often would reduce the size of the distribution database and the amount of IO being generated during the cleanup.
Hi, I have transactional replication set up with SQL 2000 on a W2K3 cluster using updatable push subscribers. While setting up replication, we chose the default location for the snapshot folder, which resides on a non-clustered drive. Is there a way to change this location without disturbing the current replication setup? I looked at the 'alternate snapshot location' solution, but it requires snapshot re-initialization. I am trying to do this with minimal effort and downtime. Thanks, np70
I have transactional replication running. I would like the Snapshot Agent to run regularly (once per day, probably at 23:00) so that the target data are completely refreshed and there is little possibility of the target getting out of sync.
The Snapshot Agent's job is kicking off OK, but reports that the "snapshot was not generated because no subscriptions needed initialization". I know I can initialise the subscriptions manually - is there a way to do this so that the initialisation happens just before the snapshot job launches?
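One approach, sketched with a placeholder publication name, is to add a T-SQL job step ahead of the Snapshot Agent step that marks the subscriptions for reinitialization, so the snapshot run that follows actually has something to generate; it is run in the publication database at the publisher.

EXEC sp_reinitsubscription
     @publication = N'MyPublication',
     @article     = N'all',
     @subscriber  = N'all';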
SQL Server 2005 Books Online provides an article entitled, "Initializing a Transactional Subscription without a Snapshot". Is it possible in SQL Server 2000 to initialize transactional replication without a snapshot?
So far, I have been unable to find a similar procedure mentioned in the SQL 2000 Books Online. I was able to follow the 2005 procedure using SQL 2000 until I got to the step that says to enable the "Allow initialization from backup files" option on the "Subscription Options" tab of the "Publication Properties" dialog. But that option does not appear in the SQL 2000 version of the specified dialog box.
I have a pretty big (350 gb) OLTP database that I want to replicate in its entirety. I'm concerned about the impact of taking a snapshot of it (it is processing at some level pretty much 24x7). I know on SQL2005 there is the option to initialize from backup, but unfortunately we won't be on 2005 in time.
I'm thinking of doing something like this:
1) Set up the distributor, publication, and subscription.
2) Turn off the distribution agent.
3) Set the publisher to "sync with backup".
4) Back up the publisher database, full then log.
5) Truncate tables MSrepl_transactions and MSrepl_commands in the distribution db (I don't have any other replication going on).
6) Turn off "sync with backup".
7) Restore the full and tran log backups to the new subscriber db.
8) Create the subscriber stored procs in the subscriber.
9) Start up the distribution agent.
I'm looking for opinions on whether it's worth going this route to avoid taking the snapshot. Data integrity is the number one priority -- if I have to do a snapshot to ensure that, I will do it.
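For reference, the "sync with backup" step of the plan looks roughly like this (the database name is a placeholder); it is run at the publisher and later turned back off with @value = N'false' once the backups used to seed the subscriber have been taken.

EXEC sp_replicationdboption
     @dbname  = N'MyPublishedDB',
     @optname = N'sync with backup',
     @value   = N'true';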
We have a large database with a small number of large tables in it (and a larger number of SMALLER tables), and it is a publisher for a transactional replication scenario. When I create a snapshot to initialize a new subscription, I notice with the larger tables that sometimes it generates multiple files in the snapshot folder, usually in multiples of 16, and numbers them like this:
With other tables, I'll get just one LARGE snapshot file, named:
MyOtherTable_4.bcp
In the latter case, the file can be very large (most recent is 38GB).
In both cases, the subscription will eventually be initialized, but the smaller files will generate separate log entries every few minutes in the Replication Monitor, showing 'Bulk copied data into 'MyTable' (34231221 rows)', whereas the larger table will generate only ONE log entry, showing 'Bulk copying data into table 'MyOtherTable'', and it may take a couple of hours before there is anything else showing...except for an entry saying, 'The process is running and is waiting for a response from the server.'
My question is: what would be the difference between the two tables that would result in one generating MULTIPLE snapshot files, the other only a single, much larger one? The only difference I can see in the table definition is that the one generating multiple files has a clustered index, whereas the others do not.
We have set up transactional replication between 2 databases on SQL Server 2000 SP3a (~70GB), using a concurrent snapshot (to prevent locking out of the live database) to initialise the data and a pull subscription from the second database.
From analysing the msdistribution_history table in the distribution database on the subscriber, it appears that the snapshot is being applied in a continuous loop to the subscriber database. Viewing the comments column in the msdistribution_history table, we can see the following sequence of events occurring:
- Initialising
- Applied script 'snapshot.pre'
- Then it applies all the schema files (.sch)
- Then it applies all the index files (.idx)
- Then it bulk copies the data in (bcp)
- Then it creates the primary keys
- Then it applies all the trigger files (.trg)
- Then it applies all the referential integrity files (.dri)
These all complete successfully but then the process kicks off again immediately after reapplying the snapshot. We are unaware of any settings that may be causing this.
Any help on what maybe causing this would be much appreciated.
There is a SQL Server 2008 R2 SP3 Clustered Instance that has Transactional Replication. It is by no means a large replication setup in terms of data/article count. SQL Server was recently patched to SP3 and is current on Windows 2008 R2 Patches.
When I added a new article to replication (via 2014 SSMS GUI) it seems to add everything correctly (replication tables/procs show the new article as part of the publication). The Publication is set to allow the snapshot to generate for just new articles (setting immediate_sync & allow_anonymous to false).
When the snapshot agent is run, it runs without error and claims to have generated a snapshot of 1 article. However the snapshot folder only contains a folder for the instance (that does have the modified time of the snapshot agent execution) and none of the regular bcp/schema files.
The tables never make it to the subscribers and replication continues on without error for the existing articles. No agents produce any errors and running the snapshot agent w/ verbose output provides no errors or insight into any possible issues.
I have tried:
- Dropping and re-adding the article in question
- Setting up a new snapshot folder
- Validating all the settings and configurations
I'm hesitant to reinitialize a subscriber since I am not confident a snapshot can be generated. I'm also wondering if this is related to the SP3 upgrade; every few months new articles are added to the publication, and this is the first time it has been done since the upgrade to SP3.
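One more thing worth verifying, sketched with a placeholder publication name: sp_helppublication (run in the publication database) should show immediate_sync and allow_anonymous both at 0 for a snapshot that covers only the new article; if they are still 1, one commonly suggested sequence is to turn off allow_anonymous first and then immediate_sync.

EXEC sp_helppublication @publication = N'MyPublication';

EXEC sp_changepublication @publication = N'MyPublication',
     @property = N'allow_anonymous', @value = N'false';
EXEC sp_changepublication @publication = N'MyPublication',
     @property = N'immediate_sync',  @value = N'false';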
I have taken over a transactional replication setup that is being used for fault tolerance (I know, I know...). The scenario I am concerned with is where the publisher goes down due to failure, so we need to point our application at the subscriber and start updating the data there. We are not using the immediate updating option. How do I go about re-syncing the publisher with the data that has been added to, or changed on, the subscriber when the publisher comes back online? Thanks, TGru
We are planning to set up HA using either AAG or FCI. On my production environment (P), I have transactional replication configured for two of the three databases. These data get replicated to another server (C1) hosted in the cloud, with a local distributor at P.
If I configure AAG/FCI, how do I handle failover? I wanted to set up P as the AAG primary with two replicas, S1 and S2. P->S1 will be synchronous while P->S2 will be asynchronous. In case P goes down, how will the replication fail over to S1? In case both P and S1 go down, how do I fail over to S2 with the replication?
So, Microsoft decided that they were deprecating Transactional Replication with Updatable subscriptions. In that case, you have 2 options (if I am correct): pay for Enterprise (if you are not already on it) and use peer-to-peer, or use bidirectional transactional replication, which is basically setting up transactional from db1 to db2 and also transactional from db2 to db1.
The issue I see in both cases is conflict resolution. With updatable subscriptions, you could specify how to handle the conflict. With either of these 2 options (from what I can tell) you cannot allow the engine to handle this for you.
Any thoughts? Seems like a slap in the face to those who have been using MS for years and a damn good reason for companies that rely on updatable subscriptions to not upgrade to 2012.
I have a publication on Sql Server 2012 that uses transactional replication to 7 subscribers (these are a mix of Sql Server 2008R2 and Sql Server 2012). Last night I scheduled the Snapshot job to run to "re-publish" the database to the subscribers. I had a few new tables to push down. Unfortunately the snapshot job became the deadlock victim. Now updates to the publisher are not being sent to the subscribers.
Short of rerunning the snapshot job, is there a way to repair the replication so the updates to the publisher are pushed to the subscribers? The "re-publish" can only be run overnight when there is very little impact to users.
I have found some articles with no publication in our transactional replication.
For example, running this:
select p.publication, a.publication_id, a.article
from dbo.MSArticles as a
left outer join dbo.MSpublications as p
    on a.publication_id = p.publication_id
shows this:
publication         publication_id   article
NULL                1                org_Community
NULL                3                org_Community
Purchasing to EDW   5                org_Community
NULL                1                org_Division
NULL                3                org_Division
Purchasing to EDW   5                org_Division
How can I get rid of the articles that are not part of a publication?
I can't use sp_droparticle because it requires a publication which these articles do not have.
I am using SQL 2012 SE and implementing transactional replication. I need to insert the rows from publisher database tables to new tables, drop the old tables and rename the new tables with the old table names.
For example:
Publisher database tables that are being replicated:
Table1 Table2 Table3
and I am going to create new tables (Table1_new, Table2_new, Table3_new) in the publisher database.
Then drop the constraints from, and drop, these old tables (does this require the articles to be removed from replication?):
Table1 Table2 Table3
Rename
Table1_new to Table1, Table2_new to Table2, Table3_new to Table3
Does this require replication to set up from scratch or add the three articles only to replication? Is there a way this can be done without pausing or reinitializing replication or without removing articles and adding them back?
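If it turns out the articles do have to come out of the publication for the swap, a rough sketch (publication, subscriber, and database names are placeholders; only Table1 is shown) of dropping just the affected article, doing the swap, and adding it back so that only that article needs a new snapshot:

EXEC sp_dropsubscription @publication = N'MyPublication', @article = N'Table1',
     @subscriber = N'all';
EXEC sp_droparticle @publication = N'MyPublication', @article = N'Table1',
     @force_invalidate_snapshot = 1;

-- ... insert rows into Table1_new, drop Table1, rename Table1_new to Table1 ...

EXEC sp_addarticle @publication = N'MyPublication', @article = N'Table1',
     @source_object = N'Table1';
EXEC sp_addsubscription @publication = N'MyPublication', @article = N'Table1',
     @subscriber = N'MYSUBSCRIBER', @destination_db = N'MySubDB';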
I have an existing publication in sql 2012 with 2 articles, and then I add 2 more articles. After that when I generate a snapshot, will the snapshot be generated for 2 new articles only or for all 4 articles?
I remember adding 1 new article to an existing publication with 150 articles, and when I generated the snapshot, it was generated only for that 1 article. But I don't remember clearly.
Does it behave differently for small and large number of articles?
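As far as I understand it, the deciding factor is the publication's immediate_sync setting rather than the article count: with immediate_sync off, the Snapshot Agent only generates files for articles that have not yet been delivered, while with it on the full snapshot is regenerated for every article. A quick check (publication name is a placeholder):

EXEC sp_helppublication @publication = N'MyPublication';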
We have a database which is (a subset of tables are) replicated to another via transactional replication. Whilst most changes made at the published database reach the subscriber within a matter of seconds, we have a SQL Agent job which performs a calculation in the published database and then immediately exports data from the subscriber using log shipping. The result is that the calculated changes do not make it through to the exported transaction logs in time.
Is there a way to manually "refresh" the subscriber databases using T-SQL?
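Two hedged options (publication and job names are placeholders): post a tracer token (SQL 2005 and later) to measure how far behind the subscriber is before kicking off the export, or start the Distribution Agent job on demand so any queued commands are applied immediately rather than on its schedule.

-- Run in the publication database at the publisher
EXEC sp_posttracertoken @publication = N'MyPublication';

-- Run at the distributor; the job name is a placeholder
EXEC msdb.dbo.sp_start_job @job_name = N'MYPUB-MyPubDB-MyPublication-MYSUBSCRIBER-1';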
We have a vendor that is exposing our database via a High Availability replica. They are geographically far away from us though so we would like to extract portions of the database over to our side for our reporting /warehousing purposes. I was curious if it is possible to setup snapshot replication on a high availability group?
I have a transactional replication environment that creates subscribers on another server as a staging area for an ETL process to a data warehouse application on a 3rd server, which is the report repository. Currently the ETL process runs every 10 minutes, performs its function across approx 150+ subscriber databases, and consolidates the data to the data warehouse.
I have an SLA of 2 minutes. I'd like to rework the ETL process (which runs as an SSIS job at the moment) to be specific to a single database and fire that one ETL process when changes have been applied to that subscriber database only. Of these 150+ databases, generally only about 8-10 are updating the subscriber at any given time per Repl Monitor. I'm thinking that if I only have a few transactions to apply to a single db, the ETL would run in seconds, dynamically, as the subscriber is updated.
The issue is how to fire the ETL process upon completion of updates to the subscriber DB. I'm thinking of using sp_start_job, passing the DBID, to update the warehouse, but I'm unsure whether this is possible and, if so, where to trigger it.
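Since sp_start_job cannot take custom parameters, one rough sketch (all object and job names below are hypothetical) is to have a trigger on a replicated table in each subscriber database write the database name to a small queue table and then start the per-database ETL job, which reads the queue; the error from starting an already-running job is swallowed so the Distribution Agent's apply does not fail.

CREATE TABLE dbo.EtlQueue
(
    DatabaseName sysname  NOT NULL,
    QueuedAt     datetime NOT NULL DEFAULT (GETDATE())
);
GO
CREATE TRIGGER dbo.trg_QueueEtl
ON dbo.SomeReplicatedTable          -- hypothetical table the Distribution Agent updates
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.EtlQueue (DatabaseName) VALUES (DB_NAME());
    BEGIN TRY
        EXEC msdb.dbo.sp_start_job @job_name = N'ETL - Single Subscriber';
    END TRY
    BEGIN CATCH
        PRINT 'ETL job is probably already running';   -- do not fail the replication apply
    END CATCH;
END;
GO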
I am very new to SQL coming from a SAS background.
I work at a College where I am trying to create a table that basically stores application data so that I can record how many applications we had on a set day.
For example, I want to run it bi-weekly, say, and create a date/time stamp for each row; then the next time it runs it appends to the original table but with the new date/time stamp.
I have managed to create the table and the date/time stamp but I'm not sure how to do the next stage.
This is not the best practice as the table will get "big" quite quickly (although we're only dealing in the hundreds in terms of rows). Ideally I'd have a more transactional database that only appends a record if there is a change i.e. the status of the application has changed from "Received" to "Interview" or similar.
I'm not sure how to copy and paste the code dynamically in this forum but this is what I have got so far (see below).
SELECT CC.cc_name AS Academy,
       SUBSTRING(CC2.cc_reference, 1, 4) AS Cost_Centre,
       MD2.m_reference AS Course,
       MD2.m_name AS Course_Name,
       CAST(MD2.m_start AS DATE) AS Start_Date,
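For the append stage, a minimal sketch assuming the query above is wrapped in a view called dbo.vw_CurrentApplications and the history table is dbo.ApplicationSnapshot with the same columns plus a SnapshotDate column (both names are hypothetical); each scheduled run stamps the new rows with the run date and adds them to the existing table.

INSERT INTO dbo.ApplicationSnapshot
    (Academy, Cost_Centre, Course, Course_Name, Start_Date, SnapshotDate)
SELECT Academy, Cost_Centre, Course, Course_Name, Start_Date, GETDATE()
FROM dbo.vw_CurrentApplications;   -- hypothetical view wrapping the SELECT above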
I'm interested in combining the Peer-to-Peer Transactional Replication and Standard Transactional Replication to provide a scale out solution of SQL Server 2005. The condition is as follows:
We may have 10 SQL Server 2005 (1 Publisher + 9 Subscriber) running transactional replication in the production environment and allow updates in subscribers. To offload the loading of the publisher, we plan to have 2 Publisher (PubNode1 and PubNode2) using Peer-to-Peer Transaction Replication and the rest 8 subscribers will be divided into 2 groups. The subscribers 1-4 (SubNode1, SubNode2, SubNode3, and SubNode4) will be set to be standard transactional replication subscribers of PubNode1, and the rest 4 subscribers (SubNode5, ..., SubNode8) will be set to be standard transactional replication subscribers of PubNode2.
Is it possible to set up the above 2 Publisher + 8 Subscriber topology? Also, could we configure the 8 subscribers with updatable subscriptions so that every node is updatable?
We do not plan to set all 10 nodes up using Peer-to-Peer Transactional Replication, as it would be necessary to make sure all n*(n-1)/2 (i.e. 45) peer-to-peer connections are reliable. It seems that the maintenance cost is high if the servers are not in a LAN, and the topology is very tightly coupled. So we prefer to divide the 10 nodes into 2 groups and reduce the cost for each node of maintaining connections to all the other sites.
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore. When I try to set up the replication through the script, it creates the publication with the following error message:
Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.
It seems the previous replication had set up these system views, SYNC_FCR To GPRPTS_GL00100. I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.
My question is: how do I remove these system views, or how do I make the replication work without using these views or create new views? Why is it creating those system views in the first place? I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance. Regards, Aravin Rajendra.
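If those leftover 'SYNC_FCR To ...' objects really are just views sitting in the restored publication database, one hedged option is to drop them explicitly before re-running the replication script; the names contain spaces, so they have to be bracket-quoted (repeat for each name reported by the error).

IF OBJECTPROPERTY(OBJECT_ID(N'[SYNC_FCR To GPRPTS_GL00100]'), 'IsView') = 1
    DROP VIEW [SYNC_FCR To GPRPTS_GL00100];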
I have been researching the proper steps and sequence to follow to completely remove SQL Server 2012 Transactional Replication. I have read articles about using SSMS as well as using replication stored procedures, some of which use SQLCMD or just regular TSQL executed in SSMS. I have also read articles where people said all you really need to do is connect to the Publisher instance, find the publication you want to remove, and choose "Delete", and everything will be taken care of behind the scenes. I have three SQL servers that participate in transactional replication: SQL-P (publisher),
SQL-D (distributor) and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just connecting to the publisher and clicking Delete on the publication? I want everything gone, including any metadata, system tables, the distribution db, and any other replication objects created during the initial configuration.
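For reference, a hedged outline of the usual teardown order (publication, database, and server names are placeholders), working from the publication back to the distributor; as far as I can tell, the GUI Delete on the publication only covers the publisher-side pieces.

-- 1) At the publisher (in the published database): drop subscriptions, then the publication
EXEC sp_dropsubscription @publication = N'MyPublication', @article = N'all',
     @subscriber = N'SQL-S';
EXEC sp_droppublication @publication = N'MyPublication';

-- 2) At the subscriber: remove leftover subscription metadata
EXEC sp_subscription_cleanup @publisher = N'SQL-P', @publisher_db = N'MyPubDB';

-- 3) At the publisher: strip replication objects from the published database
EXEC sp_removedbreplication @dbname = N'MyPubDB';

-- 4) At the distributor: disable publishing and drop the distribution database
EXEC sp_dropdistpublisher @publisher = N'SQL-P';
EXEC sp_dropdistributiondb @database = N'distribution';
EXEC sp_dropdistributor;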