Replication Without Replication? Thoughts Welcome
Apr 24, 2007
Hi,
I have a particularly difficult problem.
I have a SQL Server in an internal network and need to "replicate" to an identical server and database in a DMZ. The DMZ can only receive files sent by a custom component and no port(s) can be opened in the firewall or standard FTP used.
I also need to minimise traffic (i.e. sending whole tables is not an option, as some contain millions of rows), replicate approximately once hourly, and allow row-level locking only (as opposed to exclusive database access or table locking).
Any thoughts / experiences greatly appreciated; otherwise I'm looking at putting triggers on all the tables to monitor changes and generate SQL statements for execution against the DMZ server.
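For the trigger-based fallback, something along these lines could capture changes into a staging table whose contents are exported hourly by the custom file-transfer component. This is a minimal sketch only: the Orders table, its columns and the ChangeLog table are hypothetical, and nvarchar(max) assumes SQL Server 2005 (use nvarchar(4000) or text on SQL 2000).

    -- Hypothetical staging table: one row per captured change, exported to a file hourly
    CREATE TABLE dbo.ChangeLog (
        ChangeID   int IDENTITY(1,1) PRIMARY KEY,
        SqlCommand nvarchar(max) NOT NULL,
        CapturedAt datetime NOT NULL DEFAULT GETDATE()
    );
    GO
    -- Hypothetical trigger on an Orders table: builds an INSERT statement per new row
    -- so the exported file can simply be replayed against the DMZ copy
    CREATE TRIGGER trg_Orders_Capture ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT dbo.ChangeLog (SqlCommand)
        SELECT 'INSERT dbo.Orders (OrderID, CustomerID, Amount) VALUES ('
               + CAST(i.OrderID AS varchar(20)) + ', '
               + CAST(i.CustomerID AS varchar(20)) + ', '
               + CAST(i.Amount AS varchar(30)) + ');'
        FROM inserted AS i;
    END

UPDATE and DELETE triggers would follow the same pattern, and only the changed rows ever cross the firewall.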
View 4 Replies
Jun 15, 2007
I'm getting this after upgrading from 2000 to 2005: Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist. The only suggestions I've seen are to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion? Thanks in advance.
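If scripting is the route taken, a rough sketch of dropping and re-adding one subscription follows; the publication, subscriber and database names are placeholders, and the output of sp_helpsubscription would drive a loop over all of them per published database.

    -- List the existing subscriptions in this published database
    EXEC sp_helpsubscription;
    -- Drop and re-create one subscription (repeat per publication/subscriber)
    EXEC sp_dropsubscription
         @publication = N'MyPublication', @article = N'all',
         @subscriber = N'SUBSCRIBER1', @destination_db = N'SubscriberDb';
    EXEC sp_addsubscription
         @publication = N'MyPublication', @article = N'all',
         @subscriber = N'SUBSCRIBER1', @destination_db = N'SubscriberDb',
         @sync_type = N'automatic';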
View 3 Replies
Sep 13, 2007
Hi,
I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore.
When I try to set up the replication through the script, it creates the publication with the following error message:
Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.
It seems the previous replication set up these system views (SYNC_FCR To GPRPTS_GL00100). I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the synch. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.
My question is: how do I remove these system views, how do I make the replication work without using these views, or how do I create new views? Why is it creating those system views in the first place?
I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.
Regards,
Aravin Rajendra.
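As a rough sketch (verify every name before dropping anything), the leftover synchronization views in the restored publication database can be listed and removed so the publication script can re-create them:

    -- List views left behind by the previous publication's concurrent snapshot
    SELECT name FROM sysobjects WHERE xtype = 'V' AND name LIKE 'SYNC%';
    -- Drop the stale view named in the error message (bracketed because of the spaces)
    DROP VIEW [SYNC_FCR To GPRPTS_GL00100];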
View 2 Replies
Jan 17, 2002
Hi,
My production box is running SQL 7.0 with merge replication. I want to add one more table to the publication, and also add one more column to an existing replicated table. Can anybody guide me on how to do this? This is very urgent.
Regards
Don
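A rough sketch with placeholder names follows. sp_addmergearticle adds a new table to an existing merge publication on SQL 7.0; sp_repladdcolumn, however, only exists from SQL Server 2000 onward, so on 7.0 a schema change to a published table generally means altering the table and re-generating the snapshot.

    -- Add a new table to the merge publication (placeholder names)
    EXEC sp_addmergearticle
         @publication   = N'MyMergePub',
         @article       = N'NewTable',
         @source_object = N'NewTable';
    -- SQL Server 2000 and later only: add a column to an already-published table
    EXEC sp_repladdcolumn
         @source_object = N'ExistingTable',
         @column        = N'NewColumn',
         @typetext      = N'int NULL';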
View 1 Replies
Aug 22, 2007
Hello,
I have this problem on a Production database.
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
Thank you.
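One commonly cited workaround for a non-replicated database whose log is pinned by a phantom replicated transaction is sketched below; use it with care, and note the database and log file names are placeholders.

    -- Mark all pending transactions as already distributed
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1;
    -- Remove the leftover replication metadata from the database
    EXEC sp_removedbreplication 'MyDatabase';
    -- With Simple recovery, a checkpoint should then allow the log to be shrunk
    CHECKPOINT;
    DBCC SHRINKFILE ('MyDatabase_log', 1024);   -- target size in MB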
View 4 Replies
Mar 6, 2007
Hello,
I'm getting the following error message when I try to add a row using a stored procedure:
"The identity range managed by replication is full and must be updated by a replication agent."
I read up on the subject and have tried the following solutions according to MSDN without any luck (http://support.Microsoft.com/kb/304706). sp_adjustpublisheridentityrange (http://msdn2.microsoft.com/en-us/library/aa239401(SQL.80).aspx) has no effect.
For testing, I've reloaded everything from scratch, created the publications by running the generated SQL scripts, created replication snapshots and started the agents.
I've checked the current identity values in the Agent table:
DBCC CHECKIDENT ('Agent', NORESEED)
Checking identity information: current identity value '18606', current column value '18606'.
I checked the table to make sure there will be no conflicts with the primary key:
SELECT AgentID FROM Agent ORDER BY AgentID DESC
18603 is the largest AgentID in the table.
Using the Table Article Properties in the Publication Properties dialog, I can see values of:
Range Size at Publisher: 100,000
Range Size at Subscribers: 100
New range @ percentage: 80
In my mind this means that the publisher will assign a new range when the current identity value gets 80% of the way through the range (80,000 values in)? The identity range for this table cannot be exhausted! I'm not sure what to try next.
Please! Any insight will be of great help.
Regards,
Bm
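A rough diagnostic sketch for this situation (publication name and reseed value are placeholders): look at the replication-managed CHECK constraint on the table, re-run the range adjustment against the specific publication, and, if the current identity has drifted outside the allowed range, reseed to a value inside it as a manual workaround.

    -- Look for the replication identity-range CHECK constraint and its bounds
    EXEC sp_helpconstraint 'dbo.Agent';
    -- Ask the publisher to assign a fresh range for this article
    EXEC sp_adjustpublisheridentityrange
         @publication = N'MyPublication',
         @table_name  = N'Agent';
    -- Manual fallback: reseed to a value inside the range allowed by the CHECK constraint
    DBCC CHECKIDENT ('Agent', RESEED, 18606);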
View 1 Replies
May 26, 2015
What is the main difference between snapshot, transactional and merge replication?
View 5 Replies
Jul 28, 2006
Hi,
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database it works fine. I can then close the app, reopen it and run the pull subscription, and it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without firstly closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.Net 2 code to access my SQL Express data in the app, and closing all connections etc.
Any advice appreciated !
Thanks
Ronan
View 2 Replies
Jun 28, 2007
Hi all,
I have recently set up transactional replication in MS SQL 2000. After setting up the replication, the client's TempDB grew by almost 60GB. Now the client is blaming me for the TempDB growth, saying that it is because of the replication being set up. I tried to convince them otherwise, but they are not satisfied yet. Can anybody please tell me whether replication causes tempdb to grow, and if so, how? Can you suggest any good link for getting to know the internal workings of SQL Server replication?
Thanks in advance
Jacx
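A couple of quick checks that at least show where the space in tempdb is going on SQL 2000 (rough sketch):

    USE tempdb;
    EXEC sp_spaceused;                                      -- overall data and log usage inside tempdb
    SELECT name, size * 8 / 1024 AS size_mb FROM sysfiles;  -- per-file sizes, converted from 8KB pages to MB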
View 3 Replies
Aug 30, 2007
Hi all,
I know that adding a column using ALTER TABLE automatically causes SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is being used for merge replication, and I don't want this column to be replicated. Re-initialising the subscriptions is not an option. Help would be appreciated.
I am using SQLSERVER 2005 (SP1).
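One approach that is often suggested for this, sketched below with placeholder names and worth testing on a copy first, is to switch off DDL replication for the publication, add the column, and switch it back on, leaving the new column outside the article's vertical partition:

    EXEC sp_changemergepublication
         @publication = N'MyMergePub', @property = N'replicate_ddl', @value = N'0',
         @force_invalidate_snapshot = 0, @force_reinit_subscription = 0;
    ALTER TABLE dbo.MyArticleTable ADD LocalOnlyColumn int NULL;
    EXEC sp_changemergepublication
         @publication = N'MyMergePub', @property = N'replicate_ddl', @value = N'1',
         @force_invalidate_snapshot = 0, @force_reinit_subscription = 0;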
View 3 Replies
Sep 2, 2015
I have been researching the proper steps and sequence to follow to completely remove SQL Server 2012 transactional replication. I have read articles about using SSMS as well as replication stored procedures; some procedures use SQLCMD and some are just regular T-SQL executed in SSMS. I have also read articles where people said all you really need to do is connect to the publisher instance, find the publication you want to remove and choose "Delete", and everything will be taken care of behind the scenes. I have three SQL servers that participate in transactional replication: SQL-P (publisher), SQL-D (distributor) and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just connecting to the publisher and clicking delete on the publication? I want everything gone, including any metadata, system tables, the distribution db and any other replication objects created during the initial configuration.
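A rough order of tear-down across the three roles is sketched below (publication, database and server names are placeholders; each step runs on the server noted):

    -- 1) On SQL-P, in the published database: drop subscriptions, the publication, and the publish flag
    EXEC sp_dropsubscription @publication = N'MyPub', @article = N'all', @subscriber = N'all';
    EXEC sp_droppublication  @publication = N'MyPub';
    EXEC sp_replicationdboption @dbname = N'PublishedDb', @optname = N'publish', @value = N'false';
    -- 2) On SQL-S, in the subscription database: remove leftover subscription metadata
    EXEC sp_subscription_cleanup @publisher = N'SQL-P', @publisher_db = N'PublishedDb';
    -- 3) On SQL-D: drop the publisher registration, the distribution database, and the distributor
    EXEC sp_dropdistpublisher  @publisher = N'SQL-P';
    EXEC sp_dropdistributiondb @database  = N'distribution';
    EXEC sp_dropdistributor;   -- @no_checks = 1 can force it if metadata lingers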
View 6 Replies
Apr 17, 2007
Hello everyone,
I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps a filtering criteria can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).
The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criteria for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report which relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify Programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial of course, but is sufficient to give you an idea of what the current thought process is.
I assume that this is a requirement that many of you may have encountered in the past, and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?
Any feedback would be appreciated.
Regards!
Anthony
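For reference, the kind of row filter the programmers would be amending maps onto a transactional article's filter clause; a rough sketch with placeholder publication, article and column names:

    EXEC sp_articlefilter
         @publication   = N'ReportingPub',
         @article       = N'T',
         @filter_name   = N'FLTR_T',
         @filter_clause = N'(ColX = ''WOOD'' AND ColY = ''MAHOGANY'') OR (ColX = ''METAL'' AND ColZ IN (12, 24, 36))';
    EXEC sp_articleview
         @publication   = N'ReportingPub',
         @article       = N'T',
         @view_name     = N'VIEW_T',
         @filter_clause = N'(ColX = ''WOOD'' AND ColY = ''MAHOGANY'') OR (ColX = ''METAL'' AND ColZ IN (12, 24, 36))';

Every new report still means editing that single clause, which is the maintainability concern raised above.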
View 11 Replies
Oct 9, 2007
I am working on bringing our disaster recovery site up to be a live site. Currently we replicate to one of our servers (server B) with merge replication (from server A). Server A also does one-way transactional replication from some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merge updates also replicate down the transactional path?
Example...
Server B updates a row and merges it to Server A. Will this update then replicate (via transactional) to Server C?
thanks...
View 5 Replies
Nov 9, 2006
I have a weird situation! I set up transactional replication on one of my development servers (SQL 2000 Dev Edition with SP4). It was running fine without any issues and, all of a sudden, I noticed that in my Replication Monitor tab under Publisher, where I usually see the publication, it is empty now.
I do see the Snapshot Agent, Log Reader and Distribution Agent under my agents inside the Replication Monitor. But it was useful to see all 3 agents in one window under the publisher before. What happened? Is there any way to get that back inside the monitor? Has someone encountered this situation before? Please advise.
After that I tried to create a new set of replication on a different database on the same server and I don't see those either under Replication Monitor - Publishers. All it says is (No Items).
I would appreciate any help to correct this issue. Thanks in advance.
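A quick way to confirm the publications themselves still exist even though the monitor no longer lists them (rough sketch, assuming the default distribution database name):

    -- On the distributor: publications registered with the distribution database
    USE distribution;
    SELECT publisher_db, publication, publication_type FROM dbo.MSpublications;
    -- And confirm which databases are still marked as published
    EXEC sp_helpreplicationdboption;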
View 2 Replies
Jan 30, 2007
I have set up transactional replication with everything on one box. Later (two or three weeks later), Replication Monitor shows a red X under my publishers (publisher is disconnected). This is SQL 2005.
Does anyone know how to fix this problem?
Thanks,
Frank
View 1 Replies
Dec 1, 2006
Hello,
I'm interested in combining the Peer-to-Peer Transactional Replication and Standard Transactional Replication to provide a scale out solution of SQL Server 2005. The condition is as follows:
We may have 10 SQL Server 2005 (1 Publisher + 9 Subscriber) running transactional replication in the production environment and allow updates in subscribers. To offload the loading of the publisher, we plan to have 2 Publisher (PubNode1 and PubNode2) using Peer-to-Peer Transaction Replication and the rest 8 subscribers will be divided into 2 groups. The subscribers 1-4 (SubNode1, SubNode2, SubNode3, and SubNode4) will be set to be standard transactional replication subscribers of PubNode1, and the rest 4 subscribers (SubNode5, ..., SubNode8) will be set to be standard transactional replication subscribers of PubNode2.
Is it possible to set up the above 2 Publisher + 8 Subscriber topology?
Also, could we set the 8 subscribers with updatable subscriptions to achieve each node is updatable?
We do not plan to set all 10 nodes up using Peer-to-Peer Transactional Replication, as it would be necessary to make sure n*(n-1)/2 (i.e. 45) peer-to-peer connections are reliable. It seems that the maintenance cost is high if the servers are not in a LAN, and the topology is very tightly coupled. So we prefer to divide the 10 nodes into 2 groups and reduce the cost of each node maintaining connections to all other sites.
That's the scenario.
Any feedback is welcome and appreciated.
Thanks,
Terence
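For the PubNode1/PubNode2 pair, the peer-to-peer half is just a transactional publication flagged for P2P; a minimal sketch with a placeholder name is below (the fan-out publications to SubNode1-8 would be ordinary transactional publications defined alongside it):

    EXEC sp_addpublication
         @publication     = N'P2P_Core',       -- placeholder name
         @enabled_for_p2p = N'true',
         @allow_initialize_from_backup = N'true',  -- P2P peers are initialized from a backup rather than a snapshot
         @repl_freq       = N'continuous';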
View 4 Replies
Aug 20, 2007
For intranet development, our DBAs are asking web developers to use fixed domain NT ID accounts
instead of SQL accounts to connect to backend databases in all web applications. We are: Windows XP workstations in a 2003 Active Directory Topology.
I don't think that this is a good idea (I could say more :) but I have found very little to no information on this subject. So I ask you guys... What are your thoughts? Why or why not?
View 7 Replies
Jul 23, 2005
Hi,
I'm in the process of designing a DB (a typical management system DB: 2 transaction tables and about 5 look-up ones) for one of the departments in our company. The user wants this DB, and thus the client (forms, reports, etc.), solely for his department. However, I do expect that soon after deployment other users will want similar DBs and clients, so we will face the problem of integrating such DBs if company management wants to implement it as an enterprise DB. My question (maybe you could also tell me about other groups specialized in these kinds of issues): how should I create such a DB? Should I create an extra look-up table for departments, give each department its own ID, and link it to the main transaction table?
MTIA,
Grawsha
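That last option might look something like the sketch below (table and column names are illustrative only): a department look-up table keyed by DepartmentID, with the shared transaction table carrying the foreign key.

    CREATE TABLE dbo.Department (
        DepartmentID int IDENTITY(1,1) PRIMARY KEY,
        Name         nvarchar(100) NOT NULL
    );
    CREATE TABLE dbo.TransactionMain (
        TransactionID int IDENTITY(1,1) PRIMARY KEY,
        DepartmentID  int NOT NULL REFERENCES dbo.Department (DepartmentID),
        CreatedAt     datetime NOT NULL DEFAULT GETDATE()
        -- ...columns shared by all departments
    );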
View 3 Replies
Jul 20, 2005
I'm still a database newbie so I would like to solicit thoughts about the smartest way to do something in SQL Server.
My company has a web application that we customize for each client. We can do this because everything is database driven. We have database tables that contain our HTML and database tables, as well as some standard tables for each database. We have an in-house app that lets us tweak both of these things and creates a new web site and database tailored to each project.
Each of these sites has a table that stores a schedule our clients use. The records in this schedule table change when information in other custom-generated tables changes.
My company currently uses a legacy FoxPro app to update the schedule table. The FoxPro app contacts SQL Server, reads a table with a list of tables and scheduling information to check, checks each of those items and updates the schedule table. I would like to lose the FoxPro app.
At first thought, as a database newbie, putting triggers in each of the tables to update the schedule when something changes seems the way to go. However, since we change a part of the schema for each client (we have an app that generates the database tables unique to each client), I would like a scheme that would not involve having to create a different trigger for each new table. I would also like something that updates in real time. Right now the FoxPro app is executed once a day.
I was thinking of making a large stored procedure and putting an identical call to that procedure in each table. Each table would have the same trigger in it that would get fired when the record was altered. It would call the stored procedure with relevant arguments to update the schedule.
Does this sound like a smart way to solve this problem, or am I not thinking "database enough"? Any thoughts are welcome. I would like to build a better solution.
Steve
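The "same trigger body in every generated table" idea might look roughly like this; the procedure, the Schedule table columns and the client table name are all assumptions for illustration:

    -- Shared procedure: flags the schedule rows that depend on the table that changed
    CREATE PROCEDURE dbo.UpdateSchedule
        @SourceTable sysname
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE dbo.Schedule
        SET    NeedsRefresh = 1,
               LastChanged  = GETDATE()
        WHERE  SourceTable  = @SourceTable;
    END
    GO
    -- Each generated client table gets the same minimal trigger, differing only in the name passed in
    CREATE TRIGGER trg_ClientTableX_Schedule ON dbo.ClientTableX
    AFTER INSERT, UPDATE, DELETE
    AS
        EXEC dbo.UpdateSchedule @SourceTable = N'ClientTableX';

The table-generation app could emit the trigger along with each table, which keeps the logic in one procedure while still reacting in real time.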
View 1 Replies
Jan 3, 2007
I noticed that the current SQL CE driver does not offer support for the APM (Asynchronous Programming Model). Are there any plans to do this in the future? In light of the lack of APM functionality, does anyone have any ideas or thoughts on how async operations could be done, or if they are even needed in the context of applications that use SQL CE?
View 4 Replies
May 2, 2008
Currently evaluating Red Gate SQL tools targetting SQL Server 2005 Express.
Specifically
- SQL Compare
- SQL Packager
- Dependency Tracker
This is the first tool of this kind that I've evaluated and must say I'm impressed. It looks like it will save me hours of writing scripts.
Any of you guys have any thoughts, recommendations, etc to share? Alternate tools?
Most appreciated in advance, thanks.
View 1 Replies
Dec 17, 2007
I was hoping to elicit some feedback on a trend I am seeing in the Portal market, and specifically with SharePoint development.
If you are not familiar with SharePoint, there is a data table abstraction within SharePoint called a "List". Lists are used for storing data (duh!). However, they are built using the SharePoint front end, and the data entered into all lists is stored in a few tables in the SharePoint content database.
What I am seeing happening is SharePoint gurus recommending AGAINST storing your relational data in database tables, and in favour of SharePoint lists instead. I am not sold on this approach, and it actually makes me think we are taking a step backwards with regards to persistent data storage and best practices.
- Lists cannot be natively related to one another, however they support "lookups"
- Anyone can create a list...and repeat the same data all over the enterprise.
- Lists are maintained in two tables within the SharePoint content database using meta-data patterns.
- Portals contain a multitude of sites. Users and portal admins can create lists all over the place, thus spreading related data over a wide swath of the enterprise.
Is it just me, or are SharePoint pundits absolutely CRAZY to be recommending persistent data storage using lists? I see nothing but problems arising from this approach.
I apologize beforehand if you have not worked with SharePoint and Lists, as this post may not make much sense to you. ;)
View 2 Replies
Nov 8, 2006
Thought I should post in the newbie forum for a while, instead. :-)
I have a couple of scripts that I've generated that drop a couple of system stored procedures and recreate them. I'm not sure why I did it in the first place, but I think it was that it wouldn't let me run an ALTER statement on them. Specifically, I'm now looking at sp_add_operator. I changed it to a 500 character email field instead of whatever it was (100, I think.)
/* Explanation: Why did I do that? SQL Mail is prohibited here, so I'm using CDO_Sysmail to email myself and the developers if a job fails. The list of people to email is determined by the owner of the database, who is also an operator in SQL. I get the list of emails from the email field of the operator properties. Hence, I need a bigger email field. Yes, I now know it would most likely be better to create an ADMIN database on each server for this kind of stuff. (Thanks to Tara for that blogged suggestion.) */
While I will probably go back to the default stored procedure, this got me to thinking: when would it be better to use an ALTER statement on a SProc rather than to do a DROP and CREATE?
Your thoughts, oh SQL gurus?
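For what it's worth, the practical difference is easy to see with a throwaway procedure (the name and body below are illustrative): ALTER PROCEDURE keeps the object id and any permissions granted on the procedure, whereas DROP and CREATE reset both, so every GRANT has to be re-applied.

    -- Widening a parameter in place: existing GRANTs on dbo.usp_Example survive
    ALTER PROCEDURE dbo.usp_Example
        @EmailList varchar(500)
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT @EmailList AS EmailList;
    END
    -- The DROP/CREATE route produces the same definition, but
    -- GRANT EXECUTE ON dbo.usp_Example TO SomeRole must be re-run afterwards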
View 12 Replies
Jul 20, 2005
Folks, I have a quick question that I would very much appreciate some feedback on. We are a not-for-profit charity organization that has decided to develop software in-house to manage our volunteers. We have SQL, and that makes the most sense as a database solution, but we have some issues surrounding the choice of the development language. Some have suggested 100% Java while others say Visual Basic. The head of our team has suggested we do it in Cold Fusion since this will be an internet-based application, and I guess I would very much like some feedback on that choice. We have about 5 organizations that we will tie into this system, with about 5000 users logging in once per month.
Any suggestions or comments would be greatly appreciated.
Cheers
Wade
View 4 Replies
May 10, 2007
I have a database with a dozen or so tables. No table constraints. Logic is all in stored procedures.
I have several Excel spreadsheets of data to import into the database, one spreadsheet per table. Each spreadsheet has additional data (columns) that each table has no interest in and should be ignored.
I would appreciate your thoughts on methods and best practices for loading this data to the database.
I am about to investigate SQL Server 2005 Express handling of XML. I am familiar with XML and XSL conversions and it seems to me that XSL conversion of Excel data to XML gives me a lot of flexibility prior to database import for shaping the data.
In short, importing data to the database from an XML source.
I am not famliar with SQL Server's XML capability and would appreciate thoughts on this while I look into it.
And of course alternate ways that I am overlooking.
Thanks
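If the XSL transform produces an XML document per table, SQL Server 2005 (including Express) can load and shred it in one pass; a rough sketch follows, where the file path, element names and the Customer staging table are all placeholders:

    -- Load the XML file produced from the spreadsheet
    DECLARE @doc xml;
    SELECT  @doc = BulkColumn
    FROM    OPENROWSET(BULK 'C:\import\customers.xml', SINGLE_BLOB) AS src;
    -- Shred only the columns the table cares about; extra spreadsheet columns are simply not selected
    INSERT dbo.Customer (CustomerName, City)
    SELECT  r.value('(Name)[1]', 'nvarchar(100)'),
            r.value('(City)[1]', 'nvarchar(100)')
    FROM    @doc.nodes('/Customers/Customer') AS t(r);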
View 4 Replies
Jul 12, 2007
I just learned I can deploy and schedule jobs to run SSIS packages (via job/sqlagent) without the Integration Service (agent) itself actually running alongside (or on) the server. (Double-click on manifest, deploy IS package to server, create job/job step to run IS package, watch it run even when integration service is completely disabled)
Other than convenient viewing, configuring, and RMC running w/in SQL Server Mgmt Studio 2005, why then do I need the integration service running on a production box at all? When do I really need the IS service itself?
In our (finance) world only (a) an act of God or (b) a DBA can touch production databases/servers. Allowing anyone to connect to yet another service - in this case, an integration service - to meddle w/ a package would be a no no, so...
1) Could I trouble someone for a concrete, critical reason why the DBA should enable it on a production server. Speed? Caching? Peace of mind knowing everything is piled onto and neatly running on the server?
2) On a more minor note, if I'm deploying a package to be housed solely w/in MSDB, is there anyway to prevent the prompt of a file location during deployment, i.e. the creation of an empty directory that would otherwise hold package dependencies if I were running it as a file?
We'd like to deploy only to MSDB (I know all the pros/cons w/r/t saving dtsxs to files v. msdb) and keep deployments clean (read: all in one place). DR is via SAN-to-SAN replication with, among other things, msdb cleanly getting replicated. We would very much like to avoid having to worry about (more) file/directories sitting out on a server share to be replicated to DR (it seems the default is to allow deployments to directories on the SQL server instance itself..ugh) Any architectural insights on this would be appreciated.
Kind Regards,
Jim
View 3 Replies
Oct 3, 2005
I like the new gig a lot. Real busy, smart folks and I have been in high demand since 5 minutes after my butt hit the chair. I already have code in production.
Anyhow, we have a security situation on the SQL servers that I pointed out on my first day. So they want me to roll everything over to Windows Authentication and give the developers and report writers more restricted rights inside SQL Server. They have NT groups for different kinds of users and all of that jazz, and I laid out the typical stuff about using NT groups vs individual accounts and ease of admin vs granularity of control. Well, the boss came back and said he wants ease of admin AND granularity of control over security. So, does anyone have any fresh thinking on turning my either/or into an AND?
View 5 Replies
Dec 29, 2006
Hi,
I would appreciate any thoughts/ideas on the following use case for the distributed service broker application we plan to migrate from our existing proprietary tcp based message protocol using database tables for reliability.
There are two ssb services running in separate sql server instances, each on a different server machine. For simplicity, let us assume the ssb endpoint names are SSBA, SSBB. SSBB is the Initiator of the Dialog while SSBA is the Target. Now the requirement is that if the underlying network communication between the two ssb endpoints(SSBA and SSBB) is broken or if the critical service SSBB is down, then processing of any incoming message into SSBA's queue from a third service broker service (say SSBEXPR) running within a SqlExpress instance should be delayed until SSBB is alive and network communication between SSBA and SSBB is established. In our existing implementation (wherein SSBA, SSBB and SSBEXPR are windows services) we use a combination of TCP socket disconnects and Heartbeat messages between SSBA and SSBB to determine the health of network connection and that of the SSBB service.
Now, my understanding of how the underlying network connection for an SSB dialog works is that if there is no activity on a dialog for a certain amount of time, the underlying network connection is closed. Is there a way to set that amount of time to an infinite value, or something similar, and thus change this behavior? My other question is: how can one query the underlying network connection (i.e. a row from sys.dm_broker_connections) associated with a particular conversation? If none of this is possible, then any other patterns/ideas/approaches are welcome.
Thanks,
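Two views that at least expose the relevant state are sketched below. Note the assumption here: no direct documented join from a single conversation to a transport connection is relied on, so the second query is the usual way to see why a given dialog's messages are not being delivered.

    -- Open Service Broker transport connections on this instance
    SELECT connection_id, state_desc, last_activity_time, principal_name
    FROM   sys.dm_broker_connections;
    -- Per-conversation delivery status: a populated transmission_status usually
    -- explains why messages for that dialog are stuck
    SELECT conversation_handle, to_service_name, transmission_status, enqueue_time
    FROM   sys.transmission_queue;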
View 8 Replies
Sep 12, 2006
I was working on a project that uses a local and a remote SQL server. In order to keep the databases up to date I wanted to implement replication on the SQL servers. But unfortunately the transactional replication option, which meets my requirement best, is disabled in the replication configuration module (snapshot and merge replication are active). Is there any way I can get transactional replication enabled? I know it was supposed to be enabled by default. I'm using Windows 2003 Server and SQL Server 2000. Sincerely
View 3 Replies
Feb 4, 2008
NOT FOR REPLICATION: indicates that the IDENTITY property should not be enforced when a replication login such as sqlrepl inserts data into the table.
What is the purpose of the statement above?
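In context it looks like the sketch below (the table is illustrative): ordinary inserts get the next identity value, but rows applied by a replication agent keep the identity values they arrived with from the publisher instead of being re-numbered at the subscriber.

    CREATE TABLE dbo.Orders (
        OrderID int IDENTITY(1,1) NOT FOR REPLICATION PRIMARY KEY,
        Amount  money NOT NULL
    );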
View 1 Replies
Apr 11, 2004
I have a fairly complicated problem, or at least I think it's complicated.
I'm working with a database that is not very normalized, and I was asked to make a portion of the data in it available to web users.
Rather than build a complicated mess on top of the existing database, I opted to create an entirely new database inspired by the original, only with the data normalized. So, where I had 3 tables in the old system, I now have around 10 nice and neat normalized tables.
Now the trick is to get the data from the original database to the new one. I could write my own application to do this, but I wasn't sure if SQL's built-in replication would be of any help.
I think I'm a little bit confused because the two DB schemas are very different from one another, and I'm not sure if built-in replication can work with this sort of setup.
Luckily, the replication pretty much only needs to go one way, from the original database to the new one. The original database will still be in use, but for the most part only inserts will be happening there, and a few updates. In the new database, there will be no inserts, but there will be updates, and they don't have to be replicated back to the original DB. So, that seems pretty easy to me.
If I were to use replication rather than write my own tool to sync up the data, how would I go about this, and is it even possible?
One thing I thought of was creating a bunch of views in the original database that look just like the tables in the new system, and having replication work with those. The only downside I see is that the views will have no relationships, while in the new DB all the tables have valid relationships, so I may run into situations where child records are getting replicated before parent records, and that will cause issues.
Any advice would be helpful. Thanks guys!
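For what it's worth, a rough sketch of the "views shaped like the new tables" idea is below, with placeholder names throughout. The assumption made here is that to replicate data (rather than just the view definition) through a view with transactional replication, the view generally has to be schema-bound and indexed and the article added with the 'indexed view logbased' type; this is worth verifying against the version in use before committing to it.

    CREATE VIEW dbo.vw_CustomerNormalized
    WITH SCHEMABINDING
    AS
    SELECT CustomerKey, CustomerName, City
    FROM   dbo.OldWideTable;
    GO
    CREATE UNIQUE CLUSTERED INDEX IX_vw_CustomerNormalized
        ON dbo.vw_CustomerNormalized (CustomerKey);
    GO
    EXEC sp_addarticle
         @publication       = N'NormalizedCopy',
         @article           = N'vw_CustomerNormalized',
         @source_object     = N'vw_CustomerNormalized',
         @type              = N'indexed view logbased',
         @destination_table = N'Customer';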
View 1 Replies
Oct 17, 2005
I've been trying to set up transactional replication with two servers for 2 weeks now with little success. Can anyone point me to a good article or something that will help me?
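While waiting on a pointer, a very condensed sketch of the publisher-side steps is below (placeholder names throughout; it assumes the distributor is already configured and that the Snapshot and Log Reader agents get scheduled, e.g. via sp_addpublication_snapshot):

    EXEC sp_replicationdboption @dbname = N'SourceDb', @optname = N'publish', @value = N'true';
    EXEC sp_addpublication @publication = N'BasicTranPub', @repl_freq = N'continuous', @status = N'active';
    EXEC sp_addarticle @publication = N'BasicTranPub', @article = N'Customers', @source_object = N'Customers';
    EXEC sp_addsubscription @publication = N'BasicTranPub', @subscriber = N'SECONDSERVER',
         @destination_db = N'TargetDb', @subscription_type = N'push';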
View 1 Replies
Nov 20, 2001
I am in the process of setting up replication from SQL 2000 to 6.5.
Both servers have mixed authentication and are using the same login, which has domain rights and is a local admin on each box.
However, replication fails with this error message:
Login failed- User: Reason: Not associated with a trusted SQL Server connection.
I have tried searching this site, BOL and other newsgroup sites for a step-by-step guide to setting up replication, but am unable to find anything.
Anyone have any suggestions?
Please advise
Thanks
View 1 Replies