I am building a system that requires a small database to be replicated.
The DB is made up of a few tables that do not change much (updated mainly by user input or by triggers on those tables), and one constantly growing table of incoming messages to which rows are continually added.
We require two systems (System = Hardware & OS / SQL DB / Application) running in parallel, where one can act as a failover for the primary. Both systems would run at the same site and have a direct network link.
My goal is to have both systems maintain identical databases and to be able to re-synchronize their data after a failure, in some recovery state.
I am looking at:
1. SQL Server Replication Methods (Publishers/Subscribers)
A possible scenario I need to support:
ServerA: primary server acts as publisher
ServerB: secondary failover server acts as subscriber
ServerA: Fails!
ServerB: Assumes role of primary server (i.e. Publisher)
ServerA: recovers later, becomes subscriber to ServerB (secondary)
Q: Can roles be switched, or can a server be a subscriber and a publisher at the same time?
2. Use Service Broker queues and have each server SEND a message to the other server whenever it gets incoming data?
3. Use Transact-SQL on ServerA to write data to both ServerA and ServerB, and vice versa (see the sketch after this list).
4. Skip replication altogether and just have the application write data to both servers in two separate requests.
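A minimal sketch of option 3, assuming a linked server named [ServerB], a mirror database MirrorDb, and an illustrative incoming-messages table; none of these names come from the original post. A trigger on the primary copies each new row across, at the cost of MSDTC distributed transactions and of the local insert failing if ServerB is down.

CREATE TRIGGER trg_IncomingMessages_Copy
ON dbo.IncomingMessages
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Becomes a distributed transaction, so MSDTC must be running on both servers.
    INSERT INTO [ServerB].MirrorDb.dbo.IncomingMessages (MessageId, Payload, ReceivedAt)
    SELECT MessageId, Payload, ReceivedAt
    FROM inserted;
END

That failure mode (the local write is rejected when the partner is unreachable) is the main argument for option 1 or 2 over options 3 and 4.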
I have a product that sits on a main server and wish to implement functionality to allow salesmen to come along, pick up a snapshot of the database, go away and maybe modify/add to it, and then come back and "synchronise" their data. I'm reading up on Merge Replication for this purpose.

Anyway, I created a publisher on my server and it went away and generated a "rowguid" column on all of my tables (my tables all have an Identity column key field). Now of course my "Inserts" no longer work, as they expect a GUID. I would have expected SQL Server to automatically generate a guid for new inserts (in a similar way to its TIMESTAMP), but it appears it doesn't, despite the fact I have "(newid())" as the default for the column. It always inserts the same value: {00000000-0000-0000-0000-000000000000}.

So, back to basics: now that I have a guid field for each record, how do I manage inserts? Thanks.
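A minimal sketch of the usual fix, using an illustrative table dbo.MyTable (not from the original post): name the columns in the INSERT and leave the rowguid column out so the (newid()) default fires, or supply NEWID() explicitly.

-- rowguid omitted from the column list, so the column default generates it:
INSERT INTO dbo.MyTable (Col1, Col2)
VALUES ('value 1', 'value 2');

-- or generate the GUID yourself:
INSERT INTO dbo.MyTable (Col1, Col2, rowguid)
VALUES ('value 1', 'value 2', NEWID());

The all-zero GUID usually appears when the insert (or the application layer) explicitly passes an empty uniqueidentifier value for the column, which overrides the default.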
And it involved my favourite Design Technique - Simplicity
So to achieve Simple Replication across geographically disparate Servers we could use:-
- 300 plus SQL 2000 servers, enabling 1000 concurrent active clients (roughly 10% showing actual, light activity)
- WAN: national, private, secured (ping 300 ms max; 128 kbps leased line minimum)
- One interface server constantly running a simple, dynamically built partitioned view (removing down servers)
- One stored proc (or more) that synchronises the updateable DPV with an interface mirror table, denormalised, holding a physical copy of each of the subscriber/publisher/client SQL Servers' data
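A rough sketch of the dynamically built partitioned view described above, with placeholder server, database and table names. The real stored proc would rebuild this CREATE VIEW statement at run time, leaving out any member server that is currently down.

CREATE VIEW dbo.AllClientData
AS
SELECT * FROM [Server01].ClientDb.dbo.ClientData
UNION ALL
SELECT * FROM [Server02].ClientDb.dbo.ClientData
UNION ALL
SELECT * FROM [Server03].ClientDb.dbo.ClientData

For the view to stay updateable as a distributed partitioned view, each member table needs the same structure and a CHECK constraint on the partitioning column.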
The Question is:-
Was my Dream a Nightmare OR A Dream Come True ?
I know it's down to the network quality to a great degree (that's the suck-it-and-see part of the question), but as a form of replication it seems a very simple platform that could possibly tackle our friend the DCP (Data Consistency Problem), using a client update DateTime column and frequent activation (every 30 secs).
Has anyone had much experience with this type of Scaling out over a WAN ?
I am receiving funny results from a query. To simplify, I have 2 tables (today, yesterday). Each table has the same 8 columns. My query joins the two tables and then looks for rows where either of two columns has changed. What is happening is that when checking one of the columns it seems as though SQL is flipping the column, causing the row to be returned in error.
result set
colA  colB  colC  colD  colE  colF  colG  colG (from yesterday)
1     1     a     b     c     d     e     m
1     1     a     b     c     d     m     e
So what's happening is that the two rows above are actually the same record and should not be returned. There is a daily pmt column that changes, but I am not using that in the query. Aside from that, the two records are identical.
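A hedged sketch of the usual cause and fix: if the join key is not unique, two rows that share the key cross-match each other and the colG values appear "flipped". Joining on every column that makes a row unique avoids the false hits. The column names below come from the sample output; everything else is assumed.

SELECT t.*, y.colG AS colG_yesterday
FROM today AS t
JOIN yesterday AS y
    ON  y.colA = t.colA
    AND y.colB = t.colB
    AND y.colC = t.colC      -- include every column of the row's unique key here
WHERE t.colF <> y.colF
   OR t.colG <> y.colG;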
I have the following situation with a site that already works; I cannot modify the database architecture, which includes the CrossRef tables mentioned below (you will see what I mean by CrossRef tables).
For each hotel there definitely is a crossRef entry in the AddressCrossRef and Address tables respectively (since every hotel has an address).
However, not all hotels have a thumbnail image.
Hence I have hotel inner join AddressXReff inner join Address ..., but I must use left outer join mediaXref left outer join media.
The problem is that if there is no entry in Media or mediaXref, I don't get any results for that hotel.
I tried to get around it by using WHERE (media.mediaType LIKE 'thumbnail' OR media.mediaType IS NULL), but then I started getting multiple results for each hotel, because media of type movie, full_image, etc. all got returned.
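A hedged sketch of the usual fix, using illustrative table, alias and key-column names rather than the real schema: move the thumbnail condition out of the WHERE clause and into the LEFT JOIN's ON clause, so hotels without media are kept while movie/full_image rows are never joined in the first place.

SELECT h.*, a.*, m.*
FROM Hotel AS h
INNER JOIN AddressXRef AS axr ON axr.HotelID  = h.HotelID
INNER JOIN Address     AS a   ON a.AddressID  = axr.AddressID
LEFT OUTER JOIN MediaXRef AS mxr ON mxr.HotelID = h.HotelID
LEFT OUTER JOIN Media     AS m   ON m.MediaID   = mxr.MediaID
                                AND m.mediaType = 'thumbnail';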
I'm getting this after upgrading from 2000 to 2005:

Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist.

The only suggestions I've seen are to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion? Thanks in advance.
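One hedged starting point for scripting it out: the distribution database's standard tables can be queried to enumerate every publication/article/subscriber combination, and the result used to drive generated drop/re-create scripts. The table names below are the standard distribution tables; adjust if your distribution database is named differently.

USE distribution;
SELECT p.publication, a.article, s.subscriber_id, s.status
FROM dbo.MSpublications  AS p
JOIN dbo.MSarticles      AS a ON a.publication_id = p.publication_id
JOIN dbo.MSsubscriptions AS s ON s.publication_id = p.publication_id
                             AND s.article_id     = a.article_id
ORDER BY p.publication, a.article;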
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers.

Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore. When I try to set up the replication through the script, it creates the publication with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication set up these system views, SYNC_FCR To GPRPTS_GL00100. I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.

My question is: how do I remove these system views, or how do I make the replication work without using these views, or create new views? Why is it creating those system views in the first place?

I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.

Regards,
Aravin Rajendra.
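A hedged sketch of one way out, run against the restored publication database: clear any remaining replication metadata, then drop the leftover filter view by hand. The view name comes from the error message (the brackets are needed because of the embedded spaces); the database name is a placeholder, and it is worth confirming the view really is just a replication leftover before dropping it.

EXEC sp_removedbreplication 'YourPublicationDb';   -- placeholder database name
USE YourPublicationDb;
DROP VIEW [SYNC_FCR To GPRPTS_GL00100];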
My production box is running SQL 7.0 with merge replication. I want to add one more table, and I also want to add one more column to an existing replicated table. Can anybody guide me on how to add them? This is very urgent. Regards, Don
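A hedged sketch for the new table, assuming a merge publication named 'MyMergePub' and a new table dbo.NewTable (both placeholder names); a new snapshot has to be generated afterwards. Adding a column to an already-replicated table is version dependent: sp_repladdcolumn handles it on SQL Server 2000 and later, so on a SQL 7.0 box the usual route is to alter the table and re-publish/re-snapshot the article.

EXEC sp_addmergearticle
    @publication   = N'MyMergePub',
    @article       = N'NewTable',
    @source_owner  = N'dbo',
    @source_object = N'NewTable';

-- SQL Server 2000+ only, shown for reference:
-- EXEC sp_repladdcolumn @source_object = N'ExistingTable',
--                       @column   = N'NewColumn',
--                       @typetext = N'int NULL';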
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
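A commonly cited workaround, hedged: if the server genuinely is not replicating, marking all pending replicated transactions as distributed lets the log truncate again. Database and log file names below are placeholders; on some builds the database has to be briefly flagged for publishing with sp_replicationdboption before sp_repldone will run.

USE YourDatabase;                                 -- placeholder database name
EXEC sp_repldone NULL, NULL, 0, 0, 1;             -- xactid, xact_seqno, numtrans, time, reset
EXEC sp_removedbreplication 'YourDatabase';       -- clear any leftover replication flags
CHECKPOINT;
DBCC SHRINKFILE ('YourDatabase_log', 1024);       -- placeholder logical log name / target MB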
Hello, I'm getting the following error message when I try to add a row using a stored procedure:

"The identity range managed by replication is full and must be updated by a replication agent."

I read up on the subject and have tried the following solutions according to MSDN without any luck (http://support.Microsoft.com/kb/304706). sp_adjustpublisheridentityrange (http://msdn2.microsoft.com/en-us/library/aa239401(SQL.80).aspx) has no effect.

For testing, I've reloaded everything from scratch, created the publications by running the generated SQL scripts, created replication snapshots and started the agents.

I've checked the current identity values in the Agent table:

DBCC CHECKIDENT ('Agent', NORESEED)
Checking identity information: current identity value '18606', current column value '18606'.

I checked the table to make sure there will be no conflicts with the primary key:

SELECT AgentID FROM Agent ORDER BY AgentID DESC

18603 is the largest AgentID in the table.

Using the Table Article Properties in the Publication Properties dialog, I can see values of:

Range Size at Publisher: 100,000
Range Size at Subscribers: 100
New range @ percentage: 80

In my mind this means that the Publisher will assign a new range when the current identity value goes over 80,000? The identity range for this table cannot be exhausted! I'm not sure what to try next.

Please, any insight will be of great help!

Regards,
Bm
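A hedged place to look next, using the Agent table from the post: merge replication enforces the publisher/subscriber ranges with a check constraint on the identity column, so comparing that constraint's bounds with the current identity value often shows why inserts are being rejected even though the range "cannot" be exhausted.

EXEC sp_helpconstraint 'Agent';        -- shows the replication identity-range CHECK bounds
DBCC CHECKIDENT ('Agent', NORESEED);   -- current identity value, for comparison

-- Last-resort workaround if the agents still will not hand out a new range
-- (the value is illustrative, just above the current maximum AgentID):
-- DBCC CHECKIDENT ('Agent', RESEED, 18700);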
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database it works fine. I can then close the app, reopen it and run the pull subscription it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without firstly closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.Net 2 code to access my SQL Express data in the app and closing all connections etc
I recently set up transactional replication in MS SQL 2000. After setting up the replication, the client's TempDB grew by almost 60GB. Now the client is blaming me for the TempDB growth and saying it's because of the replication being set up. I tried to convince them otherwise, but they are not satisfied yet. Can anybody please tell me whether replication causes tempdb to grow? If yes, then how? Can you suggest any good link for getting to know the internal workings of SQL Server replication?
I know that adding a column using ALTER TABLE automatically allows SQL Server 2005 to replicate the schema change to the subscribers. However, I would like to add a new column to an existing article that is being used for merge replication without having that column replicated. Re-initialising the subscriptions is not an option. Help would be appreciated.
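One hedged approach on SQL Server 2005: temporarily turn off DDL replication for the publication, make the change, then turn it back on, so the ALTER TABLE is not propagated. The publication, table and column names are placeholders, and the exact @value format for replicate_ddl is worth checking against your build.

EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'replicate_ddl',
    @value       = 0;                   -- 0 = do not replicate schema changes

ALTER TABLE dbo.MyArticleTable ADD LocalOnlyColumn int NULL;

EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'replicate_ddl',
    @value       = 1;                   -- restore normal schema-change replication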
I have been researching the proper steps, or sequence, to follow to completely remove SQL Server 2012 transactional replication. I have read articles about using SSMS as well as using replication stored procedures; some procedures use SQLCMD or just regular T-SQL executed in SSMS. I have also read articles where people said all you really need to do is connect to the Publisher instance, find the publication you want to remove, choose "Delete", and everything will be taken care of behind the scenes. I have three SQL servers that participate in transactional replication: SQL-P (publisher),
SQL-D (distributor) and SQL-S (subscriber). Do I need to connect to the distributor instance and the subscriber instance when removing transactional replication, or is it really just connecting to the publisher and clicking delete on the publication? I want everything gone, including any metadata, system tables, the distribution db and any other replication objects created during the initial configuration.
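A hedged outline of a full teardown, with placeholder publication and database names; the comments say which server each step runs on, and the exact parameters depend on how the topology was configured (push vs. pull, distribution database name, and so on).

-- On SQL-S (subscriber), for a pull subscription:
EXEC sp_droppullsubscription @publisher = N'SQL-P',
                             @publisher_db = N'PubDb',
                             @publication = N'MyPublication';

-- On SQL-P (publisher), in the publication database:
EXEC sp_dropsubscription @publication = N'MyPublication',
                         @article     = N'all',
                         @subscriber  = N'all';
EXEC sp_droppublication  @publication = N'MyPublication';
EXEC sp_replicationdboption @dbname = N'PubDb',
                            @optname = N'publish',
                            @value   = N'false';

-- On SQL-D (distributor), once no publishers remain:
EXEC sp_dropdistpublisher  @publisher = N'SQL-P';
EXEC sp_dropdistributiondb @database  = N'distribution';
EXEC sp_dropdistributor;

So, generally, deleting the publication in SSMS cleans up a lot, but fully removing the distribution database and distributor metadata does mean touching the distributor (and, for pull subscriptions, the subscriber) as well.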
Hello everyone, I am involved in a scenario where there is a huge (SQL Server 2005) production database containing tables that are updated multiple times per second. End-user reports need to be generated against the data in this database, and so the powers-that-be came to the conclusion that a reporting database is necessary in order to offload report processing from production; of course, this means that data will have to be replicated to the reporting database. However, we do not need all of the data in the production database, and perhaps a filtering criterion can be established where only certain rows are replicated over to the reporting database as they're inserted (and possibly updated/deleted).

The current thought process is that the programmers designing the queries/reports will know exactly what data they need from production and will be able to modify the replication criteria as needed. For example, programmer A might write a report where the data he needs can be expressed in a simple replication criterion for table T where column X = "WOOD" and column Y = "MAHOGANY". Programmer B might come along a month later and write a report which relies on the same table T where column X = "METAL" and column Z in (12, 24, 36). Programmer B will have to modify programmer A's replication criteria in such a way as to accommodate both reports, in this case something like "Copy rows from table T where (col X = "WOOD" and col Y = "MAHOGANY") or (col X = "METAL" and col Z in (12, 24, 36))". The example I gave is really trivial, of course, but is sufficient to give you an idea of the current thought process.

I assume that this is a requirement that many of you may have encountered in the past, and I am wondering what solutions you were able to come up with. Personally, I believe that the above method is prone to error (in this case the use of triggers to specify replication criteria) and I'd much rather use replication services to copy tables in their entirety. However, this does not seem to be an option in my case due to the sheer size of certain tables. Is there anything out there that performs replication based on complex programmer-defined criteria? Are triggers a viable alternative? Any alternative out-of-the-box solutions?

Any feedback would be appreciated. Regards! Anthony
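If the filtering can live in the replication layer rather than in triggers, one hedged alternative is a transactional article with a row filter; the criteria then sit in one place and can be altered (with a snapshot refresh) as reports change. Publication and object names below are placeholders built from the example in the post.

EXEC sp_addarticle
    @publication   = N'ReportingPub',
    @article       = N'T',
    @source_object = N'T',
    @filter_clause = N'(X = ''WOOD'' AND Y = ''MAHOGANY'')
                       OR (X = ''METAL'' AND Z IN (12, 24, 36))';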
I am working on bringing our disaster recovery site up as a live site. Currently we replicate to one of our servers (server B) with merge replication (from server A). Server A also does one-way transactional replication from some tables to several other servers, including servers at the DR site.
This setup is not going to be fast enough for what we need, so I am wondering: if a table is receiving merge replication, will the merged updates also replicate down the transactional path?
Example: Server B updates a row and merges it to Server A. Will this update then replicate (via transactional replication) to Server C?
I have a weird situation! I set up transactional replication on one of my development servers (SQL 2000 Developer Edition with SP4). It was running fine without any issues, and all of a sudden I noticed that in my Replication Monitor, the tab under Publishers where I usually see the publication is empty now.

I do see the snapshot agent, log reader and distribution agent under My Agents inside Replication Monitor. But it was useful to see all 3 agents in one window under the publisher before. What happened? Is there any way to get that back inside the monitor? Has someone encountered this situation before? Please advise.

After that, I tried to create a new set of replication on a different database on the same server, and I don't see those either under Replication Monitor - Publishers. All it says is (No Items). I would appreciate any help to correct this issue. Thanks in advance.
I have set up transactional replication with everything on one box. Later (two or three weeks later), Replication Monitor shows a red X under my publishers (the publication is disconnected). This is SQL 2005.
...when I started this endeavor. I have a previously developed Lotus Notes app. The idea was simple; as I am self-taught on LotusScript, I figured I'd be able to stumble my way through VB.

Well, it started OK. I used VB Express to get familiar with the stuff, but decided to go with a full version of VS 2005 and try to get this thing properly developed as a web app. I purchased several reference books etc., and have become relatively familiar with the forums here.

The first issue I have is that I simply want to use code to update or add records to a SQL DB. I know about DataGridView etc., but I want to update the DB using forms, not the tabular view those controls provide. I thought it would be relatively straightforward, but found my ignorance runs deeper than I thought. When I tried to do so, I found I am not really clear on where to make declarations etc. in the web app. If anyone could point me in the right direction that would be great. The issue with searching these forums is that most posts deal with DataGridView or something similar. I have spent a ton of time trying to find relevant posts or articles, but have had no luck yet.

Again, all I really need is a nudge in the right direction. I am more than willing to plod through reference materials or articles/posts to find what I need to know; I just can't find where to even start on the info I need. Regards, Joe
Hi, I have a problem with my SQL WHERE clause. If I manually type ([Area] = 'The First Area') then it is okay, but if I try to pass the variable 'The First Area' using ([Area] = @Area) it doesn't work.

ALTER PROCEDURE dbo.StoredProcedure1
(
    @oby nvarchar,
    @Area char,
    @Startrow INT,
    @Maxrow INT,
    @Minp INT,
    @Maxp INT,
    @Bed INT
)
AS
SELECT * FROM
(
    SELECT row_number() OVER (ORDER BY @oby DESC) AS rownum,
           Ref, Price, Area, Town, Bed
    FROM [Houses]
    WHERE ([Price] >= @Minp) AND ([Price] <= @Maxp)
      AND ([Bed] >= @Bed) AND ([Area] = @Area)
) AS A
WHERE A.rownum BETWEEN (@Startrow) AND (@Startrow + @Maxrow)

Please help. I know it must be something simple, as the SQL works but not when I pass the variable. Thanks in advance.
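A hedged guess at the usual cause, demonstrated with throwaway variables: a char (or nvarchar) parameter declared without a length defaults to a single character, so 'The First Area' arrives in the procedure as just 'T' and matches nothing.

DECLARE @Area1 char;
SET @Area1 = 'The First Area';
SELECT @Area1;                -- returns 'T' (char with no length is char(1) here)

DECLARE @Area2 nvarchar(100);
SET @Area2 = 'The First Area';
SELECT @Area2;                -- returns the full string

Giving @Area (and @oby) an explicit length in the procedure header should make the parameterised call behave like the hard-coded one. Note also that ORDER BY @oby inside ROW_NUMBER() likely does not sort by the column the variable names; a CASE expression or dynamic SQL is usually needed for a dynamic sort column.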
Output:
First row: initial values of the fields
Second row: average of the same fields
Please help me...
select *
from (
    select HEM_LOKOSIT, HEM_NNS
    from LPMS.HEMOGRAMS
    where HEM_PATIENT_ID = 33
    union
    select AVG(HEM_LOKOSIT), AVG(HEM_NNS)
    from LPMS.HEMOGRAMS
    where HEM_PATIENT_ID = 33
)
order by HEM_LOKOSIT desc nulls last;
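A hedged sketch, keeping the table and column names from the query above: tagging each branch of the UNION with a sort key keeps the detail row(s) first and the average row last regardless of the actual values, instead of relying on ordering by HEM_LOKOSIT itself.

select HEM_LOKOSIT, HEM_NNS
from (
    select 1 as ord, HEM_LOKOSIT, HEM_NNS
    from LPMS.HEMOGRAMS
    where HEM_PATIENT_ID = 33
    union all
    select 2 as ord, AVG(HEM_LOKOSIT), AVG(HEM_NNS)
    from LPMS.HEMOGRAMS
    where HEM_PATIENT_ID = 33
) t
order by ord;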
I'm relatively new to SQL 7, but I did use 6.5 a fair bit. I'm trying to test the restore of a transaction log backup and am having a bit of difficulty. The idea is that I make a complete database backup at 1am and back up the transaction log every 30 minutes between 7am and 7pm. I need to be able to restore the database to a known state between 7am and 7pm with a maximum data loss of 30 minutes.
What I am trying to achieve is (as a test):
1) Create a small test database with a test table
2) Add some data to the test table
3) Back up the transaction log
4) Restore the transaction log to 'undo' the data added in step 2
Should be simple I think !!! The problem I am encountering is that in step 4 it won't let me restore only the transaction log (a tick automatically appears in the database backup as well). Bah !
Can someone please tell me what simple steps are required to get this to work? I specifically need to know what options to choose during the backup and restore processes.
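A hedged T-SQL sketch of the sequence, with placeholder database, file and timestamp values. The key point for step 4 is that a log restore has to follow a database restore done WITH NORECOVERY, which is why the GUI insists on ticking the full backup as well; a log backup cannot be applied on its own to a database that is already recovered.

BACKUP DATABASE TestDb TO DISK = 'C:\Backups\TestDb_full.bak' WITH INIT

-- ... add rows to the test table here ...

BACKUP LOG TestDb TO DISK = 'C:\Backups\TestDb_log1.trn' WITH INIT

-- To roll the database back to a point before (or during) the inserts:
RESTORE DATABASE TestDb FROM DISK = 'C:\Backups\TestDb_full.bak'
    WITH NORECOVERY, REPLACE
RESTORE LOG TestDb FROM DISK = 'C:\Backups\TestDb_log1.trn'
    WITH RECOVERY, STOPAT = '2024-01-01 09:15:00'   -- placeholder point in time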
Hi, I have a table with two columns. I need to find each distinct value of col1 and the corresponding repeated values of col2 for that col1 value as a comma-separated list. Is there any function for this in MS SQL? I need something like:

a   1,2,3
b   4,5
c   7
d   5,55,5

I can do this by creating 2 cursors, but I'm looking for an easier way.
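A hedged sketch using the FOR XML PATH approach (SQL Server 2005 and later); on SQL Server 2017+ STRING_AGG does the same job in one call. The table and column names (dbo.MyTable, col1, col2) are placeholders.

SELECT t.col1,
       STUFF((SELECT ',' + CAST(t2.col2 AS varchar(20))
              FROM dbo.MyTable AS t2
              WHERE t2.col1 = t.col1
              FOR XML PATH('')), 1, 1, '') AS col2_list
FROM dbo.MyTable AS t
GROUP BY t.col1;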
DELETE FROM #RptDetails WHERE StructureType <> @StructureType AND #RptDetails WHERE #RptDetails.TraderId <> @TraderId OR #RptDetails.TraderId is null
But it didn't delete the structure types, so I changed it to:

DELETE FROM #RptDetails WHERE StructureType <> @StructureType
--AND
DELETE FROM #RptDetails WHERE #RptDetails.TraderId <> @TraderId OR #RptDetails.TraderId is null

and it did. How do I format the second version into one statement, and what was I doing wrong?
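A hedged reading of what went wrong: the stray "#RptDetails WHERE" in the middle of the first attempt is invalid syntax, and because AND binds tighter than OR, a row whose TraderId matched was kept even when its StructureType differed. Running the two DELETEs back to back removes rows matching either condition, so the single-statement equivalent is an OR of the two, with the null check kept alongside:

DELETE FROM #RptDetails
WHERE StructureType <> @StructureType
   OR #RptDetails.TraderId <> @TraderId
   OR #RptDetails.TraderId IS NULL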