Replication -- Too Many Questions, Too Few Solutions
Dec 15, 1998
1) Do I have to install publishing on both servers (A and B) even though one will be the publisher and the other the subscriber?
2) a. Can named pipes be used for communication between these two servers, which are on the same domain but not on the same network? Why or why not?
b. If I use TCP/IP, is the connection set up using the Client Configuration utility? How is the connection string set up in this case?
c. Suppose the publishing server was not using NetBEUI. Could this pose any problems for communication? (Is using an LMHOSTS file sufficient in this scenario?)
3) I have set up a (remote) SQL Server to be a Publisher/Distributor.
Both SQL servers have been configured to be remote servers relative to
each other. Following are the steps I have carried out to set up
replication:
On the Publication Server (a remote server):
- I went to Server --> Replication Configuration --> Install Publishing.
- Next, I chose a local distribution server. I think the instdis.sql script ran fine, because the distribution database was installed successfully.
- Next, I went into Manage Publications on the Server menu to set up the publications.
When I went to the subscription server to subscribe to the published articles, I got the following error message:
Error 14093: [SQL Server] You must be System Administrator (SA)
or Database Owner (dbo) or Replication Subscriber (repl_subscriber)
to execute the stored procedure.
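A minimal diagnostic sketch for that 14093 error, assuming the subscriber reaches the publisher through the remote-server login mapping (the server name SUBSRV is hypothetical; repl_subscriber should already exist as a login once publishing is installed):

-- Run on the publication server: list current remote-login mappings.
EXEC sp_helpremotelogin
-- Map the subscribing server's sa connection to repl_subscriber so the
-- replication stored procedures pass the permission check.
EXEC sp_addremotelogin 'SUBSRV', 'repl_subscriber', 'sa'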
PS
Please Help
Hello everybody,
I work at a company in Iceland and we have developed a 3-tier solution written in ASP - Visual Basic - MSSQL 2000. Four companies are using the solution almost constantly, accessing it through a browser. The connection has never gone down (yet) in a way that affected our clients, but we are thinking about how to run the solution locally at every site and then replicate to a main server that is hosted at our place.
My question is: does this affect speed for the clients that are using the solution, or is there a better way of doing this?
The solution is a ticket sale system. Our clients use it every day, and people sitting at home should be able to order tickets online. Because of that, we can't just update the database every 5 or 15 minutes -- we don't want a double booking of the same seat.
Any help appreciated!
- Sindri
According to this article from last year: http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1251149,00.html
These are the main options:
* Merge Replication
* Bi-directional Transactional Replication
* Immediate Updating
* Queued Updating
* Peer to Peer
* RDA
Are there any new alternatives that have popped up over the last year? Are all six of the above still good options, depending on needs?
We currently have a three-server topology using merge replication:
ServerA (App1 DB) <--> ServerB (App2 DB)
ServerA (App1 DB) <--> ServerB (App3 DB)
ServerA (App1 DB) <--> ServerC
ServerA supports 1 intranet application using 1 DB. ServerB supports 2 extranet applications using 2 DBs (1 per application). ServerC is our DW server, on which we have installed a Search DB used by all applications.
Prior to our "upgrade" to merge replication, we were using one-way transactional replication, so our topology looked like:
ServerA --> ServerB (App2 DB)
ServerA --> ServerB (App3 DB)
We also had linked servers between ServerB and ServerA, as well as between ServerC and ServerA, to update data on ServerA. We would simultaneously update/insert the tables on ServerB/C and create custom stored procedures to handle the data already processed from the subscribers.
With our new implementation we are seeing more latency as well as locking, since merge replication no longer runs off the transaction logs.
My main question: would we see better performance and less locking with a topology like this?
Master <--> ServerA (App1 DB)
Master <--> ServerB (App2 DB)
Master <--> ServerB (App3 DB)
Master <--> ServerC
Here Master is a server and DB supporting no applications (hence no OLTP). Would latency be the same, better, or worse? Should we stick with our current implementation and just performance-tune it?
A secondary question: given the bidirectional replication options above, did we choose the best one for us? These servers are all on the same network, hosted by the same provider over Gigabit Ethernet (I assume). I think we have the polling interval set at 5 seconds, and we are thinking of moving it to 10 seconds at most. Real-time latency is not critical to our business, but it would be a "nice to have". For conflict resolution we are keeping it simple: whichever row was inserted/updated last "wins". It looks like bi-directional transactional replication might be a better option for us. Would it give us the autonomy we are looking for? Are there any major cons to using bi-directional transactional replication over merge replication (besides scalability)? Scalability may come into play a few years down the road, but for now it is not a high priority. Also, would the Master model described above, using bi-directional transactional replication, be a successful implementation?
ETA: One thing merge replication gives us is autonomy between our application servers -- in particular, when ServerA needs to come down for upgrades, the applications on ServerB can still function without the dependencies we had before with one-way transactional replication and linked-server calls.
I am reading up on replication because soon I must set up merge replication at two branch offices. The reading has confused me a little....
1. Do I have to run the distribution database on a server other than the one holding the headquarters database? I tried to configure replication on this server, and I get a message that says, "SQL Server Agent on ROCK currently uses a system account, which causes replication between servers to fail. In the following dialog, specify a domain account for the service startup account."
It then presents a dialog allowing me to add an account and password. I mistakenly assumed that it would create the account if it didn't exist, but it warned me that it didn't have sufficient rights to check the account, and was I sure that it existed with all the rights required? So I cancelled, to send this message :-)
When it says "Account", I assume it means create a virtual user? What rights should I give the account? I assume it will have to be able to do anything. What database should I give it as the default? Could I have the first agent not use a system account, and thereby run the distribution server on the same machine? (I have another machine available, I just want to know if I need it, or at what point of activity I would need it.) Will it matter whether the agent on this third machine usese a system account?
2. Virtually all the tables in the database use incrementing keys. Will I have to modify this at each branch, say seeding Branch 1 to start with values of 1, Branch 2 with values of 10000, and Branch 3 with values of 100000 (or other sufficiently disparate values to prevent PK collisions)? (A reseeding sketch follows these questions.)
3. How frequently can I merge the databases? Can it be done at different intervals with different tables? (I.e., there are a small number of tables in which updated values would be ideally propagated as soon as possible, while most of the tables are less critical. We sell tickets, so quantity-available is pretty important, whereas someone in one office hardly cares who purchased the tickets on a sale recorded in another office. This data we can wait for, but the counts are critical.)
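On question 2, a minimal reseeding sketch, assuming a table named dbo.Tickets with an IDENTITY primary key (the name is hypothetical); Branch 1 keeps the default seed, and each other branch is pushed into its own range before merge is enabled:

-- Run once per branch so new IDENTITY values land in disjoint ranges.
DBCC CHECKIDENT ('dbo.Tickets', RESEED, 10000)    -- at Branch 2
DBCC CHECKIDENT ('dbo.Tickets', RESEED, 100000)   -- at Branch 3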
Advice, etc. from those experienced in implementing replication would be most appreciated.
Thanks!
We have a subscription that is failing due to '24000[Microsoft][ODBC SQL Server Driver]Invalid cursor state'
My thought was to unsubscribe and subscribe again synching the tables.
HOWEVER, I have been told that I cannot drop the table in question
and have been asked to truncate instead.
There are over 25 subscriptions to this article, so I am unable to
modify the actual article.
I started playing in the pubs database to see if I could do it before I messed with production...
I thought if I just changed the .sch script located in the repldata directory to truncate table instead of create table, that would be it...
However, the drop table command seems to come before it even reaches that script, because the error I get now is 'S0002[Microsoft][ODBC SQL Server Driver][SQL Server]Cannot truncate table 'authors', because this table does not exist in database 'pubs'.'
My question is: when and where does the "drop table" command get executed, and can I modify it without affecting the other subscriptions?
I checked sysarticles, and the del_cmd column is NULL....
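For what it's worth, the pre-snapshot drop is usually governed not by del_cmd but by the article's pre_creation_cmd setting (sp_addarticle's @pre_creation_cmd, one of 'none', 'drop', 'delete', or 'truncate'). A sketch to check it for the pubs test article:

-- pre_creation_cmd: 0 = none, 1 = drop, 2 = delete, 3 = truncate
SELECT name, pre_creation_cmd, del_cmd
FROM sysarticles
WHERE name = 'authors'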
PLEASE HELP....
Hi,
I'm having a headache with the following questions hovering around... I hope someone can help me with the answers.
1) I've implemented immediate-updating transactional replication on 2 servers. The loopback detection option is also set to true. Under normal circumstances, when there is a transaction at the subscriber, it should be replicated to the publisher. However, this was not the case here: there were some transactions at the subscriber, but they were not replicated to the publisher. Is it because of the "loopback detection" option?
2) If bi-directional transactional replication is implemented as described in BOL, there will be 2 distributors. When the publisher has a transaction, it will be replicated to the subscriber. When the subscriber commits the transaction, will the transaction be sent back to the publisher's distribution database? Or will the distribution agent at the subscriber know that it originated from the publisher and not send it back?
Pls pardon my long msg. Any advice is welcome.
Hello, I am playing around with the replication feature within mssql.
The system that I am developing requires a master database at a remote location, possibly at a data centre, plus an on-site database at each location. The idea of having a local database is that the system can keep running even without an internet connection; all the local databases update the master server, and the master server updates all the local databases so they all have the same data.
I have some questions.
1. If I create a publication on the master server and set up a subscription on a local database, does information only go one way? Or do I need to set up a publication on the local database and a subscription on the master server for data to go both ways? (See the sketch after these questions.)
2. What is the best type of replication for keeping the information in all the databases up to date with each other -- merge?
3. Are there any limitations of the replication feature that I need to be aware of?
4. Is there anything that I need to keep in mind when I set this up?
5. How much bandwidth does replication take up? I know there are a lot of factors involved when trying to calculate this type of thing, but a rough idea would be good.
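On questions 1 and 2: merge replication is two-way by design, so a single publication on the master with one subscription per local database should be enough -- no second publication pointing back. A minimal sketch with hypothetical names, run at the master (the snapshot agent job still has to be created, e.g. by the wizard):

-- Enable merge publishing, publish one table, and push to one branch.
EXEC sp_replicationdboption @dbname = 'MasterDB', @optname = 'merge publish', @value = 'true'
EXEC sp_addmergepublication @publication = 'MasterPub'
EXEC sp_addmergearticle @publication = 'MasterPub', @article = 'Orders', @source_object = 'Orders'
EXEC sp_addmergesubscription @publication = 'MasterPub', @subscriber = 'LOCALSRV1',
     @subscriber_db = 'LocalDB', @subscription_type = 'push'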
Thank you for all your replies.
*edit* I am testing with 2 copies of mssql enterprise trial
1. When initializing a new subscriber, data that is on the publisher is not being transferred to the new subscriber. Why?
2. Data that is already on the subscriber is not being uploaded to the publisher. Why?
3. When I perform a data validation, the validation fails, but there is no option to resolve the failure (ie, transfer data one way or the other). Why?
4. For the conflict resolver: I have a rowguid and a timestamp column on each article in the publication. It was my hope that, by having the timestamp, I could avoid the need to manually reconcile conflicts between publisher and subscriber. However, I see that the conflicts are still there and still require manual intervention to eliminate. Why? (See the sketch after these questions.)
5. Where is there additional documentation on the conflict resolver (such as what values to enter in the field "Enter information needed by the resolver")?
6. What is a "Local Subscriber"? As in the statement "Use the default merge resolver and create local Subscribers." as described in the "Choosing a Resolver" topic in SQL BOL.
7. What is a "Global Subscriber"? Same reference.
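On question 4: a timestamp/datetime column does not drive conflict resolution by itself; the article has to be created with a later-wins resolver pointed at that column. A sketch, where MyPub, MyTable, and the datetime column LastModified are hypothetical (the resolver name is from the list of resolvers supplied with SQL Server):

EXEC sp_addmergearticle
     @publication      = 'MyPub',
     @article          = 'MyTable',
     @source_object    = 'MyTable',
     -- @resolver_info names the datetime column that decides which change wins
     @article_resolver = 'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
     @resolver_info    = 'LastModified'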
Sorry for all the ignorance. Replication is relatively new to me.
Regards,
hmscott
I have a couple questions about replication (for both 2000 and 2005 servers):
1. which system tables/dmvs/system sprocs can I look at to determine which columns of a table are being replicated?
2. which system tables/dmvs/sprocs can I call to get metadata about publishers and subscribers?
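Not a complete answer, but a sketch of the standard calls to start with (these procs exist on both 2000 and 2005; the publication and article names are hypothetical):

-- In the published database:
EXEC sp_helppublication                                      -- publications and properties
EXEC sp_helparticle @publication = 'MyPub'                   -- articles in a publication
EXEC sp_helparticlecolumns @publication = 'MyPub', @article = 'MyArticle'
                                                             -- flags which columns replicate
EXEC sp_helpsubscriberinfo                                   -- registered subscribers
-- The distribution database's MSpublications, MSarticles, and MSsubscriptions
-- tables hold publisher/subscriber metadata you can query directly.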
Thanks!
This customer has an SFA application. They are using .NET CF 2.0 SP2, SQL Mobile, and merge replication with SQL Server 2005. The devices they are using are Symbol MC7094s, which have an integrated phone and run Windows Mobile 5 AKU 3.
They have set up 5 different publications for this application. Their business case requires 1 publication to recreate the whole database structure and do the first population of the tables.
For this they use AddSubscription, Synchronize, and then DropSubscription.
They use the other 4 publications to sync particular tables, because they don't want to sync everything every time. All publications point to the same snapshot.
Each time they want to use one of these publications, they instantiate the SqlCeReplication object, then AddSubscription, Synchronize, then DropSubscription.
One of these publications uses a filter of 1=0. Doing the previous steps, SQL CE doesn't track the change and doesn't upload the data from the SQL CE table to the server. It seems SQL Server recognizes it as a new synchronization, deletes the records on the client, and doesn't upload the changes (if I don't do the DropSubscription, it works perfectly). I can reproduce this using a device or an emulator.
They have 100 devices connecting via GPRS - VPN. They need to be sure that this AddSubscription / DropSubscription pattern won't lose information. They want to be sure the whole sync process is fine before going to the carrier to do some GPRS connection monitoring and testing.
What is the best way to approach a situation where the .NET CF application needs to use more than one publication against the same database? I have suggested putting all the transactional tables in the same publication, but due to the business case that is not possible. What are the risks of using many publications?
Another question: every X hours they do a full sync using the first subscription, and the application returns this message: "The snapshot for this publication has become obsolete." Why does this happen? Is it due to the changes in the other publications? How could we manage this to avoid the message? Note: all the tests were done via cradle and GPRS, with the same results.
Our database has grown to the point where our current server is struggling with the query load. One option is to get a 4-processor machine with 16GB of RAM, but I'm also looking at transactional replication as a solution. Currently we run dual Xeons with 4GB of RAM (using the /3GB switch in the OS). We have SQL 2000 Enterprise.
The idea is to set up a secondary server with transactional replication pushed from the main server, so that some SELECT-only queries can be executed on the secondary server -- thus taking load off the main one. We should be able to add PKs to the small number of tables that currently don't have them, and we should be able to run all updates / inserts on the main server. I'll set up a push subscription for the entire DB (maybe excluding some log tables), and then for certain stored procedures I'll direct our applications to use the backup instead of the main server.
So: is this a good idea? Is it easy to keep the backup server in sync using transactional replication? How much extra overhead will this mean for the main server?
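For reference, a minimal sketch of the push setup on the main server, with hypothetical names (the snapshot and log reader agents still need to be created, e.g. via the wizard):

-- Enable the database for transactional publishing and publish one table.
EXEC sp_replicationdboption @dbname = 'MainDB', @optname = 'publish', @value = 'true'
EXEC sp_addpublication @publication = 'ReadOnlyCopy', @repl_freq = 'continuous'
EXEC sp_addarticle @publication = 'ReadOnlyCopy', @article = 'Orders', @source_object = 'Orders'
-- Push the publication to the secondary server that will take the SELECT load.
EXEC sp_addsubscription @publication = 'ReadOnlyCopy', @subscriber = 'REPORTSRV',
     @destination_db = 'MainDBCopy', @subscription_type = 'push'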
All,
We have a SQL 2005 DB mirror configured with a witness server for high availability. Node 1 is the principal and Node 2 is the mirror. A nightly job creates a snapshot on Node 2. The snapshot is used for previous-day reporting queries. We have now been asked to present another copy of the database for near-time reporting. I thought about possibly adding peer-to-peer replication to my environment, but I was hoping to see what everyone else out there is doing.
Regards,
Ian
I can choose the synchronization direction for articles: a) bidirectional, b) one way.
1) Is it possible somehow to replicate only the schema of an article, with no synchronization (zero direction :-))?
2) Same question about columns: I should replicate schema only for a few columns, but without data sync. These columns are freely updateable anywhere (publisher and subscribers), but the data changes shouldn't be replicated.
Thanks for the answers in advance
Hi,
I am seeing an unusual pop-up when I try to hit the website directory. I have replication set up for mobile units on IIS 6.0. When I try to hit http://defaultwebsite/test/sqlcesa30.dll, it asks me to open, save, or cancel the sqlcesa30.dll file. This is weird; I have not seen this before.
When I hit the path from an internet browser -- http://defaultwebsite/test/sqlcesa30.dll -- it should come back with something like SQL Mobile Agent 3.0.
Any thoughts,
P
You will all have to excuse my ignorance. I'm a developer who also doubles up as a development DBA. I am, however, not particularly knowledgeable about all the really important DBA stuff.
We've built a small BI solution using SQL Server 2000. Our problem is that our server is getting on in years (5) and doesn't really have enough disk space or grunt. We have a number of summary cubes that we've optimised quite successfully, but our billing line-level cubes run to 60 million rows and, well, they're about as quick as a dead ferret. Especially given the stupid queries our data analysts keep running.
We have, however, proved our point: this can be done, and indeed SQL Server can do it. So we're now looking at some infrastructure spend and some new copies of SQL 2005.
But I need some advice. Our user base is climbing through the roof; we originally had 10, now we have closer to 50, and at this rate it'll be a couple of hundred by the end of the year. We're using a plugin called XLCubed to deliver the data into Excel from the Analysis Server.
The OLTP database that sits behind it is fairly robust, but we have a number of web-based apps (mostly lookup systems) that want to use the nice shiny new accurate tables of data we have created.
So I'm looking at a fairly big server to hold the OLTP DB; this will also serve up live data to our web apps. It's worth pointing out that the source data system is a batch system that processes overnight, so we load yesterday's data at 6pm each evening and process our cubes and such overnight. Thus the data is a couple of days out of date. Don't laugh: they used to use MS Access and got one mangy data set a month, so this is a massive leap forward.
I wanted to mirror the DB to another machine, but I also want a separate cube server. I wondered if the cube server could read its data from the mirror, as opposed to loading the main server (the mirror would be an identical box). We would also have a separate box, running some of our other systems, acting as the witness.
I also wondered about exporting the cubes onto file shares for use locally, as opposed to via the server, which is how they connect now.
We have been using Reporting Services, and some of the queries the devs write are not exactly efficient. So I was also planning on clustering a pair of smaller servers into a reporting farm. Could I use another SQL Server to serve data up to them? Could I use a DB snapshot to copy the required data to this server? What are the time / size implications of using a snapshot and replicating it over each night?
Any suggestions for places to read up on this? I've looked at the MS marketing blurb, and while it's big on buzzwords it's light on specifics -- like how it actually works, how you would actually configure it to do some of this, and what the implications would be.
Any advice?
many thanks
Steve
hi. I don't understand what they mean when they say "developing OLTP solutions". Can anybody please explain it to me? Also, does anyone know what ways there are to develop SQL OLTP solutions using SQL 2005 Reporting Services, OLTP, Excel Services, as well as any good tutorials for it?
thanks for the help.
I guess the subject line sums it all up, but I need some experienced explanation of what a solution and a project represent, and how I can use them to my advantage.
Is a solution an entire database? If so, how can I create a solution from my existing databases?
Are either of them a way to collect together scripts etc which will be run against a production database when the solution is rolled out?
What is a project? Is it a single set of scripts related to an upgrade to a database? If so, can it be executed as a single entity? How is the sequence of execution controlled?
And so on and so on....
2005 is such a step backwards for DBAs, with all the features we used to have and now don't. If it weren't for the fact that MS will eventually stop supporting 2000, I frankly would see no incentive to upgrade myself.
Ok, OK, flame off.
Can anybody suggest some resources which might give me some insight into these questions?
Thanks,
-Rob
Hi,
Does anyone have experience with Meta Integration Solutions and converting to SSIS?
and converting to SQL ?
and converting to Analysis Server ?
and converting to Reporting Server ?
You can find their website http://www.metaintegration.net/Products/Overview/Solutions.html
Constantijn Enders
Does anyone know where I can find a Northwind end-to-end database solution (example) written in ASP.NET (VB)? I would like to reverse-engineer this project to learn more about ASP.NET.
Thanks.
My warehouse app employs a distributed architecture. Extractions from disparate (wildly so) systems, and transformations and loads into a standardized schema are performed at various locations close to the source systems (both physically and "logically" speaking). There are security and other reasons for this. However this causes some related design and implementation challenges for the ETL processing.
For one, the ETL processes must be successfully operated by non-technical medical administrators, who actually have little interest in the application (and sometimes even in the analytics produced by the system), who have other, more pressing day-to-day work they want to be doing, in organizations where turnover is high, training is spotty, and LANs are fragile and often congested.
So, real-time feedback to the operator during processing is pretty dern important. I have built a fairly sophisticated GUI (using .NET forms inside a script component) for the operational interface (input boxes just wouldn't cut it).
But that interface lacks real-time feedback on processing progress at runtime.
Anyone got that T-shirt yet? I'm thinking I need progress bars and real-time task and component progress reports. Also, is there a way to capture the built-in logging output in real time?
I am on a project to develop a route-finding system that searches for the optimal route for users of the system. The current version that I've built and successfully run uses normal database access in MS SQL 2005. I store node information in the database, and the application queries it using normal "select" clauses, returning a DataTable object to the application. The result is rather slow, caused by the multiple round trips to the database server: the application takes 8 seconds to find a short route, without even considering the heavy traffic-information calculations I will add later. Any comments on the architecture, or on an approach for switching my algorithm to T-SQL?
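As one possible starting point, a sketch of walking the node graph server-side with a recursive CTE (SQL Server 2005), so the whole search is a single round trip instead of one query per node. Table, column, and variable names are hypothetical, and a real implementation needs proper cycle handling:

DECLARE @Start int, @End int
SELECT @Start = 1, @End = 42                -- hypothetical node ids

;WITH Routes (ToNode, TotalCost, Hops) AS
(
    SELECT e.ToNode, e.Cost, 1
    FROM dbo.Edges AS e
    WHERE e.FromNode = @Start
    UNION ALL
    SELECT e.ToNode, r.TotalCost + e.Cost, r.Hops + 1
    FROM Routes AS r
    JOIN dbo.Edges AS e ON e.FromNode = r.ToNode
    WHERE r.Hops < 20                       -- crude depth cap in lieu of cycle detection
)
SELECT TOP 1 ToNode, TotalCost
FROM Routes
WHERE ToNode = @End
ORDER BY TotalCost
OPTION (MAXRECURSION 100)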
I have been informed that all my keyword-search solutions are susceptible to SQL injection attacks. Does anyone have links discussing basic 'multiple keyword' search solutions? I would think this is a very common routine (perhaps so much so that only newbies like myself do not know it). I have read the posts about escaping ', doing replace " ' ", " '' ", and using parameters, and yet every multiple-keyword solution I come up with is said to be injection-prone.
Example: a visitor enters Tom's antiques into a TextBox control, and the C# code-behind securely generates the call to the database below.
SELECT L_Name, L_City, L_State, L_Display FROM tblCompanies WHERE L_Kwords LIKE '%' + 'Tom's' + '%' AND L_Kwords LIKE '%' + 'antiques' + '%' AND L_Display = 1 RETURN
I understand that concatenating string parts using an array and then passing the sewn-together string to a stored procedure exposes it to injection. I hope that my single-keyword routine below is secure; if it is not, then I am not understanding how parameterized SPs are supposed to be constructed to protect against injection.

string CompanyName;
CompanyName = TextBox1.Text;

PROCEDURE CoNameSearch @CompanyName varchar(100)
AS
SELECT DISTINCT L_Name, L_Phone, L_City, L_State, L_Zip, L_Enabled, L_Display
FROM tblLinks
WHERE (L_Name LIKE @CompanyName + '%') AND L_Enabled = 1 AND L_Display = 1
ORDER BY L_Name
RETURN
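For the multiple-keyword case, the same parameterized pattern extends to one parameter per keyword. Because each keyword travels as a parameter value rather than as SQL text, the apostrophe in "Tom's" is just data and cannot break out of the string. A sketch with a hypothetical procedure name, reusing the tblCompanies columns from the example above:

CREATE PROCEDURE KwordSearch2
    @Kword1 varchar(100),
    @Kword2 varchar(100)
AS
-- Each keyword is a parameter, so it is always treated as a literal value.
SELECT L_Name, L_City, L_State, L_Display
FROM tblCompanies
WHERE L_Kwords LIKE '%' + @Kword1 + '%'
  AND L_Kwords LIKE '%' + @Kword2 + '%'
  AND L_Display = 1
RETURN

From the C# side, pass the two keywords as SqlParameter values on the command rather than concatenating them into the command text.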
I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.
I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.
We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.
Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule, which seems promising, but I was wondering if anyone else has ideas or suggestions?
Hello Reporting Services Gurus!
I'm about to start on my first reporting services project, but before I mess it up, I'm looking for some guidance on how best to achieve my mission. Here's what I'm looking to achieve:
I have a datacentric application (SQL Server 2005 Express w/ Advanced Services backend) in which I want to build about 50 "canned" reports for the end users. I want to build the reports utilizing server mode so I can take advantage of some of Reporting Services advanced features. I'm not sure what the best practice would be to build the reporting services project. Is it better to include the report project as another project within the application solution? Or, should I build the report project independent of the application solution? What are the pros and cons of doing it either way? How does including the report project build if it's included in the application solution? How would a ClickOnce deployment deploy the report project to the report server?
My ultimate goal would be to have an "off-the-shelf" software solution that includes an installation package consisting of the application project and the report project. Is it even possible, given the Reporting Services architecture, to achieve an install in this manner with ClickOnce, Windows Installer, or InstallShield? Or is building the report project independent of the application project, and deploying the reports to the report server "manually" (i.e. deploying within the report server project), the only solution?
Any help would be greatly appreciated!
Tony
My company wants me to research any flags or registry tricks that would allow non-ANSI joins ('=*' and '*=') in SQL Server 2005 with a compatibility mode of 90.
The way I understand the situation, in SQL Server 2005 with the database compatibility level set to 90, non-ANSI join SQL such as the following will not work:
Select * from
Customer, Sales
Where Customer.CustomerID *= Sales.CustomerID
To work, the SQL above would have to be converted to ANSI join SQL such as the following:
Select * from
Customer LEFT OUTER JOIN Sales
On Customer.CustomerID = Sales.CustomerID
Many hours would be spent browsing through millions of lines of code to find the non-ANSI SQL and make the changes.
Does anyone know of any trace flags or registry entries that would let SQL Server 2005 run at compatibility level 90 and still allow non-ANSI =* and *= joins in SQL?
Thanks,
AIMDBA
At my current employer we are struggling with the best way to manage security and deployment of a project that contains databases, SSIS, SSAS and SSRS components, using configurations.
Environment (Dev):
3 SQL Server databases, all using mixed-mode security, using SQL Server security credentials.
12 SSIS packages; one master package, eleven child packages, 3 shared data sources
1 SSAS database; one cube, 15 dimensions, three referenced data sources from the SSIS project (in same solution)
6 SSRS reports, one data source to the cube (not shared -- it doesn't appear SSRS can share data sources with other projects in the solution? Why?)
Everything runs fine in development. Now comes the tricky part.
Deploying SSIS and SSAS into production environments:
-Packages use XML config files for connection strings to three relational data sources.
-Deploy to SQL Server storage. The deploy wizard copies package dependencies (including XML config files) to the default location set in the INI file. When I do this, no config file shows up on the remote server (the remote server is not set up identically to the local one, so the directory does not exist. Do I need a UNC path?). So, being a developer with no "special" permissions on the PROD server, what security permission is allowing the deployment wizard to copy files to this location on a production server?
-Using a deploy script with dtutil doesn't copy the SSIS dependencies. Is this a matter of using COPY or XCOPY to copy the configuration files to the dependency location? Again, in real-world practice, do developers typically change this location in the INI file, or stick with the default? In either case, how does security work such that files get copied to the remote folder? (i.e. manually, or does SQL Server manage this folder permission through some other magic?)
-When using SSMS and running the package after it has been deployed on the remote server, if the config path is the default (e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Packages...), it appears to be read from the local machine's directory rather than the remote machine's path (do I need to use UNC paths? The wizard doesn't seem to give this option).
-When scheduling the job from SQL Agent, does the proxy account need permissions to the folder the config files sit in?
-What about the role security on the packages themselves? Where do the server roles come into play (dtsltuser, dtsadmin)?
-Because the SSAS project uses connection references to the SSIS project in BIDS, and the SSIS project uses configurations, will SSAS pick up on these connections?
-What about impersonation levels for SSAS? Leave all data sources set to default and set the database impersonation level to "UseServiceAccount"? What if the developer is not the same as the OLAP administrator on the production server? In that case Use Service Account isn't an option, and neither are the current user's credentials.
-The SSAS database also has Full Control security, but that still doesn't prevent security at the data-source level within the database (talking about the impersonation level, not source DB credentials).
-How can SSRS connections leverage other shared connections?
As you can see, there are a ton of security considerations, none of which are intuitive; everything can be configured multiple ways that actually work (and a ton of ways that won't).
I need a simple cheat-sheet about each step to take to configure this so multiple developers can work without interruption, hot-deploying SSIS, SSAS, and SSRS changes into different environments (QA, PROD).
-Kory
Our current setup is as follows:
serverA - DB1, DB2
ServerB - DB3
ServerC - DB4, DB5
Question 1: In peer-to-peer, is this the right setup?
ServerA - DB1, DB2, DB3, DB4, DB5
ServerB - DB1, DB2, DB3, DB4, DB5
ServerC - DB1, DB2, DB3, DB4, DB5
Question 2: Are we backing up all DBs or just DB1?
Question 3: When serverA.DB1 goes down, does it affect the other DB1s?
Question 4: Can any of the DBs publish to a server, say a reporting_serverG, that is outside the peer-to-peer topology?
Question 5: Is it best to have .mdf, .ldf, tempdb, etc. on local drive or in the SAN?
Question 6: What is the recommended NLB hardware needed to handle peer-to-peer?
1. Is it legal and OK to use an MSDN SQL copy in a production environment, or is it strictly for test environments?
2. If I own a legal copy of SQL 7 with 5 CALs, can I legally use SQL MSDE and have more than 5 people access my SQL server, or am I also limited to 5 users as with my original?
Sorry I am a newbie at this SQL thing.
I'm getting this after upgrading from 2000 to 2005:
Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist.
The only suggestions I've seen are to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only suggestion?
Thanks in advance.
View 3 Replies View RelatedHi,I have transactional replication set up on on of our MS SQL 2000 (SP4)Std Edition database serverBecause of an unfortunate scenario, I had to restore one of thepublication databases. I scripted the replication module and droppedthe publication first. Then did a full restore.When I try to set up the replication thru the script, it created thepublication with the following error messageServer: Msg 2714, Level 16, State 5, Procedure SYNC_FCR ToGPRPTS_GL00100, Line 1There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in thedatabase.It seems the previous replication has set up these system viewsSYNC_FCR To GPRPTS_GL00100. And I have tried dropping the replicationmodule again to see if it drops the views but it didn't.The replication fails with some wired error & complains about thisviews when I try to run the synch..I even tried running the sp_removedbreplication to drop thereplication module, but the views do not seem to disappear.My question is how do I remove these system views or how do I make thereplication work without using these views or create new views.. Whyis this creating those system views in the first place?I would appreciate if anyone can help me fix this issue. Please feelfree to let me know if any additional information or scripts needed.Thanks in advance..Regards,Aravin Rajendra.
View 2 Replies View RelatedHi,
My production box is running SQL 7.0 with merge replication, and I want to add one more table and one more column to an existing replicated table. Can anybody guide me on how to do this? It is very urgent.
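A minimal sketch for the new table (publication and table names are hypothetical; the snapshot has to be re-run afterwards). For the new column on an already-published table, as far as I know sp_repladdcolumn only arrived in SQL 2000, so on 7.0 this generally means altering the table at the publisher and re-snapshotting:

-- Add the new table as another article in the existing merge publication.
EXEC sp_addmergearticle
     @publication   = 'MyMergePub',
     @article       = 'NewTable',
     @source_object = 'NewTable'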
Regards
Don
Hello,
I have this problem on a Production database.
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB), with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
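The commonly cited workaround for this state -- a log pinned by replication metadata on a server that is not really replicating -- is sketched below. The database name is hypothetical; use with care, since sp_repldone discards any pending replication work:

-- Run in the affected database: mark all pending replicated transactions
-- as distributed, releasing the log for truncation.
-- (If it complains the database is not published, you may need to mark it
-- as published temporarily with sp_replicationdboption first.)
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
     @numtrans = 0, @time = 0, @reset = 1
-- Then strip the leftover replication metadata.
EXEC sp_removedbreplication 'MyDatabase'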
Thank you.