ServerA supports 1 intranet application using 1 DB.
ServerB supports 2 extranet applications using 2 DBs (1 per application).
ServerC is our DW server, on which we have installed a Search DB used by all applications.
Prior to our "upgrade" to merge replication we were using one-way transactional replication, so our topology looked like:
We also had linked servers between ServerB and ServerA, as well as between ServerC and ServerA, to update data on ServerA. We would simultaneously update/insert the tables on ServerB/C, and we created custom stored procedures to handle the data already processed from the subscribers.
With our new implementation we are seeing more latency as well as more locking, since merge replication no longer runs off the transaction logs.
My main question is would we see an increase in performance and less locking as a result of a topology like this:
Where Master is a server and DB supporting no applications (hence no OLTP). Would latency be the same, better, or worse? Should we stick with our current implementation and just performance-tune it?
A secondary question I have is: given the bidirectional replication options above, did we choose the best one for us? These servers are all on the same network, hosted by the same provider over Gigabit Ethernet (I assume). I think we have the polling interval set at 5 seconds, and we are thinking of moving it to 10 seconds at most. Real-time latency is not critical to our business, but it would be a "nice to have". For conflict resolution we are keeping it simple: whichever row was inserted/updated last "wins". It looks like bidirectional transactional replication might be a better option for us. Would it give us the autonomy we are looking for? Are there any major "cons" to using bidirectional transactional replication over merge replication (besides scalability)? Scalability may come into play a few years down the road, but for now it is not a high priority. Also, would the Master model described above, using bidirectional transactional replication, be a successful implementation?
ETA - One thing merge replication gives us is autonomy between our application servers: when ServerA needs to come down for upgrades, the applications on ServerB can still function, without the dependencies we had before with one-way transactional replication and linked server calls.
Is it possible to implement bidirectional replication with queued updating subscriptions in SQL Server 2000? I am currently testing bidirectional replication on two servers and it works well so far. My concern is: how do I update the subscriber or publisher once both servers become disconnected? How do I resync? Thank you for your help.
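Not knowing the exact topology, one hedged sketch: in SQL Server 2000, a subscription that has fallen out of sync after a disconnection can usually be marked for reinitialization from the publisher, so the next agent run resyncs it. The publication and subscriber names below are placeholders.

```sql
-- Run at the Publisher, in the publication database.
-- Marks the subscription for reinitialization; the next run of the
-- Snapshot and Distribution Agents will resynchronize the subscriber.
-- 'MyPublication' and 'SubscriberServer' are placeholder names.
EXEC sp_reinitsubscription
    @publication = N'MyPublication',
    @article = N'all',
    @subscriber = N'SubscriberServer';
```

Note that for queued updating subscriptions, any queued transactions still pending at the subscriber may be lost or raise conflicts on reinitialization, so check the queue before running this.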
Hi, we have two database servers, one at location A and one at location B; call them 'Server A' and 'Server B'. Both databases contain the same schema and data. One group of users will make updates to Server A and the rest use Server B. My requirement is that the two databases stay identical, i.e. if an update or insert is made on Server A it should also be applied to Server B, and vice versa. * Can't go for batch updates, because I want the databases synchronized every 10 minutes or less. * Can't go for one-way replication, as both databases will be updated. Thanks in advance, KCube
Following setup: s1 <--- p1 <--> p2 ---> s2 (bidirectional replication between the publishers, each of which has its own subscriber). What are the disadvantages of this solution if only one publisher gets written to at a time? How about schema changes (would I need to stop all activity on p1 & p2, similar to peer-to-peer replication)? Would changes get republished to s1 & s2? Are identities the only problem when, instead of p1 as the main server, I start using p2? Thank you.
I have a problem configuring bidirectional replication in SQL Server 2005 SP2. I configured the Publication and Subscription on two different SQL 2005 instances on different machines (Station1SQL2005 and Station2SQL2005 respectively). The databases are DBTest1 on Station1 and DBTest2 on Station2. I have two tables, one in DBTest1 and the other in DBTest2.
Script for the above configuration:
The configuration below does not work if I configure the Publication and Subscription on the same machine.
For Station1:

IF EXISTS (SELECT * FROM sys.databases WHERE name = 'dbtest1')
    DROP DATABASE dbtest1;
GO

CREATE DATABASE dbtest1
GO

--Create a table named two_way_dbtest1 that has an IDENTITY column with the NOT FOR REPLICATION option set
USE dbtest1
GO
IF EXISTS (SELECT * FROM sysobjects WHERE name LIKE 'two_way_dbtest1')
    DROP TABLE two_way_dbtest1;
GO

CREATE TABLE two_way_dbtest1
(
    pkcol        INTEGER PRIMARY KEY NOT NULL,
    intcol       INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
    charcol      CHAR(100),
    timestampcol TIMESTAMP
)
/* Allocate a predetermined range of values to the primary key column so that the values on the
   different servers are not in the same range. For example, you can enforce 1-1000 as the key
   range for the two_way_dbtest1 table in the dbtest1 database, and then enforce 1001-2000 as the
   key range for the two_way_dbtest2 table in the dbtest2 database. To do so, use the following code: */

-- Constraint to enforce a range of values between 1 and 1000 in database dbtest1
USE dbtest1
GO

ALTER TABLE two_way_dbtest1 WITH NOCHECK
ADD CONSTRAINT checkprimcol CHECK NOT FOR REPLICATION (pkcol BETWEEN 1 AND 1000)
GO
--Enable your server as the Distributor, and then create a distribution database.
--Ensure the SQL Server Agent service is running before executing the statement below.
USE master
GO
EXEC sp_adddistributor @distributor = 'Station1SQL2005'
GO

--Create a distribution database for the Distributor
USE master
GO
EXEC sp_adddistributiondb @database = 'distribution'
GO

--Enable the computers running SQL Server that are participating in the replication as Publishers
USE master
GO
EXEC sp_adddistpublisher @publisher = 'Station1SQL2005', @distribution_db = 'distribution'
GO

--Enable the identified databases for replication
USE master
GO
EXEC sp_replicationdboption N'dbtest1', N'publish', N'true'
GO
--Create the custom stored procedures in the dbtest1 database
USE dbtest1
GO

-- INSERT stored procedure
CREATE PROCEDURE sp_ins_two_way_dbtest1
    @pkcol int,
    @intcol int,
    @charcol char(100),
    @timestampcol timestamp,
    @rowidcol uniqueidentifier
AS
INSERT INTO two_way_dbtest1 (pkcol, intcol, charcol)
VALUES (@pkcol, @intcol, @charcol)
GO
-- UPDATE stored procedure
CREATE PROCEDURE sp_upd_two_way_dbtest1
    @pkcol int,
    @intcol int,
    @charcol char(100),
    @timestampcol timestamp,
    @rowidcol uniqueidentifier,
    @old_pkcol int
AS
DECLARE @x int
DECLARE @y int
DECLARE @z char(100)

SELECT @x = pkcol, @y = intcol, @z = charcol
FROM two_way_dbtest1
WHERE pkcol = @pkcol

DELETE two_way_dbtest1 WHERE pkcol = @pkcol

INSERT INTO two_way_dbtest1 (pkcol, intcol, charcol)
VALUES
(
    CASE ISNULL(@pkcol, 0)     WHEN 0   THEN @x ELSE @pkcol   END,
    CASE ISNULL(@intcol, 0)    WHEN 0   THEN @y ELSE @intcol  END,
    CASE ISNULL(@charcol, 'N') WHEN 'N' THEN @z ELSE @charcol END
)
GO
-- DELETE stored procedure
CREATE PROCEDURE sp_del_two_way_dbtest1
    @old_pkcol int
AS
DELETE two_way_dbtest1 WHERE pkcol = @old_pkcol
GO
--Create a transactional publication, and then add articles to the publication in both the dbtest1 and dbtest2 databases.
--In the dbtest1 database:
USE dbtest1
GO

-- Add the transactional publication.
EXEC sp_addpublication
    @publication = N'two_way_pub_dbtest1',
    @restricted = N'false',
    @sync_method = N'native',
    @repl_freq = N'continuous',
    @description = N'Transactional publication for database dbtest1.',
    @status = N'active',
    @allow_push = N'true',
    @allow_pull = N'true',
    @allow_anonymous = N'false',
    @enabled_for_internet = N'false',
    @independent_agent = N'false',
    @immediate_sync = N'false',
    @allow_sync_tran = N'true',
    @autogen_sync_procs = N'true',
    --A retention of 120 hours avoids expiry over 5 consecutive holidays. If 0, well-known subscriptions
    --to the publication never expire and are never removed by the Expired Subscription Cleanup Agent.
    @retention = 120
GO
/* In this scenario, the dbtest1 database is the central subscriber. Create transactional
   subscriptions in the dbtest2 database that subscribe to the publication at dbtest1, and in
   the dbtest1 database that subscribe to the publication at dbtest2. */
--Create all the subscriptions with the LOOPBACK_DETECTION option enabled.
--Adding the transactional subscription in dbtest1.
USE dbtest1
GO
EXEC sp_addsubscription
    @publication = N'two_way_pub_dbtest1',
    @article = N'all',
    @subscriber = 'Station2SQL2005',
    @destination_db = N'dbtest2',
    @sync_type = N'none',
    @status = N'active',
    @update_mode = N'sync tran',
    @loopback_detection = 'true'
GO
For Station2:

--Drop and recreate the dbtest2 database
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'dbtest2')
    DROP DATABASE dbtest2;
GO

CREATE DATABASE dbtest2
GO

--Create a table named two_way_dbtest2 that has an IDENTITY column with the NOT FOR REPLICATION option set
USE dbtest2
GO
IF EXISTS (SELECT * FROM sysobjects WHERE name LIKE 'two_way_dbtest2')
    DROP TABLE two_way_dbtest2;
GO

CREATE TABLE two_way_dbtest2
(
    pkcol        INTEGER PRIMARY KEY NOT NULL,
    intcol       INTEGER IDENTITY(1,1) NOT FOR REPLICATION,
    charcol      CHAR(100),
    timestampcol TIMESTAMP
)
/* As described above, enforce 1001-2000 as the key range for the two_way_dbtest2 table in the
   dbtest2 database, so it does not overlap the 1-1000 range used in dbtest1. */

-- Constraint to enforce a range of values between 1001 and 2000 in database dbtest2
USE dbtest2
GO

ALTER TABLE two_way_dbtest2 WITH NOCHECK
ADD CONSTRAINT checkprimcol CHECK NOT FOR REPLICATION (pkcol BETWEEN 1001 AND 2000)
GO
--Enable your server as the Distributor, and then create a distribution database.
--Ensure the SQL Server Agent service is running before executing the statement below.
USE master
GO
EXEC sp_adddistributor @distributor = 'Station2SQL2005'
GO

--Create a distribution database for the Distributor
USE master
GO
EXEC sp_adddistributiondb @database = 'distribution'
GO

--Enable the computers running SQL Server that are participating in the replication as Publishers
USE master
GO
EXEC sp_adddistpublisher @publisher = 'Station2SQL2005', @distribution_db = 'distribution'
GO

--Enable the identified databases for replication
USE master
GO
EXEC sp_replicationdboption N'dbtest2', N'publish', N'true'
GO
--Create the custom stored procedures in the dbtest2 database
USE dbtest2
GO

-- INSERT stored procedure
CREATE PROCEDURE sp_ins_two_way_dbtest2
    @pkcol int,
    @intcol int,
    @charcol char(100),
    @timestampcol timestamp,
    @rowidcol uniqueidentifier
AS
INSERT INTO two_way_dbtest2 (pkcol, intcol, charcol)
VALUES (@pkcol, @intcol, @charcol)
GO
-- UPDATE stored procedure
CREATE PROCEDURE sp_upd_two_way_dbtest2
    @pkcol int,
    @intcol int,
    @charcol char(100),
    @timestampcol timestamp,
    @rowidcol uniqueidentifier,
    @old_pkcol int
AS
DECLARE @x int
DECLARE @y int
DECLARE @z char(100)

SELECT @x = pkcol, @y = intcol, @z = charcol
FROM two_way_dbtest2
WHERE pkcol = @pkcol

DELETE two_way_dbtest2 WHERE pkcol = @pkcol

INSERT INTO two_way_dbtest2 (pkcol, intcol, charcol)
VALUES
(
    CASE ISNULL(@pkcol, 0)     WHEN 0   THEN @x ELSE @pkcol   END,
    CASE ISNULL(@intcol, 0)    WHEN 0   THEN @y ELSE @intcol  END,
    CASE ISNULL(@charcol, 'N') WHEN 'N' THEN @z ELSE @charcol END
)
GO
-- DELETE stored procedure
CREATE PROCEDURE sp_del_two_way_dbtest2
    @old_pkcol int
AS
DELETE two_way_dbtest2 WHERE pkcol = @old_pkcol
GO
--Create a transactional publication, and then add articles to the publication in both the dbtest1 and dbtest2 databases.
--In the dbtest2 database:
USE dbtest2
GO

-- Add the transactional publication.
EXEC sp_addpublication
    @publication = N'two_way_pub_dbtest2',
    @restricted = N'false',
    @sync_method = N'native',
    @repl_freq = N'continuous',
    @description = N'Transactional publication for database dbtest2.',
    @status = N'active',
    @allow_push = N'true',
    @allow_pull = N'true',
    @allow_anonymous = N'false',
    @enabled_for_internet = N'false',
    @independent_agent = N'false',
    @immediate_sync = N'false',
    @allow_sync_tran = N'true',
    @autogen_sync_procs = N'true',
    --A retention of 120 hours avoids expiry over 5 consecutive holidays. If 0, well-known subscriptions
    --to the publication never expire and are never removed by the Expired Subscription Cleanup Agent.
    @retention = 120
GO

/* Create the matching transactional subscription in the dbtest2 database, subscribing
   dbtest1 at Station1 to the publication at dbtest2. */
--Create all the subscriptions with the LOOPBACK_DETECTION option enabled.
--Adding the transactional subscription in dbtest2.
USE dbtest2
GO
EXEC sp_addsubscription
    @publication = N'two_way_pub_dbtest2',
    @article = N'all',
    @subscriber = 'Station1SQL2005',
    @destination_db = N'dbtest1',
    @sync_type = N'none',
    @status = N'active',
    @update_mode = N'sync tran',
    @loopback_detection = 'true'
GO
---*************************************************************************************************
I would be grateful if somebody could give me a solution.
Greetings... Presently I am doing one-way replication in SQL Server 2005. Server-A is a local server at our site and Server-B is a remote server at a different site. There is no problem with the one-way replication: Server-A is the Distributor and Server-B is the Subscriber. I want to set up the following configuration using bidirectional (two-way) replication on SQL Server 2005, and I am not able to do it. What should I do? Should I use merge replication for bidirectional replication? Server-B is the live server for users and cannot be stopped even for a moment; Server-A is a local server which is live too. Please let me know how to do bidirectional replication, so that whatever data is in Server-B (which is live) replicates to Server-A and vice versa. Also, if we add a column to a table in Server-B's database, what would the effect be on Server-A?
Hello everybody. I work at a company in Iceland and we have developed a 3-tier solution written in ASP - Visual Basic - MS SQL 2000. Four companies are using the solution almost constantly, accessing it through a browser. The connection has never gone down (yet) in a way that affected our clients, but we are thinking about how to run the solution locally at every site and then replicate to a main server hosted at our place. My question is: does this affect speed for the clients that are using the solution, or is there a better way of doing this? The solution is a ticket sale system; our clients use it every day, and people sitting at home should be able to order tickets online. Because of that, we can't just update the database every 5 or 15 minutes, because we don't want a double booking of the same seat. Any help appreciated! - Sindri
I want to be able to setup the following configuration using bidirectional transactional replication on SQL 2005
instance A lives on machine 1 instance B lives on machine 2
Instance A publishes to a transactional subscription on Instance B Instance B does the reverse and publishes to a transactional subscription on Instance A
Instance A pushes to a distribution database on machine 2 Instance B pushes to a distribution database on machine 1
Problems Implementing Configuration
I can setup each instance as a distributor and create separate distribution databases using sp_adddistributor and sp_adddistributiondb
I can then enable each publisher to use the correct distribution database using sp_adddistpublisher
However, when I try and run sp_replicationdboption to setup a database for publication, I get the error:
Msg 20028, Level 16, State 1, Procedure sp_MSpublishdb, Line 55 The Distributor has not been installed correctly. Could not enable database for publishing.
If I try and configure a publication with the wizard, it says that instance A must be enabled as a publisher before you can create a publication - and then presents the dialog to add instance A to be handled by the local distribution database - which I don't want.
It appears that I need to run sp_adddistributor on the publisher as well as the distributor for the appropriate instance, but if I do this I get the error:
Msg 14099, Level 16, State 1, Procedure sp_adddistributor, Line 109 The server 'instance A' is already defined as a Distributor.
It seems that you can't select a remote distributor if you already have a local distributor configured.
Is there a way round this limitation?
Note that configuring the environment with a single distribution database works fine
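For comparison, the single-distribution-database layout that does work can be sketched roughly as follows. This is a hedged sketch, not the poster's script: the instance names are placeholders, and each publisher points at the one shared distribution database instead of hosting its own.

```sql
-- On the machine chosen as the shared Distributor:
USE master
GO
-- Make this server its own (local) Distributor.
DECLARE @dist sysname
SET @dist = @@SERVERNAME
EXEC sp_adddistributor @distributor = @dist
GO
-- Create the single shared distribution database.
EXEC sp_adddistributiondb @database = N'distribution'
GO
-- Register each publishing instance against this one distribution database.
-- 'InstanceA' and 'InstanceB' are placeholder server names.
EXEC sp_adddistpublisher @publisher = N'InstanceA', @distribution_db = N'distribution'
EXEC sp_adddistpublisher @publisher = N'InstanceB', @distribution_db = N'distribution'
GO
```

This sidesteps the limitation described above, since neither publishing instance tries to be both a local Distributor and a client of a remote one.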
1) Do I have to install publishing on both servers (A and B), even though one will be the Publisher and the other the Subscriber?
2) a. Can named pipes be used for communication between these two servers, which are on the same domain but not on the same network? Why or why not? b. If I use TCP/IP, is the connection set up using the client configuration utility? How is the connection string set up in this case? c. Suppose the publishing server was not using NetBEUI. Could this pose any problems for communication? (Is using lmhosts sufficient in this scenario?)
3) I have set up a (remote) SQL Server to be a Publisher/ Distributor. Both SQL servers have been configured to be remote servers relative to each other. Following are the steps I have carried out to set up replication:
On the Publication Server (a remote server) I went to Server --> Replication Configuration --> Install Publishing.
Next, I chose a local distribution server. I think that the instdis.sql script ran fine, because the distribution database was installed successfully.
Next, I went into Manage Publications from the Server menu to set up the publications.
When I went to the subscription server to subscribe to the published articles, I got the following error message:
Error 14093: [SQL Server] You must be System Administrator (SA) or Database Owner (dbo) or Replication Subscriber (repl_subscriber) to execute the stored procedure.
I created a distributed database for a mobile application. I replicate a table that is distributed to the mobile device. I followed the instructions on how to create the distributor, publication, replication, web synchronization, and subscriber database. Synchronization from the mobile database into the desktop database (in this case SQL Server 2005 Standard Edition) works fine. The problem is how to set up the publication so it is bidirectional: not only from the mobile database into the desktop database, but also from the desktop database into the mobile database, so the mobile database ends up with the same data as the desktop database even if the mobile database has lost some old data.
It's like a data exchange between both engines: desktop and mobile have the same data. For filtering, I can put a filter on the desktop server for the replicated table, so don't worry about how I split the data.
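Since merge replication is bidirectional by design, one hedged sketch is to publish the table as a merge article with a row filter at the desktop publisher. All names below (database, publication, table, filter column) are placeholders, not from the original post, and parameterized HOST_NAME() filtering requires dynamic filters to be enabled on the publication.

```sql
-- Run at the desktop Publisher. 'DesktopDB', 'MobilePub', 'MyTable',
-- and 'DeviceId' are placeholder names.
EXEC sp_replicationdboption
    @dbname = N'DesktopDB', @optname = N'merge publish', @value = N'true';

-- A merge publication replicates changes in both directions by default.
EXEC sp_addmergepublication
    @publication = N'MobilePub',
    @description = N'Merge publication for mobile subscribers.';

-- Publish the table as a merge article with a per-device row filter.
EXEC sp_addmergearticle
    @publication = N'MobilePub',
    @article = N'MyTable',
    @source_object = N'MyTable',
    @subset_filterclause = N'DeviceId = HOST_NAME()';
```

With this in place, the mobile subscriber's changes merge up to the desktop and the desktop's changes (within the filter) merge back down on each synchronization.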
Hi. I don't understand what they mean when they say "developing OLTP solutions". Can anybody please explain it to me? Also, does anyone know what ways there are to develop SQL OLTP solutions using SQL 2005 Reporting Services, OLTO, Excel Services, as well as any good tutorials for it?
I guess the subject line sums it all up, but I need some experienced explanation of what a solution and a project represent, and how I can use them to my advantage.
Is a solution an entire database? If so, how can I create a solution from my existing databases?
Are either of them a way to collect together scripts etc which will be run against a production database when the solution is rolled out?
What is a project? Is it a single set of scripts related to an upgrade to a database? If so, can it be executed as a single entity? How is the sequence of execution controlled?
And so on and so on....
2005 is such a step backwards for DBAs, with all the features we used to have and now don't. If it weren't for the fact that MS will eventually stop supporting 2000, I frankly see no incentive to upgrade.
Ok, OK, flame off.
Can anybody suggest some resources which might give me some insight into these questions?
We have been looking for books which cover the DB synchronization subject. We started creating a plugin for our socket server, but we quickly realized how much of a challenge it would be (we are creating it "from scratch"). Conflict resolution, the order in which tables (rows) must be synchronized (child - parent), and how to delete rows were some of the problems we found. The main goal is to synchronize devices in the field (Compact Framework) with the server using as little bandwidth as possible. We are trying to be "agnostic" about the DB on both the server and client side.
We are pretty sure there are books out there which cover this subject, we just couldn't find them!
We know this will be a challenge but we also know it will be a great feature to add to the server.
Does anyone know where I can find a Northwind end-to-end database solution (examples) written in ASP.NET (VB)? I would like to reverse-engineer such a project to learn more about ASP.NET.
My warehouse app employs a distributed architecture. Extractions from disparate (wildly so) systems, and transformations and loads into a standardized schema are performed at various locations close to the source systems (both physically and "logically" speaking). There are security and other reasons for this. However this causes some related design and implementation challenges for the ETL processing.
For one, the ETL processes must be successfully operated by non-technical medical administrators, who actually have little interest in the application and sometimes even in the analytics produced by the system, who have other, more pressing day-to-day work they want to be doing, in organizations where turnover is high, training is spotty, and LANs are fragile and often congested.
So, real-time feedback to the operator during processing is pretty darn important. I have built a fairly sophisticated GUI for the operational interface (using .NET forms inside a script component - input boxes just wouldn't cut it).
But that interface is lacking real time feedback as to processing progress at runtime.
Anyone got that T-shirt yet? I'm thinking I need progress bars and real-time task and component progress reports. Also, is there a way to capture the built-in logging output in real time?
I am on a project to develop a route-finding system that searches for the optimal route for users of the system. The current version, which I've built and successfully run, uses normal database access in MS SQL 2005. I store node information in the database, and the application queries it using normal SELECT clauses and returns a DataTable object. The result is rather slow, caused by the many round trips to the database server: the application takes 8 seconds to find a short route, without even considering the traffic-information calculations I will add later. Any comments on the architecture, or on an approach for switching my algorithm to T-SQL?
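One way to move the search server-side and avoid the per-node round trips is a recursive CTE that expands paths in a single set-based query. This is only a sketch under an assumed schema (an Edges table with FromNode, ToNode, Cost, which is not from the original post), and it is a bounded path enumeration, not a true Dijkstra.

```sql
-- Assumed schema: Edges(FromNode int, ToNode int, Cost float).
-- Enumerates paths from @start toward @goal up to a hop limit,
-- then keeps the cheapest. Fine for small graphs; for large graphs
-- a proper shortest-path algorithm in a stored procedure is better.
DECLARE @start int, @goal int
SET @start = 1
SET @goal  = 9;

WITH Paths (ToNode, TotalCost, Hops) AS
(
    SELECT e.ToNode, e.Cost, 1
    FROM Edges AS e
    WHERE e.FromNode = @start

    UNION ALL

    SELECT e.ToNode, p.TotalCost + e.Cost, p.Hops + 1
    FROM Paths AS p
    JOIN Edges AS e ON e.FromNode = p.ToNode
    WHERE p.Hops < 10          -- depth limit guarantees termination on cyclic graphs
)
SELECT TOP (1) TotalCost
FROM Paths
WHERE ToNode = @goal
ORDER BY TotalCost;
```

Wrapping this in a stored procedure means one network round trip per route query instead of one per node visited.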
I have been informed that all my keyword search solutions are susceptible to SQL injection attacks. Does anyone have links discussing basic 'multiple keyword' search solutions? I would think this is a very common routine (perhaps so much so that only newbies like myself do not know it). I have read the posts about escaping ', doing replace(" ' ", " '' "), and using parameters, and yet every multiple-keyword solution I come up with is said to be injection-prone.

Example: a visitor enters Tom's antiques into a TextBox control, and the C# code-behind "securely" generates the call below to the database:

SELECT L_Name, L_City, L_State, L_Display
FROM tblCompanies
WHERE L_Kwords LIKE '%' + 'Tom's' + '%'
  AND L_Kwords LIKE '%' + 'antiques' + '%'
  AND L_Display = 1
RETURN

I understand that concatenating string parts using an array and then passing the sewn-together string to a stored procedure exposes it to injection. I hope that my single-keyword routine below is secure; if it is not, then I am not understanding how parameterized SPs are supposed to be constructed to protect against injection.

string CompanyName;
CompanyName = TextBox1.Text;

PROCEDURE CoNameSearch @CompanyName varchar(100)
AS
SELECT DISTINCT L_Name, L_Phone, L_City, L_State, L_Zip, L_Enabled, L_Display
FROM tblLinks
WHERE (L_Name LIKE @CompanyName + '%') AND L_Enabled = 1 AND L_Display = 1
ORDER BY L_Name
RETURN
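One hedged sketch of an injection-safe multiple-keyword approach: split the TextBox value into tokens in the client code, then bind each token to its own parameter, so user text only ever appears as a parameter value and never as part of the SQL text. The table and column names below come from the post; the fixed three-keyword limit is an assumption for illustration.

```sql
-- Each keyword arrives as a separate parameter; unused ones stay NULL.
-- Because "Tom's" is bound as a value, its quote is just data and
-- cannot alter the query's structure.
CREATE PROCEDURE KeywordSearch
    @kw1 varchar(100),
    @kw2 varchar(100) = NULL,
    @kw3 varchar(100) = NULL
AS
SELECT L_Name, L_City, L_State, L_Display
FROM tblCompanies
WHERE L_Kwords LIKE '%' + @kw1 + '%'
  AND (@kw2 IS NULL OR L_Kwords LIKE '%' + @kw2 + '%')
  AND (@kw3 IS NULL OR L_Kwords LIKE '%' + @kw3 + '%')
  AND L_Display = 1
```

The client splits TextBox1.Text on whitespace and binds each token with SqlParameter objects; no quote escaping or string concatenation is needed anywhere.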
I'm trying to figure out what solution (replication, mirroring, clustering) would work best for me.
I have been reading many articles in BOL and in this forum. Most talk about getting data TO a backup/standby/subscriber, but I can't find a lot of info regarding getting the data BACK after a disaster is over.
We have a main office and a disaster recovery facility. Most of the time there are no data updates at the disaster location. So, I need to get data to the disaster facility via WAN (latency is not a huge issue - end of day syncing is fine) for backup purposes. In the event of a disaster, the main office will be offline and data changes will happen at the disaster site. When the disaster is "over" and we return to the main office, what's the best scheme to reverse the data back to the main office to start business again? We are a financial company, and have gigabytes of relatively static data. Most changes are current day. So, to snapshot a 100GB database when I know only a few hundred MB changes a day doesn't seem feasible to me.
Most replication scenarios (at least from what I see) can't easily "reverse" the replication after a disaster situation. I'm looking at merge replication on a schedule which seems to look good, but was wondering if anyone else has any ideas or suggestions?
I'm about to start on my first reporting services project, but before I mess it up, I'm looking for some guidance on how best to achieve my mission. Here's what I'm looking to achieve:
I have a datacentric application (SQL Server 2005 Express w/ Advanced Services backend) in which I want to build about 50 "canned" reports for the end users. I want to build the reports utilizing server mode so I can take advantage of some of Reporting Services advanced features. I'm not sure what the best practice would be to build the reporting services project. Is it better to include the report project as another project within the application solution? Or, should I build the report project independent of the application solution? What are the pros and cons of doing it either way? How does including the report project build if it's included in the application solution? How would a ClickOnce deployment deploy the report project to the report server?
My ultimate goal would be to have an "off-the-shelf" software solution that includes an installation package consisting of the application project and the report project. Is it even possible, given the Reporting Services architecture, to achieve an install in this manner with ClickOnce, Windows Installer, or InstallShield? Or is building the report project independent of the application project and deploying the reports to the report server "manually" (i.e. deploying within the report server project) the only solution?
My company wants me to research any flags or registry tricks that would allow the non-ANSI joins '=*' and '*=' in SQL Server 2005 with a compatibility level of 90.
The way I understand the situation, in SQL Server 2005 with the database compatibility level set to 90, non-ANSI join SQL such as the following will not work:
Select * from Customer, Sales Where Customer.CustomerID *= Sales.CustomerID
To work, the SQL above would have to be converted to ANSI join SQL such as the following:
Select * from Customer LEFT OUTER JOIN Sales On Customer.CustomerID = Sales.CustomerID
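For completeness, the other operator mentioned above converts the same way, just to a RIGHT OUTER JOIN:

```sql
-- Non-ANSI right outer join (compatibility level 80 only):
--   SELECT * FROM Customer, Sales WHERE Customer.CustomerID =* Sales.CustomerID
-- ANSI equivalent that works at compatibility level 90:
SELECT *
FROM Customer
RIGHT OUTER JOIN Sales ON Customer.CustomerID = Sales.CustomerID
```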
Many hours would be spent browsing through millions of lines of code to find the non-ansi SQL and have changes made.
Does anyone know of any trace flags or registry entries that would let SQL Server 2005 run at compatibility level 90 and still allow the non-ANSI =* and *= joins in SQL?
At my current employer we are struggling with the best way to manage security and deployment of a project that contains databases, SSIS, SSAS and SSRS components, using configurations.
Environment (Dev):
- 3 SQL Server databases, all using mixed-mode security with SQL Server security credentials
- 12 SSIS packages: one master package, eleven child packages, 3 shared data sources
- 1 SSAS database: one cube, 15 dimensions, three referenced data sources from the SSIS project (in the same solution)
- 6 SSRS reports, one data source to the cube (not shared - it doesn't appear SSRS can share data sources with other projects in the solution? Why?)
Everything runs fine in development. Now comes the tricky part.
Deploying SSIS and SSAS into production environments:
-Packages use XML config files for connection strings to the three relational data sources.
-Deploy to SQL Server storage. The deploy wizard copies package dependencies (including the XML config files) to the default location set in the INI file. When I do this, no config file shows up on the remote server (the remote server is not set up identically to the local one, so the directory does not exist. Need a UNC path?). So, being a developer with no "special" permissions on the PROD server, what security permission allows the deployment wizard to copy files to this location on a production server?
-Using a deploy script with dtutil doesn't copy the SSIS dependencies. Is this a matter of using COPY or XCOPY to copy the configuration files to the dependency location? Again, in real-world practice, do developers typically change this location in the INI file, or stick with the default? In either case, how does security allow files to get copied to the remote folder (i.e. manually, or does SQL Server manage this folder permission through some other magic)? When using SSMS and running the package after it has been deployed on the remote server, if the config path is the default (e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Packages...) it appears to be read from the local machine's directory rather than the remote machine's path (do I need to use UNC paths? The wizard doesn't seem to give this option).
-When scheduling the job from SQL Agent, does the proxy account need permissions to the folder the config files sit in?
-What about the role security on the packages themselves? Where do the server roles come into play (dtsltuser, dtsadmin)?
-Because the SSAS project uses connection references to the SSIS project in BIDS, and the SSIS project uses configurations, will SSAS pick up on these connections?
-What about impersonation levels for SSAS? Leave all data sources set to default and set the database impersonation level to "UseServiceAccount"? What if the developer is not the same as the OLAP administrator on the production server? In that case UseServiceAccount isn't an option, and neither are the current user's credentials.
-The SSAS database also has security for Full Control, but that still doesn't prevent security at the data-source level within the database (talking about impersonation level, not source DB credentials).
-How can SSRS connections leverage other shared connections?
As you can see, there are a ton of security considerations, none of which are intuitive and can be configured multiple ways and actually work (and a ton of ways that won't work).
I need a simple cheat-sheet about each step to take to configure this so multiple developers can work without interruption, hot-deploying SSIS, SSAS, and SSRS changes into different environments (QA, PROD).
I'm getting this after upgrading from 2000 to 2005: "Replication-Replication Distribution Subsystem: agent (null) failed. The subscription to publication '(null)' has expired or does not exist." The only suggestion I've seen is to dump all subscriptions. Since we have several dozen publications to several servers, is there a decent way to script it all out, if that's the only option? Thanks in advance.
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted out the replication module and dropped the publication first, then did a full restore. When I tried to set up the replication through the script, it created the publication with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR ToGPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication had set up these system views ('SYNC_FCR To GPRPTS_GL00100'). I tried dropping the replication module again to see if that drops the views, but it didn't. The replication fails with a weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear. My question is: how do I remove these system views, or how do I make the replication work without them, or create new views? And why is it creating those system views in the first place? I would appreciate it if anyone could help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance. Regards, Aravin Rajendra.
My production box is running SQL 7.0 with merge replication. I want to add one more table to the publication, and I also want to add a column to an existing replicated table. Can anybody guide me on how to do this? This is very urgent. Regards, Don
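A sketch of the two operations; publication, table, and column names below are placeholders. Note that `sp_repladdcolumn` is a SQL 2000 procedure, so on SQL 7.0 the column change may instead require dropping the article, altering the table, re-adding it, and reinitializing:

```sql
-- 1) Add a new table as an article to the existing merge publication,
--    then rerun the Snapshot Agent so subscribers pick it up:
EXEC sp_addmergearticle
    @publication   = N'YourMergePublication',
    @article       = N'NewTable',
    @source_object = N'NewTable'

-- 2) On SQL 2000: add a column to a table that is already replicated;
--    this also propagates the schema change to subscribers:
EXEC sp_repladdcolumn
    @source_object = N'ExistingReplicatedTable',
    @column        = N'NewColumn',
    @typetext      = N'int NULL'
```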
DBCC OPENTRAN shows "REPLICATION" on a server that is not configured for replication. The transaction log is almost as large as the database (40GB) with a Simple recovery model. I would like to find out how the log can be truncated in such a situation.
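This often happens when a database that was once published is restored onto a server with no replication: the log reader's "replicated but not yet distributed" mark is left behind and blocks truncation. A commonly cited workaround, safe only if replication is genuinely unused on this server, is to tell SQL Server the pending replicated transactions are done (the shrink target and logical file name below are placeholders):

```sql
-- Mark all pending replicated transactions as distributed, releasing
-- the REPLICATION hold on the log (do NOT run on a genuinely published DB):
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1

-- With the Simple recovery model, a checkpoint should now allow truncation:
CHECKPOINT
DBCC SHRINKFILE (YourLogFileName, 1000)
```

If `sp_repldone` complains that the database is not published, running `sp_removedbreplication 'YourDatabase'` first to strip the leftover replication metadata is the usual suggestion.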
Hello,

I'm getting the following error message when I try to add a row using a stored procedure:

"The identity range managed by replication is full and must be updated by a replication agent."

I read up on the subject and tried the following solutions according to MSDN, without any luck (http://support.Microsoft.com/kb/304706). sp_adjustpublisheridentityrange (http://msdn2.microsoft.com/en-us/library/aa239401(SQL.80).aspx) has no effect.

For testing, I reloaded everything from scratch, created the publications by running the generated SQL scripts, created the replication snapshots, and started the agents.

I checked the current identity value in the Agent table:

DBCC CHECKIDENT ('Agent', NORESEED)
Checking identity information: current identity value '18606', current column value '18606'.

I checked the table to make sure there would be no conflicts with the primary key:

SELECT AgentID FROM Agent ORDER BY AgentID DESC

18603 is the largest AgentID in the table.

Using the Table Article Properties in the Publication Properties dialog, I can see values of:

Range Size at Publisher: 100,000
Range Size at Subscribers: 100
New range @ percentage: 80

In my mind this means the Publisher will assign a new range when the current identity value goes over 80,000? The identity range for this table cannot be exhausted! I'm not sure what to try next.

Please, any insight will be of great help!

Regards,
Bm
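This error usually means the CHECK constraint that replication places on the identity column no longer covers the current seed, regardless of how much of the logical range is left. A sketch for comparing the two on SQL 2000 (constraint names vary; the reseed value is a placeholder to be taken from the constraint output):

```sql
-- Inspect the range CHECK constraint(s) replication placed on the table:
SELECT so.name AS constraint_name, sc.text AS definition
FROM sysobjects so
JOIN syscomments sc ON sc.id = so.id
WHERE so.type = 'C'
  AND so.parent_obj = OBJECT_ID('Agent')

-- Compare the allowed range in the constraint text with the current seed:
DBCC CHECKIDENT ('Agent', NORESEED)

-- If the seed has drifted outside the constraint's range, reseeding to a
-- value inside it is one workaround (pick the value from the output above):
-- DBCC CHECKIDENT ('Agent', RESEED, 18700)
```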
I have a VB.net app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.net app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database, it works fine. I can then close the app, reopen it, and run the pull subscription, and it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without firstly closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.Net 2 code to access my SQL Express data in the app, and I am closing all connections, etc.
I recently set up transactional replication in MS SQL 2000. After setting up the replication, the client's tempdb grew by almost 60GB. Now the client is blaming me for the tempdb growth, saying it's because of the replication being set up. I tried to convince them otherwise, but they are not satisfied yet. Can anybody please tell me whether replication causes tempdb to grow, and if so, how? Can you suggest any good link for getting to know the internal workings of SQL Server replication?
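Before attributing the growth to replication (which mainly exercises the distribution database rather than tempdb), it is worth measuring where the 60GB actually sits. A few quick SQL 2000 checks:

```sql
-- How much of tempdb is data vs. log, and how much is actually used:
USE tempdb
EXEC sp_spaceused

-- Log usage percentages for every database, including tempdb:
DBCC SQLPERF (LOGSPACE)

-- Long-running open transactions are a common cause of tempdb growth:
DBCC OPENTRAN (tempdb)
```

If tempdb is largely empty free space, the growth may have been a one-off event (for example, large sorts during snapshot generation or some unrelated query), and the file can simply be sized back down.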