I wanted to confirm that, when using database mirroring, DDL changes such as table and stored procedure updates on the principal server are replicated to the mirror server.
I thought I read that non-logged operations would not be replicated to the mirror server. What would be some examples of that? At the moment you would think every entry in the database would be saved into a table and that would be replicated.
Using SQL Server 2008, we would like to propose mirroring a critical database between two servers. Since we are initiating this, we may need to clarify its purpose and the changes required from each team:

Any changes required at the OS level? (I believe both servers' IP addresses or host names should be added to the hosts file, and the mirroring ports should be opened for both the principal and mirror server IP addresses): Windows Team.
Any changes required from the application? (Instance name, authentication, user name and password should be added to the web.config files): Application Team.
Any changes required from the Network Team?

Also, for mirroring, both the principal and mirror servers should be on the same version. Does that only mean SQL Server 2008 on both is enough, or does it also mean the build numbers (10.00.4000) should be the same? URL....
I am interested in adding a new row to a table 'Table05' in a SQL Server 2005 database whenever a row is added to a table 'Table00' in another, SQL Server 2000 database. Can someone tell me a way to implement this?
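One straightforward option, sketched here under the assumption that a linked server (called SVR2005 below, which is an assumption) has been created on the SQL Server 2000 instance pointing at the 2005 instance, is an insert trigger on Table00 that copies the new rows across. The column names are placeholders:

create trigger trg_Table00_Insert
on dbo.Table00
after insert
as
begin
    set nocount on

    -- Copy each newly inserted row to the table on the linked SQL Server 2005 instance.
    insert into SVR2005.TargetDb.dbo.Table05 (col1, col2)
    select col1, col2
    from inserted
end

Note that the remote insert then runs inside the same transaction as the original insert (a distributed transaction via MSDTC), so if the 2005 server is unreachable the insert into Table00 will fail; queuing the rows locally and moving them with a scheduled job is a looser-coupled alternative.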
I have just finished configuring my first test mirrored environment (high safety mode). I set up the database engine service accounts on each of the servers with a domain\user account. I inherited a production mirrored environment set up by someone else. On the production servers the database engine service account is an NT Authority account, i.e. a local account. I am trying to practice installing Windows updates within a mirrored environment and I am not sure how to proceed when the service account is a local NT Authority account. Should I change the service account to a domain account?
No error is thrown, but the update is not made. Possibly due to the dreaded inline SQL being used? The data structure for these two servers is horrific (but that is a story in itself). The current structure (which needs immediate change) is that each employee, all 10 of them, has their own database. If the userid is 6 or higher they are on server2; if the userid is 5 or below they are on server1. Then they each have their own table that stores the data. (A story for another day, but it definitely needs an overhaul.) However, here is the SQL: what needs to be altered to work with the current data structure and still get the updates needed? Or we can scratch the update statements entirely if we can get the correct counts needed.
I will have many client databases that will be updated. When they are updated I need to transfer some of that data to a central server database somewhere on the Internet. Note that the schemas do not necessarily match.
Transferring the data (web services, remoting) is not a problem.
What I am looking for is a cool, correct, advisable way for the client database to notify me of an insert, update or delete. I can then initiate the connection to the server and 'push' the data (and maybe pull some back from the server too).
Obviously triggers may be a place to start... but I need to know, external to the database, that an update has happened. Or can I send the data from within a SQL Server assembly?
Does anyone have any ideas or a technique, maybe something new in SQL Server 2005 (all databases will be 2005), for achieving this? Just so I don't go down the 'wrong' path...
There are some BLOBs involved (1-2 page PDFs), if that makes a difference.
I envisage that the process transferring the data will be a Windows Service running on each client. The connection may not always be available, so some kind of 'to do' list of outstanding data to be transferred is required.
I'm just starting on this, so any pointers would be great; I'm sure it's all been done before ;)
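As one possible shape for that 'to do' list, and only a sketch under the assumption that each tracked table gets a trigger writing into a local outbox table (all names below are illustrative): the Windows Service polls the outbox, pushes the data over the web service, and marks rows as sent.

-- Outbox of pending changes for the Windows Service to pick up.
create table dbo.ChangeQueue (
    queue_id    int identity(1,1) primary key,
    table_name  sysname      not null,
    row_key     varchar(100) not null,
    change_type char(1)      not null,   -- 'I', 'U' or 'D'
    changed_at  datetime     not null default (getdate()),
    sent        bit          not null default (0)
)
go

-- Example trigger for one tracked table (dbo.Documents with key DocumentId is an assumption).
create trigger trg_Documents_Track
on dbo.Documents
after insert, update, delete
as
begin
    set nocount on

    insert dbo.ChangeQueue (table_name, row_key, change_type)
    select 'Documents', cast(i.DocumentId as varchar(100)),
           case when exists (select * from deleted) then 'U' else 'I' end
    from inserted i
    union all
    select 'Documents', cast(d.DocumentId as varchar(100)), 'D'
    from deleted d
    where not exists (select * from inserted)
end
go

The service would then read rows where sent = 0, fetch the current data (including the PDF BLOBs) from the base tables, push it to the central server, and flag the queue rows as sent. If polling feels too crude, SQL Server 2005's Service Broker or query notifications can provide a push-style signal instead.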
When I run ALTER DATABASE foo SET PARTNER = 'TCP://10.3.3.1:1234' I get this error message:
"The database is being closed before database mirroring is fully initialized. The ALTER DATABASE command failed." What does that mean, and how do I fix it?
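For what it's worth, that error usually points at the mirror copy of the database not being in the RESTORING state, or the partners being configured in the wrong order. A minimal sketch of the usual sequence, with placeholder backup paths and endpoint addresses:

-- On the mirror: restore the latest full and log backups WITH NORECOVERY.
restore database foo from disk = 'C:\backups\foo_full.bak' with norecovery
restore log foo from disk = 'C:\backups\foo_log.trn' with norecovery

-- Still on the mirror: point at the principal's endpoint first.
alter database foo set partner = 'TCP://principalserver.yourdomain.com:5022'

-- Then on the principal: point at the mirror's endpoint.
alter database foo set partner = 'TCP://10.3.3.1:1234'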
I'm trying to work out a database design to make it quicker for my client program to read and display updates to the data set. Currently it reads in the entire data set again after each change, which was acceptable when the data set was small but now it's large enough to start causing noticeable delays. I've come up with a possible solution but am looking for others' input on its suitability to the problem.

Here is the DDL for one of the tables:

create table epl_packages
(
    customer varchar(8) not null,     -- \
    package_type char not null,       --  primary key
    package_no int not null,          -- /
    dimensions varchar(50) not null default(0),
    weight_kg int not null,
    despatch_id int,                  -- filled in on despatch
    loaded bit not null default(0),
    item_count int not null default(0)
)

alter table epl_packages
add constraint pk_epl_packages
primary key (customer, package_type, package_no)

My first thought was to add a datetime column to each table to record the time of the last change, but that would only work for inserts and updates. So I figured that a separate table for deletions would make this complete. DDL would be something like:

create table epl_packages
(
    customer varchar(8) not null,
    package_type char not null,
    package_no int not null,
    dimensions varchar(50) not null default(0),
    weight_kg int not null,
    despatch_id int,
    loaded bit not null default(0),
    item_count int not null default(0),
    last_update_time datetime default(getdate())  -- new column
)

alter table epl_packages
add constraint pk_epl_packages
primary key (customer, package_type, package_no)

create table epl_packages_deletions
(
    delete_time datetime,
    customer varchar(8) not null,
    package_type char not null,
    package_no int not null
)

And then these triggers on update and delete (insert is handled automatically by the default constraint on last_update_time):

create trigger tr_upd_epl_packages
on epl_packages
for update
as
-- check for primary key change
if (columns_updated() & 1792) > 0  -- first three columns: 256+512+1024
    insert epl_packages_deletions
    select
        getdate(),
        customer,
        package_type,
        package_no
    from deleted

update A
set last_update_time = getdate()
from epl_packages A
join inserted B
    on A.customer = B.customer and
       A.package_type = B.package_type and
       A.package_no = B.package_no
go

create trigger tr_del_epl_packages
on epl_packages
for delete
as
insert epl_packages_deletions
select
    getdate(),
    customer,
    package_type,
    package_no
from deleted
go

The client program would then do the initial read as follows:

select getdate()

select
    customer, package_type, package_no,
    dimensions, weight_kg, despatch_id,
    loaded, item_count
from epl_packages
where customer = {current customer}
order by customer, package_type, package_no

It would store the output of getdate() to be used in subsequent updates, which would be read from the server as follows:

select getdate()

select
    customer, package_type, package_no,
    dimensions, weight_kg, despatch_id,
    loaded, item_count
from epl_packages
where customer = {current customer} and
      last_update_time > {output of getdate() from previous read}
order by customer, package_type, package_no

select
    customer, package_type, package_no
from epl_packages_deletions
where customer = {current customer} and
      delete_time > {output of getdate() from previous read}

The client program will then apply the deletions and the updated/inserted rows, in that order. This would be done for each table displayed in the client. Any critical comments on this approach and any improvements that could be made would be much appreciated!
We're in a business where customers often ask for a foolproof scheme that even prevents folks with DB privileges from fraudulently inserting, updating or deleting data. The scheme must be so airtight that a judge can be convinced of its reliability.
Hi

I have two linked SQL Servers and I am trying to get remote writes working correctly (fast).

I have configured the DB link on both machines to point at each other's DB. I have security set up to map each other's server logins, and the Server Options Collation Compatible, Data Access, RPC, RPC Out and Use Remote Collation are all checked.

My problem is that when an SP performs

Begin Transaction
Update Local Table
Update Remote Table
Commit Tran

it takes several seconds to complete (about 7 seconds, which is not acceptable to us). This is due to the remote update. How can I improve the response time?

Here is an example of a stored procedure that takes time, where ACSMSM is a remote (linked) SQL Server:

procedure [psm].ams_Update_VFE
    @strResult varchar(8) = 'Failure' output,
    @strErrorDesc varchar(512) = 'SP Not Executed' output,
    @strVFEID varchar(16),
    @strDescription varchar(64),
    @strVFEVirtualRoot varchar(255),
    @strVFEPhysicalRoot varchar(255),
    @strAuditPath varchar(255),
    @strDefaultBranding varchar(16),
    @strIPAddress varchar(23)
as
declare @strStep varchar(32)
declare @trancount int

Set XACT_ABORT ON
set @trancount = @@trancount
set @strStep = 'Start of Stored Proc'

if (@trancount = 0)
    BEGIN TRANSACTION mytran
else
    save tran mytran

/* start insert sp code here */

set @strStep = 'Write VFE to MSM'
update ACSMSM.msmprim.msm.VFECONFIG
set DESCRIPTION = @strDescription,
    VFEVIRTUALROOT = @strVFEVirtualRoot,
    VFEPHYSICALROOT = @strVFEPhysicalRoot,
    AUDITPATH = @strAuditPath,
    DEFAULTBRANDING = @strDefaultBranding,
    IPADDRESS = @strIPAddress
where VFEID = @strVFEID;

set @strStep = 'Write VFE to PSM'
update ACSPSM.psmprim.psm.VFECONFIG
set DESCRIPTION = @strDescription,
    VFEVIRTUALROOT = @strVFEVirtualRoot,
    VFEPHYSICALROOT = @strVFEPhysicalRoot,
    AUDITPATH = @strAuditPath,
    DEFAULTBRANDING = @strDefaultBranding,
    IPADDRESS = @strIPAddress
where VFEID = @strVFEID

/* end insert sp code here */

if (@@error <> 0)
begin
    rollback tran mytran
    set @strResult = 'Failure'
    set @strErrorDesc = 'Fail @ Step :' + @strStep + ' Error : ' + @@Error
    return -1969
end
else
begin
    set @strResult = 'Success'
    set @strErrorDesc = ''
end

-- commit tran if we started it
if (@trancount = 0)
    commit tran

return 0
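One technique that can cut the cost of the remote write, assuming SQL Server 2005 or later and the RPC Out option you already have enabled, is to send the statement to the linked server as a parameterised RPC with EXECUTE ... AT rather than running a distributed UPDATE from the local server. A rough sketch against the ACSMSM server from the post (only a subset of the columns is shown):

-- The ? placeholders are bound to the parameters listed after the string,
-- and the whole statement executes on the remote server as a single RPC.
exec ('update msmprim.msm.VFECONFIG
          set DESCRIPTION = ?,
              IPADDRESS   = ?
        where VFEID = ?',
      @strDescription, @strIPAddress, @strVFEID) at ACSMSM

The distributed form (update ACSMSM.msmprim.msm.VFECONFIG ...) can force the local optimizer to pull rows and take locks across the link, which is often where the seconds go.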
Hello,

Can someone point me to getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables.

Thanks.
- Vish
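If this is SQL Server 2005 or later, one place to look is the index operational stats DMV, which keeps cumulative leaf-level insert/update/delete counters since the index metadata was last cached, so you would snapshot it at the start and end of your period and take the difference. The table name below is a placeholder:

select object_name(s.object_id) as table_name,
       i.name                   as index_name,
       s.leaf_insert_count,
       s.leaf_update_count,
       s.leaf_delete_count
from sys.dm_db_index_operational_stats(db_id(), object_id('dbo.MyTable'), null, null) as s
join sys.indexes as i
  on i.object_id = s.object_id
 and i.index_id  = s.index_id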
This managed application was written to run on a Symbol 3090 Windows CE 5.0 scanning device. We are using the Symbol-provided classes to access the scanning interface and a SQL Compact database on the device to collect the scanned data, and then using merge replication to synchronize the scanned data when the device is docked. The problem we have experienced seems to be related to performance when inserting and updating records in the database.
We have tested inserting/updating 1000 randomly generated records in the database. At first the time to commit a record increases while the database is flushing (the Flush Interval in the connection string is 10 seconds by default), and then, as the database grows, the time to commit every single record keeps increasing, which causes the application to perform slowly as items are scanned into the database. However, the device's program memory remains consistent as items are scanned. From our tests, I found that executing either an update or insert command on a 2 MB SQL Mobile database (up to 10,000 records, depending on the size of the columns) takes nearly 2 to 2.5 seconds to complete. Below is the only code I am executing,
I set up a new mirror server. Everything is good except that the Database Mirroring Monitor is not working for one of the databases. In the monitor, the principal data is showing up as blank.
If I look at dbm_monitor_data on the principal, most of the data columns (e.g. send_queue_size) are null, whereas they have data for the other databases.
Both servers are SQL Server 9.0.2047 Enterprise Edition.
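One thing that might be worth trying on the principal, assuming the monitoring history simply was never refreshed for that database, is to run the monitor update procedure by hand; the Database Mirroring Monitor job normally calls it about once a minute (the database name is a placeholder):

-- Refreshes the dbm_monitor_data history for the named mirrored database.
exec msdb.sys.sp_dbmmonitorupdate @database_name = N'YourMirroredDb'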
BEGIN TRANSACTION OUTERTXN
    BEGIN TRANSACTION
        Copy records from live to archive
    END TRANSACTION with commit or rollback
    execute sproc to write audit log with success or fail
    IF transaction was committed
        BEGIN TRANSACTION
            Delete the copied records from live
        END TRANSACTION with commit or rollback
        execute sproc to write audit log with success or fail
    END IF
END TRANSACTION OUTERTXN with commit if both inner transactions were successful, or rollback if either failed
If either inner transaction rolled back, execute sproc to write an audit log entry saying the whole process is rolling back. END IF

My problem is that if the outer transaction rolls back then I lose the two audit records, because they are part of the transaction scope. I want these executes to commit even if the outer transaction fails.
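One approach that may help: table variables are not affected by a rollback, so the audit rows can be captured in a table variable while the work is in flight and only written to the real audit table after the outer transaction has been committed or rolled back. A minimal sketch, assuming SQL Server 2005 or later and an illustrative dbo.AuditLog table:

declare @audit table (logged_at datetime, message varchar(200))

begin transaction OUTERTXN
begin try
    -- ... archive copy and delete work goes here ...
    insert @audit values (getdate(), 'Archive and delete committed')
    commit transaction OUTERTXN
end try
begin catch
    insert @audit values (getdate(), 'Whole process rolled back')
    rollback transaction OUTERTXN
end catch

-- The table variable keeps its rows even after the rollback above,
-- so the audit entries are written regardless of the outcome.
insert dbo.AuditLog (logged_at, message)
select logged_at, message from @audit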
I have a SQL Server 2005 Express database uploaded to my website with important information in it.
Now I have had to make some table changes and need to update the online database.
I am not sure whether the 'Copy Web Site' function in Visual Studio 2005 will update the database structure and data or simply overwrite it.
Does anybody know the answer? If it overwrites it, could you please point me to information on how I can update the database structure and data without ruining it?
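If it turns out that Copy Web Site just overwrites the .mdf, the usual safe route is to script the structural changes and run them against the live database instead of copying files. A tiny illustrative example (the table and column names are made up):

-- Applied in place, this adds a column without touching the existing data.
alter table dbo.Customers
    add LoyaltyLevel tinyint not null
        constraint DF_Customers_LoyaltyLevel default (0)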
I will be getting data in either Excel or Access form on a daily basis. I would like to automate the process of converting this (Excel or Access) data into a table in an existing SQL Server database. Since this conversion needs to be performed daily, note that I need to update the table that contains the previous day's data.
Is it possible to do this, and if so, can someone tell me how to do it?
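As one possible direction, assuming ad hoc distributed queries are enabled and the Excel file lands in a fixed location each day (the file path, sheet name and table names below are all made up), a scheduled job could pull the sheet into a staging table and then refresh the target table:

-- Load today's spreadsheet into a fresh staging table.
select *
into dbo.DailyFeed_Staging
from openrowset('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Feeds\daily.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]')

-- Swap yesterday's rows for today's inside one transaction.
begin transaction
    delete from dbo.DailyFeed
    insert into dbo.DailyFeed select * from dbo.DailyFeed_Staging
commit transaction

drop table dbo.DailyFeed_Staging

An SSIS package (or a DTS package on older versions) on a daily schedule is the other common way to do the same thing, and handles Access .mdb sources just as easily.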
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
Code Block
create table #IncomeAndExpenseData (
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, and others in %/yr.
For itemvalues of itemIDs whose entry unit of measure is not $/mo, a stored procedure performs calculations that convert them into numbers with a unit of measure of $/mo and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields that are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field, a trigger on IncomeAndExpenseData sets monthitemvalue to null so the stored procedure recalculates monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
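One way this is often handled, assuming the stored procedure only ever writes monthitemvalue and never touches itemvalue, is for the trigger to check whether itemvalue was part of the UPDATE statement at all, and then only clear monthitemvalue on rows where it genuinely changed. A rough sketch against the table from the post (the trigger name and the recordID/itemID join key are assumptions):

create trigger tr_IncomeAndExpenseData_itemvalue
on dbo.IncomeAndExpenseData
after update
as
begin
    set nocount on

    -- The stored procedure only updates monthitemvalue, so UPDATE(itemvalue)
    -- is false for its statements and the trigger does nothing in that case.
    if update(itemvalue)
    begin
        update t
        set    monthitemvalue = null
        from   dbo.IncomeAndExpenseData t
        join   inserted i on i.recordID = t.recordID and i.itemID = t.itemID
        join   deleted  d on d.recordID = i.recordID and d.itemID = i.itemID
        where  isnull(i.itemvalue, -999999) <> isnull(d.itemvalue, -999999)  -- the value really changed
    end
end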
I have read on the web that high protection mode is not recommended except when replacing the existing witness server, but I can't find the reason why anywhere. Can anybody explain? Thanks.
1. Once mirroring is set up, is it possible to switch between high-protection and high-performance modes? If it is, will I have to stop the mirroring, switch the modes and then restart it again?
2. Let's say the principal server went down and I manually failed over to the mirror server. The mirror server runs as the new principal for a couple of days and then I bring the original principal server back up. What needs to be done in order to bring transactions on the original principal server up to date?
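On question 1, the mode can be changed without tearing the session down; the safety level is simply switched on the principal (the database name is a placeholder):

-- High performance (asynchronous) mode:
alter database YourDb set partner safety off

-- High safety (synchronous) mode:
alter database YourDb set partner safety full

On question 2, when the old principal comes back online it normally rejoins the session as the mirror (after ALTER DATABASE YourDb SET PARTNER RESUME if the session was suspended by a forced failover), the outstanding log is shipped across until the databases are synchronized, and you can then fail back manually with ALTER DATABASE YourDb SET PARTNER FAILOVER on the current principal.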
For database mirroring, which strategy is good: creating it with a witness server or without one?
I am doing this for the first time, so I am posting it to the forum.
Also, if we are creating the database mirroring with a witness server for automatic failover, will the witness server need the same amount of hard disk space as the principal or mirror server?
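On the disk space point: the witness does not hold a copy of the database at all, it only arbitrates automatic failover, so it needs no comparable storage and can even run on SQL Server Express. Adding it, once the partners are connected, is a single statement run on the principal (the endpoint address is a placeholder):

alter database YourDb set witness = 'TCP://witnessserver.yourdomain.com:5022'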
We want to migrate our production server without taking the databases down. So what we did was mirror the databases from production to one of our development servers and then take the production server down for the upgrade.
So today we are moving back (mirroring) all the databases from that development server to our new production server.
My question is about the scripts when I move:

production server_old (principal server) - has the script for the principal server
development server (mirror server -> principal server) - has the script for the mirror server, and is now acting as the principal server
new production server (mirror server) - has the script for the mirror server; it will become the principal server once I have moved all the databases

Now, while mirroring between the development server and the new production server, I have to run the scripts for the mirror server only, so both servers now have the mirror script. Is that a problem, or is there another way I can handle this?
Does anybody know any good tutorials for database mirroring (automatic failover) or manual sync with the mirror server? I tried some sites online, but they don't have detailed steps. Please help, as I need to sync data with one of my mirrors using SQL Server 2005.
I have some questions about database mirroring. We have almost 40 databases on one server. Can I set up database mirroring for all the databases? Will it have any effect on performance? We have already set up mirroring on almost 30 databases, and I need to set it up for the remaining 10. Could somebody please help with this? Thank you.
Hi, I'm a novice with SQL Server and I have a problem. I have to replicate a server hosting databases that are managed with SQL Server 2000 SP3; what must I do to set up mirroring between the databases on the two servers so that the data are always aligned?
I am a new user of SQL Server and am preparing for the exam on database mirroring. Can you tell me how I start the mirroring? What is my first step? And is there only one instance involved?
Can we have database mirroring for two databases? I have an application that uses two databases, DB1 and DB2; DB1 is used in the connection string and, through DB1, the tables and SPs of DB2 are handled.
In this case, can I go ahead with database mirroring? Please comment and share any ideas or links on this.