SQL Data Replication and Business Resumption/Continuity
Sep 5, 2007
I am interested in tips/pointers on remote data replication using SQL 2005. Let's say there are two sites (A and B). I plan to have site A comprise two SQL servers (one active and one on standby). Site B will have the same configuration. If site A suffers a disaster, site B should come up with zero or minimal data loss. Will SQL data replication provide a solution, or do I need to look at other methods? By the way, both sites have servers attached to SAN storage. Thanks in advance.
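(A note on the zero-data-loss goal: replication alone is generally not guaranteed to be lossless; for SQL Server 2005 the usual suggestion is synchronous database mirroring. A minimal sketch, assuming mirroring rather than replication; server names, database name, and port are hypothetical:)

    -- On each partner server, create a database mirroring endpoint:
    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);

    -- On the site B server, after restoring the database WITH NORECOVERY:
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://siteA-sql1.example.com:5022';

    -- On the site A (principal) server:
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://siteB-sql1.example.com:5022';
    ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;  -- synchronous commit: no committed-transaction loss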
I have a question, and excuse me if I look dumb, but I can't mirror master or the other system databases. If I lose the primary server, will the database still be open? Don't I need some of these non-mirrorable databases for it to run?
I have got a business logic update conflict handler working, but I have had to work around what appears to be a bug.
Please can someone confirm whether this is indeed a bug, and if so, whether it is a known bug?
My conflict handler needs to take some columns from the publisher row and some from the subscriber row in the event of conflict.
I can quite happily generate a custom dataset which contains the winning row that I want; I can see that because I can step through the conflict handler with debug when a conflict occurs.
However, just returning ActionOnUpdateConflict.AcceptCustomConflictData from the UpdateConflictsHandler method does not set the publisher and subscriber columns correctly. I end up with different values on the two databases.
I have found that the only way to get the correct rows on both publisher and subscriber is to create a new ADO connection to the publisher and actually perform an update, updating all the modified columns. This now works reliably in my testing.
Fortunately, due to business rules, update conflicts are likely to be very infrequent, but I would very much like to avoid having to do the 'unnecessary' update.
Notes:
- I am using column-level tracking, but I have seen the problem with row-level tracking too.
- I have mainly been using SP1, but I have repeated the test on a configuration using the SP2 CTP, and the problem occurs there too.
- The problem is not due to complex logic in my code. If the method just sets customDataSet = publisherDataSet.Copy and then returns ActionOnUpdateConflict.AcceptCustomConflictData, the changed and winning publisher values are not sent to the subscriber.
I have two columns: one column has a document ID (a given document can have many pages), and the second column has the page numbers. I want to find out where the page numbering breaks. For example, doc ID 1 has rows with pages 1, 2, 3, then the next row for document 1 jumps from 3 to 7, continues 8, 9, 10, then jumps again and starts from 17. I want to have the ranges identified.
DocID  PageNumber
1      1
1      2
1      3
1      7
1      8
1      9
1      10
1      17
1      18
1      19
1      20
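(A sketch of one way to identify those ranges in SQL Server 2005, using the ROW_NUMBER "gaps and islands" technique; the table and column names dbo.DocPages, DocID, and PageNumber are assumptions:)

    WITH Numbered AS (
        SELECT DocID,
               PageNumber,
               -- consecutive page numbers share the same (PageNumber - row position) value
               PageNumber - ROW_NUMBER() OVER (PARTITION BY DocID
                                               ORDER BY PageNumber) AS grp
        FROM dbo.DocPages
    )
    SELECT DocID,
           MIN(PageNumber) AS RangeStart,
           MAX(PageNumber) AS RangeEnd
    FROM Numbered
    GROUP BY DocID, grp
    ORDER BY DocID, RangeStart;
    -- For the sample data above, DocID 1 yields the ranges 1-3, 7-10, and 17-20.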
Pocket PC 2003, SQL Compact Edition, SQL2005, IIS6.0
I implemented a business logic handler to deal with conflicts. When I deploy it on the SQL server, which is also the web replication server, the logic handler seems to work fine. However, if I deploy this handler to another web server, the logic handler fails to load.
My environment settings are described below.
Machine A: distributor, with database and publication. The business logic handler is deployed at C:\Program Files\Microsoft SQL Server\90\COM\BusinessLogicHandler.dll. It's registered by using sp_registercustomresolver, with the assembly specified as @assembly = N'C:\Program Files\Microsoft SQL Server\90\COM\BusinessLogicHandler.dll';
Machine B: IIS server. The same business logic handler is deployed at C:\Program Files\Microsoft SQL Server\90\COM\BusinessLogicHandler.dll on Machine B itself.
When I ran the web replication, the Merge Agent reported the error as below.
Error loading custom assembly "C:\Program Files\Microsoft SQL Server\90\COM\BusinessLogicHandler.dll", Error: "Could not load file or assembly 'C:\\Program Files\\Microsoft SQL Server\\90\\COM\\BusinessLogicHandler.dll' or one of its dependencies. The given assembly name or codebase was invalid."
It seemed that the Merge Agent had trouble finding my logic handler, because the path reported in the error log has two backslashes; I have no idea where that came from, and I am not sure if that's the cause of the error. Without the business logic handler, I successfully finished web replication with Machine B syncing against Machine A. If I set up web replication directly on Machine A with the business logic handler, I can successfully sync as well.
Does anyone have any idea how to correctly deploy a business logic handler on a web server?
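(For reference, a sketch of the registration step described above, run at the distributor; the friendly name and class name are assumptions, and the path is the one from the post:)

    EXEC sp_registercustomresolver
        @article_resolver     = N'MyBusinessLogicHandler',
        @is_dotnet_assembly   = N'true',
        @dotnet_assembly_name = N'C:\Program Files\Microsoft SQL Server\90\COM\BusinessLogicHandler.dll',
        @dotnet_class_name    = N'MyCompany.Replication.BusinessLogicHandler';
    -- For web synchronization the registered path must also be valid on the IIS
    -- machine that runs the merge process, i.e. the DLL must sit at the same
    -- location there; worth verifying on Machine B.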
Does anyone have a successful prescribed sequence for installing VS2005 and Business Intelligence Reports Projects on a Vista Business workstation to be used to create reports for a server?
I've looked through everything I can find here and I don't seem to see a clear solution without a lot of trial and error.
Fact is, I've not been successful getting just the report projects to install on a plain XP box either. Of course, report creation works fine on the server itself, but I don't want to work directly on the server.
Can anyone take me through synchronization of contacts within Business Contacts Outlook into Microsoft Small Business Accounts?
I run a standalone PC with no network. When SBA was installed, SQL Server was installed along with it. Apparently you can synchronize contacts within Business Contacts with SBA, but both SBA and Outlook have to work through the same SQL Server instance.
There have been a lot of promising words about data mining for (at least) the last decade. If one investigates how much data mining is actually applied in business, it can be concluded that usage is rather limited. Currently I am investigating the possible causes of this limited data mining usage for my Master's thesis in Industrial Engineering and Management. This investigation includes a literature study and input from a couple of experts in the data mining field.
Currently I want to check my results on two aspects with the aid of other data mining practitioners:
Is my analysis of possible causes complete (in other words: what is missing)?
Are the identified causes confirmed or rejected by data mining practitioners?
In order to proceed with my verification, I am looking for data mining practitioners who are willing to give feedback on my work. I will send the participants a document with my findings to enable them to give their comments and other feedback. (Or if it is a better idea to start the discussion right here in this forum, let me know!) I think that my findings might be useful for every practitioner in the field, and I encourage you to participate.
Are there any people in this forum who are willing to participate? In return you will receive my final results, which may be valuable for you as well.
We have decided to use business objects in our new application, which seems to be working well, since they can be used as a binding source for grids, etc.
I am trying to evaluate whether SSIS is a practical solution for our ETL requirements. The problem is, I can't find any examples or references, or even the slightest hint, that anyone is using it with business objects. Any attempt to search yields a ton of results about a commercial product called "Business Objects" rather than the design pattern.
It is currently a requirement of our development team that all data access must be done via business objects, rather than communicating directly with the database.
Can anyone provide some more information (besides just suggesting I write a custom connection manager)? Is there anyone who has actually made SSIS work with business objects?
I am using the Microsoft .NET Data Provider v1.0 for SAP NetWeaver® Business Intelligence to connect to SAP BW queries. I have tried looking for documentation on the connection string arguments but have not had much success. My question is this: how do you specify the connection timeout in the connection string?
I have read somewhere that any integration technology for SAP has to be SAP certified. Is that true?
If not, what are the implications of using this in a project? Will SAP refuse to support the customer because they are using a non-SAP-certified product?
I am pulling data out of SAP ECC 6 into MS SQL Server 2005 via Tasks > Import Data, using the .NET Framework Data Provider for mySAP Business Suite.
The data I am getting out of SAP is multiplied by 1,000. For example, table CE1LSC0 has field VVQTY, which is a quantity: the value in SAP is 8.0, but when I pull it in with the aforesaid tool it comes through as 8000.00.
Does anyone have any idea what is going wrong? I even asked the Basis guys to reinstall the function module required for the .NET Framework Data Provider for mySAP Business Suite in SAP.
I uninstalled and reinstalled the .NET Framework Data Provider for mySAP Business Suite tool, but I am still encountering the same problem.
I have data with empty strings in the business keys that are used for a Type 2 slowly changing dimension. Because of the empty strings I am getting new rows even though the rows haven't really changed. How do I overcome this?
Example of the data (name and salary are the business keys):
name  salary  age  address
dev   23           klddldldlk
sdfg  24      34   kdlddlkd
When the same data is given as input, the row (dev, 23, klddldldlk) comes through as a new row even though it already exists. How do I overcome this?
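(One common workaround, sketched here under the assumption that the source rows can be cleaned in the staging query before the SCD comparison: map blank business keys to an explicit sentinel so the same logical row always produces the same key. The table name dbo.StagingEmployee is hypothetical; name and salary come from the example:)

    SELECT COALESCE(NULLIF(LTRIM(RTRIM(name)), ''), N'(unknown)')   AS name,
           COALESCE(NULLIF(LTRIM(RTRIM(salary)), ''), N'(unknown)') AS salary,  -- assumes salary arrives as text
           age,
           address
    FROM dbo.StagingEmployee;
    -- NULLIF turns '' into NULL; COALESCE then substitutes a fixed sentinel,
    -- so blank keys compare equal from run to run instead of looking like changes.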
Does anyone know if there is an x64 version of the Microsoft .NET Data Provider for mySAP Business Suite for use with SSIS? On the SAP site, I found a librfc32.dll which works under x64, but the x86 SAP Connector can't use it.
Does anyone know if there are any licence implications (on the SAP side) around the Microsoft .NET Data Provider for mySAP Business Suite?
I heard from a colleague that if my SQL Server has (for example) 5 users, and SQL Server connects to SAP, it would require licenses for 5 more SAP users. Is that right?
Are there any implications similar to this one with SAP?
I deleted some records from an entity, and I'd like to keep the codes contiguous and incremental, meaning no breaks between the code numbers. I created a business rule and applied it, but the codes remain the same.
I used the "Default to a generated value" action, selected the Code attribute, and saved.
Then, back in the entity, I applied business rules, but nothing seemed to happen; there was no change in the codes.
I am currently working with C and SQL Server 2012. My requirement is to bulk-fetch records and insert/update them in another table with some business logic. How do I do this?
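(A sketch of one set-based way to do the insert/update in T-SQL, using MERGE, which is available in SQL Server 2008 and later; table and column names are hypothetical, and the CASE expression stands in for the business logic:)

    MERGE dbo.TargetTable AS t
    USING dbo.SourceTable AS s
        ON t.BusinessKey = s.BusinessKey
    WHEN MATCHED THEN
        UPDATE SET t.Amount = CASE WHEN s.Amount > 1000
                                   THEN s.Amount * 0.95   -- example business rule
                                   ELSE s.Amount
                              END,
                   t.UpdatedAt = SYSDATETIME()
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (BusinessKey, Amount, UpdatedAt)
        VALUES (s.BusinessKey, s.Amount, SYSDATETIME());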
I have a VB.NET app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.NET app. I use RMO in the VB app to connect to the publisher. My problem is I am getting some strange behaviour, as follows:
- if I run the app and invoke the pull subscription it works fine. If I then close my app and go back in, I can access my data without any problem
- If I run the app and try to access data in my SQL Express database, it works fine. I can then close the app, reopen it, and run the pull subscription, and it works fine
however.......
- if I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without first closing and reopening the app, I get a login error
- if I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.NET 2 code to access my SQL Express data in the app, and I am closing all connections, etc.
I am attempting to create the "Classification - Children at Home" data mining model as described in Larson's book. Each time I create it, only the ALL level is shown, and it is impossible to expand the model to look at the decision tree, neural network, clustering model, etc. Drill-down is enabled (I tried it with and without enabling drill-down). The Children at Home field has been populated with values from 0 to 4. Any ideas would be greatly appreciated. Regards, Steve
I have a question, as stated in the subject title: when we want to deploy scorecards to Reporting Services, how do we install and register the scorecard custom data processing extension with the Microsoft Reporting Services server as a prerequisite?
I am looking forward to hearing from you shortly and thank you again.
I have 14 databases; the last one, the 14th, will have lookup tables only. The other 13 databases will have these lookup tables plus data tables. At the end of each day I will make updates to the lookup tables in the 14th database, and I want to be able to push the updates to any or some of the 13 databases. The lookup tables will have only up to 100 rows, so I am not concerned about bandwidth. What is the best way to accomplish this?
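(Given the tiny row counts, one simple alternative to replication is a nightly T-SQL refresh per target database. A sketch, where Db14 holds the master lookup tables, Db01 is one of the 13 targets, and dbo.StatusLookup is a hypothetical lookup table; the three statements delete removed rows, update changed ones, and insert new ones:)

    DELETE t
    FROM Db01.dbo.StatusLookup AS t
    WHERE NOT EXISTS (SELECT 1 FROM Db14.dbo.StatusLookup AS s
                      WHERE s.StatusCode = t.StatusCode);

    UPDATE t
    SET t.Description = s.Description
    FROM Db01.dbo.StatusLookup AS t
    JOIN Db14.dbo.StatusLookup AS s ON s.StatusCode = t.StatusCode;

    INSERT INTO Db01.dbo.StatusLookup (StatusCode, Description)
    SELECT s.StatusCode, s.Description
    FROM Db14.dbo.StatusLookup AS s
    WHERE NOT EXISTS (SELECT 1 FROM Db01.dbo.StatusLookup AS t
                      WHERE t.StatusCode = s.StatusCode);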
I have 3 databases, which I will reference as Database1, Database2 and Database3.
Database1 and Database2 are running SQL Server 2000; Database3 is running SQL Server 2005.
Database1 and Database2 each have a Member Info table containing login information for separate websites.
Database3 has a Member Info table also, but the login information is populated from the Member Info tables in Database1 and Database2.
Currently this is achieved by running a job once a day that clears out the Member Info table on Database3 and then runs two insert statements that take the data from Database1's and Database2's Member Info tables and insert it into Database3's Member Info table.
This all works fine, but I don't think it is the best way to do it. Also, doing it this way leaves the chance that the job might hang and leave Database3's Member Info table empty.
I am trying to find out if there is a more efficient way to do this, and any ideas are appreciated!
One idea I thought about trying is to populate Database3's Member Info table and then place triggers on Database1's and Database2's Member Info tables. If a row is created/updated/deleted in Database1's or Database2's Member Info table, the trigger could take that information and update the corresponding information in Database3's Member Info table. But I have never really messed with triggers, so this may not be possible.
Another idea I thought of is to run a job once a day with a cursor that goes through each row in Database1's and Database2's Member Info tables; if the information is not found in Database3's Member Info table, it could then create/update/delete that info.
Just wanted some opinions/ideas before I tackle this.
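(The trigger idea is workable. A sketch of the trigger for Database1, with hypothetical column names and a SourceDB column on Database3's table to tell the two sources apart; a mirror-image trigger would go on Database2. Deleting then re-inserting the touched rows covers INSERT, UPDATE, and DELETE in one pattern:)

    CREATE TRIGGER trg_MemberInfo_Sync
    ON dbo.MemberInfo                -- in Database1
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- remove the old copies of any touched rows (handles UPDATE and DELETE)
        DELETE t
        FROM Database3.dbo.MemberInfo AS t
        JOIN deleted AS d ON d.MemberID = t.MemberID
        WHERE t.SourceDB = 1;

        -- re-insert the current versions (handles INSERT and UPDATE)
        INSERT INTO Database3.dbo.MemberInfo (MemberID, LoginName, PasswordHash, SourceDB)
        SELECT i.MemberID, i.LoginName, i.PasswordHash, 1
        FROM inserted AS i;
    END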
Hi all. Where is the replication log stored? My replication failed because the server went down. I have it set up as transactional replication. I just want to know where that log will be available.
Hi. What is the best way to do a two-way data transfer between SQL Server 2005 and SQL Server 2000? The data transfer should take place via triggers firing in either of these databases. Please help. Is there any tutorial online which addresses this? Thank you.
I have two sets of data, i.e., two instances of SQL Server in two different cities. Data entry happens at both places, and I need to balance the data on the two servers, i.e., I need to synchronize the data.
Currently I am exchanging the data between the two sides and using a buffer database to apply the incremental information (running a SQL script).
I have GUID columns in my database, and I am new to replication. I wanted to know whether I can implement replication and what changes I need to make (e.g., replacing GUID columns with identity columns). Is permanent connectivity between the two databases required, or can replication be performed whenever they are connected?
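(For what it's worth, merge replication does not require replacing GUID columns with identity columns; it actually relies on a uniqueidentifier column marked ROWGUIDCOL to track rows, and it is designed for servers that are only intermittently connected. A sketch of adding such a column where a table lacks one; the table name is hypothetical:)

    ALTER TABLE dbo.Orders
        ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL
            NOT NULL CONSTRAINT DF_Orders_rowguid DEFAULT NEWID();
    -- If the column is absent, merge replication can also add it automatically
    -- when the article is created.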
Hi, I'm PCV. I want to know how to calculate the amount of data (in MB) that is transferred from one server to another, publisher ---> subscriber, using merge replication. I know that the amount of data depends on the number of rows and the size of the columns; I only want to know how to calculate that amount of data. I am using SQL Server 2000 on Windows XP Professional. Thank you.
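(A rough back-of-the-envelope method, with hypothetical table name and figures: take the average row size from sp_spaceused, multiply by the number of rows changed since the last merge, and allow extra for merge metadata and protocol overhead:)

    EXEC sp_spaceused 'dbo.Orders';   -- reports data size and row count;
                                      -- data KB / rows ~ average row size in bytes

    -- e.g. 5,000 changed rows at ~400 bytes per row:
    SELECT (5000 * 400) / 1048576.0 AS ApproxMB;   -- ~1.9 MB before merge overhead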
I'm searching for information regarding database replication performance. We need to compare the performance of replication for SQL Server and Oracle, and it is urgent! Can anyone describe the performance bottlenecks for each database when performing replication, or point me to a white paper or webpage?
We are looking at setting up peer-to-peer transactional replication between two databases. We have a customer requirement to encrypt the SSN in this database. I have configured replication successfully. I have also successfully encrypted the SSN using a symmetric key (with encryption by certificate). What I haven't done yet is set up encryption to work across a replication topology.
What steps would I have to follow in order to be able to encrypt the SSN on one server, replicate it to the subscriber, and then decrypt the SSN on the subscriber? For this scenario, is there a better way to handle encryption other than a symmetric key encrypted by a certificate?
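(One approach often suggested for this scenario: create the symmetric key with explicit KEY_SOURCE and IDENTITY_VALUE on both servers, so publisher and subscriber hold identical keys, and the encrypted varbinary column replicates like any other data and can be decrypted on either side. A sketch; the key and certificate names, phrases, and table are placeholders:)

    -- Run the equivalent script on BOTH publisher and subscriber so the keys match.
    CREATE CERTIFICATE cert_SSN WITH SUBJECT = 'SSN column protection';

    CREATE SYMMETRIC KEY key_SSN
        WITH ALGORITHM = AES_256,
             KEY_SOURCE = 'same key-derivation phrase on both servers',
             IDENTITY_VALUE = 'same identity phrase on both servers'
        ENCRYPTION BY CERTIFICATE cert_SSN;

    -- Encrypting on the publisher; the varbinary ciphertext then replicates:
    OPEN SYMMETRIC KEY key_SSN DECRYPTION BY CERTIFICATE cert_SSN;
    UPDATE dbo.Customer SET SSN_enc = EncryptByKey(Key_GUID('key_SSN'), SSN);
    CLOSE SYMMETRIC KEY key_SSN;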