We are setting up a client's database with transactional replication that allows the subscriber to send updates via pull subscriptions.
The plan is to create a second database so the users can keep working in the original database while we get replication set up and working. Once we are confident that the replication is working on the second database, the plan is to back up the old one, then detach it, and then restore the new database with the latest data from the old one.

My question is whether this would preserve all the replication configuration that we set up. Since replication adds a column to each table, what would happen to that column? Alternatively, is there an approach that would allow us to set up the replication without disturbing the users' work, and then bring the replication live with the latest data?
I am also wondering how to set the synchronization to happen once daily; I do not see where I can set that. I only see options for continuous vs. on demand. Does on demand mean I can somehow schedule it with an external program to run once a day?
Thanks for your patience, and I hope my questions make sense.
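On the once-a-day question: the synchronization is performed by a replication agent that runs as a SQL Server Agent job, so "on demand" can become "once daily" by attaching a schedule to that job in msdb; no external program is needed. A minimal sketch, assuming a hypothetical agent job name:

USE msdb;
EXEC dbo.sp_add_jobschedule
    @job_name = N'MyPublisher-MyPublication-MySubscriber-1',  -- hypothetical agent job name
    @name = N'DailySync',
    @freq_type = 4,                -- daily
    @freq_interval = 1,            -- every 1 day
    @active_start_time = 020000;   -- run at 02:00:00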
I have transactional replication. The publisher DB contains a table called Course with a timestamp column, and these column values are unique within the publisher DB.
The subscriber DB contains the same copy of the data as the publisher DB's Course table, but the timestamp column values are different.
So my problem is: how can I keep these two Course tables identical (same timestamp column values in both tables)?
NOTE: The publisher and subscriber DBs reside under two different SQL Server instances.
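For context on why the values differ: timestamp (rowversion) values are generated by each database on write and cannot be inserted explicitly, so replication cannot carry them across. A commonly suggested workaround, assuming the subscriber-side table can be redefined, is to declare the subscriber's column as plain binary(8) so the publisher's values arrive as ordinary data. A hedged sketch with hypothetical columns:

CREATE TABLE dbo.Course (
    CourseId int NOT NULL PRIMARY KEY,  -- hypothetical key column
    RowVer binary(8) NOT NULL           -- receives the publisher's timestamp values as plain bytes
);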
Hi guys :( I've searched all the topics here about replication, but now I am wondering: which Windows XP settings do I have to edit? I read something about DCOM settings and changing permissions. What is the RIGHT way to share a file (or whatever is needed), because the schema cannot be accessed for some reason? I need a whole list of things to do, because things are not working out after reading the tutorials on other sites. Please, someone help me.
Basically I need to have two servers that replicate each other. I realized that the instances have to be named, so I am just totally confused. I'll even let anyone connect remotely to help me out if they want, or an explanation will do fine. Please! Thanks guys! Happy holidays!
Sebastian Garibaldi writes: "Hi, I'm Sebastian from Argentina, and I have a problem with a SQL database. I receive errors from the database about broken indexes and consistency errors. I set the fill factor with the information from Books Online; I used 70 on tables that have a lot of INSERT/UPDATE/DELETE activity, but it only works for two or three days.
I know that I must do some maintenance on the database, but which tools should I use?
Here I paste an error from DBCC CHECKTABLE:
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 981108969. The text, ntext, or image node at page (1:949979), slot 52, text ID 57535781339136 is not referenced.
Server: Msg 8964, Level 16, State 1, Line 1
Table error: Object ID 981108969. The text, ntext, or image node at page (1:949979), slot 53, text ID 57535782191104 is not referenced.
DBCC results for 'FCRMVI'.
There are 108460 rows in 17430 pages for object 'FCRMVI'.
CHECKTABLE found 0 allocation errors and 2 consistency errors in table 'FCRMVI' (object ID 981108969).
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE (ArleiProd.dbo.FCRMVI).
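The last line of the output names the repair path. A hedged sketch of that path in SQL 2000 syntax (repair_allow_data_loss can discard data, so take a verified backup first):

ALTER DATABASE ArleiProd SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKTABLE ('ArleiProd.dbo.FCRMVI', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE ArleiProd SET MULTI_USER;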
We have many transactional publications and would like to have identical settings on each of them. Is there any way to compare the settings of these publications using a script?
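One hedged way to script the comparison: run the same settings query in each publication database, capture the outputs, and diff them. For example:

-- Per publication database:
EXEC sp_helppublication;
-- Or read the catalog table the procedure uses:
SELECT name, repl_freq, status, immediate_sync, allow_push, allow_pull
FROM dbo.syspublications;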
I have two production servers in the same data center. I need to ensure that the database remains available if a catastrophic server failure or a disk failure occurs, I need to maintain transactional consistency of the data across both servers, and I need to achieve these goals without manual intervention. It was suggested that I use this option:

Two servers configured on the same subnet, with a SQL Server availability group configured in synchronous-commit availability mode.

But I think the correct answer should be:

Two servers configured in a Windows failover cluster in the same data center, with SQL Server configured as a clustered instance.
Hi, I have transactional replication set up on one of our MS SQL 2000 (SP4) Standard Edition database servers. Because of an unfortunate scenario, I had to restore one of the publication databases. I scripted the replication module and dropped the publication first, then did a full restore.

When I try to set up the replication through the script, creating the publication fails with the following error message:

Server: Msg 2714, Level 16, State 5, Procedure SYNC_FCR To GPRPTS_GL00100, Line 1
There is already an object named 'SYNC_FCR To GPRPTS_GL00100' in the database.

It seems the previous replication set up these system views (SYNC_FCR To GPRPTS_GL00100). I have tried dropping the replication module again to see if it drops the views, but it didn't. The replication fails with some weird error and complains about these views when I try to run the sync. I even tried running sp_removedbreplication to drop the replication module, but the views do not seem to disappear.

My question is: how do I remove these system views, or how do I make the replication work without using these views, or create new views? Why is it creating those system views in the first place?

I would appreciate it if anyone can help me fix this issue. Please feel free to let me know if any additional information or scripts are needed. Thanks in advance.

Regards,
Aravin Rajendra.
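One way to see what the restore carried over is to look the leftover objects up by name and drop them explicitly; a hedged sketch (the database name is a guess based on the error text):

USE GPRPTS;  -- hypothetical name of the restored publication database
SELECT name, type FROM dbo.sysobjects WHERE name LIKE 'SYNC_FCR%';
-- If they turn out to be ordinary views left behind by the old publication,
-- dropping them by their bracketed names should let the script recreate them:
DROP VIEW [SYNC_FCR To GPRPTS_GL00100];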
How do I change data type settings when building my tables? Specifically, I want to reduce the number of decimal places on 'smallmoney' so it shows 2 rather than 4. Thank you.
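For what it's worth, smallmoney always stores four decimal places; the usual approach is to leave the column alone and format on the way out. A small sketch with hypothetical names:

SELECT CAST(UnitPrice AS decimal(10, 2)) AS UnitPrice  -- UnitPrice: hypothetical smallmoney column
FROM dbo.Products;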
In SQL 2000, working with the Maintenance Plan Wizard, what would be the best settings and values to choose in the "Update Data Optimization information" window?
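For context, that wizard page essentially schedules index rebuilds (with an optional fill factor or free-space percentage); the hand-rolled SQL 2000 equivalent for a single table would be something like:

DBCC DBREINDEX ('dbo.MyTable', '', 90);  -- hypothetical table; rebuild all its indexes at 90% fill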
I tried the Data Mining Add-ins for Office 2007 - CTP December 2006. Test settings: Windows XP SP2 English with Italian regional settings, Office 2007 English (RTM), SQL Server 2005 Developer (with SP2 CTP Dec06), and Data Mining Add-ins for Office 2007 (CTP Dec06).
If I keep regional settings in Italian, I get an error like this:
Old format or invalid type library. (Exception from HRESULT: 0x80028018 (TYPE_E_INVDATAREAD))
If I change regional settings to English, the Add-in works.
I found this description of a possible cause of the problem: http://msdn2.microsoft.com/en-us/library/ms178780(vs.80).aspx - if this is the issue, it would be necessary to change the ExcelLocale1033Attribute on the component.
Is there another workaround other than to install the Office 2007 MUI?
Marco Russo http://www.sqlbi.eu http://www.sqljunkies.com/weblog/sqlbi
I have a requirement wherein PDF files are rendered automatically from an .rdl (report definition language) file through the SSRS scheduler. The generated PDF file is then emailed to a mailing list through the same scheduler. However, there are situations where the PDF is generated as an empty file (under certain specified circumstances) by the automatic scheduler run. In this situation, the PDF should not be emailed at all.

I would appreciate input on how to prevent generation of the PDF file when there are no records in the dataset that binds to the .rdl. Alternatively, is there any indicator via which the scheduler can be alerted NOT to pick up files 0 KB in size?
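One hedged approach, assuming data-driven subscriptions are available (an Enterprise feature): drive the delivery from a query that returns zero recipient rows whenever the report's dataset would be empty, so no PDF is generated or mailed at all. A sketch with hypothetical names:

SELECT N'list@example.com' AS RecipientEmail      -- hypothetical mailing list
WHERE EXISTS (SELECT 1 FROM dbo.ReportSource);    -- hypothetical table behind the report dataset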
SQL transactional replication, specifically initializing transactional replication via SQL backup and restore. So, the scenario:

S1 = primary server 1
R1 = transactional replication server 1
R2 = transactional replication server 2

So we have S1 replicating to R1, and we want to build another subscriber, which is R2.

Can I take the replicated database from R1, back it up, then restore it to R2, and create the publication/subscription?

Will that work? If not, is there an easier way to avoid the snapshot? The reason I ask is that we do have the replication snapshot, but it takes a long time. One of my colleagues said he tried this, but replication created duplicate rows on R2, which is why he had to use the replication snapshot.
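If these servers are on SQL 2005 or later, there is a supported route for exactly this: initializing the subscription from a backup, which skips the snapshot. The caveat is that the backup must be of the publication database on S1; restoring R1's subscriber copy is not the supported path, which may explain the duplicate rows. A hedged sketch with hypothetical names:

-- On the publisher, allow initialization from a backup:
EXEC sp_changepublication
    @publication = N'MyPub',                      -- hypothetical publication
    @property = N'allow_initialize_from_backup',
    @value = N'true';
-- After restoring that backup on R2, create the subscription:
EXEC sp_addsubscription
    @publication = N'MyPub',
    @subscriber = N'R2',
    @destination_db = N'MyDb',                    -- hypothetical database
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\backups\MyDb.bak';    -- hypothetical path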
Hi experts. My msdb database is about 2 GB, which to me is really big. Is there a way to maintain that, and how? Also, the disk-level fragmentation is bad on one of my drives (some data files are in there, and msdb is there too). Is there any third-party tool I can use to do the defragmentation and set a schedule? Please help, thanks!
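On the msdb size: very often the bulk is backup/restore history that nothing ever prunes. A hedged sketch that trims history older than 90 days (worth checking first what is actually big in msdb):

USE msdb;
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(day, -90, GETDATE());
EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;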
I have a VB.NET app that accesses a SQL Express database. I have transactional replication set up on a SQL 2000 database (the publisher) and a pull subscription from the VB.NET app. I use RMO in the VB app to connect to the publisher. My problem is that I am getting some strange behaviour, as follows:

- If I run the app and invoke the pull subscription, it works fine. If I then close my app and go back in, I can access my data without any problem.

- If I run the app and try to access data in my SQL Express database, it works fine. I can then close the app, reopen it, run the pull subscription, and it works fine.

However...

- If I run the app, invoke the pull subscription (which runs fine), and then try to access data in my local SQL Express database without first closing and reopening the app, I get a login error.

- If I run the app, try to access data in my local SQL Express database (which works fine), and then try to run the pull subscription, I get a "the process cannot access the file as it is being used by another process" error. In this case I need to restart the SQL Express service to be able to run replication again.
I get exactly the same behaviour when I use the Windows Sync tool (with my app open at the same time) instead of my RMO code to replicate the data.
I am using standard ADO.NET 2 code to access my SQL Express data in the app, and I am closing all connections, etc.
Hi all, pardon me for asking such a question; I am still a beginner at ASP.NET. I have a project that requires a single operation to update two databases. How do I maintain a transaction across these two databases? Please advise, thank you!
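A hedged sketch from the database side, with entirely hypothetical names: if both databases live on the same SQL Server instance, one ordinary transaction already spans them; if they are on different servers, a distributed transaction (MSDTC, reached here through a linked server) is the usual tool. On the ASP.NET 2.0 side, System.Transactions.TransactionScope wraps the same idea.

DECLARE @OrderId int;
SET @OrderId = 42;  -- hypothetical key
BEGIN DISTRIBUTED TRANSACTION;
UPDATE Db1.dbo.Orders SET Status = 'paid' WHERE OrderId = @OrderId;       -- hypothetical first database
UPDATE LinkedSrv.Db2.dbo.Ledger SET Posted = 1 WHERE OrderId = @OrderId;  -- hypothetical linked server / second database
COMMIT TRANSACTION;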
My application uses SQL to perform operations, something like connection.execute(query), so there is only a connection object, no recordset object or anything like that.
I want to run multiple instances of my application, so I want to maintain the integrity of the data, and I am looking for a SQL-side locking mechanism so that concurrent data access doesn't corrupt the data.
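Since everything goes through connection.execute, one server-side option is an application lock (sp_getapplock), which serializes any code paths that request the same resource name. A minimal sketch; the resource name is arbitrary:

BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'my-critical-section', @LockMode = 'Exclusive';
-- ... the statements the app sends via connection.execute(...) run here ...
COMMIT TRANSACTION;  -- the lock is released when the owning transaction ends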
Can anyone give me an idea what percentage of organizations use 'code' to maintain the parent-child relations on their tables rather than having FK constraints in the db model? I ask because all the companies that I have worked with used 'code' to control the relationships across the tables (not PK/FKs!). Thanks. Neil.
I have to synchronize 2 databases hourly but am having difficulty maintaining foreign key relations. These tables use auto-increment columns as primary keys, with child records in other tables related via foreign keys. I can't change the way the local software uses primary or foreign keys, as it is hardcoded in the local app (Microsoft Retail Management System); however, the web-remote app is easily customized. I am using CDB Synchronizer to sync the two databases because the remote one is MySQL.

Example table layout: the Items table has an auto-increment primary key 'id'; the TransactionEntry table has its own auto-increment primary key 'id' and a foreign key 'item_id'.

Example of how the remote and local database foreign key relations become incorrect after a sync using CDB Synchronizer: at 8:00 am, on first installation of the database, the 'item' tables' auto-increment 'id' columns match, with the last record's id value at '6'.
Locally, the following products are added:

11001 short sleeve t --- gets added with primary key 'id' of '7' in the 'item' table

11002 long sleeve t --- gets added with primary key 'id' of '8' in the 'item' table

Remotely, the following products are added:

21001 hipster jeans --- gets added with primary key 'id' of '7' in the 'item' table

31001 overalls --- gets added with primary key 'id' of '8' in the 'item' table

Remotely, someone orders 21001, so the TransactionEntry table records a sale with "item_id" of '7'; but after the sync with our local server,

the product with "item_id" of '7' is "short sleeve t".

9:00 - the sync takes place... the item_id foreign key isn't accurate because of the independent auto-increment values.

Whenever a product is ordered, the TransactionEntry table records the product's id value available in its own local copy. After the sync, the 'item_id' field will not match the 'Item' table's id field, and the data about the transaction's product is lost.

I have read of solutions involving staging/temporary tables to cascade-update foreign keys before syncing into the main database, but hopefully there is a more elegant solution than this. If that is the only way, will it be reliable? A foreign key mismatch seems like it could cause havoc.
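The usual fix when the key scheme itself can't change is to give each site a non-overlapping identity range, so ids never collide and the foreign keys survive the sync untouched. A hedged sketch (tables simplified; on the MySQL side the same effect comes from the auto_increment_increment and auto_increment_offset server variables rather than DDL):

-- Local SQL Server side: odd ids only
CREATE TABLE Items (id int IDENTITY(1, 2) PRIMARY KEY, sku varchar(20));
-- Remote side takes even ids, e.g. in MySQL:
--   SET GLOBAL auto_increment_increment = 2;
--   SET GLOBAL auto_increment_offset = 2;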
I would like to maintain version control of all the SQL objects (stored procedures, views, tables) and maintain source code version control. Is there any way to use TFS to maintain versions of SQL objects? Also, what folder structure works best when using TFS?
Hey everyone, I'm new to .NET and I've recently inherited a rather large and busy ASP.NET website. I was asked to add a testimonials section on each page that will randomly pull a testimonial out of the db. This is fine; however, I'm getting random errors about the DB connection either being closed or connecting. Here is the code for the testimonials class:

public SqlDataReader GetTestimonials(ref SqlDataReader reader, int iCatID, string sLanguageType)
{
    SqlCommand cmd = new SqlCommand("sp_DVX_Testimonials_Fetch", Connection);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.Add(new SqlParameter("@Cat_ID", iCatID));
    cmd.Parameters.Add(new SqlParameter("@LanguageType", sLanguageType));

    reader = cmd.ExecuteReader();

    return reader;
}

I know this isn't the best way to do this (especially on every page; this site averages about 1,000 hits a day), so I was wondering: is there a way to maintain a single DB connection, set up in Application_Start, so I don't have to worry about this error? If not, does anyone have any ideas as to what would help? Thanks in advance!
Hi guys, we have a scenario where there are about 50 tables in our database, and we want to build an intranet web application for users within the office to access those tables. Users' ability to access tables falls into different categories:

Some users canNOT view some tables at all.
Some users can ONLY view some tables, but not insert/update any field.
Some users can view and also insert/update some tables (while at the same time possibly not having view (select) permission on some other tables).

Now, what is the right way to implement this? I say we have to have Role, RolePermission, User, and UserPermission tables inside our database to implement this (something which would look like the roles and users inside MSSQL), and we only have one user for our database (MachineName/ASPUSER) to access the database and all the tables within.

My colleague says no; instead of creating all these tables and implementing this, we should add every user of our application as a database user inside MSSQL, in the database users.

All the web applications I have seen so far (DNN, CommunityServer, ...) have tables to implement all this, and they don't add users inside MSSQL. Now, which way is the way to go, and what problems might we run into if we use SQL users? Is this possible at all? How can I convince him that we have to make and use our own tables to manage this? Thanks for any help, Mehdi
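For what it's worth, a minimal sketch of the application-level model described above, with all names illustrative; the app evaluates these tables and connects to SQL Server as the single ASPUSER account:

CREATE TABLE Users (UserId int IDENTITY(1,1) PRIMARY KEY, UserName nvarchar(50) NOT NULL);
CREATE TABLE Roles (RoleId int IDENTITY(1,1) PRIMARY KEY, RoleName nvarchar(50) NOT NULL);
CREATE TABLE UserRoles (
    UserId int NOT NULL REFERENCES Users(UserId),
    RoleId int NOT NULL REFERENCES Roles(RoleId),
    PRIMARY KEY (UserId, RoleId)
);
CREATE TABLE RolePermissions (
    RoleId int NOT NULL REFERENCES Roles(RoleId),
    TableName sysname NOT NULL,
    CanSelect bit NOT NULL,
    CanInsert bit NOT NULL,
    CanUpdate bit NOT NULL,
    PRIMARY KEY (RoleId, TableName)
);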
Say I have a result set with two fields, Number and Letter:

1 A
3 A
1 B
2 B

The result set is ordered by the Letter column. How can I select the distinct numbers from the result set but maintain the current order? When I try

select distinct Number from MyResultSet
it will reorder the new result set by the Number field and return
1
2
3
However, I'd like to maintain the Letter order and return

1
3
2
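One hedged way to express "distinct, but in first-appearance order by Letter", assuming MyResultSet is a table or view with Number and Letter columns:

SELECT Number
FROM MyResultSet
GROUP BY Number
ORDER BY MIN(Letter);  -- each number sorts by the first letter it appears under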
I would appreciate it if anyone could help me with this matter.
The problem is: how do I maintain the order of a perpetual inventory transaction table when updating in batch mode?
I have designed a table to hold all inventory transactions. The table order is perfectly maintained with online (row-by-row) updating, but if I go with batch updating, then the order of the transactions collapses. For example, consider the following table design (note that I used an auto-number to maintain the order).
Version used: SQL Server 2000 with service pack updates.
Of course, if the order collapses, the costing cannot be accurate, so please, could anyone help me solve this problem? Many software packages do not post in sequence if we choose batch mode.
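Since the table design itself didn't come through, here is only a hedged sketch of the usual cure: don't depend on insert order at all; carry the business transaction time and sort on it explicitly, with the auto-number as a tiebreaker, so batch posting order stops affecting costing:

CREATE TABLE InventoryTxn (
    TxnId int IDENTITY(1, 1) PRIMARY KEY,  -- the "auto number"
    TxnDate datetime NOT NULL,             -- business transaction time
    ItemId int NOT NULL,
    Qty int NOT NULL
);

SELECT TxnId, TxnDate, ItemId, Qty
FROM InventoryTxn
ORDER BY TxnDate, TxnId;  -- deterministic order regardless of posting mode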
I have a new cluster (2 sync, 2 async) with about 50 databases ranging from 1 GB to 200 GB (all of the objects are compressed), on SQL Server 2012 SP1 CU7. I have several drives for logs with 200 GB of space on them. I am having issues rebuilding indexes in this environment; i.e., I have a table with a heavily fragmented clustered index (~80%), and the table has about 60 GB of data, which uncompressed would be about 160 GB.
The index rebuild creates a log file big enough to consume all the space that I have for logs, and that is only one table, so my old process for maintaining indexes (ola.hallengren code) surely won't work in this scenario.
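A hedged alternative for the big table: ALTER INDEX ... REORGANIZE works in small, fully logged, interruptible steps, so frequent log backups can truncate the log while it runs, instead of a rebuild's single giant transaction. For example:

ALTER INDEX ALL ON dbo.BigTable REORGANIZE;  -- hypothetical table name; take log backups while it runs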
I am trying to encode a barcode in a report in Reporting Services using SQL Server 2005. I was told it is possible to maintain the barcode in the exported Excel sheet; however, I did not find the barcode in it, only a square black image.
I am working in ASP.NET 2.0 and using SQL Server 2000 as the backend. In my application I need to insert/update an Oracle database table residing on a different server. Please let me know how I can maintain two different connections to databases on different servers.
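Two separate connections from ASP.NET (a SqlConnection plus an Oracle connection) is one route; a hedged alternative keeps the app on its single SQL Server connection and reaches Oracle through a linked server. A sketch with hypothetical names:

EXEC sp_addlinkedserver
    @server = N'ORA',           -- hypothetical linked server name
    @srvproduct = N'Oracle',
    @provider = N'MSDAORA',     -- Microsoft OLE DB Provider for Oracle
    @datasrc = N'OraTnsAlias';  -- hypothetical TNS alias
-- The app can then update Oracle through the one SQL Server connection:
UPDATE OPENQUERY(ORA, 'SELECT status FROM orders WHERE id = 1')
SET status = 'shipped';  -- remote table/columns hypothetical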
I am going to use the backup and restore functions to copy data from one server to the other. We would like to keep the servers in sync at this point (not instantaneously, but updating the second server, say, once a day), and I would like to do this by using transaction log backups. I have tried restoring from individual transaction log backups, but it seems to require restoring the full database as well. The database is roughly 6 GB, and the transaction logs are about 25-50 MB. I really do not want to have to restore the full database every time.
I know I could set up replication, but that has been more of a pain to administer on a daily basis. I would like a schedule-and-forget type of thing. This is going to be done on 6.5.
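The full-restore-every-time symptom usually means the database was recovered (brought online) after the previous restore, which breaks the log chain. The pattern that avoids it, sketched in later-version syntax (on 6.5 the rough equivalents are LOAD DATABASE / LOAD TRANSACTION; paths hypothetical):

-- One-time: restore the full backup and leave the database unrecovered.
RESTORE DATABASE MyDb FROM DISK = 'D:\bak\MyDb_full.bak' WITH NORECOVERY;
-- Daily: apply just the newest log backup; no full restore needed.
RESTORE LOG MyDb FROM DISK = 'D:\bak\MyDb_log.trn' WITH NORECOVERY;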
I am trying to export the result of a SELECT into a .csv file using SQL Server 2000 DTS. The data for the varchar fields has leading zeroes in the database, which are very much required in the .csv file.
But the .csv file trims the leading zeroes. How do we force it to maintain the same data as in the source?
I used a Text File Destination connection as the destination, with the options below:

File Extension: .csv
File Format: Delimited
File Type: ANSI
Text Qualifier: Double Quotes ("")
Row Delimiter: {CR}{LF}
Column Delimiter: comma
Source data: 0123
Target data (requirement): 0123

The data in the .csv: 123 (this is the issue)
When I open this file in a text editor, I do see the data in double quotes: "0123".
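Since the quotes and zeroes are present in the raw file, the export itself is fine; it is the viewer (typically Excel) that converts the field to a number and trims it. A hedged workaround, if the file is destined for Excel, is to emit each value as a text formula that Excel won't re-type (column names hypothetical):

SELECT '="' + MyVarcharCol + '"' AS MyVarcharCol  -- Excel reads ="0123" as literal text
FROM dbo.MyTable;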