Site A Db1 has to perform transactional replication to Site A Db2 and to Site B Db1 and Db2.
I set up Site A as the Publisher and Distributor, and both Site A and Site B as Subscribers.
Site B is in a different geographical area (state).
----------
Please suggest the best scenario to save bandwidth and server load for the Publisher and Distributor.
-------
Earlier I thought I would implement local replication at Site B, between Db1 and Db2, but SQL Server does not let me set Db1 as Publisher and Distributor for its local database Db2.
-------
P.S. All my databases need the same transactions even though they run on different hardware in different places, so please don't ask why I need four similar databases.
I have a huge replication task I need to perform. The source table has over 250,000,000 records, and approximately 400,000 records are added every day.
Currently I am running snapshot replication, and it takes 10 to 11 hours to complete (the internet connection between the production and the report server is slow). The reason I am using this method is that the source table does not have a timestamp column (which I could use to run incremental replication), and I cannot modify that table because it belongs to third-party software. It does have a field that is linked to another table holding the timestamp.
Here is the source table and the other table's structure for reference:
DataLog
    Name               Datatype   Size
    DataLogStampID     int        4
    Value              float      8
    QuantityID         smallint   2

DataLogStamp
    Name                                                      Datatype   Size
    ID (foreign key to DataLogStampID in the table above)     int        4
    SourceID                                                  smallint   2
    TimestampSoruceLT                                         datetime   8
What I would like to know is: what is the best practice to handle this kind of replication with MS SQL Server 2000?
I am thinking of developing a procedure that follows the steps below, using a temp table:
This procedure will run (say) every night.
The temp table will contain a timestamp field.
As a one-time step, I will run a snapshot replication so that the existing data is copied to the destination db.
I will record this timestamp into a repl_status table which holds the timestamp of the last replication.
The procedure will read the record from repl_status, select all records from the source table that were added since that date, and put them into a new temp table (with the same structure as the source and destination tables). A snapshot replication will then "add" only these new records to the destination table every night. Using this method, I will only transfer the records added since the last replication (last night), which is far fewer records.
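A minimal T-SQL sketch of that nightly incremental pull, assuming a staging table DataLog_Staging and a repl_status table with a single LastReplTimestamp column (both names are hypothetical); the join uses the DataLogStamp table from the structures above:

    DECLARE @last datetime, @now datetime
    SELECT @last = LastReplTimestamp FROM dbo.repl_status
    SET @now = GETDATE()

    -- pull only the rows stamped after the previous run
    INSERT INTO dbo.DataLog_Staging (DataLogStampID, Value, QuantityID)
    SELECT d.DataLogStampID, d.Value, d.QuantityID
    FROM dbo.DataLog AS d
    JOIN dbo.DataLogStamp AS s ON s.ID = d.DataLogStampID
    WHERE s.TimestampSoruceLT > @last
      AND s.TimestampSoruceLT <= @now

    -- remember the cutoff for tomorrow's run
    UPDATE dbo.repl_status SET LastReplTimestamp = @now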
I have a product that sits on a main server and wish to implement functionality to allow salesmen to come along, pick up a snapshot of the database, go away and maybe modify/add to it, and then come back and "synchronise" their data. I'm reading up on Merge Replication for this purpose. Anyway, I created a publication on my server and it went away and generated a "rowguid" column on all of my tables (my tables all have an Identity column as the key field). Now of course my inserts no longer work, as they expect a GUID. I would have expected SQL Server to automatically generate a GUID for new inserts (in a similar way to its TIMESTAMP), but it appears it doesn't, despite the fact that I have "(newid())" as the default for the column. It always inserts the same value: {00000000-0000-0000-0000-000000000000}. So, back to basics: now that I have a GUID field on each record, how do I manage inserts? Thanks.
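A hedged illustration of the usual fix, using a hypothetical table and columns: when the INSERT lists its columns and leaves the merge-generated rowguid column out, the (newid()) default fires on its own; naming the column explicitly (or passing an empty GUID from the application) is what produces the all-zero value:

    -- the rowguid column added by merge replication is omitted, so its (newid()) default is used
    INSERT INTO dbo.Customers (CustomerName, City)      -- hypothetical table/columns
    VALUES ('Contoso', 'Seattle')

    -- by contrast, an insert that names the rowguid column overrides the default:
    -- INSERT INTO dbo.Customers (CustomerName, City, rowguid)
    -- VALUES ('Contoso', 'Seattle', '00000000-0000-0000-0000-000000000000')   -- stores the zero GUID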
On the national server: SQL 2005 Enterprise. On the mobile clients: SQL 2005 Workgroup.
I was asked to find a solution to the following scenario:
Two mobile subscribers, S1 and S2, subscribe to the same simple merge publication P1 on server N.
Day 1: S1 and S2 both sync up with N and both go off to do fieldwork.
Day 3: S1 and S2 both sync up with N.
S1 goes back to fieldwork.
S2 shuts down the laptop and goes on a 2-week vacation.
Two weeks later: S1 syncs up with the server and goes off to do fieldwork.
S2 meets S1 in the field. Their fieldwork is at the North Pole.
S2 has a laptop with data that is 2 weeks old, but no longer has access to the master Publisher N to synchronize the replica and get the latest changes.
S2 will have to sync with S1, since S1's database is fresh. The challenge is to make the S1 and S2 replicas identical.
The Questions:
Is it possible for S2 to sync with S1?
If yes, how do we go about it? We need S1 and S2 to have identical replicas on their machines.
Now that S1 and S2 have identical databases and are both doing their fieldwork at the North Pole, can they both sync back with the national Publisher N when they have access, keeping in mind that S2 got its data updated from the replica on S1?
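For what it's worth, merge replication has an "alternate synchronization partners" feature meant for exactly this case: a subscriber synchronizing with another server when the original Publisher is unreachable, then merging back with the Publisher later. A rough sketch follows; the property and parameter names are from memory and should be checked against Books Online before use, and the database names are placeholders:

    -- 1) At the Publisher N, allow the publication to use alternate sync partners
    EXEC sp_changemergepublication
         @publication = N'P1',
         @property    = N'allow_synctoalternate',
         @value       = N'true'

    -- 2) Register S1 as an alternate synchronization partner for P1
    EXEC sp_addmergealternatepublisher
         @publisher              = N'N',
         @publisher_db           = N'FieldDb',       -- placeholder database name
         @publication            = N'P1',
         @alternate_publisher    = N'S1',
         @alternate_publisher_db = N'FieldDb',
         @alternate_publication  = N'P1'

Because merge replication tracks changes per row GUID, rows that S2 picked up from S1 should be recognized rather than duplicated when both later merge with N again, though that behaviour is worth verifying in a test deployment.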
Since Windows Integrated Authentication does not work over proxy for Replication, could I still use SSL or SQL authentication over proxy? Thanks for any advice.
Hello, I have this data in an Access DB of ~4500 rows. Here is a sample of my problem. The Name has no ID; it is a simple text field with ~1800 different names in it:
Year|Name
-------------------
2005|NN
2005|NN
2005|YY
2005|XX
2006|XX
2006|XX
2006|XX
2006|NN
2006|NN
2008|NN
2008|NN
2008|NN
I have tried to write a SQL query to show this: a count of each Name, grouped by year.
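A minimal query for that, assuming the table is called Posts (the table name is a placeholder); the same GROUP BY works in both Access SQL and T-SQL:

    SELECT [Year], [Name], COUNT(*) AS NameCount
    FROM Posts
    GROUP BY [Year], [Name]
    ORDER BY [Year], [Name]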
1) Which statement shows the maximum salary paid in each job category of each department? _______
A. select dept_id, job_cat, max(salary) from employees where salary > max(salary);
B. select dept_id, job_cat, max(salary) from employees group by dept_id, job_cat;
C. select dept_id, job_cat, max(salary) from employees;
D. select dept_id, job_cat, max(salary) from employees group by dept_id;
E. select dept_id, job_cat, max(salary) from employees group by dept_id, job_cat, salary;
2) Description of the students table: sid_id number, start_date date, end_date date. Which two functions are valid on the start_date column? _________
A. sum(start_date)
B. avg(start_date)
C. count(start_date)
D. avg(start_date, end_date)
E. min(start_date)
F. maximum(start_date)
Hello all, I have two primary key fields, ssn and refnum. If the data in the file is duplicated, it will not import into my table, right, even though I am using DTS to do my import? Or do I need to add an extra validator in there?
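A small check that can run before the DTS import, assuming the file is first loaded into a staging table (the staging table name is a placeholder); any rows returned would violate the composite primary key:

    SELECT ssn, refnum, COUNT(*) AS cnt
    FROM dbo.ImportStaging            -- hypothetical staging table loaded from the file
    GROUP BY ssn, refnum
    HAVING COUNT(*) > 1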
I would like to pull some data from a SQL Server database and save it into an Access MDB file (which can be empty to start). I would then zip up the MDB and download it to the user.
I am seeking advice on the most "elegant" or "efficient" way to do this. Here are some ideas I have been considering:
1) Should I start with an empty template MDB and file-copy it before I populate it? Or is there a neat way in ASP.NET to allocate a brand new MDB outright?
2) I could read the SQL Server data into a DataSet object. I could then open a connection to the MDB, create a table object, defining all the columns, etc., and then write the data to the new table object. BUT ... I have a hunch there is a nifty ADO.NET way to save the data already in the DataSet object right into the MDB (creating the table and columns as a matter of course), all with an instruction or two (or three). Any ideas?
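One more idea, sketched with placeholder file, table, and column names: if a template MDB that already contains the empty target table is copied first (idea 1), the data can be pushed into it entirely from the SQL Server side with a linked Jet query, which avoids hand-building the table in ADO.NET (the 'Ad Hoc Distributed Queries' option must be enabled on the server):

    -- copy Template.mdb to Export.mdb first, then fill the pre-created table inside it
    INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                           'C:\Exports\Export.mdb';'Admin';'',
                           Customers)                        -- table already defined in the template MDB
    SELECT CustomerID, CustomerName, City
    FROM dbo.Customers                                       -- hypothetical source table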
A database with 1 MDF and 2 LDFs was detached from SQL Server 7.0. Then the log files were removed (they are gone and cannot be recovered), and there is no backup at all. Now I want to attach the database using the same MDF, but I get the error 'Device activation error'. It seems to be looking for one of the log files.
Is there any way to recover the db?
I guess not, right?
I don't understand why it doesn't work with sp_attach_single_file_db and sp_attach_db. I actually tested it with a dummy database with one log file, and it worked: a new log file was recreated. So I tried the same thing on the production server, and I don't understand why it doesn't work there.
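For reference, this is the call that works in the single-log case (database name and file path are placeholders); the comment notes why the same call fails here:

    EXEC sp_attach_single_file_db
         @dbname   = N'MyDb',
         @physname = N'D:\Data\MyDb.mdf'
    -- This only rebuilds a log when the database was cleanly detached and had a single log file.
    -- With two LDFs, the MDF still references the second missing log, which is what raises
    -- the 'Device activation error'.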
There is a big table with several million records. I am developing a query that retrieves the first set of rows that meets a WHERE condition. Any suggestions for making the query fast? Thanks a lot.
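A minimal sketch, with a hypothetical table, predicate, and index: TOP combined with an index on the filtered column lets the engine stop as soon as it has enough rows instead of scanning the whole table:

    SELECT TOP 100 *
    FROM dbo.BigTable
    WHERE Status = 'Pending'          -- hypothetical predicate
    ORDER BY CreatedDate              -- optional; drop it if any matching rows will do

    -- a supporting index lets the matching rows be located without a full scan
    CREATE INDEX IX_BigTable_Status ON dbo.BigTable (Status, CreatedDate)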
Hello, I am asking a question I have seen many threads on, but I am looking for an idiot's guide on how to convert my SQL 2005 database to SQL 2000 so I can get it to run on my web hosting server. I'm very new to ASP.NET, but have had years of experience in normal HTML and a year or two in the old ASP. I was advised to learn ASP.NET 2.0 and have found it nothing but brilliant. The integration with SQL 2005 made it a lot quicker to link up a database than using Access. Unfortunately my hosting company is a little behind and is still using SQL 2000. There isn't much database integration (a few application forms), so I don't mind re-writing the whole database, but I don't know how to set Visual Web Developer up with a SQL 2000 database. I have also read on various other forums that you can convert a database to 2000 by doing something with the scripts, but the explanation is too complicated for me to follow. Is there anyone out there who wouldn't mind going over some old ground and explaining this all in simple terms? I'm using 'SQL Server Management Studio Express' (although I don't know how to use it) and 'Microsoft Visual Web Developer 2005 Express Edition'. Thanks for reading this.
Hello all, I'm new to SSIS and this forum, and this is my first post.
We're migrating a 2000 DTS ETL process to 2005 SSIS. We really like the enhanced functionality of SSIS thus far.
One problem we have with our 2000 process is that it runs at 1:00 a.m. each morning. The scheduling is done via a distributed scheduling tool called Maestro. Our process pulls data from a mainframe-based DB2 OLTP system and reformats it into SQL Server reporting tables. We have nightly mainframe batch processing that updates the DB2 tables, and we need those updates on a nightly basis.
The mainframe batch process starts at 8:00pm each evening. It finishes normally by 1:00am 90% of the time, but it is 20+ years old, and has its share of problems, especially during month-end. The problems can't be resolved until the next business day in some cases.
We'd like to elegantly connect the two processes somehow so the SSIS ETL process kicks off when the mainframe batch process finishes. I intentionally didn't use the word 'trigger' up until this point.
It would not be a problem to modify the mainframe batch process to insert or update a DB2 table that SSIS has access to, but I don't think we can get the mainframe batch process to update SQL Server 2005 tables...?
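One hedged option, assuming the mainframe job can flag completion in a DB2 status table reachable through a linked server (the linked server, table, and job names below are placeholders): a SQL Agent job step polls for the flag starting at the earliest expected finish time, then starts the SSIS package once it appears:

    DECLARE @done bit
    SET @done = 0
    WHILE @done = 0
    BEGIN
        IF EXISTS (SELECT 1
                   FROM OPENQUERY(DB2LINK,        -- placeholder linked server to DB2
                        'SELECT 1 FROM SCHEMA1.BATCH_STATUS
                         WHERE RUN_DATE = CURRENT DATE AND STATUS = ''COMPLETE'''))
            SET @done = 1
        ELSE
            WAITFOR DELAY '00:05:00'              -- poll every 5 minutes
    END

    -- kick off the SSIS ETL, here wrapped in an Agent job of its own
    EXEC msdb.dbo.sp_start_job @job_name = N'Nightly SSIS ETL'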
Hello everyone, I am upgrading from SQL Server 2000 to SQL Server 2005. Any caveats? Can I just detach the dbs and attach them in 2005, or is there a conversion I should run, or something I should import first?
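Detach/attach does work; a typical follow-up after attaching, sketched with a placeholder database name and file paths, is to raise the compatibility level and refresh the usage counters carried over from 2000:

    EXEC sp_attach_db @dbname = N'MyDb',
                      @filename1 = N'D:\Data\MyDb.mdf',
                      @filename2 = N'D:\Data\MyDb_log.ldf'

    EXEC sp_dbcmptlevel N'MyDb', 90       -- run under 2005 compatibility
    DBCC UPDATEUSAGE (N'MyDb')            -- correct page/row counts inherited from 2000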
I'm in desperate need of implementing a solution where the customer has purchased a CMS to replace their corporate site and wants to use MS SQL as the DB server. I have 3 servers allocated to me to complete this, and I could use some advice on the best setup. They're running Windows 2003 Standard Server along with SQL 2000 Standard. The intended plan is for 2 of the servers to become web servers, with the last server becoming the SQL server. The CMS database will reside solely on the SQL server and the content for the web site on the web server(s). What I need to know is whether it's possible to set up an active/passive node to accommodate this using the items mentioned above. From what I've been reading, SQL 2000 Enterprise does clustering, but I'm hoping this version of SQL can be used for something. Any responses are appreciated.
Hi, I am seeking the help of volunteers to test some software that I've developed which facilitates distributed two-phase commit transactions, encompassing any resource manager (e.g. SQL Server or Oracle) controlled by Microsoft's Distributed Transaction Coordinator in a Windows 2000 environment, with any resource manager under the control of DECdtm (e.g. Rdb (or Oracle via the XA Veneer)) in a VMS environment.
[Yes, at some stage, I hope to sell this software and make money out of it, so unless you have a large philanthropic streak or are simply a techie who likes to stay on top of Windows<->VMS connectivity issues, then you may wish to look away now. But if you do choose to participate, then rest assured that I have no interest in your personal or company details. (Just your work-rate :-)]
What differentiates my Transaction Manager software from existing Transaction Monitor packages that are already in the marketplace (and why you should be interested) is that it is based on the Transaction Internet Protocol (TIP) standard (RFC 2372). For those of you who don't know, the beauty of TIP's "Two-Pipe" strategy is its application-pipe (or middleware) neutrality. Whereas most XA implementations mandate homogeneous Transaction Monitor deployments (such as Tuxedo everywhere, Encina everywhere, MQSeries everywhere, ACMSxp everywhere and so on), hotTIP from TIER3 Software gives you complete freedom to choose the middleware product(s) that best suit your particular application and heterogeneous network needs. Would you like to talk to VMS with TIER3 Sockets, COM or DCE/RPC? BEA MessageQ, IBM MQSeries or HTML? The choice is yours and yours alone. But once you realize that you need to encase your critical transactions within the ACID properties of a true heterogeneous two-phase commit, then you will come to the conclusion that you need a Transaction Manager that looks a lot like this.
Another drawback of traditional "One-Pipe" strategies is that they preclude the run-time determination of transaction participants (functionality which may be advantageous in a wide-area or Internet-based application).
Anyway, this is what I have: on the Windows side, you need absolutely *NO* additional software! I'll reply to this note with a brief description of the COM+ and DTC functions that you would need to invoke in order to successfully push an MTS/DTC transaction to VMS. NB: These are standard Windows APIs that are fully documented on MSDN.
On the VMS side, I have a VMSINSTAL saveset that (all zipped up) is some 150KB, which I'm happy to e-mail to you along similar lines to the VMS hobbyist (non-commercial use) license. I'll reply to this note with an Internet Daemon (INETd) example of code that uses my software to cede transactional control, over an SQL insert into an Rdb database, to MTS/DTC. It's under 500 lines long and contains all of the DCL, 3GL, and SQL required to produce a working example of a TIP-2PC capable TCP/IP auxiliary server. This example will insert a row into the MF_PERSONNEL.Employees table on the VMS side in co-operation with a Windows 2000 MTS/DTC client that is inserting a row into the NORTHWIND.Employee table. Commit them all or roll them all back.
So, in summary, if you'd like to volunteer to put hotTIP through its paces, then simply reply to this mail.
Regards, Richard Maher
PS.
The following are a few functionality restrictions with the current version of my software that may affect your decision to participate:
1) The transaction has to be started/mastered/coordinated by W2K MTS/DTC.
2) Transactions cannot be PULLed from VMS and must be PUSHed from W2K.
3) No cluster-wide recovery. (If a txn falls over after being prepared, then you have to wait for that specific node to become contactable again, even though that lovely RDM recovery job is sitting on another node protecting the database until my hotTIP TM tells it to commit or abort.)
4) There is currently no Alpha or Itanium version available. The Alpha port is currently in progress but, for the time being, you'll either need a VAX or a VAX emulator on your PC.
I read somewhere that market basket analysis finds rules with substitutes as likely as rules with complements, due to a consumer behavior called "horizontal variety seeking". This is when customers buy more than one product in the same category even though they are substitutes. For example, when people go to the grocery store and buy soda, they buy Coke and Sprite at the same time even though they are substitutes for each other. I was wondering if anyone has experience with this anomaly and how they solved it. I found a time series model called the vector autoregressive model, which is used to find the elasticity of prices over a time period. Does anyone have experience working with the VAR model? I am having trouble figuring out what some of the variables in the model are.
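For reference, the standard reduced-form VAR(p) is usually written as follows (this is the textbook definition, not anything specific to one pricing paper):

    y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + ... + A_p y_{t-p} + e_t

Here y_t is the vector of variables observed at time t (typically log sales and log prices of the products in the category), c is a vector of intercepts, each A_i is a coefficient matrix relating today's values to the values i periods back, p is the number of lags, and e_t is the vector of error terms. With the variables in logs, the coefficients on the lagged price terms are what get interpreted as (dynamic) price elasticities.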
Hi. We've decided to convert our Crystal Reports to SSRS 2005. We know (thanks to this forum) there are companies that will convert the reports at a cost; however, we'd like to undertake this ourselves. Are there resources you can point us to that might be specific for Crystal Reports users coming over to SSRS, especially for newbies? Thank you.
I'm a complete newbie. I need to insert a company logo into a database column to use later on in a check printing application. I read how to insert the pointer instead of the object into the column. Below is what I did:

    SET QUOTED_IDENTIFIER OFF
    GO
    INSERT INTO BankInfo(CoLogo) VALUES(0xFFFFFFFF)

Then I did this:

    DECLARE @Pointer_Value varbinary(16)
    SELECT @Pointer_Value = TEXTPTR(CoLogo)
    FROM BankInfo
    WHERE CMCo = '91'
    WRITETEXT BankInfo.CoLogo @Pointer_Value "\192.31.82.77DataCheckImagesWyattLogo.jpg"

This was straight out of a book and it seemed to work: it gave me a message that it was successful, and when I view the data in the column I can see the pointer 0x453A5C436865636B496D616765735C57796174744C6F676F2E6A7067.

But when I try to use the column in either a Crystal Report or an Access report, the bank logo does not show up. I also placed the logo on my C drive and tried pointing to it there with "C:\WyattLogo.jpg" with no success. It can't be this difficult to get a company logo into a column. I desperately need assistance. Remember I am the ultimate newbie; I looked at my first SQL database last week. Thanks in advance for any help, it is appreciated.
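As a quick check of what actually got stored (the column and key values are the ones from the post above): the hex value in CoLogo is just the characters of a file path, not the image bytes, which is why the report tools cannot render it as a picture. Something like this makes that visible:

    -- SUBSTRING on a text/image column returns the raw bytes, which can then be read as text
    SELECT CAST(SUBSTRING(CoLogo, 1, 8000) AS varchar(8000)) AS StoredValue
    FROM BankInfo
    WHERE CMCo = '91'
    -- returns the path string that WRITETEXT stored; to print the logo, the actual JPG bytes
    -- (or a path the reporting tool itself resolves at run time) need to be stored instead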
Hello, I've been searching the web for quite some time to resolve the problem of "1/1/1900" appearing in a datetime field in SQL, which results from a blank (not NULL) value being passed to it through an ASP page. The solution is that a NULL value needs to be passed to SQL from ASP. That's fine... I understand why the problem is happening and the solution for it. HOWEVER, I can't seem to get the proper syntax to work in the ASP page. It seems no matter what I try, the "1/1/1900" still results. Below are a few variations of the code that I have tried, with the key part being the first section. Does anyone have any suggestions?

    cDateClosed = ""
    If(Request.Form("dateClosed")= "") Then
    cDateClosed = (NULL)
    end if
    sql="UPDATE rfa SET "&_
    "dateClosed='"& cDateClosed &"', "&_
    "where rfaId='"& Request.Form("RFAID")&"'"

    cDateClosed = ""
    If(Request.Form("dateClosed") <> "") Then
    cDateClosed = (NULL)
    end if
    sql="UPDATE rfa SET "&_
    "dateClosed='"& cDateClosed &"', "&_
    "where rfaId='"& Request.Form("RFAID")&"'"

    cDateClosed = ""
    If(Request.Form("dateClosed")= "") Then
    cDateClosed = NULL
    end if
    sql="UPDATE rfa SET "&_
    "dateClosed='"& cDateClosed &"', "&_
    "where rfaId='"& Request.Form("RFAID")&"'"

Thanks in advance!
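For what it's worth, the difference is visible in the SQL text the page ends up sending (the rfaId value below is a placeholder, and note the stray comma before "where" in the concatenated string, which would also need to go). With the concatenation above, a blank field produces the first statement, and SQL Server converts the empty string to 1/1/1900; the statement that has to reach the server is the second one, with the keyword NULL and no quotes around it:

    -- what the concatenated string currently produces for a blank field
    UPDATE rfa SET dateClosed = '' WHERE rfaId = '123'      -- '' is cast to 1900-01-01

    -- what needs to be sent instead
    UPDATE rfa SET dateClosed = NULL WHERE rfaId = '123'    -- stores a real NULL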
I'm trying to ascertain how I can find out more about a particular job.
The information I have from a script I have to identify deadlock root causes gave me back this information: spid 86 is blocking spid 51... spid 86 info: SQLAgent - TSQL JobStep (Job 0xBAD836E3D331B44BA4CCAC400D244B17 : Step 1)
Well, that's good to know, but I would like to be able to identify the particular job that 'owns' TSQL JobStep (Job 0xBAD836E3D331B44BA4CCAC400D244B17 : Step 1).
I've read the BOL on the sysjob-type tables, and while they tell me about the columns in the tables and what they are, they tell me nothing about how to figure out what I want to know.
I suspect one problem is that '0xBAD836E3D331B44BA4CCAC400D244B17' needs converting to something else, and I have no idea how to go about doing that. I was never that good at converting hex (I assume that is what this is) back when I was doing it often, which was years ago, so I really have no idea where to start.
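The hex string is the raw 16 bytes of the job's job_id, so converting it to a uniqueidentifier and looking it up in msdb is usually all that is needed (a commonly used trick; worth sanity-checking the result against the job's steps):

    SELECT j.name, js.step_id, js.step_name
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobsteps AS js ON js.job_id = j.job_id
    WHERE j.job_id = CONVERT(uniqueidentifier, 0xBAD836E3D331B44BA4CCAC400D244B17)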
I will have to create a table that consists of only two main fields: the EmployeeID and the SupervisorID. My question is what I should define as my primary key. Should it be an additional field, or could it be the EmployeeID field?
The EmployeeID is a unique field. The end user for this application will rarely be updating these records, and may be adding or deleting some records sporadically.
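Since EmployeeID is already unique, it can serve as the primary key directly; a minimal sketch (column types are assumptions), with SupervisorID as a self-referencing foreign key so every supervisor must also exist as an employee:

    CREATE TABLE dbo.EmployeeSupervisor (
        EmployeeID   int NOT NULL PRIMARY KEY,                      -- natural, already-unique key
        SupervisorID int NULL
            REFERENCES dbo.EmployeeSupervisor (EmployeeID)          -- NULL for the top of the hierarchy
    )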
One of my tables has more than 300 million rows. I need to remove the data from before May 2006, which is over 250 million rows. The insert rate is so high (non-stop) that I can't truncate the whole table and load the post-May-2006 data back in (they won't let me). And if I use a DELETE statement to remove even 200k rows, it freezes up the whole database for a long time (around 45 minutes), so that route would take me a few months, I believe. So that's not an option either. Does anyone out there have a better idea?
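One commonly suggested pattern is to delete in small batches so each transaction is short and the non-stop inserts are not blocked for long; a hedged sketch with placeholder table and column names (an index on the date column matters a lot here):

    SET ROWCOUNT 50000                      -- limit each DELETE to a small batch (works on 2000/2005)
    WHILE 1 = 1
    BEGIN
        DELETE FROM dbo.BigTable
        WHERE LogDate < '20060501'          -- placeholder date column

        IF @@ROWCOUNT = 0 BREAK             -- nothing left to delete

        WAITFOR DELAY '00:00:02'            -- brief pause so concurrent inserts get through
    END
    SET ROWCOUNT 0                          -- restore normal behaviour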
I also have another table with more than 150 million rows whose index needs rebuilding. Does anyone have an idea how long that will take? Just a rough guess. Thanks.
Dear all, I am using SQL Server 2005, ASP.NET 2.0, and C#. I need some suggestions on how to pass request info to the next level: that is, if an employee requests something, a mail should go to the next level, i.e. the immediate boss (the next level can be null), and at any time the requester or the chain of approvers (immediate boss and successors) may need to be found. Please suggest something.
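A hedged sketch of the usual table shape for this, with assumed names: each employee row points at its manager, so the next approval level is a single self-join, and a recursive CTE (available in 2005) walks the whole chain:

    -- assumed structure: Employees(EmployeeID int PK, ManagerID int NULL, Email varchar(200), ...)
    -- ManagerID is NULL for the top of the chain

    -- who should receive the request mail (the immediate boss)
    SELECT boss.EmployeeID, boss.Email
    FROM dbo.Employees AS e
    JOIN dbo.Employees AS boss ON boss.EmployeeID = e.ManagerID
    WHERE e.EmployeeID = @RequestorID

    -- the full chain, from the requester up to the top
    ;WITH Chain AS (
        SELECT EmployeeID, ManagerID, Email FROM dbo.Employees WHERE EmployeeID = @RequestorID
        UNION ALL
        SELECT m.EmployeeID, m.ManagerID, m.Email
        FROM dbo.Employees AS m
        JOIN Chain AS c ON m.EmployeeID = c.ManagerID
    )
    SELECT EmployeeID, Email FROM Chain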
We plan to develop a new VB6 / MS Access application that will produce a whole bundle of reports for the outside sales team (all of them have a laptop). The data comes from midrange AS/400 tables.
Now we have to decide whether to use SQL Express with SQL tables or just MS Access MDB tables. Some of the salesmen's laptops are only P3-800 machines, and I'm worried about the speed of SQL Express on those laptops.
On the other hand, I would like to use SQL databases, since we already have SQL Server 2005 running for other applications.
Has anyone tested the performance of SQL Express versus MS Access MDB? It seems SQL 2005 Express uses more overhead than MSDE.
We tried MSDE and the performance was not that great after the database grew beyond 500 MB.
Hey guys, I have to implement a dynamic parent -> child scenario, but the catch is as follows: I need to create this table design so that I can have multiple parent -> child -> parent relationships (i.e. a DB-driven "Windows Explorer" feel: one folder that holds another folder that holds another folder, and so on to infinity). So if the above makes any sense, suggestions would be welcome. Thanks.
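A minimal adjacency-list sketch (the names are placeholders): each row stores its parent's key in the same table, which gives the folder-within-folder nesting to any depth:

    CREATE TABLE dbo.Folder (
        FolderID       int IDENTITY(1,1) PRIMARY KEY,
        ParentFolderID int NULL REFERENCES dbo.Folder (FolderID),   -- NULL = a root folder
        FolderName     nvarchar(255) NOT NULL
    )

    -- children of a given folder
    SELECT FolderID, FolderName
    FROM dbo.Folder
    WHERE ParentFolderID = @ParentID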
Hi, I have 2 databases, one online and one offline (connected to the Internet), both on SQL Server. I need to update the OFFLINE database from the ONLINE database on a schedule, every 15 minutes. Can anybody suggest a better way than web services? Our server is quite old and doesn't support web services. Any ideas? -- chavasreedhar
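If the two servers can reach each other directly, one low-tech option (sketched with placeholder linked server, table, and column names) is a SQL Agent job scheduled every 15 minutes on the destination server that pulls new rows over a linked server; transactional replication would be the more complete answer if it is available:

    -- job step on the destination server; ONLINESRV is a placeholder linked server name
    DECLARE @last datetime
    SELECT @last = ISNULL(MAX(ModifiedDate), '19000101') FROM dbo.Orders    -- high-water mark already copied

    INSERT INTO dbo.Orders (OrderID, CustomerID, ModifiedDate)
    SELECT o.OrderID, o.CustomerID, o.ModifiedDate
    FROM ONLINESRV.SalesDb.dbo.Orders AS o
    WHERE o.ModifiedDate > @last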
All, I need to keep checking (every 30 minutes) for a record to appear in a table. Any idea how to do it? Should it be done in an SSIS package or as a SQL Server Agent job schedule? Please give more detail on how to accomplish this.
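Either works; the simplest is probably an Agent job scheduled every 30 minutes whose single T-SQL step checks for the row and kicks off whatever should happen next. A sketch with placeholder names:

    -- scheduled every 30 minutes by SQL Server Agent
    IF EXISTS (SELECT 1 FROM dbo.IncomingBatch WHERE Processed = 0)          -- placeholder table/flag
    BEGIN
        EXEC msdb.dbo.sp_start_job @job_name = N'Process Incoming Batch'     -- follow-on work
    END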
We have SQL Server 2005 as the backend in our portal project. We have a job search interface with the searchable fields below:
1) City
2) State (length 2)
3) Zip (length 5)
4) Keywords (space- or comma-delimited words)
5) Category (drop-down list)
6) Subcategory (drop-down list)
Based on the above fields, we see 2 options to search while avoiding performance issues:
1) Concatenate all 6 columns into 1 column and apply a full-text search. Fields like State and Zip would be appended with a delimiter so that the search hits only those particular values, e.g. store the state "CA" as '#stCA#' so it does not match other data that contains the keyword CA.
2) Create a table with the above 6 columns and create a full-text index on all the columns.
Can you suggest which one is best and why, or an alternate, better solution if neither is good?
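For reference, option 2 can be written roughly as below (table, index, and catalog names are placeholders); keeping the columns separate lets CONTAINS target specific columns, and exact-match fields like State and Zip can stay ordinary indexed columns instead of tag-prefixed tokens:

    CREATE FULLTEXT CATALOG JobSearchCatalog

    CREATE FULLTEXT INDEX ON dbo.JobPosting (Keywords, Category, SubCategory, City)
        KEY INDEX PK_JobPosting                      -- the table's unique key index
        ON JobSearchCatalog

    -- sample search: full-text over the word-like columns, plain predicates for State/Zip
    SELECT JobPostingID
    FROM dbo.JobPosting
    WHERE CONTAINS((Keywords, Category, SubCategory), '"sql" AND "developer"')
      AND State = 'CA'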
I have a table that allows the scheduling of devices.
I can create an INSERT statement that allows a user to schedule a device, for example: Device #1 from 5/1/08 to 5/30/08. Is there a way to create a check that prevents the user from scheduling that SAME device for any days within that range? Device #1 is NOT available to be scheduled anytime during the month of May because it is already scheduled. Thanks for any suggestion. Wayne
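One common approach, sketched with assumed table and column names: before the INSERT, test for any existing booking of the same device whose date range overlaps the requested one (two ranges overlap when each starts no later than the other ends):

    IF EXISTS (SELECT 1
               FROM dbo.DeviceSchedule
               WHERE DeviceID = @DeviceID
                 AND @StartDate <= EndDate
                 AND @EndDate   >= StartDate)
        RAISERROR('Device is already scheduled in that date range.', 16, 1)
    ELSE
        INSERT INTO dbo.DeviceSchedule (DeviceID, StartDate, EndDate)
        VALUES (@DeviceID, @StartDate, @EndDate)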
I have a few tables that store class info, test info, and score info respectively. For a given ClassID/TestID/CourseID I need to be able to get the top 5/10/15 etc., plus the average, max, and min of the scores, for reporting purposes. I am wondering if I should maintain a separate table, populated by a daily job, holding the top-scorer IDs, rather than having a stored proc return them dynamically when needed. I am looking for some suggestions. Please let me know.
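Unless these queries prove too slow in practice, a stored proc that computes the figures on demand is usually enough; a sketch with assumed table and column names:

    -- top N scores for one class/test
    SELECT TOP 5 StudentID, Score
    FROM dbo.Scores
    WHERE ClassID = @ClassID AND TestID = @TestID
    ORDER BY Score DESC

    -- summary figures for the same slice
    SELECT AVG(Score * 1.0) AS AvgScore, MAX(Score) AS MaxScore, MIN(Score) AS MinScore
    FROM dbo.Scores
    WHERE ClassID = @ClassID AND TestID = @TestID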