I have a database A that includes five tables and holds more than 1,500,000 rows. There is a replica database of A. At first there is no data in either database. When I finish inserting data into A, the replica database still seems to be working; the log file size keeps changing. How can I tell whether replication has finished? How long will it take to replicate 1,500,000 rows of data?
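One way to check, assuming transactional replication on SQL Server 2005 or later (the server, publication, and database names below are made-up placeholders; substitute your own), is to ask the distributor how many commands are still waiting to be delivered:

-- Run at the distributor, in the distribution database.
-- Zero pending commands means the subscriber has caught up.
USE distribution;
GO
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'PUBSRV',      -- hypothetical publisher name
    @publisher_db      = N'A',
    @publication       = N'A_pub',       -- hypothetical publication name
    @subscriber        = N'SUBSRV',      -- hypothetical subscriber name
    @subscriber_db     = N'A_replica',
    @subscription_type = 0;              -- 0 = push, 1 = pull

How long the initial catch-up takes depends mostly on row size, indexes, and disk/network throughput, so there is no fixed answer for 1,500,000 rows.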
My company is considering purchasing MS SQL Server to run an application on (SASIxp). I am mainly familiar with Oracle, so I was wondering how long it would take to copy a database. Basically we have database A, and each night we want to replace database B with the contents of A. How long would this take, say, with a 10GB database or a 20GB database? What would be the technique to do this nightly: the Copy Database Wizard, Snapshot Replication, Attach & Detach...? We need to automate this process, and the source database can be made unavailable while this happens. Thanks, Roger
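One common approach for a nightly wholesale refresh (a minimal sketch; the backup path and logical file names are assumptions you would replace with your own) is a scheduled backup of A followed by a restore over B:

-- Back up the source database...
BACKUP DATABASE A
    TO DISK = N'D:\Backups\A_nightly.bak'   -- hypothetical path
    WITH INIT;

-- ...then overwrite B with it. WITH REPLACE discards B's old
-- contents; WITH MOVE points the restored files at B's own
-- locations so A's files are not touched.
RESTORE DATABASE B
    FROM DISK = N'D:\Backups\A_nightly.bak'
    WITH REPLACE,
         MOVE N'A_Data' TO N'D:\Data\B.mdf',   -- logical names assumed
         MOVE N'A_Log'  TO N'D:\Data\B.ldf';

Scheduled as a SQL Server Agent job, the elapsed time is dominated by disk throughput, so a 20GB database takes roughly twice as long as a 10GB one on the same hardware.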
Using SQL 7.0, I'd like to replicate just the schema from a DB on server A to a DB on server B, then be able to replicate data only from the DB on server B to the DB on server A. I need help!!
Thanks for ANY information you can give me... ~Jepadria
Hi everyone, I've recently been thrown into a DBA role (I've never done any DBA work except writing a few SQL queries), so please go easy on me if these are stupid questions. My first task is to find the best way to replicate data between two SQL Server production databases. The data is to come from Production DB #1 to Production DB #2 (for access by a different system). The data has to be super-close -- not necessarily real-time, but within a few minutes. So when data is updated in #1, #2 shouldn't be lagged by more than 45 minutes (5-10 is ideal). There are hundreds of thousands of records. What would be the best way to do this? Are there options in SQL Server 2005 to do "differential" updates from DB1 to DB2? Or is that how "transactional replication" works? If we were to implement a "full recovery model", would this impact any sort of replication? Thanks.
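For reference, transactional replication does effectively ship incremental changes: the log reader picks committed transactions out of DB1's log and the distribution agent applies them to DB2, typically within seconds to minutes. An abridged setup sketch (publication, article, and database names are assumptions; a real setup also needs a distributor and a snapshot for initialization):

-- At the publisher, enable the database and create a publication
EXEC sp_replicationdboption @dbname = N'DB1',
     @optname = N'publish', @value = N'true';

EXEC sp_addpublication @publication = N'DB1_to_DB2',  -- hypothetical name
     @status    = N'active',
     @allow_push = N'true',
     @repl_freq = N'continuous';   -- log-based, near real time

-- Add each table to be replicated as an article
EXEC sp_addarticle @publication = N'DB1_to_DB2',
     @article       = N'Orders',   -- hypothetical table
     @source_owner  = N'dbo',
     @source_object = N'Orders';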
Hi there, I have a situation here... I have a Production DB and a Development DB in the same SQL 7.0 box. I want to update the data and stored procedures from Prod. to Dev. at least once a day. So this is my plan:
1. Back up (Prod.) and restore (Dev.) so I can have the same DB in the same box.
2. Use replication (Publication and Subscription) to update the data and SPs.
3. Because DTS cannot update the SPs, I use replication instead of DTS. I need to update the SPs too. (The front end is Access and the back end is a SQL 7.0 DB.)
Is there any other method or way to make it happen? Any suggestion would help...
We have transactional replication set up to replicate data from production across to a reporting server.
We want to ARCHIVE production, but don't want the ARCHIVE duplicated on the reporting server.
Does anyone know of a way that the reporting server can be stopped from replicating these changes, and continue to hold the FULL history of the database?
Hello, I am trying to add a view to an existing publication, but the subscribers (all devices running SQL CE) don't get the view after replication. I have deleted and recreated the publication, and only the tables appear on the device, not the view.
Also, I want the data from the view to be dynamic, filtered by using the Host_Name() function/value. Will this work for a View?
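For reference, in merge replication (which SQL CE subscribers use) a view is added as a schema-only article, so it carries no data of its own; a sketch, assuming SQL Server 2005-style syntax and hypothetical publication/view names:

-- Views replicate schema only; the view's data comes from
-- whatever filtered table articles exist on the device.
EXEC sp_addmergearticle
    @publication   = N'CE_pub',       -- hypothetical publication
    @article       = N'vw_MyView',    -- hypothetical view
    @source_owner  = N'dbo',
    @source_object = N'vw_MyView',
    @type          = N'view schema only';

-- Dynamic HOST_NAME() filtering is applied to the underlying
-- table articles (e.g. via @subset_filterclause), not to the view.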
I'm working on a web app that needs to be able to take a row in the database and duplicate it, creating a new row in the same table with the same data except for the ID field and a reference field. So basically: table1.row1 references table2.row1. I need to duplicate the data in table1.row1 (creating table1.row2) with the same reference to table2.row1. Is there any easy way to do this in SQL? I'm just looking for some ideas or a framework to accomplish this.
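A common pattern is an INSERT...SELECT that lists every column except the identity column, so the new ID is generated automatically; a sketch with hypothetical table and column names:

-- Duplicate one row of table1, letting the IDENTITY column
-- generate a new ID while keeping the same foreign-key reference.
DECLARE @source_id int;
SET @source_id = 1;   -- hypothetical: the row to duplicate

INSERT INTO table1 (col_a, col_b, table2_id)   -- all non-ID columns
SELECT col_a, col_b, table2_id
FROM table1
WHERE id = @source_id;

-- The new row's ID is then available via SCOPE_IDENTITY().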
We have a distributed database in DB2 and SQL Server 2000. As users update/insert data in the DB2 database, the data needs to be dynamically replicated to SQL Server. Please let me know the best possible method of doing this. Thanks, Ravi.
Hello group, I am relatively new to SQL Server, but I have lots of experience with DB2 and Oracle. One of my tasks is setting up replication between a MySQL database running on Linux and one of our SQL Servers. How do I achieve this? If I understand the documentation correctly, you have to program the replication mechanism yourself or use some third-party tool. Could anyone please outline how to set up the replication mechanism (pointing me to a web site should be enough) and also tell me if there is any third-party tool? Thanks in advance, and greetings from Vienna, Uli
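There is indeed no built-in MySQL-to-SQL-Server replication; short of a third-party tool, one home-grown approach is a linked server over the MySQL ODBC driver plus a scheduled copy job. A sketch, assuming a system ODBC DSN named MySQL_DSN has been configured, and with hypothetical table names:

-- Create a linked server over the MySQL ODBC DSN
EXEC sp_addlinkedserver
    @server     = N'MYSQL_LINK',   -- hypothetical linked-server name
    @srvproduct = N'MySQL',
    @provider   = N'MSDASQL',      -- the OLE DB provider for ODBC
    @datasrc    = N'MySQL_DSN';    -- assumed ODBC data source name

-- A scheduled job could then pull rows across, for example:
INSERT INTO dbo.LocalCopy (id, payload)   -- hypothetical local table
SELECT id, payload
FROM OPENQUERY(MYSQL_LINK, 'SELECT id, payload FROM mydb.mytable');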
I have a small three server development environment where I am getting my feet wet with replication. I have set up peer-to-peer transactional replication and it works fine for data added to the publication's table after the publication was created. However, rows in the table that existed prior to the publication's creation have never replicated. If any of the "old" rows are edited they cause an error on the subscribing servers when the replicator attempts to apply updates to rows that do not exist.
How can I get the old rows that predate the publication to replicate?
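For what it's worth, peer-to-peer publications are normally seeded by restoring a backup of the publishing database on each peer rather than by a snapshot, so rows that predate the publication have to be carried over that way; a sketch of the subscription side after such a restore (publication, server, and database names are assumptions):

-- After restoring a backup of the publisher database on the peer,
-- create the subscription telling replication the data is already there.
EXEC sp_addsubscription
    @publication    = N'P2P_pub',   -- hypothetical publication
    @subscriber     = N'PEER2',     -- hypothetical peer server
    @destination_db = N'MyDB',
    @sync_type      = N'replication support only';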
I am trying to replicate data from DB2/AS400 to SQL Server 2005 (Enterprise Edition). Currently we use a 3rd-party tool to replicate data from DB2 to SQL Server 2000 (Enterprise Edition), and we would like to get rid of this 3rd-party tool. I have been searching for the last 3 weeks but didn't find a good starting point. I have linked DB2 to SQL Server 2005 and can run queries against the DB2/AS400 box. Now I want to set up transactional replication from DB2 to SQL Server 2005. I have read about peer-to-peer topologies, but I don't know if that's the route I have to take?
So can someone please help me? I really appreciate your help.
I am migrating DTS packages to SSIS (recreating all the logic). I have a Data Driven Query task in DTS with:
Source query - select x, y from table1 (from database db1)
Binding - table2, which contains columns matching table1's x, y (from database db2)
Transformation - mapping from source table1 x, y to binding table2 x, y
Queries - type update: update table2 set x = ? where y = ?
I know that there is no directly equivalent task in SSIS; can someone tell me how to replicate this in SSIS?
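The usual SSIS substitute for a Data Driven Query is a Data Flow whose source runs the select and whose rows feed an OLE DB Command transformation carrying the same parameterized statement; a sketch of the two statements involved (names taken from the post above):

-- OLE DB Source (connection manager pointing at db1):
SELECT x, y FROM table1;

-- OLE DB Command transformation (connection manager pointing at db2),
-- with the source columns mapped to Param_0 and Param_1 in order:
UPDATE table2 SET x = ? WHERE y = ?;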
There are several remote locations where SQL Server is running. My company has asked me to find a way to collect all the data from the remote locations into a central location automatically. For example, day-to-day data should be synced overnight, between 2 am and 7 am, and it should be compressed automatically before the data transfers to the central location. NOTE: there is no domain, only standalone workstations.
Is it possible to replicate data from 3 publishers to a single/central subscriber transactionally? In other words I have Server A, Server B, Server C with databases A,B,C respectively. I need to replicate 2 articles from A,2 from B and 2 from C to a central Server D that hosts database D. D will have only 6 articles. The replication is Transactional Replication.
If it is possible, what would be the drawbacks of such an implementation? (If one server goes down, will the whole replication break?) If it is not possible, then what is the best way of implementing this?
I am having a problem importing data into SQL 7 from any type of source. I go through the whole import process with no problem, but when I click the Finish button to start the import, nothing at all happens. Enterprise Manager and DTS just hang, and I must use ctrl+alt+delete to end the program. Can anyone give me any suggestions as to what might be happening? Big thanks in advance; I've been working on this for days.
I have a VoIP phone system using SQL Server on the back end. I am trying to get a trigger to fire and email me when a certain number has been dialed.
CREATE TRIGGER trg_Emergency_Calls ON dbo.CallDetailRecord
FOR INSERT
AS
    IF @@ROWCOUNT = 0 RETURN   -- no rows affected, exit
    -- EXISTS handles multi-row inserts, unlike a scalar subquery
    IF EXISTS (SELECT 1 FROM inserted WHERE finalcalledpartynumber = '95593684')
    BEGIN
        --RAISERROR ('Call Stored Procedure Here',16,10)
        EXEC WEB_SRVR03.master.dbo.sp_SMTPMail @body = 'This is a test Email'
    END
GO
The problem is that the actual email SP I have to execute is on another server, and it has to be that way. The trigger runs each time, but if the IF statement becomes true the trigger hangs and never completes. Watching the other server (WEB_SRVR03), there is never a request to execute sp_SMTPMail. I have been trying to troubleshoot this with Profiler, but I never see any locks or anything that would give me a problem. Also, the insert statement that caused the trigger to fire never finishes, so the record isn't written to the db. If anyone has any suggestions I would appreciate it. Thanks
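One decoupling pattern worth noting (a sketch; the queue table and Agent job are assumptions, not part of the original system) is to have the trigger only log the event locally and let a scheduled job make the cross-server call, so the insert never waits on a distributed call:

-- Local queue table written by the trigger instead of the remote call
CREATE TABLE dbo.EmailQueue (
    queue_id  int IDENTITY PRIMARY KEY,
    body      varchar(500) NOT NULL,
    queued_at datetime NOT NULL DEFAULT GETDATE()
);

-- Inside the trigger, replace the remote EXEC with:
--   INSERT INTO dbo.EmailQueue (body) VALUES ('This is a test Email')

-- A SQL Agent job running every minute then drains the queue:
DECLARE @id int, @body varchar(500);
SELECT TOP 1 @id = queue_id, @body = body
FROM dbo.EmailQueue ORDER BY queue_id;
WHILE @id IS NOT NULL
BEGIN
    EXEC WEB_SRVR03.master.dbo.sp_SMTPMail @body = @body;
    DELETE FROM dbo.EmailQueue WHERE queue_id = @id;
    SET @id = NULL;
    SELECT TOP 1 @id = queue_id, @body = body
    FROM dbo.EmailQueue ORDER BY queue_id;
END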
I want to finish the execution of a DTS package from an ActiveX task. If a condition is met, the package should continue as normal. If not, I want to finish the package without any error, just without executing the next tasks. Do you have any idea?
Hello, I would like to know which data type is best if I want to store MP3s and large amounts of text in SQL Server. Please let me know the data type for both tasks. The table for MP3s is different from the table for large text (e.g., saving somebody's resume). Please do let me know.
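A sketch, assuming SQL Server 2005 or later (on SQL Server 2000 the equivalents are image and ntext; the table and column names here are made up):

-- Binary audio goes in varbinary(max); long Unicode text in nvarchar(max)
CREATE TABLE dbo.Mp3Files (
    mp3_id    int IDENTITY PRIMARY KEY,
    file_name nvarchar(260) NOT NULL,
    audio     varbinary(max) NOT NULL    -- raw MP3 bytes
);

CREATE TABLE dbo.Resumes (
    resume_id int IDENTITY PRIMARY KEY,
    body      nvarchar(max) NOT NULL     -- large free-form text
);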
I have an ASP page that will take form info a user has entered, save it into SQL Server, and then retrieve and display the info on another page. My problem is with long text data (10,000 bytes or more). It appears to save the long text data, as in it gives no errors... but it does not save it. In the SQL table, the field is defined as ntext... So why won't it save? Thanks in advance, adam
I have a table with millions of rows and about 70 columns that move through a number of states (11 possible states in all) from "New" via various states to "Processed" and eventually to "Archive" (there's a complicated state diagram that I won't bore you with)
Movement between states is based on a heap of business logic including the move to Archive (not just dates).
Different sorts of processing (querying and updating, both by users and by overnight processing) are carried out on the data according to its state.
Maintaining the indexes for optimum performance across the board is a headache.
We have two problems: we want better query performance, and we want to be able to easily switch out objects that are in the Archive state.
I had in mind partitioning the table (and its indexes) on state so that:
(a) queries would be directed only at the appropriate partition (that is, always use "where state =" as part of the query)
(b) the Archive partition could be swapped out of the table periodically
In my test setup 10 of 11 partitions are in [PRIMARY] but Archive is in a different filegroup.
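For concreteness, a minimal sketch of that layout (the encoding of state as a tinyint 1-11 and the filegroup name are assumptions):

-- 10 boundary values give 11 partitions; RANGE LEFT puts
-- state 11 ("Archive") in the last partition.
CREATE PARTITION FUNCTION pf_State (tinyint)
AS RANGE LEFT FOR VALUES (1,2,3,4,5,6,7,8,9,10);

-- First 10 partitions on PRIMARY, Archive on its own filegroup
-- so it can be switched out and handled separately.
CREATE PARTITION SCHEME ps_State
AS PARTITION pf_State
TO ([PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],
    [PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],
    [ARCHIVE_FG]);   -- hypothetical filegroup name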
Query performance is OK - execution plans look good.
However my update performance is now appalling when moving between any two states (10 times as long as on the unpartitioned table).
I understand that when you update a column which is used as a partition key it will cause the row to "move from one partition to another" as it says in another post.
Fine - because that's exactly what I want - logically.
I can also understand that moving from one filegroup (and hence the underlying file) to another must mean that the data has to physically move.
However, is the data physically moving whenever you move between partitions, or what else is going on to cause such a degradation in performance?
I have a stored procedure which ideally should run when a customer logs in. The procedure checks the available stock and creates a temp table with the information, which allows many other queries in the site to run a lot faster (due to no joins). The query has taken as much as 30 seconds to run at login (lots of records and a half-dozen joins), causing a timeout for the web application.
I want the procedure to run as-is, but for the login method not to be dependent on the process. I tried cmd.BeginExecuteNonQuery() in .NET (cmd = SqlCommand), which doesn't do what I want; it just allows me to run heaps of queries at the same time.
Can anyone help me with getting this procedure to run without holding up the web application? I'm not sure whether I need to do this in .NET, or whether I can get .NET to run a batch file or something, but someone must have had a similar problem. Please help.
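One server-side way to fire-and-forget (a sketch; the job name is an assumption) is to wrap the stored procedure in a SQL Server Agent job and have the login path just start the job, since starting a job returns immediately:

-- msdb's sp_start_job kicks the job off asynchronously and
-- returns without waiting for the job to finish.
EXEC msdb.dbo.sp_start_job @job_name = N'RefreshStockCache';  -- hypothetical job

The job itself would contain a single T-SQL step that executes the stock procedure, so the web login never blocks on the 30-second build.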
I have a scheduled job that runs daily at 4 am. The job imports data from client-side text files and puts the data in our SQL table. It takes around 1 hour. The problem is it doesn't stop after the import process completes. After one hour I can see the data has been imported into my SQL table, so that's fine, but the job keeps running. I tried observing that job's SPID in Activity Monitor in SQL 2005, but after one hour I can't even see that SPID, yet the job still runs. It's weird. When I then stop the job manually, it stops, saying the job completed successfully. That step is the last step, and it uses a Windows cmd. My understanding is that the job step doesn't realize it has finished. What should I do here? Any ideas are appreciated. We are running the same job on other servers and it's fine.
No error, but export to Excel does not finish. When the report has 2 pages with 500 rows in total, exporting to Excel is not a problem. If it has 100 pages and 5,000 rows, exporting to Excel never ends; it does not return any error, but the process does not end either. What might the problem be?
I have been working with this for about a month now, with no similar problems to date. Today I am trying to introduce 4 configuration flags that control whether optional ETL stage feeds are executed. I did this by adding a do-nothing Script component. The precedence constraint is used, and it checks the boolean variable flag. The first package executes fine, but it never returns from there. This precedence constraint has nothing fancy on it either. It simply does not run any more of the package, make any more conditional checks, or run the common completion tasks. It just seems to think it is done.
The optional stages all fire Execute Package tasks. One thing that might be tripping it up is that I attempt to run one package twice, each time with a varying parent package variable set to direct it to a different destination database for each run. Should this not be OK to do?