I have transactional replication set up between 2 SQL Server 2005 databases on 2 different boxes. Both the log reader and distribution agent run in continuous mode. The distributor resides on a third SQL Server box. We are having performance issues with replication when there are large batch deletes/inserts happening on the publisher. There is a batch job that runs for about 8-10 hours every day on the publisher and deletes/inserts thousands of records as part of transactions. The amount of time to replicate all this data to the subscriber is around 13-15 hours, which is not acceptable to our user community. While monitoring I found that the distribution database (MSRepl_commands table) at times has millions of records in it, which would explain why the latency is so high. Add to that the fact that there are large transactions occurring on the publisher.
I was wondering if anyone has faced a similar problem before. Are there any configuration changes I can make to the replication infrastructure to reduce latency?
Please help guys. I have a DataTable filled from parsing a CSV file with the OleDb text driver. This DataTable can on occasion contain in excess of 2000 rows. I want to batch the inserts to my back-end SQL table and be able to recover from errors during the insert, i.e. maybe send the first 500 rows via a dynamic insert statement... I really don't know the optimal insert technique to use. But if I get an error on, say, the third batch, I want to be able to recover and continue the inserts from the batch that failed, rather than starting all over again. What is the best way to perform the inserts, and how can I track them and recover from errors like power failures or SQL Server being unavailable?
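For illustration, a rough sketch of one way to make the batches restartable on the SQL side (hedged; the staging table, progress table, target table, and column names are all hypothetical): stage the parsed rows with a batch number, commit one batch per transaction, and record the last batch that committed so a failed run can resume where it stopped.

-- dbo.ImportStaging holds the parsed CSV rows, e.g. 500 rows per BatchNo
-- dbo.ImportProgress holds a single row, seeded with LastBatchDone = 0
DECLARE @Batch int, @MaxBatch int
SELECT @Batch = ISNULL(MAX(LastBatchDone), 0) + 1 FROM dbo.ImportProgress
SELECT @MaxBatch = MAX(BatchNo) FROM dbo.ImportStaging

WHILE @Batch <= @MaxBatch
BEGIN
    BEGIN TRAN
        -- move one batch to the real table
        INSERT INTO dbo.TargetTable (Col1, Col2)
        SELECT Col1, Col2
        FROM dbo.ImportStaging
        WHERE BatchNo = @Batch

        -- remember that this batch committed, inside the same transaction
        UPDATE dbo.ImportProgress SET LastBatchDone = @Batch
    COMMIT TRAN

    SET @Batch = @Batch + 1
END

Because the progress update commits in the same transaction as the batch, a power failure or lost connection leaves LastBatchDone pointing at the last batch that made it, and rerunning the loop continues from the next one.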
I have a table with 370 million rows and 50+ columns. I need to change the data type of a column from character to numeric. Here's what it contains:
40 million with numbers I want to keep, the rest I just want to set to null:
- 4,000 with alpha characters
- 55 million with other numbers
- 275 million empty strings
An alter column statement fails not just on the alpha characters but on the empty strings. So I tried a couple things on a test database to get an idea of the time it would take:
An update statement to clear out the non-numeric data is too slow (~1.5 days, batched 10000 at a time). I think I probably should create a new column anyway though, so I'm going to copy the data to a new table since it would be faster than adding a new column to the original table.
An insert ... select ... takes about 12 hours; adding WITH (TABLOCK) didn't seem to have any effect, and I'm not sure how to batch it. Recovery model is simple.
A select ... into ... only takes about 1 hour, but can't be batched.
Using a 3rd party ETL tool takes about 5 hours, batched.
I wanted to batch it to minimize impact on other queries but primarily the logs. Is there any way to do a fast batched bulk transfer within SSMS?
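For what it's worth, a sketch of the kind of batched INSERT ... SELECT I mean (hedged; the table names, column names, integer clustered key, and the 500,000-row batch size are all assumptions). Each batch is its own implicit transaction, so under the simple recovery model the log can be reused between batches.

DECLARE @MinId bigint, @MaxId bigint, @BatchSize bigint
SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable
SET @BatchSize = 500000

WHILE @MinId <= @MaxId
BEGIN
    INSERT INTO dbo.NewTable WITH (TABLOCK) (Id, NewNumericCol)
    SELECT Id,
           -- keep digits-only values, everything else (alpha, empty string) becomes NULL
           CASE WHEN OldCharCol <> '' AND OldCharCol NOT LIKE '%[^0-9]%'
                THEN CAST(OldCharCol AS numeric(18, 0))
           END
    FROM dbo.SourceTable
    WHERE Id >= @MinId AND Id < @MinId + @BatchSize

    SET @MinId = @MinId + @BatchSize
END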
Dear Experts, what is the best way to do a large insert WITHOUT having direct access to the machine SQL Server is running on? For example, imagine I want to insert something like 20,000 records. If I had access to the server, I could BULK INSERT into a temp table and then insert into the destination table. But if I can't create a file on the server to use for BULK INSERT, what is the next best alternative to doing lots of single-record insert statements? Thanks, -Emin
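For illustration, one alternative that avoids a file on the server (a hedged sketch; table and column names are hypothetical): send the rows from the client in a single XML parameter and shred them server-side, which turns 20,000 statements into one round trip.

-- @rows would be built and passed in from the client application
DECLARE @rows xml
SET @rows = N'<rows><r id="1" name="Alpha" /><r id="2" name="Beta" /></rows>'

INSERT INTO dbo.DestinationTable (Id, Name)
SELECT r.value('@id',   'int'),
       r.value('@name', 'nvarchar(100)')
FROM @rows.nodes('/rows/r') AS t(r)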
We are inserting into a table which includes an identity primary key column. When the table gets really large (e.g. 1.5 million records), the performance of the inserts degrades.
I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?
Given the table size is unavoidable, does anyone have a suggestion to improve the performance?
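One way to check what the insert is actually locking (a sketch; it assumes SQL 2005's sys.dm_tran_locks view and a hypothetical database name): run this from a second session while the insert is executing. A row with resource_type = 'OBJECT' and request_mode = 'X' for the table would confirm a table-level exclusive lock, as opposed to the usual key/page locks.

SELECT resource_type,
       request_mode,
       request_status,
       request_session_id,
       resource_associated_entity_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('YourDatabase')   -- hypothetical name
ORDER BY request_session_id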
I have an application that dumps massive amounts of data into a database during the installation. My log file always ends up being 30-40GB+ at the end of the install. Can I turn off logging while I do the install and enable it afterwards? What are my options?
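Logging can't be switched off entirely, but a hedged sketch of the usual workaround (database name and backup path are hypothetical): put the database in SIMPLE or BULK_LOGGED recovery for the install so log space can be reused, then switch back and take a fresh backup. Note this mainly helps when the load uses minimally logged operations (BULK INSERT, SELECT INTO, index rebuilds); ordinary INSERT statements are still fully logged.

ALTER DATABASE InstallDb SET RECOVERY SIMPLE
-- ... run the installation data load here ...
ALTER DATABASE InstallDb SET RECOVERY FULL
BACKUP DATABASE InstallDb TO DISK = N'D:\Backups\InstallDb_after_install.bak'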
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use the temporary table to load a sub temporary table with 100,000 rows and join this temporary table to the table I want to delete from, looping through 10 times to get through the million. The Logging.EvenLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime

CREATE TABLE #BatchProcess
(
    EventDate datetime,
    ApplicationID int,
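For comparison, a simpler batched purge along the same lines (a hedged sketch; the 100,000 batch size is an assumption, and the retention predicate is my reading of WeeksToRetain): delete in small chunks so each transaction stays short and log backups can run between batches.

DECLARE @BatchSize int, @Rows int
SET @BatchSize = 100000
SET @Rows = 1

WHILE @Rows > 0
BEGIN
    -- each batch is its own transaction, so the log can be backed up between them
    DELETE TOP (@BatchSize)
    FROM Logging.EvenLog
    WHERE DATEADD(week, WeeksToRetain, EventDate) < GETDATE()

    SET @Rows = @@ROWCOUNT
END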
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000 - 100,000 - 1,000,000 or unlimited? Are there significant differences between 2008, 2012, and 2014?
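As a rough way to get actual numbers rather than folklore, a hedged test harness (table name and purge predicate are hypothetical): time a single batch at each candidate size on a restored copy and compare rows per second until the per-row cost stops improving.

DECLARE @BatchSize int, @Start datetime, @Deleted int
SET @BatchSize = 50000          -- rerun with 100000, 1000000, ...
SET @Start = GETDATE()

DELETE TOP (@BatchSize)
FROM dbo.BigTable
WHERE PurgeFlag = 1

SET @Deleted = @@ROWCOUNT
SELECT @BatchSize AS BatchSize,
       @Deleted AS RowsDeleted,
       DATEDIFF(ms, @Start, GETDATE()) AS ElapsedMs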
I need to take a variable from a table in SQL Server, pass it to a batch file, and execute the batch file. Right now I can execute the batch file with xp_cmdshell, but how can I pass the variable to the batch file and loop through all the variables?
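For illustration, a minimal sketch of such a loop (the table, column, and batch file path are hypothetical): walk the values with a cursor, build the command string with the value as an argument, and shell out once per row.

DECLARE @val varchar(100), @cmd varchar(500)

DECLARE vals CURSOR LOCAL FAST_FORWARD FOR
    SELECT SomeColumn FROM dbo.SomeTable

OPEN vals
FETCH NEXT FROM vals INTO @val
WHILE @@FETCH_STATUS = 0
BEGIN
    -- pass the value as a quoted argument to the batch file
    SET @cmd = 'C:\Scripts\MyJob.bat "' + @val + '"'
    EXEC master..xp_cmdshell @cmd
    FETCH NEXT FROM vals INTO @val
END
CLOSE vals
DEALLOCATE vals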
I am using the following batch file to execute a script that creates a db and all its objects in the local sql express:
sqlcmd -S (local)\SQLExpress -i C:\CreateDB.sql
This works fine, but I'm wondering if there's an easy way to put the script in the batch file so users don't have to worry about putting the script on the C drive. I tried getting rid of the -i parameter and pasting the script from the .sql file into the batch file, but it didn't work.
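One option that may be simpler (hedged; I'm assuming the .sql file ships alongside the .bat): keep CreateDB.sql in the same folder as the batch file and reference it via the batch file's own directory, so nothing has to live on C:.

rem %~dp0 expands to the folder the batch file is running from
sqlcmd -S (local)\SQLExpress -i "%~dp0CreateDB.sql"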
I'm 99% sure this is possible, but I wanted to confirm before I go upgrading one box in our replication scheme without having to do all the others (which are geographically dispersed):
Can an SP2 box replicate (merge replication in our case) with pre-SP2 servers? Most of our servers don't even have SP1 applied, and we're ready to upgrade, but I want to be sure I can do them one at a time rather than all at once.
I am trying to create an auto off-site backup of an entire database. This would include databases and users. It should also include changes made throughout the day.
Something challenging about this is I want it to also include design changes that may have been done throughout the day.
I understand log shipping or replication can deal with the data part of my solution. But how can I copy over the logins, users, and design changes?
Is it possible to have design changes replicated from publishers to subscribers?
I am using SQL 2000 and have 6 servers. Of these 6 servers, 4 have the same database.
My question is: I need a script or advice that will help me do this:
Every time data changes in 1 of the 4 servers that have the same database, I want all changes to happen in the other 3 so that they always have the same information.
I posted a question about replicating logins to the database, and the answer I got about using DTS to transfer logins is not good for me. Is it possible to replicate the syslogins table so that I can do this? If so, how? It is not listed under Databases and Publications when I try to create a publication; only individually created (user) databases can be seen.
The reason for this is that when the DBA decides to change a user's permissions, I want the info to be merge replicated to the subscriber. At the moment I can run the DTS package to transfer the logins, but it won't know when the logins have been updated, and hence I won't know when to run it.
I am very new to SQL Server. I have plenty of SQL knowledge, but the whole SQL Server environment is new.
I am working with SQL Server 2005. My task is to generate reports without affecting our live database. I have set up a second server and installed SQL Server 2005 on that too. My thought was that maybe I could mirror or replicate the table I require over to this new server and run my queries from there. Is this easy to do?
I read that mirroring might not work, as it is solely for backup/failover purposes, and that data on the mirrored server would not be accessible.
I have also been looking at SSIS, but at the moment this is all a bit like double Dutch to me! Can anyone point me in the right direction, preferably somewhere beginner friendly, i.e. not overly complicated!
I am using Sql 2005 and merge replication. I am relying on the feature where schema changes are replicated to subscribers but I have come across a situation where schema changes stop being replicated.
This is the scenario:
I create a database and publish it for merge replication.
I add subscribers.
If I need to change the published database I can use ALTER TABLE ddl and the subscriber gets the changes.
If I have to add or remove a merge article as part of a database change, I specify @force_invalidate_snapshot = 1 and @force_reinit_subscription = 1. Now any ALTER TABLE statements following the article change will NOT be replicated.
Is this a known 'feature'? Is it because @force_reinit_subscription is set to 1?
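For reference, a hedged sketch of the kind of article change described above (publication and table names are hypothetical); it is after a call like this that the subsequent ALTER TABLE statements stop being replicated:

EXEC sp_addmergearticle
    @publication = N'MyPublication',
    @article = N'NewTable',
    @source_object = N'NewTable',
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1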
We have four mobile devices that are set up for merge replication via the web. We are not receiving errors, but some of the data is not coming over to the devices. If we manually add a record, that record will come over, but there is data on the server that isn't on the devices. If we run the snapshot for each device (we're using host_name as a filter), nothing happens. If we do a validation check we get errors. If we reinitialize all devices it works, but the next day's data (a SQL job populates data to the publisher db at night) isn't on the device after syncing the next morning. Any help would be appreciated.
I have a SQL 2005 publisher and distributor and SQL 2000 subscribers. For some reason, on one of the subscribers I'm getting errors that it can't replicate the UDTs. I tried a new snapshot and made sure it was set not to replicate UDTs, but I'm still getting CREATE TYPE errors.
Would anyone have any idea why it's trying to create UDTs at the subscriber when I specify not to replicate UDTs?
I'm using SQL 2005 merge replication and I have noticed something; I'm not sure if this is true or not:
My publication is set to replicate schema changes (replicate_ddl = 1). Now, I have noticed that schema changes are only replicated if the current snapshot is valid. Is this right? If so why?
My next question carries on from the first. If I'm about to run a T-SQL script on my publisher that will add a column or two to a published table, how do I ensure my snapshot is valid in order for the DDL changes to be replicated? Should I be using:
EXEC sp_mergearticlecolumn
@publication = <publicationname>,
@article = <article name>,
@force_invalidate_snapshot = 1,
@force_reinit_subscription = 1
on each table I modify, after I have added the new column?
I have a database that is being set up for merge replication (Sql 2005), but there is one table that I only want the schema replicating, not the data - I never want the data to be replicated in either direction. I can see from sp_addmergearticle that you can do something like this for sp's or functions but is it possible to do this for tables?
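One approach sometimes suggested (a hedged sketch; publication and table names are hypothetical) is to publish the table with a subset filter that matches no rows, so the schema is created at the subscriber but no data ever qualifies for replication. Whether this fully satisfies "never in either direction" would need testing, since rows inserted at a subscriber are a separate question.

EXEC sp_addmergearticle
    @publication = N'MyPublication',
    @article = N'SchemaOnlyTable',
    @source_object = N'SchemaOnlyTable',
    @subset_filterclause = N'1 = 0'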
I use a merge replication between Sql Server and Sql Server Express.
When I enable a DB for .NET features (e.g. RoleManager), new tables and roles are created and some GRANTs are issued on SPs.
When I replicate this DB to another one, none of my roles are replicated and I also lose the permissions. Is there a way to replicate the roles and the permissions as well?
I am wondering if there is a way to replicate changes in a SQL 2000 DB to 2005 without backing up the DB and restoring it in 2005. We are running an ERP system using SQL 2000 and are moving to a later version that supports 2005, and we want to test it out before going live. But I'd like to sync with the current system from time to time instead of having to convert the DB and get it ready again and again every time I want to update the data. Thanks for any help you can offer, Chad
I have a number of "join" tables ie joins records from two other tables for example, an employee may be responsible for more than one product so the join table would look like this:
table name: employee_products Employee_id foreign key from employee table product_id goreign key from products table
My question is, how do I replication this table? Replication requires all table to have a primary key field. In this case, both fields are foreign keys and I dont have a primary key as the same data appears regularly in either field.
How should I get around this so I can implement replication? I dont want to have to add another field to be the primary key field.
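If the (Employee_id, product_id) pair is unique, which it normally is in a join table, a composite primary key over the two existing columns satisfies replication's requirement without adding a new field. A sketch (the dbo schema is assumed):

ALTER TABLE dbo.employee_products
ADD CONSTRAINT PK_employee_products
    PRIMARY KEY (Employee_id, product_id)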
Hi, my transactional replication is working perfectly. I am making some changes to the publication database, altering stored procedure code, and now I want those changes to also take place at the subscriber. Does anyone have an idea? Please suggest. Thank you. Nil
We're considering one database in the far east, using merge replication with a database in London.
There's a time difference of maybe 7 hours between the two sites. Does anyone have ideas on conflict resolution? (Thinking about the Far East updating a record at 16:00 their time and London updating the same record at 15:30 GMT: how does SQL Server know that London's version is the correct record to use?)
I have a table that is used for reporting. The problem is that the data in the table is refreshed every 30 minutes with a bulk insert. I am trying to find a way to have two tables that are mirror images of each other, and when the loading table is loaded, the table assumes the identity of the reporting table. The basic principle is that I need the table to be available almost all the time, and when the bulk insert is happening, users cannot query. Any help would be greatly appreciated.
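One common pattern for this (a hedged sketch; object names are hypothetical): have reports query a view rather than the table directly, bulk insert into the offline copy, then repoint the view, so readers switch to the fresh data almost instantly and never see a half-loaded table.

-- Reports always read from the view dbo.ReportData.
-- While dbo.ReportTable_A serves reads, bulk insert into dbo.ReportTable_B,
-- then swap by repointing the view (ALTER VIEW must run in its own batch):
EXEC('ALTER VIEW dbo.ReportData AS SELECT * FROM dbo.ReportTable_B')
-- The next load cycle fills dbo.ReportTable_A and the ALTER VIEW points back to it.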
We have a database that I would like to replicate on another server but am unable to use regular replication via publish/subscribe due to the fact that the production database has no primary keys on tables, only clustered indexes. The backup db needs to be as close to real-time synchronized as possible and will be in fairly active use most of the time. Has anyone had success in developing such a system? How did you do it and what are the pitfalls? Any advice would be greatly appreciated. Thank you. W.
I have SQL Server running on my internal LAN. I want to have a second SQL Server running on a hosted (shared) website. I then would like these servers to talk to each other. At some scheduled time I need to publish data to the web, and I then need to subscribe to data input on the web by various clients. My internal LAN can see the Internet via our cable modem.
What is the best way to do this? What software will I need to run. I'm looking for the big picture.
I want to replicate to an archive database. This means that the subscriber will have data that has been removed from the publisher. In my reading, I haven't seen any discussion of this specific scenario.
Here's what I imagine the solution might be:
EXEC sp_addpublication_snapshot
    @publication = N'My_Publication',
    @frequency_type = 1 -- only create the snapshot once
GO
I set the publication snapshot to only execute once, that would be during the maintenance window when it is initially installed. Then, on the tables that will contain archived data, I specify that deletes aren't replicated.
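For the "deletes aren't replicated" part, a hedged sketch of how an article might be added (assuming transactional replication; article and table names are hypothetical): setting @del_cmd to NONE tells the article not to propagate DELETE statements to the subscriber.

EXEC sp_addarticle
    @publication = N'My_Publication',
    @article = N'ArchivedOrders',
    @source_object = N'ArchivedOrders',
    @del_cmd = N'NONE'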
Here's my concern: aren't there times when you need to resync?
If you could push a new snapshot that dropped the tables on the subscriber and built the thing up from scratch, then things would sync-up just fine. But in this scenario if you drop the subscriber tables then you've just lost your archive.