I have a publisher, a distributor, and 3 subscribers all working, all created at the same time.
Today I tried to introduce a 4th subscriber server, and replication has fallen over. Each cycle will have updates, e.g.:
merged 75 data changes (6 inserts, 6 updates, 63 deletes, 48 resolved conflicts)
but the next cycle will show:
merged 55 data changes (4 inserts, 3 updates, 48 deletes, 37 resolved conflicts)
The 48 deletes are the conflicts from the previous cycle, and this repeats hour after hour. The conflicts table is storing all of the changes, but what have I done wrong in introducing the 4th server to get these symptoms? I followed the wizard all the way (maybe that was it? :rolleyes: ). I have disabled this agent now and the others are working fine again. It is merge replication, although the subscriber I have added is not going to be causing any updates.
On SQL 7.0, NT SP6, we had transactional replication running fine last Friday. After a reboot (Friday night) replication continued fine, and then by Tuesday morning we saw a lot of timeout errors from the distribution agent, and the difference in row counts (COUNT(*)) for the same tables on each side grew alarmingly high even though replication appeared to keep running fine.
I changed the timeout from 0 to 300, which seemed to help with the timeouts, but the difference in row counts kept growing slowly while the replication seemed to be running fine.
Eventually we resorted to re-initializing the subscriptions on the replicated database to get past the issue, and now, 24 hours after initialization completed, we are still pretty much in sync.
Does anyone have any ideas why we got so far out of sync (with respect to row counts)? Any bugs, gotchas, or quirks anyone can think of?
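For reference, the check we were running to spot the drift is just a row count on each side; a minimal sketch, assuming a linked server named SUBSCRIBER pointing at the subscription server and a published table dbo.Orders (both names are placeholders):

-- Compare publisher and subscriber row counts for one published table.
-- SUBSCRIBER is an assumed linked server; MyDb and dbo.Orders are placeholders.
SELECT
    (SELECT COUNT(*) FROM dbo.Orders) AS publisher_rows,
    (SELECT COUNT(*) FROM SUBSCRIBER.MyDb.dbo.Orders) AS subscriber_rows;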
I have configured transactional replication between 2 MSSQL 2005 instances. It is set to replicate every 15 minutes. Most replication cycles take less than a minute, with about 10-50 transactions being replicated.
However, 3 times during the day the replication takes about 25 minutes to complete, with 500-700 transactions being replicated. This also causes the application that accesses the published database to become unresponsive when it attempts to retrieve rows from tables in that database.
Might there be a setting that causes the transactional replication to accumulate transactions during the day?
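In case it helps, this is the sort of query I have been running in the distribution database to see when commands pile up; a rough sketch, assuming the default distribution database name and joining on xact_seqno only (a production query would also match publisher_database_id):

USE distribution;
-- Count stored replication commands per hour of entry to spot the daily spikes.
SELECT DATEPART(hour, t.entry_time) AS entry_hour,
       COUNT(*) AS command_count
FROM dbo.MSrepl_commands AS c
JOIN dbo.MSrepl_transactions AS t
    ON c.xact_seqno = t.xact_seqno
GROUP BY DATEPART(hour, t.entry_time)
ORDER BY entry_hour;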
I am beginning the design of a project to maintain an online archive of historical data for a production database, and then truncate old, rarely-used data from the production database. I am considering a method whereby we (A) replicate data changes from the production database to an archive database and (B) truncate old data from the production database according to predetermined business rules.
I am trying to determine whether SQL Server merge or transactional replication can be used to accomplish the replication portion of this requirement.
Given this scenario, there are two types of modifications that are made to the production database: (1) application inserts, deletes and updates which should be replicated, and (2) truncation deletes (as described in "B", above) which should not be replicated.
My question is: Will I be able to control the replication process such that application deletes are replicated, but "truncation" deletes are not? Could static row filters somehow be used to accomplish this?
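One thing I have been looking at, and I'd welcome corrections: row filters decide which rows replicate, not which operations, but transactional replication does let you set an article's DELETE delivery to NONE. The catch is that this skips every delete, not just the truncation deletes. A hedged sketch, with publication and table names as placeholders:

-- Add the article with DELETE statements not propagated at all.
-- This suppresses application deletes too, so it only fits if those
-- can be handled some other way.
EXEC sp_addarticle
    @publication = N'ArchivePub',
    @article = N'OrderHistory',
    @source_owner = N'dbo',
    @source_object = N'OrderHistory',
    @del_cmd = N'NONE';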
I'm trying to create transactional replication from SQL Server 2000 to 2005. Basic replication between the servers works just fine. However, what I want to accomplish is to be able to skip some of the transactions. For example, from time to time we want to purge some of the historical data from the main database (the publisher). We don't want the same purging to occur on the destination database, which will be used for reporting purposes and needs to retain all of the historical information. I wanted to simply stop the replication log reader, purge the records, back up the transaction log with truncation, and then restart the reader. The only problem: the published database keeps the purge transactions in its log until they have been replicated, so the transaction log backup doesn't help. Any ideas would be greatly appreciated!
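For what it's worth, the only workaround I have found described is to stop the Log Reader Agent, purge, and then mark everything still in the log as already distributed with sp_repldone. This also skips any legitimate transactions in that window, so it needs a quiet period. A heavily hedged sketch (the job name is a placeholder):

-- Stop the Log Reader Agent job so the purge is never read from the log.
EXEC msdb.dbo.sp_stop_job @job_name = N'REPL-LogReader';

-- ... run the purge in the published database here ...

-- Mark all pending transactions as distributed (run in the published DB).
-- WARNING: any real transactions in this window are skipped as well.
EXEC sp_repldone
    @xactid = NULL, @xact_seqno = NULL,
    @numtrans = 0, @time = 0, @reset = 1;

EXEC msdb.dbo.sp_start_job @job_name = N'REPL-LogReader';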
There is a database "Foo" sitting on server "A". There is a database "Bar" sitting on server "B". A.Foo publishes a subset of its schema. B.Bar subscribes to A.Foo's publication. The distribution database is on "B" (B.distributor). This is a push subscription (transactions are pushed to the subscriber from the distributor). Every day (including the weekend) I get the following alert:
"5/12/2015 3:53:16 AM, Unsubscribed Transactions (Count) on "B" is Warning.
SQL Server instance "B" has 636771 unsubscribed replication transactions received by the Distributor and not received by a Subscriber.
Unsubscribed Transactions (Count): Number of replication transactions received by the Distributor and not received by a Subscriber."
The number of transactions varies. The alerts are sent between 1:20 AM and 3:30 AM (EST). I'm trying to figure out what is causing the backup of transactions; I assume the issue precedes the alerts by 30 minutes or so.
- There are no backups occurring.
- Nothing is blocking the distribution agent in the subscription database.
- Job activity is at a minimum; the few jobs running run throughout the day.
- The machine has plenty of resources (CPU, RAM, etc.).
- The publisher database shows no signs of stress.
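To see the backlog as it builds rather than waiting for the alert, I have been sampling the pending-command count from the distribution database; a sketch using sp_replmonitorsubscriptionpendingcmds, with the publication name as a placeholder:

USE distribution;
-- How many commands are queued for subscriber B right now?
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher = N'A',
    @publisher_db = N'Foo',
    @publication = N'FooPub',      -- placeholder publication name
    @subscriber = N'B',
    @subscriber_db = N'Bar',
    @subscription_type = 0;        -- 0 = push subscription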
Recently we disabled transactional replication on a production database (data size: 300 GB). Two weeks later we became aware that the transaction log of that database had almost reached its limit (70 GB). I investigated the issue and found:
1. open transactions pending for distribution:
DBCC opentran('xxxx')
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (323830:7083:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
2. Running sp_replcounters showed over 240,000 replicated_transactions, which is what made the transaction log so large.
Transactional replication has been disabled on this server; there are no publication or distribution databases any longer, and all replication-related agents are gone too. Why did this still happen?
3. I used sp_repldone to manually reset those 240,000 transactions to distributed; DBCC OPENTRAN then looked fine for a while, and 99% of the transaction log file was freed up.
4. Checking open transactions and running sp_replcounters today, I found over 60,000 pending transactions on the database again.
Has anybody experienced the same issue and knows how to fix it and stop it from happening?
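For anyone hitting the same thing, the cleanup I am planning to try is to reset the replication marks once more and then strip the leftover replication metadata from the database itself, on the theory that something still flags it as published; a hedged sketch, with the database name a placeholder:

USE PublishedDb;
-- Mark everything currently waiting as distributed.
EXEC sp_repldone
    @xactid = NULL, @xact_seqno = NULL,
    @numtrans = 0, @time = 0, @reset = 1;

-- Remove any leftover replication metadata so the log stops being pinned.
EXEC sp_removedbreplication N'PublishedDb';

-- Verify nothing is still pending.
DBCC OPENTRAN('PublishedDb');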
I have multiple questions and would appreciate any suggestions in resolving them. I'm a novice to these issues.
1) First of all, what is the exact command to turn on the trunc. log on chkpt. option for the transaction log of a SQL Server 2000 database? Is this option on by default? I have noticed for one of the databases I'm managing that the transaction log was over 12 GB, while the database itself was only 425 MB.
2) How is it possible to run a DBCC TRACEON to see the contents of the transaction log, to check whether we are having any issues with uncommitted transactions, i.e. updates, inserts, and/or deletes?
3) What are the commands to truncate inactive transactions and to increase the readbatchsize?
4) Lastly, how do I validate transactional replication via checksum, and how do I find the real latency for a small number of changes that need to be committed between publisher and subscriber?
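For questions 1 and 4, here is what I have pieced together so far; a sketch, with database and publication names as placeholders, and I'd appreciate corrections:

-- 1) SQL Server 2000: turn on trunc. log on chkpt. (equivalent to simple
-- recovery). It is typically off unless the database inherits simple
-- recovery from model.
EXEC sp_dboption N'MyDb', N'trunc. log on chkpt.', N'true';

-- 4) Validate a transactional publication by row count and checksum.
EXEC sp_publication_validation
    @publication = N'MyPub',
    @rowcount_only = 0,    -- 0 = checksum validation as well as row count
    @full_or_fast = 2;     -- 2 = fast count first, full count on mismatch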
We have a filtered publisher (SQL Server 2012) with several pull and push subscriptions that have been functioning fine for years.
We have just added a new pull subscriber on a remote server. We configured an alternate snapshot folder location on the subscriber (pointing to an existing path on a remote server), generated a new snapshot and partitioned snapshot, and moved them to the subscribing server at the alternate path. We're seeing a problem we've never seen before:
When the merge agent runs, it applies the schema and performs a few hundred bulk inserts, but then proceeds to 'download changes to subscriber' as though the snapshot were old (it is brand new and should contain all transactions). We have never seen this before; all data is usually applied in bulk when creating a new subscriber or reinitializing an existing one.
We marked the subscriber for re-initialization (from both the publishing and subscribing servers) with a fresh set of snapshots, and the same problem occurred. We have also tried it in our test environment (which mirrors production), where the snapshot applies as expected (via bulk inserts).
(It might be relevant to note the publisher was recently upgraded in-place from 2005 to 2012.)
Hi there, I have decided to move all my transaction handling from ASP.NET to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back transactions just like myTransaction.Rollback() in ASP.NET. But what about exceptions? In ASP.NET, I am used to doing the following:
<code>Try
    'execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try</code>
Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than ROLLBACK TRANSACTION? Any help is greatly appreciated.
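To frame the question, here is the pattern I have sketched on the SQL 2000 side (there is no TRY/CATCH before 2005), checking @@ERROR after each statement; the table and procedure names are placeholders, and a RAISERROR with severity 16 is what ADO.NET surfaces as a SqlException:

-- A minimal sketch of the SQL Server 2000 pattern: check @@ERROR after
-- each statement and roll back on failure.
CREATE PROCEDURE dbo.usp_TransferExample
AS
BEGIN
    BEGIN TRANSACTION

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Debit failed; transaction rolled back.', 16, 1)
        RETURN
    END

    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Credit failed; transaction rolled back.', 16, 1)
        RETURN
    END

    COMMIT TRANSACTION
END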
Hi all, can anybody suggest a website where I can find articles on managing transactions with SQL Server? Also a scenario where the transactions take place in an environment involving 2 different databases, like bank account and credit card transactions (specifically of the two-way kind). Thanks.
I have a web application with a shopping cart. How do I stop all the shopping cart transactions from going into the database log? Is this possible? These are only transient data movements that will never be needed for a restore, and they cause log bloat. Or is there a better way to stop log bloat?
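From what I have read so far, SQL Server logs every data modification regardless, so individual tables can't be exempted; the usual mitigation seems to be simple recovery, so the log truncates itself at each checkpoint (at the cost of point-in-time restore). A one-line sketch, with the database name a placeholder:

-- Log records are still written, but the log self-truncates at checkpoints.
ALTER DATABASE MyShopDb SET RECOVERY SIMPLE;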
How can we change connection properties in a DTS package with connections? You can loop through the connection count, but the connection ID is not static, so you can't rely on that. Is there another way of changing connection properties?
I am currently designing a DTS Package to import data that is processed daily into a large database.
I have to design the package such that if any step fails when importing, I roll back the entire transaction.
I have designed the package with this in mind, checked "join transaction if present" and "rollback transaction on failure" in all of the workflows. I have also made all workflows serialized.
However, when I run the package, it fails on one of the data pumps with the error:
I am replicating (finally!!), and in my publisher's agent history I can see it says xx transactions with xx commands were delivered (xx being the number). Where can I look to see what those transactions or commands are?
Is there a place the system stores this information?
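The closest I have found so far is the distribution database, where sp_browsereplcmds decodes the stored commands into readable statements; a sketch, with the sequence numbers below being placeholders you would take from dbo.MSrepl_transactions:

USE distribution;
-- Show the actual commands for one transaction's sequence number range.
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000001900001926000400000000',
    @xact_seqno_end   = '0x0000001900001926000400000000';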
Is there a point to wrapping a single UPDATE or INSERT statement in an explicit TRANSACTION:
BEGIN TRANSACTION
INSERT INTO Table (...) VALUES (...)
COMMIT TRANSACTION
I understand ACID and the concept of transactions. However, I thought they were only necessary for multi-statement operations. I'm maintaining code that does this and am wondering whether it is necessary. Does SQL Server guarantee ACID for single statements? Are single UPDATE/INSERT statements prone to race-condition-like effects without explicit transactions?
If you run BEGIN TRANSACTION and then run a statement such as an UPDATE query, and you see that it affects the number of rows that you wanted it to affect, is there a way to look at the actual data that changed before you COMMIT TRANSACTION?
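What I have been doing meanwhile, in case it is the intended way: within the same session the open transaction sees its own changes, so you can SELECT them back before deciding. A sketch with placeholder table and column names:

BEGIN TRANSACTION

UPDATE dbo.Employees SET Salary = Salary * 1.10 WHERE DeptId = 5

-- The same session reads its own uncommitted changes.
SELECT EmployeeId, Salary FROM dbo.Employees WHERE DeptId = 5

-- If the data looks right: COMMIT TRANSACTION
-- Otherwise:
ROLLBACK TRANSACTION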
I have a table with around 240 columns, one of which is Inserttime (DATETIME), populated via GETDATE() in the stored procedure when we insert data into the table. For one and the same millisecond, 2007-06-27 09:32:58.303, I have around 7,600 records in the database. The stored procedure is called for each individual record and we don't batch the transactions. Is this possible?
I did some benchmarking on this server, and I can insert only approximately 700-800 records per second into this particular table.
I have a small database that I have been testing. I get an error about a transaction deadlock. The code is in stored procedures, and I added transactions to the SPs, but the error happened again. I wrapped the whole SP in just one transaction, and I don't have any indexes on the tables. When I test by running a program that sends 3 calls at a time, it will get a deadlocked transaction once I send 6 or 9 at a time. I am not sure how it can have a deadlocked transaction after I used transactions (BEGIN and COMMIT) in the SPs. Steve
I am working with transactions and use TRY/CATCH to capture errors, and in the event of an error I have to roll back the transaction. How can I perform this? Most of the errors I foresee are either insertion of NULL values into non-nullable columns or violation of primary keys when inserting duplicates. I started by coding the following way, but it does not roll back; apparently the TRY/CATCH does not fire for the above kinds of errors. Can somebody help?
DECLARE @REPORTING_PERIOD VARCHAR(6)
BEGIN TRY
BEGIN TRANSACTION
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SET @REPORTING_PERIOD =(Select REPORT_PERIOD_ID from dbo.T_REPORT_PERIOD where C_FLAG_ACTIVITY=1)
--Step 1
INSERT INTO [dbo].[T_COUNTRIES]
([C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,[FK_REPORTING_PERIOD])
SELECT [C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,@REPORTING_PERIOD
FROM [dbo].[IN_T_COUNTRIES]
--Step 2
INSERT INTO [dbo].[T_FLE]
([FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,[FK_REPORTING_PERIOD])
SELECT Null as [FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,@REPORTING_PERIOD -- maps to FK_REPORTING_PERIOD; the SELECT list must supply all 14 INSERT columns
FROM [dbo].[IN_T_FLE]
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() as ErrorNumber,
ERROR_LINE() as ErrorLine,
ERROR_MESSAGE() as ErrorMessage;
-- Test XACT_STATE for 1 or -1.
-- XACT_STATE = 0 means there is no transaction and
-- a commit or rollback operation would generate an error.
-- Test whether the transaction is uncommittable.
IF (XACT_STATE()) = -1
BEGIN
PRINT
N'The transaction is in an uncommittable state. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
-- Test whether the transaction is active and valid.
IF (XACT_STATE()) = 1
BEGIN
PRINT
N'The transaction is committable. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
END CATCH;
I've been searching around and haven't found anything that simply states what I want to know.
I want to use a transaction within my CLR Stored Proc, to do so I've got System.Transactions referenced and I can access the current transaction via Transaction.Current.
My questions are:
Will there always be a current transaction?
Do I need to create a new transaction if one doesn't already exist?
I need to push rows from SQL Server CE to SQL Server 2000, and afterwards delete those rows from the CE database, but only if all rows have been sent to SQL Server 2000.
I think the best approach is to work with transactions. If I can use transactions for this purpose, can anybody give me a link with push-transaction examples?
What does "Transactions/sec" counter in SQL 2005 under databases do in terms of performance. My counter shows almost 100% all the time in 4 terrbyte DB in superdome with many CPUs.
My first question: can there be a performance loss if I uncomment the lines about transaction usage? I mean, when I do this I start to get more timeouts.
My problem goes on. When I comment those lines out and run a stress tool, I get "column X does not belong to table Y" errors. If those lines are not commented out, I don't get this error, but I do get timeout errors frequently. So, my second question: is there something wrong with my query, or is there a bad coding practice I am following? Could someone offer a better and more robust sample for this code block?
By the way, connection pooling is on, and these errors are observed under high load.
I have a transaction that will run on an hourly basis, and I need to make sure that no one starts this transaction while it is already running. I just need to know the system table that has all the transaction names in it, so I can check whether the transaction is running or not. What I am trying to do is avoid locks; I just want to make sure that no one runs the same transaction twice. Does anyone have any idea how we can do this?
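The nearest thing I have found to a named mutex in SQL Server is an application lock; a hedged sketch with sp_getapplock, where the resource name is a placeholder and a negative return code means another session already holds the lock:

DECLARE @rc INT;
BEGIN TRANSACTION;
EXEC @rc = sp_getapplock
    @Resource = N'HourlyBatch',   -- placeholder lock name
    @LockMode = N'Exclusive',
    @LockOwner = N'Transaction',
    @LockTimeout = 0;             -- fail fast instead of waiting

IF @rc < 0
BEGIN
    ROLLBACK TRANSACTION;         -- someone else is already running it
    RAISERROR('HourlyBatch is already running.', 16, 1);
    RETURN;
END

-- ... do the hourly work here ...

COMMIT TRANSACTION;               -- releases the applock with the transaction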
If you ask a .NET developer, he would likely say that he uses System.Transactions to manage transactions; a DBA, on the other hand, places transactions within the T-SQL of stored procedures.
What rules do architects and others use when determining whether transactions should be placed in:
.net Middle Tier Components using System.Transactions for example.
T-SQL Stored Procedures.
Both .net Middle Tier Components and T-SQL Stored Procedures. Thanks in advance,
If I have a loop that runs through the records in a dataset like this:
<code>for (int i = 0; i < ds.Tables[0].Rows.Count; ++i)</code>
and in this loop I have several SQL commands that run as a transaction in a try/catch block like:
<code>try
{
    // do stuff
}
catch
{
    trans.Rollback();
}</code>
how can I keep the loop going even if a transaction fails, so the transaction works for each individual row? If row 100 fails for whatever reason, I would like the loop to continue running. Do I just simply remove the "throw" and it will continue looping? My catch block currently looks like:
<code>catch (Exception ex)
{
    transaction.Rollback();
    activity.Log("Transaction aborted, rolling back. Error Message: " + ex.Message +
        " Stack Trace: " + ex.StackTrace.ToString());
    throw;
}</code>
Thanks, mcm
I have a class that uses a TransactionScope object to perform two operations against a database. The problem is that on one computer it works well, but on another I receive an error message: "MSDTC is not running in .SQLEXPRESS". I have checked the services on that computer and "Microsoft Distributed Transaction Coordinator" is running. Does somebody know what is happening? Thank you, and sorry for my English.
I am using ADO.NET for data access and was wondering if anyone knows a good resource for information on SQL transactions. Also, do you know if a ForEach statement can be used within SQL transactions?