On SQL 7.0, NT SP6, we had transactional replication running fine last Friday. After a reboot on Friday night, replication continued fine, but by Tuesday morning we were seeing a lot of timeout errors from the Distribution Agent, and the difference in row counts (count(*)) for the same tables on each side grew alarmingly high, even though replication appeared to keep running fine.
I changed the timeout from 0 to 300, which seemed to help with the timeouts, but the difference in row counts kept growing slowly while replication seemed to be running fine.
Eventually we resorted to reinitializing the subscriptions on the replicated database to get past the issue, and now, 24 hours after the initialization completed, we are still pretty much in sync.
Anyone have any ideas why we got so far out of sync (with respect to row counts)? Any bugs, gotchas, or quirks anyone can think of?
I can't believe this. MSDE was a great product and I'm sure that SQL Express is even better. The difference is that I can't migrate any of my company's applications to it, because they rely on a replication architecture (requiring replication publishing). And SQL Agent: how are you supposed to keep the database in good shape if you can't run maintenance plans at scheduled intervals?
I know there are third-party products out there to compensate for the lack of SQL Agent, but they don't solve the replication problem. Besides, the Agent still exists in other editions - it's not like it has been dropped completely. Why do we have to lose functionality and features? We have already lost the use of a second CPU and memory > 1GB. Why do they need to keep taking stuff out? Why don't they just limit access to the new feature set?
I would rather lose the new features, such as Report Server and Full-Text Indexing, and get SQL Agent and replication publishing back. I can almost understand that with a free product they don't want to keep piling in new features just to give them away - even though I'm sure the sales of the larger editions more than cover the R&D investment in their construction. Why can't we choose? Agent or Report Server, replication or Full-Text Indexing? I don't need some of these new features (others might); I just want the features that I used to have. Perhaps SQL 2005 setup could be made to let us choose 2 out of the 4 features?
What's more, I can't even say that I'll just stick with MSDE, as it now has a limited support life. Microsoft announced that any SQL Server version prior to 2005 SP1 will NOT be supported on Vista or Longhorn Server (and 2005 SP1 doesn't support anything less than Windows 2000 SP4 or Windows XP SP2, which MSDE does).
OK - let me get this right. We had something, and it has been taken away in the new version. Perhaps I don't deserve the new version if I don't pay for it - fine - I'll stick with the old one I already have. BUT wait... I can't use that on the new platforms coming out.
Are my entry-level customers going to pay for the Workgroup Edition (adding AUD$1,200 to my AUD$1,500 entry-level application) to replace the replication publisher and provide a maintenance plan? Is anyone going to pay for the re-architecting and re-construction of my application to work around these new limitations?
I guess that will be me. Is this what happens for being loyal to MS products?
Please - give back SQL Agent and replication publishing.
I have a publisher, a distributor, and 3 subscribers, all working and all created at the same time.
Today I tried to introduce a 4th subscriber server, and replication has fallen over. Each cycle has updates, e.g. "merged 75 data changes (6 inserts, 6 updates, 63 deletes, 48 resolved conflicts)", but the next cycle will show: "merged 55 data changes (4 inserts, 3 updates, 48 deletes, 37 resolved conflicts)".
The 48 deletes are the conflicts from the previous cycle, and this repeats hour after hour. The conflicts table is storing all of the changes, but what have I done wrong in introducing the 4th server to get these symptoms? I followed the wizard all the way (maybe that was it? :rolleyes: ). I have disabled this agent for now, and the others are working fine again. It is merge replication, although the subscriber I have added is not going to be making any updates.
I have configured transactional replication between 2 MSSQL 2005 instances. It is set to replicate every 15 minutes. Most replication cycles take less than a minute, with about 10-50 transactions being replicated.
However, 3 times during the day, the replication takes about 25 minutes to complete, with 500-700 transactions being replicated. This also causes the application accessing the replicated DB to become unresponsive when it attempts to retrieve rows from tables in that DB.
Might there be a setting that causes the transactional replication to accumulate transactions during the day?
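For what it's worth, one way to see whether commands are piling up between runs is to ask the distributor directly. A minimal sketch, assuming a default distribution database name and placeholder server/database/publication names:

USE distribution
GO
-- How many commands are queued for a given subscription right now
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher = N'MyPublisher',        -- placeholder names throughout
    @publisher_db = N'MyPublishedDb',
    @publication = N'MyPublication',
    @subscriber = N'MySubscriber',
    @subscriber_db = N'MySubscriberDb',
    @subscription_type = 0              -- 0 = push, 1 = pull

If the pending count climbs steadily ahead of those three slow runs, the backlog is coming from bursts of activity on the publisher (e.g. batch jobs) rather than from a replication setting.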
I am beginning design for a project to maintain an on-line archive of historical data for a production database, and then truncate old, rarely-used data from the production database. I am considering a method whereby we (A) replicate data changes from the production database to an archive database and (B) truncate old data from the production database according to predetermined business rules.
I am trying to determine whether SQL Server merge or transactional replication can be used to accomplish the replication portion of this requirement.
Given this scenario, there are two types of modifications that are made to the production database: (1) application inserts, deletes and updates which should be replicated, and (2) truncation deletes (as described in "B", above) which should not be replicated.
My question is: Will I be able to control the replication process such that application deletes are replicated, but "truncation" deletes are not? Could static row filters somehow be used to accomplish this?
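A note on this: static row filters are evaluated per row, not per operation, so they can't tell an application delete from a truncation delete. What transactional replication does let you do is turn off delete propagation for an article entirely. A hedged sketch, with placeholder publication and table names:

-- When adding the article, tell replication not to propagate DELETEs at all
EXEC sp_addarticle
    @publication = N'ArchivePub',   -- placeholder
    @article = N'MyTable',          -- placeholder
    @source_owner = N'dbo',
    @source_object = N'MyTable',
    @del_cmd = N'NONE'              -- no DELETE commands are replicated

The catch is that this suppresses application deletes too, so those would have to reach the archive another way (for example, a cleanup job driven by an audit table of application deletes).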
I'm trying to create transactional replication from SQL Server 2000 to 2005. Basic replication between the servers works just fine. However, what I want to accomplish is to be able to skip some of the transactions. For example, from time to time we want to purge some of the historical data from the main database (the publisher). We don't want the same purging to occur on the destination database, which will be used for reporting purposes and needs to keep all the historical information. I wanted to simply stop the replication Log Reader, purge the records, back up the transaction log with truncation, and then restart the reader. The only problem: the published database keeps the purge transactions in the log until they are replicated, so the transaction log backup doesn't help. Any ideas would be greatly appreciated!
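One technique sometimes used for this, offered strictly as a sketch rather than a tested recipe: customize the auto-generated delete procedure on the subscriber so it ignores deletes while a purge flag is set (custom replication procedures are a supported extension point). All object names below are hypothetical:

-- On the subscriber; dbo.PurgeFlag, dbo.History, and the proc name are hypothetical
ALTER PROCEDURE dbo.sp_MSdel_dboHistory
    @pkc1 int   -- primary key parameter generated by replication
AS
BEGIN
    -- While the publisher-side purge is being distributed, swallow deletes
    IF EXISTS (SELECT 1 FROM dbo.PurgeFlag WHERE InProgress = 1)
        RETURN
    DELETE FROM dbo.History WHERE HistoryID = @pkc1
END

You would set the flag just before purging at the publisher and clear it after the Distribution Agent has drained the purge transactions; the timing is the fragile part. Stopping the Log Reader alone can't help, because the log records stay pinned until the Log Reader has processed them.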
There is a database "Foo" sitting on server "A". There is a database "Bar" sitting on server "B". A.Foo publishes a subset of its schema. B.Bar subscribes to A.Foo's publication. The distribution database is on "B" (B.distributor). This a push subscription (transactions are pushed to the subscriber from the distributor). Every day (including the weekend) I get the following alert:
"5/12/2015 3:53:16 AM, Unsubscribed Transactions (Count) on "B" is Warning.
SQL Server instance "B" has 636771 unsubscribed replication transactions received by the Distributor and not received by a Subscriber.
Unsubscribed Transactions (Count): Number of replication transactions received by the Distributor and not received by a Subscriber."
The number of transactions varies. The alerts are sent between 1:20 AM (EST) and 3:30 AM (EST). I'm trying to figure out what is causing the backlog of transactions. I assume the issue precedes the alerts by 30 minutes or so.
- There are no backups occurring
- Nothing is blocking the Distribution Agent in the subscription database
- Job activity is at a minimum; the few jobs running run throughout the day
- The machine has plenty of resources: CPU, RAM, etc.
- The publisher database shows no signs of stress
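Since the alerts land in a consistent window, it may help to profile when transactions enter the distribution database. A minimal sketch, assuming the default distribution database name:

USE distribution
GO
-- Count stored transactions by hour to see when the backlog builds
SELECT CONVERT(varchar(10), entry_time, 120) AS entry_date,
       DATEPART(hour, entry_time) AS entry_hour,
       COUNT(*) AS txn_count
FROM dbo.MSrepl_transactions
GROUP BY CONVERT(varchar(10), entry_time, 120), DATEPART(hour, entry_time)
ORDER BY entry_date, entry_hour

If the 1-3 AM window shows a spike in txn_count, something on the publisher is generating those transactions even though no scheduled job is obvious; index maintenance and batch-mode application activity are common culprits.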
Recently we disabled transactional replication on a production database (data size: 300GB). Two weeks later we became aware that the transaction log of that DB had almost reached its limit (70GB). I investigated the issue and found:
1. open transactions pending for distribution:
DBCC opentran('xxxx')
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (323830:7083:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
2. exec sp_replcounters returned over 240,000 replicated_transactions, which is what caused the tran log to grow so large.
Transactional replication has been disabled on this server; there are no publication or distribution DBs any longer, and all replication-related agents are also gone. Why did this still happen?
3. Used sp_repldone to manually reset those 240,000 transactions to distributed; checked opentran, and it was fine for a while. 99% of the tran log file was freed up after that.
4. Checked open transactions and ran sp_replcounters today, and found over 60,000 pending transactions on the DB again.
Has anybody experienced the same issue, and do you know how to fix it and stop it from happening?
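In case it helps anyone else: when replication has been removed but the log reader's watermark is left behind, the usual cleanup is below. This is a sketch; the database name is a placeholder, and sp_repldone with these arguments is only safe when the database genuinely has no remaining publications:

USE YourDatabase    -- placeholder
GO
-- Mark everything in the log as already distributed
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1
-- Then strip any leftover replication metadata so the problem stops coming back
EXEC sp_removedbreplication N'YourDatabase'

If the pending count returns even after sp_removedbreplication, the database is probably still flagged as published somewhere, which is worth checking before the log fills again.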
I recently installed SQL Express Advanced and came across something that's confusing me. When I look at my running services I see the advanced features like SQL Server Full Text Search and the like, but I can't seem to find the reporting service. Is there a chance I skipped over it in the install? I had assumed I would have a service running with a name something like SQL Server Reporting Services. I haven't come across directions on how to turn it on, so I assume that I missed it during installation?
I have multiple questions and would appreciate any suggestions for resolving them. I'm a novice to these issues.
1) First of all, what is the exact command to turn on the 'trunc. log on chkpt.' option for the transaction log of a SQL Server 2000 database (a sketch follows after this list)? Is this option on by default? I have noticed for one of the databases I'm managing that the transaction log was over 12 GB, while the DB was only 425 MB.
2) How can I run DBCC TRACEON (or a similar DBCC command) to see the content of the transaction log, to check whether we have issues with uncommitted transactions, i.e. updates, inserts, and/or deletes?
3) What are the commands to truncate inactive transactions and to increase the ReadBatchSize?
4) Lastly, how do I validate transactional replication via checksum, and how do I find the actual latency for a small number of changes that need to be committed between publisher and subscriber?
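For questions 1 and 4, a minimal sketch for SQL Server 2000 (database and publication names are placeholders). 'trunc. log on chkpt.' is normally off by default and is equivalent to the SIMPLE recovery model:

-- Question 1: turn on 'trunc. log on chkpt.'
EXEC sp_dboption N'MyDatabase', N'trunc. log on chkpt.', N'true'
-- Equivalent recovery-model form:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

-- Question 4: row-count validation for a transactional publication
-- (run at the publisher, in the published database);
-- @rowcount_only = 2 adds a binary checksum
EXEC sp_publication_validation @publication = N'MyPublication'

The next run of the Distribution Agent performs the validation and writes the outcome to its history, which is also where per-subscription latency is visible; tracer tokens for precise latency measurement only arrived in SQL Server 2005.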
Data synchronization and manual failover work fine. But sometimes the AlwaysOn cluster automatically fails over to the sync-commit secondary in the primary data center. Here is the error message from Failover Cluster Manager -> Cluster Events:
"Cluster has missed two consecutive heartbeats for the local endpoint xx.xx.xx.yy:~3343~ connected to remote endpoint xx.xx.xx.zz:~3343~"
"Cluster has lost the UDP connection from local endpoint xx.xx.xx.yy:~3343~ connected to remote endpoint xx.xx.xx.zz:~3343~"
I had our network engineer check all connections multiple times, and he confirmed everything is fine. But he was also able to confirm (using monitoring tools) that right at the time of a failover there is almost 2GB worth of traffic going from the primary server to the DR server. That happens every time. I checked the times of all the failovers, and there is no job or process occurring that would produce 2GB worth of data. Also, this happens regardless of which server is primary.
Even though the failover itself works fine, these unexpected automatic failovers due to missed heartbeats are occurring often (2-3 times a month).
Here is the list of errors from the Cluster Validation Report:
Under Network Section, I see the following error messages in Red:
Validate Network Communication
Network interfaces Server4 (DR) - SAN_Team and Server1 (Primary) - SAN_Team - VLAN 20 are on the same cluster network, yet address xx.xx.xx.pp is not reachable from xx.xx.xx.yy using UDP on port 3343.
Network interfaces Server4 (DR) - SAN_Team and Server2 (Secondary) - SAN_Team - VLAN 20 are on the same cluster network, yet address xx.xx.xx.qq is not reachable from xx.xx.xx.yy using UDP on port 3343.
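Two knobs may be worth a look, with the caveat that the cluster heartbeat thresholds themselves (SameSubnetDelay/SameSubnetThreshold and their CrossSubnet equivalents for the DR link) are cluster-level settings changed through PowerShell, not T-SQL. On the SQL side you can at least make the availability group more tolerant of brief stalls by raising the replica session timeout. A sketch with placeholder names:

-- Raise the session timeout (default is 10 seconds) for each replica
ALTER AVAILABILITY GROUP [MyAG]        -- placeholder AG name
MODIFY REPLICA ON N'Server1'           -- repeat for each replica
WITH (SESSION_TIMEOUT = 30)            -- seconds

That said, the 2GB burst is the thing to chase: if it saturates the NIC that also carries cluster traffic, isolating or QoS-protecting the heartbeat network is likely the real fix.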
We have a filtered publisher (SQL Server 2012) with several pull and push subscriptions that have been functioning fine for years.
We have just added a new pull subscriber on a remote server. We configured an alternate folder location on the subscriber (pointing to an existing path on a remote server), generated a new snapshot and partitioned snapshot, and moved them to the subscribing server at the alternate path. We're seeing a problem we've never seen before:
When the merge agent runs, it applies the schema and performs a few hundred bulk inserts, but then proceeds to 'download changes to subscriber' as though the snapshot were old (it is brand new and should contain all transactions). We have never seen this before; all data is usually applied in bulk when creating a new subscriber or reinitializing an existing one.
We marked the subscriber for reinitialization (from both the publishing and subscribing servers) with a fresh set of snapshots, and the same problem occurred. We have also tried it in our test environment (which mirrors production), and there the snapshot applies as expected (via bulk inserts).
(It might be relevant to note the publisher was recently upgraded in-place from 2005 to 2012.)
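Given the in-place upgrade, one thing worth ruling out is the publication's compatibility level and snapshot settings, which survive an upgrade. A small sketch (publication name is a placeholder), run in the published database:

-- Review the publication's compatibility level, snapshot options, etc.
EXEC sp_helpmergepublication @publication = N'MyMergePub'

Comparing this output between production and the test environment (where the snapshot applies in bulk) should surface any configuration drift; differences in the compatibility level or partitioned-snapshot settings would be prime suspects.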
Hi there, I have decided to move all my transaction handling from ASP.NET to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back transactions just like myTransaction.Rollback() in ASP.NET. But what about exceptions? In ASP.NET, I am used to doing the following:
<code>
Try
    'execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try
</code>
Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than ROLLBACK TRANSACTION? Any help is greatly appreciated.
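Since this is SQL Server 2000, there is no TRY...CATCH (that arrived in 2005); the usual idiom is to test @@ERROR after each statement and roll back yourself. Errors also propagate to the client, so your existing Catch block will still see them as a SqlException. A minimal sketch with a hypothetical dbo.Orders table:

CREATE PROCEDURE dbo.usp_ShipOrder
    @OrderID int
AS
BEGIN
    BEGIN TRANSACTION

    UPDATE dbo.Orders           -- dbo.Orders is hypothetical
    SET Status = 'Shipped'
    WHERE OrderID = @OrderID

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        -- RAISERROR surfaces in ASP.NET as an exception with this message
        RAISERROR('usp_ShipOrder failed; transaction rolled back.', 16, 1)
        RETURN
    END

    COMMIT TRANSACTION
END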
I have just created some SSIS packages. Some of the packages connect to a remote SQL Server 2005 using SQL Server authentication mode (not Windows).
In the connection properties I have set "save my password", but I have found that after a while the saved password is lost and I have to log in manually again. This always happens at random.
Do you know how to keep the connection persistent?
Hi all. Can anybody suggest a website where I can find articles on managing transactions with SQL Server? Also, a scenario where the transactions take place in an environment involving 2 different databases, like bank account and credit card transactions (specifically of the two-way kind). Thanks.
I have a web application with a shopping cart. How do I stop all the shopping cart transactions from going into the DB log? Is this possible? These are only transient data movements that will never be needed for a restore, and they are causing log bloat. Or is there a better way to stop log bloat?
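SQL Server has no way to exempt individual tables from logging; every data modification is logged. What you can control is how long the log is kept. If point-in-time restore is genuinely not needed for this database, the SIMPLE recovery model lets the log truncate itself at each checkpoint. A sketch, assuming a placeholder database name:

-- Log records are discarded once their transactions complete and a checkpoint runs
ALTER DATABASE MyShopDB SET RECOVERY SIMPLE

If the rest of the database does need full recovery, the alternatives are more frequent log backups, or moving the transient cart rows into a separate database that runs in SIMPLE recovery.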
How can we change connection properties in a DTS package programmatically? You can loop through the connections collection, but the connection ID is not a static one, so we can't rely on that. Is there another way of changing connection properties?
I am currently designing a DTS Package to import data that is processed daily into a large database.
I have to design the package such that if any step fails when importing, I roll back the entire transaction.
I have designed the package with this in mind, checked "join transaction if present" and "rollback transaction on failure" in all of the workflows. I have also made all workflows serialized.
However, when I run the package, it fails on one of the data pumps with the error:
I am replicating (finally!!), and in my publisher's agent history I can see it says xx transactions with xx commands were delivered (xx being the number). Where can I look to see what the transactions or commands are?
Is there a place where the system stores this information?
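Yes: the distributor keeps them until retention cleanup runs. The transactions live in the distribution database's MSrepl_transactions table, the commands in MSrepl_commands (in an internal format), and sp_browsereplcmds decodes the latter into readable T-SQL. A sketch; without filter parameters it dumps everything, which can be enormous:

USE distribution
GO
-- How many transactions and commands the distributor currently holds
SELECT COUNT(*) AS txns FROM dbo.MSrepl_transactions
SELECT COUNT(*) AS cmds FROM dbo.MSrepl_commands
-- Decode the stored commands into readable form (filter this in production!)
EXEC sp_browsereplcmds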
Is there a point to wrapping a single UPDATE or INSERT statement in an explicit TRANSACTION:
BEGIN TRANSACTION
INSERT INTO Table (...) VALUES (...)
COMMIT TRANSACTION
I understand ACID and the concept of transactions. However, I thought they were only necessary for multi-statement operations. I'm maintaining code that does this and am wondering if it is necessary. Does SQL Server guarantee ACID for single statements? Are single UPDATE/INSERT statements prone to race-condition-like effects without explicit transactions?
If you run BEGIN TRANSACTION and then run a statement such as an UPDATE query, and you see that it affects the number of rows that you wanted it to affect, is there a way to look at the actual data that changed before you COMMIT TRANSACTION?
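Yes. Within the same session, a transaction sees its own uncommitted changes, so you can simply SELECT the affected rows before deciding. A sketch with hypothetical table and column names:

BEGIN TRANSACTION

UPDATE dbo.Accounts             -- hypothetical table
SET Balance = Balance * 1.05
WHERE Region = 'West'

-- Same session: this read sees the uncommitted changes made above
SELECT AccountID, Balance
FROM dbo.Accounts
WHERE Region = 'West'

-- Happy with the data? Run COMMIT TRANSACTION; otherwise:
ROLLBACK TRANSACTION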
I have a table with around 240 columns, and one of the columns in the table is InsertTime (DATETIME); we use a GETDATE() function in the stored proc when we insert data into the table. For the same millisecond, 2007-06-27 09:32:58.303, I have around 7,600 records in the database. The stored proc is called for each individual record, and we don't batch the transactions. Is this possible?
I did some benchmarking on this server, and I can insert only approximately 700-800 records/sec into this particular table.
I have a small database that I have been testing. I get an error about a transaction deadlock. The code is in stored procedures, and I added transactions to the SPs, but the error happened again. I wrapped the whole SP in just one transaction, and I don't have any indexes on the tables. When I test just by running a program that sends 3 calls at a time, it will get a deadlocked transaction as I send 6 or 9 at a time. I am not sure how it can have a deadlocked transaction after I used transactions (BEGIN and COMMIT) in the SPs. Steve
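Transactions don't prevent deadlocks; they can make them more likely, since locks are held until commit, and two sessions acquiring locks in opposite order will deadlock regardless. To see which statements and resources are involved on SQL Server 2000, the classic approach is trace flag 1204 (a sketch; on 2005 and later, flag 1222 gives richer output):

-- Write deadlock participant details to the SQL Server error log
DBCC TRACEON (1204, -1)
-- Route trace output to the error log as well
DBCC TRACEON (3605, -1)

With no indexes, each update likely scans the whole table and takes broad locks, which makes opposite-order lock acquisition much more likely; indexing the lookup columns and touching tables in a consistent order in every SP usually resolves it.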
I am working with transactions and use TRY...CATCH to capture errors; in the event of an error I have to roll back the transaction. How can I perform this? Most of the errors I foresee are either insertion of NULL values into non-nullable columns or violation of primary keys while inserting duplicates. I started by coding the following way, but it does not roll back; apparently the TRY...CATCH does not work for the above kinds of errors. Can somebody help?
DECLARE @REPORTING_PERIOD VARCHAR(6)
BEGIN TRY
BEGIN TRANSACTION
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SET @REPORTING_PERIOD =(Select REPORT_PERIOD_ID from dbo.T_REPORT_PERIOD where C_FLAG_ACTIVITY=1)
--Step 1
INSERT INTO [dbo].[T_COUNTRIES]
([C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,[FK_REPORTING_PERIOD])
SELECT [C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,@REPORTING_PERIOD
FROM [dbo].[IN_T_COUNTRIES]
--Step 2
INSERT INTO [dbo].[T_FLE]
([FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,[FK_REPORTING_PERIOD])
SELECT Null as [FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
-- The original SELECT ended here with only 8 values for 14 insert columns.
-- That mismatch raises a compile-time error, which TRY...CATCH in the same
-- scope cannot catch - likely why the rollback never fired. The remaining
-- columns are assumed to exist in [dbo].[IN_T_FLE]:
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,@REPORTING_PERIOD
FROM [dbo].[IN_T_FLE]
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() as ErrorNumber,
ERROR_LINE() as ErrorLine,
ERROR_MESSAGE() as ErrorMessage;
-- Test XACT_STATE for 1 or -1.
-- XACT_STATE = 0 means there is no transaction and
-- a commit or rollback operation would generate an error.
-- Test whether the transaction is uncommittable.
IF (XACT_STATE()) = -1
BEGIN
PRINT
N'The transaction is in an uncommittable state. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
-- Test whether the transaction is active and valid.
IF (XACT_STATE()) = 1
BEGIN
PRINT
N'The transaction is committable. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
END CATCH
I've been searching around and haven't found anything that simply states what I want to know.
I want to use a transaction within my CLR Stored Proc, to do so I've got System.Transactions referenced and I can access the current transaction via Transaction.Current.
My questions are:
Will there always be a current transaction?
Do I need to create a new transaction if one doesn't already exist?
I need to push rows from CE to SQL Server 2000 and then delete those rows from the CE database, but only if all the rows have been sent to SQL Server 2000.
I think the best approach is to work with transactions. Since I know I can use transactions for this purpose, can anybody give me a link with push transaction examples?
What does "Transactions/sec" counter in SQL 2005 under databases do in terms of performance. My counter shows almost 100% all the time in 4 terrbyte DB in superdome with many CPUs.
My first question: can there be a performance loss if I uncomment the lines about transaction usage? I mean, when I do this I start to get more timeouts.
My problem goes on. When I comment those lines out and run a stress tool, I get "column X does not belong to table Y" errors. If those lines are not commented out, I don't get this error, but I get timeout errors frequently. So, my second question: is there something wrong in my query, or is there a bad coding practice I am following? Could someone offer a better and more robust sample for this code block?
By the way, connection pooling is on. And these errors are observed under high loads.
I have a transaction that will run on an hourly basis, and I need to make sure that no one will start this transaction while it is running. I just need to know the system table that has all the transaction names in it, to check whether the transaction is running or not. What I am trying to do is avoid locks; I just want to make sure that no one runs the same transaction twice. Does anyone have any idea how we can do this?
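Rather than polling system tables to see whether a named transaction is currently running, a cleaner pattern that avoids races is an application lock, available since SQL Server 2000. A minimal sketch with a placeholder resource name:

DECLARE @rc int

BEGIN TRANSACTION

-- Try to take an exclusive app lock; fail immediately if another run holds it
EXEC @rc = sp_getapplock
    @Resource = N'HourlyBatch',     -- placeholder name
    @LockMode = N'Exclusive',
    @LockOwner = N'Transaction',
    @LockTimeout = 0

IF @rc >= 0
BEGIN
    -- ... the hourly work goes here ...
    COMMIT TRANSACTION              -- the lock is released at commit
END
ELSE
BEGIN
    ROLLBACK TRANSACTION            -- another instance is already running
END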