Why Is sys.conversation_endpoints Filling Up Even When Reusing Dialog Conversations?
Aug 5, 2007
Hi! I'm wondering why my sys.conversation_endpoints table gets a new row for each message I send, even when I reuse conversations.
When I send the first message I get the first row in sys.conversation_endpoints, with a uniqueidentifier for the conversation_handle. This uniqueidentifier is then saved in a table, which I query the next time I send a message so I can reuse the dialog conversation.
But even though the uniqueidentifier looks like it's being reused, I still get a new row with a different conversation_handle for every message I send.
This happens in both the target and initiator DB.
I've tried to understand this but I don't.
Also, for the moment I don't end conversations. But as I understand it, this shouldn't matter.
Also, the messages successfully arrive at the target, and sys.transmission_queue is empty in both databases.
Neither queue has any error messages in it.
I hope someone can help me with this, as we plan on using Service Broker in a high-volume production environment. The script that builds everything is available if it's needed to diagnose the problem I'm having.
I'm having an issue where sys.conversation_endpoints on the target side of a conversation never gets purged of closed conversations, even after the 30-minute delay. The view is filled with closed conversations, and the database size is growing every day. I'm aware I can run END CONVERSATION ... WITH CLEANUP on these conversations, but I would prefer that Service Broker behave as expected. I'm also aware of the problems with the fire-and-forget model, but my model is request/response/end between 2 databases on the same server instance. Here's the typical series of events:
I know that if a conversation is ended normally, the handle will wind up in sys.conversation_endpoints in the CLOSED state. I realize they are supposed to stay there for 30 minutes to prevent a replay attack; however, the number of rows I have in this table continues to grow, with the bulk of the states set to CLOSED. I am trying to use this table to determine whether I have any conversation handle leaks. I see some rows in there with a DISCONNECTED_OUTBOUND state while some are CONVERSING. What should I be looking for in this table, and how can I know I have a problem (i.e. a leak)? I realize that DISCONNECTED_OUTBOUND is probably something I need to look into. Are there any other states I should be concerned with?
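For reference, this is the kind of breakdown I've been using to watch the states (just a sketch; what counts as "too many" is obviously application-specific):

-- Count endpoints by state on each side; a CLOSED bucket should drain after
-- ~30 minutes, while a steadily growing DO (DISCONNECTED_OUTBOUND) or CO
-- (CONVERSING) bucket that never shrinks usually points at a handle leak:
SELECT state, state_desc, is_initiator, COUNT(*) AS cnt
FROM sys.conversation_endpoints
GROUP BY state, state_desc, is_initiator
ORDER BY cnt DESC;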
So I took the time to build a reproduction of the conversation_endpoints problem that was discussed in another thread. I built two databases, each with a send and receive queue. This is essentially the way the code works here at my site. I have a script near the bottom that sends messages every 5 minutes for 2 hours. If there is any logic that removes conversation endpoints after 30 minutes, then the MessageRecord table will show it.
Please let me know what I am doing wrong, so I can change my production code to help eliminate the large buildup in sys.conversation_endpoints.
Thanks!
use master
go
if exists ( select * from sys.databases where name = 'SBSource' )
drop database SBSource
go
if exists ( select * from sys.databases where name = 'SBTarget' )
drop database SBTarget
go
-- Setup environment for test
create database SBSource
GO
ALTER DATABASE SBSource SET ENABLE_BROKER
ALTER DATABASE SBSource SET TRUSTWORTHY ON
GO
create database SBTarget
GO
ALTER DATABASE SBTarget SET ENABLE_BROKER
ALTER DATABASE SBTarget SET TRUSTWORTHY ON
GO
use SBSource
go
CREATE MESSAGE TYPE [msgTest] AUTHORIZATION [dbo];
CREATE CONTRACT [Test] ( [msgTest] SENT BY ANY );
CREATE QUEUE dbo.[SourceQueue] WITH STATUS = ON , RETENTION = OFF;
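go
-- The post references dbo.ProcessEndDialogMessages but doesn't include it;
-- the following is a minimal guess at its body (an assumption, not the
-- original code): drain the initiator queue and close our half of each
-- dialog when the target's EndDialog (or an Error) message arrives.
CREATE PROCEDURE dbo.ProcessEndDialogMessages
AS
BEGIN
    DECLARE @ch uniqueidentifier, @msgType sysname;
    WHILE 1 = 1
    BEGIN
        WAITFOR ( RECEIVE TOP (1)
                      @ch = conversation_handle,
                      @msgType = message_type_name
                  FROM dbo.SourceQueue ), TIMEOUT 1000;
        IF @@ROWCOUNT = 0 BREAK;
        IF @msgType IN ( N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                         N'http://schemas.microsoft.com/SQL/ServiceBroker/Error' )
            END CONVERSATION @ch;
    END
END
go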
ALTER QUEUE dbo.[SourceQueue] WITH STATUS = ON,
    ACTIVATION ( STATUS = ON, PROCEDURE_NAME = dbo.ProcessEndDialogMessages, MAX_QUEUE_READERS = 2, EXECUTE AS 'dbo' );
-- the service that the BEGIN DIALOG below sends from (missing from the original post)
CREATE SERVICE [SBSourceTest] ON QUEUE dbo.[SourceQueue] ( [Test] );
go
use SBTarget
go
CREATE MESSAGE TYPE [msgTest] AUTHORIZATION [dbo];
CREATE CONTRACT [Test] ( [msgTest] SENT BY ANY );
CREATE QUEUE dbo.[TargetQueue] WITH STATUS = ON , RETENTION = OFF;
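go
-- The post references dbo.ProcessTargetQueue and joins against a
-- dbo.MessageRecord table, but includes neither; minimal guesses at both
-- follow (assumptions, not the original code): record each test message,
-- then end the target half of the dialog, which is what should eventually
-- let the endpoint reach CLOSED and be purged after the ~30-minute retention.
CREATE TABLE dbo.MessageRecord (
    conversation_handle uniqueidentifier NOT NULL,
    message_body xml NULL,
    received_at datetime NOT NULL DEFAULT GETDATE()
);
go
CREATE PROCEDURE dbo.ProcessTargetQueue
AS
BEGIN
    DECLARE @ch uniqueidentifier, @msgType sysname, @body varbinary(max);
    WHILE 1 = 1
    BEGIN
        WAITFOR ( RECEIVE TOP (1)
                      @ch = conversation_handle,
                      @msgType = message_type_name,
                      @body = message_body
                  FROM dbo.TargetQueue ), TIMEOUT 1000;
        IF @@ROWCOUNT = 0 BREAK;
        IF @msgType = N'msgTest'
        BEGIN
            INSERT dbo.MessageRecord ( conversation_handle, message_body )
            VALUES ( @ch, CAST(@body AS xml) );
            END CONVERSATION @ch;  -- target ends; initiator receives EndDialog
        END
        ELSE IF @msgType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
            END CONVERSATION @ch;
    END
END
go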
ALTER QUEUE dbo.[TargetQueue] WITH STATUS = ON,
    ACTIVATION ( STATUS = ON, PROCEDURE_NAME = dbo.ProcessTargetQueue, MAX_QUEUE_READERS = 2, EXECUTE AS 'dbo' );
-- the service that the BEGIN DIALOG below sends to (missing from the original post)
CREATE SERVICE [SBTargetTest] ON QUEUE dbo.[TargetQueue] ( [Test] );
go
use sbsource
go
-- start sending messages, check count in conv_endpoints along the way
set xact_abort on
set nocount on
declare @EndAt datetime,
@msg xml,
@ch uniqueidentifier;
set @EndAt = DATEADD( hh, 2, getdate() )
set @msg = '<message>dfsafa</message>';
while getdate() < @EndAt
begin
set @ch = null;
begin transaction;
begin dialog conversation @ch
from service [SBSourceTest]
to service 'SBTargetTest'
on contract [Test]
with
encryption=off;
send on conversation @ch message type [msgTest] (@msg);
--- note the absence of an end conversation, so no fire and forget!
commit;
waitfor delay '00:05:00'
end
GO
-- check on data after complete.
select state, count(*)
from SBSource.sys.conversation_endpoints
group by state;
select state, count(*)
from SBTarget.sys.conversation_endpoints c
inner join sbtarget.dbo.MessageRecord m on c.conversation_handle = m.conversation_handle
group by state;
select m.*, c.*
from SBTarget.sys.conversation_endpoints c
inner join sbtarget.dbo.MessageRecord m on c.conversation_handle = m.conversation_handle
Is there any thought going into moving these two tables to a filegroup that we can control? Putting them in PRIMARY with the rest of the system tables is quite problematic and hinders my ability to manage space usage on my files. Traditionally we didn't have to consider a primary filegroup that could grow to large proportions, but with these two tables it now can. If a large volume of messages gets sent through and the system can't keep up, these tables, and my primary filegroup with them, can grow enormously.
I am currently designing an auditing application using Service Broker. Right now, when I send a message from a trigger, I start a conversation, and later, when the message has been processed, the conversation is ended. One thing I am concerned about is whether, when a lot of updates are occurring on the system, the number of conversations being created will eat up system resources. Does it make sense to create them and end them later, or should I try to reuse them? Tim
I have a server that is used to query other servers to ensure that things are functioning correctly. On some of our servers we are running Service Broker, and some of the monitoring involves querying the sys.conversation_endpoints view on those servers.
When I query the sys.conversation_endpoints view over a linked server it returns zero rows, though when I run the same query locally it returns data. I initially thought it was a permission issue, but it seems I can see data across the linked server in other system views within the sys schema (e.g. sys.all_objects).
Is there something different/special about sys.conversation_endpoints that prevents me from seeing its data across linked servers, or have I simply got permission problems?
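One thing worth ruling out (this is a guess, not a confirmed cause): catalog views filter their rows by the caller's permissions, so a low-privileged linked-server login can legitimately get zero rows back even though the query succeeds, and the database context matters because sys.conversation_endpoints is per-database. Something like this helps compare ('MonServer' and 'AppDb' are placeholder names):

-- Four-part name: make sure the database that owns the conversations is the
-- one being addressed, not the mapped login's default database:
SELECT COUNT(*) FROM [MonServer].[AppDb].sys.conversation_endpoints;

-- Same query evaluated entirely on the remote side, under the mapped login:
SELECT *
FROM OPENQUERY([MonServer],
    'select state_desc, count(*) as cnt
     from AppDb.sys.conversation_endpoints
     group by state_desc');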
Most of the examples I've found and played with demonstrate two-way conversation. A sender initiates a call and gets a message back.
My requirements don't really need two-way communication. I have a scenario where triggers on two different tables result in modifications to a third table, and I don't want the triggers to deadlock each other, so an asynchronous queueing mechanism seems like the perfect solution...
But I can't seem to make it work one-way.
I can get one message through, and then all subsequent messages hang up in the transmission queue with the very informative "One or more messages could not be delivered to the local service targeted by this dialog."
I'm thinking all the examples work the way they do because you have to notify the transmitter that the message was received by sending a message back... and by not doing this I'm stuck in the first conversation. I thought that doing END CONVERSATION <Msg Handle> in the stored procedure bound to the receiver's queue accomplished that.
Do I have to communicate bi-directionally always? I guess this is a safety feature, but I trust MSMQ to deliver messages...
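In case it helps anyone hitting the same symptom, here is roughly what the missing piece usually looks like (a sketch with a placeholder queue name): the target's END CONVERSATION comes back to the initiator's queue as an EndDialog message, and something on the initiator side still has to RECEIVE it and call END CONVERSATION, even in a one-way design; otherwise the dialog never closes on the sending side.

-- Initiator-side cleanup, e.g. in an activated procedure on dbo.InitiatorQueue:
DECLARE @ch uniqueidentifier, @msgType sysname;
WAITFOR ( RECEIVE TOP (1)
              @ch = conversation_handle,
              @msgType = message_type_name
          FROM dbo.InitiatorQueue ), TIMEOUT 1000;
IF @msgType IN ( N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                 N'http://schemas.microsoft.com/SQL/ServiceBroker/Error' )
    END CONVERSATION @ch;  -- closes the initiator half of the dialog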
Say I have a conversation established and the initiator server needs to reboot. Will the conversation automatically restart when the server comes back up? If not, can I make it do so with some setting? If not, what is the best way to handle this?
In my application, I need to be able to guarantee that processing for a re-used conversation is completed prior to starting processing for the next (re-used) conversation. My application is based on the concepts from the sample posted on Remus's blog: http://blogs.msdn.com/remusrusanu/archive/2007/05/02/recycling-conversations.aspx#comments. Essentially (in this sample), we create a new conversation for each SPID and re-use the conversation, so that messages are sent through the queue (and processed in order) for each SPID. SPID was used in the sample code as an example of some application-specific "thing" that you care about message ordering for. To prevent a conversation from living forever (using up log/resources), conversations are ended after 1 hour using a DialogTimer and a custom message type.
My conundrum is this:
Assume conversation 1 (on SPID 1) is flooded with a large number of messages just before the conversation timer expires. The DialogTimer then expires before the target queue is drained. The sample code (mentioned above) then creates a new conversation for the same SPID (with a DialogTimer of 1 hour). Until the queue for conversation 1 is drained, we have 2 conversations being processed for the same SPID. This same problem would occur in any application where you re-use conversations for a period of time (using DialogTimer) and then start a new conversation when the DialogTimer expires.
So although I like the idea of being able to re-use conversations, I would need to guarantee that conversation 1 is finished processing before conversation 2 starts processing (for the same SPID, to be consistent with the sample above). If I could get these 2 conversations into the same conversation group on the target queue, the CG locking would solve the problem. But because conversation groups only apply to the initiator queue (when you BEGIN DIALOG ... WITH RELATED_CONVERSATION), I have no out-of-the-box way to control how the conversation groups are associated on the target queue. Remus posted an idea here (bottom of thread): http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=182646&SiteID=1, which was to just send a special message at the beginning of each new conversation (containing the conversation group to use), and then do a MOVE CONVERSATION ... TO conversation_group_id on the target queue. I've tried this solution, and the problems are: 1) if the set-conversation-group message fails for some reason, the conversation group is not set, and 2) the target queue rejects most of my MOVE CONVERSATION commands with the error "The destination conversation group '<conversation guid>' is invalid", which I am guessing is because this conversation group id is also being used on the initiator.
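For reference, the move itself is a one-liner; the catch (as far as I can tell) is that the destination group has to already exist on the side where you run it, which is presumably why reusing the initiator's group id fails:

-- On the target queue's side: move the new conversation into the group of
-- the previous one, so conversation-group locking serializes them
-- (@old_handle / @new_handle would come from RECEIVE on the target queue;
-- they're placeholders here):
DECLARE @old_handle uniqueidentifier, @new_handle uniqueidentifier, @group uniqueidentifier;
SELECT @group = conversation_group_id
FROM sys.conversation_endpoints
WHERE conversation_handle = @old_handle;

MOVE CONVERSATION @new_handle TO @group;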
We have a system that has 35 million conversations piled up. We didn't know to explicitly end the conversation once the processing had completed. Oops. Now our production box has 35 million rows sitting in the table, and we have run into the problem where the data in sys.conversation_endpoints has exceeded memory and is spilling into tempdb, which is killing our disk space, thus bringing the box down. We have fixed the code to end the conversations, but we now have to end the existing conversations in a hurry. If we select them one by one out of the table and end each via END CONVERSATION, it is slow. Very slow. It will finish in a few months. :(
Does anyone know how to get rid of these conversations in a hurry? All of the messages have been applied to our system, so killing the conversations will (should) have no effect on the processed data. Something like a TRUNCATE statement?
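For the record, the usual answer to this exact situation is to reset the broker in that database rather than ending 35 million dialogs one at a time. It is destructive: every conversation and every transmission_queue entry in the database disappears, and the database gets a new broker instance id (which matters if remote services address this database by broker id), so only do it once you're sure all messages have been processed:

-- Wipes ALL Service Broker conversation state in the database in one
-- operation ('YourDb' is a placeholder name):
ALTER DATABASE YourDb SET NEW_BROKER WITH ROLLBACK IMMEDIATE;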
Hi all, I am accessing one database a bunch of different times all throughout my code... in various functions and different web pages. Is there a way to create a SqlConnection that I can access all the time, instead of constantly hardcoding which database to go to? I've tried putting the info in another file and just including it where I want the database to open, but I can't use <!-- #INCLUDE --> inside of the server scripts. Can anyone help?
Every time my ASP.NET app needs to open a connection, it tries to establish a new connection with the MSSQL server. I've already set the max pool size property in the connection string. After that, my app raises a "timeout" error saying it couldn't obtain a connection from the pool. The problem is that I have a lot of idle connections. With Enterprise Manager I can see the status of the connections; they're all "awaiting command". How can I reuse these connections? I know that the connection string must be the same for all connections, and it is. I've set it in the web.config file. If I remove the max pool size property from the connection string I get a lot, I mean A LOT, of connections to the SQL server. Any ideas?
I implemented the pattern suggested in the "Recycling Conversations" article that Remus Rusanu presented. Everything works great, except ended conversations on the receiver remain in the sys.conversation_endpoints table forever in the CLOSED state.
Is there some setting I am missing to have those conversations purged from the endpoints table? I am concerned that in the production environment this table will grow very large.
I want to open and close a SQL connection only once and use it in every function, without opening or closing the connection in each class file, in ASP.NET 2003. How is that possible?
I have this SP below, and I am trying to reuse the value returned by the Dateofplanningdate column so that I don't have to repeat the code for each additional column I create. I have tried temp tables and derived tables with no luck.
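In case it helps, one pattern that works for this (a sketch only, with invented table and column names since the SP isn't shown) is to compute the value once in a CROSS APPLY; every output column can then reference the alias instead of repeating the expression:

-- Dateofplanningdate is computed once in the APPLY and reused below:
SELECT p.PlanID,
       d.Dateofplanningdate,
       DATEADD(day, 7,  d.Dateofplanningdate) AS FollowUpDate,
       DATEADD(day, 30, d.Dateofplanningdate) AS ExpiryDate
FROM dbo.Planning AS p
CROSS APPLY ( SELECT DATEADD(day, p.LeadDays, p.CreatedDate) AS Dateofplanningdate ) AS d;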
The attempt I tried was looking up an existing dialog in sys.conversation_endpoints.
However, on doing a scale test I found that the non-blocking behavior I was hoping for wasn't happening, even though I was giving each SPID its own dialog by using a conversation_group_id tied to the SPID. I found that the following SQL was blocked by a transaction that contained a BEGIN DIALOG, which suggests the locking on sys.conversation_endpoints is too aggressive.
select top 1 conversation_handle
from sys.conversation_endpoints ce
join sys.services s on s.service_id = ce.service_id
join sys.service_contracts c on c.service_contract_id = ce.service_contract_id
where s.name = 'jobStats'
and ce.far_service = 'jobStats'
and (ce.far_broker_instance = @targetBroker OR @targetBroker = 'CURRENT DATABASE')
and ce.state IN ('SO','CO')
and ce.is_initiator = 1
and (ce.conversation_group_id = @conversation_group_id )--or @conversation_group_id is null)
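For comparison, the recycling article sidesteps this by never touching sys.conversation_endpoints at send time: handles live in an application table keyed by SPID. A rough sketch of the send-side lookup (the service, contract and message type names are placeholders):

DECLARE @ch uniqueidentifier, @msg xml;
SET @msg = N'<job/>';

-- Cached handle for this SPID, from a user table rather than the catalog
-- view, so concurrent senders don't collide with broker-internal locking:
SELECT @ch = Handle
FROM dbo.SessionConversations WITH ( UPDLOCK, HOLDLOCK )
WHERE SPID = @@SPID AND ToService = N'jobStats';

IF @ch IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @ch
        FROM SERVICE [jobStatsInitiator]     -- placeholder
        TO SERVICE 'jobStats'
        ON CONTRACT [jobStatsContract]       -- placeholder
        WITH ENCRYPTION = OFF;
    INSERT dbo.SessionConversations ( SPID, FromService, ToService, OnContract, Handle )
    VALUES ( @@SPID, N'jobStatsInitiator', N'jobStats', N'jobStatsContract', @ch );
END

SEND ON CONVERSATION @ch MESSAGE TYPE [jobStatsRequest] ( @msg );  -- placeholder type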
In the Package Configurations wizard, I am trying to edit an existing configuration using the Edit button. In the Configuration Filter I get a list of several filters (the filters which were used for other packages). When I try to reuse a used filter, it forces me to set a new value, and when I go back to the SQL Server tables I see the old value has been erased.
Can I not use an existing filter? Do I need to use new filters for every new package?
I'm trying to test some queries in SQL Query Analyzer without reusing the query plan that's already cached. I know there is a way to avoid that, but I don't remember it right now. Another option would be to restart the MSSQL service, but I don't want to do that. Any thoughts?
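If it's the plan cache you want to clear, these are the usual suspects (server-wide, so test boxes only); OPTION (RECOMPILE) is the lighter per-statement alternative:

-- Flush every cached plan on the instance:
DBCC FREEPROCCACHE;
-- Optionally also drop clean data pages, for cold-cache I/O timings:
DBCC DROPCLEANBUFFERS;

-- Or force a fresh compilation for just one statement:
SELECT COUNT(*) FROM sys.objects OPTION (RECOMPILE);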
I have a replicated table with a trigger attached to it. The trigger fires off a Service Broker message for inserts. Originally, for every insert I would begin a conversation, send, and end the conversation when the target sent an end conversation. Since the replication process only uses a single SPID, I would like to reuse one conversation. The following is what I have for the send procedure on the initiator: I check sys.conversation_endpoints for any open conversation; if the handle is null I start a new conversation and send, else I just send on the existing conversation. Is there anything wrong with this code? What could cause the handle on the initiator to be null if I never end the conversation on the initiator side? Thanks
DECLARE @dialog_handle uniqueidentifier

-- Reusable dialogs can sit in 'SO' (begun, nothing sent yet) as well as 'CO',
-- and it's safer to pin the lookup to this side's dialogs to the target
-- service, so an unrelated handle isn't picked up ('TargetService' is a
-- placeholder name). Note also that if the target still ends its half after
-- each message, the initiator endpoint leaves 'CO' (it goes to 'DI'), which
-- would make this lookup return null:
select @dialog_handle = conversation_handle
from sys.conversation_endpoints
where state in ('SO', 'CO')
  and is_initiator = 1
  and far_service = 'TargetService'
Is it possible to reuse a Lookup component which is configured with full caching?
My requirement is as follows...
An input file has 2 columns called CurrentLocation and PreviousLocation. In the data flow, the values of these two columns need to be replaced with values from a lookup table called "Location".
In my package I have added two Lookup components which replace the values of CurrentLocation and PreviousLocation with the values available in the table "Location". Is there any way to reuse the cache of the first Lookup component for the second column as well?
I have implemented Remus Rusanu's implementation from the Recycling Conversations article, and I am experiencing a locking issue when I try to insert new conversation handles into the SessionConversations table. I have copied the code in the article exactly, including the activation procedure. Any ideas why I may be blocking? I am thinking it is related to the HOLDLOCK hint on the table.
The specific line where I see locking is directly from the article:
INSERT INTO [SessionConversations] (SPID, FromService, ToService, OnContract, Handle) VALUES (...etc)
Hi, I'm constructing a query that performs a lot of datetime calculations to generate columns. All of those operations depend on a base calculation that is performed in the query, whose result is returned as a column. I want to find a way to reuse this generated column, to avoid recomputing that calculation for every other operation, because this query will be used in a critical application and every saving counts. Thanks a lot.
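A common way to get this (a sketch; the table and column names are invented since the query wasn't posted) is to perform the base calculation once in a CTE or derived table, then build the dependent columns from its alias:

-- BaseTime is calculated once and reused by every derived column:
WITH base AS (
    SELECT OrderID,
           DATEADD(minute, OffsetMinutes, CreatedAt) AS BaseTime
    FROM dbo.Orders
)
SELECT OrderID,
       BaseTime,
       DATEDIFF(day, BaseTime, GETDATE()) AS AgeInDays,
       DATEADD(hour, 24, BaseTime)        AS Deadline
FROM base;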
I currently have multiple (parent and child) packages using the same config file. The config file has entries for connections to a number of systems. Not all of them are used by the child packages. Hence, my child package throws an error when it tries to configure using the same config file, because it can't find the extra connections in its connection collection.
Does anyone have any ideas on the best way to resolve this? Are multiple config files (one for each connection) the only way?
I have a real heartache with runtime parameter interrogation on my DB. Sure, I get the latest and greatest, and sure, I don't have to type in all those lovely parameter types... but the hit I take on performance for making no fewer than 3 DB hits for each SqlAdapter is unreasonable!
So... I like the idea of maybe calling it once for all my stored procs on application startup... and then maybe saving the result in a cache object.
My problem is that I can't see where you can even serialize a SqlParameterCollection, or even for that matter assign it to a Command object. Can you cache a command object?
LOL
I think I may just have to write some generic routine for creating and populating my command objects based on a key (type) and then use that to fetch my command.Update, command.Insert and command.
I would like to use the new AsynchBlock to do the fetching of the stored proc parameters and then just pull them from the cache object... and put a file watch on it so that if the DBs change my params it re-pulls them again.
*nice*.....
Then I get the best of both worlds...caching...and no parameter writing...
Hello all, I want to fill a drop-down list in ASP.NET using C# from a SQL database table using a stored procedure. I have my sproc, but using ASP.NET/C# I have no idea how to do this. Can someone give me a good example and, if it's not too much trouble, place comments in the code and give an explanation? I am just learning ASP.NET after moving from classic ASP. Things are a lot different.
I have a ton of data to load into a SQL 2005 database. I just loaded a bunch of data for a number of tables using bcp, and the last table that my script loaded was an 8-million-row table. The next table was a 12-million-row table, and about 1 million rows into the bcp a "log full" error was incurred. I have the batch size set to 10000 for all bcp commands. Here is the bcp command that failed:
Here is the last part of the output from the bcp command:
...
10000 rows sent to SQL Server. Total sent: 970000
10000 rows sent to SQL Server. Total sent: 980000
10000 rows sent to SQL Server. Total sent: 990000
SQLState = 37000, NativeError = 9002
Error = [Microsoft][ODBC SQL Server Driver][SQL Server]The transaction log for database 'billing_data_repository' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
BCP copy in failed
I thought that a commit was issued after every 10000 rows and that this would keep the log from filling up.
The log_reuse_wait_desc column in sys.databases is set to 'LOG_BACKUP' for the database being used.
Does a checkpoint need to be done more often?
Besides breaking up the 12 million row data file into something more manageable, does anyone have a solution?
How can I continue to use my same loading script, and keep the log from filling up?
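Since log_reuse_wait_desc says LOG_BACKUP, the log can't truncate until it is backed up, no matter how small the bcp batches are; in FULL recovery, the commits only make the space reusable after a log backup. Two common ways out (the database name is from the error above; paths are placeholders, and minimal logging under BULK_LOGGED is subject to the usual conditions such as table locking):

-- Option 1: back up the log periodically during the load so committed
-- batches can be truncated:
BACKUP LOG billing_data_repository TO DISK = N'D:\backup\bdr_log.trn';

-- Option 2: if point-in-time recovery isn't needed during the load, switch
-- to BULK_LOGGED for minimally logged bulk inserts, then switch back and
-- take a backup to restart the log chain:
ALTER DATABASE billing_data_repository SET RECOVERY BULK_LOGGED;
-- ... run the bcp loads ...
ALTER DATABASE billing_data_repository SET RECOVERY FULL;
BACKUP DATABASE billing_data_repository TO DISK = N'D:\backup\bdr_full.bak';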
I have a class that works fine using the SqlDataReader, but when I try to duplicate the process using a DataSet instead of a SqlDataReader it returns a null value. This is the code for the method (modified from the datareader version to return a DataSet):
public DataSet GetOrgID()
{
    Singleton s1 = Singleton.Instance();
    Guid uuid = new Guid(s1.User_id);
    SqlConnection con = new SqlConnection(conString);
    string selectString = "SELECT OrgID FROM aspnet_OrgNames WHERE UserID = @UserID";
    SqlCommand cmd = new SqlCommand(selectString, con);
    cmd.Parameters.Add("@UserID", SqlDbType.UniqueIdentifier).Value = uuid;

    SqlDataAdapter adapter = new SqlDataAdapter();
    adapter.SelectCommand = cmd;

    // The DataSet must be instantiated before Fill; Fill opens and closes
    // the connection itself.
    DataSet dataset = new DataSet();
    adapter.Fill(dataset);
    return dataset;
}
Assume that conString is set to a valid connection string. The Singleton passes the user id in from some code in the code-behind page... this functionality works as well. So assume that the Guid is a valid entry... I should get back a valid DataSet, but it's null. Additionally, if I change the SQL query to just "SELECT * FROM aspnet_OrgNames" I still get a null value... I am assuming I am doing something wrong trying to fill the DataSet.
MS SQL Server Enterprise Edition, version 6.5, SP5.
I have recently been having a problem with the TempDb database filling up. I originally started the database at 250 Mb but recently expanded it to 500 Mb.
My last check of the activity on the server during an event such as this produced the following information.
- Approx. 300 connections to primarily 2 databases.
- 4 active connections:
Connection 1 -SELECT on database 1 with 13,000 records and a record size of approx. 300 bytes.
Connection 2 -SELECT on database 1 with 13,000 records and a record size of approx. 300 bytes.
Connection 3 -SELECT on database 2 with 550 records and a record size of approx. 100 bytes.
Connection 4 -Replication subscriber set at 100 transactions.
My questions are:
1. What processes may cause the TempDb database to fill up?
2. What processes prevent the database from purging?