I have read the articles posted online concerning different dialog reuse strategies. Most of them create a new table in the sender to hold dialog ids. I was wondering what is wrong, if anything, with the following approach:
Code Block
declare @dlg uniqueidentifier
select top 1 @dlg = conversation_handle from sys.conversation_endpoints where state IN ('CO')
if @dlg is null
begin
begin dialog conversation @dlg
from service [tcp://SFT3DEVSQL01:4022/TyMetrix360Audit/DataSender]
to service '//TyMetrix360Audit/DataWriter','386DDD04-7E55-466A-BE83-37EFC20910B9'
on contract [//TyMetrix360Audit/Contract] with encryption = off;
end
;send on conversation @dlg
message type [//TyMetrix360Audit/Message] (@msg)
Here I simply select a conversation handle directly from the sys.conversation_endpoints table. Can anyone see any issues with this approach?
We have implemented our Service Broker architecture using conversation handle reuse per MS/Remus's recommendations. All of a sudden we have started receiving "conversation handle not found" errors in the SQL log every hour or so (which makes perfect sense considering the dialog timer is set for 1 hour). My question is: is this expected behavior when you have employed conversation recycling? Should you expect these messages to pop up every hour, even though the retry logic in the queuing proc deletes the stale handle from your conversation handle table and resends, so the message is still enqueued as expected?
Second question: I think I know why we were not receiving these errors before and wanted to confirm this theory as well. In the queuing proc I was not initializing the variable @Counter to 0, so when it came down to the retry logic it could not add 1 to NULL and never entered that part of the code. I am guessing that with this setup it would actually output the error to the application calling the queuing proc and NOT into the SQL error logs. Is this a correct assumption?
I have attached an example of one of the queuing procs below:
Code Block
DECLARE @conversationHandle UNIQUEIDENTIFIER, @err int, @counter int, @DialogTimeOut int,
        @Message nvarchar(max), @SendType int, @ConversationID uniqueidentifier

select @Counter = 0 -- THIS PART VERY IMPORTANT LOL :)

select @DialogTimeOut = Value
from dbo.tConfiguration with (nolock)
where keyvalue = 'ConversationEndpoints' and subvalue = 'DeleteAfterSec'

WHILE (1=1)
BEGIN
    -- Lookup the current SPID's handle
    SELECT @conversationHandle = [handle]
    FROM tConversationSPID with (nolock)
    WHERE spid = @@SPID and messagetype = 'TestQueueMsg';

    IF @conversationHandle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @conversationHandle
            FROM SERVICE [InitiatorQueue_SER]
            TO SERVICE 'ReceiveTestQueue_SER'
            ON CONTRACT [TestQueueMsg_CON]
            WITH ENCRYPTION = OFF;

        BEGIN CONVERSATION TIMER ( @conversationHandle ) TIMEOUT = @DialogTimeOut

        -- insert the conversation in the association table
        INSERT INTO tConversationSPID ([spid], MessageType, [handle])
        VALUES (@@SPID, 'TestQueueMsg', @conversationHandle);

        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END
    ELSE IF @conversationHandle IS NOT NULL
    BEGIN
        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END

    SELECT @err = @@ERROR;

    -- if succeeded, exit the loop now
    IF (@err = 0)
        BREAK;

    SELECT @counter = @counter + 1;

    IF @counter > 10
    BEGIN
        -- Refer to http://msdn2.microsoft.com/en-us/library/ms164086.aspx for severity levels
        EXEC spLogMessageQueue 20002, 8, 'Failed to SEND on a conversation for more than 10 times. Error %i.'
        BREAK;
    END

    -- We tried on the said conversation, but failed.
    -- Remove the record from the association table, then let the loop try again.
    DELETE FROM tConversationSPID WHERE [spid] = @@SPID;
    SELECT @conversationHandle = NULL;
END;
I'm using Service Broker and keep getting errors in the log even though everything is working as expected.
SQL Server 2005. Two databases. Two endpoints, one in each database. Two stored procedures: SP1 is activated when a message enters the sending queue; it inserts a new row into a table. SP2 is activated when a response is sent from the receiving queue; it cleans up the sending queue.
I have a table with an update trigger. In that trigger, if the updated row meets a certain condition, a dialog is created and a message is sent to the sending queue. I know that SP1 and SP2 are behaving properly because I get the expected results: SP1 is inserting the expected data into the table, and SP2 is cleaning up the sending queue.
In the Sql Server log however i'm getting errors on both of the stored procs. error #1 The activated proc <SP 1 Name> running on queue Applications.dbo.ffreceiverQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
error #2 The activated proc <SP 2 Name> running on queue ADAPT_APP.dbo.ffsenderQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
I would appreciate anybody's help with why I'm getting this. Have I set up the stored procs incorrectly?
I can provide the code of the stored procs if that helps.
I have a query like this and I am using full-text search.
Select * FROM [Search] WHERE (@limitByName = 0 OR CONTAINS([Name], @searchStringOR))
Say @limitByName is 0: will the second part still execute? How does SQL Server 2005 handle the OR in the WHERE clause? (I need a detailed answer.) Please give any useful link to refer to.
How can I see this if I use the actual execution plan?
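For illustration only, one way to avoid depending on how the optimizer evaluates the OR is to branch on the parameter before the query runs; this is just a sketch against the [Search] table and parameter names from the post:

Code Block
-- Sketch: branch on @limitByName instead of relying on OR short-circuiting
IF (@limitByName = 0)
    SELECT * FROM [Search];                        -- no name filter requested
ELSE
    SELECT * FROM [Search]
    WHERE CONTAINS([Name], @searchStringOR);       -- full-text predicate only when needed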
When SQL Server stores data as nvarchar, it stores the actual data plus 2 bytes for the length. My question is: if there is another field in the row, does SQL Server store that next field right beside the nvarchar data, with no separating bytes? And if I update the nvarchar data, does SQL Server shift the remaining data that follows it in the row? Please send me the answer.
I am developing an application with Service Broker to mail newsletters on a daily basis. I have a peculiar problem: the queues associated with the same dialog are giving different conversation handles. I'm testing the application with one mail at a time, i.e., there is never more than one item in either queue at any point in time. But the handles returned by the queues are different within the same conversation.
Why is it happening like that? I have taken a backup of the original database and restored it with a different name in the same instance (and will run "ALTER DATABASE db_name SET NEW_BROKER" for the new database) for some purpose. Can that cause a problem?
My service broker was working perfectly fine earlier. As I was testing...I recreated the whole service broker once again.
Now I am able to get the message at the server end from the initiator. When trying to send a message from my server to the initiator, it gives this error in SQL Profiler.
broker:message undeliverable: This message could not be delivered because the Conversation ID cannot be associated with an active conversation. The message origin is: 'Transport'.
broker:message undeliverable This message could not be delivered because the 'receive sequenced message' action cannot be performed in the 'ERROR' state.
In our application we have created an SSIS package which extracts data from a staging table and places it in a destination table. We have created a Slowly Changing Dimension for this. The Slowly Changing Dimension uses a composite business key of two columns to decide whether a record is old or new.
Problem: on execution, the package inserts duplicate records with the same business keys instead of updating them. This does not happen for all records: for a few records the update works fine, but for others it inserts a new duplicate record.
I would appreciate it if anybody could point out where I am going wrong.
I am converting old MS Access queries to T-SQL and ran into a problem: the same update query returned different results in each system. The idea is to subtract each of the amounts in Table2 from Table1:
Source sample tables and content:

Table1
ID  Amount
1   100

Table2
ID  Amount
1   10
1   20
1   30
In Access (original source):
UPDATE Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID
SET Table1.Amount = Table1.Amount - Table2.Amount

In T-SQL (converted):
UPDATE Table1
SET Table1.Amount = Table1.Amount - Table2.Amount
FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.ID
The syntax for T-SQL is different from Access. When both queries are run on their respective databases, Table1.Amount in Access became 40 (100 - 10 - 20 - 30), but Table1.Amount in SQL Server became 90 (100 - 10).
It looks as if T-SQL only applied one row? Or could it be that in T-SQL updates are written to the database in batches, which is why Table1.Amount was not updated for every matching row? Any help would be greatly appreciated.
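For illustration only, a common way to get the Access-style result in T-SQL is to aggregate Table2 first and subtract the total once; this is just a sketch against the sample tables above:

Code Block
-- Sketch: subtract the summed Table2 amounts in a single update, instead of
-- relying on the join (which applies only one matching row per target row)
UPDATE t1
SET t1.Amount = t1.Amount - t2.TotalAmount
FROM Table1 AS t1
INNER JOIN (
    SELECT ID, SUM(Amount) AS TotalAmount
    FROM Table2
    GROUP BY ID
) AS t2 ON t1.ID = t2.ID;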
Dear folks, I'm getting an error from the application like this:
"the transaction log for database "mydatabase" is full.....to find out why space in the log can not be reused, see the log_reuse_wait_desc column in sys databases. "
I've checked that column; there are 3 values in total (I checked all the other databases as well): 1) CHECKPOINT, 2) LOG_BACKUP, 3) NOTHING.
What can I do to make this error go away? For my database the value was actually LOG_BACKUP.
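As a minimal sketch (assuming the database really is waiting on LOG_BACKUP and a full backup already exists; the backup path is hypothetical), the usual check-and-fix sequence is:

Code Block
-- Check what is preventing log space from being reused
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mydatabase';

-- If it reports LOG_BACKUP, take a transaction log backup so the log can be truncated
BACKUP LOG mydatabase TO DISK = N'D:\Backups\mydatabase_log.trn';   -- hypothetical path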
It's a pretty basic question, but I haven't been able to find any examples out there. I dimmed a DataAdapter and would like to reuse it later in my code (line 3 in the code below). What is the correct syntax to do this?

Dim da As New SqlDataAdapter("SELECT * FROM myTable", conn)
da.Fill(myDataTable)
da.______ ("SELECT * FROM myTable2", conn)
da.Fill(myDataTable2)
The SqlParameter with ParameterName '@pk' is already contained by another SqlParameterCollection
If I could work out what code to post I would, but I can say that I am managing my SQL data in my code by caching small arrays of SqlParameter objects as pulled from the database. If I need to update the DB, I change the cached SqlParameter and re-insert it. If I perform a SELECT, I check the cache first to see if I already have the SqlParameter. However, currently, I am experiencing the above error when performing a select, then later an update, followed by another update. Would I be correct in saying that a SqlParameter object can only be used once for a database operation and should then be discarded? Would I be correct if I said that the SqlCommand object should be discarded? Or am I barking up the wrong tree entirely?
I can create a SqlCacheDependency and link it to a cached item in the HttpContext cache. When something changes, it will remove the cached item from the cache. I think I have to redo the process when that happens: prepare the SQL command, create the SqlCacheDependency, and insert the item into the cache. Now I only need a notification from SQL Server when something changes in one of my tables; I don't need to read anything from the DB, so I think I should find a way to avoid recreating the SqlCacheDependency object every time. Any suggestions?
What is the best coding practice to use for the following,
SELECT fo.no as LNum,
       fo.name as LName,
       sum(CASE fo.docnome WHEN "In" THEN fo.etotal ELSE 0 END) as In1,
       sum(CASE fo.docnome WHEN "In2" THEN fo.etotal ELSE 0 END) as In2,
       sum(In1+In2)/10 as inDec
from fo
group by fo.no, fo.name
order by fo.name

instead of

SELECT fo.no as LNum,
       fo.name as LName,
       sum(CASE fo.docnome WHEN "In" THEN fo.etotal ELSE 0 END) as In1,
       sum(CASE fo.docnome WHEN "In2" THEN fo.etotal ELSE 0 END) as In2,
       (sum(CASE fo.docnome WHEN "In" THEN fo.etotal ELSE 0 END) + sum(CASE fo.docnome WHEN "In2" THEN fo.etotal ELSE 0 END))/10 as inDec
from fo
group by fo.no, fo.name
order by fo.name
I can't use functions or procedures. Is there any better and cleaner way to code this, reusing the calculated values?
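Purely as a sketch (keeping the table and column names from the post, and assuming standard single-quoted string literals), one common pattern is to compute the sums once in a derived table and reuse them in the outer select:

Code Block
SELECT t.LNum,
       t.LName,
       t.In1,
       t.In2,
       (t.In1 + t.In2) / 10 AS inDec        -- reuse the already-computed sums
FROM (
    SELECT fo.no   AS LNum,
           fo.name AS LName,
           SUM(CASE fo.docnome WHEN 'In'  THEN fo.etotal ELSE 0 END) AS In1,
           SUM(CASE fo.docnome WHEN 'In2' THEN fo.etotal ELSE 0 END) AS In2
    FROM fo
    GROUP BY fo.no, fo.name
) AS t
ORDER BY t.LName;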
I wonder if there is a solution for this in SQL 2000 (or do I have to wait for SQL 2005)?
I am currently in the middle of developing a 'Yahoo'-style portal which will be rolled out in about 20 or so countries. I have set up in SQL Server one database per country. All the portals have the same functionality, but show different data. Is it possible to have a single database which holds stored procedures, functions and views, and have the individual country databases use these?
Note: I want to avoid using EXEC sp_executesql.
I look forward to some good news on this! Thank you in advance. Dadou.
Hi. I'm starting to try out LINQ to SQL, and so I'm using SQL Express (on Vista) to experiment. My problem is that once I create and delete a database, I can never use the name again. If, after deleting the old database, I try to create another one with the same name (say Acct1), I get "Create failed for Database 'Acct1'. An exception occurred while executing a Transact-SQL statement or batch. The logical file name "Acct1" is already in use. Choose a different name. Error: 1828"
I am trying to create the database in SQL Server Management Studio Express. That database name does not appear in the list of databases: there is only AdventureWorks and the system databases. The .MDF and .LDF files have been deleted. Not just sent to the recycle bin, but permanently deleted.
I have already used up Acct1, Acct2, and Acct3, just to try out different scenarios. Each time, I delete the old database before trying to create a new one with the same name, but I am forced to always supply a new, different name. I have checked the directory with hidden files and system files showing to be sure there is no old file lurking there somewhere.
Is there a way to delete these old "logical file names"? I can't even find any reference to their existence except for the message that says they are already in use.
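For illustration only (I'm not certain this matches your situation), you can list which logical file names the instance currently has attached and, as a workaround, create the database with explicitly chosen logical names that don't collide; the paths below are hypothetical:

Code Block
-- List every logical file name the instance currently knows about
SELECT DB_NAME(database_id) AS database_name, name AS logical_name, physical_name
FROM sys.master_files;

-- Workaround sketch: create the database with explicitly chosen logical file names
CREATE DATABASE Acct1
ON (NAME = 'Acct1_data', FILENAME = 'C:\Data\Acct1.mdf')        -- hypothetical path
LOG ON (NAME = 'Acct1_log', FILENAME = 'C:\Data\Acct1_log.ldf');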
with c1 As (
    select 1 As '1', 2 As '2', 3 As '3'
),
c2 As (
    select 4 As '4', 5 As '5', 6 As '6'
    union all
    select 1, 2, 3 from c1        -- >>>>> select from c1 here
)
select * from c2
union all
select * from c1                  -- >>>>>> and select from c1 here
In the query above, I try to reuse the subquery by putting it into a CTE named c1 and then selecting from it twice.
If I do it this way, how many times does the subquery (c1) execute? If it is once, then this is the right way to reuse the subquery. If it is twice, it is not, and what should I do instead to reuse the subquery?
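As a sketch of one alternative (not necessarily what you need): a CTE is just an inline definition, so if the execution plan shows it being evaluated more than once, materializing the result into a temporary table guarantees it runs only once:

Code Block
-- Materialize the subquery once, then reuse the temp table as many times as needed
SELECT 1 AS col1, 2 AS col2, 3 AS col3
INTO #c1;

SELECT 4, 5, 6
UNION ALL
SELECT col1, col2, col3 FROM #c1
UNION ALL
SELECT col1, col2, col3 FROM #c1;

DROP TABLE #c1;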
I have been working with SQL for quite a while, but I think this is perhaps a very basic question that has always escaped me:
At my work I was exposed to both, MS SQL Server 2000 and Sybase Adaptive Server Anywhere/Sybase SQL Anywhere.
Under Sybase I was able to use aliases in other calculations and filters, but I have never been able to do the same with SQL Server.
Example: In Sybase I can write this:
Select Price * Units as Cost,
       Cost * SalesTax as TotalTax
From Invoice
Where TotalTax > 3.5

However, if I want to do this in MS SQL 2000, I have to go through:

Select Price * Units as Cost,
       Price * Units * SalesTax as TotalTax
From Invoice
Where (Price * Units * SalesTax) > 3.5
In the long run this is costing me a lot of code redundancy, not to mention a debugging nightmare. Is there a way to replicate this alias usage in MS SQL Server?
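As an illustrative sketch only (using the columns from the example above), the usual SQL Server 2000 workaround is a derived table, so each expression is written once and can be reused in the outer query:

Code Block
-- Compute Cost once in a derived table, then reuse it for TotalTax and the filter
SELECT d.Cost,
       d.Cost * d.SalesTax AS TotalTax
FROM (
    SELECT Price * Units AS Cost, SalesTax
    FROM Invoice
) AS d
WHERE d.Cost * d.SalesTax > 3.5;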
I am working on a project developing a fairly large number of reports with a team of developers. Many of these reports have common elements and code, such as common headers with user-selectable colors. Additionally, many of the common parts of the reports are at mockup stage currently, and many features will have to be added to the reports as time goes on.
We're attempting to create a generalized framework that will minimize the duplication of effort as we develop these reports, and as we go back and modify or fix them later.
What is the best way to approach this?
My first attempt was to create a report template and base all the reports off of the same template. That was fine for the first pass, but as we need to make changes later, they will not be propagated to the already existing reports.
My second attempt was to have each component of that template reference a subreport, so that changes to the actual report template will be minimized as we go forward. This works great for minimizing work, but it appears that you lose many features with the use of subreports, and there seems to be a pretty serious performance impact as well. I have posted about one such issue here: Pagination
If anyone has pointers about how to go about this, and where I should start, they would be greatly appreciated!
I have a table with a column of type identity(1,1). The table holds bulk data, and I have deleted a row from it. Now I want to reuse that same ID again; how do I do that?
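As a minimal sketch (the table and column names are hypothetical, and it assumes you know the deleted ID value), you can re-insert a specific identity value explicitly:

Code Block
-- Re-insert a row with a specific, previously deleted identity value
SET IDENTITY_INSERT dbo.MyTable ON;          -- hypothetical table name

INSERT INTO dbo.MyTable (Id, SomeColumn)     -- the identity column must be listed explicitly
VALUES (42, 'reinserted row');               -- 42 = the deleted ID you want to reuse

SET IDENTITY_INSERT dbo.MyTable OFF;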
I have some common functions that I use in several Script Tasks. How do I store a function globally so that I can use it from different projects and still only have to edit it in one place?
I have several stage to star (i.e. moving data from a staging table through the key lookups into a fact table) ETL transformations in a single SSIS package. Each fact table has a different set of measures but the identical foreign key set, e.g. ConsultantKey, SubsidiaryKey, ContestKey, ContestParamKey and MonthKey.
Currently I have to replicate the key lookup (Surrogate Key Pipeline, or SKP) for each data flow. If I could cache each dimension one time in the package and reuse it for each stage to fact it would be much more efficient.
Is there a way for me to reuse a common data flow?
I've added a typed DataSet and dragged a table across from the Server Explorer. When I click Configure on the table adapter, then click Previous back to the "Choose Your Data Connection" dialog, the only option is the new connection that was just created when I added the SQL server to the Server Explorer. Is there any way to reuse my existing web.config connection string? My goal is to have a single connection string in my web.config. Thanks. -David
I'm trying to implement a configurable way of executing a group of SQL statements using either an Oracle or SQL database as the source for the data.
I'm currently building a connection string with a Script Task and then assigning it to a package variable so that it can be used by a connection manager but this only works with a SQL OLEDB connection.
I'm holding all the connection details in a database table, hence the need for the scripting, but would it be possible to extend the task to reuse a single connection manager by changing its properties?
The client was unable to reuse a session with SPID 94, which had been reset for conection pooling. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.
SPID varies.
Does anybody know why this error occurs?
We are running SQL2K5 and have a Web server with a family of sites all sharing an identical connection string to enable ADO connection pooling between them. Today, for about 20 minutes, we had several (all?) connections from one site that uses a specific DB get a connection reuse error, which showed in our SQL logs:
DESCRIPTION: The client was unable to reuse a session with SPID #, which had been reset for conection pooling. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.
We also have SQL Server slowdown and log in problems from other applications that seemed a symptom of this, or some third unknown cause. Note, the # means the run-time spid number was inserted. The misspelling "conection" comes right out of sys.messages (it is not our custom error):
select top 10 * from sys.messages where message_id = 18056
The immediately preceding error in the SQL log was always: Message Error: 18056, Severity: 20, State: 29.
Where Severity and State vary, but "Error: 18056" is consistent, although I can find no documentation on "Error: 18056" through Google or MSDN.
Also, the "The client was unable to reuse a session ..." error seems not to be referred to anywhere.
In our IIS logs, the matching entries are of the form:
[DBNETLIB][ConnectionRead (recv()).]General network error. Check your network documentation.
and
Invalid connection string attribute
My questions: Does anyone have experience with this error? We have real good history with ADO connection pooling, but can a "bad" connection be pooled, and if so can it be "flushed" or the pool "drained"?
I guess this is a fairly common topic but couldn't find the right words to find anything in a search.
What I'm getting at is: are there any T-SQL functions or combinations of commands for the following?
You have identity columns in your tables. Say you set the seed and auto-increment, I enter rows 1-10, and then I delete 4, 6, 7, and 8.
My next new record uses 11. Is there any logic that allows you to check for and reuse 4, 6, 7 and 8 as described above? I am not looking for something that consists of creating an extra ID table for each table and configuring what the next available number is every time an insert or delete is called.
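Purely as a sketch (hypothetical table dbo.MyTable with identity column Id), one way to find the lowest unused ID, which could then be re-inserted with SET IDENTITY_INSERT:

Code Block
-- Find the lowest gap in the identity sequence (hypothetical table/column names)
SELECT MIN(t.Id) + 1 AS NextReusableId
FROM dbo.MyTable AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable AS t2 WHERE t2.Id = t.Id + 1)
  AND t.Id + 1 < (SELECT MAX(Id) FROM dbo.MyTable);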
I have a system that will post a message to a queue but does not need to wait for a response; it just needs to make sure the message arrived properly in the queue, not that it was processed at the receiving end. A second service will poll the queue to retrieve outstanding messages and will then move each message to an outside system. The movement of the message to the outside system will be wrapped in a transaction; if the process is successful, the transaction will be committed, otherwise it will be rolled back.
1) Is it appropriate for the service that posts the message to issue an END CONVERSATION? This way the sending service will not be waiting for a response.
2) In the data movement phase, is it appropriate to issue an END CONVERSATION when committing and not when a ROLLBACK occurs? Or should a ROLLBACK be followed by an END CONVERSATION with an error message?
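For illustration, here is a minimal sketch of the send-then-end shape described in question 1 (all broker object names are hypothetical, and this is not a recommendation either way):

Code Block
-- Sketch: post a message and end the sending side of the dialog immediately
DECLARE @h uniqueidentifier;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Example/SenderService]
    TO SERVICE '//Example/ReceiverService'
    ON CONTRACT [//Example/Contract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h MESSAGE TYPE [//Example/Message] (N'<payload/>');

END CONVERSATION @h;   -- the sender no longer waits for a response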
I am attempting to learn Service Broker from Bob Beauchemin's book "A Developer's Guide to SQL Server" - Chapter 11. I'm finding it to be very good but I'm confused over the concept of closing a conversation. Could someone answer the following questions for me?
When a conversation is ended, can the conversation handle that was created when the conversation was created still be used? (I assume not.)
Beauchemin says, on page 511, that when a conversation ends, "Any messages still in the queue from the other end of the conversation are deleted with no warning." Does this mean that if I send a message that expects a reply, but I end the conversation, the message is still sent, it is still received by the other endpoint, and the other endpoint processes it, but I'll never receive the reply?
Beauchemin says that if no lifetime is specified, the conversation is active for the number of seconds that can be represented by the maximum size of an integer. Does this mean that if I don't specify a lifetime, a conversation is active for many, many years?
I want to reuse conversations to minimize overhead during bursts of activity. Remus' article on reusing conversations (http://blogs.msdn.com/remusrusanu/archive/2007/05/02/recycling-conversations.aspx) is great. (I know you are reading this Remus, thanks.)
I was wondering if there is a simpler way of ending a cached conversation: quiesce the conversation (stop using it), then, after some period of time, end it.
I create a conversation, cache it in RLY_Conversations, and use it for 50 seconds. After 1 minute, the dialog timer servicing proc ends the conversation. There will be no messages sent around the time the End Conversation takes place, thus no race conditions.
Do you see any problems with this method?
Select @DialogHandle = [conversation_handle]
From RLY_Conversations
Where TableName = @TableName
  and IsActive = 1
  And CreatedTmstp > dateadd(ss, -50, getdate())

if @DialogHandle is null
Begin
    -- initialize a conversation and record it in our reuse table
    BEGIN DIALOG CONVERSATION @DialogHandle
        FROM SERVICE FirstHostRelayService
        TO SERVICE 'SecondHostRelayService'
        ON CONTRACT RelayContractSentByAny
        WITH ENCRYPTION = OFF;

    -- cache the dialog handle to minimize dialog creation overhead.
    Insert into RLY_Conversations (
        TableName, conversation_handle, conversation_id, is_initiator,
        service_contract_id, conversation_group_id, service_id,
        lifetime, state, state_desc, IsActive, CreatedBy, CreatedTmstp )
    Select @TableName, conversation_handle, conversation_id, is_initiator,
           service_contract_id, conversation_group_id, service_id,
           lifetime, state, state_desc, 1, 'Setup', getdate()
    From sys.conversation_endpoints
    Where conversation_handle = @DialogHandle;

    -- initiate housekeeping process
    BEGIN CONVERSATION TIMER ( @DialogHandle ) TIMEOUT = 60;
End
When you move a conversation to a conversation group, that conversation group has to have been created previously, i.e., you can't specify a non-existent conversation group, right?
I ask because I am trying to develop an application where, ideally, I use one conversation related to many given conversation groups, so that when I RECEIVE, I lock only a small, determined subset of messages. What I could have used is a way to send messages on a conversation while specifying a conversation_group_id.
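For illustration (the service, contract, and variable names are hypothetical), these are the two statements usually involved: BEGIN DIALOG ... WITH RELATED_CONVERSATION_GROUP lets you supply a group identifier yourself, while MOVE CONVERSATION puts an existing conversation into an existing group:

Code Block
DECLARE @h1 uniqueidentifier, @h2 uniqueidentifier, @group uniqueidentifier;
SET @group = NEWID();   -- a group identifier we choose ourselves

-- Start a dialog associated with that group (hypothetical broker object names)
BEGIN DIALOG CONVERSATION @h1
    FROM SERVICE [//Example/InitiatorService]
    TO SERVICE '//Example/TargetService'
    ON CONTRACT [//Example/Contract]
    WITH RELATED_CONVERSATION_GROUP = @group, ENCRYPTION = OFF;

-- Start a second, unrelated dialog ...
BEGIN DIALOG CONVERSATION @h2
    FROM SERVICE [//Example/InitiatorService]
    TO SERVICE '//Example/TargetService'
    ON CONTRACT [//Example/Contract]
    WITH ENCRYPTION = OFF;

-- ... and move it into the group created above
MOVE CONVERSATION @h2 TO @group;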