Are lock hints propagated to the underlying tables of system catalog views? I ask because I often query sys.transmission_queue with NOLOCK, and I wanted to know whether this is honoured throughout the underlying tables.
Secondly, is sys.transmission_queue indexed at all, providing a way to prevent table-scanning?
I have seen this buried deep within the questions on Service Broker, but I am looking for it again: how do you delete all records from sys.transmission_queue? This is on a test server and I want to clean it before some more tests.
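A sketch of the usual answer, for a test server only: you can't DELETE from sys.transmission_queue directly (it is a catalog view), but ending every conversation that still has messages there with the CLEANUP option discards them without notifying the far end. The cursor name is arbitrary:

DECLARE @handle uniqueidentifier;
DECLARE tq_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT conversation_handle FROM sys.transmission_queue;
OPEN tq_cursor;
FETCH NEXT FROM tq_cursor INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;  -- discards the messages, skips the normal end-dialog protocol
    FETCH NEXT FROM tq_cursor INTO @handle;
END
CLOSE tq_cursor;
DEALLOCATE tq_cursor;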
We are having a problem with messages not moving out of sys.transmission_queue and into the queues themselves. The queues are all activated and enabled, and our Service Broker is enabled at the database level. The DBA detached and reattached the database yesterday, which I believe may be the cause of this problem. Everything seems to be in order (sp ownership, activation procs, etc.), except when I run:
select [name], count(*)
from sys.dm_broker_queue_monitors qm
inner join sys.service_queues sq on qm.queue_id = sq.object_id
group by [name]
I get 2 entries for each queue, and the last_activated_time is the same date as the detach event. When we reattached, we had to run ALTER DATABASE ... SET ENABLE_BROKER to get it back up and running. When I run the above query on our dev and test environments, I get only one entry per queue.
Does anyone know why this would happen? Is this a valid state for Service Broker? If we can't figure out the real fix, we want to get a copy of all the messages in the transmission queue, run ALTER DATABASE ... SET NEW_BROKER, and then replay them all into Service Broker. We hope this fixes the root cause, but it's a guess.
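A sketch of that recovery plan, assuming a database named MyDb and accepting that SET NEW_BROKER drops every existing conversation and assigns a new broker instance GUID:

-- Snapshot the stranded messages before the broker is reset
SELECT * INTO dbo.transmission_queue_backup
FROM sys.transmission_queue;

-- Resets the broker: all conversations are dropped, new broker GUID assigned
ALTER DATABASE MyDb SET NEW_BROKER WITH ROLLBACK IMMEDIATE;

The saved rows can't simply be reinserted; each one would have to be replayed with a fresh BEGIN DIALOG and SEND, since the old conversation handles no longer exist.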
At my company, we're trying to use service broker to create a client-server system where there is a head office machine and multiple outlets registered with that head office. My problem is that sometimes when a branch sends a message to the head office, it just seems to sit in the transmission queue and never gets sent. If I run a script that forcibly ends the conversations on the client machine (with cleanup), storing the message bodies and then resend them, they seem to get through fine.
The way we send messages is by calling a T-SQL stored procedure from a C# application using SqlCommand (I don't know if this should make any difference).
If I monitor the Head Office machine and one of the Outlets while this is happening, on the HO I get three events in a row:
Broker: Message Classify (1 - Local)
Audit Broker Conversation (2 - No Certificate)
Broker: Message Undeliverable (1 - Sequenced Message)

The TextData contained in the third event is: "This message could not be delivered because the security context could not be retrieved."
The RoleName of the server is Initiator, and the TargetUserName is the name of the service on the Outlet.
On the Outlet I get the following event repeatedly (presumably as it continues to try sending the message) - Broker: Remote Message Acknowledgement (1 - Message With Acknowledgement Sent).
On the client the RoleName also appears to be Initiator, and the TargetUserName is blank.
This would make me suspect that certificates were missing or something, except that if I remove messages from the queue and resend them they seem to get through, and also I've checked both databases and they have the correct certificates.
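For anyone chasing the same symptom, a hedged diagnostic sketch: "the security context could not be retrieved" generally involves the certificate/remote service binding pair, so listing both on each side can reveal an expired certificate or a binding that points at the wrong service:

-- Run in each database involved in the dialog
SELECT name, subject, start_date, expiry_date   -- an expired certificate silently breaks dialog security
FROM sys.certificates;

SELECT name, remote_service_name, remote_principal_id
FROM sys.remote_service_bindings;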
Sorry for the stupid question, but I can't seem to figure it out...
There are 119 messages stuck in the transmission queue, all for the same queue. When I check the status of the queue (via sys.service_queues), is_receive_enabled = 1, is_activation_enabled = 1, and max_readers = 3. When I check whether there is an active queue monitor (via sys.dm_broker_queue_monitors), nobody is watching this queue. What would cause this queue to be active and enabled but have nobody monitoring it? Did something internal go wrong that made these (outbound) messages get stuck in the transmission queue without it showing up in the views? How can I get these messages "un-stuck" so they flow through the system?
A related problem I am seeing is that the return messages to this queue (which signify that the target has consumed the message, and which end the conversation) are not being consumed, so the conversations get stuck in the "DI" (disconnected inbound) state.
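One workaround that is often suggested when the queue monitor has gone missing (a sketch, assuming the stuck queue is dbo.MyQueue): toggling the queue's status forces Service Broker to re-create the monitor and re-evaluate activation:

ALTER QUEUE dbo.MyQueue WITH STATUS = OFF;
ALTER QUEUE dbo.MyQueue WITH STATUS = ON;
-- a monitor for the queue should now show up again in sys.dm_broker_queue_monitors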
For some reason the messages are stuck in sys.transmission_queue. The transmission status seems to be NULL. I verified all the queues and they have is_receive_enabled = 1, is_enqueue_enabled = 1, and is_activation_enabled = 0.
I have set up Service Broker across different domains and have bidirectional ports open to receive messages.
I can send and receive messages when I run the same Service Broker setup scripts on machines that belong to the same domain.
Is there any thought going into moving these two tables to a filegroup that we can control? Putting them in PRIMARY with the rest of my system tables is quite problematic and hinders my ability to manage space usage on my files. Traditionally we didn't have to consider a primary filegroup that could grow to large proportions, but with these two tables it can. If a large volume of messages comes through and the system can't keep up, these tables (and with them my primary filegroup) can grow enormously.
As Christopher Yager says in "Need distributed service broker sample", I am also testing sending messages between two SQL Server 2005 instances. After I set up the test environment with instance1 and instance2, I find that queue [q2] in ssb2 can't receive messages from ssb1. When I query with "select * from sys.transmission_queue", I get some message records whose transmission_status is "64 (error not found)".
I may have a misunderstanding of how SB works, but this seems like a problem.
If a queue is disabled (i.e., status = OFF) and a message is sent to it, the message is placed on sys.transmission_queue. Once the queue is enabled, I thought the messages were delivered to the queue in the order they were placed on sys.transmission_queue? I have been troubleshooting a problem, and this is not the case. Do I have a misunderstanding of how sys.transmission_queue works?
Hi, I was wondering whether it is possible to log, somewhere outside SB, that there are messages in the transmission_queue because the target queue was disabled.
I was testing this scenario:
try to send messages on a disabled queue and log the problem.
But the transmission_status from the transmission_queue is always empty.
This is the code that I tried to execute between the send and the commit, and after the commit:
WHILE (1 = 1)
BEGIN
    BEGIN DIALOG CONVERSATION .....
    SEND ON CONVERSATION ......

    IF (SELECT COUNT(*) FROM sys.transmission_queue) <> 0
    BEGIN
        SET @transmission_status = (SELECT transmission_status
                                    FROM sys.transmission_queue
                                    WHERE conversation_handle = @dialog_handle);
        IF @transmission_status = ''
        BEGIN
            -- Successful send - exit the loop
            UPDATE Mytable SET isReceivedSuccessfully = 1 WHERE ID = @IDMessageXML;
            BREAK;
        END
        ELSE
            RAISERROR(@transmission_status, 1, 1) WITH LOG;
    END
    ELSE
    BEGIN
        UPDATE [dbo].[tblDumpMsg] SET isReceivedSuccessfully = 1 WHERE ID = @IDMessageXML;
        BREAK;
    END
END

COMMIT TRANSACTION;
As I wrote before, the @transmission_status variable is always empty, and I get the same result even if I put the code after the COMMIT TRANSACTION!
Maybe what I'm trying to achieve makes no sense?
With an event notification I can be notified when the queue is disabled because the receive rolls back 5 times, but what if the target queue is disabled by mistake, outside the SB environment? How can I catch that and handle it properly?
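A hedged sketch of one way to catch the manual case, assuming a monitoring service named MonitorService already exists in the database: ALTER QUEUE is a DDL event, so a DDL event notification fires even when someone disables the queue by hand rather than via the poison-message rollback:

CREATE EVENT NOTIFICATION CatchQueueAlter
ON DATABASE
FOR ALTER_QUEUE   -- fires for any ALTER QUEUE, including WITH STATUS = OFF
TO SERVICE 'MonitorService', 'current database';

The notification arrives as an XML message on the queue behind MonitorService, where an activation procedure can inspect it and write the disable event to an ordinary log table.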
I've got 2 Service Broker databases on remote servers. I've created my endpoints, my routes, and have everything set up. But when I send a test message, the messages sit in the transmission_queue. There is no transmission_status. When I look at the sys.conversation_endpoints view, I see that the conversation state is CONVERSING. One odd thing I wanted to point out: the far_broker_instance column of sys.conversation_endpoints is NULL. When I run a trace on both databases, I see activity on the initiator, with things like Started_OutBound and conversing, but I don't see any messages such as acknowledgements or any errors. On the target side I see no activity at all. Does anyone know what the deal is? Why don't I get some kind of error message? Why are all my messages staying in the transmission_queue?
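A hedged diagnostic sketch for this symptom; a NULL far_broker_instance usually means the initial handshake never reached the target, so the route and the endpoints are the first things to check:

-- On the initiator: the route should carry the target's broker instance GUID and TCP address
SELECT name, broker_instance, address FROM sys.routes;

-- On both servers: the Service Broker endpoint must exist and be STARTED
SELECT name, state_desc FROM sys.service_broker_endpoints;

-- On the target database: its broker GUID, which must match the initiator's route
SELECT service_broker_guid FROM sys.databases WHERE database_id = DB_ID();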
I set up my Service Broker communication on the same SQL Server instance, between 2 different DBs.
One DB behaves as initiator and sends messages to the SB service set up in the other DB.
I execute the SEND statement from the initiator, and if I count the messages in sys.transmission_queue before committing the transaction, the count returns 0.
If I try to send a message that is not compliant with the message type, the count that runs after the SEND returns 1 - fair enough.
I'm confused about the first behaviour because, from what I understood, the acknowledgement and the removal of the message from sys.transmission_queue should happen after the COMMIT.
Hi everyone, I have a question about SQL Server 2005. I have written an ASP.NET 2.0 web application and it uses SQL Server 2005 as its database. In the last few days I noticed that the app is sometimes down. To analyze the problem I looked at the Activity Monitor in SQL Server Management Studio, and I can see approximately 170 process infos there. The column values of the process infos are:

Process ID: unique ID and a red down-arrow icon
User: my user
Database: my database
Status: sleeping
Command: AWAITING COMMAND
Application: .Net SqlClient Data Provider

When I click Locks by Object, I can see the IDs of the process infos. Some of the columns:

Type: DATABASE
Request type: LOCK
Request state: GRANT
Owner type: SHARED_TRANSACTION_WORKSPACE
Database: my database

So my question is: does this mean that I have locked the DB? How are these handled? For example, I have a Windows service which checks the DB every 10 seconds, and I can see that each check generates a new process info. Is this usual, or am I doing something wrong? Thanks for the help, bye!
When I run a select statement such as select 'X' from table1 where c1 = condition, locking on indexes behaves as expected.
However, if I run select 1 from table1 where c1 = condition, locking on indexes goes wild, locking pages and rows on indexes that are not even referenced in the query. Any ideas why?
Hello all, I'm just migrating from Oracle to SQL Server. Can anybody tell me how I can effectively use row-level locking in SQL Server? If two users are attempting to modify the same record, how can I deal with it in the back end (SQL)? Thanks in advance. Suchee
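A minimal sketch of the usual pessimistic pattern in T-SQL, using a hypothetical dbo.Customers table: reading the row with UPDLOCK and ROWLOCK inside a transaction makes a second session block on the same SELECT until the first commits, so two modifications serialize instead of overwriting each other:

DECLARE @id int;
SET @id = 42;   -- hypothetical key value

BEGIN TRANSACTION;

-- takes an update (U) lock on just this row; a second session running
-- the same statement waits here until the lock is released
SELECT * FROM dbo.Customers WITH (UPDLOCK, ROWLOCK) WHERE CustomerID = @id;

UPDATE dbo.Customers SET Phone = '555-0100' WHERE CustomerID = @id;

COMMIT TRANSACTION;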
I have an application in production (SQL 6.5) which causes locking that times out my other processes. I want to capture the time at which the locking takes place. I found in BOL that I can get the time a deadlock occurs using trace flag 3605 in SQL 7.0. If I have to use a trace flag, is it OK to set it with DBCC TRACEON, or is the -T startup option recommended? Any advice would be appreciated. TIA, Ram
I have used DTS in the past to copy information in certain tables in production over the top of those same tables in test. However, the process is now failing. Does DTS require an exclusive lock on the source table, as well as the destination table during the export process? Will shared locks on the table I need to copy prevent DTS from completing the process?
We are running out of locks while updating a particular table (table name = history, rows = 25,000,000) in SQL Server 6.5.
LE threshold maximum is set to 200. LE threshold minimum is set to 20. LE threshold percentage is set to 0.
Locks is set to 0.
I have also included the stored procedure, which we use to update the history table.
As you can see from the first four lines, we ran this SP 4 times, processing around 6 million rows at a time. It runs out of locks once it gets to around 5.5 to 6.5 million rows. Is there a way of locking the table so that this SP can be run just once, effectively processing all 26 million rows in one go?
Any help will be greatly appreciated.
Winston
--declare minihist cursor for (select uin,uan,mailingdate from history (tablock) where rowno between 5635993 and 12000000)
--declare minihist cursor for (select uin,uan,mailingdate from history (tablock) where rowno between 12000001 and 19000000)
declare minihist cursor for (select uin,uan,mailingdate from history (tablock) where rowno > 19000000)

open minihist
fetch next from minihist into @huin, @huan, @hmailingdate
while (@@fetch_status <> -1)
begin
    if (@@fetch_status <> -2)
    begin
        select @mailtot = 1
        select @mail12m = 0

        /*** Get the gender ***/
        select @sex = gender from name where uin = @huin

        /*** Calculate if mailed in the last twelve months ***/
        if (@hmailingdate is not null) and (@hmailingdate > '19980524')
            select @mail12m = @mail12m + 1

        /*** Get info for this uan from address_summary ***/
        select @mailtot = (@mailtot + mailed_total),
               @mail12m = (@mail12m + mailed_12months),
               @lastday = last_date
        from address_summary where uan = @huan

        /*** Insert a row into address_summary if it doesn't exist ***/
        IF @@rowcount = 0
Hi, we are running SQL 6.5 in production and I'm getting a blocking problem; mostly I kill the process, and whenever I check the SQL error log I see this message: Error: 17824, Severity: 10, State: 0. Unable to write to ListenOn connection '1433', loginname 'XXXY', hostname 'DT SA'. OS Error: 64, The specified network name is no longer available.
I'm trying to use the pessimistic row locking of SQL Server to get the following result.
When a customer form is opened, the row should be locked for writing. This lock should be held until the user closes the customer form.
I cannot use transactions because there can be more than one customer form open in the same app. In ADO a connection either IS in a transaction or is NOT; nested transactions are not supported.
How can I keep this row locked in SQL Server until I unlock it or the connection is broken (in case of problems on the client machine)? And how can I see from another machine whether this row (customer) is already locked, so I can open it read-only?
For the moment I'm using extra fields that hold the info on whether the customer is locked and by whom. But that's at application level, not at DB level.
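One DB-level alternative, sketched on the assumption of SQL Server 2000 or later: session-owned application locks live outside any transaction and are released automatically when the connection drops, so a form can hold one for as long as it is open, and another machine finds out instantly by trying to take the same lock. The resource name 'Customer:42' is just a hypothetical convention:

DECLARE @rc int;

-- Take the lock when the form opens; @LockOwner = 'Session' keeps it alive across transactions
EXEC @rc = sp_getapplock
        @Resource    = 'Customer:42',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',
        @LockTimeout = 0;

IF @rc < 0
    PRINT 'Customer already locked by someone else - open the form read-only';

-- Release it when the form closes (also released automatically if the connection breaks)
EXEC sp_releaseapplock @Resource = 'Customer:42', @LockOwner = 'Session';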
OK, this may be a brain-dead question, but I can't seem to figure out what I am doing wrong. I have a stored proc which has multiple inserts, updates, and deletes; however, I do not want to commit until the end of the procedure. So near the end, if no error has been returned by a particular insert, update, or delete, I tell it to COMMIT TRAN. My problem is that it seems to run and run and run. If I take out the BEGIN TRAN, boom, it runs fast and completes.
But if there is a problem near the end, the earlier statements will already have been committed, which I wish to avoid. I have an error routine at the end of the SP, and an IF statement that does a GOTO sp_error: if @@ERROR produces a non-zero value. I am sure I am doing something goofy but can't seem to see it; I know it comes down to the BEGIN TRAN. Is it that I have too many uncommitted transactions? Or perhaps I am locking something up? I know it's hard to tell without seeing what I am doing, but is there something simple to remember about using explicit transactions that I am forgetting? Any help is appreciated.
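For reference, a minimal sketch of the classic pre-TRY/CATCH shape such a proc usually wants, with hypothetical table names: one BEGIN TRAN, an @@ERROR check after every statement, and a ROLLBACK in the error label, so an error near the end undoes everything instead of leaving earlier work committed:

BEGIN TRANSACTION;

INSERT INTO dbo.OrderHeader (OrderID) VALUES (1);
IF @@ERROR <> 0 GOTO sp_error;

UPDATE dbo.OrderDetail SET Qty = 2 WHERE OrderID = 1;
IF @@ERROR <> 0 GOTO sp_error;

DELETE FROM dbo.OrderStaging WHERE OrderID = 1;
IF @@ERROR <> 0 GOTO sp_error;

COMMIT TRANSACTION;
RETURN;

sp_error:
ROLLBACK TRANSACTION;   -- undoes everything back to BEGIN TRANSACTION and releases the locks

The long running time the poster describes fits the same picture: while the transaction stays open, every lock it has taken is held, so other sessions queue behind it.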
Hello. I am using SQL Server 2000 to create a multi-user program that accesses data. The problem is that multiple users will update and select data at the same time in the same table.
Is there a way to avoid deadlocks? I have heard about two ways: using a temporary table to store the data and writing it back only when the user has finished the update; or using XML to write the data to an XML file stored locally, doing the updates on the file, and inserting the XML file into the database after completion.
Does anybody know much about these approaches? Do you know where I can find code for this?
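Neither workaround is usually needed. A more conventional sketch, against a hypothetical dbo.Students table: most deadlocks of this read-then-update shape go away if every session takes an update lock on the row it is about to change, so a second session waits at the SELECT instead of deadlocking later:

DECLARE @id int, @newName varchar(50);
SELECT @id = 1, @newName = 'Alice';

BEGIN TRANSACTION;

-- the UPDLOCK makes the read-then-write atomic with respect to other writers
SELECT *
FROM dbo.Students WITH (UPDLOCK, ROWLOCK)
WHERE StudentID = @id;

UPDATE dbo.Students SET Name = @newName WHERE StudentID = @id;

COMMIT TRANSACTION;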
Hi all, firstly I would like to apologise because I don't actually use SQL or know diddly squat about it. I am a network administrator and have a problem with a user's domain account getting locked out every time he starts his SQL Agent service (we are running a Windows 2003 domain). I know this is a very vague post and I am sorry for that; I am just after some general ideas/information on why this keeps happening. Any help greatly appreciated.
Deepak writes: "How do I lock the record while using a query like select id, name from students? I want to know the various locks in SQL Server and the use of each in insert, update, delete, select, etc. I am using it from Visual Basic 6.0."
use DB1;

select * from [Jobs];

select resource_type, request_mode, request_status, request_session_id
from sys.dm_tran_locks;
It produces the following results when run:
resource_type | request_mode | request_status | request_session_id
--------------|--------------|----------------|-------------------
Database      | S            | Grant          | 51
Database      | S            | Grant          | 54
What is "S"? what are the other possibilities and their meaning for this field. And.. 51 and 54...what are they exactly? Are they individual people or user ids? For example, could 51 be "Advanced users" and 54 be "Generic Users" under SQL security?
My next question is this: I suspect I have too many indexes on my table "Jobs", and I suspect that is causing page locks, especially when someone is updating records. I run this query when users complain to me about not being able to edit records.
OK, the question is: if I have a page-locking entrant, is it possible through SQL manager to boot a user off temporarily? How do you do it?
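A hedged sketch of the usual answer: the request_session_id values (51, 54) are session IDs (SPIDs), one per connection, not user groups, and a session can be disconnected with KILL if you have the rights. The 54 below is only an example SPID:

EXEC sp_who2;   -- the BlkBy column shows which SPID is blocking which

KILL 54;        -- disconnects that session and rolls back its open transaction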
Hi all, please help me work out how to implement locking in the scenario below.

Requirement: there are two tables, Table1 and Table2. If I insert into Table1, related data fields are auto-updated in Table2; similarly, based on the data in Table2, Table1 data needs to be updated. The sync of Table1 and Table2 is working fine. My problem is that we are handling the updates/inserts from UI screens, with a separate screen for each table. When multiple users access the screens, say User1 updates Table1 and User2 updates Table2, we need locking so that at any one time only one screen can update Table1 (and hence Table2); the other screen shouldn't be allowed to update Table2 (and hence Table1). This is very common locking functionality, but I am not finding any way to implement it. Please advise. Srain.
I need to secure a SQL Server database such that it can only be accessed from an application, and to prevent anyone with full admin rights on their local machine and a SQL Server licence from getting into the database.
I am struggling with controlling access to the database from the sa account. If I attach the database to a second instance of SQL Server, different from the one where the database was created, then I am able to gain full access, no problem, which is of course The Problem.
From what I can work out:
1. sa is dbo (and this cannot be changed)
2. dbo has the db_owner role (and this cannot be changed)
3. the permissions for the db_owner role cannot be changed
4. the password for sa is set at the level of the SQL Server instance, not per database
.....so any sa can access any database.
I don't believe this, so I have to be missing something significant; any light on the subject would be gratefully received.
Hi! This is a very simple question and I'm sure you guys will help me a lot. I'm using Visual Basic 2005 for programming. I have one table in my MS SQL 2005 database with an int column holding a counter that needs to be incremented when a user registers. When reading the value I use a simple SQL query like SELECT counter FROM companies WHERE company=0, then I store the value in a local int variable, increment it, and write the incremented value back with UPDATE companies ... I need every single customer to have an individual value. My question is: how can I prevent an error, data corruption, or whatever, if two or more users want to register at the same time? I've been reading about update locks but I'm not sure how to implement this from Visual Basic 2005, and I don't want to store scripts on SQL Server. I'll appreciate your comments and help on this situation.
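A hedged sketch of the standard fix, reusing the poster's companies table and counter column: doing the increment and the read in a single UPDATE makes the operation atomic, so two simultaneous registrations can never see the same value. This can be sent as one SqlCommand from VB 2005 without storing anything on the server:

DECLARE @next int;

-- increment and capture the new value in one atomic statement
UPDATE companies
SET @next = counter = counter + 1
WHERE company = 0;

SELECT @next AS NewCounterValue;   -- return the reserved value to the application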
I have a busy transactional table and I want to use the row-level locking mechanism in MS SQL: SELECT * FROM PARTY WITH (UPDLOCK, ROWLOCK) WHERE LastName = 'Clinton'. Are there any downsides to this approach?
I'm using SQL Server 2005. I'm creating a transaction and enlisting the commands inside VB.NET code, as well as surrounding the T-SQL in a BEGIN TRANS ... COMMIT TRANS block. I also have the isolation level set to the highest (Serializable) in the VB.NET code and in the sprocs. I'm running 4 instances of the app on one server and 4 instances on another. I am handling the lockouts just fine and writing them to an error table within the DB; the app keeps spinning and producing data just fine. There are 3 places where the locking may occur within the app. Two of them (a select and an insert) are fine: the app will eventually cycle around and pick up the records that may have been locked out. My concern is the update portion, which updates stats based on the insert done previously. If those records never get updated, the only way I would know whether they were processed would be to check our error table to see if the record exists. I would like to know if there is any way to cut down on the number of lockouts (which may be perfectly normal) and a way to make sure that update happens. Should I be using different isolation levels, etc.? Anything of importance might be useful.
What I'm trying to do is this: have an application set a lock on a specific row in a table so other applications can see it's busy. So I use SELECT * FROM mytable WITH (ROWLOCK, HOLDLOCK) WHERE condition to set the lock; that should lock the row until I close this recordset (methinks, anyway...). Then to detect the lock I use SELECT * FROM sametable WITH (READPAST) WHERE samecondition. If the row I'm looking for is locked, this select should skip that row, so I get an empty selection.
That's what I want to happen anyway, but in the real world this doesn't seem to work: it doesn't lock, or it doesn't skip...
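A hedged explanation and sketch, assuming a table mytable with an id column: a SELECT with ROWLOCK and HOLDLOCK only takes shared locks, which are compatible with the shared locks the READPAST reader requests, so nothing is ever skipped; and outside an explicit transaction even those locks are released the moment the statement finishes. Requesting UPDLOCK on both sides makes the locks conflict, which is the classic queue-reader pattern:

-- Session A: mark the row busy; the update (U) lock is held until the transaction ends
BEGIN TRANSACTION;
SELECT * FROM mytable WITH (UPDLOCK, ROWLOCK, HOLDLOCK) WHERE id = 42;
-- ... keep the transaction open for as long as the row should read as "busy" ...

-- Session B: request the same lock mode, skipping rows that are already taken
SELECT * FROM mytable WITH (UPDLOCK, READPAST) WHERE id = 42;
-- returns an empty result while session A's transaction is still open

-- Session A, when done: the row becomes visible to session B again
COMMIT TRANSACTION;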