SQL Performance - Disk Queue.
Oct 25, 2007
I am running SQL Server 2000 SP4 on a server with two dual-core 4G processors, with the data attached via a SAN.
I have a 70GB database with 10 users that is giving atrocious performance. I have just tried to run a count(*) across a couple of tables and am still waiting for the results 15 minutes later. When I look at the disk queue it is around 50-60; I thought the target for this was around 2. I am sure that the hardware we have in place is capable of running this db, but I'm not sure how to fully analyse what is going wrong here.
Any tips would be gratefully received.
Si
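A minimal sketch of one place to start on SQL 2000, assuming a hypothetical database name MyBigDb: the built-in function below reports cumulative I/O counts and I/O-stall time per database file, which helps show whether the SAN is genuinely servicing reads slowly or whether the query is simply scanning an enormous amount of data.
DECLARE @dbid INT
SET @dbid = DB_ID('MyBigDb')
SELECT DB_NAME(DbId) AS database_name,
       FileId,
       NumberReads,
       NumberWrites,
       IoStallMS                          -- total ms spent waiting on I/O for this file
FROM ::fn_virtualfilestats(@dbid, 1)      -- file id 1 is normally the primary data file; repeat per file id
If IoStallMS climbs rapidly relative to NumberReads while the count(*) runs, the bottleneck is more likely the I/O path to the SAN than SQL Server itself.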
View 2 Replies
Sep 10, 2007
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, and 87 respectively, which value do I use to compute the Avg. Disk Queue Length for a 4-disk array (RAID 10): Average, Minimum, or Maximum? The disk (LUN) is on a SAN.
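(As a rough worked example of the usual rule of thumb, assuming the counter was captured against the whole 4-disk LUN: the sustained Average of 10 works out to 10 / 4 = 2.5 outstanding I/Os per spindle, just above the commonly quoted target of about 2 per spindle, while the Maximum of 87 mostly reflects transient spikes. The sustained average is normally the figure compared against that target.)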
View 1 Replies
View Related
Aug 14, 2007
Avg. Disk Queue Length is at 100%. Any ideas?
=============================
http://www.sqlserverstudy.com
View 2 Replies
View Related
Nov 8, 2007
Again, one client with an EMC SAN, and again performance is several times worse than you can get with a cheap and primitive IDE drive... :(
Anyway, my question.
I am monitoring many parameters, including Avg Read Queue and Avg Write Queue.
So if ReadQueue = 3 and WriteQueue = 7, what does it mean?
Scenario 1, there are 2 different Queues (R- read request, W - write request):
Windows --> Device
Read. Queue: R:R:R
Write Queue: W:W:W:W:W:W:W
Scenario 2:
Windows --> Device
Common Queue: W.R.W.W.R.W.W.R.W.W
In other words, if SQL Server flushes writes (TRAN COMMIT or CHECKPOINT), generating hundreds or even thousands of write requests in a few milliseconds, so the queue grows to 100-300 for a second or so, are read requests blocked during that time?
View 1 Replies
View Related
Sep 6, 2007
Hi All,
we have collected Perfmon data for a specific LUN. Here is the background on the LUN: it is RAID 10 with 4 physical disks. We have problems interpreting the data. In the Perfmon counter screen we have a maximum of 435 and an average of 0.512. Can somebody tell us what it is that we are missing? Any help is greatly appreciated.
Thanks,
Venkat.
View 3 Replies
View Related
Jun 16, 2003
Does anyone have any recommendation on whether it's better to monitor the average queue length for physical or logical drives? What about for a RAID set?
Just wondering,
thanks
View 6 Replies
View Related
May 16, 2008
I have a poorly performing SQL box.
I have run perfmon and the avg read queue length is pretty much permanently maxed out at 100%.
I have run a database index defrag.
On further inspection the file system is highly fragmented. The file fragmentation is 98%, with the MDF file fragmented into 25,000 pieces. Running a standard Windows defrag does not resolve this.
Two questions:
1 - Is heavy file fragmentation of the MDF file a likely cause of the read queue length bottleneck?
2 - Why is the MDF file not defragmenting? Does SQL Server have to be taken offline? Is it possible to defrag an MDF file?
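A minimal sketch of one way to separate internal index fragmentation from file-system fragmentation, assuming SQL 2000/2005 and a hypothetical table name; healthy scan density here would suggest the remaining slowness is down to the 25,000-fragment MDF rather than the indexes.
DBCC SHOWCONTIG ('dbo.MyLargeTable') WITH TABLERESULTS, ALL_INDEXES
-- Low scan density or high logical fragmentation points at index-level fragmentation,
-- which index rebuilds fix; file-level fragmentation is invisible to this command.
As for the file itself: the MDF is held open by SQL Server while the service is running, so the online Windows defragmenter generally cannot move it. The usual approach is to defragment it while the database is detached or the SQL Server service is stopped.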
View 2 Replies
View Related
Jul 17, 2007
PROBLEM: Sporadic spikes in Avg. Disk Queue Length with users freezing up until it comes down.
Server:
-SQL Server 2000 SP4
-Windows Server 2003
-4 Intel Xeon Processors 5130 @ 2.00 GHz
-4 GB of RAM (SQL Configured Dynamically with min. of 0 and max. of 3072)
-Max Worker Threads – 255
Environment:
-100 users with about 65 on Thin Clients going through a Remote Desktop Server
-Front End CRM package is a program called Telescript
-I do not use replication or any triggers, mainly just scheduled DTS packages
-Largest and most heavily used database is 13 GB, using a fill factor of 80 on my indexes
Recent Changes – The most notable thing that I changed in the last couple of weeks as this problem arose was the SQL Mail function. I started using that and created some DTS packages that sent out emails. Our user base has been steadily growing, but nothing drastic.
So Far – The buffer cache hit ratio is good and I have not seen any problems with memory, so that is ruled out. The processor time never maxes out, but it does appear to double when the Avg. Disk Queue Length spikes. I re-indexed my active tables and performed shrinks over the weekend, but the problem occurred again yesterday. Sometimes it will last under a minute, but the real problem is when it lasts for several minutes. I used SQL Profiler to try to identify what query was causing the problem, and while I see some large audit times, I cannot tell what the problem is. I had a hard time reading through the trace with all of the stored procedures that were invoked (i.e. cursor fetches). The server itself is very new and has only been in use for about 2 months, so I couldn't imagine it would be hardware fragmentation.
QUESTION: What else should I be looking at, or what could I use to figure out where this problem is stemming from? I am in a corner and could use any suggestions on how to diagnose this problem.
Thanks in advance,
Don
dkoehne@401kexchange.com
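A minimal sketch of a query that can be run during one of the spikes to see which sessions are waiting or blocking, assuming SQL Server 2000 (sysprocesses is the relevant view there); sessions at the top of the physical_io sort are the heaviest disk consumers, and a non-zero blocked column means the spid is waiting on another session.
SELECT spid, blocked, lastwaittype, waittime, physical_io, cpu, cmd, program_name
FROM master..sysprocesses
WHERE spid > 50              -- skip system spids
ORDER BY physical_io DESC
Capturing this once a minute into a table during business hours can make it easier to line the spikes up with a specific procedure or login than wading through a full Profiler trace.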
View 14 Replies
View Related
Jul 26, 2006
I know we are not allowed to benchmark SQL Server but..... It would be nice to have material to present which demonstrates the performance gains using a queue compared to insert/delete in a SQL table.
Logically it seems faster to use a queue due to conversation group locking and Service Broker itself. But there seems to be some overhead involved just to manage these queues that Service Broker has to perform.
I am sure we are not unique in trying to figure out whether we will get a boost in performance by using a Service Broker queue between services rather than a table to queue data. What is available to help understand the performance gains of using a queue?
View 2 Replies
View Related
Aug 7, 2007
Hi,
We are doing a POC for transferring a huge number of messages (millions) from one machine to another. The two approaches we are examining are MSMQ and SQL Service Broker. The MSMQ is set up as a remote queue on the target machine, and the source machine takes as little as 1 millisecond to send a message (using a .NET program). However, when testing Service Broker, we find that the time taken to send a message to the queue is significantly higher - around 70 milliseconds. Could you please help us understand why this is happening?
The service broker distributed queues have been set up as per the directions in the posting at http://www.sqlservercentral.com/columnists/sindukuri/2797.asp
The source program (written in .NET) calls a stored procedure on the source machine to write to the SSB queue. When we run a SQL Trace, we find that the SP is responsible for 99% of the time taken. Here is the SP that sends the message:
DECLARE @ConversationHandle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @ConversationHandle
    FROM SERVICE [SenderService]
    TO SERVICE 'ReceiverService'
    ON CONTRACT [SampleContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @ConversationHandle
    MESSAGE TYPE [SenderMessageType]
    (<<XML String>>);
Please let us know if there are any additional settings required in Service Broker to improve its performance. Or, what are the other approaches for building a distributed SSB application?
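For what it's worth, a large share of that per-send cost is usually the BEGIN DIALOG itself, since it creates a brand-new conversation (with the associated routing and security work) for every message. A minimal sketch of reusing one dialog for many sends, assuming the services and contract above already exist and the message type accepts the sample payload; the handle could equally be cached in a table between procedure calls.
DECLARE @ConversationHandle UNIQUEIDENTIFIER;
DECLARE @i INT;

-- Open the dialog once...
BEGIN DIALOG CONVERSATION @ConversationHandle
    FROM SERVICE [SenderService]
    TO SERVICE 'ReceiverService'
    ON CONTRACT [SampleContract]
    WITH ENCRYPTION = OFF;

-- ...then send many messages on the same conversation.
SET @i = 0;
WHILE @i < 1000
BEGIN
    SEND ON CONVERSATION @ConversationHandle
        MESSAGE TYPE [SenderMessageType]
        ('<test>payload</test>');
    SET @i = @i + 1;
END;
Whether this closes the gap with MSMQ in this particular POC is an open question, but dialog reuse (or pooling) is the standard first tuning step for high Service Broker message rates.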
View 3 Replies
View Related
Jan 26, 2007
Hello,
is there built-in support for monitoring the number of elements in the transmission queue via performance counters?
Thanks,
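As a hedged alternative (assuming SQL Server 2005), the transmission queue is exposed as a catalog view, so its depth can be polled directly and fed into whatever monitoring is already in place:
SELECT COUNT(*) AS pending_messages
FROM sys.transmission_queue;     -- messages not yet delivered to (or acknowledged by) the target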
View 4 Replies
View Related
Aug 10, 2007
***cross-posted to MSDN SQL Server forum***
Hi All. Hopefully I posted this to the right section.
I would like help identifying if I have a disk performance issue or not. First the background: we have a j2ee application using the MS SQL 2005 JDBC driver and Hibernate on 4 application servers, and an active-passive SQL Server 2005 cluster. All of the servers reside in the same physical rack and switch.
Our application is typically bounded by CPU on the app server, or throughput from the database. Several months ago we were using SQL 2000 and would often max out the CPU on the database server before anything else, but often the database could keep up and we would max out the app servers CPU. Now we have 2005 on a much more powerful machine and more app servers, but we seem to be running up against a problem with throughput from the database.
The issue is not CPU. The total CPU average on the database server, as monitored in Perfmon at 30-second intervals, stays consistently below 40%. The app servers stay well below 30%. But what concerns me is the Average Disk Queue Read Length on the database server, particularly for our E: drive. On this db server, the transaction log, the system and temp dbs, and our application's database are all on separate EMC SAN shares, connected via fibre channel. The E: drive houses the app data and is a 15-way meta device (fifteen 10GB logical devices striped at 960k for a 150GB device) in a RAID-S configuration, on an EMC Symmetrix array located in the same rack. The database is roughly 30GB.
I have read various articles online describing how to interpret the Average Disk Queue Read Length performance counter with regard to SQL Server. Some have said this should not exceed the number of physical spindles * 2. We are seeing values of 32 consistently, averaging over 60 during peak processing hours, and spiking to well over 100, on a 1.0 scale (3-second sample interval).
So our application servers seem to be waiting on their database calls (a lot of inserts, with frequent but small-resultset selects) and do not show I/O issues with their local storage, memory, or network interfaces. The database server again has no CPU, network, or memory issues. I should add that the Average Disk Queue Write Length counter does not show any issues; it's always below 1 (on a 1.0 scale). The EMC array has both read and write caching. The indexes of the application database are rebuilt weekly and defragmented every day, with stats rebuilt after the defrag.
So how can I further determine where my performance problem lies? All thoughts appreciated! Thanks!
-tuj
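A minimal sketch of one way to confirm whether reads on the E: drive's files are actually stalling, assuming SQL Server 2005 (the function below is new in 2005); a high average read stall on the application data files would corroborate what the queue-length counter suggests, while low stalls would point back at the sheer volume of I/O being requested.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms DESC;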
View 2 Replies
View Related
Aug 9, 2007
Hi All:
I would like help identifying if I have a disk performance issue or not. First the background: we have a j2ee application using the MS SQL 2005 JDBC driver and Hibernate on 4 application servers, and an active-passive SQL Server 2005 cluster. All of the servers reside in the same physical rack and switch.
Our application is typically bounded by CPU on the app server, or throughput from the database. Several months ago we were using SQL 2000 and would often max out the CPU on the database server before anything else, but often the database could keep up. Now we have 2005 on a much more powerful machine and more app servers, but we seem to be running up against a problem with throughput from the database.
The issue is not CPU. The total CPU average, as monitored in Perfmon at 30-second intervals, stays consistently below 40%. But what concerns me is the Average Disk Queue Read Length, particularly for our E: drive. On this machine, the transaction log, the system and temp dbs, and our application's database are all on separate EMC partitions, connected via fibre channel. The E: drive houses the app data and is a 15-way meta device (fifteen 10GB logical devices striped at 960k for a 150GB device) in a RAID-S configuration.
I have read various articles online describing how to interpret the Average Disk Queue Read Length performance counter with regard to SQL Server. Some have said this should not exceed the number of physical spindles * 2. We are seeing values of 32 consistently, averaging over 60 during peak processing hours, and spiking to well over 100, on a 1.0 scale (3-second sample interval).
So our application servers seem to be waiting on their database calls (a lot of inserts, with frequent but small-resultset selects) and do not show I/O issues with their local storage, memory, or network interfaces. The database server again has no CPU, network, or memory issues. I should add that the Average Disk Queue Write Length counter does not show any issues; it's always below 1 (on a 1.0 scale). The EMC Celerra array has both read and write caching. The indexes of the application database are rebuilt weekly and defragmented every day, with stats rebuilt after the defrag.
So how can I further determine where my performance problem lies? All thoughts appreciated! Thanks!
-tuj
View 9 Replies
View Related
Mar 2, 2004
I have a SQL 2000 db which I have no say in designing; I just make it run. My typical SQL counters, such as the system queue, buffer cache, and cache hit ratio, are all good. If I need to monitor disk activity (mainly how fast my data is being read, and how long users wait for that data for both reads and inserts), what are the best counters for this, and what values should throw up a red flag?
View 1 Replies
View Related
Feb 11, 2006
We have an application that is experiencing I/O contention, particularly in tempdb but also in two other databases. The data is stored on mirrored PowerVault 220s, each with 10 of 14 possible disks. The PowerVaults are JBOD devices, not true SANs. The current config has four separate groups of physical drives assigned to distinct logical drives for log files, tempdb, and the two app dbs. This means, for example, that tempdb resides on one mirrored drive. The standard advice when faced with disk contention is to add spindles if possible. With 4 empty slots, we would presumably assign the new physical disks to the most stressed db, e.g. tempdb.
An alternative arrangement would be to combine all the physical drives into one logical drive, and put all the files, log and data, onto the single logical drive. The hope for this configuration is that the PowerVault would automagically distribute the data among the drives such that all drives were in use, all spindles reading and writing at maximum capacity when necessary. It is my understanding that full-featured SANs, like NetApps and EMC models, do this. My question is whether this configuration is best for the PowerVault as well. Or is this the essential difference between JBOD and a true SAN? Has anyone tried both arrangements? Advice is much appreciated.
View 2 Replies
View Related
Jul 20, 2005
I 'inherited' a group of SQL Server server-class machines. They are true server technology but the disk sub-systems are lacking. There is one hot-swap backplane that all the drives share (with one SCSI channel), thus even though there are three logical drives (composed from 6 to 8 hard drives), they all go through one channel. This is creating a performance issue that is noticeable and can be seen in the various performance counters that Microsoft recommends one monitor for disk I/O.
For a cheaper 'fix', I can add a separate two-drive bay (with its own SCSI channel) with mirrored drives. I would then most likely place the transaction log files on this new channel. Or I could place the indices filegroup files on this new channel for DBs with mainly searching going on (not much updating). If I went this route I would be leaning towards the transaction log move, since the second method would require me to move DBs around quite a bit. Any input on this solution (besides spending more money)?
What I would prefer to do is get a better server-class machine or add an external drive bay solution (not a SAN). I would try to get three or four SCSI channels in the new hardware to split the different files/filegroups out (i.e. transaction log files, data filegroup, indices filegroup, etc.). My only concern here is: would this more expensive solution be worth the money? As far as replacing servers, I have only two kinds of experience: replacing somewhat underpowered servers with slightly less underpowered servers, and replacing overkill servers with even more overkill servers. In both cases, the disk sub-systems were fairly equivalent from the old system to the new one. Will going the three/four channel route really get data moving along?
We have one server in particular that hosts a database (one of many on it) for a web application that gets decent traffic (it is a private, login-based system for internal use and external use by our clients' agents). Periodically throughout the day, there are 2-5 minute bursts where performance slows to a crawl. I want to spend more time profiling queries and such before recommending we spend more money, but the folks I am working for want quick results, and there is quite a bit of stored procedure logic to profile and investigate. I know the disk sub-system is definitely in need of an overhaul, but I would like to get an idea of the performance gains from adding one additional channel over the existing single channel, as well as from going the three/four channel route over the existing single-channel setup. Any information would be greatly appreciated.
Regards,
Tony
View 2 Replies
View Related
May 14, 2007
Summary: On April 1 we started replication from a publishing system doing about 4M transactions per day to a subscribing system.
Performance was good. Latency was ~ 5-7 seconds.
May 10 we noticed that the DB was behind (latency was 12 hours).
All performance counters seem good with the exception of the disk.
. Performance spikes are 8 minutes apart and last from 30 - 60 seconds.
. During this period, Disk % Busy (1 - Disk % Idle) is 100%
The publisher DB publishes about 50-52 xacts/sec.
Rate of distribution (distribution DB to Subscriber DB) is ~ 47 xacts / second, so latency is increasing (currently at 33 hours). Previously my Subscriber system's "capacity" was 150 xacts / sec.
I know this because several weeks ago, the network went down, we were 24 hours behind.
When the network came back up the replication subscriber system was able to catchup at around 150 xacts / sec, or 3X the production system rate.
What has changed between then and now? Not much. We did install Tivoli Storage Manager (IBM's backup system) a couple of weeks ago. It seems to run fine on a nightly basis, and I don't see any periodic heavy disk I/O from it, but I've had them shut the TSM services down just to be sure.
We've also eliminated all extraneous processes other than those I need for performance monitoring (there was an RTVScan virus-scan process).
I've ruled out autogrowth as an issue, as I've bumped the growth increments so that growths are very infrequent (several days apart at this point). When we resolve the problem, I'll dial this down to something more reasonable.
My disk configuration is not ideal, I realize (a single RAID-5 disk group with 3 partitions); however, this has not changed in the 6 weeks.
Thanks for any help on this!
Jack Griffith
Configuration:
Subscribing System:
SQL Server: 2000, SP4 - 8.0.2039
CPU - 2.8GHZ Xeon, Quad Dual-core
Memory - 3.5GB RAM
Disk: 3 partitions on a single RAID-5 disk with 1118 GB of space:
C: 39GB System and Programs
D: 97GB Log space
E: 982 GB Data space
Replication configuration:
- nosynch, continuous Transactional Replication
- Distribution db is on Subscription system
- distribution - Publication of approx. 50 transactions / second
Subscriber DB configuration:
DB size: 64458 MB
Logging: Simple (at this point)
distribution
DB size: 3111 MB
Logging: Simple (at this point)
View 1 Replies
View Related
Jul 20, 2005
Fellas!! This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story:
I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated ProLiant computer with 2GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6GB RAM). Nothing else is running on that specific computer. Once SQL Server is started, it hits 300MB RAM (the minimum that was set in the configuration of the server - remember, it is dynamically acquired). Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10,000 rows which are all of type IMAGE). Once all the data is retrieved, the RAM is at about 400MB.
From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9MB causes the RAM to go up by roughly 20MB, and after commit it goes down 10MB (total gain of roughly 10MB RAM). Eventually the SQL Server process hits its upper limit (1.6GB) and at this point it starts slowing down. Some performance checks showed me the SQL Server has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much, much slower).
We have a development environment running the exact same code (it is exactly the same in everything, except for the amount of data stored in the DB). This does not happen there at all.
I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.
It does not seem to have anything to do with the .NET code.
Thank you very much,
M Yamo.
View 4 Replies
View Related
May 29, 2008
Depending on the way I write a query, I come up with these 2 stats.
Is there a sure winner in this race, keeping in mind the overall health of the server?
(I'm not sure of the specs of the server, as I can't log on to it :/ but are there any SQL variables that would show CPU speed and the number of CPUs?)
I am almost leaning towards the single-CPU query because of the lower resources used -
or are most of the "reads" in the parallel query not read directly from the HD, but served from the Table Spool created internally (the query plan shows it)?
          CPU    Reads   Writes   Duration
Parallel: 200k   3.2m    2400     62s
Solo:     79k    1.1m    600      79s
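As a hedged pointer (assuming the server is SQL Server 2005; on 2000, xp_msver reports a ProcessorCount row instead), the CPU count visible to SQL Server can be read with a query, without logging on to the box itself. Clock speed is not exposed this way, but the CPU count is usually what matters for reasoning about parallel plans.
SELECT cpu_count,                                  -- logical CPUs visible to SQL Server
       hyperthread_ratio,                          -- logical CPUs per physical socket
       cpu_count / hyperthread_ratio AS sockets
FROM sys.dm_os_sys_info;

-- On SQL Server 2000:  EXEC master..xp_msver 'ProcessorCount'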
View 9 Replies
View Related
Dec 28, 2006
hello,all
I am new to SQL 2000. I installed the SQL 2000 database on the C: disk, but now I find my C: disk space is smaller than before, so I want to move my database (both data and structure) from the C: disk to the D: disk (which has plenty of space).
Is it possible to do this?
If it can be done, do I need to change my ASP.NET program source code (e.g. change my Crystal Reports connection string)?
Thanks in advance!
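A minimal sketch of the usual detach/attach approach on SQL 2000, assuming a hypothetical database name and file paths; take a backup first, and make sure nothing is connected while the database is detached.
USE master
GO
EXEC sp_detach_db 'MyAppDb'
GO
-- Copy MyAppDb.mdf and MyAppDb_log.ldf from C: to D: in Windows, then:
EXEC sp_attach_db 'MyAppDb',
     'D:\SQLData\MyAppDb.mdf',
     'D:\SQLData\MyAppDb_log.ldf'
GO
The connection string normally does not need to change, since it points at the server and database name rather than the physical file locations.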
View 1 Replies
View Related
Jan 11, 2006
Hello,
This is info that I am still not certain about and I just need to make sure, my gut feeling is correct:
A.
When a procedure is triggered upon reception of a message in a queue, what happens when the procedure fails and rolls back?
1. Message is left on the Queue.
2. is the worker procedure triggered again for the same message by the queue?
3. I am hoping the Queue keeps on triggering workers until it is empty.
My scenario is that my queue reader procedure only reads one message at a time, thus I do not loop to receive many messages.
B.
For my scenario messages are independent and ordering does not matter.
Thus I want to ensure my queue reader procedures execute simultaneously. Is reading the top message in one reader somehow blocking the queue for any other reader procedures? I.e., if I have BEGIN TRANSACTION when reading messages off the queue, is that effectively going to prevent many reader procedures from working simultaneously? Again, I want to ensure that Service Broker is effectively spawning procedures that work simultaneously.
Thank you very much for the time,
Lubomir
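For reference, a minimal sketch of the single-message reader pattern being described, assuming a hypothetical queue name. Inside the transaction, RECEIVE locks only the conversation group of the message it picks up, so other activated readers can receive messages from other conversation groups at the same time; a rollback puts the message back on the queue (and counts toward poison-message detection).
DECLARE @handle UNIQUEIDENTIFIER,
        @message_type SYSNAME,
        @body VARBINARY(MAX);

BEGIN TRANSACTION;

WAITFOR (
    RECEIVE TOP (1)
        @handle       = conversation_handle,
        @message_type = message_type_name,
        @body         = message_body
    FROM dbo.MyWorkQueue
), TIMEOUT 5000;                    -- give up after 5 seconds if the queue is empty

IF @handle IS NOT NULL
BEGIN
    -- ... process the message here ...
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;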
View 5 Replies
View Related
Nov 13, 2007
-- Initialize Control Mechanism
DECLARE @Drive TINYINT,
        @SQL VARCHAR(100)

SET @Drive = 97

-- Setup Staging Area
DECLARE @Drives TABLE
(
    Drive CHAR(1),
    Info VARCHAR(80)
)

WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''

    INSERT @Drives
    (
        Info
    )
    EXEC (@SQL)

    UPDATE @Drives
    SET Drive = CHAR(@Drive)
    WHERE Drive IS NULL

    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM (
        SELECT Drive,
               Info
        FROM @Drives
        WHERE Info LIKE 'Total # of %'
     ) AS d
GROUP BY Drive
ORDER BY Drive
E 12°55'05.25"
N 56°04'39.16"
View 16 Replies
View Related
Nov 15, 2006
Hello,
I am trying to set up a test cluster and am having an issue. When I try to create the physical disk resource, it takes both drive E: and drive Q: and doesn't separate them into two physical disk resources. This means when I try to associate the quorum disk, it links it to the physical disk resource containing both drives E and Q. Then when I try to install SQL2k5 I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E and Q onto two physical disk resources so I can specifically associate the quorum with Q: and SQL with E:, or should I be setting the quorum disk to a majority node set? Thanks in advance.
John
View 4 Replies
View Related
Feb 20, 2001
Hello,
this is my configuration :
1) 3 disks in RAID5 that hold the SQL data
2) 1 disk in RAID0 that holds the only paging file.
What will happen to the SQL data (DB) when the disk that holds the paging file crashes?
Kindest regards,
Luc.
View 1 Replies
View Related
May 7, 2004
Hi all,
Ok here goes,
I have a three-tier system using SQL Server 2000; we are currently experiencing I/O bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical RAID 10 array and separate the logs and data, i.e. data on one array and logs on the other array.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
Thanks
View 4 Replies
View Related
Jul 18, 2007
I have a requirement where, once we create a new record in a table, we submit a query to fetch some data and save it in one of the columns of the newly created record. The main requirement is that the server we fetch the data from can be down for some time for regular maintenance, and we do not want to lose the fetch query in that process. Is there a way we can implement this?
Thanks.
View 3 Replies
View Related
Nov 22, 2002
I have a table that I want to act as a queue.
It has no indexes and no key. Just one column.
Basically I want a stored procedure that will pull / return the first record off the queue (table) and delete it. I'd rather not use MSMQ for this.
There will be about 10 users trying to do this at the same time, and they will be trying to pull records off about 15 times every second.
How can I do this and ensure that no two requests pull off the same row?
Thanks,
Kevin
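A minimal sketch of one common pattern for this kind of destructive read, assuming SQL 2000 and that an IDENTITY column can be added so individual rows are distinguishable; UPDLOCK keeps two callers from grabbing the same row, and READPAST lets a second caller skip the locked row instead of waiting for it.
CREATE TABLE dbo.WorkQueue
(
    Id      INT IDENTITY(1, 1) NOT NULL,
    Payload VARCHAR(255)       NOT NULL
)
GO

CREATE PROCEDURE dbo.DequeueOne
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @Id INT, @Payload VARCHAR(255)

    BEGIN TRANSACTION

    -- Lock the first unlocked row; concurrent callers skip it via READPAST.
    SELECT TOP 1 @Id = Id, @Payload = Payload
    FROM dbo.WorkQueue WITH (UPDLOCK, READPAST)
    ORDER BY Id

    IF @Id IS NOT NULL
    BEGIN
        DELETE FROM dbo.WorkQueue WHERE Id = @Id
        SELECT @Payload AS Payload      -- return the dequeued value
    END

    COMMIT TRANSACTION
END
GO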
View 1 Replies
View Related
Sep 28, 2006
Hi Folks,
I was testing my error handling and purposefully failed some messages. Automatic poison message detection kicked in and disabled my queue. I tried the following, one at a time, to enable it again, but it doesn't work:
ALTER QUEUE MigrationQueue WITH STATUS = ON;
ALTER QUEUE MigrationQueue WITH STATUS = ON, ACTIVATION (STATUS = ON);
I would have thought the first line would've worked but I get the following when trying to receive...
The service queue "MigrationQueue" is currently disabled.
Help.
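As a hedged sanity check (SQL Server 2005), the catalog view below shows whether the queue is actually still disabled after the ALTER, and whether it is receive, enqueue, or activation that is off:
SELECT name,
       is_receive_enabled,       -- can messages be RECEIVEd?
       is_enqueue_enabled,       -- can new messages be placed on the queue?
       is_activation_enabled     -- is the activation procedure enabled?
FROM sys.service_queues
WHERE name = 'MigrationQueue';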
View 6 Replies
View Related
Mar 14, 2007
Hello!
I am running a basic SSB queue setup (more or less the Hello World example) and running into the following error message:
Transaction (Process ID 120) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
At first, I thought it was because I had the Initiator and Target Services on the same Queue, but I get this error even when I separate the two Services onto two Queues. This happens when I run more than one Target application receiving messages from the Target Queue.
Does anybody have any idea what could be happening here? Am I not allowed to set up more than one receiver?
Thanks --
Robert
View 3 Replies
View Related
May 22, 2007
I have a queue that, after running fine for several days, will mysteriously turn off. It doesn't seem to be related to a poison message, because I can restart the queue and processing resumes just fine. What are all the scenarios that would cause a queue to turn itself off, so I can 1) take preemptive action to prevent it from happening in the first place and 2) respond appropriately when it occurs?
Also, how do I properly set up and verify that the BROKER_QUEUE_DISABLED event notification is working? This is the SQL I have so far, but is there a more direct way to raise the event other than writing an activated stored procedure that rolls back 5 times?
CREATE QUEUE [EventNotificationsQueue];
GO
CREATE SERVICE [EventNotificationsService]
ON QUEUE [EventNotificationsQueue]
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
CREATE EVENT NOTIFICATION [QueueDisabled]
ON QUEUE [MyQueue]
FOR BROKER_QUEUE_DISABLED
TO SERVICE 'EventNotificationsService', 'MyDatabase';
GO
View 7 Replies
View Related
Oct 11, 2006
HI There
My activated proc is rolling back the transaction and putting the message back on the queue infinitely.
Normally it disables the queue after a few rollbacks; I can see in the SQL log that it just keeps rolling back and re-activating thousands of times.
It only stops when I disable activation on the queue.
Why is the queue not disabling?
Thanx
View 3 Replies
View Related
Jul 28, 2006
Hi There
I have sent messages and they are all sitting in the transmission queue with a blank status; why is Service Broker not trying to send them? There are no errors in the SQL log. BOL says this status is blank when it has not yet tried to send the message. Service Broker is definitely activated in the database.
How do I force SQL Server to send anything in the transmission queue?
I have no idea what is wrong or where to check.
Thanx
View 23 Replies
View Related
Aug 2, 2007
Running transactional replication with a dedicated server for the distributor. While performance in terms of latency is excellent (usually 1 sec, almost never higher than 4), the disk queue length on the distributor is extremely high (usually over 6). Is this typical? On any other server I would be very concerned, but CPU and memory usage are excellent and, as said, latency is good. What is the recommended config for a distributor? Do others see high queue lengths?
View 2 Replies
View Related