I am trying to implement a very fast queue using SQL Server.
The queue table will contain tens of millions of records.
The problem I have is that the more records are completed, the slower it
gets. I don't want to remove data from the queue because I use the
same table to store results. The queue handles concurrent requests.
The status field will contain the following values:
0 = Waiting
1 = Started
2 = Finished
Any help would be greatly appreciated.
Here is a simplified script to demonstrate what has been done.
CREATE TABLE [dbo].[Queue] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[JobID] [int] NOT NULL ,
[Status] [tinyint] NOT NULL
) ON [PRIMARY]
GO
CREATE INDEX [Status] ON [dbo].[Queue]([Status]) ON [PRIMARY]
GO
CREATE PROCEDURE dbo.NextItem
@JobID integer,
@ID integer output
AS
SELECT TOP 1 @ID = [ID]
FROM Queue WITH (READPAST, XLOCK)
WHERE (Status = 0) AND (JobID = @JobID)
RETURN
GO
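For what it's worth, a sketch of a common alternative, assuming SQL Server 2005 or later (the OUTPUT clause and INCLUDE columns need it): claim the row and mark it Started in one atomic UPDATE, and give the optimizer a composite index that lands directly on waiting rows for the job instead of wading through millions of finished ones, which is the usual cause of this kind of slowdown.

-- Composite index: the seek touches only Status = 0 rows for the job.
CREATE INDEX IX_Queue_Status_JobID ON dbo.Queue (Status, JobID) INCLUDE (ID)
GO
CREATE PROCEDURE dbo.NextItem2
    @JobID int,
    @ID    int OUTPUT
AS
BEGIN
    DECLARE @claimed TABLE (ID int);

    -- Claim and mark in one statement: no window for two workers to grab
    -- the same row, and READPAST skips rows other workers hold locked.
    UPDATE TOP (1) q
    SET    Status = 1                         -- 1 = Started
    OUTPUT inserted.ID INTO @claimed
    FROM   dbo.Queue AS q WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE  q.Status = 0 AND q.JobID = @JobID;

    SELECT @ID = ID FROM @claimed;            -- NULL when nothing is waiting
END
GO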
Hi all, I need advice on the following task: copy the contents of a big table from DB_A to DB_B on the same server.

The size of the table: ~7 million rows, ~9 GB, 1 clustered PK index, 13 nonclustered indexes.

Current practice: use DTS to copy the data, which takes over 20 hours, as:
-- first I have to delete the existing data of the table in DB_B
-- then copy
-- and all of this happens while all indexes are in place.

I am trying to work out the best or most efficient way to copy this kind of data, and what the expected time for such a load would be.

My machine: SQL 2000 Enterprise, 8-way P4, 12 GB RAM, on an EMC CLARiiON 600 SAN.
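A rough sketch of the usual recipe for this kind of copy, with placeholder table and column names; the big wins are truncating instead of deleting, loading without the 13 nonclustered indexes in place, and keeping the load as lightly logged as the version allows:

ALTER DATABASE DB_B SET RECOVERY BULK_LOGGED

-- 1. Drop the 13 nonclustered indexes on the DB_B copy first; one rebuild
--    at the end is far cheaper than maintaining them row by row.

-- 2. TRUNCATE instead of DELETE: it deallocates pages and logs almost nothing.
TRUNCATE TABLE DB_B.dbo.BigTable

-- 3. Load in clustered-key order so the clustered index fills sequentially.
--    (On SQL 2000 this INSERT is still fully logged; bcp/BULK INSERT from a
--    file, or DTS with "Use fast load", can be minimally logged instead.)
INSERT INTO DB_B.dbo.BigTable WITH (TABLOCK)
SELECT * FROM DB_A.dbo.BigTable
ORDER BY pk_col

-- 4. Recreate the nonclustered indexes, then restore the recovery model.
ALTER DATABASE DB_B SET RECOVERY FULL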
Hello, this is something I am still not certain about, and I just need to make sure my gut feeling is correct:
A. When a procedure is triggered upon receipt of a message in a queue, what happens when the procedure fails and rolls back? 1. Is the message left on the queue? 2. Is the worker procedure triggered again for the same message by the queue? 3. I am hoping the queue keeps on triggering workers until it is empty.
My scenario is that my queue reader procedure only reads one message at a time, thus I do not loop to receive many messages.
B. For my scenario, messages are independent and ordering does not matter, so I want to ensure my queue reader procedures execute simultaneously. Does reading the top message in one reader somehow block the queue for any other reader procedures? I.e., if I have BEGIN TRANSACTION when reading messages off the queue, is that effectively going to prevent many reader procedures from working simultaneously? Again, I want to ensure that Service Broker is effectively spawning procedures that work simultaneously.
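For what it's worth: a rollback leaves the message on the queue, and Service Broker fires the activation procedure again; note, though, that five consecutive rollbacks disable the queue (poison-message protection). On B: RECEIVE locks only the conversation group of the message it picks up, so readers working on different conversations do run in parallel even inside transactions. A minimal activation-reader sketch with placeholder queue and procedure names:

CREATE PROCEDURE dbo.ReadOneMessage
AS
BEGIN
    DECLARE @handle uniqueidentifier,
            @type   sysname,
            @body   varbinary(max);

    BEGIN TRANSACTION;

    -- Locks one conversation group; other readers keep draining the rest.
    WAITFOR (
        RECEIVE TOP (1)
            @handle = conversation_handle,
            @type   = message_type_name,
            @body   = message_body
        FROM dbo.TargetQueue
    ), TIMEOUT 1000;

    IF @handle IS NOT NULL
    BEGIN
        -- ... process @body; a rollback here puts the message back ...
        IF @type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
            END CONVERSATION @handle;
    END

    COMMIT TRANSACTION;
END
GO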
Hi! What is the fastest way to take the resultset from a temporary table (# table) and write it to a remote destination (a mapped network drive), initiated from a stored proc?
A similar question if I take the resultset, convert it to XML (using the FOR XML clause), and want it written to disk.
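One commonly used route is bcp called through xp_cmdshell; a sketch, assuming xp_cmdshell is enabled and the SQL Server service account can reach the share. Note that a session-local #table is invisible to bcp's separate connection, so the data has to be staged in a global ##table (or a permanent table) first:

-- Stage the rows where another connection can see them
-- (##Export and #MyTempTable are placeholder names):
SELECT * INTO ##Export FROM #MyTempTable;

-- -c = character format, -T = trusted connection; the path is a placeholder.
EXEC master..xp_cmdshell
    'bcp "SELECT * FROM tempdb..##Export" queryout "\\server\share\out.txt" -c -T';

DROP TABLE ##Export;

The FOR XML case works the same way: put the FOR XML query inside the queryout string, or select the generated XML into the staging table first.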
I know we are not allowed to publish SQL Server benchmarks, but it would be nice to have material which demonstrates the performance gains of using a queue compared to insert/delete on a SQL table.
Logically it seems faster to use a queue, due to the conversation-group locking and the Service Broker itself. But there seems to be some overhead involved in just managing these queues that Service Broker has to perform.
I am sure we are not unique in trying to figure out whether we will get a performance boost using a Service Broker queue between services rather than a table to queue data. What is available to help understand the performance gains of using a queue?
I'm trying to flesh out a good queue table design with our dev team. So here is a general overview of the scenario: first, an application will hit a WebAPI, grab any updates to Content, and store those IDs in SQL (a queue table). Next is the fun part: different multi-threaded apps will process IDs from the queue. One app will make updates to the data in a different SQL DB, while the other will update an index (likely Elastic).
Obviously, we don't want multiple threads working on the same items. One strategy could be to use the UPDLOCK and READPAST query hints; however, I'm not sure about the reliability or performance of this solution. I just started looking into setting up Service Broker, but that would be completely unfamiliar territory for me. Also, I can see how a broker might work well within the instance, but how would that work with the application making updates to Elastic?
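The hint-based approach is a well-worn pattern; here is a sketch of one variant for multi-threaded consumers, assuming SQL Server 2008+ and hypothetical ClaimedBy/ClaimedAt columns. Each thread stamps a batch of rows with its own token in a single atomic UPDATE, so two threads can never claim the same IDs, and the claim works the same whether the consumer then writes to SQL or to Elastic:

DECLARE @WorkerToken uniqueidentifier = NEWID();   -- one token per thread
DECLARE @batch TABLE (ContentID int);

-- Atomically claim up to 100 unprocessed rows; READPAST skips rows another
-- thread has already locked, so workers never block each other for long.
UPDATE TOP (100) q
SET    Status    = 1,
       ClaimedBy = @WorkerToken,      -- hypothetical audit columns
       ClaimedAt = GETUTCDATE()
OUTPUT inserted.ContentID INTO @batch
FROM   dbo.ContentQueue AS q WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE  q.Status = 0;

SELECT ContentID FROM @batch;         -- process these, then set Status = 2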
We are writing a web-based multi-user call centre application.
We are getting concurrency problems, as you would expect with a multi-user application.
The application is made for callers, who each bring up a different contact to call based on some predefined priority. Because the algorithm that prioritises the contacts takes a good 2 seconds to run, if 2 different callers request the next prioritised contact, they will retrieve the same contact.
The only way we can think of to resolve this problem is by building a queue. The queue would be implemented as a table. The particular implementation would be: whenever someone retrieves an entry from the queue, a background process goes off and generates a new queued item, i.e. in a FIFO manner. So that's how we think we should implement the queue.
Now comes the question of how to implement it. My idea is to have row-level locking and a trigger to remove items from the queue, so that once one caller has looked at an item in the queue, another user can't look at the same item.
Any suggestions as to how I might be able to avoid the concurrency problems?
What do you all think of my idea of implementing the FIFO queue? Is it possible to do row-level locking in such a way that other users won't even be able to read the locked entry?
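Row locks alone make other readers wait rather than skip; the READPAST hint is what makes them ignore locked rows. Here is a sketch of a destructive FIFO pop, assuming SQL Server 2005+ and a hypothetical ContactQueue table with an identity column QueueID; DELETE with OUTPUT hands each row to exactly one caller, so no trigger is needed:

-- Oldest unclaimed row wins; rows locked by other callers are skipped.
WITH next_contact AS (
    SELECT TOP (1) *
    FROM dbo.ContactQueue WITH (ROWLOCK, READPAST, UPDLOCK)
    ORDER BY QueueID              -- FIFO: lowest identity value first
)
DELETE FROM next_contact
OUTPUT deleted.ContactID;         -- the claimed contact, seen only by this caller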
Trying to create a report... The report should show all documents on hold, then, depending on the "on-hold type", look in the corresponding table and SELECT a few fields. Here is what I have. Where do I SET the @profile variable to return the profile from my queue table?
DECLARE @profile varchar(256)

SELECT q.[profile], q.on_hold, q.on_hold_message, q.dbc_state
FROM Queue AS q
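One note: a single SELECT can either assign to variables or return a resultset, not both. A minimal sketch of the assignment form, assuming a hypothetical key column and value to pick the row:

DECLARE @queue_id int;
SET @queue_id = 1;                 -- hypothetical key identifying the row

DECLARE @profile varchar(256);

SELECT @profile = q.[profile]      -- assignment needs its own SELECT,
FROM   Queue AS q                  -- separate from the row-returning one
WHERE  q.queue_id = @queue_id;     -- hypothetical key column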
Hey all you database gurus, hopefully someone can lend some insight into a little table design problem I have.
Basically, I've got a system in place to authorize users to access a website: typical username/password stuff. The table contains a list of users and their passwords, plus the auth level and a few other tidbits that aren't important enough to go into detail about here. What I want to do is add a messaging system to this. I could probably figure out a half-decent way to do this by setting up a separate table for each user, adding a row to that table for every message, and then having my ASP.NET code delete everything but the last 10 entries every time a user logs on. However, I would much prefer an approach where I don't have to set up a whole new table for each user just for messaging purposes; maybe store something like a list in one of the database cells, kind of like .NET's Generic.List or, better yet, Generic.Queue. I would also like a way, if it's possible without too much work, to have the table automatically delete the oldest message every time a new message is received, if there are already 10 messages for the user.
Anyway, hopefully someone has experience setting up a system like this. I don't really require any code samples; I can code it all myself (other than the database code to automatically remove entries; I'm not a database guy) if someone could just explain a way to accomplish what I'm trying to do. Or if someone has a different, more convenient way of doing this, I'd be open to suggestions.
Thanks in advance for any help offered, I do appreciate it
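For what it's worth, the usual shape is one shared messages table keyed by user, rather than a table per user or a list packed into a single cell. A sketch with hypothetical names, assuming SQL Server 2005+ for ROW_NUMBER; the trim statement can run from the ASP.NET code after each insert, or inside a trigger:

CREATE TABLE dbo.UserMessages (
    MessageID int IDENTITY(1,1) PRIMARY KEY,
    UserID    int            NOT NULL,      -- FK to the existing users table
    Body      nvarchar(1000) NOT NULL,
    SentAt    datetime       NOT NULL DEFAULT GETDATE()
);

-- Keep only the 10 newest messages per user:
WITH ranked AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY UserID
                              ORDER BY MessageID DESC) AS rn
    FROM dbo.UserMessages
)
DELETE FROM ranked
WHERE rn > 10;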
In a situation where messages are coming in faster than they can be processed, at what point will Service Broker start up another queue reader? Also, how do you prevent table locking if part of the processing of a message involves inserting or updating data in a table? We are experiencing this problem because of the high number of messages coming through, and I'm not sure what the best solution is. Does Service Broker have some built-in support for preventing contention on a table when multiple readers are running? Or is there a pattern that can be used to get around it?
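Broadly, Service Broker launches another activated reader when it sees the queue growing faster than the current readers can drain it, up to the configured cap; the knob is MAX_QUEUE_READERS. A sketch with placeholder names (table contention itself is not something the broker manages; short transactions and readers that touch disjoint rows, e.g. partitioned by conversation group, are the usual mitigation):

ALTER QUEUE dbo.TargetQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME    = dbo.ProcessMessage,  -- placeholder activation proc
    MAX_QUEUE_READERS = 5,                   -- upper bound on parallel readers
    EXECUTE AS OWNER
);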
My system has 2 DBs, SQL Server 2000 and DB2, at separate locations. I have a SELECT query which needs to pick up consolidated data from both tables. Also, the schema on DB2 has minor changes compared with the schema on SQL Server 2000.
While searching on Microsoft's site I came across the technique of creating a linked server. Would this be possible to implement in my scenario? Also, in this case, would it be advisable to create another view on the DB2 server which converts the DB2 schema to the SQL Server schema format?
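A linked server should work for this, and yes, a compatibility view on the DB2 side keeps the SQL Server query simple. A rough sketch with placeholder names, assuming an OLE DB provider for DB2 is installed ('DB2OLEDB' here stands in for whatever driver is actually available; connection details vary by provider):

EXEC sp_addlinkedserver
    @server     = 'DB2LINK',        -- local name for the linked server
    @srvproduct = 'DB2',
    @provider   = 'DB2OLEDB',       -- provider name depends on the driver
    @datasrc    = 'db2host';

-- OPENQUERY ships the query to DB2; the compat_view on the DB2 side
-- reshapes its schema to match SQL Server's, so the join stays simple:
SELECT s.*, d.*
FROM   dbo.LocalTable AS s
JOIN   OPENQUERY(DB2LINK, 'SELECT key_col, col1 FROM schema.compat_view') AS d
       ON d.key_col = s.key_col;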
One or more files listed in the statement could not be found or could not be initialized. (Microsoft SQL Server, Error: 5009)
I accidentally created a log file on my E: drive, but every time I try to delete the log file it keeps returning the same error. Can someone please help me delete the log file?
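A file that still belongs to the database can't be deleted from the file system; it has to be removed from the database first. A sketch with placeholder database and logical file names; REMOVE FILE fails while the file holds active log, so a log backup and a shrink usually come first:

USE MyDatabase;

BACKUP LOG MyDatabase TO DISK = 'E:\bak\MyDatabase_log.bak';  -- if in full recovery
DBCC SHRINKFILE ('MyDatabase_Log2');                          -- logical file name
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Log2;
-- Once removed here, SQL Server deletes the physical file on E: itself.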
Hi, I have over a million records in my DB. What is the best way to get results fast if I need to get the details of an employee named, say, "robert"? If I do it the normal way it will take long; should I use an index, or is there another good way? Thanks in advance. Cheers
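An index on the name column is indeed the standard answer; a minimal sketch with placeholder table and column names. With the index in place, an equality (or 'robert%' prefix) search becomes a seek instead of a scan of the million rows; only a leading-wildcard pattern like '%robert' defeats it:

CREATE NONCLUSTERED INDEX IX_Employee_Name
    ON dbo.Employee (emp_name);

SELECT *
FROM   dbo.Employee
WHERE  emp_name = 'robert';   -- index seek rather than a full table scan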
Hello everybody. I have a 40 GB DB running mostly transaction processing. I have set up: 1. full backup 2 times a day (takes 30-40 min) 2. log backup every 15 min 3. custom log shipping 4. we don't want to use clustering.
Once in a while, because of the network or some other problem, log shipping fails, so I have to restart log shipping all over, starting from restoring the last full backup of my DB in standby mode. It takes 2-3 hrs just to do this restore!!!
1. So I am asking for advice: is there any way I can bring down the time for the restore? 2. Should differential backups be taken? 3. We will not use clustering.
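Differentials are the usual answer here: restoring the last full backup plus the most recent differential skips replaying every log backup in between, so the restart window shrinks. A sketch with placeholder names and paths:

-- Taken periodically between the twice-daily fulls:
BACKUP DATABASE MyDB TO DISK = 'E:\bak\MyDB_diff.bak' WITH DIFFERENTIAL;

-- Restarting log shipping on the standby:
RESTORE DATABASE MyDB FROM DISK = 'E:\bak\MyDB_full.bak'
    WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = 'E:\bak\MyDB_diff.bak'
    WITH STANDBY = 'E:\bak\MyDB_undo.dat';
-- Log shipping then resumes with log backups taken after the differential.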
Hello all! For MS SQL 2000: I have a table with > 100,000 rows and I must clean it:

DELETE FROM myTable WHERE Name LIKE 'aser%' AND info IS NULL
DELETE FROM myTable WHERE Name LIKE 'tuyi%' AND info = 'ok'
DELETE FROM myTable WHERE Name LIKE 'hop%' AND info LIKE 'retro%'
... about 20 DELETE commands.

What is the best way to do it? Thank you
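One sketch for SQL 2000: fold the ~20 conditions into a single DELETE (one scan instead of twenty) and, if log growth or blocking is a concern, cap each pass with SET ROWCOUNT so the work happens in batches:

SET ROWCOUNT 5000                  -- SQL 2000 way to limit rows per pass
WHILE 1 = 1
BEGIN
    DELETE FROM myTable
    WHERE (Name LIKE 'aser%' AND info IS NULL)
       OR (Name LIKE 'tuyi%' AND info = 'ok')
       OR (Name LIKE 'hop%'  AND info LIKE 'retro%')
       -- ... the remaining conditions OR'ed into the same statement
    IF @@ROWCOUNT = 0 BREAK        -- nothing left to delete
END
SET ROWCOUNT 0                     -- reset to unlimited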
Hi everyone, I'm in deep need of help with a very easy query, and there are a few questions I want to ask. I use MSN: boy22202@hotmail.com. Please, I want to contact anyone who uses SQL Server 2005 who can help me with it... thank you
I need to insert data into a temp table in SQL. I have:
CREATE TABLE TMP_X ( doc_name varchar(200) )
--select * from TMP_X
INSERT into TMP_X values ( '...,
but it's saying there isn't a match. I know why: it's trying to insert all the data as one row, but I need them as separate rows, as I want only 1 column. Is there another INSERT-type function?
If I have a table with one column and I want to insert a few hundred rows of names, I can't use the INSERT statement as that does one row at a time; how can I achieve this?
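Two common ways to load many single-column rows in one statement; the values here are placeholders. On SQL Server 2000/2005, INSERT ... SELECT with UNION ALL does it; SQL Server 2008 and later also accept a row constructor:

-- Works on SQL Server 2000 and later:
INSERT INTO TMP_X (doc_name)
SELECT 'doc1.txt'
UNION ALL SELECT 'doc2.txt'
UNION ALL SELECT 'doc3.txt';

-- SQL Server 2008 and later:
INSERT INTO TMP_X (doc_name)
VALUES ('doc1.txt'), ('doc2.txt'), ('doc3.txt');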
I have stupidly enough deleted my default DB. That causes Enterprise Manager to be unable to work with my DBs. The default DB I deleted had no function other than being the default DB; I mean it was outdated, and I had other DBs that contained all my important work. They are still running, and I can view a DB-driven site hosted at localhost, even though the default DB no longer exists. I am even able to upload new content or add new users, so all my other DBs are fine. I can even see the SQL Server icon in the bottom right corner of my desktop, and it shows the server running.
Now I need to add tables and rework some of my existing tables and stored procedures, but I am not able to do that with Enterprise Manager, due to the lack of a default database.
How do I correct this problem? I have gotten one tip to do the following: EXEC sp_defaultdb 'User', 'DB', but I am not sure what to do with this... I tried to run it from the command line, with my username and the DB I would set as default, but nothing happened.
So I need more details; step-by-step guidance will work, as I don't know a whole lot about Enterprise Manager and SQL.
Btw, this is my error in Enterprise Manager:
A connection could not be established to MyComputerVSDOTNET2003
Reason: Cannot open default database. Login failed.
Please verify SQL Server is running and check your SQL Server registration properties and try again
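sp_defaultdb is T-SQL, so it can't run from a plain command prompt; it needs a query tool. A sketch, with the login name as a placeholder; osql ships with SQL Server and can execute it directly, after which Enterprise Manager should connect again:

-- From a command prompt (one line; -E = Windows authentication):
--   osql -E -S MyComputerVSDOTNET2003 -d master -Q "EXEC sp_defaultdb 'MyLogin', 'master'"

-- Or, inside any query tool connected to the server:
EXEC sp_defaultdb 'MyLogin', 'master'   -- point the login at a DB that exists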
Hello everybody, please advise: what is the fastest standard method of user-interface access to a SQL database? I am looking for fast display of one master record plus related dependent records, plus fast scrolling through master records with display of the dependent records as fast as possible. Perhaps a standard problem with a standard solution? At the current state of matters, I am still much slower than with my old Access 97 database.
I noticed this morning that my tempdb grows very fast. I have 26 GB on my hard drive, and all the space was occupied by tempdb; finally the query failed due to 0 space on the hard drive, leaving no room for tempdb to grow. The SELECT query was supposed to bring back about 40,000 rows. I ran the same query on a different server where tempdb is not growing by even 1 MB. I checked the tempdb options; 'Trunc. log on chkpt.' is true.
Why is this problem happening? I have just dbo permission on all the databases. Do you have any advice regarding this? Thanks, Ravi
Hello everybody. We need to move table T1 from database A to database B on the same server.
Size of table T1: 15 GB and 40,000,000 rows.
Database B was just created and will act as a warehouse.
Could it be done simply by: 1. creating table T1 in db B, then 2. setting the db to simple recovery, 3. INSERT INTO B.dbo.T1 SELECT * FROM A.dbo.T1, and 4. creating all the indexes on table T1 in db B?
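Broadly yes, with one tweak worth considering: on SQL 2000, INSERT ... SELECT is fully logged even under simple recovery, whereas SELECT ... INTO is minimally logged, so steps 1-3 can collapse into one cheap statement. A sketch:

ALTER DATABASE B SET RECOVERY SIMPLE

SELECT *
INTO   B.dbo.T1              -- creates and loads the table in one minimally
FROM   A.dbo.T1              -- logged pass (simple or bulk-logged recovery)

-- Then build the clustered index first and the nonclustered ones after,
-- so the nonclustered indexes are not rebuilt when the clustered one arrives.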
I have a database of about 5 GB in size. Some queries were taking more than 1 minute to complete (all of them are stored procedures). Because of that lack of performance, I ran DBCC DBREINDEX for each table, executed the sp_updatestats system stored procedure, and finally executed sp_recompile for each SP in my database.
After all these tasks, queries completed in a matter of a few seconds instead of minutes. Strangely enough, some hours later (about 6 hrs), after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.
I am assuming that the indexes are degrading so fast that they require another reindex, but I am not sure.
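One way to test that assumption rather than guess (SQL 2000 era): DBCC SHOWCONTIG reports fragmentation, so running it right after the reindex and again when queries slow down shows whether the indexes really decay within hours. The table name is a placeholder:

DBCC SHOWCONTIG ('dbo.MyBusyTable') WITH ALL_INDEXES
-- Watch 'Logical Scan Fragmentation'. If it climbs within hours, rebuilding
-- with a lower fill factor, e.g. DBCC DBREINDEX ('dbo.MyBusyTable', '', 80),
-- slows the decay; if it stays low, stale statistics or plan recompilation
-- is the likelier culprit.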
I insert a record into a table and "later" I update it. I have two fields to capture time information: Created and LastModified. My update is very simple: UPDATE ... SET ..., [LastModifiedDate] = GETDATE() WHERE id = @pId.
Now my problem is that I am seeing the Created and LastModified times as identical (in the format 2007-09-05 12:38:42.383)!?
The record has definitely been updated (other fields are populated).
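A guess worth checking: if an insert trigger or the UPDATE statement itself also writes the Created column, or the "later" update actually fires in the same instant or transaction as the insert, the two values stay identical. A sketch of the arrangement that keeps them distinct, with placeholder names:

CREATE TABLE dbo.Example (
    id               int IDENTITY PRIMARY KEY,
    CreatedDate      datetime NOT NULL DEFAULT GETDATE(),  -- set once at insert
    LastModifiedDate datetime NOT NULL DEFAULT GETDATE()   -- refreshed on update
);

UPDATE dbo.Example
SET    LastModifiedDate = GETDATE()   -- deliberately leaves CreatedDate alone
WHERE  id = 1;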
There is a big table with several million records. I am developing a query that retrieves the first rowset that meets the WHERE condition. Any suggestions for making the query fast? Thanks a lot.
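A minimal sketch with placeholder names: TOP lets SQL Server stop as soon as enough rows qualify, and an index on the filtered column turns the search into a seek rather than a scan of the millions of rows:

CREATE INDEX IX_BigTable_FilterCol ON dbo.BigTable (filter_col);

DECLARE @value int;
SET @value = 42;                  -- hypothetical filter value

SELECT TOP 100 *
FROM   dbo.BigTable
WHERE  filter_col = @value
ORDER BY filter_col;              -- matches the index, so ordering is cheap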
I have made some stored procedures to check whether a user is involved with a certain record. Basically, every stored procedure contains the following logic.
example spCheckClientRelated: select @res = count(*) from client_role where client_id = @cid and employee_id = @eid
if (@res = 0) begin
    ... next select
end
if (@res = 0) begin
    ... next select
end
....
return @res
end
So far so good. But the final check in spCheckClientRelated tests whether a user is related to one of the sales projects for that client.
I already have spCheckSalesProjectRelated, which returns 1 or 0, similar to the example above.
So I want to find an efficient method that selects all the sales_project_id's from the sales_project table where client_id = @cid (at the moment I use, of course, select @sid = sales_project_id from sales_project where client_id = @cid).
And then I have to execute spCheckSalesProjectRelated for each @sid and @eid. This, of course, is where my problem lies: I don't know how to do a fast check for every selected @sid until spCheckSalesProjectRelated returns 1.
As you can probably tell from my question, SQL is not really my domain, and I'm certainly not an expert, but I don't mind reading or looking things up, so even a clue or a direction to look in would be most appreciated.
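The usual direction is to replace the per-project procedure calls with one set-based EXISTS, which stops at the first match. A sketch that assumes spCheckSalesProjectRelated boils down to a lookup in a hypothetical sales_project_role table (adjust to whatever it really checks):

DECLARE @cid int, @eid int, @res int;
SELECT @cid = 1, @eid = 1, @res = 0;   -- stand-ins for the proc's parameters

IF EXISTS (SELECT 1
           FROM sales_project AS sp
           JOIN sales_project_role AS spr   -- hypothetical table behind
                ON spr.sales_project_id     --   spCheckSalesProjectRelated
                 = sp.sales_project_id
           WHERE sp.client_id    = @cid
             AND spr.employee_id = @eid)
    SET @res = 1;                      -- related via at least one sales project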
I have a very puzzling situation with a database. It's an Access 2000 mdb with a SQL 7 back end, with forms bound using ODBC linked tables. At our remote location (accessed via a T1 line) the time it took to go to a record was very slow. The go-to mechanism was a box where the user typed the index value into a combo box, with very simple code attached:

with me.RecordsetClone
    .FindFirst "[Index] = " & me.cboGoTo
    If Not .NoMatch Then
        Me.Bookmark = .Bookmark
    End If
end with

Now, one would say that going to a record is slow because I'm using .FindFirst over a T1 line. And that's what I thought. However, as I was working with the form, commenting out various sections not related to the Go To, I found that the Go To functionality changed, though I didn't modify the code.

Previously, going to a record near the end of the 50,000-record recordset took about 1-2 seconds, but going to a record near the beginning took about 20 seconds. After the form changed, going to any record in the recordset took about 1-2 seconds.

So the question remains: why did it take so long to go to a record near the beginning of the recordset, but not near the end (and the ones in the middle took an amount of time about halfway between the two), and what changed so that now the form is working fine for all records?

I've compared the changed form with the previous copy, and I don't see any differences. I've compared all code in the form module, and I've compared all form properties. The forms are identical as far as I can tell. But something happened as I was commenting/uncommenting code in the form that got rid of the problem with it taking a long time to go to some of the records.

My first thought was that something got recompiled, and now the form is fast. So I went back to the original version, changed some code and recompiled, and also did a compact and repair. But it was still slow. I also tried doing an explicit decompile and then recompiled it. But it was still slow.

So this is very frustrating: the form is now working fine, but I can't see anything that's changed. If I don't see why the form is now fast, then there's no reason to believe it might not at some point go back to being slow again. And then I'd just have to hope that something changes. It would be good to figure this out.

Any ideas as to what might have changed here to cause the form's Go To to be fast would be appreciated.

Thanks,
Neil
I have a table containing URLs. I want to be able to look up a URL very fast, so I used an nvarchar to store the URL and put an index on it (maybe naive).

Anyway, I bump into:

"The index entry of length 911 bytes for the index 'UQ__URL__1367E606' exceeds the maximum length of 900 bytes."

What's the best way to handle this? I want to do the lookup fast. The only thing I could think of was adding an extra column containing a digest for the URL, and looking up all URLs with the same digest *and* the same value (which would give either 1 or 0 results).

I am new to MS SQL, so I might be describing a silly solution; basically I want to map URLs to IDs the fastest way possible.

--
John
MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
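The digest idea is sound, and SQL Server can maintain the digest itself through a computed column. A sketch with placeholder names: CHECKSUM can collide, so the final equality test on the full URL stays in the query, while the small indexed hash narrows the search to a handful of rows:

CREATE TABLE dbo.Urls (
    ID      int IDENTITY PRIMARY KEY,
    Url     nvarchar(2000) NOT NULL,
    UrlHash AS CHECKSUM(Url)                 -- computed column, indexable
);
CREATE INDEX IX_Urls_UrlHash ON dbo.Urls (UrlHash);

SELECT ID
FROM   dbo.Urls
WHERE  UrlHash = CHECKSUM(N'http://example.com/page')  -- cheap index seek
  AND  Url     = N'http://example.com/page';           -- resolves collisions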
I seem to remember reading many moons ago about a function where you can retrieve a count of the last recordset you opened.

For example: I've got a stored procedure that returns a recordset using TOP 10, so I only get the top 10 records. I need to know the record count, but I don't want to reuse the SELECT statement because it's quite complex.

Any ideas? What does @@Count do?

Thanks in advance
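There is no @@Count; the function being half-remembered is almost certainly @@ROWCOUNT, which returns the number of rows affected by the immediately preceding statement. A sketch; it has to be read right away, before anything else resets it:

SELECT TOP 10 *
FROM dbo.MyComplexView;             -- stands in for the complex SELECT

SELECT @@ROWCOUNT AS RecordCount;   -- rows returned by the SELECT above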