Long Transaction With Locks Is Blocking All Other Activity To Table
Oct 1, 2007
We have a web-based third-party application that has both background processes and user activity requests running in the same database (SQL Server 2005 SP2). The problem is that a background process will start a long-running transaction and hold an exclusive lock on a few rows in a given table (a small table, <100 rows). The web clients need to scan this same table, but when their "select *" statements get to those locked row(s), the web client queries stall waiting for that exclusive lock to be released. This effectively brings the entire web front end to a halt because all clients must hit this table for each user action. I realize that this is the classic lock condition that multiversioning databases like Oracle, PostgreSQL, SQL Server Compact Edition, and other databases do not suffer because they don't use shared read locks like SQL Server. But since we're on SQL Server for this app, what is the way to get around this problem? Modifying the clients to use WITH (NOLOCK) is not an option... there will be major consistency issues unless the clients run in Read Committed or higher. Any ideas? We could tweak this app if needed. Does SQL Server 2008 introduce multiversioning or at least some mechanism to get around this problem? I did not see it mentioned on the Microsoft site, but maybe I missed it. Thanks in advance.
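For what it's worth, SQL Server 2005 already ships the multiversioning answer: row versioning. With the READ_COMMITTED_SNAPSHOT database option enabled, readers see the last committed version of a row instead of waiting on a writer's exclusive lock, with no client changes needed. A minimal sketch, assuming a hypothetical database name (the switch needs momentarily exclusive access to the database):

ALTER DATABASE MyAppDb                       -- hypothetical database name
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;                     -- kicks out in-flight sessions; run in a maintenance window

-- Verify the switch took effect:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyAppDb';

The trade-off is extra version-store activity in tempdb, so it is worth testing under load first.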
I seem to be misunderstanding the way transactions work with Service Broker queues. We have developed and deployed a Service Broker application that has 5 queues and a Windows service for each queue on multiple servers (3 currently). Due to a last-minute issue, we had to go live without using transactions when the services executed a RECEIVE, and I am now updating the code base to use transactions and am running into blocking issues. One of the services runs for 90 seconds (spooling to the printer), and all of the servers block on the receive operation for this queue. I thought that if I was receiving messages from a single conversation, other receives against this queue would not block.
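For reference, a transactional RECEIVE locks only the conversation group it dequeues from, not the whole queue, so concurrent receives should block only when they compete for the same conversation group. A minimal sketch of the transactional pattern, with a hypothetical queue name:

BEGIN TRANSACTION;

-- Dequeue one message; the lock taken here covers that message's
-- conversation group until COMMIT, not the entire queue.
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        message_body
    FROM dbo.PrintQueue          -- hypothetical queue name
), TIMEOUT 5000;

-- ... do the slow work (e.g. spooling to the printer) here ...

COMMIT TRANSACTION;

If all the print messages arrive on one long-lived conversation, they share one conversation group, which would explain every server serializing behind the 90-second receive.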
We are migrating our database(s) from Oracle to SQL Server. In Oracle we were able to issue a SELECT statement and see all of the locks (blocking and non-blocking) currently in the system. The query also included the process ID of the process we needed to kill in order to get rid of the lock.
We now need to create the same type of query for Microsoft SQL Server 2012. I have seen postings on different sites saying that this info can be obtained using SP_WHO2 or using the SQL Server Management Studio Activity Monitor's PROCESSES tab, but we are looking for a SELECT statement that will give us similar information.
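A starting point built only on standard SQL Server 2012 DMVs (not a full equivalent of the Oracle report, but it returns blocker, blocked session, and the statement involved in one SELECT):

-- Blocked requests and the session id you would pass to KILL:
SELECT
    r.session_id            AS blocked_session,
    r.blocking_session_id   AS blocking_session,   -- candidate for KILL
    r.wait_type,
    r.wait_time             AS wait_ms,
    DB_NAME(r.database_id)  AS database_name,
    t.text                  AS blocked_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;

-- All locks currently granted or waiting (blocking and non-blocking):
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks;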
Hi. There is an application that runs on SQL Server. The application selects/updates a few tables frequently. Once there is even a select on this table, it blocks other users, sometimes for very long. Is there anything that can be done to reduce this? The table has 18000 rows and does not seem to have an index. I thought indexing might help, but 18000 rows without an index is no reason for 30 minutes of lock time. I will appreciate your help as usual. Vince
I have a sql snippet from a 3rd party application that will not complete its transaction. The SELECT statement executes but does not finish. Instead the statement just sits in AWAITING COMMAND for 1000 seconds then dies, thus killing the UPDATE statement that is supposed to follow.
Let's assume the client code calls a stored procedure which reads: SELECT * FROM myTable WHERE ID < 100. Let's assume that for whatever reason, the query becomes very slow and takes 60 seconds to complete. These 60 secs could be the time for the DB engine to fetch all the rows (happening before returning the resultset to client code), or during the transfer of the resultset to client code (slow network throughput). Question1: could this prevent another user from doing a SELECT on myTable? (could be on different IDs or even overlapping the IDs between 1 and 100). Question2: could this prevent another user from performing a write (UPDATE/DELETE) on the rows with ID between 1 and 100? Question3: can another user perform a write (INS, DEL, UPD) on rows outside of the ID range between 1 and 100?
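A sketch one can run in two Management Studio windows to see the answers directly (table, column, and values are hypothetical, and an index on ID is assumed so only that range gets locked; REPEATABLE READ is used in session 1 purely to hold the shared locks open, simulating a SELECT that is still slowly streaming results):

-- Session 1: read the range and keep the shared locks until COMMIT.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM dbo.myTable WHERE ID < 100;

-- Session 2, while session 1's transaction is still open:
SELECT * FROM dbo.myTable WHERE ID < 150;          -- Q1: returns; shared locks coexist
UPDATE dbo.myTable SET Val = Val WHERE ID = 50;    -- Q2: blocks until session 1 commits
UPDATE dbo.myTable SET Val = Val WHERE ID = 500;   -- Q3: succeeds; row is outside the locked range

-- Session 1: release everything.
COMMIT;

Under the default READ COMMITTED level the shared locks are released as rows are read, so in practice Q2 blocks only on the rows the reader currently holds.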
I've got an INSERT that's selecting data from a linked server and attempting to push 10 million rows into the blank table. More or less, it looks like this:
The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. There are no other active users. I ran it again and monitored the following DMO to watch the growth of locks for that spid:
SELECT request_session_id, COUNT (*) num_locks
-- select *
FROM sys.dm_tran_locks
--where request_session_id = 77
GROUP BY request_session_id
ORDER BY count (*) DESC
The number of locks started small and held for a while around 4-7 locks, but at about 5 minutes in, the number of locks held by that spid grew dramatically to more than 8 million before finally erroring again with the same message. Researching, I can't figure out why it's not escalating from row locks to table locks at the appropriate threshold. The setting was 0 at first (Server Properties > Advanced > Parallelism > Locks). I set it to 5000, and it still didn't seem to work. Rewriting the INSERT to include a WITH (TABLOCK) hint allows it to finish successfully in testing. My problem is that it's coming out of an ETL with source code that I can't edit. I need to figure out how to force it to escalate to locking the entire table via table-level or server-level settings.
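A few settings worth checking, sketched with a hypothetical table name. Note that the Locks box in Server Properties caps how many locks the server may allocate (0 = dynamic); it is not the escalation threshold, which is roughly 5,000 locks held by one statement on one object:

-- 'locks' is an advanced option; restore dynamic lock memory so a hard cap
-- doesn't starve the statement before escalation can happen:
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'locks', 0;
RECONFIGURE;

-- Escalation can be disabled instance-wide by trace flags; check for them:
DBCC TRACESTATUS(1211, 1224);

-- On SQL Server 2008+, escalation can also be tuned per table
-- (TABLE is the default; AUTO or DISABLE would change the behavior):
ALTER TABLE dbo.TargetTable SET (LOCK_ESCALATION = TABLE);   -- hypothetical table name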
A colleague suggested that installing service packs may take care of it (the client is running SQL Server 2008 R2 (RTM)), but I haven't found anything online to support that theory.
Yesterday, we had a sudden load on our SQL Server 2000 which resulted in several locks. There was not much time to investigate as we had to rush. A team member reviewed the processes in EM (Management, Current Activity), looking for blocking processes, and killed them.
She told me that as soon as the blocking SPID was killed, another one arose, and she had to repeat the operation a dozen times. When done, the server activity was back to normal. She noticed that more than half of the blocking processes showed that they were executing the stored proc P_SearchProducts.
We don't own the server and the information on what had happened at that time (batches or resource intensive operations, etc.) is not available for now.
The team suggests that we set the transaction isolation level to READ UNCOMMITTED for this SP. I would like to understand locks better before I go ahead.
P_SearchProducts returns 5 recordsets, each of which can contain from 1 to 200 rows. To achieve the results, it creates about 10 intermediate tables (SELECT ... INTO #TableX); these temp tables are then used progressively to arrive at the final results. Roughly, the volume of these temp tables could be double that of the final results. The developer who wrote this SP is not a guru in SQL, so there is room for improvement. But here are my questions:
Q1. Could the series of SELECT ... INTO #TableX statements in P_SearchProducts block another connection from executing the same SP? If yes, under which conditions?
Q2. Let's assume that P_SearchProducts has a slow execution time. Could it prevent another connection from updating the Product table, and thus lead to a deadlock situation? Something like: another transaction (by User2) has obtained locks on most of the product tables, except the Product table, which is being slowly read by User1 executing P_SearchProducts; but User1 cannot read the other product tables because they are locked by User2.
Q3. If the contention issue was provoked by the slow execution time of many requests to exec P_SearchProducts (let's assume 50 users on the web suddenly hit the search product feature at the same time), could READ UNCOMMITTED magically resolve the contention issue, provided we accept the consequences of dirty reads?
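For scale, here is what Q3's suggestion looks like scoped to the procedure alone: a sketch with a simplified, hypothetical body (a SET issued inside a stored procedure reverts when the procedure returns, so callers keep their own isolation level):

CREATE PROCEDURE P_SearchProducts_Sketch
    @SearchTerm varchar(100)
AS
BEGIN
    -- Dirty reads accepted for search results only; this proc's reads no
    -- longer wait on writers' locks (nor hold shared locks that block them).
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT ProductId, ProductName              -- hypothetical columns
    INTO #Table1
    FROM dbo.Product                           -- table named in the post
    WHERE ProductName LIKE @SearchTerm + '%';

    SELECT * FROM #Table1;
END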
Sorry for the long post and thank you in advance for any help.
I have 3 simple packages that get called from a master package. The first package populates the time dimension (stored proc), the second one populates the transaction type dimension (stored proc), and the final one populates the date dimension.
I set TransactionOption = Required in the master package; everything else (package & component) is set to TransactionOption = Supported.
I have been testing transactions: basically, I made the time and transaction type dimensions fail, and the package did roll back. I am stuck on the date dimension.
Within the date dimension I have a stored procedure which populates calendar date attributes into a Wrk_Date table. Then I have a data flow task which reads from that working table, but it can't access it. I tried running the SP_WHO2 command, and the status is SUSPENDED, blocked by id -2.
I saw someone had a similar problem and I did read Ash's comments. I tried changing the isolation level, but it didn't help. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=74372&SiteID=1
Hi, I am quite puzzled how SQL Server manages transactions. Whatever the isolation level I set when performing an insertion, other connections do not have access to the table in select mode. Example in SQL Analyzer:

create table foo (id numeric(10), data varchar(100))

On Connection 1:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
BEGIN TRANSACTION
insert into foo(id,data) values (1,'data');

On Connection 2:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
select * from foo
-> QUERY HANGS

On Connection 1:
COMMIT

On Connection 2: get the result.

Using the READ COMMITTED level, I was expecting not to lock the table when performing the select. Thanks in advance for your help, Cedric
We are having a really big problem with a zombie process/transaction that is blocking other processes. When looking at Lock/ProcessID under Current Activity, I see a bunch of processes that are blocked by process 94, and process 94 is blocked by process -2. I assume -2 is a zombie that has an open transaction. I cannot find this process to kill, and it seems that this transaction is surviving database restarts. I know which table is locked up, and when I run a select * from this table it never returns. Does anyone have any ideas as to how to kill this transaction? Any help is appreciated. A. Tillman
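SPID -2 is the placeholder for an orphaned distributed (DTC) transaction, which is why it survives restarts and cannot be killed by session id. The usual route, sketched with placeholder values, is to find the transaction's unit-of-work (UOW) GUID and pass that to KILL:

-- Show open transactions in the affected database, including the UOW GUID
-- of any orphaned distributed transaction:
DBCC OPENTRAN('MyDatabase');              -- hypothetical database name

-- KILL accepts a UOW GUID instead of a SPID (the value below is a placeholder):
KILL 'D5499C66-E398-45CA-BF75-DBB12C9E4345';

If that fails, the transaction can also be resolved from the DTC side (Component Services > Distributed Transaction Coordinator > Transaction List).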
Have the need for going to a table to get an identity value. This is for updating an existing database, blah blah blah. Here is the schema of the table we are using:

CREATE TABLE [TableIdentityValue] (
    [TableName] [varchar] (50),
    [NextNegativeIdentity] [int] NOT NULL,
    [NextPositiveIdentity] [int] NOT NULL,
    CONSTRAINT [PK_TableIdentityValue] PRIMARY KEY CLUSTERED ([TableName]) ON [PRIMARY]
) ON [PRIMARY]
GO

Now, depending on the type of data we are inserting into a table, we need to get either a negative or positive number for the PK. There are two sprocs that control the obtaining of those values:

CREATE PROCEDURE GetNegativeIdentity
    @tableName varchar(50)
AS
    DECLARE @nextNegativeIdentityValue int

    BEGIN TRANSACTION
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    SET @nextNegativeIdentityValue = (
        SELECT NextNegativeIdentity
        FROM TableIdentityValue WITH (ROWLOCK)
        WHERE TableName = @tableName
    )

    UPDATE TableIdentityValue
    SET NextNegativeIdentity = @nextNegativeIdentityValue - 1
    WHERE TableName = @tableName

    COMMIT TRANSACTION

    RETURN @nextNegativeIdentityValue
GO

CREATE PROCEDURE GetPositiveIdentity
    @tableName varchar(50)
AS
    DECLARE @nextPositiveIdentityValue int

    BEGIN TRANSACTION
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    SET @nextPositiveIdentityValue = (
        SELECT NextPositiveIdentity
        FROM TableIdentityValue
        WHERE TableName = @tableName
    )

    UPDATE TableIdentityValue
    SET NextPositiveIdentity = @nextPositiveIdentityValue + 1
    WHERE TableName = @tableName

    COMMIT TRANSACTION

    RETURN @nextPositiveIdentityValue
GO

So, the thing is, we need the read and update of the value from the specific TableIdentityValue row to be atomic - we don't want anyone else reading or modifying that data in between. The problem is knowing which isolation level and/or locking to use, and how to implement that. I have tried a few different things that seemed to make sense, like placing a ROWLOCK hint on the SELECT statement, but is that lock going to hold for the entire length of the transaction? Also, I read that some isolation levels are equivalent to certain lock hints (e.g. setting the isolation level to SERIALIZABLE "has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction" according to SQL Books Online). Any help is appreciated!
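One way to sidestep the isolation-level question entirely is to collapse the read and the decrement into a single UPDATE and capture the prior value with an OUTPUT clause (SQL Server 2005+); the statement's own exclusive row lock makes the read-modify-write atomic. A sketch against the schema above, with the value coming back as a one-row result set rather than a RETURN code:

CREATE PROCEDURE GetNegativeIdentityAtomic
    @tableName varchar(50)
AS
    -- Single atomic statement: no window between the read and the write
    -- for another caller to slip into.
    UPDATE TableIdentityValue
    SET NextNegativeIdentity = NextNegativeIdentity - 1
    OUTPUT deleted.NextNegativeIdentity      -- the value before the decrement
    WHERE TableName = @tableName
GO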
I have some locking issues on a production database (Windows 2003 SP1, SQL Server 2005). In fact, I have an asynchronous process that does a SELECT TOP 1 on a table and UPDATEs the selected row. The transaction isolation level for this action is READ UNCOMMITTED. The READ UNCOMMITTED isolation level is ignored for the UPDATE, if I'm not wrong. On the other hand, I have some transactional activity with the READ UNCOMMITTED isolation level too.
But when I monitor the database activity, I very often find locks between the asynchronous part and the transactional part. It is the transactional activity that is blocking the asynchronous activity. The transactional activity is a simple SELECT, and this type of query, in spite of the READ UNCOMMITTED isolation level, takes exclusive locks while the asynchronous part waits on LCK_M_U.
I tried to modify the stored procedure for the SELECT/UPDATE of the asynchronous process into a single "UPDATE my_table ... FROM my_table" query in order to reduce the transaction time, but the problem is still present.
Can someone help me understand how a SELECT query at the READ UNCOMMITTED isolation level can take exclusive locks?
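Separately, the standard cure for SELECT TOP 1 / UPDATE dequeue contention is to claim the row with an update lock at read time and skip rows other workers already hold; a sketch with hypothetical column names on my_table:

DECLARE @id int;

BEGIN TRANSACTION;

-- UPDLOCK:  take the U lock during the read, avoiding the later lock
--           conversion that causes blocking and deadlocks;
-- READPAST: skip rows already claimed by another worker;
-- ROWLOCK:  keep the footprint to the single claimed row.
SELECT TOP (1) @id = id
FROM dbo.my_table WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE processed = 0
ORDER BY id;

UPDATE dbo.my_table
SET processed = 1
WHERE id = @id;

COMMIT TRANSACTION;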
Hi, I am still not very proficient in SQL Server, so apologies if the question sounds basic. We have a script to clean out old unwanted data. It basically deletes all rows which are more than 2 weeks old. It deletes data from 33 tables, and the number of rows in each table runs into a few millions. What I see in the script (not written by me :-) ) is that all data is deleted within a single BEGIN TRANSACTION and COMMIT TRANSACTION. As I have a background in Informix, such an action in Informix may result in a "LONG TRANSACTION PROBLEM". Does SQL Server have a similar concept? Also, won't it have a performance problem if all rows are marked locked till they are committed? TIA.
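SQL Server has no Informix-style long-transaction abort; the single transaction will simply hold every row lock (usually escalating to table locks) until COMMIT, and the log must retain the whole run. The usual rewrite is to delete in batches so each batch commits on its own, sketched here with a hypothetical table (DELETE TOP requires SQL Server 2005+; on 2000, SET ROWCOUNT plays the same role):

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- Each batch is its own transaction: locks release as you go and the
    -- log can truncate (or be backed up) between batches.
    DELETE TOP (10000)
    FROM dbo.audit_history                       -- hypothetical table
    WHERE created_at < DATEADD(week, -2, GETDATE());

    SET @rows = @@ROWCOUNT;
END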
I'm curious what are considerations for choosing a good transaction retention time? The default SQL uses is 0 to 72 hours. With this setting I found that cleanup was taking upwards of 30 minutes (for a process that defaults to run every 10 minutes). I've read that lowering it can improve performance, and that also you don't want this running too long because of deadlock issues between this and the log reader. So how short is too short? Optimally, since the system this runs on is under heavy use I'd like to optimize this as much as possible, which makes me think that the smaller the retention the better, but is something like 1 or 2 hours too short? What are possible consequences of such a short period of time?
Hi all, I would like to perform an

INSERT INTO LINKEDSVR.dbo.xyz.abc
SELECT ... FROM dbo.dfg

where LINKEDSVR is a linked server on another machine. Both servers are running SQL Server 2000 and have the DTC running. When I run this batch from Query Analyzer without explicitly using transactions, it works well (takes about 5 sec) - however, when I enclose it in begin [distributed] tran/commit tran, the query runs forever. I also tried to use the local server as a linked server (loopback), but it did not work either. Any suggestions? Thanks, Jo
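One thing worth ruling out: SQL Server requires SET XACT_ABORT ON for data-modification statements against most OLE DB providers (including remote SQL Servers) inside an explicit or distributed transaction. A sketch, with the remote column and database names hypothetical:

SET XACT_ABORT ON;   -- required for remote modifications inside the transaction

BEGIN DISTRIBUTED TRANSACTION;

INSERT INTO LINKEDSVR.remotedb.dbo.abc (col1)    -- hypothetical four-part name
SELECT col1 FROM dbo.dfg;                        -- source table named in the post

COMMIT TRANSACTION;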
I just wanted to post a follow-up to a message I posted some months ago about a long-running transaction that was blocking all other users. The link is below:

http://groups.google.com/group/comp...649bee2002646a2

By using the new "row versioning" functionality of SQL 2005, it completely solved this problem. Books Online says there is a performance impact, but that the better performance of SQL 2005 in general might offset it. So far this seems to be the case. Just posting it here in case anyone else has the problem. The SQL command I had to execute to get everything working properly was:

ALTER DATABASE DBname
SET READ_COMMITTED_SNAPSHOT ON;
I'm trying to figure out why my transaction log backup is taking up to an hour to complete. I started off with a full recovery model with a Full database back up every Sunday, differential backups every Tuesday/Thursday and log backups every 5 minutes. I would have thought that the log file backups would execute much quicker because I'm backing them up more often.
Here is my backup statement, I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups]
WITH NOINIT, NOUNLOAD, NAME = N'xxxx log backup', SKIP, STATS = 10, NOFORMAT
How can I execute a long running transaction using something similar to the fire and forget pattern? I intend to start the execution of a very long stored proc from within IIS. I would like to execute a sql script that will start the job and return immediately so that it doesn't hold an IIS thread.
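One common pattern is to wrap the stored proc in a SQL Agent job: msdb.dbo.sp_start_job queues the job and returns immediately, releasing the IIS thread while the proc keeps running under Agent. A sketch (job name hypothetical; the job is created once beforehand with the long-running proc as its only step):

-- Fire and forget: returns as soon as the job is queued, not when it finishes.
EXEC msdb.dbo.sp_start_job @job_name = N'RunVeryLongProc';

Service Broker activation is the other standard route when you need queuing semantics or don't want an Agent dependency.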
We have a great big database (90 GB) which has been populated (monopolised) by our finance team, and it's full of tables that probably aren't being used at all. But no one knows what's being used and what isn't, or they don't have the time to go through it with me.
So I have decided to implement a procedure that logs table activity on this database, and if for example a table isn't used for a month then it will be archived off and zipped up.
I have a few ideas in my head of how I can achieve this, but I am looking for some opinions and ideas from you guys.
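As a starting point, SQL Server 2005+ already tracks reads and writes per index in sys.dm_db_index_usage_stats, so a first cut needs no triggers at all. A sketch (the counters reset at every service restart, so for a month's view you would snapshot this into a log table on a schedule):

-- Last recorded activity per table in the current database; tables with all
-- NULLs have seen no recorded use since the last restart.
SELECT
    t.name                   AS table_name,
    MAX(u.last_user_seek)    AS last_seek,
    MAX(u.last_user_scan)    AS last_scan,
    MAX(u.last_user_lookup)  AS last_lookup,
    MAX(u.last_user_update)  AS last_update
FROM sys.tables t
LEFT JOIN sys.dm_db_index_usage_stats u
       ON u.object_id = t.object_id
      AND u.database_id = DB_ID()
GROUP BY t.name
ORDER BY t.name;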
Is there any way to find the time when the last DDL happened on a table? For example: the time when new column(s) were added to a table, or a datatype was changed.
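sys.objects (and views over it such as sys.tables) carries a modify_date that is updated whenever the object is changed with ALTER, which covers added columns and datatype changes. A sketch (note it records only the latest change, not a history - for a history, the default trace or a DDL trigger is needed):

SELECT name, create_date, modify_date
FROM sys.tables
ORDER BY modify_date DESC;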
Hello, I have an application that will be logging user activity from several Windows 2003 terminal servers to a SQL Server 2000 database. This information will be retrieved by monitoring the Security logs of these servers (this part I know how to accomplish already).

A table in the database, tblLogEntries, will contain the following fields:
- ID = autoincrementing int
- LogTime = Date/Time the user activity was recorded in the security log
- Username = User's login ID that the activity was recorded with
- Type = int, referencing a lookup table with the values of Logon, Logoff, and possible other future items
- Server = The name of the server the activity was recorded on

The only question I have is, can you offer a way to compute the total user login time during a given range using T-SQL?

For example, given the table data:

ID  LogTime              Username  Type    Server
1   10-10-2003 8:30:00   Tom       Logon   SERVER-A
2   10-10-2003 8:45:00   Sarah     Logon   SERVER-A
3   10-10-2003 16:45:00  Tom       Logoff  SERVER-A
4   10-10-2003 17:00:00  Sarah     Logoff  SERVER-A
5   10-11-2003 8:30:00   Tom       Logon   SERVER-A
6   10-11-2003 8:45:00   Sarah     Logon   SERVER-A
7   10-11-2003 16:30:00  Sarah     Logoff  SERVER-A
8   10-11-2003 17:15:00  Tom       Logoff  SERVER-A

How would you produce the output:

User   Logon Total Time for SERVER-A
Tom    17.0 hrs
Sarah  16.0 hrs

I know I can handle this type of processing on my ASP.NET front-end, but I'm curious as to how easily it can be done by the database itself. Thanks in advance for your assistance.
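A sketch of doing it in pure T-SQL, assuming Type = 1 means Logon and Type = 2 means Logoff, and that every logon is eventually followed by a matching logoff on the same server (this is plain self-join SQL, so it runs on SQL Server 2000):

SELECT
    lg.Username,
    SUM(DATEDIFF(minute, lg.LogTime, lf.LogTime)) / 60.0 AS TotalHours
FROM tblLogEntries lg
JOIN tblLogEntries lf
  ON  lf.Username = lg.Username
  AND lf.Server   = lg.Server
  AND lf.Type     = 2                          -- logoff rows
  AND lf.LogTime  = (SELECT MIN(nxt.LogTime)   -- first logoff after this logon
                     FROM tblLogEntries nxt
                     WHERE nxt.Username = lg.Username
                       AND nxt.Server   = lg.Server
                       AND nxt.Type     = 2
                       AND nxt.LogTime  > lg.LogTime)
WHERE lg.Type = 1                              -- logon rows
  AND lg.Server = 'SERVER-A'
GROUP BY lg.Username;

Against the sample data this returns Tom 17.0 and Sarah 16.0.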
I am interested in getting a better handle on how SQL 2000 determines the locking level to apply to different transactions. I am familiar with the fact that SQL does this on the fly, but I was wondering if this could be specified in a stored procedure instead, and what impact, if any, that might have on indexes. The databases I work with run anywhere from under 100 GB to 150 GB. Thanks for anyone's input on this subject.
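For reference, the per-statement knob is a table hint, which can live inside a stored procedure and overrides the engine's runtime choice for that statement only (it doesn't change the indexes themselves). A sketch with hypothetical names:

SELECT OrderID, Amount
FROM dbo.Orders WITH (PAGLOCK)     -- or ROWLOCK / TABLOCK to force a granularity
WHERE OrderDate >= '20070101';

Forcing coarser locks trades concurrency for lower lock-management overhead, and forcing ROWLOCK does the opposite, so it is usually best left to the optimizer except for measured hot spots.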
Working on a "social networking site" and would like to have something that records a member's interactions within the site.So for instance if a member uploads a photo, I would like to record it and then display it as "[Member] added photo" with a link to the photo. Also, if a friend is accepted as a friend "[Member1] is now friends with [Member2]" and also if a member posts a comment on the forums "[Member] posted message in [ForumName]" with link to post.This is very similar to Facebook's News Feeds and MySpace Friend Updates.The only way I can think of having this is having an Activity table with a field for each different Id I want to record and an associated list of tracked Actions:Activity:idmemberIdactionTypePhotoIdMember2IdForumIdThen there would be the ActionType tableactionIdNameThe biggest problem I see with this solution is that I would need to add a new field in the Activity table for each different type of activity I would like to track. This could get pretty big as I begin to track more and more activity items.Is there some way I can generalize this. - Andy
Hi, I have been struggling trying to design a query that will allow me to select the most recent date in a table, and I'm obviously not having much luck.
This is basically the table layout, note each employee can have multiple rows with different dates
Employee_ID  Last_Name  First_Name  Evaluation_Date  Evaluation_Score
1            Jones      Tom         01/04/07         40
1            Jones      Tom         01/12/07         50
1            Jones      Tom         04/01/08         60
2            Smith      Ed          02/14/05         70
2            Smith      Ed          03/18/06         80
3            Brown      John        06/23/04         80
3            Brown      John        12/23/04         79
3            Brown      John        01/07/06         50
3            Brown      John        10/22/08         69
What I'd like to do would be to write something that returns the following: just the last evaluation date and whatever relevant data is in that row.
Employee_ID  Last_Name  First_Name  Evaluation_Date  Evaluation_Score
1            Jones      Tom         04/01/08         60
2            Smith      Ed          03/18/06         80
3            Brown      John        10/22/08         69
I've looked at SELECT DISTINCT and the date operators without any success.
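A sketch of the standard greatest-per-group approach, assuming a hypothetical table name of Evaluations and that Evaluation_Date is stored as a datetime (so MAX orders correctly): join each row back to its employee's maximum date, and the whole row comes along for the ride.

SELECT e.Employee_ID, e.Last_Name, e.First_Name,
       e.Evaluation_Date, e.Evaluation_Score
FROM Evaluations e                                     -- hypothetical table name
JOIN (SELECT Employee_ID, MAX(Evaluation_Date) AS LastDate
      FROM Evaluations
      GROUP BY Employee_ID) latest
  ON  latest.Employee_ID = e.Employee_ID
  AND latest.LastDate    = e.Evaluation_Date;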
Hi, I am trying to insert some records into a table within a transaction that is left open (no commit/rollback yet). In another window/transaction I am trying to insert a record into the same table, but it hangs. Why?
I believe that during the insert (within a transaction) the whole table is not locked, so I should be able to insert records from other transactions.
Is it possible that an OLEDB Data Flow Source is imposing locks on the source tables? The source is an SQL Server OLTP environment, and although the package will be scheduled to run nightly when the application sees little to no use, I want to be sure that the process isn't impacting any application functions.
We have a SQL Server 2008 R2 system that has heavy usage on one specific table. I have tuned basically all I can as far as making sure SQL statements are using good indexes. From time to time a group of folks will log into Management Studio and run SQL statements like the one below, leave the query open, and once in a while it will cause blocking for other SQL running on our online system.
The query is like this: select ID,* from tablename with (nolock) where ID like 'MSPRYy%'
The results come back in less than 1 second. However, they leave this window open, which is what causes this session to become a HEAD BLOCKER and block other SQL statements from running.
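To confirm which session sits at the head of the chain before killing anything, a sketch using the standard DMVs: the head blocker is a session that is blocking others while not being blocked itself.

SELECT r.blocking_session_id AS head_blocker,
       COUNT(*)              AS blocked_requests
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0
  AND r.blocking_session_id NOT IN (SELECT r2.session_id
                                    FROM sys.dm_exec_requests r2
                                    WHERE r2.blocking_session_id <> 0)
GROUP BY r.blocking_session_id;

An idle window can still be a head blocker if it left a transaction open; sys.dm_tran_session_transactions shows whether a given session has one.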