Long Table Locks
Jul 20, 2005
Hi
There is an application that runs on SQL Server.
The application selects from and updates a few tables frequently.
Once there is even a select on one of these tables, it blocks other users,
sometimes for a very long time.
Is there anything that can be done to reduce this?
The table has 18,000 rows and does not seem to have an index.
I thought indexing might help, but 18,000 rows without an index is
no reason for 30 minutes of lock time.
I will appreciate your help as usual
Vince
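A hedged starting point: on SQL Server 7.0/2000 you can see who is blocking whom with sp_who2, and if the long blocks come from full table scans, an index on the column the application filters on usually shortens them dramatically. Table and column names below are placeholders:

-- The BlkBy column shows which spid is blocking each waiting session
EXEC sp_who2

-- If scans are the culprit, index the column the application searches on
CREATE INDEX IX_MyTable_SearchCol ON dbo.MyTable (SearchCol)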
View 1 Replies
Oct 1, 2007
We have a web-based third-party application that has both background processes and user activity requests running in the same database (SQL Server 2005 SP2). The problem is that a background process will start a long-running transaction and hold an exclusive lock on a few rows in a given table (a small table, <100 rows). The web clients need to scan this same table, but when their "select *" statements get to those locked row(s), the web client queries stall waiting for that exclusive lock to be released. This effectively brings the entire web front end to a halt because all clients must hit this table for each user action. I realize that this is the classic lock condition that multiversioning databases like Oracle, PostgreSQL, SQL Server Compact Edition, and other databases do not suffer because they don't use shared read locks like SQL Server. But since we're on SQL Server for this app, what is the way to get around this problem? Modifying the clients to use WITH (NOLOCK) is not an option... there will be major consistency issues unless the clients run in Read Committed or higher. Any ideas? We could tweak this app if needed. Does SQL Server 2008 introduce multiversioning or at least some mechanism to get around this problem? I did not see it mentioned on the Microsoft site, but maybe I missed it. Thanks in advance.
Austin
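For what it's worth, SQL Server 2005 already offers the row versioning being asked about: with the READ_COMMITTED_SNAPSHOT database option, readers see the last committed version of a locked row instead of queueing behind the writer's exclusive lock, and unlike NOLOCK they never read uncommitted data. A minimal sketch, assuming a database named MyAppDb and a moment with no other connections in which to flip the option:

-- Switching requires exclusive access to the database for an instant
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Optional: transaction-level versioning as well
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

The web clients' plain "select *" statements need no changes; they keep running at READ COMMITTED but stop stalling behind the background process's open transaction.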
View 6 Replies
Jan 16, 2008
Hi,
Let's assume the client code calls a stored procedure which reads: SELECT * FROM myTable WHERE ID < 100. Let's assume that, for whatever reason, the query becomes very slow and takes 60 seconds to complete. These 60 seconds could be the time for the DB engine to fetch all the rows (before the resultset is returned to the client code), or the time to transfer the resultset to the client code (slow network throughput).
Question 1: could this prevent another user from doing a SELECT on myTable? (It could be on different IDs, or even overlapping the IDs between 1 and 100.)
Question 2: could this prevent another user from performing a write (UPDATE/DELETE) on the rows with ID between 1 and 100?
Question 3: can another user perform a write (INSERT, DELETE, UPDATE) on rows outside of the ID range between 1 and 100?
Thanks in advance for any help.
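A hedged two-session experiment (dbo.myTable and SomeCol are placeholders) makes the compatibility rules concrete: shared locks taken by a SELECT never conflict with other shared locks, but they do conflict with the exclusive locks writers need, and vice versa:

-- Session 1: take and hold exclusive locks on rows with ID < 100
BEGIN TRAN;
UPDATE dbo.myTable SET SomeCol = SomeCol WHERE ID < 100;

-- Session 2, in a separate window:
SELECT * FROM dbo.myTable WHERE ID < 100;  -- blocks behind the X locks
SELECT * FROM dbo.myTable WHERE ID > 500;  -- usually succeeds: different rows, index permitting

-- Session 1: release everything
COMMIT;

In short: Question 1 - no, concurrent SELECTs do not block each other; Question 2 - possibly, but only while the slow reader's shared locks are still in flight; Question 3 - yes, writes to rows the reader never touches proceed, although a missing index (forcing a full scan) or lock escalation can widen the reader's footprint.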
View 6 Replies
Jul 16, 2015
I've got an INSERT that's selecting data from a linked server and attempting to push 10 million rows into the blank table. More or less, it looks like this:
insert into ReceivingTable (
Field1, Field2, Field3, Field4
, Field5, Field6, Field7, Field8
, Field9, Field10, Field11, Field12
, Field13, Field14, Field15
[code]...
"The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions."

There are no other active users. I ran it again and monitored the following DMV to watch the growth of locks for that spid:
SELECT request_session_id, COUNT (*) num_locks
-- select *
FROM sys.dm_tran_locks
--where request_session_id = 77
GROUP BY request_session_id
ORDER BY count (*) DESC
The number of locks started small and held for a while around 4-7 locks, but at about 5 minutes in, the number of locks held by that spid grew dramatically to more than 8 million before finally erroring again with the same message. Researching, I can't figure out why it's not escalating from row locks to table locks at the appropriate threshold. The setting was 0 at first (Server Properties > Advanced > Parallelism > Locks). I set it to 5000, and it still didn't seem to work. Rewriting the INSERT to include WITH (TABLOCK) allows it to finish successfully in testing. My problem is that it's coming out of an ETL tool whose source code I can't edit. I need to figure out how to force it to escalate to locking the entire table via table- or server-level settings.
A colleague suggested that installing service packs may take care of it (the client is running SQL Server 2008 R2 (RTM)), but I haven't found anything online to support that theory.
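Two hedged things to check. First, that Locks box is the sp_configure 'locks' setting, which caps total lock memory rather than setting the escalation threshold - escalation is attempted automatically at roughly 5,000 locks per statement - so pinning it at 5000 makes this error more likely, not less. Second, escalation can be disabled instance-wide by trace flags 1211 or 1224:

-- Keep the server-wide lock cap dynamic; a fixed cap produces exactly this error
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 0;
RECONFIGURE WITH OVERRIDE;

-- See whether lock escalation has been disabled instance-wide
DBCC TRACESTATUS (1211, 1224);

-- SQL Server 2008+: confirm the target table permits table-level escalation (the default)
ALTER TABLE dbo.ReceivingTable SET (LOCK_ESCALATION = TABLE);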
View 9 Replies
Jan 15, 2008
Hi,
Yesterday we had a sudden load on our SQL Server 2000 which resulted in several locks. There was not much time to investigate, as we had to rush. A team member reviewed the processes in EM (Management > Current Activity), looking for blocking processes, and killed them.
She told me that as soon as a blocking SPID was killed, another one arose, and she had to repeat the operation a dozen times. When done, the server activity was back to normal. She noticed that more than half of the blocking processes showed that they were executing the stored proc P_SearchProducts.
We don't own the server and the information on what had happened at that time (batches or resource intensive operations, etc.) is not available for now.
The team suggests that we set the transaction isolation level to READ UNCOMMITTED for this SP. I would like to understand locks better before I go ahead.
P_SearchProducts returns 5 recordsets, each of which can contain from 1 to 200 rows. To achieve the results, it creates about 10 intermediate tables (SELECT ... INTO #TableX); these temp tables are then used progressively to arrive at the final results. Roughly, the volume of these temp tables could be double that of the final results. The developer who wrote this SP is not a guru in SQL; there is room for improvement. But here are my questions:
Q1. Could the series of SELECT ... INTO #TableX statements in P_SearchProducts prevent or block another connection from executing the same SP? If yes, under which conditions?
Q2. Let's assume that P_SearchProducts has a slow execution time. Could it prevent another connection from updating the Product table, and thus lead to a deadlock situation? Something like: another transaction (by User2) has obtained locks on most of the product tables, except the Product table, which is being slowly read by User1 executing P_SearchProducts - and User1 cannot read the other product tables because they are locked by User2.
Q3. If the contention issue was provoked by the slow execution time of many requests to exec
P_SearchProducts (let's assume there were suddenly 50 users on the web hitting the search product feature at the same time), could READ UNCOMMITTED magically resolve the contention, provided we accept the consequences of dirty reads?
Sorry for the long post and thank you in advance for any help.
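For what it's worth, if you do go the dirty-read route, the change is one statement at the top of the procedure rather than NOLOCK hints scattered across every query - a minimal sketch, assuming the search results can tolerate uncommitted rows:

-- First statement inside P_SearchProducts, before the SELECT ... INTO chain:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

The SELECT ... INTO #TableX statements themselves write only the connection's own temp tables, so on SQL Server 2000 they should not block other executions of the same SP; the shared locks taken while reading the permanent Product tables are the usual suspects.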
View 2 Replies
Aug 5, 2015
We are migrating our database(s) from Oracle to SQL Server. In Oracle we were able to issue a SELECT statement and see all of the locks (blocking and non-blocking) currently in the system. The query also included the process ID of the process we needed to kill in order to get rid of the lock.
We now need to create the same type of query for Microsoft SQL Server 2012. I have seen postings on different sites saying that this info can be obtained using SP_WHO2 or using the SQL Server Management Studio Activity Monitor's PROCESSES tab, but we are looking for a SELECT statement that will give us similar information.
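A hedged equivalent for SQL Server 2012, built on the documented join between sys.dm_tran_locks and sys.dm_os_waiting_tasks; the session_id it returns is what you pass to KILL:

SELECT tl.request_session_id            AS session_id,
       wt.blocking_session_id           AS blocked_by,
       DB_NAME(tl.resource_database_id) AS database_name,
       tl.resource_type,
       tl.request_mode,
       tl.request_status                -- GRANT = holding, WAIT = blocked
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.dm_os_waiting_tasks AS wt
       ON tl.lock_owner_address = wt.resource_address
ORDER BY wt.blocking_session_id DESC;

-- KILL 53;  -- terminates the offending session, like Oracle's ALTER SYSTEM KILL SESSION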
View 7 Replies
Oct 10, 2003
I am interested in getting a better handle on how SQL Server 2000 determines the locking level to apply to different transactions. I am familiar with the fact that SQL Server does this on the fly, but I was wondering whether a stored procedure could specify one level over another instead, and what impact, if any, that might have on indexes. The databases I work with run anywhere from under 100 GB to 150 GB. Thanks for anyone's input on this subject.
Thomas
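For what it's worth: the optimizer picks lock granularity on the fly, but any statement inside a stored procedure can override it with table hints, and sp_indexoption can restrict the choices per index. A hedged sketch with placeholder object names:

-- Force row-level locks for this statement only
SELECT * FROM dbo.Orders WITH (ROWLOCK) WHERE CustomerID = 42

-- Take a single table-level exclusive lock instead of thousands of row locks
UPDATE dbo.Orders WITH (TABLOCKX) SET StatusCode = 2 WHERE StatusCode = 1

-- Disallow row locks on one index (SQL Server 2000)
EXEC sp_indexoption 'dbo.Orders.IX_Orders_Status', 'AllowRowLocks', FALSE

The hints do not change the indexes themselves, only the granularity of the locks taken while using them.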
View 2 Replies
Dec 29, 2007
Hi,
I am trying to insert some records into a table within a transaction that has been left open (no commit/rollback yet). In another window/transaction I am trying to insert a record into the same table, but it hangs. Why?
I believed that during an insert (within a transaction) the whole table is not locked, so I should be able to insert records from other transactions.
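That belief is normally right: two inserts into the same table do not block each other unless something raises the stakes - lock escalation to a table lock, a TABLOCK hint, an insert of the same key value, or SERIALIZABLE range locks. A hedged way to see exactly which lock the hanging session is waiting for (SQL Server 2005 and later):

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_resource, r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;

If wait_resource names the whole table rather than a key or page, the first transaction's locks escalated.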
View 15 Replies
Sep 15, 2006
Hi All,
Is it possible that an OLEDB Data Flow Source is imposing locks on the source tables? The source is an SQL Server OLTP environment, and although the package will be scheduled to run nightly when the application sees little to no use, I want to be sure that the process isn't impacting any application functions.
Thanks for the advice!
Rocco
View 1 Replies
Oct 31, 2007
Hello -- I have a huge table, Tab1 (160 million rows). I have a script that inserts into and deletes from Tab1 (this script runs for a good 4-5 hours).
While the script is executing, no other session can even do a "SELECT TOP 10 * FROM Tab1".
How do I avoid this? Is there a NOLOCK keyword I should be specifying in my script?
Thanks in advance
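NOLOCK belongs on the readers, not on the writing script, and it trades blocking for dirty reads. The likelier root cause is lock escalation: millions of row locks escalate to a table lock that the script then holds for hours. Working in small, individually committed batches keeps every lock short-lived - a hedged sketch with a placeholder filter column (TOP in DML needs SQL Server 2005+):

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    -- Each batch is its own transaction, so locks are released every few seconds
    DELETE TOP (5000) FROM dbo.Tab1 WHERE PurgeFlag = 1
    SET @rows = @@ROWCOUNT
END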
View 3 Replies
Jun 3, 2014
I was under the impression that rebuilding an index online largely means the index remains available for use during the rebuild, so my procs and queries can keep using it. Also, my understanding was that the table would be locked only very briefly while the schema change completes. But when I was rebuilding the clustered index online on a large table with some 3 million records, the table got locked and I was not able even to read data from it for some 5 minutes. Then I cancelled the operation, as it was a production server and the table is one of our main transaction tables.
Is rebuilding an index online supposed to work this way? The table has no other index. The parameters I used are:
REBUILD WITH (PAD_INDEX = ON, SORT_IN_TEMPDB = ON, ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 95)
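A hedged explanation: even with ONLINE = ON, the rebuild takes a short schema-modification (Sch-M) lock at the start and at the end, and that lock queues behind every open transaction on the table - while everything arriving later queues behind it - so a single long-running transaction can freeze the table for minutes. SQL Server 2014 and later make the trade-off explicit (index and table names are placeholders):

ALTER INDEX CIX_MyBigTable ON dbo.MyBigTable
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)));

On earlier versions, the practical equivalent is scheduling the rebuild for a window with no long-running transactions against the table.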
View 5 Replies
Jul 20, 2005
Hi, I have an Access application with linked tables via ODBC to MS SQL Server 2000. I'm having a weird problem, probably something I've done without being aware of it (kinda newbie). The last 20 records (and growing) of a specific table are locked - can't change them ("another user is editing these records ..."). I know for a fact that no one is editing records, and yet no user can edit these last records in the MDB - including the administrator - while still being able to add new records. The administrator is able to edit records in the ADP (SQL Server) where the tables are stored. Please help, the application is rendered inert. Thanks for reading, Oren.
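A classic culprit with ODBC-linked tables, offered as a hedged guess: without a timestamp column, Access compares every field to decide whether a row changed underneath it, and bit or floating-point columns often make that comparison fail, producing phantom "another user is editing" locks. Adding a timestamp column and relinking usually cures it (table name is a placeholder):

ALTER TABLE dbo.MyTable ADD UpdStamp timestamp
-- Then delete the linked table in Access and relink it so the new column is picked up.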
View 3 Replies
Feb 11, 2015
I have a SQL snippet from a 3rd-party application that will not complete its transaction. The SELECT statement executes but does not finish; instead the statement just sits in AWAITING COMMAND for 1000 seconds and then dies, killing the UPDATE statement that is supposed to follow.
The CROSS JOIN and CROSS APPLY seem suspect.
(
@p0 DATETIME,
@p1 INT,
@p2 INT,
@p3 NVARCHAR(4000),
@p4 INT,
[code]....
View 9 Replies
Jul 20, 2005
I have written a stored procedure to list all tables in which rows, or the table itself, are locked. The only information I am not able to get is the time when the lock occurred. The way I want it: if I run the procedure, it should show all locks on a table which are at least 5 (or x) seconds old. This way I can avoid momentary locks on a table which go away after a few seconds. Which table and column of the master database has that information? Thanks. -- email id is bogus
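Hedged answer: SQL Server does not record when a granted lock was acquired, but for sessions stuck waiting on a lock, master..sysprocesses carries how long they have been waiting - enough to filter out the momentary ones:

SELECT spid, blocked, waittime, lastwaittype, waitresource
FROM master..sysprocesses
WHERE blocked <> 0
  AND waittime >= 5000  -- waittime is in milliseconds: keep only waits at least 5 seconds old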
View 3 Replies
Aug 16, 2006
Hi,
I am trying to execute this select:
SELECT count(*) FROM CM_CERT_VARIANCE_DET_APINV_RELATE
using the SQL Server ODBC driver; however, I received this message:
"Identifer too long"
Is there any table name length limitation?
I think there is, because it works fine in the SQL environment.
I mean, should this table name have fewer than 18 characters? Can I work around this limitation?
cheers,
Alessandro
View 6 Replies
Jun 8, 2001
Hello Folks,
I'm looking for the group's collective wisdom on the following issue; I'm sure many have been confronted with it.
I'm designing a db for a website that will run on SQL Server 7. I have a few tables that I think will grow very large row-wise and that will be written to and read from frequently. I sense that I might be able to get better performance out of the system if I split many of these large tables up (row-wise) into many smaller tables.
For example, the website that I'm working on has a "MailingAddresses" db table that contains the mailing addresses for the site's users (subscribers and other misc users - 1 record per user, anticipating 70,000+ users). Each user can update their record, and there will be frequent queries against the table to get addresses both for internal admin use and for display on public webpages.
The website has five distinct sections that are in some sense like five distinct websites. Each user belongs to only one of these sections and won't migrate between sections. Therefore I'm considering breaking up the one large "MailingAddresses" table into five smaller tables, one for each section, i.e., "MAddressesSectionA", ..., "MAddressesSectionE".
These tables will have the same fields and constraints. Also of course there would be the same number of reads and writes in total with the five smaller tables as compared to that for the one big table. Also the combined size of the info in the five smaller tables would be the same as that for the one large table.
Though it's going to be more of a pain to manage five tables versus one, I have a hunch it might be easier for the dbms to handle reads and writes with five smaller tables than with one large table...
...Of course this is true when queries mostly concern users from just one or two of the sections (as might be the case) - the big table then has the overhead of the address records for users in the other sections. BUT is there any benefit to the five-smaller-tables route when they are all frequently accessed? Sure, each select query has fewer records to go through, but with all five tables in play the dbms has to deal on average with the same amount of info as in the one big table.
What do you folks think, to divide or not to divide the big table(s) up row wise into smaller tables?
I guess the issue is summed up in this question: In general, can a dbms better handle in memory, and more quickly write to and query access - one BIG table with a+b+...+n records, or N smaller tables with a, b,...,n records respectively?
Thanks for the collective wisdom, - Jerry
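For what it's worth, SQL Server 7 lets you have both layouts at once through a partitioned view: five section tables underneath, one virtual big table on top, with CHECK constraints letting the optimizer prune section-filtered SELECTs to the single matching table. A hedged sketch (the columns are placeholders):

CREATE TABLE MAddressesSectionA (
    UserID  int          NOT NULL PRIMARY KEY,
    Section char(1)      NOT NULL CHECK (Section = 'A'),
    Street  varchar(100) NOT NULL
)
-- ... identical tables for sections B through E, each with its own CHECK ...

CREATE VIEW MailingAddresses AS
SELECT * FROM MAddressesSectionA UNION ALL
SELECT * FROM MAddressesSectionB UNION ALL
SELECT * FROM MAddressesSectionC UNION ALL
SELECT * FROM MAddressesSectionD UNION ALL
SELECT * FROM MAddressesSectionE

Section pages can query their own table directly; admin queries that span sections go through the view.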
View 2 Replies
Mar 8, 2007
I have a long query which I have set off and would like to stop, and then rename one of the tables used. My question is due to my lack of understanding of the underlying structure of MS SQL Server... So say an update modifies TABLE_A and I stop it; whilst this transaction is rolling back, I attempt to rename TABLE_A to TABLE_A_OLD and rename a different table to become TABLE_A. I am assuming that the rollback actions will follow the object reference to TABLE_A_OLD and continue to roll back the effects on the correct table without corrupting the 'new' TABLE_A... or will it not allow me to rename TABLE_A until the rollback is complete? Thanks for any help! Steve
View 1 Replies
Dec 3, 2007
Hi everyone,
I've got an excel file that I want to import into a database table.
The longest text in a cell is 385 characters.
I've made the fields in the table nvarchar(1024).
I created a data flow task for the import.
When I run this task, I get the following error:
[Excel Source [1]] Error: There was an error with output column "Line Text" (52) on output "Excel Source Output" (9). The column status returned was: "Text was truncated or one or more characters had no match in the target code page.".
[Excel Source [1]] Error: The "output column "Line Text" (52)" failed because truncation occurred, and the truncation row disposition on "output column "Line Text" (52)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
Is it possible that there is a restriction on the length of the text?
regards,
Filip
View 8 Replies
Mar 8, 2005
Dear Participants,
We are using merge replication for a multi-location database, but we are facing one problem with a table that is not included in replication.
The table xxxxmast has only 39 rows of static information, but it is used by every user for every task, as select-only information, like:
Select fin_year into mem_variable from xxxxmast where co_name = :global_variable
go
Code validation comes from here only.
Rights validation comes from here only.
Report-retrieval validation comes from here only.
That means every user selects from this table frequently, many times over, but we only ever fetch information from it.
It was working fine before, but now we get the problem from time to time.
Today (8 March) this table was not accessible three times in eight hours:
1st time: 10 minutes.
2nd time: 10 minutes.
3rd time: 52 minutes.
Users want to log in, but at login time the year and other validations come from this table, so users had to wait for the times mentioned above.
Yesterday we had to do the following:
Drop table xxxxmast.
Create table xxxxmast.
Insert required data.
This is real trouble for our application.
Any help would be really great for us.
Thanks
R.Mall
View 3 Replies
May 26, 2008
Hi,
The scenario: data comes from various sources and is staged into a staging database. From the staging database it goes into the data warehouse database. Every day the staging database is truncated and repopulated from the various sources.
I have a dimension table called DimCustomers which consists of around 300,000 rows and has lots of different types of SCD columns. It takes around 4-5 hours to load data from staging into this dimension table. Currently I'm using a For Loop container which uses a stored proc to extract 15,000 rows each time and populate the dimension table. The first couple of loops go quickly, but once the count passes the halfway mark it slows down, and hence the load takes around 4-5 hours.
What would be the best approach to populate this kind of dimension table.
Thanks
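One hedged suggestion: chunked loops that call a stored proc per 15,000 rows tend to degrade as the lookups repeat against a growing dimension; a set-based load usually cuts hours to minutes. A sketch of the type-1 portion under assumed table and column names:

-- Type 1 changes: overwrite in place with one set-based statement instead of a row loop
UPDATE d
SET    d.Email = s.Email,
       d.Phone = s.Phone
FROM   dbo.DimCustomers AS d
JOIN   Staging.dbo.Customers AS s ON s.CustomerKey = d.CustomerKey
WHERE  d.Email <> s.Email OR d.Phone <> s.Phone;

-- Brand-new customers: one set-based insert
INSERT dbo.DimCustomers (CustomerKey, Email, Phone)
SELECT s.CustomerKey, s.Email, s.Phone
FROM   Staging.dbo.Customers AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimCustomers AS d WHERE d.CustomerKey = s.CustomerKey);

The type-2 columns follow the same pattern: one statement to expire the old rows, one to insert the new versions.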
View 7 Replies
Jan 31, 2005
I need to full-text index a table so that I can easily search the text fields of that table. The table has about 21,000 rows, and I was wondering how long it might take to full-text index it?
thanks
View 1 Replies
Sep 25, 2015
We have a proc that adds some fields to a few tables of ours, and normally there are no issues. For one of our client databases this process is taking anywhere from 5-10 minutes to add the fields, which causes the app to time out waiting. After poking around, looking at the proc, and trying different things, I found it happens only for this one database and ONLY when there is data in the table. If I truncate the table and run the same procedure, everything is fine. The tables all have the same index on 4 columns, and the columns being added are not indexed because of the stupid hoops we have to jump through to pre-pivot data for our reporting package.
View 4 Replies
Jul 14, 2015
I have several reports that look for a code within a certain set of codes or ranges. The specific list of codes to include is determined by the end user. Currently my "IN" statement can be a hundred lines long, listing several ranges, lists of specific codes, etc. I am constantly being asked which codes it includes, whether a given code is included, etc. Sometimes they'll give me a printed 10-page list of codes and want me to compare it to what I have included in the report. Not ideal in the slightest.
What I'd like to do is have a table or a file of some kind somewhere where the end user can view the codes contained, add new ones, and delete ones they no longer want. Then I'd like to be able to just reference that file in my IN statement. Leaving the responsibility of listing the correct codes on them.
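That is exactly what a lookup table buys - the users maintain the rows, the report just joins to them. A hedged sketch with placeholder names:

CREATE TABLE dbo.ReportIncludedCodes (
    Code varchar(20) NOT NULL PRIMARY KEY  -- maintained by the end users
);

-- The hundred-line IN (...) collapses to:
SELECT t.*
FROM   dbo.Transactions AS t
WHERE  t.Code IN (SELECT c.Code FROM dbo.ReportIncludedCodes AS c);

Give the users a simple grid (Access, an SSMS edit grid, a small web page) over ReportIncludedCodes, and the question of which codes are included answers itself.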
View 9 Replies
Nov 10, 2003
I have read that even during read operations (SQL SELECT statements), SQL Server uses row locking. I know that you can use the NOLOCK keyword, but if you don't, every time a user runs a SELECT statement on a table, does SQL Server really lock those rows? And if so, are they then unavailable to another user who wants to run a SELECT statement on that same table at the same time? That does not seem like it would be the case, otherwise it would not scale well. Thanks for any clarification on this.
View 5 Replies
Feb 13, 2002
I am using SQL Server 7.0. I opened a table through Enterprise Manager and left it open. In Query Analyzer, when I try to update a field on that table (more than 2000 rows), the update just keeps running. When I watched Current Activity, it showed that the update process was being blocked by the select query. But if I try to update the same column for fewer than 1500 rows, there is no blocking issue and the update occurs immediately. Can anybody let me know why this is happening and what I should do to prevent it?
View 1 Replies
Mar 28, 2000
I am using SQL Server 7.0.
Today I got the following error message. Can someone tell me how to solve this issue?
Server: Msg 1204, Level 19, State 1, Procedure OPEN_OBJECTS, Line 2
SQL Server has run out of LOCKS. Rerun your statement when there are fewer active users, or ask the system administrator to reconfigure SQL Server with more LOCKS.
ranga.
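Error 1204 means the statement exhausted a fixed lock limit. On SQL Server 7.0 the usual fix, hedged as always, is to let the server size lock memory dynamically instead of using a fixed count:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 0   -- 0 = allocate locks dynamically as needed
RECONFIGURE WITH OVERRIDE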
View 1 Replies
Sep 26, 2000
What is the best way to control locks when inserting a couple thousand records from one table into another?
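A couple thousand rows usually stays below the lock escalation threshold, but batching keeps the footprint small either way - a hedged SQL Server 7/2000-era sketch using SET ROWCOUNT, with placeholder table names:

SET ROWCOUNT 500              -- cap each batch at 500 rows
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Target (ID, Col1)
    SELECT s.ID, s.Col1
    FROM   dbo.Source AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.ID = s.ID)
    IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy
END
SET ROWCOUNT 0                -- restore the default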
View 5 Replies
Feb 20, 2001
Hi
I have a big query which updates around 14,000 rows at a time. If I place a lock on the table and others try to update the same table, is it possible to let them know that the table is locked by someone else?
View 1 Replies
Sep 19, 2002
2 quick questions:
1) How do I keep multiple users from editing the same record without locking the entire table? What would be a 'standard' way of handling this?
2) How do I keep 2 people from posting the same record?
Please help me understand locking, THANKS!!!
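The standard answer to (1) is optimistic concurrency: hold no locks while a user edits; instead give each row a version stamp and let the update succeed only if the stamp is unchanged. A hedged sketch with placeholder names, written as it would appear inside an update proc taking @DocID, @NewTitle and @OldRowVer:

-- One-time schema change; SQL Server bumps this column automatically on every update
ALTER TABLE dbo.Documents ADD RowVer timestamp

-- Read: return the stamp along with the data
SELECT DocID, Title, RowVer FROM dbo.Documents WHERE DocID = @DocID

-- Write: succeeds only if nobody changed the row since it was read
UPDATE dbo.Documents
SET    Title = @NewTitle
WHERE  DocID = @DocID AND RowVer = @OldRowVer

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user - reload and retry.'

For (2), a unique constraint on the natural key makes the second identical post fail cleanly instead of creating a duplicate.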
View 2 Replies
Apr 11, 2005
I have a stored proc which will be entering/updating a record into a table. The table's key is an integer field which I may have to increment by one. I know I can use
declare @nextid int
select @nextid = max(id) + 1 from MyTable
insert into MyTable (id) values (@nextid)
Is some kind of lock the best way to approach this?
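Yes - without one, two concurrent callers can read the same MAX(id) and collide on insert. The usual pattern, hedged with placeholder names, holds an update lock on the range for the duration of one short transaction (an IDENTITY column avoids the problem entirely if the schema can change):

BEGIN TRAN
    DECLARE @nextid int
    SELECT @nextid = ISNULL(MAX(id), 0) + 1
    FROM   dbo.MyTable WITH (UPDLOCK, HOLDLOCK)  -- blocks other callers' MAX(id) until commit
    INSERT INTO dbo.MyTable (id) VALUES (@nextid)
COMMIT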
View 4 Replies
Jun 15, 2004
Hello,
I have a problem in SQL Server 2000: when I execute a query, the table gets locked against inserts or any other transaction, even against other queries.
Does SQL Server have a locking model different from Oracle's?
How do I solve this problem?
View 5 Replies
Jun 16, 2004
Hello There !!
I have a very big problem with SQL Server 2000. I want to know about the locks taken by SELECT.
When I execute a (big) SELECT and then try to update or insert into one of the tables referenced in the select, I get blocked.
Is there something in SQL Server like a SELECT FOR UPDATE that could be causing the problem?
Is there any way to select rows from a table without locking it?
I really have a big problem with this, and I don't know much about SQL Server!
Thank you so much !!!
View 14 Replies
Jul 1, 2004
Some of my tables were locked in IS mode. What does that mean?
Thanks.
View 2 Replies