I have a customer using our program with SQL Server who occasionally gets: "Transaction (process ID xxxxx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim." From what they are telling me, there shouldn't be any deadlock happening, since they say it occurs while they are invoicing in a different program that accesses a different database. Also, the error is happening on a SQL SELECT from a view, and that SELECT is then used to show data in an HTML table for the user. I don't think this view should need to lock anything; I just want to read the data. Is there anything I can do to fix this?
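A note on this pattern: under the default READ COMMITTED isolation level even a plain SELECT takes shared locks, so a reader can be picked as the deadlock victim. One common mitigation on SQL Server 2005 and later is row versioning, so readers stop fighting writers; a minimal sketch, with YourDb and the view/column names as placeholders:

-- switch the database to row-versioned reads; needs a moment with no other active sessions
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

-- or, per query: read without taking shared locks at all (accepts dirty reads)
SELECT col1, col2
FROM dbo.your_view WITH (NOLOCK)
WHERE some_column = 'some value';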
Hi guys; I use a transaction that takes a long time to execute (different updates on different tables). I want to add a TABLOCKX hint after each update in order to lock the table within the transaction (until the transaction finishes). This should normally force another transaction using the same tables to wait until the lock is released.
1. Since there is no timeout set, does the second transaction wait until the first transaction is committed, even if that takes a long time?
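For reference, an exclusive table lock acquired inside a transaction is held until the transaction commits or rolls back, so the hint can go directly on each update; a minimal sketch with hypothetical table and column names:

update t
set some_column = 1
from dbo.table_a as t with (tablockx)
where t.some_key = 42
-- the exclusive lock on table_a is now held until COMMIT/ROLLBACK; by default another
-- transaction touching table_a waits indefinitely, unless it sets LOCK_TIMEOUT
-- or is chosen as a deadlock victim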
When I build a data flow task like the following, it always blocks: 1. get data from the data source; 2. conditional split on column = 'Y' or 'N'; 3. when column = 'Y', insert into table A; 4. when column = 'N', use an OLE DB Command to update table A. When I run this package I can see that some rows are 'Y' and some rows are 'N', but they can't run together. Why? I've spent a long time on this issue. When I change the data flow to a data flow plus a T-SQL task, and change the OLE DB Command (update table A) to an Execute SQL task, the issue is solved. But I want to know why I can't put these in one data flow.
I have a framework that runs tests and keeps updating the status of the tests in the DB. There are approximately 20 tests whose status is updated simultaneously. Recently I have seen the following error:
{"Transaction (Process ID 84) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction."}
I have an SSIS package that is run from one job, and nowhere else. The package has TransactionOption NotSupported.
In the SSIS package I first have a sequence container that has TransactionOption Required (Serializable). The sequence container contains several Execute SQL tasks that access the same SQL Server 2005 database. The Execute SQL tasks have TransactionOption supported / Serializable. The first SQL statement is:
select p.column_name
from table_name p with (XLOCK, ROWLOCK)
where p.second_column_name = 'COLUMN_VALUE'
After this there are a couple of SQL tasks, and finally a task containing the following SQL:
select p.third_column_name
from table_name p
where p.second_column_name = 'COLUMN_VALUE'
The idea is that the first query exclusively locks the row in table_name for mutual exclusion purposes, i.e. so that if for some reason the SSIS package were executed simultaneously two or more times, only one execution could lock the row and proceed, and the other executions would have to wait. There's an index on the second_column_name column in the table so that the SELECT doesn't lock any rows other than the required one.
Some questions:
1) In my setup, is it the case that when the SSIS runtime executes the sequence container it creates a transaction at the beginning of the sequence container and commits the transaction at the end? And are the Execute SQL tasks in the sequence container executed under that same transaction?
2) I have a problem that the second query sometimes gives this error:"ErrorCode: -1073548784. ErrorDescription: Executing the query "XXXXX" failed with the following error: "Transaction (Process ID 73) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly."
The first query has locked the row for the transaction. How can the second query be deadlocked; shouldn't the transaction already hold a lock on the row? I'd understand if the first query sometimes failed, but I don't understand how the second query can fail.
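A side note, not from the original post: if the goal is just mutual exclusion between package executions, an application lock avoids holding an XLOCK row across several tasks entirely. A sketch, assuming the first Execute SQL task runs inside the shared transaction and the lock name is arbitrary:

declare @result int
exec @result = sp_getapplock
     @Resource    = 'MySsisPackageSingleton',   -- any agreed-upon name
     @LockMode    = 'Exclusive',
     @LockOwner   = 'Transaction',              -- released automatically at commit/rollback
     @LockTimeout = 0                           -- fail immediately if another run holds it
if @result < 0
    raiserror('Another execution of this package already holds the lock.', 16, 1)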
I was trying to extract data from the source server using an OLE DB Source and a SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What must be done so that even if the table being queried is locked, I don't experience a deadlock?
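There is no way to guarantee zero deadlocks while still taking locks, but for a pure extract the usual workarounds are to read without shared locks or to read a versioned snapshot. A sketch of the source query with placeholder names:

-- option 1: dirty reads, no shared locks taken (may read uncommitted data)
select col1, col2, col3
from dbo.source_table with (nolock)

-- option 2 (SQL Server 2005+): consistent snapshot that doesn't block or deadlock with writers
-- requires: ALTER DATABASE SourceDb SET ALLOW_SNAPSHOT_ISOLATION ON
set transaction isolation level snapshot
select col1, col2, col3
from dbo.source_table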
Is there a way to send out an email with deadlock information (victim query, winner query, process IDs, and the resources on which the deadlock occurred) as soon as a deadlock occurs in a database or at instance level? I currently have trace flag 1222 turned on, and I have also created an alert that sends me an email whenever a deadlock occurs, but it just says that a deadlock occurred; I then have to log into the SQL Server error log and review the information.
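One approach, sketched below for SQL Server 2008 and later: the built-in system_health Extended Events session already captures each deadlock graph, so an Agent job fired by the existing 1205 alert can pull the latest graph and mail it (the mail profile and recipient are placeholders):

select top 1
       xed.event_data.value('@timestamp', 'datetime2')           as event_time,
       xed.event_data.value('(data/value)[1]', 'nvarchar(max)')  as deadlock_graph
from (
      select cast(st.target_data as xml) as target_data
      from sys.dm_xe_session_targets st
      join sys.dm_xe_sessions s on s.address = st.event_session_address
      where s.name = 'system_health' and st.target_name = 'ring_buffer'
     ) as src
cross apply src.target_data.nodes('//RingBufferTarget/event[@name="xml_deadlock_report"]') as xed(event_data)
order by event_time desc

-- then, for example:
-- exec msdb.dbo.sp_send_dbmail @profile_name = 'DBA', @recipients = 'dba@example.com',
--      @subject = 'Deadlock detected', @body = '<paste or attach the graph here>'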
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the BEGIN TRAN and COMMIT it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the SET XACT_ABORT ON, and also tried with SET IMPLICIT_TRANSACTIONS OFF.
set XACT_ABORT ON
Begin distributed Tran

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TRANSACTIONMAIN set REPFLAG = 0 where REPFLAG = 1 and DONE = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.WBENTRY set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.FIXED set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.ALTCHARGE set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TSAUDIT set REPFLAG = 0 where REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
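Not a diagnosis, just an alternative worth testing: defining the remote box once as a linked server removes the repeated ad-hoc OPENDATASOURCE calls, though BEGIN DISTRIBUTED TRAN still needs MSDTC running and reachable on both machines (which can be fragile over dial-up). Names and credentials below are placeholders:

-- one-time setup on the local server
exec sp_addlinkedserver
     @server     = 'REMOTESRV',
     @srvproduct = '',
     @provider   = 'SQLOLEDB',
     @datasrc    = '10.10.10.171'
exec sp_addlinkedsrvlogin
     @rmtsrvname  = 'REMOTESRV',
     @useself     = 'false',
     @rmtuser     = 'remote_login',
     @rmtpassword = 'remote_password'

-- the remote updates then become four-part names inside the same transaction
update REMOTESRV.TRANSFERSTN.TSADMIN.TRANSACTIONMAIN set REPFLAG = 0 where REPFLAG = 1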
I have designed an SSIS package for an ETL process. In my package I have to read data from tables and then insert it into another table with the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout error.
I have increased and decreased the timeout to try to catch the error, but it is still there; I have also tried setting the CommandTimeout property to 0.
If I use 0 for CommandTimeout then I get "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction." Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Does anybody have an idea?
I have a sequence container; in it I have a script task that drops the existing tables. This sequence container is connected to another sequence container, and all of these are inside a For Each Loop container. When I run the package it works fine for the first loop, but it gives me an error on the second execution.
Message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
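One possible cause (an assumption, since the post doesn't show the procedure): the call over the linked server tries to promote the local transaction to a distributed MSDTC transaction, which fails in the Service Broker activation context. If atomicity across the two servers isn't actually required, SQL Server 2008+ lets you switch that promotion off per linked server; 'Server2' below is a placeholder for your linked server name:

exec sp_serveroption 'Server2', 'remote proc transaction promotion', 'false'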
In a high-traffic environment, deadlocks eventually occur as the number of data processes increases. How can deadlocks be avoided, minimized, and resolved? Please kindly provide scenario examples and samples of T-SQL code. Thanks much.
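Two habits prevent a large share of deadlocks: access tables and rows in the same order in every code path, and take an update-intent lock when reading a row you are about to modify, so two sessions never both hold shared locks and then fight over the upgrade. An illustrative sketch (the accounts table and columns are invented):

declare @account_id int, @amount money, @balance money
select @account_id = 1, @amount = 100      -- placeholder values

begin transaction
    -- UPDLOCK makes the read take an update lock instead of a shared lock,
    -- so a competing session blocks here instead of deadlocking on the upgrade later
    select @balance = balance
    from dbo.accounts with (updlock, rowlock)
    where account_id = @account_id

    update dbo.accounts
    set balance = @balance - @amount
    where account_id = @account_id
commit transaction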
I am trying to bcp a table (residing on my production server) to my local machine from the command prompt. The table I am trying to bcp has heavy updates and selects from about 70 users. The users complain that the system becomes slow. Does this have anything to do with my trying to bcp the table (it has 170,000 records)? Also, whenever I try to bcp this table, I only succeed after being chosen as the deadlock victim by SQL Server three or four times.
Any help regarding this will be very much appreciated TIA kinnu
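One low-impact option, sketched below with placeholder server and database names, is to export through a query that reads without shared locks (accepting that uncommitted rows may be copied) rather than letting bcp contend with the 70 users:

bcp "select * from ProdDb.dbo.BigTable with (nolock)" queryout BigTable.dat -n -S PRODSERVER -T

(-T uses a trusted connection; use -U/-P instead if you log in with SQL authentication.)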
I am unable to control the granularity of locks in our queries. We are running queries through MTS and are getting deadlocks.
The batch includes two inserts and one select query - all are hitting on only one table. This table has a unique clustered and a unique nonclustered index as well as a primary key.
Within the batch, I have given a table hint to set transaction isolation level to READCOMMITTED, ROWLOCK for the insert statements, like this
INSERT INTO ib_price with (READCOMMITTED,ROWLOCK)........
and the same for the Select statement.
SELECT retail_price, price_status_id FROM ib_price with (READCOMMITTED,ROWLOCK)
When I run sp_lock on the SPID, I get output indicating that SQL Server 7 is placing an IX lock on the table. I'm pretty sure this is a big contributor to the deadlock.
I get the deadlock when I try to run more than one client with similar insert parameters.
How can I control the granularity to just rowlocks?
I am getting a deadlock running a stored procedure from two machines. Looking at the error log (generated using trace flags 1204 and 3605), it seems the deadlock is on a key. But what I fail to understand is how SQL Server granted an exclusive lock on the key to both connections. The grant list shows that a lock with mode X is granted to both connections.
Can anyone help me resolve a deadlock with the following text?
Parallel query worker thread was involved in deadlock.
I am particularly interested in the details of the above line, as I have started getting deadlocks more frequently now, and when I look at the queries involved (blocker and victim) I see nothing that could cause a deadlock; they are update, insert, and select statements which were fine for a long time and all of a sudden started giving problems.
Thanks in advance, for any knowledge share my mail id scraval@hotmail.com
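That message generally means threads of a single parallel query deadlocked among themselves (an intra-query parallelism deadlock) rather than two of your statements colliding. A common first test, as a sketch with invented table and column names, is to force the suspect statement to run serially and see whether the deadlocks stop:

select order_id, status
from dbo.big_table
where status = 'OPEN'
option (maxdop 1)     -- run this statement without parallelism

-- or instance-wide (affects every query, so use with care):
-- exec sp_configure 'max degree of parallelism', 1
-- reconfigure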
Hi, when many users run some stored procedures I get some deadlocks. How can I avoid that? We run large stored procedures which sometimes use the same tables. What is the best way to use the transaction isolation level, index fill factors, procedure cache configuration, etc. to avoid this?
In addition, I am using MTS, and sometimes tempdb is also locked; is it a Microsoft bug (again)?
We had a deadlock every night at 9:00pm. I looked in Server/Current Activity -- Object Locks. The error log shows error 17824, severity 10, state 0, then DBCC TRACEON 208, SPID 28 and DBCC TRACEOFF 208, SPID 28. Current Activity -- Object Locks repeatedly shows "tempdb.dbo.sysobjects/sysindexes/syscolumns" for 28: sa.master.dbo /INSERT /SQL_servername (MS SQLEW).
Sorry for bombarding the forum with all these questions, but I am relatively new to SQL 2000.
I am getting a deadlock on the following procedure.
Important background information: 1. this is a multi-user, web-based call centre application; 2. this procedure loads up a new contact based on priority.
I see no reason how a deadlock could occur. Does anyone have any idea? Could it be something else that is locking up a resource used by this procedure?
CREATE PROCEDURE topcat.getNewContactInfo ( @contact_id int ) AS
BEGIN
    begin transaction

    declare @id int

    set @id = (SELECT TOP 1 _id
               FROM class_contact
               WHERE (status IS NULL OR status = 'New Contact' OR status = 'No Connect' OR status = 'callback')
                 AND (checked_in IS NULL OR checked_in <> 1)
                 AND (callback_date >= (getdate() + 1) OR callback_date IS NULL)
               ORDER BY priority DESC)

    UPDATE class_contact SET checked_in = 1 WHERE _id = @id

    SELECT TOP 1 * FROM class_contact WHERE _id = @id

    commit
END
GO
What I don't get is that this procedure only has one update statement, which is the only statement that could possibly hold a lock on another resource (I think). I can't see how a deadlock can happen in this case, since this procedure doesn't hold two resources at a time.
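One possible explanation, offered as an assumption: the SELECT takes shared locks while the UPDATE then needs exclusive locks, so two concurrent callers can each hold a shared lock on the same row and deadlock trying to upgrade it; they can also both pick the same contact. A sketch of the same body using UPDLOCK/READPAST so each caller claims a row exclusively up front (worth testing before relying on it):

declare @id int

begin transaction

select top 1 @id = _id
from class_contact with (updlock, readpast)
where (status is null or status = 'New Contact' or status = 'No Connect' or status = 'callback')
  and (checked_in is null or checked_in <> 1)
  and (callback_date >= (getdate() + 1) or callback_date is null)
order by priority desc

update class_contact set checked_in = 1 where _id = @id

select top 1 * from class_contact where _id = @id

commit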
Using SQL Server 2000 SP3a, I run the following in two Query Analyzer windows on the Northwind database; the second one always gets the deadlock, Msg 1205.

Window 1:

declare @cnt int
select @cnt = 5
while @cnt > 0
begin
    begin transaction
    select * from orders (updlock) where employeeid = 1
    update orders set employeeid = 1 where employeeid = 1
    waitfor delay '00:00:03'
    commit
    select @cnt = @cnt - 1
end

Window 2:

declare @cnt int
select @cnt = 5
while @cnt > 0
begin
    begin transaction
    select * from orders (updlock) where employeeid = 1 and customerid = 'ERNSH'
    waitfor delay '00:00:02'
    commit
    select @cnt = @cnt - 1
end

The query in the first window gets 123 rows and places update locks on them, then updates them and commits. The query in the second window gets a subset (about 5) of the results that window 1 gets, also trying to place update locks on the same rows. Shouldn't the query in window 2 just wait for the transaction in window 1 to finish? Why would it deadlock? You can also get rid of the delay in the second window and it will deadlock faster. Thanks in advance. Eugene
Hi.

create table joe (c1 integer not null, c2 integer not null)

Two sessions:

Session 1:
BEGIN TRAN
insert into joe (c1,c2) values (1,2)

Session 2:
BEGIN TRAN
insert into joe (c1,c2) values (3,4)

Session 1:
select * from joe

Session 2:
select * from joe

One of the sessions gets a deadlock victim message.

thanks,
Joe
I have a table that needs to be repopulated every 30 minutes from another table that is recreated from scratch just before. What I did was this:

CREATE PROCEDURE BatchUpdProducts AS
begin transaction
    delete products
    insert into products
    select * from productsTemp
commit transaction
GO

This takes about 30 seconds to run. I tried doing it with a cursor, row by row, but it took about 30 minutes to run instead. The problem with the fast approach is that once in a while I get a deadlock error in different areas trying to access the products table. Using SQL Server 2000, by the way. Any ideas?
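One way to trade the intermittent deadlocks for plain blocking is to take an exclusive table lock at the start of the refresh, so readers simply wait the ~30 seconds instead of interleaving with the delete/insert. A sketch of the same procedure with that hint (worth testing against the real workload):

CREATE PROCEDURE BatchUpdProducts AS
begin transaction
    -- grab the exclusive table lock immediately; readers wait instead of deadlocking
    delete products with (tablockx)
    insert into products with (tablock)
    select * from productsTemp
commit transaction
GO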
I have a full database backup up to the previous day and a transaction log file with today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential database backup and no transaction log backup.
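The usual sequence is: back up the tail of the log if the log file is still readable, restore the full backup WITH NORECOVERY, then apply the log backup(s) and recover on the last one. A sketch with placeholder database and file names:

-- 1. if the crashed database's log file is still readable, capture the tail first
backup log MyDb to disk = 'C:\backup\MyDb_tail.trn' with no_truncate

-- 2. restore the previous day's full backup, leaving the database non-recovered
--    (REPLACE is needed if you are restoring over the damaged database)
restore database MyDb from disk = 'C:\backup\MyDb_full.bak' with norecovery, replace

-- 3. apply today's log backup(s) in order; recover on the last one
restore log MyDb from disk = 'C:\backup\MyDb_tail.trn' with recovery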
I'm receiving the below error when trying to implement Execute SQL Task.
"The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION." This error also happens on COMMIT as well and there is a preceding Execute SQL Task with BEGIN TRANSACTION tranname WITH MARK 'tran'
I know I can change the TransactionOption property from "Supported" to "Required"; however, I want to mark the transaction. I was copying the way the Import/Export Wizard does it, but I'm unable to figure out why it works and why mine doesn't.
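One likely reason, stated as an assumption rather than a diagnosis: each Execute SQL Task opens its own connection unless RetainSameConnection=True is set on the connection manager, so the COMMIT/ROLLBACK runs on a session that never issued the BEGIN TRANSACTION. If the mark is the only reason for splitting the statements across tasks, keeping them in one batch avoids the problem entirely; a sketch with a placeholder statement:

begin transaction nightly_load with mark 'nightly load'
    update dbo.some_table set processed = 1 where processed = 0   -- placeholder work
if @@error <> 0            -- in real code, check @@ERROR after each statement
    rollback transaction nightly_load
else
    commit transaction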
I created a calculated measure in the cube, something like this: ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],[Measures].[Transaction Amount]), to get only spend transactions. Now I want to slice this measure by the same hierarchy to find the amount distribution across the different transaction types under the spend category. But the query behaves as if the measure has no relationship with the hierarchy.
You can think of it as the query below:

WITH MEMBER SPEND AS
    ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND], [Measures].[Transaction Amount])
SELECT
    NON EMPTY { SPEND } ON 0,
    NON EMPTY ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent]) ON 1
FROM [CUBE]