Hi,
Very new to locking/transactions, so any help appreciated with the following:
I want to create a stored procedure that reads an integer value from a single-row table, returns it as output from the procedure to my web form, and increments it by 1 (for the next call of the procedure).
I need to be absolutely sure that while the value is being read and updated, another call to the procedure cannot read or update it until the current call has completed, i.e. some sort of locking on the value.
Question: if I wrap my SELECT and UPDATE statements in a BEGIN TRAN and COMMIT TRAN, will this be enough? Is the following sufficient:
CREATE PROCEDURE sp_test
@retval int output
AS
BEGIN TRAN
SELECT @retval = POID FROM OMS_SYSDATA
UPDATE OMS_SYSDATA SET POID=POID+1
COMMIT TRAN
GO
Or do I need to start using things like HOLDLOCK/UPDLOCK within statements?
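For reference, a minimal sketch of the UPDLOCK/HOLDLOCK variant, plus the single-statement form that removes the gap between the read and the write altogether. It assumes OMS_SYSDATA really does hold only one row; the second procedure name is just for illustration.

CREATE PROCEDURE sp_test
    @retval int OUTPUT
AS
BEGIN TRAN
    -- UPDLOCK/HOLDLOCK keeps the row locked from the read until COMMIT,
    -- so a second caller waits here instead of reading the same value
    SELECT @retval = POID FROM OMS_SYSDATA WITH (UPDLOCK, HOLDLOCK)
    UPDATE OMS_SYSDATA SET POID = POID + 1
COMMIT TRAN
GO

-- Alternative: read and increment in one atomic statement, no explicit transaction needed.
-- @retval picks up the value of POID as it was before the update.
CREATE PROCEDURE sp_test2
    @retval int OUTPUT
AS
UPDATE OMS_SYSDATA SET @retval = POID, POID = POID + 1
GO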
I'm using SQL Server 2005. I'm creating a transaction and enlisting the commands inside VB.NET code as well as surrounding the T-SQL in a "Begin Trans --- Commit Trans" block. I also have the isolation level set to the highest (Serializable) in the VB.NET code and the sprocs.

I'm running 4 instances of the app on one server and 4 instances of the app on another server. I am handling the lockouts just fine and writing them to an error table within the db. The app keeps spinning and producing data just fine.

There are 3 places where the locking may occur within the app. Two of them are just fine (a select and an insert); the app will eventually cycle around and pick up the records that may have been locked out. My concern is the update portion, which updates stats based off the insert done previously. If the records never get updated, the only way I would know whether they were processed would be to check our error table to see if the record exists.

I would like to know if there is any way to cut down on the number of lockouts (which may be perfectly normal) and to get a way to update that table I just talked about. Should I be using different isolation levels, etc.? Anything of importance might be useful.
I have a stored procedure which inserts records. In it I have a Begin Tran, so if it fails I can run a rollback. When I'm inserting a big number of records it takes exclusive (X) locks and starts blocking other users.
How can I know when a table is locked by a transaction?
I have found the procedure sp_lock, but it doesn't show me the table that the transaction is locking, and when I execute an UPDATE statement through an application, the application crashes while waiting for the transaction to finish.
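In case it is useful, a sketch of one way to map sp_lock output back to a table name (the ObjId value below is made up; OBJECT_NAME must be run from the database shown in the dbid column). On SQL Server 2005 and later, sys.dm_tran_locks can show the same thing in one query.

EXEC sp_lock                       -- note the spid, dbid and ObjId columns

SELECT OBJECT_NAME(1977058079)     -- hypothetical ObjId copied from the sp_lock output

-- SQL Server 2005 and later:
SELECT request_session_id,
       DB_NAME(resource_database_id)               AS database_name,
       OBJECT_NAME(resource_associated_entity_id)  AS locked_object,
       request_mode,
       request_status
FROM   sys.dm_tran_locks
WHERE  resource_type = 'OBJECT'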
I have a SQL Server 7 machine here, but this problem will occur with any version I suppose. We have a web application, an NT server, and client machines.
I have a stored procedure which has:
begin transaction
    statement1
    statement2
    and so on... these could be any DML statements or another procedure call
commit transaction
end
We cannot commit this transaction until the process is complete, i.e. until all the statements have been processed.
Now the problem is that when one of the statements takes a long time, the page times out, keeping that transaction open.
In the meantime other users are making their changes in the database; this goes on for a while and nobody notices it.
Then the connection to the application was broken for some reason, the transaction which was open was cancelled, and everything rolled back from the time the first transaction failed (timed out), so nothing that the other users did is in the database.
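For what it's worth, a minimal sketch of how such a procedure is often structured on SQL Server 7.0, where TRY/CATCH is not available (the table and column names here are made up). SET XACT_ABORT ON makes a run-time error or a client timeout roll the whole transaction back instead of leaving it open, and @@ERROR is checked after each statement so a failure never commits partial work.

CREATE PROCEDURE usp_process_batch
AS
SET NOCOUNT ON
SET XACT_ABORT ON        -- a timeout or run-time error rolls the open transaction back

BEGIN TRANSACTION

    UPDATE Orders SET Status = 'P' WHERE Status = 'N'             -- hypothetical statement 1
    IF @@ERROR <> 0 GOTO on_error

    INSERT INTO OrderAudit (OrderID, ChangedAt)                   -- hypothetical statement 2
    SELECT OrderID, GETDATE() FROM Orders WHERE Status = 'P'
    IF @@ERROR <> 0 GOTO on_error

COMMIT TRANSACTION
RETURN 0

on_error:
    ROLLBACK TRANSACTION
    RETURN 1
GO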
Hi all, I am writing an SP which includes insert and update statements. The SP is working fine, but when I tried to make it a single transaction it is not working (it waits indefinitely at the second insert statement) and the tables are getting locked. What could be the possible reason for tables getting locked indefinitely? I tried with the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE option. There are several INSERT INTO statements, some of them on the same tables again and again. Any help would be greatly appreciated. Thanks.
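One way to see what the second INSERT is actually waiting on (a sketch, assuming SQL Server 2000/2005 where master..sysprocesses is available) is to run this from a second connection while the procedure is stuck:

-- sessions that are currently blocked, and the spid that is blocking them
SELECT spid, blocked, waittype, lastwaittype, waitresource, cmd
FROM   master..sysprocesses
WHERE  blocked <> 0

EXEC sp_who2      -- similar picture; the blocking spid appears in the BlkBy column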
We have a nightly script that drops and rebuilds a table. The problem is that before the script has had a chance to finish building the primary key, it fails because some other process gains access to the table.
My question is this: how do I lock a table for the duration of the transaction?
Here is what I think would work:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
DROP TABLE <table>... CREATE TABLE <table>...
INSERT INTO <table> WITH (TABLOCKX) ...
ALTER TABLE <table> ADD CONSTRAINT <pk_name> PRIMARY KEY ...
COMMIT
Any insight as to whether I am correct, or, if not, what the correct way would be?
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the Set XACT . . ., and also tried with Set Implicit_Transactions off.
set XACT_ABORT ON

begin distributed tran

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1 and DONE = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1

update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

update TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thanks.
I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and based on that it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it gives me a timeout error.
I have increased and decreased the timeout to catch the error, but it is still there; I have also tried setting the CommandTimeout property to 0.
If I use 0 for CommandTimeout then I get "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Does anybody have an idea?
I have a sequence container, and in my sequence container I have a script task to drop the existing tables. This sequence container is connected to another sequence container, and all of these are in a For Each Loop container. When I run the package it works fine for the first loop, but it gives me an error on the second execution.
The message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
i am getting this error "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.".
my transations have been done using LINKED SERVER. when i manually call the store procedure from Server 1 it works but when i call it through Service broker it dosen't work and gives me this error.
I have a full database backup up to the previous day and a transaction log file of today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential database backup or other transaction log backup.
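A minimal sketch of the usual sequence (the database name and paths are made up). The full backup is restored WITH NORECOVERY so the database keeps waiting for log, then the log backup is restored WITH RECOVERY. If all that survived is the live .ldf file rather than a log backup, a tail-log backup has to be taken first.

-- 0. only if no log backup exists yet and the log file itself survived:
-- BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_tail.trn' WITH NO_TRUNCATE

-- 1. restore the previous day's full backup, leaving the database able to accept log restores
RESTORE DATABASE MyDB
FROM DISK = 'D:\backup\MyDB_full.bak'
WITH NORECOVERY, REPLACE           -- REPLACE overwrites the existing (crashed) database

-- 2. restore today's transaction log backup and bring the database online
RESTORE LOG MyDB
FROM DISK = 'D:\backup\MyDB_log.trn'
WITH RECOVERY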
I'm receiving the below error when trying to implement Execute SQL Task.
"The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION." This error also happens on COMMIT as well and there is a preceding Execute SQL Task with BEGIN TRANSACTION tranname WITH MARK 'tran'
I know I can change the TransactionOption property from "Supported" to "Required"; however, I want to mark the transaction. I was copying the way the Import/Export Wizard does it, but I'm unable to figure out why that works and mine doesn't.
I created a calculated measure in the cube, something like this: ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],[Measures].[Transaction Amount]), to get only spend transactions. Now I want to slice this measure by the same hierarchy to find the amount distribution across the different transaction types under spend transactions, but the query behaves as if the calculated measure has no relationship with the hierarchy.
You can think of it as the query below:
WITH MEMBER SPEND AS
    ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],[Measures].[Transaction Amount])
SELECT
    NON EMPTY {SPEND} ON 0,
    NON EMPTY ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent]) ON 1
FROM [CUBE]
How do I make use of begin transaction and commit transaction in SSIS?
As I am not able to commit changes due to certain update commands, I want to write explicit begin and commit statements. But when I use begin and commit in an OLE DB Command stage it throws an error as follows:
HResult: 0x80004005
Description: "Syntax error or access violation".
It's definitely not a syntax error, as I executed the same statement in SQL Server. Also, when I use it in an Execute SQL task outside the data flow container it doesn't throw any error, but that still doesn't serve my purpose of saving/committing the update changes in the database.
I am having some problems with SSIS transactions, even though I tried to imitate the concept that Jamie presented at http://www.sqlservercentral.com/columnists/jthomson/transactionsinsqlserver2005integrationservices.asp. My workflow is as follows:
For Each ADO.Record in Oracle (transaction=not supported)
    If (Certain_Field_Value = 'A')
        Lookup Data in SQL DB with values from Oracle (transaction=not supported)
        Do Sequence A (start a transaction, transaction=required)
            INSERT/UPDATE some records in SQL DB (transaction=supported)
        Finish Sequence A (transaction should stop here)
        UPDATE Oracle DB (Execute SQL Task, transaction=not supported)
    If (Certain_Field_Value = 'B')
        Lookup Data in SQL DB with values from Oracle (transaction=not supported)
        Do Sequence B (start a transaction, transaction=required)
            INSERT/UPDATE some records in SQL DB (transaction=supported)
        Finish Sequence B (transaction should stop here)
        UPDATE Oracle DB (Execute SQL Task, transaction=not supported)
    If (Certain_Field_Value = 'C')
        ------------ ------------
End ForEach Loop

My requirements are that I want a separate transaction for each Sequence A, B, C, etc. If the Sequence A transaction fails, the others should still continue, each with its own transaction. But I am getting an OLE DB error in the next task (e.g. in the Certain_Field_Value = 'B' branch, "Lookup Data in SQL DB with values from Oracle"); the error message is ".......Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction." What is it that I am doing wrong?
Regards, KyawAM
Hi everyone, I have a question about SQL Server 2005. I have written an ASP.NET 2.0 web application and it uses SQL Server 2005 as its database. In the last few days I noticed that the app is sometimes down. To analyze the problem I looked at the Activity Monitor in SQL Server Management Studio, and I can see approximately 170 process infos there. The column values of the process infos are:
Process ID: unique ID and a red down-arrow icon
User: my user
Database: my database
Status: sleeping
Command: AWAITING COMMAND
Application: .Net SqlClient Data Provider
When I click Locks by Object, I can see the IDs of the process infos. Again, some of the columns:
Type: DATABASE
Request type: LOCK
Request state: GRANT
Owner type: SHARED_TRANSACTION_WORKSPACE
Database: my database
So my question is: does this mean that I have locked the db? How are they handled? For example, I have a Windows service which does checks in the db every 10 seconds, and I can see that each check generates a new process info. Is this usual, or am I doing something wrong? Thanks for the help. Bye
When I run a select statement such as: select 'X' from table1 where c1 = condition, locking on indexes behaves as expected.
However, if I run select 1 from table1 where c1 = condition, locking on indexes goes wild, locking pages and rows on indexes that are not even referenced in the query. Any ideas why?
Hello All, I'm just migrating from Oracle to SQL Server. Can anybody tell me how I can effectively use row-level locking in SQL Server? If two users are attempting to modify the same record, how can I deal with it in the back end (SQL)? Thanks in advance. Suchee
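One common pattern for two users editing the same row (a sketch only; the table, column, and variable names are made up, and it assumes SQL Server 2000 or later where the rowversion/timestamp type is available): read the row together with its rowversion value, and make the UPDATE conditional on that value so the second writer is detected instead of silently overwriting.

-- CREATE TABLE Customer (CustomerID int PRIMARY KEY, Name varchar(100), RowVer rowversion)

DECLARE @CustomerID int, @NewName varchar(100), @RowVerReadEarlier binary(8)

-- read: fetch the row and remember its version
SELECT @RowVerReadEarlier = RowVer FROM Customer WHERE CustomerID = @CustomerID

-- write: only succeeds if nobody changed the row in the meantime
UPDATE Customer
SET    Name = @NewName
WHERE  CustomerID = @CustomerID
  AND  RowVer = @RowVerReadEarlier

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user - reload and retry'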
I have an application in production (SQL 6.5) which causes locking that times out my other processes. I want to capture the time the locking takes place. I found in BOL that I can get the time a deadlock occurs using trace flag 3605 in SQL 7.0. If I have to use the trace flag, is it OK with DBCC TRACEON, or is the -T option at startup recommended? Any advice would be appreciated. TIA, ram
I have used DTS in the past to copy information in certain tables in production over the top of those same tables in test. However, the process is now failing. Does DTS require an exclusive lock on the source table, as well as the destination table during the export process? Will shared locks on the table I need to copy prevent DTS from completing the process?
We are running out of locks while updating a particular table (table name = history, rows = 25,000,000) in SQL Server 6.5.
LE threshold maximum: 200
LE threshold minimum: 20
LE threshold percentage: 0
Locks: 0
I have also included the stored procedure, which we use to update the history table.
As you can see from the first four lines, we ran this SP 4 times, processing around 6 million rows at a time. It runs out of locks once it gets to around 5.5 to 6.5 million rows. Is there a way of locking the table so that this SP can be run just once, effectively processing all 26 million rows in one go?
Any help will be greatly appreciated.
Winston
--declare minihist cursor for (select uin, uan, mailingdate from history (tablock) where rowno between 5635993 and 12000000)
--declare minihist cursor for (select uin, uan, mailingdate from history (tablock) where rowno between 12000001 and 19000000)
declare minihist cursor for
    (select uin, uan, mailingdate from history (tablock) where rowno > 19000000)

open minihist
fetch next from minihist into @huin, @huan, @hmailingdate
while (@@fetch_status <> -1)
begin
    if (@@fetch_status <> -2)
    begin
        select @mailtot = 1
        select @mail12m = 0

        /*** Get the gender ***/
        select @sex = gender from name where uin = @huin

        /*** Calculate if mailed in the last twelve months ***/
        if (@hmailingdate <> null) and (@hmailingdate > '19980524')
            select @mail12m = @mail12m + 1

        /*** Get info for this uan from address_summary ***/
        select @mailtot = (@mailtot + mailed_total),
               @mail12m = (@mail12m + mailed_12months),
               @lastday = last_date
        from address_summary where uan = @huan

        /*** Insert a row into address_summary if doesn't exist ***/
        IF @@rowcount = 0
Hi, we are running SQL 6.5 in production and I'm getting a blocking problem; mostly I just kill the process. Whenever I check the SQL error log I see this message:
Error: 17824, Severity: 10, State: 0
Unable to write to ListenOn connection '1433', loginname 'XXXY', hostname 'DT SA'.
OS Error: 64, The specified network name is no longer available.
I'm trying to use pessimistic row locking in SQL Server to get the following result.
When a customer form is opened, the row should be locked for writing, and this lock should be held until the user closes the customer form.
I cannot use transactions because there can be more than one customer form open in the same app; in ADO a connection either is in a transaction or is not, and nested transactions are not supported.
How can I keep this row locked in SQL Server until I unlock it or the connection is broken (in case of problems on the client machine)? And how can I see from another machine whether this row (customer) is already locked, so I can open it read-only?
For the moment I'm using extra fields that hold the information about whether the customer is locked and by whom, but that's at application level, not at DB level.
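One alternative to the extra "locked by" fields (a sketch, assuming SQL Server 2000 or later): sp_getapplock can hold a named, application-defined lock for the lifetime of the session, so it needs no open transaction, and it is released automatically if the connection is broken.

-- when the customer form opens: try to take the lock, but don't wait if someone else holds it
DECLARE @result int
EXEC @result = sp_getapplock
        @Resource    = 'Customer:12345',   -- hypothetical key, e.g. 'Customer:' + the customer id
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',          -- tied to the connection, not to a transaction
        @LockTimeout = 0

IF @result < 0
    PRINT 'Customer is already being edited elsewhere - open the form read-only'

-- when the form closes: release the lock explicitly
EXEC sp_releaseapplock @Resource = 'Customer:12345', @LockOwner = 'Session'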
Ok, this may be a brain-dead question, but I can't seem to figure out what I am doing wrong. I have a stored proc which has multiple inserts, updates, and deletes. However, I do not want to commit until the end of the procedure, so near the end, if no error has been returned by a particular insert, update, or delete, I tell it to COMMIT TRAN. My problem is that it seems to run and run and run. If I take out the Begin Tran, boom, it runs fast and completes.
But then if there is a problem near the end, those other statements will already be committed, and I wish to avoid that. I have an error routine at the end of the SP, and an IF statement to GOTO sp_error: if @@error produces a non-zero value. I am sure I am doing something goofy but can't seem to see it; I know it comes down to the Begin Tran. Is it that I have too many uncommitted transactions? Or perhaps I am locking something up. I know it's hard to tell without seeing what I am doing, but is there something simple to remember about using explicit transactions that I am forgetting? Any help is appreciated.
Hello. I am using SQL Server 2000 to create a multi-user program that accesses data. The problem is that multiple users will update and select data at the same time on the same table.
Is there a way to avoid deadlocks? I have heard about two ways: using a temporary table to store data and then writing the data only when the user has finished the update, or using XML to write the data to an XML file that is stored locally, doing the updates on the file, and then after completion inserting the XML file into the database.
Does anybody know much about these approaches? Do you know where I can find code for this?
Hi all, firstly I would like to apologise because I don't actually use SQL or know diddly squat about it. I am a network administrator and have a problem with a user's domain account getting locked out every time he starts his SQLAgent service (we are running a Windows 2003 domain). I know this is a very vague post and I am sorry for that; I am just after some general ideas/information on why this keeps happening. Any help greatly appreciated.
deepak writes "how to lock the record while using a query "select id,name from students" i want to know various locks in sqlserver and and each of its use in insert ,update,delete and select etc. i am using it from visual basic 6.0