My VB.NET application (with a SQL Server backend) currently allows more than one user to look at a particular record at the same time. This is not a problem unless both of those users also try to update that record. One user's changes then overwrite the other's.
I've been reading up on locking hints but my database knowledge is a little scant and I'm also rather dense and need things spelling out for me!! So I have a few questions that I hope someone can help with:
If I add an updlock to my update SQL statement, this would allow both users to view the record but would only allow one user's changes through. Is that correct?
For the other user, would SQL Server return an error message that I can use to tell the user that their update has not worked?
Would I have to get my VB.NET application to re-fetch the record so that the user whose update failed can see the changes made by the other user and reapply their own changes?
Does the UPDLOCK get released once the record is updated, or do I need to release it explicitly somehow?
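For reference, my rough understanding of the pattern is something like this (the table, column, and variable names are made up, not my real schema):

DECLARE @CustomerID int, @OldName nvarchar(50), @NewName nvarchar(50)
SELECT @CustomerID = 1, @NewName = N'New name'

BEGIN TRANSACTION

-- UPDLOCK takes an update lock that is held until the transaction ends,
-- so a second session running this same SELECT would wait here rather
-- than read the row and later overwrite my changes
SELECT @OldName = Name
FROM dbo.Customers WITH (UPDLOCK)
WHERE CustomerID = @CustomerID

UPDATE dbo.Customers
SET Name = @NewName
WHERE CustomerID = @CustomerID

COMMIT TRANSACTION   -- the update lock is released here, not before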
Using merge replication, I have a table that is filtered using the HOST_NAME() function. The filter also makes use of a function (as HOST_NAME() is overridden to return some complex data).
Everything replicates and filters just fine, but when I add a join filter on a different table (the join filter is a simple foreign key join) I get the following error when the snapshot agent is run:
Message: Conflicting locking hints are specified for table "fn_GetIDList". This may be caused by a conflicting hint specified for a view.
Command Text: sp_MSsetup_partition_groups
Parameters: @publication = test1
fn_GetIDList is the function used in the original filter.
This article instructed me on how to process rows from a table used as a data queue for multiple processes.
http://www.mssqltips.com/tip.asp?tip=1257
I tested this against the AdventureWorks database (SQL 2005) using multiple connections inside SQL Server Management Studio.
Connection1:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) --skips over locked rows
--COMMIT TRANSACTION
Connection2:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) --skips over locked rows
COMMIT TRANSACTION
This works like I want: connection 2 skips over the row locked by connection 1 and gets the next available record from the table/queue. However, when I add an ORDER BY clause to each statement, connection 2 is blocked waiting for connection 1 to commit. (This is not what I want.)
Connection1:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) order by DueDate
--COMMIT TRANSACTION
Connection2:
BEGIN TRANSACTION
SELECT TOP 1 * FROM Production.WorkOrder WITH (updlock, readpast) order by DueDate --is blocked until connection 1 commits transaction
COMMIT TRANSACTION
How do I prevent blocking when using these locking hints with ORDER BY?
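One thing I'm planning to try (I have not verified this yet) is giving the ORDER BY column its own index and pointing both connections at it, so the TOP 1 can come from an ordered index scan that READPAST can skip through, instead of a sort over the whole table:

-- hypothetical index; AdventureWorks does not ship with one on DueDate as far as I know
CREATE NONCLUSTERED INDEX IX_WorkOrder_DueDate ON Production.WorkOrder (DueDate)

BEGIN TRANSACTION
SELECT TOP 1 *
FROM Production.WorkOrder WITH (updlock, readpast, INDEX(IX_WorkOrder_DueDate))
ORDER BY DueDate
--COMMIT TRANSACTION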
I am kind of confused about the way SQL Server 2000 handles the hints that users supply with their SQL statements. From BOL, it seems that one can specify them with "WITH (...)" clauses in SQL statements, known as table hints. Sometimes, multiple uses of this form in a statement is OK. Then there is the OPTION clause for specifying statement hints. However, the documentation on the OPTION clause discourages their use. Being relatively new to SQL Server and still learning about it, what is the general practice? Use hints or not? And if so, how (through WITH or OPTION clauses)? Cheers!
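For context, the two forms I'm asking about look roughly like this (using Northwind just as an example):

-- table hint: specified per table with WITH (...)
SELECT OrderID, CustomerID
FROM Orders WITH (NOLOCK)
WHERE CustomerID = 'ALFKI'

-- statement (query) hint: specified once at the end with OPTION (...)
SELECT OrderID, CustomerID
FROM Orders
WHERE CustomerID = 'ALFKI'
OPTION (MAXDOP 1)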
I am running SQL 7 SP2 and am noticing that the query processor table scans when I issue a BETWEEN 'date1' AND 'date2' predicate instead of using the datetime index. If I put in the index hint (INDEX = ix_datetimeXXXX) the query runs fine. My question is: does this index hint restrict the use of other indexes in the query, and secondly, how can I specify multiple index hints? Thanks in advance.
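From what I can tell so far (unverified on SQL 7, so please correct me), a single index hint and a list of index hints would look like this; the table, column, and second index name below are placeholders:

-- single index hint (what I'm using today)
SELECT *
FROM MyTable (INDEX = ix_datetimeXXXX)
WHERE SomeDate BETWEEN 'date1' AND 'date2'

-- multiple index hints: list the index names inside INDEX(...);
-- I believe this asks the optimizer to use all of the listed indexes
SELECT *
FROM MyTable (INDEX (ix_datetimeXXXX, ix_otherindex))
WHERE SomeDate BETWEEN 'date1' AND 'date2'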
Whilst running a query I received the error below: Cannot create a worktable row larger than allowable maximum. Resubmit your query with the ROBUST PLAN hint.
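As far as I can tell, the hint goes in an OPTION clause at the end of the statement, something like this (the tables here are just placeholders for my real query):

SELECT t1.*
FROM SomeLargeTable t1
INNER JOIN AnotherTable t2 ON t2.ID = t1.ID
OPTION (ROBUST PLAN)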
I am having problems with what seems to be a very easy query. For some reason SQL Server is trying to do nested loops instead of a hash join. I tried to force the hash join using a join hint.
Query 1
select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, getdate()), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current
This query gives an error:
Msg 8622, Level 16, State 1, Line 1
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.
The second query is not really what I want, but it illustrates that everything works fine when getdate() is not used.
Query 2
select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, A.date_enrolled), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current
Is there some problem with using the getdate() function? It works fine in SQL Server 2000.
This problem occurs on SQL Server 2005 SP2 (9.00.3050.00 (X64)) and on 9.00.2050.
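A workaround I'm considering (untested, just a sketch) is to capture getdate() into a variable first, so the derived table no longer references the non-deterministic function directly:

declare @today datetime
set @today = getdate()

select *
from DIM_DATE DD
inner hash join (
    select A.student_key,
           CONVERT(int, CONVERT(varchar, COALESCE(A.date_withdrawn, @today), 112)) AS date_withdrawn_current
    FROM FACT_STUDENT AS A
) SSE on DD.date_key = date_withdrawn_current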
I noticed that Books Online says the following: "Note: The SQL Server query optimizer automatically makes the correct determination. It is recommended that table-level locking hints be used to change the default locking behavior only when necessary." Also, in another place in Books Online, it says: "The table hints are ignored if the table is not accessed by the query plan." From the above, it seems that using locking hints is not going to guarantee that SQL Server will follow them. Is this true?
Why does SQL Server work as follows when I do not provide any join hints? It looks like a HASH join would be the best plan, but SQL Server does not use one. What kind of join method is the optimizer using here?
Thanks in advance, Wonhyuk William Chung wonhyukc@usa.net MCSE/ MCT
-----------
use northwind
go
select orderid, CompanyName
--productname,
from orders o
inner join customers c on o.customerID = c.CustomerID
/*
Table 'Orders'. Scan count 91, logical reads 184, physical reads 0, read-ahead reads 0.
Table 'Customers'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
.0553
*/
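If it helps, the hinted variants I was comparing against look like this (as far as I know, either the per-join hint or the OPTION form can be used to force a hash join):

use northwind
go
-- force a hash join on this particular join
select orderid, CompanyName
from orders o
inner hash join customers c on o.customerID = c.CustomerID

-- or restrict the whole statement to hash joins with a query hint
select orderid, CompanyName
from orders o
inner join customers c on o.customerID = c.CustomerID
option (hash join)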
Hi all, I'm trying to run queries on relatively small tables (a few hundred thousand rows) with subqueries of counts per primary-key column, such as:
(ColA in tableA is the primary key)
select *
from tableA p
where exists (
    select 1
    from (
        select ColA, count(1) cnt
        from TableA
        group by ColA
        having count(1) > 1
    ) t
    where t.ColA = p.ColA
)
order by some_col
My problem is that SQL Server 2005 SP5 does not materialize the internal subquery properly or execute it beforehand; it gets confused as heck and pegs the CPUs at 100% forever.
What hints can I use to solve this issue? I've tried to use a "WITH ..." clause to prepare/materialize the table up front, with no luck: one version of the statement pegged one CPU at 100%, while the other pegged ALL CPUs at 100% -- I don't remember which was which.
My only solution right now has been to create these subqueries as PHYSICAL tables -- this solves the problem, but it entails creating a lot of unnecessary objects.
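A middle ground I'm thinking about (just a sketch, reusing the same made-up table names from above) is to materialize the duplicate keys into a temp table instead of a permanent one:

-- materialize the keys that appear more than once into a temp table first
select ColA, count(1) as cnt
into #dups
from TableA
group by ColA
having count(1) > 1

-- then join against the small temp table instead of the correlated subquery
select p.*
from tableA p
where exists (select 1 from #dups t where t.ColA = p.ColA)
order by some_col

drop table #dups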
There is a trace flag that tells SQL Server to ignore index hinting in incoming queries. I'm having a Monday morning problem and I can't remember the trace number nor find it in my notes. Can anyone else come up with it?
I have a DTS package that loads txt files into SQL Server. I have 200 txt files with the exact same format.
I just want to know if I can write a stored procedure that takes a parameter and loads these txt files, because I don't want to create 200 packages or 200 sources to load 200 txt files.
Say: exec SP_loadTXT txt1
Or should I use BULK INSERT?
Any approach is fine; any suggestions are fine too.
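If BULK INSERT is the way to go, I'm picturing something roughly like this (the procedure, staging table, and file names are invented, and the terminators would need to match my txt format):

CREATE PROCEDURE SP_loadTXT
    @FileName varchar(260)
AS
BEGIN
    DECLARE @sql nvarchar(1000)
    -- BULK INSERT does not accept a variable for the file name,
    -- so build the statement dynamically
    SET @sql = N'BULK INSERT dbo.MyStagingTable '
             + N'FROM ''' + @FileName + ''' '
             + N'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)
END

I would then call it once per file, e.g. exec SP_loadTXT 'C:\files\txt1.txt' (path invented).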
I've got a SELECT WITH (UPDLOCK, ROWLOCK) WHERE followed by an UPDATE WHERE statement. The results of the SELECT statement are deserialized in C# and updates are made to the deserialized object. Then the object is serialized back into the table with the UPDATE statement. I've got this code running within a transaction scope with the ReadCommitted isolation level.
My service receives requests to update data, and the requests can come in on different threads. What I'm seeing is that once in a while, the log messages from my application indicate that two different threads are able to issue the above SELECT statement and both receive results. This is a problem, since the thread that issues the last UPDATE will overwrite the changes made by the first. Each thread has its own connection and transaction scope.
I've researched all over the place and have tried a few different things, but everything points to the fact that query hints are just hints and that SQL Server may or may not pay attention to them. If that's the case, how am I supposed to perform a SELECT with the intention of updating so that no one else can do the same? I haven't tried table-level locking, but I'd really like to avoid that if possible.
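One alternative I'm considering (only a sketch; the table, key, and resource name are invented) is to serialize the read-modify-write with an application lock via sp_getapplock, instead of relying on the row hints alone:

DECLARE @result int, @payload varbinary(max), @newPayload varbinary(max)

BEGIN TRANSACTION

-- exclusive app lock keyed on the logical row; a second thread asking for
-- the same resource waits here until this transaction commits or rolls back
EXEC @result = sp_getapplock
     @Resource = 'MyObjects_12345',
     @LockMode = 'Exclusive',
     @LockOwner = 'Transaction',
     @LockTimeout = 10000

IF @result >= 0
BEGIN
    SELECT @payload = Payload
    FROM dbo.MyObjects WITH (UPDLOCK, ROWLOCK)
    WHERE ObjectID = 12345

    -- ... deserialize, modify, and re-serialize in the application ...

    UPDATE dbo.MyObjects
    SET Payload = @newPayload
    WHERE ObjectID = 12345
END

COMMIT TRANSACTION   -- releases the app lock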
I would like to implement a kind of standard package which can be used in all other processes and will be started using variables.
But I do not know where the best practice is to store this kind of package, because we
- would like to use them in Dev and in "Real" (production) without having to change anything in the other processes
- are storing the packages in the folders of the package store
and as far as I understand, I would have to share the package store with all developers so that they would be able to do this?
In that case I would be better off choosing another folder with defined access rights, I think...
Or would it be better to spend some time developing a custom component? But this component would work with recordsets rather than the standard data flow elements, and therefore I would expect a loss of performance... Or is it possible to do a "transformation" from a package to a custom component?
Hi everyone, I have a question about SQL Server 2005. I have written an ASP.NET 2.0 web application and it is using SQL Server 2005 as its database. In the last few days I noticed that the app is down sometimes. To analyze the problem I looked at the Activity Monitor in SQL Server Management Studio. I can see there approximately 170 process infos. The column values of the process infos look like this:
Process ID: unique ID, with a red downward-arrow icon
User: my user
Database: my database
Status: sleeping
Command: AWAITING COMMAND
Application: .Net SqlClient Data Provider
When I click Locks by Object, I can see the IDs of the process infos. Again, some of the columns:
Type: DATABASE
Request type: LOCK
Request state: GRANT
Owner type: SHARED_TRANSACTION_WORKSPACE
Database: my database
So my question is: does this mean that I have locked the DB? How are these handled? For example, I have a Windows service which checks the DB every 10 seconds, and I can see that each check generates a new process info. Is this usual, or am I doing something wrong? Thanks for help, bye
When I run a select statement such as: select 'X' from table1 where c1 = condition, locking on indexes behaves as expected.
However, if I run: select 1 from table1 where c1 = condition, locking on indexes goes wild, locking pages and rows on indexes that are not even referenced in the query. Any ideas why?
Hello all, I'm just migrating from Oracle to SQL Server. Can anybody tell me how I can use row-level locking effectively in SQL Server? If two users are attempting to modify the same record, how can I deal with that in the back end (SQL)? Thanks in advance. Suchee
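To make the question concrete, the kind of pattern I'm hoping exists is something like an optimistic check on a timestamp/rowversion column (the table and columns below are just my guess, not a real schema):

-- assumes a table like: CREATE TABLE Accounts (AccountID int PRIMARY KEY, Balance money, RowVer timestamp)
DECLARE @rv binary(8)

-- read the row and remember its version
SELECT @rv = RowVer
FROM Accounts
WHERE AccountID = 1

-- update only if nobody changed the row in the meantime
UPDATE Accounts
SET Balance = Balance + 100
WHERE AccountID = 1
  AND RowVer = @rv

IF @@ROWCOUNT = 0
    PRINT 'Row was modified by another user -- reload and retry'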
I have an application in production (SQL 6.5) which causes locking that times out my other processes. I want to capture the time the locking takes place. I found in BOL that I can get the time a deadlock occurs using trace flag 3605 in SQL 7.0. If I have to use a trace flag, is it OK to use DBCC TRACEON, or is the -T startup option recommended? Any advice would be appreciated. TIA, Ram
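For what it's worth, the two ways of turning the flag on that I'm weighing up look like this (my understanding, which may be wrong, is that DBCC TRACEON only lasts until the server restarts, while -T survives restarts):

-- enable at runtime, globally for all connections
DBCC TRACEON (3605, -1)

-- or as a startup option so it persists across restarts:
-- sqlservr -T3605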
I have used DTS in the past to copy information in certain tables in production over the top of those same tables in test. However, the process is now failing. Does DTS require an exclusive lock on the source table, as well as the destination table during the export process? Will shared locks on the table I need to copy prevent DTS from completing the process?
We are running out of locks while updating a particular table (table name = history, rows = 25,000,000) in SQL Server 6.5.
LE threshold maximum is set to 200. LE threshold minimum is set to 20. LE threshold percentage is set to 0.
Locks is set to 0.
I have also included the stored procedure, which we use to update the history table.
As you can see from the first four lines, we ran this SP 4 times, processing around 6 million rows at a time. It runs out of locks once it is around 5.5 to 6.5 million rows in. Is there a way of locking the table so that this SP can be run just once and effectively process all 26 million rows in one go?
Any help will be greatly appreciated.
Winston
--declare minihist cursor for (select uin, uan, mailingdate from history (tablock) where rowno between 5635993 and 12000000)
--declare minihist cursor for (select uin, uan, mailingdate from history (tablock) where rowno between 12000001 and 19000000)
declare minihist cursor for (select uin, uan, mailingdate from history (tablock) where rowno > 19000000)

open minihist
fetch next from minihist into @huin, @huan, @hmailingdate
while (@@fetch_status <> -1)
begin

if (@@fetch_status <> -2)
begin

select @mailtot = 1
select @mail12m = 0

/*** Get the gender ***/
select @sex = gender from name where uin = @huin

/*** Calculate if mailed in the last twelve months ***/
if (@hmailingdate <> null) and (@hmailingdate > '19980524')
    select @mail12m = @mail12m + 1

/*** Get info for this uan from address_summary ***/
select @mailtot = (@mailtot + mailed_total),
       @mail12m = (@mail12m + mailed_12months),
       @lastday = last_date
from address_summary
where uan = @huan

/*** Insert a row into address_summary if it doesn't exist ***/
IF @@rowcount = 0
Hi, we are running SQL 6.5 in production and I'm getting a blocking problem; mostly I just kill the process. Whenever I check the SQL error log I see this message: Error: 17824, Severity: 10, State: 0. Unable to write to ListenOn connection '1433', loginname 'XXXY', hostname 'DT SA'. OS Error: 64, The specified network name is no longer available.
I'm trying to use pessimistic row locking in SQL Server to get the following result.
When a customer form is opened, the row should be locked for writing. This lock should remain in place until the user closes the customer form.
I cannot use transactions because there can be more than one customer form open in the same app. In ADO, a connection either is IN a transaction or is NOT; nested transactions are not supported.
How can I keep this row locked in SQL Server until I unlock it or the connection is broken (in case of problems on the client machine)? And how can I see from another machine whether this row (customer) is already locked, so I can open it read-only?
For the moment I'm using extra fields that hold the information about whether the customer is locked and by whom. But that's at the application level, not the DB level.
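To make it concrete, my current application-level workaround looks roughly like this (the LockedBy/LockedAt columns are ones I added myself; @CustomerID and @UserName come in as ADO parameters), and I'd like to know whether there is a proper DB-level equivalent:

-- claim the row atomically; succeeds only if nobody else holds it
UPDATE Customers
SET LockedBy = @UserName, LockedAt = GETDATE()
WHERE CustomerID = @CustomerID
  AND LockedBy IS NULL

IF @@ROWCOUNT = 0
    -- someone else has this customer open: open the form read-only
    SELECT LockedBy FROM Customers WHERE CustomerID = @CustomerID

-- when the form is closed, release the claim
UPDATE Customers
SET LockedBy = NULL, LockedAt = NULL
WHERE CustomerID = @CustomerID
  AND LockedBy = @UserName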
OK, this may be a brain-dead question but I can't seem to figure out what I am doing wrong. I have a stored proc which has multiple inserts, updates, and deletes. However, I do not want to commit until the end of the procedure. So near the end, if no error has been returned by a particular insert, update, or delete, I tell it to COMMIT TRAN. My problem is that it seems to run and run and run and run. I take out the BEGIN TRAN and boom, it runs fast and completes.
But if there is a problem near the end, then those other statements will already have been committed. I wish to avoid that. I have an error routine at the end of the SP, and I have an IF statement to GOTO sp_error: if @@ERROR produces a non-zero value. I am sure I am doing something goofy but can't seem to see it. I know it comes down to the BEGIN TRAN. Is it that I have too many uncommitted transactions? Or perhaps I am locking something up. I know it's hard to tell without seeing what I am doing, but is there something simple to remember about using explicit transactions that I am forgetting? Any help is appreciated.
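For reference, the skeleton of what I'm doing looks roughly like this (the real procedure has many more statements; the table names here are simplified):

CREATE PROCEDURE dbo.usp_DoWork
AS
BEGIN TRAN

INSERT INTO dbo.TableA (Col1) VALUES (1)
IF @@ERROR <> 0 GOTO sp_error

UPDATE dbo.TableB SET Col1 = 2 WHERE KeyCol = 1
IF @@ERROR <> 0 GOTO sp_error

DELETE FROM dbo.TableC WHERE KeyCol = 1
IF @@ERROR <> 0 GOTO sp_error

COMMIT TRAN
RETURN

sp_error:
-- roll back so the earlier statements are not left committed
ROLLBACK TRAN
RETURN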
Hello. I am using SQL Server 2000 to create a multi-user program that accesses data. The problem is that multiple users will update and select data at the same time on the same table.
Is there a way to avoid deadlocks? I have heard about two approaches: using a temporary table to store the data and then writing it back only when the user has finished the update, and using XML to write the data to an XML file that is stored locally, doing the updates on the file, and then, after completion, inserting the XML file into the database.
Does anybody know much about these approaches? Do you know where I can find code for this?
Hi all, firstly I would like to apologise because I don't actually use SQL or know diddly squat about it. I am a network administrator and have a problem with a user's domain account getting locked out every time he starts his SQL Agent service (we are running a Windows 2003 domain). I know this is a very vague post and I am sorry for that; I am just after some general ideas/information on why this keeps happening. Any help greatly appreciated.