Back in the days of SQL 7.0 I used a lot of ODBC SELECT querying from VB applications, in which I implemented NOLOCK in order to prevent the primary business applications from being locked out of tables once the queries were run.
Now, quite a few years later, I'm busying myself converting a lot of old Access-based forms and queries to T-SQL on SQL Server 2000, and wondering why NOLOCK queries (simple SELECT ones) are immensely slower than a standard SELECT statement.
SELECT * FROM employees
This would be much, much faster than the code below, but users would get "The current record could not be accessed, as it is being used by another user", evidently because I'm locking the record while producing the output.
SELECT * FROM employees (nolock)
So this code should, as I remember it, do a dirty read on the table, not obstructing other users, and give me a snapshot of the data as it is, even though some records might be locked for edit.
Could anyone explain to me why the NOLOCK query fails to give me any output? It is as if the NOLOCK request is waiting for the table/records to free, in which case I'll never be able to run the query.
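For reference, a dirty read can be requested per table or per session; a minimal sketch showing both forms (they behave the same for a plain SELECT):

-- Documented table-hint form
SELECT * FROM employees WITH (NOLOCK)

-- Session-level equivalent: every statement on this connection does dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM employees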
I have a query that keeps on waiting with a wait type of SOS_SCHEDULER_YIELD. I have checked for CPU pressure as well, and there is nothing else running. The query outline is given below.
SELECT a, b, c, d....
FROM tableA A
INNER JOIN tableB B ON A.a = B.b
INNER JOIN tableC C ON B.c = C.c
WHERE A.x NOT IN (SELECT x FROM tableX WHERE y = '???')
I restored a copy of the prod DB on a DEV box and ran the query, which runs fine there. But the prod box, which is more powerful, chokes. DEV is a 4-CPU/4 GB box and PROD is an 8-CPU/32 GB box. All other DB and server settings are exactly the same. DEV is SQL 2005 Standard Edition SP2; PROD is SQL 2005 Enterprise Edition SP2.
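One rewrite worth testing (a sketch reusing the table and column names from the outline above; NOT EXISTS sidesteps the NULL pitfalls of NOT IN and often gets a better plan):

SELECT a, b, c, d
FROM tableA A
INNER JOIN tableB B ON A.a = B.b
INNER JOIN tableC C ON B.c = C.c
WHERE NOT EXISTS (SELECT 1 FROM tableX X
                  WHERE X.x = A.x AND X.y = '???')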
This is so annoying. I have 3 ADO executes in my program. The first one creates a view, the second one performs an outer join on that view and returns a result set, and the third one drops the aforementioned view. The program using this is installed on about 200 computers scattered across Germany and Italy, all querying the same MS SQL Server 7.0.

The queries run quite quickly when few users are actively using the program (after hours, for example). However, in the heat of the day, performance goes up and down dramatically, with identical queries taking from 1 to 20 seconds to return their result set. Now, I initially thought 'bandwidth issue out of our server'. However, I noticed that if I take those three queries and run them from SQL Server Enterprise Manager (running on the same computer as the aforementioned program), the queries run instantly and the data is in my result pane in less than 2 seconds, ALWAYS... even when the program is dogging it with 20-second delays before the result set returns. I know it is hanging on the return of the result set, as I put a stop before and after each ADO execute in order to check which one was eating up my time.

Why is there this dichotomy between running the queries from Enterprise Manager versus running them from an ADO object? Both are using TCP/IP (no named pipes involved). I haven't monkeyed with the attributes of the ADO result set, so they are all set to default. I have used SQL Server Profiler to trace these queries and they always run in less than 33 milliseconds; the duration is also never more than 33 milliseconds. This stinks of a network resource issue, but what always leads me somewhere else is how consistent the performance of Enterprise Manager is when it runs the exact same three queries.
Here is my slightly edited connection string:

Public Const connection_string = "Provider=SQLOLEDB;Server=000.000.000.000;" & _
                                 "User ID=johndoe;Password=janedoe;Network=dbmssocn;" & _
                                 "database=fidojoe"
Here are the 3 ADO executes:

conn.Execute (sqlstr_create_view)
Set resultset1 = conn.Execute(sqlstr_get_providers_by_DMISID)
conn.Execute (sqlstr_drop_view)
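One thing worth ruling out is that the ADO connection and Enterprise Manager run with different session SET options, which can give the same query different plans. Running this on each connection shows the active settings (a standard check, nothing application-specific assumed):

DBCC USEROPTIONS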
I have the below query which returns thousands of records. Can I make the returned result set come back faster without changing the structure of the database?

SELECT dbo.tblComponent.ComponentID,
       dbo.tblComponent.ComponentName,
       dbo.tblErrorLog.ShortErrorMessage,
       dbo.tblErrorLog.LongErrorMessage,
       dbo.tblErrorLog.LogDate,
       dbo.tblErrorLevel.Description,
       dbo.tblErrorLog.ErrorLogID
FROM dbo.tblErrorLevel
INNER JOIN dbo.tblErrorLog ON dbo.tblErrorLevel.ErrorLevelID = dbo.tblErrorLog.ErrorLevelID
INNER JOIN dbo.tblComponent ON dbo.tblErrorLog.ComponentID = dbo.tblComponent.ComponentID

Thanks.
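If the application doesn't actually need every row at once, the usual first step is to limit what comes back. A sketch (the assumption here is that the newest errors matter most, with LogDate as the sort key):

SELECT TOP 500
       c.ComponentID, c.ComponentName,
       e.ShortErrorMessage, e.LongErrorMessage, e.LogDate,
       l.Description, e.ErrorLogID
FROM dbo.tblErrorLevel l
INNER JOIN dbo.tblErrorLog e ON l.ErrorLevelID = e.ErrorLevelID
INNER JOIN dbo.tblComponent c ON e.ComponentID = c.ComponentID
ORDER BY e.LogDate DESC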
I have the following common table expression query, which is taking something like 15 hours to run. Would someone suggest what I can do to speed this thing up?
;WITH a AS (
    SELECT proj_id, proj_start_dt, proj_end_dt,
           CASE WHEN CHARINDEX('.', Proj_ID) > 0
                THEN LEFT(Proj_ID, LEN(Proj_ID) - CHARINDEX('.', REVERSE(Proj_ID)))
           END AS Parent_Proj_ID
    FROM ods32.dbo.Proj a
),  -- add Parent_Proj_ID column
b AS (
    SELECT proj_id, proj_start_dt, proj_end_dt, Parent_Proj_ID
    FROM a
    WHERE PROJ_START_DT IS NOT NULL AND PROJ_END_DT IS NOT NULL  -- get all valid rows
    UNION ALL
    SELECT a.Proj_Id, b.PROJ_START_DT, b.PROJ_END_DT, a.Parent_Proj_ID
    FROM b
    INNER JOIN a ON b.Proj_Id = a.Parent_Proj_ID
    WHERE a.PROJ_START_DT IS NULL OR a.PROJ_END_DT IS NULL
)  -- get all invalid children of valid rows and give them the dates of their parents
UPDATE a
SET PROJ_START_DT = b.PROJ_START_DT,
    PROJ_END_DT = b.PROJ_END_DT
FROM WPData a
LEFT OUTER JOIN b ON a.Proj_ID = b.Proj_ID  -- join up and update
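One change that often helps a recursive CTE like this is materializing the non-recursive part into an indexed temp table first, so the recursive join can seek instead of re-deriving Parent_Proj_ID on every iteration. A sketch under that assumption:

-- Materialize the derived Parent_Proj_ID once, then index it
SELECT proj_id, proj_start_dt, proj_end_dt,
       CASE WHEN CHARINDEX('.', Proj_ID) > 0
            THEN LEFT(Proj_ID, LEN(Proj_ID) - CHARINDEX('.', REVERSE(Proj_ID)))
       END AS Parent_Proj_ID
INTO #proj
FROM ods32.dbo.Proj

CREATE INDEX IX_proj_parent ON #proj (Parent_Proj_ID)
CREATE INDEX IX_proj_id ON #proj (proj_id)

-- CTEs a and b then read FROM #proj instead of ods32.dbo.Proj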
SELECT t.name AS TriggerName,
       ta.name AS TableName,
       o.parent_obj
INTO GLPDemo.dbo.Temp_TablesAndTriggers
FROM sysobjects o
INNER JOIN sys.triggers t ON t.object_id = o.id
INNER JOIN syscomments c ON c.id = t.object_id
INNER JOIN sys.tables ta ON ta.object_id = o.parent_obj
WHERE xtype = 'tr' AND c.text LIKE '%Audit%'
DECLARE @DBTrigger as varchar(100), @DBTable as varchar(100), @exestr as varchar(100)
DECLARE TCursor CURSOR for
SELECT TriggerName, TableName from Temp_TablesAndTriggers
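The post stops at the cursor declaration; the loop that consumes it presumably looks something like the sketch below. The per-trigger action is not shown in the original, so the DISABLE TRIGGER here is only an assumption for illustration:

OPEN TCursor
FETCH NEXT FROM TCursor INTO @DBTrigger, @DBTable
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run the dynamic statement for this trigger (action assumed)
    SET @exestr = 'DISABLE TRIGGER ' + @DBTrigger + ' ON ' + @DBTable
    EXEC (@exestr)
    FETCH NEXT FROM TCursor INTO @DBTrigger, @DBTable
END
CLOSE TCursor
DEALLOCATE TCursor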
Hi, I've got a weird problem. I've created an SP that takes 7 seconds to run in Query Analyzer. When I call dataAdapter.Fill(dataSet.Tables(0)) in my code, it takes forever to finish!! What's going on? Any thoughts highly appreciated. T.i.a., ratjetoes.
I have a complex view on SQL Server 2014 Enterprise Edition. If I query the view with:

SELECT * FROM myComplexView

it takes 14 seconds to complete. If I want a subset of the result and run the query with a WHERE clause:

SELECT * FROM myComplexView WHERE [Season]='A16'
The query never completes (I've waited 10 minutes and then cancelled the task).
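Two cheap experiments often isolate this kind of regression on 2014 (a sketch; both are documented hints, and the second forces the pre-2014 cardinality estimator, a frequent culprit when a filtered query on a complex view stops completing; note that QUERYTRACEON requires sysadmin):

-- Give the filtered statement its own plan
SELECT * FROM myComplexView WHERE [Season] = 'A16' OPTION (RECOMPILE)

-- Try the legacy cardinality estimator for this statement only
SELECT * FROM myComplexView WHERE [Season] = 'A16' OPTION (QUERYTRACEON 9481)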
Last Sunday on our primary server I saw some blocking, and the first SPID that blocked everything was waiting on a LOGBUFFER wait. At the time the server was not running hot, and our DB is on SSDs. There was no memory pressure (1 TB of memory on the server). It is strange that we should get LOGBUFFER waits when we are running on the fastest disks possible and there was not much activity on the DB.
Hello to all, I have a small question. I call an SP from outside the DB. The procedure deletes some records in table T1. Table T1 has an AFTER DELETE trigger. It is very important to me that the SP finishes ASAP; that's why I do not want, and do not need, to wait for the trigger. Will the SP only be finished after the trigger is finished? That is, does the SP "wait" for the trigger? I think it does. Is it in any way possible to set the trigger (or the procedure) so that it won't wait for the result of the trigger execution? Thank you for your kind reply. Mateusz
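For what it's worth, a trigger runs synchronously as part of the statement that fires it, so the DELETE (and therefore the SP) does wait. A quick sketch that demonstrates this (table, trigger and column names are illustrative):

CREATE TRIGGER trg_T1_delete ON T1 AFTER DELETE
AS
BEGIN
    WAITFOR DELAY '00:00:05'  -- simulate slow trigger work
END
GO

-- This DELETE now takes at least 5 seconds: the trigger is part of the statement
DELETE FROM T1 WHERE id = 1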
About a month ago I set up mirroring on our DB, high safety with automatic failover. Ever since, the lock waits per minute in the DB went from maybe 2-5 to 22-25. I am not sure if this was expected or what, but sometimes it spikes even higher than that.
Does this sound off the charts to anyone, or normal for mirroring?
I am trying to find out what could be causing this issue. Why would we be waiting on CPU when it's barely being used? Signal waits are varying from 35 to 55% and CPU usage is only at 5%. We are using Windows Server 2012 with SQL Server 2012 Standard Edition with CU5. There are 3 instances on the server, each with max memory set to 50 GB, and the server has a total of 190 GB of memory. The machine is a 12-core machine with hyperthreading enabled.
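For reference, the usual way to compute that signal-wait percentage (a standard query against the waits DMV, nothing environment-specific assumed):

-- Share of wait time spent waiting for a scheduler (signal waits)
SELECT 100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS signal_wait_pct
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0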
I have MS SQL Server 2005 with SP1 installed, version 9.0.2047. I am trying to debug a Script Task in SSIS. I have a breakpoint on the first line of the code. SSIS runs and eventually launches Microsoft Visual Studio for Applications. The line with the breakpoint is highlighted in yellow. After that, Visual Studio is frozen. F5, F11 or any other key press produces a popup which says the following:
Delay notification. Microsoft Visual Studio for Applications is waiting for an operation to complete. If you regularly encounter this delay during normal usage please report this problem to Microsoft. Please include a description of the work you were doing in Microsoft Visual Studio for Applications and when possible instructions how to reproduce this delay. If Microsoft Visual Studio for Applications is waiting on another application you can switch to that application now, or you can continue waiting for this operation to complete.
The popup has two buttons: [Switch to…] and [Continue Waiting]. Neither button allows me to proceed.
Any idea what causes Microsoft Visual Studio for Applications to come to a complete halt?
I have a 2 GHz CPU with 1 GB of RAM. I occasionally see very slow (long) queries against a local SQL Server 2005 Express (SP2) database. The issue occurs against different SQL queries, but all queries are rather basic SELECT statements. Perfmon shows that the SQL Server counter "Memory Grant Queue Wait Avg MS" gets extremely high (25000+ ms). Perfmon also shows that paging is not occurring, and the system is not under unusual stress. The problem is not reproducible with MSDE.
Has anyone seen this issue, or have any recommendations for a next course of action?
I created a Profiler trace against a remote server, running Profiler locally. Then I logged out. When I logged in again two hours later, the Profiler had been closed. I don't know when or why. Has anyone had the same problem? Is this normal?
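For what it's worth, Profiler is a client application, so a trace started from it dies with the Windows session that launched it. If the trace needs to survive logging off, a server-side trace is the usual route; a minimal sketch with the documented sp_trace_* procedures (the file path and the single event are illustrative):

DECLARE @TraceID int, @maxfilesize bigint
SET @maxfilesize = 50

-- Create the trace; it runs inside the server, independent of any login session
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\mytrace', @maxfilesize, NULL

-- Capture TextData (column 1) for SQL:BatchCompleted (event 12), as an example
EXEC sp_trace_setevent @TraceID, 12, 1, 1

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1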
I need some help to understand when the right time is for NOLOCK. I work in a small dev group and NOLOCK seems to be a buzzword, and others are throwing it in all over for no apparent reason.
I read the thing from http://www.sql-server-performance.com/ and I am sure that our web and SQL servers are about 100x oversized for the application. While our ASP.Net (VB) app may demonstrate some hesitation from time to time, I am more inclined to blame poor VB.Net coding techniques than slow SQL. The point being, the NOLOCK is being added to SELECTs that are not part of a transaction, and we're using the SQL data adapter to return datasets or single column values.
Also I am not even sure it’s being used correctly. The OLM has the example: SELECT au_lname FROM authors WITH (NOLOCK)
However I am seeing it formatted like this: SELECT au_lname FROM authors (NOLOCK)
I am by no means an expert; I follow what I read in books or in examples from others. And I have never read in a book "go crazy with NOLOCK because it's the bomb!"
Any thoughts? I am trying to learn as much as I can before I raise my hand and say this might be a bad idea.
Hi, I have a job that runs 3 separate DTS packages.
The first step imports a file and runs successfully.
The second step, which is the 2nd DTS package, hangs in execute mode until I manually stop the job. Apparently, we discovered a bulk insert that is blocking a select statement; both processes are within this second DTS package. I tried using WITH (UNLOCK) on the tables, but this DTS package is still failing.
Does anyone have any suggestions? It would be greatly appreciated.
There has been a discussion/debate going on this thread about the benefits and drawbacks of using the NOLOCK hint: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67294
It occurred to me that you might know more about this than any of us, or at least be able to point us to a white paper or knowledge base article that explains the subject in more detail. Any light you can shed on the subject would be a big help.
I'm doing an update on a table with about 113m rows; the update statement is fairly simple:

UPDATE tab SET col = NULL WHERE col IS NOT NULL

The col column is mostly null.

Sysprocesses shows three rows for this statement: 1 CXPACKET (it's a dual-processor SQL 2000 box with SP3 installed), 2 PAGEIOLATCH_SH (waitresource is filled). My guess would be that the WHERE clause is executed in a separate process, blocking the update.

I changed the statement into UPDATE [...] SET col = NULL; sysprocesses shows one row with PAGEIOLATCH_SH. Executing forever.

I checked other processes, including those outside SQL Server, but none are using the DB, let alone accessing the table involved. I even restarted SQL Server to be sure there was no dead process blocking the update. It didn't help.

So I added a search condition to the WHERE clause, involving a clustered index, in order to reduce the rowcount. The execution plan shows a 97% hit on the clustered index, but sysprocesses shows the three rows again...

So far the Profiler didn't help me out either: there's an SP:CacheInsert on the update statement... then nothing.
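For an update touching 113m rows, the usual way to make it tractable is to batch it. A sketch for SQL 2000, where SET ROWCOUNT still governs UPDATE (the batch size is illustrative):

SET ROWCOUNT 10000  -- limit each UPDATE to 10,000 rows (SQL 2000 idiom)
WHILE 1 = 1
BEGIN
    UPDATE tab SET col = NULL WHERE col IS NOT NULL
    IF @@ROWCOUNT = 0 BREAK  -- nothing left to update
END
SET ROWCOUNT 0  -- restore the default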
2 weeks ago I deleted about 200 GB of data from a 300 GB+ database. It's a custom DB we want to use to test a few things. We wanted a smaller DB for our testing, and since we didn't have one, we grabbed a production backup, removed sensitive data and ran a large archiving script on it... Anyway, so far so good, but our data file was still the same size as before.
So we started a shrinkdatabase... it has been running for 2 weeks now! After about 1 week I interrupted the shrinkdatabase process and ran a dbcc shrinkdatabase('DB', truncateonly) just to see whether the data file would get reduced a bit or not. It did get reduced by about 20 GB. I assume that dbcc shrinkdatabase('DB', 0) had freed up enough pages at the end of the data file that a truncateonly was able to release some space... Anyway, after this we started the dbcc shrinkdatabase('DB', truncateonly) again... still running...
The database was never shrunk before and every index is highly fragmented... Is that why it's taking so long? Am I actually going to have to wait another few weeks before that thing finishes??
Does anyone have experience running shrink on large DBs?
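One approach that tends to behave better on very large files is shrinking the data file in modest, repeated increments with DBCC SHRINKFILE instead of one giant SHRINKDATABASE (a sketch; the logical file name and target sizes are illustrative):

-- Find the logical file names first
EXEC sp_helpfile

-- Step the data file down toward the target, a few GB at a time (sizes in MB)
DBCC SHRINKFILE ('DB_Data', 95000)
DBCC SHRINKFILE ('DB_Data', 90000)
-- ...repeat until the desired size is reached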
Hi, sorry if this has been asked before but I'm pretty strapped for time...
I have two tables: [photos] and [photoFolders]
[photoFolders] contains information about photo albums on the site I'm creating. [photos] lists information about all photos, along with which [photoFolder] they belong to. When a user logs in, I want to present a list of all 'folders' in their name, along with the TOP image for the corresponding folderId...
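A common shape for that "one top photo per folder" requirement is a correlated TOP 1 subquery. A sketch (all column names here, folderId, folderName, photoId, uploadDate, userName, are assumptions, since the actual schema isn't shown):

SELECT f.folderId,
       f.folderName,
       (SELECT TOP 1 p.photoId
        FROM photos p
        WHERE p.folderId = f.folderId
        ORDER BY p.uploadDate DESC) AS topPhotoId
FROM photoFolders f
WHERE f.userName = @userName  -- folders belonging to the logged-in user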
Lately I did a mass update on all our scripts and added dbo. in front of all tables and other objects. There is a function that returns a table, and the function is, I think, 300 lines. There are a lot of UNIONs, EXISTS, NOT EXISTS, etc.

Anyhow, there's a single place where, if I add dbo. in front of the table, this function (which is being called from an SP) runs forever when I run the SP from QA or from the ASP application. As soon as I remove the dbo. again, it runs in 6-8 seconds. Here's the thing: this same table reference exists a few lines below, in the 2nd UNION for example, and having a dbo. in front there doesn't cause any problems. Even more confusing: if I use Databasename.dbo.TableName and run it again, I don't run into any problem.

So it's almost like, if I specify dbo. in front of the table, somehow SQL Server or our code is getting lost and searching for this table in other databases? Has anyone run into or seen such a problem? I am sure I can make changes to the code and end up writing different code, but before I make changes I would like to find out more about my mystery problem.

I am running SQL 2000 SP4 and the same problem occurred on two different machines, Win 2000 Pro and Win 2000 Server. Any ideas, suggestions? Thank you
I am using RS 2005 and SQL Server 2005. I have a table with about 6 million rows, and I am extracting about 2 million rows for a report. When I run the report as a single user, the report comes up in 6-7 minutes, but when I run the report with 2 users, the report takes forever to come up.
The statistics are different each time: sometimes 19 minutes, sometimes 30 minutes. The report connects to the DB with the same DB user id for both of the people running the report. The stored procedure being invoked uses temp tables, and indexes are created on the fly for these temp tables.
The moment 2 people are running the report, when I run sp_who2 I see that one process id started by the report server blocks another process being run by the report server.
Timeouts are not happening; the report just goes on forever to come up. Any help? Also, if you need any more information, please do let me know and I will be glad to provide it.
The report is a matrix report and there are 4 levels of grouping on the report.
I tried to restore a database, but it popped up an error saying "the database is in use". So I tried to take the database offline so that I could restore it. But it is taking forever (1 hour so far); it is still showing "1 remaining". Do you have any idea why? Thank you very much in advance!
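For reference, taking a database offline waits politely for open connections unless it is told to kill them; the documented ROLLBACK IMMEDIATE clause does that (the database name and backup path are illustrative):

-- Kick out all sessions, roll back their open transactions, then go offline
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE

-- Or, for a restore, SINGLE_USER achieves the same goal
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
RESTORE DATABASE MyDatabase FROM DISK = 'C:\backups\MyDatabase.bak' WITH REPLACE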
Does anyone know how I can keep an SSIS package used for real-time reporting alive no matter the number of errors it gets? So, for instance, if the server I'm streaming to is shut down for maintenance and the connection dies, it needs to just keep retrying. In other words, the maximum error count is infinite. I don't just want to set the max error count high; I want it out of the picture altogether.
Hello, does anyone know whether, if you place NOLOCK after a view in a SELECT statement, the effects trickle down to the tables in the view? Or does one have to add NOLOCK to each table within the view?
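For reference, Books Online documents that hints applied to a view are propagated to every table and view it references, so a single hint on the view should be enough; a minimal sketch (the view name is illustrative):

-- The NOLOCK here propagates to all base tables inside the view
SELECT * FROM dbo.vSalesSummary WITH (NOLOCK)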