Could Not Continue Scan With NOLOCK Due To Data Movement
Aug 24, 2007
Hi all,
I'm getting this error on expanding the Databases node in SQL Server Management Studio Express Object Explorer:
Failed to retrieve data for this request. (Microsoft.SqlServer.Express.SmoEnum)
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.Express.ConnectionInfo)
Could not continue scan with NOLOCK due to data movement. (Microsoft SQL Server, Error: 601)
A similar error message also appeared when I tried opening an existing data connection within Database Explorer in Visual Web Developer Express. (However, the web application ran fine and it managed to access the database normally.)
These errors only appeared recently. Any ideas how to go about solving this issue?
We have a stored procedure that failed with: Could not continue scan with NOLOCK due to data movement
We are running with ISOLATION LEVEL READ UNCOMMITTED. There are other jobs running, some of which might be hitting these tables (all using the ROWLOCK hint - though I know that's not guaranteed); however, this stored proc would not be going near the same rows. But even if it were, we'd be happy with either the before-look or the after-look. This needs to be a low-impact job with minimal impact on the other jobs, so we can't take out locks. Is there any hint we can use to do this? For example, can we tell the query to just wait until the data has stopped moving and then try again?
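No hint makes the scan wait out the data movement, but error 601 can be caught and retried (a minimal sketch, assuming SQL Server 2005+ for TRY/CATCH; the table, columns, and retry count are hypothetical):

    DECLARE @attempt int;
    SET @attempt = 1;

    WHILE @attempt <= 3
    BEGIN
        BEGIN TRY
            -- the low-impact read; READUNCOMMITTED is equivalent to NOLOCK
            SELECT col1, col2
            FROM dbo.SomeTable WITH (READUNCOMMITTED)
            WHERE col1 = 42;

            BREAK;  -- success, stop retrying
        END TRY
        BEGIN CATCH
            -- 601 = could not continue scan with NOLOCK due to data movement
            IF ERROR_NUMBER() <> 601 OR @attempt = 3
            BEGIN
                RAISERROR('Low-impact scan failed after retries.', 16, 1);
                BREAK;
            END

            WAITFOR DELAY '00:00:01';  -- brief pause before retrying
            SET @attempt = @attempt + 1;
        END CATCH
    END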
Hi guys, every time the backup job backs up the database, or I manually back up the database, I receive this error message: [Microsoft SQL-DMO (ODBC SQLState: 42000)] Error 601: [Microsoft][ODBC SQL Server Driver][SQL Server]Could not continue scan with NOLOCK due to data movement.
This is a SQL 2000 maintenance job; a manual backup does the same. The backup of the database does actually happen and I do have a successful backup, but I get the error as well, the older files are not getting deleted, and the job shows as failed in Enterprise Manager. Any ideas, guys?
I am wanting to continuously monitor a source table throughout the day and as data becomes available, process it and insert it into one of a number of tables.
I have tried achieving this using a FOR LOOP and setting the halt condition such that it is not satisfiable. However, this has a couple of problems:
1) It runs in a tight loop and consequently degrades system performance enormously.
2) I can't get transactions to work. I would like each iteration of the loop to spawn a new transaction under which the tasks in the loop can run. Therefore, if one of the tasks fails during such an iteration, only the updates affected by that iteration are lost.
Ideally, I would like to be able to put a wait statement within the loop container so that it runs every couple of seconds, and I would also like to implement transactions as described above.
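For the pause, one low-tech option (a sketch, not the only approach) is an Execute SQL Task at the top of the loop container running a T-SQL WAITFOR, which sleeps server-side instead of spinning the CPU:

    -- run inside an Execute SQL Task on each loop iteration;
    -- the 2-second delay is an illustrative value
    WAITFOR DELAY '00:00:02';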
The secondary server for my availability group was recycled. When SQL Server came back online the data movement for a database was suspended. The error log shows:
"AlwaysOn Availability Groups data movement for database 'XXXXXXXXX' has been suspended for the following reason: "system" (Source ID 5; Source string: 'SUSPEND_FROM_RESTART'). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online."
I was able to resume data movement with no issue. I would like to understand the technical reason WHY the data movement was put in the suspended state and left there after the recycle. I searched for an article that would list possible reasons (BOL, Google, Bing, etc.) but could not find much information on 'SUSPEND_FROM_RESTART'.
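For reference, resuming it is a one-line command run against the secondary (the database name below is the placeholder from the error message):

    -- resume AlwaysOn data movement for the suspended database
    ALTER DATABASE [XXXXXXXXX] SET HADR RESUME;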
I am aware that TDE protects data at rest and not during communication or data in motion (unless you use encrypted communication channels, e.g. SSL certs). Hence I am thinking of doing a data export from a TDE-encrypted database to a database on an instance where TDE is not enabled or supported. I believe it works; I just need to take care of the relationships between tables. The target database is hosted on SQL 2012 Standard Edition, on which TDE is not supported.
In my organization, we have a database which has grown to over 2 GB. The application we are working on has been in development since the days of ASP / VB 6 and, in short, it is the baby of many developers. It is a kind of ERP which has no documentation.
Problem 1:
=========
Now, to reduce the size of the database, we have identified some orphan procedures, tables, and columns. Is it possible for me to create a DTS package which backs up all the orphan stuff to some other database and deletes it from the actual one? Moreover, we still have doubts: some of the fields which we think are orphans might not be. So we need another DTS package which rolls back all the changes. (Any idea for that? A rough sketch follows after Problem 3.)
Problem 2:
=========
We want another DTS package which would move the data from one database to another, and the same rollback DTS for this one. (Any idea for that?)
Problem 3:
=========
How can we use DTS programming to do these tasks? What benefits do I get from DTS programming over the DTS Wizard?
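For Problem 1, the archive-and-rollback part can be plain T-SQL inside the package's Execute SQL tasks (a minimal sketch; ArchiveDB and OrphanTable are hypothetical names):

    -- archive a suspected-orphan table into a separate database, then drop the original
    SELECT *
    INTO ArchiveDB.dbo.OrphanTable
    FROM dbo.OrphanTable;

    DROP TABLE dbo.OrphanTable;

    -- rollback: copy the table back if it turns out to be needed after all
    -- SELECT * INTO dbo.OrphanTable FROM ArchiveDB.dbo.OrphanTable;

Note that SELECT ... INTO copies data only, not indexes, constraints, or triggers, so script those out separately before dropping anything.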
I am using SQL Server Reporting Services 2008/2012 (SSRS) and my report body contains 3 row groups. While printing the report, the data prints with blank space and the remaining data continues on the next page.
Departure flight: 70 rows
First page: 42 rows printed
Second page: 23 rows printed [supposed to print 28; if the total record count is more than 23 and less than 42, the page prints only 23 records]
Third page: 5 rows printed

Departure flight: 42 rows
First page: 42 rows printed [the report allows a maximum of 42 rows per page, so if the total is exactly 42 it prints perfectly]

Departure flight: 26 rows
First page: 23 rows printed [supposed to print 26; if the total record count is more than 23 and less than 42, the page prints only 23 records]
Second page: 3 rows printed
Do you have to set the Error Output on DF components to "Fail Component" in order to get the errors?
What I would LIKE to do is a combination of "Ignore Failure" and "Fail Component". You see, I am using the Logging feature in my package that creates the sysdtslog90 table in the SQL database. The errors that I am logging make sense and have enough information for my purposes.
The problem is that I would like to continue processing the data and not have it stop when a data error occurs. I REALLY do not want to Redirect Rows unless it is necessary for me to do what I am asking.
Using Ignore Failure on both the source text file and destination SQL table allows the "good" data to be inserted, but I cannot get any info on the columns in error. Conversely, if I choose to Fail component, I get the info on the columns in error, but only the data that was inserted before the error was encountered is inserted into the table.
In my SSIS package I am trying to import a .csv flat file with about 170,000 rows of data and 75 columns (I know this is rather large). I would like to create an SSIS package that loads ALL of the rows into a table. This table has its fields defined (such as money, float, varchar, datetime, etc.). I want the data to load to this table even if there is a conversion error on any field. When the data is loaded, any fields that couldn't be converted should be set to null. Of the 75 columns, there are MANY that could be invalid.
For example, if the flat file contains the value "00/00/00" for my "PaidDate" field, I would want all of the other fields to be populated and the value for this particular record's "PaidDate" field to be null.
It would also be nice if I could trap any of the fields that caused a problem and log them along with the row.
I know that SSIS supports the "Redirect Row" method for handling data, but in my case, I don't want to redirect it. I want to continue to load it, just with a null value.
Is there an easy way of doing this (e.g., by using the Advanced Editor for the Flat File Source and setting something on the Input/Output columns)? I really don't want to create a Derived Column or Data Conversion transformation for every field that needs to be converted (because there are 75 columns to maintain this for).
I know that if I import this file into an Access 2003 database, it imports the data fine and logs any conversion errors to a "_ConversionErrors" table. I am basically looking for the same behavior in SSIS, without having to maintain 75 columns.
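One way to get Access-style behavior without 75 transformations (a sketch under stated assumptions: the flat file is first loaded into a staging table whose columns are all varchar, which a single Flat File Source can do with no conversions; TRY_CONVERT requires SQL Server 2012+, and on 2005/2008 a CASE with ISDATE/ISNUMERIC plays the same role; all table and column names here are hypothetical):

    -- stage the whole file as text, then convert per column, NULLing anything invalid
    INSERT INTO dbo.Claims (PaidDate, PaidAmount /* , ...73 more columns */)
    SELECT
        TRY_CONVERT(datetime, s.PaidDate),   -- '00/00/00' becomes NULL instead of failing
        TRY_CONVERT(money,    s.PaidAmount)
    FROM dbo.Claims_Staging AS s;

    -- optionally trap the bad values for logging, Access-style
    SELECT s.RowID, s.PaidDate
    FROM dbo.Claims_Staging AS s
    WHERE s.PaidDate IS NOT NULL
      AND TRY_CONVERT(datetime, s.PaidDate) IS NULL;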
Does anyone know of any Windows software to scan the same paper document and enter the results into an SQL Server database? We have a photocopier machine that can automatically scan several documents at a time and email them to an email address. We'd like to scan in an assessment form that rates a person from 1-10 on punctuality, appearance, efficiency, willingness and professionalism, and then submit the values to an SQL 2000 database. Thanks in advance for any suggestions. Dan Williams.
These are a couple of simple questions. When I set up data viewers and start the package, the data viewer windows pop up but the data flow stops. How do I resume the process? Also, if I press "hide" on a data viewer window, it disappears. How can I get it shown again?
Hi SQL gurus :))
I've got a question that I couldn't find a satisfying answer to on the net. What is the difference between:
1) running a SQL query (select from sth with nolock) with no transaction
2) running a SQL query (select from sth) within a TransactionScope with the Read Uncommitted option
Basically, both should do the same work. However, is anyone aware of any potential problems with either approach?
We use 1) to improve our web application's scalability, since the system works in such a way that selects and updates on that table (sth) do not interfere with one another. However, updates are done in a TransactionScope. And when a select with nolock runs simultaneously with an update in a TransactionScope (the select statement has a where clause and returns records that are not updated by the update statement), sometimes (we still cannot figure out when) the select statement returns some records twice. For example, the select should return 1000 records, but (sometimes) it returns 1002 records (the extra 2 records are copies of some of the original 1000 records). Removing the nolock makes the problem go away, but I want to be 100% sure that nolock is our troublemaker. And if it is, why?
We also have a problem where this particular nolock select sometimes returns even fewer records than it should. I know it sounds impossible, but it happens. So anyone who has experience with select with nolock, please share :)
Thanks in advance, Yani
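For what it's worth, the two forms come down to the same read-uncommitted behavior on the SQL Server side; the hint applies per table reference, while the isolation level applies to the whole session (a sketch using the sth table from the question):

    -- form 1: per-table hint, no transaction
    SELECT * FROM dbo.sth WITH (NOLOCK);

    -- form 2: roughly what a TransactionScope with IsolationLevel.ReadUncommitted issues
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRANSACTION;
    SELECT * FROM dbo.sth;
    COMMIT TRANSACTION;

Either way the reader takes no shared locks, which is also why duplicates and missing rows are possible: with no locks held, a concurrent update can split or move pages mid-scan, so the scan can read the same row twice or skip it entirely. NOLOCK really is the likely troublemaker here.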
This is on Sybase but I'm guessing that the same situation would happen on SQL Server. (Please confirm if you know).
I'm looking at these new databases and I'm seeing code similar to this all over the place:
if not exists (select 1 from dbo.t1 where f1 = @p1)
begin
    select @errno = @errno | 1
end
There's a unique clustered index on t1.f1.
The execution plan shows this for this statement:
FROM TABLE
    dbo.t1
EXISTS TABLE : nested iteration.
Table Scan.
Forward scan.
Positioning at start of table.
It's not using my index!!!!!
It seems to be the case with EXISTS statements. Can anybody confirm?
I also hinted to use the index but it still didn't use it.
If the existence check really doesn't use the index, what's a good code alternative to this check?
I did this and it's working great, but I wonder if there's a better alternative. I don't really like doing the SET ROWCOUNT 1 and then SET ROWCOUNT 0 thing. SELECT TOP 1 won't work on Sybase. :-(
SET ROWCOUNT 1
SELECT @cnt = (SELECT 1 FROM dbo.t1 (index ix01) WHERE f1 = @p1)
SET ROWCOUNT 0
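One alternative worth trying (a sketch; since f1 has a unique index, at most one row can ever match, so the ROWCOUNT toggling isn't actually needed):

    SELECT @cnt = 0
    -- f1 is unique, so this assignment touches at most one row;
    -- if no row matches, @cnt keeps its initialized value of 0
    SELECT @cnt = 1
    FROM dbo.t1 (index ix01)
    WHERE f1 = @p1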
I have a table named sales with 20 rows; the total number of rows for my stor_id is 5. I assume that when I create a cursor like the one below, it will hold 5 rows. Now, if I print a statement, I assume that I will get the statement 5 times (once for each row). Am I correct? Well, if this is the concept, I am still getting only one printed statement. I do not know why. Am I doing something wrong? Thanks for the help.
CREATE PROCEDURE p_cursor @ord_nbr char(4)
AS
declare rst cursor for
    select * from sales where stor_id = 123
open rst
fetch rst
if (@@fetch_status = 0)
    print ' I may have 5 rows '
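The reason for the single print: a cursor advances one row per FETCH, and the code above fetches exactly once, so the IF runs once. To print once per row, fetch in a loop and re-check @@fetch_status after every fetch (a minimal sketch):

    declare rst cursor for
        select * from sales where stor_id = 123
    open rst
    fetch rst
    while (@@fetch_status = 0)
    begin
        print ' one row '   -- fires once per fetched row, 5 times for 5 rows
        fetch rst           -- advance to the next row
    end
    close rst
    deallocate rst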
We're very interested in having our application use SQL/e but we can't have the 4 gig limit. It makes sense to me that SQL/e simply should not be able to access a file over the network, and then you wouldn't have any reason to put a 4 gig limit on it.
At that point it becomes a very flexible alternative for remote users that need to have a large amount of data (i.e. documents, images etc.) with them.
We'd love to start building an abstraction layer supporting both SQL Server and SQL/e, so we can serve both network and remote users without the nightmare that is SQL Server Express installation (care of the Windows Installer group's bugs...).
I have transactional replication set up from server A to server B. I want to move the subscriber from B to C. What would be the best approach?
1. Back up the DB from server B and restore it on server C. Set up the replication between A and C.
2. Set up transactional replication between A and C alongside A and B. Test that A and C are working fine, and then remove B.
If I go with approach 2, I have to replicate approximately 70 GB of data, so if I run both replications on server A it will be stressed with 140 GB of data moving out. How do I control this large movement? Can the replication be synched manually?
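If the goal is to avoid pushing a second 70 GB snapshot, note that transactional replication can initialize a subscription from a restored backup, which fits approach 1 (a sketch; server, publication, and path names are placeholders, and the publication must have allow_initialize_from_backup enabled via sp_changepublication before the backup is taken):

    -- run at the publisher (server A) after restoring B's backup on server C
    EXEC sp_addsubscription
        @publication      = N'MyPublication',
        @subscriber       = N'ServerC',
        @destination_db   = N'MyDatabase',
        @sync_type        = N'initialize with backup',
        @backupdevicetype = N'disk',
        @backupdevicename = N'\\backupshare\MyDatabase.bak';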
Hello everyone,

There's an interesting SQL problem I've come across that I'm currently banging my head against. Given the following table that contains item location information populated every minute:

location_id date_created
=========== ============
5           2000-01-01 01:00  <-- Don't need
5           2000-01-01 01:01  <-- Don't need
5           2000-01-01 01:02  <-- Need
7           2000-01-01 01:03  <-- Don't need
7           2000-01-01 01:04  <-- Need
5           2000-01-01 01:05  <-- Need
2           2000-01-01 01:06  <-- Don't need
2           2000-01-01 01:07  <-- Need
7           2000-01-01 01:08  <-- Need

how would you generate a result set that returns the item's location history *without* duplicating the same location if the item has been sitting in the same room for a while? For example, the result set should look like the following:

location_id date_created
=========== ============
5           2000-01-01 01:02
7           2000-01-01 01:04
5           2000-01-01 01:05
2           2000-01-01 01:07
7           2000-01-01 01:08

This is turning out to be a finger twister and I'm not sure if it could be done in SQL; I may have to resort to writing a stored proc.

Regards,
Anthony
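It can be done in a single query: keep each row whose immediate successor (ordered by date_created) has a different location_id, which is exactly the set of "Need" rows above. A sketch, assuming a table named item_location and SQL Server 2012+ for LEAD; on older versions the "next row" can be fetched with a correlated subquery instead:

    SELECT location_id, date_created
    FROM (
        SELECT location_id,
               date_created,
               LEAD(location_id) OVER (ORDER BY date_created) AS next_location_id
        FROM dbo.item_location
    ) AS t
    WHERE next_location_id IS NULL              -- the last row overall is always kept
       OR next_location_id <> location_id       -- the location changes on the next reading
    ORDER BY date_created;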
I need some help understanding when the right time is for NOLOCK. I work in a small dev group, and NOLOCK seems to be a buzzword; others are throwing it in all over for no apparent reason.
I read the article on http://www.sql-server-performance.com/ and I am sure that our web and SQL servers are about 100x oversized for the application. While our ASP.NET (VB) app may demonstrate some hesitation from time to time, I am more inclined to blame poor VB.NET coding techniques than slow SQL. The point being, the NOLOCK is being added to SELECTs that are not part of a transaction, and we're using the SQL data adapter to return datasets or single column values.
Also, I am not even sure it's being used correctly. Books Online has the example: SELECT au_lname FROM authors WITH (NOLOCK)
However I am seeing it formatted like this: SELECT au_lname FROM authors (NOLOCK)
I am by no means an expert; I follow what I read in books or from others' examples. And I have never read in a book "go crazy with NOLOCK because it's the bomb!"
Any thoughts? I am trying to learn as much as I can before I raise my hand and say this might be a bad idea.
Hi, I have a job that runs 3 separate DTS packages.
The first step imports a file and runs successfully.
The second step, which is the 2nd DTS package, hangs in execute mode until I manually stop the job. We discovered a bulk insert that is blocking a select statement; both processes are within this second DTS package. I tried using WITH (NOLOCK) on the tables, but this DTS package is still failing.
Does anyone have any suggestion? It would be greatly appreciated.
There has been a discussion/debate going on this thread about the benefits and drawbacks of using the NOLOCK hint: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67294
It occurred to me that you might know more about this than any of us, or at least be able to point us to a white paper or knowledge base article that explains the subject in more detail. Any light you can shed on the subject would be a big help.
Hello, does anyone know: if you place NOLOCK after a view in a select statement, do the effects trickle down to the tables in the view? Or does one have to add NOLOCK to each table within the view?
I'm using the NOLOCK hint for selects, but I have many statements. Is there any way to set a parameter in the DBMS to apply NOLOCK by default? I mean, I don't want to lock any table for selects.
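There is no server-wide parameter that injects NOLOCK into every SELECT, but setting the isolation level once per session gets the same effect without touching each statement (a sketch; dbo.SomeTable is a placeholder):

    -- run once per connection; subsequent reads behave as if they had NOLOCK
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT * FROM dbo.SomeTable;  -- reads without taking shared locks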
I came across a SQL statement, thought up by a developer, in which two views were joined with the NOLOCK hint:

SELECT v1.xxx, v2.yyy
FROM dbo.vw_SomeView v1 WITH (NOLOCK)
INNER JOIN dbo.vw_SomeOtherView v2 WITH (NOLOCK) ON v1.id = v2.id

The views themselves were not created with the NOLOCK hint. So my question is: does the NOLOCK hint have any effect here?
I've looked in the BOL and searched on the net but can't find anything on this particular topic.
Lex
PS. Personally, I don't like to use views in JOINs; I've seen too many cases in which tables are joined twice just because they are part of both views. Furthermore, I don't like the "random" use of NOLOCK, because most people don't seem to understand its implications. But this is beside the point of my question ;)
I'm running a heavy SELECT query using WITH (NOLOCK). This still causes other processes trying to INSERT into one of the tables to get blocked. I thought the locking hint would prevent the query from blocking other processes?