Killed/Rollback Process Hogging ALL CPU Resources.
Feb 25, 2005
I have a test database for the end users to test their select queries for reports.
One of my users is writing queries that cause locking in the database. I killed the processes last evening and they have been in Killed/Rollback status ever since, still hogging 90% of the CPU for the past 12 hours. I tried killing them several times, but no go.
I know that the best way to clear these processes is by restarting SQL Server. If that is not an option, is there any other way we can clean up these processes?
Also, the user running these queries has only read-only and create-view access to the database. From my experience, processes that go into Killed/Rollback state after you kill them are processes associated with some update transaction. Since the user, as far as I know, is running only SELECT commands, could an infinite loop cause this?
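A minimal way to tell whether such a KILLED/ROLLBACK spid is actually doing work (a sketch against the SQL 2000-era sysprocesses view; spid 55 is a placeholder) is to sample its counters twice and compare:

-- First sample (replace 55 with the rolling-back spid)
SELECT spid, status, cmd, cpu, physical_io, lastwaittype
FROM master..sysprocesses
WHERE spid = 55

-- Wait, then sample again
WAITFOR DELAY '00:00:10'

SELECT spid, status, cmd, cpu, physical_io, lastwaittype
FROM master..sysprocesses
WHERE spid = 55

If cpu and physical_io keep climbing between samples, the rollback is progressing and should eventually finish; if they are frozen, the rollback is stalled rather than merely slow, and a service restart at a convenient time is usually the only clean cure.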
I have a query that refers to an AS/400 data source through the 'iSeries Access for Windows ODBC data source' and a linked server using the 'OLE DB Provider for ODBC Drivers' in SQL 2005 (9.0.2047).
Now we have two problems:
1. When we refer to the view with OPENQUERY, e.g. select * from openquery(abc, 'select * from dates')
This query continues to run without ending. In Activity Monitor, we found that it is waiting on the OLEDB resource.
2. Since the process continues to run without ending, I kill it in Activity Monitor or with the 'kill' command. It shows the status 'KILLED/ROLLBACK', and 'kill 197 with STATUSONLY' returns 'SPID 197: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds.'
For problem 1, I found some people who encountered the same problem (waiting for resource OLEDB) with OPENQUERY against other data sources, e.g. VFP and Oracle.
For problem 2, we can't restart SQL Server to clear the unkillable process.
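For problem 1, one hedged mitigation is to put a timeout on the linked server itself, so the OPENQUERY call fails with an error instead of waiting on OLEDB indefinitely (this does not rescue a session that is already stuck):

-- Fail linked-server queries after 60 seconds (value is illustrative)
EXEC sp_serveroption @server = N'abc', @optname = N'query timeout', @optvalue = N'60'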
I was running a stored procedure that was suspended for about 11 hours, so I decided to kill it. Now it has been in Killed/Rollback state for 12 hours, and when I check the status of the rollback it says "Estimated rollback completion: 0%. Estimated time remaining: 0 seconds." It has used up 380,000 of CPU time and 970,000 of disk I/O. How do I stop this completely?
When I was trying to get my maintenance plans to work, there were many processes I had to kill. These processes were killed over a week ago, but they are still listed in the Current Activity | Process Info list. Under Command it says "Killed/Rollback". If I go into QA and run kill 65 with statusonly, it says the process is complete. How do I get these processes off the list?
I was trying to extract data from the source server using an OLE DB Source and a SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
What must be done so that even if the table being queried is locked, I wouldn't experience any deadlock?
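Deadlocks cannot be ruled out entirely while readers and writers compete for locks; the error's own advice ("Rerun the transaction") is the usual cure. A sketch of the classic retry pattern in T-SQL (SQL 2005 TRY/CATCH; dbo.SomeTable and the retry numbers are illustrative); in SSIS the equivalent is simply re-running the failed extraction step:

DECLARE @retries int
SET @retries = 3

WHILE @retries > 0
BEGIN
    BEGIN TRY
        -- The extraction query goes here (table name is hypothetical)
        SELECT * FROM dbo.SomeTable
        SET @retries = 0                  -- succeeded, leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1   -- deadlock victim: back off and retry
            WAITFOR DELAY '00:00:05'
        END
        ELSE
        BEGIN
            SET @retries = 0
            DECLARE @msg nvarchar(2048)
            SET @msg = ERROR_MESSAGE()
            RAISERROR(@msg, 16, 1)        -- not a deadlock, or out of retries
        END
    END CATCH
END

If dirty reads are acceptable for the extract, a simpler route is reading with SELECT ... WITH (NOLOCK), which takes no shared locks at all.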
I have a C# SQL 2005 .NET stored procedure which scrubs a text file, looking for characters not in a range of characters and replacing them with another character. This works fine except when the process is killed. When this happens, the file handle of the file being scrubbed is not released. I use a try/catch/finally block when opening the file and the output file. The finally section flushes the output file and closes all files and streams, but still, when I go to access the file again or use the file in Explorer, it says the file is still in use. Should I be handling this some other way? How do I know the files will always be closed correctly?
I am having this table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed almost in real time, so what I have picked up so far is that using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I haven't enough knowledge to back my own feeling up (ASP system, not .NET).
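If the server is SQL 2005 or later, it may be worth testing row versioning before re-architecting: under read committed snapshot, readers see the last committed row version instead of blocking behind writers, which suits "view live finance data while it changes" workloads. A hedged sketch (the database name is a placeholder; the switch needs exclusive use of the database, so run it in a quiet window, and test first, since versioning adds tempdb load):

ALTER DATABASE MyFinanceDb SET READ_COMMITTED_SNAPSHOT ON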
Process ID 152:3 owns resources that are blocking processes on Scheduler 2.
When I ran DBCC INPUTBUFFER I found it was sp_MSadd_repl_commands27hp, which does the insert into MSrepl_commands. Has anyone else noticed an issue with sp_MSadd_repl_commands27hp blocking itself? At the time I had about 10 million records to move. I was using the default log reader settings, so it was batching them in chunks of 500.
I am wondering if anyone else has had problems like this? I basically see it whenever I move too much data through my replication server.
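Since the defaults were in play: the Log Reader Agent's -ReadBatchSize parameter (default 500) is what produces those 500-command batches, and lowering it, or spacing reads out with -PollingInterval, is a common if unglamorous way to ease contention on MSrepl_commands during bulk moves. A sketch of the command-line addition (values illustrative; edit the agent's job step or agent profile):

-ReadBatchSize 100 -PollingInterval 10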
I found the following link: http://support.microsoft.com/kb/319892
Sample Scenario
Client 1 connects to SQL Server.
Client 1 runs a Transact-SQL command that starts a transaction and performs data modification.
For example: begin tran update authors set au_lname = 'test'
Client 1 becomes IDLE, shows up as sleeping, and awaiting a command with an open transaction in the sysprocesses system table.
Clients 2 through 255: Approximately 254 more clients log on to SQL Server and issue a SELECT from the authors table. These clients will all become blocked on the original update.
Client 1 tries to commit the transaction but it becomes queued because all the worker threads are tied up by clients 2 through 255.
I am afraid that I am seeing this more often than I would like. Does anyone know a way to prevent this from happening?
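The scenario above is worker-thread exhaustion: with the default max worker threads of 255, roughly that many concurrently blocked sessions leave no free thread to service Client 1's commit. Raising the cap buys headroom (each thread costs stack memory), though shortening the blocking transactions is the real prevention. A sketch:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Value is illustrative; the default is 255 on SQL 2000
EXEC sp_configure 'max worker threads', 512
RECONFIGURE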
The following question applies to SQL Server 8.0.2187 (2000 + SP4+916287/914384/898709/915065/915340):
We have now twice had an incident where the same SQL Server has stopped responding. The only workaround is to restart the SQL Service. After this occurs, the log is filled with the following messages:
2007-09-10 16:42:14.29 spid3 Process ID 197:320 owns resources that are blocking processes on Scheduler 1.
2007-09-10 16:42:14.31 spid3 Process ID 74:324 owns resources that are blocking processes on Scheduler 5.
We haven't been able to pinpoint a cause or reproduce the problem on a dev server. I've seen several posts about this issue online but not many answers. Does anyone have any advice on how to troubleshoot this issue?
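If a connection can still be opened the next time it happens, a snapshot of the blocking chain taken before the restart usually points at the culprit. A sketch against SQL 2000's sysprocesses:

-- Who is blocked, and by whom
SELECT spid, blocked, waittype, lastwaittype, waittime, cmd, hostname, loginame
FROM master..sysprocesses
WHERE blocked <> 0

-- Head blockers: blocking others while not blocked themselves
SELECT p.spid, p.status, p.cmd, p.lastwaittype, p.open_tran, p.hostname, p.loginame
FROM master..sysprocesses p
WHERE p.blocked = 0
  AND EXISTS (SELECT 1 FROM master..sysprocesses b WHERE b.blocked = p.spid)

DBCC INPUTBUFFER on the head blocker's spid then shows what it was running.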
I have this weird situation on one of the production databases. Somebody ran a SQL script a couple of days ago and must have hit cancel. Somehow this script is still rolling back. The script is actually a very simple statement; it should not take more than a couple of milliseconds to run. Is there any way to see whether the rollback is really in progress, or whether this process has simply hung? Is there any other way to get rid of this rollback without stopping and restarting the SQL Server services? Can it cause problems in getting SQL Server up again if I restart the services?
I have over 500 transaction records in a single DB process handled within a SQL transaction (Begin, Commit, RollBack and End). Is there any limit on the number of records (e.g. over 500) that the following rollback function can handle?

Public Shared Sub RollBackTransaction()
    Dim transactionObj As Object
    Try
        transactionObj = SqlTransaction.GetExistingTransaction
        If (Not IsNothing(transactionObj)) Then
            CType(transactionObj, SqlTransaction).RollBack()
        End If
    Catch ex As Exception
        Throw New Exception(ex.Message)
    End Try
End Sub
We upgraded QA and production to SQL Server 2012 last year (in place), leaving the user databases at SQL 2005 compatibility (90). A few months ago the QA user databases were set to SQL 2012 compatibility mode. Management is worried about upgrading production and wants to know if we can quickly roll back.
I want to confirm that we can roll back using the same command, and whether DBCC FREEPROCCACHE can be used to avoid having to update all statistics.
ALTER DATABASE <mydatabase> SET COMPATIBILITY_LEVEL = 110
ALTER DATABASE <mydatabase> SET COMPATIBILITY_LEVEL = 90
This works fine in QA on my own test user database. No errors.
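On the statistics question: my understanding is that DBCC FREEPROCCACHE only flushes cached plans so that queries recompile under the new compatibility level; it does not touch the statistics themselves, so it is not a full substitute for updating them. If recompilation is all that is needed, the pair would look like:

DBCC FREEPROCCACHE        -- force plan recompilation under the new level
GO
EXEC sp_updatestats       -- sampled statistics refresh, if stale stats are suspected
GO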
I've just had a call saying that a SQL Server v7 box on a Compaq 7000, installed for about six months, is now running slowly. A look at the CPU shows that SQL Server is grabbing 50% upwards of the CPUs (dual processor) even when the front-end application is not running. Is there anything that I should be checking?
Using SQL Server 2000, SP1 with 4 GB max memory allocated to the instance. The problem is that one large table is hogging cache and it's dragging down overall query performance. I realise it's in cache because it's getting queried regularly. However, I need to know what options exist to get around this problem, to free up some cache for other tables and indexes. Of course, there is the option of archiving off some of the data in the table to reduce its size, and we will look at doing this, although it will not be as easy as it sounds.
I can imagine that there must be many databases that have at least one large table that is getting hit regularly and is left in cache more or less permanently. Therefore, I can't believe I have an unusual problem.
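For measuring, rather than guessing, which tables own the cache: SQL Server 2000 offers little visibility here, but for reference, on SQL Server 2005 and later the buffer pool can be broken down per object, which would confirm the diagnosis after an upgrade. A sketch:

-- SQL 2005+ only: buffer pool pages per table in the current database
SELECT TOP 10
       OBJECT_NAME(p.object_id)   AS table_name,
       COUNT(*) * 8 / 1024        AS buffer_mb
FROM sys.dm_os_buffer_descriptors AS b
JOIN sys.allocation_units AS a
  ON a.allocation_unit_id = b.allocation_unit_id
JOIN sys.partitions AS p
  ON p.hobt_id = a.container_id AND a.type IN (1, 3)   -- in-row and row-overflow data
WHERE b.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY COUNT(*) DESC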
I'm running Windows Small Business Server 2003 Premium R2. I installed SQL Server 2005 Standard Edition on this PC. I'm using Std Edition instead of the Workgroup Edition that comes with SBS since I like the additional features in the Std Edition.
I'm a kind of a newbie to Windows Server administration and SQL Server administration although I've written lots of SQL queries, stored procedures, etc.
I have 2 GB of RAM on this PC. When I looked in Task Manager, I saw that SQL Server was using somewhere over 1 GB of RAM. So, I opened up SQL Management Studio, right-clicked on the server node, clicked to get to the memory configuration page and saw that SQL Server was set to use all 2 GB of RAM on the PC. I changed that to 500 MB (500,000,000 bytes, or 476 MB) and decided to reboot the server. When the server came back up, I forgot to check SQL Server's new RAM usage. The server ran without interruption for over 24 hours. Now, when I check SQL Server's memory usage in Task Manager, I can see it's using over 750 MB of RAM.
How do I fix this?
I have Exchange Server 2003 running on this PC as well.
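Two things are probably colliding here: the GUI setting maps to sp_configure's 'max server memory (MB)', which caps only the buffer pool, not every allocation the process makes, and Task Manager's working-set number is a poor gauge of SQL Server memory anyway. So 750 MB in Task Manager does not necessarily mean the 500 MB cap is being ignored. A sketch to verify what the cap actually is (500 is the example value; whether that is the right number alongside Exchange on a 2 GB box is a separate sizing question):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 500
RECONFIGURE
-- Read the current setting back
EXEC sp_configure 'max server memory (MB)'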
We have recently tested upgrading our web service from SQL 2000 to 2005 SP1. The upgrade went smoothly enough; however, we now have the problem of the sqlservr.exe process taking 90-100% of the processor time while using only 100 MB of memory.
We have 6GB available and we are running the enterprise editions of Windows 2003 and SQL 2005.
Machine specs:
DL380 G2, 2 x 2.8 GHz Xeon, 6 GB RAM, RAID 5, database partition of 140 GB, log partition of 35 GB.
Db is 25 GB, Log is 12 GB. Largest table has 32 million rows.
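One standard post-upgrade step that has cured similar 2000-to-2005 CPU spikes is refreshing the metadata the new optimizer relies on, since row counts and statistics carried over from 2000 can be stale or wrong. A hedged sketch, run in each upgraded user database (the name is a placeholder):

USE MyUpgradedDb
GO
DBCC UPDATEUSAGE(0)       -- correct row/page counts inherited from SQL 2000
GO
EXEC sp_updatestats       -- rebuild statistics under the 2005 engine
GO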
Is there any specific event I have to select in SQL Profiler to monitor the process/user that kills an active connection which is performing a batch data transfer? Is there any alternative other than Profiler that catches this (like Extended Events)?
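Not a definitive answer, but two trace-side options: the KILL itself is just a batch from the killer's session, so a Profiler trace of SQL:BatchCompleted (plus SQL:BatchStarting) filtered on TextData containing 'KILL' should capture who issued it; the victim's side shows up as an Attention event (Errors and Warnings category), which carries no TextData, so trace it without that filter. A sketch of the setup:

Events:   SQL:BatchCompleted, SQL:BatchStarting, Attention
Columns:  TextData, LoginName, HostName, SPID, StartTime
Filter:   TextData LIKE '%KILL%'   (on the batch events only)

On SQL 2008 and later the same idea can be expressed as an Extended Events session instead of a trace.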
I installed SQL 2005 Developer Edition on my box. I was impressed with the Summary tab report functions and the reports available there. Then I ran the Reporting Services Configuration tool to get RS running, but now several of the summary reports are broken. There are several different error messages given in the summary window, depending on the report selected.
For example, the "server dashboard" now displays
Summary Unavailable
The information needed to display a summary page for the selected object is not available.
Error:
"sql_handle" is not a recognized table hints option. If it is intended as a parameter to a table-valued function, ensure that your database compatibility mode is set to 90.
The next two reports work, but the "Scheduler Health" report displays
Summary Unavailable
The information needed to display a summary page for the selected object is not available.
Error:
Incorrect syntax near '.'.
"Memory Consumption" works GREAT, but "Activity - All blocking transaction" shows:
Summary Unavailable
The information needed to display a summary page for the selected object is not available.
Error:
Incorrect syntax near '.'. Incorrect syntax near '.'. Incorrect syntax near the keyword 'as'. Incorrect syntax near the keyword 'as'.
What could be the cause of such errors? Is there an "easy fix"? I wouldn't know how to "unconfigure" reporting services and have a feeling that wouldn't help anyway...
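Those error texts are the signature of a database whose compatibility level is still 80 (SQL 2000): the first message even says so, since passing sql_handle to a table-valued function such as sys.dm_exec_sql_text() only parses at level 90. The summary reports run against whichever database is current, so a hedged first check is:

-- 80 = SQL 2000 mode, 90 = SQL 2005 mode
SELECT name, cmptlevel FROM master.dbo.sysdatabases

-- Raise an offending database to 90 so the DMV syntax parses
EXEC sp_dbcmptlevel N'msdb', 90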
Hello all, I am running into an interesting scenario on my desktop. I'm running Developer Edition on Windows XP Professional (9.00.3042.00, SP2 Developer Edition). The OS is auto-patched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down, so I don't have a clean install of the databases on the machine, but I am local admin.
So, starting last week after a forced remote reboot (also a policy), I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up I was in the same state I was last Wednesday. 7 of the 18 databases on my machine came up with:
FCB::Open: Operating system error 32(The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation. It also logs: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32(The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit; no dice, as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read-only or archive, as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING it'd make more sense to me than the hit-or-miss type of thing I'm experiencing now.
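One way to catch the gremlin in the act instead of inferring it: while a database is in that state, ask Windows which process actually holds the .mdf open. Sysinternals' handle.exe does exactly that (assuming it can be run on the box):

handle.exe Test.mdf

If the owner turns out to be the virus scanner or a backup/indexing agent rather than sqlservr.exe, that is the culprit, and an exclusion for the Data directory becomes an easier sell to network security.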
Dear list, I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this command, which works, into an Execute Process Task: D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log. The above DOS command works 100%.
However, when I use the Execute Process Task I get this error: [Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package variables: User::gsPreplogInput = ex.log and User::gsPreplogOutput = out.log.
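The likely snag is the > redirection: it is a cmd.exe feature, not something the Win32 process-launch API understands, so the command works at a DOS prompt but fails when Execute Process Task runs preplog.exe directly. A sketch of task settings that route it back through cmd.exe (paths copied from above; quoting may need tuning):

Executable:        C:\Windows\System32\cmd.exe
Arguments:         /C "preplog.exe ex.log > out.log"
WorkingDirectory:  D:\SSIS Process\Prepweblog\ProcessLoad

Alternatively, run preplog.exe directly and catch its output in the task's StandardOutputVariable, then write that variable to out.log downstream.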
How do I use the Execute Process Task? I am trying to unzip a file using the freeware PZUnzip.exe. I tried to place the entire command in a batch file and specified the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files in, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
The command in the batch file, when run from the command line, works perfectly and unzips the file, so there is absolutely no problem with the command; I believe it is just the setup of the variables on the Execute Process Task editor under Process. Any input on resolving this will be much appreciated.
I am designing a utility which will keep two similar databases in sync. In other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'tablediff' utility, which, when provided with the server name, database, and table info, will generate a .sql file that can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
The customer.bat file will have the following code: tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"
The .sql file will be generated at: C:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The problem: the Execute Process Task is working fine, i.e., the tables are being compared correctly and the .sql file is being generated as desired. But the task as such is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:\SQL_bat_Files\SQL5\TC_CTI\package_occurrence.bat" "" at "C:\Program Files (x86)\Microsoft SQL Server\90\COM", The process exit code was "2" while the expected was "0".
Some of you may suggest just setting ForceExecutionResult = Success (in fact, this is what I am doing now just to get the program working), but this is not what I desire.
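A gentler alternative to ForceExecutionResult: the Execute Process Task has a SuccessValue property (default 0) that defines which exit code counts as success. If tablediff's exit code 2 simply means differences were found, which is worth confirming against the tablediff documentation, then something like this keeps genuine failures visible:

SuccessValue:       2                  (treat "differences found" as success)
ExecValueVariable:  User::gExitCode    (optional: route the raw exit code to a
                                        variable and branch on it with precedence
                                        constraints)

User::gExitCode is a hypothetical variable name.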
I'm pulling data from an Oracle DB and loading it into MS SQL 2008. For my data type checks during the data load process, what are the options to ensure that the data being processed won't fail? I'd like to verify first-hand against the target data types and then, if the format is valid, load the row into the destination table; otherwise mark it with an error flag and push it into an errors table, all at the row level. One way I can think of is to load into a staging table, then get the source and destination table column data types, compare them, and proceed.
Or should I just try loading the data directly and, if it fails, try troubleshooting (which could be a difficult task, as I wouldn't know what caused the error)?
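The staging route works on SQL 2008 without any newer helpers like TRY_CONVERT: land the columns as varchar, pass the castable rows on, and divert the rest with a reason flag. A sketch in which every table and column name is hypothetical (note ISNUMERIC's known false positives on strings like '$' or '1e4', so tighten the checks for your real types):

-- Rows that convert cleanly go to the target
INSERT INTO dbo.OrdersTarget (OrderDate, Amount)
SELECT CAST(OrderDate AS datetime),
       CAST(Amount AS decimal(18, 2))
FROM dbo.OrdersStaging
WHERE ISDATE(OrderDate) = 1
  AND ISNUMERIC(Amount) = 1

-- Everything else goes to an errors table with a flag
INSERT INTO dbo.OrdersErrors (OrderDate, Amount, ErrorFlag)
SELECT OrderDate, Amount,
       CASE WHEN ISDATE(OrderDate) = 0 THEN 'bad date' ELSE 'bad number' END
FROM dbo.OrdersStaging
WHERE ISDATE(OrderDate) = 0
   OR ISNUMERIC(Amount) = 0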
Hi, I'm trying to upload the ASPNETDB.MDF file to a hosting server via FTP, and every time, when it is uploaded halfway (40% or 50%), I get an error message saying: "550 ASPNETDB.MDF: The process cannot access the file because it is being used by another process" and then the upload fails. I'm using SQL Express. Does anybody know the cause? Thanks a lot.
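The usual cause is that SQL Express still has the .mdf attached, and an attached database file is held open exclusively, so the FTP client loses its read partway through. Detaching first (or stopping the SQL Express service) releases the handle. A hedged sketch, noting that with a user-instance database the registered name may be the full file path rather than just 'ASPNETDB':

-- Release the file, upload, then reattach
EXEC sp_detach_db @dbname = N'ASPNETDB'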
Hi. When I try to start a package manually clicking the Start Debugging button I get this after a little while:
Cannot process request because the process (3880) has exited. (Microsoft.DataTransformationServices.VsIntegration)
How can I prevent this from happening? This happens every time I want to start the package and every time the process id is different. Here it is 3880.
I used SqlConnection/SqlDataReader to read data from SQL Server 2000 in my ASP.NET project, but I found that the CPU usage is too high. When I used SQL Query Analyzer to run the same SQL statement, the CPU usage was half of that. Can anyone tell me why? Thank you!
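A frequent cause of this exact gap is that Query Analyzer and SqlClient connect with different SET options (QA's defaults include ARITHABORT ON; .NET's do not), so the two paths compile and cache different plans. A hedged way to reproduce the ASP.NET behaviour inside Query Analyzer before blaming the provider (the statement and table are placeholders):

-- Mimic the ADO.NET connection, then run the same statement parameterized
SET ARITHABORT OFF
EXEC sp_executesql
     N'SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = @id',
     N'@id int', @id = 42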
Hi everyone! Being new to SQL, I wanted to ask everyone if there are good resources out there that they use? I could use all the help I can get! Books, links, websites, etc. Recently my friend sent me this, I found it informative and best of all, free! https://store.pentontech.com/index.cfm?s=9&promocode=qe2165f7 Thanks in advance for your help! - Lizzy
Sometimes, logging in to SQL 7, I get 'Memory resources not available...' and the login process terminates. Do I need to up the number of locks? If so, how? Please suggest any other ideas!
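If the server really is exhausting lock memory, the 'locks' setting is configurable, though on SQL 7 leaving it at 0 (dynamic) and letting the server size it is usually a better answer than a fixed count. A sketch:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- 0 = dynamic allocation (the default, usually best); a fixed value pre-allocates
EXEC sp_configure 'locks', 0
RECONFIGURE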
This proc will show you what is using SQL system resources right now. If a process is not currently using any resources, it will not show up on the list. Also, a dump of the input buffer is printed so you can see just what is going on.
I use this code whenever somebody whines that "The server is slow!".
The magic is ::fn_get_sql, which came with Service Pack 3 for SQL 2000.
use master
GO
IF object_id('sp_Now') IS NOT NULL
BEGIN
    PRINT 'Dropping: sp_Now'
    DROP PROCEDURE sp_Now
END
PRINT 'Creating: sp_Now'
GO
CREATE PROCEDURE sp_Now
AS
-- FULL DETAILS OF WHAT IS RUNNING
SET NOCOUNT ON
DECLARE @handle binary(20), @spid smallint, @rowcnt smallint, @output varchar(500)

DECLARE TEST CURSOR FOR
    SELECT sql_handle, spid
    FROM sysprocesses
    WHERE sql_handle <> 0x0000000000000000000000000000000000000000
    -- AND spid <> @@SPID  --> Uncomment to stop seeing your own process
    ORDER BY cpu DESC

OPEN TEST
FETCH NEXT FROM TEST INTO @handle, @spid

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT convert(char(15), loginame)           AS 'loginame',
           convert(char(10), hostname)           AS 'hostname',
           convert(char(20), db_name(dbid))      AS 'database',
           str(spid, 4, 0)                       AS 'spid',
           str(blocked, 5, 0)                    AS 'block',
           -- str(waittime, 9, 0)                AS 'wait_time',
           str(physical_io, 8, 0)                AS 'phys_io',
           -- str(cpu, 10, 0)                    AS 'cpu(ms)',
           -- str(cpu / 60000.0, 9, 3)           AS 'cpu(mins)',
           str(cpu / 1000 / 60, 6) + ':' +
               CASE WHEN left(str((cpu / 1000) % 60, 2), 1) = ' '
                    THEN stuff(str((cpu / 1000) % 60, 2), 1, 1, '0')
                    ELSE str((cpu / 1000) % 60, 2)
               END                               AS 'cpu(mm:ss)',
           str(convert(float, memusage) * 8192.0 / 1024.0 / 1024.0, 8, 2) AS 'mem(MB)',
           convert(char(30), program_name)       AS 'program_name',
           cmd,
           convert(char(15), lastwaittype)       AS 'lastwaittype',
           convert(char(20), login_time, 120)    AS 'login_time',
           convert(char(19), last_batch, 120)    AS 'last_batch',
           convert(char(10), status)             AS 'status',
           convert(char(15), nt_username)        AS 'nt_username'
    FROM master..sysprocesses
    WHERE spid = @spid

    PRINT ' '
    PRINT ' '
    DBCC INPUTBUFFER(@spid)
    PRINT ' '
    PRINT ' '

    SELECT * FROM ::fn_get_sql(@handle)

    FETCH NEXT FROM TEST INTO @handle, @spid
END

CLOSE TEST
DEALLOCATE TEST
GO
IF object_id('sp_Now') IS NOT NULL
    PRINT 'Procedure created.'
ELSE
    PRINT 'Procedure NOT created.'
GO
Hi, I'm just looking for someone that has a demonstration Visual Basic database script. Something that makes a database, adds/removes rows and information, does all the basic database stuff, just as an example of how a SQL database is supposed to be handled.
I know that a query inside a called UDF does not show up when displaying the estimated query plan, but are the resources (CPU, disk I/O) used by a UDF that is called from within a stored procedure or embedded in a SQL statement included in sp_who2 and Profiler's BatchCompleted?

Also, how performance-draining is it to do something like the following? I assume that for every row returned, the UDF will be called, causing 8,000 table scans on DateTable.

Create Function udf_GetCountDates (@StartDate datetime, @EndDate datetime)
Returns Int
As
begin
    Return (select count(*) from DateTable
            where ColDate > @StartDate and ColDate < @EndDate)
end
-- DateTable has 5,000 rows
Go

Create proc Test
    @TestDate datetime
As
select Col1, Col2, dbo.udf_GetCountDates(@TestDate, Col2)
From OTable
-- OTable has 8,000 rows
Go

Thanks,
Ray
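For what it's worth on the second question: yes, a scalar UDF in the select list runs once per row, and its insides are opaque to the optimizer. Inlining the aggregate as a correlated subquery expresses the same thing where the optimizer can see it:

-- Same semantics as dbo.udf_GetCountDates(@TestDate, Col2), but inlined
select Col1, Col2,
       (select count(*) from DateTable
        where ColDate > @TestDate and ColDate < Col2) as CountDates
From OTable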