Process ID 152:3 Owns Resources That Are Blocking Processes On Scheduler 2.
Jun 5, 2007
Last night I received this error:
Process ID 152:3 owns resources that are blocking processes on Scheduler 2.
When I ran DBCC INPUTBUFFER I found it was sp_MSadd_repl_commands27hp, which does the insert into MSrepl_commands. Has anyone else noticed an issue with sp_MSadd_repl_commands27hp blocking itself? At the time I had about 10 million records to move. I was using the default Log Reader Agent settings, so the commands were being batched in chunks of 500.
I am wondering if anyone else has had problems like this. I basically see it whenever I move too much data through my replication server.
I found the following link: http://support.microsoft.com/kb/319892
Sample Scenario
Client 1 connects to SQL Server.
Client 1 runs a Transact-SQL command that starts a transaction and performs data modification.
For example: begin tran
update authors set au_lname = 'test'
Client 1 becomes idle, shows up as sleeping, and is awaiting a command with an open transaction in the sysprocesses system table.
Clients 2 through 255: Approximately 254 more clients log on to SQL Server and issue a SELECT from the authors table. These clients will all become blocked on the original update.
Client 1 tries to commit the transaction but it becomes queued because all the worker threads are tied up by clients 2 through 255.
I am afraid I am seeing this more often than I would like. Does anyone know a way to prevent this from happening?
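For what it's worth, here is a rough sketch of how to check whether you are hitting the scenario above, assuming SQL Server 2000-style system tables (adjust for your build):

-- List everything that is currently blocked, longest waits first
SELECT spid, blocked, waittime, lastwaittype, cmd, loginame
FROM master..sysprocesses
WHERE blocked <> 0
ORDER BY waittime DESC

-- A spid that blocks others but is not itself blocked is the head of the chain
SELECT spid, status, open_tran, cmd, loginame
FROM master..sysprocesses
WHERE blocked = 0
  AND spid IN (SELECT blocked FROM master..sysprocesses WHERE blocked <> 0)

-- The worker-thread ceiling that the KB scenario exhausts (default 255 on SQL 2000)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max worker threads'

If the number of blocked sessions approaches the max worker threads value, you are in the KB 319892 situation; the usual fix is keeping transactions short (no idle connections left holding open transactions) rather than simply raising the thread count.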
The following question applies to SQL Server 8.0.2187 (2000 + SP4+916287/914384/898709/915065/915340):
We have now twice had an incident where the same SQL Server has stopped responding. The only workaround is to restart the SQL Service. After this occurs, the log is filled with the following messages:
2007-09-10 16:42:14.29 spid3 Process ID 197:320 owns resources that are blocking processes on Scheduler 1.
2007-09-10 16:42:14.31 spid3 Process ID 74:324 owns resources that are blocking processes on Scheduler 5.
We haven't been able to pinpoint a cause or reproduce the problem on a dev server. I've seen several posts about this issue online but not many answers. Does anyone have any advice on how to troubleshoot this issue?
I have upgraded a MS SQL database from 6.5 to 7.0. The database functioned fine in 6.5; now I have a table that is locking due to a blocking process. If I kill the process all is fine, but I am trying to determine what is causing the process to hang. Has anyone experienced any similar situations?
I was trying to extract data from the source server using an OLE DB Source and SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What can be done so that, even if the table being queried is locked, I won't run into a deadlock?
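In case it helps, if the extract can tolerate reading uncommitted data, one low-effort option is to run the source query at a looser isolation level so it neither waits on nor participates in lock conflicts. A sketch only; the table name is a placeholder:

-- Run the extraction query with dirty reads allowed
SELECT *
FROM dbo.SourceTable WITH (NOLOCK)

-- or set it for the whole source connection/batch
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

This removes shared-lock waits (and therefore most reader-side deadlocks), at the cost of possibly reading in-flight changes.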
Currently using SQL Server 2000 (SP4). The following condition started occurring last week:
- Server has excessive blocking
- The majority of processes are in a runnable state
- The excessive blocking lasts a few minutes and repeats during the day; it does not happen at night
- Nothing in the server error log or Profiler
- CPU averages 40-50% at the point of excessive blocking
I am having a table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed in near real time, so from what I have picked up so far, using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I don't have enough knowledge to back up my own feeling (it's an ASP system, not .NET).
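One option worth checking, assuming the server is SQL Server 2005 or later (the post doesn't say): enable read-committed snapshot on the database so readers see the last committed version of a row instead of blocking behind writers. It avoids table hints and needs no application changes. Sketch only; the database name is a placeholder:

-- Readers stop blocking behind writers; writers still block each other
ALTER DATABASE FinanceDB
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE   -- kicks out current connections, so do it in a quiet window

The trade-off is version-store activity in tempdb, so test it under load first.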
Today I ended up in a situation where I had a process with a total of six "subthreads" (identified by different execution contexts), as seen in Activity Monitor. All of these had Blocking = 1. The server wasn't functioning properly; I don't know the details of the problems, since I was not present at the time. We had to kill the processes. What is process ID 1, "RESOURCE MONITOR", in SQL Server 2005, as seen in Activity Monitor? Is it fatal if some processes are blocking RESOURCE MONITOR? How can one end up in such a situation; is it normal or a bug somewhere?
The server is a 64-bit Windows server having SQL Server 2005 SP1.
Yesterday I had a CLR stored procedure running on another server. The procedure uses System.Data.SqlClient.SqlConnection to access this server. The procedure started at about 11.4.2007 22:22. It created a connection to the SQL Server and ran a SELECT that should return 1.5 million rows. While fetching the rows (after about 800,000 rows) the procedure crashed with the error: ".NET Framework execution was aborted by escalation policy because of out of memory." Naturally the procedure couldn't close its SQL Server connections, since it was forced to end.
The details of the processes as seen in Activity Monitor (I only have screenshots, so I can't copy-paste):
The main process:
Process id: 69
status: suspended
open transactions: 1
command: SELECT
Application: .NET SqlClient Data Provider
Wait time: 578
Wait type: ASYNC_NETWORK_IO
CPU: 1375
Physical IO: 22
Memory usage: 2
Login time: 11.4.2007 22:22:05
Last batch: 11.4.2007 22:22:05
Blocked by: 0
Blocking: 1
Execution context: 0
Two "subthreads", there are five similar.
Process id: 69
status: suspended
open transactions: 0
command: SELECT
Application: .NET SqlClient Data Provider
Wait time: 35293046
Wait type: CXPACKET
CPU: 4875
Physical IO: 2214
Memory usage: 2
Login time: 11.4.2007 22:22:05
Last batch: 11.4.2007 22:22:05
Blocked by: 0
Blocking: 1
Execution context: 1
Process id: 69
status: suspended
open transactions: 0
command: SELECT
Application: .NET SqlClient Data Provider
Wait time: 35293031
Wait type: CXPACKET
CPU: 4875
Physical IO: 2210
Memory usage: 2
Login time: 11.4.2007 22:22:05
Last batch: 11.4.2007 22:22:05
Blocked by: 0
Blocking: 1
Execution context: 2
The remaining three subthreads differ from the above only in wait time, CPU, physical IO and execution context.
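For anyone hitting something similar: the CXPACKET waits on execution contexts 1-5 suggest the big SELECT was running as a parallel plan, and the subthreads are its parallel workers. Here is a sketch of a diagnostic query (SQL Server 2005) that shows every task of the session, what it is waiting on, and whom it is blocked by; session id 69 is taken from the example above:

SELECT wt.session_id,
       wt.exec_context_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.blocking_exec_context_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id = 69
ORDER BY wt.exec_context_id

If the abandoned connection keeps leaving parallel workers behind, adding OPTION (MAXDOP 1) to that SELECT is one way to avoid the subthreads entirely, though it will make the query slower.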
When I was trying to get my maintenance plans to work, there were many processes I had to kill. These processes were killed over a week ago, but they still show in the Current Activity | Process Info list. Under Command it says "KILLED/ROLLBACK". If I go into Query Analyzer and run KILL 65 WITH STATUSONLY, it says the process is complete. How do I get these processes off the list?
I have a test database for the end users to test their select queries for reports. One of my users is writing queries that cause locking in the database. I killed the process last evening and they are in Killed/Rollback status but are still hogging 90% of the CPU resources for the past 12 hrs. I tried killing them several times but no go.
I know that the best way to clear of these processes is by restarting SQL Server. If that is not an option is there is any other way we can clean these processes?
Also, the user running these queries has only read-only and create-view access to the database. From my experience, processes that go into KILLED/ROLLBACK state after you kill them are processes associated with some update transaction. Since the user, as far as I know, is running only SELECT commands, could an infinite loop cause this?
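Not a fix, but here is a sketch of what to look at while a spid is stuck in KILLED/ROLLBACK (spid 65 is just the number from the earlier post, and the database name is a placeholder):

-- Watch rollback progress, if any is reported
KILL 65 WITH STATUSONLY

-- Is there still an open transaction in the database?
DBCC OPENTRAN ('TestReportDB')

-- What was the spid last running?
DBCC INPUTBUFFER (65)

If the rollback genuinely never completes, restarting the SQL Server service is usually the only clean way out, as noted above.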
Hi, we're having a problem with SQL 2000 and the Opta 2000 JDBC driver where a large update is running and, at the same time, reads are blocked for a while. We're looking for a way to catch this blocking process and, if it lasts more than 10 minutes, email or send out a message. I know sp_lock returns all current locks, but how do you know which one is blocking other processes? Thanks for your help in advance.
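A sketch of the kind of check you could schedule every few minutes (SQL Server 2000, and it assumes SQL Mail is configured for xp_sendmail; the recipient address is a placeholder):

-- Alert when anything has been blocked for more than 10 minutes
IF EXISTS (SELECT 1
           FROM master..sysprocesses
           WHERE blocked <> 0
             AND waittime > 600000)   -- waittime is in milliseconds
BEGIN
    DECLARE @msg varchar(500)
    SELECT TOP 1 @msg = 'spid ' + CAST(blocked AS varchar(10))
                      + ' has been blocking other processes for over 10 minutes.'
    FROM master..sysprocesses
    WHERE blocked <> 0
    ORDER BY waittime DESC

    EXEC master.dbo.xp_sendmail
         @recipients = 'dba@example.com',
         @subject    = 'Long-running blocking detected',
         @message    = @msg
END

The blocked column holds the spid of the blocker, so that is the one to investigate (e.g. with DBCC INPUTBUFFER).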
Imagine that in the first connection I begin a transaction on table A, update a row, and neither commit nor roll back.
From a second connection I select from the same table (obviously it waits until the first connection commits or rolls back, since my isolation level is READ COMMITTED).
1. How do I know that the second connection is waiting for the first connection to complete?
2. If I want to select only the rows that are not locked by the update process, how do I do that (for example, rows 1, 2, 3 and 4 exist, row 1 is exclusively locked by the update, and I want to skip it and select rows 2, 3 and 4)?
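A sketch covering both questions (the table name is a placeholder):

-- 1. From a third connection, see who is waiting on whom
SELECT spid, blocked, waittype, waitresource, cmd
FROM master..sysprocesses
WHERE blocked <> 0

-- 2. Skip rows that are exclusively locked instead of waiting for them
SELECT *
FROM dbo.TableA WITH (READPAST)

READPAST returns only the rows it can lock immediately, so in the example it would return rows 2, 3 and 4 and silently skip the locked row 1.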
We are having a really big problem with a zombie process/transaction that is blocking other processes. When looking at Lock/Process ID under Current Activity I see a bunch of processes that are blocked by process 94, and process 94 is blocked by process -2. I assume -2 is a zombie that has an open transaction. I cannot find this process to kill it, and the transaction seems to survive database restarts. I know which table is locked up, and when I run a SELECT * from this table it never returns. Does anyone have any ideas as to how to kill this transaction? Any help is appreciated. A. Tillman
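Spid -2 is normally an orphaned distributed (MS DTC) transaction rather than a regular connection, which is why there is nothing to KILL by number. A sketch of the usual approach (the GUID shown is purely illustrative):

-- Find the unit-of-work GUID of the orphaned DTC transaction holding the locks
SELECT DISTINCT req_transactionUOW
FROM master..syslockinfo
WHERE req_spid = -2

-- Kill it by that GUID
KILL 'D5499C66-E398-45CA-BF7E-DC9C194B48CF'

If it keeps coming back after restarts, it is worth checking the MS DTC service and any linked-server or distributed transactions that never completed.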
We are facing a lot of problems with blocking. Can anyone help us with this matter? The problem is as follows:
We have SQL Server 7.0 running on NT 4.0, and three web servers and five application servers are accessing SQL Server. Until yesterday everything was fine. Suddenly today more than 18 processes were blocked by others (like a chain). First I killed some blocking processes and it was fine for a while; then it started again, and processes are continuously being blocked by others. I found that all of the blocking processes are running from the web servers. I ran SQL Profiler to get some information, but it was no use. I don't understand why it suddenly happened, because we haven't modified anything. Is there any way to overcome this situation? This is a production server, and because of this users are getting slow responses or no response at all.
Here I want to know: why did it happen, and how can I trace the problem and fix it?
I need a script or stored procedure to tell me who owns what jobs. I have something like 150, and one of my job creators is no longer with our department. His account (NT domain) is still active, but he is no longer working with these jobs and they need to be owned by someone else. Is there an easy way to do this? Dale
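A sketch of both halves, assuming the standard msdb job tables (the job and login names are placeholders):

-- Who owns which jobs
SELECT j.name AS job_name,
       SUSER_SNAME(j.owner_sid) AS owner
FROM msdb.dbo.sysjobs AS j
ORDER BY owner, job_name

-- Reassign one job to a new owner
EXEC msdb.dbo.sp_update_job
     @job_name = 'Nightly Load',
     @owner_login_name = 'DOMAIN\NewOwner'

sp_update_job can be wrapped in a cursor over sysjobs if all of the departed creator's jobs need to move at once.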
I am trying to drop a login but the system is telling me: Msg 15174, Level 16, State 1, Line 3. Login 'Mark' owns one or more database(s). Change the owner of the database(s) before dropping the login.
Question: But how do I check which databases or objects this login owns or is associated with?
I recently added a new user to my database. Now I want to delete that user, but I keep getting the error above. What do I need to do to delete my recently added user?
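A sketch for tracking down what a login or user still owns before dropping it (the login, user, and database names are placeholders):

-- Which databases does the login own? (SQL 2000 syntax; on 2005+ use sys.databases and owner_sid)
SELECT name
FROM master..sysdatabases
WHERE sid = SUSER_SID('Mark')

-- In each of those databases, hand ownership to another login before dropping 'Mark'
USE SomeOwnedDb
EXEC sp_changedbowner 'sa'

-- If a database user (2005+) can't be dropped, check whether it owns schemas in that database
SELECT name
FROM sys.schemas
WHERE principal_id = USER_ID('Mark')

Any schema the user owns has to be transferred (ALTER AUTHORIZATION ON SCHEMA::name TO dbo) before the user can be removed.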
Hello all, I am running into an interesting scenario on my desktop. I'm running developer edition on Windows XP Professional (9.00.3042.00 SP2 Developer Edition). OS is autopatched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down so I don't have a clean install of the databases on the machine but I am local admin.
So, starting last week after a forced remote reboot (also a policy), I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up, I was in the same state I was last Wednesday. 7 of the 18 databases on my machine came up with:
FCB::Open: Operating system error 32 (The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation. It also logs: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32 (The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit, no dice as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read only or archive as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything were coming up in RECOVERY_PENDING it would make more sense to me than the hit-or-miss behavior I'm experiencing now.
Dear list, I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this command, which works, into an Execute Process Task: D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log. The above DOS command works 100%.
However, when I use the Execute Process Task I get this error: [Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package variables: User::gsPreplogInput = ex.log and User::gsPreplogOutput = out.log.
How do I use the execute process task? I am trying to unzip the file using the freeware PZUnzip.exe and I tried to place the entire command in a batch file and specified the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files into, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
The command in the batch file works perfectly and unzips the file when run from the command line, so there is absolutely no problem with the command itself; I believe it is just the setup of the properties in the Execute Process Task editor under Process. Any input on resolving this will be much appreciated.
I am designing a utility which will keep two similar databases in sync. In other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'Tablediff' utility which when provided with server name, database, table info will generate .sql file which can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
The customer.bat file will have the following code: tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"
The .sql file will be generated at: C:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The problem: the Execute Process Task is working fine, i.e., the tables are being compared correctly and the .sql file is being generated as desired. But the task itself is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:\SQL_bat_Files\SQL5\TC_CTI\package_occurrence.bat" "" at "C:\Program Files (x86)\Microsoft SQL Server\90\COM", The process exit code was "2" while the expected was "0".
Some of you may suggest just setting ForceExecutionResult = Success (in fact, this is what I am doing now just to get the program working), but this is not what I want.
I'm having trouble changing the DB owner from Kate to Bob, because Kate owns some objects in the DB. I first tried to run sp_changedbowner 'Bob', but it tells me: The proposed new database owner is already a user in the database.
When I run scripts on a table, such as sp_changeobjectowner 'customers', 'Bob', I get the message:
Server: Msg 15001, Level 16, State 1, Procedure sp_changeobjectowner, Line 38 Object 'customers' does not exist or is not a valid object for this operation.
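Here is a sketch of the usual two-step fix, assuming Kate currently owns the customers table (the database name is a placeholder):

-- The object must be referenced with its current owner, not just its name
EXEC sp_changeobjectowner 'Kate.customers', 'Bob'

-- To make Bob the database owner, he must first stop being an ordinary user in the db
USE MyDb
EXEC sp_dropuser 'Bob'          -- works only once Bob owns no objects in the database
EXEC sp_changedbowner 'Bob'

sp_changedbowner refuses to run while the proposed owner is still mapped as a user, which is the first error in the post.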
Trying to get my hands around all the new security features of SQL Server 2005. In Management Studio I did something I don't know how to undo. I added a database role ReadOnlyRole and checked the box next to db_datareader in the owned-schemas list. Then I tried to remove ReadOnlyRole and could not. How do I undo what I did? Is it possible?
Below is the T-SQL that generates my issue.
USE [master]
GO
CREATE DATABASE [test]
GO
USE [test]
GO
CREATE ROLE [ReadOnlyRole]
GO
USE [test]
GO
ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [ReadOnlyRole]
GO
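To undo it, hand the db_datareader schema back to the db_datareader role; after that the custom role owns nothing and can be dropped. A small sketch continuing the script above:

USE [test]
GO
-- Return the schema to its original owner
ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [db_datareader]
GO
-- The role no longer owns any schema, so it can now be removed
DROP ROLE [ReadOnlyRole]
GO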
I'm pulling data from an Oracle db and loading it into MS SQL 2008. For my data type checks during the data load process, what are the options to ensure that the data being processed won't fail? That is, I want to verify the data against the target data type first and, if it's in a valid format, load it into the destination table; otherwise mark it with an error flag and push it into an errors table, all at the row level. One way I can think of is to load into a staging table, then get the source and destination tables' column data types, compare them, and proceed.
Or should I just try loading the data directly and, if it fails, try troubleshooting (which could be a difficult task, as I wouldn't know which row caused the error)?
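A sketch of the staging-table approach on SQL Server 2008 (no TRY_CONVERT before 2012), flagging bad rows with ISDATE/ISNUMERIC. All table and column names are placeholders, and the checks are only illustrative:

-- Everything lands in a varchar-typed staging table first
UPDATE stg
SET error_flag   = 1,
    error_reason = CASE
                     WHEN ISDATE(order_date) = 0   THEN 'invalid date'
                     WHEN ISNUMERIC(amount)  = 0   THEN 'invalid amount'
                   END
FROM dbo.Staging_Orders AS stg
WHERE ISDATE(order_date) = 0
   OR ISNUMERIC(amount)  = 0

-- Clean rows go to the destination table...
INSERT INTO dbo.Orders (order_date, amount)
SELECT CONVERT(datetime, order_date), CONVERT(decimal(18,2), amount)
FROM dbo.Staging_Orders
WHERE ISNULL(error_flag, 0) = 0

-- ...bad rows go to the errors table
INSERT INTO dbo.Orders_Errors (order_date, amount, error_reason)
SELECT order_date, amount, error_reason
FROM dbo.Staging_Orders
WHERE error_flag = 1

Note that ISNUMERIC accepts a few surprising strings (currency symbols, 'e' notation), so stricter patterns may be needed for real data.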
I have a scheduler doing FTP and BCP to SQL Server. How do I check that the file size is greater than 0 after the FTP completes? (Right now I am using xp_cmdshell to execute the FTP command, and it always returns 0, even when the file did not transfer successfully.)
DECLARE @result_ftp int, @result_bcp int
EXEC @result_ftp = xp_cmdshell @ftpbat
IF (@result_ftp = 0)   -- ???? xp_cmdshell always returns 0 ????
BEGIN
    EXEC @result_bcp = xp_cmdshell @bcpbat
    IF (@result_bcp = 0)
        PRINT ' ## BCP Successful ##'
    ELSE
        RAISERROR('BCP Fail', 16, 1)
END
ELSE
    PRINT ' FTP fail'
RETURN
GO
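Since xp_cmdshell only reports the shell's exit code, one workaround is to capture the output of a dir command into a table and inspect it. A rough sketch with a placeholder path and very crude parsing:

-- Check that the transferred file exists and is not 0 bytes
CREATE TABLE #dir_out (line varchar(1000) NULL)

INSERT INTO #dir_out
EXEC master.dbo.xp_cmdshell 'dir C:\ftp\inbound\data.txt'

IF EXISTS (SELECT 1 FROM #dir_out WHERE line LIKE '%File Not Found%')
   OR EXISTS (SELECT 1 FROM #dir_out WHERE line LIKE '% 0 data.txt')
    RAISERROR('FTP file missing or empty', 16, 1)
ELSE
    PRINT '## FTP file looks good ##'

DROP TABLE #dir_out

The dir-output parsing depends on the OS locale and file name, so a small CMD or VBScript wrapper that echoes the size may be more robust.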
Currently I am running a large SQL Server 2000 instance. Usually when I need to generate reports, I query using Query Analyzer and paste the result into Excel. This process is very tedious, as the queries take a long time due to the large amount of data. Also, my reports run on daily, weekly and monthly frequencies.
I am looking for a job/query scheduler to automate this process and publish the result in excel.
I understand that DTS can only schedule a query to run, without extracting the data to Excel, and Reporting Services does not have a scheduler.
Please advise if there is any s/w or application can do this. Any freebies?
I am having problems running a couple of jobs. It makes no sense. I have checked and made sure the Agent service is running with the same user-level permissions as I am. I run the job manually from SMS and it works fine.
User is a Windows Login.
Any suggestions would be greatly appreciated.
Date: 6/11/2007 6:00:01 AM
Log: Job History (BidBackLog)
Step ID: 1
Server: TWSQLRPTS
Job Name: BidBackLog
Step Name: Step1
Duration: 00:00:21
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message: Executed as user: TWDOMAIN\SQLADMIN. ... 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 6:00:01 AM Error: 2007-06-11 06:00:22.50 Code: 0xC0202009 Source: BidBacklog Connection manager "TWSQLRPTS.HomeBASE" Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Login failed for user 'TWDOMAIN\SQLADMIN'.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Cannot open database "HomeBASE" requested by the login. The login failed.". End Error Error: 2007-06-11 06:00:22.50 Code: 0xC020801C Source: DTSTask_DTSDataPumpTask_1 OLE DB Source [1] Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager ... The package execution fa... The step failed.
What is the best way to run a scheduled task that fires off three scripts that need to be run sequentially? I could set up three different tasks but I don't know exactly how long each will take and they are interdependent.
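One approach that avoids guessing at durations is a single SQL Agent job with three steps, where each step runs only if the previous one succeeded. A sketch; the job, step and procedure names are placeholders:

EXEC msdb.dbo.sp_add_job @job_name = 'Nightly three-step load'

EXEC msdb.dbo.sp_add_jobstep
     @job_name = 'Nightly three-step load', @step_name = 'Script 1',
     @subsystem = 'TSQL', @command = 'EXEC dbo.usp_step1',
     @on_success_action = 3        -- go to the next step
EXEC msdb.dbo.sp_add_jobstep
     @job_name = 'Nightly three-step load', @step_name = 'Script 2',
     @subsystem = 'TSQL', @command = 'EXEC dbo.usp_step2',
     @on_success_action = 3
EXEC msdb.dbo.sp_add_jobstep
     @job_name = 'Nightly three-step load', @step_name = 'Script 3',
     @subsystem = 'TSQL', @command = 'EXEC dbo.usp_step3',
     @on_success_action = 1        -- quit reporting success

EXEC msdb.dbo.sp_add_jobserver @job_name = 'Nightly three-step load'

Add a schedule with sp_add_jobschedule (or in the GUI); if any step fails, the default on-failure action stops the job, so the interdependent scripts never run out of order.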
I have several DTS packages that I run in steps thru the scheduler. The only one that is giving me grief is one that extracts large amounts of data from an Oracle database. Any extract that is less than 30,000 rows is fine but more than that, only the last completed batch that keeps the rows returned under 30,000 will write out to either a new table or to a file. Not only do the rows not return, but it holds open any file that is being used (the one it is writing out to as well as the errorlog file.) I have changed batch sizes, max error counts, timeouts and every other variable I could find to no avail...The job immediately 'completes' its execution with a good return code but that appears only to be 'I sent your request off to Oracle' which releases the next step even though the data hasn't been returned (an issue I will tackle once I get all the data.) My Network folks and Oracle DBA are telling me that there are no size restrictions for passing the data back.
Any ideas where I have strayed off the path? Thanks for your help.