Long Running Transactions W/ Other Users Needing To Read Data
Jul 23, 2005
I have a very long transaction that runs on the same database that
other users need to use for existing data. I don't care if they see
data from the transaction before it is done; I am only using a
transaction because I need a way to roll everything back if any errors
happen along the way. Unfortunately, all tables affected by the
long-running transaction are completely locked, and nobody else can
access them while it is running. I am using the READ UNCOMMITTED
transaction isolation level, which, from my limited understanding of
isolation levels, is the least strict. What can I do to prevent this
from happening?
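For reference, this is roughly how a session requests that isolation level (a minimal sketch with a placeholder table name, not code from the original setup); the setting applies only to the session that issues it:

-- Session-level setting: reads ignore locks held by writers.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- Placeholder query; WITH (NOLOCK) is the per-table equivalent of the setting above.
SELECT *
FROM dbo.SomeTable WITH (NOLOCK)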
Below is the output from sp_who2 and sp_lock while the process is
running and another process is being blocked by it.
SPID  Status      Login                     HostName    BlkBy  DBName  Command           CPUTime  DiskIO  LastBatch       ProgramName                   SPID
----  ----------  ------------------------  ----------  -----  ------  ----------------  -------  ------  --------------  ----------------------------  ----
1     BACKGROUND  sa                        .           .      NULL    LAZY WRITER       0        0       06/09 15:42:52                                1
2     sleeping    sa                        .           .      NULL    LOG WRITER        10       0       06/09 15:42:52                                2
3     BACKGROUND  sa                        .           .      master  SIGNAL HANDLER    0        0       06/09 15:42:52                                3
4     BACKGROUND  sa                        .           .      NULL    LOCK MONITOR      0        0       06/09 15:42:52                                4
5     BACKGROUND  sa                        .           .      master  TASK MANAGER      0        5       06/09 15:42:52                                5
6     BACKGROUND  sa                        .           .      master  TASK MANAGER      0        0       06/09 15:42:52                                6
7     sleeping    sa                        .           .      NULL    CHECKPOINT SLEEP  0        12      06/09 15:42:52                                7
8     BACKGROUND  sa                        .           .      master  TASK MANAGER      0        2       06/09 15:42:52                                8
9     BACKGROUND  sa                        .           .      master  TASK MANAGER      0        0       06/09 15:42:52                                9
10    BACKGROUND  sa                        .           .      master  TASK MANAGER      0        0       06/09 15:42:52                                10
11    BACKGROUND  sa                        .           .      master  TASK MANAGER      0        1       06/09 15:42:52                                11
12    BACKGROUND  sa                        .           .      master  TASK MANAGER      0        0       06/09 15:42:52                                12
51    sleeping    SUPERPABLO\Administrator  SUPERPABLO  .      PM      AWAITING COMMAND  1813     307     06/09 16:10:34  .Net SqlClient Data Provider  51
52    sleeping    SUPERPABLO\Administrator  SUPERPABLO  54     PM      SELECT            30       5       06/09 16:10:16  .Net SqlClient Data Provider  52
53    RUNNABLE    SUPERPABLO\Administrator  SUPERPABLO  .      master  SELECT            0        3       06/09 16:09:44  SQL Profiler                  53
54    RUNNABLE    SUPERPABLO\Administrator  SUPERPABLO  .      PM      UPDATE            10095    206     06/09 16:10:02  .Net SqlClient Data Provider  54
56    RUNNABLE    SUPERPABLO\Administrator  SUPERPABLO  .      PM      SELECT INTO       151      27      06/09 16:10:33  SQL Query Analyzer            56
(17 row(s) affected)
spid dbid ObjId IndId Type Resource Mode Status
------ ------ ----------- ------ ---- ---------------- -------- ------
51 5 0 0 DB S GRANT
52 5 0 0 DB S GRANT
52 5 1117963059 4 PAG 1:7401 IS GRANT
52 5 1117963059 4 KEY (5301214e6d62) S WAIT
52 5 1117963059 0 TAB IS GRANT
54 5 1117963059 0 TAB IX GRANT
54 5 1852025829 0 TAB IX GRANT
54 5 1181963287 3 PAG 1:9017 IX GRANT
54 5 1117963059 4 KEY (5301934930a4) X GRANT
54 5 1117963059 3 KEY (530187fc93f3) X GRANT
54 5 1117963059 4 KEY (530154df71eb) X GRANT
54 5 0 0 DB [BULK-OP-LOG] NULL GRANT
54 5 0 0 FIL 2:0:d U GRANT
54 5 1117963059 2 KEY (1d0096c50a7d) X GRANT
54 5 1117963059 2 KEY (1b004a9a6158) X GRANT
54 5 1117963059 2 KEY (1800a435d44a) X GRANT
54 5 1181963287 6 PAG 1:8745 IX GRANT
54 5 1181963287 4 PAG 1:8923 IX GRANT
54 5 1181963287 2 PAG 1:8937 IX GRANT
54 5 1117963059 4 KEY (5301112b0696) X GRANT
54 5 0 0 PAG 1:10889 IX GRANT
54 5 1181963287 5 PAG 1:8859 IX GRANT
54 5 1181963287 6 PAG 1:10888 IX GRANT
54 5 0 0 PAG 1:10891 IX GRANT
54 5 0 0 PAG 1:10893 IX GRANT
54 5 0 0 PAG 1:10892 IX GRANT
54 5 0 0 PAG 1:10894 IX GRANT
54 5 0 0 PAG 1:10882 IX GRANT
54 5 1117963059 3 KEY (530135fbce35) X GRANT
54 5 1117963059 0 RID 1:7387:57 X GRANT
54 5 1117963059 0 RID 1:7387:59 X GRANT
54 5 1117963059 0 RID 1:7387:61 X GRANT
54 5 1117963059 3 KEY (5301406ad2bc) X GRANT
54 5 1117963059 4 PAG 1:7401 IX GRANT
54 5 0 0 PAG 1:7387 IX GRANT
54 5 1117963059 2 PAG 1:7389 IX GRANT
54 5 1117963059 3 PAG 1:7391 IX GRANT
54 5 1117963059 0 RID 1:7387:10 X GRANT
54 5 1117963059 0 RID 1:7387:56 X GRANT
54 5 1117963059 0 RID 1:7387:58 X GRANT
54 5 1117963059 0 RID 1:7387:60 X GRANT
54 5 1117963059 3 KEY (530144afbed8) X GRANT
54 5 1117963059 4 KEY (530115ee6af2) X GRANT
54 5 1117963059 3 KEY (5301c6cd88ea) X GRANT
54 5 1149963173 0 TAB IX GRANT
54 5 1181963287 0 TAB X GRANT
54 5 1117963059 4 KEY (5301d2782bbd) X GRANT
54 5 1117963059 3 KEY (5301015bc9a5) X GRANT
54 5 0 0 DB S GRANT
54 5 0 0 DB [BULK-OP-DB] NULL GRANT
54 5 1117963059 4 KEY (5301501a1d8f) X GRANT
54 5 1117963059 2 KEY (1c00f3a2b6c5) X GRANT
54 5 1117963059 2 KEY (1a002ffddde0) X GRANT
54 5 0 0 PAG 1:7411 IX GRANT
54 5 1117963059 2 KEY (1900c15268f2) X GRANT
54 5 0 0 PAG 1:10840 IX GRANT
54 5 1181963287 4 PAG 1:10841 IX GRANT
54 5 0 0 PAG 1:10842 IX GRANT
54 5 1117963059 3 KEY (5301059ea5c1) X GRANT
54 5 0 0 PAG 1:10820 IX GRANT
54 5 1181963287 4 PAG 1:10821 IX GRANT
54 5 1181963287 5 PAG 1:10874 IX GRANT
54 5 1181963287 5 PAG 1:10876 IX GRANT
54 5 0 0 PAG 1:10877 IX GRANT
54 5 1181963287 5 PAG 1:10878 IX GRANT
54 5 0 0 PAG 1:10849 IX GRANT
54 5 0 0 PAG 1:10850 IX GRANT
54 5 1117963059 2 KEY (1700f225b712) X GRANT
54 5 1117963059 4 KEY (5301214e6d62) X GRANT
56 5 0 0 DB S GRANT
56 1 85575343 0 TAB IS GRANT
View 29 Replies
Aug 6, 2015
I want to make data changes in a read_only database, so I must first set the database to read_write. While the database is in read_write mode, I want to be sure that no one else makes changes in it.
For this, I wrote the code below, but I suspect that between setting the database read_write and setting it
single_user, another user could still get a DML statement through. Is the code below enough for this operation, or is there another way?
Reminder: a read_only database cannot be set to single_user mode. That's why you must first set the database read_write.
The code:
use master
alter database xxx set read_write
with rollback immediate
alter database xxx set single_user
with rollback immediate
use xxx
update tablexxx set columnxxx=yyy
use master
alter database xxx set read_only
with rollback immediate
alter database xxx set multi_user
with rollback immediate
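For what it's worth, a small check like the following (a sketch, not part of the original script) can be run after each ALTER to confirm the database state before proceeding; sys.databases exposes both settings:

select name, user_access_desc, is_read_only
from sys.databases
where name = 'xxx'   -- xxx is the database name used above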
View 5 Replies
View Related
Feb 6, 2007
Hi, I have a web app that runs fine, except for one particular section that uses a class called by an event in the code-behind. The class resides as a DLL in the bin folder. We had no problems during testing, when only one user was running this DLL. Problems soon occurred when multiple users tried running it. Here's the error and stack:
06/02/2007 09:25:26 ==> cburns ==> There is already an open DataReader associated with this Command which must be closed first.
at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)
at System.Data.SqlClient.SqlConnection.ValidateConnectionForExecute(String method, SqlCommand command)
at System.Data.SqlClient.SqlCommand.ValidateCommand(String method, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at ESP.Validator.Data.DatabaseEvents.DatabaseEventManager.Read(IEventable eventObject, Int16 eventType, DateTime earliestDate, DateTime latestDate) in C:My PathValidatorValidator.NETDataDatabaseEventsDatabaseEventManager.cs:line 92
at ESP.Validator.Data.Translink.CATCard.GetDespatchDate() in C:My PathProjectsValidatorValidator.NETDataTranslinkCATCard.cs:line 94
at ESP.Validator.Data.Translink.ExistingSchemeEntitlement.ReadCards() in C:My PathProjectsValidatorValidator.NETDataTranslinkExistingSchemeEntitlement.cs:line 215
at ESP.Validator.Data.Translink.ExistingSchemeEntitlement.Read() in C:My PathProjectsValidatorValidator.NETDataTranslinkExistingSchemeEntitlement.cs:line 147
at ESP.Validator.Data.Translink.TranslinkApplicant.ReadEntitlements() in C:My PathProjectsValidatorValidator.NETDataTranslinkTranslinkApplicant.cs:line 369
at ESP.Validator.Data.Translink.TranslinkApplicant.Read() in C:My PathProjectsValidatorValidator.NETDataTranslinkTranslinkApplicant.cs:line 353
at ESP.Validator.Data.Translink.PrePrintedLetter.Read() in C:My PathProjectsValidatorValidator.NETDataTranslinkPrePrintedLetter.cs:line 282
at ESP.Validator.ValidationProcessor.Read(ValidationSubject subject) in C:My PathProjectsValidatorValidator.NETValidationProcessor.cs:line 82
at clear_applications_scan_applications.ProcessValidation()
It seems the data reader is getting reused. We have ensured that after each read the reader is closed, though all users are using the same connection string. Could it be a connection pool problem, with the connection being overwritten during execution? Should I edit it according to the user's logon? I am really at a loss for ideas, and I don't mind admitting I am a bit out of my depth with this one!! Any ideas/suggestions would be greatly appreciated. Thanks
View 1 Replies
View Related
Sep 1, 2006
If I start a long-running query on a background thread, is there a way to abort it so that it does not continue running on SQL Server?
The query would be running on SQL Server 2005 from a Windows Forms application using the BackgroundWorker component. So the query would have been started from the background worker's DoWork event using ADO.NET. If the user clicks an abort button in the UI, I would want the query to die so that it does not continue to use SQL Server resources.
Is there a way to do this?
Thanks
View 1 Replies
View Related
Jan 15, 2008
Hi,
Yesterday we had a sudden load on our SQL Server 2000 which resulted in several locks. There was not much time to investigate as we had to rush. A team member reviewed the processes in EM under Management, Current Activity, looking for blocking processes, and killed them.
She told me that as soon as the blocking SPID was killed, another one arose, and she had to repeat the operation a dozen times. When done, the server activity was back to normal. She noticed that more than half of the blocking processes showed that they were executing the stored proc "P_SearchProducts".
We don't own the server, and the information on what had happened at that time (batches, resource-intensive operations, etc.) is not available for now.
The team suggests that we set the transaction isolation level to READ UNCOMMITTED for this SP. I would like to understand locks better before I go ahead.
P_SearchProducts returns 5 recordsets, each of which can contain from 1 to 200 rows. To achieve the results, it creates about 10 intermediate tables (SELECT ... INTO #TableX); these temp tables are then used progressively to arrive at the final results. Roughly, the volume of these temp tables could be double that of the final results. The developer who wrote this SP is not a guru in SQL, so there is room for improvement. But here are my questions:
Q1. Could the series of SELECT ... INTO #TableX statements in P_SearchProducts block another connection from executing the same SP? If yes, under which conditions?
Q2. Let's assume that P_SearchProducts has a slow execution time. Could it prevent another connection from updating the Product table, and thus lead to a deadlock situation? Something like: another transaction (by User2) has obtained locks on most of the product tables, except the Product table, which is being slowly read by User1 executing P_SearchProducts, but User1 cannot read the other product tables because of locks held by User2.
Q3. If the contention was provoked by the slow execution time of many requests to execute
P_SearchProducts (let's assume 50 users suddenly hit the search product feature on the web at the same time), could READ UNCOMMITTED magically resolve the contention issue, provided we accept the consequences of dirty reads?
Sorry for the long post and thank you in advance for any help.
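For reference, the change the team is proposing would look roughly like this at the top of the procedure (a sketch; the existing SELECT ... INTO steps are only summarized in a comment):

ALTER PROCEDURE dbo.P_SearchProducts
AS
BEGIN
    -- Applies while the procedure runs and reverts when it returns;
    -- readers accept dirty reads in exchange for not taking shared locks.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    -- ... existing SELECT ... INTO #TableX steps and final SELECTs unchanged ...
END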
View 2 Replies
View Related
Aug 22, 2007
I have a pretty complex query that aggregates lots of data and inserts multiple rows of that data into a reporting table. When I call this SPROC from SQL Server Management Studio, it executes in under 3 seconds. When I try to execute the same SPROC using .NET's SqlCommand object, the query runs indefinitely until the CommandTimeout is reached. Why would this SPROC behave differently with the same inputs when called from .NET? Thanks for your help!
View 3 Replies
View Related
Aug 20, 2001
Hi everyone.... I'm trying to execute this update statement... It takes an eternity... any ideas on how to rewrite or speed it up?
It's a several-step process... below is everything that I run, one step at a time. The final update statement is what takes so long. It should only affect about 2600 rows out of a potential 9000, which is why I'm confused about the response time.
select d.olddevicename, de.device, d.newdevicename into #temp9
from dns d, devices de
where de.device = d.olddevicename
update #temp9 set device = newdevicename where olddevicename = device
update devices set device = #temp9.device from #temp9, devices where #temp9.device in
(select #temp9.device from #temp9, devices where #temp9.olddevicename = devices.device)
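If the intent of the last two statements is simply to replace each old device name with its new one, they are sometimes collapsed into a single joined UPDATE along these lines (a sketch based on the column names above, not a tested rewrite):

-- Rename each device whose current name matches an old name in dns.
update de
set de.device = d.newdevicename
from devices de
inner join dns d
on d.olddevicename = de.device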
Thank you!
J.
View 1 Replies
View Related
Jan 10, 2001
Hi all,
I have three scheduled jobs: one runs once a day, one runs once per hour, and another runs every 17 minutes. It is a NetIQ application. I scheduled a SQL Server maintenance job last night which ran at 2:00 AM and 4:00 AM. This morning, I came into the office and found all my jobs were still running; they were all blocked by the first three jobs. I had to kill all of them. This afternoon, I kicked off one of my many DTS packages, which usually runs for about 40 minutes, but it failed. I tried several times but had no luck. I suspected one of the user tables or one of the stored procedures was corrupted. After I recycled the server, dropped the table and the stored procedure, and recreated them, the package went fine. The stored procedure involves many updates and inserts.
The question I have is: is it possible that this problem was caused by killing the unfinished jobs (especially the SQL maintenance job)?
NOTE: the SQL maintenance job does not include the backup of the database and transaction log.
View 1 Replies
View Related
Feb 7, 2003
When I execute the following stored procedure it runs for about a minute.
CREATE PROCEDURE EquipmentListByProduct
(
    @iProdTypeId int
)
AS
SET NOCOUNT ON

DECLARE @iError int, @iRows int

SELECT pn.prodTypeId, pn.prodId, pn.prodName
FROM prodNames pn
WHERE pn.prodTypeId = @iProdTypeId

SELECT @iError = @@ERROR, @iRows = @@ROWCOUNT

IF ( @iError <> 0 )
BEGIN
    RETURN @iError
END

IF ( @iRows = 0 )
BEGIN
    RETURN -1
END

RETURN @iError
GO
The table only has 22 records.
Do I need to index the table? If so how do I do this?
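For reference, an index on the column used in the WHERE clause would be created like this (a sketch; the index name is made up):

-- Nonclustered index on the filter column referenced by the procedure.
CREATE NONCLUSTERED INDEX IX_prodNames_prodTypeId
ON dbo.prodNames (prodTypeId)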
View 4 Replies
View Related
Mar 11, 2003
My backups are running 5-6 hours on SQL2000. I'm sure they only used to take 1 hour or so. On another server, backing up the same database (both about 50 gig), the backup only takes 45 min - 1 hour. What can I look at to see why it's taking so long ?
(edit) I'm using a maint plan to backup to disk
View 3 Replies
View Related
Jul 30, 2004
Trying to come up with a way to monitor (without profiler, hopefully with a job and a select statement) a specific sql job that may cause a problem if the duration is too long. It seems that there is an sp called sp_sqlagent_log_jobhistory that shoves a record in sysjobhistory, but only after all the job steps run. Anyone tried this before?
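As one hedged sketch of the job-plus-select idea, completed run durations can be read back from msdb (the threshold below is a placeholder, and as noted this only sees runs that have already finished):

-- run_duration is an integer encoded as HHMMSS
SELECT j.name, h.run_date, h.run_time, h.run_duration
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.step_id = 0              -- job outcome rows only
  AND h.run_duration >= 3000     -- e.g. anything over 30 minutes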
View 1 Replies
View Related
Apr 3, 2008
1. Could anyone tell me, in a simple way, how to troubleshoot long-running queries?
2. What is the default recovery model?
View 6 Replies
View Related
May 9, 2008
Hello,
I want to find long-running queries.
Can anyone help me?
Thanks
Prashant Hirani
View 4 Replies
View Related
Jun 4, 2008
Hello Gurus
I am using SQL 2005 and one job's status is "executing" in the job monitor. How can I check how long this job has been running?
Please advice
Nitin
View 4 Replies
View Related
Jul 20, 2005
I've got a server (SQL 2K, Win2K) where the backups have started running long. The database is a bit largish -- 150GB or so. Up until last month, the backups were taking on the order of 4 to 5 hours -- depending on the level of activity on the server. I'm using a T-SQL script in the SQL Agent to run the backups: native SQL backup to an AIT tape drive.
Now, for no apparent reason, the backups are taking on the order of 24 to 26 hours. The backups complete successfully -- no errors, just taking an outrageously long time to complete. DBCCs check out AOK, no problems with the database.
No changes to the machine. No hardware changes. No software changes. Weird. Multiple tape media have been tried -- it's not a case of a tape going bad.
We've had no problems with this box for almost 4 years. Now it's gettin' jiggy with us!
Any ideas on where to start with this one? Thanks in advance.
View 1 Replies
View Related
Jul 6, 2007
Hi. I'm fairly new to SSRS, and very new to this forum. I have a report based on a stored procedure. I've optimized the procedure so that it runs from 2-4 minutes (previously over half an hour). However, when I run the report that calls the sp, it runs forever (well over 45 minutes in some cases), and the users basically give up on it. Any ideas of why this happens and what steps I can take to improve performance?
Thanks,
Marianne
View 4 Replies
View Related
Dec 21, 2006
I need to execute a long-running package (it takes about 16 hours to finish) to load a data warehouse for the first time with all historical data. This package is a master package that executes other packages; I log the start time and the finish time of the package in a table to manage future incremental loads.
I executed the package on the SQL Server where it is saved, but after it had been running for 8 hours, a new instance of the package was started automatically. Then two more started, one every two hours.
I set MaxConcurrentExecutables = 4; could this affect this strange behavior?
Can anyone imagine what happened?
Thanks
Cosimo
View 3 Replies
View Related
Jun 2, 2006
Hi,
I'm trying to optimize a long-running (several hours) query. This query is a cross join on two tables. Table 1 has 3 fields - ROWID, Lat and Long. Table 2 has Name, Addr1, Addr2, City, State, Zip, Lat, Long.
Both tables have LatRad - Lat in radians - and LonRad - Lon in radians. The sin and cos values of Lat and Lon are calculated and stored to be used in the distance formula.
What I'm trying to do here is find the nearest dealer (Table 2) for each of Table 1's rows. The SELECT statement takes a long time to execute as there are about 19 million records in Table 1 and 1250 rows in Table 2. I ran into log issues - filling the transaction log - so I'm currently using table variables and splitting the process into 100000 records at a time. I cross join and calculate the distance (@DistValues), then find the minimum distance (@MinDistance) for each rowid, and then the result is inserted into another table (ResultTable).
My Code looks like this:
Declare @DistValues table (DataSeqno varchar(10),Lat2 numeric(20,6),Lon2 numeric(20,6),StoreNo varchar(60), Lat1 numeric(20,6),Long1 numeric(20,6),Distance numeric(20,15))
Declare @MinDistance table (DataSeqno varchar(10) primary key,distance numeric(20,15))
Insert into @DistValues
Select DataSeqno,T.Lat Lat2,T.Lon Lon2,S.StoreNo,S.Lat Lat1,S.Long Long1,
distance=3963.1*Case when cast(S.SinLat * T.SinLat + S.CosLat * T.cosLat * cos(T.Lonrad - s.Lonrad) as numeric(20,15)) not between -1.0 and 1.0 then 0.0 else acos(cast(S.SinLat * T.SinLat + S.CosLat * T.cosLat * cos(T.Lonrad - s.Lonrad) as numeric(20,15))) end
from dbo.TopNForProcess T , dbo.Table2 S where Isnull(T.Lat,0) <> 0 and Isnull(T.Lon,0)<> 0
Insert into @MinDistance
Select DataSeqno,Min(distance) From @DistValues Group by DataSeqno
Insert into ResultTable (DataSeqno,Lat2,Lon2,StoreNo,LAt1,Long1,distance)
Select D.DataSeqno, D.Lat2, D.Lon2, D.StoreNo, D.LAt1, D.Long1, M.distance from @DistValues D Inner Join @MinDistance M on D.DataSeqno = M.DataSeqno and D.Distance = M.Distance
I created a View called TopNForProcess which looks like this. This cut down the processing time compared to when I had this as the Subquery.
SELECT TOP (100000) DataSeqno, lat, Lon, LatRad, LonRad, SinLat, cosLat, SinLon, CosLon FROM Table1 WHERE (DataSeqno NOT IN (SELECT DataSeqno FROM dbo.ResultTable)) AND (ISNULL(lat, 0) <> 0) AND (ISNULL(Lon, 0) <> 0)
I have indexes on Table1 - one on Rowid and another on Lat and Lon - and on Table2 on Lat and Long.
Is there any way this can be optimized/improved? This is already in a stored procedure.
Thanks in advance.
View 7 Replies
View Related
Jan 29, 2001
I have a SQL Server 7 database and I need to create some interface between the users and the database to let users see what is in the database, but not let them change or destroy data. Do I need to create some other database in between the users and the original database? Users can't program and want access to all data. Management doesn't want to create a replication of the database. They want to let users have access through MS Word or Access. What could you advise me?
View 1 Replies
View Related
Apr 13, 2007
I have a SQL procedure that can take several minutes to complete. I allow users to initiate the process through a web site and view a progress bar. When the process is running, though, the site slows to a crawl or times out completely. That long running process seems to block all other queries on the database. Is there a way to give this process a low priority or somehow throttle its resource use so that the other web processes can get a chance to run in a timely manner?
Thanks for any advice.
View 4 Replies
View Related
Feb 10, 2008
Hello,
I am having some difficulty with data-driven processes in an internal web application using ASP.NET framework 2.0 with SQL Server 2000.
I have a facility where the user enters search criteria to retrieve a list with the idea being that when they click a button the list set is migrated to a table as a batch. The SQL Server processes to perform the database operations can take up to 5min because of volume of data and relational complexity, which is fine.
The only problem is that I'm having trouble managing the SQL Server process if the user closes the browser or goes to a new URL, etc. If this happens on the client side, the HTTP request is terminated but the SQL statement is left running on the SQL Server; eventually it runs to completion, but I would prefer the SQL process to also be abandoned and rolled back.
Any idea what is happening here and how it can be handled?
Thanks for any help.
A.C
View 2 Replies
View Related
Mar 5, 2002
Hello All
I have queries which take over 30 secs to run, which I wish to monitor.
Currently, I am monitoring using sql profiler.
Is there any way of setting up mail to e-mail me when such a query happens?
Could I set up an Alert, or is there some other method?
I want to be able to react to these events faster before the users complain.
I am using sql server 7 enterprise and I have exchange set up.
Any help would be great
Pargat
View 2 Replies
View Related
Jun 5, 2001
Hi there!,
Problem: I schedule a job that calls a stored procedure which loads around 1.5 million records. The job takes 19 hours to complete. However, if I run that stored procedure manually in Query Analyzer it takes only 45 minutes.
Has anyone faced this problem? Is this a known problem? Any suggestions/recommendations?
thank you.
regards,
Nana
View 6 Replies
View Related
May 13, 2005
Hello,
I have a very long-running stored proc (import and transformation of 2-3 million records). The duration of about 1 hour is not the problem.
My problem is that I want to send some notifications to the UI to show a progress counter.
Is there any possibility of sending out a message after certain steps (e.g. after positioning the cursor on the next row), perhaps via an extended stored procedure, which I could catch in my UI?
I would be very appreciative if somebody would send me a hint.
Thank you,
selo
View 4 Replies
View Related
Jan 4, 2008
I’m trying to write a script that will detect long running agent jobs.
Having looked at this article:
http://www.databasejournal.com/features/mssql/article.php/3500276
It appears that agent job IDs don't necessarily get stored in the program_name column of the sysprocesses table. This is true if the agent executes an OS command. It also appears that job steps do not get stored in sysjobhistory until the step is complete, so that cannot be used accurately.
Does anyone know of an effective way to find if there are long running jobs other than these methods?
View 1 Replies
View Related
Dec 6, 2005
I just wanted to post a follow up to a message I posted some months ago about a long running transaction that was blocking all other users... The link is below:
http://groups.google.com/group/comp...649bee2002646a2
By using the new "row versioning" functionality of SQL 2005, it completely solved this problem. Books Online says there is a performance impact, but that the better performance of SQL 2005 in general might offset it. So far this seems to be the case. Just posting it here in case anyone else has the problem. The SQL command I had to execute to get everything working properly was:
ALTER DATABASE DBname
SET READ_COMMITTED_SNAPSHOT ON;
View 1 Replies
View Related
Oct 27, 2006
How does one prevent a long-running procedure from crapping out in the CLR?
I am trying to do a pull from a distant data source and it works, except I have to break down my stored procedure call into several smaller calls. I would like to do everything in one shot, but I get the thread abort exception when I try to get a lot of data.
Any ideas?
Thanks.
View 3 Replies
View Related
Mar 20, 2008
The query shown below is designed to use seasonal profiles to compute 53 weeks of forecast data and then, from that, compute the number of weeks of supply of each item at each location. The query works, but the volume of data produced (20+M rows) is substantial. If I limit the CTE to a single location, it runs in 2 seconds and returns 41,000 rows. But when run for all locations and items, it runs for more than 4 hours. Would I do better converting the CTE to a sub-query and adding an index to improve the performance of the main query?
WITH Forecast AS
(SELECT Location_Idx
,Item_Idx
,Week_Code
,(CAST(AnnualQty AS DECIMAL(9))/53.0)*[Profile] AS fcst
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx
)
SELECT fcst1.Location_Idx
,fcst1.Item_Idx
,fcst1.Week_Code
,fcst1.fcst AS WeekQty
,SUM(fcst2.fcst) AS CumQty
FROM Forecast fcst1
INNER JOIN Forecast fcst2
ON fcst2.Location_Idx = fcst1.Location_Idx
AND fcst2.Item_Idx = fcst1.Item_Idx
AND fcst2.Week_code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx,fcst1.Item_Idx,fcst1.Week_code,fcst1.fcst
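A sketch of the sub-query/temp-table variant being weighed, materializing the forecast once and indexing it before the self-join (same column names as above; untested):

-- Materialize the forecast rows once instead of expanding the CTE twice.
SELECT Location_Idx,
       Item_Idx,
       Week_Code,
       (CAST(AnnualQty AS DECIMAL(9)) / 53.0) * [Profile] AS fcst
INTO #Forecast
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
    ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx;

-- Index to support the self-join on location, item, and week.
CREATE CLUSTERED INDEX IX_Forecast
    ON #Forecast (Location_Idx, Item_Idx, Week_Code);

SELECT fcst1.Location_Idx,
       fcst1.Item_Idx,
       fcst1.Week_Code,
       fcst1.fcst AS WeekQty,
       SUM(fcst2.fcst) AS CumQty
FROM #Forecast fcst1
INNER JOIN #Forecast fcst2
    ON fcst2.Location_Idx = fcst1.Location_Idx
    AND fcst2.Item_Idx = fcst1.Item_Idx
    AND fcst2.Week_Code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx, fcst1.Item_Idx, fcst1.Week_Code, fcst1.fcst;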
View 4 Replies
View Related
Aug 30, 2006
I have a table that contains approx 200 thousand records that I need to run validations on. Here's my stored proc:
[code]
CREATE PROCEDURE [dbo].[uspValidateLoadLeads]
@sQuotes char(1) = null, @sProjectId varchar(10) = null, @sErrorText varchar(1000) out
AS BEGIN
DECLARE @ProcName sysname, @Error int, @RC int, @lErrorCode bigint
DECLARE @SQL varchar(8000)
IF @sQuotes = '0'
BEGIN
UPDATE dbo.prProjectDiallingList_staging
SET sPhone = RTrim(LTrim(Convert(varchar(30), Convert(numeric(20, 1), phone))))
END
ELSE
BEGIN
UPDATE dbo.prProjectDiallingList_staging
SET sPhone = phone
END
--2. Remove quotes
UPDATE dbo.prProjectDiallingList_staging
SET sphone = REPLACE(sphone,'"' , '')
--3. Remove decimal, comma, dashes, parenthesis
UPDATE dbo.prProjectDiallingList_staging
SET sphone = replace(replace(replace(replace(replace(replace(sphone,'.',''),',','' ),'-',''), ' ',''), '(', ''), ')', '')
--4. Update failed Validation column if not 10 digits
UPDATE dbo.prProjectDiallingList_staging
SET sFailedValidation = 'X'
WHERE(Len(RTrim(LTrim(sPhone))) <> 10)
--5. Dedup
UPDATE a
SET a.sFailedValidation = 'X'
FROM dbo.prProjectDiallingList_staging a (nolock)
INNER JOIN dbo.prProjectDiallingList_staging b
ON a.sPhone= b.sPhone
WHERE(a.iList_StagingID > b.iList_StagingID)
--6. Update failed Validation column if not numeric
UPDATE dbo.prProjectDiallingList_staging
SET sFailedValidation = 'X'
WHERE(IsNumeric(RTrim(LTrim(sPhone))) = 0)
--7. Update time zones
UPDATE s
SET s.sTimeZone =z.sTimeZone
FROM dbo.prProjectDiallingList_staging s (nolock)
LEFT OUTER JOIN dbo.prPhoneTimeZone z
ON left(rtrim(ltrim(s.sphone)),3) = z.sPhoneAreaCode
--8. Insert into dialing table only records that have not failed the validation
INSERT dbo.prProjectDiallingList(iPrProjectId, sPhoneNumber, sTimeZone)
SELECT @sProjectId,sPhone, sTimeZone
FROM dbo.prProjectDiallingList_staging
WHERE ISNULL(sFailedValidation,'1') = '1'
UPDATE d
SET d.bProcessReporting = 1
FROM dbo.prProjectDialling d
WHERE d.iPrProjectId = @sProjectId
END
[/code]
When I execute this stored proc it runs for more than 5 minutes. Is there anything I can do to speed it up? Maybe there is a faster way of writing these queries?
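As one small illustration of reducing the number of passes over the 200,000 rows, steps 2 and 3 are sometimes folded into a single update (a sketch reusing the same REPLACE chain; untested against the real data):

--2+3. Remove quotes, decimals, commas, dashes, spaces, and parentheses in one pass
UPDATE dbo.prProjectDiallingList_staging
SET sphone = replace(replace(replace(replace(replace(replace(replace(sphone,'"',''),'.',''),',',''),'-',''),' ',''),'(',''),')','')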
Thanks,
Ninel
View 1 Replies
View Related
Jan 4, 2008
I'm trying to write a script that will detect long running agent jobs.
Having looked at this article:
http://www.databasejournal.com/features/mssql/article.php/3500276
It appears that agent job IDs don't necessarily get stored in the program_name column of the sysprocesses table. This is true if the agent executes an OS command. It also appears that job steps do not get stored in sysjobhistory until the step is complete, so that cannot be used accurately.
Does anyone know of an effective way to find if there are long running jobs other than these methods?
View 17 Replies
View Related
Jul 21, 2001
Is there any way to measure the progress of a long running query,
for instance, to find where in a query plan a query is in SQL 7.0?
I have a query I am running that is currently 2 1/2 hours into the
query. Since it's joining three large tables, one with 42 million rows
and two with 7 million rows, I'm expecting the query to take a
while. However, I have no way of estimating exactly how long
it will take. Before I ran it, I optimized it the best I could in
Query Analyzer using an estimated query plan, making sure I had
all the right indexes, etc. I've been trying to use the estimated cost
to project query time, but that hasn't been working since queries
with similar costs can take radically different amounts of time to
execute.
Now I'm sitting here waiting, wondering if the query is just taking
too long, and I should stop it and work on optimizing it some more
(since I will have to run a couple more queries like it), or let it
finish. But I have no clue how close it is to finishing. I've tried looking
at the Physical I/O given by sp_who2 and then trying to calculate the
number of pages it would have to read if it had to read everything
from disk, then estimating its progress by that, but this seems dubious
at best, since I don't know a whole slew of factors (ie: how many
pages are being read from the cache, is my page calculation correct,
etc).
So, does anyone know of any way to figure out how soon a long
running query will finish in SQL 7.0?
Thanks.
--
Trevor Lohrbeer
trevor@truepeers.com
View 2 Replies
View Related
Feb 21, 2000
I have a stored procedure being called from Visual Cafe 4.0 that takes over 30 minutes to run. Is there any way to background this so that control returns to the browser that the JFC Applet is running in? The result set is saved to local disk and an email message sent to the user on completion.
Thanks, Dave.
View 2 Replies
View Related