How To Kill A Long-Running Query On A Background Thread
Sep 1, 2006
If I start a long-running query on a background thread, is there a way to abort it so that it does not continue running on SQL Server?
The query would be running on SQL Server 2005 from a Windows Forms application using the BackgroundWorker component, so it would have been started from the BackgroundWorker's DoWork event using ADO.NET. If the user clicks an abort button in the UI, I would want the query to die so that it does not continue to use SQL Server resources.
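One server-side approach, sketched below with a hypothetical SPID value: capture the worker connection's SPID when the query starts, and if the user aborts, issue KILL from a second connection. (On the client side, calling SqlCommand.Cancel from the UI thread has a similar effect without needing a second connection.)

[code]
-- On the worker's connection, just before starting the long query,
-- capture its server process id and save it in the application:
SELECT @@SPID;

-- If the user clicks Abort, run this from a *different* connection
-- (52 is a hypothetical value captured above):
KILL 52;
[/code]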
I have a pretty complex query that aggregates lots of data and inserts multiple rows of that data into a reporting table. When I call this SPROC from SQL Server Management Studio, it executes in under 3 seconds. When I try to execute the same SPROC using .NET's SqlCommand object, the query runs until the CommandTimeout is reached. Why would this SPROC behave differently with the same inputs when called from .NET? Thanks for your help!
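A common culprit here (not the only possibility) is that SSMS runs with SET ARITHABORT ON while ADO.NET connections default to OFF, so the two contexts compile and cache separate plans, and parameter sniffing can leave the ADO.NET plan much worse. A hedged way to test, with a hypothetical proc name and parameter:

[code]
-- In SSMS, mimic the ADO.NET session setting and re-run the proc:
SET ARITHABORT OFF;
EXEC dbo.uspBuildReport @ProjectId = 42;  -- hypothetical name and parameter

-- If that reproduces the slowness, forcing a fresh plan per call is one workaround:
EXEC dbo.uspBuildReport @ProjectId = 42 WITH RECOMPILE;
[/code]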
I'm trying to optimize a long-running (several hours) query. This query is a cross join on two tables. Table 1 has 3 fields: RowID, Lat and Long. Table 2 has Name, Addr1, Addr2, City, State, Zip, Lat, Long.
Both tables have LatRad (Lat in radians) and LonRad (Lon in radians). Sin and Cos values of Lat and Lon are calculated and stored, to be used in the distance formula.
What I'm trying to do here is find the nearest dealer (Table 2) for each of Table 1's rows. The SELECT statement takes a long time to execute, as there are about 19 million records in Table 1 and 1250 rows in Table 2. I ran into log issues (filling the transaction log), so I'm currently using table variables and have split the process into 100,000 records at a time. I cross join and calculate the distance (@DistValues), then find the minimum distance (@MinDistance) for each RowID, and then the result is inserted into another table (ResultTable).
distance = 3963.1 * CASE
    WHEN CAST(S.SinLat * T.SinLat + S.CosLat * T.CosLat * COS(T.LonRad - S.LonRad) AS numeric(20,15))
         NOT BETWEEN -1.0 AND 1.0 THEN 0.0
    ELSE ACOS(CAST(S.SinLat * T.SinLat + S.CosLat * T.CosLat * COS(T.LonRad - S.LonRad) AS numeric(20,15)))
END
FROM dbo.TopNForProcess T, dbo.Table2 S
WHERE ISNULL(T.Lat, 0) <> 0 AND ISNULL(T.Lon, 0) <> 0

INSERT INTO @MinDistance
SELECT DataSeqno, MIN(distance) FROM @DistValues GROUP BY DataSeqno

INSERT INTO ResultTable (DataSeqno, Lat2, Lon2, StoreNo, Lat1, Long1, distance)
SELECT D.DataSeqno, D.Lat2, D.Lon2, D.StoreNo, D.Lat1, D.Long1, M.distance
FROM @DistValues D
INNER JOIN @MinDistance M
    ON D.DataSeqno = M.DataSeqno AND D.distance = M.distance
I created a view called TopNForProcess, which looks like this. This cut down the processing time compared to when I had this as a subquery.
SELECT TOP (100000) DataSeqno, Lat, Lon, LatRad, LonRad, SinLat, CosLat, SinLon, CosLon
FROM Table1
WHERE (DataSeqno NOT IN (SELECT DataSeqno FROM dbo.ResultTable))
  AND (ISNULL(Lat, 0) <> 0) AND (ISNULL(Lon, 0) <> 0)
I have indexes on Table1 (one on RowID and another on Lat and Lon) and on Table2 (Lat and Long).
Is there any way this can be optimized/improved? This is already in a stored procedure.
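Not a definitive answer, but if this is SQL Server 2005, one pattern worth testing is CROSS APPLY with TOP (1): it picks the nearest of the 1,250 dealers per source row without materializing the full cross join in @DistValues, and writes each batch to ResultTable in one statement. A sketch using the column names above (it assumes Table2 carries StoreNo and that Lat2/Lon2 in ResultTable are the dealer's coordinates):

[code]
-- Sketch, assuming SQL Server 2005+ and the columns shown above.
INSERT INTO dbo.ResultTable (DataSeqno, Lat2, Lon2, StoreNo, Lat1, Long1, distance)
SELECT T.DataSeqno, N.Lat, N.Lon, N.StoreNo, T.Lat, T.Lon, N.dist
FROM dbo.TopNForProcess AS T
CROSS APPLY
   (SELECT TOP (1)
           S.StoreNo, S.Lat, S.Lon,
           3963.1 * CASE
               WHEN CAST(S.SinLat * T.SinLat + S.CosLat * T.CosLat * COS(T.LonRad - S.LonRad) AS numeric(20,15))
                    NOT BETWEEN -1.0 AND 1.0 THEN 0.0
               ELSE ACOS(CAST(S.SinLat * T.SinLat + S.CosLat * T.CosLat * COS(T.LonRad - S.LonRad) AS numeric(20,15)))
           END AS dist
    FROM dbo.Table2 AS S
    ORDER BY dist) AS N;
[/code]

Replacing the NOT IN subquery in the view with a LEFT JOIN to ResultTable filtered on IS NULL may also help, since NOT IN over a growing ResultTable gets slower with each batch.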
The query shown below is designed to use seasonal profiles to compute 53 weeks of forecast data and then, from that, compute the number of weeks of supply of each item at each location. The query works, but the volume of data produced (20+M rows) is substantial. If I limit the CTE to a single location, it runs in 2 seconds and returns 41,000 rows. But when run for all locations and items, it runs for more than 4 hours. Would I do better converting the CTE to a sub-query and adding an index to improve the performance of the main query?
WITH Forecast AS
(SELECT Location_Idx
,Item_Idx
,Week_Code
,(CAST(AnnualQty AS DECIMAL(9))/53.0)*[Profile] AS fcst
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx
)
SELECT fcst1.Location_Idx
,fcst1.Item_Idx
,fcst1.Week_Code
,fcst1.fcst AS WeekQty
,SUM(fcst2.fcst) AS CumQty
FROM Forecast fcst1
INNER JOIN Forecast fcst2
ON fcst2.Location_Idx = fcst1.Location_Idx
AND fcst2.Item_Idx = fcst1.Item_Idx
AND fcst2.Week_code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx,fcst1.Item_Idx,fcst1.Week_code,fcst1.fcst
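Possibly, but a CTE and an equivalent subquery usually get the same plan; the bigger win is typically to materialize the forecast once, since the CTE is referenced twice and each reference can re-run the underlying join. A sketch using a temp table (assuming AnnualQty comes from FactReplenishmentProfile):

[code]
-- Sketch: compute the forecast once, index it, then do the cumulative self-join.
SELECT rp.Location_Idx, rp.Item_Idx, rp.Week_Code,
       (CAST(rp.AnnualQty AS DECIMAL(9)) / 53.0) * sp.[Profile] AS fcst
INTO #Forecast
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
    ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx;

CREATE CLUSTERED INDEX IX_Forecast
    ON #Forecast (Location_Idx, Item_Idx, Week_Code);

SELECT f1.Location_Idx, f1.Item_Idx, f1.Week_Code,
       f1.fcst AS WeekQty,
       SUM(f2.fcst) AS CumQty
FROM #Forecast f1
INNER JOIN #Forecast f2
    ON  f2.Location_Idx = f1.Location_Idx
    AND f2.Item_Idx     = f1.Item_Idx
    AND f2.Week_Code   <= f1.Week_Code
GROUP BY f1.Location_Idx, f1.Item_Idx, f1.Week_Code, f1.fcst;
[/code]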
I am having a problem executing long-running queries from an ASP application which connects to SQL Server 2000. Basically, I have batches of queries that are run using ADO in a loop written in VBScript. This works pretty well until the execution time of a single query starts to exceed some threshold, which I am trying to narrow down. I can typically run 2 - 10 queries in a loop, with the run time being anywhere from under a minute to an hour or more. Now that this application is being subjected to run against some large databases (25 - 40G), I'm having problems getting the application to continue beyond the first query if it takes a while to run.

I used SQL Profiler to try to diagnose what was going on. I can see the query executes to completion, but immediately after completing I can see an "Audit Logout" message, which apparently means that the client has disconnected. The query durations vary from 45 or 50 minutes up to over 90 minutes. I have the ADO connection and query timeouts set to very large values, e.g. 1000 minutes, so I can't think it's that. My guess is that there is some IIS setting or timeout that I am running up against and the connection to SQL Server is just dropped for some reason.

The configuration is:
NT 4.0 SP6
SQL Server 2000 SP3
IIS 4.0
Internet Explorer 5.5

I'm only running into this problem on the very largest databases we run against. The vast majority continue to function properly, but this is going to happen more often as time goes on and the databases continue to grow in size.

Any advice is appreciated,
-Gary
I have a simple update statement that runs forever in SQL 2005 but works fine in SQL 2000. We have a new server where we put SQL 2005 and restored the db. On the table in question, WEEKLYSALESHISTORY, I even re-indexed all the indexes and rebuilt the stats as well. But still no luck; it still runs extremely long: 1 hour 20 minutes.
I'll try to give you some background on these tables. WeeklySalesHistory has approx 30 fields. I have 11 indexes set up, WeekEndingDate being one of them. And ReplicationControl has an index on LastTransDateTime as well. So I think my indexes are fine.
/* Update WeekEnding Date for current week's WeeklySales records */
UPDATE WeeklySalesHistory
SET WeekEndingDate =
    (SELECT LastTransDateTime FROM ReplicationControl WHERE TableName = 'WEEKHST')
WHERE WeekEndingDate IS NULL
WeeklySalesHistory has approx 100,000,000 rows; ReplicationControl has 631,000 (I think I can delete some from there to bring it down to 100 or 200 records), although I don't think this is the issue, since SQL 2000 has the same thing and works fine.
I was trying to do this within SSIS and thought that was the issue. I am new to SSIS, but it runs long even if I just run it as a job with this simple update statement, so I think it's something with the tables, etc. that is wrong.
One thing I noticed: if I look at the statistics in SQL Server Management Studio, there are a ton of stats, some being statistics on indexes, which makes sense; but then I have a ton named hind_113_9_6 and similar. I must have 90 or so named like this. Not sure how to check all the stats on SQL 2000 to see if they moved over from there or what. I checked a few other tables and they don't have all these extra stats. Could this be causing the issue? Do I need to delete all these extras? Any help would be greatly appreciated.
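One hedged suggestion: if ReplicationControl can ever hold more than one WEEKHST row, that scalar subquery fails or behaves unpredictably, and SQL 2005 may also pick a plan that re-evaluates it per row. Pulling the value into a variable first takes the subquery out of the UPDATE entirely:

[code]
-- Sketch: read the value once, then do a plain update.
DECLARE @LastTrans datetime;

SELECT @LastTrans = LastTransDateTime
FROM dbo.ReplicationControl
WHERE TableName = 'WEEKHST';

UPDATE dbo.WeeklySalesHistory
SET WeekEndingDate = @LastTrans
WHERE WeekEndingDate IS NULL;
[/code]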
Try this script to see what queries are taking over a second. To get some real output, you need a long-running query. Here's one (estimated to take over an hour):

PRINT GETDATE()
select count_big(*)
from sys.objects s1, sys.objects s2, sys.objects s3,
sys.objects s4, sys.objects s5
PRINT GETDATE()

Output is:

session_id elapsed task_alloc task_dealloc runningSqlText FullSqlText query_plan
51 32847 0 0 select count_big(*) from sys.objects s1, sys.objects s2, sys.objects s3, sys.objects s4, sys.objects s5 SQL Plan

Clicking on SQL opens the full SQL batch as a .txt file, including the PRINT statements. Clicking on Plan allows you to see the .sqlplan file in MSSMS.

========
Title: Using a VB Script to show long-running queries, complete with query plans.

Today (July 14th), I found a query running for hours on a development box. Rather than kill it, I decided to use this opportunity to develop a script to show long-running queries, so I could see what was going on. (Reference Roy Carlson's article for the idea.)

This script generates a web page which shows long-running queries with the currently-executing SQL command, full SQL text, and .sqlplan files. The full SQL query text and the sqlplan file are output to files in your temp directory. If you have SQL Management Studio installed on the local computer, you should be able to open the .sqlplan to see the query plan of the whole batch for any statement.

'LongestRunningQueries.vbs
'By Aaron W. West, 7/14/2006
'Idea from:
'http://www.sqlservercentral.com/columnists/rcarlson/scriptedserversnapshot.asp
'Reference: Troubleshooting Performance Problems in SQL Server 2005
'http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

Sub Main()
    Const MinimumMilliseconds = 1000

    Dim srvname
    If WScript.Arguments.count > 0 Then
        srvname = WScript.Arguments(0)
    Else
        srvname = InputBox("Enter the server Name", "Server", ".")
        If srvname = "" Then
            MsgBox ("Cancelled")
            Exit Sub
        End If
    End If

    Const adOpenStatic = 3
    Const adLockOptimistic = 3
    Dim i
    ' making the connection to your sql server
    ' change yourservername to match your server
    Set conn = CreateObject("ADODB.Connection")
    Set rs = CreateObject("ADODB.Recordset")
    ' this is using the trusted connection; if you use sql logins
    ' add username and password, but I would then encrypt this
    ' using Windows Script Encoder
    conn.Open "Provider=SQLOLEDB;Data Source=" & _
        srvname & ";Trusted_Connection=Yes;Initial Catalog=Master;"

    ' The query goes here
    sql = "select " & vbCrLf & _
        " t1.session_id, " & vbCrLf & _
        " t2.total_elapsed_time AS elapsed, " & vbCrLf & _
        " -- t1.request_id, " & vbCrLf & _
        " t1.task_alloc, " & vbCrLf & _
        " t1.task_dealloc, " & vbCrLf & _
        " -- t2.sql_handle, " & vbCrLf & _
        " -- t2.statement_start_offset, " & vbCrLf & _
        " -- t2.statement_end_offset, " & vbCrLf & _
        " -- t2.plan_handle," & vbCrLf & _
        " substring(sql.text, statement_start_offset/2, " & vbCrLf & _
        " CASE WHEN statement_end_offset<1 THEN 8000 " & vbCrLf & _
        " ELSE (statement_end_offset-statement_start_offset)/2 " & vbCrLf & _
        " END) AS runningSqlText," & vbCrLf & _
        " sql.text as FullSqlText," & vbCrLf & _
        " p.query_plan " & vbCrLf & _
        "from (Select session_id, " & vbCrLf & _
        " request_id, " & vbCrLf & _
        " sum(internal_objects_alloc_page_count) as task_alloc, " & vbCrLf & _
        " sum (internal_objects_dealloc_page_count) as task_dealloc " & vbCrLf & _
        " from sys.dm_db_task_space_usage " & vbCrLf & _
        " group by session_id, request_id) as t1, " & vbCrLf & _
        " sys.dm_exec_requests as t2 " & vbCrLf & _
        "cross apply sys.dm_exec_sql_text(t2.sql_handle) AS sql " & vbCrLf & _
        "cross apply sys.dm_exec_query_plan(t2.plan_handle) AS p " & vbCrLf & _
        "where t1.session_id = t2.session_id and " & vbCrLf & _
        " (t1.request_id = t2.request_id) " & vbCrLf & _
        " AND total_elapsed_time > " & MinimumMilliseconds & vbCrLf & _
        "order by t1.task_alloc DESC"

    rs.Open sql, conn, adOpenStatic, adLockOptimistic
    'rs.MoveFirst
    pg = "<html><head><title>Top consuming queries</title></head>" & vbCrLf
    pg = pg & "<table border=1>" & vbCrLf
    If Not rs.EOF Then
        pg = pg & "<tr>"
        For Each col In rs.Fields
            pg = pg & "<th>" & col.Name & "</th>"
            c = c + 1
        Next
        pg = pg & "</tr>"
    Else
        pg = pg & "Query returned no results"
    End If
    cols = c

    Dim filename
    Dim WshShell
    Set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshSysEnv = WshShell.Environment("PROCESS")
    temp = WshShell.ExpandEnvironmentStrings(WshSysEnv("TEMP")) & "\"

    Dim fso, f
    Set fso = CreateObject("Scripting.FileSystemObject")

    i = 0
    Dim c
    Do Until rs.EOF
        i = i + 1
        pg = pg & "<tr>"
        For c = 0 To cols - 3
            pg = pg & "<td>" & RTrim(rs(c)) & "</td>"
        Next
        'Output FullSQL and Plan Text to files, provide links to them
        filename = "topplan-sql" & i & ".txt"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols - 2)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>SQL</a>"
        filename = "topplan" & i & ".sqlplan"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols - 1)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>Plan</a>"
        'We could open them immediately, eg:
        'WshShell.run temp & filename
        rs.MoveNext
        pg = pg & "</tr>"
    Loop
    pg = pg & "</table>"

    filename = temp & "topplans.htm"
    Set f = fso.CreateTextFile(filename, True, True)
    f.Write pg
    f.Close

    Dim oIE
    Set oIE = CreateObject("InternetExplorer.Application")
    oIE.Visible = True
    oIE.Navigate (filename)
    'Alternate method:
    'WshShell.run filename

    ' cleaning up
    rs.Close
    conn.Close
    Set WshShell = Nothing
    Set oIE = Nothing
    Set f = Nothing
End Sub

Main
I'm calling a PHP page that runs a stored procedure. The SP contains an xp_cmdshell command that runs a DTS package. This DTS package is HUGE and will take many hours to import tables from an Oracle database to SQL Server. But I don't want my PHP page to "hang" in the process...
How can I execute my PHP page and refresh it every 30 seconds to see if the import is done? In my actual code, I create a "start_file.txt" when I begin the import and an "end_file.txt" when it's done. I want to refresh every 30 seconds to see if the "end_file.txt" is created and display an "Importation Done!" message.
To be able to do that, I need the DTS package to run in the background of the web server, or asynchronously from my PHP page.
Simple (stupid) question: is it possible to create a SP to call my main SP so it will run independently?
Let's review some of my code here.
Part of my PHP page: $DTS_result = $dbj->Execute(mssql_query("EXECUTE Run_DTS_Packages"));
My stored procedure (without username & password...):

CREATE PROCEDURE Run_DTS_Packages
AS
exec master.dbo.xp_cmdshell 'C:\Progra~1\Micros~3\80\Tools\Binn\ISQL.EXE -S RHEA -U user -P pwd -Q "ISQL_Batch ''D:\DDFI\Importe\IMPICAFI.bat''" -n -d DDFI'
GO
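One way to get the asynchronous behaviour without a wrapper SP: put the EXECUTE into a SQL Agent job and have the PHP page start the job. sp_start_job returns as soon as the job is queued, so the page comes back immediately and the file-polling approach above keeps working. A sketch with a hypothetical job name:

[code]
-- Create an Agent job (via Enterprise Manager or sp_add_job) whose single
-- step runs Run_DTS_Packages, then kick it off asynchronously:
EXEC msdb.dbo.sp_start_job @job_name = 'Run DTS Packages';  -- hypothetical job name
[/code]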
I executed an xp_cmdshell command. More than 24 hours later, this process is still running. I tried killing it with Enterprise Manager and Query Analyser; both gave me a message saying it was successfully killed. But when I do an sp_who, the process is still there, executing.
How can I kill this process that's running xp_cmdshell?
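Two things worth checking, offered as a sketch rather than a definitive fix: KILL only marks the session for rollback, and a session stuck inside xp_cmdshell is waiting on the external program, which KILL does not terminate. Ending the spawned process (e.g. cmd.exe or whatever it launched) in Task Manager on the server usually releases the SPID.

[code]
-- See whether the killed session is actually rolling back, and how far along:
KILL 61 WITH STATUSONLY;  -- 61 is a hypothetical spid from sp_who
[/code]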
I am running two jobs in the background using SQL Agent. There was a power shutdown, and the two jobs I was running in the background were cancelled, leaving the message below:
The job was stopped prior to completion by Shutdown Sequence 0.
Can anyone tell me why this happened, although I am running them in the background?
I am not sure where the servers are located and if they are located in the same building. Do you think it is because of this power shutdown only?
Hi everyone... I'm trying to execute this update statement... It takes an eternity. Any ideas on how to rewrite or speed it up?
It's a several-step process; below is everything that I run, one step at a time. The final update statement is what takes so long. It should only affect about 2600 rows out of a potential 9000. That's why I'm confused about the response time.
select d.olddevicename, de.device, d.newdevicename into #temp9 from dns d, devices de where de.device = d.olddevicename
update #temp9 set device = newdevicename where olddevicename = device
update devices set device = #temp9.device from #temp9, devices where #temp9.device in (select #temp9.device from #temp9, devices where #temp9.olddevicename = devices.device)
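A hedged observation: the IN subquery in that last statement cross joins #temp9 with devices again inside the IN list, so the server does far more work than a 2,600-row update needs. Since #temp9 already pairs olddevicename with the new value after the second step, a direct join should do the same update in one pass:

[code]
-- Sketch: join devices to #temp9 on the old name and take the new one.
UPDATE de
SET de.device = t.device
FROM devices AS de
INNER JOIN #temp9 AS t
    ON t.olddevicename = de.device;
[/code]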
I have three scheduled jobs: one runs once a day, one runs once per hour, and another runs every 17 minutes. It is a NetIQ application. I just scheduled a SQL Server maintenance job last night, which ran at 2:00 AM and 4:00 AM. This morning, I came into the office and found all my jobs were still running, and they were all blocked by the first 3 jobs. I had to kill all of them. In the afternoon, I kicked off one of my many DTS packages, which usually runs about 40 minutes, but it failed. I tried several times but no luck. I suspected one of the user tables or one of the stored procedures was corrupted. After I recycled the server, dropped the table and the stored procedure, and recreated them, the package went fine. The stored procedure involves many updates and inserts.
The question I have is: is it possible that this problem was caused by my killing the unfinished jobs (especially the SQL maintenance job)?
NOTE: the SQL maintenance job does not include the backup of the database and transaction log.
My backups are running 5-6 hours on SQL 2000. I'm sure they used to take only an hour or so. On another server, backing up the same database (both about 50 gig), the backup takes only 45 min - 1 hour. What can I look at to see why it's taking so long?
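One place to start, sketched below: msdb keeps a history of every backup, so you can pinpoint when the duration jumped and whether the backup size grew with it (a hypothetical database name is used):

[code]
-- Compare recent backup durations and sizes from msdb history.
SELECT database_name, backup_start_date, backup_finish_date,
       DATEDIFF(minute, backup_start_date, backup_finish_date) AS minutes,
       backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'YourDb'  -- hypothetical name
ORDER BY backup_start_date DESC;
[/code]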
Trying to come up with a way to monitor (without Profiler; hopefully with a job and a SELECT statement) a specific SQL job that may cause a problem if the duration is too long. It seems that there is an SP called sp_sqlagent_log_jobhistory that shoves a record into sysjobhistory, but only after all the job steps run. Anyone tried this before?
Hello gurus. I am using SQL 2005, and one job's status is "executing" in the job monitor. How can I check how long this job has been running? Please advise.
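For both of the last two questions, a sketch against msdb (SQL 2005): sysjobactivity records when the agent started each job, and a null stop time means it is still running. Wrapped in a scheduled job, this can raise an alert when minutes_running passes a threshold.

[code]
-- Sketch: currently-executing agent jobs and their elapsed time.
SELECT j.name,
       a.start_execution_date,
       DATEDIFF(minute, a.start_execution_date, GETDATE()) AS minutes_running
FROM msdb.dbo.sysjobactivity AS a
INNER JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL
  -- restrict to the current agent session to skip stale rows:
  AND a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions);
[/code]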
I've got a server (SQL 2K, Win2K) where the backups have started running long. The database is a bit largish -- 150GB or so. Up until last month, the backups were taking on the order of 4 to 5 hours -- depending on the level of activity on the server. I'm using a T-SQL script in the SQLAgent to run the backups. Native SQL backup to an AIT tape drive.

Now, for no apparent reason, the backups are taking on the order of 24 to 26 hours. The backups complete successfully -- no errors, just taking an outrageously long time to complete. DBCCs check out AOK, no problems with the database.

No changes to the machine. No hardware changes. No software changes. Weird. Multiple tape media have been tried -- it's not a case of a tape going bad. We've had no problems with this box for almost 4 years. Now it's gettin' jiggy with us!

Any ideas on where to start with this one? Thanks in advance.
Hi. I'm fairly new to SSRS, and very new to this forum. I have a report based on a stored procedure. I've optimized the procedure so that it runs from 2-4 minutes (previously over half an hour). However, when I run the report that calls the sp, it runs forever (well over 45 minutes in some cases), and the users basically give up on it. Any ideas of why this happens and what steps I can take to improve performance?
I need to execute a long-running package (it takes about 16 hours to finish) to load a data warehouse for the first time with all historical data. This package is a master package that executes other packages; I log the start time and the finish time of the package in a table to manage future incremental loads.
I executed the package on the SQL Server where it is saved, but after it had been running 8 hours, a new instance of the package was started automatically. Then two more started, one every two hours.
I set MaxConcurrentExecutables = 4; could this be causing this strange behavior?
I have a SQL procedure that can take several minutes to complete. I allow users to initiate the process through a web site and view a progress bar. When the process is running, though, the site slows to a crawl or times out completely. That long running process seems to block all other queries on the database. Is there a way to give this process a low priority or somehow throttle its resource use so that the other web processes can get a chance to run in a timely manner? Thanks for any advice.
I am having some difficulty with data-driven processes in an internal web application using ASP.NET framework 2.0 with SQL Server 2000.
I have a facility where the user enters search criteria to retrieve a list with the idea being that when they click a button the list set is migrated to a table as a batch. The SQL Server processes to perform the database operations can take up to 5min because of volume of data and relational complexity, which is fine.
The only problem is that I'm having trouble managing the SQL Server process if the user closes the browser or goes to a new URL, etc. If this happens on the client side, the HTTP request is terminated but the SQL statement is left on the SQL Server still running; eventually it runs to completion, but I would prefer the SQL process to be abandoned and rolled back as well.
Any idea what is happening here and how it can be handled?
I have queries which take over 30 secs to run, which I wish to monitor. Currently, I am monitoring using SQL Profiler. Is there any way of setting up mail to e-mail me when such a query happens? Could I set up an alert, or is there some other method?
I want to be able to react to these events faster, before the users complain. I am using SQL Server 7 Enterprise, and I have Exchange set up.
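There's no built-in "query over N seconds" alert in SQL Server 7.0, but since SQL Mail is available with Exchange, one workaround (a sketch, with a hypothetical address) is a frequently scheduled job step that scans sysprocesses and mails anything that has been busy for more than 30 seconds:

[code]
-- Sketch for SQL 7.0: mail a list of sessions whose current batch
-- has been running for more than 30 seconds.
IF EXISTS (SELECT 1 FROM master..sysprocesses
           WHERE cmd <> 'AWAITING COMMAND'
             AND spid > 10  -- skip system spids
             AND DATEDIFF(second, last_batch, GETDATE()) > 30)
    EXEC master..xp_sendmail
        @recipients = 'dba@yourcompany.com',  -- hypothetical address
        @subject = 'Long-running query detected',
        @query = 'SELECT spid, loginame, hostname,
                         DATEDIFF(second, last_batch, GETDATE()) AS seconds
                  FROM master..sysprocesses
                  WHERE cmd <> ''AWAITING COMMAND''
                    AND spid > 10
                    AND DATEDIFF(second, last_batch, GETDATE()) > 30'
[/code]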
Problem: I schedule a job that calls a stored procedure which loads around 1.5 million records. The job takes 19 hrs to complete. However, if I run that stored procedure manually in Query Analyser, it takes only 45 minutes.
Has anyone faced this problem? Is it a known problem? Any suggestions/recommendations?
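One hedged thing to rule out: Query Analyser and the Agent job may run with different SET options, which gives each context its own cached plan (and a parameter-sniffed bad plan can be orders of magnitude slower). Comparing the two environments is a quick first test:

[code]
-- Run in Query Analyser, and also as a T-SQL job step that logs its output;
-- any difference (QUOTED_IDENTIFIER, ARITHABORT, ANSI_NULLS...) is a lead.
DBCC USEROPTIONS
[/code]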
I have a very long-running stored proc (import and transformation of 2-3 million records). The duration of about 1 hour is not the problem. My problem is that I want to send some notifications to the UI to show a progress counter.
Is there any possibility of sending out a message after certain steps (e.g. after positioning the cursor on the next row), via an extended stored procedure, which I could catch in my UI?
I would be very appreciative if somebody would send me a hint.
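No extended stored procedure should be needed for this; the usual trick is RAISERROR with severity 0 and the NOWAIT option, which pushes an informational message to the client immediately instead of waiting for the output buffer to fill. In ADO.NET, the messages arrive through the SqlConnection.InfoMessage event. A sketch with hypothetical counter variables:

[code]
-- Inside the proc's loop, emit progress without ending the batch:
DECLARE @done int, @total int;
SELECT @done = 50000, @total = 2500000;  -- hypothetical counters

RAISERROR ('Processed %d of %d rows', 0, 1, @done, @total) WITH NOWAIT;
[/code]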
I'm trying to write a script that will detect long-running agent jobs.
Having looked at this article: http://www.databasejournal.com/features/mssql/article.php/3500276
It appears that agent job IDs don't necessarily get stored in the program_name column of the sysprocesses table; this is the case when the agent executes an OS command. It also appears that job steps do not get stored in sysjobhistory until the step is complete, so that cannot be used accurately either.
Does anyone know of an effective way to find long-running jobs other than these methods?
I just wanted to post a follow-up to a message I posted some months ago about a long-running transaction that was blocking all other users. The link is below:
http://groups.google.com/group/comp...649bee2002646a2

Using the new row-versioning functionality of SQL 2005 completely solved this problem. Books Online says there is a performance impact, but that the better performance of SQL 2005 in general might offset it. So far this seems to be the case. Just posting it here in case anyone else has the problem. The SQL command I had to execute to get everything working properly was:

ALTER DATABASE DBname
SET READ_COMMITTED_SNAPSHOT ON;
How does one prevent a long-running procedure from crapping out in CLR? I am trying to do a pull from a distant data source, and it works, except I have to break my stored procedure call down into several smaller calls. I would like to do everything in one shot, but I get a ThreadAbortException when I try to get a lot of data.
I have a table that contains approx 200 thousand records that I need to run validations on. Here's my stored proc:
[code]
CREATE PROCEDURE [dbo].[uspValidateLoadLeads]
    @sQuotes char(1) = null,
    @sProjectId varchar(10) = null,
    @sErrorText varchar(1000) out
AS
BEGIN
    DECLARE @ProcName sysname, @Error int, @RC int, @lErrorCode bigint
    DECLARE @SQL varchar(8000)

    IF @sQuotes = '0'
    BEGIN
        UPDATE dbo.prProjectDiallingList_staging
        SET sPhone = RTrim(LTrim(Convert(varchar(30), Convert(numeric(20, 1), phone))))
    END
    ELSE
    BEGIN
        UPDATE dbo.prProjectDiallingList_staging
        SET sPhone = phone
    END

    --4. Update failed validation column if not 10 digits
    UPDATE dbo.prProjectDiallingList_staging
    SET sFailedValidation = 'X'
    WHERE (Len(RTrim(LTrim(sPhone))) <> 10)

    --5. Dedup
    UPDATE a
    SET a.sFailedValidation = 'X'
    FROM dbo.prProjectDiallingList_staging a
    INNER JOIN dbo.prProjectDiallingList_staging b
        ON a.sPhone = b.sPhone
    WHERE (a.iList_StagingID > b.iList_StagingID)

    --6. Update failed validation column if not numeric
    UPDATE dbo.prProjectDiallingList_staging
    SET sFailedValidation = 'X'
    WHERE (IsNumeric(RTrim(LTrim(sPhone))) = 0)

    --7. Update time zones
    UPDATE s
    SET s.sTimeZone = z.sTimeZone
    FROM dbo.prProjectDiallingList_staging s
    LEFT OUTER JOIN dbo.prPhoneTimeZone z
        ON left(rtrim(ltrim(s.sPhone)), 3) = z.sPhoneAreaCode

    --8. Insert into dialing table only records that have not failed the validation
    INSERT dbo.prProjectDiallingList (iPrProjectId, sPhoneNumber, sTimeZone)
    SELECT @sProjectId, sPhone, sTimeZone
    FROM dbo.prProjectDiallingList_staging
    WHERE ISNULL(sFailedValidation, '1') = '1'

    UPDATE d
    SET d.bProcessReporting = 1
    FROM dbo.prProjectDialling d
    WHERE d.iPrProjectId = @sProjectId
END
[/code]
When I execute this stored proc, it runs for more than 5 minutes. Is there anything I can do to speed it up? Maybe there is a faster way of writing these queries?
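Hard to say without the plans, but the dedup in step 5 self-joins the 200k-row staging table on sPhone, and steps 4-7 each rescan the table; an index supporting the self-join is a cheap first experiment (a sketch, assuming no such index exists yet):

[code]
-- Supports the step-5 self-join and the other sPhone-based scans.
CREATE INDEX IX_staging_sPhone
    ON dbo.prProjectDiallingList_staging (sPhone, iList_StagingID);
[/code]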
Is there any way to measure the progress of a long-running query in SQL 7.0 -- for instance, to find where in its query plan a query currently is?
I have a query that is currently 2 1/2 hours into its run. Since it's joining three large tables -- one with 42 million rows and two with 7 million rows -- I'm expecting the query to take a while. However, I have no way of estimating exactly how long it will take. Before I ran it, I optimized it the best I could in Query Analyzer using an estimated query plan, making sure I had all the right indexes, etc. I've been trying to use the estimated cost to project query time, but that hasn't been working, since queries with similar costs can take radically different amounts of time to execute.
Now I'm sitting here waiting, wondering if the query is just taking too long and I should stop it and work on optimizing it some more (since I will have to run a couple more queries like it), or let it finish. But I have no clue how close it is to finishing. I've tried looking at the Physical I/O given by sp_who2, calculating the number of pages it would have to read if it had to read everything from disk, and estimating its progress by that, but this seems dubious at best, since I don't know a whole slew of factors (e.g. how many pages are being read from the cache, whether my page calculation is correct, etc.).
So, does anyone know of any way to figure out how soon a long-running query will finish in SQL 7.0?
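There's no reliable way in 7.0 (no per-query progress counters exist), but sampling sysprocesses shows whether the query is still advancing, which at least distinguishes "slow but working" from "stuck". A sketch with a hypothetical SPID:

[code]
-- Run a few minutes apart; steadily climbing cpu and physical_io
-- mean the query is still making progress.
SELECT spid, status, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 57;  -- hypothetical spid of the long query
[/code]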
I have a stored procedure being called from Visual Cafe 4.0 that takes over 30 minutes to run. Is there any way to background this so that control returns to the browser that the JFC applet is running in? The result set is saved to local disk and an email message sent to the user on completion. Thanks, Dave.