I am having a problem with a procedure. I can run it from QA and it takes 50 minutes. When I run it as a scheduled job, it takes 3 hours! What could be the cause of this? Why the big time difference?
I have set up thresholds for "time behind" and I'm getting event 32040 in the event log when the threshold has been exceeded. I have also set up a SQL Agent alert with an email operator. However, the alert is never generated (and no email sent). Shouldn't it be possible to alert an operator as described in http://www.microsoft.com/technet/prodtechnol/sql/2005/mirroringevents.mspx too? The email profile is working and is also set up on SQL Agent, and SQL Agent has been restarted (just in case that's necessary).
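For reference, a minimal sketch of the alert setup being described, with hypothetical alert, operator and database names (the TechNet article above covers the mirroring event IDs):

USE msdb;
GO
-- Alert on mirroring performance event 32040 ("time behind" threshold exceeded)
EXEC dbo.sp_add_alert
    @name = N'DBM time behind threshold',
    @message_id = 32040,
    @database_name = N'MyMirroredDb';      -- hypothetical database name
-- Route the alert to an existing operator by email
EXEC dbo.sp_add_notification
    @alert_name = N'DBM time behind threshold',
    @operator_name = N'DBA Operator',      -- hypothetical operator name
    @notification_method = 1;              -- 1 = email
GO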
I have a stored procedure that transfers/processes data from one table to two different tables. The destination tables have a unique value constraint, as the source table at times has duplicate records, and this will prevent the duplicates from being reported. When the stored procedure (which includes a cursor) is executed through Query Analyzer, it runs fine and reports an error every time it sees a duplicate value (as expected). It moves all the unique values from the source to the destination tables.

However, if the same stored procedure is run as a task/job in SQL Server Agent, the behaviour is different. The job fails when it sees the error and ends up skipping records or terminating the procedure altogether. E.g. if there are 100 records in the source table with 10 duplicates, the stored procedure when run through Query Analyzer will copy the 90 unique records to the destination tables, but when run from SQL Agent it copies just 10-15 records. Any idea why this happens?
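For reference, a minimal sketch of the pattern described above, with hypothetical table and column names; the cursor checks @@ERROR after each INSERT so a duplicate-key error (2627) is reported but the loop continues:

DECLARE @k int, @p nvarchar(100);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT KeyCol, Payload FROM dbo.SourceTable;   -- hypothetical source table
OPEN cur;
FETCH NEXT FROM cur INTO @k, @p;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the unique constraint on dbo.DestTable filters out the duplicates
    INSERT INTO dbo.DestTable (KeyCol, Payload) VALUES (@k, @p);
    IF @@ERROR <> 0
        PRINT 'Duplicate key skipped: ' + CAST(@k AS varchar(12));
    FETCH NEXT FROM cur INTO @k, @p;
END
CLOSE cur;
DEALLOCATE cur;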
I have created a stored procedure for a routine task to be performed periodically in my application. Say I want to execute my stored procedure at 12:00 AM daily.

How can I add my stored procedure to the SQL Server Agent jobs?
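For reference, a minimal sketch of scheduling a procedure daily at 12:00 AM through the msdb job procedures (SQL Server 2005 and later), with hypothetical job, procedure and database names; SQL Server Agent must be running:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'NightlyRoutine';
EXEC dbo.sp_add_jobstep
    @job_name = N'NightlyRoutine',
    @step_name = N'Run routine proc',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.MyRoutineProc;',   -- hypothetical procedure
    @database_name = N'MyAppDb';             -- hypothetical database
EXEC dbo.sp_add_schedule
    @schedule_name = N'DailyMidnight',
    @freq_type = 4,                -- daily
    @freq_interval = 1,            -- every 1 day
    @active_start_time = 000000;   -- 12:00 AM (HHMMSS)
EXEC dbo.sp_attach_schedule @job_name = N'NightlyRoutine', @schedule_name = N'DailyMidnight';
EXEC dbo.sp_add_jobserver @job_name = N'NightlyRoutine';   -- register on the local server
GO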
One of my clients has a stored procedure on their secondary server that copies a bunch of data from the production server. (Replication will break the accounting software, according to its authors. The production server generates a nightly full backup, so if the secondary can be scripted to do a nightly restore from that same file, then that would probably be a Big Win.)

Anyway, if I execute the stored procedure from Query Analyzer, it finishes (after nearly 24 hours) - tested once recently, and I'm sure at least a few times at some point in the past. If I run a SQL Server Agent job that executes the stored procedure, then it gets cut off after about 15-20 minutes - tested once recently with a manual run, and for several weeks of scheduled runs before that. (This being a secondary server, it took a while for the problem to be noticed.) What are the likely causes of this?

Both servers are running SQL 2K with SP3, and limited to TCP/IP and named pipes. RPC is allowed, with a 600-second timeout, but that doesn't seem relevant, since both the successful and unsuccessful methods go well past that length. The production server is a recent purchase, and works well for their daily operations; the secondary server and/or its network connection might be flaky for all I know, but that doesn't seem relevant either, since success appears to depend consistently on method of execution.
I am doing a 3rd-party application implementation. If I have a stored procedure in one database, can it be copied to another database (newly created) on the same server? I will have the names of the old and new databases when it is copied. I need to automate this copy.
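For reference, a minimal sketch of one way to automate this on SQL Server 2005 and later, with hypothetical database and procedure names; OBJECT_DEFINITION returns the CREATE PROCEDURE text, and calling sp_executesql three-part-qualified runs it in the target database's context:

USE OldDb;   -- hypothetical source database
GO
DECLARE @sql nvarchar(max);
SELECT @sql = OBJECT_DEFINITION(OBJECT_ID(N'dbo.MyProc'));   -- hypothetical procedure
EXEC NewDb.sys.sp_executesql @sql;   -- creates the procedure in the new database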
I have a stored procedure which loops through about 180 tables and inserts the data into one table used for reporting purposes. One field is an ntext field. If I execute the stored procedure from Query Analyzer, the entire contents of the ntext field are available. When I run it as a scheduled task (every half hour) from SQL Server Agent, the data in the ntext field is truncated.
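One hedged thing to check: ntext reads are capped by the session's TEXTSIZE setting, and an Agent session does not necessarily inherit the value Query Analyzer uses. A minimal test is to pin it at the top of the procedure or job step:

SET TEXTSIZE 2147483647;   -- maximum; ntext columns are then read in full
-- ...rest of the procedure's loop and INSERT logic unchanged...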
To set the scene: I am using SQL 2012 in project deployment mode (SSIS Catalog rather than file system). I have set up an SSIS package, run via a SQL Agent job, that executes a stored procedure which exports data for the last hour to a .tsv file and then FTPs the file to another location - this all works fine. However, I can see there may be a requirement to run the package with dates that need to be set manually, i.e. if a file is lost or for some other reason the package has not run and missed some of its hourly slots, and the customer requires the files to be resent.
The stored procedure I am using has parameters for "DateOverride" (boolean) and "Start" and "End" dates (datetime), with defaults of 0 for "DateOverride" and NULL for the "Start" and "End" dates. I have built logic into the procedure which sets the dates if the parameters are NULL (as above, to an hour before now). What I would like to be able to do (and this is to make it user-friendly for support staff) is to set parameters/variables in SQL Agent with "DateOverride" set to 1 and the dates I would like sent to the stored procedure's "Start" and "End" parameters.
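For reference, a minimal sketch of the parameter logic described, with hypothetical names; the job step (or SSIS parameter mapping) can then pass @DateOverride = 1 plus explicit dates:

CREATE PROCEDURE dbo.ExportHourlyFile
    @DateOverride bit = 0,
    @Start datetime = NULL,
    @End   datetime = NULL
AS
BEGIN
    SET NOCOUNT ON;
    IF @DateOverride = 0 OR @Start IS NULL OR @End IS NULL
    BEGIN
        -- default window: the hour before now, aligned to the hour
        SET @End = DATEADD(HOUR, DATEDIFF(HOUR, 0, GETDATE()), 0);
        SET @Start = DATEADD(HOUR, -1, @End);
    END
    SELECT @Start AS WindowStart, @End AS WindowEnd;   -- placeholder for the export query
END
-- Hypothetical override call from the job step:
-- EXEC dbo.ExportHourlyFile @DateOverride = 1, @Start = '20240101 09:00', @End = '20240101 10:00';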
I did try using the parameters in SSIS, which worked well when the values were true or false (0, 1) but didn't work at all for the dates. If I left the dates as I had set them in SSIS it worked, but if I changed them (even if it was just changing the hour) the job errored/crashed and corrupted the job step, leaving me the ability only to delete it.
I can't access SQL Server 2008 R2 remotely on Windows 2008
1. TCP/IP is enabled under SQL Server Network Configuration > Protocols, SQL Native Client 10.0 Configuration > Client Protocols, and SQL Native Client 10.0 Configuration (32 bit) > Client Protocols.
2. The firewall is disabled, to make sure it's not interfering with things.
I noticed the SQL Server Agent service is stopped. Not sure if this is the issue. When I try to change it from Disabled to Automatic or Manual, I get this error:
I have created a procedure that is about 500 lines long.
This is actually a controller procedure which calls other procedures and functions to generate data for a report, but it takes about 3 minutes to generate the result set. I am not using any temp tables; I am using table variables.
My procedure does not recompile. It also has some INSERT INTO ... EXEC statements.
My question is: will performance increase if I split the stored procedure into 2, 3 or 4 parts?
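Splitting by itself rarely changes the query plans; before doing that, one hedged test is swapping the table variables for #temp tables, since table variables carry no statistics and can mislead the optimizer on a many-step report. A minimal sketch with hypothetical names (INSERT ... EXEC works with #temp tables too):

CREATE TABLE #Interim (ID int PRIMARY KEY, Amount money);
INSERT INTO #Interim (ID, Amount)
EXEC dbo.GetReportRows;            -- hypothetical child procedure
SELECT ID, Amount FROM #Interim;   -- downstream joins now see real statistics
DROP TABLE #Interim;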
Given that a stored procedure and T-SQL code in query analyzer are exactly the same, why would the stored procedure run much slower?
By "much slower" I mean 3 seconds for the code in Query Analyzer as opposed to 2 minutes 33 seconds for the stored procedure.
Exact same code!
Profiler also shows more reads and writes for the stored procedure, and a lot of BatchStarted and BatchCompleted events between the 'start' and 'end' of the stored procedure.
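One hedged explanation worth ruling out: Query Analyzer and application clients can connect with different SET options (ARITHABORT is the classic one), and each combination gets its own cached plan, so the "exact same code" can run under two different plans. A minimal test, with a hypothetical procedure name:

SET ARITHABORT ON;    -- Query Analyzer's typical default
EXEC dbo.MySlowProc;
SET ARITHABORT OFF;   -- common default for application connections
EXEC dbo.MySlowProc;  -- compare the two timings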
I am writing an ASP-based application that creates a dynamic query, executes it and displays the results. I was thinking about writing a stored procedure to increase performance. How much can the SP help me boost query response time?
I have a stored procedure with strange behaviour. As soon as it is created it performs well, but after some (indeterminate) time it takes much longer to execute... (up to 70s!)
The thing I have not understood: if I take the query inside the stored procedure and execute it separately, I get the result immediately... :eek:
Dropping and re-creating the procedure makes it fast again. I have just scheduled a maintenance plan with index optimization and an integrity check, but this doesn't seem to help...
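A lighter-weight test than dropping and re-creating: flush just this procedure's cached plan and refresh the statistics it was compiled against (hypothetical names below). If this also restores the speed, the problem is a stale plan or stale statistics rather than the procedure text:

EXEC sp_recompile N'dbo.MySlowProc';   -- plan is rebuilt on the next execution
UPDATE STATISTICS dbo.MyBigTable;      -- hypothetical table the procedure reads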
Hi, how should I monitor the performance of stored procedures and SQL statements? I want to know how much CPU time a query or stored procedure is taking.
Are there any system tables which give this information?
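For a single statement or procedure, the session statistics give CPU and elapsed time directly; on SQL Server 2005 and later, the plan-cache DMVs (queryable like system tables) aggregate it per cached statement. A minimal sketch, with a hypothetical procedure name:

-- Per-run measurement in the current session:
SET STATISTICS TIME ON;
EXEC dbo.MyProc;        -- hypothetical procedure
SET STATISTICS TIME OFF;

-- Aggregated CPU per cached statement (SQL Server 2005+):
SELECT TOP 10
    qs.total_worker_time,     -- total CPU, in microseconds
    qs.execution_count,
    qs.total_elapsed_time,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;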
My application fetches a batch of data through a web service and writes 1000 entities per batch to a SQL Server 2000 database. There are 4 tables in every batch. The following number of SQL commands are executed on average per batch:

Table #1: always 1
Table #2: 5
Table #3: 5
Table #4: 3

The problem is that the performance slows down with every batch. Below is an excerpt from my log file:

2004-12-15 12:00:01 Starting job... (RAM usage: 6,38 mb)
2004-12-15 12:00:39 data fetch time: 00:00:28 (RAM usage: 23,04 mb)
2004-12-15 12:00:39 Total data fetch time: 00:00:37 (RAM usage: 23,04 mb)
2004-12-15 12:00:39 Inserting/updating 1000 entities...
2004-12-15 12:01:20 Write SQL time: 00:00:40
2004-12-15 12:01:49 data fetch time: 00:00:24 (RAM usage: 26,87 mb)
2004-12-15 12:01:49 Total data fetch time: 00:00:29 (RAM usage: 26,87 mb)
2004-12-15 12:01:49 Inserting/updating 1000 entities...
2004-12-15 12:02:59 Write SQL time: 00:01:10
2004-12-15 12:04:06 data fetch time: 00:00:29 (RAM usage: 27,48 mb)
2004-12-15 12:04:06 Total data fetch time: 00:01:06 (RAM usage: 27,48 mb)
2004-12-15 12:04:06 Inserting/updating 1000 entities...
2004-12-15 12:05:30 Write SQL time: 00:01:23
2004-12-15 12:06:05 data fetch time: 00:00:31 (RAM usage: 27,03 mb)
2004-12-15 12:06:05 Total data fetch time: 00:00:35 (RAM usage: 27,03 mb)
2004-12-15 12:06:05 Inserting/updating 1000 entities...
2004-12-15 12:07:37 Write SQL time: 00:01:32

As one can see, the Write SQL time increases with every batch. I would like this time to stay around one minute per batch. There is one trigger per table. There is one parent table which has a primary-foreign key relationship to the three sub tables. I have 2% automatic file size growth set on both the data and the log file. Thank you in advance to the guru who helps me out with this!
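One hedged observation on the 2% growth setting mentioned above: percentage autogrow means ever-more-frequent growth pauses while the batches are writing, and on SQL Server 2000 every growth also zeroes the new space. A fixed increment is the usual alternative (hypothetical database and logical file names):

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 100MB);
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 100MB);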
Hi, I have a large SQL Server 2000 database with 3 core tables.

Table A: 10 million + records
Table B: 2 million + records
Table C: 6 million + records

One of the batch tasks that I have to perform firstly builds a list of all keys for records from each of the three tables that I need to process (along with a flag to tell me which table the key is from). This list is populated into a table variable. I then loop through the table variable to process all the records with the particular key value. The updates are run in order of the tables: Table A first, B next and finally C. The table variable will typically hold 3000 keys.

My problem is this: the processing of the key records from Table A runs well - it takes around 40 minutes, which is acceptable for the level of processing being carried out. But when I start processing the transactions for Table B, the first couple of statements execute successfully, then the subsequent statements take a long time (in some cases hours) to complete. The format of the statements for all tables is virtually the same and the tables have been indexed appropriately. The thing is that if I alter the stored proc to only process records from Table B or Table C, the procedure flies through and processes the records in a flash: 1-2 minutes.

Can anyone suggest what might be the issue here? I have read many posts but can't seem to find the solution. Should I break up my processing so that it processes each table individually? I've tried running the Profiler, but it doesn't provide me with much in the way of solutions.

Regards,
Ian
I have an SP that is big, huge, 700-800 lines. I am not an expert, but I need to figure out every possible way that I can improve the performance speed of this SP. In the next couple of weeks I will work on preparing SQL statements that will create the tables, insert sample records and run the SP. I would hope people will look at my SP and give me any hints on how I can better write the SP.

In the meantime, after looking at the SP briefly, my first observations are:

1- use SET NOCOUNT ON
2- avoid using UNION statements
3- use WITH (NOLOCK) with each SELECT statement
4- avoid using nested SELECT statements
5- use #temp tables
6- avoid aliasing tables in SELECT statements, for example SELECT * FROM tblClients C

Am I correct to consider the above 6 points as valid in terms of causing performance problems? I would appreciate any comments/help. Thank you very much.
Hello, I have a question regarding stored procedure design that provides the optimal performance. Let's say we have a table Products that consists of three columns: Name, Status, RegistrationTime. All columns are indexed and users should be able to look up data by any of the columns. We have two main options to design stored procedures for data retrieval:

1. Design separate stored procedures for each search criterion: LookupProductsByName, LookupProductsByStatus, LookupProductsByTime.

2. Write a generic stored procedure that will fit any search criteria:

CREATE PROCEDURE GetProducts (
    @Name varchar(20) = NULL,
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL
)
AS BEGIN
    SELECT [Name], [Status], [RegistrationTime]
    FROM [Products]
    WHERE [Name] = CASE WHEN @Name IS NOT NULL THEN @Name ELSE [Name] END
      AND [Status] = CASE WHEN @Status IS NOT NULL THEN @Status ELSE [Status] END
      AND [RegistrationTime] >= CASE WHEN @FromTime IS NOT NULL THEN @FromTime ELSE [RegistrationTime] END
      AND [RegistrationTime] <= CASE WHEN @ToTime IS NOT NULL THEN @ToTime ELSE [RegistrationTime] END
    ORDER BY [RegistrationTime]
END;

The second option is very attractive, because it is obviously easier to maintain such code. However, I am a little concerned about the performance of such a stored procedure. It is not possible to foresee what index should be used; an index can only be selected during procedure execution, because the search criteria can include Name, Status or RegistrationTime. Will this make the SP inefficient? Or is the performance difference in such a case not big (if any), and should we choose the second option because of its significant code reduction?

Thanks in advance,
Vagif Abilov
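For what it is worth, a hedged middle ground between the two options on SQL Server 2005 and later: keep the single procedure but let the optimizer re-plan per call, so each search criterion can still use its own index (the CASE trick above tends to force a scan either way). The procedure name is hypothetical:

CREATE PROCEDURE GetProductsFlexible (
    @Name varchar(20) = NULL,
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL
)
AS BEGIN
    SELECT [Name], [Status], [RegistrationTime]
    FROM [Products]
    WHERE (@Name IS NULL OR [Name] = @Name)
      AND (@Status IS NULL OR [Status] = @Status)
      AND (@FromTime IS NULL OR [RegistrationTime] >= @FromTime)
      AND (@ToTime IS NULL OR [RegistrationTime] <= @ToTime)
    ORDER BY [RegistrationTime]
    OPTION (RECOMPILE);   -- fresh plan for the parameters actually supplied
END;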
Dear all, I have the following stored procedure that takes around 1:15 minutes to finish execution against SQL Server 2005. The table RecordedCalls contains 9,369,907 records; the other tables used in the joins will not contain more than 15 records each. The table Lookups contains about 200 records, so that's why I put the records I want from it in a temp table (#tempLookups) before joining it with the table RecordedCalls. I have a clustered index (primary key) on the column ID in table RecordedCalls and non-clustered indexes on the columns that are used in the WHERE statement and the GROUP BY field (CallType). I can't remove any join with other tables or any condition in the WHERE statement, since it is very dynamic and concatenated from another stored procedure, and I can't remove the DISTINCT word. I found that using temp tables to hold intermediate results and then joining on them is more efficient than using derived tables.
Are there any ideas to enhance the performance of this stored procedure, or is the code below already optimal? Here is the stored procedure:
DECLARE @max int
SELECT @max = MAX(RecordedCalls.ID) FROM RecordedCalls
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount,RecordedCalls.CallType as CallsType
FROM Servers INNER JOIN RecordedCalls ON Servers.Name = RecordedCalls.ServerName
LEFT OUTER JOIN Tags INNER JOIN RecordedCallsTags ON Tags.ID = RecordedCallsTags.TagID
ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= @max
AND (RecordedCalls.CallDate BETWEEN CAST ('01 Jan 1910 00:00:00:000' AS DATETIME ) AND CAST('01 Jan 2210 00:00:00:000' AS DATETIME ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.AgentID NOT IN('1000010000')
AND RecordedCalls.IsDeleted='FALSE'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount, CASE('English')
WHEN 'Arabic' THEN #tempLookups.NameA
ELSE #tempLookups.NameE
END AS CallsType FROM
#tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
We are using a stored procedure with three parameters to query two joined tables. The query, when run outside of the stored procedure in Query Analyzer, takes less than 1 second to run. The same query inside the stored procedure, run in Query Analyzer, takes 5 to 30 seconds, which is extremely long.
We have looked at parameter sniffing as the possible issue and set the parameters as local variables and tried using sp_executesql to resolve parameter sniffing. These did not work.
I copied the database to a test server with no other users logged on except for me. I ran the same stored procedure multiple times. The first run was in less than 1 second. All subsequent runs took 5 to 30 seconds. No other users were on the box; therefore, there were no file lock contentions.
We are at a loss as to why this is happening. Any advice is greatly appreciated. Our code is below:
ALTER PROCEDURE [dbo].[spBoxCheck]
    @FacilityID int,
    @CustID int,
    @OrdNumber nvarchar(64)
AS
SET NOCOUNT ON

SELECT
    dbo.PackHeader.BoxID,
    dbo.OrdHeader.partnerOrderNumber,
    dbo.PackLine.LineID,
    dbo.PackLine.QuantityPacked,
    dbo.PackHeader.Validated
FROM dbo.PackHeader
LEFT OUTER JOIN dbo.OrdHeader
    ON dbo.PackHeader.IntOrderNumber = dbo.OrdHeader.IntOrderNumber
LEFT OUTER JOIN dbo.PackLine
    ON dbo.PackHeader.BoxID = dbo.PackLine.BoxID
WHERE (OrdHeader.partnerOrderNumber = @OrdNumber)
  AND (OrdHeader.CustomerID = @CustID)
  AND (PackHeader.ProdFacilityID = @FacilityID)
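One more hedged isolation step, since local variables and sp_executesql did not change anything: clear the plan cache on the test box and time the runs again. If the first post-flush run is fast and later runs are slow, the cached plan is implicated; if all runs stay slow, look elsewhere (statistics, SET options). Test server only, with hypothetical parameter values:

DBCC FREEPROCCACHE;   -- drops every cached plan on the instance
EXEC dbo.spBoxCheck
    @FacilityID = 1,
    @CustID = 42,
    @OrdNumber = N'TEST-ORDER-1';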
Here's a question that bugs me on SQL stored procedures.
I've a stored procedure which takes 1 sec to execute when it is hit by a single user.
When the same stored procedure is accessed concurrently in a multi-user environment with different inputs to the SP, it takes 5-6 secs to execute.
I'm totally confused about what might hold up the resources, since SQL Server is supposed to handle multiple concurrent users (correct me if I'm wrong).
It would be great if you let me know the actual reason behind this.
Hi, I think I remember reading somewhere that excessive comments inside a stored procedure can have a negative impact on performance. Does anyone know if this is true and can provide an explanation?
I need to improve the performance of a stored procedure and would like to give some information about it.
First of all, the SP is very big, and it has some parameters like @Select, @Where, etc. which may vary, so it has the WITH RECOMPILE option. It uses the PATINDEX, SUBSTRING and REPLACE functions heavily. Other things seem to be fine.
One more point I need assistance with: if I declare a variable @Month varchar(20) and then set @Month = MONTH(TradeDt), the MONTH function returns an int (so an int-to-varchar conversion happens here). I also set another variable @Var1 = '0' + @Month, which is the reason I declared @Month as varchar. Is this option OK, or shall I go for @Month int, with @Month = MONTH(TradeDt) and @Var1 = '0' + CAST(@Month AS varchar)?
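A minimal sketch of the two variants being compared, assuming @TradeDt holds the date in question; both work, and the conversion cost is negligible either way, so the int version is mostly a matter of declaring the truthful type:

DECLARE @TradeDt datetime; SET @TradeDt = GETDATE();   -- stand-in for the real column

-- Variant 1: varchar month, implicit int-to-varchar conversion on assignment
DECLARE @Month varchar(20), @Var1 varchar(21);
SET @Month = MONTH(@TradeDt);      -- MONTH() returns int; converted implicitly
SET @Var1 = '0' + @Month;

-- Variant 2: int month, explicit cast at concatenation time
DECLARE @MonthInt int;
SET @MonthInt = MONTH(@TradeDt);
SET @Var1 = '0' + CAST(@MonthInt AS varchar(2));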
Any help or suggestion would be greatly appreciated.
I am not sure which of the forums this question should be posted to (Engine or TSQL) so I have posted to both. Please forgive the cross-post.
The problem we are having is with the performance of a stored procedure. Normally, this specific proc will only take seconds to run, but if we modify the stored proc in any way, it will take at least 20 minutes to run (we don't really know when it would finish because we cut it off at 20 minutes).
When this happens, we can run the profiler and see that a lock and unlock is happening on each record that the query is returning. We are at a loss as to why this is happening.
The only way to fix it (get the proc run time back down to seconds) is to run the Database Engine Tuning Advisor. We don't even need to apply any of the recommended changes; we just need to run it, and the proc performance goes back down to mere seconds.
If we run the profiler again when the proc performs normally, we don't see any locking going on.
We can consistently repeat these steps.
Just so we aren't jumping the gun, we have re-written and dissected the proc 8 ways to Sunday and it still does the same thing.
Anyone else see this and can offer any suggestions?
Has anyone encountered cases in which a proc executed by DTS behaves as follows: 1) it underperforms the same proc executed in SQL Server Management Studio, and 2) it underperforms an ad-hoc version of the same query (UPDATE) executed in SQL Server Management Studio?
What could explain this?
Obviously, all three scenarios are executed against the same database and hit the exact same tables and indices.
Query plans show that one step, a Clustered Index Seek, consumes most of the resources (57%); for that step the estimated rows = 1, while the actual rows are tens of thousands of times higher (~23,000).
The DTS execution effectively never finishes, even after many hours (10+). The stored procedure execution finishes in 6 minutes (executed after the ad-hoc update query). The ad-hoc update query finishes in 2 minutes.
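Given the estimated-1 vs ~23,000-actual gap on that Clustered Index Seek, stale or missing statistics on the underlying table are worth ruling out before anything else (the table name below is hypothetical):

UPDATE STATISTICS dbo.TargetTable WITH FULLSCAN;   -- then re-check estimated vs actual rows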
I have a stored procedure which runs in about 30-40 seconds most of the time; however, sometimes it takes over an hour to complete. The result set is the same for both execution times. There doesn't appear to be any other significant resource hogging on the server during execution (SQL Server does use 99% CPU while it runs, though). The procedure itself is based on 2 views; these views are in turn based on several views and some base tables, and so forth.
Does anyone have any ideas on how to narrow the problem down a little more?
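One hedged way to narrow it: while a slow run is in flight, check from a second connection what the session is waiting on; that usually separates CPU, blocking and I/O at a glance. The SPID below is a hypothetical placeholder for whatever the slow session reports:

SELECT spid, status, blocked, lastwaittype, waittime, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 55;   -- hypothetical SPID of the long-running execution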
We have an application that is based on several extended stored procedures. When we run our application in house, or when most other customers run it, they see performance of about X transactions per second. One customer is seeing performance of about X/5, and I'm having a hard time troubleshooting it. The performance bottleneck has been narrowed to the execution of the extended stored procedures. Does anyone know of tunable SQL Server parameters that may specifically affect the performance of extended stored procedures? I know the procedures get run by a scheduler. Is there some way the priority or frequency of the scheduler can be modified? Thanks for any advice.
For my organization, I wrote a stored procedure to generate invoices for all of our clients using an audit table. Any time changes are made to the data in a table in our database, corresponding records are added to our audit table. Using this audit table we can recreate any table in the database as it was on a particular day. To generate the invoices, I first use the audit table to generate a cached view of the information. After I have a cached view, I insert records into an invoices table using various filters in my select statements.
When I run the T-SQL directly from Query Analyzer, it takes about 1 minute to generate all of the invoices. However, when I run the same T-SQL as a stored procedure, it takes roughly 14 minutes to complete. Thinking there was a problem with too much parallelism, I restricted MAXDOP to 2 (as the server has 2 physical processors). However, this did not reduce the execution time at all. Next, I tried using the SQL Profiler to watch the database while I ran the query. I checked, and I did not see any unneeded recompilation. Oddly, however, I noticed that the stored procedure required well over 2 million reads compared to only 400,000 reads using the straight T-SQL. I am at a loss for how to make my stored procedure run as efficiently as the straight T-SQL code. If anyone has anything for me to try or any suggestions, they would be greatly appreciated.
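Since the read counts already diverge (2 million vs 400,000), a hedged next step is comparing the two forms under SET STATISTICS IO in the same window, so the per-table logical reads line up side by side and show which table the extra reads come from. The procedure name is hypothetical:

SET STATISTICS IO ON;
EXEC dbo.GenerateInvoices;   -- hypothetical procedure name
-- then run the equivalent straight T-SQL script here and compare the per-table output
SET STATISTICS IO OFF;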
I know that SQL Server itself does not support passing arrays to its procedures. But I need an alternative that will allow me to "duplicate" the same functionality.
I have the following information stored in a Class:
UserInfo: where I keep variables 1-12
PageInfo: where I keep variable 13 (the list of pages)

I need to store this information in my SQL Server database. So far I have found three possible methods, but I'm not sure which one performs best:
First method: the easy one.
1. Call the SaveUserInfo stored procedure.
2. Loop through the pages array and call the SavePageInfo stored procedure for each page item in the array.

Second method: passing a delimited string to the stored procedure.
1. Call the SaveUserInfo stored procedure.
2. Pass a delimited string to the SavePageInfo stored procedure. The stored procedure will split the string and save the pages into the database. The string would look like this:

Third method: passing an XML file to the stored procedure. The stored procedure will read the XML file and store the information into the database.
Which method is best for performance? As you may see, this is for tracking user navigation through the website. For the first method I worry about the number of calls to the database; for the second and third methods I worry about the length of the string or XML file to pass to the stored procedure.
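For reference, a minimal sketch of the second method in SQL Server 2000-era syntax, assuming a pipe-delimited page list and a hypothetical dbo.UserPages table:

CREATE PROCEDURE dbo.SavePageInfo
    @UserID int,
    @PageList varchar(8000)    -- e.g. '12|7|33|90'
AS
SET NOCOUNT ON
DECLARE @pos int, @page varchar(20)
WHILE LEN(@PageList) > 0
BEGIN
    SET @pos = CHARINDEX('|', @PageList)
    IF @pos = 0
    BEGIN
        SET @page = @PageList
        SET @PageList = ''
    END
    ELSE
    BEGIN
        SET @page = LEFT(@PageList, @pos - 1)
        SET @PageList = SUBSTRING(@PageList, @pos + 1, 8000)
    END
    IF LEN(@page) > 0
        INSERT INTO dbo.UserPages (UserID, PageID)   -- hypothetical table
        VALUES (@UserID, CAST(@page AS int))
END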
When I start my stored procedure from Query Analyzer, it finishes in 1 hour. I have created a job consisting of one step - the same procedure. If I start the job from Enterprise Manager, it finishes after some 10 hours. What can I do to get the same performance?
Hi group,

I have a select statement that if run against a 1 million record database directly in Query Analyzer takes less than 1 second. However, if I execute the select statement in a stored procedure instead, calling the stored proc from Query Analyzer, then it takes 12-17 seconds.

Here is what I execute in Query Analyzer when bypassing the stored procedure:

USE Verizon
GO
DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'
SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
       Social_Security_Number, Billing_Name, Billing_Address_1,
       Billing_Address_2, Billing_Address_3, Billing_Address_4,
       Service_Connection_Date, Disconnect_Date, Date_Final_Bill,
       Behavior_Score, Account_Group, Diconnect_Reason, Treatment_History,
       Perm_Temp, Balance_Due, Regulated_Balance_Due, Toll_Balance_Due,
       Deregulated_Balance_Due, Directory_Balance_Due, Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Here is what I execute in Query Analyzer when calling the stored procedure:

DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'
EXEC Verizon.dbo.baddebt_phonelookup @phonenumber

Here is the script that created the stored procedure itself:

CREATE PROCEDURE dbo.baddebt_phonelookup @phonenumber varchar(15)
AS
SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
       Social_Security_Number, Billing_Name, Billing_Address_1,
       Billing_Address_2, Billing_Address_3, Billing_Address_4,
       Service_Connection_Date, Disconnect_Date, Date_Final_Bill,
       Behavior_Score, Account_Group, Diconnect_Reason, Treatment_History,
       Perm_Temp, Balance_Due, Regulated_Balance_Due, Toll_Balance_Due,
       Deregulated_Balance_Due, Directory_Balance_Due, Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Using SQL Profiler, I also have the execution trees for each of these two different ways of running the same query.

Here is the execution tree when running the whole query in the analyzer, bypassing the stored procedure:

Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
  |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Verizon].[dbo].[BadDebt]))
       |--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
            |--Concatenation
                 |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Index]), SEEK:([BadDebt].[Telephone_Number]=[@phonenumber]) ORDERED FORWARD)
                 |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Redef_Index]), SEEK:([BadDebt].[Telephone_Number_Redef]=[@phonenumber]) ORDERED FORWARD)

Finally, here is the execution tree when calling the stored procedure:

Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
  |--Filter(WHERE:([BadDebt].[Telephone_Number]=[@phonenumber] OR [BadDebt].[Telephone_Number_Redef]=[@phonenumber]))
       |--Compute Scalar(DEFINE:([BadDebt].[Telephone_Number_Redef]=substring(Convert([BadDebt].[Telephone_Number]), 1, 10)))
            |--Table Scan(OBJECT:([Verizon].[dbo].[BadDebt]))

Thanks for any help on my path to optimizing this query for our production environment.

Regards,
Warren Wright
Scorex Development Team