Performance Difference: Query Window vs. Stored Procedure
Oct 24, 2007
Executing the stored procedure took 45 seconds, but after copying the code to a query window and setting up variables (instead of parameters), it took 7 seconds.
In the query window, most of the processing cost (86%) is right up front in a "Distinct Sort." But when executing the stored procedure, the cost for this step is 11% and the significant costs are in later "Table Scans."
I don't know why SQL Server would choose different execution plans when the code is identical in each.
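The usual explanation for identical code getting different plans is parameter sniffing: a stored procedure's plan is compiled for the specific parameter values it is called with, while local variables in an ad-hoc batch are optimized from average density statistics. A minimal sketch of the classic workaround, copying parameters into local variables (the procedure, table and column names here are hypothetical):

-- Shield the query from the sniffed parameter value so the plan
-- is built from average statistics, as in the query window test.
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int
    SET @LocalCustomerID = @CustomerID

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID
END
GO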
Has anyone encountered cases in which a proc executed by DTS shows the following behavior: 1) it underperforms the same proc executed in SQL Server Management Studio, and 2) it underperforms an ad-hoc version of the same query (an UPDATE) executed in SQL Server Management Studio?
What could explain this?
Obviously, all three scenarios are executed against the same database and hit the exact same tables and indices.
Query plans show that one step, a Clustered Index Seek, consumes most of the resources (57%), and for that step the estimated rows = 1 while the actual rows are tens of thousands of times higher (~23,000).
The DTS execution effectively never finishes, even after many hours (10+). The stored procedure execution finishes in 6 minutes (executed after the ad-hoc UPDATE query). The ad-hoc UPDATE query finishes in 2 minutes.
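An estimate of 1 row against an actual of ~23,000 usually points at stale or missing statistics on the seek column. A hedged first step, with placeholder object names:

-- Rebuild statistics on the table behind the bad estimate, then
-- force the procedure to compile a fresh plan on its next run.
UPDATE STATISTICS dbo.TargetTable WITH FULLSCAN   -- hypothetical table
EXEC sp_recompile 'dbo.TheUpdateProc'             -- hypothetical proc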
Hi, I have a web application using a stored procedure (SP). I see that there's an SP taking a long time to execute. I tried to capture it with the Profiler tool, and found that with the same SP, on the same db, with the same parameter, the duration when executed by my web app is far bigger than the duration when executed in a SQL Server Management Studio query window.

Please see the image attached: http://kyxao.net/127/ExecutionProblem.png

Any ideas on this issue?

Thanks a lot,
Jalijack
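A commonly cited cause for exactly this app-versus-SSMS gap is that the two connections run with different SET options (SSMS switches ARITHABORT on by default; most client libraries leave it off), so each compiles its own plan. On SQL Server 2005 and later you can compare the two sessions; a sketch, where the session IDs are placeholders you would read from Profiler or sp_who2:

-- Compare session-level SET options for the app and SSMS connections
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id IN (51, 52)   -- replace with the two SPIDs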
Bit of a strange one here. We have a SQL Express instance running with one database which is accessed by a VB6 client application.
Performance between the application and the database isn't great, but bizarrely if you open a Query window from the Management Studio (against the database) the performance dramatically improves - to the extent that it is visually noticeable from within the application.
So I'm thinking that the database connections being made by the application are taking longer because of instance name resolution or the like, and that with the query window open this resolution is performed more quickly.
Has anyone come across this situation? I can re-create it on several different computers each with their own Express instance and I've not found anything about this on the net.
Here's the connection string used by the application - I have tried various permutations of values to no avail:
Hi group,

I have a SELECT statement that, if run against a 1-million-record database directly in Query Analyzer, takes less than 1 second. However, if I execute the same SELECT statement in a stored procedure instead, calling the stored proc from Query Analyzer, it takes 12-17 seconds.

Here is what I execute in Query Analyzer when bypassing the stored procedure:

USE Verizon
GO
DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'
SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
    Social_Security_Number, Billing_Name, Billing_Address_1, Billing_Address_2,
    Billing_Address_3, Billing_Address_4, Service_Connection_Date, Disconnect_Date,
    Date_Final_Bill, Behavior_Score, Account_Group, Diconnect_Reason,
    Treatment_History, Perm_Temp, Balance_Due, Regulated_Balance_Due,
    Toll_Balance_Due, Deregulated_Balance_Due, Directory_Balance_Due,
    Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Here is what I execute in Query Analyzer when calling the stored procedure:

DECLARE @phonenumber varchar(15)
SELECT @phonenumber = '6317898493'
EXEC Verizon.dbo.baddebt_phonelookup @phonenumber

Here is the script that created the stored procedure itself:

CREATE PROCEDURE dbo.baddebt_phonelookup @phonenumber varchar(15)
AS
SELECT Source_Identifier, BADD_Sequence_Number, Record_Type, BAID,
    Social_Security_Number, Billing_Name, Billing_Address_1, Billing_Address_2,
    Billing_Address_3, Billing_Address_4, Service_Connection_Date, Disconnect_Date,
    Date_Final_Bill, Behavior_Score, Account_Group, Diconnect_Reason,
    Treatment_History, Perm_Temp, Balance_Due, Regulated_Balance_Due,
    Toll_Balance_Due, Deregulated_Balance_Due, Directory_Balance_Due,
    Other_Category_Balance
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
RETURN
GO

Using SQL Profiler, I also have the execution trees for each of these two different ways of running the same query.

Here is the execution tree when running the whole query in the analyzer, bypassing the stored procedure:

Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
|--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([Verizon].[dbo].[BadDebt]))
   |--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
      |--Concatenation
         |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Index]), SEEK:([BadDebt].[Telephone_Number]=[@phonenumber]) ORDERED FORWARD)
         |--Index Seek(OBJECT:([Verizon].[dbo].[BadDebt].[Telephone_Redef_Index]), SEEK:([BadDebt].[Telephone_Number_Redef]=[@phonenumber]) ORDERED FORWARD)

Finally, here is the execution tree when calling the stored procedure:

Sort(ORDER BY:([BadDebt].[Service_Connection_Date] DESC))
|--Filter(WHERE:([BadDebt].[Telephone_Number]=[@phonenumber] OR [BadDebt].[Telephone_Number_Redef]=[@phonenumber]))
   |--Compute Scalar(DEFINE:([BadDebt].[Telephone_Number_Redef]=substring(Convert([BadDebt].[Telephone_Number]), 1, 10)))
      |--Table Scan(OBJECT:([Verizon].[dbo].[BadDebt]))

Thanks for any help on my path to optimizing this query for our production environment.

Regards,
Warren Wright
Scorex Development Team
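One detail in the two trees worth chasing: the fast plan seeks Telephone_Redef_Index, while the slow plan recomputes Telephone_Number_Redef with a Compute Scalar and falls back to a table scan, which is what happens when a plan cannot use an index on a computed column. Using such an index requires settings like QUOTED_IDENTIFIER and ARITHABORT to be ON when the procedure's plan is compiled, so one hedged experiment is to recreate the procedure under those settings (column list abbreviated here for brevity):

SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
GO
DROP PROCEDURE dbo.baddebt_phonelookup
GO
CREATE PROCEDURE dbo.baddebt_phonelookup @phonenumber varchar(15)
AS
-- same body as above, shortened for this sketch
SELECT Source_Identifier, Billing_Name, Service_Connection_Date
FROM BadDebt
WHERE (Telephone_Number = @phonenumber) OR (Telephone_Number_Redef = @phonenumber)
ORDER BY Service_Connection_Date DESC
GO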
I have also posted this in microsoft.public.sqlserver.programming.
I have a query which, depending on where I run it from, will either take 10 milliseconds or 10 seconds.
The query works perfectly when run in SQL Server Management Studio... in my database of around 70,000 items it returns the results in around 10ms. It uses all my indexes and indexed views correctly.
However when I run the identical query from my ASP.NET application, it takes around 10 seconds... 1000 times longer. Looking at it in Sql Server Profiler I can't see any difference in the query, except from ASP.NET it needs 62531 reads and from SSMS it needs only 318 reads. If I copy the slow running ASP.NET query from the profiler into SSMS, then it runs quick again. The results returned are the same.
I have provided more details of the query below, but I guess my real question is: What is the best way to debug this? I'm not an expert with SQL Server, so any pointers on where I should start looking to find the difference in how the query is being executed would be a great help.
The query is of the form:
WITH RowPost AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY DateCreated DESC) AS Row,
        ItemId, Title, ....
    FROM Items_View WITH (NOEXPAND)
    WHERE ItemX >= @minX AND ItemX <= @maxX
      AND ItemY >= @minY AND ItemY <= @maxY
)
SELECT *, (SELECT COUNT(*) FROM RowPost) AS [Count]
FROM RowPost
WHERE Row >= @minRow AND Row < @maxRow
Where Items_View is an indexed view, and WITH(NOEXPAND) is being used to force it to use the indexed view (this is optimal). The line beginning "SELECT Count(*)" is to get the total number of results (without having to run the inner query a second time).
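For debugging this, one approach (SQL Server 2005+) is to look in the plan cache for two copies of the same statement whose set_options attribute differs; that is the signature of a SET-option mismatch (typically ARITHABORT) between ASP.NET and SSMS, which lets each connection compile and cache its own plan. A sketch:

-- Duplicate plans for the same text with different set_options
-- indicate the two callers compiled under different SET options.
SELECT cp.plan_handle, pa.value AS set_options, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%Items_View%'   -- narrow to the query in question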
This is running against SQL Server Developer Edition.
I'm having an issue with a query I'm running on Sql Server 2005. It's a semi-complex query involving an in-line table function and several left outer joins which are joined on to the results of the function call. Two of the left outer joins are then qualified in a where clause of the form where table.Col is not null; the idea is that the final result set contains data that has no match in those two tables.
The problem revolves around a WHERE clause in the function and the last left outer join (i.e., one of the ones qualified with the NULL test). When I alter the WHERE clause of the function to further restrict the result set the function returns, the query time shoots up from 1 second to roughly 2-3 minutes. Note that the time the function takes to complete is not affected; the difference is purely down to what the query does with the results the function provides. Also note that the change to the WHERE clause produces a subset of the original data; it does not add any more data (it actually restricts the original result set by roughly 1000 rows).
I can bring the query speed back down again by removing the last left outer join - this join takes one of the columns from the function, and joins it to a small table - 924 rows. So it appears that this particular join is the cause of the issue, but only when using the resultset generated from the modified function query.
Now, as the thread title alludes, SQL Server 2000 and 2005 handle this differently, or appear to. When I execute this same query on a SQL 2000 machine, there's no apparent time difference, and the data that is returned is as expected. Does anyone have any suggestions as to what might be causing this and how I can fix it? I could simply return the larger result set and use managed code to filter out the rows I don't want; however, I would like to get to the bottom of this, especially if it's going to affect future queries.
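If 2005's row estimates for the inline function are to blame, a common workaround is to materialize the function's output into a temp table first, so the later joins see real row counts and statistics. A sketch under hypothetical names:

-- Materialize the inline TVF result before joining
DECLARE @param int
SET @param = 1                          -- placeholder argument
SELECT *
INTO #funcResult
FROM dbo.MyInlineFunction(@param)       -- hypothetical function

SELECT fr.*
FROM #funcResult AS fr
LEFT OUTER JOIN dbo.SmallTable AS st ON st.KeyCol = fr.KeyCol   -- the 924-row table
WHERE st.KeyCol IS NULL                 -- keep the rows with no match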
I saved a stored procedure (see below) and I'm not seeing it listed within the 'Databases' / 'Remote_Serials' (DB name) / 'Programmability' / 'Stored Procedures' folder within the 'Object Explorer' window.
I'm trying to perform a simple add-info-from-form-into-DB process.
Is it because of the way I put together the procedure? I have a feeling that it is (I was going off a video demo, which created it within VB.NET 2008 Express, but I'm not able to connect to the DB from there; it says the file is open, even though I completely closed out of SQL Management Studio Express).
Code Snippet

ALTER PROCEDURE dbo.spInsertSerialInfo
    @EmployeeID as nchar(10),
    @First_Name as nvarchar(50),
    @Last_Name as nvarchar(50),
    @HAddress as nvarchar(50),
    @City as nvarchar(30),
    @HState as nvarchar(2),
    @Zip as nvarchar(10),
    @Phone_Home as nchar(15),
    @Phone_Cell as nchar(15),
    @Monitor1 as nvarchar(50),
    @Monitor2 as nvarchar(50),
    @PIX_ASA_Box as nvarchar(50),
    @System_Case as nvarchar(50),
    @Batt_APC as nvarchar(50),
    @current_count as int OUTPUT
If I change the 'CommandType.StoredProcedure' to 'CommandType.Text' it seems to find it, but then I get another error saying that the @EmployeeID variable has already been declared and that I need to use a unique variable.
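One thing to check first: ALTER PROCEDURE only modifies a procedure that already exists in the database; if it was never created there, the script fails and nothing shows up under Programmability. A quick sketch:

-- First-time creation must use CREATE, not ALTER
CREATE PROCEDURE dbo.spInsertSerialInfo
    @EmployeeID nchar(10),
    @current_count int OUTPUT   -- remaining parameters omitted in this sketch
AS
BEGIN
    SET NOCOUNT ON
    -- insert logic goes here
    SET @current_count = 0
END
GO

-- Confirm the procedure now exists (SQL Server 2005+)
SELECT name FROM sys.procedures WHERE name = 'spInsertSerialInfo'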
I created a stored procedure to update date differences in a table. In this table I have columns dt1, dt2, dt3... and diff1, diff2... I want to find the difference between dt2 and dt1, and between dt4 and dt3, and put each in a separate column.

When I compiled the stored procedure, it did not show any error. But when I execute it, it shows the error:
Conversion failed when converting datetime from character string.
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER procedure [dbo].[autopost1]
as
begin
    declare @inner int
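Since the error complains about converting a character string to datetime, the safest fix is usually to keep everything in datetime and compute the differences with DATEDIFF, which involves no string conversion at all. A minimal sketch, assuming the dt and diff columns live in one table with a hypothetical name:

-- Day differences computed directly between datetime columns
UPDATE dbo.DateTable      -- hypothetical table name
SET diff1 = DATEDIFF(day, dt1, dt2),
    diff2 = DATEDIFF(day, dt3, dt4)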
I have a Stored Procedure (SP) that creates the data required for a report that I show on a web page. The SP does all the work and just returns back a result set that I dump in an ASP.NET DataGrid. The SP takes a product area and a start and end date as parameters.

Here are the basics of the SP:

1. Create a temp table to store the report results; all columns that will be needed are created at this point.
2. Select products and general product data into the temp table.
3. Create a cursor that loops through all the products in the temp table, running a more complex query with each individual product.
4. The results of that query are updated on the temp table based on the current product of the cursor.
5. A complex "totals" query is run and the results from that are inserted into the temp table as the last 3 rows.

In all we are talking about 120 rows in the temp table with 8 columns that are mostly numbers.

I originally wrote this report SP about a month ago and it worked fine, running in about 10-20 seconds based on server traffic and the amount of data in the temp table. For the example I'm running there are the 120 products.

Just yesterday the SP started timing out, and when I ran the SP manually from Query Analyzer (QA) (exec SP_NAME ...) with the same parameters as it was getting in the code, it took 6 minutes to complete. I was floored. I immediately copied the SQL out of the SP, pasted it into another QA window, changed the variables to be hard-coded values and ran it. It completed in 10 seconds.

I'm really confused now. I ran a Profiler on the two when I ran them again. The SQL code in QA executed again in ~10 seconds with 65,000 reads. When the SP finished some 6 minutes later it had completed with the right results, but it needed 150,000,000 reads to do its job.

How can the exact same SQL code produce such different results (time, disk reads) based on whether it's in a SP or just run from QA, but still give me the exact same output? The reports both look correct and have the same number of rows.

I asked my Sys Admin if he had done anything to anything and he said no.

I've been reading about recompiles and temp table indexes and all kinds of other stuff that could possibly be affecting it but have gotten nowhere.

Any ideas are appreciated.
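One way to compare the two executions on equal footing is to capture reads and CPU for both from the same QA session, so connection settings are identical; a sketch with a placeholder procedure signature:

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Run the procedure and the pasted SQL back-to-back in this window
EXEC dbo.rptProductReport @Area = 'X', @StartDate = '20040101', @EndDate = '20040131'  -- hypothetical

SET STATISTICS IO OFF
SET STATISTICS TIME OFF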
Created a report that displays the Maximum Response time (example value: 00:00:00), which is pulled directly from the stored proc. When I run the report, the column displays blank values. I am not sure if I should add a conversion to the Response value in the report.
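If the proc returns the response time as a datetime value, the report field may need a plain string; converting with style 108 renders just the time portion. A sketch, assuming a datetime column named MaxResponseTime:

-- Style 108 formats a datetime as 'hh:mm:ss'
SELECT CONVERT(varchar(8), MaxResponseTime, 108) AS MaxResponseTime
FROM dbo.ReportSource   -- hypothetical source table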
I have a very complex stored procedure called by a job that is scheduled to run every night. Its execution sometimes takes 1 or 2 hours and sometimes 7 hours or more.
So, if it is running for more than 4 hours I stop the Job and I run the procedure from a Query Window and it never takes more than 2 hours.
Can anyone help me identify the problem? I want to run it from the job and not have to worry about it.
Some more information: - It is SQL 2000 Enterprise with SP4 in a cluster (it happens the same way on any node). - The SQL Server and SQL Agent services run using a domain account that has full administrative access. - When I connect to a query window I also use a Windows account.
- There are no locks and no processes blocking or being blocked while the job is running. - Using Task Manager, processor activity is OK: no more than 30% on any processor.
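Since a SQL Agent session can run with different SET options than a query window, and a different plan can be compiled for each, one low-risk experiment is to pin the options explicitly at the top of the job step so they match the interactive session; a sketch with a placeholder procedure name:

-- Align the job step's session options with a query window
SET ARITHABORT ON
SET ANSI_NULLS ON
SET ANSI_WARNINGS ON
SET QUOTED_IDENTIFIER ON

EXEC dbo.NightlyComplexProc   -- hypothetical name of the scheduled proc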
I have created a procedure that is about 500 lines long.

It is actually a controller procedure which calls other procedures and functions to generate data for a report. This procedure takes about 3 minutes to generate the result set. I am not using any temp tables; I am using table variables.

My procedure does not recompile. My procedure also has some INSERT INTO ... EXEC statements.

My question is: will performance increase if I split the stored procedure into 2, 3 or 4 parts?
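Splitting the procedure rarely helps by itself. A more likely lever: table variables carry no statistics, so the optimizer assumes roughly one row in them, which can wreck join choices at report volumes; temp tables do get statistics and can be indexed. A sketch of the swap (column names are hypothetical):

-- Table variable: no statistics, optimizer assumes ~1 row
DECLARE @work TABLE (ID int PRIMARY KEY, Amount money)

-- Temp table: real statistics plus optional indexes
CREATE TABLE #work (ID int PRIMARY KEY, Amount money)
CREATE INDEX IX_work_Amount ON #work (Amount)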
Given that a stored procedure and T-SQL code in query analyzer are exactly the same, why would the stored procedure run much slower?
By much slower I mean 3 seconds for the code in Query Analyzer as opposed to 2 minutes 33 seconds for the stored procedure.
Exact same code!
Profiler also shows more reads and writes for the stored procedure, and a lot of BatchStarted and BatchCompleted events between the 'start' and 'end' of the stored procedure.
I am writing an ASP-based application that creates a dynamic query, executes it and displays the results. I was thinking about writing a stored procedure to increase performance. How much can the SP help me boost query response time?
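A stored procedure helps mostly by giving SQL Server a parameterized, reusable plan; you can get much of that same benefit from ASP by sending the dynamic query through sp_executesql with parameters rather than concatenating literal values into the string. A sketch with hypothetical names:

-- Parameterized dynamic SQL: the plan is cached and reused across
-- calls with different values, unlike a string built with literals.
DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT ProductID, Name FROM dbo.Products WHERE CategoryID = @cat'
EXEC sp_executesql @sql, N'@cat int', @cat = 7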
Hi, how should I monitor the performance of stored procedures and SQL statements? I want to know how much CPU time a query or stored procedure is taking.

Are there any system tables which give this information?
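For a single run, SET STATISTICS TIME reports CPU and elapsed time in the current session; on SQL Server 2005 and later, sys.dm_exec_query_stats aggregates CPU across all executions of each cached plan. A sketch of both (the procedure name is a placeholder):

-- Per-statement CPU and elapsed time for this session
SET STATISTICS TIME ON
EXEC dbo.SomeProc
SET STATISTICS TIME OFF

-- Top CPU consumers in the plan cache (SQL Server 2005+)
SELECT TOP 10 qs.total_worker_time, qs.execution_count, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC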
My application fetches a batch of data through a web service and writes 1000 entities per batch to a SQL Server 2000 database. There are 4 tables in every batch. The following number of SQL commands are executed on average per batch:

Table #1: always 1
Table #2: 5
Table #3: 5
Table #4: 3

The problem is that the performance slows down with every batch. Below is an excerpt from my log file:

2004-12-15 12:00:01 Starting job... (RAM usage: 6,38 mb)
2004-12-15 12:00:39 data fetch time: 00:00:28 (RAM usage: 23,04 mb)
2004-12-15 12:00:39 Total data fetch time: 00:00:37 (RAM usage: 23,04 mb)
2004-12-15 12:00:39 Inserting/updating 1000 entities...
2004-12-15 12:01:20 Write SQL time: 00:00:40
2004-12-15 12:01:49 data fetch time: 00:00:24 (RAM usage: 26,87 mb)
2004-12-15 12:01:49 Total data fetch time: 00:00:29 (RAM usage: 26,87 mb)
2004-12-15 12:01:49 Inserting/updating 1000 entities...
2004-12-15 12:02:59 Write SQL time: 00:01:10
2004-12-15 12:04:06 data fetch time: 00:00:29 (RAM usage: 27,48 mb)
2004-12-15 12:04:06 Total data fetch time: 00:01:06 (RAM usage: 27,48 mb)
2004-12-15 12:04:06 Inserting/updating 1000 entities...
2004-12-15 12:05:30 Write SQL time: 00:01:23
2004-12-15 12:06:05 data fetch time: 00:00:31 (RAM usage: 27,03 mb)
2004-12-15 12:06:05 Total data fetch time: 00:00:35 (RAM usage: 27,03 mb)
2004-12-15 12:06:05 Inserting/updating 1000 entities...
2004-12-15 12:07:37 Write SQL time: 00:01:32

As one can see, the Write SQL time increases with every batch. I would like this time to stay around one minute per batch.

There is one trigger per table. There is one parent table which has a primary-foreign key relationship to the three sub-tables. I have 2% automatic file size growth set on both the data and the log file.

Thank you in advance to the guru who helps me out with this!
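One pattern consistent with write time climbing batch after batch is file autogrowth: at 2% growth, every growth event is small early on, so growths fire constantly and each one stalls the inserts. A hedged sketch of switching to fixed-size growth (database and logical file names are placeholders):

-- Grow in fixed 100 MB steps instead of 2% increments
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 100MB)
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 100MB)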
Hi,

I have a large SQL Server 2000 database with 3 core tables:

Table A: 10 million + records
Table B: 2 million + records
Table C: 6 million + records

One of the batch tasks that I have to perform firstly builds a list of all keys for records from each of the three tables that I need to process (along with a flag to tell me which table the key is from). This list is populated into a table variable. I then loop through the table variable to process all the records with the particular key value.

The updates are run in order of the tables: Table A first, B next and finally C. The table variable will typically hold 3000 keys.

My problem is this: the processing of the key records from Table A runs well - it takes around 40 minutes, which is acceptable for the level of processing being carried out. But when I start processing the transactions for Table B, the first couple of statements execute successfully, and then the subsequent statements take a long time (in some cases hours) to complete. The format of the statements for all tables is virtually the same and the tables have been indexed appropriately.

The thing is that if I alter the stored proc to only process records from Table B or Table C, the procedure flies through and processes the records in a flash - 1-2 minutes.

Can anyone suggest what might be the issue here? I have read many posts but can't seem to find the solution. Should I break up my processing so that it processes each table individually? I've tried running the Profiler but it doesn't provide me with much in the way of solutions.

Regards,
Ian
I have an SP that is big - huge, 700-800 lines. I am not an expert, but I need to figure out every possible way that I can improve the performance speed of this SP. In the next couple of weeks I will work on preparing SQL statements that will create the tables, insert sample records and run the SP. I would hope people will look at my SP and give me any hints on how I can better write the SP.

In the meantime, after looking at the SP briefly, my first observations are:

1. use SET NOCOUNT ON
2. avoid using UNION statements
3. use WITH (NOLOCK) with each SELECT statement
4. avoid using nested SELECT statements
5. use #temp tables
6. avoid renaming tables in SELECT statements, for example SELECT * FROM tblClients C

Am I correct to consider the above 6 points as valid things in terms of causing performance problems?

I would appreciate any comments/help. Thank you very much.
Hello,

I have a question regarding stored procedure design that provides optimal performance. Let's say we have a table Products that consists of three columns: Name, Status, RegistrationTime. All columns are indexed and users should be able to look up data by any of the columns. We have two main options to design stored procedures for data retrieval:

1. Design separate stored procedures for each search criterion: LookupProductsByName, LookupProductsByStatus, LookupProductsByTime.

2. Write a generic stored procedure that will fit any search criteria:

CREATE PROCEDURE GetProducts (
    @Name varchar(20),
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL
)
AS BEGIN
    SELECT [Name], [Status], [RegistrationTime]
    FROM [Products]
    WHERE [Name] = CASE WHEN @Name IS NOT NULL THEN @Name ELSE [Name] END
      AND [Status] = CASE WHEN @Status IS NOT NULL THEN @Status ELSE [Status] END
      AND [RegistrationTime] >= CASE WHEN @FromTime IS NOT NULL THEN @FromTime ELSE [RegistrationTime] END
      AND [RegistrationTime] <= CASE WHEN @ToTime IS NOT NULL THEN @ToTime ELSE [RegistrationTime] END
    ORDER BY [RegistrationTime]
END;

The second option is very attractive, because it is obviously easier to maintain such code. However, I am a little concerned about the performance of such a stored procedure. It is not possible to foresee which index should be used; an index can only be selected during each procedure execution, because the search criteria can include either Name, Status or RegistrationTime. Will this make the SP inefficient? Or is the performance difference in such a case not big (if any), so we should choose the second option because of its significant code reduction?

Thanks in advance
Vagif Abilov
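The concern is justified: a single cached plan must serve every parameter combination, so it generally cannot pick the best index for each search. A common middle ground on SQL Server 2005 is to keep the generic procedure but pay a compile per call, so each combination gets its own optimized plan; a hedged sketch:

ALTER PROCEDURE GetProducts (
    @Name varchar(20) = NULL,
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL
)
AS BEGIN
    -- Short-circuit NULL parameters; OPTION (RECOMPILE) builds a
    -- fresh plan per call, optimized for the supplied criteria.
    SELECT [Name], [Status], [RegistrationTime]
    FROM [Products]
    WHERE (@Name IS NULL OR [Name] = @Name)
      AND (@Status IS NULL OR [Status] = @Status)
      AND (@FromTime IS NULL OR [RegistrationTime] >= @FromTime)
      AND (@ToTime IS NULL OR [RegistrationTime] <= @ToTime)
    ORDER BY [RegistrationTime]
    OPTION (RECOMPILE)   -- SQL Server 2005+
END;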
Dear all, I have the following stored procedure that takes around 1 minute 15 seconds to finish execution against SQL Server 2005. The table RecordedCalls contains 9,369,907 records; the other tables used in the join will not contain more than 15 records each. The table lookups contains around 200 records, so that's why I put the records I want from it in a temp table (#tempLookups) before joining it with the table RecordedCalls. I have a clustered index (primary key) on the column ID in table RecordedCalls and non-clustered indexes on the columns that are used in the WHERE statement and the GROUP BY field (CallType). I can't remove any join with other tables or any condition in the WHERE statement, since it is very dynamic and concatenated from another stored procedure, and I can't remove the DISTINCT word. I found that using temp tables to put the results in, and then joining on them, is more efficient than using derived tables.

Are there any ideas to enhance the performance of this stored procedure? Or is the code below optimal? Here is the stored procedure:
DECLARE @max int
SELECT @max = MAX(RecordedCalls.ID) FROM RecordedCalls
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount,RecordedCalls.CallType as CallsType
FROM Servers INNER JOIN RecordedCalls ON Servers.Name = RecordedCalls.ServerName
LEFT OUTER JOIN Tags INNER JOIN RecordedCallsTags ON Tags.ID = RecordedCallsTags.TagID
ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= @max
AND (RecordedCalls.CallDate BETWEEN CAST ('01 Jan 1910 00:00:00:000' AS DATETIME ) AND CAST('01 Jan 2210 00:00:00:000' AS DATETIME ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.AgentID NOT IN('1000010000')
AND RecordedCalls.IsDeleted='FALSE'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount,
    CASE('English')
        WHEN 'Arabic' THEN #tempLookups.NameA
        ELSE #tempLookups.NameE
    END AS CallsType
FROM #tempTable
RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
We are using a stored procedure with three parameters to query a table with two tables joined. The query when run outside of the stored procedure in Query Analyzer takes less than 1 second to run. The same query inside the stored procedure run in Query Analyzer takes 5 to 30 seconds, which is extremely long.
We have looked at parameter sniffing as the possible issue and set the parameters as local variables and tried using sp_executesql to resolve parameter sniffing. These did not work.
I copied the database to a test server with no other users logged on except for me. I ran the same stored procedure multiple times. The first run was in less than 1 second. All subsequent runs took 5 to 30 seconds. No other users were on the box; therefore, there were no file lock contentions.
We are at a loss as to why this is happening. Any advice is greatly appreciated. Our code is below:
ALTER PROCEDURE [dbo].[spBoxCheck]
    @FacilityID int,
    @CustID int,
    @OrdNumber nvarchar(64)
AS
SET NOCOUNT ON

SELECT
    dbo.PackHeader.BoxID,
    dbo.OrdHeader.partnerOrderNumber,
    dbo.PackLine.LineID,
    dbo.PackLine.QuantityPacked,
    dbo.PackHeader.Validated
FROM dbo.PackHeader
LEFT OUTER JOIN dbo.OrdHeader ON dbo.PackHeader.IntOrderNumber = dbo.OrdHeader.IntOrderNumber
LEFT OUTER JOIN dbo.PackLine ON dbo.PackHeader.BoxID = dbo.PackLine.BoxID
WHERE (OrdHeader.partnerOrderNumber = @OrdNumber)
  AND (OrdHeader.CustomerID = @CustID)
  AND (PackHeader.ProdFacilityID = @FacilityID)
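If local variables and sp_executesql changed nothing, two more things may be worth trying (sketched below on the same query; the OPTIMIZE FOR value is a placeholder): recompiling the statement on every call, or pinning the plan to representative parameter values.

ALTER PROCEDURE [dbo].[spBoxCheck]
    @FacilityID int, @CustID int, @OrdNumber nvarchar(64)
AS
SET NOCOUNT ON
SELECT ph.BoxID, oh.partnerOrderNumber, pl.LineID, pl.QuantityPacked, ph.Validated
FROM dbo.PackHeader AS ph
LEFT OUTER JOIN dbo.OrdHeader AS oh ON ph.IntOrderNumber = oh.IntOrderNumber
LEFT OUTER JOIN dbo.PackLine AS pl ON ph.BoxID = pl.BoxID
WHERE oh.partnerOrderNumber = @OrdNumber
  AND oh.CustomerID = @CustID
  AND ph.ProdFacilityID = @FacilityID
OPTION (RECOMPILE)   -- or: OPTION (OPTIMIZE FOR (@CustID = 1234))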
Here's a question that bugs me on a SQL stored procedure.

I've a stored procedure which takes 1 sec to execute when it is hit by a single user.

When the same stored procedure is accessed concurrently in a multi-user environment with different inputs to the SP, it's taking 5-6 secs to execute.

I'm totally confused as to what might hold up the resources, since SQL Server supports multiple concurrent user sessions (correct me if I'm wrong).
It would be great if you let me know the actual reason behind this.
Hi, I think I remember reading somewhere that excessive comments inside a stored procedure can have a negative impact on performance. Does anyone know if this is true and can provide an explanation?
I need to improve the performance of a stored procedure and would like to give some information about it.

First of all, the SP is very big, and it has some parameters like @Select, @Where etc. which may vary, so it has the WITH RECOMPILE option. It uses the PATINDEX, SUBSTRING and REPLACE functions heavily. Other things seem to be fine.

One more point I'd like assistance with: I declare a variable @Month varchar(20) and then set @Month = MONTH(TradeDt) - the MONTH function returns an int, so an implicit int-to-varchar conversion happens here - and then set another variable with @Var1 = '0' + @Month, which is the reason I declared @Month as varchar. Is that option OK, or should I go for @Month as int, with @Month = MONTH(TradeDt) and @Var1 = '0' + CAST(@Month AS varchar)?

Any help or suggestion would be greatly appreciated.
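Either variant works, and the cost difference is negligible; the int version just makes the conversion explicit, which many consider clearer. A side-by-side sketch (a variable stands in for the TradeDt column; the date is a made-up example):

DECLARE @TradeDt datetime
SET @TradeDt = '20070315'

-- Variant 1: varchar month, implicit int-to-varchar conversion
DECLARE @MonthV varchar(20), @Var1 varchar(21)
SET @MonthV = MONTH(@TradeDt)   -- implicit conversion happens here
SET @Var1 = '0' + @MonthV       -- '03' for March

-- Variant 2: int month, explicit cast at concatenation
DECLARE @MonthI int, @Var2 varchar(21)
SET @MonthI = MONTH(@TradeDt)
SET @Var2 = '0' + CAST(@MonthI AS varchar(2))

-- Note: for two-digit months, '0' + '10' gives '010'; a common
-- zero-padding idiom is RIGHT('0' + CAST(@MonthI AS varchar(2)), 2)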
I have a stored procedure which runs in about 30-40 seconds most of the time; however, sometimes it takes over an hour to complete. The result set is the same for both execution times. There doesn't appear to be any other significant resource hogging on the server during execution (SQL Server does use 99% CPU while it runs, though). The procedure itself is based on 2 views; these views are in turn themselves based on several views, some base tables, and so forth.
Anyone any ideas on how to narrow the problem a little more?
We have an application that is based on several extended stored procedures. When we run our application in house, or when most other customers run it, they see performance of about X transactions per second. One customer is seeing performance of about X/5, and I'm having a hard time troubleshooting it. The performance bottleneck has been narrowed to the execution of the extended stored procedures. Does anyone know of tunable SQL Server parameters that may specifically affect the performance of extended stored procedures? I know the procedures get run by a scheduler. Is there some way the priority or frequency of the scheduler can be modified? Thanks for any advice.
For my organization, I wrote a stored procedure to generate invoices for all of our clients using an audit table. Any time changes are made to the data in a table in our database, corresponding records are added to our audit table. Using this audit table we can recreate any table in the database as it was on a particular day. To generate the invoices, I first use the audit table to generate a cached view of the information. After I have a cached view, I insert records into an invoices table using various filters in my select statements.
When I run the T-SQL directly from Query Analyzer, it takes about 1 minute to generate all of the invoices. However, when I run the same T-SQL as a stored procedure, it takes roughly 14 minutes to complete. Thinking there was a problem with too much parallelism, I restricted MAXDOP to 2 (as the server has 2 physical processors). However, this did not reduce the execution time at all. Next, I tried using the SQL Profiler to watch the database while I ran the query. I checked and I did not see any unneeded recompilation. Oddly, however, I noticed that the stored procedure required well over 2 million reads compared to only 400,000 reads for the straight T-SQL. I am at a loss for how to make my stored procedure run as efficiently as the straight T-SQL code. If anyone has anything else for me to try or has any suggestions, it would be greatly appreciated.
I know that SQL Server itself does not support passing arrays to its procedures. But I need an alternative that will allow me to "duplicate" the same functionality.
I have the following information stored in a Class:
UserInfo: where I keep the variables 1-12
PageInfo: where I keep variable 13 (the list of pages)
I need to store this information in my SQL Server Database. So far I found three possible methods but I'm not sure which one has the better performance:
First Method: the easy one
1. Call the SaveUserInfo stored procedure
2. Loop through the pages array and call the SavePageInfo stored procedure for each page item in the array
Second Method: passing a delimited string to the stored procedure
1. Call the SaveUserInfo stored procedure
2. Pass a delimited string to the SavePageInfo stored procedure. The stored procedure will split the string and save the pages into the database. The string would look like this:
Third Method: passing an XML file to the stored procedure
The stored procedure will read the XML file and store the information into the database.
Which method is best for performance? As you may see, this is for tracking user navigation through the website. For the first method I worry about the number of calls to the database; for the second and third methods I worry about the length of the string or XML file to pass to the stored procedure.
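For SQL Server 2000/2005 (no table-valued parameters), the delimited string is usually the pragmatic middle ground: one round trip per user, with the split done server-side. A sketch of a simple splitter in the stored procedure (all names hypothetical, and the list is assumed non-empty and well formed):

CREATE PROCEDURE dbo.SavePageInfo
    @UserID int,
    @PageList varchar(8000)   -- e.g. '12,45,78'
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @pos int, @page varchar(20)
    SET @PageList = @PageList + ','   -- trailing sentinel delimiter

    WHILE LEN(@PageList) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @PageList)
        SET @page = LEFT(@PageList, @pos - 1)

        INSERT INTO dbo.UserPages (UserID, PageID)
        VALUES (@UserID, CAST(@page AS int))

        SET @PageList = SUBSTRING(@PageList, @pos + 1, LEN(@PageList))
    END
END
GO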
When I start my stored procedure from Query Analyzer it finishes in 1 hour. I have created a job consisting of one step - the same procedure. If I start the job from Enterprise Manager it finishes after some 10 hours. What can I do to get the same performance?