How Do We Tune a Large SQL Query
Feb 6, 2006
Hi,
One of my senior colleagues asked me a question, and I am struggling to find the answer.
What is the step-by-step procedure for tuning a large SQL query?
Or: how do we tune a large SQL query with so many joins?
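There is no single magic step, but a sane first move is to measure before changing anything: capture the execution plan and I/O statistics, find the most expensive operator (usually a scan of a large table or a join that multiplies rows), fix that, and re-measure. A minimal sketch of the measurement harness; the Orders/Customers query is only a hypothetical stand-in for your own slow query:

SET STATISTICS IO ON;   -- per-table logical/physical reads
SET STATISTICS TIME ON; -- compile and execution times
GO
-- substitute the slow query here; these table names are made up
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID;
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

With many joins, the usual culprits are missing indexes on the join columns, joins that are not actually constrained (accidental Cartesian products), and functions wrapped around indexed columns in the WHERE clause.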
I have written a query which fetches data from a table with a huge amount of data. This query is actually being used in a stored procedure, and it takes a lot of time, hence slows down my SP.
Here is the query that I'm building in the stored procedure:
-------------------------------------------------------------
SET @selectLeadsInPeriodString = ' SELECT LM_Dealer, LM_Brand, SUM(LM_ImpressionCount)
FROM LM_ImpressionCount_Dealer'
IF(@timeCriterion IS NULL OR LTRIM(RTRIM(@timeCriterion)) = '' OR LTRIM(RTRIM(@timeCriterion)) = 'NULL')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE (CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) >= ''' +
CONVERT(varchar,@startDateTime,102) +
''' AND CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) <= ''' +
CONVERT(varchar,@endDateTime,102) + ''')'
ELSE IF(@timeCriterion = 'CurrentMonth')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE MONTH([LM_ImpressionCount_Dealer].[LM_ImpressionDate]) = MONTH(GETDATE())'
ELSE IF(@timeCriterion = 'PreviousMonth')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE MONTH([LM_ImpressionCount_Dealer].[LM_ImpressionDate]) = MONTH(GETDATE()) - 1'
ELSE IF(@timeCriterion = 'YearToDate')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE ([LM_ImpressionCount_Dealer].[LM_ImpressionDate] >= cast((''1/1/''+cast(year(getdate()) AS varchar(4))) AS datetime)
AND CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) <= CONVERT(varchar,GETDATE(),102))'
IF(@brand IS NOT NULL AND LTRIM(RTRIM(@brand)) <> '')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' AND [LM_ImpressionCount_Dealer].[LM_Brand] = ''' + @brand + ''''
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString + ' GROUP BY LM_Dealer, LM_Brand'
The variables used in the query formation above are passed as input parameters to the SP.
The table being queried has columns 'LM_Dealer', 'LM_Brand', 'LM_ImpressionCount' and 'LM_ImpressionDate'.
Also, the table has a non-clustered index on the column LM_ImpressionDate.
With all this information, can anyone suggest how I can optimize the query above?
Thanks in advance.
-Dex
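Two things stand out. First, wrapping LM_ImpressionDate in CONVERT on the left side of each comparison makes the predicate non-SARGable, so the nonclustered index on LM_ImpressionDate cannot be used for a range seek. Second, concatenating values into the string forces a fresh plan per parameter set. A sketch of the date-range branch using bare column comparisons and sp_executesql (this assumes @selectLeadsInPeriodString is declared nvarchar, and that @startDateTime/@endDateTime carry no meaningful time portion):

SET @selectLeadsInPeriodString = N'
    SELECT LM_Dealer, LM_Brand, SUM(LM_ImpressionCount)
    FROM LM_ImpressionCount_Dealer
    WHERE LM_ImpressionDate >= @start
      AND LM_ImpressionDate < DATEADD(day, 1, @end)   -- whole-day upper bound
    GROUP BY LM_Dealer, LM_Brand'

EXEC sp_executesql @selectLeadsInPeriodString,
    N'@start datetime, @end datetime',
    @start = @startDateTime,
    @end = @endDateTime

The same pattern works for the @brand filter. The MONTH(...) branches have the same non-SARGable problem; expressing CurrentMonth/PreviousMonth as date ranges computed from GETDATE() would let them seek as well. (Also note that MONTH(GETDATE()) - 1 evaluates to 0 in January, which matches no month at all.)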
I think this is a very silly question,
but it is hard for me :eek:
SELECT email, host FROM WAITING_AUTH WHERE email NOT IN
(SELECT email FROM MEMBER)
AND host NOT IN (SELECT host FROM MEMBER)
thanks~ Have a nice weekend
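Not silly at all. One thing to know: NOT IN returns no rows at all if the subquery ever produces a NULL, and it often optimizes poorly. A sketch of the same intent with NOT EXISTS, which is usually safer and faster:

SELECT w.email, w.host
FROM WAITING_AUTH AS w
WHERE NOT EXISTS (SELECT 1 FROM MEMBER AS m WHERE m.email = w.email)
  AND NOT EXISTS (SELECT 1 FROM MEMBER AS m WHERE m.host = w.host)

Indexes on MEMBER(email) and MEMBER(host) would let both probes be cheap seeks.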
If anyone is able to provide advice for tuning the below query in SQL Server, it is much appreciated. In addition, any index suggestions are also appreciated as I have access to the tables. Thank you.

select a.id,
    isnull(b.advisement_satisfaction_yes, 0) as advisement_satisfaction_yes,
    isnull(c.advisement_satisfaction_no, 0) as advisement_satisfaction_no,
    case
        when isnull(b.advisement_satisfaction_yes, 0) > isnull(c.advisement_satisfaction_no, 0) then 'YES'
        when isnull(b.advisement_satisfaction_yes, 0) < isnull(c.advisement_satisfaction_no, 0) then 'NO'
        when isnull(b.advisement_satisfaction_yes, 0) = isnull(c.advisement_satisfaction_no, 0) then 'TIE'
    end as Satisfied_With_Advisement
from a
left join
    (select id, count(answer_text) as Advisement_Satisfaction_yes
     from a
     where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
     and answer_text = 'yes'
     group by id) b
on a.id = b.id
left join
    (select id, count(answer_text) as Advisement_Satisfaction_NO
     from a
     where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
     and answer_text = 'NO'
     group by id) c
on a.id = b.id
where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
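Since both derived tables scan the same rows with the same question filter, one option is a single pass with conditional aggregation. A sketch using the table and column names from the query above (note the second LEFT JOIN's ON clause above compares a.id = b.id where a.id = c.id was probably intended):

SELECT id,
       SUM(CASE WHEN answer_text = 'yes' THEN 1 ELSE 0 END) AS advisement_satisfaction_yes,
       SUM(CASE WHEN answer_text = 'NO' THEN 1 ELSE 0 END) AS advisement_satisfaction_no,
       CASE
           WHEN SUM(CASE WHEN answer_text = 'yes' THEN 1 ELSE 0 END) >
                SUM(CASE WHEN answer_text = 'NO' THEN 1 ELSE 0 END) THEN 'YES'
           WHEN SUM(CASE WHEN answer_text = 'yes' THEN 1 ELSE 0 END) <
                SUM(CASE WHEN answer_text = 'NO' THEN 1 ELSE 0 END) THEN 'NO'
           ELSE 'TIE'
       END AS Satisfied_With_Advisement
FROM a
WHERE question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
GROUP BY id

On SQL Server 2005 or later, an index on (question, id) INCLUDE (answer_text) would support this with a single range seek.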
Hi,
I have a query as mentioned below:
SELECT BillCurrencyID,DistD,Amount,PartnerFlag1
FROM Factsales_tab_Dtls (NOLOCK)
WHERE (sales_year =2007 OR sales_year=2008)
AND Country ='US'
1) Table Factsales_tab_Dtls has more than 5.5 million records.
2) It has 32 columns.
3) There are nonclustered indexes on the columns sales_year & Country.
4) The query takes a long time to execute.
Please help me fine-tune the query.
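A covering index matched to this query's filter and select list may help substantially. A sketch, with column names taken from the query above (verify datatypes and the wider workload before adding another index to a 32-column table):

CREATE NONCLUSTERED INDEX IX_Factsales_Country_Year
ON Factsales_tab_Dtls (Country, sales_year)
INCLUDE (BillCurrencyID, DistD, Amount, PartnerFlag1);

Country leads because it is an equality predicate; the two-year OR becomes two range seeks; and the INCLUDE columns let the engine answer the query from the index alone, with no lookups into the 5.5-million-row base table. (INCLUDE requires SQL Server 2005 or later; on SQL 2000 those columns would have to be index keys.)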
Hi, here is a question about MSSQL.
A problem occurs when I run a large database: the running speed is very slow.
For example, if I want to look up the records of 1,500 employees with 15 records per person (on average) within 1 year (12 months), that means I have to search 1500 * 15 * 12 = 270,000 records.
So, is there a way to solve this problem? Is this called tuning the SQL Server / index / view?
What is the difference between tuning SQL Server with an index and with a view?
Thanks for giving me advice.
:)
Hi,
I have the SQL desktop version installed, and for the last few days it has really slowed down. I have run many anti-virus scans etc. and all is okay on that front.
Any tips regarding how I can tune things up? What should I look for, and how do I go about it? Deleting the LOG, etc.?
Please guide.
Thanks.
ENVIRONMENT:
I have SQL Server 6.5 running under a dedicated NT Server. The NT configuration includes dual Pentium 200MHz processors, 256MB RAM and a RAID system.
The Database size is 1GB with actual data size about 500MB.
PROBLEM:
I have an application which uses lots of joins to get its results. My select query runs too slowly, even when I run it on the server.
I updated the statistics and rebuilt all the indexes on the tables used by the query.
Any suggestions on using SQL Trace and tuning the server/database are welcome.
Srini
How can I trim the code and make the procedure run faster?
CREATE Procedure Disbursements_Cats
(@startdate datetime,@enddate datetime)
As
Begin
SELECT Transaction_Record.loan_No AS loan_no,
Transaction_Record.transaction_Date AS Transaction_Date,
Transaction_Record.transaction_type AS Transaction_type,
Transaction_Record.transaction_Amount AS Transaction_Amount,
Product.product AS Product,
Product_Type.product_Type AS product_type,
Product_Type.loan_Type AS Loan_type,
Customer.first_name AS first_name,
Customer.initials AS initials,
Customer.second_name AS Second_Name,
Customer.surname AS surname,
Customer_identification.idno AS ID_No,
Bank.Bank_name AS Bank_Name,
Bank_detail.Account_no AS Account,
Bank_detail.Branch AS Branch
FROM Transaction_Record CROSS JOIN
Bank_detail CROSS JOIN Bank CROSS JOIN
Customer CROSS JOIN Product CROSS JOIN
Loan_Type CROSS JOIN Product_Type CROSS JOIN Customer_identification
End;
GO
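The CROSS JOINs are almost certainly the problem: with no join conditions, the engine builds a Cartesian product across all the tables. A sketch of the intended shape, with hypothetical key columns (the real foreign keys must be substituted) and the so-far-unused @startdate/@enddate parameters put to work; note that Loan_Type is cross-joined above but never selected from, since loan_Type comes from Product_Type:

SELECT tr.loan_No, tr.transaction_Date, tr.transaction_type, tr.transaction_Amount,
       p.product, pt.product_Type, pt.loan_Type,
       c.first_name, c.initials, c.second_name, c.surname,
       ci.idno AS ID_No, b.Bank_name, bd.Account_no, bd.Branch
FROM Transaction_Record AS tr
JOIN Customer AS c ON c.customer_id = tr.customer_id               -- hypothetical keys
JOIN Customer_identification AS ci ON ci.customer_id = c.customer_id
JOIN Bank_detail AS bd ON bd.customer_id = c.customer_id
JOIN Bank AS b ON b.bank_id = bd.bank_id
JOIN Product AS p ON p.product_id = tr.product_id
JOIN Product_Type AS pt ON pt.product_type_id = p.product_type_id
WHERE tr.transaction_Date BETWEEN @startdate AND @enddate

Filtering on the date range the procedure already accepts is likely most of the win.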
Hello,
I need to tune some of my stored procedures.
Can anyone tell me how to tune particular stored procedures?
Thanks
Prashant Hirani
HELP!!! I am trying to fine-tune or rewrite my SELECT statement, which has a combination of SUM and CASE statements. The values are accurate, but the query is slow.
BUSINESS RULES
==============
1. Add up Count1 when FIELD_1 has a value and FIELD_2 is NULL, or both have a value.
2. Add up Count2 when FIELD_2 has a value and FIELD_1 is NULL.
3. TotalCount = Count1 + Count2 (below, I basically had to reuse the SQL from both Count1 and Count2).
4. Add a NoneCount when both FIELD_1 and FIELD_2 are NULL.
SQL Code
========
SELECT
SUM(CASE
WHEN ((FIELD_1 IS NOT NULL AND FIELD_2 IS NULL) OR (FIELD_1 IS NOT NULL AND FIELD_2 IS NOT NULL))
THEN 1
ELSE 0
END) AS Count1,
SUM(CASE
WHEN (FIELD_1 IS NULL AND FIELD_2 IS NOT NULL)
THEN 1
ELSE 0
END) AS Count2,
SUM(CASE
WHEN (FIELD_1 IS NULL AND FIELD_2 IS NOT NULL)
THEN 1
ELSE (CASE WHEN ((FIELD_1 IS NOT NULL AND FIELD_2 IS NULL) OR (FIELD_1 IS NOT NULL AND FIELD_2 IS NOT NULL)) THEN 1 ELSE 0 END)
END) AS Total_Count,
SUM(CASE
WHEN (FIELD_1 IS NULL AND FIELD_2 IS NULL)
THEN 1
ELSE 0
END) AS None_Count
FROM TABLE_1
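The conditions simplify considerably: the Count1 condition reduces to FIELD_1 IS NOT NULL (both of its branches require it), Count2 is FIELD_1 IS NULL AND FIELD_2 IS NOT NULL, and their union, the total, is simply "at least one field is non-NULL". A sketch of the simplified query:

SELECT
    SUM(CASE WHEN FIELD_1 IS NOT NULL THEN 1 ELSE 0 END) AS Count1,
    SUM(CASE WHEN FIELD_1 IS NULL AND FIELD_2 IS NOT NULL THEN 1 ELSE 0 END) AS Count2,
    SUM(CASE WHEN FIELD_1 IS NOT NULL OR FIELD_2 IS NOT NULL THEN 1 ELSE 0 END) AS Total_Count,
    SUM(CASE WHEN FIELD_1 IS NULL AND FIELD_2 IS NULL THEN 1 ELSE 0 END) AS None_Count
FROM TABLE_1

The simplification alone will not make a table scan disappear, though; if the query is slow, it is probably scanning the whole table, and only a narrower filter or a small covering index over FIELD_1 and FIELD_2 will change that.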
Hi:
I have a TableA with around 10 columns with varchar and numeric datatypes. It has 500 million records and its size is 999,999,999 KB. I believe it is KB; I got this data after running sp_spaceused on it. The index_size was also pretty big, in 6 digits.
On looking at TableA, it did not have any PKs and hence no clustered index. It had other indices:
IX_1 on ColA
IX_2 on ColB
IX_3 on ColC
IX_4 on ColA, ColB and ColC put together.
Queries performed on this table are very slow. I have been asked to tune up this table. I know as much info as you.
Data prior to 2004 can be archived into another table. I need to run a query to find out how many records that is.
I am thinking the following, but don't know if I am correct:
1. I need to add a new PK column (which will increase the size of TableA), which will add a clustered index. Right now there are no clustered indices.
2. I would like help in understanding whether I should remove IX_1, IX_2, and IX_3, as they are all used in IX_4 anyway.
3. I forget what the textbox is called on the index page. It is set to 0 and can be set from 0 to 100. What would be a good value for it?
Thank you.
RS
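For the archive count, a sketch, assuming the date lives in a column called created_date (a hypothetical name; substitute the real column):

SELECT COUNT(*) AS rows_to_archive
FROM TableA
WHERE created_date < '20040101'

Without an index leading on that date column, this will scan all 500 million rows, so expect it to take a while. On point 2: only IX_1 is made redundant by IX_4. An index on (ColA, ColB, ColC) can serve seeks on ColA alone, but not seeks on ColB or ColC alone, so IX_2 and IX_3 still earn their keep if queries filter on those columns.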
I've create a bunch of views to expose a logical model of the underlying database of an application server.
To enforce the security control, I've also created a CLR UDF to call the application server's API for security check and audit log.
For example, we have a table, tblSecret, and the view, vwSecret, is,
SELECT
Id,
ParentId,
Description,
SecretData
FROM tblSecret
WHERE udfExpensiveApiCall(Id) = 1
The udfExpensiveApiCall will return 1 if the current user is allowed to access the SecretData else 0. The CLR UDF call is very expensive in terms of execution time and resources required.
Currently, there are millions of rows in tblSecret.
My objective is to tune the view such that when the view is JOINed, the udfExpensiveApiCall will be called the least number of time.
SELECT
ParentId,
SecretData
FROM vwParent
LEFT JOIN vwSecret ON vwSecret.ParentId = vwParent.ParentId
WHERE vwParent.StartDate > '1/1/2008'
AND vwSecret.Description LIKE '%WHATEVER%'
Is there any way to specify the execution cost of the CLR UDF, udfExpensiveApiCall, so that the execution plan will call the UDF only when absolutely necessary?
Is there any query hint will help?
Any recommendation?
Thanks,
Simon Chan
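As far as I know, SQL Server 2005 offers no supported way to declare a cost for a CLR scalar UDF; the optimizer prices scalar UDF calls as if they were cheap, so it will not reorder the plan to protect you. The practical workaround is to reduce the row set with the cheap predicates first and only then apply the UDF. A sketch against the base table, using the objects above (the date literal is made up):

SELECT filtered.ParentId, filtered.SecretData
FROM (
    SELECT s.Id, s.ParentId, s.SecretData
    FROM tblSecret AS s
    INNER JOIN vwParent AS p ON p.ParentId = s.ParentId
    WHERE p.StartDate > '20080101'
      AND s.Description LIKE '%WHATEVER%'
) AS filtered
WHERE dbo.udfExpensiveApiCall(filtered.Id) = 1

Even this is only a suggestion to the optimizer, not a guarantee; it may still push the UDF predicate down. Materializing the inner query into a temp table first, then filtering the temp table through the UDF, is the only reliable fence.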
ASP.NET and MSSQL run on the same machine, a Windows 2000 server,
and the physical memory limit of MSSQL is set to 192MB.
Anyone have any good idea(s)? Please share them here
:)
I want to tune the indexes on my database and I am trying to use the SQL Server Profiler to collect data for the Index tuning wizard to analyze. My question is what do I need to trace with the profiler so that the Index tuning wizard can work? I am looking at the trace properties in Profiler at the Events, Data Columns, and Filters tabs but I have no idea of what I need to capture.
Thanks in advance.
Mike
Hi all,
I have a problem...
If a query returns a great many rows, where is the result stored? For example: what database? What table? What transaction log file?
Thanks for reading.
I have a report that runs on a huge table, rpt.AgentMeasures; it has 10 months' worth of data (150 million records as of today, and it will keep increasing). I have pasted my proc below; the other tables that are joined to this huge table have no more than 3K records each. This report will be accessed by multiple users (expecting 20 people). As of now this report runs for 5 minutes if I pull 1 month's worth of data. Would it be wise to use temp tables?
ALTER proc [rpt].[Get_Metrics]
@MinDate DATETIME,
@MaxDate DATETIME,
@Medium Varchar(max),
@footPrint varchar(max),
[code]...
Sriram writes "Hi,
I want to retrieve only the first n rows from a query which returns a large number of rows.
Say,
select empno, name from emp where deptno=100
returns 1000 rows.
I want to improve the query so that it returns only the first 10 rows and not 1000 rows.
Thanks in Advance,
Sriram."
I have the following table structure:
tableA (~85,000 rows) primary key = [colA,colB]
tableB (~850,000 rows) primary key = [colA,colC]
tableC (~120,000,000 rows) primary key = [colA,colB,colC]
IMPORTANT: colC is DATETIME
For a SET of rows in tableA (about 50,000) I need to pull the MOST RECENT (given a date) corresponding values from tables B and C. The only way I can think of doing this is the following:
SELECT tableA.colA
,(SELECT TOP 1 colX FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableB
,(SELECT TOP 1 colX FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableC
FROM tableA
WHERE tableA.colX = 'some criteria'
Is there any other way anyone can suggest? Unfortunately, because tableC is so large, the disk IO (I think) causes this query to take over an hour. (If I had monster RAM and super fast disk this wouldn't be as big an issue, but that's not an option right now )
Thanks in advance!
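One alternative worth testing is APPLY (SQL Server 2005 or later), which visits tableB and tableC once per outer row instead of once per selected column, and guarantees all columns come from the same matched row. A sketch with the same placeholder names:

SELECT a.colA,
       b.colX AS b_colX, b.colY AS b_colY,
       c.colX AS c_colX, c.colY AS c_colY
FROM tableA AS a
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableB
             WHERE colA = a.colA AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS b
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableC
             WHERE colA = a.colA AND colB = a.colB AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS c
WHERE a.colX = 'some criteria'

Because both primary keys start with the correlation columns and end with the DATETIME colC, each TOP 1 ... ORDER BY colC DESC should be a single backward range seek rather than a scan.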
The query shown below is designed to use seasonal profiles to compute 53 weeks of forecast data and then, from that, compute the number of weeks of supply of each item at each location. The query works, but the volume of data produced (20M+ rows) is substantial. If I limit the CTE to a single location, it runs in 2 seconds and returns 41,000 rows. But when run for all locations and items, it runs for more than 4 hours. Would I do better converting the CTE to a sub-query and adding an index to improve the performance of the main query?
WITH Forecast AS
(SELECT Location_Idx
,Item_Idx
,Week_Code
,(CAST(AnnualQty AS DECIMAL(9))/53.0)*[Profile] AS fcst
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx
)
SELECT fcst1.Location_Idx
,fcst1.Item_Idx
,fcst1.Week_Code
,fcst1.fcst AS WeekQty
,SUM(fcst2.fcst) AS CumQty
FROM Forecast fcst1
INNER JOIN Forecast fcst2
ON fcst2.Location_Idx = fcst1.Location_Idx
AND fcst2.Item_Idx = fcst1.Item_Idx
AND fcst2.Week_code <= fcst1.Week_Code
GROUP BY fcst1.Location_Idx,fcst1.Item_Idx,fcst1.Week_code,fcst1.fcst
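A CTE is re-evaluated each time it is referenced, so the self-join above computes the 20M-row forecast twice and then joins the two copies. A sketch that materializes the forecast once into an indexed temp table before the triangular join (temp-table and index names are illustrative):

SELECT Location_Idx, Item_Idx, Week_Code,
       (CAST(AnnualQty AS DECIMAL(9)) / 53.0) * [Profile] AS fcst
INTO #Forecast
FROM dbo.FactReplenishmentProfile rp
INNER JOIN dbo.FactSeasonalProfile sp
    ON sp.SeasonalProfile_Idx = rp.SeasonalProfile_Idx;

CREATE CLUSTERED INDEX IX_Fcst
ON #Forecast (Location_Idx, Item_Idx, Week_Code);

SELECT f1.Location_Idx, f1.Item_Idx, f1.Week_Code,
       f1.fcst AS WeekQty, SUM(f2.fcst) AS CumQty
FROM #Forecast f1
INNER JOIN #Forecast f2
    ON f2.Location_Idx = f1.Location_Idx
   AND f2.Item_Idx = f1.Item_Idx
   AND f2.Week_Code <= f1.Week_Code
GROUP BY f1.Location_Idx, f1.Item_Idx, f1.Week_Code, f1.fcst;

On SQL Server 2012 or later, the running total could instead be a windowed SUM(fcst) OVER (PARTITION BY Location_Idx, Item_Idx ORDER BY Week_Code), which avoids the triangular join entirely.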
We are trying to limit our query that returns items from our database. The query currently returns 32,000 records. We are trying to figure out an efficient way to request the 1st 50, or the 3rd 50, or the 5th 50 to display on the screen. We don't want to return the entire 32,000 and then limit what's displayed on the screen in ADO; we want the select statement to only return 50 at a time. Any suggestions?
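On SQL Server 2005, ROW_NUMBER makes this straightforward. A sketch (Items/ItemID are placeholder names; the ORDER BY defines which rows count as "the 3rd 50"):

SELECT *
FROM (SELECT ROW_NUMBER() OVER (ORDER BY ItemID) AS rn, *
      FROM Items) AS numbered
WHERE rn BETWEEN 101 AND 150   -- the 3rd page of 50

On SQL Server 2000, nested TOP is the usual workaround: SELECT TOP 50 ... WHERE ItemID > (SELECT MAX(ItemID) FROM (SELECT TOP 100 ItemID FROM Items ORDER BY ItemID) AS prev) ORDER BY ItemID.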
Hi guys,
I am asking this question on behalf of a friend. I have little knowledge of SQL 2005, but my friend is quite knowledgeable, although this is the first time he is dealing with a large database for a client. So here's the story.
His client has a database containing 1.5 million books. Now he is setting up a website which will enable users to search for books. Searching by ISBN is no problem, as it only takes 1 second. The problem is that searching by Title takes more than 20 seconds, which is unacceptable. My friend has only worked with smaller databases; he just recently thought of implementing indexing and is now looking for other ideas.
Each row contains book details such as Title, Author1, Author2, Author3, Publisher, Publication Date, ISBN, etc.
Can anyone who is more experienced with large databases share some design ideas? His client is aiming for 8 seconds or less.
Thanks in advance!
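For what it's worth, the fix depends on the shape of the search; a sketch, with all object names assumed:

-- prefix searches (Title LIKE 'harry%') can use an ordinary index:
CREATE NONCLUSTERED INDEX IX_Books_Title ON Books (Title);

-- substring searches (Title LIKE '%potter%') cannot; a full-text index
-- is the usual answer for word searches across 1.5M rows:
CREATE FULLTEXT CATALOG BookCatalog;
CREATE FULLTEXT INDEX ON Books (Title, Author1, Author2, Author3)
    KEY INDEX PK_Books ON BookCatalog;   -- PK_Books: a unique index, name assumed
-- then query with: WHERE CONTAINS(Title, '"potter"')

A leading-wildcard LIKE scans every row no matter what indexes exist, which alone would explain the 20 seconds.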
Could someone please point me in the right direction on how to read in a large query with .NET?
I am trying to emulate a legacy database system, so I don't know the upper bounds of the SQL query. An example query would be something like:
Select * from invoices where year > 1995
The query must be updatable and only return, say, 10 to 100 rows at a time.
It should also be forward-only and discard rows no longer in use, to save memory.
And if at all possible, I would like to lock one row at a time as the row is read in.
I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client, separated from the server by several states, submits the query to the server. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?
While running the query below on SQL Server 2005 (Build 3790: Service Pack 2):
SELECT DISTINCT pkg.PrimaryBarcode
FROM dbo.Package AS pkg (NOLOCK)
JOIN dbo.PackageCycle AS pc (NOLOCK)
ON pkg.PackageKey = pc.PackageKey
WHERE (pkg.BillCycleDateKey >= 20061201) OR
(pkg.BillStatusKey = 1)
I received a partial result set followed by
An error occurred while executing batch. Error message is: Couldn't replace text
I suspect this is a memory issue, but I cannot find any reference to this particular message on the Microsoft forums or the other 3rd-party forums. PrimaryBarcode is a varchar(50).
I am not sure where to go from here. I would appreciate any ideas. Thanks in advance.
Cheers,
Mike Byrd
Hi,
I am struggling with the response time for a simple count on a fulltext search; it is too slow.
Even using the simplest query on a good server (64-bit dual Opteron, 4GB RAM, with high-speed 16-disk RAID storage):
select count(*) from content_books where contains(searchData,'"english"')
It takes 4 seconds to count the roughly 500,000 results. I have removed all the joins with real table data so that the query runs only inside the fulltext engine.
I would expect this to be down to 4 milliseconds. Isn't it just getting the size of the "english" word result index?
It seems the engine is going through all the results, because if I do a more complex search that returns fewer results, the performance is better.
Any clues on how to do this faster? I never read the thousands of records, BUT I need to count them...
Thank you very much.
Hi all,
I am doing some large updates that may update 10,000-plus rows. This works fine when I execute the SQL directly in Query Analyzer.
If I set the timeout on my VB connection to 0 (zero), the connection should not time out???? But it does.
If I set the timeout to a high value, say 1200, I get the same problem well within 1200 seconds.
Also, I am getting the problem that the log fills up, but it is set to auto-grow????
Any ideas would be appreciated.
Thanks
Greg
Hi, all experts here,
Thank you very much for your kind attention.
It's so frustrating that I don't really know what is going on or why, and I have tried for ages to figure it out and nothing really helps.
I have already moved the data files of the tempdb database to a drive with enough space (many GB of space still left), but I still run out of all the system drive space when I run a query against a large table. Why is that? And how do I figure it out?
Please help me and thanks a lot in advance for your kind advices and help and I am looking forward to hearing from you shortly.
With best regards,
Yours sincerely,
Hi, all experts here,
I am wondering whether tempdb temporarily stores all results whenever I query a large fact table with over 4 million records that joins another dimension table. Each time I run the query, tempdb grows to nearly 1GB, which nearly uses up all the space on my local system drive; as a result, performance goes totally down. Is there any way to fix this problem? Thanks a lot in advance, and I am looking forward to hearing from you shortly for your kind advice.
With best regards,
Yours sincerely,
What happens if you have a website and the hard drive on your server is, say, 250GB, and then the database exceeds that and grows to 300GB in size? How would you spread your database across two different hard drives?
Thanks
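A sketch of the standard approach: add a secondary data file on the second drive (the database name, path, and sizes here are made up):

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'E:\SQLData\MyDb_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);

SQL Server then fills the files in a filegroup proportionally, so new data spreads across both drives; existing data stays where it is unless indexes are rebuilt.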
We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows in it. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.
The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.
Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and join them with a "union all" view?
I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.
Thanks for any help you can provide.
George M. Parker
Hi, anyone who administers a pretty big database (not less than 30 GB, with the average number of rows in a table about 2M and more), please share your experience with maintenance of such a DB. Especially, I'm interested in:
1) Index maintenance (when and how: just regular DBCC, a maintenance plan, or some script to split the job in two, and so on).
2) Removing unused space from the DB (not major).
The server works 24*7, and it's a transactional environment. SQL 7 SP3 on a cluster.
I run the SP to rebuild all the indexes. It takes about 2-3 hrs to determine the objects with fragmentation less than 80% and actually rebuild them, and during this process the users experience performance problems (especially for update/insert). It looks like I need to change the plan or strategy for doing this. Any thoughts appreciated!
Thanks in advance.
Dmitri
Hi,
How can I partition the large tables so that the inserts and updates which I am doing on those tables take less time?
I want to know how I can partition large tables and, if I do that, how performance is going to be increased.
Thanks.
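On SQL Server 2005 or later, table partitioning is a partition function plus a scheme the table is created on. A minimal sketch, where every name and boundary value is hypothetical:

CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20070101', '20080101');

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales (
    SaleID    int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psByYear (OrderDate);

Inserts and updates touch only the partition their OrderDate maps to, queries that filter on OrderDate scan only the relevant partitions (partition elimination), and old partitions can be switched out almost instantly. The gain is real only when queries actually filter on the partitioning column.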