Optimizing/Improving WHERE

Oct 24, 2005

Here's a sample SELECT statement:


SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE
(T1.F1 > @iNum AND T2.F1 > @iNum)
OR
( @iNum2 * (T1.F1 - T2.F1)/(T1.F2 - T2.F2) ) + (T1.F1 - ((T1.F1 - T2.F1)/(T1.F2 - T2.F2) * T1.F2) ) > @iNum


As you can see, the second part of the WHERE (after the OR) is much more complicated than the part before the OR. My query would run a lot faster if it tried the first part of the OR and didn't bother with the second part if the first part was satisfied. Is there any way to do this?

Thanks!

Tyler
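
A sketch of one common workaround, not from the original thread: SQL Server does not guarantee left-to-right evaluation of OR, so there is no reliable way to force a short-circuit in place, but splitting the OR into a UNION ALL lets the optimizer apply the cheap predicate on its own and compute the expensive expression only for rows that fail it (evaluation order within the second WHERE is still up to the optimizer). The NOT (...) guard keeps the branches disjoint so no row is returned twice.

SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE T1.F1 > @iNum AND T2.F1 > @iNum
UNION ALL
SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE NOT (T1.F1 > @iNum AND T2.F1 > @iNum)
AND ( @iNum2 * (T1.F1 - T2.F1)/(T1.F2 - T2.F2) ) + (T1.F1 - ((T1.F1 - T2.F1)/(T1.F2 - T2.F2) * T1.F2) ) > @iNum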

View 3 Replies



Improving Performance

Feb 19, 2007

Hi,

I use SQL Server 2005 on my development machine, and whilst this machine isn't as powerful as the live server, it does at times seem a little slower than I would expect. So I've been wondering if there is any way for me to tune the machine so that SQL Server is better able to make use of the resources? I am using WS2003.

Short of tweaking with the actual database, which is still under development, does anyone have any tips to increase performance?

Thanks,

--
Dylan Parry
http://electricfreedom.org | http://webpageworkshop.co.uk

Programming, n: A pastime similar to banging one's head against a wall, but with fewer opportunities for reward.
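
A hedged starting point, not from the thread: on a shared development box, the first knob usually worth checking is how much memory SQL Server is allowed to take, so it doesn't squeeze everything else on the machine. The value below is illustrative.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;  -- leave the rest for the OS and other apps
RECONFIGURE;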

View 16 Replies View Related

Improving Performance

Sep 23, 2007

I've written a small application that uses a Microsoft SQL Server 2000 database, and lately I've been experiencing a lot of performance problems with timeouts and data just taking too long to retrieve. I was wondering what I could do to help with this.

Details:
The program I wrote simply takes Windows event logs and uploads them into a central database. I have also built a small ASP.NET page where users can write out queries or save them for later use. The program hits our servers and grabs the logs remotely, inserts them into SQL one by one, and clears out the logs when finished. It runs once a day and generates about 12,000 events daily (mostly logins and logoffs). Once it starts, it takes about 2 and a half hours to finish going through everything.

Database Design...
Single Table, about 3.5 million rows now, takes up about 3 gigs of harddrive space.

Column Definitions
Name, Data Type

EventEntryID INT PrimaryKey(Clustered) (Auto-Increment Identity)
Log varchar(300)
Type varchar(300)
Date smalldatetime
Source varchar(300)
Category varchar(300)
Message varchar(8000)
EventID varchar(300)
UserName varchar(300)
Computer varchar(300)

I know the database design can probably use some work, but it's hard to do so now because of the massive amounts of data without killing the transaction log.

Computer Specs:
OS: Window Server 2003
Processor: Intel 700 MHz
RAM: 1 GB

I know it's a small server, but this is pretty much the largest thing this server deals with.

I've tried to monitor SQL with Perfmon and the SQL performance counters and I just don't see much of anything going wrong with it there, so I'm thinking the hardware isn't an issue, but then again, I'm not very familiar with SQL.
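
A sketch of two cheap wins, hedged since the post doesn't give the table name (EventEntries below is hypothetical): an index matching the columns the ASP.NET query page most likely filters on, and batching the nightly load instead of inserting one event at a time.

-- Help the ad-hoc query page, assuming searches usually filter on date and machine:
CREATE NONCLUSTERED INDEX IX_EventEntries_Date_Computer
ON EventEntries ([Date], Computer)

-- For the load itself, grouping ~1000 rows per INSERT/transaction instead of one
-- round trip per event usually cuts the 2.5-hour run dramatically.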

View 10 Replies View Related

Improving Performance Of SqlServer

Mar 28, 2006

hello everyone,
Does anyone know some tips or articles that describe how SQL Server 2000 can be improved? I have an application which displays newspapers. The text of the newspapers is saved in the DB. Performance is OK, but I have performance problems when I do a full-text search.
 
1) Hardware: P4 Xeon dual core, 2.4 GHz, 2 GB RAM
2) Number of articles in database: ~255,000 records (between 10 and 400 words each)
3) The database is indexed (I am searching with "CONTAINS" and not with the "LIKE" keyword)
4) In my query I am searching in 3 columns (title, leadtext, content)
5) Results (searching for one word, stopping after 150 results): the time to get a result from the database lies between 5 and 10 seconds (average: 6.77 sec)!
This is the time only for getting the dataset from the SQL Server. The time for manipulating the dataset and binding it to my repeater is about 15 milliseconds. Using a SqlDataReader didn't improve the performance; I got the same results.
In future the number of records will be three to five times higher than now. I am afraid that the user will then have to wait half a minute to get a result of 150 search hits.
My questions:
Are you seeing similar results in your applications? Are these times acceptable?
SQL Server 2000 (Professional) is installed with its default settings. Are there possibilities to modify these settings to improve performance? Where can I find info about that?
 
greetings and thx
Klaus
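
One thing worth trying (a sketch with hypothetical table and key names): CONTAINSTABLE with its top_n_by_rank argument, available in SQL Server 2000, asks the full-text engine itself to stop after the best 150 matches, which is usually far cheaper than fetching everything and filtering afterwards.

SELECT a.title, ft.[RANK]
FROM CONTAINSTABLE(articles, *, 'searchword', 150) ft
INNER JOIN articles a ON a.articleID = ft.[KEY]
ORDER BY ft.[RANK] DESC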

View 2 Replies View Related

Improving The Speed Of A LIKE Query

May 16, 2007

Hi all,

My colleague and I are having some difficulties regarding the speed of a LIKE query on our intranet system. Please see the quote below, as posted on another SQL forum (to which no one has responded yet).

Quote: Howdy,

I've taken control of an SQL Database with an ASP front end. The CPU usage of the SQL Server process periodically jumps up to between 75-99%. I've managed to identify the ASP page that is causing it and it's a search page.

It's basically doing something like this:

SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CustomerName LIKE '%query%'
OR CustomerAddress LIKE '%query%'
OR CustomerPostCode LIKE '%query%'

Where query is the value the user has searched on.

The CustomerName and CustomerPostCode fields are varchar fields, but the CustomerAddress field is a text field.

I know if I replace the CustomerAddress text field with a series of varchar fields then I could make them non-clustered indexes, but would this help speed things up with a LIKE query?

There's a lot of work required on the ASP side if I'm going to split the field out, so I only want to do it if I'm gonna get a significant benefit from doing it.

Failing this, is there a better way of performing a search like this?

Any help is much appreciated,

Thanks!
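
A sketch of the usual alternative, not from the thread: a leading-wildcard LIKE ('%query%') can never use an index, so every search scans the whole table no matter how the columns are indexed, varchar or not. Full-text indexing handles varchar and text columns alike; note the semantics differ slightly (a prefix term matches words starting with the string, not substrings in the middle of words).

SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CONTAINS(*, '"query*"')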

View 9 Replies View Related

Improving Count(*) Performance

Feb 24, 2008

Hi guys,

I'm facing a performance issue with select count(*) from table_name when performing pagination of my results. My actual query joins 4 tables with proper PK and FK constraints and indexes applied.



select top 100 * from table1
left outer join table2 on tid=rid
left outer join table3 on tid=aid
inner join shipment on sid=sid
where datetime>=convert(datetime,'20070810 00:00:00') and datetime<=convert(datetime,'20070920 23:59:59')

The query above takes 200 ms to 600 ms to execute.


select count(*) from table1
left outer join table2 on tid=rid
left outer join table3 on tid=aid
inner join shipment on sid=sid
where datetime>=convert(datetime,'20070810 00:00:00') and datetime<=convert(datetime,'20070920 23:59:59')

The query above takes 2000ms to 4000 ms to run.



The total number of records that fulfill the where clause is approximately 5000 to 6000. I checked the execution plan and found that the server is actually utilizing the indexes, with operations like index scan and index seek.

Are there any other things that i can do to improve the counting of total records?



Thanks.
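
If the server is SQL Server 2005, one option (a sketch reusing the post's abbreviated join conditions) is to return the total alongside the page in a single statement, so the join runs once instead of twice; the count still has to examine every qualifying row, but the second full pass disappears.

select top 100 *, count(*) over () as total_rows
from table1
left outer join table2 on tid = rid
left outer join table3 on tid = aid
inner join shipment on sid = sid
where datetime >= convert(datetime, '20070810 00:00:00')
and datetime <= convert(datetime, '20070920 23:59:59')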

View 5 Replies View Related

Improving Chatty Applications

Nov 4, 2006

All:

I am presently tasked with improving the performance of a chatty application. What I mean by a chatty application is that the application makes multiple calls to the database server for each user request. In many cases it appears that many of the windows are making multiple calls to the database for the population of dialog boxes. In a couple of cases there are more than 10 dialog boxes that are populated similar to this.

To me this is kind of an "old issue", but my response is (1) cache the dialog information as much as possible and (2) make one call to the database to return all of this data for the dialog boxes if it is not cached. For update calls again make a single call to the database rather than making a call for each individual row of a table that is updated.

I admit that some of this will complicate the code of the application; however, my perspective is that I am supposed to look at this from the database perspective and not from the perspective of "programmer convenience." There will be times in which something that would otherwise get too hairy for a programmer will dictate an additional trip to the server, but this should not normally be the case.

What else do I need to consider?




Dave
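
A sketch of point (2), with hypothetical names: a single procedure can return one result set per dialog box, so the client makes one round trip and walks through them with NextRecordset/NextResult on its side.

CREATE PROCEDURE dbo.GetDialogLookups
AS
BEGIN
    SET NOCOUNT ON
    SELECT StateCode, StateName FROM dbo.States          -- dialog 1
    SELECT CountryCode, CountryName FROM dbo.Countries   -- dialog 2
    SELECT StatusID, StatusText FROM dbo.OrderStatuses   -- dialog 3
END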

View 3 Replies View Related

Improving Conversion Performance

Sep 20, 2006

I have a varchar(max) field that is on its way to a Term Lookup task, so it requires conversion to DT_NTEXT before it gets there. The conversion is taking a really, really, really long time. In my current example I have about 400 rows. Granted the length of the varchar(max) if I 'select max(len(myColumn))' is approximately 14,000,000 so I understand that I'm moving a lot of data. But I'm on a x64 machine with 4 dual core processors and watching Task Manager I can see that I'm only using 1 core out of 8. Anything I'm missing that can speed this up?



Thanks,

Frank

View 3 Replies View Related

Improving Performance Of Search Queries

Dec 5, 2007



I have a number of complex search stored procedures that use the following syntax to try to simplify the code.

WHERE @SearchParam IS NULL OR SearchCol = @SearchParam

Unfortunately, it appears that this is really inefficient as far as the database is concerned.

If I run the following query on the AdventureWorks database (SQL Server 2005 with SP2 and fixes up to v3054)




Code Block
SET STATISTICS IO ON

DECLARE @CustomerID int
SET @CustomerID = 1

SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE @CustomerID IS NULL OR CustomerID = @CustomerID

SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID

I see that the first select results in 45 logical reads, whereas the second results in only 2 logical reads.

Does anyone have any idea how I can get the benefit of a search procedure that does not have loads of IF blocks, or dynamic SQL without this major performance issue?

Cheers,

Justin
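
A sketch of the standard workaround: build only the predicates that actually apply and execute through sp_executesql, which caches a separate plan per predicate combination, so each shape of the search gets the 2-read plan without a wall of IF blocks.

DECLARE @CustomerID int
SET @CustomerID = 1

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE 1 = 1'
IF @CustomerID IS NOT NULL
    SET @sql = @sql + N' AND CustomerID = @CustomerID'
EXEC sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID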

View 3 Replies View Related

Improving Large Table Performance

Aug 15, 2007

We have a table that is 800GB. We are planning to rebuild the clustered index on this table to a different filegroup. The new filegroup and files associated with it will sit on a SAN which will have a 1.5TB allocation. Does anyone have any suggestions in regards to how many files to have associated with the filegroup to provide optimal performance? Apparently we could have 3 LUNs (500GB each), so would 1 file on each LUN provide additional performance as opposed to one file on 1 LUN?

View 1 Replies View Related

Improving SQL Server Searching - Buying A 3rd Component

Oct 26, 2007

Does anyone know of any good components I can buy in to sit on top of my SQL Server to provide the kind of search functionality that Google has - 'Did you mean...', nearest match etc.? I have set up SQL Server Full Text, but to build all the extra functionality will take some time and I think it will be more cost effective to buy a component in. If anyone has used any, could you let me know of your experiences please?

Cheers
Scott

View 2 Replies View Related

Improving Access To Inserted And Deleted Table

Feb 4, 2008

Is there a configuration or a trick to improve the speed of access to the inserted and deleted tables within a trigger? Whenever a trigger is called, the access to inserted or deleted constitutes approximately 95% of the execution time.

Is there a way to have access to inserted and to deleted improved other than copying the data to another table?

View 3 Replies View Related

Optimizing Speed

Sep 4, 2001

I have indexed my SQL Server tables to gain some speed on calling up tables and queries (using VB and ADO). It is still very slow... Is there a move I have to make once my tables are indexed, or are there any tricks to improve the speed? Because I am getting kinda desperate right now :(

View 1 Replies View Related

Optimizing Question

Jul 15, 2000

Hi all,

I have a question in regards to optimistic locking:

I have a database conversion that will be running on a SQL 7.0 system. The process needs to be completed ASAP and to this end, I have tried to set up all aspects of the server to be geared towards speed rather than redundancy for the duration of the process (i.e. moving heavily used tables to separate filegroups on a RAID 0 set, dedicating a separate disk for the database log). I was now looking at trying to tweak locking behaviour to enhance performance (as for the duration of the conversion, no other user will be connecting to the database - the only initiator of data changes will be the conversion application, which feeds statements serially to the server). As far as I know, changing lock settings is something that would be initiated by the application itself, but is there any property I can set on the server to further enhance performance in this area?

Thanks
A
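
A hedged sketch of the SQL 7.0-era knobs (database and table names are illustrative, and this assumes the conversion really is the only writer): with a single serial writer, coarser locks and minimal logging cut per-statement overhead; sp_indexoption trades row/page locks for table-level locking on the hottest tables.

EXEC sp_dboption 'ConvDB', 'select into/bulkcopy', 'true'   -- allow minimally logged operations
EXEC sp_dboption 'ConvDB', 'trunc. log on chkpt.', 'true'   -- keep the log from growing
EXEC sp_indexoption 'dbo.BigTable.PK_BigTable', 'AllowRowLocks', 'false'
EXEC sp_indexoption 'dbo.BigTable.PK_BigTable', 'AllowPageLocks', 'false'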

View 1 Replies View Related

Sql Optimizing Tools

Jun 5, 2002

We are evaluating a tool by Lechotech that can optimize sql statements. It is a pretty good tool, but we would like to compare it against some others. Has anyone seen any other such tools?

View 1 Replies View Related

Optimizing A Trigger / Is There A Better Way

Jan 14, 2005

I'm no SQL wiz, just know basics to get me by... What I'm trying to do is: every time a record is inserted into an online orders table, that record needs to be inserted into another table in another database, but with added information.

This is the Trigger I came up with:

CREATE TRIGGER OtherDatabaseInsertTrigger
ON dbo.t_order
FOR INSERT AS

DECLARE @CLIENT VARCHAR(30)
DECLARE @OrderNumberID INT
SET @CLIENT = 'DevShed'

INSERT INTO test2.dbo.t_order (order_num,customer_num,
order_date,drop_date,package_id,us_customer,us_system,non_us_customer,
non_us_system,postage_selection,num_pages,num_sides,permit_number,
permit_city,permit_state,permit_zip,bre_company,bre_name_dept,bre_address1,
bre_address2,bre_city,bre_state,bre_zip,brc_company,brc_name_dept,brc_address1,
brc_address2,brc_city,brc_state,brc_zip,rae_company,rae_name_dept,rae_address1,
rae_address2,rae_city,rae_state,rae_zip,return_company,return_name_dept,
return_address1,return_address2,return_city,return_state,return_zip,salutation_other,
printcost,stock,production,personalization,postage_total,listcost,listcharged,
tax_total,order_total,discount,coupon,milestone_1,milestone_2,milestone_3,milestone_4,
milestone_5,milestone_6,has_logo,has_list,has_rad,tag,nonprofit_fee,order_problem,
experian_only,experian_orderid,experian_price,experian_returncode,experian_returnmessage,
experian_queryid,experian_amt_to_purchase,bp_bankname,bp_address1,bp_address2,bp_city,
bp_state,bp_zip,bp_country,bp_phone,bp_signername,bp_signertitle,bp_signeremail,logo_option,
logo_text,signature_option,list_option,layout_id,cd_only,cd_oi,cd_rc,cd_ot,cd_email,
shipping_id,shipping_charge,pfp_pnref,pfp_result,pfp_respmsg,pfp_authcode,pfp_avsaddr,
pfp_avszip,creditcard_num,ccexp_month,ccexp_year,cc_name_on_card,cust_postage)
SELECT order_num,customer_num,
order_date,drop_date,package_id,us_customer,us_system,non_us_customer,
non_us_system,postage_selection,num_pages,num_sides,permit_number,
permit_city,permit_state,permit_zip,bre_company,bre_name_dept,bre_address1,
bre_address2,bre_city,bre_state,bre_zip,brc_company,brc_name_dept,brc_address1,
brc_address2,brc_city,brc_state,brc_zip,rae_company,rae_name_dept,rae_address1,
rae_address2,rae_city,rae_state,rae_zip,return_company,return_name_dept,
return_address1,return_address2,return_city,return_state,return_zip,salutation_other,
printcost,stock,production,personalization,postage_total,listcost,listcharged,
tax_total,order_total,discount,coupon,milestone_1,milestone_2,milestone_3,milestone_4,
milestone_5,milestone_6,has_logo,has_list,has_rad,tag,nonprofit_fee,order_problem,
experian_only,experian_orderid,experian_price,experian_returncode,experian_returnmessage,
experian_queryid,experian_amt_to_purchase,bp_bankname,bp_address1,bp_address2,bp_city,
bp_state,bp_zip,bp_country,bp_phone,bp_signername,bp_signertitle,bp_signeremail,logo_option,
logo_text,signature_option,list_option,layout_id,cd_only,cd_oi,cd_rc,cd_ot,cd_email,
shipping_id,shipping_charge,pfp_pnref,pfp_result,pfp_respmsg,pfp_authcode,pfp_avsaddr,
pfp_avszip,creditcard_num,ccexp_month,ccexp_year,cc_name_on_card,cust_postage
FROM inserted;

SET @OrderNumberID = (SELECT @@IDENTITY)
UPDATE test2.dbo.t_order SET client = @CLIENT WHERE oid = @OrderNumberID;

I don't know if it's possible to do an INSERT INTO ... SELECT with additional fields in the 2nd table. I was trying, but failed, and had to resort to the bottom piece of SQL to get the ID and run a separate query to add the additional items to the new record in table 2.

Any SQL masters out there who can help me make this better, or know of some other way to do this?

Thanks in advance!
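
For the record, INSERT INTO ... SELECT can supply the extra value inline, which also keeps the trigger correct when one statement inserts several rows (the @@IDENTITY-plus-UPDATE approach only tags the last one). A minimal sketch with the column list abbreviated; in practice, use the full list from the trigger above:

INSERT INTO test2.dbo.t_order (client, order_num, customer_num, order_date)
SELECT 'DevShed', order_num, customer_num, order_date
FROM inserted;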

View 6 Replies View Related

Optimizing A Query

Jun 3, 2004

Hello,
I am hoping someone here can help me optimize the following query:
SELECT
INCOMING.DATE_TIME, INCOMING.URL, INCOMING.HITS,
USER_NAMES.USER_LOGIN_NAME,
CATEGORY.NAME
FROM
(wsHQMay2004.dbo.INCOMING INCOMING INNER JOIN wsHQMay2004.dbo.CATEGORY CATEGORY ON INCOMING.CATEGORY = CATEGORY.CATEGORY)
INNER JOIN wsHQMay2004.dbo.USER_NAMES USER_NAMES ON INCOMING.USER_ID = USER_NAMES.USER_ID
WHERE
INCOMING.DATE_TIME >= '2004-05-01 00:00:00.00' AND
INCOMING.DATE_TIME < '2004-06-01 00:00:00.00'
ORDER BY
INCOMING.URL ASC


I am just hoping to get some tips on perhaps a better way to write this query as right now, due to the size of the incoming table, this query just takes forever.

Any advice will be appreciated.

Thanks.
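
A hedged suggestion, not from the thread: the WHERE is a pure range on INCOMING.DATE_TIME, so an index leading on that column lets the month be seeked rather than scanned (most useful when the month is a small slice of the table; the ORDER BY URL will still require a sort).

USE wsHQMay2004
CREATE INDEX IX_INCOMING_DATE_TIME ON dbo.INCOMING (DATE_TIME)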

View 5 Replies View Related

Optimizing Cursor

Jul 7, 2004

I've tried a bunch of different ways in an effort to stay away from using a cursor, but I haven't been able to accomplish what I need to do without one. So, I coded this process using cursors and performance (as expected) is pretty mediocre. I was wondering if someone could take a quick look and suggest a different approach or maybe suggest ways to optimize the current code.

*Attached* is my code.

TIA

View 4 Replies View Related

Need Help In Optimizing SQL Script.

May 4, 2006

Kudos to y'all!!! I have this SQL script...

SELECT * FROM OPENQUERY (liorder, '
SELECT DISTINCT
a.AUF_NR AS OrdNo,
e.KU_NAME AS Customer,
a.AUF_POS AS Pos,
f.PC_PANE_NO AS Pane,
f.PC_SGGL_SEQ AS Component,
f.PC_SGGL_COD AS GlassCode,
d.GL_BEZ AS GlassDesc,
a.ANZ AS Qty,
((c.BREITE/1000*c.HOEHE/1000)*a.ANZ) AS SQM,
(a.ANZ*c.SUM_BRUTTO) AS Val,
(CASE
WHEN(SELECT SUM(h.KF_FERT_QTY)
FROM LIPROD.KAPA_AUS_FERT h
WHERE a.AUF_NR = h.KF_ORDER_NO AND
a.AUF_POS = h.KF_ORDER_POS AND
f.PC_PANE_NO = h.KF_SCHEIB_NR AND
f.PC_SGGL_SEQ = CASE
WHEN h.KF_SEQ_NR = 0 THEN 1
ELSE h.KF_SEQ_NR
END AND
h.KF_SCHR_NR IN (2, 402, 502, 602)) IS NULL THEN 0
ELSE(SELECT SUM(h.KF_FERT_QTY)
FROM LIPROD.KAPA_AUS_FERT h
WHERE a.AUF_NR = h.KF_ORDER_NO AND
a.AUF_POS = h.KF_ORDER_POS AND
f.PC_PANE_NO = h.KF_SCHEIB_NR AND
f.PC_SGGL_SEQ = CASE
WHEN h.KF_SEQ_NR = 0 THEN 1
ELSE h.KF_SEQ_NR
END AND
h.KF_SCHR_NR IN (2, 402, 502, 602))
END) AS Done
FROM LIORDER.AUF_STAT a,
LIORDER.AUF_KOPF b,
LIORDER.AUF_POS c,
LIORDER.GLAS_DATEN d,
LIORDER.KUST_ADR e,
LIPROD.AUF_POS_COMP f
WHERE EXISTS
(SELECT g.AUF_NR
FROM LIORDER.AUF_STAT g
WHERE g.AUF_NR = a.AUF_NR AND
g.RG_OFFEN != 0) AND
EXISTS
(SELECT i.KF_ORDER_NO
FROM LIPROD.KAPA_AUS_FERT i
WHERE a.AUF_NR = i.KF_ORDER_NO AND
i.KF_SCHR_NR IN (2, 402, 502, 602)) AND
a.AUF_NR = b.AUF_NR AND
b.AUF_NR = c.AUF_NR AND
c.AUF_NR = f.PC_ORDER_NO AND
a.AUF_POS = c.AUF_POS AND
c.AUF_POS = f.PC_ORDER_POS AND
b.KUNR = e.KU_NR AND
f.PC_SGGL_COD = d.IDNR AND
a.HISTORY = 0 AND
b.AUF_OFF = 0 AND
c.VER_ART != ''V'' AND
e.KU_VK_EK = 0 AND
e.KU_NAME IS NOT NULL
ORDER BY a.AUF_NR DESC,
a.AUF_POS ASC')

...It is retrieving data from an Oracle linked server. But the execution time is so friggin' long! I tried running it, and after around 30 minutes it hadn't shown any results, so I couldn't even tell the exact time it would take to return results. Do you have any tips regarding query optimization? Thanks in advance.
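
One simplification worth trying (a sketch): since the inner query runs on Oracle, NVL can replace the CASE that repeats the identical correlated subquery, so it is written (and at worst evaluated) once per row instead of twice. A drop-in replacement for the whole CASE ... END AS Done expression:

NVL((SELECT SUM(h.KF_FERT_QTY)
     FROM LIPROD.KAPA_AUS_FERT h
     WHERE a.AUF_NR = h.KF_ORDER_NO AND
           a.AUF_POS = h.KF_ORDER_POS AND
           f.PC_PANE_NO = h.KF_SCHEIB_NR AND
           f.PC_SGGL_SEQ = CASE WHEN h.KF_SEQ_NR = 0 THEN 1 ELSE h.KF_SEQ_NR END AND
           h.KF_SCHR_NR IN (2, 402, 502, 602)), 0) AS Done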

View 9 Replies View Related

Need Help Optimizing A Sql Query...

May 28, 2004

We have an insurance program up and running in our regions and we get random reports of slowness. In an effort to track down all facets of the slowness, I am looking at all my SQL code to make sure it is as efficient as possible. I know a little about SQL and writing SQL statements, enough to help me do my job well, but I do not write optimized code.

if request.form("selPolicyNum") <> "" then
sqlPolicyInfo = "SELECT PIEffectiveDate, PIExpirationdate from PIMaster where PIPolicyNum='" & request.form("selPolicyNum") & "'"
Set rsPolicyInfo = Server.CreateObject("ADODB.Recordset")
Set rsPolicyInfo.ActiveConnection = webLookupConn
'rsPolicyInfo.CursorType = adOpenDynamic
'rsPolicyInfo.LockType = adLockOptimistic
rsPolicyInfo.Source = sqlPolicyInfo
'rsPolicyInfo.CursorLocation = adUseClient
rsPolicyInfo.Open
'response.write sqlPolicyInfo
end if


if request.form("selPolicyNum") <> "" then
sqlRemarkInsert="INSERT INTO clRemarks (PolicyNum, AccountNum, UserID, RemarkBody, CreationDate, PolEffectiveDate, PolExpDate, RemarkCategory1, RemarkCategory2, RemarkCategory3, RemarkCategory4, RemarkCategory5, RemarkCategory6, RemarkCategory7, RemarkCategory8, RemarkCategory9, RemarkCategory10, RemarkCategory11, RemarkCategory12, RemarkCategory13) VALUES ('"
sqlRemarkInsert=sqlRemarkInsert & request.form("selPolicyNum") & "', '" & request.form("txtAcctNum") & "', '" & request("aysmenu")("userid") & "', '" & Replace(request.form("RemarkBody"),"'","''") & "', '" & CurrDate & "', '" & rsPolicyInfo("PIEffectiveDate") & "', '" & rsPolicyInfo("PIExpirationDate") & "', '" & request.form("chkIssuingInstructions") & "', '" & request.form("chkLossControl") & "', '" & request.form("chkLossHistoryClaims") & "', '" & request.form("chkReinsurance") & "', '" & request.form("chkMVRDriverIssues") & "', '" & request.form("chkBilling") & "', '" & request.form("chkGeneral") & "', '" & request.form("chkCancellationDNR") & "', '" & request.form("chkPolicyAmendments") & "', '" & request.form("chkExperienceRatingInfo") & "', '" & request.form("chkDiscretionaryPricing") & "', '" & request.form("chkPremiumAudit") & "', '" & request.form("chkQuotingInstructions") & "')"
else
sqlRemarkInsert="INSERT INTO clRemarks (AccountNum, UserID, RemarkBody, CreationDate, RemarkCategory1, RemarkCategory2, RemarkCategory3, RemarkCategory4, RemarkCategory5, RemarkCategory6, RemarkCategory7, RemarkCategory8, RemarkCategory9, RemarkCategory10, RemarkCategory11, RemarkCategory12, RemarkCategory13) VALUES ('"
sqlRemarkInsert=sqlRemarkInsert & request.form("txtAcctNum") & "', '" & request("aysmenu")("userid") & "', '" & Replace(request.form("RemarkBody"),"'","''") & "', '" & CurrDate & "', '" & request.form("chkIssuingInstructions") & "', '" & request.form("chkLossControl") & "', '" & request.form("chkLossHistoryClaims") & "', '" & request.form("chkReinsurance") & "', '" & request.form("chkMVRDriverIssues") & "', '" & request.form("chkBilling") & "', '" & request.form("chkGeneral") & "', '" & request.form("chkCancellationDNR") & "', '" & request.form("chkPolicyAmendments") & "', '" & request.form("chkExperienceRatingInfo") & "', '" & request.form("chkDiscretionaryPricing") & "', '" & request.form("chkPremiumAudit") & "', '" & request.form("chkQuotingInstructions") & "')"
end if
rsRemarkInsert=ComProConn.Execute (sqlRemarkInsert)

rsPolicyInfo.close
Set rsPolicyInfo=nothing


That is the code used to store a remark in the system. Is this code optimized already, or should some of the DB parameters be changed to make things faster? This is just an example of many of the SQL statements that I may or may not have to fix. Thank you for any and all help.

View 1 Replies View Related

Help Optimizing Query

Jun 23, 2008

I have the following query that works fine, but I'm wondering if there is a way to optimize it further, as when I analyze it through SQL Profiler it is at the top of the list for CPU usage:

SELECT DISTINCT site, d,
(SELECT COUNT(id) FROM anP aPV2
 WHERE aPV2.confirmed=1 AND aPV2.stage=2 AND aPV2.inserted=0
 AND aPV2.site=aPV1.site AND aPV2.d>=aPV1.d AND aPV2.d<=aPV1.d) AS mycount
FROM anP aPV1
WHERE confirmed=1 AND stage=2 AND inserted=0
ORDER BY site, d
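
A sketch, assuming the range aPV2.d >= aPV1.d AND aPV2.d <= aPV1.d really is meant as an exact match on d: the correlated subquery then collapses into a plain GROUP BY, so anP is scanned once rather than once per row.

SELECT site, d, COUNT(id) AS mycount
FROM anP
WHERE confirmed = 1 AND stage = 2 AND inserted = 0
GROUP BY site, d
ORDER BY site, d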

View 8 Replies View Related

Need Help Optimizing Code

Jan 3, 2007

This code takes far too long to run; any suggestions:


create table #minatest(
DatabaseName varchar(100),
pagecount varchar(100),
vfillfactor varchar(100),
TableName varchar(200),
IndexName varchar(100),
FragmentPercentage int,
newfragmentvalue int,
index_type_desc varchar(100),
index_level int
)

-- minatest table will contain indexes with fragmentation above 10% which need to be defragged
-- this will go through all databases
-- null indexes will not be affected

exec sp_msforeachdb'
use ?
INSERT INTO #minatest
SELECT
db_name(database_id),
phystat.page_count,
i.fill_factor,
OBJECT_NAME(i.object_id),
i.name,
phystat.avg_fragmentation_in_percent,
newfragmentvalue = 0,
index_type_desc,
index_level
FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, ''DETAILED'') phystat
JOIN sys.indexes i ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id
WHERE phystat.avg_fragmentation_in_percent > 10 AND phystat.page_count < 10000
'


DECLARE @Counts int,
@i int,
@DatabaseName varchar(100),
@pagecount varchar(100),
@vfillfactor varchar(100),
@TableName varchar(200),
@IndexName varchar(100),
@sql nvarchar(4000),
@nfsql nvarchar(4000),
@FragmentPercentage int,
@params nvarchar(4000),
@index_type_desc varchar(100),
@index_level int,
@vnewfrag int

SELECT @params = N'@cnt int OUTPUT'

select @Counts = count(Databasename)
from #minatest -- sets the maximum number of rows to loop through

declare targets cursor -- declare cursor with values to search through
for
select * from #minatest

open targets -- open cursor


fetch next from targets into @DatabaseName,@pagecount,@vfillfactor,@TableName,@IndexName,@FragmentPercentage,@vnewfrag,@index_type_desc,@index_level-- take rows from table


select @i=0
while @@fetch_status=0 and @i<=@Counts-- set loop condition

begin

select @sql = 'USE '+@DatabaseName+'; '+
' ALTER INDEX '+@IndexName+' ON '+ @TableName+
' REBUILD with (ONLINE=ON,SORT_IN_TEMPDB=ON,STATISTICS_NORECOMPUTE=OFF);'
exec sp_executesql @sql
select @nfsql = 'select @cnt = avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats(NULL,NULL, NULL, NULL, ''DETAILED'') phystat JOIN '+@DatabaseName+'.sys.indexes i ON i.object_id = phystat.object_id AND i.index_id = phystat.index_id WHERE i.name='''+@IndexName+''' and index_type_desc='''+@index_type_desc+''' and index_level='''+CAST(@index_level as varchar(20))+''''
exec sp_executesql @nfsql ,@params, @cnt=@vnewfrag OUTPUT

update #minatest set newfragmentvalue = @vnewfrag where IndexName = @IndexName and TableName = @TableName
select @i=@i+1

fetch next from targets into @DatabaseName,@pagecount,@vfillfactor,@TableName,@IndexName,@FragmentPercentage,@vnewfrag,@index_type_desc,@index_level-- take next field of table


end

close targets
DEALLOCATE targets

ALTER TABLE #minatest DROP COLUMN index_type_desc,index_level
select * from #minatest -- displays which indexes where defraged and their new frag value
drop table #minatest

View 5 Replies View Related

Optimizing Query

Sep 18, 2007

Hi,

I have the below query written so that I do not insert entries that already exist in the table. I am trying to put in 70000 entries in a single shot and it breaks down. Can anybody help me optimize the below query so that it doesn't break? Is there any other way I can write this query?

Please do help me with this. Thanks in advance. The table into which I am inserting the entries has a composite key composed of ACCT_NUM_MIN and ACCT_NUM_MAX. I am getting this from a table which doesn't have a primary key (CORE).


INSERT INTO CRF (CORE_UID,ACCT_NUM_MIN, ACCT_NUM_MAX,BIN, BUS_ID,BUS_NM,ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
SELECT UID , LEFT(ACCT_NUM_MIN,16), LEFT(ACCT_NUM_MAX,16), BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE FROM CORE o
WHERE NOT EXISTS (SELECT * FROM CRF i
WHERE o.ACCT_NUM_MIN = i.ACCT_NUM_MIN
AND o.ACCT_NUM_MAX = i.ACCT_NUM_MAX)
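
A sketch of batching the load (SQL 2000-compatible; 10000 is an illustrative batch size, and it assumes CORE holds no duplicate min/max pairs within itself): each pass inserts a slice, and the NOT EXISTS naturally picks up where the previous pass stopped, keeping each transaction and the log small.

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    INSERT INTO CRF (CORE_UID, ACCT_NUM_MIN, ACCT_NUM_MAX, BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
    SELECT UID, LEFT(ACCT_NUM_MIN,16), LEFT(ACCT_NUM_MAX,16), BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE
    FROM CORE o
    WHERE NOT EXISTS (SELECT * FROM CRF i
                      WHERE o.ACCT_NUM_MIN = i.ACCT_NUM_MIN
                        AND o.ACCT_NUM_MAX = i.ACCT_NUM_MAX)
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0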

View 20 Replies View Related

Optimizing A Query

Nov 27, 2007

hi all

View 8 Replies View Related

Optimizing A JOIN

Mar 1, 2006

I have two tables. One has approx 90,000 rows with a field ... let's call it BigInt (and it is defined as a bigint data type). I have a reference table, with approx 10,000,000 rows. In this reference table, I have starting_bigint and ending_bigint fields. I want to pull out all of the reference data from the reference table for all 90,000 rows in the transaction table where the BigInt from the transaction table is between the starting_bigint and ending_bigint in the reference table.

I have the join working now, but it is not as optimized as I would like. It appears no matter what I do, the query does a full table scan on the 10,000,000 rows in the reference table.

Sample code:

SELECT ref.*, tran.bigint
FROM transactiontable tran
INNER JOIN referencetable ref ON tran.bigint BETWEEN ref.starting_bigint AND ref.ending_bigint

Yes, all 3 of the fields are indexed. I even have a composite index on the reference table with the starting_bigint and ending_bigint fields selected as the composite.

Any help would be appreciated.

Robert H. Kershberg
IT Director
Tax Credit Company

View 5 Replies View Related

Optimizing SQL - Union

Mar 29, 2006

Hello all,

I have a table with thousands of rows in this format:

id   col1   col2   col3   col4
---  -----  -----  -----  -----
1    nm     78     xyz    pir
2    bn     45     abc    dir

I now want to get the data from this table in this format:

field   val
------  ----
col1    nm
col1    bn
col2    78
col2    45
col3    xyz
col3    abc
col4    pir
col4    dir

In order to do this I am doing a union:

select * into #tempUpdate from (
select 'col1' as field, col1 as val from table1
union
select 'col2' as field, col2 as val from table1
union
select 'col3' as field, col3 as val from table1
) t

The above example query is smaller - I have a much bigger table with about 80 columns (imagine the size of my union query :)) and this takes a lot of time to execute. Can someone please suggest a better way to do this?

The results of this union query are selected into a temp table, which I then use to update another table. I am using SQL Server 2000.

My main concern is performance. Any ideas please?

Thanks
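
A sketch of the cheapest change: UNION deduplicates the combined result with a sort, which is expensive across 80 branches; if duplicate rows are acceptable (or cannot occur), UNION ALL skips that work entirely. Casting each column to a common type (the varchar(100) here is illustrative) avoids implicit-conversion surprises in val.

select field, val
into #tempUpdate
from (
    select 'col1' as field, cast(col1 as varchar(100)) as val from table1
    union all
    select 'col2', cast(col2 as varchar(100)) from table1
    union all
    select 'col3', cast(col3 as varchar(100)) from table1
) t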

View 6 Replies View Related

Optimizing A Big Query

Feb 9, 2007

To start with, I'll give a simplified overview of my data.

BaseRecord (4mil rows, 25k in each Region)
ID | Name | Region | etc

OtherData (7.5mil rows, 1 or 2 per ID)
ID | Type(1/2) | Data

ProblemTable (4mil rows)
ID | ConcatenatedHistory

The concatenated history field is an nvarchar with up to 20 different pipe-delimited date/code combinations, e.g. '01/01/2007X|11/28/2006Q|11/12/2004Q|'

Using left outer joins (all from base, the rest optional) I've got a view something like:

View (4mil rows)
ID | Name | Region | etc | Data | Data2 | ConcatenatedHistory

Querying it, it takes about 15-20 seconds to do this:

Select ID, Name, Region, etc, Data, Data2, ConcatenatedHistory



View 5 Replies View Related

Optimizing Database

Jul 20, 2005

I have an application that allows user input, and is translating it by stripping out the HTML tags and also doing some code translations. The user is able to later edit their input. However, it's unfeasible to reverse-translate it back, as the logic would be too complicated, and there are instances where it won't be possible.

So, what I'm thinking of doing to speed up performance is to duplicate the user data: one copy for the native data, and the other for the translated data. When the user edits their input, the native data is shown. When the application is showing the data in a page, the translated data is shown.

My question is: would it make a performance difference if I store the native data and the translated data in the same table, or would it be better to store the cached data in another table?

View 1 Replies View Related

Help With Optimizing Query

Feb 2, 2007

If I have the following tables:

Table Customer_tbl

cust_id as int (PK), ...

Table Item_tbl

item_id as int (PK), ...

Table Selected_Items_tbl

selected_item_id as int (PK), cust_id as int (FK), item_id as int (FK), ...

-------

With the following query:

select cust_ID from selected_items_tbl
WHERE item_id in (1, 2, n)
GROUP BY cust_id, item_id
HAVING cust_id in (select cust_id from selected_items_tbl where item_id = 1)
AND cust_id in (select cust_id from selected_items_tbl where item_id = 2)
AND cust_id in (select cust_id from selected_items_tbl where item_id = n)

-------

Each of these tables has other items included. Selected_Items_tbl holds zero to many of the items from the item_tbl for each customer. If I am searching for a customer who has item 1 AND item 2 AND item n, what would be the most efficient query for this? Currently, the above query is what I am working with. However, it seems that we should be able to do this type of search in a single query (without subqueries).
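
A sketch of the standard "relational division" form, shown for items 1, 2 and 3 (n in the original): one pass over Selected_Items_tbl, with no per-item subqueries.

SELECT cust_id
FROM Selected_Items_tbl
WHERE item_id IN (1, 2, 3)
GROUP BY cust_id
HAVING COUNT(DISTINCT item_id) = 3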

View 3 Replies View Related

Optimizing Reports

Apr 24, 2007

I have combined three reports into one big report. I would like to somehow cache the big report, and then create little reports from the cached report. What would be the best way to go about doing this?

View 3 Replies View Related

Optimizing Query To Run

Sep 27, 2006

I'm trying to get a query to run which looks at completed orders that have not had another paid order in 180 days. The database I'm running it against is very large, so I can't get it to complete. Here's what I've got:

select Date = convert(datetime, cl1.cl_rundate, 102),
count(cl1.cl_recno) as 'Completed Initials',
cl1.cl_status as Status
from dbo.vw_Completedorders cl1
where cl1.lob_lineofbusiness = 'aaa'
and cl1.cl_rundate > '20060801'
and not exists (
select cl2.cl_company from dbo.vw_Paidorders cl2
where cl2.lob_lineofbusiness = 'aa'
and cl2.cl_company = cl1.cl_order
and cl2.cl_rundate > '20060101'
and datediff(day, cl2.cl_rundate, cl1.cl_rundate) < 180)
group by cl1.cl_status, cl1.cl_rundate
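
A hedged rewrite: datediff(day, ...) applied to the inner column defeats any index on cl_rundate. Expressing the same window as a plain range (equivalent up to day boundaries, assuming cl_rundate carries no time-of-day component) keeps the NOT EXISTS probe sargable:

select Date = convert(datetime, cl1.cl_rundate, 102),
count(cl1.cl_recno) as 'Completed Initials',
cl1.cl_status as Status
from dbo.vw_Completedorders cl1
where cl1.lob_lineofbusiness = 'aaa'
and cl1.cl_rundate > '20060801'
and not exists (
select * from dbo.vw_Paidorders cl2
where cl2.lob_lineofbusiness = 'aa'
and cl2.cl_company = cl1.cl_order
and cl2.cl_rundate > '20060101'
and cl2.cl_rundate > dateadd(day, -180, cl1.cl_rundate))
group by cl1.cl_status, cl1.cl_rundate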

View 11 Replies View Related

Optimizing Tables

Jul 9, 2006

Hi,
We have 3 tables in SQL Server for similar information about 3 different countries. Sometimes I select from all countries, so I need to use all tables, and sometimes just one country, so I select from one table.

I want to know: is it good to combine these three tables into one and add one field to define the country name?
Which way is better, my own way or this new one?
Please let me know the advantages and disadvantages of each of these ways.

Thanks
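
One middle ground worth considering (a sketch; the table names are hypothetical): keep the three country tables and add a UNION ALL view for the all-countries case, so single-country queries stay exactly as they are and the combined case needs no data copying.

CREATE VIEW dbo.Info_AllCountries
AS
SELECT 'US' AS Country, * FROM dbo.Info_US
UNION ALL
SELECT 'DE' AS Country, * FROM dbo.Info_DE
UNION ALL
SELECT 'JP' AS Country, * FROM dbo.Info_JP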

View 4 Replies View Related

Optimizing LIKE OR Selects

Feb 2, 2006

I'm having problems optimizing a SQL select statement that uses a LIKE statement coupled with an OR clause. For simplicity's sake, I'll demonstrate this with a scaled-down example:



table Company, fields CompanyID, CompanyName

table Address, fields AddressID, AddressName

table CompanyAddressAssoc, fields AssocID, CompanyID, AddressID



CompanyAddressAssoc is the many-to-many associative table for Company and Address. A search query is required that, given a search string (i.e. 'TEST'), returns all Company -> Address records where either the CompanyName or AddressName starts with the parameter:

Select c.CompanyID, c.CompanyName, a.AddressName

FROM Company c

LEFT OUTER JOIN CompanyAddressAssoc caa ON caa.CompanyID = c.CompanyID

LEFT OUTER JOIN Address a ON a.AddressID = caa.AddressID

WHERE ((c.CompanyName LIKE 'TEST%') OR (a.AddressName LIKE 'TEST%'))



There are proper indexes on all tables. The execution plan creates a hash table on one LIKE query, then meshes in the other LIKE query. This takes a very long time to do, given a dataset of 500,000+ records in Company and Address.

Is there any way to optimize this query, or is it a problem with the base table implementation?

Any advice would be appreciated.
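
A sketch of the usual rewrite: split the OR into two searches so each LIKE-prefix predicate can drive its own index, and let UNION remove any overlap. The first branch keeps the outer joins so companies without addresses are still returned when their name matches, as in the original.

SELECT c.CompanyID, c.CompanyName, a.AddressName
FROM Company c
LEFT OUTER JOIN CompanyAddressAssoc caa ON caa.CompanyID = c.CompanyID
LEFT OUTER JOIN Address a ON a.AddressID = caa.AddressID
WHERE c.CompanyName LIKE 'TEST%'
UNION
SELECT c.CompanyID, c.CompanyName, a.AddressName
FROM Company c
INNER JOIN CompanyAddressAssoc caa ON caa.CompanyID = c.CompanyID
INNER JOIN Address a ON a.AddressID = caa.AddressID
WHERE a.AddressName LIKE 'TEST%'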

View 6 Replies View Related






