Poor Query Optimizing

Nov 4, 2007

Hi there,

I've got a SQL 2005 table (DataTable) with 5.5 million rows and a clustered index on an integer column. The column is a foreign key pointing to another table (LookupTable) that associates the integer with an nvarchar(50) column. Both the integer and nvarchar(50) columns are in a clustered index.

I'm running equivalent select statements with slightly different where clauses, and am not seeing the performance similarities I'd expect.

select * from DataTable join LookupTable on DataTable.Integer = LookupTable.Integer

1. Setting the where clause to the nvarchar(50) value on the LookupTable takes 64 seconds.
2. Setting the where clause to Integer = (select integer from LookupTable where nvarchar(50) = 'mysearchstring') takes 61 seconds.
3. Setting the where clause to Integer = 526 takes 1 second.

Is this a shortcoming in SQL Server, or a misconfiguration on my part? I would have expected the optimizer to rewrite my query to be essentially #3 in all cases before touching DataTable, which would make all three versions take 1 second.

Thanks,
Jason
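
One approach that should force the fast plan (a sketch; IntegerCol and NameCol are placeholder names, since the post doesn't name the columns): resolve the lookup value into a variable first, then filter DataTable on it, which reproduces case #3 by hand.

DECLARE @IntValue int;

SELECT @IntValue = IntegerCol
FROM LookupTable
WHERE NameCol = N'mysearchstring'; -- the N'' literal avoids an implicit nvarchar conversion

SELECT *
FROM DataTable
WHERE IntegerCol = @IntValue;      -- the same clustered index seek as case #3

If the joined form stays slow even with current statistics, comparing the estimated row counts in the two plans is usually the quickest way to see where the optimizer goes wrong.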




Optimizing A Query

Jun 3, 2004

Hello,
I am hoping someone here can help me optimize the following query:
SELECT
INCOMING.DATE_TIME, INCOMING.URL, INCOMING.HITS,
USER_NAMES.USER_LOGIN_NAME,
CATEGORY.NAME
FROM
(wsHQMay2004.dbo.INCOMING INCOMING INNER JOIN wsHQMay2004.dbo.CATEGORY CATEGORY ON INCOMING.CATEGORY = CATEGORY.CATEGORY)
INNER JOIN wsHQMay2004.dbo.USER_NAMES USER_NAMES ON INCOMING.USER_ID = USER_NAMES.USER_ID
WHERE
INCOMING.DATE_TIME >= '2004-05-01 00:00:00.00' AND
INCOMING.DATE_TIME < '2004-06-01 00:00:00.00'
ORDER BY
INCOMING.URL ASC


I am just hoping to get some tips on a better way to write this query; right now, due to the size of the INCOMING table, it takes forever.

Any advice will be appreciated.

Thanks.
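
A hedged first step (the index name is made up; INCLUDE requires SQL Server 2005 or later, so on earlier versions a plain index on DATE_TIME is the fallback): give the range filter an index to seek on, so only May's rows are touched.

USE wsHQMay2004
GO
CREATE INDEX IX_INCOMING_DateTime
ON dbo.INCOMING (DATE_TIME)
INCLUDE (URL, HITS, CATEGORY, USER_ID)

The ORDER BY INCOMING.URL will still cost a sort; dropping it (or sorting in the client) is worth testing if the ordering isn't essential.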


Need Help Optimizing A Sql Query...

May 28, 2004

We have an insurance program up and running in our regions, and we get random reports of slowness. In an effort to track down all facets of the slowness, I am looking at all my SQL code to make sure it is as efficient as possible. I know a little about SQL and writing SQL statements, enough to do my job well, but I do not write optimized code.

if request.form("selPolicyNum") <> "" then
sqlPolicyInfo = "SELECT PIEffectiveDate, PIExpirationdate from PIMaster where PIPolicyNum='" & request.form("selPolicyNum") & "'"
Set rsPolicyInfo = Server.CreateObject("ADODB.Recordset")
Set rsPolicyInfo.ActiveConnection = webLookupConn
'rsPolicyInfo.CursorType = adOpenDynamic
'rsPolicyInfo.LockType = adLockOptimistic
rsPolicyInfo.Source = sqlPolicyInfo
'rsPolicyInfo.CursorLocation = adUseClient
rsPolicyInfo.Open
'response.write sqlPolicyInfo
end if


if request.form("selPolicyNum") <> "" then
sqlRemarkInsert="INSERT INTO clRemarks (PolicyNum, AccountNum, UserID, RemarkBody, CreationDate, PolEffectiveDate, PolExpDate, RemarkCategory1, RemarkCategory2, RemarkCategory3, RemarkCategory4, RemarkCategory5, RemarkCategory6, RemarkCategory7, RemarkCategory8, RemarkCategory9, RemarkCategory10, RemarkCategory11, RemarkCategory12, RemarkCategory13) VALUES ('"
sqlRemarkInsert=sqlRemarkInsert & request.form("selPolicyNum") & "', '" & request.form("txtAcctNum") & "', '" & request("aysmenu")("userid") & "', '" & Replace(request.form("RemarkBody"),"'","''") & "', '" & CurrDate & "', '" & rsPolicyInfo("PIEffectiveDate") & "', '" & rsPolicyInfo("PIExpirationDate") & "', '" & request.form("chkIssuingInstructions") & "', '" & request.form("chkLossControl") & "', '" & request.form("chkLossHistoryClaims") & "', '" & request.form("chkReinsurance") & "', '" & request.form("chkMVRDriverIssues") & "', '" & request.form("chkBilling") & "', '" & request.form("chkGeneral") & "', '" & request.form("chkCancellationDNR") & "', '" & request.form("chkPolicyAmendments") & "', '" & request.form("chkExperienceRatingInfo") & "', '" & request.form("chkDiscretionaryPricing") & "', '" & request.form("chkPremiumAudit") & "', '" & request.form("chkQuotingInstructions") & "')"
else
sqlRemarkInsert="INSERT INTO clRemarks (AccountNum, UserID, RemarkBody, CreationDate, RemarkCategory1, RemarkCategory2, RemarkCategory3, RemarkCategory4, RemarkCategory5, RemarkCategory6, RemarkCategory7, RemarkCategory8, RemarkCategory9, RemarkCategory10, RemarkCategory11, RemarkCategory12, RemarkCategory13) VALUES ('"
sqlRemarkInsert=sqlRemarkInsert & request.form("txtAcctNum") & "', '" & request("aysmenu")("userid") & "', '" & Replace(request.form("RemarkBody"),"'","''") & "', '" & CurrDate & "', '" & request.form("chkIssuingInstructions") & "', '" & request.form("chkLossControl") & "', '" & request.form("chkLossHistoryClaims") & "', '" & request.form("chkReinsurance") & "', '" & request.form("chkMVRDriverIssues") & "', '" & request.form("chkBilling") & "', '" & request.form("chkGeneral") & "', '" & request.form("chkCancellationDNR") & "', '" & request.form("chkPolicyAmendments") & "', '" & request.form("chkExperienceRatingInfo") & "', '" & request.form("chkDiscretionaryPricing") & "', '" & request.form("chkPremiumAudit") & "', '" & request.form("chkQuotingInstructions") & "')"
end if
rsRemarkInsert=ComProConn.Execute (sqlRemarkInsert)

rsPolicyInfo.close
Set rsPolicyInfo=nothing


That is the code used to store a remark in the system. Is this code optimized already, or should some of the db parameters be changed to make things faster? This is just an example of many of the SQL statements that I may or may not have to fix. Thank you for any and all help.
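
One thing that stands out beyond the cursor settings: every value is concatenated straight into the SQL text (only RemarkBody is escaped), which both invites injection and prevents plan reuse. A sketch of the parameterized equivalent on the SQL Server side, with a made-up policy number and an assumed varchar(20) type for the parameter; from ASP the same effect comes from an ADODB.Command with Parameters instead of string concatenation:

-- Parameterized lookup: the plan is compiled once and reused for any
-- @PolicyNum value, and quoting bugs go away. Match the type/length
-- to PIMaster.PIPolicyNum.
EXEC sp_executesql
    N'SELECT PIEffectiveDate, PIExpirationdate
      FROM PIMaster
      WHERE PIPolicyNum = @PolicyNum',
    N'@PolicyNum varchar(20)',
    @PolicyNum = 'ABC123'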


Help Optimizing Query

Jun 23, 2008

I have the following query that works fine, but I'm wondering if there is a way to optimize it further, as when I analyze it through SQL Profiler it is at the top of the list for CPU usage.

SELECT DISTINCT site, d,
       (SELECT COUNT(id)
        FROM anP aPV2
        WHERE aPV2.confirmed = 1 AND aPV2.stage = 2 AND aPV2.inserted = 0
          AND aPV2.site = aPV1.site
          AND aPV2.d >= aPV1.d AND aPV2.d <= aPV1.d) AS mycount
FROM anP aPV1
WHERE confirmed = 1 AND stage = 2 AND inserted = 0
ORDER BY site, d
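
Worth noting: aPV2.d >= aPV1.d AND aPV2.d <= aPV1.d collapses to aPV2.d = aPV1.d, so the correlated subquery is just counting rows per (site, d). If that reading is right, a single GROUP BY does the same work in one pass instead of one probe per row:

-- Equivalent rewrite, assuming the d-range really means d = d:
SELECT site, d, COUNT(id) AS mycount
FROM anP
WHERE confirmed = 1
  AND stage = 2
  AND inserted = 0
GROUP BY site, d
ORDER BY site, d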


Optimizing Query

Sep 18, 2007

Hi,

I have the below query written so that I do not insert entries that already exist in the table. I am trying to put in 70,000 entries in a single shot, and it breaks down. Can anybody help me optimize the query so that it doesn't break? Is there any other way I can write this query?

Please do help me with this. Thanks in advance. The table into which I am inserting the entries has a composite key composed of ACCT_NUM_MIN and ACCT_NUM_MAX. I am getting the data from a table (CORE) which doesn't have a primary key.


INSERT INTO CRF (CORE_UID,ACCT_NUM_MIN, ACCT_NUM_MAX,BIN, BUS_ID,BUS_NM,ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
SELECT UID , LEFT(ACCT_NUM_MIN,16), LEFT(ACCT_NUM_MAX,16), BIN, BUS_ID, BUS_NM, ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE FROM CORE o
WHERE NOT EXISTS (SELECT * FROM CRF i
WHERE o.ACCT_NUM_MIN = i.ACCT_NUM_MIN
AND o.ACCT_NUM_MAX = i.ACCT_NUM_MAX)
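
Two things may help if "breaks down" means lock or log pressure (a sketch, not a drop-in fix): insert in batches, and note that the original NOT EXISTS compares the untruncated ACCT_NUM values while inserting LEFT(..., 16), so the check can pass for a row whose truncated key already exists. Something like:

-- Batch the insert (10000 is an arbitrary size) and compare on the
-- truncated values that are actually inserted. If CORE itself can hold
-- two rows with the same truncated key pair, deduplicate it first
-- (GROUP BY / DISTINCT on the key), or one batch can still collide.
WHILE 1 = 1
BEGIN
    INSERT INTO CRF (CORE_UID, ACCT_NUM_MIN, ACCT_NUM_MAX, BIN, BUS_ID, BUS_NM,
                     ISO_CTRY_CD, REGN_CD, PROD_TYPE_CD, CARD_TYPE)
    SELECT TOP 10000
           o.UID, LEFT(o.ACCT_NUM_MIN, 16), LEFT(o.ACCT_NUM_MAX, 16), o.BIN,
           o.BUS_ID, o.BUS_NM, o.ISO_CTRY_CD, o.REGN_CD, o.PROD_TYPE_CD, o.CARD_TYPE
    FROM CORE o
    WHERE NOT EXISTS (SELECT * FROM CRF i
                      WHERE i.ACCT_NUM_MIN = LEFT(o.ACCT_NUM_MIN, 16)
                        AND i.ACCT_NUM_MAX = LEFT(o.ACCT_NUM_MAX, 16));

    IF @@ROWCOUNT = 0 BREAK;
END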


Optimizing A Query

Nov 27, 2007

hi all


Optimizing A Big Query

Feb 9, 2007

To start with, I'll give a simplified overview of my data.

BaseRecord (4mil rows, 25k in each Region): ID | Name | Region | etc
OtherData (7.5mil rows, 1 or 2 per ID): ID | Type(1/2) | Data
ProblemTable (4mil rows): ID | ConcatenatedHistory

The concatenated history field is an nvarchar with up to 20 different pipe-delimited date/code combinations, e.g. '01/01/2007X|11/28/2006Q|11/12/2004Q|'

Using left outer joins (all from base, the rest optional) I've got a view something like:

View (4mil rows): ID | Name | Region | etc | Data | Data2 | ConcatenatedHistory

Querying it, it takes about 15-20 seconds to do this:

Select ID, Name, Region, etc, Data, Data2, ConcatenatedHistory




Help With Optimizing Query

Feb 2, 2007

If I have the following tables:

Table Customer_tbl

cust_id as int (PK), ...

Table Item_tbl

item_id as int (PK), ...

Table Selected_Items_tbl

selected_item_id as int (PK), cust_id as int (FK), item_id as int (FK), ...

-------

With the following query:

select cust_ID
from selected_items_tbl
WHERE item_id in (1, 2, n)
GROUP BY cust_id, item_id
HAVING cust_id in (select cust_id from selected_items_tbl where item_id = 1)
   AND cust_id in (select cust_id from selected_items_tbl where item_id = 2)
   AND cust_id in (select cust_id from selected_items_tbl where item_id = n)

-------

Each of these tables has other fields as well. Selected_Items_tbl holds zero to many of the items from Item_tbl for each customer. If I am searching for a customer who has item 1 AND item 2 AND item n, what would be the most efficient query for this? Currently I am working with the query above, but it seems that we should be able to do this type of search in a single query (without subqueries); see the sketch below.
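
This is the classic relational-division problem, and it can be done in one statement by counting distinct matches, with no correlated subqueries. A sketch for three items (3 stands in for "n"; the HAVING count must equal the number of items listed):

-- Customers who have ALL of the listed items.
SELECT cust_id
FROM Selected_Items_tbl
WHERE item_id IN (1, 2, 3)
GROUP BY cust_id
HAVING COUNT(DISTINCT item_id) = 3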


Optimizing Query To Run

Sep 27, 2006

I'm trying to get a query to run which looks at completed orders that have not had another paid order in 180 days. The database I'm running it against is very large, so I can't get it to complete. Here's what I've got:

select Date = convert(varchar(10), cl1.cl_rundate, 102),
       count(cl1.cl_recno) as 'Completed Initials',
       cl1.cl_status as Status
from dbo.vw_Completedorders cl1
where cl1.lob_lineofbusiness = 'aaa'
  and cl1.cl_rundate > '20060801'
  and not exists (select cl2.cl_company
                  from dbo.vw_Paidorders cl2
                  where cl2.lob_lineofbusiness = 'aa'
                    and cl2.cl_company = cl1.cl_order
                    and cl2.cl_rundate > '20060101'
                    and datediff(day, cl2.cl_rundate, cl1.cl_rundate) < 180)
group by cl1.cl_status, cl1.cl_rundate
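
One likely culprit: wrapping cl2.cl_rundate in DATEDIFF stops any index on it from being used for a seek. The same 180-day window can be written as a plain range (a hedged rewrite; equivalent up to DATEDIFF's day-boundary rounding):

select Date = convert(varchar(10), cl1.cl_rundate, 102),
       count(cl1.cl_recno) as 'Completed Initials',
       cl1.cl_status as Status
from dbo.vw_Completedorders cl1
where cl1.lob_lineofbusiness = 'aaa'
  and cl1.cl_rundate > '20060801'
  and not exists (select *
                  from dbo.vw_Paidorders cl2
                  where cl2.lob_lineofbusiness = 'aa'
                    and cl2.cl_company = cl1.cl_order
                    and cl2.cl_rundate > '20060101'
                    -- sargable form of datediff(day, cl2.cl_rundate, cl1.cl_rundate) < 180:
                    and cl2.cl_rundate > dateadd(day, -180, cl1.cl_rundate))
group by cl1.cl_status, cl1.cl_rundate

An index on the paid-orders side covering (lob_lineofbusiness, cl_company, cl_rundate) would then let the NOT EXISTS probe seek; whether that's possible depends on the tables under vw_Paidorders.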


Optimizing SELECT Query

Nov 7, 2007

Hi
I have one table (tableDemo) with the following structure:

TSID bigint (primary key)
TskID bigint
Sequence bigint
Version bigint
Frequency varchar(500)
WOID bigint
DateSchedule datetime
TimeStandard real
MeterEstimated real
MeterLast real
Description ntext
CreatedBy bigint
CreatedDate datetime
ModifiedBy bigint
ModifiedDate datetime
Id uniqueidentifier
I have 8000 records in this table. When I run a simple select command (i.e. select * from tableDemo) it takes more than 120 seconds.
Are there any techniques by which I can access all these records within 2-3 seconds?
Regards,
ND
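
8000 rows should come back nearly instantly unless a wide column is the cost, and here the ntext Description column is the obvious suspect: SELECT * drags every document across the wire. If the listing doesn't display it, select only the columns you show, and fetch Description for a single record on demand (the id value 42 below is a placeholder):

SELECT TSID, TskID, Sequence, Version, Frequency, WOID,
       DateSchedule, TimeStandard, MeterEstimated, MeterLast
FROM tableDemo

-- Detail fetch for one record:
SELECT Description
FROM tableDemo
WHERE TSID = 42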


Optimizing The Query And Index

Nov 4, 2000

Hi, what are the tools that I can use to optimize a query/index?

I know that if I am running a query on a table, I create an index on the fields I use in the where clause. Is this the right thinking?

Someone asked me how I determine which columns should be indexed; I told him that the fields used in the where clause should be indexed to speed up the process of retrieving the data... Is this answer correct? If not, please advise the correct one.
Thanks

Ahmed
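
Indexing the WHERE-clause columns is broadly the right instinct, but the column should also be selective (many distinct values relative to the row count), or the optimizer will ignore the index. A quick way to gauge that, with placeholder table/column names:

-- Ratio near 1.0 = highly selective (good index candidate);
-- near 0.0 = few distinct values (poor candidate on its own).
SELECT COUNT(DISTINCT SomeColumn) * 1.0 / COUNT(*) AS selectivity
FROM SomeTable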


Optimizing Query Execution...

Dec 17, 2005

We have SQL Server 2000 with an Oracle linked server (liorder in the query below). I'm trying to run the following query...


SELECT DISTINCT a.auf_nr AS OrderNo,
e.ku_name AS Customer,
d.bestell_dat AS OrdDate,
d.liefer_dat AS DelvDate,
CAST(SUM(b.anz) AS FLOAT) Qty,
CAST(SUM((CAST(c.breite AS FLOAT) / 1000 * CAST(c.hoehe AS FLOAT) / 1000) * b.anz) AS FLOAT) SQM,
CAST(SUM(a.liefer_offen) - (SUM(a.anz) - SUM(b.anz)) AS FLOAT) AvailDelv,
CAST(SUM(a.liefer_anz) AS FLOAT) Delvd,
CAST(SUM(c.sum_brutto*a.anz) AS FLOAT) Value

FROM liorder..LIORDER.AUF_STAT a,
liorder..LIORDER.AUF_LIP_STATUS b,
liorder..LIORDER.AUF_POS c,
liorder..LIORDER.AUF_KOPF d,
liorder..LIORDER.KUST_ADR e

WHERE a.auf_nr = b.auf_nr and
b.auf_nr = c.auf_nr and
c.auf_nr = d.auf_nr and
d.kunr = e.ku_nr and
a.auf_pos = b.auf_pos and
b.auf_pos = c.auf_pos and
b.lip_status = 7 and
c.ver_art !='V' and
a.history = 0 and
a.rg_stat != 2 and
e.ku_name IS not null and
e.ku_vk_ek = 0 and
d.bestell_dat BETWEEN '01/01/2005' and '12/17/2005'

GROUP BY a.auf_nr,
d.liefer_dat,
b.lip_status,
d.bestell_dat,
e.ku_name,
d.kopf_tour,
d.kopf_firma

HAVING CAST(SUM(a.liefer_offen)-(SUM(a.anz)-SUM(b.anz)) AS FLOAT) > 0


..and it takes around 2 minutes to show the results, even if the date range covers just a single day. I even tried to use an indexed column, but I still get the same slow execution time. I even tried to create a UDF so that the WHERE clause would be resolved remotely on the Oracle DB, but still the same. Is there any way I can do this in a much more efficient and faster way?
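
With four-part names against an Oracle linked server, SQL Server often pulls the tables across and joins them locally, which would explain the two minutes regardless of the date range. OPENQUERY sends the statement text to Oracle verbatim, so the joins, filters, and grouping run remotely and only the result crosses the wire. A sketch of the shape (abbreviated to two tables; the inner text must be valid Oracle SQL):

SELECT *
FROM OPENQUERY(liorder,
    'SELECT a.auf_nr AS OrderNo, SUM(b.anz) AS Qty
     FROM LIORDER.AUF_STAT a, LIORDER.AUF_LIP_STATUS b
     WHERE a.auf_nr = b.auf_nr
       AND a.auf_pos = b.auf_pos
       AND b.lip_status = 7
     GROUP BY a.auf_nr')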


Optimizing This Duplicate Finding Query

Sep 11, 2005

Hi everyone!

I am parsing a database structured like the following for duplicates.


Code:


keyword  negative  exact  broad  phrase
Alpha0.120.36
Alpha0.120.37
Alpha0.120.37
Alpha0.220.37
Alpha0.120.37
Alpha0.120.37
Alpha0.120.37
Beta0.43212
Charlie0.240.31
Delta0.72212
Epsilon410.31
Foxtrot250.22
Grape420.215
Hotel 230.13
Indigo0.3721
Juliet21220.14
Kilroy0.3110.15
Lemon0.2201
Mario0.2502
Nugget0.130.72
Oprah2141
Polo01225
Polo01224
Polo01225
Q-Bert31442
Romeo12100.1
Salty2200.1
Tommy10.30.132.4
Uri10.30.132
Uri20.20.132
Uri20.30.122
Uri20.30.131
Vamprie0.120.1315
Wilco0.210.21
X-Ray00.220
Yeti020.20
Zebra1310



The duplicates that this thread relates to are the kind with duplicate "keyword" entries AND dissimilar field entries; i.e. :


Code:


keyword negative exact broad Phrase
Polo 0 122 4
Polo 0 122 5



I've come up with an SQL query that seems to return all of these duplicates (save one of each type, the 'real', unique entry). However, I think I made the query very inefficient. My SQL is very bad; this query will be running over tens of thousands of rows, so if it can be optimized at all I would greatly appreciate your help!

What I have so far is:



Code:


string query1 = "SELECT * FROM TableName" +
    " WHERE EXISTS (SELECT NULL FROM TableName b" +
    " WHERE b.[keyword] = TableName.[keyword]" +
    " AND b.[negative] <> TableName.[negative]" +
    " OR b.[keyword] = TableName.[keyword]" +
    " AND b.[exact] <> TableName.[exact]" +
    " OR b.[keyword] = TableName.[keyword]" +
    " AND b.[broad] <> TableName.[broad]" +
    " OR b.[keyword] = TableName.[keyword]" +
    " AND b.[phrase] <> TableName.[phrase]" +
    " GROUP BY b.[keyword], b.[broad], b.[exact]" +
    " HAVING Count(b.[keyword]) BETWEEN 2 AND 50000)";



The algorithm seems to check every column of every row in order to determine a duplicate. Seems straightforward to me, but alas slow...

Is there a better/faster way I can do this? Thanks for your help!
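
A set-based alternative worth testing (a sketch in plain SQL): find the keywords whose other fields disagree in one aggregate pass, then join back for the rows. This avoids repeating the keyword match four times in an OR chain.

-- Rows whose keyword appears with more than one distinct value in any field.
SELECT t.*
FROM TableName t
JOIN (SELECT [keyword]
      FROM TableName
      GROUP BY [keyword]
      HAVING COUNT(DISTINCT [negative]) > 1
          OR COUNT(DISTINCT [exact]) > 1
          OR COUNT(DISTINCT [broad]) > 1
          OR COUNT(DISTINCT [phrase]) > 1) d
  ON d.[keyword] = t.[keyword]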


Optimizing Query - Product Tips

Jan 6, 2004

Hello everyone!
I've got a problem with a really slow query; I would be very happy if somebody has any idea to improve its speed...
The idea is to get the top 2 products a customer hasn't bought which are in his area of interest...

query (simplificated)
-------------------------------------------------
SELECT TOP 2 prodID, Title, Price FROM bestSold7Days WHERE
prodID NOT IN (SELECT prodID FROM orders INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID WHERE (orders.custID=394))
AND
(prodType = COALESCE((SELECT TOP 1 products.prodType FROM orders INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID INNER JOIN products ON orderProducts.prodID = products.prodID WHERE (orders.custID=394) GROUP BY products.prodType ORDER BY SUM(orderProducts.PCS) DESC), 2))
-------------------------------------------------
end query

(COALESCE is there as a fallback if the customer hasn't ordered anything, or hasn't ordered anything of this type)...

Thanks for any time spent!


Optimizing A Query To Delete Duplicates

Jul 20, 2005

I have a DELETE statement that deletes duplicate data from a table. It takes a long time to execute, so I thought I'd seek advice here. The structure of the table is a little funny. The following is NOT the table, but a representation of the data in the table:

+-----+-----+
|  a  |  b  |
+-----+-----+
| 123 | 234 |
| 345 | 456 |
| 123 | 234 |
+-----+-----+

As you can see, the data is tabular. This is how it is stored in the table:

+-----+-----------+------------+
| Row | FieldName | FieldValue |
+-----+-----------+------------+
|  1  |     a     |    123     |
|  1  |     b     |    234     |
|  2  |     a     |    345     |
|  2  |     b     |    456     |
|  3  |     a     |    123     |
|  3  |     b     |    234     |
+-----+-----------+------------+

What I need is to delete all records having the same "Row" when there exists the same set of records with a different (smaller, to be precise) "Row". Using the example above, what I need to get is:

+-----+-----------+------------+
| Row | FieldName | FieldValue |
+-----+-----------+------------+
|  1  |     a     |    123     |
|  1  |     b     |    234     |
|  2  |     a     |    345     |
|  2  |     b     |    456     |
+-----+-----------+------------+

A slow way of doing this seems to be:

DELETE FROM X
WHERE Row IN
   (SELECT DISTINCT Row FROM X x1
    WHERE EXISTS
       (SELECT * FROM X x2
        WHERE x2.Row < x1.Row
          AND NOT EXISTS
             (SELECT * FROM X x3
              WHERE x3.Row = x2.Row
                AND x3.FieldName = x2.FieldName
                AND x3.FieldValue <> x1.FieldValue)))

Can this be done faster, better, and cheaper?
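
One faster pattern (a sketch; it is checksum-based, so hash collisions are theoretically possible and exact verification may be wanted): compute a per-Row signature once, then delete the higher Row of any pair whose signatures and field counts match.

-- Signature per Row; CHECKSUM_AGG is order-insensitive, which suits
-- the unordered FieldName/FieldValue pairs.
SELECT Row,
       COUNT(*) AS FieldCount,
       CHECKSUM_AGG(BINARY_CHECKSUM(FieldName, FieldValue)) AS Sig
INTO #RowSig
FROM X
GROUP BY Row

DELETE FROM X
WHERE Row IN (SELECT s1.Row
              FROM #RowSig s1
              JOIN #RowSig s2
                ON s2.Sig = s1.Sig
               AND s2.FieldCount = s1.FieldCount
               AND s2.Row < s1.Row)

DROP TABLE #RowSig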


Extremely Poor Query Performance - Identical DBs Different Performance

Jun 23, 2006

Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were the exact same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory etc., or OS software related issues, like service packs, SQL Server configurations etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55gb of free space on the disk.
2. Applied the SQL Server SP4 service pack
3. Defragmented the primary hard drive
4. Applied all Windows Server 2003 updates

Here are the prod server's system specs:
2x Intel Xeon 2.67GHz
Total Physical Memory 2GB, Available Physical Memory 815MB
Windows Server 2003 SE /w SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80GHz
2GB DDR2-SDRAM
Windows Server 2003 SE /w SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T


SQL 7 Poor Performance

Apr 4, 2000

Hi,
I have just upgraded my SQL 6.5 to version 7.0. This SQL box (Compaq ProLiant 5500) has 1 gig of RAM, 4 Pentium Pro processors, and a Smart Array controller. About 200 users hit that server constantly. I configured this server according to the Compaq white paper and Microsoft recommendations; however, I am still suffering a huge performance hit. I set up Performance Monitor to see what is happening, and what I see is all 4 processors at 70% processor time constantly. What I heard is that SQL 7.0 runs much better on Pentium III processors. Is that true? Or do you have any recommendations?


Help!!
Thanks!!

Jim Zhong


Poor Performance

Jul 23, 2005

Hello Gurus,

SQL Server 2000, Windows 2003 Server Standard Edition.

Firstly, I'm not a SQL Server DBA, but I have a little experience with Oracle 9i and Oracle Rdb.

An application that I've inherited has started performing very, very slowly over the last few days. As far as I know there have been no modifications or changes in the volume of data the db is holding. The database is in simple logging mode and is updated twice per day from an Oracle Rdb database.

The SQL database is all in the primary filegroup and sits on four disks (RAID mirrored) which form one logical disk of 128gb. It consists of approx 120 tables, each with a primary key and a number of unique indices per table.

Apart from a twice-a-day update in the small hours, the only access to this database is read only. Users connect via the web and use predefined Crystal reports to retrieve data.

Questions then:

Should this db have more than one filegroup, and should I put the indices in a different group? Is this relevant when the underlying storage is RAID?

Should it be using mirrored RAID, should it be striped, or should I steer clear of RAID for a database?

The indices have not been rebuilt for some time (probably one year). Is this something I should be doing once per day/week, since I do have the downtime window?

Should I be doing a database/table reorg at fairly regular intervals?

I'm going to generate a workload file from SQL Profiler and see what, in conjunction with the Index Tuning Wizard, it suggests.

I appreciate that poor performance & tuning is a huge subject and not an exact science, but any tips or comments would be gratefully received.

Regards,
Dave.
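
On the rebuild question specifically: with a nightly window and a read-mostly database, a weekly rebuild plus statistics refresh is a common baseline. A sketch for one table on SQL Server 2000 (TableName is a placeholder; loop over the tables or use a maintenance plan for all 120):

DBCC DBREINDEX ('dbo.TableName')
UPDATE STATISTICS dbo.TableName WITH FULLSCAN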


Poor Performance

Sep 24, 2007

Hi

I have the following structure:

At the server, I have SQL Server 2005 with a database running in compatibility mode 8.0 (SQL 2000). On the desktops (client side) I have SQL Server 2005 Express. The desktop databases access the database on the server to copy data through a linked server. Both databases are in compatibility mode 8.0.

This is my problem:

Copying the data using an ad-hoc query, INSERT INTO LOCAL-DATABASE ... SELECT .... FROM REMOTE-DATABASE (over the WAN), takes much more time than when I use MSDE on the desktops.

I would like to know what could be causing this delay.

Thanks


Poor Performance

Feb 4, 2008

Is there a way to improve this query?
The execution plan states that it's making use of the indexes as 'Clustered Index Seeks',
but the query takes 30 minutes to complete.
The index statistics are also up to date!
If I use just an INNER JOIN, the query completes in 2 seconds!
How can I make the LEFT OUTER JOIN more optimal?
SELECT A.MeterSerial AS 'PeaceMeter', B.MeterFrom, B.MeterTo,
CASE WHEN A.MeterSerial = B.MeterFrom
OR A.MeterSerial = B.MeterTo
OR A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
THEN 'Y' ELSE 'N' END AS 'ComplexMeter', B.ComplexMeterType
FROM Peace.MeterData_DR1 A LEFT OUTER JOIN
(SELECT MeterFrom, MeterTo, ComplexMeterType
FROM Peace.ComplexMeters2) B ON A.MeterSerial = B.MeterFrom
OR A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
WHERE A.MeterSerial != ''
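
The OR in the join predicate is the likely problem: the optimizer can't turn OR'd join conditions into a single seek, so it evaluates them per outer row. Assuming MeterFrom <= MeterTo on every ComplexMeters2 row, the two equality tests are already covered by the BETWEEN and can be dropped, and ComplexMeter reduces to "did the join match". A hedged rewrite under that assumption:

SELECT A.MeterSerial AS 'PeaceMeter',
       B.MeterFrom, B.MeterTo,
       CASE WHEN B.MeterFrom IS NOT NULL THEN 'Y' ELSE 'N' END AS 'ComplexMeter',
       B.ComplexMeterType
FROM Peace.MeterData_DR1 A
LEFT OUTER JOIN Peace.ComplexMeters2 B
  ON A.MeterSerial BETWEEN B.MeterFrom AND B.MeterTo
WHERE A.MeterSerial != ''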


Poor Performance From Web Server

Dec 18, 2007

Hi,

I am having a problem with one of my stored procedures in SQL Server 2005. Basically the proc brings back a data set for the ASP.NET front end, but it is running very slowly from .NET. I have run SQL Profiler on the procedure and it's taking around 20 seconds to bring back the data for .NET, whereas if I copy and paste the executed SP from Profiler into Management Studio and run it in a query window, it runs in around 1 second, even if I run DBCC DROPCLEANBUFFERS first. More worryingly, the CPU usage is 40 times higher and the number of reads is 50% higher from .NET.

We have the .NET front end spread over 3 clustered web servers with load balancers, and the SQL db is on a dedicated rig. I am having the same problem on my locally published version of the site as well, so I don't think it's an issue with the web site.

If anyone has any ideas on this then please let me know, as I am completely stuck. I should mention that the issue only recently started occurring; it used to be fine, and the rest of the site is fine...

Thanks in advance,
Tom
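
Fast in Management Studio but slow from ADO.NET, with very different CPU and reads, is the classic SET-options/parameter-sniffing trap: SSMS connects with ARITHABORT ON while .NET connections default to OFF, so the two get separate cached plans, and the .NET one may have been compiled for unrepresentative parameter values. A quick way to reproduce the slow plan in a query window (proc and parameter names are placeholders):

-- Mimic the ADO.NET session settings, then call the proc; if it is now
-- slow in SSMS too, the problem is a bad cached plan, not the web tier.
SET ARITHABORT OFF
EXEC dbo.MyProc @SomeParam = 42

Common fixes are OPTION (RECOMPILE) on the offending statement, or copying the parameters into local variables inside the proc.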


Is This A Poor Database Design?

Dec 29, 2004

Setup
I am using a simple aspx page to calculate a person’s tax filing status and capturing statistical information. Most of the items are drop down boxes or radio buttons but a few are checkboxes.

Question
My plan is to give each option in the check boxes its own column in the database; this will lead to a table that is 70 columns wide. Is that too many? Most of the info will be very small, like "yes" or "no". I will be using a stored procedure to insert the info. This will be coming down a VPN.

Goal
I would like to give the admin staff here real-time access to the data via linked tables and such. I would like there to be only one table so it will be easy for them to parse the info quickly without needing to join tables or write unions. Ideally this would get me out of the business of creating one-off reports.


How To Select The Poor Indexes?

Jul 28, 2004

hi,
I have a database with more than 500 tables and I want to drop the indexes on the least selective columns. How do I know which are the poor indexes?
Please help... thanks


SSAS Poor Performance

Jul 9, 2007

Hi,

I created a cube on my development box and tested it; performance is great. Now if I try to deploy it to the production server from VS, the screen freezes without any error msg... After some time playing around with security I gave up, created a backup of the cube, and sent it over to the IT department (I have no direct control over the production server, so they restored the cube). Now with the cube in production, it was a matter of just processing it... well, that didn't go well either... it would take forever from my development box to see results... Then I tried to browse a dimension using the data already in the backup... a simple 4-value dimension would take 2-4 minutes to load on screen...

I can't understand why browsing the cube is so slow on the production server; the IT admin even reported that browsing locally was slow too... The server has 16GB of RAM and dual processors; he didn't notice any lack of CPU or memory while browsing the cube...

Have you experienced this or can you help me troubleshoot whats going on?


Poor Man's Trace Replay?

Apr 23, 2008

A while back I captured a large trace, but only the SQL:BatchCompleted event and not the usual list of events, so I'm not able to use the RML Utilities or Profiler to play it back. Really, though, all the queries are from the same login, so I would think there's no issue just playing them straight through, using the stored duration to back-calculate a start time. Are there any tools to do this, or am I on my own having to whip something up?


Poor Performance Due To LCK_M_S?

Nov 13, 2007

We just converted from SQL Server 2000 to SQL Server 2005, and it seems as though we are having trouble with performance. Sets of queries that used to take about 15 seconds now take almost 2 minutes. We used a utility to find out what was taking so long and found that almost all of the wait time belonged to LCK_M_S. Does anyone have any suggestions?

Here is a snippet of code I used to test the speed against 2000:

declare @counter int
set @counter = 1

while @counter < 200
BEGIN
update UPR40500 set actindx = 96 where actindx = 96
set @counter = @counter+1
END

This takes 15 seconds when it used to take virtually no time at all.
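
LCK_M_S means sessions waiting for shared locks, i.e. readers blocked by writers. On 2005 the DMVs will show who is blocking whom while the slow batch runs; a sketch:

-- Who is waiting, on what, and which session is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0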


Poor Index Performance

Mar 24, 2007

Hey guys,

Having some trouble with indexes on SQL Server 2005. I'll explain it with a simplified example.
I have a customers table, and an SP to list customers:


create table Customers(
CusID int not null,
Name varchar(50) null,
Surname varchar(50) null,
CusNo int not null,
Deleted bit not null
)


create proc spCusLs (
@CusID int = null,
@Name varchar(50) = null,
@Surname varchar(50) = null,
@CusNo int = null
)
as

select
CusID,
Name,
Surname,
CusNo
from
Customers
where
Deleted = 0
and CusID <> 1000
and (@CusID is null or CusID = @CusID)
and (@CusNo is null or CusNo = @CusNo)
and (@Name is null or Name like @Name)
and (@Surname is null or Surname like @Surname)
order by
Name,
Surname


create nonclustered index ix_customers_name on customers ([name] asc)
with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]

create nonclustered index ix_customers_surname on customers (surname asc)
with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]

create nonclustered index ix_customers_cusno on customers (cusno asc)
with (sort_in_tempdb = off, drop_existing = off, ignore_dup_key = off, online = off) on [primary]



I've recently noticed that some tables, including 'Customers', don't have indexes except primary keys, so I have added indexes to the "name", "surname" and "cusno" columns. This has dropped the number of IO reads. But the strange thing is: one time it works with name/surname searches like ('joh%', '%'), but when CusNo is included, it does a full scan. And vice versa: when the SP is recompiled using 'alter', it works OK with CusNo, but not with name/surname. Recompile it, and it's reversed again. When run as a single query, the execution plan looks different.

What's happening? Perhaps something to do with statistics? This doesn't have a big payload on the server, but there are some other procs suffering from this on heavy queries, making server performance worse than before...
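
The (@p IS NULL OR col = @p) catch-all pattern is a known cause of exactly this flip-flopping: one cached plan must serve every parameter combination, so whichever combination triggers the compile wins and the others scan. Two usual fixes are a statement-level recompile or building only the needed predicates with dynamic SQL; a sketch of the first, inside spCusLs (OPTION (RECOMPILE) requires SQL Server 2005):

select CusID, Name, Surname, CusNo
from Customers
where Deleted = 0
  and CusID <> 1000
  and (@CusID is null or CusID = @CusID)
  and (@CusNo is null or CusNo = @CusNo)
  and (@Name is null or Name like @Name)
  and (@Surname is null or Surname like @Surname)
order by Name, Surname
option (recompile)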


Poor Performance On Oracle

Aug 3, 2006

Attention: To all SSIS gods like Donald Farmer, Jamie Thompson, Michael Entin

I am so very disappointed with the performance of SSIS against Oracle. When I do a simple lookup, it caches the entire lookup table, which is killing my performance. What's worse, when I try to change from Full to Partial or None caching mode, the component throws an error (x) sign.

I am also so frustrated with the warnings I have all over the place. The warnings say the code page value for the column cannot be determined, but the code page value is defaulted to 1252. I looked around everywhere to find what the damn code page value of my Oracle database is, without any luck.

The damn package that I created takes 10 minutes to process ONLY 10,000 records, which is slower than a legacy Cognos Decision Stream!!! My lookup tables have on average 200,000 records, and I DO NOT WANT TO CACHE THEM all for 10,000 records. Something seems messed up!!!

PLEASE HELP!!!!!!!!!!!!!!!!!!!!!!!


Linked Server = Poor Performance

Jan 10, 2006

Using a query through a linked server is giving tremendously reduced performance.
Is part of the problem linking from a SQL 2000 to a SQL 7.0 database?
Are there any other tips out there?
Thanks.


Page Life Expectancy Is POOR

Jan 16, 2008

Page Life Expectancy (PLE) is pretty bad on my server. PLE hovers around 3 minutes "sometimes" but is usually around 20-30 seconds.

Total memory allocated to SQL ( a fixed amount ) is set at 3GB.

Of the total memory allocated, SQL Server is using 2.52GB ... so there is room if needed.

The Buffer Cache is sitting at 2.09GB with a hit rate around 99.8%.

The Procedure Cache is sitting at 378MB with a hit rate around 90.5%.


CPU is hovering around 10-20%


Free System Page Table Entries is LOW ... at 22343

Disc queue lengths spike quite often to above 5, and sometimes as high as 36; usually they sit at .05 to 1.0 (and there are times when the DQLs are great and not measurable).

What I need to find out is how to get PLE above the recommended 5 minute mark???

Please let me know if there are any other items I need to note.

Thanks!

=========================================
Here are some hardware/Software/Implementation stats:
=========================================

SQL Server 2005 Standard w/ latest patch of 3152

Windows Server 2003 R2 Standard w/ latest patches applied (PAE shows as enabled on the System Properties > General tab)

4 Intel Xeon X5355 @ 2.66GHz

4GB RAM with 3GB dedicated to SQL Server via the /3GB switch


System Disc ( C: ) is 136GB (free space is 122GB) and is RAID10
Data/SQL Disc ( G: ) is 408GB (free space is 347GB) and is RAID10
The SQL files (MDFs/LDFs, TempDB, DB & TLog backups, the SQL application and all that is SQL related) are all on the single array (G:) (which I must note is NOT how I configured the SQL environment; I acquired this setup when I started the position).
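
To watch PLE over time without PerfMon, the counter is also exposed through a DMV on 2005:

-- Current Page Life Expectancy, in seconds.
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy'

As an aside, the low Free System PTE count is a known side effect of the /3GB switch; on a 4GB box, keeping /3GB is worth weighing against the kernel address space it takes away.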


Poor Peformance On Pulling Data

Nov 12, 2007

So this could be a long story.

Server1 = Previous "Test box", Windows Server 2003 Standard, SQL 2005 Standard SP1 with hotfixes, 2 processor 4 gig RAM
Server2 = New "Box for production", Windows Server 2003 Enterprise, SQL 2005 Standard SP1 with hotfixes, 2 Dual Core Processor, 16 gig Ram

This is for a data warehouse environment. I pull a lot of data every night from an Oracle database. On Server1 the process took about an hour to pull all my data across; on Server2 it takes more than 3 hours. I use SSIS packages to pull the data across.
Even just running an OPENQUERY statement takes a lot longer: Server1 returned about 150K records in 1.5 minutes, while Server2 returns only 40K.
Have I missed a setting on my re-install? It's the same SQL build number, sp_configure has exactly the same settings, and everything seems the same. I ftp'ed a file from the Oracle box to each of my boxes and had very similar results, so it doesn't seem to be a network issue.

Any help would be much appreciated, is there anything that would cause SQL to pull data 3 times slower from an external datasource?

Thanks in advance guys,

Regards.


Poor Sql Performance In Oledb Source?

Mar 31, 2008

I need to select data using a very complex SQL statement. When I use an OLE DB source component and choose SQL command as the data access mode, the process never ends. But when I put the SQL statement in a SQL task component, it works fine and fast. Isn't an OLE DB source always based on a SQL statement (select *)? So how is it possible that this component becomes so slow?


Poor Performance On Sql 2005 Vs. Sql 2000 - AGAIN!

May 15, 2008

I was hoping I wouldn't be another poster with performance issues after migrating to SQL 2005 from SQL 2000, but here I am.

I am in the process of testing out our databases on Sql Server 2005 for migration from SQL Server 2000 and there are certain portions of code that have been affected negatively. I have read thru many of the posts here and have tried out most of the recommendations. I will start out with things I've done and then provide the actual SQL.

1) I have rebuilt all indexes (using DBCC DBREINDEX with the table option).
2) Updated the db engine to the latest hotfix (build 3239) that addresses speed-related fixes.
3) I also ran sp_createstats using the 'fullscan' option to create stats on all columns of all tables (minus indexed columns).
4) Since nothing seemed to work, I even ran UPDATE STATISTICS WITH FULLSCAN on all tables, even though I should not have needed to, as the rebuild would have updated the stats. But I was willing to try anything.

I have confirmed that the execution plans are different even though the data on both sql 2000 and sql 2005 are identical (i put a copy on 2005). The plans themselves are huge as the queries are huge. Here is the query.


SELECT InterimView.* ,TestView.*

FROM View_LabDataExport_TestFormData_55 TestView
RIGHT OUTER JOIN ( SELECT ReqView.*, CDView.*
FROM View_LabDataExport_FormData_55 ReqView
LEFT OUTER JOIN View_LabDataExport_FormData_CD_55 CDView
ON ( CDView.DB_SubjectID_CD = ReqView.DB_SUbjectID )

) InterimView

ON ( InterimView.DB_FormID = TestView.DB_FormID_T AND

InterimView.DB_LabSampleID = TestView.DB_LabSampleID_T )

The above query takes about 8 secs to run on 2000 and about 1 minute on 2005. This is for a small dataset, and on larger datasets this will only be more pronounced (as confirmed by other teams in my company that have already migrated). Another point worth mentioning: if I remove TestView.* from the select list, it runs in 5 to 6 seconds. Is there an issue with SQL 2005 and a large number of columns, or anything of that sort? On 2000, the time remains the same, about 8 seconds, if I remove this from the select list.

Here is the STATISTICS IO output on 2005:


(21234 row(s) affected)

Table 'Worktable'. Scan count 75490, logical reads 3676867, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabTestToReportPanel'. Scan count 476, logical reads 1524, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabReportPanel'. Scan count 0, logical reads 260, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'DiscreteValue'. Scan count 1, logical reads 176106, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabReleasedSampleTest'. Scan count 1, logical reads 2078, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabSample'. Scan count 1360, logical reads 18567, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Form'. Scan count 2302, logical reads 8225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabTest'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabSampleDef'. Scan count 1, logical reads 10530, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabArea'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Lab'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Location'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Study'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Item'. Scan count 1335, logical reads 32940, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'ObjectState'. Scan count 1, logical reads 10972, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Object'. Scan count 0, logical reads 20674, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Subject'. Scan count 0, logical reads 3293, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'FormDef'. Scan count 2, logical reads 70, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'PrintedLabSampleLabel'. Scan count 0, logical reads 13144, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'PrintedForm'. Scan count 0, logical reads 4219, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'StudySite'. Scan count 0, logical reads 2756, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'StudyEvent'. Scan count 18, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'StudyEventDef'. Scan count 0, logical reads 36, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'FormDefToStudyEventDef'. Scan count 1, logical reads 43, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'LabSampleDefToFormDef'. Scan count 1, logical reads 255, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Here is the STATISTICS IO output on 2000:

Table 'LabTestToReportPanel'. Scan count 2123, logical reads 4820, physical reads 44, read-ahead reads 0.

Table 'LabReportPanel'. Scan count 130, logical reads 260, physical reads 0, read-ahead reads 0.

Table 'DiscreteValue'. Scan count 103914, logical reads 208214, physical reads 0, read-ahead reads 0.

Table 'Location'. Scan count 19031, logical reads 38062, physical reads 2, read-ahead reads 0.

Table 'Lab'. Scan count 19031, logical reads 38062, physical reads 0, read-ahead reads 0.

Table 'LabArea'. Scan count 19031, logical reads 38062, physical reads 0, read-ahead reads 0.

Table 'LabSampleDef'. Scan count 24670, logical reads 49340, physical reads 0, read-ahead reads 0.

Table 'LabTest'. Scan count 19406, logical reads 39575, physical reads 0, read-ahead reads 0.

Table 'LabReleasedSampleTest'. Scan count 4289, logical reads 73865, physical reads 1014, read-ahead reads 24.

Table 'Study'. Scan count 4291, logical reads 8582, physical reads 0, read-ahead reads 0.

Table 'LabSample'. Scan count 5647, logical reads 31382, physical reads 308, read-ahead reads 4.

Table 'Form'. Scan count 4291, logical reads 9272, physical reads 2, read-ahead reads 10.

Table 'PrintedLabSampleLabel'. Scan count 4289, logical reads 17097, physical reads 114, read-ahead reads 308.

Table 'ObjectState'. Scan count 6860, logical reads 13760, physical reads 1, read-ahead reads 0.

Table 'Object'. Scan count 6860, logical reads 23559, physical reads 90, read-ahead reads 701.

Table 'PrintedForm'. Scan count 1375, logical reads 4505, physical reads 40, read-ahead reads 16.

Table 'StudySite'. Scan count 1378, logical reads 2756, physical reads 4, read-ahead reads 0.

Table 'Subject'. Scan count 1599, logical reads 3332, physical reads 2, read-ahead reads 0.

Table 'StudyEvent'. Scan count 18, logical reads 52, physical reads 0, read-ahead reads 0.

Table 'StudyEventDef'. Scan count 18, logical reads 54, physical reads 0, read-ahead reads 2.

Table 'FormDefToStudyEventDef'. Scan count 1, logical reads 69, physical reads 0, read-ahead reads 23.

Table 'FormDef'. Scan count 2, logical reads 78, physical reads 1, read-ahead reads 4.

Table 'LabSampleDefToFormDef'. Scan count 1, logical reads 308, physical reads 1, read-ahead reads 306.

Table 'Item'. Scan count 1335, logical reads 36510, physical reads 140, read-ahead reads 1047.

(21234 row(s) affected)

(147 row(s) affected)


One difference between the two is the worktable that 2005 creates versus 2000. I can attach the plans, but they are huge; I will attach them if you ask.

What I was looking for was suggestions on what I could do short of rewriting code or any suggestions in general.

Thanks







