TSQL - SELECT TOP Performance

Oct 17, 2007

I am running SQL 2000 SP4. I have a stored procedure which calls a function in an update statement. Previously, this SP, which is pretty intensive and is used to build a report, would run in approx 10 minutes. Now it is taking 3 hours to execute. I have tracked the performance issue to the function. The function is taking 3 to 4 seconds to execute for each row and it is updating 2,600 rows.

I am including the select statement causing problems.


SELECT TOP 1 @m_ECodeMsg = tev.BillMsg
FROM tblUsageHistoryDetail tuhd
INNER JOIN tblMeterReadEcode tmre
    ON tmre.MeterReadID = tuhd.CurrentMeterReadID
INNER JOIN tblUsageHistory tuh
    ON tuh.UsageHistoryID = tuhd.UsageHistoryID
INNER JOIN tblTran tt
    ON tt.TranID = tuh.TranID AND tt.ServiceLocationID = tuh.ServiceLocationID
INNER JOIN tblServiceLocation tsl
    ON tsl.ServiceLocationID = tt.ServiceLocationID
INNER JOIN tblRateUsage tru
    ON tru.RateID = tsl.RateID
INNER JOIN tblECodeValue tev
    ON tev.ECodeValueID = tmre.ECodeValueID AND tev.ECodeTypeID = tmre.ECodeTypeID
WHERE tt.StatementID = @i_StatementID AND tru.AltServiceID = tsl.ServiceID
AND tt.TranTypeID = 2 AND tev.ECodeTypeID = 6


The function has been in use for over a year and SP4 has been installed on this server for several months now. If I run this on an old box with SP3a it runs fine. I have rewritten the statement without the "TOP 1" and execution of the function takes less than 1 sec.
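For reference, a sketch of that kind of TOP-less rewrite (a sketch of the general idea, not necessarily the exact rewrite used): folding the value into MAX() assigns the variable without TOP 1 and stays deterministic even when the join returns several rows.

SELECT @m_ECodeMsg = MAX(tev.BillMsg)
FROM tblUsageHistoryDetail tuhd
INNER JOIN tblMeterReadEcode tmre
    ON tmre.MeterReadID = tuhd.CurrentMeterReadID
INNER JOIN tblUsageHistory tuh
    ON tuh.UsageHistoryID = tuhd.UsageHistoryID
INNER JOIN tblTran tt
    ON tt.TranID = tuh.TranID AND tt.ServiceLocationID = tuh.ServiceLocationID
INNER JOIN tblServiceLocation tsl
    ON tsl.ServiceLocationID = tt.ServiceLocationID
INNER JOIN tblRateUsage tru
    ON tru.RateID = tsl.RateID
INNER JOIN tblECodeValue tev
    ON tev.ECodeValueID = tmre.ECodeValueID AND tev.ECodeTypeID = tmre.ECodeTypeID
WHERE tt.StatementID = @i_StatementID AND tru.AltServiceID = tsl.ServiceID
AND tt.TranTypeID = 2 AND tev.ECodeTypeID = 6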

Any ideas as to why my performance has taken such a hit?

View 4 Replies



Optimizing TSQL Server Performance?

Jun 3, 2004

I have some simple TSQL running on a large block of data (565 million records). This process is estimated to take around 5 days and it is critical to get this running as quickly as possible.

I'm watching Windows Performance Monitor and both disk and CPU use are really low and all data is local so there is no network access involved. This is the only task running on this database server. How could this be? The process is running; I just need it to run faster. Typically a system is CPU-bound, disk-bound, network-bound, or operating under maximum capacity. This seems to be none of the above.

In Enterprise Manager, I see the process is sleeping with a Wait Type of "MISCELLANEOUS". What does that mean? Online help gives a pretty useless explanation of that.
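One way to see what the session is actually waiting on (a sketch against the SQL 2000 master..sysprocesses table; the spid is a placeholder for the one shown in Enterprise Manager):

SELECT spid, status, waittype, lastwaittype, waittime, waitresource, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 57   -- placeholder spid of the long-running batch

Watching cpu and physical_io change between runs also shows whether the batch is making progress while it reports MISCELLANEOUS.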

View 7 Replies View Related

TSQL SELECT Question

Dec 17, 2004

I want to get results from a specified row to another specified row like:
row 1 to row 10
row 11 to row 20

I don't know what the TSQL code would look like:
SELECT FROM [myTable] FROM ROW 1 TO ROW 10

First I tried a column with the following settings:
Datatype: int
Identity: Yes
Seed: 1
Identity Increment: 1

and tried the following TSQL code:
SELECT [id], [Column1], [Column2], [Column3] FROM [myTable] WHERE [id] >= 1 AND [id] <= 10

But it failed once I deleted some rows, because the identity values are no longer contiguous. Now I want to change the SELECT to a more appropriate form.
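If ROW_NUMBER() is not available (anything before SQL 2005), one common workaround is nested TOP queries, which do not depend on the id values being contiguous. A sketch, assuming the rows are ordered by [id]:

-- rows 11 to 20, ordered by [id]
SELECT *
FROM (
    SELECT TOP 10 *
    FROM (
        SELECT TOP 20 [id], [Column1], [Column2], [Column3]
        FROM [myTable]
        ORDER BY [id] ASC
    ) AS first20
    ORDER BY [id] DESC
) AS page2
ORDER BY [id] ASC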

Thanks

View 4 Replies View Related

TSQL: I Want To Use A SELECT Statement With COUNT(*) AS 'name' And ORDER BY 'name'

Jul 23, 2005

I am very new to Transact-SQL programming and don't have a programming background, and was hoping that someone could point me in the right direction. I have a SELECT statement, SELECT FIXID, COUNT(*) AS IOIs, and want to ORDER BY 'IOIs'. I have been combing through the BOL, but I don't even know what topic/heading this would fall under.

USE INDII
SELECT FIXID, COUNT(*) AS IOIs
FROM [dbo].[IOI_2005_03_03]
GROUP BY FIXID
ORDER BY FIXID

I know that it is a simple question, but perhaps someone could assist me.

Thanks,
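If the intent is to sort by the count itself rather than by FIXID, the usual form (a small sketch using the same table) is to order by the column alias, or equivalently by the aggregate:

SELECT FIXID, COUNT(*) AS IOIs
FROM [dbo].[IOI_2005_03_03]
GROUP BY FIXID
ORDER BY IOIs DESC   -- or: ORDER BY COUNT(*) DESC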

View 18 Replies View Related

Multiple Tables Select Performance - SQL 2005 - Should It Take 90 Seconds For A Select?

Dec 4, 2007

I have a problem where my users complain that a select statement takes too long, at 90 seconds, to read 120 records out of a database.
The select statement reads from 9 tables three of which contain 1000000 records, the others contain between 100 and 250000 records.
I have checked that each column in the joins is indexed - they are (but some of them are clustered indexes, not nonclustered).
I have run the SQL Profiler trace from the run of the query through the "Database Engine Tuning Advisor". That just suggested two statistics items which I added (no benefit) and two indexes for tables that are not involved at all in the query (I didn't add these).
I also ran the query through the Query window in SSMS with "Include Actual Execution Plan" enabled. This showed that all the execution time was being taken up by searches of the clustered indexes.
I have tried running the select with just three tables involved, and it completes fast. I added a fourth and it took 7 seconds. However there was no WHERE clause for the fourth table, so I got a cartesian product which might have explained the problem.
So my question is: Is it normal for such a type of read query to take 90 seconds to complete?
Is there anything I could do to speed it up?
Any other thoughts?
Thanks

View 7 Replies View Related

Select Performance

Nov 16, 2007

Hello,

I have a table where, when I execute a select statement, the execution performance is poor. The table currently has 1000 rows.

Table:
CREATE TABLE [dbo].[Listings](
[ListingID] [int] NOT NULL,
[UserID] [int] NOT NULL,
[NodeID] [int] NOT NULL,
[Title] [varchar](100) NOT NULL,
[Description] [varchar](500) NOT NULL,
[Keywords] [varchar](200) NOT NULL,
[URL] [varchar](150) NOT NULL,
[DateSubmited] [datetime] NOT NULL
) ON [PRIMARY]



Even a simple sql statement without any where clause will take 3 seconds to execute.

select LISTINGID, TITLE, DESCRIPTION from listings



Is there anyway to improve that?

Thanks in advance.

View 2 Replies View Related

Select Performance...

Jan 24, 2008

I have a table of products which contains a parent-child hierarchy (a self-relationship, creating a built-in recursive structure using a foreign key). The point is: I must create a query that shows only products which are not parents. I can achieve this using a "not in" or a "full join".

Can anybody tell me which of the 2 below is the best option, and why?




Code Snippet
SELECT ProductID
FROM CRP.Product
WHERE ProductID NOT IN
(SELECT ProductFatherID FROM CRP.Product WHERE ProductFatherID is not NULL)

Execution plan:


Select (cost 0%) <--- Merge join (25%) <--- Clustered index scan (54%)
^--- Stream aggregate (1%) <--- Index seek (19%)






Code Snippet
SELECT
PC.ProductID
FROM -- Product Child
CRP.Product PC
FULL JOIN -- Product Parent
(SELECT ProductFatherID ProductID
FROM CRP.Product
WHERE ProductFatherID is not NULL) PP ON (PC.ProductID = PP.ProductID)
WHERE
PP.ProductID is NULL


Execution plan:

Select (cost 0%) <--- Filter (2%) <--- Merge join (29%) <--- Clustered index scan (51%)
^--- Compute scalar (0%) <--- index seek (18%)
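For comparison, a third formulation worth testing (a sketch only; whether it wins depends on the indexes and row counts) is NOT EXISTS, which the optimizer can usually treat as an anti-semi-join and which avoids the NULL caveats of NOT IN:

SELECT P.ProductID
FROM CRP.Product P
WHERE NOT EXISTS
    (SELECT 1
     FROM CRP.Product C
     WHERE C.ProductFatherID = P.ProductID)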

View 5 Replies View Related

TSQL - Use ORDER BY Statement Without Inserting The Field Name Into The SELECT Statement

Oct 29, 2007

Hi guys,
I have the query below (running okay):



Code Block
SELECT DISTINCT Field01 AS 'Field01', Field02 AS 'Field02'
FROM myTables
WHERE Conditions are true
ORDER BY Field01

The results are just as I need:


Field01 Field02

------------- ----------------------

192473 8461760

192474 22810



For other reasons, I need to modify that query to:



Code Block
SELECT DISTINCT Field01 AS 'Field01', Field02 AS 'Field02'
INTO AuxiliaryTable
FROM myTables
WHERE Conditions are true
ORDER BY Field01
SELECT DISTINCT [Field02] FROM AuxiliaryTable

Then the results are:

Field02

----------------------

22810
8461760

And what I need is (without showing any other field):

Field02

----------------------

8461760
22810


Is there any good suggestion?
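One possible shape for that last query (a sketch, assuming the goal is distinct Field02 values presented in the order of their Field01 values) is to group by Field02 and order by an aggregate of Field01:

SELECT [Field02]
FROM AuxiliaryTable
GROUP BY [Field02]
ORDER BY MIN([Field01])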
Thanks in advance for any help,
Aldo.

View 3 Replies View Related

SQL Select Performance Question

Jul 12, 2004

I have only been working with SQL Server for several months so I would consider myself a novice.
I am trying to return 3 different datasets from the same query information.
For example, a person will do a search for articles on "flowers red annuals".
I first want to return a dataset with item details: name of book, copyright, author etc...

I then want to populate 2 drop down lists to help narrow down the search

1. Article type
Newspaper(count)
Magazine(count)
Hardback Book(count)
Softback book(count)
Etc...

2. Publisher
Publisher A(count)
Publisher B(count)
Publisher C(count)
etc....

I am currently running 3 separate stored procedures against the master file to return this data.
1. select ... for the detail
2. Select ...count() group by for the other 2
Is there a better way to get this information without having to run the query 3 times?
Is there a way to run the (select..group by) against the initial detail information to populate the 2 drop down controls?
Is there a way to return more than one dataset of information from one stored procedure?
Because the data source has millions of records, I would assume this is inefficient - or does it matter?
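A single stored procedure can return several result sets simply by containing several SELECTs. A sketch of that idea (the table and column names here are hypothetical, since the real schema isn't shown) caches the matching detail rows in a temp table once and derives both counts from it:

CREATE PROCEDURE dbo.SearchArticles   -- hypothetical name
    @SearchTerms varchar(200)
AS
    SELECT  ItemID, Title, Copyright, Author, ArticleType, Publisher
    INTO    #Results
    FROM    dbo.MasterFile            -- hypothetical table
    WHERE   Keywords LIKE '%' + @SearchTerms + '%'

    SELECT * FROM #Results                    -- result set 1: item details

    SELECT ArticleType, COUNT(*) AS Cnt       -- result set 2: article type counts
    FROM #Results
    GROUP BY ArticleType

    SELECT Publisher, COUNT(*) AS Cnt         -- result set 3: publisher counts
    FROM #Results
    GROUP BY Publisher
GO

From ADO/ADO.NET each result set is then read in turn (NextRecordset / NextResult).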
Thanks in advance for any advice.

View 1 Replies View Related

SELECT INTO..# Performance Impact In SQL 7.0

Jan 18, 2001

I want to determine the performance impact caused by the extensive use of the 'select into #' statement in a production environment. The current situation is that our reports team extensively uses the 'select into #' statement to build smaller subsets of data. These subsets are then used as the basis to create summary style reports and exports. All this is accomplished via the use of SQL pass-through.

After these reports/exports are completed and tested, they are then released to our operations department and the users. The reports/exports then can be run against the production server at the discretion of the user, provided they have the appropriate permissions. These reports/exports target the live data on the primary production server that already has been designated for the use of the application software.

Now I know that reporting against a transactional-based server, where the users run the application, is not a very good idea. (Inherited) I am currently migrating all reports/exports to a reporting server. Although it will still be transaction-based, the reports/exports will be isolated from user activity. Eventually we will be moving toward a warehouse scenario.

I also know that the extensive use of the 'select into #' statement is not a good coding practice for use in production. I provided several alternatives to this practice:

1) insert..select 2) insert..execute - from stored procedure
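For clarity, the first alternative might look like this (a sketch with hypothetical table and column names; the temp table is created explicitly and then populated with INSERT..SELECT instead of letting SELECT INTO build it):

CREATE TABLE #ReportSubset (
    CustomerID  int      NOT NULL,
    InvoiceDate datetime NOT NULL,
    Amount      money    NOT NULL
)

INSERT INTO #ReportSubset (CustomerID, InvoiceDate, Amount)
SELECT CustomerID, InvoiceDate, Amount
FROM   dbo.Invoices          -- hypothetical source table
WHERE  InvoiceDate >= '20001201'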

I have read that in SQL 6.5 this may cause severe performance and locking problems in system DBs and tempdb. However, the following document in the Microsoft Knowledge Base indicates that SQL 7.0 may have corrected this issue.

Q153441 - FIX SELECT INTO Locking Behavior.htm

Despite the indication of being corrected, I am still not convinced. I am frequently seeing drastic performance hits, especially when several of the reports are running (which is very common). My concern is that while moving these reports/exports to a reporting server may help the users, I believe it may just be migrating the problem to another location. I will be working with the developers to optimize their code and will investigate index issues.

** To make a long story short. I would like someone who has experience with this provide me with the top 5+ reasons not to use the 'select into #' methodology in a production environment. Further, if anyone has any documentation, I would surely like the info.

Thanks, Dave

View 2 Replies View Related

Performance Hit If I Use SELECT DISTINCT?

Mar 23, 2006

We use an ASP/MS SQL 2000 system to send out our mass e-mailing to about 3,500 subscribers (and the list is growing). There are some duplicate entries in the DB and I was thinking about using this code


SELECT DISTINCT email FROM Subscribers


to remove the duplicates (at least until we can get around to cleaning up the data and then putting up a new subscriber form to prevent duplicate entries).
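For reference, when the clean-up does happen, the usual way to locate the duplicates (a small sketch against the same Subscribers table) is:

SELECT email, COUNT(*) AS copies
FROM Subscribers
GROUP BY email
HAVING COUNT(*) > 1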

I was wondering, though... Will this have a significant impact on our performance? I mean, that's a lot of e-mail addresses to process and I don't want to bog our system down unnecessarily.

What do you performance gurus think?

View 3 Replies View Related

DROP TABLE, SELECT INTO, Performance

Jul 11, 2007

Hello -- thank you for taking the time to read this.

I have a very large table that is used both for archives and new information. To get the current information, the table is queried by many different users at various polling periods. The SELECT required includes about fifteen JOINS, and only returns about 200 rows at any given time.

So I got to thinking it might be faster to periodically run the big query as a SELECT INTO into a smaller table and let the polling clients query the smaller table with SELECT *. Periodically, the smaller table would be DROPPED and refreshed with another SELECT INTO.

Trouble is, the data would have to be updated once every 30 seconds, and there are inbound polls coming at the rate of about 200 per minute. It got me to thinking what might happen if a client attempted to query the smaller table when it was in the process of being dropped and refilled.

So my question is three-part:

1) assuming a larger table of about 500,000 records and only 500 pertinent at any given time, is there any real potential of performance enhancement by switching to a SELECT INTO table?

2) if so, is there a chance of a client failing a query if the inbound query somehow collides with the DROP/SELECT INTO procedure?

3) if so, is there any way to prevent it or a better way of doing this?
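On point 3, one pattern that avoids the table ever disappearing (a sketch with hypothetical names; TRUNCATE plus INSERT inside one transaction means readers briefly block during the refresh instead of failing against a dropped table):

BEGIN TRAN
    TRUNCATE TABLE dbo.CurrentRows          -- hypothetical snapshot table

    INSERT INTO dbo.CurrentRows (ItemID, ItemValue)
    SELECT b.ItemID, b.ItemValue            -- the fifteen-join query would go here
    FROM   dbo.BigArchiveTable b
    WHERE  b.IsCurrent = 1
COMMIT TRAN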

Thanks again for reading, and in advance for any help you can provide. I apologize if I sound like a dummy - it's hard to fake intelligence!

View 3 Replies View Related

Is There Any Way To Improve The Performance Of Select Statements In MS SQL?

Jul 20, 2005

Hi All,

I am getting slower performance of select statements in MS SQL. I am finding select statements in MS SQL are even slower than MS ACCESS. Is there any way to improve the performance of select statements in MS SQL by tuning the database or anything else?

Thanks in advance!
Hoque

View 3 Replies View Related

In Terms Of Performance: Select Top .. OR Set Rowcount ?

Apr 29, 2008



I have a table that contains 10 million records. Of the following 2 statements, which one provides better performance? Frankly, I have no idea how to compare the execution plans...

Select Top 10000 * From Table

OR

Set rowcount 10000
Select * From Table
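One low-effort way to compare the two (a sketch; run each batch on its own and compare the logical reads and elapsed times printed on the Messages tab):

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT TOP 10000 * FROM [Table]

SET ROWCOUNT 10000
SELECT * FROM [Table]
SET ROWCOUNT 0      -- reset, otherwise it keeps limiting later queries

SET STATISTICS TIME OFF
SET STATISTICS IO OFF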

View 11 Replies View Related

Network Performance For Simple Select Statement

Jul 23, 2005

I've just inherited a system and have some concerns about the speed of connections to a remote server (SQL2000). If I do a simple select statement on the table below, it takes 14 minutes to retrieve 6 million rows across a 2Mb line. Obviously it's a reasonable amount of data to retrieve, but I would have thought this would be quicker if I'm honest. Run locally, this is 50 seconds.

My thoughts are that there may be some issues with our connection (we get general network errors sporadically, which are being looked at), but wanted some thoughts if the performance is acceptable for what it is doing with what is available. I don't think there is a SQL issue, but want to check if this sounds about right.

It's early days, so I'm after a general impression of the speed of retrieval for the amount of data on the available bandwidth. Assuming a best performance scenario, what is the minimum time it should take as a best guess?

Thanks
Ryan

CREATE TABLE [FIELD_VALUES] (
    [DEALER_DATA_ID] [int] NOT NULL ,
    [FIELD_CODE] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [FIELD_VALUE] [numeric](15, 5) NULL ,
    [CHANGED_TYPE] [int] NULL ,
    CONSTRAINT [PK_FIELD_VALUES] PRIMARY KEY CLUSTERED
    (
        [DEALER_DATA_ID],
        [FIELD_CODE]
    ) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
GO
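As a very rough sanity check (an estimate only; the ~40 bytes per row is an assumption covering the row data plus wire-protocol overhead), the line itself puts a floor on the time:

-- 6,000,000 rows * ~40 bytes each over a 2 Mbit/s line (~250,000 bytes/s)
SELECT (6000000.0 * 40) / (2000000.0 / 8) / 60 AS est_minutes   -- roughly 16

On that assumption the observed 14 minutes is in the same ballpark as the bandwidth limit rather than pointing at a SQL problem.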

View 1 Replies View Related

Bad Performance Issues When Trying To SELECT TOP 10 * From <partitioned_view> ORDER BY 1

Nov 28, 2007

Hi.

We are now working with SQL2000sp4, planning migration to SQL2005 in few months though.

I've faced performance issues with large tables (200-500 mln rows, 50-100Gb of data+indexes)

New data are uploaded into the tables once a day, around 1 mln rows. That's the only time data is inserted; during the daytime the tables are used for SELECTs only.

The problem is that the daily INSERTs are now taking too much time, because of rebuilding a few indexes on the table.

I noticed that a partitioning solution looks like it would solve this problem well. So I split the master data table into 4 tables:

old master table:
CREATE TABLE [dbo].[DTB] (
[report_date] [smalldatetime] NOT NULL ,
[param] [char] (10) COLLATE Cyrillic_General_CI_AS NOT NULL ,
[param_value1] [decimal](18, 2) NULL ,
[param_value2] [decimal](18, 2) NULL ,
[param_value3] [smallint] NULL
)
CREATE INDEX [IX_DTB_DT_ACC] ON [dbo].[DTB]([report_date], [param])

new partition1:
CREATE TABLE [dbo].[be_data_DTB_part_2007_q1] (
[report_date] [smalldatetime] NOT NULL ,
[param] [char] (10) COLLATE Cyrillic_General_CI_AS NOT NULL ,
[param_value1] [decimal](18, 2) NULL ,
[param_value2] [decimal](18, 2) NULL ,
[param_value3] [smallint] NULL ,
CONSTRAINT [CK_be_data_DTB_part_2007_q1_report_date] CHECK ([report_date] >= '2007-Jan-01' and [report_date] <= '2007-Mar-31')
)
CREATE CLUSTERED INDEX [idc_be_data_DTB_part_2007_q1_report_date_param] ON [dbo].[be_data_DTB_part_2007_q1]([report_date], [param])

Similar are definitons for other partitions - q2, q3 and q4.

And here is partitioned view itself:
create view dbo.data_DTB
as
select * from dbo.be_data_DTB_part_2007_q1
union all
select * from dbo.be_data_DTB_part_2007_q2
union all
select * from dbo.be_data_DTB_part_2007_q3
union all
select * from dbo.be_data_DTB_part_2007_q4

I want users to access data SELECTing from view data_DTB, while I perform daily inserts right into be_data_DTB_part_2007_q4.

In general, this solution works well. For example:




Code Block
set statistics profile on
go
select * from data_DTB where report_date = '2007-Apr-16'
go
set statistics profile off
go

1290674 1 SELECT * FROM [data_DTB] WHERE [report_date]=@1
1290674 1 |--Concatenation
0 1 |--Filter(WHERE:(STARTUP EXPR(Convert([@1])<='Mar 31 2007 12:00AM' AND Convert([@1])>='Jan 1 2007 12:00AM')))
0 0 | |--Clustered Index Seek(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q1].[idc_be_data_DTB_part_2007_q1_report_date_param]), SEEK:([be_data_DTB_part_2007_q1].[report_date]=Convert([@1])) ORDERED FORWARD)
1290674 1 |--Filter(WHERE:(STARTUP EXPR(Convert([@1])<='Jun 30 2007 12:00AM' AND Convert([@1])>='Apr 1 2007 12:00AM')))
1290674 1 | |--Clustered Index Seek(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q2].[idc_be_data_DTB_part_2007_q2_report_date_param]), SEEK:([be_data_DTB_part_2007_q2].[report_date]=Convert([@1])) ORDERED FORWARD)
0 1 |--Filter(WHERE:(STARTUP EXPR(Convert([@1])<='Sep 30 2007 12:00AM' AND Convert([@1])>='Jul 1 2007 12:00AM')))
0 0 | |--Clustered Index Seek(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q3].[idc_be_data_DTB_part_2007_q3_report_date_param]), SEEK:([be_data_DTB_part_2007_q3].[report_date]=Convert([@1])) ORDERED FORWARD)
0 1 |--Filter(WHERE:(STARTUP EXPR(Convert([@1])<='Dec 31 2007 12:00AM' AND Convert([@1])>='Oct 1 2007 12:00AM')))
0 0 |--Clustered Index Seek(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q4].[idc_be_data_DTB_part_2007_q4_report_date_param]), SEEK:([be_data_DTB_part_2007_q4].[report_date]=Convert([@1])) ORDERED FORWARD)


As far as I can see, it checks whether the filter parameter fits the CHECK constraint of each partition, then performs a clustered index seek only on the partition actually containing the data and avoids touching the 3 other partitions. That's great! This example just illustrates that partitioning actually works for me.

Unfortunately, there is another query with just awful performance on the partitions compared to the single table. Let's try to get the few rows entered on the last day:




Code Block
set showplan_text on
go
select top 10 * from DTB order by report_date desc
go
set showplan_text off
go

|--Top(10)
|--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([MYDB].[dbo].[DTB]))
|--Index Scan(OBJECT:([MYDB].[dbo].[DTB].[IX_DTB_DT_ACC]), ORDERED BACKWARD)


Excellent! It runs in only a few seconds. But using the partitions:




Code Block
set showplan_text on
go
select top 10 * from data_DTB order by report_date
go
set showplan_text off
go

|--Top(10)
|--Merge Join(Concatenation)
|--Merge Join(Concatenation)
| |--Merge Join(Concatenation)
| | |--Sort(ORDER BY:([be_data_ DTB_part_2007_q1].[report_date] ASC, [be_data_ DTB_part_2007_q1].[param] ASC, [be_data_DTB_part_2007_q1].[param_value1] ASC, [be_data_ DTB_part_2007_q1].[param_value2] ASC, [be_
| | | |--Clustered Index Scan(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q1].[idc_be_data_DTB_part_2007_q1_report_date_param]))
| | |--Sort(ORDER BY:([be_data_DTB_part_2007_q2].[report_date] ASC, [be_data_ DTB_part_2007_q2].[param] ASC, [be_data_DTB_part_2007_q2].[prama_value1] ASC, [be_data_DTB_part_2007_q2].[param_value2] ASC, [be_
| | |--Clustered Index Scan(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q2].[idc_be_data_DTB_part_2007_q2_report_date_param]))
| |--Sort(ORDER BY:([be_data_DTB_part_2007_q3].[report_date] ASC, [be_data_DTB_part_2007_q3].[param] ASC, [be_data_DTB_part_2007_q3].[param_value1] ASC, [be_data_DTB_part_2007_q3].[param_value2] ASC, [be_data_
| |--Clustered Index Scan(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q3].[idc_be_data_DTB_part_2007_q3_report_date_param]))
|--Sort(ORDER BY:([be_data_DTB_part_2007_q4].[report_date] ASC, [be_data_DTB_part_2007_q4].[param] ASC, [be_data_DTB_part_2007_q4].[param_value1] ASC, [be_data_DTB_part_2007_q4].[param_value2] ASC, [be_data_
|--Clustered Index Scan(OBJECT:([MYDB].[dbo].[be_data_DTB_part_2007_q4].[idc_be_data_DTB_part_2007_q4_report_date_param]))


As one can see, that's just awful. When I make a graphical execution plan with Ctrl+L it says the costs for the Sort operations are in the thousands. I didn't run this query to check the statistics profile, because on our server it would run for hours.

I found a topic regarding this problem: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1270479&SiteID=17

Mostafa Elhemali describes exactly my problem in the last post. I also thought that getting TOP 10 * from a partitioned view shouldn't be a problem: it's quite obvious to just grab the top 10 from each partition and then find the top 10 amongst them. Looks like it doesn't work this way though.

So the question is: are there any new workarounds for this problem? Or maybe it is already solved in the latest patches for SQL2005? I know that SQL2005 introduces a new way of partitioning tables; maybe the problem will go away if using SQL2005 partitioned tables instead of old-style partitioned views?
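One manual workaround along those lines (a sketch only; it hard-codes the partition tables and has to be kept in sync with the view definition) is to take the TOP 10 from each partition first and then the TOP 10 of the combined result:

SELECT TOP 10 *
FROM (
    SELECT * FROM (SELECT TOP 10 * FROM dbo.be_data_DTB_part_2007_q1 ORDER BY report_date DESC) AS q1
    UNION ALL
    SELECT * FROM (SELECT TOP 10 * FROM dbo.be_data_DTB_part_2007_q2 ORDER BY report_date DESC) AS q2
    UNION ALL
    SELECT * FROM (SELECT TOP 10 * FROM dbo.be_data_DTB_part_2007_q3 ORDER BY report_date DESC) AS q3
    UNION ALL
    SELECT * FROM (SELECT TOP 10 * FROM dbo.be_data_DTB_part_2007_q4 ORDER BY report_date DESC) AS q4
) AS candidates
ORDER BY report_date DESC

The intent is that each inner TOP 10 can read just the tail of its clustered index, so the outer sort only ever sees 40 rows.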

Thank you.


P.S. Upon reviewing my post I noticed that I issued ORDER BY report_date DESC against the unpartitioned table, and ORDER BY report_date against the partitioned view. Well, specifying ORDER BY report_date DESC for the partitioned view gives similar results, except that a few ASCs are replaced with DESCs.

View 2 Replies View Related

DB Engine :: TOP Changing Cursor Select Performance

Jun 15, 2015

The cursor at the bottom iterates only to print the number of rows. The problem is in the select: it takes 30 seconds to iterate through 1242 records. But if I add a TOP 1000000 (or whatever number) to the select, the same iteration takes less than 1 second. I've tested each query without the cursor, and both have the same cost and performance (not exactly the same plan). Note that I got the same performance improvement declaring the cursor as STATIC. Why is the TOP affecting the cursor iteration so much?

Declare @query varchar(512)
DECLARE Itera CURSOR --LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
select --TOP 1000000

[code]....

View 2 Replies View Related

TSQL - Select First 3 Chars Where Not Special Chars

Feb 5, 2002

Say I have a column called 'NAME' in a table called 'CLIENT' and the values in NAME are Surnames or company names like:

NAME
----------------------
1-FOR-ALL
A.B. SMITH (TOOLS LTD)
BROWN
THOMSON
VW CAR SALES



I want my select to return the first 3 characters, excluding special characters (only characters between 1 and z).

For example, the following would be returned for the data above:

NAME
----------------------
1FO
ABS
BRO
THO
VWC
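One possible approach (a sketch of a hypothetical scalar UDF, interpreting "between 1 and z" as keeping digits and letters; it walks the string and keeps such characters until it has three):

CREATE FUNCTION dbo.fnFirst3Clean (@name varchar(100))
RETURNS varchar(3)
AS
BEGIN
    DECLARE @result varchar(3), @i int, @ch char(1)
    SELECT @result = '', @i = 1
    WHILE @i <= LEN(@name) AND LEN(@result) < 3
    BEGIN
        SET @ch = SUBSTRING(@name, @i, 1)
        IF @ch LIKE '[0-9A-Za-z]'
            SET @result = @result + @ch
        SET @i = @i + 1
    END
    RETURN @result
END

It would then be called as: SELECT NAME, dbo.fnFirst3Clean(NAME) FROM CLIENT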

View 1 Replies View Related


Significant Performance Difference If SELECT Command Contains User

Oct 25, 2006

SQL 2000 Connection String:

user id=MyUserName;password=MyPassword;initial catalog=MyDB;server=MyServer;Connect Timeout=30

This SELECT statement returns its 10 results nearly instantly:

SELECT * FROM MyTable

Ditto from above, but completes in 30-40 seconds:

SELECT * FROM [dbo].[MyTable]

Ditto from above, but completes nearly instantly:

SELECT TOP 1000 * FROM [dbo].[MyTable]

Obviously I have stopped using the [dbo] syntax in my SqlCommand's (SELECT's and EXECUTE's) but still would like to know why this is.

vr, Rich

View 3 Replies View Related

A Performance Question: SELECT ID As SubID FROM MyTable AS MyTable1

Sep 13, 2005

For some reasons I need to access the same field of the same table twice in a query, and each gives out a different value. Like this:

"SELECT myTable.id, myTable1.id as subID FROM myTable INNER JOIN ... INNER JOIN myTable as myTable1 ..."

The question is: when I write myTable as myTable1, will it affect the query performance if myTable is a large table? Will it create another copy of myTable that big? Or should I create a view like "CREATE VIEW myTable1 AS SELECT id FROM myTable" to reduce the size of myTable1?

Thank you.

View 1 Replies View Related

Strange Performance Problem With SELECT COUNT(1) And Sp_executesql

Feb 22, 2008

Hello,

I have the following queries that run on a view called EntrySummary:

1)

exec sp_executesql N'SELECT COUNT (1) FROM [dbo].[EntrySummary] WHERE [EntrySummary].[SubmissionStatusID] = @SubmissionStatusID0 AND [EntrySummary].[CreatedBy] = @CreatedBy1',N'@SubmissionStatusID0 int,@CreatedBy1 nvarchar(20)',@SubmissionStatusID0=4,@CreatedBy1=N'domainaperson'


2)
exec sp_executesql N'SELECT COUNT (1) FROM [dbo].[EntrySummary] WHERE [EntrySummary].[CreatedBy] = @CreatedBy1 AND [EntrySummary].[SubmissionStatusID] = @SubmissionStatusID0',N'@SubmissionStatusID0 int,@CreatedBy1 nvarchar(20)',@SubmissionStatusID0=4,@CreatedBy1=N'domainaperson'


(The only difference between the two queries being the order of the where clauses)

Both return the correct answer (4144), but Query 2 takes 10-15 seconds whereas Query 1 takes less than a second.
If the same query is resubmitted several times, this doesn't affect the times.

(The vast majority of the records have a status of 4, but only about 3% will be created by the person).

Replacing Count(1) with count(*) makes both queries return quickly.

Is sp_executesql creating poor execution plans ? Can anyone explain this behaviour?


Regards,
AndyM

View 1 Replies View Related

Transact SQL :: Huge Performance Difference For Same Select Between Environments

Jun 22, 2015

I have encountered a problem with a specific set of tables. The same select yields slightly differing execution plans in two different environments (instances), but the slight variation seems to contain a huge difference in stats. I don't know the significance of these stats. The two tables have the exact same indices.

This is the select statement:

SELECT 'xx' FROM DUKS.dbo.Profiler
WHERE DNA_Løbenummer IN
(SELECT DNA_Løbenummer FROM DUKS.dbo.Effektregister
WHERE Sagsnummer = '2015-00002')

View 17 Replies View Related

Equivalent Tsql For Sql Server 2000 Is Needed [from Sql Server 2005 Only Tsql]

Nov 19, 2007

Can anyone please give me the equivalent TSQL for SQL Server 2000 for the following two queries, which work fine in SQL Server 2005?

1
-- Full Table Structure

select t.object_id, t.name as 'tablename', c.name as 'columnname', y.name as 'typename',
    case y.name
        when 'varchar' then convert(varchar, c.max_length)
        when 'decimal' then convert(varchar, c.precision) + ', ' + convert(varchar, c.scale)
        else ''
    end attrib,
    y.*
from sys.tables t, sys.columns c, sys.types y
where t.object_id = c.object_id
and t.name not in ('sysdiagrams')
and c.system_type_id = y.system_type_id
and c.system_type_id = y.user_type_id
order by t.name, c.column_id


2
-- PK and Index
select t.name as 'tablename', i.name as 'indexname', c.name as 'columnname',
    i.is_unique, i.is_primary_key, ic.is_descending_key
from sys.indexes i, sys.tables t, sys.index_columns ic, sys.columns c
where t.object_id = i.object_id
and t.object_id = ic.object_id
and t.object_id = c.object_id
and i.index_id = ic.index_id
and c.column_id = ic.column_id
and t.name not in ('sysdiagrams')
order by t.name, i.index_id, ic.index_column_id

This SQL extracts information about the structure of the SQL Server database [2005].
I need SQL which will return the same result for SQL Server 2000.
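A rough SQL 2000 counterpart of the first query (a sketch using the old sysobjects/syscolumns/systypes catalog tables; length, prec and scale are the 2000-era columns standing in for max_length/precision/scale, so verify them before relying on the output):

select o.id as object_id, o.name as 'tablename', c.name as 'columnname', t.name as 'typename',
    case t.name
        when 'varchar' then convert(varchar, c.length)
        when 'decimal' then convert(varchar, c.prec) + ', ' + convert(varchar, c.scale)
        else ''
    end attrib
from sysobjects o
join syscolumns c on c.id = o.id
join systypes t on t.xusertype = c.xusertype
where o.xtype = 'U'
and o.name not in ('sysdiagrams')
order by o.name, c.colid

The second query would need a similar rewrite on top of sysindexes and sysindexkeys.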

View 1 Replies View Related

Large Log Table ....SELECT * FROM Statement....killing The Performance Of Server..Help Me Out..

Mar 12, 2008

Hi all

I have a large log table with a large amount of data (1 month only). If I run a query like SELECT * FROM <table_name> the server goes very, very slow.

Because of the large amount of data the system is going slow.

Please, somebody help me with suggestions on how to get good performance.

View 4 Replies View Related

Index/performance Index For SELECT.... IN Statement

Sep 10, 2007



Hi All,

I'm working to improve some SQL performance.

One of the major constructs inside the SELECT statement is:

WHERE FIELDA IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='A')
AND FIELDB IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='B')
AND FIELDC IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='C')
AND FIELDD IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='D')
AND FIELDE IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='E')
AND FIELDF IN (SELECT PARAVALUE FROM PARATABLE WHERE SESSIONID = 'XXXXX' AND PARATYPE='F')

(It's to compare the field content with some user input parameter inside a parameter table... )

I think the SELECT ... IN is probably causing much of the slowness in the SQL statement. I have indexed FIELDA, FIELDB, FIELDC etc. as well as PARAVALUE and PARATYPE in the PARATABLE table. But performance is still slow and execution takes more than 20 seconds for 200,000 rows of records.

Does anyone know if there is still any chance to improve the performance of something like this?
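One thing worth trying (a sketch only; MAINTABLE is a hypothetical name for the table being filtered) is rewriting each IN as a correlated EXISTS and giving PARATABLE a composite index that covers the whole lookup:

CREATE INDEX IX_PARATABLE_Lookup ON PARATABLE (SESSIONID, PARATYPE, PARAVALUE)

SELECT m.*
FROM MAINTABLE m
WHERE EXISTS (SELECT 1 FROM PARATABLE p
              WHERE p.SESSIONID = 'XXXXX' AND p.PARATYPE = 'A' AND p.PARAVALUE = m.FIELDA)
  AND EXISTS (SELECT 1 FROM PARATABLE p
              WHERE p.SESSIONID = 'XXXXX' AND p.PARATYPE = 'B' AND p.PARAVALUE = m.FIELDB)
  -- ...and likewise for FIELDC through FIELDF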

Much Thanks,

Andy

View 14 Replies View Related

[Performance Discussion] To Schedule A Time For Mssql Command, Which Way Would Be Faster And Get A Better Performance?

Sep 12, 2004

1. Use the MS SQL Server Agent service to handle the schedule
2. Use a .NET Windows service with timers to call SqlClientConnection

Of the above, which way would be faster and give better performance?

View 2 Replies View Related

Extremely Poor Query Performance - Identical DBs Different Performance

Jun 23, 2006

Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues with the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plan and I found that they were the exact same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory etc. Or could it be OS software related issues, like service packs, SQL Server configurations etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55GB of free space on the disk.
2. Applied SQL Server SP4 service packs
3. Defragmented the primary hard drive
4. Applied all Windows Server 2003 updates

Here are the prod server's system specs:
2x Intel Xeon 2.67GHz
Total Physical Memory 2GB, Available Physical Memory 815MB
Windows Server 2003 SE /w SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80GHz
2GB DDR2-SDRAM
Windows Server 2003 SE /w SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T

*** Sent via Developersdex http://www.developersdex.com ***

View 2 Replies View Related


TSQL Or SQL CLR?

Aug 9, 2006

Hello Friends, I am not sure if this is the right place to post this question, but if not please suggest where it can be posted.
I have been thinking of writing stored procs in SQL CLR / changing all of my stored procs to SQL CLR.
Is there anything I need to keep in mind about the size of the stored proc (like how calculation-intensive it is) before I do that? I mean, can I even change a relatively small TSQL stored proc that simply says SELECT * FROM Customers to SQL CLR? Or is SQL CLR only useful / does it only make a difference for calculation-intensive and big stored procs?
When I talked to our architects they said every small stored proc can be written using SQL CLR, and moreover to forget about the classic TSQL stored procs and get used to writing SQL CLR whenever writing any database related stuff.
Also, there are many articles that discuss the advantages of SQL CLR over TSQL, but I would appreciate it if someone could put down a few bulleted points on why you think SQL CLR is more powerful.
Please advise. Thanks in advance, -L

View 2 Replies View Related

TSQL Please Help

Jan 8, 2006

Hi
I am creating some dynamic SQL by passing various parameters to my stored procedure. One of my parameters is a string that has lots of values separated by commas, to help build an 'IN' statement.

SET @sSQL = 'SELECT * FROM areas'
SET @p1 = '10,20'
IF @p1 IS NOT NULL
BEGIN
    SET @sSQL = @sSQL + ' WHERE (Areas IN (''' + Replace(@p1,',',''',''') + '''))'
END
The above query runs perfectly well in Query Analyser; however, when I put it into my ASP.NET application I get an error of "Error converting data type varchar to numeric."
So I think I need to do some sort of casting or converting, but I'm not sure how to do it. Or do I need to use an INSTRING?
I did manage to work out a method by using the following
SELECT * FROM Areas WHERE PATINDEX('%,' + CAST(ArType AS VARCHAR) + ',%',',' + @p1 + ',') > 0
But I can't seem to convert the above line into a coherent dynamic statement. My feeble attempt is below, but I keep getting errors
SET @sql = @sql + ' WHERE PATINDEX(''%,'' + CAST(ArType AS VARCHAR) + '',%'','','' + @p1 + '','') > 0'
I'm struggling to understand all the '''s. My TSQL is pretty basic; any help would be much appreciated.
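For what it's worth, the attempt above keeps @p1 inside the string literal, so its value never gets spliced into the dynamic SQL. A sketch of one way to concatenate it in (untested, and it assumes @p1 only ever contains digits and commas):

SET @sql = @sql + ' WHERE PATINDEX(''%,'' + CAST(ArType AS VARCHAR) + '',%'', '',' + @p1 + ','') > 0'

When @p1 = '10,20' this appends: WHERE PATINDEX('%,' + CAST(ArType AS VARCHAR) + ',%', ',10,20,') > 0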
Many thanks in advance
 

View 1 Replies View Related

Tsql Help!

Mar 14, 2001

Simple update: I want to update max_seq with max(ordr_seq) from another table.
How do I do that?

update h
set max_seq = d.max(ordr_seq)
from h_drug_stage_dup h
join drug_ordr_stage d
on h.patkey = d.patkey and
h.ordr_dtm= d.ordr_dtm and
h.h_drug = d.h_Drug
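A sketch of one way to express this (assuming the intent is the maximum ordr_seq per matching patkey/ordr_dtm/h_drug combination) is to aggregate in a derived table first and join to that:

UPDATE h
SET max_seq = d.max_ordr_seq
FROM h_drug_stage_dup h
JOIN (SELECT patkey, ordr_dtm, h_drug, MAX(ordr_seq) AS max_ordr_seq
      FROM drug_ordr_stage
      GROUP BY patkey, ordr_dtm, h_drug) d
  ON h.patkey = d.patkey
 AND h.ordr_dtm = d.ordr_dtm
 AND h.h_drug = d.h_drug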

View 1 Replies View Related

TSQL Help !

Jun 5, 2001

Please see below (in my sub-query I need to say settle_date = post_date + 3 business days).
How would this be done? Please help!!!

declare @PD datetime, @MY_SD datetime
--SELECT @PD = SELECT POST_DATE FROM TRANSACTION_HISTORY
--select @MY_SD = @PD + 3 --T+3
--select @MY_SD = @MY_SD + case datepart(dw, @MY_SD) when 7 then 2 when 1 then 1 else 0 end*/
SELECT
WIRE_ORDER_NUMBER FROM TRANSACTION_HISTORY
WHERE POST_DATE BETWEEN '02/01/2001' AND '02/28/2001' AND
WIRE_ORDER_NUMBER IN
(
SELECT ORDER_NUMBER
FROM TRANSACTION_ARCHIVE WHERE TRANSACTION_ARCHIVE.ORDER_NUMBER = TRANSACTION_HISTORY.WIRE_ORDER_NUMBER
SETTLE_DATE = DATEADD(day, 3, POST_DATE ) case datepart(dw, POST_DATE) when 7 then 2 when 1 then 1 else 0 end))
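Following the weekend adjustment sketched in the commented-out lines, the IN subquery above might be written as follows (a sketch; it assumes the default DATEFIRST setting, where datepart(dw, ...) returns 7 for Saturday and 1 for Sunday, and that POST_DATE is a column of TRANSACTION_HISTORY):

AND WIRE_ORDER_NUMBER IN
(
    SELECT ORDER_NUMBER
    FROM TRANSACTION_ARCHIVE
    WHERE TRANSACTION_ARCHIVE.ORDER_NUMBER = TRANSACTION_HISTORY.WIRE_ORDER_NUMBER
      AND SETTLE_DATE = DATEADD(day,
              CASE DATEPART(dw, DATEADD(day, 3, POST_DATE))
                  WHEN 7 THEN 2   -- lands on Saturday: push to Monday
                  WHEN 1 THEN 1   -- lands on Sunday: push to Monday
                  ELSE 0
              END,
              DATEADD(day, 3, POST_DATE))
)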

View 5 Replies View Related







Copyrights 2005-15 www.BigResource.com, All rights reserved