Slow Running Query With High LOB Logical Reads, LOB Data
Jun 1, 2007
We had some slowdown complaints lately, and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the UidCustomerID column. SET STATISTICS PROFILE ON shows that every time it executes it does an index seek.
A Profiler session showed a huge number of reads for these queries depending on the value being looked up - anywhere from 1,500 to 50,000. I set STATISTICS IO ON and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.
Three of the columns in the SELECT statement are text columns. When I take them out of the query, the LOB logical reads drop to 0 and climb incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
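(An editor's hedged aside: short of converting the text columns to varchar(max), two levers often help here - select the text columns only on screens that actually need them, and enable the 'text in row' table option so small text values are stored on the data page instead of separate LOB pages. A minimal sketch, assuming typical values fit in 1000 bytes:)
Code Snippet
-- Hedged sketch: values up to 1000 bytes stay on the data row itself,
-- avoiding a separate LOB page read per text column per row. The limit
-- is an assumption - tune it to your typical value size. Existing values
-- only move in-row the next time they are updated.
EXEC sp_tableoption 'profiles', 'text in row', '1000'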
View 3 Replies
Jun 26, 2007
Hi,
I have a report in SQL Reporting Services 2005 which calls a stored proc. The report takes a very long time to run and sometimes returns zero records, but when I run the stored proc in Query Analyzer it takes about 4 seconds!
I have checked the execution log on the RS server using the SQL below:
Code Snippet
use ReportServer
Select * from ExecutionLog with (nolock) order by TimeStart DESC
It shows a large amount of time spent on data retrieval (601,309 ms - about 10 minutes) and no records returned, most likely because of a query timeout:
TimeDataRetrieval: 601309
TimeProcessing: 2227
TimeRendering: 3
Source: 1
Status: rsSuccess
ByteCount: 4916
RowCount: 0
The weird thing is that when I run it in Query Analyzer, I get about 400 records in 4 seconds!
I don't understand what RS is doing that takes up so much time retrieving the data.
The report is very simple - it basically returns the records straight out into a table.
The only thing I somewhat suspected was a parameter data type conflict between RS and SQL, specifically dates. I have start and end date parameters in the report - I tried specifying them as date and as string to see if it made any difference, but it didn't.
Any help would be greatly appreciated.
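(An editor's hedged note: the classic cause of "fast in Query Analyzer, slow from Reporting Services" is parameter sniffing - RS passes the report parameters directly, so the proc can get stuck with a plan compiled for atypical values. A sketch of the local-variable workaround; the proc, table and column names are hypothetical:)
Code Snippet
-- Hedged sketch, names are hypothetical: copying parameters into local
-- variables prevents the optimizer from sniffing the report-supplied
-- values when it compiles the plan.
CREATE PROCEDURE dbo.uspReportData
    @StartDate datetime,
    @EndDate   datetime
AS
BEGIN
    DECLARE @Start datetime, @End datetime
    SELECT @Start = @StartDate, @End = @EndDate

    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE OrderDate BETWEEN @Start AND @End
END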
View 19 Replies
Dec 22, 2000
Hi Everybody,
One of my friends asked me, "How do we reduce a query's logical and scan reads
in SQL Server?"
I really don't know how to answer him.
Can anybody explain this to me?
thanks,
Srini
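(For future readers, a minimal sketch of the usual first lever - an index that matches the query's predicate, with the effect measured via STATISTICS IO; the table, column and index names are hypothetical:)
Code Snippet
-- Hedged sketch: measure logical reads and scan count before and after
-- adding an index that lets the WHERE clause seek instead of scan.
SET STATISTICS IO ON
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42
SET STATISTICS IO OFF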
View 2 Replies
Dec 12, 2005
How do I determine which method I should use if I want to optimize the performance of a database? I took the Northwind database to run my example. My query: I want to retrieve the Employees' First and Last Names that sold between $100,000 and $200,000.
First, let me create a function that takes the EmployeeID as the input parameter and returns the Employee's First and Last name:
CREATE FUNCTION dbo.GetEmployeeName(@EmployeeID INT)
RETURNS VARCHAR(100)
AS
BEGIN
DECLARE @NAME VARCHAR(100)
SELECT @NAME = FirstName + ' ' + LastName
FROM Employees
WHERE EmployeeID = @EmployeeID
RETURN ISNULL(@NAME, '')
END
My first method:
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID) AS Employee,
SUM(UnitPrice * Quantity) AS Amount
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID, dbo.GetEmployeeName(EmployeeID)
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
It runs in 4 seconds. Here are the STATISTICS IO and STATISTICS TIME results:
(3 row(s) affected)
Table 'Order Details'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3934 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
Now my second method:
IF (SELECT OBJECT_ID('tempdb..#temp_Orders')) IS NOT NULL
DROP TABLE #temp_Orders
GO
SELECT EmployeeID, SUM(UnitPrice * Quantity) AS Amount
INTO #temp_Orders
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
GO
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID), Amount
FROM #temp_Orders
GO
It runs in 0 seconds. Here are the STATISTICS IO and STATISTICS TIME results:
Table '#temp_Orders0000000000F1'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Order Details'. Scan count 830, logical reads 1672, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms.
(3 row(s) affected)
Table '#temp_Orders0000000000F1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
By the way, why does "SQL Server Execution Times" appear 3 times and not just once?
Summary: the first method is clean - one single SELECT statement - but takes 4 long seconds to execute, and its logical reads are very few compared to the second method. The second method is less clean and uses a temp table, but takes 0 seconds to execute, and its logical reads are way too high compared to the first method.
What am I supposed to conclude from this example? Which method should I use over the other, and why? Are both methods good, depending on which I prefer? If I can wait four seconds, is it better to reduce the logical reads in order to cause less blocking on the live tables in a heavily accessed database? Which method should I choose on my own database?
Calling a function like dbo.GetEmployeeName gets processed once per returned row, correct? That means if I had a scenario where 1,000 records were to be returned, would it be better to dump the 1,000 records into a temp table and then call the function to process each record one at a time? Or would the direct approach without a temp table cause slower processing and more blocking/deadlocks, because I am calling the function per row while reading directly from the tables?
Thank you
View 1 Replies
Mar 7, 2004
I have about a 447 MB SQL server 2000 database on a desktop PC acting as a QA server. The hardware specs of the QA box are as follows:
CPU: P4 2.4 GHz
Memory: 1GB
Drives: 80 GB IDE
I recently purchased a Dell PowerEdge 2650 server to act as the staging box. The staging box has
CPU: P4 2.4 GHz
Memory: 2GB
Drives: 40GB SCSI, mirrored
I made a backup of the database on the QA box and restored it on the staging box. Yet when I run something as simple as a select query (select * from <table>), the less powerful QA box is faster.
I figured maybe the statistics were different on the staging box. I ran DBCC SHOWCONTIG to make sure they were identical, and also ran Red Gate's SQL Compare and Data Compare to make sure everything matched.
I figured maybe the query optimizer needed to be tweaked. I recreated the indexes and updated statistics on the staging box. The queries actually got slower as a result.
I thought maybe SCSI drives were slower. I tried breaking the mirror on the staging box. No luck. I put the mirror back in place and ran a test where I copied a large folder from one directory to another on the staging box, then repeated the same test with the same data on the QA box. The staging box was more than twice as fast as the QA box.
It doesn't appear to be a problem with the query, and adjusting memory in SQL Server has no effect. Both boxes are running SQL Server 2000 SP3. Why is the bigger machine running queries hundreds of milliseconds slower than the smaller machine? Any help will be appreciated!
View 14 Replies
May 5, 2015
Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?
In the server-side trace I have running, I found a reporting procedure that shows 136,949,501 reads (yes, hundreds of millions), and it takes 13,508 seconds to complete.
So I pulled the code from the trace and executed it via SSMS - it runs in under 1 second and generates only about 4,000 reads (using various different parameters I get the same result).
The execution plan shows nothing unusual.
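(A hedged editor's aside, not confirmed from the trace: .NET and most other clients connect with ARITHABORT OFF while SSMS defaults to ON, so each can get its own cached plan - and the application's plan may have been compiled for atypical parameters. Reproducing the application's setting in SSMS often reveals the slow plan; the proc and parameter names below are hypothetical:)
Code Snippet
-- Hedged sketch: match the application's SET options before comparing,
-- so SSMS reuses the same cached plan the application gets.
SET ARITHABORT OFF
EXEC dbo.ReportingProc @SomeParam = 1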
View 5 Replies
May 22, 2008
Queries against a table in one of my databases are running very slowly. The IO is very high; below is a printout from the SET STATISTICS IO ON command, run for a common query used on the table:
(4162 row(s) affected)
Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
I have a unique clustered index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in Database Tuning Advisor; DTA displays 0% improvement suggestions. I have a number of statistics on the table and index, which are all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query which runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, but I'm sure there is a way to bring down the IO and logical reads to improve performance.
The table and index scripts are below:
Code Snippet
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- =================== Nonclustered Index ===========================
CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
-- =================== Query which is called regularly on the table ===========================
SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;
PS There are 60,078,605 rows in the table
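(A hedged editor's aside: the CONVERT wrapped around logtime makes the date filter non-sargable, which by itself forces a scan of those 60 million rows. A sketch of an index-friendly version of the same filter:)
Code Snippet
-- Hedged sketch: compare logtime directly against datetime bounds so an
-- index leading on logtime (or clientusername, logtime) can seek instead
-- of converting logtime on every row.
SELECT logtime AS [Date], desthost AS [Site], uri AS [Actual Site]
FROM webproxylog
WHERE logtime >= '20080420' AND logtime < '20080515' -- 20 Apr-14 May inclusive
  AND [action] = 9
  AND clientusername = 'USERNAME'
  AND LEFT(mimetype,6) = 'text/h'
  AND RIGHT(uri,4) NOT IN ('.css','.jpg','.gif','.png','.bmp','.vbs')
  AND RIGHT(uri,3) <> '.js'
  AND uri NOT LIKE '%sometext.local%'
  AND uri NOT LIKE '%sometext.co.uk%'
ORDER BY logtime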
Please help!
Many Thanks
View 6 Replies
Apr 12, 2006
Hello
I'm doing some performance research. I have an index with the following column order: ClientId, Active, ProductId. Active is a bit field telling whether the product is active or not; there can be more inactive products than active, but there is always at least one active product.
When I'm executing
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)
I get the following result: Scan count 1, logical reads 490
When I lead SQL Server down the right path by including the two possible values of Active, executing the following SQL:
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)
I get the following result: Scan count 14, logical reads 123
With this information, which version would you say is faster, and why?
When I ran each query 1,000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example it seems that the cost of 1 scan count is about 20 logical reads?
View 8 Replies
Mar 4, 2006
I am running a query in SQL 2000 SP4, Windows 2000 Server, on a box that is not being shared with any other users or SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs etc... OK, it's not pretty code, and my job is to make it better.
But for now, one thing I would like to understand with your help is why the same SP, on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes, and I see one table get a hit of 15 million logical reads:
Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 147, read-ahead reads 0.
This 'TABLE1' has about 400,000 records.
The second time I ran it, right after, in Query Analyzer:
Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 0, read-ahead reads 0.
I can see the physical reads are now 0, as it is understandable that SQL is now fetching the data from memory.
But the third time I ran it:
Table 'TABLE1'. Scan count 28, logical reads 87784, physical reads 0, read-ahead reads 0.
The scan count went down from 2070 to 28. I don't know what the scan count actually is - did it scan the table 28 times? The logical reads went down to 87,784 from 15 million, with 2 seconds execution time!
Does anybody have any ideas why these numbers change? I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, and ran the same exact query, and it took 3-5 seconds with 87,784 reads vs. 15 million. Why don't I see 15 million now?
Well, I kept working during the day, and I happened to run into another set of 15-million runs. A few runs would keep hitting 15 million reads over 1 minute, and eventually the numbers went back down to 87,784 and 2 seconds.
Is it my way of using the computer? Maybe I was opening too many applications and SQL was fighting for memory - would that explain the 15 million reads? I went and changed my SQL Server to use a fixed memory of 100 MB, restarted it, and tested the same query again, but it continued to show 87,784 reads with 2 seconds execution time. I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.
Can someone suggest what this problem could be, and how I could find a way to see 15 million reads again?
By the way, with the limited info you have here about the database I am using: is 87,784 reads a terrible number of reads, average, or normal, when the max records in the many tables involved in this SP is 400,000? I am guessing it is terrible - am I correct?
I would appreciate your help. Thank you
View 4 Replies
Jan 17, 2002
SQL 6.5 - 5.5 Gig
NT
Hello,
Throughout the day, our document management application generates high bursts of physical page reads when users query the database.
What SQL configuration parameter(s) should I check or modify to ensure that the database performs at its optimum during these bursts?
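(A hedged pointer, since this is 6.5: the data cache is sized by the static 'memory' setting, specified in 2 KB units. A sketch, assuming the box has RAM to spare; the value is an assumption:)
Code Snippet
-- Hedged sketch for 6.5: 131072 * 2 KB = 256 MB. A larger buffer cache
-- serves more pages from RAM during read bursts. Takes effect only after
-- restarting the server; leave headroom for the OS.
EXEC sp_configure 'memory', 131072
RECONFIGURE WITH OVERRIDE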
Thank You in advance.
View 1 Replies
May 2, 2007
Hi,
I'm trying to figure out why my SQL Server is flatlined on the CPU. I'm running a trace and can't help but notice this row, with crazy high reads. I'm not sure what this is - it doesn't look good to me, although maybe it's nothing. Any info is much appreciated.
Thanks again!
mike123
Event Class: Audit Logout
TextData: (blank)
ApplicationName: .Net SqlClient Data Provider
LoginName: loginName
CPU: 3764
Reads: 129784
Writes: 3146
Duration: 156
View 3 Replies
Jul 6, 2015
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since historically we have had constant issues in the current environment when massive updates are happening. Reads shall have higher priority over writes.
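(A hedged sketch of one common lever for "reads have priority" workloads: row versioning, so readers stop blocking behind the massive updates. The database name is an assumption, and the trade-off is tempdb version-store usage:)
Code Snippet
-- Hedged sketch: readers see the last committed version of a row instead
-- of waiting on writers' locks.
ALTER DATABASE MyBigDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;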
View 2 Replies
Aug 24, 2007
hi
Suppose a query is running slow - what steps can we follow to optimize it?
I was asked this question in an interview.
Thanks
View 3 Replies
Apr 17, 2008
Hi all,
I'm having what you might call an optimisation issue - but I'm also not sure if my approach to this problem is right. I've spent the whole day trying various methods but none seem to be performing as admirably as I'd hoped.
Basically, here's the scenario:
1. Log files are being inserted into a SQL table via Log Parser 2.2.
2. I have a lookup table of ip address ranges to countries and other geographic data.
3. Once the log row has been inserted from Log Parser, I want to update the same log table with data from the lookup table.
The main problem seems to be the updating (the initial insert from Log Parser is lightning quick).
I initially wrote an AFTER INSERT trigger on the log table that converted the user's IP address into IPNumber format using a simple algorithm, then fed the IPNumber into a quick select query which retrieved the geolocation data. Then, in the same trigger, there was an update statement which basically said:
update dbo.Logs
set [Country] = @Country
where [IpNumber] = @IpNumber and [Country] is null
Therein lies the problem. The update statement slows everything down to almost a standstill and also seems to obtain a lock on the table.
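(A hedged sketch of an alternative - a set-based trigger that updates only the rows just inserted, by key, so dbo.Logs is never scanned. The LogId key and the dbo.IpRanges lookup table are assumptions:)
Code Snippet
-- Hedged sketch, names assumed: joining through the inserted pseudo-table
-- touches only new rows; an index on IpRanges(StartIpNum) keeps the
-- range lookup cheap.
CREATE TRIGGER trg_Logs_Geo ON dbo.Logs AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    UPDATE L
    SET Country = R.Country
    FROM dbo.Logs L
    JOIN inserted i ON i.LogId = L.LogId
    JOIN dbo.IpRanges R ON i.IpNumber BETWEEN R.StartIpNum AND R.EndIpNum
    WHERE L.Country IS NULL
END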
Critical factors:
1. The log table must remain readable.
2. The query must execute in seconds - not over half an hour :)
I've experimented with various combinations of indexing, so far to no avail.
Any suggestions would be very much appreciated.
Regards
View 10 Replies
Aug 27, 2007
I have a SELECT TOP query in order to return the top x records from a table which has the Indexing Service enabled on it, such as this:
SELECT TOP(15) * FROM [TableName] ORDER BY [ColumnName]
and also there are not that many records kept in the table at the moment (max 100 rows); however, it will grow later.
The issue stems from the ORDER BY [ColumnName] part of the syntax, which changes the TOP selection order and makes the query run very slowly, as I have also confirmed in the SQL Server Query Analyzer.
Anyhow, I need the ORDER BY clause in order to show the data sorted by different columns, either ascending or descending.
There might be workarounds to achieve the same goal that I am not aware of.
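(A hedged sketch of one workaround: an index on the ORDER BY column lets the engine read the first 15 rows already in order rather than sort the whole table; you would need one such index per sort column. Names are assumed:)
Code Snippet
-- Hedged sketch: supports TOP(15) ... ORDER BY [ColumnName] ASC (and,
-- scanned backwards, DESC) without a sort step.
CREATE NONCLUSTERED INDEX IX_TableName_ColumnName
ON [TableName] ([ColumnName] ASC)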
Any thoughts are appreciated.
View 3 Replies
Oct 2, 2014
We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.
I am using sys.dm_exec_query_stats, summing the physical and logical reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?
select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;
What is the best way to provide performance-based metrics?
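(A hedged sketch of one approach: snapshot the counters into a table on a schedule, since sys.dm_exec_query_stats only reflects plans currently in cache and its counters reset on instance restart. The table name is an assumption:)
Code Snippet
-- Hedged sketch: persist a daily baseline so day-over-day deltas survive
-- plan-cache eviction and restarts.
CREATE TABLE dbo.DailyReadBaseline (
    CaptureDate   date   NOT NULL,
    TotalPhyReads bigint NOT NULL,
    TotalLogReads bigint NOT NULL
);

INSERT INTO dbo.DailyReadBaseline (CaptureDate, TotalPhyReads, TotalLogReads)
SELECT CAST(GETDATE() AS date),
       SUM(total_physical_reads),
       SUM(total_logical_reads)
FROM sys.dm_exec_query_stats;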
View 4 Replies
Nov 19, 2004
I have an update query which has now been running for 22 hours. It runs on two tables: a lookup table that was just created within the batch, and a denormalised table used for data analysis.
The query that's causing the problem is:
--//////////////////////////////////// this is the one thats running
Print 'Update Provider 04-05 EmAdmsCount12mths : ' + CAST(GETDATE() AS varchar)
GO
Update Provider_APC_2004_05
set EmAdmsCount12mths =
(Select COUNT(*)-1
from Combined_Admissions
where ((Combined_Admissions.NHSNumber = Provider_APC_2004_05.NHSNumber) or
(Combined_Admissions.PASNUMBER = Provider_APC_2004_05.PDDISTNO)) and
(Combined_Admissions.AdmDate BETWEEN DateAdd(yyyy,-1,Provider_APC_2004_05.AdmDate) AND Provider_APC_2004_05.AdmDate) AND
Combined_Admissions.AdmMethod like 'Emergency%')-- and
-- CA.NHSorPrivate = 'NHS'))
FROM Provider_APC_2004_05, Combined_Admissions
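(A hedged observation: the trailing FROM clause cross-joins Provider_APC_2004_05 with Combined_Admissions without any join predicate, which can multiply the work enormously; the correlated subquery already references Combined_Admissions on its own. A sketch of the same update without the cross join:)
Code Snippet
-- Hedged sketch: same logic, outer FROM dropped.
UPDATE Provider_APC_2004_05
SET EmAdmsCount12mths =
    (SELECT COUNT(*) - 1
     FROM Combined_Admissions CA
     WHERE (CA.NHSNumber = Provider_APC_2004_05.NHSNumber
            OR CA.PASNUMBER = Provider_APC_2004_05.PDDISTNO)
       AND CA.AdmDate BETWEEN DATEADD(yyyy, -1, Provider_APC_2004_05.AdmDate)
                          AND Provider_APC_2004_05.AdmDate
       AND CA.AdmMethod LIKE 'Emergency%')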
Any help in improving speed would be most welcome, as there are 3 more of these updates to run right after this one, and the analysis tables are almost double the size of this one.
Dave
View 6 Replies
Jun 11, 2007
Hello,
I developed an SSIS package doing a nightly load into a data warehouse. We have an 8-hour loading window - currently the package takes 16 hours to complete.
I isolated the problem to a Data Flow task where about 35% of the time is spent. This task is pretty straightforward:
- OLE DB source, reading about 800,000 rows from a SQL Server database
- 13 lookups in sequence, to get surrogate keys from dimension tables. Lookups are all on GUIDs.
- An aggregation
- OLE DB target, a fact table in a SQL Server database.
It seems unreasonable for this task to take over 5 hours. It spends the majority of the time on the lookups - not so much on the target, source and aggregation.
Any comments and advice will be greatly appreciated.
Thanks.
(PS some machine details:
OS Name Microsoft(R) Windows(R) Server 2003, Standard Edition
Version 5.2.3790 Service Pack 1 Build 3790
Other OS Description Not Available
OS Manufacturer Microsoft Corporation
System Name ARK-SQL
System Manufacturer HP
System Model ProLiant DL380 G5
System Type X86-based PC
Processor x86 Family 6 Model 15 Stepping 6 GenuineIntel ~1866 Mhz
Processor x86 Family 6 Model 15 Stepping 6 GenuineIntel ~1866 Mhz
BIOS Version/Date HP P56, 9/18/2006
SMBIOS Version 2.3
Windows Directory C:\WINDOWS
System Directory C:\WINDOWS\system32
Boot Device \Device\HarddiskVolume1
Locale United States
Hardware Abstraction Layer Version = "5.2.3790.1830 (srv03_sp1_rtm.050324-1447)"
User Name Not Available
Time Zone South Africa Standard Time
Total Physical Memory 3,327.30 MB
Available Physical Memory 938.20 MB
Total Virtual Memory 1.10 GB
Available Virtual Memory 2.78 GB
Page File Space 2.00 GB
Page File C:\pagefile.sys)
View 6 Replies
Mar 23, 2006
We had a table that had logical fragmentation of 50%. After rebuilding (with default fillfactor 0) I noticed that inserts are much faster. If my page density is 100%, wouldn't I get more page splits? I know I am missing something fundamental here. Could someone get me back on track?
Table size: 1.5 million rows
Insert size: 70K rows
Before: 15 minutes
After: 3 minutes
Index: compound clustered across varchar columns; there are also a couple of non-clustered indexes
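(A hedged sketch of the usual lever if page splits do become the problem: rebuild with a fillfactor below 100 so each page keeps some free space for new rows. The table name is an assumption:)
Code Snippet
-- Hedged sketch: fillfactor 90 leaves ~10% free space per leaf page,
-- trading a few extra reads for fewer page splits on insert.
DBCC DBREINDEX ('dbo.MyTable', '', 90)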
View 11 Replies
Apr 17, 2007
I'm running the same query on two different PCs and tracing the results in Profiler on my PC. When executing the query on PC1, the total number of reads is 200,000. When executing the same query on PC2, the total number of reads is 13,000. That is almost 15 times more reads on PC1 than on PC2. The executed query is the same on both PCs. Any reason for this?
I'm trying to analyse that query and reduce the number of logical reads, as it is too high, but I get completely different results on different PCs.
Thanks.
View 5 Replies
Jan 4, 2007
Hello,
In a Data Flow Task, I have an insert that occurs into a SQL Server 2000 table from a fixed-width flat file. The SQL Server table that the data goes into is accessed through an OLE DB connection manager that uses the Native OLE DB\Microsoft OLE DB Provider for SQL Server.
In the OLE DB Destination, I changed the access mode from Table or View - fast load to Table or View because I needed to implement OLE DB Destination Error Output. The Error output goes to a SQL Server 2000 table that uses the same connection manager.
The OLE DB Destination Editor Error Output 'Error' option is configured to 'Redirect' the row. 'Set this value to selected cells' is set to 'Fail component'.
Was changing the access mode the simple reason why the insert from the flat file takes so much longer, or could there be other problems?
Thank you for your help!
cdun2
View 32 Replies
Feb 20, 2006
Hello SQL and .NET gurus :-)
I have a problem with my website www.eventguide.it. It's completely developed under .NET 2 and SQL Server 2005 Express. My problem is the following: the server is an Intel 3 GHz HT processor with 1 GB RAM. No other page on the running system is a CPU-consuming site. We optimized the SQL statements, the code, the caching and many other parts of the website (pooling on SQL access), but SQL Server uses about 50% to 100% of the CPU and about 400 MB RAM all the time. The whole site seems to be very, very slow. In fact there are many SQL operations on every page request, but we cache a lot of them in different ways (page output caching, application caching). So I don't understand why we have so many performance problems. Any suggestions for optimized code in general? I read nearly all of the MS .NET performance papers - but real-world experience is the missing part :-)
Is it better to cast the values of a SQL reader like this:
Dim String1 As String = CType(DataReader.Item(0), String)
Dim Integer1 As Integer = CType(DataReader.Item(1), Integer)
or like this:
Dim String1 As String = DataReader.Item(0)
Dim Integer1 As Integer = DataReader.Item(1)
Thanks a lot for your help!
FOX
View 6 Replies
Aug 8, 2000
I have a DTS job set up to transfer 550,000 records from a DBF file into SQL Server. If I just let it run, there is a 9-10 minute delay, then it starts. If I try to schedule it as a job, it fails completely. I looked up ways to get it to execute quicker, mainly going to the advanced tab of the transform arrow and making the inserts 1,000 at a time, with the table locked and constraints turned off. Any advice on how to speed it up, or why the job is failing?
View 7 Replies
Apr 27, 2002
Hi All
Last week, we upgraded from SQL 6.5 to SQL 2000. Everything went fine, and we recompiled all the procedures. But when I try to run one procedure, it runs for a long time - more than 10 hours (in SQL 6.5 it ran in 50 minutes). Do I need to set any procedure cache?
Also, the server CPU usage is constantly high - more than 80%. It was fine until last week.
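(A hedged suggestion for this classic post-upgrade symptom: statistics and plans carried over from 6.5 are the usual first suspects. A sketch; the procedure name is an assumption:)
Code Snippet
-- Hedged sketch: refresh statistics database-wide, then force the slow
-- procedure to build a fresh plan on its next execution.
EXEC sp_updatestats
EXEC sp_recompile 'MySlowProc'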
Any suggestions?
Thanks in advance
Jaya
View 2 Replies
Aug 30, 2006
I have written an SP, but it gradually runs slower and slower. I have to run this SP 500 times.
Do you know what is causing that?
View 13 Replies
Jul 1, 2007
When I turn on my PC, it takes about 10 minutes from power-on until I can actually use it.
View 4 Replies
May 2, 2001
Our server is running slowly. There are no locks, and the server has been rebooted, but the problem is still there. This has been going on for some time now. I intend to restart the server. Does anybody have a quick solution? Please help - thanks for your assistance!
View 4 Replies
May 17, 2001
What should I look out for in 6.5 if I want to troubleshoot slow running?
Thanks.
View 2 Replies
Jun 6, 2001
Hi,
When I first start my SQL Server, it runs fast. After 10 to 15 days, SQL Server performance becomes very slow. After I restart the SQL service, it returns to normal.
Thanks
Mohan
View 1 Replies
Oct 13, 1999
Hi list,
I'm a long-time lurker on this list and really enjoy the discussions, although I rarely get a chance to participate.
Here is my situation: we are importing chunks of data (500 records at a time) from a C++ interface. The records have to be transformed before inserting into the target table, which I am doing using a stored proc that is working fine. The records are in memory in C++, and the programmer is looping through them, building inserts into a temp table through ADO (which my proc picks up). The server business object is using the Connection.Execute method, which inserts one record at a time. That part of the process takes over 15 seconds for 500 records, which is the bulk of the total time.
My question is: using ADO, is there a better way to insert these records into the temp table? I see mention of a Recordset interface, but my programmers are new to ADO, and since I am the DBA and have never used ADO, I am not sure what to tell them.
Any insight would be greatly appreciated.
shawn
View 2 Replies
Apr 14, 2004
I have linked three SQL Servers together and have written a stored proc with a cursor that joins three tables from one of the linked servers. When I pull the SQL out of the cursor definition and run it in a query window, it runs fine; but when I run the stored proc, which simply steps through the same select result set, it is too slow for words. It also throws a warning about serializable isolation levels. Is there any way I can fix this?
David
View 1 Replies
Mar 2, 2004
Hi,
Our main production server has started running slow. It is a dual Xeon machine with plenty of RAM, so hardware is not an issue.
Basically, a service connects to the database and executes a few stored procs. The only way I can get the system up to speed again is to recompile one of the SPs, but that is only a temporary fix.
Has anyone had a similar thing?
Can anyone give me help on performance tuning in SQL Server 2000?
Thanks
View 2 Replies
May 13, 2004
Hi guys.
I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 connection (the DSN was configured by a sysadmin on the AS400, so it is configured properly).
When I execute the package manually (by right-clicking and choosing "Execute Package"), the package runs fine and returns data in no time (approximately 30,000 rows in 15 seconds).
The problem starts when a job executes the same package!
When I start the job, the DTS package runs very, very slowly! Sometimes it takes hours to return a few rows, and it seems to be stuck.
The SQL Agent is running as an NT account with administrator rights and has full access to the AS400, so the problem is not the Agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly and then sits in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what could be the problem?
I need help quickly!
Thank You
Gil
View 5 Replies