SQL Server 2005 Large IO And Logical Reads
May 22, 2008
Queries against a table in one of my databases are running very slowly. The IO is very high; below is the output from the SET STATISTICS IO ON command for a common query used on the table:
(4162 row(s) affected)
Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
I have a unique clustered index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in the Database Engine Tuning Advisor; DTA shows 0% improvement suggestions. I have a number of statistics on the table and index, which are all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query which runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, however I'm sure there is a way to bring down the IO and logical reads to improve performance.
The table and index scripts are below:
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- =================== Nonclustered Index ===========================
CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
-- =================== Query which is called regularly on the table ===========================
SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;
PS There are 60,078,605 rows in the table
Please help!
Many Thanks
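One observation to add, hedged as a sketch rather than a fix: every predicate in the WHERE clause except [action] = 9 wraps its column in a function (CONVERT, LEFT, RIGHT), which blocks index seeks on those columns. The CONVERT round-trip through style 106 only strips the time portion of logtime, so an open-ended date range returns the same rows and is sargable, and LIKE 'text/h%' is the sargable form of the mimetype test:
SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM dbo.WebProxyLog
WHERE logtime >= '20080420' AND logtime < '20080515'  -- same rows as the CONVERT version
AND mimetype LIKE 'text/h%'                           -- sargable form of LEFT(mimetype,6) = 'text/h'
AND [action] = 9
AND clientusername = 'USERNAME'
AND RIGHT(uri,4) NOT IN ('.css','.jpg','.gif','.png','.bmp','.vbs')
AND RIGHT(uri,3) <> '.js'
AND uri NOT LIKE '%sometext.local%'
AND uri NOT LIKE '%sometext.co.uk%'
ORDER BY logtime;
-- A covering index shaped for this query (a sketch, assuming one more index
-- on a 60M-row table is acceptable): equality columns first, the range column
-- last. Note the existing DTA index does not include DestHost, so every
-- matching row currently costs a clustered-index lookup or forces a scan.
CREATE NONCLUSTERED INDEX ix_WebProxyLog_action_user_logtime
ON dbo.WebProxyLog ([Action], [ClientUserName], [logTime])
INCLUDE ([uri], [mimetype], [DestHost]);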
View 6 Replies
Oct 2, 2014
We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.
I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?
select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;
What is the best way to provide performance-based metrics?
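Two caveats worth keeping in mind with this approach: sys.dm_exec_query_stats only reflects plans currently in cache, and its counters reset when the instance restarts, so a daily snapshot-and-compare is more trustworthy than a single raw sum. A minimal sketch, assuming SQL 2008 or later for the date type and a logging table created for the purpose:
-- Hypothetical snapshot table for daily trending:
CREATE TABLE dbo.ReadBaseline (
capture_date date NOT NULL DEFAULT (CONVERT(date, GETDATE())),
total_physical_reads bigint NOT NULL,
total_logical_reads bigint NOT NULL
);
-- Run once a day (a SQL Agent job, for instance):
INSERT INTO dbo.ReadBaseline (total_physical_reads, total_logical_reads)
SELECT SUM(total_physical_reads), SUM(total_logical_reads)
FROM sys.dm_exec_query_stats;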
View 4 Replies
Dec 22, 2000
Hi Everybody,
One of my friends asked me, "How do we reduce a query's logical and scan reads in SQL Server?" I really didn't know how to answer him. Can anybody explain this to me?
thanks,
Srini
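For what it's worth, a minimal way to see logical reads, and then reduce them, is to measure with STATISTICS IO and give the query an index that turns a scan into a seek. A sketch with a hypothetical table:
SET STATISTICS IO ON;
-- Before: with no index on OrderDate, this scans the whole table.
SELECT OrderID FROM dbo.Orders WHERE OrderDate >= '20001201';
-- A narrow index turns the scan into a seek, which is the usual way
-- both logical reads and scan counts come down:
CREATE INDEX ix_Orders_OrderDate ON dbo.Orders (OrderDate);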
View 2 Replies
May 5, 2015
Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?
In the server-side trace I have running, I found a reporting procedure that shows 136,949,501 reads (yes, in the hundreds of millions), and it takes 13,508 seconds to complete.
So I pull the code from the trace and execute it via SSMS: it runs in under 1 second and generates only about 4,000 reads (using various different parameters I get the same result).
The execution plan shows nothing unusual
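One well-known cause of exactly this symptom, offered as a guess rather than a diagnosis: SSMS runs with SET ARITHABORT ON by default while most client libraries leave it OFF, so the application can be stuck with a cached plan sniffed for different parameter values than the one SSMS compiles fresh. A quick comparison sketch:
SELECT session_id, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id IN (51, 52);  -- hypothetical spids: one SSMS, one application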
View 5 Replies
Dec 12, 2005
How do I determine which method I should use if I want to optimize the performance of a database? I took the Northwind database to run my example. My query: I want to retrieve the Employees' First and Last Names that sold between $100,000 and $200,000.
First let me create a function that takes the EmployeeID as the input parameter and returns the Employee's First and Last name:
CREATE FUNCTION dbo.GetEmployeeName (@EmployeeID INT)
RETURNS VARCHAR(100)
AS
BEGIN
DECLARE @NAME VARCHAR(100)
SELECT @NAME = FirstName + ' ' + LastName
FROM Employees
WHERE EmployeeID = @EmployeeID
RETURN ISNULL(@NAME, '')
END
My first method to run this:
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID) AS Employee,
SUM(UnitPrice * Quantity) AS Amount
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID, dbo.GetEmployeeName(EmployeeID)
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
It runs in 4 seconds. And here are the STATISTICS IO and TIME results:
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 17 ms, elapsed time = 17 ms.
(3 row(s) affected)
Table 'Order Details'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3934 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
Now my 2nd method:
IF (SELECT OBJECT_ID('tempdb..#temp_Orders')) IS NOT NULL
DROP TABLE #temp_Orders
GO
SELECT EmployeeID, SUM(UnitPrice * Quantity) AS Amount
INTO #temp_Orders
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
GO
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID), Amount
FROM #temp_Orders
GO
It runs in 0 seconds. And here are the STATISTICS IO and TIME results:
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
Table '#temp_Orders0000000000F1'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Order Details'. Scan count 830, logical reads 1672, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms.
(3 row(s) affected)
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 20 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms.
(3 row(s) affected)
Table '#temp_Orders0000000000F1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
By the way, why does "SQL Server Execution Times" appear 3 times and not just once?
Summary: The first code is clean, a single SELECT statement, but takes 4 long seconds to execute; the logical reads are very few compared to the second method. The second code is less clean and uses a temp table, but takes 0 seconds to execute; the logical reads are way too high compared to the first method.
What am I supposed to conclude from this example? Which method should I use over the other, and why? Are both methods good depending on which I prefer? If I can wait four seconds, is it better to reduce the logical reads in order to cause less blocking on the live tables in a heavily accessed database? Which method should I choose for my own database?
Calling a function like dbo.GetEmployeeName gets processed per each returned row, correct? That means, if I had a scenario where 1000 records were to be returned, would it be better to dump the 1000 records to a temp table and then call the function to process each record one at a time? Or would the direct approach without a temp table cause slower processing and more blocking/deadlocks, because I am calling the function per row while accessing the tables directly?
Thank you
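For comparison, a third variant sidesteps the trade-off: the 4 seconds in method 1 come from the scalar UDF being invoked once per row while the aggregate is evaluated, and a plain join to Employees avoids the UDF altogether. A sketch against the standard Northwind schema:
SELECT o.EmployeeID,
e.FirstName + ' ' + e.LastName AS Employee,
SUM(od.UnitPrice * od.Quantity) AS Amount
FROM Orders AS o
JOIN [Order Details] AS od ON od.OrderID = o.OrderID
JOIN Employees AS e ON e.EmployeeID = o.EmployeeID
GROUP BY o.EmployeeID, e.FirstName, e.LastName
HAVING SUM(od.UnitPrice * od.Quantity) BETWEEN 100000 AND 200000;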
View 1 Replies
Apr 12, 2006
Hello
I'm doing some performance research. I have an index with the following column order: ClientId, Active, ProductId. Active is a bit field telling whether the product is active or not; there can be more inactive products than active ones, but there is always at least one active product.
When I'm executing
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)
I'm getting following result: Scan count 1, logical reads 490
When I lead SQL Server down the right path by spelling out the two possible values of Active, executing the following SQL:
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)
I'm getting following results: Scan count 14, logical reads 123
With this information, which version would you say is fastest and why?
When I ran this query 1000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example it seems that the cost of 1 scan count is something like 20 logical reads?
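A hedged reading of those numbers: without Active in the predicate, the seek can only use the leading ClientId column and has to filter ProductId as a residual, touching every row for the client (1 scan, 490 reads). Spelling out Active IN (0,1) lets the optimizer seek each (ClientId, Active, ProductId) combination directly: 2 Active values x 7 ProductIds = 14 seeks, which matches the scan count of 14 and the much lower 123 reads. Scan count is the number of seeks/scans started, not a cost of its own, so 14 cheap point seeks beating 1 larger range scan is consistent with the second query's lower average time. A harness to confirm on a test box:
SET STATISTICS IO ON;
-- 1 range seek on ClientId; ProductId applied as a residual predicate:
SELECT * FROM [table] WHERE ClientId = @id AND ProductId IN (1,2,3,5,7,9,20);
-- 14 point seeks, one per (Active, ProductId) combination:
SELECT * FROM [table] WHERE ClientId = @id AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1);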
View 8 Replies
Mar 4, 2006
I am running a query in SQL 2000 SP4, Windows 2000 Server, that is not being shared with any other users or any SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs etc... OK, it's not pretty code and my job is to make it better.
But for now, one thing I would like to understand with your help is why the same SP on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes and I see one table get a hit of 15 million logical reads:
Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 147, read-ahead reads 0.
This 'TABLE1' has about 400,000 records.
The second time I ran it, right after, in Query Analyzer again:
Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 0, read-ahead reads 0.
I can now see the physical reads being 0, as it is understandable that SQL is now fetching the data from memory.
But now the third time I ran it:
Table 'TABLE1'. Scan count 28, logical reads 87784, physical reads 0, read-ahead reads 0.
The Scan count went down from 2070 to 28. I don't know what the Scan count actually is; it scanned the table 28 times? The logical reads went down to 87,784 reads from 15 million, with 2 seconds execution time!
Anybody has any ideas why these numbers change? The problem is, I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, ran the same exact query, and it took 3-4-5 seconds with 87,784 reads vs 15 million. Why don't I see 15 million now?
Well, I kept working during the day and I happened to run into another set of 15 million reads. A few runs would keep running at the pace of 15 million over 1 minute, and eventually the numbers went back down to 87,784 and 2 seconds.
Is it my way of using the computer? Maybe I was opening too many applications and SQL was fighting for memory? Would that explain the 15 million reads? I went and changed my SQL Server to use a fixed memory of 100 megs, restarted it and tested the same query again, but it continued to show 87,784 reads with 2 seconds execution time. I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.
Can someone help me with suggestions on what could be causing this, and how I could find a way to see 15 million reads again?
By the way, with the limited info you have here about the database I am using: is 87,784 reads a terrible number of reads, average, or normal, when the max records in the many tables involved in this SP is 400,000 records? I am guessing it is a terrible number, am I correct?
I would appreciate your help. Thank you
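One hedged hypothesis that fits these symptoms: two different plans for the same SP living in the procedure cache at different times. On SQL 2000, parameter sniffing can compile a plan for atypical parameter values and then reuse it for typical ones, giving exactly this kind of 15-million-versus-87,784 swing. A way to test it on a test server:
-- Throw away all cached plans so the next execution compiles fresh for the
-- parameters you pass, then compare the STATISTICS IO output between runs:
DBCC FREEPROCCACHE;
SET STATISTICS IO ON;
EXEC dbo.MySlowProc;  -- hypothetical name standing in for the real SP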
View 4 Replies
Jun 1, 2007
We had some slowdown complaints lately and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. SET STATISTICS PROFILE ON shows that every time it executes it does an index seek.
A Profiler session showed a huge number of reads for these queries depending on the value being looked up: 1,500 through 50,000. I set STATISTICS IO on and the culprit is lob logical reads; everything else is 0 or very low. In this case lob logical reads is over 1,700.
3 of the columns in the select statement are text columns. When I take them out of the query the lob logical reads drop to 0, and they go up incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
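One option to try before any data-type change, hedged because it only helps when the text values are usually small: the 'text in row' table option keeps text values up to the stated limit on the data page itself, so reading them stops counting as LOB reads.
-- Values up to 7000 bytes are stored in-row; larger ones still go to LOB pages.
-- Existing values are not converted until they are next updated.
EXEC sp_tableoption 'dbo.profiles', 'text in row', '7000';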
View 3 Replies
May 31, 2006
I have a set of triggers that log the history of changes to a table, i.e. I record inserts, updates, deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data. If I were using sprocs for reading data, this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table I am logging other actions to? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database? Anyone? I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible.
Thanks,
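Absent sprocs, a server-side trace can capture the dynamic SELECTs as they arrive at the server. This is a sketch only, with a hypothetical file path and table name (event 41 is SQL:StmtCompleted, columns 1 and 11 are TextData and LoginName, comparison operator 6 is LIKE):
DECLARE @TraceID int;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\read_audit', 50;
-- Capture the statement text and the login that issued it:
EXEC sp_trace_setevent @TraceID, 41, 1, 1;
EXEC sp_trace_setevent @TraceID, 41, 11, 1;
-- Keep only statements that mention the audited table:
EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%MyAuditedTable%';
EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
The resulting .trc file can then be read back with fn_trace_gettable and inserted into the same audit table the triggers use.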
View 1 Replies
Oct 18, 2007
I am looking into various options to improve the latency of our application (we figured the latency is mainly due to data persistence: writes to and reads from the DB). I am looking into in-memory databases as well. But before making that decision (of using in-memory databases), I would like to see if there is a way to configure SQL Server 2005 to get performance as close to in-memory databases as possible.
My question:
1. Is there a way that I can configure SQL Server 2005 to use a CACHE that gets loaded as needed basis, so that future database reads/writes will happen to the cache as opposed to disk (db writes)?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)
Thanks
Murthy
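For what it's worth, SQL Server already works this way to a large degree: data pages are cached in the buffer pool on first touch, writes are hardened to the transaction log but applied to cached pages, and recovery replays the log after a crash. A quick way to see how much of each database is currently memory-resident (2005 DMV, assuming VIEW SERVER STATE permission):
SELECT DB_NAME(database_id) AS db,
COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;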
View 1 Replies
Feb 9, 2006
I have a table that I'm inserting a file into, using the Image data type to store the binary object. The code below works fine for files around 1.5 MB, but for anything larger it's as if the code won't even execute, and I get a Page Not Found error.
I'm in the process of running some traces to find out what's going on in the backend, but I'm assuming there's something amiss with my code. The Image data type should handle files of that size with no problem, but for some reason it isn't.
Does anyone see anything wrong?
Thanks
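' A hedged note, not from the original post: ASP.NET rejects uploads larger than
' httpRuntime maxRequestLength (4096 KB unless web.config changes it) before any
' of the code below runs, and that rejection surfaces as a failed page rather
' than a SQL error. Worth ruling out before suspecting the Image column.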
Dim iLength As Integer = CType(File1.PostedFile.InputStream.Length, Integer)
If iLength = 0 Then Exit Sub 'not a valid file
Dim sContentType As String = File1.PostedFile.ContentType
Dim sFileName As String, i As Integer
Dim bytContent As Byte()
ReDim bytContent(iLength - 1) 'byte array sized to the file; VB upper bounds are inclusive
'strip the path off the filename
i = InStrRev(File1.PostedFile.FileName.Trim, "\")
If i = 0 Then
sFileName = File1.PostedFile.FileName.Trim
Else
sFileName = Right(File1.PostedFile.FileName.Trim, Len(File1.PostedFile.FileName.Trim) - i)
End If
conn = New SqlConnection(eco)
conn.Open()
cmd = New SqlCommand("INSERT INTO ECO_Attachments (ECOID, FromType, DocName,OldRev,NewRev,NtLogin,DisplayName, FileName, FileSize, FileData, ContentType) VALUES (@ECOID, @FromType,@DocName,@OldRev,@NewRev,@NtLogin,@DisplayName, @FileName, @FileSize, @FileData, @ContentType) ")
cmd.Connection = conn
Try
File1.PostedFile.InputStream.Read(bytContent, 0, iLength)
With cmd
.Parameters.Add("@ECOID", SqlDbType.Int)
.Parameters.Add("@FromType", SqlDbType.NVarChar, 50)
.Parameters.Add("@DocName", SqlDbType.NVarChar, 250)
.Parameters.Add("@OldRev", SqlDbType.NVarChar, 50)
.Parameters.Add("@NewRev", SqlDbType.NVarChar, 50)
.Parameters.Add("@NTLogin", SqlDbType.NVarChar, 100)
.Parameters.Add("@DisplayName", SqlDbType.NVarChar, 200)
.Parameters.Add("@FileName", SqlDbType.NVarChar, 255)
.Parameters.Add("@FileSize", SqlDbType.Real)
.Parameters.Add("@FileData", SqlDbType.Image)
.Parameters.Add("@ContentType", SqlDbType.NVarChar, 50)
.Parameters("@ECOID").Value = ECOID
.Parameters("@FromType").Value = From
.Parameters("@DocName").Value = DocName
.Parameters("@OldRev").Value = OldRev
.Parameters("@NewRev").Value = NewRev
.Parameters("@NTLogin").Value = NTLogon
.Parameters("@DisplayName").Value = DisplayName
.Parameters("@FileName").Value = sFileName
.Parameters("@FileSize").Value = iLength
.Parameters("@FileData").Value = bytContent
.Parameters("@ContentType").Value = sContentType
.ExecuteNonQuery()
'.ExecuteScalar()
End With
Catch ex As Exception
Response.Write(ex)
'Handle your database error here
Finally
'Close the connection whether the insert succeeded or failed
conn.Close()
End Try
View 1 Replies
Jul 31, 2006
SQL 2005 Express - Database Restore size problem
Hi,
I'm trying to restore a SQL Server DB Backup from a SQL Server DB Server on to my Laptop (SQL 2005 Express)
When I execute a RESTORE FILELISTONLY command on the backup file, it seems that the data file included is 1 GB, but the log file is 91 GB in size, which exceeds my disk space.
I can restore the data on its own without the log file, but then the database stays in "restoring" mode. I've tried to switch the restore flag off (update sys.databases set state = 0 where name = 'G001'), but I can't seem to be able to do it, even if I try to allow updates via:
sp_configure 'allow updates', 1
GO
RECONFIGURE WITH OVERRIDE
GO
Any ideas how I can restore the database without restoring the enormous logfile?
Thanks in advance...
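If the source server is reachable, the cleanest route may be to shrink the log there and take a fresh backup, since RESTORE always recreates the log file at its recorded size. A sketch in 2005-era syntax (TRUNCATE_ONLY is deprecated, and the logical log file name and backup path are assumptions):
-- On the source server, before backing up:
BACKUP LOG G001 WITH TRUNCATE_ONLY;  -- breaks the log chain; take a full backup after
DBCC SHRINKFILE (G001_log, 100);     -- hypothetical logical name, target 100 MB
BACKUP DATABASE G001 TO DISK = N'C:\Backups\G001_small.bak';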
View 3 Replies
Jul 14, 2007
Hi, I've followed a tutorial on how to write and read varbinary(max) data to and from a database. But when I try to read the data I get the error that the data would be truncated, but only when the varbinary(max) is greater than 8 kB. I've used a system stored procedure (sp_tableoption) to set the table that holds the data to store data outside rows.
To select the data I'm using a stored procedure:
SELECT imageData, MIMEType
FROM Pictures
WHERE (imageTitle = @imageTitle)
And then using an .aspx page to Response.Write the data:
Using conn As New sql.SqlConnection
conn.ConnectionString = ConfigurationManager.ConnectionStrings("myConnectionString").ToString
Dim getLogoCommand As New sql.SqlCommand
getLogoCommand.CommandType = Data.CommandType.StoredProcedure
getLogoCommand.CommandText = "GetPicture"
getLogoCommand.Connection = conn
Dim imageTitleParameter As New sql.SqlParameter("@imageTitle", Data.SqlDbType.NVarChar, 200)
imageTitleParameter.Value = Request("imageTitle")
imageTitleParameter.Direction = Data.ParameterDirection.Input
getLogoCommand.Parameters.Add(imageTitleParameter)
conn.Open()
Using logoReader As sql.SqlDataReader = getLogoCommand.ExecuteReader
logoReader.Read()
If logoReader.HasRows = True Then
Response.Clear()
Response.ContentType = logoReader("MIMEtype").ToString()
Response.BinaryWrite(logoReader("imageData"))
End If
End Using
conn.Close()
End Using
Can anyone please help me with this?!
View 2 Replies
Jul 4, 2007
Hi,
We are running SQL Server 2005 Enterprise Edition with SP2 on a Windows 2003 Enterprise Server with SP2, an Intel E6600 dual-core CPU and 4 GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through the rows one by one, queries another table to get some values, and updates the transaction.
I have set a limit of 2 GB of RAM for SQL Server, and when I run the application it performs 5 record updates (the process described above) per second. After roughly 10,000 records, the application slows down to about 1 record per second. I have tried to examine the Activity Monitor, however I can't find anything that might indicate what's causing this.
I have read that there are some known issues with hyper-threaded CPUs; however, since my CPU is dual-core, I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.
The only thing I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?
If there is no solution for this problem, does anyone know if there is a stored procedure or anything else than can be used programmatically to speed up the server when it slows down? (This is not the optimal solution however I will use it as a workaround)
Any advice will be greatly appreciated.
Thanks,
Joe
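On the MAXDOP mystery, a hedged observation: issuing RECONFIGURE after changing 'max degree of parallelism' is one of the configuration changes that clears the plan cache, so the speed-up is consistent with a stale or bad cached plan being thrown away rather than with parallelism itself. If that is what is happening, the same effect can be had directly (heavy-handed on a shared server, so treat this as a workaround sketch):
-- Discards all cached plans; subsequent executions compile fresh:
DBCC FREEPROCCACHE;
-- A gentler alternative: refresh statistics on the tables the loop hammers,
-- which marks dependent plans for recompile (table name is hypothetical):
UPDATE STATISTICS dbo.Transactions;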
View 3 Replies
Oct 21, 2007
Hi,
I have a VB.net web page which generates a datatable of values (3 columns and on average about 1000-3000 rows). What is the best way to get this data table into an SQL Server? I can create a table on SQL Server no problem, but I've found that simply looping through the datatable and doing 1000-3000 insert statements is slow (a few seconds). I'd like to make this as streamlined as possible, so I was wondering if there is a native way to insert all records in a batch via ADO.net or something.
Any ideas?
Thanks
Ed
View 1 Replies
Jun 1, 2006
I am trying to enable database mirroring for 100 databases.
It goes error-free till 59 databases (sometimes 60) with the status (principal, synchronized) on the principal. On the 60th or 61st database it gives the status (principal, disconnected). Also, the mirror starts acting abnormal: connections to the mirror start to give connection timeouts and it will not enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers. A witness is not included yet.
These are my test servers... I have more than 500 databases on my production servers.
Principal and mirror are both using port 5022 for ENDPOINT communication.
All of the databases are critical and all must be included in the Database Mirroring.
So, after that, I tried to implement database mirroring again...
The system has 3 GB of RAM, and SQL Server on the mirror is using 85 MB of RAM, but it is still giving this error while trying to enable database mirroring for the 37th database:
"There is insufficient system Memory to run this query"
WHY?
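For what it's worth, the Books Online guidance for 2005 suggested a maximum of about 10 mirrored databases per instance on a 32-bit system, because every mirroring session holds worker threads and memory of its own; hitting a wall around database 37 on a 3 GB box fits that kind of ceiling. Two quick checks:
-- Worker-thread ceiling for the instance:
SELECT max_workers_count FROM sys.dm_os_sys_info;
-- Threads currently in use:
SELECT COUNT(*) AS workers_in_use FROM sys.dm_os_workers;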
View 19 Replies
Dec 12, 2007
Hi
We are using the SQL Server 2005 Full-Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records: 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a very long time to return results when the number of hits (rows) coming back from FTS is large for some searches. With the same data and catalog, full-text queries for less common words return in a timely manner. The nature of the problem doesn't allow us to return only the top results; we need all the results. So it's not about the size of the data but the number of results getting returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table and hence the catalog and using multiple servers to do full text searches in smaller catalogs. Is there any other way this issue can be solved ?
If splitting is the only way, can you give me an idea of a statistical/standard limit to the number of search results/catalog size at which FTS gives good results?
Thanks in advance
View 1 Replies
Nov 5, 2015
How can I measure the disk reads and writes to see if I need to add aditional disks to the server?
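One hedged starting point: sys.dm_io_virtual_file_stats returns cumulative reads, writes and stall times per database file since the instance started, which shows both the volume of I/O and whether the disks are keeping up with it:
SELECT DB_NAME(vfs.database_id) AS db,
mf.physical_name,
vfs.num_of_reads,
vfs.num_of_writes,
vfs.io_stall_read_ms,
vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;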
View 2 Replies
Dec 1, 2005
All my queries are being blocked while the tables are being replicated, and it is causing some 2-minute blocking. Is there a way for the replication to allow dirty reads? I really don't care about that; I would rather have dirty reads than 2-minute waits.
Thanks.
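Dirty reads can be opted into on the reading side, per query or per session; a sketch with a hypothetical table name:
-- Per query:
SELECT col1 FROM dbo.MyTable WITH (NOLOCK);
-- Per session (same semantics):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;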
View 1 Replies
Aug 1, 2006
Is it possible to find the reads/writes to a SQL Server table?
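On 2005 and later, yes: sys.dm_db_index_usage_stats accumulates per-index (and therefore per-table) activity since the last instance restart:
SELECT OBJECT_NAME(s.object_id) AS table_name,
SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;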
View 2 Replies
Jul 6, 2015
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues with the current environment when massive updates are happening. Reads shall have higher priority over writes.
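One lever commonly reached for when reads must not wait on writes, offered as a sketch rather than a recommendation (it adds row-version overhead in tempdb and changes reads to statement-level snapshots):
-- Readers see the last committed version instead of blocking on writers:
ALTER DATABASE MyBigDb  -- hypothetical name
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;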
View 2 Replies
Jul 17, 2015
I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. The performance is at times not great due to blocking from long running queries. I want to clean up the indexes as a starting point.
Through a query I found some time ago on the SQLCAT blog, I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have, say, more than 100,000 more writes than reads, and then see what the missing-index DMVs tell me after a few days of running without those indexes?
In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus, a fair number of reads are happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes; I am obviously dropping those indexes. I am just not sure what to do about the indexes that do have a fair number of reads.
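A sketch of that disparity check against sys.dm_db_index_usage_stats (its counters reset at instance restart, so judge over a full business cycle, month-end included). Disabling rather than dropping keeps the definition around in case a periodic query turns out to need it:
SELECT OBJECT_NAME(s.object_id) AS table_name,
i.name AS index_name,
s.user_seeks + s.user_scans + s.user_lookups AS reads,
s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) DESC;
-- Reversible alternative to DROP INDEX (a rebuild re-enables it):
-- ALTER INDEX ix_name ON dbo.TableName DISABLE;  -- hypothetical names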
View 9 Replies
Nov 23, 2006
SQL Server 2005 Standard Edition act as publisher and distributor.
All subscribers are SQL Server 2005 Express Edition.
According to the 2005 Books Online, "the MSmerge_history table exists in the distribution database". However, I found this table in the subscriber database, which is SQL Server 2005 Express Edition.
The problem is that this table (MSmerge_history) and two other tables (MSmerge_genhistory, MSrepl_errors) are quite large. We want to keep the SQL Server 2005 Express database as small as possible so we can put more data into it.
Is there any way (manually or automatically) to clean those tables in SQL Server 2005 Express? Please help.
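Merge metadata is normally trimmed automatically according to the publication's retention period; it can also be forced by hand. A hedged sketch: the procedure below cleans up MSmerge_genhistory and related merge metadata per the retention setting, though whether it covers everything in all three tables is worth verifying against Books Online:
-- Run in the subscription database:
EXEC sp_mergemetadataretentioncleanup;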
View 3 Replies
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it worked for me as a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs vmdk, 1 GB vs 10 GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since ca. 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increase so dramatically during run 1?
I expected to see a serious impact on write behavior, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually, you can see here that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally I want to notice
- I'm the only user on this machine
- the table has a clustered index on a identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored every 10,000 loops using DMV queries
So I wonder why SQL Server starts to execute so many reads in test 1.
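As an aside, the per-10,000-loop capture described above can be done with a plain DMV insert; a minimal sketch with a hypothetical log table:
CREATE TABLE dbo.PerfLog (
capture_time datetime NOT NULL,
counter_name nvarchar(128) NOT NULL,
cntr_value bigint NOT NULL
);
INSERT INTO dbo.PerfLog (capture_time, counter_name, cntr_value)
SELECT GETDATE(), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy',
N'Lazy writes/sec',
N'Free list stalls/sec');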
View 4 Replies
Aug 19, 2015
I often use Profiler as one tool to identify bad plans. The Reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table has in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.
Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.
Clearly 12 reads is wrong. Does the profiler only display one thread of a parallel plan perhaps? Or something else?
View 1 Replies
Sep 5, 2006
Hi all....
I was wondering why SQL Server 2000 lacks a logical XOR operator. Do you have a ready-to-use function that implements it?
How can I program one myself? I know that A XOR B is equivalent to NOT A * B + A * NOT B. So I tried to create this function:
CREATE FUNCTION XOR
(
-- Add the parameters for the function here
@A BIT,
@B BIT
)
RETURNS BIT
AS
BEGIN
RETURN NOT @A AND @B OR @A + NOT @B
END
But it didn't compile.
How can I pass boolean values as parameters? any other hint?
Thanks
Jaime
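A version that does compile, for reference: NOT, AND and OR in T-SQL apply only to boolean conditions, not to BIT values, which is why the function above is rejected. The bitwise exclusive-or operator works on BIT directly (available since SQL Server 2000):
CREATE FUNCTION dbo.XOR
(
@A BIT,
@B BIT
)
RETURNS BIT
AS
BEGIN
-- ^ is bitwise exclusive OR: 1 when exactly one input is 1
RETURN @A ^ @B
END
-- Usage: SELECT dbo.XOR(1, 0)  -- returns 1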
View 12 Replies
Mar 16, 2008
This local test DB was doing just fine. I fire up SQL Server again, and all of a sudden I can't access it (expand the database in Management Studio). So I detach it and try to run this script, which was run initially to re-attach it:
USE [master]
GO
CREATE DATABASE [ABS] ON
( FILENAME = N'C:\Database\ABS.mdf' ),
( FILENAME = N'C:\Database\ABS_log.ldf' )
FOR ATTACH
GO
if not exists (select name from master.sys.databases sd where name = N'CMS' and SUSER_SNAME(sd.owner_sid) = SUSER_SNAME() ) EXEC [CMS].dbo.sp_changedbowner @loginame=N'domain\testuser', @map=false
GO
and I get this error: SQL Server detected a logical consistency-based I/O error: torn page
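A torn-page error means a page failed its consistency check when read back from disk, so the usual next step is an integrity check to find the extent of the damage (and a known-good backup if pages turn out to be bad):
DBCC CHECKDB (N'ABS') WITH NO_INFOMSGS, ALL_ERRORMSGS;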
View 1 Replies
Feb 27, 2015
I have a SQL Server 2008 R2 cluster. I moved a file to another drive by adding a new file on that drive and running DBCC SHRINKFILE with EMPTYFILE. I then removed the old file, and when I try to rename the new file to the old file's name, I get an error.
ALTER DATABASE mydb REMOVE FILE myfilename
Msg 5009, Level 16, State 9, Line 1
One or more files listed in the statement could not be found or could not be initialized.
ALTER DATABASE mydb
modify FILE ( NAME = temp_filename
, NEWNAME = myfilename
)
Msg 1828, Level 16, State 3, Line 4
The logical file name "myfilename" is already in use. Choose a different name.
I have completed a failover, thinking the name was being held in cache. I've also renamed the file to something completely different and then tried to rename it to the old filename.
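Given that the REMOVE FILE itself errored, a hedged guess is that the old file entry is still registered, which would explain the name collision. Worth checking before another rename attempt:
SELECT name AS logical_name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'mydb');  -- name from the post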
View 9 Replies
Apr 30, 2015
Is there any danger with renaming the LOGICAL file names behind the database?
There are a bunch of databases that were restored copies, and all of them have the same logical database file name. I'm trying to gather some growth data, so I want the logical file names to be different (preferably matching the actual database name) so I can identify them more easily.
For instance:
database_id  name    type_desc  name        physical_name
1            DLMdb1  ROWS       DLMDB1      D:\dlmdb1.mdf
1            DLMdb1  LOG        DLMDB1_log  E:\dlmdb1.ldf
2            DLMdb2  ROWS       DLMDB1      D:\dlmdb2.mdf
2            DLMdb2  LOG        DLMDB1_log  E:\dlmdb2.ldf
3            DLMdb3  ROWS       DLMDB1      D:\dlmdb3.mdf
3            DLMdb3  LOG        DLMDB1_log  E:\dlmdb3.ldf
Am I safe to rename the logical names? I can't think of anything that references those logical file names that I would be breaking [backups, applications].
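The rename itself is a metadata-only change; nothing inside the database depends on the logical name, though any of your own scripts that use it (RESTORE ... WITH MOVE, DBCC SHRINKFILE) must switch to the new name afterward. A sketch for one of the databases above:
ALTER DATABASE DLMdb2 MODIFY FILE (NAME = N'DLMDB1', NEWNAME = N'DLMdb2');
ALTER DATABASE DLMdb2 MODIFY FILE (NAME = N'DLMDB1_log', NEWNAME = N'DLMdb2_log');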
View 3 Replies
Mar 15, 2007
Did anyone familiar with these errors:
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x1a00d2fd; actual: 0x1a02d2fd). It occurred during a read of page (1:103329) in database ID 11 at offset 0x00000032742000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\Server02.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). (Source: MSSQLServer, Error number: 824)
Get help: http://help/824
Thank you.
TJ_1
View 17 Replies
Mar 24, 2008
Hi,
I am getting a timeout or connection-closed error on the report server when I try to create a snapshot for a large report on my development machine. Snapshots are created fine for small reports.
I have done following changes to rsreportserver.config
<Add Key="SQLCommandTimeoutSeconds" Value="1000000"/>
<Add Key="DatabaseQueryTimeout" Value="1000000"/>
and raised the IIS timeout to some large value.
But every time it shows a timeout or the connection being closed forcibly.
Any assistance would be greatly appreciated!
View 5 Replies
Apr 3, 2011
Is there a query or sp with which I can pull all databases on the server along with their logical file names and physical file names?
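On 2005 and later, sys.master_files covers every database in a single query:
SELECT DB_NAME(database_id) AS database_name,
name AS logical_name,
physical_name,
type_desc
FROM sys.master_files
ORDER BY database_name, file_id;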
View 5 Replies