What Is The Best Way To Achieve The Lowest Latency For Reads And Writes To SQL Server 2005?
Oct 18, 2007
I am looking into various options to improve the latency of our application (we have determined that the latency mainly comes from data persistence - writes to and reads from the DB). I am also looking into in-memory databases. But before making the decision to use an in-memory database, I would like to see if there is a way to configure SQL Server 2005 to get performance as close as possible to an in-memory database.
My questions:
1. Is there a way to configure SQL Server 2005 to use a cache that gets loaded on an as-needed basis, so that future database reads and writes happen against the cache as opposed to disk?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)
Thanks
Murthy
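On questions 1 and 2: SQL Server already works this way through its buffer pool - data pages are cached in RAM on first access, and writes are hardened to the transaction log before dirty pages are lazily flushed to the data files, which is exactly what keeps the database recoverable. The main knob is how much memory the buffer pool may use. A minimal sketch (the 12288 MB value is a hypothetical figure for a dedicated box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- let the buffer pool grow to 12 GB so the hot working set stays in memory
EXEC sp_configure 'max server memory', 12288;
RECONFIGURE;

Note that durable writes must still touch the log on disk, so a true in-memory product that relaxes that guarantee will always win on raw write latency.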
Nov 5, 2015
How can I measure the disk reads and writes to see if I need to add additional disks to the server?
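One way to get cumulative read/write counts per file from inside SQL Server is the sys.dm_io_virtual_file_stats DMV - a sketch (counters accumulate since the last instance restart):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;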
Aug 1, 2006
Is it possible to find the reads/writes to a SQL Server table?
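On SQL Server 2005 and later, sys.dm_db_index_usage_stats accumulates per-index read and write counts. A sketch that rolls them up for one table (dbo.MyTable is a placeholder):

SELECT OBJECT_NAME(ius.object_id) AS table_name,
       SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
       SUM(ius.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
  AND ius.object_id = OBJECT_ID('dbo.MyTable')
GROUP BY ius.object_id;

The counters reset when the instance restarts, so they describe activity since the last startup.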
Jul 6, 2015
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues with the current environment when massive updates are happening. Reads should have higher priority than writes.
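If the historical pain is readers blocking behind the massive updates, one option worth evaluating (a sketch, not a full design; MyDb is a placeholder) is read committed snapshot isolation, which lets readers see the last committed row version instead of waiting on writers:

ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

This trades tempdb version-store overhead for read concurrency, so it should be tested under the real update workload before going live.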
Jul 17, 2015
I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. The performance is at times not great due to blocking from long running queries. I want to clean up the indexes as a starting point.
Through a query I found some time ago on the SQLCat blog I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The range of difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have say, more than 100,000 more writes than reads and then see what the Missing Index DMVs tell me after a few days of running without those indexes?
In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus, there are a fair number of reads happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes. I am obviously dropping those indexes. I just am not sure what to do about the indexes that do have a fair number of reads.
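For reference, a sketch of the kind of comparison query involved, built directly on sys.dm_db_index_usage_stats (quite possibly close to the SQLCat one; note these counters reset at instance restart, so make sure they cover a representative workload period before dropping anything):

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0) AS reads,
       ISNULL(ius.user_updates, 0) AS writes
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
  ON ius.object_id = i.object_id
 AND ius.index_id = i.index_id
 AND ius.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
ORDER BY ISNULL(ius.user_updates, 0)
       - ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0) DESC;

Disabling a candidate first (ALTER INDEX ... DISABLE) rather than dropping it makes it painless to bring back if a monthly report suddenly needs it.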
Oct 30, 2006
How can you find the reads and writes per second of your hard drives in SQL? I am reading my SQL book and it says that your average disk should have 125 or fewer I/Os. It gave the formula, but as mentioned, I don't know how to find the reads and writes.
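A rough way to turn SQL Server's cumulative counters into per-second rates, sketched with sys.dm_io_virtual_file_stats (SQL Server 2005 and later): sample twice over a known interval and divide by the elapsed seconds.

SELECT database_id, file_id, num_of_reads, num_of_writes
INTO #sample1
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00';  -- measure over 60 seconds

SELECT s2.database_id, s2.file_id,
       (s2.num_of_reads  - s1.num_of_reads)  / 60.0 AS reads_per_sec,
       (s2.num_of_writes - s1.num_of_writes) / 60.0 AS writes_per_sec
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS s2
JOIN #sample1 AS s1
  ON s1.database_id = s2.database_id
 AND s1.file_id = s2.file_id;

DROP TABLE #sample1;

Outside SQL Server, the PerfMon counters PhysicalDisk: Disk Reads/sec and Disk Writes/sec report the same rates directly.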
Jul 21, 2000
Is there a way to get a total count of all SELECT, UPDATE, DELETE and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at the Performance Monitor and, although it has good information, it doesn't capture DML.
FYI - it's for capacity planning.
TIA,
Mike
Mar 5, 2008
Guys,
Is there any way to track which tables have the most reads and writes in a database of 400 tables?
Thanks
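One approach on SQL Server 2005 and later, sketched against sys.dm_db_index_usage_stats (counters accumulate since the last restart), is to rank every table in the current database by its read and write activity:

SELECT TOP 20
       OBJECT_NAME(ius.object_id) AS table_name,
       SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
       SUM(ius.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.object_id
ORDER BY SUM(ius.user_seeks + ius.user_scans + ius.user_lookups + ius.user_updates) DESC;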
Apr 17, 2008
Problem statement:
Let's say user A accesses a record and is making an update to a column. Next, user B accesses the same record, makes an update to the same column, and saves the data. How can user A check whether an update has been made, to avoid overwriting user B's data?
Is there a query statement that user A can write to check for this?
I understand locking can be used to prevent this, but is there an alternative to locking?
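One lock-free alternative is optimistic concurrency with a rowversion column: user A reads the row together with its version, and the update succeeds only if the version is still the one A read. A minimal sketch (table, column and procedure names are hypothetical):

ALTER TABLE dbo.MyTable ADD RowVer rowversion;
GO
CREATE PROCEDURE dbo.UpdateMyColumn
    @Id int,
    @NewValue varchar(100),
    @OriginalRowVer binary(8)   -- the RowVer value user A read earlier
AS
BEGIN
    UPDATE dbo.MyTable
    SET MyColumn = @NewValue
    WHERE Id = @Id
      AND RowVer = @OriginalRowVer;   -- no match if user B changed the row first

    IF @@ROWCOUNT = 0
        RAISERROR('Row was modified by another user; reload and retry.', 16, 1);
END
GO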
May 31, 2006
I have a set of triggers that log the history of changes to a table - i.e. I record inserts, updates and deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data. If I were using sprocs for reading data this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB.
Is there a way I can intercept reads and log them to the same table where I am logging the other actions? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database. I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible. Thanks,
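Short of routing reads through procs, one 2005-era option is a server-side trace filtered to statements that mention the table; the events never block the reader. A hedged sketch (event 41 is SQL:StmtCompleted; columns 1, 11 and 14 are TextData, LoginName and StartTime; the file path and table name are placeholders):

DECLARE @traceid int, @maxsize bigint, @on bit;
SET @maxsize = 50;
SET @on = 1;
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\read_audit', @maxsize;
EXEC sp_trace_setevent @traceid, 41, 1, @on;    -- TextData
EXEC sp_trace_setevent @traceid, 41, 11, @on;   -- LoginName
EXEC sp_trace_setevent @traceid, 41, 14, @on;   -- StartTime
EXEC sp_trace_setfilter @traceid, 1, 0, 6, N'%MyAuditedTable%';  -- LIKE filter on TextData
EXEC sp_trace_setstatus @traceid, 1;  -- start the trace

The trace file can then be loaded back with fn_trace_gettable on a schedule and inserted into the same audit table the triggers use.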
May 22, 2008
Queries against a table in one of my databases are running very slowly. The IO is very high, and below is a printout from the SET STATISTICS IO ON command run on a common query used on the table:
(4162 row(s) affected)
Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
I have a clustered unique index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in Database Tuning Advisor; DTA displays 0% improvement suggestions. I have a number of statistics on the table and index, all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query which runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, but I'm sure there is a way to bring down the IO and logical reads to improve performance.
The table and index scripts are below:
Code Snippet
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- =================== Nonclustered Index ===========================
CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
-- =================== Query which is called regularly on the table ===========================
SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;
PS There are 60,078,605 rows in the table
Please help!
Many Thanks
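For what it's worth, most of those reads likely come from predicates the optimizer cannot seek on: wrapping logtime in CONVERT() and mimetype/uri in LEFT()/RIGHT() forces a scan of the whole index range. A hedged rewrite sketch that keeps the same logic but leaves the indexed columns bare where possible (the date boundaries assume logtime is a plain datetime):

SELECT [User] = CASE WHEN LEFT(clientusername, 3) = 'domain'
                     THEN RIGHT(clientusername, LEN(clientusername) - 3)
                     ELSE clientusername END,
       logtime  AS [Date],
       desthost AS [Site],
       uri      AS [Actual Site]
FROM dbo.WebProxyLog
WHERE logtime >= '20080420' AND logtime < '20080515'   -- sargable; no CONVERT on the column
  AND mimetype LIKE 'text/h%'                          -- replaces LEFT(mimetype,6) = 'text/h'
  AND [action] = 9
  AND clientusername = 'USERNAME'
  AND uri NOT LIKE '%.css' AND uri NOT LIKE '%.jpg' AND uri NOT LIKE '%.gif'
  AND uri NOT LIKE '%.png' AND uri NOT LIKE '%.bmp' AND uri NOT LIKE '%.vbs'
  AND uri NOT LIKE '%.js'
  AND uri NOT LIKE '%sometext.local%'
  AND uri NOT LIKE '%sometext.co.uk%'
ORDER BY logtime;

With the date range sargable, a nonclustered index keyed on ([action], clientusername, logtime) - a suggestion to test, not something DTA produced - would let a seek cover the three selective predicates before the residual uri/mimetype filters run.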
Oct 1, 2014
Is there any way to check if the server is having disk latency or IO issues? I found the below in the SQL error log:
Date: 10/1/2014 8:28:58 AM
Log: SQL Server (Current - 10/1/2014 12:00:00 AM)
Source: spid10s
Message:
SQL Server has encountered 8500 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Fin.mdf] in database [Fin] (5). The OS file handle is 0x0000000000001368. The offset of the latest long I/O is: 0x0001104a7da000
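One way to check from inside SQL Server is sys.dm_io_virtual_file_stats; the io_stall columns accumulate total milliseconds waited, so dividing by the IO counts gives the average latency per file since the last restart. A sketch:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;

Sustained averages well above 15-20 ms on data files usually point at the storage path rather than SQL Server itself.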
Feb 2, 2004
Hi,
I am experiencing SQL write performance problems on a very shiny server: data files on a RAID 1+0, log files on a separate drive, all SCSI, Windows Server 2003, 6 GB RAM, 2 Xeon processors. I've created a small benchmarking program and run it on my desktop PC and on this 'big' server. Here are the results:
Desktop: SQL Server inserts: 78 seconds. Direct writes to the hard disk (just write a string to a file 10,000 times): 13 seconds.
Server: SQL Server inserts: 422 seconds. Direct writes to the hard disk: 16 seconds.
So, for some reason, my 'shiny' server is 6 times slower on writes than my desktop. When I compared select performance, the shiny server is 10 times faster than my desktop.
Initially I had RAID 5 on the server and it had poorer direct write performance, but now direct writes seem to be fine, so I reckon this is a problem related to SQL Server.
What can I do to improve the insert performance?
Thanks in advance
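One thing worth ruling out before blaming the hardware: if the benchmark commits every insert separately, each row pays a synchronous transaction log flush, and the server's log drive latency dominates the test. A sketch of the batched alternative (table and column are hypothetical):

-- one log flush per 10,000 rows instead of one per row
BEGIN TRANSACTION;
DECLARE @i int;
SET @i = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.BenchTable (Payload) VALUES ('test string');
    SET @i = @i + 1;
END;
COMMIT TRANSACTION;

If the batched version closes most of the 6x gap, the issue is per-commit log flush latency on the server's log volume rather than SQL Server configuration.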
May 25, 2004
Hi Everybody,
Can anybody tell me how to get the number of commands delivered per minute in the case of merge replication with a publisher and subscribers?
This way, we can be sure that even if there is latency (due to high-volume transaction processing), replication is in good shape and things will catch up soon.
Also, if there are any other similar measures which can be monitored to make sure that replication is going fine, that would be great.
Please let me know if anyone has information on this.
May 14, 2014
I have the following:
(a) One Dynamic SQL Query that takes 37 ms when run as a single query or in an SP.
(b) Three SQL Indexed View queries that take 0 ms when run together.
When I add (a) + (b) in the same SP, I should get 37 ms + 0 ms = 37 ms, but NO, it takes 400 ms.
What is causing the extra 363 ms of latency?
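A first diagnostic step, sketched here on the assumption that the extra time is compilation rather than execution (the proc name is a placeholder):

SET STATISTICS TIME ON;   -- reports parse/compile time separately from elapsed time
EXEC dbo.MyCombinedProc;
SET STATISTICS TIME OFF;

If compile time dominates, adding OPTION (RECOMPILE) to the dynamic statement, or moving it into its own proc, can localize the recompile cost; combining dynamic SQL with other queries in one procedure sometimes triggers recompiles that neither piece pays on its own.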
Oct 15, 2007
Hello, I'm an absolute newbie when it comes to SQL. I was told that SQL Server does not function well on a WAN where the network latency between, say, the SQL server and a front-end server is greater than 250 ms. I can't find any information supporting this claim online, so I was hoping someone here could tell me if this is true or not. Thank you!!
Jan 28, 2015
I am trying to use change data capture to load data into a second table from table 1, which is populated from the UI.
What will be the minimum latency? Can we use it in cases where the required latency is less than 5 seconds?
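CDC latency is driven mostly by the capture job's log-scan cycle, which can be tightened. A hedged sketch using the standard CDC job procedures (the values are illustrative):

-- poll the log every second, continuously
EXEC sys.sp_cdc_change_job
     @job_type = N'capture',
     @continuous = 1,
     @pollinginterval = 1;

-- restart the capture job so the new settings take effect
EXEC sys.sp_cdc_stop_job  @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';

Even tuned, CDC is asynchronous by design: sub-5-second latency is realistic on a lightly loaded log, but it is not a guarantee under heavy write bursts.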
Apr 2, 2015
I built an SSIS package (writing out to a flat file) on a 32-bit machine and it works fine. However, when I deploy it to the production server (64-bit), the SSIS package writes out garbage data. After some research I found that the problem is related to running on a 64-bit OS rather than 32-bit. What is my next step? Am I out of luck, meaning I will now have to redesign the SSIS package for 64-bit?
Apr 27, 2015
How would you calculate the average read/write latency experienced by a SQL Server instance during a specific time window, in order to monitor this for multiple instances? From this MSDN blog, I know that you have to take multiple samples and do some calculations to get the correct latency.
[URL] ...
However, the SQLServer:Resource Pool Stats object tracks these numbers per resource pool and we want to get one number for the whole server. Since there can be a different base value for each resource pool, you can't simply sum the numerator values together. Here's some sample data from a server that illustrates the problem.
object_name                    counter_name                instance_name  cntr_value  cntr_type
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)       default        307318919   1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base  default        25546724    1073939712
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms)       internal       2045730     1073874176
SQLServer:Resource Pool Stats  Avg Disk Read IO (ms) Base  internal       208270      1073939712
I'm thinking I would need to do some sort of weighted average, but I'm not sure if that will result in the correct value. Here's the formula I am currently thinking about using, before doing the calculation over time:
((default * default[base]) + (internal * internal[base]))/(default[base] + internal[base])
Then to do the calculation over time, I'd use the changes in the calculated numerator and denominator to get the average.
Does this sound like the correct way to get this value? Is there a good way to verify it?
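The weighted average does reduce to something simpler: each pool's Avg value is numerator/base, so pool_avg x base just reproduces the raw numerator, and the formula above equals (sum of numerators) / (sum of bases). Since the cntr_value stored for the non-base counter is the raw cumulative numerator (cntr_type 1073874176, PERF_AVERAGE_BULK), summing the raw values works directly. A sketch of the point-in-time calculation (RTRIM because these counter names are space-padded):

SELECT SUM(CASE WHEN RTRIM(counter_name) = 'Avg Disk Read IO (ms)'
                THEN cntr_value END) * 1.0
     / NULLIF(SUM(CASE WHEN RTRIM(counter_name) = 'Avg Disk Read IO (ms) Base'
                       THEN cntr_value END), 0) AS avg_read_io_ms
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Resource Pool Stats%';

For the over-time value, capture the two SUMs at the start and end of the window and divide the deltas. One way to verify is to compare the result against read latencies derived from sys.dm_io_virtual_file_stats deltas over the same window.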
Apr 20, 2015
I'm backing up to a network directory that's actually a mount point on a different server. My backup was slower than usual, so I opened up PerfMon to have a look.
When selecting the mount point from the Logical Disk section in PerfMon, I can see that Writes/sec and Write Bytes/sec both show zero for a long period of time, even though the backup percent complete is increasing. Then all of a sudden the writes to the network share jump massively.
Is there some caching mechanism for backups in SQL Server, where data is only flushed to the disk periodically during a backup?
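Backups do run through internal buffers before being flushed to the destination, and the buffering is tunable, which can change the write pattern you see. A hedged sketch (values illustrative, UNC path a placeholder):

BACKUP DATABASE MyDb
TO DISK = N'\\backupserver\share\MyDb.bak'
WITH BUFFERCOUNT = 50,            -- number of IO buffers the backup uses
     MAXTRANSFERSIZE = 4194304,   -- bytes per write to the destination (4 MB)
     STATS = 10;

It is also worth checking whether the zeros come from where the counter is measured: writes to a remote mount point can surface under the network redirector rather than the Logical Disk object.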
Nov 12, 2014
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file compared to the other files in the filegroup.
So if no filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by the same algorithm, or in a different way?
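As far as I know it works the same way: secondary files added without naming a filegroup simply join the PRIMARY filegroup, and proportional fill operates per filegroup, so the same algorithm spreads the writes across them. A sketch (names, paths and sizes hypothetical):

ALTER DATABASE MyDb
ADD FILE
    (NAME = MyDb_Data2, FILENAME = N'D:\Data\MyDb_Data2.ndf', SIZE = 10GB),
    (NAME = MyDb_Data3, FILENAME = N'E:\Data\MyDb_Data3.ndf', SIZE = 10GB);
-- no TO FILEGROUP clause: both files join PRIMARY and are filled proportionally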
Dec 1, 2005
All my queries are being blocked while the tables are being replicated, and it is causing some 2-minute blocking. Is there a way for the replication to allow dirty reads? I really don't care about that; I would rather have dirty reads than 2-minute waits. Thanks.
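Readers can opt into dirty reads themselves; the replication setup does not have to change. A minimal sketch of the two usual forms (dbo.Orders is a placeholder):

-- per session: every statement in the session reads uncommitted data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;

-- per query: hint a single table reference
SELECT * FROM dbo.Orders WITH (NOLOCK);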
Oct 2, 2014
We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.
I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?
select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;
What is the best way to provide performance-based metrics?
Jun 20, 2001
Would like to derive a summary report like
Date        WT<=0  WT<=10  WT<=20  WT>20  Min WT  Max WT  Avg WT  Total
----------  -----  ------  ------  -----  ------  ------  ------  -----
2001/06/01      0       1       0      0      10      10      10      1
2001/06/18      0       2       0      0       6       8       7      2
from table
RecID  CreateTimeStamp   WT
-----  ----------------  --
1      2001/06/01 08:00  10
2      2001/06/18 08:30   8
3      2001/06/18 08:35   6
Is it possible to derive the result in a single query? Or are separate queries needed, like the following:
SELECT CONVERT(CHAR(10), CreateTimeStamp, 111) AS Date, MIN(WT) AS MinWT, MAX(WT) AS MaxWT, AVG(WT) AS AvgWT, COUNT(RecID) AS TotalQ
FROM TAB_Name
GROUP BY CONVERT(CHAR(10), CreateTimeStamp, 111)
SELECT CONVERT(CHAR(10), CreateTimeStamp, 111) AS Date, COUNT(Qid) AS 'WT<=0'
FROM TAB_Name
WHERE WT <= 0
GROUP BY CONVERT(CHAR(10), CreateTimeStamp, 111)
SELECT CONVERT(CHAR(10), CreateTimeStamp, 111) AS Date, COUNT(Qid) AS 'WT<=10'
FROM TAB_Name
WHERE (WT > 0 AND WT <= 10)
GROUP BY CONVERT(CHAR(10), CreateTimeStamp, 111)
Thanks for all your help in advance,
Ben
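For the record, a single pass is possible: fold the bucket counts into conditional SUMs next to the aggregates, using the same boundaries as the separate queries above.

SELECT CONVERT(CHAR(10), CreateTimeStamp, 111) AS [Date],
       SUM(CASE WHEN WT <= 0 THEN 1 ELSE 0 END)              AS [WT<=0],
       SUM(CASE WHEN WT > 0  AND WT <= 10 THEN 1 ELSE 0 END) AS [WT<=10],
       SUM(CASE WHEN WT > 10 AND WT <= 20 THEN 1 ELSE 0 END) AS [WT<=20],
       SUM(CASE WHEN WT > 20 THEN 1 ELSE 0 END)              AS [WT>20],
       MIN(WT) AS MinWT,
       MAX(WT) AS MaxWT,
       AVG(WT) AS AvgWT,
       COUNT(RecID) AS Total
FROM TAB_Name
GROUP BY CONVERT(CHAR(10), CreateTimeStamp, 111);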
Oct 2, 2007
Often when I write a stored procedure, I encounter a situation where
it will be really convenient if I can ignore an error and continue the execution
of the next SQL statement, especially when I know what kind of error it will generate.
It's just like the effect of "On Error Resume Next" in VB.
Does anyone have any idea or have some knowledge to share?
I would really appreciate it.
I am using SQL Server 2005 and SQL Server 2000. Thanks.
Chris
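On SQL Server 2005, the closest equivalent is TRY/CATCH with a handler that deliberately does nothing of consequence; SQL Server 2000 has no TRY/CATCH, so the pattern there is testing @@ERROR after each statement (severe errors can still abort the whole batch). A sketch with a hypothetical table:

-- SQL Server 2005: swallow an expected error and continue
BEGIN TRY
    INSERT INTO dbo.MyTable (Id) VALUES (1);  -- may violate a key constraint
END TRY
BEGIN CATCH
    PRINT 'Ignoring error ' + CAST(ERROR_NUMBER() AS varchar(10));
END CATCH;

-- SQL Server 2000: test @@ERROR instead
INSERT INTO dbo.MyTable (Id) VALUES (1);
IF @@ERROR <> 0
    PRINT 'Insert failed; continuing anyway.';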
Jun 16, 2006
Hi,
Is it possible to achieve partition parallelism in SSIS? What I am asking is: in DataStage, if I load some data like 'data reader -> trans1 -> trans2 -> destination' (and assume that I have 4 nodes configured), the tool divides the data into 4 different datasets and executes the package as 4 instances. This way the data load is very fast. Is this possible in SSIS?
Of course we can divide the dataset ourselves and load it through multiple instances. But then the division will differ for every load, so we would need to modify the package all the time. And even if we divide the dataset, I am not sure whether the 4 instances would run on 4 different nodes or on the same node. Does anybody have any idea about this?
Thanks.
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since ca. 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stall/sec increases
- lazy writes/sec increases
- readlatency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so dramatically during run 1.
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. You can in fact see the number of writes (not the number of bytes written) start to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally I want to notice
- I'm the only user on this machine
- the table has a clustered index on a identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops
So I wonder why SQL Server starts to execute so many reads in test 1.
Aug 19, 2015
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated, and one table is being scanned (in parallel) due to a missing index. This table had in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.
Once corrected, a single processor plan is used. This looks really efficient and uses 120 reads. That looks the right figure to me.
Clearly 12 reads is wrong. Does the profiler only display one thread of a parallel plan perhaps? Or something else?
Apr 22, 2008
I have a flat file source which is going to be written into a DB table. There are lots of warning messages which, based on different conditions (CSPL), generate different messages; I achieved this through lots of conditional split and derived column data flow objects. My question is: can I achieve the same functionality with a script task?
Any help will be appreciated.
Thanks
Apr 17, 2008
I have a main report and some subreports.
What I want to achieve is that the subreports would dynamically be sent parameters, and their layout would change depending on parameters sent from the main report.
So there is going to be a main report that is constant, but the subreports' data and layout could change.
Another question: can I have an expression that would hide a subreport if there is no data in the subreport?
Any ideas would be appreciated
Apr 5, 2006
Hello All,
I am developing a package using SSIS which needs to do the following.
1. Read all flat files from a folder. I am doing this using a For Loop task; I know the total number of files in that folder, hence I am setting the loop counter = file count.
2. The next step is to import the data from flat file to SQL server destination table using data flow task.
3. Upon successful completion of the data flow task, there are some other tasks (such as Execute SQL tasks) that do checks/validation on the data and export it to other tables.
Upon successful completion of step 3, the iteration moves to the next file.
I want to achieve the following
If step 2 has an error (for example, a corrupt file or incomplete data), I want to fail the data transfer completely, skip step 3, and go back to step 1 for the next available file.
How do I do this in SSIS?
Thanks for your help.
SGK
Apr 27, 2015
We have two locations in the US. I am thinking of having a 2-node SQL cluster for Lync 2010. I already have one DB server running in one location; now we have a new site where we are planning to have one more DB server for redundancy.
Feb 12, 2007
hi
I have a transactional replication setup that was running fine for a few months, but ever since a massive update of records at the publisher site, when I use the Replication Monitor I notice Status: Performance Critical, Subscription: [localservername]:[Orders_Repl], Performance: Critical, Latency: 03:34:47.
I was thinking of redoing the whole replication, but before I proceed: can you explain what happened, and what it means when Latency shows 03:34:47? I don't remember ever seeing a latency like that.
I am hoping to solve this critical performance issue; I hope you guys can help. Thanks
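A latency of 03:34:47 means the most recent commands are reaching the subscriber about three and a half hours after they were committed at the publisher - typically because the massive update flooded the distribution queue, and it usually drains on its own rather than needing a rebuild. Assuming this is SQL Server 2005 transactional replication, a tracer token measures the current end-to-end latency without touching the setup (the publication name is taken from the post):

-- run at the publisher, in the published database
EXEC sys.sp_posttracertoken @publication = N'Orders_Repl';

-- list the tokens, then inspect how long each leg took
EXEC sys.sp_helptracertokens @publication = N'Orders_Repl';
EXEC sys.sp_helptracertokenhistory @publication = N'Orders_Repl', @tracer_id = 1;  -- use an id returned by the previous call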