Track Reads And Writes
Mar 5, 2008
Guys,
Is there any way to track which tables have the most reads and writes in a database of 400 tables?
Thanks
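A hedged sketch of one way to do this, assuming SQL Server 2005 or later (the post doesn't state a version): sys.dm_db_index_usage_stats accumulates per-index read and write counts since the last service restart, and summing it per table gives a rough ranking across all 400 tables:
-- Per-table read/write totals since the last instance restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;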
How can you find the reads and writes per second of your hard drives in SQL Server? I am reading my SQL book and it says that the average disk should have 125 or fewer I/Os. It gave the formula, but as mentioned I don't know how to find the reads and writes.
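A hedged sketch, assuming SQL Server 2000 or later: ::fn_virtualfilestats returns cumulative reads and writes per database file, so sampling it twice and dividing the deltas by the elapsed seconds approximates I/Os per second (the PhysicalDisk counters in Performance Monitor give the same per physical drive):
-- Cumulative I/O for every file of the current database (-1 = all files).
SELECT DbId, FileId, NumberReads, NumberWrites
FROM ::fn_virtualfilestats(DB_ID(), -1);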
Is there a way to get a total count of all SELECT, UPDATE, DELETE and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at Performance Monitor and, although it has good information, it doesn't capture DMLs.
FYI - it's for capacity planning.
TIA,
Mike
Problem statement:
Let's say user A accesses a record and is making an update to a column. Next, user B accesses the same record, updates the same column, and saves the data. How can user A check whether an update has been made, to avoid overwriting user B's change?
Is there a query statement that user A can write to check for this?
I understand locking can be used to prevent this, but is there an alternative to locking?
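A common alternative to locking is optimistic concurrency with a rowversion (timestamp) column. A minimal sketch, with hypothetical table and column names: user A reads the row's version along with the data, and the update succeeds only if the version is unchanged.
UPDATE dbo.Account
SET Balance = @NewBalance
WHERE AccountId = @AccountId
  AND RowVer = @RowVerReadEarlier;   -- rowversion value captured at read time

IF @@ROWCOUNT = 0
    RAISERROR('Row was changed by another user; reload and retry.', 16, 1);
Zero rows affected means user B got there first, so user A reloads instead of overwriting.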
How can I measure the disk reads and writes to see if I need to add additional disks to the server?
Is it possible to find the reads/writes to a SQL Server table?
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with intensive read and write transactions. I'm wondering what the best design is to optimize performance for both, since we have historically had constant issues with the current environment when massive updates are happening. Reads should have higher priority than writes.
I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. Performance is at times not great due to blocking from long-running queries. I want to clean up the indexes as a starting point.
Through a query I found some time ago on the SQLCAT blog, I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have, say, more than 100,000 more writes than reads, and then see what the missing-index DMVs tell me after a few days of running without those indexes?
In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus a fair number of reads are happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes; I am obviously dropping those indexes. I am just not sure what to do about the indexes that do have a fair number of reads.
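For reference, a hedged reconstruction of the kind of disparity query described (assuming SQL Server 2005+; this is not the SQLCAT original):
-- Indexes whose maintenance cost (writes) dwarfs their benefit (reads),
-- counted since the last service restart.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_updates > s.user_seeks + s.user_scans + s.user_lookups
ORDER BY s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) DESC;
Keep in mind the counters reset at restart, so a short uptime window understates reads for periodic workloads (month-end reports, for example).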
I am looking into various options to improve the latency of our application (we figured the latency is mainly due to data persistence, i.e. writes to and reads from the DB). I am also looking into in-memory databases. But before making that decision (of using an in-memory database), I would like to see if there is a way to configure SQL Server 2005 to get performance as close as possible to an in-memory database.
My questions:
1. Is there a way to configure SQL Server 2005 to use a cache that gets loaded on an as-needed basis, so that future database reads/writes hit the cache as opposed to disk?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)
Thanks
Murthy
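A hedged note on question 1: SQL Server 2005 already caches data pages in its buffer pool on an as-needed basis; the cache is sized rather than switched on, and committed writes always harden to the transaction log first, which is what keeps the configuration recoverable. A minimal sketch of the sizing knobs (values illustrative only):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;   -- illustrative value
EXEC sp_configure 'max server memory (MB)', 8192;   -- illustrative value
RECONFIGURE;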
I have an application that is inserting thousands of records hourly. The server's hard drives are staying maxed out. My boss says there is an index problem; I say it is a drive-subsystem issue.
Any help would be appreciated in understanding this performance problem.
Hi!
First of all I want to tell you that I'm not a DBA or tuning expert, but I ran a trace on a database with performance problems and I found a strange thing.
The users create orders for the service people in the organisation. I can see in the trace that inserts are done, but they don't produce any writes right away. However, after 10-15 minutes all the writes are done. What could make the actual writes be delayed so much? The application is developed using .NET.
/Magnus
Jesus saves. But Gretzky slaps in the rebound.
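A hedged explanation worth checking: at commit time SQL Server hardens the change to the transaction log only; the modified data pages stay dirty in memory and are flushed later by the checkpoint and lazy-writer processes, which would match a 10-15 minute gap between the inserts and the visible data-file writes. The flush can be forced manually:
CHECKPOINT;   -- writes all dirty data pages for the current database to disk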
Hi,
I am experiencing SQL Server write-performance problems on a very shiny server: data files on RAID 1+0, log files on a separate drive, all SCSI, Windows 2003 Server, 6 GB RAM, 2 Xeon processors. I've created a small benchmarking program and run it on my desktop PC and on this 'big' server. Here are the results:
Desktop: SQL Server inserts: 78 seconds. Direct writes to the hard disk (just write a string to a file 10,000 times): 13 seconds.
Server: SQL Server inserts: 422 seconds. Direct writes to the hard disk: 16 seconds.
So, for some reason, my 'shiny' machine is six times slower on writes than my desktop. When I compared select performance, the shiny server was ten times faster than my desktop.
Initially I had RAID 5 on the server and it had poorer direct-write performance, but now direct writes seem to be OK, so I reckon this is a problem related to SQL Server.
What can I do to improve the insert performance?
Thanks in advance
Hi everyone! I'm new to this forum and I suspect I'll be using this forum frequently. Good stuff.
Although this question may appear to be Web-related, I think the problem is with what I'm doing with the database. Please read.
I'm trying to implement a page-tracking solution using ASP and SQL Server 2000. It basically writes a new record to a table every time a user visits a page on the site. It appeared to work fine at first, but I've increasingly been getting timeout errors on my pages, all pointing to the include file that fires the database write.
Here's the code that's referenced on every page:
Set Conn = Server.CreateObject("ADODB.Connection")
Conn.Open "dsn=x;uid=y;pwd=z;"
Set objRecordset1 = Server.CreateObject("ADODB.Recordset")
objRecordset1.Open "SELECT * FROM table", Conn, 1, 2
objRecordset1.AddNew
objRecordset1.Fields("PAGE") = Left(Request.ServerVariables("SCRIPT_NAME"), 100)
objRecordset1.Fields("QUERY_STRING") = Left(Request.ServerVariables("QUERY_STRING"), 100)
objRecordset1.Fields("DATE") = Date()
objRecordset1.Fields("TIME") = Time()
objRecordset1.Fields("PLATFORM") = Left(Request.ServerVariables("HTTP_USER_AGENT"), 100)
objRecordset1.Fields("REFERRER") = Left(Request.ServerVariables("HTTP_REFERER"), 100)
objRecordset1.Fields("USER_IP") = Left(Request.ServerVariables("REMOTE_ADDR"), 20)
If Request.Cookies("TEST")("ID") <> "" Then
    objRecordset1.Fields("VISITOR_ID") = Request.Cookies("TEST")("ID")
End If
objRecordset1.Update
Conn.Close
Set Conn = Nothing
%>
After taking out the reference to the above code, everything speeds back up. So I know the performance hit and timeout issues have to do with the code above.
Is it the simultaneous writes to the table, the constant opening and closing of the recordset, the cursor type, the lock type, or a combination of things?
HELP!! Thanks!
David
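A hedged observation: opening a keyset recordset over SELECT * FROM table fetches far more than is needed just to append one row, and does so on every page view. An alternative is a single INSERT sent through Conn.Execute; the statement it would send looks roughly like this (column names taken from the code above, values purely illustrative):
INSERT INTO [table] (PAGE, QUERY_STRING, [DATE], [TIME], PLATFORM, REFERRER, USER_IP)
VALUES ('/products.asp', 'id=42',
        CONVERT(char(10), GETDATE(), 101),   -- date portion
        CONVERT(char(8),  GETDATE(), 108),   -- time portion
        'Mozilla/4.0', 'http://example.com/', '10.0.0.1');
One round trip, no server-side cursor, and no recordset to open and close per request.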
Ok, here is my situation.....
When someone navigates to a user's profile page on my site, I present them with a slideshow of the user's photos using the AJAX slideshow extender. I obtain the querystring value in the URL (to determine whose page I'm on) and feed that into a webservice via a context value, where an array of photos is created for the slideshow. Now, in order to set the array's size, I do a COUNT of all of that specific user's photos. Then I run another SQL statement to obtain the paths of those photos in the file system. However, between the first SQL query's execution (the COUNT statement) and the second SQL query (getting the paths of the photos), the owner of that profile may upload or delete a photo. I understand this would be a very rare occurrence, since SQL statements 1 and 2 will be executed within milliseconds of each other, but it is still possible, I suppose. When this happens, the array I try to populate will be either too small or too large. I'm using SqlDataReader for this, as it seems to be less memory- and resource-intensive than datasets, but I could be wrong since I'm a relative beginner and newbie. This is what I have in my .vb file for the webservice:
Public Function GetSlides(ByVal contextKey As String) As AjaxControlToolkit.Slide()
    Dim dbConnection As New SqlConnection("string for the data source, etc.")
    Try
        dbConnection.Open()
        Dim memberId As Integer = CInt(contextKey)
        Dim photoCountLookupCmd As New SqlCommand _
            ("SELECT COUNT(*) FROM Photo WHERE memberId = " & memberId, dbConnection)
        Dim thisReader As SqlDataReader = photoCountLookupCmd.ExecuteReader()
        Dim photoCount As Integer
        While (thisReader.Read())
            photoCount = thisReader.GetInt32(0)
        End While
        thisReader.Close()
        Dim MySlides(photoCount - 1) As AjaxControlToolkit.Slide
        Dim photoLookupCmd As New SqlCommand _
            ("SELECT fullPath FROM Photo WHERE memberId = " & memberId, dbConnection)
        thisReader = photoLookupCmd.ExecuteReader()
        Dim i As Integer
        For i = 0 To photoCount - 1   ' was "0 To 2" as posted; bounded to the array size here
            thisReader.Read()
            Dim photoUrl As String = thisReader.GetString(0)
            MySlides(i) = New AjaxControlToolkit.Slide(photoUrl, "", "")
        Next i
        thisReader.Close()
        Return MySlides
    Catch ex As SqlException
    Finally
        dbConnection.Close()
    End Try
End Function
I'm trying to use the most efficient method to interact with the database, since I don't have unlimited hardware and there may be moderate traffic on the site. Is SqlDataReader the way to go, or do I use something else? If I do use SqlDataReader, can someone show me how to run those two SQL statements following best practice? Would I have to somehow lock writing to that table when I start the first SQL statement, then release the lock after I execute the second statement? What's the best practice in this kind of scenario?
Thanks in advance.
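On the count-versus-paths race, one hedged option (assuming SQL Server 2005, where window aggregates are available) is to collapse the two statements into one, so the count and the paths come from the same statement and cannot disagree:
-- Count and rows in one result set; using a parameter instead of string
-- concatenation also protects against SQL injection.
SELECT fullPath,
       COUNT(*) OVER () AS photoCount
FROM Photo
WHERE memberId = @memberId;
The reader can size the array from photoCount on the first row, or simply buffer the paths in a List(Of String) and convert at the end, which avoids depending on the count at all.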
I upgraded from 6.5 to 7.0 SP3. Now when I save (write) an invoice it takes about 10-12 seconds; on 6.5 it was 1-3 seconds. SQL Server and my materials app are the only things running on this box. This is the only area that has gotten slower; everything else works great. I have 3 users saving invoices and about 15 people total using the system at one time. It's a Compaq DL580 loaded with memory; the database is 2,195 MB in size. Same 6.5 client to access the system as before. Should I rebuild/reindex the database? Is there something from the old 6.5 version I need to remove? Thanks in advance!
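A hedged first step after a 6.5-to-7.0 upgrade: 7.0's cost-based optimizer leans heavily on statistics that an upgraded database may not have in good shape, so refreshing statistics and rebuilding the indexes behind the slow saves is a cheap experiment (table name hypothetical):
EXEC sp_updatestats;              -- refresh statistics database-wide
DBCC DBREINDEX ('dbo.Invoice');   -- rebuild indexes on the invoice table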
Hello, we are investigating the use of SQL Server as a backend to our scientific imaging application. We have found that when we write a large image (60 megabytes), performance is quite a bit slower than when writing 60 one-megabyte images. The tests were performed running SQL Server 2000 on Windows 2003 Enterprise on a single machine, to eliminate the network's contribution. Perhaps there is a configuration option that will allow us to tune SQL Server to better handle large writes? TIA
Hi everyone,
Every time my application executes a .DTSX file, something writes to the Application event log, whether the package failed or succeeded, and I don't know who or what is writing it.
The Execute method is passed a customized class which implements IDTEVENTS, but I promise that nowhere in my code do I write that information.
app.execute(nothing,... MYEVENTS)
Public Class MYEVENTS
Implements IDTEVENTS
..
..
..
..
All the methods are declared but empty, except OnQueryCancel, which is customized.
How can I disable this behaviour?
I'm concerned because, once this goes live, we could launch 300 or 400 packages on a daily basis!
Thanks in advance and regards,
I need to write back to a legacy system in the form of a flat file: the first row would be a header and the remaining rows the actual rows of data. Each field would have a column delimiter of ',' and a row delimiter of CRLF.
The source is a SQL Server 2005 table.
I'm looking for a good example of a Script task in the data flow section that writes to a file.
Can anyone show me the code for this or point me to a link?
thanks in advance
Dave
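If a pure T-SQL route is acceptable instead of a Script task, one hedged sketch (server, path, database, and column names are all hypothetical) is to build the header and the comma-delimited rows in a single ordered query and export it with bcp in character mode:
bcp "SELECT line FROM (SELECT 0 AS ord, 'CustId,Amount' AS line UNION ALL SELECT 1, CAST(CustId AS varchar(10)) + ',' + CAST(Amount AS varchar(20)) FROM LegacyDb.dbo.Extract) AS x ORDER BY ord" queryout C:\out\extract.txt -c -T -S MYSERVER
The ord column forces the header row to sort first; -c writes character data with a newline row terminator, so adjust the -r option if the legacy system strictly requires CRLF.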
I am using an error handler that was provided to me from another source. However, I notice that something in the code writes the error message twice. I tried to discover what it was but could not pinpoint it. Here's an example of what my email messages look like:
Is activity file current?
The Script returned a failure result.
The extracts in D:\myFolder are not current! Data NOT loaded.
Is activity file current?
The Script returned a failure result.
The extracts in D:\myFolder are not current! Data NOT loaded.
Obviously, I just want my email to read:
Is activity file current?
The Script returned a failure result.
The extracts in D:\myFolder are not current! Data NOT loaded.
Somewhere, errorMessages is being written more than once. I need help finding the error.
Thanks!
Here is the code from my Event Handlers:
OnError event:
Public Sub Main()
Dim messages As Collections.ArrayList
Try
messages = CType(Dts.Variables("errorMessages").Value, Collections.ArrayList)
Catch ex As Exception
messages = New Collections.ArrayList()
End Try
messages.Add(Dts.Variables("SourceName").Value.ToString())
messages.Add(Dts.Variables("ErrorDescription").Value.ToString())
messages.Add(Dts.Variables("scriptError").Value.ToString())
Dts.Variables("errorMessages").Value = messages
Dts.TaskResult = Dts.Results.Success
End Sub
On PostExecute:
Public Sub Main()
Dim errorDesc As String
Dim messages As Collections.ArrayList
Try
messages = CType(Dts.Variables("errorMessages").Value, Collections.ArrayList)
Catch ex As Exception
Return
End Try
For Each errorDesc In messages
Dts.Variables("emailText").Value = Dts.Variables("emailText").Value.ToString + errorDesc + vbCrLf
Next
Dts.TaskResult = Dts.Results.Success
End Sub
I built an SSIS package (writing out to a flat file) on a 32-bit machine and it works fine. However, when I deploy it to the production server (64-bit), the package writes out garbage data. After some research I found that the problem is with the 32-bit versus 64-bit OS. What is my next step? Am I out of luck, such that I will now have to redesign the package for 64-bit?
Hi!
I was assigned to solve performance problems for an application. I fired up SQL Server Profiler and started a trace, and downloaded SQL Server Trace Analyzer (a trial version, so it's very limited). What I found is that one stored procedure generates almost 400,000 reads every time it's used, and it's used every time a user wants to see his orders. I've tried to translate the T-SQL from Swedish to English; it looks something like this:
select top 100
o.orderid,
o.name,
o.latestdeldate,
os.name as OrderStatus,
os.orderstatusID,
p.placeID,
p.name as place,
p.address,
p.city,
a.name as worktype,
noOfActions=(select count(*) from actions a where a.order_orderid=o.orderid),
noOfServiceObjects = (select count(*) from Serviceobject s, Actions a where s.Place_PlaceID = o.Place_PlaceID and a.order_orderid = o.orderid and a.Serviceobject_serviceobjectid = s.serviceobjectid),
...
...
...
It has 8 SELECT COUNT(*) subqueries in the select list, and then 2 more in the WHERE clause.
I know it's very difficult for you to come up with a solution, but do you know a better way than using SELECT COUNT(*) everywhere? The counts are used to show different status flags on the website.
/Magnus
Jesus saves. But Gretzky slaps in the rebound.
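One hedged direction, sketched only (most of the query is elided above): fold the correlated COUNT(*) subqueries into grouped joins so each child table is scanned once instead of once per order row. For the noOfActions column, for example:
-- One pass over actions instead of a correlated subquery per order row.
-- The orders table name is assumed from the alias o in the post.
SELECT o.orderid,
       COUNT(a.order_orderid) AS noOfActions
FROM orders AS o
LEFT JOIN actions AS a
       ON a.order_orderid = o.orderid
GROUP BY o.orderid;
The per-status counts could be joined back to the main SELECT as grouped derived tables rather than recomputed per row.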
I'm backing up to a network directory that's actually a mount point on a different server. My backup was slower than usual, so I opened Perfmon to have a look.
When selecting the mount point in the Logical Disk section of Perfmon, I can see that Writes/sec and Write Bytes/sec both show zero for a long period of time, even though the backup's percent complete is increasing. Then, all of a sudden, the writes to the network share jump massively.
Is there some caching mechanism for backups in SQL Server whereby data is only flushed to the disk periodically during a backup?
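For what it's worth, backup I/O is buffered: SQL Server fills a set of internal buffers and flushes them in large blocks, which can look in Perfmon like long idle stretches followed by bursts. The documented BACKUP options that control this buffering are BUFFERCOUNT and MAXTRANSFERSIZE (database name hypothetical, values illustrative rather than recommendations):
BACKUP DATABASE MyDb
TO DISK = '\\otherserver\share\MyDb.bak'
WITH BUFFERCOUNT = 16,            -- number of internal I/O buffers
     MAXTRANSFERSIZE = 1048576;   -- bytes per write request (max 4 MB)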
If I'm doing dirty reads and someone updates a record while I'm trying to read it, is it possible to read both the old and the new record, thereby retrieving two records?
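For context, a dirty read is usually expressed like this (table name hypothetical):
SELECT *
FROM dbo.Orders WITH (NOLOCK);   -- READUNCOMMITTED: takes no shared locks
Under this hint a scan reads whatever version of a row it encounters, and concurrent updates that move rows between pages can cause rows to be missed or read twice, so anomalies along the lines you describe are possible.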
server: QAT on clustering server ----> 23 seconds
----------------------------------------------------
SS 2000 developer edition SP4
win NT 5.2 (3790) SP4
Mem 7935 MB
processors 4
root directory C:\program files...
use a fixed memory size 640 MB
reserve physical memory for sql server
minimum query memory 1024 kb
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 5234
server: MILLER ----> 3 seconds
----------------------------------------------------
SS 2000 developer edition no service pack
win NT 5.2 (3790) SP4
Mem 2047 MB
processors 4
root directory f:\MSSQL$INAQAT
dynamically configure sql server memory
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 598
----------------------------------------------------
To make a long story short: I have an application that hits only one database, called RECORDS. I'm getting different durations when running the application: 23 seconds versus 3 seconds.
Same database, same objects and same application.
SERVER QAT is our staging server, which hosts lots of databases.
SERVER MILLER is a server I just assembled, hosting just the one database (RECORDS).
Not sure if the clustering is causing the issue, or the reads. If it's the reads, what is causing them? Do you think it's how the memory is configured? Will the experts please stand up?
So I'm at a dead end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL Server 2000 database that contains over 1,000,000 records in the act_item table. They are run using Query Analyzer, and the Duration and Reads figures come from SQL Profiler.
SQL 1:
exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code', N'@activity_code nvarchar(40),@Priority int', @activity_code = N'46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0
Reads: ~22000
Duration: 250-350 ms
SQL 2:
DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
update act_item set Priority = @Priority where activity_code = @activity_code
Reads: ~160
Duration: 0 ms
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which include the priority as one of the fields.
There are two triggers on the table, neither of which is executed for this SQL statement (there is an IF UPDATE(fieldname) test surrounding the code in each trigger).
There are no foreign-key relationships.
I checked (using Perfmon) to see if a compilation/recompilation was happening. No, it's not.
Any suggestions as to avenues that could be examined would be appreciated.
TIA
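One hedged avenue worth examining (an observation from the two snippets above, not a confirmed diagnosis): the sp_executesql call declares @activity_code as nvarchar(40), while the batch version declares char(36) to match the column. Comparing an nvarchar parameter against a char column can force an implicit conversion that turns the index seek into a scan, which would account for the extra reads. Declaring the parameter with the column's own type is a cheap test:
exec sp_executesql
    N'update act_item set Priority = @Priority where activity_code = @activity_code',
    N'@activity_code char(36), @Priority int',   -- char(36) instead of nvarchar(40)
    @activity_code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69',
    @Priority = 0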
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by the same algorithm, or in a different way?
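For what it's worth, data files that aren't assigned to a named filegroup belong to the PRIMARY filegroup, so proportional fill still applies among them. A hedged way to watch the fill behavior is to compare each file's size against its used space (sysfiles works on SQL Server 2000; sys.database_files replaces it on 2005+):
-- Size vs. used space per file; proportional fill targets the freest files.
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sysfiles;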
Hello,
I'm using SqlDataReader to read my data, and every time I loop through the reader it starts from the second row. Why is that?
Here is my code:
while (reader.Read())
{
    hinfo.Name = reader["_name"].ToString();
    hi.Add(hinfo);
}
I look at the database and I have two rows, but it's reading only the second row, skipping the first row.
I have a set of triggers that log the history of changes to a table, i.e. I record inserts, updates, and deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data. If I were using sprocs for reading data this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table where I am logging other actions? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database? Anyone? I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible. Thanks,
SQL 6.5 - 5.5 Gig
NT
Hello,
Throughout the day, our document management application generates high bursts of physical page reads when users query the database.
What SQL configuration parameter(s) should I check/modify to ensure that the database performs at its optimum during these bursts?
Thank You in advance.
I'm trying to insert all the rows from one table into a new table
(insert A select * from AA).
The Reads column in Profiler shows a really high value (10,253,548).
First I created a unique clustered index, and the reads dropped to 3,258,445; then I created a nonclustered index, expecting even lower reads. Instead, the reads went back up to 10,253,548.
I have read that creating indexes helps reduce reads, but that's not happening here.
Any ideas what is going on?
=============================
http://www.sqlserverstudy.com
Hi,
Can any of you explain what the "Reads" column in Profiler exactly means? I'm not comfortable with the explanation given in BOL:
"The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement's execution."
For the same procedure with the same parameters, if the server is not loaded much the Reads are in the few hundreds, but when there are more than 1,000 concurrent users, why do they go into the millions? What other parameters affect the Reads, and how can I reduce them?
Environment: SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 R2 Server x64 Enterprise Edition SP2
Thanks in Advance.
Regards
Babu
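A hedged note that may explain the discrepancy: Profiler's Reads counter reports logical reads, i.e. 8 KB pages touched, whether they come from the buffer pool or from disk. Under heavy concurrency the same procedure can get a different plan (for example, parameter sniffing turning a seek into a scan), which multiplies logical reads without any code change. For a single execution, SET STATISTICS IO shows the same logical reads broken down per table:
SET STATISTICS IO ON;
EXEC dbo.MyProc @param = 1;   -- hypothetical procedure name and parameter
SET STATISTICS IO OFF;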
Hi,
I have been seeing a basic scenario where a write transaction appears to unexpectedly lock out reading.
The database has its isolation level set to READ COMMITTED.
The scenario is:
1.) Start a transaction (for doing a write).
2.) Do a read before the transaction (for doing the write) is committed (e.g. sqlCommand2.ExecuteReader()).
--> the code will appear to lock up (then time out).
I see the same behavior if I step through the "write" code with the debugger (to a point after the transaction is started but before it is committed) and run a "SELECT * FROM" type query from Microsoft SQL Server Management Studio.
The following code sample demonstrates the issue.
Any thoughts on how to resolve the issue (to let me do "read committed" reading of the database table)?
Thanks!
Andy
Module Transaction
Sub Main()
Dim exception1 As Exception
Try
' Create/Open Database Connection
Dim sqlConnection1 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection1.Open()
' Start transaction
Dim sqlTransaction1 As System.Data.SqlClient.SqlTransaction = sqlConnection1.BeginTransaction()
' Set Parent record
Dim sqlCommand1 As New System.Data.SqlClient.SqlCommand("INSERT INTO Parent (Name) VALUES ('ParentValue');", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteNonQuery()
' Get Id from parent record (note: this code assumes the table was empty when this program starts)
sqlCommand1 = New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
Dim parentId As Integer = CType(sqlCommand1.ExecuteScalar(), Integer)
'
' Do reading test to test concurrently reading table being written to
'
' Create/Open Database Connection for reading test
Dim sqlConnection2 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection2.Open()
Dim sqlCommand2 As New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection2)
' Execute once and reuse the reader (the original code re-executed the
' command on each loop pass); the ExecuteReader call is where it blocks.
Dim sqlReader2 As System.Data.SqlClient.SqlDataReader = sqlCommand2.ExecuteReader() ' <===== LOCKS UP HERE **************
Dim i As Integer
While (sqlReader2.Read() = True)
    i = i + 1
End While
sqlReader2.Close()
'
' End reading test
'
' Set child record
sqlCommand1 = New System.Data.SqlClient.SqlCommand( _
"INSERT INTO Child (Name, ParentId) VALUES ('ChildValue', " & parentId.ToString & ");", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteScalar()
' Either 1.) commit transaction OR 2.) rollback transaction
Dim test As Boolean = False
If test = False Then
sqlTransaction1.Commit()
Else
sqlTransaction1.Rollback()
End If
sqlConnection1.Close()
sqlConnection2.Close()
Catch ex As Exception
exception1 = ex
End Try
End Sub
End Module
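For what it's worth, this blocking is the expected behavior of locking READ COMMITTED: the uncommitted INSERT holds exclusive locks, and the second connection's SELECT waits on them. If this is SQL Server 2005 (an assumption; the post doesn't say), one hedged resolution is the read-committed snapshot option, which lets readers see the last committed version of the row instead of blocking:
ALTER DATABASE [Transaction] SET READ_COMMITTED_SNAPSHOT ON;
-- Database name taken from the connection string above. The ALTER needs
-- exclusive access to the database (close other connections first).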
I have written the same stored proc in T-SQL and in SQLCLR; it basically takes an input XML and returns an XML document. In SQL Profiler, I am getting a Reads value about five times higher for the CLR version. Does anyone have any idea why the CLR is doing more reads than T-SQL? Thanks in advance.