SQL Server Admin 2014 :: Why Number Of Reads Increases During Insert Test
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5000000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to discuss with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1Gb vs 10Gb network, AMD vs Intel, etc. It also lets me compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script ensures the number of bytes inserted into the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the two tests (columns 5 and 6).
Since about 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). At that moment you can also see:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is:
Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so dramatically during run 1.
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. You can indeed see that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4000000 rows, but there is no real impact on write latency.
Finally, note that:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops (sketched below)
So I wonder why SQL Server starts to execute so many reads in test 1.
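For reference, the per-loop counter snapshot described above can be taken with a query along these lines (a minimal sketch against sys.dm_os_performance_counters; the exact counter set of the original script is not shown here):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Page life expectancy',
                       'Free list stalls/sec',
                       'Lazy writes/sec');
-- note: 'Buffer cache hit ratio' must be divided by its 'Buffer cache hit ratio base' row to yield a percentage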
View 4 Replies
Jul 17, 2015
I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. The performance is at times not great due to blocking from long running queries. I want to clean up the indexes as a starting point.
Through a query I found some time ago on the SQLCAT blog, I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have, say, more than 100,000 more writes than reads, and then see what the missing-index DMVs tell me after a few days of running without those indexes?
In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus there are a fair number of reads happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes; I am obviously dropping those indexes. I'm just not sure what to do about the indexes that do have a fair number of reads.
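For reference, a sketch of the kind of reads-versus-writes comparison described above (a minimal example against sys.dm_db_index_usage_stats; the filter and ordering are illustrative):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) DESC;

Keep in mind these counters reset at instance restart, so any drop decision should be based on a full business cycle.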
View 9 Replies
View Related
Aug 31, 2015
I am checking some ratio numbers for our system engineers, namely:
- read/write ratio
- random/sequential ratio
- read/write block size
For the read/write ratio, I am using the query below:
SELECT
m.type_desc
, CEILING(sum(num_of_bytes_read*1.0) / (sum(num_of_bytes_read*1.0) + sum(num_of_bytes_written*1.0)) * 100) AS 'Read %'
, CAST((sum(v.size_on_disk_bytes) / 1024.0 / 1024 / 1024) AS MONEY) AS 'FileSizeGB'
[Code] ....
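A possible completion of the truncated query, assuming m aliases sys.master_files and v aliases sys.dm_io_virtual_file_stats (which the visible columns suggest):

SELECT m.type_desc,
       CEILING(SUM(v.num_of_bytes_read * 1.0)
             / (SUM(v.num_of_bytes_read * 1.0) + SUM(v.num_of_bytes_written * 1.0)) * 100) AS [Read %],
       CAST(SUM(v.size_on_disk_bytes) / 1024.0 / 1024 / 1024 AS money) AS FileSizeGB
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS v
JOIN sys.master_files AS m
  ON m.database_id = v.database_id AND m.file_id = v.file_id
GROUP BY m.type_desc;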
For the random/sequential ratio, I googled but cannot find a similar query to get the result.
View 1 Replies
View Related
Jan 14, 2015
I want to test the automatic switch (failover) between primary and secondary.
How do I do it?
Or, for a related purpose: how do I set a database to suspect mode?
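If this is an AlwaysOn availability group (the primary/secondary wording suggests so), a manual failover can be forced from the secondary replica, while automatic failover is usually exercised by stopping the SQL Server service on the primary and watching the secondary take over. A sketch, with a placeholder group name:

-- run on the synchronous-commit, failover-ready secondary replica
ALTER AVAILABILITY GROUP [MyAg] FAILOVER;

As for suspect mode, as far as I know there is no supported command to set it; that status is assigned by recovery when it encounters corruption.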
View 1 Replies
View Related
Aug 25, 2015
I had an existing table with lots of indexes.
As a test (for speed) I added a nonclustered columnstore index.
When I run test queries it always ignores my new column-store index. Why?
Should I remove the old indexes, leaving just the column store?
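One way to tell whether the optimizer is skipping the columnstore on cost grounds is to force it with an index hint (a sketch; table and index names are placeholders):

SELECT SaleDate, SUM(Amount) AS total
FROM dbo.Sales WITH (INDEX (NCCI_Sales))  -- NCCI_Sales = the nonclustered columnstore
GROUP BY SaleDate;

If the hinted plan is slower, the choice was cost-based. Also note that in 2012/2014 a nonclustered columnstore index makes the table read-only, so whether to drop the old row-store indexes is a separate decision.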
View 2 Replies
View Related
Jan 7, 2015
I am trying to write a SQL script that will basically test Windows NT logins, similar to when you bring up SQL Server Management Studio and "Run as" a Windows NT user. I want to ensure my rights have not been removed from the SQL Servers, and if they have, send my nice DBA an email.
How do I connect as another login and check that my permission to access the database with db_datawriter, db_datareader etc. is still set?
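One hedged sketch using impersonation (the login name is a placeholder; this requires IMPERSONATE permission on the login):

EXECUTE AS LOGIN = N'DOMAIN\MyAccount';
SELECT IS_MEMBER('db_datareader') AS is_datareader,
       IS_MEMBER('db_datawriter') AS is_datawriter;
SELECT permission_name FROM fn_my_permissions(NULL, 'DATABASE');
REVERT;

The result of the check could then be mailed to the DBA with msdb.dbo.sp_send_dbmail.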
View 2 Replies
View Related
Oct 7, 2015
I have heard that a high number of VLFs isn't good. It can impact performance and delay recovery time, so I wanted to test that.
I created 2 DBs, each with a 100MB data file and a 50MB log file.
TestDB's log file had 100MB autogrowth.
TestDB2's log file had 1% autogrowth.
I inserted 1048576 records and took a backup.
Ran DBCC LOGINFO:
TestDB had 40 VLFs and
TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.
Question is: where is the difference? How is TestDB, which has 40 VLFs, better than TestDB2, which has 165 VLFs?
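For reference, the VLF count can be captured programmatically like this (a sketch; the RecoveryUnitId column exists in SQL Server 2012 and later):

CREATE TABLE #loginfo (RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
                       FSeqNo int, [Status] int, Parity int, CreateLSN numeric(25, 0));
INSERT #loginfo EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS vlf_count FROM #loginfo;
DROP TABLE #loginfo;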
View 6 Replies
View Related
Apr 7, 2015
I have this query
SELECT TOP (100)
       LTRIM(b.[text]) AS query_text,
       b.objectid,
       a.total_rows,
       a.total_logical_reads,
       a.execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE a.last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY a.execution_count DESC;
But execution_count is cumulative from the moment the plan was cached. I want to see it for one day only.
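Since sys.dm_exec_query_stats only accumulates from the moment a plan enters the cache, one hedged approach is to snapshot it on a schedule and diff the snapshots (the snapshot table name is hypothetical):

-- first capture creates the table; later captures should INSERT INTO it instead
SELECT GETDATE() AS capture_time,
       qs.sql_handle, qs.execution_count, qs.total_logical_reads
INTO dbo.QueryStatsSnapshot
FROM sys.dm_exec_query_stats AS qs;

A day's activity is then the difference in execution_count between two captures with the same sql_handle, allowing for plans evicted and recompiled in between.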
View 9 Replies
View Related
Feb 2, 2015
Database file placement layout: we are planning to implement a new SQL Server 2014 OLTP database with a 1 TB data file and a 1 TB log file. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me the items which need separate storage, due to constant simultaneous access, are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
View 9 Replies
View Related
Sep 9, 2015
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first, sounded like a good idea but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
View 3 Replies
View Related
Oct 27, 2015
I have a 2-node cluster, 4 cores each, with 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 per instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirror databases would consume 240 threads. What needs to be checked to assess the feasibility of the async mirror setup described above?
View 0 Replies
View Related
Nov 6, 2015
I've installed the MDW (Mangement Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW Monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
View 0 Replies
View Related
Apr 3, 2015
Basically the question is, which number should I pick?
View 4 Replies
View Related
May 3, 2015
ID  A   B   C   AVG
1   08  09  10  -
2   10  25  26  -
3   09  15  16  -
I want to calculate the average of the largest two numbers from columns A, B and C for each ID and store that average in the AVG column.
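One hedged way to do this, assuming a table named dbo.Scores with the columns shown: the average of the two largest values equals the sum of all three minus the smallest, divided by two.

UPDATE t
SET [AVG] = x.avg_top2
FROM dbo.Scores AS t
CROSS APPLY (SELECT (t.A + t.B + t.C
                     - (SELECT MIN(v) FROM (VALUES (t.A), (t.B), (t.C)) AS d(v))) / 2.0
            ) AS x(avg_top2);

For ID 1 this yields (9 + 10) / 2 = 9.5.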
View 9 Replies
View Related
Jan 6, 2014
We have created a DDL trigger on a SQL Server 2005 database for DB audit purposes. The following script was used for trigger creation:
USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[ChangeLog](
[Code] ....
After the DDL trigger was created, the application team started reporting the following errors while executing a stored procedure.
*********************************
Error 1:
INSERT failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods.
Error2:
[Execute SQL Task] Error: Executing the query "exec sp_drop_indexes_EnhLeaseData delete from dbo.leases where vin_num='XXX' and lease_acct_num='XXXX' delete from dbo.leases where vin_num='XXX' and lease_acct_num='080066225' " failed with the following error: "INSERT failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
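For context, this error has a documented cause: indexed views, indexes on computed columns, query notifications and xml methods require specific SET options on the executing connection, and ARITHABORT is the one most often missing. A hedged first step is to make sure the failing sessions run with:

SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;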
View 1 Replies
View Related
Feb 4, 2015
I have a master table with an AFTER INSERT trigger on it. When a record is inserted into the master table, the trigger fires and the record is captured in the back-office table. If the trigger fails, my record ends up neither in the master table nor in the back-office table.
Is there any way to capture the record either in the master table or in a separate table?
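One hedged pattern is to wrap the trigger body in TRY/CATCH so a failure is logged instead of rolling back the original insert (all names are placeholders; if the error has doomed the transaction, XACT_STATE() = -1 and nothing can be written until it rolls back):

CREATE TRIGGER trg_Master_AI ON dbo.MasterTable
AFTER INSERT
AS
BEGIN
    BEGIN TRY
        INSERT dbo.BackOffice (MasterId, Payload)
        SELECT Id, Payload FROM inserted;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> -1  -- only if the transaction is still usable
            INSERT dbo.TriggerErrorLog (ErrorMessage, LoggedAt)
            VALUES (ERROR_MESSAGE(), GETDATE());
    END CATCH
END;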
View 6 Replies
View Related
Jun 10, 2015
Here is my table:
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... For each unique TemplateId, how can I insert one more row?
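A hedged sketch, assuming the goal is one extra row per distinct TemplateId in the same table (table and column names are placeholders):

INSERT INTO dbo.MyTable (TemplateId, SomeColumn)
SELECT DISTINCT TemplateId, 'extra row'
FROM dbo.MyTable;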
View 0 Replies
View Related
Aug 27, 2015
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null) and it shows minimal permissions, as I'd expect.
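The 2012+ shape of that workaround, sketched with a placeholder login name:

ALTER SERVER ROLE securityadmin ADD MEMBER [AppSupportLogin];
DENY ALTER ANY LOGIN TO [AppSupportLogin];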
View 0 Replies
View Related
Aug 19, 2015
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table has in excess of 4 million rows. The rest hit indexes well. However, the entire query generates ONLY 12 READS.
Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.
Clearly 12 reads is wrong. Does Profiler only display one thread of a parallel plan, perhaps? Or is it something else?
View 1 Replies
View Related
Jul 27, 2007
Hi,
Can anyone explain what the "Reads" column in Profiler exactly means? I'm not comfortable with the explanation given in BOL:
"The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement's execution"
For the same procedure with the same parameters, if the server is not loaded much the reads are in the few hundreds, but when there are more than 1000 concurrent users, why does it go into the millions? What other parameters affect these reads? And how can I reduce them?
Environment: SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 R2 Server x64 Enterprise Edition SP2
Thanks in Advance.
Regards
Babu
View 4 Replies
View Related
Apr 17, 2007
I'm running the same query on two different PCs and tracing the results in Profiler on my PC. When executing the query on PC1, the total number of reads is 200000. When executing the same query on PC2, the total number of reads is 13000. That is almost 15 times more reads when executing the query on PC1. The executed query is the same on PC1 and PC2. Any reason for this?
I'm trying to analyse that query and reduce the number of logical reads, as it is too high, but I get a completely different result on a different PC.
Thanks.
View 5 Replies
View Related
Apr 17, 2007
I'm using an application that produces 48000000 reads for one stored procedure and takes 170 seconds to complete. The same procedure, when executed in SQL Analyzer, takes only one second and 10000 reads.
What is happening here? Where should I look to solve this problem?
Thanks
View 1 Replies
View Related
Apr 14, 2015
I inherited a lot of servers to upgrade to 2014, including an SSRS server.
The encryption key was never backed up, and it seems that no one knows what the password is.
Do I have to manually load the reports? There are a lot of Reports.
[URL]
View 4 Replies
View Related
Nov 30, 2006
I've been looking around for some kind of known issue or something, but can't find anything. Here's what I'm experiencing:
I have a table with about 50,000 rows. I open several connections and call ExecuteResultSet on a command for each, with CommandType.TableDirect, CommandText set to the name of the table, and IndexName set to various indexes. In the end, I have several SqlCeResultSet instances which are then maintained for the life of the AppDomain.
In a loop, I call SqlCeResultSet.Read() on one of the instances, and if it returns false, I call SqlCeResultSet.ReadFirst() - essentially creating a circular pass through the result set.
In a Visual Studio debug session, this approach goes swimmingly for a short time, and then, after a successful Read(), I'm hit with an InvalidOperationException (text: "No data exists for the row/column") for a column which was successfully read on the previous Read(). If, in the immediate window, I call SqlCeResultSet.Read() again on the result set instance, the Get___ methods work as they had in the previous reads.
It seems like the internal state of the ResultSet is getting corrupted somehow, but it is opaque to me. Any insights on why this suddenly throws this exception?
View 8 Replies
View Related
Sep 26, 2013
I want to use BCP to load data from a text file.
By default, constraints are turned off in bcp, so I use the CHECK_CONSTRAINTS hint.
bcp aborts if ANY of the rows contains an FK violation. No data gets loaded.
So if I add the -b 1 batch size option, it loads all data UNTIL the first FK violation, but nothing after that.
I want to load EVERYTHING ... except for the violations. But bcp won't let me. Is there a way?
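One hedged alternative is to bulk load into a constraint-free staging table and then insert only the rows that satisfy the FK (a sketch; table, column and file names are placeholders):

-- stage everything with no constraints
BULK INSERT dbo.Staging FROM 'C:\data.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

-- keep only rows whose FK value exists in the parent
INSERT INTO dbo.Target (ParentId, Payload)
SELECT s.ParentId, s.Payload
FROM dbo.Staging AS s
WHERE EXISTS (SELECT 1 FROM dbo.Parent AS p WHERE p.ParentId = s.ParentId);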
View 2 Replies
View Related
Jan 13, 2014
If I install an instance with Windows-only authentication and then change it to mixed mode, the password has already been set when I enable the sa login. What is the default? If it's generated, how secure is it? What algorithm is used for that?
View 9 Replies
View Related
Mar 21, 2014
My SQL databases in SQL Server 2014 have the status "suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover from the .mdf file.
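If no usable backup exists, the commonly cited last resort is EMERGENCY mode plus CHECKDB repair. Heavily hedged: REPAIR_ALLOW_DATA_LOSS can discard data, so copy the .mdf/.ldf files elsewhere first (MyDb is a placeholder):

ALTER DATABASE MyDb SET EMERGENCY;
ALTER DATABASE MyDb SET SINGLE_USER;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDb SET MULTI_USER;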
View 9 Replies
View Related
Jun 18, 2014
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me a few columns and results for all databases on the server.
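A sketch that returns one field and one row, as the monitoring system requires (run in the target database; FILEPROPERTY reports used 8KB pages, so * 8 / 1024 converts to MB):

USE master;
SELECT CAST(SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0 AS decimal(10, 2)) AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';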
View 2 Replies
View Related
Jun 25, 2014
In our environment, applications use a DNS name which points to the physical server IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS to the listener IPs? If failover happens, can the DNS point to the exact IP address of the listener on the primary node?
View 1 Replies
View Related
Jul 31, 2014
Is there a way to schedule a SQL job to run at different intervals? (See the sketch after the example times below.)
For eg:
The job should run at
7:00 Am
8:00 AM
and then at 10:00 Am
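A SQL Agent job can carry several schedules, one per required time. A sketch using msdb.dbo.sp_add_jobschedule (the job name is a placeholder; repeat per time slot):

EXEC msdb.dbo.sp_add_jobschedule
     @job_name = N'MyJob',
     @name = N'Daily 07:00',
     @freq_type = 4,              -- daily
     @freq_interval = 1,          -- every day
     @active_start_time = 070000; -- HHMMSS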
View 3 Replies
View Related
Nov 19, 2014
We installed SQL 2014 on a test server.
We frequently observe:
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or Performance Monitor?
What filters should we select in Profiler to monitor SQL Server?
View 0 Replies
View Related
Dec 8, 2014
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally if I browse to [URL], I am presented with the report server interface, the same as if I browse to http://xxx.xxx.xxx.xxx/reports internally.
What are my options?
View 4 Replies
View Related
Dec 11, 2014
What is the best approach for a read-only copy of a database that is ~1TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well. So we are looking at other options to let SQL Server make the copy.
The primary database is on Win2012R2 with SQL 2012 or 2014, a 2-node A/P failover cluster.
The read-only copy will be on Win2012R2 with SQL 2012 or 2014. It is not a requirement to fail over to the read-only copy if the primary goes down.
What would best the approach to accomplish the end result?
View 3 Replies
View Related