I'm trying to figure out why my SQL Server is flatlined on CPU. I'm running a trace and can't help but notice this query, with crazy high reads. I'm not sure what it is. It doesn't look good to me, although maybe it's nothing. Any info is much appreciated.
We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues with the current environment when massive updates are happening. Reads should have higher priority than writes.
We had some slowdown complaints lately, and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the UidCustomerID column. Running with SET STATISTICS PROFILE ON shows that every execution does an index seek.
A Profiler session showed a huge number of reads for these queries, anywhere from 1,500 to 50,000 depending on the value being looked up. I ran the query with SET STATISTICS IO ON and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.
Three of the columns in the select list are text columns. When I take them out of the query, the LOB logical reads drop to 0, and they climb incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL, DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt, Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction, Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging, vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension, Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan, Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b, Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b, Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing, block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion, block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment, Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code, Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date, terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1, SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made, local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId, accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3, block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS, block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime, OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID from profiles where UidCustomerID in (352199267)
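One thing that may help without touching the column types: since three of the columns are text, SQL Server's 'text in row' table option keeps small text values on the data page instead of on separate LOB pages, which can cut LOB logical reads dramatically when the values are short. A minimal sketch (the 2000-byte cap is an illustrative choice, not a recommendation):

EXEC sp_tableoption 'profiles', 'text in row', '2000'
-- Values up to 2000 bytes are stored in-row; longer values still go to LOB pages.
-- Passing 'ON' instead of a number uses the default 256-byte limit.

Trimming the SELECT list to just the columns a given caller needs would also avoid the LOB reads entirely whenever the text columns aren't required.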
Suddenly in one database we have a lot of errors; it seems some things are corrupted. I tried to start maintenance / database repair, but this fails too. When selecting a range of records from a table in Query Analyzer, I get the following message:

Location: p:\sql\tdbms\storeng\drs\include\record.inl:1447
Expression: m_SizeRec > 0 && m_SizeRec <= MAXDATAROW
SPID: 68
Process ID: 1208

When I select the record that causes this error, the following error is reported in Query Analyzer:

Could not find the index entry for RID '163748993200' in index page (3:373352), index ID 0, database 'sal'.

In the log I see a lot of these messages:

Stack Signature for the dump is 0x&D179C48
Could not open FCD for invalid file ID 21761 in database 'sal'
I/O error (bad page ID) detected during read at offset 0x00000b64d2000

How can this be fixed? How can I rebuild the index for one table / check the integrity of one table? What kind of actions may have caused this corruption (if it is corruption)? How can it be prevented? I hope someone can help.
Regards,
Rene
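For the "check integrity / rebuild the index for one table" part, a minimal sketch with the standard DBCC commands of that era (substitute your real table name for YourTable):

DBCC CHECKTABLE ('YourTable')   -- integrity check for a single table
DBCC CHECKDB ('sal')            -- integrity check for the whole database
DBCC DBREINDEX ('YourTable')    -- rebuild all indexes on the table
-- Repair options, e.g. DBCC CHECKDB ('sal', REPAIR_REBUILD), require the
-- database to be in single-user mode; take a backup first.

Bad page ID and I/O errors like these often point at the disk subsystem rather than SQL Server itself, so checking the hardware and the Windows event log is worthwhile before and after any repair.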
I am running an application on one NT Server, running against SQL Server 6.5 SP3, and SQL 7 with SP1 applied.
The application is a 'data migration' type application, i.e. a heavy insert and update workload, against many tables (50+) with many different SQL statements.
The SQL 7 server is configured with 'floating' memory.
On SQL 7 - I am experiencing very high page faults/second for the sqlservr process - sometimes peaking at over 1,000. I was under the impression any number greater than 10 indicates a problem with system performance.
The same application, same data, same NT configuration etc against SQL 6.5 does not page fault. SQL Server 6.5 completes the work faster than 7.
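If the paging is driven by SQL 7's dynamic memory growth competing with the rest of the box, one experiment is to cap the buffer pool so it behaves more like 6.5's fixed allocation. A minimal sketch (the 128 MB figure is purely illustrative; size it to leave headroom for NT and the application):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 128
RECONFIGURE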
Just built a new DB server: SQL Server 2005 Standard 64-bit on Windows Server 2003 64-bit. Hardware includes 16 GB of memory, dual quad-core processors, and many spindles; however, we only have a 2 GB page file on C:. SQL Server is set to use a maximum of 12 GB of memory, which is way more than it should need. The problem is, we are occasionally seeing very high page file usage. If Task Manager is correct (which it may not be), PF Usage shows between 10 and 13.8 GB, but I only have a 2 GB page file! Occasionally, when simply copying a file from one partition to another, PerfMon shows Pages/sec jump to 10,000. We also see some very high disk queueing. This system isn't even in production yet. My main questions are two: 1) Some documentation says that with SQL 2005 64-bit running on Windows 2003 64-bit I may not even need a page file. Is this correct? 2) What would be causing high usage of the swap file with 16 GB of installed memory? Any help would be appreciated. Thanks in advance!
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the operating mode to High Performance, but not to High Protection, despite both of them being synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been set up? Is it just like initially setting up the mirror, or are there shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
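For what it's worth, High Availability and High Protection are both SAFETY FULL; the structural difference is only the witness, so removing the witness alone should yield High Protection with no re-initialization. A minimal sketch, run on the principal (MyDb is a placeholder):

ALTER DATABASE MyDb SET WITNESS OFF         -- High Availability -> High Protection
ALTER DATABASE MyDb SET PARTNER SAFETY FULL -- confirm the mode stays synchronous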
I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, does the mode automatically change to High Performance when a failover occurs? Since for failover to occur something must have happened to the principal, it would be impossible to commit transactions synchronously on the new principal and its mirror, as one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
I have created a report, but all the records are displayed on one page. I need a solution to display the records page by page. When I created the same report without grouping, the records did display page by page.
Hi! I was assigned to solve performance problems for an application. I fired up SQL Server Profiler and started a trace, then downloaded SQL Server Trace Analyzer (a trial version, so it's very limited). What I found is that one stored procedure generates almost 400,000 reads every time it's used, and it's used every time a user wants to see his orders. I've tried to translate the T-SQL from Swedish to English; it looks something like this:
select top 100 o.orderid, o.name, o.latestdeldate, os.name as OrderStatus, os.orderstatusID, p.placeID, p.name as place, p.address, p.city, a.name as worktype, noOfActions=(select count(*) from actions a where a.order_orderid=o.orderid), noOfServiceObjects = (select count(*) from Serviceobject s, Actions a where s.Place_PlaceID = o.Place_PlaceID and a.order_orderid = o.orderid and a.Serviceobject_serviceobjectid = s.serviceobjectid), ... ... ...
It has eight SELECT COUNT(*) subqueries in the select list, and two more in the WHERE clause.
I know it's very difficult for you to come up with a solution, but do you know a better way than using SELECT COUNT(*) everywhere? The counts are used to show different status flags on the website.
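One pattern that often helps: replace each correlated COUNT(*) subquery with a single grouped derived table joined in, so each child table is scanned once for the whole result set instead of once per row. A minimal sketch for just the noOfActions column (orders as the base table name is an assumption; substitute the real one):

SELECT TOP 100
    o.orderid, o.name, o.latestdeldate,
    ISNULL(ac.noOfActions, 0) AS noOfActions
FROM orders o
LEFT JOIN (SELECT order_orderid, COUNT(*) AS noOfActions
           FROM actions
           GROUP BY order_orderid) ac ON ac.order_orderid = o.orderid

The same treatment applies to the other counts; with eight to ten of them, the saving compounds quickly.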
If I'm doing dirty reads and someone updates a record while I'm reading it, is it possible to read both the old and new versions of the record, thereby retrieving two rows?
How can you find the reads and writes per second of your hard drives in SQL Server? I am reading my SQL book and it says that your average disk should handle 125 or fewer I/Os per second. It gave the formula, but as mentioned, I don't know how to find the reads and writes.
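In Performance Monitor, the counters are PhysicalDisk: Disk Reads/sec and Disk Writes/sec. From inside SQL Server 2000 you can get cumulative per-file counts and compute the rate yourself. A minimal sketch (MyDb is a placeholder; file ID 1 is the primary data file):

SELECT DbId, FileId, NumberReads, NumberWrites
FROM ::fn_virtualfilestats(DB_ID('MyDb'), 1)

Sample it, wait a known interval, sample again, and divide the deltas by the elapsed seconds to get reads/sec and writes/sec.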
server: QAT (on a clustered server) ----> 23 seconds
----------------------------------------------------
SQL Server 2000 Developer Edition SP4
Windows NT 5.2 (3790) SP4
Memory: 7935 MB
Processors: 4
Root directory: C:\Program Files\...
Use a fixed memory size: 640 MB
Reserve physical memory for SQL Server
Minimum query memory: 1024 KB
Use all available processors
Minimum query plan threshold for considering: 5
PROFILER READS = 5234

server: MILLER ----> 3 seconds
----------------------------------------------------
SQL Server 2000 Developer Edition, no service pack
Windows NT 5.2 (3790) SP4
Memory: 2047 MB
Processors: 4
Root directory: F:\MSSQL$INAQAT
Dynamically configure SQL Server memory
Use all available processors
Minimum query plan threshold for considering: 5
PROFILER READS = 598
----------------------------------------------------
To make a long story short: I have an application that hits only one database, called RECORDS. I'm getting different durations when running the application: 23 seconds versus 3 seconds. Same database, same objects, same application. Server QAT is our staging server, meaning lots of databases; server MILLER is a server I just assembled, meaning just one database (RECORDS).
I'm not sure whether it's the clustering that is causing the issue, or the reads. If it's the reads, what is causing them? Do you think it's how the memory is configured? Will the experts please stand up?
So I'm at a dead end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL Server 2000 database that contains more than 1,000,000 records in the act_item table. They are run using Query Analyzer, and the Duration and Reads figures come from SQL Profiler.
SQL 2:

DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
UPDATE act_item SET Priority = @Priority WHERE activity_code = @Activity_Code
Reads: ~160 Duration: 0 ms
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which includes Priority as one of its fields. There are two triggers on the table, neither of which fires for this statement (there is an IF UPDATE(fieldname) check surrounding the code in each trigger). There are no foreign key relationships. I checked (using PerfMon) whether a compilation/recompilation was happening; it is not. Any suggestions as to avenues that could be examined would be appreciated.
Hello, I'm using SqlDataReader to read my data, and whenever I loop through the reader it starts from the second row. Why is that? Here is my code:

while (reader.Read())
{
    hinfo.Name = reader["_name"].ToString();
    hi.Add(hinfo);
}

I look at the database and I have two rows, but it's reading only the second row, skipping the first row.
I have a set of triggers that log the history of changes to a table, i.e. I record inserts, updates, and deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data. If I were using stored procedures for reading data, this would be relatively painless, but I am using an O/R mapper for data access, which writes dynamic SQL at runtime (and I don't want to use stored procedures with it) and sends it down to the DB. Is there a way I can intercept reads and log them to the same table where I am logging other actions? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database? Anyone? I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible. Thanks,
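One avenue that stays entirely at the database level in SQL Server 2005 is an event notification on the schema-object-access audit event, which fires for SELECTs even when they arrive as dynamic SQL. A rough sketch, with made-up queue/service/notification names (requires Service Broker enabled in the database hosting the queue):

CREATE QUEUE ReadAuditQueue;
CREATE SERVICE ReadAuditService ON QUEUE ReadAuditQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
CREATE EVENT NOTIFICATION ReadAuditNotification
ON SERVER
FOR AUDIT_SCHEMA_OBJECT_ACCESS_EVENT
TO SERVICE 'ReadAuditService', 'current database';
-- A background job can then RECEIVE the event XML from ReadAuditQueue,
-- filter it to the audited table, and insert rows into the same audit table.

Be warned that auditing every read this way generates a lot of traffic on a busy table; a server-side trace is the lower-overhead alternative if a separate log file is acceptable.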
Is there a way to get a total count of all SELECT, UPDATE, DELETE, and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at Performance Monitor and, although it has good information, it doesn't capture DML.
I'm trying to insert all the rows from a table into a new table (INSERT A SELECT * FROM AA). The Reads column in Profiler shows a really high value (10,253,548).
First I created a unique clustered index, and the reads dropped to 3,258,445; then I created a nonclustered index, expecting even lower reads. Instead, the reads went back up to 10,253,548.
I have read that creating indexes helps reduce reads, but that's not happening here. Any ideas what is going on?
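For a pure bulk copy like this, secondary indexes on the target usually hurt: every inserted row has to maintain them, which shows up as extra reads and writes. Indexes reduce reads for queries that can seek on them; for an unfiltered INSERT ... SELECT they only add maintenance cost. The usual pattern is to drop the nonclustered index, load, then recreate it. A minimal sketch (the index and column names are made up):

DROP INDEX A.IX_A_SomeColumn
INSERT INTO A SELECT * FROM AA
CREATE INDEX IX_A_SomeColumn ON A (SomeColumn)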
Can anyone explain what the "Reads" column in Profiler means, exactly? I'm not comfortable with the explanation given in BOL:
"The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement's execution"
For the same procedure with the same parameters, if the server is not heavily loaded the Reads are in the few hundreds, but when there are more than 1,000 concurrent users, why does it go into the millions? What other factors affect the Reads, and how can I reduce them?
Environment: SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 R2 Server x64 Enterprise Edition SP2
I have been seeing a basic scenario in which a write transaction appears to unexpectedly lock out reading.
The database has isolation set to "READ COMMITTED".
The scenario is:
1.) Start a transaction (for doing a write)
2.) Do a read before the transaction (for doing the write) is committed (e.g. sqlCommand2.ExecuteReader()).
--> the code will appear to lock-up (then time out).
I see the same behavior if I step through the "write" code with the debugger (to a point after the transaction is started, but before it is committed), and run a "SELECT * FROM" type query from Microsoft SqlServer Management Studio.
The following code sample demonstrates the issue.
Thoughts on how to resolve the issue (to let me do "read committed" reading of the database table)?
Thanks!
Andy
Module Transaction
Sub Main()
Dim exception1 As Exception
Try
' Create/Open Database Connection
Dim sqlConnection1 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection1.Open()
' Start transaction
Dim sqlTransaction1 As System.Data.SqlClient.SqlTransaction = sqlConnection1.BeginTransaction()
' Set Parent record
Dim sqlCommand1 As New System.Data.SqlClient.SqlCommand("INSERT INTO Parent (Name) VALUES ('ParentValue');", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteNonQuery()
' Get Id from parent record (note: this code assumes the table was empty when this program starts)
sqlCommand1 = New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
Dim parentId As Integer = CType(sqlCommand1.ExecuteScalar(), Integer)
'
' Do reading test to test concurrently reading table being written to
'
' Create/Open Database Connection for reading test
Dim sqlConnection2 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection2.Open()
Dim sqlCommand2 As New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection2)
' Execute the reader once and keep it; the shared-lock request blocks here
' because the INSERT on sqlConnection1 is still uncommitted
Dim sqlReader2 As System.Data.SqlClient.SqlDataReader = sqlCommand2.ExecuteReader() ' <===== LOCKS UP HERE **************
Dim i As Integer
While (sqlReader2.Read() = True)
i = i + 1
End While
'
' End reading test
'
' Set child record
sqlCommand1 = New System.Data.SqlClient.SqlCommand( _
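On SQL Server 2005, one common way to get non-blocking READ COMMITTED reads is row versioning. A minimal sketch, assuming the database really is named Transaction as in the connection strings above:

ALTER DATABASE [Transaction] SET READ_COMMITTED_SNAPSHOT ON
-- This statement waits for all other connections to the database to close,
-- so run it from a single connection with the application shut down.

With that option on, sqlCommand2's SELECT would return the last committed rows instead of blocking on the uncommitted INSERT.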
I have written the same stored procedure in T-SQL and in SQL CLR; it basically takes an input XML and returns an XML document. In SQL Profiler, I am getting a Reads value about five times higher for the CLR version. Does anyone have any idea why the CLR does more reads than T-SQL? Thanks in advance.
Hello. I have a project that uses a large number of Microsoft data access pages created in Access 2003 and runs on SQL Server 2005.
When I am on, let's say, my client data access page (the first page in a series) and I have completed the fields in the DAP, I direct my users to the next step of the registration process by means of a hyperlink to another data access page in the same web, bound to a linked or sometimes different table.
I need to pass data entered/created on the first page to the next page and populate the next page with some data from the first page/table (like keeping the client name and ID when I go to the next page).
I also need the first data access page to open and display a blank or new record, not an existing record. I will also be looking to create a drop-down box as a record selector.
Any pointers in the right direction would be appreciated. I am somewhat new to data access pages, so a walkthrough would be nice, but anything you've got is welcome. Thanks, Peter…
We just migrated an application from Oracle to SQL Server, and we are seeing a lot of deadlocking and blocking. I noticed that the app seems to be setting an isolation level of REPEATABLE READ. Attached is a .doc of one of the deadlocks. Is there a way to avoid these under the REPEATABLE READ isolation level? This example is a select against two tables, using nonclustered indexes that cover the WHERE clause, and an insert doing just a clustered index insert. Is the answer simply to get rid of REPEATABLE READ if it's not needed (I guess I have to check with the vendor on that), or is there a way to keep REPEATABLE READ and not deadlock?
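If the vendor app genuinely needs repeatable-read semantics, SNAPSHOT isolation (SQL Server 2005 and later) is the closest match to the Oracle behavior the app was written against: readers see a consistent version of the data without taking the shared locks that produce these reader/writer deadlocks. A minimal sketch (MyAppDb is a placeholder):

ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON
-- then, per session, in place of REPEATABLE READ:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT

Whether you can change the isolation level the app requests is, as you say, a question for the vendor.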
I am running a Profiler trace against a database and noticed that the Reads column always shows 0. When running the same trace against another machine, I get back values in the Reads column. I took a query that Profiler reported as having 0 reads and ran it in Query Analyzer with STATISTICS IO on, and confirmed that there are in fact reads:

Table 'tt_cawardalloc'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 1.
Table 'tt_clineitem'. Scan count 10, logical reads 125208, physical reads 1540, read-ahead reads 2995.
Table 'tt_contractitem'. Scan count 32, logical reads 676, physical reads 0, read-ahead reads 0.
Table 'tt_contract2'. Scan count 3, logical reads 121, physical reads 4, read-ahead reads 0.

I am on SQL Server 2000 SP3a. Any help appreciated. Thanks!
I'm running the same query on two different PCs and tracing the results in Profiler on my PC. When executing the query on PC1, the total number of reads is 200,000; on PC2 it is 13,000. That is almost 15 times more reads on PC1 than on PC2. The executed query is the same on both. Any reason for this?
I'm trying to analyze the query and reduce the number of logical reads, as it's too high, but I get completely different results on different PCs.
Let's say user A accesses a record and is making an update to a column... Next, user B accesses the same record, makes an update to the same column, and saves the data. How can user A check whether an update has been made, to prevent overwriting user B's data?
Is there a query statement that user A can write to check for this?
I understand locking can be used to prevent this, but is there an alternative to locking?
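Yes; the standard lock-free approach is optimistic concurrency with a rowversion (a.k.a. timestamp) column: the UPDATE only succeeds if the row is unchanged since it was read. A minimal sketch (table and column names are made up):

ALTER TABLE Customers ADD RowVer rowversion

-- User A reads the row and remembers the version:
DECLARE @OriginalRowVer binary(8)
SELECT @OriginalRowVer = RowVer FROM Customers WHERE CustomerId = 1

-- Later, user A's update succeeds only if nobody changed the row meanwhile:
UPDATE Customers
SET Balance = 100
WHERE CustomerId = 1 AND RowVer = @OriginalRowVer

IF @@ROWCOUNT = 0
    PRINT 'Row was modified by another user; reload and retry.'

SQL Server bumps RowVer automatically on every update, so user B's save changes it and user A's stale UPDATE matches zero rows.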
Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?
In the server-side trace I have running, I found a reporting procedure that shows 136,949,501 reads (yes, hundreds of millions) and takes 13,508 seconds to complete.
So I pulled the code from the trace and executed it via SSMS: it runs in under 1 second and generates only about 4,000 reads (I get the same result with various parameters).
All my queries are being blocked while the tables are being replicated, and it is causing some two-minute blocking. Is there a way for replication to allow dirty reads? I really don't care about that; I would rather have dirty reads than two-minute waits. Thanks.
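If dirty reads are genuinely acceptable, you can ask for them per query with a hint or per session with the isolation level; either stops the readers queuing behind replication's writes. A minimal sketch (Orders is a placeholder table name):

SELECT * FROM Orders WITH (NOLOCK)               -- per query
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED -- per session, for everything that follows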