Number Of Reads In Profiler Is Not The Same When Running The Same SP On Different PCs
Apr 17, 2007
I'm running the same query on two different PCs and tracing the results in Profiler on my PC. When executing the query on PC1, the total number of reads is 200000. When executing the same query on PC2, the total number of reads is 13000 - almost 15 times more reads on PC1. The executed query is the same on both PCs. Any reason for this?
I'm trying to analyse that query and reduce the number of logical reads, as it is too high, but I get completely different results on different PCs.
Thanks.
View 5 Replies
Jul 27, 2007
Hi,
Can any of you explain what the "Reads" column in Profiler exactly means? I'm not comfortable with the explanation given in BOL.
"The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement's execution"
For the same procedure with the same parameters, if the server is not loaded much, the Reads are in the few hundreds, but when there are more than 1000 concurrent users, why does it go into the millions? What other parameters affect these reads? And how can I reduce them?
Environment: SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 R2 Server x64 Enterprise Edition SP2
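For cross-checking what Profiler reports, SET STATISTICS IO shows per-table logical reads for a single execution. A minimal sketch, assuming you can run the procedure ad hoc (dbo.MyProc and its parameter are hypothetical stand-ins):

SET STATISTICS IO ON;
EXEC dbo.MyProc @SomeParam = 1;  -- hypothetical procedure and parameter
SET STATISTICS IO OFF;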
Thanks in Advance.
Regards
Babu
View 4 Replies
View Related
Apr 17, 2007
I'm using an application that produces 48000000 reads for one stored procedure, which takes 170 seconds to complete. The same procedure, when executed in SQL Analyzer, takes only one second and 10000 reads.
What is happening here? Where should I look to solve this problem?
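One common cause of this symptom (an assumption worth checking, not a confirmed diagnosis) is that the application's connection and SQL Analyzer run with different SET options, such as ARITHABORT, so each gets its own cached plan and one plan can be far worse. A quick comparison:

-- Run this from both the application's connection and from SQL Analyzer,
-- then compare the output line by line; differing options mean separate cached plans.
DBCC USEROPTIONS;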
Thanks
View 1 Replies
View Related
Jul 23, 2005
I am running a profiler trace against a database and noticed that the reads column always shows 0. When running the same trace against another machine I get back values in the reads column. I took a query that profiler reported as having 0 reads and ran it in Query Analyzer with STATISTICS IO on and confirmed that there are in fact reads:
Table 'tt_cawardalloc'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 1.
Table 'tt_clineitem'. Scan count 10, logical reads 125208, physical reads 1540, read-ahead reads 2995.
Table 'tt_contractitem'. Scan count 32, logical reads 676, physical reads 0, read-ahead reads 0.
Table 'tt_contract2'. Scan count 3, logical reads 121, physical reads 4, read-ahead reads 0.
I am on SQL 2000 SP3a. Any help appreciated. Thanks!
View 8 Replies
View Related
Jun 2, 2008
I ran SQL Profiler and got the following results for a stored procedure:
CPU: 1078
Reads: 125464
Writes: 0
Duration: 1882
How do I interpret the above results?
Also, what CPU and Duration values are considered high, indicating a poorly performing query?
I am using SQL Server 2005. Thanks
View 4 Replies
View Related
Aug 19, 2015
Is the SQL Server Profiler Reads Column Incorrect For Parallel Plans?
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table had in excess of 4 million rows in it. The rest hit indexes well. However the entire query generates ONLY 12 READS.
Once corrected, a single processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.
Clearly 12 reads is wrong. Does the profiler only display one thread of a parallel plan, perhaps? Or something else?
View 9 Replies
View Related
Jun 1, 2007
We had some slowdown complaints lately and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting profile statistics on shows that every time it executes it does an index seek.
A profiler session showed a huge number of reads for these queries depending on the value being looked up - 1500 through 50000. I set profile IO on and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1700.
Three of the columns in the select statement are text columns. When I take them out of the query, the LOB logical reads drop to 0 and climb incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)? (One possibility is sketched after the query below.)
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
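One possibility that avoids a data type change (a sketch, assuming most of the text values are small): the 'text in row' table option stores small text values on the data page itself rather than in separate LOB pages, which can cut LOB logical reads for queries like the one above.

-- Sketch: keep text values up to 7000 bytes in-row for this table.
-- Existing values move in-row only as they are next updated; test the row-size impact first.
EXEC sp_tableoption 'profiles', 'text in row', '7000';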
View 3 Replies
View Related
Nov 30, 2006
I've been looking around for some kind of known issue or something, but can't find anything. Here's what I'm experiencing:
I have a table with about 50,000 rows. I open several connections and call ExecuteResultSet on a command for each, with CommandType.TableDirect, CommandText set to the name of the table, and IndexName set to various indexes. In the end, I have several SqlCeResultSet instances which are then maintained for the life of the AppDomain.
In a loop, I call SqlCeResultSet.Read() on one of the instances, and if it returns false, I call SqlCeResultSet.ReadFirst() - essentially creating a circular pass through the result set.
In a Visual Studio debug session, this approach goes swimmingly for a short time, and then after a successful Read(), I'm pegged with an InvalidOperationException (text: "No data exists for the row/column") for a column which was successfully read on the previous Read(). If, in the Immediate window, I call SqlCeResultSet.Read() again on the result set instance, the Get___ methods work as they had been in the previous reads.
It seems like the internal state of the ResultSet is getting corrupted somehow, but it is opaque to me. Any insights on why this suddenly throws this exception?
View 8 Replies
View Related
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5000000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. I can also compare several SQL setups (recovery model, max memory config, ...)
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)
Since ca. 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases extremely during run 1.
I expected to see a serious impact on write behavior, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4000000 rows, but there's no real impact on write latency.
Finally I want to notice
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries
So I wonder why SQL Server starts to execute so many reads in test 1.
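For reference, a minimal sketch of the kind of counter snapshot described above, using sys.dm_os_performance_counters (the object_name filter is an assumption; it is instance-dependent, e.g. 'SQLServer:Buffer Manager' on a default instance):

-- Snapshot a few of the buffer-related counters mentioned in the test results.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec');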
View 4 Replies
View Related
Mar 28, 2000
The number of users on the server increases when I am running the SQL Server 7 Profiler. In the trace properties, I chose to monitor deadlock, deadlock chain, and stored procedure: Complete, Starting, StmtStarting and StmtComplete. Any idea why the number of users increases, and what properties I can choose to avoid this problem? Thanks
View 1 Replies
View Related
Feb 6, 2007
Where - in SQL Server Profiler - can I see the port number that someone has used to connect to a database?
e.g. given the connect string "tcp:MACHINE1\INSTANCE7,3045" - where in Profiler does it tell me that the connection is using port 3045? I looked at "audit login" and "existing connection" but I don't see a port number...
Any other SQL/Windows tools that I can use to monitor connections to databases on specific ports?
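As far as I know, Profiler does not expose the port directly. On SQL Server 2005, one alternative (a sketch, assuming VIEW SERVER STATE permission) is the connections DMV:

-- One row per connection; local_tcp_port is the server-side port the client connected to.
SELECT session_id, client_net_address, client_tcp_port, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP';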
thanks
View 1 Replies
View Related
Jan 11, 2008
My table has a running-number primary key. I want to insert data into this table and automatically generate the running-number PK. I tried to write a SQL command like this:
SELECT MAX(ID) AS MaxId from test
INSERT INTO test (ID, data1, data2, data3) VALUES (MaxId+1, @data1, @data2, @data3)
but it fails. How should I do this? Thank you in advance.
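Two sketches that might work here. First, assuming both statements run in one batch, capture the max into a variable (two simultaneous sessions could still read the same MAX(ID)):

DECLARE @MaxId int
SELECT @MaxId = ISNULL(MAX(ID), 0) FROM test
INSERT INTO test (ID, data1, data2, data3) VALUES (@MaxId + 1, @data1, @data2, @data3)

Alternatively, an IDENTITY column lets the table generate the running number itself and avoids that race entirely (the table and column types below are assumptions):

-- Hypothetical table: ID is generated automatically on each insert.
CREATE TABLE test2 (
    ID int IDENTITY(1,1) PRIMARY KEY,
    data1 varchar(50),
    data2 varchar(50),
    data3 varchar(50)
)
INSERT INTO test2 (data1, data2, data3) VALUES (@data1, @data2, @data3)
SELECT SCOPE_IDENTITY() AS NewId  -- the ID that was just assigned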
View 1 Replies
View Related
Sep 13, 2006
Hi,
First of all, my apology. I'm not sure if this is the right SQL server forum. Moreover, this didn't even happen on SQL Server but Sybase 11.9. But I'm thinking that the same situation might happen on SQL Server and I can't find any Sybase forum to run this by some experts.
I got the following error on a dev server:
Error: Number (1204) Severity (17). SQL Server has run out of LOCKS. Re-run your command when there are fewer active users, or contact a user with System Administrator (SA) role to reconfigure SQL Server with more LOCKS
The number of locks was set to 10,000.
I set it to 15,000 and ran the same code: Same error.
I set it to 20,000 and ran the same code: No error.
I set it back to 15,000 and ran the same code: No error.
I set it to 10,000 and ran the same code: No error.
I set it to 5,000 and ran the same code: No error.
It seems strange that it repeatedly failed at 10,000 locks, but once I had set the number of locks to 20,000, it then worked with the number of locks as low as 5,000. Does anybody have any idea why this would happen?
Another question: when the number of locks was set to 10,000, I ran sp_lock while the code was running and the number of rows returned was often over 10,000. Still, I didn't get the error. Anybody know why?
Did anybody ever run into a similar situation?
Appreciate your help.
View 1 Replies
View Related
May 11, 2007
I know that if I set the Package.MaxConcurrentExecutables property to -1, it will set the maximum number of concurrently running executables equal to the number of processors plus two.
I have a requirement where I have to run the maximum number of concurrent running executables. However, the number of concurrent executables to run varies each month (it is not fixed).
If I have a loop inside the package that executes a stored procedure (same sproc, different parameter each time), will SQL Server execute the maximum number of concurrent executables possible? (It may execute 4 or 8 at a time and that is fine.)
Thanks,
Michael
View 1 Replies
View Related
Jan 11, 2008
Hi,
I have stumbled on a problem with running a large number of SSIS packages in parallel, using the "dtexec" command from inside a SQL Server job.
I've described the environment, the goal and the problem below. Sorry if it's a bit too long, but I tried to be as clear as possible.
The environment:
Windows Server 2003 Enterprise x64 Edition, SQL Server 2005 32bit Enterprise Edition SP2.
The goal:
We have a large number of text files that we're loading into a staging area of a data warehouse (based on SQL Server 2k5, as said above).
We have one "main" SSIS package that takes a list of files to load from an XML file, loops through that list and for each file in the list starts an SSIS package by using the "dtexec" command. The command is started asynchronously by using the system.diagnostics.process.start() method. This means that a large number of SSIS packages are started in parallel. These packages perform the actual loading (with BULK insert).
I have successfully run the loading process from the command prompt (using the dtexec command to start the main package) a number of times.
In order to move the loading to a production environment and schedule it, we have set up a SQL Server Agent job. We've created a proxy user with the necessary rights (the same user that runs the job from the command prompt) and created the SQL Agent job (there is one step of type "cmdexec" that runs the "main" SSIS package with the "dtexec" command).
If the input XML file for the main package contains a small number of files (for example 10), the SQL Server Agent job works fine - the SSIS packages are started in parallel and they finish work successfully.
The problem:
When the number of concurrently started SSIS packages gets too big, the packages start to fail. When a large number of SSIS package executions are already taking place, the new dtexec commands fail after 0 seconds of work with an empty error message.
Please bear in mind that the same loading still works perfectly from command prompt on the same server with the same user. It only fails when run from the SQL Agent Job.
I've tried to understand the limit at which the packages start to fail, and I believe the threshold is 80 parallel executions (I understand that it might not be desirable to start so many SSIS packages at once, but I'd like to do it despite this).
Additional information:
The dtexec utility provides an error message where the package variables are shown and the fact that the package ran 0 seconds, but the "Message" is empty ("Message: ").
Turning the logging on in all the packages does not provide an error message either, just a lot of run-time information.
The try-catch block around the process.start() script in the main package's script task also does not reveal any errors.
I've increased the "max worker threads" number for the cmdexec subsystem in the msdb.dbo.syssubsystems table to a safely high number and restarted the SQL Server, but this had no effect either.
The request:
Can anyone give ideas what could be the cause of the problem?
If you have any ideas about how to further debug the problem, they are also very welcome.
Thanks in advance!
Eero Ringmäe
View 2 Replies
View Related
Mar 25, 2008
Hi!
I was assigned to solve performance problems for an application. I fired up SQL Server Profiler and started a trace, then downloaded SQL Server Trace Analyzer (a trial version, so it's very limited). What I found is that one stored procedure generates almost 400 000 reads every time it's used, and it's used every time the user wants to see his orders. I've tried to translate the T-SQL from Swedish to English; it looks something like this:
select top 100
o.orderid,
o.name,
o.latestdeldate,
os.name as OrderStatus,
os.orderstatusID,
p.placeID,
p.name as place,
p.address,
p.city,
a.name as worktype,
noOfActions=(select count(*) from actions a where a.order_orderid=o.orderid),
noOfServiceObjects = (select count(*) from Serviceobject s, Actions a where s.Place_PlaceID = o.Place_PlaceID and a.order_orderid = o.orderid and a.Serviceobject_serviceobjectid = s.serviceobjectid),
...
...
...
It has 8 select count(*) subqueries in the select statement, then 2 more select count(*) in the where clause.
I know it's very difficult for you to come up with a solution, but do you know a better way than using select count(*) everywhere? The counts are used to show different status flags on the website.
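One pattern that might reduce the reads (a hedged sketch; the orders table name and its columns are guesses from the aliases above): compute each count once in a grouped derived table and join it, instead of running a correlated COUNT(*) for every output row and every flag.

-- Sketch: one grouped pass over actions instead of a COUNT(*) subquery per row.
SELECT TOP 100 o.orderid, o.name,
       ISNULL(act.noOfActions, 0) AS noOfActions
FROM orders o
LEFT JOIN (SELECT order_orderid, COUNT(*) AS noOfActions
           FROM actions
           GROUP BY order_orderid) act
       ON act.order_orderid = o.orderid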
/Magnus
Jesus saves. But Gretzky slaps in the rebound.
View 19 Replies
View Related
Aug 1, 2001
If I'm doing dirty reads and someone updates a record while I'm trying to read it, is it possible to read both the old and new versions of the record, thereby retrieving two records?
View 2 Replies
View Related
Oct 30, 2006
How can you find the reads and writes per second of your hard drives in SQL? I am reading my SQL book and it says that your average disk should have 125 or fewer I/Os. It gave the formula, but as mentioned I don't know how to find the reads and writes.
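On the SQL Server side, one place to look (a sketch; on SQL 2000 you must pass explicit database and file IDs rather than NULL) is fn_virtualfilestats, which returns cumulative reads and writes per database file. Sample it twice and divide the deltas by the elapsed seconds to get reads/writes per second:

-- Cumulative I/O per database file since startup.
SELECT DbId, FileId, NumberReads, NumberWrites
FROM ::fn_virtualfilestats(NULL, NULL);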
View 4 Replies
View Related
May 1, 2008
server: QAT on clustering server ----> 23 seconds
----------------------------------------------------
SS 2000 developer edition SP4
win NT 5.2 (3790) SP4
MeM 7935 MB
processors 4
root directory C:program files...
use a fixed memory size 640 MB
reserve physical memory for sql server
minimum query memory 1024 kb
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 5234
server: MILLER ----> 3 seconds
----------------------------------------------------
SS 2000 developer edition no service pack
win NT 5.2 (3790) SP4
MeM 2047 MB
processors 4
root directory f:MSSQL$INAQAT
dynamically configure sql server memory
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 598
----------------------------------------------------
Making a long story short: I've got an application that hits only one database, called RECORDS. I'm getting different durations when running the application: 23 and 3 seconds.
Same database, same objects and same application.
SERVER QAT is our staging server, means lots of databases
SERVER MILLER is just a server i just assembled, means just one database (RECORDS).
Not sure if it's the clustering server that is causing the issue, or the reads. If it's the reads, what is causing them? Do you think it is how the memory is configured? Will the experts please stand up?
View 20 Replies
View Related
Jul 18, 2006
So I'm at a dead end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL 2000 database that contains >1,000,000 records in the act_item table. They are run using Query Analyzer and the Duration and Reads come from SQL Profiler
SQL 1:
exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code', N'@activity_code nvarchar(40),@Priority int', @activity_code = N'46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0
Reads: ~22000
Duration: 250-350 ms
SQL 2:
DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
update act_item set Priority = @Priority where activity_code = @activity_code
Reads: ~160
Duration: 0 ms
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which include the priority as one of the fields.
There are two triggers on the table, neither of which is executed for this SQL statement (there is an IF UPDATE(fieldname) check surrounding the code in the triggers).
There are no foreign key relationships.
I checked (using perfmon) to see if a compilation/recompilation was happening. No it's not.
Any suggestions as to avenues that could be examined would be appreciated.
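One avenue to examine (an assumption based on the shapes above, not a verified diagnosis): if activity_code is char(36), the nvarchar(40) parameter forces an implicit conversion that can prevent an index seek on a non-Unicode column in SQL 2000. A sketch with the parameter declared to match the column type:

-- Hypothesis test: same statement, but the parameter type matches the column.
exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code',
    N'@activity_code char(36), @Priority int',
    @activity_code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0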
TIA
View 3 Replies
View Related
Jun 20, 2007
Hello,
I'm using SqlDataReader to read my data, and whenever I loop through the reader it starts from the second row. Why is that?
Here is my code:
while (reader.Read())
{
    hinfo.Name = reader["_name"].ToString();
    hi.Add(hinfo);
}
I look at the database and I have two rows, but it's reading only the second row, skipping the first row.
View 2 Replies
View Related
May 31, 2006
I have a set of triggers that log the history of changes to a table - i.e. I record inserts, updates, deletes (pretty standard audit stuff I suppose). I want to also log reads on that data. If I were using sprocs for reading data, this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table I am logging other actions to? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database. Anyone? I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible... Thanks,
View 1 Replies
View Related
Jan 17, 2002
SQL 6.5 - 5.5 Gig
NT
Hello,
Throughout the day, our document management application generates high bursts of physical page reads when users query the database.
What SQL configuration parameter(s) should I check/modify to ensure that the database is performing at its optimum during these bursts?
Thank You in advance.
View 1 Replies
View Related
Jul 21, 2000
Is there a way to get a total count of all SELECT, UPDATE, DELETE and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at Performance Monitor and, although it has good information, it doesn't capture DML statements.
FYI - it's for capacity planning.
TIA,
Mike
View 1 Replies
View Related
Aug 24, 2007
I'm trying to insert all the rows from a table to a new table.
(insert A select * from AA)
The reads in Profiler show a really high value (10253548).
First I created a unique clustered index and the reads showed 3258445; then I created a nonclustered index, expecting even lower reads. Instead, the reads show 10253548 again.
I read that creating indexes helps reduce reads, but it's not happening here.
Any ideas what is going on?
=============================
http://www.sqlserverstudy.com
View 6 Replies
View Related
Mar 5, 2008
GUys,
Is there any way to track which tables have the most reads and writes in a database of 400 tables?
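On SQL Server 2005 or later, one sketch (the numbers are cumulative since the last service restart, and only tables that have actually been touched appear):

-- Rank tables in the current database by cumulative reads and writes.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;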
Thanks
View 9 Replies
View Related
Aug 28, 2006
Hi,
I have been seeing a basic scenario of a write transaction appearing to unexpectedly lock-out reading.
The database has isolation set to "READ COMMITTED".
The scenario is:
1.) Start a transaction (for doing a write)
2.) Do a read before the transaction (for doing the write) is committed (e.g. sqlCommand2.ExecuteReader()).
--> the code will appear to lock up (then time out).
I see the same behavior if I step through the "write" code with the debugger (to a point after the transaction is started, but before it is committed), and run a "SELECT * FROM" type query from Microsoft SqlServer Management Studio.
The following code sample demonstrates the issue.
Thoughts on how to resolve the issue (to let me do "read committed" reading of the database table)?
Thanks!
Andy
Module Transaction
Sub Main()
Dim exception1 As Exception
Try
' Create/Open Database Connection
Dim sqlConnection1 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection1.Open()
' Start transaction
Dim sqlTransaction1 As System.Data.SqlClient.SqlTransaction = sqlConnection1.BeginTransaction()
' Set Parent record
Dim sqlCommand1 As New System.Data.SqlClient.SqlCommand("INSERT INTO Parent (Name) VALUES ('ParentValue');", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteNonQuery()
' Get Id from parent record (note: this code assumes the table was empty when this program starts)
sqlCommand1 = New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
Dim parentId As Integer = CType(sqlCommand1.ExecuteScalar(), Integer)
'
' Do reading test to test concurrently reading table being written to
'
' Create/Open Database Connection for reading test
Dim sqlConnection2 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection2.Open()
Dim sqlCommand2 As New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection2)
' Open the reader once; calling ExecuteReader inside the loop would try to open a new reader on every pass.
Dim sqlReader2 As System.Data.SqlClient.SqlDataReader = sqlCommand2.ExecuteReader() ' <===== LOCKS UP HERE **************
Dim i As Integer
While sqlReader2.Read()
i = i + 1
End While
sqlReader2.Close()
'
' End reading test
'
' Set child record
sqlCommand1 = New System.Data.SqlClient.SqlCommand( _
"INSERT INTO Child (Name, ParentId) VALUES ('ChildValue', " & parentId.ToString & ");", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteScalar()
' Either 1.) commit transaction OR 2.) rollback transaction
Dim test As Boolean = False
If test = False Then
sqlTransaction1.Commit()
Else
sqlTransaction1.Rollback()
End If
sqlConnection1.Close()
sqlConnection2.Close()
Catch ex As Exception
exception1 = ex
End Try
End Sub
End Module
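One resolution that might fit (an assumption on my part, not a confirmed fix; it changes behavior database-wide and typically requires no other active connections while being enabled): on SQL Server 2005, read committed snapshot lets readers see the last committed version of a row instead of blocking on the writer's locks.

-- Database name taken from the connection string above; readers then see the
-- last committed version under READ COMMITTED instead of waiting on writers.
ALTER DATABASE [Transaction] SET READ_COMMITTED_SNAPSHOT ON;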
View 1 Replies
View Related
Sep 19, 2006
I have written the same stored proc in both T-SQL and SQL CLR; it basically takes an input XML and returns an XML document. In SQL Profiler, I am getting a reads value about five times higher for the CLR version. Does anyone have any idea why the CLR is doing more reads than T-SQL? Thanks in advance.
View 5 Replies
View Related
Dec 22, 2000
Hi Everybody,
One of my friends asked me, "How do we reduce a query's logical and scan reads in SQL Server?"
I really don't know how to answer him.
Can anybody explain this to me?
thanks,
Srini
View 2 Replies
View Related
May 5, 2015
Just migrated an application from Oracle to SQL Server and we are seeing a lot of deadlocking and blocking. I did notice that the app seems to be passing an isolation level of repeatable read. Attached is a .doc of one of the deadlocks. Is there a way to avoid these in the repeatable read isolation level? This example is a select on two tables, using NCIs that cover the where clause, and an insert doing just a clustered index insert. Is the fix simply to get rid of repeatable read if it's not needed (I guess I have to check with the vendor on that), or is there a way to get this not to deadlock while using repeatable read?
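If the vendor confirms repeatable read is not strictly required, one hedged alternative is snapshot isolation, which gives repeatable-read-like consistency from row versions without holding the shared locks that feed these deadlocks (the database name below is a placeholder; verify the app tolerates row versioning and the tempdb overhead):

ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- MyAppDb is hypothetical
-- Then, in the sessions that currently use REPEATABLE READ:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;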
View 2 Replies
View Related
May 2, 2007
Hi,
I'm trying to figure out why my SQL Server is flatlined on the CPU. I'm doing a trace and can't help but notice this event, with crazy high reads. I'm not sure what this is? It doesn't look good to me, although maybe it's nothing. Any info is much appreciated.
Thanks again!
mike123
Event Class / TextData / ApplicationName / LoginName / CPU / Reads / Writes / Duration
Audit Logout / / .Net SqlClient Data Provider / loginName / 3764129784 3146 156
View 3 Replies
View Related
Apr 17, 2008
Problem Statement........
Let's say user A accesses a record and is making an update to a column... next, user B accesses the same record, makes an update to the same column and saves the data. How can user A check whether an update has been made, to prevent overwriting B's data?
Is there a query statement that user A can write to check for this?
I understand locking can be used to prevent this, but is there an alternative to locking?
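One lock-free approach is optimistic concurrency with a rowversion column: read the version together with the data, then make the update conditional on that version. A sketch with hypothetical table and column names:

-- Read: capture the row's version token along with the data.
SELECT Id, SomeColumn, RowVer FROM dbo.MyTable WHERE Id = @Id

-- Write: succeeds only if the row is unchanged since the read.
UPDATE dbo.MyTable
SET SomeColumn = @NewValue
WHERE Id = @Id AND RowVer = @RowVerFromRead

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user; reload and retry.'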
View 5 Replies
View Related