Profile Logical Reads Versus STATISTICS IO
May 5, 2015
Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?
In the server-side trace I have running, I found a reporting procedure that shows 136,949,501 reads (yes, hundreds of millions) and takes 13,508 seconds to complete.
So I pulled the code from the trace and executed it via SSMS: it runs in under 1 second and generates only about 4,000 reads (I get the same result using various different parameters).
The execution plan shows nothing unusual.
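One thing I still need to rule out (a sketch; the procedure and parameter names below are placeholders for the real ones): I have read that SSMS defaults to SET ARITHABORT ON while most client libraries default to OFF, so the application and SSMS can be handed different cached plans. Re-running in SSMS with the application's setting should show whether I am even comparing the same plan:

SET ARITHABORT OFF
SET STATISTICS IO ON
EXEC dbo.usp_Report @ClientId = 42 -- placeholder procedure and parameter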
View 5 Replies
Apr 12, 2006
Hello
I'm doing some performance research. I have an index with the following column order: ClientId, Active, ProductId. Active is a bit field telling whether the product is active or not; there can be more inactive products than active ones, but there is always at least one active product.
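For reference, the index looks roughly like this (table and column names are anonymized, and the index name is mine):

CREATE NONCLUSTERED INDEX ix_Client_Active_Product
ON [table] (ClientId, Active, ProductId)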
When I'm executing
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)
I'm getting following result: Scan count 1, logical reads 490
When I lead SQL Server down the right path by including the two possible values of Active, executing the following SQL:
SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)
I'm getting following results: Scan count 14, logical reads 123
With this information, which version would you say is fastest and why?
Running this query 1,000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second. I have been told that scan count is very expensive... from this example it seems the cost of one scan is roughly 20 logical reads?
View 8 Replies
View Related
Mar 4, 2006
I am running a query in SQL 2000 SP4, Windows 2000 Server, that is not being shared with any other users or any other SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs, etc... OK, it's not pretty code, and my job is to make it better.

But for now, one thing I would like to understand with your help is why the same SP on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes, and I see one table get a hit of 15 million logical reads:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 147, read-ahead reads 0.

This 'TABLE1' has about 400,000 records.

The second time I ran it, right after, in Query Analyzer again:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 0, read-ahead reads 0.

I can see the physical reads are now 0, which is understandable since SQL is now fetching the data from memory.

But the third time I ran it:

Table 'TABLE1'. Scan count 28, logical reads 87784, physical reads 0, read-ahead reads 0.

The scan count went down from 2070 to 28. I don't actually know what the scan count is - it scanned the table 28 times? The logical reads went down from 15 million to 87,784, with 2 seconds execution time! Does anybody have any ideas why these numbers change?

The problem is I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, and ran the same exact query, and it took 3-5 seconds with 87,784 reads vs. 15 million. Why don't I see 15 million now?

Well, I kept working during the day, and I happened to run into another set showing 15 million again. A few runs would keep running at the pace of 15 million over 1 minute, and eventually the numbers went back down to 87,784 and 2 seconds.

Is it my way of using the computer? Maybe I was opening too many applications and SQL was fighting for memory - would that explain the 15 million reads? I went and changed my SQL Server to use a fixed memory of 100 MB, restarted it, and tested the same query again, but it continued to show 87,784 reads with 2 seconds execution time. I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.

Can someone help me with suggestions on what could be causing this, and how I could find a way to see 15 million reads again?

By the way, with the limited info you have here about the database I am using: is 87,784 reads a terrible number of reads, average, or normal, when the max records in the many tables involved in this SP is 400,000? I am guessing it is a terrible number, am I correct?

I would appreciate your help. Thank you
View 4 Replies
View Related
Dec 22, 2000
Hi Everybody,
One of my friends asked me, "How do we reduce a query's logical and scan reads in SQL Server?"
I really don't know how to answer him.
Can anybody explain this to me?
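For reference, the numbers he is asking about are the ones reported after SET STATISTICS IO ON, e.g.:

SET STATISTICS IO ON
SELECT COUNT(*) FROM sysobjects
-- Table 'sysobjects'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...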
thanks,
Srini
View 2 Replies
View Related
Dec 12, 2005
How do I determine which method I should use if I want to optimize the performance of a database? I took the Northwind database to run my example. My goal: retrieve the Employees' First and Last Names for those who sold between $100,000 and $200,000.

First let me create a function that takes the EmployeeID as the input parameter and returns the Employee's First and Last name:

CREATE FUNCTION dbo.GetEmployeeName(@EmployeeID INT)
RETURNS VARCHAR(100)
AS
BEGIN
DECLARE @NAME VARCHAR(100)
SELECT @NAME = FirstName + ' ' + LastName
FROM Employees
WHERE EmployeeID = @EmployeeID
RETURN ISNULL(@NAME, '')
END

My first method:

SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID) AS Employee, SUM(UnitPrice * Quantity) AS Amount
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID, dbo.GetEmployeeName(EmployeeID)
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000

It runs in 4 seconds. Here are the Statistics IO and Time results (parse and compile time: CPU 17 ms, elapsed 17 ms):

(3 row(s) affected)
Table 'Order Details'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3934 ms. (this line is reported three times)

Now my second method:

IF (SELECT OBJECT_ID('tempdb..#temp_Orders')) IS NOT NULL
DROP TABLE #temp_Orders
GO
SELECT EmployeeID, SUM(UnitPrice * Quantity) AS Amount
INTO #temp_Orders
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
GO
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID), Amount
FROM #temp_Orders
GO

It runs in 0 seconds. Here are the Statistics IO and Time results:

Table '#temp_Orders0000000000F1'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Order Details'. Scan count 830, logical reads 1672, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms. (again reported three times)

(3 row(s) affected)
Table '#temp_Orders0000000000F1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms. (again reported three times)

By the way, why does "SQL Server Execution Times" appear three times each run and not just once?

Summary: the first version is clean - a single SELECT statement - but takes 4 long seconds to execute, and its logical reads are very few compared to the second method. The second version is less clean and uses a temp table, but executes in 0 seconds, and its logical reads are way higher than the first method's.

What am I supposed to conclude from this example? Which method should I use over the other, and why? Are both methods fine, depending on which I prefer? If I can wait four seconds, is it better to reduce the logical reads in order to cause less blocking on the live tables in a heavily accessed database? Which method should I choose for my own database?

Calling a function like dbo.GetEmployeeName gets processed once per returned row, correct? That means, in a scenario where 1,000 records were to be returned, would it be better to dump the 1,000 records into a temp table and then call the function on each record one at a time? Or would the direct approach, without a temp table, cause slower processing and more blocking/deadlocks because I am calling the function per row while reading directly from the tables?

Thank you
View 1 Replies
View Related
May 22, 2008
Queries against a table in one of my databases are running very slowly. The IO is very high; below is the output from the SET STATISTICS IO ON command for a common query used on the table:
(4162 row(s) affected)
Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
I have a clustered unique index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in Database Tuning Advisor; DTA displays 0% improvement suggestions. I have a number of statistics on the table and indexes, which are all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query that runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, but I'm sure there is a way to bring down the IO and logical reads to improve performance.
The table and index scripts are below:
Code Snippet
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- =================== Nonclustered Index ===========================
CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
-- =================== Query which is called regularly on the table ===========================
SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;
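One idea I have not tried yet is making the date filter sargable so the optimizer can seek on logtime; the CONVERT wrapped around the column defeats any index on it. A sketch (the index name is mine, and I would still need to test it):

CREATE NONCLUSTERED INDEX ix_WebProxyLog_logtime ON [dbo].[WebProxyLog] ([logTime])

-- and in the query, replace the CONVERT predicate with an equivalent
-- half-open range (it still covers every logtime on 14 May 2008):
WHERE logtime >= '20080420' AND logtime < '20080515'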
PS There are 60,078,605 rows in the table
Please help!
Many Thanks
View 6 Replies
View Related
Jul 23, 2014
How can I insert the results of "set statistics profile on" into a table?
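One avenue I am exploring (a sketch; it assumes SQL Server 2014's sys.dm_exec_query_profiles DMV, which exposes per-operator counters while a session running under SET STATISTICS PROFILE ON is executing):

SELECT session_id, node_id, physical_operator_name, row_count, estimate_row_count
INTO dbo.ProfileSnapshot -- table name is mine
FROM sys.dm_exec_query_profiles
WHERE session_id = 55 -- hypothetical spid of the profiled session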
View 2 Replies
View Related
Jun 1, 2007
We had some slowdown complaints lately, and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting statistics profile on shows that it does an index seek every time it executes.
A Profiler session showed a huge number of reads for these queries - 1,500 through 50,000, depending on the value being looked up. I turned statistics IO on, and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.
Three of the columns in the select statement are text columns. When I take them out of the query, the LOB logical reads drop to 0, then climb incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
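One interim option I am looking at is the 'text in row' table option, so that small text values live on the data page instead of on separate LOB pages (a sketch; it assumes most of the text values fit within the limit):

EXEC sp_tableoption 'profiles', 'text in row', '2000'
-- text values up to 2000 bytes are then stored in-row
-- and no longer cost separate LOB page reads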
View 3 Replies
View Related
Oct 2, 2014
We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.
I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?
select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;
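To trend it day over day, I was planning to snapshot the totals into a table, roughly like this (the table is mine; note the DMV's counters reset on instance restart and when plans leave the cache):

INSERT INTO dbo.DailyReadStats (capture_date, total_physical_reads, total_logical_reads)
SELECT CAST(GETDATE() AS date), SUM(total_physical_reads), SUM(total_logical_reads)
FROM sys.dm_exec_query_stats;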
More generally, what is the best way to provide performance-based metrics?
View 4 Replies
View Related
Jan 22, 2015
We have an existing SQL 2012 cluster that emails us when a new database is added. I am unable to figure out why we get this error every time we add a new database, and it also prevents us from adding a new availability group to this cluster.
I am also unable to figure out which profile the error is referring to, as this was set up before me and I am not a DBA.
View 9 Replies
View Related
Jul 20, 2005
Hello group. I have an issue which has bothered me for a while now: I'm wondering why the column statistics, which SQL Server wants me to create if I turn off auto-created statistics, are so important to the optimizer?

Example: in Northwind (with auto create stats off), I do the following:

SELECT * FROM Customers WHERE Country = 'Sweden'

My query plan shows a clustered index scan, which is expected - no index exists on Country. BUT the query plan also shows that the optimizer is missing a statistic on Country, which tells me that the optimizer would benefit from knowing this. I cannot see why (and I've been trying to for a while now).

If I create the missing statistics, nothing happens in the query plan (and why should it?). I could understand it if the optimizer suggested an index on Country - that would make sense - but when creating the missing statistics, Query Analyzer creates the statistics with an empty index, which seems to me to be less than usable.

I've been thinking long and hard about this but haven't been able to reach a conclusion :) It has some relevance to my work, because allowing the optimizer to create missing statistics limits my options for designing indexes (e.g. covering) for some rather wide tables, so I'm thinking why not turn it off altogether. But I would like to know the consequences - hopefully somebody has already delved into this and knows a good explanation.

Rgds
Jesper
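PS - for completeness, this is the statement I used to create the statistic by hand when accepting the optimizer's suggestion (the statistic name is mine):

CREATE STATISTICS st_Customers_Country ON Customers (Country) WITH FULLSCAN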
View 5 Replies
View Related
Aug 1, 2006
What is the unit of the numbers you get in the Time Statistics section when running a query in Microsoft SQL Server Management Studio with Client Statistics turned on?
Currently I get mostly 0s, but if I deliberately slow a query down I can get it up to around 30... Is it milliseconds, or some made-up number based on clock cycles, or...?
I would also like to know if it's possible to change the precision.
- Nikolaj
View 3 Replies
View Related
Mar 25, 2008
Hi!
I was assigned to solve performance problems for an application. I fired up SQL Server Profiler and started a trace, and downloaded SQL Server Trace Analyzer (a trial version, so it's very limited). What I found is that one stored procedure generates almost 400,000 reads every time it's used - and it's used every time a user wants to see his orders. I've tried to translate the T-SQL from Swedish to English; it looks something like this:
select top 100
o.orderid,
o.name,
o.latestdeldate,
os.name as OrderStatus,
os.orderstatusID,
p.placeID,
p.name as place,
p.address,
p.city,
a.name as worktype,
noOfActions=(select count(*) from actions a where a.order_orderid=o.orderid),
noOfServiceObjects = (select count(*) from Serviceobject s, Actions a where s.Place_PlaceID = o.Place_PlaceID and a.order_orderid = o.orderid and a.Serviceobject_serviceobjectid = s.serviceobjectid),
...
...
...
It has 8 SELECT COUNT(*) subqueries in the select list, and then 2 more in the WHERE clause.
I know it's very difficult for you to come up with a solution, but do you know a better way than using SELECT COUNT(*) everywhere? The counts are used to show different status flags on the website.
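I am wondering whether deriving all the counts once via grouped derived tables would beat the correlated COUNT(*)s; a sketch of the idea (assuming the base table behind alias o is called orders):

SELECT TOP 100 o.orderid, o.name, ISNULL(ac.cnt, 0) AS noOfActions
FROM orders o
LEFT JOIN (SELECT order_orderid, COUNT(*) AS cnt
FROM actions GROUP BY order_orderid) ac ON ac.order_orderid = o.orderid
-- ...one derived table per former SELECT COUNT(*) column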
/Magnus
Jesus saves. But Gretzky slaps in the rebound.
View 19 Replies
View Related
Apr 3, 2000
Hi,
This is Raj. Could anyone please explain SQL Profiler, and how, as a DBA, one sets row-level locks? Sorry if it is a silly question, but these questions sometimes bug my mind.
Thnak u in advance.
--Raj
View 1 Replies
View Related
Aug 1, 2001
If I'm doing dirty reads and someone updates a record while I'm reading it, is it possible to read both the old and new versions of the record, thereby retrieving two records?
View 2 Replies
View Related
Oct 30, 2006
How can you find the reads and writes per second of your hard drives in SQL? I am reading my SQL book, and it says that the average disk should handle 125 or fewer I/Os per second. It gave the formula, but as mentioned, I don't know how to find the reads and writes.
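The closest thing I have found inside SQL Server so far is the cumulative per-file counts (a sketch; it assumes SQL Server 2005, and these are totals since startup, not per second):

SELECT DB_NAME(database_id) AS db, file_id, num_of_reads, num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL)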
View 4 Replies
View Related
May 1, 2008
server: QAT on clustering server ----> 23 seconds
----------------------------------------------------
SS 2000 developer edition SP4
win NT 5.2 (3790) SP4
MeM 7935 MB
processors 4
root directory C:program files...
use a fixed memory size 640 MB
reserve physical memory for sql server
minimum query memory 1024 kb
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 5234
server: MILLER ----> 3 seconds
----------------------------------------------------
SS 2000 developer edition no service pack
win NT 5.2 (3790) SP4
MeM 2047 MB
processors 4
root directory f:MSSQL$INAQAT
dynamically configure sql server memory
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 598
----------------------------------------------------
To make a long story short: I've got an application that hits only one database, called RECORDS, and I'm getting different durations when running it - 23 seconds versus 3 seconds.
Same database, same objects, and same application.
Server QAT is our staging server, meaning lots of databases.
Server MILLER is a server I just assembled, meaning just one database (RECORDS).
I'm not sure whether it's the clustering server or the reads that is causing the issue. If it's the reads, what is causing them? Do you think it's how the memory is configured? Will the experts please stand up?
View 20 Replies
View Related
Jul 18, 2006
So I'm at a dead end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL 2000 database that contains >1,000,000 records in the act_item table. They are run using Query Analyzer and the Duration and Reads come from SQL Profiler
SQL 1:
exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code', N'@activity_code nvarchar(40),@Priority int', @activity_code = N'46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0
Reads: ~22000
Duration: 250-350 ms
SQL 2:
DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
update act_item set Priority = @Priority where activity_code = @activity_code
Reads: ~160
Duration: 0 ms
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which include the priority as one of the fields.
There are two triggers on the table, neither of which is executed for this SQL statement (there is an IF UPDATE(fieldname) surrounding the code in the trigger)
There are no foreign relationships
I checked (using perfmon) to see if a compilation/recompilation was happening. No it's not.
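One more variation I plan to test (a sketch; my guess is that the nvarchar(40) parameter forces an implicit conversion against the char(36) activity_code column, turning the seek into a scan, so declaring the parameter with the column's own type should restore the seek):

exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code', N'@activity_code char(36), @Priority int', @activity_code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0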
Any suggestions as to avenues that could be examined would be appreciated.
TIA
View 3 Replies
View Related
Feb 17, 2007
I use the default database called "ASPNETDB.mdf" that is automatically created in Visual Studio Express 2005....
There is a table called "aspnet_Profile" that holds the profile properties, UserID, etc.
There is also another table called "aspnet_Users" that holds all usernames, UserID etc...
Now to my problem:
I have a SqlDataSource-control and want to select all users that have the property profile.Color = "Blue"....
How can I write the SQL-part for that?
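The closest I have come is a crude match against the serialized profile columns (a sketch; I am not sure it is reliable, since PropertyValuesString stores all property values concatenated together):

SELECT u.UserName
FROM dbo.aspnet_Users u
JOIN dbo.aspnet_Profile p ON p.UserId = u.UserId
WHERE p.PropertyNames LIKE '%Color%' AND p.PropertyValuesString LIKE '%Blue%'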
View 2 Replies
View Related
Apr 13, 2007
I am using a SqlDataSource and need to set a SelectParameter to the ProviderUserKey (the GUID of the user when Profiles are enabled).
Can anyone tell me whether it is possible and How?
I am currently using the session state to store it in and then using the session=... to get the value into the parameter.
Is there a direct way of passing this value into a SelectParameter when using a SqlDataSource?
Thanks in advance.
View 3 Replies
View Related
May 27, 2001
Hi,
Is it necessary to have MS Exchange Server as the mail server to run SQL Mail? Our mail server is MDaemon 2.8. Can anyone tell me what the mail profile for MDaemon would be?
TIA
Wilson
View 1 Replies
View Related
Sep 16, 2003
I am using the MS SQL 2000 Profiler to monitor a report process (VB + Crystal + MS SQL).
I notice there are some events named "object created".
What do they mean?
View 3 Replies
View Related
Feb 3, 2008
Hi
Is there any way that I can use an ASP.NET profile variable in a T-SQL parameter?
I need to base a query on the current ASP.NET profile variable for purposes of creating a report.
Thanks
Deon
View 3 Replies
View Related
Jun 20, 2007
Hello,
I'm using SqlDataReader to read my data, and every time I loop through the reader it starts from the second row. Why is that?
Here is my code:

while (reader.Read())
{
    hinfo.Name = reader["_name"].ToString();
    hi.Add(hinfo);
}

I look at the database and I have two rows, but it's reading only the second row, skipping the first row.
View 2 Replies
View Related
May 31, 2006
I have a set of triggers that log the history of changes to a table - i.e. I record inserts, updates, and deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data.

If I were using sprocs for reading data, this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table where I am logging the other actions? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database? Anyone?

I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible.

Thanks,
View 1 Replies
View Related
Jan 17, 2002
SQL 6.5 - 5.5 Gig
NT
Hello,
Throughout the day, our document management application generates high bursts of physical page reads when users query the database.
Which SQL configuration parameter(s) should I check or modify to ensure that the database performs at its optimum during these bursts?
Thank You in advance.
View 1 Replies
View Related
Jul 21, 2000
Is there a way to get a total count of all SELECT, UPDATE, DELETE and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at Performance Monitor and, although it has good information, it doesn't capture DML statements.
FYI - it's for capacity planning.
TIA,
Mike
View 1 Replies
View Related
Aug 24, 2007
I'm trying to insert all the rows from a table into a new table:
(insert A select * from AA)
The Reads column in Profiler shows a really high value (10,253,548).
First I created a unique clustered index, and the reads dropped (3,258,445); then I created a nonclustered index, expecting even lower reads. Instead the reads went back up (10,253,548).
I have read that creating indexes helps reduce reads, but that's not happening here.
Any ideas what is going on?
=============================
http://www.sqlserverstudy.com
View 6 Replies
View Related
Mar 5, 2008
GUys,
Is there any way to track which tables have the most reads and writes in a database of 400 tables?
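One avenue I have started looking at (a sketch; it assumes SQL Server 2005 or later, and the counters reset when the instance restarts):

SELECT OBJECT_NAME(s.object_id) AS table_name,
SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC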
Thanks
View 9 Replies
View Related
Jul 27, 2007
Hi,
Can any of can explain, what the "Reads" column in Profiler exactly mean ? I'm not comfortable with the explanation given in BOL.
"The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement's execution"
For the same procedure with the same parameters, if the server is not loaded much the Reads are in the few hundreds; but when there are more than 1,000 concurrent users, why does it climb into the millions? What other parameters affect these reads, and how can I reduce them?
Environment: SQL Server 2005 64-bit Enterprise Edition on Windows Server 2003 R2 Server x64 Enterprise Edition SP2
Thanks in Advance.
Regards
Babu
View 4 Replies
View Related
Aug 28, 2006
Hi,
I have been seeing a basic scenario of a write transaction appearing to unexpectedly lock-out reading.
The database has isolation set to "READ COMMITTED".
The scenario is:
1.) Start a transaction (for doing a write)
2.) Do a read before the transaction (for doing the write) is committed (e.g. sqlCommand2.ExecuteReader()).
--> the code will appear to lock up (and then time out).
I see the same behavior if I step through the "write" code with the debugger (to a point after the transaction is started, but before it is committed), and run a "SELECT * FROM" type query from Microsoft SqlServer Management Studio.
The following code sample demonstrates the issue.
Thoughts on how to resolve the issue (to let me do "read committed" reading of the database table)?
Thanks!
Andy
Module Transaction
Sub Main()
Dim exception1 As Exception
Try
' Create/Open Database Connection
Dim sqlConnection1 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection1.Open()
' Start transaction
Dim sqlTransaction1 As System.Data.SqlClient.SqlTransaction = sqlConnection1.BeginTransaction()
' Set Parent record
Dim sqlCommand1 As New System.Data.SqlClient.SqlCommand("INSERT INTO Parent (Name) VALUES ('ParentValue');", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteNonQuery()
' Get Id from parent record (note: this code assumes the table was empty when this program starts)
sqlCommand1 = New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
Dim parentId As Integer = CType(sqlCommand1.ExecuteScalar(), Integer)
'
' Do reading test to test concurrently reading table being written to
'
' Create/Open Database Connection for reading test
Dim sqlConnection2 As New System.Data.SqlClient.SqlConnection("Server=GRB-AB;Database=Transaction;Trusted_Connection=True;")
sqlConnection2.Open()
Dim sqlCommand2 As New System.Data.SqlClient.SqlCommand("SELECT Id FROM Parent;", sqlConnection2)
' Execute the reader once and iterate it (the original code called
' ExecuteReader again on every loop test, which opens a new reader each time)
Dim reader2 As System.Data.SqlClient.SqlDataReader = sqlCommand2.ExecuteReader() ' <===== LOCKS UP HERE **************
Dim i As Integer
While reader2.Read()
i = i + 1
End While
'
' End reading test
'
' Set child record
sqlCommand1 = New System.Data.SqlClient.SqlCommand( _
"INSERT INTO Child (Name, ParentId) VALUES ('ChildValue', " & parentId.ToString & ");", sqlConnection1)
sqlCommand1.Transaction = sqlTransaction1
sqlCommand1.ExecuteScalar()
' Either 1.) commit transaction OR 2.) rollback transaction
Dim test As Boolean = False
If test = False Then
sqlTransaction1.Commit()
Else
sqlTransaction1.Rollback()
End If
sqlConnection1.Close()
sqlConnection2.Close()
Catch ex As Exception
exception1 = ex
End Try
End Sub
End Module
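P.S. One thing I am considering trying (a sketch; it assumes SQL Server 2005 or later, and the database name comes from my connection string above): turning on row versioning so readers see the last committed row version instead of blocking on the writer's locks:

ALTER DATABASE [Transaction] SET READ_COMMITTED_SNAPSHOT ON
-- requires that no other connections are using the database while it runs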
View 1 Replies
View Related
Sep 19, 2006
I have written the same stored proc in T-SQL and SQL CLR; it basically takes an input XML and returns an XML document. In SQL Profiler, I am getting a Reads value about five times higher for the CLR version. Does anyone have any idea why the CLR is doing more reads than T-SQL? Thanks in advance.
View 5 Replies
View Related
May 11, 2007
Hello. When I create a user in the ASP.NET database, I need to insert more fields than the defaults, and I do it like this:

Dim customProfile As ProfileCommon = ProfileCommon.Create(CreateUserWizard1.UserName, True)
customProfile.telephone = CType(CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("TelephoneText"), TextBox).Text

(telephone example)

That data is inserted into the aspnet_Profile table, in the PropertyNames and PropertyValuesString fields, but all the properties are saved together (see image above).
I want to separate those properties in the GridView so that each property appears in its own column. Is that possible? Thank you very much; I look forward to your answers.
View 11 Replies
View Related