DB Engine :: Page Life Expectancy Increases Dramatically From One Minute To The Next
May 22, 2015
I measure PLE on my server and insert the values into a table every minute. Now, when I look at the table I just don't know how to interpret the following data. I don't understand how that is possible. Is it a SQL Server bug, or something else? How should I interpret this data?
View 10 Replies
May 27, 2015
I have a few servers that are VMs running different versions of SQL Server (2008 Express/Standard, 2012, or 2014 Enterprise), and all of them seem to have the same issue: when I query "sys.dm_os_performance_counters" with "[object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'" I get a very big number (e.g. above 100K, or on some servers 1M). Does this number seem fine or acceptable?
View 6 Replies
View Related
Sep 12, 2014
Our server administrator forwarded some messages from SCOM that indicate:
SQL DB Engine 2012 Page Life Expectancy and Buffer Cache Hit Ratio is too low
When I logged into the offending server, I could not find anything in the SQL Log File that indicates this.
I was wondering how SCOM identified this issue - where in SQL Server would this have been reported to SCOM?
View 3 Replies
View Related
Feb 28, 2006
On my MSSQL Server 2000 SP4, in Performance Monitor, under Buffer Memory, Page Life Expectancy is equal to 0.00000 (average over 10 sec, auto update). I think something is configured wrong, but what? Marek
View 1 Replies
View Related
Feb 18, 2014
I started receiving these alert messages, and after doing some research I still can't figure out how to fully resolve them. From what I gather, the value Microsoft stipulates for PLE (300) is not accurate if you are running a 64-bit OS, and it depends on the amount of RAM you allocate to SQL Server.
If I allocate 20 GB of RAM to SQL Server, the PLE should not drop below 1500 - (PLE should be 300 for every 4 GB of RAM): (20/4)*300.
During the course of the day it sometimes drops below 1500, so my question is: how can I dig further into why this happens and which query is causing it?
I set up a monitoring job, as mentioned by Steve Hood, to capture results for me every 20 minutes.
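A minimal sketch of that kind of capture job (the table name dbo.PLE_History is my own placeholder, not from the post):
DECLARE @ple BIGINT;
SELECT @ple = cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND [counter_name] = 'Page life expectancy';
-- dbo.PLE_History(capture_time DATETIME, ple_seconds BIGINT) is a hypothetical logging table
INSERT INTO dbo.PLE_History (capture_time, ple_seconds)
VALUES (GETDATE(), @ple);
Scheduled from a SQL Agent job every 20 minutes, this gives a history that can be lined up against query activity captured over the same windows.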
View 6 Replies
View Related
Mar 17, 2008
STUCK! I have a consistent PLE value of 0. The server has 8 GB RAM, I/O and CPU counters under 30%, various DMV results show the buffer and memory caches do not add up to the AWE (32-bit) allocation of 7.5 GB, waits don't show any problems, no locks, no blocks.
There are improvements to indexes that have to be made, but where can I identify the root cause of the low PLE? The same process runs 6x quicker on 2 other much lower-spec SQL servers that are exact copies from a data perspective; however, their PLE is very high and they are not production.
Any ideas?
View 1 Replies
View Related
Jan 16, 2008
Page Life Expectancy (PLE) is pretty bad on my server. PLE sometimes hovers around 3 minutes but is usually around 20-30 seconds.
Total memory allocated to SQL ( a fixed amount ) is set at 3GB.
Of the total memory allocated, SQL Server is using 2.52GB ... so there is room if needed.
The Buffer Cache is sitting at 2.09GB with a hit rate around 99.8%.
The Procedure Cache is sitting at 378MB with a hit rate around 90.5%.
CPU is hovering around 10-20%
Free System Page Table Entries is LOW ... at 22343
Disk Queue Lengths spike quite often to above 5 and sometimes as high as 36. They usually sit at 0.05 to 1.0 (and there are times when the DQLs are great and not measurable).
What I need to find out is how to get PLE above the recommended 5-minute mark.
Please let me know if there are any other items I need to note.
Thanks!
=========================================
Here are some hardware/Software/Implementation stats:
=========================================
SQL Server 2005 Standard w/ latest patch of 3152
Windows Server 2003 R2 Standard w/ latest patches applied (PAE shows as enabled in the System Properties, General tab)
4 Intel Xeon X5355 @ 2.66GHz
4GB RAM with 3GB dedicated to SQL Server via the /3GB switch
System Disc ( C: ) is 136GB (free space is 122GB) and is RAID10
Data/SQL Disc ( G: ) is 408GB (free space is 347GB) and is RAID10
The SQL files (MDFs/LDFs, TempDB, DB & TLog backups, the SQL application and everything else SQL-related) are all on the single array (G:) (** which I must note is NOT how I configured the SQL environment; I acquired this setup when I started the position).
View 6 Replies
View Related
Jun 9, 2015
Every day, at the same hour, my SQL Server reports an error that PLE is less than 300 seconds.
After that, the server works properly again.
View 5 Replies
View Related
Aug 17, 2015
I'm getting an alert which states that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers though I'm not sure how to correctly check this.
I ran:
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'
This gives me a Buffer Cache Hit Ratio cntr_value of 9, though it constantly dips between 3 and 3000 and is never steady, and I'm unsure if this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'
Which gives me the Page life expectancy of 209061.
Should these values cause concern, and is this a normal Buffer Cache Hit Ratio? From what I can see it constantly swings between high and low. These scripts were pulled from another forum and I'm assuming they're showing the correct values.
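One thing worth checking, as a hedged note: 'Buffer cache hit ratio' is a ratio-type counter, so its raw cntr_value is only meaningful when divided by the companion 'Buffer cache hit ratio base' counter, along these lines:
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters r
JOIN sys.dm_os_performance_counters b
  ON b.[object_name] = r.[object_name]
 AND b.[counter_name] = 'Buffer cache hit ratio base'
WHERE r.[object_name] LIKE '%Buffer Manager%'
  AND r.[counter_name] = 'Buffer cache hit ratio';
That would explain why the raw value of 9 appears to jump around so much.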
View 1 Replies
View Related
Jun 2, 2015
Is this possible: if the database is in Simple recovery mode, can the ldf still grow this much?
mdf size: 159 GB (171,383,717,888 bytes)
ldf size: 6.46 GB (6,945,505,280 bytes)
My question is: if the recovery model is Simple, why is so much log being generated?
dbcc sqlperf(logspace) -- output
DATABASE   Log Size (MB)   Log Space Used (%)   Status
mam        6623.742        0.4305579            0
Is there an issue here, or is this normal?
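A hedged suggestion, not from the original post: even in Simple recovery the log must hold all active transactions until they commit and a checkpoint runs, so one quick check is what the database is currently waiting on before it can reuse its log:
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mam';
If log_reuse_wait_desc shows something like ACTIVE_TRANSACTION or REPLICATION, that points at why the ldf had to grow.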
View 9 Replies
View Related
Oct 2, 2007
Hi
I am having a problem with a particular stored procedure in a database application and I have run out of ideas as to the cause. When calling this stored procedure from a .NET application it typically returns results in about 0.2 seconds. 24 hours after its creation, the procedure takes over 40 seconds to return the same results to the application. However, if I call the procedure via Management Studio or Query Analyzer, the performance remains consistently fast.
It's a fairly complicated query making use of the following features:
FOR XML EXPLICIT
The ROW_NUMBER function
Input Parameters
The procedure is replicated, along with the tables that it references
The calling application is using ExecuteXMLReader to retrieve the results.
To fix the problem, I can simply run an ALTER PROCEDURE statement (without changing any of the contents of the stored procedure). However, by the next morning, the problem will have reoccurred.
Can anyone shed any light on why this is happening?
Phil
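The symptoms described - fast in Management Studio, slow from the application, and temporarily fixed by ALTER PROCEDURE - are the classic signs of a stale cached plan (parameter sniffing). A hedged sketch of one common workaround; the procedure, parameter and table names below are placeholders, not the actual code from the post:
ALTER PROCEDURE dbo.GetOrdersForCustomer
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);   -- compile a fresh plan for each call instead of reusing a cached one
END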
View 9 Replies
View Related
Jan 22, 2015
What is the impact of fragmentation when my table's fragmentation is 99% but the page count is only 300?
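For context, commonly cited guidance is to ignore fragmentation on very small indexes (a few hundred pages), since they often sit on mixed extents and rebuilding them rarely changes performance. A hedged sketch that lists only the indexes where fragmentation is likely to matter (the 1000-page cutoff is that rule of thumb, not a hard limit):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;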
View 6 Replies
View Related
Jun 23, 2015
I have used Extended Events to monitor occurrences of TempDB contention on the production server. I found several entries logged within 30 minutes. Now I am trying to determine whether the TempDB contention is on a PFS, GAM or SGAM page, and then I will decide if I need to increase the number of TempDB data files on the production server. Currently, there are 8 TempDB data files configured on their own separate disks. These are the Page_IDs I expected to find in the Extended Events output for the TempDB files -
Page_ID = 1 for the PFS page
Page_ID = 2 for the GAM page
Page_ID = 3 for the SGAM page
but I found other Page_IDs, and I know there is a formula you can use to identify whether a page is PFS, GAM or SGAM. How should I use this formula, and what should I look for, to determine whether a page is PFS, GAM or SGAM? Is there any threshold value for the duration of the TempDB contention that occurred?
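For reference, the commonly cited intervals are: PFS pages repeat every 8,088 pages (pages 1, 8088, 16176, ...), while GAM and SGAM pages repeat every 511,232 pages (GAM at 2, 511232, ...; SGAM at 3, 511233, ...). A hedged sketch of the check, with the page number plugged in from the Extended Events data:
DECLARE @page_id INT;
SET @page_id = 16176;   -- hypothetical value taken from the event output
SELECT CASE
         WHEN @page_id = 1 OR @page_id % 8088 = 0          THEN 'PFS'
         WHEN @page_id = 2 OR @page_id % 511232 = 0        THEN 'GAM'
         WHEN @page_id = 3 OR (@page_id - 1) % 511232 = 0  THEN 'SGAM'
         ELSE 'Not an allocation bitmap page'
       END AS page_type;
If most of the contended pages come back as PFS/GAM/SGAM, adding data files (or trace flag 1118 on older versions) is the usual next step; if not, the contention is probably not allocation-related.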
View 10 Replies
View Related
Oct 4, 2004
Hi everybody,
I have a database on a production server with a 3.5 GB data file and a 10 GB log file. This is very strange, isn't it?
The features of this database are:
SQL Server 2000
Recovery Model = Full
Auto Update Statistics = Yes
Torn page detection = Yes
Auto create statistics = Yes
Full database backup taken once daily.
No log backup is taken.
So, I would like to apply some strategy to keep the log file from growing out of control. Can you give me your suggestions?
My free disk space is very low.
Thank you all,
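A hedged sketch of the usual fix for this combination (Full recovery, daily full backup, no log backups): schedule transaction log backups, which is what allows the log to be reused and stop growing. The database name and path below are placeholders, and in practice each backup should go to a uniquely named file:
BACKUP LOG [MyDatabase]
TO DISK = N'E:\Backups\MyDatabase_log.trn';
Run from a SQL Agent job (for example hourly), after which a one-time DBCC SHRINKFILE on the log file can reclaim the space already allocated. The alternative, if point-in-time recovery is not needed, is to switch the database to Simple recovery.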
View 5 Replies
View Related
Dec 28, 2007
On some tables when I run DBCC ShowContig followed by DBReindex followed by ShowContig I notice Extent Scan Fragmentation actually increases. Why does this happen? Below are the SHOWCONTIG results after running DBReindex three times.
After First DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 47.58%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
After Second DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 20.16%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
After Third DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 67.74%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
Thanks, Dave
View 6 Replies
View Related
Feb 4, 2008
Hi,
I'm in the process of setting up log shipping on SQL Server 2005 Enterprise Edition.
My scenario is like this:
The average size of my t-log backup is 500 MB and I'm planning to set up log shipping at a 30-minute interval (i.e. the backup, copy and restore job schedules). But at some points in the day the t-log suddenly grows to 1.5 GB - 2 GB. So I wanted to know: what if that 1.5 GB - 2 GB t-log file cannot be backed up, copied and restored within the 30-minute interval? How do I deal with this kind of issue, where the size of the t-log increases suddenly? I cannot run it at a 15-minute interval due to some network restrictions at my office.
I have one more doubt about a setting on the log shipping screen:
Now suppose my setting 'Alert if no restore occurs within' is set to 180 minutes. Does this mean that the restore job will keep looking for the copied file in the folder on the secondary for the next 90 minutes and, if it is not able to find any, raise an alert after 90 minutes? Or will it raise an error if it is not able to find any copied file after the first restore job execution?
in the same way,
Thnx in advance for any help.
Regards
Arvind L
View 1 Replies
View Related
Apr 28, 2008
Hello!
I have a problem. I want to know if the time needed to create an index increases proportionally to the number of rows. Example: if creating an index on a table with 10,000 rows takes 15 seconds, does creating an index on a table with 20,000 rows take 30 seconds, 40,000 rows 60 seconds, and so on?
Or does it grow faster than that, like 10,000 rows 15 seconds, 20,000 rows 40 seconds, 40,000 rows 80 seconds?
thx for your help!!
Filipe
View 4 Replies
View Related
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it let me start interpreting performance counters and test several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script ensures that the number of bytes inserted into the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since about 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition you can see at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why does the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. You can indeed see that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally, I want to note:
- I'm the only user on this machine
- the table has a clustered index on a identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries (a sketch of that capture follows below)
So I wonder why SQL Server starts to execute so many reads in test 1.
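A minimal sketch of the kind of per-iteration capture described above (the logging table dbo.PerfBaselineLog and the choice of counters are my own assumptions):
INSERT INTO dbo.PerfBaselineLog (capture_time, counter_name, cntr_value)
SELECT SYSDATETIME(), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec', 'Buffer cache hit ratio');
Comparing these snapshots against sys.dm_io_virtual_file_stats taken at the same points is one way to line up the extra reads in test 1 with the moment the buffer pool fills.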
View 4 Replies
View Related
Dec 3, 2007
I am using SQL 2005. I have one very big database. The size of the DB is 1500986.88 MB and the space available is 149359.48 MB. There are log files in this DB, and they are growing very large. The currently allocated space of one log file is 799874.50 MB with 38281.67 MB (4%) free, and the currently allocated space of another log file is 1500 MB with -760092.83 MB (-50672%) free.
I am not sure how the available free space for the 2nd log file can be negative. What does that mean? If I shrink the log file, will I get any extra space? If not, what is the solution?
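A hedged way to cross-check those numbers and attempt a shrink (the database and logical log file names below are placeholders):
DBCC SQLPERF(LOGSPACE);                   -- report log size and percent used for every database
USE MyBigDB;
DBCC SHRINKFILE (MyBigDB_Log2, 1024);     -- try to shrink the second log file to roughly 1 GB
Note that a shrink only releases space not occupied by active log records; if the database is in Full recovery and log backups are not running, the log cannot be reused and the shrink will not free much.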
View 4 Replies
View Related
Dec 5, 2007
Hi All,
The current/ Base table would be like below,
Products   Level   Date
N1         b       11/5/2007
N2         p       11/6/2007
N3         p       11/7/2007
N4         p       11/14/2007
N5         b       11/15/2007
N6         p       11/23/2007
Expected Result:
Level   <=11/7/2007   <=11/14/2007   <=11/21/2007
b       1             1              2
p       2             3              4
Total   3             4              6
As you can see, the above table has cumulative data.
1. It calculates the number of Products submitted till a particular date- weekly
2. The date columns should increase dynamically(if the dates in base table increases) each time the query is executed
For ex: the next date would be 11/28/2007
I tried something like the following; it gives me the count of 'b' level and 'p' level products by week:
declare @date1 as datetime
select @date1 = '6/30/2007'
-- step a week at a time up to the last date in the table
-- (using < instead of != so the loop always terminates even if the dates do not line up exactly)
while (@date1 < (select max(SDate) from dbo.TrendTable))
begin
    set @date1 = @date1 + 7
    select Level, count(Products)
    from dbo.TrendTable
    where SDate < @date1
    group by Level
end
What I think is required is a pivot that dynamically adds columns as the date range in the base table grows.
Please suggest if there is any other way of achieving it.
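One hedged sketch of such a dynamic pivot, assuming the base table is dbo.TrendTable(Products, Level, SDate) as in the loop above (the week boundaries may need adjusting to match the exact cut-off dates wanted):
declare @start datetime, @cols nvarchar(max), @sql nvarchar(max)
set @start = '20070630'
-- build one weekly cut-off date per week between @start and the latest SDate
;with cutoffs as (
    select dateadd(day, 7, @start) as cutoff
    union all
    select dateadd(day, 7, cutoff) from cutoffs
    where cutoff < (select max(SDate) from dbo.TrendTable)
)
select @cols = stuff((
    select ', count(case when SDate <= ''' + convert(varchar(8), cutoff, 112)
         + ''' then Products end) as [<=' + convert(varchar(10), cutoff, 101) + ']'
    from cutoffs
    order by cutoff
    for xml path(''), type).value('.', 'nvarchar(max)'), 1, 2, '')
set @sql = 'select Level, ' + @cols + ' from dbo.TrendTable group by Level'
exec sp_executesql @sql
Each time the query runs it regenerates the column list, so new weeks show up automatically.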
Pls help!!!
Thanks & Regards
View 3 Replies
View Related
Sep 14, 2007
My VB exe used to connect to SQL 2000 using the Windows SQL Server driver, but the same exe is giving a 'SQL connection timed out' error against SQL 2005 SP1.
The exe references 5 tables in the database and inserts as well as updates data. Also, the ping time between the client PC and the server PC is around 30 to 40 ms.
Is there any way to increase the timeout at the SQL 2005 level?
One more thing I noticed about 2005: it is really very heavy software; even my IBM Xeon_346 server (dual CPU, 3 GB RAM) is not able to handle it.
Thanks in Advance
Dino
Dina Satam
View 14 Replies
View Related
Jul 3, 2007
Hi. I have a table called "Maxes" with three fields: Exercise_ID, weight, and date. This is for journaling my weightlifting progress. What I want my query to do is this:
Return just one record for each Exercise_ID, and only the one with the most recent date.
I tried this:
Code:
SELECT DISTINCT Maxes.Exercise_ID, Maxes.Date_Maxed, Maxes.[Max Weight]
FROM Maxes ORDER BY Maxes.Date_Maxed DESC;
but it doesn't quite work. Can someone show me how to do this?
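A hedged sketch of one common way to get only the most recent row per Exercise_ID, using the column names from the attempt above (ties on the same date would still return more than one row):
SELECT m.Exercise_ID, m.Date_Maxed, m.[Max Weight]
FROM Maxes m
WHERE m.Date_Maxed = (SELECT MAX(m2.Date_Maxed)
                      FROM Maxes m2
                      WHERE m2.Exercise_ID = m.Exercise_ID);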
View 4 Replies
View Related
Nov 11, 2015
I have created a report, but all of the records are displayed on one page, and I need to find a way to display the records page by page. When I created the same report without a group, the records were displayed page by page.
View 3 Replies
View Related
Jan 2, 2007
I am trying to create a query that will show how much revenue we have received from a customer after the first invoice, and I'm having a difficult time writing it. I have a customer table and a sales table joined on CustNo.
SELECT Customer.LastName, Sales.InvDate, Sales.AmtCharge
FROM Customer INNER JOIN Sales ON Customer.CustNo = Sales.CustNo
The output I'd like is
CustNo, LastName, FirstInvoiceAmount, LifeCycleAmount
Getting the first invoice date seems straightforward:
SELECT Customer.CustNo, MIN(Sales.InvDate) AS FirstInv FROM Customer INNER JOIN Sales ON Customer.CustNo = Sales.CustNo GROUP BY Customer.CustNo
However, getting the amount of that first invoice, and then the sum of all invoices excluding the first one, has me scratching my head.
Can anyone point me in the right direction?
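A hedged sketch building on the two queries above (it assumes at most one invoice per customer on the first invoice date):
SELECT c.CustNo, c.LastName,
       MIN(CASE WHEN s.InvDate = f.FirstInv THEN s.AmtCharge END) AS FirstInvoiceAmount,
       SUM(CASE WHEN s.InvDate > f.FirstInv THEN s.AmtCharge ELSE 0 END) AS LifeCycleAmount
FROM Customer c
INNER JOIN Sales s ON s.CustNo = c.CustNo
INNER JOIN (SELECT CustNo, MIN(InvDate) AS FirstInv
            FROM Sales
            GROUP BY CustNo) f ON f.CustNo = c.CustNo
GROUP BY c.CustNo, c.LastName;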
View 2 Replies
View Related
Aug 2, 2007
Hello,
This is my first foray into SQL Server and I am coming from an Oracle background.
We are currently looking to upgrade a COTS package from a third-party supplier. I would like to know: is there any end-of-life announcement for SQL Server 2000? Is there an end-of-support date announced for SQL Server 2000?
In Oracle there is a product lifecycle announcement page; is there a similar page for SQL Server, and where?
thanks
View 3 Replies
View Related
Dec 13, 2007
I am not very good at queries. Could you please suggest some websites, tutorials or articles where I can find study material for complex, real-life queries? I know the syntax; I just need to practice queries to improve my skills.
View 1 Replies
View Related
Apr 17, 2007
I have a C# application running on my machine. I want to point it to a SQL Server Express Edition database running on another machine. By default, from Server Explorer I can only access database files on the local machine; I don't have access to remote servers.
I used the file explorer and opened the mdf file on the remote server. I tried to access the mdf file remotely but I got an exception.
My questions are:
- Does SQL Server 2005 Express Edition support remote connections to an mdf database file?
- In other words, can I remotely connect to an mdf file from another machine?
So, how can I point my application to a remote SQL Server 2005 Express Edition database running on another machine?
Please, Help
View 2 Replies
View Related
Mar 7, 2007
Is it possible to expire a report cache after less than one minute? I'm looking for a way to have a report hit the database only once every 10 seconds, no matter how many people are hitting the report. Thanks.
View 1 Replies
View Related
Feb 1, 2008
Hi,
Here is part of a result set.
It contains one value per minute.
How can I get the average value over each 5-minute interval?
id datetime value
------------------- ----------------------------- --------
0xC00302FD 2008-01-31 18:36:00 0.104
0xC00302FD 2008-01-31 18:37:00 0.104
0xC00302FD 2008-01-31 18:38:00 0.104
0xC00302FD 2008-01-31 18:39:00 0.104
0xC00302FD 2008-01-31 18:40:00 0.104
0xC00302FD 2008-01-31 18:41:00 0.104
0xC00302FD 2008-01-31 18:42:00 0.104
...
...
...
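A hedged sketch of one common way to do this, using the column names shown above (the table name dbo.Readings is a placeholder): floor each timestamp to a 5-minute boundary and group on that.
SELECT id,
       DATEADD(minute, (DATEDIFF(minute, 0, [datetime]) / 5) * 5, 0) AS bucket_start,
       AVG([value]) AS avg_value
FROM dbo.Readings
GROUP BY id,
         DATEADD(minute, (DATEDIFF(minute, 0, [datetime]) / 5) * 5, 0)
ORDER BY id, bucket_start;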
View 1 Replies
View Related
Jan 17, 2002
How do I calculate Transactions Per Minute (TPM)? Do I need to use Performance Monitor or Profiler? Let me know how to calculate it.
I would like to achieve 24,000 inserts in one minute for a data migration. Would that count as 24,000 Transactions Per Minute?
Thanks in Advance.
SS
View 1 Replies
View Related
Aug 22, 2007
I need a query that gives me, for each row, the sum of the time column over all rows with a lower rownr.
the result:
rownr   time   timesum
1       10     0
2       10     10
3       10     20
4       10     30
5       10     40
6       10     50
7       10     60
8       10     70
current table looks like this:
rownr   time
1       10
2       10
3       10
4       10
5       10
6       10
7       10
8       10
and I want the 'timesum' column to be in the format hhhh:mm.
The current column types are rownr = int, time = datetime.
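A hedged sketch (it assumes each row's time can be expressed as a whole number of minutes in a column, here called minutes_worked, and the table name dbo.TimeRows is a placeholder); on SQL Server 2005 a correlated subquery is the usual way to build the running total, which is then formatted as hhhh:mm:
SELECT x.rownr, x.minutes_worked,
       CAST(x.total_minutes / 60 AS varchar(10)) + ':'
         + RIGHT('0' + CAST(x.total_minutes % 60 AS varchar(2)), 2) AS timesum
FROM (SELECT t.rownr, t.minutes_worked,
             ISNULL((SELECT SUM(t2.minutes_worked)
                     FROM dbo.TimeRows t2
                     WHERE t2.rownr < t.rownr), 0) AS total_minutes
      FROM dbo.TimeRows t) x
ORDER BY x.rownr;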
thx for all help
//Mr
View 14 Replies
View Related
Jan 28, 2008
Hello
Probably a very simple problem, but I'm stumped. I have a table which gives the start time and end time of an employee's work day. I want to create a view which contains one row of data for each 5-minute period worked. Please help.
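A hedged sketch using a recursive CTE (the table dbo.WorkDay(EmployeeID, StartTime, EndTime) is a placeholder for whatever the real table looks like):
WITH periods AS (
    SELECT EmployeeID, StartTime AS PeriodStart, EndTime
    FROM dbo.WorkDay
    UNION ALL
    SELECT EmployeeID, DATEADD(minute, 5, PeriodStart), EndTime
    FROM periods
    WHERE DATEADD(minute, 5, PeriodStart) < EndTime
)
SELECT EmployeeID, PeriodStart
FROM periods
ORDER BY EmployeeID, PeriodStart
OPTION (MAXRECURSION 0);   -- a long shift needs more than the default 100 recursion levels
To expose this as a view, the CTE and SELECT go into the view body and the MAXRECURSION hint moves to the query that reads from the view.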
View 3 Replies
View Related
Jan 18, 2005
I would like something I can do inline eg:
select convert(blahdatatype,a.datefield) as smallerdatefield
from
a
where a.datefield is a datetime. If a contains rows like:
datefield
---------------------
01/20/2005 22:17:23
08/23/2001 03:04:15
...
Then the SQL above returns:
smallerdatefield
---------------------
01/20/2005 00:00:00
08/23/2001 00:00:00
...
Is there any non-obnoxious way (e.g. without having to resort to using DATEPART a million times) to do this? For instance, Oracle provides a function called TRUNC which does it, but I cannot find a SQL Server equivalent. Anyone? TIA!
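The usual SQL Server idioms for this, as a hedged sketch against the example table above: on SQL Server 2005 and earlier, round-trip through the day count from a fixed date; on 2008 and later, convert to the date type.
-- SQL Server 2000/2005: strips the time portion, leaving midnight
select dateadd(day, datediff(day, 0, a.datefield), 0) as smallerdatefield
from a
-- SQL Server 2008+: the dedicated date type makes it shorter
-- select convert(date, a.datefield) as smallerdatefield from a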
View 9 Replies
View Related