DBCC SHOWCONTIG shows me nothing special; it looks OK.
I launch Performance Monitor and add SQL Server Cache Manager: Cache Hit Ratio, and it is constantly at 7%. It never changes up or down; it is just constant.
Can this be correct? If so, it sounds rather bad. We have a handful of large tables that are heavily used and enough RAM to hold them all, so I really do not understand why the cache hit ratio is not higher.
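One way to sanity-check what Perfmon shows is to read the same counter from inside SQL Server. A minimal sketch, assuming SQL 2005 or later (where sys.dm_os_performance_counters exists); ratio counters are exposed as a value/"base" pair, and the ratio is the value divided by its base:

SELECT CAST(v.cntr_value AS float) / NULLIF(b.cntr_value, 0) * 100.0 AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS v
JOIN sys.dm_os_performance_counters AS b
  ON v.[object_name] = b.[object_name]
WHERE v.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND v.[object_name] LIKE '%Buffer Manager%';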
On Microsoft performance monitor, what is the difference between SQL Server Cache Manager: Cache Hit Ratio and SQL Server Buffer Manager: Buffer Cache Hit Ratio? We have a production server where the buffer cache hit ratio is consistently at 99%, which is normal. However, the cache hit ratio is 73%. What is the difference between the two hit ratios, and why would we have such a significant difference between the two?
I have been seeing a strange statistic on one of our servers. The cache hit ratio has gone beyond 100%; it is currently showing 124%. Has anyone seen this before?
I have a large Dell server with 4 processors and 8 GB of memory, on Windows 2000 Advanced Server with SQL 2000 Enterprise Edition running a 3rd-party app. My cache hit ratio averages about 76%. I thought the general rule was to add more memory if you get below 80%. However, my memory manager shows I am only using 71% of my memory and have a full gigabyte available. I have SQL Server set to use about 7.1 GB of the 8 GB on the server. My question is: if I am only using 71% of my memory, will adding more memory actually help my cache hit ratio?
Maybe I am just a lot better at this than I thought, but I figure that somewhere there is a mathematical rule being overlooked. When I run DBCC SQLPERF(LRUSTATS) on some of my production machines, I sometimes end up with a cache hit ratio (which is defined as a percentage, mind you) that is slightly over the limit:
Statistic                        Value
-------------------------------- ------------------------
Cache Hit Ratio                  100.00898
Cache Flushes                    0.0
Free Page Scan (Avg)             0.0
Free Page Scan (Max)             0.0
Min Free Buffers                 331.0
Cache Size                       4362.0
Free Buffers                     9434.0
I suspect some counter somewhere is wrapping around its 4-byte limit. Is there any reliable source for getting statistics about SQL Server performance? Users tend to be unreliable and say everything is slow.
I'm putting together some monitoring scripts. I have the buffer cache hit ratio etc., but I'm struggling to get an accurate script for the current procedure cache hit ratio...
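For what it's worth, here is a sketch of the same counter-pair idea for the plan/procedure cache, assuming SQL 2005 or later (the Perfmon object is Plan Cache there; older releases exposed it as Cache Manager) and that the _Total instance is the one wanted:

SELECT CAST(v.cntr_value AS float) / NULLIF(b.cntr_value, 0) * 100.0 AS plan_cache_hit_ratio
FROM sys.dm_os_performance_counters AS v
JOIN sys.dm_os_performance_counters AS b
  ON v.[object_name] = b.[object_name]
 AND v.instance_name = b.instance_name
WHERE v.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'
  AND v.[object_name] LIKE '%Plan Cache%'
  AND v.instance_name = '_Total';

Note these counters are cumulative, so for a truly "current" ratio you would sample twice and divide the delta of the value by the delta of the base.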
This issue just happened recently. The buffer cache ratio went from > 90% to 50% and has slowly been climbing back up over 8 hours or so. It's currently at 76%. Is this something I should take action on immediately? It seems to be coming back to normal...
Hi,
I am having trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two weeks it has started working abnormally. After the last optimization (a few months ago) it worked well (fast, without blocks). Its buffer cache hit ratio was about 99.7-99.8. Since yesterday it has been slow, with many blocks and deadlocks. No queries, jobs, or applications have been added. Now the buffer cache hit ratio oscillates around 95-98. I have tried updating statistics and reindexing some heavily used tables, but there is no effect, or the effect is very short-lived (after a few hours the problem returns). Maybe someone knows what it could be? Is it possible to estimate (using DBCC SHOW_STATISTICS or DBCC SHOWCONTIG or others) how each table affects the total buffer cache hit ratio?
Marek
www.programowanieobiektowe.pl
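SQL 2000 has no per-table view of the buffer pool, so I do not think DBCC SHOWCONTIG can answer that directly. For reference, on SQL 2005 and later you can count cached pages per table from the buffer descriptors; a sketch (it runs in the current database and only follows in-row/row-overflow allocation units):

SELECT OBJECT_NAME(p.[object_id]) AS table_name, COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
WHERE bd.database_id = DB_ID()
GROUP BY p.[object_id]
ORDER BY cached_pages DESC;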
I'm getting this error when trying to set up a cache dependency... are there any special permissions etc.?

From CS:

SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:

<add name="SiteDB" connectionString="Data Source=localhost,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:

<add name="SiteDB" connectionString="Data Source=<machinename>,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:

<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says:

aspnet_regsql.exe -E -S .SqlExpress -d aspnetdb -t Customers -et

and the command-line response is:

Enabling the table for SQL cache dependency.
An error has happened. Details of the exception:
The table 'Customers' cannot be found in the database.

Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
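As far as I can tell, 'Customers' is just the table name used by that sample; -t should point at a table that actually exists in the target database. A hedged sketch of the usual two steps (the server, database, and table names here are placeholders): enable the database for cache dependency with -ed, then enable each table with -et:

aspnet_regsql.exe -E -S .\SqlExpress -d MyAppDb -ed
aspnet_regsql.exe -E -S .\SqlExpress -d MyAppDb -t Products -et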
Is there a way to drop clean buffers at the database level, instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
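A sketch of both commands for reference; FLUSHPROCINDB is undocumented and, as far as I know, still requires elevated rights, so this does not remove the sysadmin requirement (the database and procedure names are hypothetical):

-- Undocumented: flush cached plans for a single database
DECLARE @dbid int;
SET @dbid = DB_ID('MyDatabase');
DBCC FLUSHPROCINDB(@dbid);

-- Documented per-object alternative: invalidate one object's cached plans
EXEC sp_recompile N'dbo.MyProc';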
I am looking at the plan cache / cached pages from the perspective of sys.dm_os_memory_cache_counters and the Perfmon counter SQL Server:Plan Cache - Cache Pages.
For the first one I am using
select (sum(single_pages_kb) + sum(multi_pages_kb))
from sys.dm_os_memory_cache_counters
where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'

a slight change from a query in http://blogs.msdn.com/sqlprogrammability/
For the second just perfmon.
The first one gives me about 670,000 pages for just the object and query caches, and the second gives me a total of about 100,000 pages across five types of caches, including object and query.
If I am using the query from http://blogs.msdn.com/sqlprogrammability/ to determine the plan cache size
select (sum(single_pages_kb) + sum(multi_pages_kb)) * 8 / (1024.0 * 1024.0) as plan_cache_in_GB
from sys.dm_os_memory_cache_counters
where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
it gives me about 5 GB, when in fact my SQL Server can access only 2 GB max, with Total and Target Server Memory at about 1.5 GB.
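If I read sys.dm_os_memory_cache_counters correctly, single_pages_kb and multi_pages_kb are already reported in kilobytes, so the "* 8" in that query treats KB as 8 KB pages and inflates the result eightfold: 5 GB / 8 is about 0.6 GB, which fits under the 2 GB cap, and 670,000 KB is about 84,000 pages, in line with the ~100,000 pages Perfmon reports across all five caches. A corrected sketch:

select sum(single_pages_kb + multi_pages_kb) / (1024.0 * 1024.0) as plan_cache_in_GB
from sys.dm_os_memory_cache_counters
where type in ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP');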
I have a calculated member that is a ratio calculation. It works fine, but I would like the value for the "All" member to be the sum of the children.
So I have the following MDX where the calculated member [Dim Group Rubric].[Group Rubric Description].[BO + ADM Cost] does not give me good results for the "All" member:
WITH
MEMBER [TOTAL YEAR CONTRACT] AS sum({[Total Year]},[Dim Contract].[Contract Number].[563131])
MEMBER [TOTAL YEAR BU] AS sum ([Total Year])
MEMBER [Dim Group Rubric].[Group Rubric Description].[TOTAL BU PVN] as
([Dim Group Rubric].[Group Rubric Description].[PVN], [TOTAL YEAR BU])
MEMBER [Dim Group Rubric].[Group Rubric Description].[TOTAL BU BO + ADM] as
([Dim Group Rubric].[Group Rubric Description].[BO + ADM], [TOTAL YEAR BU])
MEMBER [Dim Group Rubric].[Group Rubric Description].[BO + ADM Cost] as
([Dim Group Rubric].[Group Rubric Description].[PVN], [TOTAL YEAR CONTRACT])
/ ([Dim Group Rubric].[Group Rubric Description].[TOTAL BU PVN], [TOTAL YEAR BU])
* ([Dim Group Rubric].[Group Rubric Description].[TOTAL BU BO + ADM], [TOTAL YEAR BU])
SELECT
NON EMPTY {[TOTAL YEAR CONTRACT],[TOTAL YEAR BU]}
ON 0,
{[Dim Group Rubric].[Group Rubric Description].[BO + ADM Cost]}
* FILTER(
[Dim BV Organisation Finance].[Business Unit Code].members
,[Dim BV Organisation Finance].[Business Unit Code].currentmember.name
<> '071MIND1'
)
ON 1
FROM (SELECT ([Dim Year].[Year].&[2008]) ON 0
FROM (SELECT ({[Dim BV Organisation Finance].[Business Unit Code].[071MIND1]
,[Dim BV Organisation Finance].[Business Unit Code].[071DIDD1]}) ON 0
FROM [BV Contract Margin DW]))
When I launch this query I obtain:

                          TOTAL YEAR BU
BO + ADM Cost  All        - 45013   (I would like to obtain - 90651 here)
BO + ADM Cost  071MIND1   - 35680
BO + ADM Cost  071DIDD1   - 54971
Do you know what should be changed in this query to obtain the correct results?
How do I author a SQL query which meets the following requirement: display a ratio of (instances where status is a success) / (instances where status is a success or a failure). The syntax below is incorrect; however, I hope it better explains what I am trying to accomplish.
SELECT SUM(Case When Status = "SUCCESS")/SUM( Case When Status = "FALLBACK") FROM ruleResults;
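A sketch of one way to express that intent, assuming Status takes the literal values 'SUCCESS' and 'FALLBACK' as in the snippet (single quotes for string literals, * 1.0 to force decimal division, NULLIF to avoid dividing by zero):

SELECT SUM(CASE WHEN Status = 'SUCCESS' THEN 1 ELSE 0 END) * 1.0
     / NULLIF(SUM(CASE WHEN Status IN ('SUCCESS', 'FALLBACK') THEN 1 ELSE 0 END), 0)
       AS success_ratio
FROM ruleResults;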
I am doing some work in MicroStrategy reports using a SQL data cube. I am creating a summary report (counting on unique ID) in MicroStrategy where, when I put in a particular attribute/column from the cube, it splits the data count into 2 separate values that I have defined in the CASE statement of the SQL query's SELECT.
I want to work out a ratio of the unique ID count between these two values, but I can't do this in the MicroStrategy report, as it is not a physical column but a summary split of the data. Is there a way I can do this in SQL? My summary looks something like this:
          N                        E
======================  =========================
 **Y**         N          **Y**         N
=========  ==========    =========  =========
   570        140            89         56
The Y/N field is the one I have split down against the N/E column. I want to work out the ratio between the two "Y" counts, but in SQL. Here is an example snapshot of my data:
Unique_ID  New/Existing Application  Attend_Interview
=========  ========================  ================
12554445   E                         Y
65766879   N                         N
53375654   N                         Y
44323224   E                         N
93656786   E                         Y
I am wondering if I might be able to do this with a procedure or something more dynamic in a SQL query?
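A sketch of the ratio computed straight off the base table, assuming a table shaped like the snapshot above (the table name MyData is hypothetical):

SELECT COUNT(DISTINCT CASE WHEN [New/Existing Application] = 'N'
                            AND Attend_Interview = 'Y' THEN Unique_ID END) * 1.0
     / NULLIF(COUNT(DISTINCT CASE WHEN [New/Existing Application] = 'E'
                                   AND Attend_Interview = 'Y' THEN Unique_ID END), 0)
       AS y_ratio
FROM MyData;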
Hi, in my SQL Server 7.0, when using Performance Monitor, I see in the graph that the Buffer Cache Hit Ratio counter (SQL Server Buffer Manager object) is always at the maximum (100). What does this mean? What is the buffer cache hit ratio? How do I configure SQL Server for performance? Thanks in advance.
Sample Query:

SELECT 'A' [Type], 1 AS [NUMERATOR], 2 AS [DENOMINATOR]
UNION ALL
SELECT 'B' [Type], 2 AS [NUMERATOR], 4 AS [DENOMINATOR]
UNION ALL
SELECT 'C' [Type], 3 AS [NUMERATOR], 6 AS [DENOMINATOR]
[Code] ...
I need to show the ratios beside each bar (shown in red) as shown in the above diagram.
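If the ratio can simply come back with the dataset, one option is to compute it around the sample query above (NULLIF guards against a zero denominator; the decimal cast keeps the fraction):

SELECT [Type], [NUMERATOR], [DENOMINATOR],
       CAST([NUMERATOR] AS decimal(10, 4)) / NULLIF([DENOMINATOR], 0) AS [RATIO]
FROM (
    SELECT 'A' [Type], 1 AS [NUMERATOR], 2 AS [DENOMINATOR]
    UNION ALL
    SELECT 'B' [Type], 2 AS [NUMERATOR], 4 AS [DENOMINATOR]
    UNION ALL
    SELECT 'C' [Type], 3 AS [NUMERATOR], 6 AS [DENOMINATOR]
) AS t;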
I'm trying to use the Ratio to Parent sproc - [URL] .... The problem is that the stored proc appears to be pegged at 4 decimal places (which translates to ##.##%). This loses precision and doesn't add up to 100% when the ratios are summed individually.
Can this be fixed within SSAS, or will I need to modify the ASSP code? .NET decimals are meant to scale automatically, so I'm not sure why it's coming out as (5,4); my .NET coding knowledge is fairly poor. I get the correct behaviour if I hardcode a ratio-to-parent MDX calc (the resulting ratio has as many decimal places as required).
I created a PowerPivot measure which is a ratio: Ratio = Number of Events / Time, calculated over 12 months. I would like the grand total to be this ratio: Sum(Number of Events) / Sum(Time).
In my pivot I have one measure, which I called Value, and this value can have different types depending on one attribute. For instance, one attribute is number of events, another is time, and the third is ratio. I want to display a custom grand total: a simple sum for events and time, and the ratio calculation for the ratio rows.
For instance:

                  201501  201502  201503  TOTAL
Number of events       8      10      10     28
Time                   5       5       4     14
Ratio                8/5    10/5    10/4  28/14
Just wanted to know: what is a general rule of thumb when sizing a log file against a database's data file? We allow the data file for our database to grow 10%, unlimited. We do not allow our log file to autogrow, due to a specific and poorly written process (which we are in a three-month process of removing) that can balloon the log file size. Should it be 10% of the data file, i.e. if the data file size is 800 MB, should the log file be 80 MB? I realize there is a myriad of factors that bear on file size, but a general starting point would be nice.
Thanks
Jeff
--
Message posted via http://www.sqlmonster.com
I read about the SQL 2005 buffer cache and the .NET cache. I tried both on a query of about 8000 users. Both are fast... SQL 2005 was a little faster (probably because I was sorting the .NET cache). Any comments?
Hi there, we have a cluster server and we are using SQL Server 7. Everything was working fine on node A, but when we moved to node B and ran the job, it gave the error message:
Error 22022: SQLServerAgent Error: job 0x7bb7d7bf23b8974d8ea8f15e43e854cf does not exist in the job cache
Hi all, I have a question regarding SQL caching. I want to use SQL caching for my pages. I've gone through the article http://msdn2.microsoft.com/en-us/library/e3w8402y(VS.80).aspx on MSDN, where a step-by-step procedure is explained. But I have a slightly different situation. In the article, we need to configure our application from web.config by adding the following:

<!-- caching section group -->
<caching>
  <sqlCacheDependency enabled="true" pollTime="1000">
    <databases>
      <add name="Northwind"
           connectionStringName="NorthwindConnectionString1"
           pollTime="1000" />
    </databases>
  </sqlCacheDependency>
</caching>
Where in "NorthwindConnectionString1" is the SQLdatasource connection string. But I'm not using SQLDatasource, and have my own classes to build a connection datasource and get a dataset, and then bind it to my control.So in this case what should I write for the above "connectionstring" attribute. My connection string is in web.config only with a section defined. and in my application i'm reading from web.fing only.The connection section i'm using is as follows... <ConnSettings> <add key="ConnString" value="server=serverName;database=dbName;uid=dbuser;Pwd=password"/> </ConnSettings> I want to know how to configure my sql cache in this situation.