My server (SQL 2005 SP2) typically runs with a procedure cache usage of about 92% or higher... lately, at some point during the day, it just drops to anywhere between 50% and 65%, and with this comes horrible server performance and many snowball effects. If I clear the procedure cache it will go up only about 10% for a minute or two. The only way I can get it to recover completely seems to be restarting the SQL service; then it will be fine until the next incident. The database is effectively read only (not set to read only, but no updates other than replication), and the same SPs are run over and over throughout the day. I also noticed that SP compiles go up drastically at that point; I'm not sure if this is part of the cause or part of the effect.
CPU is normal, but the response from anything (even sp_who) is slow.
I do not completely understand how the procedure cache works, so I thought I would ask for some direction.
Any ideas where to look or where to start?
Anything I can do to catch this when it happens would be great.
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush the procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
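A minimal sketch of the database-level flush, assuming SQL Server 2005 and sysadmin rights; DBCC FLUSHPROCINDB is undocumented and unsupported, and as far as I know there is no supported way for dbo alone to flush the procedure or data cache without elevated rights (MyReadOnlyDB is a hypothetical name):

DECLARE @dbid int;
SET @dbid = DB_ID(N'MyReadOnlyDB');   -- hypothetical database name
DBCC FLUSHPROCINDB(@dbid);            -- flush cached plans for that database only (undocumented)

-- Documented, instance-wide equivalents (also need elevated rights):
-- DBCC FREEPROCCACHE;       -- clears the whole plan cache
-- DBCC DROPCLEANBUFFERS;    -- clears clean data pages; no per-database form exists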
Question: Why am I getting 428 pages for which there is no corresponding DB object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
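A hedged sketch of the kind of check behind that question: it lists buffer-pool pages for the current database whose allocation unit has no match in sys.allocation_units (recently dropped or rebuilt objects are one common explanation, but verify on your own system). Run it in the database in question; it needs VIEW SERVER STATE.

SELECT bd.allocation_unit_id,
       COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
LEFT JOIN sys.allocation_units AS au
       ON au.allocation_unit_id = bd.allocation_unit_id
WHERE bd.database_id = DB_ID()        -- current database only
  AND au.allocation_unit_id IS NULL   -- no corresponding allocation unit
GROUP BY bd.allocation_unit_id
ORDER BY cached_pages DESC;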
I know this might be a dumb one, but what the heck. My new 7.0 server's procedure cache stays at 100%. After researching, this looks like what I want. Any response appreciated.
Ours is an MS SQL Server client-server application with very minimal usage of stored procedures. The proc cache is configured at 5%. I execute "dbcc proccache" to keep track of the proc cache. I have seen that the "proc cache size" drops to a very small amount during peak usage: it starts at 42,000 and comes down to 400, though it is always greater than "proc cache used". I am worried that this could cause crashes. Please advise why this happens, and solutions if any. Thanks in advance, Ramakrishna Seelam.
I have installed a SQL Server diagnostic tool for evaluation. It warns me that the "Procedure Cache hit rate is, for example, 15%". Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server finds a required plan in the procedure cache and the number of times it looks for one falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
Is there any way we can tell SQL Server to keep a specific (long-running) query in the procedure cache? I already tried to do this by creating a job (run every hour from 8 am to 6 pm), but it is not enough.
Is there a way to increase the size of the procedure cache, or is it only an auto-configuring option? I have 2 GB of memory, and when I check the size of the procedure cache it is just 10 MB. I would like to increase this to around 50 MB. I'm not sure if there is a setting to do this; I had a look in BOL but could not find anything.
I'm putting together some monitoring scripts. I have the buffer cache hit ratio etc., but I'm struggling to get an accurate script for the current procedure cache hit ratio...
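A hedged sketch, assuming SQL Server 2005 or later: the Plan Cache performance object exposes a hit-ratio counter and its base, so dividing the two raw values gives the ratio as the counters currently stand (cumulative, not a point-in-time delta).

SELECT CAST(a.cntr_value AS float) * 100.0 / NULLIF(b.cntr_value, 0) AS plan_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON  b.object_name   = a.object_name
  AND b.instance_name = a.instance_name
WHERE a.object_name LIKE '%Plan Cache%'
  AND a.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'   -- on some builds/collations the base counter is spelled '... base'
  AND a.instance_name = '_Total';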
I have a 32 bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, is reporting the Procedure Cache to be 384MB and a Hit Rate of 75% on a fairly regular basis. Sometimes the Procedure Cache increases to 495MB and a Hit Rate of 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation, however this is our only cluster.
DBCC PROCCACHE returns:
num proc buffs = 64889
num proc buffs used = 1135
num proc buffs active = 1135
proc cache size = 2896
proc cache used = 364
proc cache active = 364
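There is no direct way to size the procedure cache by hand in 2005 that I know of; it is managed automatically within max server memory, and on 32-bit it cannot use AWE-mapped memory, so it competes for the regular virtual address space. A hedged sketch that often explains a low hit rate: break the plan cache down by object type and see how much of it is single-use ad hoc plans.

SELECT objtype,
       COUNT(*)                                       AS cached_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576   AS size_mb,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;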
Using SQL Server 2000. When does SQL flush or clear the procedure cache? I am dynamically creating and dropping stored procedures (SPs). Does SQL clear the cache for an SP that has been dropped? If not, when the SP is recreated (with the same name), does SQL use the execution plan from cache? Thank you in advance. Jack
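A rough way to observe this yourself on SQL Server 2000 (usp_MyProc is a hypothetical name): check master.dbo.syscacheobjects before the drop and again after recreating the procedure. If the drop removed the plan, the first execution of the recreated procedure should show a newly compiled entry rather than the old one.

SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE dbid = DB_ID()
  AND sql LIKE '%usp_MyProc%';   -- hypothetical procedure name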
I'm getting this error when trying to set up a cache dependency... are there any special permissions etc.?

From CS:
SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:
<add name="SiteDB" connectionString="Data Source=localhost,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:
<add name="SiteDB" connectionString="Data Source=<machinename>,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:
<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says
"aspnet_regsql.exe -E -S .SqlExpress -d aspnetdb -t Customers -et"
and the command line response is
"Enabling the table for SQL cache dependency.. An error has happened. Details of the exception: The table 'Customers' cannot be found in the database."
Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
I was wondering if a SQL Cache Dependency would in fact be invalidated if: 1. it was created based on a stored procedure command; 2. the select statement retrieves data from multiple database tables. Any help would be much appreciated. I am stuck with the fact that none of the cached data based on the SQL dependency is being invalidated. I have spent literally hours trying to understand what I am doing incorrectly.
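If this is the SQL Server 2005 query-notification flavor of SqlCacheDependency, the statement behind the command has to follow the notification rules; two that catch most people are two-part table names and an explicit column list (no SELECT *). A hedged sketch of an eligible statement shape (dbo.Products and its columns are hypothetical); as far as I know, a stored procedure can be the command only if the SELECT it runs meets the same requirements, and joins across several tables are fine as long as every table obeys the rules.

SELECT p.ProductID, p.ProductName, p.UnitPrice   -- explicit column list, no *
FROM dbo.Products AS p;                          -- two-part (schema-qualified) table name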
We see the following message in our error log: WARNING: Clearing procedure cache to free contiguous memory. It is accompanied by fairly intensive CPU activity. We get this roughly once per working day.
Anyone have any idea why, and what we can do to stop this?
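No root cause to offer, but a hedged capture step you can schedule or fire from an alert when the warning appears (assuming sysadmin): dump the memory breakdown and the procedure cache counters at that moment, so there is something concrete to analyze afterwards.

DBCC MEMORYSTATUS;   -- overall memory breakdown at the time of the warning
DBCC PROCCACHE;      -- procedure cache counters at the same moment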
I want to analyze procedure cache, to find inefficient plans and parameter issues.
I do it through the DMVs, but my DMV queries are very slow and resource-intensive because the procedure cache is several GB. Actually, I don't need online analysis.
Is it possible to have fast snapshot of procedure cache?
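One approach, sketched under the assumption of SQL Server 2005 and a scratch table name of my own invention (dbo.plan_cache_snapshot): persist the cheap metadata and the SQL text in one pass, then pull the expensive query plans later, in batches, from the saved plan handles.

SELECT GETDATE()       AS captured_at,
       cp.plan_handle,
       cp.cacheobjtype,
       cp.objtype,
       cp.usecounts,
       cp.size_in_bytes,
       st.text          AS sql_text
INTO dbo.plan_cache_snapshot                          -- hypothetical work table
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;

-- Later, offline, fetch plans in small batches from the saved handles:
-- SELECT s.sql_text, qp.query_plan
-- FROM dbo.plan_cache_snapshot AS s
-- CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) AS qp
-- WHERE s.size_in_bytes > 1048576;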
I am looking at the plan caches/cached pages from the perspective of sys.dm_os_memory_cache_counters and the perfmon counter SQL Server:Plan Cache - Cache Pages.
For the first one I am using
select (sum(single_pages_kb) + sum(multi_pages_kb)) from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
(a slight change from a query in http://blogs.msdn.com/sqlprogrammability/)
For the second just perfmon.
The first one gives me a count of about 670,000 pages for just the object and query caches, and the second one gives me a total of about 100,000 pages for five types of caches, including object and query.
If I use the query from http://blogs.msdn.com/sqlprogrammability/ to determine the plan cache size
select (sum(single_pages_kb) + sum(multi_pages_kb) ) * 8 / (1024.0 * 1024.0) as plan_cache_in_GB from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
it gives me about 5 GB, when in fact my SQL Server can access at most 2 GB, with Total and Target Server Memory at about 1.5 GB.
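One hedged cross-check for a discrepancy like this: size the same two cache stores from the memory clerks instead. On SQL Server 2005, sys.dm_os_memory_clerks also exposes single_pages_kb and multi_pages_kb, and comparing it with sys.dm_os_memory_cache_counters usually shows which view is off.

SELECT type,
       SUM(single_pages_kb + multi_pages_kb) / 1024.0 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
GROUP BY type;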
I posted a related thread before about the error below, which I get when I process a dimension. It seems that the "ClearCache" solution does not resolve the issue when I want to process a mining structure.
OLE DB error: OLE DB or ODBC error: There is not enough procedure cache to run this procedure, trigger, or SQL batch. Retry later, or ask your SA to reconfigure SQL Server with more procedure cache. ; Sort failed because there is insufficient procedure cache for the configured number of sort buffers. Please retry the query after configuring lesser number of sort buffers.
Could someone please give me some suggestions? Your help will be very appreciated:-)
I have a stored procedure with strange behavior. As soon as it is created it performs well, but after some (indeterminate) amount of time it starts taking much longer to execute... (up to 70s!!!)
The thing I haven't understood: if I take the query inside the proc and execute it separately, I get the result immediately... :eek:
Dropping and re-creating the procedure makes it fast again. I've scheduled a maintenance plan with index optimization and an integrity check, but that doesn't seem to help...
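This looks a lot like parameter sniffing (the cached plan was compiled for untypical parameter values, which is also why the query run on its own is fast: it gets its own plan for the literal values). A lighter-weight fix than dropping the procedure, sketched with a hypothetical name, is to invalidate just that procedure's plan:

EXEC sp_recompile N'dbo.usp_MySlowProc';   -- hypothetical name; the plan is recompiled on the next execution

-- If it keeps coming back, consider OPTION (RECOMPILE) on the problem statement
-- inside the procedure, or creating the procedure WITH RECOMPILE.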
Hi, what I am trying to discern is if there is any way of logging SP activity on a SQL Server 2000 DB. Ideally I would want to log SP name, parameters, user and time. I found sp_monitor in MSDN, but that just gives overall statistics, not specific enough to aid debugging. Thanks in advance, Finlay Macrae
I have around 100 packages (all packages run at the same time), and each package calls a stored procedure. Once the stored proc execution is completed, the package writes the log information into a log table. How can I capture the CPU and memory usage for the execution of each stored proc?
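A hedged sketch of one way to do it from inside each package's proc-call step, using deltas from sys.dm_exec_sessions (cpu_time is in milliseconds, memory_usage in 8 KB pages); dbo.proc_exec_log and dbo.usp_MyProc are hypothetical names. Note that memory_usage is a point-in-time value rather than a peak, so the memory delta can come out small.

DECLARE @cpu_before int, @mem_before int;

SELECT @cpu_before = cpu_time,        -- CPU ms used by this session so far
       @mem_before = memory_usage     -- 8 KB pages currently allocated to the session
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;

EXEC dbo.usp_MyProc;                  -- the stored procedure the package calls (hypothetical)

INSERT dbo.proc_exec_log (proc_name, cpu_ms, memory_8kb_pages, logged_at)   -- hypothetical log table
SELECT N'dbo.usp_MyProc',
       s.cpu_time - @cpu_before,
       s.memory_usage - @mem_before,
       GETDATE()
FROM sys.dm_exec_sessions AS s
WHERE s.session_id = @@SPID;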
I am maintaining an application where most of the business rules are in triggers, stored procedures and user defined functions. When a bug arises, it can get very tedious to debug. Today, for example, I wanted to modify a function that was being called by a trigger. The problem is that I don't want to change the function, for fear that it is being called by one of the other SPs or triggers in the database (there are hundreds of them). Essentially, I need a tool that allows me to view where functions and SPs are being referenced from. At the very least, I'd like to perform a "full text search" in the database objects, so that, let's say I have a function named "fn_doSomething", I can search the schema for this string and get all the places where it appears. As you can see, I'm in the dark here. I've never worked on a system where all business rules are at the database level. If you know of a tool that does what I describe above, or anything else that would facilitate my life, please let me know! Thanks for your help, Marc
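In the absence of a dedicated tool, a hedged sketch that covers most cases: search the text of every trigger, procedure and function in syscomments (definitions longer than 4000 characters are split across rows, so a name can occasionally straddle a boundary and be missed), and cross-check with sp_depends.

SELECT DISTINCT OBJECT_NAME(c.id) AS referencing_object
FROM dbo.syscomments AS c
WHERE c.text LIKE '%fn_doSomething%';

-- Built-in alternative (its dependency information can be incomplete):
-- EXEC sp_depends 'fn_doSomething';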
I have a client program that writes 10 records per second to a SQL Server database. I want to compute the CPU usage and memory usage for the whole program, or the CPU usage and memory usage for the insert statement in the program.
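For the INSERT statement specifically, and assuming SQL Server 2005 or later, a hedged option is to read its cached-plan statistics; this gives CPU and elapsed time (memory is not tracked per statement here), and MyTable is a hypothetical table name.

SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  / 1000 AS total_cpu_ms,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       st.text                      AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%INSERT INTO%MyTable%'   -- hypothetical target table
ORDER BY qs.total_worker_time DESC;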
Hello. When I look at the SQL Server 2005 Management Studio Server Dashboard, I see that my (user) databases and the msdb database make up a very small percentage of the CPU Usage (%) and Logical IO Performed (%) usage pie charts.
90% of total CPU usage is shown for Adhoc Queries. What exactly does this mean in the Dashboard? If an application were using more, would it not show up at the database level?
Sincerely, this dashboard is good if anyone is watching it daily; please share your experiences with it here.
We have a database that we are preparing to set up Merge replication on. We often make schema changes via T-SQL, many of these changes are made to tables on which an ALTER TABLE statement will not do (rather the creation of a temporary table, copying of the data, deleting the original table then renaming the temp table).
My question is how this will affect merge replication. I have not been able to find anything that is very clear on this. From what I gather, if a table that is participating in merge replication needs to be dropped, I need to go through the manual process (manual, as in calling the necessary system stored procedures; see the sketch after this post) of removing the article from any publication (and subsequent filtering), make the modifications, then re-add it to the publication and filtering.
Is this correct? If so, a new snapshot needs to be created, correct? If so, I have a follow-up question regarding that snapshot.
If a new snapshot needs to be created, what happens during replication/synchronization? Meaning, since it is a new snapshot, does the client (subscriber) see the whole thing as new, or is it smart enough to recognize that only the one table I have changed needs to be synched?
I am quite new to replication, as you can tell, so please forgive the rambling. I ask these questions because I have heard different answers on both questions...so I would like to get the correct answers.
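For what it's worth, a hedged sketch of the drop/re-add cycle described above (publication, schema and table names are hypothetical; check the parameter list for your build before relying on it):

EXEC sp_dropmergearticle
     @publication               = N'MyMergePub',
     @article                   = N'MyTable',
     @force_invalidate_snapshot = 1;

-- ... rebuild dbo.MyTable here (create the new table, copy the data, drop the old one, rename) ...

EXEC sp_addmergearticle
     @publication               = N'MyMergePub',
     @article                   = N'MyTable',
     @source_owner              = N'dbo',
     @source_object             = N'MyTable',
     @force_invalidate_snapshot = 1;

After this, a new snapshot does have to be generated for the changed article; whether subscribers pull the whole snapshot or just the affected article depends on the publication settings, so I would test the behavior on a copy first.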
Is there a way to have SSIS monitor a folder for file drops? I have been unable to determine which object/task to use for this. We need the ability to have it monitor for files being dumped by other systems, pick those up and then process them. Thanks for your assistance.
We've implemented mirroring between two identical servers. Sporadically, the mirroring session will drop and the ERRORLOG reflects the errors below at the exact time the mirroring session becomes suspended. We do not manage our back end network since we use a dedicated hosting environment at a remote location. Is this issue solely caused by network connectivity issues, or are there other factors at work?
2007-07-16 04:24:37.24 spid23s Error: 1453, Severity: 16, State: 1.
2007-07-16 04:24:37.24 spid23s 'TCP://192.168.215.92:5022', the remote mirroring partner for database 'evestment', encountered error 1204, status 4, severity 19. Database mirroring has been suspended. Resolve the error on the remote server and resume mirroring, or remove mirroring and re-establish the mirror server instance.
2007-07-16 04:24:48.46 spid23s Error: 1479, Severity: 16, State: 1.
2007-07-16 04:24:48.46 spid23s The mirroring connection to "TCP://192.168.215.92:5022" has timed out for database "evestment" after 10 seconds without a response. Check the service and network connections.
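If these really are short network blips, one hedged mitigation alongside chasing the network issue is to raise the mirroring partner timeout above the 10-second default so that brief outages do not suspend the session (run on the principal; 30 seconds is just an illustrative value):

ALTER DATABASE evestment SET PARTNER TIMEOUT 30;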