Is there a way to increase the size of the procedure cache, or is it only an auto-configuring option?
I have 2 GB of memory, and when I check the size of the procedure cache it is just 10 MB. I would like to increase this to around 50 MB. I am not sure if there is a setting to do this; I had a look on BOL but could not find anything.
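For anyone checking the same thing: on SQL Server 7.0 and 2000 the procedure cache is sized automatically and cannot be set directly; only 6.5 exposed it as a configurable percentage of memory. A minimal way to inspect it, plus the 6.5-only knob for reference:

DBCC PROCCACHE
-- SQL 6.5 only: reserve 20% of cache memory for procedures
-- EXEC sp_configure 'procedure cache', 20
-- RECONFIGURE WITH OVERRIDE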
Can I get some detail about why the sqlservr.exe process is increasing in size drastically? I have checked all the parameters of the server and the processes running on it.
I feel the server is not releasing memory and just keeps occupying more. Can anyone suggest what the cause could be?
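For what it's worth, SQL Server grows its buffer cache on demand and by design does not hand memory back unless the operating system signals pressure, so a steadily growing sqlservr.exe is often normal rather than a leak. If you need to bound it, a minimal sketch (the 1536 MB cap is just an illustration; size it for your server):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 1536;
RECONFIGURE;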
I am having trouble increasing the size of the log device on a SQL Server 6.5 database. When I use SQL Enterprise Manager I get an error saying that the device has 0 MB available. When I use the ALTER DATABASE statement I get an error saying that there is not enough space on the disk, but I know that not to be the case. Has anyone any suggestions? Thanks in advance, Michael Lawlor
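A sketch of the usual 6.5 workaround, since ALTER DATABASE can only use free space that already exists on the device: enlarge the device first, then extend the log onto it. Device and database names here are placeholders, and the steps are from memory of 6.5, so treat this as a starting point:

DISK RESIZE
    NAME = 'LogDev1',
    SIZE = 51200      -- final device size in 2 KB pages (51200 = 100 MB)

ALTER DATABASE MyDb ON LogDev1 = 50   -- then extend by 50 MB on that device

If the new allocation is not recognized as log space, sp_logdevice may also be needed to dedicate the device to the log.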
I have a database with one .mdf data file and one .ldf t-log file. There are multiple inserts/deletes/transactions performed on the data daily, but the size of the two files remains constant (5,774,336 and 153,480 respectively). I perform daily full backups and hourly t-log backups (during business hours of 9-6) and those backup files change size, but why aren't my physical DB files changing? I have them set to auto-grow at 10%, unrestricted...
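Possibly useful context: the physical files only grow when the free space inside them is exhausted, so daily churn that fits within the existing free space never triggers auto-grow (the hourly log backups also keep freeing log space for reuse). A quick way to see how full each file really is (assuming SQL 2005 or later for the sys.database_files view):

SELECT name,
       size / 128 AS FileSizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS SpaceUsedMB
FROM sys.database_files;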
Just wondering if you could help me on this one. I'm not sure if my transaction logs are behaving oddly or what. I've successfully managed to shrink my transaction logs from 7 GB down to 1 MB and now I find it strange that the log file doesn't seem to increase in size. The timestamp of the log file is updating as well, but the size of the file is constant. I haven't configured my database to do auto-shrink, so I'm really confused why it hasn't changed size for more than a month now. Hope I'm not losing any data here. Kindly advise.
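One way to sanity-check this: if the database is in simple recovery, or the log is backed up regularly, the same small log file can be reused over and over without ever needing to grow. A minimal check of how full the log actually is:

DBCC SQLPERF(LOGSPACE);
-- reports log size and the percentage currently in use for every database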
I have a very big database and a number of people are working on it. Its log file size is increasing too much every day. I am taking a log backup every 30 minutes.
I don't know whether I need to truncate the log file after taking the log backup or not. I am taking a differential backup every day and a full backup every week.
Please tell me, do I need to truncate the log file to reduce the file size, or should I leave it as it is?
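For context: each log backup already truncates the inactive portion of the log (frees it for reuse); it just never shrinks the physical file. If the file has grown beyond what you want, a one-off shrink after a log backup is the usual approach. A sketch, with placeholder names (check your logical log file name with sp_helpfile):

BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';

USE MyDb;
DBCC SHRINKFILE (MyDb_Log, 500);   -- target size in MB; leave room for normal activity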
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush the procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
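For reference, a minimal sketch of the undocumented command mentioned above; being undocumented, its behavior may vary by version, and it still requires elevated permissions, which is exactly the limitation being asked about:

DECLARE @dbid int;
SET @dbid = DB_ID('MyDb');    -- MyDb is a placeholder
DBCC FLUSHPROCINDB(@dbid);    -- drops cached plans for that database only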
I am using SQL Server 2000 with replication objects for two locations. The log size on the publisher grows to up to 25 times the data file size; I mean an 80 MB data file maintains a 2 GB log file, and it is the same for all five companies' databases on the same Windows 2000 Advanced Server box.
Since last week the server randomly gets disconnected from user applications, and at those times a few tables cannot be opened at the server.
Can anyone give a reason why SQL Server 2000 misbehaves this way?
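With replication involved, a common cause of runaway log growth is replicated transactions that the log reader has not yet delivered to the distribution database; that portion of the log cannot be truncated even by log backups. Two quick checks worth running (database name is a placeholder):

USE MyPublishedDb;
DBCC OPENTRAN;           -- shows the oldest active and oldest non-distributed replicated transaction

DBCC SQLPERF(LOGSPACE);  -- shows how full each log really is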
I have a database which is centered around two data tables (approx. 5.5 million records each). We are finally making the big leap from SQL Server 2005 to 2012. The data is currently stored as datetime, and we are hoping to take advantage of the new date datatype, since the time component is not needed.
The first table has 13 different date columns. In testing on the 2012 server I have changed 3 columns so far, and have seen that changing the datatype to date actually increases the index size and does not affect the data size. Only 3 of the columns are associated with indexes, and modifying a non-indexed column still increased the index size. I am running the Disk Usage by Table report to view the sizes.
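One thing that may explain it: ALTER COLUMN does not compact the existing index pages, so the old 8-byte datetime values can leave wasted space behind until the indexes are rebuilt. A hedged sketch of the pattern (table and column names are placeholders):

ALTER TABLE dbo.FactTable ALTER COLUMN OrderDate date NOT NULL;

ALTER INDEX ALL ON dbo.FactTable REBUILD;   -- rebuild afterwards, then re-check Disk Usage by Table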
data source=RemoteHostName;initial catalog=myDb;password=sa;user id=sa; Max pool size = 200;
And now a strange thing is happening. I am receiving this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached
The SQL Server Activity Manager is telling me that only 100 connections are pooled, and I guess that the max pool size is 100; it is not being changed by my connection string, even though I am trying to change the default pool size of 100 to 200.
So I am stuck: how do I increase the max pool size? Is there any way?
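For what it's worth, ADO.NET keeps one pool per distinct connection string, so if any code path builds its string without the Max Pool Size keyword, that path still gets the default pool of 100, which may be what the Activity Manager is counting. A hedged check from the server side to see which application is actually holding the connections:

SELECT program_name, COUNT(*) AS Connections
FROM master..sysprocesses
WHERE spid > 50
GROUP BY program_name
ORDER BY Connections DESC;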
I have to perform a lookup in a table based on a query like:
"... where ? = [RefTable].fieldID and ? between [RefTable].AnotherFieldValue and [RefTable].AThirdFieldValue"
So, SSIS has set the CacheType to none. As I really need to speed up the job, I want to set the CacheType to partial (full isn't an option due to the custom query I use here).
But here it comes: when using the partial CacheType, one has to set the cache size manually, and I really don't know what value I should assign to it. Is there a guideline on this topic?
I work on a Windows 2003 server platform with SQL Server 2005, 2 processors, 2 GB RAM, and enough disk space.
Is there any way of getting around the 128 MB file size limit when creating and adding SSEv databases to VS2005? Currently I get the following error when trying to connect to a database: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only...". This is after I altered the app.config file to "...Max Database Size=600;...". Has anyone tried to use SSEv to cache data with the use of the Smart Application Offline Building Block? Is there a provider I can use for doing this?
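As the quoted error hints, Max Database Size only takes effect on the first connection that opens the file, so it has to be present in the connection string actually used at runtime, before anything else (including the VS designer) opens the database with the default 128 MB limit. A hedged sketch of the string (the file name is a placeholder):

Data Source=|DataDirectory|\MyCache.sdf;Max Database Size=600;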
I searched around and have not been able to find any guidance on this question: if I am designing a lookup transformation, how do I decide what I should set the cache size to?
For the transformation on which I am currently working, the size of the lookup table will be small (like a dozen rows), so should I just reduce it to 1 MB? Or should I not even bother with caching for a lookup table that small?
Hypothetically speaking, if I am working with much larger lookups in the future (let's say 30,000 rows in the lookup table), is there some method or formula that I should use to try to figure out the best cache size?
If the cache size is set larger than the actual data being cached, is the entire cache size allocated in memory, or is the cache size managed to only be as large as it needs to be? In other words, is the cache size a maximum to which the cache will grow, or is it a preset that sets the cache for that lookup to exactly the specified size?
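In case a rule of thumb helps while waiting for better answers: a reasonable starting estimate is row count times the width of the columns the lookup actually references (join plus output columns), with some overhead on top. A hedged back-of-envelope query against the reference table (column names are placeholders):

SELECT COUNT(*) AS LookupRows,
       AVG(DATALENGTH(FieldID) + DATALENGTH(AnotherFieldValue) + DATALENGTH(AThirdFieldValue)) AS AvgRowBytes
FROM RefTable;

-- e.g. 30,000 rows x ~50 bytes is only about 1.5 MB, so even the 'large' case is modest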
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB FROM sys.dm_os_buffer_descriptors

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
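One way to see where the rest of the memory grant is going is to break it down by memory clerk; an unusually large consumer (plan cache, query workspace memory, CLR, a linked server provider) should stand out. A sketch, assuming SQL 2012, where the pages_kb column exists:

SELECT TOP (10) type, SUM(pages_kb) / 1024 AS MemoryMB
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY MemoryMB DESC;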
After converting from SSIS 2008 to SSIS 2012, I am facing a major performance slowdown while loading fact data. When we used 2008, one file used to take around 2 hours on average; now, after converting to 2012, it took 17 hours to load one file. This is the current scenario: we load data into staging, select everything from staging (28 million rows), and use a lookup for each dimension. I believe it is taking a very long time due to one dimension table which has 89 million rows.
With that lookup we are currently using a partial cache, because a full cache caused the system to run out of memory. In the Lookup Transformation Editor, how do I increase the 64-bit partial cache size on this lookup? I am stuck at 4096 MB and cannot increase it. In 2008, I had a 200,000 MB partial cache size.
I know this might be a dumb one, but what the heck. My new 7.0 server's procedure cache stays at 100%. After researching, this looks like what I want. Any response appreciated.
Ours is a MS SQL Server client/server application with very minimal usage of stored procedures. The proc cache is configured at 5%. I execute "dbcc proccache" to keep track of the proc cache. I have seen that the "proc cache size" reduces to a very small amount when there is peak usage. It starts at 42,000 and comes down to 400, though it is always greater than "proc cache used". I am worried that this may cause crashes. Please advise why this happens, and solutions if any. Thanks in advance, Ramakrishna Seelam.
I have installed a SQL Server diagnostic tool for evaluation. It prompts and warns me that "Procedure Cache hit rate is, for example, 15%". Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server finds a required plan in the procedure cache and the number of times it looks for a plan in the procedure cache falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
Is there any way we can tell SQL Server to keep a specific (long-running) query plan in the procedure cache? I have already tried to do this by creating a job (run every hour from 8 AM to 6 PM), but it is not enough.
I'm putting together some monitoring scripts. I have the buffer cache hit ratio etc., but I am struggling to get an accurate script for the current procedure cache hit ratio...
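A hedged sketch of one common approach, using the plan cache performance counters; the ratio counter is only meaningful when divided by its base counter (assumes SQL 2005 or later for the DMV):

SELECT a.cntr_value * 100.0 / NULLIF(b.cntr_value, 0) AS PlanCacheHitRatioPct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
 AND a.instance_name = b.instance_name
WHERE a.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'
  AND a.object_name LIKE '%Plan Cache%'
  AND a.instance_name = '_Total';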
I have a 32 bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, is reporting the Procedure Cache to be 384MB and a Hit Rate of 75% on a fairly regular basis. Sometimes the Procedure Cache increases to 495MB and a Hit Rate of 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation, however this is our only cluster.
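Some background that may bear on all three questions, hedged: on 32-bit builds only the buffer pool can use AWE memory, so the procedure cache is confined to the normal 2 GB (or 3 GB with /3GB) virtual address space regardless of the 10 GB of RAM, and SQL 2005 offers no setting to size it directly. To measure what it is actually holding:

SELECT objtype,
       COUNT(*) AS Plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS SizeMB
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY SizeMB DESC;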
DBCC PROCCACHE

num proc buffs        = 64889
num proc buffs used   = 1135
num proc buffs active = 1135
proc cache size       = 2896
proc cache used       = 364
proc cache active     = 364
Using SQL Server 2000. When does SQL flush or clear the procedure cache? I am dynamically creating and dropping stored procedures (SPs). Does SQL clear the cache for the SP that has been dropped? If not, when the SP is recreated (with the same name), does SQL use the execution plan from cache? Thank you in advance. Jack
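For anyone wanting to verify this directly: dropping a procedure should invalidate its cached plan, and a recreated procedure with the same name gets a fresh plan on first execution. On SQL 2000 you can watch this happen via the syscacheobjects system table; a sketch (MyProcName is a placeholder):

SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%MyProcName%';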
My server (SQL 2005 SP2) typically runs with a procedure cache usage of about 92% or higher... lately it seems like at some point during the day it just drops to anywhere between 50% and 65%, and with this comes horrible server performance and many snowball effects. If I clear the procedure cache, it will go back up only about 10% for a minute or two. The only way I can get it to recover completely seems to be restarting the SQL service; then it will be fine until the next incident. The database is read-only in practice (not set to read-only, but no updates other than replication), and the same SPs are run over and over and over throughout the day. I also noticed that SP compiles go up drastically at that point; I am not sure if this is part of the cause or part of the effect.
CPU is normal, but the response from anything (even sp_who) is slow.
I do not completely understand the way the procedure cache works, so I thought I would ask for some direction.
Any ideas where to look or where to start? Anything I can do to catch this when it happens would be great.
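A hedged suggestion for catching it in the act: sample the compile counters on a schedule and see what jumps when the drop hits. These counters are cumulative, so diff two samples to get a per-second rate:

SELECT GETDATE() AS SampleTime, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec', 'Batch Requests/sec');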
I'm getting this error when trying to set up a cache dependency... are there any special permissions etc.?

From CS:

SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:

<add name="SiteDB" connectionString="Data Source=localhost,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:

<add name="SiteDB" connectionString="Data Source=<machinename>,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:

<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says:

aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -t Customers -et

and the command line response is: "Enabling the table for SQL cache dependency.. An error has happened. Details of the exception: The table 'Customers' cannot be found in the database."

Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
I was wondering if a SQL cache dependency would in fact be invalidated if: 1. it was created based on a procedure-type command, or 2. the select statement retrieves the data from multiple database tables. Any help would be much appreciated. I am stuck with the fact that none of the data based on the SQL dependency is invalidated. I have spent literally hours trying to understand what I am doing incorrectly.
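If this is the query-notification flavor (a SqlCacheDependency built from a SqlCommand on SQL 2005), note that the SELECT statement has strict requirements, and a statement that breaks them will not be invalidated the way you expect. Two of the rules, as a hedged illustration:

-- Valid shape for notifications: explicit column list and two-part table names
SELECT ProductID, Name FROM dbo.Products;

-- Invalid shapes: SELECT * and one-part table names
-- SELECT * FROM Products;

Multiple tables can participate so long as every reference follows the same rules; for procedure-based commands, it is worth testing the underlying SELECT directly, since stored procedures add further restrictions.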