Help, I have recently upgraded from 6.5 to 7.0 and have come across a problem with performance. The problem appears to relate to the buffer cache being flushed: the buffer cache hit ratio drops from 98% to 0% in a matter of seconds. It then grows very slowly, is flushed again, then increases slowly up to 30%.
Does anyone have any ideas as to what would flush the buffer cache?
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
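For reference, a minimal sketch of how that undocumented command is usually invoked (the database name is illustrative, and since the command is undocumented its behavior and required permissions may vary by version); as far as I know there is no per-database equivalent of DBCC DROPCLEANBUFFERS for data pages:

    DECLARE @dbid int
    SET @dbid = DB_ID('MyDatabase')   -- hypothetical database name
    DBCC FLUSHPROCINDB (@dbid)        -- flushes cached plans for this database only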
My problem is that I cannot completely clean the buffer cache on SQL Server 2005 version 9.00.2047.00 (probably SP1).
Right after I run DBCC DROPCLEANBUFFERS in the context of my database (this is a development server, and so far I am the only one working with this particular database), I run a script that queries the sys.dm_os_buffer_descriptors view, also from the context of my database, to make sure that the buffer cache is really clean. However, it shows a large number of entries totaling 42 MB.
I have run both the DBCC command and the script in the past, and the script always showed nothing in the results, meaning the buffers really were clean. The reason I am doing this is to benchmark an existing and a new application.
Does anybody have any ideas or suggestions on how to troubleshoot this issue? I have already closed all connections to this database, but rebooting the server is not an option since other people are also working on it.
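One thing worth checking: DBCC DROPCLEANBUFFERS only drops clean buffers, so dirty pages survive it, and the DMV is instance-wide, so it also shows pages belonging to other databases. A sketch that issues a CHECKPOINT first and then breaks the leftover pages down per database, so you can see whether the 42 MB is dirty pages or someone else's data:

    CHECKPOINT;               -- write dirty pages to disk so they become clean
    DBCC DROPCLEANBUFFERS;    -- now all clean buffers for the instance can be dropped

    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*)            AS buffered_pages,
           COUNT(*) / 128.0    AS buffered_mb,
           SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffered_pages DESC;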
This issue just happened recently. The buffer cache ratio went from > 90% to 50% and has slowly been climbing back up over 8 hours or so. It's currently @ 76%. Is this something I should take action on immediately? It seems to be coming back to normal...
Hi, I have trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two weeks it started working abnormally. After the last optimization (a few months ago) it worked well (fast, without blocks). Its buffer cache hit ratio was about 99.7-99.8. Since yesterday it has been slow, with many blocks and deadlocks. No queries, jobs, or applications were added. Now the buffer cache hit ratio oscillates around 95-98. I tried updating statistics and reindexing some heavily used tables, but there is no effect, or the effect is very short-lived (after a few hours the problem returns). Maybe someone knows what it could be? Is it possible to estimate (using DBCC SHOW_STATISTICS or DBCC SHOWCONTIG or others) how each table affects the total buffer cache hit ratio?
Marek
www.programowanieobiektowe.pl
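As far as I know there is no direct per-table hit-ratio counter on SQL 2000, but since the post already mentions DBCC SHOWCONTIG, a sketch of using it to see whether the heavily used tables have fragmented again (table name is illustrative):

    DBCC SHOWCONTIG ('dbo.MyHeavilyUsedTable') WITH TABLERESULTS, ALL_INDEXES

A low scan density or high logical fragmentation here would at least explain why the reindexing helps briefly and then stops helping.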
We are troubleshooting a performance problem, and the test is slow the first time but subsequent runs are faster. Logging out of the application and logging back in (connecting to a new database session) did not clear the buffer cache as I thought it would. When does the database clear the buffer cache? Is it not per database session?
I can issue CHECKPOINT and then run DBCC DROPCLEANBUFFERS to flush dirty pages to disk and clear the buffers. But since we are testing from the application, do we need to run these commands via application code to clear the buffers per database session, or can we run them from a Management Studio session?
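For what it's worth, these commands act on the whole instance rather than on one session, so running them from a Management Studio session should work just as well as running them from application code. A typical cold-cache benchmarking sequence looks like this:

    CHECKPOINT;               -- flush dirty pages to disk
    DBCC DROPCLEANBUFFERS;    -- empty the data cache for the whole instance
    DBCC FREEPROCCACHE;       -- optionally clear cached plans as well

Because they are instance-wide, be careful running them anywhere other than a dedicated test server.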
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB FROM sys.dm_os_buffer_descriptors
Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
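One way to see where the rest of the 58GB is going is to break memory down by clerk. A sketch against the SQL 2012 DMV (which uses the pages_kb column introduced in that version):

    SELECT TOP (10) [type],
           SUM(pages_kb) / 1024 AS size_mb
    FROM sys.dm_os_memory_clerks
    GROUP BY [type]
    ORDER BY size_mb DESC;

If something other than MEMORYCLERK_SQLBUFFERPOOL dominates this list, that would explain both the small buffer cache and the low PLE.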
Question: Why am I getting 428 pages for which there is no corresponding DB object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
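A sketch of the comparison, with the caveat that sys.dm_os_buffer_descriptors is instance-wide while sys.allocation_units only covers the database you run it in, so pages from other databases (and from recently dropped objects) will never find a match:

    SELECT bd.page_type, COUNT(*) AS unmatched_pages
    FROM sys.dm_os_buffer_descriptors AS bd
    LEFT JOIN sys.allocation_units AS au
           ON bd.allocation_unit_id = au.allocation_unit_id
    WHERE bd.database_id = DB_ID()      -- restrict to the current database
      AND au.allocation_unit_id IS NULL
    GROUP BY bd.page_type;

If the unmatched pages turn out to be allocation or system page types rather than data/index pages, they are likely internal pages rather than orphans.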
I'm getting an alert which states that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers though I'm not sure how to correctly check this.
I ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'
This gives me a Buffer cache hit ratio cntr_value of 9, though it constantly fluctuates between 3 and 3000 and is never steady, and I'm unsure if this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
Which gives me the Page life expectancy of 209061.
Would these values cause concern, and is this a normal Buffer Cache Hit Ratio? It's constantly swinging between high and low from what I can see. These scripts were pulled from another forum, and I'm assuming they're showing the correct values.
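One likely explanation for the jumpy numbers: 'Buffer cache hit ratio' is a ratio counter, so its raw cntr_value is only a numerator and has to be divided by the matching base counter to get a percentage; read on its own it fluctuates meaninglessly. A sketch of the usual calculation:

    SELECT CAST(a.cntr_value AS float) / NULLIF(b.cntr_value, 0) * 100.0
               AS buffer_cache_hit_ratio_pct
    FROM sys.dm_os_performance_counters AS a
    JOIN sys.dm_os_performance_counters AS b
      ON a.[object_name] = b.[object_name]
    WHERE a.counter_name = 'Buffer cache hit ratio'
      AND b.counter_name = 'Buffer cache hit ratio base';

Page life expectancy, by contrast, is reported directly in seconds, so 209061 is roughly two and a half days, which on its own does not look alarming.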
I encountered the following error while attempting to preview an RDL report I was developing in VS2010 using SSDT: "The size necessary to buffer the XML content exceeded the buffer quota"
We have a set of reports with the same header section in all the reports. So while developing a new report, I copied that header section into the new report with the same dataset names (without any change), but while rendering the report it throws the error "The size necessary to buffer the XML content exceeded the buffer quota".
I have a master package that executes a series of sub packages run from a SQL Agent job. One of those sub packages has been stable for a week, running at least once per day, but it just failed despite having been run once already today with the same set of input data.
There were a series of errors showing in the event log for the Execute Package Task starting with "Buffer Type 15 had a size of 0 bytes.", then "The buffer manager failed to create a new buffer type.", then "The Data Flow task cannot register a buffer type. The type had 32 columns and was for execution tree 3.", then "The layout failed validation." and finally "Error 0xC0012050 while loading package file "C:[Package].dtsx". Package failed validation from the ExecutePackage task. The package cannot run.".
SQLIS.com reports the constant for the error code as DTS_E_REMOTEPACKAGEVALIDATION ( http://wiki.sqlis.com/default.aspx/SQLISWiki/0xC0012050.html ).
I then ran the package on my dev machine in BIDS and it worked fine, so I re-ran the job on the server and this time that package executed OK, but another one fell over without putting anything in the event log.
I'm experiencing a completely random warning from any given row count component within any given data flow task. It occurs sporadically, and whilst distracting, I don't see any adverse effects on the data after the packages complete. Can someone weigh in on this warning and let me know if it is indeed benign, or what I may be able to do to fix it?
Here's the warning:
"A call to the ProcessInput method for input 75997 on component "CNT Rows sent for STG table" (75995) unexpectedly kept a reference to the buffer it was passed. The refcount on that buffer was 4 before the call, and 5 after the call returned."
I'm getting this error when trying to set up a cache dependency... are there any special permissions etc.?

From CS:
SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:
<add name="SiteDB" connectionString="Data Source=localhost,[port]\SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:
<add name="SiteDB" connectionString="Data Source=<machinename>,[port]\SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:
<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says:
aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -t Customers -et
and the command-line response is: "Enabling the table for SQL cache dependency. An error has happened. Details of the exception: The table 'Customers' cannot be found in the database."
Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
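On the EDIT: as far as I can tell, "Customers" is just the sample's own application table, not something that ships with aspnetdb. aspnet_regsql expects the table you actually want to watch, in the database that actually contains it. Something like the following, where MySiteDB is a hypothetical stand-in for the database holding the Products table cached in the code above:

    aspnet_regsql.exe -E -S .\SqlExpress -d MySiteDB -t Products -et

The database named by -d then also has to match the one the SiteDB connection string points at, or polling will look in the wrong place.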
I am looking at the plan caches/cached pages from the perspective of sys.dm_os_memory_cache_counters and the SQL Server:Plan Cache - Cache Pages perfmon counter.
For the first one I am using
select (sum(single_pages_kb) + sum(multi_pages_kb)) from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
(a slight change from a query in http://blogs.msdn.com/sqlprogrammability/)
For the second just perfmon.
The first one gives me a count of about 670,000 pages just for the object and query caches, and the second gives me a total of about 100,000 pages across five types of cache, including object and query.
If I am using the query from http://blogs.msdn.com/sqlprogrammability/ to determine the plan cache size
select (sum(single_pages_kb) + sum(multi_pages_kb) ) * 8 / (1024.0 * 1024.0) as plan_cache_in_GB from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
it gives me about 5 GB, when in fact my SQL Server can access at most 2 GB, with Total and Target Server Memory at about 1.5 GB.
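One thing to double-check before anything else: single_pages_kb and multi_pages_kb are already reported in kilobytes (hence the _kb suffix), so multiplying their sum by 8 treats kilobytes as 8 KB pages and overstates the size eightfold. 5 GB / 8 is roughly 0.6 GB, which fits comfortably under the 1.5 GB Total Server Memory, and the same unit mix-up would explain the first discrepancy: 670,000 KB is about 84,000 pages, in line with perfmon's ~100,000 pages for five caches. A corrected sketch:

    select (sum(single_pages_kb) + sum(multi_pages_kb)) / (1024.0 * 1024.0) as plan_cache_in_GB
    from sys.dm_os_memory_cache_counters
    where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'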
The scenario is as follows: I have a source with many rows. Each row has a column called max_qty_value. I need to perform a calculation using another column called qty; the calculation is essentially CEILING(qty / max_qty_value). Once I have that number, I need to write that many duplicate rows for each source row. For example, CEILING(15 / 4) = 4, so I need to write 4 rows to the same target table, as line information for a purchase order.
The multicast transform appears to support only a fixed, predetermined set of outputs. How do I design this logic in SSIS to write out a dynamic number of rows to a target table?
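The multicast does indeed only produce a fixed set of copies. If the source is a SQL Server table, one alternative worth considering is doing the multiplication in the source query itself by joining to a numbers (tally) table, so the data flow already receives one row per required copy. Table and column names below are taken from the description and otherwise hypothetical:

    -- dbo.Numbers is a tally table with an integer column n = 1, 2, 3, ...
    SELECT s.*, n.n AS copy_number
    FROM dbo.SourceRows AS s
    JOIN dbo.Numbers AS n
      ON n.n <= CEILING(s.qty * 1.0 / s.max_qty_value);

The other common approach is an asynchronous Script Component that emits CEILING(qty / max_qty_value) copies of each input row into its output buffer.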
On Microsoft performance monitor, what is the difference between SQL Server Cache Manager: Cache Hit Ratio and SQL Server Buffer Manager: Buffer Cache Hit Ratio? We have a production server where the buffer cache hit ratio is consistently at 99%, which is normal. However, the cache hit ratio is 73%. What is the difference between the two hit ratios, and why would we have such a significant difference between the two?
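For what it's worth, Buffer Manager: Buffer Cache Hit Ratio measures data-page reads served from memory, while Cache Manager: Cache Hit Ratio measures reuse of cached plans (procedures and ad hoc SQL), so a low value there usually points at statements that are compiled once and never reused rather than at memory pressure. I believe both sets of counters can also be read from inside the instance; on SQL 2000 they are exposed through master.dbo.sysperfinfo (ratio counters again need their Base counterpart):

    SELECT object_name, counter_name, cntr_value
    FROM master.dbo.sysperfinfo
    WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base',
                           'Cache Hit Ratio', 'Cache Hit Ratio Base')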
I'm having a problem importing a text file into a SQL db using DTS. I have to transform some of the data that is being imported so I think Bulk import is out of the question.
Everything works fine until I hit a row that contains more than 255 characters in one cell. Once it encounters that row, it fires this error:
"Error at source for row number 9.Errors encountered so far in this task :1 General Error: -2147217887(80040E21) Data for Source Column 3('Col3') is too large for the specified buffer size."
I found an entry in the MS Knowledge Base that addresses the symptom, but the workaround doesn't fix it.
I have a strange problem that I need to solve as soon as possible. I have created two CLR UDTs called point and point_list. Each record of a point_list consists of a list of points. I created a CLR stored procedure which reads some raw data and updates the point_list records. When I execute the stored procedure, the following error appears:
System.Data.SqlTypes.SqlTypeException: The buffer is insufficient. Read or write operation failed. System.Data.SqlTypes.SqlTypeException: at System.Data.SqlTypes.SqlBytes.Write(Int64 offset, Byte[] buffer, Int32 offsetInBuffer, Int32 count) at System.Data.SqlTypes.StreamOnSqlBytes.Write(Byte[] buffer, Int32 offset, Int32 count) at System.IO.BinaryWriter.Write(Char ch) etc ...
Hi there,
Anybody know how to increase the MS SQL Server buffer size? I get an error when trying to insert some pictures as OLE objects. When transferring to the server I get an error that the buffer size needs to be increased.
Regards,
Rudi W.
A script component receives some input. But I just can't get at the first row??
Basically, if I use the NextRow method in the Do statement, then it advances the row collection to the second row before it gets into the code inside the loop. BUT, if I use the EndOfRowset property to define my loop, then I get an error:
[PipelineBuffer has encountered an invalid row index value]
I'm guessing this means I have to call NextRow before I access the data in the collection? But that seems wrong, because then I miss the first row. What am I missing??
This is the code which works but I miss the first row:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim strConcept As String
    Do While Row.NextRow()
        strConcept = Row.concept
        updateDb(strConcept)
    Loop
End Sub
This is the code which throws the invalid row index error:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim strConcept As String
    Do While Not Row.EndOfRowset()
        strConcept = Row.concept
        updateDb(strConcept)
    Loop
End Sub
I've put some try/catches in there, and the error happens on the line that calls Row.concept...
Can anyone help? It must be something I'm messing up.
All, my weekly load failed, and here is the error message I got. Could someone kindly point out what the problem is and how to deal with it?
Thanks
Error: 0xC0047012 at Fact_ResidentService, DTS.Pipeline: A buffer failed while allocating 63936 bytes.
Error: 0xC0047011 at Fact_ResidentService, DTS.Pipeline: The system reports 43 percent memory load. There are 4227104768 bytes of physical memory with 2378113024 bytes free. There are 8796092891136 bytes of virtual memory with 8787211939840 bytes free. The paging file has 10300792832 bytes with 14786560 bytes free.
Error: 0xC0047022 at Fact_ResidentService, DTS.Pipeline: The ProcessInput method on component "Union All 1" (3629) failed with error code 0x8007000E. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
Error: 0xC02020C4 at Fact_ResidentService, From_Basis [16]: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
Error: 0xC0047038 at Fact_ResidentService, DTS.Pipeline: The PrimeOutput method on component "From_Basis" (16) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at Fact_ResidentService, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0x8007000E.
Error: 0xC0047021 at Fact_ResidentService, DTS.Pipeline: Thread "SourceThread1" has exited with error code 0xC0047038.
Error: 0xC0047039 at Fact_ResidentService, DTS.Pipeline: Thread "WorkThread2" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
When running a package created on my local machine I get no errors at all, but when I try to run the same package on the server I get an error specifying Microsoft.SqlServer.Dts.Pipeline.DoesNotFitBufferException: The value is too large to fit in the column data area of the buffer.
I have tried changing the DefaultBufferSize of the data flow task, but this makes no difference. I think the buffer size for a particular column is being exceeded, but I cannot find anywhere to set this property.