I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
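In case it is useful to anyone looking at the same thing, this is the kind of breakdown I have been running to see where the rest of the memory goes. A minimal sketch, assuming SQL 2012, where sys.dm_os_memory_clerks exposes pages_kb (earlier versions split it into single-page and multi-page columns):

-- Top memory consumers by clerk type (plan cache, buffer pool, etc.)
SELECT TOP (10) type, SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;

-- Buffer pool pages per database, to see who owns the cached data
SELECT DB_NAME(database_id) AS database_name, COUNT(*) / 128 AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;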
Hi, I'm getting this error in my application: "cannot allocate more connection. connect pool is at maximum. increase max pool size". The problem is that the error does not appear when I test; it only appears when the application is being used by many people. How can I resolve this? Thanks.
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush the procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
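For context, the closest I have come with documented commands is freeing plans one at a time: DBCC FREEPROCCACHE accepts a plan_handle in SQL 2008 and later, and sys.dm_exec_plan_attributes exposes each plan's dbid, so the flush can be scoped to one database. A sketch; note it still requires ALTER SERVER STATE permission (so it does not solve the dbo-only part) and it does not touch the data cache:

DECLARE @plan_handle varbinary(64);
DECLARE plans CURSOR FOR
    SELECT cp.plan_handle
    FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
    WHERE pa.attribute = 'dbid' AND CAST(pa.value AS int) = DB_ID();
OPEN plans;
FETCH NEXT FROM plans INTO @plan_handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC FREEPROCCACHE (@plan_handle); -- frees a single cached plan
    FETCH NEXT FROM plans INTO @plan_handle;
END;
CLOSE plans;
DEALLOCATE plans;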
I am tasked with identifying the source database name, ID, and server name for each staging table that I create. I need to add this as a derived column on all staging tables created by merging the same tables from different servers together.
When doing a Merge Join, there is no way to identify the source of the data, so I would like to see whether data came from one database more than the other servers, or whether there are duplicates across servers.
The thing that bugs me about the SSIS Data Flow task is that there is no easy way to do an Execute SQL Task after I select my ADO.NET Source to get this information, because my connection string is dynamic and there is no way of knowing which data source is being picked up at runtime.
For example, I have a Products table on Servers 1 and 2.
Server 2 has more Products, and I would like to join the two together to create a staging table.
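The workaround I am leaning towards is tagging the rows in each source query itself, before the Merge Join, so the lineage travels with the data. A sketch, with placeholder column names for the Products table:

-- Run against each server's source; the literal columns identify the origin.
SELECT p.ProductID, p.ProductName,
       @@SERVERNAME AS SourceServerName,
       DB_NAME()    AS SourceDatabaseName,
       DB_ID()      AS SourceDatabaseID
FROM dbo.Products AS p;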
I encountered the following error while attempting to preview an RDL report I was developing in VS2010 using SSDT: "The size necessary to buffer the XML content exceeded the buffer quota".
We have a set of reports with the same header section in all of them. When developing a new report, I copy that header section into the new report with the same dataset names (without any change), but while rendering the report it throws the error "The size necessary to buffer the XML content exceeded the buffer quota".
I am sure I have seen in the past, in a monitoring tool, that PLE drops to 0 whenever we do a backup. I was doing some reading around this, however, and found something that said backups use a portion of memory external to the buffer pool (outside the min/max server memory settings).
Is this correct, and how can I tell how much memory will be required for a backup?
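For what it is worth, my understanding is that the backup buffers can be sized explicitly, and the memory needed is roughly BUFFERCOUNT x MAXTRANSFERSIZE, allocated outside the buffer pool. A sketch with placeholder names (here 16 x 4 MB = 64 MB):

BACKUP DATABASE MyDatabase               -- placeholder database name
TO DISK = N'D:\Backups\MyDatabase.bak'   -- placeholder path
WITH BUFFERCOUNT = 16,
     MAXTRANSFERSIZE = 4194304;          -- 4 MB, the documented maximum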
Hi, I'm trying to chase down some bottlenecks, and am currently trying to figure out what's actually in our data buffer pool.
We've recently upgraded to SQL Server 2005 (SP2a); there's 4GB of memory on the box (an active/passive cluster) with the /3GB switch set. I'm working my way up the learning curve for sys.dm_os_buffer_descriptors and sys.allocation_units [and boy, I sure wish SSMS's query windows wouldn't "copy" in HTML]. Based on BOL and some poking around, I've come up with the following query to list pages used within a given database:
select count(*) as cached_pages_count,
    obj.name, obj.object_id, obj.index_id, ind.name as index_name
from sys.dm_os_buffer_descriptors bd
left outer join (
    select o.name, o.object_id, p.index_id, au.allocation_unit_id
    from sys.allocation_units au
    join sys.partitions p
        on au.container_id = p.partition_id -- 2005 compatible, but maybe not in future versions
    join sys.objects o on o.object_id = p.object_id
) obj
    on bd.allocation_unit_id = obj.allocation_unit_id
left outer join sys.indexes ind
    on ind.object_id = obj.object_id and ind.index_id = obj.index_id
where bd.database_id = db_id()
group by obj.name, obj.object_id, obj.index_id, ind.name
order by cached_pages_count desc
This would appear to list how many pages are sitting in our buffer pool for which objects for the currently selected database. The thing is, for our "main" database, the vast majority of pages fall in that "unidentified" bucket -- their allocation_unit_ids are not in sys.allocation_units (or tempdb, I checked there just in case).
My question is: what are these pages? Where is this data coming from? Might these somehow be related with our execution/query cache, which appears to be larger than our data cache?
As may be obvious, this is all new to me, and any help would be greatly appreciated!
I have a site which provides online studying for students in a graduate program. Apparently their final was today, because last night everybody was on. I know, however, that this is a small class, so no more than about 80 people could have been on the site at one time. I still got SQL errors (emailed to me from Application_Error). I read on MSDN that by default connection pooling should be in place, with a pool size of 100... which would mean I shouldn't have this problem if only 80 students were on the site.
Why is this happening?
Below is my connection string from web.config:

<add key="connStr" value="Server=xxxxxx;Database=xxxx;UID=xxx;PWD=xxx;"></add>

Error: System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
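In case it helps anyone diagnosing the same thing, this query (SQL 2005 and later) shows how many sessions each application/host is actually holding open on the server, which is usually where leaked, undisposed connections show up:

-- Count open sessions per application and host to spot connections
-- that are opened but never closed/disposed.
SELECT host_name, program_name, COUNT(*) AS open_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name
ORDER BY open_sessions DESC;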
When using a connection string to connect to SQL Server, there is an item, max pool size. The default is 100. I set it to 500, but I still get the "max pool size was reached" timeout error.
I checked my code, and I do open and close each connection every time I use one, so there should be no leak.
What is the maximum value for max pool size? How do I calculate this size based on the memory used by SQL Server?
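As a rough way to relate pool size to server memory, sys.dm_exec_sessions reports each session's memory_usage in 8 KB pages, so the per-connection server-side cost can be measured rather than guessed. A sketch, not a sizing formula:

SELECT COUNT(*)                  AS user_sessions,
       SUM(memory_usage) * 8     AS total_session_memory_kb,
       AVG(memory_usage * 8.0)   AS avg_session_memory_kb
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;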
I am reading about Buffer Pool Extensions and how the feature stores data pages on media like an SSD to speed up retrieval in the future. Would this be useless if my mdf files are already on SSD media? At most, I envisage it meaning that instead of grabbing the data from the mdf, SQL Server would grab it from the buffer pool extension drive, but if both are on SSDs, I'm not sure how much return I would see.
Has anyone decided to use BPE when their data is already on SSDs, and have they noticed any improvement in these cases?
Can we extend the buffer pool onto an SSD drive that is shared? For instance, if we had a mirrored system drive (the C: logical partition) on SSDs, could we use the remaining space on that mirrored partition to extend my SQL 2014 buffer pool? I understand that in this scenario there is some competition for I/O throughput between SQL Server's extended buffer pool and the OS. We intend to have the pagefile on a different disk than this specific SSD.
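For reference, the documented syntax for enabling this in SQL Server 2014 is below; the file path and size are placeholders, not a recommendation for the shared partition:

-- Enable Buffer Pool Extension on the SSD
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'C:\SqlBpe\MyInstance.bpe', SIZE = 64 GB);

-- Disable it again if the I/O competition proves to be a problem:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;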
Hello there, I have a small Excel file which, when I try to import it into SQL Server, gives the error "Data for source column 4 is too large for the specified buffer size". I have four columns in the Excel file; one of the columns contains a large chunk of data, so I created a table in SQL Server and changed the type of that field to text so I could accommodate it, but still no luck. Any suggestions as to how to go about this?
Thanks in advance,
Srikanth Pai
I tried to synchronize a SQL 2000 database with the SQL Mobile server on a PDA using SQL Server Management Studio from SQL 2005. I got the error "The buffer pool is too small or there are too many open cursors. HRESULT 0x80004005 (25101)". The SQL database file size is 120MB.
I create the .sdf file on PDA with the following C# code:
string connString = "Data Source='Test.sdf'; max database size = 400; max buffer size = 10000;";
SqlCeEngine engine = new SqlCeEngine(connString);
engine.CreateDatabase();
I think the 400MB max database size and 10000KB max buffer size are big enough to hold that SQL database's data, and I have already successfully synchronized my PDA with another, smaller SQL Server database file. I have kept trying and searching for a couple of days and still cannot figure it out.
Moreover, the synchronization always stops at the same table.
We noticed a deadlock 3-4 weeks ago on a table (table1), and the deadlock graph was captured.
When I analyze the page number from the deadlock graph using DBCC PAGE, I get the object ID of a different table (table2), yet the deadlock graph shows the name of the object as table1.
Is it possible that subsequent defragmentation of the indexes caused that page to be re-allocated to a different table? I only got around to checking the deadlock graph 3-4 weeks later.
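For anyone retracing this, the (undocumented) page inspection I ran looks roughly like this; the database name, file ID, page ID, and object ID are placeholders for the values from the deadlock graph:

DBCC TRACEON (3604);               -- route DBCC PAGE output to the client
DBCC PAGE ('MyDb', 1, 12345, 0);   -- header only; note the Metadata: ObjectId value
SELECT OBJECT_NAME(99999999);      -- resolve the ObjectId reported in the output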
I have a problem importing an xls file into a SQL table using MS SQL 2000 Server. The main problem is that the xls file contains one column holding a large amount of text, approximately 1,500 characters long. I have tried to work around it by saving the xls as a CSV or text file and then importing, but that also fails to copy the whole text of the column: a column in the xls with 995 characters ends up with 560 characters in the text or CSV file. So that approach is also wrong.
Help! We have recently upgraded from 6.5 to 7.0 and have come across a problem with performance. The problem appears to relate to the buffer cache being flushed: the buffer cache hit ratio drops from 98% to 0% in a matter of a second. It then grows very slowly, is flushed again, and then increases slowly up to 30%.
Does anyone have any ideas as to what would flush the buffer cache?
I am working with merge replication. The DB server is running Windows 2003 Server Std x64 with SQL 2005 Std x64. The IIS server is running Windows 2003 Server Std x86. The clients are Windows Mobile 5.0 running SQL 2005 Mobile.
I have previously been able to get the test client to initialize a subscription to the publication. I recently made some changes and added about 7 tables, bringing the total number of articles to 104. Some of the articles have unpublished columns, which means they use vertical filtering as well. This publication also uses parameterized filters. During publication creation, there are several warnings: "Warning: column 'rowguid' already exists in the vertical partition".
The snapshot agent runs without any errors. When the client attempts to initialize, I get the error "The buffer pool is too small or there are too many open cursors."
My question is in two parts. First, what is causing this error, and second, how can it be resolved?
I’m attempting to use DTS to import data from a Memo field in MS Access (Jet 4.0 OLE DB Provider) into a SQL Server nvarchar(4000) field. Unfortunately, I’m getting the following error message:
Error at Source for Row number 30. Errors encountered so far in this task: 1. Data for source column 2 (‘Html’) is too large for the specified buffer size.
I also get this error message when attempting to import the same data from Excel.
Per the MS Knowledgebase article located at http://support.microsoft.com/?kbid=281517, I changed the registry property indicated to 0. This modification did not help.
Per suggestions in other SQL Server forums, I moved the offending row from row number 30 to row number 1. This change only resulted in the same error message, but with the row number indicated as "Row number 1". (Incidentally, the data in this field is greater than 255 characters in every row, so the cause described in the Knowledge Base article doesn't seem to be my problem.)
You might also like to know that the data in the Access table was exported into this table from a SQL Server nvarchar(4000) field.
Does anybody know what might trigger this error message other than the data being less than 255 characters in the first eight rows (as described in the KB article)?
I’ve hit a brick wall, so I’d appreciate any insight. Thanks in advance!
My problem is that I cannot completely clean the buffer cache on SQL Server 2005 version 9.00.2047.00 (probably SP1).
Right after I run DBCC DROPCLEANBUFFERS in the context of my database (this is a development server, and so far I am the only one working with this particular database), I run a script that queries the sys.dm_os_buffer_descriptors view, also from the context of my database, to make sure that the buffer cache is really clean. However, it shows a large number of entries totalling 42 MB.
I have run both the DBCC and the script in the past too, and they always showed nothing in the results, meaning the buffers really were clean. The reason I am running this is to benchmark existing and new versions of an application.
Does anybody have any ideas or suggestions on how to troubleshoot this issue? I have already closed all connections to this database, but rebooting the server is not an option since other people are also working on it.
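One detail that may matter here: DBCC DROPCLEANBUFFERS only evicts clean pages, so dirty pages stay cached, and my working theory is that the 42 MB are dirty pages. Issuing a CHECKPOINT first writes them out so they become clean and droppable; this is the documented combination for cold-cache benchmarking:

CHECKPOINT;                -- flush dirty pages of the current database to disk
DBCC DROPCLEANBUFFERS;     -- then evict all clean buffers

-- Verify what is left cached for this database
SELECT COUNT(*) / 128.0 AS buffer_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID();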
There are some more columns with more nvarchar(max) and other INT data types. Anyway, I know a page is 8KB in size. How do I find out how much space A ROW takes with the above data types? And if users add 5,000 rows per day, how do I figure out how much the table will grow?
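A sketch of how this can be measured from actual storage rather than adding up data type sizes (dbo.MyTable is a placeholder name): sys.dm_db_partition_stats reports used pages (including the LOB pages behind the nvarchar(max) columns) and row counts, so average bytes per row falls out, and the 5,000 rows/day growth extrapolates from that:

SELECT SUM(ps.used_page_count) * 8 AS used_kb,
       SUM(ps.row_count) AS rows_in_table,
       SUM(ps.used_page_count) * 8192.0 / NULLIF(SUM(ps.row_count), 0) AS avg_bytes_per_row,
       SUM(ps.used_page_count) * 8192.0 / NULLIF(SUM(ps.row_count), 0)
           * 5000 / 1048576 AS est_growth_mb_per_day
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.MyTable');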
After converting from SSIS 2008 to SSIS 2012, I am facing a major performance slowdown while loading fact data. When we used 2008, one file used to take around 2 hours on average; after converting to 2012, it took 17 hours to load one file. This is the current scenario: we load data into staging, select everything from staging (28 million rows), and use a lookup for each dimension. I believe it is taking a very long time due to one dimension table, which has 89 million rows.
With the lookup, we are currently using a partial cache because a full cache caused the system to run out of memory. In the Lookup Transformation Editor, how do I increase the 64-bit partial cache size? I am stuck at 4096 MB and cannot increase it. In 2008, I had a 200,000 MB partial cache size.
This issue just happened recently. The buffer cache hit ratio went from over 90% to 50% and has slowly been climbing back up over 8 hours or so. It's currently at 76%. Is this something I should take action on immediately? It seems to be coming back to normal...
Hi, I am having trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two weeks it has started to behave abnormally. After the last optimization (a few months ago) it worked well (fast, without blocking), and its buffer cache hit ratio was about 99.7-99.8. Then one day it started to run slowly, with many blocks and deadlocks. No new queries, jobs, or applications were added. Now the buffer cache hit ratio oscillates around 95-98. I have tried updating statistics and reindexing some heavily used tables, but there is no effect, or the effect is very short-lived (after a few hours the problem returns). Maybe someone knows what it could be? Is it possible to estimate (using DBCC SHOW_STATISTICS, DBCC SHOWCONTIG, or others) how each table affects the total buffer cache hit ratio?
Marek
www.programowanieobiektowe.pl
I've never worked with the XML data type in SQL Server, although I know it's been there for a few iterations of SQL Server. Now I've got a situation in which we might store some configuration data as XML, since that's the way it comes. (We had thought about storing the data in a VARCHAR(MAX) field.)
The first question is: does the XML data type have a size limitation? For example, do you do something like:
ConfigFile XML(1000) NULL
Or is it just something like this:
ConfigFile XML NULL
The second question is about persisting the data to a file. As the name I chose for the variable suggests, we want to save the data from a configuration file into a SQL Server database. How do we go about doing that? We'll be developing a C# application; it will read and write the data both from the SQL table and from the user's local HD.
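For illustration of the two forms above: the xml type takes no length specifier (each value is simply capped at 2 GB), so only the second form is valid. A sketch with placeholder table and column names:

-- xml columns take no size argument; each value can be up to 2 GB.
CREATE TABLE dbo.AppConfig                -- placeholder table name
(
    ConfigId   int IDENTITY(1,1) PRIMARY KEY,
    ConfigFile xml NULL                   -- no (1000)-style length is allowed
);

INSERT INTO dbo.AppConfig (ConfigFile)
VALUES (N'<settings><logLevel>info</logLevel></settings>');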
I'm trying to query a table in which the data in one cell is 65KB, and when I try to do a SELECT, I am unable to get the entire data from the cell.
SELECT CAST(Xml_data as XML) from TableName where ID=100

Error Message: Msg 9448, Level 16, State 1, Line 1
XML parsing: line 241, character 76, well formed check: undeclared entity
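The undeclared-entity message usually means the text contains an HTML-style entity that bare XML does not define, so the cast fails mid-parse. As a sketch against the same table and column names, the full raw text can be retrieved without the XML cast, or the offending entity (assumed here to be &nbsp;, purely as an example) replaced with a numeric character reference before casting:

-- Retrieve the full raw text without XML parsing
SELECT CAST(Xml_data AS nvarchar(max)) AS raw_text
FROM TableName WHERE ID = 100;

-- If the only offender is a known entity, replace it first
SELECT CAST(REPLACE(CAST(Xml_data AS nvarchar(max)), '&nbsp;', '&#160;') AS xml) AS parsed_xml
FROM TableName WHERE ID = 100;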