I have configured the memory for SQL Server as follows:
Min: 512 MB
Max: 13500 MB
Total RAM on the server is 16 GB.
I want SQL Server to email me when it is running low on memory (less than 200 MB free), so that I can head off out-of-memory problems and keep SQL Server running without failures.
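One way to do this, assuming SQL Server 2008 or later and a working Database Mail setup, is a small check scheduled as a SQL Server Agent job; the mail profile name and recipient below are placeholders, not real values:

-- Minimal sketch: run this as an Agent job step every few minutes.
DECLARE @free_mb int;

SELECT @free_mb = available_physical_memory_kb / 1024
FROM sys.dm_os_sys_memory;

IF @free_mb < 200
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail Profile',       -- placeholder profile
        @recipients   = N'dba-team@example.com',   -- placeholder recipient
        @subject      = N'Low memory warning',
        @body         = N'Available physical memory has dropped below 200 MB.';
END;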
So I started a new job recently and have noticed a few strange configurations. Typically I would never touch the min memory per query or index create memory options because I just haven't seen any need to; my usual thinking is "if it isn't broken, don't fix it." Yet they have been modified on every single server in my environment.
From Books Online:
• This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
• The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
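If the intent is simply to put them back, a minimal sketch (assuming nothing on those servers actually depends on the modified values) is to check the run values and reset both options to their documented defaults, 1024 KB and 0 respectively:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Current configured and run values
EXEC sp_configure 'min memory per query (KB)';
EXEC sp_configure 'index create memory (KB)';

-- Reset to defaults: 1024 KB, and 0 = self-configuring
EXEC sp_configure 'min memory per query (KB)', 1024;
EXEC sp_configure 'index create memory (KB)', 0;
RECONFIGURE;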
Is there an alert for physical memory in SQL2K5? My requirement is that I should get an alert when the free space on a particular drive drops below a threshold.
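As far as I know there is no built-in disk-space alert in SQL 2005 (Agent performance-condition alerts only cover SQL Server counters), but a sketch of a workable approach is an Agent job that reads xp_fixeddrives and mails when a drive drops below the threshold; the drive letter, threshold, mail profile and recipient are placeholders:

DECLARE @drives TABLE (drive char(1), free_mb int);

INSERT INTO @drives (drive, free_mb)
EXEC master.dbo.xp_fixeddrives;

IF EXISTS (SELECT 1 FROM @drives WHERE drive = 'E' AND free_mb < 10240)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail Profile',       -- placeholder
        @recipients   = N'dba-team@example.com',   -- placeholder
        @subject      = N'Low disk space on E:',
        @body         = N'Free space on drive E: has dropped below 10 GB.';
END;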
We are going to install a SQL Server 2012 Enterprise Edition two-node (active/passive) cluster with only one instance. The issue is that separate shared storage has not been provisioned for MSDTC.
1. Is it mandatory to configure MSDTC for a single SQL 2012 instance?
2. Can we use one of the existing shared drives (data/log/backup/tempdb) for configuring MSDTC?
I've built 4 new SQL 2012 Enterprise SP2 instances using our standard build docs/scripts, all of which include an alert to pick up severity 14 bad-login messages.
However, when I force a bad login to test these, the alert works on 2 servers but not the other 2.
The ones that fail still write the severity 14 message to the SQL error log and the Windows event log fine, but there is no entry in the sysalerts table.
The alert is enabled and the Windows Event Log service is running. I've noticed that none of the other alerts I've set up write to the sysalerts table either.
I'm not concerned about that yet, just the fact that it's not writing to the sysalerts table in the first place.
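A quick way to compare the four servers is to pull the alert definitions and their hit counters straight from msdb and see whether the failing servers are recording occurrences at all:

SELECT name,
       enabled,
       severity,
       message_id,
       occurrence_count,
       last_occurrence_date,
       has_notification
FROM msdb.dbo.sysalerts
ORDER BY name;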
I'm trying to set up a SQL Alert to run when a Failed login error is generated. For whatever reason, I can't seem to get this working. I have it set up to run on error 18456 (Failed login), I have the server set to log failed login attempts and I can see the entry in the log file, but the alert never occurs. Basically, I want a way to detect when someone is trying to hack into my database using a brute force approach.
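For reference, a minimal sketch of such an alert; the operator name is a placeholder, and the alert only fires if 18456 is actually written to the Windows Application log, which failed-login auditing takes care of:

EXEC msdb.dbo.sp_add_alert
    @name       = N'Failed login (18456)',
    @message_id = 18456,
    @severity   = 0,
    @enabled    = 1,
    @delay_between_responses = 60,
    @include_event_description_in = 1;   -- 1 = include the message text in the e-mail

EXEC msdb.dbo.sp_add_notification
    @alert_name          = N'Failed login (18456)',
    @operator_name       = N'DBA Operator',   -- placeholder operator
    @notification_method = 1;                 -- 1 = e-mail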
Any fix for the seemingly random sort order of the variables in the dropdown list when configuring parameters and connection managers in the SSISDB catalog?
I imported all of our connection strings into an environment (about 200 of them). They were inserted in alphabetical order, and the ID values in the internal.environment_variables table show them in order as well, both by ID and by name. When I capture the retrieval command with Profiler and run it in SSMS, the rows come back in order, but in the dropdown they appear random.
There is nothing in any of the tables that accounts for the order they appear in.
If a package has 5 connections you need to go through the unsorted list 5 times to find them.
Sometimes you get lucky and they are in the first 20 or so.
I know I can write a script, just wondering if there is a fix for the sorting.
In my SQL Server error log I see the error below. The system has 8 GB of RAM with plenty free; is there something I can do to prevent this? (Note: I have no min/max memory set on this instance.)
A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 328 seconds. Working set (KB): 76896, committed (KB): 167628, memory utilization: 45%.
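The usual mitigations are setting max server memory and granting the service account Lock Pages in Memory; before changing anything, a quick sketch of how to see whether the process itself is reporting memory pressure:

SELECT physical_memory_in_use_kb / 1024  AS physical_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_pages_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;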
We have a new setup on a VM to run a 24x7 application (migrated from SQL Server 2008 R2) with the configuration below:
1. OS: Windows Server 2012 Standard, 64-bit, hosted on a virtual machine
2. Memory: 16 GB, 4 cores at 2.4 GHz
3. SQL Server 2012 SP2, 64-bit, Standard Edition
4. Total size of the databases is currently 15 GB, the biggest being 5 GB.
How should I go about setting the max and min server memory? I have set this up on many SQL 2005 and 2008 R2 servers, but I have heard that things changed slightly for 2012.
How should I start analyzing and choosing the right values for max and min?
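A common starting point on a dedicated 16 GB VM is to leave roughly 4 GB for the OS and cap SQL Server at about 12 GB, then adjust after watching Page Life Expectancy and the OS Available MBytes counter over a normal workload cycle. A sketch, where the numbers are assumptions rather than a rule:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 12288;
EXEC sp_configure 'min server memory (MB)', 2048;
RECONFIGURE;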
Apparently this error was fixed in CU12 for SQL 2008, but it seems to have reared its head again in SQL 2012: [SSIS.Pipeline] Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
I've got a client who is seeing it, but I've not seen a fix in CU1 or CU2 for 2012.
When I checked the properties of the statistic, I can see it is on a varchar(3) field that contains only 3 distinct values, all of them a single character.
The total size of the data in the table, according to the Disk Usage by Top Tables report, is 199,680,712 KB.
So my question is this...
For the UPDATE STATISTICS on this one column WITH FULLSCAN, does SQL Server read the entire table into the buffer pool? If so, and the table holds 199,680,712 KB of data, why did the session request only 145,705,216 KB?
Or does SQL Server just read the column and the clustered index key into the buffer pool?
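One way to answer this empirically is to run the update and then count how many of the table's pages are actually sitting in the buffer pool; the table and statistic names below are placeholders:

UPDATE STATISTICS dbo.BigTable (Stat_Col3) WITH FULLSCAN;

-- Pages belonging to the table that are now cached in this database's buffer pool
SELECT COUNT(*)       AS cached_pages,
       COUNT(*) / 128 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units         AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions               AS p  ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
                                        OR (au.type = 2       AND au.container_id = p.partition_id)
WHERE bd.database_id = DB_ID()
  AND p.object_id    = OBJECT_ID(N'dbo.BigTable');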
I have an XML file of about 25 MB. When I read it with OPENROWSET it gives me 'System.OutOfMemoryException'. The machine I'm running on has 16 GB of RAM and memory is definitely not exhausted. Smaller XML files work fine.
I've read that older versions of SQL Server had this problem because the parser had a limited amount of memory. Is this still the case? Is there a way to change this?
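For comparison, the pattern that generally copes well with larger files is to pull the file in as a single BLOB and cast it to xml afterwards; the path below is a placeholder:

DECLARE @doc xml;

SELECT @doc = CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK N'C:\data\large_file.xml', SINGLE_BLOB) AS src;

-- Quick sanity check that the document was parsed
SELECT @doc.value('count(/*)', 'int') AS root_element_count;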
Can I use the query below for high CPU utilisation alerts on both named and default instances, or do I need to make any changes here (@wmi_namespace = N'\\.\ROOT\CIMV2')?
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'CPU_WM_Utilization_Check', @message_id=0, @severity=0,
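For reference, a sketch of a complete version of that alert, assuming SQL Server Agent is allowed to subscribe to the machine-level CIMV2 namespace as the snippet implies. Because \\.\ROOT\CIMV2 is machine-wide rather than instance-specific, the same definition should work for a default or a named instance; the response (job or operator notification) still has to be attached separately:

USE msdb;
GO
EXEC msdb.dbo.sp_add_alert
    @name          = N'CPU_WMI_Utilization_Check',
    @message_id    = 0,
    @severity      = 0,
    @enabled       = 1,
    @wmi_namespace = N'\\.\ROOT\CIMV2',
    @wmi_query     = N'SELECT * FROM __InstanceModificationEvent WITHIN 60
                       WHERE TargetInstance ISA ''Win32_Processor''
                         AND TargetInstance.LoadPercentage > 90';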
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB FROM sys.dm_os_buffer_descriptors
Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
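To see where the rest of the 58 GB is going when the buffer pool only accounts for 5-10 GB, a sketch that lists the largest memory clerks (SQL 2012 exposes pages_kb per clerk):

SELECT TOP (15)
       type,
       name,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
ORDER BY memory_mb DESC;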
We are just finishing our migration to SQL 2012. In our old environment, the instance which held our SharePoint databases also served other applications. We did not experience any performance related issues in the past due to this.
SharePoint basically requires MAXDOP to be 1, which is how the old server is configured. Since this configuration may not be ideal for other applications that may be placed in our environment, we are entertaining the idea of isolating SharePoint into its own instance, probably on the same box.
My manager wants me to come up with performance trace data to better prove that we need to go this route since we apparently have had issues in the past by blindly following Microsoft's best practices.
1. MAXDOP configuration - I understand this may be a two-pronged approach that would require looking at various execution plans and CPU-related counters in Perfmon. SharePoint likely requires a MAXDOP of 1 because of the nature of the application (lots of concurrent processes). What is the best way to show this need graphically?
2. Memory configuration for multiple instances - Does Total Server Memory reveal all the memory that a given SQL instance is using? Should I use this counter to identify appropriate min/max memory configurations for multiple instances on a single cluster?
The problem with the Perfmon approach is that its scope is limited to the server as a whole. Since our SharePoint environment currently shares the instance with other applications, I understand that I may have to use DMV statistics to narrow down my analysis.
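On the memory question, a sketch of reading the relevant counters from inside each instance rather than from Perfmon; Total Server Memory is what the instance currently holds and Target Server Memory is what it wants, so trending both per instance gives a defensible basis for the min/max split:

SELECT counter_name,
       cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN (N'Total Server Memory (KB)', N'Target Server Memory (KB)');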
I have a report that uses three tables from a database. As long as I only use two of the tables, it runs fine. I need the data from the third table for a line chart, and of course there is a great deal of data in that third table. I have a WHERE clause for start date and end date. Is there a way I could only search the third table after I know what I need from it? Or...
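A sketch of the usual approach, with placeholder table and column names: filter the large third table on the report's date parameters first, then join that reduced set to the other two tables:

DECLARE @StartDate date = '2015-01-01',   -- report parameters (placeholder values)
        @EndDate   date = '2015-02-01';

WITH chart_rows AS
(
    SELECT t3.Table2Id, t3.EventDate, t3.Value
    FROM dbo.Table3 AS t3
    WHERE t3.EventDate >= @StartDate
      AND t3.EventDate <  @EndDate
)
SELECT t1.Name, t2.Category, c.EventDate, c.Value
FROM dbo.Table1 AS t1
JOIN dbo.Table2 AS t2 ON t2.Table1Id = t1.Id
LEFT JOIN chart_rows AS c ON c.Table2Id = t2.Id;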
I have around 100 packages (all packages run at the same time). Each package calls a stored procedure, and once the stored procedure finishes, the package writes log information into a log table. How can I capture the CPU and memory usage of each stored procedure execution?
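One option that needs no changes to the packages is to read the cumulative figures SQL Server already keeps per procedure; each call could also snapshot these values before and after execution and write the delta to the log table. Memory grants are only visible while a query is running (sys.dm_exec_query_memory_grants), so CPU and reads are the reliable per-procedure numbers here:

SELECT DB_NAME(ps.database_id)                   AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_worker_time / 1000               AS total_cpu_ms,
       ps.total_elapsed_time / 1000              AS total_elapsed_ms,
       ps.total_logical_reads,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_worker_time DESC;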
I have been having issues with our SQL Server for a while now. It seems to run out of memory every few days, and when I look at the memory dump, the MEMORYCLERK_SQLOPTIMIZER clerk seems to take over memory and eventually causes the server to crash.
Here is the SQL version we are using: Microsoft SQL Server 2012 (SP1) - 11.0.3460.0 (X64) Jul 22 2014 15:22:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200:) (Hypervisor). It is a VM on Windows Server 2012. It has 20 GB of RAM allocated to it, and max server memory is set to 16.5 GB.
I have seen MEMORYCLERK_SQLOPTIMIZER grow to about 11 GB at the time of the crash. Why is that happening? What is causing the optimizer clerk to get so high? From what I have read it has to do with ad hoc requests, but is there something I can do to bring that memory down when it gets so high, so that I can prevent a crash? Do we just need to add more memory, or is there a memory leak somewhere?
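If the growth really is driven by a flood of ad hoc compilations, as suspected, the usual mitigation is below; this is a sketch, and whether it helps depends on whether the optimizer clerk actually tracks the ad hoc workload on this server:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Emergency relief only: throw away cached ad hoc/prepared plans without a restart
DBCC FREESYSTEMCACHE('SQL Plans');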
I know I should probably be posting this in the RS section, but I have a Windows 2008 R2 server with RS 2012, SSIS and the Database Engine installed. I also have a SQL 2005 instance with SQL 2005 SSIS running on the server.
I saw that Reporting Services was consuming 9 GB of RAM a few days ago with no published reports. It's just a default install.
So I investigated the settings and the document: [URL]...
I added this section to the rsreportserver.config file to restrict memory usage.
We are planning to upgrade. We are using SQL 2008 R2 now. Which is the better option: migrating to SQL 2012 or migrating to 2014? I am thinking 2014, since it has memory-optimized tables and updatable columnstore indexes, so it seems the better option.
The OS is Windows Server 2012 64-bit and SQL Server is 2012 64-bit on both servers.
Each server has around 65 GB of RAM installed, of which 49 GB is allocated as max server memory for the SQL services.
The databases related to Reporting Services are also in the Always On availability group.
Reporting Services is configured on both servers and runs on each of them.
The issue is that on the primary server the Reporting Services process uses almost 7 GB, while on the secondary it uses 10 GB, even though there are only 5 reports and they are used only within our offices.
What could the issue be, and how do I check why SSRS is using so much memory?
Are there any queries or Perfmon counters I can use?
Reports are run sporadically from the client side.
I have checked memory utilisation through Task Manager.
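SSRS memory is not visible through the SQL Server DMVs, so it has to be watched with the Process counters (Private Bytes / Working Set for the Reporting Services service) in Perfmon. What can be queried is the report server execution log, which shows which reports ran and how heavy they were; this sketch assumes the default catalog database name of ReportServer:

SELECT TOP (50)
       ItemPath,
       UserName,
       TimeStart,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering,
       ByteCount,
       [RowCount]
FROM ReportServer.dbo.ExecutionLog3
ORDER BY ByteCount DESC;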
Hello. I have received the following error upon an attempt to browse the cube. All other tabs are functional, including the Calculations tab. We are running Windows Server 2003 SP2 and SQL Server 2005 SP2. Any suggestions would be greatly appreciated!
**EDIT** - Have confirmed SP1 for VS2005 is installed both locally and on server, also.
Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Microsoft Visual Studio)
------------------------------ Program Location:
at Microsoft.Office.Interop.Owc11.PivotView.get_FieldSets()
at Microsoft.AnalysisServices.Controls.PivotTableFontAdjustor.TransformFonts(Font font)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdatePivotTable(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdateAll(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.InitialUpdate()
at Microsoft.AnalysisServices.Browse.CubeBrowser.SupportFunctionWhichCanFail(FunctionWhichCanFail function)
I've been researching AWE to determine if we should enable this for our environment.
Currently we have a quad-core box with 4 GB of RAM (VMware). OS: Windows 2003 Standard, SQL Server 2005 Standard. The /3GB switch is not set, but it will be as soon as we can perform maintenance on the server.
I have read mixed feedback on AWE: either it works great or it grinds you to a halt. I would assume that the grinding to a halt is due to not setting the min/max values correctly or not enabling the Lock Pages in Memory setting.
We only have one instance of SQL on the server and this box won't be used for anything else aside from hosting SQL services. We do plan on running SSRS off of this server as well.
1. Will running SSRS and enabling AWE cause me problems? Will I have to reduce the max setting by the SSRS memory usage or will it share and play nice?
2. How do I go about setting the max value? Should it be less than the physical RAM in the box? Right now it's set to the default of 2147483647; even if I don't enable AWE, should this default value be changed?
3. It seems that even at idle the SQL Server holds a lot of memory and the page file grows. If I restart the process in the morning, memory usage in Task Manager is at 600 MB or so. By the end of the day it's up around 2 GB. How can I track down what's causing this, and should this even concern me?
4. The Lock Pages in Memory setting worries me. Everything I've read on it warns about serious degradation of the OS and other programs, in some cases to the point where the settings have to be restored before the server can be brought back up. What are your thoughts on this?
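For what it's worth, a sketch of the settings involved on a 32-bit SQL 2005 Standard box. The numbers assume the current 4 GB of RAM and leave roughly 1 GB for the OS and SSRS, so they would need revisiting if RAM is added, and 'awe enabled' only takes effect after a service restart with the Lock Pages in Memory right granted to the service account:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;
EXEC sp_configure 'max server memory (MB)', 3072;
EXEC sp_configure 'min server memory (MB)', 1024;
RECONFIGURE;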
I have a Windows Server 2012 machine with SQL Server 2012 Enterprise. RAM size is 22 GB. Sometimes SQL Server takes 95% of the memory. My question: how do I reduce the memory usage without killing any process, because it's a production server and there are many background processes running? Also, is there any guide to learning why memory climbs so high and how to reduce it?
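Capping 'max server memory' is the supported way to make SQL Server give memory back, and the setting is dynamic, so nothing has to be killed or restarted. Before lowering it, a sketch of a query showing which databases' pages are occupying the buffer pool, which is usually most of what Task Manager reports:

SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id) END AS database_name,
       COUNT(*) / 128                     AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;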
Hello, I understand that we should use SSMS -> Server Properties -> Memory to put a cap on SQL Server memory usage, which leaves some memory for the OS; this is based on the fact that if max memory is not specified, SQL Server will use whatever memory is available and can eventually bring down the system.
My question is: when a server has the SSIS and SSAS services installed along with the SQL service, does the max memory setting cover the SSIS and SSAS memory usage, or do SSIS and SSAS have to share the memory left for the OS?
I am running Visual Studio 2005. I have an SSIS package which is consuming a huge amount of memory. During execution of the package the memory keeps increasing until I finally get an Out of Memory exception. I have run this package using dtexec and in BIDS; no difference. I do have some script components, and I added some code to list the assemblies in the current AppDomain. I can see that one particular assembly grows on every loop: VBAssembly increases by 6 every time it hits the script component, and the memory climbs along with it. What is this VBAssembly used for, and is there an update to SQL Server Integration Services that I need?
SQL Server 2000 is running on Windows Server 2003 with 4 GB of memory on the server. According to the Idera monitoring software, the OS was allocated about 2.3 GB and SQL Server was allocated (and using all of) 1.6 GB, for a total of roughly 4 GB; all memory was accounted for between the OS and SQL Server. Then 4 more GB of memory were added, for a total of 8 GB. Now the Idera monitor shows 1.7 GB for the OS and only 1.0 GB for SQL Server. System information shows 8 GB of memory with PAE, so I assume the full 8 GB can now be addressed. Why are fewer resources being used now that there is more total memory, especially by SQL Server? I thought about specifying a minimum memory for SQL Server, but I am not convinced that would even work, since this 1 GB limit seems artificial. If it used 1.6 GB before, why would it not use at least that much now?
I have a database with a memory-optimized filegroup on it. How can I remove it? I have removed the memory-optimized table I had in it, but when I try to remove the filegroup I receive an error.