Integration Services :: High Memory Utilization
Apr 28, 2015
We have a system (32 GB RAM, 2 TB hard disk, Windows 7, SQL Server 2008 R2 Enterprise 64-bit). Whenever I run a query on the database (even one returning only 50 records), memory utilization in Task Manager is very high (30 GB). How can I control this overuse? The memory settings in Server Properties are at their defaults (min 0 and max 2147483647).
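This is expected behavior rather than a leak: SQL Server caches data pages in its buffer pool and releases memory only under OS pressure, so the 30 GB in Task Manager is mostly cache. A common mitigation, sketched here on the assumption that roughly 6 GB should be left for the OS (adjust to your workload), is to cap max server memory:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool at 26 GB, leaving ~6 GB of headroom for the OS
EXEC sp_configure 'max server memory (MB)', 26624;
RECONFIGURE;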
View 7 Replies
Mar 26, 2015
We have two servers configured in AlwaysOn mode.
Windows Server and SQL Server are both 2012 64-bit editions.
Each server has around 65 GB of RAM, of which 49 GB is allocated to SQL Server as max server memory.
The databases used by Reporting Services are also in the availability group, and Reporting Services is configured and running on each server.
The issue: on the primary server the Reporting Services process is using almost 7 GB, while on the secondary it is using 10 GB, even though there are only 5 reports and they are used only within our offices.
What could the issue be, and how can I check why SSRS is using so much memory? Is there any query or perfmon counter for this?
Reports are run sporadically from the client side; so far I have only checked memory utilization through Task Manager.
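One place to start, assuming the default ReportServer catalog database name: the ExecutionLog3 view records every report execution with its row and byte counts, which shows whether a few heavy reports account for the footprint. Note that SSRS process memory is capped through WorkingSetMaximum in its rsreportserver.config file, not through sp_configure.

-- Recent report executions, heaviest first (SSRS 2008 R2 and later)
SELECT TOP 20 ItemPath, UserName, TimeStart,
       TimeDataRetrieval, TimeProcessing, TimeRendering,
       ByteCount, [RowCount]
FROM ReportServer.dbo.ExecutionLog3
ORDER BY ByteCount DESC;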
View 0 Replies
Sep 27, 2001
I have a few in-house developed applications (VB based) that access the SQL Server for adding, appending, and creating tables. The applications make these changes through queries generated dynamically at the application level.
My MS SQL Server runs on a PIII / 256 MB RAM / 18 GB HDD.
The problem is that the memory utilization of SQL Server keeps growing constantly. Of the 512 MB available (256 MB physical + 256 MB virtual), utilization reaches a level of 490 MB and stays constant, though SQL Server itself shows a utilization of only 150 MB.
I suspect that SQL Server is not releasing memory back to the system. Please help me resolve this; the problem may also lie in the applications.
Jdindian
View 2 Replies
Aug 25, 2015
I know I should probably be posting this in the RS section, but I have a Windows 2008 R2 server with RS 2012, SSIS, and the database engine installed, plus a SQL 2005 instance with SQL 2005 SSIS running on the same server.
I saw a few days ago that Reporting Services was consuming 9 GB of RAM with no published reports. It's just a default install.
So I investigated the settings and the document: [URL]......
I added this section to the rsreportserver.config file to restrict memory usage:
<WorkingSetMaximum>512000</WorkingSetMaximum>
<WorkingSetMinimum>128000</WorkingSetMinimum>
Does anyone have an example of altering rsreportserver.config in a standard build of SQL Server Reporting Services?
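For reference, a sketch of how those elements sit in context, assuming SSRS 2008 R2+ defaults (back up the file first; the WorkingSet values are in kilobytes):

<MemorySafetyMargin>80</MemorySafetyMargin>
<MemoryThreshold>90</MemoryThreshold>
<!-- Values are in KB: 512000 KB is a 500 MB ceiling, 128000 KB a 125 MB floor -->
<WorkingSetMaximum>512000</WorkingSetMaximum>
<WorkingSetMinimum>128000</WorkingSetMinimum>

A restart of the Reporting Services service is required for the change to take effect.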
View 0 Replies
Mar 21, 2000
After a fresh install of SQL 6.5 with SP5a (or without), the CPU runs at anywhere from 50%-80%. It is loaded on a PDC, but when I stop the SQL service, CPU utilization drops to 0-2%. When I start the SQL service it's right back up there. Does anyone have a suggestion as to how to fix this, or why the service would be doing this?
Thanks, Christel
View 1 Replies
Sep 5, 2002
I am getting high CPU utilization on the SQL Server process (>90%).
However, the overall utilization (NT, the entire box) always seems to be under 50%.
Can someone explain why this is happening? The server is a quad; the SQL Server process seems to be using only two CPUs at a time (not the same ones all the time).
Lightweight pooling has been turned on, and the maximum worker threads setting has been left at the default value (255).
How can I configure SQL Server options to spread the load across all four CPUs?
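A hedged first check, assuming SQL 7.0/2000-era defaults: confirm that the affinity mask is not pinning SQL Server to particular CPUs and that parallelism is not capped (0 means "use all CPUs" for both). Note that >90% on the SQL Server process with ~50% overall on a quad is consistent with two of four CPUs staying busy.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 = schedule on all CPUs (default); a nonzero bitmask pins SQL Server to specific CPUs
EXEC sp_configure 'affinity mask';
-- 0 = unlimited parallelism across all available CPUs
EXEC sp_configure 'max degree of parallelism';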
View 2 Replies
Oct 17, 2006
Could anyone help me find out why the CPU utilization is so high?
I have two servers, say Server A and Server B, with transactional replication running from Server A to Server B.
A table on Server A, say Table A, is replicated to Server B.
I created insert and update triggers on Table A on Server B (i.e. on the subscriber). Since then, the CPU utilization on Server B has been very high, 80-90%.
When I used Profiler, I could see that whenever the replication stored procedure for an insert or update executes, CPU utilization goes up.
The trigger just inserts the updated/inserted rows into another table.
Could anyone tell me why the CPU utilization has gone up so much? I am using SQL Server 2005.
Thanks
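One thing worth keeping in mind: the distribution agent applies changes row by row, so a subscriber-side trigger fires once per replicated row and its per-invocation cost multiplies under load. A minimal sketch of a lean, set-based audit trigger (table and column names are hypothetical) that keeps that cost down:

-- Set-based: handles every row in the inserted pseudo-table in one statement
CREATE TRIGGER trg_TableA_Audit ON dbo.TableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON; -- suppress rowcount chatter back to the replication proc
    INSERT INTO dbo.TableA_Audit (Id, ChangedAt)
    SELECT Id, GETDATE()
    FROM inserted;
END;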
View 1 Replies
Oct 21, 2007
Our company recently combined our DBs into one SQL 2005 Server.
Dell Power Edge 1800 with 3.00 GHz Xeon Processor 800 FSB, 1 GB of RAM
Dell Power Edge 1600 with 2.80 GHz Xeon Processor 533 FSB, 1 GB of RAM
Combined into one:
Dell Power Edge 2950 Dual Core 1.6 GHz Xeon Woodcrest Processor, 4 GB of RAM
However, CPU utilization on this new server stays at about 90%, with 3.82 GB of RAM in use as well. It's Windows Server 2003 R2 x64 running SQL Server 2005 SP2 x64. I have searched Microsoft's website for any information that could help, but I was unable to locate anything. I was hoping someone could provide some insight as to why this might be occurring, or whether this is a known issue.
Thanks,
Peter
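A hedged first diagnostic, assuming SQL 2005's DMVs: identify which cached queries are burning the CPU before concluding the hardware is undersized (the consolidated box has fewer, slower-clocked cores serving both workloads than the two original servers combined).

-- Top cached queries by total CPU time (SQL 2005+)
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2 + 1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;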
View 10 Replies
Oct 22, 2007
We are using an IBM Xeon server with 4 GB RAM running Windows 2000 Server and MS SQL Server 2005.
Frequently the server's response time is very slow even though CPU utilization is only 7-10%; we cannot even run Notepad on the server. We have observed that the memory occupied by the SQL Server process is high. If we restart the server it returns to normal, but we do not want to restart the server frequently.
Kindly provide me a suitable solution.
Srinath
DBA
Reid & Taylor
Mysore, India
View 1 Replies
Apr 6, 2006
Hi,
In the SQL 2005 Replication Monitor I was not seeing details for any of the publications on the "Distributor to Subscriber History" tab, so I decided to stop and start synchronisation on one publication. At that time there were approximately 20,000 undistributed commands. After the stop/start of the distribution agent, I started seeing messages like "x transactions with x commands were delivered". Then I went and restarted all the other distribution agents using the Replication Monitor.
Has anyone experienced this kind of behaviour?
The second issue is that our transactional replication looked to have caught up, but I was surprised to find the distribution server running at 100%. A Profiler trace of the distribution database revealed that the sp_MSget_repl_commands procedure was being executed at a cost of over 400,000 reads, 7,000 CPU, and 15 seconds in duration. To me it looked as if sp_MSget_repl_commands had chosen an inefficient execution plan, but then I realised I couldn't recompile system procedures. I think a stop and start of the SQL instance is the only option I have.
PK
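A less drastic option than restarting the instance, sketched here on the assumption that a bad cached plan against the distribution tables is the culprit: sp_recompile can target a table, which marks every procedure referencing it (replication system procedures included) for recompilation on its next execution; refreshing statistics first gives the optimizer better information to plan against.

USE distribution;
-- Refresh optimizer statistics on the command queue table
UPDATE STATISTICS dbo.MSrepl_commands;
-- Mark all procedures referencing the table (sp_MSget_repl_commands among them)
-- for recompilation on their next run
EXEC sp_recompile 'dbo.MSrepl_commands';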
View 1 Replies
Oct 22, 2007
We are using an IBM Xeon server running Windows Server 2000 and SQL Server 2005, with 4 GB RAM.
We noticed that the response time of the server is very slow, and during these periods the memory occupied by SQL Server is very high. Although CPU utilization is very low (7-10%), we are unable to run even Notepad on the server. In other situations the CPU utilization sits at 100% for more than an hour, and during that time SQL Server also occupies a lot of memory. Restarting the server fixes the problem, but we do not want to restart the server very frequently.
Kindly provide us a suitable solution.
Srinath R
DBA
Reid & Taylor
Mysore. India
View 1 Replies
Jan 30, 2007
Hi,
We are having a big performance issue at our site. Here is the configuration of the box running SQL Server 2005:
64 bit Windows Enterprise Edition + SP1
Dual CPU, 16GB RAM
RAID 1 and RAID 5 - internal
SQL Server 2005 64-bit Enterprise Edition
With SP2 (CTP from December)
The "Lock Pages in Memory" is set and is being run under the same account that is being used to run SQL Server Services.
We are noticing that under load, CPU utilization reaches nearly 100%. I have researched this and have come across a couple of posts indicating that this issue was fixed in SP2; for example, one post talked about hotfix #716, which is also part of SP2, but even after applying that service pack we are still having the issue. I haven't tried setting the parameterization option to forced for the database yet.
Is this a known issue with SP2? If not, what can we look for and fix in our environment? Please let me know if I can provide more information.
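For reference, the option mentioned above is a per-database setting; a minimal sketch (the database name is hypothetical), best tried on a non-production copy first since it changes plan caching behavior for every query in the database:

-- Parameterizes literals so textually similar queries share one cached plan,
-- which can cut compile-time CPU on ad hoc-heavy workloads
ALTER DATABASE MyAppDb SET PARAMETERIZATION FORCED;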
View 7 Replies
Jan 10, 2006
I have a question for anyone who has some tips/pointers for optimizing SQL merge replication publications.
The front end web server is running IIS 6.0 on Windows 2003 x86 Server Standard (Server A). The back end database server is running SQL 2000 Standard on Windows 2003 x86 Standard (Server B). The merge replication clients connect via HTTPS over the Internet from a custom C#.NET 2005 application using SQL 2005 Mobile running on Windows Mobile 5.0 (Client).
The publication itself has several filters on it. The entry point uses the user's Windows username to start the filter. Based on the user, it then filters the records in multiple tables. There are 68 articles and 44 filter statements. The filters extend multiple layers deep; in other words, they are not all filtering off the HOST_NAME() variable: some tables filter from records in tables that themselves filter from HOST_NAME() (a sketch of this pattern follows below). The publication is set to minimize data sent to the clients, and considers a subscription out of date if it has not synced in the last 4 days. All the rowguids are indexed as well.
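For context, a hedged sketch of the filter pattern described above, with hypothetical table, column, and publication names: the entry-point article filters on HOST_NAME(), and a join filter extends that restriction to a child table.

-- Entry-point article: each subscriber passes its Windows username as HOST_NAME()
EXEC sp_addmergearticle
    @publication = 'FieldPub',
    @article = 'Users',
    @source_object = 'Users',
    @subset_filterclause = 'UserName = HOST_NAME()';
-- Join filter: Orders follows whichever Users rows the subscriber receives
EXEC sp_addmergefilter
    @publication = 'FieldPub',
    @article = 'Orders',
    @filtername = 'Orders_Users',
    @join_articlename = 'Users',
    @join_filterclause = 'Orders.UserId = Users.UserId';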
There are approximately 35 clients actively using the application at any given time. On average, a client will initiate a merge replication 3-4 times per hour from 8am-5pm. Generally, a sync will take between 10 seconds and 2 minutes to complete, with most of them being around 30 seconds on average.
When a client starts a sync, there is a spike to about 50% on the server's CPU graph. If multiple clients attempt to sync at the same time the CPU utilization can be pushed to 100% for extended periods (more than 30 seconds).
I recently completed a project to increase the bandwidth available to the clients, and plan to reduce the number of filters significantly (although this will obviously increase the amount of data going to the clients and the storage needs on the individual devices). I also plan on changing the setting to not minimize the amount of data sent to the clients.
Having said all that, does anyone have any information about how to further optimize merge publications to mobile clients? The next publication will be on SQL 2005 x64 Standard if I can solve the issues in the test environment. I would like to enhance the publication as much as possible to make the end-user experience better than it currently is.
Thanks!
View 3 Replies
May 10, 2000
I installed 4 GB of memory and I have never seen SQL Server memory utilization go beyond 2 GB. I have SQL Server set up to use as much memory as it needs. Does anyone know if SQL Server can make use of more than 2 GB?
View 1 Replies
Feb 28, 2000
Ours is SQL 7.0 Enterprise Edition on NT 4.0 Enterprise Edition. SQL Server has been configured with the default dynamic memory allocation. The system has 4 GB of RAM and is a dedicated SQL Server machine, yet SQL Server seems to use only 1.8 GB (counter: Total Server Memory). Page faults peak at about 600 with an average of 100. Processor utilization has suddenly increased to 90%. Is there anything wrong with the way SQL Server is using memory? Isn't it true that SQL Server 7.0 Enterprise Edition can use up to 3 GB of RAM on a 4 GB system?
Are there any links that can help troubleshoot this problem?
Thank you.
-Praveena
View 2 Replies
Jul 20, 2005
What do people think is normal for memory utilization? I know that's too broad, so here are some basics.
MS SQL Server 2000, Windows 2000 Server, 2 GB RAM
Db 1, size = 2.0 GB
Db 2, size = 300 MB
Db 3, size = 50 MB
Db 4, size = 30 MB
Db 5, size = 30 MB
Typically 4-6 users, moderate usage 8 hrs/day. Performance has not slowed. Reboot on Sunday. sqlservr.exe in Task Manager reports the following:
Sun 61 MB
Mon 200 MB
Tues 800 MB
Wed 1,124 MB
Thu 1,424 MB
Fri 1,303 MB
I was getting srv 2020 errors when I had just 1 GB RAM: "The server was unable to allocate from the system paged pool because the pool was empty." Then I did several updates to address this and got more RAM. I haven't seen the errors since, but I haven't waited for them to happen: I'm rebooting every week now. The memory numbers make me suspect SQL Server.
Scratching my head. Not sure if my problem is gone and this is normal SQL Server 2000 behavior, or if my problem is still lurking and I've only muted it a bit.
Any thoughts greatly appreciated.
Tom
View 3 Replies
Apr 19, 2007
Dear all,
One of our servers has 2 GB of RAM, and Task Manager shows 1.87 GB of memory in use.
I have to migrate a few databases, with high I/O operations, onto this same server.
I know the server requires more RAM, but how can I prove that it needs more RAM?
Regards
Mohd Sufian
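A hedged way to make the case with numbers rather than Task Manager, assuming SQL 2005+ so the perfmon counters are exposed through a DMV: a Page Life Expectancy that stays low (a commonly cited floor is around 300 seconds) means the buffer pool is churning pages and more RAM would genuinely help.

-- Buffer pool pressure indicators (SQL 2005+)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free pages');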
View 3 Replies
Feb 12, 2007
Environment: Win2003 SP1, 32 bit, SQL Server 2K5
My server has 16 GB of RAM but SQL Server is using only 3 GB, and it is using 3 GB of virtual memory, too. Why isn't my physical memory being utilized? How can I increase physical memory usage and decrease VM usage?
Canada DBA
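On 32-bit builds, the process address space caps SQL Server far below 16 GB by default. The usual remedy, sketched here on the assumption that this is 32-bit SQL Server 2005 Enterprise or Standard with the Lock Pages in Memory right granted to the service account, is to enable AWE and set an explicit memory ceiling:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Lets SQL Server map buffer pool memory above the 32-bit limit
EXEC sp_configure 'awe enabled', 1;
-- With AWE, always set an explicit ceiling; leave headroom for the OS
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
-- A service restart is required before 'awe enabled' takes effect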
View 19 Replies
May 11, 2015
I have worked with other ETL tools, so I am trying to figure out how to do the file decryption and process the data in memory using SSIS. I am using SSIS on an Azure VM and my source files are in Azure storage. The files are encrypted, and we are trying to use a Python script to decrypt them and pass the result to SSIS. I found out that the Execute Process task can call the Python script. However, I would like to get the decrypted data from the file and pass it to the next task in the control flow without saving it as a file (in-memory). I found that the Execute Process task output can be stored in a standard output variable or in an object. Will this work, or do I need to follow another method (since we need the entire file to be sent for additional processing)?
View 6 Replies
Jan 15, 2013
I have a nightly job (an SSIS package) scheduled using MS. The package loads data from the OLTP database to the warehouse. The server has 256 GB of memory, of which 211 GB is free.
The job usually runs without any problems, but sometimes it fails with the following error: "DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005."
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "The statement has been terminated.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint '<var>PrimaryKeyName</var>'. Cannot insert duplicate key in object '<var>TableName</var>'.".
When I researched this error I found claims that it is caused by a memory issue, but we have 211 GB of free memory, so how is that possible? Is there a way, in the package or anywhere else, to specify how much memory (as a percentage) the SSIS package should use (something like the SSRS threshold level)?
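Worth noting before chasing memory: the inner message is a primary key violation, so duplicate rows in the nightly extract are the more likely culprit. A minimal sketch (the staging table and key column are hypothetical) to find rows that would collide on the warehouse key:

-- Source rows that would collide on the destination primary key
SELECT BusinessKey, COUNT(*) AS dup_count
FROM staging.SourceTable
GROUP BY BusinessKey
HAVING COUNT(*) > 1;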
View 13 Replies
Apr 26, 2006
Can someone point me to some good articles, or perhaps directly supply some words of wisdom, about the wise use of variables within a T-SQL script from the standpoint of conserving memory usage and improving execution cost?
For example:
(1) Is it better to use varchars, nvarchars, etc. defined with minimal lengths to support the needs of the script, or is it just as efficient to declare them all with a length of, say, 4,000?
(2) I've seen behavior that leads me to believe that when passing a variable as a parameter in a nested procedure call, if the declared types of the parameter and the variable being passed in don't match (i.e. one is numeric(38,10) and the other is int), then implicit type conversions hurt performance. Is this true, and how broadly does it apply?
(3) Does the number of variables declared in a script materially impact performance and/or resource utilization?
(4) Is it more efficient to have a series of variable value assignments in a single SELECT statement versus a series of SET statements? Should I always prefer one to the other? Only within a looping construct? (See the sketch after this list.)
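For question (4), a minimal sketch of the two forms; the single SELECT is one statement (and one execution) versus three, which matters mostly inside tight loops:

DECLARE @a int, @b int, @c int;
-- Three separate assignment statements
SET @a = 1;
SET @b = 2;
SET @c = 3;
-- One statement assigning all three variables
SELECT @a = 1, @b = 2, @c = 3;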
Thanks,
Shadowraven
View 1 Replies
Nov 17, 2009
I am getting the following warning for my SSIS 2008 package: "Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console." I checked the warning in SSIS 2008 but didn't find any solution. The package processes data and executes fine, so why do I see this warning? When I run this package on my machine I see no such warning; it appears only when I deploy it to our DEV SSIS server.
View 7 Replies
Apr 30, 2015
I am searching for a query to find the total memory allocated to SQL Server, how much of it is actually being used, and the CPU utilization percentage.
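A hedged starting point, assuming SQL 2008+ where these DMVs are available:

-- Memory in use by this instance, and its configured ceiling
SELECT physical_memory_in_use_kb / 1024 AS sql_memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
SELECT value_in_use AS max_server_memory_mb
FROM sys.configurations
WHERE name = 'max server memory (MB)';
-- Recent SQL Server CPU % from the scheduler monitor ring buffer (one sample per minute)
SELECT TOP 10
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (
    SELECT CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE '%<SystemHealth>%'
) AS rb;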
View 10 Replies
Apr 1, 2014
When running the ETL I'm getting the error: <SSIS Task>: Shared Memory Provider: Timeout error [258], followed by the message "Communication link failure".
What is special about this message is that it happens on a random SQL Execute task, and the timeout occurs after 2 minutes.
When the packages are executed separately, everything works fine. The SQL tasks that are failing are quite heavy but reasonable, taking anywhere from under 2 minutes up to 10-15 minutes. The statements are stored procedures that put an index on 3 million records, or UPDATE statements, and so on.
I had a look at all my SSIS/ETL timeouts and they have the default value 0; the server's "remote query timeout" is set to 10 minutes. As far as I know, these are the only ones that exist?
There are 2 instances on the server, each with 24 GB allocated; the server has 64 GB in total. When the failing ETL runs, no other ETL is running on either instance. I'm working with the OLE DB SQL Server Native Client 11.0 provider: SQLNCLI11.1.
View 7 Replies
Sep 25, 2006
hi all
I have a small MSSQL server: total database file size less than 7 GB, with 2 GB RAM and 2 CPUs installed. For some reason, when I check Task Manager, the process memory usage is over 1.7 GB. The SQL Server memory setting is dynamic, not fixed, and the maximum is the default (2 GB, I believe). My question: is over 1.7 GB of memory usage too high? I don't have a lot of transactions going on, and CPU usage is very low. I'm wondering whether this is normal or not; if it is not normal, what causes the memory usage to be so high, and how can I bring it back to normal? Can anyone help me out, or offer any suggestions? Thanks.
View 4 Replies
Jan 29, 2008
I have a Windows 2003 server running SQL 2005, and sqlservr.exe is using 880 MB of memory, climbing over time to 1.4 GB. If I reboot the server it goes back to 100 MB and slowly climbs back up. Any ideas? I am not a SQL guy.
View 2 Replies
Jul 23, 2005
Hi. We have a production server running SQL Server 2000 64-bit. It is a 4-CPU server with 16 GB of RAM, and we have a max server memory setting of 15.5 GB for SQL Server. In spite of 15.5 GB being available to SQL Server, it still uses a lot of paging file space. Looking through Task Manager we can see SQL Server using 15.5 GB of memory and 22 GB of virtual memory. I don't understand why it should be using close to 7 GB of paging space when it has so much memory. How does SQL Server use virtual memory versus physical memory? Has anyone seen this before? Thanks, GG
View 2 Replies
Sep 10, 2015
We have Windows Server 2008 R2 installed on a VM, with three SQL Server instances running on it. Over the past few months we have observed physical memory usage climbing: earlier it was at 86-88%, now it is 96-97%. We have 16 GB of RAM and 8 CPU cores on the VM. What is the best and ideal configuration to rectify the high physical memory usage?
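Without explicit caps, multiple instances will each grow their buffer pools until the box is starved. A sketch of one reasonable division, assuming the three instances carry similar workloads and roughly 4 GB should stay free for the OS; run it on each instance, skewing the numbers toward the busiest one:

-- Per instance: 3 x 4096 MB = 12 GB for SQL Server, ~4 GB left for the OS
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;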
View 13 Replies
May 2, 2006
Hello,
Question:
How can I keep my thread alive after an out-of-memory exception? That is, I understand that sometimes a server may be unable to satisfy a memory request, but I'm okay with waiting; I'm not okay with being terminated (think of the reaction to Oliver asking for some more). I would think that, in general, when any application makes a request for a resource that is currently unavailable but may be available at another time, that application (process/thread/fiber) would be put in a wait queue for that resource. On a high-concurrency system this could obviously lead to deadlocks; however, I think in the situation I describe below, the killing is overkill.
Discussion & Background:
In my project, I have a SqlFunction, we'll call "SqlDecimal BigFunction()" that will allocate a large chunk of memory (~3MB) and can take anywhere from 20ms to 500ms to complete (on my system, assuming no other processor load). There are also Functions that are used to set control points for BigFunction (implying thread/fiber state -- or, if there is a distinction, Transaction state), which we will call "SqlBoolean SetControlPoint(SqlInt32 x, SqlInt32 y)". The 3MB requirement is constant, regardless of the number of control points. (Incidentally, the actual implementations of these functions are in a referenced assembly)
In Code:
////////////////////////////////////////////////////////////////////////////////////////////////////
[ThreadStatic] // UNSAFE
static externalAssembly.MyClass myObj;

[SqlFunction]
public static SqlDecimal BigFunction()
{
    if (myObj == null) return -1;
    // DoSomeWork will do something like: byte[] b = new byte[3 * 1024 * 1024];
    return externalAssembly.DoSomeWork(myObj);
}

[SqlFunction]
public static SqlBoolean SetControlPoint(SqlInt32 x, SqlInt32 y)
{
    if (myObj == null) myObj = new externalAssembly.MyClass();
    myObj.SetPoint(x.Value, y.Value);
    return SqlBoolean.True; // because we can't have a 'void' return type
}
////////////////////////////////////////////////////////////////////////////////////////////////////
In low to moderate concurrency (single hyperthreaded CPU with 20 sessions banging it in a loop), it *usually* does okay. In a higher concurrency situation (2 hyperthreaded cpus with 10 sessions stressing this code and 10 other sessions doing regular TSQL Selects) It runs for a long time, but will occasionally throw an out-of-memory exception. (Previously, I was managing my thread state manually with a locked dictionary, an Int32 key, and CreateSession/ReleaseSession calls). When an out of memory exception is thrown while the dictionary is locked, I get an AppDomain unload, which is *completely* unacceptable)
So, I know that sometimes, I won't be able to allocate my 3MB (it could be 3kb, it just shows up more readily with a larger allocation request). That doesn't mean my externalAssembly is "misbehaving" or "off-in-the-weeds". It just means the server is loaded right now and can't satisfy my request. One may catch an OutOfMemory Exception (perhaps to add additional info about the point of failure), but the thread is already being aborted.
I tried modifying this implementation to use a buffer pool that is allocated on start-up. That worked pretty well (reduced % Time in GC a bit, also), but it forced my external assembly to be marked as unsafe rather than just external access because of the synchronization methods used to manage the buffer pool. It also doesn't scale, at least not as it sits; it's just a fixed-size buffer pool. With more processors and less peripheral loading, the extra processors would just be waiting for a buffer. Besides that, I thought there was some escalation policy about "waiting too long", but I may be wrong.
I would like to eliminate the "UNSAFE" attribute from the primary assembly -- mainly because it "sounds scary", but more realistically, because it is unsafe! Or at least, experience in the field points to synchronization issues being a primary cause of unreliability in systems. Also, the C# lock, Mutex, Monitor, etc. calls go into native code to use the OS for locking. When this happens, SQL doesn't really know what you're waiting for and can't take that info into account when scheduling. All it knows is that you're waiting on an OS lock. I thought the hosting API would've allowed the host to optionally implement its own locking primitives, especially a host that runs its own scheduler.
I've looked into constrained execution regions and Chris Brumme's blog entry on hosting. Using them would help ensure some protection, but I think even they do not protect a thread from being unloaded in the face of an OutOfMemoryException (or any asynch exception); rather, they allow you to safely clean up unmanaged references and ensure state integrity for the appdomain.
At any rate, this is getting a little long winded. If anyone has any feedback, I'd be delighted to hear it.
Thank you.
-Troy
System Info:
SELECT @@version
Microsoft SQL Server 2005 - 9.00.2047.00 (Intel X86) Apr 14 2006 01:12:25 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
View 5 Replies
Mar 5, 2014
Our database server's memory utilization has been growing quickly over the past week. It held steady around 55% for a week; now it is at 70% and climbing.
Total OS memory is 32 GB, and I have capped SQL Server's max server memory at 29 GB. Don't know what to do.
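A hedged first look, assuming SQL 2012+ (on 2008 R2 and earlier, sum single_pages_kb and multi_pages_kb instead of pages_kb): check which memory clerks actually hold the memory before treating the growth as a problem, since the buffer pool slowly filling up to its cap is normal behavior.

-- Top memory consumers by clerk type
SELECT TOP 10 [type],
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY memory_mb DESC;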
View 9 Replies
Jun 11, 2015
I have been having issues with our SQL Server for a while now. It seems to run out of memory every few days, and when I look at the memory dump, the MEMORYCLERK_SQLOPTIMIZER clerk appears to take over memory and eventually causes the server to crash.
Here is the SQL version we are using: Microsoft SQL Server 2012 (SP1) - 11.0.3460.0 (X64) Jul 22 2014 15:22:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200: ) (Hypervisor). It is on a VM on a Windows 2012 server. It has 20 GB of RAM allocated to it, and max server memory is set to 16.5 GB.
I have seen MEMORYCLERK_SQLOPTIMIZER grow to about 11 GB at the time of the server crash. Why is that happening, and what is causing memoryclerk_sqloptimizer to get so high? From what I have read, it has to do with ad hoc requests, but is there something I can do to bring that memory down when it gets so high, so that I can prevent a server crash? Do we just need to add more memory, or is there a memory leak somewhere?
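If ad hoc requests are implicated, one low-risk setting that is often suggested, sketched here (it only affects plans compiled after it is enabled, and it targets plan cache bloat rather than compile-time memory directly):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Caches a small stub for first-time ad hoc statements instead of the full plan
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;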
View 2 Replies
Nov 6, 2007
My SQL Express instance will use up to 1 GB or more of memory and never release it.
I had the same problem on SQL 2005 Standard, which I solved by adding /3GB in boot.ini and turning on AWE.
But it seems SQL Express doesn't support AWE, so what can I do here?
thanks
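SQL Express does ignore AWE, but it honors max server memory; a minimal sketch to cap the buffer pool (pick a value that suits the box; note that Express 2005 already limits its buffer pool to 1 GB on its own, so usage beyond that is typically other allocations that this cap does not govern):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the Express buffer pool at 512 MB
EXEC sp_configure 'max server memory (MB)', 512;
RECONFIGURE;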
View 6 Replies
Mar 11, 2008
Hello, we have a customer-facing ASP.NET web reporting application that allows the user to a) select a report, b) make report settings (filter, sort, include/exclude columns), and c) run the report. An Oracle stored proc is used to return the data into a DataSet. Today the output is displayed in a bound, paginated DataGrid.
We'd like to 'plug in' Reporting Services for improved formatting, grouping, matrix reporting, drilldown, graphics, etc. I have read through the online docs, examples, and tutorials but have not found the exact (high-level) information I'm looking for.
We need to keep our report settings page, Oracle SP architecture, etc. Is there a way to simply hand off a dynamically created DataSet to a report for display, like binding to the DataGrid? The data (query) would be specific to a report, but columns can be selected/unselected for display.
In terms of deployment, is it necessary to install SQL Server in each environment (DEV-Integration, QA, PROD) to support Reporting Services? Since we are essentially an Oracle shop, this could be a tough sell. Is it possible to develop the report on a dev workstation (perhaps with SQL Server), and then publish to a web server where SQL Server is not installed?
These are obviously high-level questions; any help will be appreciated (examples, how-to's, etc.). Thanks in advance.
-Bill
View 1 Replies