Could anyone help me find out why the CPU utilization is so high?
I have two servers, say Server A and Server B. There is transactional replication going on from Server A to Server B.
There is a table, say Table A, on Server A, which is being replicated to Server B.
I created an insert and update trigger on Table A on Server B (i.e. on the subscriber). Since then, the CPU utilization on Server B has been very high, 80-90%.
When I used Profiler, I could see that whenever the replication stored procedure for an insert or update executes, CPU utilization goes up.
The trigger just inserts the updated/inserted rows into another table.
Could anyone tell me why the CPU utilization has gone up so much? I am using SQL Server 2005.
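The distribution agent applies replicated changes one row at a time, so a subscriber-side trigger fires once per replicated row; the first thing to verify is that the trigger is strictly set-based and cheap per firing. A minimal sketch of such an audit trigger, with hypothetical table and column names:

-- Hypothetical names; a set-based audit trigger that stays cheap per firing.
CREATE TRIGGER trg_TableA_audit
ON dbo.TableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;   -- suppress extra done messages for every replicated row
    INSERT INTO dbo.TableA_audit (id, col1, audit_dtm)
    SELECT id, col1, GETDATE()
    FROM inserted;    -- handles single-row and multi-row changes alike
END
GO

If the trigger already looks like this, the next suspects are missing indexes on the audit table or on anything the trigger reads.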
After a fresh install of SQL 6.5 with SP5a (or without), the CPU runs at anywhere from 50% to 80%. It is installed on a PDC, but when I stop the SQL service, CPU utilization drops to 0-2%. When I start the SQL service, it goes right back up. Does anyone have a suggestion as to how to fix this, or why the service would be doing this?
I am getting high CPU utilization on the SQL Server process (>90%). However, the overall utilization (NT, the entire box) always seems to be under 50%.
Can someone explain why this is happening? The server is a quad; the SQL Server process seems to be using only two CPUs at a time (not the same ones all the time).
Lightweight pooling has been turned on and the maximum worker threads setting has been left at the default value (255).
How can I configure SQL options to spread the load across all four CPUs?
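A sketch of how to check whether an affinity mask is pinning SQL Server to a subset of the CPUs, and how to reset it (both are standard sp_configure options):

-- 'show advanced options' exposes the processor-related settings.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';        -- 0 means use all CPUs
EXEC sp_configure 'lightweight pooling';  -- fiber mode; 0 is the usual recommendation
-- To allow all four CPUs:
EXEC sp_configure 'affinity mask', 0;
RECONFIGURE;

Lightweight pooling rarely improves throughput and can mask scheduling issues, so testing with it off is worth a try.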
I have a few in-house developed applications (VB based) that access the SQL Server for adding, appending, and creating tables. The applications make their changes through queries generated dynamically at the application level.
My MS SQL Server runs on a PIII / 256 MB RAM / 18 GB HDD.
The problem is that the memory utilization of SQL Server keeps growing constantly. Out of the 512 MB available (256 MB physical + 256 MB virtual), memory utilization reaches a level of 490 MB and stays constant, though SQL Server itself shows a utilization of 150 MB.
I suspect that SQL Server is not releasing memory back to the system. Please help in resolving this. The problem may also lie with the applications we developed.
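By design, SQL Server grows its buffer cache and only trims it under operating-system memory pressure, so this is usually expected behavior rather than a leak. If the box has to share memory with other work, the standard fix is to cap it; a sketch with an illustrative value:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 128;  -- MB; illustrative cap for a 256 MB box
RECONFIGURE;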
Our company recently combined our DBs onto one SQL 2005 server.
The old servers: a Dell PowerEdge 1800 with a 3.00 GHz Xeon processor (800 MHz FSB) and 1 GB of RAM, and a Dell PowerEdge 1600 with a 2.80 GHz Xeon processor (533 MHz FSB) and 1 GB of RAM.
Combined into one: a Dell PowerEdge 2950 with a dual-core 1.6 GHz Xeon Woodcrest processor and 4 GB of RAM.
However, the CPU utilization on this new server is holding at about 90%, with 3.82 GB of RAM used as well. It is Windows Server 2003 R2 x64 running SQL Server 2005 SP2 x64. I have searched Microsoft's website for any information that could help, but I was unable to locate anything. I was hoping someone could provide some insight as to why this might be occurring, or whether this is a known issue.
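A first diagnostic step on SQL 2005 is to ask the DMVs which cached plans have consumed the most CPU; a sketch:

-- Top 10 statements by cumulative CPU since they were cached (SQL 2005+).
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

Consolidating two boxes onto one with fewer total cores can also simply mean the workload needs more CPU than the new server has; the query above shows whether a handful of statements or the aggregate load is responsible.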
We are using an IBM Xeon server with 4 GB RAM, Windows 2000 Server, and MS SQL Server 2005.
Quite frequently, our server response time is very slow, although CPU utilization is only 7-10%. We cannot even run Notepad on the server. We observed that the memory occupied by the SQL Server process is high. If we restart the server, it returns to a normal level, but we do not want to restart the server frequently.
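The symptoms (low CPU, the OS starved for memory) suggest SQL Server has taken nearly all physical RAM; capping 'max server memory' via sp_configure so the OS keeps 1-2 GB is the usual remedy. To see where the memory is going first, a sketch against the SQL 2005 memory-clerk DMV:

-- Top memory consumers inside SQL Server 2005.
SELECT TOP 10
    type,
    SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;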
In SQL 2005 Replication Monitor, I was not seeing details for any of the publications on the "Distributor to Subscriber History" tab, so I decided to stop and start synchronisation on this one publication. At the time there were approximately 20,000 undistributed commands. After the stop/start of the distribution agent, I started seeing messages like "x transactions with x commands were delivered". Then I went and restarted all the other distribution agents using Replication Monitor.
Has anyone experienced this kind of behaviour?
The second issue is that our transactional replication looked to have caught up, but I was surprised to find that the distribution server was running at 100%. A Profiler trace of the distribution database revealed that the sp_MSget_repl_commands procedure was being executed, costing in excess of 400,000 reads, 7,000 in CPU cost, and 15 seconds in duration. To me it looked as if sp_MSget_repl_commands had chosen an inefficient execution plan, but then I realised I couldn't recompile system procedures. I think a stop and start of the SQL instance is the only option I have.
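Before restarting the instance, it may be worth refreshing the statistics that procedure depends on and flushing the cached plan; a hedged sketch, assuming the distribution database has the default name:

USE distribution;
GO
UPDATE STATISTICS dbo.MSrepl_commands;
UPDATE STATISTICS dbo.MSrepl_transactions;
GO
-- Clearing the plan cache forces sp_MSget_repl_commands to compile a fresh plan
-- on its next execution (note: this flushes all cached plans on the instance):
DBCC FREEPROCCACHE;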
We are using an IBM Xeon server with Sevrer 2000 and sql server 2005 with 4B RAM.
We noticed that the responce time of the server is very slow. During this time the memory occupancy by the sqlserver is very high. Althouh the cpu utilization is very low (7-10%) we are unable to run even notepad on the server. In some other situations the cpu utilization will be 100% (for more than one hour) during this time also the sql server occupaies more memory. If we reatarts the server then the problem will be fixed, but we do not want to restart the server very frequently.
We are having a big performance issue at our site. Here is the configuration of the box running SQL Server 2005:
64 bit Windows Enterprise Edition + SP1
Dual CPU, 16GB RAM
RAID 1 and RAID 5 - internal
SQL Server 2005 64-bit Enterprise Edition
With SP2 (CTP from December)
The "Lock Pages in Memory" is set and is being run under the same account that is being used to run SQL Server Services.
We are noticing that under load, the CPU utilization becomes nearly 100%. I have researched this and have come across a couple of posts that indicate that this issue was fixed in SP2 - example: One post talked about the hotfix #716 which is also a part of SP2 but even after the application of that service pack, we are still having this issue. I haven't tried setting the parameterization option to forced for the database yet.
Is this a known issue with SP2? If not, what can we look for and fix in our environment? Please let me know if I can provide more information.
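For reference, forced parameterization is a per-database setting; a sketch with a hypothetical database name:

-- Makes the optimizer parameterize ad hoc statements aggressively, which can cut
-- compilation CPU for workloads dominated by near-identical queries.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
-- Revert if plan quality regresses:
ALTER DATABASE YourDatabase SET PARAMETERIZATION SIMPLE;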
We have a system (32 GB RAM, 2 TB hard disk, Windows 7, SQL Server 2008 R2 Enterprise 64-bit). It looks like whenever I run a query (even one returning only 50 records), memory utilization in Task Manager is very high (30 GB). How can I control this usage? The memory setting is at its default in server properties (min 0 and max 2147483647).
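That default max means "take whatever is available", and SQL Server holds on to the buffer pool once it has grown. A sketch of capping it (the value is illustrative; leave several GB for the OS):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 24576;  -- illustrative 24 GB cap
RECONFIGURE;
-- Verify what the engine is actually using (SQL 2008+):
SELECT physical_memory_in_use_kb / 1024 AS in_use_mb
FROM sys.dm_os_process_memory;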
I have a question for anyone who has some tips/pointers for optimizing SQL merge replication publications.
The front end web server is running IIS 6.0 on Windows 2003 x86 Server Standard (Server A). The back end database server is running SQL 2000 Standard on Windows 2003 x86 Standard (Server B). The merge replication clients connect via HTTPS over the Internet from a custom C#.NET 2005 application using SQL 2005 Mobile running on Windows Mobile 5.0 (Client).
The publication itself has several filters on it. The entry point uses the user's Windows username to start the filter; based on the user, it then filters the records in multiple tables. There are 68 articles and 44 filter statements. The filters extend multiple layers deep; in other words, not all of them filter off the HOST_NAME() value directly, as some tables filter from records in tables that in turn filter from the HOST_NAME() value. The publication is set to minimize data sent to the clients, and considers a subscription out of date if it has not synced in the last 4 days. All the rowguids are indexed as well.
There are approximately 35 clients actively using the application at any given time. On average, a client will initiate a merge replication 3-4 times per hour from 8am-5pm. Generally, a sync will take between 10 seconds and 2 minutes to complete, with most of them being around 30 seconds on average.
When a client starts a sync, there is a spike to about 50% on the server's CPU graph. If multiple clients attempt to sync at the same time the CPU utilization can be pushed to 100% for extended periods (more than 30 seconds).
I recently completed a project to increase the bandwidth available to the clients, and plan to reduce the number of filters significantly (although this will obviously increase the amount of data going to the clients and the storage needs on the individual devices). I also plan on changing the setting to not minimize the amount of data sent to the clients.
Having said all that, does anyone have any information about how to further optimize merge publications for mobile clients? The next publication will be on SQL 2005 x64 Standard if I can solve the issues in the test environment. I would like to enhance the publication as much as possible to make the end-user experience better than it currently is.
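One commonly recommended step, in addition to reducing filter depth, is making sure every column referenced in a join filter is indexed on both sides, since merge evaluates those joins constantly during change enumeration. A sketch with hypothetical table and column names:

-- If a join filter links Orders to Users on UserID, both sides want an index:
CREATE NONCLUSTERED INDEX IX_Orders_UserID ON dbo.Orders (UserID);
CREATE NONCLUSTERED INDEX IX_Users_UserID  ON dbo.Users (UserID);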
Windows is Server 2012 64-bit edition and SQL Server is 2012 64-bit edition.
The RAM installed on both servers is around 65 GB, of which 49 GB is set as max server memory for the SQL service on each server.
The databases related to Reporting Services are also in the Always On group.
We have also configured Reporting Services, and an instance is running on each respective server.
The issue is that on the primary server the Reporting Services process is using almost 7 GB, while on the secondary it is using 10 GB, even though there are only 5 reports and they are only used within our offices.
What could the issue be, and how do I check why SSRS is using so much memory?
Is there any query, or are there PerfMon counters?
Reports are run randomly from the client side.
I have checked memory utilization through Task Manager.
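Two things worth knowing here. First, SSRS memory is governed by its own settings in RSReportServer.config (WorkingSetMaximum / WorkingSetMinimum), not by SQL Server's max server memory. Second, to see which reports are actually running and how heavy they are, the report server catalog can be queried directly; a sketch, assuming the default catalog name:

USE ReportServer;   -- your report server database may be named differently
SELECT TOP 20
    ItemPath, UserName, TimeStart, TimeEnd,
    TimeDataRetrieval, TimeProcessing, TimeRendering, ByteCount
FROM dbo.ExecutionLog3
ORDER BY TimeStart DESC;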
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not High Protection despite both of them being Synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?
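In T-SQL terms, High Protection is simply synchronous mirroring with no witness, so removing the witness should be sufficient; a sketch with a hypothetical database name:

ALTER DATABASE YourDatabase SET WITNESS OFF;          -- drop the witness
ALTER DATABASE YourDatabase SET PARTNER SAFETY FULL;  -- ensure synchronous mode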
I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, when a failover occurs, does the session automatically change to High Performance mode? Since for a failover to occur something must have happened to the principal, it will be impossible to commit transactions on the new principal and mirror synchronously, since one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
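One way to answer this empirically is to inspect the session state after a failover; a sketch (SQL 2005+):

SELECT DB_NAME(database_id) AS db,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc   -- FULL = high safety; OFF = high performance
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;

In practice the safety level stays FULL after an automatic failover; the session runs exposed (without a synchronized mirror) until the old principal rejoins, rather than silently switching to asynchronous mode.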
Hi All. We have two production boxes running different ERP apps, one on each box. We just added 2 more CPUs to one box and 1 CPU to the other. The current CPU config as it stands is:
Box 1 = 4 CPUs, Box 2 = 2 CPUs.
My boss wants some stats to see if performance is any better, and if yes, by how much!
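For a quick before/after comparison without PerfMon, sp_monitor reports CPU and I/O activity since it was last run; running it on a fixed schedule gives comparable interval numbers:

-- Reports cpu_busy, io_busy, idle, packet and error counts since the last execution.
EXEC sp_monitor;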
Hi all, I have a problem trying to find out why only one CPU is utilized in a 2-CPU hyper-threaded box. Using Task Manager, I can see 4 processor windows but only 1 is actually utilized. I selected "boost SQL Server priority" and selected all CPUs for use. Queries and all other commands, e.g. DBCC checks, appear to use only a single CPU. Any help would be nice. Thanks, Andrew
I want to keep track of the CPU utilization & number of users connected for each database on our production box. I chose to get the data from sysprocesses table from master database.
But I realised that for some reason the master..sysprocesses.CPU column stays static or just keeps on adding to existing values.
Is there any way through which I can clear this data (the cpu column in sysprocesses) after I have captured it in a table?
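The cpu column cannot be reset; it accumulates per connection from login onward. The usual approach is to snapshot it twice and work with the deltas; a sketch:

-- Snapshot, wait, then diff; join on login_time as well as spid in case a spid is reused.
SELECT spid, login_time, cpu
INTO #cpu_snap
FROM master..sysprocesses;

WAITFOR DELAY '00:00:10';

SELECT s2.spid, s2.cpu - s1.cpu AS cpu_delta
FROM master..sysprocesses AS s2
JOIN #cpu_snap AS s1
  ON s1.spid = s2.spid AND s1.login_time = s2.login_time
WHERE s2.cpu - s1.cpu > 0;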
We have a SQL 7.0 Standard server running on a Windows 2000 Server machine with two 800 MHz Pentium IIIs and 2 GB memory. Our front end is Access 97 and 2000, with mostly ADO connections for the scripts but some DAO for forms and reports. We recently "released" a new version of the "database" that caused a catastrophic event to start happening with our SQL Server.

Using PerfMon, we monitored CPU utilization on the server and noticed that the CPU load would drop to 0 for approximately 5-10 seconds and then jump back up to our average 60-70% utilization. During this drop there is NO disk activity, no new connections being made, etc. We then took the process a step further and loaded a "stress" program that put about a 30% load on the server to start with. Then we monitored each process's load. The SQL Server process would drop to 0% while the stress process continued at 30%.

The problem is that SQL Server does absolutely NOTHING for 5-10 seconds. You cannot connect, any queries that are running stop, there is no disk activity (logs, data drives), and you cannot even get sp_who2 to run from Query Analyzer. We thought maybe blocking (we have built an "app" that monitors this), but we don't see any blocking before it locks and nothing after it locks.

Out of desperation we "rolled back" to our previous version to get people working again. After business hours, we have tried to duplicate the problem on machines (2 or 3 at a time) but cannot get it to reproduce.

The only experience we had previous to this was using DNS to resolve the server name, which caused a problem EXTREMELY similar to this one. However, we have double-checked every machine we have, and none of them are using DNS to resolve.

Any ideas would be most appreciated.

Patrick Moore
1. A table has a PK of EmployeeID (non-clustered). The SQL statement's WHERE clause uses something like: WHERE E.Action > 1 AND E.User = 1001 AND E.EmployeeID = 12345. Question: will the PK index be used in determining the result set?
2. A table has a clustered index on EmployeeID + Company + State. The SQL statement's WHERE clause is: WHERE EmployeeID = 1001. Question: will the index be used in determining the result set?
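In both cases the index can be used, because each query supplies an equality predicate on the index's leading (or only) key column; the remaining predicates in case 1 are applied as residual filters after the seek. A sketch illustrating case 2 with hypothetical names:

CREATE TABLE dbo.Employee (
    EmployeeID int NOT NULL,
    Company    varchar(50) NOT NULL,
    State      char(2) NOT NULL
);
CREATE CLUSTERED INDEX IX_Emp ON dbo.Employee (EmployeeID, Company, State);

-- EmployeeID is the leading column of the clustered index, so this predicate
-- can drive a clustered index seek even though Company and State are omitted:
SELECT Company, State
FROM dbo.Employee
WHERE EmployeeID = 1001;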
I installed 4 GB of memory and I have never seen SQL Server memory utilization go beyond 2 GB. I have SQL Server set up to use as much memory as it needs. Does anyone know if SQL Server can make use of more than 2 GB?
Ours is SQL 7.0 Enterprise Edition on NT 4.0 Enterprise Edition. SQL Server has been configured with the default, dynamic memory allocation. The system has 4 GB of RAM and is a dedicated SQL Server machine, but SQL Server seems to use only 1.8 GB of RAM (counter: Total Server Memory). The page faults seem to have a max of 600 and an average of 100. Processor utilization has suddenly increased to 90%. Is there anything wrong with the way SQL Server is using memory? Is it not true that SQL Server 7.0 Enterprise Edition can use up to 3 GB of RAM in a 4 GB system?
Are there any links that can help troubleshoot this problem?
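On 32-bit Windows, a user-mode process is limited to 2 GB of address space by default, which matches what both posts describe. The usual remedies for that era, sketched with the caveat that each has OS and edition prerequisites:

-- 1. Add the /3GB switch to boot.ini (NT/2000 Enterprise editions) so the
--    process can address 3 GB; SQL 7.0 Enterprise can then grow toward 3 GB.
-- 2. On SQL Server 2000 Enterprise with more than 4 GB of RAM, enable AWE:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;  -- requires the Lock Pages in Memory privilege
RECONFIGURE;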
I have a Windows 2003 server with SQL Server installed on it for live call billing, but the CPU utilization is reaching its maximum, averaging above 60%, which is causing a lot of problems, especially for the live environment. I have enough memory, and free hard disk space is more than 40 GB.
What do people think is normal for memory utilization? I know that's too broad, so here are some basics.

MS SQL Server 2000, Windows 2000 Server, 2 GB RAM
Db 1, size = 2.0 GB
Db 2, size = 300 MB
Db 3, size = 50 MB
Db 4, size = 30 MB
Db 5, size = 30 MB

Typically 4-6 users, moderate usage 8 hrs/day. Performance has not slowed. Reboot on Sunday. sqlservr.exe in the Task Manager reports the following:

Sun 61 MB
Mon 200 MB
Tues 800 MB
Wed 1,124 MB
Thu 1,424 MB
Fri 1,303 MB

I was getting srv 2020 errors when I had just 1 GB RAM: "The server was unable to allocate from the system paged pool because the pool was empty." Then I did several updates to address this and got more RAM. I haven't seen the errors since, but I haven't waited for them to happen: I'm rebooting every week now. The memory numbers make me suspect SQL Server.

Scratching my head. Not sure if my problem is gone and this is normal SQL Server 2000 behavior, or if my problem is still lurking and I've only muted it a bit.

Any thoughts greatly appreciated.
Tom
I created an indexed view in SQL 2000, and I expected to see the index created on the view referenced in the execution plan when I query the view. Instead, I see the index for the base table referenced in the execution plan. Why?
There are 6,000,000+ records in the base table, and the view only references 256 of these rows.
Here is some of the DDL if you need it:
CREATE TABLE [alarm_t] (
    [ct_dtm] [datetime] NOT NULL,
    [dst_flg] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [stn_nm] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [alarm_txt] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [utc_dtm] [datetime] NOT NULL,
    [create_utc_dtm] [datetime] NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [alarm_idx2] ON [dbo].[alarm_t] ([ct_dtm], [stn_nm], [dst_flg]) ON [PRIMARY]
GO

CREATE VIEW dbo.alarm_Mapbd_v WITH SCHEMABINDING AS
SELECT [ct_dtm], [dst_flg], [stn_nm], [alarm_txt], [utc_dtm], [create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
GO

CREATE UNIQUE CLUSTERED INDEX alarm_Mapbd_idx1 ON dbo.alarm_Mapbd_v (stn_nm, ct_dtm, dst_flg)
GO

UPDATE STATISTICS alarm_t
GO
UPDATE STATISTICS alarm_Mapbd_v
GO
The following 2 queries have the exact same execution plan, both showing a cost of 50%. I expected to see the index created on the view referenced in the execution plan for the first query. Is the index created on the view being used?
SELECT stn_nm, ct_dtm, dst_flg
FROM alarm_Mapbd_v
GO

SELECT [ct_dtm], [dst_flg], [stn_nm], [alarm_txt], [utc_dtm], [create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
GO
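A likely explanation: in SQL Server 2000, only Enterprise (and Developer) Edition considers an indexed view's own index automatically; on other editions the optimizer expands the view and uses the base table's indexes, which is exactly what both plans show. You can force the view's index explicitly:

SELECT stn_nm, ct_dtm, dst_flg
FROM dbo.alarm_Mapbd_v WITH (NOEXPAND)  -- use the view's clustered index directly
GO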
I have noticed that when using SQL Query Analyzer, some of my queries will use 100% CPU on my PC and next to nothing on the SQL Server, while other queries require 100% CPU on the SQL Server and do next to nothing on my PC. Does anyone know what determines this?
Right now I can produce this by executing two very similar T-SQL selects. The one that runs on the server only has one additional join - a very simply join at that. If I can change my SQL to make it run client side in some situations, that would be VERY HELPFUL!
My server has 16 GB of RAM but it is using only 3 GB, and I see my server is using 3 GB of virtual memory too. Why is my physical memory not being utilized? How can I increase physical memory usage and decrease VM usage?
I have a data mining app that does a series of select statements (no inserts). I'm noticing an odd occurrence: if I start up 4 copies of the app on a quad-core machine, SQL takes full advantage of the 4 cores for a few minutes and then drops to 75% utilization, with the other 25% going to the idle process. Two of the apps appear to be sharing a single processor's worth of SQL, as each of their throughputs is cut by 50%. If I then start a 5th copy of the app, the machine is brought to full 100% utilization, and two of the apps still appear to share a processor. SQL is set up to use all processors, and I have even tried the priority boost setting to no effect.
Any ideas how to ensure full SQL utilization with the same number of apps as cores?
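If this is SQL Server 2005 or later, the scheduler DMV shows how tasks are spread across CPUs, which can confirm whether two sessions really are stuck on the same scheduler; a sketch:

SELECT scheduler_id, status, current_tasks_count, runnable_tasks_count, load_factor
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;   -- exclude hidden internal schedulers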
Happy New Year everyone!

I would like to capture CPU utilization % using T-SQL. I know this can be done using PerfMon, but I would like to run a T-SQL command (maybe once every 5 minutes), see what the CPU utilization is at that instant, insert the value into a table, and run reports based on the data.

I have spent a good amount of time scouring Google Groups, but this is all I have found:

SELECT (CAST(@@CPU_BUSY AS float) * @@TIMETICKS / 10000.00
        / CAST(DATEDIFF(s, SP2.login_time, GETDATE()) AS float)) AS CPUBusyPct
FROM master..sysprocesses AS SP2
WHERE SP2.cmd = 'LAZY WRITER'

The problem is this gives me the total amount of time (in %) the CPU has been busy since the server last started. What I want is the % at that instant, the same number we see in Task Manager and PerfMon.

Any help would be appreciated. Thanks
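On SQL Server 2005 SP2 and later, the scheduler-monitor ring buffer records point-in-time CPU figures roughly once a minute, which matches this requirement; a sketch:

-- Most recent CPU samples: SQL Server's own CPU % and the system idle %.
SELECT TOP 5
    x.rec.value('(./Record/@id)[1]', 'int') AS record_id,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (
    SELECT CONVERT(xml, record) AS rec
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE '%<SystemHealth>%'
) AS x
ORDER BY record_id DESC;

Each sample can then be inserted into a history table from a SQL Agent job on the desired 5-minute schedule.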
I created a table with a column named "description" of type varchar(8000). My doubt is: if I am not storing 8000 characters in this column, will SQL Server use the storage space needed for 8000 characters, or only the space my text actually needs?
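varchar is variable-length: each row stores only the actual characters plus a small length overhead, so a varchar(8000) column holding 5 characters does not consume 8000 bytes. A quick demonstration:

CREATE TABLE dbo.demo_t (description varchar(8000));
INSERT INTO dbo.demo_t VALUES ('hello');
SELECT DATALENGTH(description) AS bytes_used FROM dbo.demo_t;  -- returns 5, not 8000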