Database Processor Utilization
Jun 13, 2007
Is it possible to know the processor utilization and memory usage of a single database on a SQL Server instance?
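There is no built-in counter that attributes CPU or memory to one database directly, but if the instance is SQL Server 2005 or later, the DMVs give a workable approximation: CPU charged to plans still in cache, and buffer pool pages held per database. A hedged sketch (the numbers only cover what is currently cached):

SELECT DB_NAME(st.dbid) AS database_name,
       SUM(qs.total_worker_time) / 1000 AS cached_cpu_ms   -- CPU charged to plans still in cache
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
GROUP BY DB_NAME(st.dbid)
ORDER BY cached_cpu_ms DESC

SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb               -- 8KB pages currently in the buffer pool
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC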
I have noticed that when using SQL Query Analyzer some of my queries will use 100% CPU on my PC and next to nothing on the SQL server, while other queries require 100% CPU on the SQL server and do next to nothing on my PC. Does anyone know what determines this?
Right now I can reproduce this by executing two very similar T-SQL selects. The one that runs on the server only has one additional join, and a very simple join at that. If I can change my SQL to make it run client side in some situations, that would be VERY HELPFUL!
Thanks in advance!
Ryan
I have a data mining app that does a series of select statements (no inserts). I'm noticing an odd occurrence: if I start up 4 copies of the app on a quad core machine, SQL takes full advantage of the 4 cores for a few minutes and then drops to 75% utilization, with the other 25% on the idle process. Two of the apps appear to be sharing a single processor of SQL, as each of their throughputs is cut by 50%. If I then start a 5th copy of the app, the machine is brought to full 100% utilization, though two of the apps continue to appear to share a processor. SQL is set up to use all processors, and I have even tried setting the priority boost option, to no effect.
Any ideas how to ensure full SQL utilization with the same number of apps as cores?
thanks,
We are seeing that the %Processor Time for the sqlservr process in Perfmon is over 100%. I am trying to understand how the percentage of use can be over 100%, and why it is over 100%. Someone told me that if the machine has multiple processors, it will be over 100%. If that is the case, how can I determine what the maximum and normal values are? If I have 4 processors, does that mean 400% is the max? That does not make sense, since it is supposed to be a percentage value...
Could someone explain to me how the CPU Utilization value is being measured, and if it is going over 100%, why that is and how I can determine what the threshold should be for monitoring?
SQL 2005 on Windows 2003 cluster.
Thanks!
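For what it's worth, the Process\% Processor Time counter sums the usage of every logical processor, so on a 4-CPU node 400 really is the ceiling; dividing by the CPU count (or watching Processor(_Total)\% Processor Time, which is already averaged) gives the familiar 0-100 scale. A quick way to read the CPU count from T-SQL on SQL Server 2005, as a hedged sketch:

SELECT cpu_count                 -- logical processors visible to this instance
FROM sys.dm_os_sys_info
-- normalized utilization ~= (Process\% Processor Time for sqlservr) / cpu_count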
I am running a query on a SQL Server 2005 database and encountered the following error message:
"Internal Query Processor Error: The query processor encountered an unexpected error during execution."
There is a join between a table on the 2005 database and another on a 2000 database. I have run DBCC CHECKTABLE and found no errors on the two tables.
Anybody with ideas?
Thanks
Database: MS SQL Server ver 6.50.201
Problem: server startup / server time out
Details:
SQL Server shows 100% CPU utilization when started either automatically or manually, and does not proceed / hangs up (server time out), while Windows NT operates extremely slowly.
The background of the situation in brief is as stated below.
The errorlog was found to have the following messages:
warning : OPEN OBJECTS parameter may be too low.
Attempt was made to free up descriptors in localdes()
Run sp_configure to increase parameter value.
Error: 644, severity: 21, state: 1
The nonclustered leaf row entry for page 2 row 1 was not found in index page 40, indexid 2, database 'tempdb'
Error: 2620, severity: 21, state: 3
The offset of the row number at offset 32 does not match the entry in the offset table of the following page: page pointer = 0x1395800, page no = 40, status = 0x2, objectid = 1, indexid = 2
Action taken
I used sp_configure to increase open objects from the default 500 to 1000, and the LE threshold maximum value from the default 200 to 400.
After that I ran RECONFIGURE and restarted the computer for the changes to take effect. That worked fine and the server was working OK. But since yesterday the MS SQL Server has been utilizing 98-99% of the CPU and I am not able to connect to the server.
I tried to start the server with the minimal configuration by specifying the -f option in the Service Manager startup options; it showed the following messages:
00/12/07 10:40:49.73 ods Unable to connect. The maximum number of '5' configured user connections are already connected. System Administrator can configure to a higher value with sp_configure.
00/12/07 10:40:50.02 ods Unable to connect. The maximum number of '5' configured user connections are already connected. System Administrator can configure to a higher value with sp_configure.
After which I tried starting SQL Server with the -c -f options,
which in the Windows NT event detail shows
"mesg 18109: Recovery dbid 6 ckpt(55813,8) oldest tran= (55813,0)"
The open procedure for service "MSSQLServer" in DLL "SQLCTR60.DLL" has taken longer than the established wait time to complete. The wait time in milliseconds is shown in the data.
DB-LIBRARY error - SQL Server connection timed out.
and in the errorlog in the MSSQL\LOG directory:
00/12/07 15:06:43.24 spid1 Recovering Database 'master'
00/12/07 15:06:43.31 spid1 Recovery dbid 1 ckpt (7944,28) oldest tran=(7944,0)
00/12/07 15:06:43.41 spid1 1 transactions rolled forward
00/12/07 15:06:43.49 spid1 Activating disk 'AM'
00/12/07 15:06:43.49 kernel initializing virtual device 1, D:\MSSQL\DATA\AM.DAT
00/12/07 15:06:43.50 spid1 Activating disk 'AMLOG'
00/12/07 15:06:43.50 kernel initializing virtual device 2, D:\MSSQL\DATA\AMLOG.DAT
The dbid is/was 6 in both instances; the database with this dbid is named "AM" and has over 1,300 tables (approx.).
What could be the problem, and what is the solution for it?
Kindly help me out of this situation.
If any more information is needed please contact me on devendrakulkarni@yahoo.com / devendra@me.iitb.ernet.in
Regards
Devendra
SQL Server 2005 9.0.3161 on Win 2k3 R2
I receive the following error:
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
VALUES (@journal_id, @acct_id, @acct_recv_id, @amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key, because it is important to database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through Management Studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
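One thing worth ruling out (a hedged suggestion, not a confirmed cause): sessions arriving through ODBC/.NET often run with different SET options than a Management Studio window, and different SET options can lead the optimizer down different plan-compilation paths. You can compare the two sessions side by side; the session_id values below are placeholders for your actual SSMS and ODBC spids:

SELECT session_id, program_name,
       ansi_nulls, ansi_padding, ansi_warnings,
       arithabort, quoted_identifier, concat_null_yields_null
FROM sys.dm_exec_sessions
WHERE session_id IN (51, 52)     -- hypothetical: the SSMS spid and the ODBC spid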
Hi All,
We have two production boxes running different ERP apps. We just added 2 more CPUs to one box and 1 CPU to the other. The current CPU config as it stands is:
Box 1 = 4 CPU's
Box 2 = 2 CPU's
My boss wants some stats to see if performance is any better, and if yes, by how much.
How can I find such information from my box?
Can anyone help with these stats?
Thanks in advance
Girish
Hi all,
I have a problem trying to find out why only one CPU of a 2-CPU hyper-threaded box is utilized. Using Task Manager, I can see 4 processor windows but only 1 is actually utilized. I selected the boost SQL Server priority option and selected all CPUs for use. Queries and all other commands, e.g. DBCC checks, appear to use only a single CPU.
Any help would be nice.
thanks
Andrew
Hi All,
I want to keep track of the CPU utilization and number of users connected for each database on our production box. I chose to get the data from the sysprocesses table in the master database.
But I realised that for some reason the master..sysprocesses.cpu column stays static or just keeps adding to existing values.
Is there any way through which I can clear this data (the cpu column in the sysprocesses table) after I have captured it in a table?
Any help is appreciated.
Thanks.
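The cpu column in sysprocesses is cumulative for the life of each connection and cannot be reset, so the usual workaround is to snapshot it on a schedule and report the difference between consecutive snapshots. A rough sketch (the table name cpu_snapshot is made up for the example; connections that close between snapshots are not counted):

CREATE TABLE dbo.cpu_snapshot (
    capture_time datetime NOT NULL DEFAULT GETDATE(),
    dbid         smallint NOT NULL,
    total_cpu    bigint   NOT NULL,     -- cumulative CPU reported by sysprocesses
    user_count   int      NOT NULL
)

INSERT INTO dbo.cpu_snapshot (dbid, total_cpu, user_count)
SELECT dbid, SUM(CONVERT(bigint, cpu)), COUNT(DISTINCT spid)
FROM master..sysprocesses
WHERE spid > 50                          -- skip system spids
GROUP BY dbid

-- the CPU used during an interval is the difference between two consecutive
-- snapshots for the same dbid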
We have a SQL 7.0 Standard Server running on a Windows 2000 Server machine with 2 800MHz Pentium IIIs and 2GB memory. Our front end is Access 97 and 2000, with mostly ADO connections for the scripts but some DAO for forms and reports. We recently "released" a new version of the "database" that caused a catastrophic event to start happening with our SQL server.
Using PerfMon we monitored the CPU utilization on the server and noticed that the CPU load would drop to 0 for approx 5-10 seconds and then jump back up to our average 60-70% utilization. During this drop, there is NO disk activity, no new connections being made, etc. We then took the process a step further and loaded a "stress" program that put about 30% load on the server to start with. Then we monitored each process's load. The SQL Server process would drop to 0% while the stress process continued at 30%.
The problem is that SQL does absolutely NOTHING for 5-10 seconds. You cannot connect, any queries that are running stop, there is no disk activity (logs, data drives), and you cannot even get sp_who2 to run from Query Analyzer. We thought maybe blocking (we have built an "app" that monitors this), but we don't see any blocking before it locks and nothing after it locks.
Out of desperation we "rolled back" to our previous version to get people working again. After business hours, we have tried to duplicate the problem on machines (2 or 3 at a time) but cannot get it to duplicate the problem.
The only experience we had previous to this was using DNS to resolve the server name, which caused a problem EXTREMELY familiar to this one. However, we have double checked every machine we have, and none of them are using DNS to resolve.
Any ideas would be most appreciated.
Patrick Moore
SQL 7 SP2
Hi.
True or false on these:
1. A table has a PK of EmployeeID (non-clustered). The SQL statement's WHERE clause uses something like "WHERE E.Action > 1 AND E.User = 1001 AND E.EmployeeID = 12345".
Question: Will the PK index be used in determining the result set?
2. A table has an index of EmployeeID + Company + State (clustered). The SQL statement's WHERE clause is "WHERE EmployeeID = 1001".
Question: Will the index be used in determining the result set?
Thanks,
Craig
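Rather than reasoning it out, you can let the optimizer answer directly: wrap each statement in SET SHOWPLAN_TEXT and see whether a seek on the PK index (case 1) or on the clustered index (case 2) appears. A sketch for case 1; the table name Employee is hypothetical since the question doesn't give one:

SET SHOWPLAN_TEXT ON
GO
SELECT *
FROM Employee AS E
WHERE E.Action > 1 AND E.[User] = 1001 AND E.EmployeeID = 12345
GO
SET SHOWPLAN_TEXT OFF
GO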
I installed 4GB of memory and I have never seen SQL memory utilization go beyond 2GB. I have SQL Server set up to use as much memory as it needs. Does anyone know if SQL Server can make use of more than 2GB?
Ours is a SQL 7.0 Enterprise edition with NT 4.0 Enterprise Edition. SQL Server has been configured with the default, 'Dynamic Memory Allocation'. The system has 4GB of RAM. This is a dedicated SQL Server machine, but SQL Server seems to use only 1.8GB of RAM (counter: Total Server Memory). The page faults seem to be a max of 600 and an average of 100. The processor utilization has suddenly increased to 90%. Is there anything wrong with the way SQL Server is using memory? Is it not true that SQL Server 7.0 Enterprise edition can use up to 3GB of RAM on a 4GB system?
Are there any links that can help troubleshoot this problem?
Thank you.
-Praveena
After a fresh install of SQL 6.5 with SP5a (or without), the CPU is running at anywhere from 50%-80%. It is loaded on a PDC, but when I stop the SQL service, CPU utilization drops to 0-2%. When I start the SQL service it's right back up there. Does anyone have a suggestion as to how to fix this, or why the service would be doing this?
Thanks, Christel
I am getting high CPU utilization on the SQL Server process (>90%).
However, the overall utilization (NT -- entire box) always seems to be under 50%.
Can someone explain why this is happening? The server is a quad; the SQL Server process seems to be using only two CPUs at a time (not the same ones all the time).
Lightweight pooling has been turned on and the maximum worker thread size has been left at the default value (255).
How can I configure SQL options to spread the load across all four CPUs?
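A hedged place to start: confirm that neither the affinity mask nor the parallelism settings are pinning work to a subset of processors. Keep in mind that a single non-parallel query still runs on one scheduler at a time, so an even spread only shows up with enough concurrent work or with parallel plans:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'affinity mask'                 -- run_value 0 means all CPUs are available
EXEC sp_configure 'max degree of parallelism'     -- 0 lets a parallel query use every CPU
EXEC sp_configure 'max worker threads'            -- 255 is the default mentioned above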
I have a Windows 2003 server with SQL Server installed on it for live calls billing, but the CPU utilization is reaching the maximum and its average is above 60%, which is causing a lot of problems, especially for the live environment. I have enough memory and more than 40GB of free hard disk space,
so where is the problem?!
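The SQL Server version isn't stated, but if this box runs SQL Server 2005 or later, a hedged first step is to ask the plan cache which statements have burned the most CPU since they were cached:

SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC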
What do people think is normal for memory utilization? I know that's too broad, so here are some basics.
MS SQL Server 2000, Windows 2000 Server, 2GB RAM
Db 1, size = 2.0 GB
Db 2, size = 300MB
Db 3, size = 50MB
Db 4, size = 30MB
Db 5, size = 30MB
Typically 4-6 users, moderate usage 8 hrs/day. Performance has not slowed. Reboot on Sunday. sqlservr.exe in the Task Manager reports the following:
Sun 61MB
Mon 200MB
Tues 800MB
Wed 1,124MB
Thu 1,424MB
Fri 1,303MB
I was getting srv 2020 errors when I had just 1 GB RAM: "The server was unable to allocate from the system paged pool because the pool was empty." Then I did several updates to address this and got more RAM. I haven't seen the errors since, but I haven't waited for them to happen: I'm rebooting every week now. The memory numbers make me suspect SQL Server.
Scratching my head. Not sure if my problem is gone and this is normal SQL Server 2000 behavior, or if my problem is still lurking and I've only muted it a bit.
Any thoughts greatly appreciated.
Tom
Dear all,
One of the servers has 2 GB of RAM and Task Manager shows 1.87 GB of memory in use.
I have to migrate a few databases to the same server,
with high IO operations.
I know the server requires more RAM, but how can I prove that the server needs more RAM?
Regards
Mohd Sufian
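One way to put numbers behind the request, assuming SQL Server 2000 or 2005 where master..sysperfinfo exposes the instance's own performance counters: sample the buffer-manager counters over a few days. A Page life expectancy that stays low and a Buffer cache hit ratio that keeps dipping are the usual signs the working set no longer fits in 2 GB (the thresholds are rules of thumb, not hard limits):

SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE counter_name LIKE 'Page life expectancy%'
   OR counter_name LIKE 'Buffer cache hit ratio%'   -- divide by its 'base' counter to get the ratio
   OR counter_name LIKE '%Server Memory%'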
Could anyone help me in finding out why the CPU utilization is very high?
I have two servers, say server A and server B. There is transactional replication going on from server A to B.
There is a table, say Table A, on server A which is being replicated to server B.
I created insert and update triggers on Table A on server B (i.e. on the subscriber). Since then, the CPU utilization for server B has been very high, 80-90%.
When I used Profiler, I could see that whenever the replication stored proc for insert or update executes, CPU utilization goes up.
The trigger just inserts the updated/inserted rows into some other table.
Could anyone tell me why the CPU utilization has gone up so much? I am using SQL Server 2005.
thanx
Hi
Please let me know which DMV is best to capture total system CPU utilization. There are plenty of views, so I am a little bit confused.
thanks in advance,
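There is no single "CPU %" DMV, but on SQL Server 2005 SP2 and later the scheduler-monitor ring buffer records, roughly once a minute, both the SQL Server process CPU and the system idle percentage. A commonly used sketch:

SELECT TOP (30)
       rb.[timestamp] AS ms_ticks_at_sample,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS system_idle_pct,
       100 - rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
           - rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS other_process_pct
FROM (SELECT [timestamp], CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE '%<SystemHealth>%') AS rb
ORDER BY rb.[timestamp] DESC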
I have a few in-house developed applications (VB based) that access the SQL Server for adding, appending, and creating tables. The applications make their changes through queries dynamically generated at the application level.
My MS SQL Server runs on a PIII / 256 MB RAM / 18 GB HDD.
The problem is that the memory utilization of SQL Server keeps growing constantly. Out of the 512 MB (256 physical + 256 virtual) available, the memory utilization reaches a level of 490 MB and stays constant, though SQL Server shows a utilization of 150 MB.
I suspect that SQL is not releasing memory back to the system. Please help in resolving this. The problem may lie with the applications developed.
Jdindian
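SQL Server holds on to the memory it has committed by design and only trims it under operating-system pressure, so on a box this small the usual approach is to cap it explicitly. A hedged sketch, assuming SQL Server 7.0 or later (the 150 MB figure is just an example matching the usage quoted above):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 150   -- MB; leaves the rest of the 256 MB for the OS and apps
RECONFIGURE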
I created an indexed view in SQL 2000, and I expected to see the index created on the view referenced in the execution plan when I query the view. Instead, I see the index for the base table referenced in the execution plan. Why?
There are 6,000,000+ records in the base table, and the view only references 256 of these rows.
Here is some of the DDL if you need it:
CREATE TABLE [alarm_t] (
[ct_dtm] [datetime] NOT NULL ,
[dst_flg] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[stn_nm] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[alarm_txt] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[utc_dtm] [datetime] NOT NULL ,
[create_utc_dtm] [datetime] NOT NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [alarm_idx2] ON [dbo].[alarm_t]([ct_dtm], [stn_nm], [dst_flg]) ON [PRIMARY]
GO
create view dbo.alarm_Mapbd_v with schemabinding
as
SELECT
[ct_dtm],
[dst_flg],
[stn_nm],
[alarm_txt],
[utc_dtm],
[create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
GO
create unique clustered index alarm_Mapbd_idx1 on dbo.alarm_Mapbd_v
( stn_nm, ct_dtm, dst_flg )
go
update statistics alarm_t
go
update statistics alarm_Mapbd_v
go
The following 2 queries have the exact same execution plan, both showing a cost of 50%. I expected to see the index created on the view referenced in the execution plan for the first query. Is the index created on the view being used?
select stn_nm, ct_dtm, dst_flg
from alarm_Mapbd_v
go
SELECT
[ct_dtm],
[dst_flg],
[stn_nm],
[alarm_txt],
[utc_dtm],
[create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
go
Thanks for your assistance.
Tom
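For what it's worth, whether the optimizer considers an indexed view on its own depends on edition in SQL Server 2000 (only Enterprise/Developer do it automatically). On other editions, or simply to test whether the view's index helps, you can reference it explicitly with the NOEXPAND hint and compare plans:

select stn_nm, ct_dtm, dst_flg
from dbo.alarm_Mapbd_v with (noexpand)      -- forces the view's own clustered index to be used
go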
Environment: Win2003 SP1, 32 bit, SQL Server 2K5
My server has 16GB of RAM but it is using only 3GB. And I see my server is using 3GB of virtual memory, too. Why is my physical memory not being utilized? How can I increase physical memory usage and decrease VM usage?
Canada DBA
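On 32-bit Windows 2003 a single process can only address 2-3 GB directly, so the usual way to let 32-bit SQL Server 2005 reach the rest of the 16 GB is AWE. A hedged sketch of the server-side part (it also needs the /PAE switch in boot.ini and the "Lock pages in memory" right for the SQL Server service account; AWE memory is used for data cache only):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory', 12288   -- example cap in MB, leaving headroom for the OS
RECONFIGURE
-- a restart of the SQL Server service is required for 'awe enabled' to take effect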
Happy New Year everyone!
I would like to capture CPU Utilization % using TSQL. I know this can be done using PerfMon, but I would like to run a TSQL command (maybe once every 5 minutes) and see what the CPU utilization is at that instant, so that I can insert the value in a table and run reports based on the data.
I have spent a good amount of time scouring Google groups but this is all I have found:
SELECT (CAST(@@CPU_BUSY AS float) * @@TIMETICKS / 10000.00
        / CAST(DATEDIFF(s, SP2.Login_Time, GETDATE()) AS float)) AS CPUBusyPct
FROM master..SysProcesses AS SP2
WHERE SP2.Cmd = 'LAZY WRITER'
The problem is this gives me the total amount of time (CPU in %) that the CPU has been busy since the server last started. What I want is the % for the instant - the same number we see in Task Manager and PerfMon.
Any help would be appreciated.
Thanks
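One way to get a point-in-time figure from the same counters, sketched for SQL Server 2000: sample @@CPU_BUSY twice and convert the delta to a percentage of the interval. Note it measures the SQL Server process only, expressed against a single CPU, so divide by the number of processors for a whole-machine style number:

DECLARE @busy1 bigint, @busy2 bigint, @t1 datetime, @t2 datetime
SELECT @busy1 = @@CPU_BUSY, @t1 = GETDATE()
WAITFOR DELAY '00:00:05'                                  -- sample interval
SELECT @busy2 = @@CPU_BUSY, @t2 = GETDATE()
SELECT 100.0 * (@busy2 - @busy1) * @@TIMETICKS / 1000.0   -- busy time in ms during the interval
       / DATEDIFF(ms, @t1, @t2)                 AS sql_cpu_pct_of_one_cpu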
Hello
I created a table with a column named "description" as varchar(8000). My doubt is: if I am not storing 8000 characters in this column, will SQL Server use the space needed for 8000 characters, or will it use only the space needed for my text?
Thanking You
Navaneeth
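varchar is a variable-length type: the row stores only the bytes actually present (plus a couple of bytes of length overhead), not the declared 8000. A quick way to see it for yourself:

CREATE TABLE #t (description varchar(8000))
INSERT INTO #t (description) VALUES ('a short value')
SELECT DATALENGTH(description) AS bytes_stored,   -- 13, not 8000
       LEN(description)        AS character_count
FROM #t
DROP TABLE #t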
I'm having a problem with one of my SQL servers (2000 Build 8.00.2140) where it is always reading CPU utilization of 70-100% (more often pegged at 100%).
I have the exact same SQL Server running at a different location on a much less powerful (hardware-wise) server that gets more traffic but only shows 7-21% CPU utilization.
Taskmgr shows the sqlservr.exe process as consuming all the resources. This is a dual 2-core 3.66GHz (4 real CPUs) box with 4GB RAM and 5 x 146 15K SCSI drives hooked up to a $1200 SCSI controller (Dell server). RAM usage is pretty low; the most I've seen is 1GB.
Is there any way to determine what specific connection/thread is causing this? Any diagnostic tools or anything that can show me specifically what is consuming this SQL server? A connection, thread, or anything that points back to a specific IP?
Thanks, Rob.
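On SQL Server 2000 the quickest built-in view of this is sysprocesses, which carries cumulative CPU per connection along with the client host and network address; DBCC INPUTBUFFER can then show what a suspicious spid last sent. A sketch:

SELECT spid, loginame, hostname, program_name, net_address,
       cpu, physical_io, lastwaittype, cmd
FROM master..sysprocesses
WHERE spid > 50                 -- user connections only
ORDER BY cpu DESC

-- then, for a specific offender:
-- DBCC INPUTBUFFER(<spid>)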
I have multiple instances of SQL 2012 Std Edition on a 40-physical-core server. What I have done is use the Process - sqlservr - % Processor Time stat and divide it by 16 (the max number of cores Std Edition can use) as an instance-level measure. I also use Processor object stats to show how busy the server is. How should I represent the server's CPU utilization?
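A hedged way to make the divide-by-16 assumption explicit rather than hard-coded: each instance can report how many schedulers it actually runs on, and the whole-server share is the Process counter divided by the total logical CPUs on the box rather than by the per-edition cap:

SELECT cpu_count AS logical_cpus_visible,                  -- CPUs the instance can see
       (SELECT COUNT(*)
        FROM sys.dm_os_schedulers
        WHERE status = 'VISIBLE ONLINE') AS schedulers_in_use   -- capped at 16 on 2012 Std
FROM sys.dm_os_sys_info
-- instance share of the whole 40-core server ~= (Process\% Processor Time for that sqlservr) / 40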
Hi
I am facing a strange problem with SQL Server 2005. The CPU utilization with SQL Server 2005 is higher by about 70% compared to SQL 2000.
On the same kind of hardware, with the DB server up, I performed the following test:
Declare @i int
Set @i = 10
While @i < 100000
Begin
Insert into arup_emp values(@i,'M',0)
Set @i = @i + 1
end
The CPU utilization average on SQL 2005 was 45% and on SQL 2K it was just 25%. I am seeing a lot of people who seem to be facing this problem, but unfortunately I am not seeing any solution to it.
Can anyone throw some light on this?
Please note that I have also tried the MAXDOP options, but I get the same results.
Please help.
Thanks
Arup
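Not an explanation of the 2005 vs 2000 gap, but one variable worth eliminating in this particular test: the loop commits roughly 100,000 separate transactions, so much of the CPU and log flushing is per-row overhead. A hedged variant that batches the work, assuming arup_emp is as used in the loop above:

SET NOCOUNT ON
DECLARE @i int
SET @i = 10
BEGIN TRANSACTION              -- one commit instead of ~100,000
WHILE @i < 100000
BEGIN
    INSERT INTO arup_emp VALUES (@i, 'M', 0)
    SET @i = @i + 1
END
COMMIT TRANSACTION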
I was tuning a query testing out SARG with these two queries:
select col1 from table1 (nolock) where col1 like '#,%ABC%' or col1 like 'BC,%ABC%'
select col1 from table1 (nolock) where col1 like '%ABC%'
I flushed out the cache, added an index to col1, then ran those two together. Provided below are the actual query plan and stat time:
Query 1: Query cost (relative to the batch): 0%
select col1 from table1 (nolock) where col1 like '#,%ABC%' or col1 like 'BC,%ABC%'
----------------------------------------------------------------------------------------------------------------------------
SELECT Index Seek
Cost:0% <------- [DB1].[dbo].[table1].[Idx..]
Cost: 100%
Query 2: Query cost (relative to the batch): 100%
select col1 from table1 (nolock) where col1 like '%ABC%'
----------------------------------------------------------------------------------------------------------------------------
SELECT Index Scan
Cost:0% <------- [DB1].[dbo].[table1].[Idx..]
Cost: 100%
------------------------------------------------STAT TIME-----------------------------------------------------------
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 7 ms.
(3 row(s) affected)
Table 'table1'. Scan count 2, logical reads 2932, physical reads 0, read-ahead reads 18, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 938 ms, elapsed time = 943 ms.
(3 row(s) affected)
Table 'table1'. Scan count 1, logical reads 2927, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 515 ms, elapsed time = 505 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 1 ms.
------------------------------------------------STAT TIME-----------------------------------------------------------
As expected, the SARGable Query 1 did a nonclustered index seek and the non-SARGable Query 2 did an index scan instead. According to the query plan, Query 1 consumed 0% of the cost relative to the batch whereas Query 2 was 100%. When I checked the CPU time, I was a bit confused, because Query 1 showed a CPU time of 938ms whereas Query 2 showed 515ms. I triple checked and every time I got similar results. I am sure I'm missing something; could someone please tell me what I'm missing? Thanks a bunch!
Our company recently combined our DBs into one SQL 2005 Server.
Dell Power Edge 1800 with 3.00 GHz Xeon Processor 800 FSB, 1 GB of RAM
Dell Power Edge 1600 with 2.80 GHz Xeon Processor 533 FSB, 1 GB of RAM
Combined into one:
Dell Power Edge 2950 Dual Core 1.6 GHz Xeon Woodcrest Processor, 4 GB of RAM
However, the CPU utilization on this new server is staying at about 90%, with 3.82 GB of RAM used as well. It's Windows Server 2003 R2 x64 edition running SQL Server 2005 SP2 x64. I have searched around Microsoft's website for any information that could be of help to me, but I was unable to locate anything. I was hoping that someone could provide some insight as to why this might be occurring, or whether this is a known issue.
Thanks,
Peter
We are using an IBM Xeon server with 4 GB RAM, running Windows 2000 Server and MS SQL Server 2005.
Quite frequently our server response time is very slow, although the CPU utilization is between 7-10%. We are not even able to run Notepad on the server. We observed that the memory occupied by the sqlservr process is high. If we restart the server then it returns to a normal level, but we do not want to restart the server frequently.
Kindly provide me a suitable solution.
Srinath
DBA
Reid & Taylor
Mysore, India
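A hedged first step before the next forced restart: ask SQL Server 2005 which memory clerks are holding the space (the buffer pool usually dominates), and then consider capping 'max server memory' so Windows and other processes keep enough room to stay responsive:

SELECT TOP (10)
       type,
       SUM(single_pages_kb + multi_pages_kb + awe_allocated_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC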