We are using an IBM Xeon server with 4 GB RAM, running Windows 2000 Server and MS SQL Server 2005.
Frequently the server's response time is very slow. Although CPU utilization is only 7-10%, we cannot even run Notepad on the server, and we have observed that the memory occupied by the SQL Server process is very high. In other situations the CPU utilization sits at 100% (for more than an hour), and during that time SQL Server also occupies a lot of memory. If we restart the server, everything returns to normal, but we do not want to restart the server frequently.
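A likely factor is that SQL Server's buffer pool grows to fill available memory by default. A minimal sketch of capping it, assuming roughly 1 GB should be left for the OS and other programs on a 4 GB box (the exact value is a judgment call):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap the buffer pool so the OS keeps enough memory to stay responsive
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;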
Our company recently combined our DBs into one SQL 2005 Server.
Old servers:
Dell PowerEdge 1800 with 3.00 GHz Xeon processor (800 FSB), 1 GB of RAM
Dell PowerEdge 1600 with 2.80 GHz Xeon processor (533 FSB), 1 GB of RAM
Combined into one:
Dell PowerEdge 2950 with dual-core 1.6 GHz Xeon Woodcrest processor, 4 GB of RAM
However, CPU utilization on this new server stays at about 90%, with 3.82 GB of RAM in use as well. It's Windows Server 2003 R2 x64 edition running SQL Server 2005 SP2 x64. I have searched Microsoft's website for any information that could help, but I was unable to locate anything. I was hoping someone could provide some insight as to why this might be occurring, or whether this is a known issue.
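For anyone digging into where the CPU goes on SQL Server 2005, the plan-cache DMVs are a reasonable first stop; a minimal sketch (total_worker_time is reported in microseconds):

SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;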
We are having a big performance issue at our site. Here is the configuration of the box running SQL Server 2005:
64 bit Windows Enterprise Edition + SP1
Dual CPU, 16GB RAM
RAID 1 and RAID 5 - internal
SQL Server 2005 64-bit Enterprise Edition
With SP2 (CTP from December)
The "Lock Pages in Memory" privilege is set, and it is granted to the same account that runs the SQL Server services.
We are noticing that under load, CPU utilization reaches nearly 100%. I have researched this and come across a couple of posts indicating that this issue was fixed in SP2 - for example, one post talked about hotfix #716, which is also part of SP2 - but even after applying that service pack, we are still having this issue. I haven't tried setting the parameterization option to forced for the database yet.
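For what it's worth, that option is a single statement; the database name below is a placeholder:

-- make the optimizer parameterize ad hoc queries automatically
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;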
Is this a known issue with SP2? If not, what can we look for and fix in our environment? Please let me know if I can provide more information.
After a fresh install of SQL 6.5 with SP5a (or without it), the CPU runs at anywhere from 50%-80%. It is loaded on a PDC, but when I stop the SQL service, CPU utilization drops to 0-2%; when I start the SQL service, it goes right back up. Does anyone have a suggestion as to how to fix this, or why the service would be doing this?
I am getting high CPU utilization from the SQL Server process (>90%). However, the overall utilization (NT - the entire box) always seems to be under 50%.
Can someone explain why this is happening? The server is a quad; the SQL Server process seems to be using only two CPUs at a time (not the same ones all the time).
Lightweight pooling has been turned on, and the maximum worker thread count has been left at the default value (255).
How can I configure SQL Server options to spread the load across all four CPUs?
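By default SQL Server already schedules work across all CPUs, so a restricted affinity mask or fiber mode is worth ruling out; a sketch that resets both options to their defaults (the lightweight pooling change takes effect after a restart):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 0;        -- 0 = use all available CPUs
EXEC sp_configure 'lightweight pooling', 0;  -- disable fiber mode
RECONFIGURE;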
Could anyone help me find out why the CPU utilization is very high?
I have two servers, say Server A and Server B. There is transactional replication going on from Server A to Server B.
There is a table, say Table A, on Server A, which is being replicated to Server B.
I created an insert and update trigger on Table A on Server B (i.e. on the subscriber). Since then, the CPU utilization on Server B has been very high, 80-90%.
When I used Profiler, I could see that whenever the replication stored procedure for an insert or update executes, CPU utilization goes up.
The trigger just inserts the updated/inserted rows into another table.
Could anyone tell me why the CPU utilization has gone up so much? I am using SQL Server 2005.
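For reference, a trigger of the kind described, written as one set-based insert, looks roughly like this (all names are hypothetical). If the real trigger does per-row work (cursors, scalar lookups), that cost is multiplied by the replication procedures' row-by-row delivery and would explain the CPU jump:

CREATE TRIGGER trg_TableA_copy ON dbo.TableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- one set-based insert per firing, never a loop over rows
    INSERT INTO dbo.TableA_history (id, col1, col2)
    SELECT id, col1, col2 FROM inserted;
END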
I have a few in-house developed applications (VB based) that access the SQL Server for adding, appending, and creating tables. The applications make these changes through queries generated dynamically at the application level.
My MS SQL Server runs on a PIII / 256 MB RAM / 18 GB HDD.
The problem is that the memory utilization of SQL Server keeps growing constantly. Out of the 512 MB available (256 physical + 256 virtual), memory utilization reaches a level of 490 MB and stays constant, though SQL Server itself shows a utilization of 150 MB.
I suspect that SQL Server is not releasing memory back to the system. Please help in resolving this. The problem may also lie with the applications we developed.
In SQL 2005 Replication Monitor, I was not seeing details for any of the publications on the "Distributor to Subscriber History" tab, so I decided to stop and start synchronisation on one publication. At that time there were approximately 20,000 undistributed commands. After the stop/start of the distribution agent, I started seeing messages like "x transactions with x commands were delivered". Then I went and restarted all the other distribution agents using Replication Monitor.
Has anyone experienced this kind of behaviour?
The second issue is that our transactional replication appeared to have caught up, but I was surprised to find that the distribution server was running at 100%. A Profiler trace of the distribution database revealed that the sp_MSget_repl_commands procedure was being executed at a cost in excess of 400,000 reads, about 7,000 in CPU, and 15 seconds in duration. It looked to me as if sp_MSget_repl_commands had chosen an inefficient execution plan, but then I realised I couldn't recompile system procedures. I think a stop and start of the SQL instance is the only option I have.
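Before resorting to an instance restart, a gentler option may be to force the cached plan out so it is recompiled on next use; a sketch, assuming the default distribution database name:

USE distribution;
-- mark plans that reference the replication commands table for recompilation
EXEC sp_recompile 'MSrepl_commands';
-- or, more bluntly, clear the whole plan cache (all plans recompile)
DBCC FREEPROCCACHE;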
We have a system (32 GB RAM, 2 TB hard disk, Windows 7, SQL Server 2008 R2 Enterprise 64-bit). It seems that whenever I run a query against the database (even one returning only 50 records), memory utilization shown in Task Manager is very high (30 GB). How can I control this usage? The memory setting is the default in server properties (min 0 and max 2147483647).
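Task Manager is mostly showing the buffer pool, which by design grows toward the max server memory cap (effectively unlimited here). To see which component actually holds the memory before deciding where to cap it, a sketch that works on 2008 R2 (these two page columns were merged into pages_kb in SQL Server 2012):

SELECT TOP 10
    type,
    SUM(single_pages_kb + multi_pages_kb) / 1024 AS total_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_mb DESC;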
I have a question for anyone who has some tips/pointers for optimizing SQL merge replication publications.
The front end web server is running IIS 6.0 on Windows 2003 x86 Server Standard (Server A). The back end database server is running SQL 2000 Standard on Windows 2003 x86 Standard (Server B). The merge replication clients connect via HTTPS over the Internet from a custom C#.NET 2005 application using SQL 2005 Mobile running on Windows Mobile 5.0 (Client).
The publication itself has several filters on it. The entry point uses the user's Windows username to start the filter; based on the user, it then filters the records in multiple tables. There are 68 articles and 44 filter statements. The filters extend multiple layers deep; in other words, they are not all filtering off the HOST_NAME() variable - some tables filter from records in tables that themselves filter from the HOST_NAME() variable. The publication is set to minimize the data sent to the clients, and considers a subscription out of date if it has not synced in the last 4 days. All the rowguids are indexed as well.
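For reference, a parameterized row filter of the kind described is attached to an article roughly like this; the publication, table, and column names below are hypothetical:

EXEC sp_addmergearticle
    @publication = N'MobilePub',
    @article = N'Orders',
    @source_owner = N'dbo',
    @source_object = N'Orders',
    -- each subscriber only receives rows matching its HOST_NAME()
    @subset_filterclause = N'SalesRepLogin = HOST_NAME()';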
There are approximately 35 clients actively using the application at any given time. On average, a client will initiate a merge replication 3-4 times per hour from 8am-5pm. Generally, a sync will take between 10 seconds and 2 minutes to complete, with most of them being around 30 seconds on average.
When a client starts a sync, there is a spike to about 50% on the server's CPU graph. If multiple clients attempt to sync at the same time the CPU utilization can be pushed to 100% for extended periods (more than 30 seconds).
I recently completed a project to increase the bandwidth available to the clients, and plan to reduce the number of filters significantly (although this will obviously increase the amount of data going to the clients and the storage needs on the individual devices). I also plan on changing the setting to not minimize the amount of data sent to the clients.
Having said all that, does anyone have any information about how to further optimize merge publications for mobile clients? The next publication will be on SQL 2005 x64 Standard if I can solve the issues in the test environment. I would like to enhance the publication as much as possible to make the end-user experience better than it currently is.
Windows is Server 2012 64-bit edition, and SQL Server is 2012 64-bit edition.
The RAM installed on each server is around 65 GB, of which 49 GB is set as max server memory for the SQL Server service on both servers.
The databases used by Reporting Services are also in the Always On group.
We have also configured Reporting Services, which is running on each of the respective servers.
The issue is that on the primary server Reporting Services is using almost 7 GB, while on the secondary it is using 10 GB, even though there are only 5 reports and they are used only within our offices.
What could the issue be, and how can I check why SSRS is using so much memory?
Any query or PerfMon counters would help.
The reports are used randomly on the client side.
I have checked memory utilization through Task Manager.
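One concrete query to start with: the ReportServer execution log records per-report timings, byte counts, and row counts, which can point at the heavy reports. A sketch, assuming the default ReportServer database name:

SELECT TOP 20
    TimeStart, TimeDataRetrieval, TimeProcessing, TimeRendering,
    ByteCount, [RowCount]
FROM ReportServer.dbo.ExecutionLog
ORDER BY ByteCount DESC;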
Hi, we work with Reporting Services in the Itanium edition of SQL Server 2005. With some reports (only some) we have the problem that a very long time is needed to process the report. I checked the ExecutionLog table in my ReportServer database and found that the values in the TimeDataRetrieval (TDR) column are reasonable, but those in the TimeProcessing (TP) column are very high. Report1: TDR = 8304 ms, TP = 34377 ms. In other (most) reports the values are completely different. Report2: TDR = 8336 ms, TP = 233 ms.
Now the most interesting thing: when I execute the same report on our test server, which is a Xeon machine (same data volume, no user workload), I get the following results: Report1: TDR = 5244 ms, TP = 11731 ms; Report2: TDR = 4750 ms, TP = 163 ms. The differences in TimeDataRetrieval (TDR) should be OK, because the production machine is used by over 700 people, so the response times of the Analysis Server database can differ. Report1 and Report2 do not differ much in complexity: a few groupings, parameters, and so on.
The Itanium machine is a 2-way dual-core system with 16 GB RAM. The Xeon machine is a 2-way Xeon system with 8 GB RAM (32-bit processors).
What is going on there? How can I optimize the TimeProcessing of Report1 on the Itanium machine? Which performance counters or tools should I use to dig deeper and find out where the problem is?
Hi, I've been creating a DB application using MS Access and MSDE. Only two of us are using the application, and the server and the app both run great on my laptop (1.6 GHz Pentium M, 2 GB RAM, W2K Pro). The only problem is that when I take my laptop home, my coworker loses access to the server.

We recently purchased a dedicated server to run the DB on at the office. It's a 2.8 GHz dual Xeon, 2 GB RAM, running XP Pro. We also bought SQL Server, but I installed the Personal Edition because we are not using a server OS. It's my understanding that XP can utilize both processors, and the Personal Edition can use both processors as well. (On a side note, why is Enterprise Manager showing that I have 4 processors?) In addition, I understand PE has a workload governor that cripples performance when more than 5 TSQL commands are being run simultaneously.

I backed up the DB on my laptop and restored it on our new server. But when I run the exact same queries with the exact same number of rows, my queries on the new server take 3x longer(!?). Can someone please offer a few suggestions for why this is happening? What can I do to improve performance on the server machine? Please let me know if I need to supply more information.

Thanks,
Alex
Is there a way to change mirroring from High Availability to High Protection without having to reconfigure database mirroring? Using the interface in Management Studio, I can change the configuration to High Performance, but not to High Protection, despite both High Availability and High Protection being synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been set up? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option become available?
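For reference, removing the witness while leaving safety at FULL is exactly what turns a High Availability session into High Protection; a sketch with a placeholder database name:

ALTER DATABASE YourDb SET WITNESS OFF;          -- drop the witness
ALTER DATABASE YourDb SET PARTNER SAFETY FULL;  -- keep the session synchronous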
I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, does the mode automatically change to High Performance when a failover occurs? Since, for a failover to occur, something has to have happened to the primary, it will be impossible to commit transactions synchronously on the new primary and its mirror, since one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
I'm having a problem with one of my SQL Servers (2000, build 8.00.2140): it always shows CPU utilization of 70-100% (more often pegged at 100%).
I have the exact same SQL Server running at a different location on a much less powerful (hardware-wise) server that gets more traffic, but it only shows 7-21% CPU utilization.
Task Manager shows the sqlservr.exe process consuming all the resources. This is a dual 2-core 3.66 GHz machine (4 real CPUs) with 4 GB RAM and 5 x 146 GB 15K SCSI drives hooked up to a $1200 SCSI controller (a Dell server). RAM usage is pretty low; the most I've seen is 1 GB.
Is there any way to determine which specific connection/thread is causing this? Are there any diagnostic tools, or anything that can show me specifically what is consuming this SQL Server - a connection, thread, or anything that points back to a specific IP?
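On SQL Server 2000 the usual starting point is sysprocesses, which carries per-connection CPU, I/O, and the client host name; a minimal sketch:

SELECT TOP 10
    spid, cpu, physical_io, hostname, program_name, loginame
FROM master..sysprocesses
ORDER BY cpu DESC;
-- then DBCC INPUTBUFFER(<spid>) shows the last statement that spid submitted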
I am planning to build a server to be used as both a SQL Server and web server. Right now I can only use a single box for both. I have read some threads where dual processors were having problems with some parallel queries, with the suggestion of having SQL Server use a single CPU. My budget is limited, so I am debating whether to get a 2.6 GHz dual Xeon (533 FSB), a dual P4 (800 FSB, DDR2 RAM), or stick with a speedy single CPU. If I get a dual CPU motherboard, is it a good idea to have one CPU used for SQL Server and the other for everything else?

John Dalberg
Hi, I am facing a strange problem with SQL Server 2005: CPU utilization with SQL Server 2005 is higher by about 70% compared to SQL 2000.
On the same kind of hardware, with the DB server up, I performed the following test:

DECLARE @i int
SET @i = 10
WHILE @i < 100000
BEGIN
    INSERT INTO arup_emp VALUES (@i, 'M', 0)
    SET @i = @i + 1
END
The average CPU utilization on SQL 2005 was 45%, while on SQL 2000 it was just 25%. I see a lot of people who seem to be facing this problem, but unfortunately no solutions.
Can anyone throw some light on this? Please note that I have also tried the MAXDOP options, but get the same results.
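One variable worth ruling out before blaming the engine: each INSERT above commits individually, so the test largely measures per-transaction overhead. Wrapping the loop in a single transaction is a sketch of a fairer comparison:

SET NOCOUNT ON;
DECLARE @i int;
SET @i = 10;
BEGIN TRAN;
WHILE @i < 100000
BEGIN
    INSERT INTO arup_emp VALUES (@i, 'M', 0);
    SET @i = @i + 1;
END
COMMIT;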
We are seeing that % Processor Time for the sqlservr process in PerfMon is over 100%. I am trying to understand how the percentage of use can be over 100%, and why it is. Someone told me that if the machine has multiple processors, it can go over 100%. If that is the case, how can I determine what the maximum and normal values are? If I have 4 processors, does that mean 400% is the max? That does not seem to make sense, since it is supposed to be a percentage value...
Could someone explain how the CPU utilization value is measured, why it can go over 100%, and how I can determine what the threshold should be for monitoring?
For the last week, our production SQL Server has been running very slowly, with CPU usage at 80-100% almost all the time. This causes certain queries to time out. Our application has never timed out before. Also, we have not applied any updates or installed anything on our production machine recently.
Has anyone experienced this issue? If so, what did you do to resolve it? Any help would be greatly appreciated.
I have a 2003 server with SQL 2005 on it, and sqlservr.exe is using 880 MB of memory and will climb to 1.4 GB. If I reboot the server it goes back to 100 MB and slowly climbs back up. Any ideas? I am not a SQL guy.
We have a live server that has had very high CPU usage in the last few days, and as a result the site is extremely slow. There are about 60,000 users per day on average. It has always had high CPU usage, but not as bad as in the last few days: in the 90s, sometimes even reaching 100.
Any solutions? I've run SQL Profiler to check for queries with high CPU usage. When I run the same queries on our staging server, they are very quick. For example, the same query can take 1 minute on stage but 12 minutes on the live site. We do know that this is also related to traffic: during lower-traffic times the same query takes less time, but it is still never as fast as stage.
Oh yeah, it's SQL Server 2005 with ASP code running on IIS 6 on Windows Server 2003.
Please help. I really appreciate any advice. Thank you.
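One cheap first check when identical queries run fast on staging but slow on live is stale optimizer statistics; refreshing them is a single call (it touches every table, so schedule it off-peak):

-- refresh statistics on every table so the optimizer re-costs its plans
EXEC sp_updatestats;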
I've been asked to look into the possibility of using SQL Server in a high availability environment. We have a few web-based applications that use SQL Server back-end DBs. What we are looking into is whether we can use multiple instances of SQL Server (on multiple physical servers) with some type of clustering/load balancing. I haven't worked with SQL Server replication before, so I'm not even sure where to start in exploring the possible avenues.
Can anyone push me in the right direction? Any info would be greatly appreciated!
To All,

I have a SQL 2000 SP3a database (<1 GB) running on a box with 4 x 3 GHz physical CPUs and 4 GB of RAM. It is Windows Server 2003 with hyper-threading turned on. There are ~420 .NET users/connections (fat client, no web/app servers) with connection pooling and ~1 trx/sec. The database growth is negligible, and actually is not even relevant, which I will explain in a minute. 99% of the transactions come from one stored procedure that does a select. The result sets are relatively small as well, 1~100 rows. Yes, I have tuned it with the Index Tuning Wizard, changed the SQL memory configurations, etc.

My problem is this: the first day after a reboot, the server runs at 6% CPU during peak hours. During the non-peak hours, until the next day, something apparently happens. The next day (the second day after a reboot), it jumps to 40% CPU during peak hours, and it will continue to run at 40% CPU during peak hours until the next reboot. This phenomenon has been occurring for 6 months or more, and the traffic on the server is the same on day 1 as it is on days 2, 3, 4, ... This database was on another server with 100+ DBs and exhibited the same behavior, bringing that server to its knees, so we had to move it to the server in question, with no other DBs.

I have googled my eyes out: the Microsoft site, white papers, PerfMon, SQLDiag, PSSDiag, execution plans, the Index Tuning Wizard, and the list goes on! I currently have a case open with Microsoft that has been open for months now. I have been passed around to the third "MS Tech Specialist". I have run PSSDiag a total of 6 times for them, for hours on end. I have changed MAXDOP. I could give more information, but I would be here for days. I am running out of patience/ideas, and Microsoft is apparently blowing smoke.

Any ideas are greatly appreciated! Thanks in advance!

JL
I posted a link to a prior article in here, the one about high performance hierarchies, and now have the first two parts of a new series. Hopefully this is of value to someone.

http://www.yafla.com/papers/SQL_Ser..._sql_server.htm

Thanks.
Hi, we have a prod server running SQL Server 2000 64-bit. It is a 4-CPU server with 16 GB of RAM, and we have a max memory setting of 15.5 GB for SQL Server. In spite of that much memory being available to SQL Server, it still uses paging file space, a lot. Looking through Task Manager, we can see SQL Server with 15.5 GB of memory usage and 22 GB of virtual memory usage. I don't understand why it should be using close to 7 GB of paging space when it has so much memory. How does SQL Server use virtual memory vs. physical memory? Has anyone seen this before?

Thanks,
GG
If you delete rows in a table and then do a full table scan, is the scan supposed to read up to the highest block/extent that the table ever attained (as in some databases I use)? If so, what is the best way to take care of such tables in SQL Server? I appreciate your responses.

Vince
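If a table has accumulated sparsely filled pages after mass deletes, rebuilding its indexes compacts it; a sketch with a hypothetical table name (for a heap with no indexes, creating a clustered index and then dropping it achieves a similar compaction):

-- rebuild all indexes on the table, compacting its pages
DBCC DBREINDEX ('dbo.BigTable');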
I made a backup of the database on the QA box and restored it on the staging box. Yet when I run something as simple as a select query (select * from <table>), the less powerful QA box is faster.
I figured maybe the statistics were different on the staging box. I ran DBCC SHOWCONTIG to make sure they were identical. I also ran Red Gate's SQL Compare and Data Compare to make sure everything was identical.
I figured maybe the query optimizer needed to be tweaked. I recreated the indexes and updated statistics on the staging box. The queries actually got slower as a result.
I thought maybe the SCSI drives were slower. I tried breaking the mirror on the staging box - no luck. I put the mirror back in place and ran a test where I copied a large folder from one directory to another on the staging box, then repeated the same test with the same data on the QA box. The staging box was more than twice as fast as the QA box.
It doesn't appear to be a problem with the query, and adjusting memory in SQL Server has no effect. Both boxes are running SQL Server 2000 SP3. Why is the bigger machine running queries hundreds of milliseconds slower than the smaller machine? Any help will be appreciated!
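When the hardware checks out, comparing the per-query engine-side numbers on both boxes can isolate where the time goes; a sketch with a hypothetical table name, run identically on each server:

SET STATISTICS IO ON;    -- report logical/physical reads per statement
SET STATISTICS TIME ON;  -- report parse/compile and execution CPU time
SELECT * FROM dbo.SomeTable;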
I am looking to improve the performance of my sql server databases.
I currently have a dual-location system. The database server setup is basically a quad Xeon with 4 GB of RAM at my office and a dual Xeon with 4 GB at a remote web-hosting location. There are separate application/web/intranet servers at each site. The two database servers are replicated, with the local server publishing to the remote server.
The relational database holds circa 26 million records, growing by a volume of 10,000 per day, and approximately 50,000 queries are performed per day.
My theory is that the replication between the two databases is causing a slowdown; despite fast network connections (averaging 200 ms between servers), the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and have that server replicate to the remote one, placing the burden on the second server?
I am planning to upgrade the local server to a high-capacity 4+ CPU 64-bit server. My problem is that although I have noticed a slowdown in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
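As a starting point for quantifying the replication load itself, SQL Server exposes per-database latency and throughput figures through a stored procedure:

-- returns replicated transaction counts, rate, and latency per published database
EXEC sp_replcounters;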
Database: MS SQL Server ver. 6.50.201. Problem: server startup / server timeout. Details: SQL Server shows 100% CPU utilization when started automatically or manually, and does not proceed / hangs up (server timeout), while Windows NT operates extremely slowly.
The background of the situation, in brief, is as follows. The error log was found to have these messages:
Warning: OPEN OBJECTS parameter may be too low. Attempt was made to free up descriptors in localdes(). Run sp_configure to increase parameter value.
Error: 644, severity: 21, state: 1. The non-clustered leaf row entry for page 2, row 1 was not found in index page 40, indexid 2, database 'tempdb'.
Error: 2620, severity: 21, state: 3. The offset of the row number at offset 32 does not match the entry in the offset table of the following page: page pointer = 0x1395800, page no = 40, status = 0x2, objectid = 1, indexid = 2.
Action taken: I used sp_configure to increase 'open objects' from the default 500 to 1000, and the 'LE threshold maximum' value from the default 200 to 400, after which I ran RECONFIGURE and restarted the computer for the changes to take effect. That worked fine and the server was running OK. But since yesterday I have had the problem that MS SQL Server utilizes the CPU at 98-99% and I am not able to connect to the server. I tried to start the server with minimal configuration by specifying the -f option in the Service Manager startup options, and it showed the following messages:
00/12/07 10:40:49.73 ods Unable to connect. The maximum number of '5' configured user connections are already connected. System Administrator can configure to a higher value with sp_configure.
00/12/07 10:40:50.02 ods Unable to connect. The maximum number of '5' configured user connections are already connected. System Administrator can configure to a higher value with sp_configure.
After that I tried starting SQL Server with the options -c -f, upon which the event detail in Windows NT shows: "mesg 18109: Recovery dbid 6 ckpt(55813,8) oldest tras= (55813,0)".
The open procedure for service "MSSQLServer" in DLL "SQLCTR60.DLL" has taken longer than the established wait time to complete. The wait time in milliseconds is shown in the data. DB-LIBRARY error - SQL Server connection timed out.
The dbid is/was 6 in both instances; the database having this dbid is named "AM" and has over 1300 tables (approx.). What could be the problem, and what is the solution? Kindly help me out of this situation.
If any more information is needed please contact me on devendrakulkarni@yahoo.com / devendra@me.iitb.ernet.in
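A footnote on the "maximum number of '5' configured user connections" wall seen under minimal configuration: once a single isql session does get in, raising the ceiling is one configuration change (the value 100 below is only an example):

EXEC sp_configure 'user connections', 100
RECONFIGURE WITH OVERRIDE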