DB Engine :: How Many Actual Processors (CPUs) Are Being Used At A Given Time
Nov 10, 2015
We have SQL Server 2012 Enterprise (x64) on Windows Server 2012 R2. We need a reliable way to know the number of processors SQL Server is using at a given time. We already know how many total processors are available to SQL Server by getting the info from sys.dm_os_sys_info.
For instance, if a server has 40 processors, we want to know how many of those are being used at a given time. Since the load on the server may not be that high, we would like to know how many processors we can remove while leaving the load unaffected.
After watching the server's performance for a while, we predict we may only need 16. But we would like to get some statistics before we reduce it to that number.
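A hedged sketch for sampling this: sys.dm_os_schedulers has one row per scheduler (one per online CPU), and its task counts show how many schedulers actually have work at the instant you look. Logging something like the query below every minute or so would give the statistics you want before cutting down to 16.
-- Point-in-time view: how many online schedulers exist and how many
-- currently have at least one task assigned.
SELECT COUNT(*) AS online_schedulers,
       SUM(CASE WHEN current_tasks_count > 0 THEN 1 ELSE 0 END) AS schedulers_with_tasks,
       SUM(runnable_tasks_count) AS tasks_waiting_for_cpu
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';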
I have two queries yielding the same result that I wanted to compare for performance. I entered both queries in one Management Studio query window and executed them as one batch with the actual query plan included. Query 1 took 8.2 seconds to complete, and the query plan said its cost was 21% of the batch. Query 2 took 2.3 seconds to complete, and the query plan said its cost was 79% of the batch. The queries were run on my local development machine; I was the only user, and no other programs were running at the time of this test. The results are repeatable. I understand that the query with the lowest cost is not necessarily the fastest query. On the other hand, the difference is quite big: the query that has approx. 80% of the cost takes 20% of the time, and the other way around. I have two questions:
Is such a discrepancy normal? Can conclusions be drawn from the cost distribution? For instance, does the query that takes 8.2 seconds but only costs 21% scale better?
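A hedged aside on method: those cost percentages are the optimizer's compile-time estimates and are not updated by what actually happens at run time, so for a head-to-head comparison it is safer to measure actual CPU and elapsed time per statement:
-- Report actual CPU/elapsed time and I/O per statement rather than
-- relying on the estimated cost split in the plan.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- run query 1 here, then query 2, and compare the Messages output
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;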
SQL Express 2005 with one instance on a networked, dedicated, stand-alone Windows XP machine... We use this for custom software written in VB.NET. We are not SQL experts... Question 1) I am getting an Event Viewer warning that says "The time stamp counter of CPU on scheduler id 1 is not synchronized with other CPUs." This seems strange, since there is only one CPU on this machine. Any ideas? Question 2) In SQL Server Management Studio Express, we do not have the actual data files attached to the instance, but the custom program does use the actual data files just fine, defined by the connection string. Is there any pro/con to having them attached in SQL Server Management Studio Express? Thanks! Bob
There is a stored procedure that uses a linked server. As we will be migrating to the Amazon cloud, our architect instructed us not to replace the linked server with OPENQUERY.
We are currently looking at consolidating 10 servers into one cluster server.
Some servers may be busier than others. Is there any reason to split them up and give the busy databases specific CPUs, or is it always better to have them all on one instance?
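If you do end up giving the busy databases their own CPUs, note that affinity is set per instance rather than per database; a hedged sketch on SQL Server 2008 R2 or later (earlier versions use the affinity mask options via sp_configure):
-- Pin this instance to CPUs 0-7; a second instance on the same
-- box could be pinned to a different range.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;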
I have a server with little control over most of the codeset and db design. Recently I have seen both Processor - % Processor Time and Processor - % User Time go from about 6.3 to about 24.3. The system queue length has also gone from about 0.2 to 1.1. In my humble opinion, both of these are signs of a problem coming (luckily the cache hit ratio is still sitting at about 99%). I have been running Profiler to catch the things that take more than 4,500 ms, and I can probably tie the two together. Any opinions or real-world comparisons appreciated.
We are seeing that the % Processor Time for the sqlservr process in Perfmon is over 100%. I am trying to understand how the percentage of use can be over 100%, and why. Someone told me that if the machine has multiple processors, it will be over 100%. If that is the case, how can I determine what the maximum and normal values are? If I have 4 processors, does that mean 400% is the max? That doesn't make sense, since it is supposed to be a percentage value...
Could someone explain how the CPU Utilization value is measured, why it goes over 100%, and how I can determine what the threshold should be for monitoring?
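For what it's worth: the Process object's % Processor Time counter sums usage across logical processors, so its ceiling is 100 multiplied by the CPU count (400% on a 4-CPU box), unlike the Processor object's counters, which top out at 100 per processor. A hedged way to confirm the denominator from inside SQL Server:
-- Logical processors visible to this instance; the Process counter
-- for sqlservr can reach 100 * cpu_count.
SELECT cpu_count FROM sys.dm_os_sys_info;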
I have built a Sales Forecast model to predict the sales value. Along with making historic predictions for previous time periods I also want to retrieve the actual sales values for those periods.
How can I achieve this in a time series model?
I also would like to know how mining models store the data.
Do they store the data in the same table/view format as their respective data source view, or in the model content format?
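On the first question, a hedged sketch in DMX (the Analysis Services query language) rather than T-SQL, with the model and column names here being assumptions: if the model was built with the HISTORIC_MODEL_COUNT and HISTORICAL_MODEL_GAP algorithm parameters, PredictTimeSeries accepts a negative start step and returns historical predictions alongside the future ones.
-- DMX: 12 historical prediction steps plus 12 future steps
-- from a hypothetical model named [Sales Forecast Model].
SELECT FLATTENED
    PredictTimeSeries([Sales Amount], -12, 12) AS Forecast
FROM [Sales Forecast Model]
You can then join those historical predictions to the actuals in your source table by period. As for storage: to my knowledge a mining model holds learned patterns in model content form, not a copy of the rows from its data source view; the actual data stays in the underlying tables, and drillthrough, where enabled, maps back to the training cases.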
I have a view in SQL Server 2005. It took 30 seconds to finish. Then I deleted 4,500 records from one table that is used in the view, and now it takes 90 seconds. I compared the actual execution plans from before and after the delete; they are almost the same, the only difference being that the actual number of rows is lower after the delete. So I wonder why the time went up when the data got smaller. Looking closely at the actual execution plan, the odd thing is that each step shows only an estimated operator cost, no actual operator cost. I suspect the optimizer is reusing the same execution plan, but without actual operator costs, how can I tell which step is wrong?
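For what it's worth, even an "actual" execution plan only ever shows estimated operator costs; SQL Server does not report a per-operator actual cost, so that part is normal rather than a broken optimizer. A hedged first step, with placeholder object names: deleting a few thousand rows can leave statistics stale enough that the old plan no longer fits, and refreshing them plus forcing a recompile is cheap to try:
-- Refresh statistics on the table the rows were deleted from.
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;
-- Mark cached plans that reference the view for recompilation.
EXEC sp_recompile 'dbo.SomeView';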
How many processors can SQL Server 7.0 support? If the SQL Server box has 2 CPUs, will SQL Server 7.0 utilize both? Or will SQL Server use one and NT the other?
I'm upgrading a pair of active/passive cluster nodes running Windows 2003 Enterprise and SQL 2000. I'm going from 4 CPUs to 8 CPUs per server. What, if anything, should I expect from SQL as far as licensing issues when performing this upgrade? We are using the per-CPU license method and have the additional licenses.
I have been unsuccessful locating information about the maximum number of CPUs SQL Server 2005 Enterprise supports. However, I did find info in a document about configuring a failover cluster solution. It said 4 CPUs, but it is unclear how that works out with multi-core processors. Is it 4 sockets? So with a quad-core, quad-socket system we could have 16 cores? BTW, the chart shows the CPU count jumping to 64 CPUs under Server 2003 Datacenter. TIA
We're running SQL Server 2012, and for a while now accessing records from bigger tables has become tricky. There is a Tomcat 8 running which sometimes can't access these tables at all, or only after a long delay. When this first happened I went to the server room and opened the database with Management Studio to see if there were any issues. I could open the database, but expanding the folders for "Tables" or "Views" failed after 10 seconds with Error 1222.
I turned Tomcat 8 off to find out whether some unclosed connections were still open. Same result; no change even after one hour. Another third-party program we use seems to connect to the SQL Server via other mechanisms (is there a way to list current connections and their types in Management Studio?). I'm under the impression this program does a lot of caching; it's much faster than Management Studio itself. The question now is: how can I find out why these timeouts happen? I'm not an expert in SQL Server.
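Error 1222 is a lock request timeout, so something is holding locks on the objects SSMS needs to enumerate. A hedged sketch for listing current connections, where they come from, and who is blocking whom:
-- Sessions plus any active request; a nonzero blocking_session_id
-- points at the session holding the locks.
SELECT s.session_id, s.program_name, s.host_name, s.login_name,
       r.status, r.command, r.wait_type, r.blocking_session_id
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.is_user_process = 1;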
We have an issue where servers go slow from 2 AM EST to 10 AM EST.
Using sp_WhoIsActive we found that a NetBackup job from Symantec runs during that window.
Per the Symantec team, their backup should be done by 5 AM EST, based on their testing, for almost 100 databases on the server (not big in size; all of them together total about 60 GB).
Using sp_WhoIsActive we only see the start time of the virtual backup occurring on tape, for one database at a time.
So is there a way to actually determine when the backup kicks off and stops? Also, could SQL Server be the cause of these backups running slowly, or is there something else I need to monitor?
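A hedged way to pin down the window from SQL Server's side: every backup, including ones taken by third-party tools through a virtual device, is recorded in msdb with start and finish times per database:
-- Backup start/finish times over the last day; virtual device
-- names in physical_device_name usually identify NetBackup.
SELECT bs.database_name, bs.backup_start_date, bs.backup_finish_date,
       bs.type,                    -- D = full, I = differential, L = log
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
WHERE bs.backup_start_date >= DATEADD(DAY, -1, GETDATE())
ORDER BY bs.backup_start_date;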
Hi. Settled into the new gig now. Since I moved here we have upgraded our 4 SQL Servers to 2005 64-bit, all 2-proc dual-core hyper-threaded with 4 GB RAM, running Windows 2003. SQL Server compatibility level is 90. SP1 and the CTP for SP2 are installed; we are hanging on for SP2. MAXDOP is 0 (but 2 has been tried). Up to a score or so of databases per machine, between 8 and 300 GB in size. Offline, reasonably heavy-duty ETL batch processing.
Since upgrading we have had trouble with the CPUs intermittently maxing out at 100% usage when intensive routines are run. We know that many of these routines are crappy (SELECT... INTO creating tables with 10s or 100s of millions of rows, cursors, etc.) and we are rewriting the code, but the estimated completion of the code review and rewrite is the end of next year, so we can't wait that long. The same poor code ran (albeit slowly) on SS2K but is causing the server to lock up now that it runs on SS2K5. Symptoms include connection attempts timing out and routines running very slowly. Existing connections can continue to make requests, but these are very slow. If we get on the box, the CPUs are at a never-wavering 100% utilisation. This continues until intervention is taken, often restarting the service. We have left a maxed-out box alone over a weekend and it was still in the same state when we came in on Monday. Nothing to glean from the error logs (AFAIK).
Obviously the long-term fix is to get the code sorted, but this is an ongoing process (in fact it is one of the reasons I was hired). Our immediate concern is to get the servers back up to the sort of performance we were realising before we upgraded. Does anyone have any ideas? As ever, eternal gratitude and a crate of virtual beer for any help :D
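One stopgap often suggested for this exact 2005-upgrade symptom (a hedged sketch, not a definitive fix): the 2005 optimizer parallelises big scans more aggressively than 2000 did, so capping the degree of parallelism and raising the cost threshold can stop a single heavy routine from seizing every core while the rewrite proceeds:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;       -- a value to trial, not a rule
EXEC sp_configure 'cost threshold for parallelism', 25; -- default is 5
RECONFIGURE;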
I need to test a SQL Server Agent job composed of several steps. I'd like to automatically stop the job when a given elapsed time has passed for a job step. That way I could check and optimize the code so the queries run more efficiently.
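A hedged sketch of one way to do this: a watchdog step or second Agent job that waits out the time budget and then stops the job under test. The job name is a placeholder, and note that sysjobactivity can retain rows from old Agent sessions, so a production version should filter to the latest session:
-- Stop 'JobUnderTest' if it is still running after 30 minutes.
WAITFOR DELAY '00:30:00';
IF EXISTS (SELECT 1
           FROM msdb.dbo.sysjobactivity AS ja
           JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
           WHERE j.name = 'JobUnderTest'
             AND ja.start_execution_date IS NOT NULL
             AND ja.stop_execution_date IS NULL)
    EXEC msdb.dbo.sp_stop_job @job_name = 'JobUnderTest';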
I have been searching for a means to change the System Failure Error Check policy that comes as part of the Best Practice policies. I want to look back 24 hours. The WQL query shipped with the policy doesn't have a WHERE clause component that looks at TimeGenerated. That query looks like:
IsNull(ExecuteWql('Numeric', 'root\CIMV2', 'select EventCode from Win32_NTLogEvent where EventCode=6008 and Logfile="System"'), 0)
After searching for an example of how to do this and not finding any that are specific to PBM, I decided to fall back to a very basic approach - use wbemtest.exe to try out where clause additions and see how they work, then plug the result into the policy and see if it works. As a start, I tried the following query using wbemtest.exe:
select EventCode from Win32_NTLogEvent where EventCode = 6008 and Logfile = 'System' and TimeGenerated > '20130101010000.000000-000'
This works great in wbemtest.exe. My next step was to plug this into the policy condition expression as follows: IsNull(ExecuteWql('Numeric', 'root\CIMV2', 'select EventCode from Win32_NTLogEvent where EventCode=6008 and Logfile="System" and TimeGenerated > "20130101010000.000000-000"'), 0)
When I try to manually evaluate this policy in SSMS, I receive an "Invalid Query" error message. I assume that SWbemDateTime isn't available for use inside Policy-Based Management policies. All the examples I have seen of how to handle this kind of dynamic date creation are for PowerShell, VBScript, or SSIS. I've played with using DateDiff, DateAdd, and GetDate inside the query string, with no success.
Why does the ExecuteWql above fail? Is it at all possible to dynamically generate a datetime (say, 24 hours ago) as part of the query string parameter of the ExecuteWql call? What might that look like?
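On the second question, a hedged sketch: WQL compares TimeGenerated against DMTF-format timestamps (yyyymmddHHMMSS.mmmmmmsUUU), and building one for 24 hours ago in T-SQL is straightforward. Whether the policy will accept a dynamically assembled query string is exactly what is in doubt here, but this is the piece you would splice in:
-- DMTF timestamp for 24 hours ago in UTC, e.g. 20130101010000.000000+000
DECLARE @t DATETIME = DATEADD(HOUR, -24, GETUTCDATE());
SELECT REPLACE(REPLACE(REPLACE(CONVERT(CHAR(19), @t, 120), '-', ''), ' ', ''), ':', '')
       + '.000000+000' AS dmtf_datetime;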
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id, the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key, because it is important to database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through Management Studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
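A hedged observation on the SSMS-versus-ODBC difference: the classic suspect is session SET options (SSMS turns ARITHABORT ON by default, most drivers do not), which gives the two paths separately cached plans and can surface plan-generation errors on only one of them. A quick way to compare, assuming SQL Server 2005 or later:
-- Compare per-session SET options; a mismatch in arithabort or the
-- ANSI settings means SSMS and the application compile separate plans.
SELECT session_id, program_name,
       arithabort, ansi_nulls, ansi_warnings, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;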
I'm getting a "Input string was not in a correct format." when I'm running a insert statement against my SQL Server 2005 db table. This helps me zilch as I cant see the actual SQL statement to see which one wasnt right. Using a SQLDatasource and a Formview btw. Datasource is called xSqlIB and formview is called fmvIB. Any ideas?
I have an XL source file; the first row contains the headings Name, 199001, 199002, ....., 199012. In the SSIS package, using the XL source, if I tick "first row contains headings" it automatically converts my headings to Name, F2, F3, ....., F13.
But I need them as-is, i.e. Name, 199001, 199002, ....., 199012, as the headings; then I use the Unpivot transformation to convert the columns into rows for my staging table.
Please help me sort out my problem: I need the XL columns exactly as mentioned above (Name, 199001, 199002, ....., 199012).
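If the F2, F3, ... names can't be avoided at the Excel source, a hedged fallback is to land the sheet into a wide staging table and unpivot in T-SQL instead of in the data flow; the table and column names below are placeholders, and the period columns are assumed to share one data type:
-- Turn the period columns back into rows after loading.
SELECT u.Name, u.Period, u.Amount
FROM dbo.StagingWide
UNPIVOT (Amount FOR Period IN
         ([199001], [199002], [199003], [199004], [199005], [199006],
          [199007], [199008], [199009], [199010], [199011], [199012])) AS u;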