My question is this: my perfmon counter reads anywhere between 0 and 1,200, with the average being around 250 faults/second.
My concern is that my memory max size is too large. I have 4 GB of RAM, SQL reports usage at 2.9 GB, and my max memory size is 3.9 GB. Should I maybe set my SQL Server to use a fixed memory size of 2.9 GB?
Thanks in advance
Pete Karhatsu
Copied from SWYNK's article:
Process: Page Faults/sec
If this value is greater than 0, the SQL Server process is producing soft page faults and, as a result, CPU overhead. Try setting the working set size value as close as possible to SQL Server's memory allocation.
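For what it's worth, here is a minimal sketch of pinning SQL Server to a fixed allocation, assuming a SQL 7.0-era server (the 'set working set size' option only exists on 6.5/7.0, and 2900 is just the 2.9 GB figure in MB - check the numbers against your own workload first):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- same min and max = a fixed memory allocation (values in MB)
EXEC sp_configure 'min server memory', 2900
EXEC sp_configure 'max server memory', 2900
-- reserve physical memory for the working set; only meaningful with fixed memory
EXEC sp_configure 'set working set size', 1
RECONFIGURE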
I am running an application on one NT Server, running against SQL Server 6.5 SP3 and SQL 7 with SP1 applied.
The application is a 'data migration' type application - i.e. a heavy insert and update workload - against many tables (50+) with many different SQL statements.
The SQL 7 server is configured with 'floating' memory.
On SQL 7 - I am experiencing very high page faults/second for the sqlservr process - sometimes peaking at over 1,000. I was under the impression any number greater than 10 indicates a problem with system performance.
The same application, same data, same NT configuration etc against SQL 6.5 does not page fault. SQL Server 6.5 completes the work faster than 7.
Hi, all. We have a couple of pathological SQL servers that have lots and lots of page faults per second, up to 4,000. Our client programs are written in C#/.NET 1.1 and use connection pooling.

Some of the client programs seem to log in hundreds of times per second, as reported by perfmon -> SQLServer:General Statistics -> Logins/sec. Stopping the client programs reduces that number significantly.

We've done code reviews of the client programs and they look OK. Monitoring .NET connections and pools does not show anything suspicious. We're currently rewriting the clients to use one db connection instead of the pools, but that takes some time and may introduce bugs. Does anyone know why we have these problems and/or why Logins/sec is so high? I'm thinking "bugs in the .NET client", but really have no idea...

One thought I had was that the page faults reported for sqlservr.exe are related to memory-mapped I/O and therefore can be ignored. Right or wrong?

Any thoughts/pointers/ideas, even wild guesses, are most welcome.

Bjørn

PS: The server memory is fixed at 1.5 GB out of 2 GB physical RAM, clients run on the same machine and use TCP/IP comms. (I know...) The host itself is not paging.
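As an aside, the same Logins/sec counter can be watched from the server side on SQL 2000 via sysperfinfo. A small sketch - note that sysperfinfo stores rate counters as cumulative totals, so sample twice and take the difference to get a per-second figure:

SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name LIKE 'Logins/sec%'   -- cumulative total, not a rate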
Hi all. Dorky question, but I am still relatively new to the world of MS database servers, so bear with me. I am monitoring the page fault rate on a server and it runs at 100% almost all of the time. Can someone help me understand what that means?
We are running SQL Server 2000 Enterprise Edition on a 2-node cluster with an IIS/ASP.NET front end hosting 150-200 active connections. There is an SVCHOST process running under the LOCAL SERVICE account - hosting the Remote Registry service - that is using only 4,200K but is page faulting 200-500 times per second. I realize this process is used for failover, but the page faulting seems excessive. Any thoughts on this?
The servers are running Windows Server 2003 with 4 processors and 4gb RAM.
Hi, when I, for example, manually add entries to a table and then cancel the insert, MS SQL increments the counter on the ID anyway. Is there a way to avoid this behavior? Regards, Anders
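This is by design: an IDENTITY value is consumed as soon as an insert attempts it, and a rollback does not hand it back. A quick demo (the table name is made up):

CREATE TABLE IdDemo (ID int IDENTITY(1,1), Txt varchar(10))

BEGIN TRAN
INSERT INTO IdDemo (Txt) VALUES ('a')   -- consumes ID 1
ROLLBACK TRAN                           -- ID 1 is not handed back

INSERT INTO IdDemo (Txt) VALUES ('b')   -- gets ID 2
SELECT ID, Txt FROM IdDemo              -- returns only (2, 'b')

-- You can reseed with DBCC CHECKIDENT ('IdDemo', RESEED, 0), but that is
-- unsafe with concurrent inserts; gaps are normally just accepted.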
I have a SQL 7.0 environment with 5 servers running SP1. I use an NT4.0 workstation with a copy of 7.0 installed on it as my workstation. When I run performance monitor with SQL 7.0 not running on my workstation, I don't see the SQL Server counters. When I have my local SQL 7 copy running on my workstation, they show up, but only when I have my workstation selected for monitoring; when I select another server, they don't show up. Anyone have a suggestion or work-around for this? Makes it kinda hard to monitor my servers remotely...
I have SQL 6.5 SP4 on NT SP4. The counters for SQL Server are not listed in Performance Monitor when I use [Start][Programs][Administrative Tools][Performance Monitor] and choose the SQL Server machine. I also get flat lines on the counters that are built into SQL Server Performance Monitor and can't add any counters, as none are available. It should not be flat-lined, as there are 200 users on the system. How do I get the SQL Server counters into perfmon? If a reinstall is necessary, should it be NT or SQL?
I'm running % Processor Time on _Total (all processors) on my SQL server. In perfmon, the graph of the processor will be going along, stop, and then continue on, leaving gaps in the line on the graph. I do get an occasional message telling me there was a problem with the "sampling."
I'm wondering if the stop/start behavior is an indicator of some type of performance challenge. These "gaps" in the processor line do correspond to decreased performance but I can't correlate them to anything. If I look at Current Activity/Process info it doesn't look like anything unusual is going on.
I am not seeing any SQL objects when trying to monitor my SQL server through perfmon. This is the situation when running NT perfmon or SQL perfmon locally on the server, or from a workstation through the network. I have found some TechNet articles, but they all say that the objects should exist locally on the server. Any ideas?
Are the reports you can generate from a perfmon log based on average or cumulative data? If on an average, how is the average calculated? Thanks
At my company we recently needed to reload SQL 7 on one of our production servers. We then loaded SQL SP2 on it. Later we realized that none of the perfmon counters were showing up for SQL. I tried the whole unlodctr and lodctr routine, but it didn't help. Anyone have any suggestions on how to fix this? Any help would be much appreciated.
I am working on a machine with SQL Server 6.5 that is missing the SQL Server performance monitor counters that link in with NT's Performance Monitor. The performance monitor itself seems to function OK, but I can't check SQL Server's performance using the built-in objects.
I'm told that a few months ago this same problem was occurring. At that time, they did a reinstall of SQL and the counters remained missing. So they rebuilt the server from scratch, starting with a new OS install (NT Server 4.0). The counters were there after the rebuild. About that time, they were also encountering database corruptions (page allocation problems), which went away after the rebuild.
Now, the performance monitor counters for SQL have mysteriously disappeared again. Also, data corruption errors are beginning to show up.
I've searched the MS Knowledge Base and found some suggestions. I've tried re-registering using the command-line option "RegistryRebuild=Yes" on setup, and I've also checked the permissions on the registry keys. Nothing has helped so far.
Has anyone encountered this problem before, and is there anything to do besides rebuild the computer? I don't want to bother rebuilding if it's just going to fail in a few months anyway.
Some other characteristics of the system: it's a huge database (9 GB) and there is disk compression on the data device. I am now in the process of removing the compression, because I'm sure it can't be helping SQL.
Ok, I'm really stumped on this one and I'm hoping someone can shed some light on this. I was using PerfMon to monitor SQL Recompilations (SQL Re-Compilations/Sec counter), and found that there were small spikes of about 3 or 4 re-compilations every minute. Probably nothing to lose sleep over, but I thought I would investigate further to see if I could eliminate the cause.
So I ran a Profiler trace, which included the SP:Recompile event, but it came up empty. Kind of odd, so I ran both PerfMon and Profiler concurrently to see if the event would show up in Profiler when the graph spiked in PerfMon. I saw the PerfMon graph spike, but Profiler didn't pick up anything. I double-checked my Profiler settings by running a test script that would cause a recompile (interleaved DDL and DML statements), and Profiler correctly picked up the event.
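For reference, a test script along those lines: interleaving DDL and DML on a temp table inside a procedure forces a recompile when the statement referencing the new table is reached (on SQL 2000 this surfaces as SP:Recompile; on 2005 it may show up as a statement-level recompile instead):

CREATE PROC dbo.RecompileDemo
AS
BEGIN
    CREATE TABLE #t (i int)        -- DDL...
    INSERT INTO #t VALUES (1)      -- ...then DML referencing it: recompile here
    SELECT i FROM #t
    DROP TABLE #t
END
GO
EXEC dbo.RecompileDemo   -- run while both PerfMon and Profiler are watching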
Does anyone have any idea why the two tools would report different results?
Does anyone have a completed script that will import perfmon logs from CSV format into an existing database that follows the SQL log file schema (e.g. the CounterData, CounterDetails, and DisplayToID tables) accurately? I don't know enough T-SQL to get it right. Thanks for any input you may have.
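Not a drop-in answer, but here is a minimal sketch of the shape such a script takes, assuming a CSV with a single counter column. The real tables have more columns (counter types, time zone fields, and so on) than shown, so treat the column lists as trimmed to essentials:

CREATE TABLE #Stage (SampleTime varchar(50), CounterValue varchar(50))

BULK INSERT #Stage
FROM 'C:\PerfLogs\mylog.csv'   -- hypothetical path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)

DECLARE @guid uniqueidentifier
SET @guid = NEWID()

INSERT INTO DisplayToID (GUID, DisplayString, NumberOfRecords)
SELECT @guid, 'Manual CSV import', COUNT(*) FROM #Stage

-- one row per counter column in the CSV; these values are placeholders
INSERT INTO CounterDetails (MachineName, ObjectName, CounterName)
VALUES ('\\MYSERVER', 'Processor', '% Processor Time')

DECLARE @counterId int
SET @counterId = SCOPE_IDENTITY()   -- assumes CounterID is the identity column

-- number the samples to build RecordIndex, then copy them across
SELECT IDENTITY(int, 1, 1) AS RecordIndex, SampleTime, CounterValue
INTO #Numbered FROM #Stage

INSERT INTO CounterData (GUID, CounterID, RecordIndex, CounterDateTime, CounterValue)
SELECT @guid, @counterId, RecordIndex,
       REPLACE(SampleTime, '"', ''),
       CAST(REPLACE(CounterValue, '"', '') AS float)
FROM #Numbered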
I am training to be a DBA in a company running about 30 machines with MS SQL Server (2000 and 2005). Last week I went to a class where the instructor recommended establishing a performance baseline using Windows Performance Monitor. He also advised running perfmon remotely so as to not affect the performance.
What I am wondering is, since I have so many different machines to baseline, can I run perfmon on one box, using a separate counter log for each server? I would like to get a nice week-long baseline for each machine, but I also don't want to get bad data by running too many logs at the same time.
My plan is to do a small set of counters for processor, memory, disk, and the SQL Server instance (about 10 counters total).
If anyone has experience in this area, I would appreciate any advice that you might have.
I have created a report, but all the records are displayed on one page. I need to find a solution to display the records page by page. When I created the same report without a group, the records did display page by page.
Recently, using perfmon, I've been rather dismayed to find that our application is averaging 3-4 lock timeouts per second, and frequently has extended periods of several minutes where this figure reaches the hundreds.

Average Lock Waits/sec are less than 0.05, and Table Lock Escalations/sec are less than 0.5. These last two seem very good to me, and as a result I would intuitively expect a Lock Timeouts figure of near zero.

Can anyone suggest why the measured Lock Timeouts value might be so high? Is the measured value actually considered "high" at all? Doubtless this depends on a number of things (transaction rate, number of users, etc.), but a rule-of-thumb opinion would be welcomed.
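One mechanical note that may help frame it: a lock timeout generally happens when a session has set a finite LOCK_TIMEOUT (or uses a NOWAIT-style hint) and gives up waiting, raising error 1222 and bumping the counter. A two-line illustration, with a placeholder table name:

SET LOCK_TIMEOUT 2000          -- wait at most 2 seconds for any lock
SELECT * FROM dbo.SomeTable    -- fails with Msg 1222 if blocked longer than that

So a high Lock Timeouts/sec alongside near-zero lock waits suggests many short-fuse requests giving up quickly rather than long blocking chains.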
Hi, can anybody explain to me what's going on with my Target memory and Total memory in perfmon? Last week, before I upgraded my server's memory, they were both almost the same, at around 24 on the graph. Target was just fractionally above Total, but there was almost no space between the two. Then I doubled my server's memory to 4 GB and expected to see Total go way up and Target stay the same. However, Target went up to 72 and Total came down to 16. When I looked this morning, Target is now around 47 and Total is 25. I guess I expected these numbers to fluctuate, but not as much as this; and also, why is there now such a big difference between Target and Total?
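If it helps to cross-check, the same two counters are visible from T-SQL on SQL 2000, in KB, so you can see the raw values without whatever scaling the perfmon graph is applying:

SELECT counter_name, cntr_value   -- values are in KB
FROM master.dbo.sysperfinfo
WHERE counter_name LIKE '%Server Memory%'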
I have a SQL Server 2000 cluster running on an x64 OS. I found the threads in the forum about running perfmon locally by using the x86 version of perfmon (mmc /32 perfmon). However, I cannot run perfmon remotely from another machine and see the SQL Server perfmon data on any of the nodes in the cluster. The remote perfmon picks up all of the other perfmon variables, but no SQL.
I found another thread where somebody asked this question but it wasn't answered. Thanks in advance.
I have a SQL 2000 db which I have no say in the design of; I just make it run. My typical SQL counters, such as the system queue, buffer cache hit ratio, and cache hit ratio, are all good. If I need to monitor disk activity (mainly how fast my data is being read, and how long the user is waiting for that data, for both reads and inserts), what are the best counters for this, and what values should throw up a red flag?
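Alongside the usual PhysicalDisk counters (Avg. Disk sec/Read and Avg. Disk sec/Write are the per-I/O latency ones), SQL 2000 can report per-file I/O stalls itself. A sketch, with the database name and file id as placeholders; IoStallMS is the cumulative milliseconds spent waiting on I/O for that file:

DECLARE @db int
SET @db = DB_ID('MyDatabase')   -- placeholder
SELECT DbId, FileId, NumberReads, NumberWrites, IoStallMS
FROM ::fn_virtualfilestats(@db, 1)   -- file id 1; repeat per file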
We are collecting perfmon counters every 15 seconds. Among the counters is Avg. Disk sec/Write. I understand that it is the average time taken, in seconds, to write data to the disk. However, I do not understand what the quantum of data written is. What is the measure or unit of data in Avg. Disk sec/Write, or is it simply the average time taken per I/O write request?
I was trying to extract data from the source server using an OLE DB Source and a SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What must be done so that even if the table being queried is locked, I wouldn't experience any deadlock?
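If dirty reads are acceptable for the extract, one common workaround is to read without taking shared locks; this keeps the reader out of the lock conflict at the cost of possibly seeing uncommitted data, and it does not fix the underlying contention. Table name is a placeholder:

SELECT col1, col2
FROM dbo.SourceTable WITH (NOLOCK)

-- or at session level:
-- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED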
Hello all, I am running into an interesting scenario on my desktop. I'm running developer edition on Windows XP Professional (9.00.3042.00 SP2 Developer Edition). OS is autopatched via corporate policy and I saw some patches go in last week. This machine is also a hand-me-down so I don't have a clean install of the databases on the machine but I am local admin.
So, starting last week after a forced remote reboot (also a policy), I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up, I was in the same state I was last Wednesday. 7 of the 18 databases on my machine came up with:
FCB::Open: Operating system error 32 (The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation. It also logs: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32 (The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit, no dice as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read only or archive as I've caught on other forums as possible gremlins. I have sufficient disk space and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING, it'd make more sense to me than the hit-or-miss type of thing I'm experiencing now.
Dear list, I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this cmd, which works, into an Execute Process Task: D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log - the above DOS cmd works 100%.
However, when I use the Execute Process Task, I get this error: [Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package variables: User::gsPreplogInput = ex.log and User::gsPreplogOutput = out.log.
How do I use the Execute Process Task? I am trying to unzip a file using the freeware PZUnzip.exe. I tried to place the entire command in a batch file and specified the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files into, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
The command in the batch file, when run from the command line, works perfectly and unzips the file, so there is absolutely no problem with the command. I believe it is just the setup of the variables in the Execute Process Task editor under Process. Any input on resolving this will be much appreciated.
I am designing a utility which will keep two similar databases in sync. In other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'tablediff' utility, which, when provided with the server, database, and table info, will generate a .sql file that can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
The customer.bat file will have the following code: tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"
The .sql file will be generated at C:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The problem: the Execute Process Task is working fine, i.e., the tables are being compared correctly and the .sql file is being generated as desired. But the task as such is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:SQL_bat_FilesSQL5TC_CTIpackage_occurrence.bat" "" at "C:Program Files (x86)Microsoft SQL Server90COM", The process exit code was "2" while the expected was "0". ]
Some of you may suggest just setting ForceExecutionResult = Success (in fact, this is what I am doing now just to get the program working), but this is not what I desire.
I'm backing up to a network directory that's actually a mount point on a different server. My backup was slower than usual, so I opened up perfmon to have a look.
When selecting the mount point from the Logical Disk section in perfmon, I can see that Writes/sec and Write Bytes/sec both show zero for a long period of time, even though the backup percent complete is increasing. Then, all of a sudden, the writes to the network share jump massively.
Is there some caching mechanism for backups in SQL where data is only flushed to the disk periodically during a backup?
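I can't speak to the exact flush behaviour, but backups do stage data through in-memory buffers, and the knobs are exposed on the BACKUP statement if you want to experiment. An illustrative example with placeholder names (larger values mean fewer, bigger writes to the destination):

BACKUP DATABASE MyDatabase                        -- placeholder
TO DISK = '\\OtherServer\Backups\MyDatabase.bak'  -- placeholder UNC path
WITH STATS = 5, BUFFERCOUNT = 16, MAXTRANSFERSIZE = 4194304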
I'm pulling data from an Oracle db and loading it into MS SQL 2008. For my data type checks during the data load process, what are the options to ensure that the data being processed won't fail? In other words, I want to verify the data against the target data types first and, if it's in a valid format, load it into the destination table; otherwise mark it with an error flag and push it into an errors table - all this at the row level. One way I can think of is to load into a staging table, then get the source and destination column data types, compare them, and proceed.
Or should I just try loading the data directly and, if it fails, try troubleshooting (which could be a difficult task, as I wouldn't know what caused the error...)?
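A minimal sketch of the staging-table approach described above, with all names hypothetical: land the columns as varchar, flag rows that won't convert, and route them. SQL 2008 predates TRY_CONVERT, so ISNUMERIC/ISDATE stand in, and both have known edge cases (e.g. ISNUMERIC('$') = 1):

UPDATE stg
SET ErrorFlag = 1                     -- assumes an ErrorFlag column defaulting to 0
FROM dbo.StagingOrders stg
WHERE ISNUMERIC(stg.Amount) = 0
   OR ISDATE(stg.OrderDate) = 0

INSERT INTO dbo.Orders (Amount, OrderDate)
SELECT CAST(Amount AS decimal(18,2)), CAST(OrderDate AS datetime)
FROM dbo.StagingOrders WHERE ErrorFlag = 0

INSERT INTO dbo.OrdersErrors (Amount, OrderDate)
SELECT Amount, OrderDate
FROM dbo.StagingOrders WHERE ErrorFlag = 1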
I am having a table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed almost in real time, so what I have picked up so far is that using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I haven't enough knowledge to back my own feeling up (ASP system, not .NET).
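For what it's worth, here is what's being debated, in code form, with placeholder names. The hint route trades blocking for dirty reads, which is usually the wrong trade for live finance data; row versioning (SQL 2005 and later only) lets readers see a consistent snapshot without blocking writers:

SELECT AccountID, Balance
FROM dbo.Balances WITH (NOLOCK)   -- hint route: no shared locks, dirty reads possible

-- versioning route, if the server is SQL 2005+:
-- ALTER DATABASE MyFinanceDb SET READ_COMMITTED_SNAPSHOT ON

A data access layer can centralize where these decisions get made, but it doesn't by itself change the locking behaviour.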