Does anyone know of a way to determine which SPs or which tables are being accessed the most heavily or most often from an application? I have inherited a site that is heavily used but poorly tuned. However, we have distributed C/S apps that hit the server and do not contain debug code. In addition, the SPs are encrypted, and I do not want to replace them in the short term with new SPs that log hits.
I realize I can use triggers, but is there any other method?
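If running a trace is acceptable, SQL Profiler (or a server-side trace) will capture procedure and statement events even though the procedures are encrypted. If the server is SQL Server 2005 or later, the plan-cache and index-usage DMVs can answer the question without any tracing at all; the following is only a rough sketch of that approach (counters reset whenever the instance restarts):

    -- Cached procedures/batches ranked by execution count.
    SELECT TOP 20
           DB_NAME(st.dbid)                  AS database_name,
           OBJECT_NAME(st.objectid, st.dbid) AS object_name,
           SUM(qs.execution_count)           AS executions
    FROM   sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    WHERE  st.objectid IS NOT NULL
    GROUP BY DB_NAME(st.dbid), OBJECT_NAME(st.objectid, st.dbid)
    ORDER BY executions DESC;

    -- Tables in the current database ranked by read/write activity since the last restart.
    SELECT OBJECT_NAME(ius.object_id)                              AS table_name,
           SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
           SUM(ius.user_updates)                                   AS writes
    FROM   sys.dm_db_index_usage_stats ius
    WHERE  ius.database_id = DB_ID()
    GROUP BY OBJECT_NAME(ius.object_id)
    ORDER BY reads DESC;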
We plan to build a monitoring tool using PowerShell and Performance Monitor that could monitor 10 to 20 servers. Do you have a reference to any existing free tool that uses Performance Monitor to monitor SQL Server? I don't want to put in the effort if something is already available.
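If you do build your own collector, one convenient building block (assuming SQL Server 2005 or later) is that the instance's own Performance Monitor counters are exposed through T-SQL, so a PowerShell script can pull them over an ordinary database connection instead of remote perfmon; the counter names below are just examples:

    -- SQL Server's perfmon counters, queryable per instance.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name IN ('Buffer cache hit ratio', 'Page life expectancy', 'User Connections');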
I am new to SQL Server and would like to know how I can monitor server performance on SQL Server 6.5 and 7. In Sybase we can run sp_sysmon. Is there anything similar to this for SQL Server?
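SQL Server does ship a rough equivalent: sp_monitor, which reports CPU, I/O, and network usage since the last time it was run (it is built on cumulative counters such as @@CPU_BUSY and @@TOTAL_READ):

    -- Snapshot of CPU, I/O, and network activity since sp_monitor was last executed.
    EXEC sp_monitor;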
I want to collect performance measures regarding the import of data and the growth of resulting extract_tables.
I use, say, 15 tables from an ERP system (like JD Edwards) to build, say, a sales warehouse and an MS OLAP cube.
For every incoming table I have a DTS package whose execution is logged to msdb.sysdtspackagelog. Every package is named [Build]_[Subsystem]_[Table_name], e.g. JDEdwards_Sales_F0005, and the destination table is named accordingly, e.g. extr_F0005.
Now, with a separate DTS package I copy the log records from the msdb database into my build database, say JDEdwardsExtract, into a table named extr_performance_monitor (possibly filtered on the build name, because there are several builds in my system).
This result is quite good and easy to work with for seeing the elapsed time per day.
But the DTS log won't tell me how many records each package had to copy (and there is at least one package, the cube update, that copies no records at all).
This is where count(*) comes in.
In the DTS package that copies sysdtspackagelog to extr_performance_monitor I added the columns extr_table_name, extr_table_rowcount, and extr_table_timestamp.
With SELECT name, 'extr_' + REPLACE(name, '[Build]_[Subsystem]_', '') AS extr_table_name FROM extr_performance_monitor I cut the original DTS package name down to the extr_ table name.
I am thinking about a package that runs after the last data import (and cube refresh) is done, but on the same day.
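A minimal sketch of what that follow-up package could run, assuming SQL Server 2000, where sysindexes.rows for indid 0 or 1 holds the current row count of each table; the extr_ naming follows the convention above:

    -- Current row count per extract table, ready to be written into
    -- extr_table_name / extr_table_rowcount / extr_table_timestamp.
    SELECT  o.name     AS extr_table_name,
            i.rows     AS extr_table_rowcount,
            GETDATE()  AS extr_table_timestamp
    FROM    sysobjects o
    JOIN    sysindexes i ON i.id = o.id
    WHERE   o.type = 'U'
      AND   i.indid IN (0, 1)        -- heap or clustered index carries the row count
      AND   o.name LIKE 'extr[_]%';  -- only the extract tables built above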
So the result could be:
Table_name (as dimension category), time to perform, number of records in the import table, and records per second.
The next step could be to track the required space.
The result should be a chart over, say, 12 months where you can easily see the amount of data processed, the time consumed (and the table space used), and, very importantly, from which you could extrapolate your hardware requirements.
My SQL Server 2000 system is pegged. Disk activity is maxed out and the system is very unresponsive. Several people have database tasks running through this system, and I'm pretty sure a single application is the culprit; I'd like to identify which one.
Does anyone have any practical tips on using "Process Info" in Enterprise Manager? What units are CPU and Physical I/O displayed in? Why does the column sort on these fields not work as expected?
Do I just pick the process with the largest Physical I/O and assume that's the problem?
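One way to narrow this down without relying on the Process Info grid is to query master..sysprocesses directly (this sketch assumes SQL Server 2000; cpu and physical_io are cumulative per connection, so sample them twice and look at the deltas):

    -- Connections ranked by cumulative physical I/O.
    SELECT TOP 10
           spid, loginame, hostname, program_name,
           DB_NAME(dbid) AS database_name,
           cpu,          -- cumulative CPU counter for the connection
           physical_io   -- cumulative physical reads and writes for the connection
    FROM   master..sysprocesses
    ORDER BY physical_io DESC;

    -- For the worst offender, see what it is currently running (53 is a placeholder spid).
    DBCC INPUTBUFFER(53);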
I was wondering if the CPU is the bottleneck, so I used Performance Monitor to look up some values. Here are the results:

    Counter                      Scale   Average   Maximum   Minimum
    % Disk Time                    1       2.74     24.932     0
    Current Disk Queue Length     10       0.090     6         0
    Buffer cache hit ratio         1      99.752    99.752    99.751
    % Processor Time (Total)       1       2.307     9.688     0.313
    % Processor Time (sqlserv)     1       1.218     7.188     0
    Processor Queue Length        10       6.960    11         5

The thing that puzzles me is that the Processor Time is quite low but the Processor Queue Length is very high. Why are processes queuing up for the processor when the processor is not bogged down? I would assume both values to be on the high side. Is the high buffer cache hit ratio an indicator that the CPU is waiting for some reads from the buffer before processing the next process? The users often run reports on historical data; that might be the reason for the high buffer hit rates. Am I monitoring the wrong values? Any advice would be very much appreciated, and thanks in advance. Pit
Hi, in Performance Monitor's Add Counters interface, the local SQL Server is selected by default.
Probably a kind of 'stu..' question: when I typed a remote SQL Server name into the "Select counters from computer" drop-down list box (a server that is already registered in my Enterprise Manager), I got the error "Unable To Connect To the Machine".
I am looking for a good solution for monitoring the performance on my SQL Server. I've used Performance Monitor but think there are better solutions out there. Specifically, I'd like to run the app on my desktop to monitor the server remotely. I'd like the app to run constantly and notify me if anything weird starts to happen.
I'm looking at i3 for SQL Server by Veritas as an option, but it's gonna cost ~$2k.
Does anybody have comments or suggestions on the Veritas product, or on any other packages out there you would recommend?
We run our software on many client sites.
Application: 1) written in a 4GL language; 2) uses ntwdblib.dll.
Client MSSQL servers: 1) hardware varies from site to site, on average dual core, 4GB RAM, 20-100GB HD; 2) MSSQL 2000 SP3-4 or 2005 SP1-2.
A handful of our clients have complained about how slow things seem to be working. After getting online with the customers and doing some investigating/testing, I have found that 'slow' is not the word to use; we are talking about a snail's pace. The same software at our other MSSQL sites doesn't seem to have any problems and shows very good performance numbers. Some of our customers with single CPU, 2GB, 10GB HD systems have better performance than a couple of our 4 CPU, 16GB, 2 - 20GB HD systems. We have written a simple test program that we run on these slow sites; working purely against local drives/files, they are comparable to our faster sites. We then run the test program against the MSSQL database/tables. On the servers themselves the run times are about the same. However, when we run it on a workstation the results go off the scale: at our good sites the test takes around 10 minutes, at our bad sites around 1 hour 30 minutes. I know this sounds like a network issue, but one of these slow sites had someone come out and check the network traffic/packets/routers/switches, and we then paid to send that person to one of our faster sites to test their network. The result was that our slower client had a faster network. Our slower client uses Cisco smart switches/hubs/routers.
Now for my question: does anyone know of, or could recommend, something that I could use to determine what is happening between the server and the workstation on these slow sites? Is there a known problem between Cisco equipment and MSSQL 2005 or NTWDBLIB.DLL? I am looking for anything here.
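One cheap way to separate server time from network/client time is to run the same query on the server console and on a workstation with the timing statistics switched on: if the server-side CPU and elapsed figures are similar in both runs but the workstation's wall-clock time explodes, the delay is in the network or the client library rather than the database engine. A minimal sketch (dbo.SomeLargeTable is a placeholder for one of your own tables):

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- Replace with a representative query from your application.
    SELECT COUNT(*) FROM dbo.SomeLargeTable;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;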
I have SQL 7 running in an Active/Passive cluster configuration. My problem is that I cannot see any SQL objects listed in Perfmon. Could this be because of the cluster configuration? I am trying to run Perfmon directly from the main system console.
Hi, I'm looking for a tool that will help me to monitor and analyze SQL Server performance, including SQL activity as well as server activity (CPU, memory usage). I appreciate any feedback.
Does SQL Compact Edition expose performance counters to tools like Perfmon as SQL Server does? For instance, can you view lock wait times, cache hit ratio, etc.?
Hi, I'm using Visual Web Developer Express and Management Studio Express, and my web site is on a shared web host running SQL Server 2000. I'm looking for software that enables me to monitor the server, but is that possible? The only apps I've found (and downloaded, installed, and uninstalled) so far need administrative rights to the server, so they won't work on a shared web host. All help would be welcome! Thanks in advance, Pettrer
Hi guys, may I know whether there are any examples of stored procedures/scripts for monitoring replication status and performance? I only know about sp_replmonitorhelppublisher. Thanks for the assistance.
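There is a family of related procedures on the distributor (SQL Server 2005 and later) that can be scripted the same way; the parameter lists below are from memory, so check Books Online, and MyPublisherServer is a placeholder:

    -- Run on the distributor, in the distribution database.
    USE distribution;

    -- Publisher-level summary (the procedure already mentioned above).
    EXEC sp_replmonitorhelppublisher;

    -- Publication-level status for one publisher.
    EXEC sp_replmonitorhelppublication @publisher = N'MyPublisherServer';

    -- Per-subscription status and latency.
    EXEC sp_replmonitorhelpsubscription @publisher = N'MyPublisherServer', @publication_type = 0;  -- 0 = transactional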
If I'm on a remote machine, meaning a computer not in the WSFC cluster, and I open SSMS 2014, point it to a SQL Instance, and open activity monitor:
1. I get all the panes and charts except % Processor Time.
2. Then, if I authenticate to the cluster's domain by mapping a drive with valid domain credentials, I'm free to add performance counters in Perfmon, but SQL Activity Monitor shuts down with "The Activity Monitor is unable to execute queries against server SQL-V01INSTANCE1. Activity Monitor for this instance will be placed into a paused state. Use the context menu in the overview pane to resume the Activity Monitor.
Additional information: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) (Mscorlib)"
3. Of course, the Activity monitor can't be resumed via the context menu. Removing counters and closing the perfmon do not work. I dropped the mapped drive and rebooted the machine. That brought back 95% of the information in the Activity monitor.
4. Further experimentation showed that any mapping of drive shares present on the SQL Server to the computer running SSMS cut off functionality of the 'overview' pane in the remote machine's SQL Activity monitor -- the monitor that had been trying to watch the server offering the shares.
I understand the rule of thumb that the CPU should not be over 90%. If you take the counters % Processor Time, % Privileged Time, % User Time, % Interrupt Time, and Interrupts/sec, what combination gives you your CPU time?
I have been asked to monitor SQL to tell me when we are performing better than others. Can anyone tell me what kinds of scheduled jobs or scripts they utilize?
Hi, is there a way/tool in SQL Server 2000 SP3 to monitor all activities going on in the database? For example, I first create an empty database. Then I have an ERwin-generated DDL script to create all views and tables. After that, I have INSERT scripts that populate all the base tables. What I want to monitor is success or failure for each script. Thanks, N.
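One low-tech option, sketched here on the assumption that you can append a few lines to each generated script, is to have every script record its own outcome in a small log table; the table and script names are placeholders, and note that @@ERROR only reflects the immediately preceding statement (running the scripts through osql with the -b switch and checking exit codes is more robust):

    -- Create once in the target database.
    CREATE TABLE dbo.ScriptRunLog (
        ScriptName  varchar(200) NOT NULL,
        RanAt       datetime     NOT NULL DEFAULT GETDATE(),
        ErrorNumber int          NOT NULL   -- 0 = success, otherwise the last @@ERROR seen
    );
    GO

    -- Appended to the end of each generated script, in the same batch as its last statement:
    INSERT INTO dbo.ScriptRunLog (ScriptName, ErrorNumber)
    VALUES ('PopulateBaseTables_01.sql', @@ERROR);
    GO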
Hi, I have implemented health monitoring for my web site, using the SQL provider. Health monitoring works fine when the website is run from VS2005 using the built-in web server; all the expected events are inserted into the aspnet database. However, when I deploy the site onto IIS, no events are ever inserted into the database. I would appreciate some help figuring out why this is happening! The code that implements the health monitoring in my web.config file is:

    <healthMonitoring
        enabled="true"
        heartbeatInterval="0">
      <bufferModes>
        <remove name="Analysis"/>
        <add name="Analysis"
             maxBufferSize="10"
             maxFlushSize="2"
             urgentFlushThreshold="2"
             regularFlushInterval="00:00:02"
             urgentFlushInterval="00:00:01"
             maxBufferThreads="1"/>
      </bufferModes>
      <providers>
        <remove name="SqlWebEventProvider"/>
        <add name="SqlWebEventProvider"
             type="System.Web.Management.SqlWebEventProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="SQL_ASPNET"
             maxEventDetailsLength="1073741823"
             buffer="true"
             bufferMode="Analysis"/>
      </providers>
      <eventMappings>
        <remove name="All Events"/>
        <add name="All Events"
             type="System.Web.Management.WebBaseEvent, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
      </eventMappings>
      <profiles>
        <remove name="Default"/>
        <add name="Default"
             minInstances="1"
             maxLimit="Infinite"
             minInterval="00:10:00"/>
      </profiles>
      <rules>
        <add name="All Events"
             eventName="All Events"
             provider="SqlWebEventProvider"
             profile="Default"
             minInterval="00:00:01"
             minInstances="1"/>
      </rules>
    </healthMonitoring>
Can anyone show me, in SQL 7, how to obtain the available space for a particular filegroup in a database (not for the whole database or an individual data file)? I am trying to include this in a script that monitors my database, which uses filegroups, and I have all the other info I need (from the sysfiles table) except the available space. Thanks in advance!
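A minimal sketch, assuming the SQL 7/2000 catalog tables sysfiles and sysfilegroups plus the FILEPROPERTY function; run it in the database in question (sizes are in 8 KB pages, so dividing by 128 gives MB):

    SELECT  fg.groupname                                                    AS filegroup_name,
            SUM(f.size) / 128.0                                             AS total_mb,
            SUM(FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0                  AS used_mb,
            (SUM(f.size) - SUM(FILEPROPERTY(f.name, 'SpaceUsed'))) / 128.0  AS free_mb
    FROM    sysfiles f
    JOIN    sysfilegroups fg ON fg.groupid = f.groupid
    GROUP BY fg.groupname;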
I do all my monitoring for disk space, locks, and blocking locally. I have 10 production servers, and we need a centralised monitoring server so that all the monitoring can be done from one place. Does anyone have any ideas how memory and CPU consumption, disk space, all alerts, locks, blocking, log space, and job completion monitoring can be handled?
In SQL 7, what is the easiest way to monitor the number of connections? I have been asked to create a report that tracks the number of logins every hour.
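A minimal sketch, assuming SQL 7/2000 and a hypothetical history table in a DBA database; scheduling the INSERT as an hourly SQL Server Agent job gives you the data for the report:

    -- Create once (names are placeholders).
    CREATE TABLE dbo.ConnectionHistory (
        SampledAt       datetime NOT NULL DEFAULT GETDATE(),
        ConnectionCount int      NOT NULL
    );
    GO

    -- Scheduled hourly by a SQL Server Agent job.
    INSERT INTO dbo.ConnectionHistory (ConnectionCount)
    SELECT COUNT(*)
    FROM   master..sysprocesses;   -- includes system spids; filter if you only want user connections
    GO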
Folks! Is it possible to monitor several SQL Servers in one window and notify an operator about error messages? Maybe some new software makes this possible? Thank you.
Can anybody help me with the following on my MS SQL Server 2000 database?
1. All tables should have a lastModificationDate column. Any insert or update should set it to the system time via a trigger or similar; we shouldn't be setting the value explicitly in our SQL statements.
2. There shouldn't be any deletes on the tables. Deleted records should instead be marked as inactive or deleted, so that they don't appear in queries but remain physically present in the tables.
3. A modification log table, which will carry the table name, the column identifier, the user who made the change, the old value, and the timestamp.
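A minimal sketch of the trigger side of this, assuming SQL Server 2000, a hypothetical table dbo.Customer with key CustomerID, one audited column SomeColumn, an IsDeleted flag, and a hypothetical dbo.ModificationLog table; it illustrates the technique rather than being a drop-in solution:

    -- Hypothetical audit log table.
    CREATE TABLE dbo.ModificationLog (
        TableName  sysname      NOT NULL,
        KeyValue   varchar(50)  NOT NULL,
        ColumnName sysname      NOT NULL,
        ModifiedBy sysname      NOT NULL,
        OldValue   varchar(200) NULL,
        ModifiedAt datetime     NOT NULL DEFAULT GETDATE()
    );
    GO

    -- 1 and 3: stamp the modification date and log old values.
    CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
    AFTER INSERT, UPDATE
    AS
    BEGIN
        -- Stamp the affected rows with the server time, overriding anything the client supplied
        -- (this does not re-fire the trigger unless RECURSIVE_TRIGGERS is ON for the database).
        UPDATE c
        SET    lastModificationDate = GETDATE()
        FROM   dbo.Customer c
        JOIN   inserted i ON i.CustomerID = c.CustomerID;

        -- Log the old value of the audited column for rows whose value actually changed.
        INSERT INTO dbo.ModificationLog (TableName, KeyValue, ColumnName, ModifiedBy, OldValue)
        SELECT 'Customer', CONVERT(varchar(50), d.CustomerID), 'SomeColumn', SUSER_SNAME(),
               CONVERT(varchar(200), d.SomeColumn)
        FROM   deleted d
        JOIN   inserted i ON i.CustomerID = d.CustomerID
        WHERE  ISNULL(d.SomeColumn, '') <> ISNULL(i.SomeColumn, '');
    END
    GO

    -- 2: soft deletes, flagging rows instead of removing them.
    CREATE TRIGGER trg_Customer_SoftDelete ON dbo.Customer
    INSTEAD OF DELETE
    AS
    BEGIN
        UPDATE c
        SET    IsDeleted = 1
        FROM   dbo.Customer c
        JOIN   deleted d ON d.CustomerID = c.CustomerID;
    END
    GO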
I would just like to know what everyone uses to monitor SQL usage? We have a SQL 2000 server that already has several applications sharing it and everyone wants to keep forcing more onto it.
I want to be able to judge when this server has reached its capacity, or how much more load it can take. Can SQL Profiler alone do this for me?
Is there any way to watch the activity of tempdb in 6.5, similar to using 7.0's Profiler? I would like to see how often it is utilized and how large it grows during utilization. Any help would be appreciated.
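In the absence of Profiler, one rough option that works on 6.5 is to sample tempdb's space usage on a schedule with sp_spaceused and compare the numbers over time; this shows growth but not which statements caused it:

    -- Sample on a schedule (e.g. every few minutes) and log or compare the output over time.
    USE tempdb
    EXEC sp_spaceused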