Can anybody help me with the following requirements on my MS SQL Server 2000 database?
1. All tables should have a lastModificationDate column. Every insert or update should set this column to the system time via a trigger or similar mechanism; the value should not be supplied by ordinary SQL statements.
2. There should be no deletes on the tables. Deleted records should instead be marked as inactive or deleted, so they no longer appear in queries but remain physically present in the table.
3. A modification log table that carries the table name, the column identifier, the modifying user, the old value, and the timestamp. (A sketch of one possible approach follows this list.)
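A minimal sketch of one way to cover all three requirements on SQL Server 2000. The table Customer, its key CustomerID, the audited column CustomerName, and the flag IsDeleted are all hypothetical placeholders; it also relies on the RECURSIVE_TRIGGERS database option being off (the default), so a trigger's own UPDATE does not re-fire it:

    -- 1. Keep lastModificationDate current on every insert and update.
    CREATE TRIGGER trg_Customer_Touch ON Customer
    FOR INSERT, UPDATE
    AS
        UPDATE c
        SET lastModificationDate = GETDATE()
        FROM Customer c
        JOIN inserted i ON c.CustomerID = i.CustomerID
    GO

    -- 2. Soft delete: an INSTEAD OF trigger turns DELETE into a flag update.
    CREATE TRIGGER trg_Customer_SoftDelete ON Customer
    INSTEAD OF DELETE
    AS
        UPDATE c
        SET IsDeleted = 1
        FROM Customer c
        JOIN deleted d ON c.CustomerID = d.CustomerID
    GO

    -- 3. Modification log; one audited column shown, repeat per column.
    CREATE TABLE ModificationLog (
        TableName  SYSNAME,
        ColumnName SYSNAME,
        ModifiedBy SYSNAME,
        OldValue   VARCHAR(8000),
        ModifiedAt DATETIME DEFAULT GETDATE()
    )
    GO

    CREATE TRIGGER trg_Customer_Audit ON Customer
    FOR UPDATE
    AS
        IF UPDATE(CustomerName)
            INSERT INTO ModificationLog (TableName, ColumnName, ModifiedBy, OldValue)
            SELECT 'Customer', 'CustomerName', SUSER_SNAME(), d.CustomerName
            FROM deleted d
    GO

Queries then filter on IsDeleted = 0, typically through a view, so "deleted" rows stay physically present but out of normal result sets.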
We plan to build a monitoring tool using PowerShell and Performance Monitor that could watch 10 to 20 servers. Is there an existing free tool that uses Performance Monitor to monitor SQL Server? I don't want to invest the effort if something is already available.
In SQL Server 6.5, what can you use to monitor the size of your database?
I know you can use the SQLPerfMon counters to constantly check the percentage of log space used; I was just hoping there was a similar counter for database size, because I want to generate a system event when the database gets 80% full. Is there such a utility?
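If no such counter turns up, a scheduled task can compute fullness itself. A rough sketch using the 6.5-era catalogs, assuming sysusages.size counts 2 KB pages on that version; comparing the allocation against sp_spaceused's used figure and alerting past 80% would be the scheduled part:

    -- Allocated size of the current database, from the 6.5-era catalogs.
    SELECT SUM(size) / 512.0 AS AllocatedMB
    FROM master..sysusages
    WHERE dbid = DB_ID()

    -- sp_spaceused reports how much of that allocation is used; capture its
    -- output on a schedule and raise an alert when usage crosses 80%.
    EXEC sp_spaceused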
Does 2005 have some kind of new feature that audits/monitors changes to a database, kind of like an antivirus? Reason for the question: 1) inserting 1000 records into the database takes about 2 minutes; 2) reading those 1000 records takes about 45 seconds; 3) updating those 1000 records takes about 15 minutes; 4) yes, we are using ntwdblib.dll and a 4GL language.
I was running a test program to add, read, update, and delete 1000 records, and that is when I noticed that insert, update, and delete took a performance hit whereas reading didn't. I ran my test program on a control server (in house) and then at the client's site (matching OS, MSSQL 2005 SP2). Results from the test program: the UPDATE process on the client side took about 4x longer, INSERT about 2x longer, and DELETE about 1.5x longer; READ was actually faster on the client's system. This made me wonder if there was some kind of database monitoring/auditing going on.
We have audit requirements dictating that all schema changes to certain databases are tracked and monitored. Are there any tools that can monitor these changes and log them for audit review?
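If the databases are on SQL Server 2005 or later, a database-level DDL trigger can do this without third-party tools. A minimal sketch; the log table name SchemaChangeLog is illustrative:

    CREATE TABLE SchemaChangeLog (
        EventTime DATETIME DEFAULT GETDATE(),
        LoginName SYSNAME,
        EventData XML
    )
    GO

    -- Fires for CREATE/ALTER/DROP of any object in the database and stores
    -- the full event details (statement, object, login) as XML.
    CREATE TRIGGER trg_LogSchemaChanges ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
        INSERT INTO SchemaChangeLog (LoginName, EventData)
        VALUES (SUSER_SNAME(), EVENTDATA())
    GO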
I am developing a process to monitor a table in a high-transaction database. The process will count the number of rows in the table to verify whether it has changed or is stuck. Because the database carries a lot of transactions, I don't want to execute a query against it too often. Is there another suitable way to accomplish this goal?
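One cheap alternative on SQL Server 2000: read the engine-maintained row count from sysindexes rather than running COUNT(*) against the table. The value can lag slightly behind reality, which is usually fine for a "has it moved?" check; the table name below is a placeholder:

    SELECT rowcnt
    FROM sysindexes
    WHERE id = OBJECT_ID('dbo.MyHighVolumeTable')  -- placeholder name
      AND indid < 2  -- row for the heap (0) or clustered index (1)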
I understand the rule of thumb that the CPU should not be over 90%. If you take the counters (% Processor Time, % Privileged Time, % User Time, % Interrupt Time, and Interrupts/sec), what combination gives you your CPU time?
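For reference, Performance Monitor defines these counters so that, per processor, approximately:

    % Processor Time ≈ % User Time + % Privileged Time

% Interrupt Time is already counted inside % Privileged Time, and Interrupts/sec is a rate rather than a percentage, so neither is added on top. % Processor Time on its own is the usual "CPU time" figure for rule-of-thumb thresholds.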
I have been asked to monitor SQL to tell me when we are performing better than others. Can anyone tell me what kinds of scheduled jobs or scripts they utilize?
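One common approach is a scheduled job that snapshots SQL Server's cumulative counters into a baseline table, so "better or worse than usual" becomes a query over the history. A sketch; the table name PerfBaseline is illustrative, and the @@ values are cumulative since the server started, so deltas between rows give rates:

    CREATE TABLE PerfBaseline (
        SampleTime   DATETIME DEFAULT GETDATE(),
        CpuBusyTicks INT,
        IoBusyTicks  INT,
        Connections  INT,
        PacketsIn    INT,
        PacketsOut   INT
    )
    GO

    -- Job step, run on a schedule (e.g. every 15 minutes):
    INSERT INTO PerfBaseline (CpuBusyTicks, IoBusyTicks, Connections, PacketsIn, PacketsOut)
    SELECT @@CPU_BUSY, @@IO_BUSY, @@CONNECTIONS, @@PACK_RECEIVED, @@PACK_SENT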
Hi, is there a way/tool in SQL Server 2000 SP3 to monitor all activities going on in the database? For example, I first create an empty database. Then I have an ERwin-generated DDL to create all views and tables. After that, I have INSERT scripts that populate all the base tables. What I want to monitor is success or failure for each script. Thanks, N.
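One low-tech option on SQL Server 2000: have each generated script record its own outcome. @@ERROR reflects only the immediately preceding statement, so check it right after each critical statement. A sketch, with ScriptLog and the script name as placeholders:

    CREATE TABLE ScriptLog (
        RunAt      DATETIME DEFAULT GETDATE(),
        ScriptName VARCHAR(200),
        ErrorCode  INT  -- 0 = success
    )
    GO

    -- ...a critical statement from the ERwin DDL or an INSERT script runs here...
    INSERT INTO ScriptLog (ScriptName, ErrorCode)
    VALUES ('create_views.sql', @@ERROR)

Alternatively, running each script through osql with a separate -o output file per script gives a success/failure record without modifying the scripts at all.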
Hi, I have implemented health monitoring for my website, using the SQL provider. Health monitoring works fine when the website is run from VS2005 using the built-in web server; all the expected events are inserted into the aspnet database. However, when I deploy the site onto IIS, no events are ever inserted into the database. I would appreciate some help figuring out why this is happening! The health monitoring section of my web.config file is:

    <healthMonitoring enabled="true" heartbeatInterval="0">
      <bufferModes>
        <remove name="Analysis"/>
        <add name="Analysis"
             maxBufferSize="10"
             maxFlushSize="2"
             urgentFlushThreshold="2"
             regularFlushInterval="00:00:02"
             urgentFlushInterval="00:00:01"
             maxBufferThreads="1"/>
      </bufferModes>
      <providers>
        <remove name="SqlWebEventProvider"/>
        <add name="SqlWebEventProvider"
             type="System.Web.Management.SqlWebEventProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="SQL_ASPNET"
             maxEventDetailsLength="1073741823"
             buffer="true"
             bufferMode="Analysis"/>
      </providers>
      <eventMappings>
        <remove name="All Events"/>
        <add name="All Events"
             type="System.Web.Management.WebBaseEvent, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
      </eventMappings>
      <profiles>
        <remove name="Default"/>
        <add name="Default"
             minInstances="1"
             maxLimit="Infinite"
             minInterval="00:10:00"/>
      </profiles>
      <rules>
        <add name="All Events"
             eventName="All Events"
             provider="SqlWebEventProvider"
             profile="Default"
             minInterval="00:00:01"
             minInstances="1"/>
      </rules>
    </healthMonitoring>
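A frequent cause of this symptom is that the identity the IIS application pool runs under (unlike your interactive account when debugging in VS2005) has no rights on the health-monitoring database, and buffered SqlWebEventProvider failures are easy to miss. A sketch of the grants, assuming the default NETWORK SERVICE identity and a database named aspnetdb; both are assumptions, so substitute your actual pool identity and the database behind the SQL_ASPNET connection string:

    -- Both the login name and the database name below are assumptions.
    EXEC sp_grantlogin N'NT AUTHORITY\NETWORK SERVICE'
    GO
    USE aspnetdb
    GO
    EXEC sp_grantdbaccess N'NT AUTHORITY\NETWORK SERVICE'
    -- Role created by aspnet_regsql's web-event (-A w) setup:
    EXEC sp_addrolemember N'aspnet_WebEvent_FullAccess', N'NT AUTHORITY\NETWORK SERVICE'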
Can anyone show me, in SQL 7, how to obtain the available space on a particular filegroup in a database (not the whole database or a single data file)? I am trying to include this in a script that monitors my database, which uses filegroups, and I have all the other information I need (from the sysfiles table) except the available space. Thanks in advance!
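sysfiles plus FILEPROPERTY should cover the missing piece; a sketch that reports allocated versus free space per filegroup. File sizes are stored in 8 KB pages, hence the division by 128, and log files carry groupid 0 so they drop out of the join:

    SELECT fg.groupname,
           SUM(f.size) / 128.0 AS AllocatedMB,
           SUM(f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 AS FreeMB
    FROM sysfiles f
    JOIN sysfilegroups fg ON f.groupid = fg.groupid
    GROUP BY fg.groupname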
I do all my monitoring locally for disk space, locks, and blocking. I have 10 production servers, and we need to centralise monitoring so that everything can be watched from one server. Does anyone have ideas on how memory, CPU consumption, disk space, alerts, locks, blocking, log space, and job completion monitoring can be handled?
In SQL 7, what is the easiest way to monitor the number of connections? I have been asked to create a report that monitors the number of logins every hour.
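A sketch of one way: a small table plus an hourly SQL Server Agent job that snapshots the connection count from sysprocesses. ConnectionLog is an illustrative name, and the count includes a few system connections, so filter on loginame or hostname if you want user sessions only:

    CREATE TABLE ConnectionLog (
        SampleTime  DATETIME DEFAULT GETDATE(),
        Connections INT
    )
    GO

    -- Job step, scheduled hourly:
    INSERT INTO ConnectionLog (Connections)
    SELECT COUNT(*) FROM master..sysprocesses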
I am new to SQL Server and would like to know how I can monitor server performance on SQL Server 6.5 and 7. In Sybase we can run sp_sysmon. Is there anything similar for SQL Server?
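The closest built-in equivalent is sp_monitor, which reports CPU, I/O, packet, and connection statistics accumulated since it was last run:

    EXEC sp_monitor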
Folks! Is it possible to monitor several SQL Servers in one window and notify an operator about error messages? Maybe some new software makes this possible? Thank you.
I would just like to know what everyone uses to monitor SQL usage. We have a SQL 2000 server that already has several applications sharing it, and everyone wants to keep forcing more onto it.
I want to be able to judge when this server has reached its capacity, or how much more it can take on. Can SQL Profiler alone do this for me?
Does anyone know of a way to determine which SPs or which tables are being accessed the most heavily or often from an application? I have inherited a site that is heavily used, but poorly tuned. However we have distributed C/S apps that hit the server and do not contain debug code. In addition, the SPs are encrypted and I do not want to replace them in the short term with new SPs that log hits.
I realize I can use triggers, but is there any other method?
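If this is SQL Server 2000, a server-side trace written to a file can answer this without touching the encrypted procedures, since trace events record the calls even when procedure definitions are encrypted. A sketch of the aggregation, assuming a trace is already writing RPC:Completed and SQL:BatchCompleted events to C:\trace\hits.trc (the path is a placeholder):

    SELECT TOP 20
           CAST(TextData AS NVARCHAR(400)) AS Statement,
           COUNT(*) AS Hits
    FROM ::fn_trace_gettable(N'C:\trace\hits.trc', DEFAULT)
    WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
    GROUP BY CAST(TextData AS NVARCHAR(400))
    ORDER BY Hits DESC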
Is there any way to watch the activity of tempdb in 6.5, similar to using 7.0's Profiler? I would like to see how often it is utilized and how large it grows during use. Any help would be appreciated.
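Short of 7.0's Profiler, a scheduled task that samples tempdb space gives at least the growth picture; a rough sketch (sp_spaceused output format varies by version):

    -- Snapshot how much of tempdb is currently used; run on a short
    -- interval and log the results to watch growth over time.
    EXEC tempdb..sp_spaceused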
I am new to the DB world, so please let me know if I don't make sense.
I was wondering if there is an open-source SQL database monitoring tool out there that I can run on my desktop that would give me a log of what is going on in the database (or on the server).
I'm wondering if someone has a script that can run on each server to get the size and free space of every database on that server. Currently I am using Enterprise Manager to check database space usage manually, but this is very frustrating because a server can have many databases on it.
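A sketch using the undocumented but long-standing sp_MSforeachdb helper, which runs a command once per database with ? replaced by the database name:

    -- Print each database name and its space usage in turn.
    EXEC sp_MSforeachdb 'USE [?]; PRINT ''?''; EXEC sp_spaceused'

Writing the results to a central table instead of the output window is a small step from there.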
I believe we are reaching some limitations with SQL Server, and I have been monitoring certain items in Performance Monitor, such as: Pages/sec; Bytes Received/sec; Bytes Sent/sec; % Disk Read Time; % Disk Write Time; % Processor Time; Log Growths; Percent Log Used; and Transactions/sec. I notice quite a few spikes in Bytes Sent/sec, and when % Disk Read Time spikes for more than a few seconds, users notice a delay.
My thoughts are that: 1) we need more memory in our SQL Server box (we currently have 768 MB), a faster SQL Server box, and to distribute the load of some databases to another SQL Server; and 2) we also have a bottleneck when users connect to SQL Server via Citrix through our Terminal2 server, which has been tracked down to simply a slow Terminal2 box with a slow NIC card. It has been confirmed that Terminal2 is definitely taking a toll and will time out when large queries are executed.
We have also been monitoring each of the server boxes. Are there any other Performance Monitor counters that anyone would recommend as good to watch for SQL Server (there are several things that can be selected to monitor)?
We've also noticed that bound MSAccess forms seem to play a significant role in the long spikes for Bytes sent/sec. I'm assuming this might be normal for bound forms and the slow SQL Server box with limited memory. Unbound MSAccess forms do not seem to present any problem and show as quick spikes for the Performance monitor.
Another problem is that I can't seem to tie the Performance Monitor spikes back to specific transactions in SQL Profiler. Is there any way to pinpoint a spike in Performance Monitor to a specific transaction, other than trying to catch the spike and quickly switching to SQL Profiler?
We are planning on upgrading our SQL Server box and also adding in another SQL Server box to help distribute the load with certain databases. We are also getting a faster box for our Terminal2 (citrix) server as these slow-downs/time-outs do not happen internally or when we use Remote Desktop Connection to connect externally (only when we connect externally via Terminal2).
Any help would be greatly appreciated! Thank you in advance.
I'm a SQL Server 7/2000 DBA and manage about 40 servers on different networks. Every morning I check through Enterprise Manager whether all jobs (backup, maintenance, etc.) have run successfully. This check costs me an hour per day.
Because of a reorganization I've gained some new colleagues and lost some. My new colleagues think this is too much work, so it should be automated. They only want failed jobs to report an error on a website or something like that, and don't want to check 40 servers. I don't agree, because I'm afraid I'm going to miss some errors.
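The check itself automates well: job outcomes are already in msdb on every server, so a query like the sketch below, run against each server from one central job (via linked servers, or osql in a loop) and published to a page or an email, covers the "report only failures" request without losing errors. run_status 0 is a failure and step_id 0 is the overall job outcome row:

    SELECT j.name AS job_name,
           h.run_date,
           h.run_time,
           h.message
    FROM msdb.dbo.sysjobhistory h
    JOIN msdb.dbo.sysjobs j ON h.job_id = j.job_id
    WHERE h.run_status = 0  -- failed
      AND h.step_id = 0     -- job outcome, not individual steps
      AND h.run_date >= CONVERT(INT, CONVERT(CHAR(8), GETDATE() - 1, 112))
    ORDER BY h.run_date DESC, h.run_time DESC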
I want to collect performance measures regarding the import of data and the growth of the resulting extract tables.
I use, say, 15 tables from an ERP system (like JD Edwards) to build, say, a sales warehouse and an MS OLAP cube.
For every incoming table I have a DTS package, which is logged to msdb.sysdtspackagelog. Every package is named [Build]_[Subsystem]_[Table_name], e.g. JDEdwards_Sales_F0005, and the destination table is named accordingly, e.g. extr_F0005.
Now, with a separate DTS package I transport the records from the msdb database into my build database, say JDEdwardsExtract, into a table named extr_performance_monitor (possibly filtered on build name, because there are several builds in my system).
This result is quite good and easy to work with for seeing elapsed time per day.
But the DTS log won't tell me how many records the DTS package had to copy (and there is at least one package with no records: the cube update).
This is where count(*) comes in.
In the DTS package that moves sysdtspackagelog into extr_performance_monitor, I added the columns extr_table_name, extr_table_rowcount, and extr_table_timestamp.
With select name, 'extr_' + replace(name, '[Build]_[Subsystem]_', '') as extr_table_name from extr_performance_monitor I cut the original DTS package name down to the extr_ name.
I am thinking about a package that runs after the last data import (and cube refresh) is done, but on the same day.
So the result could be: table name (as the dimension category), time to perform, number of records in the import table, and records per second.
The next step could be to look at the required space.
The result should be a chart, say over 12 months, where you can easily see the amount of data, the time consumed (and the table space used), and, very importantly, from which you could extrapolate your hardware requirements.
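For the missing row counts, a package step after the last import can fill extr_table_rowcount with dynamic SQL. A sketch against the naming scheme above; adjust the column and table names to the actual schema:

    DECLARE @tbl SYSNAME, @sql NVARCHAR(1000)
    DECLARE c CURSOR FOR
        SELECT DISTINCT extr_table_name FROM extr_performance_monitor
    OPEN c
    FETCH NEXT FROM c INTO @tbl
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Count the rows of each extract table and stamp the monitor row.
        SET @sql = N'UPDATE extr_performance_monitor '
                 + N'SET extr_table_rowcount = (SELECT COUNT(*) FROM ' + @tbl + N'), '
                 + N'extr_table_timestamp = GETDATE() '
                 + N'WHERE extr_table_name = ''' + @tbl + N''''
        EXEC sp_executesql @sql
        FETCH NEXT FROM c INTO @tbl
    END
    CLOSE c
    DEALLOCATE c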
My SQL Server 2000 system is pegged. Disk activity is maxed out and system is very unresponsive. Several people have database tasks running through this system and I'm pretty sure there is a single application that is the culprit and I'd like to identify which one.
Does anyone have any practical tips on using "Process Info" in Enterprise Manager? What units are CPU and Physical I/O displayed in? Why does the column sort on these fields not work as expected?
Do I just pick the process with the largest Physical I/O and assume that's the problem?
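Process Info reads from master..sysprocesses, which you can query directly; cpu and physical_io there are cumulative counters per connection (CPU time and disk reads plus writes), so a single snapshot favors long-lived connections. Sampling twice and comparing the deltas is more reliable than simply picking the current maximum:

    SELECT spid, loginame, hostname, program_name, cpu, physical_io
    FROM master..sysprocesses
    ORDER BY physical_io DESC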