Excessive Maintenance?
Feb 11, 1999
Help !!
I am running a database of 500-600MB, 20-30% of which is new data daily (data older than 5 days is deleted as part of the nightly maintenance), and my nightly maintenance is regularly taking over an hour.
CheckDB, New Alloc, Catalog, re-indexing and dumps are performed nightly (around 2am), and as the system is in constant use I cannot afford such a long task. I can't use weekly dumps/CheckDB as we use transaction log replication and the logs are dumped every minute. I really need some suggestions on how I can improve matters. The deletion of old data in particular is taking a long time due to the use of local variables, but is there a faster way to do this:
-- Declarations assumed: the original post did not include them.
-- The cursor presumably iterates the connection IDs in the control table.
DECLARE @connectionid int, @dRent int
DECLARE tnames_cursor CURSOR FOR
SELECT Cid FROM ControlDB..connectiontable
OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @connectionid
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
-- Retention period (in days) for this connection
SELECT @dRent = DeliveredRetention FROM ControlDB..connectiontable
WHERE ControlDB..connectiontable.Cid = @connectionid
-- Delete delivered rows older than the retention period for this connection
-- (IS NOT NULL is needed: a "!= NULL" comparison never matches under ANSI_NULLS)
DELETE FROM MyDB..Table WHERE Cid = @connectionid
AND DateDelivered IS NOT NULL
AND DATEDIFF(hh, MyDB..Table.DateDelivered, GETDATE()) >= (@dRent * 24)
END
FETCH NEXT FROM tnames_cursor INTO @connectionid
END
CLOSE tnames_cursor
DEALLOCATE tnames_cursor
GO
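The cursor handles one connection at a time; a single set-based delete that joins to the retention table is usually far faster. A minimal sketch, assuming SQL 7.0-style join syntax and the same table and column names used by the cursor version (ControlDB..connectiontable, MyDB..Table, Cid, DeliveredRetention, DateDelivered):
-- One set-based delete: each row is compared against its own connection's retention period
DELETE t
FROM MyDB..Table AS t
INNER JOIN ControlDB..connectiontable AS c
ON c.Cid = t.Cid
WHERE t.DateDelivered IS NOT NULL
AND t.DateDelivered < DATEADD(dd, -c.DeliveredRetention, GETDATE())
GO
Comparing DateDelivered against a computed cutoff (rather than wrapping the column in DATEDIFF) also lets an index on DateDelivered be used, and if lock escalation remains a problem the delete can be run in smaller batches.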
These jobs have also started running out of locks and deadlocking on occasion, which seems odd as the system has 10000 locks available (escalating at 2000).
Any Suggestions would be very much appreciated
Damon
View 1 Replies
Nov 27, 2006
Hi there group.
Could some please point me in the right direction?
We have a database that's about 28GB in size; recently the SQL Server process has been using approximately 1.6GB of memory.
I have tried running SQL Profiler to find out which stored procedure is causing this, but was unsuccessful.
When SQL Server is restarted, the process runs at about 50MB for about 20 seconds and then starts climbing back up to 1.6GB of memory usage.
Please assist.
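For context, SQL Server caches data pages in its buffer pool and keeps taking memory up to its configured maximum, so steady growth to a plateau like 1.6GB is normal behaviour rather than a runaway procedure. If the box has to leave memory for other applications, the cap can be lowered; a minimal sketch (1024 MB is only an example value):
-- Cap the buffer pool so SQL Server leaves memory for other processes on the box
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 1024   -- value in MB
RECONFIGURE
GO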
View 12 Replies
View Related
Apr 16, 2008
I'm running into a blocking problem on my SQL 2000 server. I have a table that is frequently read from and written to (inserts, updates, deletes). I don't place any explicit locks, but I do a SELECT @@Identity after I insert a record to get the identity value via a sqlCommand.ExecuteScalar.
So my questions:
#1 Is blocking normal? (40-90 blocks consistently - 350 or so client connections)
#2 Is there any better coding solution to avoid blocks?
#3 I need to get the identity value after the record is added, and I thought ExecuteScalar is the fastest with the least overhead, but perhaps I'm wrong? (see the sketch below)
Any suggestions or hints welcome.
Thanks, Rob.
.NET 2.0
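One likely contributor is @@IDENTITY issued as a separate statement: it is session-scoped across all scopes, so a trigger that inserts into another identity table can change its value, and if the INSERT and the SELECT @@Identity are sent as separate commands the extra round trip lengthens the time locks are held. On SQL 2000 the usual alternative is SCOPE_IDENTITY() returned in the same batch as the INSERT, still via ExecuteScalar. A minimal sketch of the batch text; the table and columns are made up for illustration:
-- Hypothetical table; substitute your own insert
INSERT INTO dbo.Orders (CustomerId, OrderDate)
VALUES (@CustomerId, GETDATE())
-- Same batch, same scope: returns the identity produced by the insert above,
-- unaffected by triggers or other sessions (unlike @@IDENTITY)
SELECT CAST(SCOPE_IDENTITY() AS int)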
View 4 Replies
View Related
Mar 16, 1999
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
Thanks in advance.
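A first thing worth ruling out after a 6.5 to 7.0 upgrade is stale statistics and usage counters, since the 7.0 optimizer leans on them heavily; a hedged sketch, run in the upgraded database:
-- Refresh space-usage counters and optimizer statistics after the upgrade
DBCC UPDATEUSAGE (0)      -- 0 = current database
EXEC sp_updatestats       -- updates statistics on all user tables
GO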
View 1 Replies
View Related
May 23, 2006
Hi there,
Currently using SQL Server 2000 (SP4). The following condition started occurring last week:
- Server has excessive blocking
- Majority of the processes are in runnable state
- Excessive blocking happens for a few mins. and repeats again during the day. Does not happen at night.
- Nothing on the server errorlog, profiler
- CPU averages 40 - 50% at that point of excessive blocking
Any help would be greatly appreciated.
Thanks.
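When the blocking spike is happening, a quick way on SQL 2000 to see who is at the head of the chain is sysprocesses; a minimal sketch:
-- Sessions that are currently blocked and what they are waiting on
SELECT spid, blocked, waittime, lastwaittype, cmd, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0
-- The spid values appearing in the "blocked" column are the heads of the chains;
-- DBCC INPUTBUFFER(<spid>) shows the last statement each head ran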
View 7 Replies
View Related
Jun 25, 2007
Since the other related topic is closed/answered...
The Short version:
SQL is now logging too much info with every package. The volume of the new "User: Diagnostic" event has caused some packages to fail and the command-line exclusion option appears to have no effect on the events logged to the SQL provider. Is this a bug in dtexec or am I using the wrong syntax to exclude log entries? I don't want to modify all of my SSIS packages...
More Info:
SQL SP2 introduced new logging events, most of which appear to get logged by default. So far, none of our packages have used any sort of explicit logging configuration; it's all been set at the command line using a syntax like shown below:
dtexec.exe /FILE "D:SSIS PackagesMyAppVendors.dtsx" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REP E;Diagnostic /LOGGER "{6AA833A1-E4B2-4431-831B-DE695049DC61}";"MyDBConnName"
This does appear to correctly limit what gets logged to the console (and thereby the SQL Agent's job step log), but has no effect on what's logged to the database. Normally, I'd use /REP EWDCI, but I was attempting to limit the log entries to Errors only.
I first came across this error when a package failed, but it only logged the following to the console with nothing in sysdtslog90 (while not the "latest/greatest" server, this is a relatively low-utilized quad 2.8ghz xeon ProLiant DL580 G2):
Error: 2007-06-21 06:01:30.45
Code: 0xC0202009
Source: MYPACKAGENAME Log provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}"
Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Connection is busy with results for another command".
End Error
Error: 2007-06-21 06:01:30.46
Code: 0xC0014010
Source: MYPACKAGESTEP Load
Description: The SSIS logging provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}" failed with error code 0xC0202009 ((null)). This indicates a logging error attributable to the specified log provider.
End Error
I changed this one package to only log OnError events, but I'd rather not have to change every package to do the same, plus I'd like the ability to easily turn on verbose or any other logging level when needed.
View 1 Replies
View Related
May 22, 2006
We have a 3rd-party SQL 2000 app, mostly bad SQL, and we have lock issues. When monitoring SQL lock requests per second, I normally get between 500,000 and 1,000,000 requests. For a 4-way box with 16 GB of memory, what is considered an excessive amount of locks?
View 3 Replies
View Related
Apr 18, 2000
I have got SQL 6.5 SP5a with SMS 1.2 SP4 on separate Alpha boxes. I have automated the backups so they are scheduled for after hours. SMS gets backed up first and TEMPDB shortly afterwards. However, since a backlog in SMS MIFs happened, the TEMPDB backup reports 100,000 pages backed up. When you back it up on its own, it only shows 170+ pages.
The SMS DB is 600MB in size, the Log is 210MB, Open objects is 5000, and TEMPDB is set 210MB on its own device.
Any ideas
View 1 Replies
View Related
Jul 23, 2005
Hello! I am trying to investigate a strange problem with a particular stored procedure. It runs OK for several days and suddenly we start getting a lot of locks, the reason being a [COMPILE] lock placed on this procedure. As a result, we have 40-50 other connections waiting, then the next connection using this procedure has a [COMPILE] lock, etc. The client is fully qualifying the stored procedure by database/owner name and it doesn't start with sp_. I know these are the reasons for a [COMPILE] lock being placed. Is there something else that might trigger this lock? When troubleshooting this issue, I noticed there was no plan for this procedure in syscacheobjects. The stored procedure is very simple (I know it could be rewritten/optimized but our developer wrote it):
CREATE PROCEDURE [dbo].[vsp_mail_select]
@user_id int,
@folder_id int,
@is_read bit = 1, -- IF 1, pull everything, else just pull unread mail
@start_index int = null, -- unused for now, we return everything
@total_count int = null output, -- count of all mail in specified folder
@unread_count int = null output -- count of unread mail in specified folder
AS
SET NOCOUNT ON
select m1.* from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((@is_read = 0 and is_read = 0) or (@is_read = 1))
order by date_sent desc
select @total_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((is_read = 0 and @is_read = 0) or (@is_read = 1))
select @unread_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id and is_read = 0
GO
I was monitoring the server for a couple of days before and I am not sure why this happens every 3-4 days only!
Any help on this matter would be greatly appreciated!
Thanks,
Igor
View 1 Replies
View Related
Aug 9, 2007
Hi all,
I'm trying to get an understanding of a serious problem I have with a large DB in production. This is going to be obvious to someone (everyone probably) <bg>
I have a table which consists of numerous varchars and ints but also a Text type field. This table resides in a SQL 2000 Database. This DB currently has a data file size of 16Gb and a Transaction Log size of 17Gb. When I edit the table and increase the size of a Varchar field from 50 to 100 these files grow to more than double their size!
Why is this happening and how can I prevent this?
TIA
NozFx
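For what it's worth, the behaviour described is typical of the Enterprise Manager table designer: it implements a column change by building a new table, copying every row into it, dropping the old table and renaming, and all of that copying is fully logged. Widening a varchar directly with ALTER TABLE is a metadata-only change; a minimal sketch with placeholder table and column names:
-- Widen the column in place instead of letting the designer rebuild the table
ALTER TABLE dbo.MyWideTable
ALTER COLUMN SomeField varchar(100) NULL   -- restate NULL/NOT NULL to keep the current nullability
GO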
View 1 Replies
View Related
Jun 8, 2015
I am getting this message in the error log.
"Database XYZ has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files."
I am using SQL Server 2008 R2.
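The message points at the remedy: shrink the log once, then grow it back in a few large increments so it is rebuilt with far fewer VLFs. A minimal sketch, assuming the logical log file name is MyDB_log and an 8 GB target (adjust both, and take a log backup first so the shrink can actually release space):
-- Shrink the log to near-empty, then regrow it in one large step
USE MyDB
GO
DBCC SHRINKFILE (MyDB_log, 1)   -- 1 MB target
GO
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB, FILEGROWTH = 2048MB)
GO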
View 5 Replies
View Related
Jul 31, 2007
We are running SQL Server 2000 Enterprise Edition on a 2-node cluster with an IIS/ASP.NET front end hosting 150-200 active connections. There is a SVCHOST process running under the LOCAL SERVICE account, hosting the Remote Registry service, that is using only 4,200K but is page faulting 200-500 times per second. I realize this process is used for failover, but the page fault rate seems excessive. Any thoughts on this?
The servers are running Windows Server 2003 with 4 processors and 4gb RAM.
View 1 Replies
View Related
Aug 3, 2007
We have a SQL2000 database (Publisher) replicating inserts and updates across a 10Mb link to a SQL 2005 database (Subscriber). The Publisher has two tables we are interested in, 1 with 50 columns and 1 with 15. Both tables have 6 insert/update triggers that fire when a change is made to update columns on the publisher database.
We have set up a pull transactional replication from the Subscriber to occur against the Publisher every minute. We have limited the subscription/replication configuration to publish 6 columns from table 1 and 4 from table 2. Any change occurring on any other columns in the Publisher is of no interest. The SQL 2005 database has a trigger on table 1 and table 2 to insert values into a third table. There are around 7,000 insert/updates on table 1 and 28,000 on table 2 per day. All fields in the tables are text.
We are seeing "excessive" network traffic of approximately 1MB per minute (approx 2GB per 24 hrs). We also see that the Distributor databases are getting very large (up to around 30GB) and growing until they get culled. We have reduced the culling interval from 72 hrs to 24 hours to reduce the size.
Does anyone have any suggestions as to how this "excessive" network traffic can be minimised and how the distributor database size can be minimised? I think that maybe they are both related?
Thanks,
Geoff
WA POLICE
View 5 Replies
View Related
Aug 17, 2007
Hello,
I have a question that I hope someone can clear up for me. I have come across a number of different suggestions on DB maintenance, for example reindexing with the following script:
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@TableName,' ',90)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
My question is, doesn't the maintenance plan have this functionality inherent in it when you create the maintenance jobs to reindex? Is there a benefit to scripting things out vs just using the maintenance plan wizard for this sort of thing and any of the items it covers? I came from an Oracle background where this was a no-brainer but I am a bit confused on the choices with SQL Server.
Thanks.
View 1 Replies
View Related
Sep 25, 2007
We have a large number of clients attempting to replicate two publications on 2005 Express databases (2 publications subscribed to the one subscriber database) with our 2005 Server (9.00.3042.00 SP2 Standard Edition) and experiencing two significant problems:
1) Users experience the following message:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This problem should not apparently occur with SQL Server 2005 (or 2005 Express) instances with SP2 applied. All clients experiencing this problem have SP2 installed as does our Server and the retention period is 30 days. The subscribers have been replicating well under that.
2) Replications never succeed after appearing to replicate/loop around for hours
This issue is the most critical, as we have clients who have been installed and re-installed with new instances of SQL Server 2005 Express, new empty databases (on the subscriber before snapshot extraction), and using fresh snapshots (less than a few hours old) which cannot successfully replicate.
Interestingly there is at least 1 instance where several computers are subscribed and successfully replicating the same database as another where replication refuses to succeed.
To test we have taken a republished database from another 2005 Server which is working fine and restored it to the same server as the one holding the database with which we are experiencing problems and subscribed to it. This test worked fine and replication of both publications went through fast and repeatedly without showing any signs of problem.
This indicates that the problem is perhaps data related as it appears localised to that database.
Below are two screenshots which may assist.
Screenshot 1 shows that on the server side the replication attempts look like they are succeeding, despite the fact that the subscriber end does not indicate success. The history also indicates that the subscription has spent all its time initialising and not merging any changes.
Screenshot 2 shows a rogue process which appears on many of the problem-child subscribers. It shows a process running with no end time even though the job indicates failure in the message, and even though other replication attempts appear to have succeeded after it. This process stays in the history showing that it is running even when I can find no corresponding process for it.
Can anyone suggest a further course of action/further testing/further information required which may assist?
This is extremely urgent and any assistance would be greatly appreciated!
Thanks in advance!
Scott
View 5 Replies
View Related
Jun 18, 2015
I am testing some maintenance-task SQL commands such as index rebuild, index reorg, update statistics and DB integrity check on a SQL Server 2014 database. This is a new non-production vendor database (DB size 500 GB, log size 25 GB) which eventually will be created in production. Currently, it is in full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance tasks will take and the space needed in the log file to complete these tasks/commands. I would like to execute these tasks manually (the first time) to gather the duration and space-required information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GBs. I killed the process and truncated and shrunk the log file back down.
1. Do the index rebuild, index reorg, update statistics and db integrity check commands all use the log file?
2. Does index reorg have less impact on the log file than index rebuild?
3. Should a truncate log and shrink log file be performed after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before the maintenance commands?
I have read and understand that shrinking is not good for the database (could lead to more fragmentation and more data file growth when data is added) and I know about rebuilding indexes when fragmentation is GT 30% and reorganizing indexes when fragmentation is GT 5% and LE 30%.
Since this is a non-production database maybe I should set the recovery model to simple, run the maintenance commands and leave the database in simple recovery model unless the vendor needs it in full recovery model for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
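For the measurement run, one approach is to switch to simple recovery for the test, run the commands a table at a time, and watch log usage as you go; a minimal sketch, assuming a database named VendorDB and a table dbo.BigTable (both names are placeholders):
-- Optional for the test only: simple recovery keeps the log from accumulating between checkpoints
ALTER DATABASE VendorDB SET RECOVERY SIMPLE
GO
USE VendorDB
GO
ALTER INDEX ALL ON dbo.BigTable REBUILD        -- builds the index anew; minimally logged under SIMPLE or BULK_LOGGED
-- ALTER INDEX ALL ON dbo.BigTable REORGANIZE  -- always fully logged, but only touches the pages it actually moves
UPDATE STATISTICS dbo.BigTable
GO
DBCC SQLPERF (LOGSPACE)                        -- current log size and percent used for every database
GO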
View 3 Replies
View Related
Mar 5, 2001
I have deleted a database from SQL Enterprise Manager. Anyone know a way to clear that database from my maintenance plan? I do not wish to just uncheck the deleted database or create a new database plan.
Thanks!
View 1 Replies
View Related
Jun 11, 2001
My index maintenance job that was setup through Enterprise manager database maintenance fails with the following notice. It ran great for several weeks then it started failing. Any suggestions!!
sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
View 1 Replies
View Related
Aug 31, 2001
Hi, anyone who is administering a pretty big database, not less than 30 GB with the average number of rows in a table about 2M and more, please share your experience with maintenance of such a DB. Especially I'm interested in:
1) Indexes maintenance (When and how - just regular dbcc, maint. plan or some script to split the job twice and so on.)
2) Remove unused space from db. (not major)
The server works 24x7, and it's a transactional environment. SQL 7 SP3 on a cluster.
I run the SP to rebuild all the indexes; it takes about 2-3 hrs to determine the objects with fragmentation less than 80% and actually rebuild them, and during this process the users experience performance problems (especially for updates/inserts). It looks like I need to change the plan or strategy to do this. Any thoughts appreciated!
Thanks in advance.
Dmitri
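On SQL 7 the usual pattern is DBCC SHOWCONTIG to decide which indexes actually need work, then DBCC DBREINDEX on just those tables, spread over several nights instead of one long window; a minimal sketch (the table name is a placeholder):
-- Measure fragmentation for one table (SQL 7 takes the object id rather than the name)
DECLARE @id int
SET @id = OBJECT_ID('MyBigTable')
DBCC SHOWCONTIG (@id)
GO
-- Rebuild only the tables whose scan density came back poor
DBCC DBREINDEX ('MyBigTable', '', 90)
GO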
View 2 Replies
View Related
Oct 23, 2001
Hello All
I have been given a SQL Server 2000 database to look after which has been set up with a database maintenance plan. The plan is set to back up the complete database and the transaction log. The backups are written to the local disk correctly, but the plan is also set to remove any backup files (both database .BAK and transaction log .TRN) that are over one week old. Complete database .BAK files are written daily and the .TRN files are written every hour. The .BAK files are removed OK automatically but the .TRN files are not - they are just slowly filling the disk. There does not seem to be anything different between the way the main database and the transaction log are set up in the maintenance plan.
I would be very grateful for any ideas
View 1 Replies
View Related
Feb 14, 2000
I am looking for opinions on setting up a database maintenance plan. I want to know if it is safe to trust the wizard and let it set up all of the jobs, or if it is better to write your own procedures to handle backups and maintenance as in 6.5. All suggestions and opinions are welcome. Thanks.
View 2 Replies
View Related
Feb 2, 2000
For SQL Server 7.0, is it necessary to schedule a database maintenance plan on a regular basis? I know it is necessary for SQL Server 6.5.
Thanks.
Su Ge
View 1 Replies
View Related
Aug 31, 2000
I have a strange thing in one of our Maintenance plans.
On the first tab where you check which databases you're including in the plan I have (say my database name is CAT) a 'CAT' and 'cat' database listed and the one chosen is 'cat'. However my database in all other views shows up in all caps. (even when I do an sp_helpdb)
The backups look like they're working, etc., but it just seems weird. If I go to create a new plan it only gives me the one option 'CAT', which is really what's there. I'm new and I'm thinking the database at one time was 'cat' and this is when the maintenance plan was created. Then it was renamed to 'CAT' and there are the two DBs showing in the old maintenance plan.
What would you do? Create a new plan with "CAT" and just get rid of the old one with the weird 'cat' and 'CAT'?
Any other suggestions or ideas on what happened..
ann
View 1 Replies
View Related
Oct 4, 1999
I've created a database maintenance plan to back up a database, but it just isn't happening. Am I missing something? The maintenance plan appears to be created successfully.
responses appreciated.
thanks
Todd Minifie
View 6 Replies
View Related
Feb 20, 2003
SQL7: I have added a Maintenance Plan to backup to 4mm dat tape the master and msdb SQL databases as well as another database relative to our application called WISE. This works fine; however, it appears to always append to the media as opposed to overwriting (preferred). Any help would be appreciated.....
View 1 Replies
View Related
Aug 22, 2005
Hi,
I am going to set up maintenance plans on all our SQL servers (7.0 and 2000). I have found several 'tutorials' on how to do this, but none of them describes the options in detail. Can you guys/gals please help me out? We have a lot of small databases and some medium (1-2GB).
Thanks//Stefan
View 3 Replies
View Related
Feb 1, 2002
hi,
I have a SQL 2000 production box. It is in a 24x7 environment.
We need to maintain data only for 30 days.
Even if I schedule deletion of data on a daily basis, SQL will take the lock as the data we receive is too huge.
Secondly, since the indexes are heavily used, defrag doesn't work for them. The only option I think I am left with is to rebuild the indexes. Though in SQL2K index creation is on the fly, here we are talking of table sizes of 8-10GB.
I suggested to my boss that we bring the box down for a few hours for maintenance, but he insists that since this is 24x7, the box can never be brought down.
I am finding it hard to convince him and do my job.
Any idea on how to rebuild the indexes and how to delete this many records (avg. 50k per day, data of XML type) without creating a block is highly appreciated... or how to convince my boss to give me a window for maintenance... :)
TIA
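On the delete side, batching usually sidesteps lock escalation entirely because no single statement ever holds enough locks to escalate; a minimal SQL 2000-style sketch, assuming a table dbo.XmlData with a ReceivedDate column (both names are placeholders):
-- Delete 30-day-old rows in small batches; each batch commits and releases its locks
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.XmlData
    WHERE ReceivedDate < DATEADD(dd, -30, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0   -- always reset afterwards
GO
Run from a scheduled job during the quietest period, each small delete finishes quickly, so other sessions wait milliseconds rather than minutes.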
View 3 Replies
View Related
Apr 12, 1999
SQLMaint is run once a week for the following a database on SQL 6.5. The following is the information for the database when I see it through the Enterprise Manager:
Data Size 650 MB
Data Space Available 0.00 MB
Log Size 360 MB
Log Space Available 359.99 MB
The following is the syntax built by the DATABASE Maintenance Wizard:
SQLMAINT.EXE -D CATS -CkDB -CkAl -CkTxtAl -CkCat -UpdSts -RebldIdx 100 -Rpt E:MSSQLLOGCATS_maint.rpt
It runs once a week, takes about 40 mins and runs successfully. The last time it ran was 4/11/99 at 2:00 AM.
The result set I get from sp_spaceused is as follows:
database_name database_size unallocated space
CATS 1010.00 MB 273.96 MB
reserved data index_size unused
------------------ ------------------ ------------------ ------------------
753710 KB 280360 KB 426494 KB 46856 KB
What I don’t understand is how come the data space available shows 0 in Enterprise Manager? Shouldn't SQLMAINT, which is run once a week, allow for correct information to be reported?
Could someone please explain.
Thanks
Shashu
View 2 Replies
View Related
May 6, 2001
Hello,
I am just getting started with SQL 2000 Server; we have our database online and are starting to transfer all the data from Access to SQL 2000.
Need to know what type of maintenance I need to do to keep the data clean on SQL.
Any help would be appreciated.
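As a starting point, the usual minimum for a small SQL 2000 database is a nightly full backup, transaction log backups if the database is in full recovery, plus periodic integrity checks and statistics updates; a hedged sketch with placeholder database name and path, normally scheduled through SQL Server Agent jobs or a maintenance plan:
-- Nightly basics for a small database
BACKUP DATABASE MyAppDB TO DISK = 'D:\Backups\MyAppDB.bak' WITH INIT
GO
DBCC CHECKDB (MyAppDB)      -- integrity check
GO
USE MyAppDB
GO
EXEC sp_updatestats          -- keep optimizer statistics current
GO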
View 2 Replies
View Related
Oct 1, 2004
Can you generate script for a maintenance plan?
I know how to script a job, I was wondering about a plan.
If not, whats the best way to record the configuration?
Thanks
Lystra
View 3 Replies
View Related
Dec 14, 2004
I have been having an extremely annoying problem with SQL Server. About 3 to 4 times a day, it starts running some job that takes 30+ minutes to finish. The problem is that it bogs the system down and consumes so many resources that it is almost impossible to run anything while the job is running. Most of the time, this job runs when the server is idle. And, much of the time, it has been idle for at least 30 minutes, and often longer. Also, there is excessive hard drive activity while this task runs.
I am unable to find out what is going on because Enterprise manager times out trying to connect to it, and other tasks remotely connecting either time out or get a network error trying to connect. I have task manager running all the time and it shows task 'sqlservr.exe' hogging the system when this is happening.
Can anyone shed any light on what is happening, why, and how I can stop this?? If it is performing maintenance, is there a way to get it to schedule this for specific times rather than during normal idle system activity?
View 11 Replies
View Related
Feb 15, 2005
We have Veritas' Backup Exec running in our enterprise, and the Veritas install actually installs an MS SQL Server MSDE instance on each server in the enterprise.
It looks like it also sets up a default maintenance plan within each of the MSDE instances.
I guess my question is: can I manage the maintenance plans on these MSDE instances via the SQL Server EM GUI from my desktop? It seems like when I look at the maintenance plans a lot of the options are greyed out or not available. What I am trying to do is modify one of the maintenance plans to have the backups deleted after one week (one of the instances has been running a complete backup on the Backup Exec databases for a year and there is a year's worth of backups on the server), but the option to "remove files older than" is greyed out??????
View 6 Replies
View Related
Nov 3, 2005
Hi. I'm totally new to this whole SQL Server world, so bear with me on this. I tried creating a maintenance plan which consists of a backup of all of my DBs on the server, and whenever I try to run the job it creates, it just hangs there like I have done nothing. I have created the maintenance plan with the administrator account on the server and I have tried to run it like that, but no dice. If any of you can give me any hint on what could be happening, I would very much appreciate it.
Thanks!
View 2 Replies
View Related