Does anyone have any idea how long DBCC SHRINKDATABASE takes to run? I have a 20 GB database with 10.5 GB used. I know hardware is part of the equation, but I don't know the specifics of the machine this particular DB is on, other than that it is an 8-CPU Compaq.
Should I use the Shrink Database maintenance plan once a month on all user DBs? If so, what parameters should I put in the "Shrink database when it grows beyond XXX MB" and "Amount of free space to remain after shrink" settings?
I have a strange problem:
1) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with SQL Server 2000, it works fine.
2) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that is connected to a network with another computer running SQL Server 2000, it works fine.
3) When I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that isn't connected to a network, it stops responding. I have waited for 15 minutes but it still doesn't finish, and the database isn't shrunk either. Why?
4) Also, when I run "DBCC SHRINKDATABASE" (from my application) on a computer with MSDE that is connected to a network with no other computer running SQL Server 2000, it stops responding. Again I waited 15 minutes and it still didn't finish, and the database wasn't shrunk. Why?
Maybe "DBCC SHRINKDATABASE" needs some DLLs that are installed with SQL Server 2000 only and aren't delivered with MSDE? Maybe SQL-DMO and/or SQL-NS?
My application was created in C# in VS.NET 2003. I am using OLE DB.
Is there any way to tell how long this will run for, or how far it has got? I have a large database that has just had most of its data removed. The command has been running for 8 hours, and I have just stopped it to let something else run quickly. Is there any way of telling how much longer it will take?
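If this is SQL Server 2005 or later, shrink operations report their progress in sys.dm_exec_requests, so you can check from another session. A minimal sketch (the LIKE filter assumes the shrink shows up under a Dbcc* command name, which is how shrinks report themselves):

SELECT session_id,
       command,                                    -- e.g. DbccFilesCompact while a shrink runs
       percent_complete,                           -- progress reported by the shrink
       estimated_completion_time / 1000 AS est_seconds_left
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';

On SQL 2000 there is no progress counter; watching physical_io climb in sysprocesses is about the only rough gauge.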
I have a database that appears to be much bigger than it should be. Looking at its properties in Enterprise Manager, it's 13 GB, but it says 7 GB is available as free space. It's as if it grew, data was deleted, and it was never shrunk back down.
So I'm considering running a DBCC SHRINKDATABASE but am worried about the ramifications. Here are my concerns:
- Is the data in the database in any danger? Is the SHRINKDATABASE operation safe?
- Can SHRINKDATABASE be run with the DB still in use? I'll run it after hours, but want to know whether I need to put the database in single-user mode first
- Will SHRINKDATABASE even do what I want?
- I've read that SHRINKDATABASE can fragment indexes. Is this true? If so, how do I avoid it? (See the sketch after this list.)
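On the fragmentation point: a shrink moves pages from the end of the file to the first free slot it finds, ignoring index order, so it does fragment indexes. A common remedy is to rebuild the indexes afterwards; a hedged sketch, where MyDB, dbo.MyTable, and the fill factor are placeholders:

-- shrink, leaving 10% free space in the files
DBCC SHRINKDATABASE (MyDB, 10);

-- rebuild the indexes the shrink shuffled
-- ('' as the index name rebuilds every index on the table)
DBCC DBREINDEX ('dbo.MyTable', '', 90);

Bear in mind the rebuild needs free space to work in and will grow the files again, so don't shrink to zero free space first.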
I scheduled a DBCC CHECKDB command in the SQL scheduled task program at 3:00am. It ran successfully, but since it ran as a scheduled task, I don't know where to find the results. Can anyone help?
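One place to look is the job history in msdb; the message column holds the step output (possibly truncated). A hedged sketch, where 'CheckDB nightly' stands in for your actual job name:

SELECT h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE j.name = 'CheckDB nightly'
ORDER BY h.run_date DESC, h.run_time DESC;

For the complete output, point the job step at an output file (the Advanced tab of the job step, or the @output_file_name parameter of sp_add_jobstep).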
I ran DBCC PAGE (dbname, 1, 136, 3) with trace flag 3604 on, and got:
Server: Msg 8968, Level 16, State 1, Line 1
Table error: DBCC PAGE page (1:136) (object ID 0, index ID 0) is out of the range of this database.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
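Error 8968 generally means the requested page number is past the end of the file. A quick hedged sanity check, run in the database in question (sysfiles reports size in 8 KB pages):

DBCC TRACEON (3604);    -- route DBCC PAGE output to the client

SELECT fileid, name, size AS size_in_pages
FROM sysfiles;

-- page (1:136) only exists if the file with fileid = 1 has size > 136;
-- otherwise DBCC PAGE reports it as out of range, as above.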
One of our databases seems to be looking dodgy, as some scheduled jobs are failing, but DBCC CHECKDB is no use since it has been running for over half an hour without giving me any results, just the spinning globe.
How do I find out what is wrong without resorting to backups?
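One hedged way to get an answer faster is to narrow the check: PHYSICAL_ONLY skips the slower logical checks, NO_INFOMSGS limits output to real errors, and CHECKTABLE lets you test just the table the failing jobs touch ('MyDB' and 'dbo.SuspectTable' below are placeholders):

-- quick pass: physical consistency only, report errors only
DBCC CHECKDB ('MyDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- or check a single suspect table
DBCC CHECKTABLE ('dbo.SuspectTable') WITH NO_INFOMSGS;

A clean PHYSICAL_ONLY pass doesn't rule out logical corruption, but it catches torn pages and allocation damage quickly.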
I have lost the reference, but I read somewhere that when running DBCC DBREINDEX against a clustered index, all the secondary indexes on the table are automatically re-indexed as well. I did a test of this on a small table and it seemed to confirm this. However, now that I've put this into practice, I am finding that it doesn't seem to work this way. I noticed that having run DBCC DBREINDEX against a table's clustered index (DBCC DBREINDEX ('tablename', 'clusteredIndexName', fill_factor)), the secondary indexes were not automatically re-indexed - as borne out by the fact that they remained badly fragmented.
First of all, do the DBAs who read this believe it is correct that DBCC DBREINDEX run against a clustered index will automatically rebuild the secondary indexes too? If so, why wouldn't it work in all cases?
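Whatever the clustered-index side effect does or doesn't do on a given build, the documented way to make sure every index on the table gets rebuilt is to pass an empty string as the index name; a hedged sketch (table name and fill factor are placeholders):

-- '' as the index name rebuilds ALL indexes on the table
DBCC DBREINDEX ('dbo.MyTable', '', 90);

That removes any dependence on how a clustered-index rebuild treats the secondary indexes.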
Normally, after I use DBCC DBREINDEX, I can be sure that Scan Density on a clustered or non-clustered index is very good - e.g. 99% or 100%. However, I have one database where a number of indexes show no improvement in Scan Density after running DBCC DBREINDEX. In one case, a clustered index, I ran it on two days in succession and Scan Density actually got worse! Can anyone give me a reason for this? Can anyone suggest how to fix it?
Dear all, I've used the DBCC SHOWCONTIG command against my table table103, but I don't know how to analyze the fragmentation levels in the results. Please give me some explanations or some good links.
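As rough rules of thumb: Scan Density near 100% and Logical Scan Fragmentation near 0% are healthy, and Avg. Page Density shows how full the pages are. A hedged sketch for getting the numbers in queryable form (column names per the SQL 2000 TABLERESULTS output):

DBCC SHOWCONTIG ('table103') WITH TABLERESULTS, NO_INFOMSGS;

-- In the output, watch:
--   ScanDensity          -> close to 100 means extents are contiguous
--   LogicalFragmentation -> near 0 means pages follow index order
--   AveragePageDensity   -> near 100 means pages are well filled

Values well off those marks are the usual trigger for DBCC DBREINDEX or DBCC INDEXDEFRAG.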
Does anyone know if there is a simple way to get the results of a DBCC INPUTBUFFER() request into a table? I have a process for monitoring activity that will give me the results of sp_who2 in a temp table, and I want to scroll through the active connections and get the input buffers into another table for review:

Insert into #TmpWho
exec sp_who2 'active'

Something like that with the DBCC command. I am using SQL 2000 SP4. Thanks, Tom
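INSERT ... EXEC does accept a DBCC command if you wrap it in a string. A hedged sketch for a single spid (the temp table name and the hard-coded spid are placeholders; column types follow the SQL 2000 Books Online description of DBCC INPUTBUFFER):

-- DBCC INPUTBUFFER returns three columns
CREATE TABLE #InputBuffers (
    EventType  nvarchar(30),
    Parameters int,
    EventInfo  nvarchar(255)
);

DECLARE @spid int, @sql varchar(200);
SET @spid = 55;   -- placeholder: take this from your #TmpWho rows
SET @sql = 'DBCC INPUTBUFFER(' + CAST(@spid AS varchar(10)) + ')';

INSERT INTO #InputBuffers (EventType, Parameters, EventInfo)
EXEC (@sql);

SELECT * FROM #InputBuffers;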
Guys, I need to send a group of people a list of specific processes running on the server; one of the requirements is to send them what's actually being run on the machine. I have the information I need in the sysprocesses table and the results of DBCC INPUTBUFFER. Is there a way to link both result sets?

This is the criteria for the processes that need to be sent out to my users:

SELECT p.spid, p.loginame, p.hostname,
       dbo.fncGetNumLocks(p.spid, DB_ID('EngDataMart')) AS num_locks
FROM master.dbo.sysprocesses p
WHERE p.last_batch < DATEADD(mi, -5, GETDATE())
  AND dbo.fncGetNumLocks(p.spid, DB_ID('EngDataMart')) > 1
ORDER BY p.spid
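There's no way to join to DBCC INPUTBUFFER directly, but you can capture each spid's buffer into a table stamped with the spid and then join on spid. A hedged sketch; #Buffers and the cursor name are invented, and the WHERE clause should be replaced with the criteria above:

CREATE TABLE #Buffers (
    spid       int,
    EventType  nvarchar(30),
    Parameters int,
    EventInfo  nvarchar(255)
);

DECLARE @spid int, @sql varchar(200);

DECLARE spids CURSOR FAST_FORWARD FOR
    SELECT p.spid
    FROM master.dbo.sysprocesses p
    WHERE p.last_batch < DATEADD(mi, -5, GETDATE());   -- reuse your own filter here

OPEN spids;
FETCH NEXT FROM spids INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'DBCC INPUTBUFFER(' + CAST(@spid AS varchar(10)) + ')';
    INSERT INTO #Buffers (EventType, Parameters, EventInfo) EXEC (@sql);
    UPDATE #Buffers SET spid = @spid WHERE spid IS NULL;   -- stamp the new rows
    FETCH NEXT FROM spids INTO @spid;
END
CLOSE spids;
DEALLOCATE spids;

-- now the buffers join straight back to sysprocesses
SELECT p.spid, p.loginame, p.hostname, b.EventInfo
FROM master.dbo.sysprocesses p
JOIN #Buffers b ON b.spid = p.spid;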
I can find many examples of loading DBCC results into tables. They all begin with a CREATE TABLE statement defining the results. My question is: other than trial and error, is there a way to determine what data types will be returned? Sure, you can say that the first element looks like an integer, but is it really a bigint? And that text string could be varchar(max), but will char(2) work?
I'm not looking for an answer for a specific DBCC function, but rather a generic way I can determine the characteristics of any DBCC result set.
I tried
SELECT * INTO #tmp FROM OPENROWSET('SQLOLEDB', 'Server=ray;Trusted_Connection=Yes;Database=Ed_sandbox', 'Set FmtOnly OFF; DBCC loginfo WITH tableresults ')
but I got back
Msg 11527, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement 'DBCC loginfo WITH tableresults' does not support metadata discovery.
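Msg 11527 comes from sp_describe_first_result_set, so this is SQL Server 2012 or later, and metadata discovery simply isn't supported for most DBCC commands; I don't believe there is a fully generic answer. The usual fallback is to build the table from the documented output. For DBCC LOGINFO, a hedged sketch (this column list matches 2012 and later; drop RecoveryUnitId on 2008 and earlier):

CREATE TABLE #loginfo (
    RecoveryUnitId int,          -- added in SQL Server 2012
    FileId         int,
    FileSize       bigint,
    StartOffset    bigint,
    FSeqNo         int,
    [Status]       int,
    Parity         tinyint,
    CreateLSN      numeric(25, 0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO');

SELECT * FROM #loginfo;

If the INSERT fails with a column-count mismatch, the error at least tells you how many columns came back, which shortens the trial and error.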
Can someone please help me interpret this result set below and suggest a way I can speed up my table? What changes should I make?

DBCC SHOWCONTIG scanning 'tblListing' table...
Table: 'tblListing' (1092914965); index ID: 1, database ID: 13
TABLE level scan performed.
- Pages Scanned................................: 97044
- Extents Scanned..............................: 12177
- Extent Switches..............................: 13452
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 90.17% [12131:13453]
- Logical Scan Fragmentation ..................: 0.86%
- Extent Scan Fragmentation ...................: 2.68%
- Avg. Bytes Free per Page.....................: 1415.8
- Avg. Page Density (full).....................: 82.51%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Thank you.
How could I load "DBCC SHOWCONTIG (@TableName) WITH TABLERESULTS" into a table? I created a table and insert into it via the DBCC above, but the problem is that the results arrive with "DBCC execution completed. If DBCC printed error messages, contact your system administrator." How could I mute this output line for each table? Thanks, David

The TABLERESULTS columns returned are: ObjectName, ObjectId, IndexName, IndexId, Level, Pages, Rows, MinimumRecordSize, MaximumRecordSize, AverageRecordSize, ForwardedRecords, Extents, ExtentSwitches, AverageFreeBytes, AveragePageDensity, ScanDensity, BestCount, ActualCount, LogicalFragmentation, ExtentFragmentation. Sample row: sysproxylogin, 37575172, clust, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 100, 0, 0, 0, 0.
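NO_INFOMSGS suppresses the completion message. A hedged sketch, assuming an existing target table dbo.ContigResults whose columns match the list above (the table name and @TableName value are placeholders):

DECLARE @TableName sysname, @sql varchar(300);
SET @TableName = 'dbo.MyTable';   -- placeholder

SET @sql = 'DBCC SHOWCONTIG (''' + @TableName + ''') WITH TABLERESULTS, NO_INFOMSGS';

INSERT INTO dbo.ContigResults
EXEC (@sql);

The message is informational chatter on the connection rather than part of the result set, but NO_INFOMSGS silences it either way.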
I just inherited a database nightmare - and I've got a few questions I hope you folks can help with.
I ran a shrink database command from Enterprise Manager (I should have used DBCC, I realize now), and then my session died. The process is still going - but this is a huge database, and I'd like to know how long it will run. Is there any way to translate physical IO (or anything else) into % complete?
This is being run to open up space on a disk array that is completely full. However, since kicking that off, additional storage has mysteriously been made available. <sigh> I assume that killing the shrink wouldn't involve rollbacks - but might leave the database in an unpredictable state. Is this correct?
Any other recommendations to speed this along? And, yes, I've got all the users off the system.
Hi all, after using the "DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY)" command, my log file stopped growing. It still remains at 1024 KB after 4-5 days of activity. What might be the problem?
I am playing with the DBCC command to check the constraints on a particular table (DBCC CHECKCONSTRAINTS ('myTable') WITH ALL_CONSTRAINTS), and it always gives the following result:
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
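That message on its own is the good outcome: DBCC CHECKCONSTRAINTS only returns rows for violations, so no rows means every constraint holds. A hedged way to see a non-empty result, using a throwaway table:

CREATE TABLE dbo.tmpCheck (
    val int,
    CONSTRAINT CK_tmpCheck_val CHECK (val > 0)
);

-- sneak in a violating row by disabling the constraint first
ALTER TABLE dbo.tmpCheck NOCHECK CONSTRAINT CK_tmpCheck_val;
INSERT INTO dbo.tmpCheck (val) VALUES (-1);
ALTER TABLE dbo.tmpCheck CHECK CONSTRAINT CK_tmpCheck_val;   -- re-enable without validating

DBCC CHECKCONSTRAINTS ('dbo.tmpCheck');   -- now reports the offending row

DROP TABLE dbo.tmpCheck;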
What is the difference in SHRINKDATABASE between SQL 2000 and SQL 2005? I am just doing a plain shrink, no reorg. It runs very fast in 2000 and runs forever in 2005.
2 weeks ago I deleted about 200GB of data from a 300GB+ database. It's a custom DB we want to use to test a few things. We wanted a smaller DB for our testing, and since we didn't have one, we grabbed a production backup, removed sensitive data, and ran a large archiving script on it... Anyway, so far so good, but our data file was still the same size as before.

So we started a shrinkdatabase... it has been running for 2 weeks now! After about 1 week I interrupted the shrinkdatabase process and ran a dbcc shrinkdatabase('DB', truncateonly) just to see if the data file would get reduced a bit or not. It did get reduced by about 20GB. I assume that dbcc shrinkdatabase('DB', 0) had freed up enough pages at the end of the data file that a truncateonly was able to release some space... Anyway, after this we started the dbcc shrinkdatabase('DB', truncateonly) again... still running...

The database was never shrunk before and every index is highly fragmented... Is that why it's taking so long? Am I actually going to have to wait another few weeks before this thing finishes??

Does anyone have experience running shrink on large DBs?
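For what it's worth, the usual hedged advice for very large shrinks is DBCC SHRINKFILE in small steps rather than one big SHRINKDATABASE, so each step commits visible progress and an interruption loses little work (the file name and targets below are placeholders):

-- find the data file's logical name and current size (size is in 8 KB pages)
SELECT name, size / 128 AS size_mb FROM sysfiles;

-- shrink in steps, lowering the target (in MB) each time
DBCC SHRINKFILE (N'DB_Data', 295000);
DBCC SHRINKFILE (N'DB_Data', 290000);
-- ...and so on

Each call only moves the pages needed to reach its own target, so stopping between steps is safe.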
I am working with a large database that has its tables stored on a secondary filegroup. I'm trying to shrink the size of the files but I can't seem to get the system to free up the unused space. I've tried shrinkdatabase and shrinkfile both with and without the truncateonly option. Has anyone else had this problem? Is there a workaround? Any help would be greatly appreciated.
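It may be worth confirming how much of each file is genuinely unused before blaming the shrink; a hedged check, run in that database:

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sysfiles;

If free_mb is near zero there is simply nothing to release; if it is large, note that SHRINKFILE targets are specified per file in MB, and pages that can't be relocated (text/image data is a common culprit) can stop a file from shrinking below a certain point.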
I have a maintenance plan on DBABC that backs up the log to a .trn file every 90 minutes (daily).

In order to keep the log file small, I also set up a T-SQL job to run at 4:15 am that runs BACKUP LOG DBABC WITH TRUNCATE_ONLY and then DBCC SHRINKDATABASE (DBABC, 10).

It looks like "BACKUP LOG DBABC WITH TRUNCATE_ONLY" conflicts with the every-90-minutes transaction log backup.

Question: can I keep the 90-minute transaction log backups but still shrink the log file? The log file is growing very fast.

Or do I have to use differential backups instead of transaction log backups?
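The conflict is expected: TRUNCATE_ONLY breaks the log backup chain, so the scheduled .trn backups can't be restored past that point (and, depending on setup, may fail outright) until a full or differential backup restarts the chain. A hedged alternative that keeps the chain intact is to shrink just the log file right after an ordinary log backup (the logical log file name and backup path are placeholders):

-- an ordinary log backup frees space inside the log...
BACKUP LOG DBABC TO DISK = 'E:\Backups\DBABC_0415.trn';

-- ...then release the empty space at the end of the file (target in MB)
DBCC SHRINKFILE (DBABC_Log, 100);

If the log regrows to the same size every day, though, repeated shrinking mostly buys file-growth overhead; sizing the log for its actual working size is usually the better fix.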