We've migrated from SQL Server 7.0 EE to SQL Server 2000 on an 8-processor, 8 GB RAM server running Windows 2000.
Everything seems to be OK, but after reorganizing indexes, when we try to recover free space in the various files using DBCC SHRINKFILE, the shrinks are taking three times as long as they did under SQL Server 7.0.
Shrinking big files (6 GB, recovering 1.5 GB) previously took 3 hours; now it needs 9 hours.
I am having a problem with "dbcc shrinkfile (datafile, emptyfile)". It does not completely empty the data file; it always seems to leave 0.06 MB behind. Any ideas?
Hello, I am wondering whether there are any performance issues with executing DBCC SHRINKFILE against a production database. Everything I have read so far suggests disabling the scheduled transaction log backup task to ensure that no backup runs against the database during the shrink, which makes sense. Does anyone have any more or other information on this subject? Thanks in Advance, Daimon
Hi! We made the mistake of running dbcc shrinkfile at the same time as a database backup. We stopped and restarted the services and ran dbcc shrinkfile again, with no success. The system doesn't shrink the file at all. What can we do?
I have a job that has multiple steps. Step 1 rebuilds the indexes, step 2 truncates the transaction log, and step 3 shrinks the transaction log via the dbcc shrinkfile command. The job had been running for quite a while without any problems until this past weekend. The job ran successfully, but when I looked at the size of the transaction log, it was the same as before the job ran. I have read in BOL that if part of the logical log is in the virtual logs beyond the target_size mark, SQL Server 2000 frees as much space as possible and issues an informational message. My question is: where is this message stored? How can I read it?
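A minimal sketch of the three steps as T-SQL, assuming SQL Server 2000 (DBCC DBREINDEX and TRUNCATE_ONLY are the 2000-era forms; all object names are hypothetical):

-- Step 1: rebuild the indexes on a table (SQL 2000 syntax)
DBCC DBREINDEX ('dbo.MyTable')
-- Step 2: truncate the transaction log without backing it up
BACKUP LOG MyDb WITH TRUNCATE_ONLY
-- Step 3: shrink the log file toward a 500 MB target
DBCC SHRINKFILE (MyDb_log, 500)

As for the informational message: DBCC messages are returned with the command's output, so if the shrink runs as a job step, writing the step output to a file (the Output file option on the job step's Advanced tab) should capture it.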
I used dbcc shrinkfile to shrink the transaction log, but it worked for only one day. When I checked the properties, the transaction log was back to the size I started with. The log was 1586 MB and I set the target size to 1 MB. Any idea why this happened?
How long does it take to execute DBCC SHRINKFILE(DB_FILE, EMPTYFILE) on a 10 GB data file? And if the data files share a logical drive with the tempdb data files, is there a performance issue?
Hi, are there any negative effects of running dbcc shrinkdatabase/shrinkfile on a production box, whether during low-activity or high-activity periods? TIA
I have a log that has grown unchecked for a long time. I truncated it, ran DBCC SHRINKFILE on it, and backed it up. It has not shrunk. I still have a database with an allocated size of 506 MB, of which 435 MB is unused.
I have seen messages that others have posted where they have used DBCC SHRINKFILE without success. It was recommended that they use the sp_force_shrink_log script that is available on the www.sqlpass.org web page.
Has anyone used this script who can tell me how and where to run it? I'm new to this. I tried creating a script using Jobs in Enterprise Manager and got an error that the command was too long.
When I execute DBCC SHRINKFILE, or shrink database files through Enterprise Manager, it works fine, except that when I reboot the server the files return to their original size. Here is the statement I used:
the message "Could not locate file 'TEST' in sysfiles. DBCC execution completed. If DBCC printed error messages, contact your system administrator." raiseed. please what is it? thank a lot
Hi, I have a problem when I try to shrink a file in one of my databases; it always raises the error "A severe error occurred on the current command. The results, if any, should be discarded."
It happens in DBCC ShrinkFile (@name, 0). I don't know why it occurs. Can anybody help?
thanks
-- shrink all files within the database
Declare @curFiles Cursor
Declare @Name sysname

Set @curFiles = Cursor Local Fast_Forward Read_Only For
    Select RTrim(LTrim(name)) From sysfiles

Open @curFiles
Fetch Next From @curFiles Into @Name
While @@Fetch_Status = 0
Begin
    -- Caused a problem because a transaction log backup ran at the same time
    -- that the shrink was occurring, which is what caused this latch problem
    DBCC ShrinkFile (@Name, 0)
    Fetch Next From @curFiles Into @Name
End
Close @curFiles
Deallocate @curFiles
go
arifliminto86
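Given the comment in the script about the concurrent log backup, one defensive option is to skip the shrink while any backup is running. A sketch, assuming SQL Server 2005+ for sys.dm_exec_requests (on SQL 2000, the cmd column of sysprocesses gives the same information):

-- Skip the shrink if any backup is currently executing
If Not Exists (Select 1 From sys.dm_exec_requests Where command Like 'BACKUP%')
    DBCC ShrinkFile (@Name, 0)
Else
    Print 'Backup in progress - skipping shrink of ' + @Name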
Hi, when I use dbcc shrinkfile to shrink the LOG file, the following error occurs:

DBCC SHRINKFILE(2)
Cannot shrink log file 2 (myDB_log) because all logical log files are in use.
(1 row(s) affected)

I have only one transaction log file in my database. Can anyone tell me what the matter is? If my current log file is in use, how can I find out who is using it, stop them, and then do the shrink operation? Thanks. Scarab
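Two commands that may help identify what is keeping the logical log active (both exist in SQL Server 2000):

-- Show the oldest active transaction pinning the log, if any:
DBCC OPENTRAN ('myDB')
-- Show how full each database's log actually is:
DBCC SQLPERF (LOGSPACE)

If DBCC OPENTRAN reports an open transaction, its SPID can be inspected (e.g. with DBCC INPUTBUFFER) before deciding whether to kill it.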
Anybody got any ideas how we might get around the following error?
command used:
dbcc shrinkfile('DB_Data',EMPTYFILE)
Result:
DBCC SHRINKFILE: Page 3:9224674 could not be moved because the partition to which it belonged was dropped.
Msg 2555, Level 16, State 2, Line 1
Cannot move all contents of file "DB_Data" to other places to complete the emptyfile operation.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
The file needs to be split from one 200 GB file into multiple data files in the same filegroup. The command works for a couple of hours, then gives this error; the file is still 100 GB but has 99% empty space.
Hi! I have a filegroup that has too few files. I added new files (corresponding to the number of cores). Now I'm trying to move data to these new files: dbcc shrinkfile ('oldfile', EMPTYFILE)
But it has now been running for nearly a day and the dbcc has not finished yet. It is stressful not to see any progress.
Is it possible to see in some system table how much data has been moved?
The command has been sitting with no visible movement (neither the size of the file nor the CPU/memory usage in Task Manager is changing) for 3 hours already.
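If the instance is SQL Server 2005 or later, shrink operations report their progress in sys.dm_exec_requests, so a sketch like this shows whether the EMPTYFILE move is actually advancing:

-- A file shrink typically appears with command = 'DbccFilesCompact'
SELECT session_id,
       command,
       percent_complete,                              -- rough progress
       estimated_completion_time / 1000 AS est_seconds_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%'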
I have some free space available in the database. I tried dbcc shrinkdatabase and shrinkfile, but I am not getting the disk space back, although the amount of free space in the database sometimes increases.
I'm running full recovery mode and doing log shipping, so changing to simple mode is not an option.
I'm running BACKUP LOG right beforehand, and when I check, it says my log is 99% free (on a 180 GB log).
When I run DBCC LOGINFO('dbname') right before and after, I see a dozen entries scattered all over the file, not just at the starting offsets. The BACKUP LOG doesn't clear out the file completely.
Is there any explanation for this? Even though I'm doing this during off hours, is it possible that someone on the site is writing new log entries in that split second? Why are they spread out, though? If the new entries only went at the beginning, I could still shrink the file to a normal size.
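When the active VLFs sit at the end of the file, the usual workaround is to alternate log backups and shrinks so that the active portion wraps back to the front of the file. A sketch with hypothetical names and sizes:

BACKUP LOG MyDb TO DISK = N'D:\backup\MyDb_log.trn'
DBCC SHRINKFILE (MyDb_log, 2000)   -- target size in MB
DBCC LOGINFO ('MyDb')              -- rows with Status = 2 are the still-active VLFs

Repeating the backup/shrink pair a few times usually lets the shrink reclaim the tail of the file.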
SQL 2000 Server, SP4, a database with a 17 GB log file. It has been backed up, so all transactions should be safely captured; now the physical file size needs to be shrunk, because I need the disk space and I want to speed up the backup process.
http://support.microsoft.com/kb/272318/ tells me what to do, but not where to do it.
So I need to run this code: DBCC SHRINKFILE(pubs_log, 2)
/*
File ID 4 of database ID 13 cannot be shrunk as it is either being shrunk by another process or is empty.
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
*/
The commented lines are what I get in return. There's nothing else being executed against this file. I dropped a few indexes and need the space back from the file. Backups and everything run on it normally. Is there something I'm missing, or is there something wrong with it? I don't use 'AutoShrink'. Also, the file is not empty. Checkdb works fine. DBCC SHRINKDATABASE also runs fine, but it doesn't even recognize these files; it doesn't show them in the results pane when executing the command. Thank you for your help.
On a SQL Server 7.0 database I support, I've been unsuccessful trying to shrink a data file using dbcc shrinkfile (datafile_logical_name, 0). This worked fine for shrinking the log, but of the 4 data files that were created, 2 shrank successfully and 2 remain unchanged. Unless the information on the General tab in Enterprise Manager is incorrect, of the 15000 MB allocated for one of the files, only 700 MB is used.
What is the best way to control transaction log sizes? The logs keep growing, and when I manually truncate them and use the dbcc shrinkfile command, it doesn't shrink them to the specified size. In some cases, our data file is smaller than the log file: the log will have a gig of space allocated but only contain 40 megs of data. Any suggestions on how I can shrink the log file?
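Two settings worth checking, sketched with hypothetical names: cap how far the log may grow, and confirm the recovery model, since a FULL-recovery log only truncates when BACKUP LOG runs:

-- Cap the log file's growth:
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, MAXSIZE = 2048MB)
-- Check the recovery model (FULL requires regular log backups to truncate):
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery')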
I am using SQL Server 2005 SP2 on Windows Server 2003 Standard Edition:
Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
I have a database that's rather large. The log file is 94656 pages, and the data file itself is 94197200 pages. There's only one data file and one log file. The database passes DBCC CHECKDB with no errors.
When I run DBCC SHRINKDATABASE against the database, the command runs for about twenty seconds and then produces this error:
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
I can't find anything interesting in the ERRORLOG around the time I run this command. The error appears whether I use the TRUNCATEONLY option or not.
How do I fix this problem?
And in general, why are the engine errors in SQL Server so confusing and not directly actionable?
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across multiple files in the primary filegroup?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create a database called [FGTest], size 200 MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40 MB of data into TEST.
4) Add another file called temp to PRIMARY, size 200 MB.
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from the FGTest file into the temp file.
6) Add another 2 files called DATA2 and DATA3, both 200 MB.
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 and DATA3.
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
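To verify how the pages actually landed, a quick per-file usage check is possible. A sketch assuming SQL Server 2005+ (on 2000, query sysfiles instead of sys.database_files):

USE FGTest
SELECT name,
       size / 128 AS size_mb,                          -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS'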
Is there any way to invalidate cached query plans? I would rather target a specific query than invalidate all of them.
Also, do you know of any SQL Server setting that will cause a cached query plan to be invalidated even though only one character in the query has changed?
exec sp_executesql N'select cast(5 as int) as DisplaySequence,
    mt.Description + '' '' + ct.Description as Source,
    c.FirstName + '' '' + c.LastName as Name,
    cus.CustomerNumber Code,
    c.companyname as "Company Name",
    a.Address1, a.Address2,
[Code].....
In this query we have seen (on some databases) that simply changing '@CustomerId int',@CustomerId=1065 to '@customerId int',@customerId=1065 fixed a speed problem… we just changed the case on the CustomerId bind parameter. On other servers this has no effect. I'm thinking the server is using an old cached query plan, but I don't know for sure.
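That behavior is consistent with how the plan cache keys sp_executesql batches: the lookup hashes the exact query text, so changing even one character (including case) misses the cache and compiles a fresh plan. To evict just the suspect plan rather than flushing the whole cache, a sketch for SQL Server 2008 and later (the search string is hypothetical):

-- Find the cached plan for the query:
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%DisplaySequence%'

-- Then evict that single plan by its handle:
-- DBCC FREEPROCCACHE (0x06000500...)   -- paste the plan_handle found above

For a stored procedure, EXEC sp_recompile 'dbo.MyProc' marks just that object's plans for recompilation.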
As part of my data warehouse nightly build, I truncate the tables in my target database. As an example, I find it is much quicker to do a bulk API load of 13M records than to do an update/insert of 100K rows. I also drop the indexes before the builds and reindex afterwards, but that's an aside. What I am wondering is: how does this impact the statistics? Do I need to update them? I'm not well versed in statistics, and any information is welcome. Thanks, Rob
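Rebuilding an index refreshes that index's statistics with the equivalent of a full scan, but auto-created column statistics are not rebuilt that way, so after a truncate-and-reload it is worth refreshing them too. A sketch with a hypothetical table name:

-- Refresh all statistics on the reloaded table:
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN
-- Or sample-update statistics for every table in the database:
EXEC sp_updatestats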