Hi,
I have several tables in production whose contents are renewed completely within one week. So every day we delete ~15% of the records and then insert 15% new ones. After a few days, performance drops.
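A first diagnostic step for this kind of delete/insert churn is usually to measure fragmentation with DBCC SHOWCONTIG. A minimal sketch, assuming SQL Server 2000; MyDatabase and dbo.MyTable are hypothetical stand-ins for the affected database and one of the affected tables:

-- Scan fragmentation for every index on the table (names are hypothetical)
USE MyDatabase
GO
DBCC SHOWCONTIG ('dbo.MyTable') WITH ALL_INDEXES
GO

Watch Scan Density and Logical Scan Fragmentation; a density well below 100% a few days after the delete/insert cycle would explain the slowdown.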
Hi guys. I have an application here developed by a third-party software house. In the past, for some reason, the database would fail daily. The software house recommended that we run DBCC DBREINDEX on all tables within the databases twice daily. This was scheduled and is now running, and the database no longer fails. The fix works, but I don't understand why reindexing twice daily would solve the problem. It seems excessive to have to reindex every user table twice a day.
I tried to run DBCC DBREINDEX on all user tables in a database. There are no clustered indexes, but there are multiple non-clustered indexes on each table.
The output file from DBCC DBREINDEX shows that it should have worked. But when I run DBCC SHOWCONTIG, the scan density of the indexes that were in bad shape did not improve.
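One thing worth checking: with no clustered index, the table itself is a heap, and DBCC DBREINDEX rebuilds only the indexes, never the heap. DBCC SHOWCONTIG run with just a table name reports on the base table (index ID 0 for a heap), which a reindex cannot improve. A minimal sketch, with hypothetical names, for checking a specific non-clustered index instead:

-- Check one non-clustered index by name rather than the heap
-- (dbo.MyTable and IX_MyTable_Col1 are hypothetical)
DBCC SHOWCONTIG ('dbo.MyTable', 'IX_MyTable_Col1')
GO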
Hello everybody.
1. I have a 28 GB database with 4 tables above 4 GB each, all with very bad fragmentation; each table has between 3 and 5 indexes.
2. The database is set to full recovery, and I use custom log shipping to restore the db on a standby server every 15 minutes.
I tried to run DBCC INDEXDEFRAG on one index of a 4 GB table. The following took place:
1. It took 4 hours for DBCC INDEXDEFRAG to complete.
2. Log shipping failed.
3. A 2 GB log file was generated after DBCC INDEXDEFRAG completed.
I tried dropping and recreating the clustered index on the table; it creates the same problem: the log grows and log shipping fails.
(If log shipping fails and the standby database gets suspended, it will take 6-8 hours to restore it from backup and apply all the logs.)
So my question is: what would be the best way to rebuild/reindex/defragment a 28 GB database when it is set to full recovery with log shipping?
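One approach that keeps log shipping alive (a sketch, not a definitive answer): since DBCC INDEXDEFRAG works in many small transactions rather than one large one, the log can be kept manageable by defragmenting one index at a time and backing up the log between steps. All names here are hypothetical:

-- Defragment one index, then back up the log before moving to the next
DBCC INDEXDEFRAG (MyDatabase, 'dbo.BigTable1', 'IX_BigTable1_Col1')
GO
BACKUP LOG MyDatabase TO DISK = 'E:\LogShip\MyDatabase_log.trn'
GO
-- ...repeat per index, so each 15-minute log backup stays a manageable size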
Hi guys, I need help with this question. In SQL 2000, can I run UPDATE, INSERT, or DELETE queries while the indexes of that table are being rebuilt? Will I get blocked by the DBCC DBREINDEX process?
I have a process that runs each day and imports about 550K records into a database. My question is: it appears I have to reindex the database after each import, otherwise the SPs that I have written will just run and run and run. After the reindex job, things run within 60 seconds. I am just looking for some insight. I understand what a reindex does, but I don't know why I have to reindex every time.
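If the daily import skews the data distribution, it may be the statistics refresh that a reindex performs as a side effect, rather than the defragmentation itself, that fixes the stored procedures. A cheaper experiment, assuming a hypothetical import table dbo.ImportTarget:

-- Refresh statistics only, no index rebuild (table name is hypothetical)
UPDATE STATISTICS dbo.ImportTarget WITH FULLSCAN
GO
-- Or refresh statistics on all tables in the database in one pass
EXEC sp_updatestats
GO

If the procedures run fast after this alone, the nightly job could be reduced from a full reindex to a statistics update.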
Hi experts, I would like to ask about this error that occurs when I execute my reindexing script.
Here is my script:
USE mydatabase
DBCC DBREINDEX('outpatient', '', 70)
GO
Then this error message appears. I researched the error, and the site I found says that the table is corrupted and that I need to restore a better backup..?
The statement has been terminated.
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: torn page (expected signature: 0x55555555; actual signature: 0x55555545). It occurred during a read of page (1:353409) in database ID 10 at offset 0x000000ac902000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Bizbox_HS7.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
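The message itself points at the next step: error 824 is a torn-page I/O error, not an indexing problem, so the usual course is to run a full consistency check and assess the damage before deciding between restoring from backup and repairing. A minimal sketch:

-- Run a full consistency check and report every error found
DBCC CHECKDB ('mydatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS
GO

The CHECKDB output states the minimum repair level; restoring a known-good backup is generally preferable to REPAIR_ALLOW_DATA_LOSS.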
I am currently running the Back Office Resource Kit log shipping option for a database on a SQL Server 7 installation. As part of the ongoing maintenance work the application vendor asks us to perform, I need to run DBCC DBREINDEX on most of the tables in the database. Currently this is done by stopping the log shipping routine, running the reindex script, taking a full backup, restoring that backup to the secondary server, and then restarting the log shipping scripts. This is a very time-consuming task that has to be performed at unsociable hours.
Has anybody got an opinion as to whether the reindex would work at the same time as the log shipping scripts, or do I have to continue as at present?
I am maintaining a large table with millions of rows that has two non-clustered indexes and data that changes frequently, so I need to keep the indexes fresh. UPDATE STATISTICS runs much quicker than a reindex. What is the appropriate situation for each, and why? Thanks in advance.
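For comparison, the two operations in question, with hypothetical names. UPDATE STATISTICS refreshes the optimizer's distribution data without touching page layout, while DBCC DBREINDEX physically rebuilds the index pages (and refreshes statistics as a side effect):

-- Cheap and quick: refresh distribution statistics only
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN
GO
-- Expensive: rebuild all indexes on the table at fillfactor 90 (also updates statistics)
DBCC DBREINDEX ('dbo.BigTable', '', 90)
GO

Roughly: if query plans go stale because the data changes, statistics are usually enough; if scans slow down because the pages fragment, a rebuild (or INDEXDEFRAG) is needed.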
I first ran DBCC INDEXDEFRAG on a table with 1.5 billion rows. Logical fragmentation was at 95%; it went down to 3%, with no real effect on disk.
DBCC DBREINDEX had previously been bombing undetected.
Now I've run a reindex on this table: a reindex job with fillfactor = 100 ran in 3:05. Free disk went from ~150 GB before the operation to 49 GB; File4 went from 347 GB to 504 GB.
Why has so much free disk been consumed by this operation and not released?
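A rebuild needs workspace: DBCC DBREINDEX builds a complete new copy of each index before dropping the old one, so the file grows by roughly the size of the largest index being rebuilt, and the pages freed afterwards stay allocated to the file rather than being returned to the OS. To see where the space went (database and table names hypothetical):

-- Reserved vs. actually used space for the table
EXEC sp_spaceused 'dbo.BigTable'
GO
-- Per-file used vs. allocated extents in the current database
DBCC SHOWFILESTATS
GO

DBCC SHRINKFILE can return the space to the disk, though shrinking tends to re-fragment the indexes just rebuilt.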
I have a production 60 GB database set to full recovery, and every 15 minutes I log ship to a standby server.
During production hours there are no problems, but at night when I run DBCC DBREINDEX the log grows to 22 GB, and because of this I have a problem sending it over the network to the standby server.
I tried changing the recovery model to BULK_LOGGED, but there is no difference in the log backup size.
Hi folks, SQL Server 2000 SP3 on Windows 2000. I have a database on which I ran the command:

dbcc dbreindex ('tablename')
go

for all tables in the database. Then I compared the DBCC SHOWCONTIG WITH ALL_INDEXES output from before and after the reindex, and on the largest table in the database I found this. First output is prior to the reindex:

Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 184867
- Extents Scanned..............................: 23203
- Extent Switches..............................: 23324
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.07% [23109:23325]
- Logical Scan Fragmentation ..................: 11.13%
- Extent Scan Fragmentation ...................: 35.46%
- Avg. Bytes Free per Page.....................: 60.0
- Avg. Page Density (full).....................: 99.26%

Second output is from after the reindex:

DBCC SHOWCONTIG scanning 'PlannedTransferArchive' table...
Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 303177
- Extents Scanned..............................: 37964
- Extent Switches..............................: 42579
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 89.00% [37898:42580]
- Logical Scan Fragmentation ..................: 43.19%
- Extent Scan Fragmentation ...................: 24.78%
- Avg. Bytes Free per Page.....................: 75.1
- Avg. Page Density (full).....................: 99.07%

Following are my concerns. These numbers are all higher after the reindex than before: pages scanned, extent switches, logical scan fragmentation, avg. bytes free per page, avg. page density. Scan density is lower after the reindex than before. It seems to me that the numbers that are higher after the reindex should be lower, and the numbers that are lower should be higher. I didn't specify the fill factor in the DBCC DBREINDEX command, so it should have used the default fill factor. The fill factor has never been changed on this machine.

Am I missing something?

Thanks,
Raziq.
Does running DBCC DBREINDEX update the space-allocated columns in sysindexes? I understand that running DBCC UPDATEUSAGE updates the space-allocated columns in the sysindexes table. But I cannot find any documentation that indicates whether dynamically rebuilding the indexes, as opposed to dropping and recreating them, updates the space-allocated columns in sysindexes.
Any information would be helpful. Thanks.
Gail Wade
Database Administration
Raymond James Financial
gwade@it.rjf.com
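One way to settle it empirically, with hypothetical names: snapshot the space columns in sysindexes, rebuild, and compare; DBCC UPDATEUSAGE remains the documented way to force a correction:

-- Snapshot the space-allocated columns before and after the rebuild
SELECT name, indid, used, reserved, dpages
FROM sysindexes
WHERE id = OBJECT_ID('dbo.MyTable')
GO
-- Rebuild all indexes on the table
DBCC DBREINDEX ('dbo.MyTable')
GO
-- Force the sysindexes counters to be corrected if they have drifted
DBCC UPDATEUSAGE (0, 'dbo.MyTable')
GO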
I have a standard reorganise/reindex job running against a 32 GB database on SQL Server 2000. When I try to run the job, it fails and returns error 1105 ('PRIMARY' filegroup is full). What's confusing me is that I have 53 GB free on the drive on which my PRIMARY filegroup sits.
Has anyone else come across this problem when trying to set up a regular reindex job?
(More detail: the maintenance plan includes only the reorganisation/reindex job; no other jobs, including backing up the DB, are included. The DB in question is the only DB on the server: it's a test server.)
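Error 1105 is about the data file, not the drive: if the PRIMARY file has autogrow disabled or a maxsize cap, the reindex's workspace demand can exhaust the filegroup even with 53 GB free on disk. A quick check of the file settings, run in the affected database:

-- growth = 0 means autogrow is off; maxsize = -1 means unrestricted growth
SELECT name, filename, size, maxsize, growth
FROM sysfiles
GO

Keep in mind a rebuild needs free space in the filegroup roughly equal to the largest index it rebuilds.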
I have a weekly Maintenance Plan reindex job that has failed because of a deadlock. My question seems simple enough, and I'm ashamed to say I ought to know the answer, but here goes: does the rest of a given job continue after such a failure occurs (this one was maybe 3/4 of the way through the log)?
Does anyone know what the commands would be? I am trying to create a job that puts a DB in simple recovery mode, then launches a reorg and reindex, then sets it back to full when complete. This way I can avoid large transaction logs being created.
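A sketch of the commands, with hypothetical database and table names. One caveat worth noting: switching out of FULL recovery breaks the log backup chain, so a full (or differential) backup is needed afterwards before log backups can resume:

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
-- Reindex step; a maintenance plan or a script over all tables goes here
DBCC DBREINDEX ('dbo.MyTable')
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
-- Re-establish the log backup chain
BACKUP DATABASE MyDatabase TO DISK = 'E:\Backup\MyDatabase.bak'
GO

An alternative that keeps the chain intact is BULK_LOGGED, under which index rebuilds are minimally logged, though log backups taken during that window remain large.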
I inherited a system which has an index on a set of columns that allows more than 900 bytes of data in the key. We know one of the fields can be shortened to bring the potential key size below 900 bytes.
The problem is that the table is about 120m rows, and the index currently on that column is seeked (sought?) about 2.5m times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column.
On a smaller, less-used table I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values that would actually end up truncated. Is this right?
2) I did a few tests on some other smaller (2m+ row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which would simply not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column update or an index build. What scenarios would or would not be blocked? (A sketch of the sequence follows below.)
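For reference, the basic sequence being considered, with hypothetical names, assuming SQL Server 2005 or later. The rebuild can run with ONLINE = ON (Enterprise Edition) so reads and most DML continue during that step, but the ALTER COLUMN itself takes a schema-modification lock that blocks all access while it validates the existing data:

-- 1) Drop the index that covers the wide column
DROP INDEX IX_BigTable_WideCol ON dbo.BigTable
GO
-- 2) Shrink the column; Sch-M lock blocks all access for the duration
--    (match the column's existing NULL/NOT NULL setting here)
ALTER TABLE dbo.BigTable ALTER COLUMN WideCol varchar(400) NOT NULL
GO
-- 3) Rebuild the index; ONLINE = ON requires Enterprise Edition
CREATE INDEX IX_BigTable_WideCol ON dbo.BigTable (WideCol) WITH (ONLINE = ON)
GO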