I am currently running the BackOffice Resource Kit log shipping option for a database on a SQL 7 installation. As part of the ongoing maintenance work the application vendor asks us to perform, I need to run DBCC DBREINDEX on most of the tables in the database. Currently this is done by stopping the log shipping routine, running the reindex script, taking a full backup, restoring that backup to the secondary server, and then restarting the log shipping scripts. This is a very time-consuming task that has to be performed at unsociable hours.
Has anybody got an opinion as to whether the reindex would work while the log shipping scripts are running, or do I have to continue as at present?
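For context, the reindex step is essentially a loop of this shape (a simplified sketch; the real script covers most of the user tables and the options we pass differ):

DECLARE @table sysname
DECLARE tbl CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U'
OPEN tbl
FETCH NEXT FROM tbl INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('DBCC DBREINDEX (''' + @table + ''')')   -- rebuild every index on this table
    FETCH NEXT FROM tbl INTO @table
END
CLOSE tbl
DEALLOCATE tbl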
Hi guys. I have an application here developed by a third-party software house. In the past, for some reason, the database would fail daily. The software house recommended that we run DBCC DBREINDEX on all tables within the databases twice daily. This was scheduled and is now running, and the database no longer fails. The fix works, but I don't understand why. Why would reindexing twice daily solve the problem? It seems excessive to have to reindex every user table twice a day.
I tried to run DBCC DBREINDEX on all user tables in a database. There are no clustered indexes, but there are multiple non-clustered indexes on each table.
The output file from DBCC DBREINDEX shows that it should have worked, but when I run DBCC SHOWCONTIG, the scan density of the indexes that were in bad shape has not improved.
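For reference, this is the sort of check involved (SQL 2000 syntax; the table and index names are placeholders). The default form of DBCC SHOWCONTIG on a table with no clustered index reports the heap itself, which DBCC DBREINDEX does not touch, so the individual non-clustered indexes have to be named or listed explicitly:

DBCC SHOWCONTIG ('MyHeapTable', 'IX_MyHeapTable_Col1')    -- one non-clustered index
DBCC SHOWCONTIG ('MyHeapTable') WITH ALL_INDEXES           -- every index on the table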
Hi Folks, SQL Server 2000 SP3 on Windows 2000. I have a database on which I ran the command:

dbcc dbreindex ('tablename')
go

for all tables in the database. Then I compared the DBCC SHOWCONTIG WITH ALL_INDEXES output from before and after the reindex, and on the largest table in the database I found this. The first output is from before the reindex:

Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 184867
- Extents Scanned..............................: 23203
- Extent Switches..............................: 23324
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.07% [23109:23325]
- Logical Scan Fragmentation ..................: 11.13%
- Extent Scan Fragmentation ...................: 35.46%
- Avg. Bytes Free per Page.....................: 60.0
- Avg. Page Density (full).....................: 99.26%

The second output is from after the reindex:

DBCC SHOWCONTIG scanning 'PlannedTransferArchive' table...
Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 303177
- Extents Scanned..............................: 37964
- Extent Switches..............................: 42579
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 89.00% [37898:42580]
- Logical Scan Fragmentation ..................: 43.19%
- Extent Scan Fragmentation ...................: 24.78%
- Avg. Bytes Free per Page.....................: 75.1
- Avg. Page Density (full).....................: 99.07%

My concern is that the following numbers are all higher after the reindex than before it: pages scanned, extent switches, logical scan fragmentation and avg. bytes free per page, while scan density is lower after the reindex than before. It seems to me that the numbers that are higher after the reindex should be lower, and the numbers that are lower should be higher. I didn't specify the fill factor in the DBCC DBREINDEX command, so it should have used the default fill factor, and the fill factor has never been changed on this machine.

Am I missing something?

Thanks,
Raziq.
Does running DBCC DBREINDEX update the space-allocated columns in sysindexes? I understand that running DBCC UPDATEUSAGE updates the space-allocated columns in the sysindexes table, but I cannot find any documentation indicating whether dynamically rebuilding the indexes, as opposed to dropping and recreating them, updates those columns.
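To illustrate what is meant by the space-allocated columns (the object names here are just examples):

-- corrects the space-allocated columns in sysindexes
DBCC UPDATEUSAGE ('MyDb')
-- the columns in question
SELECT name, indid, rowcnt, reserved, used, dpages
FROM sysindexes
WHERE id = OBJECT_ID('dbo.MyTable')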
Any information would be helpful. Thanks.
Gail Wade
Database Administration
Raymond James Financial
gwade@it.rjf.com
The syntax of the command is: DBCC WRITEPAGE ({ dbid | 'dbname' }, fileid, pageid, offset, length, data)
All I know is that this command can change the contents of a data page: it replaces data in the page, and it may cause an exception when I scan the table after executing DBCC WRITEPAGE. The error message is: Could not continue scan with NOLOCK due to data movement.
I want to know why SQL Server provides the DBCC WRITEPAGE command and how to use it. Could anyone give me a proper introduction to the DBCC WRITEPAGE command?
I can't find any information about it on the internet.
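Purely to illustrate the shape of a call based on the syntax quoted above (every value here is a meaningless placeholder; the command is undocumented and unsupported and should never be run against a database you care about):

-- overwrites 2 bytes at offset 4000 on page 153 of file 1 in database id 10
DBCC WRITEPAGE (10, 1, 153, 4000, 2, 0x0000)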
Every time I run a DBCC command like DBCC SHOW_STATISTICS or DBCC INDEXFRAG, the query executes and runs on forever. When I do an sp_who2, I notice that my DBCC SPID goes into sleep mode with high CPUTime and high DiskIO. I have also noticed that my server has no other activity: no inserts, no updates and no SELECT statements running from other users. Why can't I run a DBCC command?
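A sketch of the kind of check that shows what the DBCC spid is actually waiting on (the spid value below is a placeholder for whatever sp_who2 reports for the DBCC session):

SELECT spid, status, blocked, waittype, lastwaittype, waittime, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 75   -- placeholder spid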
Your thoughts and suggestions are highly appreciated! Thanks in advance
Lana
For reasons beyond the scope of my question, is there a way to run this command within a stored procedure from a low-privileged user login? I can grant the login db_ddladmin and the proc runs, but I'd rather not give out that level of permission to what is basically a glorified web-access login.
I am using SQL Server 2005 SP2 on Windows Server 2003 Standard Edition:
Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
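For what it's worth, the kind of wrapper I have in mind looks roughly like this (procedure, table and user names are made up, and I'm assuming for illustration that the command is something like DBCC DBREINDEX; it relies on the SQL Server 2005 EXECUTE AS clause instead of granting db_ddladmin to the login):

CREATE PROCEDURE dbo.usp_ReindexMyTable
WITH EXECUTE AS OWNER           -- runs in the security context of the procedure owner
AS
BEGIN
    DBCC DBREINDEX ('dbo.MyTable');
END
GO
GRANT EXECUTE ON dbo.usp_ReindexMyTable TO WebUser;   -- the low-privileged web login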
I have a database that's rather large. The log file is 94656 pages, and the data file itself is 94197200 pages. There's only one data file and one log file. The database passes DBCC CHECKDB with no errors.
When I run DBCC SHRINKDATABASE against the database, the command runs for about twenty seconds then produces this error:
Msg 0, Level 11, State 0, Line 0 A severe error occurred on the current command. The results, if any, should be discarded.
I can't find anything interesting in the ERRORLOG around the time that I run this command. The error appears whether or not I use the TRUNCATEONLY option.
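The forms of the command described above look like this (database and file names are placeholders):

DBCC SHRINKDATABASE (N'MyBigDb');
DBCC SHRINKDATABASE (N'MyBigDb', TRUNCATEONLY);
-- an alternative form that targets a single file at a time
DBCC SHRINKFILE (N'MyBigDb_Data', TRUNCATEONLY);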
How do I fix this problem?
And in general, why are the engine errors in SQL Server so confusing and not directly actionable?
I am playing with the DBCC command to check the constraints on a particular table (DBCC CHECKCONSTRAINTS ('myTable') WITH ALL_CONSTRAINTS), and it always gives the following result:
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
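To show what a violation actually looks like, here is a throwaway example with a scratch table and a deliberately disabled check constraint (all names are made up); when no rows violate any constraint, only the completion message above is returned:

CREATE TABLE dbo.Scratch (col1 int CONSTRAINT CK_Scratch_col1 CHECK (col1 > 0));
ALTER TABLE dbo.Scratch NOCHECK CONSTRAINT CK_Scratch_col1;   -- disable so a bad row can get in
INSERT INTO dbo.Scratch (col1) VALUES (-1);
DBCC CHECKCONSTRAINTS ('dbo.Scratch') WITH ALL_CONSTRAINTS;   -- now returns a row describing the violation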
Hello everybody. 1. I have a 28 GB database with 4 tables above 4 GB each with very bad fragmentation; each table has between 3 and 5 indexes. 2. The database is set to full recovery, and I use custom log shipping to restore the db on a standby server every 15 minutes.
I tried to run DBCC INDEXDEFRAG on one index on a 4 GB table. The following took place: 1. It took 4 hours to complete the DBCC INDEXDEFRAG. 2. Log shipping failed. 3. A 2 GB log file was generated after the DBCC INDEXDEFRAG completed.
I tried dropping and recreating the clustered index on the table, and it creates the same problem: the log grows and log shipping fails.
(If log shipping fails and the standby database gets suspended, it will take 6-8 hours to restore it from backup and apply all the logs.)
So my question is
What would be the best way to rebuild, reindex and defragment a 28 GB database when it is set to full recovery with log shipping?
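The kind of approach I'm wondering about is one index at a time, with pauses so the 15-minute log backups can keep the log and the shipping queue small (database, table and index names are placeholders):

DBCC INDEXDEFRAG (MyDb, BigTable1, IX_BigTable1_Col1);
WAITFOR DELAY '00:20:00';   -- let at least one scheduled log backup run before the next index
DBCC INDEXDEFRAG (MyDb, BigTable1, IX_BigTable1_Col2);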
Hi guys, I need help with this question. In SQL 2000, can I run UPDATE, INSERT or DELETE queries while the indexes on that table are being rebuilt? Will I get blocked by the DBCC DBREINDEX process?
I have a process that runs each day and imports about 550K records into a database. My question is that it appears I have to reindex the database after each import, otherwise the stored procedures I have written will just run and run and run; after the reindex job, things run within 60 seconds. I am just looking for some insight: I understand why a reindex is done, but I don't understand why I have to reindex every time.
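One thing worth testing (the table name below is a placeholder) is whether refreshing statistics alone after the import is enough, since a full rebuild also refreshes statistics as a side effect:

UPDATE STATISTICS dbo.ImportTarget WITH FULLSCAN;
-- or, across the whole database:
EXEC sp_updatestats;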
Hi experts, I would like to ask about this error, which occurs when executing my reindexing script.
Here is my script:
USE mydatabase
DBCC DBREINDEX ('outpatient', '', 70)
GO
Then this error message appears. I researched the error, and the sites I found say the table is corrupted and that I need to restore from a good backup. Is that right?
The statement has been terminated.
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: torn page (expected signature: 0x55555555; actual signature: 0x55555545). It occurred during a read of page (1:353409) in database ID 10 at offset 0x000000ac902000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Bizbox_HS7.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
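The consistency check the message asks for would be along these lines, run against the database named in the script above:

DBCC CHECKDB ('mydatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;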
Hi, I have several tables in production whose contents are renewed totally in one week, so every day we delete ~15% of the records and then insert 15% new ones. After a few days the performance drops:

TABLE level scan performed.
- Pages Scanned................................: 169617
- Extents Scanned..............................: 21630
- Extent Switches..............................: 153827
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 13.78% [21203:153828]
- Logical Scan Fragmentation ..................: 45.06%
- Extent Scan Fragmentation ...................: 52.66%
- Avg. Bytes Free per Page.....................: 5042.5
- Avg. Page Density (full).....................: 37.70%

I can't schedule a DBCC DBREINDEX every day because of concurrent access (it locks the tables for too long); in practice I can only schedule it on Sunday. What else can I do? I could adjust the fill factor, but how do I find a good value if I don't want to waste space? The total size of the database is ~150GB. Thx
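For reference, two forms that come up in this situation (table and index names are placeholders):

-- online defragmentation: takes only short locks, so it could run on weekdays
DBCC INDEXDEFRAG (0, MyBigTable, PK_MyBigTable);
-- full rebuild on Sunday with a lower fill factor to leave room for the weekly churn
DBCC DBREINDEX ('MyBigTable', '', 90);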
I am maintaining a large table with millions of rows that has two non-clustered indexes and data that changes frequently, and I need to keep the indexes fresh. UPDATE STATISTICS runs much quicker than a reindex. What is the appropriate situation for each, and why? Thanks in advance.
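In command terms, the two operations being compared are (table and index names are placeholders):

-- statistics only: fast and online, but leaves physical fragmentation in place
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
-- full rebuild: removes fragmentation and refreshes the index statistics, but locks the table while it runs
DBCC DBREINDEX ('dbo.BigTable', 'IX_BigTable_A');
DBCC DBREINDEX ('dbo.BigTable', 'IX_BigTable_B');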
I first ran DBCC INDEXDEFRAG on a table with 1.5 billion rows. Logical fragmentation was at 95%; it went down to 3%, with no real effect on disk usage.
DBCC DBREINDEX had previously been bombing out undetected.
Now I've run a reindex on this table:
Reindex job with fill factor = 100 ran in 3:05
Free disk went from ~150GB before the operation to 49GB
File4 went from 347GB to 504GB
Why has so much free disk been consumed by this operation and not released?
I have a production 60GB database set to full recovery, and every 15 minutes I log ship to a standby server.
During production hours there are no problems, but at night when I run DBCC DBREINDEX the log grows to 22GB, and because of this I have a problem sending it over the network to the standby server.
I tried changing the recovery model to Bulk_Logged, but there is no difference in the log backup size.
I followed the advice of Paul Randal, but I'm stumped, as I am not able to determine what the corruption issues are. This is SQL 2000, and the database is a Solomon database that was recently upgraded to 6.5. The error I get when running DBCC CHECKDB is as follows:
Server: Msg 8966, Level 16, State 1, Line 1 Could not read and latch page (1:18645) with latch type SH. sysindexes failed. DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Now, the bad news. I am a bit of a novice and have picked this up from someone who left my company. It appears the latch error has been around for some time and only reared up when I instituted a new backup system that runs a DBCC check before backing up. I don't think I have any clean backups.
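For reference, the page named in the error can be examined like this (the database name is a placeholder; DBCC PAGE is undocumented and only displays the page contents):

DBCC TRACEON (3604);                   -- send DBCC PAGE output to the client
DBCC PAGE ('SolomonDb', 1, 18645, 3);  -- file 1, page 18645, full detail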
I have a standard reorganise/reindex job running against a 32GB database on SQL Server 2000. When the job runs it fails and returns error 1105 ('PRIMARY' filegroup is full). What's confusing me is that I have 53GB free on the drive on which my PRIMARY filegroup sits.
Has anyone else come across this problem when trying to set up a regular reindex job?
(More detail: the maintenance plan only includes the reorganisation/reindex job; no other jobs, including backing up the DB, are included. The DB in question is the only DB on the server: it's a test server.)
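One thing worth checking is whether the PRIMARY data file is allowed to grow at all, since error 1105 is about the filegroup rather than the drive (run this inside the database in question):

SELECT name, size, maxsize, growth, status
FROM sysfiles;
-- maxsize: -1 = unlimited, 0 = no further growth allowed, otherwise a cap in 8KB pages
-- growth:  0 = autogrow disabled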
I have a weekly maintenance plan reindex job that has failed because of a deadlock. My question seems simple enough, and I'm ashamed to say I ought to know the answer, but here goes: does the rest of a given job continue after such a failure occurs (this one was maybe 3/4 of the way through the log)?
Does anyone know what the commands would be? I am trying to create a job that puts a DB into simple recovery mode, then launches a reorg and reindex, then sets it back to full when it is complete. This way I can eliminate the large transaction logs being created.
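What I have in mind is roughly this (database, table and backup path are placeholders; switching out of FULL breaks the log backup chain, so a full backup is taken at the end):

ALTER DATABASE MyDb SET RECOVERY SIMPLE;
DBCC DBREINDEX ('dbo.MyTable');          -- the reorg / reindex step(s) go here
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = 'E:\Backups\MyDb.bak';   -- restart the log backup chain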
I inherited a system which has an index on a set of columns that allows more than 900 bytes of data. We know one of the fields can be shortened to bring the potential key size below 900 bytes.
The problem is that the table is about 120 million rows, and the index currently on that column is sought about 2.5 million times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column (see the sketch after the questions below).
On a smaller, less-used table I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values that would actually end up truncated. Is this right?
2) I did a few tests on some other, smaller (2+ million row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which would simply not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column alteration or an index build. Which scenarios would or would not be blocked?
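For reference, a sketch of the sequence described above (all names and the target size are placeholders, and it assumes SQL Server 2005 or later; online index builds additionally require Enterprise Edition):

DROP INDEX IX_BigTable_WideCol ON dbo.BigTable;
ALTER TABLE dbo.BigTable ALTER COLUMN WideCol varchar(400) NOT NULL;   -- assumes the column is currently NOT NULL
CREATE INDEX IX_BigTable_WideCol ON dbo.BigTable (WideCol)
    WITH (ONLINE = ON);   -- omit if the edition doesn't support online builds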
I am using Visual Web Developer 2005 and SQL 2005 with VB as the code-behind.

I am using the INSERT command like this:

Dim test As New SqlDataSource()
test.ConnectionString = ConfigurationManager.ConnectionStrings("DatabaseConnectionString1").ToString()
test.InsertCommandType = SqlDataSourceCommandType.Text
test.InsertCommand = "INSERT INTO try (roll, name, age, email) VALUES (@roll, @name, @age, @email)"
test.InsertParameters.Add("roll", TextBox1.Text)
test.InsertParameters.Add("name", TextBox2.Text)
test.InsertParameters.Add("age", TextBox3.Text)
test.InsertParameters.Add("email", TextBox4.Text)
test.Insert()

I am using the UPDATE command like this:

Dim test As New SqlDataSource()
test.ConnectionString = ConfigurationManager.ConnectionStrings("DatabaseConnectionString").ToString()
test.UpdateCommandType = SqlDataSourceCommandType.Text
test.UpdateCommand = "UPDATE try SET name = '" + myname + "', age = '" + myage + "', email = '" + myemail + "' WHERE roll = 123"
test.Update()

But I have to use the SELECT command like this, which is completely different from the INSERT and UPDATE commands:

Dim tblData As New Data.DataTable()
Dim conn As New Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True")
Dim Command As New Data.SqlClient.SqlCommand("SELECT * FROM try WHERE age = '100'", conn)
Dim da As New Data.SqlClient.SqlDataAdapter(Command)
da.Fill(tblData)
conn.Close()
TextBox4.Text = tblData.Rows(1).Item("name").ToString()
TextBox5.Text = tblData.Rows(1).Item("age").ToString()
TextBox6.Text = tblData.Rows(1).Item("email").ToString()

For the INSERT and UPDATE commands, defining the command, command type and connection string is the same, but for the SELECT command it is completely different. Why? Can I define the command, command type and connection string for the SELECT command in the same way as for the INSERT and UPDATE commands? If it's possible, how do I do it? Please help me.
I am using an OLE DB Source in my data flow component and want to select rows from the source based on the name I enter at execution time. I have created two variables:
enterName - String, package level (will store the name I enter)
myVar - String, package level (will store the query)
I am assigning this query to the myVar variable: "Select * from db.Users where (UsrName = " + @[User::enterName] + " )"
Now in the OLE DB source I have selected 'SQL command from variable', and I am given the variable enterName. I select that, and when I click OK I get this error:
Error at Data Flow Task [OLE DB Source [1]]: An OLE DB error has occurred. Error code: 0x80040E0C. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E0C Description: "Command text was not set for the command object.".
Can someone guide me on where I am going wrong?
For the myVar variable, I have set the EvaluateAsExpression property to true too.