Hi experts, I would like to ask about an error that occurs when I execute my reindexing script.
Here is my script:
USE mydatabase
DBCC DBREINDEX('outpatient', '', 70)
go
Then the error message below appears. I researched this error, and the site I found says that the table is corrupted and that I need to restore a good backup. Is that right?
The statement has been terminated.
Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: torn page (expected signature: 0x55555555; actual signature: 0x55555545). It occurred during a read of page (1:353409) in database ID 10 at offset 0x000000ac902000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Bizbox_HS7.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
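For reference, the error text itself points at the next step; here is a minimal sketch of running the consistency check (the database name below is an assumption based on the .mdf file name in the message):
USE master
GO
-- Full consistency check of the suspect database; NO_INFOMSGS limits the output to actual errors
DBCC CHECKDB ('Bizbox_HS7') WITH NO_INFOMSGS
GO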
I have a standard reorganise/reindex job running against a 32GB database on SQL Server 2000. When trying to run the job it fails and returns Error 1105 <'PRIMARY' filegroup is full>. What's confusing me is that I have 53GB free on the drive on which my PRIMARY filegroup sits.
Has anyone else come across this problem when trying to set up a regular reindex job?
(More detail) The maintenance plan only includes the reorganisation/reindex job; no other jobs, including backing up the DB, are included. The DB in question is the only DB on the server: it's a test server.
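One thing that may be worth checking, sketched below, is whether the data files in the PRIMARY filegroup have a restricted maximum size or have autogrow disabled, since error 1105 can occur even with plenty of free disk in that case (the database name here is a placeholder):
USE mydatabase
GO
-- Lists each file's current size, maximum size, and growth increment;
-- a small maxsize or a growth of 0 would explain error 1105 despite free disk space
EXEC sp_helpfile
GO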
Hi guys. I have an application here developed by a third-party software house. In the past, for some reason, the database would fail daily. The software house recommended that we run DBCC DBREINDEX on all tables within the databases twice daily. This was scheduled and is now running, and the database no longer fails. The fix works, but I don't understand why reindexing every user table twice daily would solve the problem. It seems excessive.
I tried to run dbcc reindex on all user tables in a database. There are no clustered indexes, but multiple non-clustered indexes on each table.
The output file from dbcc reindex shows that it should have worked. But when I run DBCC SHOWCONTIG, the scan density of the indexes that were in bad shape did not improve.
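If it helps to narrow things down, DBCC DBREINDEX can be pointed at a single non-clustered index by name and the result checked immediately afterwards; a sketch with hypothetical table and index names:
-- Rebuild one named non-clustered index with a 90 fill factor, then re-check its fragmentation
DBCC DBREINDEX ('MyTable', 'IX_MyTable_Col1', 90)
GO
DBCC SHOWCONTIG (MyTable, IX_MyTable_Col1)
GO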
Hello everybody.
1. I have a 28 GB database with 4 tables above 4 GB each with very bad fragmentation; each table has between 3 and 5 indexes.
2. The database is set for full recovery and I use custom log shipping to restore the DB on a standby server every 15 minutes.
I tried to run DBCC INDEXDEFRAG on one index on a 4 GB table and the following took place:
1. It took 4 hrs for DBCC INDEXDEFRAG to complete.
2. Log shipping failed.
3. A 2 GB log file was generated after DBCC INDEXDEFRAG completed.
I tried dropping and recreating the clustered index on the table and it causes the same problem: the log grows and log shipping fails.
(If log shipping fails and the standby database gets suspended, it takes 6-8 hrs to restore it from backup and apply all the logs.)
So my question is: what would be the best way to rebuild/reindex/defragment a 28 GB database when it is set to full recovery with log shipping?
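One approach that is sometimes suggested, sketched below with hypothetical names, is to defragment one index at a time and back up the log between indexes, so that no single operation has to be carried in one huge log backup:
-- Defragment a single index; INDEXDEFRAG runs online and in many small transactions
DBCC INDEXDEFRAG ('MyDatabase', 'BigTable1', 'IX_BigTable1_Key')
GO
-- Back up the log straight away so log shipping can move the changes in manageable pieces
BACKUP LOG MyDatabase TO DISK = 'D:\LogShip\MyDatabase_log.trn'
GO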
Hi guys, I need help with this question. In SQL 2000, can I run UPDATE, INSERT or DELETE queries while the indexes of a table are being rebuilt? Will I get blocked by the DBCC DBREINDEX process?
I have a process that runs each day and imports about 550K records into a database. My question is: it appears I have to reindex the database after each import, otherwise the stored procedures that I have written will just run and run and run. After the reindex job, things run within 60 seconds. I am just looking for some insight; I understand why a reindex is done, but I don't understand why I have to reindex every time.
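It may be worth checking whether it is stale statistics, rather than physical fragmentation, that the stored procedures are sensitive to after the import; a minimal sketch against a hypothetical table name:
-- Refresh optimizer statistics on the freshly loaded table with a full scan;
-- this is much cheaper than a full reindex if statistics are the real issue
UPDATE STATISTICS dbo.ImportedTable WITH FULLSCAN
GO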
Hi, I have several tables in production whose contents are renewed totally within one week. So every day we delete ~15% of the records and then insert 15% new ones, and after a few days the performance drops:
TABLE level scan performed.
- Pages Scanned................................: 169617
- Extents Scanned..............................: 21630
- Extent Switches..............................: 153827
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 13.78% [21203:153828]
- Logical Scan Fragmentation ..................: 45.06%
- Extent Scan Fragmentation ...................: 52.66%
- Avg. Bytes Free per Page.....................: 5042.5
- Avg. Page Density (full).....................: 37.70%
I can't schedule a DBCC REINDEX every day because of concurrent access (it locks the tables too long); actually I can only schedule it on Sunday. What else can I do? I can adjust the fill factor, but how do I find a good value if I don't want to waste space? The total size of the database is ~150GB. Thx
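One possibility on SQL Server 2000 is DBCC INDEXDEFRAG, which works in short transactions and does not hold the long locks that DBCC DBREINDEX does, so it can run during the week; a sketch with hypothetical names:
-- Defragment the clustered index (index ID 1) of one heavily churned table while it stays online
DBCC INDEXDEFRAG ('MyDatabase', 'MyTable', 1)
GO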
I am currently running the BackOffice Resource Kit log shipping option for a database running on a SQL Server 7 installation. As part of the on-going maintenance work that the application vendor asks us to perform, I need to run DBCC DBREINDEX on most of the tables in the database. Currently this is done by stopping the log shipping routine, running the reindex script, taking a full backup, restoring that backup to the secondary server, and then restarting the log shipping scripts. This is a very time-consuming task that has to be performed at unsociable hours.
Has anybody got an opinion as to whether the reindex would work at the same time as the log shipping scripts, or do I have to continue as at present?
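For what it is worth, the reindex script referred to above often takes roughly the shape sketched here (the fill factor and the use of sp_MSforeachtable are assumptions on my part, not details from the original post):
-- Rebuild all indexes on every user table; the '?' placeholder is replaced with each table name
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'', '''', 90)'
GO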
I am maintaining a large table with millions of rows that has two non-clustered indexes and frequently changing data, and I need to keep the indexes fresh. UPDATE STATISTICS runs much quicker than a reindex. What is the appropriate situation for each, and why? Thanks in advance.
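For context, the two commands do different amounts of work; a side-by-side sketch with a hypothetical table name:
-- Rebuilds the index pages (removes fragmentation and refreshes statistics as a side effect); heavy operation
DBCC DBREINDEX ('dbo.LargeTable', '', 90)
GO
-- Only refreshes optimizer statistics; much cheaper, but does nothing about physical fragmentation
UPDATE STATISTICS dbo.LargeTable WITH SAMPLE 20 PERCENT
GO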
I first ran INDEXDEFRAG on a table with 1.5 billion rows. Logical fragmentation was at 95%; it went down to 3% with no real effect on disk.
DBCC reindex had previously been bombing undetected.
Now I've run a reindex on this table:
Reindex job with fill factor = 100 ran in 3:05.
Free disk went from ~150GB before the operation to 49GB.
File4 went from 347GB to 504GB.
Why has so much free disk been consumed by this operation and not released?
I have a production 60GB database set to full recovery, and every 15 minutes I log ship to a standby server.
During production hours there are no problems, but at night when I run DBCC DBREINDEX the log grows to 22GB, and because of this I have a problem sending it over the network to the standby server.
I tried changing the recovery model to Bulk_Logged but there is no difference in the log backup size.
Hi folks, SQL Server 2000 SP3 on Windows 2000. I have a database on which I ran the command:
dbcc dbreindex ('tablename')
go
for all tables in the database. Then I compared the dbcc showcontig with all_index output from before and after the reindex, and on the largest table in the database I found this. First output is prior to the reindex:
Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 184867
- Extents Scanned..............................: 23203
- Extent Switches..............................: 23324
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.07% [23109:23325]
- Logical Scan Fragmentation ..................: 11.13%
- Extent Scan Fragmentation ...................: 35.46%
- Avg. Bytes Free per Page.....................: 60.0
- Avg. Page Density (full).....................: 99.26%
Second output is from after the reindex:
DBCC SHOWCONTIG scanning 'PlannedTransferArchive' table...
Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 303177
- Extents Scanned..............................: 37964
- Extent Switches..............................: 42579
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 89.00% [37898:42580]
- Logical Scan Fragmentation ..................: 43.19%
- Extent Scan Fragmentation ...................: 24.78%
- Avg. Bytes Free per Page.....................: 75.1
- Avg. Page Density (full).....................: 99.07%
Following are my concerns. The following numbers are all higher after the reindex than before it: pages scanned, extent switches, logical scan fragmentation, avg. bytes free per page, avg. page density. Scan density is lower after the reindex than before it. It seems to me that the numbers that are higher after the reindex should be lower, and the numbers that are lower should be higher? I didn't specify the fill factor in the dbcc dbreindex command, so it should have used the default fill factor. The fill factor has never been changed on this machine. Am I missing something?
Thanks, Raziq.
Does running DBCC DBREINDEX update the space allocated columns in sysindexes? I understand that running DBCC UPDATEUSAGE updates the space allocated columns in the sysindexes table, but I cannot find any documentation that indicates whether dynamically rebuilding the indexes, as opposed to dropping and recreating them, updates those columns.
Any information would be helpful. Thanks. Gail Wade Database Administration Raymond James Financial gwade@it.rjf.com
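For reference, the documented way to force the space-allocated columns in sysindexes to be corrected is DBCC UPDATEUSAGE; a minimal sketch for the current database:
-- Correct the pages/rows accounting in sysindexes for the current database (0 = current DB)
DBCC UPDATEUSAGE (0) WITH COUNT_ROWS
GO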
I have a weekly Maintenance Plan Reindex job that has failed because of a deadlock. My question seems simple enough and I'm ashamed to say I ought to know this answer, but here goes: Does the rest of a given job continue after such failures (this one was maybe 3/4 through the log) occur?
Does anyone know what the commands would be? I am trying to create a job that puts a DB in simple mode then launches a reorg and re-index, then sets it back to full when it is complete. This way I can eliminate large transaction logs being created.
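A minimal sketch of those commands, assuming a database called MyDB; note that after switching back to full recovery you generally need a full (or differential) backup to restart the log backup chain:
-- Switch to simple recovery so the reorg/reindex does not inflate the log
ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO
-- ... run the reorganise and reindex steps here ...
-- Switch back to full recovery when the maintenance is complete
ALTER DATABASE MyDB SET RECOVERY FULL
GO
-- Re-establish the log backup chain
BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB.bak'
GO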
I inherited a system which has an index on a set of columns that allows more than 900 bytes of data in the key. We know one of the fields can be shortened to shrink the potential key size below 900 bytes.
The problem is that the table is about 120m rows, and the index currently on that column is used for seeks about 2.5m times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column (a sketch of this sequence follows my questions below).
On a smaller, less used table, I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values that would actually end up truncated. Is this right?
2) I did a few tests on some other smaller (2+ million row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which would simply not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column update or an index build. What scenarios would or would not be blocked?
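At its simplest the sequence might look like the sketch below; every object name is hypothetical, the new varchar length is a placeholder, the DROP INDEX ... ON syntax assumes SQL Server 2005 or later, and the nullability should be kept the same as the existing column (ALTER COLUMN resets it if you leave it out):
-- 1. Drop the existing index whose key exceeds 900 bytes
DROP INDEX IX_BigTable_WideCol ON dbo.BigTable
GO
-- 2. Shrink the column; this takes a schema-modification lock on the table while it runs
ALTER TABLE dbo.BigTable ALTER COLUMN WideCol varchar(400) NOT NULL
GO
-- 3. Recreate the index on the narrowed column
CREATE INDEX IX_BigTable_WideCol ON dbo.BigTable (WideCol)
GO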
I'm currently receiving the following error message whilst attempting to install SQL Server 2005 Standard Edition on Windows Server 2003 (32 Bit): Error 29528. The setup has encountered an unexpected error while Installing performance counters. The error is: The system cannot find the file specified.
This server already has an install of SQL Server 2000 as the default instance. I'm attempting to install a new named instance of SQL Server 2005.
Extract from log:
<Func Name='LaunchFunction'> Function=Do_sqlPerfmon2
<Func Name='GetCAContext'>
<EndFunc Name='GetCAContext' Return='T' GetLastError='0'>
Doing Action: Do_sqlPerfmon2
PerfTime Start: Do_sqlPerfmon2 : Tue Jun 12 10:20:02 2007
<Func Name='Do_sqlPerfmon2'>
<EndFunc Name='Do_sqlPerfmon2' Return='0' GetLastError='2'>
PerfTime Stop: Do_sqlPerfmon2 : Tue Jun 12 10:20:02 2007
MSI (s) (4C:FC) [10:20:02:833]: Executing op: ActionStart(Name=Rollback_Do_sqlPerfmon2.D20239D7_E87C_40C9_9837_E70B8D4882C2,Description=Removing performance counters,)
<EndFunc Name='LaunchFunction' Return='0' GetLastError='0'>
MSI (s) (4C:FC) [10:20:02:849]: Executing op: CustomActionSchedule(Action=Rollback_Do_sqlPerfmon2.D20239D7_E87C_40C9_9837_E70B8D4882C2,ActionType=1281,Source=BinaryData,Target=Rollback_Do_sqlPerfmon2,CustomActionData=100Removing performance counters200000DTSPipelineC:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSPERF.INI)
MSI (s) (4C:FC) [10:20:02:849]: Executing op: ActionStart(Name=Do_sqlPerfmon2.D20239D7_E87C_40C9_9837_E70B8D4882C2,Description=Installing performance counters,)
MSI (s) (4C:FC) [10:20:02:849]: Executing op: CustomActionSchedule(Action=Do_sqlPerfmon2.D20239D7_E87C_40C9_9837_E70B8D4882C2,ActionType=1025,Source=BinaryData,Target=Do_sqlPerfmon2,CustomActionData=100Installing performance counters200000C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSPERF.INIC:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSPERF.HC:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSPipelinePerf.dllDTSPipeline0DTSPipelinePrfData_OpenPrfData_CollectPrfData_Close)
MSI (s) (4C:94) [10:20:02:864]: Invoking remote custom action. DLL: C:\WINDOWS\Installer\MSI1683.tmp, Entrypoint: Do_sqlPerfmon2
<Func Name='LaunchFunction'> Function=Do_sqlPerfmon2
<Func Name='GetCAContext'>
<EndFunc Name='GetCAContext' Return='T' GetLastError='0'>
Doing Action: Do_sqlPerfmon2
PerfTime Start: Do_sqlPerfmon2 : Tue Jun 12 10:20:02 2007
<Func Name='Do_sqlPerfmon2'>
<EndFunc Name='Do_sqlPerfmon2' Return='2' GetLastError='2'>
PerfTime Stop: Do_sqlPerfmon2 : Tue Jun 12 10:20:02 2007
Gathering darwin properties for failure handling.
Error Code: 2
MSI (s) (4C!F0) [10:23:46:381]: Product: Microsoft SQL Server 2005 Integration Services -- Error 29528. The setup has encountered an unexpected error while Installing performance counters. The error is: The system cannot find the file specified.
Error 29528. The setup has encountered an unexpected error while Installing performance counters. The error is: The system cannot find the file specified.
You can ignore this and it will complete the installation, but subsequently trying to patch with SP2 will fail on the same sections - Hotfix.exe crashes whilst attempting to patch Database Services, Integration Services and Client Components (3 separate crashes).
I've removed SQL Server 2005 elements and tried to re-install, but it's not improved the situation.
I have a File System Task that copies a file from one directory to another. When I hard-code the target directory (c:\dir\file.txt) it works fine. When I change it to a virtual directory (\\server\dir\file.txt) I get a security error:
[File System Task] Error: An error occurred with the following error message: "Access to the path '\gracehbtest oS2TMM_Live_Title_000002.xml' is denied.".
I cannot execute a package by using the Execute Package task. I supplied sa credentials to the connection manager, and it shows the list of packages on SQL Server, but when running the task it says:
Error 0xC0202009 while preparing to load the package. An OLE DB error has occurred. Error code: 0x%1!8.8X!.
I am running a DTS package in SQL Server 2005 Management Studio from Management > Legacy > Data Transformation Services.
Once the DTS package has run, I get this error message: "Error Source: Microsoft Data Transformation Services (DTS) Package. Error Description: Error accessing Windows Event Log."
We have reports deployed on the Report Server. While connecting from the client, we are getting the error "An internal error occurred on the report server. See the error log for more details. (rsInternalError)".
We then checked the Report Server; the Reporting Services and SQL Server services are all running fine.
The important thing is that sometimes the reports work fine and sometimes I receive this error. Please help.
Could this be because the services are automatically restarted, or because the transaction logs are exceeding a limit? Are there any other parameters we can set to avoid this error?
I'm trying to use an XML Task to do a simple XSLT operation, but it fails with this error message:
[XML Task] Error: An error occurred with the following error message: "There are multiple root elements. Line 5, position 2.".
The source XML file validates fine and I've successfully used it as the XML Source in a data flow task to load some SQL Server tables. It has very few line breaks, so the first 5 lines are pretty long: almost 4000 characters, including 34 start-tags, 19 end-tags, and 2 empty element tags. Here's the very beginning of it:
<?xml version="1.0" encoding="UTF-8"?> <ESDU releaselevel="2006-02" createdate="26 May 2006"><package id="1" title="_standard" shorttitle="_standard" filename="pk_stan" supplementdate="01/05/2005" supplementlevel="1"><abstract><![CDATA[This package contains the standard ESDU Series.]]></abstract>
There is only 1 ESDU root element and only 1 package element.
Of course, the XSLT stylesheet is also an XML document in its own right. I specify it directly in the XML Task:
I have the error above, but no error log. I can preview the sub-report, but this main report fails after working this morning. This is for internal company reports, and I rebuilt this one after converting from Access. I have looked where the error logs should be, but there are no error logs. I rebuilt the query as I needed to change it, but this did not help. Could someone point me in the right direction?
I'm running Vista Ultimate. SQL 2005 is set as my default instance, and SQL 2000 is set as (local)\SQL2000.
Today, actually half way through today, I restarted my computer after installing Photoshop Updates.
Upon getting my computer back up and running, I cannot access SQL 2000 from any website on my computer, nor can I access it from SQL 2005 Management Studio. I CAN access it from Enterprise Manager (the SQL 2000 tool).
Whenever I run a web app that connects to it I get this error:
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)
Now I usually get these when ASP.NET can't point to the right spot, but in this case I'm pointing exactly where I need to go. Any thoughts?
--Edit
I should also add my password got changed a few days ago on our Domain. This was the first time restarting after the PW change.
My linked server was working just fine until Friday evening. It stopped working over the weekend with this error: Error 7399: OLE DB provider 'MSDAORA' reported an error. OLE DB error.
- My Oracle 10g client is working just fine.
- The TNS names look fine.
- I recreated the link but I am still getting the same error.
I need your help because a lot of jobs use that link; on Monday it is going to be crazy.
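In case it helps to compare against the current definition, a typical shape for an MSDAORA linked server looks like the sketch below; the server name, TNS alias, and credentials are all placeholders:
-- Recreate the Oracle linked server against the TNS alias from tnsnames.ora
EXEC sp_addlinkedserver
    @server = 'ORA_LINK',
    @srvproduct = 'Oracle',
    @provider = 'MSDAORA',
    @datasrc = 'ORCL_TNS_ALIAS'
GO
-- Map a remote Oracle login for the linked server
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'ORA_LINK',
    @useself = 'false',
    @rmtuser = 'oracle_user',
    @rmtpassword = 'oracle_password'
GO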
We have a production SQL Server 6.5 running with the service pack SP5a update.
I got the following 2 errors:
1.
Error: 806, Severity: 21, State: 1. Could not find virtual page for logical page 67833121 in database 'tempdb'.
2.
I got an error when I ran a job for update statistics: Error: 614, Severity: 21, State: 3. A row on page 2697653 was accessed that has an illegal length of -8631 in database 'abc'.
For error 2: I ran update statistics using Query Analyzer and it is fine now. Is there anything further I have to do?
For error 1, the workaround given by Microsoft:
=================================================
I ran DBCC CHECKTABLE(syslogs)
I am getting the following message:
master:
Checking syslogs
The total number of data pages in this table is 1.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 11 data rows.
DBCC execution completed. If DBCC printed error messages, see your System Administrator.
model:
Checking syslogs
The total number of data pages in this table is 47.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 532 data rows.
DBCC execution completed. If DBCC printed error messages, see your System Administrator.
tempdb:
Checking syslogs
The total number of data pages in this table is 1.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 31 data rows.
DBCC execution completed. If DBCC printed error messages, see your System Administrator.
I ran DBCC CHECKDB on master, model and tempdb. I still get the same problem.
for tempdb:
Checking 8
The total number of data pages in this table is 1.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 19 data rows.
for master:
Checking 8
The total number of data pages in this table is 1.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 27 data rows.
for model:
Checking 8
The total number of data pages in this table is 47.
*** NOTICE: Notification of log space used/free cannot be reported because the log segment is not on its own device.
Table has 532 data rows.
All system databases and user databases recovered successfully when I restarted SQL Server.