I have a process that runs each day and imports about 550K records into a database. My question is: it appears I have to reindex the database after each import, otherwise the stored procedures that I have written will just run and run and run. After the reindex job, things run within 60 seconds. I am just looking for some insight on why. I understand why a reindex is done, but I don't know if I understand why I have to reindex every time.
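For reference, the post-import step I'm running is along these lines (the table name and fill factor here are just examples, not my exact script):

USE MyDatabase
GO
-- rebuild all indexes on the table that receives the daily import
DBCC DBREINDEX ('dbo.ImportedRecords', '', 90)
GO
-- refresh optimizer statistics on the same table
UPDATE STATISTICS dbo.ImportedRecords WITH FULLSCAN
GO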
Hi guys. I have an application here developed by a third-party software house. In the past, for some reason, the database would fail daily. The software house recommended that we run DBCC DBREINDEX on all tables within the databases twice daily. This was scheduled and is now running, and the database no longer fails. The fix works, but I don't understand why. Why would reindexing twice daily solve the problem? It seems excessive to have to reindex every user table twice daily.
I tried to run DBCC DBREINDEX on all user tables in a database. There are no clustered indexes, but there are multiple non-clustered indexes on each table.
The output file from DBCC DBREINDEX shows that it should have worked, but when I run DBCC SHOWCONTIG, the scan density of the indexes that were in bad shape has not improved.
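For reference, the per-table commands I'm running look roughly like this (the table name is a placeholder):

-- rebuild every index on the table, keeping the existing fill factor
DBCC DBREINDEX ('SomeTable', '', 0)
GO
-- check fragmentation afterwards, including the non-clustered indexes
DBCC SHOWCONTIG ('SomeTable') WITH ALL_INDEXES
GO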
Hello everybody.
1. I have a 28 GB database with 4 tables above 4 GB each, with very bad fragmentation; each table has between 3 and 5 indexes.
2. The database is set to full recovery and I use custom log shipping to restore the database on a standby server every 15 minutes.
I tried to run DBCC INDEXDEFRAG on one index on a 4 GB table, and the following took place:
1. It took 4 hours for DBCC INDEXDEFRAG to complete.
2. Log shipping failed.
3. A 2 GB log file was generated after DBCC INDEXDEFRAG completed.
I also tried dropping and recreating the clustered index on the table, and it created the same problem: the log grows and log shipping fails.
(If log shipping fails and the standby database gets suspended, it will take 6-8 hours to restore it from backup and apply all the logs.)
So my question is:
What would be the best way to rebuild/reindex/defragment a 28 GB database when it is set to full recovery with log shipping?
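For context, the defrag command I ran was along these lines (the database, table, and index names here are placeholders):

-- online defrag of a single index on one of the large tables
DBCC INDEXDEFRAG (MyDatabase, 'BigTable', 'IX_BigTable_Col1')
GO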
Hi guys, I need help with this question. In SQL 2000, can I run UPDATE, INSERT, or DELETE queries while the indexes of that table are being rebuilt? Will I get blocked by the DBCC DBREINDEX process?
Hi experts, I would like to ask about this error that occurs when I execute my reindexing script.
Here is my script:
USE mydatabase
GO
DBCC DBREINDEX ('outpatient', '', 70)
GO
Then the following error message appears. I researched this error, and the site I found says that the table is corrupted and that I need to restore from a better backup?
The statement has been terminated. Msg 824, Level 24, State 2, Line 1 SQL Server detected a logical consistency-based I/O error: torn page (expected signature: 0x55555555; actual signature: 0x55555545). It occurred during a read of page (1:353409) in database ID 10 at offset 0x000000ac902000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Bizbox_HS7.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
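As the message suggests, the next thing I was planning to run is a full consistency check, something like this (using the same database as in my script above):

-- full consistency check, reporting all errors and suppressing informational messages
DBCC CHECKDB ('mydatabase') WITH ALL_ERRORMSGS, NO_INFOMSGS
GO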
Hi, I have several tables in production whose contents are renewed completely within one week. So every day we delete ~15% of the records and then insert 15% new ones, and after a few days the performance drops. DBCC SHOWCONTIG reports:

TABLE level scan performed.
- Pages Scanned................................: 169617
- Extents Scanned..............................: 21630
- Extent Switches..............................: 153827
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 13.78% [21203:153828]
- Logical Scan Fragmentation ..................: 45.06%
- Extent Scan Fragmentation ...................: 52.66%
- Avg. Bytes Free per Page.....................: 5042.5
- Avg. Page Density (full).....................: 37.70%

I can't schedule a DBCC DBREINDEX every day because of concurrent access (it locks the tables too long); in practice I can only schedule it on Sunday. What else can I do? I can adjust the fill factor, but how do I find a good value if I don't want to waste space? The total size of the database is ~150 GB. Thanks.
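For reference, the Sunday job I run today is roughly this (the table name is a placeholder, and 80 is just an example fill factor I'm experimenting with):

-- weekly rebuild with an explicit fill factor to leave room for the daily inserts
DBCC DBREINDEX ('dbo.RenewedTable', '', 80)
GO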
I am currently running the Back Office Resource Kit log shipping option for a database running on a SQL 7 installation. As part of the ongoing maintenance work that we are being asked to perform by the application vendor, I need to run DBCC DBREINDEX on most of the tables in the database. Currently this is done by stopping the log shipping routine, running the reindex script, taking a full backup, restoring that backup to the secondary server, and then restarting the log shipping scripts. This is a very time-consuming task that has to be performed at unsociable hours.
Has anybody got an opinion as to whether the reindex would work at the same time as the log shipping scripts, or do I have to continue as at present?
I am maintaining a large table with millions of rows that has two non-clustered indexes and data that changes frequently, so I need to keep the indexes fresh. UPDATE STATISTICS runs much quicker than a reindex. What is the appropriate situation for each, and why? Thanks in advance.
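To be concrete, the two options I'm weighing look like this (the table name is a placeholder):

-- refresh statistics only; no index pages are rebuilt or moved
UPDATE STATISTICS dbo.BusyTable WITH FULLSCAN
GO
-- full rebuild of every index on the table
DBCC DBREINDEX ('dbo.BusyTable')
GO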
I first ran DBCC INDEXDEFRAG on a table with 1.5 billion rows. Logical fragmentation was at 95%; it went down to 3% with no real effect on disk usage.
DBCC DBREINDEX had previously been failing undetected.
Now I've run a reindex on this table:
- Reindex job with fill factor = 100
- Ran in 3:05
- Free disk went from ~150 GB before the operation to 49 GB
- File4 went from 347 GB to 504 GB
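For reference, the rebuild I ran was essentially this (the table name is a placeholder):

-- rebuild all indexes completely full, leaving no free space on the pages
DBCC DBREINDEX ('HugeTable', '', 100)
GO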
Why has so much free disk been consumed by this operation and not released?
I have a production 60 GB database set to full recovery, and every 15 minutes I log ship to a standby server.
During production hours there are no problems, but at night when I run DBCC DBREINDEX, the log grows to 22 GB, and because of this I have a problem sending it over the network to the standby server.
I tried changing the recovery model to Bulk_Logged, but there is no difference in the log backup size.
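For reference, this is roughly the sequence I tried (the database and table names are placeholders):

-- switch to bulk-logged before the nightly rebuild
ALTER DATABASE MyProdDB SET RECOVERY BULK_LOGGED
GO
DBCC DBREINDEX ('dbo.LargeTable')
GO
-- switch back to full recovery afterwards
ALTER DATABASE MyProdDB SET RECOVERY FULL
GO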
Hi folks. SQL Server 2000 SP3 on Windows 2000. I have a database on which I ran the command:

dbcc dbreindex ('tablename')
go

for all tables in the database. Then I compared the DBCC SHOWCONTIG WITH ALL_INDEXES output from before and after the reindex, and on the largest table in the database I found this. The first output is prior to the reindex:

Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 184867
- Extents Scanned..............................: 23203
- Extent Switches..............................: 23324
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.07% [23109:23325]
- Logical Scan Fragmentation ..................: 11.13%
- Extent Scan Fragmentation ...................: 35.46%
- Avg. Bytes Free per Page.....................: 60.0
- Avg. Page Density (full).....................: 99.26%

The second output is from after the reindex:

DBCC SHOWCONTIG scanning 'PlannedTransferArchive' table...
Table: 'PlannedTransferArchive' (1975014117); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 303177
- Extents Scanned..............................: 37964
- Extent Switches..............................: 42579
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 89.00% [37898:42580]
- Logical Scan Fragmentation ..................: 43.19%
- Extent Scan Fragmentation ...................: 24.78%
- Avg. Bytes Free per Page.....................: 75.1
- Avg. Page Density (full).....................: 99.07%

Following are my concerns. These numbers are all higher after the reindex than before it: pages scanned, extent switches, logical scan fragmentation, avg. bytes free per page, avg. page density. Scan density is lower after the reindex than before it. It seems to me that the numbers that are higher after the reindex should be lower, and the numbers that are lower should be higher. I didn't specify a fill factor in the DBCC DBREINDEX command, so it should have used the default fill factor, and the fill factor has never been changed on this machine. Am I missing something?

Thanks,
Raziq.
Does running DBCC DBREINDEX update the space-allocated columns in sysindexes? I understand that running DBCC UPDATEUSAGE updates the space-allocated columns in the sysindexes table, but I cannot find any documentation that indicates whether dynamically rebuilding the indexes, as opposed to dropping and recreating them, updates those columns.
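For reference, this is what I run today to refresh those columns (the database name is a placeholder):

-- reports and corrects page- and row-count inaccuracies in sysindexes
DBCC UPDATEUSAGE ('MyDatabase')
GO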
Any information would be helpful. Thanks. Gail Wade Database Administration Raymond James Financial gwade@it.rjf.com
I have a standard reorganise/reindex job running against a 32 GB database on SQL Server 2000. When I try to run the job it fails and returns Error 1105 ('PRIMARY' filegroup is full). What's confusing me is that I have 53 GB free on the drive on which my PRIMARY filegroup sits.
Has anyone else come across this problem when trying to set up a regular reindex job?
(More detail: the maintenance plan only includes the reorganisation/reindex job; no other jobs, including backing up the DB, are included. The DB in question is the only DB on the server; it's a test server.)
I have a weekly maintenance plan reindex job that has failed because of a deadlock. My question seems simple enough, and I'm ashamed to say I ought to know the answer, but here goes: does the rest of a given job continue after such a failure occurs (this one was maybe 3/4 of the way through the log)?
Does anyone know what the commands would be? I am trying to create a job that puts a DB in simple recovery mode, then launches a reorg and reindex, then sets it back to full recovery when it is complete. This way I can avoid large transaction logs being created.
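Something along these lines is what I'm picturing (the database name is a placeholder, sp_MSforeachtable is an undocumented procedure, and I'm not sure this is the right approach):

ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO
-- rebuild all indexes on every user table
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')'
GO
ALTER DATABASE MyDB SET RECOVERY FULL
GO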
I inherited a system which has an index on a set of columns that allows more than 900 bytes of data. We know one of the fields can be shortened to bring the potential key size below 900 bytes.
The problem is that the table has about 120 million rows, and the index currently on that column is sought against about 2.5 million times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column.
On a smaller, less used table, I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values that would actually end up truncated. Is this right?
2) I did a few tests on some other, smaller (2+ million row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios that simply would not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column alteration or an index build. What scenarios would or would not be blocked?
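Concretely, the change I described above would be something like this (all object names and the new length are placeholders, and NOT NULL is an assumption about the column's current nullability):

-- drop the index that currently covers the oversized column
DROP INDEX IX_BigTable_WideCol ON dbo.BigTable
GO
-- shrink the varchar so the potential key size falls below 900 bytes
ALTER TABLE dbo.BigTable ALTER COLUMN WideCol varchar(400) NOT NULL
GO
-- rebuild the index on the shortened column
CREATE NONCLUSTERED INDEX IX_BigTable_WideCol ON dbo.BigTable (WideCol)
GO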
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and can be run successfully on the server. I want to import the data the view returns into a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops at the step Setting Source Connection:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before and does anyone know what is happening?
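As a workaround I'm considering just loading the table with plain T-SQL instead of the wizard, along these lines (I'm assuming the view is in the dbo schema and that its columns line up with the destination table's):

INSERT INTO [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No]
SELECT *
FROM dbo.Viw_Labour_Cost_By_Service_Order_No
GO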
I am trying to import an .xlsx spreadsheet into a SQL 2008 R2 database using the SSMS Import Wizard. When pointed to the spreadsheet ("Choose a Data Source"), the Import Wizard returns this error:
"The operation could not be completed" The Microsoft ACE.OLEDB.12.0 provider is not registered on the local machine (System.Data)
How can I address that issue? (e.g. Where is this provider and how do I install it?)
I am attempting to import data from Microsoft Access databases into SQL Server 2000 using the DTS Import/Export Wizard. I am getting a few errors.
Error at Destination for Row number 1. Errors encountered so far in this task: 1.
Insert error column 152 ('ViewMentalTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 150 ('VRptTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 147 ('ViewAppTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 144 ('VPreTime', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Insert error column 15 ('Time', DBTYPE_DBTIMESTAMP), status 6: Data overflow.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Invalid character value for cast specification.
Could you please look into this and guide me? Thanks in advance. Venkatesh (imtesh@gmail.com)
I am trying to simplify a query given to me by one of my colleagues, written using the query designer of Access. Looking at the query there seem to be some syntax differences, so to see if this was the case I thought I would import the database into my SQL Server Developer Edition.
I tried to start the wizard from within SQL Server Management Studio Express as shown in one of the articles on MSDN, which did not work, but the manual method that was also suggested did work.
The trouble is that it gets most of the way through the import until it produces the following error messages:
- Prepare for Execute (Error)
Messages
Error 0xc0202009: {332B4EB1-AF51-4FFF-A3C9-3AEE594FCB11}: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "Could not start session. Too many sessions already active.". (SQL Server Import and Export Wizard)
Error 0xc020801c: Data Flow Task: The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0202009. (SQL Server Import and Export Wizard)
Error 0xc004701a: Data Flow Task: component "Source 33 - ATable" (2065) failed the pre-execute phase and returned error code 0xC020801C. (SQL Server Import and Export Wizard).
There does not seem to be any method of specifying a number of sessions, so I don't see how to get round the problem.
Does anyone know how I can get the import to work?
I am not sure how to implement the following, but I believe it entails using DTS, and hopefully it is fine that I post it here because ultimately I will need this back-end data for my front-end .aspx pages:
On a weekly basis, I need to IMPORT some data located in a remote Oracle DB into SQL Server 2000. Since there is so much data to transfer, I would like to transfer only the data that is new to the table since the last IMPORT (i.e., a week ago) and leave behind the OLD data.
Is DTS the correct way to go or do I have more control via DTS with STORED PROCEDURES? Does anyone have any good references for me?
On a similar note, once this Oracle data is IMPORTED into a certain table, I would like to EXPORT some of these NEWLY acquired rows matching certain criteria into another table for auditing purposes. For this scenario, should I implement an UPDATE TRIGGER on the first table?
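For the weekly incremental pull, the rough shape of what I have in mind is something like this (ORACLE_LINK is an assumed linked server, and all table and column names are placeholders for illustration):

-- pull only rows newer than what we already have locally
INSERT INTO dbo.ImportedOrders (OrderID, OrderDate, Amount)
SELECT src.OrderID, src.OrderDate, src.Amount
FROM OPENQUERY(ORACLE_LINK,
     'SELECT order_id AS OrderID, order_date AS OrderDate, amount AS Amount FROM remote_orders') AS src
WHERE src.OrderDate > (SELECT MAX(OrderDate) FROM dbo.ImportedOrders)
GO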
When trying to import files to our database server from a client, I keep getting an error:
- Validating (Error) Messages Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Source_txt" (1). (SQL Server Import and Export Wizard)
Error 0xc00470fe: Data Flow Task: The product level is insufficient for component "Data Conversion 1" (175). (SQL Server Import and Export Wizard)
... doing the same import when logged on to the server doesn't give me any errors; how come? From my client I can import tables from other DB servers without trouble, but whenever it is files it won't do it.
I tried, as mentioned in other threads, rerunning setup to re-install SSIS, but as it was already installed it wouldn't re-install. My next move would be to make a clean install, but I'm not sure it would help, as I think this is a bug.
Hi, I am having trouble importing data from an Excel spreadsheet into MS SQL Server 2000 using the DTS Wizard. The DTS import process is successful, with no errors, but only 50 rows of the approximately 1500 rows of data are imported. I tried removing 20 rows from the Excel spreadsheet in the interval of rows 0-50. When I later ran the import, only 30 rows were imported. I then deleted almost every row in the interval 0-50, with the result that the import imported 0 rows (but the job ran successfully). I decided to delete rows 0-100 in the spreadsheet to see if that resolved the problem, but it didn't. As I suspected something in the Excel file to be the cause, I exported the Excel spreadsheet to a tab-delimited text file, with only one row. A DTS import resulted in importing approximately 100 rows, double the amount in the text file, but the other 1400 rows were not imported. The data in the column contains numeric values only. Please help me! What could possibly be the cause of DTS skipping rows like that? DTS doesn't feel reliable at all :/ Regards, Björn
I have some C# code where I import data into SQL from an XML file. Can this be done with the Image type? I test for it and turn the gobbledygook into a byte[] array, but I get an out-of-memory error in my C# app when I try to view it. Is this possible?
I have a text file that I must import into a table I created, but I am having terrible difficulty trying to use the command-line BCP utility to do so. Can anyone please tell me how to do this?
Text file and table properties below:
Text File
1 Untitled Mark Rothko Oil 1961 5'9"x4'2"
2 The Letter Jan Vermeer Oil 1666 1'5.25"x1'3.75"
3 Four Apostles Albrecht Durer Oil 1526 7'1"x2'6"
4 Big Self-Portrait Chuck Close Acrylic 1968 8'11"x6'11"x2
5 Three Angels Andrei Rublyev Tempura on wood 1410 4'8"x3'9"
6 Voltaire Jean-Antoine Houdon Marble 1781
7 Jaguar Devouring a Hare Antoine-Louis Barye Bronze 1851 1'4"x3'1"
8 The Peacock Skirt Aubrey Beardsley Pen and Ink 1894
9 Untitled Film Still #35 Cindy Sherman Black-and-white photograph 1979 10"x8"
10 Reclining Figure Henry Moore Elm wood 1939 3'1"x2'6"
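The kind of command I've been trying looks roughly like this (the database, table, server name, and file path are placeholders, and I'm assuming the file is tab-delimited):

bcp MyDatabase.dbo.Artwork in "C:\data\artwork.txt" -c -t"\t" -r"\n" -S MYSERVER -T

Here -c uses character format, -t and -r set the field and row terminators, -S names the server, and -T uses a trusted connection (use -U/-P for SQL authentication instead).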
I am trying to do a DTS import in SQL Server 7. I am importing from a text file into a SQL Server table. When I run the import to append the data, I get the following error:
Error during Transformation 'DirectCopyXform' for row number 1. Errors encountered so far in this task: 1. TransformCopy 'DirectCopyXform' conversion error: Conversion invalid for datatypes on column pair 6 (source column 'Col006' (DBTYPE_STR), destination column 'END_DT' (DBTYPE_DBTIMESTAMP)).
Could anyone tell me how to correct this problem? Any help would be greatly appreciated.
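One workaround I was thinking of trying is to import that column into a varchar staging column and convert it myself afterwards, something like this (all names are placeholders, and style 101 assumes the text dates look like mm/dd/yyyy):

-- convert the raw imported string into the real datetime column
UPDATE dbo.StagingTable
SET END_DT = CONVERT(datetime, Col006_raw, 101)
WHERE ISDATE(Col006_raw) = 1
GO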