Index Fragmentation Strategy For VLDB
Nov 20, 2007
I need to manage the negative performance implications of defragmenting a 1TB+ database. I want to perform an index reorganization if fragmentation is no higher than 30%, and an index rebuild if the fragmentation exceeds 30%.
Firstly, can anyone recommend a script which uses the sys.dm_db_index_physical_stats DMV to ascertain the fragmentation level? Secondly, is there a technique I can employ to prevent the ONLINE operation from completely killing performance on a 24/7 production system?
ALTER INDEX REORGANIZE/REBUILD WITH (ONLINE=ON)
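A minimal sketch of the kind of script being asked about, using sys.dm_db_index_physical_stats with the 30% cutoff described above (the thresholds, the page_count floor, and the MAXDOP cap are illustrative assumptions; ONLINE = ON requires Enterprise Edition and, on 2005, is not allowed on indexes containing LOB columns):

-- Reorganize indexes fragmented 5-30%, rebuild those above 30%.
DECLARE @sql nvarchar(max);
DECLARE frag_cursor CURSOR LOCAL STATIC FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ps.object_id))
         + N'.' + QUOTENAME(OBJECT_NAME(ps.object_id))
         + CASE WHEN ps.avg_fragmentation_in_percent > 30
                THEN N' REBUILD WITH (ONLINE = ON, MAXDOP = 2)' -- cap CPUs to soften impact
                ELSE N' REORGANIZE'                             -- always online, interruptible
           END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 5 -- leave lightly fragmented indexes alone
      AND ps.index_id > 0                     -- skip heaps
      AND ps.page_count > 1000;               -- tiny indexes rarely defragment meaningfully

OPEN frag_cursor;
FETCH NEXT FROM frag_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM frag_cursor INTO @sql;
END
CLOSE frag_cursor;
DEALLOCATE frag_cursor;

On the second question: REORGANIZE is always online and can be stopped without losing completed work, so on a 24/7 system it is often the gentler default, with REBUILD reserved for a low-traffic window.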
View 2 Replies
Jun 22, 2008
What functions or tools do you use for managing index fragmentation?
DBCC?
I am working through the MS Press SQL 2005 book and it mentions the sys.dm_db_index_physical_stats function. It then gives an example of code which is very involved.
Does anybody use this function?
Thanks
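For what it's worth, the involved examples mostly reduce to something like this minimal sketch (run in the database you are checking; the join to sys.indexes just supplies the index name):

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;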
View 10 Replies
May 10, 2007
Before rebuilding the index:

tablename       Indexname                  avg_fragmentation_in_percent
Payoff_Quote    PK_Payoff_QuoteDetail      83.3333333333333

ALTER INDEX ALL ON Payoff_Quote REBUILD

After issuing the above statement, the result remains the same.
View 1 Replies
Jun 24, 2007
Hi
When I run sys.dm_db_index_physical_stats on AdventureWorks.Production.Product, I get average fragmentation values of 46%, 0%, 25% and 50% on the indexes.
I run ALTER INDEX ALL ON AdventureWorks.Production.Product
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON,
STATISTICS_NORECOMPUTE = ON)
and the index fragmentation does not change...
Why is that? I have tried refreshing the table and the statistics, but why does it look like rebuilding the indexes has done nothing?
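One thing worth ruling out (a common explanation, though only an assumption about this case): Production.Product is a very small table, and the first eight pages of any object are allocated from shared mixed extents, so tiny indexes report fragmentation that no rebuild can remove. Checking page_count alongside the percentage shows whether the numbers are worth acting on:

-- Fragmentation percentages on indexes of only a few pages are mostly noise.
SELECT i.name, ps.index_id, ps.page_count, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID('AdventureWorks'),
         OBJECT_ID('AdventureWorks.Production.Product'),
         NULL, NULL, 'DETAILED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.index_level = 0;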
View 1 Replies
Sep 25, 2007
I have a nice script that will look at index fragmentation using the DMV (sys.dm_db_index_physical_stats) and, if it is above a specified threshold, rebuild or reorganize the index.
What I have noticed is that even after reorging and/or rebuilding the index, the fragmentation percent in the DMV does not change for some indexes. Other indexes are updated just fine.
Is this a result of updating statistics? Why would the fragmentation change for some indexes and not others? Why does it seem that no matter how much rebuilding or reorging is done, the fragmentation percent for some indexes does not change?
- Eric
View 6 Replies
May 11, 2015
We have a nightly job running to reorganize all indexes in one of our production databases, but the index on one of the tables fragments quickly, showing 90% fragmentation by morning.
View 9 Replies
Feb 9, 2008
Hi,
It is natural that indexes get fragmented over time. But I would like to know how you do the reindexing or defragmenting when your database table is big and you cannot afford the time to rebuild it. Thanks.
View 6 Replies
Nov 8, 2007
I have several databases in which the indexes have not been rebuilt or reorganized. I have worked primarily on SQL 2005, and with 2000 I am not familiar with logical vs. extent fragmentation. On the 2000 server there is high logical fragmentation (1000+%). If I rebuild the indexes on the 2000 server databases, will this reduce performance or halt functionality of the servers while the process is running?
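For reference, on SQL Server 2000 the two options behave very differently: DBCC DBREINDEX rebuilds offline and holds locks for the duration, while DBCC INDEXDEFRAG works online in small transactions. A sketch with hypothetical names:

-- Online defragmentation; safe to run while users are connected.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyTable', 'IX_MyTable_MyColumn')
-- Offline rebuild with a 90 fill factor; blocks access to the table while it runs.
-- DBCC DBREINDEX ('dbo.MyTable', '', 90)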
View 6 Replies
Jun 2, 2014
We have a few tables where we do truncate-and-load or insert-only activity. Why does the clustered index get fragmented to more than 80% so often?
View 9 Replies
Apr 23, 2007
I have been testing methods to maintain indexes in a SQL Server 2005 database which has been migrated from SQL Server 2000. The compatibility level is still set to 80. I used the query below to inspect the degree of fragmentation amongst other things.
SELECT a.index_id,
       b.name,
       a.database_id,
       a.avg_fragmentation_in_percent,
       a.index_type_desc,
       a.fragment_count,
       a.page_count
FROM sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, 'DETAILED') AS a
JOIN sys.indexes AS b
  ON a.object_id = b.object_id AND a.index_id = b.index_id
Some of the indexes in the database had a high degree of fragmentation based on the avg_fragmentation_in_percent value. I tried drop+create, rebuild and reorganise commands on those indexes. Predictably, drop+create was the most effective, but even that did not always reduce fragmentation much. Sometimes the fragmentation was the same no matter which method I used; other times drop+create helped while rebuild made it worse.
What is going on?
View 11 Replies
Mar 23, 2015
Two tables:
GeoIpLocation (Id, City, Country, Latitude, Longitude; PK: Id; 2m rows)
GeoIpBlock (IpFrom, IpTo, GeoIpLocation_Id; PK: IpFrom, IpTo; 100k rows)
Simple concept: get location for given IP. So:
SELECT TOP 1
       GeoIpLocation.Country,
       GeoIpLocation.City
FROM GeoIpLocation
INNER JOIN GeoIpBlock
    ON GeoIpBlock.GeoIpLocation_Id = GeoIpLocation.Id
WHERE @nIpNumber BETWEEN GeoIpBlock.IpFrom AND GeoIpBlock.IpTo
Result: disaster.
The BETWEEN operator uses the index on the PK of GeoIpBlock, which results in scanning half the table for the first part of the BETWEEN and half the table for the second part. A slightly better query uses FORCESCAN on GeoIpBlock, but it still runs slowly, as it scans a lot of records.
Question: Is there a better way to index this kind of data?
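If the blocks never overlap, one common rewrite (a sketch, not from this thread) avoids the open-ended BETWEEN: the only block that can contain the address is the one with the greatest IpFrom at or below it, so a TOP 1 with a backward-ordered seek touches a single row:

DECLARE @nIpNumber bigint = 3232235777; -- example value (192.168.1.1)

SELECT TOP 1
       l.Country,
       l.City
FROM dbo.GeoIpBlock AS b
JOIN dbo.GeoIpLocation AS l
  ON l.Id = b.GeoIpLocation_Id
WHERE b.IpFrom <= @nIpNumber
  AND b.IpTo   >= @nIpNumber  -- rejects the address if it falls in a gap between blocks
ORDER BY b.IpFrom DESC;

With the existing PK on (IpFrom, IpTo) this seeks to @nIpNumber and reads backwards one row instead of scanning half the table.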
View 9 Replies
Jan 23, 2015
I have a table that has a clustered index that is only the identity column on the table. The table is somewhere around 200K rows and has 3800 pages in the index. We run our index maintenance every other day on this database using Ola's scripts and this index is rebuilt because it is 40-60% fragmented after 2 days. Overall, this isn't really too much of a problem since the index rebuild doesn't take too long, but I am puzzled as to how this index is getting fragmented since the only column in it is the identity.
CREATE TABLE [dbo].[Example](
[ExampleID] [int] IDENTITY(1,1) NOT NULL,
[ExampleCode] [varchar](10) NOT NULL,
[ForeignID] [int] NOT NULL,
[AnotherID] [int] NOT NULL,
[code]...
) ON [PRIMARY]

There is nothing strange like updates to the identity happening, and while some records are deleted, there have only been about 20,000 in the life of the table (months). Not enough to account for the level of fragmentation that we're seeing on the index. About the only things I can think of that would cause fragmentation on this index in this scenario are:
1. Page splits caused by starting with a small value in one of the VARCHARs and later inserting a larger value
2. Page splits caused by the NULLABLE column, ExampleDate, starting with NULLs and later updating them to a date.
For #1, I had development check the update scenarios for the varchar columns, especially the varchar(1000) one, and they didn't see it as a common thing where the values would go from small (or empty) to large.
For #2, I checked and found that the only value for that column in the table is NULL so while it always starts as NULL, it never gets updated to anything else.
I've tried looking at sys.dm_db_index_operational_stats and the leaf_update_count is around 300,000, but unless those updates are causing page splits, I don't see how they would contribute to fragmentation.
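One more column from that same DMV can help confirm or rule out splits (a sketch; dbo.Example stands in for the real table): leaf_allocation_count counts leaf-page allocations, and for an index each such allocation corresponds to a page split, although ordinary end-of-index allocations from appends are counted too.

SELECT OBJECT_NAME(os.object_id) AS table_name,
       os.index_id,
       os.leaf_insert_count,
       os.leaf_update_count,
       os.leaf_allocation_count -- high relative to inserts suggests mid-index splits
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.Example'), NULL, NULL) AS os;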
View 9 Replies
Sep 17, 2007
I have a non-clustered index on a table. If I rebuild or reorganize it in SQL 2005, the total fragmentation percent reported by properties/fragmentation on the index stays at 33%.
Why doesn't the fragmentation go to 0% ?
If I completely drop and recreate the index, it starts even higher, but a reorg or rebuild simply brings it to 33%. This happens even when using the sort-in-tempdb option.
View 1 Replies
Dec 11, 2007
I have been reworking my index maintenance jobs from my old SQL 2000 table and view references to the DMVs and system tables in SQL 2005, and I noted that some of my indexes end up being more fragmented after a reorganization and/or rebuild. That doesn't make much sense to me at all. The code I am executing is:
print ' '
print '************* Beginning Index Updates for '+db_name()+' *************'
print ' '

DECLARE @tablename varchar(250),
        @indexname varchar(250),
        @fragpcnt  decimal(18,1),
        @indexid   int,
        @dbID      int

-- Determine DB ID
SELECT @dbID = DB_ID()

DECLARE tnames_cursor CURSOR FOR
    SELECT b.name, c.name, a.avg_fragmentation_in_percent, a.index_id
    FROM sys.dm_db_index_physical_stats (@dbID, NULL, NULL, NULL, NULL) a
    JOIN sys.indexes b ON a.object_id = b.object_id
                      AND a.index_id = b.index_id
    JOIN sys.objects c ON b.object_id = c.object_id
    WHERE a.index_id > 0
    ORDER BY a.page_count DESC

OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @indexname, @tablename, @fragpcnt, @indexid
WHILE (@@fetch_status = 0)
BEGIN
    -- Declare and determine the table ID
    DECLARE @tablenameID int
    SELECT @tablenameID = object_id(@tablename)

    IF @fragpcnt > 30
    BEGIN
        EXEC('ALTER INDEX ['+@indexname+'] ON ['+@tablename+'] REBUILD')
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was rebuilt.'
        PRINT 'Original fragmentation percent: ' + convert(varchar, @fragpcnt) + '%'
        SELECT @fragpcnt = avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats (@dbID, @tablenameID, @indexid, NULL, NULL) a
        JOIN sys.indexes b ON a.object_id = b.object_id
                          AND a.index_id = b.index_id
        JOIN sys.objects c ON b.object_id = c.object_id
        PRINT 'Post-rebuild fragmentation percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END
    ELSE IF @fragpcnt BETWEEN 5 AND 30
    BEGIN
        EXEC('ALTER INDEX ['+@indexname+'] ON ['+@tablename+'] REORGANIZE')
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was reorganized.'
        PRINT 'Original fragmentation percent: ' + convert(varchar, @fragpcnt) + '%'
        SELECT @fragpcnt = avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats (@dbID, @tablenameID, @indexid, NULL, NULL) a
        JOIN sys.indexes b ON a.object_id = b.object_id
                          AND a.index_id = b.index_id
        JOIN sys.objects c ON b.object_id = c.object_id
        PRINT 'Post-reorganization fragmentation percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END
    ELSE
    BEGIN
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was left alone.'
        PRINT 'Original fragmentation percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END

    FETCH NEXT FROM tnames_cursor INTO @indexname, @tablename, @fragpcnt, @indexid
END

print ' '
print '************* NO MORE TABLES TO INDEX *************'
PRINT 'All indexes for the '+db_name()+' database have been updated.'
print ' '

CLOSE tnames_cursor
DEALLOCATE tnames_cursor
Below are some snippets of the output:
***************************************************
Index _dta_index_wuci_history_8_1123587141__K2_K5 was rebuilt.
Original fragmentation percent: 58.3%
Post-rebuild fragmentation percent: 58.3%
***************************************************
Index PK__batchjob__776C5C84 was left alone.
Original fragmentation percent: 0.0%
***************************************************
Index PK__ContactWebDetail__116A8EFB was rebuilt.
Original fragmentation percent: 44.4%
Post-rebuild fragmentation percent: 77.8%
***************************************************
Index PK__managed_object_s__5DCAEF64 was left alone.
Original fragmentation percent: 0.0%
***************************************************
Index kb_IX_kb_scope_scope_role was rebuilt.
Original fragmentation percent: 75.0%
Post-rebuild fragmentation percent: 87.5%
***************************************************
Index PK__query__09A971A2 was left alone.
Original fragmentation percent: 0.0%
***************************************************
Index PK__email_message__38996AB5 was rebuilt.
Original fragmentation percent: 85.7%
Post-rebuild fragmentation percent: 0.0%
***************************************************
If the index begins with PK, then it is the primary key index which is generally the clustered index on the table, but not always. If it has an IX on it, it is generally a non-clustered index on the table, but again not always. In the case of the above, the PK is a clustered index, and the IX is a non-clustered index.
Anyone have any ideas why this is functioning in this manner?
Thanks,
Jon
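One observation on the script above (a possibility, not a confirmed diagnosis): passing NULL as the last argument to sys.dm_db_index_physical_stats scans in LIMITED mode, and small indexes can report stubborn or even "worse" fragmentation that a rebuild cannot improve. Re-checking a suspect index with an explicit mode and its page count is cheap (the table name here is guessed from the index name, so treat it as hypothetical):

SELECT ps.index_id, ps.page_count, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.wuci_history'),
                                    NULL, NULL, 'SAMPLED') AS ps
WHERE ps.index_id > 0;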
View 6 Replies
Jun 26, 2015
In my database every table has an identity column as its PK.
Today I checked index fragmentation and saw that many clustered and non-clustered indexes have average fragmentation around 99%. I thought that with an identity column records are always inserted at the end, so no fill factor needs to be assigned.
So in this case, do I need to set a fill factor for the clustered and non-clustered indexes?
If yes, then how much for the PK: 95%, or what? And the same for the non-clustered indexes: should it be around 85 to 90?
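A sketch of what setting it would look like during the next rebuild (the 90 is illustrative, not a recommendation; only pages that take mid-index inserts or growing updates benefit from free space):

-- Leave 10% free space on leaf pages of an index that splits between rebuilds.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 90);

For an identity-ordered clustered index that only ever appends, the default of 0 (equivalent to 100) is usually fine, since new rows never land in the middle of existing pages.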
View 4 Replies
Sep 4, 2007
I am using sys.dm_db_index_physical_stats to identify indexes that need to be rebuilt based on a fragmentation limit. Once identified, I execute an ALTER INDEX... REBUILD on the index. If the index is clustered, only that index gets rebuilt for the table. After all the indexes are complete, I receive a report on the indexes that were rebuilt in the databases and what level of fragmentation each index was at before the rebuild. After checking these indexes, I still see that all the primary key indexes are at the same fragmentation level. I run the process again and it does not change. I updated table usage and also ran update statistics after running the rebuild again, but the fragmentation does not change. Why can't these PK clustered indexes be rebuilt as expected? Do I need to drop and recreate the PK before the fragmentation changes?
View 7 Replies
Dec 17, 2014
I've noticed that indexes in my AX 2012 database get very fragmented very quickly. Is this normal, and should I just defragment them regularly, or is there a possible issue here that needs investigating?
View 6 Replies
Mar 10, 2007
I thought I would delve into index fragmentation and I found some great SQL from many posters (thanks Erland!).

My question is: how bad is bad? I know this is very subjective. Some scripts I found would reindex if the LogicalFragmentation is over 30%. I have some tables that are 98% (I'm guessing really bad). I know it all depends.

More as a learning point: I found a table that had over 30% logical fragmentation. I dropped the indexes, recreated them, then ran the script that used this code segment:

'DBCC SHOWCONTIG(' + @TableName + ') WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

In one case, the indexes for the table dropped below 30%; in another case the index was still fragmented even after I dropped and re-created it.

SQL Server 2005 x64 SP2

This is the script I am running (I found this in another thread that Erland posted):

SET NOCOUNT ON
USE ds_v6_source

DECLARE @TableName VARCHAR(100)

-- Create a table to hold the results of DBCC SHOWCONTIG
IF OBJECT_ID('Tempdb.dbo.#Contig') IS NOT NULL
    DROP TABLE #Contig

CREATE TABLE #Contig (
    [ObjectName] VARCHAR(100), [ObjectId] INT, [IndexName] VARCHAR(200),
    [IndexId] INT, [Level] INT, [Pages] INT, [Rows] INT,
    [MinimumRecordSize] INT, [MaximumRecordSize] INT, [AverageRecordSize] INT,
    [ForwardedRecords] INT, [Extents] INT, [ExtentSwitches] INT,
    [AverageFreeBytes] NUMERIC(6,2), [AveragePageDensity] NUMERIC(6,2),
    [ScanDensity] NUMERIC(6,2), [BestCount] INT, [ActualCount] INT,
    [LogicalFragmentation] NUMERIC(6,2), [ExtentFragmentation] NUMERIC(6,2)
)

DECLARE curTables CURSOR STATIC LOCAL
FOR
    SELECT Table_Name
    FROM Information_Schema.Tables
    WHERE Table_Type = 'BASE TABLE'

OPEN curTables
FETCH NEXT FROM curTables INTO @TableName
SET @TableName = RTRIM(@TableName)
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #Contig
        EXEC('DBCC SHOWCONTIG(' + @TableName + ') WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT FROM curTables INTO @TableName
END
CLOSE curTables
DEALLOCATE curTables
View 8 Replies
Jul 20, 2005
Hello everyone. We dropped the clustered & nonclustered indexes on a table, then rebuilt them. Logical fragmentation is near zero, but extent fragmentation is about 40%. How can this be if the indexes are brand new?
View 2 Replies
Apr 26, 2015
We have a database with a table that contains around 180m records, and each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid daily rebuilds of the indexes, which is what our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
[EAN] [bigint] NOT NULL,
[Day] [smalldatetime] NOT NULL,
[State] [int] NOT NULL,
[Quantity] [int] NULL,
[StockValue] [float] NULL,
CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table (see the sketch after this list)
3. Remove the non-clustered Indexes and let the clustered index for the primary key do the work.
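For option 2, a date-based scheme is the usual shape (a sketch only: the boundary values and single-filegroup mapping are illustrative, and the clustered index would need to include [Day] for the table to sit on the scheme):

-- Partition by month so daily INSERTs land in the newest partition
-- and index maintenance can target just that partition.
CREATE PARTITION FUNCTION pf_InventoryDay (smalldatetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_InventoryDay
AS PARTITION pf_InventoryDay ALL TO ([PRIMARY]);

-- After rebuilding the table on ps_InventoryDay([Day]), maintenance narrows to:
-- ALTER INDEX PK_Inventory ON dbo.Inventory REBUILD PARTITION = 4;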
View 9 Replies
Feb 22, 2000
I have a database approximately 30 GB in size which needs to be moved from one SQL server to another. Does anyone know the most efficient way of doing this, other than backing up to tape?
View 1 Replies
Jul 16, 2007
Hi experts,
We have SQL Server 2005 installed in MS Windows server 2003 with 8 GB RAM. This server has 4 processors.
Ours is a VLDB and a single table has 400 million records occupying nearly 40 GB of space.
We find it very difficult to meet the response time set by the clients on many occasions.
Should the RAM be at least as big as the biggest table in the database? Is this mandatory?
Any other suggestions for improving the performance are welcome.
Thanks & Regards,
Hariarul
View 15 Replies
Jul 31, 2007
Hello All,
I have been experimenting with SQL Server 2005 partitions. I loaded a terabyte of information into 2 tables. The first holds the document information and the second holds the actual binary document (in this case pdf). Most of the documents are about 1 megabyte in size, but the largest is 212 megabytes.
SQL Server has no problem storing the blobs. The problem occurs when I attempt to get the data.
I did some quick tests to test how fast I could pull the documents out. The largest took about 24 seconds. The 1 meg documents are sub-second.
Here is how the 212 meg doc breaks down:
Time to load datatable: 18.79 seconds
Time to load byte array: 3.84 seconds
Time to Write and open document: 0.01 seconds
If I access the file from a file server, the time is 0.04 seconds to begin showing the document.
As you can see, the longest time period is related to retrieving the data from SQL, and it is much slower than launching it from disk across the network. (Note: the SQL server and file server used to test are next to each other.)
My question is, how can I speed up the access from SQL Server? I believe the keys are "partition aligned". Any suggestions would be appreciated.
I will add the table definitions and partition information as a reply since only 5000 chars are allowed in the post.
View 12 Replies
Oct 11, 2006
Hi There
I have a 50Gig OLTP production database that currently takes +- 50 minutes to backup, (normal sql flat file backup to disk).
This database will grow to +- a terabyte by next year.
My major concern is how I will be able to back up this DB in 2 hours or less when it is that big.
I have been checking out my options in terms of SAN snapshots/clones, as well as multiple backup devices and a differential/filegroup/full backup strategy.
What I want to know is: if anyone out there is backing up VLDBs, what strategy/methods/tools are you using, even 3rd-party tools, for faster, smaller backups?
Any pointers/best practices for VLDB backups would be greatly appreciated.
Thanx
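Striping the full backup across several devices is one of the simpler wins and combines with the differential strategy already mentioned (a sketch: paths are placeholders, and WITH COMPRESSION requires SQL Server 2008+ or a third-party tool on 2005):

-- Full backup striped over four files on separate spindles/LUNs.
BACKUP DATABASE MyVLDB
TO DISK = 'G:\backup\MyVLDB_1.bak',
   DISK = 'H:\backup\MyVLDB_2.bak',
   DISK = 'I:\backup\MyVLDB_3.bak',
   DISK = 'J:\backup\MyVLDB_4.bak'
WITH COMPRESSION, CHECKSUM;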
View 4 Replies
Jan 13, 2000
I heard that SQL Server 7.0 has problems when the database reaches 50 - 100GB, in areas such as backup, transaction logging, and database administration, and that by 100GB parallel queries are also affected.
Is this true? Where can I get information on this?
Thanks in Advance.
Regards,
Vidyadhar
View 1 Replies
Sep 14, 1998
Hello
Does anyone have experience/advice with large databases (5-10 Gig)? If so, I was wondering about
performance/other benefits of spanning a large database across multiple devices (different disks). Would anyone
vote for or against doing this?
Suggestions...
Thanks
Tim
View 1 Replies
Jul 20, 2005
I have a production database that is in the low gigabytes in size and growing steadily. No issue there. I wish to completely refresh the development database daily on a second server. What is going to be the fastest, easiest way to do this without hindering performance on the production system?
Thanks,
Craig
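The usual pattern is to restore the full backup production is already taking, so the refresh reads nothing from the production server itself (a sketch; paths and logical file names are assumptions):

-- On the development server, overwriting the previous dev copy:
RESTORE DATABASE MyAppDev
FROM DISK = '\\backupshare\MyApp\MyApp_full.bak'
WITH REPLACE,
     MOVE 'MyApp_Data' TO 'D:\data\MyAppDev.mdf',
     MOVE 'MyApp_Log'  TO 'E:\log\MyAppDev.ldf';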
View 2 Replies
Jul 20, 2007
I've got a few VLDBs that we want to make smaller. Since the tables are running on legacy stuff, all of it's basically made with ints and chars and it's horribly inefficient.
The problem that I came across is that when I made a new table with the best data types and copied the data from the old table, the new table was the exact same size (excluding the index size). It was estimated that a total of ~20 GB would be saved with this change. As it turned out, 0 bytes of data were saved with the data type changes.
Why are the two tables the same size, even though one has much more efficient data types?
If you want more information about the table I'm using:
391 columns.
50,147,035 rows.
65,295.625 MB in size.
View 3 Replies
Feb 4, 2015
I have been using AlwaysOn AGs for a long time now and currently have about 10TB of data across 120 databases and 3 AG groups for any application that is on SQL 2012, with great success. Each AG group is running on patch level 11.0.5058.0 with 2 synchronous replicas (on different SANs) in the primary data center and 1 async replica in DR. Migration has been a non-issue because none of the databases were substantial enough that they could not fit into my maintenance window, which is 12-4AM on Saturday morning.
My issue is that my last application to migrate to 2012 includes a 4TB TDE-encrypted database which is about 10x larger than any of the previous ones I have migrated. The database takes 4 hours to back up after tuning extensively (I hate TDE!!).
The restore to the primary replica is instant because of incremental seeding, but the issue comes from having to back up the database before adding it to the availability group. 4 hours is my exact outage window and I can't get any more. My plan to migrate the application is:
First Outage Window
1) Restore Database from 2008 to 2012 Primary Replica
2) Change application A record (or CNAME, not sure which) to primary replica
3) Run database on single node until next outage window
Week Later
1) Add database to availability group
2) Change ARECORD/CNAME to listener
What I don't like about this is that I am going an entire week with 1 node instead of 3, which is worrisome. How would you accomplish this? I would love to hear any comments from people who have worked with VLDBs in availability groups and what you liked/hated/loved about doing it. I am trying to go all in on this software and have loved it so far, but I am getting worried when it comes to the VLDB migration.
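For what it's worth, the week-later step itself is two statements; all the cost is in getting the secondaries seeded beforehand (a sketch with assumed names):

-- On the primary replica:
ALTER AVAILABILITY GROUP [AG_Prod] ADD DATABASE [BigTdeDb];

-- On each secondary, after restoring the full backup and log backups WITH NORECOVERY:
-- ALTER DATABASE [BigTdeDb] SET HADR AVAILABILITY GROUP = [AG_Prod];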
View 4 Replies
May 1, 2007
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set for FULL recovery mode ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_copyoftablewithnewfield
(
) ON PRIMARY
IF EXISTS(SELECT * FROM dbo.originaltable)
EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable',
'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>
COMMIT
Pros: Lets me add the new field in the spot I'd like it (not a big deal)
Cons: Tons of wasted space and time. It took about 15 hours.
Method 2:
SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>
ALTER TABLE [dbo].[originaltable] ADD
[newfield] [tinyint] NOT NULL CONSTRAINT [DF_originaltable_newfield] DEFAULT
((1))
<recreate PK constraint>
<rebuild indexes>
COMMIT TRANSACTION
Pros: No making a copy of the entire table, taking up 200GB more space in the db data file
Cons: My tempdb grew to accommodate the row versioning info for every row in the 200GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF and then trying method 2 again which should stop the versioning in tempdb and then turning it back on.
I was also curious if setting the database recovery mode to SIMPLE would cut down on db log usage and then I could set it back to FULL when done.
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
Given the amount of time this conversion takes, I wanted to get some
feedback other than "just try it" before doing any new tests.
Thanks.
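For the two ideas above, both switches are single statements (a sketch; note that leaving FULL recovery breaks the log backup chain, so a fresh full backup afterwards is needed):

-- Before the conversion:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION OFF; -- stops tempdb row versioning
ALTER DATABASE MyDb SET RECOVERY SIMPLE;              -- log truncates on checkpoint

-- ... run Method 2 here ...

-- After the conversion:
ALTER DATABASE MyDb SET RECOVERY FULL;
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
BACKUP DATABASE MyDb TO DISK = 'G:\backup\MyDb_postconvert.bak'; -- restart the log chain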
View 3 Replies
Mar 24, 2008
Hi,
I'm a Jr. DBA and have been given an assignment by my lead to find information on the following.
We are to migrate existing db of size 4TB to a
DELL PowerEdge 2950[Mem:Up to 32GB]
OS : Windows Server 2003 Std Edition X64 SP2
DB : SQL Server Enterprise Edition x64
I am to find out how to design the db to provide optimum performance and failover, and to consider the growth factor of the db.
1) What would be the recommended RAID settings?
2) Placement of the tempdb?
3) Should we do clustering, and why?
4) What would data partitioning do to help?
5) Any other aspects to be considered for sizing the db?
6) Placement of data files and log file on separate physical disks?
7) Indexing?
I have read many sites. I would appreciate it if someone could write suggestions and opinions based on their current db design specs or previous experience, selecting the best db design points. Thank you.
View 8 Replies
Nov 14, 2001
Hi,
I would like to delete data from a 750 million row table in chunks of 10,000, without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time.
An answer to this is highly appreciated.
Thanks
Samna
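On SQL Server 2000 this is typically done with SET ROWCOUNT; from 2005 on, DELETE TOP is the idiom. A sketch with a hypothetical table and purge predicate:

-- Delete in 10,000-row batches so each transaction stays short and
-- blocking stays bounded; the pause lets other work through and lets
-- log backups truncate the log between batches.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedDate < '2000-01-01'; -- hypothetical purge predicate
    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';
END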
View 3 Replies