Slow Insert, Logical Fragmentation
Mar 23, 2006
We had a table with logical fragmentation of 50%. After rebuilding the indexes (with the default fill factor of 0) I noticed that inserts are much faster. If my page density is 100%, wouldn't I get more page splits? I know I am missing something fundamental here. Could someone get me back on track?
Table size: 1.5 million rows
Insert size: 70K rows
Before rebuild: 15 minutes
After rebuild: 3 minutes
Index: compound clustered index across varchar columns; there are also a couple of non-clustered indexes
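For reference, a minimal sketch of the check-and-rebuild cycle described above; the table name is a placeholder, not the poster's actual object:
-- Check logical fragmentation and page density for the table
DBCC SHOWCONTIG ('MyTable')
-- Rebuild all indexes on it; a fill factor of 0 keeps the default, which packs leaf pages full
DBCC DBREINDEX ('MyTable', '', 0)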
View 11 Replies
Aug 16, 2007
I have a table with 50% logical scan fragmentation [according to DBCC SHOWCONTIG (myTable)].
Why, after running DBCC INDEXDEFRAG (myDB, myTable), does it still sit at 50%?
Why isn't it lower?
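A minimal before-and-after check along these lines (database and table names as in the post; the index id is a placeholder):
DBCC SHOWCONTIG (myTable) -- note Logical Scan Fragmentation
DBCC INDEXDEFRAG (myDB, myTable, 1) -- 1 = id of the clustered index
DBCC SHOWCONTIG (myTable) -- compare; small or mixed-extent tables often barely move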
View 7 Replies
Jun 1, 2007
We had some slowdown complaints lately and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting profile statistics on shows that every time it executes it does an index seek.
A Profiler session showed a huge number of reads for these queries depending on the value being looked up: 1,500 through 50,000. I set profile IO on and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.
Three of the columns in the select statement are text columns. When I take them out of the query the LOB logical reads drop to 0, and they go up incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
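One mitigation sometimes suggested when LOB logical reads dominate on legacy text columns is the 'text in row' table option, which stores values up to the given size on the data page instead of in separate LOB pages. Whether it helps depends on how small the text values actually are; a hedged sketch:
-- Allow text values up to 256 bytes to be stored in-row (256 is a placeholder limit)
EXEC sp_tableoption 'profiles', 'text in row', '256'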
View 3 Replies
Apr 25, 2002
Everyone I have spoken to is stumped by this one. Thanks in advance for the right solution!
The insert trigger I created works fine (well, nearly fine), except that AFTER the first insert operation (i.e. on the second, third, etc.), it always produces the correct results BUT FOR THE PREVIOUSLY INSERTED ROW. It is as if there is a latency of one row in the INSERTED pseudo-table.
I would greatly appreciate a 'why', and more importantly, a 'how to fix it' for this problem.
If you need to look at the code, a NOTEPAD file is attached.
Much appreciated
--start trigger--
create trigger UpdateAffiliateEarnings
on Orders
for insert, update
as
--declare variables
declare @ProductType varchar(15),
@AffID int,
@Earnings money,
@CurrentEarnings money,
@AffTotalEarnings money,
@AffTotalPayments money,
@AffOutstandingBalance money
--check existence of affiliateid, and for product type
select @AffID=AffID, @ProductType=Source
from Orders
where AffID IS NOT NULL
--get relevant information
if @AffID IS NOT NULL
begin
if @ProductType = 'FLOWER'
begin
select @Earnings=CONVERT(money,ChargedAmount*CommissionRate)
from Orders o,FlowerOrder t,Commission c
where o.OrderID = t.OrderID
and t.CommissionID = c.CommissionID
and PaymentConfirmedYN='YES'
end
if @ProductType='PHONE'
begin
select @Earnings=CONVERT(money,ChargedAmount*CommissionRate)
from Orders o,PhoneOrder t,Commission c
where o.OrderID = t.OrderID
and t.CommissionID = c.CommissionID
and PaymentConfirmedYN='YES'
end
--update Affiliate account
--get totals to update
select @CurrentEarnings=AffTotalEarnings, @AffTotalPayments=AffTotalPayments
from Affiliates
where AffID=@AffID
--calculate new totals for affiliate account
set @AffTotalEarnings=@CurrentEarnings+@Earnings
set @AffOutstandingBalance=@AffTotalEarnings-@AffTotalPayments
--update affiliate account to new totals
update Affiliates
set AffTotalEarnings=@AffTotalEarnings, AffOutstandingBalance=@AffOutstandingBalance
where AffID=@AffID
--roll back the transaction if there is an error
if @@ERROR !=0
rollback tran
end
-- end of trigger --
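For what it's worth, the usual cause of this one-row-behind symptom is that the trigger reads from the base table (Orders) instead of the inserted pseudo-table, so the scan returns whichever row it happens to find rather than the row just inserted. A sketch of the standard pattern, still assuming one row per statement like the original:
-- Read the newly inserted row from the inserted pseudo-table, not from Orders
select @AffID = i.AffID, @ProductType = i.Source
from inserted i
where i.AffID IS NOT NULL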
View 1 Replies
Feb 22, 2008
Hi All
I decided to change over from a Microsoft Access database file to the new SQL Server Compact Edition. Although the reading of data from the database is greatly improved, the inserting of new rows is extremely slow.
I was getting between 60 and 70 rows per sec using OLEDB and an Access database, but am now only getting 14 to 27 rows per sec using SQL Server CE.
I have tried the code changes below and nothing seems to increase the speed. Any help appreciated, as I would prefer to use SQL Server CE: the database is much smaller and I'm used to SQL commands.
Details:
VB2008 Pro
.NET Frameworks 2.0
SQL Compact Edition V3.5
Encryption = Engine Default
Database Size = 128 MB (but needs to be changed to 999 MB)
Backup_On_Next_Run, OverWriteQuick and CompressAns are Booleans; all other columns are nvarchar, size 10 to 30, except for Full_Folder_Address, which is size 260.
TRY 1
Direct insert using the data adapter.
Me.BackupDatabaseTableAdapter1.Insert(Group_Name1, Full_Folder_Address1, File1, File_Size_KB1, Schedule_To_Run1, Backup_Time1, Last_Run1, Result1, Last_Modfied1, Latest_Modfied1, Backup_On_Next_Run1, Total_Backup_Times1, Server_File_Number1, Server_Number1, File_Break_Down1, No_Of_Servers1, Full_File_Address1, OverWriteQuick, CompressAns)
14 to 20 rows per second (Was 60 to 70 when using OLEDB Access)
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
Dim conn As SqlCeConnection = Nothing
Dim CommandText1 As String = "INSERT INTO BackupDatabase (Group_Name, Full_Full_Folder_Adress, File1,File_Size_KB, Schedule_To_Run, Backup_Time, Last_Run, Result, Last_Modfied, Latest_Modfied, Backup_On_Next_Run, Total_Backup_Times, Server_File_Number, Server_Number, File_Break_Down, No_Of_Servers, Full_File_Address, OverWrite, Compressed) VALUES ('" & Group_Name1 & "', '" & Full_Folder_Address1 & "', '" & File1 & "', '" & File_Size_KB1 & "', '" & Schedule_To_Run1 & "', '" & Backup_Time1 & "', '" & Last_Run1 & "', '" & Result1 & "', '" & Last_Modfied1 & "', '" & Latest_Modfied1 & "', '" & CStr(Backup_On_Next_Run1) & "', '" & Total_Backup_Times1 & "', '" & Server_File_Number1 & "', '" & Server_Number1 & "', '" & File_Break_Down1 & "', '" & No_Of_Servers1 & "', '" & Full_File_Address1 & "', '" & CStr(OverWriteQuick) & "', '" & CStr(CompressAns) & "')"
Try
conn = New SqlCeConnection(strConn)
conn.Open()
Dim cmd As SqlCeCommand = conn.CreateCommand()
cmd.CommandText = "SELECT * FROM BackupDatabase"
Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)
Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()
rec.SetString(1, Group_Name1)
rec.SetString(2, Full_Folder_Address1)
rec.SetString(3, File1)
rec.SetSqlString(4, File_Size_KB1)
rec.SetSqlString(5, Schedule_To_Run1)
rec.SetSqlString(6, Backup_Time1)
rec.SetSqlString(7, Last_Run1)
rec.SetSqlString(8, Result1)
rec.SetSqlString(9, Last_Modfied1)
rec.SetSqlString(10, Latest_Modfied1)
rec.SetSqlBoolean(11, Backup_On_Next_Run1)
rec.SetSqlString(12, Total_Backup_Times1)
rec.SetSqlString(13, Server_File_Number1)
rec.SetSqlString(14, Server_Number1)
rec.SetSqlString(15, File_Break_Down1)
rec.SetSqlString(16, No_Of_Servers1)
rec.SetSqlString(17, Full_File_Address1)
rec.SetSqlBoolean(18, OverWriteQuick)
rec.SetSqlBoolean(19, CompressAns)
rs.Insert(rec)
Catch e As Exception
MessageBox.Show(e.Message)
Finally
conn.Close()
End Try
End Sub
20 to 24 rows per sec
TRY 3
Using SQL Commands Directly
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
Dim conn As SqlCeConnection = Nothing
Dim CommandText1 As String = "INSERT INTO BackupDatabase (Group_Name, Full_Full_Folder_Adress, File1,File_Size_KB, Schedule_To_Run, Backup_Time, Last_Run, Result, Last_Modfied, Latest_Modfied, Backup_On_Next_Run, Total_Backup_Times, Server_File_Number, Server_Number, File_Break_Down, No_Of_Servers, Full_File_Address, OverWrite, Compressed) VALUES ('" & Group_Name1 & "', '" & Full_Folder_Address1 & "', '" & File1 & "', '" & File_Size_KB1 & "', '" & Schedule_To_Run1 & "', '" & Backup_Time1 & "', '" & Last_Run1 & "', '" & Result1 & "', '" & Last_Modfied1 & "', '" & Latest_Modfied1 & "', '" & CStr(Backup_On_Next_Run1) & "', '" & Total_Backup_Times1 & "', '" & Server_File_Number1 & "', '" & Server_Number1 & "', '" & File_Break_Down1 & "', '" & No_Of_Servers1 & "', '" & Full_File_Address1 & "', '" & CStr(OverWriteQuick) & "', '" & CStr(CompressAns) & "')"
Try
conn = New SqlCeConnection(strConn)
conn.Open()
Dim cmd As SqlCeCommand = conn.CreateCommand()
cmd.CommandText = CommandText1
'cmd.CommandText = "INSERT INTO BackupDatabase (…"
cmd.ExecuteNonQuery()
Catch e As Exception
MessageBox.Show(e.Message)
Finally
conn.Close()
End Try
End Sub
25 to 30, but mostly holds around 27 rows per sec
Is this the best you can get, or is there a better way? Any help would be greatly appreciated.
Kind Regards
John Galvin
View 3 Replies
Jul 23, 2005
I have a stored procedure that can take over 5 seconds to add simple data. Please give any advice on optimizing.
Here are the details:
***** Access Info *****
~10 rows are added to the table every second
the table is read ~20 times a second (with no lock)
***** Stored Procedures *****
-------
CREATE PROCEDURE dbo.getMessagesForSR
@serviceRequestId numeric
AS
select *
from SRMessages with (nolock)
where srid = @serviceRequestId
order by SRMessageID
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
--------
CREATE PROCEDURE dbo.addMessage
@srid numeric,
@timestamp bigint,
@type smallint,
@initiator nvarchar (100),
@data ntext
AS
INSERT INTO SRMessages VALUES (@srid, @timestamp, @type, @initiator, @data)
GO
---------
***** The Table *****
CREATE TABLE [dbo].[SRMessages] (
[SRMessageId] [bigint] IDENTITY (1, 1) NOT NULL ,
[srid] [bigint] NOT NULL ,
[Timestamp] [bigint] NOT NULL ,
[Type] [smallint] NOT NULL ,
[Initiator] [nvarchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Data] [ntext] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[SRMessages] ADD
CONSTRAINT [PK_SRMessages] PRIMARY KEY NONCLUSTERED ([SRMessageId]) ON [PRIMARY]
GO
CREATE INDEX [SRMessages2] ON [dbo].[SRMessages] ([srid]) ON [PRIMARY]
GO
View 1 Replies
Aug 16, 2006
Hi. It takes 4 minutes to insert 40,000 records on one SQL Server and 40 seconds on another. The slower one runs on Windows 2000 Terminal Server and the faster one on Windows 2000. The slower machine has 2 GB of RAM, the faster one 512 MB!!! How do I diagnose what's going wrong on the slower server? I can't change the application that does the insertions.
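One hedged way to compare the two servers is to look at I/O stalls on the database files; on SQL Server 2000 the function takes explicit database and file ids (the 7 and 1 below are placeholders):
-- Cumulative reads/writes and IoStallMS (time spent waiting on I/O) for one file
SELECT * FROM ::fn_virtualfilestats (7, 1)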
View 4 Replies
Sep 9, 2007
Hi,
I have a table with 110 nvarchar(255) columns in the database. I receive the data from the server (an array) and then loop through it to insert the data into the database. The problem is that it takes more than 3 minutes to insert (or update) (in memory) the 310 records it got from the array. I am reusing the Statement (with parameters). How could I improve the performance?
It's really a small quantity of data, and the CPU in the device is 520 MHz... it should be all right!
Can anyone help me please?
Thanks
View 5 Replies
Mar 7, 2005
Hello,
I don't know what to do anymore ;o(
I've got 2 servers, with SQL Server 2000 SP3 and MS Windows 2003 Server.
I've written a very simple stored procedure to insert 20,000 rows into a very simple table TEST (id int, msg varchar(50)).
On the first server (P-IV 2 GHz), it takes 700 ms per 1000 insertions,
and on the second (2x Xeon 2.6 GHz), it takes 13 s per 1000 insertions...
(the insertion is: INSERT INTO TEST (id, msg) VALUES (@id, 'dummy text'))
...
SQL Server was installed in exactly the same way on both...
What could I do to see where the problem is? With Profiler, I see no difference while logging all events...
Please help or give ideas.
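A hedged suggestion for narrowing this down on SQL Server 2000: compare wait statistics on the two servers while the procedure runs (the counters are cumulative, so clear them first):
DBCC SQLPERF (WAITSTATS, CLEAR) -- reset the counters
-- run the insert procedure here
DBCC SQLPERF (WAITSTATS) -- a large WRITELOG wait on the slow box would point at the log disk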
View 1 Replies
Apr 6, 2007
I support a website (ASP.NET 2.0) where recently users have been unable to insert data due to timeout issues. The functionality executes a query that inserts a single row into a SQL Server 2005 db table. I tried running this query from the backend, and it took 4:48 to insert a single row! Interestingly, after that initial agony any similar inserts I tried took no time whatsoever. I have checked the execution plan, but unfortunately don't really know what I'm looking for with inserts, as most of my experience with execution plans is with select statements. Any resources anyone could point me to for troubleshooting this would be much appreciated.
Thanks,
-Dave
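The first thing worth ruling out in a case like this (a hedged suggestion, since the post doesn't say what else touches the table) is blocking or an open transaction holding locks while the insert waits:
-- While the insert is hanging, look for a blocker
EXEC sp_who2 -- the BlkBy column shows which SPID is blocking
DBCC OPENTRAN -- shows the oldest active transaction in the current database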
View 3 Replies
Dec 26, 2007
Hi everybody!!! I have a problem with SQL Server 2000 and I need some help.
I have a PC with the following hardware configuration:
+ RAM: 512 MB
+ CPU: 2.4 GHz (Intel Pentium 4)
+ System: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
(call this Machine1)
and a server machine with the following hardware configuration:
+ RAM: 2 GB
+ CPU: 3.6 GHz (Intel Xeon)
+ System: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 2
(uses RAID 5)
(call this Machine2)
SQL Server 2000 is installed on both machines.
I created one table and one script to insert several records into that table. The insert script:
declare @count int , @max int
set @count =1
set @max = 5000
WHILE (@count <= @max)
BEGIN
insert into TBAT_STOCKEOD(TRDT,SEQNO,COMPID,TRANSTYPE) values('20071219',cast(@count as varchar(5)),'13215','1')
set @count = @count +1
END
I executed this script on Machine1 and Machine2:
+ Machine1: took 3 seconds.
+ Machine2: took 30 seconds.
I don't understand why Machine2 is slower than Machine1, even though Machine2's configuration is better. I don't know what is happening.
I checked resource usage in Perfmon while SQL Server executed the script, with the following result:
Machine1: processor usage at 100%
whereas Machine2: physical disk usage at 100%
Is there a way to configure SQL Server to improve the disk write speed?
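A hedged sketch of one common fix: the loop above commits, and therefore flushes the log to disk, once per INSERT. Wrapping the whole batch in a single transaction means one log flush instead of 5000, which tends to matter most on the machine whose disk is pegged at 100%:
BEGIN TRAN
declare @count int, @max int
set @count = 1
set @max = 5000
WHILE (@count <= @max)
BEGIN
insert into TBAT_STOCKEOD(TRDT,SEQNO,COMPID,TRANSTYPE) values('20071219',cast(@count as varchar(5)),'13215','1')
set @count = @count + 1
END
COMMIT TRAN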
View 5 Replies
Dec 16, 2001
Originally posted by Jeremy at 12/10/2001 11:39:38 AM
Hello all,
I've written a simple DTS job that uses Oracle (8.x) as a source and Oracle (8.x) as a destination. I'm using SQL 2000 and Microsoft's OLEDB provider for Oracle as the two connections. I've chosen "Transform Data Task" with the following SQL: "SELECT * FROM REPORTER_STATUS WHERE LASTOCCURRENCE > TRUNC(SYSDATE)". As you can see, it's very simple; however, it's very, very, very slow (averaging about 1000 rows per minute). In my column transformations, I've selected many-to-many rather than one-to-one. There are no ActiveX scripts or anything along those lines, just a simple push of the data from one Oracle box to the other. The table schemas are identical, etc... I've had this problem before with writing to Oracle, and I can't imagine that it's really supposed to be this slow. If you need more details, please just let me know.
Thank you,
Jeremy
-----------------------------More -------------------------------
The official response from Microsoft is that DTS only allows single-row inserts... not bulk or BCP for Oracle. There must be someone out there who has figured out how to configure / modify / call (something) from a DTS package to insert millions of records into Oracle in a decent time frame...
Thanks again...
View 2 Replies
Sep 29, 2015
I am using an aggregate with the OVER clause. Running the script is fast, less than 1 second, but when I insert into a temp table the execution plan is very different and it takes 8 seconds. I have attached the execution plans, and also the STATISTICS IO and TIME messages. I am using SQL Server 2014 with backward compatibility to 2008 R2.
if (select OBJECT_ID('tempdb..#MM')) is not null drop table #MM
CREATE TABLE #MM ([MyTableID] [int], [ParticipantID] [int], [ConferenceID] [nvarchar](50), [Points] [money], [DateCreated] [datetime], [StartPoints] [money], [EndPoints] [money], [LowPoints] [money], [HighPoints] [money])
insert into #MM ([MyTableID], [ParticipantID], [ConferenceID], [Points], [DateCreated], [StartPoints], [EndPoints], [LowPoints], [HighPoints])
select mm.MyTableID, mm.ParticipantID, mm.ConferenceID, mm.Points, mm.DateCreated,
[code]....
View 2 Replies
May 16, 2015
I am running a view that inserts into a #temp table. On only certain days the insert speed into tempdb is very slow.
The attached snapshot shows that after one minute only a few records had been inserted. It doesn't happen every day; some days it's very fast.
View 3 Replies
Jan 4, 2007
Hello,
In a Data Flow Task, I have an insert that occurs into a SQL Server 2000 table from a fixed-width flat file. The SQL Server table that the data goes into is accessed through an OLE DB connection manager that uses the Native OLE DB Microsoft OLE DB Provider for SQL Server.
In the OLE DB Destination, I changed the access mode from 'Table or View - fast load' to 'Table or View' because I needed to implement OLE DB Destination Error Output. The error output goes to a SQL Server 2000 table that uses the same connection manager.
The OLE DB Destination Editor Error Output 'Error' option is configured to 'Redirect' the row. 'Set this value to selected cells' is set to 'Fail component'.
Was changing the access mode the simple reason why the insert from the flat file takes so much longer, or could there be other problems?
Thank you for your help!
cdun2
View 32 Replies
May 13, 2008
I want to deal with index fragmentation using SQL Server 2005 Express. Is there any helping material on defragmenting? Any example?
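A minimal sketch of the usual SQL Server 2005 sequence, which works in Express too; the table name is a placeholder:
-- Measure fragmentation for one table in the current database
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, NULL)
-- Then reorganize (lightweight, online) or rebuild (thorough) accordingly
ALTER INDEX ALL ON dbo.MyTable REORGANIZE
ALTER INDEX ALL ON dbo.MyTable REBUILD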
View 2 Replies
Mar 29, 2001
Which DBCC command, or other tool, can one use to find table fragmentation in SQL 7, and how does one take some proactive action?
Thanks very much for your time.
-Tim
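A hedged sketch for SQL Server 7.0, where DBCC SHOWCONTIG takes an object id rather than a name; the table name is a placeholder:
DECLARE @id int
SET @id = OBJECT_ID('MyTable')
DBCC SHOWCONTIG (@id) -- reports scan density and logical fragmentation
-- Proactive step: rebuild all indexes on the table
DBCC DBREINDEX ('MyTable')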
View 2 Replies
Jun 1, 2004
Would someone please let me know which is the better way to reduce index fragmentation on a Windows NT drive (see the sketch below)?
DBCC INDEXDEFRAG
or
DBCC DBREINDEX
or
dropping and recreating the indexes through SQL?
Thanks
Vinnie
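For reference, a hedged sketch of all three options against a placeholder table and index. INDEXDEFRAG reorganizes in place and stays online; the other two rebuild from scratch and lock the table while they run:
DBCC INDEXDEFRAG (0, MyTable, IX_MyIndex) -- 0 = current database
DBCC DBREINDEX ('MyTable', 'IX_MyIndex')
CREATE INDEX IX_MyIndex ON MyTable (Col1) WITH DROP_EXISTING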
View 5 Replies
Jul 27, 2007
Sql Server 2000.
For simplicity, I am avoiding minor details.
Here is a scenario.
Create Table T (T_ID int PK, ...) and this table is referenced by 20 other tables on T_ID.
And these 20 tables have T_ID as PK or part of PK.
We have a large volume of deletes in T and in the other 20 tables based on T_ID.
Once the deletes are done, will dropping and re-creating the FKs that reference T_ID help reduce the fragmentation caused by the deletes?
View 4 Replies
May 8, 2001
hi all,
I have inherited a database running on SQL 7 SP2. A lot of the tables do not have a clustered index, and some of these tables are highly fragmented. Below is a sample of the DBCC SHOWCONTIG output:
TABLE level scan performed.
- Pages Scanned................................: 104934
- Extents Scanned..............................: 13124
- Extent Switches..............................: 13123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.95% [13117:13124]
- Extent Scan Fragmentation ...................: 5.84%
- Avg. Bytes Free per Page.....................: 7757.4
- Avg. Page Density (full).....................: 4.16%
This table has 5 non-null char columns whose total length is 43 chars, yet the table occupies about 900 MB for only about 670,000 rows!
Selects on the table are painfully slow!
Since there is no clustered index, I would have to rebuild the table to remove the fragmentation. Is there a way to reorganize the table without recreating it?
thanx
:-)
hemant
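For what it's worth, the classic SQL 7 trick for compacting a heap without rebuilding the table by hand is to create a clustered index, which rewrites the data pages compactly, and then drop it; the table and column below are placeholders:
CREATE CLUSTERED INDEX IX_Reorg ON MyBigTable (SomeColumn)
DROP INDEX MyBigTable.IX_Reorg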
View 1 Replies
Oct 13, 2000
Hi!
I have a problem with defragmenting tables. When I ran DBCC SHOWCONTIG for the tables, it showed fragmentation of 94%, 50% and 33%. I have rebuilt the indexes on all those tables and run DBCC SHOWCONTIG again, but it still shows the same fragmentation percentages. Is there any way to remove the fragmentation on all the tables completely? Any help would be appreciated.
Thanks,
Saran
View 2 Replies
Apr 14, 2004
One of my production servers has been determined to be 92% fragmented.
What's the proper procedure for defragging a database server?
I couldn't find anything very helpful in BOL or the Knowledge Base.
Sidney Ives
Database Administrator
Sentara Healthcare
View 3 Replies
Aug 9, 2004
What's the best way to find out if disk fragmentation on Windows 2000 Server is affecting SQL Server performance?
If disk fragmentation is shown to be a cause of performance problems, what are the recommendations for a defragmentation strategy? E.g. use the Windows 2000 built-in disk defrag utility, or buy a 3rd-party product like Diskeeper? How much of an overhead is a product like Diskeeper when it defrags in the background?
Clive
View 1 Replies
Oct 1, 2004
Hello,
How can I measure the fragmentation of a whole database? DBCC SHOWCONTIG shows object fragmentation only; I would like to see fragmentation for the whole database.
Thanks for the help
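A hedged sketch: run without a table name, DBCC SHOWCONTIG covers every table in the current database, and TABLERESULTS turns the report into rows that can be aggregated:
-- All tables and all their indexes in the current database, as a result set
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES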
View 12 Replies
Jun 22, 2008
What functions or tools do you use for managing index fragmentation?
DBCC?
I am working through the MS Press SQL 2005 book and it mentions the sys.dm_db_index_physical_stats function. It then gives an example of code which is very involved.
Does anybody use this function?
Thanks
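It needn't be involved; a minimal usage sketch (the object name is a placeholder, and LIMITED mode reads only the upper index levels, so it is cheap):
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED')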
View 10 Replies
May 10, 2007
Before rebuilding the index:
tablename: Payoff_Quote
indexname: PK_Payoff_QuoteDetail
avg_fragmentation_in_percent: 83.3333333333333
ALTER INDEX ALL ON Payoff_Quote REBUILD
After issuing the above statement, the result remains the same.
View 1 Replies
Jun 24, 2007
Hi
When I run sys.dm_db_index_physical_stats on AdventureWorks.Production.Product, I get average fragmentation of 46%, 0%, 25% & 50% on the indexes.
I run ALTER INDEX ALL ON AdventureWorks.Production.Product
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON,
STATISTICS_NORECOMPUTE = ON)
and the index fragmentation does not change...
Why is that? I have tried refreshing the table and the statistics, but why does it look like rebuilding the indexes has done nothing?
View 1 Replies
Sep 25, 2007
I have a nice script that will look at the index fragmentation by using the DMV (sys.dm_db_index_physical_stats) and, if it is above a specified threshold, rebuild or reorg the index.
What I have noticed is that even after reorging and/or rebuilding the index, the fragmentation percent in the DMV does not change for some indexes. Other indexes are updated just fine.
Is this a result of updating statistics? Why would the fragmentation change for some indexes and not others? Why does it seem that no matter how much rebuilding or reorging is done, the fragmentation percent for some indexes does not change?
- Eric
View 6 Replies
May 8, 2007
I'm attempting to debug some query timeouts on a production server.
Several tables involved in this query have logical scan fragmentation levels around 10%, and extent scan fragmentation levels 90%+.
Running INDEXDEFRAG or DBREINDEX reduces the logical scan fragmentation down to almost 0%, but the extent scan fragmentation stays upwards of 90%.
Why? What can I do to get this fragmentation down? Is it because this particular index is on a GUID?
View 7 Replies
Dec 13, 2006
We backup a lot of SQL databases. The db admin uses SQL to dump the databases to a *.bak file on a network share, then we pick them up to tape.
The problem is that for some reason the backup files are MASSIVELY fragmented, which kills our backup speed to tape (via NetBackup Enterprise). If I defragment the drive (via the built-in tool on Win2K3 or PerfectDisk 8), our speed to tape more than doubles (it can go from 16 MB/sec to 36 MB/sec). However, after the next SQL backup, the drive is completely fragmented once again (around 70%).
Is there any way to improve how these files are written to disk (to keep them relatively in one piece)? For some reason this appears to be a bigger issue with SQL backup files than with any other file type. Thanks.
Jim B
View 3 Replies
Aug 8, 2007
I am running this script to see the fragmentation of indexes.
SELECT a.database_id
,a.object_id
,a.index_id
,b.name
,a.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (7,NULL, NULL, NULL, NULL) AS a
JOIN sys.indexes AS b
ON a.object_id = b.object_id
AND a.index_id = b.index_id
where a.database_id = db_id()
order by a.avg_fragmentation_in_percent desc
When I rebuild an index, the fragmentation does not change to 10% or lower (the default index fill factor is 0, which I think is 90% fill).
SQL Server 2005 EE SP 2
Thanks,
~Joseph~
View 1 Replies
Mar 30, 2004
Hi,
How do you defragment SQL database data files? Does running DBCC DBREINDEX or DBCC INDEXDEFRAG minimize fragmentation of the tables as well as the indexes, or just the indexes?
Thanks in advance
a
View 6 Replies
May 11, 2015
We have a nightly job running to reorg all indexes in one of our prod databases, but the index on one of the tables fragments again quickly, showing 90% fragmentation by the morning.
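One hedged mitigation for an index that refragments overnight is to rebuild it with a lower fill factor, so pages start out with free space and incoming rows cause fewer page splits; the index name and the 80 are placeholders to tune:
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (FILLFACTOR = 80)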
View 9 Replies