We had a table that had logical fragmentation of 50%. After rebuilding
(with default fillfactor 0) I noticed that inserts are much faster. If
my page density is 100% wouldn't I get more page splits? I know I am
missing something fundamental here. Could someone get me back on
track?
Table size: 1.5 million rows
Insert size: 70K rows
Before: 15 minutes
After: 3 minutes
Index: compound clustered index across varchar columns; there are also a couple of non-clustered indexes.
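For what it's worth, with 100% page density the thing to watch is whether the insert batches start driving page splits again; if they do, the rebuild can be repeated with an explicit fill factor to leave free space on each page. A minimal sketch, assuming a SQL Server 2000-era instance; myTable is a placeholder name and 90 is a number to tune, not a recommendation:

DBCC SHOWCONTIG ('myTable') WITH TABLERESULTS  -- check logical fragmentation and average page density
GO
DBCC DBREINDEX ('myTable', '', 90)             -- rebuild all indexes, leaving roughly 10% free per page
GO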
I have a table with 50% logical scan fragmentation [according to DBCC SHOWCONTIG (myTable)]. Why, after running DBCC INDEXDEFRAG (myDB, myTable), does it still sit at 50%? Why isn't it lower?
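Two things commonly keep that number stuck: DBCC INDEXDEFRAG only reorders leaf pages in place (it cannot fix interleaving with other objects, and it skips what it cannot improve), and on a table with only a handful of pages the percentage barely moves no matter what you run. A minimal sketch for checking whether the table is even big enough for the figure to matter, reusing the same myTable from the post:

DBCC SHOWCONTIG ('myTable') WITH TABLERESULTS   -- look at the page and extent counts alongside the fragmentation figure
GO

If the table spans only a few extents, a full DBCC DBREINDEX (or simply ignoring the figure) is usually the more useful response.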
We had some slowdown complaints lately and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Turning statistics profile on shows that every time it executes it does an index seek.
A Profiler session showed a huge number of reads for these queries depending on the value being looked up, anywhere from 1,500 through 50,000. I turned statistics IO on and the culprit is LOB logical reads; everything else is 0 or very low. In this case LOB logical reads is over 1,700.
Three of the columns in the select statement are text columns. When I take them out of the query the LOB logical reads drop to 0, and they go up incrementally as I add each column back in.
Is there any way to improve the performance without changing the data types to varchar(max)?
select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL, DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt, Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction, Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging, vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension, Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan, Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b, Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b, Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing, block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion, block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment, Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code, Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date, terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1, SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made, local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId, accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3, block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS, block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime, OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID from profiles where UidCustomerID in (352199267)
Everyone I have spoken to is clueless on this one. Thanks in advance for the right solution!
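One option worth testing, if the text values in profiles are mostly short, is the 'text in row' table option, which stores small text values on the data page itself instead of in separate LOB pages and can therefore cut the LOB logical reads. A minimal sketch; the 256-byte limit is a placeholder to tune, and existing values only move in-row as rows are next updated:

EXEC sp_tableoption 'profiles', 'text in row', '256';   -- store text values up to 256 bytes on the data row itself
GO
SET STATISTICS IO ON;   -- re-check the LOB logical reads for the query afterwards
GO

The other obvious lever, given that the reads drop to zero without the three text columns, is a second, narrower query (or a view) that leaves them out for callers that do not need them.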
The insert trigger I created works fine (well, nearly fine), except that AFTER the first insert operation (i.e., on the second, third, etc.), it always produces the correct results BUT FOR THE PREVIOUSLY INSERTED ROW. It is as if there is a latency of one row in the temp table INSERTED.
I would greatly appreciate a 'why', and more importantly, a 'how to fix it' for this problem.
If you need to look at the code, a NOTEPAD file is attached.
Much appreciated
--start trigger--
create trigger UpdateAffiliateEarnings on Orders
for insert, update
as
-- (variable declarations were not included in the post; presumably something along these lines)
declare @AffID int, @ProductType varchar(20), @Earnings money,
        @CurrentEarnings money, @AffTotalEarnings money,
        @AffTotalPayments money, @AffOutstandingBalance money

--check existence of affiliateid, and for product type
select @AffID = AffID, @ProductType = Source
from Orders
where AffID IS NOT NULL

--get relevant information
if @AffID IS NOT NULL
begin
    if @ProductType = 'FLOWER'
    begin
        select @Earnings = CONVERT(money, ChargedAmount * CommissionRate)
        from Orders o, FlowerOrder t, Commission c
        where o.OrderID = t.OrderID
          and t.CommissionID = c.CommissionID
          and PaymentConfirmedYN = 'YES'
    end
    if @ProductType = 'PHONE'
    begin
        select @Earnings = CONVERT(money, ChargedAmount * CommissionRate)
        from Orders o, PhoneOrder t, Commission c
        where o.OrderID = t.OrderID
          and t.CommissionID = c.CommissionID
          and PaymentConfirmedYN = 'YES'
    end

    --update Affiliate account
    --get totals to update
    select @CurrentEarnings = AffTotalEarnings, @AffTotalPayments = AffTotalPayments
    from Affiliates
    where AffID = @AffID

    --calculate new totals for affiliate account
    set @AffTotalEarnings = @CurrentEarnings + @Earnings
    set @AffOutstandingBalance = @AffTotalEarnings - @AffTotalPayments

    --update affiliate account to new totals
    update Affiliates
    set AffTotalEarnings = @AffTotalEarnings,
        AffOutstandingBalance = @AffOutstandingBalance
    where AffID = @AffID

    --roll back the transaction if there is an error
    if @@ERROR != 0
        rollback tran
end
-- end of trigger --
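For reference, the usual cause of this one-row lag is that the trigger never reads from the inserted pseudo-table: the first SELECT pulls @AffID from the Orders table at large, so it tends to pick up the row from the previous insert rather than the one that just fired the trigger. A minimal set-based sketch of the idea, covering only the FLOWER branch and assuming ChargedAmount and PaymentConfirmedYN are columns of Orders (and therefore of inserted); it is a sketch, not a drop-in replacement:

create trigger UpdateAffiliateEarnings on Orders
for insert, update
as
begin
    -- Derive the earnings directly from the rows that fired the trigger.
    update a
    set    AffTotalEarnings      = a.AffTotalEarnings + x.Earnings,
           AffOutstandingBalance = (a.AffTotalEarnings + x.Earnings) - a.AffTotalPayments
    from   Affiliates a
    join  (select i.AffID,
                  sum(convert(money, i.ChargedAmount * c.CommissionRate)) as Earnings
           from   inserted i
           join   FlowerOrder t on t.OrderID = i.OrderID
           join   Commission  c on c.CommissionID = t.CommissionID
           where  i.AffID is not null
             and  i.PaymentConfirmedYN = 'YES'
           group by i.AffID) x
      on   x.AffID = a.AffID
end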
I decided to change over from a Microsoft Access database file to the new SQL Server Compact edition (SQLServerCe). Although reading data from the database is greatly improved, inserting new rows is extremely slow.
I was getting between 60 and 70 rows per second using OLEDB and an Access database, but I am now only getting 14 to 27 rows per second using SQLServerCe.
I have tried the code changes below and nothing seems to increase the speed. Any help would be appreciated, as I would prefer to use SQLServerCe since the database is much smaller and I'm used to SQL commands.
Details: VB2008 Pro, .NET Framework 2.0, SQL Compact Edition V3.5, Encryption = Engine Default, Database Size = 128 MB (but needs to be changed to 999 MB).
Backup_On_Next_Run, OverWriteQuick and CompressAns are Booleans; all other columns are nvarchar, size 10 to 30, except for Full Folder Address, which is size 260.
14 to 20 rows per second (was 60 to 70 when using OLEDB Access)
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
    Try
        ' (connection/command setup and the opening Try were presumably in the part of the post that was omitted)
        cmd.CommandText = "SELECT * FROM BackupDatabase"
        cmd.ExecuteNonQuery()
        Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)
        Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()
        rec.SetString(1, Group_Name1)
        rec.SetString(2, Full_Folder_Address1)
        rec.SetString(3, File1)
        rec.SetSqlString(4, File_Size_KB1)
        rec.SetSqlString(5, Schedule_To_Run1)
        rec.SetSqlString(6, Backup_Time1)
        rec.SetSqlString(7, Last_Run1)
        rec.SetSqlString(8, Result1)
        rec.SetSqlString(9, Last_Modfied1)
        rec.SetSqlString(10, Latest_Modfied1)
        rec.SetSqlBoolean(11, Backup_On_Next_Run1)
        rec.SetSqlString(12, Total_Backup_Times1)
        rec.SetSqlString(13, Server_File_Number1)
        rec.SetSqlString(14, Server_Number1)
        rec.SetSqlString(15, File_Break_Down1)
        rec.SetSqlString(16, No_Of_Servers1)
        rec.SetSqlString(17, Full_File_Address1)
        rec.SetSqlBoolean(18, OverWriteQuick)
        rec.SetSqlBoolean(19, CompressAns)
        rs.Insert(rec)
    Catch e As Exception
        MessageBox.Show(e.Message)
    Finally
        conn.Close()
    End Try
End Sub
20 to 24 rows per sec
TRY 3
Using SQL Commands Direct
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
I have a stored procedure that can take over 5 seconds to add simple data. Please give any advice on optimizing. Here are the details:

***** Access Info *****
~10 rows are added to the table every second
the table is read ~20 times a second (with nolock)

***** Stored Procedures *****
-------
CREATE PROCEDURE dbo.getMessagesForSR
@serviceRequestId numeric
AS
select *
from SRMessages with (nolock)
where srid = srid
order by SRMessageID
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
--------
CREATE PROCEDURE dbo.addMessage
@srid numeric,
@timestamp bigint,
@type smallint,
@initiator nvarchar (100),
@data ntext
AS
INSERT INTO SRMessages VALUES (@srid, @timestamp, @type, @initiator, @data)
GO
---------
***** The Table *****
CREATE TABLE [dbo].[SRMessages] (
[SRMessageId] [bigint] IDENTITY (1, 1) NOT NULL ,
[srid] [bigint] NOT NULL ,
[Timestamp] [bigint] NOT NULL ,
[Type] [smallint] NOT NULL ,
[Initiator] [nvarchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Data] [ntext] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[SRMessages] ADD
CONSTRAINT [PK_SRMessages] PRIMARY KEY NONCLUSTERED ([SRMessageId]) ON [PRIMARY]
GO
CREATE INDEX [SRMessages2] ON [dbo].[SRMessages]([srid]) ON [PRIMARY]
GO
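One thing jumps out before any tuning: as posted, getMessagesForSR filters with "where srid = srid", which compares the column to itself and so returns (and scans) every row on each of those ~20 reads per second, a lot of unnecessary I/O on the same table the inserts are going into. If that is not just a transcription slip, the intended filter was presumably the procedure's parameter; a hedged sketch:

CREATE PROCEDURE dbo.getMessagesForSR
    @serviceRequestId numeric
AS
select *
from SRMessages with (nolock)
where srid = @serviceRequestId   -- sketch: assumes the parameter was meant to be used here
order by SRMessageID
GO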
Hi, it takes 4 minutes to insert 40,000 records on one SQL Server and 40 seconds on another SQL Server. The slower one runs on Windows 2000 Terminal Server and the faster one on Windows 2000. The slower one has 2 GB of RAM, the faster one 512 MB!!! How do I diagnose what is going wrong on the slower server? I can't change the application that does the insertions.
I have a table with 110 nvarchar(255) columns in the database. I receive the data from the server (an array) and then loop through it to insert the data into the database. The problem is that it takes more than 3 minutes to insert (or update) the 310 records (in memory) it got from the array. I am reusing the Statement (with parameters). How could I improve the performance?
It's really a small quantity of data, and the CPU in the device is 520 MHz... it should be all right!
I've got 2 servers, with SQL Server 2000 SP3 and MS Windows 2003 Server. I've written a very simple stored procedure to insert 20,000 rows into a very simple table TEST (id int, msg varchar(50)).
On the first server (P-IV 2 GHz), it takes 700 ms per 1,000 insertions, and on the second (2x Xeon 2.6 GHz), it takes 13 s per 1,000 insertions...
(The insertion is: INSERT INTO TEST (id, msg) VALUES (@id, 'dummy text').)
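For context, a hedged reconstruction of the kind of loop being described; only the table definition and the INSERT statement come from the post, while the procedure name and loop shape are assumptions:

-- Sketch only: reconstructs the described test, not the poster's actual procedure.
CREATE TABLE TEST (id int, msg varchar(50))
GO
CREATE PROCEDURE dbo.InsertTestRows
    @rows int = 20000
AS
BEGIN
    DECLARE @id int
    SET @id = 1
    WHILE @id <= @rows
    BEGIN
        INSERT INTO TEST (id, msg) VALUES (@id, 'dummy text')
        SET @id = @id + 1
    END
END
GO

With each INSERT committed individually like this, the run time is dominated by transaction log flushes, so a large gap between two machines usually points at the disk subsystem rather than the CPUs.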
...
SQL Server was installed exactly in the same way...
What could I do to see where the problem is? With Profiler, I see no difference while logging all events...
I support a website (ASP.NET 2.0) where recently users have been unable to insert data due to timeout issues. The functionality executes a query that inserts a single row into a SQL Server 2005 db table. I tried running this query from the backend, and it took 4:48 to insert a single row! Interestingly, after that initial agony any similar inserts I tried took no time whatsoever. I have checked the execution plan, but unfortunately don't really know what I'm looking for with inserts, as most of my experience with execution plans is with select statements. Any resources anyone could point me to for troubleshooting this would be much appreciated.
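A pattern like this (one agonizing insert, then instant ones afterwards) often points at blocking or a large file autogrow rather than the insert itself. A hedged sketch of what could be run from a second connection while the slow insert is in flight, using the SQL Server 2005 DMVs; it only shows what the stuck session is waiting on, it does not fix anything:

-- Look at the wait type and any blocking session for the in-flight requests.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.session_id <> @@SPID;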
Hi everybody!!! I have a problem with a SQL Server 2000 database, and I need someone to help me.
I have a PC with the following hardware configuration:
+ RAM: 512 MB
+ CPU: 2.4 GHz (Intel Pentium 4)
+ OS: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 1
(call this Machine1)
and another server machine with the following hardware configuration:
+ RAM: 2 GB
+ CPU: 3.6 GHz (Intel Xeon)
+ OS: Microsoft Windows Server 2003 Enterprise Edition, Service Pack 2
(uses RAID 5)
(call this Machine2)
SQL Server 2000 is installed on both machines.
I created one table and a script to insert several records into this table. The script for inserting:
declare @count int , @max int
set @count =1
set @max = 5000
WHILE (@count < @max or @count = @max)
BEGIN
insert into TBAT_STOCKEOD(TRDT,SEQNO,COMPID,TRANSTYPE) values('20071219',cast(@count as varchar(5)),'13215','1')
set @count = @count +1
END
I executed this script on Machine1 and Machine2:
+ Machine1: took 3 seconds.
+ Machine2: took 30 seconds. I don't understand why Machine2 is slower than Machine1, while Machine2's configuration is better than Machine1's. I don't know what is happening.
I checked resource usage in Perfmon while SQL Server executed the script, with the following results:
Machine1: processor used 100%,
whereas Machine2: physical disk used 100%.
Is there a way to configure SQL Server to improve the hard disk write speed?
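Given that Machine2 is pegged on the physical disk, one experiment worth running is to commit the 5,000 inserts as a single transaction instead of 5,000 separate ones, so the transaction log is flushed once at commit rather than on every row. A hedged variant of the script above, using the same table and values:

declare @count int, @max int
set @count = 1
set @max = 5000
BEGIN TRAN   -- one log flush at commit instead of one per insert
WHILE (@count <= @max)
BEGIN
    insert into TBAT_STOCKEOD (TRDT, SEQNO, COMPID, TRANSTYPE)
    values ('20071219', cast(@count as varchar(5)), '13215', '1')
    set @count = @count + 1
END
COMMIT TRAN

If that closes most of the gap, the difference between the two machines is write latency on the log drive (RAID 5 is a common culprit for small synchronous writes), not SQL Server configuration.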
Originally posted by Jeremy at 12/10/2001 11:39:38 AM
Hello all,
I've written a simple DTS job that uses Oracle (8.x) as a source and Oracle (8.x) as a destination. I'm using SQL 2000 and Microsoft's OLE DB provider for Oracle as the two connections. I've chosen a "Transform Data Task" with the following SQL: "SELECT * FROM REPORTER_STATUS WHERE LASTOCCURRENCE > TRUNC(SYSDATE)". As you can see, it's very simple; however, it's very, very slow (it averages about 1000 rows per minute). In my column transformations, I've selected many-to-many versus one-to-one. There are no ActiveX scripts or anything along those lines, just a simple push of the data from one Oracle box to the other. The table schemas are identical, etc... I've had this problem before when writing to Oracle and I can't imagine that it's really supposed to be this slow. If you need more details, please just let me know.
The official response from Microsoft is that DTS only allows single inserts... not bulk or BCP for Oracle. There must be someone out there who has figured out how to configure / modify / call (something) from a DTS package to insert millions of records into Oracle in a decent time frame...
I am using an aggregate with the OVER clause. Running the script is fast, less than 1 second, but when I insert into a temp table the execution plan is very different and it takes 8 seconds. I have attached the execution plans, as well as the STATISTICS IO and TIME messages. I am using SQL Server 2014 with backward compatibility to 2008 R2.
if (select OBJECT_ID('tempdb..#MM')) is not null
    drop table #MM

CREATE TABLE #MM (
    [MyTableID] [int],
    [ParticipantID] [int],
    [ConferenceID] [nvarchar](50),
    [Points] [money],
    [DateCreated] [datetime],
    [StartPoints] [money],
    [EndPoints] [money],
    [LowPoints] [money],
    [HighPoints] [money]
)

insert into #MM ([MyTableID], [ParticipantID], [ConferenceID], [Points], [DateCreated], [StartPoints], [EndPoints], [LowPoints], [HighPoints])
select
    mm.MyTableID,
    mm.ParticipantID,
    mm.ConferenceID,
    mm.Points,
    mm.DateCreated,
I am running a view that inserts into a #temp table. On certain days only, the INSERT speed into tempdb is very slow. Attached is a snapshot that shows that after one minute only a few records have been inserted; it doesn't happen every day, and some days it is very fast.
In a Data Flow Task, I have an insert that occurs into a SQL Server 2000 table from a fixed-width flat file. The SQL Server table that the data goes into is accessed through an OLE DB connection manager that uses the Native OLE DB\Microsoft OLE DB Provider for SQL Server.
In the OLE DB Destination, I changed the access mode from Table or View - fast load to Table or View because I needed to implement OLE DB Destination Error Output. The Error output goes to a SQL Server 2000 table that uses the same connection manager.
The OLE DB Destination Editor Error Output 'Error' option is configured to 'Redirect' the row. 'Set this value to selected cells' is set to 'Fail component'.
Was changing the access mode the simple reason why the insert from the flat file takes so much longer, or could there be other problems?
Would someone please let me know which is the better way to reduce fragmentation on a Windows NT drive: DBCC INDEXDEFRAG, DBCC DBREINDEX, or dropping and recreating the indexes through SQL?
I have inherited a database running on SQL 7 SP2. A lot of the tables do not have a clustered index, and some of these tables are highly fragmented. Below is a sample of the DBCC SHOWCONTIG output.
This table has 5 non-null char columns whose total length is 43 characters, yet the table occupies about 900 MB for only about 670,000 rows! SELECTs on the table are painfully slow!
Since there is no clustered index, I have to rebuild the table to remove the fragmentation. Is there a way to reorganize the table without recreating it?
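One common workaround on SQL 7/2000 for a fragmented heap is to build a clustered index on it, which rewrites and compacts the data pages, and then drop it again if you really do not want to keep one. A hedged sketch; the index, table and column names are placeholders:

-- Building a clustered index rewrites the heap's pages in key order, removing
-- the fragmentation; dropping it afterwards leaves the table as a (compacted) heap.
CREATE CLUSTERED INDEX IX_MyTable_Rebuild ON dbo.MyTable (SomeColumn)
GO
DROP INDEX MyTable.IX_MyTable_Rebuild
GO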
I have a problem with defragmenting the tables. When I ran DBCC SHOWCONTIG for the tables, it showed 94%, 50%, and 33% fragmentation. I have rebuilt the indexes on all those tables and ran DBCC SHOWCONTIG again, but it still shows the same fragmentation percentages. Is there any way to remove the fragmentation on all the tables completely? Any help would be appreciated.
What's the best way to find out if disk fragmentation on Windows 2000 Server is affecting SQL Server performance?
If disk fragmentation is shown to be a cause of performance problems, what are the recommendations for a defragmentation strategy? E.g., use the Windows 2000 built-in disk defrag utility, or buy a 3rd-party product like DiskKeeper? How much overhead does a product like DiskKeeper add when it defrags in the background?
How can I measure the database fragmentation? DBCC SHOWCONTIG shows object fragmentation only; I would like to see fragmentation for the whole database.
What functions or tools do you use for managing index fragmentation?
DBCC?
I am working through the MS Press SQL 2005 book and it mentions the sys.dm_db_index_physical_stats function. It then gives an example of code which is very involved.
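The book's example can be pared down quite a bit. A minimal sketch that reports fragmentation for one table's indexes; the database and table names are placeholders, and LIMITED is the cheapest scan mode:

SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;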
Before rebuilding the index:
table name: Payoff_Quote
index name: PK_Payoff_QuoteDetail
avg_fragmentation_in_percent: 83.3333333333333
ALTER INDEX ALL ON Payoff_Quote REBUILD
After issuing the above statement the result remains the same.
I have a nice script that will look at the index fragmentation using the DMV (sys.dm_db_index_physical_stats) and, if it is above a specified threshold, will rebuild or reorganize the index.
What I have noticed is that even after reorganizing and/or rebuilding the index, the fragmentation percentage in the DMV does not change for some indexes. Other indexes are updated just fine.
Is this a result of updating statistics? Why would the fragmentation change for some indexes and not others? Why does it seem that no matter how much rebuilding or reorganizing is done, the fragmentation percentage for some indexes does not change?
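For reference, a hedged sketch of the kind of threshold-driven loop the post describes; the thresholds, cursor shape and filters are assumptions, not the poster's script. The page_count filter also hints at the likely answer to the question above: very small indexes (a handful of pages, often on mixed extents) tend to keep a high avg_fragmentation_in_percent no matter how often they are rebuilt, so it is common to ignore them:

-- Reorganize between 10% and 30% fragmentation, rebuild above 30%,
-- and skip indexes too small for the number to mean anything.
DECLARE @sql nvarchar(max);

DECLARE frag_cursor CURSOR FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
         + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
         + CASE WHEN ips.avg_fragmentation_in_percent > 30
                THEN N' REBUILD' ELSE N' REORGANIZE' END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND ips.page_count > 100          -- skip tiny indexes
      AND i.name IS NOT NULL;           -- skip heaps

OPEN frag_cursor;
FETCH NEXT FROM frag_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM frag_cursor INTO @sql;
END
CLOSE frag_cursor;
DEALLOCATE frag_cursor;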
We back up a lot of SQL databases. The DBA uses SQL to dump the databases to a *.bak file on a network share, and then we pick them up to tape.
The problem is that, for some reason, the backup files are MASSIVELY fragmented, which kills our backup speed to tape (via NetBackup Enterprise). If I defragment the drive (via the built-in tool on Win2K3 or PerfectDisk 8), our speed to tape more than doubles (it can go from 16 MB/sec to 36 MB/sec). However, after the next SQL backup the drive is completely fragmented once again (around 70%).
Is there any way to improve how these files are written to disk (to keep them relatively in one piece)? For some reason this appears to be a bigger issue with the SQL backup files than with any other file type. Thanks.
How do you defragment SQL database data files? Does running DBCC DBREINDEX or DBCC INDEXDEFRAG minimize fragmentation of the tables as well as the indexes, or just the indexes? Thanks in advance.
We have a nightly job running to reorganize all indexes in one of our prod databases, but the index on one of the tables fragments quickly, showing 90% fragmentation again by the morning.
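If that index is on a key that takes inserts all over its range, a lower fill factor leaves room on each leaf page so it does not split its way back to 90% overnight. A hedged sketch; the table, index name, and the 80 are placeholders to be tuned against the real insert pattern, not a recommendation:

-- Rebuild with ~20% free space per leaf page so the day's inserts cause fewer page splits.
ALTER INDEX IX_MyIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 80);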