We are planning to standardize our newly deployed SQL Server. As part of that, we have configured two maintenance plans: 1) Update Statistics, which runs daily, and 2) Index Reorganize, which runs weekly.
Apart from the above, are there any other things that should be in place for better maintenance of the SQL Server?
Also, does an index rebuild on clustered indexes require any downtime?
I am using SQL Server 2008 (RTM) Standard Edition.
In my environment, one of my databases is 75 GB in size and I have to create a plan for index rebuilds using a maintenance plan. But when we rebuild indexes, the process requires extra space in the data and log files of the database. How can we calculate the disk space requirement for the index rebuild process?
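A rebuild builds the new copy of the index before dropping the old one, so as a rough rule of thumb you need free data-file space of roughly the size of the largest index being rebuilt (often estimated at about 1.2 times, to allow for sorting), plus sort space in tempdb if SORT_IN_TEMPDB = ON, plus transaction log space that depends on the recovery model. A minimal sketch for sizing the existing indexes, assuming SQL Server 2005 or later:

-- Hypothetical sketch: approximate per-index size in the current database,
-- as a rough guide to the free space a REBUILD will need in the data file.
SELECT  OBJECT_NAME(i.object_id)          AS table_name,
        i.name                            AS index_name,
        SUM(au.used_pages) * 8 / 1024.0   AS index_size_mb
FROM    sys.indexes i
JOIN    sys.partitions p
          ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN    sys.allocation_units au
          ON au.container_id = p.partition_id
WHERE   i.index_id > 0                    -- skip heaps
GROUP BY i.object_id, i.name
ORDER BY index_size_mb DESC;

The largest figure returned is the one that roughly determines the peak extra space needed, since indexes are rebuilt one at a time.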
This calls the stored procedure that does the reindex. It fails at the update statistics step with a very generic message, like: "Command: UPDATE STATISTICS [xxxx_DB].[dbo].[xxxx_xxx] [_WA_Sys_00000007_49C3F6B7] [SQ... The step failed."
I suspect there is more to the error, but this is all it shows me when I right-click on the job history. Therefore, I updated the job step on the Advanced tab to log to a text file. Am I on the right track, or is there another way to see the error somewhere else?
I looked at the logs but they didn't show anything.
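Logging the step to a text file is a reasonable approach; ticking "Include step output in history" on the Advanced tab also captures more detail. The full message text is kept in msdb as well, so a query like the sketch below (with 'MyReindexJob' as a placeholder for your job name) often shows more than the truncated job history dialog:

-- Hypothetical sketch: pull the full step output for a job from msdb.
SELECT  j.name        AS job_name,
        h.step_id,
        h.step_name,
        h.run_date,
        h.run_time,
        h.message     -- full text, not the truncated GUI version
FROM    msdb.dbo.sysjobhistory h
JOIN    msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE   j.name = 'MyReindexJob'
ORDER BY h.instance_id DESC;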
We face a slow performance issue, with the same query taking a long time to execute, after we apply an index rebuild and index reorganize. But after executing the query or procedure two or three times, performance becomes faster. I have the following questions:
1. Do we need to update statistics after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first one or two executions after we rebuild and reorganize indexes?
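For what it's worth, ALTER INDEX ... REBUILD refreshes the rebuilt index's statistics (with a full scan) as a side effect, while REORGANIZE does not update statistics at all, and neither touches column statistics; the slow first executions are usually plan recompilation plus a cold cache rather than missing stats. A minimal sketch, with dbo.MyTable as a placeholder:

-- REORGANIZE does NOT refresh statistics, so follow it with an explicit update:
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;
UPDATE STATISTICS dbo.MyTable;            -- default sample; add WITH FULLSCAN if needed

-- A REBUILD, by contrast, refreshes that index's statistics on its own:
-- ALTER INDEX ALL ON dbo.MyTable REBUILD;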
I have a requirement to rebuild only the clustered indexes in the table, ignoring the non-clustered indexes, as those are taken care of by the clustered indexes.
To do that, I have pulled the records based on the fragmentation %.
But I am unable to come up with logic that considers only the clustered indexes in the table for rebuilding.
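A minimal sketch of one way to do this, assuming SQL Server 2005 or later: filter sys.dm_db_index_physical_stats to index_id = 1 (the clustered index) and build the ALTER INDEX statements from it. The 30% and 1000-page thresholds are placeholders to adjust.

-- Hypothetical sketch: generate REBUILD statements for fragmented clustered indexes only.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT  @sql = @sql
        + N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id))
        + N'.' + QUOTENAME(OBJECT_NAME(s.object_id))
        + N' REBUILD;' + CHAR(13)
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
JOIN    sys.indexes i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE   s.index_id = 1                              -- clustered indexes only
  AND   s.avg_fragmentation_in_percent > 30
  AND   s.page_count > 1000;

PRINT @sql;                     -- review the generated statements first
-- EXEC sys.sp_executesql @sql; -- then run them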
Hi All, SQL 7 requires the DBA to manage the indexes and, in particular, the fill factor. Does SQL Server 2000 automatically address this, or do we still have to periodically run...
In the maintenance plans there is a Rebuild Index choice. If you choose tables and views, the plan executes ALTER INDEX <index> ON <table> REBUILD for all indexes in the database. I am currently using this plan on our production DB, scheduled for every Saturday night. I wonder if there is a downside to using maintenance plans, because it seems to be doing the job. Any comments?
I am upgrading from SQL 2000 to SQL 2005. I have restored my 2000 DB to 2005 and changed the compatibility level to 90. Now I need to reindex. How do I reindex all the tables at once? Thanks for ALL your help. r/p
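Assuming SQL Server 2005, one quick option for a one-off pass after the upgrade is to loop over every table; sp_MSforeachtable is undocumented but widely used for this. It is also commonly recommended to run DBCC UPDATEUSAGE and refresh statistics after restoring a SQL 2000 database onto 2005. A minimal sketch:

-- Hypothetical sketch: rebuild every index on every table in the current database.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';

-- Commonly recommended after a 2000 -> 2005 restore:
DBCC UPDATEUSAGE (0);   -- correct page/row counts carried over from SQL 2000
EXEC sp_updatestats;    -- refresh statistics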
So I'm reading http://www.sql-server-performance.com/tips/clustered_indexes_p2.aspx and I come across this: "When selecting a column to base your clustered index on, try to avoid columns that are frequently updated. Every time that a column used for a clustered index is modified, all of the non-clustered indexes must also be updated, creating additional overhead. [6.5, 7.0, 2000, 2005] Updated 3-5-2004"

Does this mean that if I have, say, a table called Item with a clustered index on a column called itemaddeddate, and several non-clustered indexes associated with that table, then if a record gets modified and its itemaddeddate value changes, ALL my indexes on that table will get rebuilt? Or is it referring to the table structure changing?

If so, does this "pseudocode" example also cause this to occur:

sqlstring = "select * from item where itemid=12345"
rs.open sqlstring, etc, etc, etc
rs.Fields("ItemName") = "My New Item Name"
rs.Fields("ItemPrice") = 1.00
rs.Update

Note I didn't explicitly change the value of rs.Fields("ItemAddedDate")... does rs.Fields("ItemAddedDate") = rs.Fields("ItemAddedDate") occur implicitly, which would force the rebuild of all the non-clustered indexes?
My SSIS package is running very slowly and taking a long time to execute; one task is taking 2 hours to insert 100k records. I have disabled the unused indexes and it is still taking time. I am rebuilding/refreshing indexes and stats once a month; if I run that on a daily basis, will it improve my SSIS package performance?
We have a script running every day for rebuild and re-organisation of indexes, but somehow it keeps failing. Attached is the script for your consideration. There is no database named amoperations. There is a table called DatabaseObjectAudit, but it exists in the master database.
Message:
--> Start Index Maint
-> Gathering fragmentation information (can take a while!)
-> Gathering COMPLETE : Total of 43 databases were found.
-> Gathering COMPLETE : Total of 1622 indexes were found.
I have a number of databases that are being transactionally replicated from SQL 2000 Enterprise edition publisher to SQL 2005 Enterprise edition subscriber. I have included indexes in the replication. The subscriber database is then accessed and the data de-normalised and aggregated for reporting purposes.
My question is this: I want to periodically rebuild the indexes on the publisher and subscriber via an automated task. If I rebuild the indexes on the publisher, will that automatically replicate to the subscriber? Will there be a problem with the "snapshot being out of date", and therefore replication stopping? I run a new snapshot once a day in the small hours of the morning. If the rebuild is likely to throw the replication out, would it be wise to have the rebuild job run just before the new snapshot is taken?
Our in-house app used to run on SQL 2000, but we've recently moved it to 2005. The move was done by way of backup and restore (it was on a whole new server).
After the move, an odd problem showed up: once in a while, the server seems incapable of finding/using its indexes. Everything starts working slowly until I run a maintenance plan that rebuilds its indexes, etc. In the database/server all relevant options seem to be OK (auto update statistics etc.), and my conclusion that it doesn't use its indexes comes from the fact that:
* it returns the results of certain select statements in a totally different order (although the set of rows is the same),
* performance is (all of a sudden) very slow (seconds turning to minutes!!)
This happens:
* at least after a reboot of the server
* sometimes just in the middle of the day
The only way I've found to solve the matter is by running the maintenance plan to rebuild the indexes, but sometimes this only seems to work the second time.
Does anybody share this experience, or know what to do about it?
Hi, I have a script to rebuild and reorganize indexes for Sybase, i.e. I have a command like "reorg rebuild index ...". Now I want the equivalent command for MS SQL Server. Please help me.
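A minimal sketch of the closest SQL Server equivalents, assuming SQL Server 2005 or later (dbo.MyTable and IX_MyIndex are placeholders; on SQL 2000 the older DBCC DBREINDEX / DBCC INDEXDEFRAG commands play the same roles):

-- Rebuild one index (roughly comparable to Sybase "reorg rebuild <table> <index>")
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- Rebuild all indexes on a table
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Lighter-weight, online defragmentation (closer to a reorg without a full rebuild)
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;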
We have a maintenance plan running every day for rebuild and re-organisation of indexes, but somehow it keeps failing. Here is the script that we are running for the rebuild/re-org.
/* Script to handle index maintenance
   Tuning constants are set in-line; current values are:
   SET @MinFragmentation
   SET @MaxFragmentation
   SET @TrivialPageCount
Normally we rebuild or reorganize indexes only when it is required. I set up a SQL job using a maintenance plan to run daily that rebuilds/reorganizes indexes and updates statistics, but I do not know whether it runs only when needed. Does this plan rebuild only the indexes that actually require it, or does it fire regardless of whether they need to be rebuilt?
Has anyone noticed a performance improvement during trading hours when they replaced sp_updatestats with UPDATE STATISTICS FULLSCAN in their nightly maintenance? Or is it negligible?
I'm fairly new to SQL Server and I'm just wondering if it's possible to update statistics for all indexes somehow? I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and rebuilt every night by some DTS jobs that import from another data source, transform the data, and build some denormalized tables, etc. Some of the large insert operations go from taking 8 minutes to taking several hours sometimes, and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has all the latest stats for all tables.
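A minimal sketch of two common options for refreshing statistics database-wide, for example as a final step of the nightly load (sp_MSforeachtable is undocumented but widely used):

-- 1) Sampled update of stats whose tables have row modifications (fast, built in):
EXEC sp_updatestats;

-- 2) Full-scan update of every statistic on every table:
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';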
Does anyone know how to tell how long it took for an auto update statistics to run? I looked at DBCC SHOW_STATISTICS and it shows the time the stats were last updated, but not how long it took to update them. Thanks.
I am getting the following errors with the script below:
Msg 102, Level 15, State 1, Line 44
Incorrect syntax near '?'.
Msg 319, Level 15, State 1, Line 47
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

It also does not seem to loop around each DB.
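Without the full script it is hard to be sure, but "Incorrect syntax near '?'" often means the '?' placeholder from sp_MSforeachdb (or a similar loop) is being used outside the quoted command string, which would also explain why it never loops through the databases. A minimal sketch of a working per-database loop, purely as an illustration:

-- Hypothetical sketch: the '?' placeholder is only valid inside the quoted
-- command string, and bracketing it as [?] protects against odd database names.
EXEC sp_MSforeachdb '
IF DB_ID(''?'') > 4                    -- skip master, tempdb, model, msdb
BEGIN
    USE [?];
    PRINT ''Processing '' + DB_NAME();
    -- per-database index maintenance statements go here
END';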
Recently we moved a few of our databases from SQL 2000 to SQL 2005 (SP2) using backup and restore. After the restore I did a reindex and updated stats on the databases. Since then we have observed performance issues on the SQL 2005 databases, but the performance problem vanishes the moment we run sp_updatestats. Is this a problem with SQL 2005, that we have to run sp_updatestats two or three times a day? In SQL 2000 we ran it only once a week and still never had any performance issues. Is there any config change we need to make to fix this problem in SQL 2005?
(Assuming SQL Server 2000, auto create statistics on, auto update statistics on.) Does DBCC DBREINDEX(<tablename>) update statistics? If yes, are the statistics equivalent to those that would be produced by: UPDATE STATISTICS <tablename> WITH FULLSCAN?
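For what it's worth: DBCC DBREINDEX rebuilds the index by reading every row, so the statistics on the rebuilt indexes end up fullscan-equivalent; column statistics (including auto-created _WA_Sys ones) are not touched. One way to check is to compare the Rows and Rows Sampled values in the statistics header (dbo.MyTable and IX_MyIndex below are placeholders):

-- Hypothetical sketch: in the first (header) result set,
-- Rows Sampled = Rows indicates a fullscan-equivalent update.
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex');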
Recently a production server suffered a critical blocking period and I wanted to know if I could solicit some input. It seems that a stored procedure was in the middle of recompiling when an auto update statistics started. This caused blocking for about an hour on the single object (stored procedure) that was originally called. The table that the update occurred on, and that the stored procedure reads from, is quite large: about 2 million rows and about 140 columns wide. Some info from sysprocesses is below. The table alone takes up almost 4 GB of space according to sp_spaceused. I have some questions:
1. Can the update statistics for a '_WA%' statistic cause blocking on a table?
2. Does an update stats on an index survive a restart of SQL Server? We tried restarting, but the blocking did not end.
3. If the stored procedure is being compiled, can the server automatically start an update stats and cause the stored procedure to wait?
4. Can the server automatically start an update stats on more than one column statistic at a time, causing one to be blocked by the other?
5. We had never seen this issue before going to SQL2K clustering. Is this something specific to SQL2K and not SQL7?
Thanks for your input. John Lee
This is the lock info for the blocking processes.
spid   dbid  ObjId       IndId  Type  Resource         Mode        Status  name
------ ----- ----------- ------ ----- ---------------- ----------- ------- -------------------------
142    7     2           1      KEY   (6f00035ef42b)   S           GRANT   sysindexes
142    7     2           1      KEY   (6f00035ef42b)   S           GRANT   sysindexes
142    7     421576540   0      TAB                    Sch-S       GRANT   tJob
142    7     1141579105  0      TAB                    Sch-S       GRANT   tPatient_info
142    7     1141579105  0      TAB   [UPD-STATS]      Sch-M       GRANT   tPatient_info
142    7     1659921035  0      TAB   [COMPILE]        X           GRANT   iDBGetPatInfoRecord
142    7     1659921035  0      TAB                    Sch-S       GRANT   iDBGetPatInfoRecord
These are the processes that are being blocked:
spid
------
137
140
Below this is a snapshot of all the SQL processes on the server being blocked. Save the report and send to the whole database group.
Microsoft states that dbcc DBREINDEX automatically updates statistics but INDEXDEFRAG does not. If this is the case, does MS mean that only the affected statistics are updated or all statistics? Also, is it a good idea to run 'Update Statistics' after doing INDEXDEFRAG?
How do I update the stats of tables when we insert data into them? I believe an auto stats update happens only when 500 + 20% of the rows in a table have changed. Once we insert, say, some 1,000 records into a particular table, the query takes too long (more than 1 minute). The same query executes faster once I manually update the stats.
I am using Ola Hallengren's scripts for index and stats maintenance, but I am wondering what most people do in terms of maintenance schedules. At present we do an index rebuild/reorg weekly, but do people also do an update stats nightly?
I suppose there is an element of "it depends" here in that the data may be fairly static so the update stats may not be required, or if heavily updated then perhaps rebuilding indexes may be required more frequently.
We have a file import job. This job typically imports millions of records into a SQL 2008 DB. After the load, the DB performance goes down the drain. Thus far, their solution has been to rebuild indexes on the affected tables. I'd like to come up with a better solution. My guess is that after the load, the statistics are shot until the next stats update.
What is the best way to handle this scenario? There must be some way to keep the stats current during a big data load.
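One common approach, sketched below with dbo.ImportedTable as a placeholder, is to make a statistics refresh on the loaded tables the last step of the import job; that is usually much cheaper than rebuilding the indexes if fragmentation is not actually the problem.

-- Hypothetical sketch: refresh statistics on the tables hit by the load
-- as the final step of the import job.
UPDATE STATISTICS dbo.ImportedTable WITH FULLSCAN;

-- Or, database-wide but sampled (cheaper than FULLSCAN):
EXEC sp_updatestats;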
When I checked the properties of the statistic, I can see it is on a varchar(3) field, even though there are only 3 different values in there, all of them a single character.
The total size of the data in the table, according to the Disk Usage by Top Tables report, is 199,680,712 KB.
So my question is this...
For the UPDATE STATISTICS on this one column WITH FULLSCAN, does SQL Server read the entire table into the buffer pool? If so, given that the table holds 199,680,712 KB of data, why did the session request 145,705,216 KB?
Or does SQL Server just read the column and the clustered index key into the buffer pool?
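For a statistic on a non-indexed column, a FULLSCAN generally has to read the data rows themselves, so scanning the whole table (or the narrowest index that happens to contain the column, if one exists) is expected. One way to see what actually ended up in the buffer pool afterwards, assuming SQL Server 2005 or later, is a sketch like the following; the simplified join can miss LOB allocation units.

-- Hypothetical sketch: buffer pool pages per object/index in the current database.
SELECT  OBJECT_NAME(p.object_id)        AS object_name,
        p.index_id,
        COUNT(*)                        AS buffered_pages,
        COUNT(*) * 8 / 1024.0           AS buffered_mb
FROM    sys.dm_os_buffer_descriptors bd
JOIN    sys.allocation_units au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN    sys.partitions p        ON p.partition_id        = au.container_id
WHERE   bd.database_id = DB_ID()
GROUP BY p.object_id, p.index_id
ORDER BY buffered_pages DESC;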