I am using Ola Hallengren's scripts for index and stats maintenance, but I am wondering what most people do in terms of maintenance schedules. At present we do an index rebuild/reorganize weekly, but do people also update stats nightly?
I suppose there is an element of "it depends" here, in that the data may be fairly static so the stats update may not be required, or if it is heavily updated then perhaps indexes need to be rebuilt more frequently.
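For reference, a fairly common pattern with Ola Hallengren's solution is a heavier weekly IndexOptimize run for the indexes plus a lighter nightly run that only refreshes statistics with row modifications. The parameter values below are an illustrative sketch, not a recommendation:

-- Weekly: reorganize/rebuild based on fragmentation and refresh the statistics touched
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL';

-- Nightly: statistics only, and only those with row modifications since the last update
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';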
This calls the stored procedure that does the reindex. It fails at the update statistics step with a very generic message, like: "Command: UPDATE STATISTICS [xxxx_DB].[dbo].[xxxx_xxx] [_WA_Sys_00000007_49C3F6B7] [SQ... The step failed."
I suspect there is more to the error, but this is all it shows me when I right-click on the job history. I therefore updated the job step on the Advanced tab to log output to a text file. Am I on the right track, or is there another way to see the error somewhere else?
I looked at the logs but they didn't show anything.
We see slow performance, i.e. the same query takes a long time to execute, after we rebuild and reorganize indexes. But after executing the query or procedure 2-3 times, performance is faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
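For what it's worth, a rebuild refreshes the statistics of the rebuilt index (with the equivalent of a full scan), while a reorganize does not touch statistics at all, so an explicit stats update after a reorganize is often worthwhile. A minimal sketch, assuming a hypothetical table dbo.YourTable with an index IX_YourIndex:

-- REORGANIZE does not update statistics, so follow it with an explicit update
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;
UPDATE STATISTICS dbo.YourTable WITH RESAMPLE;   -- or WITH FULLSCAN if the window allows

-- REBUILD already refreshes the statistics of the index being rebuilt
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;

The first slow execution or two after maintenance is often just plan recompilation and a cold cache rather than missing statistics.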
We have a file import job. This job typically imports millions of records into a SQL 2008 DB. After the load, the DB performance goes down the drain. Thus far, their solution has been to rebuild indexes on the affected tables. I'd like to come up with a better solution. My guess is that after the load, the statistics are shot until the next stats update.
What is the best way to handle this scenario? There must be some way to keep the stats current during a big data load.
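One hedged option, assuming the import lands in a table such as dbo.ImportTarget (a hypothetical name), is to make an explicit statistics update the last step of the load job rather than waiting for auto-update to kick in:

-- Run as the final step of the import job
UPDATE STATISTICS dbo.ImportTarget WITH FULLSCAN;
-- UPDATE STATISTICS dbo.ImportTarget WITH SAMPLE 25 PERCENT;   -- cheaper alternative for very large tables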
When I checked the properties of the statistic, I can see it is on a varchar(3) field where there are only 3 distinct values, all char(1).
The total size of the data in the table, according to the Disk Usage by Top Tables report, is 199,680,712 KB.
So my question is this...
For the UPDATE STATISTICS on this one column with FULLSCAN, does SQL Server read the entire table into the buffer pool? If so, and the table holds 199,680,712 KB of data, why did the session request only 145,705,216 KB?
Or does SQL Server just read the column and the clustered index key into the buffer pool?
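One way to see what actually ended up in memory after the UPDATE STATISTICS run is to count the pages sys.dm_os_buffer_descriptors holds for the table's allocation units. A sketch, assuming a hypothetical table name dbo.YourLargeTable:

SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*)            AS buffered_pages,
       COUNT(*) * 8 / 1024 AS buffered_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions       AS p  ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.YourLargeTable')
GROUP BY p.object_id;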
My SSIS package is running very slowly and taking a long time to execute; one task is taking 2 hours to insert 100k records. I have disabled unused indexes and it is still slow. I rebuild/refresh indexes and stats once a month; if I do it daily, will it improve my SSIS package performance?
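Before moving to a daily rebuild, it may be worth checking whether the indexes and statistics on the target table are actually stale after each load. A sketch against a hypothetical dbo.TargetTable:

-- Fragmentation of each index on the table
SELECT i.name AS index_name, ps.avg_fragmentation_in_percent, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TargetTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- When each statistic on the table was last updated
SELECT s.name AS stats_name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.TargetTable');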
Has anyone noticed a performance improvement during trading hours when they replaced sp_updatestats with UPDATE STATISTICS FULLSCAN in their nightly maintenance? Or is it negligible?
At one of our client sites we have configured Always On in synchronous mode. We also have a scheduled rebuild index and update statistics job which runs at night every alternate day. The issue is that there are more than 100 sleeping sessions blocking the update statistics job.
I have to stop the update statistics job manually once I get to the office.
Once, I killed the blocking sleeping session, but then another sleeping session blocked it, and so on.
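To see exactly which sleeping sessions hold the locks that block the stats job (before deciding whether to kill them or move the schedule), something like the following can help; this is a generic blocking check, not specific to the Always On setup:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       s.status,
       s.program_name,
       s.login_name,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- Last statement submitted by a given blocking (sleeping) session
SELECT t.text
FROM sys.dm_exec_connections AS c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE c.session_id = 123;   -- placeholder: a blocking session_id from the query above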
I'm fairly new to SQL Server and I'm just wondering if it's possible to update statistics for all indexes somehow? I'm looking at the UPDATE STATISTICS command and it doesn't seem to be possible.
The situation we have is a reporting DB that basically has all its tables truncated and rebuilt every night by some DTS jobs that import from another data source, transform the data, build some denormalized tables, etc. Some of the large insert operations go from taking 8 minutes to taking several hours, and updating the stats seems to fix the problem for a while. So I'd like to make sure the optimizer has the latest stats for all tables.
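Two simple options: sp_updatestats, which only touches statistics whose underlying rows have changed, or the undocumented sp_MSforeachtable with an explicit UPDATE STATISTICS when a full scan of every table is really wanted. A sketch:

-- Option 1: update only the statistics that have had row modifications
EXEC sp_updatestats;

-- Option 2: brute force, full scan on every table (can run for a long time)
EXEC sp_MSforeachtable @command1 = 'UPDATE STATISTICS ? WITH FULLSCAN';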
Does anyone know how to tell how long an auto update statistics took to run? I looked at DBCC SHOW_STATISTICS and it shows the time the stats were last updated, but not how long the update took. Thanks.
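On recent versions of SQL Server, one way to capture this is an Extended Events session on the auto_stats event, which reports a duration for each automatic statistics update; availability of the event on older builds may vary. A sketch (the session and database names are placeholders):

CREATE EVENT SESSION [auto_stats_timing] ON SERVER
ADD EVENT sqlserver.auto_stats
(
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'YourDatabase'
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [auto_stats_timing] ON SERVER STATE = START;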
I am having the following errors with the script below
Msg 102, Level 15, State 1, Line 44 Incorrect syntax near '?'. Msg 319, Level 15, State 1, Line 47
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon. It also does not seem to loop through each database.
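An "Incorrect syntax near '?'" error usually means the '?' placeholder is being referenced outside the command string, since sp_MSforeachdb only substitutes the database name inside the quoted @command1 text. A hedged sketch of the looping pattern the script appears to be aiming for:

-- '?' is replaced with each database name only inside the @command1 string
EXEC sp_MSforeachdb @command1 = '
IF DB_ID(''?'') > 4   -- skip the system databases
BEGIN
    PRINT ''Updating statistics in ?'';
    EXEC [?].dbo.sp_updatestats;
END';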
Recently we moved a few of our databases from SQL 2000 to SQL 2005 (SP2) using backup and restore. After the restore I did a reindex and updated stats on the databases. Since then we have observed performance issues on the SQL 2005 databases, but the problem vanishes the moment we run sp_updatestats. Is this a problem with SQL 2005, that we have to run sp_updatestats 2 or 3 times a day? In SQL 2000 we ran it only once a week and never had any performance issues. Is there any config change we need to make to fix this problem in SQL 2005?
(Assuming SQL Server 2000, auto create statistics on, auto update statistics on.) Does DBCC DBREINDEX(<tablename>) update statistics? If yes, are the statistics equivalent to those that would be produced by: UPDATE STATISTICS <tablename> WITH FULLSCAN?
Recently a production server suffered a critical blocking period and I wanted to know if I could solicit some input. It seems that a stored procedure was in the middle of recompiling when an auto update statistics started. This caused blocking for about an hour on the single object (stored procedure) that was originally called. The table that the update occurred on, and that the stored procedure reads from, is quite large: 2 million rows and about 140 columns wide. Some info from sysprocesses is below. The table alone takes up almost 4 GB of space according to sp_spaceused. I have some questions:
1. Can the update statistics for a '_WA%' stat cause blocking on a table?
2. Does an update stats on an index survive a restart of SQL Server? We tried restarting, but the blocking did not end.
3. If the stored procedure is being compiled, can the server automatically start an update stats and cause the stored procedure to wait?
4. Can the server automatically start an update stats on more than one column statistic at a time, causing one to be blocked by the other?
5. We had never seen this issue before going to SQL 2000 clustering. Is this something specific to SQL 2000 and not SQL 7?
Thanks for your input. John Lee
This is the lock info for the blocking processes.
spid   dbid  ObjId       IndId  Type  Resource         Mode   Status  name
------ ----- ----------- ------ ----- ---------------- ------ ------- -------------------------
142    7     2           1      KEY   (6f00035ef42b)   S      GRANT   sysindexes
142    7     2           1      KEY   (6f00035ef42b)   S      GRANT   sysindexes
142    7     421576540   0      TAB                    Sch-S  GRANT   tJob
142    7     1141579105  0      TAB                    Sch-S  GRANT   tPatient_info
142    7     1141579105  0      TAB   [UPD-STATS]      Sch-M  GRANT   tPatient_info
142    7     1659921035  0      TAB   [COMPILE]        X      GRANT   iDBGetPatInfoRecord
142    7     1659921035  0      TAB                    Sch-S  GRANT   iDBGetPatInfoRecord
These are the processes that are being blocked:
spid
------
137
140
Below this is a snapshot of all the SQL processes on the server being blocked. Save the report and send to the whole database group.
Microsoft states that DBCC DBREINDEX automatically updates statistics but INDEXDEFRAG does not. If this is the case, does MS mean that only the affected statistics are updated, or all statistics? Also, is it a good idea to run UPDATE STATISTICS after doing an INDEXDEFRAG?
How do I update the stats on tables when we insert data into them? I believe an auto stats update happens only when 500 + 20% of the rows of a table have changed. Once we insert, say, some 1000 records into a particular table, the query takes too long (more than 1 min). The same query executes faster once I manually update the stats.
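Since 1,000 new rows will rarely cross the 500 + 20% auto-update threshold on a table of any size, one hedged option is to update statistics on that table explicitly as part of the insert job. A sketch with a hypothetical table and index name:

-- Run right after the batch insert completes
UPDATE STATISTICS dbo.OrdersStage WITH FULLSCAN;

-- Or target just the statistics the slow query relies on, if known
UPDATE STATISTICS dbo.OrdersStage (IX_OrdersStage_CustomerId) WITH FULLSCAN;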
We are planning to standardize our newly deployed SQL Server. As part of this we have configured 2 maintenance plans: 1) Update Statistics, which runs daily, and 2) Index Reorganize, which runs weekly.
Apart from the above, is there anything else that should be in place for better maintenance of the SQL Server?
Also, does the Index Rebuild activity for clustered indexes require any downtime?
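On the downtime question: an offline rebuild of a clustered index takes a schema-modification lock and blocks access to the table for the duration, whereas Enterprise Edition supports ONLINE = ON, which keeps the table available for most of the operation. A sketch, with hypothetical names:

-- Offline rebuild (the table is unavailable while it runs)
ALTER INDEX PK_YourTable ON dbo.YourTable REBUILD;

-- Online rebuild (Enterprise Edition feature; the table stays readable and writable for most of the operation)
ALTER INDEX PK_YourTable ON dbo.YourTable REBUILD WITH (ONLINE = ON);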
I have this T-SQL code which will get some table stats on one database at a time. I was wondering how I would get it to loop through all databases so it will pull the stats from all tables in all databases. Here is my code:
Select object_schema_name(UStat.object_id) + '.' + object_name(UStat.object_id) As [Object Name]
    ,Case When Sum(User_Updates + User_Seeks + User_Scans + User_Lookups) = 0 Then Null
          Else Cast(Sum(User_Seeks + User_Scans + User_Lookups) As Decimal) / Cast(Sum(User_Updates
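The posted query is cut off, but since sys.dm_db_index_usage_stats is a server-wide DMV that already carries a database_id column, one hedged option is to skip the per-database loop entirely and resolve the names with the two-argument forms of OBJECT_NAME and OBJECT_SCHEMA_NAME. A sketch along those lines (the read ratio expression is an assumption, since the original was truncated):

SELECT DB_NAME(UStat.database_id) AS [Database Name],
       OBJECT_SCHEMA_NAME(UStat.object_id, UStat.database_id) + '.'
           + OBJECT_NAME(UStat.object_id, UStat.database_id) AS [Object Name],
       CASE WHEN SUM(User_Updates + User_Seeks + User_Scans + User_Lookups) = 0 THEN NULL
            ELSE CAST(SUM(User_Seeks + User_Scans + User_Lookups) AS DECIMAL)
                 / CAST(SUM(User_Updates + User_Seeks + User_Scans + User_Lookups) AS DECIMAL)
       END AS [Read Ratio]
FROM sys.dm_db_index_usage_stats AS UStat
WHERE UStat.database_id > 4   -- skip the system databases
GROUP BY UStat.database_id, UStat.object_id;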
I wanted to demonstrate something about the CXPACKET wait type. For the purposes of this demo I created a query in the AdventureWorks database that uses a parallel query plan, an extended events session that captures the wait statistics for a single session, and a query that shows the extended events data. I ran it and it worked fine. Then I dropped and recreated the event session (to clear the data), and in a new window I wrote a transaction that updated the table followed by a WAITFOR statement, so that the first query would be blocked for a few seconds, and ran the whole thing again. The select statement was blocked as expected (it ran for 9 seconds instead of the 1 second it took without the blocking), but the wait stats I got were almost identical to those I got without blocking the query.
--Query that uses a parallel query plan
With MyCTE as
(
    select top 50 * from Sales.SalesOrderHeader
)
select top 10000 *
from Sales.SalesOrderHeader, MyCTE
order by newid()
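For completeness, a per-session wait capture with Extended Events looks roughly like the following; this is a reconstruction rather than the exact session from the post, and session_id 53 is a placeholder for the SPID running the test query:

CREATE EVENT SESSION [session_waits] ON SERVER
ADD EVENT sqlos.wait_info
(
    WHERE sqlserver.session_id = 53   -- placeholder: the SPID of the window running the test query
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [session_waits] ON SERVER STATE = START;

One thing worth checking is whether the capture or the reporting query only aggregates parallelism-related waits; the blocking itself surfaces as a lock wait (e.g. LCK_M_S) rather than CXPACKET.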
Msg 2601, Level 14, State 1, Procedure DFP_report_load, Line 161 Cannot insert duplicate key row in object 'dbo.DFP_Reports_History' with unique index 'ix_report_history_creative_id'.
The duplicate key value is (40736326382, 1, 2015-07-03, 67618862, 355324).
Msg 3621, Level 0, State 0, Procedure DFP_report_load, Line 161
The statement has been terminated.
Exception in Task: Cannot insert duplicate key row in object 'dbo.DFP_Reports_History' with unique index 'ix_report_history_creative_id'. The duplicate key value is (40736326382, 1, 2015-07-03, 67618862, 355324).
We have recently migrated quite a few databases, around 20, from SQL 2000 and 2005 to SQL Server 2008 R2.
I am using Ola's script for index maintenance for those with a compatibility level above 80, as I heard that is the way it is supported.
Hence we separated this into two jobs; for the databases still at compatibility level 80, we are running a job with the query below against each database.
USE ABC
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I am not sure if this is the only way for those databases, because we are seeing the database log grow heavily because of the above query (the log file seems to fill very rapidly).
But the above is not the case with the databases at compatibility level 90 and above.
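On the log growth: a full index rebuild with DBCC DBREINDEX is fully logged under the FULL recovery model, so if these databases are in FULL recovery, taking log backups more frequently during the maintenance window (or temporarily switching to BULK_LOGGED, where that is acceptable) keeps the log file in check. A sketch, reusing the ABC database name from the query above and a hypothetical backup path:

-- Check the recovery model
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'ABC';

-- During the maintenance window, back up the log more frequently, e.g. every 15 minutes
BACKUP LOG [ABC] TO DISK = N'X:\Backups\ABC_log.trn';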
My index maintenance job that was set up through an Enterprise Manager database maintenance plan fails with the following notice. It ran great for several weeks, then it started failing. Any suggestions?
sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
Hi, does anyone here know of any online references on how to optimize index maintenance in SQL 2005? Also, do you know of any good DBA books that explain database maintenance and/or best practices?
The Rebuild Index maintenance plan is failing. Since we don't have space on the C: drive, we have left the option as it is, to sort the results in the respective user databases. These user databases are on E: with sufficient space to rebuild the indexes.
Check the below details.
SQL Server 2005: Microsoft SQL Server 2005 - 9.00.5000.00 (X64) Dec 10 2010 10:38:40 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
Does SQL Server 2005 Standard Edition support online reindexing? Is the job failing because these options (sort results in tempdb & keep index online while reindexing) are not checked (enabled)?
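Note that online index operations (and therefore the "keep index online while reindexing" option) are an Enterprise Edition feature in SQL Server 2005, so they fail on Standard Edition; SORT_IN_TEMPDB, on the other hand, is available in all editions. The edition can be confirmed with:

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;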
Are there any examples of maintenance (rebuild, FULL or INCREMENTAL population) for full-text indexes? Are there any index integrity checks that can be done? What is the best way to back up a full-text index?
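For reference, full-text population can be kicked off explicitly with ALTER FULLTEXT statements; a sketch with hypothetical catalog and table names:

-- Rebuild an entire full-text catalog
ALTER FULLTEXT CATALOG MyFtCatalog REBUILD;

-- Or repopulate a single table's full-text index
ALTER FULLTEXT INDEX ON dbo.Documents START FULL POPULATION;
-- ALTER FULLTEXT INDEX ON dbo.Documents START INCREMENTAL POPULATION;   -- requires a timestamp column on the table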
Greetings. My Fuzzy Lookup task works beautifully when it generates the lookup index every time it runs, but as I'm planning on running this hundreds of times, I'd like it to maintain the index via the trigger. However, when it attempts to install the trigger via sp_FuzzyLookupTableMaintenanceInstall I get:
Description: "A .NET Framework error occurred during execution of user-defined routine or aggregate "sp_FuzzyLookupTableMaintenanceInstall": System.Data.SqlClient.SqlException: Error number 8101 is invalid.
(I've not included the full stack trace as I figured this would be enough)
The table currently has an After Insert and After Update trigger. CLR integration is enabled in this database instance. Is there some other option I need to set somewhere?
For one of my SSIS projects that does a fuzzy lookup on a table, I opted to create an index and to maintain the stored index. The index got created and subsequent project executions were able to use that index.
Now I want to update certain rows in that table. When I run the update statement I get the following error.
How can I retain the index and still be able to update the table?
update location_stage set batchid = 'APR07N'
where batchid is null and eventid = '20070528020041';
Msg 6549, Level 16, State 1, Procedure sp_FuzzyLookupTableMaintenanceInvoke, Line 0
A .NET Framework error occurred during execution of user defined routine or aggregate 'sp_FuzzyLookupTableMaintenanceInvoke':
System.Data.SqlClient.SqlException: Transaction is not allowed to roll back inside a user defined routine, trigger or aggregate because the transaction is not started in that CLR level. Change application logic to enforce strict transaction nesting.
System.Data.SqlClient.SqlException:
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnectionSmi.ExecuteTransaction(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction)
at System.Data.SqlClient.SqlInternalTransaction.Rollback()
at System.Data.SqlClient.SqlTransaction.Rollback()
at Microsoft.SqlServer.Dts.TxBestMatch.TableMaintenance.TranWrap(DataCleaningOperation c)
I am using a full-text index to index emails stored in a BLOB column of a table. The indexing process parses the stored emails, and if there are one or more files attached to an email, those documents get indexed too. As a result, when I query the full-text index for a word or phrase, I get a reference to the email containing the word or phrase of interest whether the word was used in the email body OR in any document attached to the email.
How can I distinguish in a full-text query whether the result came from an embedded document rather than from the "main" document? Or, if that's not possible, how can I disable indexing of embedded documents?
My goal is either to give a user the option of searching emails only (email bodies) OR emails AND the documents attached to them, or at least to clearly indicate in the returned result the real source where the word or phrase was found.
We recently upgraded SQL Server 2005 x64 to SP2 and up to build 3159. Since then, the maintenance plan for index reorgs has failed with a System.OutOfMemoryException error. No other errors are logged anywhere. The plan report file has no information either.
We have a new database with CDC enabled on all of its tables. This causes the index maintenance task to fail with the following message:
"Executing the query "EXEC DBName.dbo.IndexDefrag_sp" failed with the following error: "The unique index 'PK_TableName' on source table '[dbo].[TableName]' is used by Change Data Capture. To alter or drop the index, you must first disable Change Data Capture on the table. The transaction ended in the trigger. The batch has been aborted.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly" We would like to run the index maintenance without losing the cdc data. We plan on installing SP2 on SQL Server 2008 R2 soon, would that solve the issue? Disabling the cdc prior to index maintenance and then re-enabling back upon completion; would delete the data as I found in most discussions, but we would like to retain it.
I have a clustered index that consists of 3 int columns in this order: DateKey, LocationKey, ItemKey (there are many other columns in this data warehouse table such as quantities, prices, etc.).
Now I want to add a non-clustered index on just one of the other columns, say LocationKey, like this: CREATE INDEX IX_test on TableName (LocationKey)
I understand that the clustered index keys will also be added as key columns to any NC indexes. So, in this case the NC index will also get the other two columns from the clustered index added as key columns. But, in what order will they be added?
Will the resulting index keys on this new NC index effectively be:
LocationKey, DateKey, ItemKey OR LocationKey, ItemKey, DateKey
Do the clustering keys get added to a NC index in the same order as they are defined in the clustered index?
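One way to test the key order empirically is to build a small copy of the structure and compare query plans: an ORDER BY that matches the non-clustered index's actual key order (including the appended clustering keys) should avoid a Sort operator. A sketch with hypothetical names:

CREATE TABLE dbo.FactSales
(
    DateKey     int NOT NULL,
    LocationKey int NOT NULL,
    ItemKey     int NOT NULL,
    Quantity    int NOT NULL
);

CREATE CLUSTERED INDEX CIX_FactSales ON dbo.FactSales (DateKey, LocationKey, ItemKey);
CREATE INDEX IX_test ON dbo.FactSales (LocationKey);

-- Compare the estimated plans: if the clustering keys are appended in clustered-key order,
-- the first ORDER BY should avoid a Sort operator and the second should not.
SELECT LocationKey, DateKey, ItemKey FROM dbo.FactSales ORDER BY LocationKey, DateKey, ItemKey;
SELECT LocationKey, DateKey, ItemKey FROM dbo.FactSales ORDER BY LocationKey, ItemKey, DateKey;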