SQL Server 2012 :: Script To Reorganize All Enabled Indexes
Jul 30, 2015
My index reorganise maintenance plan fails, partly due to disabled indexes:
Executing the query "ALTER INDEX [I_ModelSecurityCommon_RECID] ON [dbo]...
" failed with the following error: "Cannot perform the specified operation on disabled index 'I_ModelSecurityCommon_RECID' on table 'dbo. Model SecurityCommon'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I don't want to delete the indexes as they are standard indexes that were on the DB from install. Is there any script that will reorganise all enabled indexes? And also one to rebuild?
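A rough sketch of the kind of script that skips disabled indexes (it assumes it runs in the target database and generates REORGANIZE statements from the system catalogs; swap REORGANIZE for REBUILD if a rebuild is wanted instead, and note that a disabled index can only be brought back with REBUILD):

-- Generate and run ALTER INDEX ... REORGANIZE for enabled, non-heap indexes only
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' REORGANIZE;' + CHAR(13)
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE i.is_disabled = 0        -- skip disabled indexes
  AND i.type > 0               -- skip heaps
  AND i.is_hypothetical = 0;

EXEC sys.sp_executesql @sql;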
I am using the Maintenance Plan wizard, but it only allows me to select either the "reorganize data and indexes" option or the "update statistics" option (in the Optimizations tab). I can't select both of them. What is the reason for this?
Hi, I have a script to rebuild and reorganize indexes for Sybase, i.e. I have commands along the lines of reorg rebuild index... Now I want the equivalent commands for MS SQL Server. Please help me.
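The SQL Server counterparts are ALTER INDEX ... REBUILD and ALTER INDEX ... REORGANIZE; a minimal sketch against a hypothetical dbo.MyTable and index name:

-- Rebuild every index on one table (offline unless ONLINE = ON is available on your edition)
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Reorganize a single index (always an online, lighter-weight operation)
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable REORGANIZE;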
What are the driving criteria for creating filtered indexes on SQL Server? I am trying to analyze the index stats through DMVs and histograms, and have to determine whether filtered indexes should be created on the tables. This exercise has to be done for all the transaction tables in the database. What approaches should I be looking at?
There was a deadlock on the DB because of huge writes on one of the big tables. Having a filtered index on this table for the affected column would reduce the time taken for write operations. Hence we are looking at creating filtered indexes appropriately.
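For reference, a filtered index is simply a nonclustered index with a WHERE clause, so it only stores and maintains the matching rows; a hypothetical sketch (table, columns and filter are illustrative only):

-- Index only the slice of rows the hot queries touch, cheaper to maintain during heavy writes
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerId, OrderDate)
WHERE Status = 'Open';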
I have a table on which the following query is based. I need to build indexes so that the query will perform better; right now it's very slow.
SELECT DISTINCT C.[afflt_cust_natl_key], [as_of_dt]
FROM [dbo].[SF_Affiliate_Customer] C
WHERE ( [afflt_intrnl_cust_ind] = 'N'
    AND [afflt_empl_ind] = 'N'
    AND (ISNULL([phys_addr_st_rgn_cd], '') <> 'CA' AND ISNULL([mlng_addr_st_rgn_cd], '') <> 'CA') ) AND
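Since the query is cut off above, only a starting point can be sketched: one possible covering index keyed on the equality predicates, with the selected columns and the ISNULL-wrapped columns (which are not sargable) carried as included columns, would look something like this:

-- Hypothetical covering index for the query fragment shown above
CREATE NONCLUSTERED INDEX IX_SF_Affiliate_Customer_IndFlags
ON dbo.SF_Affiliate_Customer (afflt_intrnl_cust_ind, afflt_empl_ind)
INCLUDE (afflt_cust_natl_key, as_of_dt, phys_addr_st_rgn_cd, mlng_addr_st_rgn_cd);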
I have a scenario where I have 3 columns, and all 3 of them are used in the WHERE clauses of simple queries or queries with joins.
TABLE ( Column1 int, FLAG1 bit, FLAG2 bit )
Sample queries:
Select * from TABLE where FLAG1 = 1 and FLAG2 = 0 (any combination of these flags)
Select * from TABLE inner join SOMEOTHERTABLE on TABLE.Column1 = SOMEOTHERTABLE.Column1 where FLAG1 = 1 and FLAG2 = 0 (any join and combination of flags)
Questions:
What would be the best nonclustered index strategy:
Column1 as the index key with FLAG1 and FLAG2 as included columns, or Column1, FLAG1 and FLAG2 all in the index key?
Points to note:
The queries are part of an ETL process and are used to track new records vs. old records. The flags switch states within the same job, so if we create an index on all 3 columns, the index has to be reorganized more than once based on the flag states. If we keep the flags in the include list, then it is only a matter of updating the leaf data with the latest flag values.
On the other hand, an index on all 3 columns will result in an index seek alone, whereas with the included list there will be an index seek plus a predicate.
Does the predicate cause more overhead than reorganizing the index, or is it the opposite?
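For comparison, the two candidate indexes from the question, sketched against a hypothetical dbo.TrackingTable:

-- Option A: Column1 as the only key column, flags carried in the leaf pages
CREATE NONCLUSTERED INDEX IX_Tracking_Col1_IncFlags
ON dbo.TrackingTable (Column1)
INCLUDE (FLAG1, FLAG2);

-- Option B: all three columns in the index key
CREATE NONCLUSTERED INDEX IX_Tracking_Col1_Flags
ON dbo.TrackingTable (Column1, FLAG1, FLAG2);

With option A, flipping a flag only updates the leaf row in place; with option B, the changed key value can force the row to move to a different position in the index.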
It's often said, or done, that when inserting into or updating a 'large' table, disabling the non-clustered indexes is needed for performance.
Now I know the obvious way to find out whether this is best is by testing the different options; I was wondering if there is a rule of thumb for it.
Say you have a table with half a billion rows and 4 non-clustered indexes and you are only updating half a million rows: disabling every night and re-enabling can sometimes take far more time than the actual update. I haven't found any articles advising to disable them when a table is over X rows and you are updating Y% of them...
We have a server with a database with filestream enabled. The filestream data is in a filegroup with three files spread across 3 LUNs F:, G:, and H: each with a capacity of 1.8 TB.
The file stream containers in those three LUNs reference the same column in the same table.
The F: drive has only 64 GB of free space left; the H: drive, however, has around 700 GB free.
We are looking to move some filestream content from the container in F: to the container in H:.
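One option that may fit, with the caveat that it empties the whole F: container rather than part of it: SQL Server 2012 supports DBCC SHRINKFILE with EMPTYFILE on FILESTREAM containers, which migrates the data to the remaining containers in the same filegroup (the database and logical file names below are assumptions; check sys.database_files for the real ones):

-- Find the logical names of the FILESTREAM containers
USE MyFilestreamDb;
SELECT file_id, name, physical_name
FROM sys.database_files
WHERE type_desc = 'FILESTREAM';

-- Move the data out of the F: container into the other containers in the filegroup
DBCC SHRINKFILE (FS_Data_F, EMPTYFILE);

-- The old files only disappear after FILESTREAM garbage collection catches up (log backups help in FULL recovery)
EXEC sp_filestream_force_garbage_collection @dbname = N'MyFilestreamDb';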
I am aware that TDE protects data at rest and not data in motion (unless you use encrypted communication channels, e.g. SSL certificates). Hence I am thinking of doing a data export from a TDE-encrypted database to a database on an instance where TDE is not enabled or supported. I believe it works, and I need to take care of the relationships between tables. The target database is hosted on SQL 2012 Standard Edition, on which TDE is not supported.
I have a database that is the publisher in transactional replication and is also part of an availability group. I have put the pertinent certificates on all of the involved servers, and it is encrypted on all servers and operates as expected. However, we are adding additional security for personal data, and we have targeted columns in multiple tables for column encryption. I have a master key and certificates stored in the master database. Following an example, I create the database master key:
-- Create database master key
USE encrypt_test;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Password123';
GO
But when I try to create a certificate on the database:
-- Create self-signed certificate
USE encrypt_test;
GO
CREATE CERTIFICATE Certificate1 WITH SUBJECT = 'Protect Data';
GO
I get the following: Msg 15151, Level 16, State 1, Line 1 Cannot find the certificate 'Certificate1', because it does not exist or you do not have permission.
Can I add a database certificate to an already TDE-enabled database, and if not, do I create the symmetric key through the certificate located in the master database? And how will that affect decrypting the column values in stored procedures and functions on the user database?
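For the column-encryption side (which is independent of the TDE certificate sitting in master), the usual pattern is a symmetric key in the user database protected by the certificate there; a sketch with hypothetical table and column names:

USE encrypt_test;
GO
-- Symmetric key protected by the certificate created in this database
CREATE SYMMETRIC KEY SymKey1
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE Certificate1;
GO
-- Typical use inside a procedure or ad hoc query
OPEN SYMMETRIC KEY SymKey1 DECRYPTION BY CERTIFICATE Certificate1;

UPDATE dbo.Customer
SET ssn_encrypted = ENCRYPTBYKEY(KEY_GUID('SymKey1'), ssn_plain);

SELECT CONVERT(varchar(11), DECRYPTBYKEY(ssn_encrypted)) AS ssn_decrypted
FROM dbo.Customer;

CLOSE SYMMETRIC KEY SymKey1;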
Our SQL Server was not responding, so we restarted the server and modified the code of one of the stored procedures. Since then we are getting the error below roughly every 2 minutes:
The queue 855365233 in database 9 has activation enabled and contains unlocked messages but no RECEIVE has been executed for 453199 seconds
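To put names on the ids in that message (the numbers below are the ones from the error text), something like this can be run:

-- Which database is id 9?
SELECT DB_NAME(9) AS database_name;

-- Which queue is object 855365233? Run this inside that database.
SELECT name, is_receive_enabled, is_activation_enabled, activation_procedure
FROM sys.service_queues
WHERE object_id = 855365233;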
I am looking for a sample PowerShell script that allows me to verify that showplan is enabled for a user on a SQL Server 2012 instance. Haven't figured out how to code it.
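The permission check itself is plain T-SQL (the user name below is a placeholder), which could then be wrapped in a PowerShell call such as Invoke-Sqlcmd:

-- Impersonate the user and list the effective SHOWPLAN permission at database scope
EXECUTE AS USER = 'SomeUser';
SELECT permission_name
FROM fn_my_permissions(NULL, 'DATABASE')
WHERE permission_name = 'SHOWPLAN';
REVERT;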
I receive Error: 3967, Severity: 17, State: 1. Insufficient space in tempdb to hold row versions. We have 8 data files for tempdb, 10210 GB in size, and 10240 GB is set as the max size.
As MS suggests, to calculate the tempdb file size and growth rate we need to monitor the performance counters Free Space in Tempdb (KB) and Version Store Size (KB) in the Transactions object.
Basic formula: [Size of Version Store] = 2 * [Version store data generated per minute] * [Longest running time (minutes) of your transaction]
My disk utilization report says tempdb is full. I think I need to shrink the file.
I am still confused about calculating the size; my performance counters give me the following data:
Free Space in tempdb (KB): 279938496
Version Generation rate (KB/s): 53681040
Version Cleanup rate (KB/s): 53422320
Version Store Size (KB): 258720
Version Store unit count: 22
Version Store unit creation: 774
Version Store unit truncation: 752
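Those counters can also be read straight from a DMV instead of PerfMon, which may be easier to script:

-- Version store / tempdb counters from the Transactions object
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Transactions%'
  AND counter_name IN ('Free Space in tempdb (KB)',
                       'Version Store Size (KB)',
                       'Version Generation rate (KB/s)',
                       'Version Cleanup rate (KB/s)');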
I have a scenario where a customer is going to be using log shipping to the DR site; however, we need to maintain the normal backup strategy on the current system (i.e. nightly full, 6-hourly differential and hourly transaction log backups). I know how to set up transaction log shipping and failover to DR, but now the local backup strategy is going to be an issue. I currently use the [URL] .... maintenance solution.
Is it even possible to do regular backups locally keeping data integrity for your backup strategy with Transaction Log Shipping enabled?
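If the conflict is the extra log backups interfering with the log-shipping chain, COPY_ONLY backups are the usual escape hatch; a minimal sketch (database name and paths are placeholders):

-- Full backup taken outside the normal schedule without resetting the differential base
BACKUP DATABASE MyDb TO DISK = N'X:\Backup\MyDb_full_copyonly.bak' WITH COPY_ONLY, COMPRESSION;

-- Log backup taken outside the log-shipping jobs without breaking the log backup chain
BACKUP LOG MyDb TO DISK = N'X:\Backup\MyDb_log_copyonly.trn' WITH COPY_ONLY;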
We face a slow performance issue: the same query takes a long time to execute after we rebuild and reorganize indexes, but after executing the query or procedure 2-3 times, performance gets faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
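On point 1: a REBUILD refreshes that index's statistics as a side effect, a REORGANIZE does not, so a step like the following is commonly added after reorganizing (the table name is a placeholder):

-- Refresh statistics after ALTER INDEX ... REORGANIZE
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- or, more broadly, for the whole database
EXEC sp_updatestats;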
1) When we create indexes, key columns are the columns used in the WHERE clause, and included columns are the columns that can be used in the SELECT list and on JOIN clause columns.
2) I am thinking that we should create a new index only if we find it saves at least 50 msec.
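A small illustration of point 1, with a hypothetical table and query shape:

-- Query shape: SELECT Amount FROM dbo.Orders WHERE CustomerId = @c AND OrderDate >= @d
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
ON dbo.Orders (CustomerId, OrderDate)  -- key columns: used to seek on the WHERE clause
INCLUDE (Amount);                      -- included column: only has to be readable at the leaf level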
How are indexes allocated on pages? When a CREATE INDEX statement is executed in a query window, the query processor parses and executes the query; but once it has been parsed, who decides how the index is laid out onto pages, the Storage Engine or the Query Processor (Query Optimizer)? Does it work the same way as UPDATE statements do in the Query Optimizer?
I have a new cluster (2 sync, 2 async) with about 50 databases ranging from 1 to 200 GB (all of the objects are compressed). It is on SQL Server 2012 SP1 CU7. I have several drives for logs with 200 GB of space on them. I am having issues rebuilding indexes in this environment: I have a table whose clustered index is heavily fragmented (~80%), and the table holds about 60 GB of data, which uncompressed would be about 160 GB.
The index rebuild creates enough log to consume all the space I have for logs, and that is only 1 table, so my old process for maintaining indexes (Ola Hallengren's code) certainly won't work in this scenario.
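One hedged workaround when the log drive cannot absorb a full rebuild: REORGANIZE does its work in many small transactions, so running frequent log backups alongside it lets the log keep truncating instead of growing in one piece (index, table and database names below are placeholders). In an availability group the log can still only truncate once the secondaries have hardened it, so a lagging async replica will hold it up regardless.

-- Reorganize instead of rebuild so the log can be truncated while the work runs
ALTER INDEX PK_BigTable ON dbo.BigTable REORGANIZE;

-- In parallel, e.g. from a job every few minutes, keep backing up the log
BACKUP LOG MyDb TO DISK = N'X:\Backup\MyDb_log.trn';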
I'm trying to improve the loading of some tables with large amounts of data that form part of an ETL. I was going to try removing any indexes before the insert to speed up the process, but I had some questions on whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable only the non-clustered indexes?
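A minimal sketch that leaves the clustered index alone (disabling a clustered index makes the whole table inaccessible, so only the nonclustered ones are touched; the table name is a placeholder):

-- Disable only the nonclustered indexes on the destination table
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(name) + N' ON dbo.MyTable DISABLE;' + CHAR(13)
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.MyTable')
  AND type_desc = 'NONCLUSTERED'
  AND is_disabled = 0;

EXEC sys.sp_executesql @sql;

-- ... run the load ...

-- Rebuilding afterwards also re-enables the disabled indexes
ALTER INDEX ALL ON dbo.MyTable REBUILD;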
Is there a practical performance limit on the number of indexes per table or per database? With filtered indexes there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, resulting in many more indexes on a single table.
Normally we rebuild or reorganize indexes when it is required. I set up a SQL job using a maintenance plan to run daily, rebuild/reorganize indexes and update statistics, but I do not know whether it only does the work that is actually needed. Does the plan rebuild only the indexes that require it, or does it run against all indexes whether they need it or not?
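A maintenance-plan rebuild/reorganize task generally runs against every selected index regardless of fragmentation; to act only where it is needed, the usual starting point is sys.dm_db_index_physical_stats with the commonly quoted 5%/30% thresholds (a sketch for the current database; Ola Hallengren's IndexOptimize automates the same idea):

-- List indexes with their fragmentation and a suggested action
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
       OBJECT_NAME(ps.object_id)        AS table_name,
       i.name                           AS index_name,
       ps.avg_fragmentation_in_percent,
       CASE WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
            WHEN ps.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
            ELSE 'leave alone' END       AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.index_id > 0          -- skip heaps
  AND ps.page_count > 1000     -- ignore tiny indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;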
I understand the difference between REBUILD and REORGANIZE. Just wondering if you can do both in the same script or do you have to rebuild the index first and later reorganize?
Will maintenance tasks like rebuilding and reorganizing indexes be replicated in transactional replication, or do I have to set up these maintenance tasks on the subscribers as well?
I am unable to access the default port 1433 on my SQL Server running on Windows Server 2003. There is no firewall. I run telnet <ip> <port> and get "Connection failed", which explains my inability to connect to the server for weeks. I have been unable to connect to the server since I uninstalled SQL 2000 and reinstalled it on my Windows 2003 server. Some of the things I have checked: Network Configuration in EM; Named Pipes and TCP/IP are installed with the port set to 1433; the registry value HKLM\Software\Microsoft\MSSQLServer\Client\SuperSocketNetLib\Tcp\DefaultPort is set to 1433. Is there anything I need to configure on the server side to have the default SQL Server port enabled?
Hello, I have to find out whether my servers have hyperthreading enabled or not. I am running Windows Server 2003 Standard Edition on many of my machines, and I have to configure the SQL Server configuration values according to hyperthreading. I know about the CPUcount.exe utility, but is there anything else apart from it?