What are the driving criteria for creating filtered indexes in SQL Server? I am trying to analyze the index stats through DMVs and histograms, and have to determine whether filtered indexes should be created on the tables. This exercise has to be done for all the transaction tables in the database. Which approaches should I be looking at?
There was a deadlock on the DB because of huge writes on one of the big tables. Having a filtered index on this table for the affected column would reduce the time taken for write operations, hence we are looking at creating filtered indexes appropriately.
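By way of illustration, one common input to that decision is how read-heavy versus write-heavy each index is, which the usage-stats DMV can show; the table, column, and index names below are hypothetical, so treat this only as a sketch of the approach.

-- Read vs. write counts per index in the current database
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name AS index_name,
        s.user_seeks + s.user_scans + s.user_lookups AS reads,
        s.user_updates AS writes
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
          ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY writes DESC;

-- If the queries against the big table only ever touch a small, well-defined slice of rows,
-- a filtered index limits the write overhead to that slice (hypothetical names):
CREATE NONCLUSTERED INDEX IX_BigTable_Status_Active
ON dbo.BigTable (StatusCode)
WHERE StatusCode = 'A';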
I have a query that I'm filtering using Customer ID (CustomerID = '12345'). Even though I need the query to filter on that value, I don't need to see that column in my results. I tried removing it from my SELECT DISTINCT list, but I'm guessing it needs to be there or the filter won't work (like I said, I'm very green). Is there something I can add to hide this column?
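For what it's worth, the filter column does not have to appear in the SELECT list at all; a minimal sketch with a made-up Orders table:

SELECT DISTINCT OrderDate, OrderTotal        -- CustomerID is not returned
FROM dbo.Orders                              -- hypothetical table
WHERE CustomerID = '12345';                  -- but it can still be used to filter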
I have a table on which the following query is based. I need to build indexes so that the query will perform better; right now it's very slow.
SELECT DISTINCT C.[afflt_cust_natl_key], [as_of_dt]
FROM [dbo].[SF_Affiliate_Customer] C
WHERE ( [afflt_intrnl_cust_ind] = 'N'
        AND [afflt_empl_ind] = 'N'
        AND (ISNULL([phys_addr_st_rgn_cd],'') <> 'CA' AND ISNULL([mlng_addr_st_rgn_cd],'') <> 'CA') ) AND
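The WHERE clause above is cut off, so this is only a starting-point sketch rather than a definitive design: the two equality columns as index keys and the remaining referenced columns in INCLUDE, so the visible part of the query can be answered from the index alone (the ISNULL() wrapping keeps the region-code columns from being useful as seek keys anyway).

CREATE NONCLUSTERED INDEX IX_SF_Affiliate_Customer_ind_flags
ON dbo.SF_Affiliate_Customer (afflt_intrnl_cust_ind, afflt_empl_ind)
INCLUDE (afflt_cust_natl_key, as_of_dt, phys_addr_st_rgn_cd, mlng_addr_st_rgn_cd);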
I have a scenario where I have 3 columns, and all 3 of them are used in the WHERE clauses of simple queries or of queries with joins.
TABLE (
    Column1 int,
    FLAG1 bit,
    FLAG2 bit
)
Sample queries :
Select * from TABLE where FLAG1 = 1 and FLAG2 = 0 (any combination of these flags)

Select * from TABLE inner join SOMEOTHERTABLE on TABLE.Column1 = SOMEOTHERTABLE.Column1 where FLAG1 = 1 and FLAG2 = 0 (any join and combination of flags)
Questions :
What would be the best nonclustered index strategy :
Column1 as the index key with FLAG1 and FLAG2 as included columns, or Column1, FLAG1, and FLAG2 all in the index key?
Points to note :
The queries are part of an ETL process and are used to track new records vs. old records. The flags switch states within the same job, so if we create an index on all 3 columns, the index has to be reorganized more than once based on the flag states. If we keep the flags in the include list, then it's only a matter of updating the leaf data with the latest flag values.
On the other hand, an index keyed on all 3 columns will result in an index seek alone, whereas with the included-column version there will be an index seek plus a residual predicate.
Does the predicate cause more overhead than reorganizing the index, or is it the opposite?
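For reference, the two alternatives being weighed would look roughly like this (TABLE is the placeholder name from above; in option 2 the flags are listed first on the assumption that the flag predicates should be able to seek):

-- Option 1: Column1 as the key, flags as included (leaf-only) columns
CREATE NONCLUSTERED INDEX IX_TABLE_Column1_incl_flags
ON dbo.[TABLE] (Column1)
INCLUDE (FLAG1, FLAG2);

-- Option 2: all three columns in the index key
CREATE NONCLUSTERED INDEX IX_TABLE_Flags_Column1
ON dbo.[TABLE] (FLAG1, FLAG2, Column1);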
It's often said, or done, that when inserting into or updating a 'large' table, disabling the non-clustered indexes is needed for performance.
Now I know the obvious way to find out whether this is best or not is by testing the different options; I was wondering if there is a rule of thumb for this?
Say you have a table with half a billion rows and 4 non-clustered indexes, and you are only updating half a million rows; sometimes disabling them every night and re-enabling them can take far more time than the actual update. I haven't found any articles advising to disable them when a table is over X rows and you are updating Y% of them...
My index reorganise maintenance plan fails partly due to the disabled indexes
Executing the query "ALTER INDEX [I_ModelSecurityCommon_RECID] ON [dbo]...
" failed with the following error: "Cannot perform the specified operation on disabled index 'I_ModelSecurityCommon_RECID' on table 'dbo. Model SecurityCommon'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I don't want to delete the indexes, as they are standard indexes that were on the DB from install. Is there any script that will reorganise all the enabled indexes, and also rebuild them?
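A rough sketch of a script along those lines; it only prints the ALTER INDEX statements (swap REORGANIZE for REBUILD where needed) and skips anything disabled, so the maintenance step no longer trips over those indexes:

SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
     + ' REORGANIZE;'
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.type = 'U'            -- user tables only
  AND i.type > 0              -- skip heaps
  AND i.is_disabled = 0;      -- skip the disabled indexes that break the plan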
1) When we create indexes, key columns are the columns that are used in the WHERE clause, and included columns are the columns that can be used in the SELECT list and in join conditions.
2) I am thinking that we should create a new index only if we find it saves at least 50 msec.
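On point 2, one simple way to see what a candidate index actually saves is to measure the statement before and after creating it; the query below is just a stand-in for the real one.

SET STATISTICS TIME ON;   -- prints CPU and elapsed time per statement to the Messages tab
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20240101';   -- hypothetical query under test
SET STATISTICS TIME OFF;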
How are indexes allocated onto pages? When a CREATE INDEX statement is executed in a query window, the query processor handles and executes that query; but who decides how the index is laid out onto pages, the Storage Engine or the Query Processor (Query Optimizer)? Does it work the way UPDATE statements do in the Query Optimizer?
I have a new cluster (2 sync, 2 async) with about 50 databases ranging from 1 to 200 GB (all of the objects are compressed), on SQL Server 2012 SP1 CU7. I have several drives for logs with 200 GB of space there. I am having issues rebuilding indexes in this environment: I have a table with the clustered index heavily fragmented (~80%), and the table has about 60 GB of data; uncompressed that would be about 160 GB.
The index rebuild generates enough log to consume all the space that I have for logs, and that is only 1 table, so my old process to maintain indexes (Ola Hallengren's code) certainly won't work in this scenario.
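Not a full answer, but one sketch for keeping the log in check is to handle one index at a time and back up the log in between; REORGANIZE runs as many small transactions, so those log backups can keep truncating the log while it works (database, index, and path names below are hypothetical).

DBCC SQLPERF(LOGSPACE);                                        -- log size and % used per database
ALTER INDEX PK_BigTable ON dbo.BigTable REORGANIZE;
BACKUP LOG MyDatabase TO DISK = N'L:\LogBackups\MyDatabase.trn';
DBCC SQLPERF(LOGSPACE);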
I'm trying to improve the loading of some tables with large amounts of data that form part of an ETL. I was going to try removing any indexes before the insert to speed up the process, but I had some questions about whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable just the non-clustered indexes?
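If it helps, a sketch that targets only the non-clustered indexes (it just prints the statements; MyTable stands in for the destination table):

SELECT 'ALTER INDEX ' + QUOTENAME(name) + ' ON dbo.MyTable DISABLE;'
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.MyTable')
  AND type_desc = 'NONCLUSTERED'    -- leave the clustered index (the table itself) usable
  AND is_disabled = 0;

-- after the load, ALTER INDEX ALL ON dbo.MyTable REBUILD; re-enables them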
Is there a performance limit on the number of indexes per table or database? With filtered indexes there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, resulting in many more indexes on a single table.
Normally we rebuild or reorganize indexes when it is required. I set up a SQL job using a maintenance plan to run daily, rebuild/reorganize indexes, and update statistics, but I do not know whether it does that work only when it is required. Does this plan rebuild only the indexes that need it, or does it fire regardless of whether they need to be rebuilt?
I have a requirement to rebuild only the clustered indexes in the table, ignoring the non-clustered indexes, as those are taken care of by the clustered indexes.
In order to do that, I have pulled the records based on the fragmentation %.
But I'm unable to come up with the logic to consider only the clustered indexes in the table for rebuilding.
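One possible shape for that logic, restricted to clustered indexes (index_id = 1) and driven by the fragmentation DMV; the 30% threshold is just a placeholder:

SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
     + ' REBUILD;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE ps.index_id = 1                               -- clustered indexes only
  AND ps.avg_fragmentation_in_percent > 30;         -- placeholder threshold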
I'm working to improve performance on a database I've inherited, and there are several thousand indexes. I've got a list of the ones which should definitely exist within the database, and I'm looking to strip out all the others and start fresh, though this list is still quite large (1,000 or so).
Is there a way I can remove all the indexes that are not in my list without too much trouble? I.e. without having to manually go through them all individually. The list is currently in a csv file.
I'm looking to either automate the removal of indexes not in the list, or possibly to generate the Create statements for the indexes on the list and simply remove all indexes and then run these statements.
As an aside, when trying to list all indexes in the database, I've found various scripts to do this, but they all seem to produce differing results. What is the best script to list all indexes?
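For the removal half, one sketch (under the assumption that the CSV can be loaded into a table of index names first) is to generate DROP INDEX statements for every non-clustered index that isn't on the keep list; indexes backing primary keys or unique constraints are excluded because those need ALTER TABLE ... DROP CONSTRAINT instead.

CREATE TABLE #keep (index_name sysname);    -- load the CSV list into this, e.g. with BULK INSERT

SELECT 'DROP INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ';'
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND i.name NOT IN (SELECT index_name FROM #keep);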
I need to include pre-K kids in this script for one school and exclude them for the others. I currently have them all excluded with the last statement in the WHERE clause. How would I go about accomplishing this?
Code:
SELECT DISTINCT C.Student_ID,
    isnull(left(dbo.capfirst(rtrim([first_name])),12),' ') as First_Name,
    isnull(left(dbo.capfirst(rtrim([last_name])),17),' ') as Last_Name,
    LEFT(Middle_Name, 1) as Middle_Initial,
    RTRIM(CONVERT(CHAR, Birth_Date, 101)) AS DOB,
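The usual pattern is to turn the flat exclusion into an OR that carves out the one school; the grade and school columns below are made up, since the full WHERE clause isn't shown above.

SELECT DISTINCT C.Student_ID
FROM dbo.Students AS C                                    -- hypothetical table
WHERE (C.Grade_Level <> 'PK' OR C.School_ID = 123);       -- exclude pre-K everywhere except school 123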
I am having some difficulty in constructing outer joins. I have simplified what I need to do and have included sample SQL statements:

create table tab_a (id int, descr varchar(10), qty int)
insert into tab_a values (1, 'item one', 10)
insert into tab_a values (2, 'item two', 20)
insert into tab_a values (3, 'item three', 30)
insert into tab_a values (4, 'item four', 40)

create table tab_b (id2 int, descr2 varchar(10), qty2 int)
insert into tab_b values (1, 'item one', 10)
insert into tab_b values (2, 'item two', 20)
insert into tab_b values (3, 'item three', 30)
insert into tab_b values (4, 'item four', 40)

Here is the statement that I have:

SELECT tab_a.id, tab_a.descr, tab_a.qty, tab_b.id2, tab_b.descr2, tab_b.qty2
FROM tab_a LEFT OUTER JOIN tab_b
  ON (tab_a.id = tab_b.id2)
WHERE tab_a.qty <= 30 AND tab_b.qty2 > 20

What I am trying to do is a left outer join between tab_a and tab_b after they have been filtered based on the qty column (for tab_a: qty <= 30; and for tab_b: qty2 > 20). How would I go about that? I would like to do this efficiently, since the two tables have about a million records and several other columns each.
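A sketch of what "filter first, then outer join" can look like: tab_b's condition moves into the join itself, because leaving it in the WHERE clause (as above) discards the tab_a rows that have no match and effectively turns the outer join back into an inner join.

SELECT a.id, a.descr, a.qty, b.id2, b.descr2, b.qty2
FROM tab_a AS a
LEFT OUTER JOIN tab_b AS b
       ON a.id = b.id2
      AND b.qty2 > 20          -- filter tab_b as part of the join, so unmatched tab_a rows survive
WHERE a.qty <= 30;             -- the filter on the preserved side can stay in the WHERE clause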
Hi all, I'm a bit new to this so I hope this is not too obvious! I am running a query like so (in simplified form):

Select Cus_no, Cust_Name, course_id
From course table
Where Cust_id = 2

However, not all the Cust_IDs have been entered, so filtering on Cust_id = 2 filters out information I might need. Is there a way I can get around this?

Many thanks,
Sam
In ASP.NET 3.5, to get the number of rows returned when the select is executed:

Protected Sub SqlDataSource1_Selected(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.SqlDataSourceStatusEventArgs) Handles SqlDataSource1.Selected
    Session("MessageText") = e.AffectedRows & " Profile(s)"
End Sub

How do you get the number of rows returned when you apply a filter expression to a selection of rows? Thanks, Craig
I'm just trying something that I haven't tried before, and I'm wondering if anyone has done it before. I'm building a report with three charts, and the report is going to go inside Microsoft CRM. One of the charts has to use CRM filtered views, so that a manager can see his whole team's numbers, but a team member will only see his own numbers. I can't do this any other way because of the complexity of the query, which has 3 SELECT statements joined together as tables. It would be too complex to try to use a parameter throughout all of those tables, and I'm not sure how I would set up a parameter to show all of the data if the user were the manager, and only the single user's data if it were someone else.
The other two charts use filtered views as well, but they have similar joins, and I had to hard code a column with meaningless data into each SELECT table so that I could use it to join, as the tables had no other similarities.
The problem is that when you upload a report into CRM, if it uses filtered views, you don't have to go into Report Manager and change the data source from a shared to a custom data source. But on reports where I don't use the filtered views, I find that they always break when I upload them, because they use the shared data source. This is the case even when I have created the report without the shared data source. So I usually have to go in and change the data source from shared to custom every time I upload a report without filtered views.
Because of the situation I described, this report uses both filtered views and non-filtered views (for the hard-coded columns). When I upload it into CRM, it won't work either way, with a shared or a custom data source.
I am relatively new to SSRS and am having some problems showing the right results from my subreport. I am using a cube for my datasets. I have created report parameters where I can filter by year and month to see the sales in a certain month. I also want to add a calculation to display the sum for the chosen period.
For example, if I choose January 2004 my result looks like this:
Key     Product    Amount
41999   prod. x    5,000
42999   prod. y    2,000
Totals: [results from subreport showing total of 7,000]
What I would like to do is add a total that is calculated according to the filter. I tried to add a subreport beneath, but I don't know how to link the filter condition to the subreport, so it always displays the total amount for all years from the cube.
When you drill down on MonthYear you get the detail data:
Month Number of Sales Total Sales
- Jan 2007 10 $610.00
1 $10.00
1 $20.00
1 $30.00
1 $40.00
1 $50.00
1 $60.00
1 $70.00
1 $80.00
1 $100.00
1 $150.00
My question is this: I added a filter to the detail data to give the bottom 75% of sales, so my detail data only displays the following rows:
Month Number of Sales Total Sales
- Jan 2007 10 $610.00
1 $10.00
1 $20.00
1 $30.00
1 $40.00
1 $50.00
My problem is the group still displays the total of my dataset (as seen above), but I want it to display the total of the detail data group, like below:
Month Number of Sales Total Sales
- Jan 2007 5 $150.00
1 $10.00
1 $20.00
1 $30.00
1 $40.00
1 $50.00
If I change the fields in the group to look at the detail data, for instance =count(Fields!NumberofSales.Value,"Details_Group"), I get a scope error.
How can I display the totals of the detail data in the parent group after I added a filter to the detail data?
I am trying to show aggregate information in a grouping and report footer. The details section has a filter applied successfully. For example, if there are three records and one should be filtered out, then only two display. However, the count function returns 3 instead of the desired 2. I have tried to set the scope parameter to body, the table name and every group name on the report. Either this has no effect or returns an error message stating that the appropriate scope isn't applied.
Does anyone know how to perform aggregate functions and exclude the filtered rows?