SQL 2012 :: Performance Limit On Number Of Indexes Per Table / Database
Oct 1, 2014
Is there a performance limit on the number of indexes per table or database? With filtered indexes, there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, resulting in many more indexes on a single table.
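For context, a filtered index contains only the rows matching its predicate, which is what makes these narrower indexes possible. A minimal sketch with hypothetical table and column names:

-- Index covers only 'open' orders, so it is far smaller than a full-table index
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = 'Open';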
I've read here at [URL] that with 32-bit it's recommended to have at most 10 mirrored databases per instance. However, is there a limit for 64-bit SQL Server 2012?
We're having a problem where we have 300+ databases, and any new DBs we add are now timing out when it comes to setting up mirroring; I'm wondering whether this is limited by the instance.
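As a starting point for diagnosis (a sketch; both views are standard DMVs), it may be worth comparing the count of mirrored databases against the instance's worker-thread ceiling, since each mirroring session holds worker threads:

-- Number of mirrored databases on this instance
SELECT COUNT(*) AS MirroredDatabases
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;

-- Configured worker-thread maximum for the instance
SELECT max_workers_count FROM sys.dm_os_sys_info;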
I have a server with a total of 12 cores running an instance of SQL Server. I was told to dedicate only 8 CPU cores to the instance for licensing purposes.
What is the best way to go about limiting CPU cores?
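One possible approach on SQL Server 2008 R2 or later is CPU affinity, sketched below; whether affinity masking actually satisfies your licensing terms is a question for the license itself, not a technical one:

-- Bind the instance to 8 of the 12 cores (CPUs 0 through 7)
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;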
I'm trying to improve the loading of some tables with large amounts of data that forms part of an ETL. I was going to try removing any indexes before the inserting to speed up the process, but I had some questions on whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable only the non-clustered indexes?
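Worth noting: ALTER INDEX ALL ... DISABLE also disables the clustered index, and a table whose clustered index is disabled cannot accept reads or inserts at all, so for a load you would disable only the non-clustered ones. A minimal sketch using the placeholder table name from above:

-- Disable each non-clustered index; the clustered index stays usable
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(name) + N' ON dbo.MyTable DISABLE; '
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.MyTable')
  AND type_desc = 'NONCLUSTERED'
  AND is_disabled = 0;
EXEC sys.sp_executesql @sql;

-- After the load completes, rebuild everything in one statement
ALTER INDEX ALL ON dbo.MyTable REBUILD;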
Is there a limit on how many rows a table can have in SQL Server Compact Edition? I didn't find anything in the documentation, but I regularly get a funny error message, "Expression evaluation caused an overflow. [ Name of function (if known) = ]", when I try to create record number 32768 (which is 2 to the power of 15).
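For what it's worth, 32768 is exactly one more than the smallint maximum of 32767, so one plausible cause (an assumption, nothing in the post confirms it) is a smallint identity column overflowing rather than a row-count limit. A quick check, with a hypothetical table name:

-- List any smallint columns in the suspect table
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyCeTable' AND DATA_TYPE = 'smallint';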
Dear all, I'm using SQL Server 2005 Standard Edition. I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags. The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags about 7,500,000. The WHERE clause (marked in the code below; it was highlighted in baby blue in the original post) is dynamic and varies every time this stored procedure is executed: it may contain 7 columns in the condition, or 10 columns, or 2 columns, etc. Now I want to create non-clustered indexes on the columns used in the WHERE clause, but the DTA suggests different indexing whenever the WHERE clause changes. So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together when I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE clause always varies from situation to situation. The SP is below:
-- Temp tables for the lookup values and the per-type counts
CREATE TABLE #tempLookups (ID INT IDENTITY(0,1), Code NVARCHAR(100), NameE NVARCHAR(500), NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID INT IDENTITY(0,1), TypesCount INT, CallsType NVARCHAR(50))

INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC

-- Count distinct calls per call type; the WHERE clause below is the dynamic part
INSERT INTO #tempTable
SELECT COUNT(DISTINCT(RecordedCalls.ID)) AS TypesCount, RecordedCalls.CallType AS CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN CAST('01 Jan 1910 00:00:00:000' AS DATETIME) AND CAST('01 Jan 2210 00:00:00:000' AS DATETIME))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN ('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN ('2')
AND RecordedCalls.AgentID NOT IN ('1000010000')
AND (RecordedCallsTags.TagID IS NULL OR RecordedCallsTags.TagID NOT IN ('100','200'))
AND RecordedCalls.IsDeleted = 'false'
GROUP BY RecordedCalls.CallType

-- Return a row for every lookup value, with 0 for types that had no calls
SELECT ISNULL(#tempTable.TypesCount, 0) AS TypesCount,
       CASE ('English') WHEN 'Arabic' THEN #tempLookups.NameA ELSE #tempLookups.NameE END AS CallsType
FROM #tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code

DROP TABLE #tempLookups
DROP TABLE #tempTable
Thanks all, Tayseer
Any suggestions on how to create efficient indexes?
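One common approach when the predicate set varies this much: instead of one index per DTA suggestion, index the join key plus the conditions that appear in nearly every execution, and let the optimizer handle the rest as residual predicates. A sketch against the tables above, assuming CallDate and IsDeleted are almost always present:

CREATE NONCLUSTERED INDEX IX_RecordedCalls_CallDate
ON dbo.RecordedCalls (CallDate, IsDeleted)
INCLUDE (CallType, Duration, ChannelID, ServerID, AgentID);  -- covers the common filters

CREATE NONCLUSTERED INDEX IX_RecordedCallsTags_CallID
ON dbo.RecordedCallsTags (CallID, TagID);  -- supports the join and the TagID filter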
I have a table that I'm loading as part of a control flow that in turn is copied to a target table by using a data flow task. I am doing this because a different set of fields may be used from the source entry to create the target entry based on a field in the source table. That same field may indicate that multiple entries need to be created in the target table from one source table entry for which I use a multi-cast transformation.
The problem I'm having is that no matter how many entries there are in the source table, the OLE DB Source shows only 7,532 entries being taken from the source table during execution. If there are fewer than 7,532 entries in the source table, everything processes fine. With more than 7,532, the data flow task takes only 7,532 and then seems to hang. It also seems as though only one path of the multicast transformation is taken when the conditional split directs a source entry down that path.
Seems like a strange problem I know, but any insight anyone could provide is appreciated. Thanks.
Hi, I have a denormalized table (done so for a reason) with around 40 columns. I would never have to retrieve data for all of those columns together. I haven't done any performance measurements yet, but I'm wondering if anyone has a ready answer to this: will there be performance degradation if I retrieve data from a table with many columns, even if not all columns are referenced in the query? (To keep it simple, assume all are varchar-type columns; I just want to find out whether performance degrades when a table has too many columns.)
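Broadly, selecting two columns from a 40-column row still reads the whole row from disk unless an index covers the query, so the usual mitigation for frequent narrow queries is a covering index. A sketch with hypothetical names:

-- Queries touching only ColA and ColB are answered from this narrow index,
-- never reading the 40-column base rows
CREATE NONCLUSTERED INDEX IX_WideTable_AB
ON dbo.WideTable (ColA)
INCLUDE (ColB);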
I have a query that doesn't even register a time when I run it. But when I add the lines that are commented out in the code below, it takes between 20 and 30 seconds! When I run the functions' code directly it is fast, so I suspect that including them like this makes the query lose its indexing capabilities?
SELECT ...  -- other columns elided in the original post
    --, ISNULL(CAST(NULLIF(dbo.ufnGetRetail(I.ISBN13), 0.00) AS VARCHAR(20)), 'N/A') RetailPrice
    --, ISNULL(CAST(NULLIF(SP.LocalPrice, 0.00) AS VARCHAR(20)), 'on request') LocalPrice
[code]...
How can I keep the functions in the query but bring the response time down?
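Scalar UDFs in the select list run row by row and hide their cost from the optimizer. One common rewrite (a sketch only, since the body of dbo.ufnGetRetail isn't shown; the inner query and table names are placeholders) is an inline table-valued function applied with OUTER APPLY, which the optimizer can fold into the main plan and drive with indexes:

CREATE FUNCTION dbo.itvfGetRetail (@ISBN13 VARCHAR(13))
RETURNS TABLE
AS
RETURN
(
    -- placeholder body: whatever dbo.ufnGetRetail computes, as a single query
    SELECT MAX(p.RetailPrice) AS RetailPrice
    FROM dbo.Prices AS p        -- hypothetical table
    WHERE p.ISBN13 = @ISBN13
);
GO

SELECT ISNULL(CAST(NULLIF(r.RetailPrice, 0.00) AS VARCHAR(20)), 'N/A') AS RetailPrice
FROM dbo.Items AS I             -- stands in for the query's existing I alias
OUTER APPLY dbo.itvfGetRetail(I.ISBN13) AS r;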
Is it possible to create a cluster configuration, where 2 (or more) SQL Server instances running on different servers will simultaneously serve the same database attached to the same storage? Something like this:
Instance A     Instance B     Instance C
 (active)       (active)       (active)
      \             |             /
       SAN for Database and Quorum
The point would be to improve reliability and performance at the same time. All nodes would share load. If a node fails, the other nodes still work and take over the load.
I know a failover cluster can be set up where one instance is active and the others are passive (active/passive), which improves reliability, but in that configuration only one instance is serving at a time.
(As far as I understand, the active/active configuration is meant to run different databases on 2 (or more) instances such that the databases on a failing node are taken over by any other instances in the cluster.)
I am doing some SELECT queries on my database through ASP, but I only want to return, for example, the 50 most recent entries that match the criteria. Is there an easy way to limit the number of results returned?
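TOP with an ORDER BY does exactly this; a sketch with hypothetical table and column names:

-- Return only the 50 most recent rows matching the criteria
SELECT TOP (50) *
FROM dbo.Entries
WHERE Category = 'news'         -- your criteria here
ORDER BY CreatedDate DESC;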
Hi, here's my problem: I have an area chart and I need to plot more than 300,000 records. The problem is that RS cannot seem to handle this huge dataset. It retrieves for 20 minutes, and then it just errors with "Internet Explorer cannot display the webpage". I do not encounter this if I have only a few records to plot. Please help.
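One common mitigation (a sketch with hypothetical names) is to aggregate in the dataset query so the chart plots a few hundred points rather than 300,000:

-- One point per day instead of one point per raw record
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, EventTime), 0) AS EventDay,
       AVG(Value) AS AvgValue
FROM dbo.Readings
GROUP BY DATEADD(DAY, DATEDIFF(DAY, 0, EventTime), 0)
ORDER BY EventDay;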
Processing Association Rules model on SQL 2005 Standard edition produced following error:
"Error (Data mining): The 'WO_3' mining model has 6690 attributes. This number of attributes exceeds the attribute limit of 5000 allowed by the current version of the algorithm associated with the mining model."
I am having a bit of a problem trying to limit the number of matrix columns appearing on a page.
At the moment, I have a dataset that lists the month and the mail packages that were sent during the month. The matrix works great; HOWEVER, if there are more than 8 months in the matrix columns, it does not break, which makes the page look like one huge landscape page.
I am trying to limit the number of columns appearing on the matrix (this is the months column) so that the pages stay in portrait orientation, i.e., every 8 columns appear on one page. Is there an option or an expression I could use in the matrix? Thanks!
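The matrix has no built-in "max columns per page" setting in older SSRS versions; one workaround (a sketch with hypothetical names) is to compute a column-page bucket in the dataset query, add an outer group on that bucket, and set a page break on the group:

-- Every 8 distinct months share the same ColumnPage value
SELECT d.*,
       (DENSE_RANK() OVER (ORDER BY d.MonthStart) - 1) / 8 AS ColumnPage
FROM dbo.MailPackagesByMonth AS d;  -- hypothetical dataset source

Grouping the matrix on ColumnPage with "page break at end" then yields at most 8 month columns per page.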
Is there a way to specify how many bars to display on a bar chart? And if the data set has more than the specified number of bars, then display another bar chart (on the next page) with the remaining bars?
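The same bucketing trick as in the matrix question above can work here (a sketch, with hypothetical names): compute a chart-page number per bar in the dataset, group a list containing the chart on it, and page-break on that group:

-- At most 20 bars per chart page; pick your own page size
SELECT d.*,
       (ROW_NUMBER() OVER (ORDER BY d.Category) - 1) / 20 AS ChartPage
FROM dbo.BarData AS d;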
Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?
Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.
I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for the 30-ish tables in the first database, and then I'll tweak the scripts as appropriate and run them against the new database.
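The SSMS Generate Scripts wizard (right-click the database, Tasks > Generate Scripts) can script all tables in one pass. If only the base column structure matters and the scripts will be tweaked anyway, a quick per-table alternative is SELECT ... INTO with a false predicate; note this copies columns, nullability, and identity, but not keys, indexes, or constraints:

-- Creates an empty copy of the table's column structure in the second database
SELECT *
INTO NewDb.dbo.MyTable          -- hypothetical names
FROM OldDb.dbo.MyTable
WHERE 1 = 0;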
I have to create a table like this across a bunch of servers. I'm thinking that I'm overlooking something with needing two additional CTEs, but maybe not. I have it at 17 seconds, which isn't much faster than the WHILE-loop solution that's currently in place.