Performance And Second Table

Sep 27, 2005

Hi,

I have a small theoretical issue.
I have one table, which is pretty large. There are a lot of evaluations
running on this table, which is why each process needs to wait for
another to finish. Sometimes, for some critical functions, it
takes too long.

I don't think that I can speed up the processes by changing the indexes on
the table (to reduce scan time, for example), because this is
something I have already been experimenting with, and it was not
good enough.

My question is: will it improve performance if I create a second
table, exactly like this one, and split the evaluations, so that
those which definitely need to run on the source table run on
the first one, and the remaining evaluations run on the other one?

To keep the data consistent between these two tables, I was thinking about
a trigger on insert on the mother table, which would transport the data to
the other one.
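
A minimal sketch of such a trigger, assuming hypothetical table names SourceTable and CopyTable with identical column lists:

CREATE TRIGGER trg_SourceTable_Insert
ON SourceTable
AFTER INSERT
AS
BEGIN
    -- copy every newly inserted row into the second table
    INSERT INTO CopyTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM inserted
END
GO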

The second part is: to improve selects on the table, should I set the indexes'
fill factor as close as possible to 100% or as close as possible
to 0%? Or should I perhaps set the pad index option?
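
For reference, a sketch of how both options are set when creating an index (SQL Server 2005 syntax, hypothetical names; a fill factor near 100 packs pages full, which favors reads, while lower values leave free space for inserts):

CREATE NONCLUSTERED INDEX IX_SourceTable_Col1
ON SourceTable (Col1)
WITH (FILLFACTOR = 90, PAD_INDEX = ON)
GO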

What about clustered indexes? Is it better to use them if I would like
to increase performance for selects?

Thanks in advance

Mateusz

View 4 Replies

Difference In Performance Between Temp-table And Local-table?

Jan 23, 2008

Hi!

What is the difference in performance if I use a temp table or a local table variable in a stored procedure?

Why?
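
For concreteness, the two objects being compared are declared like this (a minimal sketch):

-- temp table: lives in tempdb, has statistics, can be indexed after creation
CREATE TABLE #Customers (CustomerID int, Name varchar(100))

-- table variable: also stored in tempdb, but has no statistics and limited indexing options
DECLARE @Customers TABLE (CustomerID int, Name varchar(100))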


//Daniel

View 5 Replies View Related

Table Variable Performance

Jan 4, 2004

I've read that table variables give better performance than temporary tables, as they are kept in memory and don't need to be recorded in transaction logs etc. However, I have a stored procedure which takes 0.183 seconds to execute, but when I change the one temporary table used in the procedure to a table variable, the execution time increases to 0.223 seconds.

Not much of an increase, I admit, but it just seems contrary to what I've read.

I want to get the best performance possible, so can someone explain to me what is going on?

View 2 Replies View Related

Performance Joining To Same Table More Than Twice

Apr 30, 2007

I find that joining to the same table twice is OK, but as soon as you do it 3 or more times you get a massive performance hit.
Does anyone know the reason for this? What's special about 3?
What's the best approach to this sort of thing?
(I've used the SQL Server 2005 Tuning Advisor to add indexes for the query.)

Rather than:
Select ..., sum(a1.<column>), sum(a2.<column>), sum(a3.<column>) from master_table
left join table_1 a1 on ...
left join table_1 a2 on ...
left join table_1 a3 on ...
group by ...

I have to select the whole table and filter it using CASE:
Select ...,
sum(case when table_1.<column> = '...' then <value> else 0 end) as a1,
sum(case when table_1.<column> = '...' then <value> else 0 end) as a2,
sum(case when table_1.<column> = '...' then <value> else 0 end) as a3
from master_table
left join table_1 on ...
group by ...

View 1 Replies View Related

Performance Of A Queue Vs Table

Jul 26, 2006

I know we are not allowed to benchmark SQL Server, but it would be nice to have material to present which demonstrates the performance gains of using a queue compared to insert/delete in a SQL table.

Logically it seems faster to use a queue, due to the conversation group locking and the Service Broker itself. But there also seems to be some overhead that Service Broker incurs just to manage these queues.

I am sure we are not unique in trying to figure out whether we will get a performance boost from using a queue between services rather than a table to queue data. What is available to help understand the performance gains of using a queue?
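
For comparison, the two dequeue patterns look roughly like this (a sketch with hypothetical names; the table version needs locking hints to behave like a queue under concurrency):

-- table as queue: atomically claim and remove one row
DELETE TOP (1) FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.Payload

-- Service Broker queue: dequeue one message
RECEIVE TOP (1) message_body FROM TargetQueue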

View 2 Replies View Related

Number Of Columns In Table And Performance

Dec 28, 2007

Hi,
I have a denormalized table (done so with reason) with around 40 columns. I would never have to retrieve data for all of those columns together.
I haven't done any performance measurements yet, but I'm just wondering if anyone has a ready answer to this: will there be performance degradation if I retrieve data from a table with many columns, even if not all the columns are referenced in the query? (To keep it simple, let's assume all columns are of varchar type; I just want to find out whether performance degrades when there are too many columns in a table.)

Thanks in advance,
Sandeep

View 1 Replies View Related

1 Billion Row Table (Loan Performance)

Mar 28, 2008

I will be receiving data from a company called Loan Performance, which has one file/table that will hold 1 billion rows. They send data by period, and I plan to load the data via BCP in NT/DOS scripts. The 1 billion rows represent data for 200+ periods.

Are the following design plans feasible?

1. Partition the table by period value. I'm not sure of the max number of partitions per table in 2005, but I think we have period data back to 1992 and a new one gets created every month, so the possibility of having > 1000 partitions exists. I plan on just pre-creating partitions for future data, instead of dynamically creating them when a new period is sent.

2. Load data via BCP in DOS shell scripts that will drop the index (by partition), BCP in the data, and then re-create the indexes by partition. Is this possible? And will I see a performance increase as opposed to one huge table (I'm pretty much sure I will)? There is usually one period's data present per day, but sometimes the vendor resends all data (which would get loaded on the weekend).

I'm a bit unsure of where to start, as I have never worked with this amount of data. I worked with partitioning in Oracle a long time ago.

I plan on having a 2x QuadCore 2.66GHz CPU with 32GB of RAM and SQL 2005 EE 64-bit connected to a 1 Terabyte SAN disk.

Thanks all,
PMA
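
For reference, a minimal sketch of a monthly partition layout (hypothetical names; note that SQL Server 2005 allows at most 1,000 partitions per table, so one partition per month from 1992 onward stays within the limit):

-- one boundary per period; RANGE RIGHT puts each boundary date in the partition to its right
CREATE PARTITION FUNCTION pfPeriod (datetime)
AS RANGE RIGHT FOR VALUES ('19920201', '19920301', '19920401' /* ...one per month... */)
GO

-- map every partition to the same filegroup for simplicity
CREATE PARTITION SCHEME psPeriod
AS PARTITION pfPeriod ALL TO ([PRIMARY])
GO

CREATE TABLE dbo.LoanPerformance (
    PeriodDate datetime NOT NULL,
    LoanID bigint NOT NULL
    -- ...remaining columns...
) ON psPeriod (PeriodDate)
GO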

View 7 Replies View Related

Unable To Get The Performance With Partition Table

Jan 11, 2007

I created two tables, one based on a partition structure and one on a non-partition structure.

File groups = Jan, Feb ... Dec
Partition function boundaries = '20060101', '20060201' ... '20061201'
I am using a RIGHT range in the partition function.
Then I defined a partition scheme on the partition function.

I have more than 700,000 rows in my database.
I checked the filegroups and counted rows. It works fine.

But when I check the estimated execution plan, the time for the query is the same for both the partition table and the non-partition table.
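
One way to confirm that rows really landed in the expected partitions (a sketch, assuming a partition function named pfMonthly on a date column of a hypothetical PartitionedTable):

-- row count per partition
SELECT $PARTITION.pfMonthly(DateCol) AS PartitionNumber, COUNT(*) AS Rows
FROM dbo.PartitionedTable
GROUP BY $PARTITION.pfMonthly(DateCol)
ORDER BY PartitionNumber
GO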

View 1 Replies View Related

DROP TABLE, SELECT INTO, Performance

Jul 11, 2007

Hello -- thank you for taking the time to read this.

I have a very large table that is used both for archives and new information. To get the current information, the table is queried by many different users at various polling periods. The SELECT required includes about fifteen JOINS, and only returns about 200 rows at any given time.

So I got to thinking it might be faster to periodically run the big query as a SELECT INTO a smaller table and let the polling clients query the smaller table with SELECT *. Periodically, the smaller table would be DROPPED and refreshed with another SELECT INTO.

Trouble is, the data would have to be updated once every 30 seconds, and there are inbound polls coming at the rate of about 200 per minute. It got me to thinking what might happen if a client attempted to query the smaller table while it was in the process of being dropped and refilled.

So my question is three-part:

1) assuming a larger table of about 500,000 records and only 500 pertinent at any given time, is there any real potential for performance enhancement by switching to a SELECT INTO table?

2) if so, is there a chance of a client failing a query if the inbound query somehow collides with the DROP/SELECT INTO procedure?

3) if so, is there any way to prevent it or a better way of doing this?
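
For question 3, one pattern that avoids clients ever seeing a half-built table is to build the replacement under a different name and swap it in (a sketch with hypothetical names; sp_rename takes a schema lock, so a colliding query should wait rather than fail):

BEGIN TRANSACTION
    -- build the fresh snapshot off to the side
    SELECT ... INTO dbo.CurrentData_New FROM dbo.BigTable ...
    -- swap the new snapshot into place
    DROP TABLE dbo.CurrentData
    EXEC sp_rename 'dbo.CurrentData_New', 'CurrentData'
COMMIT TRANSACTION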

Thanks again for reading, and in advance for any help you can provide. I apologize if I sound like a dummy - it's hard to fake intelligence!

View 3 Replies View Related

Table Size And Performance Degradation

Oct 21, 2007

Hello,

Working with SQL Server 2000, I have a table with the following structure:
ID (INT)
userID (INT, foreign key)
productID (INT)
productQTY (DECIMAL(5,2))
purchaseDate(smalldatetime)

I have about 1000 users, entering about 20-30 rows per day each, i.e. ~20,000-30,000 new rows per day. The table might be queried with a simple SELECT for the products a user ordered per day or per time frame (purchaseDate column).
My question (finally) is: when should I expect to see performance degradation? Is there anything I can do to prevent it (e.g. splitting this table somehow into several tables)?
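
For the query pattern described, an index leading on the user and the date is the usual safeguard (a sketch with a hypothetical table name; whether it should be the clustered index depends on the other access paths):

CREATE INDEX IX_Purchases_User_Date
ON dbo.Purchases (userID, purchaseDate)
GO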

Thank you all in advance

View 2 Replies View Related

Copy Table Speeds Performance

Mar 31, 2008

Monthly, I copy a table from one database to another database. Deleting the original table and copying the table back speeds up the performance of the query on the order of 10 to 1. Why does this work?

Detail:
I have a legacy table that a small application queries about once a month. The table was poorly designed, and the query runs a date-range comparison on one field and has a subquery that runs string comparisons against six fields. I cannot change the calling app or the table design. When the app calls the query, the call times out due to the inordinate length of time. To fix this until next month's query, I copy the table out, delete the original, and copy it back. What changes when the table is copied to another database and then copied back? The performance of the query changes from 10 seconds to 1.
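
For concreteness, the monthly process described above amounts to something like this (hypothetical names; SELECT INTO rebuilds the table's pages from scratch, which is presumably where the speedup comes from):

-- copy out, drop, copy back
SELECT * INTO ScratchDB.dbo.LegacyTable_Copy FROM AppDB.dbo.LegacyTable
DROP TABLE AppDB.dbo.LegacyTable
SELECT * INTO AppDB.dbo.LegacyTable FROM ScratchDB.dbo.LegacyTable_Copy
DROP TABLE ScratchDB.dbo.LegacyTable_Copy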

View 3 Replies View Related

Increasing Performance By Selecting One Table

Sep 6, 2005

Hello all,

I have the following problem. Please forgive me for not posting a script, but I think it won't help anyway.

I have a table which is quite big (over 5 million records). This table contains one field (varchar[100]) which contains some data in the chain. There is a view on this table to present the data to the user. The problem is that the view needs to display some data from this one large field (using the substring function, or an inline function returning the value). In the application, the user is able to filter and sort through these fields.

The situation becomes more complicated if I combine this table with another one, which has an additional, much larger field from which I need to select data in the same way.

Problem: it takes TOO LONG to select the data according to the user's request (the user accesses the view, not the table directly).

Now the questions:
- Is using substring (as in the example) a good solution, or is it better to write an inline function which returns the part of the data set (probably there is no difference)?
- Would it be much faster if I added some fields to Source_Table, also containing varchar data but only the part I'm interested in, and bound those fields in the view instead of using the substring function?

Small example:

CREATE TABLE [dbo].[Source_Table] (
[CID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
[MSrepl_tran_version] uniqueidentifier ROWGUIDCOL NULL ,
[Date_Event] [datetime] NOT NULL ,
[mama_id] [varchar] (6) COLLATE Latin1_General_CI_AS NOT NULL ,
[mama_type] [varchar] (4) COLLATE Latin1_General_CI_AS NULL ,
[tata_id] [varchar] (4) COLLATE Latin1_General_CI_AS NOT NULL ,
[tata_type] [varchar] (2) COLLATE Latin1_General_CI_AS NULL ,
[loc_id] [nvarchar] (64) COLLATE Latin1_General_CI_AS NOT NULL ,
[sn_no] [smallint] NOT NULL ,
[tel_type] [smallint] NULL ,
[loc_status] [smallint] NULL ,
[sq_break] [bit] NULL ,
[cmpl_data] [varchar] (100) COLLATE Latin1_General_CI_AS NOT NULL ,
[fk_cmpl_erp_data] [numeric](18, 0) NULL ,
[erp_dynia] [bigint] NULL
) ON [PRIMARY]
GO

create view VIEW_AllData
as
select top 100 percent
isnull(substring(RODZ.cmpl_data,27,10),'-') as ASO_NO,
(RODZ.mama_type + RODZ.mama_Id) as MAMA,
isnull(substring(RODZ.cmpl_data,45,5),'-') as MI,
isnull(substring(RODZ.cmpl_data,57,3),'-') as ctl_EC,
isnull(substring(RODZ.cmpl_data,60,3),'-') as ctl_IC,
RODZ.Date_Event as time_time,
RODZ.sn_no as SN
from Source_Table RODZ with (nolock)
go

Thanks in advance
Mateusz
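
Regarding the second question, one way to materialize just the interesting part of the field is a computed column on the source table (a sketch; in SQL Server 2005 it could also be marked PERSISTED and indexed, since the expression is deterministic):

ALTER TABLE dbo.Source_Table
ADD ASO_NO AS ISNULL(SUBSTRING(cmpl_data, 27, 10), '-')
GO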

View 6 Replies View Related

Need To Tune A Table For Performance Gains

May 2, 2007

Hi:

I have a TableA with around 10 columns with varchar and numeric datatypes. It has 500 million records and its size is 999999999 KB (I believe it is KB; I got this data after running sp_spaceused on it). The index_size was also pretty big, in 6 digits.

Looking at TableA, it did not have any PKs and hence no clustered index. It had other indices:
IX_1 on ColA
IX_2 on ColB
IX_3 on ColC
IX_4 on ColA, ColB and ColC put together.

Queries performed on this table are very slow. I have been asked to tune up this table. I know as much info as you. Data prior to 2004 can be archived into another table; I need to run a query to find out how many records that is.

I am thinking the following, but don't know if I am correct:

1. I need to add a new PK column (which will increase the size of TableA), which will add a clustered index. Right now there are no clustered indices.
2. I would like help in understanding whether I should remove IX_1, IX_2 and IX_3, as they are all used in IX_4 anyway.
3. I forget what the textbox is called on the index page. It is set to 0 and can be set from 0 to 100. What would be a good value for it?

Thank you.
RS
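
For the archiving question, the record count can be checked with a simple range query (a sketch, assuming a hypothetical date column named EntryDate):

SELECT COUNT(*) AS ArchivableRows
FROM dbo.TableA
WHERE EntryDate < '20040101'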

View 8 Replies View Related

Temp Table Vs. Union: Which Has Better Performance?

Aug 13, 2007

Right now, a client of mine has a T-SQL statement that does the following:

1) Create a temp table.
2) Populate the temp table with data from one table using an INSERT statement.
3) Populate the temp table with data from another table using an INSERT statement.
4) SELECT from the temp table.

Would it be more efficient to simply SELECT from table1 and then UNION table2? The client simply wants to see the result set and does not need to re-SELECT from the temp table.
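
The alternative being proposed looks like this (a sketch with hypothetical names; UNION ALL is the closer equivalent if the temp-table version did not de-duplicate rows):

SELECT Col1, Col2 FROM dbo.Table1
UNION ALL
SELECT Col1, Col2 FROM dbo.Table2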

View 1 Replies View Related

CE 3.0 Performance Problem Updating A Table.

Oct 19, 2007

Hi folks,



Environment:

We changed from SQL Server CE 2.0 to SQL Server CE 3.0, and our customer is complaining about the performance lost in a process that makes 50 updates on a Pocket PC with Windows Mobile 5.0.

Our customer says that when they used SQL Server CE 2.0 the process was quicker.

Process:

We need to update 50 rows every time we lose the focus. Those 50 updates are creating a deterioration in performance.

Is there any patch to add to SQL Server CE 3.0? Is there any known problem with performing 50 updates in a row losing performance?

Thanks in advance.

View 1 Replies View Related

Can Partitioning A Table Boost My Performance?

Sep 28, 2007

I have an existing database with a table of about 50 million records. There are also about 20 other tables, but they are a lot smaller. The large table has a uniqueidentifier as its primary key (not sequential) and a foreign key to a 'parent' table. The table also has a column telling when it was created. So, a bit simplified, it looks like:

ChildTable
---------------
Id uniqueidentifier <PK>
ParentId uniqueidentifier <FK>
CreationDate DateTime

ParentTable
-----------------
Id uniqueidentifier <PK>
CreationDate DateTime


Most of the queries accessing the Child table (the large table) do so by referencing the parent table, and not the CreationDate, i.e.
SELECT *
FROM ChildTable
WHERE ParentId = '......'

All records with a specific ParentId will have very similar CreationDates.

Now, my question is: will partitioning the ChildTable boost performance for me? If it will, what column(s) would define the partitions? If I do it by CreationDate, a select query like the one above will have to scan all partitions anyway, won't it? Doing it by Id isn't so easy either, I guess? If it helps, it might be possible to change the primary keys in the tables to sequential guids.
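
If sequential guids turn out to be an option, SQL Server 2005 can generate them server-side (a sketch; this only affects newly inserted rows):

ALTER TABLE dbo.ChildTable
ADD CONSTRAINT DF_ChildTable_Id DEFAULT NEWSEQUENTIALID() FOR Id
GO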

Is there perhaps a performance tool to get help with suggestions about how to partition the table? Something like the 'Performance dashboard' reports, but for partitioning?

Regards Andreas



View 10 Replies View Related

Improving Large Table Performance

Aug 15, 2007

We have a table that is 800GB. We are planning to rebuild the clustered index on this table to a different filegroup. The new filegroup and the files associated with it will sit on a SAN with a 1.5TB allocation. Does anyone have any suggestions regarding how many files to associate with the filegroup to provide optimal performance? Apparently we could have 3 LUNs (500GB each), so would 1 file on each LUN provide additional performance as opposed to one file on 1 LUN?
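
A sketch of rebuilding the clustered index onto the new filegroup (hypothetical index and filegroup names; DROP_EXISTING rebuilds in one pass instead of a drop followed by a create):

CREATE UNIQUE CLUSTERED INDEX PK_BigTable
ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON)
ON [NewFilegroup]
GO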

View 1 Replies View Related

How To Improve Performance With A Join Between 2 Table From 2 SQL Servers

Aug 18, 2006

I am making an ASP.NET web application that involves 2 SQL Servers (A & B).
I created a view in SQL Server A pointing to a table in SQL Server B. I found out my application runs REALLY slowly when accessing such a view, so I try to avoid using them. But in the case of joining 2 tables from 2 different SQL Servers, I have no choice.
Can anyone help me with this?
Thanks!
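
One technique that sometimes helps is pushing the remote side of the join to the other server with OPENQUERY, so only the needed rows travel over the linked server (a sketch with hypothetical names; ServerB must be configured as a linked server):

SELECT a.OrderID, b.CustomerName
FROM dbo.Orders a
JOIN OPENQUERY(ServerB, 'SELECT CustomerID, CustomerName FROM SalesDB.dbo.Customers') b
    ON a.CustomerID = b.CustomerID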

View 4 Replies View Related

Function Vs Temp Table Calcs Performance

Feb 11, 2004

I need to know which approach performs best when I need to do calculations for a particular column. I want to do something like:


Select IID
, ItemNo
, StdRun
, ActRun
, dbo.fnCalc(OutCount)
From myTable


The function is basically a set of CASE statements and various calculations dependent upon the case.

Is this the best way to do it, performance-wise, or should I dump the needed info into a temp table, do the calcs on it, and then tie the select statement to that table?

I've seen both approaches done, but they both seem to be a different way of getting to the same conclusion. I'm just wondering which puts the lighter load on the server.
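
For reference, a sketch of what such a scalar function might look like (hypothetical logic; scalar UDFs are invoked once per row, which is the usual performance concern with this approach):

CREATE FUNCTION dbo.fnCalc (@OutCount int)
RETURNS int
AS
BEGIN
    -- hypothetical case-based calculation
    RETURN CASE
        WHEN @OutCount < 10 THEN @OutCount * 2
        WHEN @OutCount < 100 THEN @OutCount + 50
        ELSE @OutCount
    END
END
GO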

Thanks,
Tim

View 2 Replies View Related

Similar Table, But Totally Different Search Performance...

Sep 20, 2007

I have two table A and B:

A 2000000 Rows 569288KB 8KB index
B 3000000 Rows 853712KB 8KB index

But when I do "SELECT COUNT(*) FROM A/B", table B is significantly slower than A:

A 0 seconds
B 8 seconds

Does anyone know why, so I can boost the performance of searching table B?

Thanks in advance.

View 8 Replies View Related

Performance Of Joining Two Tables In Different Databases

Nov 19, 2007



I have two databases, SIS and SIS_Pro. The Users table should be used in both of them, because I have some relations between this table and other tables in SIS and SIS_Pro. Users in SIS has only one column, the UserId, which is the primary key in both of them; but in SIS_Pro the Users table has Firstname, Lastname and so on. In my program I need some information from SIS and some from SIS_Pro, so I create a view which is a join of, for example, Exam in SIS and Users in SIS_Parnian, because I don't have the firstname and lastname in the Users table which is in the SIS database. Does this reduce performance? Is it better to copy the data which is in Users in SIS to Users in SIS_Pro (I mean all columns: firstname, lastname, ...)?
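
A minimal sketch of the cross-database view being described (hypothetical table and column names):

CREATE VIEW dbo.ExamWithUsers
AS
SELECT e.ExamID, e.UserId, u.Firstname, u.Lastname
FROM SIS.dbo.Exam e
JOIN SIS_Pro.dbo.Users u ON u.UserId = e.UserId
GO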

Sincerely

Kianoosh

View 1 Replies View Related

Hiding Table Contents Without Sacrificing Too Much On Performance?

Sep 16, 2006

An application uses a database table having proprietary information. We do not want our customer to be able to look at that.
This being a real-time application, performance can not be sacrificed. What is the best way to keep the table data non-viewable without sacrificing the performance?

View 4 Replies View Related

Transact SQL :: Altering A Table Without Impacting The Performance?

Sep 15, 2015

I am altering a table which has more than 100 million rows, and would like to know the best possible way to add a new column to this table without impacting performance much.
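
For what it's worth, adding a nullable column with no default is a metadata-only change and should be near-instant regardless of row count (a sketch with hypothetical names; adding a NOT NULL column with a default may force every row to be touched, depending on version and edition):

ALTER TABLE dbo.BigTable ADD NewColumn int NULL
GO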

View 7 Replies View Related

Slow Performance With A Simple Query In A Small Table?

Jul 9, 2001

In my database / MY SERVER (SQL 7 / Win2K), I run a simple query against a table with 10000 rows (without a clustered index):
SELECT * FROM TABLE
It takes over 30 seconds. Why is it slow? How can I check for the reason? How should I configure my server to improve performance?
Thanks in advance.
TH
----------------------------------
SP_CONFIGURE's RESULT in MY SERVER
----------------------------------

Table 'spt_values'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
name minimum maximum config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask 0 2147483647 0 0
allow updates 0 1 1 1
cost threshold for parallelism 0 32767 5 5
cursor threshold -1 2147483647 -1 -1
default language 0 9999 0 0
default sortorder id 0 255 52 52
extended memory size (MB) 0 2147483647 0 0
fill factor (%) 0 100 0 0
index create memory (KB) 704 1600000 0 0
language in cache 3 100 3 3
language neutral full-text 0 1 0 0
lightweight pooling 0 1 0 0
locks 5000 2147483647 0 0
max async IO 1 255 32 32
max degree of parallelism 0 32 0 0
max server memory (MB) 4 2147483647 2147483647 2147483647
max text repl size (B) 0 2147483647 65536 65536
max worker threads 10 1024 255 255
media retention 0 365 0 0
min memory per query (KB) 512 2147483647 1024 1024
min server memory (MB) 0 2147483647 0 0
nested triggers 0 1 1 1
network packet size (B) 512 65535 4096 4096
open objects 0 2147483647 0 0
priority boost 0 1 1 1
query governor cost limit 0 2147483647 0 0
query wait (s) -1 2147483647 -1 -1
recovery interval (min) 0 32767 0 0
remote access 0 1 1 1
remote login timeout (s) 0 2147483647 5 5
remote proc trans 0 1 0 0
remote query timeout (s) 0 2147483647 0 0
resource timeout (s) 5 2147483647 10 10
scan for startup procs 0 1 0 0
set working set size 0 1 0 0
show advanced options 0 1 1 1
spin counter 1 2147483647 10000 10000
time slice (ms) 50 1000 100 100
two digit year cutoff 1753 9999 2049 2049
Unicode comparison style 0 2147483647 196609 196609
Unicode locale id 0 2147483647 1033 1033
user connections 0 32767 0 0
user options 0 4095 0 0

Table 'spt_values'. Scan count 43, logical reads 108, physical reads 0, read-ahead reads 0.
Table 'sysconfigures'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 2.

View 4 Replies View Related

Performance Table - Update Statement For 13 Week Average

Oct 16, 2013

I need to figure out the correct update statement syntax for the following integration.

I have a "Performance Table" which i insert weekly performance numbers into for each store. The table is constructed w/ columns such as Store, Weekenddate, Sales, Refunds, #ofPatients

In a "Averages Table" i have every weekenddate for each store populated. So 52 Weeks for 10 stores = 520 Rows of Store numbers & WeekendDates.

What i would like to do is run a loop or update statement which would update the store average for each weekendate based on the last 13 weeks.

This is my query

update performancestore_avgs set SalesAvg =
(select sum(SalesHit)/Count(Store) from performance_store where performance_store.weekenddate >= performancestore_avgs.weekenddate-84 and performancestore_Avgs.store = performance_store.store)

The update statement runs but the averages are completely wrong.
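
One thing worth checking (an observation, not a guaranteed fix): 84 days back is only 12 weeks, and the filter has no upper bound, so rows from every later week are included too. A bounded 13-week window would look something like:

UPDATE performancestore_avgs
SET SalesAvg = (
    SELECT SUM(ps.SalesHit) / COUNT(ps.Store)
    FROM performance_store ps
    WHERE ps.store = performancestore_avgs.store
      -- rolling window: the 13 weekend dates up to and including this row's weekenddate
      AND ps.weekenddate > DATEADD(DAY, -91, performancestore_avgs.weekenddate)
      AND ps.weekenddate <= performancestore_avgs.weekenddate
)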

View 3 Replies View Related

SQL 2012 :: Query Performance - Using Inline / Table Functions

Feb 26, 2014

I have a query that doesn't even register a time when running it. But when I add the lines that are commented out in the code below, it takes between 20 and 30 seconds! When I run the code from the functions directly, it is fast. Am I right that when I include the functions like this, the query loses its indexing capabilities?

SELECT
----, ISNULL(CAST(NULLIF(dbo.ufnGetRetail(I.ISBN13),0.00) AS VARCHAR(20)), 'N/A') RetailPrice
----, ISNULL(CAST(NULLIF(SP.LocalPrice,0.00) AS VARCHAR(20)),'on request') LocalPrice

[code]...

How can I have the functions included but bring the query response time down?
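
One commonly suggested rewrite (a sketch, not necessarily applicable here) is to replace a scalar function call with an inline table-valued function applied via CROSS APPLY, which the optimizer can expand into the main plan; the function and table names below are hypothetical:

-- hypothetical inline replacement for a scalar price lookup
CREATE FUNCTION dbo.ufnGetRetailInline (@ISBN13 varchar(13))
RETURNS TABLE
AS
RETURN (SELECT p.RetailPrice FROM dbo.Prices p WHERE p.ISBN13 = @ISBN13)
GO

SELECT i.ISBN13, r.RetailPrice
FROM dbo.Items i
CROSS APPLY dbo.ufnGetRetailInline(i.ISBN13) r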

View 4 Replies View Related

SLOW Performance On Table With Image Fields (SQL 2000)

Nov 15, 2006

Hi

We have SQL Server 2000 SP4 on a Windows 2003 2x3GHz XEON with 4 GB RAM. We have a table looking like this, with currently 6 rows. Total data is approx 10 KB in all rows all together.

CREATE TABLE [dbo].[BIOMETRICPROFILE] (
[BIOMETRICPROFILEID] [bigint] IDENTITY (1, 1) NOT NULL ,
[FINGERPRINTTEMPLATE1] [image] NOT NULL ,
[FINGERPRINTTEMPLATE2] [image] NOT NULL ,
[FINGERPRINTTEMPLATE3] [image] NOT NULL ,
[FINGERPRINTTEMPLATE4] [image] NOT NULL ,
[FINGERPRINTTEMPLATE5] [image] NOT NULL ,
[FINGERPRINTTEMPLATE6] [image] NOT NULL ,
[TYPE] [nvarchar] (50) COLLATE Danish_Norwegian_CI_AS NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

SELECT * FROM BIOMETRICPROFILE takes ~4 seconds (!) to execute through Query Analyzer. All other tables have no performance problems. We have a SQL 2005 Express installation on the same server; if we restore a backup of the SQL 2000 database there, the query takes approx ~15 ms. What is going on here? Does SQL 2000 have problems with image fields, or how can we find the problem?

Regards
Anders

View 2 Replies View Related

Large Table/slow Query/ Can Performance Be Improved?

Jul 20, 2005

I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, from which it is separated by several states. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?

View 3 Replies View Related

Performance Of Table-valued Function And Execution Plan

Mar 31, 2008

I am using SQL Server 2005 EE with SP1. The server OS is Windows 2003 SP2.

I have a table-valued function (e.g. findAllCustomer(Name varchar(100), gender varchar(1))) that joins some tables and finds the result set based on the input parameters.

I have created indexes for the related joined tables.

I would like to check the performance of the table-valued function and optimize the indexed columns via the execution plan.

I found the graphic execution plan only shows one icon to represent the function's performance. I cannot find any further detail of the function (e.g. which index is used in the joining).

If I change the function to a stored procedure, I can see whether the T-SQL is using an index seek or a table scan. I also found the stored procedure version's subtree cost is much greater than the table-valued function's.

I would like to know whether any configuration in Management Studio can give more information on the function's performance.
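
One option (a sketch; the output is text rather than graphic) is to request the row-by-row plan statistics, which may expose the statements executed inside the function:

SET STATISTICS PROFILE ON
SELECT * FROM dbo.findAllCustomer('Smith', 'M')
SET STATISTICS PROFILE OFF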

Thanks


View 3 Replies View Related

Creating Indexes On Large Table To Increase Performance

Mar 5, 2008

Dear all,
I'm using SQL Server 2005 Standard Edition.
I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags.
The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags about 7,500,000 records.
The WHERE conditions (originally highlighted in the post) are dynamic and vary every time this stored procedure is executed; the condition may contain 7 columns, or 10, or 2, etc.
I want to create non-clustered indexes on the columns used in the WHERE statement, but the DTA suggests different indexing whenever the WHERE statement changes.
So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together in one index if I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE statement always varies from situation to situation. Below is the SP:


CREATE TABLE #tempLookups (ID int identity(0,1), Code NVARCHAR(100), NameE NVARCHAR(500), NameA NVARCHAR(500))

CREATE TABLE #tempTable (ID int identity(0,1), TypesCount INT, CallsType NVARCHAR(50))

INSERT INTO #tempLookups
SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC

INSERT INTO #tempTable
SELECT COUNT(DISTINCT(RecordedCalls.ID)) AS TypesCount, RecordedCalls.CallType AS CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast('01 Jan 1910 00:00:00:000' as datetime) AND cast('01 Jan 2210 00:00:00:000' as datetime))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN ('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN ('2')
AND RecordedCalls.AgentID NOT IN ('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN ('100','200'))
AND RecordedCalls.IsDeleted = 'false'
GROUP BY RecordedCalls.CallType

SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount,
CASE('English')
    WHEN 'Arabic' THEN #tempLookups.NameA
    ELSE #tempLookups.NameE
END AS CallsType
FROM #tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code

DROP TABLE #tempLookups
DROP TABLE #tempTable


Thanks all,
Tayseer

Any suggestions on how to create efficient indexes?
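
For the conditions that are always present, a composite index whose leading columns match the most selective ones is the usual compromise (a sketch, not DTA output; the rarely used dynamic columns can be covered by a few single-column indexes instead of one index per combination):

CREATE NONCLUSTERED INDEX IX_RecordedCalls_CallDate
ON dbo.RecordedCalls (CallDate, CallType)
INCLUDE (Duration)
GO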

View 2 Replies View Related

SQL 2012 :: Performance Limit On Number Of Indexes Per Table / Database

Oct 1, 2014

Is there a performance limit on the number of indexes per table / database ? With Filtered indexes there appear to be many more opportunities for more finely defined, and therefore smaller indexes resulting in many more indexes on a single table.
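
Since filtered indexes are mentioned, this is the shape of the feature (a sketch with hypothetical names; each filtered index stores only rows matching its WHERE clause, so it is smaller, but every additional index still adds write overhead):

CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID)
WHERE Status = 'Open'
GO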

View 4 Replies View Related

Query Performance Hard Coded Value Versus Table Driven

Nov 6, 2014

I want to be able to return the rows from a table that have been updated since a specific time. My query returns results in less than 1 minute if I hard-code the reference timestamp, but it keeps spinning if I load the reference timestamp from a table. See the examples below (the "Reference" table has only one row, with the value 2014-09-30 00:00:00.000).

select * from A where ReceiptTS > '2014-09-30 00:00:00.000'

select * from A where ReceiptTS > (select ReferenceTS from Reference)
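
A common workaround (a sketch; it sidesteps the cardinality guess the optimizer has to make for the subquery) is to pull the reference value into a variable first:

DECLARE @ReferenceTS datetime
SELECT @ReferenceTS = ReferenceTS FROM Reference

SELECT * FROM A WHERE ReceiptTS > @ReferenceTS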

View 5 Replies View Related