Improving Large Table Performance

Aug 15, 2007

We have a table that is 800 GB. We are planning to rebuild the clustered index on this table onto a different filegroup. The new filegroup and the files associated with it will sit on a SAN with a 1.5 TB allocation. Does anyone have any suggestions regarding how many files to associate with the filegroup for optimal performance? Apparently we could have 3 LUNs (500 GB each), so would one file on each LUN provide additional performance as opposed to one file on one LUN?
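One way to sketch this (database, table, key and path names below are placeholders, not from the post): create the filegroup with one file per LUN, pre-sized so SQL Server stripes allocations round-robin across all three, then rebuild the clustered index onto it with DROP_EXISTING (SQL Server 2005 syntax; if the clustered index backs a primary key constraint, the constraint has to be dropped and re-created instead).

ALTER DATABASE MyBigDb ADD FILEGROUP FG_BigTable

ALTER DATABASE MyBigDb ADD FILE
    (NAME = FG_BigTable_1, FILENAME = 'E:\Data\FG_BigTable_1.ndf', SIZE = 400GB, FILEGROWTH = 10GB)
    TO FILEGROUP FG_BigTable
ALTER DATABASE MyBigDb ADD FILE
    (NAME = FG_BigTable_2, FILENAME = 'F:\Data\FG_BigTable_2.ndf', SIZE = 400GB, FILEGROWTH = 10GB)
    TO FILEGROUP FG_BigTable
ALTER DATABASE MyBigDb ADD FILE
    (NAME = FG_BigTable_3, FILENAME = 'G:\Data\FG_BigTable_3.ndf', SIZE = 400GB, FILEGROWTH = 10GB)
    TO FILEGROUP FG_BigTable

-- Rebuilding the clustered index ON the new filegroup moves the table's data there.
CREATE CLUSTERED INDEX CIX_BigTable ON dbo.BigTable (KeyColumn)
    WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
    ON FG_BigTable

Multiple equally sized files in one filegroup mainly buy parallel I/O when the LUNs map to independent spindles; if all three LUNs share the same physical disks on the SAN, one file on one LUN will perform about the same.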

View 1 Replies



Improving Performance

Feb 19, 2007

Hi,

I use SQL Server 2005 on my development machine, and whilst this machine isn't as powerful as the live server, it does at times seem a little slower than I would expect. So I've been wondering if there is any way for me to tune the machine so that SQL Server is better able to make use of the resources? I am using WS2003.

Short of tweaking the actual database, which is still under development, does anyone have any tips to increase performance?

Thanks,
--
Dylan Parry
http://electricfreedom.org | http://webpageworkshop.co.uk

Programming, n: A pastime similar to banging one's head against a wall, but with fewer opportunities for reward.
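Not from the original thread, just a hedged starting point: on a development box that also runs other services, capping SQL Server's memory (the value below is an example only) stops it squeezing out the rest of the machine, which is a common reason a dev instance feels sluggish.

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Cap the buffer pool so the OS and other dev tools keep some RAM (example value).
EXEC sp_configure 'max server memory (MB)', 1536
RECONFIGURE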

View 16 Replies View Related

Improving Performance

Sep 23, 2007

I've written a small application that uses a Microsoft SQL Server 2000 database, and lately I've been experiencing a lot of performance problems with timeouts and data just taking too long to retrieve. I was wondering what I could do to help with this.

Details:
The program I wrote simply takes Windows event logs and uploads them into a central database. I have also built a small ASP.NET page where users can write queries or save them for later use. The program hits our servers, grabs the logs remotely, inserts them into SQL one by one, and clears out the logs when finished. It runs once a day and generates about 12,000 events daily (mostly logins and logoffs). Once it starts, it takes about two and a half hours to finish going through everything.

Database Design...
A single table, about 3.5 million rows now, taking up about 3 GB of hard drive space.

Column Definitions
Name, Data Type



EventEntryID INT PrimaryKey(Clustered) (Auto-Increment Identity)
Log varchar(300)

Type varchar(300)
Date smalldatetime
Source varchar(300)
Category varchar(300)
Message varchar(8000)
EventID varchar(300)
UserName varchar(300)
Computer varchar(300)

I know the database design can probably use some work, but it's hard to do so now because of the massive amounts of data without killing the transaction log.

Computer Specs:
OS: Window Server 2003
Processor: Intel 700 MHz
RAM: 1 GB

I know it's a small server, but this is pretty much the largest thing this server deals with

I've tried to monitor SQL with Perfmon and the SQL performance counters, and I just don't see much of anything going wrong there, so I'm thinking the hardware isn't the issue; but then again, I'm not very familiar with SQL.
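A sketch only, assuming the table is called dbo.EventLog and that the ad-hoc query page mostly filters on the Date and Computer columns listed above: a supporting nonclustered index stops those searches from scanning all 3.5 million rows. Wrapping each day's inserts in a single transaction (or batching them) also cuts the per-row overhead of inserting 12,000 events one at a time.

CREATE NONCLUSTERED INDEX IX_EventLog_Date_Computer
    ON dbo.EventLog ([Date], Computer)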

View 10 Replies View Related

Improving Performance Of SqlServer

Mar 28, 2006

Hello everyone,
Does anyone know some tips or articles that describe how SQL Server 2000 can be improved? I have an application which displays newspapers. The text of the newspapers is saved in the DB. Performance is OK, but I have performance problems when I do a full-text search.

1) Hardware: P4 Xeon dual core, 2.4 GHz, 2 GB RAM
2) Number of articles in the database: ~255,000 records (between 10 and 400 words each)
3) The database is full-text indexed (I am searching with CONTAINS and not with the LIKE keyword)
4) In my query I am searching in 3 columns (title, leadtext, content)
5) Results (searching for one word, stopping after 150 results): the time to get a result from the database lies between 5 and 10 seconds (average: 6.77 sec)!
This is only the time for getting the DataSet from SQL Server; manipulating the DataSet and binding it to my Repeater takes about 15 milliseconds. Using a SqlDataReader didn't improve the performance - I got the same results.
In future the number of records will be three to five times higher than now. I am afraid the user will then have to wait half a minute to get a result of 150 search hits.
My questions:
Are you seeing similar results in your applications? Are these times acceptable?
SQL Server 2000 (Professional) is installed in its default way with the default settings. Are there possibilities to modify these settings to improve performance? Where can I find info about that?
 
greetings and thx
Klaus
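A sketch against assumed names (an Articles table with the title, leadtext and content columns mentioned above): since only 150 results are shown anyway, CONTAINSTABLE's top_n_by_rank argument lets the full-text engine stop after the best 150 matches instead of ranking every hit, which tends to help most on common words.

SELECT TOP 150 a.ArticleID, a.Title
FROM dbo.Articles AS a
INNER JOIN CONTAINSTABLE(dbo.Articles, (Title, LeadText, Content), '"suchwort*"', 150) AS ft
    ON ft.[KEY] = a.ArticleID
ORDER BY ft.[RANK] DESC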

View 2 Replies View Related

Improving Count(*) Performance

Feb 24, 2008

Hi guys,

I'm facing a performance issue with SELECT COUNT(*) FROM table_name when performing pagination of my results. My actual query joins 4 tables with proper PK and FK constraints and indexes applied.



select top 100 * from table1
left outer join table2 on tid=rid
left outer join table3 on tid=aid
inner join shipment on sid=sid
where datetime>=convert(datetime,'20070810 00:00:00') and datetime<=convert(datetime,'20070920 23:59:59')

The query above takes 200 ms to 600 ms to execute.


select count(*) from table1
left outer join table2 on tid=rid
left outer join table3 on tid=aid
inner join shipment on sid=sid
where datetime>=convert(datetime,'20070810 00:00:00') and datetime<=convert(datetime,'20070920 23:59:59')

The query above takes 2000ms to 4000 ms to run.



The total number of records that fulfill the WHERE clause is approximately 5000 to 6000. I checked the execution plan and found that the server is actually using the indexes, with operations like index scan and index seek.

Are there any other things that I can do to improve the counting of the total records?



Thanks.
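A sketch only, with table and column names guessed from the queries above: an index on the datetime filter column gives the count a small structure to seek into, and the two LEFT OUTER JOINs can often be dropped from the counting query since they cannot remove rows (the INNER JOIN to shipment has to stay, because it can).

CREATE NONCLUSTERED INDEX IX_table1_datetime ON dbo.table1 ([datetime])

-- Assumes table2/table3 join at most one row each, so they cannot change the count;
-- the join column names (t1.sid) are guesses based on the shorthand above.
SELECT COUNT(*)
FROM table1 AS t1
INNER JOIN shipment AS s ON s.sid = t1.sid
WHERE t1.[datetime] >= convert(datetime, '20070810 00:00:00')
  AND t1.[datetime] <= convert(datetime, '20070920 23:59:59')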

View 5 Replies View Related

Improving Conversion Performance

Sep 20, 2006

I have a varchar(max) field that is on its way to a Term Lookup task, so it requires conversion to DT_NTEXT before it gets there. The conversion is taking a really, really long time. In my current example I have about 400 rows. Granted, 'select max(len(myColumn))' on the varchar(max) column returns approximately 14,000,000, so I understand that I'm moving a lot of data. But I'm on an x64 machine with 4 dual-core processors, and watching Task Manager I can see that I'm only using 1 core out of 8. Is there anything I'm missing that could speed this up?



Thanks,

Frank

View 3 Replies View Related

Improving Performance Of Search Queries

Dec 5, 2007



I have a number of complex search stored procedures that use the following syntax to try to simplify the code.

WHERE @SearchParam IS NULL OR SearchCol = @SearchParam

Unfortunately, it appears that this is really inefficient as far as the database is concerned.

If I run the following query on the AdventureWorks database (SQL Server 2005 with SP2 and fixes up to v3054)




SET STATISTICS IO ON

DECLARE @CustomerID int
SET @CustomerID = 1

SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE @CustomerID IS NULL OR CustomerID = @CustomerID

SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID

I see that the first select results in 45 logical reads, whereas the second results in only 2 logical reads.

Does anyone have any idea how I can get the benefit of a search procedure that does not need loads of IF blocks or dynamic SQL, without this major performance issue?

Cheers,

Justin
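A commonly used alternative, sketched here with the same AdventureWorks names: build only the predicates that are actually supplied and execute the string with sp_executesql. Each combination of supplied parameters then gets its own cached plan, so the CustomerID seek is available whenever CustomerID is passed, without a forest of IF blocks.

DECLARE @sql nvarchar(max)
DECLARE @CustomerID int
SET @CustomerID = 1

SET @sql = N'SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE 1 = 1'
IF @CustomerID IS NOT NULL
    SET @sql = @sql + N' AND CustomerID = @CustomerID'
-- ...repeat for each optional search parameter...

EXEC sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID

Adding OPTION (RECOMPILE) to the original catch-all form is the other common route, at the price of a compile on every execution.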

View 3 Replies View Related

Large Table/slow Query/ Can Performance Be Improved?

Jul 20, 2005

I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, separated by several states. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?
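One hedged way to attack this (object names below are invented): move the joins and the restricting WHERE clause into a view or stored procedure on SQL Server and have Access run a pass-through query against it, so only the final result set travels over the network instead of the large table.

CREATE VIEW dbo.vw_OrderSummary
AS
SELECT o.OrderID, o.OrderDate, c.CustomerName, d.LineTotal
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
INNER JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID
WHERE o.OrderDate >= '20040101'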

View 3 Replies View Related

Creating Indexes On Large Table To Increase Performance

Mar 5, 2008

Dear all,
I'm using SQL Server 2005 Standard Edetion.
I have the following stored procedure that is executed against two tables (RecordedCalls and RecordedCallsTags).
The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags has about 7,500,000 records.
The WHERE clause is dynamic and varies every time this stored procedure is executed: it may contain 7 columns in the conditions, or 10 columns, or 2 columns, etc.
Now I want to create non-clustered indexes on the columns used in the WHERE clause, but the DTA suggests different indexes whenever the WHERE clause changes.
So what is the right way to create the indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together in one index if I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE clause always varies from situation to situation. Below is the SP:


CREATE TABLE #tempLookups (ID int identity(0,1), Code NVARCHAR(100), NameE NVARCHAR(500), NameA NVARCHAR(500))

CREATE TABLE #tempTable (ID int identity(0,1), TypesCount INT, CallsType NVARCHAR(50))

INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC

INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) AS TypesCount, RecordedCalls.CallType AS CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast('01 Jan 1910 00:00:00:000' as datetime) AND cast('01 Jan 2210 00:00:00:000' as datetime))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN ('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN ('2')
AND RecordedCalls.AgentID NOT IN ('1000010000')
AND (RecordedCallsTags.TagID IS NULL OR RecordedCallsTags.TagID NOT IN ('100','200'))
AND RecordedCalls.IsDeleted = 'false'
GROUP BY RecordedCalls.CallType

SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount,
CASE ('English') WHEN 'Arabic' THEN #tempLookups.NameA ELSE #tempLookups.NameE END AS CallsType
FROM #tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code

DROP TABLE #tempLookups
DROP TABLE #tempTable


Thanks all,
Tayseer

Any suggestions on how to create efficient indexes?
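A sketch rather than a definitive answer: instead of one index per DTA run, pick the columns that appear in nearly every generated WHERE clause, lead the index with the most selective of them, and carry the rest as included columns (SQL Server 2005 INCLUDE syntax). The column names below come from the procedure above; choosing CallDate as the leading key is only an assumption.

CREATE NONCLUSTERED INDEX IX_RecordedCalls_CallDate
    ON dbo.RecordedCalls (CallDate, CallType)
    INCLUDE (Duration, ChannelID, ServerID, AgentID, IsDeleted)

A handful of such composite indexes usually covers the common filter combinations far better than dozens of single-column indexes, which the optimizer rarely combines efficiently.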

View 2 Replies View Related

Large Log Table ....SELECT * FROM Statement....killing The Performance Of Server..Help Me Out..

Mar 12, 2008

Hi all

I have a large log table with a large amount of data (1 month only). If I run a query like SELECT * FROM <table_name>, the server goes very, very slow.

Because of the large amount of data, the system is slowing down.

Please, can somebody help me with suggestions on how to get good performance?
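A hedged sketch with made-up names: rather than pulling the whole month with SELECT *, name only the columns you need, filter on an indexed date column, and limit the output.

CREATE NONCLUSTERED INDEX IX_LogTable_LogDate ON dbo.LogTable (LogDate)

SELECT TOP 1000 LogDate, Severity, Message
FROM dbo.LogTable
WHERE LogDate >= DATEADD(day, -1, GETDATE())
ORDER BY LogDate DESC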

View 4 Replies View Related

Improving Access To Inserted And Deleted Table

Feb 4, 2008

Is there a configuration or a trick to improve the speed of access to the inserted and deleted tables within a trigger? Whenever a trigger is called, access to inserted or deleted constitutes approximately 95% of the execution time.

Is there a way to improve access to inserted and deleted other than copying the data to another table?

View 3 Replies View Related

Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find some resources on that on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

Waleed Eissa
http://www.waleedeissa.com

View 9 Replies View Related

Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records that has a clustered index on an identity column (the primary key). Simple count queries are taking minutes to execute on this table (e.g. select count(1) from table1). I checked the statistics on the primary key, and the histogram showed the 39 millionth record as the RANGE_HI_KEY. I updated the statistics on this column and tried re-querying, but it still took at least 5 minutes to give me the count of records in the table. Also, there were no users using the table when I queried, and inserts into this table were working fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?


Thanks,
Harish
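If an exact, transactionally consistent number is not required, one option (SQL Server 2005 and later; the table name below is just the example from the post) is to read the row count the storage engine already maintains instead of scanning 61 million rows.

SELECT SUM(row_count) AS approx_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.table1')
  AND index_id IN (0, 1)

For exact counts, a narrow nonclustered index (even on a single small column) gives COUNT(*) a much smaller structure to scan than the wide clustered index.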

View 6 Replies View Related

Performance Of NVARCHAR(n) For Large Strings

Jul 25, 2007

What are the performance or storage implications of using a large value for NVARCHAR? For example, if I specify an NVARCHAR(4000) column just to cope with the rare case where there are strings that long, but have a table full of strings 255 characters long, is the performance identical to specifying an NVARCHAR(255) column? Is there a reason, then, NOT to specify NVARCHAR(4000) on everything? i.e. does the query optimizer use it?



I know that how long a row is and how many rows can fit in a page (8 KB in SQL Server) affects performance, but my understanding is that NVARCHAR only stores the characters needed, so it wouldn't be affected unless there was actually a longer string. I wanted to know if there are any other considerations to using a large value here.



Also, how does NTEXT compare to using NVARCHAR? I noticed the documentation said it used a new page after 256 characters, which sounds like it's different from how a 4000 byte NVARCHAR would be stored?

View 1 Replies View Related

Admin Performance Issues, Large Amount Of DBs

Mar 10, 2004

I have a large number of DBs (150) on my SQL Server, and when using Enterprise Manager to do administrative tasks like backup, restore, etc., it takes 1.5 hours to open the Databases folder. The server is 2x P4 with 3 GB RAM. Any ideas on how to manage this number of DBs on the same server and instance of SQL?
Cheers!

View 2 Replies View Related

How Do You Improve SQL Performance Over Large Amount Of Data?

Jul 23, 2005

Hi,

I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now I have encountered 2 problems:

1) Sometimes, when I try to query against this table, I get a SQL command timeout. I did more testing with Query Analyzer and found that the same queries do not always take about the same time to execute. Could anyone please tell me what affects the speed of a query, and which is the most important factor among them all? (I can think of the number of open connections, the server's CPU/memory...)

2) I am not sure if 2 million rows is considered a lot or not; however, it has started to take 5-10 seconds for me to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance?

Thank you,
Charlie Chang
[Charlies224@hotmail.com]

View 5 Replies View Related

Performance Issue On A Singel Large Insert

Dec 19, 2006



Hi,



I'm testing mirroring.

1) I have dedicated NIC for Mirroring - 100Mb

There is no issue with the network (file of 25MB goes in 2.5 seconds)

2) I'm issuing the next simple command

Select * into dbo.Table2 from dbo.Table1

3) the size of the table is 25MB



in async mode it takes 3-4 sec (as if it runs only local)

in sync mode it takes 25-29 sec !!!!

WHY ? IS THIS NORMAL ?

is there any configuration i can change ?

View 5 Replies View Related

SQL 2005 Full-Text Performance On Large Results

May 10, 2006

Hello everybody,
I've got a little problem which I've been trying to solve for 1-2 years, and I hoped it would go away with SQL 2005 - but that wasn't the case :(.

Situation:
I've just bought a new server containing:
SQL 2005
64-bit environment
4 GB RAM
2x AMD Opteron 2 GHz processors (dual core)
2x RAID controllers (RAID 1) containing:
1.1 System
1.2 Data
2.1 Transaction Logs

I've created a full-text table containing all the search terms I need to search.
Table build:
RecID - int - Primary Key
SrcID - varchar(30)
ArticleID - int - referring to an original table
SearchField - varchar(150) - Containing the search terms
timestamp - timestamp field

Fulltext index:
RecID as Primary Key
SearchField as indexed field - Wordbreaker: Neutral (containing several languages), Accent sensitivity off

Now I've got several tables imported in here, resulting in a table size of ~13 million rows.

There is no problem with the performance of this catalog if I search for a term which isn't contained in more than 200-300 records - but if I search for a term which can occur in 200,000 or more, it gets extremely slow.

On the slow query the first records come in almost immediately, but up to 60 seconds pass before the query finishes.
The problem is that I have to sort by a ranking value which is stored externally - so I need all the results in order to sort them...

current (debugging) query:
SELECT ArticleID FROM fullTextTable AS ft INNER JOIN CONTAINSTABLE(FullTextCatalog,SearchField,'"term*"') AS ftRes ON ftRes.[KEY]=ft.idEntry

Now if I check Performance Monitor:
As soon as I run the query, the 'Avg. Disk Read Queue Length' counter on disk D (SQL data files) jumps to the top until the query has finished.
There is almost no read/write activity on C:, where the full-text catalog is stored...

If I rerun the query after it has finished once successfully, it completes in under 1-2 seconds; it would be nice to get that result the first time :).

Does anybody know a workaround to this problem?

View 9 Replies View Related

Inconsistent Performance Results With Large Partitioned Tables.

Dec 5, 2007

I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.

The first query completed in around 7s and has 47,000 logical reads.


select mo.monitor_id, mo.site_id, mo.testtime,
    sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
    sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime


The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.


select mo.monitor_id, mo.site_id, mo.testtime,
    sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
    sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime



The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.


My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?

One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.

Thanks,

Tim
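One hedged option, abbreviated here (the long site_id list and the nolock hints are omitted for brevity): a join hint that forces the nested-loops shape the fast query happened to get, so monitor_object is seeked once per qualifying monitor_raw row. A join hint pins the join order and type for the whole query, so it needs testing against both parameter sets.

SELECT mo.monitor_id, mo.site_id, mo.testtime,
       SUM(mo.NumBytes), SUM(mo.DNSTime), SUM(mo.ConnectTime),
       SUM(mo.FirstByteTime), SUM(mo.ContentTime), SUM(mo.RelocTime)
FROM monitor_raw AS mr
INNER LOOP JOIN monitor_object AS mo
       ON  mr.monitor_id = mo.monitor_id
       AND mr.testtime = mo.testtime
       AND mr.site_id = mo.site_id
WHERE mr.monitor_id IN (152682, 5339, 5341, 5342, 268080)
  AND mr.testtime BETWEEN 'Oct 31 2007 3:00:00:000PM' AND 'Nov 30 2007 3:00:00:000PM'
  AND mo.returncode = 200
  AND mr.escalationlevel = 0
GROUP BY mo.monitor_id, mo.site_id, mo.testtime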

View 5 Replies View Related

Does A Large Amount 'image' Data Affect Overall SQL Performance?

Jul 20, 2007

Recently we added a new table into our SQL2000 database specifically to store scanned in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.



Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB - 11GB of this is due to the scanned in images.



Would this new table affect SQL performance with regard to accessing other data in the database that has nothing to do with the new table?

If so, would moving this new table into its own database be recommended?



Thanks

Rod
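Rather than a separate database, one hedged alternative (names and sizes below are invented) is to give the image table its own filegroup on its own spindles, so scans of the document blobs do not compete for I/O with the rest of the data and the filegroup can be backed up on its own schedule. Note that for an existing table the image data only moves if the table is re-created with TEXTIMAGE_ON pointing at the new filegroup.

ALTER DATABASE MyDb ADD FILEGROUP FG_Images
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_Images1, FILENAME = 'F:\Data\MyDb_Images1.ndf', SIZE = 12000MB)
    TO FILEGROUP FG_Images

-- New or re-created table: in-row data and image/text data both on the new filegroup.
-- CREATE TABLE dbo.ScannedDocs (...) ON FG_Images TEXTIMAGE_ON FG_Images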

View 1 Replies View Related

Performance - Automatic Expansion Vs Setting Large Initial Size.

Aug 25, 2004

Hi,

We currently have a fairly new SQL Server 2000 DB (currently about 18 MB in size) as a backend to an application (Navision). Performance seems to be below what it should be.

The DB is increasing quite rapidly in size, with a lot of data scheduled to be loaded into it, and more and more shops and users coming onto the system with a lot more transactions going into the DB.

The initial setup of the DB has the database file properties set to "Automatically grow file" by "30%", with unrestricted file growth.

The server that the DB sits on is high spec with very large disk space.

Because the database will be expanding a lot, it will quite regularly be reaching its maximum space allocation and then performing a 30% increase in size (which I guess affects performance quite a bit??).

Is it best to set the initial size of the DB to a much bigger size in the first place, as we have large disk space available, and also to set the growth increment bigger?

Any advice on best performance would be much appreciated.

Regards,
David
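A sketch with example names and sizes only: pre-size the data and log files to roughly what the coming loads will need and replace the 30% growth with a fixed increment, so growth events become rare and predictable rather than ever-larger pauses.

ALTER DATABASE NavisionDb MODIFY FILE (NAME = NavisionDb_Data, SIZE = 20000MB)
ALTER DATABASE NavisionDb MODIFY FILE (NAME = NavisionDb_Data, FILEGROWTH = 1000MB)
ALTER DATABASE NavisionDb MODIFY FILE (NAME = NavisionDb_Log, SIZE = 4000MB)
ALTER DATABASE NavisionDb MODIFY FILE (NAME = NavisionDb_Log, FILEGROWTH = 500MB)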

View 1 Replies View Related

SQL Server 2005 Full Text Performance With Large Number Of Records

Dec 12, 2007

Hi
We are using the SQL Server 2005 Full-Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned from FTS is large; for some searches it takes a very long time. With the same data and catalog, full-text queries for less common words return in a timely manner. The nature of the problem doesn't allow us to have only the top results; we need all the results. So it's not about the size of the data but the number of results getting returned from FT (as the catalog is inverted). The machine is dual processor with 4 GB RAM.

I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?

If splitting is the only way, can you give me an idea of what the statistical/standard limit is for the number of search results / catalog size at which FTS still gives good results?
 
Thanks in advance

View 1 Replies View Related

OLEDB Destination Performance Decreasing Drastically Over Time For Large Input Files

Jul 4, 2006

Hi,

We are processing 6,000,000 rows (a 2 GB file) available in a flat file and loading them into database tables using OLE DB Destination components. In the data pipeline of an SSIS package we have 1 Flat File Source reader, 7 Lookup components (full cache mode), 1 Multicast component and 2 OLE DB Destinations with the fast load option.

We have observed that the first 1,000,000 rows are processed and loaded into the target tables in just 4 minutes. The second set of 1,000,000 rows is processed in 15 minutes. After this, for processing each 100,000 rows SSIS takes approximately 8-10 minutes. We are not able to identify the reason for this unexpected behaviour of SSIS.

We thought that, as the input file size is 2 GB, SSIS was not able to manage it and was slowing down over the course of execution. We split the big input file into 60 small files of about 37 MB each, then modified the package by adding a Foreach Loop task to process all 60 small files and load them into the database server sequentially. Even with this approach, we found that data loading slowed down drastically after processing 13 files.

In order to verify whether there is any problem with reading the source file or with the transformations, we replaced the OLE DB Destination components with Flat File Destinations. With Flat File Destinations the time taken for processing rows is very constant: every 8 minutes the package is able to process 1,000,000 rows and write them to the destination files. So there is no problem with either the Lookup components or the Flat File Source reader.

We are sure that the target database server is in the same state/condition from the start to the end of package execution. The client box on which we are running the package has 1 GB RAM. During package execution the CPU usage is at 30% and PF usage is 580 MB. SP1 is also installed on both client and server.

Does anyone have a clue what is causing the slowdown of the data load over the course of package execution?

View 3 Replies View Related

Optimizing/Improving WHERE

Oct 24, 2005

Here's a sample SELECT statement:


SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE
(T1.F1 > @iNum AND T2.F1 > @iNum)
OR
( @iNum2 * (T1.F1 - T2.F1)/(T1.F2 - T2.F2) ) + (T1.F1 - ((T1.F1 - T2.F1)/(T1.F2 - T2.F2) * T1.F2) ) > @INum


As you can see, the second part of the WHERE (after the OR) is much more complicated than the part before the OR. My query would run a lot faster if it tried the first part of the OR and didn't bother with the second part if the first part was satisfied. Is there any way to do this?

Thanks!

Tyler
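There is no guaranteed short-circuit for OR inside a single WHERE clause, but one common rewrite (sketched here, assuming F1/F2 are not nullable and the divisor T1.F2 - T2.F2 is never zero, as the original query already assumes) splits the two branches so that only rows failing the cheap test need to satisfy the expensive one, and the optimizer can handle the cheap branch with a straightforward plan.

SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE T1.F1 > @iNum AND T2.F1 > @iNum

UNION ALL

SELECT T1.F3
FROM T1 INNER JOIN T2 ON T1.F4 = T2.F4
WHERE NOT (T1.F1 > @iNum AND T2.F1 > @iNum)
  AND ( @iNum2 * (T1.F1 - T2.F1)/(T1.F2 - T2.F2) ) + (T1.F1 - ((T1.F1 - T2.F1)/(T1.F2 - T2.F2) * T1.F2) ) > @iNum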

View 3 Replies View Related

Improving The Speed Of A LIKE Query

May 16, 2007

Hi all,

My colleague and I are having some difficulties regarding the speed of a LIKE query on our intranet system. Please see the quote below as posted on another SQL forum (To which no one has responded yet ).

Quote: Howdy,

I've taken control of an SQL Database with an ASP front end. The CPU usage of the SQL Server process periodically jumps up to between 75-99%. I've managed to identify the ASP page that is causing it and it's a search page.

It's basically doing something like this:

SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CustomerName LIKE '%query%'
OR CustomerAddress LIKE '%query%'
OR CustomerPostCode LIKE '%query%'

Where query is the value the user has searched on.

The CustomerName and CustomerPostCode fields are varchar fields, but the CustomerAddress field is a text field.

I know if I replace the CustomerAddress text field with a series of varchar fields then I could put non-clustered indexes on them, but would this help speed things up with a LIKE query?

There's a lot of work required on the ASP side if I'm going to split the field out, so I only want to do it if I'm going to get a significant benefit from it.

Failing this, is there a better way of performing a search like this?

Any help is much appreciated,

Thanks!
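A hedged alternative to splitting the address field: put a full-text index on the three columns (full-text indexing works on the text column as well as the varchars) and query with CONTAINS, which uses the full-text index instead of scanning every row the way a leading-wildcard LIKE must. The trade-off is that full-text matches whole words and word prefixes, not arbitrary substrings.

SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CONTAINS((CustomerName, CustomerAddress, CustomerPostCode), '"query*"')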

View 9 Replies View Related

Improving Chatty Applications

Nov 4, 2006

All:

I am presently tasked with improving the performance of a chatty application. What I mean by a chatty application is that the application makes multiple calls to the database server for each user request. In many cases it appears that many of the windows are making multiple calls to the database for the population of dialog boxes. In a couple of cases there are more than 10 dialog boxes that are populated similar to this.

To me this is kind of an "old issue", but my response is (1) cache the dialog information as much as possible and (2) make one call to the database to return all of this data for the dialog boxes if it is not cached. For update calls again make a single call to the database rather than making a call for each individual row of a table that is updated.

I admit that some of this will complicate the code of the application; however, my perspective is that I am supposed to look at this from the database perspective and not from the perspective of "programmer convenience." There will be times when something that would otherwise get too hairy for a programmer will dictate an additional trip to the server, but this should not normally be the case.

What else do I need to consider?




Dave
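A small sketch of point (2) above, with invented names: one procedure that returns every lookup list a dialog needs as separate result sets, so the form is populated in a single round trip instead of ten.

CREATE PROCEDURE dbo.GetOrderDialogLookups
AS
BEGIN
    SET NOCOUNT ON
    -- Each SELECT comes back as its own result set in the same call.
    SELECT CountryID, CountryName FROM dbo.Countries ORDER BY CountryName
    SELECT CurrencyID, CurrencyCode FROM dbo.Currencies ORDER BY CurrencyCode
    SELECT StatusID, StatusName FROM dbo.OrderStatuses ORDER BY StatusID
END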

View 3 Replies View Related

Improving SQL Server Searching - Buying A 3rd Component

Oct 26, 2007

Does anyone know of any good components I can buy in to sit on top of my SQL Server to provide the kind of search functionality that Google has - 'Did you mean...', nearest match, etc.? I have set up SQL Server Full-Text, but building all the extra functionality will take some time and I think it would be more cost-effective to buy in a component. If anyone has used any, could you let me know about your experiences please?

Cheers
Scott

View 2 Replies View Related

Does It Store All The Results To Tempdb Database When I Query Against A Large Table Which Joins Another Table?

Jun 25, 2007

Hi, all experts here,



I am wondering whether tempdb stores all results temporarily whenever I query a large fact table (over 4 million records) that joins another dimension table. Each time I run the query, tempdb grows to nearly 1 GB, which nearly uses up all the space on my local system drive, and as a result performance drops right off. Is there any way to fix this problem? Thanks a lot in advance; I am looking forward to hearing your advice.



With best regards,



Yours sincerely,



View 11 Replies View Related

Large Table-Table Partition, View Or Other Method?

Aug 27, 2007

Hi everyone,

I use SQL 2005. What is the best practice for dealing with a large table (more than a million rows)? Table partitioning, a view, or another method?

Can you please give some suggestions? It will be very helpful if you can post some references or examples.

Thank you!
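A sketch of SQL Server 2005 table partitioning (an Enterprise Edition feature; the names, types and boundary dates below are examples only). Partitioning mostly pays off for manageability - switching or purging old ranges without huge deletes - and for queries that filter on the partitioning column; for everything else, good indexing matters more than raw row count.

CREATE PARTITION FUNCTION pf_Monthly (datetime)
AS RANGE RIGHT FOR VALUES ('20070101', '20070201', '20070301')

CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly ALL TO ([PRIMARY])

CREATE TABLE dbo.BigFacts
(
    FactID    bigint IDENTITY(1,1) NOT NULL,
    EventDate datetime NOT NULL,
    Amount    money NULL,
    CONSTRAINT PK_BigFacts PRIMARY KEY CLUSTERED (EventDate, FactID)
) ON ps_Monthly (EventDate)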

View 12 Replies View Related

PK On A Large Table

Nov 16, 2007

I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an auto-increment ID (int) that will run out of numbers. However, this ID is not referenced anywhere (it is not a foreign key from another table, it is not used for deletion, and there are no updates in this table whatsoever).

So my possibilities are:
1.- reseed the ID when it is about to run out.
2.- make the ID bigint
3.- remove the ID and change the PK to 2 other fields
4.- remove the ID and go without a PK

I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the ordinary, so I would like to hear from other people (I do not have much experience with DBs).

I also like option 3. I already have an index on one of the other fields (time).

Any input will be appreciated.

Claudio Robles
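A sketch of options 1 and 2 (table, column and constraint names are invented). Reseeding only makes sense because the old, lower ID values really are deleted; widening to bigint removes the problem permanently but rewrites the table and needs the PK dropped and re-created, so it can take a while on a big table.

-- Option 1: reseed once the old rows are gone.
DBCC CHECKIDENT ('dbo.TrafficLog', RESEED, 0)

-- Option 2: widen the identity column to bigint.
ALTER TABLE dbo.TrafficLog DROP CONSTRAINT PK_TrafficLog
ALTER TABLE dbo.TrafficLog ALTER COLUMN TrafficID bigint NOT NULL
ALTER TABLE dbo.TrafficLog ADD CONSTRAINT PK_TrafficLog PRIMARY KEY CLUSTERED (TrafficID)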

View 7 Replies View Related

BCP Large Table.

Jul 23, 2005

If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process.

Thank you, TFD.
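For what it's worth, a plain bcp export just runs SELECTs under ordinary shared locks, so it does not take the table exclusively; a long export can still slow concurrent writers and competes for I/O. A minimal example (server, database and path are placeholders):

bcp MyDb.dbo.BigTable out D:\export\BigTable.dat -n -T -S MYSERVER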

View 1 Replies View Related

Storing Large PDF's In Table

May 26, 2008

Hi,
I have this page that uploads PDFs to a table. In principle this works fine, until I try to upload large files (3 to 4 MB). I need to upload even larger files than that (I don't really know yet what users are going to come up with), and I get timeout problems. Now some people say it is not possible to exceed a limit of about 4 MB, but that there is a workaround by changing something in the web.config file. Can somebody give me info about that? (I am quite a novice really.) I tried to change it like this, but to no avail:
<system.web>
  <httpRuntime maxRequestLength="102400"
               enable="True"
               requestLengthDiskThreshold="102400"
               useFullyQualifiedRedirectUrl="True"
               executionTimeout="102400" />
</system.web>
Thanks for any help!

View 2 Replies View Related






