I have a number of complex search stored procedures that use the following syntax to try to simplify the code.
WHERE @SearchParam IS NULL OR SearchCol = @SearchParam
Unfortunately, it appears that this is really inefficient as far as the database is concerned.
If I run the following query on the AdventureWorks database (SQL Server 2005 with SP2 and fixes up to v3054)
SET STATISTICS IO ON
Declare @CustomerID int
SET @CustomerID = 1
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE @CustomerID IS NULL OR CustomerID = @CustomerID
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID
I see that the first select results in 45 logical reads, whereas the second results in only 2 logical reads.
Does anyone have any idea how I can get the benefit of a search procedure that does not have loads of IF blocks, or dynamic SQL without this major performance issue?
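The usual way out of the catch-all pattern without a forest of IF blocks is parameterized dynamic SQL: build only the predicates that actually apply and let sp_executesql cache one plan per predicate combination. A sketch against the same AdventureWorks query (sp_executesql happily accepts parameters the final statement does not reference):

-- Sketch: only add the predicate when the parameter is supplied
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE 1 = 1';

IF @CustomerID IS NOT NULL
    SET @sql = @sql + N' AND CustomerID = @CustomerID';

EXEC sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID;

With the parameter supplied, this should produce the same cheap seek as the hard-coded query, because the optimizer never sees the @CustomerID IS NULL branch.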
Hi,

I use SQL Server 2005 on my development machine, and whilst this machine isn't as powerful as the live server, it does at times seem a little slower than I would expect. So I've been wondering if there is any way for me to tune the machine so that SQL Server is better able to make use of the resources? I am using Windows Server 2003.

Short of tweaking the actual database, which is still under development, does anyone have any tips to increase performance?

Thanks,

--
Dylan Parry
http://electricfreedom.org | http://webpageworkshop.co.uk

Programming, n: A pastime similar to banging one's head
against a wall, but with fewer opportunities for reward.
I've written a small application that uses a Microsoft SQL Server 2000 database, and lately I've been experiencing a lot of performance problems with timeouts and data just taking too long to retrieve. I was wondering what I could do to help with this.

Details: My program takes Windows event logs and uploads them into a central database. I have also built a small ASP.NET page where users can write queries or save them for later use. The program hits our servers, grabs the logs remotely, inserts them into SQL one by one, and clears out the logs when finished. It runs once a day and generates about 12,000 events daily (mostly logins and logoffs). Once it starts, it takes about two and a half hours to finish going through everything.
Database design: a single table, about 3.5 million rows now, taking up about 3 GB of hard drive space.
Column definitions (Name, Data Type):

EventEntryID: INT, Primary Key (Clustered), Identity (auto-increment)
Log: varchar(300)
Type: varchar(300)
Date: smalldatetime
Source: varchar(300)
Category: varchar(300)
Message: varchar(8000)
EventID: varchar(300)
UserName: varchar(300)
Computer: varchar(300)

I know the database design could probably use some work, but it's hard to do so now because of the massive amounts of data; changing it would kill the transaction log.
I know it's a small server, but this is pretty much the largest thing this server deals with
I've tried to monitor SQL with Perfmon and the SQL performance counters, and I just don't see much of anything going wrong there, so I'm thinking the hardware isn't the issue; but then again, I'm not very familiar with SQL.
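One thing that stands out from the description: with only the clustered identity key, every date-range or per-computer query from the ASP.NET page has to scan all 3.5 million rows. A supporting index along these lines might be worth testing (the table name is assumed; the columns are from the definitions above):

-- Hypothetical supporting index for date/computer filtering
CREATE NONCLUSTERED INDEX IX_EventLog_Date_Computer
    ON dbo.EventLog ([Date], Computer);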
Hello everyone. Does anyone know some tips or articles that describe how SQL Server 2000 can be improved? I have an application which displays newspapers; the text of the newspapers is saved in the DB. Performance is OK, but I have performance problems when I do a full-text search.

1) Hardware: P4 Xeon dual core, 2.4 GHz, 2 GB RAM
2) Number of articles in the database: ~255,000 records (between 10 and 400 words each)
3) The database is indexed (I am searching with CONTAINS, not with the LIKE keyword)
4) In my query I am searching in 3 columns (title, leadtext, content)
5) Results (searching for one word, stopping after 150 results): the time to get a result from the database lies between 5 and 10 seconds (average: 6.77 sec). This is the time just for getting the dataset from SQL Server; manipulating the dataset and binding it to my repeater takes about 15 milliseconds. Using a SqlDataReader didn't improve performance; I got the same results.

In the future the number of records will be three to five times higher than now, and I am afraid the user will then have to wait half a minute to get 150 search hits. My questions: Are you seeing similar results in your applications? Are these times acceptable? The SQL Server 2000 (Professional) is installed in its default way with the default settings. Are there possibilities to modify these settings to improve performance? Where can I find info about that?
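For reference, the query described in points 3 to 5 would look something like this sketch (table and column names assumed; on SQL Server 2000, CONTAINS(*) searches all columns in the full-text index):

DECLARE @searchWord nvarchar(100)
SET @searchWord = N'example'  -- one-word search term

SELECT TOP 150 ArticleID, Title
FROM Articles
WHERE CONTAINS(*, @searchWord)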
I'm facing a performance issue with SELECT COUNT(*) when performing pagination of my results. My actual query joins 4 tables with proper PK and FK constraints and indices applied.
select top 100 *
from table1
left outer join table2 on tid = rid
left outer join table3 on tid = aid
inner join shipment on sid = sid
where datetime >= convert(datetime, '20070810 00:00:00')
  and datetime <= convert(datetime, '20070920 23:59:59')
The query above takes 200 ms to 600 ms to execute.
select count(*)
from table1
left outer join table2 on tid = rid
left outer join table3 on tid = aid
inner join shipment on sid = sid
where datetime >= convert(datetime, '20070810 00:00:00')
  and datetime <= convert(datetime, '20070920 23:59:59')
The query above takes 2000ms to 4000 ms to run.
The total number of records that fulfill the WHERE clause is approximately 5,000 to 6,000. I checked the execution plan and found that the server is utilizing the indices, with operations like index scans and index seeks.
Is there anything else I can do to improve the counting of total records?
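One option worth testing, sketched under the assumption that table2 and table3 are at most one-to-one with table1 (so the outer joins cannot change the row count): count from the inner-join core only, and use a half-open date range so rows between 23:59:59 and midnight aren't silently dropped. The table-qualified join condition is an assumption, since "sid = sid" in the original is ambiguous:

SELECT COUNT(*)
FROM table1 AS t
INNER JOIN shipment AS s ON s.sid = t.sid
WHERE t.[datetime] >= '20070810'
  AND t.[datetime] <  '20070921'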
I have a varchar(max) field that is on its way to a Term Lookup task, so it requires conversion to DT_NTEXT before it gets there. The conversion is taking a really, really, really long time. In my current example I have about 400 rows. Granted, the maximum length of the varchar(max) column (from 'select max(len(myColumn))') is approximately 14,000,000, so I understand that I'm moving a lot of data. But I'm on an x64 machine with 4 dual-core processors, and watching Task Manager I can see that I'm only using 1 core out of 8. Is there anything I'm missing that could speed this up?
We have a table that is 800 GB. We are planning to rebuild the clustered index on this table onto a different filegroup. The new filegroup, and the files associated with it, will sit on a SAN with a 1.5 TB allocation. Does anyone have any suggestions about how many files to associate with the filegroup for optimal performance? Apparently we could have 3 LUNs (500 GB each), so would one file on each LUN provide additional performance as opposed to a single file on one LUN?
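One file per LUN is a common starting point, since it lets SQL Server issue I/O against all three LUNs in parallel. A sketch of the layout, with every name and size assumed:

-- Hypothetical: one data file per 500 GB LUN in the new filegroup
ALTER DATABASE BigDb ADD FILEGROUP FG_BigTable;

ALTER DATABASE BigDb ADD FILE
    (NAME = N'FG_BigTable_1', FILENAME = N'E:\Data\FG_BigTable_1.ndf', SIZE = 400GB)
    TO FILEGROUP FG_BigTable;
ALTER DATABASE BigDb ADD FILE
    (NAME = N'FG_BigTable_2', FILENAME = N'F:\Data\FG_BigTable_2.ndf', SIZE = 400GB)
    TO FILEGROUP FG_BigTable;
ALTER DATABASE BigDb ADD FILE
    (NAME = N'FG_BigTable_3', FILENAME = N'G:\Data\FG_BigTable_3.ndf', SIZE = 400GB)
    TO FILEGROUP FG_BigTable;

-- Moving the table is then a rebuild of the clustered index onto the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_BigTable;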
In my DB I have hundreds of queries set up for all different reporting purposes.

We have just changed the way our system handles costing and are moving from an average-cost function to a standard-cost function. This, in turn, means that I need to find every report where I use the average cost field and replace it with the standard cost field.

Is it possible to do a full search and replace over all my queries, looking for a specific field name and replacing it with something else?

I am dreading the idea of opening each individual query and checking whether the field exists.

If anyone knows of software, or some way to write a stored procedure to accomplish this, please let me know. I would be most grateful.
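If the queries live in SQL Server as views or stored procedures (an assumption; saved Access queries would need a VBA loop over the QueryDefs collection instead), their definitions can at least be searched from the system catalog. A sketch for SQL Server 2000, with a placeholder field name:

-- Find every object whose definition mentions the old field (field name is hypothetical).
-- Caveat: definitions over 4000 characters are split across syscomments rows,
-- so a name could in rare cases straddle two rows and be missed.
SELECT DISTINCT OBJECT_NAME(id) AS ObjectName
FROM syscomments
WHERE [text] LIKE '%AvgCost%'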
I am updating a DB with data from a file. This data contains new info, info that has been updated, and info that is to be removed from the DB. I was wondering which approach results in better performance / shorter execution time:

1. First update existing values, then insert new ones, and finally delete cancelled data

or

2. Delete cancelled data and data that will be updated, then insert the new and updated info

I get all this data from a file. In that file all rows have the same layout, and one column defines whether the data is new, updated, or to be deleted (thus the updates also include the values for each entry that have not been altered).
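Whichever order wins, the comparison is easiest to set up if the file is bulk-loaded into a staging table first, so each step is one set-based statement. A sketch of option 1 under that assumption (table, column, and flag names are all hypothetical):

-- 1. Update existing rows
UPDATE t
SET    t.Payload = s.Payload
FROM   dbo.Target AS t
INNER JOIN dbo.Staging AS s ON s.KeyCol = t.KeyCol
WHERE  s.ActionFlag = 'U';

-- 2. Insert new rows
INSERT INTO dbo.Target (KeyCol, Payload)
SELECT KeyCol, Payload
FROM   dbo.Staging
WHERE  ActionFlag = 'N';

-- 3. Delete cancelled rows
DELETE t
FROM   dbo.Target AS t
INNER JOIN dbo.Staging AS s ON s.KeyCol = t.KeyCol
WHERE  s.ActionFlag = 'D';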
Does anyone know how to improve the performance of INSERT statements? I have to run a batch of several thousand INSERT statements, but it just takes too long. Does anyone know of any good tips to improve performance?
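A frequent culprit: each standalone INSERT commits its own implicit transaction, forcing a log flush per statement. Wrapping the statements in explicit transactions of a few thousand rows is often the biggest single win; a sketch with hypothetical names:

BEGIN TRANSACTION;
INSERT INTO dbo.Target (Col1, Col2) VALUES (1, 'a');
INSERT INTO dbo.Target (Col1, Col2) VALUES (2, 'b');
-- ...rest of the batch...
COMMIT TRANSACTION;

Where the rows already exist in another table, a single INSERT ... SELECT will usually beat any number of row-by-row statements.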
Hey there :)

Sorry if I'm asking a dumb question here, but I'm still quite new to MS SQL, so this problem might appear larger to me than it really is. I'm trying to create a performance test environment for a Ruby on Rails and Mongrel setup with an MS SQL Server 2000.

The adapter, mssqlclient, uses some kind of "conversion" for Unicode; here's a quote from the homepage: "Automatically translate from proper UTF-16LE nvarchar fields in the database to UTF-8 Ruby Strings you can display in your application."

As far as the local DB designer knows, we're not using UTF-16LE nvarchar fields, unless it's something that happens implicitly. Either way, this is how a query from the mssqlclient adapter might look:

SELECT TOP 1 * FROM Item WHERE (Item.Itemnumber = N'45783745')

Response time the first couple of times was upwards of 20+ seconds; after the SQL Server has "awoken from its slumber", it's roughly 4 seconds. Omitting the "N" from the WHERE clause, response time is in milliseconds (as one would expect, regardless of the fact that there are currently >2.5 million items in the table).

Any tips on how to resolve this? Is the SQL statement bad, or is it a question of configuring SQL Server correctly?

Thanks in advance for any help,
Daniel Buus :)
--
http://www.rhesusb.dk
We have an issue where a cube hasn't been designed properly: when someone queries it with Excel, it does a mega-crossjoin. When anyone else then tries to do *anything* on the AS server (connect with Management Studio, etc.), it just hangs. We have to either track down the person running the query (via the flight recorder) or restart the service. Obviously the correct fix is to change the design of the cube, and I plan on doing that ASAP. But it brings up an important question: is there a setting I can change to allow others to use the box while this is going on? Maybe some thread isolation, or parallelism setting? I'm just throwing out ideas, as I haven't experienced this part of AS administration yet.
I have a table of raw data where each column can be null. The thought was to create an identity key (1,1) and set it as the primary key for each row (name / address / zip / country / joindate / spending), with surrogate key "pkid".

However, other queries will not use this primary key. For instance, they may count the number of people in a zip code, or select all names and addresses; the queries may order by join date, or select all the people that joined on a specific date. No other code would logically use the surrogate primary key, so would it still have any performance benefits? At this time the table would have no other clustered or nonclustered indexes or keys. I'm curious whether it matters if there are millions of records.
Hello everyone, I am hoping someone can help me with this problem. I will say up front that I am not a SQL Server DBA, I am a developer. I have an application that sends about 25 simultaneous queries to a SQL Server 2000 Standard Edition SP4 instance running on Windows 2000 Server with 2.5 GB of memory. About 11 of these queries are over views (all over the same table); these queries are all issued from JDBC, but I am not sure that matters.

Anyway, initially I had no problem with these queries on the tables and the views with about 4 years of information (I don't know how many rows offhand). Then we changed the tables to replicated tables from another server, which increased the amount of data to 15 years' worth and also required a simple inner join on 2 columns to another table for those views.

Now here is the issue. After times of inactivity, or at other times during the day with enough time between my test query runs, I get what looks like blocking behavior on the queries to the views (remember, these all go to the same tables). I run my 25 queries, and the 11 view queries all take about 120 seconds each to return (they all finish within milliseconds of each other, as if they all sat there and then were released for processing at the same time). The rest of the queries are fine. If I turn around and immediately run the 25 queries again, they all come back in a few seconds, which is the normal amount of time. Also, if I run a query on just one of the views first and then run the 25 queries, they all come back in a few seconds as well.

This tells me that some caching must be involved, since the times are so different between identical queries; but I would expect one of the queries to warm the cache and thus take longer while the other 10 came back fast, not all of them blocking for 2 minutes. What is more puzzling is that this behavior didn't occur before, and the only differences now are:

1) 3 times more data (but that shouldn't cause a difference from 3 seconds to 120, and all tables have been through the Index Tuning Wizard with a SQL trace file to recommend indexes)
2) There is now a join between 2 tables where there wasn't before
3) The tables are replicated throughout the day

I would appreciate any insight into this problem, as 120 seconds is way too long to wait. Thanks in advance.

Chris
I have a stored proc which searches across a few tables using an inner join. When I enter a new search string, the proc takes some time to return results, whereas when I run it again a few times it returns immediately. I know this is due to the proc/plan cache, but is there any way to sort this out?

Another problem is that if I enter a string which is not in the database at all, it takes a very long time, but relatively less time if the string does exist in the database.
I am facing some performance issues in a stored procedure. The procedure needs to return a result set based on some search criteria; there are around 20 possible search criteria. Below is the SQL query used in my stored procedure. Any help to optimize the search would be great:
--get LOV details in table variables
INSERT INTO @tblLov (LovCode, LovDesc, ParamCode)
SELECT LovCode, LovDesc, ParamCode
FROM tp_Lov WITH (NOLOCK)
WHERE ParamCode IN ('FileSrc', 'CommTrailInd', 'CommTxnStatus', 'AgencyPrincipalInd', 'ProdSubType', 'AuditTransStatus')
--get commission transactions according to the search criteria
INSERT INTO @tblSearchResults
SELECT l1.LovDesc AS TransSource,
       l2.LOVDesc AS CommTrailInd,
       r.RemitCode AS RemitNumber,
       t.IntTransId AS TransNumber,
       CONVERT(VARCHAR, t.TrdDt, 110) AS TradeDate,
       CONVERT(VARCHAR, t.SettlementDt, 110) AS SettlementDate,
       rp.RepCode,
       (ISNULL(rp.LstNm,'') + ', ' + ISNULL(rp.FstNm,'')) AS RepName,
       (CASE WHEN ISNULL(t.IntClntId,0) = 0
             THEN ISNULL(t.ClntShortNM, '') +
                  (CASE WHEN (t.TransSrc = 'NSM' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA')) AND ISNULL(t.FundAcctNum,'') <> ''
                        THEN ' - ' + ISNULL(t.FundAcctNum,'')
                        WHEN (t.TransSrc = 'NSV' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA')) AND ISNULL(t.PolicyNum,'') <> ''
                        THEN ' - ' + ISNULL(t.PolicyNum,'')
                        WHEN t.TransSrc IN ('PSH','MSR') AND ISNULL(t.ClrHouseAcctNum,'') <> ''
                        THEN ' - ' + ISNULL(t.ClrHouseAcctNum,'')
                        ELSE '' END)
             ELSE dev.udf_COMM_PCD_GetClientName(t.IntClntId, t.IntTransId) END) AS Client,
       (CASE WHEN ISNULL(t.CUSIP,'') = '' THEN t.ProdName ELSE p.ProdNm END) AS [Product],
       t.InvAmt AS InvestmentAmt,
       t.GDC AS GDC,
       t.ClrChrg AS ClearingCharge,
       t.NetComm AS NetCommission,
       (CASE WHEN t.Status IN (@strLov_TxnStatus_Tobepaid, @strLov_TxnStatus_Paid)
             THEN dev.udf_COMM_PCD_GetPayoutRateString(t.IntTransId) ELSE '' END) AS PayoutRate,
       (CASE WHEN t.Status IN (@strLov_TxnStatus_Tobepaid, @strLov_TxnStatus_Paid)
             THEN dev.udf_COMM_PCD_GetPayoutAmountString(t.IntTransId) ELSE '' END) AS Payout,
       l3.LOVDesc AS TransStatus,
       t.Comments,
       t.OrderMarkup AS BDMarkup,
       t.IntTransId, rp.IntRepId, sch.SchCode, t.IntClntId, t.CUSIP,
       t.RepIdValue AS RepAlias,
       t.RepIdType, t.SplitInd,
       l4.LOVDesc AS AgencyPrincipalInd,
       t.AgencyPrincipalFee, t.EmployeeTradeInd, t.ShareMarkup, t.UnitsTraded,
       s.SponsorNm,
       CASE WHEN t.TransSrc = 'NSM' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA') THEN ISNULL(t.FundAcctNum,'') --Production Defect #873 & 877
            WHEN t.TransSrc = 'NSV' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA') THEN ISNULL(t.PolicyNum,'')
            ELSE t.ClrHouseAcctNum END,
       CASE WHEN ISNULL(t.ProdSubType,'') IN ('', 'Z') THEN 'Not Defined' ELSE l6.LovDesc END AS ProdSubType, --t.ProdSubType
       l5.LOVDesc AS TransAuditStatus, --t.TransAuditStatus
       t.TransAuditStatus AS TransAuditStatusCode,
       t.OriginalTransId, t.RowId, t.Status, t.intParentTransId, t.CancelTrdInd, t.ClrChrgOverrideInd,
       9999 AS AuditKey
FROM tr_CommTrans t WITH (NOLOCK)
INNER JOIN @tblLov l1 ON t.TransSrc = l1.LOVCode AND l1.ParamCode = 'FileSrc'
INNER JOIN @tblLov l2 ON t.CommTrailInd = l2.LOVCode AND l2.ParamCode = 'CommTrailInd'
INNER JOIN @tblLov l3 ON t.Status = l3.LOVCode AND l3.ParamCode = 'CommTxnStatus'
INNER JOIN td_Remit r WITH (NOLOCK) ON t.IntRemitId = r.IntRemitId
LEFT OUTER JOIN @tblLov l4 ON t.AgencyPrincipalInd = l4.LOVCode AND l4.ParamCode = 'AgencyPrincipalInd'
LEFT OUTER JOIN @tblLov l5 ON t.TransAuditStatus = l5.LOVCode AND l5.ParamCode = 'AuditTransStatus'
LEFT OUTER JOIN @tblLov l6 ON t.ProdSubType = l6.LOVCode AND l6.ParamCode = 'ProdSubType'
LEFT OUTER JOIN tm_BDProd p WITH (NOLOCK) ON t.CUSIP = p.CUSIP
LEFT OUTER JOIN tm_BDSponsors s WITH (NOLOCK) ON t.IntBDSponsorId = s.IntBDSponsorId
LEFT OUTER JOIN tm_Reps rp WITH (NOLOCK) ON t.IntRepId = rp.IntRepId
LEFT OUTER JOIN tm_PayoutSch sch WITH (NOLOCK) ON t.IntSchId = sch.IntSchId
WHERE t.IntTransId = (CASE WHEN @intTransId IS NULL THEN t.intTransId ELSE @intTransId END)
  AND t.TransSrc = @strTransSrc
  AND r.RemitCode = (CASE WHEN ISNULL(@strRemitCode,'') = '' THEN r.RemitCode ELSE @strRemitCode END)
  AND ISNULL(t.SettlementDt,'01-01-1900') BETWEEN @dtmFromSettlementDt AND @dtmToSettlementDt
  AND ISNULL(t.TrdDt,'01-01-1900') BETWEEN @dtmFromTradeDt AND @dtmToTradeDt
  AND t.CommTrailInd = (CASE WHEN @chrShowTrails = 'Y' THEN t.CommTrailInd ELSE 'C' END)
  AND t.Status = (CASE WHEN ISNULL(@strStatus,'') = '' THEN t.Status ELSE @strStatus END)
  AND ISNULL(t.ClrHouseAcctNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.ClrHouseAcctNum,'')
                                              WHEN (@strTransSrc = 'PSH' OR @strTransSrc = 'MSR' OR @strTransSrc = 'MSA') THEN @strAccountId
                                              ELSE ISNULL(t.ClrHouseAcctNum,'') END)
  AND ISNULL(t.FundAcctNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.FundAcctNum,'')
                                          WHEN @strTransSrc = 'NSM' THEN @strAccountId
                                          WHEN @strTransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA' THEN @strAccountId
                                          ELSE ISNULL(t.FundAcctNum,'') END)
  AND ISNULL(t.PolicyNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.PolicyNum,'')
                                        WHEN @strTransSrc = 'NSV' THEN @strAccountId
                                        WHEN @strTransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA' THEN @strAccountId
                                        ELSE ISNULL(t.PolicyNum,'') END)
  AND ISNULL(t.IntBDSponsorId,-1) = (CASE WHEN @intSponsorId IS NULL THEN ISNULL(t.IntBDSponsorId,-1) ELSE @intSponsorId END)
  AND ISNULL(t.ProdType,'') = (CASE WHEN ISNULL(@strProdType,'') = '' THEN ISNULL(t.ProdType,'') ELSE @strProdType END)
  AND ISNULL(t.ProdSubType,'') = (CASE WHEN ISNULL(@strProdSubType,'') = '' THEN ISNULL(t.ProdSubType,'') ELSE @strProdSubType END)
  AND ISNULL(t.CUSIP,'') = (CASE WHEN ISNULL(@strCUSIP,'') = '' THEN ISNULL(t.CUSIP,'') ELSE @strCUSIP END)
  AND ISNULL(rp.SSN, 0) = (CASE WHEN @numRepSSN IS NULL THEN ISNULL(rp.SSN, 0) ELSE @numRepSSN END)
  AND ISNULL(rp.RepCode,'') = (CASE WHEN ISNULL(@strRepCode,'') = '' THEN ISNULL(rp.RepCode,'') ELSE @strRepCode END)
  AND ISNULL(rp.LstNm, '') = (CASE WHEN ISNULL(@strRepLstNm,'') = '' THEN ISNULL(rp.LstNm,'') ELSE @strRepLstNm END)
  AND ISNULL(rp.FstNm, '') = (CASE WHEN ISNULL(@strRepFstNm,'') = '' THEN ISNULL(rp.FstNm,'') ELSE @strRepFstNm END)
  AND ISNULL(rp.RepStatus,'') <> (CASE WHEN @chrIncludeTerminated = 'Y' THEN 'Z' ELSE 'T' END)
  AND ISNULL(t.IntClntId,-1) = (CASE WHEN @intClientId IS NULL THEN ISNULL(t.IntClntId,-1) ELSE @intClientId END)
  AND ( (@chrAuditReportFlag = 'N' AND t.Status NOT IN (@strLov_TxnStatus_Loaded, @strLov_TxnStatus_Cancelled)
         AND ISNULL(TransAuditStatus, @strLov_TransAuditStatus_Active) = @strLov_TransAuditStatus_Active)
     OR (@chrAuditReportFlag = 'Y' AND t.Status NOT IN (@strLov_TxnStatus_Loaded) --DefectID# 880,895
I changed from Access 97 to Access XP and I have immense performance problems.

Details:
- Access XP MDB with Jet 4.0 (no ADP project)
- Linked tables to SQL Server 2000 over ODBC

I used the SQL Profiler to watch the T-SQL commands that Access creates (who creates the commands?) and noticed:

1) Some Jet SQL commands with JOINs and WHERE clauses are translated very well, using sp_prepexec and sp_execute, with SQL similar to the Jet statement.
2) Other Jet SQL commands with JOINs and WHERE clauses are translated very badly, because the join wasn't sent as a join: Access collects the data of the individual tables separately. Access sends far too much data over the network; it is a disaster!
3) In Access 97 the same command was interpreted well.

Could it be that Access uses the wrong protocol stack, perhaps Jet to OLEDB, OLEDB to ODBC, ODBC to SQL Server, or Jet to ODBC, ODBC to OLEDB, OLEDB to SQL Server, instead of Jet to ODBC and ODBC directly to SQL Server?

Does anyone know anything about:
- the command interpreter of Jet/ODBC, its parameters, and how to influence it
- the protocol stack of a Jet 4.0 / ODBC / SQL Server application

Thanks, Andreas
I am about to heavily index a table and have to include at least 3 to 4 columns in the full-text index for this table. The table is updated very frequently, and the columns involved in the full-text indexing also undergo frequent updates. As of now I can't avoid using full-text indexing, as these columns are very lengthy and basically contain text. The users of the database will give some keywords as the search criteria to find the information they are looking for. How frequently should I update my full-text catalog? This is a scenario where full-text operates on various tables, each of which might contain around 300,000 to 800,000 rows. I would appreciate an intelligent suggestion, as I need it as soon as possible.
I have a table with 3M rows that contains a varchar(2000) field with various keywords. Here is the table structure:

ImageID (PK column)
FullTextColumn

There is an association table:

ImageID
ContractID

Now, I want to do a query where ContractID = x and FullTextColumn contains some word. Since the association table maps images to contracts, I can't use the trick of putting the contract code in the FullTextColumn.

I'm finding that the FTS service first performs a search on the keyword (which can take a long time if 100K rows are returned) and then joins to the association table for the particular contract. Is there any way to make this faster by telling the FTS service to search only the subset of rows belonging to the contract? Sorry if this sounds convoluted. Appreciate any help you can suggest. Thanks!
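One pattern worth trying is CONTAINSTABLE, which exposes the full-text result as a rowset the optimizer can join; its top_n_by_rank argument can also cap how many keyword hits come back. This doesn't push the contract filter into the full-text search itself, but it keeps everything in one plan. A sketch with assumed table names:

-- 'Images' and 'ImageContract' are hypothetical names for the two tables described
SELECT ic.ImageID
FROM CONTAINSTABLE(Images, FullTextColumn, 'keyword') AS ft
INNER JOIN ImageContract AS ic ON ic.ImageID = ft.[KEY]
WHERE ic.ContractID = @ContractID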
SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE (T1.F1 > @iNum AND T2.F1 > @iNum)
   OR ( @iNum2 * (T1.F1 - T2.F1) / (T1.F2 - T2.F2) )
    + ( T1.F1 - ((T1.F1 - T2.F1) / (T1.F2 - T2.F2) * T1.F2) ) > @iNum
As you can see, the second part of the WHERE (after the OR) is much more complicated than the part before it. My query would run a lot faster if it evaluated the first part of the OR and didn't bother with the second part when the first part was already satisfied. Is there any way to do this?
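T-SQL does not guarantee left-to-right short-circuiting of OR, but the effect can be approximated by splitting the query in two, so the expensive arithmetic only runs on rows that failed the cheap test. A sketch (the NOT(...) guard keeps the branches disjoint, assuming F1 and F2 are non-NULL):

SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE T1.F1 > @iNum AND T2.F1 > @iNum

UNION ALL

SELECT T1.F3
FROM T1
INNER JOIN T2 ON T1.F4 = T2.F4
WHERE NOT (T1.F1 > @iNum AND T2.F1 > @iNum)  -- only rows the cheap test rejected
  AND ( @iNum2 * (T1.F1 - T2.F1) / (T1.F2 - T2.F2) )
    + ( T1.F1 - ((T1.F1 - T2.F1) / (T1.F2 - T2.F2) * T1.F2) ) > @iNum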
Does anyone know how to perform a case-insensitive search on the XML data type in SQL Server 2005? Or do I have to convert all the XML data to lower case before I store it? Thanks in advance. John
My colleague and I are having some difficulties regarding the speed of a LIKE query on our intranet system. Please see the quote below, as posted on another SQL forum (to which no one has responded yet).
Quote: Howdy,
I've taken control of an SQL Database with an ASP front end. The CPU usage of the SQL Server process periodically jumps up to between 75-99%. I've managed to identify the ASP page that is causing it and it's a search page.
It's basically doing something like this:
SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CustomerName LIKE '%query%'
   OR CustomerAddress LIKE '%query%'
   OR CustomerPostCode LIKE '%query%'
Where query is the value the user has searched on.
The CustomerName and CustomerPostCode fields are varchar fields, but the CustomerAddress field is a text field.
I know that if I replace the CustomerAddress text field with a series of varchar fields, I could then cover them with nonclustered indexes; but would this actually help speed up a LIKE query?
There's a lot of work required on the ASP side if I'm going to split the field out, so I only want to do it if I'm gonna get a significant benefit from doing it.
Failing this, is there a better way of performing a search like this?
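One thing to weigh before splitting the field: a leading-wildcard LIKE ('%query%') can never seek an index, so new varchar columns with nonclustered indexes would still be scanned. Full-text indexing is the usual alternative; a sketch, assuming a full-text index covering the three columns (note the semantics differ from '%query%': CONTAINS matches words and prefixes, not arbitrary substrings):

SELECT JobNumber, CustomerName, CustomerAddress, CustomerPostCode
FROM Customers
WHERE CONTAINS(*, @query)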
I am presently tasked with improving the performance of a chatty application. What I mean by a chatty application is that the application makes multiple calls to the database server for each user request. In many cases it appears that many of the windows are making multiple calls to the database for the population of dialog boxes. In a couple of cases there are more than 10 dialog boxes that are populated similar to this.
To me this is kind of an "old issue", but my response is (1) cache the dialog information as much as possible and (2) make one call to the database to return all of this data for the dialog boxes if it is not cached. For update calls again make a single call to the database rather than making a call for each individual row of a table that is updated.
I admit that some of this will complicate the code of the application; however, my perspective is that I am supposed to look at this from the database's point of view and not from the perspective of "programmer convenience." There will be times when something that would otherwise get too hairy for a programmer will dictate an additional trip to the server, but this should not normally be the case.
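To make point (2) concrete: a single stored procedure can return one resultset per dialog box in a single round trip, which the client then walks with NextResult. A minimal sketch, all object names hypothetical:

CREATE PROCEDURE dbo.GetDialogLookups
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CountryID, CountryName FROM dbo.Countries;      -- dialog 1
    SELECT CurrencyID, CurrencyCode FROM dbo.Currencies;   -- dialog 2
    SELECT StatusID, StatusName FROM dbo.OrderStatuses;    -- dialog 3
END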
Does anyone know of any good components I can buy in to sit on top of my SQL Server to provide the kind of search functionality that Google has: 'Did you mean...', nearest match, etc.? I have set up SQL Server Full-Text Search, but building all the extra functionality will take some time, and I think it will be more cost effective to buy a component in. If anyone has used any, could you let me know of your experiences please? Cheers, Scott
Is there a configuration option or a trick to improve the speed of access to the inserted and deleted tables within a trigger? Whenever a trigger is called, access to inserted or deleted constitutes approximately 95% of the execution time.

Is there a way to improve access to inserted and deleted other than copying the data to another table?
Hi. I'm short on SQL experience and hacking my way through creating a simple search feature for a personal project. I would be very grateful if anyone could help me out with writing a stored procedure.

Problem: I have two tables with three columns indexed for full-text search. So far I have been able to successfully execute the following query, returning matching row IDs:

dbo.Search_Articles @searchText varchar(150)
AS
SELECT ArticleID FROM articles
WHERE CONTAINS(Description, @searchText) OR CONTAINS(Title, @searchText)
UNION
SELECT ArticleID FROM article_pages
WHERE CONTAINS(Text, @searchText);
RETURN

This returns the ArticleID for any articles or article_pages records where there is a text match. I ultimately need the stored procedure to return all columns from the articles table for matches, not just the ArticleID. It seems like maybe I should try some kind of JOIN between the result of the UNION above and the articles table? But I have so far been unable to figure out how to do this, as I can't seem to declare a name for the result table of the UNION. Perhaps there is a more elegant solution? Thanks! Peter
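Since the UNION can be used directly as a subquery, the articles columns can be pulled back without naming the intermediate result. A sketch built from the query above:

SELECT a.*
FROM articles AS a
WHERE a.ArticleID IN
(
    SELECT ArticleID FROM articles
    WHERE CONTAINS(Description, @searchText) OR CONTAINS(Title, @searchText)
    UNION
    SELECT ArticleID FROM article_pages
    WHERE CONTAINS(Text, @searchText)
)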
Our clients want to be able to do a full-text search with a single letter (is the name Newton, Nathan, Nick? Is the ID N1, N2...?). Doing a single-character full-text search on a table works 25 out of 26 times. The letter that doesn't work is 'n': the WHERE clause CONTAINS(full_text_field, ' "n*" ') returns all rows, even rows that have no 'n' in them anywhere. Adding a second letter after the "n" works as expected.

Here is an example:
create table TestFullTextSearch (
    Id int not null,
    AllText nvarchar(400)
)
create unique index test_tfts on TestFullTextSearch(Id);
create fulltext catalog ftcat_tfts;
-- the full-text index itself was presumably created along these lines:
create fulltext index on TestFullTextSearch(AllText)
    key index test_tfts on ftcat_tfts;
I have a scenario where standard full-text search identifies keywords but Semantic Search does not recognize them as keywords. I'm hoping to understand why Semantic Search might not recognize them. The context this is being used in is medical terminology, and the specific keywords I noticed missing right off the bat were medications.
For instance, if I put the following strings into an FT-indexed table:

'J9355 - Trastuzumab (Herceptin)' and 'J9355 - Trastuzumab emtansine'

Semantic Search recognizes 'Herceptin' and 'emtansine' but not 'Trastuzumab'.
Nor, in

'J8999 - Everolimus (Afinitor)'

does it recognize 'Afinitor' as a keyword.
In all cases the full-text base did find those keywords, and they were identifiable using the DMV sys.dm_fts_index_keywords_by_document. It does show the index as having completed.
Why might certain words not be picked up while others are? Could it be a language/dictionary issue? I am using English and accent-insensitive settings.
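For anyone reproducing this, the extracted key phrases can be inspected directly with SEMANTICKEYPHRASETABLE (SQL Server 2012+, with the semantic language statistics database installed). A sketch with hypothetical table and column names:

-- List the key phrases Semantic Search extracted, highest score first
SELECT TOP (50) skt.keyphrase, skt.score, skt.document_key
FROM SEMANTICKEYPHRASETABLE(dbo.MedicalNotes, NoteText) AS skt
ORDER BY skt.score DESC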
Would you use the SQL Server "full text search" feature as your site index? For some reason I can't make Index Server my site's search catalog, and I wonder if full-text search is the solution. I think I would have to create a new table called something like "site text", and I would need to write every text twice: once to its own table (let's say the articles table) and once to the search text table. Otherwise there are problems finding the right URL for the text, searching different tables with different column names, and so on. So I thought I'd create a site search table with the columns id, text, and url, and write everything to this table. But somehow it looks like the wrong way to do it, since every forum post, article, album picture or joke would be inserted twice into the SQL Server... What do you think?
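For what it's worth, the single search table described would be straightforward to set up; a sketch with assumed names and types (CREATE FULLTEXT CATALOG is SQL Server 2005 syntax; SQL 2000 uses sp_fulltext_catalog instead):

CREATE TABLE dbo.SiteText
(
    id     int IDENTITY(1,1) NOT NULL CONSTRAINT PK_SiteText PRIMARY KEY,
    [text] nvarchar(max)     NOT NULL,
    url    nvarchar(400)     NOT NULL
);

CREATE FULLTEXT CATALOG ftcat_site;
CREATE FULLTEXT INDEX ON dbo.SiteText ([text])
    KEY INDEX PK_SiteText ON ftcat_site;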
I have installed the Adobe iFilter 11 64-bit and set the path to its bin folder. I still cannot find any text from the PDF files. I suspect I am missing something trivial, because I don't find much when I Bing for this, so it must not be a common problem. Here is the code.
--Adobe iFilter 11 64 bit is installed
--The Path variable is set to the bin folder for the Adobe iFilter.
--SQL Server Developer edition, 64 bit, on both Windows 7 and Windows 8.
USE master;
GO
DROP DATABASE FileTableStudy;
GO
CREATE DATABASE FileTableStudy
ON PRIMARY
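Two checks that often matter for third-party filters (hedged; these are the usual iFilter troubleshooting steps rather than a guaranteed fix): let SQL Server load OS-level filters, and confirm the .pdf type is actually registered.

-- Allow SQL Server to pick up filters installed at the OS level
EXEC sp_fulltext_service 'load_os_resources', 1;
EXEC sp_fulltext_service 'verify_signature', 0;  -- often needed for unsigned third-party filters
-- A service restart is required before these take effect.

-- Verify the PDF filter is visible to SQL Server
SELECT document_type, path, manufacturer
FROM sys.fulltext_document_types
WHERE document_type = '.pdf';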