Indexing SQL CE To Make Search Faster. How?
Jan 11, 2006
Dear VB coders!
I have 8,000 or 16,000 products in my list in an SQL table. It takes 8 or 16 seconds to find the last item.
How could I make the search faster? Indexing? But how?
I created:
createTable1.CommandText = "Create TABLE products (ID int IDENTITY(0,1) PRIMARY KEY, name ntext)"
and I use a simple WHERE clause to search.
Thank you!
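A minimal sketch of one possible fix, assuming the search is on the name column (the index name and length limit are made up for illustration): an ntext column cannot be covered by an ordinary index, so switching it to nvarchar and indexing it lets the WHERE clause seek instead of scanning all 8,000-16,000 rows.

Create TABLE products (ID int IDENTITY(0,1) PRIMARY KEY, name nvarchar(450))
-- index the column used in the WHERE clause so lookups become seeks instead of scans
CREATE INDEX IX_products_name ON products (name)
-- a search like this can now use the index:
SELECT ID, name FROM products WHERE name = N'some product'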
View 3 Replies
Apr 17, 2008
I recently installed SQL Server 2005 Express on a Dell Precision workstation to be accessed by 6 users. The PC has XP Pro, 2 GB of memory, and a 2.0 GHz Core 2 Duo CPU. This PC is dedicated to the database only.
I also set the XP Performance options in System Properties to Background services and System cache.
Now the users are experiencing slow response from the database.
How can I make the database respond faster?
Thanks for your input.
View 4 Replies
Jan 26, 2001
Hi. We have a stored procedure that updates a specific table.
Each time it runs it deletes 5,000-6,000 rows from the table and then inserts 5,000-6,000 rows with different information.
It takes up to 1 1/2 minutes to execute.
1. Can we force SQL Server not to make a log entry for each insert, and if so, would that increase the speed of the procedure?
2. Is there any other way to increase the speed of the insert?
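One approach often suggested, sketched here with made-up object names (target_table, staging_source, @batch_id): SQL Server still logs each row, but replacing row-by-row operations with two set-based statements inside one transaction usually cuts the execution time sharply.

BEGIN TRANSACTION

    DELETE FROM target_table
    WHERE batch_id = @batch_id               -- remove the 5,000-6,000 old rows in one statement

    INSERT INTO target_table (batch_id, col1, col2)
    SELECT batch_id, col1, col2
    FROM staging_source                       -- insert the replacement rows in one statement
    WHERE batch_id = @batch_id

COMMIT TRANSACTION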
View 2 Replies
Mar 20, 2007
I need help making the following query run more efficiently.
Code:
SELECT t1.ID, t1.firstName, t1.lastName, t1.address, t1.city, t1.state, t1.zip,
       t1.locationAddress, t1.locationCity, t1.locationState, t1.locationZip
FROM Landlord_tbl t1
LEFT OUTER JOIN Mail_tbl t2
    ON t2.potentialSitesID = t1.potentialSitesID
WHERE t2.mailed_out_date IS NULL AND NOT (t1.firstName + t1.lastName) IS NULL
GROUP BY t1.ID, t1.firstName, t1.lastName, t1.address, t1.city, t1.state, t1.zip,
         t1.locationAddress, t1.locationCity, t1.locationState, t1.locationZip
ORDER BY t1.firstName, t1.lastName, t1.city, t1.state
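A possible rewrite, assuming each landlord has at most one Mail_tbl row and that the GROUP BY only exists to remove duplicates: the same anti-join can be written with NOT EXISTS and DISTINCT, and an index on Mail_tbl (potentialSitesID, mailed_out_date) supports the probe. Table and column names are taken from the post; whether this is actually faster needs to be verified against the real data.

SELECT DISTINCT t1.ID, t1.firstName, t1.lastName, t1.address, t1.city, t1.state, t1.zip,
       t1.locationAddress, t1.locationCity, t1.locationState, t1.locationZip
FROM Landlord_tbl t1
WHERE t1.firstName + t1.lastName IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM Mail_tbl t2
                  WHERE t2.potentialSitesID = t1.potentialSitesID
                    AND t2.mailed_out_date IS NOT NULL)
ORDER BY t1.firstName, t1.lastName, t1.city, t1.state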
View 3 Replies
Nov 29, 2007
Hi:
I have thousands of rows to retrieve for my report. When I run my main report it returns the data in a reasonable time.
When I run my subreport on its own, the data is again returned in a reasonable time.
But when I embed the subreport in my main report it takes forever (I have to stop it). Are there any tricks I can use to make this run faster?
Thank you in advance
Bahram
View 3 Replies
Aug 5, 2007
Hi,
Here is the typical processing I do in a Data Flow:
1) read the data file (data source)
2) do something with the data (e.g. derived column, script component, etc.)
3) pass the data as parameters into an OLE DB Command (stored proc) to take care of the rest.
Would it be faster if I split the data into, say, 10 "chunks" (using a split transform), with each chunk going to its own OLE DB Command (10 OLE DB Commands)? Would I then have 10 concurrent processes running the stored proc?
Am I misunderstanding concurrent processing here?
Thanks
View 1 Replies
Jun 17, 2004
Anyone got any alternatives to the SQL Statement below that would make it run a bit faster:
ALTER PROCEDURE sproc_ReturnAvailability
@ExtractDate DateTime,
@DateFrom DateTime,
@DateTo DateTime,
@96hrPlusFlag int,
@AppointmentsCount int OUTPUT
AS
IF @96hrPlusFlag = 0
BEGIN
SELECT @AppointmentsCount = COUNT(tbl_SurgerySlot.SurgerySlotKey)
FROM tbl_SurgerySlot
INNER JOIN tbl_SurgerySlotDescription ON (tbl_SurgerySlot.Label = tbl_SurgerySlotDescription.Label AND tbl_SurgerySlot.PracticeCode = tbl_SurgerySlotDescription.PracticeCode)
AND tbl_SurgerySlot.ExtractDate = @ExtractDate
AND tbl_SurgerySlot.StartTime BETWEEN @DateFrom AND @DateTo
AND tbl_SurgerySlotDescription.NormalBookable = 1
AND tbl_SurgerySlot.SurgerySlotKey NOT IN(
SELECT tbl_Appointment.SurgerySlotKey
FROM tbl_Appointment
WHERE tbl_Appointment.ExtractDate = @ExtractDate
AND tbl_Appointment.Deleted = 0
AND tbl_Appointment.Cancelled = 0
)
END
ELSE
BEGIN
IF @96hrPlusFlag = 1
SELECT @AppointmentsCount = COUNT(tbl_SurgerySlot.SurgerySlotKey)
FROM tbl_SurgerySlot
INNER JOIN tbl_SurgerySlotDescription ON (tbl_SurgerySlot.Label = tbl_SurgerySlotDescription.Label AND tbl_SurgerySlot.PracticeCode = tbl_SurgerySlotDescription.PracticeCode)
AND tbl_SurgerySlot.ExtractDate = @ExtractDate
AND tbl_SurgerySlot.StartTime >@DateTo
AND tbl_SurgerySlotDescription.NormalBookable = 1
AND tbl_SurgerySlot.SurgerySlotKey NOT IN(
SELECT tbl_Appointment.SurgerySlotKey
FROM tbl_Appointment
WHERE tbl_Appointment.ExtractDate = @ExtractDate
AND tbl_Appointment.Deleted = 0
AND tbl_Appointment.Cancelled = 0
)
END
Cheers...
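One commonly suggested alternative, sketched for the first branch only and using the names from the procedure: replace NOT IN with NOT EXISTS (which also behaves better if SurgerySlotKey can be NULL) and keep the slot filters in a WHERE clause. Whether it is actually faster depends on the indexes on ExtractDate, StartTime and SurgerySlotKey.

SELECT @AppointmentsCount = COUNT(ss.SurgerySlotKey)
FROM tbl_SurgerySlot ss
INNER JOIN tbl_SurgerySlotDescription ssd
    ON ss.Label = ssd.Label
   AND ss.PracticeCode = ssd.PracticeCode
WHERE ss.ExtractDate = @ExtractDate
  AND ss.StartTime BETWEEN @DateFrom AND @DateTo
  AND ssd.NormalBookable = 1
  AND NOT EXISTS (SELECT 1
                  FROM tbl_Appointment a
                  WHERE a.ExtractDate = @ExtractDate
                    AND a.Deleted = 0
                    AND a.Cancelled = 0
                    AND a.SurgerySlotKey = ss.SurgerySlotKey)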
View 1 Replies
Oct 3, 2006
Any help would be really appreciated....
My stored procedure...
CREATE PROCEDURE business3rd7
@Fromdate DATETIME,
@ToDate DATETIME
AS
select distinct CONVERT(VARCHAR(10),Receipts.Companynumber1)+CONVERT(VARCHAR(10),Receipts.Companynumber2) as co,
Receipts.Premium1+Receipts.Premium2 as Premium,
"CAN"=case when Receipts.transactiontype='CAN'
then (receipts.premium1+receipts.premium2)
else 0
end,
"NET"=Receipts.Premium1+Receipts.Premium2-case when Receipts.transactiontype='CAN'
then (receipts.premium1+receipts.premium2)
else 0
end,
"#NEW"=case when Receipts.transactiontype='NEW' then count(Receipts.policynumber) else
0 end,
-------
"$NEW"=
case when Receipts.transactiontype='NEW' then (Receipts.Premium1+Receipts.premium2)
else 0 end,
"#REN"=case when Receipts.transactiontype='REN' then count(Receipts.policynumber) else
0 end,
"$RENEW"=
case when Receipts.transactiontype='REN' then (Receipts.Premium1+Receipts.premium2)
else 0 end,
"#AP"=case when Receipts.transactiontype='AP' then count(Receipts.policynumber) else 0
end,
"$AP"=
case when Receipts.transactiontype='AP' then (Receipts.Premium1+Receipts.premium2) else
0 end,
"#SENT"=case when policy.Renewalofferdate between @Fromdate AND @ToDate then
count(policy.policynumber) end,
"%"=case when case when Receipts.transactiontype='REN' then count(Receipts.policynumber)else 0
end =0 then 0
when case when policy.Renewalofferdate between @Fromdate AND @ToDate then
count(policy.policynumber) else 0 end=0 then 0
else
case when Receipts.transactiontype='REN' then count(Receipts.policynumber)else 0
end /case when policy.Renewalofferdate between @Fromdate AND @ToDate then
count(policy.policynumber) else 0 end end,
"Current Year"= case when policy.Renewalofferdate between @Fromdate AND @ToDate then
count(clubamount) end,
"Previous Year"=case when policy.Renewalofferdate between DateAdd(year, -1, @Fromdate)
AND DateAdd(year, -1, @ToDate) then count(clubamount) end,
"#AA"=count(receipts.clubamount),
"$AA"=sum(receipts.clubamount)
FROM Receipts,Policy
where Receipts.Agencyid=Policy.Agentid
group by
Receipts.CompanyNumber1,Receipts.CompanyNumber2,
Receipts.Premium1,Receipts.Premium2,
Receipts.TransactionType,policy.Renewalofferdate,
Receipts.Agencyid
GO
Query plan...
----------------------Query Plan
|--Sort(DISTINCT ORDER BY:([Expr1008] ASC, [Expr1009] ASC, [Expr1010] ASC, [Expr1011] ASC, [Expr1012] ASC, [Expr1013] ASC, [Expr1014] ASC, [Expr1015] ASC, [Expr1016] ASC, [Expr1017] ASC, [Expr1018] ASC, [Expr1019] ASC, [Expr1020] ASC, [Expr1021] ASC, [Ex
|--Compute Scalar(DEFINE:([Expr1008]=Convert([Receipts].[CompanyNumber1])+Convert([Receipts].[CompanyNumber2]), [Expr1009]=[Receipts].[Premium1]+[Receipts].[Premium2], [Expr1010]=If ([Receipts].[TransactionType]='CAN') then ([Receipts].[Premium1]+[R
|--Compute Scalar(DEFINE:([Expr1004]=Convert([Expr1076]), [Expr1005]=Convert([Expr1077]), [Expr1006]=Convert([Expr1078]), [Expr1007]=If ([Expr1078]=0) then NULL else [Expr1079]))
|--Stream Aggregate(GROUP BY:([Receipts].[CompanyNumber1], [Receipts].[CompanyNumber2], [Receipts].[Premium1], [Receipts].[Premium2], [Receipts].[TransactionType], [Policy].[RenewalOfferDate], [Receipts].[AgencyID]) DEFINE:([Expr1076]=COUN
|--Sort(ORDER BY:([Receipts].[CompanyNumber1] ASC, [Receipts].[CompanyNumber2] ASC, [Receipts].[Premium1] ASC, [Receipts].[Premium2] ASC, [Receipts].[TransactionType] ASC, [Policy].[RenewalOfferDate] ASC, [Receipts].[AgencyID] ASC))
|--Hash Match(Inner Join, HASH:([Policy].[AgentID])=([Receipts].[AgencyID]), RESIDUAL:([Policy].[AgentID]=[Receipts].[AgencyID]))
|--Table Scan(OBJECT:([gasInquiry].[dbo].[Policy]))
|--Table Scan(OBJECT:([gasInquiry].[dbo].[Receipts]))
The two tables have 13,349 and 97,032 records respectively. It is taking more than 30 minutes...
Any way to make it faster?
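The plan shows both tables being fully scanned and hash-joined, so one low-risk experiment, assuming no indexes currently exist on the join columns (the index names below are made up): index Policy.AgentID and Receipts.AgencyID so the join input no longer requires scanning both tables. The sort for the wide GROUP BY will still cost something, but the join gets cheaper.

CREATE INDEX IX_Policy_AgentID    ON Policy (AgentID)
CREATE INDEX IX_Receipts_AgencyID ON Receipts (AgencyID)
-- the procedure's join can then seek on these keys:
-- FROM Receipts INNER JOIN Policy ON Receipts.Agencyid = Policy.Agentid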
View 14 Replies
Sep 3, 2007
Any ideas how to make reports faster when returning lots of rows?
I know you would need to work on your SQL query etc.
Or maybe cache it.
But I'm thinking of having a kind of middle tier that sits between your SQL database and the reports themselves.
Any ideas would be appreciated.
View 2 Replies
Aug 10, 2007
It seems that when I run the query with SET STATISTICS IO ON, the statistics report a 'worktable' and the query takes 30+ seconds. If the worktable is omitted (whatever the reason), the query takes less than 1 second.
Here is my take: I believe the worktable is created in tempdb, and if not, then the whole query is using cached pages. Am I right?
If I am right, then the theory is that if I increase (via sp_configure) the server's min memory setting and min query memory, the query ought to use the cached pages and return in less than 1 second (especially since there is absolutely no one but me on the server). So far I can't make it go faster. What setting am I missing to make it run faster?
Another question: if the query cannot avoid using tempdb, is it always going to take 30+ seconds? Why does the tempdb involvement make it so much slower?
Thanks in advance for your help.
View 1 Replies
Jan 30, 2008
Hello.
I have an OLTP database and a lot of users querying against it.
I am thinking of a way to store a more denormalized version of some tables in another database, and use that for a search function that we have.
The problem is that the base tables are updated all the time with new data. The search cannot have a long delay, but a delay of 5-10 minutes for updates/inserts is OK. Deletes, though, cannot have any delay.
I was thinking of an SCD setup, and when a delete goes against the OLTP database we would also run a delete against the SCD database. But is it possible to create an SCD database and have it update every 5-10 minutes?
I have also been thinking of a queue table (to store updates/inserts) in the OLTP database, and then running a query every 5 minutes against the queue table to update the denormalized version of the table. But isn't this the long way around? Why not update the denormalized version of the table directly when the OLTP tables are updated?
I have also been thinking of truncating and then refilling the denormalized version every 5-10 minutes from the OLTP version. Maybe have two tables and, after the fill is complete, rename the two tables, so that we don't lock the table for long while filling it with the new data.
The denormalized version would contain around 200k rows.
Any suggestions are appreciated.
Thanks
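A minimal sketch of the queue-table idea, with hypothetical object names (dbo.Customer as the OLTP table, dbo.CustomerSearch as the denormalized copy, dbo.CustomerQueue as the queue): a trigger records changed keys, and a job running every 5-10 minutes re-applies just those keys; deletes could reuse the same trigger but be applied to the search table immediately.

-- all object and column names here are placeholders
CREATE TABLE dbo.CustomerQueue (CustomerID int NOT NULL, QueuedAt datetime NOT NULL DEFAULT GETDATE())
GO
CREATE TRIGGER trg_Customer_Queue ON dbo.Customer
AFTER INSERT, UPDATE
AS
    -- record only the keys that changed; the periodic job does the heavy lifting
    INSERT INTO dbo.CustomerQueue (CustomerID)
    SELECT CustomerID FROM inserted
GO
-- run by a job every 5-10 minutes: refresh only the queued rows, then clear the queue
-- (a production version would delete only the queue rows it actually processed)
BEGIN TRANSACTION

    DELETE s
    FROM dbo.CustomerSearch s
    JOIN dbo.CustomerQueue q ON q.CustomerID = s.CustomerID

    INSERT INTO dbo.CustomerSearch (CustomerID, SearchText)
    SELECT c.CustomerID, c.Name + ' ' + c.City        -- the denormalization itself is just an example
    FROM dbo.Customer c
    JOIN (SELECT DISTINCT CustomerID FROM dbo.CustomerQueue) q ON q.CustomerID = c.CustomerID

    DELETE FROM dbo.CustomerQueue

COMMIT TRANSACTION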
View 7 Replies
Nov 22, 2007
Hi there,
I am doing a school project and I need a search engine that will search the data I have stored inside the database and display the results.
Can anyone help?
Thanks
View 6 Replies
Jan 2, 2007
We have over a million rows in our table and we are looking to increase the speed of our query. Any ideas?
set ANSI_NULLS OFF
set QUOTED_IDENTIFIER OFF
GO
ALTER PROCEDURE [dbo].[mainSearch]
@startRowIndex int,
@maximumRows int,
@rowCount int out,
@countedRow int,
@QUERY nvarchar(400)
AS
SELECT _ID,_NAME,_TYPE,_CREATEDATE,ESTATETYPE,ESTATEDISPLAYPRICE,ESTATEDISPLAYPRICECURRENCY,ESTATEDISTRICT,ESTATECITY,ESTATEROOMCOUNT,NUMBEROFPICTURES FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY _CREATEDATE DESC) AS ROWRANK,*
FROM ADDS AS ADTBL JOIN CONTAINSTABLE(ADDS_FTS,(ADDS_VALUE),@QUERY) as KEY_TBL
ON ADTBL._ID = KEY_TBL.[KEY]
Where (_DELETIONSTATUS=0)
)
AS RANKEDADDS
WHERE ROWRANK > @startRowIndex AND ROWRANK <= @startRowIndex + @maximumRows -1
if(@countedRow < 1)
SET @rowCount =
(
SELECT COUNT(_ID) FROM ADDS AS ADTBL JOIN CONTAINSTABLE(ADDS_FTS,(ADDS_VALUE),@QUERY) as KEY_TBL
ON ADTBL._ID = KEY_TBL.[KEY]
Where _DELETIONSTATUS=0
)
else
SET @rowCount = @countedRow
RETURN
View 2 Replies
Jul 30, 2007
I intend to develop a web-based application which uses SQL Server 2005 at the back end and Visual Studio 2.0 as the front end.
The application serves two functions.
Requirement 1: It carries out a search (in SQL Server) for a particular name entered from the front-end .NET application against a huge database of about 1 million records.
Scenario: The above requirement can be illustrated by the following example.
Consider a bank database which has an existing customer database containing attributes like Name, Age, and Profession etc.
Now if some new customer wants to open a new account in the bank, then the bank officials want to know whether the new customer is one of the existing customers or not (without asking the customer).
The system should also be able to detect combinations of the name, i.e. if we enter "Jhon" from the front-end .NET interface, then the application should be able to generate a list of all customers having "Jhon" as part of their name at any position (first name, middle name, last name).
Requirement 2: If a change is detected in the bank's existing customer database, then each record of this database is searched against an external database (having almost 2-3 million records).
Scenario: The above requirement can be illustrated by the following example.
If a new user is added to the bank's existing customer database (a database change), then every record of this newly updated database is searched against another bank's database.
I would like to hear the experts' opinions on the database design of such an application for optimal performance, and on the types of searches I should consider.
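For Requirement 1, a minimal sketch of the name search, assuming a Customers table with FirstName, MiddleName and LastName columns (all names here are hypothetical): a leading-wildcard LIKE works but scans the whole table, while a full-text index on the three columns lets CONTAINS find the token at any position without a scan (CONTAINS matches whole words, which is usually what a name search needs).

-- simple but unindexable version:
SELECT CustomerID, FirstName, MiddleName, LastName
FROM Customers
WHERE FirstName LIKE '%Jhon%' OR MiddleName LIKE '%Jhon%' OR LastName LIKE '%Jhon%'

-- with a full-text index on the three name columns:
SELECT CustomerID, FirstName, MiddleName, LastName
FROM Customers
WHERE CONTAINS((FirstName, MiddleName, LastName), '"Jhon"')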
View 5 Replies
May 15, 2007
Hello everybody,
I have installed SQL Server 2005 to take advantage of the full-text search features.
When I start a full population, there are some rows that cause errors and are not indexed.
I see the log in C:\Programmi\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLFT0000700005.LOG. In this file there are seven rows that have errors, and for each one a line that says:
"Error '0x8004fd01: The IFilter::Init() function call failed.' ... full-text key value 0x495532303045594154504B55. Attempt will be made to reindex it. The component 'pdffilt.dll' reported error while indexing. Component path 'C:\Programmi\Foxit Software\Foxit PDF IFilter\pdffilt.dll'."
with a different full-text key value for every line.
My questions are:
1) Starting from a full-text key (e.g. 0x495532303045594154504B55), how do I get to the record in the source table to understand what the problem is? Otherwise, how can I examine the problem if I don't know which row causes the error?
2) What is error 0x8004fd01?
3) Could it be that the Foxit PDF filter is better than Adobe's? With the Adobe filter (version 6) my full population is endless!
Thanks!
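For question 1, a sketch of the usual lookup, assuming the indexed table is called Documents and its full-text key column is DocID (both placeholders): the value in the log is the full-text key column's value shown as hex bytes, so if the key is a character column it can be decoded and compared directly.

-- find which column is the table's full-text key
SELECT COL_NAME(OBJECT_ID('Documents'),
                OBJECTPROPERTY(OBJECT_ID('Documents'), 'TableFulltextKeyColumn')) AS FullTextKeyColumn

-- for a character key, 0x495532303045594154504B55 is simply the bytes of 'IU200EYATPKU',
-- so the failing row can be located like this:
SELECT *
FROM Documents
WHERE DocID = CONVERT(varchar(900), 0x495532303045594154504B55)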
View 1 Replies
Apr 19, 2000
I am about to heavily index a table and have to include at least 3 to 4 columns in the full-text index for this table.
The table is updated very frequently, and the columns involved in the full-text indexing also undergo frequent updates.
As of now, I can't avoid using full-text indexing, as these columns are very lengthy and basically contain text.
The users of the database will give some keywords as the search criteria to get information on what they are looking for.
How frequently should I update my full-text catalog?
This is a scenario where the full-text search operates on various tables, and each of these tables might contain around 300,000 to 800,000 rows.
I would appreciate an intelligent suggestion, as I need it as soon as possible.
Thanks, and have a wonderful time ahead.
View 1 Replies
Oct 28, 2015
We have an IFilter to extract text from emails and their attached files. I have encountered 5 emails that receive the following error code, but I cannot find anything on the error code:
Error '0x80fc7586' occurred during full-text index population for table or indexed view '[RIA_Unindexed].[dbo].[Emails]' (table or indexed view ID '485576768', database ID '31'), full-text key value 'E11264A2-A17D-E511-80E0-005056B240B1'. Attempt will be made to reindex it.
After a number of retry attempts they fail to index. What does error code '0x80fc7586' refer to? I have not been able to find it.
Since the first two bytes of the error code are the severity and the facility reporting the error, I stripped off that information and looked for error 0x7586 or 30086. That error reported "Too Many products". The emails being indexed contain large attachments, so when the attachment chunks are requested from the fdhost the result may be too many chunk segments, or more likely there are some repeating words in the extracts, like 4 million occurrences, so right now I am testing that condition, to see if the word breaker is throwing the error.
View 10 Replies
Jan 28, 2008
Hi
I have a full-text index on a table with an image field that is successfully indexing .doc, .pdf and .rtf files.
Keyword searching on this is no problem.
What I want to be able to do is perform a similarity search. By this I mean pass in a Key_ID (document ID) and have the database return a list of Key_IDs (documents) which are similar.
By similar I mean containing mostly the same keywords in roughly the same quantities.
Thanks
View 3 Replies
Jul 11, 2007
Hi, I was wondering if any SQL Server gurus out there could help me...
I have a table which contains text resources for my application. The text resources are multi-lingual, so I've read that if I add an HTML language indicator meta tag, e.g.
<META NAME="MS.LOCALE" CONTENT="ES">
and store the text in a varbinary column with a supporting Document Type column (varchar(5)) containing ".html", then the full-text index service should be intelligent about the language word breakers it applies when indexing the text. (I hope this is the correct technique for best multi-lingual support in a single table?)
However, when I come to query this data the results always return 0 rows (no errors are encountered). e.g.
DECLARE @SearchWord nvarchar(256)
SET @SearchWord = 'search' -- Yes, this word is definitely present in my resources.
SELECT * FROM Resource WHERE CONTAINS(Document, @SearchWord)
I'm a little puzzled as Full Text search is working fine on another table that employs an nvarchar column (just plain text, no html).
Does the filter used for full text indexing of html expect certain tags to be present as standard? E.g. <html> and <body> tags? At present the data I have stored might look like this (no html or body wrapping tags):
Example record 1 data: <META NAME="MS.LOCALE" CONTENT="EN">Search for keywords:
Example record 2 data: <META NAME="MS.LOCALE" CONTENT="EN">Sorry no results were found for your search.
etc.
Any pointers / suggestions would be greatly appreciated. Cheers,
Gavin.
UPDATE: I have tried wrapping the text in more usual HTML tags and re-built the full-text index, but I still never get any rows returned for my query results. Example of content wrapping tried: <HTML><HEAD><META NAME="MS.LOCALE" CONTENT="EN"></HEAD><BODY>Test text.</BODY></HTML>
I've also tried stripping all HTML tags from the content and setting the Document Type column to .txt, but I still get no rows returned.
View 1 Replies
Dec 20, 2007
When I search for Peter, "Peter,Pan" was successfully retrieved from the database. But when I search for Peter Pan, it finds no results. How can I make it find results for Peter Pan too? In other words, how do I make my search engine more powerful, fuzzy and intelligent?
My current code:
<asp:SqlDataSource ID="dsSearch" runat="server" ConnectionString="<%$ ConnectionStrings:csHPDB %>"
    SelectCommand="SELECT * FROM [employee_table] WHERE ([employee_name] LIKE '%' + @employee_name + '%')">
    <SelectParameters>
        <asp:QueryStringParameter Name="employee_name" QueryStringField="Search" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>
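One hedged way to make "Peter Pan" match a row stored as "Peter,Pan", reusing the employee_table and employee_name names from the post: split the search string on spaces and require every token to match, instead of matching the whole phrase. A two-token T-SQL sketch (it assumes at most a first and a last name are typed in):

DECLARE @Search nvarchar(100)
SET @Search = N'Peter Pan'            -- what the user typed

DECLARE @First nvarchar(100), @Rest nvarchar(100)
SET @First = CASE WHEN CHARINDEX(' ', @Search) > 0
                  THEN LEFT(@Search, CHARINDEX(' ', @Search) - 1)
                  ELSE @Search END
SET @Rest  = CASE WHEN CHARINDEX(' ', @Search) > 0
                  THEN LTRIM(SUBSTRING(@Search, CHARINDEX(' ', @Search) + 1, LEN(@Search)))
                  ELSE N'' END

-- every token must appear somewhere in the name, so 'Peter,Pan' matches a search for 'Peter Pan'
SELECT *
FROM employee_table
WHERE employee_name LIKE '%' + @First + '%'
  AND employee_name LIKE '%' + @Rest + '%'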
View 18 Replies
Jul 20, 2005
My SQL Server 2000 does not use the accent-insensitive collation setting (a collation containing _AI) in full-text searches:
While SELECT * FROM <table> WHERE <column> LIKE '%a%' returns 'Mäuse',
SELECT * FROM <table> WHERE CONTAINS(*, 'a') does not.
Setting 'default full-text language' to neutral (0) does not help.
How can I make full-text searches accent insensitive?
Thanks for any ideas!
Matthias
View 3 Replies
Sep 22, 2006
Hi
I want to know: is a flat file faster than an RDBMS for indexing? For example, for a search engine's index, would a flat file be better in terms of performance, scalability etc. than an RDBMS?
View 14 Replies
Sep 10, 2007
Hi,
The other day we tried the online re-indexing feature of SQL 2005 and it's performing faster than offline re-indexing. Could you please confirm whether it's supposed to be this way? I always thought offline should be faster than online.
Thanks,
Ritesh
View 5 Replies
Aug 9, 2006
I'm just wondering if this is a bug in MS Search or if I am doing something wrong.
I have the query below:
declare @search_clause varchar(255)
set @Search_Clause = ' "hepatitis b" and "hepatocellular carcinoma"'
select * from results
where contains(finding,@search_clause)
I don't get the correct result at all.
If I change my search clause to "hepatitis" and "hepatocellular carcinoma" -- without the "b" --
then I get the correct result.
It seems MS Search doesn't like a phrase containing a single letter, or some such. Is it a known bug?
Anyone know?
Thanks
View 3 Replies
Mar 28, 2007
Hi all, if I have a comma-delimited string and want to insert each delimited substring into a table, which of the following ways is faster?
Pass the whole string into a stored procedure, loop through the delimited string, pick out each substring and insert it into the table, or
loop and pass each substring into a stored procedure and insert N times?
Or are there any other, better ways someone could suggest? Thanks!
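A minimal sketch of the first option (one call, the procedure does the splitting); the procedure, table and column names are made up for illustration. Calling it once, e.g. EXEC dbo.InsertDelimitedValues 'red,green,blue', avoids N round trips, which is usually where the second option loses time.

CREATE PROCEDURE dbo.InsertDelimitedValues
    @List varchar(8000)                 -- e.g. 'red,green,blue'
AS
BEGIN
    DECLARE @Pos int, @Item varchar(100)
    SET @List = @List + ','             -- trailing comma so every item has a delimiter after it
    SET @Pos = CHARINDEX(',', @List)
    WHILE @Pos > 0
    BEGIN
        SET @Item = LEFT(@List, @Pos - 1)
        IF LEN(@Item) > 0
            INSERT INTO dbo.Items (ItemValue) VALUES (@Item)   -- dbo.Items is a placeholder table
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(',', @List)
    END
END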
View 6 Replies
Jul 27, 2001
I have a query where I can use either and get the same results. I just need to shave some time off... which is faster, LIKE or IN()?
thanks
kim
View 2 Replies
Jan 16, 2006
I was just wondering if this can be done any faster, code-wise that is.
Don't mind the converts, I can't do without them, as the data discipline of the source table isn't always reliable, while I have to be absolutely sure the destination data ends up in the required format.
UPDATE MATCH_basistabel
SET MATCH_basistabel.matchfelt = convert(varchar(50),ALL_tbl_medlemsinfo.søgenavn),
MATCH_basistabel.søgenavn = convert(varchar(50),ALL_tbl_medlemsinfo.søgenavn),
MATCH_basistabel.medlemsnavn = convert(varchar(50),ALL_tbl_medlemsinfo.medlemsnavn),
MATCH_basistabel.medlemsnavn2 = convert(varchar(50),ALL_tbl_medlemsinfo.medlemsnavn2),
MATCH_basistabel.medlemsnummer = ALL_tbl_medlemsinfo.medlemsnummer,
MATCH_basistabel.nationalitet = convert(varchar(10), ALL_tbl_medlemsinfo.nationalitet),
MATCH_basistabel.organisationsnummer = convert(varchar(10),ALL_tbl_medlemsinfo.organisationsnummer),
MATCH_basistabel.medlemskab = convert(varchar(20), ALL_tbl_medlemsinfo.medlemskab),
MATCH_basistabel.ipdn = ALL_tbl_medlemsinfo.ipdn,
MATCH_basistabel.ipdnroll = convert(varchar(20), ALL_tbl_medlemsinfo.ipdroll),
MATCH_basistabel.franavision = 1
FROM MATCH_basistabel, ALL_tbl_medlemsinfo
WHERE isnumeric(matchfelt) = 1
AND (convert(int, MATCH_basistabel.matchfelt) = convert(int, ALL_tbl_medlemsinfo.medlemsnummer)
AND MATCH_basistabel.franavision = 0)
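One possibility worth testing, sketched with made-up names and under two assumptions (SQL Server 2005 or later, and every matchfelt value that isnumeric flags converts cleanly to int): the WHERE clause converts matchfelt on every row, which prevents index use, so a persisted computed column holding the numeric value can be indexed and joined on directly.

ALTER TABLE MATCH_basistabel
    ADD matchfelt_num AS CASE WHEN isnumeric(matchfelt) = 1
                              THEN convert(int, matchfelt) END PERSISTED

CREATE INDEX IX_basistabel_matchfelt_num ON MATCH_basistabel (matchfelt_num)

-- the UPDATE's WHERE clause could then be written as:
-- WHERE MATCH_basistabel.matchfelt_num = convert(int, ALL_tbl_medlemsinfo.medlemsnummer)
--   AND MATCH_basistabel.franavision = 0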
View 14 Replies
Feb 17, 2006
Hi Guys,
I have a SQL file, but it runs slowly when it comes to huge numbers of records. How do I make it faster? I did create an index, but how do I make use of the index? Please help me on this...
Many Thanks,
Regards,
Shaffiq
View 6 Replies
Jul 20, 2005
I'm somewhat new to MS SQL Server and I'm wondering which of the following two queries would be faster:

DECLARE @ResidencesBuilt int
DECLARE @BarracksBuilt int
DECLARE @AirBaysBuilt int
DECLARE @NuclearPlantsBuilt int
DECLARE @FusionPlantsBuilt int
DECLARE @StarMinesBuilt int
DECLARE @TrainingCampsBuilt int
DECLARE @FactoriesBuilt int
SELECT
@ResidencesBuilt = SUM(CASE WHEN BuildingType = 0 THEN Built END),
@BarracksBuilt = SUM(CASE WHEN BuildingType = 1 THEN Built END),
@AirBaysBuilt = SUM(CASE WHEN BuildingType = 2 THEN Built END),
@NuclearPlantsBuilt = SUM(CASE WHEN BuildingType = 3 THEN Built END),
@FusionPlantsBuilt = SUM(CASE WHEN BuildingType = 4 THEN Built END),
@StarMinesBuilt = SUM(CASE WHEN BuildingType = 5 THEN Built END),
@TrainingCampsBuilt = SUM(CASE WHEN BuildingType = 6 THEN Built END),
@FactoriesBuilt = SUM(CASE WHEN BuildingType = 7 THEN Built END)
FROM Buildings
GROUP BY kdID
HAVING kdID = 2902

Or:

DECLARE @ResidencesBuilt int
DECLARE @BarracksBuilt int
DECLARE @AirBaysBuilt int
DECLARE @NuclearPlantsBuilt int
DECLARE @FusionPlantsBuilt int
DECLARE @StarMinesBuilt int
DECLARE @TrainingCampsBuilt int
DECLARE @FactoriesBuilt int
SET @ResidencesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 0 AND kdID = 2902)
SET @BarracksBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 1 AND kdID = 2902)
SET @AirBaysBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 2 AND kdID = 2902)
SET @NuclearPlantsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 3 AND kdID = 2902)
SET @FusionPlantsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 4 AND kdID = 2902)
SET @StarMinesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 5 AND kdID = 2902)
SET @TrainingCampsBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 6 AND kdID = 2902)
SET @FactoriesBuilt = (SELECT Built FROM Buildings WHERE BuildingType = 7 AND kdID = 2902)

The data source is:

kdID BuildingType Built
2902 6 0
2902 7 0
2902 4 0
2902 0 80
2902 2 0
2902 1 5
2902 3 40
2902 5 10

Or:

CREATE TABLE [dbo].[Buildings] (
[kdID] [int],
[BuildingType] [tinyint],
[Built] [int]
)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 0, 80)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 1, 5)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 2, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 3, 40)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 4, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 5, 10)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 6, 0)
INSERT INTO Buildings (kdID, BuildingType, Built) VALUES (2902, 7, 0)

Analyzer says the first would be faster, but it has a lot of SUM()'s and whatnot so I'm not too sure about this. There are also about 1000 rows in the actual Buildings table. This will be part of a stored procedure.
View 3 Replies
May 7, 2007
I want to know the number of users on our web site for each month in a given year. I'm looking for a faster way to do this, perhaps one that can leverage an index instead of reading the entire table! (My average disk queue right now is above 7 and the query takes about 90 seconds.)
Here's my current SP. Basically I'm calculating each month/year and using UNION to join them together, then PIVOT to rotate.
USE [TNS]
GO
/****** Object: StoredProcedure [dbo].[Unique_Login_IPs] Script Date: 05/07/2007 12:38:52 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[Unique_Login_IPs]
(
@year1 int,
@year2 int
)
AS
BEGIN
SET NOCOUNT OFF;
-- Define the years for testing purposes
set @year1 = 2006
set @year2 = 2007
SELECT month,[2006] as y2006,[2007] as y2007
FROM
(
SELECT @year1 AS year, 1 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 1)) as tmpy1_1
UNION
SELECT @year1 AS year, 2 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 2)) as tmpy1_2
UNION
SELECT @year1 AS year, 3 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 3)) as tmpy1_3
UNION
SELECT @year1 AS year, 4 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 4)) as tmpy1_4
UNION
SELECT @year1 AS year, 5 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 5)) as tmpy1_5
UNION
SELECT @year1 AS year, 6 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 6)) as tmpy1_6
UNION
SELECT @year1 AS year, 7 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 7)) as tmpy1_7
UNION
SELECT @year1 AS year, 8 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 8)) as tmpy1_8
UNION
SELECT @year1 AS year, 9 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 9)) as tmpy1_9
UNION
SELECT @year1 AS year, 10 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 10)) as tmpy1_10
UNION
SELECT @year1 AS year, 11 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 11)) as tmpy1_11
UNION
SELECT @year1 AS year, 12 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year1) AND (MONTH(logged) = 12)) as tmpy1_12
UNION
SELECT @year2 AS year, 1 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 1)) as tmpy1_1
UNION
SELECT @year2 AS year, 2 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 2)) as tmpy2_2
UNION
SELECT @year2 AS year, 3 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 3)) as tmpy2_3
UNION
SELECT @year2 AS year, 4 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 4)) as tmpy2_4
UNION
SELECT @year2 AS year, 5 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 5)) as tmpy2_5
UNION
SELECT @year2 AS year, 6 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 6)) as tmpy2_6
UNION
SELECT @year2 AS year, 7 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 7)) as tmpy2_7
UNION
SELECT @year2 AS year, 8 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 8)) as tmpy2_8
UNION
SELECT @year2 AS year, 9 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 9)) as tmpy2_9
UNION
SELECT @year2 AS year, 10 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 10)) as tmpy2_10
UNION
SELECT @year2 AS year, 11 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 11)) as tmpy2_11
UNION
SELECT @year2 AS year, 12 AS month, COUNT(*) AS cnt
FROM (SELECT DISTINCT ipaddress
FROM servicelog AS servicelog_1
WHERE (method = 'LOGIN') AND (YEAR(logged) = @year2) AND (MONTH(logged) = 12)) as tmpy2_12
) piv
PIVOT
(
SUM(cnt)
FOR year IN
([2006],[2007])
) as child
END
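A hedged rewrite of the same calculation: one pass over servicelog grouped by year and month with COUNT(DISTINCT ipaddress) replaces the 24 UNIONed subqueries, and filtering logged with a date range (instead of YEAR()/MONTH() on every row) lets an index such as the one below be used. Table and column names come from the procedure; the index name is made up, and the literal date range stands in for @year1/@year2.

CREATE INDEX IX_servicelog_method_logged ON servicelog (method, logged)

SELECT month, [2006] AS y2006, [2007] AS y2007
FROM
(
    SELECT YEAR(logged) AS year, MONTH(logged) AS month,
           COUNT(DISTINCT ipaddress) AS cnt
    FROM servicelog
    WHERE method = 'LOGIN'
      AND logged >= '20060101' AND logged < '20080101'   -- covers both years
    GROUP BY YEAR(logged), MONTH(logged)
) piv
PIVOT
(
    SUM(cnt)
    FOR year IN ([2006], [2007])
) AS child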
View 1 Replies
May 14, 2007
Hi all, I'm new to this forum and this is my first question. I have 2 pages in my web site. Page 1 queries the database directly using SqlDataSource; the second page queries through a BLL and then a DAL, following the steps in this tutorial (http://www.asp.net/learn/dataaccess/tutorial02vb.aspx?tabid=63). Page 1 uses a "LIKE" query search, and Page 2 is a normal page displaying some product details.
Under normal circumstances, one would expect Page 1 to be way faster than Page 2. However, the problem is that Page 1 is lightning fast while Page 2 takes 10 seconds to load. 10 seconds is really not acceptable. I really couldn't figure out what is happening. Both Page 1 and Page 2 use the same connection string, which connects through a DSN. How is the connection different between using SqlDataSource and the DAL? Could someone please help? Thanks.
P.S. I'm using a Pervasive database.
View 5 Replies