Cached Search Of Top Terms
Mar 11, 2008
Hi, I have a search and I want to create a hyperlinked list of the top 5 search terms below it. What's the most efficient way to go about this?
I have a table that contains words that will be used to search another table on which a full-text index has been created on the searchable columns. I'm basically trying to run something like this:
SELECT t1.col1, t2.col3
FROM tbl1 t1, tbl2 t2
WHERE CONTAINS (t1.col1, t2.col1)
I know this won't work, but is there a way to join these two tables so the words (t2.col1) can be passed as search conditions? There is no common key between the two tables, so a normal join won't work. I'm trying to find a way to pass the search words from one table to another.
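One workaround, sketched only roughly here: CONTAINS will accept a variable (though not a column) as its search condition, so the terms can be fed in one at a time from a cursor over tbl2. Table and column names follow the post; the @results table variable and the sizes are invented.
DECLARE @term nvarchar(200), @condition nvarchar(210);
DECLARE @results TABLE (col1 nvarchar(400));
DECLARE term_cursor CURSOR FAST_FORWARD FOR SELECT col1 FROM tbl2;
OPEN term_cursor;
FETCH NEXT FROM term_cursor INTO @term;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @condition = N'"' + @term + N'"';   -- quote the term so multi-word phrases work too
    INSERT INTO @results (col1)
    SELECT t1.col1
    FROM tbl1 t1
    WHERE CONTAINS(t1.col1, @condition)
      AND NOT EXISTS (SELECT 1 FROM @results r WHERE r.col1 = t1.col1);   -- skip duplicates
    FETCH NEXT FROM term_cursor INTO @term;
END
CLOSE term_cursor;
DEALLOCATE term_cursor;
SELECT col1 FROM @results;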
hey all,
basically, what I am trying to achieve is 2 types of search functions...
Search for All terms (easy and complete) and search for Any Terms...
The way I have gone about this so far is, in my ASP.NET app, to split the search string by spaces, search for each word, and merge the resulting dataset into the main return dataset.
This, however, has a few problems: the result dataset will contain duplicate values, and I am running queries in a loop.
What I am looking for is a one-stop-shop stored procedure that will split the search string, loop through each word, and add the results to a return table, ONLY if they do not already exist within the return table.
Can anyone point me in the right direction... basically with the splitting of the string and the looping through the words... the rest I think I can handle...
or any other hints/tips/tricks would also be helpful.
Thanks Everyone!
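In case it helps, here is a rough sketch of the "Any terms" half as a single stored procedure. The Articles table, its ArticleId key and its Body column are hypothetical stand-ins for the real searchable table; the primary key on the table variable is what keeps duplicates out, and the CHARINDEX/SUBSTRING loop does the splitting.
CREATE PROCEDURE dbo.SearchAnyTerms
    @SearchString nvarchar(1000)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @word nvarchar(200), @pos int;
    DECLARE @hits TABLE (ArticleId int PRIMARY KEY);   -- the PK rejects duplicates

    SET @SearchString = LTRIM(RTRIM(@SearchString)) + N' ';
    SET @pos = CHARINDEX(N' ', @SearchString);
    WHILE @pos > 0
    BEGIN
        SET @word = LTRIM(RTRIM(LEFT(@SearchString, @pos - 1)));
        IF LEN(@word) > 0
        BEGIN
            -- add rows matching this word only if they are not already in @hits
            INSERT INTO @hits (ArticleId)
            SELECT a.ArticleId
            FROM dbo.Articles a
            WHERE a.Body LIKE N'%' + @word + N'%'
              AND NOT EXISTS (SELECT 1 FROM @hits h WHERE h.ArticleId = a.ArticleId);
        END
        SET @SearchString = SUBSTRING(@SearchString, @pos + 1, LEN(@SearchString));
        SET @pos = CHARINDEX(N' ', @SearchString);
    END

    SELECT a.*
    FROM dbo.Articles a
    JOIN @hits h ON h.ArticleId = a.ArticleId;
END
For the "All terms" case, the same loop can instead delete from @hits any row that fails to match the current word.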
There are several MS SQL Server terms that I don't know and cannot find in my books. Does MS provide a glossary for these, besides BOL, where it is difficult to find good explanations or even definitions?
thx,
Kat
PS: it would be a nice feature if they don't have one currently.
Hi
Hopefully I'm asking this in the right place; sorry if it's not, maybe you could point me in the right direction.
I have been informed that the use of MDF file (SQL Server Express) databases on the net is restricted, as this is classed as multiple connections and therefore falls outside the free license agreement.
I am looking at commercially developing and marketing a web-based system with a relatively small database footprint (well under 1 GB) using ASP.NET 2.0, and I like the look of SQL Server Express.
Could anyone clear up whether or not this is allowed under the SQL Server Express terms of use, or point me in the direction of somewhere I can find information?
Thanks
What are the MS SQL Server terms for the following DB2 concepts?
- statistical view
- materialized query table
I have a table that contains 10 million records. Of the following 2 statements, which one provides better performance? Frankly, I have no idea how to compare the execution plans...
Select Top 10000 * From Table
OR
Set rowcount 10000
Select * From Table
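One way to compare them (a sketch; [Table] stands for the real table name) is to turn on I/O and timing statistics, run each statement, and compare reads and elapsed time, or include the actual execution plan in Management Studio and compare the plans side by side.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP 10000 * FROM [Table];

SET ROWCOUNT 10000;
SELECT * FROM [Table];
SET ROWCOUNT 0;   -- reset, otherwise the limit stays in effect for the session

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;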
I want to know about the types of replication used in a SQL Server environment.
Hi,
I need to categorize a lot of HTML or text files according to a list of terms, and I wonder if Term Lookup is adequate for this. The problem is that Term Lookup can only take an OLE DB source as input. My files can be up to 80 KB and aren't column-structured.
Should I import my files into a table? But if so, how can I import a column with more than 8000 characters?
Thank you in advance.
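If the files do get staged in a table, one option (a sketch assuming SQL Server 2005; the table and column names are invented) is an nvarchar(max) column, which is not bound by the 8000-character limit and can then feed the Term Lookup transform.
CREATE TABLE dbo.DocumentStage
(
    DocumentId   int IDENTITY(1,1) PRIMARY KEY,
    FileName     nvarchar(260) NOT NULL,
    DocumentText nvarchar(max) NOT NULL   -- holds documents well past 8000 characters
);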
First and foremost, thanks for reading and responding! Does it matter how big a stored procedure is if you do things in the stored procedure such as:
declare the parameters
IF @Parm_Select = '<ALL>' do a select
IF @Parm_Select <> '<ALL>' and @Parm_Report = '1' do a select
IF @Parm_Select <> '<ALL>' and @Parm_Report = '2' do a select
This goes on and on and on, and I have written a couple of stored procedures that are about 1500 lines of code based upon the parameters passed. I do not create any tables - they are just all select statements based upon the parameters passed.
I thought I was doing the right thing because I did not want to have to write a procedure that calls a procedure (I read about this and got confused on the return parameters because there is a lot of data being returned from the select - I don't think I said that correctly!). I am just learning this SQL stuff and it is cool and I am excited - but I don't want to develop any bad habits in the beginning, and I try to look these things up on the www, but I just don't get explicit answers from reading all of this stuff. Thanks to all in advance!
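For reference, the pattern being described boils down to something like the following compressed sketch; dbo.Orders and its columns are placeholders, and each branch is a complete standalone query, which is why the procedure grows with every parameter combination.
CREATE PROCEDURE dbo.usp_BigReport
    @Parm_Select varchar(20),
    @Parm_Report varchar(5)
AS
BEGIN
    IF @Parm_Select = '<ALL>'
        SELECT OrderId, Region, Amount FROM dbo.Orders;   -- everything

    IF @Parm_Select <> '<ALL>' AND @Parm_Report = '1'
        SELECT OrderId, Region, Amount
        FROM dbo.Orders
        WHERE Region = @Parm_Select;                      -- report 1

    IF @Parm_Select <> '<ALL>' AND @Parm_Report = '2'
        SELECT Region, SUM(Amount) AS Total
        FROM dbo.Orders
        WHERE Region = @Parm_Select
        GROUP BY Region;                                  -- report 2

    -- ...and so on for each combination, which is how it reaches ~1500 lines
    -- without ever creating tables.
END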
Is it possible for the Term Lookup function to manage the differences between US and British English spelling? For example, if I search for the terms "color" and "categorization", I would like the term lookup to also count the "colour" and "categorisation" occurrences in the text.
Thanks
Vincent
I am using an Oracle 10g DB as the back end. I have two date fields in a table:
1)premium_paying_start_date
2)premium_paying_end_date
I have to get the premium_term, i.e. the difference between the two dates (premium_paying_end_date - premium_paying_start_date).
The difference should show the years, months and number of days of difference.
For example :
premium_paying_start_date : 14-10-1984
premium_paying_end_date : 01-03-2008
Difference should be : 23 Y : 4 M : 15 D (Y = years, M = months , D= days)
So please give me the solution for this.
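The post mentions an Oracle back end (where MONTHS_BETWEEN would be the usual starting point), but since this is a SQL Server forum, here is a hedged T-SQL sketch of the same arithmetic. Note that for the sample dates it returns 16 days rather than 15, so the exact convention for counting the end day may need adjusting.
DECLARE @start datetime, @end datetime;
DECLARE @months int, @years int, @days int;

SET @start = '19841014';
SET @end   = '20080301';

-- whole months between the dates, stepping back one if the end day has not yet
-- reached the start day within the final month
SET @months = DATEDIFF(month, @start, @end)
            - CASE WHEN DATEADD(month, DATEDIFF(month, @start, @end), @start) > @end
                   THEN 1 ELSE 0 END;

SET @years  = @months / 12;
SET @days   = DATEDIFF(day, DATEADD(month, @months, @start), @end);
SET @months = @months % 12;

SELECT @years AS Years, @months AS Months, @days AS Days;   -- 23 / 4 / 16 for the sample dates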
I'm just wondering if this is a bug in MS Search or if I am doing something wrong.
I have a query below
declare @search_clause varchar(255)
set @Search_Clause = ' "hepatitis b" and "hepatocellular carcinoma"'
select * from results
where contains(finding,@search_clause)
I don't get the correct result at all.
If I change my search_clause to "hepatitis" and "hepatocellular carcinoma" -- without the "b" --
then I get the correct result.
It seems MS Search doesn't like a phrase containing a single letter or something of the sort - or is it a known bug?
Anyone know?
Thanks
hi,
I am using a ForEach Loop to import each file within a folder, but I also need to work out the dataset related to these files.
the file are named as:
ff_inbound_20071008_1.csv
ff_inbound_20071008_2.csv
where for file ff_inbound_20071008_1.csv:
ff => flat file
inbound => dataset of this csv
20071008 => date
1=> file count
Bearing in mind that they are stored in the variable as a full path:
z:\myFlatFiles\exports\proj01\ff_inbound_20071008_2.csv
I need to extract the dataset and the date for each file.
How can I extract these terms from the file name?
many thanks,
nicolas
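If the file name ends up in a SQL variable or staging table, the pieces can be pulled apart with ordinary string functions, as in this sketch; inside the package the same logic maps onto SSIS expression functions such as SUBSTRING, FINDSTRING and REVERSE. The sample path is the one from the post, and the sketch assumes the dataset name itself contains no underscores.
DECLARE @fullPath nvarchar(400), @fileName nvarchar(200);
DECLARE @dataset nvarchar(100), @fileDate char(8), @fileCount nvarchar(10);

SET @fullPath = N'z:\myFlatFiles\exports\proj01\ff_inbound_20071008_2.csv';

-- strip the folder part: keep everything after the last backslash
SET @fileName = RIGHT(@fullPath, CHARINDEX(N'\', REVERSE(@fullPath)) - 1);

-- the name is ff_<dataset>_<yyyymmdd>_<count>.csv, so split on the underscores
SET @dataset   = SUBSTRING(@fileName, 4, CHARINDEX(N'_', @fileName, 4) - 4);
SET @fileDate  = SUBSTRING(@fileName, CHARINDEX(N'_', @fileName, 4) + 1, 8);
SET @fileCount = SUBSTRING(@fileName,
                           CHARINDEX(N'_', @fileName, 4) + 10,
                           CHARINDEX(N'.', @fileName) - (CHARINDEX(N'_', @fileName, 4) + 10));

SELECT @dataset AS dataset, @fileDate AS file_date, @fileCount AS file_count;
-- returns inbound, 20071008, 2 for the sample path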
Hello all,
I have a report with a table and a chart. It uses dataset1 as the data source.
All works fine.
I create a new dataset called dataset2.
The queries are exactly the same. The only differences between the 2 datasets are the database server and the fact that one of the columns is a smallint (in dataset2) and an int (in dataset1).
I change the datasetName property of both the table and the chart to use dataset2.
When I run the report I get a conversion error stating that there was an overflow of int2 while using dataset1. I have verified the report is not using dataset1 anywhere. If I delete dataset1 and run the report the error goes away. If I add it back, I get the error again. Why is the report looking at dataset1 if it is not referenced at all in the report? Does SQL RS cache the datasets and verify each when it compiles?
regards,
Bill
I am using SQL Server 2005. I have a VIEW that joins several tables. Columns can be added to one of the tables dynamically by the user from a GUI interface. However, after a column is added, it does not show up in the VIEW immediately. It will take a while (I haven't figured out exactly how long) before the extra column shows up in the execution result of the VIEW.
So it seems like SQL Server is caching that VIEW's schema. Is there any way I can make this view always come back with the latest schema?
Thanks a lot!
Penn
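For what it's worth, a view created without SCHEMABINDING keeps the metadata it had at CREATE time (this bites especially when the view uses SELECT *), and sp_refreshview rebuilds that metadata on demand; the view name below is a placeholder.
EXEC sp_refreshview N'dbo.MyView';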
I want to check the performance of my query, and I just want to remove the cached query results. Are there any suggestions for how I can do this?
I just want to check, after each modification, how much the performance has improved.
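A common approach on a development server (not on production, since these commands affect the whole instance) is to clear the plan and buffer caches before each timed run, for example:
CHECKPOINT;                 -- flush dirty pages so the clean buffers can be dropped
DBCC FREEPROCCACHE;         -- clear cached query plans
DBCC DROPCLEANBUFFERS;      -- clear cached data pages

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the query being tuned here and compare reads/elapsed time between runs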
Phil, great links, really helpful and appreciated.
I just need to verify one thing on the lookup method:
One of the lookup methods people were discussing is the non-cached lookup, which seems to have been evaluated as the fastest. Is non-cached the default for the Lookup transformation? And when I want the lookup method to be cached, I need to go into the Advanced tab and set the cache percentage, right? Thanks.
Hi,
I'm trying to understand the cases where it's better to use snapshots and when it's better to use cached instances.
If I have 100 users trying to reach a report, is it better to use a snapshot or a cached instance? In both cases, the 100 users will get the same report result. And what about performance - is it similar?
Thanks for your time and response,
See you,
Have a nice day!
regards,
sandy
Hello,
I would like to know: what is the difference between a snapshot and a cached instance in SSRS?
Which one has the best performance, and which one is best for multiple users and for reports containing parameters (the parameters are then passed in the WHERE clause of the SQL code, e.g. WHERE IN(@param1))?
Thanks for your answers.
Zoz
I was wondering if anyone had any concrete information about whether there is a problem with having too many stored procedures or plans in the cache. Obviously there is an impact on memory, but if we can ignore that for the time being, does SQL perform just as well with 100 query plans as it does with tens of millions of plans?
Is it possible to keep a cached Lookup in memory when executing multiple Data Flows? Executing DFTs in parallel will cache and use the same LOOKUP statement. But what if I'm executing the DFTs sequentially - can I keep the LOOKUP from the first DFT in memory for the second DFT? For example, in my case, I'm caching a lookup against the Customer dimension for invoices. The second DFT then processes credits and again does a lookup against the Customer dimension. I want to use the cached Customer records from the first DFT.
Is there a way to programmatically determine if a report should be generated from the cache or run against real-time data?
Parameterized queries are only allowed on partial-cache or no-cache lookup transforms, not 'full' ones. Is there some "trick" to parameterizing a full cache lookup, or should the join simply be done at the source, obviating the need for a full cache lookup at all (other suggestions certainly welcome)?
More particularly, I'd like to use the lookup transform in a surrogate key pipeline. However, the dimension is large (900 million rows), so it would be useful to restrict the lookup transform's cache by a join to the source.
For example:
Source query is: select a,b,c from t where z=@filter (20,000 rows)
Lookup transform query: select surrogate_key,business_key from dimension (900 M rows, not tenable)
Ideal Lookup transform query:
select distinct surrogate_key
,business_key
from dimension d inner join
t on d.business_key = t.c
where t.z = @filter
I have a procedure that generates dynamic sql and then executes via the execute(strSQL) syntax. BOL states that if I use sp_executesql with hard-typed parameters passed in variables, the query optimizer will 'probably' match the sql statement with the cached execution path, thus avoiding recompilation and speeding up the results for heavily run procedures.
Can anyone tell me if this is also true if the sql references an object on a linked sql server 2000 database? Technically, the sql is exactly the same, but I'm unsure if there is some exception due to the way linked objects are processed.
Thanks!
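For reference, this is the sp_executesql pattern BOL describes - the statement text stays constant and the values travel as typed parameters, so the plan can be matched and reused. The four-part linked-server name here is a made-up placeholder.
DECLARE @sql nvarchar(2000), @params nvarchar(500);

SET @sql    = N'SELECT c.CustomerId, c.Name
                FROM LinkedSrv.RemoteDb.dbo.Customers AS c
                WHERE c.Region = @region';
SET @params = N'@region nvarchar(50)';

EXEC sp_executesql @sql, @params, @region = N'West';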
Is there any way to invalidate cached query plans? I would rather target a specific query instead of invalidating all of them. Also, is there any SQL Server setting that will cause cached query plans to be invalidated even though only one character in the query has changed?
exec sp_executesql N'select
cast(5 as int) as DisplaySequence,
mt.Description + '' '' + ct.Description as Source,
c.FirstName + '' '' + c.LastName as Name,
cus.CustomerNumber Code,
c.companyname as "Company Name",
a.Address1,
a.Address2,
[code]....
In this query we have seen (on some databases) that simply changing '@CustomerId int',@CustomerId=1065 to '@customerId int',@customerId=1065 fixed a speed problem... we just changed the case of the Customer bind parameter. On other servers this has no effect. I'm thinking the server is using an old cached query plan, but I don't know for sure.
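Two hedged options for targeting a single plan rather than flushing everything (the object name is a placeholder): sp_recompile marks one object so its plans are rebuilt on next use, and on SQL Server 2008 and later DBCC FREEPROCCACHE also accepts a specific plan_handle, which can be located through the plan-cache DMVs.
-- 1) mark a single procedure or table so its plans recompile on next use
EXEC sp_recompile N'dbo.SomeProc';

-- 2) find the plan for one statement, then (on 2008+) free just that plan
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%DisplaySequence%';

-- DBCC FREEPROCCACHE (<plan_handle from the query above>);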
Hello,
This is my first post, and I'm hoping you all can help.
Using Reporting Services 2005, I have several reports that use embedded images.
All images render fine when the report execution is set to:
1) Always run this report with the most recent data
1a) Do not cache temporary copies of this report
However, when I change the execution to either a Cache or a Snapshot, the images and some charts render as red "X" placeholders. This is sometimes remedied when the user clicks the page refresh button, but not always.
Of course, I could just have all the concurrent users use the uncached report that hits the OLAP server, but that would be highly inefficient, and just plain slow.
Thanks for any help on this subject.
-michael
I am frequently going back and forth between making changes to my reports and previewing those changes, all from within Visual Studio, without publishing the reports each time.
The problem is that changes I make are frequently not reflected in the preview window unless I either close out of Visual Studio completely or wait some length of time (I'm not sure how long exactly, but half an hour seems to always do the trick).
Is there a way to clear any cache and force visual studio to completely reprocess a report?
At least when it is formatting changes I can identify whether the change has stuck, but when I'm fixing bugs in the code, I can't tell if I didn't fix it or if the change just hasn't taken effect.
I'm working on a reporting tool that could bring back hundreds of thousands of results at once. I need some way to run the actual query only once a day, and then the reporting tool would just pull back these cached results. In short, I need to figure out how to do this using a minimum amount of resources. Would a DataView work with something like this? How would I have it update only once a day? I appreciate any advice!
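One low-resource approach (a sketch only; all object names are invented) is to materialize the expensive query into a results table from a nightly SQL Agent job and point the reporting tool at that table instead of the live query.
CREATE TABLE dbo.ReportCache
(
    CustomerId int,
    TotalSales money,
    CachedAt   datetime NOT NULL DEFAULT (GETDATE())
);
GO

CREATE PROCEDURE dbo.RefreshReportCache   -- schedule this once a day
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
        DELETE FROM dbo.ReportCache;       -- or TRUNCATE TABLE if permissions allow

        INSERT INTO dbo.ReportCache (CustomerId, TotalSales)
        SELECT s.CustomerId, SUM(s.Amount)
        FROM dbo.Sales AS s                -- the expensive query goes here
        GROUP BY s.CustomerId;
    COMMIT;
END
The reporting tool then just selects from dbo.ReportCache.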
I have a datagridview bound to a table that is part of an Entity Framework model. A user can edit data in the datagridview and save the changes back to SQL. But there is a stored procedure that can also change the data - in SQL, not in the datagridview. When I try to "refresh" the datagridview, the LINQ query always returns the older cached data. Here's the code that I have tried using to force EF to retrieve new data:
// now refresh the maintenance datagridview data source
using (var context = new spdwEntities())
{
var maintData =
from o in spdwContext.MR_EquipmentCheck
where o.ProdDate == editDate
orderby o.Caster, o.Strand
select o;
mnt_DGV.DataSource = maintData;
}
When I debug, I can see that the SQL table has the updated data in it, but when this snippet of code runs, maintData has the old data in it.
With SQL2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), then immediately the stored proc using that table starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase of table reads to get the data. E.g, the next execution of the procedure after the auto stats kick in goes from 355 reads to 755000 reads (as depicted by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 335 number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
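For the manual fix described above, a hedged sequence would be to refresh the statistics first and then flag the procedure for recompilation (the object names are placeholders):
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;   -- or a sampled update if FULLSCAN is too heavy
EXEC sp_recompile N'dbo.usp_TheSlowProc';       -- the next execution builds a fresh plan against the new stats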
Is there a known problem with BIDS deploy not clearing the cache? I repeatedly (but not always) have an issue where I deploy an RDL from BIDS, but when I run it, the old report runs. I can't clear this out until I log into Report Manager, explicitly delete the RDL, and re-deploy from BIDS.
Is there a workaround, fix, process change, or configuration change I can do to keep this from happening?