Hello everyone,

I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and restoring it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming the issue is related to some external hardware factor such as disk space or memory, or possibly OS/software issues such as service packs or SQL Server configuration.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There are 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total physical memory 2 GB, available physical memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do. The query performance differs by an order of magnitude and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T
Is there any specific place where I can find out which SQL query is more efficient?
Is an INNER JOIN faster, or is SELECT ... WHERE ID IN (SELECT ...) faster?
I have two tables:

1. FLEET (the number of rows is fairly small). Attributes: Company_Id (PK), Fleet_Id (PK), Fleet_Name, Fleet_Description

2. USER_PRIVILEGE (the number of rows can reach up to 3 times the number of rows in the fleet table). Attributes: Company_Id (PK), Fleet_Id (PK), User_Id (PK), Privilege_Id (PK), Comment, Category
I want to select Fleet_Id and Fleet_Name from the fleet table where the current user has Privilege_Id = 1.
I have two possible select statements:
1.Option 1
SELECT Fleet_Name, Fleet_Id FROM FLEET WHERE (Company_Id = 2) AND (Fleet_Id IN (SELECT fleet_id FROM user_privilege WHERE user_id = 11 AND company_id = 2 AND privilege_id = 1)) ORDER BY Fleet_Name
2.Option 2
SELECT F.Fleet_Name, F.Fleet_Id FROM USER_PRIVILEGE U INNER JOIN FLEET F ON U.Fleet_Id = F.Fleet_Id WHERE (F.Company_Id = 2) AND (U.Privilege_Id = 1) AND (U.User_Id = 11) ORDER BY F.Fleet_Name
Which one is actually faster? Can the SQL statement with the INNER JOIN (option 2) execute faster than the one with the nested SELECT (option 1)?
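For what it's worth, on most versions of SQL Server the optimizer produces the same plan for an IN (subquery) and the equivalent INNER JOIN, so measuring beats guessing. A minimal way to compare them yourself, using the tables from the post:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Option 1: IN (subquery)
SELECT Fleet_Name, Fleet_Id
FROM FLEET
WHERE Company_Id = 2
  AND Fleet_Id IN (SELECT fleet_id FROM user_privilege
                   WHERE user_id = 11 AND company_id = 2 AND privilege_id = 1)
ORDER BY Fleet_Name;

-- Option 2: INNER JOIN
SELECT F.Fleet_Name, F.Fleet_Id
FROM USER_PRIVILEGE U
INNER JOIN FLEET F ON U.Fleet_Id = F.Fleet_Id
WHERE F.Company_Id = 2 AND U.Privilege_Id = 1 AND U.User_Id = 11
ORDER BY F.Fleet_Name;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

The Messages tab then reports logical reads and CPU/elapsed time for each form; with tables this small the difference will usually be negligible.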
Hi: I have the following query, can somebody help me?

SELECT s.Id, s.Name
FROM Switch s
INNER JOIN SwitchTelephoneRange r ON s.Id = r.Id
WHERE '1526858' BETWEEN FromTelephone AND ToTelephone

where '1526858' is a phone number. My problem is, I want to run the above query for each record in:

select Telephone from PhoneDirectory

So each telephone number in the second query would take the place of the literal in the first query. How can I do so? Do I need a loop? A cursor? Can you help please? Thanks
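No loop or cursor should be needed; joining PhoneDirectory to the range table lets the server do this in one set-based pass. A minimal sketch, assuming Telephone is comparable to the FromTelephone/ToTelephone columns:

-- Set-based version: one row per (telephone, matching switch) pair,
-- instead of running the range query once per phone number.
SELECT d.Telephone, s.Id, s.Name
FROM PhoneDirectory d
INNER JOIN SwitchTelephoneRange r
    ON d.Telephone BETWEEN r.FromTelephone AND r.ToTelephone
INNER JOIN Switch s
    ON s.Id = r.Id;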
Hi, I want to get the query execution time as output. I want the execution time only; this is for tuning purposes. The time displayed in the status bar is not helpful for me. Thanks.
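Two common approaches, sketched below: SET STATISTICS TIME makes the server report parse, compile, and execution times per statement, and GETDATE() bracketing measures elapsed time yourself (the MyTable query is just a stand-in for yours):

-- Option A: let the server report CPU and elapsed time per statement.
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM MyTable;
SET STATISTICS TIME OFF;

-- Option B: measure elapsed time yourself.
DECLARE @start DATETIME;
SET @start = GETDATE();
SELECT COUNT(*) FROM MyTable;
SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms;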
Hi there, I'm having a big performance problem with a SQL query. What I have done is change the physical layout (rearranged the columns) of one of the tables in the database. I used bcp to get the data out and then back in. There are about a quarter million rows in this table. I have created the same indexes, but now the same query takes a long time to execute. I have noticed that the showplan is somehow different than it used to be. This query uses the table I have changed and another one that I haven't touched. I have updated the stats, to no avail. Here are the showplans.

This one is slow:

STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for ORDER BY
    FROM TABLE SW_PERSON
    Nested iteration
    Index : swiPERSON10
    FROM TABLE SW_CASE
    Nested iteration
    Table Scan
    TO TABLE Worktable 1
STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE Worktable 1
    Using GETSORTED
    Table Scan
This one used to be fast:

STEP 1
    The type of query is INSERT
    The update mode is direct
    Worktable created for ORDER BY
    FROM TABLE SW_CASE
    Nested iteration
    Table Scan
    FROM TABLE SW_PERSON
    Nested iteration
    Index : PK_SW_PERSON_1__27
    TO TABLE Worktable 1
STEP 2
    The type of query is SELECT
    This step involves sorting
    FROM TABLE Worktable 1
    Using GETSORTED
    Table Scan
I think the problem is that the first one doesn't use the PK index, which is the one that links both tables. My question is how to force the query to use this index. PS: One thing I haven't done is recreate the indexes on the other table, but I don't think that would have made a difference. Thanks
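That showplan output looks like Sybase (or very old Microsoft) SQL Server; both let you force an index, though the syntax differs. A hedged sketch of both forms; the join column below is a guess, since the post doesn't show the actual query:

-- Sybase ASE style: index hint in parentheses after the table name.
SELECT *
FROM SW_PERSON p (index PK_SW_PERSON_1__27), SW_CASE c
WHERE p.person_id = c.person_id        -- hypothetical join column

-- Microsoft SQL Server style: table hint.
SELECT *
FROM SW_PERSON p WITH (INDEX (PK_SW_PERSON_1__27))
INNER JOIN SW_CASE c ON p.person_id = c.person_id

That said, a forced index masks the underlying problem; since the plan changed after the bcp reload, updating statistics on both tables (and rebuilding the SW_CASE indexes after all) is worth trying before resorting to hints.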
CREATE FUNCTION dbo.fnProductsRetrieveBySupplierID ( @SupplierID int ) RETURNS TABLE AS RETURN ( SELECT * FROM Products WHERE SupplierID = @SupplierID )
CREATE FUNCTION dbo.fnSuppliersRetrieveBySupplierID ( @SupplierID int ) RETURNS TABLE AS RETURN ( SELECT * FROM Suppliers WHERE SupplierID = @SupplierID )
I have been testing the performance of the following SQL statements:
Code:
1. SELECT * FROM Products INNER JOIN Suppliers ON (Products.SupplierID = Suppliers.SupplierID) WHERE Products.SupplierID = 3
2. SELECT * FROM dbo.fnProductsRetrieveBySupplierID (3), dbo.fnSuppliersRetrieveBySupplierID (3)
I have built a loop to execute each statement multiple times and then compare the execution times. Although both queries produce the same result, the 2nd one (which uses the functions) is about twice as slow. Does anyone know why?
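Both functions are inline table-valued functions, which the optimizer should expand into the calling query much like views, so identical plans are plausible; capturing the actual plans will confirm it. A sketch:

SET STATISTICS PROFILE ON;      -- shows actual plan operators and row counts

SELECT * FROM Products
INNER JOIN Suppliers ON Products.SupplierID = Suppliers.SupplierID
WHERE Products.SupplierID = 3;

SELECT * FROM dbo.fnProductsRetrieveBySupplierID(3),
              dbo.fnSuppliersRetrieveBySupplierID(3);

SET STATISTICS PROFILE OFF;

If the plans turn out identical, the 2x gap is probably per-call overhead in the measurement loop rather than a real execution-cost difference.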
I am joining three tables, each with about 1.5 million rows, selecting data from these three tables and inserting it into a fourth table. To avoid transaction log issues I am running the query in batches of 50,000 rows; it is taking about 5 hrs to insert all 1.5 million rows.

All the columns in the where clause have proper indexes. I ran showplan for the query and it is using the indexes properly, not doing any table scans. I also updated the statistics for all the indexes.

The query looks something like this:

insert into d (col1, col2, col3, ...)
select a.col1, b.col2, c.col3, ...
from a, b, c
where a.id = b.id and a.id = c.id and a.id between @minid and @currid

@minid starts at 1 and @currid at 50,000. I am running this in a loop; in the next iteration @minid becomes 50,001 and @currid 100,000, and so on.
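For reference, a minimal sketch of that batching pattern (table and column names follow the post; the CHECKPOINT is only meaningful in SIMPLE recovery, use BACKUP LOG between batches in FULL recovery):

-- Batched insert loop as described in the post (names are placeholders).
DECLARE @minid INT, @currid INT, @maxid INT;
SELECT @minid = 1, @currid = 50000, @maxid = MAX(id) FROM a;

WHILE @minid <= @maxid
BEGIN
    INSERT INTO d (col1, col2, col3)
    SELECT a.col1, b.col2, c.col3
    FROM a
    JOIN b ON a.id = b.id
    JOIN c ON a.id = c.id
    WHERE a.id BETWEEN @minid AND @currid;

    CHECKPOINT;   -- lets the log truncate between batches (SIMPLE recovery)

    SET @minid = @currid + 1;
    SET @currid = @currid + 50000;
END

If the 5-hour runtime persists, check whether the target table d has indexes or triggers that make each batch progressively slower, and whether the BETWEEN range actually lines up with a clustered index on a.id.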
I have two tables.

Employee: EmployeeCode int, primary key.

Employee_Stock: EmployeeCode int, StockCode varchar(10); primary key on (EmployeeCode, StockCode).

There is no foreign key relation between these 2 tables. Now my question is which gives more performance, and why?

1. Select * from Employee INNER JOIN Employee_Stock on Employee.EmployeeCode = Employee_Stock.EmployeeCode

2. Create a foreign key between Employee and Employee_Stock on EmployeeCode, and run the same query.

Actually we forgot to put the foreign key relationship between these 2 tables, and we have a lot of queries joining them. Now if we add the foreign key, is it going to improve performance or not?
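A foreign key mostly buys integrity, not join speed; for a plain two-table join like the one above, expect little change. If you do add it, add it WITH CHECK so existing rows are validated and the optimizer can trust (and sometimes eliminate) the join. A sketch:

-- WITH CHECK validates existing rows, so the constraint is "trusted"
-- and the optimizer may use it to simplify plans.
ALTER TABLE Employee_Stock WITH CHECK
ADD CONSTRAINT FK_EmployeeStock_Employee
    FOREIGN KEY (EmployeeCode) REFERENCES Employee (EmployeeCode);

The constraint's main performance win is join elimination, when a query touches Employee_Stock but only needs to know the matching employee exists.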
I wrote a query that uses a cursor. When I run the query on the dev box it takes 3 mins. I moved the query to the EPM database box and it takes forever to run. Usually EPM database query performance is much better. How do I start debugging the poor performance?
How can I check if the query is creating any table locks?
Purpose of the query: I get all the companies (20,000) and loop through each company in the cursor, doing calculations.
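Two starting points, sketched below: sp_lock (or sys.dm_tran_locks on SQL 2005+) shows what locks the session holds, and a 20,000-iteration cursor can very often be replaced by one GROUP BY query. The Company/Orders names and the SUM are placeholders, since the post doesn't show the actual calculation:

-- 1) See what locks your session holds while the query runs.
EXEC sp_lock;    -- or: SELECT * FROM sys.dm_tran_locks (SQL 2005+)

-- 2) Set-based alternative to the per-company cursor: one GROUP BY pass.
SELECT c.CompanyId,
       SUM(o.Amount) AS TotalAmount          -- stand-in for the calculation
FROM Company c
JOIN Orders o ON o.CompanyId = c.CompanyId   -- hypothetical detail table
GROUP BY c.CompanyId;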
I have a query like the one below and it takes a couple of seconds to run:
select a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider
from dbo.Assessment a
INNER JOIN (
    select distinct Registration_Key, p.ServiceProvider, max(CSDS_Object_Key) as [Sequence]
    from dbo.Assessment a
    INNER JOIN dbo.CD_Provider_Xref p ON a.Provider_CD = p.Provider_CD
    where Creation_DT >= '07/01/2007' and Reason_CD = 1
    group by Registration_Key, p.ServiceProvider
) as s1 ON a.CSDS_Object_Key = s1.Sequence
INNER JOIN dbo.CD_Provider_XREF p ON a.Provider_CD = p.Provider_CD
INNER JOIN dbo.CD_Agreement_Type ag ON ag.Agreement_Type_CD = a.Agreement_Type_CD
LEFT OUTER JOIN (
    select distinct Registration_Key, p.ServiceProvider, 1 as served
    from dbo.Encounters e
    INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
    where Encounter_Begin_DT between '08/01/2007' and '08/31/2007'
      and Procedure_CD is not null and Encounter_Units > 0
) as s2 ON a.Registration_Key = s2.Registration_Key and p.ServiceProvider = s2.ServiceProvider
group by a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider
However, if I add the served field (stamped with 1) it takes forever to run. All of the join columns have indexes, clustered and non-clustered, and I don't see any index fragmentation...
select a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider, served
from dbo.Assessment a
INNER JOIN (
    select distinct Registration_Key, p.ServiceProvider, max(CSDS_Object_Key) as [Sequence]
    from dbo.Assessment a
    INNER JOIN dbo.CD_Provider_Xref p ON a.Provider_CD = p.Provider_CD
    where Creation_DT >= '07/01/2007' and Reason_CD = 1
    group by Registration_Key, p.ServiceProvider
) as s1 ON a.CSDS_Object_Key = s1.Sequence
INNER JOIN dbo.CD_Provider_XREF p ON a.Provider_CD = p.Provider_CD
INNER JOIN dbo.CD_Agreement_Type ag ON ag.Agreement_Type_CD = a.Agreement_Type_CD
LEFT OUTER JOIN (
    select distinct Registration_Key, p.ServiceProvider, 1 as served
    from dbo.Encounters e
    INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
    where Encounter_Begin_DT between '08/01/2007' and '08/31/2007'
      and Procedure_CD is not null and Encounter_Units > 0
) as s2 ON a.Registration_Key = s2.Registration_Key and p.ServiceProvider = s2.ServiceProvider
group by a.Registration_Key, ag.Agreement_Type_Name, p.ServiceProvider, served
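One thing worth trying: materialize the s2 derived table into an indexed temp table first, so adding served to the GROUP BY no longer forces the optimizer to re-cost the whole derived table. A hedged sketch:

-- Materialize the LEFT JOIN side once; the outer query then groups
-- over a small, indexed temp table instead of a derived table.
SELECT DISTINCT Registration_Key, p.ServiceProvider, 1 AS served
INTO #served
FROM dbo.Encounters e
INNER JOIN dbo.CD_Provider_Xref p ON e.Provider_CD = p.Provider_CD
WHERE Encounter_Begin_DT BETWEEN '08/01/2007' AND '08/31/2007'
  AND Procedure_CD IS NOT NULL AND Encounter_Units > 0;

CREATE INDEX ix_served ON #served (Registration_Key, ServiceProvider);

-- Then LEFT JOIN #served in the main query in place of the s2 subquery,
-- and select ISNULL(served, 0) so the grouped value is deterministic.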
Hello SQL gurus. In the query below I am using two TOP clauses to return the desired row. I am wondering if someone can shed some light on how to avoid using two TOP clauses and combine this into just one select query?
select TOP 1 * from (select TOP 2 Num from A order by Num) X order by Num desc
I truly appreciate your help, as this performance issue has been bugging me for quite some time...
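On SQL Server 2005 and later, ROW_NUMBER() expresses "the 2nd-lowest Num" in a single statement; a sketch equivalent to the TOP 2 / TOP 1 pair:

-- Rank rows by Num and keep only the 2nd one.
SELECT Num
FROM (SELECT Num, ROW_NUMBER() OVER (ORDER BY Num) AS rn
      FROM A) AS x
WHERE rn = 2;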
I usually am all over answering these kinds of questions, but while I continue to work on this issue, maybe someone here can lend me a hand. A vendor application we run stores metadata about backup blobs stored on a NAS device. The app basically backs up select folders on 1400 remote computers in the back office of our stores and stores this on a NAS, while maintaining metadata about the BLOBs in SQL Server so that it can push recovery of the data back to the original store it came from. The database is roughly 80 GB in size, has a single filegroup, and is on its own dedicated LUN. It uses TempDB heavily, and this is not something that I can change, but TempDB is on a different disk array.
Today I spent hours on a conference call with them looking at a specific stored procedure that is used to clean up records in the database after a BLOB file is deleted. A single BLOB file can have millions of related records in the database. There is a LEFT JOIN in the code against a table with 150+ million rows of data in it. The table is fairly small, only 5 GB of data, but the LEFT JOIN spools 2.4 GB of data into a hash match. It seems to me the left join can't be removed, but I don't fully understand how all of this works because I didn't write the application. It is an INDEX SCAN, and I can't seem to eliminate it. Is there anything I can do to help this thing out?
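Without the procedure text this is only a direction, but an index scan feeding a hash match usually means no index covers the join key plus the columns the query reads. A hedged sketch of the kind of covering index that can convert the scan into a seek (the table and column names are hypothetical; INCLUDE requires SQL Server 2005+):

-- Covering index: join/filter column first, then INCLUDE the columns
-- the query actually reads, so the scan can become a narrow seek.
CREATE NONCLUSTERED INDEX IX_BlobDetail_BlobId
ON dbo.BlobDetail (BlobId)                 -- hypothetical table and key
INCLUDE (RecordStatus, LastModified);      -- hypothetical query columns

Even with a covering index, if the LEFT JOIN genuinely touches most of the 150 million rows, a hash join fed by a scan may really be the cheapest plan; batching the cleanup by key range often helps more than fighting the join.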
I am attempting to get a better understanding of why my SQL 2005 setup, when running a simple select statement on a large table, is displaying very low IO in performance monitor. If I run a single SELECT * FROM testtable I see 4 MB/sec transfer and Disk Reads/sec around 8-9. This particular table is sitting on a single U320 10k drive, so I was expecting to see far more substantial IO. Does anyone have any information on how IO is consumed by different SQL operations, so I can obtain a better understanding?
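Two things worth checking: whether the pages are already cached (a warm buffer pool does almost no physical IO), and what SQL Server itself thinks it is reading. A sketch for SQL 2005:

-- Logical vs physical reads for the statement itself.
SET STATISTICS IO ON;
SELECT * FROM testtable;
SET STATISTICS IO OFF;

-- Cold-cache test (development only!): flush the buffer pool first.
DBCC DROPCLEANBUFFERS;

-- Cumulative file-level IO as seen by SQL Server 2005.
SELECT * FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

Also, large scans use read-ahead, which issues fewer but larger IOs, so Disk Reads/sec can look low even while the scan runs flat out; Disk Read Bytes/sec is the more telling counter.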
So I am experimenting with upgrading a Windows Mobile application from .NET CF 2.0 to .NET CF 3.5, along with moving my SQL Server 2005 Compact database to SQL Server Compact 3.5. I have a database that I upgraded using the recommended method (creating a data source in VS2008, opening the SQL 2005 Compact .sdf file, and allowing the tool to upgrade it to SQL Compact 3.5). On the device (Dell Axim x51), with the .sdf files on an SD card, the query, when executed against the SQL 2005 Compact database file, takes 1.5 seconds, but takes 1 min 41 sec to execute against the SQL Compact 3.5 database.
This is a fairly simple query, with an inner join (using about 4 inner join constraints), a where clause (over about 3 things), and an order by clause. The execution plan for the SQL Compact 3.5 query shows index seeks (one consuming 2% and the other consuming 0%, with the inner join using 98%). The database files are on the order of 90MB.
Can anyone offer any suggestion why the SQL Compact 3.5 query performance would be so much worse than the SQL 2005 Compact performance?
select distinct a.* from test a inner join test1 b on b.col1 = a.col1 inner join test2 c on c.col2 = a.col2 where exists (select NULL from test3 d where (d.col3 = a.col3 or a.col3 is null))
All the columns involved in the WHERE clause and JOIN conditions have indexes. Is there any alternative available for the above that can increase performance?
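The OR inside the EXISTS is the likely blocker, since it prevents a clean index seek on test3. A common rewrite splits the OR into two seekable branches; a hedged sketch (UNION rather than UNION ALL preserves the DISTINCT, but verify the results match on your data):

-- Branch 1: rows whose col3 matches something in test3 (seekable).
SELECT a.*
FROM test a
INNER JOIN test1 b ON b.col1 = a.col1
INNER JOIN test2 c ON c.col2 = a.col2
WHERE EXISTS (SELECT NULL FROM test3 d WHERE d.col3 = a.col3)

UNION

-- Branch 2: rows with NULL col3, which the original admits whenever
-- test3 is non-empty.
SELECT a.*
FROM test a
INNER JOIN test1 b ON b.col1 = a.col1
INNER JOIN test2 c ON c.col2 = a.col2
WHERE a.col3 IS NULL
  AND EXISTS (SELECT NULL FROM test3 d);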
Hello, here I have a small doubt about validating a username and password. I validate the username and password with the following query (forget about case-sensitivity of the password):

select password from table where username = 'Uname' and password = 'pwd';

Now in the second scenario, I use the following:

select password from table where username = 'Uname'

and validate the password in .NET code.

1) If a user with 'Uname' does not exist in the database, which query is faster (first or second)?
2.1) If the user exists and the password does not match, which is faster?
2.2) Same as 2.1, but if there is a clustered index on the username column, is the first query optimized? Thanks
How could I tell the performance difference between two queries:
One is: select * from table where LOWER(columnname) = 'value'

The other is: select * from table where columnname = 'value'

Basically the difference is the LOWER() function: how much will this function affect query performance? Is there a formal way to test it, or can it be reasoned out? Thanks, Mike
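The cost of LOWER() here is not the function call but sargability: wrapping the column prevents an index seek, so the query scans. You can measure with SET STATISTICS TIME/IO; and if you genuinely need case-insensitive lookups on a case-sensitive column, an indexed computed column is one option. A sketch (MyTable is a placeholder):

-- Lowercase copy of the column with its own index, so the lookup
-- can seek instead of scanning and LOWER()-ing every row.
ALTER TABLE MyTable ADD columnname_lower AS LOWER(columnname);
CREATE INDEX IX_MyTable_Lower ON MyTable (columnname_lower);

SELECT * FROM MyTable WHERE columnname_lower = 'value';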
Hello, I'm new to OLAP systems and MDX, and am doing some testing on Microsoft Analysis Services 2000 SP3; the database is Microsoft SQL Server 2000 SP3. In the cube I designed, the fact table contains purchase information, including the cost and quantity of the parts and the suppliers of the parts. There are 2 measures, qtyAvailable and cost. Two dimensions are involved: part and supplier. Here is what I'm going to do: 1. calculate sum(qtyAvailable * cost * 0.0001) for all the items in the fact table; let us call this value sum1. 2. find in the fact table all of those parts whose sum(qtyAvailable * cost) is greater than sum1.
Here is the MDX to do the 2 things above:

WITH
MEMBER [Measures].[prod1] AS '[Measures].[qtyAvailable] * [Measures].[cost]'
MEMBER [Measures].[prod2] AS '[Measures].[prod1] * 0.0001'
MEMBER [Measures].[sum1] AS 'sum(crossjoin([part].members, [supplier].members), [Measures].[prod2])'
MEMBER [Measures].[sum2] AS 'sum(crossjoin({[part].currentmember}, [supplier].members), [Measures].[prod1])'
SELECT {[Measures].[sum2]} ON COLUMNS,
       Filter({[part].members}, [Measures].[sum2] > [Measures].[sum1]) ON ROWS
FROM cube1
It takes 9 seconds to calculate only sum1 using a separate MDX query. The value of sum1 is 8256865.23. If I replace [sum1] in the MDX above with the literal 8256865.23, it takes several minutes to finish. But it keeps running for hours if I run the MDX query with [sum1] instead of 8256865.23. So the calculation of sum1 seems to be the bottleneck. The query iterates through the members of the [part] dimension, and I don't know whether [sum1] is recalculated for each iteration or not. However, sum1 is constant for the duration of the whole MDX query, so it only needs to be calculated once. I tried to use the cache to improve performance but it didn't work. Can anyone tell me whether there is any way to optimize this query? Thanks so much, Roy
Please let me know how to increase the performance of the query below:
SELECT DISTINCT a.* FROM a INNER JOIN #temp1 b on (a.col1 = b.col1 OR a.col1 IS NULL) INNER JOIN #temp2 c on (a.col2 = c.col1 OR a.col2 IS NULL)
Here, there are no indexes/PKs on the columns in any of the tables. But I am sure that #temp1 and #temp2 have distinct/unique values in the col1 columns used here. Table 'a' has redundant values in the columns used here.

Should I create PKs on those columns of #temp1 and #temp2? Is that enough? Or should I also create indexes on the columns of table 'a' used here?

Also, please let me know if there is any other way to increase the performance of the query.
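Primary keys on the temp tables are a cheap win: they give the optimizer uniqueness information plus an index (note a PRIMARY KEY requires the columns to be NOT NULL). The OR ... IS NULL conditions are the harder problem; splitting them into a UNION, as discussed for the similar query earlier in this thread, is worth testing. The index part, sketched:

-- Unique clustered indexes via primary keys on the temp tables.
ALTER TABLE #temp1 ADD CONSTRAINT PK_temp1 PRIMARY KEY (col1);
ALTER TABLE #temp2 ADD CONSTRAINT PK_temp2 PRIMARY KEY (col1);

-- Nonclustered indexes on the probe columns of the large table.
CREATE INDEX IX_a_col1 ON a (col1);
CREATE INDEX IX_a_col2 ON a (col2);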
Hi, I want to know if anyone has any clue about why this happens. I have a table on SQL Server 7 with 320 thousand rows, and when I execute a SELECT * on it, it takes about 6 seconds to return an answer. But the same table on SQL Server 2005 Ent takes about 16 seconds. Is that normal?
Hi all. I'm new to this forum and looking for some assistance.
I've run into a unique (for me) performance problem.
I have a select statement that performs fine (< 1 second) using one set of values in the criteria, but very poorly (> 3 minutes) using different values. In both circumstances the query returns zero rows. The query involves a parent-child join with the criteria spread across both tables.
The execution plans look similar between the two, the difference being a few percentage points on some of the operations. The tuning advisor has no recommendation in case 1, but suggests a couple of additional indexes and 4 statistics in case 2.
My gut tells me that the solution is *not* applying the additional indexes/statistics but some other issue. Or it could be the sushi I just ate.
Anyway, I'm hoping someone can point me in the right direction as to what to analyze to determine why simply changing a single supplied criteria value would have such a dramatic effect on performance.
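This symptom (same plan shape, wildly different runtimes for different values) usually traces back to row-count estimates: one value is estimated to hit few rows, the other many, and a plan good for one is bad for the other. Two low-risk experiments, sketched with placeholder names (OPTION (RECOMPILE) needs SQL Server 2005+):

-- 1) Refresh statistics on both tables so estimates reflect reality.
UPDATE STATISTICS ParentTable;   -- table names are placeholders
UPDATE STATISTICS ChildTable;

-- 2) Let the optimizer build a fresh plan for each set of values.
SELECT p.*, c.*
FROM ParentTable p
JOIN ChildTable c ON c.ParentId = p.Id
WHERE p.SomeColumn = @value1 AND c.OtherColumn = @value2
OPTION (RECOMPILE);

Comparing the estimated vs actual row counts in the two actual execution plans will usually point at the operator whose estimate is off.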
I have just created a logging table that I anticipate will have tens of millions of rows (maybe hundreds of millions eventually).

Basically it's a very basic, narrow table; we are using it to log hits on images for a webserver.

We want to run queries that show how many rows are logged per day, etc.; however, these queries, which we anticipate will be very heavy, must not slow down the system.
I have been recommended to have a separate database (mirror/replica) for reporting, so that the performance of regular activity will not be affected.
I assume this means I would need another server for this other database?
I am thinking there are probably some alternative solutions to this as well. Getting a dedicated server just for these queries really isn't an option.

In order to improve things, it is not a problem to make some sacrifices. For example, having the data update every 15 minutes is more than acceptable.

I see certain websites I use employ this strategy of updating data every 15 minutes, but I am unsure what is likely going on behind the scenes. Also, the queries are lightning fast when run. I am thinking that they have some sort of table populated with precomputed data, so it's quick to query.
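That guess is usually right: a small summary table refreshed on a schedule means reports never touch the big log table. A minimal sketch with illustrative names, which a SQL Agent job could run every 15 minutes (the DATE type needs SQL 2008+; use DATETIME on earlier versions):

-- Summary table: one row per image per day.
CREATE TABLE dbo.ImageHitsDaily (
    HitDate  DATE NOT NULL,
    ImageId  INT  NOT NULL,
    HitCount INT  NOT NULL,
    PRIMARY KEY (HitDate, ImageId)
);

-- Refresh job: recompute today's rows from the big log table.
DELETE FROM dbo.ImageHitsDaily WHERE HitDate = CONVERT(DATE, GETDATE());

INSERT INTO dbo.ImageHitsDaily (HitDate, ImageId, HitCount)
SELECT CONVERT(DATE, HitTime), ImageId, COUNT(*)
FROM dbo.ImageHitLog                 -- hypothetical name for the log table
WHERE HitTime >= CONVERT(DATE, GETDATE())
GROUP BY CONVERT(DATE, HitTime), ImageId;

An indexed view can achieve the same thing automatically, but it adds write cost to every insert into the log table, which may matter at your volumes.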
Any thoughts or suggestions to give me some direction, are very much appreciated !
I have a weird problem with a query we are running. I have a website and a web service - both are calling the exact same query with the exact same parameters.
I find that this query through the web service is significantly slower than when being run through the web site.
I've used SQL Profiler to get an idea of what's happening on the SQL Server side: when the web site hits SQL Server, the query runs in about 1 second and returns. On the web service side it hits SQL Server and times out (timeout set to 60 seconds). I can't figure out why SQL Server would time out with the same query (and result set) on the web service but work quickly on the web site.

If I run the same query (with the same parameters) multiple times on the web service, the second (and subsequent) runs will return, but take significantly longer (20+ seconds). I can't figure out why SQL Server would respond differently. It's a select query: no updates, no locking occurring. I can consistently get it to perform quickly on the web site and consistently get it to perform badly on the web service.
Does anyone have any ideas of what it might be, or how I might dig deeper into troubleshooting the problem?
note: both the web site and the web service are running from the same server with the same credentials and connection strings.
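One classic cause of exactly this split is differing session SET options (ARITHABORT in particular) between the two clients: SQL Server then compiles and caches a separate plan per option set, and parameter sniffing can leave one of them with a terrible plan. A quick check to run from each caller's connection:

-- Run from the web site's connection and from the web service's connection;
-- any difference in the reported options means separate cached plans.
DBCC USEROPTIONS;

If the options differ, align them in the connection setup (or with explicit SET statements); Profiler's ExistingConnection event also shows the options each connection logged in with.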
I have a user who is complaining of delays when she queries a database over a remote desktop connection. As I have no experience with this, I was wondering if there were any experts out there on how to improve remote performance in SQL Server 05. Any suggestions are appreciated.
The following stored procedure is taking too long (in my opinion). The problem seems to be the SUM line. When it is commented out, the query takes a second or two. When included, the response time climbs to a minute and a half. Is my code that inefficient, or are SUM and ABS calls just that slow? Any suggestions to speed this up? Thanks, - Jason

SET NOCOUNT ON

DECLARE @PriceTable TABLE (
    [Symbol] VARCHAR(15),
    [Identity] VARCHAR(15),
    [Exchange] VARCHAR(5),
    [ClosingPrice] DECIMAL(18, 6))

-- Use previous trading date if none specified
IF @TradeDate IS NULL
    SET @TradeDate = Supporting.dbo.GetPreviousTradeDate()

-- Get closing prices from historical positions
INSERT INTO @PriceTable
SELECT [Symbol], [Identity], [Exchange], [ClosingPrice]
FROM Historical.dbo.ClearingPosition
WHERE [TradeDate] = CONVERT(NVARCHAR(10), @TradeDate, 101)

-- Query the historical position table
SELECT
    tblTrade.[Symbol],
    tblTrade.[Identity],
    tblTrade.[Exchange],
    tblTrade.[Account],
    SUM((CASE tblTrade.[Side] WHEN 'B' THEN -ABS(tblTrade.[Quantity])
         ELSE ABS(tblTrade.[Quantity]) END)
        * (tblPrice.[ClosingPrice] - tblTrade.[Price])) AS [Value]
FROM Historical.dbo.ClearingTrade tblTrade
LEFT JOIN @PriceTable tblPrice
    ON (tblTrade.[Symbol] = tblPrice.[Symbol]
    AND tblTrade.[Identity] = tblPrice.[Identity])
WHERE CONVERT(NVARCHAR(10), [TradeTimestamp], 101) = CONVERT(NVARCHAR(10), @TradeDate, 101)
GROUP BY tblTrade.[Symbol], tblTrade.[Identity], tblTrade.[Exchange], tblTrade.[Account]
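SUM and ABS are cheap; the likelier costs are the CONVERT on [TradeTimestamp], which defeats any index on that column, and the join to an unindexed table variable. A hedged sketch of the fixes, assuming @TradeDate carries no time portion:

-- Sargable replacement for the WHERE clause: a half-open date range
-- lets an index on TradeTimestamp be used.
WHERE [TradeTimestamp] >= @TradeDate
  AND [TradeTimestamp] <  DATEADD(day, 1, @TradeDate)

-- Also consider giving the table variable a key so the join has an index:
-- DECLARE @PriceTable TABLE (..., PRIMARY KEY ([Symbol], [Identity]))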
I have a table called work_order which has over 1 million records, and a contractor table which has over 3000 records. When I run this query it takes a long time, since it groups by contractor and does multiple sub-SELECTs. Is there any way to improve the performance of this query?

SELECT ckey, cnam, t1.contractor_id, count(*) as tcnt,
(SELECT count(*) FROM work_order t2 WHERE t1.contractor_id = t2.contractor_id and rrstm = 1 and rcdt is NULL) as r1,
(SELECT count(*) FROM work_order t3 WHERE t1.contractor_id = t3.contractor_id and rrstm = 2 and rcdt is NULL) as r2,
(SELECT count(*) FROM work_order t4 WHERE t1.contractor_id = t4.contractor_id and rrstm = 3 and rcdt is NULL) as r3,
(SELECT count(*) FROM work_order t5 WHERE t1.contractor_id = t5.contractor_id and rrstm = 4 and rcdt is NULL) as r4,
(SELECT count(*) FROM work_order t6 WHERE t1.contractor_id = t6.contractor_id and rrstm = 5 and rcdt is NULL) as r5,
(SELECT count(*) FROM work_order t7 WHERE t1.contractor_id = t7.contractor_id and rrstm = 6 and rcdt is NULL) as r6,
(SELECT count(*) FROM work_order t8 WHERE t1.contractor_id = t8.contractor_id and rcdt is NULL) as open_count,
(SELECT count(*) FROM work_order t9 WHERE t1.contractor_id = t9.contractor_id and vendor_rec is not NULL) as Ack_count,
(SELECT count(*) FROM work_order t10 WHERE t1.contractor_id = t10.contractor_id and (rtyp is NULL or rtyp <> 'R') and rcdt is NULL) as open_norwo
FROM work_order t1, contractor
WHERE t1.contractor_id = contractor.contractor_id
  and contractor.tms_user_id is not NULL
GROUP BY ckey, cnam, t1.contractor_id
ORDER BY cnam
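Each correlated subquery rescans work_order once per output row. Conditional aggregation collects all ten counts in a single pass over the table; a hedged sketch that should return the same numbers (worth verifying), since the subqueries correlate only on contractor_id:

-- One pass over work_order instead of ten: each CASE tallies one bucket.
SELECT c.ckey, c.cnam, t1.contractor_id,
       COUNT(*) AS tcnt,
       SUM(CASE WHEN rrstm = 1 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r1,
       SUM(CASE WHEN rrstm = 2 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r2,
       SUM(CASE WHEN rrstm = 3 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r3,
       SUM(CASE WHEN rrstm = 4 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r4,
       SUM(CASE WHEN rrstm = 5 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r5,
       SUM(CASE WHEN rrstm = 6 AND rcdt IS NULL THEN 1 ELSE 0 END) AS r6,
       SUM(CASE WHEN rcdt IS NULL THEN 1 ELSE 0 END) AS open_count,
       SUM(CASE WHEN vendor_rec IS NOT NULL THEN 1 ELSE 0 END) AS Ack_count,
       SUM(CASE WHEN (rtyp IS NULL OR rtyp <> 'R') AND rcdt IS NULL THEN 1 ELSE 0 END) AS open_norwo
FROM work_order t1
JOIN contractor c ON t1.contractor_id = c.contractor_id
WHERE c.tms_user_id IS NOT NULL
GROUP BY c.ckey, c.cnam, t1.contractor_id
ORDER BY c.cnam;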
Hey... why do I sometimes experience low performance when I use a parameter instead of an exact value? For example, the following performs very badly:

declare @TypeID integer
select @TypeID = 10
select ID from t_Table1 t1, t_Table2 t2
where t1.ID = t2.FID and t2.Type = @TypeID

but this query performs OK:

select ID from t_Table1 t1, t_Table2 t2
where t1.ID = t2.FID and t2.Type = 10

Jakob
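With the literal 10, the optimizer can consult the column's statistics histogram; with a local variable it cannot see the value at compile time and falls back to an average-density guess, which can yield a very different plan. One workaround is a stored procedure, whose parameter values are visible ("sniffed") at compile time. A sketch:

-- Parameters of stored procedures (unlike local variables in a batch)
-- are visible to the optimizer when the plan is compiled.
CREATE PROCEDURE dbo.GetIdsByType
    @TypeID INT
AS
    SELECT t1.ID
    FROM t_Table1 t1
    JOIN t_Table2 t2 ON t1.ID = t2.FID
    WHERE t2.Type = @TypeID;
GO

EXEC dbo.GetIdsByType @TypeID = 10;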
Guys, I'm stumped. While it's not pertinent to the matter, we are running a Vignette content management system on Win2k with SQL 2000 Enterprise on a cluster. The server has 2 GB of RAM, 2 CPUs, and the database size is 1.5 GB.

The query below is fired at login. The indexes seem fine based on the query plan. When I look through Profiler, the query below takes a very high number of CPU cycles and reads. It consistently takes more than 1.5 seconds to execute. I did a dbcc pintable for ALL the tables in the query and that did not help either; it seemed to make things worse (3 seconds and above). Any idea what could be the issue here? The server is not really heavily taxed. The tables are small; they have very few rows:

VGNCCB_ROLE              939
VGNCCB_ROLE_JT         62389
VGNCCB_GROUP_USER_JT    1364

The problem query:

select ROLE_ID, NAME, DESCRIPTION, CREATE_DATE, MODIFIED_DATE
FROM vign.VGNCCB_ROLE            -- clustered index on ROLE_ID
WHERE ROLE_ID in (
    select ROLE_ID
    FROM vign.VGNCCB_ROLE_JT     -- non-clustered indexes on USER_NAME and on GROUP_ID
    WHERE USER_NAME = 'testRole'
       or GROUP_ID in (
           select GROUP_ID
           FROM vign.VGNCCB_GROUP_USER_JT   -- non-clustered index on USER_NAME
           WHERE USER_NAME = 'testRole'))

I'd appreciate it if someone could follow me in this thread to completion. Such a simple query should not take this long. TIA, Jack...
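The OR between the USER_NAME predicate and the nested IN is the usual suspect, since it can force a scan of the 62,389-row junction table. Splitting the OR into a UNION lets each branch use its own index; a hedged sketch that should return the same rows:

-- Each branch can seek on its own index; UNION removes duplicates,
-- matching the semantics of the original IN (...) list.
select ROLE_ID, NAME, DESCRIPTION, CREATE_DATE, MODIFIED_DATE
FROM vign.VGNCCB_ROLE
WHERE ROLE_ID in (
    select ROLE_ID FROM vign.VGNCCB_ROLE_JT
    WHERE USER_NAME = 'testRole'
    UNION
    select jt.ROLE_ID
    FROM vign.VGNCCB_ROLE_JT jt
    JOIN vign.VGNCCB_GROUP_USER_JT gu ON jt.GROUP_ID = gu.GROUP_ID
    WHERE gu.USER_NAME = 'testRole')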
Hey guys, here's my situation. I have a table called, let's say, 'Tree', as illustrated below:

Tree
====
TreeId (integer)(identity) not null
L1 (integer)
L2 (integer)
L3 (integer)
...
L10 (integer)

The combination of the values of L1 thru L10 is called a "Path", and the L1 thru L10 values are stored in a second table, let's say called 'Leaf':

Leaf
====
LeafId (integer)(identity) not null
LeafText varchar(2000)

Here's my problem: I need to look up a given keyword in each path of the Tree table, and return each individual column for the paths that match the criteria. Here's the main idea of how I have this now:

SELECT TreeId, L1, L2, ..., L10,
       GetText(L1) + GetText(L2) + ... + GetText(L10) AS PathText
INTO #tmp
FROM Tree   -- GetText is a lookup function for the Leaf table

SELECT L1, GetText(L1), L2, GetText(L2), ..., L10, GetText(L10)
FROM #tmp
WHERE CharIndex(@keyword, PathText) > 0

Does anyone know a better, smarter, more efficient way to accomplish this task? :)

Thks,
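One set-based alternative: a path matches when any of its ten leaves contains the keyword, so you can join Tree to Leaf over the L1..L10 columns and test LeafText directly, skipping the scalar function and the temp table. A hedged sketch, shown for three levels (extend the IN list to L10):

-- A tree row qualifies if at least one of its leaves contains the keyword.
SELECT DISTINCT t.TreeId, t.L1, t.L2, t.L3      -- ... up to t.L10
FROM Tree t
JOIN Leaf lf ON lf.LeafId IN (t.L1, t.L2, t.L3) -- ... up to t.L10
WHERE CHARINDEX(@keyword, lf.LeafText) > 0;

CHARINDEX over varchar(2000) still scans every leaf's text; if this keyword search is frequent, full-text indexing Leaf.LeafText is worth considering.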