Is There A Way To Change The Index A Query Uses Without Modifying The Query?
May 9, 2006
MS SQL Server 2000 SP3
I'm not the most knowledgeable DBA; I've had to learn almost completely on my own, and on a production server, because it's the only MS SQL Server I have access to.
Everything was fine before I took down the production server for maintenance. Someone suggested that I re-index my tables because I was having performance issues with a particularly large table (the re-index didn't help that table, by the way), so I did re-index.
Now everything works wonderfully, except for the performance issue mentioned and one other thing that is going horribly wrong.
Here is the table:
create table ABMcontactlink
(
classifier varchar(20) not null, /* Classification of contact. */
transmitter varchar(36) not null,
contact integer not null, /* Link to ABMcontact (detail) table */
primary key (classifier,transmitter,contact),
foreign key (contact) references ABMcontacts(identifier),
group_name varchar(20) null,
last_modification_date datetime, /* Date/time record last touched */
last_modification_id varchar(40) /* Who last touched record */
)
go
create index IndexABMcontactlink on
ABMcontactlink(classifier,transmitter)
go
create index CandidateABMcontactlink on
ABMcontactlink(transmitter)
go
As you can see, I have the primary key, which creates a clustered index (PK_ABMContactlink_Some Number), and two other indexes.
Now, this is a very busy production database, and most quick, short queries benefit more from CandidateABMContactlink than from the other two indexes.
Unfortunately, in this production system and this table, seconds count a lot. With roughly 3000-4000 queries an hour pulling information from this table, I personally believe I need to keep CandidateABMContactlink, and I'm not willing to find out otherwise on a production server.
** Now to the Problem at Hand **
I have one query that kicks off about 7 times a day and used to take less than 1 minute before the re-index. Now it takes 30 minutes, and it drags the system to a crawl.
I did some looking into it: this query is using CandidateABMContactlink and takes 30 minutes. If it uses PK_ABMcontactlink it finishes in under 45 seconds.
Most queries are simple, "Select Column_names from abmcontacts where identifier in (select contact from abmcontactlink where transmitter = 'XXXXXX')"
This one is:
select * from ABMcontacts where (
(last_modification_date >= '2006-04-28 04:40:03' and last_modification_date <= '2006-05-09 16:41:14')
and EXISTS(select contact from ABMcontactlink where contact = identifier
and EXISTS(select transmitter_id from ABMtransmitter where transmitter_id = transmitter and (dealer = 'XXXX'))))
or
(EXISTS(select contact from ABMcontactlink where
(last_modification_date >= '2006-04-28 04:40:03' and last_modification_date <= '2006-05-09 16:41:14')
and contact = identifier and EXISTS(select transmitter_id from ABMtransmitter where transmitter_id = transmitter and (dealer = 'XXXX'))))
I can't change the query, so how do I make it use the index I want without removing the index it is currently using? (I know there are much better ways to write the above query; I'm not the culprit. If I could rewrite it, I would.)
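One low-risk thing to try first, sketched on the assumption that the re-index left the optimizer with statistics that now favour the wrong index; it does not touch the query and does not drop CandidateABMcontactlink:

-- Refresh the table's statistics with a full scan, then flush the cached
-- plans that reference the table so the 30-minute query recompiles.
UPDATE STATISTICS ABMcontactlink WITH FULLSCAN
GO
EXEC sp_recompile 'ABMcontactlink'
GO

If the optimizer still picks CandidateABMcontactlink after a recompile, then as far as I know there is no supported way on SQL Server 2000 to pin a different index to a query you cannot edit (plan guides only arrive in SQL Server 2005), so the remaining options involve changing the indexes themselves.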
I have a query that I am working on that involves 2 tables. The query below is working correctly and bringing back the desired results, except I want to add 1 more column of data, and I'm not exactly sure how to write it.
What I want to add is the following data.
For each row that is brought back, we want the COUNT(*) of users who joined the website (tblUserDetails) where their tblUserDetails.date is greater than the tblReferEmails.referDate.
Effectively we are attempting to track how well the "tell a friend" via email feature works at converting referrals into joined members.
Any assistance is much appreciated!!
thanks once again mike123
SELECT CONVERT(varchar(10), referDate, 112) AS referDate,
    SUM(CASE WHEN emailSendCount = 0 THEN 1 ELSE 0 END) AS '0',
    SUM(CASE WHEN emailSendCount = 1 THEN 1 ELSE 0 END) AS '1',
    SUM(CASE WHEN emailSendCount = 2 THEN 1 ELSE 0 END) AS '2',
    SUM(CASE WHEN emailSendCount = 3 THEN 1 ELSE 0 END) AS '3',
    SUM(CASE WHEN emailSendCount > 3 THEN 1 ELSE 0 END) AS '> 3',
    SUM(CASE WHEN emailSendCount > 0 THEN 1 ELSE 0 END) AS 'totalSent',
    COUNT(*) AS totalRefers,
    COUNT(DISTINCT referUserID) AS totalUsers,
    SUM(CASE WHEN emailAddress LIKE '%hotmail%' THEN 1 ELSE 0 END) AS 'hotmail',
    SUM(CASE WHEN emailAddress LIKE '%hotmail.co.uk%' THEN 1 ELSE 0 END) AS 'hotmail.co.uk',
    SUM(CASE WHEN emailAddress LIKE '%yahoo.ca%' THEN 1 ELSE 0 END) AS 'yahoo.ca',
    SUM(CASE WHEN emailAddress LIKE '%yahoo.co.uk%' THEN 1 ELSE 0 END) AS 'yahoo.co.uk',
    SUM(CASE WHEN emailAddress LIKE '%gmail%' THEN 1 ELSE 0 END) AS 'gmail',
    SUM(CASE WHEN emailAddress LIKE '%aol%' THEN 1 ELSE 0 END) AS 'aol',
    SUM(CASE WHEN emailAddress LIKE '%yahoo%' THEN 1 ELSE 0 END) AS 'yahoo',
    SUM(CASE WHEN referalTypeID = 1 THEN 1 ELSE 0 END) AS 'manual',
    SUM(CASE WHEN referalTypeID = 2 THEN 1 ELSE 0 END) AS 'auto'
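One hedged way to bolt the extra column on, assuming the query above ends with a FROM tblReferEmails and a GROUP BY on the converted referDate (only the per-day totals are kept here for brevity; the SUM(CASE ...) columns would go inside the derived table as well). The column names tblUserDetails.[date] and tblReferEmails.referDate are taken from the description and may need adjusting:

SELECT x.referDate,
       x.totalRefers,
       -- count of users whose join date falls after that refer date
       (SELECT COUNT(*)
        FROM tblUserDetails ud
        WHERE ud.[date] > x.referDate) AS joinedAfterRefer
FROM (SELECT CONVERT(varchar(10), referDate, 112) AS referDate,
             COUNT(*) AS totalRefers
      FROM tblReferEmails
      GROUP BY CONVERT(varchar(10), referDate, 112)) AS x
ORDER BY x.referDate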
INSERT @Request
SELECT '324234', 'Jack', 'SA023', 12, 111, NULL UNION ALL
SELECT '223452', 'Tom', 'SA023', 12, 112, NULL UNION ALL
SELECT '456456', 'Bobby', 'SA024', 12, 114, NULL UNION ALL
SELECT '22322362', 'Guck', 'SA024', 44, 123, NULL UNION ALL
SELECT '22654392', 'Luck', 'SA023', 12, 134, NULL UNION ALL
SELECT '225652', 'Jim', 'SA055', 67, 143, NULL UNION ALL
SELECT '126756', 'Jasm', 'SA055', 67, 145, NULL UNION ALL
SELECT '786234', 'Chuck', 'SA055', 67, 154, NULL UNION ALL
SELECT '66234', 'Mutuk', 'SA059', 72, 185, NULL UNION ALL
SELECT '2232362', 'Buck', 'SA055', 67, 195, NULL

INSERT @Call
SELECT 111, 1, 12123 UNION ALL
SELECT 112, 1, 12123 UNION ALL
SELECT 114, 2, 12123 UNION ALL
SELECT 123, 2, 12123 UNION ALL
SELECT 134, 3, 12123 UNION ALL
SELECT 143, 1, 6532 UNION ALL
SELECT 145, 1, 6532 UNION ALL
SELECT 154, 2, 6532 UNION ALL
SELECT 185, 2, 6532 UNION ALL
SELECT 195, 3, 6532

INSERT @CallDetail
SELECT 12123, 1, '11/5/2007 10:41:34 AM' UNION ALL
SELECT 6532, 1, '11/5/2007 12:12:34 PM'

-- select * from @Request

-- Query written to achieve the requirement
UPDATE r
SET r.UniqueNo = p.RecID
FROM @Request AS r
INNER JOIN (
    SELECT r.RequestID,
           ROW_NUMBER() OVER (PARTITION BY cd.EmpID, r.StateNo, r.CityNo, c.CallDetailID, c.CallType
                              ORDER BY cd.EntryDt) AS RecID
    FROM @Request AS r
    INNER JOIN @Call AS c ON c.CallID = r.CallID
    INNER JOIN @CallDetail AS cd ON cd.CallDetailID = c.CallDetailID
) AS p ON p.RequestID = r.RequestID
WHERE r.UniqueNo IS NULL
SELECT CONVERT(varchar(10), LL.loginDate, 112) AS loginDate,
       COUNT(LL.userID) AS TotalLogins,
       COUNT(DISTINCT LL.userID) AS TotalLogins_Unique
FROM tblLogins_Log LL WITH (NOLOCK)
WHERE DateDiff(dd, LL.loginDate, GetDate()) < @numDays
GROUP BY CONVERT(varchar(10), LL.loginDate, 112)
ORDER BY loginDate DESC
table structure below
CREATE TABLE [dbo].[tblLogins_Log](
    [loginID] [int] IDENTITY(1,1) NOT NULL,
    [userID] [int] NULL,
    [IP] [varchar](15) NOT NULL,
    [loginDate] [datetime] NOT NULL
) ON [PRIMARY]
Please help me figure out what is wrong with my code. The script is supposed to load a package from a file. The loaded package already has everything set up to run a query against a local server and output the results to an Excel file. The reason for the outer script is that I need to change the query based on a global variable. When the query changes, though, I think the existing data flow path is no longer valid, so I should remove it and re-create it with the new input mappings. Here is my code, which runs and throws an exception at the AcquireConnections call.
The error is
Error: 0x2 at Script Task: The script threw an exception: Exception from HRESULT: 0xC020801B
I pieced this code together from the examples in Books Online, but I am not sure what to do.
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Imports Microsoft.SqlServer.Dts.Pipeline
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Public Class ScriptMain
Public Sub Main()
'
Dim app As Microsoft.SqlServer.Dts.Runtime.Application = New Application()
Dim package As Microsoft.SqlServer.Dts.Runtime.Package = _
We have enabled Change Data Capture for auditing our table changes in SQL Server 2008. There is a request to NULL out a few columns (for all rows) in a couple of CDC tables to comply with a certification. Is there a compelling reason not to modify these tables, i.e. to leave the audit trail as-is?
SELECT a.AssetGuid, a.Name, a.LocationGuid FROM Asset a WHERE a.AssociationGuid IN ( SELECT ada.DataAssociationGuid FROM AssociationDataAssociation ada WHERE ada.AssociationGuid = '568B40AD-5133-4237-9F3C-F8EA9D472662')
takes 30-60 seconds to run on my machine, due to a clustered index scan on the Asset table [about half a million rows]. For this particular association fewer than 50 rows are returned.
If I expand the inner select into a list of GUIDs, the query runs instantly:
SELECT a.AssetGuid, a.Name, a.LocationGuid FROM Asset a WHERE a.AssociationGuid IN ( '0F9C1654-9FAC-45FC-9997-5EBDAD21A4B4', '52C616C0-C4C5-45F4-B691-7FA83462CA34', 'C95A6669-D6D1-460A-BC2F-C0F6756A234D')
It runs instantly because of doing a clustered index seek [on the same index as the previous query] instead of a scan. The index in question IX_Asset_AssociationGuid is a nonclustered index on Asset.AssociationGuid.
The tables involved:
Asset represents an asset. Its primary key is AssetGuid, and there is an index/FK on Asset.AssociationGuid. The Asset table has 28 columns or so. Association is kind of like a place; associations exist in a tree where one association can contain any number of child associations, each association has a ParentAssociationGuid pointing to its parent, and only leaf associations contain assets. AssociationDataAssociation is a table consisting of two columns, AssociationGuid and DataAssociationGuid, used to quickly find the leaf associations [DataAssociationGuid] beneath a particular association [AssociationGuid]. In the above case the inner select returns 3 rows.
I'd include .sqlplan files or screenshots, but I don't see a way to attach them.
I understand I can specify the index manually [and this also runs instantly], but for such a simple query it is peculiar that this is necessary. This is the query with the index specified manually:
SELECT a.AssetGuid, a.Name, a.LocationGuid FROM Asset a WITH (INDEX (IX_Asset_AssociationGuid)) WHERE a.AssociationGuid IN ( SELECT ada.DataAssociationGuid FROM AssociationDataAssociation ada WHERE ada.AssociationGuid = '568B40AD-5133-4237-9F3C-F8EA9D472662')
To repeat/clarify my question: why might the optimizer not choose an index seek for the first query?
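For comparison, here is a hedged rewrite of the same filter as a join; with some statistics the join form is costed differently and gets the nonclustered seek that the IN form misses. Table and column names are taken from the post; it assumes AssociationDataAssociation cannot return the same DataAssociationGuid more than once for a given AssociationGuid, otherwise a DISTINCT is needed:

SELECT a.AssetGuid, a.Name, a.LocationGuid
FROM Asset a
INNER JOIN AssociationDataAssociation ada
    ON ada.DataAssociationGuid = a.AssociationGuid   -- same correlation as the IN subquery
WHERE ada.AssociationGuid = '568B40AD-5133-4237-9F3C-F8EA9D472662'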
I have 2 identical databases on the same server with the same indexes in both, but when I run a query in Query Analyzer, it takes 6 seconds on database A while the same query takes 1 minute on database B. I checked that the indexes on all tables in the 2 databases are the same.
I used Show Execution Plan, and it shows that some indexes are not being used on the 2nd database. I dropped all the indexes and rebuilt them, but got the same result.
Any idea why it is taking so long on the other database?
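A minimal first check, on the assumption that the difference comes from out-of-date statistics rather than the index definitions themselves (dbB stands in for whatever the slow database is called):

USE dbB   -- hypothetical name for the slower database
GO
-- Refresh statistics on every table, then re-run the query and compare plans.
EXEC sp_updatestats
GO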
I have a view (table_rol_contct) on a number of tables, one of which is a table of address data (table_address). This table has a unique, non-clustered index on Postcode, House Number and an Id column (idx_x_s_postcode). There is a clustered index on the Id column.
There are a number of queries which are run against this view, one of which selects addresses whose Postcode matches a full or partial Postcode entered through a Client Application.
This query :
Select distinct top 30
    con.salutation+' '+con.first_name+' '+con.last_name as fullname,
    con.title, con.phone, con.x_housename, con.x_houseno,
    con.x_housename+' '+con.x_houseno+' '+con.address+', '+con.address_2+' '+con.city as fulladdress
from table_rol_contct con
where con.x_s_postcode like 'wf18%'
order by con.last_name asc, con.first_name asc
returns its results in under a second while this one:
Select distinct top 30
    con.salutation+' '+con.first_name+' '+con.last_name as fullname,
    con.title, con.phone, con.x_housename, con.x_houseno,
    con.x_housename+' '+con.x_houseno+' '+con.address+', '+con.address_2+' '+con.city as fulladdress
from table_rol_contct con
where con.x_s_postcode like 'wf3%'
order by con.last_name asc, con.first_name asc
takes 38 seconds to return its results.
The execution plans for these queries show that the slow query uses a Clustered Index Scan whereas the quick one uses an index seek on the ‘idx_x_s_postcode’ non-clustered index.
From this, you can see that 'WF18%' is wholly contained within a single histogram step, and the optimizer appears to happily perform an index seek, whereas 'WF3%' spans more than one step, so the optimizer seems to choose a clustered index scan.
Is there any way that I can force the use of the non-clustered index or, at least, make it more likely that the optimizer will use this index without coding an optimizer hint into the ‘Create View’ statement? The problem is that the view is used for other queries, all the queries run against it are generated from within a client application and the given search criteria could, for example, be ‘con.address’ rather than ‘con.x_s_postcode’.
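Short of hinting, one hedged option is to give the optimizer a better estimate to work with; this is a sketch against the underlying address table and index named above, not a guaranteed fix, since the choice ultimately comes down to how many rows the optimizer thinks 'WF3%' will match:

-- Rebuild the statistics behind the postcode index with a full scan so the
-- histogram reflects the real postcode distribution, then retest both queries.
UPDATE STATISTICS table_address idx_x_s_postcode WITH FULLSCAN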
Hi, I have a table t1 with columns a, b, c, d, e, f, g, h, i, j, k, l. I have created a clustered index on a, b, d, e, which forms the primary key. I have created a covering index on all the columns of t1. There are 1 million rows in this table. My query chooses the TOP 20 rows based on some filter conditions. When I use an "ORDER BY 1", it uses the clustered index and I get the result in 1 second, whereas it takes around 1 minute 48 seconds when I use an "ORDER BY b" (or any other column); it is not using the covering index or the clustered index. What is the best way to index this table so that it uses the index and I get the result within the shortest possible time (just like ORDER BY 1, which takes hardly a second)? Thanks. Sridhar
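A hedged illustration of the usual approach, assuming SQL Server 2005 or later for the INCLUDE clause (on SQL Server 2000 the extra columns would have to be key columns instead): make the ORDER BY column the leading key of an index that also covers the selected columns, so the TOP 20 rows can be read already in order instead of sorting a million rows.

-- b leads the key so the sort is satisfied by index order; a, d and e come
-- along automatically via the clustered key, the rest are included to cover.
CREATE NONCLUSTERED INDEX ix_t1_b
ON t1 (b)
INCLUDE (c, f, g, h, i, j, k, l)

Whether this is worth it depends on the filter conditions, which are not shown; if they are very selective, an index keyed on the filter columns may beat the sort-avoiding index.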
Hi, I am making a SELECT query to fill a repeater, and I need to retrieve the index of each line of the query. That is, I want to get a dataset like: "0", "dataCol1", "dataCol2" for the first line; "1", "dataCol1", "dataCol2" for the second line; "2", "dataCol1", "dataCol2" for the third line; etc. Does anyone know if there is a SQL statement that does this? Thanks. Johann
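A hedged sketch, assuming SQL Server 2005 or later and placeholder names (myTable, col1, col2): ROW_NUMBER() numbers the rows of the result set starting at 1, so subtracting 1 gives the zero-based index shown above.

SELECT ROW_NUMBER() OVER (ORDER BY col1) - 1 AS lineIndex,   -- 0, 1, 2, ...
       col1 AS dataCol1,
       col2 AS dataCol2
FROM myTable
ORDER BY col1

On SQL Server 2000 an alternative is to select the rows into a temp table that has an IDENTITY column and read the index from there.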
Hi, what tools can I use to optimize a query or its indexes?
I know that if I am running a query on a table, I should create an index on the fields I use in the WHERE clause. Is this the right way to think about it?
Someone asked me how to determine which columns should be indexed, and I told him that the fields used in the WHERE clause should be indexed to speed up retrieving the data. Is this answer correct? If not, please advise on the correct one. Thanks.
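A minimal illustration of the idea in question, with hypothetical table and column names: an index on the column referenced in the WHERE clause lets the query seek straight to the matching rows instead of scanning the whole table.

CREATE INDEX ix_orders_customerid ON orders (customer_id)

-- With the index in place, this lookup can seek on customer_id rather than scan.
SELECT order_id, order_date
FROM orders
WHERE customer_id = 42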
I currently develop an application for my company that uses rather long queries over many records.
I have a particular query (written as a SQL string inside the .NET application rather than as a stored procedure) that needs to run against 2 databases (both SQL Server):
The first one is a test database that we use during development to test our data.
The second one is the real thing: a database that contains lots of records.
When criteria are placed in the query, it returns few records in both databases, but if no criteria are placed (so it fetches all the records), the test database works OK, while the real one "jams" until 30 seconds pass and I get a timeout message.
I tried to change the query timeout from inside SQL Server under
Tools/Options/Advanced
but it doesn't seem to work; it still times out after 30 seconds.
I'm converting a set of queries from Access to work as stored procedures on SQL server, and one of them that uses IIF gives me a syntax error. Here's the WHERE clause of the SELECT:
WHERE IIF(@myExtNum > 0,D.ExtentionNumber=@myExtNum,'') AND ...
I get the error message "syntax error near '>'."
It would seem this query wants to do two things: 1) if @myExtNum is >0, return only the rows for which D.ExtentionNumber equals a user specified value, or 2) if @myExtNum is 0 ignore this part of the condition.
I can rewrite this using multiple ANDs and ORs, but I wondered if there was a way I could get IIF working.
Something in my gut tells me this is not going to work: IIF is being used here to modify the query, that is, to change which rows are returned; it is not being used to change how a given data value is displayed, which I think was the purpose for which IIF was originally intended.
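For reference, a hedged sketch of the AND/OR rewrite described above; it keeps the "apply the filter only when @myExtNum > 0" behaviour without IIF, and the rest of the WHERE clause is elided just as in the original:

WHERE (@myExtNum <= 0 OR D.ExtentionNumber = @myExtNum)   -- filter only when @myExtNum > 0
  AND ...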
I have recently upgraded my Access database to use SQL Server 2005 via the upsizing wizard. I selected SQL linked tables instead of the ADP project for ease of conversion. So now the tables are on the SQL Server and the queries are still local.
One of my Access forms, which returns a datasheet view by accessing a query, is now running really slowly with SQL Server as the back end. I was wondering if I can put an index on one of the tables to speed it up; as the query is local to Access, will this work? Just wondering if anyone has ever come across this.
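A hedged sketch with placeholder names, since the real table and column are not shown: with linked tables the index has to be created on the SQL Server table itself (the local Access query cannot be indexed), and the useful column is whichever one the form's query filters or joins on. Server-side indexes do still help linked-table queries, although Access may also be pulling more rows across the link than it needs.

-- Hypothetical example: index the column the local query filters on.
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)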
I am trying to resolve performance issues in a third-party application. I have run the Profiler and found a transaction that performs a table scan against a 6 million row table. This transaction occurs repeatedly, so I thought I would just add an index on the columns in the WHERE clause used here. After adding the index, I looked at the estimated execution plan in Query Analyzer and found that it is still performing the table scan. If I run the query it takes over 60 seconds; if I add an index hint, it runs in under a second. I ran DBCC SHOW_STATISTICS to see if the statistics were up to date:
Statistics for INDEX 'IX_Finish_dept'.

Updated              Rows      Rows Sampled  Steps  Density        Average key length
Jun 26 2007  5:18PM  6832336   6832336       150    2.1415579E-7   18.0

(1 row(s) affected)

All density    Average Length  Columns
2.1875491E-7   8.0             finish
1.9796084E-7   18.0            finish, dept
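The density figures above say the index is extremely selective, so one possible cause worth checking, not confirmed by anything in the post: if the application sends its search values as Unicode (nvarchar) parameters against varchar columns, older versions of SQL Server (2000 especially) convert the column side of the comparison and cannot seek on the new index. Comparing the two forms below in Query Analyzer (the table name is hypothetical, the column names come from the statistics output) shows whether that is what is happening:

SELECT * FROM dbo.Jobs WHERE finish = 'X100' AND dept = 'A1'     -- varchar literals: a seek on IX_Finish_dept is possible
SELECT * FROM dbo.Jobs WHERE finish = N'X100' AND dept = N'A1'   -- nvarchar literals: may force a scan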
use tempdb
go

if object_id('Data', 'u') is not null
    drop table Data
go

with temp as (
    select top 10000 row_number() over (order by c1.object_id) Id
    from sys.columns c1 cross join sys.columns c2
[code]....
What index would be best for these three queries? With best I mean the execution time, I don't care about additional space.
This is the index I currently use:
create nonclustered index Ix_Data on Data (StateId, PalletId, BoxId, Id)
The execution plan is SELECT (0%) - Stream Aggregate (10%) - Index Scan (90%).
Can this be optimized (maybe to use Index Seek method)?
I've got this query that runs in 30 seconds and returns about 24,000 rows. The table variable returns about 145 rows (no performance issue there), and the TransactionTbl table has 14.2 million rows, a compound clustered primary key, and 6 non-clustered indexes, none of which meet the needs of the query.
Actual execution plan shows SQL is doing an index seek, then a nested loop join, and then fetching the remaining data from the TransactionTbl using a Key Lookup.
I designed a new index based on the query. When I force its usage via an index hint, the run time drops to sub-second, but without the index hint the SQL optimiser won't use the new index, which looks like this:
CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date) include (Ledger_Code, Amount, CurrencyID, AssetID)
I also tried this:
CREATE INDEX IX_Test on GLSchemB.TransactionTbl (CltID, Date, Ledger_Code, CurrencyID, AssetID) include (Amount)
and even a full covering index!
I did some testing, including disabling all indexes but the PK, and the optimiser tells me I've got a missing index and recommends I create one EXACTLY like the one I designed, but when I put my one back it doesn't use it.
I thought this might be due to fragmentation and/or stats being out of date, so I rebuilt the PK and my index, and the optimiser started using my index, doing an index seek and running sub-second. Thinking I had solved the problem, I rebuilt all the indexes, testing after each one, and my index was used, BUT as soon as I flushed the related query plan, the optimiser went back to using a less optimal index, with a seek and key lookup plan, taking 30 seconds.
For now I've resorted to using OPTION (TABLE HINT(G, INDEX(IX_Test))) to force this, but it's a workaround only. Why would the optimiser select a less optimal query plan?
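One hedged possibility, given that the good plan only survives until it is flushed: table variables carry no statistics, so at compile time the optimiser may assume the 145-row table variable has roughly one row and cost the seek-plus-key-lookup plan as cheaper than a plan on IX_Test. A statement-level recompile makes the actual row count visible when the plan is built. The query shape below is assumed from the index keys, not copied from the real query:

SELECT t.CltID, t.[Date], t.Ledger_Code, t.Amount, t.CurrencyID, t.AssetID
FROM @Clients c                                        -- stands in for the 145-row table variable
JOIN GLSchemB.TransactionTbl t ON t.CltID = c.CltID
WHERE t.[Date] BETWEEN @DateFrom AND @DateTo
OPTION (RECOMPILE)                                     -- recompile with the table variable already populated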
I have a table that seems to have a bad index. When I do the following query I get inconsistent, and needless to say incorrect, results:
select count(*) from mytable where mycolumn = 1
If I remove the index from "mycolumn" the query works correctly. If I add the index back (even with a new name etc.) it doesn't work right.
Has anyone run into this? Or does anyone know how I can fix this problem? It seems that removing the index is not really removing everything, because when I add a new one I get this same problem. By the way, this is isolated to this column on this table; all other indexes within the database are fine.
Any help would be appreciated.
Thanks, dharper
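A minimal first diagnostic, given that the index and the base data disagree: have DBCC check the table (or the whole database) for corruption; it will report whether the index pages are damaged and which repair options apply.

DBCC CHECKTABLE ('mytable')     -- table name from the post
-- or, to check everything at once (database name is a placeholder):
DBCC CHECKDB ('mydatabase')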
SET @propname = NULL -- can be Null or have wild card values
SELECT col1, col2, col3, propertyname FROM testtable where col1 = 1 and col2 = 2 and col3 = 3 and (@propname IS NULL OR propertyname LIKE @propname)
col1, col2, col3 are part of a clustered index
propertyname is a nonclustered index
This is the predicament: if "@propname IS NULL" comes first in the OR, the query uses the clustered index to find the record. If I put "@propname IS NULL" last, it uses the propertyname index no matter what. So I either get full index scans whenever a NULL property name is used, or I get consistent two-second-long searches on the clustered index. Is there any way to have the best of both worlds, or do I have to divide my query up into more stored procedures?
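A hedged sketch of the split the post contemplates, kept inside a single procedure rather than separate ones: each branch is its own statement, so the optimizer can give the NULL case the clustered index plan and the wildcard case the propertyname index plan.

IF @propname IS NULL
    SELECT col1, col2, col3, propertyname
    FROM testtable
    WHERE col1 = 1 AND col2 = 2 AND col3 = 3
ELSE
    SELECT col1, col2, col3, propertyname
    FROM testtable
    WHERE col1 = 1 AND col2 = 2 AND col3 = 3
      AND propertyname LIKE @propname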
Create Index ind_Item_Name on Item(I_Name);
Create Index ind_Item_BC on Item(I_BC);
Create Index ind_Item_Company on Item(I_Company);
Create Index ind_Item_CompanyFound on Item(I_CompanyFound);
create Index ind_Item_i1 on Item(I_Company,I_CompanyFound);
create Index ind_Item_i2 on Item(I_CompanyFound,I_Company);
Now, this query does NOT use an index:
select I_Name, I_Code, I_MatID, I_BC, I_Company, I_Info1, I_Acquired, I_CompanyFound, 0 as I_Found
from Item
where (I_Company='102' or I_CompanyFound='102')
While this one does:
select I_Name, I_Code, I_MatID, I_BC, I_Company, I_Info1, I_Acquired, I_CompanyFound, 0 as I_Found
from Item
where (I_Company='102')
UNION
select I_Name, I_Code, I_MatID, I_BC, I_Company, I_Info1, I_Acquired, I_CompanyFound, 0 as I_Found
from Item
where (I_CompanyFound='102')
Both return the same rows. Is this a bug? I found the following: http://support.microsoft.com/kb/223423
Our app has been distributed to more than 300 different sites. At one of the sites we get the error "Could not continue scan with NOLOCK due to data movement", indicating that the query optimizer uses a NOLOCK scan for our select statement (the recordset is opened with adOpenDynamic, adLockOptimistic).
Changing the source is not an option; we have to solve this without touching the code.
Is there any way to tweak the query optimizer so that our app works correctly? I know there will be a reduction in performance, but it's our only choice.
I have a query which runs exactly as I want it to. Let's say it returns this in a table:
ID   Number
1    23
2    12
3    12
I want to change (find and replace?) all the 12s in the 'Number' column to a string: I want to change the 12s to the word 'HelloWorld'. I don't want to update the table, merely the result set being returned by the query.
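A hedged sketch using the two columns from the sample output (the table name is a placeholder): the REPLACE happens only in the SELECT list, so the stored data is untouched, and the numeric column has to be cast to a string before it can hold the word.

SELECT ID,
       REPLACE(CAST(Number AS varchar(20)), '12', 'HelloWorld') AS Number
FROM MyTable   -- placeholder for the real table or query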
I need someone who knows something about SQL queries.
I have a client that is running a database known as ProLaw. It is, in part, a document management system for law offices.
They have a SQL 2005 database that tracks, per client, all the documents they create.
We had to replace their server with a new server. The new server is running SBS 2003 and had to have a different NetBIOS name than the old SBS 2000 server. (Small Business Server has some weird quirks that make simply reusing the same NetBIOS name impossible. Google it if you don't believe me.)
The database holds, in a single column, the full network share path to each document.
Different documents may have different names and more subdirectories, but the root path of "\lawwillsbs2000ProlawDocuments" is shared by all.
The new server is named \sbs2003. I need to change the first part of almost 3000 path statements to the new server; the rest of the path is unchanged.
I have had several people running prolaw tell me that I should run this query:
UPDATE Events
SET DocDir=REPLACE(DocDir, '\\lawwillsbs2000', '\\sbs2003')
WHERE EventKind='O'
This doesn't work; nothing is changed. I'm guessing it is because this query assumes the value will be ONLY \lawwillsbs2000. I see nothing in here that tells the query that this is only part of the string: no wildcard or other marker.
I need some kind of string function here, do I not? Does anyone know enough to help me craft a proper query?
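For what it's worth, REPLACE does not need a wildcard; it replaces every occurrence of the search string wherever it appears inside the value. So if the UPDATE changed nothing, the more likely explanation is that the stored text does not contain exactly the string '\\lawwillsbs2000' (a different number of backslashes, a mapped drive letter, or a different EventKind). A hedged check before re-running it:

-- Look at what the column actually holds, and count how many rows even
-- contain the old server name, regardless of backslashes.
SELECT TOP 10 DocDir FROM Events WHERE EventKind = 'O'
SELECT COUNT(*) AS rows_with_old_name
FROM Events
WHERE EventKind = 'O' AND DocDir LIKE '%lawwillsbs2000%'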
/* Given */
CREATE TABLE [_T1sub] (
    [PK] [int] IDENTITY (1, 1) NOT NULL ,
    [FK] [int] NULL ,
    [St] [char] (2) NULL ,
    [Wt] [int] NULL ,
    CONSTRAINT [PK__T1sub] PRIMARY KEY CLUSTERED ([PK]) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT INTO _T1sub (FK,St,Wt) VALUES (1,'id',10)
INSERT INTO _T1sub (FK,St,Wt) VALUES (2,'nv',20)
INSERT INTO _T1sub (FK,St,Wt) VALUES (3,'wa',30)

/* Is something like the following possible? The point is to change the value
   of the variable inside the query and use it in the calculated field.
   This doesn't compile of course, but is there a way to accomplish the same
   thing? */
DECLARE @ndx int
SET @ndx = 1
SELECT
    (a.FK + (CASE WHEN @ndx > 0
                  THEN (SELECT @ndx = b.Wt
                        FROM _T1sub b
                        WHERE b.Wt = a.Wt)
                  ELSE 0 END)) as FKplusWT
FROM _T1sub a

/* Output would look like this: */
FKplusWT
-----------
11
22
33

/* I know, I can get this output just by adding FK+WT. This is not about that.
   This is about setting vars inside a query. */

thanks, Otto Porter