I am creating a table variable (@tblBin) to temporarily store a set of data. Later in my sproc, I do a JOIN from @tblBin to a persistent table. To improve performance, I was thinking of adding an index to the columns of @tblBin (indexes already exist on the persistent table). Using standard CREATE INDEX syntax (*), I get a compile error. Can this be done?
(*)CREATE NONCLUSTERED INDEX IX_tblBin_shortname ON @tblBin(shortname)
Hi, I've got 2 table variables inside of a SQL 2000 function: @tmpBigList (BItemID, BRank) and @tmpSmallList (ItemID, Rank). The following UPDATE statement can run for a long time when @tmpBigList has 500 rows and @tmpSmallList has 35,000 rows:

UPDATE @tmpBigList
SET BRank = t.Rank
FROM @tmpBigList
JOIN @tmpSmallList t ON t.ItemID = BItemID

Looking at the query plan, you see that the inner join of @tmpBigList to @tmpSmallList results in 500 * 35,000 = 17,500,000 rows being returned from @tmpSmallList. That takes a long time. An index would help, but it appears that you can't add an index to a table variable. Changing to a temp table does not work, since this is inside a function. Thanks, Joe Landes
My stored procedure has one table variable (@t_Replenishment_Rpt). I want to create an index on this table variable. Please advise. Below is my table variable, and I need to create 3 indexes on it...
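For the questions above: CREATE INDEX cannot target a table variable, but SQL Server 2000 and later will build indexes implicitly for PRIMARY KEY and UNIQUE constraints declared inline with the table variable. A minimal sketch, reusing the shortname column from the first post (binvalue is a hypothetical second column added for illustration):

DECLARE @tblBin TABLE
(
    shortname varchar(50) NOT NULL,
    binvalue  int NULL,              -- hypothetical second column
    PRIMARY KEY (shortname),         -- builds an implicit clustered index
    UNIQUE (binvalue, shortname)     -- builds an implicit nonclustered index
)

This also works inside functions, where temp tables are not allowed. Note that the constraints must be declared with the table: ALTER TABLE is not allowed on table variables either, so the indexes cannot be added after the fact.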
I'm running a merge replication on a sql2k machine to 6 sql2k subscribers. Since a few days, only one of the merge agents fails with the following error:
The merge process could not retrieve generation information at the 'Subscriber'. The index entry for row ID was not found in index ID 3, of table 357576312, in database 'PBB006'.
All DBCC CHECKDB commands return 0 errors :confused: I'm not sure whether the table referred to in the message is on the distribution side or the subscriber side. A select * from sysobjects where id=357576312 gives different results on both sides...
Hi everyone, when we create a clustered index first, is it then advantageous to create another index which is nonclustered? In my opinion, yes it is. Because we created the clustered index first, our rows are sorted, and so when a nonclustered index is used on this data file, finding the address of a record in the sorted data is much easier than finding it in unsorted data, isn't it?
Hi, here's my problem: I want to write a stored procedure that returns all records from a table that have a certain column starting with given text. However, I find that using LIKE with a variable always causes an index scan, which is causing performance issues. My table has about 3.5M records. Below is a test. In Query Analyzer, if I look at the execution plan for the following, it comes up as an index scan. However, if I just hard-code the text it all works fine (index seek). How can I do this with reasonable speed??? Thanks, Greg

DECLARE @find varchar(50)
SET @find = 'start'

SELECT TOP 100 *
FROM Test
WHERE Col1 LIKE @find + '%'
--Col1 LIKE 'start%'
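With a local variable the optimizer cannot see the prefix at compile time, so it estimates against the whole index; embedding the value at execution time lets it compile a seek for the actual prefix. A hedged sketch of one common workaround, reusing the table and column names from the test above (on SQL 2005 and later, OPTION (RECOMPILE) on the original query is a simpler alternative):

DECLARE @find varchar(50), @sql nvarchar(500)
SET @find = 'start'
-- build the literal into the statement so the plan is compiled for this value
SET @sql = N'SELECT TOP 100 * FROM Test WHERE Col1 LIKE ''' + @find + N'%'''
EXEC sp_executesql @sql

The usual caveats about quoting and SQL injection apply whenever user input is concatenated into dynamic SQL.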
Why is the index value used in variable mapping when using the For Each Loop container? In one SSIS package I saw the index value set to 2. What does the 2 indicate?
I am trying to loop through 4 files in a folder and read the names of those files through the For Each Loop container task. I have 4 readme files (readme1.txt thru readme4.txt) in a folder called C:\SSIS.
I have added a For Each Loop container and a Script Task to my package. On the Variable Mappings page I have named the variable and configured the index to be 0. The problem is that when I execute the package, the file name that is read is always readme1.txt. I want all the file names under the folder to be read (in other words, configure the index to be 0 thru 4). How do I configure the index so that it can read all the files?
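A note on the two Foreach Loop questions above: with the Foreach File enumerator, the collection yields a single value per iteration (the file name), so index 0 is the only valid mapping, and the container itself walks all four files, one iteration each; there is no need for indexes 1 thru 3. If the package always sees readme1.txt, one common cause is that the task reads a hard-coded path or a design-time default instead of the mapped variable. Higher index values (like the 2 mentioned above) apply to enumerators that return multiple columns per row, such as the Foreach Item or Foreach ADO enumerators, where the index picks which column is mapped to the variable.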
The following is the code I have in my script task:
insert into #t(branchnumber) values (005)
insert into #t(branchnumber) values (090)
insert into #t(branchnumber) values (115)
insert into #t(branchnumber) values (210)
insert into #t(branchnumber) values (216)
[code]....
I have a parameter which should take multiple values, which are then passed to the code that I use. For this, I created a parameter, and for testing I am temporarily passing some values into it. Using dynamic SQL, I am converting the multiple values into multiple records as rows in another variable (called @QUERY). My question is: how do I insert the values from the variable into a table (table variable, temp table, or CTE)? Or is there any way to parse the multiple values into a table, so that if we pass multiple values into a parameter, those values go into a table as rows?
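A minimal sketch of the parsing approach, assuming SQL Server 2016 or later (where STRING_SPLIT is available) and a comma-delimited parameter; on older versions a user-defined split function fills the same role, as in the dbo.Split example further down this page:

DECLARE @Param varchar(max)
SET @Param = 'val1,val2,val3'                 -- hypothetical test values

DECLARE @t TABLE (Value varchar(100) NOT NULL)

INSERT INTO @t (Value)
SELECT value FROM STRING_SPLIT(@Param, ',')   -- one row per delimited value

SELECT * FROM @t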
Please explain the differences between the logical and physical operations whose graphical icons we can see on the execution plan tab in Management Studio.
Simple example:

declare @tTable table (col1 int)
insert into @tTable (col1) values (1)
select * from @tTable
This works perfectly in SQL Server Management Studio, and the database connection is OK too, as I can generate the PP table using complex (or simple) queries without difficulty.
But when I try to get this same result in a PP table I get an error, and the same happens when replacing the table variable with a temporary table.
Message: OLE DB or ODBC error. ... The current operation was cancelled because another operation in the transaction failed.
I am trying to use a stored procedure to update a column in a SQL table using a value from a table variable. I am getting errors because my syntax is not correct. I think table aliases are not allowed in UPDATE statements.
This is my statement:
UPDATE [dbo].[sessions_teams] stc
SET stc.[Talks] = fmt.found_talks_type
FROM @Find_Missing_Talks fmt
WHERE stc.sessionid IN (SELECT sessionid FROM @Find_Missing_Talks)
AND stc.coupleid IN (SELECT coupleid FROM @Find_Missing_Talks)
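Table aliases are allowed in UPDATE as long as the UPDATE clause names the alias and the aliased table appears in the FROM clause. A hedged rewrite of the statement above, assuming sessionid and coupleid together identify the matching row in @Find_Missing_Talks:

UPDATE stc
SET stc.[Talks] = fmt.found_talks_type
FROM [dbo].[sessions_teams] stc
JOIN @Find_Missing_Talks fmt
    ON fmt.sessionid = stc.sessionid
    AND fmt.coupleid = stc.coupleid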
Can someone tell me if it is possible to add an index to a table variable that is declared as part of a table-valued function? I've tried the following, but I can't get it to work.
ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence ( @param1 int, @param2 int )
RETURNS @Result TABLE (resultcol1 int, resultcol2 int)
AS
BEGIN
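As with table variables elsewhere, CREATE INDEX is not allowed on the return table, but inline PRIMARY KEY and UNIQUE constraints are, and each builds an index. A minimal sketch against the function above, with a placeholder body since the original body was not shown:

ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence ( @param1 int, @param2 int )
RETURNS @Result TABLE
(
    resultcol1 int NOT NULL PRIMARY KEY,          -- implicit clustered index
    resultcol2 int,
    UNIQUE (resultcol2, resultcol1)               -- implicit nonclustered index
)
AS
BEGIN
    INSERT INTO @Result (resultcol1, resultcol2)  -- placeholder body
    VALUES (@param1, @param2)
    RETURN
END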
Hi All, hope someone can help me... I'm trying to highlight the advantages of using table variables as opposed to temp tables within a single scope. My manager seems to believe that table variables are not advantageous because they reside in memory. He also seems to believe that temp tables do not use memory... Does anyone know how SQL Server could read data from a temp table without passing the data contained therein through memory??? Is this a valid advantage/disadvantage of table variables vs temp tables?
SQLLY challenged, be gentle -- I'm trying to create code that will drop a table using a variable as the table name.

DECLARE @testname as char(50)
SELECT @testname = 'CO_Line_of_Business_' + SUBSTRING(CAST(CD_LAST_EOM_DATE AS varchar), 5, 2) + '_' + LEFT(CAST(CD_LAST_EOM_DATE AS varchar), 4) + '_' + 'EOM'
FROM TableName
PRINT @testname

Printing @testname gives 'blah...blah...blah' (which is the actual table name on the server). How can I use this variable (@testname) to drop the table? I'm under severe time constraints, so any help would be greatly appreciated.
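DROP TABLE does not accept a variable directly, so the statement itself has to be built as a string and executed dynamically. A hedged sketch, noting that @testname is char(50) and therefore padded with trailing spaces that must be trimmed first:

DECLARE @sql nvarchar(200)
SET @sql = N'DROP TABLE ' + QUOTENAME(RTRIM(@testname))  -- RTRIM strips char(50) padding
EXEC (@sql)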
In a previous post "Could #TempTable within SP cause lock on tempdb?" http://forums.microsoft.com/msdn/showpost.aspx?postid=2691763&siteid=1
It was obvious that we have to limit the use of #Temp tables to a minimum. Let's assume that some of the temp tables are really difficult to replace and we have to live with them.
Would it be easier on tempdb if the #TempTable is replaced by a table variable? Or do they all end up in tempdb?
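They all end up in tempdb: table variables are also backed by tempdb objects, although they avoid some of the logging and recompilation costs of #Temp tables. A small demonstration, assuming SQL Server 2005 or later:

DECLARE @t TABLE (id int)
SELECT TOP 1 name, create_date
FROM tempdb.sys.tables
ORDER BY create_date DESC
-- returns a system-generated name such as '#0DAF0CB0' created for @t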
I have a stored procedure. Inside this stored procedure I have a table variable with one column. Once the table variable is populated with rows, I would like to pass each value in the table into a table-valued function. The table-valued function may return any number of rows. I would like all the rows the TVF returns to be returned from the stored procedure as a single result set. I would also like to do this without defining a table variable to hold the results of the table-valued function.
Code Snippet
declare @IdTable table ( EmployeeId nvarchar( 16 ) not null )

insert into @IdTable
select EmployeeNumber from Employees
/* I need to run this query for every EmployeeId value in @IdTable
   and return the results from the stored proc as a single result set. */
select * from fn_GetEmployeeById( EmployeeId )
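Assuming SQL Server 2005 or later and that the function lives in the dbo schema, CROSS APPLY runs the TVF once per row of @IdTable and returns everything as a single result set, with no extra table variable to hold the results:

SELECT e.*
FROM @IdTable i
CROSS APPLY dbo.fn_GetEmployeeById( i.EmployeeId ) e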
In my stored procedure I have a multi-valued varchar(max) parameter, and I wrote a table-valued function that takes the varchar(max) and returns a table back to the stored procedure, where I insert it into a table variable. I'm just wondering: is there a better and faster way of doing this?
ALTER PROCEDURE [dbo].[rpt]
(
@CourtIDs as nvarchar(MAX) -- @CourtIDs = '1231,3432,1234,3421'
) AS
--split CourtIDs into a table
DECLARE @tbCourtIDs table(CourtID int NOT NULL PRIMARY KEY)
INSERT INTO @tbCourtIDs
SELECT * FROM dbo.Split(@CourtIDs, ',')
I have set up a DTS job that performs the following steps once a week.
1. Truncate a user table called Sales.
2. Import 750,000 new sales records from a semicolon-delimited text file.
3. Execute an update query that adds a SalesID field to each record (this is a concatenation of several columns for each record and may not be unique).
This whole process takes about 2 minutes.
Here is my question: all queries and views against this Sales table use the SalesID field to identify a result set. Therefore my thought is that I need a clustered index on the SalesID field in the Sales table.
What is the right way to handle this:
1. Leave the table as is and do not add an index to the SalesID field. (All queries would rely on a full scan of the table.)
2. Add a permanent index to the SalesID field, which will probably cause the truncate and import to run more slowly.
3. Do option 2, but drop the index before truncating the table and add the index back to the SalesID field as the last step in the DTS job (a sketch of this follows below).
Any idea what would provide the best performance? If I missed any options please let me know, thank you for any help!
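A hedged sketch of option 3, with a hypothetical index name and SQL 2000-era syntax to match DTS:

-- first step: drop the index if it exists
IF EXISTS (SELECT 1 FROM sysindexes
           WHERE id = OBJECT_ID('Sales') AND name = 'IX_Sales_SalesID')
    DROP INDEX Sales.IX_Sales_SalesID

TRUNCATE TABLE Sales
-- ... bulk import and SalesID update steps run here ...

-- last step: rebuild the index against the fully loaded table
CREATE CLUSTERED INDEX IX_Sales_SalesID ON Sales (SalesID)

Building the index once over the finished table is generally cheaper than maintaining it through 750,000 inserts, so option 3 is usually the fastest of the three when the whole table is replaced each run.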
When considering locks caused by a table, is it better to have more rows in a table or fewer? For example, I can design the table to hold 1 million rows, of which 300,000 are updated frequently, or I can move the other 700,000 rows into a separate table and let the remaining 300,000 take a severe beating. Would I have fewer locking problems and deadlocks if I take out the non-updating rows, or would deadlocks become more likely because of a higher chance of a lock being held on the same page?
select row_number() over (order by duedate) as row, duedate as date, ...
into #fronta
from oitb with (nolock)
where ....
order by duedate
The table is filled correctly with about 30k records. In the next step I want to work with this temp table I created, but I have a problem when I use a query like this:
select * from #fronta where row < 500
When the operator is = or <>, the query is quick, but when I use < or >, the query takes about 10 minutes.
I tried to add an index on the field named row to this temp table, but without success.
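For reference, temp tables (unlike table variables) do accept CREATE INDEX after the fact. A minimal sketch, bracketing the column name just in case row clashes with a keyword:

CREATE CLUSTERED INDEX IX_fronta_row ON #fronta ([row])

With the index in place, it is worth comparing the plan for the row < 500 query again; a clustered index on row turns the inequality into a range seek instead of a scan.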
My ID field is an identity field (unique). It is the primary key. I also want to add an index/unique key so that a combination of Field1 and Field2 cannot be duplicated. How do I do this?
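A minimal sketch of the composite unique constraint, assuming a hypothetical table name MyTable:

ALTER TABLE dbo.MyTable
ADD CONSTRAINT UQ_MyTable_Field1_Field2 UNIQUE (Field1, Field2)

An equivalent alternative is CREATE UNIQUE INDEX UX_MyTable_Field1_Field2 ON dbo.MyTable (Field1, Field2); the constraint form simply documents the intent in the table definition.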
Hi, I'm issuing a SELECT with SUM on a field in SQL Server 7. I have an index on the field in the WHERE clause, but upon analysis the query optimizer always uses a full table scan. Can anyone explain why, and is there a way to use the index?
Here's the structure:

SELECT SUM(colA)
FROM tblB
GROUP BY colC
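Assuming the real query matches the posted structure, there is no WHERE clause, so the optimizer has to touch every row to compute the per-group sums and some scan is unavoidable; what an index can still buy is a narrower scan. A hedged sketch using SQL 7-compatible syntax:

CREATE NONCLUSTERED INDEX IX_tblB_colC_colA ON tblB (colC, colA)

Because this index covers both columns the query needs, the optimizer can scan the compact index (already ordered by colC, which also helps the GROUP BY) instead of the full table.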
Hi all, is it possible to get the name and size of each index in a table? Please let me know. In SQL 7 we could do this using EM, but in SQL 2000 I'm not sure how to do this.
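A hedged sketch for SQL 2000, using sp_helpindex for the names and the sysindexes system table for an approximate size (8 KB per page); MyTable is a placeholder:

-- index names and key columns
EXEC sp_helpindex 'MyTable'

-- approximate size of each index
SELECT name, used * 8 AS approx_size_kb
FROM sysindexes
WHERE id = OBJECT_ID('MyTable')
  AND indid BETWEEN 1 AND 250   -- skips the heap (0) and text/image pages (255)
  AND name IS NOT NULL
-- note: rows for auto-created statistics (_WA_Sys...) may also appear here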