I've read that table variables give better performance than temporary tables because they are kept in memory and don't need to be recorded in the transaction log, etc. However, I have a stored procedure that takes 0.183 seconds to execute, but when I change the one temporary table used in the procedure to a table variable, the execution time increases to 0.223 seconds.
Not much of an increase, I admit, but it just seems contrary to what I've read.
I want to get the best performance possible, so can someone explain to me what is going on?
Can someone point me to some good articles, or perhaps directly supply some words of wisdom, regarding wise use of variables within a T-SQL script from the standpoint of conserving memory and improving execution cost?
For example:
(1) Is it better to use varchars, nvarchars, etc. defined with minimal lengths to support the needs of the script, or is it just as efficient to declare them all with a length of, say, 4,000?
(2) I've seen behavior that leads me to believe that when passing a variable as a parameter in a nested procedure call, if the declared types of the parameter and the variable being passed in don't match (i.e. one is numeric(38,10) and the other is int), then implicit type conversions hurt performance. Is this true and how broadly does it apply?
(3) Does the number of variables declared in a script materially impact performance and/or resource utilization?
(4) Is it more efficient to have a series of variable value assignments in a single SELECT statement versus a series of SET statements? Should I always prefer one to the other? Only within a looping construct?
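On point (4), here is a small sketch to make the comparison concrete (the table and column names are invented): SET assigns one variable per statement, while a single SELECT can assign several at once and can also pull the values from a table.

DECLARE @FirstName nvarchar(50), @LastName nvarchar(50)

-- one statement per assignment
SET @FirstName = N'Jane'
SET @LastName  = N'Doe'

-- the same assignments in one SELECT, reading from a (hypothetical) table
SELECT @FirstName = FirstName,
       @LastName  = LastName
FROM dbo.Employees
WHERE EmployeeId = 42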
insert into #t(branchnumber) values (005)
insert into #t(branchnumber) values (090)
insert into #t(branchnumber) values (115)
insert into #t(branchnumber) values (210)
insert into #t(branchnumber) values (216)
[code]....
I have a parameter which should take multiple values and pass them to the code that I use. For this, I created a parameter and, temporarily for testing, I am passing some values into it. Using dynamic SQL I am converting the multiple values into multiple records as rows in another variable (called @QUERY). My question is: how do I insert the values from the variable into a table (table variable, temp table, or CTE)? Or is there any way to parse the multiple values into a table, so that if we pass multiple values into a parameter, they go into the table as rows?
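On SQL Server 2016 or later, one hedged way to turn such a parameter into rows without dynamic SQL is the built-in STRING_SPLIT (the parameter value and column name below are made up for the sketch):

DECLARE @Params varchar(max) = '005,090,115,210,216'
DECLARE @t TABLE (branchnumber varchar(10))

INSERT INTO @t (branchnumber)
SELECT value
FROM STRING_SPLIT(@Params, ',')

SELECT * FROM @t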
Simple example:

declare @tTable table (col1 int)
insert into @tTable(col1) values (1)
select * from @tTable
Works perfectly in SQL Server Management Studio, and the database connection is OK too, as I can generate a PP table using complex (or simple) queries without difficulty.
But when trying to get this same result in a PP table I get an error, and the same happens when replacing the table variable with a temporary table.
Message: OLE DB or ODBC error. .... The current operation was cancelled because another operation in the transaction failed.
I am trying to use a stored procedure to update a column in a SQL table using the value from a table variable. I'm getting errors because my syntax is not correct. I think table aliases are not allowed in UPDATE statements.
This is my statement:
UPDATE [dbo].[sessions_teams] stc
SET stc.[Talks] = fmt.found_talks_type
FROM @Find_Missing_Talks fmt
WHERE stc.sessionid IN (SELECT sessionid FROM @Find_Missing_Talks)
  AND stc.coupleid IN (SELECT coupleid FROM @Find_Missing_Talks)
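For what it's worth, SQL Server does accept an alias on the target table when the table is repeated in the FROM clause, and joining directly on both keys avoids the two IN subqueries. A possible rewrite, assuming @Find_Missing_Talks carries sessionid and coupleid columns:

UPDATE stc
SET stc.[Talks] = fmt.found_talks_type
FROM [dbo].[sessions_teams] AS stc
INNER JOIN @Find_Missing_Talks AS fmt
    ON stc.sessionid = fmt.sessionid
   AND stc.coupleid = fmt.coupleid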
Can someone tell me if it is possible to add an index to a table variable that is declared as part of a table-valued function? I've tried the following but I can't get it to work.
ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence
(
    @param1 int,
    @param2 int
)
RETURNS @Result TABLE (resultcol1 int, resultcol2 int)
AS
BEGIN
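One approach that does work is to declare PRIMARY KEY or UNIQUE constraints in the RETURNS table definition; each constraint builds an index on the table variable. A sketch along those lines (which columns to key is an assumption):

ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence
(
    @param1 int,
    @param2 int
)
RETURNS @Result TABLE
(
    resultcol1 int NOT NULL PRIMARY KEY,       -- clustered index via the constraint
    resultcol2 int,
    UNIQUE (resultcol2, resultcol1)            -- second index via a unique constraint
)
AS
BEGIN
    -- ... populate @Result here ...
    RETURN
END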
My stored procedure has one table variable (@t_Replenishment_Rpt). I want to create an index on this table variable. Please advise on any way to do this in the loop... Below is my table variable; I need to create three indexes on it...
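If the server is SQL Server 2014 or later, indexes can be declared inline in the table variable definition; on older versions only PRIMARY KEY and UNIQUE constraints can create indexes on a table variable. A hedged sketch with invented columns, since the real definition wasn't posted:

DECLARE @t_Replenishment_Rpt TABLE
(
    ItemNo varchar(25) NOT NULL PRIMARY KEY,             -- index 1: clustered, via the constraint
    Warehouse varchar(10) NOT NULL INDEX IX_Warehouse,    -- index 2: inline nonclustered (2014+)
    ReplenQty int NULL,
    INDEX IX_Warehouse_Qty (Warehouse, ReplenQty)         -- index 3: composite inline index (2014+)
)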
Hi all, hope someone can help me... I'm trying to highlight the advantages of using table variables as opposed to temp tables within a single scope. My manager seems to believe that table variables are not advantageous because they reside in memory. He also seems to believe that temp tables do not use memory... Does anyone know how SQL Server could read data from a temp table without passing the data contained therein through memory? Is this a valid advantage/disadvantage of table variables vs temp tables?
SQLLY challenged, be gentle -- trying to create code that will drop a table using a variable as the table name.

DECLARE @testname as char(50)
SELECT @testname = 'CO_Line_of_Business_' + SUBSTRING(CAST(CD_LAST_EOM_DATE AS varchar), 5, 2) + '_' + LEFT(CAST(CD_LAST_EOM_DATE AS varchar), 4) + '_' + 'EOM'
FROM TableName
Print @testname

Print @testname gives 'blah...blah...blah' (which is the actual table name on the server). How can I use this variable (@testname) to drop the table? Under severe time constraints, so any help would be greatly appreciated.
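Once @testname holds the name, the DROP itself has to go through dynamic SQL, since DROP TABLE does not accept a variable as the object name; a minimal sketch:

DECLARE @sql nvarchar(200)
SET @sql = N'DROP TABLE ' + QUOTENAME(RTRIM(@testname))   -- RTRIM strips the char(50) padding
EXEC sp_executesql @sql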
In a previous post "Could #TempTable within SP cause lock on tempdb?" http://forums.microsoft.com/msdn/showpost.aspx?postid=2691763&siteid=1
It was obvious that we have to limit the use of #temp tables to a minimum. Let's assume that some of the temp tables are really difficult to replace and we have to live with them.
Would it be easier on tempdb if the #TempTable is replaced by a table variable? Or do they all end up in tempdb?
I have a stored procedure. Inside this stored procedure I have a table variable with one column. Once the table variable is populated with rows, I would like to pass each value in the table into a table-valued function. The table-valued function may return any number of rows. I would like all the rows the TVF returns to be returned from the stored procedure as a single result set. I would also like to do this without defining a table variable to hold the results of the table-valued function.
Code Snippet
declare @IdTable table ( EmployeeId nvarchar(16) not null )

insert into @IdTable
select EmployeeNumber from Employees

/* I need to run this query for every EmployeeId value in @IdTable and
   return the results from the stored proc as a single result set. */
select * from fn_GetEmployeeById( EmployeeId )
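One way to do this without a second table variable is CROSS APPLY (SQL Server 2005 or later), which runs the table-valued function once per row of @IdTable and returns everything as a single result set; a sketch using the names above:

select f.*
from @IdTable as ids
cross apply dbo.fn_GetEmployeeById(ids.EmployeeId) as f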
In my stored procedure I have a multi-valued varchar(max) parameter, and I wrote a table-valued function that takes the varchar(max) and returns a table back to the stored procedure, where I insert it into a @table variable. Just wondering, is there a better and faster way of doing this?
ALTER PROCEDURE [dbo].[rpt]
(
@CourtIDs as nvarchar(MAX) -- @CourtIDs = '1231,3432,1234,3421'
) AS
--split CourtIDs into a table
DECLARE @tbCourtIDs table(CourtID int NOT NULL PRIMARY KEY)
INSERT INTO @tbCourtIDs
select * from dbo.Split(@CourtIDs, ',')
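If the server is SQL Server 2016 or later, a hedged alternative is the built-in STRING_SPLIT, which avoids the user-defined Split function entirely (the DISTINCT guards the PRIMARY KEY against repeated IDs):

DECLARE @tbCourtIDs table(CourtID int NOT NULL PRIMARY KEY)

INSERT INTO @tbCourtIDs (CourtID)
SELECT DISTINCT CAST(value AS int)
FROM STRING_SPLIT(@CourtIDs, ',')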
Hi, I have a small theoretical issue.

I have one table which is pretty large (over 5 million records). There are a lot of evaluations running on this table, which is why each process needs to wait for another to finish. Sometimes, for some critical functions, it takes too long. I don't think I can speed up the processes by changing the indexes on the table (to improve scan time, for example), because that is something I have already experimented with, and it was not good enough.

My question is: will it improve performance if I create a second table, exactly like this one, and split the evaluations, so that the ones that definitely need to run on the source table run on the first one, and the other evaluations run on the second one? To keep the data consistent between these two tables, I was thinking about a trigger on insert on the mother table which would transport the data to the other one.

The second part is: to improve SELECTs on the table, should I set indexes with a fill factor as close as possible to 100% or as close as possible to 0%? Or maybe should I set the pad index option? What about clustered indexes: is it better to use them if I would like to increase performance for SELECTs?

Thanks in advance
Mateusz
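For the consistency part, a minimal sketch of the insert trigger idea mentioned above (all table and column names are placeholders, not the real schema):

CREATE TRIGGER trg_MotherTable_CopyInserts
ON dbo.MotherTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- copy every newly inserted row into the twin table used by the other evaluations
    INSERT INTO dbo.MotherTable_Copy (Id, EventDate, Payload)
    SELECT Id, EventDate, Payload
    FROM inserted;
END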
I find that joining to the same table twice is OK, but as soon as you do it three or more times you get a massive performance hit. Does anyone know the reason for this? What's special about three? What's the best approach for this sort of thing? (I've used the SQL Server 2005 Tuning Advisor to add indexes for the query.)
Rather than:

Select ..., sum(a1.<column>), sum(a2.<column>), sum(a3.<column>)
from master_table
left join table_1 a1 on ...
left join table_1 a2 on ...
left join table_1 a3 on ...
group by ...

I have to select from the table once and filter with CASE:

Select ...,
    sum(case when table_1.<column> = '...' then ... end) as a1,
    sum(case when table_1.<column> = '...' then ... end) as a2,
    sum(case when table_1.<column> = '...' then ... end) as a3
from master_table
left join table_1 on ...
group by ...
I know we are not allowed to benchmark SQL Server but..... It would be nice to have material to present which demonstrates the performance gains using a queue compared to insert/delete in a SQL table.
Logically it seems faster to use a queue due to the conversation grouping locking and the service broker itself. But there seems to be some overhead involved just to manage these queues that the service broker has to perform.
I am sure we are not the only ones trying to figure out whether we will get a boost in performance by using a Service Broker queue between services rather than a table to queue data. What is available to help understand the performance gains of using a queue?
Hi all, if I have a function which returns a table, and I need to work with the returned table many times in the stored procedure, should I use a temporary table or a table variable to store the returned table? Or is there a better way of doing that?
I want to save some temporary data in the stored procedure. Comparing a temporary table and a table variable, which one is better in terms of performance?
Hi, I have a denormalized table (done so with reason) with around 40 columns. I would never have to retrieve data from all of those columns together. I haven't done any performance measurements yet, but I'm just wondering if anyone has a ready answer to this: will there be performance degradation if I retrieve data from a table with many columns, even if not all columns are referenced in the query? (To keep it simple, let's assume all the columns are varchar; I just want to find out whether performance degrades when there are too many columns in a table.)
I will be receiving data from a company called Loan Performance that has one file/table that will hold 1 billion rows. They send data by period, and I plan to load the data via BCP in NT/DOS scripts. The 1 billion rows represent data for 200+ periods.

Are the following design plans feasible?

1. Partition the table by period value. I'm not sure of the max number of partitions per table in 2005, but I think we have period data back to 1992 and a new one gets created every month, so the possibility of having > 1000 partitions exists. I plan on just pre-creating partitions for future data, instead of dynamically creating them when a new period is sent.

2. Load data via BCP in DOS shell scripts that will drop the index (by partition), BCP in the data, and then re-create the indexes by partition. Is this possible? And will I see a performance increase as opposed to one huge table (I'm pretty sure I will)? There is usually one period's data present per day, but sometimes the vendor resends all data (which would get loaded on the weekend).

I'm a bit unsure of where to start, as I have never worked with this amount of data. I worked with partitioning in Oracle a long time ago. I plan on having a 2x QuadCore 2.66 GHz CPU with 32 GB of RAM and SQL 2005 EE 64-bit connected to a 1 terabyte SAN disk.

Thanks all,
PMA
I created two tables, one based on a partition structure and one on a non-partition structure.

Filegroups = Jan, Feb, ..., Dec. Partition function boundaries = '20060101', '20060201', ..., '20061201'. I am using a RIGHT range in the partition function. Then I defined a partition scheme on the partition function.

I have more than 700,000 rows in my database. I checked the filegroups and row counts, and it works fine.

But when I check the estimated execution plan and the timing for the query, it is the same for both the partitioned table and the non-partitioned table.
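For reference, a minimal sketch of the RIGHT-range setup described above, shortened to three boundaries (the table name is invented; the Jan/Feb/Mar filegroups are assumed to already exist):

CREATE PARTITION FUNCTION pfMonth (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20060201', '20060301')   -- one boundary per month

CREATE PARTITION SCHEME psMonth
AS PARTITION pfMonth TO ([PRIMARY], Jan, Feb, Mar)               -- RIGHT range: boundaries + 1 filegroups

CREATE TABLE dbo.SalesPartitioned (
    SaleDate datetime NOT NULL,
    Amount money NULL
) ON psMonth (SaleDate)

With only around 700,000 rows, a difference in the estimated plan or timing is unlikely unless the query filters on the partitioning column so that partitions can actually be eliminated.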
Hello -- thank you for taking the time to read this.
I have a very large table that is used both for archives and new information. To get the current information, the table is queried by many different users at various polling periods. The SELECT required includes about fifteen JOINS, and only returns about 200 rows at any given time.
So I got to thinking it might be faster to periodically run the big query as a SELECT INTO into a smaller table and let the polling clients query the smaller table with SELECT *. Periodically, the smaller table would be dropped and refreshed with another SELECT INTO.
Trouble is, the data would have to be updated once every 30 seconds, and there are inbound polls coming at the rate of about 200 per minute. It got me thinking about what might happen if a client attempted to query the smaller table while it was in the process of being dropped and refilled.
So my question is three-part:
1) assuming a larger table of about 500,000 records and only 500 pertinent at any given time, is there any real potential of performance enhancement by switching to a SELECT INTO table?
2) if so, is there a chance of a client failing a query if the inbound query somehow collides with the DROP/SELECT INTO procedure?
3) if so, is there any way to prevent it or a better way of doing this?
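On question 3, a hedged sketch of one way to avoid the drop/refill window: build the refreshed copy under a different name and swap the names inside a short transaction, so pollers always see a complete table (all object and column names below are invented stand-ins for the real fifteen-join query):

SELECT SessionId, Status, LastUpdated
INTO dbo.CurrentRows_New
FROM dbo.BigArchiveTable
WHERE Status = 'ACTIVE';

BEGIN TRAN;
    IF OBJECT_ID('dbo.CurrentRows_Old', 'U') IS NOT NULL
        DROP TABLE dbo.CurrentRows_Old;
    EXEC sp_rename 'dbo.CurrentRows', 'CurrentRows_Old';
    EXEC sp_rename 'dbo.CurrentRows_New', 'CurrentRows';
COMMIT;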
Thanks again for reading, and in advance for any help you can provide. I apologize if I sound like a dummy - it's hard to fake intelligence!
Working with SQL Server 2000, I have a table with the following structure: ID (INT), userID (INT, foreign key), productID (INT), productQTY (DECIMAL(5,2)), purchaseDate (smalldatetime).
I have about 1,000 users, each entering about 20-30 rows per day, i.e. ~20,000-30,000 new rows per day. The table might be queried with a simple SELECT for the products a user ordered per day or per time frame (purchaseDate column). My question (finally) is: when should I expect to see performance degradation? Is there anything I can do to prevent it (e.g. splitting this table somehow into several tables)?
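For what it's worth, a hedged sketch of the kind of index that usually keeps that query pattern fast as the table grows (the table name is invented; the columns follow the structure above, and the syntax is SQL Server 2000 compatible):

CREATE TABLE dbo.Purchases (
    ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    userID int NOT NULL,
    productID int NOT NULL,
    productQTY decimal(5,2) NOT NULL,
    purchaseDate smalldatetime NOT NULL
)

-- supports "orders for a user per day / per time frame" without scanning the table
CREATE NONCLUSTERED INDEX IX_Purchases_User_Date
    ON dbo.Purchases (userID, purchaseDate)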
Monthly, I copy a table from one database to another database. Deleting the original table and copying the table back speeds up the query on the order of 10 to 1. Why does this work?
Detail: I have a legacy table that a small application queries about once a month. The table was poorly designed, and the query runs a date-range comparison on one field and has a subquery that runs string comparisons against six fields. I cannot change the calling app or the table design. When the app calls the query, the call times out due to the inordinate length of time. To fix this until next month's query, I copy the table out, delete the original, and copy it back. What changes when the table is copied to another database and then copied back? The query time drops from 10 seconds to 1.
Hello all, I have the following problem. Please forgive me for not posting a script, but I think it won't help anyway.

I have a table which is quite big (over 5 million records). This table contains one field (varchar(100)) which contains some data in a chain. There is a view on this table to present the data to the user. The problem is that the view needs to display some data from this one large field (using the SUBSTRING function or an inline function returning the value). The user, in the application, is able to filter and sort by these fields.

The whole situation gets more complicated if I combine this table with another one, where there is an additional, much larger field from which I need to select data in the same way.

Problem: it takes TOO LONG to select the data according to the user's request (the user accesses the view, not the table directly).

Now the questions:
- Is using SUBSTRING (as in the example) a good solution, or is it better to use an inline function which returns the part of the data set (probably there is no difference)?
- Would it be much faster if I added some fields to Source_Table, also containing varchar data but only the part I'm interested in, and bound these fields in the view instead of using the SUBSTRING function?

Small example:

CREATE TABLE [dbo].[Source_Table] (
    [CID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
    [MSrepl_tran_version] uniqueidentifier ROWGUIDCOL NULL ,
    [Date_Event] [datetime] NOT NULL ,
    [mama_id] [varchar] (6) COLLATE Latin1_General_CI_AS NOT NULL ,
    [mama_type] [varchar] (4) COLLATE Latin1_General_CI_AS NULL ,
    [tata_id] [varchar] (4) COLLATE Latin1_General_CI_AS NOT NULL ,
    [tata_type] [varchar] (2) COLLATE Latin1_General_CI_AS NULL ,
    [loc_id] [nvarchar] (64) COLLATE Latin1_General_CI_AS NOT NULL ,
    [sn_no] [smallint] NOT NULL ,
    [tel_type] [smallint] NULL ,
    [loc_status] [smallint] NULL ,
    [sq_break] [bit] NULL ,
    [cmpl_data] [varchar] (100) COLLATE Latin1_General_CI_AS NOT NULL ,
    [fk_cmpl_erp_data] [numeric](18, 0) NULL ,
    [erp_dynia] [bigint] NULL
) ON [PRIMARY]
GO

create view VIEW_AllData
as
select top 100 percent
    isnull(substring(RODZ.cmpl_data,27,10),'-') as ASO_NO,
    (RODZ.mama_type + RODZ.mama_Id) as MAMA,
    isnull(substring(RODZ.cmpl_data,45,5),'-') as MI,
    isnull(substring(RODZ.cmpl_data,57,3),'-') as ctl_EC,
    isnull(substring(RODZ.cmpl_data,60,3),'-') as ctl_IC,
    RODZ.Date_Event as time_time,
    RODZ.sn_no as SN
FROM Source_Table RODZ with (nolock)
go

Thanks in advance
Mateusz
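On the second question, a hedged sketch (SQL Server 2005 or later for PERSISTED) of the "extra fields" idea: a persisted computed column stores the substring once and can be indexed, so the view no longer has to recompute it for every row. ASO_NO mirrors the first column of the view above.

ALTER TABLE dbo.Source_Table
    ADD ASO_NO AS isnull(substring(cmpl_data, 27, 10), '-') PERSISTED;

CREATE NONCLUSTERED INDEX IX_Source_Table_ASO_NO
    ON dbo.Source_Table (ASO_NO);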