When I import data, I first load it from a text file into a table of its own, then use some logic to insert some of the records into a permanent table.
I am considering three options for the table that the text-file data lands in: keep it around all the time and just clear it out after each import, create it and then drop it after use, or use a temporary table.
What are the advantages of a temporary table as opposed to creating a
regular table and dropping it after use?
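For context, here is a minimal sketch of the staging pattern being described, assuming a hypothetical file path, staging layout, and target table:

CREATE TABLE #ImportStaging (RawLine VARCHAR(4000));

-- Load the raw text file into the staging table (the path is an example only).
BULK INSERT #ImportStaging
FROM 'C:\Import\data.txt';

-- Apply the business rules and move the qualifying rows to the permanent table.
INSERT INTO dbo.PermanentTable (SomeColumn)
SELECT RawLine
FROM #ImportStaging
WHERE RawLine IS NOT NULL;   -- stand-in for the real filtering logic

DROP TABLE #ImportStaging;   -- or let it vanish automatically when the session ends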
Hi, I want to know whether there is any performance advantage to using derived tables over temp tables, and which one is better to use. Can I create indexes on a derived table, or insert/update records in one?
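For illustration, a sketch of the two approaches (dbo.Orders and its columns are made up for the example). A derived table exists only inside the query that defines it, while a temp table is a real object in tempdb that can be indexed and modified:

-- Derived table: defined inline in the FROM clause; it cannot be indexed
-- or updated as a separate object.
SELECT d.CustomerID, d.OrderTotal
FROM (SELECT CustomerID, SUM(Amount) AS OrderTotal
      FROM dbo.Orders
      GROUP BY CustomerID) AS d
WHERE d.OrderTotal > 1000;

-- Temp table: materialized in tempdb; indexes, inserts, and updates are allowed.
SELECT CustomerID, SUM(Amount) AS OrderTotal
INTO #OrderTotals
FROM dbo.Orders
GROUP BY CustomerID;

CREATE INDEX IX_OrderTotals ON #OrderTotals (OrderTotal);
UPDATE #OrderTotals SET OrderTotal = 0 WHERE CustomerID = 42;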
I think this is a very simple question, but I don't know the answer: what is the difference between a regular temp table and a global temp table? I need to create a temp table within a stored procedure that all users will call. I want the table recreated each time someone runs the procedure, though, because some of the same data may be inserted again and I don't want any PK errors.
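As a quick illustration of the difference (the table layout here is made up):

-- Local temp table: visible only to the session/procedure scope that created it.
-- Each caller of the stored procedure gets its own copy, so there are no PK
-- collisions between users.
CREATE TABLE #Work (ID INT PRIMARY KEY);

-- Global temp table: a single shared object visible to every session until the
-- creating session ends and no other session is still using it.
CREATE TABLE ##Work (ID INT PRIMARY KEY);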
Hi folks. I'm trying to connect to a Medisoft Advantage SQL db through SQL Server using OpenDataSource or OpenRowSet. I have general connections to the db working fine, but not with OpenDataSource or OpenRowSet. I've tried variations on:

select * from OpenDataSource('Advantage OLE DB Provider',
    'Data Source=C:MediDataTutormwddf.add;User ID=user;Password=password;Advantage Server Type=ADS_LOCAL_SERVER;Initial Catalog=mwddf.add;')...MWPAT

Which gives:

OLE DB error trace [OLE/DB Provider 'Advantage OLE DB Provider' IColumnsInfo::MapColumnIDs returned 0x80040e21: [COLUMN_NAME=TABLE_CATALOG ORDINAL=-1], [COLUMN_NAME=TABLE_SCHEMA ORDINAL=-1], [COLUMN_NAME=TABLE_NAME ORDINAL=-1], [COLUMN_NAME=TABLE_TYPE ORDINAL=-1], [COLUMN_NAME=TABLE_GUID ORDINAL=-1]].

I've also tried:

select * from Openrowset('Advantage OLE DB Provider',
    'Data Source=C:MediDataTutormwddf.add;User ID=user;Password=password;Initial Catalog=mwddf.add;Advantage Server Type=ADS_REMOTE_SERVER', MWPAT)

and 'Select * from MWPAT'. These last yield:

[OLE/DB provider returned message: No Data Source specified]

which seems closer. Can anyone help? I think this is all Advantage-specific. I've posted on their boards, but they're not very active... Thanks. David
Can anyone tell me the advantage of using the new catalog views over the traditional system tables, which are exposed as compatibility views in SQL Server 2005?
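For comparison, a small example of the two styles of metadata query; both list the user tables, but the catalog view is the documented, forward-compatible interface and exposes richer metadata:

-- Compatibility view, kept for backward compatibility with SQL Server 2000 scripts.
SELECT name, xtype
FROM sysobjects
WHERE xtype = 'U';

-- Catalog view, the documented interface from SQL Server 2005 onward.
SELECT name, type_desc, create_date, modify_date
FROM sys.tables;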
Hi all, I need to create a report in Reporting Services. There are two ways to develop a report: using a "Report Server Project" or a "Report Model Project". I need to know the purpose of each and which one is more advantageous.
I am inserting into a temp table without ever creating it, but this does not give any compilation error. Only when I execute the stored procedure do I get the error message that the temp table is invalid. Shouldn't this result in a compilation error rather than an error at execution time?
-- Create the procedure and insert into the temp table without creating it first.
-- No compilation error.
CREATE PROC testTemp
AS
BEGIN
    INSERT INTO #tmp (dt)
    SELECT GETDATE()
END
Only on calling the proc does this give an execution error.
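This is deferred name resolution: object references inside a procedure are only resolved at execution time, so the missing #tmp is not caught when CREATE PROC runs. A sketch of one way to avoid the runtime error (the procedure name is hypothetical):

CREATE PROC testTemp2
AS
BEGIN
    -- Create the temp table inside the procedure before inserting into it.
    CREATE TABLE #tmp (dt DATETIME);

    INSERT INTO #tmp (dt)
    SELECT GETDATE();
END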
Simple example:

declare @tTable table (col1 int)
insert into @tTable (col1) values (1)
select * from @tTable
This works perfectly in SQL Server Management Studio, and the database connection is fine too, since I can generate a PP table using complex (or simple) queries without difficulty.
But when I try to get this same result into a PP table I get an error; the same happens when replacing the table variable with a temporary table.
Message: OLE DB or ODBC error. .... The current operation was cancelled because another operation in the transaction failed.
If the source has a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data into temporary storage. If the table is big, this approach can cause issues.
An example of the script is below; in the source project I added the columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
    [MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data or whether it has a clustered or nonclustered PK.
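What the poster is hoping SqlPackage would emit instead is a plain column addition along these lines (the data types are placeholders, since the real column definitions are not shown in the post):

ALTER TABLE [dbo].[MyTable]
    ADD [MyColumn_LINE_1] VARCHAR (50) NULL,
        [MyColumn_LINE_5] VARCHAR (50) NULL;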
The SP UserPersist_GetByCriteria does a "SELECT * FROM tbl_User WHERE gender = @Gender AND culture = @Culture", so why am I receiving this error when both tables have the same structure?
The error is being reported as coming from UserPersist_GetByCriteria on the "SELECT * FROM tbl_User" line.
On 64 bit SSIS platforms (Win2k3 and Vista), where the amount of virtual memory is practically unlimited, is there any advantage to running child Integration Services Execute Package Tasks out of process? The question pertains to 64 bit processes on a 64 bit platform, not 32 on 32, or 32 bit emulation processes on 64.
Would the answer be different based on the amount of physical memory on the machine, supposing one had a "reasonable" amount (e.g. 4 GB or more)?
"A data source can be defined one time and then referenced by connection managers in multiple packages. You use a data source object in a package by adding a connection manager that references the data source object to the package. There is no dependency between a data source and the connection managers that reference it."
I have created a data source (DS1) and set it to point to a database, say DB1. In the connection managers area, I create a connection manager CM1 using the data source DS1. Now I edit DS1 to point to a different database, DB2. When I open CM1, however, it's still pointing to DS1.
I guess this is because, as stated, there is no dependency between the connection manager and the data source. My question is: what exactly is the advantage of using a data source?
I want to insert data from a temp table into another table. The only condition is that it needs to be sorted by tool number and tool date; for example, if we have ten records for tool number 1000, they should be ordered by tool number and then by tool_dt. Neither table has a primary key. Please find my code below; I removed all the unnecessary columns for simplicity.

INSERT INTO tool_summary (tool_nbr, tool_dt)
SELECT tool_nbr, tool_dt
FROM #tool
ORDER BY tool_nbr, tool_dt

But this query is not working as expected; the data is getting shuffled.
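Worth noting for the example above: rows in a table have no inherent order, so the ORDER BY on the INSERT does not by itself make later reads come back sorted. A sketch of one way to pin the order down, assuming a new sequence column may be added (the column name row_seq is made up):

-- An IDENTITY column assigns its values in the order requested by INSERT ... ORDER BY.
ALTER TABLE tool_summary ADD row_seq INT IDENTITY (1, 1);

INSERT INTO tool_summary (tool_nbr, tool_dt)
SELECT tool_nbr, tool_dt
FROM #tool
ORDER BY tool_nbr, tool_dt;   -- controls identity assignment, not physical storage

-- Ordering still has to be asked for at read time.
SELECT tool_nbr, tool_dt
FROM tool_summary
ORDER BY row_seq;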
We have a job that loads data from an Oracle DB into our SQL Server 2000 DB twice a day. The schedule has just changed, so now there is a possibility of my west coast users being impacted when it runs at 5 PM PST and my east coast users being impacted when it runs at 7 AM EST. As a workaround, I have developed a DTS package that loads the data into temp tables instead of the real tables, i.e. Oracle -> XTable_temp instead of Oracle -> XTable. The load sometimes takes an hour to an hour and a half, so this solution works great, but I then want to lock the table, delete it, and rename the temp table to XTable. The pseudo code would be:
Begin Transaction
Lock Table XTable
Drop XTable
Alter Table XTable_temp rename to XTable
Release Lock XTable
End Transaction
Create XTable_temp
I see two issues with this solution: 1) even if I can lock XTable, I think the lock would be released when the table is dropped and XTable_temp is being renamed, and 2) I can't find a command to rename a table.
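On point 2, a possible sketch of the swap using sp_rename (which does exist in SQL Server 2000); wrapping the drop and rename in one transaction holds schema locks on the objects, so readers cannot slip in between the two steps:

BEGIN TRANSACTION;
    DROP TABLE XTable;
    EXEC sp_rename 'XTable_temp', 'XTable';
COMMIT TRANSACTION;

-- Recreate XTable_temp afterwards, ready for the next load.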
I want to pass the 'inserted' table from a trigger into an SP. I think I need to do this by dumping the inserted table into a temporary table and passing the temp table. However, I need to do this for many tables and don't want to list all the column names for each table/trigger (a maintenance nightmare).
Can I dump the 'inserted' table to a temp table WITHOUT specifying the column names?
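A minimal sketch of that idea; SELECT ... INTO infers the column list from the 'inserted' pseudo-table, so nothing has to be listed by hand (the table and procedure names are made up):

CREATE TRIGGER trg_MyTable_AfterInsert ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    -- Copies every column of 'inserted' without naming them.
    SELECT * INTO #Inserted FROM inserted;

    -- Temp tables created here are visible to procedures called from this scope.
    EXEC dbo.ProcessInsertedRows;
END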
Suppose we do not have a foreign key; what advantage do we lose? We can still fetch data from two tables by joining them in SQL, so why is a FK required?
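A join works with or without the constraint; what the foreign key adds is enforcement. A small made-up example:

CREATE TABLE dbo.Customer (CustomerID INT PRIMARY KEY);

CREATE TABLE dbo.CustomerOrder (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES dbo.Customer (CustomerID)   -- the FK rejects orphan rows
);

-- Fails if Customer 999 does not exist; without the FK it would silently succeed
-- and leave an order pointing at a customer that is not there.
INSERT INTO dbo.CustomerOrder (OrderID, CustomerID)
VALUES (1, 999);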
I need to decide which is better to use in SQL 2000 stored procedures: a global temp table (I can't use a local one) or a permanent table. I extract data from a linked server table and update several tables on our server. Those procedures are scheduled to run every 3 hours.
Another question: for some reason, when I used a global temp table, I wasn't able to schedule multiple job steps with each step executing one of the stored procedures. I think global temp tables should be visible to other stored procedures, right?
Hi everyone, I'm fairly new to SQL and right now I am struggling with a script. I am trying to extract data from a normal table into a temporary table, update it in the temporary table, then put it back into the normal table. I'll display my code; let me know what you think, and any suggestions are appreciated. Thanks a lot.
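Since the script itself isn't included here, a generic sketch of the "copy out, fix up, copy back" pattern, with made-up table and column names:

-- Copy the rows to work on into a temp table.
SELECT KeyCol, ValueCol
INTO #Scratch
FROM dbo.SourceTable;

-- Do the clean-up in the temp table (stand-in for the real logic).
UPDATE #Scratch
SET ValueCol = UPPER(ValueCol);

-- Push the changes back to the normal table.
UPDATE s
SET s.ValueCol = t.ValueCol
FROM dbo.SourceTable AS s
JOIN #Scratch AS t
    ON t.KeyCol = s.KeyCol;

DROP TABLE #Scratch;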
Hi all, hope someone can help me... I'm trying to highlight the advantages of using table variables as opposed to temp tables within a single scope. My manager seems to believe that table variables are not advantageous because they reside in memory. He also seems to believe that temp tables do not use memory... Does anyone know how SQL Server could read data from a temp table without passing the data contained therein through memory? Is this a valid advantage/disadvantage of table variables vs. temp tables?
In a previous post, "Could #TempTable within SP cause lock on tempdb?" (http://forums.microsoft.com/msdn/showpost.aspx?postid=2691763&siteid=1),
it became clear that we have to limit the use of #Temp tables to a minimum. Let's assume that some of the temp tables are really difficult to replace and we have to live with them.
Would it be easier on tempdb if the #TempTable is replaced by a table variable? Or do they all end up in tempdb?
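They do all end up in tempdb; a quick way to see it for yourself (the name shown for the table variable is system-generated):

CREATE TABLE #T (i INT);
DECLARE @T TABLE (i INT);

-- Both objects appear in tempdb's catalog; the table variable shows up under
-- a generated hex-style name.
SELECT name, create_date
FROM tempdb.sys.tables
WHERE create_date >= DATEADD(MINUTE, -1, GETDATE());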