If you add a "Union All" task to your Data Flow, its output columns get configured by the first input you connect to it.
However, if you later want to modify the columns on the inputs (perhaps after adding an upstream Data Conversion), there seems to be no way to get the task to forget its original set of output columns.
Ideally, there would be a button you could press to reset the output columns based on the earliest added input. At the very least, the task ought to forget its output columns when the last input is removed.
Other tasks offer a dialog to resolve input/output mismatches, but not Union All.
I have a package that takes data from 3 different sources, unions them all together, and then performs a lookup against a table on a different server. The 3 sources create a dataset with quite a large volume of rows (approx. 100,000), and the lookup takes an age to complete.
Is there any way of adding an index of sorts to the dataset created by the union to speed the process up? I guess I could output the Union All to a temp table and look up against that, but I wondered if there was another way.
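For reference, a rough sketch of the temp-table idea I had in mind (all object names are made up, and the four-part name assumes a linked server to the other box):

CREATE TABLE dbo.Staging_UnionedRows (
    LookupKey    INT NOT NULL,
    SourceSystem VARCHAR(20) NOT NULL
);

-- the Data Flow writes the unioned rows here, then:
CREATE INDEX IX_Staging_UnionedRows_LookupKey
    ON dbo.Staging_UnionedRows (LookupKey);

SELECT s.LookupKey, s.SourceSystem, l.LookupValue
FROM dbo.Staging_UnionedRows AS s
INNER JOIN RemoteServer.RemoteDb.dbo.LookupTable AS l
    ON l.LookupKey = s.LookupKey;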
I've created a Union All data flow task which unions string fields from a couple of queries. I found that I needed to increase the length of the string field in the queries. Increasing the length causes an error in my Union All task, and I can't seem to find a way to change the properties of the fields in it. Most of the other data flow tasks have an advanced editor where I can change the length or type of the fields on the input and/or output; I can't find this for the Union All task.
Is there any way short of deleting the output column and recreating it? (I like the order of my original output columns.)
I'm two weeks old with SQL 7 Beta 3 and have no formal training whatsoever. I just kind of tinker with it at the moment, since it's installed on a standalone server with me having sole access.
Using DTS (import into SQL), I tried to migrate an Inventory History .dbf (dBase III) with 13+ million records. I got this error message after a few hours:
"Error at Destination for Row number 6353502. Errors encountered so far in this task: 1. SQL Server has run out of LOCKS. Rerun your statement where there are fewer active users, or ask the system administrator to reconfigure SQL Server with more locks."
It was the only application running on the server (aside from the SQL services) and I was the sole user. This is my first time on SQL Server, and as the DBA I should know everything about it.
The question is: how and where do I reconfigure for more LOCKS? And how many LOCKS do I have to set?
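For reference, the locks setting is exposed through sp_configure; something along these lines is what "reconfigure with more locks" means ('locks' is an advanced option, so 'show advanced options' may need to be enabled first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- current value
EXEC sp_configure 'locks';

-- In SQL Server 7.0 a value of 0 lets the server allocate locks dynamically,
-- which is usually preferable to picking a fixed number.
EXEC sp_configure 'locks', 0;
RECONFIGURE;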
Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
This is the error I am receiving, suggesting that I have made a change to the configuration, which I haven't. Anyway, I have responded with this code:
USE master;
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure;
However, I don't know if anything has changed with the configuration. I am new to the DBA world. HELP!!!
Hi, I have an NT server which has a 500 MB C: drive and a 44 GB D: drive.
I know that the person who set up this server did not give enough space to the C: drive; here is the problem. I am running SQL Server 7.0, which has 30 GB of data on the D: drive. I need to repartition the hard drive so I can allocate 2 GB to C: and 42 GB to D:.
What is the best, safest method to accomplish this task?
Hello. I have a working package that imports a file. The file import uses a file connection that points to one file; this connection contains all of the file layout and field information, like the name, length, etc., for 85 columns.
I would like the file connection to be driven by a SQL task (or a For Loop task). The package will get the same file type, but there will be multiple instances of the file (with different names).
What is the best way to migrate the current Flat File Source so it can take different names and yet not lose the field mapping that the current Flat File Source contains? Is there a way to accomplish this?
I have a query which does 3 SELECTs and UNION ALLs them to get a final result set. The performance is unacceptable: it takes around a minute to run. If I remove the UNION ALL so that the result sets are returned individually, the query returns all 3 in around 6 seconds (acceptable performance).
Is there any way to join the result sets together without using UNION ALL?
Each result set has exactly the same structure...
Query below [for reference]...
WITH cte AS (
    SELECT A.[PoleID],
           ISNULL(B.[IsSpanClear], 0) AS [IsSpanClear],
           B.[SurveyDate],
           ROW_NUMBER() OVER (PARTITION BY A.[PoleID] ORDER BY B.[SurveyDate] DESC) rownum
    FROM [UT_Pole] A
    LEFT OUTER JOIN [UT_Surveyed_Pole] B ON A.[PoleID] = B.[PoleID]
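One alternative that might be worth trying (a sketch only; the three source queries are stood in for by placeholder names, with columns following the CTE above) is to materialise each result set into a temp table and read the combined rows once:

CREATE TABLE #Combined (
    PoleID      INT,
    IsSpanClear BIT,
    SurveyDate  DATETIME
);

INSERT INTO #Combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.FirstResultSet;

INSERT INTO #Combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.SecondResultSet;

INSERT INTO #Combined (PoleID, IsSpanClear, SurveyDate)
SELECT PoleID, IsSpanClear, SurveyDate FROM dbo.ThirdResultSet;

SELECT PoleID, IsSpanClear, SurveyDate FROM #Combined;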
I have a Union All transformation with 4 inputs and one output. When I debug the package, the sum of the rows across the inputs does not match the row count on the output.
I don't understand it; I've used the Union All transform many times and I've never seen this.
OK. I give up and need help. Hopefully it's something minor ...
I have a data flow which returns email addresses to a recordset.
I pass this recordset into a ForEach Loop, configuring the enumerator as a Foreach ADO Enumerator. I also map the email address to a variable with index 0.
I then have an Execute SQL Task which receives this email address as a varchar variable (parameter 0), which I use in my SQL command to limit the rows returned. I have commented out the WHERE clause and returned all rows regardless of email address to try to troubleshoot this problem. In either event, I then use a result set of type Object, with result name 0, to store the query result.
I then pass this result set into a script variable to start parsing the SQL rows returned (I assume this is the correct way to do this, from other prior posts ...).
The script appears to throw an exception at the following line. I assume it's because I'm either not passing in the values properly or the query doesn't return anything. However, I am certain the query works, as it executes just fine at the command prompt.
My intent is to email the query results to each email address, with the following type of data, by passing the parsed data from the script to a Send Mail task. Email works fine and sends out messages, but the content is empty. I pass the parsed data as string values to the MessageSource and define the MessageSourceType as a variable in the mail task.
part number    leadtime
x              5
y              9
....
Does anyone have any idea what I might be doing wrong?
I'm using SSIS in Visual Studio 2012. My Execute SQL Task calls a stored procedure in which I have a TRY-CATCH. Last week there was a problem: the CATCH was executed and logged an error to my error table, but for some reason the Execute SQL Task didn't fail. Is there a setting to make the Execute SQL Task fail when an SP encounters a failure?
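For what it's worth, one pattern that's often suggested (an approach rather than a task setting; the log table below is hypothetical) is to re-raise the error inside the CATCH block after logging it, so the failure propagates back to the Execute SQL Task:

BEGIN CATCH
    INSERT INTO dbo.ErrorLog (ErrorMessage, LoggedAt)
    VALUES (ERROR_MESSAGE(), GETDATE());

    -- Re-raise the original error; a parameterless THROW inside a CATCH
    -- block (SQL Server 2012 and later) rethrows with the original severity.
    THROW;
END CATCH;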
I am trying to create a simple BI application with SSIS. In Visual Studio 2005 I just grab a Data Flow Task from the Toolbox and add it to the project. When I double-click it I get the following error:
The task with the name "Data Flow Task" and the creation name "DTS.Pipeline.1" is not registered for use on this computer.
Then when I try to delete it, it gives this other error:
Cannot remove the specified item because it was not found in the specified Collection.
I am creating this application under an administrator account on this computer, so I doubt the problem is related to permissions. I am running SQL Server 2005 and Visual Studio 2005 on WinXP Tablet PC Edition.
Any suggestions why this is happening and how to fix it?
I am using the "Transfer SQL Server Objects Task" to copy some tables from database A to database B including data.
The tables, primary key constraints, Foreign key, data and all transfers nicely except for "DEFAULT CONSTRAINTS" on the tables.
I have failed to find any option in the "Transfer SQL Server Objects Task" task to explicitly say "copy default constraints". So I guess logically it should happen automatically but it doesn't. I hope it is not a bug :-)
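For reference, this is the sort of constraint that doesn't come across (hypothetical names); as a workaround it can be re-created by hand on the destination:

ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_Status DEFAULT ('New') FOR [Status];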
I am using SQL 2005 SSIS. I am joining several large tables and then moving the result into another table in the same database.
I would like to know which method is faster:
1) Use an Execute SQL Task to insert the result set into the target table.
2) Use a Data Flow Task to insert the result set into the target table (an OLE DB source executing a SQL command, then the SQL Server destination).
Could you tell me why one is slower than the other?
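For clarity, by the first option I mean a single set-based statement executed in the Execute SQL Task, something like this (all names are placeholders):

INSERT INTO dbo.TargetTable (CustomerID, OrderTotal)
SELECT o.CustomerID, SUM(o.Amount)
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY o.CustomerID;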
I have a SQL Task that calls a stored procedure and returns an output parameter. The task fails with the error "Value does not fall within the expected range." The stored procedure is defined as follows:

CREATE PROCEDURE [dbo].[TestOutputParms]
    @InParm INT,
    @OutParm INT OUTPUT
AS
    SET @OutParm = @InParm + 5

The task uses an OLE DB connection and has a source type of Direct Input. The SQL statement is:

EXEC TestOutputParms 7, ? OUTPUT

The parameter mapping is:

Variable Name    Direction    Data Type    Parameter Name
User::OutParm    Output       LONG         @OutParm
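One detail worth checking (an assumption on my part, not a confirmed fix): with an OLE DB connection, the Parameter Name in the mapping is the 0-based ordinal of the ? marker rather than the procedure's parameter name, so the mapping row for the statement above would read:

User::OutParm    Output       LONG         0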
In the Control Flow tab, I have an Execute SQL Task that outputs a full result set into a variable of Object type. Now, how can I write the contents of the full result set to a text file using a Script Task? I also want to format it the following way while outputting to the file:
Column Name 1: Column Value
Column Name 2: Column Value, and so on
I tried writing the contents of the object variable to a file, but the file contained a single word: System.__ComObject.
Code for writing the full result set to a text file:
' Calling .ToString on the Object variable only returns the COM type name
' ("System.__ComObject"); load the recordset into a DataTable instead.
Dim dt As New System.Data.DataTable()
Dim da As New System.Data.OleDb.OleDbDataAdapter()
da.Fill(dt, Dts.Variables("objVariable").Value)
Dim strVal As String = "File completed on " & Now() & vbCrLf & "------------------------------------------------------" & vbCrLf
In short, does the "Transfer SQL Server Objects Task" support distributed transactions?
I'm trying to use a "Transfer SQL Server Objects Task" in a container, with a transaction on the container. The task is set to support the transaction. It is set up to copy table data from several tables on a non-domain server (SQL Server 2000) to a domain-based server (SQL Server 2005). I get an error stating "This task can not participate in a transaction".
I am wondering if it means exactly what it says, i.e. this task in SSIS can't participate at all, or whether it just won't in this scenario for some reason. I attempted a simple copy of data from MSSQL 2005 to MSSQL 2005 (the same server) and the task still failed. MSDTC appears to be running properly on my machine (I can do a simple distributed transaction across a linked server to the 2000 server in Query Analyzer (QA)). MSDTC also appears to be working on both servers, judging by distributed transaction query tests in QA.
Here's the error info...
SSIS package "Development BusinessContacts and Products Migration.dtsx" starting.
Information: 0x4001100A at Copy BusinessContacts Data: Starting distributed transaction for this container.
Error: 0xC002F319 at Copy BusinessContacts database table data 1, Transfer SQL Server Objects Task: This task can not participate in a transaction.
Task failed: Copy BusinessContacts database table data 1
Information: 0x4001100C at Copy BusinessContacts database table data 1: Aborting the current distributed transaction.
Information: 0x4001100C at Copy BusinessContacts Data: Aborting the current distributed transaction.
SSIS package "Development BusinessContacts and Products Migration.dtsx" finished: Failure.
The program '[4700] Development BusinessContacts and Products Migration.dtsx: DTS' has exited with code 0 (0x0).
I have an application that fetches records from a database (MS Access 2000), and I have to use the results in a Script Task. At present I have put the record-fetching query and connection string in the script itself; I would like to keep them independent of the script. Are there any control flow tools (like the Execute SQL Task) that can fetch the result set from Access so that the results can be used in the Script Task?
I have a stored procedure that is executed via an Execute SQL Task and returns a full result set. I map this result set to a variable of Object type. Is there a way to use this variable as a data source in a subsequent Data Flow task?
I have made a package which extracts data from the source, does transformations, and submits the data to the destination. It also updates the required control files.
Now I want to add some functionality:
If the package is executed again, it should check the status of the previous execution in the control file. If that execution succeeded, mark all tasks disabled and stop;
if it failed, mark all tasks enabled, start extracting data, and continue with the execution.
I was able to attain similar functionality in SQL Server 2000 using an ActiveX script. What code do I need to write as part of a Script Task to attain the above functionality?
This is the first time I'm testing this task, and surprisingly, when I try the "Edit Package" option I get:
1) The DTS host failed to load or save the package properly
2) The selected package cannot be opened
3) Error HRESULT E_FAIL has been returned from a call to a COM component
But after these messages you can see all the tasks, except they have no names!!
It seems as if the RCW mechanism between managed and unmanaged code has partially failed.
I don't dare to go on doing more, since I don't know whether the package is loaded correctly or not from there.
1: Execute SQL Task: Truncate Table
2: Data Flow Task: DataReader -> Script Component -> OLE DB Destination (SQL Server 2005, a single table, always around 600,000 rows)
How do I set up a transaction where, if there is a failure, the TRUNCATE TABLE command rolls back and the OLE DB destination (a single SQL Server table) is left the same as it was before the load started?
Another question: with that volume of data (600,000 rows), will a TRUNCATE TABLE be practical in a transaction?
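For what it's worth, TRUNCATE TABLE is minimally logged but still fully transactional, so it does roll back; a quick way to see this (the table name is made up):

BEGIN TRANSACTION;

TRUNCATE TABLE dbo.TargetTable;

-- simulate a failed load, then undo everything:
ROLLBACK TRANSACTION;

-- the pre-truncate rows are back
SELECT COUNT(*) FROM dbo.TargetTable;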
Hi, I need to trigger some packages upon the existence of specific files in a particular directory. Sounds like the File Watcher Task (from SQLIS) would do the job, but I am wondering what the difference is between using this tool and a ForEach Loop container. I mean, if a file exists in a directory, the ForEach Loop container will detect it. Since the File Watcher is not a service, the package containing it needs to be scheduled on a regular basis for the File Watcher to detect the file, right? So a ForEach Loop container would do the job? What, then, would be the advantage of using the File Watcher Task?
A common issue that I run across with clients is that they only want to process a file once it has finished transmitting to the server. This SQL Server 2005 task reads the properties of a file and writes the values to a series of variables. For example, you can use this task to determine whether the file is in use (still being uploaded or written to) and then conditionally run the Data Flow Task to load the file only if it's not in use. You can also use it to determine when the file was created, in order to decide whether it must be archived.
I'm trying to get a record count out of a database using an OLE DB Source and a Row Count task, but I keep getting an error. I set up a variable as Int32 and select the variable name in the Row Count task, and when I go to the Input Columns tab to select a field to count, it gives me this error:
Error at Data Flow Task[Row Count[505]]: The component "Row Count" (505) has forbidden the requested use of the input column with lineage ID 32.
I just found out that I can do an ORDER BY clause on an entire record set retrieved from a query that combines several subqueries with UNION from different tables with the same structure... so this is great to know. BTW, is this a new feature of MSSQL 2K? I don't recall being able to do this in MSSQL 7 or 6.5.
Anyway, the main question is: can I use the TOP command in a query that has a UNION in it? Meaning, there are two queries (or more) from two tables (or more), and I need to fetch the top 10 records from the combined results by an ORDER BY clause. When I try to add TOP 10 to each subquery, the results are not correct at all; when I try to add TOP 10 only to the first query, hoping that the analyzer will apply it to the whole query, it selects the TOP 10 from the first query and combines them with all the records from the others...
So, can anyone help? I hope the problem is understood.
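One pattern that's often suggested for this (a sketch with made-up names; both tables are assumed to share the same structure) is to wrap the UNION in a derived table and apply TOP and ORDER BY to the combined result rather than to the individual SELECTs:

SELECT TOP 10 u.ID, u.Name, u.CreatedOn
FROM (
    SELECT ID, Name, CreatedOn FROM dbo.TableA
    UNION
    SELECT ID, Name, CreatedOn FROM dbo.TableB
) AS u
ORDER BY u.CreatedOn DESC;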