Can someone please explain the difference between Output and External columns? I can't fathom why "Output" columns aren't good enough. In other words, what is the need or value in having two types of "output" columns?
I have an OLE DB Source component that uses a SQL command as the data access mode. On the Connection Manager "tab" of the OLE DB Source Editor I can successfully parse my query and produce a preview of the data.
However, when I go to the Columns tab I see no available external columns. Apart from the fact that I'm using a union, I can't think of any reason why I can't see any columns; it doesn't make sense.
Case: exporting a report to PDF/printing/TIFF. Report: contains 1 table with 19 columns. 1 column is static; the other 18 are visible at the user's discretion. When printed/exported to PDF the report naturally spans 2 pages, 16 columns on the first page and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report compiles, the viewable version is perfect, and the Excel export is perfect. The PDF export, however, causes every fifth column on the first page, starting with the last column that was hidden, to be expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show with the last column expanded to take up the same width that the original 2 columns would have taken up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page sizes available to the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestions on this issue would be appreciated.
I need to execute several stored procedures on a Sybase server and copy the results to SQL Server 2005 tables. When using an ad-hoc SQL statement the "Available External Columns" list is correct; however, when using a stored procedure the list is empty. I've tried to work around this a couple of ways without success.
1) DelayValidation. I ran the SQL from the stored procedure body in the OLE DB Source to set the column list, then turned on DelayValidation for the Data Flow task. When I switch to using a stored procedure it still connects to the Sybase database and removes the column list. It still does this even after turning on DelayValidation for the sequence container and the entire package (the OLE DB Source itself does not have the option).
2) Using a variable that changes during runtime. I copied the SQL from the Sybase procedure into the default value of a variable; a script changes it to a procedure call at runtime. This provides a column list in design mode, but throws the error "The external metadata column collection is out of synchronization with the data source columns. The external metadata column xxxx needs to be removed from the external metadata column collection", repeated for every column in the list. I know that the column names and data types are identical.
3) Manually updating the external/output columns list. This was very painful and gave me the same errors.
It seems that DelayValidation is the route I'm supposed to take, but I don't see how it would be any different at runtime.
Hopefully someone can help me with what I'm sure is a very simple question (I'm new to the XML thing). I receive an XML file from "someplace" that I need to parse using the XML Source in SSIS. I have SSIS generate an XSD document for me, as one isn't provided. However, after I do this, SSIS does not show any available external columns to pull data from; the "Columns" section of the source is just blank. I'm pretty sure this has to do with a syntax error in either the XML file that is being provided to me or the XSD document that SSIS is generating. Below are both (obviously with the data dummied up). Can someone take a look and let me know what needs to be changed in either file to get this up and running? I'm looking to grab the AccountNumber, RecordNumber, ProcessedDate, Status, and StatusMessage elements.
I have an SSIS package with a Data Flow task. This task transfers data from SQL Server 2000 to a table in SQL Server 2005.
I deployed and tested this package on the test server, then put the package into a job and executed it; it works fine.
On the production server, if I execute the package through DTEXECUI it works fine. But when I try executing it through a job, the job fails and gives me the following error:
Description: The external metadata column collection is out of synchronization with the data source columns. The "external metadata column "T_FieldName" (82)" needs to be removed from the external metadata column collection....
What I don't understand is why no errors are displayed when I execute the package through DTEXECUI.
I've got a question that I can't seem to find an answer for; I was hoping someone here might be able to point me in the right direction. I've set up a stored procedure that will email someone if any entries are added to a table. However, the output is garbled looking (see below).
Client Number SSN         Client Name                              Old SD   New SD
------------- ----------- ---------------------------------------- -------- --------
800901        899-34-3482 John Smith                               04/20/20 05/01/20
400909        144-23-0029 John Smith                               04/09/20 04/11/20
447788        445-89-9967 kjl;j;j                                  04/05/20 04/12/20
300099        234-90-7815 John Johnson                             04/08/20 04/15/20
What's happened is that the client name field is too wide, so the New SD field kicks down to the next line. I'd like to clean this up. Is there a way I can either increase the length of the row before it wraps to the next line, or can I resize the client name field to match the size of the data? In other words, cli_name_vc is declared as a varchar(40). If the longest name that comes up in the query is 18 characters long, can I resize the output so that it does not take up 40 characters?
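One possible approach (a sketch only, not tested against your schema; dbo.clients and every column name other than cli_name_vc are placeholders): cast the name down to a narrower varchar so the fixed-width text the procedure emails stays on one line. To size it to the longest actual value, you could first compute MAX(LEN(cli_name_vc)) and build the SELECT dynamically.

-- Sketch: shrink the displayed width of cli_name_vc (declared varchar(40))
-- so the emailed fixed-width output fits on one line.
SELECT client_number,
       ssn,
       CAST(cli_name_vc AS varchar(18)) AS client_name,  -- 18 = longest name in the sample above
       old_sd,
       new_sd
FROM   dbo.clients   -- placeholder table name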
Using MySQL 5, MySQLQuery (latest). Complete newbie here.
I have the following query:
SELECT i.IndivId, i.Surname, i.First_Names, i.Parents,
       (SELECT i.Surname
        FROM individuals i, families f
        WHERE (i.IndivId = f.father_ID) AND (i.indivId = f.Family_ID)) AS "father"
FROM individuals i
WHERE i.Parents > 0
ORDER BY i.Parents;
It returns nulls for the subselect. There are 2 tables, individuals and Parents. I am trying to use a father ID in the Parents table to cause the individual ID (being the father of the individual(s)) to be named in the father column. I have a feeling I'm going wrong here.
I think the query above is pretty self-explanatory!
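For comparison, here is a minimal sketch of the same lookup written as a self-join rather than a subselect. It assumes that individuals.Parents holds the family ID and that families.father_ID holds the father's IndivId; both are guesses based on the query above, so the join conditions would need adjusting to the real schema.

SELECT i.IndivId, i.Surname, i.First_Names, i.Parents,
       dad.Surname AS father
FROM individuals i
JOIN families f      ON f.Family_ID = i.Parents      -- assumed: Parents stores the family ID
JOIN individuals dad ON dad.IndivId = f.father_ID    -- the father's own row in individuals
WHERE i.Parents > 0
ORDER BY i.Parents;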
Hi everyone, I am trying to do a query where I need to use as little C# as possible to build my GridView. Basically I have a column called statusID. There are about 15 options for this column but I only want to count certain ones. I want to count when statusID = 3 and output that into a column called "fullUnitsUsed", but when the value is 4 or > 13 I want it counted into a column called "halfUnitsUsed". I also want it to count based on the month. To accomplish this I have used CASE and GROUP BY, which has worked to some extent.
Currently, if I COUNT for one month I get the correct number of fullUnitsUsed and halfUnitsUsed for January. Unfortunately the query returns 2 records for the month: the first has a value for fullUnitsUsed while halfUnitsUsed is NULL, and the second has fullUnitsUsed as NULL while halfUnitsUsed has the correct value. I was hoping to output one record where both fullUnitsUsed and halfUnitsUsed have data.
My other problem is that if I test for the entire year (which is what this query is supposed to do), 5 records are returned for each month: 3 of them have both fullUnitsUsed and halfUnitsUsed as NULL, and of the other 2, one has fullUnitsUsed with data and the other has halfUnitsUsed with data, with the other column NULL in both. The values for fullUnitsUsed and halfUnitsUsed are also counted across the entire year, when I only want them counted per month.
Below is my query; any suggestions about how to approach this will be greatly appreciated. If any clarification is needed please let me know. Again, if I could get this to work completely in SQL without needing any more C# than necessary, it would be preferable.
SELECT People.lastName + ', ' + People.firstName AS fullName,
       Property.Name,
       NYSDDSORegion.Description,
       OpenDays.[month],
       OpenDays.maxOpenDays,
       CASE Attend.statusID
            WHEN 3 THEN COUNT(Attend.statusID)
       END AS fullUnitsUsed,
       CASE Attend.statusID
            WHEN 4 THEN COUNT(Attend.statusID)
            WHEN 14 THEN COUNT(Attend.statusID)
            WHEN 15 THEN COUNT(Attend.statusID)
            WHEN 16 THEN COUNT(Attend.statusID)
            WHEN 17 THEN COUNT(Attend.statusID)
            WHEN 18 THEN COUNT(Attend.statusID)
            WHEN 19 THEN COUNT(Attend.statusID)
            WHEN 20 THEN COUNT(Attend.statusID)
       END AS halfUnitsUsed
FROM Attend
     INNER JOIN People ON Attend.personID = People.personID
     INNER JOIN Property ON Attend.propertyID = Property.propertyID
     INNER JOIN NYSDDSORegion ON Property.RegionID = NYSDDSORegion.RegionID
     CROSS JOIN OpenDays
WHERE (Attend.attendDate BETWEEN '1/1/2007' AND '12/31/2007')
GROUP BY Property.Name, People.lastName, NYSDDSORegion.Description, People.firstName,
         OpenDays.monthID, OpenDays.[month], OpenDays.maxOpenDays, Attend.statusID
ORDER BY Property.Name, fullName, NYSDDSORegion.Description
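Not a definitive fix, but a minimal sketch of the usual conditional-aggregation pattern that collapses the two rows into one: wrap the CASE inside the aggregate (SUM of 1/0) and drop Attend.statusID from the GROUP BY, grouping by the month instead. The table and column names are taken from the query above; the join of MONTH(attendDate) to OpenDays.monthID is an assumption.

SELECT People.lastName + ', ' + People.firstName AS fullName,
       Property.Name,
       NYSDDSORegion.Description,
       OpenDays.[month],
       OpenDays.maxOpenDays,
       SUM(CASE WHEN Attend.statusID = 3 THEN 1 ELSE 0 END) AS fullUnitsUsed,
       SUM(CASE WHEN Attend.statusID = 4 OR Attend.statusID > 13 THEN 1 ELSE 0 END) AS halfUnitsUsed
FROM Attend
     INNER JOIN People ON Attend.personID = People.personID
     INNER JOIN Property ON Attend.propertyID = Property.propertyID
     INNER JOIN NYSDDSORegion ON Property.RegionID = NYSDDSORegion.RegionID
     CROSS JOIN OpenDays
WHERE Attend.attendDate BETWEEN '1/1/2007' AND '12/31/2007'
  AND MONTH(Attend.attendDate) = OpenDays.monthID   -- assumption: monthID is 1-12, so counts are per month
GROUP BY Property.Name, People.lastName, People.firstName, NYSDDSORegion.Description,
         OpenDays.monthID, OpenDays.[month], OpenDays.maxOpenDays
ORDER BY Property.Name, fullName, NYSDDSORegion.Description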
There are several events. Each event has several different sessions (stored in EventOptionGroups), and each session has a certain number of options (stored in Options).
A user can sign up for an event, and their information is stored in EventRegistration. They can choose an option for each session in the event. For each option they choose, a new row is added to RegistrantOptions.
For each row in EventRegistration, I want to output the user's information, and then the option they chose for each session in the event. Like this:
I need it to dynamically output based off COL1; the output should look like this. When there are more rows for CAT, it should output more columns, kind of like merging the columns.
Hi all,
Currently the query returns 2006-03-27 00:00:00; can I make it output 03/27/2006? I want to truncate the time and replace the hyphens with forward slashes. Any ideas?
Thanks in advance,
~CK
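If this is SQL Server (an assumption; the table and column names below are placeholders), one common sketch is CONVERT with style 101, which returns mm/dd/yyyy and drops the time portion:

-- Style 101 formats a datetime as mm/dd/yyyy (e.g. 03/27/2006).
SELECT CONVERT(varchar(10), order_date, 101) AS order_date_formatted
FROM   dbo.orders;   -- placeholder table/column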
When setting an output's "IsErrorOut" property to true, is it also possible to add additional columns to that error output?
I'd like to add a message beyond the standard errorCode and errorColumn columns: a column containing the specific error message, not just a lookup on the errorCode.
I can map the columns, refresh, and OK out of the component without trouble, but on executing the package it fails during validation on this component. I'm utterly stumped. Any light shed would be greatly appreciated. Many thanks in advance, Tamim.
I created an Excel source and used a query to get the data, i.e.
SELECT F1,F2,F3,F4,F5,F6,F7 FROM [Fut Days$A20:G1480]
The query works fine and the preview returns the rows, but SSIS will not generate output columns, nor will it let me manually add them. Am I missing something?
I apologise if this question has been asked before but I have spent ages searching these forums and the internet for an answer and I am yet to find one.
My problem is that I have a package which imports a column, let's call it Column A. Column A is used to create other columns, let's say Columns B, C & D. This is done in a script with an asynchronous output, and once that is complete Column A is no longer used. Other transformations occur to B, C & D, including a split and then finally a merge back together, but all the time A seems to remain a valid input on all processes even though I never choose to use or output it. When I come to the merge process I am required to merge columns B, C & D but also A; surely this is inefficient. Furthermore, when the package has run I get a warning telling me that Column A is not required and should be removed, but I cannot seem to find anywhere to remove it from the pipeline.
I am hoping that I am just missing something obvious here but I have been tearing my hair out so any help would be much appreciated!
I am trying to create a view but I keep getting the error 'View definition includes no output columns or no items in the FROM clause.'
Below is the SELECT statement that is the basis of my view. The explanation I got from the F1 help in Enterprise Manager was: "View definition includes no output columns or no items in the FROM clause. A view definition must have at least one table or table-structured object in the FROM clause, and must have at least one column in the select list. The view definition is missing one or both. Modify the view definition accordingly."
query:
SELECT Case_CaseId,
       Logged,
       CAST(DATEDIFF(minute, Logged, Waiting) / 60.0 AS NUMERIC(9, 2)) AS Waiting,
       CAST(DATEDIFF(minute, Logged, Investigating) / 60.0 AS NUMERIC(9, 2)) AS Investigating,
       CAST(DATEDIFF(minute, Logged, Rejected) / 60.0 AS NUMERIC(9, 2)) AS Rejected,
       CAST(DATEDIFF(minute, Logged, Resolved) / 60.0 AS NUMERIC(9, 2)) AS Resolved,
       CAST(DATEDIFF(minute, Logged, Solved) / 60.0 AS NUMERIC(9, 2)) AS Solved,
       CAST(DATEDIFF(minute, Logged, Closed) / 60.0 AS NUMERIC(9, 2)) AS Closed
FROM (
    SELECT Case_CaseId,
           MIN(CASE WHEN case_stage = 'Logged' THEN Case_CreatedDate END) AS Logged,
           MIN(CASE WHEN case_stage = 'Waiting' THEN Case_CreatedDate END) AS Waiting,
           MIN(CASE WHEN case_stage = 'Investigating' THEN Case_CreatedDate END) AS Investigating,
           MIN(CASE WHEN case_stage = 'Rejected' THEN Case_CreatedDate END) AS Rejected,
           MIN(CASE WHEN case_stage = 'Resolved' THEN Case_CreatedDate END) AS Resolved,
           MIN(CASE WHEN case_stage = 'Solved' THEN Case_CreatedDate END) AS Solved,
           MIN(CASE WHEN case_stage = 'Closed' THEN Case_CreatedDate END) AS Closed
    FROM CaseProgress
    GROUP BY Case_CaseId
) AS temp
ORDER BY Case_CaseId
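For what it's worth, one thing worth checking (an educated guess, not a confirmed diagnosis): SQL Server does not allow ORDER BY inside a view definition unless TOP is also used, and the Enterprise Manager view designer can report unrelated parse errors when it hits a construct it cannot handle. A minimal sketch of the statement wrapped in a view with the ORDER BY removed (the view name is a placeholder, and only a couple of stage columns are shown; the rest follow the same pattern):

CREATE VIEW dbo.vw_CaseStageHours
AS
SELECT Case_CaseId,
       Logged,
       CAST(DATEDIFF(minute, Logged, Waiting) / 60.0 AS NUMERIC(9, 2)) AS Waiting,
       CAST(DATEDIFF(minute, Logged, Closed) / 60.0 AS NUMERIC(9, 2)) AS Closed
FROM (
    SELECT Case_CaseId,
           MIN(CASE WHEN case_stage = 'Logged' THEN Case_CreatedDate END) AS Logged,
           MIN(CASE WHEN case_stage = 'Waiting' THEN Case_CreatedDate END) AS Waiting,
           MIN(CASE WHEN case_stage = 'Closed' THEN Case_CreatedDate END) AS Closed
    FROM CaseProgress
    GROUP BY Case_CaseId
) AS temp
-- no ORDER BY inside the view; sort when querying it instead:
-- SELECT * FROM dbo.vw_CaseStageHours ORDER BY Case_CaseId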
Can I add output columns to the Script Transformation Editor using code? I have to execute a SQL statement to determine the number of years we have data for an item, and then create the columns for the months in those years and populate them with the quantities. So my question is: can I create output columns for the Script Transformation Editor on the fly, that is, as the package is being executed?
I am using a script component to transform data. In the script component I created a bunch of fields for the output. Is there any way to loop through that list of columns? Is there code I can use in the script component to access the names, data types, data, etc.?
I saw a lot of information on the OutputColumnCollection as part of some IDTSOutput90 thing (Greek to me). As best I can guess this is for creating your own new columns, but can I see what columns are already defined via the script interface?
I would like my transformation to automatically create an output column for each input column. Any tips? I can't seem to determine which event to listen to or method to override.
I am not sure which type to use for my Script Transformation Editor output fields. I'm getting errors based on the Data Type I'm specifying for my fields.
Error at Import Maintenance (mnt) File [Split HeaderRows into Columns [5176]]:
Error 30512: Option Strict On disallows implicit conversions from 'Double' to 'UInteger'. Line 21 Column 37 through 71
Error 30512: Option Strict On disallows implicit conversions from 'Double' to 'Long'. Line 22 Column 35 through 69
Error 30512: Option Strict On disallows implicit conversions from 'Double' to 'Long'. Line 23 Column 37 through 71
Error 30512: Option Strict On disallows implicit conversions from 'Double' to 'Long'. Line 25 Column 27 through 61
Error at Import Maintenance (mnt) File [DTS.Pipeline]: "component "Split HeaderRows into Columns" (5176)" failed validation and returned validation status "VS_ISBROKEN".
Error at Import Maintenance (mnt) File [DTS.Pipeline]: One or more component failed validation.
Error at Import Maintenance (mnt) File: There were errors during task validation.
I'm not sure if this is needed, but here's the script I coded in my script task also:

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
I am seeing a particular problem in the XML Source Editor "Columns" configuration where it is not persisting the "Output name" selection.
Control Flow tab: 1. I use an "Execute SQL" task to drop, create, or alter the destination tables in the database that I want to be the repository for the inbound XML data. The data types are fairly straightforward.
2. I add a single "Data Flow" task.
Data Flow Tab:
1. I add a "XML Source" task, and assign a well-defined XML file. I then use the "Generate XSD" option in the "Connection manager"; and I am fairly satisfied with the generated XSD.
2. I create "OLE DB Destination"
3. I wire the "XML Source" to the "OLE DB Destination". In the "XML Source" in the "Columns".
4. I go to the dropdown list of "Output name" and see the list ordered with the various complex-types that I want to map and transfer to a target table.
For the sake of this report, I select the 5th one down on the list (for which I already have a target table) - let's call this "Mesh"
5. In the "Input Output" dialog, I select the "output" to be the desired 5th item, "Mesh"
6. I check all my mappings so that they map one-to-one ... XML name entries match SQL table destination mapping entries; correct types; correct size
7. Check the metadata and it all looks good.
8. When I hit "Debug" to test the package, the failure occurs at the "XML Source". The error report comes back saying that it failed because "field xxx in Contributor was truncated". However, "Contributor" corresponds to the 1st name in the dropdown list presented in "Columns" under "Output name:".
If I return to Step 4 and open up "Columns", I see that my previous selection of the 5th item on the list, "Mesh", was not persisted; no matter how often I select item #5 "Mesh" and save to ensure that the selection sticks, it is not persisted.
I hand-edited the .dtsx file and only then was I able to make this stick. However, if I ever re-save the package this non-persistence pops up again.
Am I doing something wrong here or is this a known defect? As I have several dozen XSD mappings that I want to transfer to tables, hand-editing is not something I relish.
I am trying to update a table and also use the OUTPUT clause to capture some of the columns. The code that I am using is something like the one below:
UPDATE s
SET    Exception_Ind = 1
OUTPUT s.Master_Id, s.TCK_NR INTO #temp2
FROM   Master_Summary s
       INNER JOIN Exception d
         ON  d.Id = LEFT(s.Id, 8)
         AND d.Barcode_Num = s.TCK_NR
WHERE  s.Exception_Ind IS NULL
The above code is throwing an error as follows:
Msg 4104, Level 16, State 1, Procedure Process_Step3, Line 113
The multi-part identifier "s.Master_Id" could not be bound.
Msg 4104, Level 16, State 1, Procedure Process_Step3, Line 113
The multi-part identifier "s.TCK_NR" could not be bound.
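Not presented as the definitive fix, but a sketch worth trying: in an UPDATE, the OUTPUT clause refers to the updated table through the INSERTED/DELETED pseudo-tables rather than through its alias, so the same statement might be written as:

-- Sketch: reference the updated columns via INSERTED instead of the alias "s".
UPDATE s
SET    Exception_Ind = 1
OUTPUT INSERTED.Master_Id, INSERTED.TCK_NR INTO #temp2
FROM   Master_Summary s
       INNER JOIN Exception d
         ON  d.Id = LEFT(s.Id, 8)
         AND d.Barcode_Num = s.TCK_NR
WHERE  s.Exception_Ind IS NULL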
I'm having a tad bit of trouble getting output from an asynchronous component that I've written and am looking for some insight.
This component takes in a name string passed from upstream and parses the name components into standardized output fields. I'm using an asynchronous component because if the name string contains two names ("Fred & Wilma Flintstone") I'm outputting one row for Fred and one for Wilma. I've gotten it to run, and with debugging have observed what appears to me to be proper execution, but zero rows flow out of it.
In my ProvideComponentProperties method, I add the three fields and their associated metadata to the OutputColumnCollection. Is this the method where this should occur? It runs before the PrimeOutput method, so I didn't know if I should instead be creating the output columns in ProcessInput (i.e., after the output buffer is provided by PrimeOutput).
In ProcessInput, I'm using AddRow for each input row (and another if it contains a second name), setting the value for each index using the buffer's SetString method, to no avail. I can observe it to this point, but then don't know what's in that output buffer (whether I'm using the wrong buffer index value, etc.).
I'm trying to create a fairly simple custom transform component (because I've read that's the easiest one to create) which will take one column from a flat file source and, based on the first row, create the output columns. I'm actually trying to write a component that will solve the now well-known problem with parsing CSV files in SSIS. I have a lot of source files, all with many columns, so a component that can read the first line from the CSV file and create the output columns automatically will save me a lot of time when migrating the old DTS packages.
I have the basic component set up, but I'm stuck when trying to override the OnInputPathAttached method because I don't know how to use the inputID to get the first line from the input (the buffer). Are there any good examples of creating output columns dynamically based on the input buffer? Should I just give up on the transform and create a custom source component instead?
I explicitly set one column to have text qualifiers in a flat file connection manager and specified double quotes as the qualifier, yet in the output file the column is not qualified. What did I leave out?