Can You Add Output Columns To The Script Transformation Editor On The Fly?
Jun 21, 2006
Can I add output columns to the Script Transformation Editor using code? I have to execute a SQL statement to determine the number of years we have data for an item, and then create columns for the months in those years and populate them with the quantities. So my question is: can I create output columns in the Script Transformation Editor on the fly, that is, while the package is executing?
I am seeing a particular problem in the XML Source Editor "Columns" configuration where it is not persisting the "Output name" selection.
Control Flow Tab: 1. I use an "Exec SQL Command" to drop, create, or alter the destination tables in the database that I want to be the repository for the inbound XML data. The data types are fairly straightforward.
2. I add a single "Data Flow" task.
Data Flow Tab:
1. I add a "XML Source" task, and assign a well-defined XML file. I then use the "Generate XSD" option in the "Connection manager"; and I am fairly satisfied with the generated XSD.
2. I create an "OLE DB Destination".
3. I wire the "XML Source" to the "OLE DB Destination", then open "Columns" in the "XML Source".
4. I go to the dropdown list of "Output name" and see the list ordered with the various complex-types that I want to map and transfer to a target table.
For the sake of this report, I select the 5th one down on the list (for which I already have a target table) - let's call this "Mesh"
5. In the "Input Output" dialog, I select the "output" to be the desired 5th item, "Mesh"
6. I check all my mappings so that they map one-to-one ... XML name entries match SQL table destination mapping entries; correct types; correct size
7. Check the metadata and it all looks good.
8. When I hit "Debug" to test the package, the failure occurs at the "XML Source". The error report comes back saying that it failed because "field xxx in Contributor was truncated". However, "Contributor" corresponds to the 1st name in the dropdown list presented in "Columns" "Output name:".
If I return to Step 4 and open up "Columns", I see that my previous selection of the 5th item on the list, "Mesh", was not persisted. No matter how often I select item #5 "Mesh" and save to ensure that the selection sticks, it is not persisted.
I hand-edited the .dtsx file, and only then was I able to make this stick. However, if I ever re-save the package, this non-persistence pops up again.
Am I doing something wrong here or is this a known defect? As I have several dozen XSD mappings that I want to transfer to tables, hand-editing is not something I relish.
I am using a script component to transform data. In the script component I created a bunch of fields for the output. Is there any way to loop through that list of columns? Is there code I can use in the script component to access the names, data types, data, etc.?
I saw a lot of information on the OutputColumnCollection as part of some IDTSOutput90 thing (Greek to me). As best I can guess this is for creating your own new columns, but can I see what columns are already defined via the script interface?
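For what it's worth, the columns already defined on a script component's output can be read from inside the script through ComponentMetaData.OutputCollection. A minimal sketch (VB.NET, untested, assuming a single output):

Public Overrides Sub PreExecute()
    MyBase.PreExecute()
    Dim output As IDTSOutput90 = Me.ComponentMetaData.OutputCollection(0)
    For i As Integer = 0 To output.OutputColumnCollection.Count - 1
        Dim col As IDTSOutputColumn90 = output.OutputColumnCollection(i)
        Dim fireAgain As Boolean = True
        ' Write each column's name and data type to the progress log:
        Me.ComponentMetaData.FireInformation(0, "ScriptMain", _
            col.Name & " : " & col.DataType.ToString(), String.Empty, 0, fireAgain)
    Next
End Sub

Note that this exposes the design-time metadata (names, types, lengths); reading the row data itself still goes through the typed Row properties.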
I would like my transformation to automatically create an output column for each input column. Any tips? I can't seem to determine which event to listen to or method to override.
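In a custom (non-script) component, one place to mirror input columns is an override of SetUsageType, which the designer calls whenever a user selects or deselects an input column. A hedged sketch for an asynchronous output (cleanup of deselected columns omitted; this is one possible design, not the only option):

Public Overrides Function SetUsageType(ByVal inputID As Integer, _
        ByVal virtualInput As IDTSVirtualInput90, ByVal lineageID As Integer, _
        ByVal usageType As DTSUsageType) As IDTSInputColumn90
    Dim inputColumn As IDTSInputColumn90 = _
        MyBase.SetUsageType(inputID, virtualInput, lineageID, usageType)
    If usageType <> DTSUsageType.UT_IGNORED AndAlso inputColumn IsNot Nothing Then
        ' Create a matching column on the output:
        Dim outColumn As IDTSOutputColumn90 = _
            ComponentMetaData.OutputCollection(0).OutputColumnCollection.New()
        outColumn.Name = inputColumn.Name
        outColumn.SetDataTypeProperties(inputColumn.DataType, inputColumn.Length, _
            inputColumn.Precision, inputColumn.Scale, inputColumn.CodePage)
    End If
    Return inputColumn
End Function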
Greetings, I am attempting to create a flat file delimited by |. I am using (ISNULL(LIN1_OPT_ADDR) ? "" : LIN1_OPT_ADDR + "| ") to replace the blank address column with the pipe delimiter. So that a row that would consist of:
Customer Number,Name,Address Line1,City,State
12345,ACE HARDWARE INC.,801 Rockefeller St.,New York,New York
56789,BUILDING SUPPLY INC., ,Wichita,Kansas
Should end up as:
12345|ACE HARDWARE INC.|801 Rockefeller St.|NEW YORK|NEW YORK
56789|BUILDING SUPPLY INC.||Wichita|Kansas
When I run the data flow to create the flat file the file contains the following:
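Looking at the expression, the likely culprit is that the "|" literal only appears on the non-NULL branch, so NULL addresses lose their delimiter entirely. A sketch of one fix is to emit the pipe unconditionally:

(ISNULL(LIN1_OPT_ADDR) ? "" : LIN1_OPT_ADDR) + "|"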
I am reading in a delimited file. In the Script Transformation Editor, if the UPC does not pass the checksum test, I want to throw the row out right then. I am not sure how to do that, but it is probably really simple. Thanks, Linda
Here is my script:
' Microsoft SQL Server Integration Services user script component
' This is your new script component in Microsoft Visual Basic .NET
' ScriptMain is the entrypoint class for script components
'Option Strict Off
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
Public Class ScriptMain
Inherits UserComponent
Private Function DoubleTest(ByVal Value As String) As Boolean
    Dim d As Double
    If Not Double.TryParse(Value, d) Then
        'Windows.Forms.MessageBox.Show(Value + " is not numeric")
        Return False
    End If
    Return True
End Function
End Class
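One common way to throw a row out inside a synchronous Script Component: on the Inputs and Outputs page, give the output a nonzero ExclusionGroup (leaving SynchronousInputID pointing at the input), then direct only the rows that pass. A sketch, where UPC is a hypothetical input column name and DirectRowToOutput0 is the method the designer generates for an output named "Output 0":

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' With a nonzero ExclusionGroup, rows not explicitly directed
    ' to an output are silently discarded.
    If DoubleTest(Row.UPC) Then
        Row.DirectRowToOutput0()
    End If
End Sub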
I'm having trouble coming up with a valid expression in my derived column transformation editor that tests the input column for NULL and responds something like this:
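A typical shape for such an expression combines ISNULL with the conditional operator; a sketch with a hypothetical column name and replacement value:

ISNULL(MyColumn) ? "Unknown" : MyColumn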
In a nutshell, I want to be able to instruct some Data Analysts on how to modify SSIS packages using the simplest solutions possible. This is because there are many different data sources, some of which have a huge number of fields, and, yes, you guessed it, these data sources are subject to change on a regular basis.
A very common task they will need to do is to modify an SSIS package to transform a source date string in "YYYYMMDD" format into a date data type field within a table.
Similar threads have advised the use of the Data Flow Transformations->Derived Column for this sort of thing.
So within the Expression text box I have inserted the following SSIS-compatible expression to convert the above string into a British-format date data type:
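A common form for this conversion, sketched with a hypothetical input column DateStr holding "YYYYMMDD", rearranges the string into ISO order before casting:

(DT_DBDATE)(SUBSTRING(DateStr,1,4) + "-" + SUBSTRING(DateStr,5,2) + "-" + SUBSTRING(DateStr,7,2))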
I have a Lookup Transformation that matches the natural key of a dimension member and returns the dimension key for that member (surrogate key pipeline stuff).
I am using an OLE DB Command as the Error flow of the Lookup Transformation to insert an "Inferred Member" (new row) into a dimension table if the Lookup fails.
The OLE DB Command calls a stored procedure (dbo.InsertNewDimensionMember) that inserts the new member and returns the key of the new member (using scope_identity) as an output.
What is the syntax in the SQL Command line of the OLE DB Command Transformation to set the output of the stored procedure as an Output Column?
I know that I can 1) add a second Lookup with "Enable memory restriction" on (no caching) in the Success data flow after the OLE DB Command, 2) find the newly inserted member, and 3) Union both Lookup results together, but this is a large dimension table (several million rows) and searching for the newly inserted dimension member seems excessive, especially since I have the ID I want returned as output from the stored procedure that inserted it.
Thanks in advance for any assistance you can provide.
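For reference, the OLE DB Command's SqlCommand property accepts the OUTPUT keyword on a parameter marker, and that parameter can then be mapped (on the Column Mappings tab of the Advanced Editor) to a pipeline column that already exists, for example one added upstream by a Derived Column. A hedged sketch, since output-parameter support varies by provider:

EXEC dbo.InsertNewDimensionMember ?, ? OUTPUT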
From SQL Server 2014, using SQL Server Data Tools for Visual Studio - BI, I'm trying to edit a Script Component within an SSIS Data Flow Task. The 'Edit Script...' button is enabled and turns a nice shade of blue when moused over, but a click has no effect. Perhaps I'm missing a component of VSTA? Everything else seems to work correctly. What might I be missing?
I'm a relative SQL Server newbie and have developed a function that converts mm/dd/yyyy to yyyy/mm/dd for use in a DT_DBDATE format for insert into a smalldatetime column.
I receive the following errors when using the function in the Derived Column Transformation Editor. First, the function, then the error produced when using it as the expression in the Derived Column Transformation Editor.
Can anyone explain how I can make this transformation work in this context, or suggest a way to either do the transformation more easily or avoid it altogether?
expression "lipper.dbo.convdate(eomdate)" failed. The token "." at line number "1", character number "11" was not recognized. The expression cannot be parsed because it contains invalid elements at the location specified.
Error at Data Flow Task [Derived Column [111]]: Cannot parse the expression "lipper.dbo.convdate(eomdate)". The expression was not valid, or there is an out-of-memory error.
Error at Data Flow Task [Derived Column [111]]: The expression "lipper.dbo.convdate(eomdate)" on "input column "eomdate" (165)" is not valid.
Error at Data Flow Task [Derived Column [111]]: Failed to set property "Expression" on "input column "eomdate" (165)".
(Microsoft Visual Studio)
===================================
Exception from HRESULT: 0xC0204006 (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------ Program Location:
at Microsoft.SqlServer.Dts.Pipeline.Wrapper.CManagedComponentWrapperClass.SetInputColumnProperty(Int32 lInputID, Int32 lInputColumnID, String PropertyName, Object vValue)
at Microsoft.DataTransformationServices.Design.DtsDerivedColumnComponentUI.SaveColumns(ColumnInfo[] colNames, String[] inputColumnNames, String[] expressions, String[] dataTypes, String[] lengths, String[] precisions, String[] scales, String[] codePages)
at Microsoft.DataTransformationServices.Design.DtsDerivedColumnFrameForm.SaveAll()
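The underlying issue is that Derived Column expressions are written in the SSIS expression language, which cannot call T-SQL user-defined functions such as lipper.dbo.convdate. One workaround sketch is to apply the function in the OLE DB source query instead, so the conversion happens on the server (the source table name here is hypothetical):

SELECT s.eomdate,
       lipper.dbo.convdate(s.eomdate) AS eomdate_converted
FROM lipper.dbo.SourceTable AS s;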
I have a data flow, and inside it an OLE DB source and an OLE DB Command task that executes an insert transaction (an SP). I would like to save the error output to an error log table if the insert didn't happen. But when I drag an error output line (red) to another OLE DB Command to insert the error log row, I can only see two error-related columns (ErrorCode and ErrorColumn) available in the OLE DB Command advanced editor. This doesn't tell you much about the error. How can I grab the error reason (description) as part of the error output and store it in an error log table, so that I can see what the problem is?
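A widely used pattern for this is a Script Component on the error path that adds a new output column (say, ErrorDescription) and translates the code; a sketch, assuming the generated error-path buffer exposes ErrorCode:

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Translate the numeric error code into readable text:
    Row.ErrorDescription = ComponentMetaData.GetErrorDescription(Row.ErrorCode)
End Sub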
I am writing a Data Flow task which will take a particular column from the source table, and I am passing the column value in the SQL command property. My SQL command will look like this:
Select SerialNumber From SerialNumbers Where OrderID = @OrderID
If I go and check the output columns in the Input and Output Properties tab, I am not able to see this SerialNumber column in the output column tree, so I can't access this column in the next transformation component.
I am developing a custom destination component and I have encountered a few areas where there seems to be a lack of helpful documentation and examples.
1. I have not been able to find any information on or examples of creating custom destinations with an error output. The OLE DB Destination has an error output so I investigated the input and error output properties in the advanced editor and found that the OLE DB Destination error output is synchronous with the input (its SynchronousInputID matches the input's ID) and has its ExclusionGroup value set to 1. Using this information, I modeled my error output after the OLE DB Destination.
Shortly after I start my SSIS package and it encounters an error row, I get the following exception:
[My Destination Adapter 1 [3512]] Error: System.ArgumentException: Value does not fall within the expected range.
at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSBuffer90.DirectErrorRow(Int32 hRow, Int32 lOutputID, Int32 lErrorCode, Int32 lErrorColumn)
at Microsoft.SqlServer.Dts.Pipeline.PipelineBuffer.DirectErrorRow(Int32 outputID, Int32 errorCode, Int32 errorColumn)
at MyDestination.ProcessInput(Int32 inputID, PipelineBuffer buffer)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper90 wrapper, Int32 inputID, IDTSBuffer90 pDTSBuffer, IntPtr bufferWirePacket)
2. My custom destination component is used for writing a file with a fixed schema. I followed the means by which source component examples add their output columns, but applied this to my external metadata columns. In my Validate() I check if the ExternalMetadataColumnCollection.Count == 0 and return DTSValidationStatus.VS_NEEDSNEWMETADATA; to force a call to ReinitializeMetaData(). In ReinitializeMetaData() I call a method that creates the input's external metadata columns that reflect my external data source.
This works fine except every time I add my custom destination component to a SSIS Package and go to edit the component I am greeted with a dialog box that states: "The component is not in a valid state. ... Do you want the component to fix these errors automatically?" Pressing the Yes button, I assume, makes the call to ReinitializeMetaData() and I have my external metadata columns. Where is the correct place to add the external metadata columns so the user does not have to take this extra step every time they add my component to their package?
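One possibility, offered as an assumption rather than established guidance, is to create the external metadata columns up front in ProvideComponentProperties, so the component is already valid when first dropped on the design surface (VB.NET; the column definition is hypothetical):

Public Overrides Sub ProvideComponentProperties()
    MyBase.ProvideComponentProperties()
    MyBase.RemoveAllInputsOutputsAndCustomProperties()
    Dim input As IDTSInput90 = ComponentMetaData.InputCollection.New()
    input.Name = "Input"
    input.ExternalMetadataColumnCollection.IsUsed = True
    ' Define the fixed destination schema here instead of waiting
    ' for ReinitializeMetaData():
    Dim col As IDTSExternalMetadataColumn90 = _
        input.ExternalMetadataColumnCollection.New()
    col.Name = "Field1"
    col.DataType = DataType.DT_STR
    col.Length = 10
    col.CodePage = 1252
End Sub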
Table A on Server 1 (3 rows):

ID   Name  Address
ID1  A     B
ID2  X     Y
ID3  M     N
There is another table on a different server which looks like
Table B on Server 2:

PKColumn  ID   Details
1         ID1  Desc1
2         ID1  Desc2
3         ID1  Desc 3
4         ID2  Desc
5         ID2  Description
As you can see, ID is the common column for these two tables. I want to query the above two tables, and the output should be dumped into a new table on Server 2.
I am using the following SSIS Package
OledbDataSource-------> OledbCommand(Select * from TableB where ID =?)
From here, how can I insert the rows returned from the OLE DB Command into another table? Since, for each row of TableA, it will return some output rows, how can I insert all of these into the new table?
Please help on configuring the output of the oledb command.
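If the servers can reach each other, one alternative sketch skips the per-row OLE DB Command entirely and performs the join and insert in a single set-based statement on Server 2 (the linked server, database, and target table names are hypothetical):

INSERT INTO dbo.NewTable (PKColumn, ID, Name, Address, Details)
SELECT b.PKColumn, a.ID, a.Name, a.Address, b.Details
FROM [Server1].[DatabaseA].dbo.TableA AS a
JOIN dbo.TableB AS b
    ON b.ID = a.ID;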
Is it true that I will not be able to use the returned value from an SP that is called on every row from an OLE DB Command transformation? I see all kinds of complaints on the web but can't determine if this would be a waste of time. I'd like to append the returned value (which is calculated and cannot be joined in the buffer) to the data on its way out of the transformation.
I know that I still think like a SQL 2000 programmer-DBA, but I can't avoid it.
I've got the Flat File Connection Manager Editor set up with a text file in 'ragged right' format and CRLF as the header row delimiter. When, from the properties page and the Columns option, I go to alter just a few columns, I am not able to. It seems that you must erase all of them in order to redefine one or two. And what if you had 50? When I used the SQL 2000 DTS designer it did that without problems, altering columns again and again.
As far as I can tell it's a loss of flexibility, isn't it? Or is there any way to do that without deleting everything else?
I am using SSIS in SQL Server 2005 Enterprise. I have two OLE DB data sources from two disparate databases (IBM DB2 and Microsoft SQL Server), some columns from each of which are to be included in the merged output results. I have noted the various requirements in the forum postings with regard to sorting the OLE DB sources and specifying the output source columns as sorted, as well as the requirement that the join fields in the two sources be close/exact matches. Yet, when I run this in VS, while the work area reflects the expected number of rows being input into the Merge Join transformation, no count is reflected as output from that transformation into the final destination table. Specifically, my two data sources (IBM DB2 and MS SQL) are configured as follows:
IBM DB2 contains an SQL statement that uses CAST operations to create the result columns, and an ORDER BY clause to ensure that the output is sorted by the desired two columns. The OLE DB source property IsSorted is set to true; the Output Columns folder column definitions for "key_source_dtsy" and "key_source_dtrt" have their SortKeyPosition properties set to 1 and 2, respectively. Those fields are both defined as data type DT_STR, with lengths of 4 and 2, respectively. Below is the path metadata from the Data Flow Path Editor for the path from this source:
IBM DB2 source:

"Name"             "Data Type"  "Precision"  "Scale"  "Length"  "Code Page"  "Sort Key Position"  "Comparison Flags"  "Source Component"
"ID_CODE"          "DT_STR"     "0"          "0"      "10"      "1252"       "0"                  ""                  "Source F0005 User Defined Codes"
"CODE_DESCR_1"     "DT_STR"     "0"          "0"      "30"      "1252"       "0"                  ""                  "Source F0005 User Defined Codes"
"CODE_DESCR_2"     "DT_STR"     "0"          "0"      "30"      "1252"       "0"                  ""                  "Source F0005 User Defined Codes"
"key_source_dtsy"  "DT_STR"     "0"          "0"      "4"       "1252"       "1"                  ""                  "Source F0005 User Defined Codes"
"key_source_dtrt"  "DT_STR"     "0"          "0"      "2"       "1252"       "2"                  ""                  "Source F0005 User Defined Codes"
MS SQL contains an SQL statement that takes the columns as they are in the MS SQL table (no CAST operations needed); it also uses an ORDER BY clause to ensure the output is sorted by the join columns. The OLE DB source property IsSorted is set to true; the Output Columns folder columns for "key_source_dtsy" and "key_source_dtrt" have their SortKeyPosition properties set to 1 and 2, respectively. Those fields are both defined as data type DT_STR, with lengths of 4 and 2, respectively. Below is the path metadata from the Data Flow Path Editor for the path from this source:
MS SQL source:

"Name"             "Data Type"  "Precision"  "Scale"  "Length"  "Code Page"  "Sort Key Position"  "Comparison Flags"  "Source Component"
"id_code_name"     "DT_I2"      "0"          "0"      "0"       "0"          "0"                  ""                  "Source CodeName in db dwVdFY"
"key_source_dtsy"  "DT_STR"     "0"          "0"      "4"       "1252"       "1"                  ""                  "Source CodeName in db dwVdFY"
"key_source_dtrt"  "DT_STR"     "0"          "0"      "2"       "1252"       "2"                  ""                  "Source CodeName in db dwVdFY"
The Merge Join transformation specifies an INNER JOIN using the columns named "key_source_dtsy" and "key_source_dtrt" from the respective data sources. I know there are alternative ways of accomplishing my intent (Lookup, porting the MS SQL table to IBM DB2 so the join can occur in the SELECT statement, etc.); however, I'd like to use this functionality and assume that it should work.
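One cause that matches these symptoms, offered here as a guess: when IsSorted is set, the Merge Join trusts that both inputs are ordered the way SSIS compares strings, and an ORDER BY collation that differs between the two servers (DB2 versus SQL Server defaults) can quietly produce zero matches. A sketch of forcing a binary sort on the SQL Server side (table name hypothetical):

SELECT id_code_name, key_source_dtsy, key_source_dtrt
FROM dbo.CodeName
ORDER BY key_source_dtsy COLLATE Latin1_General_BIN,
         key_source_dtrt COLLATE Latin1_General_BIN;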
I've built a simple custom data flow transformation component following the Hands On Lab (http://www.microsoft.com/downloads/details.aspx?familyid=1C2A7DD2-3EC3-4641-9407-A5A337BEA7D3&displaylang=en) and the Books Online (ms-help://MS.MSDNQTR.v80.en/MS.MSDN.v80/MS.SQL.v2005.en/dtsref9/html/adc70cc5-f79c-4bb6-8387-f0f2cdfaad11.htm and ms-help://MS.MSDNQTR.v80.en/MS.MSDN.v80/MS.SQL.v2005.en/dtsref9/html/b694d21f-9919-402d-9192-666c6449b0b7.htm).
All it is supposed to do is create an output column and set its value to the result of calling a web service method (the transformation is synchronous). Everything seems fine, but when I run the data flow task that contains it, it doesn't generate any output. The Visual Studio debugger displays it as yellow, with 1,385 rows going into it, but the data viewer attached to its output is empty. The output metadata looks just like I expect: all of my input columns plus the new column, correctly typed. No validation or run-time warnings or errors are reported.
I'll include the entire C# file below, which only overrides the ProvideComponentProperties, Validate, PreExecute, ProcessInput, and PostExecute methods of the parent PipelineComponent class.
Since this is effectively a specialization of the DerivedColumn transformation, could I inherit from the class that implements the DC component instead of PipelineComponent? How do I even find out what that class is?
Thanks! Here's the code:

using System;
// using System.Collections.Generic;
// using System.Text;

using Microsoft.SqlServer.Dts.Pipeline;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

namespace CustomComponents
{
    [DtsPipelineComponent(DisplayName = "GID", ComponentType = ComponentType.Transform)]
    public class GidComponent : PipelineComponent
    {
        /// <summary>
        /// Column indexes for faster processing.
        /// </summary>
        private int[] inputColumnBufferIndex;
        private int outputColumnBufferIndex;

        /// <summary>
        /// The GID web service.
        /// </summary>
        private GID.WS_PDF.PDFProcessService gidService = null;

        /// <summary>
        /// Called to initialize/reset the component.
        /// </summary>
        public override void ProvideComponentProperties()
        {
            base.ProvideComponentProperties();
            // Remove any existing metadata:
            base.RemoveAllInputsOutputsAndCustomProperties();
            // Create the input and the output:
            IDTSInput90 input = this.ComponentMetaData.InputCollection.New();
            input.Name = "Input";
            IDTSOutput90 output = this.ComponentMetaData.OutputCollection.New();
            output.Name = "Output";
            // The output is synchronous with the input:
            output.SynchronousInputID = input.ID;
            // Create the GID output column (16-character Unicode string):
            IDTSOutputColumn90 outputColumn = output.OutputColumnCollection.New();
            outputColumn.Name = "GID";
            outputColumn.SetDataTypeProperties(Microsoft.SqlServer.Dts.Runtime.Wrapper.DataType.DT_WSTR, 16, 0, 0, 0);
        }

        /// <summary>
        /// Only 1 input and 1 output with 1 column is supported.
        /// </summary>
        public override DTSValidationStatus Validate()
        {
            bool cancel = false;
            DTSValidationStatus status = base.Validate();
            if (status == DTSValidationStatus.VS_ISVALID)
            {
                // The input and output are created above and should be exactly as specified
                // (unless someone manually edited the persisted XML):
                if (ComponentMetaData.InputCollection.Count != 1)
                {
                    this.ComponentMetaData.FireError(0, ComponentMetaData.Name, "Invalid metadata: component accepts 1 Input.", string.Empty, 0, out cancel);
                    status = DTSValidationStatus.VS_ISCORRUPT;
                }
                else if (ComponentMetaData.OutputCollection.Count != 1)
                {
                    this.ComponentMetaData.FireError(0, ComponentMetaData.Name, "Invalid metadata: component provides 1 Output.", string.Empty, 0, out cancel);
                    status = DTSValidationStatus.VS_ISCORRUPT;
                }
                else if (ComponentMetaData.OutputCollection[0].OutputColumnCollection.Count != 1)
                {
                    this.ComponentMetaData.FireError(0, ComponentMetaData.Name, "Invalid metadata: component Output must be 1 column.", string.Empty, 0, out cancel);
                    status = DTSValidationStatus.VS_ISCORRUPT;
                }
                // And the output column should be a Unicode string:
                else if ((ComponentMetaData.OutputCollection[0].OutputColumnCollection[0].DataType != DataType.DT_WSTR)
                    || (ComponentMetaData.OutputCollection[0].OutputColumnCollection[0].Length != 16))
                {
                    ComponentMetaData.FireError(0, ComponentMetaData.Name, "Invalid metadata: component Output column data type must be (DT_WSTR, 16).", string.Empty, 0, out cancel);
                    status = DTSValidationStatus.VS_ISBROKEN;
                }
            }
            return status;
        }

        /// <summary>
        /// Called before executing, to cache the buffer column indexes.
        /// </summary>
        public override void PreExecute()
        {
            base.PreExecute();
            // Get the index of each input column in the buffer:
            IDTSInput90 input = ComponentMetaData.InputCollection[0];
            inputColumnBufferIndex = new int[input.InputColumnCollection.Count];
            for (int col = 0; col < input.InputColumnCollection.Count; col++)
            {
                inputColumnBufferIndex[col] = BufferManager.FindColumnByLineageID(input.Buffer, input.InputColumnCollection[col].LineageID);
            }
            // Get the index of the output column in the buffer:
            IDTSOutput90 output = ComponentMetaData.OutputCollection[0];
            outputColumnBufferIndex = BufferManager.FindColumnByLineageID(input.Buffer, output.OutputColumnCollection[0].LineageID);
            // Get the GID web service:
            gidService = new GID.WS_PDF.PDFProcessService();
        }

        /// <summary>
        /// Called to process the buffer:
        /// Get a new GID and save it in the output column.
        /// </summary>
        public override void ProcessInput(int inputID, PipelineBuffer buffer)
        {
            if (!buffer.EndOfRowset)
            {
                try
                {
                    while (buffer.NextRow())
                    {
                        // Set the output column value to a new GID:
                        buffer.SetString(outputColumnBufferIndex, gidService.getGID());
                    }
                }
                catch (System.Exception ex)
                {
                    bool cancel = false;
                    ComponentMetaData.FireError(0, ComponentMetaData.Name, ex.Message, string.Empty, 0, out cancel);
                    throw new Exception("Could not process input buffer.");
                }
            }
        }

        /// <summary>
        /// Called after executing, to clean up.
        /// </summary>
        public override void PostExecute()
        {
            base.PostExecute();
            // Resign from the GID service:
            gidService = null;
        }
    }
}
Is there by chance a cunning way to make the input columns automatically populate the output of an asynchronous script transformation?
My transformation writes several rows for each input row read. I'm creating some new columns along the way but I'd like all of the input columns to get output each time also. However I can't see any obvious way to achieve this, short of manually defining each column to the output and populating it in the script.
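There is no built-in switch for this, but one workaround sketch uses reflection to copy each input column to a same-named output column, assuming matching output columns have already been defined. Untested, and it relies on the property naming the script designer generates (value properties plus *_IsNull flags):

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Output0Buffer.AddRow()
    For Each inProp As System.Reflection.PropertyInfo In Row.GetType().GetProperties()
        If inProp.Name.EndsWith("_IsNull") Then Continue For
        ' Reading a NULL column's getter throws, so check the _IsNull flag first:
        Dim isNullProp As System.Reflection.PropertyInfo = _
            Row.GetType().GetProperty(inProp.Name & "_IsNull")
        If isNullProp IsNot Nothing AndAlso CBool(isNullProp.GetValue(Row, Nothing)) Then Continue For
        ' Copy to the same-named output column, if one exists:
        Dim outProp As System.Reflection.PropertyInfo = _
            Output0Buffer.GetType().GetProperty(inProp.Name)
        If outProp IsNot Nothing AndAlso outProp.CanWrite Then
            outProp.SetValue(Output0Buffer, inProp.GetValue(Row, Nothing), Nothing)
        End If
    Next
End Sub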
SELECT a.TestID, a.TestCode FROM TableA a WHERE UPPER(RTRIM(a.TestCode)) IN (SELECT UPPER(RTRIM(b.TestCode)) FROM TableB b)
Of course the above query is missing a few things, but with ETL the WHERE clause's UPPER(RTRIM(...)) does not appear to be something that has an object or property that I can use in the Lookup.
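One workaround sketch: normalize the pipeline side with a Derived Column (for example, an expression such as UPPER(TRIM(TestCode))), and normalize the reference side by pointing the Lookup at a query instead of a table:

SELECT DISTINCT UPPER(RTRIM(TestCode)) AS TestCode
FROM TableB;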
Hi, I have an example situation that seems like it should have a super easy solution, but my jobs keep failing. Here we go. . .
I have a SQL Server 2005 table as my source in a data flow task. This table contains raw data. We'll call it FACT_Product_Raw; it contains a field called ProductType varchar(1). Let's say that ProductType contains values of "A" or "B" or "C", or, for that matter, some NULL and garbage values.
I have a lookup table, LOV_Product_Types. This table contains 3 fields that will transform my raw data table. We'll call these fields ProdTypeID smallint, ProdTypeRaw varchar(1), and ProdType smallint. It contains pairs such that A = 1, B = 2, and so on.
Here's what I want to do. I want to ADD a field to FACT_Product_Raw that contains the "looked up" value from LOV_Product_Types. Let's say that I want to add the ProdTypeID field to my _Raw table.
I have used the _Raw table as both my source and destination. It blows up every time. Help. Thanks, David
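Reading from and writing to the same table inside one data flow is a classic cause of this kind of failure. One sketch that sidesteps the data flow entirely is a set-based update using the names from the post (run the ALTER in its own batch before the UPDATE):

ALTER TABLE dbo.FACT_Product_Raw ADD ProdTypeID smallint NULL;

UPDATE r
SET r.ProdTypeID = l.ProdTypeID
FROM dbo.FACT_Product_Raw AS r
JOIN dbo.LOV_Product_Types AS l
    ON l.ProdTypeRaw = r.ProductType;

Rows whose ProductType has no match in LOV_Product_Types (the NULLs and garbage values) simply keep a NULL ProdTypeID.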
The documentation on the fuzzy lookup transform mentions that only columns of type DT_WSTR and DT_STR can be used in fuzzy matching. I interpreted this as meaning that you could not create a mapping between an input column of type DT_NTEXT and a column from the reference table. I assumed that you could still have a DT_NTEXT column as part of the input and mark this as a pass-through column so that its value could be inserted in the destination, together with the result of the lookup operation. Apparently this is not the case. Validation fails with the following message: 'The data type of column 'fieldname' is not supported.' First, I'd like to confirm that this is really the case and that I have not misinterpreted this limitation.
Finally, given the following situation
- A data source with input columns
Field_A  DT_STR
Field_B  DT_NTEXT
- A fuzzy lookup is used to match Field_A to a row in the reference table and obtain Field_C.
- Finally, Field_B and Field_C must be inserted into the destination.
I'll try to reproduce this later, but want to report it before I forget.
I just had my package fail on a VM I was testing on. It failed because on that machine, I logged in as MachineName\Administrator instead of using my domain account (the VM is not in the domain).
This was a problem because the "User Name" column generated by the Audit Transformation was 17 characters long! This is the length of my domain + user name on my development machine. Similarly, the machine name length was 15 characters.
I'd love to know what the "correct" sizes are for these columns. In the meantime, I'm going to set these to 255 manually, and hope the size sticks.
P.S. There was one other post on this topic, though the thread isn't clear that this was the problem: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=472445&SiteID=1.
In SSIS I use the DQS Cleansing transformation component. I've got a knowledge base (KB) in place, and this KB holds various domains; my data source has more input columns than I would like to use for a particular cleanup operation. I want to use some of the input columns to map against some domains in the KB. It is my understanding that it should be possible to select only the required input columns, but all I can do is select all input columns.