Mapping A Table (10 Columns) To A Table (100 Columns)
Apr 4, 2008
I'm in the process of converting legacy DTS packages to SSIS. I need to populate a table that has more fields than the source file. In DTS I did this with an ActiveX script. How do I go about doing this within SSIS?
In the ActiveX script most of the fields were defaulted with either spaces or zeroes.
One of the Destination fields needs to be incremented by 1 for each new record inserted.
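In the data flow a Derived Column can supply the constant spaces/zeroes, but the same effect can also live in the destination table itself. A minimal sketch with made-up table and column names, where DEFAULT constraints cover the defaulted fields and an IDENTITY column handles the per-record increment:

CREATE TABLE dbo.Destination
(
    RecordID   INT IDENTITY(1,1) PRIMARY KEY,      -- incremented by 1 for each inserted record
    SourceColA VARCHAR(50) NULL,
    SourceColB VARCHAR(50) NULL,
    FillerText CHAR(10)    NOT NULL DEFAULT ' ',   -- defaulted to spaces
    FillerNum  INT         NOT NULL DEFAULT 0      -- defaulted to zero
);

-- Listing only the columns fed from the source file lets the defaults and the identity fill the rest.
INSERT INTO dbo.Destination (SourceColA, SourceColB)
SELECT ColA, ColB
FROM dbo.FlatFileStaging;   -- hypothetical staging table loaded from the source file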
I have a table that contains codes for commodities. Some of the codes in this table have changed and some of them have not. So now I want to design a solution that enables me to map the new codes in a different mapping table to the old ones in the other table. I also want to retain the old codes because most of the archived data used the old codes.
Where there is no new code, the current code is being retained. How do I design my table and queries so that I can use the new codes as if I was using the old code? I want to select products with a certain code, but using the new code and mapping to the old codes, or vice versa.
The structure of the data is like this:

Code    Name
AA      AA
AL      Aluminium
ALM     ALM
ALT     Aluminium in tonnes
AR      AR
AUD     Australian Dollars
AUJPY   AUJPY
CAQ     CAQ
CC      CC
CCF     CCF
CER     Carbon Emmission Reduction

The mapping table is like this:

XAA     AA
XAL     AL
XMA     ALM
XAL     ALT
XAR     AR
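A hedged sketch of one way to query this, assuming hypothetical table names Commodity(Code, Name) for the codes above and CodeMap(NewCode, OldCode) for the mapping pairs, with the left-hand value treated as the new code:

-- Old codes stay in Commodity; the mapping only adds a new alias where one exists.
SELECT c.Code                      AS OldCode,
       COALESCE(m.NewCode, c.Code) AS EffectiveCode,   -- current code retained where there is no new code
       c.Name
FROM   dbo.Commodity c
       LEFT JOIN dbo.CodeMap m ON m.OldCode = c.Code;

-- Selecting products by either the new or the old code:
DECLARE @code VARCHAR(10) = 'XAL';   -- can be either form of the code
SELECT c.*
FROM   dbo.Commodity c
       LEFT JOIN dbo.CodeMap m ON m.OldCode = c.Code
WHERE  @code IN (c.Code, m.NewCode);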
The only way I know of to add a new column to an existing mapping is to go to the Advanced Editor and refresh. This, however, keeps only the default mapping (where the field names match); the rest is wiped out, so I need to restore the mapping manually after that. Risky and annoying at the same time. Is there any alternative?
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and Test_ID. An example with values is below:
In Table_One all types come under one column, and the values of all Types (Mammal, Fish, Bird, Reptile) come under another column (Animals). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table. So the columns Bird, Reptile, Mammal and Fish all come from different copies of Table_One.
For example:
Select Test_Name AS 'Test_Name', Table_Bird.Animal AS 'Birds', Table_Mammal.Animal AS 'Mammal', Table_Reptile.Animal AS 'Reptile', Table_Fish.Animal AS 'Fish' From Table_One
[Code] .....
The problem with this query is it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty as for Test_Two or Test_Three, it doesn't return that record. I used Or instead of And in the WHERE clause but that didn't work as well.
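One alternative that copes with the missing values, written only against the column names shown above (so treat the exact names as assumptions): drive the query from Table_Two and collapse Table_One with conditional aggregation, so a missing animal simply comes back as NULL instead of dropping the record.

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM   Table_Two t2
       LEFT JOIN Table_One t1 ON t1.TestID = t2.Test_ID   -- join key as described in the post
GROUP BY t2.Test_Name;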
Please note that the number of columns is different in each table. I want to dump the data of the Source table into the Destination table; that is, the data from 2 columns in the Source table should go into the last 2 columns of the Destination table. Also, the order of the columns in the Destination table will vary, so I need a way to dynamically insert the data in bulk, but I will know the column names for sure before inserting.
Is there any way to bulk insert into these columns?
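Since the column names are known before the insert, naming the destination columns explicitly already handles a different column order; if the names only arrive at run time, the same statement can be assembled as dynamic SQL. A sketch with made-up table and column names:

-- Explicit column lists: the position of the columns in either table does not matter,
-- and unlisted destination columns fall back to NULL or their defaults.
INSERT INTO dbo.Destination (DestColX, DestColY)
SELECT SrcColA, SrcColB
FROM   dbo.Source;

-- Dynamic variant when the names are supplied at run time:
DECLARE @destCol1 SYSNAME = N'DestColX', @destCol2 SYSNAME = N'DestColY',
        @srcCol1  SYSNAME = N'SrcColA',  @srcCol2  SYSNAME = N'SrcColB';
DECLARE @sql NVARCHAR(MAX) =
      N'INSERT INTO dbo.Destination (' + QUOTENAME(@destCol1) + N', ' + QUOTENAME(@destCol2) + N') '
    + N'SELECT ' + QUOTENAME(@srcCol1) + N', ' + QUOTENAME(@srcCol2) + N' FROM dbo.Source;';
EXEC sys.sp_executesql @sql;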
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data and 2 out of 6 have JSON data, so the users request to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second JSON column has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function above against the first column (with JSON data) in my query (with CROSS APPLY), I got the right data back, but I got 8 additional rows for each row in my table. The reason for this side effect is that the function returned a table of 8 rows (8 key/value pairs) for each JSON string it parsed.
1. First question: How do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
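One hedged way to get a single wide row back is to aggregate the function's output per product. This is only a sketch: the actual output column names of fnSplitJson2 aren't shown here, so propertyName/propertyValue (and the key names) are assumptions that would need to be replaced with the function's real columns.

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4,
       MAX(CASE WHEN B.propertyName = 'key1' THEN B.propertyValue END) AS key1,   -- assumed column/key names
       MAX(CASE WHEN B.propertyName = 'key2' THEN B.propertyValue END) AS key2
       -- ...one line per remaining key/value pair...
FROM   PRODUCT A
       CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
GROUP BY A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4;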
I updated my query (see below) and called the function twice within the CROSS APPLY clause, and I got this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: How do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
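The comma before the second function call turns it into an old-style cross join, and in that form the alias A is out of scope, which is why A.ITEM6 cannot be bound. Chaining a second CROSS APPLY keeps A visible to both calls; a sketch using the same names as above:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM   PRODUCT A
       CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
       CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C;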
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
Here is my requirement; I'm not sure if this is possible. Creating a table called master like col1, col2, col3, col4, col5 ... where col1, col2 are updatable - this can be done easily.
Col3, col4 are columns in another table, but these should be read only - is this possible? This is possible with a View, but that is not friendly with SharePoint CRUD... Col5 is a computed column of col2 and col5? If the above step can be done then sure this can be done I guess.
Hello,

Using SQL Server 2000, I'm trying to put together a query that will tell me the following information about a view:

The View Name
The names of the View's columns
The names of the source tables used in the view
The names of the columns that are used from the source tables

Borrowing code from the VIEW_COLUMN_USAGE view, I've got the code below, which gives me the View Name, Source Table Name, and Source Column Name. And I can easily enough get the View columns from the syscolumns table. The problem is that I haven't figured out how to link a source column name to a view column name. Any help would be appreciated.

Gary

select
    v_obj.name as ViewName,
    t_obj.name as SourceTable,
    t_col.name as SourceColumn
from
    sysobjects t_obj,
    sysobjects v_obj,
    sysdepends dep,
    syscolumns t_col
where
    v_obj.xtype = 'V'
    and dep.id = v_obj.id
    and dep.depid = t_obj.id
    and t_obj.id = t_col.id
    and dep.depnumber = t_col.colid
order by
    v_obj.name,
    t_obj.name,
    t_col.name
I just created a new table with over 100 columns and I need to populate just the first 2 columns.
The first column to populate is an identity column that is the primary key. The second column is a foreign key to another table, and I am trying to populate this column with all the values from that foreign key table. This is what I am trying to do.
column1 = ID, column2 = P_CLIENT_ID
SET IDENTITY_INSERT PIM1 ON
INSERT INTO PIM1 (P_CLIENT_ID) SELECT Client.ID FROM P_Client
So I am trying to insert both an identity value and a value from another table while leaving the other columns blank. How do I go about doing this?
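If the identity values are being generated rather than copied in, IDENTITY_INSERT isn't needed at all; leaving the identity column out of the column list lets SQL Server number the rows, and every unlisted column falls back to NULL or its DEFAULT (assuming those columns allow that). A sketch built from the names above:

INSERT INTO PIM1 (P_CLIENT_ID)
SELECT ID
FROM   P_Client;   -- the identity column is omitted, so it is generated automatically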
Hi, I use SqlDataReader to read one row from the database and then set some properties to the values retrieved, like this:
string myString = myReader.GetValue(0); // this sets myString to the first value in the row
If, however, I change the order of the columns returned by the stored procedure, myString would be set to the wrong value. Is there a way to do something like this:
string myString = myReader.GetValue["ColumnName"];
Hi, I have 2 tables defined as follows:

Table1 = uid, Field1, Field2, Field3 ... Fieldn, FormUID
Table2 = FormUID, Label, Position

When I query Table1 I would like to replace the column name of Field1...Fieldn with the Label from Table2 where the Position = n value of the Field label. For example, let's say Table2 contains the following:

1, customerName, 1
1, customerTitle, 2
1, customerDOB, 3

and Table1 might contain:

1, Paul Jones, Mr, 21/09/1987, 1

When I query Table1 I would get:

uid = 1, Field1 = Paul Jones, Field2 = Mr, Field3 = 21/09/1987

What I would like to get is:

uid = 1, customerName = Paul Jones, customerTitle = Mr, customerDOB = 21/09/1987

I have up to 20 Fieldn columns so I need to do this for all columns even if there are no matching columns. Any help would be great. Regards
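A column alias can't be read from data directly, so the usual workaround is dynamic SQL: build the SELECT as a string from Table2 and execute it. A minimal sketch for the first three fields, assuming the names above and SQL Server syntax; missing labels fall back to the original FieldN name:

DECLARE @sql NVARCHAR(MAX);

SELECT @sql = N'SELECT uid'
    + N', Field1 AS ' + QUOTENAME(ISNULL(MAX(CASE WHEN Position = 1 THEN Label END), 'Field1'))
    + N', Field2 AS ' + QUOTENAME(ISNULL(MAX(CASE WHEN Position = 2 THEN Label END), 'Field2'))
    + N', Field3 AS ' + QUOTENAME(ISNULL(MAX(CASE WHEN Position = 3 THEN Label END), 'Field3'))
    + N' FROM Table1 WHERE FormUID = 1'
FROM Table2
WHERE FormUID = 1;

EXEC sys.sp_executesql @sql;

The remaining Field4...Field20 columns extend the same pattern, or the whole column list can be built by looping over Table2.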
I created a new package with a source and destination and manually created the output columns with data type, etc. Works. The issue is, say the table has 200 columns to export... I don't want to create these by hand. How can I just say export them all to CSV format and not have to specify and map each and every column?
Hello,

I have a query that I need help with. There are two tables...

Product
- ProductId
- Property1
- Property2
- Property3

PropertyType
- PropertyTypeId
- PropertyType

There are many columns in (Product) that reference one lookup table (PropertyType). In the table Product, the columns Property1, Property2, Property3 all contain a numerical value that references PropertyType.PropertyTypeId.

How do I select a Product so I get all rows from Product and also the PropertyType that corresponds to Product.Property1, Product.Property2, and Product.Property3?

ProductId | Property1 | Property2 | Property3 | PropertyType1 | PropertyType2 | PropertyType3

PropertyType(1) = PropertyType for Property1
PropertyType(2) = PropertyType for Property2
PropertyType(3) = PropertyType for Property3

I hope this makes sense. Thanks in advance.
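Since all three property columns point at the same lookup table, the usual pattern is to join PropertyType once per column, each join under its own alias; LEFT JOINs keep products whose properties are NULL. A sketch against the names above:

SELECT p.ProductId,
       p.Property1, p.Property2, p.Property3,
       pt1.PropertyType AS PropertyType1,
       pt2.PropertyType AS PropertyType2,
       pt3.PropertyType AS PropertyType3
FROM   Product p
       LEFT JOIN PropertyType pt1 ON pt1.PropertyTypeId = p.Property1
       LEFT JOIN PropertyType pt2 ON pt2.PropertyTypeId = p.Property2
       LEFT JOIN PropertyType pt3 ON pt3.PropertyTypeId = p.Property3;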
I am new to using SSIS. I am supposed to move data from a text file to a SQL Server table. I did that successfully when I simply mapped columns one-to-one, but I could not conditionally map one column to different destination columns depending on some criteria.
Example: I want to make SSIS map the column A depending on the value of field X:
If X= Value1 Map A -> B
if X= Value2 map A -> C and so on.
This is an urgent situation. I will really appreciate instant help.
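For what it's worth, inside an SSIS data flow this kind of routing is normally expressed with a Derived Column or Conditional Split rather than a script; the logic itself is easiest to see as plain T-SQL. A sketch with hypothetical staging and destination names, not the package itself:

-- The CASE expressions route the value of A into B or C depending on X,
-- leaving the other destination column NULL.
INSERT INTO dbo.Destination (B, C)
SELECT CASE WHEN s.X = 'Value1' THEN s.A END AS B,
       CASE WHEN s.X = 'Value2' THEN s.A END AS C
FROM   dbo.Staging s;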
I had to use SSIS 2005 in a short project recently and had little time to work it out. I was importing a whole bunch of flat files into SQL Server tables, with many derived columns and transformations in between. It seems to automatically map columns from the flat file to columns in the SQL table where the names of the columns are equal. But can it also do it automatically by position, so flat file column 1 goes to SQL table column 1, etc.? In each flat file I had to manually click and drag the columns across to map them, which took a very long time as there were hundreds of columns in some tables! Thanks.
As soon as I call the input.GetVirtualInput() method I get a COM exception. It seems that I am missing a
VirtualInputColumnCollection on the component, but I can't seem to figure out why.
When I drop all the other components and only keep the OLE DB Source and OLE DB Destination with a flow between them, the call to input.GetVirtualInput() does not fail with a COM exception and I can map normally.
I have set up a script task in one of my packages that I have set up to modify another package right before running it. This package is nothing more than a data flow task that transfers rows via an SQL command from one table into another. The strange thing is I have gotten it to work with some tables but not with others.
The script bombs out in the loop where I map all of the columns (found below), where I use MapInputColumn, with the error HRESULT: 0xC0010009 on Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSExternalMetadataColumnCollection90.get_Item(Object Index).
The thing is, this happens after looping roughly 55 times, but there are still about 100 columns that it needs to loop through.
Code Block

Dim input As IDTSInput90 = data_destination.InputCollection(0)
Dim virtual_input As IDTSVirtualInput90 = input.GetVirtualInput
Dim input_column As IDTSInputColumn90
Dim virtual_column As IDTSVirtualInputColumn90

' Iterate through the virtual input column collection and map field names
For Each virtual_column In virtual_input.VirtualInputColumnCollection
    input_column = inst_data_destination.SetUsageType(input.ID, virtual_input, virtual_column.LineageID, DTSUsageType.UT_READONLY)
    inst_data_destination.MapInputColumn(input.ID, input_column.ID, input.ExternalMetadataColumnCollection.FindObjectByID(virtual_column.Name).ID)
Next
Just for kicks I removed the mapping portion of the code and left in the SetUsageType call to see if it would update the available input columns in the destination. The script will then finish successfully, but still only the 55 or so fields out of 155 are available in the input. So I then stepped through the script with the mapping portion still disabled, and after it loops successfully, I call ReinitializeMetaData and it produces an error in the input_column variable: HRESULT: 0xC0047041.
I find it odd that this still reports to me that the script finished successfully and I also find it odd that this works fine on two other tables I've tested but not this one. Any insight would be greatly appreciated.
I have two tables with a common column: name. The first one includes such columns as the name of the football team, its country and budget. The second one includes the team name, victories (quantity of them), lost games, and season. My task is to select columns such as name, victories and lost games from the 2nd table, but only for the teams that have a budget over 1 mln. How do I build the select statement: SELECT name, victories, lost games FROM TABLE2 WHERE budget > 1000000, when budget is in TABLE1? Or should I equate table1.name = table2.name?
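Yes, matching the two tables on the shared name column is the way to do it; a sketch, with the column names guessed from the description above (lost_games in particular is an assumption):

SELECT t2.name, t2.victories, t2.lost_games
FROM   TABLE2 t2
       INNER JOIN TABLE1 t1 ON t1.name = t2.name   -- the two tables matched on the shared name column
WHERE  t1.budget > 1000000;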
Case: Exporting a report to PDF/Printing/TIFF. Report: contains 1 table with 19 columns. 1 column is static, the other 18 are visible at the user's discretion. The report, when printed/exported to PDF, spans 2 pages naturally, 16 columns on the first page, 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report complies and the viewable version is perfect, the Excel export is perfect... but in the PDF export, on the first page, every fifth column, starting with the last column that was hidden, is expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the same width that the original 2 columns were going to take up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page size selection available for the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestion on this issue would be appreciated.
I have a table A, with two ID columns. In a report both ID columns should be shown with the actual value stored in a second table, B. The problem is, both IDs need to be looked up in B, but are not in the same row. How do I do this in an efficient way? A sub select?

Thanks,
John
Orders, with OrderID as primary key, a code for the client, and a code for the place of delivery/recipient.
Both the client and the place of delivery should be linked to the table:
Relations, where each relation has its own PrimaryID, which is an auto-numbered ID. Now I want to extract my orders, with both the client code and the place of delivery code linked to the Relations table, so that the name and address are shown for both.
I can link one of them by:
InnerJoin On Orders.ClientID = Relations.ClientID, but it's not possible to also link to the ReceipantsID. Is there a way to solve this?
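It is possible: join Relations twice, once per code, each join under its own alias. A sketch reusing the column names from the post where they are given; Name, Address and the key used for ReceipantsID are assumptions:

SELECT o.OrderID,
       cli.Name    AS ClientName,
       cli.Address AS ClientAddress,
       rec.Name    AS RecipientName,
       rec.Address AS RecipientAddress
FROM   Orders o
       INNER JOIN Relations cli ON cli.ClientID = o.ClientID        -- same join as above
       INNER JOIN Relations rec ON rec.ClientID = o.ReceipantsID;   -- assumed: ReceipantsID points at the same Relations key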
Say you have a fact table with a few columns that all reference the same key column in a dimension table; how would you write a view to return the information for those keys?
USE MyTestDB;
GO
SET NOCOUNT ON;

IF OBJECT_ID('dbo.FactTemp', 'U') IS NOT NULL
    DROP TABLE dbo.FactTemp;
[Code] ....
I'm using very small data at the moment, and the query plan and statistics don't really say which way.
What is the easiest way to determine the number of columns in a table (in a particular database)? Also, is there any way to get the column id associated with each column (is there such a thing)?
NOTE, this needs to be done with only knowing the name of the table and database and nothing else.
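A sketch that needs nothing but the database and table names; INFORMATION_SCHEMA.COLUMNS (or sys.columns) carries both the column names and an ordinal position. MyDatabase and MyTable are placeholders:

USE MyDatabase;   -- placeholder database name

SELECT COLUMN_NAME, ORDINAL_POSITION   -- ordinal position of each column
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'MyTable';         -- placeholder table name

SELECT COUNT(*) AS ColumnCount
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'MyTable';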
I need to find out the number of columns per table for ALL of my tables in my database. This way I can compare my old copy of the database to the new and make sure that any new columns are there.
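A sketch of a per-table count that can be run against both the old and the new database and then compared (INFORMATION_SCHEMA.COLUMNS also lists view columns, so filter on base tables if that matters):

SELECT TABLE_SCHEMA, TABLE_NAME, COUNT(*) AS ColumnCount
FROM   INFORMATION_SCHEMA.COLUMNS
GROUP BY TABLE_SCHEMA, TABLE_NAME
ORDER BY TABLE_SCHEMA, TABLE_NAME;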
I have a table (that I get as a result of pivots and joins in a stored procedure) that has two columns with the same name (each time the stored procedure is called, different columns will be in this situation). For every row, only one of these columns has a value and the other has NULL.
Does anyone know how to prevent this situation during the JOIN that creates this table? Or of a way to merge these columns once they are there?
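One way is to merge the pair at the point where the join produces it, so only a single column with that name survives; a sketch with placeholder names, since the real column names here are only known at run time:

SELECT a.KeyCol,
       COALESCE(a.SomeCol, b.SomeCol) AS SomeCol   -- keeps whichever side is non-NULL
FROM   LeftResult a
       JOIN RightResult b ON b.KeyCol = a.KeyCol;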
I have a procedure where I need to get the columns of a table, store them in a temp table and process them further. For this I am planning to use the sys.columns table as follows:
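The query itself isn't shown in the post; a minimal sketch of that step, with a placeholder table name:

SELECT c.name      AS ColumnName,
       c.column_id AS ColumnId,
       t.name      AS TypeName
INTO   #TableColumns
FROM   sys.columns c
       JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE  c.object_id = OBJECT_ID(N'dbo.MyTable');   -- placeholder table name

-- ...further processing against #TableColumns...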
Hi, I have 3 tables which are part of my query and I need to total one column from each of them, but cannot get it to work. I had this working within Access using the NZ function (below), but trying to use this within VS to produce my ASP.NET code did not work, saying NZ was not a valid function.

nz(HW.HGF) + nz(HD.HGF) + nz(HL.HGF) AS F

Where HW, HD, HL are the table names and HGF is the same column name in each which I want to total. Any help would be greatly appreciated. Thanks
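NZ is an Access/Jet function; the SQL Server equivalents are ISNULL or COALESCE, so the same expression becomes (assuming HGF is numeric and 0 is the wanted fallback):

ISNULL(HW.HGF, 0) + ISNULL(HD.HGF, 0) + ISNULL(HL.HGF, 0) AS F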