Transform One Varchar Column Into Many Bit Columns
May 19, 2007
Hi all,
I'm new to SSIS but have been able to successfully create some simple packages. My situation is that at work we use a single column to describe the status of applications. However, this makes for hellacious queries because some of those statuses inherently were one or more other statuses previously. Example:
Admit = Admit
Accept = Admit then Accept
Withdraw Accept = Admit, Accept, then Withdraw
Decline = Admit then Decline
As you can see, inherently those were all admits at one point. So instead of writing long queries, for example to get all my "Admits", I'd rather query another table that has the following columns as bits:
Admit
Accept
Withdraw
That way I can query the admit column and get all my admits. How can I use SSIS to transform my "Decision" column into those bit columns?
Thanks for any help or suggestions you have.
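For what it's worth, a minimal T-SQL sketch of that kind of mapping (the table and column names are placeholders); the same CASE logic could also be expressed in an SSIS Derived Column transform feeding the new table:
INSERT INTO dbo.ApplicationStatus (ApplicationId, Admit, Accept, Withdraw)
SELECT ApplicationId,
       CASE WHEN Decision IN ('Admit', 'Accept', 'Withdraw Accept', 'Decline') THEN 1 ELSE 0 END AS Admit,
       CASE WHEN Decision IN ('Accept', 'Withdraw Accept') THEN 1 ELSE 0 END AS Accept,
       CASE WHEN Decision = 'Withdraw Accept' THEN 1 ELSE 0 END AS Withdraw
FROM dbo.Applications;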
View 13 Replies
Jun 18, 2008
Hi all, new to SSIS so please bear with me on the noobie question:
Situation: have a SQL database with several tables, each table has several char fields that represent dates (ex. YYYYMMDDHHMMSSMS)- this SQL database is created weekly from an extract of an old Oracle RDB database maintained by a third party vendor.
Need to copy the data to a new database and tables
Then for each table:
1. Check each char date column: if the value is '1858111700000000' (the Oracle dummy date) then change it to the SQL low date; if it's not, then transform the date into SQL Server date format. I've tried some of the data flow components - I just need to know which ones to use and in what order.
What would be the best components to do this iterative processing in an efficient manner? Some tables have up to 5 million rows.
Any Ideas would be appreciated! Thanks!
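A hedged T-SQL sketch of the date handling (the table and column names are placeholders, and it assumes datetime as the target type); the same logic could instead live in an SSIS Derived Column transform:
UPDATE dbo.TargetTable
SET    DateCol = CASE
           WHEN CharDateCol = '1858111700000000'
               THEN CONVERT(datetime, '17530101')          -- SQL Server low date
           ELSE CONVERT(datetime,
                    SUBSTRING(CharDateCol, 1, 8) + ' ' +   -- YYYYMMDD
                    SUBSTRING(CharDateCol, 9, 2) + ':' +   -- HH
                    SUBSTRING(CharDateCol, 11, 2) + ':' +  -- MM
                    SUBSTRING(CharDateCol, 13, 2))         -- SS
       END;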
View 5 Replies
View Related
Feb 21, 2005
Hi guys,
I have a SQL Server 2000 table that contains a Year field and another table containing Products.
The Year table contains only a year field.
The Product table contains ProductId, Date, Quantity.
I'm trying to construct a query such that I can view the Quantity of Products by Year (derived from the Year table).
ie.
If the Year table contains the rows 2003, 2004 and 2005, the resultant query would produce the output: -
PId 2003 2004 2005
50 3 6 7
51 4 2 6
52 8 0 3
Any ideas how I can achieve this?
Cheers in advance,
Rob
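One possible shape for this on SQL Server 2000 (which has no PIVOT operator) is a CASE-based crosstab; a hedged sketch assuming tables named Product and Years, with the year columns hard-coded here (in practice they would be generated dynamically from the Year table):
SELECT p.ProductId AS PId,
       SUM(CASE WHEN YEAR(p.Date) = 2003 THEN p.Quantity ELSE 0 END) AS [2003],
       SUM(CASE WHEN YEAR(p.Date) = 2004 THEN p.Quantity ELSE 0 END) AS [2004],
       SUM(CASE WHEN YEAR(p.Date) = 2005 THEN p.Quantity ELSE 0 END) AS [2005]
FROM Product p
GROUP BY p.ProductId
ORDER BY p.ProductId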
View 2 Replies
View Related
Mar 22, 2004
I have a table of 1000+ rows with 100+ different items, with each Id having at most 10 items:
Id item
1 a
1 b
2 c
2 a
2 d
3 b
3 z
each item has an order in a different lookup table
item item_seq
a 1
b 2
c 3
d 4
The item columns (below) need to use the item order from the lookup table, so the output table is as follows:
Id item item2 item3
1 a b
2 a c d
3 b z
PLEASE HELP !!!
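A hedged sketch of one way to do this on SQL Server 2005 or later (the table names dbo.IdItems and dbo.ItemOrder are placeholders): ROW_NUMBER orders each Id's items by item_seq, and MAX(CASE ...) spreads them across positional columns (shown for three columns; it extends to ten the same way). Items missing from the lookup sort last.
;WITH ranked AS (
    SELECT i.Id, i.item,
           ROW_NUMBER() OVER (PARTITION BY i.Id
                              ORDER BY ISNULL(o.item_seq, 999), i.item) AS pos
    FROM dbo.IdItems i
    LEFT JOIN dbo.ItemOrder o ON o.item = i.item
)
SELECT Id,
       MAX(CASE WHEN pos = 1 THEN item END) AS item,
       MAX(CASE WHEN pos = 2 THEN item END) AS item2,
       MAX(CASE WHEN pos = 3 THEN item END) AS item3
FROM ranked
GROUP BY Id
ORDER BY Id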
View 8 Replies
View Related
Jan 10, 2007
Hi All,
I have an OLE DB transform with a SQL Command of:
sp_get_sponsor_parent ?,? OUTPUT
where sp_get_sponsor_parent is defined like:
CREATE PROCEDURE [dbo].[sp_get_sponsor_parent]
@pEID int,
@results int OUTPUT
AS
BEGIN
.
.
.
END
I map the columns, refresh & OK out of the component without trouble, but on executing the package it fails during validation on this component. I'm utterly stumped.
Any light shed would be greatly appreciated.
Many thanks in advance,
Tamim.
View 3 Replies
View Related
Nov 20, 2007
I have looked far and wide and have not found anything that works to allow me to resolve this issue.
I am moving data from DB2 using the MS OLEDB Provider for DB2. The OLEDB source sees the column of data as DT_TEXT. I set up a destination to SQL Server 2005 and everything looks good until I try and run the package.
I get the error:
[OLE DB Source [277]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
[OLE DB Source [277]] Error: Failed to retrieve long data for column "LIST_DATA_RCVD".
[OLE DB Source [277]] Error: There was an error with output column "LIST_DATA_RCVD" (324) on output "OLE DB Source Output" (287). The column status returned was: "DBSTATUS_UNAVAILABLE".
[OLE DB Source [277]] Error: The "output column "LIST_DATA_RCVD" (324)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "LIST_DATA_RCVD" (324)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (277) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Any suggestions on how I can get the large string data in the varchar column in DB2 into the varchar(max) column in SQL Server 2005?
View 10 Replies
View Related
Jan 25, 2007
Is there a transform available which allows you to specify two different tables (with the same primary key) and compare columns between those two tables (you identify in the transform which columns' values need to be compared)?
thanks
View 11 Replies
View Related
Apr 12, 2006
Hi, I was wondering if there is a possibility to use a conditional if like this:
IF(ISNULL(mycolumn value, "new value if null", mycolumnvalue)
in a Derived Column transform, to assign a value to a null column value. I do know I can do it using a Script Component, but it would be simpler to do by using an expression.
Thank you,
Ccote
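The Derived Column expression language has no IF() function, but the conditional operator gives the same effect; a minimal sketch (the column name is a placeholder):
ISNULL(mycolumn) ? "new value if null" : mycolumn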
View 3 Replies
View Related
Jan 8, 2014
I have data like this in the table :
IntRecpieID strName intMealtypeID Total
100 'A' 1 20
101 'B' 2 30
100 'A' 3 40
Desired Output required:
IntRecpieID StrName 1 2 3
100 'A' 20 Null 40
101 'B' Null 30 Null
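A hedged T-SQL sketch using PIVOT (the table name dbo.Recipes is a placeholder and the meal-type IDs are hard-coded):
SELECT IntRecpieID, strName, [1], [2], [3]
FROM (SELECT IntRecpieID, strName, intMealtypeID, Total
      FROM dbo.Recipes) AS src
PIVOT (SUM(Total) FOR intMealtypeID IN ([1], [2], [3])) AS p;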
View 5 Replies
View Related
Oct 30, 2007
Hello,
I understand that it is not possible to use PATINDEX in a Derived Column transform. I'm trying to eliminate leading zeros from a column where the string is always 14 characters long, but the first non zero character could occur any number of characters from the left of the string. The following achieves what I need:
substring([Account No w/0s], patindex('%[^0]%', [Account No w/0s]), 14) AS ExtractedAcctNo
Does anyone happen to know how I might express this differently in a Derived Column transform? Should I consider calling a function through an OLE DB Command transform on each row instead? Maybe I should have an Execute SQL Task that runs an UPDATE statement against the column.
Thank you for your help!
cdun2
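If the 14-character string is always purely numeric, one hedged alternative inside a Derived Column transform is a round-trip cast, which drops the leading zeros without needing PATINDEX:
(DT_WSTR, 14)(DT_I8)[Account No w/0s]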
View 6 Replies
View Related
Nov 30, 2006
Hi,
Is it possible, using a Derived Column transform, to change all blank values in a flat file to, say, a "0"?
Basically, convert "" to "0".
Thanks for any help,
Slash.
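A minimal Derived Column expression sketch for that (the column name is a placeholder):
[MyColumn] == "" ? "0" : [MyColumn]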
View 3 Replies
View Related
Apr 26, 2007
Hi,
I have a table with a BLOB column, and I need to populate this table including the BLOB column (image type in the database).
What I have done is:
1. use a flat file transform to read a .csv file which specifies the names of the files that store the binary contents for the BLOB column for each row.
2. use an Import Column Transform to read the binary files.
3. use an OLE DB Dest transform to dump the data into my destination table.
I got the error saying:
Error: 0xC02090BB at XXXX, Import Column [1]: Opening the file ".diagram1.bin" for reading failed. The file was not found.
I guess this is because my file "diagram1.bin" is not in the current path? (The current path can be found by the System.IO.Directory.GetCurrentDirectory() call; in my case it is "c:program filesmicrosoft visual studio 0common7IDE".)
My question is: how to determine the directory path information of the package I am running?
Thanks!
Wenbiao
View 3 Replies
View Related
Feb 16, 2007
When data is imported from our legacy system, the same functions need to be applied to several columns on different tables. I want to build a kind of "Function Library", so that the functions I define can be re-used for columns in several packages.
The "Derived Column" transform seems ideal, if only I could add my list of user-defined functions to it. Basically I want to inherit from it, and add my own list of functions for the users to select.
Is this possible ?
What other approaches could I take to building about 30 re-usable functions?
View 7 Replies
View Related
Jun 26, 2006
Would anyone happen to have any pointers or know of any good code examples to either programmatically change the type of an input column when it is passed through the component, or add a new column to the output? I am extracting data from an Oracle database which is in Julian date format (represented within SSIS as a DT_NUMERIC column) and I need to either transform the input column holding it into a date column, or dynamically add a new output column holding the transformed data.
Many thanks
View 1 Replies
View Related
May 15, 2008
I want to replace the contents of a value column with itself but rounded to 2 decimal places.
The current column is a double and I have tried to perform this using the following expression but it fails to work.
Round(cc_vl,2)
How should I achieve this?
View 7 Replies
View Related
Mar 7, 2008
I have too many DTS packages to migrate to SSIS, and while examining a DTS package in BIDS (converted with the migration utility) I tried to edit the resulting migrated package, which opened the DTS interface with the two connection icons joined by the big fat arrow with a gear on it... not exactly what I had in mind. In other words, it looks like SSIS on the outside, but it's still DTS on the inside.
So I stripped out a series of components from a more complex package, hoping that simplifying it would reveal the contents of the old DTS Transformations tab at least partially set up in a Derived Column transformation.
Can I get there from here, or must I recreate every stinking definition in a derived column manually from the ground up?
thanks very much for your help
View 2 Replies
View Related
Apr 29, 2015
We run Standard 2008 R2. I'm looking at the files this transform is complaining about. They seem to be named appropriately. The customerid folders don't exist when this runs; I'm going to put one in place to see if that is the problem.
The errors i'm getting are...
[Export Column [22]] Error: The file name "c:usersmyuserid heprojectnamecustomeridafilename.doc" is not valid. The file name is a device or contains invalid characters.
[Export Column [22]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "Export Column" (22)" failed because error code 0xC020207F occurred,
and the error row disposition on "input column "FILENAME" (29)" specifies failure on error. An error occurred on the specified object of the specified component.
There may be error messages posted before this with more information about the failure.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Export Column" (22) failed with error code 0xC0209029
while processing input "Export Column Input" (23). The identified component returned an error from the ProcessInput method. The error is specific to the component,but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
View 7 Replies
View Related
Jul 25, 2006
W2k3 server, SQL 2005.
@@version = Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86)
Standard Edition on Windows NT 5.2
(Build 3790: Service Pack 1)
I have my first SSIS package almost working, but I'm having an odd problem and can't find any information to help resolve it.
I'm importing from a flat file (csv) to an existing table (append). I've got a Derived Column transformation in the middle to do some data cleanup. It's all working except for one little problem...
One of the transformations is 'REPLACE([Column 3],"^","; ")', output to a new column. (The input file has a field that uses carets as delimiters between an unknown number of items; I'm changing that to semicolons for easier reading.) Not all rows have data in this column, some will have one item, some will have multiple items.
The REPLACE works except that it fills in repeated data for all the blank rows.
Example:
Incoming data is:
1 Smith,Jane^Jones,Jane
2 Brown,John
3
4 Adams,James^Adams,Jim
5
6 White,Debra
Data inserted into the table is:
1 Smith,Jane; Jones,Jane
2 Brown,John
3 Brown,John
4 Adams,James; Adams,Jim
5 Adams,James; Adams,Jim
6 White,Debra
I've tried to use a Conditional to skip the empty rows, but I can't get that working at all (I get syntax errors no matter what I put in).
Any suggestions on how to fix this would be most appreciated!
Thank you.
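One hedged way to express this in a single Derived Column expression, so the blank or NULL rows are left alone instead of repeating the previous value:
ISNULL([Column 3]) || [Column 3] == "" ? "" : REPLACE([Column 3], "^", "; ")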
View 5 Replies
View Related
Nov 11, 2011
I have a table that imported as varchar. Most of these columns need to be in a numerical format. How can I convert a table with columns named column0 (needs to be int), column1 (stays varchar), column2 (needs to be int), and column3 (needs to be int)?
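If every value in those columns really is numeric (or NULL/empty), a hedged T-SQL sketch is simply (the table name dbo.ImportedTable is a placeholder; ALTER COLUMN fails if any value cannot be converted):
ALTER TABLE dbo.ImportedTable ALTER COLUMN column0 int;
ALTER TABLE dbo.ImportedTable ALTER COLUMN column2 int;
ALTER TABLE dbo.ImportedTable ALTER COLUMN column3 int;
-- column1 stays varchar, so it is left untouched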
View 4 Replies
View Related
Sep 22, 2015
I'm attempting to use T-SQL to strictly parse/pull names from a string field like this: CN=John Doe,OU=xyz,DC=ituy,DC=qwer,DC=org... I would like the ultimate result to be JUST the full name John Doe (pretty much everything after the first = sign and before the first comma). I'm attempting combinations of REPLACE, STUFF, PATINDEX and SUBSTRING, but to no avail.
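A hedged sketch of the SUBSTRING/CHARINDEX approach (the table and column names are placeholders); it pulls everything between the first '=' and the first ',':
SELECT SUBSTRING(DN,
                 CHARINDEX('=', DN) + 1,
                 CHARINDEX(',', DN) - CHARINDEX('=', DN) - 1) AS FullName
FROM dbo.Accounts;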
View 5 Replies
View Related
Jun 4, 2008
Hi guys,
Is there a way to declare a default value of empty string '' for a varchar table column?
Thanks, Kevin
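A default of an empty string works like any other column default; a minimal sketch with hypothetical table and column names:
ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_MyCol DEFAULT ('') FOR MyCol;
-- or inline at creation time:
CREATE TABLE dbo.MyOtherTable (MyCol varchar(50) NOT NULL DEFAULT (''));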
View 4 Replies
View Related
Oct 19, 1999
I have a table on two different servers, the only difference that I can see is that on server A columns first (varchar 32) and last (varchar 32) have ANSI_PADDING set ON and on server B those columns are OFF. No idea why this is true: I didn't specify that the table be set up this way and they both followed similar creation/upgrade paths.
I execute "select last+first from <table>" on server A and the result looks like:
<last1> <first1>
<last2> <first2>
...
On server B I get
<last1><first1>
<last2><first2>
Now the docs say ANSI_PADDING has nothing to do with this behavior; in fact if I copy the data on server B to 2 new columns with ANSI_PADDING ON I get the same results. But that's the *only* thing that was different in syscolumns. What is causing the different output behaviors on these two servers? Thanks.
View 1 Replies
View Related
Jul 20, 2005
I have an application with highly compressible strings (gzip encoding usually gives somewhere between a 20-50X reduction). My base 350MB database is mostly made up of these slowly changing (or even static) strings. I would like to compress these so that my disk I/O and memory footprint are greatly reduced. Some databases have the ability to provide a compressed table, a compressed column, or a user-defined function to compress an individual field [a la COMPRESS() and DECOMPRESS()]. I could write a UDF with an extended procedure if I need to, but I'm wondering if there are any other known methods to do this in MS SQL Server 2000 today?
-- Frederick Staats
frederick dot w dot staats at intel dot com (I hate junk mail :-)
View 6 Replies
View Related
May 22, 2008
This is obviously a radical idea but some actually DO want to store linefeeds in varchar columns.
In MySQL I can escape difficult characters for example
INSERT INTO sometable(address) VALUES("23 SomeRoad
SomeTown
SomeCounty");
Does anyone know how to do this in Transact SQL?
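In Transact-SQL, string literals use single quotes and can contain literal line breaks, or the control characters can be concatenated in explicitly; a minimal sketch:
INSERT INTO sometable (address)
VALUES ('23 SomeRoad' + CHAR(13) + CHAR(10) +
        'SomeTown'    + CHAR(13) + CHAR(10) +
        'SomeCounty');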
View 7 Replies
View Related
Mar 24, 2015
We have a customer that is running SQL2012 and we are seeing a weird result on a query when we run it on their db. It is based off of a table that has about 30 columns but in this case we only care about 2 of them.
[Number] [varchar](15) NOT NULL
[Person_ID] [varchar](12) NULL
Here is the query we are doing:
Select Number,Person_ID From TableName where LP='ABC123'
The result I get back is the following:
Number:1
Person_ID:13864
The Person_ID should be a result of another table that created that Person_ID but it doesn't exist in that table. So we do not know where that 13864 is coming from. When we open that record through our application it shows Nothing for the Person_ID in that field.
When we do this query on our copy we get back
Number:1
Person_ID:
Which is exactly what we should see as the result.
Could there be a SQL Server setting on their server that could possibly be giving us back 13864 for a NULL value?
View 2 Replies
View Related
Jul 23, 2005
Hi,
This is probably an easy question for someone, so any help would be appreciated. I have changed the columns in a table that were nvarchar to the same size of type varchar, to halve the space needed for them. I have done this a) because this is never going to be an international application, b) we are running out of space, and c) there are 100 million rows.
I have done this with the ALTER TABLE statement, which seems to work, but the space used in the database hasn't altered. I'm presuming that, the way the records are structured within the table, there is just now more space free in between each page?
Is there a way of re-shrinking just an individual table and freeing up some of the space in there, or am I missing the point somewhere?
Thanks in advance,
Ian
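One thing that sometimes helps after narrowing column definitions is rebuilding the table's clustered index so the rows are rewritten, then shrinking; a hedged sketch for SQL Server 2000 with a placeholder table name:
DBCC DBREINDEX ('dbo.MyBigTable')
-- then, if the freed pages should be returned to the file system:
-- DBCC SHRINKDATABASE (MyDatabase)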
View 4 Replies
View Related
Jan 18, 2007
I have an encrypted column of data that is encrypted by a passphrase. The passphrase was encrypted by a symetric key in a key pair. The passphrase also is stored in a table. I can get the passphrase as needed to encrypt/decrypt the columns. I copied the production database to a new database for development. Subsequently I had to create a new symmetric/asymmetic key pair and recreated my passphrase with the new key pair. Now the passphrase will decrypt a text column but it will not decrypt two other columns which are of type varchar in the database. Here is an example:
DECLARE @pss varchar(30)
EXEC [dbo].[uspPassPhraseGet] @pss OUTPUT
SELECT DISTINCT contactid, uissueid, createdby, created_dt
,CONVERT(varchar(max),DecryptByPassPhrase(@pss, CONVERT(varchar(max),dbo.tbl_msg_app_legislativeinquiry.title), 1, CONVERT(varbinary, 23))) as title
,CONVERT(varchar(max),DecryptByPassPhrase(@pss, CONVERT(varchar(max),dbo.tbl_msg_app_legislativeinquiry.description), 1, CONVERT(varbinary, 23))) as description
,CONVERT(varchar(max),DecryptByPassPhrase(@pss, CONVERT(varchar(max),dbo.tbl_msg_app_legislativeinquiry.shortdesc), 1, CONVERT(varbinary, 23))) as shortdesc,
closed_dt, confidential, statusid, due_dt, deleted_dt,deletedbyid, highrisk, dbo.tbl_msg_app_legislativeinquiry.designator, dbo.tbl_ref_sys_status.description AS statusdesc
FROM dbo.tbl_msg_app_legislativeinquiry INNER JOIN
dbo.tbl_ref_sys_status ON statusid = dbo.tbl_ref_sys_status.ustatusid INNER JOIN
dbo.tbl_gbl_lkp_security ON uissueid = dbo.tbl_gbl_lkp_security.msgid AND
dbo.tbl_msg_app_legislativeinquiry.designator = dbo.tbl_gbl_lkp_security.designator
Like I said, I can execute the uspPassPhraseGet stored procedure and I get my passphrase. It will correctly decrypt the dbo.tbl_msg_app_legislativeinquiry.description field, which is great, but the other two fields will not decrypt. When I copied the database over, the encrypted fields do not display the same on the new database. The old database shows a box character followed by a bunch of junk (as expected). The new copied table on the new database shows only a single box (not the same as the original).
Is there a known bug with copying a table with encrypted varchar fields to a new database? I tried to run a test and got the same result. I also tried to convert the varchar columns to text to see if that solved the problem, and it didn't. The description field, however, is a text type column and it reads exactly as the original. The problem, I think, is that the Copy Database didn't actually copy my data correctly. How can I get the original encrypted data from production into my development database? I also tried just dropping the table and reimporting it, but that didn't take either. Scratching my head on this one.
View 5 Replies
View Related
Feb 26, 2014
I know that if I have an nvarchar column I can use an equality like = N'supersqlstring' so it doesn't implicitly cast as a varchar, like it would if I were to do = 'supersqlstring'. And then I'll be a big SQL hero and all my stored procedures will run before a millisecond can whisper.
But if I'm comparing an nvarchar column to a varchar column, is it better to cast the varchar 'up' to an nvarchar or cast the nvarchar 'down' to a varchar?
For instance:
cast(a.varchar as nvarchar(100)) = an.nvarchar
or
cast(an.nvarchar as varchar(100)) = a.varchar
Leaving aside non-matching cases (I don't think, at least, that SQL considers the varchar n to be equal to the nvarchar ń), what's the best way to handle this?
Pretend for a moment that each column contains a mixed letter and number ID with no accented or wiggly-squiggly Unicode characters; it's just designs clashing.
Is there a performance hitch doing it one way or another? Should I use COLLATE? Should one of the columns be altered?
View 8 Replies
View Related
Mar 15, 2014
-- My first Data
create table #myfirst (id int, city varchar(20))
insert into #myfirst values (500,'Newyork')
insert into #myfirst values (100,'Ediosn')
insert into #myfirst values (200,'Atlanta')
insert into #myfirst values (300,'Greenwoods')
insert into #myfirst values (400,'Hitchcok')
insert into #myfirst values (700,'Walmart')
insert into #myfirst values (800,'Madida')
-- My Second Data
create table #mySecond (id int, city varchar(20),Sector varchar(2))
insert into #mySecond values (1500,'Newyork','MK')
insert into #mySecond values (5500,'Ediosn','HH')
insert into #mySecond values (5060,'The Atlanta','JK')
insert into #mySecond values (7500,'The Greenwoods','DF')
insert into #mySecond values (9500,'Metro','KK')
insert into #mySecond values (3300,'Kilapr','MK')
insert into #mySecond values (9500,'Metro','NH')
-- My Third Data
create table #myThird (id int, city varchar(20),Sector varchar(2))
insert into #myThird values (33,'Walmart','PP')
insert into #myThird values (20,'Ediosn','DD')
select f.*,s.Sector from #myfirst f join #mySecond s on f.city = s.city
/*
id   city     Sector
500  Newyork  MK
100  Ediosn   HH
*/
I have doubts about two things:
1) How can I compare the city names between the first and second tables, eliminating a leading 'The ' (if there is one in the second table's city)?
2) After comparing first and second, if there is no match found in the second, I want to compare with the third table's values for those not found.
-- I tried the below to solve the first doubt; it is working, but I want to know any other ways to do it
select f.*,s.Sector from #myfirst f join #mySecond s on replace (f.city, 'THE ','')= replace (s.city, 'THE ','')
--Expected results will be
create table #ExpectResults (id int, city varchar(20),Sector varchar(2))
insert into #ExpectResults values (200,'Atlanta','JK')
insert into #ExpectResults values (100,'Ediosn','HH')
insert into #ExpectResults values (300,'Greenwoods','DF')
insert into #ExpectResults values (500,'Newyork','MK')
insert into #ExpectResults values (700, 'Walmart','PP')
insert into #ExpectResults values (800, 'Madidar','')
[code]....
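As for the second question, a hedged sketch: left-join to #mySecond first (ignoring a leading 'The ' the same way as above), then fall back to #myThird only when no sector was found:
SELECT f.id, f.city,
       COALESCE(s.Sector, t.Sector, '') AS Sector
FROM #myfirst f
LEFT JOIN #mySecond s
       ON REPLACE(f.city, 'The ', '') = REPLACE(s.city, 'The ', '')
LEFT JOIN #myThird t
       ON f.city = t.city
ORDER BY f.city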
View 1 Replies
View Related
Sep 14, 2005
I have a string and I need to know where it came from in a database. I don't want to spend time coding this, so is there a ready-made script which takes a string as a parameter, searches all the tables which contain varchar type columns, and indicates which tables contain that string? Full text search is not enabled.
--
Tony
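In case it helps, a hedged sketch of a brute-force version of such a script: it scans every char/varchar column of every base table in the current database for the value in @SearchString (assumed to contain no single quotes), so it can be slow on large databases:
DECLARE @SearchString varchar(200), @sql nvarchar(4000), @tbl nvarchar(300), @col sysname
SET @SearchString = 'value to find'

DECLARE col_cursor CURSOR FOR
    SELECT c.TABLE_SCHEMA + '.' + c.TABLE_NAME, c.COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS c
    JOIN INFORMATION_SCHEMA.TABLES t
      ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
    WHERE t.TABLE_TYPE = 'BASE TABLE'
      AND c.DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar')

OPEN col_cursor
FETCH NEXT FROM col_cursor INTO @tbl, @col
WHILE @@FETCH_STATUS = 0
BEGIN
    -- print table.column whenever the string is found in that column
    SET @sql = N'IF EXISTS (SELECT 1 FROM ' + @tbl + ' WHERE [' + @col + '] LIKE ''%' + @SearchString + '%'') PRINT ''' + @tbl + '.' + @col + ''''
    EXEC (@sql)
    FETCH NEXT FROM col_cursor INTO @tbl, @col
END
CLOSE col_cursor
DEALLOCATE col_cursor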
View 1 Replies
View Related
Sep 1, 2006
I am trying to interpret some of the results I observe when trying to match similar records using a fuzzy lookup transform, but it's not entirely clear how the overall row similarity score is calculated. In particular, sometimes rows with lower individual column similarity scores will achieve a higher similarity and confidence score than a matching row with higher individual column scores.
The transform is configured with 6 text fields set to fuzzy mapping and a minimum similarity of 0, and 3 additional numeric fields with an exact mapping. It is set to return a maximum of 2 matches per lookup and to do an exhaustive search of the reference table.
For example, from the following matching pair of records, Match 1 is picked over Match 2 even though its individual scores are lower.
Match 1 Match 2
----------------- -----------------
_similarity_author 1.0 1.0
_similarity_title 0.85344648 1.0
_similarity_headline 0.0125 0.0125
_similarity_summary 0.0125 0.0125
_similarity_picture 1.0 1.0
_similarity_caption 1.0 1.0
_similarity 7.8429267E-2 7.3196657E-2
_confidence 0.55728668 0.44271332
In another case both matching records have *identical* scores for every mapped column and yet their similarity and confidence scores are different.
Clearly there are other factors involved in calculating the overall row score. Anybody know what these are?
Fernando Tubio
View 2 Replies
View Related
Aug 5, 2015
declare @var varchar(8000)
set @var='Name1~50~20~50@Name2~25.5~50~63@Name3~30~80~43@Name4~60~80~23'
---------------------
Create table #tmp(id int identity(1,1),Name varchar(20),Value1 float,Value2 float,Value3 float)
Insert into #tmp (Name,Value1,Value2,Value3)
Values ('Name1',50,20,50 ), ('Name2',25.5,50,63 ), ('Name3',30,80,43 ), ('Name4',60,80,23)
select * from #tmp
I want to convert @var into the same shape as the #tmp table above:
"@" - delimiter goes to rows
"~" - delimiter goes to columns
View 6 Replies
View Related
Jul 23, 2005
Hi, I am writing a SP in MSSQL. One of the parameters is VARCHAR type and holds comma-separated INT values. I want to use this variable in a WHERE clause against an INT type column. How do I achieve this?
e.g.
DECLARE @CompanyTypeIDs VARCHAR(200)
SET @CompanyTypeIDs = '1,3,2'
Currently I am using it as:
SELECT CompanyTypeID FROM Company WHERE CompanyTypeID IN (1,2,3)
CompanyTypeID is of INT type. But I want to replace the constants with the parameter passed, as below:
SELECT CompanyTypeID FROM Company WHERE CompanyTypeID IN (@CompanyTypeIDs)
- but this is not giving any result. Please let me know the solution.
Many thanks,
Mahesh
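Two common hedged workarounds are dynamic SQL, or a string-containment test that avoids parsing the list at all; a sketch of the latter (it assumes the list has no spaces around the commas):
SELECT CompanyTypeID
FROM Company
WHERE CHARINDEX(',' + CAST(CompanyTypeID AS varchar(10)) + ',',
                ',' + @CompanyTypeIDs + ',') > 0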
View 10 Replies
View Related