I have a table where I have the following fields in datasheet view:
id | date | image | question1 | question2 | question..N
I would like to have it in this way:
id | date | image | questionnr | answer
1 | 01-01-2004 | test.tif | 1 | 1000 (this is the value of field question1)
As you can see, the first 3 fields remain the same, but the values of those 3 fields should be repeated for each of the question fields. I can do this with a union query for each question field, but.....
How can I do this automatically? There are more than 500 columns.
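One way to get it done automatically is to generate the union query from the column list instead of typing 500 SELECTs. A minimal sketch in Python (the table name and fixed columns are taken from the post; adjust to the real schema):

```python
# Build a UNION ALL "unpivot" statement from a list of question columns,
# so it does not have to be written by hand for 500+ fields.
def build_unpivot_sql(table, fixed_cols, question_cols):
    selects = []
    for i, col in enumerate(question_cols, start=1):
        fixed = ", ".join(fixed_cols)
        selects.append(
            f"SELECT {fixed}, {i} AS questionnr, {col} AS answer FROM {table}"
        )
    return "\nUNION ALL\n".join(selects)

sql = build_unpivot_sql(
    "myTable",
    ["id", "date", "image"],
    [f"question{n}" for n in range(1, 4)],  # would be 1..500 in practice
)
print(sql)
```

The same generation idea works from `syscolumns`/`INFORMATION_SCHEMA.COLUMNS`, so the column list never has to be maintained by hand.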
I am using the Lookup transform to match records. If my source records match the lookup records it works fine; if not, it throws the following error:
[Lookup [34]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "Lookup" (34)" failed because error code 0xC020901E occurred, and the error row disposition on "output "Lookup Output" (36)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
I am importing data from Excel to a SQL table using a simple DTS. At times the DTS fails because one of the columns in the Excel file may have an invalid time/date entry. Sometimes the time will be an invalid negative number and will cause an overflow error during import into the SQL table column.
Is there a way to capture the data before writing it to the table and validate it, and if it is invalid, or more specifically a negative number, enter a default value or a null value?
If there is, could you be specific about how to set up the DTS transformation script?
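The validation rule itself is simple whichever transformation script hosts it: test the value before assigning it to the destination column, and substitute a default when it is missing, non-numeric, or negative. A language-neutral sketch of that rule (the choice of default is an assumption; the post leaves it open between a default value and null):

```python
def clean_time_value(raw, default=None):
    """Return a usable value, or the default when the entry is
    missing, non-numeric, or a negative number (the overflow case)."""
    if raw is None:
        return default
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return default
    if value < 0:
        return default
    return value

print(clean_time_value("-3"))   # invalid negative -> default (None here)
print(clean_time_value("8.5"))  # valid -> 8.5
```

In a DTS ActiveX transformation the equivalent check would sit in the per-row script, assigning either `DTSSource(...)` or the default to `DTSDestination(...)`.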
Hi friends, can somebody tell me how to do this: how can we analyze the existing code used to transform data into the Operations Data Warehouse, and make changes to correspond to upcoming changes in the SAP data sources? Thanks
My requirement is to check whether the value of a particular column is null or not. If it is null, I have to enter warning messages into the temp table I have created.
For this I am using a Script transform.
Now I want to know how to write info from the Script transform to a table using SSIS.
Currently I am using the following code in the script component:
[Code]
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
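The usual pattern is to do the null check per row inside `Input0_ProcessInputRow`, collect the warnings, and write them out through an output attached to an OLE DB Destination (or via an ADO.NET connection in `PostExecute`). A language-neutral sketch of the per-row logic, with the column name and message text invented for illustration:

```python
# Sketch: per-row null check that accumulates warning rows for a temp table.
warnings = []  # each entry becomes one row in the warning table

def process_row(row_id, column_name, value):
    """Mimics Input0_ProcessInputRow: record a warning when the value is null."""
    if value is None:
        warnings.append((row_id, f"Column '{column_name}' is null"))

for rid, val in [(1, "ok"), (2, None), (3, None)]:
    process_row(rid, "SomeColumn", val)

print(warnings)
```

In the VB.NET script component the same shape applies: test `Row.SomeColumn_IsNull` and route the warning row to a second output.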
I am creating packages from a template package which I have built. I have managed to implement basically everything successfully (setting properties on all the different tasks, connections, etc.) except for the OLE DB Command transform.
I have not been successful in getting to the properties or collections which allow me to do the mapping of the command to parameters (command below). I am aware that the command executes for every row. I really need help with how to do the mapping between the columns and the parameters programmatically in C#.
I have set the SQL command properties of the OLE DB Command transform as below.
I'm new to SQL Server 2005. I used the TRANSFORM query in Access to display the data I had stored in columns as rows. I want to do something similar in SQL Server 2005, but it doesn't let me. I have used the same query here, but it gives an error.
This is the query I was running in Access and it was working:
TRANSFORM Max(Schools.Expense) AS MaxOfExpense SELECT Schools.[FiscalYear], Max(Schools.[FIPS]) AS [Total Of FIPS] FROM Schools GROUP BY Schools.[FiscalYear], Schools.[FIPS] PIVOT Schools.[DataID];
Is there some other syntax for SQL Server, or does it not support this command at all?
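SQL Server 2005 does not accept Access's TRANSFORM syntax, but it has its own PIVOT operator with the same intent. A sketch of what the translation might look like (the DataID values in the IN list must be enumerated explicitly in T-SQL, and the values 1, 2, 3 here are invented for illustration):

```python
# T-SQL shape of the Access TRANSFORM query, as a string for reference.
tsql = """
SELECT FiscalYear, [1], [2], [3]
FROM (SELECT FiscalYear, DataID, Expense FROM Schools) AS src
PIVOT (MAX(Expense) FOR DataID IN ([1], [2], [3])) AS p;
"""
print(tsql)
```

The main practical difference from Access: T-SQL's PIVOT cannot discover the column values at runtime, so the IN list either gets hard-coded or built with dynamic SQL.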
I have a very simple problem I am trying to solve.
I have a table with a "DateEntered" field, and I have an SSIS package set up to load data from a file into the database table. I just want to make sure that no one loads the same file twice in one day.
For example, if today is 8/22/07, and "DateEntered" is "2007-08-22", then I want to add a Lookup transform to run a query that will check and see if there are any rows in the table with a "DateEntered" of "2007-08-22". If so, don't load the file again!
Here's my query:
SELECT Code FROM myTable WHERE DATEADD(dd, DATEDIFF(dd, 0, DateEntered), 0) = DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)
(All the DATEADD stuff does is remove the time portion from the DateEntered field, so we are comparing apples to apples.)
Now, if the query returns a bunch of "Codes", then we know that the data has already been entered for the day! So far, so good.
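As a side note, the DATEADD/DATEDIFF pair in that query simply floors a datetime to midnight, so the comparison amounts to "same calendar day"; in plainer terms:

```python
from datetime import datetime

entered = datetime(2007, 8, 22, 14, 35)   # DateEntered with a time portion
today = datetime(2007, 8, 22, 9, 0)       # GETDATE() at load time

# Floor both to the day and compare, exactly what the T-SQL trick does.
print(entered.date() == today.date())                        # True: loaded today
print(datetime(2007, 8, 23, 0, 1).date() == today.date())    # False: next day
```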
Now, how do I set up the Lookup to get it to work? I'm getting this error message: Error 1 Validation error. Data Flow Task: Lookup [1299]: The lookup transform must contain at least one input column joined to a reference column, and none were specified. You must specify at least one join column. FXRateLoader.dtsx 0 0
But I thought I did this! On the Columns tab, I have: Lookup column: code; Lookup operation: Replace 'code'; Output alias: code.
I have my error output set to: Lookup output - redirect row
Hi! I am a newbie, grateful for some help. I have an OLE DB source with a SQL command selecting customer.salary and customer.occupation, which I want to match with demo_id in the OLE DB destination. salary and occupation are also in dim_demographic. But in the Lookup editor I find no column demo_id... how do I do this?
I am in the process of creating a simple managed stored procedure using C# and VS2005. The goal is to transform an XML document with an XSLT file that may contain a very simple script.
At this stage my stored procedure code is extremely simple.
[Microsoft.SqlServer.Server.SqlProcedure]
public static void Manufacture(string xsltFile, string xmlFile)
{
    XslCompiledTransform xslt = new XslCompiledTransform(false);
    PermissionSet ps1 = new PermissionSet(PermissionState.Unrestricted);
    XmlSecureResolver resolver = new XmlSecureResolver(new XmlUrlResolver(), ps1);
    xslt.Load(xsltFile, XsltSettings.TrustedXslt, resolver);
    ....more code....
    xslt.Transform(xmlFile, xslArg, ms);
}
If I execute the SP above and the script exists in the XSLT file, I get the following error upon loading the XSLT file. If I remove the script, it transforms perfectly.
A .NET Framework error occurred during execution of user defined routine or aggregate 'Manufacture': System.Security.SecurityException: Request failed. System.Security.SecurityException: at System.Xml.Xsl.Xslt.Scripts.CompileClass(ScriptClass script) at System.Xml.Xsl.Xslt.Scripts.CompileScripts() at System.Xml.Xsl.Xslt.QilGenerator.Compile(Compiler compiler) at System.Xml.Xsl.Xslt.QilGenerator.CompileStylesheet(Compiler compiler) at System.Xml.Xsl.Xslt.Compiler.Compile(Object stylesheet, XmlResolver xmlResolver, QilExpression& qil) at System.Xml.Xsl.XslCompiledTransform.CompileToQil(Object stylesheet, XsltSettings settings, XmlResolver stylesheetResolver) at System.Xml.Xsl.XslCompiledTransform.LoadInternal(Object stylesheet, XsltSettings settings, XmlResolver stylesheetResolver) at System.Xml.Xsl.XslCompiledTransform.Load(String stylesheetUri, XsltSettings settings, XmlResolver stylesheetResolver)
I have tried messing around with CAS and giving virtually everything full trust. Nothing resolves the issue.
I have an ETL with a Lookup transform that gets a rate from a SpotRates table.
The problem is when a matching date doesn't exist in the SpotRates table...
For those records I need to look up the next date...
For example...
SpotRate Table
Date Currency Rate
05-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2262
06-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2312
07-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2179
10-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2099
11-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2105
12-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2125
13-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2094
18-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2252
19-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2346
20-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2346
21-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2315
24-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2365
25-04-2006 0:00 DOLAR ESTADOS UNIDOS 1,2425
When I first try to look up the date 17-04-2006, it doesn't give me any records... so I need a second lookup for the next date after 17-04-2006, which in this example is 18-04-2006. How can I do it??
I made a SQL query that gives me the next date using 2 parameters... but I'm having some errors...
SELECT TOP 1 Data FROM Spot_Rates WHERE (Currencies_Name = ?) AND (Data > CONVERT(DATETIME, ?, 102)) ORDER BY Data ASC
In this example, the parameters returned from lookup1 are:
Currencies_name= 'DOLAR ESTADOS UNIDOS'
DATE='17-04-2006'
I need to create a second Lookup transform to return the next date/currency for each row that didn't match in the first lookup...
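One pitfall with a next-date query: ordering descending returns the farthest future date, so the TOP 1 has to be taken ascending. The shape of the query is easy to check; the sketch below uses SQLite, where `LIMIT 1` stands in for SQL Server's `TOP 1` and the dates are stored as ISO strings (table and sample values taken from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Spot_Rates (Data TEXT, Currencies_Name TEXT, Rate REAL)")
conn.executemany(
    "INSERT INTO Spot_Rates VALUES (?, ?, ?)",
    [
        ("2006-04-13", "DOLAR ESTADOS UNIDOS", 1.2094),
        ("2006-04-18", "DOLAR ESTADOS UNIDOS", 1.2252),
        ("2006-04-19", "DOLAR ESTADOS UNIDOS", 1.2346),
    ],
)

# Next available date strictly after the missing one (17-04-2006):
nxt = conn.execute(
    """SELECT Data FROM Spot_Rates
       WHERE Currencies_Name = ? AND Data > ?
       ORDER BY Data ASC LIMIT 1""",
    ("DOLAR ESTADOS UNIDOS", "2006-04-17"),
).fetchone()
print(nxt[0])  # 2006-04-18
```

In the package itself this query would back the second Lookup (or an OLE DB Command) fed by the error output of the first Lookup.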
I want to do something relatively simple with SSIS but can't find an easy way to do it (isn't that always the case with SSIS?)
I have a column, let's say called iorg_id, and I want to look up the matching rows for this column in a table. In that table iorg_id may have several potential matching rows, and there is another column called 'Amount'. For each iorg_id I want to retrieve only the matching row with the largest value in the 'Amount' column.
I couldn't find a way to do this all in the Lookup transform. I can match the iorg_ids and retrieve the Amount column, but can't find a way to retrieve just the matching row with the largest value in the Amount column. The only way I can think of is to run the output from the transform through an Aggregate transform and determine the max (although I haven't tested this yet).
It seems strange to me that the SQL in the Advanced tab gives me something like: select * from (select * from [dbo].[Table1]) as refTable where [refTable].[iorg_id] = ?
where I believe the first 'select *' retrieves all the columns listed in the Lookup Columns list on the Columns tab. I thought I would be able to amend this to something like: select max(amount) from (select * from [dbo].[Table1]) as refTable where [refTable].[iorg_id] = ?
but I get a metadata type error.
So, my questions are: Is it possible to do this all in the Lookup transform, or do I have to use the Aggregate transform as I think? And why is it not possible to amend the SQL in the Advanced tab to manipulate the returned data?
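One way around the single-row limitation is to point the Lookup's reference at a query that pre-aggregates the table, so each iorg_id appears exactly once with its largest Amount; the Lookup then joins against that. A sketch of the reference-side query, checked in SQLite (table and column names from the post, sample values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (iorg_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO Table1 VALUES (?, ?)",
    [(1, 50.0), (1, 200.0), (2, 75.0)],
)

# Collapse the reference table to one row per key before the lookup joins to it.
result = conn.execute(
    """SELECT iorg_id, MAX(amount) AS amount
       FROM Table1
       GROUP BY iorg_id
       ORDER BY iorg_id"""
).fetchall()
print(result)  # [(1, 200.0), (2, 75.0)]
```

Using a query like this in the Lookup's "Use results of an SQL query" option avoids the metadata error, because the reference result set then has a fixed, one-row-per-key shape.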
I need to transform the following layout, hopefully using the Pivot transform, but am confused about the editor... I have a compound primary key that I want to keep intact, but the values in the row need to be broken out into their own rows.
I need to go from this...
PKcol1 PKcol2 PKcol3 col4 col5 col6 col7
A 2007 1 Y N N N
A 2007 2 Y Y N N
A 2007 3 N N N Y
into this....
A 2007 1 col4 Y
A 2007 1 col5 N
A 2007 1 col6 N
A 2007 1 col7 N
A 2007 2 col4 Y
A 2007 2 col5 Y
A 2007 2 col6 N
A 2007 2 col7 N
A 2007 3 col4 N
A 2007 3 col5 N
A 2007 3 col6 N
A 2007 3 col7 Y
Can I do this using the pivot transform? Any suggestions?
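Strictly speaking, this is the Unpivot transform's job rather than Pivot's (Pivot goes the other direction, rows to columns). The row expansion Unpivot performs is easy to state; a sketch with the sample rows from the post:

```python
# The unpivot operation: each col4..col7 value becomes its own row,
# keyed by the intact compound primary key.
source = [
    ("A", 2007, 1, "Y", "N", "N", "N"),
    ("A", 2007, 2, "Y", "Y", "N", "N"),
    ("A", 2007, 3, "N", "N", "N", "Y"),
]
value_cols = ["col4", "col5", "col6", "col7"]

output = []
for pk1, pk2, pk3, *values in source:
    for name, value in zip(value_cols, values):
        output.append((pk1, pk2, pk3, name, value))

for row in output:
    print(row)
```

In the Unpivot editor this corresponds to marking the three key columns as pass-through and col4..col7 as the columns to unpivot, with the column name going into a "pivot key value" column.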
I am trying to find the equivalent of MS Access's TRANSFORM and PIVOT in T-SQL. I've tried using GROUP BY with CUBE, and I can't seem to get the data in the correct format. Can someone help? And please explain things to me like I am an idiot, because I am.
Here is the current table and the desired results that I want.
Current Table
Month Type Subtype TotalTime
1 TaskA SubA 5
1 TaskA SubB 10
1 TaskA SubC 8
1 TaskB SubX 5
2 TaskA SubA 4
2 TaskB SubX 5
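The usual T-SQL substitute for Access's TRANSFORM/PIVOT is conditional aggregation: one CASE expression per output column, which GROUP BY with CUBE will never produce. Assuming the desired result is one row per Month with a column per Type (an assumption, since the target layout didn't survive in the post), the shape looks like this, checked in SQLite with the table above (the same SELECT runs unchanged on SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Tasks (Month INTEGER, Type TEXT, Subtype TEXT, TotalTime INTEGER)"
)
conn.executemany(
    "INSERT INTO Tasks VALUES (?, ?, ?, ?)",
    [
        (1, "TaskA", "SubA", 5), (1, "TaskA", "SubB", 10), (1, "TaskA", "SubC", 8),
        (1, "TaskB", "SubX", 5), (2, "TaskA", "SubA", 4), (2, "TaskB", "SubX", 5),
    ],
)

# One CASE per pivoted column: the crosstab is built by the aggregates.
result = conn.execute(
    """SELECT Month,
              SUM(CASE WHEN Type = 'TaskA' THEN TotalTime ELSE 0 END) AS TaskA,
              SUM(CASE WHEN Type = 'TaskB' THEN TotalTime ELSE 0 END) AS TaskB
       FROM Tasks GROUP BY Month ORDER BY Month"""
).fetchall()
print(result)  # [(1, 23, 5), (2, 4, 5)]
```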
On my MS SQL Server 2000, I am trying to create a generic way to load tables into my data warehouse.
I have as input to the process a large number of table definitions stored individually as files on my server, and ASCII delimited data files in various locations, mostly accessible via NFS mounts.
I created two DTS packages in MSSQL2K that in theory represent what I want to do:
package1 ... invoke package2 with global variables to load a system of related tables
package2
... check for a trigger file
... set the "Execute SQL Task" statement to my first file
... run the "Execute SQL Task", which drops/adds a table
... set a "Connection" to a data source file that I want to use
... run the transformation and, with that, my package starts to fall apart
... set the "Execute SQL Task" statement to the next file, and
... go back and execute it
I can't figure out how to set the table in the transformation section to the table I want to use. And I assume the transformation links between the source and the new table then have to be relinked.
The source files contain in the first row the column names as found in the tables I just created.
Hi all, new to SSIS, so please bear with me on the noobie question:
Situation: I have a SQL database with several tables; each table has several char fields that represent dates (e.g. YYYYMMDDHHMMSSMS). This SQL database is created weekly from an extract of an old Oracle RDB database maintained by a third-party vendor.
I need to copy the data to a new database and tables. Then for each table: 1. check each char date column and, if the value is '1858111700000000' (the Oracle dummy date), change it to the SQL low date; if it's not, transform the date into SQL Server date format. I've tried some of the data controls; I just need to know which ones to use and in what order.
What would be the best controls to do iterative processing in an efficient manner? Some tables have up to 5 million rows.
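Whichever components end up doing the iteration (a Derived Column per date field, or one Script Component looping over the columns), the per-value conversion rule can be stated on its own. A sketch, where SQL Server's "low date" is taken as 1753-01-01, the minimum of the datetime type, and the dummy value comes from the post:

```python
from datetime import datetime

ORACLE_DUMMY = "1858111700000000"
SQL_LOW_DATE = datetime(1753, 1, 1)  # minimum for SQL Server's datetime type

def convert_char_date(raw):
    """Turn a YYYYMMDDHHMMSSMS char field into a datetime,
    mapping the Oracle dummy value to the SQL low date."""
    if raw == ORACLE_DUMMY:
        return SQL_LOW_DATE
    # first 14 chars are YYYYMMDDHHMMSS; the trailing digits are fractional
    return datetime.strptime(raw[:14], "%Y%m%d%H%M%S")

print(convert_char_date("1858111700000000"))  # 1753-01-01 00:00:00
print(convert_char_date("2007082214350000"))  # 2007-08-22 14:35:00
```

For 5-million-row tables, a set-based UPDATE with a CASE expression after a fast bulk load is usually cheaper than converting row by row in the pipeline.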
Hi, this is easy with OLAP tools, but I need to do it just with MS SQL Server:

factTable
year type val
97 a 1
97 b 2
97 c 3
98 a 4
98 b 5
98 c 6
....

desired result:
year type_a type_b type_c
97 1 2 3
98 4 5 6
99 ...

The problem is the number of different types: not just 3 like a, b, c but more than 100, so I don't want to do it manually like

select year, a.val, b.val, c.val
from (select year, val from factTable where type='a') a
full join (select year, val from factTable where type='b') b on a.year = b.year
full join (select year, val from factTable where type='c') c on a.year = c.year

Is it possible somehow with DTS or otherwise? I just need to present the data in a spreadsheet in more readable form, but I cannot find any way to export the result from MS SQL Server OLAP Services to Excel...
Martin
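With over 100 types the column list has to be generated rather than typed. One approach that needs no OLAP Services is to read the distinct types first and build a conditional-aggregation statement from them; the result set can then go to Excel through DTS or a plain export. Sketched and checked in SQLite below; the same idea works against MS SQL Server via dynamic SQL or a small script that builds the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE factTable (year INTEGER, type TEXT, val INTEGER)")
conn.executemany(
    "INSERT INTO factTable VALUES (?, ?, ?)",
    [(97, "a", 1), (97, "b", 2), (97, "c", 3),
     (98, "a", 4), (98, "b", 5), (98, "c", 6)],
)

# Build one MAX(CASE ...) column per distinct type, however many there are.
types = [t for (t,) in conn.execute("SELECT DISTINCT type FROM factTable ORDER BY type")]
cols = ",\n".join(
    f"MAX(CASE WHEN type = '{t}' THEN val END) AS type_{t}" for t in types
)
sql = f"SELECT year,\n{cols}\nFROM factTable GROUP BY year ORDER BY year"

result = conn.execute(sql).fetchall()
print(result)  # [(97, 1, 2, 3), (98, 4, 5, 6)]
```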
I am trying to digest this logic, and have been unsuccessful so far. I am designing a package for incremental loads, but the destination table has a composite primary key on 2 columns, one of which is nullable. The source data comes from a sproc. Up till now, I have been banging my head trying to get this logic to work via the Lookup transform with a Conditional Split, but it doesn't work. Am I on the right track, or should I be using the SCD Wizard?
As a side note, I have been trying to work out a solution using Andy Leonard's blog post on doing incremental loads: http://sqlblog.com/blogs/andy_leonard/archive/2007/07/09/ssis-design-pattern-incremental-loads.aspx
I would like a component that has the following functionality (if this already exists, let me know):
Pretty much, I want a multiple-input merge join transform that joins X number of inputs as long as they have the same sorted key value. I only care about inner joins.
The reason for this component is that I have a data flow that has to perform a LOT of lookups. Doing them one after another is incredibly slow, so I thought I could multicast and perform several at once, which works fine, but merging the army of split inputs is messy with multiple Sort and Merge Join transforms. I can live with it this way, but a single transform to collect all the inputs would look a lot nicer and leave a lot less to configure.
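For what it's worth, the logic such a component would implement is a k-way merge over sorted inputs: advance whichever inputs sit below the largest current key, and emit a row only when every input agrees on the key (the inner-join case). A sketch, assuming keys are unique within each input:

```python
def merge_join_all(*sorted_inputs):
    """Inner-join any number of inputs, each a list of (key, value) pairs
    sorted ascending by unique key; yields (key, [values...]) for keys
    present in every input."""
    iters = [iter(s) for s in sorted_inputs]
    currents = [next(it, None) for it in iters]
    while all(c is not None for c in currents):
        keys = [c[0] for c in currents]
        top = max(keys)
        if all(k == top for k in keys):
            yield top, [c[1] for c in currents]
            currents = [next(it, None) for it in iters]
        else:
            # advance every input still below the largest current key
            currents = [
                next(iters[i], None) if keys[i] < top else currents[i]
                for i in range(len(currents))
            ]

a = [(1, "a1"), (2, "a2"), (4, "a4")]
b = [(2, "b2"), (3, "b3"), (4, "b4")]
c = [(2, "c2"), (4, "c4"), (5, "c5")]
print(list(merge_join_all(a, b, c)))  # [(2, ['a2', 'b2', 'c2']), (4, ['a4', 'b4', 'c4'])]
```

This streams all inputs in a single pass, which is exactly why one N-input transform would beat a cascade of pairwise Merge Joins.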
IF (IsNull( DTSSource("LastName") ) ) and (IsNull( DTSSource("FirstName") ) ) THEN IF Not (IsNull( DTSSource("PersonCode") ) ) then DTSDestination("Surname") = DTSSource("PersonCode") Else DTSDestination("Surname") ="Satff Name not set" END IF END IF
DTSDestination("StatusRef") = 1 ELSE DTSDestination("StatusRef") = 2 END IF
Main = DTSTransformStat_OK
End Function
Thus as you can see I want to query the value of a source field and then put what I need in the destination coloumn - how do I go about doing this in SQL 2005, I've had a look at trying to do this in a dataflow - script task but can't seem to get the destination columns to appear like the source columns do? any ideas
Good day all, I have an interesting situation that I cannot believe is unique. I have a flat file (ragged right) that contains 5 different record types. Each row in the file identifies the record type in the first character. The layout is something like this:
File Header
Group Header (contains group id number)
Data Item (contains group id number)
. . .
Group Footer (DOES NOT CONTAIN GROUP ID NUMBER)
Group Header (contains group id number)
Data Item (contains group id number)
. . .
Group Footer (DOES NOT CONTAIN GROUP ID NUMBER)
File Footer
Now I only want to extract data for ONE of the aforementioned groups; however, I need the group footer as well, because it contains control totals for the group. The real problem is that the footer does not contain the group id number it goes with. It is a completely positional thing. Silly, yes, I know, but this particular file layout is an industry standard.
I thought the Conditional Split would be the way to go. Unfortunately, it seems the conditional split wants to split the entire data set before passing the results downstream, rather than processing a single row at a time and passing that row downstream before processing the next one. (Blocking versus streaming, I think it's called.) I could do it in a single god-awful script, but I would rather not have to code the entire thing.
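Since the footer's association is purely positional, a single pass that carries the last seen group id is enough, and it works row by row; this is the small piece a Script Component could do ahead of (or instead of) the conditional split. A sketch with invented record-type tags and a fixed-position group id, since the real layout wasn't given:

```python
# Each line starts with a record-type character: H=file header, G=group header,
# D=data item, F=group footer, E=file footer (tags invented for illustration).
def extract_group(lines, wanted_group):
    current_group = None
    out = []
    for line in lines:
        rtype = line[0]
        if rtype == "G":
            current_group = line[1:4]          # group id in a fixed position
        if rtype in ("G", "D", "F") and current_group == wanted_group:
            out.append(line)                   # footer inherits the carried id
    return out

lines = ["HFILE", "G001", "D001x", "Ftot1", "G002", "D002y", "Ftot2", "EEND"]
print(extract_group(lines, "002"))  # ['G002', 'D002y', 'Ftot2']
```

The carried `current_group` is what stamps the otherwise anonymous footer with the group it closes.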
I have to load a table with aggregate data. I can do it using the Aggregate transformation, but besides the GROUP BY I need to filter the input records with a WHERE condition.
I have an issue where, based upon a value in a column, I need to do some processing of the previous and current rows. The data flow is also already sorted. I tried creating a script data flow transformation to do this, but it isn't working right and debugging it sucks. Would anyone know the best way to do this, or some helpful pointers? I tried "firing" information to help debug, but that doesn't help when the error message I get back is a stack overflow.
An example of what I'm trying to do is process the sorted incoming rows for each person. Each person can have multiple rows. Based upon a "status" column in each row, do some different processing on the previous or current row. Some pseudocode:
if prev.PersonID = current.PersonID
    if status = 1
        change prev.PersonDate to today + 60 days
    if status = 2
        change current.PersonDate to prev.PersonDate
        change prev.PersonDate to today + 1 day
else
    send rows to output
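One way to make this debuggable outside the script component is to keep the pair logic in a plain function over (previous, current) rows and feed it the sorted data. A sketch following the pseudocode above; the field names come from it, and the sample dates are illustrative:

```python
from datetime import date, timedelta

def adjust_pairs(rows, today):
    """rows: list of dicts sorted by PersonID, each with PersonID, Status,
    PersonDate. Applies the status rules to each previous/current pair."""
    prev = None
    for current in rows:
        if prev is not None and prev["PersonID"] == current["PersonID"]:
            if current["Status"] == 1:
                prev["PersonDate"] = today + timedelta(days=60)
            elif current["Status"] == 2:
                current["PersonDate"] = prev["PersonDate"]
                prev["PersonDate"] = today + timedelta(days=1)
        prev = current
    return rows

today = date(2007, 8, 22)
rows = [
    {"PersonID": 7, "Status": 0, "PersonDate": date(2007, 1, 1)},
    {"PersonID": 7, "Status": 2, "PersonDate": date(2007, 6, 1)},
]
out = adjust_pairs(rows, today)
print(out[0]["PersonDate"], out[1]["PersonDate"])  # 2007-08-23 2007-01-01
```

Inside the SSIS script component the same shape applies: hold the previous row's values in member variables across `Input0_ProcessInputRow` calls instead of recursing, which is a common cause of the stack overflow mentioned above.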
Any comments or suggestions or helpful advice/critique would be MUCH appreciated!
I'm new to VB.NET, so I would like some help with extracting data from a database and loading it into another database using VB.NET code. Thanks for helping, guys.
I have two input columns (both DT_I4) in the column collection of an Aggregate transform. I am doing a Group By on one column and a Count on the other.
To my surprise, the output column's datatype for the Count is changed (to DT_UI8), and I have to add an extra Data Conversion transform to get my DT_I4 datatype back.