Design This Flow

May 9, 2008

Hi

I am an SSIS newbie and need help designing this flow.

Source: SQL Server 2005

Select records from [InstanceA].[DatabaseA].[TableA] based on a complex SQL statement.



Target: SQL Server 2005

Insert the records into another table, [InstanceB].[DatabaseB].[TableB],
but only those records which are not already present in TableB.

Take the records from [InstanceB].[DatabaseB].[TableB]
and insert them into [InstanceB].[DatabaseB].[TableC],
but only those records which are not already in TableC.

And finally

Take the records from [InstanceB].[DatabaseB].[TableB],
join them with [InstanceB].[DatabaseB].[TableC],
and insert the result into [InstanceB].[DatabaseB].[TableD],
but only those records which are not already in TableD.



Can somebody please help me visualise this solution?


I am having problems populating a target and then using that populated target as a source for subsequent targets.
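
For what it's worth, each "insert only what is missing" step can be expressed as a set-based statement along the lines of the sketch below (the key/column names are placeholders, not your real schema). Step 1 crosses instances, so in SSIS it would typically be a data flow: OLE DB Source on InstanceA, a Lookup against TableB with non-matching rows sent to an OLE DB Destination. Steps 2 and 3 run entirely inside DatabaseB on InstanceB, so they can simply be Execute SQL Tasks chained after the data flow on the control flow, which also avoids the "populated target as a source" problem.

-- Step 1: TableA -> TableB, only rows not already in TableB (placeholder key: id)
INSERT INTO DatabaseB.dbo.TableB (id, col1, col2)
SELECT a.id, a.col1, a.col2
FROM DatabaseA.dbo.TableA AS a          -- the "complex sql statement" goes here
WHERE NOT EXISTS (SELECT 1 FROM DatabaseB.dbo.TableB AS b WHERE b.id = a.id)

-- Step 2: TableB -> TableC, only rows not already in TableC
INSERT INTO DatabaseB.dbo.TableC (id, col1, col2)
SELECT b.id, b.col1, b.col2
FROM DatabaseB.dbo.TableB AS b
WHERE NOT EXISTS (SELECT 1 FROM DatabaseB.dbo.TableC AS c WHERE c.id = b.id)

-- Step 3: TableB joined to TableC -> TableD, only rows not already in TableD
INSERT INTO DatabaseB.dbo.TableD (id, col1, col2)
SELECT b.id, b.col1, c.col2
FROM DatabaseB.dbo.TableB AS b
JOIN DatabaseB.dbo.TableC AS c ON c.id = b.id
WHERE NOT EXISTS (SELECT 1 FROM DatabaseB.dbo.TableD AS d WHERE d.id = b.id)

Because steps 2 and 3 read from and write to the same database, running them as Execute SQL Tasks chained after the first data flow keeps each target fully populated before it is used as a source.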


regards
Hrishy

View 1 Replies



Need Help To Design Data Flow Task

Nov 14, 2007

Hi friends,

I am new to SSIS. I need some help designing the data flow task below.


-- A teacher creates several tasks and each task is assigned to multiple students
-- The teacher table contains all the tasks created by every teacher
use ods
go
create table teacher
(
yr int,
tid int,
tname varchar(20),
taskid varchar(10)
)

insert into teacher values(2007,101,'suraj','task1')
insert into teacher values(2007,101,'suraj','task2')
insert into teacher values(2007,102,'bharat','task3')

insert into teacher values(2007,103,'paul','task4')
insert into teacher values(2007,103,'paul','task5')
insert into teacher values(2007,103,'paul','task6')


-- Teacher "suraj" has created 2 tasks
-- Teacher "bharat" has created 1 task

select * from ods..teacher

yr    tid  tname   taskid
=========================
2007  101  suraj   task1
2007  101  suraj   task2
2007  102  bharat  task3

-- The students table contains studentid (sid), teacherid (i.e. tid) and taskid
drop table students

create table students
(
yr int,
sid varchar(10),
tid int,
taskid varchar(10)
)

truncate table students

insert into students values(2007,'stud1',101,'task1')
insert into students values(2007,'stud1',101,'task2')

insert into students values(2007,'stud2',101,'task1')
insert into students values(2007,'stud2',101,'task2')

--Note : stud1 and stud2 come under the teacher with tid "101"



insert into students values(2007,'stud3',102,'task3')

-- Note : stud3 comes under the teacher with tid "102"; stud4 and stud5 come under tid "103"

insert into students values(2007,'stud4',103,'task4')
insert into students values(2007,'stud4',103,'task5')
insert into students values(2007,'stud4',103,'task6')

insert into students values(2007,'stud5',103,'task4')

select * from students
yr sid tid taskid
----------------------------
2007 stud1 101 task1
2007 stud1 101 task2

2007 stud2 101 task1
2007 stud2 101 task2

2007 stud3 102 task3
2007 stud4 103 task4
2007 stud4 103 task5
2007 stud4 103 task6
2007 stud5 103 task4


Now I need to load the data into my target table in such a way that:

use targetdb
go
drop table trg
go

create table trg
(
yr int, -- loaded from teacher.yr
tid int,
taskid int,
cnt int
)

Mapping of target columns to the values to be loaded
=====================================================
yr -- teacher.yr
tid -- teacher.tid
taskid -- this needs to start a new sequence of numbers from 1 for each teacher; I don't want the original task id copied as it is.
cnt -- the count of students in the "students" table for the given teacher and task.

For example, for teacher "101" and taskid "task1" there are 2 students,
and for the same teacher "101" and taskid "task2" there are also 2 students.

For teacher "102" and taskid "task3" there is only 1 student.

Similarly for teacher "103".


Relation
========

Teacher table | Students Table
yr | yr
tid | tid


After I run the ETL, the data should look as follows:

insert into trg values(2007,101,1,2)
insert into trg values(2007,101,2,2)

insert into trg values(2007,102,1,1)

insert into trg values(2007,103,1,2) -- task4 is created by teacher "103" and assigned to 2 students, stud4 and stud5
insert into trg values(2007,103,2,1) -- task5 is created by teacher "103" and assigned to 1 student, i.e. stud4
insert into trg values(2007,103,3,1) -- task6 is created by teacher "103" and assigned to 1 student, i.e. stud4

Note : If you observe the values in the 3rd column of the trg table, instead of directly mapping the taskid we need to generate a separate sequence for every teacher.

Bottom line: for each and every task created by each teacher there should be one record, along with the count of students for that task in the "students" table.
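
For reference, the whole result can be produced by one set-based query, assuming SQL Server 2005's ROW_NUMBER() is available and using the tables defined above; in SSIS this could simply be the query of an OLE DB Source feeding the trg destination:

SELECT t.yr,
       t.tid,
       ROW_NUMBER() OVER (PARTITION BY t.yr, t.tid ORDER BY t.taskid) AS taskid,
       COUNT(s.sid) AS cnt
FROM ods..teacher AS t
LEFT JOIN ods..students AS s
       ON s.yr = t.yr
      AND s.tid = t.tid
      AND s.taskid = t.taskid
GROUP BY t.yr, t.tid, t.taskid

The ROW_NUMBER() restarts the task sequence at 1 for every teacher, and the LEFT JOIN keeps tasks with no assigned students (they come out with cnt = 0).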


Can anyone help me design the Data Flow task for this functionality?



Thanks,
Manu

View 10 Replies View Related

Control Flow Design Question

Jun 13, 2006

hi all of you,
I have no idea whether the following description is an issue or not, but anyway:

I begin from the Control Flow layer.

I've created a sequence container and inside it I've got two groups, one owning a SQL task and the other owning a Data Flow task. Both are linked by a completion connector. Up to here everything is fine. But when I collapse my sequence container, the arrow for those tasks remains, so you can see the sequence container "closed" and the arrow on its own.

Not very aesthetic, and not practical.

Any clarification or thought will, as usual, be welcome.

View 6 Replies View Related

How To Pass Parameter From Control Flow To Data Flow

Feb 14, 2006

Hi, All,

I need to pass a parameter from the control flow to the data flow. The data flow will use this parameter to get data from an Oracle source.

I have an Execute SQL task in the control flow to assign a value to the parameter; the next step is a data flow which needs to take that parameter in the SQL statement that queries the Oracle source.

The SQL Looks like this:

select * from ccst_acctsys_account

where to_char(LAST_MODIFIED_DATE, 'YYYYMMDD') > ?

The problem is the OLE DB Source editor doesn't have anything for mapping the parameter.

Thanks in Advance





View 2 Replies View Related

HELP: How Do I Pass Variables From Control Flow To Data Flow

Mar 9, 2007

I have an Execute SQL Task that returns a Full Rowset from a SQL Server table and assigns it to a variable objRecs. I connect that to a foreach container with an ADO enumerator using objRecs variable and Rows in first table mode. I defined variables and mapped them to the columns.

I tested this by placing a Script task inside the foreach container and displaying the variables in a messagebox.

Now, for each row, I want to write a record to an MS Access table and then update a column back in the original SQL Server table from which I retrieved the data in the Execute SQL task (I have the primary key). If I drop a Data Flow Task inside my foreach container, how do I pass the variables as input to an OLE DB Destination on the Data Flow?

Also, how would I update the original source table where source.id = objRecs.id?
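
For the write-back, one option is an OLE DB Command transformation in the data flow (or an Execute SQL Task inside the foreach loop) running a parameterised statement like the sketch below; the table and column names are placeholders, and the ? parameter is mapped to the pipeline column or package variable holding the key:

-- placeholder names; map the ? parameter to the id column/variable
UPDATE dbo.SourceTable
SET exported_to_access = 1
WHERE id = ?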

Thank you for your assistance. I have spent the day trying to figure this out (and thought it would be simple), but I am just not getting SSIS. Sorry if this has been covered.

Thanks,

Steve

View 17 Replies View Related

Handle Tasks In Control Flow Tab From Data Flow Tab

Jan 17, 2008

Dear All!
My package has a Data Flow Task. In the Data Flow Task, I use a Script Component and an OLE DB Destination to transform data from a txt file into the database.
Within the Data Flow Task, I want to call a File System Task to move the file to a folder, or to call any other task from the Control Flow tab. Does SSIS support this? Please show me how, if there is a way.
Thanks

View 3 Replies View Related

SSIS Variables Between Data Flow And Control Flow... How To????

May 17, 2007

Hi everyone,

Primary platform is 64 bit cluster.

How can we move information stored in SSIS variables from the Data Flow back to the Control Flow layer?

We've got an SSIS package which loads a value into a variable inside a Data Flow. Back in the Control Flow, how can we retrieve that value again?

Thanks in advance and regards,

View 4 Replies View Related

Is There A Way To Set A Variable In A Data Flow From A SQL Statement (like In Control Flow)

Jan 12, 2006

I'm currently setting variables at the package level with an ExecuteSQL task.  This works fine.  However, I'm now starting to think about restartability midway through a package.  It would be nice to have the variable(s) needed in a data flow set within the data flow so that I only have to restart that task. 

Is there a way to do that using an SQL statement as the source of the value in a data flow? 

OR, when using checkpoints will it save variable settings so that they are available when the package is restarted?  This would make my issue a moot point.

View 2 Replies View Related

Please Advise: Big Control Flow Or Big Data Flow

Jul 22, 2007

Hi all! I recently started working with SSIS and one of the things that is puzzling me the most is what's the best way to go:



A small control flow with large data flow tasks, or
a control flow with more, but smaller, data flow tasks?

Any help will be greatly appreciated.
Thanks,
Ricardo

View 7 Replies View Related

Reuse Existing Data Flow Components In A Custom Data Flow Component

Aug 29, 2007

Hello,

Is it possible to use existing data flow components (Merge Join, aggregation,...) in a custom data flow component?

Thanks,

Yoann

View 15 Replies View Related

DB Design :: Database Design For Matrix Representation

May 13, 2015

I have a scenario like below

          Product1  Product2  Product3  Product4  Product5
Product1      1         1         0         0         1
Product2      1         1         0         0         1
Product3      0         0         1         1         0
Product4      0         0         1         1         0
Product5      1         1         0         0         1

How should I design tables in SQL Server for the above?
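
One possible normalised design, sketched with hypothetical names, stores each product once and each pairwise relationship as a row instead of as a matrix column; the original matrix can be pivoted back out in a query or report when needed:

CREATE TABLE Product
(
    ProductId   int         NOT NULL PRIMARY KEY,
    ProductName varchar(50) NOT NULL
)

CREATE TABLE ProductRelation
(
    ProductIdA int NOT NULL REFERENCES Product (ProductId),
    ProductIdB int NOT NULL REFERENCES Product (ProductId),
    IsRelated  bit NOT NULL,
    PRIMARY KEY (ProductIdA, ProductIdB)
)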

View 2 Replies View Related

Database Design/query Design

Feb 13, 2002

Ok, I'm doing a football database for fixtures and stuff. The problem I am having is that in a fixture there is both a home and an away team. The tables as a result are something like this:

-------
Fixture
-------
fix_id
fix_date
fix_played

----
Team
----
tem_id
tem_name

-----------
TeamFixture
-----------
fix_id
tem_id
homeorawayteam
goals

It's not exactly like that, but you get the point. The question is: can I do a fixture query which results in one record per fixture, showing both teams' details, the first in a home team field and the second in an away team field?

Fixture contains the details about the fixture like date and fixture id and has it been played

Team contains team info like team id, name, associated graphic

TeamFixture is the table which links the fixture to its home and away teams.

TeamFixture exists to prevent a many to many type relationship.

Make sense? Sorry if this turns out to be really easy, just can't get my head around it at the mo!
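
A sketch of one way to get one row per fixture, assuming homeorawayteam holds something like 'H'/'A' (adjust to the real values and column names):

SELECT f.fix_id,
       f.fix_date,
       ht.tem_name  AS home_team,
       hf.goals     AS home_goals,
       awt.tem_name AS away_team,
       af.goals     AS away_goals
FROM Fixture AS f
JOIN TeamFixture AS hf ON hf.fix_id = f.fix_id AND hf.homeorawayteam = 'H'
JOIN Team AS ht        ON ht.tem_id = hf.tem_id
JOIN TeamFixture AS af ON af.fix_id = f.fix_id AND af.homeorawayteam = 'A'
JOIN Team AS awt       ON awt.tem_id = af.tem_id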

View 2 Replies View Related

DB Design :: Table Design For Packages

Aug 18, 2015

I would like to create a table called product. My objective is to get the list of packages available for each product in a data grid view column while selecting each product. Each product may have different package types (e.g. Nos, CTN, OTR etc.). Some products may have two packages and some three, etc. The quantity in each package may also differ (e.g. for some products a CTN may contain 12 nos, in other cases 8 nos, etc.). Prices for each package will also be different, and those also need to be shown. How should I design the table?

Product name : Nestle milk    | Rainbow milk
Packages     : CTN, OTR, Nos  | CTN, Nos
Price        : 50, 20, 5      | 40, 6

(Remarks for your reference: CTN = 10 nos, OTR = 4 nos | CTN = 8 nos)
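
A sketch of one possible design (the names are hypothetical): a Product table plus a ProductPackage table with one row per product/package combination, holding the units per package and the price:

CREATE TABLE Product
(
    ProductId   int         NOT NULL PRIMARY KEY,
    ProductName varchar(50) NOT NULL
)

CREATE TABLE ProductPackage
(
    ProductId    int           NOT NULL REFERENCES Product (ProductId),
    PackageType  varchar(10)   NOT NULL,  -- e.g. 'NOS', 'CTN', 'OTR'
    UnitsPerPack int           NOT NULL,  -- e.g. CTN = 10 nos for one product, 8 for another
    Price        decimal(10,2) NOT NULL,
    PRIMARY KEY (ProductId, PackageType)
)

The grid view for a product is then just a select of its ProductPackage rows.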

View 3 Replies View Related

Lookup Task Data Flow Transformation Causes Data Flow Task To Hang?

Dec 28, 2007

Hi,
I'm trying to implement an incremental data pull (Oracle to SQL) based on Andy's blog:
http://sqlblog.com/blogs/andy_leonard/archive/2007/07/09/ssis-design-pattern-incremental-loads.aspx

My development machine is decent: 1.86 GHz, Intel core 2 CPU, 3 GB of RAM.
However it seems the data flow task gets hung whenever I test the package against the ~6 million row source, as can be seen from these screenshots. I have no memory limitations on the lookup transformation. After the rows have been cached nothing happens. Memory for the dtsdebug process hovers around 1.8 GB and it uses 1-6 percent of CPU resources continuously. I am not using fast load to insert new records into my sql target table. (I am right clicking Sequence Container 3 and executing this container NOT the entire package in the screenshots)

http://i248.photobucket.com/albums/gg168/boston_sql92/1.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/2.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/3.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/4.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/5.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/6.jpg


The same package works fine against a similar test table with 150k rows.
http://i248.photobucket.com/albums/gg168/boston_sql92/7.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/8.jpg

The weird thing is it only takes 24 minutes for a full refresh of the entire source table from Oracle to the SQL target table.
Any hints,advice would be appreciated.

View 18 Replies View Related

Need To Save A Value From Data Flow A To Use In Data Flow B

Mar 20, 2007

Good morning, all,

I am working on importing an Excel workbook, saved as multiple CSV flat files, that has both group-level data and related detail rows on the same sheet. I have been able to import the group data into a table. As part of the Data Flow task, I want to be able to save the key value for the group, which I will use when I insert the detail rows.

My Data Flow has the following components: The flat file with the data, which goes to a derived column transformation to strip out extraneous dashes, which leads to the OLEDB Destination component.

I want to save the value as a package level variable, so that I can reference it in another dataflow.

Is this possible, and if so, at what point do I save the value?

Thanks,
Kathryn

View 1 Replies View Related

Table Flow

Oct 3, 2006

Premnath writes: "How do I find the order of the tables in my database? I need to populate them from the beginning. As I have 600 tables in my database, how do I find out the order?"
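
If "order" means foreign-key dependency order (parents loaded before children), one starting point on SQL Server 2005 and later is to list the relationships from the catalog views; this is only a sketch and does not by itself resolve multi-level or circular dependencies:

SELECT OBJECT_NAME(fk.parent_object_id)     AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys AS fk
ORDER BY parent_table, child_table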

View 5 Replies View Related

Data Flow

Apr 9, 2007

I am designing a data flow architecture and need some suggestions.

What are the advantages of loading data from flat files into SQL Server?

View 3 Replies View Related

Help With Data Flow

Jun 12, 2007

I would like to use Integration Services to update data in my data warehouse. I have a table called "AgentStats" that stores archived data from the past 3 years. I would like to import the current year's data from the production server into the same table in my data warehouse and have my ETL update only the current year's data on a daily basis. The current year's data is constantly updated in the data source, so I archive the data at the end of the year. Any ideas how I can accomplish this?
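
One simple pattern, assuming the table has a year column (the name below is only a placeholder), is to delete the current year's rows each night and let the data flow re-insert them from a source query filtered to the current year, leaving the archived years untouched:

-- Execute SQL Task, run before the data flow; stat_year is a placeholder column name
DELETE FROM dbo.AgentStats
WHERE stat_year = YEAR(GETDATE())
-- the data flow then re-loads only the current year's rows from the production source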



Thank You

-Sam

View 4 Replies View Related


Data Flow

Dec 4, 2007

Is there a way to extract data from a source and insert it directly into the destination, without an intermediate step like storing the data in a text file or staging table? I am trying to use the data flow task: Source -> Destination. The data access mode for both the source and the destination is "from a variable name".

Thanks in advance.

Andrew

View 14 Replies View Related

About Control Flow

Apr 20, 2008

I am new to SSIS. Can anyone tell me the difference between the control flow and the data flow? If all the transformations are done using the data flow, then why do we use the control flow? Sorry if I am asking a very basic question.

View 5 Replies View Related

Control Of Flow Question

Oct 23, 2003

I would like to be able to have a "while" statement execute until all of the rows in a temporary table have been processed.

I know I can do this with counters.

However, I was wondering if anyone knows of a SQL equivalent of
"WHILE NOT EOF" as in "do this until you hit the end of the table".

Thanks in advance for any input.

View 5 Replies View Related

Data Flow Problem

Oct 9, 2007

Hi,

I have an xml datasource and an oledb destination. I want to insert data from an xml file into a sql server table. The xml source configurations are set properly (i.e., I have an xsd file and the data source correctly identifies the output column). The oledb destination properties seem set properly (it recognizes the input column and maps it to the correct output column for the sql table). When I run the package, no rows get written to the table (with no errors either).

Help! I do have data in that xml file; I don't know why it doesn't write any rows.

View 1 Replies View Related

Flow Layout In Rs2005

Jan 10, 2007

I have a report that needs to show postal addresses. The address is broken down into several fields. The problem I have is that some of the address parts are optional. If they are empty, I'm left with nasty gaps in the address. I'd really like the next label to reclaim the space of any empty labels.

a quick example

A full address would look like this..

customer name
address line 1
address line 2
town
county
post code

if address line 2 isn't given, I get:

customer name
address line 1

town
county
post code

but I want:

customer name
address line 1
town
county
post code

Can anyone help?
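
One workaround, if the layout allows it, is to build the whole address block in the dataset query and bind it to a single can-grow textbox, so empty parts contribute nothing; a sketch with placeholder column names (NULLIF turns empty strings into NULLs, which then drop out of the concatenation):

SELECT customer_name + CHAR(13) + CHAR(10)
     + ISNULL(NULLIF(address_line1, '') + CHAR(13) + CHAR(10), '')
     + ISNULL(NULLIF(address_line2, '') + CHAR(13) + CHAR(10), '')
     + ISNULL(NULLIF(town, '')          + CHAR(13) + CHAR(10), '')
     + ISNULL(NULLIF(county, '')        + CHAR(13) + CHAR(10), '')
     + ISNULL(post_code, '') AS postal_address
FROM dbo.Customer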

Many thanks,

Paul

View 4 Replies View Related

Flow Control In SSIS

Aug 22, 2007

I am having a hard time with what appears to be something simple. I want to import an Excel spreadsheet into a table on a daily basis from a command line. I created a package with the Import Wizard in SQL Management Studio and saved it. Since I want a clean table each day, my process needs to be: create a temp table, import from the Excel file into the temp table, and if that is successful, delete the original table and rename the temp table to the original name. The point of this process is to provide a fail-safe if there is some unforeseen problem downloading the data on a particular day.

When I run the package, the first thing it does is delete the original table. I know this because the process shows that the time it finished is before anything else has started or finished. The time shown for the completion of the data flow task is about 2 minutes after that.

This is maddening!!! The one thing I do not want to happen, I cannot seem to prevent. I have my control flow precedence constraint set on Success. Why does it do this?

View 3 Replies View Related

Data Flow Split

Aug 13, 2007

I'm trying to figure out how in a Data Flow Transform I can split some data.

I have data coming through that has a PK column (col1) and a datetime column (col2).
This data may contain multiple rows for each value of col1.
I want to take the row with the minimum datetime for each value of col1 and send it down one path, and all other rows down another path. E.g. there are 20 rows coming through the Data Flow: col1 has value 1 for the first 10 rows and value 2 for the remaining 10 rows, and col2 has a different value in every row. Take the minimum col2 where col1 = 1 and the minimum col2 where col1 = 2 and send those rows down one path; send all other rows down another path.

I thought of using the Conditional Split transform, but my expression knowledge isn't strong enough yet (I'm looking into this) and I'm not sure if it can be done that way.

I also thought of using the Multicast transform and then the Aggregate transform on each copy of the data. The first output is easy:

SELECT col1, MIN(col2)
FROM tbl1
GROUP BY col1

which can be done in the Aggregate transform. But I can't see how the other side of the data flow can be done with the Aggregate transform:

SELECT col1
FROM tbl1
WHERE col2 NOT IN (SELECT MIN(col2) FROM tbl1 GROUP BY col1)

Is either method feasible, or is there another way?
I want to avoid putting this data into temp tables in a SQL database and manipulating the data from there. The data has been extracted from a flat file source.
Any help and ideas welcome.
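
For reference, the split described above is the set-based equivalent of the sketch below; inside the pipeline (with no staging tables) the same effect can usually be achieved with Multicast -> Aggregate (MIN(col2) per col1) -> Merge Join back onto the detail rows -> Conditional Split on whether col2 equals that minimum:

-- rn = 1 would go down the "minimum col2 per col1" path, rn > 1 down the other path
SELECT col1,
       col2,
       ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) AS rn
FROM tbl1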

View 13 Replies View Related

SELECT INTO Using A Data-flow

Sep 13, 2005

Hi,

View 18 Replies View Related

SQL Batch In Data Flow

May 9, 2008

I am learning SSIS and am trying to figure out how to run a SQL batch that returns a result set and export it to Excel using a Data Flow. I am using an OLE DB Source with the SQL batch shown below, and the destination is an Excel file.

How is this done?

So far I have had no luck getting the tasks to run. I need more than just simple queries.

The SQL is below:



SET NOCOUNT ON

-- enumerate every user table (views are skipped); the batch returns one row per table with its row count
DECLARE Tables CURSOR FAST_FORWARD FOR
SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE != 'VIEW'

DECLARE @RowCount INT

DECLARE @TableSchema NVARCHAR(128)
DECLARE @TableName NVARCHAR(128)
DECLARE @SqlCmd NVARCHAR(1024)

CREATE TABLE #temp ([Name] NVARCHAR(128), [RowCount] INT)

OPEN Tables
FETCH NEXT FROM Tables INTO @TableSchema, @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
SET @SqlCmd = 'SELECT ''[' + @TableSchema + '].[' + @TableName + ']'', COUNT(*) FROM [' + @TableSchema + '].[' + @TableName + ']'
PRINT @SqlCmd
INSERT #temp EXEC sp_executesql @SqlCmd
FETCH NEXT FROM Tables INTO @TableSchema, @TableName
END

SELECT * FROM #temp

CLOSE Tables
DEALLOCATE Tables

DROP TABLE #temp

View 4 Replies View Related

Retrieve Value Within A Data Flow

Aug 29, 2006

hi,

I am doing a data flow to insert data into a dimension table (A) and a master surrogate key table (B). The master surrogate key table (B) is inserted into prior to the dimension table (A). Dimension table (A)'s key depends on the last key in the master surrogate key table (B). The data flow starts from a flat file and checks whether the key is found in the dimension table. If it is not found, a record is inserted into B, and then the new record is generated for A with the max id from B.

My question is: how do I retrieve the max id from the master surrogate key table (B) in a data flow and use it as an input to my dimension table?
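
One common pattern is to fetch the current maximum before the data flow starts: an Execute SQL Task with a single-row result set mapped to a package variable, which a Derived Column then adds to the pipeline; a sketch of the query with placeholder table and column names:

SELECT MAX(surrogate_key) AS MaxKey
FROM dbo.MasterSurrogateKey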

View 1 Replies View Related

Data Flow Errors

Jul 21, 2006

I have a package that archives some old data. It ran fine last month, but today it's failing with the following error messages:




DTS_E_BUFFERGETTEMPFILENAME
The buffer manager could not get a temporary file name. The call to GetTempFileName failed.

DTS_E_UNUSABLETEMPORARYPATH
The buffer manager could not create a temporary file on the path "__". The path will not be considered for temporary storage again.

DTS_E_CANTCREATEBLOBFILE
The buffer manager cannot create a file to spool a long object on the directories named in the BLOBTempStoragePath property. Either an incorrect file name was provided, or there are no permissions.



Any ideas on how to fix this?

Thanks!!!

MarkA



View 2 Replies View Related

Data Flow Question

Feb 14, 2006



I need to process magazine subscription records in the following way:

I have a table containing records for subscriptions that haven't started yet. In some cases, a customer can have multiple subscriptions that haven't started. I have to look up the current subscription's expire issue number and add 1 to it; this becomes the future subscription's start issue number. For the cases where there are multiple future subscriptions, I was going to:

- look up the start issue number for the first future subscription found,

- then compute the expire issue number by adding the number of issues purchased to the start issue number.

I was then going to put the expire issue number in a table. For the following future subscriptions, I would refer to this table to get the previous expire issue number.

My question is: how can I ensure that the expire issue number is added to the table before the next future subscription is processed?

If you have any questions or suggestions on a better way to do this, please let me know.



Thanks.

Frank



View 3 Replies View Related

Data Flow Is Locking Up

Jan 10, 2006

I have a situation where we periodically maintain an updated version of some data. I select the fresh data and match it on an alternate key to the existing data, determine whether it's a new or changed record, perform some transformations, and then end with either adding (OLE DB Destination) or updating (OLE DB Command) the table. It's basically like using a Slowly Changing Dimension but much faster. Anyway, on one of the more complicated tables the data flow is getting locked up between the initial source select and the destination (because the table is referenced in both). I've fiddled with the isolation levels, transactions and the destination table lock, but nothing will prevent the lockup.

Is there 1) a better way to structure a data flow when you are pulling data from a table you will ultimately be using as a destination, to prevent this lockup (without using the SCD), or 2) something that can be done to prevent the data flow lockup?

View 5 Replies View Related

Data Flow Task

Dec 4, 2007


Hi

I have a data flow task. If it completes, I should update a flag in the database. How can I know whether the data flow task has completed or not?
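
One simple approach is to connect an Execute SQL Task to the Data Flow Task with a Success precedence constraint, so it only runs after the data flow finishes cleanly; the statement could be something like the sketch below (table and column names are placeholders):

UPDATE dbo.LoadControl
SET load_complete = 1,
    completed_at = GETDATE()
WHERE load_name = 'MyDataFlowLoad'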

Thanks

Sai

View 4 Replies View Related






