Refreshing Metadata Of Transformations In The Data Flow
Jan 10, 2007
Hello,
I created a slowly changing dimension object and used an OLE DB Source object with a SQL Command to feed it. After all the SCD objects are created, I get a warning about a truncation which might happen.
I adjust the source query with the right converts, so the query uses data types and lengths which match the target table. I update the OLE DB Source with the new query. Column names are unchanged.
Now how do I get this data flow to reflect the new column lengths? Do I have to throw away all the objects and recreate the SCD? The refresh button in the SCD object doesn't do it. This is also a problem when adding columns to your dimension table. Because I modify the objects that the SCD wizard generates, it's VERY tedious to always have to throw it all away and start over again.
Thnx. Jeroen.
View 7 Replies
Jan 2, 2007
Hi Folks,
I'm currently looking into SSIS, to establish whether or not it can improve on an existing stored procedure.
We have an sp that performs standard ETL functions: it extracts new (or newly updated) data out of A, transforms it, and then loads it into B. It runs as part of a scheduled job and takes approx. 60 seconds to complete. Fine. But we want it to go faster, and this is where (we hope) SSIS comes in...
I'm approaching this area of SQL Server 2005 for the first time, and have been looking towards the data flow task and its transformations to provide such an equivalent, faster solution. Before I continue down this road, however, I would welcome some peer feedback/comment on whether SSIS - and its data flow transformations - are indeed the best tools to leverage when looking to accomplish such an ETL function, and quickly.
I guess the fundamental question here is: 'Why transforms over script?' I am reading Brian Knight's book, and I'd like to quote a few passages:
'...the nicest thing about transforms in SSIS is that it is all done in-memory and it no longer requires elaborate scripting as in SQL Server 2000 DTS...'
I guess this means that it doesn't need to be compiled/interpreted, which I suppose all DML does?
'...one of the overriding themes of SSIS is that you shouldn't have to write any code to create your transformation...'
Is this because writing code is considered a more complex task than creating + configuring a transformation, or is it (at least in part) because a transformation is necessarily going to be quicker than its DML equivalent?
Hope some of you guys can respond with some interesting thoughts.
Cheers,
Tamim Sadikali.
View 3 Replies
Mar 5, 2008
Hi, hope someone can help or point me down the right track.
I have to load 50+ tables (all with different file layouts) using a data reader (the source only allows an ODBC connection) into a SQL Server 2005 database. Rather than create 50+ ETL packages that will be identical in process terms, is it possible to create a single package that will dynamically re-map the source-to-destination mappings? I know I can set the source and destination using expressions, but how do I ensure that the mappings are updated, and for the errors, can I automate things so that they re-direct? Is this possible using a script task, or by some other means?
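For what it's worth, the shape of such a metadata-driven copy can be sketched outside SSIS. The point is that one routine serves every table because the column mapping is discovered from the source at run time, with failed rows collected for redirection. SQLite stands in for the ODBC source and the SQL Server destination here, and all table names are invented for illustration:

```python
# Sketch: one generic copy routine driven by a list of table names, so the
# same "package" handles many differently-shaped tables. Column mappings are
# rebuilt per table from the source cursor metadata instead of being
# hard-coded per destination. All names below are made up for the demo.
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL);
CREATE TABLE customers (id INTEGER, name TEXT, city TEXT);
INSERT INTO orders VALUES (1, 9.5), (2, 12.0);
INSERT INTO customers VALUES (1, 'Ann', 'Oslo');
""")

def copy_table(table):
    cur = src.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]          # mapping discovered at run time
    dst.execute(f"CREATE TABLE {table} ({', '.join(cols)})")
    placeholders = ", ".join("?" for _ in cols)
    good, errors = 0, []
    for row in cur.fetchall():
        try:
            dst.execute(f"INSERT INTO {table} VALUES ({placeholders})", row)
            good += 1
        except sqlite3.Error as e:                  # stand-in for an error redirect
            errors.append((table, row, str(e)))
    return good, errors

results = {t: copy_table(t) for t in ["orders", "customers"]}
print(results)  # {'orders': (2, []), 'customers': (1, [])}
```

In SSIS terms the equivalent still needs per-table column metadata at design time, which is why this is usually done with a script task or programmatically generated packages rather than a single static data flow.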
Many thanks for any help
View 6 Replies
Jun 18, 2007
I have a data flow task with a single source and a single destination. The source task creates a table from a variable expression, and the destination table is also created from a variable expression. I'm running this under 3 scenarios, each of which has a different source and destination table. They differ in name but are close in structure, with the exception of one column. The metadata for the source flow path seems to be "sticky", in that it does not modify the source table structure in the flow to account for this different column. I'm not sure how to adjust this. Any ideas? I've modified several properties in the task and data flow, but nothing seems to make this adjustment at run-time.
View 1 Replies
Oct 1, 2014
I am tasked with truncating and reloading tables from one server to another. Company policy prevents cross-server queries, but allows SSIS packages with cross-server connections. I am doing this for about 25 tables. I have the table names in a single table, and I have created a Foreach Loop to execute tasks against each table one by one. It works fine for truncating all the tables. I run into issues, though, with the Data Flow Task. I'm able to tell it which server and table to dynamically connect from and to, but it doesn't know how to map the metadata, even though they're the exact same columns and field names in both source and destination.
View 9 Replies
Aug 29, 2007
Hello,
Is it possible to use existing data flow components (Merge Join, aggregation,...) in a custom data flow component?
Thanks,
Yoann
View 15 Replies
Jul 30, 2007
Right, so I'm currently trudging through the SQL video tutorials and such, so it may be that I get to this sooner or later, but as I'm under a deadline, I thought I'd post this question beforehand so I can use that info with what I'm learning now.
Here's my situation: I have an ASP.NET 2.0 site in which I currently use XML files to display the text on the page, and I transform that text using an XSL stylesheet. I want to move that data to a database, but I'm not sure what is the best way to do that.
Basically, what I'm most concerned with is storing the main text (paragraphs with embedded hyperlinks). Currently, I can get the XSL to pick out the links and transform them from simple XML data to live links when they display on the page, but would I be able to do the same if I were pulling these paragraphs out of a database? Or should I just store the XML data in the database, and still pull that out so I can transform it appropriately with the XSL sheet I already have? (For that matter, can I dynamically write XML content to a database? Or am I just better off keeping my XML files?) What's the best approach for something like this?
Thanks for the help!
View 4 Replies
Jun 18, 2008
I am using SQL 2005 SSIS. I need to do a data conversion for a date field in a txt file. I used the Import Wizard to bring my txt file into SQL 2005, but it didn't convert the date. The date is displayed in the flat file as 20070612. Can someone help me convert the date? I did add an OLE DB Source to the Data Flow screen and selected SQL command - what do I do next, and what do I write?
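For reference, a yyyymmdd string such as 20070612 converts in T-SQL with CONVERT(datetime, date_col, 112), since style 112 is the ISO yyyymmdd format, so one option is to do the conversion in the source SQL command itself (or in an SSIS Data Conversion / Derived Column transform). The fixed-position slicing that such a conversion performs looks like this; the function name and the sample value here are illustrative only:

```python
# Sketch of the yyyymmdd parse that CONVERT(datetime, col, 112) performs
# in T-SQL: slice fixed positions 2007 / 06 / 12 into year, month, day.
from datetime import date

def parse_yyyymmdd(raw: str) -> date:
    # positions 0-3 = year, 4-5 = month, 6-7 = day
    return date(int(raw[0:4]), int(raw[4:6]), int(raw[6:8]))

print(parse_yyyymmdd("20070612"))  # 2007-06-12
```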
View 9 Replies
Jul 20, 2005
Hello all.
I am trying to write a query that "just" switches some data around so it is shown in a slightly different format. I am already able to do what I want in Oracle 8i, but I am having trouble making it work in SQL Server 2000. I am not a database newbie, but I can't seem to figure this one out, so I am turning to the newsgroup. I am thinking that some of the SQL gurus out there have done this very thing a thousand times before and the answer will be obvious to them.
This message is pretty long but hopefully gives you enough information to replicate the issue. There are 3 tables involved in my scenario. Potentially a lot more in the real application, but I'm trying to keep this example as simple as possible.
In my database I have many "things". Let's call them "User Records" (table: users) for this example. My app allows the customer to create any number of custom "Extra Fields" (XF's) for a given User Record. The Extra Field definitions are stored in a table which we can call attribs. The actual XF values for a given user record are stored in a third table, let's call it users_attribs.
users_attribs will look something like this (actual DDL below):
UserID | ExtraFieldID | Value
--------------------------------------
User_1 | XF_1 | ham
User_1 | XF_2 | eggs
User_2 | XF_1 | bacon
User_2 | XF_2 | cheese
User_3 | XF_2 | onions
The end result is that I want a SQL query that returns something like this:
UserID | XF_1 | XF_2
-------------------------------------
User_1 | ham | eggs
User_2 | bacon | cheese
User_3 | NULL | onions
Potentially there would be one column for each extra field definition. One interesting question is how to get a dynamic number of columns to show up in the results (so new XF's show up automatically), but I'm not worried about that for now. Assume I will hard-code a specific set of extra fields into my query.
The key here is that all users must show up in the final result EVEN IF they don't have some extra field value defined. Since User_3 in the example above doesn't have an XF_1 record, we see a NULL in that column in the final result.
With Oracle I am able to accomplish this via an Outer Join, and I know SQL Server supports Outer Joins, but I can't seem to make it work. In every version I have tried so far, if any user is missing any extra field value, the entire row for the user goes "missing", and that is my problem.
It seems like one possible solution would be to just go ahead and populate the users_attribs table with a NULL value for that combination of user ID and extra field ID, basically adding a new row like this:
UserID | ExtraFieldID | Value
--------------------------------------
User_3 | XF_1 | NULL
I would like to avoid that if possible, for a number of reasons, particularly the question of *when* that NULL would be added. I don't want my report to touch the database and add stuff at reporting time if at all possible. In Oracle, I seemingly don't have to, and I want to get to that point on SQL Server.
So, here is some specific DDL to recreate this scenario:
CREATE TABLE users (user_id varchar(60), username varchar(60));
-- Extra Field (attribs) definitions
CREATE TABLE attribs (xf_id varchar(60), xf_name varchar(60));
-- Extra Field values for Users
CREATE TABLE users_attribs (user_id varchar(60), xf_id varchar(60), val varchar(60));
-- populate the sample tables
-- sample User recs
INSERT INTO users VALUES ('U_1', 'John Smith');
INSERT INTO users VALUES ('U_2', 'Mary Rogers');
-- sample extra field definitions
INSERT INTO attribs VALUES ('XF_1', 'Extra Field 1');
INSERT INTO attribs VALUES ('XF_2', 'Extra Field 2');
INSERT INTO attribs VALUES ('XF_3', 'Extra Field 3');
-- sample values for User Extra Fields (XF's)
-- U_1 ("John Smith") has complete values for each XF
INSERT INTO users_attribs VALUES ('U_1', 'XF_1', 'XF_1 value for U_1');
INSERT INTO users_attribs VALUES ('U_1', 'XF_2', 'XF_2 value for U_1');
INSERT INTO users_attribs VALUES ('U_1', 'XF_3', 'XF_3 value for U_1');
-- U_2 ("Mary Rogers") only has one value, missing the other two.
INSERT INTO users_attribs VALUES ('U_2', 'XF_2', 'XF_2 value for U_2');
Now, I can get what I want on Oracle, provided that I define a new view that joins the three tables together, then do a separate query on that view that does an outer join. I could dispense with the view, but I don't want to hard-code the XF ID's into the query. I am fine with hardcoding the XF names, though. (Long story.)
-- Create a User Extra Field view that joins Users,
-- extra field definitions (attribs),
-- and values (users_attribs).
CREATE VIEW u_xf_view AS
SELECT u.user_id, at.xf_name, uxf.val
FROM
users u,
attribs at,
users_attribs uxf
WHERE
uxf.user_id = u.user_id AND
uxf.xf_id = at.xf_id
-- Oracle-only outer join syntax works if you use the view:
SELECT
u.username as "User Name",
uxf1.val as "Extra Field 1 Value",
uxf2.val as "Extra Field 2 Value",
uxf3.val as "Extra Field 3 Value"
FROM
users t,
u_xf_view uxf1,
u_xf_view uxf2,
u_xf_view uxf3
WHERE
uxf1.user_id(+) = t.user_id AND
uxf1.xf_name(+) = 'Extra Field 1' AND
uxf2.user_id(+) = t.user_id AND
uxf2.xf_name(+) = 'Extra Field 2' AND
uxf3.user_id(+) = t.user_id AND
uxf3.xf_name(+) = 'Extra Field 3';
-- RESULTS (correct):
User Name   | Extra Field 1 Value | Extra Field 2 Value | Extra Field 3 Value
-----------------------------------------------------------------------------
John Smith  | XF_1 value for U_1  | XF_2 value for U_1  | XF_3 value for U_1
Mary Rogers | NULL                | XF_2 value for U_2  | NULL
2 Row(s)
So far I have not been able to get the equivalent result in SQL Server. Like I said, I am really hoping to avoid populating those NULL values. Can anyone think of a way to replicate Oracle's behavior here? I have tried a number of variations on the ANSI join syntax instead of Oracle's (+) operator, but everything I tried so far has only yielded a row when ALL extra fields are populated (or even worse behavior).
I greatly appreciate any assistance you may be able to give. I would be happy to provide any additional information if I forgot to mention something important.
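For the record, the ANSI rewrite that reproduces the Oracle (+) behavior is to turn each u_xf_view reference into a LEFT OUTER JOIN, keeping the per-field filter inside the joined derived table (or the ON clause) rather than the WHERE clause: a WHERE filter on the outer side turns the outer join back into an inner join, which is exactly the "row goes missing" symptom described. A sketch using the post's schema, run against SQLite purely to show the join logic; it is not tested SQL Server 2000 code:

```python
# Demonstrates the ANSI LEFT OUTER JOIN equivalent of the Oracle (+) query:
# each extra-field lookup becomes a derived table filtered on xf_name, then
# LEFT OUTER JOINed to users, so users missing a value keep their row (NULL).
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE users (user_id TEXT, username TEXT);
CREATE TABLE attribs (xf_id TEXT, xf_name TEXT);
CREATE TABLE users_attribs (user_id TEXT, xf_id TEXT, val TEXT);
INSERT INTO users VALUES ('U_1', 'John Smith'), ('U_2', 'Mary Rogers');
INSERT INTO attribs VALUES ('XF_1', 'Extra Field 1'), ('XF_2', 'Extra Field 2');
INSERT INTO users_attribs VALUES ('U_1', 'XF_1', 'ham'), ('U_1', 'XF_2', 'eggs');
INSERT INTO users_attribs VALUES ('U_2', 'XF_2', 'cheese');
""")
rows = c.execute("""
SELECT u.username, xf1.val, xf2.val
FROM users u
LEFT OUTER JOIN (SELECT ua.user_id, ua.val
                 FROM users_attribs ua JOIN attribs a ON ua.xf_id = a.xf_id
                 WHERE a.xf_name = 'Extra Field 1') xf1 ON xf1.user_id = u.user_id
LEFT OUTER JOIN (SELECT ua.user_id, ua.val
                 FROM users_attribs ua JOIN attribs a ON ua.xf_id = a.xf_id
                 WHERE a.xf_name = 'Extra Field 2') xf2 ON xf2.user_id = u.user_id
ORDER BY u.user_id
""").fetchall()
print(rows)  # Mary Rogers keeps her row, with NULL for Extra Field 1
```

The same SELECT shape (derived tables plus LEFT OUTER JOIN) is valid T-SQL on SQL Server 2000, so no placeholder NULL rows need to be inserted.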
View 2 Replies
Sep 18, 2006
Hi, I'm new to SQL Express, but I have created a table and a stored proc to populate it. I have dragged the table into a form so I can view the data in a GridView. I have added a button to add new rows to the table. All the above works fine, except when I hit add, the data gets added, but the GridView doesn't update and show the new data. Is there some code I can add to the add button that also refreshes the GridView?
Thanks,
Mike
View 2 Replies
Nov 17, 2006
Hello everybody,
I wrote a stored procedure for SQL Server 2000 and I am using it for paging purposes.
The procedure is as follows :
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[OLAP_PagedRows]
(
@SelectFields nVarchar(2000) =NULL,
@GroupByFields nvarchar(1000) =NULL,
@BaseTable varchar(100),
@KeyColumn nvarchar(200)=NULL ,
@JoinTables varchar(500) =NULL,
@ConditionalClause varchar(1000) =NULL,
@Pagesize int = 10,
@PageNumber int =1,
@SortExpression nvarchar(200)=NULL,
@SearchText nvarchar(200)=NULL
)
AS
BEGIN
DECLARE @SQLSTMT NVarchar(4000)
DECLARE @SQLSTMT1 NVarchar(4000)
SET @SQLSTMT1 = ''
--check whether page size is given null or not, if so set to default value
IF @Pagesize IS NULL OR @Pagesize = ''
BEGIN
SET @Pagesize =10
END
--check whether page number is given null or not, if so set to default value
IF @PageNumber IS NULL OR @PageNumber = ''
BEGIN
SET @PageNumber =1
END
--Start constructing the query --
SET @SQLSTMT = 'SELECT '
SET @SQLSTMT1 = 'DECLARE @CountValue INT SELECT @CountValue = count(*) From '+@BaseTable
SET @SQLSTMT = @SQLSTMT + @SelectFields + ' FROM '+@BaseTable
If @JoinTables Is Not Null
BEGIN
SET @SQLSTMT = @SQLSTMT + ' ' +@JoinTables
SET @SQLSTMT1 = @SQLSTMT1 + ' ' +@JoinTables
END
DECLARE @StmtWhereClause nvarchar(500)
SET @StmtWhereClause =''
--------------------- Get where conditional clause
If (@SearchText Is Not Null AND RTRIM(LTRIM(@SearchText))<>'')
BEGIN
SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @SearchText
END
If @ConditionalClause Is Not Null AND RTRIM(LTRIM(@ConditionalClause))<>''
BEGIN
IF (@StmtWhereClause <> '')
BEGIN
SET @StmtWhereClause = @StmtWhereClause + ' AND ' + @ConditionalClause
END
ELSE
BEGIN
SET @StmtWhereClause = @StmtWhereClause + ' WHERE ' + @ConditionalClause
END
END
SET @SQLSTMT = @SQLSTMT + @StmtWhereClause
SET @SQLSTMT1 = @SQLSTMT1 + @StmtWhereClause
If @GroupByFields Is Not Null And RTRIM(LTRIM(@GroupByFields))<>''
BEGIN
SET @SQLSTMT = @SQLSTMT + ' Group By ' +@GroupByFields
SET @SQLSTMT1 = @SQLSTMT1 + ' Group By ' +@GroupByFields
END
IF @SortExpression Is Not Null AND RTRIM(LTRIM(@SortExpression))<>''
BEGIN
SET @SortExpression = LTRIM(RTRIM(' Order By '+ @SortExpression))
SET @SQLSTMT = @SQLSTMT +' '+ @SortExpression
SET @SQLSTMT1 = @SQLSTMT1 +' '+ @SortExpression
END
SET @SQLSTMT1= @SQLSTMT1+' SELECT @CountValue As MyRows '
--SELECT @SQLSTMT1
--SELECT @SQLSTMT
DECLARE @StartRow INT
SET @SQLSTMT = ' DECLARE temp_Cursor CURSOR SCROLL FOR '+@SQLSTMT
EXECUTE SP_EXECUTESQL @SQLSTMT
Open temp_Cursor
DECLARE @RowCount INT
SET @RowCount = 1
SET @startRow = (@PageSize * (@PageNumber-1))+@RowCount
--SELECT @startRow as 'Current Row'
WHILE @RowCount <= @PageSize
BEGIN
--Select @StartRow 'as @StartRow'
FETCH ABSOLUTE @startRow From temp_Cursor
SET @RowCount= @RowCount+1
SET @StartRow = @startRow + 1
END
deallocate temp_Cursor
EXECUTE SP_EXECUTESQL @SQLSTMT1
END
It is working fine, but I have a problem with this kind of paging: I need to load the whole result set into the cursor and then fetch records from it. The problem is that my tables contain more than half a million records. If I have to load this cursor every time, it will be a very big burden on the server side.
It is probably not the best solution, but SQL Server 2000 cannot provide much more help than this. If I use a sub-query for this, using TOP <Number>, it adversely affects the nature of the data retrieval.
One solution I am thinking of is to load the cursor once, and whenever updates are performed on the tables the cursor gets its data from, have the changes automatically reflected in the cursor.
Is this possible? Please help me.
Regards
Andy Rogers
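A cursor-free alternative worth sketching: the classic SQL Server 2000 paging pattern uses nested TOP queries over an ordered key, so each call reads only one page of rows (plus the skipped keys) instead of holding a scrollable cursor over half a million rows. Below is a rough sketch of that pattern; SQLite's LIMIT stands in for T-SQL's TOP, and the accounts table is invented for the demo, so treat it as an illustration rather than the original procedure:

```python
# The "nested TOP" paging pattern from SQL Server 2000, sketched against
# SQLite (LIMIT plays the role of TOP). No server-side cursor is held open;
# each page is an independent, cheap query over an indexed key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, f"acct_{i}") for i in range(1, 26)])  # 25 demo rows

def fetch_page(page_number, page_size):
    # T-SQL 2000 shape:  SELECT TOP n * FROM accounts
    #                    WHERE id NOT IN (SELECT TOP n*(p-1) id
    #                                     FROM accounts ORDER BY id)
    #                    ORDER BY id
    sql = """
    SELECT id, name FROM accounts
    WHERE id NOT IN (SELECT id FROM accounts ORDER BY id LIMIT ?)
    ORDER BY id LIMIT ?
    """
    skip = page_size * (page_number - 1)
    return conn.execute(sql, (skip, page_size)).fetchall()

page3 = fetch_page(3, 10)
print(page3)  # rows 21..25 -- the last, partial page
```

One caveat: on SQL Server 2000, TOP did not accept a variable directly, so the inner row count had to be built with dynamic SQL (or SET ROWCOUNT), much as the posted procedure already builds its statements.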
View 3 Replies
May 25, 2006
I've read that SSIS tries to do all transformations in memory as a way of enhancing processing speed. What happens though if the amount of data processed exceeds the available RAM? Are raw files then used (similar to staging tables) or is an error generated?
Barkingdog
View 1 Replies
Feb 14, 2006
Hi, All,
I need to pass a parameter from the control flow to the data flow. The data flow will use this parameter to get data from an Oracle source.
I have an Execute SQL task in the control flow to assign a value to the parameter; the next step is a data flow which needs to take a parameter in its SQL statement to query the Oracle source.
The SQL looks like this:
select * from ccst_acctsys_account
where to_char(LAST_MODIFIED_DATE, 'YYYYMMDD') >?
The problem is that the OLE DB Source editor doesn't have anything for mapping parameters.
Thanks in Advance
View 2 Replies
May 30, 2008
Hi,
Recently we upgraded the SQL Server from a 32-bit box to 64-bit, and we encountered some weird results: data was not refreshing unless we did a manual refresh.
We just ran some migration scripts from another server onto the new 64-bit server, and the scripts are correct; the only problem is with the data refreshing.
Please send me the solution for this problem, and let me know: is there any script we can use to refresh the database with a T-SQL command?
Thanks in Advance,
Shiva.
View 2 Replies
Mar 9, 2007
I have an Execute SQL Task that returns a Full Rowset from a SQL Server table and assigns it to a variable objRecs. I connect that to a foreach container with an ADO enumerator using objRecs variable and Rows in first table mode. I defined variables and mapped them to the columns.
I tested this by placing a Script task inside the foreach container and displaying the variables in a messagebox.
Now, for each row, I want to write a record to an MS Access table and then update a column back in the original SQL Server table from which I retrieved data in the Execute SQL task (I have the primary key). If I drop a Data Flow Task inside my foreach container, how do I pass the variables as input to an OLE DB Destination on the Data Flow?
Also, how would I update the original source table where source.id = objRecs.id?
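On the write-back question, one common shape (sketched here, not taken from the thread): keep the per-row work inside the foreach container and use a parameterized UPDATE with the loop variable mapped to the ? placeholder, the way an Execute SQL Task's parameter mapping works. SQLite stands in for both the Access and SQL Server connections, and every table and column name below is invented:

```python
# Sketch of the foreach write-back: for each row pulled into variables,
# insert into the destination and flag the row in the source table via a
# parameterized UPDATE (the "?" maps to the loop variable, as an Execute
# SQL Task parameter mapping would). Names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE source (id INTEGER PRIMARY KEY, payload TEXT,
                     exported INTEGER DEFAULT 0);
CREATE TABLE destination (id INTEGER, payload TEXT);
INSERT INTO source (id, payload) VALUES (1, 'a'), (2, 'b');
""")

# the foreach container: one iteration per source row
for row_id, payload in db.execute("SELECT id, payload FROM source").fetchall():
    db.execute("INSERT INTO destination VALUES (?, ?)", (row_id, payload))
    db.execute("UPDATE source SET exported = 1 WHERE id = ?", (row_id,))

exported = db.execute(
    "SELECT COUNT(*) FROM source WHERE exported = 1").fetchone()[0]
print(exported)  # 2
```

Doing the update row-by-row this way avoids needing the data flow at all for small row counts, which is often simpler than wiring variables into an OLE DB Destination.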
Thank you for your assistance. I have spent the day trying to figure this out (and thought it would be simple), but I am just not getting SSIS. Sorry if this has been covered.
Thanks,
Steve
View 17 Replies
Jan 17, 2008
Dear All!
My package has a Data Flow Task. In the Data Flow Task, I use a Script Component and an OLE DB Destination to transform data from a txt file into the database.
Within the Data Flow Task, I want to call a File System Task to move the file to a folder, or indeed any task from the "Control Flow" tab. Does SSIS support this? Please show me how, if so.
Thanks
View 3 Replies
May 17, 2007
Hi everyone,
Primary platform is 64 bit cluster.
How do I move information held in SSIS variables from the Data Flow to the Control Flow layer?
We've got an SSIS package which loads a value into a variable inside a Data Flow. Back in the Control Flow, how can we retrieve that value again?
Thanks in advance and regards,
View 4 Replies
Jan 12, 2006
I'm currently setting variables at the package level with an ExecuteSQL task. This works fine. However, I'm now starting to think about restartability midway through a package. It would be nice to have the variable(s) needed in a data flow set within the data flow so that I only have to restart that task.
Is there a way to do that using an SQL statement as the source of the value in a data flow?
OR, when using checkpoints will it save variable settings so that they are available when the package is restarted? This would make my issue a moot point.
View 2 Replies
May 7, 2007
Good day,
I have an SSRS 2000 report where, when I view the report, the data does not refresh until I press the refresh data button in the report. Clearly this can't be right, and to expect users to press the refresh button every single time is also ridiculous.
Has anyone had this problem before and knows what to do?
Please help.
Thanks
View 3 Replies
Jan 17, 2007
Hello,
I am trying to refresh a test database with data from a production database. Both database structures are identical, e.g. constraints, stored procs, PKs, etc. I am trying to create a package in SSIS that accomplishes this task, and I am having extensive problems. The Import/Export Wizard is out of the question because the constraints are not carried over; plus, when I try to refresh the data using the Import/Export Wizard, it fails on one specific table because of a column in that table named "Error code". I think "Error code" may be a Microsoft keyword, so it fails on this column. Does anyone know a workaround to accomplish this simple task, which could be completed in minutes using DTS? I understand that SSIS is not as straightforward as DTS, but this is something DBAs do on a regular basis, and it therefore should not be this difficult.
Any help would be appreciated,
David
View 1 Replies
Jul 11, 2006
Hi all,
Normally I would be using SSAS but our finance department make use of Hyperion Essbase. I was wondering whether it was possible to upload data into an Essbase cube using SSIS in the same way that you can use the Analysis Services processing task? I realise there are no specific task for Essbase, but are there any suggestions about what would be the best way of going about this?
regards
Colin
View 4 Replies
Aug 21, 2015
So I use Excel 2010 to connect to a cube I have built. Then I change some values in the cube via my ETL and re-process the cube. Then I verify that the record is NOT there in the fact table - check!
However, when I refresh the worksheet where the pivot table is pulling data from the cube, that old record won't go away!
Just realized my cube's data source on the dev server was in fact still configured with my local workstation name. Once I updated that and processed the cube, all was well.
View 2 Replies
Mar 18, 2008
Dear All,
I'm using VS2008, Visual Basic and SQL Server Express.
A general question, please: is it necessary to issue some sort of command to 'commit' data to a SQL Express database after an INSERT or UPDATE SQL command?
I'm getting strange behavior where the data is not refreshed unless I exit my app and re-enter. In other words, I can run a SQL command and the data is apparently saved (because I get no errors); then, if I refresh a dataset or do a SQL SELECT query, the data that I expect to return is not there.
I'm fairly new to SQL Express (and SQL Server generally), so I don't know if it's my coding or whether I need to switch some 'feature' on/off.
I hope that's clear.
Also, could someone point me to documentation that explains each parameter in the connection string?
Many Thanks
Chris Anderson
My code is:
ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=C:\myfile.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True"
Connection = New SqlConnection
Connection.ConnectionString = ConnectionString
Connection.Open()
'''''''''''''''''the above code is done at the start of my application
'''''''this code below called during application many times
dim sql as string = "my sql string here"
Dim cmd As SqlCommand = Nothing
cmd = New SqlCommand(sql, Connection)
Try
cmd.ExecuteNonQuery()
Catch err As SqlException
MessageBox.Show(err.Message.ToString())
MessageBox.Show(sql, "SQL ERROR: ", MessageBoxButtons.OK, MessageBoxIcon.Error)
End Try
cmd.Dispose()
View 5 Replies
Jul 22, 2007
Hi all! I recently started working with SSIS and one of the things that is puzzling me the most is what's the best way to go:
A small control flow, with large data flow tasks
A control flow with more, but smaller, data flow tasks.
Any help will be greatly appreciated.
Thanks,
Ricardo
View 7 Replies
Nov 1, 2006
Hi
I am new to SSIS and have the following problem. I used the following DTS script to clear columns of any CR/LF, commas and char(0)'s. Can I just transfer this to SSIS, and how exactly do I do that? Any help or advice would be appreciated.
Function Main()
Dim x
For x=1 to DTSSource.count
If Isnull(DTSSource(x)) = False Then
DTSDestination(x) = replace(replace(replace(Replace(DTSSource(x) , chr(13),""),chr(10),""),chr(0),""),","," ")
Else
DTSDestination(x) = DTSSource(x)
End If
Next
Main = DTSTransformStat_OK
End Function
Andre
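For comparison, the transformation itself is just chained replaces, which in SSIS would typically live in a Script Component (or one Derived Column expression per column) inside the data flow, since SSIS transforms work per column rather than looping over a column collection. The same logic as a plain function, sketched here:

```python
# The DTS ActiveX transform above as a plain function: strip CR, LF and
# NUL characters, turn commas into spaces, and pass NULLs (None) through
# untouched, mirroring the IsNull branch in the original script.
def clean(value):
    if value is None:
        return None
    return (value.replace("\r", "")
                 .replace("\n", "")
                 .replace("\x00", "")
                 .replace(",", " "))

print(clean("ham,eggs\r\n"))  # 'ham eggs'
```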
View 8 Replies
Dec 28, 2007
Hi,
I'm trying to implement an incremental data pull (Oracle to SQL) based on Andy's blog:
http://sqlblog.com/blogs/andy_leonard/archive/2007/07/09/ssis-design-pattern-incremental-loads.aspx
My development machine is decent: 1.86 GHz, Intel core 2 CPU, 3 GB of RAM.
However it seems the data flow task gets hung whenever I test the package against the ~6 million row source, as can be seen from these screenshots. I have no memory limitations on the lookup transformation. After the rows have been cached nothing happens. Memory for the dtsdebug process hovers around 1.8 GB and it uses 1-6 percent of CPU resources continuously. I am not using fast load to insert new records into my sql target table. (I am right clicking Sequence Container 3 and executing this container NOT the entire package in the screenshots)
http://i248.photobucket.com/albums/gg168/boston_sql92/1.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/2.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/3.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/4.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/5.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/6.jpg
The same package works fine against a similar test table with 150k rows.
http://i248.photobucket.com/albums/gg168/boston_sql92/7.jpg
http://i248.photobucket.com/albums/gg168/boston_sql92/8.jpg
The weird thing is it only takes 24 minutes for a full refresh of the entire source table from Oracle to the SQL target table.
Any hints,advice would be appreciated.
View 18 Replies
Sep 24, 2015
I am working on creating a data warehouse. I have made a database which will be the data warehouse and will consist of dimension and fact tables. I know that, besides dimension and fact tables, a data warehouse should also have metadata. Now my question is: what should the structure of the metadata be, and what information should it hold?
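There is no single standard structure for warehouse metadata, but one minimal, commonly seen shape is a load-audit table plus a source-to-target column map. The sketch below is generic and every table and column name in it is illustrative, not a prescription:

```python
# A generic sketch of two ETL metadata tables: a load-audit log (who loaded
# what, when, how many rows, with what outcome) and a source-to-target
# column map documenting each derivation. All names are illustrative.
import sqlite3

meta = sqlite3.connect(":memory:")
meta.executescript("""
CREATE TABLE etl_load_audit (
    load_id        INTEGER PRIMARY KEY,
    package_name   TEXT,      -- which job ran
    source_system  TEXT,
    target_table   TEXT,
    rows_inserted  INTEGER,
    rows_updated   INTEGER,
    started_at     TEXT,
    finished_at    TEXT,
    status         TEXT       -- 'succeeded' / 'failed'
);
CREATE TABLE etl_column_map (
    target_table   TEXT,
    target_column  TEXT,
    source_table   TEXT,
    source_column  TEXT,
    transform_rule TEXT       -- free-text description of the derivation
);
""")
meta.execute("""INSERT INTO etl_load_audit VALUES
    (1, 'LoadDimCustomer', 'CRM', 'DimCustomer', 120, 4,
     '2015-09-24T01:00', '2015-09-24T01:02', 'succeeded')""")
n = meta.execute(
    "SELECT COUNT(*) FROM etl_load_audit WHERE status='succeeded'"
).fetchone()[0]
print(n)  # 1
```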
View 2 Replies
Nov 24, 2006
Hi, all here,
Thank you very much for your kind attention.
I am wondering if it is possible to use SSIS to split a data set into a training set and a test set and feed them directly to my data mining models, without saving them somewhere, since that occupies too much space? I really need guidance on this.
Thank you very much in advance for any help.
With best regards,
Yours sincerely,
View 5 Replies
Mar 20, 2007
Good morning, all,
I am working on importing an Excel workbook, saved as multiple CSV flat files, that has both group-level data and related detail rows on the same sheet. I have been able to import the group data into a table. As part of the Data Flow task, I want to be able to save the key value for the group, which I will use when I insert the detail rows.
My Data Flow has the following components: The flat file with the data, which goes to a derived column transformation to strip out extraneous dashes, which leads to the OLEDB Destination component.
I want to save the value as a package level variable, so that I can reference it in another dataflow.
Is this possible, and if so, at what point do I save the value?
Thanks,
Kathryn
View 1 Replies
Oct 17, 2007
I am looking for anyone who can help me find an existing tool that will allow my programming team to build reports and data extracts simply and easily from a form of query builder. Unfortunately, my database is structured as a metadata store using name/value pairs. The actual structure is something like:
DateTime; Customer (integer); fieldname(string); fieldvalue (string)
In addition, I have another table that is structured similarly to store numeric data
DateTime; Customer (integer); fieldname(string); fieldvalue (Integer)
So, for the query tool to even start, it needs to scan both tables for distinct values in the fieldname column for a specific date/time range for a specific customer. Once that is done, it can display the field names and begin to search for the data I am looking for.
In addition, if I want the names of all people in the system who have bought a pie from my customer 1, the query tool will have to search for the fieldvalue "pie", return the key cells, and then search for the fieldname "Name" where the fields match (that, or do it in a compound query).
Is there any tool anywhere out there that might have such capabilities?
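The "pie" lookup described above can be done as a single self-join on the name/value table rather than two passes. A sketch using the column layout from the post; the "Product" fieldname and all sample data are invented for illustration:

```python
# Sketch of the compound EAV query: self-join the 'Name' rows to the rows
# whose fieldvalue is 'pie' on the shared (dt, customer) key. Table layout
# follows the post (DateTime, Customer, fieldname, fieldvalue); the
# 'Product' fieldname and the data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE string_data
              (dt TEXT, customer INTEGER, fieldname TEXT, fieldvalue TEXT)""")
db.executemany("INSERT INTO string_data VALUES (?, ?, ?, ?)", [
    ("2007-10-01", 1, "Name",    "Alice"),
    ("2007-10-01", 1, "Product", "pie"),
    ("2007-10-02", 2, "Name",    "Bob"),
    ("2007-10-02", 2, "Product", "cake"),
])

# one self-join instead of two sequential scans
names = db.execute("""
    SELECT n.fieldvalue
    FROM string_data n
    JOIN string_data p
      ON p.customer = n.customer AND p.dt = n.dt
    WHERE n.fieldname = 'Name'
      AND p.fieldname = 'Product' AND p.fieldvalue = 'pie'
""").fetchall()
print(names)  # [('Alice',)]
```

A generic query builder would have to generate exactly this kind of self-join per attribute, which is why off-the-shelf report tools struggle with name/value schemas.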
View 3 Replies
Mar 5, 2005
Greetings. I'm interested in learning how to create queries against a cube's metadata. Specifically, I'm interested in writing a query, using VB, to return a collection of bottom-level members relative to a given member name.
Using the MDX Sample application I can execute the query:
select
Descendants([Customer].[All Customer].[Canada],,leaves) on columns
from Sales
to get the members; however, the query appears to attempt to bring back data as well, and is thus probably an expensive method.
It would be great to understand how to implement a metadata query without having to resort to recursion.
TIA
View 2 Replies
Feb 21, 2007
SSIS seems to set the metadata type automatically, and for "typed" sources like database and XML connections it takes whatever the source column datatype is. If you use a cast or convert in your source SQL query, it will not change the datatype of the metadata. This becomes an issue when doing things like merge joins on data from different sources where the join columns are different types (e.g. a ZipCode that is a varchar in one system and an int in another). I've been working around the issue by editing the XML code and changing the datatype there. Is there any way to do this through the GUI?
View 2 Replies
Apr 17, 2007
Hello,
I have a SSIS package with a Data Flow task. This task transfers the data from SQL Server 2000 to a table in SQL Server 2005.
I deployed and tested this package on the Test Server. Then put this package in a job and executed it - Works fine.
On the production server, if I execute the package through DTEXECUI, it works fine. But when I try executing it through a job, the job fails and gives me the following error:
Description: The external metadata column collection is out of synchronization with the data source columns. The "external metadata column "T_FieldName" (82)" needs to be removed from the external metadata column collection....
What I don't understand is why no errors are displayed when I execute the package through DTEXECUI.
Can anyone help me resolve this issue?
Thanks.
View 3 Replies
View Related