When I create a DTS to export a text file from a table, if I click on
the Define Columns button and then Populate from source, then execute,
it changes one of the types to not quotable and the size to 19, which
in turn is changing the file width. The field that it is changing has
a numeric data type and the remainder of the fields are text (all
varying sizes). Is it doing this because the column is numeric, or is there some other explanation? And is there any way to stop it from happening?
I want to copy two columns from one database to another. I managed to do this using an OLE DB source and an OLE DB destination.
Now I want to merge two columns into one. Source database: column A holds the first name and column B holds the last name. Destination database: column 1 holds the customer's first and last name combined.
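One way to do this, sketched below with assumed column and table names (FirstName, LastName, dbo.Customers are mine, not from the post), is to do the concatenation in the OLE DB source query; a Derived Column transformation with an equivalent expression would also work.

SELECT FirstName + ' ' + LastName AS CustomerFullName   -- assumed column names
FROM dbo.Customers;                                      -- assumed source table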
Overview of my scenario: I have a SQL table with 10 columns. I read data off a CSV flat file source and insert/update it into the SQL table. No problems there.
Now, after the 10th column, the columns vary from site to site. That is, for any customer the first 10 columns are the same, but the following 'n' columns can be different.
I'm already able to detect which columns in the source DB are the 'custom' ones and can programmatically add or modify the columns in the destination SQL tables when SSIS runs.
My problem now is that the flat file source has the 'fixed' and 'custom' columns together, and my SQL table is ready to receive them, but I don't have the proper column mappings in my SSIS package. I only have mappings for the 'fixed' columns.
How can I programmatically add these mappings to my data source and, subsequently, to my OLE DB destination?
What strategy should I use? I'm open to anyone's suggestions or ideas.
I am trying to get a table that exists in production created in development and populated with the data from production. When I create a package and set up the data flow, I do not see any columns under the available input columns in the mappings. The source has been defined as the production table.
I'm using two OLE DB Commands: one to perform an insert, the other an update. I have found that on the column mapping tab I only have 10 parameters available to map to, but I need 40. Am I doing something wrong? Is there a setting I am missing? Is there another way to do this, or am I out of luck? Here is the SQL query I am using, so you can see that I have the correct number of parameters listed:
I am trying to load almost 15 CSV files into my OLE DB destination. Can I use a Foreach Loop container to map the source columns dynamically to the destination table during the data flow task?
I have a database app, and we're implementing various data export features using SSIS.
Basically, it's a couple of straight extracts of various recordsets/views, etc. to CSV (flat files) from our SQL Server 2005 database, so I'm creating an SSIS package for each of these datasets.
So far, so good, but my problem comes here: My requirements call for users to select from a list of available columns the fields that they want to include in their exported file. Then, the package should run, but only output the columns specified by the user.
Does anyone have any idea as to the best way to accomplish this? To recap, at design time, I know which columns the users will have to choose from, but at run time, they will specify the columns to export to the flat file.
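One possible approach, sketched below under assumptions (the view name and column list are made up, and the flat-file metadata side still has to be handled separately): build the source SELECT statement dynamically from the user's chosen columns and feed it to the data flow through an SSIS variable.

DECLARE @ColumnList nvarchar(max);
DECLARE @Sql nvarchar(max);

-- @ColumnList would come from the user's selection; it must be validated
-- against the known list of exportable columns to avoid SQL injection.
SET @ColumnList = N'CustomerID, FirstName, LastName';

SET @Sql = N'SELECT ' + @ColumnList + N' FROM dbo.CustomerExportView;';  -- assumed view name

-- Executed here only to demonstrate the generated statement; in the package
-- the text would be assigned to a variable used as "SQL command from variable".
EXEC sys.sp_executesql @Sql;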
Here is the situation: I have created a package that takes 50 columns from a comma-delimited flat file. I then validate and clean the data. Next, I add two columns that were not in the original source file. These two columns need to be in the 5th and 9th column positions when the file is re-written to a text file. How do I get those two columns to write out in the desired order? Any ideas?
I have a dataflow task with two components: OLE DB source --> Flat File destination
In the OLE DB source, I am accessing a view, not a table.
However, when I attempt to map the view's input columns to the flat file destination columns, the screen is completely blank. There are none of the usual drop-downs that let you select a column name, nor can I drag and drop an input column from the upper pane to the lower pane.
As soon as I call the input.GetVirtualInput() method I get a COM exception. It seems that I am missing a VirtualInputColumnCollection on the component, but I can't figure out why.
When I drop all of the other components and keep only the OLE DB source and OLE DB destination with a flow between them, the call to input.GetVirtualInput() does not fail with a COM exception and I can do the mapping normally.
However, the first three columns are not being populated in the destination table. The other columns come over fine.
The SQL stmt. returns data as expected when run against the source database.
I deleted the source and destination and recreated the flow to prevent metadata mapping issues. In the source editor preview I see all of the columns and data. In the destination editor preview, the first three columns of data are null.
It appears that the columns are not mapping properly even though they are in the source and destination of the mapping editor.
I have made sure that the destination mapping contains all the columns in the UI.
The source and destination have the columns represented in the Advanced Editor metadata. I also checked the XML to verify that the columns are in the destination.
There is a Row Count transformation between the source and destination, which should have no effect.
This is part of a larger DW load where I have 10 other tables populated within the data flow. I also do not get any validation or error messages, so I have eliminated truncation errors and the like.
I am really puzzled. Has anyone run across anything like this?
I need to create a number of flat files, all with the same layout and sourced from the same table, but with different criteria.
The first set of three flat files is created out of a simple Conditional Split transformation: if the source table row number is greater than 40,000, route to file 3; if the row number is greater than 20,000, route to file 2; otherwise route to file 1. This gives me 20,000 rows in files 1 and 2 and the remainder in file 3.
I also want to create a fourth flat file by joining the Source Table with a sample table and selecting only those rows where the Customer numbers match. I'm currently doing this in two stages: An Execute SQL Task performs the join and inserts the selected rows into a Destination table (identical layout to source table), and then a simple data flow moves the rows from the Destination table into the fourth flat file.
My problem is that the order of the columns in the first three flat files is different from the fourth file. I've tried creating the fourth flat file with a single data flow using a Merge Join transformation which didn't work because the tables aren't sorted in the correct sequence, and I couldn't get an OLE DB Command transformation to work either.
I'm not sure why the column order of the 4th file should be different seeing as how its contents are sourced from the same Source table, but is there a cunning way of setting this up so that the columns end up in the same order?
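One hedged option (the table and column names below are assumptions, not from the post): replace the Execute SQL Task plus staging table with a single data flow whose OLE DB source is an explicit join query that lists the columns in exactly the same order used by the first three files. The flat file destination then sees the columns in the desired order and no Merge Join or sorting is needed.

SELECT s.CustomerNumber,      -- list the columns explicitly,
       s.ColumnA,             -- in the same order as files 1 to 3
       s.ColumnB,
       s.ColumnC
FROM dbo.SourceTable AS s     -- assumed names
INNER JOIN dbo.SampleTable AS t
    ON t.CustomerNumber = s.CustomerNumber;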
I have a data flow with a flat file source and a SQL Server destination. Only a subset of the source columns is mapped to the destination. During execution SSIS returns DTS pipeline warnings for every unmapped source column. Is some kind of transformation the only way to get rid of these warnings?
Also this data flow subsequently returns an error: [SQL Server Destination [1293]] Error: An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Could not bulk load because SSIS file mapping object 'GlobalDTSQLIMPORT ' could not be opened. Operating system error code 2(The system cannot find the file specified.). Make sure you are accessing a local server via Windows security."
I'm researching this error, but if anyone is familiar with it your advice would be appreciated. Thanks.
I am trying to import data into SQL Server 2008 using Management Studio. The source data is an Access database. I am trying to do this with queries, as the tables do not match, but I do need to copy specific columns from the source to the destination. Could anyone give a brief example of selecting a column from the source table and just entering a dummy value for the other columns (the other destination columns do not exist in the source table)?
For the example: the source Access database has just two columns and no primary key (T2dbase, Employee).
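As a rough example of the kind of query you can paste into the wizard's "Write a query" option (the column names are assumed, since the post does not list them): select the real column from the Access table and supply literals for the destination columns that have no source.

SELECT EmployeeName,                 -- real column from the Access table (assumed name)
       'Unknown'   AS Department,    -- dummy value; no matching source column
       0           AS Salary         -- dummy value; no matching source column
FROM Employee;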
In transactional replication, while the Publisher, Distributor and Subscriber are on the same instance, can I have additional columns in the destination table and set "Action if name is in use" for the related article to "Keep existing object unchanged"?
I tried this, but not even the snapshot is transferred to the destination table.
After:

If (Row.RegenerateXML_IsNull()) Then
    Command.Parameters.AddWithValue("@RegenerateXML", System.DBNull.Value)
Else
    Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)
End If
Which is great, but it does turn 1 line into 5 lines. I have 20 parameters, which take up 20 lines of code, which will now become 20 x 5 = 100 lines of code.
My question is: is there a better way to write this code without taking up so many lines?
I am new to SSIS programming and am trying to export data from a flat file source to a SQL Server destination table dynamically. I need to get the table schema info (column length, data type, etc.) from the SQL Server table and then map the source columns from the flat file to the destination table columns.
I am referring to one of the programming samples from Microsoft and another excellent article by Moim Hossain. Can someone help me understand how to map the source columns to the destination table columns depending on the table schema? Please help.
The only way I know to add a new column to an existing mapping is to go to the Advanced Editor and refresh. However, this keeps only the default mapping (where the field names match); the rest is wiped out, so I need to restore the mapping manually afterwards. Risky and annoying at the same time. Is there any alternative?
Case: exporting a report to PDF/printing/TIFF. The report contains one table with 19 columns. One column is static; the other 18 are visible at the user's discretion. When printed or exported to PDF, the report naturally spans two pages, 16 columns on the first and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report compiles, the viewable version is perfect, and the Excel export is perfect, but in the PDF export the first page has every fifth column, starting with the last column that was hidden, expanded to take up additional width. On the spanned page, the first column renders correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the width the original two columns would have occupied plus its own.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page size selection available to the client. There are far too many combinations of what the user can elect to show or hide to build separate tables to show and hide on the same report to work around this effect.
Any help or suggestions on this issue would be appreciated.
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON data, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) at this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY) I get the right data back, but I get 8 rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
I updated my query (see below) to call the function twice within the CROSS APPLY clause, and I got this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. Second question: how do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
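For the second question, the error most likely comes from the comma before the second function call: the comma starts an old-style join, and the alias A is not in scope there. Chaining a second CROSS APPLY instead should remove the binding error (the row multiplication described in the first question still has to be dealt with separately, for example by pivoting the function's output into columns).

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C;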
First, I'd like to figure out a count of how many rows that are not in the current edition have the following:
Second, I'd like to be able to select the primary keys of all the rows involved.
Third, I'd like to select the primary keys of just the rows that are not in the current edition.
I'm not really sure how to describe this without making a dataset.
CREATE TABLE [Project].[TestTable1](
    [TestTable1_pk] [int] IDENTITY(1,1) NOT NULL,
    [Source_ID] [int] NOT NULL,
    [Edition_fk] [int] NOT NULL,
    [Key1_fk] [int] NOT NULL,
    [Key2_fk] [int] NOT NULL,
[Code] .....
GROUP BY fails me because I only want the groups where the Edition_fk values don't match...
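A rough sketch under stated assumptions: the current edition is identified by a variable @CurrentEdition (the value below is made up), and "the rows involved" means every row in a Key1_fk/Key2_fk group that contains at least one non-current row. Adjust the grouping columns to whatever actually defines a group.

DECLARE @CurrentEdition int;
SET @CurrentEdition = 5;   -- assumed value

-- 1. Count of rows that are not in the current edition
SELECT COUNT(*) AS NonCurrentRows
FROM [Project].[TestTable1]
WHERE Edition_fk <> @CurrentEdition;

-- 2. Primary keys of all rows in groups that contain a non-current row
SELECT t.TestTable1_pk
FROM [Project].[TestTable1] AS t
JOIN (SELECT Key1_fk, Key2_fk
      FROM [Project].[TestTable1]
      GROUP BY Key1_fk, Key2_fk
      HAVING SUM(CASE WHEN Edition_fk <> @CurrentEdition THEN 1 ELSE 0 END) > 0) AS g
  ON g.Key1_fk = t.Key1_fk
 AND g.Key2_fk = t.Key2_fk;

-- 3. Primary keys of just the non-current rows
SELECT TestTable1_pk
FROM [Project].[TestTable1]
WHERE Edition_fk <> @CurrentEdition;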
Here is my requirement; I'm not sure if this is possible. I am creating a table called Master with col1, col2, col3, col4, col5. Col1 and col2 are updatable; that part can be done easily.
Col3 and col4 are columns in another table, but these should be read-only. Is this possible? It is possible with a view, but a view is not friendly with SharePoint CRUD. Col5 is a computed column of col2 and one of the other columns? If the step above can be done, then I guess this can be done too.
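The computed-column part at least is straightforward; here is a minimal sketch with assumed names and an assumed formula (the post does not say exactly how col5 is derived). The col3/col4 "read-only, sourced from another table" requirement is the part that would normally need a view or some synchronization mechanism, which is a separate problem.

CREATE TABLE dbo.Master
(
    Col1 int           NOT NULL,
    Col2 decimal(10,2) NOT NULL,
    Col3 decimal(10,2) NULL,        -- values would be sourced from the other table
    Col4 decimal(10,2) NULL,        -- values would be sourced from the other table
    Col5 AS (Col2 + Col3)           -- computed column (assumed formula); always read-only
);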
I have a query which retrieves a variable number of columns, from 5 to 15, based on the input parameter passed. I am using a table to map all of these columns. If a column is not retrieved in the dataset (I am not talking about NULL data; the column is completely missing), then I want to hide it in my report.
As I am creating non-clustered indexes for the tables, I don't quite understand why it matters whether I put the columns in the index key columns or into the included columns of the index.
I am really confused about this and am looking forward to hearing from you. Thank you very much again for your advice and help.
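The short version: key columns define the sort order of the index and can be used for seeks and range scans; included columns are stored only at the leaf level, so they cannot be seeked or sorted on, but they let the index cover a query and avoid lookups back to the base table. A sketch on an assumed dbo.Orders table:

-- Key columns: the index is ordered by CustomerID, then OrderDate,
-- so it supports seeks and range scans on those columns.
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
    ON dbo.Orders (CustomerID, OrderDate);      -- dbo.Orders is an assumed table

-- Included columns: OrderDate and TotalDue are carried along at the leaf
-- level only, purely so that a query selecting them is covered.
CREATE NONCLUSTERED INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);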
Here's another one of my bitchfest posts about stuff that annoys the *** out of me in SSIS (and no such problems in DTS):
Do you ever wonder how easy it was to set up a text-file-to-database transform in DTS? I had no problems at all. In SSIS, I spent half a day trying to figure out how to get proper column data types for a text file. Of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager, BUT guess what: it samples ONLY 1000 rows. So I tried to change that number to 50000 and clicked OK, BUT MS changed it back to 1000 without me noticing it, SO NO WONDER some of the data types did not match later on. And boy, what fun it is to change the source columns after you have created a few transforms.
This s**t just breaks... So, a word about Derived Columns: pretty useful feature, heh? It's not f***ing useful if it DELETES some of the code itself after there have been changes in the data flow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow and messed up derived columns just because the lineage IDs don't match.
Metadata: it would be useful if you could change it and refresh it. I'm just sick and tired of it showing warnings and errors when there's nothing wrong, so after a change I need to double-click all my transforms so that those red and yellow boxes disappear.
Oh, and why I passionately dislike Derived Columns: so you create new fields based on some data, you do some stuff, combine multiple columns into one, but you have no way of saying "remove the columns from the pipeline." Why would you need that? Well, if you have 50K+ rows with 30+ columns, then it's extra useless memory overhead for your package.
Hopefully one day I will understand how SSIS works (not an easy task, I say). I might then be able to spend more time on development and less time on my bitchfest. UNTIL then: Another Day, Another Hassle with SSIS.
Basically, I'm given a daily schedule on two separate rows for shift 1 and shift 2 for the same employee, and I'm trying to align both shifts in one row, as shown below in the 'My desired results' section.
Sample Data:
;WITH SampleData ([ColumnA], [ColumnB], [ColumnC], [ColumnD]) AS
(
    SELECT 5060, '04/30/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '04/30/2015', '13:30', '15:30' UNION ALL
    SELECT 5060, '05/02/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '05/02/2015', '13:30', '15:30'
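A sketch against the sample data above, assuming shift 1 is simply the row with the earlier start time for a given employee and date (the desired-results section was not included, so the output column aliases below are mine):

;WITH SampleData ([ColumnA], [ColumnB], [ColumnC], [ColumnD]) AS
(
    SELECT 5060, '04/30/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '04/30/2015', '13:30', '15:30' UNION ALL
    SELECT 5060, '05/02/2015', '05:30', '08:30' UNION ALL
    SELECT 5060, '05/02/2015', '13:30', '15:30'
),
Numbered AS
(
    -- number the two rows per employee/date by start time
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ColumnA, ColumnB
                              ORDER BY ColumnC) AS ShiftNo
    FROM SampleData
)
SELECT ColumnA AS EmployeeID,   -- output names are assumptions
       ColumnB AS WorkDate,
       MAX(CASE WHEN ShiftNo = 1 THEN ColumnC END) AS Shift1Start,
       MAX(CASE WHEN ShiftNo = 1 THEN ColumnD END) AS Shift1End,
       MAX(CASE WHEN ShiftNo = 2 THEN ColumnC END) AS Shift2Start,
       MAX(CASE WHEN ShiftNo = 2 THEN ColumnD END) AS Shift2End
FROM Numbered
GROUP BY ColumnA, ColumnB;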
Hello. Using SQL Server 2000, I'm trying to put together a query that will tell me the following information about a view: the view name, the names of the view's columns, the names of the source tables used in the view, and the names of the columns that are used from those source tables. Borrowing code from the VIEW_COLUMN_USAGE view, I've got the code below, which gives me the view name, source table name, and source column name. And I can easily enough get the view columns from the syscolumns table. The problem is that I haven't figured out how to link a source column name to a view column name. Any help would be appreciated. Gary

select
    v_obj.name as ViewName,
    t_obj.name as SourceTable,
    t_col.name as SourceColumn
from
    sysobjects t_obj,
    sysobjects v_obj,
    sysdepends dep,
    syscolumns t_col
where
    v_obj.xtype = 'V'
    and dep.id = v_obj.id
    and dep.depid = t_obj.id
    and t_obj.id = t_col.id
    and dep.depnumber = t_col.colid
order by
    v_obj.name,
    t_obj.name,
    t_col.name
I am working on a Statistical Reporting system where:
Data Repository: SQL Server 2005
Business Logic Tier: views, user-defined functions, stored procedures
Data Access Tier: stored procedures
Presentation Tier: Reporting Services
The end user will be able to slice and dice the data for the report by:
different organizational hierarchies
a different number of layers within a hierarchy
selecting a single organization, or selecting All of the organizations within the organizational hierarchy
combinations of selection criteria, where the selection criteria are independent of each other, and also differe...
Below is an example of 2 organizational hierarchies:
Hierarchy 1: Country -> Work Group -> Project Team (Project Team within Work Group within Country)
Hierarchy 2: Client -> Contract -> Project (Project within Contract within Client)
Based on the 2 hierarchies above, here are a couple of use cases:
Country = "USA", Work Group = "Network Infrastructure", Project Team = all teams
Country = "USA", Work Group = all work groups
What I need to work out is how to implement the data interface (stored procs) for the reports, and how to implement the business logic to handle the different hierarchies and the different numbers of levels. I did get help earlier in this forum on how to handle a parameter having either a specific value or a NULL value (to select "all"): (WorkGroup = @argWorkGroup OR @argWorkGroup IS NULL)
Any ideas? Should I be doing this in SQL statements, or should I be looking at using Analysis Services?
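For the stored-procedure interface, one common pattern (sketched below with assumed object and column names) is to give every hierarchy level an optional parameter and treat NULL as "All", which is just the (WorkGroup = @argWorkGroup OR @argWorkGroup IS NULL) idea mentioned above applied to each level. Whether this is better done in T-SQL or in Analysis Services depends mostly on how much slicing flexibility and aggregation performance the users need.

CREATE PROCEDURE dbo.rptProjectStatistics     -- assumed procedure name
    @Country     nvarchar(50) = NULL,         -- NULL means all countries
    @WorkGroup   nvarchar(50) = NULL,         -- NULL means all work groups
    @ProjectTeam nvarchar(50) = NULL          -- NULL means all project teams
AS
BEGIN
    SET NOCOUNT ON;

    SELECT f.Country,
           f.WorkGroup,
           f.ProjectTeam,
           COUNT(*) AS ProjectCount           -- stand-in for the real measures
    FROM dbo.ProjectFacts AS f                -- assumed reporting view/table
    WHERE (f.Country     = @Country     OR @Country     IS NULL)
      AND (f.WorkGroup   = @WorkGroup   OR @WorkGroup   IS NULL)
      AND (f.ProjectTeam = @ProjectTeam OR @ProjectTeam IS NULL)
    GROUP BY f.Country, f.WorkGroup, f.ProjectTeam;
END;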