How To Divide A Row's Data Into Multiple Rows In One Table With SSIS?
Jun 26, 2006
Hi, Experts,
I have a data table from an old system.
There are 10 data fields stored in one row of this table.
How can I separate those fields into another table as 10 rows?
Is there any Data Flow component I can use? (I have tried several components...)
By the way,
the original table is large, containing 3,000,000 rows.
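For what it's worth, the same reshaping can be expressed in T-SQL with UNPIVOT (SQL Server 2005+); SSIS also ships an Unpivot transformation that does the same thing inside a Data Flow. A minimal sketch, assuming a hypothetical source table OldTable with a key column RowID and ten value columns Field1 through Field10 (all names here are placeholders, not from the original post):

SELECT RowID, FieldName, FieldValue
FROM OldTable
UNPIVOT (FieldValue FOR FieldName IN
    (Field1, Field2, Field3, Field4, Field5,
     Field6, Field7, Field8, Field9, Field10)) AS u;

UNPIVOT needs the ten columns to share a data type, so casts may be required; for a 3,000,000-row table, the SSIS Unpivot transformation lets the reshaping happen in the Data Flow instead of in the source query.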
I have the following scenario: N identical databases (corresponding to different fiscal years, with names <Company Name>.<YEAR>). We want to consolidate the N DBs into a new data warehouse.
In SSIS we have designed a Data Flow that reads through an OLE DB Source (connected to one of the N databases) and maps to an OLE DB Destination (connected to the new DB).
The question is, how do we loop in SSIS through the N identical connections, so as to repeatedly execute the designed Data Flow, each time with a different connection?
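One common pattern (a sketch, not taken from the original post): put the Data Flow inside a Foreach Loop container that iterates over the list of database names, map the current name into a string variable such as User::DBName (a hypothetical name), and attach a property expression to the source connection manager's ConnectionString, for example:

"Provider=SQLNCLI;Data Source=MyServer;Initial Catalog=" + @[User::DBName] + ";Integrated Security=SSPI;"

The provider and server name here are placeholders; with the expression in place, each loop iteration re-points the same connection manager and re-runs the identical Data Flow against the next database.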
I am facing a problem in writing the stored procedure for multiple search criteria.
I am trying to write the query in the Procedure as follows
Select * from Car
where Price=@Price1 or Price=@Price2 or Price=@Price3
and Manufacture=@Manufacture1 or Manufacture=@Manufacture2 or Manufacture=@Manufacture3
and Model=@Model1 or Model=@Model2 or Model=@Model3
and City=@City1 or City=@City2 or City=@City3
I am not sure of the query, but I am trying to get the list of cars filtered based on the user input.
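Because AND binds more tightly than OR in T-SQL, each group of alternatives needs its own parentheses. A hedged sketch of the intended filter, reusing the same parameter names:

SELECT *
FROM Car
WHERE (Price = @Price1 OR Price = @Price2 OR Price = @Price3)
  AND (Manufacture = @Manufacture1 OR Manufacture = @Manufacture2 OR Manufacture = @Manufacture3)
  AND (Model = @Model1 OR Model = @Model2 OR Model = @Model3)
  AND (City = @City1 OR City = @City2 OR City = @City3);

IN (@Price1, @Price2, @Price3) is an equivalent, shorter way to write each group.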
I'm looking for the group's collective wisdom on the following issue; I'm sure many have been confronted with it.
I'm designing a db for a website that will utilize SQL Server 7. I have a few tables that I think will grow very large row-wise and that will be written to and read from frequently. I sense that I might be able to get better performance out of the system if I split many of these large tables up (row-wise) into many smaller tables.
For example, the website that I'm working on has a "MailingAddresses" db table that contains the mailing addresses of the site's users (subscribers and other misc users - 1 record per user, anticipating 70,000+ users). Each user can update their record, and there will be frequent queries to the table to get addresses for both internal admin use and for display on public web pages.
The website has five distinct sections that are in some sense like five distinct websites. Each user belongs to only one of these sections, and they won't migrate between sections. Therefore I'm considering breaking up the one large "MailingAddresses" table into five smaller tables, one for each section, i.e., "MAddressesSectionA", ..., "MAddressesSectionE".
These tables will have the same fields and constraints. Also of course there would be the same number of reads and writes in total with the five smaller tables as compared to that for the one big table. Also the combined size of the info in the five smaller tables would be the same as that for the one large table.
Though it's going to be more of a pain to manage five tables versus one, I have a hunch it might be easier for the DBMS to handle reads and writes with five smaller tables than with one large table...
...Of course this is true when queries mainly concern users from just one or two of the sections (as might be the case) - the big table then has the overhead of the address records for users in the other sections. BUT is there any benefit to the five-smaller-tables route when they are all frequently accessed? Sure, each select query has fewer records to go through, but with all five tables in play the DBMS has to deal on average with the same amount of info as in the one big table.
What do you folks think, to divide or not to divide the big table(s) up row wise into smaller tables?
I guess the issue is summed up in this question: In general, can a DBMS better handle in memory, and more quickly write to and query, one BIG table with a+b+...+n records, or N smaller tables with a, b, ..., n records respectively?
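For what it's worth, SQL Server lets you keep both shapes at once: store the five section tables and expose them as one logical table through a UNION ALL view (a partitioned view). A minimal sketch using the section table names above, assuming identical column lists:

CREATE VIEW MailingAddresses
AS
SELECT * FROM MAddressesSectionA
UNION ALL
SELECT * FROM MAddressesSectionB
UNION ALL
SELECT * FROM MAddressesSectionC
UNION ALL
SELECT * FROM MAddressesSectionD
UNION ALL
SELECT * FROM MAddressesSectionE;

Admin code can query the view as if it were the one big table, while section-specific pages hit only their own table; with a CHECK constraint on a section column in each table, the optimizer can also skip sections a query cannot touch (support for this varies by SQL Server version).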
Entity   Value
A        2424053.500000
B        1151425.412500
C        484810.700000

Table 2 contains:

Entity   Formula
A        (2100*(1-0.0668)*24*mday*10)
B        (1000*(1-0.0575)*24*mday*10)
C        (1260*(1-0.09)*24*mday*10)
where mday is the number of days, taken from the user.
I need to calculate the output of value/formula for each entity. Can you provide me the query for the same?
The datatype of the formula column is varchar.
I do not have the liberty to use cursors or loops. mday will be an input from the user, say 'mday = 31'. I need to divide the value in the first table by the computed value of the formula after replacement.
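Since the formula is stored as varchar, one cursor-free option (a sketch, assuming the two tables are named Table1(Entity, Value) and Table2(Entity, Formula); adjust to the real schema) is to build a single dynamic SQL statement that substitutes mday and lets SQL Server evaluate each formula:

DECLARE @mday INT;
DECLARE @sql NVARCHAR(MAX);
SET @mday = 31;   -- user input

-- Build one SELECT per entity that evaluates its formula with mday replaced
SELECT @sql = COALESCE(@sql + ' UNION ALL ', '')
            + 'SELECT ''' + Entity + ''' AS Entity, '
            + REPLACE(Formula, 'mday', CAST(@mday AS VARCHAR(10))) + ' AS FormulaValue'
FROM Table2;

-- Divide each stored value by its computed formula value
SET @sql = 'SELECT t1.Entity, t1.Value / f.FormulaValue AS Result
FROM Table1 t1
JOIN (' + @sql + ') f ON f.Entity = t1.Entity;';

EXEC sp_executesql @sql;

The REPLACE call assumes the literal text mday appears only where the day count belongs in each formula.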
I'm trying to help out a beginning DBA and come up with a SQL query. Here is the information I've been given so far.
"I have a SQL database (TEST1_new) with several tables. I need to have some values updated in one of the tables HARR2APP.CUSTOMER_ORDER_LINE_TAB1). I need the value that exists in the QTY_SHIPPED field to be divided by 250. I also need to include a WHERE statement for the PLANNED_SHIP_DATE greater than 11/15/2004 and a CATALOG_NO =18053185292 or 18053185285.
Basically, what I need to do is take the value for QTY_SHIPPED for Catalog_NO 18053185285 & 18053185282 and divide the result by 250"
After reading through her statement above about 10 times and guessing a bit, here's what I was able to come up with. I'm sure the syntax is not correct but maybe it's close to being what's needed?
INSERT INTO HARR2APP.CUSTOMER_ORDER_LINE_TAB1()
(SELECT dbo.TEST1_new.HARR2APP.CUSTOMER_ORDER_LINE_TAB.QTY_SHIPPED LIMIT 1) /250),PLANNED_SHIP_DATE from dbo.TEST1_new.HARR2APP.CUSTOMER_ORDER_LINE_TAB
WHERE CATALOG_NO = '18053185292' OR CATALOG_NO = '18053185285'
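Reading the requirement as an in-place update rather than an insert, a hedged sketch (the date literal format and the need for quotes around CATALOG_NO depend on the actual column types, so verify those first):

UPDATE HARR2APP.CUSTOMER_ORDER_LINE_TAB1
SET QTY_SHIPPED = QTY_SHIPPED / 250
WHERE PLANNED_SHIP_DATE > '2004-11-15'
  AND CATALOG_NO IN ('18053185292', '18053185285');

If QTY_SHIPPED is an integer column the division truncates, so cast to a decimal type if fractional quantities must survive; running the same criteria as a SELECT first is a cheap way to confirm which rows will change.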
I have a table called ADSCHL which contains school_code as the primary key, and two other tables,
RGDEGR (common field SCHOOL_CODE) and RGENRL (ORIGINAL_SCHOOL_CODE), which reference ADSCHL. If a school_code is updated, both tables, RGDEGR (school_code) and RGENRL (original_school_code), have to be updated as well. I have been provided new data that I have imported into SQL Server using SSIS, with the table name TESTCEP, which has a column named school_code. I have been assigned the task of updating the old school_code value (ADSCHL) with the new school_code (TESTCEP) and making sure the changes happen across all 3 tables.
I tried using a MERGE update; not sure if this is going to work.
UPDATE AD
SET AD.SCHOOL_CODE = FD.SCHOOL_Code
FROM dbo.ADSCHL AD
INNER JOIN TESTCEP FD ON AD.SCHOOL_NAME = FD.School_Name
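Because RGDEGR and RGENRL reference ADSCHL, updating the parent alone will either violate the foreign keys or leave the children pointing at old codes. One hedged approach (the constraint handling below is a placeholder; cascading updates on the foreign keys would be the cleaner alternative): disable the foreign keys, update the children first while ADSCHL still holds the old codes, then update the parent, all inside one transaction.

ALTER TABLE dbo.RGDEGR NOCHECK CONSTRAINT ALL;   -- ideally name only the FK to ADSCHL
ALTER TABLE dbo.RGENRL NOCHECK CONSTRAINT ALL;

BEGIN TRANSACTION;

-- Children first, while ADSCHL still maps old code -> school name
UPDATE RG
SET RG.SCHOOL_CODE = FD.School_Code
FROM dbo.RGDEGR RG
JOIN dbo.ADSCHL AD ON AD.SCHOOL_CODE = RG.SCHOOL_CODE
JOIN dbo.TESTCEP FD ON FD.School_Name = AD.SCHOOL_NAME;

UPDATE RG
SET RG.ORIGINAL_SCHOOL_CODE = FD.School_Code
FROM dbo.RGENRL RG
JOIN dbo.ADSCHL AD ON AD.SCHOOL_CODE = RG.ORIGINAL_SCHOOL_CODE
JOIN dbo.TESTCEP FD ON FD.School_Name = AD.SCHOOL_NAME;

-- Then the parent
UPDATE AD
SET AD.SCHOOL_CODE = FD.School_Code
FROM dbo.ADSCHL AD
JOIN dbo.TESTCEP FD ON FD.School_Name = AD.SCHOOL_NAME;

COMMIT;

ALTER TABLE dbo.RGDEGR WITH CHECK CHECK CONSTRAINT ALL;
ALTER TABLE dbo.RGENRL WITH CHECK CHECK CONSTRAINT ALL;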
Hi All, I have multiple text files, let us say:
a1.txt
b1.txt
c1.txt
I have to port this text file data into SQL Server tables that have the same file structure, i.e.:
x1 (SQL Server table)
y2 (SQL Server table)
z3 (SQL Server table)
Now I have to transfer:
a1.txt file data ----to--- x1
b1.txt file data ----to--- y2
c1.txt file data ----to--- z3
using SSIS. Like that, I have to transfer more than 250 files at a time. Manually binding 250 files into the package is a very cumbersome and time-consuming process. So, can anyone give your valuable suggestion to solve this issue?
I am trying to load a file using SSIS that contains records with two different layouts in one data file, but in the flat file connection I can only specify one layout, and this is causing the records with the second layout to be loaded incorrectly.
The different record layouts can be identified by the first character of the record. Example: If Field begins with "A" then assign one layout; "B" assign second layout.
Has anybody come across this issue? If so, some guidance would be appreciated.
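One common workaround (a sketch, not something the flat file connection does natively): define the connection with a single wide column, then use a Conditional Split whose conditions look at the first character to route each record type down its own parsing path, e.g. an expression like

SUBSTRING([Column 0], 1, 1) == "A"

for the first layout and the same test with "B" for the second; each output then gets its own Derived Column or Script logic to carve out that layout's fields.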
I am getting the following error when trying to load multiple Excel files using a Foreach Loop container in SSIS. I tried putting the quotes in several different ways but still can't get rid of this error. I was able to successfully load a single Excel file, but when I use the Foreach Loop container, that's when I have problems. Any help is greatly appreciated. Thx.
Error at Package1 [Connection manager "SourceConnectionExcel"]: The connection string components cannot contain unquoted semicolons. If the value must contain a semicolon, enclose the entire value in quotes. This error occurs when values in the connection string contain unquoted semicolons, such as the InitialCatalog property.
Error at Package1: The result of the expression ""Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + @[User::Folder] + @[User::file] + ";Extended Properties="Excel 8.0;HDR=NO";"
" on property "ExcelFilePath" cannot be written to the property. The expression was evaluated, but cannot be set on the property.
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields. If they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within the SQL max row length.
Can SSIS be used to create these two tables? (one without the description fields, the other with those fields but arranged vertically in the table rows).
The fundamental issue is that I cannot import a single file row into a SQL table because that row's length could exceed the max byte count for a row.
I have one XLS file and I need to import its data into 2 different tables in SQL Server 2005. For example, one master table X and a child table Y. Table X has columns mastertabid, col1, col2, col3, etc.; the master table columns are fixed. The child table has columns childtabid, mastertabid, childcol1, childcol2, childcol3, childcol4, etc.; the child table columns are not fixed, there may be 5, 10, 15 or 20... Now, at the time of importing the XLS file, I need to insert data into both tables. The most important thing is that I need to add data to the child table with a reference to mastertabid; I have to add this column value.
I can do it with business logic, but it makes my page slow... and I don't want to do that...
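A hedged T-SQL sketch of the parent/child insert for one spreadsheet row, assuming mastertabid is an IDENTITY column on X (the column names below are placeholders); SCOPE_IDENTITY() carries the generated master key into the child rows:

DECLARE @mastertabid INT;

-- Insert the master row first
INSERT INTO X (col1, col2, col3)
VALUES ('a', 'b', 'c');

SET @mastertabid = SCOPE_IDENTITY();   -- identity value just generated for X

-- Child rows reuse that key
INSERT INTO Y (mastertabid, childcol1, childcol2)
VALUES (@mastertabid, 'child value 1', 'child value 2');

For a set-based load (e.g. the whole XLS staged into a table first), the OUTPUT clause or a join back on a natural key can map each new mastertabid to its child rows instead.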
I am using ASP.NET (VB). I have 2 tables, shown below. If I want to delete the row with Forumid 1, how would I also delete the rows in the Topic table that belong to Forumid 1? And how would I do it if I am using a GridView?

Forum table
Forumid | Forumname
1       | hi
2       | me

Topic table
Topicid | Forumid | Topicname
1       | 1       | yo
2       | 1       | everyone
3       | 1       | google
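A hedged sketch of the delete itself (the GridView's delete command would supply @ForumId as a parameter): remove the child Topic rows first, then the Forum row, inside one transaction; a foreign key declared with ON DELETE CASCADE would make the first statement unnecessary.

BEGIN TRANSACTION;

DELETE FROM Topic WHERE Forumid = @ForumId;   -- child rows first
DELETE FROM Forum WHERE Forumid = @ForumId;   -- then the parent

COMMIT;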
Yesterday I was looking at the processor usage in the Task Manager of Windows NT while a script of mine was running. The script was an InfoPump script; InfoPump is a tool from the DecisionBase suite from CA (previously owned by Platinum). This script contains SQL statements that select data from several tables and store the result into another table. The SQL code used for this looks fine to me. The query was running on a Compaq Proliant 5500 with 4 500 MHz Xeon processors, 1 GB RAM, NT Server 4, SP 5, RAID 5. The SQL Server is configured to use all resources and SQL has normal priority on NT. When the select part was running, all four processors were used at about 75%, and when the store happens only 1 processor is used at 100%. Why is the store not spread over all four processors? It only uses one processor and it seems to be a bottleneck.
Our database stores vehicle data in one table, but 3 different types of data are stored in that one table. The table contains all the columns for all 3 types, so when you query the table you get at least 3 rows back with null values for all the columns that don't apply to that record. The data is imported to the table when it's updated, so there's a possibility that the rows are updated at different times and therefore have a different BATCH, like:
BATCH  TYPE  ID   RATING  INSURANCE  SAFETY
300    SAFE  123  NULL    NULL       A
300    INS   123  NULL    YES        NULL
250    RATE  123  A       NULL       NULL

What I'd like returned is:

ID   RATING  INSURANCE  SAFETY
123  A       YES        A
I'm trying to do a case statement to pull the data down, but I keep ending up with multiple rows because of all the nulls. I tried doing a SUM of the case statement with an ISNULL(SAFETY,0) but I can't SUM char values. I can probably do this with 3 temp tables to load the data that I want for each TYPE into them and then select and join them together, but is there a better way to do this?
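MAX works where SUM cannot: it is defined for character columns and ignores NULLs, so grouping by ID collapses the three typed rows into one. A hedged sketch, with VehicleData standing in for the real table name:

SELECT ID,
       MAX(RATING)    AS RATING,
       MAX(INSURANCE) AS INSURANCE,
       MAX(SAFETY)    AS SAFETY
FROM VehicleData
GROUP BY ID;

If the same TYPE can appear with different BATCH values, filter to the latest BATCH per TYPE in a derived table first so MAX does not mix old and new data.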
I have 3 tables with the following schema:

Table <Category> {
    UniqueID,
    LastDate DateTime
}

Assume the following tables with data following the above schema:

Table Cat1 {
    1, D1
    2, D2
    3, D3
}

Table Cat2 {
    2, D4
    3, D5
    4, D6
}

Table Cat3 {
    1, D7
    3, D8
    5, D9
}

I have a Master table and the schema is as follows:

Table Master {
    UniqueId,
    Cat1 DateTime, -- This is the same as the table name
    Cat2 DateTime, -- This is the same as the table name
    Cat3 DateTime  -- This is the same as the table name
}
After inserting the data from all these 3 tables, I want my Master table to look like this:

Table Master {
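One way to populate Master in a single pass (a sketch built only from the schema above): FULL OUTER JOIN the three category tables on UniqueID, so every ID appears once with NULL in the columns for categories that have no row for it.

INSERT INTO Master (UniqueId, Cat1, Cat2, Cat3)
SELECT COALESCE(c1.UniqueID, c2.UniqueID, c3.UniqueID) AS UniqueId,
       c1.LastDate AS Cat1,
       c2.LastDate AS Cat2,
       c3.LastDate AS Cat3
FROM Cat1 c1
FULL OUTER JOIN Cat2 c2 ON c2.UniqueID = c1.UniqueID
FULL OUTER JOIN Cat3 c3 ON c3.UniqueID = COALESCE(c1.UniqueID, c2.UniqueID);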
I know that this is legal SQL: "SELECT 1 AS Blah" I want to do something like this, except that I need to select multiple rows, each with a different value for Blah. The query needs to be legal to be passed to the SqlCommand.ExecuteReader method. Is this possible?
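Either of the following is legal to hand to SqlCommand.ExecuteReader and returns one column with several rows (the VALUES form needs SQL Server 2008 or later):

SELECT 1 AS Blah
UNION ALL
SELECT 2
UNION ALL
SELECT 3;

-- SQL Server 2008+ alternative
SELECT Blah FROM (VALUES (1), (2), (3)) AS v(Blah);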
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and Test_ID. An example with values is below:
In Table_One all types come under one column, and the values of all Types (Mammal, Fish, Bird, Reptile) come under another column (Animals). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table. So the columns Bird, Reptile, Mammal and Fish all come from a different copy of Table_One.
For example:
Select Test_Name AS 'Test_Name', Table_Bird.Animal AS 'Birds', Table_Mammal.Animal AS 'Mammal', Table_Reptile.Animal AS 'Reptile', Table_Fish.Animal AS 'Fish' From Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
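A hedged alternative that drops the self-joins entirely (column names are taken from the description above, so the TestID/Test_ID spelling may need adjusting): read Table_One once, pivot the types with CASE, and LEFT JOIN from Table_Two so tests with a missing animal type still come back with NULL in that column.

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two t2
LEFT JOIN Table_One t1 ON t1.TestID = t2.Test_ID
GROUP BY t2.Test_Name;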
Sorry for the confusing subject. Here's what I'm doing: I have a table of products. Products have N categories and subcategories. Right now it's 4, but there could be more down the line, so it needs to be extensible.
So I've created a Product table, then a Category table that has many categories of products, of which a product can belong to N number of these categories, and finally a ProductCategory "match" table.
This is pretty straightforward, but I'm getting confused as to how to write views/sprocs to pull out rows of products that list all the product's categories as columns in a single query/view.
For example: let's say ProductId 1 is Cap'n Crunch cereal. It is in 4 categories: Cereal, Food for Kids, Crunchy food, and Boxed. So we have:

Product
----------------
1  Capn Crunch

Categories
-----------------
1  Cereal
2  Food for Kids
3  Crunchy food
4  Boxed

ProductCategories
------------------
1  1
1  2
1  3
1  4

How do I go about writing a query that returns a single result set for a view or data set (for use in a GridView control) where I would have the following result:

Product results
---------------------------------
ProductId  ProductName  Category 1  Category 2     Category 3    Category N ...
------------------------------------------------------------------------
1          Capn Crunch  Cereal      Food for Kids  Crunchy food  Boxed

Am I just thinking about this all wrong? Sure seems like it.
Cheers,
Will
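A hedged sketch of one way to flatten the categories into numbered columns (table and column names are guesses based on the description; a fixed maximum of four category columns is assumed, since a SELECT cannot grow its column list dynamically without dynamic SQL):

WITH pc AS (
    SELECT p.ProductId,
           p.ProductName,
           c.CategoryName,
           ROW_NUMBER() OVER (PARTITION BY p.ProductId ORDER BY c.CategoryId) AS rn
    FROM Product p
    JOIN ProductCategories x ON x.ProductId = p.ProductId
    JOIN Categories c ON c.CategoryId = x.CategoryId
)
SELECT ProductId,
       ProductName,
       MAX(CASE WHEN rn = 1 THEN CategoryName END) AS Category1,
       MAX(CASE WHEN rn = 2 THEN CategoryName END) AS Category2,
       MAX(CASE WHEN rn = 3 THEN CategoryName END) AS Category3,
       MAX(CASE WHEN rn = 4 THEN CategoryName END) AS Category4
FROM pc
GROUP BY ProductId, ProductName;

For a truly open-ended N the same statement has to be generated as dynamic SQL, or the categories can be concatenated into a single delimited column for the GridView instead.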
Each one of the tables listed below has a "CreateDateTime" and an "UpdateDateTime" field. I need to get yesterday's changes; I can get any record where either CreateDateTime or UpdateDateTime is greater than midnight yesterday, but I need to watch dates on all of the tables, so I need to do at least 10 date checks.
If any table shows an updated or created record, I need to gather ALL of the information for that customer. So, if my name didn't change (SCUS table), but my email did (SEML table), I have to pull out both the SCUS and SEML tables (and the others, of course). So it may not be a simple WHERE clause. How can I achieve this?
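One hedged approach (it assumes every table carries the same customer key, here called CustomerId, which the post does not state): collect the keys of customers touched since midnight yesterday with a UNION across the tables, then join that set back to every table to pull the complete picture.

DECLARE @since DATETIME;
SET @since = DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) - 1, 0);  -- midnight yesterday

SELECT CustomerId
INTO #changed
FROM (
    SELECT CustomerId FROM SCUS WHERE CreateDateTime >= @since OR UpdateDateTime >= @since
    UNION
    SELECT CustomerId FROM SEML WHERE CreateDateTime >= @since OR UpdateDateTime >= @since
    -- ... one UNION branch per remaining table
) AS touched;

SELECT c.*, e.*                     -- add columns from the other tables as needed
FROM #changed ch
JOIN SCUS c ON c.CustomerId = ch.CustomerId
LEFT JOIN SEML e ON e.CustomerId = ch.CustomerId;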
I have a data flow task in which there is an OLE DB source, a Derived Column item, and an OLE DB destination. My source is a SQL command that returns some values. I have some values that I define in the derived columns, with default values set under the expression column. My question is: I also have some destination columns which, for my OLE DB destination, need another SQL command. How would I do that? Can I attach two or more OLE DB sources to one destination? How would I accomplish that? Thanks
I have a requirement wherein I have around 15 different flat files. The filenames are fixed but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Should I create a separate data flow task for each file, or a separate package for each? In addition to this, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and upload only the new records.
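For the "ignore existing product IDs" part, one hedged pattern is to land each file in a staging table and insert only the keys that are missing (the staging table and column names below are made up):

INSERT INTO Product (ProductID, ProductName, Price)
SELECT s.ProductID, s.ProductName, s.Price
FROM StagingProduct s
WHERE NOT EXISTS (SELECT 1 FROM Product p WHERE p.ProductID = s.ProductID);

Inside the data flow itself, a Lookup transformation against the Product table can do the same filtering, sending only unmatched rows on to the destination.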