This file format (X12) has multiple levels.
One level can contain one or more instances of the
next level. It's kind of like XML,
except that some sections have no end tags, and the
ones that do have end tags actually use a _different_
tag for the end (ISA ... IEA, or GS ... GE).
It's easy enough to read a line at a time, see what
type it is, and insert its parts into the appropriate
table, keeping track of the parent level's keys
for the relationships.
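A minimal T-SQL sketch of that line-at-a-time staging idea (the file path, segment terminator, and table names here are illustrative, not from any real spec):

    -- Stage every segment as raw text, then route rows by their leading tag
    CREATE TABLE #segments (line varchar(8000));

    BULK INSERT #segments FROM 'C:\edi\sample.x12'
        WITH (ROWTERMINATOR = '~');  -- X12 segments often end in '~'; adjust per file

    -- One INSERT per segment type, e.g. the interchange and group headers
    INSERT INTO dbo.EDI_ISA (raw_segment)
    SELECT line FROM #segments WHERE line LIKE 'ISA*%';

    INSERT INTO dbo.EDI_GS (raw_segment)
    SELECT line FROM #segments WHERE line LIKE 'GS*%';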
But I'm wondering whether there's some (not impossibly
complex) more efficient method with SQL and/or DTS.
--
Wes Groleau
If you put garbage in a computer nothing comes out but garbage.
But this garbage, having passed through a very expensive machine,
is somehow ennobled and none dare criticize it.
I need to export data from multiple tables into one single file. The big problem here is that the tables will have different column types.
I am attempting to create something that allows users to send me the contents of their tables, through either email or FTP. I would prefer to make it easier for them, so they only have to deal with one file instead of the multiple files that bcp and DTS create when exporting from multiple tables.
I was thinking of using DTS or BCP and then joining the files (either zipping them or appending them together in some fashion), but I was hoping that there was an easier method out there.
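For what it's worth, a hedged sketch of the export-then-append idea via xp_cmdshell (assumes xp_cmdshell is enabled; the paths and table names are illustrative):

    -- Export each table in character mode, then concatenate the files
    EXEC master..xp_cmdshell 'bcp MyDb.dbo.Customers out C:\out\customers.txt -c -T';
    EXEC master..xp_cmdshell 'bcp MyDb.dbo.Orders out C:\out\orders.txt -c -T';
    EXEC master..xp_cmdshell 'copy /b C:\out\customers.txt + C:\out\orders.txt C:\out\all.txt';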
Any ideas on how I may accomplish this would be greatly appreciated.
I am trying to create a text file from multiple SQL tables using @BCP_Command. I tried using DTS and SQL, but the number of columns in the tables has to be the same when doing a UNION. I also need to place a delimiter between each column. I've learned how to use BCP commands on one file, but I'm not sure if you can make it work with two or more.
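One hedged workaround for the column-count problem: build a single delimited string per row, so every branch of the UNION has exactly one column (table and column names are illustrative):

    SELECT CAST(OrderID AS varchar(12)) + '|' + CustomerName AS line
    FROM dbo.Orders
    UNION ALL
    SELECT CAST(ItemID AS varchar(12)) + '|' + ItemDesc + '|' + Sku
    FROM dbo.Items;

bcp can then export that one query to a single file with queryout instead of out.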
16 flat files, all fixed width, some with over 350 columns.
Open flat file 1,
extract the id and see if it's in table 1; if it is, update table 1 with the first 30 columns,
otherwise insert the first 30 columns into table 1.
Go to table 2, look up the id, insert/update the next 30 columns... etc., etc., for 10 different tables.
So I've got my flat file source, I do a Derived Column to convert the dates, I've got a Lookup for table 1, then 2 OLE DB Commands: 1 for the update if the lookup succeeds, 1 for the insert if the lookup fails.
How can I pass the id as a parameter into the update command so it updates where x = 'x'?
Also, I need a pointer on doing the next lookup, e.g. table 2; would I do this as some sort of loop?
If you can help, great, but please don't just reply with "I'd use this object" and no explanation of how.
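On the parameter question: the OLE DB Command's SqlCommand takes ? placeholders, which are then mapped to pipeline columns on its Column Mappings tab. A hedged sketch (the column names are illustrative):

    UPDATE dbo.Table1
    SET    Col1 = ?,  -- Param_0, mapped to the first pipeline column
           Col2 = ?   -- Param_1, mapped to the second pipeline column
    WHERE  id = ?;    -- Param_2, mapped to the id column from the file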
Probably a stupid and regularly asked question, but I can't seem to find an answer, so here goes:
we have 16 .txt files, some with over 350 columns.
The info from each individual file needs importing into multiple SQL tables.
We need to look at SQL table 1: does the record exist? If not, create it, then add the data for the first 20 columns once it's been transformed (e.g. dates from yyyymmdd into datetime values, which I managed to get using a Derived Column); otherwise, update those 20 columns...
Then look at SQL table 2 and repeat for the next n columns...
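In plain T-SQL, that per-table step is roughly the following hedged sketch (table and column names are illustrative; the @variables stand in for values read from the file row; style 112 is SQL Server's yyyymmdd date style):

    DECLARE @Id int, @JobDate char(8);  -- would come from the current file row

    IF EXISTS (SELECT 1 FROM dbo.Table1 WHERE Id = @Id)
    BEGIN
        UPDATE dbo.Table1
        SET JobDate = CONVERT(datetime, @JobDate, 112)  -- yyyymmdd -> datetime
        WHERE Id = @Id
    END
    ELSE
    BEGIN
        INSERT INTO dbo.Table1 (Id, JobDate)
        VALUES (@Id, CONVERT(datetime, @JobDate, 112))
    END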
So I was wondering: is it going to be better to write this as a dtsx package? If so, can you point me to an example?
Or should I just write the code as part of a code-behind page that scrapes the info and does a standard update/insert procedure?
We are trying to use the SSIS Import/Export Wizard to import the flat files (CSV format) that we have into MS SQL Server 2005 database tables. We have a huge number of CSV files. Is there a way we can import these flat (CSV) files into the corresponding SQL Server tables in a single shot? I would really appreciate the help, as it is painful to convert each and every file using the Import/Export Wizard.
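One hedged alternative to the wizard: drive BULK INSERT from a list of file/table pairs (the dbo.ImportList driver table and the WITH options are illustrative assumptions; each CSV is assumed to match its target table's layout):

    DECLARE @f varchar(260), @t sysname, @sql nvarchar(max);
    DECLARE file_list CURSOR FOR
        SELECT [FileName], TableName FROM dbo.ImportList;  -- hypothetical driver table
    OPEN file_list;
    FETCH NEXT FROM file_list INTO @f, @t;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'BULK INSERT ' + QUOTENAME(@t)
                 + N' FROM ''' + @f + N''''
                 + N' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
        EXEC (@sql);
        FETCH NEXT FROM file_list INTO @f, @t;
    END
    CLOSE file_list;
    DEALLOCATE file_list;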
I have a text file which contains data that has to be inserted into multiple tables. The column names of table 1 form the H1 record, followed by details D1,D1,D1...; the column names of table 2 form the H2 record, followed by details D2,D2,D2; and so on, and similarly for table 3. I am using a linked server to the file directory, with a schema.ini that defines the column names for the text file.
Is there any way of defining column names for more than one table through schema.ini? Or is there any other way I can parse the text file's contents into multiple tables?
Sample text file:
H1,JobDate,JobNumber,FileName
D1,13/02/2008,asdf123,text1.txt
D1,13/02/2008,asdf123,text2.txt
D1,13/02/2008,asdf123,text3.txt
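One hedged alternative to the linked server: stage the whole file and route lines by their record-type prefix; each raw table can then be parsed against its own column list (the staging and *_Raw table names are illustrative):

    CREATE TABLE #stage (line varchar(4000));
    BULK INSERT #stage FROM 'C:\data\sample.txt' WITH (ROWTERMINATOR = '\n');

    INSERT INTO dbo.Table1_Raw (line) SELECT line FROM #stage WHERE line LIKE 'D1,%';
    INSERT INTO dbo.Table2_Raw (line) SELECT line FROM #stage WHERE line LIKE 'D2,%';
    INSERT INTO dbo.Table3_Raw (line) SELECT line FROM #stage WHERE line LIKE 'D3,%';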
I want to combine a series of outputs from T-SQL queries into a single flat file destination using SSIS.
Does anyone have an inkling of how I would do this?
I know that I can configure a flat file connection manager to accept the output from the first OLE DB source, but I am having difficulty with the subsequent queries.
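One hedged approach: make the queries union-compatible by casting everything to a common layout, so a single OLE DB source feeds the one flat file destination (names and types are illustrative):

    SELECT CAST(CustomerId AS varchar(20)) AS col1, CustomerName AS col2
    FROM dbo.Customers
    UNION ALL
    SELECT CAST(OrderId AS varchar(20)), OrderStatus
    FROM dbo.Orders;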
I have a package that contains three database tables (header, detail and trailer records), each connected via an OLE DB source in SSIS. Each table varies in the number of columns it holds, and none of the tables have the same field names. I need to transfer all data, from each table, in order, to a flat file destination.
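A hedged T-SQL sketch of one way to get that ordering: flatten every table to a single text column and tag each row with a sort key (all names are illustrative):

    SELECT 1 AS sort_key, 'H|' + HeaderField1 AS line FROM dbo.Header
    UNION ALL
    SELECT 2, 'D|' + DetailField1 FROM dbo.Detail
    UNION ALL
    SELECT 3, 'T|' + TrailerField1 FROM dbo.Trailer
    ORDER BY sort_key;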
My requirement: the source database has 5 tables (Emp, Loc, Dept, Time, Product), and the destination is a single Excel file. How can an SSIS package dynamically load each table's information into its own sheet?
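Outside SSIS, one hedged option is writing each table to its own sheet with OPENROWSET (this assumes 'Ad Hoc Distributed Queries' is enabled, the Jet provider is available, and the workbook already exists with matching sheets and header rows):

    INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                           'Excel 8.0;Database=C:\out\report.xls',
                           'SELECT * FROM [Emp$]')
    SELECT * FROM dbo.Emp;
    -- repeat per table: [Loc$], [Dept$], [Time$], [Product$]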
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields. If they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within SQL Server's maximum row length.
Can SSIS be used to create these two tables? (One without the Description fields, the other with those fields but arranged vertically in the table rows.)
The fundamental issue is that I cannot import a single file row into a SQL table, because that row's length could exceed the maximum byte count for a row.
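A hedged sketch of the two-table layout (all names are illustrative): the short columns stay horizontal, and each long Description field becomes one row in a vertical table:

    CREATE TABLE dbo.Item (
        ItemId int PRIMARY KEY,
        Col1   varchar(50),
        Col2   varchar(50)
        -- ... the remaining ~5000 bytes of short columns
    );

    CREATE TABLE dbo.ItemDescription (
        ItemId     int NOT NULL REFERENCES dbo.Item (ItemId),
        FieldName  varchar(128) NOT NULL,  -- which Description column this row holds
        FieldValue varchar(8000) NULL,
        PRIMARY KEY (ItemId, FieldName)
    );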
I have multiple XML data files in a directory, say C:\XMLData: abc1.xml, abc2.xml, abc3.xml, etc.
I need to loop through each file in SSIS with a Foreach Loop container, get the file name (say abc1), and load the data of abc1.xml into the abc1 table in the SQL Server DB.
The next iteration will pick up abc2.xml, find the abc2 table in the SQL Server DB, and insert the data into the abc2 table.
On each iteration, the XML source should also point to the corresponding XSD file.
The tables are already created in the DB.
I solved my problem as far as getting the file name from each iteration and assigning it to a variable; in the OLE DB destination's data access mode I select "Table name or view name variable", and then the corresponding table gets selected for data insertion.
I just want to know how I can read each XSD file for each XML data file during iteration.
Hi! I have a general SQL CE v3.5 design question related to table/file layout. I have a system with multiple tables that fall into categories of data access. The 3 categories of data access are:
1 is for configuration-related data. There is one application that will read/write the data, and a second application that will read the data on startup.
1 is for high-performance temporal storage of data. The data objects are all the same type, but they are our own custom object and not just simple types.
1 is for logging, where the data will be permanent, unless the configured size/recycling settings cause a resize or cleanup. There will be one application writing a lot of data (potentially, depending on log settings), and another application searching/reading sections of data.
When working with data and designing the layout, I like to approach things from a data-centric mindset, because this seems to result in a better performing system. That said, I am thinking about using 3 individual SDF files for the above data access scenarios, as opposed to a single SDF with multiple tables. I'm thinking this would provide better performance in SQL CE because the query engine would not have a lot of different types of queries going against the same database file. For instance, the temporal storage is basically reading/writing/deleting varying amounts of data, and this is different from the logging, where the log can grow pretty large, definitely bigger than the default 128 MB. So it seems logical to manage them separately.
I would greatly appreciate any suggestions from the SQL CE experts with regard to my approach. If there are any tips/tricks with respect to different data access scenarios - taking into account performance, type of data access, etc. - I would love to take a look at that.
I'm not really sure my question belongs here...
I have a database in Access (from Microsoft Office, of course), and I want to convert it to SQL Server. Does anyone know how I can do it? It must be very simple, but I haven't found it yet...
I have a table called DISTRIBUTION_SL in which there will be multiple records for each request, as shown in the data below. I would like to convert the recipient emails into columns and then join with other tables, so that the entire request's information is seen in one row rather than multiple rows.
CREATE TABLE [dbo].[DISTRIBUTION_SL](
    [requestid] [numeric](18, 0) NOT NULL,
    [rptdis] [char](1) NOT NULL,
    [recipnm] [varchar](50) NULL,
    [emailadr] [varchar](75) NULL,
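A hedged sketch of the pivot (it caps at three recipients; extend the CASE list as needed, and the tie-breaking ORDER BY is an assumption):

    WITH numbered AS (
        SELECT requestid, emailadr,
               ROW_NUMBER() OVER (PARTITION BY requestid ORDER BY emailadr) AS rn
        FROM dbo.DISTRIBUTION_SL
    )
    SELECT requestid,
           MAX(CASE WHEN rn = 1 THEN emailadr END) AS email1,
           MAX(CASE WHEN rn = 2 THEN emailadr END) AS email2,
           MAX(CASE WHEN rn = 3 THEN emailadr END) AS email3
    FROM numbered
    GROUP BY requestid;

The result (one row per requestid) can then be joined to the other request tables.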
I have a requirement where I have around 15 different flat files; the filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or a separate package? In addition to this, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and upload only the new records.
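For the product rule, a hedged sketch assuming each file lands in a staging table first (the staging table and column names are illustrative):

    INSERT INTO dbo.Product (ProductID, ProductName)
    SELECT s.ProductID, s.ProductName
    FROM dbo.Product_Staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Product AS p
                      WHERE p.ProductID = s.ProductID);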
I have just taken over the job of sorting out a rather poorly designed database. It looks like it was 'upsized' from an Access database to SQL Server. The SQL Server is the 2000 version.
Now I am trying to generate a report of what the students in the database owe by referencing the Receipt table and then all the available payment methods and allocations. I was wondering if there was any way to work around data being displayed twice (let me demonstrate).
Note 1: all the tables are linked by a ReceiptNo key. From what I can see, there is a table for every payment type and allocation, but no link between the two other than the receipt number.
Using the query:

SELECT T_Receipt.ReceiptNo,
       T_cheque.Amount AS Chq_Amount,
       T_credit.Amount AS Cre_Amount,
       StandingOrder.Amount AS Stn_Amount,
       T_BankTransfer.amount AS Bnk_Amount,
       T_cash.TotalAmount AS Cas_Amount,
       T_RentPayment.AmountPayed AS Ren_Paid,
       T_AdminPayment.AmountPaid AS Adm_Paid,
       T_InternetBilling.Total AS Int_Paid,
       T_Utilities.AmountPaid AS Util_Amount,
       T_InvoicePayment.amountPaid AS Inv_Paid,
       T_OtherPayments.paymentAmount AS Oth_Paid,
       T_parkingBill.paymentAmount AS Prk_Paid,
       T_TelephoneBills.TelephoneCredit AS Tel_Paid,
       T_DepositPayment.[Deposit payment] AS Dep_Amount,
       T_Receipt.cancelled AS Canceled,
       T_Receipt.RemittanceReceiptNo AS Rec_Ref,
       T_Receipt.Student
FROM T_Receipt
INNER JOIN T_DepositPayment ON T_Receipt.ReceiptNo = T_DepositPayment.receiptNo
LEFT OUTER JOIN T_RentPayment ON T_Receipt.ReceiptNo = T_RentPayment.RentPaymentNo
LEFT OUTER JOIN StandingOrder ON T_Receipt.ReceiptNo = StandingOrder.ReceiptNo
LEFT OUTER JOIN T_TelephoneBills ON T_Receipt.ReceiptNo = T_TelephoneBills.ReceiptNo
LEFT OUTER JOIN T_parkingBill ON T_Receipt.ReceiptNo = T_parkingBill.ReceiptNo
LEFT OUTER JOIN T_OtherPayments ON T_Receipt.ReceiptNo = T_OtherPayments.ReceiptNo
LEFT OUTER JOIN T_InvoicePayment ON T_Receipt.ReceiptNo = T_InvoicePayment.receiptNo
LEFT OUTER JOIN T_cash ON T_Receipt.ReceiptNo = T_cash.ReceiptNo
LEFT OUTER JOIN T_AdminPayment ON T_Receipt.ReceiptNo = T_AdminPayment.ReceiptNo
LEFT OUTER JOIN T_BankTransfer ON T_Receipt.ReceiptNo = T_BankTransfer.receiptNo
LEFT OUTER JOIN T_Utilities ON T_Receipt.ReceiptNo = T_Utilities.receiptNo
LEFT OUTER JOIN T_credit ON T_Receipt.ReceiptNo = T_credit.ReceiptNo
LEFT OUTER JOIN T_cheque ON T_Receipt.ReceiptNo = T_cheque.ReceiptNo
LEFT OUTER JOIN T_InternetBilling ON T_Receipt.ReceiptNo = T_InternetBilling.ReceiptNo
GROUP BY T_Receipt.Student, T_Receipt.ReceiptNo, T_cheque.Amount, T_credit.Amount,
         StandingOrder.Amount, T_BankTransfer.amount, T_cash.TotalAmount,
         T_AdminPayment.AmountPaid, T_InternetBilling.Total, T_Utilities.AmountPaid,
         T_InvoicePayment.amountPaid, T_OtherPayments.paymentAmount,
         T_parkingBill.paymentAmount, T_TelephoneBills.TelephoneCredit,
         T_Receipt.cancelled, T_Receipt.RemittanceReceiptNo,
         T_DepositPayment.[Deposit payment], T_RentPayment.AmountPayed
HAVING (T_Receipt.Student LIKE N'06%')
Which gives a result of:
RecNo.   Cheque   Deposit
30429    250      250
30429    679.98   250
This is fine, but when I do analysis on it, it appears as though the student has paid two deposit payments. I was wondering whether, without querying each table independently from an application, there is a way to specify that I only get one deposit result. So as to say: give me all the payments, but I only want one result from the other tables. I thought about DISTINCT, but that wouldn't work here.
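One hedged fix for that fan-out: aggregate each payment table down to one row per receipt before joining, so the deposit amount cannot repeat. A sketch with just two of the payment tables (the same pattern would apply to each):

    SELECT r.ReceiptNo,
           chq.Amount AS Chq_Amount,
           dep.Amount AS Dep_Amount
    FROM T_Receipt AS r
    LEFT OUTER JOIN (SELECT ReceiptNo, SUM(Amount) AS Amount
                     FROM T_cheque GROUP BY ReceiptNo) AS chq
        ON r.ReceiptNo = chq.ReceiptNo
    LEFT OUTER JOIN (SELECT receiptNo, SUM([Deposit payment]) AS Amount
                     FROM T_DepositPayment GROUP BY receiptNo) AS dep
        ON r.ReceiptNo = dep.receiptNo
    WHERE r.Student LIKE N'06%';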
I have 7 source databases and one target database, all using the same structure. The structure is made of 10 tables, with foreign key constraints.
I need to merge the source databases into the target (which won't have any data before that process, but will already have the correct schema), and to keep the relationships between the records.
I know how to iterate over the source databases (with an SMO foreach), but I'd like to know if someone can advise the best copy method for that context in SSIS. (I don't want to keep the primary keys, but I need to keep the relationships...)
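One hedged sketch of the usual key-remapping trick for one parent/child pair (assumes SQL Server 2008's MERGE ... OUTPUT, an IDENTITY key on the target parent, and illustrative names throughout):

    DECLARE @map TABLE (old_id int, new_id int);

    -- ON 1 = 0 never matches, so every source row is inserted, and OUTPUT
    -- can capture the old key alongside the newly generated one
    MERGE INTO TargetDb.dbo.Parent AS t
    USING SourceDb.dbo.Parent AS s ON 1 = 0
    WHEN NOT MATCHED THEN
        INSERT (Name) VALUES (s.Name)
    OUTPUT s.ParentId, inserted.ParentId INTO @map (old_id, new_id);

    -- Children follow, rewired through the mapping
    INSERT INTO TargetDb.dbo.Child (ParentId, Detail)
    SELECT m.new_id, c.Detail
    FROM SourceDb.dbo.Child AS c
    JOIN @map AS m ON m.old_id = c.ParentId;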
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow destination accept its table name dynamically, which seems to prevent me from doing this.
I suppose I could make a different Data Flow destination for each file in the Data Flow. Would that be a reasonable solution, or is there a simpler one, or should I just resign myself to making a separate Data Flow for every single file?
Hello, I am building a survey application. I have 8 questions: a textbox for the call reference, a dropdown menu to choose the support method, radio button lists for customer satisfaction questions 1-5, and a multiline textbox for other comments. I want to insert the textbox and dropdown menu values into a DB table, then insert each question's score into a score column, with each question having an ID. I envisage that to do this I will need an insert query for the textbox and dropdown list, and then an insert for each question based on ID and score. Please help me! Thanks, Andrew
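A hedged sketch of those inserts (table and column names are illustrative; the @variables stand in for the page values, and the score insert runs once per question):

    -- Header row: call reference, support method, comments
    INSERT INTO dbo.Survey (CallRef, SupportMethod, Comments)
    VALUES (@CallRef, @SupportMethod, @Comments);

    DECLARE @SurveyId int;
    SET @SurveyId = SCOPE_IDENTITY();  -- key of the row just inserted

    -- One row per question, keyed by question ID
    INSERT INTO dbo.SurveyScore (SurveyId, QuestionId, Score)
    VALUES (@SurveyId, @QuestionId, @Score);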
I need to be able to bulk insert a bunch of tables from their corresponding flat files. I have created an XML file (see below) which has a file name/table name pair at each node. I then created a Foreach Loop task, used the Node enumeration type, and set the following OuterXPathString: ReferenceFiles/File. At this point I get lost. How do I pass the two inner node values (file name and table name) to variables which I can then use as expressions for the Bulk Insert task inside the Foreach?
I used the Data Export Wizard to export a single table to a single flat file (multiple tables weren't allowed). I saved the package as a *.dtsx file, which I'm attempting to edit to add the additional tables.
Creating additional sources is fairly easy: copy the first source and change the table name.
I've tried copying the destination connection and changing it to a new text file, but I can't get past having to add each column manually to the new destination.
How can I duplicate the mapping that must be taking place in the wizard in the *.dtsx editing environment?
This seems like a simple / common task, but I've been unable to find a solution.
Hello, we have an Oracle transition in the pipeline and want to convert all our database objects to upper case. Has anyone got a script or technique (other than manual) to do it? Many thanks, Kevin.
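A hedged starting point for tables only (assumes SQL Server 2005+ for sys.tables; it generates sp_rename calls to review and run, and the case-sensitive COLLATE makes the comparison work on case-insensitive servers):

    SELECT 'EXEC sp_rename ' + QUOTENAME(name, '''') + ', '
         + QUOTENAME(UPPER(name), '''') + ';'
    FROM sys.tables
    WHERE name <> UPPER(name) COLLATE Latin1_General_CS_AS;

Similar passes would be needed for columns, procedures, views, and so on.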
I am trying to convert the rows in a table to columns. I have found similar threads on the forum addressing this issue at a high level, suggesting the use of cursors, the PIVOT transform, and other means. However, I would appreciate it if someone could provide a concrete T-SQL example for the following subset of my problem.
Consider that we have Product Category, Product, and monthly sales information retrieved as follows:
I have purposefully included QtySold here, as I need to display both Quantity and Sales as measure column groups in my report. Can this be achieved in SQL? I would appreciate any responses.
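A hedged concrete example against an assumed schema (Category, Product, SalesMonth, SalesAmount, QtySold in dbo.MonthlySales). PIVOT proper only takes one aggregate at a time, which is why the CASE form is handy when Quantity and Sales are both needed:

    SELECT Category,
           Product,
           SUM(CASE WHEN SalesMonth = 1 THEN SalesAmount END) AS JanSales,
           SUM(CASE WHEN SalesMonth = 1 THEN QtySold     END) AS JanQty,
           SUM(CASE WHEN SalesMonth = 2 THEN SalesAmount END) AS FebSales,
           SUM(CASE WHEN SalesMonth = 2 THEN QtySold     END) AS FebQty
    FROM dbo.MonthlySales
    GROUP BY Category, Product;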
I am trying to query the Topics in my discussion forum... The Topic contains a "last_poster_id" and an "author_id". I need the username and userid for both "last_poster_id" and "author_id" from the table "aspnet_Users". How do I do this? I would guess I need to use sub-select statements. Can someone help me?
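No sub-selects needed: join aspnet_Users twice under different aliases. A hedged sketch (the Topics table and its key column are illustrative):

    SELECT t.TopicId,
           au.UserId   AS AuthorId,
           au.UserName AS AuthorName,
           lp.UserId   AS LastPosterId,
           lp.UserName AS LastPosterName
    FROM dbo.Topics AS t
    INNER JOIN dbo.aspnet_Users AS au ON au.UserId = t.author_id
    INNER JOIN dbo.aspnet_Users AS lp ON lp.UserId = t.last_poster_id;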
Hi, I am trying to build a search engine with 11 parameters across 4 different tables in the database. For example, in search.aspx I have 11 textboxes, namely nameTextbox, phoneTextbox, nationalityTextbox, ageTextbox, etc. And in the result.aspx page I have a GridView which shows data from the database if the search matches. I wrote this stored procedure (P.S. please ignore the syntax):

@name varchar(30),
@nationality varchar(30),
@phone int
-- etc.
AS
SELECT a.UserId, b.UserId, c.UserId
FROM Table1 a, Table2 b, Table3 c
WHERE name LIKE '%' + @name + '%'
   OR nationality LIKE '%' + @nationality + '%'
   OR phone LIKE '%' + CAST(@phone AS varchar(20)) + '%'
-- etc.
But I got an error when trying to execute this code because of the NULL values, so I wrote:

@name varchar(30),
@nationality varchar(30),
@phone int
-- etc.
AS
SELECT a.UserId, b.UserId, c.UserId
FROM Table1 a, Table2 b, Table3 c
WHERE name LIKE '%' + ISNULL(@name, '') + '%'
   OR nationality LIKE '%' + ISNULL(@nationality, '') + '%'
   OR phone LIKE '%' + ISNULL(CAST(@phone AS varchar(20)), '') + '%'
-- etc.
But the error still exists. What is the best way to search for multiple parameters across multiple tables?
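The usual pattern is to let NULL mean "don't filter on this field" rather than concatenating into LIKE. A hedged sketch against one table (extend per table and column; whether the conditions combine with AND or OR depends on the search semantics you want):

    SELECT a.UserId
    FROM Table1 a
    WHERE (@name IS NULL OR a.name LIKE '%' + @name + '%')
      AND (@nationality IS NULL OR a.nationality LIKE '%' + @nationality + '%')
      AND (@phone IS NULL OR a.phone = @phone);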
Hello, I am in the process of designing a new section of my database, and I was thinking of creating a whole new database instead of just creating tables inside the existing database. My question is: can you JOIN multiple tables from multiple databases in an SQL statement? I.e., in the management program I have a database called 'Convention' and another one called 'Services', and inside the two databases there are many tables. Can I link, say, tblRegister from Convention to tblUser in Services? Thanks
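Yes, as long as both databases live on the same server, three-part names do it. A hedged sketch (the join column is an illustrative guess):

    SELECT r.UserID, u.UserName
    FROM Convention.dbo.tblRegister AS r
    INNER JOIN Services.dbo.tblUser AS u
        ON u.UserID = r.UserID;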