How To Bulk Insert A File With Fixed Row Length And No Row Terminator?
Jul 16, 2004
Hi All,
I have a file that has a fixed row size of 148 and a fixed column size, but the file has no end-of-line character. I know it is weird, but a client has made the file and refuses to change the format, so I am stuck with reading it the way it is.
In Enterprise Manager, I used the Import/Export wizard and I specified fixed length, and it let me specify 148 as the length of each line. Then it recognized the file and I was able to read it in.
I saved the DTS package and I can run it over and over again using dtsrun. However I want to do the same thing using Bulk Insert. How do you specify fixed row length for Bulk insert and how do you give it individual column lengths?
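One way to get this behavior, as a sketch: a bcp format file in which every field has a fixed length and an empty terminator ("") makes BULK INSERT consume exactly 148 bytes per record without ever looking for a row terminator. The table name, path, column names, and the 100/48 split below are assumptions for illustration; the real lengths just have to sum to 148.

fixed148.fmt (hypothetical two-column layout):

8.0
2
1 SQLCHAR 0 100 "" 1 Col1 SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 48  "" 2 Col2 SQL_Latin1_General_CP1_CI_AS

The load then needs no terminator options at all:

BULK INSERT dbo.MyTable
FROM 'C:\data\fixed148.dat'
WITH (FORMATFILE = 'C:\data\fixed148.fmt');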
I'm trying to do an insert using Bulk Insert with a fixed length file. I'm using a format file. I'm getting the following error message:

Cannot perform bulk insert. Invalid collation name for source column 16 in format file '\\wbhq.com\dfs\dvi\DataInt\GOPFiles\GOPFormatFile.txt'.

Any suggestions are appreciated. Thanks! Jennifer

Format File Contents:

8.0
16
1  SQLCHAR  0  2    ""   0  Space        ""
2  SQLCHAR  0  4    ""   1  YearID       ""
3  SQLCHAR  0  2    ""   0  Space        ""
4  SQLCHAR  0  2    ""   2  PeriodID     ""
5  SQLCHAR  0  2    ""   3  CompanyID    ""
6  SQLCHAR  0  1    ""   0  Dash         ""
7  SQLCHAR  0  4    ""   0  Space        ""
8  SQLCHAR  0  1    ""   0  Dash         ""
9  SQLCHAR  0  4    ""   4  UnitID       ""
10 SQLCHAR  0  1    ""   0  Dash         ""
11 SQLCHAR  0  4    ""   5  AccountCode  ""
12 SQLCHAR  0  5    ""   0  Space        ""
13 SQLCHAR  0  1    ""   6  AccountType  ""
14 SQLCHAR  0  29   ""   0  Space        ""
15 SQLCHAR  0  16   ""   7  GLAmount     ""
16 SQLCHAR  0  105  " "  0  Space        ""

Bulk Insert Statement:

BULK INSERT FlatFile_GOP
FROM '\\wbhq.com\dfs\dvi\DataInt\GOPFiles\GLPAM.GOP'
WITH (FORMATFILE = '\\wbhq.com\dfs\dvi\DataInt\GOPFiles\GOPFormatFile.txt')

Table Definition:

CREATE TABLE [dbo].[FlatFile_GOP] (
    [YearID] [smallint] NOT NULL,
    [PeriodID] [smallint] NOT NULL,
    [CompanyID] [smallint] NOT NULL,
    [UnitID] [smallint] NOT NULL,
    [AccountCode] [int] NOT NULL,
    [AccountType] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [GLBalance] [money] NOT NULL
) ON [PRIMARY]
GO

File Contents:

2007 210- -0002-3000 G196395.10
2007 210- -0002-3700 B1484.00
2007 210- -0002-3700 G1571.13
2007 210- -0002-3800 B157457.00
2007 210- -0002-3800 G161577.73
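For what it's worth: in a non-XML format file the eighth token on each line must be a valid collation name or an empty pair of quotes, so "Invalid collation name for source column 16" usually points at the token layout of that last line. One guess worth checking, assuming the data rows end in CR/LF, is to make the final field's terminator the row terminator rather than a quoted space:

16 SQLCHAR 0 105 "\r\n" 0 Space ""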
Similar to a previous post (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=244646&SiteID=1), I am trying to import data into a SQL Table.
I am trying to program a small application that will import product data obtained through suppliers via CD-ROM. One supplier in particular uses fixed-width columns, and the data looks like this:
Example of Data
0124015Apple Crate 32.12
0124016Bananna Box 12.56
0124017Mango Carton 15.98
0124018Seedless Watermelon 42.98

My table would then have:
ProductID as int
Name as text
Cost as money
How would I go about extracting the data with an XML Format file? I am stumbling over how to tell it where to start picking up data for a specific column. Is there any way that I could trim the Name column (i.e.: "Mango Carton " --> "Mango Carton")?
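A possible XML format file for this layout, guessing the widths from the sample (7 characters for ProductID, 20 for Name, and the price running to the end of the line); CharFixed fields carry no terminator, and only the last field consumes the CR/LF:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharFixed" LENGTH="7"/>
    <FIELD ID="2" xsi:type="CharFixed" LENGTH="20"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="10"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="ProductID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="Name" xsi:type="SQLVARYCHAR" LENGTH="20"/>
    <COLUMN SOURCE="3" NAME="Cost" xsi:type="SQLMONEY"/>
  </ROW>
</BCPFORMAT>

As for trimming, the fixed-width field will arrive with its padding, so one option is to load into a staging table and RTRIM the Name column on the way into the real table.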
I don't know if it makes any difference, but I've been calling SQL from my code by doing this:
Code in C# Form
SqlConnection SqlConnection = new SqlConnection(global::SQLClients.Properties.Settings.Default.ClientPhonebookConnectionString);
SqlCommand cmd = new SqlCommand();
cmd.Connection = SqlConnection;   // the command must be attached to the connection
cmd.CommandText = "...";          // and given a statement before ExecuteNonQuery is called

SqlConnection.Open();
cmd.ExecuteNonQuery();
SqlConnection.Close();
RefreshData();

I am running Visual Studio C# Express 2005 and SQL Server Express 2005.
I have a CSV file that I am trying to bulk load into a temp table. The data in the file is all jumbled together, as in, there does not appear to be a row terminator. However, I do see a bunch of little rectangular boxes that I assume are the row terminators.
When I run the bulk insert, the data is treated as one string. For example... If I have 10 columns in the table, the 10 columns will be populated, but the remainder of the data is dumped into the last column.
Here are the row terminators I have used so far that haven't worked.
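If the boxes turn out to be bare line feeds (0x0A) shown by a Windows editor, two sketches that have worked elsewhere, with placeholder table and path names: recent versions of SQL Server accept hexadecimal terminator notation, and older ones can get a real LF into the statement through dynamic SQL with CHAR(10):

BULK INSERT dbo.MyTable
FROM 'C:\data\file.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a');

-- or, on versions without hex terminator support:
DECLARE @sql varchar(500);
SET @sql = 'BULK INSERT dbo.MyTable FROM ''C:\data\file.csv'' '
         + 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''' + CHAR(10) + ''')';
EXEC (@sql);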
I wasn't sure where to put this topic so I put it here since I figured it is a question that would apply to virtually any version even though I am using SQL Server 2005.
We have a vendor that sends us a fixed width text file every day that needs to be imported to our database in 3 different tables. I am trying to import all of the data to a staging table and then plan on merging/inserting select data from the staging table to the 3 tables. The file has 77 columns of data and 20,000+ records. I created an XML format file which I sampled below:
The data file is a fixed width file with no column delimiters or row delimiters that I can tell. When I run the following insert statement I get the error below it.
BULK INSERT myStagingTable FROM '.........myDataSource.txt' WITH ( FORMATFILE = '.........myFormatFile.xml', ERRORFILE = '.........errorlog.log' );
Here is the error:
Msg 4832, Level 16, State 1, Line 1 Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
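With a fixed-width file, Msg 4832 very often means the record length implied by the format file doesn't match the file itself: if there really is no row delimiter, every FIELD in the XML format file should be CharFixed, with the lengths summing exactly to the record size, and no final CharTerm field waiting for a terminator that never arrives. A trimmed-down illustration of the shape (the IDs and lengths are placeholders, not the real 77-column layout):

<RECORD>
  <FIELD ID="1" xsi:type="CharFixed" LENGTH="10"/>
  <FIELD ID="2" xsi:type="CharFixed" LENGTH="25"/>
  <!-- ...and so on, all CharFixed, lengths summing to the full record size -->
</RECORD>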
Hi there, I'm trying to import a COBOL file (.dat) which has a line feed as the row delimiter. Using the Transact-SQL Bulk Insert with a row terminator of '\n' is not working for me. Does anyone know the equivalent row terminator of a LF? (Using the Import Export wizard I supply a {LF} and it likes that fine.) I would like to use the Bulk Insert statement for more control of the data. Any help is greatly appreciated.
Hello all, I have multiple text files with an odd row terminator. If you were to examine it in VB it would be like a "CrCrLf" instead of just "CrLf". In hex it is 0D 0D 0A instead of just 0D 0A. When I am trying to import into my table using BULK INSERT I use "\r\n" as the row terminator, but that is putting the previous character into the column, and then it signals a carriage return when I attempt to query the data. Any suggestions on what I should use as the row terminator? Is it possible to tell BULK INSERT to use something like "CHAR(10)"? "\n" does NOT work. Thanks in advance.
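On the CHAR(10) question: not in a literal directly, but the statement can be built dynamically so the terminator string contains the actual control characters. A sketch for the Cr-Cr-Lf case, with hypothetical table and path names:

DECLARE @sql varchar(500);
SET @sql = 'BULK INSERT dbo.MyTable FROM ''C:\data\file.txt'' '
         + 'WITH (ROWTERMINATOR = ''' + CHAR(13) + CHAR(13) + CHAR(10) + ''')';
EXEC (@sql);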
I can't use DTS nor the DTS wizard, as I need to put it in a .sql file and run it through a command line via a .bat file (it's more for the users).
Each row ends with an EOL character; the fields are all fixed width, but I have a little problem here: some rows are empty, with just an EOL character.
Hi, I have a data file which consists of data as below:

4
PPU_FFA7485E0D||
T_GLR_DET_11||
While I am inserting into the table using bulk insert, this pipe (||) is also getting inserted into the table. Here is the query I am using to insert the data using bulk insert:
BULK INSERT TABLE_NAME FROM 'FILE_PATH'
WITH (FIELDTERMINATOR = '||', KEEPNULLS, FIRSTROW = 2, ROWTERMINATOR = '\n')
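Since each data row appears to end with '||' immediately before the line break, one hedged fix is to fold those trailing pipes into the row terminator so they never reach the last column. Built dynamically so a real line feed can be embedded (table and path names are placeholders):

DECLARE @sql varchar(500);
SET @sql = 'BULK INSERT dbo.MyTable FROM ''C:\data\file.txt'' '
         + 'WITH (FIELDTERMINATOR = ''||'', KEEPNULLS, FIRSTROW = 2, '
         + 'ROWTERMINATOR = ''||' + CHAR(10) + ''')';
EXEC (@sql);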
I need to write data into a fixed column length file and was wondering the best (most efficient) way to tackle this. For example, the first few pieces of the report I'm working on now would be:
PacketID - Starting position 1, field length 9
TransactionID - Starting position 10, field length 9
Group number - Starting position 19, field length 10
PID/SSN - Starting position 29, field length 10
For the PID/SSN, if I have a PID it'll be 10 digits and fill the field length, if I don't I use SSN which is only 9 digits and enter a space as the 10th digit. Obviously if I don't have certain pieces of information I'll just need spaces of the specified length to satisfy the file format. I'm using SQL 2005. Thanks in advance for any help provided.
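A sketch of the padding logic, assuming varchar source columns and made-up table and column names: LEFT(value + spaces, width) pads or truncates every field to its exact width, and the PID/SSN rule falls out of a COALESCE (a 9-digit SSN plus one space fills the 10-character slot, and missing values become all spaces):

SELECT LEFT(PacketID      + REPLICATE(' ', 9), 9)
     + LEFT(TransactionID + REPLICATE(' ', 9), 9)
     + LEFT(GroupNumber   + REPLICATE(' ', 10), 10)
     + LEFT(COALESCE(PID, SSN + ' ', '') + REPLICATE(' ', 10), 10) AS fixed_row
FROM dbo.PacketSource;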
I have approximately 13 columns. Each column has a start position and an end position. I created this in a table and defined the positions, but it's still not working for me.
FiceCode char(6),      -- starting position 1, field length 6
StateStudID char(10),  -- starting position 7, field length 10
CampusStudID char(10), -- starting position 17, field length 10
LastName char(25),     -- starting position 27, field length 25
[Code] .....
I need a text output file that will define each start position. I also used right(replicate('0',25) + cast(last_name as char(25)), 25) in my SQL statement. When I add the first_name, I can't get it to start in position 52.
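For left-justified fields, casting to char(n) may be all that's needed: char pads with trailing spaces to exactly n characters, so each field begins at its documented position, whereas the right(replicate('0',25) + ...) pattern right-justifies, which is one reason first_name wouldn't land on 52. A sketch against an assumed table name:

SELECT CAST(FiceCode     AS char(6))    -- positions 1-6
     + CAST(StateStudID  AS char(10))   -- positions 7-16
     + CAST(CampusStudID AS char(10))   -- positions 17-26
     + CAST(LastName     AS char(25))   -- positions 27-51
     + CAST(FirstName    AS char(25))   -- starts at position 52
       AS fixed_row
FROM dbo.Students;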
I have a fixed-length flat file that contains about 30 columns. I have got it pretty well figured out using the flat file connection tool, but I am having trouble with the end of the line.
I know when I look at the file it is a CrLf that separates the rows, and SSIS only seems to understand this to a certain extent. It knows to go to the next line, but it also adds two rectangles to the lines, like this:
While this does create a cool pattern, it is a pain in the butt. The only solution I have found is adding two more spaces to the last column in the table, but the ?s just get appended there.
If anybody has any clue how to get rid of them, that would be great.
We will be getting a new file for download from a vendor to process in the future. When I use DTS or the Import wizard in SQL Server, I get "could not find the selected row delimiter within the first 8kb of data, is the selected delimiter valid?" This is for a fixed length file. If I answer yes and continue, everything is fine until it gets to the end of the record, which it can't find. It basically lumps the records together. What is interesting is that if I import the same file into Access 2000, I don't get this problem. Anyone seen this before? I could not find anything on MSDN.
Hi, I am trying to upload a fixed-field text file to a SQL Server table using the DTS wizard. The txt file has 111 columns and the total length of a single row is 5897. The problem is when I use the wizard to specify the starting and ending of each column, it's not allowing me to specify columns beyond position 4095. Is there a limitation on this? If so, is there a workaround to solve this? Any help on this is truly appreciated.
I have a requirement to import a file of rows containing fixed length data. The problem is that each row can be one of 5 different formats (i.e. different columns) -- where the "type" of row is indicated by the first two characters of the row. Each row gets inserted into its own table.
Could I use a simple Conditional Split to route the rows? Or is the split only for routing similar rows? Anyway, problems are never this simple...
In addition, each "grouping" of rows is related. The "first" row is considered the "primary" row (and gets a row id via IDENTITY), whereas the remaining rows in the group are "secondary" rows and have foreign key references back to the primary row's id.
Given (using spaces to separate columns and CrLf to show "grouping"):
So, the first 3 lines are all related to a MSFT record which needs to be spread across multiple tables. The next three lines are all related to AAPL, And the next FOUR lines (yes, each record can have zero, one, or more secondary rows) are related to CSCO.
(If this is still not clear: all the "01" rows will be written to [Table1], with each row getting an IDENTITY value. All the "02" rows will be written to [Table2] with a FK pointing to the correct [Table1] row. All the "03" rows will be written to... and so on.)
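If the SSIS plumbing gets unwieldy, one staging-table sketch of the parent/child step, with hypothetical names throughout and assuming the load preserves file order: give every raw row an IDENTITY sequence number, then match each secondary row to the nearest preceding "01" row:

CREATE TABLE #stage (
    seq     int IDENTITY(1,1) PRIMARY KEY,   -- file order
    rectype char(2)      NOT NULL,           -- first two characters of the row
    rawdata varchar(255) NOT NULL            -- remainder, parsed later per row type
);

-- after loading: each "02" row's parent is the closest earlier "01" row
SELECT s.seq,
       s.rawdata,
       (SELECT MAX(p.seq)
        FROM #stage p
        WHERE p.rectype = '01'
          AND p.seq < s.seq) AS parent_seq
FROM #stage s
WHERE s.rectype = '02';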
I am trying to import a text file that has fixed length fields. It also has column headers repeated in the file.
The text file is delimited by <CR><LF>.
Question 1
There are certain rows that end abruptly after a column, for example:
Row 1: <col1 data>.....<col2 data>.....<col3 data><CR><LF>
Row 2: <col1 data>.....<col2 data><CR><LF>
This seems to be throwing off the text file import: row 1 imports fine, but row 2 gets ignored. This is not the behaviour I want. I want row 2 to also be imported, with a default of NULL for the columns that are not specified.
Does anyone know how to achieve this?
Question 2
This is not that serious, but currently, I do not know of a way to ignore repeating column headers. It would be nice if there was a way, but I can always resort to T-SQL based data cleaning after the import.
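For both questions, one workaround sketch, assuming the rows contain no tab characters and using placeholder names: bulk load each whole line into a one-column staging table, filter out the repeated header lines, and carve the columns out with SUBSTRING. SUBSTRING past the end of a short row returns an empty string, which NULLIF turns into the NULL default asked for in Question 1:

CREATE TABLE #raw (line varchar(1000));

BULK INSERT #raw
FROM 'C:\data\file.txt'
WITH (ROWTERMINATOR = '\r\n');

INSERT INTO dbo.Target (col1, col2, col3)
SELECT RTRIM(SUBSTRING(line, 1, 10)),
       NULLIF(RTRIM(SUBSTRING(line, 11, 10)), ''),
       NULLIF(RTRIM(SUBSTRING(line, 21, 10)), '')
FROM #raw
WHERE line NOT LIKE 'FirstHeaderWord%';   -- skip the repeated column-header rows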
I'm trying to extract data from a flat file which is as fixed-length as they come. The file has a header, which simply contains the number of records in the file, followed by the records, with no header delimiter (no CR/LF, nothing).
For example a file would look like the following:
00000003Name1Address1Name2Address2Name3Address3
So this has 3 records (indicated by the first 8 characters), each consisting of a Name and Address.
I can't see a way to extract the data using a flat file connection, unless we add a delimiter for the header (not possible at this stage). Am I wrong?
Any suggestions on a possible solution would be much appreciated - I'm thinking I'll have to write a script to parse the file manually.
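A script may not be needed: one T-SQL sketch, assuming SQL Server 2005 or later and taking the widths from the example above (8-character count header, then 5-character names and 8-character addresses; the real widths would differ), reads the whole file as a single value and slices it:

DECLARE @blob varchar(max), @count int, @i int;

-- pull the entire undelimited file in as one value
SELECT @blob = BulkColumn
FROM OPENROWSET(BULK 'C:\data\file.dat', SINGLE_CLOB) AS f;

SET @count = CAST(SUBSTRING(@blob, 1, 8) AS int);   -- '00000003' -> 3
SET @i = 0;

WHILE @i < @count
BEGIN
    INSERT INTO dbo.Target (Name, Address)
    SELECT SUBSTRING(@blob, 9 + @i * 13, 5),        -- name portion of record @i
           SUBSTRING(@blob, 9 + @i * 13 + 5, 8);    -- address portion
    SET @i = @i + 1;
END;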
When I use SQL 2000 DTS Export to create a fixed length flat file, the data rows are delimited by carriage return-line feed. This means that when I open the flat file in a text editor like UltraEdit or WordPad, the data rows break out nicely (each row ends at the max row length position and a new row starts at position 0).
But when I use SSIS to create the file, the whole file is displayed as one line in WordPad, and the data rows don't end at the max row length position in UltraEdit either. From the Flat File Connection Manager's Preview page, I can see the data rows displayed properly.
Now I wonder if the flat file destination is a true fixed length file.
I would like to read from a Text File using SSIS Integration Package.
The file has a fixed number of columns, let's say 3 columns. There is no row header and each column's length is fixed. There is no delimiter either.
Here is a sample of the file contents, with a column ruler as a hint for reading it:

123456789012345678901234567890
==============================
John     Doe       USA
Mary     Monroe    UK
Andy     Archibald Singapore
If you notice, the 1st column through the 9th is reserved for the first name, the 10th through the 19th for the last name, and the 20th through the 29th for the origin country.
Since there's no delimiter inside the flat file contents, I am having difficulty parsing this text using an SSIS package.
Please let me know if you need any necessary information.
What is the best way to import a fixed-length text file to SQL Server using SSIS?
I was trying to use a text file source and an OLE DB destination, but since the text file has no column delimiters and a different length per column and per line (it shows only one column because it's all concatenated), I cannot map it to the destination columns.
How can I import it?
Here is an example of the text file (fixed width, with a row delimiter) that I need to import into different columns...
We would like to use the bulk insert function to import large CSV files into an SSE database; however, we have serious concerns regarding giving all our users these high privileges. Is there some way around this? Can we give them the privileges temporarily, do the insert, and take them away again, or is there some other solution?
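For background: BULK INSERT requires the server-level ADMINISTER BULK OPERATIONS permission (the bulkadmin role) plus INSERT on the target table. One rough sketch of the temporary-grant idea, with a hypothetical login; a tidier long-term route is wrapping the BULK INSERT in a stored procedure signed with a certificate, so end users never hold the permission themselves:

USE master;
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\LoadUser];

-- ... the user runs the BULK INSERT here ...

REVOKE ADMINISTER BULK OPERATIONS FROM [DOMAIN\LoadUser];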
I'm trying to use Bulk Insert for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore the '\r\n' terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
I created a package to import records from a fixed length 700 byte text file to a table in a SQL Database. I used the wizard to set it up and note the byte where each column ends. I need to customize the process, as the name of the text file will change each night, so I want to be able to set the file name from the VB front end app.
I have tried modifying the Datasource property of the DTS connection to the flat file, without success. I have also tried setting a global variable for the datasource property in DTS, and assigning that from VB, similarly without luck.
Do I have to create a custom package in code from VB? If so, how do I indicate where each column in the text file ends? If I can customize the existing package, is there a specific reference that I need to set in my project that will let me control the value of the global?
We are converting an old (circa 2000) VB app that used an Access database to a C#/SQL Server 2005 database. One of the key business processes in this app was to import large quantities of data (200K+ rows at a time). The format of the text file is fixed length, and while we would love to change it, we are unable to (the user community would burn us in effigy, if not for real). I have been looking at DTS, BCP, and Bulk Insert using format files and, while I think we can get most of the way there, I am not sure if I am understanding the format file structure completely. Below is the import specification for one of our files:
FieldName  DataType  Start  Length
AFIID      C          1      5
LOCID      C          7     15
LOGDATE    C         23     11
LOGTIME    C         35      4
LOGCODE    C         40      4
HEADDIR    C         45      3
SLUGVOL    C         49      7
UNITS      C         57     10
As you can see, each column is separated by a single space, so the second column starts two beyond the length of the first column, etc. Here is an example of the data (the numbers in the first two rows are there to assist counting characters and do not exist in the real import file):

         1         2         3         4         5         6
1234567890123456789012345678901234567890123456789012345678901234567
AFP29 01-SB-01 10-AUG-2000 0900 ABB DRW 9999.99 CFS
AFP29 01-SB-01 10-AUG-2000 0900 ABB DRW 9999.99 CFS
AFP29 01-SB-02 30-DEC-2000 0900 ABB DRW 9999.99 CFS
AFP29 XX-XX 10-AUG-2000 1000 ABB DRW 123.23 CFS
C123 01-SB-01 10-AUG-2000 0900 ABB DRW 9999.99 CFS
AFP29 10-AUG-2000 0900 ABB DRW 9999.99 CFS
AFP29 01-SB-01 0900 ABB DRW 9999.99 CFS
AFP29 01-SB-01 SDKJFDKL 0900 ABB DRW 9999.99 CFS
AFP29 01-SB-01 10-AUG-2000 ABB DRW 9999.99 CFS
AFP29 01-SB-01 10-AUG-2000 900 ABB DRW 9999.99 CFS
AFP29 01-SB-01 10-AUG-2000 0900 ABB DRW DFDSF CFS
AFP29 01-SB-01 10-AUG-2000 0900 ABB DRW 9999.99 CFS
From what I have read about the structure of format files, you start with the host file field order number, then the host file data type, then prefix length, then field length, then field terminator, then database column order, then database column name, and finally the collation. This looks like it would work great for delimited files, but in my case there are no delimiters. If I don't include a terminator ("") would the following format file spec import my data correctly?

9.0
8
1 SQLCHAR 0 5  ""  1 AFIID   ""
2 SQLCHAR 0 15 ""  2 LOCID   ""
3 SQLCHAR 0 11 ""  3 LOGDATE ""
4 SQLCHAR 0 4  ""  4 LOGTIME ""
5 SQLCHAR 0 4  ""  5 LOGCODE ""
6 SQLCHAR 0 3  ""  6 HEADDIR ""
7 SQLCHAR 0 7  ""  7 SLUGVOL ""
8 SQLCHAR 0 10 " " 8 UNITS   ""
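As a guess, not quite: with "" terminators those lengths stop one byte short of each separator space, so every field after the first would shift left. One hedged revision widens each field by one character to swallow the trailing space (RTRIM can clean it up after the load) and, assuming the rows end in CR/LF, lets the last field consume the line break:

9.0
8
1 SQLCHAR 0 6  ""     1 AFIID   ""
2 SQLCHAR 0 16 ""     2 LOCID   ""
3 SQLCHAR 0 12 ""     3 LOGDATE ""
4 SQLCHAR 0 5  ""     4 LOGTIME ""
5 SQLCHAR 0 5  ""     5 LOGCODE ""
6 SQLCHAR 0 4  ""     6 HEADDIR ""
7 SQLCHAR 0 8  ""     7 SLUGVOL ""
8 SQLCHAR 0 10 "\r\n" 8 UNITS   ""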
Hello All, Can someone tell me how (in SQL) to convert an integer to a fixed length character filled with leading zeros. For example, I have an integer value of '125'. My user wants to see it displayed as '00000125'. How do I get the zeroes to fill in to a char(8) field when the length of the value differs, ie. '1', '125', '3452', etc.
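The usual idiom is to pad on the left with zeros and keep the rightmost 8 characters; this works for any value length up to 8:

DECLARE @value int;
SET @value = 125;

SELECT RIGHT(REPLICATE('0', 8) + CAST(@value AS varchar(8)), 8) AS padded;  -- '00000125'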
I have a small problem - I am unable to load data from a .csv file into a table in SQL Server. Here is the command I am running:

BULK INSERT CCSProgramParticipation
FROM 'c:\test.csv'
WITH (
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
Data in test.csv is in the following format (date fields can be blank):

NY580232,0,6/30/2006,3567,396,7/1/2005,9/9/2005
NY580232,0,6/30/2006,14850,462,12/12/2005,
....
....
What I see is that the data does get loaded; however, data from the following row is getting inserted into the last field of the previous row - it seems like the row terminator is being ignored.
Has anyone encountered this issue? Please let me know your thoughts on this.
BULK INSERT testTable
FROM 'C:\Users\Robs\Documents\Soccer\2011-2012\SC1.csv'
WITH (
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
Works fine. Most of my CSV files have the same number of columns, but some have 4 fewer. The files contain the same column headings as the full-size ones (only 4 fewer). Is there a way when bulk inserting for SQL to either skip these or ignore the error?
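One approach, sketched with placeholder paths: keep a second format file describing only the columns the narrower files actually contain, and point those loads at it; table columns that the format file doesn't map are left to their defaults (so they need to be nullable or have a DEFAULT):

BULK INSERT testTable
FROM 'C:\data\SC1_short.csv'
WITH (
    FORMATFILE = 'C:\data\SC1_short.fmt',  -- maps only the columns present in the file
    FIRSTROW = 2
);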