Bulk Insert Fails To Import Data Files Created On Unix
Sep 21, 2006
It seems to me that files created on Unix machines with the line terminator chr(10), i.e. a bare LF, cannot be imported using the BULK INSERT statement. Is this a bug, or an oversight by Microsoft? Does this mean that unless one replaces every chr(10) with chr(13) + chr(10), there is no way to use BULK INSERT to import Unix files? This is very strange behavior by MSSQL. Even lesser programs such as Excel and Word automatically recognize chr(10) as a line termination character. Am I missing something, or is this just the way MSSQL is?
I have imported data into my table using the BULK INSERT command. I needed to fill specific columns of my table with that data, so I used a view to put the values into the columns I wanted.
The table looks like this now:
id | id_param | val_param
---+----------+----------
 1 | no_tel   | 742062141
 2 | sex      | 1
 3 | age      | 23
 4 | no_tel   | 765234157
 5 | sex      | 1
 6 | age      | 34
When I want to select only the val_param that equals 1 for id_param = 'sex', I use this query:

select * from bd_rox where id_param='sex' and val_param='1'

It returns no rows and I don't know why. The desired result should look like this:
id | id_param | val_param
---+----------+----------
 2 | sex      | 1
 5 | sex      | 1
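A common cause when the data arrived via BULK INSERT is an invisible trailing carriage return or space left in the last column by the row terminator. A hedged diagnostic sketch (bd_rox is the poster's table; everything else is standard T-SQL):

-- look for hidden characters in the stored values
SELECT id, val_param, LEN(val_param) AS len_chars, DATALENGTH(val_param) AS len_bytes
FROM bd_rox
WHERE id_param = 'sex';

-- match after stripping a stray CR, if one is present
SELECT *
FROM bd_rox
WHERE id_param = 'sex'
  AND REPLACE(val_param, CHAR(13), '') = '1';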
I have two versions of "rsk.txt": one with 1.9 million rows and one with only the first 2,000 rows. The files have a single column of 115 characters that I'll split into several columns later using SUBSTRING. The 2,000-row file loads into the database with no problems whatsoever using this exact code; the other one throws the following error:
Server: Msg 4866, Level 17, State 66, Line 1 Bulk Insert fails. Column is too long in the data file for row 1, column 1. Make sure the field terminator and row terminator are specified correctly.
How can I resolve this problem?
EDIT: I tried several different row and field terminators, but this exact combination works for the small data file, so I assume it should also work for the large one. The large one, however, was copied directly from a Unix file system using binary FTP, while the small one was manually pasted into a new txt file using UltraEdit.
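Binary FTP preserves the Unix LF-only line endings, while a file created in UltraEdit on Windows gets CR+LF, which would explain why only the small file loads. On SQL Server 2000/2005, where the hex terminator notation shown earlier is not available, a commonly suggested workaround is to build the statement dynamically so a literal chr(10) can be embedded (the table name and path are assumptions):

DECLARE @sql varchar(500);
SET @sql = 'BULK INSERT dbo.rsk FROM ''C:\data\rsk.txt'' '
         + 'WITH (ROWTERMINATOR = ''' + CHAR(10) + ''')';
EXEC (@sql);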
Hi, I have a page that bulk inserts data into my SQL Server. I build up the bulk insert part like this:

sb.Append("Exec p_BulkInsertPDI '<ROOT><PROT>")
sb.Append("<PDI NID=""" & HiddenField1.Value & """ AID=""" & HiddenField1a.Value & """ MID="" GID="" UID=""" & UserID & """/>")
sb.Append("</PROT></ROOT>'")

The problem I have here is that sometimes the AID value is empty, because the previous page didn't send any value to that hidden field. So when I try to run this, I get an error message like this: "Conversion failed when converting the nvarchar value 'AID=' to data type int". It would be best if I could insert NULL values when no value has been provided. Is this possible to do? Regards
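One hedged approach on the T-SQL side: omit the AID attribute entirely when it is empty, and let the XML shredding inside the procedure return NULL for a missing attribute. A minimal sketch of how p_BulkInsertPDI could read the document on SQL Server 2005 and later (the procedure body here is an assumption, not the poster's actual code):

DECLARE @x xml = N'<ROOT><PROT><PDI NID="12" UID="7"/></PROT></ROOT>';

SELECT p.value('@NID', 'int') AS NID,
       p.value('@AID', 'int') AS AID   -- yields NULL when the attribute is absent
FROM @x.nodes('/ROOT/PROT/PDI') AS t(p);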
I'm setting up a new 2005 server and bulk insert from a client workstation (using windows authentication) is failing with:
Msg 4861, Level 16, State 1, Line 1 Cannot bulk load because the file "\\FILESERVERNAME\sharedfolder\filename.txt" could not be opened. Operating system error code 5 (Access is denied.).
Here's my BULK INSERT statement (though I'm pretty sure there's nothing wrong with it):
BULK INSERT #FIRSTROW
FROM '\\FILESERVERNAME\sharedfolder\filename.txt'
WITH (
    DATAFILETYPE = 'char',
    ROWTERMINATOR = '\n',
    LASTROW = 1
)
If I run the same transact SQL when remote desktopped into the new server (under the same login as that used in the client workstation), it imports the file without errors.
If I use the sa client login from the client workstation (sql server authentication) the bulk insert succeeds.
My old SQL 2000 server lets me bulk insert the file without errors even from my client workstations using windows authentication.
I have followed the instructions on this site: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=928173&SiteID=1 , but still no luck and same error.
I'm pretty sure it is being caused by the increased constraints on bulk insert in 2005. Hoping someone can help. The more specific the better. If you need more info, let me know.
Oh and I've also made sure that the SQL service uses a domain logon account rather than the local system account (this would work on the 2000 server, but not 2005).
Note that the file server (source file resides there) is a DIFFERENT machine than the 2005 SQL server. If I move the source file to the sql server machine the error goes away (not a preferred solution though).
I'm almost positive that what I need to do is make sure that the SQL server is setup for delegation when a windows authenticated user attempts to bulk load a file from a second server.
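That diagnosis matches the classic double-hop problem: with Windows authentication, SQL Server can only forward the client's credentials to a second file server if the connection authenticated with Kerberos and delegation is configured. A quick hedged check, run from the client connection:

-- NTLM here means the credentials cannot hop to the file server; Kerberos is required for delegation
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;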
I have more than 500 CSV files with a similar structure [same column names and same data formats]. I would like to load these files into a database table on SQL Server 2014.
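A hedged sketch of one way to do this entirely in T-SQL: enumerate the directory with the undocumented xp_dirtree procedure and issue one dynamic BULK INSERT per file (the directory, table name, and terminators are assumptions; an SSIS Foreach File loop is the fully supported alternative):

DECLARE @files TABLE (subdirectory nvarchar(260), depth int, [file] int);
INSERT @files EXEC master.sys.xp_dirtree 'C:\csv', 1, 1;

DECLARE @name nvarchar(260), @sql nvarchar(max);
DECLARE c CURSOR FOR
    SELECT subdirectory FROM @files WHERE [file] = 1 AND subdirectory LIKE '%.csv';
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BULK INSERT dbo.TargetTable FROM ''C:\csv\' + @name + N''' '
             + N'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;
DEALLOCATE c;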
I am new to MS DTS and I am using MS SQL 2000 as my database. I am trying to do a bulk insert using an MS DTS package. The package is trying to load data from a text file into a SQL 2000 table. When running the package I get an error saying that 1 task failed during execution, and the task is shown in red indicating that it has failed. The details of the error are as follows:

Could not bulk insert because the file D:\DtsFile.txt could not be opened. Operating system error code: 21 (The device is not ready.).
Please help me in solving this problem, if any one has got this error and resolved or have any idea of the error please help. :)
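Error 21 usually means the path is being resolved on the SQL Server machine, where the D: drive either does not exist or is an empty removable drive; drives mapped on the client are not visible to the server's service account. A hedged sketch of the underlying statement rewritten with a UNC path instead (server, share, and table names are hypothetical):

BULK INSERT dbo.TargetTable
FROM '\\workstation01\share\DtsFile.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');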
I have a log file, and I am trying to import its data into SQL in order to analyze it (to be able to query the data); however, that task seems impossible. The log file contains a varying number of column fields (errors logged and the various types of data logged demand varying numbers of columns). Moreover, the fields themselves are hard to extract.
An example of the data in my log (xxxxxxxx stands for some alphanumeric characters):

2008-01-09 20:16:05,4784E36F.req,10.1.1.26,xxxxxxxxx,OK -- SMPP - xxxxxxx:xxxxx,Sender=xxxx;SMSCMsgId=2028eecc;Binary=1;DCS=8;Data=xxxxxxxxxxxxxx...
2008-01-09 20:16:05,4784E338.req,10.1.1.26,xxxxxxxx,Retry Pending - ERROR: Timeout waiting for response from server or lost connection -- SMPP - xxxxxxxxxxx:xxxxx,Sender=xxxxx........
I could use regular expressions to extract the data and then a regular INSERT to put each row into the right table. But that amounts to doing a bulk insert by hand (and it may take much more time), which seems strange. Can I use some additional tool (in an SQL package or external) to assist with this?
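One hedged approach: bulk load each raw line into a single-column staging table, then carve out the fixed leading fields with string functions, leaving the variable tail as one column for later parsing (the table, path, and the '\0' never-occurring field terminator are all assumptions):

CREATE TABLE dbo.RawLog (line varchar(max));

BULK INSERT dbo.RawLog
FROM 'C:\logs\app.log'
WITH (FIELDTERMINATOR = '\0', ROWTERMINATOR = '\n');  -- '\0' so each whole line lands in one column

-- the leading comma-separated fields look fixed; everything after stays raw for now
SELECT LEFT(line, 19) AS logged_at,
       SUBSTRING(line, 21, CHARINDEX(',', line, 21) - 21) AS request_file,
       line AS raw_tail
FROM dbo.RawLog;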
I have multiple XML files in a directory that I would like to insert into tables. I can perform the action just fine with 1 file, but how do I add all the files in the directory into the same tables, or keep track of the files in that directory so that when a new one is added it gets uploaded to the database? I have attached the script I am using to upload 1 file:
declare @XMLdocument xml
declare @doc_handle int

create table XMLfile (surveyXML xml)

-- load one document; OPENROWSET(BULK ... SINGLE_BLOB) reads the whole file as a single value
-- (the file path here is a placeholder)
insert into XMLfile (surveyXML)
select convert(xml, BulkColumn)
from openrowset(bulk 'C:\xml\survey.xml', single_blob) as x
I'm having a problem doing a bulk insert of a tab-delimited text file into MSSQL 2005 using either BULK INSERT or bcp. When using the following bulk insert command I get the "The column is too long in the data file for row 1, column 2" error. I have tried different field and row terminators, with no luck.
The data file only has data for the first 10 columns of a table with over 100 columns.
The first 10 table columns are defined as follows:

CREATE TABLE [dbo].[CustomerDefinition](
    [Rowid] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [CustomerId] [varchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Name] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr1] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr2] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr3] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [City] [varchar](30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [State] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Zipcode] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Country] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    CONSTRAINT [PK_CustomerDefinition] PRIMARY KEY CLUSTERED
The data-file looks like this (it is tab delimited):
1 1 BEN ONE BENVENUTO 1 BENVENUTO ST. CLAIR & AVENUE ROAD TORONTO ON
2 1 BIGGIN DDP PARTNERSHIP 1 BIGGIN LTD. 1 BIGGIN COURT NORTH YORK ON
3 1 EVA MELIA CORPORATION 1 EVA ROAD SUITE #412 ETOBICOKE ON
4 1 FINANC CONCERT PROPERTIES 200 BAY STREET- SOUTH TOWER SUITE 2100- PO BOX 56 TORONTO ON
5 1 LONGBRID BERKLEY PROPERTY MANAGEMENT INC 1 LONGBRIDGE ROAD 2ND FL THORNHILL ON
6 10 DORA VILLA LASFLORES C/O FOCUS PROPERTIE 10 DORA AVENUE TORONTO- ON
7 10 HOLMES HALTON COMMUNITY HOUSING 10 HOLMESWAY PLACE ACTON ON
8 100 CANYON DEL PROPERTY MANAGEMENT 100 CANYON AVENUE BATHURST & SHEPPARD NORTH YORK ON
9 100 CEDAR LAWRENCE CONSTRUCTION 100 CEDAR AVENUE YONGE & MAJOR MACKENZIE WEST RICHMOND HILL ON
10 100 GOWAN KANCO - 100 GOWAN LTD. 100 GOWAN AVENUE PAPE & DANFORTH EAST YORK ON
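When the file supplies fewer fields than the target table has columns, plain BULK INSERT expects a value for every column and keeps reading past the row terminator, which can surface as exactly this "column too long" error. One commonly suggested workaround, sketched under assumptions here, is to bulk insert into a view that exposes only the columns the file actually supplies, in file order (the view name is hypothetical, and a format file is the other standard option; the unlisted columns must allow NULL or have defaults):

CREATE VIEW dbo.CustomerDefinition_Load AS
SELECT CustomerId, Name, Addr1, Addr2, Addr3, City, State, Zipcode, Country
FROM dbo.CustomerDefinition;
GO
BULK INSERT dbo.CustomerDefinition_Load
FROM 'C:\data\customers.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');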
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and runs successfully on the server. I want to import the data that the view returns into a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops on the Setting Source Connection step:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before and knows what is happening?
I used bcp to produce the appended format file. How can it be modified to recognize the quotes that surround the text fields and not insert the quotes along with the text? Invariably, the first four columns have text surrounded by quotes and are terminated by tabs. If the first column has "abc", only abc ought to be inserted into that field in the table.

Thanks
Ted

================== format file ========================

<?xml version="1.0" ?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharPrefix" PREFIX_LENGTH="2" MAX_LENGTH="6" COLLATION="Latin1_General_CI_AI" />
    <FIELD ID="2" xsi:type="CharPrefix" PREFIX_LENGTH="2" MAX_LENGTH="7" COLLATION="Latin1_General_CI_AI" />
    <FIELD ID="3" xsi:type="CharPrefix" PREFIX_LENGTH="2" MAX_LENGTH="48" COLLATION="Latin1_General_CI_AI" />
    <FIELD ID="4" xsi:type="NativeFixed" LENGTH="4" />
    <FIELD ID="5" xsi:type="CharPrefix" PREFIX_LENGTH="2" MAX_LENGTH="8" COLLATION="Latin1_General_CI_AI" />
    <FIELD ID="6" xsi:type="NativePrefix" PREFIX_LENGTH="1" />
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="f_supplier_code" xsi:type="SQLVARYCHAR" />
    <COLUMN SOURCE="2" NAME="f_product_code" xsi:type="SQLVARYCHAR" />
    <COLUMN SOURCE="3" NAME="f_product_name" xsi:type="SQLVARYCHAR" />
    <COLUMN SOURCE="4" NAME="f_asset_classes_id" xsi:type="SQLINT" />
    <COLUMN SOURCE="5" NAME="f_size" xsi:type="SQLVARYCHAR" />
    <COLUMN SOURCE="6" NAME="f_dist_unit" xsi:type="SQLFLT8" />
  </ROW>
</BCPFORMAT>
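For character (not native) data, the trick usually suggested is to make the quotes part of the field terminators, so bcp strips them as it splits the fields. A heavily hedged sketch of how the first two fields might be rewritten that way (CharTerm replaces CharPrefix, the terminators are assumptions about the actual file layout, and a dummy field absorbs the opening quote):

<RECORD>
  <FIELD ID="1" xsi:type="CharTerm" TERMINATOR='"' />
  <FIELD ID="2" xsi:type="CharTerm" TERMINATOR='"\t"' MAX_LENGTH="6" COLLATION="Latin1_General_CI_AI" />
  ...
</RECORD>
<ROW>
  <!-- FIELD 1 is not mapped to any column, so the opening quote is discarded -->
  <COLUMN SOURCE="2" NAME="f_supplier_code" xsi:type="SQLVARYCHAR" />
  ...
</ROW>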
I am using SQL Server Data Tools for Visual Studio 2012. I have a very simple SSIS package with a Data Flow task that exports from an OLE DB Source to a tab-delimited Unicode Flat File Destination, and a Bulk Insert task that loads from the file. Both the Flat File Destination and the Bulk Insert are using the same code page. The Bulk Insert task uses the wide char format to read the file. The process works fine with nvarchar and int columns, but when I add a uniqueidentifier column it fails with "type mismatch or invalid character for the specified code page".
I need to periodically import a large amount of data into my CE database. When I tested the import over the network it took a lot of time. That's why I decided to send the raw data in ASCII files (because of their small size) and to import the files into the CE database.
Certainly, it's not a problem to write such a command-line import utility myself, but I'm interested in whether someone has already done this...
I'm trying to use BULK INSERT for the first time and am getting the following error. I think it might have something to do with my format file; from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore '\r\n' is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode). Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
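For what it's worth, in a non-XML format file the host-file data type describes the file, not the table column: for a plain ANSI text file the first field would normally be SQLCHAR with length 6 even though the table column is nvarchar(6) (SQLNCHAR with a byte length of 12 applies only if the file itself is Unicode). A hedged sketch of how the first data line of such a format file might look (terminator and collation are assumptions):

1  SQLCHAR  0  6  "\t"  1  ASXCode  SQL_Latin1_General_CP1_CI_AS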
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm: the amount of data written to a file is proportional to the amount of free space in that file, compared to the other files in the filegroup.

So if no extra filegroups are created and multiple secondary files are simply attached to the database, is data stored the same way, with writes spread across the files by the same algorithm, or does it work differently?
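Since proportional fill is driven by each file's free space, one way to watch it in action is to compare size and used space per data file. A small hedged sketch using standard catalog views (pages are 8 KB, so dividing by 128 yields MB):

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';  -- data files only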
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file into an empty SQL table that has an identity column defined. How do I tell the Bulk Insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to have the identity column in the table too.
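A hedged T-SQL alternative when the Bulk Insert task itself cannot skip a column: query the file through OPENROWSET(BULK ...) with a format file describing only the file's fields, and list the non-identity columns explicitly in the INSERT (all names and paths here are hypothetical):

INSERT INTO dbo.Target (Col1, Col2, Col3)
SELECT t.Col1, t.Col2, t.Col3
FROM OPENROWSET(
         BULK 'C:\data\input.txt',
         FORMATFILE = 'C:\data\input.fmt'
     ) AS t;
-- the identity column is omitted from the column list, so SQL Server numbers the rows itself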
On my site users can register using the ASP.NET Membership CreateUserWizard control. I am also using the wizard control to design a simple question-and-answer form that logged-in users have access to. It has 2 questions, with a text box for Q1 and a dropdown list for Q2. I have a table in my database called "Players" which has 3 columns:

UserId - primary key, of type uniqueidentifier
PlayerName - type string
PlayerGenre - type string
On completing the wizard and clicking the Finish button, I want the data to be inserted into the SQL Express Players table. I am having problems getting this to work and keep getting exceptions. It would be very helpful if somebody could check the code and advise where the problem is.
To match the answers to the user, I get the UserId and insert it into the database too.

protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
{
    SqlDataSource DataSource = (SqlDataSource)Wizard1.FindControl("InsertArtist1");
    MembershipUser myUser = Membership.GetUser(this.User.Identity.Name);
    Guid UserId = (Guid)myUser.ProviderUserKey;
    String Gender = ((DropDownList)Wizard1.FindControl("PlayerGenre")).SelectedValue;
    DataSource.InsertParameters.Add("UserId", UserId.ToString());
    DataSource.InsertParameters.Add("PlayerGenre", Gender.ToString());
    DataSource.Insert();
}
I get the following error when I try to delete files using the FTP client in a SSIS package. This is the error I get.
Error: 0xC002918E at FTP Task, FTP Task: Unable to delete remote files using "FTP Connection Manager".
Task failed: FTP Task
This is a Unix server. I'm able to delete the files using other FTP clients, but the FTP client in the SSIS package cannot delete them. I have read in many places on the internet that this is a known MS bug. Let me know if there is some sort of workaround for this. I'm using SQL Server 2005 SSIS packages to accomplish this task.
Hi, I am working on inserting information into a DB and then retrieving the ID created for that data to use elsewhere in my code. I have the code below, but I do not know how to get to the output parameter. Can anyone please help?
AS
INSERT INTO PRODUCTION
    (DATEOUT, DATEREQUIRED, PREPAREDBY, COMMENTID, TOTALQUANTITY,
     VENDORID, WPO, TCAPONUMBER, APPROVEDBY)
VALUES
    (@DATEOUT, @DATEREQUIRED, @PREPAREDBY, @COMMENTID, @TOTALQUANTITY,
     @VENDORID, @WPO, @TCAPONUMBER, @APPROVEDBY)
SET @Identity = SCOPE_IDENTITY()
'collect all the information from the form and then apply all and then update
'Get a reference to the Production table.
Dim dtProduction As DataTable = DS.Tables("Production")
Dim dtLineItem As DataTable = DS.Tables("LineItems")

' Create the SqlCommand to execute the stored procedure.
Production.InsertCommand = New SqlCommand("dbo.InsertProduction", connection)
Production.InsertCommand.CommandType = CommandType.StoredProcedure

' Add the parameter for the CategoryName. Specifying the
' ParameterDirection for an input parameter is not required.
'Production.InsertCommand.Parameters.Add("@CategoryName", SqlDbType.NVarChar, 15, "CategoryName")
Production.InsertCommand.Parameters.Add("@DATEOUT", SqlDbType.DateTime, 8, "CategoryName")
Production.InsertCommand.Parameters.Add("@DATEREQUIRED", SqlDbType.DateTime, 8, "CategoryName")
Production.InsertCommand.Parameters.Add("@PREPAREDBY", SqlDbType.VarChar, 50, "CategoryName")
Production.InsertCommand.Parameters.Add("@COMMENTID", SqlDbType.Int, 4, "CategoryName")
Production.InsertCommand.Parameters.Add("@TOTALQUANTITY", SqlDbType.Int, 4, "CategoryName")
Production.InsertCommand.Parameters.Add("@VENDORID", SqlDbType.Int, 4, "CategoryName")
Production.InsertCommand.Parameters.Add("@WPO", SqlDbType.VarChar, 50, "CategoryName")
Production.InsertCommand.Parameters.Add("@TCAPONUMBER", SqlDbType.Int, 4, "CategoryName")
Production.InsertCommand.Parameters.Add("@APPROVEDBY", SqlDbType.VarChar, 50, "CategoryName")

' Add the SqlParameter to retrieve the new identity value.
' Specify the ParameterDirection as Output.
Dim parameter As SqlParameter = Production.InsertCommand.Parameters.Add("@Identity", SqlDbType.Int, 0, "ProductionID")
parameter.Direction = ParameterDirection.Output

' Create a new row with the same schema.
Dim dr As DataRow = dtProduction.NewRow()
'you need the ID from this to insert into the Production DB

' Set the value of all the columns.
dr("DateOut") = CDate(DateTimePicker1.Text)
dr("DateRequired") = CDate(DateTimePicker2.Text)
dr("VendorID") = CInt(vendorbox.SelectedValue)
dr("HomeAddress") = txtApproved.Text.ToString
dr("ApprovedBy") = txtPrepared.Text.ToString
dr("TCAPO") = CInt(txtTCAPO.Text.Trim)
dr("CommentID") = CommentID
dr("TotalCost") = CDec(txtTotals.Text)
dr("TotalQuantity") = CInt(txtQtyTotal.Text)

' Add to the Rows collection of the table.
dtProduction.Rows.Add(dr)

'Update the Production table and then retrieve the ID created in this case
Production.Update(dtProduction)
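A hedged note, extending the poster's own VB: with a DataAdapter, once Update has executed the stored procedure, the output value is typically available either in the mapped "ProductionID" column of the row or directly from the parameter collection, along these lines:

' assumption: read the output parameter after Update has run
Dim newId As Integer = CInt(Production.InsertCommand.Parameters("@Identity").Value)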
I am using the bulk copy command to export data, table by table, from a database into CSV files, and it was working fine. For the last 3-4 days, when exporting some tables, the data in the CSV files has been coming out as junk.
I need to be able to bulk insert a bunch of tables from their corresponding flat files. I have created an XML file (see below) which has a file-name/table-name pair at each node. I then created a ForEachLoop task, used the Node enumeration type, and set the following OuterXPathString: ReferenceFiles/File. At this point I get lost. How do I pass the 2 inner node values (file name and table name) to variables which I can then use in expressions for the Bulk Insert task inside the Foreach?
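The XML file itself did not survive the copy, but based on the description (one file-name/table-name pair per node, outer XPath ReferenceFiles/File), it presumably looks something like this purely hypothetical reconstruction; the element names are invented for illustration:

<ReferenceFiles>
  <File>
    <FileName>customers.txt</FileName>
    <TableName>dbo.Customers</TableName>
  </File>
  <File>
    <FileName>orders.txt</FileName>
    <TableName>dbo.Orders</TableName>
  </File>
</ReferenceFiles>

With that shape, each File node the enumerator returns can itself be queried (for example by a nested Foreach or an XML Task) to populate the two package variables that the Bulk Insert task's expressions consume.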
Hi! I'm building a web application. I need to read data from a text or Excel file, process the data, and then store the resulting records in the database. The number of records is big. I can store the records in the database (SQL Server 2005) one at a time, but I think that's slow. Is there any way to insert the data in bulk?
I manage a legacy system that dumps its data into a number of different databases (same schema) on a nightly basis using bulk insert. I need to formulate a strategy for efficiently aggregating that data into a single database right after these nightly extractions complete. Here is my current strategy:
1. Duplicate the legacy system's database schema and add an identifier column to specify which database the data loaded from.
2. Each night, delete all records in the table.
3. Each night, for each database:
3a. Set each table's default value to a value that references the current database being loaded (see the sketch after this list).
3b. Use the legacy system's flat files and format files to bulk insert into the database.
3c. Clear the default value.
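A minimal sketch of what steps 3a and 3c might look like for one table, assuming a hypothetical Orders table and a SourceDb identifier column added in step 1 (bulk insert fills omitted columns from their defaults unless KEEPNULLS is specified):

-- 3a: every row bulk inserted while this default is in place gets tagged
ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_SourceDb DEFAULT 'LegacyDb1' FOR SourceDb;

-- 3b: bulk insert from the legacy flat file / format file happens here

-- 3c: clear the default before moving on to the next source database
ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_SourceDb;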
What other steps would facilitate performance? Dropping and recreating the indexes? Does anyone foresee faults in this strategy?
Any help would be appreciated.

I am running a script that does the following in succession:
1. Drop the existing database and create a new database
2. Define tables, stored procedures, and functions in the database
3. Import data using bulk insert
4. Analyze data using stored procedures

I would like to improve the performance of the analysis in step 4 by creating indexes in step 2.

Question 1: Are indexes updated when data is bulk inserted? I know they are when using normal INSERT, UPDATE, or DELETE T-SQL, but I am not sure about bulk insert of data.

Question 2: Do I need to update the index statistics in any way, or would they be ready to use in step 4?

Thanks,
CJ
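For what it's worth, a hedged sketch of the statistics refresh often recommended between a large bulk load and heavy querying (the table name is hypothetical; on SQL Server 2000, DBCC DBREINDEX plays the ALTER INDEX role):

-- refresh statistics on the freshly loaded table
UPDATE STATISTICS dbo.ImportedData WITH FULLSCAN;

-- or rebuild the indexes outright, which also rebuilds their statistics
ALTER INDEX ALL ON dbo.ImportedData REBUILD;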
I have installed an MSDE engine (SQL Server 7 Desktop) on an NT 4.0 SP6 workstation. I have around 17 MB of data I need to transfer from a SQL Server 7 instance on an NT 4.0 SP6 server. I can't get anything to work. bcp complains, something to the effect that there is no server attribute, which is true because it is a workstation; if I remove the -S option it complains that I did not designate the server. This bcp script works just fine from SQL Server 6.5 on servers. DTS puts up a message that I do not have a license to use DTS with SQL Server Desktop. I tried transferring the database to Access and then using the Upsizing Wizard, but the database is too big. I tried a bulk insert but I got the vague message 'OLE DB provider 'STREAM' reported an error. The provider did not give any information about the error.' What is the best way to do this? Why am I having trouble with bcp and bulk insert?
Short version: what is the best/fastest way to load large amounts of data from a comma-delimited text file into a SQL Server table, where the text file contains date fields in ccyy/mm/dd format and the SQL Server table defines those fields as datetime data types?
Details: When I attempt to load files (using either bcp or BULK INSERT) containing datetime data, the load process errors out because the datetime fields in my text file are in ccyy/mm/dd format and the default format for SQL Server is mm/dd/yy. I have been unable to change the default format by using the SET DATEFORMAT statement (apparently SET DATEFORMAT will not work for bcp because bcp runs outside of the SQL Server session?). The only alternatives that I have come up with are: 1) Change the format of the date fields in the text file from ccyy/mm/dd to mm/dd/ccyy. 2) Create a temporary table that defines the date fields with a char(n) datatype, load the data into the temp table, SET the DATEFORMAT to ccyy/mm/dd, and then copy the temp table into the permanent table (the permanent table using datetime data types).
Both of these alternatives would require additional processing time. Since this is a process that loads large amounts of data on a monthly (soon to be weekly) basis, speed is of the essence.
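A hedged sketch of alternative 2, with hypothetical table, column, and path names; one simplification is that CONVERT style 111 (yyyy/mm/dd) interprets the value deterministically, so no session-level DATEFORMAT change is needed:

-- stage the raw text
CREATE TABLE #stage (trade_date char(10), amount varchar(20));

BULK INSERT #stage
FROM 'C:\data\monthly.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- convert on the way into the permanent table
INSERT INTO dbo.MonthlyData (trade_date, amount)
SELECT CONVERT(datetime, trade_date, 111), CONVERT(money, amount)
FROM #stage;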