We have created CSV files on HP-UX 11.0 and transferred them via ASCII ftp to our SQL Server machine's file store to load large amounts of data using the BULK INSERT command. This is the command:

BULK INSERT db..table
FROM 'S:\path\filename.csv'
WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

Now these files are written on Linux and there seems to be a linefeed problem when loading the data. This is the error:

Server: Msg 4866, Level 17, State 66, Line 1
Bulk Insert fails. Column is too long in the data file for row 1, column 31. Make sure the field terminator and row terminator are specified correctly.

Column 31 is our last column in the CSV file. I've tried '\n' and '\r\n' for ROWTERMINATOR and cannot get it to execute past the 1st line. If one opens the CSV file in Wordpad and saves it, then executes the above statement, ROWTERMINATOR = '\n' suffices, but this is an unreasonable solution. Any help on this is greatly appreciated. Michael Husler
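A common workaround for LF-only (Unix-style) files: BULK INSERT treats a literal '\n' row terminator as \r\n, so passing a real CHAR(10) via dynamic SQL usually gets past this. A minimal sketch, assuming the same table and path as above:

DECLARE @sql nvarchar(1000);
SET @sql = N'BULK INSERT db..table FROM ''S:\path\filename.csv'' '
         + N'WITH (DATAFILETYPE = ''char'', FIELDTERMINATOR = '','', '
         + N'ROWTERMINATOR = ''' + CHAR(10) + N''')';
EXEC (@sql);  -- the terminator is now a bare linefeed, matching the Linux files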
Hi, I have a page that bulk inserts data into my SQL Server. I build up the bulk insert part like this:

sb.Append("Exec p_BulkInsertPDI '<ROOT><PROT>")
sb.Append("<PDI NID=""" & HiddenField1.Value & """ AID=""" & HiddenField1a.Value & """ MID="" GID="" UID=""" & UserID & """/>")
sb.Append("</PROT></ROOT>'")

The problem I have here is that sometimes AID doesn't have any value, because the previous page hasn't sent anything to that hidden field. So when I try to run this, I get an error message like this: "Conversion failed when converting the nvarchar value 'AID=' to data type int". It would be best if I could insert NULL values when no value has been provided. Is this possible to do? Regards
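It should be possible on the stored-procedure side. The internals of p_BulkInsertPDI aren't shown, but assuming SQL Server 2005 or later and that the procedure shreds the XML with the nodes() method (the table and column types below are hypothetical), NULLIF can turn an empty AID attribute into NULL before the cast to int:

CREATE PROCEDURE p_BulkInsertPDI @doc xml
AS
BEGIN
    INSERT INTO dbo.PDI (NID, AID, UID)                -- hypothetical target table
    SELECT
        x.value('@NID', 'int'),
        NULLIF(x.value('@AID', 'nvarchar(20)'), N''),  -- empty attribute becomes NULL
        x.value('@UID', 'nvarchar(50)')
    FROM @doc.nodes('/ROOT/PROT/PDI') AS T(x);
END

The caller would then emit AID="" (rather than broken markup) when the hidden field is empty.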
I'm setting up a new 2005 server and bulk insert from a client workstation (using windows authentication) is failing with:
Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\FILESERVERNAME\sharedfolder\filename.txt" could not be opened. Operating system error code 5 (Access is denied.).
Here's my BULK INSERT statement (though I'm pretty sure there's nothing wrong with it):
BULK INSERT #FIRSTROW FROM '\\FILESERVERNAME\sharedfolder\filename.txt' WITH ( DATAFILETYPE = 'char', ROWTERMINATOR = '\n', LASTROW = 1 )
If I run the same transact SQL when remote desktopped into the new server (under the same login as that used in the client workstation), it imports the file without errors.
If I use the sa client login from the client workstation (sql server authentication) the bulk insert succeeds.
My old SQL 2000 server lets me bulk insert the file without errors even from my client workstations using windows authentication.
I have followed the instructions on this site: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=928173&SiteID=1 , but still no luck and same error.
I'm pretty sure it is being caused by the increased constraints on bulk insert in 2005. Hoping someone can help. The more specific the better. If you need more info, let me know.
Oh and I've also made sure that the SQL service uses a domain logon account rather than the local system account (this would work on the 2000 server, but not 2005).
Note that the file server (source file resides there) is a DIFFERENT machine than the 2005 SQL server. If I move the source file to the sql server machine the error goes away (not a preferred solution though).
I'm almost positive that what I need to do is make sure that the SQL Server is set up for delegation when a Windows-authenticated user attempts to bulk load a file from a second server.
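If the delegation theory is right, the usual checklist (sketched here with placeholder host and account names) is to register SPNs for the SQL Server service account:

setspn -A MSSQLSvc/sqlhost.yourdomain.local:1433 YOURDOMAIN\sqlSvcAccount
setspn -A MSSQLSvc/sqlhost:1433 YOURDOMAIN\sqlSvcAccount

Then enable "Trust this user for delegation" on the service account in Active Directory Users and Computers, and verify the client actually connects with Kerberos. NTLM cannot forward the caller's credentials to the file server, which is exactly the double-hop failure described above (and why the load works locally or as sa).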
I am new to MS DTS and I am using MS SQL 2000 as my database. I am trying to do a bulk insert using an MS DTS package. The package is trying to load data from a text file into a SQL 2000 table. When running the package I get an error saying that 1 task failed during execution, and the task is shown in red indicating that it has failed. The details of the error are as follows:
Could not bulk insert because the file D:\DtsFile.txt could not be opened. Operating system error code: 21 (The device is not ready).
Please help me in solving this problem. If anyone has got this error and resolved it, or has any idea about it, please help. :)
I have two versions of "rsk.txt": one with 1.9 million rows and one with the first 2000 rows only. The files have one column only, 115 characters wide, that I'll split into several columns later using SUBSTRING. The one with 2000 rows loads into the database with no problems whatsoever using this exact code; the other one throws the following error:
Server: Msg 4866, Level 17, State 66, Line 1
Bulk Insert fails. Column is too long in the data file for row 1, column 1. Make sure the field terminator and row terminator are specified correctly.
How can I resolve this problem?
EDIT: I tried several different row and field terminators, but this exact combination works for the small data file, so I assume it should also work for the large one... The large one, however, is copied directly via binary ftp from a Unix filesystem, while the small one was manually copied into a new txt file using UltraEdit.
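That difference is likely the whole story: binary ftp preserves the Unix LF-only row endings, while the file re-saved through UltraEdit ends up with CRLF. A hedged sketch using hexadecimal terminator notation (accepted by newer SQL Server versions; on older ones, build the statement dynamically with CHAR(10) as in the first post above), with a hypothetical staging table and path:

BULK INSERT dbo.rsk_staging            -- hypothetical single-column staging table
FROM 'C:\data\rsk.txt'                 -- hypothetical path
WITH (DATAFILETYPE = 'char', ROWTERMINATOR = '0x0a');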
I'm having a problem doing a bulk insert on a tab-delimited text file into MS SQL 2005 using either BULK INSERT or bcp. When using the following bulk insert command I get the "The column is too long in the data file for row 1, column 2" error. I have tried
The data file only has data for the first 10 columns of a table with over 100 columns.
First 10 table columns have the format of:

CREATE TABLE [dbo].[CustomerDefinition](
    [Rowid] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [CustomerId] [varchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Name] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr1] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr2] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Addr3] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [City] [varchar](30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [State] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Zipcode] [varchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Country] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    CONSTRAINT [PK_CustomerDefinition] PRIMARY KEY CLUSTERED
The data-file looks like this (it is tab delimited):
1 1 BEN ONE BENVENUTO 1 BENVENUTO ST. CLAIR & AVENUE ROAD TORONTO ON
2 1 BIGGIN DDP PARTNERSHIP 1 BIGGIN LTD. 1 BIGGIN COURT NORTH YORK ON
3 1 EVA MELIA CORPORATION 1 EVA ROAD SUITE #412 ETOBICOKE ON
4 1 FINANC CONCERT PROPERTIES 200 BAY STREET- SOUTH TOWER SUITE 2100- PO BOX 56 TORONTO ON
5 1 LONGBRID BERKLEY PROPERTY MANAGEMENT INC 1 LONGBRIDGE ROAD 2ND FL THORNHILL ON
6 10 DORA VILLA LASFLORES C/O FOCUS PROPERTIE 10 DORA AVENUE TORONTO- ON
7 10 HOLMES HALTON COMMUNITY HOUSING 10 HOLMESWAY PLACE ACTON ON
8 100 CANYON DEL PROPERTY MANAGEMENT 100 CANYON AVENUE BATHURST & SHEPPARD NORTH YORK ON
9 100 CEDAR LAWRENCE CONSTRUCTION 100 CEDAR AVENUE YONGE & MAJOR MACKENZIE WEST RICHMOND HILL ON
10 100 GOWAN KANCO - 100 GOWAN LTD. 100 GOWAN AVENUE PAPE & DANFORTH EAST YORK ON
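Since the file carries only the first 10 of the table's 100+ columns, a plain BULK INSERT will expect a field for every table column and report exactly this "column is too long" error. One hedged workaround is to load a staging table whose shape matches the file exactly, then copy across (the staging name and file path are hypothetical; the column list comes from the DDL above):

CREATE TABLE #CustomerStage (
    Rowid int NOT NULL,
    CustomerId varchar(15) NOT NULL,
    Name varchar(50) NULL,
    Addr1 varchar(50) NULL,
    Addr2 varchar(50) NULL,
    Addr3 varchar(50) NULL,
    City varchar(30) NULL,
    State varchar(50) NULL,
    Zipcode varchar(10) NULL,
    Country varchar(50) NULL
);

BULK INSERT #CustomerStage
FROM 'C:\data\CustomerDefinition.txt'          -- hypothetical path
WITH (DATAFILETYPE = 'char', FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

-- The file supplies Rowid, which is an IDENTITY column on the target:
SET IDENTITY_INSERT dbo.CustomerDefinition ON;
INSERT INTO dbo.CustomerDefinition
    (Rowid, CustomerId, Name, Addr1, Addr2, Addr3, City, State, Zipcode, Country)
SELECT Rowid, CustomerId, Name, Addr1, Addr2, Addr3, City, State, Zipcode, Country
FROM #CustomerStage;
SET IDENTITY_INSERT dbo.CustomerDefinition OFF;

The alternative is a bcp format file that maps only the first 10 columns; see the format-file example further down.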
It seems to me that files created on Unix machines with line terminator \n, or chr(10), cannot be imported using the Bulk Insert statement. Is this a bug, or an oversight by Microsoft? Does this mean that unless one replaces every \n with \r\n, there is no way to use Bulk Insert to import Unix files? This is very strange behavior by MSSQL. Even lesser programs such as Excel and Word automatically recognize chr(10) as a line termination character. Am I missing something, or is this just the way MSSQL is?
I am using SQL Server Data Tools for Visual Studio 2012. I have a very simple SSIS package with a Data Flow task that exports from an OLE DB Source to a tab-delimited Unicode Flat File Destination, and a Bulk Insert task that loads from the file. Both the Flat File Destination and the Bulk Insert task are using the same code page, and the Bulk Insert task reads the file in wide-character format. The process works fine with nvarchar and int columns, but when I add a uniqueidentifier column it fails with "type mismatch or invalid character for the specified code page".
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file into an empty SQL table that has an identity column defined. How do I tell the Bulk Insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to keep the identity column in the table too.
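One hedged approach that works for both the Bulk Insert task and plain T-SQL: bulk insert into a view that exposes every column except the identity, so SQL Server generates the identity values itself (all names below are hypothetical; a bcp format file that skips the column is the other common route):

CREATE TABLE dbo.Target (
    Id   int IDENTITY(1,1) PRIMARY KEY,   -- generated, not loaded
    ColA varchar(50),
    ColB varchar(50)
);
GO
CREATE VIEW dbo.Target_Load AS
    SELECT ColA, ColB FROM dbo.Target;    -- identity column deliberately omitted
GO
BULK INSERT dbo.Target_Load
FROM 'C:\data\import.txt'                 -- hypothetical path
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

Pointing the SSIS Bulk Insert task's destination at dbo.Target_Load instead of the base table should have the same effect.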
I have to update a field within a table of 60 records or so. Each record has a different field value; it's type varchar. I was given an Excel file with the field values and was thinking of a bulk update along the lines of BULK INSERT, but I don't recall that being possible that way.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
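For the record, the staging route is only a few statements. A hedged sketch, assuming the Excel sheet is saved as CSV with a key column and the new value (all names and the path are hypothetical):

CREATE TABLE #NewValues (KeyId int NOT NULL, NewValue varchar(100) NOT NULL);

BULK INSERT #NewValues
FROM 'C:\data\newvalues.csv'             -- hypothetical path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

UPDATE t
SET t.SomeField = s.NewValue             -- hypothetical column names
FROM dbo.TargetTable AS t
JOIN #NewValues AS s ON s.KeyId = t.KeyId;

For only 60 rows, generating one UPDATE statement per row with an Excel formula is arguably even less work.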
I have a table containing 8 million records. I need to replace 2 million of these records with the output of scaled-down queries that go something like:

SELECT 1, ShareholderID, Assets1 FROM MyTable (yields approx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields approx. 200,000 records)
...
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields approx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way:
1. SELECT the 6 million records that don't need to be deleted into a #TempTable.
2. Use the statements above to SELECT into the same #TempTable.
3. DROP and recreate the original table.
4. SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?
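A hedged alternative that skips the temp table and the file entirely, assuming the table has just the three columns the queries show: build the replacement table in one minimally logged SELECT ... INTO, append the new rows, then swap names (the keep-predicate is a placeholder):

SELECT *                                  -- the 6 million rows to keep
INTO dbo.MyTable_New
FROM dbo.MyTable
WHERE ShareholderID NOT IN (...);         -- placeholder for the real keep condition

INSERT INTO dbo.MyTable_New               -- append the 2 million replacement rows
SELECT 1, ShareholderID, Assets1 FROM dbo.MyTable
UNION ALL
SELECT 2, ShareholderID, Assets2 FROM dbo.MyTable;
-- ... remaining SELECTs from the list above ...

EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

Indexes and constraints have to be re-created on the new table afterwards, so weigh that against the cost of the drop-and-reload approach.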
I receive the following error message when I try to use the Bulk Insert Task to load BCP data into a table:
Error: 0xC002F304 at Bulk Insert Task, Bulk Insert Task: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 4. Verify that the field terminator and row terminator are specified correctly. Bulk load data conversion error (overflow) for row 1, column 1 (rowno).".
Task failed: Bulk Insert Task
In SSMS I am able to issue the following command and the data loads into a TableName table with no error messages: BULK INSERT TableName FROM 'C:\Data\Db\TableName.bcp' WITH (DATAFILETYPE='widenative');
What configuration is required for the Bulk Insert Task in SSIS to make the data load? BTW, the TableName.bcp file is a bulk copy file of bcp widenative data type. The properties of the Bulk Insert Task are the following: DataFileType: DTSBulkInsert_DataFileType_WideNative; RowTerminator: {CR}{LF}
Any help getting the bcp file to load would be appreciated. Let me know if you require any other information, thanks for all your help. Paul
I'm trying to use BULK INSERT for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNChar for the first column. I've checked that the end of each line is CR LF, therefore the \r\n is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
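One thing worth checking: in a format file, the datatype describes the data file, not the table column. If ImportTest.txt is plain ASCII text, the field should be SQLCHAR even though the column is nvarchar(6); SQLNCHAR there would make the provider read double-byte data from the file and could produce exactly this truncation error. A hedged sketch of how the first lines of a SQL Server 2005 non-XML format file might look (the field count, widths, and second column are hypothetical):

9.0
2
1   SQLCHAR   0   6    "\t"     1   ASXCode    SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   "\r\n"   2   OtherCol   SQL_Latin1_General_CP1_CI_AS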
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to ask about some considerations.
- Performance: how does it compare with T-SQL's BULK INSERT and the bcp utility?
- SQL Server resource usage: when running a memory-based bulk copy, how are the server's resources affected?
- Server-side behavior: when the server is busy, are the updates delayed, or does IRowsetFastLoad::Commit(true) insert immediately?
- Row count: is there a limit on the number of rows that can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
I have a web form with a text field that needs to take in as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE() method in my UPDATE statement, but it cuts off the text after a while. Is there a way to do this insert/update in SQL 2005 without resorting to BULK INSERT? It bloats the transaction log, and turning the logging off requires a call to sp_dboption (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
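In case the offset argument is the culprit: .WRITE appends when the offset is NULL, so a chunked upload should never truncate. A minimal sketch (table, column, and key names are hypothetical; the column must be non-NULL before .WRITE can be used, hence the seed):

DECLARE @PostId int;
DECLARE @chunk nvarchar(4000);
SET @PostId = 1;                          -- hypothetical row key
SET @chunk = N'next piece of the text';   -- one chunk from the web form

-- Seed the column once, since .WRITE cannot operate on a NULL value:
UPDATE dbo.Posts SET Body = N'' WHERE PostId = @PostId AND Body IS NULL;

-- Repeat per chunk: a NULL offset appends to the end of the existing value.
UPDATE dbo.Posts
SET Body.WRITE(@chunk, NULL, NULL)
WHERE PostId = @PostId;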
select * from openrowset(bulk '\\server1\c$\file.txt', SINGLE_BLOB) as t works from SQL Server itself, but doesn't work from any other machine. I get "Operating system error code 5 (Access is denied.)." I am running as the domain admin, and file.txt has full control granted to Everyone. In server1's event log I see an Anonymous Logon.
On my site users can register using the ASP.NET Membership CreateUserWizard control. I am also using the wizard control to design a simple question-and-answer form that logged-in users have access to. It has 2 questions, with a text box for Q1 and a dropdown list for Q2. I have a table in my database called "Players" which has 3 columns:
UserId, primary key, of type uniqueidentifier
PlayerName, type string
PlayerGenre, type string
On completing the wizard and clicking the Finish button, I want the data to be inserted into the SQL Express Players table. I am having problems getting this to work and keep getting exceptions. It would be very helpful if somebody could check the code and advise where the problem is.
To match the answers to the user, I get the UserId and insert this into the database too.

protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
{
    SqlDataSource DataSource = (SqlDataSource)Wizard1.FindControl("InsertArtist1");
    MembershipUser myUser = Membership.GetUser(this.User.Identity.Name);
    Guid UserId = (Guid)myUser.ProviderUserKey;
    String Gender = ((DropDownList)Wizard1.FindControl("PlayerGenre")).SelectedValue;
    DataSource.InsertParameters.Add("UserId", UserId.ToString());
    DataSource.InsertParameters.Add("PlayerGenre", Gender.ToString());
    // Note: no "PlayerName" parameter is ever added here, even though the
    // Players table has a PlayerName column.
    DataSource.Insert();
}
I am working with parent/child tables and want to populate the primary key on insert so that the user does not have to enter it for each record. Here is my code:

InsertCommand="INSERT INTO [Awards] ([UFID], [DateAwarded], [Amount], [AwardingAgency])
    Select UFID, @DateAwarded, @Amount, @AwardingAgency from master where GatorlinkName = @LoginName"
<InsertParameters>
    <asp:Parameter Name="LoginName" Type="String" />
    <asp:Parameter Name="strusername" Type="String" />
    <asp:Parameter Name="UFID" Type="String" />
    <asp:Parameter Name="DateAwarded" Type="DateTime" />
    <asp:Parameter Name="Amount" Type="Decimal" />
    <asp:Parameter Name="AwardingAgency" Type="String" />
</InsertParameters>

The UFID field is the only field that should be populated from SQL data; the others come from a FormView insert form. When I run an insert I get no error, but the insert does not happen. I know that @LoginName works since I am using this same logic in my SELECT statement. Thanks in advance for your help, Ken
Hello, I am wondering: is the transaction log written differently for BULK INSERT vs INSERT? Performance-wise, which operation is generally faster given the same amount of data inserted?
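A hedged illustration of the usual answer: BULK INSERT can be minimally logged when the database is in the BULK_LOGGED (or SIMPLE) recovery model and the target table is locked for the load, which typically makes it much faster than row-by-row INSERT (database, table, and path names are hypothetical; note that BULK_LOGGED affects point-in-time restore for that log interval):

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;   -- hypothetical database name

BULK INSERT dbo.MyTable                          -- hypothetical table
FROM 'C:\data\mytable.dat'                       -- hypothetical path
WITH (TABLOCK);                                  -- table lock helps qualify for minimal logging

ALTER DATABASE MyDb SET RECOVERY FULL;           -- restore the original model afterwards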
Hi~, I have 3 questions about memory-based bulk copy.

1. What is the limit on the number of IRowsetFastLoad::InsertRow() calls before IRowsetFastLoad::Commit(true)? For example, how many rows can be inserted in the sample below (i.e., the maximum value of nCount)?

for (i = 0; i < nCount; i++)
{
    pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData));
}

2. In the above code sample, is there a method for inserting a prepared array at once directly (a BulkData array, not a for loop)?

3. In OLE DB memory-based bulk copy, what is the equivalent of the following T-SQL bulk copy options?

BULK INSERT database_name.schema_name.table_name
FROM 'data_file'
WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
------------------------------------------------------- My solution is like this. Is it correct?
// CoCreateInstance(...)
// Data source
// Create session
I have two SSIS packages that import from the same flat file into the same SQL 2005 table. I have one Flat File connection (to a comma-delimited file) and one OLE DB connection (to a SQL 2005 database). Both packages use these same two Connection Managers. The SQL table allows NULL values for all fields. The flat file has "empty values" (i.e., two adjacent delimiters, or an empty quoted string) for certain columns.
The first package uses the Data Flow Task with the "Keep nulls" property of the OLE DB Destination Editor unchecked. The columns in the source and destination are identically named, so the mapping is assigned automatically based on ordinal position (which is equivalent to the mapping Bulk Insert uses). When this task is executed, no NULL values are inserted into the SQL table for the "empty values" from the flat file; empty string values are inserted instead of NULL.
The second package uses the Bulk Insert Task with the "KeepNulls" property for the task (shown in the Properties pane when the task is selected in the Control Flow window) set to "False". When the task is executed, NULL values are inserted into the SQL table for the "empty values" from the flat file.
So using the Data Flow Task, "" (i.e., an empty string) is inserted. Using the Bulk Insert Task, NULL is inserted (i.e., nothing is inserted; the field is skipped and the value for the record is omitted).
I want to have the exact same behavior on my data in the Bulk Insert Task as I do with the Data Flow Task.
Using the Bulk Insert Task, what must I do to have the Empty String values inserted into the SQL table where there is an "empty value" in the flat file? Why & how does this occur automatically in the Data Flow Task?
From a SQL Profiler trace comparison of the two methods, I do not see where the syntax of the insert command or the statements for the preceding captured steps dictates this difference in behavior for the inserted "" value in the recordset. Please help me understand what is going on here and how to accomplish this using the Bulk Insert Task.
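One hedged avenue: with KeepNulls set to False, bulk insert substitutes a column's DEFAULT (when one exists) for the missing values, so adding empty-string defaults to the affected character columns should make the Bulk Insert Task store '' instead of NULL (table and column names assumed):

ALTER TABLE dbo.Target
ADD CONSTRAINT DF_Target_ColA DEFAULT '' FOR ColA;  -- repeat for each affected column

As for why the Data Flow Task behaves differently: the Flat File source reads an empty field as an empty string rather than NULL (unless "retain null values" is enabled), so with "Keep nulls" unchecked the OLE DB Destination simply inserts that empty string as-is.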
Hi friends, I am trying a bulk insert using SQL Server 2000 with this code:

BULK INSERT xyz
FROM 'D:\authors.txt'
WITH (FIELDTERMINATOR = ',')

but it gives me an error saying:

Could not bulk insert because file 'D:\authors.txt' could not be opened. Operating system error code 21 (error not found).

I checked file security; it has been given full control. Can anyone give me an idea about operating system error code 21? Thanks
Hi, I've an SP that inserts records into one table and then calls another insert SP on a second table. The first table is like a master table and the second is like a child table. After inserting the right record in the master table, I have to insert some records in the child table. These records differ from each other by only two of about ten fields, so what I want is not to call the second SP X times, but only once. Is this possible?

Example:
Table1: Id (identity), Desc;
Table2: Id (identity), Id_table1, Id_TableX, Num, Field1, Field2, ..., Field10.

In Table2 only Id_TableX and Num change every time; the others are all the same (for one record in Table1). How can I do this? Probably with a bulk insert and a bulk update? But can I make a bulk xxx without a file?
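No file is needed; a single set-based INSERT ... SELECT can write all the child rows at once. A hedged sketch, assuming the varying Id_TableX/Num pairs arrive in a temp table that the caller creates and fills before calling the SP (parameter and column names are illustrative, shared fields abbreviated to one):

CREATE PROCEDURE dbo.InsertChildren
    @Id_table1 int,
    @Field1 varchar(50)            -- ... the other shared fields ...
AS
BEGIN
    -- #Pairs is created and filled by the caller: (Id_TableX int, Num int)
    INSERT INTO dbo.Table2 (Id_table1, Id_TableX, Num, Field1)
    SELECT @Id_table1, p.Id_TableX, p.Num, @Field1
    FROM #Pairs AS p;
END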
Hi friends, I am using the BULK INSERT command with my table name but I am facing an error. Is it possible to use BULK INSERT with a temporary table or a table variable? Please help me.
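For what it's worth, BULK INSERT accepts a #temp table as its target, but not a table variable. A minimal sketch (table shape and path are hypothetical):

CREATE TABLE #staging (line varchar(255));

BULK INSERT #staging
FROM 'C:\data\input.txt'               -- hypothetical path
WITH (ROWTERMINATOR = '\n');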
I want to move data from a text file to a SQL table. After DTS creates the table, does it use Bulk Insert to copy the data from the file to the table, or BCP?