Does SQL Server have a way to handle errors in a sproc which would allow
one to insert rows, ignoring rows which would create a duplicate key
violation? I know if one loops one can handle the error on a row-by-row
basis. But is there a way to skip the loop and do it as a bulk insert?
It's easy to do in Access, but I'm curious to know whether SQL Server proper
can handle it like this. I am guessing that a looping operation would be
slower to execute?
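If the rows to be inserted are already sitting in a table (say, a staging table filled by the bulk insert), one set-based sketch that skips would-be duplicates looks like the following; the table and column names here are placeholders, not anything from an actual schema.

    -- Insert only rows whose key is not already present in the target.
    INSERT INTO dbo.TargetTable (KeyCol, DataCol)
    SELECT s.KeyCol, s.DataCol
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.TargetTable AS t
                      WHERE t.KeyCol = s.KeyCol);

Another option sometimes used, if rebuilding the index is acceptable, is to create the unique index or primary key WITH (IGNORE_DUP_KEY = ON), which makes SQL Server discard duplicate rows with a warning instead of failing the whole statement.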
Hi, I have a data file and the contents of it are as follows
2 -- This is the header indicating the no of records in my file
1001|s1
1006|s2
The content of the format file is as follows. It is meant to skip the first column of every row and get only Subs (i.e. s1 and s2).
9.0
2
1 SQLCHAR 0 100 "|" 0 ID ""
2 SQLCHAR 0 100 " " 1 Subs ""
Here is my query to get all the Subs from my data file
SELECT * FROM OPENROWSET( BULK 'datafile.txt',
FORMATFILE = 'FormatFile.fmt',
FIRSTROW = 2 ) AS a
But this query returns only s2 where I was expecting s1 and s2. The reason is that the first row, i.e. the header, doesn't follow the format. Can anyone please let me know how to skip the first line in the data file and get the result as required?
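One workaround sketch, since FIRSTROW skips rows as defined by the format file's terminators (so a header line that doesn't match the format throws the count off): pull every physical line into a one-column staging table first and do the filtering in T-SQL. The staging table name and the assumption that real data rows always contain a pipe are mine, not from the file spec.

    -- Each physical line lands in one column (the default field terminator should not appear in the data).
    CREATE TABLE #RawLines (Line varchar(200));

    BULK INSERT #RawLines
    FROM 'datafile.txt'
    WITH (ROWTERMINATOR = '\n');

    -- Keep only lines that contain a pipe and take the part after it as Subs.
    SELECT SUBSTRING(Line, CHARINDEX('|', Line) + 1, 200) AS Subs
    FROM #RawLines
    WHERE CHARINDEX('|', Line) > 0;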
Hi, I have a data file which consists of data as below:
4
PPU_FFA7485E0D||
T_GLR_DET_11||
While I am inserting into the table using bulk insert, the pipes (||) are also getting inserted into the table. Here is the query I am using to insert the data with bulk insert:
BULK INSERT TABLE_NAME FROM FILE_PATH WITH (FIELDTERMINATOR = ''||'''+',KEEPNULLS,FIRSTROW=2,ROWTERMINATOR = '''')
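For comparison, a non-dynamic sketch of the same load (the table name and path are placeholders): written this way the terminator needs no extra quote doubling, and if the trailing || in the sample rows is really part of each line ending, folding it into the row terminator should keep it out of the table.

    BULK INSERT dbo.TABLE_NAME
    FROM 'C:\data\datafile.txt'
    WITH (
        ROWTERMINATOR = '||\n',   -- assumption: each data line is one value ending in || before the line break; try '||\r\n' for Windows line endings
        FIRSTROW      = 2,
        KEEPNULLS
    );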
I receive the following error message when I try to use the Bulk Insert Task to load BCP data into a table:
Error: 0xC002F304 at Bulk Insert Task, Bulk Insert Task: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 4. Verify that the field terminator and row terminator are specified correctly. Bulk load data conversion error (overflow) for row 1, column 1 (rowno).".
Task failed: Bulk Insert Task
In SSMS I am able to issue the following command and the data loads into a TableName table with no error messages: BULK INSERT TableName FROM 'C:DataDbTableName.bcp' WITH (DATAFILETYPE='widenative');
What configuration is required for the Bulk Insert Task in SSIS to make the data load? BTW, the TableName.bcp file is a bulk copy file created with the bcp widenative data type. The properties of the Bulk Insert Task are the following: DataFileType: DTSBulkInsert_DataFileType_WideNative, RowTerminator: {CR}{LF}
Any help getting the bcp file to load would be appreciated. Let me know if you require any other information, thanks for all your help. Paul
I'm trying to use Bulk Insert for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore the row terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode). Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp FROM 'M:DataASXImportTest.txt' WITH (FORMATFILE='M:DataASXSQLFormatImport.Fmt')
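If the format file itself checks out, one diagnostic sketch is to let the load continue past a few bad rows and write the rejects to an error file so you can see exactly what is being truncated in column 1; the paths below are placeholders for the ones in the statement above, and ERRORFILE needs SQL Server 2005 or later.

    BULK INSERT tbl_ASX_Data_temp
    FROM 'M:\path\to\ASXImportTest.txt'            -- placeholder path
    WITH (
        FORMATFILE = 'M:\path\to\ASXSQLFormatImport.Fmt',   -- placeholder path
        MAXERRORS  = 10,                           -- keep going past the first few bad rows
        ERRORFILE  = 'C:\temp\ASXImportErrors.log' -- rejected rows land here for inspection
    );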
Hi, I need to insert rows into table1 from table2 and table3 but I don't want to insert repeated combinations of col2, col3. So, table1 has the primary key col2, col3.
This the table1:
create table table1( col1 int not null, col2 int not null, col3 int not null, constraint PK_table1 primary key (col2, col3) )
This is my "insert" code:
INSERT INTO table1 SELECT table2.col1,table2.col2, table3.col3 FROM table2, table3 WHERE table2.col1 = table3.col1
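A sketch of one way to keep this insert set-based while skipping (col2, col3) pairs that already exist in table1 or that repeat within the join itself; picking MIN(col1) when the same pair appears more than once is an arbitrary choice on my part.

    INSERT INTO table1 (col1, col2, col3)
    SELECT MIN(t2.col1) AS col1,      -- arbitrary pick for duplicated (col2, col3) pairs in the source
           t2.col2,
           t3.col3
    FROM table2 AS t2
    JOIN table3 AS t3
      ON t2.col1 = t3.col1
    WHERE NOT EXISTS (SELECT 1
                      FROM table1 AS t1
                      WHERE t1.col2 = t2.col2
                        AND t1.col3 = t3.col3)
    GROUP BY t2.col2, t3.col3;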
I have a table using an identity column as its Primary Key and two columns (table reduced for simplicity), EmployeeNumber and ArrivalTime.

CREATE TABLE [tblRecords] (
    [ID] [bigint] IDENTITY (1, 1) NOT NULL ,
    [EmployeeNumber] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [ArrivalTime] [datetime] NOT NULL ,
    CONSTRAINT [PK_tblRecords] PRIMARY KEY CLUSTERED ([ID]) ON [PRIMARY]
) ON [PRIMARY]
GO

I have an insert procedure that checks for duplicates before inserting a new record:

IF (SELECT TOP 1 [ID] FROM tblRecords WHERE EmployeeNumber = @SocialSecurity) IS NULL
BEGIN
    INSERT INTO tblRecords (EmployeeNumber, ArrivalTime)
    VALUES (@EmployeeNumber, @ArrivalTime)
    SELECT SCOPE_IDENTITY()
END
ELSE
    SELECT 0 AS DuplicateRecord

In 99.9% of the cases, this works well. However, in the event that the insert attempts are literally "ticks" apart, the "SELECT TOP 1..." command completes on both attempts before the first attempt completes. So I end up with duplicate entries if the procedure is called multiple times very quickly. The system needs to prevent duplicate EmployeeNumbers within the past 45 days, so setting the EmployeeNumber to UNIQUE would not work. I can check for older entries (45 days or newer) very easily, but I do not know how to handle the times when the procedure is called multiple times within milliseconds. Would a TRANSACTION with a duplicate check after the INSERT and a ROLLBACK work in this case? Any help is greatly appreciated! -E
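Rather than checking for the duplicate after the INSERT and rolling back, the pattern usually suggested is to make the existence check itself take a blocking lock inside a transaction, so two near-simultaneous calls cannot both pass it before either inserts. A sketch of the procedure body under that approach (checking @EmployeeNumber rather than @SocialSecurity, and folding in the 45-day window described above, is my assumption):

    BEGIN TRANSACTION;

    IF NOT EXISTS (SELECT 1
                   FROM tblRecords WITH (UPDLOCK, HOLDLOCK)   -- range lock held until commit
                   WHERE EmployeeNumber = @EmployeeNumber
                     AND ArrivalTime >= DATEADD(DAY, -45, GETDATE()))
    BEGIN
        INSERT INTO tblRecords (EmployeeNumber, ArrivalTime)
        VALUES (@EmployeeNumber, @ArrivalTime);

        SELECT SCOPE_IDENTITY();
    END
    ELSE
        SELECT 0 AS DuplicateRecord;

    COMMIT TRANSACTION;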
Hi guys, my little bulk insert is only bringing in every second row of a CSV file. This is not good, as I need every row. My SQL command is as follows: InsertCommand="BULK INSERT TBL_Unitel_services FROM 'C:/webroot/servicedesk/csvs_Services/csv.csv' WITH (FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR = '', MAXERRORS = 0) "
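One thing worth ruling out, since the forum has stripped the ROWTERMINATOR value out of the statement above: a terminator that does not match the file's actual line endings can make BULK INSERT merge physical lines, which can look like every second row disappearing. A hedged sketch with the terminator spelled out explicitly:

    BULK INSERT TBL_Unitel_services
    FROM 'C:/webroot/servicedesk/csvs_Services/csv.csv'
    WITH (FIRSTROW = 1,
          FIELDTERMINATOR = ',',
          ROWTERMINATOR = '\n',    -- try '\r\n' instead, or the hex form '0x0a' on newer versions, depending on how the file ends its lines
          MAXERRORS = 0);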
Got the bulk insert thing going real nice thanks to your feedback! But now I notice that the BULK INSERT creates 2 identical rows, while the flat file has the column headers and then the corresponding row, just one row. Why would it do that? Interestingly, if the data file and format file are residing on a shared folder on SQL 2005 and I run the BULK INSERT from Studio, it only inserts one row. But when I do the BULK INSERT via the user interface it creates 2 rows. What is going on here?
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Desktop Engine on Windows NT 5.1 (Build 2600: Service Pack 2)
I am using BULK INSERT to import some pipe-delimited flat files into a database.
I am firstly converting the file using VB.NET, to ensure each line of the file has a carriage return (by using streamwriter.writeline), and I am also ensuring there is no blank line at the end of the file (by using streamwriter.write).
Once I have done this, my BULK INSERT command appears to work OK. This is how I am using the statement:
BULK INSERT tempHISTORY FROM 'C:TEMPHISTORY.TXT' WITH ( FIRSTROW = 2, FIELDTERMINATOR = '|', ROWTERMINATOR = ' ' )
NB: The first row in the file is a header row.
This appears to work OK; however, I have found that certain files seem to miss the final line of the file! I have analysed these files in case they have an inconsistent number of columns, but they don't.
I have also found that if I knock off the last column of the tempHISTORY table, the correct number of rows are imported. However, of course, I can't just discard one of the columns from the file, I need to import the entire file.
I cannot understand why BULK INSERT is choosing to miss the final line in the file, when the schema of the destination table matches the structure of the file.
I have a file which has some wind data that I am trying to import into a SQL database through bulk insert. If the script works as it is supposed to, I should see 144 rows affected, but I see 0 rows affected.
BULK INSERT TOWER.RAWINTERFACE_1058 FROM 'C:Temp900020150427583.txt' WITH(CHECK_CONSTRAINTS,CODEPAGE='RAW',DATAFILETYPE='char',FIELDTERMINATOR=' ',ROWTERMINATOR=' ',FIRSTROW=172)
The code works if the file is large, but if it's small, 0 rows are affected. Also, if I remove the header rows then the file works again. I want to understand what is going on here. I am including a screenshot of the file in Notepad++. I have tried changing the row terminator to ' ' and ' ', and also tried to change the codepage, but nothing seems to work. No error file is being generated either, if I give an error file option.
I have a stored procedure which will run automatically. I've got try...catch code in the procedure, but I found a bug with the code where, if there are any import errors, it doesn't recognize that there was an error and it runs through the try code as though there were no problems. (I reported the bug.) I added some code using @@rowcount to check if there were rows imported, and if not, it moves the data file to an error folder so I know there was a problem with the import. But this only checks whether at least one row was imported, not whether all the rows in the data file have been imported. (I.e. if the first row imported correctly and the second did not, it still sees it as successful.) The problem is some of the data files have only one row to import and some have multiple rows. Is there a way to count the number of rows in the data file, then count the number of rows imported, to verify they are the same? Thanks, Laura
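A sketch of one way to do that comparison, with the staging table, path, and terminators as placeholders: capture @@ROWCOUNT from the real import, then load the same file's raw lines into a scratch table purely to count them.

    DECLARE @imported int, @lines_in_file int;

    BULK INSERT dbo.TargetTable
    FROM 'C:\import\datafile.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
    SET @imported = @@ROWCOUNT;

    -- Second pass: every physical line becomes one row in a single-column scratch table
    -- (assumes the raw lines contain no tab characters, the default field terminator).
    CREATE TABLE #RawLines (Line nvarchar(4000));
    BULK INSERT #RawLines
    FROM 'C:\import\datafile.txt'
    WITH (ROWTERMINATOR = '\n');
    SET @lines_in_file = @@ROWCOUNT;

    IF @imported <> @lines_in_file
        RAISERROR('Only %d of %d rows were imported.', 16, 1, @imported, @lines_in_file);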
I am trying to BULK INSERT csv files using a stored procedure in SQL SERVER 2008R2 SP3. Although the files contain several thousand lines and BULK INSERT returns no errors, no data is actually imported into the table. Every field in the table is a NVARCHAR(50) datatype.
Here is the code for the operation (only the parameters for the insert itself):
set @open = 'bulk insert [DWHStaging].[dbo].[Abverkaufsquote] from '''
set @path = 'G:DataStagingDWHStagingSourceAbverkaufsquote'
set @params = ''' with (firstrow = 2 , datafiletype = ''widechar'' , fieldterminator = '';'' , rowterminator = '' '' , codepage = ''1252'' , keepnulls);'
The csv file originates from a DB2 database. Using exactly the same code base I can import several other types of CSV files without problem.
The files are stored on the local server as UCS-2 Little Endian, and one difference is that the files that do not import do not include a BOM. The other difference is that the failed files are non-Unicode files.
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file TEXT_1400.txt has 1400 columns. I am unable to load data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns are allowed per table in SQL Server 2008.
We can still go ahead and create a ‘Wide Table’ (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip these columns, which disturbs the column mapping and results in data conversion and truncation errors.
Is there any proper way to load this kind of file into the db table?
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file to an empty SQL table that has an identity column defined. How do I tell the Bulk Insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to create the identity column in the table too.
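One workaround sketch for the plain BULK INSERT side (table, view, column names, and path are all assumed): bulk insert through a view that leaves out the identity column, so the engine generates the ID values itself. You may be able to point the SSIS Bulk Insert Task at the view instead of the table the same way.

    -- The view exposes every column except the identity column.
    CREATE VIEW dbo.vImportTarget
    AS
    SELECT Col1, Col2, Col3
    FROM dbo.TargetTable;
    GO

    BULK INSERT dbo.vImportTarget
    FROM 'C:\import\data.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');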
Hello All, does the BCP utility enable you to selectively import rows from a flat file to a table? For example: the first column in my flat file contains a record type - 1, 2..7. I only need to import types 1, 2, & 3. Can this be specified in the .fmt file? Thanks in advance, hharry
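As far as I know the .fmt file only describes layout, not filtering, but one sketch of an alternative (SQL Server 2005 or later, with assumed table, column, and path names) is to read the file through OPENROWSET(BULK ...) using the same format file and filter in the WHERE clause:

    INSERT INTO dbo.TargetTable (RecordType, Col2, Col3)
    SELECT src.RecordType, src.Col2, src.Col3
    FROM OPENROWSET(BULK 'C:\import\data.txt',
                    FORMATFILE = 'C:\import\data.fmt') AS src
    WHERE src.RecordType IN ('1', '2', '3');   -- keep only the record types you need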
Is there a way to split at this point and put the 12 rows in a different location? The task is twofold - I don't need these control rows in my data and I need value of "records" to verify loaded number of rows.
UPDATED: After some testing I found out that the Flat File source does not see that footer at all. This is good and bad - I do want to load this metadata into some other tables.
I'm doing a bulk insert from a text file into SQL Server 7 and I'm getting an error:
Server: Msg 4867, Level 16, State 1, Line 1 Bulk insert data conversion error (overflow) for row 1, column 169 (LOT_WIDTH). Server: Msg 7399, Level 16, State 1, Line 1 OLE DB provider 'STREAM' reported an error. The provider did not give any information about the error. The statement has been terminated.
Now my lot-width field coming in is defined as a numeric 9(5). My table is defined as an INT.
I am attempting to bulk insert a comma-delimited text file with double quotes as the text qualifier, but I keep getting an error message (EOF) on the bulk insert.
I think the problem lies in my format file (see below)
Please take a look and let me know what I am missing?
Thanks, Matt
Error message: Msg 4832, Level 16, State 1, Line 1 Bulk load: An unexpected end of file was encountered in the data file. Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
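Without seeing the format file it is hard to say where the unexpected EOF is coming from, but if wrestling with the quote qualifiers continues, one fallback sketch (staging table and path are placeholders, and the final split into columns is left out) is to pull raw lines in first and strip the double quotes in T-SQL:

    CREATE TABLE #RawLines (Line nvarchar(4000));

    BULK INSERT #RawLines
    FROM 'C:\import\data.csv'
    WITH (ROWTERMINATOR = '\n');       -- one physical line per row, no field parsing yet

    -- Strip the text qualifiers; splitting the remaining comma-separated values
    -- into real columns is not shown in this sketch.
    SELECT REPLACE(Line, '"', '') AS CleanedLine
    FROM #RawLines;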
I found an error in bulk insert: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly."
I did read a few articles saying that after applying SP2 and hotfixes this error should be fixed, but unfortunately it is not in my case. What should I do to fix it?
This is my script: - BULK INSERT wng01_work..nw_business_person FROM 'g:SQLFTPCDIS_Extractew_worker.dat' WITH ( MAXERRORS = 1, FORMATFILE ='g:sqlftpcdis_extractew_work.fmt' )
When I try to execute the code, I get the following error, on this line: select @cmd = @cmd + ' with (Fieldterminator = ',')'
Msg 141, Level 15, State 1, Procedure Imp_Header_PO_sp, Line 46
A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations.
I've tried to find a fix for this error, but it seems to only relate to a SELECT statement and not a BULK INSERT. Can someone please help me figure out how to fix this error?
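The Msg 141 here is most likely the quoting rather than BULK INSERT itself: the unescaped quotes around the comma end the string literal early, so the parser sees the variable assignment plus a second, stand-alone expression in the same SELECT. Inside a string literal each embedded single quote has to be doubled, so the line would read roughly:

    select @cmd = @cmd + ' with (Fieldterminator = '','')'
    -- the built string then contains:  with (Fieldterminator = ',')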
Simple test project. Created Flat File connection, database connection (both local), and Bulk Insert Task. When running the package I get the following error:
[Bulk Insert Task] Error: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Bulk load: An unexpected end of file was encountered in the data file.".
I've tried different settings for the Flat File config, and the database connection, but still get the error. Any suggestions would be helpful.
Hi All, I have this data file with fixed length (see below). I am able to insert it into the database using bcp, but now I want to skip (not insert) the rows which start with the letter 'S'. Is there a way to do it? By the way, I am using the -F2 option to skip the first record. Here is my data:

Record 1 04XXX
2 13106900240120042003040045061 Testing N POLYDOROS TRUSTEEE
2 12621241640280041004040045633 What are they MARTIN &XXXXX
S C1000003200400409850000059611000000500001000000001 9613000000576497500
S X1000003200000209850000059613000000000000000000001 9613000000573497000

Thanks for your help. Ted Lee
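bcp and BULK INSERT cannot filter rows by their content on their own, so one sketch of a workaround (staging table name, path, and terminator are assumptions): load each raw line into a one-column staging table, drop the lines that start with 'S', and then parse the fixed-length fields from what remains.

    CREATE TABLE #RawLines (Line varchar(255));

    BULK INSERT #RawLines
    FROM 'C:\import\datafile.txt'
    WITH (FIRSTROW = 2,            -- same effect as bcp's -F2
          ROWTERMINATOR = '\n');

    DELETE FROM #RawLines
    WHERE LEFT(Line, 1) = 'S';     -- discard the 'S' records before parsing the fixed-length fields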
In the Flat File Source properties window there's a Preview node; when we check that node there's an option to specify how many rows of data to skip. Does it affect the result?
I have this particular problem with the unpivot. Below is my flat file source. The dates can go up to 130 columns, and this count can also vary. SM, SR, SB are values repeating for different instruments; they are the values of the instrument on the particular dates. This is a snapshot of one feed. Other feeds may have differing dates. How do I read this file?
Problem 1: If I skip the first row and unpivot the 2nd row, then with a new feed, with new dates, my SSIS package will bomb as it will not find the column names.
Problem 2: If I uncheck "Use first row as column headers" then problem 1 is solved, but the output will be
20080101
20061102
20061103 1.2
1.3
1.2.
1.5
.....and so on..
Is there any other way to fix this? These are feeds with the spread values of instruments on particular dates. Please help.
RUN 2.01E+11 132238 0 45
INSTRID DATATYPES 20081101 20061102 20061103
Z03369 SM 1.1 1.2 1.3
Z03369 SB 1.3 1.3 1.7
Help! I am importing a large comma-delimited text file into an existing table using the BULK INSERT command. The table is 4 columns (char16, char16, varchar50, char1). The first 100 or so lines go in without an error. Then I receive an error stating that an entry is too long for the field in the database, and it kicks me out. The entry is 50 characters, which is allowed. Any ideas why this would happen?
bulk insert DB_Kash.dbo.tb_category from 'C:cpdataSUPPLIER_5305OUTPUTTb_Category.txt' with (formatfile = 'C:b_category.txt')
This works on one SQL Server and the same code does not work on another server. I have taken care to see that the path is appropriate. Are there any server settings involved? The table structure is the same in both cases and the select into/bulk copy option has been selected. The table has full-text indexing set on it, but I don't think that this would make any difference. The only error it gives is "OLE DB Stream reported an error. The stream does not provide any explanation regarding this error." Something like this.
In fact the same format file and data file work fine when I am doing BCP. I have tried with the select into/bulk copy option on and off too. Any info on this is greatly appreciated.