Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38
Copyright (c) 1988-2003 Microsoft Corporation
Desktop Engine on Windows NT 5.1 (Build 2600: Service Pack 2)
I am using BULK INSERT to import some pipe-delimited flat files into a database.
I first convert the file using VB.NET, to ensure each line of the file ends with a carriage return (by using StreamWriter.WriteLine), and I also ensure there is no blank line at the end of the file (by writing the last line with StreamWriter.Write).
Once I have done this, my BULK INSERT command appears to work OK. This is how I am using the statement:
BULK INSERT
tempHISTORY
FROM 'C:\TEMP\HISTORY.TXT'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
NB: The first row in the file is a header row.
This appears to work OK; however, I have found that certain files seem to miss the final line of the file! I have analysed these files in case they have an inconsistent number of columns, but they don't.
I have also found that if I knock off the last column of the tempHISTORY table, the correct number of rows are imported. However, of course, I can't just discard one of the columns from the file, I need to import the entire file.
I cannot understand why BULK INSERT is choosing to miss the final line in the file, when the schema of the destination table matches the structure of the file.
Hi guys, my little bulk insert is only bringing in every second row of a CSV file. This is not good, as I need every row. My SQL command is this:
InsertCommand="BULK INSERT TBL_Unitel_services FROM 'C:/webroot/servicedesk/csvs_Services/csv.csv' WITH (FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR = '', MAXERRORS = 0) "
Got the bulk insert thing going real nice thanks to your feedback! But now I notice that the BULK INSERT creates 2 identical rows, while the flat file has the column headers and then the corresponding row, just one row. Why would it do that? Interestingly, if the data file and format file are residing on a shared folder on SQL 2005 and I run the BULK INSERT from Studio, it only inserts one row. But when I do the BULK INSERT via the user interface it creates 2 rows. What is going on here?
I have a file which has some wind data that I am trying to import into a SQL database through BULK INSERT. If the script works as it is supposed to, I should see 144 rows impacted, but I see 0 rows affected.
BULK INSERT TOWER.RAWINTERFACE_1058 FROM 'C:\Temp\900020150427583.txt' WITH (CHECK_CONSTRAINTS, CODEPAGE='RAW', DATAFILETYPE='char', FIELDTERMINATOR=' ', ROWTERMINATOR=' ', FIRSTROW=172)
The code works if the file is large, but if it is small, 0 rows are affected; also, if I remove the header rows then the file works again. I want to understand what is going on here. I am including a screenshot of the file in Notepad++. I have tried other values for the row terminator and also tried changing the code page, but nothing seems to work. No error file is being generated either, if I give an error file option.
I have a stored procedure which will run automatically. I've got try...catch code in the procedure, but I found a bug where, if there are any import errors, it doesn't recognize that there was an error and it runs through the try code as though there were no problems. (I reported the bug.) I added some code using @@ROWCOUNT to check if there were rows imported, and if not, it moves the data file to an error folder so I know there was a problem with the import. But this only checks that at least one row was imported, not that all the rows in the data file have been imported (i.e., if the first row imported correctly and the second did not, it still sees it as successful). The problem is some of the data files have only one row to import and some have multiple rows. Is there a way to count the number of rows in the data file, then count the number of rows imported, to verify they are the same number? Thanks, Laura
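Something along these lines is the kind of check I am picturing; the table and file names below are just placeholders, not my real ones, and I haven't tested it:

DECLARE @Imported INT, @FileRows INT

-- The real import into the destination table.
BULK INSERT dbo.MyDestination
FROM 'C:\Import\datafile.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')
SET @Imported = @@ROWCOUNT

-- Load the same file into a one-column staging table just to count its lines;
-- with a single target column, each whole line should land in that column.
CREATE TABLE #RawLines (RawLine NVARCHAR(MAX))
BULK INSERT #RawLines
FROM 'C:\Import\datafile.txt'
WITH (ROWTERMINATOR = '\n')
SELECT @FileRows = COUNT(*) FROM #RawLines

IF @Imported <> @FileRows
BEGIN
    -- move the file to the error folder / raise an error here
    RAISERROR('Row count mismatch: %d imported vs %d in file', 16, 1, @Imported, @FileRows)
END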
I am trying to BULK INSERT csv files using a stored procedure in SQL SERVER 2008R2 SP3. Although the files contain several thousand lines and BULK INSERT returns no errors, no data is actually imported into the table. Every field in the table is a NVARCHAR(50) datatype.
Here is the code for the operation (only the parameters for the insert itself):
set @open = 'bulk insert [DWHStaging].[dbo].[Abverkaufsquote] from '''
set @path = 'G:DataStagingDWHStagingSourceAbverkaufsquote'
set @params = ''' with (firstrow = 2 , datafiletype = ''widechar'' , fieldterminator = '';'' , rowterminator = '' '' , codepage = ''1252'' , keepnulls);'
The csv file originates from a DB2 database. Using exactly the same code base I can import several other types of CSV files without problem.
The files are stored on the local server as UCS-2 Little Endian, and one difference is that the files that do not import do not include a BOM. The other difference is that the failed files are non-Unicode files.
Does SQL Server have a way to handle errors in a sproc which would allow one to insert rows, ignoring rows which would create a duplicate key violation? I know if one loops one can handle the error on a row by row basis. But is there a way to skip the loop and do it as a bulk insert? It's easy to do in Access, but I'm curious to know if SQL Server proper can handle it like this. I am guessing that a looping operation would be slower to execute?
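To illustrate what I mean by skipping the duplicates without a loop, this is the sort of set-based insert I am after, assuming the rows have first been bulk loaded into a staging table (all the table and column names here are made up):

-- Insert only the staged rows whose key is not already in the target;
-- rows that would violate the primary key are simply left behind.
INSERT INTO dbo.TargetTable (ID, Col1, Col2)
SELECT s.ID, s.Col1, s.Col2
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.ID = s.ID)

-- (I gather a unique index built with the IGNORE_DUP_KEY option is another way
-- to have duplicate inserts discarded instead of failing the whole statement.)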
In the T-SQL 2012 update script listed below, it only works for a few records, since TST.dbo.LockCombination.seq only contains the value 1 in most cases. Basically, for every join listed below there should be 5 records, where each record has a distinct seq value of 1, 2, 3, 4, and 5. My goal is to work out how to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values between 2 and 5, and then run the following update statement. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET LKC.combo = lockCombo2
FROM [LockerPopulation] A
JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
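To show the shape of the insert I am after, this is roughly what I have in mind; I am guessing at the LockCombination columns (lockerID, seq, combo), so treat the column list as a placeholder:

-- Generate seq values 1-5 for every locker and insert the combinations that are missing.
INSERT INTO TST.dbo.LockCombination (lockerID, seq, combo)
SELECT LKR.lockerID, n.seq, ''          -- combo left blank, to be filled in by the update afterwards
FROM TST.dbo.Locker AS LKR
CROSS JOIN (SELECT 1 AS seq UNION ALL SELECT 2 UNION ALL SELECT 3
            UNION ALL SELECT 4 UNION ALL SELECT 5) AS n
WHERE NOT EXISTS (SELECT 1
                  FROM TST.dbo.LockCombination AS LKC
                  WHERE LKC.lockerID = LKR.lockerID
                    AND LKC.seq = n.seq)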
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file TEXT_1400.txt has 1400 columns. I am unable to load data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns are allowed per table in SQL Server 2008.
We can still go ahead and create a ‘Wide Table’ (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip these columns, which disturbs the column mapping and results in data conversion and truncation errors. Is there any proper way to load this kind of file into the db table?
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file into an empty SQL table that has an identity column defined. How do I tell the Bulk Insert Task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to keep the identity column in the table too.
I have to update a field within a table of 60 records or so. Each record has a different field value, and it's type varchar. I was given an Excel file with the field values and was thinking of a bulk update, like a bulk insert, but I don't recall that being possible that way.
Is the only way to create a table, bulk insert into it, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it, otherwise I'll take the latter route. Thanks!
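In other words, the latter route I am picturing looks something like this after loading the spreadsheet (exported to CSV) into a staging table; the table and column names are invented:

-- Staging table holds key/new-value pairs bulk inserted from the CSV export of the Excel file.
UPDATE t
SET t.MyField = s.NewValue
FROM dbo.MyTable AS t
JOIN dbo.Staging_ExcelValues AS s
  ON s.RecordID = t.RecordID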
I have a table containing 8 million records. I need to replace 2 million of these records with a scaled-down query that goes something like:
SELECT 1, ShareholderID, Assets1 FROM MyTable (yields approx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields approx. 200,000 records)
...
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields approx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way:
1) SELECT the 6 million records that don't need to be deleted into a #TempTable.
2) Use the statements above to SELECT into the same #TempTable.
3) DROP and recreate the original table.
4) SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?
Hi, I need to write a query which I have never attempted before and could do with some help.
I have a Groups table and a Users_Groups lookup table. In this model, users can only be assigned to 1 group. If a group is deleted, a trigger should fire and delete any rows in Users_Groups having a matching Groups.Ref. Unfortunately, the trigger hasn't been firing and I now have a load of defunct rows in Users_Groups relating users to groups which do not exist. I now need to find all of these defunct rows in Users_Groups so that I can delete them. How can I find rows in Users_Groups where the parent rows and refs in Groups are null? I've tried searching the net for something similar but don't even know how to word the search properly to get any half relevant results. Cheers.
PS, I do realise I need to tighten the constraints on my database.
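The kind of query I am trying to put into words is something like this, assuming Users_Groups has a GroupRef column pointing at Groups.Ref (that column name is a guess):

-- Find Users_Groups rows whose parent group no longer exists.
SELECT UG.*
FROM dbo.Users_Groups AS UG
LEFT JOIN dbo.Groups AS G
  ON G.Ref = UG.GroupRef
WHERE G.Ref IS NULL

-- Once checked, the same pattern can delete them.
DELETE UG
FROM dbo.Users_Groups AS UG
LEFT JOIN dbo.Groups AS G
  ON G.Ref = UG.GroupRef
WHERE G.Ref IS NULL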
I receive the following error message when I try to use the Bulk Insert Task to load BCP data into a table:
Error: 0xC002F304 at Bulk Insert Task, Bulk Insert Task: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 4. Verify that the field terminator and row terminator are specified correctly. Bulk load data conversion error (overflow) for row 1, column 1 (rowno).".
Task failed: Bulk Insert Task
In SSMS I am able to issue the following command and the data loads into a TableName table with no error messages: BULK INSERT TableName FROM 'C:\Data\Db\TableName.bcp' WITH (DATAFILETYPE='widenative');
What configuration is required for the Bulk Insert Task in SSIS to make the data load? BTW, the TableName.bcp file is a bulk copy file created with the bcp widenative data type. The properties of the Bulk Insert Task are the following: DataFileType: DTSBulkInsert_DataFileType_WideNative, RowTerminator: {CR}{LF}
Any help getting the bcp file to load would be appreciated. Let me know if you require any other information, thanks for all your help. Paul
I'm trying to use BULK INSERT for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNChar for the first column. I've checked that the end of each line is CR LF, therefore the terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
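For reference, the non-XML format file I am working from looks roughly like this; only the first column (ASXCode) is real, the other names, lengths and the tab field terminator are placeholders, and the 9.0 on the first line assumes SQL Server 2005. My understanding is that SQLCHAR is normally used for every field of a plain character data file (SQLNCHAR being for Unicode data files), which may be part of my problem:

9.0
3
1   SQLCHAR   0   6     "\t"     1   ASXCode       SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   100   "\t"     2   CompanyName   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   50    "\r\n"   3   LastPrice     SQL_Latin1_General_CP1_CI_AS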
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to know some considerations.
- performance: compared with T-SQL's "BULK INSERT ..." and the bcp utility
- SQL Server resource usage: how a memory-based bulk copy affects server resources while it runs
- server-side behaviour: when the server is busy, does the delayed update mean IRowsetFastLoad::Commit(true) can still insert right away?
- row count: is there a limit on the number of rows that can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
I have a web form with a text field that needs to take in as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE() method in my update statement, but it cuts off the text after a while. Is there a way to do this insert/update in SQL 2005 without resorting to BULK INSERT? It bloats the transaction log, and turning the logging off requires a call to sp_dboption (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
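For context, this is roughly how I am calling .WRITE at the moment; the table, column, and variable names are stand-ins for my real ones, and appending with a NULL offset is what I understood the documentation to describe:

-- Sketch of appending a chunk of text to an nvarchar(max) column.
DECLARE @Chunk NVARCHAR(MAX)
DECLARE @DocumentID INT
SET @Chunk = N'...next block of user text...'
SET @DocumentID = 1

-- .WRITE with a NULL offset appends @Chunk to the end of the existing value
-- (the column value must not be NULL for .WRITE to work).
UPDATE dbo.Documents
SET Body.WRITE(@Chunk, NULL, NULL)
WHERE DocumentID = @DocumentID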
Just thought I would ask the question on here with regards to a problem we have. We have a PDA-based auditing tool which uses SQL Compact to store information via a Compact Framework application. Now we have a user who swears blind they have carried out an audit, but there is no trace of it anywhere on the database, even though there are audits on there prior to the said date and after, and there is no other facility to delete data from the app other than a complete wipe of the database, in which case everything would have gone!
Now, the data is stored over 4 tables, and there are no traces of any unrelated data between the tables etc., so it has disappeared. Does anybody have any experience of data simply vanishing into thin air?
I have a SQL 2000 database in which reports are generated on a monthly basis from the data inside one of my tables. The reports have been working fine, until some of the rows seemed to have disappeared!
I know the data used to be in the table, since it is showing on the old reports; however, when I try to pull that same data, it is not in the database at all.
Does anyone have any ideas on what could have caused this or how I can resolve it?
Hello, I am wondering: is the transaction log written differently for BULK INSERT vs INSERT? Performance-wise, which operation is generally faster given the same amount of data inserted?
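To make the comparison concrete, this is the kind of pair I am measuring (file, table, and column names are placeholders). My understanding is that BULK INSERT can be minimally logged only under the bulk-logged or simple recovery model, and typically needs the TABLOCK hint, whereas a plain INSERT is fully logged:

-- Bulk path: a candidate for minimal logging (bulk-logged/simple recovery + TABLOCK).
BULK INSERT dbo.TargetTable
FROM 'C:\Data\rows.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK)

-- Set-based INSERT of the same data: fully logged.
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SourceTable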
I have enabled drillthrough on a cube (AS 2005) and selected the columns required. This worked. When I use the drillthrough option in Excel, drillthrough queries return fewer rows than expected. For example, for one cell combination (pivot table) I saw 21,198 rows, but after accessing the detail the query returned just 21,106 rows. Fewer rows than expected.
I'm trying to create a performant script to copy records from a table in a source database, to an identical table in a destination database.
In SQL 2000, I used to create a little lookup which did a count using certain fields. If the record was missing, I executed an INSERT query, otherwise an UPDATE query. The result was that the table on the destination side was always up to date. Duplicate rows were out of the question.
This was, if I'm not mistaken, a Data Transformation, using a bit of custom VBA code to govern the transfer. For each source row, the custom code was executed. Depending on the result of the custom code, a different query was launched.
Now I'm trying to do the same using SSIS in SQL 2005. Is there a task which does this for me, or do I have to script again? In the latter case, which type of task would I use?
(I thought of the Script Task, but then I would need to set up quite a bit myself.)
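For reference, the per-row lookup my old custom code implemented boils down to this set-based equivalent in T-SQL (the database, table, and column names here are just examples):

-- Update destination rows that already exist...
UPDATE d
SET d.Col1 = s.Col1,
    d.Col2 = s.Col2
FROM DestDB.dbo.MyTable AS d
JOIN SourceDB.dbo.MyTable AS s
  ON s.KeyID = d.KeyID

-- ...and insert the source rows that are missing, so no duplicates appear.
INSERT INTO DestDB.dbo.MyTable (KeyID, Col1, Col2)
SELECT s.KeyID, s.Col1, s.Col2
FROM SourceDB.dbo.MyTable AS s
WHERE NOT EXISTS (SELECT 1 FROM DestDB.dbo.MyTable AS d WHERE d.KeyID = s.KeyID)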
Hi~, I have 3 questions about memory based bulk copy.
1. What is the limit on the number of IRowsetFastLoad::InsertRow() calls before IRowsetFastLoad::Commit(true)? For example, how many rows can be inserted in the sample below (i.e., what is the max value of nCount)?
for (i = 0; i < nCount; i++) { pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData)); }
2. In the above code sample, is there no method for inserting a prepared array all at once (a BulkData array, rather than a for loop)?
3. In OLE DB memory-based bulk copy, what is the equivalent of the T-SQL bulk copy options below?
BULK INSERT database_name.schema_name.table_name FROM 'data_file' WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
------------------------------------------------------- My solution is like this. Is it correct?
// CoCreateInstance(...);
// Data source
// Create session
Hi, I used the /e option in my bcp code, yet did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows on an FTP server, and I use this code to bcp them into SQL Server. Can anyone check if this code is good for the process? I am missing one million rows in the bcp process and do not know why. I put the /e in to see if there is any error, but could not see any error file on my hard drive. Please check it out and let me know.
SQL 2k, DDL below.
I have a simple table with the following data:
fldYear fldCode1 fldCode2
2000    ABC1     ABC12
2000    ABC1     ABC13
2001    ABC1     ABC12
2002    ABC1     ABC12
2002    ABC1     ABC13
I need to know, for every distinct combination of fldCode1 and fldCode2, if there are any years missing. For example,
SELECT DISTINCT fldCode1, fldCode2 FROM MyTable
returns
ABC1 ABC12
ABC1 ABC13
I need to know that in 2001 there was no entry for ABC1/ABC13.
Thanks!
Edward
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[MyTable]
GO
CREATE TABLE [dbo].[MyTable] (
    [fldYear] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [fldCode1] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [fldCode2] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
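Something like the following is the shape of query I think I am after on SQL 2000, cross joining the distinct years against the distinct code pairs and keeping only the gaps (untested):

SELECT y.fldYear, c.fldCode1, c.fldCode2
FROM (SELECT DISTINCT fldYear FROM dbo.MyTable) AS y
CROSS JOIN (SELECT DISTINCT fldCode1, fldCode2 FROM dbo.MyTable) AS c
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MyTable AS m
                  WHERE m.fldYear = y.fldYear
                    AND m.fldCode1 = c.fldCode1
                    AND m.fldCode2 = c.fldCode2)

-- Only years that appear somewhere in the table are checked; for the sample data
-- above this should return 2001 / ABC1 / ABC13 as the missing combination.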
Running this code on my PC via VS 2005, .NET version 2.0.50727 on the server (shown in IIS). The code is in ASP.NET 2.0 and is a VB.NET console application, using SSIS 2005.
Problem & Info:
I am bringing in an Excel file. I need to first strip out any non-detail rows, such as the breaks you see with totals and whatnot. I should in the end have only detail rows left before I start moving them into my SQL table. I'm not sure how to strip this information out in SSIS, specifically how to narrow it down to the right component and how to actually configure that component to do this, based on my Excel file here: http://www.webfound.net/excelfile.xls
Then, I assume I just use a Flat File Source component or something to actually take the columns in the Excel file and feed them into an OLE DB destination to shove each column into a corresponding column in my SQL Server table. I have used a Flat File Source in the past to do so with a comma-delimited txt file, but never tried it with an Excel file.
Desired Help:
How to perform:
1) stripping out all undesired rows
2) importing each column into the SQL table
I have two SSIS packages that import from the same flat file into the same SQL 2005 table. I have one flat file connection (to a comma delimited file) and one OLE DB connection (to a SQL 2005 Database). Both packages use these same two Connection Managers. The SQL table allows NULL values for all fields. The flat file has "empty values" (i.e., ,"", ) for certain columns.
The first package uses the Data Flow Task with the "Keep nulls" property of the OLE DB Destination Editor unchecked. The columns in the source and destination are identically named thus the mapping is automatically assigned and is mapped based on ordinal position (which is equivalent to the mapping using Bulk Insert). When this task is executed no null values are inserted into the SQL table for the "empty values" from the flat file. Empty string values are inserted instead of NULL.
The second package uses the Bulk Insert Task with the "KeepNulls" property for the task (shown in the Properties pane when the task is selected in the Control Flow window) set to "False". When the task is executed, NULL values are inserted into the SQL table for the "empty values" from the flat file.
So using the Data Flow Task " " (i.e., blank) is inserted. Using the Bulk Insert Task NULL is inserted (i.e., nothing is inserted, the field is skipped, the value for the record is omitted).
I want to have the exact same behavior on my data in the Bulk Insert Task as I do with the Data Flow Task.
Using the Bulk Insert Task, what must I do to have the Empty String values inserted into the SQL table where there is an "empty value" in the flat file? Why & how does this occur automatically in the Data Flow Task?
From a SQL Profiler trace comparison of the two methods, I do not see where the syntax of the insert command or the statements for the preceding captured steps dictates this change in the behavior of the inserted "" value for the recordset. Please help me understand what is going on here and how to accomplish this using the Bulk Insert Task.