Transact SQL :: Bulk Insert When Data File Is On Network Share
Dec 3, 2015
I am running Microsoft SQL Server 2012 SP on a Windows Server 2008 R2 Standard SP1 box. The SQL Server service runs as a plain Windows domain user (nothing special, no admin rights, etc.). I am having some issues using Bulk Insert with Windows Authentication when the data file is on a network share. The SQL Server service account does have access to the network resource: logging into SQL Server with a SQL account and doing the Bulk Insert works. I also have rights to the files on the share, as shown by the fact that I put the files there. My SQL is of the form:
Bulk Insert [table name] From '\\[server]\[share]\[filename]' With (FirstRow = 2, FormatFile='FormatFile.xml')
Now, when connecting to SQL Server with Windows Authentication and running the Bulk Insert I get the following error:
Msg 4861, Level 16, State 1, Line 2
Cannot bulk load because the file "\\[server]\[share]\[filename]" could not be opened. Operating system error code 5(Access is denied.).
I found this snippet in BULK INSERT (Transact-SQL), under Security Account Delegation (Impersonation), which says, in part (emphasis mine):

To resolve this error [4861], use SQL Server Authentication and specify a SQL Server login that uses the security profile of the SQL Server process account, or configure Windows to enable security account delegation. For information about how to enable a user account to be trusted for delegation, see How to Configure the Server to be Trusted for Delegation.

We tried unconstrained delegation and I rebooted the SQL server, but it still does not work. Later we tried constrained delegation, and it still does not work.
I have verified the SPNs:
C:\>setspn ad\svc_sql
Registered ServicePrincipalNames for CN=SVC_SQL,OU=Service Accounts,OU=Users,OU=ad domain,DC=ad,DC=local:
    MSSQLSvc/SQLQA.ad.local:1433
    MSSQLSvc/SQLDev.ad.local:1433
    MSSQLSvc/SQLQA.ad.local
    MSSQLSvc/SQLDev.ad.local
I have verified that my SQL connection is TCP and I am getting/using a Kerberos security token.
C:\>sqlcmd -S tcp:SQLQA.ad.local,1433 -E
1> Select dec.net_transport, dec.auth_scheme From sys.dm_exec_connections As dec Where session_id = @@Spid;
2> go
net_transport auth_scheme
------------- -----------
TCP           KERBEROS

(1 rows affected)
1>
If I move the source file to a local drive on the SQL Server box, everything works fine, but I need to be able to read from a file share.
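In case it helps anyone else chasing the same double hop: what ultimately matters for constrained delegation is delegation to the file server's CIFS service, not just the SQL Server SPNs. A hedged checklist, with FILESRV01 standing in for the real file server name (hypothetical):

C:\>setspn -L FILESRV01
REM the HOST/FILESRV01 and HOST/FILESRV01.ad.local SPNs must exist; CIFS is served under HOST.
REM Then, on the svc_sql account's Delegation tab, add cifs/FILESRV01 and cifs/FILESRV01.ad.local
REM under "Trust this user for delegation to specified services only" (Kerberos only),
REM and restart the SQL Server service so the new settings take effect.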
USE TEST
GO
/****** BULK INSERT ******/
BULK INSERT [Table01] FROM 'C:\temp\data.csv'
[code]....
I am using the above code to insert CSV file data, which includes Arabic data as well. The upload is successful; however, the Arabic field data is uploaded with invalid characters, and I get the following error: Msg 4864, Level 16, State 1, Line 3... Bulk load data conversion error (type mismatch or invalid character for the specified codepage)
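If the file is saved as Unicode rather than ANSI, one hedged fix is to tell BULK INSERT so explicitly; DATAFILETYPE='widechar' reads UTF-16 files and preserves Arabic text (CODEPAGE='65001' for UTF-8 files is only honored on newer versions, roughly SQL Server 2014 SP2/2016 onward). A minimal sketch, with a hypothetical path:

BULK INSERT [Table01]
FROM 'C:\temp\data.csv'
WITH (
    DATAFILETYPE = 'widechar',   -- file re-saved as UTF-16 so the Arabic characters survive
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);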
Is there a way to read a file name automatically from a network folder? I can successfully bulk insert from this particular folder. The next step is as I add files, I wish to bulk insert the latest file added so the program must make that determination and import that specific file. I can delete the older files if necessary and save them elsewhere but it would still be nice to be able to read the file name. I then wish to store the name of this file, whatever it is, into a field called "SourceFileName" in my table that I am bulk inserting into. Does anyone have an example in dynamic SQL? Thanks.
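One hedged approach, assuming xp_cmdshell is enabled (it often is not, by policy): capture a newest-first directory listing, take the top name, and build the BULK INSERT dynamically. Every name and path below is hypothetical, and the format file is assumed to leave SourceFileName unmapped so it arrives NULL:

DECLARE @dir nvarchar(260) = N'\\server\share\incoming\';
DECLARE @files TABLE (id int IDENTITY(1,1), fname nvarchar(260));

-- /b = bare file names, /o-d = newest first
INSERT INTO @files (fname)
EXEC master.dbo.xp_cmdshell 'dir /b /o-d \\server\share\incoming\*.csv';

DECLARE @latest nvarchar(260) =
    (SELECT TOP (1) fname FROM @files WHERE fname IS NOT NULL ORDER BY id);

DECLARE @sql nvarchar(max) =
    N'BULK INSERT dbo.MyStage FROM ''' + @dir + @latest +
    N''' WITH (FIRSTROW = 2, FORMATFILE = ''C:\fmt\mystage.fmt'');';
EXEC (@sql);

-- stamp the rows that just arrived with the file they came from
UPDATE dbo.MyStage SET SourceFileName = @latest WHERE SourceFileName IS NULL;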
I have a CSV file that I am trying to bulk load into a temp table. The data in the file is all jumbled together, as in, there does not appear to be a row terminator. However, I do see a bunch of little rectangular boxes that I assume are the row terminators.
When I run the bulk insert, the data is treated as one string. For example... If I have 10 columns in the table, the 10 columns will be populated, but the remainder of the data is dumped into the last column.
Here are the row terminators I have used so far that haven't worked.
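Those little boxes usually mean the rows end in a character other than the CR+LF pair, most often a bare Unix line feed. A hedged variant worth trying is the hex form of the terminator, which sidesteps escape-sequence handling entirely (path and table hypothetical):

BULK INSERT #Staging
FROM 'C:\temp\jumbled.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0a'   -- bare LF; try '0x0d' if the file ends rows with CR only
);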
I try to import data with bulk insert. Here is my table:
CREATE TABLE [data].[example](
    col1 [varchar](10) NOT NULL,
    col2 [datetime] NOT NULL,
    col3 [date] NOT NULL,
    col4 [varchar](6) NOT NULL,
    col5 [varchar](3) NOT NULL,
The first column of the file should end up in two of my table columns (col2 and col3).
My file:
Col1,Col2,Col3,Col4,Col5,Col6,Col7
2015-04-30@|@MDDS@|@ADP@|@EUR@|@185.630624@|@2015-04-30@|@MDDS
2015-04-30@|@MDDS@|@AED@|@EUR@|@4.107276@|@2015-04-30@|@MDDS
My command:
bulk insert data.example
from 'R:\repool\example.csv'
WITH (FORMATFILE = 'R:\cfg\example.fmt', FIRSTROW = 2)

I get the error:
Msg 4823, Level 16, State 1, Line 2
Cannot bulk load. Invalid column number in the format file "R:\cfg\example.fmt".
I changed some things: used ";" and "," as the column delimiter; changed the file type from UNIX to DOS and adjusted the format file with "\r\n" as the row delimiter; removed this line from the format file:

1 SQLCHAR 0 10 "@|@" 2 Col2 ""

Nothing works ....
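Msg 4823 is usually the format file's own bookkeeping rather than the data: the count on the second line must equal the number of field rows, the host-field numbers must run 1..n with no gaps (so deleting a row breaks it), and a field is skipped by setting its server column to 0, not by removing its line. A hedged sketch for a 7-field @|@ file loading a 5-column table; the version header must match your server (11.0 = SQL 2012), and the widths and field-to-column mapping are placeholders, not your real ones:

11.0
7
1 SQLCHAR 0 10 "@|@"  1 col1 ""
2 SQLCHAR 0 24 "@|@"  2 col2 ""
3 SQLCHAR 0 11 "@|@"  3 col3 ""
4 SQLCHAR 0 6  "@|@"  4 col4 ""
5 SQLCHAR 0 24 "@|@"  5 col5 ""
6 SQLCHAR 0 10 "@|@"  0 skip1 ""
7 SQLCHAR 0 10 "\r\n" 0 skip2 ""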
SQL Agent will not successfully execute my package as a job. BIDS executes the package correctly, as does running the package manually (right click, run package) through SQL Server Management Studio. This is a permissions issue with the flat file; any help will be much appreciated.

Background information:
OS: SQL 2005 on Windows Server 2003
Flat File Connection: \\servername\folder\file.txt (if I change the flat file location to a local file, the package will run as a job successfully)
Domain: The package is running on a Windows machine that is not on any domain. The network location is a Windows machine on a domain.
Security: The network location folder (\\servername\folder\file.txt) has no security, namely anyone can access any file to read/write/delete/etc. I can manually add and delete files, and can also add and delete files when the package runs through BIDS or when I manually run it through Management Studio.
Permissions: I have created a login, security credential, and proxy which I am using to run the package. The security credential is tied to the Administrator account on the local machine.

Error Message: Executed as user: COMPUTER-NAME\Administrator. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 12:05:37 PM Error: 2007-06-19 12:05:39.25 Code: 0xC001401E Source: DataTransfer Connection manager "FILECONNECTION.FileConnection" Description: The file name "\\servername\folder\flatfile.txt" specified in the connection was not valid. End Error Error: 2007-06-19 12:05:39.25 Code: 0xC001401D Source: DataTransfer Description: Connection "FILECONNECTION.FileConnection" failed validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 12:05:37 PM Finished: 12:05:40 PM Elapsed: 2.297 seconds. The package execution failed. The step failed.

(Note: I replaced the file connection strings with FILECONNECTION and the server path with "servername\folder" for privacy reasons.) Any help would be greatly appreciated. This is some sort of security issue with SQL Agent, but the error claims that the user is running as localmachine\Administrator. Isn't this how the package would run if I manually execute it?
In the "Choosing Between SQL Server 2005 Compact Edition and SQL Server 2005 Express Edition" white paper, i can read that: "SQL Server 2005 Compact Edition support data file storage on a network share" and "Number of concurrent connections = 256"
But when i try to connect with two different PC at the same time to a .sdf file store on a network share, i have an error message : "File is locked by an other processus"
The firsth PC is connected but the secondth can't
";Mode=Read Write" in the connection string don't change anything.
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file, TEXT_1400.txt, has 1400 columns. I am unable to load data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns are allowed per table in SQL Server 2008.

We can still go ahead and create a 'wide table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.

Is there any proper way to load this kind of file into a db table?
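One workaround that at least gets the data in, offered as a hedged sketch: land each raw line in a single varchar(max) column and carve out only the fields actually needed downstream (a 1400-column row rarely fits the 8,019-byte wide-table row limit anyway once values are non-trivial). The path and the tab delimiter are assumptions:

CREATE TABLE dbo.Raw1400 (RawLine varchar(max));

BULK INSERT dbo.Raw1400
FROM 'C:\temp\TEXT_1400.txt'
WITH (ROWTERMINATOR = '\n',
      FIELDTERMINATOR = '\0');   -- null char never occurs in the file, so each whole line lands in one column

-- then extract just the fields you need, e.g. the first tab-delimited value:
SELECT LEFT(RawLine, CHARINDEX(CHAR(9), RawLine + CHAR(9)) - 1) AS Field1
FROM dbo.Raw1400;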
When both fields are set to the SQLCHAR data type, the data imports successfully without the quotes, as 01 and 02. These fields will always be numbers and I want them as integers, so I set the data type to int in the database and SQLINT in the format file. The result was that the 01 became 12592 and the 02 became 12848. Where are these numbers coming from?
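Those values are the raw ASCII bytes reinterpreted: SQLINT tells bcp the field already holds a binary integer, so the characters '0','1' (bytes 0x30, 0x31) are read little-endian as 0x3130 = 12592, and '0','2' gives 0x3230 = 12848. A quick sanity check:

-- '01' on disk is bytes 0x30 0x31; read little-endian, that is the integer 0x3130
SELECT CAST(0x3130 AS int) AS read_as_01,   -- 12592
       CAST(0x3230 AS int) AS read_as_02;   -- 12848
-- Keep SQLCHAR in the format file for character data; SQL Server converts to int on insert.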
I'm trying to use Bulk Insert for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNChar for the first column. I've checked that the end of each line is CR LF, therefore the "\r\n" terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
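Two things that commonly bite here, offered as hedged guesses rather than a diagnosis: in a non-XML format file the length column is in bytes, and SQLCHAR (not SQLNCHAR) is the right host type when the text file itself is plain ASCII, even though the table column is nvarchar; SQLNCHAR is only for UTF-16 data files. Also, truncation on row 1, column 1 often means the field terminator is never found, so the whole line pours into ASXCode. A sketch collapsed to two fields (OtherCol and the widths are hypothetical):

9.0
2
1 SQLCHAR 0 6  ","     1 ASXCode  ""
2 SQLCHAR 0 50 "\r\n"  2 OtherCol ""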
Hi, I'm trying to bulk insert files that look like this:

aaaa,bbb,dddd,
ccc,dfd,tghj,

Each file can have up to 10 data fields per line, and all lines within a particular file have the same number of data fields; let's say 3, as above. A second file could have, let's say, 10, and that is the maximum.

I read the file and insert the data, with FIELDTERMINATOR, into a temp table, from which I insert the data into other tables depending on some parameters inside.
Now the problem is:
Msg 4832, Level 16, State 1, Line 1
Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
That is because I'm trying to insert 3 fields of data into a temporary table which is made of 10 columns (it has to be 10 because the next file could have 10 fields of data). If the temp table has the same number of columns as the text file has data fields, then it works.

What is the solution to this problem? Can I bulk insert NULL into the columns for which I don't have data?

I could also import each line of the text file into one column (with the delimiters inside), but then I don't know how to get that data into the correct tables, or even into one table while separating the data fields into columns with the field terminator, which is "," in this case.

I'm new to SQL and I would appreciate any help. Thank you.
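On the NULL question: yes. When the data file has fewer fields than the table has columns, a format file describing just the file's fields does exactly that; table columns that no host field maps to receive NULL (or their default; see the KEEPNULLS option). A hedged sketch for a 3-field file going into the 10-column temp table; lengths are placeholders, and the ",\r\n" terminator on the last field accounts for the trailing comma on each line:

9.0
3
1 SQLCHAR 0 8000 ","     1 Col1 ""
2 SQLCHAR 0 8000 ","     2 Col2 ""
3 SQLCHAR 0 8000 ",\r\n" 3 Col3 ""

BULK INSERT #WorkTable
FROM 'C:\temp\file3.txt'                 -- hypothetical path
WITH (FORMATFILE = 'C:\temp\file3.fmt');

Each file width then needs its own format file (a 3-field one, a 10-field one, and so on), which can be generated on the fly with dynamic SQL if the widths vary often.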
I have a bulk insert situation that would be nice to be able to pull off. I have a flat file with 46 columns that are to go into a table. I want the table to have a 47th column, to be updated later on by a stored proc saying whether the import into the system was successful or not. I have the row terminator set as '"', thinking that would tell SQL to begin on the next row, leaving the ImportStatus column NULL, but I still receive an error.

First of all, is this idea possible within this insert statement? Secondly, if so, what is the syntax to tell the insert statement to skip that particular column? It is the last column listed in the table, so it just needs to start on the next row after it inserts the last bit of data in the flat file.
If this is not possible, is it possible to bulk insert into a temp table?
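Yes on the temp table, but there is a tidier route, sketched here with hypothetical names: bulk insert into a view that exposes only the 46 file-backed columns. The unlisted 47th column is never touched by the load and stays NULL for the stored proc to fill in later.

CREATE VIEW dbo.v_ImportTarget
AS
SELECT Col01, Col02, /* ...the remaining 43 file-backed columns... */ Col46
FROM dbo.ImportTarget;
GO

BULK INSERT dbo.v_ImportTarget
FROM 'C:\temp\flatfile.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\r\n');
-- ImportStatus (column 47) remains NULL until the stored proc updates it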
I am running a set of SQL statements on a SQL server, to insert flat file data into a SQL table. The flat file is already FTP'ed to the SQL server. I seem to be getting an error, which is possibly pointing to a permissions issue
The statements:
BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM 'c:\jedox_daily\jdcom4401.txt'
WITH (
    FIRSTROW = 2,
    MAXERRORS = 0,
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n'
)
GO
The error is:
Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "c:\jedox_daily\jdcom4401.txt" could not be opened. Operating system error code 3(failed to retrieve text for this error. Reason: 1815)
If it is a permissions issue, how do I overcome it?
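Worth checking before chasing permissions: OS error 3 is "path not found", not "access denied", and BULK INSERT resolves the path on the machine running SQL Server, not on the machine the file was FTP'd from. If the file actually sits on a different box than the instance, a hedged fix is a UNC path to wherever it really lives (server name hypothetical):

BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM '\\appserver\jedox_daily\jdcom4401.txt'
WITH (FIRSTROW = 2, MAXERRORS = 0, FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');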
I just want to insert only the SubjectIds into my table 'Subjects', which has the following schema, ignoring the classes. The row delimiter is "\n" and the column delimiter is '|'.

Table Subjects {
    ID (autoincrement)
    SubjectId varchar(20)
}
Can anyone provide the format file for doing this, or suggest any other way to do it? Please note that the file may contain millions of records.
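A hedged sketch, assuming each line holds exactly two fields, SubjectId|Class: map the unwanted field to server column 0 to drop it, and leave the identity ID column unmapped so it is generated automatically. TABLOCK is worth adding for a file with millions of rows:

9.0
2
1 SQLCHAR 0 20  "|"   2 SubjectId ""
2 SQLCHAR 0 100 "\n"  0 Class     ""

BULK INSERT dbo.Subjects
FROM 'C:\temp\subjects.txt'                    -- hypothetical path
WITH (FORMATFILE = 'C:\temp\subjects.fmt', TABLOCK);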
With "bcp MyDatabase.dbo.MyTable out C:MyFile.Dat -n -T" command line, I could get an exported data file. And I can also import this file into MyTable using 'BULK INSERT MyDatabase.dbo.MyTable FROM 'C:MyFile.dat' WITH (DATAFILETYPE='native');' query statement.
Now, I want to make my own data file just like made by bcp above. Although I could make file of 'char' type, 'native' type file is needed for performance and other reasons. And the format file should not be used.
Hi, I have a data file whose contents are as follows:

2   -- this is the header, indicating the number of records in my file
1001|s1
1006|s2
The content of the format file is as follows. This is to skip the first column of all the rows and get only the Subs (i.e. s1 and s2):

9.0
2
1 SQLCHAR 0 100 "|" 0 ID ""
2 SQLCHAR 0 100 "\r\n" 1 Subs ""
Here is my query to get all the Subs from my data file
SELECT * FROM OPENROWSET( BULK 'datafile.txt',
FORMATFILE = 'FormatFile.fmt',
FIRSTROW = 2 ) AS a
But this query returns only s2, where I was expecting s1 and s2. The reason is that the first row, i.e. the header, doesn't follow the format. Can anyone please let me know how to skip the first line in the data file and get the result as required?
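What trips this up: FIRSTROW counts rows as parsed by the format file, not physical lines. The header '2' contains no '|', so the parser absorbs it into the first field of the first real row, and FIRSTROW = 2 then discards that whole merged row, leaving only s2. Two hedged fixes: strip the header before loading, or, if you control the file, write the header with the same field count (e.g. '2|') so it parses as a row of its own and FIRSTROW = 2 can discard exactly it:

-- data file reshaped so the header parses as a row:
-- 2|
-- 1001|s1
-- 1006|s2
SELECT a.Subs
FROM OPENROWSET(BULK 'datafile.txt',
                FORMATFILE = 'FormatFile.fmt',
                FIRSTROW = 2) AS a;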
I am using a BULK INSERT to insert the data from an ASCII file into a SQL table. The table has a ProductInstanceId column that exists in the table but does not exist in the ASCII DICast data. I am setting the ProductInstanceId to a GUID that will be used for metrics. I would like to create the GUID in C++ and then set it somehow during the BULK INSERT into DICastRaw1hr and DICastRaw6hr. I am calling the BULK INSERT from C++/ADO. I do not see how you can set static data in the BULK INSERT for a column that exists in the table but not in the source data... it seems there should be a way to do this with the format file?

The other way to do this is with a TRIGGER. I have the TRIGGER below. Prior to calling the BULK INSERT using ADO, I will use ADO to ALTER the TRIGGER with the new GUID. When the BULK INSERT runs, the ProductInstanceId will be populated with the new GUID.
ALTER TRIGGER DICastRaw1hrInsertGuid ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT
AS
UPDATE dbo.DICastRaw1hr
SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
WHERE modelrundatetime = (SELECT MAX(modelrundatetime) FROM Alphanumericdata.dbo.DICastRaw1hr (NOLOCK))
More Questions:
- The trigger is slow. The BULK INSERT without the trigger runs in about 10 sec; with the trigger, about 40 sec. I tried to use the SQL code below in the trigger, but it was only doing the UPDATE on the last row. The trigger must run after the BULK INSERT is complete. For now I am using the select (bad). Any comments...

ALTER TRIGGER DICastRaw1hrInsertDate ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT
AS
DECLARE @ID as integer
SELECT @ID = i.recordid FROM inserted i
UPDATE dbo.DICastRaw1hr
SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
WHERE recordid = @ID
- I understand that I could set the GUID in the default value part of the table definition using the NEWID() function, but I need the GUID to be the same for all the rows that are inserted during the BULK INSERT (all have the same modelrundatetime)... how would I do this?
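A hedged alternative that needs neither the trigger nor the per-run ALTER: read the file through OPENROWSET(BULK ...) inside a plain INSERT ... SELECT, and supply the GUID as a constant column. The C++ code splices the GUID into the command text exactly as it does now, but the value is written during the load itself. Column names and paths are hypothetical:

DECLARE @ProductInstanceId uniqueidentifier
SET @ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'

INSERT INTO Alphanumericdata.dbo.DICastRaw1hr (ProductInstanceId, modelrundatetime, value1)
SELECT @ProductInstanceId, s.modelrundatetime, s.value1
FROM OPENROWSET(BULK 'C:\data\dicast1hr.dat',
                FORMATFILE = 'C:\data\dicast1hr.fmt') AS s;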
I have two versions of "rsk.txt", one with 1.9 million rows and one with only the first 2000 rows. The files have a single column of 115 characters that I'll split into several columns later using SUBSTRING. The one with 2000 rows goes into the database with no problems whatsoever using this exact code; the other one throws the following error:
Server: Msg 4866, Level 17, State 66, Line 1 Bulk Insert fails. Column is too long in the data file for row 1, column 1. Make sure the field terminator and row terminator are specified correctly.
How can I resolve this problem?
EDIT: I tried several different row and field terminators, but this exact one works for the small data file, so I assume it should also work for the large one... the large one is, however, copied directly from a unix filesystem using binary FTP, and the small one was manually copied into a new txt file using UltraEdit.
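Given that history, a hedged guess: UltraEdit re-saved the small file with Windows CR+LF line endings, while the binary-FTP'd original still ends every row with a bare Unix LF, so the server never finds the expected terminator and the "row" grows past 115 characters. Naming the LF explicitly usually cures it (table/path hypothetical):

BULK INSERT dbo.rsk_staging
FROM 'C:\temp\rsk.txt'
WITH (ROWTERMINATOR = '0x0a');   -- bare LF, as written on the unix side

-- on versions too old for the 0x0a syntax, splice in CHAR(10) via dynamic SQL:
DECLARE @sql varchar(1000)
SET @sql = 'BULK INSERT dbo.rsk_staging FROM ''C:\temp\rsk.txt'' WITH (ROWTERMINATOR = ''' + CHAR(10) + ''')'
EXEC (@sql)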
I'm importing a large csv file two different ways - one with Bulk Import Task and the other way with the Data Flow Task (flat file source -> OLE DB destination).
With the Bulk Import Task I'm putting all the csv rows in one column. With the Data Flow Task I'm mapping each csv value to its own column in the SQL table.
I used two different flat file sources and got the following:
SELECT from openrowset(BULK '\\SERVERNAME\somepath\somefile.csv'... fails, while SELECT from openrowset(BULK 'c:\somepath\somefile.csv'... works.
I am running the task as a specific sql server user. If I run the same query in management studio using execute as login='batchuser', it also works for any path.
How can I make this work without an extra step moving the data to the local server? Because that would cause extra administration.
I have a file with some wind data that I am trying to import into a SQL database through BULK INSERT. If the script works as it's supposed to, I should see 144 rows affected, but I see 0 rows affected.
BULK INSERT TOWER.RAWINTERFACE_1058
FROM 'C:\Temp\900020150427583.txt'
WITH (CHECK_CONSTRAINTS, CODEPAGE='RAW', DATAFILETYPE='char',
      FIELDTERMINATOR='\t', ROWTERMINATOR='\n', FIRSTROW=172)

The code works if the file is large, but if it's small, 0 rows are affected; and if I remove the header rows, the file works again. I want to understand what is going on here. I am including the screenshot of the file in Notepad++. I have tried changing the row terminator to '\n' and '\r\n', and also tried changing the codepage, but nothing seems to work. No error file is being generated either, if I give an error file option.
I need a way to insert an additional column into a table each time I bulk insert data from a new flat file into an existing table, so as to identify the file from which, or the time when, the data was inserted.
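A hedged pattern for this (the same idea covers a load timestamp): run the load through OPENROWSET(BULK ...) in an INSERT ... SELECT, so the extra columns can be supplied as constants per file. Since the source path must be a literal, the statement is built dynamically; every name below is hypothetical:

DECLARE @fname nvarchar(260);
SET @fname = N'extract_20151203.txt';   -- the current flat file

DECLARE @sql nvarchar(max);
SET @sql = N'
INSERT INTO dbo.Target (SourceFileName, LoadedAt, Col1, Col2)
SELECT @fn, GETDATE(), s.Col1, s.Col2
FROM OPENROWSET(BULK ''\\server\share\' + @fname + N''',
                FORMATFILE = ''C:\fmt\target.fmt'') AS s;';

EXEC sp_executesql @sql, N'@fn nvarchar(260)', @fn = @fname;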
I want to be able to run the following command from SSMS (as an ad-hoc query).
BULK INSERT Database_Name.dbo.Table_Name
FROM '\\server\file.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '0x0a', MAXERRORS = 0);
When I do I get:
Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\server\file.txt" could not be opened. Operating system error code 5(Access is denied.).
I have full access to the file. I can run the same command successfully if the file is stored on a local drive on the server.
According to my DBA I can not run it with a remote file location because I don't have the SA permission. His solution is for me to create a job that runs the command. I have done so and the job works correctly.
Is he correct that there is no way for me to be able to run it from SSMS without SA permissions?
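For the record, sa is not the literal requirement. BULK INSERT needs INSERT on the target table plus the server-level ADMINISTER BULK OPERATIONS permission (i.e. the bulkadmin role), and with Windows Authentication a UNC source additionally pulls in the Kerberos delegation question, because SQL Server opens the file as you; the Agent job works because it reads the file as the service account. A hedged sketch of what the DBA would grant, if policy allowed it (login name hypothetical):

USE master;
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\YourLogin];
-- or equivalently: EXEC sp_addsrvrolemember 'DOMAIN\YourLogin', 'bulkadmin';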
Similar to a previous post (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=244646&SiteID=1), I am trying to import data into a SQL Table.
I am trying to program a small application that will import product data obtained from suppliers via CD-ROM. One supplier in particular uses fixed-width columns, and the data looks like this:
Example of Data
0124015Apple Crate         32.12
0124016Bananna Box         12.56
0124017Mango Carton        15.98
0124018Seedless Watermelon 42.98

My table would then have: ProductID as int, Name as text, Cost as money.
How would I go about extracting the data with an XML Format file? I am stumbling over how to tell it where to start picking up data for a specific column. Is there any way that I could trim the Name column (i.e.: "Mango Carton " --> "Mango Carton")?
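A hedged XML format file for that layout, guessing the widths from the sample (7-character ProductID, Name padded to 20, price terminated by the line break): fixed-width fields simply declare a LENGTH instead of a terminator. There is no trim option in the format file itself; the usual move is to load and then RTRIM the Name column, or to insert via OPENROWSET(BULK ...) with RTRIM in the SELECT list.

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharFixed" LENGTH="7"/>
    <FIELD ID="2" xsi:type="CharFixed" LENGTH="20"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="10"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="ProductID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="Name" xsi:type="SQLNVARCHAR"/>
    <COLUMN SOURCE="3" NAME="Cost" xsi:type="SQLMONEY"/>
  </ROW>
</BCPFORMAT>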
I don't know if it makes any difference, but I've been calling SQL from my code by doing this:
Code in C# Form
SqlConnection SqlConnection = new SqlConnection(global::SQLClients.Properties.Settings.Default.ClientPhonebookConnectionString);
SqlCommand cmd = new SqlCommand();
cmd.Connection = SqlConnection;
// cmd.CommandText = ... ;   // the T-SQL statement to run goes here

SqlConnection.Open();
cmd.ExecuteNonQuery();
SqlConnection.Close();
RefreshData();

I am running Visual Studio C# Express 2005 and SQL Server Express 2005.
I would like to back up my databases to a network share (NAS) instead of local disk, using Maintenance Plans created by Enterprise Manager. I have successfully used a UNC path to target the destination network share, but have not been able to figure out how to supply a logon to the network share before the backup is executed.
The SQL Server instance is running in the context of the local system account.
Can I insert a step in the SQL job created by the Maintenance Plan that changes the Windows account the backup runs under? If yes, what command syntax would I use in the inserted step, or is there another way to accomplish what I'm attempting to do?
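LocalSystem has no usable network identity, so the clean fix is to run the SQL Server and Agent services as a domain account with write access to the NAS share. If changing the service account is off the table, a hedged stopgap is an extra T-SQL job step, ahead of the backup step, that maps the share with explicit credentials (share and account hypothetical):

-- job step 1: authenticate to the NAS before the backup step runs
EXEC master.dbo.xp_cmdshell 'net use \\nas01\sqlbackup /user:DOMAIN\backupuser P@ssw0rd';

-- the backup step can then target the UNC path
BACKUP DATABASE MyDatabase TO DISK = '\\nas01\sqlbackup\MyDatabase.bak'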
I am trying to backup a database with a command like:
BACKUP DATABASE myDataBase TO DISK = '\\bkSystem\bkDisk\Backup1.bak'

but I get the error: Cannot open backup device '\\bkSystem\bkDisk\Backup1.bak'. Device error or device off-line. The bkDisk folder is shared, with Everyone full-control access (it's a test environment).