Using OPENROWSET (BULK ...) With a Variable in the Filespec
Sep 7, 2006
Hello, to accelerate loading XML data from many files into a table using OPENROWSET (BULK ...), I want to use a variable in the file specification and increment it within a loop, similar to this:
declare @datnam varchar(100);
DECLARE @MyCounter int;
SET @MyCounter = 1;
set @datnam = 'c:\XML_Daten\POS_LOG_200608_' + ltrim(str(@MyCounter)) + '.xml';
INSERT INTO GK_TO_KFH_ADAPTER_XML_NS (LOC_ID, MSG_CONTENT)
SELECT @MyCounter, MSG_CONTENT
FROM (
    SELECT * FROM OPENROWSET (BULK @datnam, SINGLE_CLOB) AS MSG_CONTENT
) AS R(MSG_CONTENT)
But I got the following error:
Msg 102, Level 15, State 1, Line 8
Incorrect syntax near '@datnam'.
Is there a way to do this in that manner?
Or is the bcp utility an alternative?
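OPENROWSET (BULK ...) only accepts a string literal for the file name, so it can't be driven by a variable directly; the usual workaround is to build the whole statement as a string and execute it dynamically. A minimal sketch, reusing the table and path from above (the loop bound of 10 is just an assumption):

DECLARE @MyCounter int, @datnam varchar(100), @sql nvarchar(1000);
SET @MyCounter = 1;

WHILE @MyCounter <= 10   -- assumed number of files
BEGIN
    SET @datnam = 'c:\XML_Daten\POS_LOG_200608_' + LTRIM(STR(@MyCounter)) + '.xml';

    -- Concatenate the file name into the statement, then run it
    SET @sql = N'INSERT INTO GK_TO_KFH_ADAPTER_XML_NS (LOC_ID, MSG_CONTENT) '
             + N'SELECT ' + LTRIM(STR(@MyCounter)) + N', MSG_CONTENT '
             + N'FROM (SELECT * FROM OPENROWSET(BULK ''' + @datnam + N''', SINGLE_CLOB) AS X) AS R(MSG_CONTENT);';

    EXEC (@sql);

    SET @MyCounter = @MyCounter + 1;
END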
select * from openrowset(bulk '\\server1\c$\file.txt', SINGLE_BLOB) as t works from SQL Server itself, but doesn't work from any other machine. I get "Operating system error code 5(Access is denied.)." I am running as the domain admin, and file.txt has Full Control for Everyone. In server1's event log, I see an Anonymous Logon.
I am trying to import some data from csv files. When I try it using bulk insert I get a conversion error. When I use the exact same format file and data file with an openrowset it works fine. I would prefer to use the BULK insert as I can make some generic stored procedures to handle all my imports and not have to code the column names in the SQL. Any suggestions?
BULK INSERT stuff
FROM 'c:\projects\testdata\list.txt'
with
(FORMATFILE='c:\projects\testdata\myformat.xml')
insert into stuff (ExternalId, Description, ScheduledDate, SentDate, Name)
select *
from OPENROWSET (BULK 'c:\projects\testdata\list.txt',
FORMATFILE='c:\projects\testdata\myformat.xml')
as t1
The destination table has more columns than the data file. The Field IDs represent the ordinal position of the columns in the destination table. Column 1 in the destination table is an int identity. The conversion failure is from trying to convert column 5 to int which makes me think bulk insert is ignoring the name attributes in the XML and just trying to insert the columns into the table in order without skipping.
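If BULK INSERT really is mapping fields to table columns by ordinal position, one common workaround (sketched below, with the view name assumed and the columns taken from the INSERT above) is to bulk insert into a view that exposes only the columns present in the data file, in file order; the identity column in the base table is then skipped automatically:

-- View listing only the columns that actually appear in the data file, in file order
CREATE VIEW dbo.stuff_import AS
SELECT ExternalId, Description, ScheduledDate, SentDate, Name
FROM dbo.stuff;
GO

BULK INSERT dbo.stuff_import
FROM 'c:\projects\testdata\list.txt'
WITH (FORMATFILE = 'c:\projects\testdata\myformat.xml');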
I'm experiencing issues importing XML data using a distributed query with the following statement, which is run from an XP client named WorkstationA connecting to SQL 2005 SP2 ServerB; the XML data is located on ServerC.
Ad hoc queries using OPENROWSET have been enabled and verified.
The SQL Server service is running using a domain user account with permissions to read the remote files. I have logged in locally to the SQL server and verified this. It still fails even if the SQL services are running using LocalSystem.
User on Workstation A is authenticated with Integrated security (SQL Admin) and has rights to read the XML files on ServerC.
WorkstationA = SQL 2005 Management Studio running the query; ServerB = SQL 2005 SP2; ServerC = XML data files.
DECLARE @xml XML
SELECT @xml = CONVERT(XML, bulkcolumn, 2)
FROM OPENROWSET(BULK '\\SERVERC\SHAREPATH\DATAFILE.XML', SINGLE_BLOB) AS x
SELECT @xml
Results: Msg 4861, Level 16, State 1, Line 2
Cannot bulk load because the file "\\SERVERC\SHAREPATH\DATAFILE.XML" could not be opened. Operating system error code 5(Access Denied).
The query fails when it is run from WorkstationA connected to SQL ServerB querying data on ServerC via a UNC path. The query is successful when it is run from the local SQL ServerB. The problem is with distributed queries. The query is successful when the XML files are local to the SQL server, including referencing them via a local UNC path.
I have a text file that is being inserted into a table in a remote db. I have a dev server named dbname_trunk and a production server named dbname. The data flow task refers to
Actually, I know that doesn't work. What I need to know is what would work to accomplish my purpose. Ultimately, I would like to put the value in the configuration file.
I had a problem when combining OPENROWSET and sp_executesql. When I tried to run the following query, it complained that @RESID is not declared. Any idea how I should structure the query so I can pass @RESID as one of the parameters? BTW, I know that sp_executesql can run a query of up to 8000 characters, but what about the parameters?
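For the parameter side, sp_executesql needs the parameter declared in its second argument and then bound by name; note, though, that OPENROWSET itself won't accept a variable even inside dynamic SQL, so any file name or rowset argument still has to be concatenated into the string. A minimal sketch with an assumed table and column, since the original query isn't shown:

DECLARE @RESID int;
SET @RESID = 42;                       -- example value

DECLARE @sql nvarchar(4000);
SET @sql = N'SELECT * FROM dbo.SomeTable WHERE RES_ID = @RESID';  -- hypothetical table/column

EXEC sp_executesql
     @sql,
     N'@RESID int',                    -- parameter definition list
     @RESID = @RESID;                  -- bind the outer variable to the inner parameter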
I am able to import a CSV file into a temporary table as long as I know the number of fields in the CSV file. Here is what I would like to do:
I would like to have a CSV file which has up to 6 entries per row. I would like to insert each row into a table; if there are three fields, then I want to insert them into the first three columns of the temporary table. If there are four, then insert into the first four fields. Is this possible?
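BULK INSERT itself expects a fixed number of fields per row, so one workaround is to load each line into a single-column staging table and split it afterwards. The sketch below uses hypothetical table and path names and a SQL Server 2005 XML-based split that returns NULL for fields 4-6 when a line only has three values; it assumes simple comma-separated data with no quoted commas and no characters like & or < in the values.

CREATE TABLE #RawLines (Line varchar(4000));

-- Only a row terminator is specified, so each whole line lands in the single column
BULK INSERT #RawLines
FROM 'c:\import\data.csv'              -- hypothetical path
WITH (ROWTERMINATOR = '\n');

INSERT INTO #Target (c1, c2, c3, c4, c5, c6)   -- hypothetical destination table
SELECT
    s.x.value('(/f)[1]', 'varchar(100)'),
    s.x.value('(/f)[2]', 'varchar(100)'),
    s.x.value('(/f)[3]', 'varchar(100)'),
    s.x.value('(/f)[4]', 'varchar(100)'),      -- NULL when the line has fewer fields
    s.x.value('(/f)[5]', 'varchar(100)'),
    s.x.value('(/f)[6]', 'varchar(100)')
FROM (SELECT CAST('<f>' + REPLACE(Line, ',', '</f><f>') + '</f>' AS xml) AS x
      FROM #RawLines
      WHERE Line IS NOT NULL AND Line <> '') AS s;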
Hello, I need to write a proc to load data from txt files I receive into a table. It works fine when I specify bulk insert ... from 'myfilename.txt', BUT my filename will always change and I store it in a variable @filename.
When I try to run the bulk insert instruction ... from @filename it doesn't work. Do you know why?
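BULK INSERT won't take a variable for the file name; as with OPENROWSET (BULK) in the first post, the usual answer is dynamic SQL. A short sketch, with the table name and terminators assumed:

DECLARE @filename varchar(255), @sql nvarchar(500);
SET @filename = 'c:\import\myfilename.txt';   -- value supplied by the calling code

-- Build the BULK INSERT statement as a string, then execute it
SET @sql = N'BULK INSERT dbo.MyTable FROM ''' + @filename + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
EXEC (@sql);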
I have to update a field within a table of 60 records or so. Each record has a different field value; the field's type is varchar. I was given an Excel file with the field values and was thinking of a bulk update along the lines of bulk insert, but I don't recall that being possible that way.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
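For 60 rows, the staging-table route is the standard one: pull the Excel data into a temp table and then do a single joined UPDATE rather than a cursor. A sketch with assumed table, key, and column names (the Excel path and header names are placeholders):

-- Staging table loaded straight from the Excel file
SELECT KeyCol, NewValue
INTO #Staging
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=c:\import\values.xls;HDR=YES',   -- hypothetical path
                'SELECT * FROM [Sheet1$]');

-- One set-based update joined on the key
UPDATE t
SET    t.MyVarcharField = s.NewValue
FROM   dbo.MyTable AS t
JOIN   #Staging     AS s ON s.KeyCol = t.KeyCol;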
I have a table containing 8 million records. I need to replace 2 million of these records with a scaled-down query that goes something like:
SELECT 1, ShareholderID, Assets1 FROM MyTable (yields approx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields approx. 200,000 records)
. . .
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields approx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way: SELECT the 6 million records that don't need to be deleted into a #TempTable; use the statements above to SELECT INTO the same #TempTable; DROP and recreate the original table; SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?
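One pattern that avoids re-copying through a temp table is to build the replacement table on the side and swap it in with sp_rename, so the big table is only read once. A rough sketch along the lines of what you describe (GroupID/Assets columns and the "rows being kept" condition are purely assumed names):

-- Keep the ~6 million rows that stay as they are (condition is hypothetical)
SELECT GroupID, ShareholderID, Assets
INTO dbo.MyTable_New
FROM dbo.MyTable
WHERE GroupID > 10;

-- Append the scaled-down replacement rows
INSERT INTO dbo.MyTable_New (GroupID, ShareholderID, Assets)
SELECT 1, ShareholderID, Assets1 FROM dbo.MyTable
UNION ALL
SELECT 2, ShareholderID, Assets2 FROM dbo.MyTable;
-- ... remaining scaled-down SELECTs ...

-- Recreate indexes/constraints on the new table, then swap the names
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';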
I'm trying to use BULK INSERT for the first time and getting the following error. I think it might have something to do with my format file, and from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNChar for the first column. I've checked that the end of each line is CR LF, therefore the row terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1 Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE='M:\Data\ASX\SQLFormatImport.Fmt')
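One thing worth checking: in a (non-XML) format file the host file data type describes the data file, not the table column, so for a plain ANSI text file the fields are normally SQLCHAR even when the destination column is nvarchar; SQLNCHAR is for Unicode (widechar) data files. The truncation on row 1, column 1 usually means the first field's length or terminator doesn't match the file. A fragment along these lines, with lengths, terminators, and the middle fields assumed (fields 2 through 6 omitted for brevity):

9.0
7
1   SQLCHAR   0   6    "\t"     1   ASXCode      SQL_Latin1_General_CP1_CI_AS
...   (fields 2 through 6, one line each, same pattern)   ...
7   SQLCHAR   0   50   "\r\n"   7   LastColumn   SQL_Latin1_General_CP1_CI_AS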
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to ask about some considerations:
- performance: compared with T-SQL's "BULK INSERT ..." and the bcp utility
- SQL Server resource usage: how a memory-based bulk copy affects server resources while it runs
- server-side behavior: when the server is busy, does the delayed update mean an IRowsetFastLoad::Commit(true) call can still insert right away?
- row count: is there a limit on how many rows can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file into an empty SQL table that has an identity column defined. How do I tell the Bulk Insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to have the identity column in the table too.
How come when I run the script: select * from openrowset ('MSDASQL', 'Driver=Microsoft Excel Driver (*.xls);DBQ=\\inawwwpro01\d$\atrUpload\ACK102.xls' , 'select * from [sheet1$]') from Server A I get a RESULT, and when I run it from Server B I get the following error: Ad hoc access to OLE DB provider 'MSDASQL' has been denied. You must access this provider through a linked server. Both servers are using IDENTICAL SQL logins. Both servers are SQL 2000 SP2.
When I run the script on Server B logged in as SA, then I get a Result!!!
How can I use OPENROWSET? My aim is to import and export data from a different server. I am using VB.NET 2003 and SQL Server 2000. When I run the OPENROWSET function with Windows or SQL Server authentication, it shows: 'OLE DB provider 'SQLOLEDB' reported an error. The provider did not give any information about the error.'
Can anybody help me? Please tell me the right solution.
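For pulling rows from another SQL Server, the documented OPENROWSET form with the SQLOLEDB provider looks roughly like the sketch below; the server, login, and table names are placeholders, and the error you quote is often down to the login or connectivity rather than the syntax:

SELECT src.*
FROM OPENROWSET('SQLOLEDB',
                'RemoteServer';'remote_login';'remote_password',   -- datasource ; login ; password (placeholders)
                'SELECT * FROM Northwind.dbo.Customers') AS src;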
Hi! I decided to use OPENROWSET for importing data from an Excel file into a SQL table. When I import the data, I have a problem:
- not all the data is imported to my SQL table
- some values in the SQL table are different from the ones in the Excel file. For example, a value that is 87878787 in the Excel file will be 8.78788e+007 in the SQL table.
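The 8.78788e+007 is usually a side effect of Jet reading the column as float and SQL Server then converting that float to a character type with its default (scientific) formatting. If the values really are whole numbers, one hedged workaround is to cast them explicitly on the way in (the destination table, column, sheet, and path below are all assumed names):

INSERT INTO dbo.MySqlTable (AccountNo)                      -- hypothetical destination
SELECT CONVERT(varchar(20), CONVERT(bigint, AccountNo))     -- avoids float-to-varchar scientific notation
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Import\Accounts.xls;HDR=YES',   -- hypothetical path
                'SELECT AccountNo FROM [Sheet1$]');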
Hi folks, I am trying to load data from a table in MS Access to SQL Server 2000 using T-SQL OPENROWSET. When I select data from the remote database (MS Access) using SQL Query Analyzer, the columns do NOT appear in the same order as seen in Access directly. For e.g. if the Access table has columns Cy, Cx, Cz, the output in Query Analyzer appears as Cx, Cy, Cz. It appears to arrange the fields alphabetically. This causes problems when I do an 'insert into select * from' as the field definitions do not agree. Is this a bug or is there a setting in Access/SQL which I am missing? Also, please let me know if there is a workaround for this issue. Thanks in advance! Bhaskar
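One workaround is simply not to rely on SELECT * on either side: name the columns explicitly in both the pass-through query and the INSERT, so the ordering reported by the provider no longer matters. A sketch with a placeholder .mdb path and table names:

INSERT INTO dbo.SqlTarget (Cy, Cx, Cz)
SELECT a.Cy, a.Cx, a.Cz
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\Data\Source.mdb'; 'Admin'; '',        -- placeholder .mdb path
                'SELECT Cy, Cx, Cz FROM AccessTable') AS a;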
I am trying to use OPENROWSET in SQL Server to connect to my Access database, but I keep getting the following error:
OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. [OLE/DB provider returned message: The Microsoft Jet database engine cannot open the file 'X:\Setup\Database\KDB_X2.mdb'. It is already opened exclusively by another user, or you need permission to view its data.] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ].
I have checked the database, it is not in use, and there is no password set on the database that would prevent me from getting access to it. Any ideas? Here's the syntax I'm using:
SELECT * FROM OpenRowset('Microsoft.Jet.OLEDB.4.0', 'X:\Setup\Database\KDB_X2.mdb';'Admin';'', subPSEL_PList)
I have an Excel file called TEST.XLS (in the C: drive) which has 3 columns (c0, c1, c2) and 8 rows:
c0 c1 c2
1  a  A
2  b  B
3  c  C
4  d  D
5  e  E
6  f  F
7  g  G
8  h  H
In SQL Server I have one table called TEST which has three columns: c0 char(10), c1 char(10), c2 char(10).
When I run the following query
select * into TEST from openrowset('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\TEST.xls;HDR=YES', 'select * from [Sheet1$]')
I am getting an error like this:
Server: Msg 7399, Level 16, State 1, Line 1 OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. [OLE/DB provider returned message: The Microsoft Jet database engine could not find the object 'Sheet1$'. Make sure the object exists and that you spell its name and the path name correctly.] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IColumnsInfo::GetColumnsInfo returned 0x80004005: ].
Version: SQL Server 2000 & Excel 2002
Does anybody have any idea why this error is occurring? Thanks in advance... Philkos
I am having trouble exporting the correct data from SQL Server to an Excel spreadsheet using OPENROWSET.
The problem is that although the data from my SQL table is, say, 'a',1,2,3, the Excel spreadsheet sees the data as 'a','1,'2,'3. 1, 2, 3 are of course numbers, NOT text; it's just that the driver has put a single apostrophe before each number!
I know there is a bug with the ISAM driver, but has anyone managed to solve this, or does anyone have any alternatives?
I'm doing an openrowset query on an excel sheet. (Using SQL Server 2005) Everything works great, except that I have one column that has both numeric and text data in the spreadsheet. The query returns that column as datatype varchar but puts nulls in the rows that have numeric data in the spreadsheet.
Any suggestions?
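The Jet provider guesses each column's type from the first few rows, and values that don't match the guessed type come back NULL. A commonly used mitigation is to add IMEX=1 to the connection string so mixed columns are treated as text; whether it fully helps also depends on the driver's registry settings (e.g. TypeGuessRows), so treat this as a sketch with a hypothetical path:

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Import\MyBook.xls;HDR=YES;IMEX=1',   -- hypothetical path
                'SELECT * FROM [Sheet1$]');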
I run:
select * from openrowset('microsoft.jet.oledb.4.0', 'Excel 8.0;database=[filepathandname]', 'select * from [Sheet1$A4:G5000]')
and would like to create views for each distinct table, using openrowset. An added complexity is that the library name depends on the company code (i.e. BISxxSET.ALPA0A turns to BIS03SET.ALPA0A for company 03 whereas BSFFBUA.ALPA1A remains intact).
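If the only variable part is the company code embedded in the library name, one approach is to generate the CREATE VIEW statements dynamically and substitute the code into the name before executing. The sketch below is purely illustrative: the provider/DSN, the view name, and the idea that 'xx' is a literal placeholder in the library name are all assumptions.

DECLARE @CompanyCode char(2), @lib sysname, @sql nvarchar(2000);
SET @CompanyCode = '03';
SET @lib = REPLACE('BISxxSET', 'xx', @CompanyCode);   -- BISxxSET -> BIS03SET

-- Build and run a CREATE VIEW over an openrowset-style pass-through query
SET @sql = N'CREATE VIEW dbo.vw_ALPA0A AS '
         + N'SELECT * FROM OPENROWSET(''MSDASQL'', ''DSN=AS400'', '     -- provider/DSN are placeholders
         + N'''SELECT * FROM ' + @lib + N'.ALPA0A'') AS t;';

EXEC (@sql);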
Hi forum! I tried to do something like this:
CREATE PROCEDURE dbo.maches AS
select * from openrowset('Microsoft.Jet.OLEDB.4.0', 'C:\temp\Fehler.mdb'; 'Administrator'; ' ', Fehlerliste)
This user 'Administrator' exists and there is no password for the .mdb file. In the end the syntax checker tells me:
Error 7303: Could not initialize the data source object of OLE DB provider 'Microsoft.Jet.OLEDB.4.0'.
I'm using Access 2000 and SQL Server 7. Thank you