Questions About Memory-Based Bulk Copy Operation (InsertRow Count, Inserting an Array Directly, Setting Memory-Based Bulk Copy Options)
Feb 15, 2007
Hi~, I have 3 questions about memory-based bulk copy.
1. How many times can the IRowsetFastLoad::InsertRow() method be called before IRowsetFastLoad::Commit(true)?
For example, how many rows can be inserted in the sample below (i.e., what is the maximum value of nCount)?
for (i = 0; i < nCount; i++)
{
    pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData));
}
2. In the code sample above, is there a way to insert a prepared array all at once (passing the BulkData array directly, instead of a for loop)?
3. In OLE DB memory-based bulk copy, what is the equivalent of the T-SQL bulk copy options below?
BULK INSERT database_name.schema_name.table_name FROM 'data_file' WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
-------------------------------------------------------
My solution is like this. Is it correct?
// CoCreateInstance(...);
// Data source
// Create session
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to understand a few considerations:
- Performance: how does it compare with T-SQL's BULK INSERT and the bcp utility?
- SQL Server resource usage: how does a memory-based bulk copy affect server resources while it runs?
- Server-side behavior: when the server is busy, are the inserts delayed, or does IRowsetFastLoad::Commit(true) make them take effect right away?
- Row count: how many rows can IRowsetFastLoad::InsertRow() insert before IRowsetFastLoad::Commit() must be called?
I have a set of records in application memory separated by a record terminator ''. I can write the memory stream to a local disk file and call the bcp API functions to load the file into SQL Server. But how do I transfer the in-memory data directly to SQL Server, without writing it to a data file, using ODBC? I am not using any .NET Framework classes in my code. The SQL Server and the application server (which generates the data records) are two different physical servers connected over the network. I am trying to figure out the fastest and most efficient way to load the data into SQL Server from a remote application server. Thanks for your help.
I need to move all of the contents of one database into another with the same schema, and it looks like this might be just what I need. But it is from 2007, so I wonder if it is still current?
Also, having tried to run it on another database to generate the script that will actually do the copying, I have a few questions. It looks like it generates statements to import the data twice. For example:
BULK INSERT [TaPerfGDB].[dbo].[i1]
    FROM 'C:\Temp\i1.Dat'
    WITH (FORMATFILE = 'C:\Temp\i1.FMT',
          BATCHSIZE = 1000000,
          ERRORFILE = 'C:\Temp\BI_i1.ERR',
          TABLOCK);
And a little later:
INSERT INTO [TaPerfGDB].[dbo].[i1]
    SELECT *
    FROM OPENROWSET(BULK 'C:\Temp\i1.Dat',
                    FORMATFILE = 'C:\Temp\i1.Xml') AS t1;
That does not really make any sense to me. It also generates statements like this:
bcp "[TaPerfGDB].[dbo].[GDB_GEOMNETWORKS]" format nul -n -CRAW -f "C:TempGDB_GEOMNETWORKS.fmt" --S"PGALLUCC-M7" -T
What is the deal with the double hyphen by the server name? Won't it just see that as a comment? It can be easily fixed, but I am just surprised that it is still there after all these years. My purpose in doing this is a desperate attempt to salvage a database that sits on a server with multiple drive errors. These prevent backups, so I cannot just restore the database on the new server. That is why I want to try an approach that goes table by table, so that at least all the tables which are not touched by the drive errors can be moved.
It is a 3 TB database running on SQL Server 2008 R2 std. ed.
So I started a new job recently and have noticed a few strange configurations. Typically I would never mess with the min memory per query option or the index create memory option, because I just haven't seen any need to. My typical thought is that if it isn't broke... Yet they have been modified on every single server in my environment.
From Books Online:
- This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
- The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
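For reference, both settings can be checked and put back to their self-configuring defaults with sp_configure; this is only a minimal sketch, assuming sysadmin rights (0 and 1024 KB are the documented defaults for index create memory and min memory per query):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- current configured and run values
EXEC sp_configure 'min memory per query (KB)';
EXEC sp_configure 'index create memory (KB)';

-- restore the defaults (1024 KB, and 0 = self-configuring)
EXEC sp_configure 'min memory per query (KB)', 1024;
EXEC sp_configure 'index create memory (KB)', 0;
RECONFIGURE;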
I have a question... As far as I know, bulk copying is not possible during a backup operation: if the backup starts first, the backup will go through and the bulk copy will fail; if the bulk copy starts first, the backup fails and the bulk copy continues.
Today I was testing bcp running in DTS using the Execute Process Task (with this task we can run any Win32 executable or batch file). I am trying to bcp out from one database (source) and bcp in to another database (destination). While the package was running, a backup was also running: I had started the database (destination) backup job, and while it was running I started another job to run the DTS package (I even ran the DTS package manually). Both jobs succeeded and the data was inserted into the table... Can anyone shed some light on this?
I want to bulk copy a table using SELECT INTO from a database on server1 to a database on server2. The FROM part of the SELECT INTO only allows a three-part name (database, owner and table) within the one server.
I remember seeing some option that allows one to bulk copy across servers, but I can't find it.
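In case it helps, one common approach is a linked server plus a four-part name in the FROM clause; this is only a sketch, and the linked server name SERVER1 and the table names are made up for illustration:

-- run in the destination database on server2; assumes a linked server
-- named SERVER1 already exists (e.g. created with sp_addlinkedserver)
SELECT *
INTO   dbo.MyTableCopy
FROM   SERVER1.SourceDb.dbo.MyTable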
Hi, I am using the command line for a bulk copy operation. I have a couple of tables with triggers that move data from one table to another on insert. I was just wondering: are those triggers going to fire when I import data into the tables using the bcp command line?
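From what I recall, bulk loads do not fire INSERT triggers by default; you have to ask for them explicitly. A sketch with made-up table and file names (the bcp hint requires SQL Server 2000 or later):

-- T-SQL flavor: opt in to trigger execution during the load
BULK INSERT dbo.StagingTable
FROM 'C:\Data\load.txt'
WITH (FIELDTERMINATOR = '|', FIRE_TRIGGERS)

-- command-line flavor, via the -h hint:
-- bcp MyDb.dbo.StagingTable in C:\Data\load.txt -c -T -h "FIRE_TRIGGERS"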
I am doing a full snapshot on a couple of subscriptions but have been getting this message for the last couple of days. I have to do this at night because the tables are very large, so that it does not block the users. I am not sure on which table this error is happening. It used to work fine, but over the last couple of days this has started to happen. I need to re-sync my subscription database with the production database, but some of the tables are giving problems.
The process could not bulk copy out of table '[dbo].[syncobj_0x3735393934363031]'.
I/O error while writing BCP data-file (Source: ODBC SQL Server Driver (ODBC); Error number: 0) ---------------------------------------------------------------------------------------------------------------
I am trying to use BULK INSERT for a user that is not sysadmin. I have already set up the user as a member of "bulkadmin". When I run the following script:

DECLARE @SQL VARCHAR(1000)

CREATE TABLE amdbo.#temp (
    [id]   [varchar] (10) NULL,
    [fld2] [varchar] (10) NULL,
    [fld3] [varchar] (10) NULL
)

SET @SQL = 'BULK INSERT amdbo.#temp FROM ''F:\test.txt''
    WITH (DATAFILETYPE = ''char'', FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'')'
EXEC (@SQL)

SELECT * FROM #temp

I still get the message:

Server: Msg 8104, Level 16, State 2, Line 1
The current user is not the database or object owner of table '#temp'. Cannot perform SET operation.

Anyone have an idea what I am doing wrong?
Drew.
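One hedged guess: the owner qualification on the temp table may be what trips the ownership check. A minimal variation worth trying, with the same file (the path and row terminator above are reconstructed guesses, and so is this):

-- same script, but referencing the temp table without the amdbo. prefix
CREATE TABLE #temp (
    [id]   varchar(10) NULL,
    [fld2] varchar(10) NULL,
    [fld3] varchar(10) NULL
)

DECLARE @SQL varchar(1000)
SET @SQL = 'BULK INSERT #temp FROM ''F:\test.txt''
    WITH (DATAFILETYPE = ''char'', FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'')'
EXEC (@SQL)

SELECT * FROM #temp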
With the following I try to save the content of an Excel sheet to a SQL table. This works perfectly with SQL Server Express but not with MSDE, which I also need. Here is the code:

String rootPath1 = Request.MapPath("~/Kontoauszug.xls");
String strConn = "Provider=Microsoft.Jet.OLEDB.4.0;" +
    "Data Source=" + rootPath1 + ";Extended Properties=Excel 8.0;";
OleDbDataAdapter da = new OleDbDataAdapter("SELECT * FROM [Mappe1$]", strConn);
DataTable dtCustomers = new DataTable();
da.Fill(dtCustomers);

string ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString();
using (SqlConnection destinationConnection = new SqlConnection(ConnectionString))
{
    // open the connection
    destinationConnection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection.ConnectionString))
    {
        bulkCopy.BatchSize = 500;
        bulkCopy.BulkCopyTimeout = 90;
        bulkCopy.DestinationTableName = "dbo.Auszug";
        bulkCopy.WriteToServer(dtCustomers);
        bulkCopy.Close();
    }
}

With MSDE I see the following error message: "Login failed for user 'sa'." (in German: "Fehler bei der Anmeldung für den Benutzer 'sa'.")

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'sa'.

Source Error:

Line 148:
Line 149:     // Write from the source to the destination.
Line 150:     bulkCopy.WriteToServer(dtCustomers);
Line 151:
Line 152:

Source File: d:\Inetpub\Www_root\XXXXXXXXXXXXXXXXXXXXXXXX.cs    Line: 150

Stack Trace:
Can I still use the bcp command to load data into a table when the target table has a referential integrity constraint with another table (a parent-child relationship)? Thanks.
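As far as I know, yes: bulk loads ignore CHECK and FOREIGN KEY constraints by default (the constraints are then marked as not trusted), so the load itself will not fail on the relationship. If you want the constraints enforced during the load, you have to opt in; a sketch with hypothetical names:

-- enforce FK/check constraints row by row during the load
BULK INSERT dbo.ChildTable
FROM 'C:\Data\child.txt'
WITH (FIELDTERMINATOR = ',', CHECK_CONSTRAINTS)

-- command-line equivalent:
-- bcp MyDb.dbo.ChildTable in C:\Data\child.txt -c -T -h "CHECK_CONSTRAINTS"

-- or load without the hint and re-validate afterwards:
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT ALL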
I have a bcp command that generates a .txt file perfectly. I just want to know how I can generate the text file in a distributed environment.
Assume that my SQL Server is running on machine A. I want the bcp output file to be generated on machine B. What permissions should I grant in order to generate it on machine B?
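One way to look at it: the data file is written by the machine where bcp.exe actually runs, so either run bcp on machine B with -S pointing at machine A, or keep running it on A and give a UNC path into a share on B. If it is launched from T-SQL via xp_cmdshell, it is the SQL Server service account on A that needs write permission on that share. A sketch with made-up names:

-- run on machine A; \\MachineB\BcpOut must be writable by the SQL Server service account
EXEC master..xp_cmdshell 'bcp MyDb.dbo.MyTable out \\MachineB\BcpOut\MyTable.txt -c -S MachineA -T'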
Server: Msg 7399, Level 16, State 1, Line 1 OLE DB provider 'STREAM' reported an error. The provider did not give any information about the error. The statement has been terminated.
I am a beginner in .NET visual programming, using C#.NET with SQL Server 2005.
I am facing a problem in SQL Server. Suppose I have the table given below:

Table name is "A"
------------------------------------------------
1  Michle    Administrator
2  John      Consumer Finance Officer
3  Jackson   Employer
4  Goeffery  Empl0yer
------------------------------------------------

Here the identity column is the first one. Now if I delete the 2nd row, the table becomes:

Table name is "A"
------------------------------------------------
1  Michle    Administrator
3  Jackson   Employer
4  Goeffery  Empl0yer
------------------------------------------------

Now I have another table "B" with the same structure (3 columns, the first of which is the identity column). If I move all the data from table "A" to table "B", the original sequence is disturbed and table "B" contains the data like this:

Table name is "B"
------------------------------------------------
1  Michle    Administrator
2  Jackson   Employer
3  Goeffery  Empl0yer
------------------------------------------------
But I need the numbering in table "B" to stay exactly the same as in table "A" (even though the first column is an identity column), in order to preserve the relationships, like this:

Table name is "B"
------------------------------------------------
1  Michle    Administrator
3  Jackson   Employer
4  Goeffery  Empl0yer
------------------------------------------------

Waiting for your response. I will be very thankful to you for solving my problem.
Note: the data is moved using SqlBulkCopy.
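If I understand the goal, you want the destination to keep the source identity values rather than generate new ones. SqlBulkCopy has a KeepIdentity option (SqlBulkCopyOptions.KeepIdentity, passed to its constructor) for exactly this. The pure T-SQL equivalent is a sketch like the following, with made-up column names since the post doesn't give them:

SET IDENTITY_INSERT dbo.B ON

-- an explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO dbo.B (Id, Name, Role)
SELECT Id, Name, Role
FROM   dbo.A

SET IDENTITY_INSERT dbo.B OFF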
We have the bulk copy option enabled for our DB and we really use it. Would it be possible to set up snapshot replication of particular tables over the Internet to a remote server, where the data will only be retrieved and never changed? Also, is it necessary to have PKs in all tables for this one-way snapshot replication? (For transactional replication it is needed, as far as I know.)
I am working on a SQL Server bulk copy program. I am getting data files from our vendors for shares and stocks. The data files are pipe-separated values. For example, the ASCII file format is:

8388182|"ACC consultanats"|"rating for the current financial year"|23

My doubt is: I have four columns in my SQL Server table, named stocks.

The third column, named memo1, holds large data in the data file; it may be up to one full A4-size page. One important thing is that the data in the third column is not formatted.

Since it is very urgent, please let me know what the format file for this type of data file should look like, and how to use the bulk copy program utility.
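Not a full answer, but a sketch of the table and load side with hypothetical column names. Note that with a plain field terminator of | the surrounding double quotes are loaded as part of the data; stripping them needs either a format file whose terminators include the quote characters or a cleanup pass after the load:

-- hypothetical layout for the four pipe-separated fields
CREATE TABLE dbo.Stocks (
    StockId   int          NOT NULL,
    StockName varchar(100) NULL,
    Memo1     text         NULL,   -- large, unformatted column (up to a page of text)
    Rating    int          NULL
)

BULK INSERT dbo.Stocks
FROM 'C:\Data\stocks.dat'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')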
Hi, I have a text file that contains a header row. I want to import this text file into the pubs database, into the authors table. I usually use this code:

Exec master..xp_cmdshell "bcp pubs..authors in d:\data\authors.txt /c /Snameofserver /Usa /Ppassword"

from a SQL Query Analyzer window. If the text file does not have a header, I am able to import the data, but if the text file does have a header, I get an error. I know that I can open the text file, delete the header, and then run the bcp process, but I do not want to do so. Is there an option I can add to the bcp command above so that it accepts the header row and the bcp procedure succeeds? Thanks.
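For what it's worth, bcp's -F switch sets the first row of the data file to import, so -F 2 (written /F 2 to match the switch style above) should skip a one-line header. A sketch based on the command above:

Exec master..xp_cmdshell "bcp pubs..authors in d:\data\authors.txt /c /F 2 /Snameofserver /Usa /Ppassword"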
Can someone confirm that setting a DB to use the simple recovery model in SQL Server 2000 is the same as setting a DB to use trunc. log on chkpt. and select into/bulk copy on SQL Server 7?
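That matches my reading of Books Online: the simple model corresponds to having trunc. log on chkpt. enabled (typically with select into/bulkcopy enabled as well), while select into/bulkcopy alone maps to the bulk-logged model. For comparison, with a hypothetical database name:

-- SQL Server 7.0 style
EXEC sp_dboption 'MyDb', 'trunc. log on chkpt.', 'true'
EXEC sp_dboption 'MyDb', 'select into/bulkcopy', 'true'

-- SQL Server 2000 equivalent
ALTER DATABASE MyDb SET RECOVERY SIMPLE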
Hi guys, in my db I have these three tables:
1. Stores
2. Products
3. Parts

Their structure is something like: Stores ---- Products ---- Parts

Stores:   StoreId, StoreName
Products: ProductId, StoreId, ProductName
Parts:    PartId, ProductId, PartName

Now, in my application I want to implement a bulk-copy operation so the user can copy products from one store to another, and when a product is copied to the new store, all of its parts should be copied too. In fact, I need a method to insert a Product item into the Products table and synchronously copy its parts into the Parts table, repeating these steps until all products are copied. How can I do that without cursors or loops? Thanks
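A set-based sketch, assuming hypothetical @SourceStoreId/@TargetStoreId variables and that ProductName is unique within a store; that uniqueness is what lets the second insert map each old product row to its new copy without a cursor:

DECLARE @SourceStoreId int, @TargetStoreId int
SET @SourceStoreId = 1
SET @TargetStoreId = 2

-- copy all products of the source store in one statement
INSERT INTO Products (StoreId, ProductName)
SELECT @TargetStoreId, p.ProductName
FROM   Products p
WHERE  p.StoreId = @SourceStoreId

-- copy the parts, joining old products to their new copies by name
INSERT INTO Parts (ProductId, PartName)
SELECT np.ProductId, pt.PartName
FROM   Products op
JOIN   Parts    pt ON pt.ProductId = op.ProductId
JOIN   Products np ON np.ProductName = op.ProductName
                  AND np.StoreId = @TargetStoreId
WHERE  op.StoreId = @SourceStoreId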
I'm trying to start a new publication. When the snapshot agent runs, it stops on a table with the error "Bulk Copy Failed". If I remove the table from the publication, the error just moves to the next table it tries to copy. What could cause a bulk copy to fail during a snapshot?
I have a problem regarding transactional replication.
Let me explain my scenario.
I'm doing transactional replication between two databases.
When the publisher and subscriber are created, the data gets bulk copied from the publisher table to the subscriber table.
My main intention was to create replication between different tables with different fields, and in that I succeeded.
But the main problem is that I want to stop this bulk copy from publisher to subscriber.
Scenario 1: my subscriber table may contain some previous data, which will be replaced with the publisher's data due to the bulk copy. I don't want this. I want to avoid this bulk copy and create procedures (for insert, update and delete transactions) in the subscriber which will take care of replication.
I have achieved almost everything, but I am not able to avoid this bulk copy during the creation of the subscriber.
As far as I know, the only way I can stop the bulk copy is by creating the subscription without initializing it from a snapshot. But then the procedures (for insert, update and delete transactions) won't get created in the subscriber either.
Help me regarding the above scenario; I need it urgently.
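If I understand the problem, the piece worth looking at may be the subscription's @sync_type. In SQL Server 2005, 'replication support only' is documented to generate the replication objects (including the custom insert/update/delete procedures) at the subscriber without bulk copying the data, assuming the subscriber tables and data are already in place. A hedged sketch with made-up names:

EXEC sp_addsubscription
    @publication    = N'MyPublication',
    @subscriber     = N'SubscriberServer',
    @destination_db = N'SubscriberDb',
    @sync_type      = N'replication support only'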
I'm using BULK INSERT for the first time and have a question. I'm getting an error message about a field being truncated:

Bulk insert data conversion error (truncation) for row 2, column 12 (Depleted)

The data type for the "Depleted" column is char(1). Looking at the file, there is indeed only one character in the column. I'm not sure how to fix this, or what I can do about it. Any suggestions would be appreciated. Thanks!
Jennifer

The table:

CREATE TABLE [dbo].[parSalesDetailTemp] (
    [parSalesHdrID]    [int] NOT NULL,
    [parSalesDetailID] [int] NOT NULL,
    [Before]           [int] NOT NULL,
    [Quantity]         [int] NOT NULL,
    [After]            [int] NOT NULL,
    [Promo]            [money] NOT NULL,
    [PromoBefore]      [money] NOT NULL,
    [ItemPrice]        [money] NOT NULL,
    [PromoAfter]       [money] NOT NULL,
    [POSItem]          [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [UnitNumber]       [int] NOT NULL,
    [Depleted]         [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

The SQL:

BULK INSERT parSalesDetailTemp
FROM '\\wbhq.com\dfs\dvi\DataInt\TLDFiles\ExtractSalesDtl.csv'
WITH (FIELDTERMINATOR = ',')

The file contents (first few rows):

8032753,37312006,0,1,0,0,0,4.39,0,"WB-ML",2,N
8032753,37312007,0,1,0,0,0,4.39,0,"WB-ML",2,N
8032753,37312008,0,2,0,0,0,.00,0,"ML-M-COK",2,N
I have two questions about BULK INSERT. The first is: why can't I use it in a stored procedure with a parameter giving the FROM filename?
Second, I tried using a BULK INSERT statement interactively and it executes with no errors, but says 0 row(s) affected. If I paste one of the lines from the CSV file into a query window and add all the right single quotes in a regular INSERT statement, it inserts fine.
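On the first question: BULK INSERT will not accept a variable for the file name, so the usual workaround is to build the statement dynamically. On the second, a terminator mismatch (a header row, or unexpected line endings) is a common cause of the 0-rows symptom, so that is worth double-checking. A sketch with hypothetical table, terminator, and procedure names:

CREATE PROC dbo.LoadFromFile
    @path varchar(260)
AS
BEGIN
    DECLARE @sql varchar(1000)
    -- splice the file name into a dynamically built BULK INSERT
    SET @sql = 'BULK INSERT dbo.TargetTable FROM ''' + @path + '''
        WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)
END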
I am trying to export a table to a .CSV file using the SQL 2005 BCP utility through a .NET web page. I get the following error:

"An error occurred during the execution of xp_cmdshell. A call to 'CreateProcessAsUser' failed with error code: '1314'."

I have done the following steps:
-- Set up a proxy for a Windows account using the command: sp_xp_cmdshell_proxy_account '<winAccount>', '<password>'
-- EXEC sp_grantdbaccess '<winAccount>'
-- GRANT exec ON xp_cmdshell TO <winAccount>
-- The BCP command is inside a stored proc, so I run the stored proc as: CREATE PROC dbo.Export_Table WITH EXECUTE AS '<winAccount>'

What am I missing? I can run the stored proc in SQL Server Management Studio using that Windows account. I don't get the error when I use the Visual Studio Development Web Server; the export file is created without any problem. I get the error only when I run my .NET web page through IIS. Please help. Thanks
I'm running SQL 7.0 SP3 on two different machines (one with additional hotfixes). I'm taking a nightly snapshot of imported data on Server1 and pushing it out to another SQL 7.0 server on our network, Server2. All but one table is copied successfully. On the final table, I receive the message, "The process could not bulk copy into table '"%"'." Error Information Category: Data Source, Source: Server2, Number 4813.
Full error message: "Expected the text length in data stream for bulk copy of text, ntext, or image data."
I've looked up 4813, but it's pretty ambiguous/generic. Also, when I SELECT from Server1 and INSERT INTO Server2 in Query Analyzer, I receive no errors. Does anyone have any insight?