Bulk Copy Command Or Exporting Data Table Wise From Database To CSV Files
Dec 10, 2013
I am using the Bulk Copy command for exporting data table-wise from the database to CSV files, and it was working fine. For the last 3-4 days, though, the data in the CSV files for some tables comes out as junk.
I have to export the table data from my database into text files, as I need to load it into an Informix database using a shell script. Is there a way I can do this?
Is there any other way by which I can move the data from SQL Server to Informix?
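A hedged sketch of the usual route: the bcp utility can export each table to a delimited text file that a shell script can then feed to Informix (for example via its LOAD statement or dbload). Server, database, table, and path names below are placeholders, not from the original post.

rem -c = character format, -t"|" = pipe field terminator, -T = Windows authentication
bcp MyDb.dbo.MyTable out C:\Export\MyTable.txt -c -t"|" -S MyServer -T

Repeating the command per table (or generating one line per table from the system catalog) gives one flat file per table.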
I have more than 500 CSV files with a similar structure [same column names and same data format]. I would like to load these files into a table in a SQL Server 2014 database.
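A minimal T-SQL sketch of one way to do this, assuming all files sit in one folder, share the same layout, and xp_cmdshell is enabled; the folder, table name, terminators, and header row are assumptions:

DECLARE @files TABLE (fname nvarchar(260));
-- list the CSV files in the import folder
INSERT INTO @files EXEC xp_cmdshell 'dir /b C:\Import\*.csv';

DECLARE @f nvarchar(260), @sql nvarchar(max);
DECLARE c CURSOR FOR SELECT fname FROM @files WHERE fname LIKE '%.csv';
OPEN c;
FETCH NEXT FROM c INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- build and run one BULK INSERT per file
    SET @sql = N'BULK INSERT dbo.TargetTable FROM ''C:\Import\' + @f
             + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @f;
END;
CLOSE c;
DEALLOCATE c;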
Before implementing a memory-based bulk copy insert with the IRowsetFastLoad interface of the SQL Server 2005 OLE DB provider, I want to understand a few considerations:
- performance: how it compares with T-SQL's "BULK INSERT ..." and the bcp utility
- SQL Server resource usage: how a memory-based bulk copy affects server resources while it runs
- server-side behavior: when the server is busy, is the update delayed, or does IRowsetFastLoad::Commit(true) insert immediately?
- row count: is there a limit on how many rows can be inserted with IRowsetFastLoad::InsertRow() before IRowsetFastLoad::Commit?
Hi~, I have 3 questions about memory-based bulk copy.
1. What is the limit on the number of IRowsetFastLoad::InsertRow() calls before IRowsetFastLoad::Commit(true)? For example, how many rows can the sample below insert (i.e., the maximum value of nCount)?

for (i = 0; i < nCount; i++)
{
    pIFastLoad->InsertRow(hAccessor, (void*)(&BulkData));
}
2. In the above code sample, is there a method for inserting a prepared array all at once (the BulkData array directly, without the for loop)?
3. In OLE DB memory-based bulk copy, what is the equivalent of the following T-SQL bulk copy options?

BULK INSERT database_name.schema_name.table_name
FROM 'data_file'
WITH (ROWS_PER_BATCH = rows_per_batch, TABLOCK);
-------------------------------------------------------
My solution is like this. Is it correct?

// CoCreateInstance(...);
// Data source
// Create session
Hi guys, in my db I have these three tables:
1. Stores
2. Products
3. Parts
Their structure is something like: Stores ---- Products ---- Parts
Stores: StoreId, StoreName
Products: ProductId, StoreId, ProductName
Parts: PartId, ProductId, PartName
Now, in my application I want to implement a bulk-copy operation so the user can copy products from one store to another, and when a product is copied to the new store, all of its parts should be copied too. In fact, I need a method to insert a product row into the Products table and synchronously copy its parts into the Parts table, repeating these steps until all products are copied. How can I do that without cursors or loops? Thanks
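A set-based sketch of one way to do this (requires SQL Server 2008 or later): MERGE with an OUTPUT clause can capture the old-to-new ProductId mapping in a single statement, so the parts can then be copied with one INSERT ... SELECT. The @SourceStoreId/@TargetStoreId variables are placeholders for illustration.

DECLARE @SourceStoreId int = 1,      -- placeholder
        @TargetStoreId int = 2;      -- placeholder
DECLARE @Map TABLE (OldProductId int, NewProductId int);

-- MERGE (unlike a plain INSERT) lets OUTPUT see source columns, so the
-- old-to-new ProductId mapping is captured in one set-based statement
MERGE INTO Products AS tgt
USING (SELECT ProductId, ProductName
       FROM Products
       WHERE StoreId = @SourceStoreId) AS src
ON 1 = 0                             -- never matches: every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (StoreId, ProductName)
    VALUES (@TargetStoreId, src.ProductName)
OUTPUT src.ProductId, inserted.ProductId INTO @Map (OldProductId, NewProductId);

-- copy the parts of every copied product via the mapping
INSERT INTO Parts (ProductId, PartName)
SELECT m.NewProductId, p.PartName
FROM Parts AS p
JOIN @Map AS m ON m.OldProductId = p.ProductId;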
I have a set of records in application memory separated by a record terminator ('\n'). I can write the memory stream to a local disk file and call the bcp API functions to load the file into SQL Server. But how do I transfer the in-memory data directly to SQL Server, without writing to a data file, using ODBC? I am not using any .NET Framework classes in my code. The SQL Server and the application server (generating the data records) are two different physical servers connected through a network. I am trying to figure out the fastest and most efficient way to load the data into SQL Server from a remote application server. Thanks for your help.
Doing the standard processing: taking source data into a staging database, where I then create the new target tables, transform the source data into the new target table structure, and load the data.
However, having created the new target tables in my staging database, I cannot accurately migrate these tables into the new database.
An example table.
The following is the script created by SQL Management Studio if I right-click and script the table as CREATE.

CREATE TABLE [dbo].[tbPRO_Package](
    [PRO_PackageID] [int] IDENTITY(1,1) NOT NULL,
    [CreatedDate] [datetime] NOT NULL CONSTRAINT [DF_tbPRO_Package_CreatedDate] DEFAULT (getdate()),
    [CreatedBy] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_tbPRO_Package_CreatedBy] DEFAULT (suser_sname()),
    [ModifiedDate] [datetime] NOT NULL CONSTRAINT [DF_tbPRO_Package_ModifiedDate] DEFAULT (getdate()),
    [ModifiedBy] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_tbPRO_Package_ModifiedBy] DEFAULT (suser_sname()),
    [GUID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_tbPRO_Package_GUID] DEFAULT (newid()),
    CONSTRAINT [PK_tbPackage_ID] PRIMARY KEY CLUSTERED ([PRO_PackageID] ASC) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
Note that the field PRO_PackageID has an identity, and is a primary key.
Also note the constraints on several fields and the default values.
If I try to move this table and its data using the SQL Server Import and Export Wizard, telling it to drop and create the table and to keep identity, it does not correctly generate the table definition - it generates the following script:
CREATE TABLE [Migration Staging].[dbo].[tbPRO_Package] (
    [PRO_PackageID] int NOT NULL,
    [PackageCode] varchar(8) NOT NULL,
    [LKU_VatRateCode] nchar(10),
    [CancellationCharge] decimal(18,2),
    [CancellationMultiplier] decimal(18,2),
    [CreatedDate] datetime NOT NULL,
    [CreatedBy] varchar(50) NOT NULL,
    [ModifiedDate] datetime NOT NULL,
    [ModifiedBy] varchar(50) NOT NULL,
    [GUID] uniqueidentifier NOT NULL
)
We've lost the identity, the PK, the constraints, and the defaults.
If I use the SSIS Transfer SQL Server Objects Task, then the primary key is kept, but the identity is not, nor are the constraints or defaults.
Am I missing something? Is it actually POSSIBLE to accurately move tables between databases?
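For what it's worth, one workaround sketch: script the table yourself with the full CREATE statement (as in the first script above, which keeps the identity, PK, defaults, and collations), run it in the target database, and then copy the data with IDENTITY_INSERT enabled. The source database name below is a placeholder, and the column list follows the example table.

SET IDENTITY_INSERT [Migration Staging].[dbo].[tbPRO_Package] ON;

-- explicit column list is required when inserting into the identity column
INSERT INTO [Migration Staging].[dbo].[tbPRO_Package]
       (PRO_PackageID, CreatedDate, CreatedBy, ModifiedDate, ModifiedBy, [GUID])
SELECT PRO_PackageID, CreatedDate, CreatedBy, ModifiedDate, ModifiedBy, [GUID]
FROM [SourceStaging].[dbo].[tbPRO_Package];  -- source database name is an assumption

SET IDENTITY_INSERT [Migration Staging].[dbo].[tbPRO_Package] OFF;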
Still in my republisher scheme: I added a trigger at the publisher server, created the snapshot, then reinitialized the subscription and started the merge agent, and the trigger went through. But at the republisher I followed the same technique that worked previously, and now it gives me these errors when I start the merge agent:
The process could not deliver the snapshot to the Subscriber. (Source: Merge Replication Provider (Agent); Error number: -2147201001)
The process could not bulk copy into table '"dbo"."MSmerge_genhistory"'. (Source: ***************** (Agent); Error number: 20037)
Cannot insert duplicate key row in object 'MSmerge_genhistory' with unique index 'unc1MSmerge_genhistory'. (Source: ***************** (Data source); Error number: 2601)
Function sequence error (Source: ***************** (ODBC); Error number: 0)
Does anyone have any idea what to do, other than deleting the subscription?
I have set up an 'indexed view logged' article so that I can replicate my view to a table on the destination server. I need to do transactional replication without changing the data at the destination, so I used 'Keep existing object unchanged' in the article, but I always get 'The process could not bulk copy into table '"nnn"."musictracks1"'.'
Hi everybody, I'm new to SSIS, so it's possible that mine is a very stupid question.
I have to develop a simple ETL package that reads data from a CSV file and writes it to an XLS file; the problem is that when the number of rows exceeds the maximum allowed for an XLS file (an .xls worksheet holds at most 65,536 rows), I get an error.
Is there a way to solve this problem? For example, by adding a new sheet or creating a new file?
The process could not bulk copy out of table '[dbo].[syncobj_0x3944323636373031]'.
Snapshot replication was running fine up until last week. The E drive on the system failed. The server was rebooted and the drive returned. Since then the replication has failed. (The E drive is the location of the database and log files.)
The database is approx. 12 GB. I googled the error message and found several references to the drive being full. So I tried to map the replication to a different drive. The replication fails with the same error.
I also tried to drop and recreate the replication process. Still the same error.
Can anyone provide an example of bulk copying XML data to a SQL table? I am also looking at using column mapping so that I can map fields and also insert a new GUID into the key of the SQL table. Many thanks.
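A minimal T-SQL sketch of one server-side way to do it (an alternative to a client-side bulk copy API): load the XML file as a single blob, shred it with nodes(), map elements to columns, and generate the GUID key with NEWID(). The file path, element names, and target columns are assumptions, not from the original post.

DECLARE @x xml;

-- read the whole file into an xml variable (SQL Server 2005 and later)
SELECT @x = CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK 'C:\Data\items.xml', SINGLE_BLOB) AS src;

-- shred the XML, mapping elements to columns and adding a new GUID per row
INSERT INTO dbo.TargetTable (Id, Name, Price)
SELECT NEWID(),
       n.value('(Name)[1]',  'varchar(50)'),
       n.value('(Price)[1]', 'decimal(18,2)')
FROM @x.nodes('/Items/Item') AS t(n);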
I need to move all of the contents of one database into another with the same schema, and it looks like this might be just what I need. But it is from 2007, so I wonder if it is still current?
Also, having tried to run it on another database to generate the script that will actually do the copying, I have a few questions. It looks like it generates statements to import the data twice. For example:
BULK INSERT [TaPerfGDB].[dbo].[i1]
FROM 'C:\Temp\i1.Dat'
WITH (FORMATFILE = 'C:\Temp\i1.FMT',
      BATCHSIZE = 1000000,
      ERRORFILE = 'C:\Temp\BI_i1.ERR',
      TABLOCK);
And a little later:
INSERT INTO [TaPerfGDB].[dbo].[i1]
SELECT * FROM OPENROWSET(BULK 'C:\Temp\i1.Dat', FORMATFILE = 'C:\Temp\i1.Xml') AS t1;
That does not really make any sense to me. It also generates statements like this:
bcp "[TaPerfGDB].[dbo].[GDB_GEOMNETWORKS]" format nul -n -CRAW -f "C:\Temp\GDB_GEOMNETWORKS.fmt" --S"PGALLUCC-M7" -T
What is the deal with the double hyphen by the server name? Won't it just see that as a comment? It can be easily fixed, but I am just surprised that it is still there after all these years. My purpose in doing this is a desperate attempt to salvage a database that sits on a server with multiple drive errors. These prevent backups, so I cannot just restore the database on the new server. That is why I want to try an approach that goes table by table, so that at least all the tables which are not touched by the drive errors can be moved.
It is a 3 TB database running on SQL Server 2008 R2 std. ed.
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out. PLEASE HELP!
I have a flat file, which I read and then insert into a database table; that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is that if the package runs again, it checks whether the record already exists, based on two columns, date and hour, and does not insert the record.
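If a staging table is acceptable, here is a minimal T-SQL sketch of the dedupe step (within the SSIS data flow itself, a Lookup transformation against the destination table is the usual equivalent). Table and column names are assumptions:

INSERT INTO dbo.TargetTable (LoadDate, LoadHour, Value)
SELECT s.LoadDate, s.LoadHour, s.Value
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.TargetTable AS t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour);  -- skip rows whose (date, hour) already exists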
I am transferring data from Oracle tables into text files and am facing these errors.
1. I have a variable working as an expression; my query goes into that variable, and the variable is then passed to a Data Flow task, which parses the query. My query simply says "Select * from PLS.ABC", where PLS is my schema, but the task generates the error "Opening a rowset for "Select * from PLS.ABC" failed. Check that the table exists in the database." - and the table is definitely there.
2. I have a Foreach Loop that iterates through all the table names, and the table names are passed on to the query variable. The Data Flow task inside the Foreach Loop takes the query variable and should generate text files based on the table names, which I supply through another variable bound to the ConnectionString property of the Flat File destination. Is this possible or not? All the tables have different columns, and I need the output in text files.
I created a file People.txt containing FirstName, LastName separated by a pipe.
------------------content-----------
John | Doe
Mike | James
Adam | Smith
-----------------------------------------
and another one called gender.txt
------------------content-----------
M
---------------------------------------
I would like to create an Integration Services package that combines each record of the first file with the record of the second file and inserts the result into a table.
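Once both files are landed in staging tables, the combine step is a plain cross join. A minimal sketch with assumed table and column names (inside the SSIS data flow itself, the same effect typically needs a workaround such as adding a constant dummy key to both sources and joining on it, since there is no built-in cross-join transform):

INSERT INTO dbo.People (FirstName, LastName, Gender)
SELECT p.FirstName, p.LastName, g.Gender
FROM dbo.StagingPeople AS p
CROSS JOIN dbo.StagingGender AS g;   -- every person paired with every gender row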
I have around 100 Excel files in a folder. I want to import all the files dynamically and load all the data into a single table in SQL Server 2008. Without using SSIS, I want to query them using OPENROWSET.
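A minimal sketch for a single file, assuming the ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled; the path and sheet name are placeholders. For the 100 files, the same dir /b + dynamic SQL loop shown in the CSV sketch earlier applies, with this statement as the loop body.

INSERT INTO dbo.AllData
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Files\Book1.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');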
I copied a database using backup and restore from SQL Server 2000 to SQL Server 2005. I noticed the data (mdf) and log (ldf) files did not copy. When I try to do a copy and paste, it gives me an error: "Cannot copy ldf: it is being used by another person or program. Close all programs and try again." Does this mean I have to stop both servers to copy between them? Please help.
Hi, I have (had) an old Win2k Server with about 30 web site databases (SQL 2000) that just went under due to hardware problems. Thankfully, I have backups of all the databases plus the MDF and LDF files from the hard drive. I want to move all of these sites and their data to a newer server (Win2003) running SQL 2000. What's the best way to copy the databases from the old server hard drive (now mounted as an external drive on a local machine; I'm currently FTPing all of the web site directories from it to the new server)? Just upload the original data to the new server and then mount the MDF and LDF files within the new SQL Server? Or do I restore the backup files in the new SQL 2000? All of my previous data migrations have been DTS operations from one live server to another, so I have no experience with either of the above scenarios. I'll certainly have a lot more experience with one of them by the time this weekend is through. Thanks for any help you can offer.
It seems to me that files created on Unix machines with the line terminator '\n', or chr(10), cannot be imported using the Bulk Insert statement. Is this a bug, or an oversight by Microsoft? Does this mean that unless one replaces all '\n' with '\r\n', there is no way to use Bulk Insert to import Unix files? This is very strange behavior by MSSQL. Even lesser programs such as Excel and Word automatically recognize chr(10) as a line termination character. Am I missing something, or is this just the way MSSQL is?
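A minimal workaround sketch: BULK INSERT can load LF-terminated Unix files if the row terminator is spelled out explicitly; on SQL Server 2008 and later the hex form 0x0a names a bare LF (the literal '\n' is unreliable here, since bcp and BULK INSERT tend to treat it as CRLF). The table name, path, and field delimiter are assumptions.

BULK INSERT dbo.TargetTable
FROM 'C:\Data\unixfile.txt'
WITH (FIELDTERMINATOR = '\t',     -- adjust to the file's delimiter
      ROWTERMINATOR  = '0x0a');   -- bare LF, i.e. chr(10)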
We are having a problem trying to back up the database device and log DAT files located in the MSSQL\Data directory. Seagate Backup Exec states that the files are busy and skips them during its backup cycle. It skips all the devices in the directory.
Microsoft SQL Server 2005 - 9.00.3159.00 (Intel X86) Mar 23 2007 16:15:11 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
We have 40 subscribers. When adding a new article, the snapshot has to be regenerated for every subscriber. The tables are very big, with filters. The regeneration process copies the data anew into ...ReplData\name.bcp, taking 30-40 minutes per subscriber! So a full pass to make a small correction to the replication flow takes 2-3 days...
Is it possible to initialize a subscription without copying the bcp data?
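If this is a transactional publication, a hedged sketch of the no-snapshot route: when the schema and data already exist at the subscriber, the subscription can be created with 'replication support only', so no bcp files are generated or applied. All names below are placeholders; for merge publications this option does not apply.

EXEC sp_addsubscription
     @publication    = N'MyPublication',
     @subscriber     = N'MySubscriber',
     @destination_db = N'MySubscriberDb',
     @sync_type      = N'replication support only';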
Hi! I did:

alter database mydb set single_user with rollback immediate;
exec sp_detach_db @dbname = 'mydb', @keepfulltextindexfile = 'true';

Then I tried to copy the files to a new location on other drives on the same server, but got:
>>Cannot copy <myfile>: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use<<
I also tried renaming the file, without success. I also tried with the DB service stopped (not preferred), without success.
How do I find out which process has the files locked? Best regards
I have a SQL Server 2008 backend with an Access 2007 frontend database. Each time I export a query I get the following error:
Code: Microsoft Access was unable to append all the data to the table.
The contents of fields in 0 record(s) were deleted, and 1 record(s) were lost due to key violations.
*If data was deleted, the data you pasted or imported doesn't match the field data types or the FieldSize property in the destination table.
*If records were lost, either the records you pasted contain primary key values that already exist in the destination table, or they violate referential integrity rules for a relationship defined between tables.
Do you want to proceed anyway?
I don't know what, if anything, is actually missing, because the amount of data is more than 6000 records. It seems everything exported, but I would have to comb through the data to be sure.
Let me first start by saying that I am no SQL DB guru and have no great knowledge. I am sure this question is the most basic posted here. I would like to know how to export a single table from a SQL 2005 DB. I need to export this table from one SQL server to another. Just one table and its contents. Also, I would like this exported table to be saved as a *.dat file.
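A hedged sketch using bcp, the usual tool for this: export in native format (a common convention for .dat files moved between SQL Servers) and reload on the other server. Server, database, and table names are placeholders.

rem export from the source server (-n = native format, -T = Windows authentication)
bcp SourceDb.dbo.MyTable out C:\Export\MyTable.dat -n -S SourceServer -T

rem import into the destination server (the table must already exist there)
bcp TargetDb.dbo.MyTable in C:\Export\MyTable.dat -n -S TargetServer -T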