I was wondering if there was a different approach I should take in appending data to a table...
My destination table has about 94+ million records in it, and I have been taking two approaches to getting new files into this table:
1) I do a data pump task in a DTS package to import the file to a trans (temp) table, which is truncated every time, and then do an INSERT INTO statement from the temp table to my destination table.
The import to the trans table only takes a few minutes (about 1-2 million records per file, with short record lengths), but the INSERT INTO statement takes upwards of 6 hours to append.
2) I have tried doing a bulk insert task going directly to the destination table (which defeats the purpose of my trans table for checking the data beforehand, but I feel the data is clean at this point).
I am running the bulk insert right now, and it's been running for over 3 hours, so I'm going to assume it will take just as long as the INSERT INTO statement did before.
My destination table does not have any indexes in it at all, and I don't need to do any transformations to the data when bringing it into SQL since the data is clean. Also, I have a default value constraint on one of my fields on the destination table.
Plus there are other people and applications hitting the server which could impact the overall processing, but nothing out of the ordinary is going on with the server today. I know there are only so many ways to get a file into a table, but maybe someone knows a different way I should try this.
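In case it helps frame answers, here is a minimal sketch of the two load variants I mean, with the file path, table names, and columns all as placeholders; whether either load is minimally logged depends on the server version, the recovery model, and the TABLOCK hint:

-- Sketch only: path and names are placeholders, not my real schema.
-- BULK INSERT straight to the destination, committed in batches:
BULK INSERT dbo.DestinationTable
FROM '\\fileserver\loads\newdata.txt'
WITH (TABLOCK, BATCHSIZE = 250000);

-- The trans-table route; TABLOCK on the target can help when loading a heap:
INSERT INTO dbo.DestinationTable WITH (TABLOCK)
SELECT * FROM dbo.TransTable;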
I have a sql server 2008 backend with an Access 2007 frontend database. Each time I export a query I get the following error:
Code: Microsoft Access was unable to append all the data to the table.
The contents of fields in 0 record(s) were deleted, and 1 record(s) were lost due to key violations.
* If data was deleted, the data you pasted or imported doesn't match the field data types or the FieldSize property in the destination table.
* If records were lost, either the records you pasted contain primary key values that already exist in the destination table, or they violate referential integrity rules for a relationship defined between tables.
Do you want to proceed anyway?
I don't know what, if anything, is actually missing, because the amount of data is more than 6000 records. It seems everything exported, but I would have to comb through the data to be sure.
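A hedged diagnostic sketch for tracking down the one lost record: find source rows whose primary key already exists in the destination. SourceQuery, DestTable, and ID are placeholders for the real names:

SELECT s.*
FROM SourceQuery AS s
INNER JOIN DestTable AS d ON d.ID = s.ID;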
I have an existing table I need to add data to. The data is in a text file, and the existing table already has data in it (I don't want to delete this, I want to add to it).
I used Microsoft's import utility, but this created a separate table with generic field names (Column01, Column02, etc.). Is there a step in this wizard I missed?
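In the wizard you can normally pick the existing table from the destination drop-down instead of accepting the new auto-named one. Failing that, here is a sketch of the same append in T-SQL, with the path and table name as placeholders (BULK INSERT appends; it never removes existing rows):

BULK INSERT dbo.ExistingTable
FROM 'C:\loads\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);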
Looking for a faster method of moving data from SQL to Oracle.
I'm attempting to push a SQL table into an Oracle table (SQL Server 2000; Oracle 7, 8, and 9). I have no problem doing this with either the 'Oracle Provider for OLE DB' or the 'Microsoft OLE DB Provider for Oracle'. None of my data is being transformed, so it's a straight import. With the hardware I'm using it takes nearly 3 seconds to import 1000 rows. While this isn't too bad, I need to import upwards of 4 million rows, and this results in unacceptable times.
I do have an oracle script that imports the csv files of the tables, but I'm looking for an all inclusive sql solution.
Does anyone know of another method in SQL that I can use to push the data faster?
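One variant worth timing, purely as a sketch: pushing through a linked server with INSERT ... OPENQUERY, so the target is addressed as a pass-through query rather than by four-part naming. ORA_LINK and the table/column names are placeholders, and I can't promise it beats the OLE DB route on your hardware:

INSERT INTO OPENQUERY(ORA_LINK, 'SELECT COL1, COL2 FROM TARGET_TABLE')
SELECT Col1, Col2
FROM dbo.SourceTable;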
I have 1 table with a huge amount of data that I receive from someone else in a flat file format. I want to be able to filter through that data, scrub it, and sort out the good data from the bad data.
I'm scrubbing the data using different stored procs that I've created, and through a web interface the user can pick which records they wish to create.
If I were to create a new table for clean records, what is the syntax to keep appending to that table the data that I'm obtaining via the stored procs I've created?
Any thoughts or suggestions are greatly appreciated in advance.
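A minimal sketch of the append syntax; CleanRecords, RawRecords, the columns, and the IsValid flag are all placeholders for whatever the scrub procs actually produce. INSERT INTO ... SELECT adds rows without touching what is already in the table:

INSERT INTO dbo.CleanRecords (CustomerID, Name, Status)
SELECT r.CustomerID, r.Name, r.Status
FROM dbo.RawRecords AS r
WHERE r.IsValid = 1;  -- whatever flag the scrub procs set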
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and can be run successfully on the server. I want to import the data which draws from the view to a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops at the step Setting Source Connection:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages:
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before and know what is happening?
I'm moving data between identical tables and have to use a flat file as an intermediary. I thought: "No problem, SSIS can do a quick export to a file, then move the file to another server, then use SSIS to import the data to the new server."
Seems simple, right?
I'm hitting all sorts of surprising data conversion errors. I used the export wizard to create the export package. This works fine. However, using the same flat file definition, the import package fails -- even when I have no destination. That is, I have just one data flow task that contains only one control: the Flat File source. When I run the package, the flat file definition fails with data type conversion and truncation errors. One of the obvious errors is for boolean types. The SQL field is a bit, SSIS defined the column as DT_BOOL, and the exported data contains the literal text values "TRUE" and "FALSE". So SSIS converts a SQL datatype of bit to "TRUE" and "FALSE" on export, but can't make the reverse conversion on import?
Does anyone else find this surprising? I would expect that what SSIS exports, it can import given all the same table and flat file definitions. Is SSIS the wrong tool to do such simple bulk copies? I'd like to avoid using BCP because this process will need to run automatically within SQL Agent so we can leverage all the error tracking and system monitoring.
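One workaround sketch while waiting for a better answer: land the flat file in a staging table with the bit column typed as varchar, then convert the text explicitly. All names here are placeholders:

INSERT INTO dbo.TargetTable (Id, IsActive)
SELECT Id,
       CASE WHEN IsActiveText = 'TRUE' THEN 1 ELSE 0 END
FROM dbo.StagingTable;

(Inside SSIS itself, a Derived Column doing the equivalent comparison against the string "TRUE" accomplishes the same thing.)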
Strange one here - I am posting this in both SQL Server and Access forums
Access is telling me it can't append any of the records due to a key violation.
The query:
INSERT INTO dbo_Colors ( NameColorID, Application, Red, Green, Blue ) SELECT Colors_Access.NameColorID, Colors_Access.Application, Colors_Access.Red, Colors_Access.Green, Colors_Access.Blue FROM Colors_Access;
Colors_Access is linked from another MDB and dbo_Colors is linked from SQL Server 2000.
There are no indexes or foreign constraints on the SQL table. I have no relationships on the dbo_ table in my MDB. The query works if I append to another Access table. The datatypes all match between the two tables, though the dbo_ table has two additional fields not referenced in the query.
I can manually append the records using cut and paste with no problems.
All, using an Access 2003 frontend and SQL Server 2008 backend. I have an append query to insert 80000 rows from one table into an empty table. I get an error:
"Microsoft Office Access set 0 field(s) to Null due to a type conversion failure, and didn't add 36000 record(s) to the table due to key violations, 0 record(s) due to lock violations, and 0 record(s) due to validation rule violations."
I know this error normally comes up if there are dups in a field that doesn't allow them.
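A quick sketch for confirming that theory: count the duplicated key values in the source table. SourceTable and KeyField are placeholders for the real names:

SELECT KeyField, COUNT(*) AS DupCount
FROM SourceTable
GROUP BY KeyField
HAVING COUNT(*) > 1;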
Hello. I have created a table in MSSQL 2000 which holds details of names etc. I have also included categories of interest. However, the table is growing very big and unmanageable as the list of interests expands.
Instead, I would like to create separate tables for each category of interest within the same database and populate each table with names taken from the Names_Table; I could then indicate yes or no if a name is interested in that category, for example Art_category. However, I am unsure how I can import the column of names from the Names_Table to populate the NameID column in the Art_category table.
I would appreciate advice and possibly a link to a step-by-step tutorial. Thanks. Lynn
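A minimal sketch of the population step, assuming Names_Table has a NameID column and Art_category already exists; the Interested column defaulting to 'N' is my own illustration, not something from your schema:

INSERT INTO Art_category (NameID, Interested)
SELECT NameID, 'N'   -- everyone starts as "not interested"
FROM Names_Table;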
If you were doing paging of results on a web page and were interested in grabbing, say, records 10-20 of a result set, but also wanted to know the total number of records in the result set (so you could know the total number of pages in the set), would it be better to query the DB table twice, once for COUNT(*) and again for the records for the current page? Or better to create a temp table, select the records into it, and then get COUNT(*) and the page results from the temp table?
I saw an example in a book that made a temp table to do this, and to me it seemed like it would be slower. I don't get the reason for a temp table. Anyone have any ideas?
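For what it's worth, on SQL Server 2005 and later both numbers can come back in a single query; a sketch with hypothetical table and column names:

SELECT p.*
FROM (
    SELECT o.OrderID, o.OrderDate,
           ROW_NUMBER() OVER (ORDER BY o.OrderDate) AS RowNum,
           COUNT(*) OVER () AS TotalRows   -- total for computing page count
    FROM dbo.Orders AS o
) AS p
WHERE p.RowNum BETWEEN 10 AND 20;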
Hi everyone, I have some data in a CSV file, and I have to import it into a table. For some reason, I am supposed to import this data into a temp table first and then move it to the original table, converting it to the right data types as I do so. Is there a better way to do this, and how can I give custom error messages saying, for e.g., that a value cannot be converted to the target data type, or that the right number of records is not present? Thanks for the help.
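A sketch of the validate-then-move step with custom messages, assuming SQL Server 2005+ (for TRY/CATCH) and a staging table whose columns arrive as varchar; every name and the expected row count here are placeholders:

BEGIN TRY
    IF EXISTS (SELECT 1 FROM dbo.StagingTable WHERE ISDATE(OrderDate) = 0)
        RAISERROR('OrderDate contains values that cannot be converted to datetime.', 16, 1);

    IF (SELECT COUNT(*) FROM dbo.StagingTable) <> 5000
        RAISERROR('The staging table does not contain the expected number of records.', 16, 1);

    INSERT INTO dbo.TargetTable (OrderDate, Amount)
    SELECT CONVERT(datetime, OrderDate), CONVERT(money, Amount)
    FROM dbo.StagingTable;
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- surfaces the custom message to the caller
END CATCH;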
Hi, I'm new to SQL Server, and would appreciate some advice on the quickest way to import data from a CSV file.
I've created a database using Visual Web Developer Express, and added a couple of tables. The Help file in SQL Server Express (which is installed on the same PC) indicates that I should use BULK INSERT to populate the table. The only snag is, I couldn't find anywhere to enter the commands! Eventually, I found out about the SQLCMD command, which I entered in a Windows command window. I successfully connected to the default (SQLEXPRESS) server instance this way, but when I typed USE <my database name> I got an error back saying it couldn't find the database. I know that Visual Web Developer Express by default creates user-specific instances of the database, but I've turned that off (I think!) via the connection string. So, please could someone tell me how I can connect to my database via the SQLCMD command, or alternatively please let me know how else I can bulk import data from a CSV file. Many thanks in advance.
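A sketch of the sqlcmd session, assuming the database really is attached to the SQLEXPRESS instance under the name MyDatabase and the CSV lives at C:\data\import.csv (both placeholders); listing sys.databases first is useful because a user-instance .mdf that was never attached to the service won't show up there:

C:\> sqlcmd -S .\SQLEXPRESS -E
1> SELECT name FROM sys.databases;
2> GO
1> USE MyDatabase;
2> GO
1> BULK INSERT dbo.MyTable
2> FROM 'C:\data\import.csv'
3> WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
4> GO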
I am very new to the entire world SQL Server databases. I am starting from scratch.
Currently I have a little website I am doing for myself that is .asp based and will allow users to query some sports boxscores. I hope to create a user interface that will allow folks to separate team results based on certain criteria...
It is just a hobby of mine that I have been doing for years with Excel, and now I hope to let others like me do it as well.
here is what I got.
MSSQL 2005 Server with a database. I am using SQL Server 2005 Express with Management Studio Express; therefore, I do not have access to SSIS or DTS or anything like that.
However, I want to import several hundred records into a DB I created (hosted by Crystal Tech). Since I don't have access to the server's root directory, I can't use the BULK INSERT statement.
I am looking for a method to query an Excel file (or .csv or something) that is stored on my local drive and upload it to the server DB tables.
I would like to do this either through SQL with a query, or by adding VB code to the current VB that I use in my Excel file.
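One low-tech sketch that needs no server file access at all: have Excel build a batch of INSERT statements (a string-concatenation formula per row works), then paste the batch into a Management Studio Express query window. The table, columns, and values here are placeholders:

INSERT INTO dbo.BoxScores (GameDate, Team, Runs)
SELECT '2007-05-01', 'Tigers', 6
UNION ALL SELECT '2007-05-01', 'Twins', 4
UNION ALL SELECT '2007-05-02', 'Tigers', 3;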
How do I refresh table data on SQL Server without DTS or a backup/restore of the whole database? In SQL Server, is there a way to export table data to a file (like Oracle's import/export) and import it into another, unconnected server/DB?
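The bcp command-line utility does exactly this; a sketch, where the server, database, and path names are placeholders (-n keeps native format, -T uses Windows authentication, and the destination table must already exist):

REM on the source server
bcp MyDB.dbo.MyTable out C:\transfer\MyTable.dat -n -S SourceServer -T

REM on the destination server
bcp MyDB.dbo.MyTable in C:\transfer\MyTable.dat -n -S DestServer -T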
I'm trying to use DTS to import data from an XLS into a SQL table. It works fine in that it INSERTs the data. However, I need it to UPDATE the table, based upon a ProjectID. Can this be done? Can a DTS package be fired from an SP using parameters? E.g.:
UPDATE tProjects SET MyField1 = XLS.Sheet1.CellA1, MyField2 = XLS.Sheet2.CellA1 WHERE ProjectID = @ProjectID
Also, it must handle dynamic XLS file names, e.g. 981-Budget.xls, 513-Budget.xls, xyz-Budget.xls. Is this the best way to go? Other suggestions most welcome. Thanks everyone in advance!
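One alternative sketch that skips DTS entirely: an UPDATE joined to OPENROWSET, wrapped in dynamic SQL so the file name can vary. The sheet layout and column names are assumptions, and ad hoc OPENROWSET access must be allowed on the server:

DECLARE @file varchar(255)
DECLARE @sql nvarchar(4000)
SET @file = 'C:\budgets\981-Budget.xls'   -- swapped in per project

SET @sql = N'UPDATE p SET p.MyField1 = x.Budget
FROM tProjects p
INNER JOIN OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
    ''Excel 8.0;Database=' + @file + ';HDR=YES'',
    ''SELECT * FROM [Sheet1$]'') AS x
    ON p.ProjectID = x.ProjectID'

EXEC sp_executesql @sql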
Hi! I have to develop an application for transferring data from an Excel file into a SQL table. The Excel file is uploaded to a server. The database (and the table) is on another server. At first, I used OPENROWSET for transferring data to the table. My SQL command looked like this (in my ASP page):
SQLstr = "SELECT * INTO dbo.shopping_TSR FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database="+Server.MapPath("upload/tmb2.xls")+";hdr=yes', 'SELECT * FROM [Sheet1$]')"
I kept getting this error: [Microsoft][ODBC SQL Server Driver][SQL Server]OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0'IDBInitialize::Initialize returned 0x80004005: The provider did not give any information about the error.]
After reading a few articles, I think the cause of my error is that the Excel file is uploaded into the folder where the ASP script is located. I have 2 servers: one running the ASP scripts and one containing the database. Is my error generated by the fact that the Excel file is on a different server than the SQL Server? How could I make this work?
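That would explain it: OPENROWSET is evaluated on the SQL Server machine, so the Database= path has to be readable from there, not from the web server. One sketch is to share the upload folder and point at it by UNC path; the share name here is a placeholder, and the SQL Server service account needs read permission on it:

SQLstr = "SELECT * INTO dbo.shopping_TSR FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=\\webserver\upload\tmb2.xls;hdr=yes', 'SELECT * FROM [Sheet1$]')"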
I am new to SSIS, and was only a novice to intermediate skill level with SQL 2000 DTS, so please excuse me if this is an easy question. I am trying to import data from a table in one DB into a table in another. After insertion, I need to store the newly created ID (an identity seed) in a separate table that maps to the original DB's row id. My eventual goal is to import a bunch of related tables from the old DB into the new DB, and maintain relationships, so the mapping of newly created IDs is necessary to make sure data is imported with the correct relationships. Any advice would be greatly appreciated!
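One sketch that sidesteps SSIS for the mapping part: temporarily carry the old ID alongside each imported row, then derive the map from it. All table and column names here are placeholders:

-- 1. Temporary column on the new table to hold the source row's ID
ALTER TABLE NewDB.dbo.Customers ADD OldCustomerID int NULL;

-- 2. Import, keeping the old ID with each new row
INSERT INTO NewDB.dbo.Customers (Name, OldCustomerID)
SELECT Name, CustomerID
FROM OldDB.dbo.Customers;

-- 3. Build the mapping table from the identities just generated
INSERT INTO NewDB.dbo.CustomerIDMap (OldID, NewID)
SELECT OldCustomerID, CustomerID
FROM NewDB.dbo.Customers
WHERE OldCustomerID IS NOT NULL;

-- 4. Drop the scratch column once every related table is remapped
ALTER TABLE NewDB.dbo.Customers DROP COLUMN OldCustomerID;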
I need to fetch data into multiple tables from a remote server (SQL Server 2005), so inside a sequence container I have placed various data flow tasks. Each task in turn gets data from an OLE DB source to an OLE DB destination. There is minimal transformation involved, but the data is huge, and it's taking unexpectedly long. (I'm getting only fresh data, identified by datetime fields of the source tables; that's the only check.)
Is there any way to make it faster? The source DB is not in my control, so I can't get indexes on it. :(
I have a table in a SQL 2000 DB and want to import data from an Excel sheet into the table. My table = Table1; Excel file = data.xls. Is there a simple method where I can import data from the sheet into the existing table?
I have a remote DB I am working with at present. The DBA has provided me with a non-owner LOGIN, so I can't copy tables from the live DB to the staged DB as objects; I can only copy tables and data.
The PKEY and IDENTITY columns get reset to just regular columns on each table. I can restore the PKEY constraint and have come across DBCC CHECKIDENT to get the new identity value. I just can't figure out how to set a column to be an identity; the ALTER TABLE command isn't having any of it.
I am obviously missing the right bit in Books Online.
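You aren't missing it: T-SQL has no ALTER TABLE ... ALTER COLUMN ... IDENTITY, so the usual route is to rebuild the table with the identity in place and copy the data across. A sketch with placeholder names and columns:

CREATE TABLE dbo.MyTable_new (
    ID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name varchar(50) NOT NULL
);

-- keep the existing ID values while repopulating
SET IDENTITY_INSERT dbo.MyTable_new ON;
INSERT INTO dbo.MyTable_new (ID, Name)
SELECT ID, Name FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable_new OFF;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';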
Our SQL 2008 R2 relational database has tables with foreign key relationships for part numbers. We receive production data from a separate program and we need to import the CSV data into our database application.
The problem is our separate program creates a CSV file with the actual part number "362S162-33". In our database we have a separate parts table (example: 362S162-33 has identity "15").
We need to import data into a production table that has a "part number" (FK) column.
How can we, when importing, cross-reference the "parts table" to convert the part number to the identity number. We have thousands of parts, so we need this change of part number column to the FK identity automatically on import.
Production Table:
  idComponent (PK):   1000
  ComponentName:      Assembly108
  idPartNumber (FK):  15
  ComponentLength:    230.5
  UserMessage:        Assembly is 230.5 inches using 362S162-33
  Qty:                1
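A sketch of the lookup-on-import pattern: bulk load the CSV into a staging table that still carries the raw part number text, then translate it with a join to the parts table during the final insert. The staging names and CSV path are placeholders; the join column is whichever parts-table column holds '362S162-33':

BULK INSERT dbo.ProductionStaging
FROM 'C:\imports\production.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

INSERT INTO dbo.Production (ComponentName, idPartNumber, ComponentLength, UserMessage, Qty)
SELECT s.ComponentName,
       p.idPart,          -- e.g. '362S162-33' resolves to identity 15
       s.ComponentLength,
       s.UserMessage,
       s.Qty
FROM dbo.ProductionStaging AS s
INNER JOIN dbo.Parts AS p ON p.PartNumber = s.PartNumber;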
I am using the import wizard in SQL Server 2008 R2 to import data from an Excel spreadsheet into a table I have created.
The spreadsheet contains 3 columns that SQL recognises as DOUBLE, and they contain a 1 or 0. What data type do the corresponding fields in the SQL table need to be? I have tried BIT, INT and FLOAT but keep getting an error (I can't view the details of the error because I get chucked out every time it pops up). I know the problem is with the DOUBLE data, because when I 'ignore' those columns the import works fine.
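One workaround sketch: let the wizard land the three columns in a staging table as FLOAT (matching what the Excel driver reports), then convert explicitly on the way to the real table. All names here are placeholders:

INSERT INTO dbo.TargetTable (Flag1, Flag2, Flag3)
SELECT CAST(Flag1 AS bit), CAST(Flag2 AS bit), CAST(Flag3 AS bit)
FROM dbo.ExcelStaging;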
I am looking for solutions to import CSV data into my SQL database table, BUT we want to collect the data from specific columns in the CSV file (NOT the whole CSV file) into the SQL database table.
I need to make a script in SQL 2005 to import data from an Excel sheet into a SQL table. I am using the wizard to import now (import from Excel 2000). The first row of the Excel sheet has column names.
Excel file name: EXL.xls; sheet name: Sheet1. Destination SQL database name: NM; table name: Sht1. I use SQL Server Authentication to access the database. User name: ABC; password: DEF. Database name: DB.
I am using the following settings when importing now:
- Delete rows in destination table
- Enable identity insert
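A sketch of an equivalent script using OPENROWSET against the file; the file path and the column names (ID, Col1, Col2) are placeholders for whatever Sht1 actually contains, and 'Ad Hoc Distributed Queries' must be enabled on the server:

DELETE FROM dbo.Sht1;   -- mirrors the wizard's "delete rows in destination table"

SET IDENTITY_INSERT dbo.Sht1 ON;

INSERT INTO dbo.Sht1 (ID, Col1, Col2)
SELECT ID, Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
     'Excel 8.0;Database=C:\EXL.xls;HDR=YES',
     'SELECT ID, Col1, Col2 FROM [Sheet1$]');

SET IDENTITY_INSERT dbo.Sht1 OFF;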
I need to import data from more than 10 Excel files having the same format into a single SQL Server table.
I tried to use
INSERT INTO MyTempTable SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 11.0;Database=C:\Book1.xls', [Sheet1$])
but got the below error Ad hoc access to OLE DB provider 'Microsoft.Jet.OLEDB.4.0' has been denied. You must access this provider through a linked server.
If a DTS package is used, then I am not sure how I can point it at 10 Excel files at a time so that they are picked up one by one and the data is imported into the table.
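The error itself just means the server is blocking ad hoc OPENROWSET calls; on SQL 2005 that is switched on with sp_configure (on SQL 2000 it is the DisallowAdhocAccess registry setting for the provider). Once enabled, the 10 files can be walked with dynamic SQL; the Book1.xls through Book10.xls naming pattern below is an assumption:

EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

DECLARE @i int, @sql nvarchar(1000);
SET @i = 1;
WHILE @i <= 10
BEGIN
    SET @sql = N'INSERT INTO MyTempTable SELECT * FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', '
             + N'''Excel 11.0;Database=C:\Book' + CAST(@i AS nvarchar(2)) + N'.xls'', [Sheet1$])';
    EXEC sp_executesql @sql;
    SET @i = @i + 1;
END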
I've imported data from an Excel spreadsheet to a table that has fields to match the destination table I'm trying to populate. The destination table has an Insert trigger with several checks on certain fields to make sure they have corresponding records in other tables.
If I do a statement like "INSERT INTO destinationTable ( ItemId, Product, SuperID, etc etc ) SELECT * FROM oldtable" it runs for a while then gives me error messages from the trigger and rolls back the Insert.
The trigger has code such as "IF (SELECT COUNT(*) FROM inserted WHERE ((inserted.Product Is Not Null))) != (SELECT COUNT(*) FROM tblInProduct, inserted WHERE (tblInProduct.Product = inserted.Product)) BEGIN (Error message code goes here) END"
So, do I need to do an INNER JOIN to each of the related files? When I try that, I get this error: "Msg 121, Level 15, State 1, Line 2 The select list for the INSERT statement contains more items than the insert list. The number of SELECT values must match the number of INSERT columns." Is SQL counting the foreign key fields as separate fields, or what?
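On the second error: SELECT * returns every column of oldtable (plus, once you add a join, the joined table's columns too), while the INSERT names only some columns, so the counts disagree. Listing columns explicitly on both sides fixes it; a sketch with placeholder columns, joined so only rows that satisfy the trigger's lookup get inserted:

INSERT INTO destinationTable (ItemId, Product, SuperID)
SELECT o.ItemId, o.Product, o.SuperID
FROM oldtable AS o
INNER JOIN tblInProduct AS p ON p.Product = o.Product;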
Hi guys, I need to import all data from an Excel spreadsheet to a SharePoint content database (SQL Server). Please suggest the best way to do this. When I run the Import wizard under Tasks --> Import in Management Studio 2005, it asks me to choose the database name etc., but how do I use the Import/Export Wizard to export data from a .xls source to an existing table in a database? That is, I need to append/insert my Excel data into an existing table.
I want to delete 30-40 million rows from a transactional table. What's the fastest way to delete these rows? Just deleting 300,000 rows takes 30 minutes. Also, I don't want to truncate the table.
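The standard trick is deleting in batches so each transaction stays short and the log can clear between rounds; a sketch for SQL 2005+, where the table name, batch size, and WHERE clause are placeholders for the real criteria:

WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.BigTable
    WHERE CreatedDate < '20070101';

    IF @@ROWCOUNT = 0 BREAK;   -- stop once nothing is left to delete
END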
I have another table with the following structure (basically this table will contain a subset of the columns of Table1):
Table2
-------
Dept  Field1  Field2
Now, using a query, I would like to see all the records with all columns from Table1, plus all the records in Table2 appended.
i.e. if a Table1 row is
IT F1 F2 F3 F4 F5
and if Table2 rows are
IT     F11  F22
Sales  F12  F23
I would like to see a result set with the following structure
Resultset
IT     F1   F2   F3    F4    F5
IT     F11  F22  NULL  NULL  NULL
Sales  F12  F23  NULL  NULL  NULL
Can somebody explain to me how to do this with a query? I tried using UNION but it requires identical columns on both ends. (Of course, we can achieve this by having Field3, Field4 and Field5 as blank columns in Table2, but I don't want to do that as my original tables are too huge to handle this.)
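You don't need real extra columns in Table2; NULL placeholders in the SELECT list are enough to make the two sides union-compatible. A sketch, assuming Table1's columns are Dept and F1 through F5:

SELECT Dept, F1, F2, F3, F4, F5
FROM Table1
UNION ALL
SELECT Dept, Field1, Field2, NULL, NULL, NULL   -- pad Table2 out to six columns
FROM Table2;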