How can I import multiple Access files into SQL Server 2000 by using a DTS package?
The DTS package should import the files with the most current date in a directory (not today's date). The date can be found in the file name, e.g. <companyname>_20041214.mdb.
Also, I would like to use the company name from the file name to fill an empty column called companyname, so I can import all files into one normalized table.
The files are placed on a root drive.
If someone can help me with this I would be very happy.
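A minimal sketch of the T-SQL that a DTS Execute SQL task could run (SQL 2000 era), assuming xp_cmdshell and ad hoc OPENDATASOURCE queries are allowed, the files sit in C:\ named <companyname>_YYYYMMDD.mdb, and each .mdb exposes a table, called Orders here purely as a placeholder:

-- find the most recent batch of .mdb files by the date embedded in the name,
-- then import each one, tagging the rows with the company name from the file name
declare @maxdate char(8), @file varchar(260), @company varchar(100), @sql varchar(2000)

create table #files (name varchar(260))
insert #files exec master.dbo.xp_cmdshell 'dir C:\*.mdb /b'
delete from #files
where name is null
   or name not like '%[_][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].mdb'

-- the 8 digits before ".mdb" are the file date
select @maxdate = max(substring(name, len(name) - 11, 8)) from #files

declare file_cur cursor for
    select name from #files where substring(name, len(name) - 11, 8) = @maxdate
open file_cur
fetch next from file_cur into @file
while @@fetch_status = 0
begin
    -- everything before the underscore is the company name
    set @company = left(@file, charindex('_', @file) - 1)
    set @sql = 'insert into dbo.ImportTable (companyname, col1, col2) ' +
               'select ''' + @company + ''', col1, col2 ' +
               'from OPENDATASOURCE(''Microsoft.Jet.OLEDB.4.0'', ' +
               '''Data Source=C:\' + @file + ''')...Orders'
    exec (@sql)
    fetch next from file_cur into @file
end
close file_cur
deallocate file_cur
drop table #files

The target table, its columns, and the Orders table name inside the .mdb are all assumptions to be replaced with the real names.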
I have a requirement where I have around 15 different flat files. The file names are fixed but the folder path can change (I think I should use a variable for the folder path). The data from these 15 files should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or a separate package? In addition to this, an example: while importing product data into the product table, if a product ID already exists, we need to ignore it and load only the new records.
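On the "only new records" part: whichever way the files are split across data flow tasks or child packages, the dedupe itself is usually either a Lookup transformation that sends only unmatched rows to the destination, or, if each file is landed in a staging table first, a plain insert that skips existing keys. A rough sketch, with the staging table and column names as placeholders:

insert into dbo.Product (ProductID, ProductName, Price)
select s.ProductID, s.ProductName, s.Price
from dbo.Product_Staging as s
where not exists (select 1
                  from dbo.Product as p
                  where p.ProductID = s.ProductID)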
Is there a way to import multiple CSV files from a directory into SQL 2005? The situation I have right now is that I have a folder with multiple CSV files that I need to import into SQL 2005. I can do it with the Import Wizard but it takes too long. The files will be updated monthly. The first row in the files contains all the header information, which may change monthly. What I am looking to do is import all of these CSVs into tables, one CSV file into one table. Ideally I would like to use the name of the CSV file as the name of the table. Any bump in the right direction would be appreciated.
I have a situation in which we need to connect one of our transaction source systems to our data warehouse using the regular ETL steps, with the help of SSIS. After four months of lobbying and discussions we chose a platform-free solution in the form of CSV files, since that was the only option we were eventually given. This morning we received the first batch of CSV files, which almost gave me a heart attack, since I didn't quite expect there to be 275 separate (and different) CSV files... but there were.
So at this point I have two options: either I find a miraculous solution to (dynamically and automatically) insert all these CSV files into our staging database, or I have to build 275 separate data flow tasks (or 275 SSIS packages) to insert each CSV file manually... A small extra challenge is that the process should be reproducible, since we are going to receive these files on a periodic basis (probably each week).
Is there anyone here who has experience with such a situation, or who can point me in the right direction towards solving this the easiest way?
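One way to avoid 275 hand-built data flows is to treat the problem as "list the files, bulk load each one", either with an SSIS Foreach File enumerator feeding an Execute SQL Task, or entirely in T-SQL. A rough sketch of the T-SQL version, assuming xp_cmdshell is enabled, the staging tables already exist and are named after the files (stg.<file name without .csv>), the files are plain comma-delimited with a header row, and D:\Incoming is a placeholder folder:

declare @name nvarchar(260), @sql nvarchar(max)
declare @files table (name nvarchar(260))

insert @files exec master.dbo.xp_cmdshell 'dir "D:\Incoming\*.csv" /b'
delete from @files where name is null or name not like '%.csv'

declare csv_cur cursor for select name from @files
open csv_cur
fetch next from csv_cur into @name
while @@fetch_status = 0
begin
    -- load file X.csv into staging table stg.X
    set @sql = N'bulk insert stg.' + quotename(replace(@name, '.csv', '')) +
               N' from ''D:\Incoming\' + @name + N'''' +
               N' with (fieldterminator = '','', rowterminator = ''\n'', firstrow = 2)'
    exec sp_executesql @sql
    fetch next from csv_cur into @name
end
close csv_cur
deallocate csv_cur

Because the statement is built dynamically, the same few lines cover all 275 files, and the weekly reruns simply pick up whatever is sitting in the folder.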
Hi, I have about 300-400 XML files I want to load into my SQL database (2005). The following code will load one (1) file. How do I do this for multiple files? INSERT INTO MEL (DATA) SELECT * FROM OPENROWSET (BULK 'C:\Temp\CHAPTER1.xml', SINGLE_BLOB) AS TEMP Thanks,
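OPENROWSET(BULK ...) will not take a variable for the file name, so the per-file statement has to be built dynamically; the surrounding "list the folder and loop" part is the same shape as the CSV sketch above. A sketch, assuming C:\Temp holds the files:

declare @f nvarchar(260), @sql nvarchar(max)
set @f = N'CHAPTER1.xml'     -- in the loop, this comes from the dir listing
set @sql = N'INSERT INTO MEL (DATA) SELECT BulkColumn ' +
           N'FROM OPENROWSET(BULK ''C:\Temp\' + @f + N''', SINGLE_BLOB) AS TEMP'
exec sp_executesql @sql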
I have 8 GB of text files which are basically log files from the past few years. There are 24 text files per directory, and the directories are labelled for each day (so they are not all in one folder). It would make reading them much easier if I could import them into SQL, but I only seem to be able to import one at a time (with the wizards).
Surely there is a way to mass import without all the costly applications that Google searches give me? Cheers.
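Only the file discovery really differs from a single-folder load: dir /s /b walks the per-day sub-folders and returns bare full paths. A sketch, assuming xp_cmdshell is enabled, the logs live under D:\Logs\, and a dbo.LogStaging table matches the log layout:

declare @files table (fullpath nvarchar(400))
insert @files exec master.dbo.xp_cmdshell 'dir "D:\Logs\*.txt" /s /b'
delete from @files where fullpath is null or fullpath not like '%.txt'

-- then loop over @files exactly as in the CSV sketch above, running a dynamic
-- BULK INSERT dbo.LogStaging FROM '<fullpath>' for each row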
I have a script which imports the contents of a CSV file from our CRM system and updates a table in my database. This works OK, but the problems I have are that a) sometimes there is more than one file in the folder, and b) I wish to move any CSV files that have been imported into an archive folder. The CSV files arrive with a time/date stamp and I currently rename them manually to FREXPORT before importing (the name is in the format FREXPORT_20141101_1217.csv). How do I:
1) get it to process the file without me having to manually rename the file(s) each time, 2) if there is more than one file in the folder, process all the files, and 3) move the correctly processed files to an archive folder, which is: importarchive?
Ultimately, I would like the script to be run as a scheduled job, so it also has to deal with the fact that sometimes there will be no files to import.
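A sketch of the whole job, assuming xp_cmdshell is enabled, the drop folder is D:\CRM\, the archive folder is D:\CRM\importarchive\, and dbo.CRM_Import (and the delimiter options) stand in for whatever the existing script really loads:

declare @f nvarchar(260), @sql nvarchar(max), @cmd nvarchar(1000)
declare @files table (name nvarchar(260))

-- the wildcard removes the need to rename files to a fixed name first
insert @files exec master.dbo.xp_cmdshell 'dir "D:\CRM\FREXPORT_*.csv" /b'
delete from @files where name is null or name not like 'FREXPORT%.csv'

-- if nothing matched, the cursor never enters the loop,
-- so a scheduled run with no files simply does nothing
declare crm_cur cursor for select name from @files
open crm_cur
fetch next from crm_cur into @f
while @@fetch_status = 0
begin
    set @sql = N'bulk insert dbo.CRM_Import from ''D:\CRM\' + @f + N'''' +
               N' with (fieldterminator = '','', rowterminator = ''\n'')'
    exec sp_executesql @sql

    -- archive the file only after it loaded without raising an error
    set @cmd = N'move "D:\CRM\' + @f + N'" "D:\CRM\importarchive\"'
    exec master.dbo.xp_cmdshell @cmd
    fetch next from crm_cur into @f
end
close crm_cur
deallocate crm_cur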
I'm working on a Web course where each student will be using ASP.NET to connect to a SQL Server database. What I'd like to do is to export my development database from my local MSDE (tables, data, etc.) and then import one copy for each student (who will have a SQL user that can only access their DB) to the course's SQL Server.
I've spent a couple of nights now looking for articles and help, and have come up with a few articles citing osql and batch files, but haven't found anything that really matches my needs.
Does anyone have any ideas or links to articles that might help me out?
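One hedged way to stamp out a copy per student from a single backup (SQL 2005 syntax, run against the course server from osql/sqlcmd or Management Studio). Paths, logical file names, the password, and the dbo.Students list of login names are all placeholders:

backup database CourseDev to disk = 'C:\Backups\CourseDev.bak'

declare @student sysname, @sql nvarchar(max)
declare student_cur cursor for select name from dbo.Students
open student_cur
fetch next from student_cur into @student
while @@fetch_status = 0
begin
    -- restore a per-student copy, then give that student (and only that student) access to it
    set @sql = N'restore database ' + quotename('Course_' + @student) +
               N' from disk = ''C:\Backups\CourseDev.bak''' +
               N' with move ''CourseDev'' to ''D:\Data\Course_' + @student + N'.mdf'',' +
               N' move ''CourseDev_log'' to ''D:\Data\Course_' + @student + N'.ldf'';' +
               N'create login ' + quotename(@student) + N' with password = ''ChangeMe!1'';' +
               N'use ' + quotename('Course_' + @student) + N';' +
               N'create user ' + quotename(@student) + N' for login ' + quotename(@student) + N';' +
               N'exec sp_addrolemember ''db_owner'', ''' + @student + N''';'
    exec sp_executesql @sql
    fetch next from student_cur into @student
end
close student_cur
deallocate student_cur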
Hi all, I am a newbie to .NET and would appreciate all your valuable suggestions. I have an issue where I am trying to import data from a few selected columns in an MS Access table and a couple of columns in SQL Server table Y, and to populate another table X. Both tables X and Y are in the same database. I am wondering if I could design a custom package for this task.
What is the best way to insert CSV records, where one record maps onto multiple tables with a parent/child or PK/FK relationship?
eg. CSV record (ID, param1, param2, param3)
ParentTable(ID as PK)
Param1Table(ID as FK in ParentTable, param1)
Param2Table(ID as FK in ParentTable, param2)
Param3Table(ID as FK in ParentTable, param3)
Currently I am using multiple Transform Data tasks with ActiveX script, but it seems to work slowly.
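A set-based sketch of an alternative: bulk load the CSV once into a staging table, then fan it out to the parent and child tables with plain INSERT ... SELECT. Table and column names follow the example above; the staging table, file path, and data types are assumptions:

create table #csv_stage (ID int, param1 varchar(100), param2 varchar(100), param3 varchar(100))

bulk insert #csv_stage from 'D:\Feeds\input.csv'
with (fieldterminator = ',', rowterminator = '\n')

-- parent first so the FK references are satisfied
insert into ParentTable (ID)          select ID         from #csv_stage
insert into Param1Table (ID, param1)  select ID, param1 from #csv_stage
insert into Param2Table (ID, param2)  select ID, param2 from #csv_stage
insert into Param3Table (ID, param3)  select ID, param3 from #csv_stage

drop table #csv_stage

One bulk load plus four set-based inserts is usually far faster than row-by-row ActiveX transforms.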
I have zero experience running any database that spreads further than one machine, so I have a few theory questions here that hopefully someone can help with. Hopefully this is the right forum; I'm not sure if it classifies as 'clustering'.
Anyway, we are launching a web app that is going to start with just one web server/DB server. For speed reasons, after some growth we might have to move to a load-balanced setup with a web server in Europe and one in North America. Basically the web servers are going to be serving hundreds of thousands of files, and each time a file is served it needs to be recorded in the database.
I think that if I'm connecting my European web server across the internet to my DB server, that's defeating the purpose of having a web server in Europe to make for faster responses.
I am thinking that this European web server/DB server is only going to be logging the files served. Is there a way to import those logs into the North American database every night?
I'm not sure what the best approach would be for something like this, but any suggestions are greatly appreciated.
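A hedged sketch of the nightly pull, assuming the European server is registered on the North American side as a linked server called EU_WEB and both sides have a dbo.FilesServed log table with a ServedAt datetime column (all names are placeholders):

insert into dbo.FilesServed (FileName, ServedAt, ClientIP)
select FileName, ServedAt, ClientIP
from EU_WEB.WebLogs.dbo.FilesServed
where ServedAt >= dateadd(day, -1, getutcdate())

Run as an overnight SQL Agent job, this keeps daytime page-serving on the local database; if a WAN linked-server query turns out too slow, the usual fallback is to BCP the day's rows out to a file on the European side and BULK INSERT it in North America.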
Hi all, new to SQL Server - trying to create an SSIS package that will look for and import a series of Visual FoxPro tables (.DBFs) when they appear in a folder. The tables are/can be all different fields, field widths, etc., with quite a bit of overlap though. The end result should be that table "ABC.DBF" is pulled into SQL Server as table "ABC". Using: SQL Server 2005 Enterprise, SSIS, *latest* version of VFPOLEDB downloaded from MS. I have set up a package and tested it with several different tables and it works great - but I have to redo the data source and destination each time... I need to get this to be a somewhat automated process, pulling in all .DBFs no matter what they contain. Can I do this with SSIS alone (and variable substitution), or do I need to write a bunch of code? Thanks very much for your time and thoughts...
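A hedged sketch of the per-file step, assuming ad hoc distributed queries are enabled and the VFPOLEDB provider is registered on the server; the folder is the OLE DB data source and the table name is the .DBF base name (here 'ABC', which an SSIS Foreach File enumerator variable would supply):

declare @table sysname, @sql nvarchar(max)
set @table = 'ABC'

-- SELECT ... INTO creates the SQL Server table with whatever columns the DBF
-- happens to have, which sidesteps the varying-schema problem
set @sql = N'select * into dbo.' + quotename(@table) + N' from openrowset(' +
           N'''VFPOLEDB'', ''Data Source=C:\FoxProFiles\'', ' +
           N'''select * from ' + @table + N''')'
exec sp_executesql @sql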
Hi all, I have the following application to build: I receive several .csv files from another application in a given folder on my PC. Those files are named with the format log1.csv, logs2.csv, logs... The number of files is variable, but the internal format is always: time_sec;level. So the files contain a field that may be used as a unique key in the target database. I'm trying to build a DTS package that should periodically import all the CSVs present in the folder and then delete them if done successfully. Apparently it's not as simple as I supposed; I always have to give the name of the table I want to import into. Any idea?
I am building an SSIS package that imports multiple XML files containing data into the tables, using one XSD file. I am using the XML Source task for this. I can only import one file, as the primary key constraint gets violated.
I have four tables with four primary keys. The XML file does not have the primary key column data, so every time these columns get populated as 1, 2, 3, 4.
I am pretty new to XML, so I was wondering if anyone can help? Why doesn't the XML file have primary key column data?
I need to import data from around 200 Excel files into one table. Is there a way of doing this using SSIS or DTS? I know how to import a single Excel file into a table, but I need to automate this process for many files. All help appreciated.
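The per-file statement, as a sketch: it assumes ad hoc distributed queries are allowed, the ACE provider (or Jet, for .xls on older servers) is installed, every workbook has its data on Sheet1 with a header row, and dbo.ExcelImport is the common target table; the path and names are placeholders:

insert into dbo.ExcelImport
select *
from openrowset('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;HDR=YES;Database=D:\Excel\Book001.xls',
                'select * from [Sheet1$]')

To automate it for 200 files, either wrap that statement in the same dir-listing loop shown in the CSV sketches above (building the statement dynamically with each file name), or put it in an Execute SQL Task inside an SSIS Foreach File enumerator with the file name mapped to a package variable.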
I have a text file that I'd like to import into a SQL 2005 table. The file is tab-delimited, which is easy enough to import, but I'd like the final field broken into multiple fields as well. The final field is space-delimited. I've had no luck getting this done. Has anyone done this?
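A sketch: land each row as-is in a staging table, then split the final field on the way into the real table. The three-column layout, the target column names, and the file path are placeholders; the PARSENAME trick shown here only works if the final field never contains a period and has at most four space-separated parts (otherwise a CHARINDEX/SUBSTRING split or a split function is needed):

create table #stage (col1 varchar(100), col2 varchar(100), lastfield varchar(400))

bulk insert #stage from 'D:\Import\data.txt'
with (fieldterminator = '\t', rowterminator = '\n')

insert into dbo.Target (col1, col2, part1, part2, part3)
select col1, col2,
       parsename(replace(lastfield, ' ', '.'), 3),
       parsename(replace(lastfield, ' ', '.'), 2),
       parsename(replace(lastfield, ' ', '.'), 1)
from #stage

drop table #stage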
I am importing all the files from a particular folder into a table in my database KB. It works perfectly if I run it on the same system where the database exists, but not from the network.
USE TESTDB
--Table Creation Starts here
Create table Account([ID] int IDENTITY PRIMARY KEY, Name Varchar(100), AccountNo varchar(100), Balance money)
Create table logtable (id int identity(1,1), Query varchar(1000), Importeddate datetime default getdate())
--Table Creation ends here
---Stored Procedure Starts here
Create procedure usp_ImportMultipleFiles
    @filepath  varchar(500),
    @pattern   varchar(100),
    @TableName varchar(128)
as
set quoted_identifier off
declare @query    varchar(1000)
declare @max1     int
declare @count1   int
declare @filename varchar(100)
set @count1 = 0

-- list the matching files into a temp table
create table #x (name varchar(200))
set @query = 'master.dbo.xp_cmdshell "dir ' + @filepath + @pattern + ' /b"'
insert #x exec (@query)
delete from #x where name is NULL

-- number the file names so they can be walked in a loop
select identity(int,1,1) as ID, name into #y from #x
drop table #x
set @max1 = (select max(ID) from #y)

-- BULK INSERT each file and log the statement that was run
while @count1 < @max1
begin
    set @count1 = @count1 + 1
    set @filename = (select name from #y where [id] = @count1)
    set @query = 'BULK INSERT ' + @TableName + ' FROM "' + @filepath + @filename + '"' +
                 ' WITH (FIELDTERMINATOR = ",", ROWTERMINATOR = "\n")'
    exec (@query)
    insert into logtable (query) select @query
end

drop table #y
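A hedged usage example (folder, pattern, and target table are placeholders):

-- local folder on the server
exec usp_ImportMultipleFiles 'C:\MyImport\', '*.csv', 'Account'

-- for files on another machine, pass a UNC path (share name is hypothetical)
exec usp_ImportMultipleFiles '\\fileserver\import\', '*.csv', 'Account'

As for the "not working from the network" part: both xp_cmdshell and BULK INSERT resolve the path on the server itself, usually under the SQL Server service account, so @filepath must be a location that account can reach (typically a UNC share it has rights to), not a drive letter mapped on the client machine.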
I am importing XML multiple times a day from a vendor. However, when SSIS creates the IDs for the nested XML data, they are not unique. So importing the first time I get 3-4 records and it looks fine; however, subsequent imports all use the same IDs, so they aren't unique. How do I go about changing this, as I can't find anything about it?
First let me specify my requirement: a) I have an Excel file with more than one sheet, b) I want to import data from that Excel file into SQL Server 2000 using ASP.NET and C#. Now I need a program that automatically detects all the sheets of the Excel file and inserts each one into a separate table. Please help me.
I have over 600 Excel .xlsx files that I have been trying to import into a SQL database table. I've been trying to complete this task with SSIS but no luck yet. I have seen several videos and read articles, but when I run the package the source is validated and I always get an error in the destination. I am using Excel 2010 and SQL Server 2012.
How do I import multiple text files (residing in a single folder) into a SQL Server table? I know how to import a single file, but I'm not sure how multiple files could be loaded. Please guide.
We are trying to use the SSIS Import/Export Wizard to import the flat files (CSV format) that we have into MS SQL Server 2005 database tables. We have a huge number of CSV files. Is there a way by which we can import these flat (CSV) files into the corresponding SQL Server tables in a single shot? I would really appreciate the help, as it is painful to convert each and every file using the Import/Export Wizard.
I have a text file which contains the data that has to be inserted into multiple tables. The column names of table 1 form the H1 record, followed by details D1,D1,D1...; the column names of table 2 form the H2 record, followed by details D2,D2,D2, and so on, and similarly for table 3. I am using a linked server to the file directory and a schema.ini which defines the column names for the text file.
Is there any way of defining column names for more than one table through the schema.ini? Or is there any other way through which I can parse the text file contents into multiple tables?
Sample text file:
H1,JobDate,JobNumber,FileName,
D1,13/02/2008,asdf123,text1.txt
D1,13/02/2008,asdf123,text2.txt
D1,13/02/2008,asdf123,text3.txt
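One hedged alternative to stretching schema.ini across several layouts (SQL 2005 or later, for CROSS APPLY): bulk load every line into a single-column staging table, then route rows to their tables by the record-type prefix. Names come from the sample above; only the D1 split is shown, and the file path is a placeholder:

create table #rawlines (line varchar(4000))

bulk insert #rawlines from 'D:\Feeds\jobs.txt'
with (rowterminator = '\n')     -- no FIELDTERMINATOR: keep each whole line intact

-- D1 rows -> table 1 (JobDate, JobNumber, FileName); H1/H2/D2... are handled likewise
insert into dbo.Table1 (JobDate, JobNumber, [FileName])
select substring(line, c1.p + 1, c2.p - c1.p - 1),
       substring(line, c2.p + 1, c3.p - c2.p - 1),
       substring(line, c3.p + 1, len(line))
from #rawlines
cross apply (select charindex(',', line) as p)            as c1
cross apply (select charindex(',', line, c1.p + 1) as p)  as c2
cross apply (select charindex(',', line, c2.p + 1) as p)  as c3
where line like 'D1,%'

drop table #rawlines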
I need to consume a live data feed from a golf tournament. And by consume, I really mean insert (merge) it into our own SQL Server database at regular intervals as a tournament progresses. This site didn't let me upload an XML file, but you can see a sample of the data feed here: URL....
I need to insert this data into two tables, Player_Holes and Player_Shots. But while doing the insert, I need to look up several things, such as matching our player ID to theirs via an external_id against the players table, the shot type translations, and some other logic about the process overall.
The columns in my Player_Holes table are: id, player_id, hole_id, round, shots (this is a total # of strokes) and date_created/date_modified. The Shots table is similar: id, player_id, hole_id, round, shot_number, shot_type_id, club, distance, date_created/date_modified.
The only way I know how to do it is inefficient. I would parse the XML in ColdFusion (please, no comments on ColdFusion, that's what we use for web dev), and then loop over it and do inserts for each player, each hole for each round, and the shots would probably be separate for each hole.
It would be so much better and more efficient if I could do it in SQL directly. I've done some research and SQL Server Data Tools looks promising. I've never used it, so I would have to learn, but also I'm not sure if that would work in this application when we want to run it as a scheduled task every few minutes.
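A hedged sketch of the shred-and-insert in T-SQL (SQL 2005+), assuming the feed is first saved to a file; the element/attribute names (leaderboard, player/@id, hole/@hole, @round, @strokes) are stand-ins for the real feed layout, and only Player_Holes is shown, with Player_Shots following the same shape:

declare @feed xml

select @feed = BulkColumn
from openrowset(bulk 'D:\Feeds\scores.xml', single_blob) as x

insert into dbo.Player_Holes (player_id, hole_id, round, shots, date_created)
select p.id,                                 -- our id, found via their external_id
       h.value('@hole',    'int'),
       h.value('@round',   'int'),
       h.value('@strokes', 'int'),
       getdate()
from @feed.nodes('/leaderboard/player') as px(pl)
cross apply pl.nodes('hole') as hx(h)
join dbo.players as p
  on p.external_id = pl.value('@id', 'varchar(20)')
where not exists (select 1
                  from dbo.Player_Holes ph
                  where ph.player_id = p.id
                    and ph.hole_id   = h.value('@hole',  'int')
                    and ph.round     = h.value('@round', 'int'))

Wrapped in a stored procedure and scheduled as a SQL Agent job, this runs every few minutes without any middle tier; on SQL 2008+ the insert/update pair can be collapsed into a single MERGE.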
We just upgraded from SQL Server 2000 to 2005. In the past, when I ran the Import/Export Wizard to copy multiple tables from one database to another with SQL Server 2000, I had no problem. Now when I use the Import/Export Wizard to copy multiple tables with SQL Server 2005, I keep getting an error. For example, when copying three tables, the first table might be copied fine, then I get an error with the second table and the whole thing stops. Sometimes I can copy two tables. However, when I run the Import/Export Wizard to copy each table one at a time, it works.
The error that I got was "Cannot insert duplicate key in object..." I selected the options to "Delete rows in existing destination tables", and "Enable identity insert". What am I doing wrong?
I'd recently posted a question about using SQL CE as a database server for a multi-user desktop app. I did some development and tested it, and it seemed to work fine. What I did was:
1. created a remotable object that used SqlCe classes to perform read and write operations against an encrypted CE database.
public class RemData : MarshalByRefObject
{
    public DataSet GetData()
    {
        // Read data
    }

    public int AddData(DataSet data)
    {
        // Write data
    }
}
2. hosted this object in a Remoting Server
TcpServerChannel channel = new TcpServerChannel(props, bp);
// Register the channel with the runtime remoting services
ChannelServices.RegisterChannel(channel, false);
So, basically the CE DB is running in-proc with this Remoting Server. This is hosted on a regular P2 1GB box.
3. created a client WinForms app to connect to this object through remoting with the URL tcp://myserverip/RM_RemData, and distributed this client EXE to various machines within the intranet to execute the GetData and AddData methods.
This seems to work perfectly fine and super fast, and I was also concurrently executing the above methods in loops of 100.
So what I don't understand is why most of the posts I read about the multi-user scenario, here and on the web, always discourage people from using CE for anything other than a single-user desktop app. As long as I use SQL CE ONLY as a data store and keep all logic in my data layer, such as the remotable objects, will this be a feasible option for around 10-20 users, since CE allows 256 connections anyway?
My other questions are with regard to programmatically importing/exporting to and from CSV and Excel. Is this supported, or is anything planned?
Would appreciate a detailed response... my product hangs in the balance as I need some closure on this.
I have an Excel file which contains lots of sheets. Some of them are named DW-<day>-<month> (e.g. DW-1-July). Like this I have sheets for the whole month. I have other sheets too, with different names. I would like to import data from these sheets only (the DW ones). From my research I have found that this can be achieved via a For Each Loop Container (I guess!).
Post data import, I have a set of T-SQL queries that I plan to execute via an Execute SQL Task.
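For the sheet discovery itself, one hedged T-SQL route (the SSIS equivalent is a Foreach ADO.NET Schema Rowset enumerator over the Tables schema of an Excel connection, filtered on names starting with "DW-"): expose the workbook as a linked server, list its worksheets, and read the DW ones. The path, linked server name, and target table are placeholders, and the ACE provider must be installed:

exec sp_addlinkedserver
     @server     = 'XL_DW',
     @srvproduct = 'Excel',
     @provider   = 'Microsoft.ACE.OLEDB.12.0',
     @datasrc    = 'D:\Reports\daily.xlsx',
     @provstr    = 'Excel 12.0;HDR=YES'

-- each worksheet appears as a "table" named like DW-1-July$
exec sp_tables_ex 'XL_DW'

-- a single DW sheet can then be pulled with a four-part name:
select * into dbo.DW_Staging
from XL_DW...[DW-1-July$]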
I just used the SSIS Import and Export Wizard to copy 50+ tables from SS05 to SS2K.
I found that the wizard created a package that I could not figure out how to edit, e.g., to change whether or not it had to CREATE a table, or just use an existing one. (I created some problems by manually editing the receiving table names to be ones that already existed -- but the original names it had did not exist, so it knew it had to create them. What I should have done, and eventually ended up doing, was scroll through my list of tables in the "receiving" box; I just figured editing the name would be faster, not realizing what problems I would create for myself.)
Anyhow, now that I see the complex package that the wizard creates, with a LOOP over the 50+ tables, I would like to know how/where in the package it is storing the information about the tables to copy.
Basically the wizard creates the following Control Flow tab entries (in processing sequence order):
an Execute SQL Task: NonTransactableSql
an Execute SQL Task: START TRANSACTION
a Sequence Container: Transaction Scoping Sequence, which contains
  an Execute SQL Task: AllowedToFailPrologueSql
  an Execute SQL Task: PrologueSql
  a Foreach Loop Container, which contains
    a Transfer Task with an icon I did not notice in the Toolbox
    an Execute Package Task: Execute Inner Package
  an Execute SQL Task: EpilogueSql
an "on success" arrow to an Execute SQL Task: COMMIT TRANSACTION
an Execute SQL Task: PostTransaction Sql
an "on failure" arrow to an Execute SQL Task: ROLLBACK TRANSACTION
an Execute SQL Task: CompensatingSql
Where, and how, can I look within this package to see the details about the tables I am transferring? I see that one of the Connection Managers is "TableSchema.XML" -- but it points to a temporary file on my hard drive, that I presume is populated by the package. Where does it get its information?
This is certainly much more complex than the package I would have written, based on my limited knowledge of SSIS. I would have been inclined to create 50+ Data Flow tasks, one for each table.
So now I'm trying to understand why the Wizard created this more-complex package.
Any help will be appreciated, including references to non-Microsoft books/websites/etc.
A view named "Viw_Labour_Cost_By_Service_Order_No" has been created and can be run successfully on the server. I want to import the data which the view returns into a table using the SQL Server Import and Export Wizard. However, when I run the wizard on the server, it gives me the following error message and stops at the step Setting Source Connection:
Operation stopped...
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Error)
Messages
Error 0xc020801c: Source - Viw_Labour_Cost_By_Service_Order_No [1]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0014019. There may be error messages posted before this with more information on why the AcquireConnection method call failed. (SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
- Setting Destination Connection (Stopped)
- Validating (Stopped)
- Prepare for Execute (Stopped)
- Pre-execute (Stopped)
- Executing (Stopped)
- Copying to [NAV_CSG].[dbo].[Report_Labour_Cost_By_Service_Order_No] (Stopped)
- Post-execute (Stopped)
Has anyone encountered this problem before, and does anyone know what is happening?
I am trying to import an .xlsx spreadsheet into a SQL 2008 R2 database using the SSMS Import Wizard. When pointed to the spreadsheet ("Choose a Data Source"), the Import Wizard returns this error:
"The operation could not be completed" The Microsoft ACE.OLEDB.12.0 provider is not registered on the local machine (System.Data)
How can I address that issue? (e.g. Where is this provider and how do I install it?)
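The usual fix is to install the Microsoft Access Database Engine 2010 Redistributable (the download that contains the Microsoft.ACE.OLEDB.12.0 provider), matching the bitness of the tool that runs the wizard: a 32-bit SSMS/wizard needs the 32-bit engine even on a 64-bit server. If the provider will also be used from T-SQL (OPENROWSET or linked servers), the settings below are commonly applied afterwards; this is a sketch, as the wizard itself only needs the provider installed for the right bitness:

exec master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1
exec master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1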