OK - don't throw rotten food at me for asking this question...
Is there any advantage (faster I/O) to creating multiple data files on the same RAID 5 array for the same database?
In other words, database ABC is 100 GB in size and has one primary file on H:. If I create another secondary file (or 3, or 6) for the database, would there be an appreciable performance gain? Same with the log file?
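For reference, adding extra files is just ALTER DATABASE; a minimal sketch, assuming the database is named ABC, with made-up logical names, paths, and sizes:

-- Hypothetical example: add a secondary data file and a second log file to ABC.
-- Logical names, paths, and sizes are illustrative only.
ALTER DATABASE ABC
ADD FILE
(
    NAME = ABC_Data2,
    FILENAME = 'H:\Data\ABC_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);

ALTER DATABASE ABC
ADD LOG FILE
(
    NAME = ABC_Log2,
    FILENAME = 'H:\Logs\ABC_Log2.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
);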
I am configuring a new database server, without SAN access, and want to know the best practice for SCSI RAID configuration. Do most folks prefer RAID 5 or RAID 10 configurations where their databases will reside?
RAID 5 beats RAID 10

Can I get some feedback on these results? We were having some serious I/O issues according to PerfMon, so I really pushed for RAID 10. The results are not what I expected.

I have 2 identical servers.

Hardware:
PowerEdge 2850
2 dual-core Xeon 2800 MHz
4 GB RAM
Controller cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
PowerVault 220S
Each array consisted of 6 300-GB drives.

Server 1 = RAID 10: 3 6-disk arrays (~838 GB each)
Server 2 = RAID 5: 3 6-disk arrays (~1360 GB each)

Test                               Winner     % Faster
SQL Server - Update                RAID 5     13
Heavy ETL                          RAID 5     16
SQLIO - Rand Write                 RAID 10    40
SQLIO - Rand Read                  RAID 10    30
SQLIO - Seq Write                  RAID 5     15
SQLIO - Seq Read                   RAID 5     Mixed
Disktt - Seq Write                 RAID 5     18
Disktt - Seq Read                  RAID 5     2000
Disktt - Rand Read                 RAID 5     62
Pass Mark - mixed                  RAID 10    Varies
Pass Mark - Simulate SQL Server    RAID 5     1

I have much more detail than this if anyone is interested.
I have a requirement wherein I have around 15 different flat files. The filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or separate packages? In addition, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and load only the new records.
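For the product ID part, one common pattern is a sketch like this, assuming the data flow first lands the file in a staging table; all object names here are hypothetical:

-- Load the flat file into dbo.ProductStaging first, then insert only new IDs.
INSERT INTO dbo.Product (ProductID, ProductName)
SELECT s.ProductID, s.ProductName
FROM dbo.ProductStaging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Product AS p WHERE p.ProductID = s.ProductID);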
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow Destination item accept its table name dynamically, which seems to prevent me from doing this.
I suppose I could make a different Data Flow Destination item for each file, in the Data Flow. Would that be a reasonable solution, or is there a simpler solution, or should I just resign myself to making a separate Data Flow for every single file?
I am trying to restore multiple .bak backup SQL database files onto a new server. However, I have found that it will not allow me to restore multiple databases at once. Is there a way to do this so that I do not have to manually upload one at a time? I tried adding all the .bak files at once in the backup device window, but it only restored the first one listed. It would be so much easier to restore them all at once so that I do not have to continue this manual process. I am restoring them via device.
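One way to script it instead of using the GUI; a sketch, assuming each .bak holds a single full backup named after its database, xp_cmdshell is enabled, and the folder path is hypothetical:

-- Restore every .bak in a folder, one RESTORE per file.
CREATE TABLE #baks (fname NVARCHAR(260));
INSERT INTO #baks EXEC xp_cmdshell 'dir /b "D:\Backups\*.bak"';
DELETE FROM #baks WHERE fname IS NULL OR fname LIKE '%File Not Found%';

DECLARE @fname NVARCHAR(260), @db SYSNAME, @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR SELECT fname FROM #baks;
OPEN c;
FETCH NEXT FROM c INTO @fname;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @db = LEFT(@fname, LEN(@fname) - 4);  -- strip ".bak"
    SET @sql = N'RESTORE DATABASE ' + QUOTENAME(@db)
             + N' FROM DISK = ''D:\Backups\' + @fname + N''' WITH RECOVERY;';
    -- On a new server you will likely also need WITH MOVE to relocate the data/log files.
    EXEC (@sql);
    FETCH NEXT FROM c INTO @fname;
END
CLOSE c; DEALLOCATE c;
DROP TABLE #baks;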
I need to be able to bulk insert a bunch of tables from their corresponding flat files. I have created an XML file (see below) which has a file name/table name pair at each node. I then created a Foreach Loop task and used the Node enumeration type and the following OuterXPathString: ReferenceFiles/File. At this point I get lost. How do I pass the two inner node values (file name and table name) to variables which I can then use as expressions for the Bulk Insert task inside the Foreach?
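For what it's worth, here is one hypothetical shape for that XML and a T-SQL way to read the pairs out of it; the element names are assumptions, so adjust them to the actual file:

-- Assumed XML shape (hypothetical):
-- <ReferenceFiles>
--   <File><FileName>C:\Data\a.txt</FileName><TableName>dbo.A</TableName></File>
-- </ReferenceFiles>
DECLARE @x XML;
SELECT @x = CONVERT(XML, BulkColumn)
FROM OPENROWSET(BULK 'C:\Data\ReferenceFiles.xml', SINGLE_BLOB) AS t;

SELECT f.n.value('(FileName)[1]',  'NVARCHAR(260)') AS FileName,
       f.n.value('(TableName)[1]', 'NVARCHAR(128)') AS TableName
FROM @x.nodes('/ReferenceFiles/File') AS f(n);

As I understand it, inside the package the Foreach NodeList enumerator hands you each node in turn, so the same two values can be mapped to package variables and used in expressions on the Bulk Insert task.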
I used the data export wizard to export a single table to a single flat file (multiple wasn't allowed). I saved the package as a *.dtsx file which I'm attempting to edit to add the additional tables.
Creating additional sources is fairly easy: copy the first source and change the table name.
I've tried copying the destination connection and changing it to point at a new text file, but I can't get past having to add each column manually to the new destination.
How can I duplicate the mapping that must be taking place in the wizard in the *.dtsx editing environment?
This seems like a simple / common task, but I've been unable to find a solution.
I have searched but haven't found quite the best way to look at this so far...
I have an application that outputs data to several text files (up to 30). These share a common object name but contain completely different column data.
In DTS I had each of the source text file connections going to one OLE DB connection and then individual transform data tasks pointing to the one OLE DB connection.
Looking at SSIS, it would appear that I would need one source and one destination for each of these, and therefore 30 parallel data flows?
Just wondering if there is a neater way of doing this?
It is a regular data import that happens a few times a day - the text files are named the same as the SQL tables - i.e. app_userdata.txt goes to the app_userdata table.
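Since the file names match the table names, a hedged T-SQL alternative is a single loop that builds one BULK INSERT per table; the folder, delimiter options, and table list below are assumptions:

-- One BULK INSERT per table; file name = table name + '.txt'.
DECLARE @tables TABLE (tname SYSNAME);
INSERT INTO @tables VALUES ('app_userdata');
INSERT INTO @tables VALUES ('app_orders');  -- ...and so on for the rest of the 30

DECLARE @t SYSNAME, @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR SELECT tname FROM @tables;
OPEN c;
FETCH NEXT FROM c INTO @t;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BULK INSERT dbo.' + QUOTENAME(@t)
             + N' FROM ''C:\Import\' + @t + N'.txt'''
             + N' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @t;
END
CLOSE c; DEALLOCATE c;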
I am trying to write (my first, unfortunately) DTS package, and am having some problems.
I need to be able to import multiple flat files (all in the same format, just with different schemas), each one going into a different table. I have written an application to call my DTS package, sending it variables for the table name and the file name. This works fine when I test it on a single flat file.
My problem is, the Transformation object does not reset after each DTS call, so I get "Column does not exist" errors after the first successful import. I can go into the DTS Manager and reset the Transformation options, but that would defeat the purpose of automation. Is there any way to reset the Transformation object, or another technique, so that it will continuously work on files that use different schemas?
I am very new at DTS, so please consider me "ignorant" when replying.
I have multiple .sql files, and executing each one manually is a pain, so how do you run multiple .sql files all at once? Besides creating a batch file, are there T-SQL commands that could execute .sql files?
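There is no native T-SQL statement that executes a .sql file, but if xp_cmdshell is enabled you can shell out to sqlcmd; a sketch with a hypothetical server, database, and folder:

-- Run every .sql file in a folder via sqlcmd (use osql on pre-2005 servers).
EXEC xp_cmdshell 'for %f in (C:\Scripts\*.sql) do sqlcmd -S MYSERVER -d MyDb -E -i "%f"';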
I have a rather large sales transaction DB. Basic header and detail tables. I am providing a third-party company with daily sales information, and I need to give them back data from about 8 or 9 months ago. I currently have a DTS package that gets sales for the current day, but since I have to go back, I have to manually edit the query in the DTS package and change the date range... UNLESS ...
Blah, blah, blah. The problem is that they can only take the data in daily files. So, there would be ONE file for each day. I really don't need to be manually running these jobs, so I'm wondering if someone could point me to a way of writing a package (maybe ActiveX, not sure) that would run through a loop of dates, basically, and create a separate file for each day. Versus having to edit a generic DTS package and changing the date range 350 times...
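One sketch of that date loop in plain T-SQL, using bcp for the one-file-per-day export; the table, column, server, dates, and output path are all hypothetical, and xp_cmdshell must be enabled:

-- Export one file per day across a date range.
DECLARE @d DATETIME, @end DATETIME, @cmd VARCHAR(1000);
SET @d = '20070101';   -- hypothetical start of the 8-9 month window
SET @end = '20070930';
WHILE @d <= @end
BEGIN
    SET @cmd = 'bcp "SELECT * FROM SalesDb.dbo.SalesDetail WHERE SaleDate >= '''
             + CONVERT(VARCHAR(8), @d, 112) + ''' AND SaleDate < '''
             + CONVERT(VARCHAR(8), @d + 1, 112) + '''" queryout "C:\Out\sales_'
             + CONVERT(VARCHAR(8), @d, 112) + '.txt" -c -T -S MYSERVER';
    EXEC xp_cmdshell @cmd;
    SET @d = @d + 1;
END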
Using an expression to set the log filename to include the date and time results in 3 log files being created.
Ummm. Why? I only ran the package once. Is SSIS not sharing my log file connection among the different components?
On first thought, my workaround would involve using a script task to "set" the log file name to a variable and using an expression to set the log connection to the variable. But the problem with that is that logging starts BEFORE the script task runs...
I have a job with a single step that executes a stored procedure that performs the following steps:
1. Checks for the existence of file A in Folder A
2. If it exists,
a. executes xp_cmdshell to run a DTS package that drops a table, recreates it, and loads the data in file A into table X
b. runs other stored procedures that use the data in table X to create other tables Y and Z
c. executes xp_cmdshell to move and rename file A from Folder A into Folder B
What I'd like to do is use this same stored procedure if possible, but create a job or another stored procedure that would loop through and process multiple files in Folder A instead of just one.
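A sketch of that wrapper, assuming the existing procedure can be given the file name as a parameter; the proc name and paths are hypothetical, and xp_cmdshell must be enabled:

-- Enumerate Folder A and call the existing procedure once per file.
CREATE TABLE #files (fname VARCHAR(260));
INSERT INTO #files EXEC xp_cmdshell 'dir /b "C:\FolderA\*.*"';
DELETE FROM #files WHERE fname IS NULL OR fname LIKE '%File Not Found%';

DECLARE @f VARCHAR(260);
DECLARE c CURSOR FOR SELECT fname FROM #files;
OPEN c;
FETCH NEXT FROM c INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.ProcessFileA @FileName = @f;  -- hypothetical parameterized version of the existing proc
    FETCH NEXT FROM c INTO @f;
END
CLOSE c; DEALLOCATE c;
DROP TABLE #files;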
Hi, I have about 300-400 XML files I want to load into my SQL database (2005). The following code will load one (1) file. How do I do multiple collections?

INSERT INTO MEL (DATA)
SELECT * FROM OPENROWSET (BULK 'C:\Temp\CHAPTER1.xml', SINGLE_BLOB) AS TEMP

Thanks,
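For many files, one sketch is to enumerate the folder and run the same OPENROWSET through dynamic SQL, once per file; this assumes xp_cmdshell is enabled and the same C:\Temp folder:

-- Load every XML file in the folder, one dynamic OPENROWSET per file.
CREATE TABLE #xmlfiles (fname VARCHAR(260));
INSERT INTO #xmlfiles EXEC xp_cmdshell 'dir /b "C:\Temp\*.xml"';
DELETE FROM #xmlfiles WHERE fname IS NULL OR fname LIKE '%File Not Found%';

DECLARE @f VARCHAR(260), @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR SELECT fname FROM #xmlfiles;
OPEN c;
FETCH NEXT FROM c INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'INSERT INTO MEL (DATA) SELECT * FROM OPENROWSET(BULK ''C:\Temp\'
             + @f + N''', SINGLE_BLOB) AS TEMP;';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @f;
END
CLOSE c; DEALLOCATE c;
DROP TABLE #xmlfiles;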
Usually, our in-house ERP software has 1 database and 1 database file. After an upgrade from MS SQL 6.5 to MS SQL 7.0, I have a database whose properties show that it is made up of multiple data files. What is the easiest and safest method to return this database to only 1 data file?
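The usual technique is to empty each extra file into the remaining one and then drop it; a sketch, where the database and logical file names are hypothetical (run sp_helpfile to see the real ones):

-- Find the logical file names first:
EXEC sp_helpfile;

-- For each extra data file (ERP_Data2 is a hypothetical logical name):
DBCC SHRINKFILE (ERP_Data2, EMPTYFILE);    -- migrates its pages into the other file(s)
ALTER DATABASE ERP REMOVE FILE ERP_Data2;  -- drops the now-empty file
-- Note: the primary (.mdf) file cannot be removed, so empty everything into it.

Take a full backup before you start, since this rewrites a lot of pages.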
Is it possible to take a text file that contains multiple record types through the Data Transformation Service in MS SQL 7.0 and load each different record type into a separate table?
We have a large database (91 GB) that is currently in one large data file. Now that we have multiple disk arrays I can split it across, I would like to have a couple of data files. My question is: what is the best way to split this up? Should I keep one primary filegroup and just create another file, or should I create a filegroup for indexes and put those on it? This database is used for reporting only, so it doesn't really have any writes being done on it.
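If you try the index filegroup route, the moving parts look roughly like this; the database, file, path, table, and index names are all hypothetical:

-- Add a filegroup on the new array and rebuild an existing index onto it.
ALTER DATABASE ReportDb ADD FILEGROUP IndexFG;
ALTER DATABASE ReportDb ADD FILE
(
    NAME = ReportDb_Idx1,
    FILENAME = 'I:\Data\ReportDb_Idx1.ndf',
    SIZE = 10GB
) TO FILEGROUP IndexFG;

-- Recreate an existing nonclustered index on the new filegroup:
CREATE NONCLUSTERED INDEX IX_Sales_Date
ON dbo.Sales (SaleDate)
WITH (DROP_EXISTING = ON)
ON IndexFG;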
I have 8 GB of text files, which are basically log files from the past few years. There are 24 text files per directory, labeled for every day (so they are not all in one folder). It would make reading them much easier if I could import them into SQL, but I only seem to be able to import one at a time (with the wizards :eek: ).
Surely there is a way to mass import without all the costly applications that Google searches give me? cheers :P
I have a script which imports the contents of a csv file from our CRM system and updates a table in my database. This works OK, but the problems I have are that a) sometimes there is more than one file in the folder, and b) I wish to move any csv files that have been imported into an archive folder. The csv files arrive with a time/date stamp and I currently rename them manually to FREXPORT before importing (the name is in the format FREXPORT_20141101_1217.csv). How do I:
1) get it to process the file without me having to manually rename the file(s) each time, 2) if there is more than one file in the folder, process all the files, and 3) move the correctly processed files to an archive folder, which is importarchive?
Ultimately, I would like the script to be run as a scheduled job, so it also has to deal with the fact that sometimes there will be no files to import too.
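A sketch of that flow, assuming the existing import logic is wrapped in a procedure that takes a file name; the proc name and folder paths are hypothetical, and xp_cmdshell must be enabled:

-- Process every FREXPORT CSV present (zero or more), then move each to the archive.
CREATE TABLE #csvs (fname VARCHAR(260));
INSERT INTO #csvs EXEC xp_cmdshell 'dir /b "C:\import\FREXPORT_*.csv"';
DELETE FROM #csvs WHERE fname IS NULL OR fname LIKE '%File Not Found%';

DECLARE @f VARCHAR(260), @cmd VARCHAR(1000);
WHILE EXISTS (SELECT * FROM #csvs)
BEGIN
    SELECT TOP 1 @f = fname FROM #csvs ORDER BY fname;
    EXEC dbo.ImportCrmCsv @FileName = @f;  -- hypothetical wrapper around the existing import script
    SET @cmd = 'move "C:\import\' + @f + '" "C:\importarchive\"';
    EXEC xp_cmdshell @cmd;
    DELETE FROM #csvs WHERE fname = @f;
END
DROP TABLE #csvs;

Because the loop body never runs when #csvs comes back empty, the no-files case in a scheduled job falls out naturally.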
Hi, I have an FTP task in my SSIS package where I want to select 3 files from a directory. The directory already contains other files, but I only want 3 of them. How do I pull down just those 3 files from the FTP site? I can't seem to find an option on the FTP task to select which files I want from the directory.
I have over three hundred text files that I need to import to SQL Server. Each is in the exact same format. I want to import them as separate tables. Is there any way to do it in one process? Regards, Ciarán
I have a situation where every morning files are downloaded from an ftp site.
Problem is, I don't know beforehand how many files there will be. Usually it's one or two, but might be more. All files need to be loaded in the database.
How can I account for these multiple files, whose number I do not know beforehand? That is, in the file connection manager, how do I tell it which files to load?
Is there any control flow task in SSIS that can get 50 XML files from a directory on a server and insert them into a table/column of xml datatype in a SQL Server database?
I know I can use the Foreach Loop container and specify the directory path to pick up the XML files, but what do you do after that?
I also know of the OPENROWSET Function but that does 1 file at a time only!
I want to import data from multiple XML files into SQL 2005 using SSIS. I know how to set up a flat data source etc... but am unsure how I would go about importing from multiple files.