I have a package here which updates a DB from a flat file source. Now the problem is I may get multiple files. I have used a Foreach Loop to handle this; it picks up files based on the file name (the file names have a timestamp in them), but I want to feed in the files in order of their creation time.
Please help me with this. I have written a Script Task before the Foreach Loop and have got the minimum creation date from all the files, but I am not able to move forward from there.
I have a requirement wherein I have around 15 different flat files. The filenames are fixed, but the folder path can change (I think I should use a variable for the folder path). The data from these 15 files should go to their respective tables in the database.
Do I need to create a separate Data Flow Task for each file, or a separate package? In addition to this, an example: while importing product data into the product table, if a product ID already exists, we need to ignore it and load only the new records.
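A common pattern for the "ignore existing product IDs" part is to bulk-load each file into a staging table and insert only the new keys from there. A minimal sketch, with made-up table and column names:

-- Staging table is assumed to have been loaded from the flat file already.
-- All names below are illustrative, not from the original post.
INSERT INTO dbo.Product (ProductID, ProductName, Price)
SELECT s.ProductID, s.ProductName, s.Price
FROM dbo.Product_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Product AS p
                  WHERE p.ProductID = s.ProductID);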
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow Destination item accept its table name dynamically, which seems to prevent me from doing this.
I suppose I could make a different Data Flow Destination item for each file, in the Data Flow. Would that be a reasonable solution, or is there a simpler solution, or should I just resign myself to making a separate Data Flow for every single file?
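For what it's worth, the OLE DB Destination does offer a "Table name or view name variable" data access mode, so the target table can be supplied by the same loop variable that supplies the file name. And if the files are simple enough for BULK INSERT, another option is an Execute SQL Task that builds the statement from the file and table variables. A minimal sketch, with hypothetical values standing in for the SSIS variables:

-- @table and @file would be mapped from SSIS variables in the Execute SQL Task.
DECLARE @table sysname, @file nvarchar(260), @sql nvarchar(4000);
SET @table = N'dbo.MyTargetTable';
SET @file  = N'C:\Imports\MyFile.txt';
SET @sql = N'BULK INSERT ' + QUOTENAME(PARSENAME(@table, 2)) + N'.'
         + QUOTENAME(PARSENAME(@table, 1))
         + N' FROM ''' + REPLACE(@file, N'''', N'''''') + N''''
         + N' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
EXEC sys.sp_executesql @sql;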
Problem: I need to build several databases on a quarterly basis. The databases range in size from 30 GB to 250 GB. I want to keep each database file <= 20 GB, so my databases contain from 2 to 13 database files (I put these all in one filegroup; the filegroup is separate from the PRIMARY filegroup, which contains only the system tables in my databases).
I would like to create the files for the databases asynchronously (I have four physical drive letters on which to create the files and would like to be building one file on each drive simultaneously). I can achieve the asynchronous operation by creating a separate job for each of the drive letters and then calling sp_start_job for each of the jobs.
The problem is that the ALTER DATABASE command apparently locks the sysfiles table, and three of the four processes are always blocked, so I end up building the files serially instead of in parallel.
Is there a way to make these processes work in parallel?
I am trying to restore multiple .bak backup SQL database files onto a new server. However, I have found that it will not allow me to restore multiple databases at once. Is there a way to do this so that I do not have to manually upload one at a time? I tried adding all the .bak files at once to the backup device window but it only did the first one listed. It would be so much easier to restore them all at once so that I do not have to continue this manual process. I am restoring them via device.
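The restores can at least be scripted so they run back to back without the UI. A sketch of one RESTORE per .bak file (paths and logical file names are placeholders; RESTORE FILELISTONLY against each backup will show the real logical names):

-- Repeat (or generate) one of these per .bak file.
RESTORE DATABASE SalesDb
FROM DISK = N'D:\Backups\SalesDb.bak'
WITH MOVE N'SalesDb' TO N'E:\Data\SalesDb.mdf',
     MOVE N'SalesDb_log' TO N'F:\Logs\SalesDb_log.ldf',
     STATS = 10;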
I need to be able to bulk insert a bunch of tables from their corresponding flat file. I have created an XML file (see below) which has the file name/table name pair at each node. I then created a ForEachLoop task and used the Node enumeration type and the following OuterXpathString: ReferenceFiles/File. At this point I get lost. How do I pass the 2 inside node values (file name and table name) to variables which I can then use as expressions for the bulk insert task inside the Foreach?
I used the data export wizard to export a single table to a single flat file (multiple wasn't allowed). I saved the package as a *.dtsx file which I'm attempting to edit to add the additional tables.
Creating additional sources is fairly easy: copy the first source and change the table name.
I've tried copying the destination connection and changing to a new text file, but can't get past having to add each column manually to the new destination.
How can I duplicate the mapping that must be taking place in the wizard in the *.dtsx editing environment?
This seems like a simple / common task, but I've been unable to find a solution.
We have a 1 TB database and we recently got more space, so: 1) can I add data files and put them on a different disk during production hours? 2) What are the effects of doing this? Just want to get some expert advice.
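Adding a data file is an online operation, so it can be done during production hours; the main effects are that new allocations start striping across the files (proportional fill) and a burst of I/O while the file is created and zeroed. A sketch with placeholder names, path, and sizes:

ALTER DATABASE MyBigDb
ADD FILE (
    NAME = N'MyBigDb_data2',
    FILENAME = N'G:\SQLData\MyBigDb_data2.ndf',
    SIZE = 50GB,
    FILEGROWTH = 1GB
);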
I use Management Studio on the SQL server. Each time I want to run scripts over new data, I have to delete the old tables in the database and import new ones (from CSV files into dbo tables). These are the same files every time, except that the data changes. Is it possible to make an automated process for this import?
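Since the file names and table names never change, one way is to wrap the reload in a stored procedure and schedule it with SQL Server Agent. A minimal sketch, assuming BULK INSERT can read your CSVs (the names and path are made up):

CREATE PROCEDURE dbo.usp_ReloadFromCsv
AS
BEGIN
    -- Empty the table, then reload it from the fixed-name CSV file.
    TRUNCATE TABLE dbo.MyImportTable;
    BULK INSERT dbo.MyImportTable
    FROM 'C:\Imports\mydata.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
END;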
Have a SQL 2008 R2 instance on a VM where the single .mdf for the tempdb database is located on a high-contention disk. I've managed to get another 60 GB disk and thought it would be a good time to move the .mdf and also increase its size and number of files.
The server has 12 cores, and after a bit of reading I've decided it would be best to have just four files for this database, as the one-file-per-core (minus one) advice seems to be disputed.
-- Move the existing file to the new disk and rename it.
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILENAME = 'E:\SQLData\tempdb0.mdf');
-- Change the size to 1GB.
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = 1048576KB, FILEGROWTH = 5%);
-- Add three new files, all with the same size & growth.
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev1', FILENAME = N'E:\SQLData\tempdb1.mdf', SIZE = 1048576KB, FILEGROWTH = 5%);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev2', FILENAME = N'E:\SQLData\tempdb2.mdf', SIZE = 1048576KB, FILEGROWTH = 5%);
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev3', FILENAME = N'E:\SQLData\tempdb3.mdf', SIZE = 1048576KB, FILEGROWTH = 5%);
-- Now restart the instance.
Also, what are people's thoughts on percentage growth for tempdb? I've read that it's not recommended, and yet it seems to be the norm.
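Percentage growth on tempdb means each growth event is bigger than the last, and on a multi-file tempdb it lets the files drift out of balance, which skews proportional fill. The usual advice is a fixed increment; a sketch using the file names above (the 256MB figure is just an example):

ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev',  FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev1', FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev2', FILEGROWTH = 256MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev3', FILEGROWTH = 256MB);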
I have searched but have not found quite the best way to look at this so far.
I have an application that outputs data to several text files (up to 30). These have commonality by an object name, but then contain completely different column data.
In DTS I had each of the source text file connections going to one OLE DB connection and then individual transform data tasks pointing to the one OLE DB connection.
Looking at SSIS, it would appear that I would need to have one source and one destination for each of these and therefore 30 parallel data flows?
Just wondering if there is a neater way of doing this?
It is a regular data import that happens a few times a day - the text files are named the same as the SQL tables - ie app_userdata.txt goes to app_userdata table.
When a database is configured for mirroring and you want to set up partitioning on that database, how can we do it? Is the process the same, or is there some variation when adding filegroups and files? Will the partitioning be reflected in the mirror database as well?
I am trying to write (my first, unfortunately) DTS package, and am having some problems.
I need to be able to import multiple flat files (all in the same format, just with different schemas), each one going into a different table. I have written an application to call my DTS package, sending it variables for the table name and the file name. This works fine when I test it on a single flat file.
My problem is that the Transformation object does not reset after each DTS call, so I get "Column does not exist" errors after the first successful import. I can go into the DTS Manager and reset the Transformation options, but that would defeat the purpose of automation. Is there any way to reset the Transformation object, or another technique, so that it will continuously work on files that use different schemas?
I am very new at DTS, so please consider me "ignorant" when replying.
I am pretty new to the DB part of this, but I have built an ASP.NET web application with two tables: FORMS and UNITS. I have created a web page that allows users to add forms and associate a unit with each form. I now need to allow users to associate a form with multiple units. I can change the web page list box to allow multiple selections, but that doesn't solve the problem. This seems like a pretty simple task, but I can't seem to find anything on it. Any help??? Below is the stored procedure I was using: CREATE PROCEDURE dbo.USP_AddForm
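A form with many units is a classic many-to-many relationship, which usually means a junction table rather than a unit column on FORMS. A sketch, with hypothetical key column names:

-- Junction table linking FORMS and UNITS many-to-many
-- (FormID/UnitID are guesses at the key columns).
CREATE TABLE dbo.FORM_UNITS (
    FormID int NOT NULL REFERENCES dbo.FORMS (FormID),
    UnitID int NOT NULL REFERENCES dbo.UNITS (UnitID),
    PRIMARY KEY (FormID, UnitID)
);

-- The page would then call something like this once per selected unit:
CREATE PROCEDURE dbo.USP_AddFormUnit
    @FormID int,
    @UnitID int
AS
INSERT INTO dbo.FORM_UNITS (FormID, UnitID)
VALUES (@FormID, @UnitID);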
What's the fastest, easiest way to take a SELECT that returns, say, 4 values and put them into a single column on a defined row?
Basically, I want to update, say, a person's list of places they have traveled, and I want it listed like france;usa;germany, etc. The data would always be in the tables I pull from, so I can overwrite the column each time I run it, but it has to take 3 or more values from a query and put them, separated by a ';', into the one column that stores that info for the person.
I did this once before with a cursor, appending a variable to itself with COALESCE or whatever the command was, but I was just wondering if there is a faster way to do this that I'm not thinking about :P.
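The cursor isn't needed: the variable-concatenation trick works in a single SELECT, and on SQL 2005+ FOR XML PATH('') can do the same thing set-based. A sketch of the variable version (table and column names are invented):

-- Build 'france;usa;germany' for one person, then store it.
DECLARE @places varchar(8000);
SELECT @places = COALESCE(@places + ';', '') + t.Country
FROM dbo.Travels AS t
WHERE t.PersonID = 42;

UPDATE dbo.Persons
SET PlacesTraveled = @places
WHERE PersonID = 42;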
I am trying to add a website address to all the records in a database that match a certain criteria. The statement that I am using is shown below, but surprise surprise, it's not working! I'm new to SQL, so any help would be much appreciated! Thanks.
declare @ComNum int
set @ComNum = (select max(communication_number)+1 from communications)

insert into "communications" (contact_number, device, ex_directory, dialling_code, std_code, number, extension, notes, amended_by, amended_on, cli_number, communication_number)
values (NULL, 'WW', 'N', NULL, 'WW', 'W', NULL, 'www.abc.co.uk', 'Jon', 2007-11-29, NULL, @ComNum);

select name from organisations where name = 'Abc Limited'
So, I have not included a VALUE for CONTACT_NUMBER as I wish to update all records with the website details as per the INSERT statement where the NAME column in the ORGANISATIONS table is 'Abc Limited'. I know something is wrong but I can't quite work out what!
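Two things stand out: the unquoted date (2007-11-29 is integer arithmetic, not a date literal), and the fact that INSERT ... VALUES can't pull contact numbers from ORGANISATIONS; that takes INSERT ... SELECT. A sketch of the likely intent, guessing that organisations has a key column (organisation_number here is hypothetical) that belongs in contact_number:

DECLARE @ComNum int;
SET @ComNum = (SELECT MAX(communication_number) + 1 FROM communications);

INSERT INTO communications
    (contact_number, device, ex_directory, dialling_code, std_code,
     number, extension, notes, amended_by, amended_on, cli_number, communication_number)
SELECT o.organisation_number, 'WW', 'N', NULL, 'WW',
       'W', NULL, 'www.abc.co.uk', 'Jon', '2007-11-29', NULL, @ComNum
FROM organisations AS o
WHERE o.name = 'Abc Limited';

If more than one organisation row can match, @ComNum would need to vary per row (e.g. @ComNum plus ROW_NUMBER()) rather than being a single value.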
I'm trying to put a range of dates into the DataTextField of my ddl (DropDownList). I'm using the query:
string strProps2 = "SELECT sc_id, cast(begin_date AS varchar) + ' - ' + cast(end_date AS varchar) AS XYZ From Archived_Property_Changes WHERE Property_number = " + qryPN + " ORDER BY end_date";
But it gives me an empty ddl. The DataValueField shows the correct results, but it's not displaying the correct data for the DataTextField. When I removed the casts from the SQL query, it returned:
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value
Below is my ddl code:
SqlCommand objCommandProps2 = new SqlCommand(strProps2, myConnection); SqlDataReader objReaderProps2 = objCommandProps2.ExecuteReader();
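The out-of-range error usually means an implicit string/datetime conversion is happening under the wrong date format. Giving CONVERT an explicit style removes the ambiguity; a sketch of the query using style 106 ('dd mon yyyy'), which is an arbitrary choice here:

SELECT sc_id,
       CONVERT(varchar(11), begin_date, 106) + ' - '
     + CONVERT(varchar(11), end_date, 106) AS XYZ
FROM Archived_Property_Changes
WHERE Property_number = @qryPN
ORDER BY end_date;

Passing @qryPN as a SqlParameter instead of concatenating qryPN into the string would also close off SQL injection.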
My current environment has multiple packages stored in SQL Server (MSDB). When working on a set of packages, I want to bring them into my local development area. "Add Existing Package" only allows you to pull one package at a time - does anyone have the secret to selecting multiples?
I have one dimension and one measure group. I deployed and processed the cube, and I was able to browse the data. Then I added one more dimension, redeployed, and reprocessed the cube. Now I am not able to see any values; I am getting something like the below.
The first line of code works fine, but when I try to set the value of the property I get the following exception:
An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in MyDll.dll Additional information: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
I have multiple .sql files, and executing each one manually is a pain, so how do you run multiple .sql files all at once? Besides creating a batch file, are there T-SQL commands that could execute .sql files?
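From T-SQL itself, the usual trick is to shell out to sqlcmd (or osql on older versions) via xp_cmdshell, which has to be enabled first. A sketch, with placeholder server, database, and folder names:

-- Runs every .sql file in a folder; %f (not %%f) because xp_cmdshell runs under cmd /c.
EXEC xp_cmdshell 'for %f in (C:\Scripts\*.sql) do sqlcmd -S MYSERVER -d MyDb -E -i "%f"';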
I have a rather large sale transaction DB. Basic header, and detail tables. I am providing a third party company with daily sales information, and I need to give them back data from about 8 or 9 months ago. I currently have a DTS package that gets sales for the current day, but since I have to go back, I have to manually edit the query in the DTS package, and change the date range...UNLESS ...
Blah, blah, blah. The problem is that they can only take the data in daily files, so there would be ONE file for each day. I really don't need to be manually running these jobs, so I'm wondering if someone could point me to a way of writing a package (maybe ActiveX, not sure) that would basically loop through a range of dates and create a separate file for each day, versus having to edit a generic DTS package and change the date range 350 times...
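One alternative to looping inside DTS is a plain T-SQL loop that shells out to bcp once per day. A sketch, assuming xp_cmdshell is available; the table, column, path, and server names are stand-ins:

DECLARE @d datetime, @end datetime, @cmd varchar(1000);
SET @d = '20070101';   -- first day to extract
SET @end = '20070915'; -- last day to extract
WHILE @d <= @end
BEGIN
    SET @cmd = 'bcp "SELECT * FROM SalesDb.dbo.SalesDetail WHERE SaleDate >= '''
             + CONVERT(varchar(8), @d, 112) + ''' AND SaleDate < '''
             + CONVERT(varchar(8), DATEADD(day, 1, @d), 112)
             + '''" queryout "C:\Out\sales_' + CONVERT(varchar(8), @d, 112)
             + '.txt" -c -T -S MYSERVER';
    EXEC xp_cmdshell @cmd;  -- one output file per day
    SET @d = DATEADD(day, 1, @d);
END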
Using an expression to set the log filename to include the date and time results in 3 log files being created.
Ummm. Why? I only ran the package once. Is SSIS not sharing my log file connection among the different components?
On first thought my workaround would involve using a script task to "set" the log file name to a variable and use an expression to set the log connection to the variable. But the problem with that is that logging starts BEFORE the script task is run...
I have a job with a single step that executes a stored procedure that performs the following steps:
1. Checks for the existence of a file A in a folder A
2. If it exists,
a. executes the cmdshell to run a DTS package to drop a table, recreate it and load the data in the file A to table X
b. runs other stored procedures that use the data in table X to create other tables Y and Z
c. executes the cmdshell to remove/rename the file A from Folder A into Folder B
What I'd like to do is use this same stored procedure if possible, but create a job or another stored procedure that would loop through and process multiple files in Folder A instead of just one.
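One way, since the procedure already relies on cmdshell, is to capture a directory listing into a table and cursor over it, calling a parameterized version of the existing procedure once per file. A sketch with placeholder names:

-- Enumerate Folder A and process each file in turn.
CREATE TABLE #files (fname varchar(260));
INSERT INTO #files EXEC xp_cmdshell 'dir /b C:\FolderA';
DELETE FROM #files WHERE fname IS NULL;  -- dir output includes a trailing NULL row

DECLARE @f varchar(260);
DECLARE c CURSOR FOR SELECT fname FROM #files;
OPEN c;
FETCH NEXT FROM c INTO @f;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.usp_ProcessFile @f;  -- hypothetical file-name-parameter version of the proc
    FETCH NEXT FROM c INTO @f;
END
CLOSE c;
DEALLOCATE c;
DROP TABLE #files;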
Hi, I have about 300-400 XML files I want to load into my SQL database (2005). The following code will load one (1) file. How do I do multiple files?

INSERT INTO MEL (DATA)
SELECT * FROM OPENROWSET (BULK 'C:\Temp\CHAPTER1.xml', SINGLE_BLOB) AS TEMP

Thanks,
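OPENROWSET (BULK ...) requires a string literal for the path, so looping over files means dynamic SQL. If the files follow the CHAPTERn.xml numbering (an assumption here), a counter loop is enough:

DECLARE @i int, @sql nvarchar(1000);
SET @i = 1;
WHILE @i <= 400   -- adjust to the actual number of files
BEGIN
    SET @sql = N'INSERT INTO MEL (DATA) SELECT * FROM OPENROWSET (BULK ''C:\Temp\CHAPTER'
             + CAST(@i AS nvarchar(10)) + N'.xml'', SINGLE_BLOB) AS TEMP;';
    EXEC sys.sp_executesql @sql;
    SET @i = @i + 1;
END

For arbitrary file names, the list could come from xp_cmdshell 'dir /b' instead of a counter.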
Usually, our in-house ERP software has 1 database and 1 database file. After an upgrade from MS SQL 6.5 to MS SQL 7.0, I have a database whose properties show that it is made up of multiple data files. What is the easiest and safest method to return this database to having only 1 data file?
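A sketch of the usual approach, assuming the extra files are in the same filegroup as the primary file and that a full backup has been taken first (the database and file names are placeholders):

USE MyErpDb;
-- Move the pages off the extra file onto the remaining files...
DBCC SHRINKFILE (N'MyErpDb_Data2', EMPTYFILE);
-- ...then drop the now-empty file. Repeat per extra file.
ALTER DATABASE MyErpDb REMOVE FILE MyErpDb_Data2;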
Is it possible to take a text file that contains multiple record types through Data Transformation Services in MS SQL 7.0 and load each different record type into a separate table?
We have a large database (91 GB) that is currently in one large data file. Now that we have multiple disk arrays I can split it across, I would like to have a couple of data files. My question is: what is the best way to split this up? Should I keep one primary filegroup and just create another file, or should I create a filegroup for indexes and put those on it? This database is used for reporting only, so it doesn't really have any writes being done on it.
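If the index-filegroup route appeals, a sketch of what it looks like (names, path, and sizes are placeholders; moving an existing index is a rebuild onto the new filegroup):

ALTER DATABASE ReportDb ADD FILEGROUP INDEXES;
ALTER DATABASE ReportDb ADD FILE (
    NAME = N'ReportDb_ix1',
    FILENAME = N'H:\SQLData\ReportDb_ix1.ndf',
    SIZE = 20GB,
    FILEGROWTH = 1GB
) TO FILEGROUP INDEXES;

-- Rebuild an existing index onto the new filegroup:
CREATE INDEX IX_Sales_Date ON dbo.Sales (SaleDate)
WITH DROP_EXISTING ON INDEXES;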