I have a DTS package that reads a batch of transactions into a tran history table every day from a text file. The problem is that about half of the lines in the text file contain semicolons because they are comments. What I do now is import the whole file and then delete every row containing a semicolon from my tran hist table. As the table grows, this delete will start to take a long time. How do I filter the file so the lines with semicolons are never imported in the first place? That would run faster and save me time later.
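A minimal T-SQL sketch of the usual staging-table workaround, assuming single-column rows and made-up table and file names; the same idea works from a DTS Execute SQL task:

    -- Load everything into a throwaway staging table first, then copy only
    -- the non-comment rows, so no delete is ever needed on the history table.
    CREATE TABLE dbo.TranStaging (RawLine varchar(8000))

    BULK INSERT dbo.TranStaging
    FROM 'C:\feeds\trans.txt'            -- hypothetical path
    WITH (ROWTERMINATOR = '\n')

    INSERT INTO dbo.TranHist (RawLine)   -- hypothetical target table
    SELECT RawLine
    FROM dbo.TranStaging
    WHERE CHARINDEX(';', RawLine) = 0    -- comment lines never make it in

    TRUNCATE TABLE dbo.TranStaging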
I am trying to bcp import a text file into a SQL Server 2000 database. The text file is coming out of a Java application where order information is written to the file. Each record is on its own row, so the last item in each record has a newline character at the end of it to create the next row. This works well in creating the file, but bcp does not like to import this text file with the extra blank line at the end. If I change the newline character to the beginning of the records, then there is a blank line at the top of the text file, which bcp also does not like. Does anyone have any suggestions for getting around this issue? Thanks,
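If the variant with the newline at the beginning of each record is acceptable (blank line at the top of the file), both bcp and BULK INSERT can simply start at the second row; a hedged sketch with made-up names:

    -- FIRSTROW = 2 skips the leading blank line.
    BULK INSERT dbo.Orders              -- hypothetical target table
    FROM 'C:\feeds\orders.txt'          -- hypothetical path
    WITH (
        FIELDTERMINATOR = ',',          -- adjust to the real delimiter
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2
    )

The bcp equivalent is the -F 2 switch (first row to load).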
I have a text file that looks like:

date = OCT0606
asdfsdaf
asdfasdg
sdgh
asdfsdf
asdg
START-OF-DATA
asdfasdfg
asdfgdfgsfg
sadfsdfgsa
asdfgsdfg
END-OF-DATA
asdfg
alsdkdfklmlkm
asdfg
asdfg

I need to clear everything from this file except the data between START-OF-DATA and END-OF-DATA, using a batch file. Alternatively, I am open to suggestions on how to import it using BULK INSERT in SQL without changing the file at all. The data is pipe separated but obviously has plenty of junk data in it. I have 2 similar files at about 30 MB and 60 MB in size. Thanks everyone
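One way to import without touching the file, sketched in T-SQL with invented names: land every line in an ordered staging table, then keep only the rows between the markers.

    CREATE TABLE dbo.RawLines (
        LineNo  int IDENTITY(1,1) PRIMARY KEY,
        RawLine varchar(8000)
    )
    GO
    -- BULK INSERT cannot skip the identity column by itself,
    -- so load through a single-column view.
    CREATE VIEW dbo.RawLinesLoad AS SELECT RawLine FROM dbo.RawLines
    GO
    BULK INSERT dbo.RawLinesLoad
    FROM 'C:\feeds\data1.txt'           -- hypothetical path
    WITH (ROWTERMINATOR = '\n')

    DECLARE @start int, @end int
    -- If the file is CRLF-terminated, compare with LIKE 'START-OF-DATA%' instead.
    SELECT @start = MIN(LineNo) FROM dbo.RawLines WHERE RawLine = 'START-OF-DATA'
    SELECT @end   = MIN(LineNo) FROM dbo.RawLines WHERE RawLine = 'END-OF-DATA'

    -- The pipe-separated payload, in file order; parse further as needed.
    SELECT RawLine
    FROM dbo.RawLines
    WHERE LineNo > @start AND LineNo < @end
    ORDER BY LineNo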
I have a dtsx package that works fine, with one exception: when I open it in BI Development Studio, it gives me the following message:
Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?
When I respond yes, the package opens and I can edit or execute it with no problem. Still, I want to understand what could cause this message and, more importantly, how I can get rid of it. When I try to simply execute the package I still get the same message, and it seems this will be a problem for running the package from SQL Server Agent.
It seems likely to me that this message refers to the dtsx file (in xml format) itself. Does that make any sense?
Basically, remove any line that starts with a "#", as well as any blank lines.
I am assuming you can do this only using a Script Component and not directly through SSIS, but I am not too familiar with scripting, so some code would be helpful.
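If a staging table is acceptable instead of a Script Component, the filter itself is one WHERE clause in T-SQL; a sketch with invented names, assuming the raw lines are already landed in dbo.Staging:

    -- Keep only rows that are non-blank and do not start with '#'.
    INSERT INTO dbo.Target (RawLine)
    SELECT RawLine
    FROM dbo.Staging
    WHERE LTRIM(RTRIM(RawLine)) <> ''       -- drop blank lines
      AND LEFT(LTRIM(RawLine), 1) <> '#'    -- drop comment lines

Inside the data flow proper, a Conditional Split transform with equivalent expressions would avoid scripting as well.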
Hi everybody, I'm sending text-based e-mails using SMS and I need some lines to be bolded. I don't want to switch to HTML-based mail just to make some lines bold, but I cannot figure out how to make lines bold in SMS; I didn't see any option for bolding a line of text. Is there any function available for the varchar datatype that will bold the text, or will I have to go to HTML-based e-mail? Any help is greatly appreciated. devmetz
SQL Server 2000: my problem is that I have to process a special text file every day which contains ASCII 0 values to separate fields. The DTS import program drops everything after the ASCII 0 value in the row, but of course I need the entire row with all fields. How can I prevent the text file import task from dropping everything after the ASCII 0 value?
I've created a dataset with 27 measures and 20 query parameters. When attempting to load the report containing this dataset, I'm shown the message:
'Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file?'
If I do open the file, it does indeed respond very slowly or even hangs.
I can manually format the XML code, but amending the report in any way (e.g. using the layout designer to move a chart) removes my formatting and re-introduces the problem.
Is this an unreasonable number of measures/parameters?
Environment: VS2005 v8.0.507; MSSQL 2005 9.00.1399.06; Windows Server 2003 SP2 (Build 3790)
I have a GridView connected to a SqlDataSource, and it works pretty well; it gives me the subsets of the information that I need. But I really want to let users choose all the companies and/or any status. What's the best way to get all the values into the GridView, besides removing the filters? :) I thought the company would be easy: I'd just set the selected value to blank "", and then it would get them all, but that's not working. And for the boolean, I have no idea how to get every value without a separate query. For reference, the current filter is (tabs_done=@tabsdone) and (company like '%' + @company + '%'):

<asp:DropDownList ID="drpdwnProcessingStatus" runat="server">
    <asp:ListItem Value="0">Open</asp:ListItem>
    <asp:ListItem Value="1">Completed</asp:ListItem>
</asp:DropDownList>

<asp:DropDownList ID="drpdwnCompany" runat="server">
    <asp:ListItem Value="">All</asp:ListItem>
    <asp:ListItem Value="cur">Cur District</asp:ListItem>
    <asp:ListItem Value="jho">Jho District</asp:ListItem>
    <asp:ListItem Value="sea">Sea District</asp:ListItem>
    <asp:ListItem Value="san">Net District</asp:ListItem>
    <asp:ListItem Value="sr">Research District</asp:ListItem>
</asp:DropDownList>

<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:HRFormsConnectionString %>"
    SelectCommand="SELECT DISTINCT [id], [lastname], [company] FROM [hr_term] hr where (tabs_done=@tabsdone) and (company like '%' + @company + '%')">
    <SelectParameters>
        <asp:ControlParameter ControlID="drpdwnProcessingStatus" DefaultValue="0" Name="tabsdone" PropertyName="SelectedValue" />
        <asp:ControlParameter ControlID="drpdwnCompany" DefaultValue="" Name="company" PropertyName="SelectedValue" />
    </SelectParameters>
</asp:SqlDataSource>
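One hedged explanation: an empty ControlParameter value is converted to NULL by default (ConvertEmptyStringToNull), and a SqlDataSource skips the select entirely when a parameter is NULL unless CancelSelectOnNullParameter is set to false, which would explain the blank "All" value returning nothing. With those two properties set to false, the query itself can treat empty or sentinel values as "match everything"; the -1 "All" status value below is invented for illustration:

    SELECT DISTINCT [id], [lastname], [company]
    FROM [hr_term]
    WHERE (@tabsdone = -1 OR tabs_done = @tabsdone)       -- -1 means any status
      AND (@company = '' OR company LIKE '%' + @company + '%')  -- '' means all companies

You would add an "All" ListItem with Value="-1" to the status dropdown to feed the sentinel.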
We are trying to import a fixed-length text file. It has two fields: the first is ten characters; the second is the rest of the row, which may be several hundred characters. Each row is terminated by a {lf}{cr}. The problem is the DTS text import utility: it generates a red line (column separator) at the 85-character mark, which causes the second field to wrap. We cannot move this line or delete it. Any idea what is going on?
I have a small project on that involves importing a series of csv files held within an FTP directory into our data warehouse. Every day a series of csv files will be added to the directory. These will be named something like:
Audit1.csv, Audit2.csv, etc.
I would like to automate this process, as it can involve up to 400 files at a time. The procedure would need to identify a valid file, import it into the database, delete the file, and then move on to the next one.
Does anyone know of a way to achieve this? I was thinking along the lines of using a cursor and bcp, but I'm not sure how to identify these files to the database, i.e. how do I make it step through the directory and process the files?
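A hedged sketch of the cursor-plus-bulk-load idea in T-SQL; the directory, table name, and file pattern are made up, and xp_cmdshell must be available to the job's login:

    -- List the directory, walk the names, load each file, then delete it.
    CREATE TABLE #Files (FileName varchar(260))

    INSERT INTO #Files
    EXEC master..xp_cmdshell 'dir /b C:\ftp\audit\*.csv'

    -- dir emits a NULL row at the end; drop it and anything unexpected.
    DELETE FROM #Files WHERE FileName IS NULL OR FileName NOT LIKE 'Audit%.csv'

    DECLARE @f varchar(260), @sql varchar(1000)
    DECLARE file_cur CURSOR FOR SELECT FileName FROM #Files
    OPEN file_cur
    FETCH NEXT FROM file_cur INTO @f
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = 'BULK INSERT dbo.AuditImport FROM ''C:\ftp\audit\' + @f
                 + ''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
        EXEC (@sql)

        SET @sql = 'del C:\ftp\audit\' + @f
        EXEC master..xp_cmdshell @sql

        FETCH NEXT FROM file_cur INTO @f
    END
    CLOSE file_cur
    DEALLOCATE file_cur
    DROP TABLE #Files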
In a DTS package I have a text file import object, a data pump, and a SQL object. The text file import object has been set up to splice a 500-character-wide file into 20 columns. The data pump task does a copy column for all the columns into the appropriate table. What I need is a way of changing the file name I specify in the text import object. I have 12 months' worth of data in separate files (DBF0199.TXT, DBF0299.TXT, DBF0399.TXT, etc.) which all use the same format. Is there a way to change the text import object's file name inside the package, using an ActiveX script task or something?
I'm trying to import a fixed-field text file into SQL Server using DTS, but every time I go past 3640 characters, I am not able to add, delete, or move column breaks after that point. Is anyone else experiencing this problem, and does anyone know of a workaround? Any help would be appreciated. Thanks!
Hi all, I'm having a problem importing an Excel file into SQL Server 2000. Here is the scenario with my data: one of the columns has mixed data, which is putting nulls in the SQL Server table in some rows. I found on Microsoft TechNet that this is a bug in SQL Server 7.0/2000, and the workaround (according to Microsoft) is to get the data into a text file and import that into SQL Server. Now the question: my data contains some currency and numeric fields in addition to the char and date fields, and when I import the table using the DTS wizard, it fails. I've tried using conversion functions like CDate and CLng, and the DTS still fails. What I've noticed is that when I import into a table with varchar for all columns, it works fine, but then the data is of no use. I would appreciate it if anyone could help me solve this problem. Thanks, Sammy.
Quick advice question. I import lots of text files, many with 50-plus data columns. Few come with a table layout, other than perhaps the first row holding a set of column names.
When I pull them into SQL Server, the columns default to varchar(8000). Is anyone aware of a tool (as part of SQL Server or otherwise) that can scan a column of data and recommend a data type and size?
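Not aware of a ready-made tool, but after staging everything as varchar, a quick profiling query per column gets most of the way there; names below are placeholders:

    SELECT
        MAX(LEN(SomeColumn))                                       AS MaxLength,
        SUM(CASE WHEN ISNUMERIC(SomeColumn) = 1 THEN 1 ELSE 0 END) AS NumericRows,
        SUM(CASE WHEN ISDATE(SomeColumn) = 1 THEN 1 ELSE 0 END)    AS DateRows,
        COUNT(*)                                                   AS TotalRows
    FROM dbo.StagingTable
    WHERE SomeColumn IS NOT NULL

If NumericRows equals TotalRows the column is a numeric candidate, and MaxLength suggests the varchar size; beware that ISNUMERIC accepts oddities like '$' and 'e' notation, so treat the result as a recommendation, not a verdict.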
Hi all, I use the DTS Wizard to import a text file into my MS SQL Server 2000 every month. This text file contains 236 text fields and its format never changes. The problem I've found is that the DTS wizard sets each of the destination fields to varchar (which is fine) but the size to 8000! I have to go through each one of the fields to reduce the size down to 255. Is there any way to change the default field size? Cheers & Merry Christmas, Dave
I have created a DTS package that imports a text file into my data table. I get errors whenever I run it, since there are fields in the table that are numeric. I understand that I need to create an ActiveX script to import those fields. Does anyone have any guidance?
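Whether or not an ActiveX script is used, one hedged alternative is to land everything as varchar in a staging table and do the conversion in T-SQL, where the offending rows are easy to isolate; all names below are invented:

    -- Convert staged text to numeric, taking only convertible rows.
    -- ISNUMERIC is permissive; tighten the test if values like '1e5' appear.
    INSERT INTO dbo.Target (Id, Amount)
    SELECT CAST(IdText AS int),
           CAST(AmountText AS decimal(10, 2))
    FROM dbo.Staging
    WHERE ISNUMERIC(IdText) = 1
      AND ISNUMERIC(AmountText) = 1

    -- Rows that failed the numeric test, kept for manual review.
    SELECT *
    FROM dbo.Staging
    WHERE ISNUMERIC(IdText) = 0
       OR ISNUMERIC(AmountText) = 0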
Does anyone know if it's possible to use the wizard or DTS Designer to accept a source file with the following simplified format:

<field1label>: <record1field1value>
<field2label>: <record1field2value>
- - - - - - -
<fieldNlabel>: <record1fieldNvalue>
<field1label>: <record2field1value>
<field2label>: <record2field2value>
etc.

i.e. each input record is delimited by {LF}{LF}, and each column by {LF}. Or will it be necessary to write a Perl script (say) to convert it first into a .csv file?
Thanks, Dave Stone, Computing Services, University of Edinburgh
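For what it's worth, the reshape can also be done inside SQL Server; a rough, untested sketch assuming exactly three "label: value" lines per record followed by one blank separator line, with all names invented:

    CREATE TABLE dbo.LabelledLines (
        LineNo  int IDENTITY(1,1) PRIMARY KEY,
        RawLine varchar(8000)
    )
    GO
    -- Load through a single-column view so the identity column is skipped.
    CREATE VIEW dbo.LabelledLinesLoad AS SELECT RawLine FROM dbo.LabelledLines
    GO
    BULK INSERT dbo.LabelledLinesLoad
    FROM 'C:\feeds\records.txt'
    WITH (ROWTERMINATOR = '\n')

    -- Each record occupies 4 physical lines: 3 fields plus the blank line.
    SELECT
        MAX(CASE WHEN FieldPos = 0 THEN FieldValue END) AS Field1,
        MAX(CASE WHEN FieldPos = 1 THEN FieldValue END) AS Field2,
        MAX(CASE WHEN FieldPos = 2 THEN FieldValue END) AS Field3
    FROM (
        SELECT (LineNo - 1) / 4 AS RecordNo,
               (LineNo - 1) % 4 AS FieldPos,
               LTRIM(SUBSTRING(RawLine, CHARINDEX(':', RawLine) + 1, 8000)) AS FieldValue
        FROM dbo.LabelledLines
    ) AS x
    GROUP BY RecordNo
    ORDER BY RecordNo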
Thank you for the help and support you have given me. Now I am confronted with a new problem: I have to import some text files into SQL Server tables, and I have to create a tool in C# to automate the porting. The columns in the text file are separated with a pipe "|". If anybody knows how to do this, please help me.
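On the database side, a pipe is just another field terminator, so the C# tool could simply send a BULK INSERT through SqlCommand; a minimal sketch with made-up names:

    BULK INSERT dbo.ImportTable         -- hypothetical target
    FROM 'C:\feeds\export.txt'          -- hypothetical path
    WITH (
        FIELDTERMINATOR = '|',
        ROWTERMINATOR   = '\n'
    )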
Something I find myself wanting to do frequently is the traditional foreach loop through a directory of files, importing each one (which works great in SSIS), only I don't want to import data I have already imported.
In a previous job I used Perl for things like this, and the structure was as follows:
For Each File:
1. Get filename and timestamp
2. Query a db table holding the list of already-imported filenames and timestamps. If the filename is not in the table, or is in the table with an older date, return 1 (import); if the file has already been imported, return 0.
3. Based on the result of step 2 either import or skip to next file.
Any recommendations on how to do something similar in SSIS? I get stuck when I try to get the timestamp of the file, and I also can't figure out how to do the conditional inside the foreach container. I am also open to other ideas on how to import only files I have not already imported.
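The file-system pieces (a Script Task calling File.GetLastWriteTime, then an expression on the precedence constraint after it) are hard to show briefly, but the database side an Execute SQL Task could run might look like this; every name here is invented:

    -- Tracking table for files already loaded.
    CREATE TABLE dbo.ImportedFiles (
        FileName     nvarchar(260) NOT NULL PRIMARY KEY,
        FileModified datetime      NOT NULL
    )

    -- Check query: @FileName/@FileModified are bound from package
    -- variables; returns 1 to import, 0 to skip.
    DECLARE @FileName nvarchar(260), @FileModified datetime
    -- SET @FileName = ..., @FileModified = ...   (bound as parameters in SSIS)

    SELECT CASE
             WHEN NOT EXISTS (SELECT 1 FROM dbo.ImportedFiles
                              WHERE FileName = @FileName
                                AND FileModified >= @FileModified)
             THEN 1 ELSE 0
           END AS ShouldImport

Map ShouldImport to a package variable and test it in the precedence constraint's expression to get the conditional inside the foreach container.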
Hi, I'm new to DTS and need to be able to import a text file into a table each day.
The main problem I have is that the file is datestamped, so the name of the file changes each day.
Today it would be called file20070419.txt; tomorrow it would be file20070420.txt. When I select a text file as a source, I have to pick a valid file. How can I get round this?
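If a plain T-SQL job step is an option alongside (or instead of) DTS, the dated name can be built and run dynamically; a hedged sketch with an invented path and table:

    -- Build today's name, e.g. file20070419.txt, and load it.
    DECLARE @file varchar(300), @sql varchar(1000)
    SET @file = 'C:\feeds\file' + CONVERT(varchar(8), GETDATE(), 112) + '.txt'

    SET @sql = 'BULK INSERT dbo.DailyImport FROM ''' + @file
             + ''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)

Within DTS itself, the usual answer is a Dynamic Properties task or an ActiveX script that rewrites the connection's DataSource before the transform runs.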
Hi all, I am looking for examples of scripts that will help me do these things:
- import a text file delimited with the character "*", representing a new month of data, for example data from March 2007
- create a new table with the structure of an existing one to import the data into, for example Data_March_2007
- alter an existing totals table, adding a new column for the newly imported month of March 2007
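A hedged sketch of all three steps, with the template table, file path, and column type guessed:

    -- 1. Clone the structure of an existing table (no rows copied).
    SELECT *
    INTO dbo.Data_March_2007
    FROM dbo.Data_Template
    WHERE 1 = 0

    -- 2. Import the '*'-delimited monthly file into it.
    BULK INSERT dbo.Data_March_2007
    FROM 'C:\feeds\march2007.txt'
    WITH (FIELDTERMINATOR = '*', ROWTERMINATOR = '\n')

    -- 3. Add a column for the new month to the totals table.
    ALTER TABLE dbo.Totals ADD March_2007 money NULL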
Hello everyone, and thanks for your help in advance. I am having a problem importing a text file into SQL Server 2005 and can't figure out where the issue is. The file is in CSV format with " as the text qualifier. When attempting to import the file into a SQL Server 2005 table, the import fails due to numerous truncation errors. I have tried importing into an existing table, and I have also tried letting the import create the table; I receive the same failures in both cases. For whatever reason, when the import creates the table, each column is created as nvarchar(50), even though the column sizes vary widely. Oddly enough, when importing into a SQL Server 2000 table with the correct layout, the file imports perfectly fine, thus verifying there is nothing wrong with the data source. It also creates the table with appropriate column sizes in SQL 2000. I'm really at a loss as to what is going on. Any assistance would be greatly appreciated. Thanks.
Every day, users FTP text files to a directory on the server. During the night, a job runs to import these text files into a table.
First, the job needs to read the file name, then open the file, read the first line, and insert it into the table, continuing until the end of the file. Then the second file is read, and so on, until there are no more files to read.
I am using VB to read the file name and open the file, and my question is: how do I pass the file name to the second step, which is in T-SQL?
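One hedged pattern: wrap the T-SQL step in a stored procedure that takes the file name, and have the VB step call it (through ADO, say); names are illustrative:

    CREATE PROCEDURE dbo.ImportTextFile
        @FileName varchar(260)
    AS
    BEGIN
        DECLARE @sql varchar(1000)
        SET @sql = 'BULK INSERT dbo.NightlyImport FROM ''' + @FileName
                 + ''' WITH (ROWTERMINATOR = ''\n'')'
        EXEC (@sql)
    END

From VB the call is then just EXEC dbo.ImportTextFile @FileName = 'C:\ftp\incoming\orders1.txt' with the name substituted at run time.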
If you have both invoice header lines and invoice detail lines in a comma-delimited file, how can I get the data in the file imported into two different tables? I can produce a text file, e.g.:

1,20,10/03/2002,39 High Street Any Town,,
2,20,Fluffy Slippers,2,Red,10.99
2,20,Pyjamas,3,Black,15.99
2,20,Trousers,1,Lilac,24.99
1,21,10/03/2002,11 Gibson Close,
2,21,Sandles,1,Black,12.99
2,21,Shoes,4,Blue,23.99
1,22,13/03/2002,45 Mill Street,
2,22,Womble Feet,4,White,16.99
2,22,Glass Slipper,1,Transparent,23.99

Lines with 1 in the first column should go into the InvoiceHeader table; lines with 2 in the first column should go into the InvoiceDetails table.
I have tried with DTS but to no avail; ActiveX scripts in the Transform Data task only seem to be able to access one data destination, i.e. one table, not two.
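A hedged alternative to a two-destination Transform Data task: land every raw line in one staging table, then route by the record-type column with two filtered inserts. Names are invented, and each subset still needs its fields split out afterwards (string functions, or a second pass with the right column layout):

    CREATE TABLE dbo.InvoiceStaging (RawLine varchar(8000))

    BULK INSERT dbo.InvoiceStaging
    FROM 'C:\feeds\invoices.txt'
    WITH (ROWTERMINATOR = '\n')

    -- Header records start with "1,", detail records with "2,".
    INSERT INTO dbo.InvoiceHeaderRaw (RawLine)
    SELECT RawLine FROM dbo.InvoiceStaging WHERE RawLine LIKE '1,%'

    INSERT INTO dbo.InvoiceDetailsRaw (RawLine)
    SELECT RawLine FROM dbo.InvoiceStaging WHERE RawLine LIKE '2,%'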
I have a fixed-field text file I am trying to set up an import for in SQL 2000. If I use Access 2000, the file reads fine, but if I use the SQL import feature or DTS, SQL doesn't line up the records correctly. I tried all the combinations of row terminators with no luck, yet Access works just fine. Any ideas?
Hi, I need to know how to import data from a .txt file into MS SQL Server. It is really important and I have only 1 hour, so any help is appreciated. Thanks
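Assuming a simple delimited file and a table that already matches it (both names below are placeholders), the quickest built-in route is BULK INSERT:

    BULK INSERT dbo.MyTable
    FROM 'C:\data\myfile.txt'
    WITH (
        FIELDTERMINATOR = '\t',   -- adjust to the file's delimiter
        ROWTERMINATOR   = '\n'
    )

Failing that, right-click the database in Enterprise Manager, then All Tasks > Import Data to use the DTS wizard interactively.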