Hi…
During my web search looking for a solution I ran across SQL CE 3.5 articles. My questions about SQL CE 3.5 are:
1) Can SQL CE 3.5 handle a 4-6 GB file?
- Read
- Parse (SQL)
2) Can SQL CE 3.5 act as a standalone client through which a user can view a large (4-6 GB) text file?
- Will I need a .NET (small) client to read the large (4-6 GB) text file?
More info:
The text file will reside on the machine where SQL CE 3.5 is installed; there is no need to pull the data from another machine.
I need to read a text file into a SQL Server 6.5 table. The file has variable-length fields, and the fields are separated by plus signs ("+"). Any ideas? Thanks for your time.
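A minimal sketch of one way to do this, assuming a hypothetical staging table and file path. BULK INSERT accepts '+' as a field terminator, but it only exists from SQL Server 7.0 onwards; on 6.5 the command-line bcp utility with -t "+" is the closest equivalent.

-- Hypothetical staging table matching the file layout.
CREATE TABLE dbo.PlusDelimitedStage
(
    Field1 VARCHAR(255),
    Field2 VARCHAR(255),
    Field3 VARCHAR(255)
)

-- Load the plus-sign-delimited file (SQL Server 7.0+ syntax).
BULK INSERT dbo.PlusDelimitedStage
FROM 'C:\data\import.txt'
WITH (FIELDTERMINATOR = '+', ROWTERMINATOR = '\n')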
How do I insert a large chunk of text into a table column? My project is to build a news website where people can go and read news articles. The articles are provided by the author in Word format, so how do I insert a news article into the table's column? Any help would be appreciated.
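A minimal sketch, assuming a hypothetical Articles table and SQL Server 2005 or later: an NVARCHAR(MAX) column (TEXT on earlier versions) holds the article body, and the application passes the text extracted from the Word document as a parameter.

-- Hypothetical table; NVARCHAR(MAX) avoids the 8,000-character limit of plain VARCHAR.
CREATE TABLE dbo.Articles
(
    ArticleID INT IDENTITY(1,1) PRIMARY KEY,
    Title     NVARCHAR(200),
    Body      NVARCHAR(MAX)
)

-- From the application, pass the article text as a parameter rather than
-- concatenating it into the SQL string.
INSERT INTO dbo.Articles (Title, Body)
VALUES (@Title, @Body)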
What is the easiest way to get the definition of a large fixed-width text file (200 columns) into SSIS? Having to define each column with the ruler would be very cumbersome.
Hi all, I need to load text (CSV) files into SQL Server using a text reader. Can anyone give me the code for that? I want to read the file in a web page only; I can't use BULK INSERT. First I will read the file into a DataSet, then I want to update the SQL Server table with it. Thank you.
I have a table that I'm inserting a file into, using the Image data type to store the binary object. The code below works fine for files around 1.5 MB, but for anything larger it's as if the code doesn't even execute and I get a Page Not Found error. I'm in the process of running some traces to find out what's going on in the back end, but I'm assuming there's something amiss with my code. The Image data type should handle files that size with no problem, but for some reason it isn't. Does anyone see anything wrong? Thanks

Dim iLength As Integer = CType(File1.PostedFile.InputStream.Length, Integer)
If iLength = 0 Then Exit Sub 'not a valid file

Dim sContentType As String = File1.PostedFile.ContentType
Dim sFileName As String, i As Integer
Dim bytContent As Byte()
ReDim bytContent(iLength - 1) 'byte array sized to the file (VB upper bound is length - 1)

'strip the path off the filename
i = InStrRev(File1.PostedFile.FileName.Trim, "\")
If i = 0 Then
    sFileName = File1.PostedFile.FileName.Trim
Else
    sFileName = Right(File1.PostedFile.FileName.Trim, Len(File1.PostedFile.FileName.Trim) - i)
End If

conn = New SqlConnection(eco)
conn.Open()
cmd = New SqlCommand("INSERT INTO ECO_Attachments (ECOID, FromType, DocName, OldRev, NewRev, NtLogin, DisplayName, FileName, FileSize, FileData, ContentType) VALUES (@ECOID, @FromType, @DocName, @OldRev, @NewRev, @NtLogin, @DisplayName, @FileName, @FileSize, @FileData, @ContentType)")
cmd.Connection = conn

Try
    'read the posted file into the byte array
    File1.PostedFile.InputStream.Read(bytContent, 0, iLength)
    With cmd
        .Parameters.Add("@ECOID", SqlDbType.Int)
        .Parameters.Add("@FromType", SqlDbType.NVarChar, 50)
        .Parameters.Add("@DocName", SqlDbType.NVarChar, 250)
        .Parameters.Add("@OldRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NewRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NTLogin", SqlDbType.NVarChar, 100)
        .Parameters.Add("@DisplayName", SqlDbType.NVarChar, 200)
        .Parameters.Add("@FileName", SqlDbType.NVarChar, 255)
        .Parameters.Add("@FileSize", SqlDbType.Real)
        .Parameters.Add("@FileData", SqlDbType.Image)
        .Parameters.Add("@ContentType", SqlDbType.NVarChar, 50)
        .Parameters("@ECOID").Value = ECOID
        .Parameters("@FromType").Value = From
        .Parameters("@DocName").Value = DocName
        .Parameters("@OldRev").Value = OldRev
        .Parameters("@NewRev").Value = NewRev
        .Parameters("@NTLogin").Value = NTLogon
        .Parameters("@DisplayName").Value = DisplayName
        .Parameters("@FileName").Value = sFileName
        .Parameters("@FileSize").Value = iLength
        .Parameters("@FileData").Value = bytContent
        .Parameters("@ContentType").Value = sContentType
        .ExecuteNonQuery() '.ExecuteScalar()
    End With
Catch ex As Exception
    Response.Write(ex) 'Handle your database error here
    conn.Close()
End Try
Hello everybody, I've got a little problem which I've been trying to solve for 1-2 years, and I hoped it would go away with SQL 2005 - but that wasn't the case :(.
Situation: I've just bought a new server containing:
- SQL 2005, 64-bit environment
- 4 GB RAM
- 2x AMD Opteron 2 GHz processors (dual core)
- 2x RAID controllers (RAID 1) containing: 1.1 System, 1.2 Data, 2.1 Transaction Logs
I've created a full-text table containing all the search terms I need to search. Table build:
- RecID - int - primary key
- SrcID - varchar(30)
- ArticleID - int - referring to an original table
- SearchField - varchar(150) - containing the search terms
- timestamp - timestamp field
Full-text index:
- RecID as primary key
- SearchField as indexed field - word breaker: Neutral (containing several languages), accent sensitivity off
Now I've imported different tables into it, resulting in a table size of ~13 million rows.
Performance on this catalog is fine if I search for a term that isn't contained in more than 200-300 records, but if I search for a term that can occur in 200,000 records or more it gets extremely slow.
On the slow query the first records come in almost immediately, but up to 60 seconds pass before the query finishes. The problem is that I have to sort by a ranking value which is stored externally, so I need all the results in order to sort them...
Current (debugging) query:

SELECT ArticleID
FROM fullTextTable AS ft
INNER JOIN CONTAINSTABLE(FullTextCatalog, SearchField, '"term*"') AS ftRes
    ON ftRes.[KEY] = ft.idEntry
Now if I check in Performance Monitor: as soon as I run the query, the 'Avg. Disk Read Queue Length' counter on disk D (SQL data files) jumps to the top until the query has finished. There is almost no read/write activity on C:, where the full-text catalog is stored...
If I rerun the query after it has finished once successfully, it completes in under 1-2 seconds; it would be nice to get that result the first time :).
Hi, we are using the SQL Server 2005 Full-Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned from FTS is large; for some searches it takes a very long time. With the same data and catalog, full-text queries for less common words return promptly. The nature of the problem doesn't allow us to return only the top results; we need all the results. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?
If splitting is the only way, can you give me an idea of a statistical/standard limit to the number of search results or catalog size at which FTS still gives good results?
I've created a linked server (and set up the corresponding schema.ini file) in order to perform bulk-inserts from some CSV text files into SQL tables (from my standpoint the text files are just for reading purposes). The linked server works fine (I can select the data in the files without a problem).
Now the question: is it possible to automatically detect when one or more of those files change, in order to start the import process automatically? Something like having a trigger on the CSV files. Or is there no easy way to do that, so I have to, say, create a job that periodically checks programmatically whether the files have changed (for example, recording each file's timestamp every time it is imported and comparing the recorded value with the current one, or whatever)?
I think we don't have an option to read the transaction log file in SQL Server other than using Log Explorer. Does Log Explorer work well for auditing SQL Server? We are planning to buy Log Explorer. Is it a good product to buy?
I have created a C2 audit .trc file in SQL Server 2000 on the Windows platform. I would like to be able to read it. I have tried the following, but I get a message that the file does not exist, is not a recognizable trace file format, or there was an error opening the file.
SELECT * FROM ::fn_trace_gettable('D:\Program Files\Microsoft SQL Server\MSSQL\Data\audittrace_20070207073931.trc', default)
GO
I know the file and the path are valid. What else could be the problem?
I have 2 tables, EmployeeA (English) and EmployeeB (Spanish), kept in separate mdb's. I want to add the records into a SQL Server 2005 table called StateEmployee.
Procedure:
1. Loop through 2 folders: one containing table EmployeeA in an mdb and the other containing table EmployeeB in a different mdb.
2. Pick a file from EmployeeA and EmployeeB, both at the same time.
3. Count the total number of rows in both files. If equal, proceed.
4. Compare the 'employeeid' of one row of EmployeeA to the employeeid of EmployeeB.
5. If the employee id matches, load both rows into SQL Server; otherwise file it to the error table.
6. Loop through all rows simultaneously until the end of the rows.
7. Go to the next mdb.
How do I go about this step by step? I am fairly new to SSIS. I asked my friends too, but they gave complex answers which I couldn't follow. I hope someone can give an easy-to-understand solution with a sample.
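Once the two Access tables have been landed in staging tables (for example with one data flow per mdb), the matching and error routing in steps 4-5 can be expressed as plain T-SQL in an Execute SQL task. A rough sketch only; the staging table and column names below are hypothetical.

-- Rows whose employeeid exists in both files go to the target table.
INSERT INTO dbo.StateEmployee (EmployeeID, NameEnglish, NameSpanish)
SELECT a.EmployeeID, a.EmployeeName, b.EmployeeName
FROM dbo.StageEmployeeA AS a
JOIN dbo.StageEmployeeB AS b
    ON b.EmployeeID = a.EmployeeID

-- Rows present in only one of the two files go to the error table.
INSERT INTO dbo.EmployeeLoadErrors (EmployeeID, SourceTable)
SELECT a.EmployeeID, 'EmployeeA'
FROM dbo.StageEmployeeA AS a
WHERE NOT EXISTS (SELECT 1 FROM dbo.StageEmployeeB b WHERE b.EmployeeID = a.EmployeeID)
UNION ALL
SELECT b.EmployeeID, 'EmployeeB'
FROM dbo.StageEmployeeB AS b
WHERE NOT EXISTS (SELECT 1 FROM dbo.StageEmployeeA a WHERE a.EmployeeID = b.EmployeeID)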
I am trying to read a 36-byte file that contains compressed data. I create my Flat File data source and SSIS reads it fine UNTIL it hits an x00 in the file. Then it stops reading and I can't get any data after it. There is data after the x00. Here is the entire hex string: C7 C7 CF 6A 00 00 05 02 3D 03 21 01 E0 02 00 00 00 00 00 00 00 00 3D 3C 1E FD 02 C8 00 00 00 AE 41 E3 28 7C
To test, I changed the two x00 in bytes 5 and 6 to x01 and SSIS read until the next x00.
I have a challenge I am trying to overcome; hopefully someone has come across this issue before.
I am creating a DTS package that will be scheduled to run at a certain time every day. A source folder exists that gets a set of new files every day. The DTS package will then read each file and copy the data into a load table in my database. The challenge is this:
I am trying to load files from a source folder into my load table. Within each file, the entries are in a specific format, using pipes to separate the data that goes into each column, e.g.
example of a file entry:
column1 | column2 | column3
data1 | data2 | data3
data1 | data2 | data3
data1 | data2 | data3
Now I am using DTS to specify the file format and map the columns as appropriate to my table. All this is well and good, but my problem is that each file has a different name as well as being timestamped. How do I use DTS to specify the source folder, open each file sequentially and read it (or more appropriately, copy the entries into my table, inserting new data from each file into my load table as well as overwriting old data in the load table from the files in the folder)? Is there a way of specifying the source folder in DTS rather than specifying the file in the menu options given (in the Transform Data Task properties), or do I need to write a script for this (reading the files)?
Can someone please give me a solution and suggest how to approach this?
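One approach that avoids per-file configuration in DTS is to loop over the folder from T-SQL: list the files with xp_cmdshell and BULK INSERT each one into the load table. This is a sketch only; the folder, table name and options are assumptions, xp_cmdshell must be enabled, and the SQL Server service account needs read access to the folder.

-- Collect the file names in the source folder.
CREATE TABLE #files (fname VARCHAR(260))
INSERT INTO #files
EXEC master..xp_cmdshell 'dir /b C:\SourceFolder\*.txt'

DECLARE @f VARCHAR(260), @sql VARCHAR(1000)
DECLARE file_cur CURSOR FOR SELECT fname FROM #files WHERE fname IS NOT NULL
OPEN file_cur
FETCH NEXT FROM file_cur INTO @f
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Pipe-delimited load; FIRSTROW = 2 skips the column1|column2|column3 header.
    SET @sql = 'BULK INSERT dbo.LoadTable FROM ''C:\SourceFolder\' + @f + ''' '
             + 'WITH (FIELDTERMINATOR = ''|'', ROWTERMINATOR = ''\n'', FIRSTROW = 2)'
    EXEC (@sql)
    FETCH NEXT FROM file_cur INTO @f
END
CLOSE file_cur
DEALLOCATE file_cur
DROP TABLE #files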
We have a package that is using a ForEach loop container to access files on a network drive. For some reason I am getting a message that the ForEach enumerator is empty and did not find any files that matched the pattern. For the pattern I left the default *.* for testing purposes. I have specified the file folder as \\remoteserver\fileshare\subfolder and also as \\remoteserver\c$\fileshare\subfolder and have gotten the same message. However, when I map a network drive and set the file folder to the network drive, it finds the files. Is this a permissions issue?
After I finish processing the file I want to move it to a new directory. Once this is deployed in production, the package will not be running under a domain account and probably won't have access to the network folder. Is there any way to specify in the connection manager itself that it should use a specific account to access the folder?
Has anyone had any trouble moving a package that uses an OLE DB Connection Manager to read DBASE IV files? While developing I never had a problem; the configuration string described in this thread (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=76237&SiteID=1) worked just fine. Since I have enabled Configurations, my package always fails when trying to read the dbf file. I've gone through just about every setting in the config file that I can think of.
Information: 0x4004300A at MyDataFlow, DTS.Pipeline: Validation phase is beginning.
Error: 0xC0202009 at MyPackage, Connection manager "MyDBASEIVConnManager": An OLE DB error has occurred. Error code: 0x80040E21.
An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
Error: 0xC020801C at MyDataFlow, OLEDBFileSource [15]: The AcquireConnection method call to the connection manager "MyDBASEIVConnManager" failed with error code 0xC0202009.
Error: 0xC0047017 at MyDataFlow, DTS.Pipeline: component "OLEDBFileSource" (15) failed validation and returned error code 0xC020801C.
Error: 0xC004700C at MyDataFlow, DTS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at MyDataFlow: There were errors during task validation.
My requirement is that I have to read 2 sets of files from a folder. For example, I have to read all files starting with either 'a' or 'b' only. In the Foreach Loop, if I specify 'a*,b*', it does not work. Instead of the comma (,), I also tried colon, semicolon and pipe characters; none of them work. So I am using 2 loops now, but I would like to know whether there is any way to do it using a single loop?
I need to know how to import a text file into a stored procedure as one big varchar. I don’t want to import the data straight into my tables. I need to be able to work with it in the stored proc.
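On SQL Server 2005 and later, OPENROWSET(BULK ..., SINGLE_CLOB) returns the whole file as one value, which can be assigned to a variable inside the procedure. A minimal sketch; the file path is a placeholder and the account running the query needs rights to read it.

DECLARE @fileText VARCHAR(MAX)

-- Read the entire file as a single character value.
SELECT @fileText = BulkColumn
FROM OPENROWSET(BULK 'C:\data\input.txt', SINGLE_CLOB) AS f

-- @fileText can now be parsed and worked with inside the stored procedure.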
Guys, I need help! I know this is not the area for VBScript questions, but possibly I will find someone to help. Here is my question.
How can I read a text file of product IDs (the ProductID is only the first three characters at the beginning of each line -- for example, 220) and retrieve just those lines that meet a specified pattern?
Hello, I have a column (text datatype) and have to send an email as text (not an attachment) using CDONTS. I am reading the data from the text column, storing it in a varchar field and saying cdonts.Body = [data]. This way I can send the email in text format. Now, my problem is that when the length of the data is greater than 8000 chars it truncates the rest of the data, and the email I send is a truncated email, losing important data. How should I resolve this situation? I have been trying some different ideas, but nothing has worked yet. Finally, I am writing the entire content to a file and sending it as an attachment, but the requirement is to send it as body text. Any ideas? Let me know if you need more details! Thanks, -Hayat www.mysticssoft.com
Is there any way SQL Server can read a tab-delimited text file and compare each record with a column in a table?
My question is:
I have a Country_Code table which has 3-letter country codes, and the actual country names are listed in a tab-delimited text file, "Country Data", with country code and country name. How do I read each record and compare it to get the actual country name for display?
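A sketch of one way to do it, with a hypothetical file path and column names: bulk-load the tab-delimited file into a staging table, then join on the 3-letter code.

-- Staging table matching the two columns in the Country Data file.
CREATE TABLE dbo.CountryStage
(
    CountryCode VARCHAR(3),
    CountryName VARCHAR(100)
)

BULK INSERT dbo.CountryStage
FROM 'C:\data\CountryData.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')

-- Look up the display name for each code (the column name in Country_Code is assumed).
SELECT c.Country_Code, s.CountryName
FROM dbo.Country_Code AS c
JOIN dbo.CountryStage AS s
    ON s.CountryCode = c.Country_Code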
I have text output files which are semi-structured (headers + irregular-length tables below).
Is there a simple method of getting them into SQL format (line by line) to try to extract data from them?
I know this won't be easy, but it's been worrying me for a long time. I have a method of importing the data into Excel but, although difficult, it must be possible to get it into SQL Server. This must be a fairly common issue.
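A minimal sketch of the line-by-line staging idea: load each raw line of a file into a single wide column and parse it afterwards with string functions. The table and path are hypothetical, and it assumes the lines contain no tab characters (the default field terminator).

CREATE TABLE dbo.RawReportLines (LineText VARCHAR(8000))

-- Each line of the file becomes one row.
BULK INSERT dbo.RawReportLines
FROM 'C:\reports\output1.txt'
WITH (ROWTERMINATOR = '\n')

-- From here, header lines and table rows can be separated and parsed
-- with CHARINDEX/SUBSTRING/LEFT as appropriate.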
Hi everyone. I have a directory that contains a lot of text files that have data I need to draw from. I want to know if it is possible to write a program that will read all of the text files in the directory, pull out data and save it to a new text file. For example, each text file is formatted this way:
Column1, Column2, Column3
"1","xxxx","yyyy"
"2","xxxx","yyyy"
"3","XXXX","yyyy"
I want to put all lines that begin with 1 in one text file, all the lines that begin with 2 in another text file, and the same with all lines that begin with 3. My problem is that I want to be able to point at the folder that contains those files and have it read every text file in the folder and perform the operation. If this is possible, can someone point me in the right direction on how to get started? Thank you for any help!
How would I be able to query a table (i.e. all people with last name 'Smith') and have that set of data output to a regular text file (in a formatted way)?
And what's the best way to manipulate that set of data to, let's say, update a Yes/No field in that table to mark those individuals ('Smith') who were output to that text file?
What about the reverse? If I have a regular text file with Last Name, Social Security number, etc. (delimited by tabs), is there a way I can get SQL Server to read that text file and update the database based on the Social Security number in that text file?
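A sketch of both directions, with hypothetical table, column and path names. The export uses bcp queryout (run here through xp_cmdshell; -c writes character data with tab delimiters, -T uses a trusted connection), the flag update is a plain UPDATE, and the reverse direction bulk-loads the incoming file into a staging table and updates by joining on the Social Security number.

-- 1) Export the 'Smith' rows to a tab-delimited text file.
EXEC master..xp_cmdshell
    'bcp "SELECT FirstName, LastName, SSN FROM MyDb.dbo.People WHERE LastName = ''Smith''" queryout C:\out\smiths.txt -c -T -S MYSERVER'

-- 2) Mark the rows that were exported (hypothetical Yes/No column).
UPDATE dbo.People
SET Exported = 1
WHERE LastName = 'Smith'

-- 3) Reverse direction: after bulk-loading the incoming tab-delimited file
--    into a staging table dbo.PersonStage (LastName, SSN), update by SSN.
UPDATE p
SET p.LastName = s.LastName
FROM dbo.People AS p
JOIN dbo.PersonStage AS s
    ON s.SSN = p.SSN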
A section of this company's intranet site, where I just started interning, has small company anniversary and birthday sections that look like this (for the anniversary section; the birthday section looks the same, except it doesn't say how old the employee is):
- Steve Cunningham 6/1 - 6 yrs
- Andrew Brown 6/3 - 11 yrs
- Lisa Stone 6/4 - 3 yrs
How can I get it so that instead of manually changing that text every month, it will look at a SQL database and automatically change the text every month? I'm guessing the pseudocode would be: if the birthday or anniversary month matches the current month, display the first and last name, the date, and the number of years (which would have to be calculated, maybe?). Any help would be GREAT! Thanks!
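A rough sketch of the monthly query, assuming a hypothetical Employees table with HireDate (and, analogously, BirthDate) columns; the page would run a query like this instead of holding hard-coded text.

-- This month's anniversaries, with years of service.
-- DATEDIFF(YEAR, ...) counts calendar-year boundaries, which is adequate
-- when the rows are already filtered to the anniversary month.
SELECT
    FirstName,
    LastName,
    HireDate,
    DATEDIFF(YEAR, HireDate, GETDATE()) AS YearsOfService
FROM dbo.Employees
WHERE MONTH(HireDate) = MONTH(GETDATE())
ORDER BY DAY(HireDate)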
I have inherited some databases with extremely large log files. I tried to truncate the transaction log, but it did not work. Can somebody please tell me how to truncate these log files?
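A sketch for SQL Server 2000/2005 (WITH TRUNCATE_ONLY was removed in 2008): truncate the log, then shrink the physical log file. The database and logical file names are placeholders; sp_helpfile shows the logical name. Truncating breaks the log backup chain, so take a full backup afterwards if the database uses the full recovery model.

USE MyDatabase

-- Discard the inactive portion of the log without backing it up.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY

-- Shrink the physical log file to roughly 100 MB (logical name from sp_helpfile).
DBCC SHRINKFILE (MyDatabase_Log, 100)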
I have text data files from a third party; they use commas as field delimiters and enclose the text for each column in double quotes. This is not a problem for most of the data files, until they start sending files where there is a " character within the column values. The SSIS package fails with the error:
The column delimiter for column "Column 1" was not found.
Any ideas on how to resolve this issue will be greatly appreciated. Thanks, pcp
I'm wondering if there is any way to get SSIS to notice, in the Flat File Source, that a "Ragged right" text input file has a record that is too short to populate all the specified columns.
I am reading data from a file that is supposed to be fixed length records, but record 193,591 (out of approx. 500,000) is 20 bytes short of the fixed length (60 bytes). So I changed the input to "ragged right" and found that I can thereby continue to read the file, and load the data (after setting the "maximum errors" to a number greater than the initial "1"). (Without this change to "ragged right", every record after the bad one was "out of synch" with the column arrangement -- so they never made it into the database table destination.)
But the "failures" I am now getting are during the Data Conversion step, when I try to convert some columns to integers (from text, in the input stream). And by looking at the data with a "Redirect Row" setting for the Data Conversion step, I am able to see that the Flat File Source is reading "right past the end of the row."
Is there a way to get the Flat File Source to honor the CR-LF record terminator, and decide that some text columns should contain "nothing" (NULL or zero-length strings), rather than the bytes that contain the CR-LF and the initial text from the next record? Can this somehow be noticed as an error condition?
I'm trying to read a fixed-width text file with very long records (over 7,000 characters) in an SSIS package. The first record appears OK in the 'preview' in the connection manager setup, but each record after that is offset by 2 characters (record 2 offset by 2, record 3 offset by 4, record 4 offset by 6 and so on), as if it's inserting special characters.