I would like to take one table and split it into multiple files (tablefile1, tablefile2, tablefile3, ...), each with 50K rows. We currently have a package that does something like this; however, it performs poorly. The current package reads 50K rows, writes them to a file, and then writes the remaining rows to another file. Then it loops, renaming the output file that contains the remaining rows from the last iteration to be the input file of the next iteration. Even though this works, as the file grows large (into the millions of rows) repeatedly rewriting the remaining rows to a separate file becomes very disk-IO intensive and slow (over an hour). Ideally, it would be great to be able to split the table into files in one pass.
Another possibility would be to use an operating system command to split the file and just have the SSIS package read the split files and manipulate them (convert them to well-formed XML fragments). I am open to any solution that could help me improve the performance. Let me know if you know of any OS file-split utilities that can be called from a Job or from SSIS.
SET @RowCnt = 1
SET @date = CONVERT(CHAR(10), GETDATE(), 110)
SET @ArchPath = '\D$EDATAWorkFoldersSendSendData'
SELECT @TotalRows = COUNT(*) FROM table1
--SELECT @ArchPath

WHILE (@RowCnt <= @TotalRows)
BEGIN
    SELECT @AccountNumber = AccountNumber,
           @output_filename = Output_FileName   -- column name assumed; the original post omitted the assignment
    FROM table1
    WHERE Identity_Number = @RowCnt
    --PRINT @AccountNumber --test

    SELECT @sql = N'bcp "SELECT h.HeaderText, d.RECORD FROM table2 d INNER JOIN table3 h ON d.HeaderID = h.HeaderID WHERE d.AccountNumber = '''
                  + @AccountNumber + '''" queryout "' + @ArchPath + @output_filename + '.txt" -T -c'
    --PRINT @sql
    EXEC master..xp_cmdshell @sql

    SELECT @RowCnt = @RowCnt + 1
END
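One way to avoid the repeated rewriting is to assign each row a file number up front and export each 50K-row group once, so nothing has to be written twice. A minimal sketch of that idea, assuming a dense Identity_Number sequence as in the loop above; the database name, output path, and file naming are placeholders, and each bcp call still scans the table for its own slice:

DECLARE @FileNo INT, @MaxFileNo INT, @sql VARCHAR(4000)
DECLARE @ArchPath VARCHAR(256)
SET @ArchPath = '\\server\share\SendData\'   -- assumed path

-- bucket rows into 50K groups based on the sequence number
SELECT @MaxFileNo = MAX((Identity_Number - 1) / 50000) FROM table1

SET @FileNo = 0
WHILE @FileNo <= @MaxFileNo
BEGIN
    -- each bcp call exports only its own 50K-row slice; the remainder is never rewritten
    SET @sql = 'bcp "SELECT * FROM yourdb.dbo.table1 WHERE (Identity_Number - 1) / 50000 = '
             + CAST(@FileNo AS VARCHAR(10))
             + '" queryout "' + @ArchPath + 'tablefile' + CAST(@FileNo + 1 AS VARCHAR(10)) + '.txt" -T -c'
    EXEC master..xp_cmdshell @sql
    SET @FileNo = @FileNo + 1
END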
I have a FileTable that contains a binary file. I need to do a selective read of the file stored in the FileTable. I can write a C# CLR function that opens the file and reads n bytes from a starting byte. Or I can write a SQL statement that reads the stream in the FileTable into a VARBINARY variable using SUBSTRING, beginning at the starting byte (offset from 1) for the same n bytes.
Both give me the same result. However, the SQL statement takes considerably longer to read. I know there is overhead in reading through SQL (an interpreted language), but the difference in performance is substantial, and I can only attribute this degradation to SQL first trying to "load" the entire stream before it identifies the portion of the stream it needs, beginning at the starting byte offset.
I wonder if this is the case or if there is another option to read a stream from a filetable directly through SQL queries that is more efficient.
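For reference, the pure T-SQL version being compared here looks roughly like the sketch below; file_stream and name are the standard FileTable columns, while the table name, file name, and offsets are assumptions:

DECLARE @start BIGINT = 1048577   -- 1-based starting offset (assumed value)
DECLARE @n     INT    = 65536     -- number of bytes to read (assumed value)
DECLARE @chunk VARBINARY(MAX)

-- SUBSTRING over the FileTable's file_stream column; the suspicion is that SQL Server
-- materializes the whole stream before taking the slice, which would explain the cost
SELECT @chunk = SUBSTRING(file_stream, @start, @n)
FROM dbo.MyFileTable           -- hypothetical FileTable name
WHERE name = 'bigfile.bin'     -- hypothetical file name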
I am looking for a SQL query to verify the inserted values from one table (in a CSV file) against another table (in a SQL database).
For example: I have the Values column below, which is present in one CSV file; after my data migration the values get stored in the Results table under the Message column.
I need to verify whether the values (1X, 1Y) were inserted, i.e. that the Message column contains a record like "successfully inserted value 1X".
Values (CSV)
1X
1Y
Results Table (SQL)
CreatedDate               Message
2015-08-04 08:45:29.203   successfully inserted value 1X
2015-08-04 08:44:29.103   TEst pass
2015-08-04 08:43:29.103   successfully inserted value 1X
2015-08-04 08:42:29.203   test point
2015-08-04 08:35:29.203   successfully inserted value 1Y
2015-08-04 08:30:29.203   Test Pass
2015-08-04 08:28:29.203   successfully inserted value 1Y
If all values are inserted:

Output:
All values from the Values table were inserted successfully.
Total count of values inserted: 2

If only some values are inserted, for example only 1X from the Values table appears in Message:
Example: Results Table
CreatedDate               Message
2015-08-04 08:45:29.203   successfully inserted value 1X
2015-08-04 08:44:29.103   TEst pass
2015-08-04 08:43:29.103   successfully inserted value 1X
2015-08-04 08:42:29.203   test point
Output:
Not all values from the Values table were inserted successfully into the Results table.
Total count of values inserted: 1
Missing values not inserted into the Results table: 1Y
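A minimal sketch of a verification query, assuming the CSV values have first been loaded into a table dbo.ValuesStaging(Value); the staging table name, column names, and the exact message pattern are assumptions taken from the example above:

-- which expected values appear in Results.Message, and which are missing
SELECT v.Value,
       CASE WHEN EXISTS (SELECT 1 FROM dbo.Results r
                         WHERE r.Message = 'successfully inserted value ' + v.Value)
            THEN 'inserted' ELSE 'missing' END AS Status
FROM dbo.ValuesStaging v;

-- total count of distinct values that were inserted
SELECT COUNT(DISTINCT v.Value) AS InsertedCount
FROM dbo.ValuesStaging v
JOIN dbo.Results r
  ON r.Message = 'successfully inserted value ' + v.Value;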
SET @QUERY = 'INSERT INTO tblDocTable (FileName, FileType, ImportExportID, BuildingID, Document)
    SELECT ''' + @fName + ''' AS FileName,
           ''' + @fType + ''' AS FileType,
           ' + CAST(@fID AS nvarchar(18)) + ' AS ImportExportID,
           ''' + @bID + ''' AS BuildingID,
           *
    FROM OPENROWSET(BULK ''' + @fPath + ''', SINGLE_BLOB) AS Document'
EXEC (@QUERY)
This puts some values including a pdf or .doc file into a table, tblDocTable.
Is it possible to change this so that I can get the values from a table rather than as parameters? The query would be in the form of: INSERT INTO tblDocTable (a, b, c, d) SELECT a, b, c, d FROM tblImportExport.
tblImportExport has the path for the document (DocPath), so I would substitute that field, i.e. DocPath, for the @fPath variable.
Otherwise, I can only see doing a FETCH NEXT from tblImportExport, where I would put every field into a variable and then run this EXEC query on those values, thus looping through every row in tblImportExport.
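Since OPENROWSET(BULK ...) will not accept a column value for the file path directly, that cursor loop is indeed one workable option, building the dynamic SQL once per row. A minimal sketch, assuming tblImportExport has columns FileName, FileType, ImportExportID, BuildingID, and DocPath (column names and types are assumptions based on the post):

DECLARE @fName nvarchar(255), @fType nvarchar(10), @fID int, @bID nvarchar(50), @fPath nvarchar(500)
DECLARE @QUERY nvarchar(4000)

DECLARE doc_cursor CURSOR FAST_FORWARD FOR
    SELECT FileName, FileType, ImportExportID, BuildingID, DocPath
    FROM tblImportExport

OPEN doc_cursor
FETCH NEXT FROM doc_cursor INTO @fName, @fType, @fID, @bID, @fPath

WHILE @@FETCH_STATUS = 0
BEGIN
    -- same dynamic INSERT as above, but driven from the table row
    SET @QUERY = 'INSERT INTO tblDocTable (FileName, FileType, ImportExportID, BuildingID, Document)
        SELECT ''' + @fName + ''', ''' + @fType + ''', ' + CAST(@fID AS nvarchar(18))
        + ', ''' + @bID + ''', *
        FROM OPENROWSET(BULK ''' + @fPath + ''', SINGLE_BLOB) AS Document'
    EXEC (@QUERY)

    FETCH NEXT FROM doc_cursor INTO @fName, @fType, @fID, @bID, @fPath
END

CLOSE doc_cursor
DEALLOCATE doc_cursor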
Is there an easy way for a batch file or automated process to read from one database and table, find whatever entries are missing from another database and table, and write those missing entries to it?
This is a simple table in one database that is filled with usernames. I want to see if there are usernames missing from another database and table, and write those entries.
db1.usr_table.usr_name = jdoenew

If jdoenew is missing in the 2nd db, I will need to write entries like:

db1.usr_table.usr_name = jdoenew
db1.usr_table.password = tmppassword
db1.usr_table.active = 1
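If both databases are on the same SQL Server instance (or the second one is reachable via a linked server), a single set-based statement can do this, and scheduling it in a SQL Agent job gives the automation. A minimal sketch, with the table and column names taken from the post and db2 assumed as the name of the target database:

-- copy usernames that exist in db1 but not yet in db2, with default password/active values
INSERT INTO db2.dbo.usr_table (usr_name, password, active)
SELECT s.usr_name,
       'tmppassword',   -- assumed default
       1
FROM db1.dbo.usr_table AS s
WHERE NOT EXISTS (SELECT 1
                  FROM db2.dbo.usr_table AS t
                  WHERE t.usr_name = s.usr_name);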
First, a couple of important bits of information. Until last week, I had never touched SSIS, and therefore I know very little about it. I just never had the need to use it...until now. I was able to convert my first 3 flat files to SQL 2005 tables by right-clicking on "SSIS Packages" and choosing "SSIS Import and Export Wizard". That is the extent of my knowledge! So please, please, please be patient with me and be as descriptive as possible.
I thought I could attach some sample files to this post, but it doesn't look like I can. I'll just paste the information below in two separate code boxes. The first code box is the flat file specification, and the second is a sample single-line flat file similar to what I'm dealing with (the real flat file is over 2 GB).
My questions are below the sample files.
Code Snippet

Record Length: 400

Positions   Length   Field Name

Record Type 01
1,2     L=2    Record Type (Always "01")
3,12    L=10   Site Name
13,19   L=7    Account Number
20,29   L=10   Sub Account
30,35   L=6    Balance
36,37   L=1    Active
37,41   L=5    Filler

Record Type 02
1,2     L=2    Record Type (Always "02")
3,4     L=2    State
5,30    L=26   Address
31,41   L=11   Filler

Record Type 03
1,2     L=2    Record Type (Always "03")
3,6     L=4    Coder
7,20    L=14   Locator ID
21,22   L=2    Age
23,41   L=19   Filler

Record Type 04
1,2     L=2    Record Type (Always "04")
3,9     L=7    Process
10,19   L=10   Client
20,26   L=6    DOB
26,41   L=16   Filler

Record Type 05
1,2     L=2    Record Type (Always "05")
3,16    L=14   Guarantor
17,22   L=6    Guar Account
23,23   L=1    Active Guar
**There can be multiple 05 records, one for each Guarantor on the account**
and the single line flat file...
Code Snippet 01Site1 12345 0000098765 Y 02NY1155 12th Street 03ELL 0522071678 29 04TestingSmith,Paul071678 05Smith, Jane 445978N 05Smith, Julie 445989N 05Smith, Jenny 445915N 01Site2 12346 0000098766 N 02MN615 Woodland Ct 04InfoJones,Chris 012001 01Site3 12347 0000098767 Y 02IN89 Jade Street 03OWB 6429051282 25 04Screen New,Katie 879500
As you can see, each entry can have any number of records and multiples of some of the record types, with one exception: every entry must have a "01" record and can have only one "01" record. Also, each record has a length of 400.
I need to get this information into a SQL 2005 database so I can create a front end for accessing the data. Originally, I wanted one line for each account, with null values listed for entries that don't have a specific record. Now that I've looked at the data again, that doesn't look like a good idea. I think a better way would be to create 5 different tables, one for each record type. However, record types 2 through 5 don't have anything I can make a primary key. So here are my questions...
Is it possible to make 5 tables from this one file, one table for each of the record types?
If so, can I copy the Account Number from record 01, positions 13-19, into each of the subsequent record types (that way I could link the tables as needed)?
Can this be done using the SSIS Import and Export Wizard to create the package? If not, I'm going to need some very basic step-by-step instructions on how to create the package.
Is SSIS the best way to do this conversion, or is there another program that would be better to use? I know this is a huge question and I appreciate the help of anyone who boldly decides to help me! Thank you in advance and I welcome anyone's suggestions!
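For the question about carrying the 01 account number down to the other record types, here is a minimal T-SQL sketch (an alternative to doing it inside SSIS), assuming the raw 400-byte lines have first been bulk-loaded into a staging table dbo.RawLines(LineID int IDENTITY, RawLine char(400)); the staging and destination table names are assumptions, and the substring positions follow the spec above:

-- populate the record-type-02 table, copying the Account Number
-- from the most recent 01 record that precedes each 02 line
INSERT INTO dbo.Record02 (AccountNumber, [State], Address)
SELECT SUBSTRING(h.RawLine, 13, 7)  AS AccountNumber,   -- positions 13-19 of the owning 01 record
       SUBSTRING(d.RawLine, 3, 2)   AS [State],
       SUBSTRING(d.RawLine, 5, 26)  AS Address
FROM dbo.RawLines AS d
JOIN dbo.RawLines AS h
  ON h.LineID = (SELECT MAX(r.LineID)                    -- nearest 01 record above this row
                 FROM dbo.RawLines AS r
                 WHERE r.LineID <= d.LineID
                   AND SUBSTRING(r.RawLine, 1, 2) = '01')
WHERE SUBSTRING(d.RawLine, 1, 2) = '02'

The same pattern, with different SUBSTRING positions, would populate the 03, 04, and 05 tables.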
Hi... I have a .csv file (with headers) whose data I want to send to a temporary table in SQL. Do I need to read the data into a DataTable and then send it to the SQL table? Either way, I need help to do this... thank you...
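If the file is accessible from the SQL Server machine, one option is to skip the DataTable entirely and let the server load it. A minimal sketch, with the file path and the two-column layout assumed for illustration:

-- temp table shaped to match the CSV (columns assumed)
CREATE TABLE #csv_import (
    Col1 varchar(100),
    Col2 varchar(100)
)

-- FIRSTROW = 2 skips the header line
BULK INSERT #csv_import
FROM 'C:\import\data.csv'          -- assumed path, as seen by the SQL Server service
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)

SELECT * FROM #csv_import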
I have a .txt file. I want to create a table and put the following data into it. How can I do that? My text file looks like this (there are no commas in the text file).
#02781 MEMORIAL HOSPITAL OF MAR
PATIENT_NAME NO DT PATIENT_NAME NO DT
PAMELA 2345677 2008-01-01 ANDREW 45647567 2008-01-02
VINCENT 0349588 2007-01-05 SYLVIA 03945887 2007-01-06
ANITA 3958888 2006-01-09 JORDAN 45868886 2006-01-06
#02725 COASTAL CAROLINA MEDICAL
PATIENT_NAME NO DT PATIENT_NAME NO DT
RICHARD 3845789 2005-04-05 BROWN 39450880 2003-01-05
CARSON 0998348 2003-05-05 HANKS 99999345 2005-01-05
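One approach is to load every raw line of the report into a single-column staging table first and parse the patient columns out from there. A minimal sketch, with the table name and file path assumed:

-- staging table: one row per raw line of the report
CREATE TABLE dbo.RawReport (
    LineID  int IDENTITY(1,1),
    RawLine varchar(400)
)

BULK INSERT dbo.RawReport
FROM 'C:\import\patients.txt'      -- assumed path
WITH (ROWTERMINATOR = '\n')

-- data lines are the ones that are neither hospital headers (#...) nor column headers
SELECT RawLine
FROM dbo.RawReport
WHERE RawLine NOT LIKE '#%'
  AND RawLine NOT LIKE 'PATIENT[_]NAME%'

Splitting each remaining line into the two patients (name, number, date, twice per line) would then follow with SUBSTRING/CHARINDEX.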
I have 2 .txt files. File A has 3 columns. File B has 5 columns. File B has column names similar to A's, plus 2 extra columns. Data from both text files has to go into one single table. I want to use only one data flow task to do this. Is this possible? Thanks.
Hi, I have an online store and, as you can imagine, my prices can change once a week. I have loads of products in a products table in a SQL 2000 db. I have been updating the prices of these products manually, which is very painful. I would like to know how to take a CSV file with just a SKU number column and a price column, and update only the price in the products table. The products table has loads of columns, but I only want to update the price. The products table does not use the SKU number as its key; it uses a ProductId as the key. Obviously the CSV file does not have the ProductId field, but both have the SKU field, so I would like to use that as the lookup key. Could someone please give me some sample code on how to do this? Or point me in the right direction, or suggest a product that can do this? I am using Windows XP SP2 with SQL 2000, and I have Visual Studio .NET 2005. My development skills are limited, but I can generally find my way through samples. I code in C#, if that helps. Thank you very much. Have a good day.
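A minimal T-SQL sketch of the lookup-by-SKU update, assuming the CSV is first bulk-loaded into a staging table; the file path, staging table, and column names are assumptions (and the CSV is assumed to have no header row):

-- staging table matching the CSV layout
CREATE TABLE dbo.PriceImport (
    Sku   varchar(50),
    Price money
)

BULK INSERT dbo.PriceImport
FROM 'C:\import\prices.csv'                     -- assumed path on the SQL Server box
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- update only the price, joining on SKU (SQL 2000-compatible UPDATE ... FROM)
UPDATE p
SET    p.Price = i.Price
FROM   dbo.Products AS p
JOIN   dbo.PriceImport AS i ON i.Sku = p.Sku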
Hi! My problem concerns both VB and SQL. I have a text file delimited with semicolons, and I have to enter this data into a SQL table. For example: item description;code1;code2;code3; — and there are many lines like this. The code should insert these values into the table. The table name is Products and the column names are desc, code1, code2, code3. I am using VB.NET and SQL Server. Thanks all!
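If the file is reachable from the server, the load itself can be done from T-SQL rather than row by row in VB.NET. A minimal sketch, with the file path assumed and the Products table assumed to already exist (desc is bracketed because it is a reserved word):

BULK INSERT dbo.Products
FROM 'C:\import\products.txt'    -- assumed path
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n')
-- note: the trailing ';' on each line may require a format file or an extra throwaway column

-- sanity check
SELECT TOP 10 [desc], code1, code2, code3 FROM dbo.Products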
I have an Access database (Access 95, version 7) dumping a delimited text file onto my server. I am then using DTS in SQL 2000 to import the file into a table.
My issue is that each time the DTS package runs, it imports the whole text file again, which is causing duplicate records.
So I created a transformation script as follows:
Function Main()
    If DTSSource("counter") <= DTSDestination("counter") Then
        Main = DTSTransformStat_SkipRow
    Else
        Main = DTSTransformStat_OK
    End If
End Function
The theory behind the If statement is that if it sees the counter field is less than or equal to what is already there, it will skip the record and move on. For some reason this is not working.
Does anyone have a workaround or another solution to this problem?
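One workaround that avoids the transform-script comparison altogether is to let DTS load the whole file into a staging table each run and then append only the rows that are not already in the destination. A minimal sketch, with table and column names assumed:

-- DTS imports the full text file into dbo.ImportStaging on every run;
-- this step then copies across only the new rows, keyed on the counter field
INSERT INTO dbo.DestinationTable (counter, col1, col2)
SELECT s.counter, s.col1, s.col2
FROM dbo.ImportStaging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.DestinationTable AS d
                  WHERE d.counter = s.counter)

-- clear the staging table for the next run
TRUNCATE TABLE dbo.ImportStaging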
I have an ASP.NET application where I can choose to save any file (could be .jpg or .dat or .txt or .xml, etc.) into a table. So far, I've managed to save it into a SQL Server table in a column set to the Image datatype. But when I get the data back from the table, put it into a byte array, and recreate the file, I notice that the files are only recreated correctly if they are .txt files. The .jpg files can't be recreated, and trying to open them gives an error. I don't think the file saving is the problem, since any file can be saved; rather, it is the file recreation that is the problem. Can anyone help?
I want to load a table from a file. My file has a fixed length (fixed block) and has the same fields as the table. I need the right syntax, because the following gives errors:
"load from file1 insert into table1"
Any advice will be greatly appreciated because I'm not an expert in databases. Thank you very much.
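That "load from ... insert into ..." syntax is not SQL Server's; the SQL Server equivalent is BULK INSERT (or the bcp utility). A minimal sketch, assuming the file path, and assuming a bcp format file that describes the fixed-length fields (the format file name is hypothetical):

-- fixed-length (no delimiter) files usually need a format file describing each field's width
BULK INSERT table1
FROM 'C:\data\file1.txt'
WITH (FORMATFILE = 'C:\data\file1.fmt')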
Hi. How can I query an ntext column (containing an XML file) in SQL Query Analyzer when I need all the data? When I run a SELECT against this table, I get back only the first 256 characters. Thanks.
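The 256-character cutoff is usually Query Analyzer's own "Maximum characters per column" setting (Tools > Options > Results), possibly combined with the connection's TEXTSIZE limit. A minimal sketch of raising the latter; the table and column names are placeholders:

-- allow the connection to return up to ~2 GB of text per value
SET TEXTSIZE 2147483647

SELECT XmlColumn
FROM dbo.MyTable        -- hypothetical names
WHERE Id = 1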
Following on from http://forums.databasejournal.com/showthread.php?t=42959, it has been asked whether specific tables can be placed in specific database files, but I'm unaware that this can be done. They want the archive tables to be on a separate volume (on a cheaper set of disks or a lower-performing RAID set, for example) from the live data.
Hi friends, I have an immediate requirement for our company to load text files into different tables in SQL 2000. The requirement is this: people submit .txt files to our web server via ASP pages. According to the scheduled event (immediate, or end of day), DTS should pick up the file and the table they selected and load the file into the corresponding table. (The main feature we are looking for is a generic DTS package that will handle loads for different tables according to the corresponding text files.) I need to have only one DTS package doing the upload, regardless of the file name and the table name they selected. I think it should be done with ActiveX and global variables. I am new to DTS programming and I am kind of stuck here. If anyone out there could help me on how to do it, I'd really appreciate it. Thanks in advance, Jay
This is the worst noob question ever, but Google isn't helping and I searched all the threads in this forum... no help. I have a comma-delimited CSV file that I need to load into an existing database on MS SQL 2000. I need to know how to create the table and populate it with my data. Any pointers on the syntax differences between MS SQL and MySQL, or a tutorial on how to do it with Enterprise Manager, would be great... Dang, I feel noobie.
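A minimal T-SQL sketch of doing it in Query Analyzer rather than Enterprise Manager; the column names, types, and file path are assumptions to adapt to the actual CSV:

-- create a table shaped like the CSV (columns assumed for illustration)
CREATE TABLE dbo.ImportedData (
    Col1 varchar(100),
    Col2 varchar(100),
    Col3 int
)

-- load the file; add FIRSTROW = 2 if the CSV has a header row
BULK INSERT dbo.ImportedData
FROM 'C:\import\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')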
Hi, how can I create different data files for different tables in a database (SQL Server 2000)? By default there is a single data file and a single log file for the database, but I want separate data files for all my tables. Can anyone help me out in this regard?
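Tables are not assigned to files directly, but they can be placed on filegroups, and each filegroup can have its own file (and therefore its own drive). A minimal sketch, with the database, file, path, and table names assumed:

-- add a new filegroup and put a data file for it on another drive
ALTER DATABASE MyDb ADD FILEGROUP ArchiveFG
ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_Archive1',
              FILENAME = 'E:\SQLData\MyDb_Archive1.ndf',
              SIZE = 100MB)
    TO FILEGROUP ArchiveFG

-- create (or rebuild) a table on that filegroup
CREATE TABLE dbo.ArchiveOrders (
    OrderID   int NOT NULL PRIMARY KEY,
    OrderDate datetime
) ON ArchiveFG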
Can we insert the data from a #temp table into a CSV file using bcp in SQL Server? I'd really appreciate it if anyone can answer this or suggest any workarounds.
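bcp runs in its own session, so it cannot see a session-scoped #temp table; a common workaround is to copy the data into a global temp table (##) or a real table and bcp from that. A minimal sketch via xp_cmdshell, with the names and output path assumed:

-- use a global temp table so the separate bcp session can see it
SELECT * INTO ##export_data FROM #temp    -- #temp is the original session-local table

DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT * FROM ##export_data" queryout "C:\export\data.csv" -c -t, -T -S '
         + @@SERVERNAME
EXEC master..xp_cmdshell @cmd

DROP TABLE ##export_data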
Newbie question, but I can't find it myself. How can I get a file/table description from a table? E.g. the description must deliver the following info: record nr., field nr., field name, field description, data type, length, decimals, etc. Can somebody please tell me what to do? Thanx, MF
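Most of that metadata is available from the INFORMATION_SCHEMA views (field descriptions themselves would have to come from extended properties, if any were added). A minimal sketch, with the table name as a placeholder:

SELECT ORDINAL_POSITION          AS FieldNr,
       COLUMN_NAME               AS FieldName,
       DATA_TYPE                 AS DataType,
       CHARACTER_MAXIMUM_LENGTH  AS Length,
       NUMERIC_PRECISION         AS [Precision],
       NUMERIC_SCALE             AS Decimals
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'     -- placeholder table name
ORDER BY ORDINAL_POSITION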
I need to export certain fields of a table from SQL 2000 into a .dbf file, but I can't find the proper query command to make it work. I also want to set this up to run automatically each night. Any help will be greatly appreciated. Randy
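One approach sometimes used is writing into an existing .dbf through the Jet OLE DB provider with OPENROWSET, scheduled as a nightly SQL Agent job. This is only a rough sketch under the assumptions that the Jet/dBASE driver is installed on the server, ad hoc OPENROWSET queries are allowed, and the target .dbf already exists with matching columns; all names and paths are placeholders:

-- append rows from the SQL table into an existing dBASE file
INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                       'dBASE IV;DATABASE=C:\export\',      -- folder containing the .dbf
                       'SELECT field1, field2 FROM export.dbf')
SELECT col1, col2
FROM dbo.SourceTable          -- placeholder names throughout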
I have a set of flat files that I am transforming and loading into SQL Server. If I do it in 2000 (DTS), it completes within 45 seconds for 1.5M records. If I do the same in SSIS, it takes 3 minutes. Why is there such a difference in time, and why is it slower than the previous version? I used the data access mode "Fast load". Am I missing anything in the SSIS approach?