Hello.
My application reads a file with roughly 80,000 rows (like 09905003101399800464520220080408710200070050000020 90604500012000).
Based on the index of each character, the row is split into 8 columns, and then I must verify that 5 of these columns are not already in the database... if they are, the row is a duplicate and will not be inserted.
Which is the best way to do that?
To make 80,000 calls to the database, or to send the file as XML to the database and parse it there.......
Or if you know any other way, it would help me very much.
At the moment I'm using the 80,000-calls method and it takes ~20 hours.
Please advise... any advice will be highly appreciated.
Thank you.
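One common pattern that avoids per-row calls altogether: bulk load the raw lines into a staging table, then split and de-duplicate in a single set-based statement. A sketch, where all names, character offsets, and the five checked columns are assumptions (the remaining three columns are omitted for brevity):

-- Load the raw lines into a one-column staging table
CREATE TABLE dbo.Staging (RawLine char(66))

BULK INSERT dbo.Staging
FROM 'C:\import\rows.txt'
WITH (ROWTERMINATOR = '\n')

-- Split by character position and insert only rows whose five key columns are new
INSERT INTO dbo.Target (colA, colB, colC, colD, colE)
SELECT x.colA, x.colB, x.colC, x.colD, x.colE
FROM (SELECT SUBSTRING(RawLine, 1, 10)  AS colA,
             SUBSTRING(RawLine, 11, 10) AS colB,
             SUBSTRING(RawLine, 21, 10) AS colC,
             SUBSTRING(RawLine, 31, 10) AS colD,
             SUBSTRING(RawLine, 41, 10) AS colE
      FROM dbo.Staging) AS x
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Target t
                  WHERE t.colA = x.colA AND t.colB = x.colB
                    AND t.colC = x.colC AND t.colD = x.colD
                    AND t.colE = x.colE)

Done this way, 80,000 rows should take seconds to minutes rather than hours.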
How do I insert data from a flat file or .csv file into an existing SQL database?
Here's what I've come up with thus far, but it doesn't work. Can someone please help? Let me know if there is a better way to do this... Ideally I'd like to write straight to the SQL database and skip the dataset altogether...
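For reference, the server-side route that skips the dataset entirely is a plain BULK INSERT; a sketch with assumed path, table name, and delimiters:

BULK INSERT dbo.MyTable
FROM 'C:\import\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)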
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out. PLEASE HELP!
I have a flat file which I read and then insert into a database table, and that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is that if the package runs again, it checks whether the record already exists, based on two columns (date and hour), and does not insert the record.
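In SSIS the usual tool for this is a Lookup transformation on the date and hour columns, sending only the non-matching rows to the destination. The equivalent set-based T-SQL, as a sketch with assumed table and column names, would be:

INSERT INTO dbo.Target (LoadDate, LoadHour, Value)
SELECT s.LoadDate, s.LoadHour, s.Value
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Target t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour)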
I found that the database log file can contain more records after performing a BACKUP DATABASE statement.
For example:
I create a database and limit the log file to 2 MB. Then I create a table and insert data.
If I back up the database before I insert data, the database can hold 192 records until the log file is full.
If I don't perform the BACKUP DATABASE statement, DBCC SQLPERF(LOGSPACE) indicates the utilization ratio is less than 40% after inserting 192 records.
Why?
Here is my code:
Code Snippet

create database db_test
on primary
(
    name = db_test,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test.mdf'
)
log on
(
    name = db_test_log,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test_log.ldf',
    maxsize = 2mb
)
go

backup database db_test to disk = 'db_test.bak'  -- if i don't execute this line, log file can contain a lot of records
go

create table db_test..table1 (col char(8000))

-- insert data to fill up the database log
declare @n int
set @n = 0
while @n < 192
begin
    insert into db_test..table1 values (replicate('a', 8000))
    set @n = @n + 1
end
Question 2: I create a database and limit the log file to 2 MB. Then I create a table and insert data in an endless loop.
After the insert has been running for a while, error 9002 occurs, indicating the log file for the database is full. But DBCC SQLPERF(LOGSPACE) indicates the utilization ratio is low, log_reuse_wait_desc in sys.databases is 'CHECKPOINT', and I can still insert data while log_reuse_wait_desc stays 'CHECKPOINT'.
As far as I know, a checkpoint can't truncate the log under the full recovery model; only a log backup can truncate the transaction log. So if the log is not full, why is error 9002 encountered, and why does log_reuse_wait_desc return 'CHECKPOINT'?
Here is my code:
Code Snippet

create database db_test
on primary
(
    name = db_test,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test.mdf'
)
log on
(
    name = db_test_log,
    filename = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\db_test_log.ldf',
    maxsize = 2mb
)
go

create table db_test..table1 (col char(8000))

-- insert data in an endless loop to fill up the database log
declare @n int
set @n = 0
while @n <> -1
begin
    insert into db_test..table1 values (replicate('a', 8000))
end
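One thing that may explain both observations: a database nominally in the full recovery model behaves as if in simple recovery (checkpoints truncate the log) until its first full database backup is taken. That would be why the log only fills up after the BACKUP DATABASE statement, and why log_reuse_wait_desc can show 'CHECKPOINT'. A small diagnostic sketch:

-- check the recovery model and what is holding log truncation back
select name, recovery_model_desc, log_reuse_wait_desc
from sys.databases
where name = 'db_test'

-- once full recovery is in effect (after a full backup), only a log backup truncates the log
backup log db_test to disk = 'db_test_log.bak'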
In a foreach loop, I am inserting records into a flat file, which works fine. But as the file grows, it takes longer to locate the EOF (end of file) of the flat file before inserting the records.
I have around 70-100 lines written to the file on each loop and there are more than 20k iterations, which means that at the end I should have roughly 1,400k to 2,000k lines in the text file.
One solution would be to insert the records at the start of the file itself, so that it does not have to look up the EOF each time before writing.
Another would be to generate separate files and then merge them.
Any idea how this can be done?
Besides this, I have to zip the file and then SFTP it to a given address.
Greetings, I have just arrived back in the country (NZ) and back into ASP.NET. I am having trouble with the following: "An attempt to attach an auto-named database for file (file location).../Database.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." It has only begun since I decided I wanted to use IIS. I realise VWD comes with its own localhost, but since that is only temporary, I wanted a permanent shortcut on my desktop linking to my intranet page. Anyone have any ideas why I am getting the above error? I have searched many places on the internet and am not getting any closer. Cheers ~ J
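One thing that may help (a sketch, with a hypothetical path): attach the .mdf to the SQL Server instance once, under a fixed name, and have the site connect by database name rather than letting ASP.NET auto-attach the file. The auto-named attach fails when the file is already attached under another name, which is exactly what switching to IIS can trigger.

CREATE DATABASE IntranetDb
ON (FILENAME = 'C:\inetpub\wwwroot\MySite\App_Data\Database.mdf')
FOR ATTACH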
Is it possible to actually insert a file into the email sent via SQLMail? I ask because when I try to send the results of a query via SQLMail, the formatting is shot. I want to automate a job to run a query, capture the data to a file, and then email the results to my users. I can get to the stage where the file is available, but then I'm stumped. Any ideas, please?
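One approach, as a sketch with hypothetical paths and query: write the results to a file with bcp via xp_cmdshell, then attach that file using xp_sendmail's @attachments parameter:

EXEC master..xp_cmdshell 'bcp "SELECT * FROM MyDb.dbo.MyReport" queryout "C:\report.txt" -c -T'

EXEC master..xp_sendmail
    @recipients = 'users@example.com',
    @subject = 'Daily report',
    @attachments = 'C:\report.txt'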
I want to use SQL Server as a back end for my ASP.NET shopping cart using VB.NET. I have created "news.mdf" under the App_Data folder of my website in Visual Studio 2010, but when I tried to connect to the database using the command below, it shows an error...
Hello everyone! I'm having a problem with inserting the content of a text file into a SQL Server 2005 database. I'm reading the text file into a dataset, and that works fine. What I can't do is what I suspect is the simple part: insert all the data into a table that has exactly the same layout as the file. I've never worked with datasets before, and I can't seem to find the answer to this! This is what I have done so far:

Dim i2 As Integer
Dim j As Integer
Dim File As String = Server.MapPath("..\Docs\Facts\FORM_MAN_V3_1.txt")
Dim TableName As String = "Facts"
Dim delimiter As String = Chr(9)  ' tab

Dim result As DataSet = New DataSet()
Dim s As StreamReader = New StreamReader(File)

' Build one column per header field, de-duplicating repeated names with _1, _2, ...
Dim columns As String() = s.ReadLine().Split(Chr(9))
result.Tables.Add(TableName)
For i2 = 0 To columns.Length - 1
    Dim col As String = columns(i2)
    Dim added As Boolean = False
    Dim [next] As String = ""
    Dim i As Integer = 0
    While Not added
        Dim columnname As String = String.Concat(col, [next])
        columnname = columnname.Replace(Chr(9), "")
        If Not result.Tables(TableName).Columns.Contains(columnname) Then
            result.Tables(TableName).Columns.Add(columnname)
            added = True
        Else
            i += 1
            [next] = String.Concat("_", i.ToString())
        End If
    End While
Next i2

' Read the remaining lines and add one row per line
Dim strs2 As String() = s.ReadToEnd().Split(Chr(13) & Chr(10).ToString())
For j = 0 To strs2.Length - 1
    Dim items As String() = strs2(j).Split(Chr(9))
    result.Tables(TableName).Rows.Add(items)
Next j

So now I have my dataset populated with all the information, but how can I insert it into the database? If anyone can help I would appreciate it very, very much! Thank you, Paula
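Since the table reportedly matches the file exactly, two routes suggest themselves. From .NET, SqlBulkCopy can write the populated DataTable straight to the table. Or, if the file is reachable from the server, a T-SQL BULK INSERT skips the dataset entirely; a sketch, with the path and table name assumed:

BULK INSERT dbo.Facts
FROM 'C:\Docs\Facts\FORM_MAN_V3_1.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 2)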
I work with SQL Server 2008 on a database. We exported the schema and data with the Generate Scripts command:
right-click the database => Tasks => Generate Scripts => select all objects => click Advanced => select "Types of data to script" => Schema and data
Now we have a file with all the data and schema. That's perfect... But how can I run that file against another database? OK, I can copy-paste the whole script into Management Studio and press F5, but when I do this Management Studio fails because the size of the file is > 200 MB!
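For a script that big, running it from the command line with sqlcmd avoids loading it into Management Studio at all. A sketch, with the server and database names assumed:

sqlcmd -S MyServer -d TargetDatabase -i script.sql -o run.log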
I have the data of ACTTAB, APTTAB, etc. How do I export this data from SQL Server into a text file? This is the first time I've needed to do this. The second thing is that after exporting the data from SQL Server into a text file, I need to import that data into MongoDB. So basically, how do I export the following data (ACTTAB, APTTAB, etc.) into a text file?
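A sketch of one route (server name, paths, and field names are assumptions): export each table with bcp in character mode, then load the files with mongoimport:

bcp MyDb.dbo.ACTTAB out C:\export\ACTTAB.csv -c -t, -T -S MyServer

mongoimport --db mydb --collection acttab --type csv --fields "field1,field2,field3" --file C:\export\ACTTAB.csv

(bcp -c does not write a header line, hence --fields rather than --headerline.)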
Hi, I'm trying to bulk insert files that look like this:

aaaa,bbb,dddd,
ccc,dfd,tghj,

Each file can have up to 10 data fields per line, and all lines in a particular file have the same number of fields; let's say 3, as above. A second file could have, let's say, 10, and that is the maximum.
I read the file and insert the data with FIELDTERMINATOR into a temp table, from which I insert the data into other tables depending on some parameters inside.
Now the problem is:
Msg 4832, Level 16, State 1, Line 1: Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1: The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1: Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
That is because I'm trying to insert 3 fields of data into a temporary table which has 10 columns (it has to be 10 because the next file could have 10 fields of data). If the temp table has the same number of columns as the text file has data fields, then it works.
What is the solution to this problem? Can I bulk insert NULL into the columns for which I don't have data?
I could also import each line of the text file into one column (with the delimiters inside), but then I don't know how to split that data out to the correct tables, or even to one table, separating the data fields into columns on the field terminator, which is ',' in this case.
I'm new to SQL and I would appreciate any help. Thank you.
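Yes: with a format file that describes only the fields actually present, BULK INSERT leaves the unmapped table columns NULL. A sketch for the 3-field case above (file name and column names are assumptions; note the ",\r\n" terminator on the last field, because each line ends with a trailing comma):

9.0
3
1  SQLCHAR  0  100  ","      1  col1  ""
2  SQLCHAR  0  100  ","      2  col2  ""
3  SQLCHAR  0  100  ",\r\n"  3  col3  ""

BULK INSERT #tmp
FROM 'C:\import\three_fields.txt'
WITH (FORMATFILE = 'C:\import\three_fields.fmt')

Columns 4 through 10 of the temp table then receive NULL. Each file width would need its own format file, which could be generated on the fly.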
Hi, I was wondering which is the best way to read data from a txt file and insert each row into SQL. Could an OLE DB Command work? Will it be necessary to work with variables? My txt file has a fixed width (if it is necessary to know). I will have many rows with many columns. I have to map each column from the txt file to its corresponding column in SQL Server and insert the data into it for each row. Thanks for your help!
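For a fixed-width file, one option besides SSIS (where a Flat File Source set to fixed-width columns would do the mapping) is a bcp-style format file with empty field terminators, letting the lengths drive the parse. A sketch under assumed widths and names:

9.0
2
1  SQLCHAR  0  10  ""      1  ColA  ""
2  SQLCHAR  0  20  "\r\n"  2  ColB  ""

BULK INSERT dbo.Target
FROM 'C:\import\fixed.txt'
WITH (FORMATFILE = 'C:\import\fixed.fmt')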
I want to load data that I receive every day from my customers in .xls (Excel) or CSV format into the database I have created for this purpose. But when trying to do that with SSIS, I get an error message saying that I can't import redundant data into my database column.
I'm a SQL Server beginner. I was wondering if some of you guys can help me out. I need SQL Server to be able to read an outside file (just text) and run a script that will insert it into the database. It's a DBCC output file. We've tried running this script:
DROP TABLE tests
go

DECLARE @SQLSTR varchar(255)
SELECT @SQLSTR = 'ISQL -E -Q"dbcc checkdb(master)"'

CREATE TABLE tests (Results varchar(255) NOT NULL)

INSERT INTO tests
EXEC('master..xp_cmdshell ''ISQL -E -Q"dbcc checkdb(master)"''')
And it runs fine, but the problem is that the DBCC results here did not come from a file; they came directly from executing the DBCC command. Is there a way to do it from a file?
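To pull an existing text file in, rather than capturing live command output, a BULK INSERT into the same one-column table would do it. A sketch with a hypothetical path (it assumes the lines contain no tab characters, so each whole line lands in the single column):

BULK INSERT tests
FROM 'C:\dbcc_output.txt'
WITH (ROWTERMINATOR = '\n')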
I have a bulk insert situation that would be nice to be able to pull off. I have a flat file with 46 columns that are to go into a table. I want the table to have a 47th column, to be updated later on by a stored proc saying whether the import into the system was successful or not. I have the ROWTERMINATOR set as '"' thinking that would tell SQL to begin on the next row, leaving the ImportStatus column NULL, but I still receive an error.
First of all, is this idea possible within this insert statement? Secondly, if so, what would be the syntax to tell the insert statement to skip that particular column? It is the last column listed in the table, so it just needs to start on the next row after it inserts the last bit of data from the flat file.
If this is not possible, is it possible to bulk insert into a temp table?
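Yes to the fallback: BULK INSERT into a temp table works fine. But the column-skipping goal is also reachable directly. One common pattern (a sketch, with assumed names and delimiters) is to bulk insert through a view that exposes only the 46 file columns, leaving ImportStatus NULL:

CREATE VIEW dbo.MyTable_Load
AS
SELECT col1, col2, col3, /* ...the other file columns... */ col46
FROM dbo.MyTable
GO

BULK INSERT dbo.MyTable_Load
FROM 'C:\import\myfile.dat'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')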
I am running a set of SQL statements on a SQL Server to insert flat file data into a SQL table. The flat file has already been FTP'ed to the SQL Server. I am getting an error which possibly points to a permissions issue.
The statements:
BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM 'c:\jedox_daily\jdcom4401.txt'
WITH
(
    FIRSTROW = 2,
    MAXERRORS = 0,
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n'
)
GO
The error is: Msg 4861, Level 16, State 1, Line 1: Cannot bulk load because the file "c:\jedox_daily\jdcom4401.txt" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 1815).
If it is a permissions issue, how do I overcome it?
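Worth noting: operating system error code 3 is "the system cannot find the path specified", so this looks like a path problem rather than permissions, and the path in a BULK INSERT resolves on the SQL Server machine itself, not on the client. A quick server-side check, as a sketch:

EXEC master..xp_cmdshell 'dir c:\jedox_daily'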
I just want to insert only the SubjectIds into my table 'Subjects', which has the following schema, ignoring the classes. The row delimiter is "\n" and the column delimiter is '|'.
CREATE TABLE Subjects
(
    ID int IDENTITY,
    SubjectId varchar(20)
)
Can anyone provide the format file for doing this, or suggest another way to do it? Please note that the file may contain millions of records.
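A sketch, assuming each line looks like class|subject: a format file that maps the first field to column 0 (skip) and the subject to result column 1, used through OPENROWSET so the IDENTITY column is left alone. The file name subjects.fmt is hypothetical; OPENROWSET(BULK ...) streams the file, so millions of rows are fine:

9.0
2
1  SQLCHAR  0  100  "|"     0  Class      ""
2  SQLCHAR  0  20   "\r\n"  1  SubjectId  ""

INSERT INTO Subjects (SubjectId)
SELECT a.SubjectId
FROM OPENROWSET(BULK 'datafile.txt', FORMATFILE = 'subjects.fmt') AS a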
USE TEST
GO

/****** BULK INSERT ******/
BULK INSERT [Table01]
FROM 'C:\temp\data.csv'
[code]....
I am using the above code to insert CSV file data, which includes Arabic data as well. The upload is successful; however, the Arabic field data is uploaded with invalid characters and I get the following error: Msg 4864, Level 16, State 1, Line 3... Bulk load data conversion error (type mismatch or invalid character for the specified codepage).
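Two BULK INSERT options address the codepage problem (a sketch, with the path assumed and the target columns presumed nvarchar): save the CSV as Unicode and use DATAFILETYPE = 'widechar', or, for an ANSI-encoded file, declare the Arabic codepage explicitly:

BULK INSERT [Table01]
FROM 'C:\temp\data.csv'
WITH (DATAFILETYPE = 'widechar', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- or, for an ANSI-encoded file:
BULK INSERT [Table01]
FROM 'C:\temp\data.csv'
WITH (CODEPAGE = '1256', FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')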
With "bcp MyDatabase.dbo.MyTable out C:MyFile.Dat -n -T" command line, I could get an exported data file. And I can also import this file into MyTable using 'BULK INSERT MyDatabase.dbo.MyTable FROM 'C:MyFile.dat' WITH (DATAFILETYPE='native');' query statement.
Now, I want to make my own data file just like made by bcp above. Although I could make file of 'char' type, 'native' type file is needed for performance and other reasons. And the format file should not be used.
Hi, I have a data file whose contents are as follows:
2        -- this is the header, indicating the number of records in my file
1001|s1
1006|s2
The content of the format file is as follows. It is meant to skip the first column of all the rows and get only the Subs (i.e. s1 and s2):
9.0
2
1  SQLCHAR  0  100  "|"     0  ID    ""
2  SQLCHAR  0  100  "\r\n"  1  Subs  ""
Here is my query to get all the Subs from my data file
SELECT *
FROM OPENROWSET(
    BULK 'datafile.txt',
    FORMATFILE = 'FormatFile.fmt',
    FIRSTROW = 2
) AS a
But this query returns only s2, whereas I was expecting s1 and s2. The reason is that the first row, i.e. the header, doesn't follow the format. Can anyone please let me know how to skip the first line in the data file and get the result as required?
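Since FIRSTROW skips "rows" by counting the format file's field terminators rather than lines, a header that doesn't match the format makes it swallow the first data row. One workaround (a sketch, paths hypothetical) is to strip the header line before the import and drop FIRSTROW:

EXEC master..xp_cmdshell 'powershell -Command "Get-Content C:\data\datafile.txt | Select-Object -Skip 1 | Set-Content C:\data\datafile_noheader.txt"'

SELECT *
FROM OPENROWSET(
    BULK 'C:\data\datafile_noheader.txt',
    FORMATFILE = 'C:\data\FormatFile.fmt'
) AS a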
I have two SQL Express databases and I want to do two things: one, transfer a table over to the other database; two, move the rows from a table in one database to the other. Please let me know when you get a chance.
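Assuming both databases are attached to the same SQL Express instance, plain cross-database T-SQL covers both tasks; a sketch with assumed names:

-- 1) Copy a table (structure of the SELECT plus its data) into the other database
SELECT *
INTO TargetDb.dbo.MyTable
FROM SourceDb.dbo.MyTable

-- 2) Move rows: copy them over, then delete them from the source
INSERT INTO TargetDb.dbo.OtherTable
SELECT * FROM SourceDb.dbo.OtherTable

DELETE FROM SourceDb.dbo.OtherTable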
I am creating/uploading a new file on the webserver, and if that succeeds I want to insert a record in the database (with the filename). Is there a way to create a transaction for this, so that if either operation fails, they both fail?
Hi everybody, in my project I want to read data from a CSV file and insert it into a SQL Server database table. The CSV file may contain any number of columns, but I want only certain columns from it to insert into the database table. How do I achieve this? Please help me; it is urgent. Thanks in advance. Thanks and regards, Biju.S.G
I am using a BULK INSERT to insert the data from an ASCII file into a SQL table. The table has a ProductInstanceId column that exists in the table but does not exist in the ASCII DICast data. I am setting the ProductInstanceId to a GUID that will be used for metrics. I would like to create the GUID in C++ and then set it somehow during the BULK INSERT of DICastRaw1hr and DICastRaw6hr. I am calling the BULK INSERT from C++/ADO. I do not see how you can set static data in the BULK INSERT for a column that exists in the table but not in the source data... it seems there should be a way to do this with the format file?
The other way to do this is with a TRIGGER. I have the TRIGGER below. Prior to calling the BULK INSERT using ADO, I will use ADO to ALTER the TRIGGER with the new GUID. When the BULK INSERT runs, the ProductInstanceId will be populated with the new GUID.
ALTER TRIGGER DICastRaw1hrInsertGuid ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT AS
    UPDATE dbo.DICastRaw1hr
    SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
    WHERE modelrundatetime = (SELECT MAX(modelrundatetime)
                              FROM Alphanumericdata.dbo.DICastraw1hr (NOLOCK))
More Questions:
- The trigger is slow. The BULK INSERT without the trigger runs in about 10 seconds; with the trigger, about 40 seconds. I tried to use the SQL code below in the trigger, but it was only doing the UPDATE on the last row. The trigger must run after the BULK INSERT is complete. Now I am using the SELECT MAX version (bad). Any comments...
ALTER TRIGGER DICastRaw1hrInsertDate ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT AS
    DECLARE @ID as integer
    SELECT @ID = i.recordid FROM inserted i
    UPDATE dbo.DICastRaw1hr
    SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
    WHERE recordid = @ID
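For what it's worth, the single-row problem comes from assigning @ID from the inserted table, which captures only one row of the batch; a set-based join against inserted updates every row and avoids the MAX(modelrundatetime) scan. A sketch (note also that BULK INSERT only fires triggers at all when the FIRE_TRIGGERS option is specified):

ALTER TRIGGER DICastRaw1hrInsertGuid ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT AS
    UPDATE d
    SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
    FROM dbo.DICastRaw1hr d
    INNER JOIN inserted i ON i.recordid = d.recordid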
- I understand that I could set the GUID in the default value part of the table definition using the NEWID() function, but I need the GUID to be the same for all the rows inserted during the BULK INSERT (they all have the same modelrundatetime)... how would I do this?