Transact SQL :: Read Data From CSV Files And Insert Into Database
May 6, 2015
I have a requirement to
a. Read data from different CSV files.
b. Insert and update data in multiple database tables using joins.
This execution runs for 1-2 hours. I could use C# with ADO.NET, but my only concern is that if execution fails partway through due to a connection or other error, all the inserted data has to be cleaned up again. I am thinking of writing a stored procedure that takes the CSV file paths as input and inserts the data inside a transaction; using a transaction we have the flexibility to roll back to the original state.
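A minimal sketch of that approach (table, column, and path names are illustrative; THROW needs SQL Server 2012 or later, use RAISERROR on older versions):

CREATE PROCEDURE dbo.LoadCsvFiles
    @csvPath nvarchar(260)
AS
BEGIN
    SET XACT_ABORT ON;  -- any run-time error rolls the whole batch back
    BEGIN TRY
        BEGIN TRANSACTION;

        -- BULK INSERT does not accept a variable for the file name,
        -- so build the statement dynamically
        DECLARE @sql nvarchar(max) =
            N'BULK INSERT dbo.StagingCsv FROM ''' + @csvPath +
            N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
        EXEC sys.sp_executesql @sql;

        -- insert/update the target tables from staging using joins here

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- surface the original error to the caller
    END CATCH
END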
I have two database files, one .mdf and one .ndf. The creator of these files marked them read-only. I want to "attach" these files to a new database, but cannot do so because they are read-only. I get this message:
Server: Msg 3415, Level 16, State 2, Line 1 Database 'TestSprintLD2' is read-only or has read-only files and must be made writable before it can be upgraded.
What command(s) are needed to make these files read_write?
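One sketch, assuming the read-only flag is set on the files themselves (paths are illustrative): clear the files' read-only attribute at the Windows level first, then attach and mark the database writable.

-- from a command prompt (or Explorer > Properties > uncheck Read-only):
--   attrib -R "C:\Data\TestSprintLD2.mdf"
--   attrib -R "C:\Data\TestSprintLD2_1.ndf"
CREATE DATABASE TestSprintLD2
    ON (FILENAME = 'C:\Data\TestSprintLD2.mdf'),
       (FILENAME = 'C:\Data\TestSprintLD2_1.ndf')
    FOR ATTACH;
ALTER DATABASE TestSprintLD2 SET READ_WRITE;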
I have 2 tables, EmployeeA (Eng) and EmployeeB (Spanish), kept in separate .mdb files. I want to add the records into a SQL Server 2005 table called StateEmployee.
Procedure:
1. Loop through two folders: one containing table EmployeeA in an .mdb, the other containing table EmployeeB in a different .mdb.
2. Pick a file from EmployeeA and EmployeeB, both at the same time.
3. Count the total number of rows in both files. If equal, proceed.
4. Compare the employeeid of one row of EmployeeA to the employeeid of EmployeeB.
5. If the employee ids match, load both rows into SQL Server; otherwise file them to the error table (see the sketch below).
6. Loop through all rows simultaneously till the end of the rows.
7. Go to the next .mdb.
How do I go about this step by step? I am fairly new to SSIS.
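For steps 4-5, once both Access tables have been staged into SQL Server, the match/mismatch split can be plain T-SQL (staging table, column, and error-table names below are illustrative):

INSERT INTO StateEmployee (EmployeeId /* , other columns */)
SELECT a.employeeid /* , ... */
FROM StagingEmployeeA a
JOIN StagingEmployeeB b ON b.employeeid = a.employeeid;

INSERT INTO ErrorTable (EmployeeId, SourceTable)
SELECT a.employeeid, 'EmployeeA'
FROM StagingEmployeeA a
WHERE NOT EXISTS (SELECT 1 FROM StagingEmployeeB b WHERE b.employeeid = a.employeeid)
UNION ALL
SELECT b.employeeid, 'EmployeeB'
FROM StagingEmployeeB b
WHERE NOT EXISTS (SELECT 1 FROM StagingEmployeeA a WHERE a.employeeid = b.employeeid);

Inside the dataflow itself, the same split is typically a Merge Join or Lookup component with the no-match output routed to the error table.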
I have a SQL Server database. I wrote a stored procedure to attach my database, but it gets attached in read-only mode. How can I remove read-only?
This is my stored procedure:
create procedure attache
as
declare @trouvemdf int
declare @trouveldf int
if exists (select name from sysdatabases where name = 'Gestion_Parc')
[Code] ....
I tried EXEC sp_dboption 'Gestion_Parc', 'read only', 'FALSE', but it causes these errors:
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "C:\Gestion_Parc.mdf". Operating system error 5: "5 (Access is denied.)".
Msg 5120, Level 16, State 101, Line 1
Unable to open the physical file "C:\Gestion_Parc_log.ldf". Operating system error 5: "5 (Access is denied.)".
Msg 945, Level 14, State 2, Line 1
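Two notes, offered as a sketch rather than a certain fix: sp_dboption is deprecated, and operating system error 5 is an NTFS permission problem rather than a T-SQL one.

-- modern equivalent of the sp_dboption call:
ALTER DATABASE Gestion_Parc SET READ_WRITE;
-- this will keep failing with error 5 until the SQL Server service
-- account has Full Control on Gestion_Parc.mdf and Gestion_Parc_log.ldf;
-- files sitting in the root of C:\ often carry restrictive ACLs.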
I'm contemplating a SQL Server archive strategy that rolls really old data off any sort of DBMS and onto low-cost media like DVDs in a non-relational archive format. I don't ever want to worry about these archives spanning different versions of SQL Server when I go to retrieve a range of data that happens to span versions (e.g., one disc was sourced from 2005, another from 2008, but my report needs a union of both).
So I'm thinking about neutral/efficient formats for these archives and a live homegrown catalog that can determine exactly which disc(s) need to be mounted based on passed from- and to-date parameters, all so that data that might span discs (and versions, and maybe even schemas) can be merged and loaded into my SQL-version-du-jour's "throw away" archive database for a one-time report or other unplanned activity.
I remember raw data types being very convenient as an ETL format for our customers who have SSIS, but I wouldn't want our SQL Express customers to be left without the archiving capability. Do the "things" that read and write raw data files really originate in some special T-SQL command that all SQL editions can use, or is it strictly an SSIS thing?
Hi, I was wondering about the best way to read data from a txt file and insert each row into SQL. Could an OLE DB Command do it? Would it be necessary to work with variables? My txt file will have a defined width (if that is necessary to know). I will have many rows with many columns. I have to map each column from the txt file to its corresponding column in SQL Server and insert the data into it for each row. Thanks for your help!
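In SSIS this is normally a Flat File source (connection manager set to Fixed width, columns defined once) feeding an OLE DB destination, with no OLE DB Command or variables needed. If a pure T-SQL route is acceptable instead, a sketch with an illustrative two-column format file:

-- fixedwidth.fmt (widths, names, and the 9.0 version line are examples):
--   9.0
--   2
--   1  SQLCHAR  0  10  ""      1  Col1  ""
--   2  SQLCHAR  0  20  "\r\n"  2  Col2  ""
BULK INSERT dbo.TargetTable
FROM 'C:\data\input.txt'
WITH (FORMATFILE = 'C:\data\fixedwidth.fmt');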
I'm an SQL Server beginner. I was wondering if some of you guys can help me out. I need SQL Server to be able to read an outside file (just text) and to run a script that will insert it into the database. It's a DBCC output file. We've tried running this script:
DROP TABLE tests
go
DECLARE @SQLSTR varchar(255)
SELECT @SQLSTR = 'ISQL -E -Q"dbcc checkdb(master)"'
CREATE TABLE tests (Results varchar(255) NOT NULL)
INSERT INTO tests
EXEC('master..xp_cmdshell ''ISQL -E -Q"dbcc checkdb(master)"''')
and it runs fine, but the problem is that the DBCC results here did not come from a file but directly from executing the DBCC command. Is there a way to do it from a file?
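One sketch, assuming xp_cmdshell is enabled and using illustrative paths (sqlcmd is the modern replacement for ISQL): write the DBCC output to a file first, then bulk load that file.

CREATE TABLE tests (Results varchar(255) NOT NULL)
-- write the DBCC output to a text file:
EXEC master..xp_cmdshell 'sqlcmd -E -Q "DBCC CHECKDB(master)" -o C:\temp\dbcc_out.txt'
-- then read the file back in:
BULK INSERT tests FROM 'C:\temp\dbcc_out.txt' WITH (ROWTERMINATOR = '\n')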
Hi everybody, in my project I want to read data from a CSV file and insert it into a SQL Server database table. The CSV file may contain any number of columns, but I want only certain columns from it inserted into the database table. How can I achieve this? Please help; it is urgent. Thanks in advance. Thanks and regards, Biju.S.G
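One common sketch (table, column, and path names are illustrative): bulk load every CSV column into a staging table whose layout matches the file, then keep only the columns you need.

BULK INSERT dbo.StagingCsv
FROM 'C:\data\input.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

INSERT INTO dbo.TargetTable (Col1, Col3)
SELECT Col1, Col3
FROM dbo.StagingCsv;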
If possible, how is data organized in MDF and NDF files? We have one MDF file and 11 NDF files; when we load data into one table, it uses all the files in equal ratio. Is there any address reference to the NDF files in the MDF file?
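What you are describing sounds like proportional fill: all files in a filegroup receive new allocations in proportion to their free space, and each file carries its own allocation pages (GAM/SGAM/IAM); the MDF holds the system catalog, not a row-by-row address map of the NDFs. To see which filegroup each file belongs to:

SELECT name, type_desc, data_space_id, physical_name
FROM sys.database_files;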
I am using SQL 2012 SE. I am trying to move .mdf and .ndf files after a database is detached. Here is my code, which for now just copies the mdf file; I am testing it against this file, and if it works I will move the ldf file the same way.
EXEC master.dbo.sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
GO
EXEC master.dbo.sp_configure 'xp_cmdshell', 1
RECONFIGURE WITH OVERRIDE
DECLARE @cmd nvarchar(4000)
[Code] ...
The only message I see while the query is executing is: Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
If I copy the file manually it takes 30 seconds, since the file is only 2 GB, yet the script has been executing for 45 minutes and is still running.
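For comparison, a minimal sketch of just the copy step (paths are illustrative); if this simple form also hangs, the usual suspects are the service account lacking rights on one of the paths, or a command waiting for input, which /Y avoids for overwrite prompts:

DECLARE @cmd nvarchar(4000) =
    'copy /Y "E:\Data\MyDb.mdf" "F:\NewData\MyDb.mdf"'
EXEC master..xp_cmdshell @cmd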
I am using SQL Server 2012 in an AlwaysOn configuration with multi-subnet failover clustering. The size of the data file has suddenly increased; I dropped all the unnecessary tables from the database three days back. The day before yesterday I tried shrinking the data file using a DBCC command, but it is taking too much time. Is there any other option for deallocating the space?
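A sketch, assuming the data file's logical name is MyDb_Data: TRUNCATEONLY is fast because it only releases free space at the end of the file, while shrinking to a target size moves pages and is best done in modest steps rather than one huge shrink.

-- fast: release trailing free space without moving pages
DBCC SHRINKFILE (MyDb_Data, TRUNCATEONLY)
-- slower: move pages and shrink toward a target size in MB
DBCC SHRINKFILE (MyDb_Data, 10240)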
I have more than 500 CSV files with a similar structure [same column names and same data format]. I would like to load these files into a database table on SQL Server 2014.
I have a dataflow task in a For Each Loop container in the control flow of my SSIS package. This For Each Loop container reads the CSV files from the specified location one by one and populates a variable with the current file name. Note that the tables into which I would like to push the data also have the same names as the CSV files. In the dataflow task, I have a Flat File source component that uses the above variable to read the data of a particular file.
Now here is my question: how can I move the data to the destination SQL table using the same variable name? I've tried to set up the OLE DB destination component dynamically, but it executes well only the first time; it does not change the mappings to match the columns of the second CSV file. There are around 50 CSV files, each with a different set of columns, and these files need to be migrated to SQL tables in the optimum way.
What is the best way to set up the dataflow task for this requirement? Also, I cannot use the Bulk Insert task here, as we would like to keep a log of corrupted rows.
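Since a single dataflow cannot remap columns per file, one hedged alternative is to drop the dataflow and issue a dynamic BULK INSERT per file from an Execute SQL Task: the file-name variable drives both the source path and the target table (which share a name), and the ERRORFILE option keeps the log of rejected rows that the Bulk Insert task would not give you. Paths and the sample file name are illustrative:

DECLARE @file sysname = N'Customers'  -- in practice, mapped from the loop's file-name variable
DECLARE @sql nvarchar(max) =
    N'BULK INSERT dbo.' + QUOTENAME(@file) +
    N' FROM ''C:\csv\' + @file + N'.csv''' +
    N' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2,' +
    N'       ERRORFILE = ''C:\csv\errors\' + @file + N'.err'')'
EXEC sys.sp_executesql @sql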
I want to make data changes in a read-only database; that's why I must set the database to read_write. While the database is in read_write mode, I want to be sure that no one else makes changes in the database.
To this end, I wrote the code below, but I suspect that after setting the database read_write and before setting it single_user, another user could still issue DML. Is the code below enough for this operation, or is there another way?
Reminder: a read-only database cannot be set to single_user mode; that's why you must first set the database read_write.
The code:
use master
alter database xxx set read_write with rollback immediate
alter database xxx set single_user with rollback immediate
use xxx
update tablexxx set columnxxx = yyy
use master
alter database xxx set read_only with rollback immediate
alter database xxx set multi_user with rollback immediate
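One way to narrow the window between the two statements is to set both options in a single ALTER DATABASE, since SET accepts a comma-separated option list; a sketch, worth verifying on your version before relying on it:

use master
alter database xxx set read_write, single_user with rollback immediate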
We are facing a problem with loading data from .pdf files from a vendor. The .pdf files have data in tabular format, and we would like to insert those fields into a SQL table. We do not want to insert the physical location of the file; we need to insert the data within the file. How can we read a PDF file?
I am still learning T-SQL. Consider the table below: IDs 1-3 show our purchase transactions with various vendors, and IDs 4-7 show our payments to them.
Table 1 - VendorTransactions
ID PARTY AMOUNT VOUCHER
---------------------------------------
1  A     5000   Purchase
2  B     3000   Purchase
3  C     2000   Purchase
4  A     3000   Payment
5  B     1000   Payment
6  C     2000   Payment
7  A     1000   Payment
Now we have a blank table, Table 2 - Liabilities:
ID PARTY AMOUNT
I want SQL to look at each individual party from Table 1, calculate TOTAL PURCHASE and TOTAL PAYMENTS, deduct TOTAL PAYMENTS from TOTAL PURCHASE to get the remaining balance due for each party, and then add the difference amount along with the PARTY to Table 2, so I get the desired result like below:
ID PARTY AMOUNT
-------------------------
1  A     1000
2  B     2000
3  C     0
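A sketch of the aggregation, assuming Liabilities.ID is an identity column:

INSERT INTO Liabilities (PARTY, AMOUNT)
SELECT PARTY,
       SUM(CASE WHEN VOUCHER = 'Purchase' THEN AMOUNT ELSE -AMOUNT END) AS AMOUNT
FROM VendorTransactions
GROUP BY PARTY
ORDER BY PARTY;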
I have three tables, and all three are linked. I am looking for a query that gets the desired result. Notes for Table_A and Table_B:
The row data are given in the variables. If a row with exactly this data already exists in the table, there is no need to INSERT it again; if the variable data doesn't exist, the row needs to be added (see the sketch after the variable declarations).
Notes for Table_C:
Seq_id_Table_A is a Seq_id of #table_A. Seq_id_Table_B is a Seq_id of #table_B.
--Table_A----------------------Variables for Table_A--------------------
Declare @table_A_Y char(4) = 'TRC'
Declare @table_A_Y1 char(2) = '1'
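A sketch of the insert-if-missing pattern the notes describe, using illustrative column names for #table_A:

IF NOT EXISTS (SELECT 1 FROM #table_A
               WHERE colY = @table_A_Y AND colY1 = @table_A_Y1)
    INSERT INTO #table_A (colY, colY1)
    VALUES (@table_A_Y, @table_A_Y1);

DECLARE @Seq_id_Table_A int =
    (SELECT Seq_id FROM #table_A
     WHERE colY = @table_A_Y AND colY1 = @table_A_Y1);
-- @Seq_id_Table_A (and its Table_B counterpart) can then feed the Table_C row.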
While trying to insert data into an existing XLSX file using the command below, I am getting the following error:
Insert into OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', 'Data Source=e:\ediupload\hello1.xlsx;Extended Properties=Excel 12.0')...[Sheet1$]
Select top 50 product_no From product_mst

Msg 7343, Level 16, State 2, Line 1
The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" could not INSERT INTO table "[Microsoft.ACE.OLEDB.12.0]". Unknown provider error.
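Two things worth checking, as a sketch rather than a definitive fix: OPENDATASOURCE requires ad hoc distributed queries to be enabled, and the INSERT only works when Sheet1 already contains a header row whose column names match the insert list.

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE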
Now I am trying to insert a duplicate copy of a test by passing testId. Here is my SP:
if not EXISTS (SELECT testName FROM tblTest WHERE UPPER(testName) = UPPER(@testName))  -- compares the column to the variable; the original compared the variable to itself
BEGIN
    INSERT INTO tblTest
    SELECT @userId, testName, duration, totalQuestion, termsCondition, 0, GETUTCDATE(), GETUTCDATE()
    FROM tblTest WHERE id = @testId

    SET @insertedTestId = @@identity

    INSERT INTO tblTestQuestion
    SELECT @insertedTestId, question, 0, GETUTCDATE(), GETUTCDATE()
    FROM tblTestQuestion WHERE testId = @testId
END
How do I insert into the answer table, given that one question can have multiple answers?
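One sketch for copying the child rows when every copied question receives a new id: capture an old-id-to-new-id map with MERGE ... OUTPUT (which, unlike INSERT ... OUTPUT, can emit source columns), then join the answers through the map. The answer table name and the column lists are assumptions based on the procedure above:

DECLARE @map TABLE (oldQuestionId int, newQuestionId int)

MERGE tblTestQuestion AS tgt
USING (SELECT id, question FROM tblTestQuestion WHERE testId = @testId) AS src
    ON 1 = 0  -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (testId, question, isDeleted, createdOn, updatedOn)
    VALUES (@insertedTestId, src.question, 0, GETUTCDATE(), GETUTCDATE())
OUTPUT src.id, inserted.id INTO @map (oldQuestionId, newQuestionId);

INSERT INTO tblTestAnswer (questionId, answer)
SELECT m.newQuestionId, a.answer
FROM tblTestAnswer a
JOIN @map m ON m.oldQuestionId = a.questionId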
I'm running a procedure which does insert into table_name (id, name, ...) select id, name, ... from table_name. For some reason the tempdb data file grows to 200 GB. tempdb is set to expand unrestricted by 10%. How can I prevent that from happening? Thanks.
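One hedged way to cap the work tempdb has to absorb (sort spills, version store) is to copy in batches instead of one giant insert; a sketch assuming an increasing key column named id:

DECLARE @batch int = 100000, @rows int = 1
WHILE @rows > 0
BEGIN
    INSERT INTO target_table (id, name /* , ... */)
    SELECT TOP (@batch) s.id, s.name /* , ... */
    FROM source_table s
    WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id)
    ORDER BY s.id
    SET @rows = @@ROWCOUNT
END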
USE TEST
GO
/****** BULK INSERT ******/
BULK INSERT [Table01]
FROM 'C:\empdata.csv'
[code]....
I am using the above code to insert CSV file data which includes Arabic data. The upload is successful; however, the Arabic fields are loaded with invalid characters and I get the following error: Msg 4864, Level 16, State 1, Line 3... Bulk load data conversion error (type mismatch or invalid character for the specified codepage).
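A sketch of the usual fixes, depending on the file's encoding (the target columns must be nvarchar either way): for a UTF-16 file use DATAFILETYPE = 'widechar'; for a UTF-8 file use CODEPAGE = '65001', which is only supported on newer builds (SQL Server 2014 SP2 / 2016 onward).

BULK INSERT [Table01]
FROM 'C:\empdata.csv'
WITH (DATAFILETYPE = 'widechar',
      FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n')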
The OLE DB provider "SQLNCLI10" for linked server "192.168.0.40" does not contain the table ""CP_DW"."dbo"."StgDimSalesTargetSetup"". The table either does not exist or the current user does not have permissions on that table.
Insert into [192.168.0.40].CP_DW.dbo.StgDimSalesTargetSetup
Select t.SalesTargetCode, t.SalesTargetId, t.TargetDefination,
    --LEFT(DATENAME(MM, t.PeriodFrom), 3) + '-' + CONVERT(VARCHAR(4), DATEPART(YY, t.PeriodFrom)) AS MonthYear
    --CONVERT(char(8), PeriodFrom, 112) [Datkey],
    Left(CONVERT(char(8), PeriodFrom, 112), 6) [Monthkey],
    t.PeriodFrom, t.PeriodTo, t.TargetAmount
FROM [SO_SalesTargetSetup] t
It seems to me that files created on Unix machines with the line terminator LF, or CHAR(10), cannot be imported using the Bulk Insert statement. Is this a bug, or an oversight by Microsoft? Does this mean that unless one replaces every LF with CRLF, there is no way to use Bulk Insert to import Unix files? This is very strange behavior by MSSQL. Even lesser programs such as Excel and Word automatically recognize CHR(10) as a line-termination character. Am I missing something, or is this just the way MSSQL is?
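For what it's worth, Bulk Insert can consume LF-terminated files when the terminator is spelled out in hex, which sidesteps the quirk where '\n' is silently promoted to CRLF; a sketch with illustrative names (hex terminators are not available on very old versions):

BULK INSERT dbo.TargetTable
FROM 'C:\data\unix_file.txt'
WITH (ROWTERMINATOR = '0x0a')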
I've created an excel spreadsheet with a data connection. This data connection uses a query that runs against a read-only database.
The issue I'm having is that the query never seems to finish running against the database, whether I open the Excel spreadsheet to view the data or run the query in SSMS.
I created the connection on the Data ribbon by going to From Other Sources --> From SQL Server and using the Data Connection Wizard.
Is there some kind of setting or property I'm missing that would allow this query to finish running?
In C#/.NET I have the ability to create validations of my data with regular expressions. Does SQL have the same feature? I would like to do data validation of all my insert statements inside SQL Server. Is that possible?
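T-SQL has no built-in full regular-expression engine, but LIKE patterns inside CHECK constraints cover many validations, and a SQLCLR function can host real .NET regex when needed. A sketch with an illustrative table and columns:

CREATE TABLE dbo.Customer
(
    Email varchar(100) NOT NULL
        CONSTRAINT CK_Customer_Email CHECK (Email LIKE '%_@_%._%'),
    Zip char(5) NOT NULL
        CONSTRAINT CK_Customer_Zip CHECK (Zip NOT LIKE '%[^0-9]%')  -- digits only
);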
I have a database which will be generated via script. However, there are some tables with default data that need to exist in the database. I was wondering if there could be a program that generates INSERT statements for the data in a table. I know about the import/export program that comes with SQL Server, but it doesn't have an interface that lets you copy the insert script and values into an SQL file (or does it?)
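A quick-and-dirty sketch that builds the INSERT statements with a query (table and column names are illustrative; note the doubled quotes to escape embedded apostrophes):

SELECT 'INSERT INTO dbo.Status (Id, Name) VALUES ('
       + CAST(Id AS varchar(10)) + ', '''
       + REPLACE(Name, '''', '''''') + ''');'
FROM dbo.Status;

Newer versions of SSMS can also do this natively: right-click the database > Tasks > Generate Scripts > Advanced > set "Types of data to script" to "Schema and data".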
When both fields are set to the SQLCHAR data type, the data imports successfully without the quotes, as 01 and 02. These fields will always be numbers and I want them as integers, so I set the data type to int in the database and SQLINT in the format file. The result was that 01 became 12592 and 02 became 12848. Where are these numbers coming from?
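Those values are the raw ASCII bytes of the text reinterpreted as a native integer: SQLINT tells bcp the field already holds binary integers, so '0' (0x30) followed by '1' (0x31), read little-endian, becomes 0x3130 = 12592, and '02' becomes 0x3230 = 12848. A quick check (T-SQL casts binary as big-endian, so the hex is written most-significant byte first):

SELECT CAST(0x00003130 AS int) AS FromText01,  -- 12592
       CAST(0x00003230 AS int) AS FromText02;  -- 12848

Keeping SQLCHAR in the format file and int in the table lets SQL Server convert the text to integers during the load.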