My current project is creating a social network for the university I work for. One of the features allows members of a group to send a message to all the other group members. Currently, I run a foreach loop over each of the group members and run a separate INSERT statement to insert a message into my messages table. Once the group has several hundred members, everybody starts getting timeout errors. What is the best way to do this?
Here are two suggestions I've received: construct one SQL statement that would contain multiple INSERT statements. It would be a large statement like:
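For illustration only (the real table and column names weren't posted, so the ones below are hypothetical), a single batch of that shape might look like this:

-- one batch containing many INSERT statements (hypothetical table and columns;
-- @GroupId, @SenderId, @Body would be parameters supplied from the C# code)
INSERT INTO dbo.Messages (GroupId, RecipientId, SenderId, Body) VALUES (@GroupId, 101, @SenderId, @Body);
INSERT INTO dbo.Messages (GroupId, RecipientId, SenderId, Body) VALUES (@GroupId, 102, @SenderId, @Body);
INSERT INTO dbo.Messages (GroupId, RecipientId, SenderId, Body) VALUES (@GroupId, 103, @SenderId, @Body);
-- ...one INSERT per member; on SQL Server 2008+ the same thing can be written
-- as a single INSERT with a multi-row VALUES list:
INSERT INTO dbo.Messages (GroupId, RecipientId, SenderId, Body)
VALUES (@GroupId, 101, @SenderId, @Body),
       (@GroupId, 102, @SenderId, @Body),
       (@GroupId, 103, @SenderId, @Body);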
Or, do the foreach loop in a stored procedure. I know the pros and cons of sprocs versus dynamic SQL is a sticky subject, and, personally, I'd prefer to keep my logic in the C# code-behind file. What is the best way to do this in an efficient manner? I'd be happy to share some code, if that would help. Thanks for your input!
Hi everyone, I am trying to bulk insert some data from a csv file to a table. I can do it as part of a button on-click event, but don't know how to do it using a stored procedure. This is what I have:

ALTER PROCEDURE dbo.TestImportData
( @filename varchar(50) )
AS
BULK INSERT dbo.[TestTable]
FROM @filename
WITH ( FIELDTERMINATOR = ',' )
/* SET NOCOUNT ON */
RETURN

I get the error message "Incorrect syntax near '@filename'. Incorrect syntax near 'with'." What am I doing wrong? What should I do? Please help!
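For what it's worth, BULK INSERT does not accept a variable in the FROM clause; the statement has to be assembled as a string and run with dynamic SQL. A minimal sketch of that pattern around the procedure above (no validation or quoting of @filename shown):

ALTER PROCEDURE dbo.TestImportData
    @filename varchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max);

    -- build the BULK INSERT statement with the file name embedded as a literal
    SET @sql = N'BULK INSERT dbo.[TestTable] FROM ''' + @filename
             + N''' WITH (FIELDTERMINATOR = '','')';

    EXEC (@sql);
END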
OK, I'm probably being a bone-head here and am clearly in over my head but how do you (or can you?) set up a Bulk Insert to take a dynamic path/file name?
What I want to do is pass in the path and file name from an external process to a stored procedure that bulk inserts the content of the file and then does some other routines on it. I haven't had any luck getting Bulk Insert to run if the path/file name is not hard-coded in the sproc as a string.
The point is to have a master routine that can exercise the process for several different customers and use metadata in a table to determine which file to bulk insert.
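A sketch of that master routine (the metadata table and its columns below are made up, since they weren't described): loop over the metadata rows and build each BULK INSERT dynamically.

DECLARE @CustomerId int, @FilePath nvarchar(260), @sql nvarchar(max);

DECLARE file_cursor CURSOR FAST_FORWARD FOR
    SELECT CustomerId, FilePath
    FROM dbo.ImportFileMetadata;           -- hypothetical metadata table

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @CustomerId, @FilePath;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- assemble the BULK INSERT for this customer's file
    SET @sql = N'BULK INSERT dbo.CustomerStaging FROM ''' + @FilePath
             + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')';
    EXEC (@sql);

    -- ...run the follow-on routines for @CustomerId here...

    FETCH NEXT FROM file_cursor INTO @CustomerId, @FilePath;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;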
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file to an empty SQL table that has an identity column defined. How do I tell the Bulk Insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want the identity column in the table too.
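Outside the SSIS Bulk Insert task itself, one common T-SQL workaround (shown only as a sketch, with made-up names and a deliberately small table) is to load through a view that leaves the identity column out, so the engine generates the identity values; the task or a BULK INSERT statement can then target the view:

-- target table with an identity column
CREATE TABLE dbo.ImportTarget (Id int IDENTITY(1,1) PRIMARY KEY, SomeValue varchar(100));
GO
-- view exposing only the non-identity columns
CREATE VIEW dbo.ImportTarget_Load AS SELECT SomeValue FROM dbo.ImportTarget;
GO
-- bulk insert into the view; Id is filled in by the identity
BULK INSERT dbo.ImportTarget_Load
FROM 'C:\data\input.txt'                        -- hypothetical path
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

A format file that maps the file's fields to the non-identity columns is the other usual route.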
Hi guys, consider this scenario. I have two tables. Table1, Users: fields are id, name, joindate, designation, status. Table2, People: fields are id, name, status. The Users table has data in it, say 100 records. I have to fill the People table where id=id, name=name and status=status. Any way? Regards, Naveen
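If the column names really do line up like that, a plain INSERT ... SELECT covers it; roughly:

-- copy the matching columns from Users into People
INSERT INTO People (id, name, status)
SELECT id, name, status
FROM Users;

If People may already contain some of the ids, add a WHERE NOT EXISTS (or a join) to skip the duplicates.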
I have a table that contains comma-delimited text, and I am trying to convert this into another table.
e.g. my target table looks like
Produce|Price|QuantityPerPrice
and my input table contains strings such as
"apples","7.5","10"
"pears","10","8"
"oranges","8","6"
Does anyone have any ideas on how to do this? I am after a solution that does them all at once: I am currently using charindex() to find each column, one at a time, but given the speed of BULK INSERT I would much rather do it for the whole table in one go. The one solution that I don't want to resort to is to export the table with delimited strings to a data file, then BULK INSERT it...
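For what it's worth, the charindex() approach can be applied to the whole table in a single INSERT ... SELECT instead of one column at a time. A sketch, assuming the input table is a single varchar column (called RawLines(line) here, a made-up name) holding exactly three quoted, comma-separated values per row:

-- parse "a","b","c" style rows in one pass (RawLines and the target name are placeholders)
INSERT INTO dbo.TargetTable (Produce, Price, QuantityPerPrice)
SELECT
    SUBSTRING(line, 2, CHARINDEX('","', line) - 2),
    SUBSTRING(line, CHARINDEX('","', line) + 3,
              CHARINDEX('","', line, CHARINDEX('","', line) + 3) - CHARINDEX('","', line) - 3),
    SUBSTRING(line, CHARINDEX('","', line, CHARINDEX('","', line) + 3) + 3,
              LEN(line) - CHARINDEX('","', line, CHARINDEX('","', line) + 3) - 3)
FROM dbo.RawLines;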
This is in the context of an ETL process - loading large blocks of data.
I bulk insert a bunch of rows (could be millions, more likely tens of thousands) into a table, perform some queries and then I need to append those rows into a second table and truncate the first table. From an efficiency standpoint, switching the load table into a partitioning scheme would be best, but I can't use partitioned tables for reasons not relevant here.
So, what's going to be the most efficient solution? I can easily do a simple insert into/select from to copy the rows, but that will be fully logged, and I'd really like a minimally logged solution. Looking at the docs for bulk insert/bulk copy, I can't see a solution that will copy data from one table to another, but I suspect that I'm overlooking something. I could re-load the rows from the client using a second bulk copy, but that seems like a terrible waste (although the client is on the same box, and always will be, so it's not as bad as it might be).
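One thing that may be what's being overlooked: on SQL Server 2008 and later, INSERT ... SELECT with a TABLOCK hint on the target can be minimally logged under the simple or bulk-logged recovery model (most readily when the target is a heap). A sketch with made-up table names, assuming both tables have the same column layout:

-- minimally loggable append under SIMPLE/BULK_LOGGED recovery (heap target, TABLOCK)
INSERT INTO dbo.FinalTable WITH (TABLOCK)
SELECT *
FROM dbo.LoadTable;

TRUNCATE TABLE dbo.LoadTable;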
Here is an example of a bulk insert into a SQL Server table. From the application you pass an XML string to a stored procedure, and it inserts all the data into the table using that XML. Example SP:
CREATE PROCEDURE StoredProcName
( @strXML varchar(8000) )
AS
DECLARE @intPointer int
EXEC sp_xml_preparedocument @intPointer OUTPUT, @strXML
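The posted procedure stops after preparing the XML document; the rest of the pattern usually looks something like the following continuation (the element path, table and column names are placeholders, not from the original post):

-- shred the prepared XML into rows and insert them (placeholder names)
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENXML(@intPointer, '/Root/Row', 2)
WITH (Col1 varchar(100), Col2 varchar(100))

-- release the memory held by the prepared document
EXEC sp_xml_removedocument @intPointer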
I used bulk insert to insert a txt file into a table. It works fine (see code below). Now I have one txt file with the column names in the first row and about 200 columns. There is no table created beforehand. How do I write code to create a destination table based on the first row of the txt file, so that bulk insert will work for that txt file?
BULK INSERT #MBRACCT
FROM 'c:\order.TXT'
WITH
(
FIELDTERMINATOR = '|',
FIRSTROW = 2,
ROWTERMINATOR = '\n'
)
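One possible sketch for building the table from the header row (assumptions: the header is pipe-delimited like the data, every column can start out as varchar(255), and the header values are valid column names; all names below are made up, and a permanent table is created because a #temp table built inside EXEC would go out of scope):

-- 1. pull just the header row into a one-column staging table
CREATE TABLE #Header (HeaderLine varchar(max));

BULK INSERT #Header
FROM 'c:\order.TXT'
WITH (ROWTERMINATOR = '\n', LASTROW = 1);

-- 2. turn 'COL1|COL2|...' into 'COL1 varchar(255), COL2 varchar(255), ...' and build the table
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT TOP (1) @cols = REPLACE(HeaderLine, '|', ' varchar(255), ') + ' varchar(255)'
FROM #Header;

SET @sql = N'CREATE TABLE dbo.MBRACCT (' + @cols + N');';
EXEC (@sql);

-- 3. then run the BULK INSERT above against the new table with FIRSTROW = 2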
I have a file I'm trying to do some non-set-based processing with. In order to make sure I keep the order of the results, I want to BULK INSERT into a temp table with an identity column. The spec says that you should be able to use either KEEPIDENTITY or KEEPNULLS, but I can't get it to work. For once, I have full code - just add any file of your choice that doesn't have commas/tabs. :) Any suggestions, folks?

create table ##Holding_Tank (full_record varchar(500)) -- this works

create table ##Holding_Tank (id int identity(1,1) primary key, full_record varchar(500)) -- that doesn't work

BULK INSERT ##Holding_Tank
FROM "d:\telnet_scripts\psaxresult.txt"
WITH
(
TABLOCK,
KEEPIDENTITY,
KEEPNULLS,
MAXERRORS = 0
)

select * from ##Holding_tank
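For what it's worth, KEEPIDENTITY alone doesn't let BULK INSERT cope with a file that has fewer fields than the table has columns; the usual fix when the file has no id field is a format file that maps the single field to full_record and leaves id to the identity. A minimal XML format file for that (untested sketch; the terminator, length and file names are assumptions):

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/01/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="500"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="full_record" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>

BULK INSERT ##Holding_Tank
FROM 'd:\telnet_scripts\psaxresult.txt'
WITH (FORMATFILE = 'd:\telnet_scripts\holding_tank.xml', TABLOCK);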
I have three tables: Student, Daily_Attendance_Master and Daily_Attendence_Details.
I want to run SQL that inserts or updates student attendance (absent or present) in Daily_Attendence_Details based on Daily_Attendance_Master_Id and Student_Id (from one roll number to another).
If both are present in table Daily_Attendence_Details, then I want to run an update of attendance from one roll number to another roll number in Daily_Attendence_Details on the basis of Daily_Attendence_Details_Id.
And if either one is not present, I want to run an insert of student attendance from one roll number to another roll number in Daily_Attendence_Details.
I give below the structure of the three tables Student, Daily_Attendance_Master and Daily_Attendance_Details.
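Since the table structures didn't come through here, the following is only a rough sketch of the insert-or-update step using MERGE; the parameter names and the Attendance_Status column are assumptions, so they would need to be mapped onto the real columns:

-- assumed inputs for one student/master row
DECLARE @Daily_Attendance_Master_Id int = 1,
        @Student_Id                 int = 1,
        @Attendance_Status          varchar(10) = 'Present';

MERGE dbo.Daily_Attendence_Details AS target
USING (SELECT @Daily_Attendance_Master_Id AS Daily_Attendance_Master_Id,
              @Student_Id                 AS Student_Id,
              @Attendance_Status          AS Attendance_Status) AS source
ON  target.Daily_Attendance_Master_Id = source.Daily_Attendance_Master_Id
AND target.Student_Id                 = source.Student_Id
WHEN MATCHED THEN
    UPDATE SET target.Attendance_Status = source.Attendance_Status
WHEN NOT MATCHED THEN
    INSERT (Daily_Attendance_Master_Id, Student_Id, Attendance_Status)
    VALUES (source.Daily_Attendance_Master_Id, source.Student_Id, source.Attendance_Status);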
I have a bulk insert situation that would be nice to be able to pull off. I have a flat file with 46 columns that are to go into a table. In the table, I want to have a 47th column that gets updated later on by a stored proc saying whether the import into the system was successful or not. I have the rowterminator set as '"' thinking that would tell SQL to begin on the next row, leaving the importstatus column null, but I still receive an error.
First of all, is this idea possible within this insert statement? Secondly, if so, what would be the syntax to tell the insert statement to skip that particular column? It is the last column listed in the table, so it just needs to start on the next row after it inserts the last bit of data in the flat file.
If this is not possible, is it possible to bulk insert into a temp table?
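Bulk inserting into a temp (staging) table is certainly possible, and it is probably the simplest way around the extra column: load the 46 fields into a staging table that matches the file, then copy into the real 47-column table and simply leave importstatus NULL. A sketch with made-up names and terminators:

-- staging table shaped like the 46-column flat file (columns abbreviated here)
CREATE TABLE #Staging (Col1 varchar(100), Col2 varchar(100) /* ... through Col46 ... */);

BULK INSERT #Staging
FROM 'C:\data\import.txt'                   -- hypothetical path
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

-- copy into the real table; the 47th column (importstatus) is left NULL for the later proc
INSERT INTO dbo.RealTable (Col1, Col2 /* ... Col46 ... */)
SELECT Col1, Col2 /* ... Col46 ... */
FROM #Staging;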
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
codepage = 'RAW',
datafiletype = 'native',
fieldterminator = ' ',
keepidentity,
keepnulls
);
go
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
I am running a set of SQL statements on a SQL Server to insert flat file data into a SQL table. The flat file has already been FTP'd to the SQL Server. I seem to be getting an error, which is possibly pointing to a permissions issue.
The statements:
BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM 'c:\jedox_daily\jdcom4401.txt'
WITH
(
FIRSTROW = 2,
MAXERRORS = 0,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
GO
The error is: Msg 4861, Level 16, State 1, Line 1 Cannot bulk load because the file "c:\jedox_daily\jdcom4401.txt" could not be opened. Operating system error code 3 (failed to retrieve text for this error. Reason: 1815).
If it is permissions issue, how do I overcome this?
Hi all, we have an application through which we are bulk inserting rows into a view. The definition of the view is such that it selects columns from a table on a remote server. I have added the servers using sp_addlinkedserver on both database servers. When I call the Commit API of OLE DB I get the following error:

Error state: 1, Severity: 19, Server: TST-PROC22, Line#: 1, msg: SqlDumpExceptionHandler: Process 66 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.

I would like to know if we can bulk insert rows into a view that accesses a table on the remote server using the "bulk insert" or bcp command. I tried a small test through SQL Query Analyzer to use "bulk insert" on such a view. The test that I performed was the following.

On database server 1:

create table iqbal (var1 int, var2 int)

On database server 2 (remote server):

create view iqbal as select var1, var2 from [DBServer1].[SomeDB].[dbo].[iqbal]

set xact_abort on
bulk insert iqbal from '\\Machine\Iqbal\iqbaldata.txt'

The bulk insert operation failed with the following error message:

[Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionCheckForData (CheckforData()). Server: Msg 11, Level 16, State 1, Line 0 General network error. Check your network documentation. Connection Broken

The contents of the file iqbaldata.txt were: 112233

If the table that the view references is on the same server then we are able to bulk insert successfully. Is there a way by which I should be able to bulk insert rows into a view that selects from a table on a remote server? If not, could anyone suggest a workaround? I would actually like to know some workaround to get the code working using OLE DB. Due to unavoidable reasons I cannot output the records to a file and then use bcp to bulk insert the records into the remote table. I need to have some way of doing it using OLE DB.

Thanks in advance,
Iqbal
Overall goal: Write a Bulk Insert statement using the UNC path of a filetable directory.
Issue: When using the UNC path of the filetable directory in a Bulk Insert statement, I receive "Operating system error code 50 (The request is not supported.)". Looking for confirmation as to whether this is truly not supported.
Environment: SQL Server 2012 Standard. Windows Server 2008 R2 Standard
Hi, I have a data file which consists of data as below: 4 PPU_FFA7485E0D|| T_GLR_DET_11||
While I am inserting into the table using bulk insert, this pipe (||) is also getting inserted into the table. Here is the query I am using to insert the data with bulk insert.
BULK INSERT TABLE_NAME FROM FILE_PATH WITH (FIELDTERMINATOR = ''||'''+',KEEPNULLS,FIRSTROW=2,ROWTERMINATOR = '''')
Can we bulk insert only the desired columns from a flat file to a table?
I am using SSIS to bulk insert from a file with more than 200 columns. I am trying to find a way I can bulk insert them into multiple tables through SSIS.
The one way I can think of is to pre-map the columns from the file to the destination tables and build numerous Bulk Insert tasks to achieve that, but I'm not sure if SSIS will let me do that.
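One pattern that usually works, whether it is driven from SSIS (Execute SQL tasks) or plain T-SQL, is to bulk load the whole file into a single wide staging table and then fan it out with ordinary INSERT ... SELECT statements. A sketch with made-up names:

-- load the full 200+ column file into one staging table that mirrors the file
BULK INSERT dbo.Staging_AllColumns
FROM 'C:\data\bigfile.txt'                  -- hypothetical path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- then split the columns out to the destination tables
INSERT INTO dbo.TableA (ColA1, ColA2)
SELECT ColA1, ColA2 FROM dbo.Staging_AllColumns;

INSERT INTO dbo.TableB (ColB1, ColB2, ColB3)
SELECT ColB1, ColB2, ColB3 FROM dbo.Staging_AllColumns;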
Hello, I wanted to give the forum my current process flow to see if I am close or have some more work to do. The objective is to import the data as fast as possible without losing query responsiveness of the search on the web side. Any type of response will be greatly appreciated.
Current Process
I receive multiple product inventory lists from multiple vendors. These inventory lists are in many different formats like .xls format, .txt format, .csv format and .dbf format. My server converts the file format to raw .csv. The table is very large and will consist of millions of rows. Inventory comes in alphanumeric format.
Each of the inventory lists is NOT in the same format. What I mean by this is that there are different header names in the user's inventory list than what matches our database table. For example, a user may have an Excel document with the header name "Amt" but our database table field name is "Amount". In order to make this import process automated, I make a single mapping file for each user. This mapping file relates the user's field header in their inventory file to that of the database table.
Currently I have a process converting all files to .csv format. Every 15 minutes a routine runs that converts the .csv to XML and then performs the import Bulk Insert routine. The process deletes the user's entire inventory, then imports to a "tempinventory" table, which then updates the inventory table. During these 15 minutes we may get 20 inventory lists to import, which could be half a million records. I have this inventory table indexed using the primary key, which is the inventory number. This is the search criterion used (Inventory Number) on the web side. The import routine to SQL works quickly ONLY if Full-Text Indexing is turned OFF. I assume I need Full-Text Indexing on so that queries of the full inventory lists get a fast response on the web side.
Assumed Issues
Right now if I were to turn off the indexing for each import then we would have slow queries against the database on the web side.
To have one table with millions of rows that is constantly updated as well as queried at the same time is not efficient.
Assumed Corrections
Right now it seems that partitioning the table into 10 numeric partitions, one for each number, would leverage the import routine as well as the web-side search routine. I would then partition the alpha portion in 1-letter increments so we would have 26 partitions, one for each letter, for a total of 36 partitions.
If I have the partitions separated then it will be easier to update the separate index as well as perform maintenance on the index; I assume this is correct.
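For what it's worth, a 36-way split on the first character of the inventory number could be sketched like this (all names are made up, and the boundaries assume the leading character is 0-9 or A-Z):

-- partition function: one partition for '0' and one for each of '1'-'9' and 'A'-'Z' (36 total)
CREATE PARTITION FUNCTION pf_InventoryFirstChar (char(1))
AS RANGE RIGHT FOR VALUES
('1','2','3','4','5','6','7','8','9',
 'A','B','C','D','E','F','G','H','I','J','K','L','M',
 'N','O','P','Q','R','S','T','U','V','W','X','Y','Z');

-- map every partition to the same filegroup for simplicity
CREATE PARTITION SCHEME ps_InventoryFirstChar
AS PARTITION pf_InventoryFirstChar ALL TO ([PRIMARY]);

-- inventory table partitioned on a persisted computed "first character" column
CREATE TABLE dbo.Inventory
(
    InventoryNumber varchar(50) NOT NULL,
    Amount          money       NULL,
    FirstChar AS CAST(LEFT(InventoryNumber, 1) AS char(1)) PERSISTED
) ON ps_InventoryFirstChar (FirstChar);

-- non-unique clustered index to support the web-side lookups by inventory number
CREATE CLUSTERED INDEX CIX_Inventory_InventoryNumber
ON dbo.Inventory (InventoryNumber);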
Future Plans
I am upgrading the web-side application to .NET 3.5 so we can take advantage of XLINQ and LINQ, so our searches from the web side are faster and more efficient.
I am also looking into building a Silverlight application that the user can install locally; it would take their live database file and send updates to my server so they do not need to send in their inventory file at all and the inventory lists are live. This will alleviate the need to delete the user's full listing in order to make a complete update. As stated above, sometimes an update may just mean changing the amount on only 2 of 40,000 rows.
I was also looking into db4o to see if it would be beneficial as well (http://www.db4o.com/). Has anyone worked with this before in a similar manner?
Questions
I would like to make this process much more efficient from the import routine to the search routine. Is setting up the partitions as discussed a stable plan for both routines? Is the BULK INSERT using XML to SQL the most efficient way of importing the data to SQL?
How would I handle the full-text indexing to allow fast import routines without slowing down the web-side searches? After importing new data, do I need to "update" the index as well? What is a good set of "preventative maintenance" standards I should follow when dealing with this many table updates as well as the catalogs and table data? I know there are benefits of using LINQ when querying the database from the web side, but are there any other benefits that would fit into this current process? As for the Silverlight application, would it be beneficial for the user as well as me to have the application poll their database file to find changes and send only the updated values of the list to the server via XML, which is then updated by SQL?
I am unsure what is the best way to make this process as easy and automated as possible while giving the user the fastest experience possible when searching from the web side. Is this a smarter idea so I can track just the changes made by the user on their inventory list instead of importing the same redundant data they have? Would implementing something like db4o be beneficial for this process (http://www.db4o.com/)? Please let me know if I am way off on this process or if there are some benefits that I am not using. I have been doing a lot of research and this is what I have come up with, so I wanted to ask the community what they thought about it, as many heads are better than one. Please feel free to rip the process apart too; I take constructive criticism.
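On the full-text question specifically, one pattern worth testing (a sketch, assuming a full-text index already exists on the inventory table, here called dbo.Inventory) is to switch change tracking to manual so the import window isn't slowed by continuous population, and then kick off a population after the load:

-- before the 15-minute import window: stop automatic full-text population
ALTER FULLTEXT INDEX ON dbo.Inventory SET CHANGE_TRACKING MANUAL;

-- ...bulk import runs here...

-- after the import: push the accumulated changes into the full-text index
ALTER FULLTEXT INDEX ON dbo.Inventory START UPDATE POPULATION;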
I have to update a field within a table of 60 records or so. Each record has a different field value. It's type varchar. I was given an Excel file with the field values and was thinking of a bulk update along the lines of bulk insert, but I don't recall that it's possible that way.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
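For 60 rows the staging-table route really is about as simple as it gets: bulk insert (or just script) the Excel values into a staging table, then do one set-based UPDATE with a join. A sketch with made-up names:

-- staging table holding the key and the new value from the Excel sheet
CREATE TABLE #NewValues (Id int PRIMARY KEY, NewValue varchar(200));

-- populate #NewValues via BULK INSERT from a saved CSV, or just paste 60 INSERT statements

-- one set-based update joined on the key
UPDATE t
SET t.SomeField = n.NewValue
FROM dbo.TargetTable AS t
JOIN #NewValues AS n
    ON n.Id = t.Id;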
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file TEXT_1400.txt has 1400 columns. I am unable to load data to my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns are allowed per table in SQL Server 2008.
We can still go ahead and create a 'Wide Table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on the wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.
Is there any proper way to load this kind of file into the db table?
I am currently working with C and SQL Server 2012. My requirement is to bulk fetch the records and insert/update the same in the other table with some business logic. How do I do this?
I have a table containing 8 million records. I need to replace 2 million of these records with a scaled-down query that goes something like:

SELECT 1, ShareholderID, Assets1 FROM MyTable (yields approx. 200,000 records)
SELECT 2, ShareholderID, Assets2 FROM MyTable (yields approx. 200,000 records)
. . .
SELECT 10, ShareholderID, Assets1 + Assets2 + Assets3 + ... + Assets9 FROM MyTable (yields approx. 200,000 records)
Updates and cursors just seem to be too slow.
So far I have done the following, but was wondering if anyone could think of a better way:
1. SELECT the 6 million records that don't need to be deleted into a #TempTable.
2. Use the statements above to select into the same #TempTable.
3. DROP and recreate the original table.
4. SELECT the 6 + 2 million records INTO the original table.
This seems rather convoluted. Is there a better approach? Would it be worthwhile to dump the data to a file and use bcp / BULK INSERT?