OLE DB Destination - Fast Load With Maximum Insert Commit Size
Sep 8, 2006
I'm seeing some strange behavior from the OLE DB Destination when using the "fast load" access mode and setting the "Maximum insert commit size".
When I do not set the "Rows per batch" or the "Maximum insert commit size", the package I'm working with inserts 123,070 rows using a single "insert bulk" statement. The data seems to flow through the pipeline until it gets to the OLE DB Destination and then I see a short pause. I'm assuming the pause is from the "insert bulk" statement handling all of the rows at once.
When I set the "Rows per batch" option but leave the "Maximum insert commit size" alone, I generally see the same behavior -- a single "insert bulk" statement that handles all 123,070. In this case, however, the "insert bulk" statement has a "ROWS_PER_BATCH" option appended to the statement that matches the "Rows per batch" setting. This makes sense. I'm assuming the "insert bulk" then "batches" the rows into multiple insert statements (although I'm unsure of how to confirm this). This version of the "insert bulk" statement appears to run in about the same time as the case above.
When I set the "Maximum insert commit size" option and leave the "Rows per batch" statement alone, I see multiple "insert bulk" statements being executed, each handling the lower of either the value I specify for the "Maximum insert commit size" or the number of rows in a single buffer flowing through the pipeline. In my testing, the number of rows in a buffer was 9,681. So, if I set the "Maximum insert commit size" to 5,000, I see two "insert bulk" statements for each buffer that flows into the OLE DB Destination (one handling 5,000 rows and one handling 4,681 rows). If I set the "Maximum insert commit size" to 10,000, I see a single "insert bulk" statement for each buffer that flows into the OLE DB Destination (handling 9,681 rows).
Now the problem. When I set the "Maximum insert commit size" as described in the last case above, I see LONG pauses between buffers being handled by the OLE DB Destination. For example, I might see one buffer of data flow through (and be handled by one or more "insert bulk" statements based on the "Maximum insert commit size" setting), then see a 2-3 minute pause before the next buffer of data is handled (with its one or more "insert bulk" statements being executed). Then I might see a 4-5 minute pause before the next buffer of data is handled. The pause between the buffers being passed through the OLE DB Destination (and handled via the "insert bulk" statements) is sometimes shorter, sometimes longer.
Using Profiler, I don't see any other activity going on within the database or within SQL Server itself that would explain the pauses between the buffers being handled by the OLE DB Destination and the resulting "insert bulk" statements...
Can anyone explain what is going on here? Is setting the "Maximum insert commit size" a bad idea? What are the differences between it and the "Rows per batch" setting and what are the recommended uses of these two options to try to improve the performance of the insert (particularly when handling millions of rows)?
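For reference, the two settings appear to map onto the same knobs that T-SQL's BULK INSERT exposes, which may help frame the question: ROWS_PER_BATCH is only a hint about the total number of rows in the load, while BATCHSIZE controls how many rows are committed per transaction. A rough sketch with an invented table name and file path:

-- Illustration only: table name and file path are made up.
-- ROWS_PER_BATCH = hint about total rows (like "Rows per batch")
-- BATCHSIZE      = rows committed per transaction (like "Maximum insert commit size")
BULK INSERT dbo.MyTarget
FROM 'C:\loads\rows.txt'
WITH (
    ROWS_PER_BATCH = 123070,
    BATCHSIZE = 10000,
    TABLOCK
);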
If I use an OLE DB Destination with Fast Load, and enable check constraints, I would expect to see this work as BCP would in this scenario on 2005. However, instead, I get the error ALTER TABLE permissions required.
I understand that when using BCP, if you disable check constraints and triggers, then you need alter permissions. But, when you explicitly enable these, then you do not need this permission. I would expect the same behaviour in SSIS, but I am not seeing it. Fast Load seems to always require ALTER TABLE permissions.
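In BCP/BULK INSERT terms, the behaviour I expected is what the CHECK_CONSTRAINTS hint gives you: constraints are checked while the rows are loaded, so no ALTER TABLE permission should be needed. A rough T-SQL sketch, with an invented table name and file path:

-- Illustration only. With CHECK_CONSTRAINTS the load validates constraints as
-- rows arrive; without it they are skipped and left untrusted, which is where
-- the ALTER TABLE requirement comes from.
BULK INSERT dbo.MyTarget
FROM 'C:\loads\rows.txt'
WITH (CHECK_CONSTRAINTS);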
I have an OLE DB Command transformation that inserts a row. If the insert SQL command fails for some reason, I use the "Redirect Row" option to send the row to a script component. In there, I get the error description into a string variable in order to log the error into an error table.
For example, if a primary key violation arises, I would like the error description to be "The data value violates integrity constraints". I get it using ComponentMetaData.GetErrorDescription. When I use the "table or view" mode, I get the error description above without any problem. But if I use "table or view - fast load", the description is something like "No status available". However, if I use the error output to fail the component, I get the right error description in the OnError event handler. Is there a way to have both behaviours, I mean, to be able to redirect error rows to an output and get the correct error description (like the one in the OnError event handler) while using fast load mode?
The DBA is not around and I would like to see if someone had a good recommendation on what the Maximum insert commit size (MICS) should be for an OLE DB Destination where the default of ZERO is not being used.
I want to use Fast Load and I want to use Redirect Row to catch the errors. I just performed a test where the OLE DB Destination was NOT set to Fast Load - it took FOREVER and I cannot have this kind of performance.
I know that this may be totally dependent on what is being inserted, but is there any problem with just setting this value to, say, 800,000?
The destination SQL database's recovery mode is set to SIMPLE as it is not a transactional database.
I have some code I built 2 weeks ago which I've been running daily, but it has suddenly stopped working with the following error:
“The table "tbl_Intraday_Tmp" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.” When I google this, it seems to be related to tables with vast numbers of columns.
My table tbl_Intraday_tmp is relatively small. It has 7 columns: 1 varchar(5), 3 decimal(9,3) and 2 decimal(18,0). The bit I'm puzzled by is that it was working and then stopped.
I don't recall changing anything, but I wouldn't rule that out. I've inspected the source files and I don't believe they have changed either.
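For reference, a quick back-of-the-envelope check of the definitions described above (column names invented; the post lists six definitions for the seven columns) suggests the row is nowhere near 8,060 bytes:

-- Approximate storage, assuming the definitions above:
--   varchar(5)    -> up to 5 bytes of data plus 2 bytes of length overhead
--   decimal(9,3)  -> 5 bytes each (precision 1-9)
--   decimal(18,0) -> 9 bytes each (precision 10-19)
CREATE TABLE dbo.tbl_Intraday_Tmp_check (
    col1 varchar(5),
    col2 decimal(9,3),
    col3 decimal(9,3),
    col4 decimal(9,3),
    col5 decimal(18,0),
    col6 decimal(18,0)
);
-- Roughly 7 + 3*5 + 2*9 = 40 bytes of data plus per-row overhead, far below
-- 8,060, so the warning presumably refers to a different (wider) definition of
-- the table than the one described here.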
How can the commit interval for the OLE DB destination be set when the data access mode is not "fast load"?
What happens in the OLE DB destination in the case of a failure in the package? How does the rollback happen? I mean, how is the commit point set in the OLE DB destination? I know about the transaction options, which are at the package level.
We are experiencing problems when using the OLE DB Fast Load option with a transaction. We have a sequence container containing a Data Flow Task with an OLE DB source selecting from tab1 LEFT JOIN tab2, and an OLE DB Destination with Fast Load inserting into tab2.
The setup/troubleshooting is:
We are using TransactionOption=Required on the sequence container holding the Data Flow Task
OLE DB destination with Fast Load, no table lock, no check constraints
Doesn't happen on small data sets - it seems that if everything can be contained in one buffer, the task succeeds
Only a problem when the selection and insertion are on the same table - or on a view based on the same table
I know that we can use the OLE DB destination without Fast Load, but this will perform badly on large data sets, so that is not an option.
The error message is as follows:
Information: 0x402090DF at dft Upsert Sag, OLE DB Destination [8295]: The final commit for the data insertion has started.
Error: 0xC0202009 at dft Upsert Sag, OLE DB Destination [8295]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "This operation conflicts with another pending operation on this transaction. The operation failed.".
Does anyone have any ideas how to solve this problem?
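For context, the failing data flow is logically equivalent to something like the following self-referencing insert (column names are invented; tab1 and tab2 are the tables mentioned above), all running inside the distributed transaction that TransactionOption=Required creates:

-- Sketch only: the source reads tab2 while the fast-load destination bulk
-- inserts into the same tab2 within one DTC transaction, which appears to be
-- where the "conflicts with another pending operation" error comes from.
INSERT INTO dbo.tab2 (key_col, payload)
SELECT t1.key_col, t1.payload
FROM dbo.tab1 AS t1
LEFT JOIN dbo.tab2 AS t2
    ON t2.key_col = t1.key_col
WHERE t2.key_col IS NULL;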
I got an error: [OLE DB Destination [16]] Error: Failed to open a fastload rowset for "[dbo].[tempMaster]". Check that the object exists in the database.
I am creating and dropping this table: I create it at the beginning, and after the insert/update I drop it, but I get this error. I am using SQL Server 2008 R2.
Is there a way to get replication to commit records in batches instead of all at once? I am in a 24/7 shop and some of my updates end up being thousands of rows, and it locks the subscriber table for a few minutes sometimes. If I could get it to commit, say, every 1000 rows, it might give me some relief in this area.
Or am I thinking about this wrong? If this is possible, would it help at all?
My DB size went from 500 MB to 10 GB between 8/1998 and 12/2004, but now it is 16 GB (from 1/2005 to 5/2005). I don't know why the data size has grown so fast (almost doubling).
I have got another annoying problem. The MDF file size on one of the machines is growing really fast. We zip the mdf/ldf files every day from all the machines in the dataentry dept. On this particular machine, the mdf file size is growing by about 1GB per day. However, when the file is zipped, the zipped file size comes closer to the zipped files from the other machines.
I'm using 2 OLE DB Commands; 1 to perform an insert, the other an update. I have found that on the column mapping tab I only have 10 parameters available to map to. The issue is I need 40 parameters. Am I doing something wrong? Is there a setting I am missing? Is there another way to do this? Am I out of luck? Here is the SQL query I am using, so you can see that I have the correct number of parameters listed:
Is there any limit to the maximum size of a data file or transaction log you can have with SQL Server 2000 on Windows 2000? Also, is there a maximum size that should be adhered to for performance and admin reasons?
I've found two different answers to this question:
one is on the http://support.microsoft.com/Default.aspx?kbid=920700 page, where the Performance improvements section gives a 128 MB value for database size;
the other is in the product datasheet, which says that this version supports databases up to 4 GB.
Hello! I'm trying to figure out what the ultimate size limitation for a SQL 2005 Enterprise server is. This document is helpful but I'm a bit confused:
In the document, it says that the maximum database size is 524,258 terabytes; however, it also says that the maximum data file size--which I assume is the .MDF file--is 16 terabytes. My question is, how can you create a 524,258 TB database if the maximum file size is 16 TB?
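Presumably the two figures are reconciled by the fact that a database can span many data files: the 16 TB cap is per file, and a database can contain up to 32,767 files, so the headline total is reached by spreading data across files and filegroups. A rough sketch of growing a database past a single file's limit (database, file and path names invented):

-- Illustration only: each file stays under the per-file limit, but the
-- database's total size is the sum of all its files.
ALTER DATABASE BigDb ADD FILEGROUP fg_extra;
ALTER DATABASE BigDb ADD FILE
    (NAME = BigDb_data2, FILENAME = 'D:\data\BigDb_data2.ndf', SIZE = 100GB),
    (NAME = BigDb_data3, FILENAME = 'E:\data\BigDb_data3.ndf', SIZE = 100GB)
TO FILEGROUP fg_extra;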
I'd like to replicate a SQL Server database to an SDF file. For simplicity I want to use the SQL Server 2005 Management Console. The Console reports that the maximum buffer size is too small. In the comment (C# code) I can see it is set to 512. How can I increase the value in the replication assistant?
I need to insert data into a temp table in SQL. I have:
CREATE TABLE TMP_X ( doc_name varchar(200) )
--select * from TMP_X
INSERT into TMP_X values ( '...,
but it's saying there isn't a match, and I know why: it's trying to insert all the data as one row, but I need them as separate rows since I want only 1 column. Is there another INSERT-type function?
If I have a table with one column and I want to insert a few hundred rows of names, I can't use a plain INSERT statement as that does one row at a time. How can I achieve this?
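One way that seems to fit the TMP_X table above is a single INSERT that produces many rows: SELECT ... UNION ALL works on older versions, and SQL Server 2008 and later also accept a multi-row VALUES list (the names below are invented):

-- Older versions: one INSERT, one row per SELECT.
INSERT INTO TMP_X (doc_name)
SELECT 'Alice' UNION ALL
SELECT 'Bob' UNION ALL
SELECT 'Carol';

-- SQL Server 2008 and later: row constructors.
INSERT INTO TMP_X (doc_name)
VALUES ('Alice'), ('Bob'), ('Carol');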
I cannot find any information on this error. It occurs on packages that are writing to the same table using a SQL Server destination. I suppose it would be a good exercise in error handling, but I'd rather avoid it.
One of our production databases had mirroring, log shipping and replication set up on it, and its log file was set to unrestricted growth. This morning an index rebuilding process generated a lot of log, the log file disk ran out of space, and the database went into recovery mode, so we had to disable log shipping, pause mirroring and replication, expand the log file disk, and restart the SQL instance to fix the issue. Now we want to set the log file's maximum size to 80 GB; the whole log file disk is 120 GB.
That way, if the log file reaches 80 GB next time, we can change the max size to 90 GB or 100 GB, which makes it easier to fix the space issue. My question is, if the database log file reaches its max size:
1. Is the database still available? 2. Will the active session causing the issue be rolled back to release space?
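For reference, capping the log is just a file property change; a minimal sketch, assuming the database is called YourDb and the logical log file name is YourDb_log (the real logical name is in sys.database_files):

-- Illustration only: substitute the real database and logical file names.
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, MAXSIZE = 80GB);

As far as I know, once the log hits its cap and cannot grow, statements that need log space fail with the usual "transaction log is full" error (9002) and their transactions are rolled back, while the database itself stays online; backing up or clearing the log then frees the space.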
I'm looking for a way to insert 50k records into a SQL Server table, and need to get it done faster. Right now, using BULK INSERT takes 5-10 seconds, but faster would be better, and even better if it took a consistent amount of time.
I've heard of DTS but don't know quite how to use it - would it offer any performance gains? Any clue what the bottleneck is for BULK INSERT? Hard drive speed? Amount of RAM (this was on a 512 MB machine)? Parsing the fields?
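For comparison, the usual knobs on BULK INSERT itself are a table lock (which allows minimal logging under SIMPLE or BULK_LOGGED recovery), the batch size, and the constraint/trigger checks; a rough sketch with an invented table name and file path, worth measuring against the 5-10 second baseline:

-- Illustrative settings only.
BULK INSERT dbo.Target50k
FROM 'C:\loads\records.txt'
WITH (
    TABLOCK,              -- enables minimal logging in SIMPLE/BULK_LOGGED recovery
    BATCHSIZE = 50000,    -- one commit for the whole load
    ROWS_PER_BATCH = 50000
);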
Hi everyone, I want to know if it's possible to do a for/while loop so I can use INSERT.
Look, I have this: int[] test = new int[140]; But I need to insert a number for every value (140), so normally it would be: INSERT ... (case1, case2, case3 ...) VALUES (test[1], test[2], test[3] ...). But isn't there a way to do it with a loop? Something like this?
for (int i = 0; i < 140; i++) { INSERT case[i] VALUE test[i] }