We are noticing a lot of blocking when performing bulk inserts.
Sometimes the lead blocker(s) are blocking inserts into the same tables, but most of the time it's inserts into other tables. So while a bulk insert into table X is running, a bulk insert into table Y is blocked by the insert into X.
- We turned off transactions from the application performing the bulk inserts - no change.
- We enabled table locking from the application - no change.
- We enabled lock_on_bulk_load on the tables - this seems to have worked; however, we still see some blocking.
According to the lock_on_bulk_load article - "When you specify table locking for a bulk import operation, a bulk update (BU) lock is taken on the table for the duration of the bulk-import operation" - shouldn't we be seeing this BU lock? Instead, we see nothing but LCK_M_X waits (exclusive table locks).
Just found out that in order to get a BU lock, no indexes can exist on the table ... sure would be nice if that was in the article!
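For anyone else hitting this, a minimal sketch of how the option is set (the table name is made up), together with the caveat just mentioned:

-- 'table lock on bulk load' is the sp_tableoption name for the setting described above
EXEC sp_tableoption 'dbo.TableX', 'table lock on bulk load', 'ON';
-- per the finding above: the BU lock is only taken when the target table has no indexes;
-- with indexes present, a bulk load that requests table locking takes a regular exclusive (X) table lock instead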
Also, we saw the exclusive locks happening before we made these changes, meaning it wasn't using the row locks that it should use by default. We're assuming we're missing something here with lock escalation. I am profiling just for lock escalation events, and they never occur ...
So I guess my pending question at this time is: why, when inserting into table X, do inserts into table Y get blocked?
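Not an answer, but a diagnostic sketch (assuming SQL Server 2005 or later) that shows which resource each blocked insert is waiting on and in what mode, which should at least confirm whether the inserts into X and Y really are fighting over the same object:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,                                    -- e.g. LCK_M_X, as seen above
       l.resource_type,                                -- OBJECT, PAGE, KEY, ...
       l.request_mode,                                 -- BU, X, IX, ...
       l.request_status,                               -- GRANT or WAIT
       OBJECT_NAME(l.resource_associated_entity_id) AS locked_object   -- only meaningful when resource_type = 'OBJECT'
FROM sys.dm_exec_requests AS r
JOIN sys.dm_tran_locks AS l ON l.request_session_id = r.session_id
WHERE r.blocking_session_id <> 0;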
Our company records live sales into an SQL Server database. The same tables that store the sale information are also used by a reporting interface to query sales figures, but occasionally a SELECT generated by the reports is so slow that it causes the INSERT operations to time out and fail.
We've tried increasing the CommandTimeout property in .NET for the application performing the inserts, but despite it being set to 90 seconds, it's still not enough to prevent the occasional sale from failing to record, which is a big no-no for us. We've run the Database Tuning Advisor on the stored procedure that generates the SELECT for the reports, and it is now fully optimized, but this still isn't enough. The tables are quite massive (millions of rows) and the SELECT requires JOINs to other tables, some of which are in a separate database on the same server.
Is there a solution to this problem, aside from increasing the CommandTimeout property to the point where no timeout errors occur? My concern is that doing this could increase the number of concurrent connections and we'd hit another limit, so it's not really solving the problem. Is there a way to configure SQL Server to always favour INSERTs over SELECTs? The reporting users won't really care if the reports are slower but it's critical to get these INSERTs up to 100% reliability.
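One configuration change that is often suggested for this kind of reader/writer contention - an assumption on my part that it fits here, and it has tempdb version-store overhead that needs testing - is read committed snapshot isolation, under which the report SELECTs read row versions instead of taking the shared locks that block the INSERTs:

-- 'SalesDB' is a placeholder database name; requires SQL Server 2005 or later
-- and needs a brief moment of exclusive access to switch on
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;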
I'm not a DBA (just a developer) so I don't have an intricate knowledge of databases and this problem is a bit beyond my level of expertise. Obviously we don't want to re-design our entire system, but we'll do whatever necessary to ensure we aren't failing to record sales.
One idea I had, which may be awful, I don't know, is to submit these sales to an MSMQ queue and write some software that reads from the queue and inserts the records from there instead. We could then deal with the timeout issue by just re-submitting the sale until it is accepted, then removing it from the queue.
I am running MS SQL 2000 Server. The table involved is only about 10,000 records, but this is the behavior I am seeing. The local machine is querying the table looking for a particular record. Everything works fine. Then the remote machine goes to clear the table and insert all 10,000 records into the table, and the following happens:
1) The local machine's queries do not complete until the remote machine is done.
2) The remote machine can take up to 6 minutes to do these 10,000 insert operations. So nothing on the local machine works right for those 6 minutes.
I do not have access to the remote machine's source to see what is running, but I am told it is simply a for loop with an insert query in it. Nothing of a locking nature. Any idea what types of things I should look for to track this down? I found this by doing SQL Profiler tracing and finding the remote operations. Turn those operations off and the local machine works fine again, with no other action.
Thanks,
David
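For tracking this down on SQL 2000 (no dynamic management views there), a minimal diagnostic sketch - the SPID value is made up:

EXEC sp_who2              -- the BlkBy column shows which SPID is blocking which
EXEC sp_lock              -- shows the objects each SPID has locked and the lock modes
DBCC INPUTBUFFER (57)     -- 57 is a placeholder SPID: shows the last statement that connection submitted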
I'm trying to perform a bulk insert as shown below. It's problematic b/c it's not updating the identity fields correctly and we're getting dups. I think, but I'm not sure, that On Update Cascade would solve all this, b/c we wouldn't have to concern ourselves with even touching the identity fields, b/c they would be autogenerated. Can someone shed some light?? I'm pretty confused.
CREATE PROCEDURE AddMiamirecords AS
BEGIN TRANSACTION

--USERS
INSERT INTO [Undex_Production].[dbo].[USERS]
    ([LastName], [UserName], [EmailAddress], [Address1], [WorkPhone], [Company],
     [CompanyWebsite], [pword], [IsAdmin], [IsRestricted], [AdvertiserAccountID])
SELECT dbo.fn_ReplaceTags(convert(varchar(8000), Advertisername)),
       [AdvertiserEmail], [AdvertiserEmail], [AdvertiserAddress], [AdvertiserPhone],
       [AdvertiserCompany], [AdvertiserURL], [AccountNumber], '3', 0, [AccountNumber]
FROM Miami
WHERE NOT EXISTS (SELECT * FROM users WHERE users.Username = miami.AdvertiserEmail)
  AND validAD = 1

--PROPERTY
INSERT INTO [Undex_Production].[dbo].[Property]
    ([ListDate], [CommunityName], [TowerName], [PhaseName], [Unit], [Address1],
     [City], [State], [Zip], [IsActive], [AdPrintId])
SELECT [FirstInsertDate], [PropertyBuilding], [PropertyStreetAddress],
       PropertyCity + ' ' + PropertyState + ' ' + PropertyZipCode AS PhaseName,
       [PropertyUnitNumber], [PropertyStreetAddress], [PropertyCity], [PropertyState],
       [PropertyZipCode], '0', [AdPrintId]
FROM [Undex_Production].[dbo].[miami]
WHERE miami.AdvertiserEmail IS NOT NULL
  AND validAD = 1

--ITEM
INSERT INTO [Undex_Production].[dbo].[ITEM]
    ([SellerID], [Price], [StartDate], [EndDate], [HomePageFeatured], [Classified], [IsClosed])
SELECT USERS.UserID, miami.PropertyPrice, convert(datetime, miami.FirstInsertDate),
       dateadd(day, 30, miami.FirstInsertDate) AS EndDate, 1,
       convert(int, AdNumber) AS Classified, 0
FROM USERS
RIGHT OUTER JOIN miami ON USERS.UserName = miami.AdvertiserEmail
WHERE validAD = 1

--PROPERTYITEM
INSERT INTO [Undex_Production].[dbo].[propertyItem]
    ([propertyId], [ItemId])
SELECT Property.propertyId, ITEM.ItemID
FROM ITEM
RIGHT OUTER JOIN miami
    ON ITEM.StartDate = miami.FirstInsertDate
   AND ITEM.Price = miami.PropertyPrice
   AND ITEM.Classified = convert(int, miami.AdNumber)
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
   AND miami.PropertyZipCode = Property.Zip
   AND miami.PropertyCity = Property.City
   AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--CONDOFEATURES
INSERT INTO [Undex_Production].[dbo].[CondoFeatures]
    (PropertyId, [Bedrooms], [Area], [PropertyDescription], [Bathrooms], [NumOfFloors])
SELECT Property.propertyId, [PropertyBedrooms], [PropertySquareFeet],
       dbo.fn_ReplaceTags(convert(varchar(8000), PropertyDescription)),
       [PropertyBathrooms], [PropertyTotalFloors]
FROM miami
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
   AND miami.PropertyZipCode = Property.Zip
   AND miami.PropertyCity = Property.City
   AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--COMMUNITY FEATURES
INSERT INTO [Undex_Production].[dbo].[CommunityFeatures]
    (PropertyId, [totalFloors], isComplete1)
SELECT Property.propertyId, miami.propertyTotalFloors, '0' AS IsComplete
FROM miami
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
   AND miami.PropertyZipCode = Property.Zip
   AND miami.PropertyCity = Property.City
   AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--UNITDISCLOSURES
INSERT INTO [Undex_Production].[dbo].[UnitDisclosures]
    ([propertyId], [monthcondoasso])
SELECT Property.propertyId, [propertyassocfee]
FROM miami
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
   AND miami.PropertyZipCode = Property.Zip
   AND miami.PropertyCity = Property.City
   AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

--BROKERDEVELOPER
INSERT INTO [Undex_Production].[dbo].[BrokerDeveloper]
    ([IsFSBO], [FSBOName], [FSBOEmail], [FSBOWebsite], [IsDeveloper], [DeveloperName],
     [DeveloperWebsite], [IsBroker], [BrokerName], [BrokerageWebsite], [propertyId],
     [brokercommission], [isComplete])
SELECT CASE AdvertiserType WHEN 'FSBO' THEN 1 ELSE 0 END,
       CASE AdvertiserType WHEN 'FSBO' THEN [AdvertiserName] ELSE NULL END,
       CASE AdvertiserType WHEN 'FSBO' THEN [AdvertiserEmail] ELSE NULL END,
       CASE AdvertiserType WHEN 'FSBO' THEN [AdvertiserURL] ELSE NULL END,
       CASE AdvertiserType WHEN 'Developer' THEN 1 ELSE 0 END,
       CASE AdvertiserType WHEN 'Developer' THEN [AdvertiserName] ELSE NULL END,
       CASE AdvertiserType WHEN 'Developer' THEN [AdvertiserURL] ELSE NULL END,
       CASE AdvertiserType WHEN 'Realtor' THEN 1 WHEN 'Broker' THEN 1 ELSE 0 END,
       CASE AdvertiserType WHEN 'Realtor' THEN [AdvertiserName] WHEN 'Broker' THEN [AdvertiserName] ELSE NULL END,
       CASE AdvertiserType WHEN 'Realtor' THEN [AdvertiserURL] WHEN 'Broker' THEN [AdvertiserName] ELSE NULL END,
       Property.propertyId, [PropertyCommBroker], '0' AS IsComplete
FROM miami
LEFT OUTER JOIN Property
    ON miami.PropertyUnitNumber = Property.Unit
   AND miami.PropertyZipCode = Property.Zip
   AND miami.PropertyCity = Property.City
   AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD = 1

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN
    RETURN
END
COMMIT TRANSACTION
GO
This is inserting thousands of rows into a table that has a trigger on it. The trigger updates a separate table.
Now my question is: will the trigger fire for every record inserted? I need it to fire only once, because the table the trigger updates is becoming locked when the insert statement is executed. I am assuming it is the number of times the trigger is being fired that causes the table to become locked. Any help would be greatly appreciated.
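For what it's worth, an AFTER INSERT trigger in SQL Server fires once per INSERT statement, not once per row; the usual problem is a trigger body written as if the inserted pseudo-table held a single row. A sketch of a set-based body, with made-up table and column names:

CREATE TRIGGER trg_Item_Insert ON dbo.ITEM
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- one set-based UPDATE that covers every row of this insert, instead of per-row work
    UPDATE s
    SET    s.LastItemDate = i.StartDate
    FROM   dbo.SellerSummary AS s
    INNER JOIN inserted AS i ON i.SellerID = s.SellerID;
END

If instead the client is issuing thousands of single-row INSERTs, the trigger genuinely fires once per statement, which is once per row; from one multi-row INSERT ... SELECT it fires only once.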
Hi folks, I have a table located in DB2 and I need to have a mirror image of this table in a SQL 2000 database to avoid some server downtime problems. Right now I have a solution using ADO.NET with a Windows Service.
This Windows Service invokes itself every morning and pulls all the records from this table in DB2 into a dataset. Then I loop through the dataset and insert every record into the SQL 2000 table. This method is working fine (it takes approximately 2 minutes to insert 5000 records). I am just wondering whether there is any way to achieve bulk insertion in this case. Considering future growth of the table, I don't think the existing solution is either elegant or efficient.
Please let me know if I can achieve the same using XML, BULK INSERT or any other mechanism in ADO.NET, and please remember that we are talking about data migration between different DBMSs (DB2 to SQL 2000).
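Since this is ADO.NET against SQL 2000, SqlBulkCopy (available from ADO.NET 2.0) is the usual client-side answer. A server-side alternative, sketched here with made-up linked server and table names, assumes a DB2 OLE DB/ODBC provider and a linked server have been set up on the SQL 2000 box:

-- one set-based pull instead of 5000 individual ADO.NET inserts
TRUNCATE TABLE dbo.MirrorTable
INSERT INTO dbo.MirrorTable
SELECT *
FROM OPENQUERY(DB2LINK, 'SELECT * FROM MYSCHEMA.SOURCE_TABLE')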
I have a table with 370 million rows and 50+ columns. I need to change the data type of a column from character to numeric. Here's what it contains:
- 40 million with numbers I want to keep
- the rest I just want to set to null:
  - 4k with alpha characters
  - 55 million other numbers
  - 275 million empty strings
An alter column statement fails not just on the alpha characters but on the empty strings. So I tried a couple things on a test database to get an idea of the time it would take:
An update statement to clear out the non-numeric data is too slow (~1.5 days, batched 10000 at a time). I think I probably should create a new column anyway though, so I'm going to copy the data to a new table since it would be faster than adding a new column to the original table.
An insert ... select ... takes about 12 hours; adding WITH (TABLOCK) didn't seem to have any effect, and I'm not sure how to batch it. Recovery model is simple.
A select ... into ... only takes about 1 hour, but can't be batched.
Using a 3rd party ETL tool takes about 5 hours, batched.
I wanted to batch it to minimize the impact on other queries, but primarily on the logs. Is there any way to do a fast, batched bulk transfer within SSMS?
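One way to batch the insert ... select by key range - purely a sketch, with made-up table and column names; it assumes the source has a sequential integer key to walk and that the simple recovery model lets the log space from each committed batch be reused:

DECLARE @BatchSize INT, @MinId BIGINT, @MaxId BIGINT
SET @BatchSize = 1000000
SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.SourceTable

WHILE @MinId <= @MaxId
BEGIN
    INSERT INTO dbo.NewTable WITH (TABLOCK) (Id, NumericCol)
    SELECT Id,
           CASE WHEN OldCol NOT LIKE '%[^0-9]%' AND OldCol <> ''    -- placeholder keep-rule: digits only and not empty
                THEN CAST(OldCol AS DECIMAL(18, 0))
           END                                                      -- everything else becomes NULL
    FROM dbo.SourceTable
    WHERE Id >= @MinId AND Id < @MinId + @BatchSize

    SET @MinId = @MinId + @BatchSize
END

Each batch commits on its own, so the log can stay small; whether the batches are minimally logged depends on the version, the TABLOCK hint and the target's indexes, so it is worth timing on the test copy first.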
Got the bulk insert thing going real nice thanks to your feedback! But now I notice that the BULK INSERT creates 2 identical rows, while the flat file has the column headers and then the corresponding row - just one row. Why would it do that? Interestingly, if the data file and format file are residing on a shared folder on SQL 2005 and I run the BULK INSERT from Studio, it only inserts one row. But when I do the BULK INSERT via the user interface it creates 2 rows. What is going on here?
Hello all, I just started a new job this week and they complain about the length of time it takes to load data into their data warehouse, which they do once a month.
From what I can gather, they rebuild the indexes before the insert with an 80% fill factor, then insert the data (with the indexes enabled), then rebuild the indexes with a 100% fill factor.
Most of my RDBMS experience is with a different product. We would have disabled the indexes and foreign keys, loaded the data, then re-enabled them, moving any records that violated the constraints into an appropriate audit table to be checked afterwards.
Can someone share with me what the accepted "best practices" are for loading data efficiently into a data warehouse? Any thoughts would be deeply appreciated.
Steve
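For comparison, the disable-then-rebuild pattern described above looks roughly like this on SQL Server 2005 and later - a sketch with made-up object names; note that disabling the clustered index makes the table unreadable until it is rebuilt, so normally only the nonclustered indexes are disabled:

-- before the load
ALTER INDEX IX_FactSales_Date ON dbo.FactSales DISABLE            -- repeat for each nonclustered index
ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimDate -- and for each foreign key

-- ... bulk load here ...

-- after the load
ALTER INDEX ALL ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100)
ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT FK_FactSales_DimDate  -- re-validate so the FK is trusted again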
I'm setting up a new 2005 server and bulk insert from a client workstation (using windows authentication) is failing with:
Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\FILESERVERNAME\sharedfolder\filename.txt" could not be opened. Operating system error code 5 (Access is denied.).
Here's my BULK INSERT statement (though I'm pretty sure there's nothing wrong with it):
BULK INSERT #FIRSTROW
FROM '\\FILESERVERNAME\sharedfolder\filename.txt'
WITH
(
    DATAFILETYPE = 'char',
    ROWTERMINATOR = '\n',
    LASTROW = 1
)
If I run the same Transact-SQL when remote desktopped into the new server (under the same login as that used on the client workstation), it imports the file without errors.
If I use the sa login from the client workstation (SQL Server authentication), the bulk insert succeeds.
My old SQL 2000 server lets me bulk insert the file without errors even from my client workstations using windows authentication.
I have followed the instructions on this site: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=928173&SiteID=1 , but still no luck and same error.
I'm pretty sure it is being caused by the increased constraints on bulk insert in 2005. Hoping someone can help. The more specific the better. If you need more info, let me know.
Oh and I've also made sure that the SQL service uses a domain logon account rather than the local system account.
Note that the file server (source file resides there) is a DIFFERENT machine than the 2005 SQL server. If I move the source file to the sql server machine the error goes away (not a preferred solution though).
Hi, I followed Microsoft's "Implementing Row- and Cell-Level Security in Classified Databases Using SQL Server 2005".
This works fine when I insert and delete data from a normal script (Management Studio).
My project runs in an SSIS package, under different users. I cannot do a bulk insert using the OLE DB Destination; I get the following error:
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabelMarking". This may be caused by a conflicting hint specified for a view.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabelMarking". This may be caused by a conflicting hint specified for a view.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabel". This may be caused by a conflicting hint specified for a view.".
Error: 0xC0209029 at Data Flow Task, OLE DB Destination 1 [1741]: The "input "OLE DB Destination Input" (1754)" failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (1754)" specifies failure on error. An error occurred on the specified object of the specified component.
I've run it for ih and ih_restore and can see that the "reserved" and "data" figures are growing, but there are no extra rows - so no inserts are happening in the database? Why would this be happening?
Example of csv file of a table that I've exported -
From ih:
name,rows,reserved,data,index_size,unused
em_comm_costing,384191,1011704 KB,512424 KB,498648 KB,632 KB

From ih_restore:
name,rows,reserved,data,index_size,unused
em_comm_costing,384191,119808 KB,62960 KB,56088 KB,760 KB
So the em_comm_costing rows are 384191 in both, but the data figure has increased to 512424 KB from 62960 KB.
The database is being mirrored as well, but I'm not sure if that would be affecting the size?
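Those columns (name, rows, reserved, data, index_size, unused) look like sp_spaceused output; if so, it is worth forcing the usage counters to be corrected before comparing, since stale counters can make reserved/data look inflated - a small sketch, run in each database:

USE ih   -- then repeat in ih_restore
EXEC sp_spaceused 'dbo.em_comm_costing', @updateusage = 'TRUE'   -- 'TRUE' recalculates the stored space counters first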
At one of our client sites we have configured Always On with synchronous mode. We also have a scheduled rebuild index and update statistics job which runs at night every alternate day. The issue is that there are more than 100 sleeping sessions which block the update statistics job.
I have to stop the update statistics job manually once I get to the office.
Once I killed the blocking sleeping session, but then another sleeping session blocked the job, and so on.
I use the code below to upload a csv file to SQL Server, but I got an error that said:
Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file "C:\Test.csv" does not exist.

BULK INSERT Test
FROM 'C:\Test.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
I have several 2012 availability groups running on a cluster. I have one database that is bulk loaded every 30 minutes. The DB is about 1 GB in size. To be in the availability group it has to be set to full recovery mode, but simple or even bulk-logged would obviously be better. Is there a better way to handle the transaction log size other than running a backup after each bulk load, which causes extra overhead? With mirrors you could use simple, but since those are going away . . .
I used bulk insert to insert a txt file into a table. It works fine (see code below). Now, one txt file has column names in the first row and about 200 columns, and there is no table created beforehand. How do I write code that creates a destination table based on the first row of the txt file, so that bulk insert will work for that txt file?
BULK INSERT #MBRACCT
FROM 'c:\order.TXT'
WITH
(
    FIELDTERMINATOR = '|',
    FIRSTROW = 2,
    ROWTERMINATOR = '\n'
)
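One approach is to bulk load just the header line into a one-column holding table and build the CREATE TABLE statement from it dynamically - a rough sketch only: it assumes SQL 2005+ (for VARCHAR(MAX)), a pipe-delimited header with no tab characters, and it creates every column as VARCHAR(255); the dbo.MBRACCT name is made up:

-- 1) load only the first line of the file into a single wide column
CREATE TABLE #Header (HeaderLine VARCHAR(MAX))
BULK INSERT #Header
FROM 'c:\order.TXT'
WITH (ROWTERMINATOR = '\n', LASTROW = 1)

-- 2) turn "col1|col2|...|col200" into a CREATE TABLE statement and run it
DECLARE @cols VARCHAR(MAX), @sql VARCHAR(MAX)
SELECT @cols = REPLACE(REPLACE(HeaderLine, CHAR(13), ''), '|', '] VARCHAR(255), [')   -- strip any trailing CR, then expand the delimiters
FROM #Header
SET @sql = 'CREATE TABLE dbo.MBRACCT ([' + @cols + '] VARCHAR(255))'
EXEC (@sql)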
We have some tables that are bulk-loaded every day and they do not have RI to the other tables in the database.
To ease pressure on the logs, I had the idea of spinning them off to another database on the same server as the AG, in the simple or bulk-logged recovery model, and using synonyms to point to them so the code base would not need changing.
I know an earlier bug in 2005 existed that basically made the query optimizer ignore indexes if a table was accessed via a synonym.
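For reference, the synonym redirection being described is just this - with made-up names:

-- staging database on the same instance, outside the AG, in simple or bulk-logged recovery
CREATE SYNONYM dbo.DailyStaging FOR StagingDB.dbo.DailyStaging
-- existing code keeps referencing dbo.DailyStaging and is transparently redirected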
I have a VB.NET scheduled job (Windows 2012 Task Scheduler) that calls a stored procedure that does a bulk insert. I have added the user to the "bulkadmin" server role, yet I get the "You do not have permission to use the bulk load statement" error.
System.Data.SqlClient.SqlException (0x80131904): You do not have permission to use the bulk load statement.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream,
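One thing worth checking is which login SQL Server actually sees for the scheduled task, since this error often means the session is not running under the account that was added to bulkadmin - a small sketch; the login name is a placeholder:

EXECUTE AS LOGIN = 'DOMAIN\TaskAccount'                               -- placeholder: the account the task runs under
SELECT SUSER_SNAME()                                               AS login_seen_by_sql,
       IS_SRVROLEMEMBER('bulkadmin')                               AS is_bulkadmin,
       HAS_PERMS_BY_NAME(NULL, NULL, 'ADMINISTER BULK OPERATIONS') AS has_bulk_load_permission
REVERT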
SQL Server 2012 running under a domain Managed Service Account. (Server A) File located on a Windows 2012 server in a directory which has been shared to user A. (Server B). User A is a domain account and is using his laptop, (laptop C) which is using SSMS to run a bulk insert command.
User A (Bulk Insert from laptop SSMS Client) --- > SQL Server (server A) --- > File Server (Server B)
The command fails and is returning Access denied to the file/folder share on Server B.
Running the same command on the SQL Server (Server A), the command works fine, so this is a double hop kerberos issue.
If I use a SQL Login from Laptop C, then the command works fine as the SQL Server will use the SQL's Managed service account to connect to the file share, which is set up for delegation and impersonation.
I am struggling to work out why a domain user cannot bulk insert a file from a remote location. I have checked that the user is connecting with Kerberos authentication, and they are. All the articles seem to talk about setting up SPNs for the SQL Server so that SQL login authentication can work over remote bulk insert, and just say to set up the file share properties properly if using a domain account.
What am I missing to allow domain accounts to bulk insert remotely, from a remote file share?
For bulk load requests in SQL Server, are there any specific Profiler events? Like the ones we have for RPC (RPC:Starting) and for batch requests (SQL:BatchStarting).
Are bulk load requests that are being monitored through Profiler captured as SQL:Batch... events at the backend?
Are there any new features added in 2012 or 2014 to identify a bulk request submitted through the bcp.exe utility or any other SqlBulkCopy program?
I am trying to load a fixed-width text file using `Bulk Insert` and an XML format file. I have used the same process and XML format file on another fixed-width file, except with fewer columns.
Error:
Msg 4857, Level 16, State 1, Line 16
Line 4 in format file "PATHCaddr.xml": Attribute "type" could not be specified for this type.
I have come up against an issue where I want to update data in a table using a bulk/SET-based update, to get the result shown in the code below with output in the column titled "Arrear Amt".
Please use this test data.
CREATE TABLE ##vOD_Calc ( Seq_No INT , Contract_id INT , Rental_id INT , Actual_OD INT , Logic_OD INT , Due_dte DATETIME ,
[Code] .....
The logic required is that once the sum of column [ArrearAmt] for the current row and all previous rows becomes greater than $100, column [ChArrrearAmt] should show that summed-up value; otherwise column [ChArrrearAmt] should show the same value as column [ArrearAmt].
Once column [ChArrrearAmt] reaches the threshold of $100, the same cycle should start again, i.e. in the above example rental#1 had $37.17 < $100, then rental#1 + rental#2 is also < $100, and at rental#3 the sum of rental#1, rental#2 and rental#3 becomes $111.51, which is greater than $100, so it is updated in column [ChArrrearAmt]. The same cycle starts over from rental#4 onwards; however, the summation of [ArrearAmt] now begins from rental#4 and not from the start.
Below is the loop-based SQL script which handles the above situation; however, in bulk it is a total deterioration of performance if thousands of rows are to be processed, i.e. with a contract having multiple rentals.
The catch here is that I have to use the previously updated value of [ChArrrearAmt] to take the decision for the next row; with a set-based UPDATE, since that row is not yet updated with the latest amount, the decision on the next row gives the wrong result.
This is the code with which I have managed to update the column [ChArrrearAmt]; however, it is a loop-based solution and a performance killer.
INSERT INTO ##vOD_Calc_loop
    (Rows_count, contract_id)
SELECT COUNT(*), T.Contract_id
FROM ##vOD_Calc T
GROUP BY T.Contract_id
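Not a fix for the loop itself, just a set-based sketch of the running-sum-with-reset logic described above, using a recursive CTE. Column names are adapted from the description (ArrearAmt assumed to be DECIMAL(18,2)), and it assumes Seq_No runs 1, 2, 3 ... within each contract:

;WITH Calc AS
(
    -- anchor: the first rental of each contract
    SELECT Contract_id, Seq_No, ArrearAmt,
           CAST(ArrearAmt AS DECIMAL(18, 2)) AS RunningSum
    FROM ##vOD_Calc
    WHERE Seq_No = 1

    UNION ALL

    -- walk the rentals in sequence, carrying the running sum forward
    SELECT v.Contract_id, v.Seq_No, v.ArrearAmt,
           CAST(CASE WHEN c.RunningSum > 100
                     THEN v.ArrearAmt                     -- threshold crossed on the previous row: start a new cycle
                     ELSE c.RunningSum + v.ArrearAmt
                END AS DECIMAL(18, 2))
    FROM ##vOD_Calc AS v
    INNER JOIN Calc AS c
            ON c.Contract_id = v.Contract_id
           AND v.Seq_No = c.Seq_No + 1
)
UPDATE t
SET    ChArrrearAmt = CASE WHEN c.RunningSum > 100 THEN c.RunningSum   -- show the summed-up value once it exceeds $100
                           ELSE c.ArrearAmt                            -- otherwise the row's own ArrearAmt
                      END
FROM   ##vOD_Calc AS t
INNER JOIN Calc AS c
        ON c.Contract_id = t.Contract_id
       AND c.Seq_No = t.Seq_No
OPTION (MAXRECURSION 0)

It still evaluates the rows in order, so it will not be free on very large contracts, but it replaces the per-row UPDATE loop with one statement.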
Overall goal: Write a Bulk Insert statement using the UNC path of a filetable directory.
Issue: When using the UNC path of the filetable directory in a Bulk Insert statement, I receive "Operating system error code 50 (The request is not supported.)". Looking for confirmation as to whether this is truly not supported.
Environment: SQL Server 2012 Standard. Windows Server 2008 R2 Standard
Is there a switch I can use to force the bulk insert through even if data is truncated? I'm good with that - the truncated data, in this case, is not data I can use anyway if it is long enough to be truncated.
I need to keep the field at VARCHAR(23), and if I expand it, I won't be able to join on it after the file load completes. I'd like the data to be inserted (truncated if need be), and then I'll deal with the truncated records after I load the file.
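As far as I know there is no BULK INSERT switch that silently truncates - oversize values raise a bulk load conversion (truncation) error - so the usual workaround is a wider staging table plus an explicit LEFT(). A sketch with made-up names and an assumed pipe-delimited file:

-- staging column wide enough that nothing truncates during the load
CREATE TABLE #Staging (JoinKey VARCHAR(255), OtherCol VARCHAR(255))

BULK INSERT #Staging
FROM 'C:\load\file.txt'                       -- placeholder path
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')

-- truncate explicitly on the way into the real table
INSERT INTO dbo.TargetTable (JoinKey, OtherCol)
SELECT LEFT(JoinKey, 23), OtherCol
FROM #Staging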