Tuning An UPDATE Statement On A Large Data Set?

May 26, 2004

I'm updating the name data in a large user database with the following UPDATE statement. The staging table was bulk loaded from a flat file and contains 10 million records. The production table (Recipients) contains 15 million records. This worked correctly, but this single UPDATE statement took ten hours to run, which is far too long. While it was running the server was clearly 100% disk-bound; CPU activity was near zero. We've just upgraded RAM from 1GB to 2GB, but we expect data sizes to grow significantly and we can't keep adding RAM. Absolutely nothing else is running on this server. Any ideas how I can optimize this?

UPDATE Recipients
SET [First] = Stages.[First]
, [Last] = Stages.[Last]
FROM
Stages
INNER JOIN Recipients ON
(Stages.UserName = Recipients.UserName
AND Stages.DomainID = Recipients.DomainID)
WHERE
(CASE WHEN Stages.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Stages.[Last] IS NULL THEN 1 ELSE 0 END)
<=
(CASE WHEN Recipients.[First] IS NULL THEN 1 ELSE 0 END
+ CASE WHEN Recipients.[Last] IS NULL THEN 1 ELSE 0 END)

Text execution plan. I've made small annotations with the % information from the graphical execution plan:

|--Clustered Index Update(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), SET:([Recipients].[First]=[Stages].[First], [Recipients].[Last]=[Stages].[Last]))
|--Top(ROWCOUNT est 0)
|--Sort(DISTINCT ORDER BY:([Bmk1000] ASC))
14% |--Merge Join(Inner Join, MANY-TO-MANY MERGE:([Stages].[DomainID], [Stages].[UserName])=([Recipients].[DomainID], [Recipients].[UserName]), RESIDUAL:(([Recipients].[UserName]=[Stages].[UserName] AND [Recipients].[DomainID]=[Stages].[Domain
25% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Stages].[IX_Stages]), ORDERED FORWARD)
61% |--Clustered Index Scan(OBJECT:([Recipients].[dbo].[Recipients].[PK_Recipients]), ORDERED FORWARD)

Everything I've heard on the subject suggests you change the index scans to index seeks. How do I do this?

Any other tuning advice is greatly appreciated.

Here are the exact statements I used to create the tables:

CREATE TABLE Recipients (
ID INT IDENTITY (1, 1) NOT NULL,
UserName VARCHAR (50) NOT NULL,
DomainID INT NOT NULL,
First VARCHAR (24) NULL,
Last VARCHAR (24) NULL,
StreetAddress VARCHAR (32) NULL,
City VARCHAR (24) NULL,
State VARCHAR (16) NULL,
Postal VARCHAR (10) NULL,
SourceID INT NULL,

CONSTRAINT PK_Recipients PRIMARY KEY CLUSTERED (DomainID, UserName)
)

CREATE TABLE Stages (
ID INT NULL,
UserName VARCHAR(50) NOT NULL,
DomainID INT NULL,
Domain VARCHAR(50) NOT NULL,
First VARCHAR(24) NULL,
Last VARCHAR(24) NULL,
StreetAddress VARCHAR(32) NULL,
City VARCHAR(24) NULL,
State VARCHAR(24) NULL,
Postal VARCHAR(10) NULL
)
CREATE CLUSTERED INDEX IX_Stages ON Stages (DomainID, UserName)
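One approach that often comes up for updates of this size (offered here only as a sketch, not a guaranteed fix) is to run the statement in batches keyed on the clustered index, so each transaction stays small and the log and buffer pool are not hit all at once. The step of 500 DomainID values per batch is an arbitrary illustration:

DECLARE @lo INT, @hi INT, @step INT
SELECT @lo = MIN(DomainID), @hi = MAX(DomainID) FROM Recipients
SET @step = 500

WHILE @lo <= @hi
BEGIN
    -- same update as above, restricted to one clustered-key range per pass
    UPDATE Recipients
    SET [First] = Stages.[First]
    , [Last] = Stages.[Last]
    FROM
    Stages
    INNER JOIN Recipients ON
    (Stages.UserName = Recipients.UserName
    AND Stages.DomainID = Recipients.DomainID)
    WHERE
    Recipients.DomainID BETWEEN @lo AND @lo + @step - 1
    AND (CASE WHEN Stages.[First] IS NULL THEN 1 ELSE 0 END
    + CASE WHEN Stages.[Last] IS NULL THEN 1 ELSE 0 END)
    <=
    (CASE WHEN Recipients.[First] IS NULL THEN 1 ELSE 0 END
    + CASE WHEN Recipients.[Last] IS NULL THEN 1 ELSE 0 END)

    SET @lo = @lo + @step
END

For what it's worth, when every row of both tables has to be touched in one statement, clustered index scans feeding a merge join are often what the optimizer should pick anyway; seeks tend to pay off only when each statement touches a small fraction of the rows, which is what the batching above arranges.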

View 11 Replies



Performance Tuning For Row-by-Row Update Statement

Jul 20, 2005

Hi. For an unavoidable reason, I have to use row-by-row processing (update) on a temporary table to update a history table every day. I have around 60,000 records in the temporary table and about 2 million in the history table. Could anyone please suggest different methods to improve the runtime of the query? Would highly appreciate it!
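If the row-by-row requirement is really about how the data arrives rather than how it must be applied, a single set-based UPDATE joining the two tables is usually dramatically faster for 60,000 rows. This is a sketch only, with made-up table and column names (a History table keyed on ID, and #TempStage holding the new values):

UPDATE h
SET h.Amount = t.Amount            -- Amount is a hypothetical column to update
FROM dbo.History AS h
INNER JOIN #TempStage AS t
    ON t.ID = h.ID                 -- ID is a hypothetical join key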

View 9 Replies View Related

Performance Tuning UPDATE Statement

Jul 20, 2005

Below is a simple UPDATE that I have to perform on a table that has about 2.5 million rows (about 4 million in production). This query runs for an enormous amount of time (over 1 hour). Both ChangerRoleID and ChangerID are indexed (not unique). Is there any way to performance tune this? Controlling the physical drive of the log file isn't possible at our client sites (we don't have control) and the recovery model needs to be set to "Full".

UPDATE CLIENTSHISTORY
SET ChangerRoleID = ChangerID
WHERE ChangerRoleID IS NULL

Any help would be greatly appreciated!
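One frequently suggested variation (a sketch, not a guaranteed fix) is to run the update in batches so each transaction, and therefore each burst of log writes, stays small. Because the WHERE clause is ChangerRoleID IS NULL, every updated row drops out of the predicate and the loop eventually finds nothing left to do; this assumes ChangerID is not NULL on the affected rows:

SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    UPDATE CLIENTSHISTORY
    SET ChangerRoleID = ChangerID
    WHERE ChangerRoleID IS NULL

    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

With the database in full recovery, taking log backups between batches keeps the log file from growing.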

View 2 Replies View Related

Update On Large Table - Change Data Type For Text Column

Dec 10, 2014

I need to update a large table, about 55 million rows, without filling the transaction log, in the shortest time as possible. The goal is to alter the table and change the data type for Text column from VARCHAR(7900) to NVARCHAR(MAX).

Since I cannot do it with an ALTER TABLE statement (it would fill up the transaction log) I'm thinking to:

- rename column Text to Text_OLD
- add Text column of type NVARCHAR(MAX)
- copy values in batches from Text_OLD to Text

The table is defined like:

create table DATATEXT(
rID INTEGER NOT NULL,
sID INTEGER NOT NULL,
pID INTEGER NOT NULL,
cID INTEGER NOT NULL,
err TINYINT NOT NULL,

[Code] ....

I've thought about a stored procedure doing this, but I'm not sure how to copy the values in batches from Text_OLD to Text.

The code I would start with (doing just this part) is the following, but maybe there are more efficient ways to do it, or at least a better way to select @startSeq in the WHILE loop (avoiding selecting a bunch of 100000 sequences and later selecting the max).

declare @startSeq timestamp
declare @lastSeq timestamp
select @lastSeq = MAX(sequence) from [DATATEXT] where [Text] is null
select @startSeq = MIN(Sequence) FROM [DATATEXT] where [Text] is null
BEGIN TRANSACTION T1
WHILE @startSeq < @lastSeq

[Code] ....
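A sketch of the batch copy that uses the [Text] IS NULL predicate itself as the progress marker, so no explicit sequence bookkeeping is needed. The batch size is a placeholder, and the sketch assumes the recovery model (or log backups between batches) allows the log to be reused:

DECLARE @rows INT
SET @rows = 1

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION

    -- copy one batch of not-yet-converted rows
    UPDATE TOP (100000) dbo.DATATEXT
    SET [Text] = Text_OLD
    WHERE [Text] IS NULL
      AND Text_OLD IS NOT NULL

    SET @rows = @@ROWCOUNT
    COMMIT TRANSACTION
END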

View 1 Replies View Related

SQL Statement Tuning

Jul 23, 2005

hi!!! The following is the SQL which is very slow because of the 'NOT IN' clause; I would appreciate it if anyone can suggest any other way to bring about the same result.

SELECT Id, LOC, AGENCY, BATCH
FROM tblRows
WHERE tblRows.AGENCY NOT IN (
    SELECT DISTINCT B.AGENCY
    FROM tblRows AS A, tblRows AS B
    WHERE A.LOC = "B"
    AND B.AGENCY = A.AGENCY
    AND B.BATCH = A.BATCH)

tblRows

ID LOC AGENCY BATCH
1  B   1000   WAD
2  R   1000   WAD
3  B   1010   QAD
4  B   1020   WAD
5  R   1020   WAD
6  R   1030   RRR
7  I   1030   RRR
8  V   1030   RRR
9  B   1040   UIA
10 R   1040   UIA
11 I   1040   UIA
12 V   1040   UIA

The subquery is to return the rows with LOC = B. The above query as a whole should return the rows where LOC <> B and must also exclude rows belonging to the LOC = B subset. For example, take the first two rows, with ID 1 and 2: the row with ID 1 has LOC B, AGENCY 1000 and BATCH WAD; the row with ID 2 has LOC R and the same AGENCY and BATCH as row 1, so it belongs to row 1's subset. Similarly the rows with IDs 4 and 5. The above query must return the following rows (that is, the orphan rows which neither have LOC as B nor belong to B's subset):

6 R 1030 RRR
7 I 1030 RRR
8 V 1030 RRR

Hope I am clear in my explanation and would appreciate it if someone can point me in the right direction. The reason for posting this in the MS Access group is because this is going to be a query in MS Access.

regards
bala
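For what it's worth, one common rewrite of a NOT IN like this is a correlated NOT EXISTS, assuming the intent is to keep only rows whose AGENCY and BATCH combination never appears on a LOC = 'B' row (a sketch only; it should run in both Access and SQL Server):

SELECT A.Id, A.LOC, A.AGENCY, A.BATCH
FROM tblRows AS A
WHERE NOT EXISTS (
    SELECT *
    FROM tblRows AS B
    WHERE B.LOC = 'B'
      AND B.AGENCY = A.AGENCY
      AND B.BATCH = A.BATCH)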

View 7 Replies View Related

Tuning A Delete Statement

Oct 26, 2004

Hi,

I need to delete the following records (from enrollment_fact):


SELECT
a."STU_SID",
a."SCHOOL_YEAR",
a."DATE_SID",
a."LOC_SID"
FROM "dbo"."ENROLLMENT_FEX2" b,
"dbo"."LOCATION_DIM" c
LEFT OUTER JOIN
"dbo"."ENROLLMENT_FACT" a
on c."LOC_SID" = a."LOC_SID"
WHERE
b."LOC_KEY" = c."LOC_KEY"
and
a."DATE_SID" between b."MIN_DATE" and b."MAX_DATE"


This is the approach (excuse the misuse of the concat function, but you get the idea)


DELETE FROM "dbo"."ENROLLMENT_FACT"
WHERE CONCAT (a."STU_SID", a."SCHOOL_YEAR", a."DATE_SID", a."LOC_SID")
IN (
SELECT DISTINCT CONCAT (a."STU_SID",
a."SCHOOL_YEAR",
a."DATE_SID",
a."LOC_SID"
)
FROM "dbo"."ENROLLMENT_FEX2" b,
"dbo"."LOCATION_DIM" c
LEFT OUTER JOIN "dbo"."ENROLLMENT_FACT" a
ON c."LOC_SID" = a."LOC_SID"
AND a."DATE_SID" BETWEEN b."MIN_DATE"
AND b."MAX_DATE")


comments? better way? (without using an sp)

thanks
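A possible set-based alternative that avoids the concatenated keys (a sketch only; it assumes the join conditions in the SELECT above are exactly what identifies the rows to remove, in which case the LEFT OUTER JOIN behaves as an inner join because of the WHERE clause):

DELETE a
FROM dbo.ENROLLMENT_FACT AS a
INNER JOIN dbo.LOCATION_DIM AS c
    ON c.LOC_SID = a.LOC_SID
INNER JOIN dbo.ENROLLMENT_FEX2 AS b
    ON b.LOC_KEY = c.LOC_KEY
WHERE a.DATE_SID BETWEEN b.MIN_DATE AND b.MAX_DATE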

View 8 Replies View Related

Tuning An Application Statement With Bind Variables

Jul 20, 2005

Hi gurus, I just started to look at a very slow-running SQL statement generated by an application (Siebel). I spooled the SQL from the application, replaced the bind variables by their values, and tuned from the Query Analyzer. But after a while, I realized that the statement using bind variables and the same statement using the values instead of the bind variables often have completely different execution plans! Is that normal? Can someone tell me how SQL Server treats bind variables? Don't worry about being too technical, I'm an Oracle DBA/developer.

Thanx
Daniel
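For illustration only (the table and column names below are made up), the two forms the post describes look like this in SQL Server. The parameterized form gets a single cached plan, compiled from the parameter values that happened to be "sniffed" at first execution and then reused for all later values, while the literal form lets the optimizer use the exact value each time, so different plans are quite possible:

-- literal form: the optimizer sees the actual value
SELECT * FROM dbo.Orders WHERE Status = 'OPEN'

-- parameterized ("bind variable") form: one plan is compiled and reused
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE Status = @s',
    N'@s VARCHAR(10)',
    @s = 'OPEN'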

View 1 Replies View Related

SQL Server 2012 :: Update Statement Will Not Update Data Beyond 7 Million Plus Rows Out Of 38 Millions Rows

Dec 12, 2014

I run the following statement and it will not update beyond 7 million plus rows and I have about 38 million to complete. I keep checking updated row counts and after 1/2 day it's still the same so I know something is wrong because it was rolling through no problem when I initiated it. I need to complete ASAP so it's adding to my frustration. The 'Acct_Num_CH' field is an encrypted field (fyi).

SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
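One thing that may be worth checking in the loop above: because the replacement value is itself NOT NULL, rows that have already been updated still satisfy the WHERE clause, so each batch can keep revisiting rows that are already done and @@ROWCOUNT never drops to zero. A sketch of a variation that excludes already-masked rows (it assumes the same constant really is written to every row):

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE [Acct_Num_CH] IS NOT NULL
      AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'   -- skip rows already masked

    IF @@ROWCOUNT = 0 BREAK
END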

View 5 Replies View Related

Merge Statement - Update All Data

Mar 14, 2014

I am using a MERGE statement. Here is my requirement: I don't want to insert data if the client's State is NY, but I want to update all data.

When Not Matched
and State not in ('NY')
THEN INSERT

The problem is that sometimes the NY data is inserted and sometimes it isn't.
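The snippet above is only a fragment, so this is a guess at the overall shape; the table and column names below are hypothetical. Note also that if State can be NULL, a condition like State NOT IN ('NY') evaluates to UNKNOWN and the insert is skipped, which could explain the inconsistent behaviour:

MERGE dbo.Clients AS tgt                -- hypothetical target table
USING dbo.ClientStage AS src            -- hypothetical source table
    ON tgt.ClientID = src.ClientID
WHEN MATCHED THEN
    UPDATE SET tgt.Name  = src.Name,
               tgt.State = src.State
WHEN NOT MATCHED BY TARGET AND (src.State <> 'NY' OR src.State IS NULL) THEN
    INSERT (ClientID, Name, State)
    VALUES (src.ClientID, src.Name, src.State);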

View 4 Replies View Related

Update Statement Has A Problem Because Of Data Type.

Mar 4, 2008

I have a Repeater which has a checkbox to delete selected items.
Dim strID As String = ""
Dim newStrID As String = ""
Dim anItem As RepeaterItem
For Each anItem In Repeater1.Items
    Dim chkBoxDelete As CheckBox = anItem.FindControl("Delete")
    If chkBoxDelete.Checked Then
        strID += (CType(anItem.FindControl("ID"), Label)).Text + ", "
    End If
Next
If Len(strID) > 1 Then
    newStrID = Left(strID, Len(strID) - 2)
Else
    newStrID = ""
End If
 
It's working great, but if I implement it with an UPDATE statement, it doesn't work when there is more than one item (it works fine with one item).
If I just print the statement, it's
UPDATE dbo.mytable SET active='N' WHERE id IN (1, 2, 3, 4)
 
Dim sqlStr As String
sqlStr = "UPDATE dbo.mytable SET active='N' WHERE id IN (" + newStrID + ")"
Dim cmdUpdate As New SqlCommand(sqlStr, conn)
 
I think it's because of data type, since id is Integer and newStrID is string.  What should I do to make it work?
 
Thank you.

View 2 Replies View Related

Can An Update Statement Be Used For Interpolating Missing Data?

Jul 23, 2005

Here is a small sample of data from a table of about 500 rows (using MS SQL Server 2000):

EntryTime    Speed Gross Net
------------ ----- ----- ----
21:09:13.310     0     0    0
21:09:19.370  9000  NULL NULL
21:09:21.310  NULL    95 NULL
21:10:12.380  9000  NULL NULL
21:10:24.310  NULL   253 NULL
21:11:24.370  8000  NULL NULL
21:11:27.310  NULL   410 NULL
21:11:51.320  NULL   438 NULL
21:11:51.490  NULL  NULL   10

After the first row, every row has only one value of the three. I would like to replace all the NULL values with calculated interpolations. I can do it w/ cursors or while loops. I could do it w/ VB (I think). Can this be done w/ an UPDATE statement using self joins? What would be the best way? The value for Speed can increase or decrease over time, but can never be < 0. Net is always less than Gross, and neither can go below 0. TIA for any helpful suggestions.

Thanks,
BM

View 9 Replies View Related

Is It Possible To Change The Data Type During An Sql Update Statement?

Jun 13, 2007

Hello,



I have a SQL update statement that updates some user names; however, the user names exceed the length of the data type. Currently, the data type for the column username is set to nvarchar(8).



How can I change that to nvarchar(10) in a SQL Update statement?



Thanks in advance.
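The length of a column is part of the table definition, so it is changed with DDL rather than with an UPDATE. A minimal sketch (the table name dbo.Users is made up; existing data is preserved when widening a column):

ALTER TABLE dbo.Users
ALTER COLUMN username NVARCHAR(10) NULL   -- keep the column's existing NULL/NOT NULL setting here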

View 9 Replies View Related

Problem With Data Refreshing After Insert Or Update Statement

Mar 18, 2008

Dear All,

Im using VS2008, visual basic and SQL Server express.

General question, please: is it necessary to issue some sort of command to 'commit' data to a SQL Express database after an INSERT or UPDATE SQL command?

I'm getting strange behavior where the data is not refreshed unless I exit my app and re-enter. In other words, I can run a SQL command and the data is apparently saved (because I get no errors), then if I refresh a data set or do a SQL SELECT query, the data that I expect to return is not there.

I'm fairly new to SQL Express (and SQL Server generally), so I don't know if it's my coding or whether I need to switch some 'feature' on or off.

I hope thats clear

Also, could someone point me to documentation that explains each parameter in the connection string?

Many Thanks

Chris Anderson


My code is:


ConnectionString = "Data Source=.\SQLEXPRESS;AttachDbFilename=C:\myfile.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True"

Connection = New SqlConnection

Connection.ConnectionString = ConnectionString

Connection.Open()
'''''''''''''''''the above code is done at the start of my application


'''''''this code below called during application many times


dim sql as string = "my sql string here"

Dim cmd As SqlCommand = Nothing

cmd = New SqlCommand(sql, Connection)






Try

cmd.ExecuteNonQuery()

Catch err As SqlException

MessageBox.Show(err.Message.ToString())

MessageBox.Show(sql, "SQL ERROR: ", MessageBoxButtons.OK, MessageBoxIcon.Error)

End Try



cmd.Dispose()

View 5 Replies View Related

Query Tuning And Stored Procedure Tuning

Nov 22, 2000

Hi

Are there any good books for query tuning and stored procedure tuning?

Thanks

View 1 Replies View Related

Need Help With Large Dynamic Sql Statement

Mar 16, 2006

Is there a better way to do this? Or is this basically how you would write a dynamic SQL stored procedure? Also, is this code vulnerable to SQL injection? All of the parameters are being passed in by a SqlDataSource.
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE [dbo].[pe_getAppraisals]
-- Add the parameters for the stored procedure here
@PType nvarChar(50),
@Client nvarChar(50),
@City nvarChar(50),
@ApptDate nvarChar(50),
@OrderDate nvarChar(50),
@Status nvarChar(50),
@AType nvarChar(50),
@Text nvarChar(50),
@OrderBy nvarChar(50),
@SortDir nvarChar(4),
@PageSize INT,
@PageNum INT
AS
DECLARE
@l_Select nvarChar(4000),
@l_From nvarChar(4000),
@l_SetWhere bit,
@l_PType nvarChar(100),
@l_Client nvarChar(100),
@l_City nvarChar(100),
@l_ApptDate nvarChar(100),
@l_OrderDate nvarChar(100),
@l_Status nvarChar(100),
@l_AType nvarChar(100),
@l_Text nvarChar(4000),
@l_SortDir nvarChar(4),
@l_TotalRecords INT
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF @OrderBy IS NULL     SET @OrderBy = 'OrderDate'
IF @SortDir IS NULL     SET @SortDir = 'DESC'
IF @SortDir = 'DESC' SET @l_SortDir = 'ASC'
ELSE SET @l_SortDir = 'DESC'
--Initialize SetWhere to test if a parameter has Added the keyword WHERE
SET @l_SetWhere = 0
--Create WHERE portion of the SQL SELECT Statement
IF (@PType IS NOT NULL)
BEGIN
   SET @l_PType = ' WHERE o.PropertyTypeID=' + @PType
   SET @l_SetWhere = 1
END
ELSE SET @PType = ''

IF (@Client IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_Client = ' WHERE o.ClientID=' + @Client
      SET @l_SetWhere = 1
   END
   ELSE SET @l_Client = ' AND o.ClientID=' + @Client
END
ELSE SET @l_Client = ''

IF (@City IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_City = ' WHERE o.City=''' + @City + ''''
      SET @l_SetWhere = 1
   END
   ELSE SET @l_City = ' AND o.City=''' + @City + ''''
END
ELSE SET @l_City = ''

IF (@ApptDate IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_ApptDate = ' WHERE o.ApptDate= ''' + @ApptDate + ''''
      SET @l_SetWhere = 1
   END
   ELSE SET @l_ApptDate = ' AND o.ApptDate= ''' + @ApptDate + ''''
END
ELSE SET @l_ApptDate = ''

IF (@OrderDate IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_OrderDate = ' WHERE o.OrderDate=''' + @OrderDate + ''''
      SET @l_SetWhere = 1
   END
   ELSE SET @l_OrderDate = ' AND o.OrderDate=''' + @OrderDate + ''''
END
ELSE SET @l_OrderDate = ''

IF (@Status IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_Status = ' WHERE o.StatusID=' + @Status
      SET @l_SetWhere = 1
   END
   ELSE SET @l_Status = ' AND o.StatusID=' + @Status
END
ELSE SET @l_Status = ''

IF (@AType IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_AType = ' WHERE o.ReportID=' + @AType
      SET @l_SetWhere = 1
   END
   ELSE SET @l_AType = ' AND o.ReportID=' + @AType
END
ELSE SET @l_AType = ''

IF (@Text IS NOT NULL)
BEGIN
   IF @l_SetWhere = 0
   BEGIN
      SET @l_Text = ' WHERE (o.FileNumber LIKE ''' + @Text + '%''' +
         ' OR o.LoanOfficer LIKE ''' + @Text + '%''' +
         ' OR o.Borrower LIKE ''' + @Text + '%''' +
         ' OR o.StreetAddrA LIKE ''' + @Text + '%''' +
         ' OR o.State LIKE ''' + @Text + '%''' +
         ' OR o.ContactName LIKE ''' + @Text + '%'')'
      SET @l_SetWhere = 1
   END
   ELSE SET @l_Text = ' AND (o.FileNumber LIKE ''' + @Text + '%''' +
         ' OR o.LoanOfficer LIKE ''' + @Text + '%''' +
         ' OR o.Borrower LIKE ''' + @Text + '%''' +
         ' OR o.StreetAddrA LIKE ''' + @Text + '%''' +
         ' OR o.State LIKE ''' + @Text + '%''' +
         ' OR o.ContactName LIKE ''' + @Text + '%'')'
END
ELSE SET @l_Text = ''
--Build the SQL SELECT Statement
SET @l_Select = 'o.OrderID AS OrderID, o.FileNumber AS FileNumber, o.OrderDate AS OrderDate, o.ClientID AS ClientID, o.ClientFileNumber AS ClientFileNumber, o.PropertyTypeID AS PropertyTypeID, o.EstimatedValue AS EstimatedValue, o.PurchaseValue AS PurchaseValue, o.LoanOfficer AS LoanOfficer, o.ReportFee AS ReportFee, o.FeeBillInd AS FeeBillInd, o.FeeCollectInd AS FeeCollectInd, o.CollectAmt AS CollectAmt, o.Borrower AS Borrower, o.StreetAddrA AS StreetAddrA, o.StreetAddrB AS StreetAddrB, o.City AS City, o.State AS State, o.Zip AS Zip, o.ContactName AS ContactName, o.PhoneA AS PhoneA, o.PhoneB AS PhoneB, o.ApptDate AS ApptDate, o.ApptTime AS ApptTime, o.AppraiserID AS AppraiserID, o.InspectionDate AS InspectionDate, o.DateMailed AS DateMailed, o.TrackingInfo AS TrackingInfo, o.ReviewedBy AS ReviewedBy, o.StatusID AS StatusID, o.Comments AS Comments, o.SpecialNotes AS SpecialNotes, o.EmailInd AS EmailInd, o.MgmtName AS MgmtName, o.MgmtContactName AS MgmtContactName, o.MgmtAddress AS MgmtAddress, o.MgmtPhone AS MgmtPhone, o.MgmtFax AS MgmtFax, o.MgmtFee AS MgmtFee, o.MgmtNotes AS MgmtNotes, o.LoginName AS LoginName, on1.NotesDesc AS PreNotesDesc, on2.NotesDesc AS PostNotesDesc, os.StatusDesc AS StatusDesc, ot.ReportDesc AS ReportDesc, ot.ReportFee AS ReportPrice, ot.ReportSeq AS ReportSeq, pc.PriceDesc AS PriceDesc, pt.PropertyTypeDesc AS PropertyTypeDesc, l.LoginName AS AppraiserName, l2.LoginName As ClientName'
SET @l_From = 'Orders AS o
   LEFT OUTER JOIN OrderNotes AS on1 ON o.PreNotesID = on1.NotesID
   LEFT OUTER JOIN OrderNotes AS on2 ON o.PostNotesID = on2.NotesID
   LEFT OUTER JOIN OrderStatus AS os ON o.StatusID = os.StatusID
   LEFT OUTER JOIN OrderTypes AS ot ON o.ReportID = ot.ReportID
   LEFT OUTER JOIN PriceCodes AS pc ON ot.PriceID = pc.PriceID
   LEFT OUTER JOIN PropertyTypes AS pt ON o.PropertyTypeID = pt.PropertyTypeID
   LEFT OUTER JOIN Logins AS l ON o.AppraiserID = l.LoginID
   LEFT OUTER JOIN Logins AS l2 ON o.ClientID = l.LoginID'
SET @l_TotalRecords = @PageSize * @PageNum
PRINT ' ORDER BY ' + @OrderBy + ' ' + @l_SortDir + ') ORDER BY ' + @OrderBy + ' ' + @SortDir
Execute('SELECT TOP(' + @PageSize + ') * FROM (SELECT TOP(' + @l_TotalRecords + ') ' + @l_Select + ' FROM ' + @l_From + @l_PType + @l_Client + @l_City + @l_ApptDate + @l_OrderDate + @l_Status + @l_AType + @l_Text + ' ORDER BY ' + @OrderBy + ' ' + @l_SortDir + ') AS rsltTbl ORDER BY ' + @OrderBy + ' ' + @SortDir)
END
Thank You,
Jason

View 5 Replies View Related

SQL 7 Hanging With Large In() Statement

Mar 2, 2004

For MS SQL 7, an error occurs as shown at bottom.
Basically a piece of embedded sql is being sent to the db which has a syntax like:
select distinct WC_REFERENCE_MANAGER.logicalname,
WC_REFERENCE_MANAGER.referenceid,
wc_site.doclistid,
WC_REFERENCE_MANAGER.shared,
WC_REFERENCE_MANAGER.languageid
from wc_content inner join wc_content_ref on wc_content.contentid = wc_content_ref.documentid
inner join wc_reference_manager on wc_content_ref.referenceid = wc_reference_manager.referenceid
inner join wc_site on wc_reference_manager.referenceid = wc_site.referenceid
where xmlid = @p1 and contains(xmldata, @p2)
and wc_reference_manager.languageid = @p3
and wc_site.siteid = @p4
and wc_site.sectionid IN (

Followed by over 15000 numbers

(don't ask - it's a generic product with a generic db design behind it!).

Any comments, apart from rewriting the query (which isn't an option)?

The error is :



02/27/04 14:30:05 Stack Overflow Dump not possible - Exception c00000fd E at 00499f17
2004-02-27 14:30:05.93 spid8 Address=499f17 Exception Code = c00000fd
2004-02-27 14:30:05.93 spid8 eax=394d29f4 ebx=00000000 ecx=394d29f4 edx=0000000d
2004-02-27 14:30:05.93 spid8 esi=394d29f4 edi=433a499c eip=00499f17 esp=3c263000
2004-02-27 14:30:05.93 spid8 ebp=3c26300c efl=00010a97
2004-02-27 14:30:05.93 spid8 cs=1b ss=23 ds=23 es=23 fs=3b gs=0
2004-02-27 14:30:05.93 spid8 Input Buffer 250651 bytes -
2004-02-27 14:30:05.93 spid8
2004-02-27 14:30:05.93 spid8 s p _ p r e p a r e & @ T @ p 1 i n t , @ p 2 n v a r c h a
2004-02-27 14:30:05.93 spid8 r ( 4 0 0 0 ) , @ p 3 i n t , @ p 4 i n t c s e l e c t
2004-02-27 14:30:05.93 spid8 d

View 9 Replies View Related

Use Command Or BCP For Large SQL Statement

Jun 9, 2006

I have an application (Java) which is converting a relatively large XML file (>8MB) and generating SQL INSERT statements from the contents. The resultant SQL from the transformation is quite large (nearly 8MB also) and I'm just wondering if people would have an opinion as to whether to just place the SQL into a command and execute it, or write it out to a file and use BCP or something similar.

My preference would be to insert the data straight from the application to cut down on the number of "working parts" but I'm just worried there would be too many performance issues. For operational reasons I can't just pass the XML to SQL Server and let it process it. Anybody come across the same problem or have any opinions???

View 1 Replies View Related

Data Tuning Advisor

Apr 20, 2006

Is Data Tuning Advisor available for any Express version?

--Thanks

View 2 Replies View Related


One Large Update Vs. Many Small

Oct 8, 2007



Hello,

The application will add items into a "bag". That is, the items in one table will refer to a record in another table. This will be done in a timely manner -- with second or minute delays between adding new items. There will be up to a thousand items per bag. The option is to wait until a full bag accumulates and set up all the references at once by using


UPDATE items SET container_ref = bag WHERE id IN [...]

The disadvantage I see in such an all-at-once approach is the inability to encapsulate the functionality into an SP -- the problem is passing a set of IDs. The advantage should be efficiency in terms of total SQL Server load. How much would it be?
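If the server is SQL Server 2005 or later, one way to hand a set of IDs to a stored procedure (sketched here with made-up procedure and parameter names, reusing the items/container_ref/id names from the post) is to pass them as XML and join against it:

CREATE PROCEDURE dbo.AddItemsToBag      -- hypothetical name
    @BagID INT,
    @ItemIDs XML                        -- e.g. '<ids><id>1</id><id>2</id></ids>'
AS
BEGIN
    SET NOCOUNT ON

    UPDATE i
    SET container_ref = @BagID
    FROM items AS i
    INNER JOIN @ItemIDs.nodes('/ids/id') AS t(x)
        ON i.id = t.x.value('.', 'int')
END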

View 3 Replies View Related

Loading Data Warehouse (Perf. Tuning)

Feb 25, 2008



Hi all,

I'm loading my data warehouse using several SCDs. Some of these SCDs need to occur in sequence, while others can be run at the same time. I'm wondering what the best option for me is in terms of performance. Here is what I was considering:

1) Create a single package. Create two sequence containers --- one that will contain SCD loads that occur in sequence; the other sequence container contains SCD loads that occur in parallel.

OR

2) Create a set of packages for each SCD load. Then create a "Master" package that will use "Execute Package Task" components to call these packages.

The other reason I want to bring up these different ways to design a DW load is that the second option is a "cleaner", more organizational approach to the load. The first option can get quite messy and large if you have several SCDs and several sequence containers. However, I'm looking for the fastest performance. Any thoughts?

View 7 Replies View Related

Use Replace() In An Update In A Large Ntext

Apr 21, 2008

I've got a table that I have to update in preparation for our environment move (2k to 2005 SP2). The developers that designed the application created a table called schemas, which holds the contents of an XML file inside of an ntext field named Data. I need to parse through the field and do a find/replace to replace all instances of www.site.com with www7.site.com. It's all over the place in the file. The problem is that the datalength() of each of the fields (there are 2 rows) is above 15000.

Normally, I'd run something like this:

update schemas set data = replace(cast(Data as varchar(max)), 'www.site.com', 'www7.site.com') where data like '%www.site.com%'

On smaller columns it works great - but it won't work on these because they're too big (the update will chop anything beyond the varchar(max) value). I could do it manually, but this DB will be refreshed from production on a weekly basis and I'd like to script as many of the environment changes to the DB as possible. Any ideas?
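Assuming the script can run after the database is already on 2005 (where the MAX types exist), the NTEXT value can usually be round-tripped through NVARCHAR(MAX) without truncation. A sketch only, using the schemas/Data names from the post:

UPDATE dbo.schemas
SET Data = CAST(REPLACE(CAST(Data AS NVARCHAR(MAX)),
                        'www.site.com', 'www7.site.com') AS NTEXT)
WHERE Data LIKE '%www.site.com%'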

View 1 Replies View Related

Multiple Tables Used In Select Statement Makes My Update Statement Not Work?

Aug 29, 2006

I am currently having this problem with GridView and DetailsView. When I drag either onto the page and set my select statement to pick from one table and then update that data through the gridview (let's say), the update works perfectly. My problem is that the table I am pulling data from is mainly foreign keys. So in order to hide the number values of the foreign keys, I select the string value columns from the tables that contain the primary keys. I then use INNER JOIN in my SELECT so that I only get the data that pertains to the user I am looking to list and edit. I run the "test query" and everything I need shows up as I want it.

I then go back to the gridview and change the fields which are foreign keys to templates. When I edit the templates I bind the field that contains the string value of the given foreign key to the template. This works great, because now the user will see the string representation instead of the ID numbers that coincide with the string value. So I run my webpage and everything shows up as I want it to, all the data is correct and I get no errors. I then click edit (as I have checked the "enable editing" box) and the gridview changes to edit mode. I make my changes and then select "update." When the page refreshes and the gridview returns, the data is not updated and the original data is shown.

I am sorry for so much typing, but I want to be as clear as possible with what I am doing. The only thing I can see being the issue is that when I set up my SELECT and FROM to contain fields from multiple tables, the UPDATE then does not work. When I remove all of my JOINs and go back to foreign keys and one table, the update works again. Below is what I have for my SQL statements:

SELECT:

SELECT People.FirstName, People.LastName, People.FullName, People.PropertyID, People.InviteTypeID, People.RSVP, People.Wheelchair, Property.[House/Day Hab], InviteType.InviteTypeName
FROM (InviteType INNER JOIN (Property INNER JOIN People ON Property.PropertyID = People.PropertyID) ON InviteType.InviteTypeID = People.InviteTypeID)
WHERE (People.PersonID = ?)

UPDATE:

UPDATE [People]
SET [FirstName] = ?, [LastName] = ?, [FullName] = ?, [PropertyID] = ?, [InviteTypeID] = ?, [RSVP] = ?, [Wheelchair] = ?
WHERE [PersonID] = ?

The only fields I want to update are in [People]. My WHERE is based on a control that I use to select a person from a drop down list. If I run the test query for the update while setting up my data source, the query will update the record in the database. It is when I try to make the update from the gridview that the data is not changed. If anything is not clear please let me know and I will clarify as much as I can. This is my first project using ASP and working with databases, so I am completely learning as I go. I took some database courses in college but I have never interacted with them with a web based front end. Any help will be greatly appreciated.

Thank you in advance for any time, help, and/or advice you can give.

Brian

View 5 Replies View Related

Flat File-connection: Tuning Of The Data Types

May 25, 2007

Hello everybody.



I have a problem with a "Flat file" connection which I cannot understand at present. Here is the issue: I've got an ASCII file containing 233898 lines. I try to read this file in two different packages using two different connections. In the first connection I used the default data type - DT_STR of length 50; in the second one I used the "Propose types..." feature (with 2000 samples) to detect types which better match the reality. And when I try to load my data, the first connection reads exactly 233898 lines from the file, the second one 203898. Somehow it leaves 30000 lines unloaded.





I tried to observe the error output for the second connection - everything goes smoothly and without problems. But somehow those 30000 lines are missed.



Has anybody experienced such a situation? Is the issue known?



Thanks in advance,

A.G.

View 9 Replies View Related

SQL Server 2012 :: Create Dynamic Update Statement Based On Return Values In Select Statement

Jan 9, 2015

OK, I have a query "SELECT ColumnNames FROM tbl1"; let's say the values returned are "age, sex, race".

Now I want to be able to create an UPDATE statement like "UPDATE tbl2 SET Col2 = age + sex + race" dynamically and execute this UPDATE statement. So, if the next SELECT statement returns "age, sex, race, gender", then the script should create "UPDATE tbl2 SET Col2 = age + sex + race + gender" and execute it.
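A sketch of one way to build and run that statement, assuming tbl1 returns one column name per row in ColumnNames and that those names are trusted (otherwise they should be validated or wrapped with QUOTENAME before concatenation):

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)

-- concatenate the column names into "age + sex + race + ..."
SELECT @cols = COALESCE(@cols + ' + ', '') + ColumnNames
FROM tbl1

SET @sql = N'UPDATE tbl2 SET Col2 = ' + @cols
EXEC sp_executesql @sql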

View 4 Replies View Related

Multiple Update Triggers Or One Large Trigger With If&#39;s

May 1, 2001

I have a table which, when certain columns are updated, needs a trigger to fire to update a next-schedule date in that same table for that record. I can write the trigger, but my question for performance and efficiency is which approach would be better: separate triggers for the 8 columns, or one large trigger with IFs to check whether these columns were updated.
Thanks

View 1 Replies View Related

Exceedingly Long Update On Large Tables - Why?

Mar 28, 2006

We have a simple UPDATE query, joining two tables, that takes much longer than 10 hours to run, but if we break the table in six (10 million rows in each table), it takes only fifteen minutes to run each part.

Why? And how can we tell in advance whether a query will cross the threshold into l.o.n.g.r.u.n.n.i.n.g query? Or, how can we prevent it?

The system is Windows XP Pro with 4GB RAM (/3GB switch), and SQL Server Standard 2005. Log files, swap files, dbf files are on separate drives. The system is dedicated to SQL Server. No other queries are running at the same time. The database is in Simple logging mode. Each table is a few GB with 60 million rows.

An example problem query is: (updating fewer than 10 bytes)
UPDATE bigtable
SET bigtable.custage = scores.custage, bigtable.custscore = scores.custscore
FROM bigtable
JOIN scores ON bigtable.custid = scores.custid

In this case, each table has 60 million rows. 'custid' is a sequential, unique integer. SCORES table is clustered on 'custid' and is 1.5GB in size. BIGTABLE has an index on 'custid', and is 6GB in size. There is a one-to-one match between the tables on 'custid', but not enforced. The SCORES table was created by exporting a few fields (but all 60 million records) from BIGTABLE, updating the values in a separate program, then importing back in SQL Server into the SCORES table.

The first time this query was run, we stopped it after it ran 16 hours. When we broke up the bigtable into 10 million record chunks (big1, big2, big3..., big6) each update only took 15 minutes, for 90 minutes total.

* How can in we tell in advance that the full chunk would take more than a few hours?
* Why is it taking SO MUCH LONGER than in smaller chunks?
* When a query is taking that long to run, is there any way to tell where in the plan it is?
* What should we do differently?

Thanks for any help; this is a real head scratcher for us.

View 9 Replies View Related

Large Log Table ....SELECT * FROM Statement....killing The Performance Of Server..Help Me Out..

Mar 12, 2008

Hi all

I have a large log table with a large amount of data (1 month only). If I run a query like SELECT * FROM <table_name>, the server goes very, very slow.

Because of the large data the system is going slow.

Please can somebody help me with suggestions on how to get good performance?

View 4 Replies View Related

SQL Server 2012 :: Update Statement With CASE Statement?

Aug 13, 2014

I was tasked to create an UPDATE statement for 6 tables. I would like to update 4 columns within the 6 tables; they all contain the same column names. The tables get their information from the source table; however, the data that is transferred to the 6 tables is sometimes incorrect, and I need to write an UPDATE statement that will automatically correct the data. The UPDATE statement should also contain a WHERE clause.

The columns are [No], [Salesperson Code], [Country Code] and [Country Name].

I was thinking of doing:

Update [tablename]
SET [No] =
CASE
WHEN [No] ='AF01' THEN 'Country Code' = 'ZA7' AND 'Country Name' = 'South Africa'
ELSE 'Null'
END

What is the best way to script this?
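A CASE expression returns a single value, so it cannot set [Country Code] and [Country Name] in one branch the way the draft above tries to. If the goal is simply to correct both columns for a given [No], a plain multi-column UPDATE may be enough (a sketch using only the values from the post; in practice each of the 6 tables would get the same statement, or the mapping could come from a lookup table joined in):

UPDATE [tablename]
SET [Country Code] = 'ZA7',
    [Country Name] = 'South Africa'
WHERE [No] = 'AF01'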

View 1 Replies View Related

Transact SQL :: Update Statement In Select Case Statement

May 5, 2015

I am attempting to run update statements within a SELECT CASE statement.

Select case x.field
WHEN 'XXX' THEN
  UPDATE TABLE1
   SET TABLE1.FIELD2 = 1
  ELSE
   UPDATE TABLE2
   SET TABLE2.FIELD1 = 2
END
FROM OuterTable x

I get incorrect syntax near the keyword 'update'.
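CASE is an expression, not a control-of-flow construct, so an UPDATE cannot appear inside it. The usual workaround is separate conditional UPDATEs; the sketch below assumes the intent is to update TABLE1 for outer rows where field = 'XXX' and TABLE2 otherwise, and that an ID column (hypothetical) links OuterTable to each target table:

UPDATE t1
SET FIELD2 = 1
FROM TABLE1 AS t1
INNER JOIN OuterTable AS x ON x.ID = t1.ID   -- ID is a made-up join key
WHERE x.field = 'XXX'

UPDATE t2
SET FIELD1 = 2
FROM TABLE2 AS t2
INNER JOIN OuterTable AS x ON x.ID = t2.ID
WHERE x.field <> 'XXX'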

View 7 Replies View Related

UPDATE SQL Statement In Excel VBA Editor To Update Access Database - ADO - SQL

Jul 23, 2005

Hello, I am trying to update records in my database from Excel data using the VBA editor within Excel. In order to launch a query, I use SQL via ADO as follows:

Dim adoConn As ADODB.Connection
Dim adoRs As ADODB.Recordset
Dim sConn As String
Dim sSql As String
Dim sOutput As String

sConn = "DSN=MS Access Database;" & _
    "DBQ=MyDatabasePath;" & _
    "DefaultDir=MyPathDirectory;" & _
    "DriverId=25;FIL=MS Access;MaxBufferSize=2048;PageTimeout=5;" & _
    "PWD=xxxxxx;UID=admin;"

'ID, A, B, C... are my table fields
sSql = "SELECT ID, `A`, B, `C being a date`, D, E, `F`, `H`, I, J, `K`, L" & _
    " FROM MyTblName" & _
    " WHERE (`A`='MyA')" & _
    " AND (`C`>{ts '" & Format(Date, "yyyy-mm-dd hh:mm:ss") & "'})" & _
    " ORDER BY `C` DESC"

Set adoConn = New ADODB.Connection
adoConn.Open sConn
Set adoRs = New ADODB.Recordset
adoRs.Open Source:=sSql, _
    ActiveConnection:=adoConn
adoRs.MoveFirst
Sheets("Sheet1").Range("a2").CopyFromRecordset adoRs
Set adoRs = Nothing
Set adoConn = Nothing

Does anyone know how I can use the UPDATE, DELETE and INSERT SQL statements in this environment? Copying SQL statements from Access does not work, as I would have to reference the Access object in my project, which I want to avoid if I can. Ideally I would like to stick to an ADO and SQL approach only.

Thank you very much
Nono

View 1 Replies View Related

JDBC 2005 Update Statement - Failing Multi Row Update.

Nov 9, 2007

It appears to update only the first qualifying row. The trace shows a row count of one when there are multiple qualifying rows in the table. This problem does not exist in JDBC 2000.

View 5 Replies View Related

How To Commit The Rest Of The Transactions Of An Update In Large Bulk?

Feb 8, 2004

I have to modify the table structure of a table that already has a lot of data. The log is getting full due to uncommitted transactions; there is a lot of data being updated in large bulks, not all of the transactions are committed, and the update task cannot be completed.
However, there is no more spare disk space for it to commit the transaction. Can anyone help?

View 2 Replies View Related






