AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)
Aug 30, 2007
I have a table that is being used to log track plays on our website.
Here's the table:
CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]
There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.
I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:
CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId
When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER, the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.
How exactly is the pseudo 'inserted' table formed?
For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.
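A minimal sketch of that two-statement approach (the procedure name is hypothetical; the tables are the ones shown above):

-- Hypothetical wrapper: one short transaction per logged play.
CREATE PROCEDURE dbo.usp_LogTrackPlay
    @TrackId int,
    @IPAddress varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        UPDATE dbo.Music_BandTracks
        SET PlayCount = PlayCount + 1
        WHERE TrackId = @TrackId;

        -- ListenDate falls back to its getdate() default
        INSERT INTO dbo.Music_BandTrackPlays (TrackId, IPAddress)
        VALUES (@TrackId, @IPAddress);
    COMMIT TRANSACTION;
END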
I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?
Thanks!
View 6 Replies
Apr 24, 2013
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
View 4 Replies
Jul 29, 2014
I am doing performance testing of the In-Memory option in SQL Server 2014. As part of that, I want to insert 500 million rows into an in-memory-enabled test table I have created.
I need a sample script to insert 500 million records into a table ....
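A rough sketch of such a script (the table dbo.InMemTest and its columns Id and Payload are assumptions; rows are generated from a cross-joined system view and inserted in one-million-row batches):

DECLARE @batch int = 0;
WHILE @batch < 500   -- 500 batches x 1,000,000 rows = 500 million
BEGIN
    INSERT INTO dbo.InMemTest (Id, Payload)
    SELECT @batch * 1000000 + n.Num,
           REPLICATE('x', 100)          -- filler payload
    FROM (SELECT TOP (1000000)
                 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Num
          FROM sys.all_objects AS a
          CROSS JOIN sys.all_objects AS b) AS n;
    SET @batch += 1;
END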
View 9 Replies
May 25, 2006
SQL Server 2000, QA Database: A table called Telephone_Directory with just 4,000 records.
SELECT * FROM Telephone_Directory is taking forever.
If I stop the select after 1 second I see 162 rows.
If I stop the select after 1 minute I see again 162 rows.
Why could this be happening?
The same query on the Production Database takes 6 seconds to retrieve the 4,000 records.
View 13 Replies
Jul 25, 2007
Can anyone help me on this...
When I select data from the table using a SELECT statement, it takes a huge amount of time. The table contains 7 million entries, and when I select with a criterion it takes around 45 secs. The system has 4 GB RAM and a dual-processor CPU. The SELECT statement does not contain any grouping.
Should it take this much time to retrieve the data?
The table does include an indexed field.
So can anyone help me with the different things I can do to make the retrieval faster?
Andy
View 5 Replies
Apr 21, 2007
I am trying to create a clustered index on a view of a table that has an xml datatype. This indexing ran for two days and still did not complete. I tried to leave it running while continuing to use the database, but the SELECT statements were executing too slowly and the DML statements were timing out. Is there a way to control the server/CPU resources used by an indexing process? How can I determine the completion percentage of the indexing process? How can I make indexing the view with the xml data type take less time?
The table definition is displayed below.
CREATE TABLE [dbo].[AuditLogDetails](
[ID] [int] IDENTITY(1,1) NOT NULL,
[RecordID] [int] NOT NULL,
[TableName] [varchar](64) NOT NULL,
[Modifications] [xml] NOT NULL,
CONSTRAINT [PK_AuditLogDetails] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The view definition is displayed below.
ALTER VIEW [dbo].[vwAuditLogDetails] WITH SCHEMABINDING
AS
SELECT P.ID,D.RecordID, dbo.f_GetModification(D.Modifications,P.ID) AS Modifications
FROM dbo.AuditLogParent P
INNER JOIN dbo.AuditLogDetails AS D ON dbo.f_GetIfModificationExist(D.Modifications,P.ID)=1
The definition for UDF f_GetModification
ALTER function [dbo].[f_GetModification]( @Modifications xml,@PID uniqueidentifier )
returns xml
with schemabinding
as
begin
declare @pidstr varchar(100)
SET @pidstr = LOWER(CONVERT(varchar(100), @PID))
return @Modifications.query('/Modifications/modification[@ID eq sql:variable("@pidstr")]')
end
The definition for UDF f_GetIfModificationExist
ALTER function [dbo].[f_GetIfModificationExist]( @Modifications xml,@PID uniqueidentifier )
returns Bit
with schemabinding
as
begin
declare @pidstr varchar(100)
SET @pidstr = LOWER(CONVERT(varchar(100), @PID))
return @Modifications.exist('/Modifications/modification[@ID eq sql:variable("@pidstr")]')
end
The Statement to create the index is below.
CREATE UNIQUE CLUSTERED INDEX [IX_ID_RecordID] ON [dbo].[vwAuditLogDetails]
(
[ID] ASC,
[RecordID] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
View 1 Replies
Nov 26, 1998
We have a MS SQL Server 6.5 database table with 643,000 records.
There are several indexes including some clustered indexes.
We do a statement: update wo set udf3 = '1234567890123456' where woid = '123'
this returns immediately.
Then we try the same statement where the string is 1 character longer and it
takes 45 minutes to return. There is no indication of what the server is doing
during this time.
There is no index on UDF3 and WOID is the primary key.
Any suggestions as to what is happening? What can we do to correct it?
DBCC CheckTable finds no errors.
name rows reserved data index_size unused
-------------------- ----------- ------------------ ------------------ ------------------ ------------------
WO 643124 493418 KB 321580 KB 169824 KB 2014 KB
View 1 Replies
Jan 16, 2007
Hi,
I've got a weird problem. I've created an sp that takes 7 seconds to run in the Query Analyzer. But when I put dataAdapter.Fill(dataSet.Tables(0)) in my code, it takes forever to finish!!
What's going on?
Any thoughts highly appreciated.
t.i.a.,ratjetoes.
View 2 Replies
Jul 20, 2005
I'm having a problem with an update operation in a stored procedure. It runs so slowly that it is unusable, unless I comment a part out, in which case it is very fast. However, I need the whole thing :). I have a table of email addresses of people who want to get invited to parties. Each row contains information like email address, city, state, country, and preferences for what types of events are of interest. The primary key is an EMAILID, and there is a unique constraint on the email field. The stored procedure receives the field data as arguments, and inserts the record if the email address passed is not in the database. This works perfectly. However, if the stored procedure is called for an email address that already exists, it updates the existing row instead of doing an insert. This way I can build a web page that lets people modify their preferences, opt in and out of the list and so on. If I am doing an update, the stored procedure runs SUPER SLOW (and the page times out) unless I comment out the part of the update statement for city, state, country and zipcode. However, I really need to be able to update this! My database has 29 million rows. Thank you for telling me anything about how I can speed up this update!
Here is the SQL statement to run the stored procedure:
declare @now datetime;
set @now = GetUTCDate();
EXEC usp_EMAIL_Subscribe @Email='dberman@sen.us', @OptOutDate=@now,
    @Opt_GenInterest=1, @Opt_DatePeople=0, @Opt_NewFriends=1,
    @Opt_OldFriends=0, @Opt_Business=1, @Opt_Couples=0, @OptOut=0,
    @Opt_Events=0, @City='Boston', @State='MA', @ZCode='02215',
    @Country='United States'
Here is the stored procedure:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
ALTER PROCEDURE [usp_EMAIL_Subscribe]
    (@Email [varchar](50),
     @Opt_GenInterest [tinyint],
     @Opt_DatePeople [tinyint],
     @Opt_NewFriends [tinyint],
     @Opt_OldFriends [tinyint],
     @Opt_Business [tinyint],
     @Opt_Couples [tinyint],
     @OptOut [tinyint],
     @OptOutDate datetime,
     @Opt_Events [tinyint],
     @City [varchar](30),
     @State [varchar](20),
     @ZCode [varchar](10),
     @Country [varchar](20))
AS
BEGIN
    declare @EmailID int
    set @EmailID = NULL
    -- Get the EmailID matching the provided email address
    set @EmailID = (select EmailID from v_SENWEB_EMAIL_SUBSCRIBERS where EmailAddress = @Email)
    -- If the address is new, insert the address and settings.
    -- Otherwise, UPDATE the existing email profile
    if @EmailID is null or @EmailID = -1
    Begin
        INSERT INTO v_SENWEB_Email_Subscribers
            (EmailAddress, OptInDate, OptedInBy, City, StateProvinceUS, Country, ZipCode,
             GeneralInterest, MeetDate, MeetFriends, KeepInTouch, MeetContacts,
             MeetOtherCouples, MeetAtEvents)
        VALUES
            (@Email, GetUTCDate(), 'Subscriber', @City, @State, @Country, @ZCode,
             @Opt_GenInterest, @Opt_DatePeople, @Opt_NewFriends, @Opt_OldFriends,
             @Opt_Business, @Opt_Couples, @Opt_Events)
    End
    Else
    BEGIN
        UPDATE v_SENWEB_EMAIL_SUBSCRIBERS
        SET
            --City = @City,
            --StateProvinceUS = @State,
            --Country = @Country,
            --ZipCode = @ZCode,
            GeneralInterest = @Opt_GenInterest,
            MeetDate = @Opt_DatePeople,
            MeetFriends = @Opt_NewFriends,
            KeepInTouch = @Opt_OldFriends,
            MeetContacts = @Opt_Business,
            MeetOtherCouples = @Opt_Couples,
            MeetAtEvents = @Opt_Events,
            OptedOut = @OptOut,
            OptOutDate = CASE
                             WHEN (@OptOut = 1) THEN @OptOutDate
                             WHEN (@OptOut = 0) THEN 0
                         END
        WHERE EmailID = @EmailID
    END
    return @@Error
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Finally, here is the database schema for the table, courtesy of Enterprise Manager:
CREATE TABLE [dbo].[EMAIL_SUBSCRIBERS] (
    [EmailID] [int] IDENTITY (1, 1) NOT NULL ,
    [EmailAddress] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [OptinDate] [smalldatetime] NULL ,
    [OptedinBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [FirstName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [MiddleName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [LastName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [JobTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CompanyName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [WorkPhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [HomePhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine1] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine2] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine3] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [City] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceUS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceOther] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Country] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [ZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SubZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [GeneralInterest] [tinyint] NULL ,
    [MeetDate] [tinyint] NULL ,
    [MeetFriends] [tinyint] NULL ,
    [KeepInTouch] [tinyint] NULL ,
    [MeetContacts] [tinyint] NULL ,
    [MeetOtherCouples] [tinyint] NULL ,
    [MeetAtEvents] [tinyint] NULL ,
    [OptOutDate] [datetime] NULL ,
    [OptedOut] [tinyint] NOT NULL ,
    [WhenLastMailed] [datetime] NULL
) ON [PRIMARY]
GO
CREATE UNIQUE CLUSTERED INDEX [IX_EMAIL_SUBSCRIBERS_ADDR] ON [dbo].[EMAIL_SUBSCRIBERS]([EmailAddress]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
ALTER TABLE [dbo].[EMAIL_SUBSCRIBERS] WITH NOCHECK ADD
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_OptedOut] DEFAULT (0) FOR [OptedOut],
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_WhenLastMailed] DEFAULT (null) FOR [WhenLastMailed],
    CONSTRAINT [PK_EMAIL_SUBSCRIBERS] PRIMARY KEY NONCLUSTERED ([EmailID]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_WhenLastMailed] ON [dbo].[EMAIL_SUBSCRIBERS]([WhenLastMailed] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptOutDate] ON [dbo].[EMAIL_SUBSCRIBERS]([OptOutDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptInDate] ON [dbo].[EMAIL_SUBSCRIBERS]([OptinDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_ZipCode] ON [dbo].[EMAIL_SUBSCRIBERS]([ZipCode]) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_STATEPROVINCEUS] ON [dbo].[EMAIL_SUBSCRIBERS]([StateProvinceUS]) ON [PRIMARY]
GO
View 9 Replies
May 7, 2007
I have a SQL 2005 & SQL 2000 server. I am attempting to execute a simple update statement, something that looks like:
update AD
set AD.SomeDate = getdate()
from [ServerX].DB.dbo.Table AS AD
where ColumnX = 'X'
ServerX is the SQL 2000 box.
ServerY is the SQL 2005 box. Server Y is where this statement is invoked from. (Not shown in statement).
I have a linked server set up.
When executed from the 2000 box, it runs in < 1 second.
When both environments are 2005 to 2005, it takes less than 1 second.
View 1 Replies
Apr 20, 2007
I have a database that is about 300 gig. I am setting up replication to a reporting server. We are doing a series of mock loads, and I will need to drop the tables and reload the main database a few times before we go live. To do that, I plan to stop replication, drop all the articles, drop the subscription, then load the new data, then reinitialize and restart replication.
The first time I tried to do this, when I dropped the articles, it seemed to be trying to "clean up" the distribution database on the reporting server, and that took a couple of hours to do. The distribution database is about 40 gig.
Is this correct behavior in SQL2005 replication? Is there a way to avoid this? I have all the replication pieces scripted out and would like to just drop replication, reload, and then run my scripts to recreate replication. But this "clean up" is going to cause me a lot of headache if I don't figure out what is going on.
Am I going down the wrong road here? Is there an easier way to do this? Any comments would be great!!!!
Thanks in advance for any help.
Jim Youmans
St. Louis Missouri
View 1 Replies
Sep 6, 2007
Hello,
When I try to display History for one of my Maintenance Plans, it takes forever to bring those results back (up to 15-20 minutes). What could be the problem? What should I check?
View 2 Replies
Feb 27, 2008
I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run quite a while. Any suggestions would be appreciated.
update a
set a.column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1
There is a one to one relationship between the two tables.
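For comparison, a batched variant (a sketch; the batch size is arbitrary, and the WHERE clause, which assumes a mismatch means "not yet updated", keeps the loop from rescanning finished rows):

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) a
    SET a.column2 = b.column2
    FROM mytable AS a
    JOIN mytable1 AS b ON a.column1 = b.column1
    WHERE a.column2 <> b.column2
       OR (a.column2 IS NULL AND b.column2 IS NOT NULL);
    SET @rows = @@ROWCOUNT;  -- loop ends when nothing is left to change
END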
View 8 Replies
Sep 17, 2007
I used a stored procedure in my report. If I run the sp in Management Studio (on my PC; the database is on a SQL server) it takes only several minutes; but from Reporting Services (also on my PC), when I put it in the data tab and execute it, it takes forever, actually never finishing. I want to know why it takes so long to execute from Reporting Services while it returns data instantly from Management Studio. There is a cursor in the sp. I don't know whether this is the culprit. Anyone know why? Thanks!
Below is the sp.
--------------------------------------------------------------------
create proc [dbo].[p_national_by_week]
as
set nocount on
declare @s1 nvarchar(2000), @parmdefinition nvarchar(300), @rangestart smalldatetime, @rangeend smalldatetime
, @price_low money, @price_high money, @weekdate smalldatetime
declare c1 cursor for
--- GG change for reg dates.
select weekdate from vtRealEstate_RealtorListing_WeekDates
open c1
fetch from c1 into @weekdate
while @@fetch_status =0
begin
select @rangeend = @weekdate+7, @rangestart=@weekdate
select @s1 = N'
declare @mlsid_count int, @avg_price money, @avg_day_on_market int, @median_price money, @c1 int
select @mlsid_count=count(*), @avg_price=avg(CurrentPricefilter),
@avg_day_on_market=avg(datediff(dd, FirstListedDate, LastModifiedDate))
from vtRealEstate_RealtorListings
where ((FirstListedDate <= @rangeStart and LastModifiedDate >= @rangeStart) or
(FirstListedDate >= @rangeStart and FirstListedDate < @rangeEnd)
)
and currentpricefilter is not null
and mlsidfilter is not null
select @c1=@mlsid_count/2
set rowcount @c1
select @median_price = CurrentPricefilter from vtRealEstate_RealtorListings
where
((FirstListedDate <= @rangeStart and LastModifiedDate >= @rangeStart) or
(FirstListedDate >= @rangeStart and FirstListedDate < @rangeEnd)
)
and currentpricefilter is not null
and mlsidfilter is not null
order by currentpricefilter
insert report_detail_test (weekdate, mlsid_count, avg_price, median_price
, avg_day_on_market)
values(@weekdate, @mlsid_count, @avg_price, @median_price, @avg_day_on_market)
', @parmdefinition=N'@rangestart smalldatetime, @rangeend smalldatetime, @weekdate smalldatetime'
exec sp_executesql @s1, @parmdefinition, @rangestart=@rangestart, @rangeend=@rangeend
, @weekdate = @weekdate
fetch from c1 into @weekdate
end
close c1
deallocate c1
select weekdate
, mlsid_count
, avg_price
, median_price
, avg_day_on_market
from report_detail_test
order by WeekDate
View 2 Replies
Oct 17, 2006
UPDATE CD SET col1=SR.col1,col2=SR.col2,col3=SR.col3,col4=SR.col4,col5=SR.col5,col6=SR.col6,col7=SR.col7,
col8=SR.col8,col9=SR.col9,col10=SR.col10
FROM LNKSQL1.db1.DBO.Table1 CD
join Table2 USRI on USRI.col00 = CD.col00
join table3 SR on USRI.col00 = SR.col00
Here, I'm trying to run this from an instance and do a remote update. col00 is a primary key and there is a clustered index on this column. When I run this query, it does a 'select * from table1' on the remote server, and that table has about 60 million rows. I don't understand why it would do a select *. Also, we migrated to SQL 2005 a week or so back, but before that everything was running smoothly. I don't have the execution plan from before, but this statement was fast. Right now, I can't run this statement at all; it takes about 37 secs to do one update. But if I did the update on a local server doing remote joins here, it would work fine. When I tried to show the execution plan, it took about 10 mins to come up with an estimated plan, and 99% of the time was spent on Remote Scan. Please let me know what I can do to improve my situation. Thank you
View 4 Replies
Sep 17, 2015
I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:
Source Tables :
OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,
[Code] ....
The update requirement is as follows:
DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
  SET rowcount 10000
  update r
  set Comp=t.Comp
[Code] ....
The update took more than 48h and didn't terminate. How can I accelerate it?
View 6 Replies
Jun 12, 2015
I have a requirement to delete 1 million records from a table holding 10 million, and it's being queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
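One common pattern (a sketch; the table, column, and cutoff are hypothetical) is to delete in small batches so each transaction stays short and the table remains available:

DECLARE @cutoff date = '2015-01-01';  -- hypothetical purge criterion
WHILE 1 = 1
BEGIN
    -- each small delete is its own short transaction
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END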
View 13 Replies
Mar 12, 2015
We are facing a weird scenario in which the snapshot is getting corrupted after inserting/updating a few million records into a table.
SQL Server 2012
windows server 2008 R2
service pack 1
64-bit OS
View 1 Replies
Aug 11, 2014
I need to use a BULK INSERT statement for copying a table with 200 million rows to another table on the same server. The table has no primary key or identity column. I need a script for the BULK INSERT ...
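A sketch of one way to do it (paths, database, and table names are assumptions; bcp exports the source in native format, then BULK INSERT loads the target in batches):

-- From a command prompt (native format, trusted connection):
--   bcp MyDb.dbo.SourceTable out C:\temp\source.dat -n -T -S MyServer
BULK INSERT MyDb.dbo.TargetTable
FROM 'C:\temp\source.dat'
WITH (
    DATAFILETYPE = 'native',   -- matches the bcp -n export
    TABLOCK,                   -- allows minimal logging under simple/bulk-logged recovery
    BATCHSIZE = 1000000        -- commit in one-million-row chunks
);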
View 9 Replies
Jul 24, 2015
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2
select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before carrying out any INSERT operation using the above statement. I had the below table trigger on [SRV1INS2].BB.dbo.Agents2, but it did not perform what I intended:
USE [BB]
GO
/****** Object: Trigger   Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[code]....
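One way to get that effect (a sketch, assuming the goal is "empty the table, then load the incoming batch"; an INSTEAD OF INSERT trigger created on the server that hosts Agents2):

CREATE TRIGGER trg_Agents2_ReplaceOnInsert
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.Agents2;     -- clear the existing rows first
    INSERT INTO dbo.Agents2
    SELECT * FROM inserted;      -- then apply the incoming rows
END

An AFTER trigger fires once the new rows are already in the table, so deleting there would wipe out the batch being inserted as well.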
View 3 Replies
Oct 30, 2007
Table 1
First_Name   Middle_Name   Surname
John         Ian           Lennon
Mike         Buffalo       Tyson
Tom          Finney        Jones
Table 2
ID   F          M         S          DOB
1    Athony     Harold    Wilson     24/4/67
2    Margaret   Betty     Thathcer   1/1/1808
3    John       Ian       Lennon     2/2/1979
4    Mike       Buffalo   Tyson      3/4/04
5    Tom        Finney    Jones      1/1/2000
I want to be able to create a trigger that updates Table 2 when a row is inserted into Table 1. However, I'm not sure how to increment the ID in Table 2 or how to update only the row that has been inserted.
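One possible shape (a sketch, assuming the tables are named dbo.Table1 and dbo.Table2 and that Table 2's ID is an IDENTITY column so it increments itself; DOB is left NULL since Table 1 doesn't carry it):

CREATE TRIGGER trg_Table1_Insert
ON dbo.Table1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" holds only the newly added rows, so no extra filtering is needed
    INSERT INTO dbo.Table2 (F, M, S)
    SELECT First_Name, Middle_Name, Surname
    FROM inserted;
END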
View 17 Replies
Feb 5, 2008
A
ID   Name
1    Joe
2    Fred
3    Ian
4    Bill

B
ID
1
4
I want to be able to create a trigger so that when a row is inserted into table A by a specific user then the ID will appear in table B. Is it possible to find out the login id of the user inserting a row?
I believe the trigger should look something like this:
create trigger test_trigger
on a
for insert
as
insert into b(ID)
select i.id
from inserted i
where
-- SUSER_SNAME() returns the login performing the insert;
-- the login name below is a placeholder for the specific user
SUSER_SNAME() = 'DOMAIN\SpecificUser'
View 9 Replies
Jan 17, 2012
Multiple rows to insert:
---------------------
insert into Customer(CustomerId,Name,Value)
select CustomerId,Name,Value
from CustomerTemp
Trigger in Customer table that invoke a function:
alter TRIGGER [dbo].[Calculation] ON [dbo].[Customer]
AFTER INSERT
AS
update Customer
set Percentage = dbo.GetPercentage((select Value from inserted))
where CustomerId = (select CustomerId from inserted)
I'm getting this error for the multi-row insert: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression." Is there a way to let me insert multiple rows using a trigger that invokes a function?
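One set-based rewrite (a sketch using the names from the post) joins to "inserted" so the function is applied once per new row instead of through single-row subqueries:

ALTER TRIGGER [dbo].[Calculation] ON [dbo].[Customer]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- one UPDATE handles any number of inserted rows
    UPDATE c
    SET Percentage = dbo.GetPercentage(i.Value)
    FROM dbo.Customer AS c
    JOIN inserted AS i ON c.CustomerId = i.CustomerId;
END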
View 1 Replies
Dec 24, 2007
Hey Guys
I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently,
but I can't afford to stop the db at all.
Whenever I do: alter table mytable add Updated_date datetime
it just takes too long and I have to stop executing the query after a couple of mins.
I am running SQL Express 2005 SP2. The db size is over 3 GB but still under the 4 GB limit.
Can you please advise on how to add this column? It's urgent!!
thanks in advance
View 5 Replies
Aug 6, 2014
I created a trigger that is meant to create new rows in other tables.
ALTER TRIGGER [dbo].[TI_Creation_Contact_dansSLX]
ON [dbo].[_IMPORT_FILES_CONTACTS]
AFTER INSERT
AS
[code]...
But if I run an INSERT with 50 rows, my CONTACT and ADDRESS tables end up with just one line. I tried to create a cursor, and then I had 50 lines with a different AddressID and ContactID on each, but the same Account and AccountId in my CONTACT table:
C001 - AD001 - AC001 - ACCOUNT 001
C002 - AD002 - AC001 - ACCOUNT 001
C003 - AD003 - AC001 - ACCOUNT 001
C004 - AD004 - AC001 - ACCOUNT 001
C005 - AD005 - AC001 - ACCOUNT 001
I am looking for a way to get 50 distinct lines in my CONTACT table:
C001 - AD001 - AC001 - ACCOUNT 001
C002 - AD002 - AC002 - ACCOUNT 002
C003 - AD003 - AC003 - ACCOUNT 003
C004 - AD004 - AC004 - ACCOUNT 004
C005 - AD005 - AC005 - ACCOUNT 005
View 9 Replies
Jan 19, 2015
I have a simple query that joins a largeish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when joined to the fact table in the simple query below, it results in a query execution plan that runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN, the query returns the matched results almost instantly.
Select
Dimension.Age_Band.[10_Year_Age_Band],
Count(*)
From
Fact.APC_Episodes
Inner Join Dimension.Age_Band ON
Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
Group By
Dimension.Age_Band.[10_Year_Age_Band]
I know joining to a view using a column generated by a scalar function is not a good recipe for performance. I also know that I could fix this by populating a physical table with the view first, as I have already tested this, though I am hoping not to have to go down that route.
Why does a LEFT OUTER JOIN work and not an INNER JOIN? Is there any way I can get the query optimizer to generate an execution plan that works?
View 9 Replies
Dec 12, 2014
I run the following statement and it will not update beyond 7 million-plus rows, and I have about 38 million to complete. I keep checking updated row counts, and after half a day it's still the same, so I know something is wrong, because it was rolling through with no problem when I initiated it. I need to complete ASAP, so it's adding to my frustration. The 'Acct_Num_CH' field is an encrypted field (FYI).
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
View 5 Replies
May 21, 2013
What is the best way to select summary data from a big table of detail records and insert it into another table?
The source table contains approx 100 million detail records for a couple of months. I have a SELECT statement that selects a summary of the latest month's data, which averages about 15 million records for the month, and I want to insert that into another table.
In the past I just used a standard INSERT INTO statement, but that is probably not the best way of doing it.
If a view is created with just the last month's summary data and I select from the view, will the performance be better or will it just add more overhead?
Would an SSIS package work better for inserting the summary data?
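For reference, a sketch of the plain set-based route (all names are hypothetical; TABLOCK can enable minimal logging under the simple or bulk-logged recovery model):

INSERT INTO dbo.MonthlySummary WITH (TABLOCK)
    (AccountId, PeriodMonth, TotalAmount, TxnCount)
SELECT AccountId,
       DATEADD(MONTH, DATEDIFF(MONTH, 0, TxnDate), 0),              -- truncate to month
       SUM(Amount),
       COUNT(*)
FROM dbo.DetailRecords
WHERE TxnDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)   -- latest month only
GROUP BY AccountId, DATEADD(MONTH, DATEDIFF(MONTH, 0, TxnDate), 0);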
View 1 Replies
Apr 9, 2008
I'm new to using a DB and have a few questions about what I'm trying to do. I have some historical options data and want to place it into a SQL Express database. (I understand I might need to use a non-Express version once the db gets too big.) A month's worth of data is over 5.5 million rows, so six years' worth is ~400 million rows. Is it possible to put this into a SQL db and be able to search it very fast? I have a month's worth in a db now and it is pretty slow. Should I use a new table for each month, and then have 6 years * 12 months = 72 tables, to increase the search speed? I search by date and stock_symbol, and the data looks like this:
Date, Stock_Symbol, Option_Symbol, Strike, BidPrice, AskPrice, Volume, OpenInterest, (and a few others)
The select statement is simple: SELECT * FROM Options WHERE Date = @Date and StockSymbol = @Symbol
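Before splitting into 72 tables, it may be worth trying a composite index that matches that WHERE clause (a sketch; assumes the table has no clustered index yet, and column names follow the data description above, so adjust to the actual schema):

-- Lets the engine seek straight to one symbol/day instead of scanning the month.
CREATE CLUSTERED INDEX IX_Options_Symbol_Date
ON dbo.Options (Stock_Symbol, [Date]);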
Thanks
View 4 Replies
May 12, 2015
I am using SQL Server 2008 R2, not Denali, so I cannot use the OFFSET FETCH clause.
In my stored procedure, I am doing a SELECT INTO #tblTemp FROM... Working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file... Working fine.
For recoverability's sake, I am trying to throttle back the commit chunks to 1000 rows per commit until there are no more rows. I am trying to avoid large rollbacks.
Q: Am I supposed to handle the transactions (begin/commit/rollback/end trans) when the records are being inserted into the temp table? Or when they are being selected from the temp table?
Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB destination (like Rows per batch, Maximum insert commit size).
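One way to handle it in T-SQL after the SELECT INTO (a sketch with hypothetical names; each iteration moves and commits one chunk, so a failure only loses the current batch):

DECLARE @batch int = 1000;
WHILE EXISTS (SELECT 1 FROM #tblTemp)
BEGIN
    BEGIN TRANSACTION;
    -- move one chunk out of the temp table and into the final table
    DELETE TOP (@batch) FROM #tblTemp
    OUTPUT deleted.* INTO dbo.FinalTable;  -- assumes matching column lists
    COMMIT TRANSACTION;
END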
View 6 Replies
Mar 21, 2000
In our database, we have a very large table that gets updated every morning. The start of the day copies 4 million rows for the previous date to today's date in the same table, and then does some other processing. It takes 1.5 to 2 hrs to do this. There is a DTS package created to copy these rows into a temp table and then into this fact table.
This table has more than 200 million rows.
Any ideas on how to accomplish this without doing the copy twice and without running into locking problems?
Thanks for any suggestions.
View 5 Replies
Jul 16, 2013
I am deleting 8 million rows from my database. I am wondering how to control the T-log; also, I heard something about row locks and table locks.
View 4 Replies
Feb 27, 2015
I have the following table:
Table name: emp_master
Columns: empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
empid is the primary key.
This table contains 20 million records, and I want to run the following query against it to get all employee data where the employee is more than 10 years old:
select empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
from emp_master
where year(edoj) + 10 > year(getdate())
This will return approx 10 million rows and takes 18 mins. What approaches should I take to tune this query and reduce the execution time?
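A SARGable variant (a sketch following the query as written; it is not exactly equivalent at year boundaries, since YEAR() compares whole calendar years, but moving the function off the column lets an index on edoj support a range seek if one exists):

SELECT empid, efname, emname, elamane, efathername, emothername,
       deptno, edob, edoj, createdby, updateby,
       lastupdatedatetime, lastactionperformed
FROM emp_master
WHERE edoj > DATEADD(YEAR, -10, GETDATE());  -- rolling 10-year window, no function on the column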
View 6 Replies
View Related