SQL 2012 :: Deleting / Archiving From 100 GB Table
May 20, 2014
I would like to archive/delete data from a 100 GB table. I need to delete based on a date column. The date column is part of the clustered index but does not have its own nonclustered index.
My estimated execution plan shows an index scan.
Should I create a nonclustered index on the date column and then archive/delete after confirming an index seek is used in the estimated execution plan, or is there another way to do this?
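A nonclustered index on the date column is a reasonable first step, though when a large share of the table qualifies the optimizer may still choose a scan; deleting in batches keeps each transaction small either way. A sketch with invented table and column names:

CREATE NONCLUSTERED INDEX IX_BigTable_DateCol ON dbo.BigTable (DateCol)

DECLARE @Cutoff datetime = '20130101'  -- example archive cutoff
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable WHERE DateCol < @Cutoff
    IF @@ROWCOUNT = 0 BREAK
END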
I wrote a script to archive and delete records from a table back in 2005 and 2009.
I can't seem to get the syntax right. Any sample script to simply archive and delete records?
This is what I have so far.
DECLARE @ArchiveDate Datetime
SET @ArchiveDate = (SELECT TOP 1 DATEPART(yyyy, Call_Date) FROM tblCall ORDER BY Call_Date)
--SELECT @ArchiveDate AS ArchiveDate
DECLARE @Active bit
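For the archive-and-delete itself, one pattern worth considering is DELETE ... OUTPUT, which copies the removed rows into the archive table in the same statement. A minimal sketch, assuming an archive table dbo.tblCall_Archive with the same columns as dbo.tblCall (the archive table name is my invention):

DELETE TOP (10000) FROM dbo.tblCall
OUTPUT deleted.* INTO dbo.tblCall_Archive
WHERE Call_Date < @ArchiveDate

Note also that in the fragment above, DATEPART(yyyy, Call_Date) returns an int (e.g. 2005), which implicitly converts to a datetime roughly five and a half years after 1900; MIN(Call_Date) or a DATEADD expression is probably what was intended.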
I am getting a number of deadlocks when inserting and deleting items from the same table.
The delete statement has a U lock and is awaiting an IX lock on an index that covers the column in the WHERE clause.
The insert statement has an IX lock and is awaiting a U lock on the same index.
The delete statement is deleting about 5,000 rows, whereas the insert statement is inserting a single row.
Both these statements are found in stored procedures being called from LINQ to SQL.
I am wondering if there is a way to prevent the delete statement from taking the U lock. My thinking is that if the delete didn't take out the U lock, it would not deadlock with the insert. Are there any hints I could use to avoid that particular lock?
I have seen various examples of multiple updates causing a deadlock, which can be fixed by adding multiple indexes. However, as I am inserting and deleting rows I imagine that all the indexes will need to be updated by both operations.
I have inherited the architecture and don't have the time to redesign everything at present. My backup plan is to deprioritize the delete and build in a retry mechanism.
However, it would be really good if I could find a more elegant way to handle deleting and inserting rows at the same time.
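For the retry side of that backup plan, a sketch (SQL Server 2012+ syntax for THROW; the procedure name is invented): retry only on deadlock error 1205 with a short back-off, and keep the delete batches much smaller than 5,000 so fewer locks are held at once:

DECLARE @Retries int = 0
WHILE @Retries < 3
BEGIN
    BEGIN TRY
        EXEC dbo.usp_DeleteRowsBatch   -- hypothetical proc doing the batched delete
        BREAK
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 1205 THROW   -- only retry deadlocks
        SET @Retries += 1
        WAITFOR DELAY '00:00:01'
    END CATCH
END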
I have deleted nearly 30 million rows from a table. However, when I used the sp_spaceused command to calculate the space occupied by the table, I didn't see any difference in the data size of the table. In fact, the data size has increased by a few MB after the deletion.
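Two things usually explain this: sp_spaceused reports from stale allocation metadata unless told to refresh it, and deletes from a heap (a table with no clustered index) can leave empty pages allocated. Refreshing the counts first (the table name is a placeholder):

EXEC sp_spaceused @objname = N'dbo.YourTable', @updateusage = N'TRUE'
-- If the table is a heap, deleted pages may never be deallocated;
-- on SQL Server 2008+ the following reclaims them:
-- ALTER TABLE dbo.YourTable REBUILD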
Hi, I have to delete the master table data without deleting the child table records. Is there any solution for this? The parent table has a relationship with the child table. Regards, vinod.t.v
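If the foreign key is what blocks the delete, one option (a sketch with invented names, assuming the child's foreign key column is nullable) is to recreate the constraint with ON DELETE SET NULL so child rows survive with the reference cleared:

ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent
ALTER TABLE dbo.Child ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId)
    ON DELETE SET NULL

DELETE FROM dbo.Parent WHERE ParentId = 42  -- child rows remain, with ParentId set to NULL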
I have three tables called A, B and C, where A and C have the same schema.
A contains the following columns and values:
TaskId PoId Podate     Approved
1      2    2008-07-07 No
3      4    2007-05-05 No
5      5    2005-08-06 Yes
2      6    2006-07-07 Yes
Table B contains the following columns and values:
TaskId TableName Fromdate Approved_Status
1      A         7/7/2007 No
3      B         2/4/2006 Yes
Now I need to create a stored procedure that should accept the values (Yes/No) from the Approved_Status column in Table B and look for the same values in the Approved column in Table A. If both values match, then the corresponding rows in Table A should be archived into Table C, which has the same schema as Table A. That is, the matching rows should be deleted from Table A and inserted into Table C. In both tables A and B, TaskId is the common column. Please provide me with the full stored procedure code.
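Not the full production code requested, but a sketch of the core move: DELETE ... OUTPUT can archive the matching rows into C and remove them from A in one statement.

CREATE PROCEDURE dbo.ArchiveApprovedTasks
    @Approved_Status varchar(3)  -- 'Yes' or 'No', as stored in B
AS
BEGIN
    SET NOCOUNT ON
    DELETE a
    OUTPUT deleted.TaskId, deleted.PoId, deleted.Podate, deleted.Approved
        INTO C (TaskId, PoId, Podate, Approved)
    FROM A AS a
    INNER JOIN B AS b
        ON b.TaskId = a.TaskId
    WHERE b.Approved_Status = @Approved_Status
      AND a.Approved = b.Approved_Status
END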
I have been placed in the position of administering our SQL server 6.5 (Microsoft). Being new to SQL and having some knowledge of databases (used to use Foxpro 2.6 for...DOS!) I am faced with an ever increasing table of incoming call information from our Ascend MAX RAS equipment. This table increases by 900,000 records a month. The previous administrator (no longer available) was using a Visual Foxpro 5 application to archive and remove the data older than 60 days. Unfortunately he left and took with him Visual Fox and all of his project files.
My question is this: Is there an easy way to archive then remove the data older than 60 days from the table? I would like to archive it to a tape drive. We need to maintain this archive for the purposes of searching back through customer calls for IP addresses on certain dates and times. We are an ISP, and occasionally need to give this information to law enforcement agencies. So we cannot just delete it.
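For the archiving itself, the usual pattern is: copy the old rows into an archive table, delete them from the live table, then bcp the archive table out to a file that can go to tape. A rough sketch with invented table and column names (SQL Server 6.5 supports SELECT INTO, variables, and bcp, though details differ from modern versions):

DECLARE @Cutoff datetime
SELECT @Cutoff = DATEADD(day, -60, GETDATE())

SELECT * INTO CallLog_Archive FROM CallLog WHERE CallDate < @Cutoff
DELETE FROM CallLog WHERE CallDate < @Cutoff

-- Then export the archive table from the command line for the tape backup:
-- bcp "ispdb..CallLog_Archive" out calllog.dat -n -Sserver -Usa -P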
I am trying to write a SQL Server query that archives x-days-old data from the [Working].[TestToDelete] table to the [Archive].[TestToDelete] table. I want to check that, if the records do not already exist in the archive table, they are inserted there and then deleted from the working table; this will be converted to a proc later. Here is the table definition; it is the same for both the working and archive tables. There are indexes on: IpAddress, Logdate, Server, User and Sysdate (clustered).
CREATE TABLE [Archive].[TestToDelete](
[Server] [varchar](16) NOT NULL,
[Logdate] [datetime] NOT NULL,
[IpAddress] [varchar](16) NOT NULL,
[Login] [varchar](64) NULL,
[User] [varchar](64) NULL,
[Sysdate] [datetime] NULL,
[Request] [text] NULL,
[Status] [varchar](64) NULL,
[Length] [varchar](128) NULL,
[Referer] [varchar](1024) NULL,
[Agent] [varchar](1024) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

/* the syntax, which will be converted to a proc, is: */
SET NOCOUNT ON;
DECLARE @Time DATETIME, @R INT, @ErrMessage NVARCHAR(1024)
SET @R = 1
/* set @Time to one day old data */
SET @Time =
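The fragment ends at SET @Time =, so what follows is a guess at how the body might continue, matching the stated intent: insert rows older than @Time that are not already in the archive, then delete them from the working table (the NOT EXISTS columns should be whatever uniquely identifies a row):

SET @Time = DATEADD(day, -1, GETDATE())  /* one day old data */

BEGIN TRAN
INSERT INTO [Archive].[TestToDelete]
SELECT w.*
FROM [Working].[TestToDelete] AS w
WHERE w.[Sysdate] < @Time
  AND NOT EXISTS (SELECT 1
                  FROM [Archive].[TestToDelete] AS a
                  WHERE a.[Sysdate] = w.[Sysdate]
                    AND a.[Server] = w.[Server]
                    AND a.[Logdate] = w.[Logdate]
                    AND a.[IpAddress] = w.[IpAddress])

DELETE FROM [Working].[TestToDelete]
WHERE [Sysdate] < @Time
COMMIT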
One of our applications creates a run-time SQL job to run a batch process, and the application itself deletes this run-time job. But when doing the deletion it doesn't check whether the job is running; it just deletes it directly. Our challenge is how to capture the deleted job's details; after the incident happens, we can see only the error details in the log.
We could run Profiler to trace this, but we don't know when it will trigger, and we cannot keep an active Profiler trace running for a long time, as it would kill the server and we don't know when the issue will happen.
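Jobs are rows in msdb.dbo.sysjobs, so one lightweight option, rather than a long-running trace, is a DELETE trigger on that table that records what was removed. This is my suggestion, not something from the original post, and note that adding triggers to msdb system tables is not officially supported:

USE msdb
GO
CREATE TABLE dbo.DeletedJobAudit (
    job_id uniqueidentifier NOT NULL,
    job_name sysname NOT NULL,
    deleted_at datetime NOT NULL DEFAULT GETDATE(),
    deleted_by sysname NOT NULL DEFAULT SUSER_SNAME()
)
GO
CREATE TRIGGER trg_sysjobs_delete_audit ON dbo.sysjobs
FOR DELETE
AS
    INSERT INTO dbo.DeletedJobAudit (job_id, job_name)
    SELECT job_id, name FROM deleted
GO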
I have three FILESTREAM files (FS1 on F drive, FS2 on H drive, FS3 on E drive) belonging to the same FILESTREAM filegroup of one particular database (DB), which is in SIMPLE recovery mode on SQL Server 2012.
FS1 contains a huge number of files, due to which the F drive is completely full.
So I am trying to move some of the extra files from one FILESTREAM container (FS1 on F drive) to the other FILESTREAM containers (FS2 on H drive and FS3 on E drive) using the command:
DBCC SHRINKFILE('FS1', EMPTYFILE)
Then I take a full and a differential backup of the database, issue a CHECKPOINT, and try to delete the already-duplicated files from the FILESTREAM container FS1 to free some space on the F drive using the command:
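The command itself is cut off in the post; on SQL Server 2012 the typical sequence for reclaiming FILESTREAM space after an EMPTYFILE shrink ends with the garbage-collection procedure (the database name DB is taken from the post):

DBCC SHRINKFILE('FS1', EMPTYFILE)
CHECKPOINT
CHECKPOINT  -- in SIMPLE recovery, more than one checkpoint is often needed before GC can reclaim files
EXEC sp_filestream_force_garbage_collection @dbname = N'DB'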
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from: ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on person_ref, log_seq and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete:
SELECT * FROM (
    SELECT cml.person_ref,
           cml.open_date + open_time AS 'datetime',
           cml.open_user,
           cml.log_type,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time, cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo,
           n.note
    FROM ih_cml_log_entry cml
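(The script above is cut off in the post.) Once ROW_NUMBER has numbered the duplicates, the delete can go straight through a CTE: row 1 of each group stays, the rest go. A sketch for the entry table; the notes table would need the same treatment via the person_ref/log_seq/date-time join:

;WITH dupes AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY person_ref, open_date + open_time, open_user, log_type
                              ORDER BY (SELECT 0)) AS RowNo
    FROM ih_cml_log_entry
)
DELETE FROM dupes
WHERE RowNo > 1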
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000? 100,000? 1,000,000? Or unlimited? Are there significant differences between 2008, 2012, and 2014?
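In practice there is no universal number; the sweet spot depends on row size, index count, log configuration, and concurrent load, so it pays to measure on the actual system. A rough timing harness (table, column, and cutoff are placeholders; CONCAT needs 2012+, use CAST and + on 2008):

DECLARE @BatchSize int = 50000
DECLARE @Start datetime2, @Rows int = 1
WHILE @Rows > 0
BEGIN
    SET @Start = SYSDATETIME()
    DELETE TOP (@BatchSize) FROM dbo.WorkTable WHERE CreatedDate < '20140101'
    SET @Rows = @@ROWCOUNT
    PRINT CONCAT(@Rows, ' rows deleted in ', DATEDIFF(ms, @Start, SYSDATETIME()), ' ms')
END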
I have a table that I think is bad; how do I delete the table? Also, I want to copy this same table from another database. Can I use Enterprise Manager to copy that table in place of the deleted one?
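Dropping the table is one statement, and if the other database is on the same server, SELECT INTO can recreate it (Enterprise Manager's DTS Import/Export Wizard does the same with more options). Placeholder names:

DROP TABLE dbo.BadTable
SELECT * INTO dbo.BadTable FROM OtherDatabase.dbo.BadTable
-- SELECT INTO copies columns and data only, not indexes, constraints, or triggers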
I have a table which consists of 6,185 rows, and from this table I want to delete 427 rows. I have placed the 427 rows in a separate table. So how do I delete these 427 rows from the original table (6,185 rows)?
I should also mention that the table does not have a primary key, so I was thinking of something like this, but it didn't work:
where exists (select h.CardNumber, h.ref_no, h.tran_date, h.tran_val
              from test427 h
              where h.ref_no = o.ref_no
                and h.CardNumber = o.CardNumber
                and h.tran_date = o.tran_date
                and h.tran_val = o.tran_val)
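For the EXISTS form to work, the outer DELETE has to give the original table the alias o that the subquery references. Assuming the original table is called tblTrans (a name I am inventing), the complete statement is:

DELETE o
FROM tblTrans AS o
WHERE EXISTS (SELECT 1
              FROM test427 h
              WHERE h.ref_no = o.ref_no
                AND h.CardNumber = o.CardNumber
                AND h.tran_date = o.tran_date
                AND h.tran_val = o.tran_val)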
I am trying to delete from a SQL table using two arguments. I currently use the following on the aspx page:

<ItemTemplate>
<asp:LinkButton ID="cmdDeleteJob" runat="server" CommandName="CancelJob" Text="Delete"
    CssClass="CommandButton" CommandArgument='<%# Bind("Type_of_service") %>'
    OnClientClick="return confirm('Are you sure you want to delete this item?');">
</asp:LinkButton>
</ItemTemplate>

and then I use the following on the aspx.vb page:

If e.CommandName = "CancelJob" Then
    Dim typeService As Integer = CType(e.CommandArgument, Integer)
    Dim typeDesc As String = ""
    Dim wsDelete As New page1
    wsDelete.deleteAgencySvcs(typeService)
    gvTech.DataBind()

I need to be able to pick up two CommandArguments.
In SQL Server 2000, I have a table for which I have to write a script to delete several columns. I gather I have to use the ALTER or DROP keywords, or a combination of the two, but I'm not sure because I have not done this before. I am googling this but finding all kinds of other information that I don't need. I don't have rights on this table, so I cannot do this manually; I have to create the script and send it on to someone else. If anyone can provide a good script example that I can use to delete unwanted columns, it would be a great thing. Thanks.
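A column drop is a plain ALTER TABLE; the only wrinkle is that any default or check constraints on those columns must be dropped first. A template with placeholder names:

-- Drop dependent constraints first, e.g.:
-- ALTER TABLE dbo.YourTable DROP CONSTRAINT DF_YourTable_Col1
ALTER TABLE dbo.YourTable DROP COLUMN Col1, Col2, Col3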
Hello everybody, I have a problem deleting rows from the same table using the trigger below.
CREATE TRIGGER [tDeleteDomain] ON dbo.iMS_Domains
FOR DELETE
AS
IF @@RowCount > 1
BEGIN
    ROLLBACK TRAN
    RAISERROR ('You can only delete one domain at a time.', 16, 10)
END
DECLARE @DomainID int
SELECT @DomainID = ID FROM DELETED
DELETE FROM iMS_Domains WHERE AliasFor = @DomainID
If I use this trigger, I get the error below:
Server: Msg 217, Level 16, State 1, Procedure tDeleteid, Line 16 Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)
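The trigger's own DELETE against iMS_Domains fires the trigger again, recursing until the 32-level limit. One fix (a sketch) is a TRIGGER_NESTLEVEL() guard, plus a set-based delete so multi-row operations behave:

ALTER TRIGGER [tDeleteDomain] ON dbo.iMS_Domains
FOR DELETE
AS
BEGIN
    -- Exit when fired by this trigger's own DELETE below
    IF TRIGGER_NESTLEVEL(OBJECT_ID('dbo.tDeleteDomain')) > 1
        RETURN
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        ROLLBACK TRAN
        RAISERROR ('You can only delete one domain at a time.', 16, 10)
        RETURN
    END
    DELETE FROM iMS_Domains
    WHERE AliasFor IN (SELECT ID FROM deleted)
END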
Int1 must have the value 1, Int2 must have the value 1, and Int3 must have the value 0. And now we get to the difficult part: the first character in the field [Cost No] has to be different from the letter 'B'. [Cost No] has the datatype varchar(20).
I expected the code to look something like this:
DELETE Table1
FROM Table1
WHERE ([Int1] = 1) AND ([Int2] = 1) AND ([Int3] = 0) AND ((LEFT([Cost No]), 1) <> 'B')
But I get this error message: The left function requires 2 arguments
What am I doing wrong and what should the right code look like?
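The closing parenthesis is in the wrong place: the column and the length both belong inside LEFT. Corrected:

DELETE Table1
FROM Table1
WHERE [Int1] = 1 AND [Int2] = 1 AND [Int3] = 0
  AND LEFT([Cost No], 1) <> 'B'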
/****** Object: StoredProcedure [dbo].[dbo.ServiceLog] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER proc [dbo].[ServiceLogPurge]
-- Purge records dbo.ServiceLog older than 3 months:
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.
[Code] ...
*** Getting this error below when executing the code ***
Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45 Incorrect syntax near 'Failed:'.
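The failing code isn't shown (Msg 102 near 'Failed:' usually points to a string literal, such as an error message, left outside quotes or outside RAISERROR/PRINT). For reference, a minimal batched purge matching the stated comments might look like this, with the date column name invented:

ALTER PROC [dbo].[ServiceLogPurge]
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Cutoff datetime = DATEADD(month, -3, GETDATE())
    WHILE 1 = 1
    BEGIN
        DELETE TOP (5000) FROM dbo.ServiceLog
        WHERE LogDate < @Cutoff   -- LogDate is a placeholder column name
        IF @@ROWCOUNT = 0 BREAK
    END
END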
Hi, I need a suggestion here in a very familiar DB situation. I have a main table, and the primary key of that table is used in many other tables as a foreign key. If I delete a record in the main table, how do I make sure that all the corresponding records in the associated tables, where that foreign key is used, get deleted too? What are my options? Thanks
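The two usual options are ON DELETE CASCADE on each foreign key, or explicit child-table deletes inside one transaction. The declarative version, with invented names:

ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Main
ALTER TABLE dbo.Child ADD CONSTRAINT FK_Child_Main
    FOREIGN KEY (MainId) REFERENCES dbo.Main (MainId)
    ON DELETE CASCADE

DELETE FROM dbo.Main WHERE MainId = 42  -- child rows are now removed automatically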
I have 6 read-only tables, and every night all the data gets dumped and a new updated copy of the data is copied over. The tables range from 500,000 rows to almost 4 million. I have indexes set up on the fields I use to query against. My questions are:
1. Since I dump all the data every night and replace it, do I need to rebuild the indexes every night, or is that handled when the data is re-entered?
2. I want to use a fill factor on the tables since they are read-only, but will dumping and reloading the data every night have adverse effects with a fill factor? (See the sketch after this list.)
3. Should I be shrinking the database or defragging it every night because of my data dumps and reloads?
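On point 2: read-only tables never take page splits, so a fill factor below 100 only wastes space; rebuilding after each load with FILLFACTOR = 100 gives the densest pages. A sketch (ALTER INDEX is 2005+; on SQL 2000, DBCC DBREINDEX('dbo.BigTable', '', 100) does the same):

-- After the nightly reload (placeholder names):
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (FILLFACTOR = 100)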
How can I quickly delete thousands of rows in a table (SQL 2000) according to a query, without blowing up the log file? For instance, executing the query:
Delete from transactions WHERE transactiondatestamp < DATEADD (m,-4,GETDATE())
increases my log file to almost 6 GB before the job is done, after which the normal size is re-obtained. In addition, it took a long time to get the job done. With the TRUNCATE TABLE command I unfortunately cannot use a WHERE clause, though it would be faster.
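On SQL 2000 the standard fix is SET ROWCOUNT batching, letting the log clear between batches (via CHECKPOINT in simple recovery, or log backups otherwise):

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM transactions
    WHERE transactiondatestamp < DATEADD(m, -4, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT  -- in FULL recovery, run log backups instead so the log can truncate
END
SET ROWCOUNT 0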
I have a SQL table [Keys] that has various rows such as:
[ID] [Name] [Path] [Customer]
1    Key1   Key1   InHouse
2    Key2   Key2   External
3    Key1   Key1   InHouse
4    Key1   Key1   InHouse
5    Key1   Key1   InHouse
Obviously IDs 1, 3, 4 and 5 are all exactly the same, and I would like to be left with only:
[ID] [Name] [Path] [Customer]
1    Key1   Key1   InHouse
2    Key2   Key2   External
I cannot create a new table/database or change the unique identifier (which is currently ID). I simply need a SQL script I can run to clean out the duplicates (I know how they got there and the issue has been fixed, but the database is still currently invalid due to all these duplicate entries).
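A single self-joined DELETE keeps the lowest ID of each identical (Name, Path, Customer) group and removes the rest, which produces exactly the result shown above:

DELETE k
FROM [Keys] k
WHERE k.ID > (SELECT MIN(k2.ID)
              FROM [Keys] k2
              WHERE k2.[Name] = k.[Name]
                AND k2.[Path] = k.[Path]
                AND k2.Customer = k.Customer)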
I have to delete a ton of data from a SQL table. I have a unique identifier called the version. I would like to say: if not in these versions, then delete. I tried using the statement below, but learned the hard way that it created an error. This is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would do this or how I would set up the statement.
Delete Products where versions were not in ((
'48459CED-871F-4971-B888-5083990332BC',
'D550C8D3-58C7-4C74-841D-1C1675F19AE3',
'C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16',
'F3B50B81-191A-4D71-A406-011127AEFBE1',
'EFBD48E7-E30F-4047-909E-F14DCAEA4181',
'BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023',
'EFA64956-C2CF-41FC-8E21-F060597DAFCB',
'77A8DE56-6F7F-4490-8BED-AA6809B947EF',
'0F4C1E5F-B689-4DCB-
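(The statement above is cut off in the post.) To stay clear of error 9002, stage the versions to keep in a table and delete in batches, so each chunk commits and the log can clear between batches (via log backups, or automatically in SIMPLE recovery). A sketch, assuming the column is named version:

CREATE TABLE #KeepVersions (version uniqueidentifier PRIMARY KEY)
INSERT INTO #KeepVersions VALUES ('48459CED-871F-4971-B888-5083990332BC')
-- ... insert the remaining versions to keep ...

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) p
    FROM Products p
    WHERE NOT EXISTS (SELECT 1 FROM #KeepVersions k WHERE k.version = p.version)
    IF @@ROWCOUNT = 0 BREAK
END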
I have numerous entries with the same id name belonging to the same median number. However, I want to retain only the entries having the longest first and end position and discard the remaining entries.
E.g. for id name = "PSK_30s1207681L002" AND median = 5 we have four entries.
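The post doesn't name the position columns, so assuming first_pos and end_pos on a table called dbo.Entries (all hypothetical), ROW_NUMBER can rank each (id_name, median) group by span length, and the delete keeps only the widest entry:

;WITH ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY id_name, median
                              ORDER BY end_pos - first_pos DESC) AS rn
    FROM dbo.Entries
)
DELETE FROM ranked
WHERE rn > 1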
I know very simple SQL queries, but I need help with this one. I have multiple statements that I need to run against a SQL database. Basically, I need to run this (the bracketed text and the XXXXX are placeholders):
DELETE FROM [tableName] WHERE [columnName] = 'XXXXX'
But I need to run it around 90 times, where XXXXX is a unique value each time. I could create 90 lines similar to this one, but that would take way too much time. Any suggestions for a noob?
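One statement with an IN list covers all 90 values; if the list is long or reused, stage it in a table and delete with a join (ValuesToDelete is an invented name):

DELETE FROM [tableName]
WHERE [columnName] IN ('value01', 'value02', 'value03' /* ... the rest of the 90 values ... */)

-- or, staged in a table:
DELETE t
FROM [tableName] t
INNER JOIN ValuesToDelete v ON v.val = t.[columnName]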
Thanks,
- MT