Hello, I have a huge database (growing about 2 GB / month) and after a while it becomes non-operational (time-outs, etc.). So I have written a SQL statement (a DELETE) that can reduce the db size by around 60% without compromising the application's data needs. The problem is that when I execute it, the db does shrink by 60%, but the transaction log grows at the same rate. Can I execute the statement in some "commit" or "transaction" mode so as to prevent SQL Server from writing to the log? Thanks for the help! Antonio
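Logging of a DELETE cannot be switched off, but under the SIMPLE recovery model the log space of each committed transaction can be reused, so splitting the delete into small committed batches keeps the log from growing. A minimal sketch, assuming SQL Server 2005+ syntax; the database, table and column names are hypothetical:

ALTER DATABASE MyBigDb SET RECOVERY SIMPLE;    -- hypothetical database name

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable       -- hypothetical table
    WHERE IsObsolete = 1;                      -- hypothetical purge condition
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;                                -- in SIMPLE recovery this lets the inactive log space be reused
END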
We are using SQL Server 7 on Win 2k, and there are some DTS packages set up which empty some large tables (DELETE FROM) and then import some data files. The imported files are about 13 GB, and during the process the log file grows to about 10 GB and then runs out of disk space.
Is there a trick to empty a table without logging it (a la LOAD REPLACE FROM NULL in DB2)? How can I go about keeping the log file size down during this operation?
I think the DB is set to autocommit; the trunc. log on chkpt. option is on, as is select into/bulk copy (although I'm reasonably sure we aren't availing of bulk copy for the import).
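If the DTS step really does empty the whole table, TRUNCATE TABLE is the usual minimally logged alternative to DELETE FROM: it logs only the page deallocations, not every row, so the log stays small. A sketch with a hypothetical table name (it requires that no foreign keys reference the table):

-- Minimally logged way to empty the table before the re-import
TRUNCATE TABLE dbo.StagingImport;    -- hypothetical table name

For the import side, keeping select into/bulkcopy on and loading through a bulk path (bcp or a Bulk Insert task) keeps that part minimally logged as well.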
I've got a large MS SQL Server 2000 database that has 15 indexes, with roughly 180 million rows representing 240 GB worth of data. Due to the massive size of the database, we are trying to purge it down to a smaller dataset, about 40 million rows, in order to speed up query performance and to be able to defrag the indexes (which are 30-50% fragmented). To complicate matters, this table is also a publisher in a transactional replication setup with one subscriber. Also, the system needs to be up constantly, so I'm only allowed a 3-5 hour outage window per week.
So far I've tested several delete methods following all best practices (batch deletes, using indexes in the delete's WHERE clause), and have settled on deleting/committing 500 rows at a time. The problem is that it still takes 3-4 seconds to delete that many rows on an 8 GB RAM, 4-processor machine that is not currently in use or replicated.
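For reference, a minimal sketch of the batch-delete loop described above, written for SQL Server 2000; the table name and purge predicate are hypothetical. Committing each small batch keeps locks and the active part of the log short, and a brief pause between batches gives replication and other sessions a chance to keep up:

SET NOCOUNT ON;
SET ROWCOUNT 500;                        -- SQL Server 2000 way to limit rows per DELETE
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable             -- hypothetical table name
    WHERE EventDate < '20070101';        -- hypothetical purge predicate, supported by an index
    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '00:00:01';            -- let the log reader and other sessions catch up
END
SET ROWCOUNT 0;                          -- reset the session setting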
I'm at a loss for a way to pare down the data with a delete, as the current purge script would take 7 hours a day for about 3 months. Another option I'm considering is to do a truncate and copy the data back over from the replicated database, but again this has its own set of problems, i.e. network latency and slow insert times. Yet another option would be to create a replica of the table on the production db, copy the data to it, then rename the table.
Anyone have experience with purging such a massive amount of data? Any help would be greatly appreciated.
I need to delete records from a table (Table1) whose key is referenced by a foreign key column in a related table (Table2).
Table1 columns are: table1Id; Name. Table2 columns include Table2.table1Id which is the foreign key to Table1.
What is the syntax to delete records from Table1 using Table1.Name='some name' and remove any records in Table2 that have Table2.table1Id equal to Table1.table1Id?
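Assuming there is no ON DELETE CASCADE on the foreign key, the child rows in Table2 have to be removed first; a sketch using the column names given above:

DELETE t2
FROM Table2 AS t2
INNER JOIN Table1 AS t1 ON t1.table1Id = t2.table1Id
WHERE t1.Name = 'some name';

DELETE FROM Table1
WHERE Name = 'some name';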
Urgent!! Hi there. I use MS SQL Server. I would like to split the data from one table into two tables, based on two reference tables and the following conditions:
Let's say these two reference tables are called Table A & Table B.
Group A:
1. Same date in Table A & Table B
2. Same ID in Table A & Table B (ID is not unique)
3. Same name in Table A & Table B (Name is not unique)
Combine all three of these conditions to form the unique identifier.
I used the following SQL code to separate the required data that match the above conditions into the new table:

select a.Project, a.Site, a.S_number, a.Field_ID, a.Method, a.Analyte, a.Result, a.Units, a.Qualifier,
       a.Dilution_Factor, a.Reporting_limit, a.Recovery_, a.Matrix, a.CAS_Number, a.Sample_Date,
       a.Received_Date, a.Prep_Date, a.Analysis_Date, a.Batch_ID, a.Data_Package_num_SDG,
       a.Lab_Sample_ID, a.Lab
into APPL_union_exist
from APPL_union_update a, Before_01012004_report b
where a.Field_ID = b.[Field Sample]
  and a.Sample_Date = b.Collected
  and a.Analyte = b.Analyte
However, I don't know how to delete the data that was copied to the new table from the original table, i.e. how to move rather than copy it. I wish someone could help me. Thanks a lot.
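One way, reusing exactly the same join conditions as the SELECT ... INTO above, is to delete from the original table the rows that matched the reference table, so only the non-matching rows remain there (a sketch, not tested against your data):

DELETE a
FROM APPL_union_update a
INNER JOIN Before_01012004_report b
    ON  a.Field_ID    = b.[Field Sample]
    AND a.Sample_Date = b.Collected
    AND a.Analyte     = b.Analyte;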
We have a highly transactional database. It was owned by a third party before, but now both the database and the application are on our site and we are trying to improve this project. So, we have a big (902,919 rows) heap table, which is getting bigger every day, and sometimes deadlocks occur. The table has only 4 columns: "token", "type", "value" and "cacheTime"; a unique index cannot be created. It has one index on "token" (char(36)) and "type" (varchar(50)) ("value" should also be included, but it is nvarchar(max)).
I uploaded some data about 2 or 3 times and it kept appending it to the table. Now I want to keep only the first duplicate and delete the rest. Suppose part number 123 has been added 3 times; I want to keep only 1 record. Thanks
Hello friends... Can anybody answer this question: how do I delete duplicate records from a table? I know that with the check option and also with a unique constraint we can avoid entering duplicate records into a table, but how do I delete them from a table which does not have any constraints?
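Not an authoritative answer, but a common approach on SQL Server 2005 and later is to number the duplicates with ROW_NUMBER() and delete everything after the first copy. A sketch with hypothetical table and column names:

WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (
               PARTITION BY PartNumber          -- hypothetical column(s) that define a "duplicate"
               ORDER BY (SELECT 0)) AS rn
    FROM dbo.Parts                              -- hypothetical table name
)
DELETE FROM numbered
WHERE rn > 1;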
Hello, I'm developing an application which monitors network packets. The monitoring data are saved into a table. The monitoring table maintains the data for a fixed quantum of time, for example one hour. So, every minute, before or after inserting new data, I delete the time-expired data. I worry that the endless delete operation could result in some problems (growing indexes, etc.).
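The rolling purge itself can stay a single statement; a minimal sketch, assuming a datetime column on the monitoring table (both names are hypothetical). The repeated deletes will not grow the table without bound, although the table and its indexes can become fragmented over time and may need an occasional rebuild:

DELETE FROM dbo.PacketLog                            -- hypothetical monitoring table
WHERE CaptureTime < DATEADD(HOUR, -1, GETDATE());    -- keep only the last hour of data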
Short Question: How do I delete all records from a destination table prior to appending new data to that table?
I am working with a SQL database that was migrated from MS Access. All relationships, primary keys, and identity columns have been set identically to the MS Access database values. The MS Access database is still being used as the database of record until the SQL database is fully functional with front-end, etc.
I want to delete the information stored in all the SQL tables and then append the MS Access values to the SQL tables. I was able to write delete and append queries in MS Access to correctly transfer data to the SQL tables. However, I would prefer doing this through SSIS because I have several other sources of data to move to a SQL Server database, and most of those other sources are not MS Access databases.
Due to relational integrity settings, I need to delete the records from 8 tables in a specific order. I have tried independent control objects for each of the 8 tables with data flow objects of either "OLE DB Command" or "OLE DB Source" with the SQL command as "DELETE FROM TableName". Results of the debug indicate everything is "green", but no records were deleted from the tables.
The driver table, which keeps track of which datamarts ran and for what date range, gets updated frequently during the ETL run. There can be as many as 250 updates issued against this table in a single second.
This table is a heap, and there are no indexes on it.
During these updates we encounter deadlocks, causing the ETL job to fail.
IF (SELECT OBJECT_ID('t1')) IS NOT NULL
    DROP TABLE t1
GO
CREATE TABLE t1 (c1 INT, c2 INT)

DECLARE @n INT
SET @n = 1
WHILE @n <= 454
BEGIN
    INSERT INTO t1 VALUES (@n, @n)
    SET @n = @n + 1
END

SELECT name, indid,
       CASE indid
           WHEN 0 THEN 'Table'
           WHEN 1 THEN 'Clustered Index'
           ELSE 'Nonclustered Index'
       END AS Type,
       dpages, rowcnt
FROM sysindexes
WHERE id = OBJECT_ID('T1')

name  indid  Type   dpages  rowcnt
----  -----  -----  ------  ------
NULL  0      Table  2       454

I have a table containing 454 rows of two columns of type INT, each being 4 bytes:

c1 int = 4 bytes
+ c2 int = 4 bytes
= 8 bytes per row

If I entered 454 rows: 454 * 8 = 3,632 bytes.
Each SQL page is 8 KB = 8 * 1024 bytes = 8,192 bytes.
A data page header takes the first 96 bytes, leaving 8,096 bytes for data and row offsets.
Each record uses a row offset at the end of the page consisting of 2 bytes: 454 * 2 = 908 bytes.
8,096 - 3,632 - 908 = 3,556 bytes. Should this be free data bytes?

For a heap table, does SQL add an internal uniqueidentifier column also? Or rather, when does SQL add a uniqueidentifier? I am reading Inside SQL Server 2000 and trying to understand a few things. A uniqueidentifier of 4 bytes gets added when a clustered index exists but it is NOT a UNIQUE clustered index, and only the records that are duplicates get a uniqueidentifier value. But in my example it's a heap table with no indexes. Even on a heap table with no indexes, does a ROWID or uniqueidentifier get added? Based on the INSERT statement above, all values are unique.

So what am I missing to understand why 453 rows fit on one data page, whereas 454 rows need two data pages? Thank you
Hello, I have been searching for weeks to resolve this problem. I am new to ASP.NET and trying to make the migration from ASP, which I have programmed in for years. I am using Microsoft Visual Web Developer 2005 Express Edition and SQL Server Express Edition. I have been working through the Microsoft video training at http://msdn.microsoft.com/vstudio/express/beginner/learningpath/ and created a web site using Tier 3 Lesson 8 as the model. My new web site, which is a simple phone book application, lets me read the table and select a record without any problem. But while the update form lets me edit, when I attempt to apply the update I get the following error:

Server Error in '/Phonebook' Application. ObjectDataSource 'ObjectDataSource1' could not find a non-generic method 'Update' that has parameters: FirstName, LastName, PhoneNumber, BossGroup, Department, BossPickup, ShowInPhonebook, Type, Original_FirstName, Original_LastName, Original_PhoneNumber, Original_BossGroup, Original_Department, Original_BossPickup, Original_ShowInPhoneBook, Original_Type, Original_ItemID. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

The stack trace basically shows the same error as above. Also, when I attempt to delete the record I do not get an error, but the record does not delete. What is interesting is that I can add a record, so I do not believe that it is a security permissions issue. I have the IIS authentication method Enable Anonymous Access set on with full control. If anyone has any insight as to why this is occurring, please let me know.
I have a scheduled job which does a text file import into my database. The data gets appended to my table every day from this import job.
Since my table is growing every day, I want to truncate the table after the data has been collected for three months, i.e. 90 days. The table will then be empty, and the new data will flow in through the import.
Any thoughts on how to do this through a query and schedule it?
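One way, sketched below with hypothetical table and column names, is a SQL Server Agent job with a single T-SQL step that removes everything older than 90 days; if the intent is really to empty the whole table every three months, the job step could be a TRUNCATE TABLE instead:

DELETE FROM dbo.ImportedData                     -- hypothetical table loaded by the import job
WHERE LoadDate < DATEADD(DAY, -90, GETDATE());   -- hypothetical date column populated at import time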
I have a task to create a DTS package which will delete records in a SQL table that are present on a sheet tab in Excel. I know how to transfer records from Excel to a SQL table and vice versa, but I'm not sure how I can delete the records in the SQL table that are present in Excel. Note that there are just 2 columns - one is a key and the other a name.
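One approach, sketched with hypothetical names, is to let the package load the Excel sheet into a staging table first and then run a joined DELETE against the real table:

-- Step 1 (in the package): copy the Excel sheet into dbo.ExcelKeys (KeyCol, NameCol)  -- hypothetical staging table
-- Step 2 (Execute SQL step): delete the matching rows from the target
DELETE t
FROM dbo.TargetTable AS t                 -- hypothetical target table
INNER JOIN dbo.ExcelKeys AS x
    ON x.KeyCol = t.KeyCol;               -- hypothetical key column shared by both tables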
DELETE FROM [SCS_NAV2009R2_PROD].[dbo].[Payroll Ledger Entry]
gives this error
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'SCS_NAV2009R2_PROD' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
~ 7,000,000 records
Now I have set the log to autogrow and set the max size to 2 TB.
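As the error text suggests, sys.databases shows what is blocking log reuse (SQL Server 2005 and later). If log_reuse_wait_desc reports ACTIVE_TRANSACTION, the single huge DELETE is itself holding the log, and deleting in smaller committed batches (or TRUNCATE TABLE, if every row is going and nothing references the table) avoids filling the log even with autogrow on:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SCS_NAV2009R2_PROD';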
We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.
In order to do that, I have to delete all records for the site code we want to refresh in the test database first, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.
Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing any information to the database log file?
FYI: The Recovery model of this database is Simple
I have a bunch of heap tables and the fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
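Before deciding, it may help to measure the fragmentation with the standard DMV (SQL Server 2005 and later); heaps show up with an index_type_desc of HEAP:

SELECT OBJECT_NAME(ps.object_id) AS table_name,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
ORDER BY ps.avg_fragmentation_in_percent DESC;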
Is there a way that I can reset the key field to 1 when using DELETE to clear a table? (Note: if there is a separate command that I could use after the DELETE, that is fine too.) Thank you for your help with this, -DJ
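If the key field is an IDENTITY column, DBCC CHECKIDENT can reseed it after the DELETE (TRUNCATE TABLE resets the seed automatically but is only possible when no foreign keys reference the table). A sketch with a hypothetical table name:

DELETE FROM dbo.MyTable;                      -- hypothetical table being cleared
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 0);   -- next inserted row gets identity value 1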
I am using SQL Server Agent jobs that run each morning to update the records in a table to match what they should be for that day. I built them and tested them using a test table called "testtable1". It worked fine. But when I switched over to our production table, it fails, saying the table has to be declared. What would be the difference? The production table has a "@" in front of the name; is that causing issues?
USE [Live_build]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
BEGIN
    DELETE FROM @ZIPLIST
    INSERT INTO @ZIPLIST SELECT * FROM tblZip3DSWed;
END
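A name that starts with @ is a table variable rather than a permanent table, and a table variable only exists if it is DECLAREd in the same batch that uses it, which is the likely source of the "must be declared" error. A minimal sketch; the column list is hypothetical and would have to match tblZip3DSWed:

DECLARE @ZIPLIST TABLE
(
    ZipCode varchar(10),     -- hypothetical column
    ZipName varchar(100)     -- hypothetical column
);

INSERT INTO @ZIPLIST
SELECT * FROM tblZip3DSWed;

A freshly declared table variable starts out empty, so the DELETE step becomes unnecessary; if the data has to survive beyond the batch, a permanent staging table would be the alternative.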
I have been searching many postings and I can't seem to find anyone that has this answer, so I decided to post. I am using SQL (Transact-SQL). If I have 2 tables with columns acct, activity_date, and pay_amt, and I want to delete one instance of a record in table 1 for every instance of that record in table 2, how could I do that? For example:

Table 1
-----------
acct  activity_date  pay_amt
123   5/1/2004       50.00
123   5/1/2004       50.00
123   5/1/2004       50.00
123   5/1/2004       50.00
123   5/1/2004       50.00

Table 2
-----------
acct  activity_date  pay_amt
123   5/1/2004       50.00
123   5/1/2004       50.00

I need a delete statement that will find 2 of the 5 records (it doesn't matter which 2) and delete them, leaving table one looking like this:

Table 1
-----------
acct  activity_date  pay_amt
123   5/1/2004       50.00
123   5/1/2004       50.00
123   5/1/2004       50.00

How can I do this?
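For the concrete example above, a sketch that removes exactly two of the matching rows (SET ROWCOUNT works on SQL Server 2000; on 2005 and later, DELETE TOP (2) does the same job):

SET ROWCOUNT 2;                 -- limit the DELETE to two rows
DELETE FROM Table1
WHERE acct = 123
  AND activity_date = '20040501'
  AND pay_amt = 50.00;
SET ROWCOUNT 0;                 -- reset

For the general case, the literal 2 would be replaced by the count of matching rows in Table 2 for that acct/date/amount group.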
I have to delete a ton of data from a SQL table. I have a unique identifier called the version. I would like to say "if not in these versions, then delete". I tried using the statement below, but learned the hard way that it creates an error; this is the message I got...
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would do this or how I would set up the statement.
Delete Products where versions were not in (('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35', '21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B- 'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
I've got 2 tables, ResumeSkill (child table) and Skill (parent table). There were duplicates in the parent table, and after removing the foreign key constraint in the child table I deleted all duplicate values from the parent table. But those deleted duplicate values still have references in the child table, which need to be deleted now.
ResumeSkill: Id, SkillId
Skill: SkillId, Name
I want to delete all the records from ResumeSkill that don't have a matching SkillId in the Skill table.
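A sketch using the column names listed above; the LEFT JOIN keeps only the ResumeSkill rows whose SkillId no longer exists in Skill and deletes them:

DELETE rs
FROM ResumeSkill AS rs
LEFT JOIN Skill AS s
    ON s.SkillId = rs.SkillId
WHERE s.SkillId IS NULL;     -- no matching parent row remains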
I'm looking at SSIS and SqlBulkCopy as a possible method to quickly insert and process large amounts of data. My current method uses sp_xml_preparedocument and OPENXML to parse an XML document of the data I want to process and insert into the database; however, I'm noticing that SQL Server's performance parsing the XML document isn't that good.
I found the C# SqlBulkCopy class (new in .NET 2.0), and I was thinking it would be faster to use that to load my data into a staging table and then use SSIS to extract the data from the staging table, process and transform it as necessary, and finally load it into the final destination tables. I was able to create an Integration Services project that selects the data from the staging table, does a bit of processing on one of the fields (by calling a stored procedure), and finally loads the processed data into the final table.
The problem with this is that I need to clean out the rows that were extracted from the staging table, and I'm not sure how I can accomplish this (I can't just issue a "delete from staging_table" because there may be new records in the staging table that were not processed). Perhaps I can either delete each record as it is processed, or somehow get the last processed identity id from the staging table and delete all records less than or equal to that id.
Thanks in advance for any help you can provide; maybe there is an easy way to accomplish what I'm trying to do.
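The "last processed id" idea from the post can be done in two small steps; a sketch assuming the staging table has an ever-increasing IDENTITY column (all names are hypothetical):

DECLARE @lastId int;

-- Capture the high-water mark before (or as part of) the SSIS extraction.
SELECT @lastId = MAX(StagingId)               -- hypothetical identity column
FROM dbo.StagingTable;                        -- hypothetical staging table

-- ... run the SSIS processing against rows with StagingId <= @lastId ...

-- Afterwards, remove only what was processed; newly arrived rows have higher ids.
DELETE FROM dbo.StagingTable
WHERE StagingId <= @lastId;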
I have these two tables, Log and CategoryLog. I need to archive records older than 13 months from these two tables into two separate archive tables and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; the other table, CategoryLog, does not have any date column. But the two tables are connected by a column (LogID). How do I archive the data and then delete the archived data from both tables?
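A sketch of one possible order of operations, joining CategoryLog to Log through LogID so both tables use the same 13-month cutoff; the archive table names and the date column on Log are assumptions:

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(MONTH, -13, GETDATE());

-- 1. Archive the child rows first, finding their dates through the parent.
INSERT INTO dbo.CategoryLogArchive               -- hypothetical archive table
SELECT cl.*
FROM CategoryLog AS cl
INNER JOIN Log AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;                       -- hypothetical date column on Log

-- 2. Archive the parent rows.
INSERT INTO dbo.LogArchive                       -- hypothetical archive table
SELECT l.*
FROM Log AS l
WHERE l.LogDate < @cutoff;

-- 3. Delete children, then parents, so no orphaned references remain.
DELETE cl
FROM CategoryLog AS cl
INNER JOIN Log AS l ON l.LogID = cl.LogID
WHERE l.LogDate < @cutoff;

DELETE FROM Log
WHERE LogDate < @cutoff;

For large volumes this could be wrapped in a transaction or run in date-range batches so the log does not grow too much in one go.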
I have around 3 tables holding around 20 to 30 GB of data. Table A is related to table B by a FK, and in the same way table B is related to table C by a FK. I would like to delete all rows satisfying a certain condition from table A and all corresponding related records from tables B and C. I have created a query to delete from the grandchild first, followed by the child table, and finally the parent. I have used an inner join in my delete query. As you all know, inner-join delete operations are going to be extremely resource intensive, especially on bigger tables.
What is the best approach to deleting all these rows? There are many constraints and triggers on these tables. Also, there might be some FK relationships to other tables as well.
How do I delete records from multiple tables when the main table's entity is deleted, given that constraints are applied on all of them? There is a main table called Organization (tblOrganization), and each organization has branches which are in a Branch table called tblBranch; each branch has multiple applications, say tblApplication, and those applications are used by multiple users, tblUsers. What I want is: when I delete the Organization, all branches, applications and users related to it must be deleted also. How can I apply that on a button click in ASP.NET Web Forms? Right now this is my delete function, which is very simple:
public void Delete(int? id)
{
    var str = "DELETE FROM tblOrganization WHERE organizationId=" + id;
}

And my tables look like this:

CREATE TABLE tblOrganization
(
    OrganizationId int,
    OrganizationName varchar(255)
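If the foreign keys are created (or recreated) with ON DELETE CASCADE, a single DELETE against tblOrganization removes the related branches, applications and users automatically, so the button click only needs one statement. A sketch of the constraint definitions; the child key column names are assumptions, and the referenced columns must be primary keys or unique:

ALTER TABLE tblBranch
    ADD CONSTRAINT FK_Branch_Organization
    FOREIGN KEY (OrganizationId)                  -- hypothetical column on tblBranch
    REFERENCES tblOrganization (OrganizationId)   -- assumes OrganizationId is the primary key
    ON DELETE CASCADE;

ALTER TABLE tblApplication
    ADD CONSTRAINT FK_Application_Branch
    FOREIGN KEY (BranchId)                        -- hypothetical column
    REFERENCES tblBranch (BranchId)               -- hypothetical primary key
    ON DELETE CASCADE;

ALTER TABLE tblUsers
    ADD CONSTRAINT FK_Users_Application
    FOREIGN KEY (ApplicationId)                   -- hypothetical column
    REFERENCES tblApplication (ApplicationId)     -- hypothetical primary key
    ON DELETE CASCADE;

The C# side then stays a single DELETE on tblOrganization, ideally with the id passed as a parameter rather than concatenated into the SQL string.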