I have deleted nearly 30 million rows from a table. However, when I used the sp_spaceused command to check the space occupied by the table, I don't see any difference in its data size. In fact, the data size has increased by a few MB since the deletion, but not by much.
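A minimal sketch of how one might verify and reclaim the space, assuming a hypothetical table name dbo.BigTable and SQL Server 2008 or later; pages emptied by deletes, especially in heaps or tables with LOB columns, are often not released until the table is rebuilt:

EXEC sp_spaceused N'dbo.BigTable';   -- note the reserved vs. unused figures
ALTER TABLE dbo.BigTable REBUILD;    -- compacts the table; for heaps this releases emptied pages
EXEC sp_spaceused N'dbo.BigTable';   -- compare the figures after the rebuild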
I am trying to delete rows whose ModifiedDate is older than 9 years in the AdventureWorks2012 database. The console notifies me that the foreign keys were dropped, but the DELETE statement is still throwing errors. I am sure that somewhere the key constraints are not being altered, but I'm not able to figure it out, as I'm a relative beginner to T-SQL. The error and code:
The DELETE statement conflicted with the REFERENCE constraint "FK_SalesOrderHeaderSalesReason_SalesReason_SalesReasonID". The conflict occurred in database "AdventureWorks2012", table "Sales.SalesOrderHeaderSalesReason".

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$option_drop = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$option_drop.ScriptDrops = $true;
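Rather than scripting constraint drops through SMO, one hedged alternative is to delete from the referencing (child) tables before the referenced ones, so the constraint never fires. A sketch against AdventureWorks2012, assuming the intent is to purge order rows older than 9 years:

DECLARE @cutoff DATETIME = DATEADD(YEAR, -9, GETDATE());

-- referencing rows first: the junction table named in the error, then the detail rows
DELETE sr
FROM Sales.SalesOrderHeaderSalesReason AS sr
JOIN Sales.SalesOrderHeader AS soh ON soh.SalesOrderID = sr.SalesOrderID
WHERE soh.ModifiedDate < @cutoff;

DELETE sod
FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.ModifiedDate < @cutoff;

-- only then the parent rows
DELETE FROM Sales.SalesOrderHeader
WHERE ModifiedDate < @cutoff;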
I would like to archive/delete data from a 100 GB table. I have to delete on the basis of a date column. The date column is part of the clustered index, but it does not have an individual non-clustered index.
My estimated execution plan shows an index scan.
Should I create a non-clustered index on the date column and then archive/delete after confirming that an index seek is used in the estimated execution plan, or is there another method to do this? (A sketch of that approach follows.)
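A hedged sketch of that approach, with hypothetical table and column names (dbo.Orders, OrderDate); when the date column is not the leading key of the clustered index, a dedicated non-clustered index usually turns the scan into a seek, and batching keeps each transaction small:

CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);

DECLARE @cutoff DATE = '2015-01-01';  -- hypothetical archive boundary
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.Orders WHERE OrderDate < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END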
I am getting a number of deadlocks when inserting and deleting items from the same table.
The delete statement holds a U lock and is awaiting an IX lock on an index that covers the column in the WHERE clause.
The insert statement holds an IX lock and is awaiting a U lock on the same index.
The delete statement is deleting about 5,000 rows, whereas the insert statement is inserting a single row.
Both these statements are found in stored procedures being called from LINQ to SQL.
I am wondering if there is a way I can prevent the delete statement from taking out the U lock. My thinking is that if the delete didn't take out the U lock, it would not deadlock with the insert. Are there any hints I could use to avoid that particular lock?
I have seen various examples of multiple updates causing a deadlock, which can be fixed by adding multiple indexes. However, as I am inserting and deleting rows, I imagine that all the indexes will need to be updated by both operations.
I have inherited the architecture and don't have the time to redesign everything at present. My backup plan is to deprioritize the delete and build in a retry mechanism.
However, it would be really good if I could find a more elegant way to handle deleting and inserting rows at the same time.
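For the retry backup plan, a hedged sketch of how the delete's stored procedure might deprioritize itself and retry after losing a deadlock (SQL Server 2012+ for THROW; the DELETE body itself is a placeholder):

SET DEADLOCK_PRIORITY LOW;   -- if a deadlock occurs, sacrifice the delete, not the insert
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        DELETE TOP (5000) FROM dbo.Items WHERE Status = 'Expired';  -- hypothetical statement
        BREAK;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205  -- this session was chosen as the deadlock victim
            SET @retries -= 1;
        ELSE
            THROW;
    END CATCH
END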
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from: ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on the person_ref, log_seq, and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete:
SELECT * FROM (
    SELECT cml.person_ref,
           cml.open_date + cml.open_time AS 'datetime',
           cml.open_user,
           cml.log_type,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time, cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo,
           n.note
    FROM ih_cml_log_entry cml
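The posted script is cut off before the join to ih_cml_log_notes, but for the entry table itself the delete can reuse the same ROW_NUMBER() logic: wrap it in a CTE and delete everything numbered above 1 (a delete against the CTE removes the underlying rows). The notes table would need its own pass, keyed on person_ref, log_seq, and the entry date/time as described above. A hedged sketch:

WITH dupes AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY person_ref, open_date + open_time, open_user, log_type
                              ORDER BY (SELECT 0)) AS RowNo
    FROM ih_cml_log_entry
)
DELETE FROM dupes
WHERE RowNo > 1;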
I have to delete a ton of data from a SQL table. I have a unique identifier called the version, and I would like to say: if not in these versions, then delete. I tried the statement below, but learned the hard way that it created an error. This is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would do this or how I would set up the statement.
DELETE FROM Products WHERE version NOT IN ('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35', '21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B- 'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
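The 9002 error means the whole DELETE ran as a single transaction and filled the log. A hedged workaround, reusing the table and column names from the post: delete in batches so each transaction stays small and the log can truncate between batches (in SIMPLE recovery a CHECKPOINT helps; in FULL recovery you would take log backups instead):

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM Products
    WHERE version NOT IN ('48459CED-871F-4971-B888-5083990332BC' /* , ... the full list ... */);
    IF @@ROWCOUNT = 0 BREAK;
    CHECKPOINT;  -- SIMPLE recovery: lets the log reuse space between batches
END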
Data is automatically being deleted from one particular table every night, starting last week. I created a delete trigger to find out what is doing it, but nothing was recorded. There are no jobs except maintenance plans, and there is nothing in the event viewer either. The database recovery model is simple. How can I solve this problem? Please advise.
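Since a DELETE trigger recorded nothing, the rows may be vanishing via TRUNCATE TABLE, which does not fire DELETE triggers. A hedged sketch of an Extended Events session (SQL Server 2008+) that is light enough to leave running overnight and captures any statement touching the table; the table name MyTable and the file path are assumptions:

CREATE EVENT SESSION CatchDeletes ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.client_hostname, sqlserver.server_principal_name)
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%MyTable%')
)
ADD TARGET package0.event_file (SET filename = N'C:\Temp\CatchDeletes.xel');

ALTER EVENT SESSION CatchDeletes ON SERVER STATE = START;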
I have an entry form allowing customers to enter up to 15 SKUs (product IDs) at a time, so they can make a multiple order instead of entering one SKU, submitting it, returning to the form to submit the second one, and so forth. From time to time, a SKU they enter will be wrong or discontinued, so it will not submit an order. Therefore, when they are done submitting their 15 SKUs through the order form, I want a list showing them all of the SKUs that came back blank or were not found in the database.

I'm doing this by creating two tables: a shopping cart, which holds all the SKUs that were returned, and a holding table, which holds all the SKUs that were submitted. I want to then delete all the SKUs in the holding table that match the SKUs in the cart (because they are good SKUs), which will leave the unmatched SKUs in the holding table. I'll then scroll out the contents of the holding table to show them the SKUs that were not found in the database. (Confused yet?)

So what I want to do is have some SQL that will delete from the holding table where the SKU equals the SKU in the cart. I've tried writing this, but it doesn't work. I tried this:

delete from holding_table where sku = cart.sku

I was hoping this would work, but it doesn't. Is there a way for me to do this? Thanks! Bill
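A hedged sketch of the join form that a single-table WHERE clause can't express (table names taken from the post; a sku column is assumed to exist in both tables):

-- delete every holding row whose sku also appears in the cart
DELETE h
FROM holding_table AS h
INNER JOIN cart AS c
    ON c.sku = h.sku;
-- an equivalent form: DELETE FROM holding_table WHERE sku IN (SELECT sku FROM cart)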
I can run a SELECT that retrieves data using the prefix 'a' as an alias for the table involved. However, when I try to run a DELETE using the same criteria, it fails, telling me:
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'a'.
The Select statement looks like:
select count(*) from schema.table a where a.customer_id=1234
The Delete looks like:
delete from schema.table a where a.customer_id=1234
What am I doing wrong here, and how can I alias the table? The command I actually want to run is much more complicated than the example above, and it needs the alias.
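In T-SQL the alias cannot follow the table name directly after DELETE FROM; instead, you delete by the alias and introduce the table in a FROM clause. A sketch using the names from the post:

DELETE a
FROM schema.table AS a
WHERE a.customer_id = 1234;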
Hi, I have to delete the master table data without deleting the child table records. Is there any solution for this? The parent table has a relationship with the child table. Regards, vinod.t.v
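One hedged option, with hypothetical table and constraint names: redefine the foreign key with ON DELETE SET NULL so that child rows survive (orphaned, with a NULL key) when their parent is deleted. This requires the child's foreign key column to be nullable:

ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;   -- hypothetical constraint name
ALTER TABLE dbo.Child
    ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentId)
    REFERENCES dbo.Parent (ParentId) ON DELETE SET NULL;

DELETE FROM dbo.Parent WHERE ParentId = 42;  -- children keep their rows; their ParentId becomes NULL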
I need to delete data from a particular table that has more than half a million records; more than 200,000 of them need to be deleted. What is the best way to delete the data from the table, other than importing it into a temporary table and performing the same operation?
Let me know if the following strategy is okay:
1. Drop all the triggers.
2. Drop all the indexes.
3. Write a procedure with a loop, setting ROWCOUNT to 1000, and delete the records (if I try to delete all the rows at once, it gives a timeout error). The procedure deletes 1,000 records per batch inside the loop until all the data for the specified condition is wiped out.
4. Recreate the indexes and triggers.
Please let me know if there is any other, more optimal solution.
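One note on step 3: SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE; the modern equivalent is DELETE TOP (n) in a loop. A minimal sketch with hypothetical table and filter names:

WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM dbo.MyTable WHERE SomeCondition = 1;  -- hypothetical filter
    IF @@ROWCOUNT = 0 BREAK;
END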
I have a huge table with a four-column primary key on it. I need to delete data from this table (approx. 5.6 million records to be deleted). It takes a hell of a lot of time to delete with a normal query. Can someone please suggest a better way? Any help will be appreciated.
We are using SQL Server 2000 Standard edition, SP4, on Windows 2000 Advanced Server. We have one table that is three times as large as the rest of the database. Most of the data is static after approximately 3-6 months, but we are required to keep it for 8 years. I would like to archive this table (A), but there are complications:

1. The only way to access the data is through the application (they are images produced by the application, which is built on PowerBuilder).
2. There are multiple tables referencing this table and vice versa.
3. We restore the entire db to two other servers for testing and training regularly.
4. There might be more complications that have not been thought of.

Currently, our only plan is to set up a separate server with a copy of this db on it and the application, leave only the tables necessary to access the data, and, if this 'archive' works, remove from production the data from table A and all references to table A from rows in the other tables.

I mentioned #3 because someone mentioned a third-party tool that may be able to pull the data from the table, archive it elsewhere, and at the same time place a 'pointer' in the table to the new storage location. The tool they mentioned only works on Oracle, and we have not explored beyond that yet. I am ready to explore ideas and suggestions; I am still new to the DBA world, and I am out of ideas. Thank you!
I'm working on a simple picture gallery on my web site. I add my pictures to the table using BinaryWriter, and delete them with:
DELETE FROM [Photos]
WHERE [PhotoID] = @PhotoID
This T-SQL deletes the picture from my table, but after some use I found that my database file size is increasing day by day. I'm very confused, so please tell me: where is the problem?
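A hedged first step: find out which file is actually growing, then reclaim it. The file and table names below are assumptions; deleted image (LOB) space in particular often isn't released until the table is rebuilt:

SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;              -- is it the data file or the log file that grows?

ALTER TABLE dbo.Photos REBUILD;       -- compacts LOB pages left behind by deletes
DBCC SHRINKFILE (N'MyDb_log', 100);   -- if it's the log: shrink to ~100 MB (logical file name is hypothetical)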
I am trying to delete a row in an Excel sheet [Sheet1$], where the data in that row is used by a pivot table in the same workbook [Sheet2$], which should also get deleted. When I try to delete that row using "delete ... from [Sheet1$]", it throws the error "Deleting data in a linked table is not supported by this ISAM. (Microsoft Office Access Database Engine)". Can you please guide me in overcoming this error?
I have a table of 300+ GB. It holds 10 years of data. I need to delete 5 years of data and move it to another server so I can have more space.
If I delete 5 years of data, the transaction log gets huge, and the database takes up even more space because the .ldf file keeps growing. I think I can shrink the log file and the data file afterwards. Is this the best way to do it?
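A hedged alternative that archives and deletes in one pass, in small batches so the log never balloons. Table and column names are hypothetical, and the archive table must have no triggers or foreign keys for OUTPUT INTO to accept it:

DECLARE @cutoff DATETIME = DATEADD(YEAR, -5, GETDATE());
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.History
    OUTPUT DELETED.* INTO dbo.History_Archive   -- archive the rows as they are removed
    WHERE EventDate < @cutoff;
    IF @@ROWCOUNT = 0 BREAK;
    -- in FULL recovery, take a log backup here; in SIMPLE, a CHECKPOINT lets the log recycle
END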
I have a requirement to delete 1 million records from a table holding 10 million, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
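A hedged, concurrency-minded sketch with hypothetical names: keep each transaction tiny and pause between batches so concurrent readers are never blocked for long:

WHILE 1 = 1
BEGIN
    DELETE TOP (2000) FROM dbo.Orders WHERE Archived = 1;  -- hypothetical filter
    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '00:00:01';  -- yield between batches so 24/7 queries get lock and IO headroom
END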
One of our applications creates a run-time SQL Agent job to run a batch process, and the application itself deletes this run-time job. But when it does the deletion, it doesn't check whether the job is still running; it just deletes it directly. Our challenge is how to capture the deleted job's details; after an incident happens, we can see only the error details in this log.
We could run Profiler to trace it, but we don't know when it will trigger, and we can't keep a Profiler session active for long, as it would kill the server. So we can't run the trace for a long time, since we don't know when the issue will happen.
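A hedged workaround instead of a long-running trace: sp_delete_job ultimately deletes a row from msdb.dbo.sysjobs, so a lightweight DML trigger there can log every deletion. Note that adding triggers to msdb objects is unsupported territory, and the log table name here is an assumption:

USE msdb;
GO
CREATE TABLE dbo.DeletedJobLog (
    job_id     UNIQUEIDENTIFIER,
    job_name   SYSNAME,
    deleted_at DATETIME2 DEFAULT SYSDATETIME(),
    deleted_by SYSNAME   DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER dbo.trg_LogJobDelete ON dbo.sysjobs AFTER DELETE AS
    INSERT dbo.DeletedJobLog (job_id, job_name)
    SELECT job_id, name FROM deleted;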
What is the best way to transfer data from a staging table into the main table?
Example: Staging table name: TableA_stage (# of rows: millions). Main table name: TableA_main (# of rows: billions).
Note: The staging table may have some data that already exists in the main table.
Currently I am doing the following:
- Load data into the staging table (TableA_stage)
- Remove any duplicate rows from the staging table (TableA_stage)
- Disable all indexes on the main table (TableA_main)
- Insert into the main table (TableA_main) from the staging table (TableA_stage)
- Remove any duplicate rows from the main table using a CTE (TableA_main)
- Rebuild the indexes on the main table (TableA_main)
The problem with the above method is that it takes a lot of time, and the log file grows very big.
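A hedged restructuring that skips the post-hoc dedup pass on the billion-row table: filter out already-present rows during the insert itself. The business-key and data columns are placeholders; with SIMPLE or BULK_LOGGED recovery, the TABLOCK hint may also let the insert qualify for minimal logging:

INSERT INTO dbo.TableA_main WITH (TABLOCK) (BusinessKey, Col1, Col2)
SELECT s.BusinessKey, s.Col1, s.Col2
FROM dbo.TableA_stage AS s
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.TableA_main AS m
    WHERE m.BusinessKey = s.BusinessKey   -- hypothetical natural key
);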
I have three FILESTREAM containers (FS1 on the F drive, FS2 on the H drive, FS3 on the E drive) belonging to the same FILESTREAM filegroup of one particular database (DB), which is in simple recovery mode on SQL Server 2012.
FS1 contains a huge number of files, due to which the F drive is completely full.
So I am trying to move some of the extra files from one container (FS1 on the F drive) to the other containers (FS2 on the H drive and FS3 on the E drive) using the command:
dbcc shrinkfile('FS1', emptyfile)
Then I take full and differential backups of the database, issue a CHECKPOINT, and try to delete the already-duplicated files from FS1 to free some space on the F drive using the command:
Where interid in ('comp1', 'comp2', 'comp4', 'comp5')
What would be the best way, using these scripts, to pull the data into my testDW without running into duplicate-data issues?
I was thinking of using a staging DB on the GP cluster and then building an import data package to run nightly. The issue I had was: how do I avoid duplicate data?
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Are there good rule-of-thumb numbers (or even better, actual statistics and/or explanations) for how many records should be deleted in a single transaction/statement for optimum speed: 50,000, 100,000, 1,000,000, or unlimited? Are there significant differences between 2008, 2012, and 2014?
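There is probably no universal magic number; the sweet spot depends on log throughput, index count, and concurrent load. A hedged harness for measuring it on your own data (table, column, and cutoff are hypothetical), run repeatedly with increasing batch sizes:

DECLARE @batch INT = 50000;              -- vary this: 5k, 50k, 500k, ...
DECLARE @t0 DATETIME2 = SYSDATETIME();
DECLARE @rows INT;

DELETE TOP (@batch) FROM dbo.BigTable WHERE CreatedDate < '2010-01-01';
SET @rows = @@ROWCOUNT;

SELECT @batch AS batch_size, @rows AS rows_deleted,
       DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS elapsed_ms;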
I don't know if this question has been nailed down. Aside from deleting tables, can we delete the *content* of the data within the tables? It doesn't seem crazy that, if you can pull data in from a feed, you should be able to remove that content again (without also destroying the user's metadata work). Reasons for this include:
- Security (a user may not have rights to see *my* data and should go refresh their own)
- Size (a workbook doesn't need GBs of irrelevant data saved to disk if the data was only useful during a development phase against a pre-production feed)
- Bad data (the pre-production data feed is not good data)
- User-friendliness (the data feed was refreshed 2 years ago and the workbook was saved to a file server; users shouldn't be presented with irrelevant data, but should get empty pivot tables until they do their own refresh)
Obviously Excel internally knows how to clear out PowerPivot data, given the prompt shown here: [URL] ....
But how does a user initiate this on their own (corruption aside)?
The previous time this question was asked, it went without a real resolution: [URL] ....
We are running SQL Server 2005 Express on Windows 2003. The database server receives significant amounts of data.
Because of the 4 GB data limit, we have a daily cron task that goes through and deletes data older than 90 days.
We would like a way to archive this data instead of deleting it. Is there any way to take the data, compress it, and store it in a different way, so that if needed, customers can query directly against the compressed data? Clearly, querying compressed data would be slower, but that is OK.
Any other solutions that would allow us to archive data instead of deleting it? Thanks.
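One hedged option within Express itself: the 4 GB cap in SQL Server 2005 Express applies per database, so moving aged rows into a second archive database on the same instance frees space in the primary while keeping the data directly queryable. All object names here are hypothetical:

DECLARE @cutoff DATETIME = DATEADD(DAY, -90, GETDATE());
BEGIN TRAN;
    INSERT INTO ArchiveDb.dbo.Events (EventId, EventDate, Payload)
    SELECT EventId, EventDate, Payload
    FROM dbo.Events
    WHERE EventDate < @cutoff;

    DELETE FROM dbo.Events
    WHERE EventDate < @cutoff;   -- same boundary, so the move is lossless
COMMIT;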