Deleting Data From Primary Table
Mar 20, 2008
Hi Everybody,
Kindly let me know if there is a way of deleting data from a primary (parent) table without deleting the corresponding data from its foreign key (child) table.
Thanks & Regards
I have a table that I want to delete some data from, but I get this error:
You might have a record that has a foreign key value related to it, or you might have violated a check constraint.
What should I do?
Is this close to the correct syntax for a stored procedure that deletes all the data from a particular table, or is there a better way?
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE TruncateTmpBank
[Code] ....
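If the goal really is to remove every row, TRUNCATE TABLE inside the procedure is usually simpler and cheaper than DELETE; a minimal sketch, assuming the table is dbo.TmpBank and that no foreign keys reference it (TRUNCATE is not allowed on a referenced table):
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.TruncateTmpBank
AS
BEGIN
    -- removes all rows with minimal logging and resets any identity column
    TRUNCATE TABLE dbo.TmpBank;
END
GO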
I have to delete a ton of data from a SQL table. I have a unique identifier called the version, and I would like to delete everything that is not in a list of versions. I tried using the statement below, but learned the hard way that it raised an error. This is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would use it here or how I would set up the statement.
DELETE FROM Products
WHERE versions NOT IN ('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
[code]....
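One common way to avoid error 9002 is to put the versions to keep into a temporary table and delete in small batches, so the log can clear between batches. A minimal sketch under that assumption (table and column names are taken loosely from the snippet above and should be adjusted to the real schema):
-- assumed: #KeepVersions is loaded with the full list of version GUIDs that must survive
CREATE TABLE #KeepVersions (version uniqueidentifier PRIMARY KEY);
INSERT INTO #KeepVersions (version) VALUES ('48459CED-871F-4971-B888-5083990332BC');
-- ...insert the rest of the versions to keep...

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM Products
    WHERE NOT EXISTS (SELECT 1 FROM #KeepVersions k WHERE k.version = Products.version);

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete

    -- in SIMPLE recovery a checkpoint lets the log truncate between batches;
    -- in FULL recovery, take log backups instead
    CHECKPOINT;
END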
Hi gurus,
Data has been getting deleted automatically from one particular table every night since last week. I created a delete trigger to find out what is doing it, but nothing was recorded. There are no jobs other than maintenance plans, and there is nothing in the Event Viewer either. The database recovery model is simple. How can I solve this problem?
Please advise me on how to track this down.
Thanks
Krishna.
I have an entry form allowing customers to enter up to 15 skus (product IDs) at a time, so they can place a multiple-item order instead of entering one sku, submitting it, returning to the form to submit the second one, and so forth. From time to time, a sku they enter will be wrong or discontinued, so it will not create an order line. Therefore, when they are done submitting their 15 skus through the order form, I want a list showing them all of the skus that came back blank or were not found in the database.
I'm doing this by creating two tables: a shopping cart, which holds all the skus that were returned, and a holding table, which holds all the skus that were submitted. I want to then delete all the skus in the holding table that match the skus in the cart (because they are good skus), which will leave the unmatched skus in the holding table. I'll then scroll out the contents of the holding table to show them the skus that were not found in the database. (Confused yet?)
So what I want to do is have some SQL that will delete from the holding table where the sku equals the sku in the cart. I've tried writing this, but it doesn't work. I tried this: delete from holding_table where sku = cart.sku. I was hoping this would work, but it doesn't. Is there a way for me to do this?
Thanks!
Bill
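In T-SQL the delete needs a join (or an EXISTS) so it can see the cart rows; a minimal sketch, assuming the hypothetical table names holding_table and cart each have a sku column:
-- remove from the holding table every sku that also exists in the cart,
-- leaving only the skus that were not found
DELETE h
FROM holding_table AS h
INNER JOIN cart AS c
    ON c.sku = h.sku;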
I can run a SELECT that retrieves data using the alias 'a' for the table involved. However, when I try to run a DELETE using the same criteria it fails, telling me:
Msg 102, Level 15, State 1,.......Line 1
Incorrect syntax near 'a'
The Select statement looks like:
select count(*) from schema.table a where a.customer_id=1234
The Delete looks like:
delete from schema.table a where a.customer_id=1234
What am I doing wrong here, and how can I alias the table? The command I actually want to run is much more complicated than the example above and needs the alias.
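The alias has to be introduced in a FROM clause; T-SQL accepts it in this form (a small sketch using the same example):
-- DELETE <alias> ... FROM <table> <alias> is the T-SQL pattern for aliased deletes
DELETE a
FROM schema.table a
WHERE a.customer_id = 1234;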
Hi
I have to delete the master (parent) table data without deleting the child table records. Is there any solution for this? The parent table has a relationship with the child table.
regards
vinod.t.v
I need to delete data from a particular table which has more than half a million records; more than 200,000 of them need to be deleted. What is the best way to delete the data, other than importing the rows to keep into a temporary table and swapping the tables?
Let me know if the strategy to be followed is okay.
1. Drop all the triggers
2. Drop all the indexes
3. Write a procedure with a loop, setting ROWCOUNT to 1000, and delete the records (since if I try to delete all the rows at once it gives a timeout error).
The above procedure will delete 1000 records for each batch inside the loop till it wipes out all the data for the specified condition.
4. Recreate Indexes and Triggers.
Please let me know if there is any other optimal solution.
Thanx,
Zombie
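A hedged note on steps 2 and 4: on SQL Server 2005 or later, nonclustered indexes can be disabled and rebuilt instead of dropped and re-created, which keeps their definitions in place. A sketch with hypothetical table and index names:
-- only disable nonclustered indexes; disabling the clustered index makes the table unreadable
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable DISABLE;

-- ...run the batched delete loop from step 3 here...

ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable REBUILD;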
I have a huge table with a primary key made up of 4 columns. I need to delete data from this table (approx. 5.6 million records to be deleted). It takes a hell of a lot of time to delete them with a normal query.
Can someone please suggest me a better way?
Any help will be appreciated.
We are using SQL Server 2000 Standard Edition, SP4, on Windows 2000 Advanced Server. We have one table that is three times as large as the rest of the database. Most of the data is static after approximately 3-6 months, but we are required to keep it for 8 years. I would like to archive this table (A), but there are complications:
1. The only way to access the data is through the application (they are images produced by the application, which is built on PowerBuilder).
2. There are multiple tables referencing this table and vice versa.
3. We restore the entire db to two other servers for testing and training regularly.
4. There might be more complications that have not been thought of.
Currently, our only plan is to set up a separate server with a copy of this db on it and the application, leave only the tables necessary to access the data, and, if this 'archive' works, remove from production the data in table A and all references to table A from rows in the other tables.
I mentioned #3 because someone mentioned a third-party tool that may be able to pull the data from the table, archive it elsewhere, and at the same time place a 'pointer' in the table to the new storage location. The tool they mentioned only works on Oracle and we have not explored beyond that yet.
I am ready to explore ideas and suggestions; I am still new to the DBA world and I am out of ideas.
Thank you!
Hi there. We have a big problem. We have had a SQL Server running without problems for four years, but suddenly some of the primary key rows are being deleted. The child data belonging to those primary keys is not deleted. This is a big problem because this is a billing system. Recently over 300 primary key rows were deleted. Good that we have backups, but still... not good.
Can anyone help me solve this problem? Please.
Best regards,
Danni
I'm working on a simple picture gallery on my web site.
I add my pictures to the table using a BinaryWriter, and I delete them with:
DELETE FROM [Photos]
WHERE [PhotoID] = @PhotoID
This does remove the pictures from my table, but after some work I found that my database file size is increasing day by day, and I'm very confused.
So please tell me, where is the problem?
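A hedged note on where the growth usually comes from: deleting rows does not shrink the database files, the freed pages stay allocated for reuse, and the transaction log (.ldf) also grows to record the deletes. A small sketch for checking and, only if really necessary, reclaiming space (file and database names are assumptions; shrinking fragments indexes, so it is a last resort):
EXEC sp_spaceused @objname = 'Photos', @updateusage = 'TRUE';  -- space used by the Photos table
EXEC sp_spaceused;                                             -- data vs. unallocated space for the whole database

SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;  -- see which file (.mdf or .ldf) is growing

DBCC SHRINKFILE (MyGallery_Data, 100);  -- assumed logical file name; target size in MB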
Hi,
I am trying to delete a row in Excel [Sheet1$]. The data in that row is used in a pivot table in the same workbook [Sheet2$], which should also be updated when the row is deleted. When I try to delete the row using "DELETE ... FROM [Sheet1$]" it throws the error message "Deleting data in a linked table is not supported by this ISAM. (Microsoft Office Access Database Engine)".
Can you please guide me in overcoming this error?
Thanks in advance,
Warm Regards,
gchanduu
Hi guys,
Just a question regarding database design.
I have a table with an auto-generated (identity) primary key, but the problem is this: say I have 4 records, so logically they'll be numbered 1 to 4. Whenever I delete all the records and add new ones, the numbering starts from 5 and not from 1 again.
How do I remedy this?
thanx
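If the gap really matters (usually it shouldn't, since an identity value is just a surrogate key), the counter can be reset after the table is emptied; a minimal sketch with an assumed table name:
-- after deleting all rows, reseed so the next insert gets 1 again
DELETE FROM dbo.MyTable;
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 0);

-- alternatively, TRUNCATE TABLE resets the identity automatically
-- (allowed only if no foreign keys reference the table)
-- TRUNCATE TABLE dbo.MyTable;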
I have a table of 300+ GB that holds 10 years of data. I need to delete 5 years of data and move it to another server so I can free up space.
If I delete 5 years of data, the transaction log gets huge and the size of the database grows even further because the .ldf file keeps growing. I think I can shrink the log file and the data file. Is this the best way to do it?
I have deleted nearly 30 million rows from a table, but when I use the sp_spaceused command to check the space occupied by the table I don't see any difference in its data size. In fact the reported data size has increased by a few MB after the deletion, but not much.
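A hedged note: sp_spaceused reports cached allocation statistics, and on a heap the pages emptied by a delete may stay allocated to the table. Two things that usually make the freed space visible (the table name is an assumption):
EXEC sp_spaceused @objname = 'dbo.MyBigTable', @updateusage = 'TRUE';  -- refresh the usage counters first

-- on SQL Server 2008 or later, rebuilding the heap (or its clustered index) releases the empty pages
ALTER TABLE dbo.MyBigTable REBUILD;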
Hi, I'm not used to inserting data into databases; usually I just read the data, so I think my problem might be pretty common. I have a table of longitudes, latitudes, city names, and country names. I set the primary key to be the longitude and latitude columns. I have a method that generates the user's location and the mentioned data, and I want to insert the data into the database only if it is new and unique. Currently, if the same user comes to my site, it inserts the data fine the first time and then throws an error the second time because it is inserting a duplicate primary key. Do I need to query the database to see if the record already exists, or is there a way to insert the record only if it is "new"? Thanks for the help!
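A common pattern is to check for the key in the same statement batch as the insert, so a repeat visit simply does nothing; a minimal sketch with assumed table and column names:
-- insert only when this (longitude, latitude) pair is not already stored
IF NOT EXISTS (SELECT 1 FROM dbo.Locations
               WHERE longitude = @longitude AND latitude = @latitude)
BEGIN
    INSERT INTO dbo.Locations (longitude, latitude, city, country)
    VALUES (@longitude, @latitude, @city, @country);
END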
Hi,
I have a Users table that I use for membership. I am using username varchar(30) as the primary key for this table since username will always be unique.
The question I have is regarding how SQL Server actually stores data:
I see that when I add users, they are always stored alphabetically sorted on username.
I was expecting that all users will appear on the users table in the order they were added.
Example: I have 3 users (john, jonah, wilson). Now I add a 4th user with username='bob'.
If I execute select * from users, it returns (bob, john, jonah, wilson). Look, 'bob' has become the first row of the table.
My question: is SQL Server moving the 3 older rows to make room for 'bob', and is it also rebuilding part of the index because of this new username 'bob'?
If this is the case, then it will have a big impact if I have 100K users and I add one user that becomes the first row; in that case 99,999 rows would have to move.
Bottom line, inserts and deletes would be very expensive.
I know SQL Server keeps data physically sorted by the clustered primary key, but I am concerned that rows are losing the order in which they were inserted.
Thanks
I have a table that has a primary key that is auto-incremented by 1. This table's data is cleared out periodically, and as data gets added the identity primary key keeps increasing. Once the data is cleared from the table, the old ID values could be used again (the EventID is not stored anywhere else). Currently the EventID is at 26,581,399, and I know the maximum int value is 2,147,483,647.
How should I handle this? Or should I rebuild the table every time the data is cleared (programmatically)?
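Two hedged options: reseed the identity each time the table is emptied (DBCC CHECKIDENT with RESEED, as in the earlier sketch), or widen the column to bigint so the limit stops being a concern. A sketch of the latter, assuming the table is named dbo.Events, the column is not referenced by foreign keys, and PK_Events is the primary key constraint name:
-- widen the identity column to bigint (drop and re-create the PK around the type change)
ALTER TABLE dbo.Events DROP CONSTRAINT PK_Events;
ALTER TABLE dbo.Events ALTER COLUMN EventID bigint NOT NULL;   -- the identity property is kept
ALTER TABLE dbo.Events ADD CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (EventID);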
We have a large table which is very old and not many people take care of it. Recently there has been a performance problem with a report that queries this table. Eventually we found that the table is missing its primary key and contains duplicate data, which makes "ALTER TABLE ... ADD PRIMARY KEY" fail.
Besides that, the data size of this table means that something like "INSERT INTO new_table_with_pk SELECT DISTINCT * FROM old_table" takes an unacceptable amount of time to execute.
Do you have any recommendation for fixing this? As the application runs on Oracle, Sybase and SQL Server, will a cross-database approach work?
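One approach that works on SQL Server without copying the whole table is to delete just the duplicate rows in place using ROW_NUMBER() and then add the primary key; Oracle and Sybase would need their own equivalents (e.g. ROWID-based deletes on Oracle). A sketch with assumed table and key column names:
-- keep one row per key and delete the extra copies in place (SQL Server 2005 or later)
;WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY key_col1, key_col2   -- assumed key columns
                              ORDER BY (SELECT 0)) AS rn
    FROM dbo.OldTable                                            -- assumed table name
)
DELETE FROM numbered WHERE rn > 1;

-- the key columns must be NOT NULL for this to succeed
ALTER TABLE dbo.OldTable ADD CONSTRAINT PK_OldTable PRIMARY KEY (key_col1, key_col2);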
An extract from a file-based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code (the more things change...). The extract is copied to text in Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates, and deletes are stored in tables that remember the before and after content of each column tracked. In my reading I have seen many references to the LSN being used to decide when and what to record as changed, but I have not seen any reference to a primary key being necessary for Change Data Capture to work. This is in contrast to replication, where the requirement for a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key, or can it record before-and-after column changes based on the LSN only? And how could it be used to change the extract from kill and fill to incremental?
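CDC itself does not require a primary key; it reads changes from the transaction log by LSN. A primary key (or a unique index named explicitly) is only needed if you want net changes (@supports_net_changes = 1). A sketch of enabling it for a keyless table, with assumed names:
EXEC sys.sp_cdc_enable_db;   -- once per database

EXEC sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'LegacyExtract',   -- assumed table name
     @role_name            = NULL,
     @supports_net_changes = 0;   -- net changes would require a PK or @index_name

-- changed rows between two LSNs can then be read with the generated function
-- cdc.fn_cdc_get_all_changes_dbo_LegacyExtract(@from_lsn, @to_lsn, 'all')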
I have a requirement to delete 1 million records from a table holding 10 million rows, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
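The usual approach for a table that has to stay online is to delete in small batches inside short transactions, optionally pausing between batches so concurrent queries are not starved; a minimal sketch with assumed table name and filter:
WHILE 1 = 1
BEGIN
    DELETE TOP (2000)
    FROM dbo.BigTable                       -- assumed table name
    WHERE CreatedDate < '2014-01-01';       -- assumed filter identifying the 1 million rows

    IF @@ROWCOUNT = 0 BREAK;

    WAITFOR DELAY '00:00:01';               -- brief pause so readers and writers can get in
END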
Hi all,
I use the following 3 sets of SQL code in SQL Server Management Studio Express (SSMSE) to import the csv data/files into 3 dbo tables via CREATE TABLE and BULK INSERT operations:
-- ImportCSVprojects.sql --
USE ChemDatabase
GO
CREATE TABLE Projects
(
ProjectID int,
ProjectName nvarchar(25),
LabName nvarchar(25)
);
BULK INSERT dbo.Projects
FROM 'c:\myfile\Projects.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
=======================================
-- ImportCSVsamples.sql --
USE ChemDatabase
GO
CREATE TABLE Samples
(
SampleID int,
SampleName nvarchar(25),
Matrix nvarchar(25),
SampleType nvarchar(25),
ChemGroup nvarchar(25),
ProjectID int
);
BULK INSERT dbo.Samples
FROM 'c:\myfile\Samples.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
=========================================
-- ImportCSVtestResult.sql --
USE ChemDatabase
GO
CREATE TABLE TestResults
(
AnalyteID int,
AnalyteName nvarchar(25),
Result decimal(9,3),
UnitForConc nvarchar(25),
SampleID int
);
BULK INSERT dbo.TestResults
FROM 'c:\myfile\LabTests.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
========================================
The 3 csv files were successfully imported into the ChemDatabase of my SSMSE.
2 questions to ask:
(1) How can I designate the Primary and Foreign Keys to these 3 dbo Tables?
Should I do this "designate" thing after the 3 dbo Tables are done or during the "Importing" period?
(2) How can I set up the relationships among these 3 dbo Tables?
Please help and advise.
Thanks in advance,
Scott Chang
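For (1) and (2): it is usually easiest to add the keys after the tables are loaded, using ALTER TABLE. A sketch that assumes ProjectID, SampleID and AnalyteID are the intended keys (the key columns must be made NOT NULL first, since the CREATE TABLE statements above leave them nullable):
-- make the key columns NOT NULL, then add the primary keys
ALTER TABLE dbo.Projects    ALTER COLUMN ProjectID int NOT NULL;
ALTER TABLE dbo.Samples     ALTER COLUMN SampleID  int NOT NULL;
ALTER TABLE dbo.TestResults ALTER COLUMN AnalyteID int NOT NULL;

ALTER TABLE dbo.Projects    ADD CONSTRAINT PK_Projects    PRIMARY KEY (ProjectID);
ALTER TABLE dbo.Samples     ADD CONSTRAINT PK_Samples     PRIMARY KEY (SampleID);
ALTER TABLE dbo.TestResults ADD CONSTRAINT PK_TestResults PRIMARY KEY (AnalyteID);

-- relationships: Samples belong to Projects, TestResults belong to Samples
ALTER TABLE dbo.Samples     ADD CONSTRAINT FK_Samples_Projects
    FOREIGN KEY (ProjectID) REFERENCES dbo.Projects (ProjectID);
ALTER TABLE dbo.TestResults ADD CONSTRAINT FK_TestResults_Samples
    FOREIGN KEY (SampleID)  REFERENCES dbo.Samples (SampleID);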
Hi all,
Can anyone advise me on adding a primary key to a table which already has a primary key?
Thanks,
Jeyam
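A table can have only one primary key, so the existing one has to be dropped (or the new column added to it as a composite key) before a new one is added; a hedged sketch with assumed table, column, and constraint names:
-- find the name of the existing primary key constraint
EXEC sp_helpconstraint 'dbo.MyTable';

-- drop it and add the new (here composite) primary key
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable_New PRIMARY KEY (ColA, ColB);

-- if the second column only needs to be unique, a UNIQUE constraint avoids touching the PK
-- ALTER TABLE dbo.MyTable ADD CONSTRAINT UQ_MyTable_ColB UNIQUE (ColB);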
I am trying to delete data from tables where the ModifiedDate is older than 9 years in the AdventureWorks2012 database. The console output tells me that the foreign keys are dropped, but the DELETE statement is throwing errors. I am sure that somewhere the key constraints are not getting altered, but I'm not able to figure it out as I'm a relative beginner to T-SQL. The error and code:
The DELETE statement conflicted with the REFERENCE constraint "FK_SalesOrderHeaderSalesReason_SalesReason_SalesReasonID". The conflict
occurred in database "AdventureWorks2012", table "Sales.SalesOrderHeader
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$option_drop = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$option_drop.ScriptDrops = $true;
[Code] ....
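Rather than scripting out and dropping the foreign keys, the rows can be deleted child-first in dependency order with plain T-SQL. A hedged sketch for the SalesOrderHeader branch (the cut-off expression and the exact list of referencing tables should be verified against sys.foreign_keys in your copy of AdventureWorks2012):
DECLARE @cutoff datetime = DATEADD(YEAR, -9, GETDATE());

-- delete rows from the tables that reference Sales.SalesOrderHeader first
DELETE sr
FROM Sales.SalesOrderHeaderSalesReason AS sr
JOIN Sales.SalesOrderHeader AS soh ON soh.SalesOrderID = sr.SalesOrderID
WHERE soh.ModifiedDate < @cutoff;

DELETE sod
FROM Sales.SalesOrderDetail AS sod
JOIN Sales.SalesOrderHeader AS soh ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.ModifiedDate < @cutoff;

-- then the parent rows
DELETE FROM Sales.SalesOrderHeader
WHERE ModifiedDate < @cutoff;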
I don't know if this question has been nailed down. Aside from deleting tables, can we delete the *content* of the data within the tables? It doesn't seem crazy that, if you can pull in data from a feed, then you should be able to remove the content again (without also destroying the user's metadata work). Reasons for this include:
- Security (a user may not have rights to see *my* data and should go refresh their own)
- Size (workbook doesn't need to have GB's of irrelevant data saved to disk in a workbook if it was just useful during development phase to a pre-production data feed)
- Bad data (pre-production data feed is not good data)
- User-friendliness (data feed was refreshed 2 years ago and workbook was saved to file server. Users shouldn't be presented with irrelevant data, but should get empty pivot tables until they go do their refresh)
Obviously Excel internally knows how to clear out PowerPivot data, given the prompt shown here: [URL] ....
But how does a user initiate this on their own (corruption aside)?
Previous time this question was asked, without a real resolution: [URL] ....
We are running SQL Server 2005 express on Windows 2003. The database server gets significant amounts of data.
Because of the 4GB data limit we have a daily cron task which goes through and deletes data older then 90 days.
We would like a way to archive this data instead of deleting it. Is there any way to take the data, compress it, and store it in a different way, so that if needed, customers can query directly from the compressed data? Clearly, querying from compressed data would be slower, but that is OK.
Any other solutions that would allow us to archive data instead of deleting it? Thanks.
I have a table that I think is bad.
How do I delete (drop) the table?
Also, I want to copy the same table from another database.
Can I use Enterprise Manager to copy that table in place of the deleted one?
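A hedged sketch of doing it in T-SQL instead (database and table names are assumptions); Enterprise Manager's Import/Export wizard or its Generate SQL Script option can do the same thing through the GUI:
USE BadDb;
GO
DROP TABLE dbo.MyTable;                 -- remove the bad table
GO
-- copy structure and data from the other database (indexes and constraints are not copied)
SELECT *
INTO dbo.MyTable
FROM GoodDb.dbo.MyTable;
GO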
Hi guys,
I have a table which consists of 6,185 rows, and from this table I want to delete 427 rows. I have placed the 427 rows in a separate table. So how do I delete these 427 rows from the original table (6,185 rows)?
I should also mention that the table does not have a primary key, so I was thinking about something like this, but it didn't work:
-- delete the rows in test6185 that have a matching row in test427
DELETE o
FROM test6185 o
WHERE EXISTS (SELECT 1
              FROM test427 h
              WHERE h.ref_no     = o.ref_no AND
                    h.CardNumber = o.CardNumber AND
                    h.tran_date  = o.tran_date AND
                    h.tran_val   = o.tran_val)
Thanks much for any help.
I have a package which loads data from a flat file (csv) to 4 tables in a database.
Now, the load is incremental.
I want to clear the data from all 4 tables (in the database) before loading the data from the flat file each time. How can I do this?
I am using 4 OLE DB Destinations, 1 Multicast, and 1 source component to do this.
Also, can it happen as a transaction? Because if it deletes the existing data and then cannot load the new data, there will be a problem. How do I avoid this?
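One hedged approach: put the clearing into a single Execute SQL Task ahead of the Data Flow, and, if the deletes must roll back when the load fails, set TransactionOption to Required on the package or on a sequence container holding both tasks (this relies on MSDTC); the alternative is loading into staging tables and swapping. A sketch of the task's statement, with assumed table names:
-- clear all four destination tables in one batch before the load
DELETE FROM dbo.Table1;
DELETE FROM dbo.Table2;
DELETE FROM dbo.Table3;
DELETE FROM dbo.Table4;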
I am trying to delete from a SQL table using two arguments. I currently use the following on the .aspx page:
<ItemTemplate><asp:LinkButton ID="cmdDeleteJob" runat="server" CommandName="CancelJob" Text="Delete" CssClass="CommandButton" CommandArgument='<%# Bind("Type_of_service") %>' OnClientClick="return confirm('Are you sure you want to delete this item?');"></asp:LinkButton></ItemTemplate>
and then I use the following in the .aspx.vb code-behind:
If e.CommandName = "CancelJob" Then
    Dim typeService As Integer = CType(e.CommandArgument, Integer)
    Dim typeDesc As String = ""
    Dim wsDelete As New page1
    wsDelete.deleteAgencySvcs(typeService)
    gvTech.DataBind()
End If
I need to be able to pick up two CommandArguments.
View 4 Replies View Relatedin sql server 2000, I have a table that I have to write a script for to delete several columns in this table. I am finding that I have to use alter or drop keywords or a combination of the two but not sure because I have not done this before. I am googling this but finding all kinds of other information that I dont' need to know.
I dont have rights on this table so I cannot do this manually. I have to create the script and send it on to someone else.
If anyone can provide a good script example that I can use to delete unwanted columns it would be a great thing. Thanks.
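A hedged example of such a script, with table and column names as placeholders to replace:
-- remove unwanted columns from the table (works on SQL Server 2000)
ALTER TABLE dbo.MyTable
DROP COLUMN UnwantedColumn1, UnwantedColumn2;

-- note: any constraint or default bound to a column must be dropped first, e.g.
-- ALTER TABLE dbo.MyTable DROP CONSTRAINT DF_MyTable_UnwantedColumn1;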