Delete From Vs Delete With Subquery In Transaction
Feb 23, 2006
First, this is not my code.
This one is weird and I am missing something fundamental on this one. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch
(
    @SubmitterTranID VARCHAR(50)
)
AS
DECLARE
    @COUNT INT,
    @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The operation will be committed only when this value is 1

select @COUNT = COUNT(*)
from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that the claims under this batch have not been adjudicated and it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE FROM INVOICECLAIMMAPPING
    WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID
                        FROM CLAIMSRECEIVED
                        WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsreceivedPayorServices
    WHERE ClmRecdPyID IN (SELECT ClmRecdPyID
                          FROM ClaimsReceivedPayors
                          WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsReceivedPayors
    WHERE ClmRecdid IN (SELECT ClmRecdID
                        FROM ClaimsReceived
                        WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indexes and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch
(
    @SubmitterTranID VARCHAR(50)
)
AS
DECLARE
    @COUNT INT,
    @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The operation will be committed only when this value is 1

select @COUNT = COUNT(*)
from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that the claims under this batch have not been adjudicated and it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE INVOICECLAIMMAPPING
    FROM INVOICECLAIMMAPPING
    JOIN CLAIMSRECEIVED
        ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
    WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsPayment
    WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE ClaimsreceivedPayorServices
    FROM ClaimsreceivedPayorServices
    JOIN ClaimsReceivedPayors
        ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID
    WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE ClaimsReceivedPayors
    FROM ClaimsReceivedPayors
    JOIN ClaimsReceived
        ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
    WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0)
        SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change.
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter between the join and the subquery? This one is making me depressed because I cannot explain it.
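For what it's worth, on SQL Server 2005 and later the per-statement @@ERROR checks used in both versions above can be replaced with TRY/CATCH plus XACT_ABORT, which aborts and rolls back on the first failing statement. A minimal sketch inside the same procedure, using the same tables (only the first two deletes are shown; the rest follow the same pattern):

SET XACT_ABORT ON   -- any run-time error rolls the transaction back automatically

BEGIN TRY
    BEGIN TRANSACTION

    DELETE icm
    FROM INVOICECLAIMMAPPING AS icm
    JOIN CLAIMSRECEIVED AS cr ON icm.CLMRECDID = cr.CLMRECDID
    WHERE cr.SUBMITTERTRANID = @SubmitterTranID

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    -- ... remaining deletes follow the same pattern ...

    COMMIT TRANSACTION
    RETURN (0)
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION
    RETURN (-1)
END CATCH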
Hi, I have a table 'inventory' from which I need to delete all the records except some: the ones having a 'process_level' < 8, which is a field in another table called 'course'. The foreign key is 'course_name' and 'course_number' combined. Can anyone help me with the syntax for this? Thanks, GEORGE
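A hedged sketch of one way to write that, assuming both tables carry the composite key columns course_name and course_number, and that rows whose matching course row has process_level < 8 are the ones to keep:

DELETE FROM inventory
WHERE NOT EXISTS (SELECT 1
                  FROM course
                  WHERE course.course_name   = inventory.course_name
                    AND course.course_number = inventory.course_number
                    AND course.process_level < 8)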
Hi there! I wanted to move the transaction log from drive E: to drive D:. I ran DBCC SHRINKFILE(finance_log, EMPTYFILE) before the ALTER DATABASE. Two weeks have gone by, the old file is now empty, and I'm trying to delete it by running:

ALTER DATABASE FINANSE REMOVE FILE Finance_log
GO

and I'm getting this error message:
Server: Msg 5020, Level 16, State 1, Line 2 The primary data or log file cannot be removed from a database.
Have you experienced this problem? How do I resolve it?
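Msg 5020 means the file you are dropping is the log file the database was created with, and that one can never be removed. One common workaround on SQL Server 2000 is to relocate that primary log file instead; a rough sketch, using the FINANSE database name from the post and a hypothetical data-file path:

-- Detach, move the physical .ldf to the new drive at the OS level, then re-attach
EXEC sp_detach_db 'FINANSE'
-- (copy E:\SQLLogs\Finance_log.ldf to D:\SQLLogs\Finance_log.ldf while detached)
EXEC sp_attach_db 'FINANSE',
    'E:\SQLData\Finanse_Data.mdf',   -- hypothetical data file path, keep your real one
    'D:\SQLLogs\Finance_log.ldf'     -- log file at its new location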
I currently have 3 transaction log files (TR1, TR2, TR3) set up for a database (DB1). The space allocated for TR1 is much too high at 800 MB, and TR2 and TR3 are 80 MB apiece. I would like to migrate the data to one of the smaller transaction log files. What is the best way to do this?
On a test system I tried backing up both the database and the transaction log and then executing 'DBCC SHRINKFILE (TR1, EMPTYFILE)'. I thought this would purge inactive transactions from TR1 and mark it as no longer allowed to take active transactions. When I try to perform 'ALTER DATABASE DB1 REMOVE FILE TR1' I receive the message that the primary data or log file cannot be removed from the database. So I'm wondering whether the file really is empty, and how I can designate another TR file as the primary log file?
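The primary log file (TR1 here, by the sound of it) cannot be removed or demoted, so a hedged alternative is to shrink TR1 down to a sensible size and remove the two secondary files instead, assuming TR2 and TR3 really are secondary log files; a log backup may be needed first if the active portion of the log sits in them:

-- Shrink the primary log file to a target size in MB
DBCC SHRINKFILE (TR1, 100)

-- Empty and drop the secondary log files
DBCC SHRINKFILE (TR2, EMPTYFILE)
ALTER DATABASE DB1 REMOVE FILE TR2

DBCC SHRINKFILE (TR3, EMPTYFILE)
ALTER DATABASE DB1 REMOVE FILE TR3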
Hello. We have a SQL Server 2000 database with 2 transaction log files. The 2nd file was created when we were running out of disk space, and the person creating it was not familiar with the DBCC SHRINKFILE command.
I now want to get rid of the 2nd log file. I ran the following steps with no success:
DBCC SHRINKFILE ('Log_file', EMPTYFILE ) --Message: Cannot shrink log file 3 (log_file) because all logical log files are in use.
ALTER DATABASE db1 REMOVE FILE 'Log_file' --Message: The file 'Log_file' cannot be removed because it is not empty.
There are no users or open transactions in the database. I have also tried sp_detach_db and sp_attach_single_file_db but that does not work either as the database attaches both the transaction logs back.
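For what it's worth, the "all logical log files are in use" message usually means the active portion of the log still lives in that file. A hedged sequence that often works on SQL Server 2000, using the names from the post (it may need a couple of passes before the active log wraps back into the first file, and TRUNCATE_ONLY breaks the log backup chain, so take a full backup afterwards):

-- Truncate the inactive log so the active portion can move out of the second file
BACKUP LOG db1 WITH TRUNCATE_ONLY

-- Then try to empty and remove the second log file again
DBCC SHRINKFILE ('Log_file', EMPTYFILE)
ALTER DATABASE db1 REMOVE FILE Log_file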
Question (SQL Server 2000): Can I arbitrarily delete a transaction log file, in either bulk-logged or full recovery, with a cron job and still be able to restore a backup? Or are the log and the .bak enmeshed to the extent that this is not do-able?
Details: I need to exclude a filegroup from a backup as it's static data and replicated on a remote server. In order to do this I cannot use the simple recovery model which I currently use; at least in the GUI the exclude-filegroup option is grayed out. So it appears I need to use either bulk-logged or full backups, although I only need the functionality of the simple model. Is there another way to delete/truncate this transaction log without making a backup? Or can I just schedule log backups through the day to handle this issue? Obviously I am not a DBA but still need to solve this problem. Thanks, Phil
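A hedged sketch of the usual pattern under the full or bulk-logged model, with hypothetical database, filegroup and path names: back up only the filegroup you care about, and schedule log backups often enough that the log never grows far (without log backups the log cannot be truncated in those recovery models).

-- Back up just the active filegroup; the static, replicated filegroup is skipped
BACKUP DATABASE MyDb
    FILEGROUP = 'PRIMARY'
    TO DISK = 'E:\Backups\MyDb_primary.bak'

-- Frequent log backups keep the log small and preserve restorability
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.trn'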
I have some simple maintenance plans, but they are failing because the delete history task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep on soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get the SQL scripts that would work for updating the software on SqlServer to be portable to other DBMS systems. Doing it in code, rather than a SQL batch, has a chance of making cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the approach must work for the lowest common denominator, i.e. a 'recipe' of steps that will work on all DBMSs.
The Problem: There might be simpler ways to do this with SqlServer (all ears :-) just in case I can't make it cross-platform right now), but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP to the original name.
This appears possible in SqlServer too, as long as there are no CASCADE operations. TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be along the lines of: a) Remove the Foreign Key references b) Copy the table structure, and make a new temp table, adding the column c) Copy the data over d) Add the FK relations, that used to be in the first table, to the new table e) Delete the original f) Done?
The questions are: a) How does one alter a table to REMOVE the foreign key references if the constraint has no 'name'? b) Does anyone know of a good clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent. c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
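For the SQL Server side specifically, a hedged sketch of how one might discover the foreign keys before the copy/rename dance: even "unnamed" constraints get a system-generated name, and it can be read from the catalog views. The table name dbo.MyTable below is hypothetical.

-- Foreign keys that reference the table being rebuilt, with their generated names
SELECT fk.name                               AS fk_name,
       OBJECT_NAME(fk.parent_object_id)      AS referencing_table,
       c1.name                               AS referencing_column,
       OBJECT_NAME(fk.referenced_object_id)  AS referenced_table,
       c2.name                               AS referenced_column
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns c1 ON c1.object_id = fkc.parent_object_id     AND c1.column_id = fkc.parent_column_id
JOIN sys.columns c2 ON c2.object_id = fkc.referenced_object_id AND c2.column_id = fkc.referenced_column_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable')

-- Then, roughly, the rebuild recipe from the steps above:
-- ALTER TABLE <referencing_table> DROP CONSTRAINT <fk_name>   (for each row returned)
-- SELECT * INTO dbo.MyTable_tmp FROM dbo.MyTable              (plus the new column)
-- DROP TABLE dbo.MyTable
-- EXEC sp_rename 'dbo.MyTable_tmp', 'MyTable'
-- re-create the foreign keys against the renamed table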
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
Table A is the parent of Table B and Table C, and Table B and Table C are each parents of Table D.
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "DELETE statement conflicted with COLUMN REFERENCE constraint". This does not make sense to me; can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row... isn't that the whole point of delete triggers?
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
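One pattern that keeps the foreign keys enforced while still cascading by hand is an INSTEAD OF DELETE trigger on the parent: it fires in place of the delete, before any constraint check, whereas the FOR DELETE (AFTER) triggers above only fire once the constraint has already rejected the delete. A hedged sketch using the A/B/C/D naming from the post; the column names (ID, MeetingID, BID, CID) are assumptions:

CREATE TRIGGER trg_A_InsteadOfDelete ON A
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Grandchildren in D that hang off the B and C children of the rows being deleted
    DELETE D
    FROM D
    JOIN B ON D.BID = B.ID
    JOIN deleted del ON B.MeetingID = del.ID;

    DELETE D
    FROM D
    JOIN C ON D.CID = C.ID
    JOIN deleted del ON C.MeetingID = del.ID;

    -- Then the children in B and C
    DELETE B FROM B JOIN deleted del ON B.MeetingID = del.ID;
    DELETE C FROM C JOIN deleted del ON C.MeetingID = del.ID;

    -- Finally the parent rows themselves (does not re-fire this trigger)
    DELETE A FROM A JOIN deleted del ON A.ID = del.ID;
END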
I'm trying to clean up a database design and I'm in a situation to where two tables need a FK but since it didn't exist before there are orphaned records.
Tables are:
Brokers and it's PK is BID
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete a record if there isn't a match for its BID in the Brokers table.
I know this isn't correct syntax but should hopefully clear up what I'm asking
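A hedged sketch of that cleanup delete, assuming BID is the only column needed for the match:

-- Remove Broker_Rates rows whose BID has no matching row in Brokers
DELETE br
FROM Broker_Rates AS br
WHERE NOT EXISTS (SELECT 1
                  FROM Brokers AS b
                  WHERE b.BID = br.BID)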
I already did the following, but it still won't delete the log file because it is not empty:
- DBCC SHRINKFILE('logfilename', EMPTYFILE)
- DBCC SHRINKFILE('logfilename', TRUNCATEONLY)
- ALTER DATABASE databasename REMOVE FILE logfilename
How can I repair the database (5 GB) and remove the transaction log (45 GB)? Whenever I run the maintenance wizard, it corrupts the db and I have to restore it. Whenever I try to shrink the transaction log, the query runs but it doesn't shrink at all. Is there a manual method for either of these, and if so, how?
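A hedged sketch of the manual route for the log side, with hypothetical database, file and path names: the log usually has to be backed up (or the database switched to simple recovery) before DBCC SHRINKFILE can actually release space.

-- See what is holding the log open (SQL Server 2005 or later)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDb'

-- Back up the log, then shrink the log file to a target size in MB
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.trn'
DBCC SHRINKFILE (MyDb_log, 1024)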
This is my delete query no. 1:

ALTER TABLE ZT_Master DISABLE TRIGGER ALL

DELETE ZT_Master
WHERE TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

ALTER TABLE ZT_Master ENABLE TRIGGER ALL

I have trouble with delete query no. 2. Here is a select statement; I need to delete the rows it returns:

SELECT d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

I tried modifying it as below:

DELETE d.*
FROM ZT_Master m, ZT_Detail d
WHERE m.Prikey = d.MasterKey
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)

but this doesn't work.
Can you please help? And can I combine these 2 SQL queries into one? Thank you.
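A hedged sketch of the detail-table delete using the DELETE alias FROM ... JOIN form (table and column names taken from the post). Whether the two deletes can be collapsed into one statement depends on the foreign keys; deleting the detail rows first and then running query no. 1 against the master table is the safe order.

DELETE d
FROM ZT_Detail AS d
JOIN ZT_Master AS m ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)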
I'm using SqlDataSource and an Access database. Let's say I have two tables:

user: userID, username
message: userID, messagetext

Let's say a user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to do this?
Here's my suggestion: I have made a "delete query" (with userID as parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.NET page? If yes, what would the script look like? (Yes, it is a newbie question.)
The requirement is: I should allow single-row deletes from a table but not bulk deletes. An audit table should get updated if there is any single delete or single update. So I wrote the following triggers for single and bulk delete:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance] ON [dbo].[tbl_attendance] AFTER DELETE AS
[code]...
When I try to run the website, the database error I am getting is: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1.
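Since the posted trigger body is elided, here is a hedged sketch of one common pattern for allowing only single-row deletes while auditing them; the audit table name and columns are hypothetical. The "transaction count" error usually means a BEGIN TRANSACTION or ROLLBACK inside the trigger or procedure leaves @@TRANCOUNT different from what the caller started with, so the rollback here is explicit and immediately followed by RETURN.

ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject multi-row (bulk) deletes outright
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        RAISERROR ('Bulk delete is not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END

    -- Audit the single deleted row (hypothetical audit table and columns)
    INSERT INTO dbo.tbl_attendance_audit (attendance_id, deleted_at)
    SELECT d.attendance_id, GETDATE()
    FROM deleted AS d;
END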
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products WHERE vendor='Foo' AND productId NOT IN ( SELECT productId FROM sql2000.staging.dbo.fooProductList )
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved were undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every thinkable combination of 2000 and 2005 servers, that don't give me any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
Online no help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it again, but I receive the same message. "Okay, no problem, I'll just delete the report and redeploy it and set up the subscription so all the other users aren't affected", says I. "Oh, no!", says the report server, and then it give me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName) at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item) at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId) at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch() --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
Hi, we are executing the following query in a stored procedure using the snapshot isolation level:

DELETE FROM tBackgroundProcessProgressReport
FROM tBackgroundProcessProgressReport
LEFT OUTER JOIN tBackgroundProcess
    ON tBackgroundProcess.BackgroundProcessProgressReportID = tBackgroundProcessProgressReport.BackgroundProcessProgressReportID
LEFT OUTER JOIN tBackgroundProcessProgressReportItem
    ON tBackgroundProcessProgressReport.BackgroundProcessProgressReportID = tBackgroundProcessProgressReportItem.BackgroundProcessProgressReportID
WHERE (tBackgroundProcess.BackgroundProcessID IS NULL)
  AND (tBackgroundProcessProgressReportItem.BackgroundProcessProgressReportItemID IS NULL)

The query should delete records from tBackgroundProcessProgressReport which are not connected to the other two tables. However, for some reason we get the following exception:

System.Data.SqlClient.SqlException: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.tBackgroundProcess' directly or indirectly in database 'RHSS_PRD_PT_Engine' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.

The exception says that we are not allowed to update/delete/insert records in tBackgroundProcess, but the query deletes records from tBackgroundProcessProgressReport, not from the table named in the exception. Is the exception raised because of the join? Has someone encountered this issue before?
Thanks, Yani
I have looked at the membership and roles stored procs from Microsoft and noticed that most of them are wrapped in a transaction. OK, some of those stored procs update more than one table, in which case it makes sense to wrap the code in a transaction. Our stored procs are a little simpler and insert, update or delete only one table for the most part. My question is: what is good practice? Should I wrap my stored procs in transactions, or, because I am only updating one table, leave them the way they are? See the sample below. Please advise, newbie.

ALTER PROCEDURE [dbo].[syl_Category_Insert]
    @CategoryName nvarchar(64),
    @LanguageID int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO [syl_Categories]
        VALUES (@CategoryName, @LanguageID)
        SELECT SCOPE_IDENTITY() AS [CategoryID]
        RETURN
    END TRY
    BEGIN CATCH
        --Execute LogError_Insert SP
        EXECUTE [dbo].[syl_LogError_Insert];
        --Being in a Catch Block indicates failure.
        --Force RETURN to -1 for consistency (other return values are generated, such as -6).
        RETURN -1
    END CATCH
END
Currently we have a database about 300 GB in size. Because our backup system failed some time ago, we were left with a transaction log file which grew to about 160 GB. Our backups are working again and everything is fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.
When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a log backup is done these transactions get discarded from the transaction log.
Could I make use of this relatively large transaction log file and start deleting records without actually adding to the transaction log file's size?
The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk out of the table, until we have got it down to the records that we now need. Will this work?
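A hedged sketch of the usual shape of that kind of purge: delete in modest batches so each transaction stays small, and let the regular log backups that run between batches free the log space for reuse. Table name, cutoff and batch size below are placeholders; DELETE TOP needs SQL Server 2005 or later (on 2000 the same effect comes from SET ROWCOUNT).

WHILE 1 = 1
BEGIN
    -- Each iteration is its own small transaction, so the log never has to hold the whole purge
    DELETE TOP (10000)
    FROM dbo.LoggingTable
    WHERE LogDate < '20060101'   -- placeholder cutoff

    IF @@ROWCOUNT = 0
        BREAK
END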
Need some advice solving a little problem I have with my database!
Current setup:
I have a person table that is made up of 39 columns. I also allow for person records to be deleted but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, Reason for deletion/suppression and a date time stamp. To access Live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in the LogicallyDeletedrecords. Similarly, I have another view DeadPersonData which contains Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I’ve been thinking of physically deleting dead Person records from my person table (possibly creating an archive table to hold them). But then if I delete them how do I cross check the details when new Person details get loaded?! As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. The only way is to add further columns which I use to match Person Details?
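For the speed worry, a hedged sketch of the sort of thing that usually keeps this pattern workable (names adapted from the post; exact column names are assumptions): index the suppression table on the lookup key, and write the live view as NOT EXISTS so the optimizer can treat it as an anti-semi-join. The same NOT EXISTS check can then be reused by the load routine.

-- Index the suppression table on the lookup key
CREATE UNIQUE INDEX IX_LogicallyDeletedrecords_PersonId
    ON dbo.LogicallyDeletedrecords (PersonId)
GO

-- Live-person view as an anti-semi-join
CREATE VIEW dbo.LivePersonData
AS
SELECT p.*
FROM dbo.Person AS p
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedrecords AS d
                  WHERE d.PersonId = p.PersonId)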
There are two tables involved in replication, say table1 and its replicated copy rep.table1.
We are not deleting records physically in table1; we only set a bit in table1 to true when we want to delete a record. The strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads the hard delete and removes the record in the replicated table, which is very crucial.
Please let me know where I am wrong and how to put it right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
I got the error below when executing a DELETE SQL query in an SSIS Execute SQL Task:
Error: 0xC002F210 at DelAFKO, Execute SQL Task: Executing the query "DELETE FROM [CQMS_SAP].[dbo].[AFKO]" failed with the following error: "The transaction log for database 'CQMS_SAP' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
But my disk has more than 6 GB of free space, and when I query the log_reuse_wait_desc column in sys.databases it returns "NOTHING".
This confused me; does anyone have any experience with this?
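A hedged sketch of the usual checks when the log reports full even though log_reuse_wait_desc shows NOTHING (database name taken from the error message). Note that "the transaction log is full" can also mean the log file has hit its configured MAXSIZE or cannot autogrow, rather than the disk being full.

-- Current log size and free-space percentage for every database
DBCC SQLPERF (LOGSPACE)

-- Re-check what, if anything, is preventing log reuse
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'CQMS_SAP'

-- Any long-running open transaction (an SSIS package-level transaction can hold one)
DBCC OPENTRAN ('CQMS_SAP')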
I have been trying to solve this locking problem for the past couple of days. Please help me!
Scenario: I have an SSIS package with 2 data flow tasks. The 1st data flow task deletes records from 5 tables, and the 2nd data flow task should insert records into 1 of the five tables after the success of the 1st data flow task. This scenario runs in a transaction.
In the 2nd data flow task the package hangs at runtime and does not complete. With the sp_who2 command I could see that there is an intent shared lock (LCK_M_IS) on the table and the status is SUSPENDED.
I don't know how to get out of this locking. Please help.
Hello friends. I am a newbie with SQL Server. I am having a problem when executing this procedure:

ALTER PROCEDURE [dbo].[spgetvalues]
    @Uid int
AS
BEGIN
    SET NOCOUNT ON;
    select DATEPART(year, c.fy) as fy,
        (select contribeamount from wh_contribute where and contribename like 'Retire-Plan B-1% JRF') as survivorship,
        (select contribeamount from wh_contribute where and contribename like 'Gross Earnings' and ) as ytdgross,
        (select contribeamount from wh_contribute where and contribename like 'Retire-Plan B-1.5% JRP') as totalcontrib,
    from wh_contribute c
    where c.uid = @Uid
    Order by fy Asc

What is wrong here? I get: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression." Please reply ASAP.
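The error itself just means one of those scalar subqueries returns more than one row for some @Uid. Since the posted WHERE clauses are incomplete, here is only a hedged illustration of the usual fix: correlate the subquery and/or wrap it in an aggregate so it can only ever produce a single value. The correlation column w.uid is an assumption.

DECLARE @Uid int
SET @Uid = 1   -- sample value

select DATEPART(year, c.fy) as fy,
       (select SUM(w.contribeamount)
        from wh_contribute w
        where w.uid = c.uid                          -- hypothetical correlation
          and w.contribename like 'Retire-Plan B-1% JRF') as survivorship
from wh_contribute c
where c.uid = @Uid
order by fy asc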
I am getting 2 result sets depending on a condition. In the second condition I am getting the above error. Could anyone help me?

CREATE proc sp_count_AllNewsPapers
    @CustomerId int
as
declare @NewsId int
set @NewsId = (select NewsDelId from NewsDelivery where CustomerId = @CustomerId)

if not exists (select CustomerId from NewsDelivery where NewsPapersId = @NewsId)
begin
    select count(NewsPapersId) from NewsPapers
end

if exists (select CustomerId from NewsDelivery where NewsPapersId = @NewsId)
begin
    select count(NewsDelId) from NewsDelivery where Customerid = @Customerid
end
GO