I have seen this buried deep within the questions on Service Broker, but I am looking for it again. How do you delete all records from sys.transmission_queue? This is on a test server and I want to clean it out before some more tests.
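For a test server only, a minimal sketch of one approach: ending every conversation WITH CLEANUP discards its pending messages, which also removes its rows from sys.transmission_queue. (ALTER DATABASE ... SET NEW_BROKER is an even bigger hammer that throws away all broker state at once.)

-- Test-server-only sketch: end every conversation WITH CLEANUP, which drops its
-- pending messages from sys.transmission_queue without notifying the other side.
DECLARE @handle uniqueidentifier;

DECLARE conv CURSOR LOCAL FAST_FORWARD FOR
    SELECT conversation_handle FROM sys.conversation_endpoints;

OPEN conv;
FETCH NEXT FROM conv INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM conv INTO @handle;
END
CLOSE conv;
DEALLOCATE conv;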
We are having a problem with messages getting out of transmission_queue and into the queues themselves. The queues are all activated and enabled, and our service broker is enabled at the database level. The DBA detached and reattached the database yesterday, which I believe may be the cause of this problem. Everything seems to be in order (sp ownership, activation procs, etc.), except when I run:
select [name], count(*)
from sys.dm_broker_queue_monitors qm
inner join sys.service_queues sq on qm.queue_id = sq.object_id
group by [name]
I get 2 entries for each queue. And the last_activated_time is the same date as the detach event. When we reattached we had to do an alter database set enable_broker in order to get it back up and running. When I run the above query on our dev and test environments, I get only one entry per queue.
Does anyone know why this would happen? Is this a valid state for SB? And to get past it, if we can't figure out the real fix for it, we want to get a copy of all the messages in the transmission queue, do an alter database set new_broker, then replay them all into SB. We hope this fixes the root cause, but it's a guess.
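A sketch of the snapshot step before the reset, assuming a scratch table name (dbo.TransmissionQueueBackup) and with YourDb standing in for the real database name:

-- Copy the pending outbound messages to a scratch table before resetting the broker.
SELECT conversation_handle, to_service_name, to_broker_instance, local_service_name,
       service_contract_name, enqueue_time, message_type_name, message_body,
       transmission_status
INTO dbo.TransmissionQueueBackup
FROM sys.transmission_queue;

-- Resets the broker identifier and discards all existing conversations and messages.
ALTER DATABASE YourDb SET NEW_BROKER WITH ROLLBACK IMMEDIATE;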
At my company, we're trying to use service broker to create a client-server system where there is a head office machine and multiple outlets registered with that head office. My problem is that sometimes when a branch sends a message to the head office, it just seems to sit in the transmission queue and never gets sent. If I run a script that forcibly ends the conversations on the client machine (with cleanup), storing the message bodies and then resend them, they seem to get through fine.
The way that we send messages is by calling a T-SQL stored procedure from a C# application using SqlCommand (I don't know if this should make any difference).
If I monitor the Head Office machine and one of the Outlets while this is happening, on the HO I get three events in a row:
Broker: Message Classify (1 - Local)
Audit Broker Conversation (2 - No Certificate)
Broker: Message Undeliverable (1 - Sequenced Message)
The TextData contained in the third event is: "This message could not be delivered because the security context could not be retrieved."
The RoleName of the server is Initiator, and the TargetUserName is the name of the service on the Outlet.
On the Outlet I get the following event repeatedly (presumably as it continues to try sending the message) - Broker: Remote Message Acknowledgement (1 - Message With Acknowledgement Sent).
On the client the RoleName also appears to be Initiator, and the TargetUserName is blank.
This would make me suspect that certificates were missing or something, except that if I remove messages from the queue and resend them they seem to get through, and also I've checked both databases and they have the correct certificates.
Sorry for the stupid question, but I can't seem to figure it out...
There are 119 messages stuck in the transmission queue, all destined for the same queue. When I check the status of the queue (via sys.service_queues), is_receive_enabled = 1, is_activation_enabled = 1, and max_readers = 3. When I check whether there is an active queue monitor (via sys.dm_broker_queue_monitors), there is nobody watching this queue. What would cause this queue to be active and enabled but have nobody monitoring it? Is there something internal that went wrong that made these (outbound) messages get stuck in the transmission queue, and that is not showing up in the views? How can I get these messages "un-stuck" so they flow through the system?
A related problem I am seeing is that the return messages to this queue (sent to signify that the target has consumed the message and to end the conversation) are not being consumed, and so the conversations get stuck in the "DI" state.
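A sketch of something that sometimes re-registers a missing queue monitor: toggling the queue off and on forces Service Broker to re-evaluate activation. The queue name dbo.TargetQueue and the procedure dbo.usp_ProcessQueue are placeholders, not names from the post.

-- Placeholders: dbo.TargetQueue and dbo.usp_ProcessQueue are not the real names.
ALTER QUEUE dbo.TargetQueue WITH STATUS = OFF;
ALTER QUEUE dbo.TargetQueue
    WITH STATUS = ON,
         ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.usp_ProcessQueue,
                     MAX_QUEUE_READERS = 3,
                     EXECUTE AS OWNER);

-- Verify a monitor is now registered for the queue.
SELECT qm.*
FROM sys.dm_broker_queue_monitors qm
JOIN sys.service_queues sq ON qm.queue_id = sq.object_id
WHERE sq.name = 'TargetQueue';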
Are lock hints propagated to the underlying tables of system catalog views? I ask because I often query sys.transmission_queue with NOLOCK, and I wanted to know if this was honoured throughout the underlying tables.
Secondly, is sys.transmission_queue indexed at all, providing a way to prevent table-scanning?
For some reason the messages are stuck in sys.transmission_queue. The transmission status seems to be null. I verified all the queues and they have is_receive_enabled = 1, is_enqueue_enabled = 1, and is_activation_enabled = 0.
I have set up Service Broker across different domains and have bidirectional ports open to receive messages.
I can send and receive messages when I run the same Service Broker setup scripts on machines that belong to the same domain.
Is there any thought going into moving these two tables to a filegroup that we can control? Putting them in PRIMARY with the rest of my system tables is quite problematic, and it hinders my ability to manage space usage on my files. Traditionally, we didn't have to consider a primary filegroup that could grow to large proportions, but now with these two tables it can. If a large volume of messages gets sent through and the system can't keep up, these tables, and my primary filegroup with them, can grow enormously.
As Christopher Yager says in "Need distributed service broker sample", I am also testing sending messages between two SQL Server 2005 instances. After I set up the test environment with instance1 and instance2, I find that queue [q2] in ssb2 can't receive messages from ssb1. When I query with "select * from sys.transmission_queue", I get some message records whose transmission_status is "64(error not found)".
I may have a misunderstanding of how SB works, but this seems like a problem.
If a queue is disabled (i.e. status = off) and a message is sent to it, the message is placed on sys.transmission_queue. Once the queue is enabled, I thought the messages were delivered to the queue in the order they were placed on sys.transmission_queue? I have been troubleshooting a problem and this is not the case. Do I have a misunderstanding of how sys.transmission_queue works?
Hi, I was wondering whether it is possible to log, somewhere outside SB, that there are messages in the transmission_queue because the target queue was disabled.
I was testing this scenario:
try to send messages on a disabled queue and log the problem.
But the transmission_status from the transmission_queue is always empty.
This is the code that I tried to execute between the send and the commit and after the commit:
WHILE (1=1)
BEGIN
    BEGIN DIALOG CONVERSATION .....
    SEND ON CONVERSATION ......

    IF (SELECT COUNT(*) FROM sys.transmission_queue) <> 0
    BEGIN
        SET @transmission_status = (SELECT transmission_status
                                    FROM sys.transmission_queue
                                    WHERE conversation_handle = @dialog_handle);
        IF @transmission_status = ''
        BEGIN
            --Successful send - Exit the LOOP
            UPDATE Mytable SET isReceivedSuccessfully = 1 WHERE ID = @IDMessageXML;
            BREAK;
        END
        ELSE
            RAISERROR(@transmission_status, 1, 1) WITH LOG;
    END
    ELSE
    BEGIN
        UPDATE [dbo].[tblDumpMsg] SET isReceivedSuccessfully = 1 WHERE ID = @IDMessageXML;
        BREAK;
    END
END
COMMIT TRANSACTION;
As I wrote before, the @transmission_status variable is always empty, and I get the same result even if I put the code after the COMMIT TRANSACTION!
Maybe what I'm trying to achieve makes no sense?
With an event notification I can be notified when the queue is disabled because the RECEIVE rolls back 5 times, but what if the target queue is disabled by mistake outside the SB environment? How can I catch that and handle it properly?
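For reference, a minimal sketch of such an event notification; the queue, service, and notification names are placeholders. Note that BROKER_QUEUE_DISABLED is documented for the poison-message case; whether it also fires for a manual disable is worth verifying.

-- Placeholders: queue, service, and notification names are illustrative only.
CREATE QUEUE dbo.BrokerEventQueue;

CREATE SERVICE BrokerEventService
    ON QUEUE dbo.BrokerEventQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION QueueDisabledNotification
    ON DATABASE
    FOR BROKER_QUEUE_DISABLED
    TO SERVICE 'BrokerEventService', 'current database';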
I've got 2 Service Broker databases on remote servers. I've created my endpoints, my routes, and have everything set up. But when I send a test message, the messages sit in the transmission_queue. There is no transmission_status. And when I look at the sys.conversation_endpoints view I see that the conversation status is conversing. One odd thing I wanted to point out, though, is that the far_broker_instance column of the sys.conversation_endpoints view is null. When I run a trace on both databases, I see activity on the initiator with things like Started_OutBound and conversing, but I don't see any messages such as acknowledgements or any errors. On the target side I see no activity at all. Does anyone know what the deal is? Why don't I get some kind of error message? Why are all my messages staying in the transmission_queue?
I set up my Service Broker communication on the same SQL Server instance between 2 different DBs.
One DB behaves as the initiator and sends messages to the SB service set up in the other DB.
I execute the SEND statement from the initiator, and if I count the messages in sys.transmission_queue before committing the transaction, the count returns 0.
If I try to send a message that is not compliant with the message type, the same count run after the SEND returns 1 - fair enough.
I'm confused about the first behaviour because, from what I understood, the acknowledgement and the removal of the message from sys.transmission_queue should happen after the COMMIT.
I have some simple maintenance plans, but they are failing because the delete history task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep soft deleting records in an MDS table, eventually there will be a huge number of soft deleted records. Is there an easy way to hard delete all the soft deleted records from all MDS tables in a specific model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get the SQL scripts that would work for updating software on SQL Server to be portable to other DBMS systems. Doing it in code, rather than in a SQL batch, has a chance of producing cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the system used must work for the lowest common denominator, i.e. a 'recipe' of steps that will work on all DBMSs.
The Problem: There might be simpler ways to do this with SQL Server (all ears :-) just in case I can't make it cross-platform right now), but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once it is formed: one has to copy the table to a new temporary name, adding a column in the process, then delete the original, then rename the temporary table to the original name.
This appears possible in SQL Server too, as long as there are no CASCADE operations. TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they all seem to trigger a cascade delete in the foreign table.
So, please correct me if I am wrong here: it appears that the operations would be along the lines of: a) remove the foreign key references; b) copy the table structure and make a new temp table, adding the column; c) copy the data over; d) add the FK relations that used to be on the first table to the new table; e) delete the original; f) done?
The questions are: a) How does one alter a table to remove the foreign key references if the constraint has no 'name'? b) Does anyone know of a good clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent. c) Any and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
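For questions (a) and (b), on SQL Server specifically, a sketch using the catalog views (the table name dbo.MyTable is a placeholder): every foreign key has a name even when it was system-generated, and sys.foreign_keys exposes it.

-- Find the names of the foreign keys that reference dbo.MyTable (placeholder name),
-- so they can be dropped before the copy/rename sequence and recreated afterwards.
SELECT fk.name                              AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable');

-- Each one can then be removed with a generated statement of the form:
-- ALTER TABLE dbo.ReferencingTable DROP CONSTRAINT [constraint_name_from_above];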
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
Table A is the parent of Table B and Table C, and Table B and Table C are both parents of Table D.
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "Delete Statement Conflicted with COLUMN REFERENCE". This does not make sense to me; can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row... isn't that the whole point of delete triggers?
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
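One pattern that is sometimes used for exactly this situation, sketched with the names from the post (A.ID, and MeetingID in B and C): an INSTEAD OF DELETE trigger replaces the delete entirely, so the child rows can be removed before the parent rows and the FK check never sees orphans. An AFTER (FOR) trigger, by contrast, only fires once the constraint check has already passed. Tables B and C would need the same treatment for their children in D.

-- Sketch only; assumes B.MeetingID and C.MeetingID reference A.ID.
CREATE TRIGGER DeleteA_Instead ON A
INSTEAD OF DELETE
AS
BEGIN
    -- Remove the children first (B and C would need similar INSTEAD OF triggers for D).
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);

    -- Now perform the delete the original statement asked for.
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);
END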
I'm trying to clean up a database design and I'm in a situation where two tables need a FK, but since it didn't exist before, there are orphaned records.
Tables are:
Brokers, and its PK is BID.
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete the record if there isn't a match for the BID value in the Brokers table.
I know this isn't correct syntax but should hopefully clear up what I'm asking
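A sketch of a set-based version (no row-by-row parsing needed), using the table and column names from the post:

-- Delete Broker_Rates rows whose BID has no matching row in Brokers.
DELETE br
FROM Broker_Rates AS br
WHERE NOT EXISTS (SELECT 1 FROM Brokers AS b WHERE b.BID = br.BID);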
This is my delete query no. 1:
ALTER TABLE ZT_Master DISABLE TRIGGER ALL
DELETE ZT_Master
WHERE TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
ALTER TABLE ZT_Master ENABLE TRIGGER ALL
I have trouble with delete query no. 2. Here is a select statement; I need to delete the rows it returns:
SELECT d.*
FROM ZT_Master m, ZT_Detail d
WHERE (m.Prikey = d.MasterKey)
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
I tried modifying it as below:
DELETE d.*
FROM ZT_Master m, ZT_Detail d
WHERE (m.Prikey = d.MasterKey)
  AND m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
but this doesn't work.
Can you please help? And can I combine these 2 SQL queries into one? Thank you.
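A sketch of a DELETE that works in T-SQL for query no. 2 ("DELETE d.*" is not valid; name the table alias instead), reusing the same predicate from the post:

-- Delete the ZT_Detail rows whose master row falls in the retention window.
DELETE d
FROM ZT_Detail AS d
JOIN ZT_Master AS m ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE())
                     - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);

-- The master rows would still be deleted by query no. 1 (children first, then masters),
-- so the two statements cannot simply be merged into a single DELETE.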
I'm using SqlDataSource and an Access database. Let's say I have two tables:
user: userID, username
message: userID, messagetext
Let's say a user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to do this? Here's my suggestion: I have made a "delete query" (with userID as a parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.NET page? If yes, what would the script look like? (Yes, it is a newbie question.)
The requirement is: I should allow single-row deletes from a table but not bulk deletes. An audit table should get updated if there is any single delete or single update. So I wrote the triggers as follows for single and bulk delete:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance] ON [dbo].[tbl_attendance] AFTER DELETE AS
[code]...
When I try to run the website, the database error I am getting is: "Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1."
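That error means @@TRANCOUNT was different when the module exited than when it entered, for example a BEGIN TRANSACTION without a matching COMMIT, or a ROLLBACK issued inside a trigger. A sketch of a delete trigger for this requirement, assuming a hypothetical audit table dbo.tbl_attendance_audit (the real audit columns are not shown in the post):

-- Sketch; dbo.tbl_attendance_audit and its columns are assumptions, not from the post.
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Block bulk deletes: more than one row in the deleted pseudo-table.
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        RAISERROR ('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;   -- ends the batch; nothing else should run after this
        RETURN;
    END

    -- Audit the single-row delete.
    INSERT INTO dbo.tbl_attendance_audit (action_type, audited_at)
    SELECT 'DELETE', GETDATE()
    FROM deleted;
END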
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products WHERE vendor='Foo' AND productId NOT IN ( SELECT productId FROM sql2000.staging.dbo.fooProductList )
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved was undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every conceivable combination of 2000 and 2005 servers, without any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
Online Help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it, but I receive the same message. "Okay, no problem, I'll just delete the report, redeploy it, and set up the subscription again so all the other users aren't affected," says I. "Oh, no!" says the report server, and then it gives me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName) at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item) at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId) at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch() --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
This one is weird and I am missing something fundamental on this one. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch ( @SubmitterTranID VARCHAR(50) )
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1  --DO NOT CHANGE THIS. The operation will be committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that the claims under this batch have not been adjudicated & it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE FROM INVOICECLAIMMAPPING
    WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED
                        WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsreceivedPayorServices
    WHERE ClmRecdPyID IN (SELECT ClmRecdPyID FROM ClaimsReceivedPayors
                          WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedPayors
    WHERE ClmRecdid IN (SELECT ClmRecdID FROM ClaimsReceived
                        WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION  --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indices and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch ( @SubmitterTranID VARCHAR(50) )
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1  --DO NOT CHANGE THIS. The operation will be committed only when this value is 1

SELECT @COUNT = COUNT(*)
FROM claimsreceived
WHERE (claimstatus NOT IN ('Keyed', 'Imported'))
  AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that the claims under this batch have not been adjudicated & it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE INVOICECLAIMMAPPING
    FROM INVOICECLAIMMAPPING
    JOIN CLAIMSRECEIVED ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
    WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsreceivedPayorServices
    FROM ClaimsreceivedPayorServices
    JOIN ClaimsReceivedPayors ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID
    WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsReceivedPayors
    FROM ClaimsReceivedPayors
    JOIN ClaimsReceived ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
    WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION  --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RAISERROR ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter whether the join or the subquery version is used? This one is making me depressed because I cannot explain it.
Need some advice solving a little problem I have with my database!
Current setup:
I have a person table that is made up of 39 columns. I also allow for person records to be deleted but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, Reason for deletion/suppression and a date time stamp. To access Live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in the LogicallyDeletedrecords. Similarly, I have another view DeadPersonData which contains Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I've been thinking of physically deleting dead Person records from my Person table (possibly creating an archive table to hold them). But then, if I delete them, how do I cross-check the details when new Person details get loaded? As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. Is the only way to add further columns which I can use to match Person details?
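For the load-time check specifically, a sketch using the names from the post (the staging table dbo.StagingPerson is a placeholder), which works whether the dead rows stay in Person or move to an archive table keyed by PersonId:

-- Exclude incoming rows whose PersonId is already in the logically deleted set.
INSERT INTO dbo.Person (PersonId /* , ...remaining columns... */)
SELECT s.PersonId /* , ... */
FROM dbo.StagingPerson AS s          -- hypothetical staging table used by the load routine
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedRecords AS d
                  WHERE d.PersonId = s.PersonId);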
There are two tables involved in replication: let's say table1, and the replicated table is rep.table1.
We are not deleting records physically in table1; instead a bit column in table1 is set to true when we want to delete a record. But the strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads and reports a hard delete operation and deletes the record in the replicated table, which is very crucial.
Please let me know where I am wrong and how to put this right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
I have several jobs in SQL 2005 that I cannot delete. I get "The DELETE statement conflicted with the REFERENCE constraint "FK_subplan_job_id". The conflict occurred in database "msdb", table "dbo.sysmaintplan_subplans", column 'job_id'." How can I fix this? TIA <JP>
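A sketch of one commonly used cleanup, assuming the jobs are leftovers from deleted maintenance plans (the job name is a placeholder); taking a backup of msdb first is prudent:

-- Placeholder job name; repeat for each job that cannot be deleted.
DECLARE @job_id uniqueidentifier;
SELECT @job_id = job_id FROM msdb.dbo.sysjobs WHERE name = N'MyOldMaintenancePlanJob';

-- Remove the maintenance-plan rows that still reference the job...
DELETE FROM msdb.dbo.sysmaintplan_log
WHERE subplan_id IN (SELECT subplan_id
                     FROM msdb.dbo.sysmaintplan_subplans
                     WHERE job_id = @job_id);
-- (If that delete is itself blocked, related rows in msdb.dbo.sysmaintplan_logdetail
--  may need to be removed first.)
DELETE FROM msdb.dbo.sysmaintplan_subplans WHERE job_id = @job_id;

-- ...then the job itself can be deleted.
EXEC msdb.dbo.sp_delete_job @job_id = @job_id;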
Hello, I just cannot think of the answer to this this morning and would appreciate some help on it. Scenario for these two tables:
Foo: FooID (PK), Description
Bar: BarID (PK), FooID (FK)
No cascade delete on the FK relationship; I want to handle it explicitly based on logic. I want to delete Bar for a specific value of BarID, and I also want to delete all Foo rows that are linked through Bar by the FK. I begin a transaction, and here is the issue: if I delete Foo first, I get a FK constraint violation, since there is a record in Bar corresponding to it. If I delete Bar first, the FK row no longer exists for the query to find the related rows in Foo. As silly as it sounds, I just cannot think of the answer right now; thanks for any help.
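A sketch of one way to do it with the names from the post (@BarID stands for the specific value): capture the FooID before deleting the Bar row, then delete in child-then-parent order.

-- Capture the parent key(s) first, then delete child and parent inside one transaction.
BEGIN TRANSACTION;

DECLARE @FooIds TABLE (FooID int);
INSERT INTO @FooIds (FooID)
SELECT FooID FROM Bar WHERE BarID = @BarID;

DELETE FROM Bar WHERE BarID = @BarID;

DELETE FROM Foo
WHERE FooID IN (SELECT FooID FROM @FooIds)
  AND NOT EXISTS (SELECT 1 FROM Bar b WHERE b.FooID = Foo.FooID);  -- skip if still referenced

COMMIT TRANSACTION;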
I have a database of posts which I want to delete after 30 days of being on the site. My code so far gets all the data out of the table and then, if there is more than 0 rows, loops through the table and finds all the posts made within the last 30 days and deletes them. My problem is how do I work out 30 days ago; for 30 days' time it's just "DateTime.Now.AddDays(30);" but I can't seem to do something as simple for going back in time. Here's my code:
private void DeleteOldStuff()
{
    string strSQL = "SELECT * FROM ForumPosts";
    SqlConnection Connection = new SqlConnection(ConfigurationManager.ConnectionStrings["BlinkConnectionString"].ConnectionString);
    Connection.Open();
    SqlCommand comm = new SqlCommand(strSQL, Connection);
    SqlDataAdapter da = new SqlDataAdapter(comm);
    tblData = new DataTable();
    da.Fill(tblData);
    Connection.Close();
    if (tblData.Rows.Count > 0)
    {
        for (int i = 0; i < tblData.Rows.Count; i++)
        {
            DataRow dr = tblData.Rows[i];
            if ("30DAYSAGO" <= dr["DateCreated"])
            {
                Connection.Open();
                SqlCommand Delete = new SqlCommand("DELETE FROM ForumPosts WHERE PostID = '" + dr["PostID"].ToString() + "'", Connection);
                Delete.ExecuteNonQuery();
                Connection.Close();
            }
        }
    }
}
Any help/advice is appreciated! Thanks, John
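For what it's worth, in C# the backwards direction is just DateTime.Now.AddDays(-30). A sketch of doing the whole thing set-based on the SQL Server side instead of the row-by-row loop, using the table and column names from the post:

-- Delete every post created more than 30 days ago, in a single statement.
DELETE FROM ForumPosts
WHERE DateCreated <= DATEADD(day, -30, GETDATE());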
Hello, I have 3 tables:
[Article] > ArticleId, ...
[Category] > CategoryId, ...
[CategoriesInArticles] > ArticleId, CategoryId
Given an @ArticleId, I need to:
1. Delete the record in [Article] with the given @ArticleId
2. Delete all records in [CategoriesInArticles] with the given @ArticleId
3. Delete the records in [Category] whose CategoryId was deleted in (2), BUT only if the CategoryId is not associated with other articles in [CategoriesInArticles].
How can I do this? Thanks, Miguel
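A sketch using the table and column names from the post (the order is 2, 1, 3, remembering the affected categories first):

BEGIN TRANSACTION;

-- Remember which categories the article used.
DECLARE @Cats TABLE (CategoryId int);
INSERT INTO @Cats (CategoryId)
SELECT CategoryId FROM CategoriesInArticles WHERE ArticleId = @ArticleId;

-- Steps 2 and 1: remove the associations, then the article.
DELETE FROM CategoriesInArticles WHERE ArticleId = @ArticleId;
DELETE FROM Article WHERE ArticleId = @ArticleId;

-- Step 3: remove only the categories no longer referenced by any article.
DELETE FROM Category
WHERE CategoryId IN (SELECT CategoryId FROM @Cats)
  AND NOT EXISTS (SELECT 1
                  FROM CategoriesInArticles cia
                  WHERE cia.CategoryId = Category.CategoryId);

COMMIT TRANSACTION;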
Hi guys, I wrote a SQL query to select the users in a role, and it's working fine. Now I am trying to delete users using the same query and I am getting an error. Here are the queries:
Select u.name from sysusers u, sysusers g, sysmembers m
where g.name = @rolename and g.uid = m.groupuid and g.issqlrole = 1 and u.uid = m.memberuid  --> working fine
Delete u.name from sysusers u, sysusers g, sysmembers m
where g.name = @rolename and g.uid = m.groupuid and g.issqlrole = 1 and u.uid = m.memberuid  --> getting an error
Here @rolename is a database role name. My question is: is this the correct way to delete role users and the role at the same time? If not, can anyone help me do it?
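For what it's worth, a sketch of the supported route (rather than deleting from the system tables directly), using SQL Server 2005 commands; 'SomeRole' and 'SomeUser' are placeholders:

-- Remove each member from the role, then drop the role itself.
EXEC sp_droprolemember @rolename = 'SomeRole', @membername = 'SomeUser';  -- repeat per member
DROP ROLE SomeRole;   -- succeeds once the role has no members

-- To remove the database users themselves (if that is really the intent):
-- DROP USER SomeUser;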
I am using the Issue Tracker Starter Kit, and not too long ago I migrated the DB from SQL 2000 to 2005. Now, in Microsoft Management Studio, when I try to delete a row from the IssueComments table I get the following error message: "No rows were deleted. A problem occurred attempting to delete row 4215. Error Source: Microsoft.VisualStudio.DataTools. Error Message: The updated rows have changed or been deleted since data was last retrieved. Correct the errors and attempt to delete the row again." If someone has explanations or advice on what I should look into to resolve my problem, I would appreciate it greatly. Thank you.