I have "sqlGetReservationMembers" datasource in my ascx code and another datasource in 'KillReservation'. As you can see "sqlGetReservationMembers" uses a SELECT command only. I have verified this several times. The 3rd block below is the only place in code-behind where "sqlGetReservationMembers" is referenced. When the "KillReservation" fires I get an error stating "Deleting is not supported by data source 'sqlGetReservationMembers' unless DeleteCommand is specified". I'm confused because I can't see where the 'sqlGetReservationMembers' delete event is being fired. I have removed the datasource and recreated it but the problems returns. What am I missing?
Thanks
<asp:SqlDataSource ID="sqlGetReservationMembers" runat="server"
ConnectionString="<%$ ConnectionStrings:SiteSqlServer %>"
SelectCommand="sp_GetReservations" SelectCommandType="StoredProcedure">
<SelectParameters>
<asp:ControlParameter ControlID="gvMyRides" Name="ID"
PropertyName="SelectedValue" Type="Int32" />
</SelectParameters>
</asp:SqlDataSource>
---------------------------------------------------------------------------------
Protected Sub KillReservation(ByVal RideID As Integer, ByVal MemberID As Integer)
    Using Myconnection As New SqlConnection(Config.GetConnectionString("SiteSqlServer"))
        Dim Mycommand As New SqlCommand("sp_KillReservation", Myconnection)
        Mycommand.CommandType = CommandType.StoredProcedure
        Mycommand.Connection.Open()
        ' Add the stored procedure parameters as SqlParameters
        Mycommand.Parameters.AddWithValue("@MemberID", MemberID)
        Mycommand.Parameters.AddWithValue("@RideID", RideID)
        Mycommand.ExecuteNonQuery()
        Myconnection.Close()
    End Using
End Sub
----------------------------------------------------------------------------------
Protected Sub gvMyRides_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles gvMyRides.SelectedIndexChanged
I am using the SqlDependency to notify me of data changes in a table. However, as soon as I start it, the OnChange event fires over and over even though no data has changed.
Essentially, all I want to do is be notified when records are inserted into a table. I am able to get another example to work using a Service Broker queue and a SQL query of WAITFOR ( RECEIVE CONVERT(int, message_body) AS msg FROM QueueMailReceiveQueue ). However, I would prefer not to use a queue, because once my messages have been read they are taken off the queue, and I would rather control that manually.
Any help with getting SqlDependency to notify my app when records are added and not over and over when the data has changed would be great.
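One thing worth checking (an assumption about the cause, since the subscribed query isn't shown): query notifications only work for statements that meet SQL Server's eligibility rules, e.g. an explicit column list and two-part table names, and no SELECT *. If the query is not eligible, OnChange fires immediately with SqlNotificationInfo reporting Invalid or Query rather than because data changed. A query of roughly this shape is eligible (dbo.MailQueue and its columns are hypothetical):

-- Eligible for a query notification: explicit columns, two-part table name
SELECT MailID, MailSubject, CreatedOn
FROM dbo.MailQueue;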
I have written some code to insert a record into the table in the BtnAdd click event, and I also have a GridView control to show the table records.
If I click the Add button, the record gets inserted into the table and shown in the grid correspondingly. But if I refresh the page, the same data gets inserted again, since the refresh fires the button click event again.
The SQL Express instance shows up as a source, but when I try to go to the next page (the one where you can enter the SQL ID and password) it does not allow the connection. The errors thrown are 'Connection Failed... 11004 and 6'. I can reach the server over a LAN but not over a VPN. I have configured SQL Express for TCP/IP and named pipes. We have a firewall, but I don't think this is the problem, as I have opened up the required port. Can you help?
Hi there, I have created a trigger which is supposed to do something before a record is deleted from its own table. Unfortunately, when I delete a record to test whether it performs the action (inserting some records into another table), I found it does not do what I wanted. :( The trigger is as below:
=======================
CREATE TRIGGER TG_D_AGENT ON dbo.Agent
FOR DELETE
AS
begin
declare @vAgentID as numeric,
    @vAgency as varchar(50),
    @vUnit as varchar(50),
    @vAgentCode as varchar(50),
    @vName as varchar(50),
    @vIC as varchar(14),
    @vAddress as varchar(100),
    @vContactNumber as varchar(50),
    @vDownlink as varchar(50),
    @vGSM as varchar(10),
    @vAM as varchar(10),
    @vDeleted_date as datetime

set @vDeleted_date = convert(datetime, convert(varchar(10), getdate(), 103), 103)

declare cur_policy_rec CURSOR for
    select AgentID, Agency, Unit, AgentCode, [Name], IC, Address, ContactNumber, Downlink, GSM, AM
    from inserted
open cur_policy_rec
fetch from cur_policy_rec into @vAgentID, @vAgency, @vUnit, @vAgentCode, @vName, @vIC, @vAddress, @vContactNumber, @vDownlink, @vGSM, @vAM
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO [Agent_history] (AgentID, Agency, Unit, AgentCode, Name, IC, Address, ContactNumber, Downlink, GSM, AM, Deleted_date)
    VALUES (@vAgentID, @vAgency, @vUnit, @vAgentCode, @vName, @vIC, @vAddress, @vContactNumber, @vDownlink, @vGSM, @vAM, @vDeleted_date)
    fetch from cur_policy_rec into @vAgentID, @vAgency, @vUnit, @vAgentCode, @vName, @vIC, @vAddress, @vContactNumber, @vDownlink, @vGSM, @vAM
end
deallocate cur_policy_rec
end
===============================
In Oracle, I can normally do something like this:
====================================
CREATE TRIGGER TG_D_AGENT
BEFORE DELETE ON dbo.Agent
FOR EACH ROW
begin
Is there such a thing as 'BEFORE' in MS SQL Server 2000? I'm not sure whether SQL Server has it or not. Please, someone help me on this; really appreciated if you can!
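Two notes, hedged since they restate general SQL Server behaviour rather than anything confirmed in the post: SQL Server 2000 has no BEFORE triggers, only FOR/AFTER and INSTEAD OF triggers, and inside a DELETE trigger the removed rows are exposed through the deleted pseudo-table (the trigger above reads from inserted, which is empty during a delete). The cursor also isn't needed; a set-based AFTER DELETE version of the same history insert might look like this sketch:

CREATE TRIGGER TG_D_AGENT ON dbo.Agent
FOR DELETE   -- fires after the rows are removed
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO Agent_history
        (AgentID, Agency, Unit, AgentCode, Name, IC, Address,
         ContactNumber, Downlink, GSM, AM, Deleted_date)
    SELECT d.AgentID, d.Agency, d.Unit, d.AgentCode, d.[Name], d.IC, d.Address,
           d.ContactNumber, d.Downlink, d.GSM, d.AM,
           convert(datetime, convert(varchar(10), getdate(), 103), 103)
    FROM deleted AS d;   -- deleted holds the old row values for a DELETE
END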
I am generally a C programmer and have little experience with DB programming, so I apologize right from the start.
I have a table that allows a registration item with a verification item that will be NULL when the item is created. What I wish to do is delete these if the verification item is still NULL after a specified time (say 24 to 48 hours).
I would prefer to do this with some sort of trigger in one of two ways and any suggestions are much appreciated. 1. Have a stored procedure or function (not sure which is best to utilize) that runs every 2 days which checks the table for the NULL values and deletes any older than 48 hours. 2. Create a stored procedure or function when the registration is made that will check if the verification is still NULL for that particular record 24 hours later. I would think the first would be the simplest, but am not certain.
Again, forgive my inexperience with DB programming and again I appreciate any advice given.
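The first option maps naturally onto a stored procedure run by a SQL Server Agent job on a schedule. A sketch under assumed names (dbo.Registration, Verification and CreatedOn are placeholders for the real table and columns):

CREATE PROCEDURE dbo.PurgeUnverifiedRegistrations
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove registrations still unverified after 48 hours
    DELETE FROM dbo.Registration
    WHERE Verification IS NULL
      AND CreatedOn < DATEADD(hour, -48, GETDATE());
END

A SQL Server Agent job scheduled every day or two can then simply EXEC dbo.PurgeUnverifiedRegistrations.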
I want a trigger in db aaa to fire when table a_aaa is updated on server a, and to populate table b_bbb in db bbb on server b with data. I know how to write a trigger when everything stays on one server with one db, but I don't know how to do it between two servers and two dbs.
Server a: Windows Server 2003 Standard Edition, SQL Server 2000, db: aaa, table: a_aaa, column: a_aaa_a
Server b: Windows Server 2003 Standard Edition, SQL Server 2000, db: bbb, table: b_bbb, column: b_bbb_b
It's not working because the syntax is incorrect and because I am not sure how to do it between two servers. If it is not correct, where am I wrong? Where should each line be located? Et cetera... Can anyone help?
Thanks in advance.
CREATE TRIGGER [enddate_changed_on_alert] ON [dbo].[USER_DATA]
FOR INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON;

DECLARE @EndDate as DATE
DECLARE @CompCode as VARCHAR(15)
DECLARE @PhoneGMSM as VARCHAR(10)
DECLARE @PhoneALERT as VARCHAR(10)

SELECT @PhoneALERT = phone1 from AUSSMEDEVIQ1.QDB_Test.USER_DATA;
SELECT @PhoneGMSM = PHONE1 from AUSADFORMULA02.QDB_KS.dbo.USER_DATA_DEVIQ;

UPDATE AUSADFORMULA02.QDB_KS.dbo.USER_DATA_DEVIQ
SET EXT4 = "111"
WHERE AUSSMEDEVIQ1.QDB_Test.USER_DATA.@Phone = AUSADFORMULA02.QDB_KS.dbo.USER_DATA_DEVIQ;
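For reference, and as general SQL Server behaviour rather than anything confirmed in the post: if server b is added as a linked server on server a (for example with sp_addlinkedserver), a trigger on a_aaa can reference the remote table with a four-part name. A minimal sketch, where the linked server name SERVER_B is hypothetical; note that a remote INSERT inside a trigger typically promotes it to a distributed transaction, so MSDTC must be running and configured on both servers:

CREATE TRIGGER tr_a_aaa_update ON dbo.a_aaa
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Four-part name: linked_server.database.owner.table
    INSERT INTO SERVER_B.bbb.dbo.b_bbb (b_bbb_b)
    SELECT i.a_aaa_a
    FROM inserted AS i;   -- inserted holds the new (post-update) row values
END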
In Pablo Castro's webcast, First Look at ADO.NET 2.0, he mentions the use of UpdateBatchSize, which I think would be handy. However, I was not able to get it to work.
Dim t As New Global.System.Data.SqlClient.SqlDataAdapter("Select * from Table_1", Global.System.Configuration.ConfigurationManager.ConnectionStrings("ConnectionString1").ConnectionString)
Dim d As New Global.System.Data.DataTable
t.FillSchema(d, SchemaType.Source)
Dim cb As New Global.System.Data.SqlClient.SqlCommandBuilder(t)
For c As Int32 = 1 To 10000
    Dim r As Global.System.Data.DataRow = d.NewRow
    r("id") = Global.System.Guid.NewGuid
    r("val") = CType(Rnd(), Int32)
    d.Rows.Add(r)
Next
t.UpdateBatchSize = 100
t.Update(d)
d.Dispose()
t.Dispose()
What ends up showing in SQL Server Profiler (SQL Server 2005) is each INSERT being executed in its own statement:

exec sp_executesql N'INSERT INTO [Table_1] ([id], [val]) VALUES (@p1, @p2)',N'@p1 uniqueidentifier,@p2 int',@p1='3793CB5E-3B7A-45E7-9A53-0BD528BB6B07',@p2=1

I think I made this example very simple and yet I can't fathom why it won't batch the statements. I'm somewhat aware of SqlBulkCopy and was very pleased with its speed, but I don't think it would handle the Update, Delete and Insert operations that SqlDataAdapter.Update would. I originally started using xsd DataSets until I saw the data tutorials in the Learning section here, at which point I copied out the autogenerated TableAdapter classes and have been fiddling with them to do what I want, since batching was not something I saw in TableAdapters. Does anyone see what I'm missing here?
Nathan
I have an update trigger that works wonderfully as long as I am updating the row in Enterprise Manager, but if I update the same column and row using an UPDATE statement in Query Analyzer, the trigger doesn't fire.
Dear Experts,
I'm an Oracle guy who is being given more SQL Server assignments lately. I've been looking for things on the web about this, but I can't find anything so far. In Oracle, you can create a trigger on a table that only fires if certain fields are updated:

create or replace trigger trg_some_trigger
BEFORE insert OF some_field1, some_field2
on tbl_some_table
for each row
...

Is this also possible in SQL Server? What is the syntax please? Thanks a lot!
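SQL Server has no BEFORE triggers, but an AFTER UPDATE trigger can test which columns were assigned in the UPDATE statement using the UPDATE() function (or COLUMNS_UPDATED() for a bitmask). Note that UPDATE(col) is true whenever the column appears in the SET list, even if the value did not actually change. A minimal sketch reusing the names from the Oracle example; the body is placeholder logic:

CREATE TRIGGER trg_some_trigger ON tbl_some_table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(some_field1) OR UPDATE(some_field2)
    BEGIN
        -- React only when one of these columns was part of the SET list;
        -- inserted/deleted expose the new/old row versions.
        SELECT i.some_field1, i.some_field2
        FROM inserted AS i;
    END
END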
My SQL Server 2005 SP4 on Windows 2008 R2 is flooded with the errors below:
Date: 10/25/2011 10:55:46 AM
Log: SQL Server (Current - 10/25/2011 10:55:00 AM)
Source: spid
Message: Event Tracing for Windows failed to send an event. Send failures with the same error code may not be reported in the future. Error ID: 0, Event class ID: 54, Cause: (null).
Is there a way I can trace where this is coming from? When I check the input buffer for these spids, it looks like it is tracing everything. All the general application DMLs are coming in on these spids.
I have been testing with the WMI Event Watcher Task, so that I can identify a change to a file. The WQL is thus:
SELECT * FROM __InstanceModificationEvent within 30 WHERE targetinstance isa 'CIM_DataFile' AND targetinstance.name = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
This polls every 30 secs, and in the SSIS event (ActionAtEvent in the WMI task is set to fire the SSIS event) I have a simple Script Task that shows a message box.
My understanding is that the task polls every 30 seconds and, if there has been a change to the AdventureWorks.bak file, the event is triggered and the Script Task runs, producing the message. However, when I run the package the message appears every 30 seconds, meaning the event is continually firing even though there has been NO change to the AdventureWorks.bak file.
Am I correct in my understanding of how this should work, and if so, why is the event firing when it should not?
Server 2003 SE SP1 (5.2.3790), SQL Server 2000 SP4, 8.00.2187 (latest hotfix rollup). We fixed one issue, but it brought up another. The fix we applied stopped the ServicesActive access failure, but now we have a failure on MSSEARCH. The users this is affecting do NOT have admin rights on the machine; they are SQL developers. We were having:
Event Type: Failure Audit
Event Source: Security
Event Category: Object Access
Event ID: 560
Date: 5/23/2007
Time: 6:27:15 AM
User: domain\user
Computer: MACHINENAME
Description: Object Open:
  Object Server: SC Manager
  Object Type: SC_MANAGER OBJECT
  Object Name: ServicesActive
  Handle ID: -
  Operation ID: {0,1623975729}
  Process ID: 840
  Image File Name: C:\WINDOWS\system32\services.exe
  Primary User Name: MACHINE$
  Primary Domain: Domain
  Primary Logon ID: (0x0,0x3E7)
  Client User Name: User
  Client Domain: Domain
  Client Logon ID: (0x0,0x6097C608)
  Accesses: READ_CONTROL, Connect to service controller, Enumerate services, Query service database lock state
I need to parse serverName and databaseName to build a dynamic query that gets accountID and employee_ID via an accountID parameter.
-----------------------------
declare @stringSQL varchar(200)
select @stringSQL = 'insert into temp1 select ' + @AccountID + ' accountID, employee_ID from ' + @serverName + '.dbo.' + @databaseName + '.tblEmployee where inactive=0'
print @stringSQL
exec (@stringSQL)
select * from temp1
------------------------------
The above dynamic query works fine. However, it should run only on an insert event. When I put it in a proc to run within the insert trigger, or put the whole SQL statement inside the trigger:
1. When run on an MSDE server, MSDTC on the server is unavailable.
2. When run on a SQL 2000 developer testing server with the Distributed Transaction Coordinator on, the insert of a record in isql/w just hangs. I could not even kill the query, and had to stop and restart the SQL Server.
Then I just tried to return the dynamic query result without the 'insert into temp1', and it still hangs... Is there a way to let the insert trigger run a dynamic query that links to around 10 servers?
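One common workaround (a sketch, not something from the post) is to keep the trigger local and defer the linked-server work: the trigger only records the inserted keys in a local staging table, and a scheduled job or procedure later runs the dynamic distributed query outside the insert's transaction, which avoids promoting every insert to an MSDTC transaction. Table and column names here are hypothetical:

-- Trigger stays local and fast
CREATE TRIGGER trg_Account_Insert ON dbo.Account
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.AccountQueue (AccountID, QueuedAt)
    SELECT i.AccountID, GETDATE()
    FROM inserted AS i;
END

-- A SQL Agent job then processes dbo.AccountQueue and runs the dynamic
-- linked-server INSERT ... SELECT for each queued AccountID.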
Normally we rebuild or reorganize indexes when it is required. I set up a SQL job using a maintenance plan to run daily and rebuild/reorganize indexes and update statistics, but I do not know whether it checks that this is required. Does the plan only rebuild the indexes that actually need rebuilding, or does it run against them whether they need it or not?
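The built-in Rebuild Index and Reorganize Index maintenance plan tasks, at least in older versions, act on every index they are pointed at and do not check fragmentation first. If you only want to touch indexes that need it, the usual approach is to query sys.dm_db_index_physical_stats and decide per index. A minimal sketch (the 5%/30% thresholds are the commonly quoted guideline, not a hard rule):

SELECT OBJECT_NAME(ips.object_id)            AS table_name,
       i.name                                AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'leave alone'
       END                                   AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;   -- skip heaps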
We recently upgraded to SQL 2005 from SQL 2000. We have most of our issues ironed out; however, about every minute there is a message in the Application event log and the SQL log that states:
EVENT ID 18456: Login failed for user 'DOMAIN\ACCOUNT' [CLIENT: <local machine>]
This is a state 16 message which I thought meant that the account does not have access to the default database. The account is actually the account that the SQL services run under.
Any ideas? We can't seem to figure this one out. We actually upgraded to 2005 from 2000 and had an error appear after every reboot that prevented the SQL Agent from running ("This application has failed to start because GAPI32.dll was not found. Re-installing the application may fix this problem."). We did a full uninstall of SQL, reinstalled fresh and restored the databases from .bak files, and that is when the EVENT ID 18456 errors started occurring every minute.
We don't have any SQL heavy hitters here, so please be detailed with any possible solutions. Thank you very much for any help you can provide!
I have some simple maintenance plans, but they are failing because the Delete History task is failing: it is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so I can get the Maintenance Plans to run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep on soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get the SQL scripts that would work for updating a piece of software on SQL Server to be portable to other DBMS systems. Doing it in code, rather than a SQL batch, has a chance of producing cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the approach must work for the lowest common denominator... i.e., a 'recipe' of steps that will work on all DBMSs.
The problem: There might be simpler ways to do this with SQL Server (all ears :-) - just in case I can't make it cross-platform right now) but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP table to the original name.
This appears possible in SQL Server too -- as long as there are no CASCADE operations. TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be along the lines of: a) Remove the Foreign Key references b) Copy the table structure, and make a new temp table, adding the column c) Copy the data over d) Add the FK relations, that used to be in the first table, to the new table e) Delete the original f) Done?
The questions are: a) How does one alter a table to REMOVE the foreign key references if the constraint has no explicit 'name'? b) Does anyone know of a good, clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS dependent. c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
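On the SQL Server side specifically (not cross-platform), foreign keys always have a name, even when it was system-generated, and the catalog views expose enough detail to drop and later recreate them. A sketch of listing the constraints that reference a table, where dbo.MyTable is a hypothetical name:

-- List FK constraints that point at MyTable, with enough detail to script DROP/ADD
SELECT fk.name                                AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)       AS referencing_table,
       pc.name                                AS referencing_column,
       OBJECT_NAME(fk.referenced_object_id)   AS referenced_table,
       rc.name                                AS referenced_column,
       fk.delete_referential_action_desc
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
  ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS pc
  ON pc.object_id = fkc.parent_object_id AND pc.column_id = fkc.parent_column_id
JOIN sys.columns AS rc
  ON rc.object_id = fkc.referenced_object_id AND rc.column_id = fkc.referenced_column_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable');

-- Each constraint can then be removed with:
-- ALTER TABLE <referencing_table> DROP CONSTRAINT <constraint_name>;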
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
      Table A
      /     \
Table B     Table C
      \     /
      Table D
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "Delete Statement Conflicted with COLUMN REFERENCE". This does not make sense to me, can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row...isn't that the whole point of delete triggers?????
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
    Delete from B where MeetingID = ID;
    Delete from C where MeetingID = ID;
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
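One point worth noting (general SQL Server behaviour, not stated in the post): FOR/AFTER triggers run after the DELETE has already been checked against the foreign keys, which is why the constraint error appears before the trigger ever executes. An INSTEAD OF DELETE trigger runs in place of the delete, so it can remove the children first and then the parent while the FKs stay enforced. A sketch along those lines, where the MeetingID/ID columns come from the example trigger and the BID/CID columns in D are assumptions:

CREATE TRIGGER DeleteA_Instead ON A
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Grandchildren in D first (assumes D references B and C by their IDs)
    DELETE FROM D WHERE BID IN (SELECT ID FROM B WHERE MeetingID IN (SELECT ID FROM deleted));
    DELETE FROM D WHERE CID IN (SELECT ID FROM C WHERE MeetingID IN (SELECT ID FROM deleted));

    -- Then the children in B and C
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);

    -- Finally the parent rows the original DELETE targeted
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);
END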
I'm trying to clean up a database design, and I'm in a situation where two tables need a FK, but since it didn't exist before, there are orphaned records.
Tables are:
Brokers and it's PK is BID
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete any record that doesn't have a matching BID in the Brokers table.
I know this isn't correct syntax but should hopefully clear up what I'm asking
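A sketch of the kind of statement being described, deleting Broker_Rates rows whose BID has no match in Brokers; it is worth running the SELECT version first (or wrapping the delete in a transaction) to confirm what will be removed:

-- Preview the orphans first
SELECT br.*
FROM Broker_Rates AS br
WHERE NOT EXISTS (SELECT 1 FROM Brokers AS b WHERE b.BID = br.BID);

-- Then delete them
DELETE br
FROM Broker_Rates AS br
WHERE NOT EXISTS (SELECT 1 FROM Brokers AS b WHERE b.BID = br.BID);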
This is my Delete Query No 1:

alter table ZT_Master disable trigger All

Delete ZT_Master
WHERE TDateTime >= DATEADD(month, DATEDIFF(month, 0, getdate()) - (select Keepmonths from ZT_KeepMonths where id = 1), 0)
  AND TDateTime < DATEADD(month, DATEDIFF(month, 0, getdate()), 0)

alter table ZT_Master enable trigger All

I have trouble with Delete Query No 2. Here is a select statement; I need to delete the rows it returns:

select d.*
from ZT_Master m, ZT_Detail d
where (m.Prikey = d.MasterKey)
  And m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, getdate()) - (select Keepmonths from ZT_KeepMonths where id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, getdate()), 0)

I tried modifying it as below:

delete d.*
from ZT_Master m, ZT_Detail d
where (m.Prikey = d.MasterKey)
  And m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, getdate()) - (select Keepmonths from ZT_KeepMonths where id = 1), 0)
  AND m.TDateTime < DATEADD(month, DATEDIFF(month, 0, getdate()), 0)

but this doesn't work.
Can you please help? And can I combine these two SQL queries into one? Thank you.
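For reference, in T-SQL the DELETE names its target (an alias works) before its own FROM clause, so the join version of query No 2 would look something like the sketch below; this is standard SQL Server syntax rather than anything taken from the post:

DELETE d
FROM ZT_Detail AS d
JOIN ZT_Master AS m ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, getdate()) - (select Keepmonths from ZT_KeepMonths where id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, getdate()), 0);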
I'm using SqlDataSource and an Access database. Let's say I have two tables:

user: userID, username
message: userID, messagetext

Let's say a user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to do this? Here's my suggestion: I have made a "delete query" (with userID as a parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.NET page? If yes, what would the script look like? (Yes, it is a newbie question.)
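The data source's DeleteCommand could run the same parameterized statement the Access query uses; a minimal sketch of that SQL (the placeholder syntax differs between the OLE DB/Access '?' form and SQL Server's named parameters):

-- Access / OLE DB placeholder style
DELETE FROM message WHERE userID = ?;

-- SQL Server named-parameter style
DELETE FROM message WHERE userID = @userID;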
The requirement is: I should allow single-row deletes from a table, but not bulk deletes. An audit table should get updated if there is any single delete or single update. So I wrote the triggers as follows, for single and bulk delete:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
[code]...
When I try to run the website, the database error I am getting is: "Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1."
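For what it's worth, this kind of rule is often written as a single AFTER DELETE trigger that counts the rows in the deleted pseudo-table, rolls back multi-row deletes, and audits single-row ones; a BEGIN TRANSACTION left open inside the trigger (or a procedure it calls) is a common source of the "transaction count after EXECUTE" error. A sketch with a hypothetical audit table and column names:

ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        -- Bulk delete: report the violation and undo the statement
        RAISERROR('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END

    -- Single-row delete: record it (tbl_attendance_audit and its columns are assumptions)
    INSERT INTO dbo.tbl_attendance_audit (attendance_id, deleted_at)
    SELECT d.attendance_id, GETDATE()
    FROM deleted AS d;
END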
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products WHERE vendor='Foo' AND productId NOT IN ( SELECT productId FROM sql2000.staging.dbo.fooProductList )
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved were undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every thinkable combination of 2000 and 2005 servers, that don't give me any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
The online help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it, but I receive the same message. "Okay, no problem, I'll just delete the report and redeploy it and set up the subscription so all the other users aren't affected," says I. "Oh, no!", says the report server, and then it gives me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery()
   at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName)
   at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item)
   at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId)
   at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch()
   --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
This one is weird and I am missing something fundamental on this one. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch
(
    @SubmitterTranID VARCHAR(50)
)
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1

select @COUNT = COUNT(*) from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported')) AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE FROM INVOICECLAIMMAPPING WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsreceivedPayorServices where ClmRecdPyID in (SELECT ClmRecdPyID FROM ClaimsReceivedPayors WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedPayors WHERE ClmRecdid in (SELECT ClmRecdID FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID)
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indexes and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch
(
    @SubmitterTranID VARCHAR(50)
)
AS
DECLARE @COUNT INT, @COMMIT INT

SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1

select @COUNT = COUNT(*) from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported')) AND SubmitterTranID = @SubmitterTranID

IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete
BEGIN
    BEGIN TRANSACTION

    DELETE INVOICECLAIMMAPPING
    FROM INVOICECLAIMMAPPING
    JOIN CLAIMSRECEIVED ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
    WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsreceivedPayorServices
    FROM ClaimsreceivedPayorServices
    JOIN ClaimsReceivedPayors ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID
    WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE ClaimsReceivedPayors
    FROM ClaimsReceivedPayors
    JOIN ClaimsReceived ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
    WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
    IF (@@ERROR <> 0) SET @COMMIT = 0

    IF (@COMMIT = 1)
    BEGIN
        --ROLLBACK TRANSACTION --For Testing Purpose ONLY
        COMMIT TRANSACTION
        RETURN (0)
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
        RETURN (-1)
    END
END
ELSE
BEGIN
    RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated after the change.
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter between the join and the subquery? This one is making me depressed because I cannot explain it.
Need some advice solving a little problem I have with my database!
Current setup:
I have a Person table that is made up of 39 columns. I also allow Person records to be deleted, but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, the reason for deletion/suppression, and a date-time stamp. To access live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in LogicallyDeletedrecords. Similarly, I have another view, DeadPersonData, which contains the Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I've been thinking of physically deleting dead Person records from my Person table (possibly creating an archive table to hold them). But then, if I delete them, how do I cross-check the details when new Person details get loaded? As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. Is the only way to add further columns which I can use to match Person details?
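On the load side specifically, the cross-check against logically deleted people is usually just an anti-join, which stays fast as long as LogicallyDeletedrecords has an index (ideally its primary key) on PersonId. A sketch, where Person_Staging and the column list are assumptions for illustration:

-- Load only people who have never been logically deleted
INSERT INTO dbo.Person (PersonId, Forename, Surname)
SELECT s.PersonId, s.Forename, s.Surname
FROM dbo.Person_Staging AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.LogicallyDeletedrecords AS d
    WHERE d.PersonId = s.PersonId
);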