I need to delete a couple of rows from sysindexes. The code worked fine in SQL 2000, but 2005 no longer supports direct updates to system tables. What's the workaround? Thanks.
USE [dbname]
DELETE FROM sysindexes WHERE name LIKE '_va_Sys_%'
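A hedged workaround, assuming the rows being targeted are auto-created statistics (the usual reason names like this show up in sysindexes): SQL Server 2005 expects DROP STATISTICS rather than deletes against the system table. A minimal sketch that generates the drop statements for review first:

USE [dbname];
SELECT 'DROP STATISTICS ' + QUOTENAME(OBJECT_NAME(s.object_id)) + '.' + QUOTENAME(s.name)
FROM sys.stats AS s
WHERE s.auto_created = 1
  AND s.name LIKE '[_]WA[_]Sys[_]%';   -- adjust the pattern to match the names in question
-- Copy out the generated DROP STATISTICS statements and run only the ones you actually want.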
When you have a table with no indexes you can see the table name in sysindexes, and the relevant data in the rest of the fields, with an indid of 0. I understand this bit... but when you have created an index and then you delete it again, why does the row in sysindexes just have the table name and no other data?
I have read here -- http://www.swynk.com/friends/achigrik/RowCount.asp -- that it is better to use sysindexes to get a row count. Can I be assured that this is always up to date?
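For reference, the usual sysindexes row-count query looks roughly like this (a sketch; as the posts below note, the figure is maintained lazily and can drift, which is exactly the concern):

SELECT o.name AS table_name, i.rows
FROM sysobjects AS o
JOIN sysindexes AS i ON i.id = o.id
WHERE o.xtype = 'U'
  AND i.indid < 2;   -- indid 0 = heap, 1 = clustered index; either carries the table's row count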
It seems my sysindexes table is inaccurate for a nonclustered index. In my case the row counts (rows and rowcnt) do not match the actual row count of the table. UPDATE STATISTICS doesn't change rows or rowcnt, and adding 'FULLSCAN' won't budge them either.
After I did a DBCC DBREINDEX the number of rows matched; however, after adding rows to the table, both rows and rowcnt are out of sync again.
It's a fairly straightforward table: no triggers, no computed columns, only integer, datetime, varchar and bigint columns. There's a clustered index on a bigint column and a nonclustered index on an integer column.
DBCC SHOW_STATISTICS shows that the nonclustered index is updated, and its Rows and Rows Sampled values match the number of rows in the table (not the figure in sysindexes).
I'd like to know if I'm chasing ghosts here or if there's something very wrong. What could be causing the counts to be inaccurate? Can anyone shed some light?
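A hedged suggestion: the rows/rowcnt values in sysindexes are corrected by DBCC UPDATEUSAGE rather than by UPDATE STATISTICS, so something along these lines (database and table names are placeholders) is the usual way to resynchronise them:

DBCC UPDATEUSAGE ('YourDatabase', 'dbo.YourTable') WITH COUNT_ROWS;
-- COUNT_ROWS recalculates the row count as well as the page counts;
-- without it only the space-usage figures are corrected.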
Hi all. Is there any way to remove inconsistencies from the sysindexes table? I have already used all the options of CHECKDB as well as CHECKTABLE, but in vain. Thanks in advance.
Hi, Please note that I'm having the below problem:
1- When I run "DBCC CHECKDB ('DBName') WITH ALL_ERRORMSGS" I get: Could not read and latch page (1:173) with latch type SH. sysindexes failed.
2- then : select * from sysindexes
Gave me: I/O error (torn page) detected during read at offset 0x0000000015a000 in file 'C:\Data\Databases\Old_Data.MDF'.
Connection Broken
3- dbcc checktable ('sysindexes') Could not read and latch page (1:173) with latch type SH. sysindexes failed. Server: Msg 8939, Level 16, State 1, Line 1 Table error: Object ID 2, index ID 0, page (1:173). Test (IS_ON (BUF_IOERR, bp->bstat) && bp->berrcode) failed. Values are 2057 and -1.CHECKTABLE found 0 allocation errors and 1 consistency errors in table 'sysindexes' (object ID 2).
4- I can't use a backup because it is old. Can I copy system tables from the old database to the current one that we are using?
I tried to export and import data from this corrupted database but it would give me errors...
Is there any way that I can repair this database? Please help, urgently...
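A hedged sketch of the standard next step, with the caveat that corruption in sysindexes (object ID 2 is a system table) frequently cannot be repaired at all and ends in a restore, or in scripting out and exporting whatever is still readable into a fresh database. Copying system tables between databases by hand is not a supported option and tends to make things worse.

ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('DBName', REPAIR_ALLOW_DATA_LOSS);   -- the repair level CHECKDB asked for; may discard the damaged page
ALTER DATABASE [DBName] SET MULTI_USER;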
I am trying to attach a database that was created on another server. I believe the database was created using SQL Server Management Studio Express 2005. Now, I have successfully attached the database in my development environment, which is using SQL Server Management Studio Express 2005. However, when I tried to attach the database on our production server, which is SQL Server 2000, I received the error: Error 602: Could not find row in sysindexes for database ID 18, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes. In my development environment I ran DBCC CHECKDB on the source database and no errors were returned. I also checked the compatibility level under the database properties and it shows SQL Server 2000 (80), so this should not be a version incompatibility issue. What is causing the attach to fail on SQL Server 2000? Thanks for any help.
Does anyone know what status 8388704 represents? The table in question is a heap. There are two of these on the table, and they don't show up in EM but are listed in sp_help. They also have weird names associated with their entries in sysindexes "_WA_Sys_CUST_PO_NBR_0F975522" and "_WA_Sys_ORD_STUS_CODE_0F975522"... I've searched everywhere... HELP! Thanks
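A hedged note: rows with _WA_Sys_ names in sysindexes are normally SQL Server's auto-created column statistics rather than real indexes, which is also why they never appear in Enterprise Manager. A quick sketch to confirm what a given row is (table name is a placeholder):

SELECT name,
       INDEXPROPERTY(id, name, 'IsStatistics')     AS IsStatistics,
       INDEXPROPERTY(id, name, 'IsAutoStatistics') AS IsAutoStatistics,
       INDEXPROPERTY(id, name, 'IsHypothetical')   AS IsHypothetical
FROM sysindexes
WHERE id = OBJECT_ID('YourTable')
  AND indid BETWEEN 1 AND 254;   -- skips the heap row (indid 0) and the text/image row (indid 255)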
Can anyone explain why, when I look at a table using Enterprise Manager (highlight the table, All Tasks, Manage Indexes), only 1 index appears, yet when I look at the same table in sysindexes it says that there are 8 indexes? This is the SQL code I executed: select object_name(id), indid from sysindexes where object_name(id) = 'tbh_matter_summ'
Is it possible that there is a problem with the database?
Hi, I got data corruption after running CheckDB and it cannot be repaired: ------------- Server: Msg 8966, Level 16, State 1, Line 1 Could not read and latch page (1:273) with latch type SH. sysindexes failed. Server: Msg 8944, Level 16, State 1, Line 1 Table Corrupt: Object ID 2, index ID 0, page (1:273), row 9. Test (columnOffsets->offTbl [varColumnNumber] <= (nextRec - pRec)) failed. Values are 331 and 120. DBCC results for 'abtrepository'. CHECKDB found 0 allocation errors and 1 consistency errors in table 'sysindexes' (object ID 2). CHECKDB found 0 allocation errors and 1 consistency errors in database 'abtrepository'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (abtrepository repair_fast). DBCC execution completed. If DBCC printed error messages, contact your system administrator. -------------- Please help!
Please help. 1. My database cannot truncate its log. I used this command: BACKUP LOG IMVDB2 WITH NO_LOG, and after that I received the result below -> The log was not truncated because records at the beginning of the log are pending replication. Ensure the Log Reader Agent is running or use sp_repldone to mark transactions as distributed.
2. So I dropped all replication, but it still returns the same error. I then tried DBCC CHECKDB --> no error message; DBCC CHECKALLOC --> no error message; DBCC CHECKTABLE ('sysindexes') --> no error message. But when I ran 3. DBCC PAGE (IMVDB2, 1, 13015, 3) I received --> Server: Msg 2591, Level 16, State 14, Line 1 Could not find row in sysindexes with index ID 2 for table 'object ID (334220541)'. DBCC execution completed. If DBCC printed error messages, contact your system administrator.
So what can I do next to fix this problem? Please help.
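A hedged sketch of the usual fix for the 'pending replication' truncation block, run in the affected database: sp_repldone with these arguments marks everything in the log as distributed, and sp_removedbreplication then clears any leftover replication metadata.

USE IMVDB2;
EXEC sp_repldone NULL, NULL, 0, 0, 1;       -- xactid, xact_seqno, numtrans, time, reset = 1: mark all pending transactions as distributed
EXEC sp_removedbreplication 'IMVDB2';
BACKUP LOG IMVDB2 WITH NO_LOG;              -- the truncation that was failing before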
I have some errors in my DB. We do have a backup plan in place, but the person who put it in place is no longer here. It's all automated and running. The problem is we do A LOT of transactions a day, and the error has been in the db since 9/20 (2.5 weeks), when someone shut the power off accidentally. So I'm reluctant to do a backup, mainly since I have NO idea how to do one; we do full and incremental, full on the weekend, incremental on weeknights.
Below is the biggest problem, SYSINDEXES :(. A few other tables had problems, but DBCC with repair_allow_data_loss has fixed those with 'hardly' any data loss. What can I do here?
Thanks in advance
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID 2, index ID 0, page ID (1:4917). The PageId in the page header = (1:4925). Server: Msg 8928, Level 16, State 1, Line 1 Object ID 2, index ID 255: Page (1:4917) could not be processed. See other errors for details. DBCC results for 'sysindexes'. There are 485 rows in 30 pages for object 'sysindexes'. CHECKTABLE found 0 allocation errors and 2 consistency errors in table 'sysindexes' (object ID 2).
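Since repair cannot rebuild a damaged sysindexes (object ID 2 is a system table), the realistic options are restoring from the existing backup chain or moving the data out into a new database. A hedged outline of the restore path; the file names are illustrative, the real files come from the automated plan, and 'incremental' is assumed here to mean differential backups:

RESTORE DATABASE [YourDB] FROM DISK = N'D:\Backups\YourDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE [YourDB] FROM DISK = N'D:\Backups\YourDB_diff.bak' WITH NORECOVERY;   -- the most recent differential
RESTORE LOG [YourDB] FROM DISK = N'D:\Backups\YourDB_log.trn' WITH RECOVERY;           -- plus any log backups taken since
-- Restoring under a different database name first (adding MOVE clauses for the files)
-- lets you check the contents before touching the live copy.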
I need to write a script which will respond differently if a table's index forces uniqueness.
I am using SQL 7.0 and I am using the [sysindexes] table of my database to examine those indexes. I am using the [id] field from the [sysobjects] table for the table I am examining and I am using that [id] field to find like-valued [id] fields in the [sysindexes] table.
My problem is that I can't seem to determine, from an examination of the [sysindexes] table, whether, or not, an index is unique.
When adding an index, and checking the unique property of that index, the values that I get in the [status] field of [sysindexes] are different from the values which are supposed to show, at least according to the sysindexes (T-SQL) entry in SQL Server Books Online.
For instance, I have a table which combines two text fields which comprise the only key on a table. The unique and clustered boxes are checked in the index setup screen. When I look in the [sysindexes] table, the value in the [status] field is 2113554, which is not a value I see in the books online page. According to my books online page, a unique index should have a value of 2 in the [status] field and a clustered index should have a value of 16 in the [status] field. My assumption is that I should see a value of 18 in that [status] field, not 2113554.
It looks like the Books Online entry might be out of date, because the field that is labeled [reserved1] in my Books Online page is labeled [StatVersion] in my actual [sysindexes] table. That [StatVersion] field looks suspiciously like a status-version field, possibly indicating that the [status] field has undergone some sort of revision?
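A hedged alternative that sidesteps decoding the [status] bits entirely: the INDEXPROPERTY function, available in SQL 7.0, reports uniqueness and clustering directly. A sketch, with the table name as a placeholder:

SELECT i.name AS index_name,
       INDEXPROPERTY(o.id, i.name, 'IsUnique')    AS IsUnique,
       INDEXPROPERTY(o.id, i.name, 'IsClustered') AS IsClustered
FROM sysobjects AS o
JOIN sysindexes AS i ON i.id = o.id
WHERE o.name = 'YourTable'
  AND i.indid BETWEEN 1 AND 254;   -- real indexes only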
The sysindexes table of my database seems to be messed up and I cannot back up my database. Here is the error: 17052: Table error: IAM page (1:278538) (object ID 2, index ID 255) is out of the range of this database.
FYI Object ID 2 is the sysindexes table. Thanks, kellie
When researching an index name problem, I found duplicate index names in sysindexes referencing the same table. I notice that one of the indexes has a status of 2097154 and an indid of 2, while the other has a status of 0 and an indid of 0. I believe these are duplicates. Does anybody have an idea whether they really are, and what the status and indid fields mean?
A few days ago a sproc stopped working; I only noticed it this morning. When checking information to see what's up, I get this:
Server: Msg 644, Level 21, State 1, Procedure uspV2InventoryFetch, Line 83 Could not find the index entry for RID '45574f44523738313834202020202020202020202020202020202020202020' in index page (1:11690152), index ID 2, database 'ASOS'.
Connection Broken
I've tried DBCC CHECKDB against all related tables and everything else, and even tried DBCC DBREINDEX on a couple of them too; everything reports as correct. When I looked further at the problem I spotted "index ID 2" in there; on checking this out in sysindexes, it seems that the name field of sysindexes where id = 2 is SYSINDEXES and TSYSINDEXES.
I can't seem to run DBCC DBREINDEX against a system table.
Hi all, I was using the system tables sysindexes and sysobjects in SQL Server 2000, but SQL Server 2005 is now being used instead of SQL Server 2000. Since I am using sysindexes and sysobjects, execution is taking too much time in SQL Server 2005, so I need to change the query to suit SQL Server 2005. I have read on MSDN that the system tables are replaced with corresponding catalog views in SQL Server 2005; the catalog view for sysobjects is sys.objects. Please check the link http://technet.microsoft.com/en-us/library/ms187997.aspx. Could anyone please tell me how to replace sysindexes and sysobjects in the query without rewriting the query too much? If I can replace sysobjects with just sys.objects, then it will be very helpful. Is it possible? And what about sysindexes? Any help anyone could give would be greatly appreciated!
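A hedged sketch of the usual 2005 replacements, assuming the old query was the common sysobjects/sysindexes row-count join: sys.objects stands in for sysobjects, and the row counts that used to live in sysindexes now come from sys.partitions (or sys.dm_db_partition_stats):

SELECT o.name AS table_name, SUM(p.rows) AS row_count
FROM sys.objects AS o
JOIN sys.partitions AS p ON p.object_id = o.object_id
WHERE o.type = 'U'
  AND p.index_id IN (0, 1)   -- heap or clustered index, the same role indid < 2 played before
GROUP BY o.name;
-- The old sysobjects/sysindexes names still exist in 2005 as compatibility views, so a
-- drop-in swap is possible, but the catalog views above are the supported route.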
I have some simple maintenance plans, but they are failing because the Delete History task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so that the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep on soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific Model?
Background: I am working on completing an ORM that not only handles CRUD actions but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get SQL scripts that would work for updating software on SQL Server to be portable to other DBMS systems. Doing it in code, rather than a SQL batch, has a chance of making cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraints are that the system used must work for the lowest common denominator....ie, a 'recipe' of steps that will work on all DBMS's.
The Problem: There might be simpler ways to do this with SQL Server (all ears :-) -- just in case I can't make it cross-platform right now), but with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP table to the original name.
This appears possible in SqlServer too --...as long as there are no CASCADE operations. Truncate table doesn't seem to be the solution, nor drop, as they all seem to trigger a Cascade delete in the Foreign Table.
So -- please correct me if I am wrong here -- it appears that the operations would be along the lines of: a) Remove the Foreign Key references b) Copy the table structure, and make a new temp table, adding the column c) Copy the data over d) Add the FK relations, that used to be in the first table, to the new table e) Delete the original f) Done?
The questions are: a) How does one alter a table to REMOVE the foreign key references part if it has no 'name'? b) Does anyone know of a good clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent. c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
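For the SQL Server side of (a) and (b), a hedged sketch: every foreign key in SQL Server has a name even when none was specified (the engine generates one), and the catalog views expose enough to script the DROP CONSTRAINT / re-ADD steps around the copy. The table name below is a placeholder:

SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
       pc.name                              AS referencing_column,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table,
       rc.name                              AS referenced_column,
       fk.delete_referential_action_desc    AS on_delete
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS pc ON pc.object_id = fkc.parent_object_id AND pc.column_id = fkc.parent_column_id
JOIN sys.columns AS rc ON rc.object_id = fkc.referenced_object_id AND rc.column_id = fkc.referenced_column_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.TheTableBeingRebuilt');
-- Each row supplies what ALTER TABLE <referencing_table> DROP CONSTRAINT <constraint_name>
-- needs before the copy, and what to re-ADD once the renamed table is in place.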
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:
Table A -> Table B, Table A -> Table C, Table B -> Table D, Table C -> Table D
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "DELETE statement conflicted with COLUMN REFERENCE constraint". This does not make sense to me; can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row... isn't that the whole point of delete triggers?
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A FOR DELETE AS DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted); DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
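A hedged sketch of the usual way out: a FOR (AFTER) DELETE trigger fires only after the foreign key check, which is why the constraint error appears before any child rows are removed, whereas an INSTEAD OF DELETE trigger runs before the delete is applied, so the children can be cleared first while the relationships stay enforced. Column names follow the trigger above:

CREATE TRIGGER DeleteA ON A INSTEAD OF DELETE
AS
BEGIN
    -- children first; if B and C carry their own INSTEAD OF DELETE triggers,
    -- those will in turn clear their child rows in D
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
    -- finally perform the delete the original statement asked for
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);
END

The same pattern on B and C (delete their children in D, then the rows themselves) lets the whole chain run with the relationships still enforced, since nothing is ever deleted while child rows still reference it.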
I'm trying to clean up a database design, and I'm in a situation where two tables need a FK, but since it didn't exist before there are orphaned records.
Tables are:
Brokers and it's PK is BID
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete a record if there isn't a matching BID record in the Brokers table.
I know this isn't correct syntax but should hopefully clear up what I'm asking
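A hedged sketch of that cleanup, assuming Brokers(BID) and Broker_Rates(BID) as described; the constraint name at the end is just illustrative:

DELETE br
FROM Broker_Rates AS br
WHERE NOT EXISTS (SELECT 1 FROM Brokers AS b WHERE b.BID = br.BID);

-- once the orphans are gone, the missing foreign key can be added
ALTER TABLE Broker_Rates
    ADD CONSTRAINT FK_Broker_Rates_Brokers FOREIGN KEY (BID) REFERENCES Brokers (BID);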
This is my Delete Query No 1: alter table ZT_Master disable trigger All; Delete ZT_Master WHERE TDateTime >= DATEADD(month, DATEDIFF(month,0,getdate()) - (select Keepmonths from ZT_KeepMonths where id = 1), 0) AND TDateTime < DATEADD(month, DATEDIFF(month,0,getdate()), 0); alter table ZT_Master enable trigger All
I have trouble with Delete Query No 2. Here is a select statement; I need to delete the rows it returns: select d.* from ZT_Master m, ZT_Detail d where (m.Prikey=d.MasterKey) And m.TDateTime >= DATEADD(month,DATEDIFF(month,0,getdate())-(select Keepmonths from ZT_KeepMonths where id =1),0) AND m.TDateTime < DATEADD(month,DATEDIFF(month,0,getdate()),0). I tried modifying it as below: delete d.* from ZT_Master m, ZT_Detail d where (m.Prikey=d.MasterKey) And m.TDateTime >= DATEADD(month,DATEDIFF(month,0,getdate())-(select Keepmonths from ZT_KeepMonths where id =1),0) AND m.TDateTime < DATEADD(month,DATEDIFF(month,0,getdate()),0), but this doesn't work.
Can you please help? And can I combine these two SQL queries into one query? Thank you.
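A hedged sketch of a working form for Query No 2: in T-SQL the table (or alias) to delete from is named right after DELETE rather than written as d.*:

DELETE d
FROM ZT_Detail AS d
JOIN ZT_Master AS m ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);

The two deletes cannot literally be merged, since a single DELETE statement targets only one table, but running this detail delete first and then Query No 1 in the same batch (details before masters, so nothing is orphaned) keeps them together.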
I'm using SqlDataSource and an Access database. Let's say I've got two tables:
user: userID, username
message: userID, messagetext
Let's say a user can register on my website and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to make this? Here's my suggestion: I have made a "delete query" (with userID as parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK. Would it be possible to do this on my ASP.NET page? If yes, what would the script look like? (Yes, it is a newbie question.)
The requirement is: I should allow single-row deletes from a table but not bulk deletes. An audit table should be updated if there is any single delete or single update. So I wrote the triggers as follows, for single and bulk delete:
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance] ON [dbo].[tbl_attendance] AFTER DELETE AS
[code]...
When I try to run the website, the database error I am getting is: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1.
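That particular error usually means the trigger changes @@TRANCOUNT on its way out, for example a BEGIN TRANSACTION that is never committed, or statements executed after a ROLLBACK; a trigger has to leave the transaction count as it found it. A hedged sketch of a trigger that blocks multi-row deletes and audits single ones without doing its own transaction management; the audit table and its columns are assumptions:

ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        RAISERROR ('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;   -- undoes the delete; no BEGIN TRAN is needed inside the trigger
        RETURN;
    END;
    -- single-row delete: record it
    INSERT INTO dbo.tbl_attendance_audit (audit_action, audited_at)   -- hypothetical audit table
    SELECT 'DELETE', GETDATE()
    FROM deleted;
END;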
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products WHERE vendor='Foo' AND productId NOT IN ( SELECT productId FROM sql2000.staging.dbo.fooProductList )
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved were undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every thinkable combination of 2000 and 2005 servers, that don't give me any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
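One hedged workaround sometimes used for flaky cross-version linked-server deletes: push the modification through OPENQUERY so it is evaluated on the remote server itself, for example (run from the SQL 2000 side, where staging is local; the OPENQUERY rowset is updatable, so the DELETE applies to the remote table):

DELETE FROM OPENQUERY(sql2005, 'SELECT productId, vendor FROM production.dbo.products WHERE vendor = ''Foo''')
WHERE productId NOT IN (SELECT productId FROM staging.dbo.fooProductList);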
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
Online no help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it again, but I receive the same message. "Okay, no problem, I'll just delete the report and redeploy it and set up the subscription so all the other users aren't affected", says I. "Oh, no!", says the report server, and then it give me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName) at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item) at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId) at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch() --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
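A hedged diagnostic for the 'jobs owned by a different login' error: each subscription schedule is backed by a SQL Agent job named after the schedule's GUID, and it is that job's owner that matters, not the report's. Something along these lines, run by a sysadmin against the SSRS 2005 catalog (table names as they exist there, worth verifying on your instance), shows which job and owner are involved; a sysadmin can then change the job owner or remove the subscription:

SELECT c.Path                    AS report_path,
       s.SubscriptionID,
       j.name                    AS agent_job,
       SUSER_SNAME(j.owner_sid)  AS job_owner
FROM ReportServer.dbo.Subscriptions  AS s
JOIN ReportServer.dbo.[Catalog]      AS c  ON c.ItemID = s.Report_OID
JOIN ReportServer.dbo.ReportSchedule AS rs ON rs.SubscriptionID = s.SubscriptionID
JOIN msdb.dbo.sysjobs                AS j  ON j.name = CONVERT(NVARCHAR(128), rs.ScheduleID)
WHERE c.Name = 'TheProblemReport';   -- illustrative report name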
This one is weird and I am missing something fundamental on this one. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch ( @SubmitterTranID VARCHAR(50) ) AS DECLARE @COUNT INT, @COMMIT INT
SET @COUNT = 0 SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1
select @COUNT = COUNT(*) from claimsreceived where (claimstatus NOT IN ('Keyed', 'Imported')) AND SubmitterTranID = @SubmitterTranID
IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete BEGIN BEGIN TRANSACTION DELETE FROM INVOICECLAIMMAPPING WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED WHERE SUBMITTERTRANID = @SUBMITTERTRANID) IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsreceivedPayorServices where ClmRecdPyID in (SELECT ClmRecdPyID FROM ClaimsReceivedPayors WHERE SubmitterTranID = @SubmitterTranID)
IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsReceivedPayors WHERE ClmRecdid in (SELECT ClmRecdID FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID) IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0 DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0 DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
IF (@COMMIT = 1) BEGIN --ROLLBACK TRANSACTION --For Testing Purpose ONLY COMMIT TRANSACTION RETURN (0) END ELSE BEGIN ROLLBACK TRANSACTION RETURN (-1) END END ELSE BEGIN RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1) END GO
I applied a couple of indexes and got rid of the uncorrelated subqueries:
CREATE PROCEDURE p_CM_DeleteBatch ( @SubmitterTranID VARCHAR(50) ) AS DECLARE @COUNT INT, @COMMIT INT
SET @COUNT = 0 SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1
select @COUNT = COUNT(*) from claimsreceived where (claimstatus NOT IN ('Keyed', 'Imported')) AND SubmitterTranID = @SubmitterTranID
IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete BEGIN BEGIN TRANSACTION
DELETE INVOICECLAIMMAPPING FROM INVOICECLAIMMAPPING JOIN CLAIMSRECEIVED ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE ClaimsreceivedPayorServices FROM ClaimsreceivedPayorServices JOIN ClaimsReceivedPayors ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE ClaimsReceivedPayors FROM ClaimsReceivedPayors JOIN ClaimsReceived ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID IF (@@ERROR <> 0) SET @COMMIT = 0
IF (@COMMIT = 1) BEGIN --ROLLBACK TRANSACTION --For Testing Purpose ONLY COMMIT TRANSACTION RETURN (0) END ELSE BEGIN ROLLBACK TRANSACTION RETURN (-1) END END ELSE BEGIN RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1) END GO SET QUOTED_IDENTIFIER OFF GO SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter between the join and the subquery versions? This one is making me depressed because I cannot explain it.
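A hedged thought on where to look, since statements in a batch do run strictly one after another: the join and subquery versions behave identically only if they select identical rows. One visible difference in the rewrite is the join column in the PayorServices delete (ClaimsReceivedPayors.ClmRecPyID) versus the column the original subquery selected (ClmRecdPyID); if that is a real column difference rather than a transcription slip, the rewrite removes a different set of child rows and leaves orphans for the Payors delete to trip over. A purely diagnostic sketch, using only names from the procedure above:

DECLARE @SubmitterTranID VARCHAR(50);
SET @SubmitterTranID = '...';   -- placeholder: the batch that fails

SELECT
  (SELECT COUNT(*) FROM ClaimsreceivedPayorServices
   WHERE ClmRecdPyID IN (SELECT ClmRecdPyID FROM ClaimsReceivedPayors
                         WHERE SubmitterTranID = @SubmitterTranID))  AS subquery_version_rows,
  (SELECT COUNT(*) FROM ClaimsreceivedPayorServices AS ps
   JOIN ClaimsReceivedPayors AS p ON ps.ClmRecdPyID = p.ClmRecPyID
   WHERE p.SubmitterTranID = @SubmitterTranID)                       AS join_version_rows;
-- If the two counts differ for a failing batch, the rewritten delete is not clearing
-- the same child rows the original did, which is enough to raise the constraint error.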
Need some advice solving a little problem I have with my database!
Current setup:
I have a person table that is made up of 39 columns. I also allow person records to be deleted, but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, the reason for deletion/suppression and a datetime stamp. To access live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in LogicallyDeletedrecords. Similarly, I have another view, DeadPersonData, which contains Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I've been thinking of physically deleting dead Person records from my Person table (possibly creating an archive table to hold them). But then, if I delete them, how do I cross-check the details when new Person details get loaded? As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. Is the only way to add further columns which I then use to match Person details?
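A hedged suggestion before physically deleting anything: with PersonId as the (unique, ideally clustered) key of LogicallyDeletedrecords, both the live view's anti-join and the load-time screening stay index seeks even as the table grows, so the growth by itself may not be the problem. A sketch using the names from the description above:

-- unique index on the deleted-records key (skip if PersonId is already the primary key)
CREATE UNIQUE INDEX IX_LogicallyDeletedrecords_PersonId
    ON dbo.LogicallyDeletedrecords (PersonId);

-- the live-records filter, usable both in the LivePersonData view and in the load routine
SELECT p.*
FROM dbo.Person AS p
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedrecords AS d
                  WHERE d.PersonId = p.PersonId);

If the dead rows are eventually moved to an archive table instead, an archive keyed on PersonId can be screened with the same NOT EXISTS test during loads, so no extra matching columns beyond PersonId are needed.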