Massive Delete In DB
Sep 6, 2005
Hello,
I have a huge database (growing about 2 GB / month) and after a while it becomes
non-operational (time-outs, etc.). So I have written a SQL statement
(a delete) that can reduce the db size by around 60% without compromising
the application's data needs. The problem is that when I execute it, the
db does shrink by 60%, but the transaction log grows at the
same rate. Can I execute the statement in some "commit" or
"transaction" mode so as to keep SQL Server from writing to the log?
Thanks for the help!
Antonio
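For what it's worth, the delete itself is always written to the log, so it can't be made non-logged, but a minimal sketch of the usual workaround is to delete in small batches so log space can be reused (or backed up) between batches. The table, column and batch size below are placeholders, and the TOP syntax assumes SQL Server 2005 or later:

DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable          -- placeholder table and criteria
    WHERE CreatedDate < DATEADD(MONTH, -6, GETDATE());
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;   -- with SIMPLE recovery this lets log space be reused; otherwise take log backups here
END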
View 6 Replies
Feb 14, 2005
Hey all,
We are using SQL Server 7 on Win 2k and there are some DTS packages set up which empty some large tables (delete from) and then import some datafiles.
The imported files are about 13 GB and during the process the log file gets to about 10GB and then runs out of disk space.
Is there a trick to empty a table without logging it? (a la LOAD Replace from Null in DB2)?
How can I go about keeping the log file size down during this operation?
I think the DB is set to autocommit, the trunc. log on chkpt. option is on, as is select into/bulk copy (although I'm reasonably sure we aren't availing of bulk copy for the import).
Help? :)
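One hedged suggestion, assuming no foreign keys reference the tables being emptied and the package owner has the needed rights: TRUNCATE TABLE only logs the page deallocations, so it clears a table without flooding the log, much like DB2's LOAD REPLACE for this purpose.

TRUNCATE TABLE dbo.StagingTable;   -- placeholder name; minimally logged, resets any identity column
-- the fully logged equivalent the packages use today:
-- DELETE FROM dbo.StagingTable;

For the import itself, keeping the select into/bulk copy option on and using a bulk load (BULK INSERT / bcp / the DTS Bulk Insert task) should keep that part of the operation minimally logged as well.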
View 2 Replies
View Related
May 21, 2013
So I've stumbled across an audit table on one of our systems that has reached a hearty 180M rows in size.
The table is a heap (no indexes whatsoever).
Each record has a datetime value indicating when it was created.
I need to delete everything that was created prior to the last 6 months; what is my best plan of attack?
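A hedged sketch of one plan of attack, assuming the rows to keep are the minority and a short outage is acceptable: copy the last 6 months into a new table (SELECT INTO is minimally logged outside FULL recovery), swap the names, and drop the old heap, which avoids logging ~170M single-row deletes. The datetime column name is an assumption:

SELECT *
INTO   dbo.AuditTable_keep
FROM   dbo.AuditTable
WHERE  CreatedDate >= DATEADD(MONTH, -6, GETDATE());

BEGIN TRAN;
EXEC sp_rename 'dbo.AuditTable', 'AuditTable_old';
EXEC sp_rename 'dbo.AuditTable_keep', 'AuditTable';
COMMIT;
-- DROP TABLE dbo.AuditTable_old;   -- once the copy has been verified

If most rows have to stay, a batched delete driven by the datetime column (with an index added on it first) is the usual alternative.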
View 12 Replies
View Related
Jan 28, 2008
I've got a large MS Sql Server 2000 database that has 15 indexes, with roughly 180 million rows representing 240 GB worth of data. Due to the massive size of the database we are trying to purge it down to a smaller dataset, about 40 million rows, in order to speed up the query performance and to be able to defrag the indexes (which are 30-50% fragmented). To complicate the matter, this table is also a publisher in a transactional replication setup, with one subscriber. Also, the system needs to be up constantly so I'm only allowed about a 3-5 hour period to take an outage a week.
So far I've tested several delete methods following all the best practices (batch deletes, using indexes in the delete's WHERE clause), and have settled on deleting/committing 500 rows at a time. The problem is that it still takes 3-4 seconds to delete that many rows, on an 8 GB RAM, 4-processor machine that is not currently used or replicated.
I'm at a loss for a way to pare down the data with a delete, as the current purge script would take 7 hours a day for about 3 months. Another option I'm considering is to do a truncate and copy the data back over from the replicated database, but again this has its own set of problems, i.e. network latency and slow insert times. Yet another option would be to create a replica of the table on the production db, copy the data to it, then rename the table.
Any one have experience with purging such a massive amount of data? Any help would be greatly appreciated.
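For what it's worth, a hedged sketch of batching on the clustered key range instead of a fixed TOP, which often deletes far more rows per unit of time because each batch is a single range seek; the WAITFOR gives the replication log reader a chance to keep up. The key name, batch size and purge predicate are placeholders:

DECLARE @minId INT, @maxId INT, @batch INT;
SELECT @minId = MIN(RowId), @maxId = MAX(RowId) FROM dbo.BigTable;   -- RowId = clustered key, placeholder
SET @batch = 50000;

WHILE @minId <= @maxId
BEGIN
    DELETE FROM dbo.BigTable
    WHERE RowId BETWEEN @minId AND @minId + @batch - 1
      AND PurgeFlag = 1;                    -- placeholder for the real purge criteria
    SET @minId = @minId + @batch;
    WAITFOR DELAY '00:00:02';               -- breathing room for the log reader / subscriber
END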
View 6 Replies
View Related
Oct 15, 2004
Currently I have huge amounts of data going into a table.
I'm sending an XML doc and using OPENXML with a cursor to seed it.
The question I have is whether to let duplicate keyed rows bounce,
check @@ERROR, and then do an update on the non-key fields,
or
to do a select on the keyed field and then do an insert or update based on the
select's results.
Speed is my goal.
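A hedged sketch of the second option written set-based, which usually beats both a per-row existence check and relying on key-violation errors: shred the XML into a staging table once with OPENXML, then do one UPDATE and one INSERT. Table and column names here are invented for illustration:

BEGIN TRAN;

UPDATE t
SET    t.NonKeyCol = s.NonKeyCol
FROM   dbo.TargetTable t
JOIN   #Staging s ON s.KeyCol = t.KeyCol;          -- #Staging = rows shredded from the XML doc

INSERT INTO dbo.TargetTable (KeyCol, NonKeyCol)
SELECT s.KeyCol, s.NonKeyCol
FROM   #Staging s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.KeyCol = s.KeyCol);

COMMIT;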
View 3 Replies
View Related
Feb 6, 2007
Rather than posting twice, I thought I would put both issues I'm having in one. Our server is Windows Server 2003 and we're running SQL Server 2005.
The first issue is this: We have several databases and I have scheduled their backups to run nightly, which works just fine. A couple of weeks ago, one of the databases' .bak files grew from about 500 MB to 2 GB overnight. Then, just a few days ago, it went from 2 GB to 3.5 GB. There is nothing unusual going on in the live db that would warrant such an increase in the .bak file. All the dbs are in the same backup job schedule but this is the only one affected. Additionally, I had autogrowth enabled on all the dbs but today disabled it for this particular db. Any ideas?
The second issue is my tempdb.mdf file on my C drive. It will go from just a few hundred KB to 4.5 GB overnight, consuming most of what is left on my C drive. I'm afraid I'm in for a system crash if it continues. I have to stop SQL Server and restart it to clear the size. Is there a way to move the tempdb.mdf file to my F drive?
I don't know if these two issues are related or not but certainly would like to hear from someone.
Sorry, in advance, for the large post.
Dave
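On the tempdb half, a sketch of the standard way to relocate it; the move takes effect only after the SQL Server service is restarted, and the F: path is just an example:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLData\templog.ldf');
-- tempdev/templog are the default logical names; check sys.master_files if they were renamed
-- restart the SQL Server service, then verify:
SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb');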
View 20 Replies
View Related
Mar 3, 2000
When updating large sets a row at a time the performance is lacking in comparison to 6.5. When using PeopleSoft which uses cursors with a begin transaction with a loop inside and a commit after the loop completes, SQL 6.5 with Page locking could handle a 300,000 row transaction in 3-4 hours. 7.0 took 17.5 hours. The difference is 6.5 used 50,000 locks and 7.0 used 300,000 locks.
Does anybody have solution short of rewriting PeopleSoft ?
View 2 Replies
View Related
Apr 5, 2006
Hi Everyone,
We have a large and active MSSQL 2000 database. Recently, after a rebuild of the server, we had a problem with the SQL service SQLSERVERAGENT. The service could not start because the service account had lost local permission to the registry. During this time, all of the data being sent to the database from our application accumulated in the database .ldf file. By the time we were able to get the service restarted, our .ldf file was approx. 28 gigs. When the service restarted, the .ldf file shrank down to its regular size, about 40 megs, and the .trn log file grew to 28 gigs for that specific period (new file every hour).
The problem is, the database file (database.mdf) stayed about the same size as it was before the service was restarted. When the .ldf transferred to the .trn, none of the 28 gigs of data got stored in the database. What does this mean? Perhaps with the service stopped the application using the db saw problems and did not commit the data, making it all useless? Or is it possible that the data in the .trn log just needs to be forced to commit to the .mdf?
Is there any way to verify the data in the 28 gig .trn file and figure out if we should get it stored to the database? If yes, how would we go about verifying it, and after that how would we force it to commit to the .mdf file? Am I on the right track here, or is it not as I see it?
Thanks!
Mike
View 4 Replies
View Related
Jul 2, 2006
Hi
Would you say that it's OK for a web site's code to make ALL of its database access through stored procedures and views? And I mean everything, including inserting new records and updating others, with no inline SQL in the code.
The advantage would be very strict control over access, but to achieve this it would take many, many SPs and views to cover all types of actions. Can you think of a disadvantage besides all the work of creating those SPs? What about server resources and performance? How demanding would it be?
Thanks,
Inon.
View 7 Replies
View Related
Jan 3, 2006
Hello,
I thought this was a neat solution I came up with, but I'm sure it's
been thought of before. Anyway, it's my first post here.
We have a process for importing data which generates a SELECT statement
based on user's stored configuration. Since the resulting SELECT statement
can be massive, it's created and stored in a text field in a temp table.
So how do I run this huge query after creating it? In my tests, I was
getting a datalength > 20000, requiring 3 varchar(8000) variables in
order to use the execute command. Thing is, I don't know how big it could
possibly get, I wanted to be able to execute it regardless.
Here's what I came up with, it's very simple:
Table is named #IMPORTQUERY, one field SQLTEXT of type TEXT.
>>
declare @x int, @s varchar(8000)
select @x = datalength(sqltext) / 8000 + 1, @s = 'execute('''')' from #importquery
while @x > 0
select @s = 'declare @s' + cast(@x as varchar) + ' varchar(8000) ' +
'select @s' + cast(@x as varchar) +
'=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery ' +
replace(@s,'execute(','execute(@s' + cast(@x as varchar) + '+')
, @x = @x - 1
set @s = 'declare @x int set @x=1 ' + @s
execute(@s)
<<
At the end, I execute the "@s" variable which is SQL that builds and
executes the massive query. Here's what @s looks like at the end:
>>
declare @x int set @x=1
declare @s1 varchar(8000)
select @s1=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
declare @s2 varchar(8000)
select @s2=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
declare @s3 varchar(8000)
select @s3=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
execute(@s1+@s2+@s3+'')
<<
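A simpler hedged alternative if the server is SQL Server 2005 or later: a varchar(max) variable holds up to 2 GB, so the 8000-character chunking disappears entirely.

DECLARE @sql VARCHAR(MAX);
SELECT @sql = CAST(sqltext AS VARCHAR(MAX)) FROM #IMPORTQUERY;
EXEC (@sql);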
View 4 Replies
View Related
Jul 23, 2005
Our database server has started acting weird and at this point I'm either too sleep deprived or too close to the problem to adequately diagnose the issue.
Basically, to put it simply... when I look at the read disk queue length, the disk queues are astronomical.
Normally we're seeing a disk queue length of 0-1 on the disks that contain the DB data and index (i.e. non-clustered indexes are on a disk of their own). Writes are just fine.
Problem is, all our databases are on the same drive, and I can't seem to nail down which DB, let alone which table, is the source of all our reads.
Now, to really make things weirder... during the busier times of the day today (say 1:00 PM to 4:00 PM) things were fine.
At 4:20 PM or so it was like someone hit a switch and the read disk queue length jumped from 0-1 up to 100-200+... with spikes up to 1500 for a split second or so.
What's the best way folks know to nail this down?
Thanks.
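One hedged way to narrow it down on SQL Server 2000: ::fn_virtualfilestats reports cumulative I/O per database file, so sampling it twice a minute apart and diffing NumberReads/BytesRead shows which database (and file) the read storm is hitting. On 2000 the function wants explicit ids (later versions accept NULL for "all"), so it has to be run per database/file:

SELECT FileId, NumberReads, BytesRead, IoStallMS
FROM   ::fn_virtualfilestats(DB_ID('SuspectDb'), 1);   -- placeholder db name; repeat per file id / database

Once the database is known, a Profiler trace filtered on the Reads column is the usual way to get down to the statement and table.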
View 9 Replies
View Related
Dec 29, 2007
If I remove the TOP 200 this query returns about 2.5 million rows. It combines a lot of records and turns them into much more programmer-friendly results. The query has slowed down from 2 seconds to about 13 seconds as the data has grown from about 10k rows to the current couple of million.
Code Block
SELECT TOP 200 *
FROM
(
SELECT
[UserProfile].[UserId]
,[aspnet_Users].[UserName]
,[City]
,[State]
,[RoleName]
,[ProfileItemType].[Name] AS pt_name
,[ProfileItem].[Value]
FROM
[UserCriteria]
,[aspnet_Users]
,[aspnet_Roles]
,[aspnet_UsersInRoles]
,[Location]
,[ProfileType]
,[ProfileTypeItem]
,[ProfileItem]
INNER JOIN [UserProfile]
ON [ProfileItem].[ProfileId] = [UserProfile].[ProfileId]
INNER JOIN [ProfileItemType]
ON [ProfileItem].[ProfileItemTypeId] = [ProfileItemType].[ProfileItemTypeId]
WHERE [UserProfile].[UserId] IN (
SELECT [UserCriteria].[UserId]
FROM [UserCriteria]
WHERE
Zipcode IN (
SELECT [Zipcode]
FROM [ZipcodeProximitySQR] ('89108' , 150))
)
AND [UserProfile].[UserId] = [aspnet_Users].[UserId]
AND [UserCriteria].[UserId] = [UserProfile].[UserId]
AND [Location].[Zipcode] = [UserCriteria].[Zipcode]
AND [aspnet_UsersInRoles].[UserId] = [aspnet_Users].[UserId]
AND [aspnet_UsersInRoles].[RoleId] = [aspnet_Roles].[RoleId]
) AS t
PIVOT
(
MIN([Value])
FOR pt_name IN ([field1],[field2],[field3],[field4])
) AS pvt
ORDER BY RoleName DESC, NEWID()
The line FOR pt_name IN ([field1],[field2],[field3],[field4]) is where I changed the values from the long field names to field1, field2, ... because the real names are irrelevant here and only add confusion.
Here is the showplan text
Code Block
|--Sequence
|--Table-valued function(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
|--Top(TOP EXPRESSION:((200)))
|--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId], [aous].[dbo].[aspnet_Users].[UserName], [aous].[dbo].[Location].[City], [aous].[dbo].[Location].[State], [aous].[dbo].[UserCriteria].[Birthdate], [aous].[dbo].[aspnet_Roles].[RoleName]) DEFINE:([Expr1039]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'height' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1040]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'bodyType' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1041]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'hairColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1042]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'eyeColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END)))
|--Nested Loops(Inner Join)
|--Nested Loops(Inner Join)
| |--Sort(ORDER BY:([aous].[dbo].[UserCriteria].[UserId] ASC, [aous].[dbo].[Location].[City] ASC, [aous].[dbo].[Location].[State] ASC, [aous].[dbo].[UserCriteria].[Birthdate] ASC, [aous].[dbo].[aspnet_Roles].[RoleName] ASC))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserCriteria].[Zipcode])=([Expr1043]), RESIDUAL:([Expr1043]=[aous].[dbo].[UserCriteria].[Zipcode]))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[ProfileItemType].[ProfileItemTypeId])=([aous].[dbo].[ProfileItem].[ProfileItemTypeId]))
| | | |--Index Scan(OBJECT:([aous].[dbo].[ProfileItemType].[ProfileTypes]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[ProfileId]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserProfile].[UserId])=([aous].[dbo].[aspnet_UsersInRoles].[UserId]), RESIDUAL:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[aspnet_UsersInRoles].[UserId]))
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | |--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | | |--Nested Loops(Left Semi Join, WHERE:([aous].[dbo].[UserCriteria].[Zipcode]=[Expr1044]))
| | | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId] < {guid'E3D72D56-731A-410E-BCB1-07A87A312137'} OR [aous].[dbo].[UserCriteria].[UserId] > {guid'E3D72D56-731A-410E-BCB1-07A87A312137'}), WHERE:([aous].[dbo].[UserCriteria].[Male]=(1) AND [aous].[dbo].[UserCriteria].[SeekingMale]=(0)) ORDERED FORWARD)
| | | | | | | | |--Compute Scalar(DEFINE:([Expr1044]=CONVERT_IMPLICIT(nvarchar(5),[aous].[dbo].[ZipcodeProximitySQR].[Zipcode],0)))
| | | | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
| | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserProfile].[UserProfileIds]), SEEK:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[UserCriteria].[UserId]) ORDERED FORWARD)
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[aspnet_Roles].[RoleId]))
| | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[aspnet_Roles].[aspnet_Roles_index1]))
| | | | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_UsersInRoles].[aspnet_UsersInRoles_index]), SEEK:([aous].[dbo].[aspnet_UsersInRoles].[RoleId]=[aous].[dbo].[aspnet_Roles].[RoleId]) ORDERED FORWARD)
| | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_Users].[_dta_index_aspnet_Users_5_37575172__K2_K1_K4_3]), SEEK:([aous].[dbo].[aspnet_Users].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | |--Index Seek(OBJECT:([aous].[dbo].[ProfileItem].[_dta_index_ProfileItem_5_1714105147__K2_K1_K3_4]), SEEK:([aous].[dbo].[ProfileItem].[ProfileId]=[aous].[dbo].[UserProfile].[ProfileId]) ORDERED FORWARD)
| | |--Compute Scalar(DEFINE:([Expr1043]=CONVERT_IMPLICIT(nchar(5),[aous].[dbo].[Location].[Zipcode],0)))
| | |--Index Scan(OBJECT:([aous].[dbo].[Location].[CityLocation]))
| |--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileType].[PKProfileTypeProfileTypeId]))
|--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileTypeItem].[ProfileTypeItem]))
Here is a link to the execution plan from Microsoft SQL Server Management Studio.
http://epi.cc/BasicUserSearch.zip
There are no table scans, but the Hash Match from the inner join is pretty bad.
Can anyone give me a pointer or two?
View 1 Replies
View Related
Jan 20, 2014
I have to delete a ton of data from a SQL table. I have a unique identifier called the version. I would like to say "if not in these versions then delete". I tried using the statement below, but learned the hard way that it raises an error. This is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about truncate, but I am not sure how I would do this or how I would set up the statement.
Delete Products
where versions were not in (('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
[code]....
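A hedged sketch of one way to keep each transaction small enough for the log: delete in committed chunks instead of one giant statement (the version list is abbreviated here exactly as in the post, and the table/column names are assumptions):

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Products
    WHERE Version NOT IN ('48459CED-871F-4971-B888-5083990332BC',
                          'D550C8D3-58C7-4C74-841D-1C1675F19AE3' /* ...rest of the list... */);
    IF @@ROWCOUNT = 0 BREAK;
    CHECKPOINT;   -- SIMPLE recovery: lets the log be reused between chunks; in FULL recovery take log backups instead
END

TRUNCATE TABLE only applies if the whole table can go; it can't take a WHERE clause.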
View 2 Replies
View Related
Feb 8, 2008
Hi,
I have a series of SSIS packages, all of which are ultimately executed by a parent package.
I'm consistently getting "OutOfMemory" errors when working with the packages, which is temporarily solved by closing Visual Studio and re-opening the package(s)... This solution is short-lived, however, as the OutOfMemory error recurs quite quickly after re-opening, often after doing nothing more than altering a variable's default value and attempting to save the package.
The average size of the packages in question (.dtsx files) is around 7,000kb with the largest being 12,500kb. The total size of all the solution's packages is ~75,000kb.
The Processes tab in Task Manager shows a Mem Usage counter for devenv.exe *32 of around 20,000kb when Visual Studio is first opened however, when a single ~6,000kb dtsx file is opened this counter jumps to +300,000kb and when the entire solution is opened (When the parent package is executed), the Mem Usage counter for devenv.exe *32 is a massive +800,000kb!!!
Is this normal SSIS behaviour or do I have a major problem? Any tips or suggestions as to how to resolve this issue would be gratefully received.
FYI, "SELECT @@VERSION" gives me "Microsoft SQL Server 2005 - 9.00.3042.00 (X64) Feb 10 2007 00:59:02 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2) "
My Server is Windows Server 2003 R2 Enterprise x64 SP2 with 8GB of RAM.
Thanks in advance.
Leigh.
View 7 Replies
View Related
Dec 4, 2006
Hello,
I am currently working on a project where I have to import a huge amount of data from CSV files into a database.
I don't want to have duplicate keys in my table, but my CSV file contains them. The line closer to the end of the file contains the more up-to-date information, and that is the one I have to store.
I have been trying to fix this problem for several weeks, but my algorithm is very slow and blocks all other processes on the server. At the moment I am copying all records that occur more than once in the CSV file into a temp table. After that I run through this table line by line and check whether the key already exists in the target table, and then either do an insert or an update.
Does somebody know a better process?
I hope somebody can help me... :(
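A hedged, set-based sketch, assuming the CSV is first bulk loaded into a staging table that has an identity column (LineNo here) preserving file order; then the last occurrence of each key wins and the target gets one update and one insert instead of a row-by-row loop. All names are placeholders:

-- keep only the last occurrence of each key in staging
DELETE s
FROM dbo.Staging s
JOIN (SELECT KeyCol, MAX(LineNo) AS LastLine
      FROM dbo.Staging
      GROUP BY KeyCol) k
  ON k.KeyCol = s.KeyCol AND s.LineNo < k.LastLine;

-- one set-based update, one set-based insert
UPDATE t SET t.Payload = s.Payload
FROM dbo.Target t
JOIN dbo.Staging s ON s.KeyCol = t.KeyCol;

INSERT INTO dbo.Target (KeyCol, Payload)
SELECT s.KeyCol, s.Payload
FROM dbo.Staging s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.KeyCol = s.KeyCol);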
View 5 Replies
View Related
Apr 10, 2007
Background
SQL Server 2005 Standard 9.0.1399 64bit
Windows 2003 64-bit
8gb RAM
RAID-1 70gb HD 15K SCSI (Log Files, OS)
RAID-10 1.08TB HD 10K SCSI (Data Files)
Runs approximately _Total 800 transactions/second
We deliver approximately 70-80 million ad views / day
8 Clustered Windows 2003 32-bit OS IIS Servers running Asp.net 2.0 websites
All 8 servers talking to the one SQL server via a private network (server backbone).
In SQL Server Profiler, I see the following SQL statements with durations of 2000 - 7000:
select top 1 keywordID, keyword, hits, photo, feed from dbo.XXXX where hits > 0 order by hits
and
UPDATE XXXX SET hits=1906342 WHERE keywordID = 7;
The hits number is incremented by one each time that keyword ID is selected.
Sometimes these happen so frequently that the server stops accepting new connections, and I have to restart the SQL Server service or reboot.
Any ideas on why this is happening?
Regards,
Joe
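One hedged guess at a mitigation: the 'top 1 ... where hits > 0 order by hits' query scans unless there is a narrow index leading on hits, and that scan collides with the constant single-row updates. A covering index (the name and included columns are assumptions) lets the SELECT become a single seek, at the cost of that index also being touched by every hits update:

CREATE NONCLUSTERED INDEX IX_Keywords_hits
ON dbo.XXXX (hits)
INCLUDE (keywordID, keyword, photo, feed);

If the contention persists, batching the hit-count updates (or accumulating them in the application and flushing periodically) is the other common approach.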
View 6 Replies
View Related
May 3, 2007
I need some help. I have a massive SQL script; the problem is that when I try to put it into the query string box in my SQL reports, it will not take it. The script will run if I break it up, but I think it is too large. Is there a SQL guru out there who can show me how to reduce the size of this script, maybe by using an output parameter to a stored procedure? I just don't know what to do and need to produce the report from this script. Below is the entire script.
SELECT 'Prior Year All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with Situation') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithSituationItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Not Completed') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'NotCompletedItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled Prior to service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledPriorToServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled During Service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledDuringServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'AwaitingforcompletionItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Pending for review') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like'%T ')) AS 'PendingforreviewItems'
FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID
WHERE (INVOICE_AR.AMOUNT_DUE > 0)AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12))AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%')AND (INVOICE_AR.INVOICE_NO like '%T')
GROUP BY JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE
UNION ALL
SELECT 'Current Year 2007 All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startDate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
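Not a rewrite of the whole report, but a hedged sketch of the usual way to shrink this kind of script: every per-outcome subquery repeats the same FROM and WHERE, so a single pass with conditional aggregation collapses them into one column each. The joins and filters below are abbreviated placeholders; the parameter names follow the first half of the script:

SELECT
    SUM(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed Successfully'   THEN 1 ELSE 0 END) AS CompletedSuccessfullyItems,
    SUM(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with complaint' THEN 1 ELSE 0 END) AS CompletedWithComplaintItems,
    SUM(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with No Show'   THEN 1 ELSE 0 END) AS CompletedWithNoShowItems
    -- ...one CASE line per outcome, instead of one full subquery per outcome
FROM JOB
INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID
LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID
WHERE INVOICE_AR.AMOUNT_DUE > 0
  AND INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year, 0, @startate) AND DATEADD(year, 0, @endate)
  AND INVOICE_AR.INVOICE_NO LIKE '%T';

The prior-year and current-year blocks joined with UNION ALL could likewise be folded into a grouping column rather than repeating the whole statement.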
View 8 Replies
View Related
Sep 11, 2015
I have some simple maintenance plans, but they are failing because the delete history task is looking for files in a non-existent directory.
It is looking for files in C:\Program Files\Microsoft SQL Server\MSSQL10_50.INSTANCE\MSSQL\Log whereas it should be looking in C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log.
How can I get this corrected so that the Maintenance Plans run correctly?
I have tried deleting and recreating the plan, but to no avail.
View 0 Replies
View Related
May 19, 2012
I have been using Master Data Services for a couple of months now. I can load, update, merge and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep on soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific Model?
View 18 Replies
View Related
Oct 23, 2004
Hello:
Need some serious help with this one...
Background:
Am working on completing an ORM that not only handles CRUD actions -- but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get SQL scripts that would work for updating software on SQL Server to be portable to other DBMS systems. Doing it in code, rather than as a SQL batch, has a chance of making cross-platform, updateable software...
Anyway, because it needs to be cross-DBMS capable, the constraint is that the approach used must work for the lowest common denominator... i.e., a 'recipe' of steps that will work on all DBMSs.
The Problem:
There might be simpler ways to do this with SQL Server (all ears :-) - just in case I can't make it cross-platform right now) but, with simplistic DBMSs (SQLite, etc.) there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP to the original name.
This appears possible in SQL Server too -- as long as there are no CASCADE operations.
TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be
along the lines of:
a) Remove the Foreign Key references
b) Copy the table structure, and make a new temp table, adding the column
c) Copy the data over
d) Add the FK relations, that used to be in the first table, to the new table
e) Delete the original
f) Done?
The questions are:
a) How does one alter a table to REMOVE the foreign key reference if the constraint was never given an explicit 'name'?
b) Does anyone know of a good, clean way to get and save these constraints so they can be reapplied to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent.
c) ANY and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
Thanks!
Sky
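On (a) and (b), a hedged sketch: even a foreign key created without an explicit name gets a system-generated one, and the INFORMATION_SCHEMA views (implemented, with variations, by several DBMSs) return that name plus the columns involved, so the constraint can be dropped by name before the copy/rename and re-created afterwards:

SELECT rc.CONSTRAINT_NAME,
       kcu.TABLE_NAME   AS referencing_table,
       kcu.COLUMN_NAME  AS referencing_column,
       kcu2.TABLE_NAME  AS referenced_table,
       kcu2.COLUMN_NAME AS referenced_column
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
  ON kcu.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu2
  ON kcu2.CONSTRAINT_NAME = rc.UNIQUE_CONSTRAINT_NAME
 AND kcu2.ORDINAL_POSITION = kcu.ORDINAL_POSITION;   -- pairs columns correctly for multi-column keys

-- then, per constraint found:
-- ALTER TABLE referencing_table DROP CONSTRAINT constraint_name;
-- ...copy / add column / rename steps...
-- ALTER TABLE referencing_table ADD CONSTRAINT constraint_name
--     FOREIGN KEY (referencing_column) REFERENCES referenced_table (referenced_column);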
View 1 Replies
View Related
Nov 13, 2006
I am having great difficulty with cascading deletes, delete triggers and referential integrity.
The database is in First Normal Form.
I have some tables that are child tables with two foreign keys to two different parent tables, for example:

        Table A
       /       \
  Table B     Table C
       \       /
        Table D
So if I try to turn on cascading deletes for A/B, A/C, B/D and C/D relationships, I get an error that I cannot have cascading delete because it would create multiple cascade paths. I do understand why this is happening. If I delete a row in Table A, I want it to delete child rows in Table B and table C, and then child rows in table D as well. But if I delete a row in Table C, I want it to delete child rows in Table D, and if I delete a row in Table B, I want it to also delete child rows in Table D.
SQL sees this as cyclical, because if I delete a row in table A, both table B and table C would try to delete their child rows in table D.
Ok, so I thought, no biggie, I'll just use delete triggers. So I created delete triggers that will delete child rows in table B and table C when deleting a row in table A. Then I created triggers in both Table B and Table C that would delete child rows in Table D.
When I try to delete a row in table A, B or C, I get the error "Delete Statement Conflicted with COLUMN REFERENCE". This does not make sense to me, can anyone explain? I have a trigger in place that should be deleting the child rows before it attempts to delete the parent row...isn't that the whole point of delete triggers?????
This is an example of my delete trigger:
CREATE TRIGGER [DeleteA] ON A
FOR DELETE
AS
DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
And then Table B and C both have delete triggers to delete child rows in table D. But it never gets to that point, none of the triggers execute because the above error happens first.
So if I then go into the relationships, and deselect the option for "Enforce relationship for INSERTs and UPDATEs" these triggers all work just fine. Only problem is that now I have no referential integrity and I can simply create unrestrained child rows that do not reference actual foreign keys in the parent table.
So the question is, how do I maintain referential integrity and also have the database delete child rows, keeping in mind that the cascading deletes will not work because of the multiple cascade paths (which are certainly required).
Hope this makes sense...
Thanks,
Josh
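For what it's worth, a hedged sketch of the usual workaround: a FOR/AFTER trigger only fires after the FK check has already rejected the delete, whereas an INSTEAD OF DELETE trigger runs first, so it can remove the children and then perform the parent delete itself. Names follow the A/B/C/D example above; B and C would carry similar triggers for D:

CREATE TRIGGER DeleteA_InsteadOf ON A
INSTEAD OF DELETE
AS
BEGIN
    DELETE FROM B WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM C WHERE MeetingID IN (SELECT ID FROM deleted);
    DELETE FROM A WHERE ID IN (SELECT ID FROM deleted);   -- the trigger must issue the parent delete itself
END

This keeps the FK constraints (and therefore referential integrity) enforced while still giving the multi-path cascade behaviour.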
View 6 Replies
View Related
Nov 17, 2006
I'm trying to clean up a database design and I'm in a situation where two tables need a FK, but since it didn't exist before, there are orphaned records.
The tables are:
Brokers, whose PK is BID.
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete a record if there isn't a matching BID record in the Brokers table.
I know this isn't correct syntax, but it should hopefully clear up what I'm asking:
DELETE FROM Broker_Rates
WHERE (Broker_Rates.BID <> Broker.BID)
Thanks
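A hedged sketch of what that statement usually looks like, assuming BID is the joining column in both tables; running the SELECT first gives a sanity check on how many orphans will go:

-- preview the orphans
SELECT COUNT(*)
FROM Broker_Rates br
LEFT JOIN Brokers b ON b.BID = br.BID
WHERE b.BID IS NULL;

-- delete them
DELETE br
FROM Broker_Rates br
LEFT JOIN Brokers b ON b.BID = br.BID
WHERE b.BID IS NULL;

After the cleanup the FK can be added so new orphans can't appear.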
View 6 Replies
View Related
Nov 26, 2007
this is my Delete Query NO 1
alter table ZT_Master disable trigger All
Delete ZT_Master WHERE TDateTime> = DATEADD(month,DATEDIFF(month,0,getdate())-(select Keepmonths from ZT_KeepMonths where id =1),0) AND TDateTime< DATEADD(month,DATEDIFF(month,0,getdate()),0)
alter table ZT_Master enable trigger All
I am having trouble with Delete Query No 2.
Here is a select statement; I need to delete the rows it returns:
select d.* from ZT_Master m, ZT_Detail d where (m.Prikey=d.MasterKey) And m.TDateTime> = DATEADD(month,DATEDIFF(month,0,getdate())-(select Keepmonths from ZT_KeepMonths where id =1),0) AND m.TDateTime< DATEADD(month,DATEDIFF(month,0,getdate()),0)
I tried modified it as below
delete d.* from ZT_Master m, ZT_Detail d where (m.Prikey=d.MasterKey) And m.TDateTime> = DATEADD(month,DATEDIFF(month,0,getdate())-(select Keepmonths from ZT_KeepMonths where id =1),0) AND m.TDateTime< DATEADD(month,DATEDIFF(month,0,getdate()),0)
but this doesn't work.
Can you please help?
And can I combine these 2 SQL queries into one SQL query? Thank you.
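A hedged rewrite of Delete Query No 2: T-SQL wants DELETE alias FROM ... rather than delete d.*, and a single DELETE can only touch one table, so the two queries can't literally be merged, but running this first and then Query No 1 (ideally in one transaction) gives the combined effect:

DELETE d
FROM ZT_Detail d
JOIN ZT_Master m ON m.Prikey = d.MasterKey
WHERE m.TDateTime >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - (SELECT Keepmonths FROM ZT_KeepMonths WHERE id = 1), 0)
  AND m.TDateTime <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0);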
View 1 Replies
View Related
Feb 16, 2008
I'm using SqlDataSource and an Access database. Let's say I got two tables:
user: userID, username
message: userID, messagetext
Let's say a user can register on my website, and leave several messages there. I have an admin page where I can select a user and delete all of his messages just by clicking one button. What would be the best (and easiest) way to make this?
Here's my suggestion: I have made a "delete query" (with userID as parameter) in MS Access. It deletes all messages of a user when I type in the userID and click OK.
Would it be possible to do this on my ASP.net page? If yes, what would the script look like?
(yes, it is a newbie question)
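A hedged sketch of the underlying query: the data source's DeleteCommand carries the same parameterised statement as the saved Access query (OLE DB uses ? placeholders; a SQL Server equivalent would use a named parameter):

DELETE FROM message WHERE userID = ?;
-- SQL Server flavour:
-- DELETE FROM message WHERE userID = @userID;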
View 2 Replies
View Related
Sep 16, 2013
The requirement is: I should allow single row delete from a table but not bulk delete. An audit table should get updated if there is any single delete or single update. So I wrote the triggers as follows: for single and bulk delete
ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
[code]...
When I try to run the website, the database error I am getting is: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 0, current count = 1.
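Since the trigger body is cut off above, here is a hedged sketch of the usual shape for the delete side; the audit table and its columns are assumptions. Keeping BEGIN TRAN/COMMIT out of the trigger is what avoids the "transaction count after EXECUTE" mismatch, because a ROLLBACK inside a trigger already rolls back the outer statement:

ALTER TRIGGER [dbo].[TRG_Delete_Bulk_tbl_attendance]
ON [dbo].[tbl_attendance]
AFTER DELETE
AS
BEGIN
    IF (SELECT COUNT(*) FROM deleted) > 1
    BEGIN
        RAISERROR ('Bulk deletes are not allowed on tbl_attendance.', 16, 1);
        ROLLBACK TRANSACTION;   -- undoes the multi-row delete; do not open a new transaction here
        RETURN;
    END

    -- single-row delete: write the audit row (audit table/columns are assumptions)
    INSERT INTO dbo.tbl_attendance_audit (attendance_id, action_type, action_date)
    SELECT d.attendance_id, 'DELETE', GETDATE()
    FROM deleted d;
END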
View 3 Replies
View Related
Aug 20, 2007
I ran the following query in Query Analyzer on a machine running SQL Server 2000. I'm attempting to delete from a linked server running SQL Server 2005:
DELETE FROM sql2005.production.dbo.products
WHERE vendor='Foo'
AND productId NOT IN
(
SELECT productId FROM sql2000.staging.dbo.fooProductList
)
The status message (and @@ROWCOUNT) told me 8 rows were affected, but nothing was actually deleted; when I ran a SELECT with the same criteria as the DELETE, all 8 rows are still there. So, once more I tried the DELETE command. This time it told me 7 rows were affected; when I ran the SELECT again, 5 of the rows were still there. Finally, after running this exact same DELETE query 5 times, I was able to remove all 8 rows. Each time it would tell me that a different number of rows had been deleted, and in no case was that number accurate.
I've never seen anything like this before. Neither of the tables involved were undergoing any other changes. There's no replication going on, or anything else that should introduce any delays. And I run queries like this all day, involving every thinkable combination of 2000 and 2005 servers, that don't give me any trouble.
Does anyone have suggestions on what might cause this sort of behavior?
View 3 Replies
View Related
Nov 20, 2007
Hi,
I have a problem with one report on my server. A user has requested that I exclude him from receiving a timed email subscription to several reports. I was able to amend all the subscriptions except one. When I try to remove his email address from the subscription I receive this error:
An internal error occurred on the report server. See the error log for more details. (rsInternalError) Get Online Help
For more information about this error navigate to the report server on the local server machine, or enable remote errors
Online no help couldn't offer any advice at all, so I thought I'd just delete the subscription and recreate it again, but I receive the same message. "Okay, no problem, I'll just delete the report and redeploy it and set up the subscription so all the other users aren't affected", says I. "Oh, no!", says the report server, and then it give me this message:
System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Only members of sysadmin role are allowed to update or delete jobs owned by a different login. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery() at Microsoft.ReportingServices.Library.DBInterface.DeleteObject(String objectName) at Microsoft.ReportingServices.Library.RSService._DeleteItem(String item) at Microsoft.ReportingServices.Library.RSService.ExecuteBatch(Guid batchId) at Microsoft.ReportingServices.WebServer.ReportingService2005.ExecuteBatch() --- End of inner exception stack trace ---
What's even weirder is that I'm the owner and creator of the report and I'm a system admin and content manager on the report server and I set up the subscription when the report was initially deployed. Surely I should have sufficient rights to fart around with this subscription/report as I see fit?
I have rebooted the server, redeployed the report, checked credentials on the data source and tried amending and deleting from both the report manager and management studio but still I am prevented from doing so.
Any help would be much appreciated.
Thanks in advance,
Paul
View 3 Replies
View Related
Feb 23, 2006
First, this is not my code.
This one is weird and I am missing something fundamental on this one. A developer was getting a timeout with this...
CREATE PROCEDURE p_CM_DeleteBatch
(
@SubmitterTranID VARCHAR(50)
)
AS
DECLARE
@COUNT INT,
@COMMIT INT
SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1
select @COUNT = COUNT(*) from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported')) AND
SubmitterTranID = @SubmitterTranID
IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete
BEGIN
BEGIN TRANSACTION
DELETE FROM INVOICECLAIMMAPPING WHERE CLMRECDID IN (SELECT DISTINCT CLMRECDID FROM CLAIMSRECEIVED WHERE SUBMITTERTRANID = @SUBMITTERTRANID)
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsPayment WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsreceivedPayorServices where ClmRecdPyID in (SELECT ClmRecdPyID
FROM ClaimsReceivedPayors
WHERE SubmitterTranID = @SubmitterTranID)
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsReceivedPayors WHERE ClmRecdid in (SELECT ClmRecdID
FROM ClaimsReceived
WHERE SubmitterTranID = @SubmitterTranID)
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
IF (@COMMIT = 1)
BEGIN
--ROLLBACK TRANSACTION --For Testing Purpose ONLY
COMMIT TRANSACTION
RETURN (0)
END
ELSE
BEGIN
ROLLBACK TRANSACTION
RETURN (-1)
END
END
ELSE
BEGIN
RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
I applied a couple of indices and got rid of the uncorrelated subqueries
CREATE PROCEDURE p_CM_DeleteBatch
(
@SubmitterTranID VARCHAR(50)
)
AS
DECLARE
@COUNT INT,
@COMMIT INT
SET @COUNT = 0
SET @COMMIT = 1 --DO NOT CHANGE THIS. The Operation will be commited only when this value is 1
select @COUNT = COUNT(*) from claimsreceived
where (claimstatus NOT IN ('Keyed', 'Imported')) AND
SubmitterTranID = @SubmitterTranID
IF (@COUNT = 0) --This means that that Claims under this Batch have not been adjudicated & it is safe to delete
BEGIN
BEGIN TRANSACTION
DELETE INVOICECLAIMMAPPING
FROM INVOICECLAIMMAPPING
JOIN CLAIMSRECEIVED
ON INVOICECLAIMMAPPING.CLMRECDID = CLAIMSRECEIVED.CLMRECDID
WHERE CLAIMSRECEIVED.SUBMITTERTRANID = @SUBMITTERTRANID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsPayment
WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsPaymentServices WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE ClaimsreceivedPayorServices
FROM ClaimsreceivedPayorServices
JOIN ClaimsReceivedPayors
ON ClaimsreceivedPayorServices.ClmRecdPyID = ClaimsReceivedPayors.ClmRecPyID
WHERE ClaimsReceivedPayors.SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE ClaimsReceivedPayors
FROM ClaimsReceivedPayors
JOIN ClaimsReceived
ON ClaimsReceivedPayors.ClmRecdid = ClaimsReceived.ClmRecdid
WHERE ClaimsReceived.SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsReceivedServices WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM ClaimsReceived WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
DELETE FROM BATCHLOGCLAIMS WHERE SubmitterTranID = @SubmitterTranID
IF (@@ERROR <> 0)
SET @COMMIT = 0
IF (@COMMIT = 1)
BEGIN
--ROLLBACK TRANSACTION --For Testing Purpose ONLY
COMMIT TRANSACTION
RETURN (0)
END
ELSE
BEGIN
ROLLBACK TRANSACTION
RETURN (-1)
END
END
ELSE
BEGIN
RaisError ('This Batch cannot be deleted. It has claim(s) which has been Adjudicated', 16, 1)
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
Suddenly this constraint was being violated with the change
ALTER TABLE [dbo].[ClaimsReceivedPayorServices] ADD CONSTRAINT [FK_ClaimsReceivedPayorServices_CLAIMSRECEIVEDPAYORS] FOREIGN KEY
(
[ClmRecdPyID]
) REFERENCES [CLAIMSRECEIVEDPAYORS] (
[CLMRECPYID]
)
Is the delete on ClaimsReceivedPayors starting before the delete on ClaimsreceivedPayorServices finishes? If so, why would it matter whether I use the join or the subquery? This one is making me depressed because I cannot explain it.
View 2 Replies
View Related
Apr 3, 2008
Hi all
Need some advice solving a little problem I have with my database!
Current setup:
I have a person table that is made up of 39 columns. I also allow for person records to be deleted but I do this by having another table I call LogicallyDeletedrecords. This table is made up of the PersonId, Reason for deletion/suppression and a date time stamp. To access Live records I created a view based on my Person table which contains a WHERE clause to exclude records that exist in the LogicallyDeletedrecords. Similarly, I have another view DeadPersonData which contains Person records that have been removed. Hope it all makes sense so far! Now on to my worries!
The problem:
My Person table contains 9+ million records. The LogicallyDeletedrecords table has 500k+ but I anticipate further growth over the coming weeks/months. My worry is that my LivePersonData view will be too slow to access as my LogicallyDeletedrecords table grows. What’s more, as part of my Load routine, I have to make sure that Person data loaded on to the system is excluded if that same person exists as a deleted member. Both of these actions could slow down my system as the deleted table grows.
My thoughts:
I’ve been thinking of physically deleting dead Person records from my person table (possibly creating an archive table to hold them). But then if I delete them how do I cross check the details when new Person details get loaded?! As I said, my current LogicallyDeletedrecords table holds the PersonId, ReasonDeleted and CreationStamp. The only way is to add further columns which I use to match Person Details?
Any design suggestions would be welcome!
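One hedged alternative to physically deleting: keep the logical-delete design but make sure the anti-join stays an index seek, so the view and the load-time check scale far better as the deleted table grows. The index and view below assume the column and table names described above:

CREATE UNIQUE INDEX IX_LogicallyDeleted_PersonId
ON dbo.LogicallyDeletedrecords (PersonId);   -- unique if a person can only be logically deleted once
GO
CREATE VIEW dbo.LivePersonData
AS
SELECT p.*
FROM dbo.Person p
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.LogicallyDeletedrecords d
                  WHERE d.PersonId = p.PersonId);
GO

The same NOT EXISTS test can be reused in the load routine to skip incoming rows for deleted people.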
View 3 Replies
View Related
Aug 12, 2007
I am trying to write one sql statement that deletes from two tables. Is it possible ? If yes, any thoughts ?
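Short answer as a hedged sketch: a single T-SQL DELETE statement only ever removes rows from one table, so the usual options are two deletes wrapped in one transaction (child rows first) or an ON DELETE CASCADE foreign key so the second delete happens implicitly. Table and column names are placeholders:

DECLARE @Id INT;
SET @Id = 42;                                   -- placeholder key value

BEGIN TRAN;
    DELETE FROM dbo.ChildTable  WHERE ParentId = @Id;
    DELETE FROM dbo.ParentTable WHERE ParentId = @Id;
COMMIT;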
View 5 Replies
View Related
Feb 1, 2007
Hi seniors
There are two tables involved in replication; let's say table1, and the replicated table is rep.table1.
We are not deleting records physically in table1; only a bit column in table1 is set to true when a record should be deleted. The strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads it as a hard delete and deletes the record in the replicated table, which is a serious problem for us.
Please let me know where I am wrong and how to put this right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
regards
Ahmad Drshen
View 6 Replies
View Related
May 6, 2007
I have several jobs in SQL 2005 that I cannot delete. I get "The DELETE statement conflicted with the REFERENCE constraint "FK_subplan_job_id". The conflict occurred in database "msdb", table "dbo.sysmaintplan_subplans", column 'job_id'."
How can I fix this?
TIA <JP>
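A hedged sketch of the usual cleanup, on the assumption that these jobs are leftovers from maintenance plans: remove the subplan rows (and their log rows) that still reference the job, after which the job itself can be deleted. Worth trying against a test copy of msdb first, since these are system tables:

USE msdb;
DECLARE @job_id UNIQUEIDENTIFIER;
SELECT @job_id = job_id FROM dbo.sysjobs WHERE name = 'YourStuckJobName';   -- placeholder job name

DELETE FROM dbo.sysmaintplan_log
WHERE subplan_id IN (SELECT subplan_id FROM dbo.sysmaintplan_subplans WHERE job_id = @job_id);

DELETE FROM dbo.sysmaintplan_subplans WHERE job_id = @job_id;

EXEC dbo.sp_delete_job @job_id = @job_id;

Deleting the owning maintenance plan from Management Studio first achieves much the same thing when the plan still exists.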
View 2 Replies
View Related
Jun 26, 2007
Hello, I just cannot think of the answer to this this morning and would appreciate some help on it. Scenario for these two tables:
Foo: FooID PK, Description
Bar: BarID PK, FooID FK
No cascade delete on the FK relationship; I want to handle it explicitly based on logic. I want to delete Bar for a specific value of BarID, and want to delete all Foo that are referenced in the intersecting table Bar by the FK.
I begin a transaction, and here is the issue:
If I delete Foo first, I get a FK constraint issue; there is a record in Bar corresponding to it.
If I delete Bar first, the FK does not exist for the query to find the children in Foo.
As silly as it sounds I just cannot think of the answer right now, thanks for any help.
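A hedged sketch of the usual order, using the post's table and column names (the key types are assumed to be INT): capture the FooID values from Bar before deleting, delete Bar first so the constraint no longer blocks anything, then delete the captured Foo rows, all inside one transaction:

DECLARE @BarID INT;
SET @BarID = 1;                        -- placeholder value

DECLARE @FooIds TABLE (FooID INT);

BEGIN TRAN;
    INSERT INTO @FooIds (FooID)
    SELECT FooID FROM Bar WHERE BarID = @BarID;

    DELETE FROM Bar WHERE BarID = @BarID;

    DELETE FROM Foo
    WHERE FooID IN (SELECT FooID FROM @FooIds)
      AND NOT EXISTS (SELECT 1 FROM Bar b WHERE b.FooID = Foo.FooID);   -- skip Foo rows another Bar still uses
COMMIT;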
View 6 Replies
View Related