Massive Slowdown With Query
Dec 29, 2007
If I remove the TOP 200, this query returns about 2.5 million rows. It combines a lot of records into much more programmer-friendly results. The query has slowed from 2 seconds to about 13 seconds as the underlying data has grown from about 10k rows to the current couple of million.
Code Block
SELECT TOP 200 *
FROM
(
SELECT
[UserProfile].[UserId]
,[aspnet_Users].[UserName]
,[City]
,[State]
,[RoleName]
,[ProfileItemType].[Name] AS pt_name
,[ProfileItem].[Value]
FROM
[UserCriteria]
,[aspnet_Users]
,[aspnet_Roles]
,[aspnet_UsersInRoles]
,[Location]
,[ProfileType]
,[ProfileTypeItem]
,[ProfileItem]
INNER JOIN [UserProfile]
ON [ProfileItem].[ProfileId] = [UserProfile].[ProfileId]
INNER JOIN [ProfileItemType]
ON [ProfileItem].[ProfileItemTypeId] = [ProfileItemType].[ProfileItemTypeId]
WHERE [UserProfile].[UserId] IN (
SELECT [UserCriteria].[UserId]
FROM [UserCriteria]
WHERE
Zipcode IN (
SELECT [Zipcode]
FROM [ZipcodeProximitySQR] ('89108' , 150))
)
AND [UserProfile].[UserId] = [aspnet_Users].[UserId]
AND [UserCriteria].[UserId] = [UserProfile].[UserId]
AND [Location].[Zipcode] = [UserCriteria].[Zipcode]
AND [aspnet_UsersInRoles].[UserId] = [aspnet_Users].[UserId]
AND [aspnet_UsersInRoles].[RoleId] = [aspnet_Roles].[RoleId]
) AS t
PIVOT
(
MIN([Value])
FOR pt_name IN ([field1],[field2],[field3],[field4])
) AS pvt
ORDER BY RoleName DESC, NEWID()
In the line FOR pt_name IN ([field1],[field2],[field3],[field4]) I changed the values from the long names to field1, field2, ... because the real names were irrelevant here but made the query confusing to read.
Here is the showplan text
Code Block
|--Sequence
|--Table-valued function(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
|--Top(TOP EXPRESSION:((200)))
|--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId], [aous].[dbo].[aspnet_Users].[UserName], [aous].[dbo].[Location].[City], [aous].[dbo].[Location].[State], [aous].[dbo].[UserCriteria].[Birthdate], [aous].[dbo].[aspnet_Roles].[RoleName]) DEFINE:([Expr1039]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'height' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1040]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'bodyType' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1041]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'hairColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1042]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'eyeColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END)))
|--Nested Loops(Inner Join)
|--Nested Loops(Inner Join)
| |--Sort(ORDER BY:([aous].[dbo].[UserCriteria].[UserId] ASC, [aous].[dbo].[Location].[City] ASC, [aous].[dbo].[Location].[State] ASC, [aous].[dbo].[UserCriteria].[Birthdate] ASC, [aous].[dbo].[aspnet_Roles].[RoleName] ASC))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserCriteria].[Zipcode])=([Expr1043]), RESIDUAL:([Expr1043]=[aous].[dbo].[UserCriteria].[Zipcode]))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[ProfileItemType].[ProfileItemTypeId])=([aous].[dbo].[ProfileItem].[ProfileItemTypeId]))
| | | |--Index Scan(OBJECT:([aous].[dbo].[ProfileItemType].[ProfileTypes]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[ProfileId]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserProfile].[UserId])=([aous].[dbo].[aspnet_UsersInRoles].[UserId]), RESIDUAL:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[aspnet_UsersInRoles].[UserId]))
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | |--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | | |--Nested Loops(Left Semi Join, WHERE:([aous].[dbo].[UserCriteria].[Zipcode]=[Expr1044]))
| | | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId] < {guid'E3D72D56-731A-410E-BCB1-07A87A312137'} OR [aous].[dbo].[UserCriteria].[UserId] > {guid'E3D72D56-731A-410E-BCB1-07A87A312137'}), WHERE:([aous].[dbo].[UserCriteria].[Male]=(1) AND [aous].[dbo].[UserCriteria].[SeekingMale]=(0)) ORDERED FORWARD)
| | | | | | | | |--Compute Scalar(DEFINE:([Expr1044]=CONVERT_IMPLICIT(nvarchar(5),[aous].[dbo].[ZipcodeProximitySQR].[Zipcode],0)))
| | | | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
| | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserProfile].[UserProfileIds]), SEEK:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[UserCriteria].[UserId]) ORDERED FORWARD)
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[aspnet_Roles].[RoleId]))
| | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[aspnet_Roles].[aspnet_Roles_index1]))
| | | | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_UsersInRoles].[aspnet_UsersInRoles_index]), SEEK:([aous].[dbo].[aspnet_UsersInRoles].[RoleId]=[aous].[dbo].[aspnet_Roles].[RoleId]) ORDERED FORWARD)
| | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_Users].[_dta_index_aspnet_Users_5_37575172__K2_K1_K4_3]), SEEK:([aous].[dbo].[aspnet_Users].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | |--Index Seek(OBJECT:([aous].[dbo].[ProfileItem].[_dta_index_ProfileItem_5_1714105147__K2_K1_K3_4]), SEEK:([aous].[dbo].[ProfileItem].[ProfileId]=[aous].[dbo].[UserProfile].[ProfileId]) ORDERED FORWARD)
| | |--Compute Scalar(DEFINE:([Expr1043]=CONVERT_IMPLICIT(nchar(5),[aous].[dbo].[Location].[Zipcode],0)))
| | |--Index Scan(OBJECT:([aous].[dbo].[Location].[CityLocation]))
| |--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileType].[PKProfileTypeProfileTypeId]))
|--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileTypeItem].[ProfileTypeItem]))
Here is a link to the execution plan from Microsoft SQL Server Management Studio.
http://epi.cc/BasicUserSearch.zip
There are no table scans, but the Hash Match from the inner join is pretty bad.
Can anyone give me a pointer or two?
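A couple of pointers, hedged since only the query and plan are visible here. First, [ProfileType] and [ProfileTypeItem] appear in the FROM list with no join condition tying them to anything, so they cross-join against every other row; the two bottom Clustered Index Scans in the plan feeding Nested Loops with no predicate look like exactly that. Rewriting the old-style comma joins as explicit ANSI joins makes such omissions impossible to miss, and joining the zipcode TVF directly avoids the nested IN subqueries. A sketch, untested against this schema:
Code Block
-- Same inner query with explicit joins; [ProfileType]/[ProfileTypeItem] are
-- dropped because nothing in the SELECT list comes from them.
SELECT
    up.[UserId]
    ,u.[UserName]
    ,l.[City]
    ,l.[State]
    ,r.[RoleName]
    ,pit.[Name] AS pt_name
    ,pi.[Value]
FROM [UserCriteria] uc
INNER JOIN dbo.[ZipcodeProximitySQR]('89108', 150) z ON z.[Zipcode] = uc.[Zipcode]
INNER JOIN [UserProfile] up ON up.[UserId] = uc.[UserId]
INNER JOIN [aspnet_Users] u ON u.[UserId] = up.[UserId]
INNER JOIN [aspnet_UsersInRoles] uir ON uir.[UserId] = u.[UserId]
INNER JOIN [aspnet_Roles] r ON r.[RoleId] = uir.[RoleId]
INNER JOIN [Location] l ON l.[Zipcode] = uc.[Zipcode]
INNER JOIN [ProfileItem] pi ON pi.[ProfileId] = up.[ProfileId]
INNER JOIN [ProfileItemType] pit ON pit.[ProfileItemTypeId] = pi.[ProfileItemTypeId]
Second, the plan shows CONVERT_IMPLICIT on both zipcode comparisons, so aligning the zipcode column types (nchar vs nvarchar) would let those joins match without conversion. And note that ORDER BY RoleName DESC, NEWID() forces the entire 2.5M-row result to be built and sorted before TOP 200 can pick anything, so the query scales with the data size, not with the 200 rows returned.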
View 1 Replies
Apr 30, 2007
I have a production SQL 2000 box with 24 gig of memory running a 3rd-party app. There is a posting process (it takes all the orders and does the math). Before we did the upgrade it was a 15-20 minute process. Nothing has changed, but in the last month it has gone to 60-75 minutes. There were also some Windows updates applied (Win 2003 Adv Server). I realize this is looking for the needle in the haystack, but has anyone seen this issue with either SQL SP4 or the Windows SP for 2003?
View 9 Replies
View Related
Dec 21, 2007
The strangest thing happens after upgrading to SQL Server 2005 64-bit (4 cores, 16G RAM):
a table-valued function call that usually returns instantly slows down to 25 seconds!
The underlying data is unchanged.
Now, if I reload the function (run ALTER FUNCTION without any changes to the function body) the performance is restored for a while, but in about one week it slows down again.
Two questions:
1. most important: is there anything I could do to preemptively restore the performance? Like running a dbcc statement every night?
2. what is the cause of this slowdown?
Any suggestions welcome
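If the cached plan is going stale as statistics age, refreshing statistics nightly may preempt the slowdown; the no-op ALTER FUNCTION works because it invalidates the cached plan, and sp_recompile does the same thing more directly. A speculative sketch with placeholder object names:
Code Block
-- Nightly job, assuming stale statistics are behind the slowdown.
-- dbo.SomeBaseTable / dbo.MyTvf are placeholders for the tables the
-- function reads and for the function itself.
UPDATE STATISTICS dbo.SomeBaseTable WITH FULLSCAN
EXEC sp_recompile 'dbo.MyTvf'   -- marks the function for a fresh plan,
                                -- same effect as the no-op ALTER FUNCTION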
View 6 Replies
View Related
Feb 1, 2008
Hi,
I am currently working on a report with 3 cascading parameters. These three parameters depend on datasets whose data is retrieved from a large table with SELECT DISTINCT. As the table grows larger, selecting values for these parameters causes postbacks and the report feels slow from the user's perspective. I am looking for a way to reduce postbacks. Can anyone suggest a way to retrieve the dataset all at once and filter it without causing postbacks? Any other way to improve the performance would also be greatly appreciated.
Thanks,
NL
View 7 Replies
View Related
Jul 4, 2006
How can I speed things up if there is a slowdown in SQL Server 2005?
View 4 Replies
View Related
Aug 27, 2007
Hi,
Just a general question here. I'm designing a web application that might have 50 million - 100 million rows plus. Basically it's a simple logging table, each row probably only 24 bytes wide; however, I can see queries against it taking quite a while to execute.
The query is basically a group by, showing the amount of "hits" per day.
Are there any special types of strategies I should implement ? Or is a properly designed structure with indexes likely sufficient (on the right hardware of course)
Thanks for any advice!,
Mike
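For what it's worth, a sketch of the usual shape for this kind of table (all names assumed): clustering on the time column keeps the daily GROUP BY to one contiguous range scan rather than a scattered read.
Code Block
CREATE TABLE dbo.HitLog (
    HitTime datetime NOT NULL,
    SiteId  int      NOT NULL
)
CREATE CLUSTERED INDEX CX_HitLog_HitTime ON dbo.HitLog (HitTime)

-- hits per day; the DATEADD/DATEDIFF pair strips the time portion
SELECT DATEADD(day, DATEDIFF(day, 0, HitTime), 0) AS HitDay, COUNT(*) AS Hits
FROM dbo.HitLog
GROUP BY DATEADD(day, DATEDIFF(day, 0, HitTime), 0)
ORDER BY HitDay
At 100M rows, also consider a nightly job that rolls finished days into a small summary table, so the report query never touches the raw log at all.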
View 7 Replies
View Related
May 4, 2007
We have an application in which one particular stored procedure goes from .3 second response times to 20-40 second response times, suddenly, at seemingly random times of the day. Recompiling that particular stored proc fixes the problem temporarily.
The 3 main tables have between 500K and 1.5M records, are defragged daily, and contain the past 42 days of work. Old records are deleted once daily. The slowdowns do not coincide with the deletions, occurring without a clear pattern. Nothing else is going on on that server -- it is dedicated to this one app. Nothing shows in the event log. It is clustered Windows 2003, with SQL Server 2005 RTM; SP2 of SQL 2005 is due for an install next Monday. This behavior was also seen when the app was on a different server running SQL2K SP3, so I think the underlying problem is some sort of design issue with the app, not a SQL Server bug. Has anyone seen something like this, and what suggestions do you have for a permanent fix? I think a recompile of the sp causes a new execution plan, but why would that be necessary daily, or even several times a day? 30-50 users are banging away at it. The app is an order entry system; the tables contain what are basically order histories and label data. Am running out of good ideas. Thanks for any help.
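The recompile-fixes-it symptom is the classic signature of parameter sniffing: the cached plan was compiled for one parameter value and is terrible for others. A hedged sketch of two mitigations (SQL 2005 syntax; dbo.Orders and @FromDate are placeholders, not your schema):
Code Block
CREATE PROCEDURE dbo.GetRecentOrders
    @FromDate datetime
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= @FromDate
OPTION (RECOMPILE)   -- recompile this statement per call so the plan always
                     -- fits the current value (costs CPU on every execution)
GO
-- or keep one cached plan, but compile it for a representative value:
-- ... WHERE OrderDate >= @FromDate OPTION (OPTIMIZE FOR (@FromDate = '20070101'))
On the older SQL2K box where you saw the same thing, CREATE PROCEDURE ... WITH RECOMPILE is the equivalent blunt instrument.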
View 13 Replies
View Related
Aug 9, 2006
We seem to be having a rather strange issue. We just
finished migrating from 2000 to 2005. I've been working on converting packages
from DTS to SSIS. Everything has been going well in testing, but a problem has
shown up going into production. The first package I deployed ran significantly
slower on the production server, going from about 7 seconds to 90 seconds. The
production server is much more powerful, so this was rather confusing. The
major difference is that the server is running 64 bit.
After searching around, I haven't found much that is helpful. To test things
out, I created a blank package that does absolutely nothing. All it contains is
a single connection manager (OLE DB). Executing this package takes 45 seconds.
If I remove the connection manager, it executes in 0.1 seconds. Setting
DelayValidation to TRUE has no effect. If the connection is to a nonexistent
server, or if an incorrect login is used, the execution time is still 45
seconds. Adding a second connection manager increases execution time to 75
seconds. The package runs on my local computer in 0.1 seconds.
Any ideas what could be going on here? I can't believe that
no one would have seen an issue like this. Is there some sort of strange
configuration issue going on here?
Thanks in advance.
View 1 Replies
View Related
Oct 15, 2004
Currently I have huge amounts of data going into a table.
I'm sending an xmldoc and using openxml with a cursor to seed the rows.
The question I have is whether to let duplicate-keyed data rows bounce,
then check @@error and do an update on the non-keyed fields,
or
to do a select on the keyed field and then do an insert or update based on the
select's results.
Speed is my goal.
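A third option that usually beats both on round-trips: attempt the UPDATE first and INSERT only if nothing matched, so there is no probe SELECT and no reliance on key-violation errors. A sketch with placeholder names:
Code Block
DECLARE @Key int, @Val varchar(50)
SELECT @Key = 42, @Val = 'example'   -- in practice these come from the xmldoc
UPDATE dbo.Target SET Val = @Val WHERE KeyCol = @Key
IF @@ROWCOUNT = 0
    INSERT INTO dbo.Target (KeyCol, Val) VALUES (@Key, @Val)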
View 3 Replies
View Related
Feb 6, 2007
Rather than posting twice, I thought I would put both issues I'm having in one. Our server is Windows Server 2003 and we're running SQL Server 2005.
The first issue is this: We have several databases and I have scheduled their backups to run nightly which works just fine. A couple weeks ago, one of the databases .bak file grew from about 500MEG to 2GB overnight. Then, just a few days ago, it went from 2GB to 3.5GB. There is nothing unusual going on in the live db that would warrant such an increase in the .bak file. All the dbs are in the same backup job schedule but this is the only one affected. Additionally, I had autogrowth enabled on all the dbs but today disabled it for this particular db. Any ideas?
The second issue is my tempdb.mdf file on my C drive. It will go from just a few hundred KB to 4.5GB overnight, consuming most of what is left on my C drive. I'm afraid I'm in for a system crash if it continues. I have to stop SQL Server and restart it to clear the size. Is there a way to move the tempdb.mdf file to my F drive?
I don't know if these two issues are related or not but certainly would like to hear from someone.
Sorry, in advance, for the large post.
Dave
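On the tempdb question: yes, it can be moved with ALTER DATABASE; the files are recreated at the new path on the next service restart. A sketch, assuming the default logical file names (tempdev/templog); the paths are examples only:
Code Block
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLData\templog.ldf')
-- restart the SQL Server service for the change to take effect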
View 20 Replies
View Related
Sep 6, 2005
Hello,
I have a huge database (2 GB / month) and after a while it is becoming non-operational (time-outs, etc.). So I have written an SQL statement (a DELETE) that can reduce the db size by around 60% without compromising the application's data needs. The problem is that when I execute it, the db does reduce its size by 60%, but the transaction log increases at the same rate. Can I execute the statement in some "commit" or "transaction" mode that stops SQL Server writing to the log?
Thanks for the help!
Antonio
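For reference, a common pattern here, hedged since the schema isn't shown: the logging of a DELETE can't be suppressed, but batching bounds how much log is active at once, provided the log is truncated between batches (trunc. log on chkpt. / SIMPLE recovery, or periodic BACKUP LOG in full recovery).
Code Block
-- dbo.BigTable and the WHERE clause are placeholders for your delete.
SET ROWCOUNT 10000            -- batch size (works on SQL 2000 as well)
DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE FROM dbo.BigTable WHERE CreatedDate < '20050101'
    SET @rows = @@ROWCOUNT
END
SET ROWCOUNT 0                -- always reset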
View 6 Replies
View Related
Mar 3, 2000
When updating large sets a row at a time the performance is lacking in comparison to 6.5. When using PeopleSoft which uses cursors with a begin transaction with a loop inside and a commit after the loop completes, SQL 6.5 with Page locking could handle a 300,000 row transaction in 3-4 hours. 7.0 took 17.5 hours. The difference is 6.5 used 50,000 locks and 7.0 used 300,000 locks.
Does anybody have a solution short of rewriting PeopleSoft?
View 2 Replies
View Related
Apr 5, 2006
Hi Everyone,
We have a large and active MSSQL 2000 database. Recently, after a rebuild of the server, we had a problem with the SQL service SQLSERVERAGENT. The service could not start as the service account had lost local permission to the registry. During this time, all of the data being sent to the database from our application accumulated in the database .ldf file. By the time we were able to get the service restarted, our .ldf file was approx. 28 gigs. When the service restarted, the .ldf file shrank down to regular size, about 40 megs, and the .trn log files grew to 28 gigs for that specific period (new file every hour).
The problem is, the database file (database.mdf) stayed about the same size as it was before the service was restarted. When the .ldf transferred to the .trn, none of the 28 gigs of data got stored in the database. What does this mean? Perhaps with the service stopped, the application using the db saw problems and did not commit the data, making it all useless? Or is it possible that the data in the .trn log just needs to be forced to commit to the .mdf?
Is there any way to verify the data in the 28 gig .trn file and figure out if we should get it stored to the database? If yes, how would we go about verifying it, and after that how would we force it to commit to the .mdf file? Am I on the right track here, or is it not as I see it?
Thanks!
Mike
View 4 Replies
View Related
Jul 2, 2006
Hi
Would you say that it's OK for a web site's code to make ALL of its access to a db through SPs and views? I mean everything, including inserting new records and updating others, with no inline SQL in the code.
The advantage would be very strict control over the access, but achieving this would take many, many SPs and views to cover all types of actions. Can you think of a disadvantage besides all the work creating those SPs? What about server resources and performance -- how demanding would it be?
Thanks,
Inon.
View 7 Replies
View Related
Jan 3, 2006
Hello,
I thought this was a neat solution I came up with, but I'm sure it's
been thought of before. Anyway, it's my first post here.
We have a process for importing data which generates a SELECT statement
based on user's stored configuration. Since the resulting SELECT statement
can be massive, it's created and stored in a text field in a temp table.
So how do I run this huge query after creating it? In my tests, I was
getting a datalength > 20000, requiring 3 varchar(8000) variables in
order to use the execute command. Thing is, I don't know how big it could
possibly get, I wanted to be able to execute it regardless.
Here's what I came up with, it's very simple:
Table is named #IMPORTQUERY, one field SQLTEXT of type TEXT.
>>
declare @x int, @s varchar(8000)
select @x = datalength(sqltext) / 8000 + 1, @s = 'execute('''')' from #importquery
while @x > 0
select @s = 'declare @s' + cast(@x as varchar) + ' varchar(8000) ' +
'select @s' + cast(@x as varchar) +
'=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery ' +
replace(@s,'execute(','execute(@s' + cast(@x as varchar) + '+')
, @x = @x - 1
set @s = 'declare @x int set @x=1 ' + @s
execute(@s)
<<
At the end, I execute the "@s" variable which is SQL that builds and
executes the massive query. Here's what @s looks like at the end:
>>
declare @x int set @x=1
declare @s1 varchar(8000)
select @s1=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
declare @s2 varchar(8000)
select @s2=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
declare @s3 varchar(8000)
select @s3=substring(sqltext,@x,@x+8000),@x=@x+8000 from #importquery
execute(@s1+@s2+@s3+'')
<<
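Neat trick for the 8000-byte era. Worth noting, hedged to SQL Server 2005 or later only: varchar(max) removes the need for chunking entirely, since EXEC accepts a single max-typed variable.
Code Block
-- SQL 2005+ equivalent of the whole workaround (same #IMPORTQUERY table):
DECLARE @sql varchar(max)
SELECT @sql = CONVERT(varchar(max), sqltext) FROM #IMPORTQUERY
EXEC (@sql)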
View 4 Replies
View Related
Jul 23, 2005
Our database server has started acting weird and at this point I'm either too sleep deprived or too close to the problem to adequately diagnose the issue. Basically, to put it simply: when I look at the read disk queue length, the disk queues are astronomical. Normally we're seeing a disk queue length of 0-1 on the disks that contain the DB data and index (i.e. non-clustered indexes are on a disk of their own). Writes are just fine. Problem is, all our databases are on the same drive, and I can't seem to nail down which DB, let alone which table, is the source of all our reads. Now, to really make things weirder: during the busier times of the day today (say 1:00 PM to 4:00 PM) things were fine. At 4:20 PM or so it was like someone hit a switch and the read disk queue length jumped from 0-1 up to 100-200+, with spikes up to 1500 for a split second or so. What's the best way folks know to nail this down? Thanks.
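One way to narrow this down on SQL 2000, sketched with an assumed database name: ::fn_virtualfilestats exposes cumulative I/O per database file, so two samples taken during the spike can be diffed on NumberReads/BytesRead to find the hot database.
Code Block
-- SQL 2000 wants explicit ids; run per database (and per file id) as needed.
SELECT DbId, FileId, NumberReads, BytesRead, IoStallMS
FROM ::fn_virtualfilestats(DB_ID('mydatabase'), 1)   -- file id 1 = primary data file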
View 9 Replies
View Related
Jan 20, 2014
I have to delete a ton of data from a SQL table. I have a unique identifier called the version. I would like to say "if not in these versions then delete". I tried using the statement below, but learned the hard way that it created an error; this is the message I got...
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would use it here or how I would set up the statement.
DELETE FROM Products
WHERE Version NOT IN ('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
[code]....
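A sketch of the same delete done in batches so the log can truncate between chunks instead of holding one giant ACTIVE_TRANSACTION (table/column names follow the post; the IN list stands for your full keep-list):
Code Block
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM Products
    WHERE Version NOT IN ('48459CED-871F-4971-B888-5083990332BC',
                          'D550C8D3-58C7-4C74-841D-1C1675F19AE3' /* , ...the rest... */)
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT   -- under SIMPLE recovery this lets the log space recycle
END
TRUNCATE TABLE only helps if you can afford to empty the table completely and reload the keepers afterwards; it cannot take a WHERE clause.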
View 2 Replies
View Related
Feb 8, 2008
Hi,
I have a series of SSIS packages, all of which are ultimately executed by a parent package.
I'm consistently getting "OutOfMemory" errors when working with the packages, which is temporarily solved by closing Visual Studio and re-opening the package(s)... This solution is short-lived however, as the OutOfMemory error occurs quite quickly after re-opening, often after doing nothing other than altering a variable's default value and attempting to save the package.
The average size of the packages in question (.dtsx files) is around 7,000kb with the largest being 12,500kb. The total size of all the solution's packages is ~75,000kb.
The Processes tab in Task Manager shows a Mem Usage counter for devenv.exe *32 of around 20,000kb when Visual Studio is first opened however, when a single ~6,000kb dtsx file is opened this counter jumps to +300,000kb and when the entire solution is opened (When the parent package is executed), the Mem Usage counter for devenv.exe *32 is a massive +800,000kb!!!
Is this normal SSIS behaviour or do I have a major problem? Any tips or suggestions as to how to resolve this issue would be gratefully received.
FYI, "SELECT @@VERSION" gives me "Microsoft SQL Server 2005 - 9.00.3042.00 (X64) Feb 10 2007 00:59:02 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2) "
My Server is Windows Server 2003 R2 Enterprise x64 SP2 with 8GB of RAM.
Thanks in advance.
Leigh.
View 7 Replies
View Related
Feb 14, 2005
Hey all,
We are using SQL Server 7 on Win 2k and there are some DTS packages set up which empty some large tables (delete from) and then import some datafiles.
The imported files are about 13 GB and during the process the log file gets to about 10GB and then runs out of disk space.
Is there a trick to empty a table without logging it? (a la LOAD Replace from Null in DB2)?
How can I go about keeping the log file size down during this operation?
I think the DB is set to autocommit; the trunc. log on chkpt. option is set on, as is select into/bulk copy (although I'm reasonably sure we aren't availing of the bulk copy for the import).
Help? :)
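On the "empty a table without logging every row" part: TRUNCATE TABLE is the close analogue of LOAD Replace From Null. It logs only the page deallocations, so the log stays tiny; the caveats are that the table must not be referenced by foreign keys and it needs table-owner/dbo-level permissions. A sketch with a placeholder name:
Code Block
TRUNCATE TABLE dbo.ImportStaging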
View 2 Replies
View Related
Dec 4, 2006
Hello,
I am currently working on a project where I have to import a huge amount of data from CSV files into a database.
I don't want to have duplicate keys in my table, but my CSV file contains them. A line nearer the end of the file contains the more up-to-date information, and that is what I have to store.
I have been trying to fix this problem for several weeks, but my algorithm is very slow and blocks all other processes on the server. At the moment I am copying all records that occur more than once in the CSV file into a temp table. After that I run through this table line by line and check if the key already exists in the target table, and then either do an insert or an update.
Does somebody know a better process?
I hope somebody can help me... :(
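A set-based sketch that avoids the row-by-row loop, assuming the CSV is bulk-loaded into a staging table with an IDENTITY column (LineNo) so "later in the file wins" is expressible. All table/column names are placeholders.
Code Block
-- 1) refresh rows that already exist, using the last occurrence per key
UPDATE t
SET t.Value = s.Value
FROM dbo.Target t
JOIN dbo.Staging s ON s.KeyCol = t.KeyCol
WHERE s.LineNo = (SELECT MAX(s2.LineNo) FROM dbo.Staging s2 WHERE s2.KeyCol = s.KeyCol)

-- 2) insert the keys that don't exist yet, again last occurrence per key
INSERT INTO dbo.Target (KeyCol, Value)
SELECT s.KeyCol, s.Value
FROM dbo.Staging s
WHERE s.LineNo = (SELECT MAX(s2.LineNo) FROM dbo.Staging s2 WHERE s2.KeyCol = s.KeyCol)
  AND NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.KeyCol = s.KeyCol)
Two set-based statements typically run in a fraction of the time of a per-row loop and hold locks far more briefly.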
View 5 Replies
View Related
May 21, 2013
So I've stumbled across an audit table on one of our systems that has reached a hearty 180M rows in size.
The table is a heap (no indexes whatsoever).
Each record has a datetime value indicating when it was created.
I need to delete everything that was created prior to the last 6 months; what is my best plan of attack?
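With a heap that size and no indexes, copying the roughly six months you keep is usually far cheaper than deleting the other ~170M rows. A sketch (names assumed; SELECT INTO is minimally logged under SIMPLE or BULK_LOGGED recovery):
Code Block
SELECT *
INTO dbo.AuditLog_keep
FROM dbo.AuditLog
WHERE CreatedDate >= DATEADD(month, -6, GETDATE())

-- then, inside a maintenance window:
-- DROP TABLE dbo.AuditLog
-- EXEC sp_rename 'dbo.AuditLog_keep', 'AuditLog'
While you're at it, a clustered index on the datetime column would make the next purge a cheap range delete instead of a full-heap problem.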
View 12 Replies
View Related
Apr 10, 2007
Background
SQL Server 2005 Standard 9.0.1399 64bit
Windows 2003 64-bit
8gb RAM
RAID-1 70gb HD 15K SCSI (Log Files, OS)
RAID-10 1.08TB HD 10K SCSI (Data Files)
Runs approximately 800 transactions/second (_Total)
We deliver approximately 70-80 million ad views / day
8 Clustered Windows 2003 32-bit OS IIS Servers running Asp.net 2.0 websites
All 8 servers talking to the one SQL server via a private network (server backbone).
In SQL Server Profiler, I see the following SQL statements with durations of 2000 - 7000:
select top 1 keywordID, keyword, hits, photo, feed from dbo.XXXX where hits > 0 order by hits
and
UPDATE XXXX SET hits=1906342 WHERE keywordID = 7;
where the hits number is incremented by one each time that keyword ID is selected.
Sometimes these happen so frequently that the server stops accepting new connections, and I have to restart the SQL service or reboot.
Any ideas on why this is happening?
Regards,
Joe
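A hedged guess at some relief: the TOP 1 ... ORDER BY hits query has to scan or sort unless an index already serves rows in hits order, and it contends with the constant per-view UPDATEs. An index on hits turns the TOP 1 into a small seek (SQL 2005 syntax; dbo.XXXX follows the post, the INCLUDE list is assumed from the SELECT column list):
Code Block
CREATE NONCLUSTERED INDEX IX_XXXX_hits
ON dbo.XXXX (hits)
INCLUDE (keyword, photo, feed)
It may also help to write the increment as UPDATE XXXX SET hits = hits + 1 WHERE keywordID = 7 rather than sending a precomputed total, so the write doesn't depend on a value read earlier in the round-trip.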
View 6 Replies
View Related
May 3, 2007
I need some help. I have this massive SQL script; the problem is that when I tried to put it into the query string box in my SQL reports, it would not take it. The script will run if I break it up, but I think it is too large. Is there a SQL guru out there who can show me how to reduce the size of this script, maybe by using an output parameter to a stored procedure? I just don't know what to do, and I need to produce the report from this script. Below is the entire script.
SELECT 'Prior Year All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with Situation') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithSituationItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Not Completed') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'NotCompletedItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled Prior to service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledPriorToServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled During Service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledDuringServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'AwaitingforcompletionItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Pending for review') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like'%T ')) AS 'PendingforreviewItems'
FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID
WHERE (INVOICE_AR.AMOUNT_DUE > 0)AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12))AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%')AND (INVOICE_AR.INVOICE_NO like '%T')
GROUP BY JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE
UNION ALL
SELECT 'Current Year 2007 All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
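The script is truncated above, but the pattern is already clear: every correlated subquery repeats the same FROM/WHERE and varies only JOB_OUTCOME.DESCRIPTION. Conditional aggregation collapses all of them into the single pass the outer query already makes, which shrinks the script dramatically. A sketch of the idea, abridged to three outcomes (the WHERE mirrors the outer query's):
Code Block
SELECT
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed Successfully' THEN 1 END) AS CompletedSuccessfullyItems,
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with complaint' THEN 1 END) AS CompletedWithComplaintItems,
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with No Show' THEN 1 END) AS CompletedWithNoShowItems
FROM JOB
INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID
LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID
LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID
WHERE (INVOICE_AR.AMOUNT_DUE > 0)
  AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year,0,@startate) AND DATEADD(year,0,@endate))
  AND (PAYER.PAY_GROUPNAME LIKE '%' + @Company + '%')
  AND (INVOICE_AR.INVOICE_NO LIKE '%T')
COUNT counts only non-NULL values, so each CASE without an ELSE counts exactly the rows matching its outcome.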
View 8 Replies
View Related
Jan 28, 2008
I've got a large MS Sql Server 2000 database that has 15 indexes, with roughly 180 million rows representing 240 GB worth of data. Due to the massive size of the database we are trying to purge it down to a smaller dataset, about 40 million rows, in order to speed up the query performance and to be able to defrag the indexes (which are 30-50% fragmented). To complicate the matter, this table is also a publisher in a transactional replication setup, with one subscriber. Also, the system needs to be up constantly so I'm only allowed about a 3-5 hour period to take an outage a week.
So far I've tested several methods of deleting following all best practices (batch deletes, using indexes in the delete's where clause), and have settled on deleting/committing 500 rows at a time. The problem is that it still takes 3-4 seconds to delete this many rows, on an 8 GB RAM, 4-processor machine that is not currently used or replicated.
I'm at a loss on a way to pare down the data with a delete, as the current purge script would take 7 hours a day for about 3 months. Another option I'm considering is to do a truncate and copy the data back over from the replicated database, but again this has its own set of problems, i.e. network latency and slow insert times. Yet another option would be to create a replica of the table on the production db, copy the data to it, then rename the table.
Any one have experience with purging such a massive amount of data? Any help would be greatly appreciated.
View 6 Replies
View Related
Dec 19, 2003
I'm running a query -- actually it's an insert -- that works when using the T-SQL below.
However, when I try to use the debugger to step through, using the exact same values as those below, I get the following error:
[Microsoft][ODBC SQL Server Driver]Invalid character value for cast specification
It's killing me because everything else works but this. Can somebody help?
DECLARE @NoteID INT,-- NULL OUTPUT,
@Note_Description NVARCHAR(3000),-- = NULL,
@Date DateTime,-- = NULL OUTPUT,
@ByWho NVARCHAR(30),-- = NULL,
@FK_Action_Performed NVARCHAR(40),-- = NULL,
@FK_UserID INT,-- = NULL,
@FK_JobID INT,-- = NULL,
@Job_Date DateTime,-- = NULL,
@Start DateTime,-- = NULL,
@Finish DateTime,-- = NULL,
@BeenRead NVARCHAR(10),-- = NULL
@FK_UserIDList NVARCHAR(4000)-- = NULL
--SET @NoteID = 409 --NULL OUTPUT,
SET @Note_Description = 'Tetsing'
--SET @Date DateTime = NULL OUTPUT,
SET @ByWho = 'GeorgeAgaian'
SET @FK_Action_Performed = 'Worked hard'
SET @FK_UserID = 5
SET @FK_JobID = 29
SET @Job_Date = 28/01/03
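-- NB: 28/01/03 above is integer division (28 / 1 / 3 = 9), not a date literal;
-- a quoted value such as SET @Job_Date = '20030128' removes one implicit cast
-- that could plausibly be behind the ODBC cast error described above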
SET @Start = '1:00:20 PM'
SET @Finish = '1:00:20 PM'
SET @BeenRead = 'UnRead'
SET @FK_UserIDList = '1,2,3'
--AS
--SET NOCOUNT ON
SET NOCOUNT ON
SET XACT_ABORT ON
BEGIN TRANSACTION
SET @Date = GETDATE()
-- Insert Values into the customer table
INSERT Note (Note_Description,
Date,
ByWho,
FK_Action_Performed,
FK_UserID,
FK_JobID,
Job_Date,
Start,
Finish)
SELECT --@NoteID,
@Note_Description,
@Date,
@ByWho,
@FK_Action_Performed,
@FK_UserID,
@FK_JobID,
@Job_Date,
@Start,
@Finish
-- Get the new Customer Identifier, return as OUTPUT param
SELECT @NoteID = @@IDENTITY
-- Insert new notes for all the users that the note pertains to, in this case this will be by the assigned
-- users.
IF @FK_UserIDList IS NOT NULL
EXECUTE spInsertNotesByAssignedUsers @NoteID, @FK_UserIDList
-- Insert New Address record
-- Retrieve Address reference into @AddressId
-- EXEC spInsertForUserNote
-- @FK_UserID,
--@NoteID,
-- @BeenRead
-- @Fax,
-- @PKId,
-- @AddressId OUTPUT
COMMIT TRANSACTION
--------------------------------------------------
GO
View 1 Replies
View Related
May 28, 2008
OK, can someone tell me why I get two different answers for the same query? (Looking for the last day of the month for a given date.)
SELECT DATEADD(ms, - 3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0)) AS Expr1
FROM testsupplierSCNCR
I am getting the result of 01/01/2007,
but in Query Analyzer I get the result of
12/31/2006.
Why the different dates?
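One likely culprit, hedged since the two execution contexts aren't shown: the expression yields the datetime 2006-12-31 23:59:59.997, and anything that coerces it to smalldatetime (a table column, a tool's display conversion) rounds to the nearest minute, which pushes it into the next day. A worked demonstration:
Code Block
SELECT DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0))                        AS as_datetime,      -- 2006-12-31 23:59:59.997
       CAST(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0)) AS smalldatetime) AS as_smalldatetime  -- 2007-01-01 00:00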
View 4 Replies
View Related
Jan 22, 2001
Hi,
I get this error dialog when I try to open all the rows of any table from Enterprise Manager.
Any help would be really appreciated..
Thanks,
-Srini.
View 1 Replies
View Related
May 24, 2007
SQL Server 2005 9.0.3161 on Win 2k3 R2
I receive the following error:
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id,account_recv_info__id,amount)
VALUES (@journal_id, @acct_id, @acct_recv_id, @amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key because it is important to database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through management studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
View 5 Replies
View Related
Mar 28, 2007
Hey, I've written a query to search a database dependent on variables chosen by the user. I opened up a new SqlDataSource, entered the query shown below, and went on to the test query page. Entered some test variables; everything works as it should. Try to get it to show in a datagrid on a webpage - nothing. No data shows.
SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches
FROM dbo.MAKES INNER JOIN
dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID INNER JOIN
dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID INNER JOIN
dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID INNER JOIN
dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID
WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or
(ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or
(ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or
(ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)) )
GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID
HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END -2
ORDER BY count(*) DESC
Here is the page source
<%@ Page Language="VB" MasterPageFile="~/MasterPage.master" Title="Untitled Page" %>
<asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>"
SelectCommand="	SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches 	FROM dbo.MAKES INNER JOIN 				 dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID INNER JOIN 				 dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID INNER JOIN 				 dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID INNER JOIN 				 dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID 	WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)) ) 	GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID 	HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END -2 	ORDER BY count(*) DESC ">
<SelectParameters>
<asp:ControlParameter ControlID="DropDownList1" Name="ATT_ID1" PropertyName="SelectedValue" />
<asp:ControlParameter ControlID="TextBox1" Name="VAL1" PropertyName="Text" />
<asp:Parameter Name="ATT_ID2" />
<asp:Parameter Name="VAL2" />
<asp:Parameter Name="ATT_ID3" />
<asp:Parameter Name="VAL3" />
<asp:Parameter Name="ATT_ID4" />
<asp:Parameter Name="VAL4" />
</SelectParameters>
</asp:SqlDataSource>
<asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>"
SelectCommand="SELECT * FROM [ATTRIBUTES]"></asp:SqlDataSource>
<br />
<asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource2"
DataTextField="ATTRIBUTE_NAME" DataValueField="ATTRIBUTE_ID">
</asp:DropDownList>
<asp:TextBox ID="TextBox1" runat="server" AutoPostBack="True"></asp:TextBox><br />
<br />
<br />
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="DERIVATIVE_ID"
DataSourceID="SqlDataSource1">
<Columns>
<asp:BoundField DataField="DERIVATIVE_ID" HeaderText="DERIVATIVE_ID" InsertVisible="False"
ReadOnly="True" SortExpression="DERIVATIVE_ID" />
<asp:BoundField DataField="Matches" HeaderText="Matches" ReadOnly="True" SortExpression="Matches" />
</Columns>
</asp:GridView>
</asp:Content>
AFAIK I have configured the source to pick up the dropdownlist value and the textbox value (the text box is AutoPostBack).
Am I not submitting the data correctly? (It worked with a simple query... just not with this one.) I have tried a stored procedure, which works when testing, just not when it's live on a webpage.
Please help!
(Visual Web Devleoper 2005 Express and SQL Server Management Studio Express)
View 4 Replies
View Related
Aug 5, 2014
I have the following code.
SELECT _bvSerialMasterFull.SerialNumber, _bvSerialMasterFull.SNStockLink, _bvSerialMasterFull.SNDateLMove, _bvSerialMasterFull.CurrentLoc,
_bvSerialMasterFull.CurrentAccLink, _bvSerialMasterFull.StockCode, _bvSerialMasterFull.CurrentAccount, _bvSerialMasterFull.CurrentLocationDesc,
_bvSerialNumbersFull.SNTxDate, _bvSerialNumbersFull.SNTxReference, _bvSerialNumbersFull.SNTrCodeID, _bvSerialNumbersFull.SNTransType,
_bvSerialNumbersFull.SNWarehouseID, _bvSerialNumbersFull.TransAccount, _bvSerialNumbersFull.TransTypeDesc,
[code]...
However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done in the original query without the need to duplicate it.
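A common technique for exactly this: name the inner query once in a CTE (SQL 2005+) and reference it twice. A sketch; the column list comes from the post, but the self-join condition here is an assumption standing in for the real one.
Code Block
;WITH SerialData AS
(
    SELECT SerialNumber, SNStockLink, SNDateLMove, CurrentLoc
    FROM _bvSerialMasterFull
)
SELECT a.SerialNumber, a.CurrentLoc, b.SNDateLMove
FROM SerialData a
JOIN SerialData b ON b.SNStockLink = a.SNStockLink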
View 2 Replies
View Related
Jun 15, 2007
I'm trying to find the command to open up an ODBC connection inside SQL 2005 Express. I only have use of an ODBC connector; we're connecting to Remedy. We will eventually be using stored procedures to extract the data we need from Remedy and do additional data crunching. I'm a FoxPro programmer, so once I get the correct syntax for making the ODBC connection I should be OK. Also, I need a really good advanced book on SQL 2005 -- the type of book that would have my ODBC answer. I've spent all morning trying to find this information and was unable to.
Thanks in advance
Daniel Buchanan.
If this was the wrong forum to post this on, please move this question to the correct one. I need this answer soon.
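A sketch of the usual route for reaching an ODBC-only source from SQL Server: create a linked server over MSDASQL (the OLE DB provider for ODBC), then query it with OPENQUERY. 'RemedyDSN' is an assumed system DSN name, and the table name is illustrative.
Code Block
EXEC sp_addlinkedserver
     @server     = 'REMEDY',
     @srvproduct = '',
     @provider   = 'MSDASQL',
     @datasrc    = 'RemedyDSN'
-- add credentials with sp_addlinkedsrvlogin if the DSN doesn't carry them

SELECT * FROM OPENQUERY(REMEDY, 'SELECT * FROM some_remedy_table')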
View 1 Replies
View Related
Jul 19, 2015
We have an issue with an MDS server that has been hanging over us for a couple of days. The original error msg from the SQL Server engine is "The query processor could not produce a query plan", but the ones we get in the Excel add-in are "Sequence contains no elements" or "The value cannot be null".
• Using Microsoft SQL Server 2012 (SP1) - 11.0.3393.0 (X64) for 6 months on this server without issues
• Two weeks ago we started to have 2 errors: "Sequence Contains No Elements" | "The Value Cannot Be Null"
• We are using the last version of Excel Add-in
• We try to reinstall the MDS feature
• If I backup/restore MDS database to other server it works
• We updated to SQL 2012 SP2 + CU4 but the error persisted ...
Looking at the MDSTraceLog we are routed to this msg:
SQL Error Debug Info: Number: 8624, Message: Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services., Server: bbdvsql03inst01, Proc: udpMetadataEntityGetDetailsXML, Line: 28
At line 28 udpMetadataEntityGetDetailsXML is calling udfMetadataEntityGetDetailsXML … and here is where we stopped
** Error found when trying to get data from an entity using the Excel add-in **
===================================
Sequence contains no elements
------------------------------
Program Location:
  at Microsoft.MasterDataServices.AsyncEssentials.AsyncResultBase.EndInvoke()
  at Microsoft.MasterDataServices.ExcelAddInCore.AsyncProviderBase`1.EndOperation(IAsyncResult ar)
[code]....
View 3 Replies
View Related
Jun 26, 2015
How do I get the variables in the cursor's SET statement to NOT update the temp table with the name of the column? I want it to pull a date, not the column name stored in the variable...
create table #temptable (columname varchar(150), columnheader varchar(150), earliestdate varchar(120), mostrecentdate varchar(120))
insert into #temptable
SELECT ColumnName, headername, '', '' FROM eddsdbo.[ArtifactViewField] WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10
--column name
declare @cname varchar(30)
[code]...
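Because the column name lives in a variable, a plain SET sees it as a string; the statement has to be built dynamically and run with sp_executesql so the variable becomes an actual column reference. A sketch; eddsdbo.Document is an assumed source table for illustration, and QUOTENAME guards the injected name.
Code Block
DECLARE @cname sysname, @sql nvarchar(max)
SET @cname = N'CreatedOn'   -- in the cursor this value comes from columname
SET @sql = N'UPDATE #temptable
             SET earliestdate = (SELECT CONVERT(varchar(120), MIN(' + QUOTENAME(@cname) + N'), 120)
                                 FROM eddsdbo.Document)
             WHERE columname = @col'
EXEC sp_executesql @sql, N'@col varchar(150)', @col = @cname
The temp table created in the outer batch is visible inside the sp_executesql scope, so the loop body needs no other changes.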
View 4 Replies
View Related