SQL 7.0 Massive Row Locking Performance
Mar 3, 2000
When updating a large set of rows one at a time, performance lags far behind 6.5. PeopleSoft uses cursors with a begin transaction, a loop inside, and a commit after the loop completes. SQL 6.5 with page locking could handle a 300,000-row transaction in 3-4 hours; 7.0 took 17.5 hours. The difference is that 6.5 used 50,000 locks while 7.0 used 300,000 locks.
Does anybody have a solution short of rewriting PeopleSoft?
View 2 Replies
Jul 23, 2005
Dear Gurus, a client has the following problems/requests for their production databases. What is your professional/practical advice to tackle and resolve these issues:
1) Number of transactions per day: the current database can handle about 5,000 order transactions per day.
2) Table locking: when doing row updates, SQL Server will lock the whole table. For example, after a few instances of a driver updating an order record it will lock the entire orders table.
3) Reports: when a user runs a report involving large quantities of order-related data, other processes suffer occasional timeouts.
4) Archiving: there is no efficient way of archiving historical system data.
View 1 Replies
Oct 15, 2004
Currently I have huge amounts of data going into a table.
I'm sending an XML doc and using OPENXML with a cursor to seed the rows.
The question I have is whether to let duplicate-keyed rows bounce, check @@ERROR, and then do an update on the non-key fields,
or
to do a select on the keyed field first and then insert or update based on the select's results.
Speed is my goal.
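For reference, a hedged sketch of the first option: attempt the insert and fall back to an update when the key collides (table, column, and variable names are placeholders):
Code Block
INSERT INTO dbo.Target (KeyCol, DataCol)
VALUES (@key, @data)
IF @@ERROR = 2627  -- 2627 = unique/primary key violation
    UPDATE dbo.Target SET DataCol = @data WHERE KeyCol = @key
As a rule of thumb, insert-first tends to win when duplicates are rare and select-first when they are common, so it is worth timing both against your real duplicate ratio.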
View 3 Replies
Feb 6, 2007
Rather than posting twice, I thought I would put both issues I'm having in one. Our server is Windows Server 2003 and we're running SQL Server 2005.
The first issue is this: We have several databases and I have scheduled their backups to run nightly, which works just fine. A couple of weeks ago, one database's .bak file grew from about 500 MB to 2 GB overnight. Then, just a few days ago, it went from 2 GB to 3.5 GB. There is nothing unusual going on in the live db that would warrant such an increase in the .bak file. All the dbs are in the same backup job schedule, but this is the only one affected. Additionally, I had autogrowth enabled on all the dbs, but today disabled it for this particular db. Any ideas?
The second issue is my tempdb.mdf file on my C drive. It will go from just a few hundred KB to 4.5 GB overnight, consuming most of what is left on my C drive. I'm afraid I'm in for a system crash if it continues. I have to stop SQL Server and restart it to clear the size. Is there a way to move the tempdb.mdf file to my F drive?
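On the tempdb question: yes, the files can be relocated with ALTER DATABASE, and the change takes effect at the next service restart (the logical names below are the defaults; verify yours with sp_helpfile, and the F: path is a placeholder):
Code Block
USE master
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLData\templog.ldf')
-- Restart the SQL Server service; tempdb is recreated at the new path.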
I don't know if these two issues are related or not but certainly would like to hear from someone.
Sorry, in advance, for the large post.
Dave
View 20 Replies
Sep 6, 2005
Hello, I have a huge database (2 GB / month) and after a while it becomes non-operational (time-outs, etc.). So I have written a SQL statement (a delete) that can reduce the db size by around 60% without compromising the application's data needs. The problem is that when I execute it, the db does shrink by 60%, but the transaction log grows at the same rate. Can I execute the statement in some mode that prevents SQL Server from writing to the log? Thanks for the help! Antonio
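A fully logged delete can't skip the log, but a hedged workaround is to delete in batches and truncate the log between batches, so the log never has to hold the whole operation (SQL Server 2000-era syntax; the table name and predicate are placeholders):
Code Block
SET ROWCOUNT 10000               -- limit each DELETE to 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable WHERE CreatedDate < '20050101'
    IF @@ROWCOUNT = 0 BREAK
    BACKUP LOG MyDb WITH TRUNCATE_ONLY  -- discard inactive log (breaks the log-backup chain)
END
SET ROWCOUNT 0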
View 6 Replies
Apr 5, 2006
Hi Everyone,
We have a large and active MSSQL 2000 database. Recently, after a rebuild of the server, we had a problem with the SQLSERVERAGENT service. The service could not start because the service account had lost local permission to the registry. During this time, all of the data being sent to the database from our application accumulated in the database .ldf file. By the time we were able to get the service restarted, our .ldf file was approx. 28 GB. When the service restarted, the .ldf file shrank back to its regular size of about 40 MB, and the .trn log backup file for that specific period grew to 28 GB (new file every hour).
The problem is, the database file (database.mdf) stayed about the same size as before the service was restarted. When the .ldf contents were transferred to the .trn file, none of the 28 GB of data got stored in the database. What does this mean? Perhaps with the service stopped, the application using the db saw problems and did not commit the data, making it all useless? Or is it possible that the data in the .trn log just needs to be forced to commit to the .mdf?
Is there any way to verify the data in the 28 GB .trn file and figure out whether we should get it stored to the database? If yes, how would we go about verifying it, and after that how would we force it to commit to the .mdf file? Am I on the right track here, or is it not as I see it?
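For what it's worth, a hedged way to inspect a log backup without touching production is RESTORE HEADERONLY, and restoring it onto a scratch database shows what it actually contains (the file paths are placeholders):
Code Block
-- Show the backup set details (type, LSN range, dates) in the file
RESTORE HEADERONLY FROM DISK = 'E:\Backups\LargeLog.trn'
-- To see the data itself, restore the matching full backup WITH NORECOVERY
-- on a test server, apply this log backup, then recover:
-- RESTORE DATABASE TestCopy FROM DISK = 'E:\Backups\Full.bak' WITH NORECOVERY
-- RESTORE LOG TestCopy FROM DISK = 'E:\Backups\LargeLog.trn' WITH RECOVERY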
Thanks!
Mike
View 4 Replies
Jul 2, 2006
Hi
Would you say that it's OK for a web site's code to make ALL of its access to a db through stored procedures and views? And I mean everything, including inserting new records and updating others, with no SQL in the code.
The advantage would be very strict control over the access, but achieving this would take many, many SPs and views to cover all types of actions. Can you think of a disadvantage besides all the work of creating those SPs? What about server resources and performance, how demanding would it be?
Thanks,
Inon.
View 7 Replies
Jan 3, 2006
Hello,
I thought this was a neat solution I came up with, but I'm sure it's
been thought of before. Anyway, it's my first post here.
We have a process for importing data which generates a SELECT statement
based on user's stored configuration. Since the resulting SELECT statement
can be massive, it's created and stored in a text field in a temp table.
So how do I run this huge query after creating it? In my tests, I was
getting a datalength > 20000, requiring 3 varchar(8000) variables in
order to use the execute command. Thing is, I don't know how big it could
possibly get; I wanted to be able to execute it regardless.
Here's what I came up with, it's very simple:
Table is named #IMPORTQUERY, one field SQLTEXT of type TEXT.
>>
declare @x int, @s varchar(8000)
-- @x = number of 8000-character chunks needed; @s starts as the innermost execute()
select @x = datalength(sqltext) / 8000 + 1, @s = 'execute('''')' from #importquery
while @x > 0
    -- Working backwards from the last chunk, prepend code that declares and
    -- fills one more varchar(8000) piece, and splice its variable into the
    -- growing execute() concatenation.
    select @s = 'declare @s' + cast(@x as varchar) + ' varchar(8000) ' +
        'select @s' + cast(@x as varchar) +
        '=substring(sqltext,@x,8000),@x=@x+8000 from #importquery ' +
        replace(@s,'execute(','execute(@s' + cast(@x as varchar) + '+')
        , @x = @x - 1
set @s = 'declare @x int set @x=1 ' + @s
execute(@s)
<<
At the end, I execute the "@s" variable which is SQL that builds and
executes the massive query. Here's what @s looks like at the end:
>>
declare @x int set @x=1
declare @s1 varchar(8000)
select @s1=substring(sqltext,@x,8000),@x=@x+8000 from #importquery
declare @s2 varchar(8000)
select @s2=substring(sqltext,@x,8000),@x=@x+8000 from #importquery
declare @s3 varchar(8000)
select @s3=substring(sqltext,@x,8000),@x=@x+8000 from #importquery
execute(@s1+@s2+@s3+'')
<<
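As a hedged footnote: on SQL Server 2005 or later the chunking becomes unnecessary, because a single varchar(max) variable can hold and execute the whole statement:
>>
declare @sql varchar(max)
select @sql = sqltext from #importquery
execute(@sql)
<<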
View 4 Replies
Jul 23, 2005
Our database server has started acting weird, and at this point I'm either too sleep-deprived or too close to the problem to adequately diagnose the issue. Basically, to put it simply: when I look at the read disk queue length, the disk queues are astronomical.
Normally we see a disk queue length of 0-1 on the disks that contain the DB data and indexes (i.e. non-clustered indexes are on a disk of their own). Writes are just fine.
Problem is, all our databases are on the same drive, and I can't seem to nail down which DB, let alone which table, is the source of all our reads.
Now, to really make things weirder: during the busier times of the day today (say 1:00 PM to 4:00 PM) things were fine. At 4:20 PM or so it was like someone hit a switch, and read disk queue length jumped from 0-1 up to 100-200+, with spikes up to 1500 for a split second or so.
What's the best way folks know to nail this down? Thanks.
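One hedged starting point for the "which database is reading" question on SQL Server 2000 is fn_virtualfilestats, which reports cumulative reads per database file; snapshot it twice a few minutes apart and diff NumberReads (the database name is a placeholder, and the file IDs come from sysfiles):
Code Block
DECLARE @db int
SET @db = DB_ID('MyDb')
SELECT DB_NAME(DbId) AS db, FileId, NumberReads, BytesRead
FROM ::fn_virtualfilestats(@db, 1)   -- repeat for each FileId in sysfiles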
View 9 Replies
Dec 29, 2007
If I remove the TOP 200, this query returns about 2.5 million rows. It combines a lot of records and turns them into much more programmer-friendly results. The query has slowed from 2 seconds to about 13 seconds as the data has grown from about 10k rows to the current couple of million.
Code Block
SELECT TOP 200 *
FROM
(
SELECT
[UserProfile].[UserId]
,[aspnet_Users].[UserName]
,[City]
,[State]
,[RoleName]
,[ProfileItemType].[Name] AS pt_name
,[ProfileItem].[Value]
FROM
[UserCriteria]
,[aspnet_Users]
,[aspnet_Roles]
,[aspnet_UsersInRoles]
,[Location]
,[ProfileType]
,[ProfileTypeItem]
,[ProfileItem]
INNER JOIN [UserProfile]
ON [ProfileItem].[ProfileId] = [UserProfile].[ProfileId]
INNER JOIN [ProfileItemType]
ON [ProfileItem].[ProfileItemTypeId] = [ProfileItemType].[ProfileItemTypeId]
WHERE [UserProfile].[UserId] IN (
SELECT [UserCriteria].[UserId]
FROM [UserCriteria]
WHERE
Zipcode IN (
SELECT [Zipcode]
FROM [ZipcodeProximitySQR] ('89108' , 150))
)
AND [UserProfile].[UserId] = [aspnet_Users].[UserId]
AND [UserCriteria].[UserId] = [UserProfile].[UserId]
AND [Location].[Zipcode] = [UserCriteria].[Zipcode]
AND [aspnet_UsersInRoles].[UserId] = [aspnet_Users].[UserId]
AND [aspnet_UsersInRoles].[RoleId] = [aspnet_Roles].[RoleId]
) AS t
PIVOT
(
MIN([Value])
FOR pt_name IN ([field1],[field2],[field3],[field4])
) AS pvt
ORDER BY RoleName DESC, NEWID()
The line FOR pt_name IN ([field1],[field2],[field3],[field4]) originally used the long column names; I changed them to read field1, field2, ... because the real names were irrelevant but confusing.
Here is the showplan text
Code Block
|--Sequence
|--Table-valued function(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
|--Top(TOP EXPRESSION:((200)))
|--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId], [aous].[dbo].[aspnet_Users].[UserName], [aous].[dbo].[Location].[City], [aous].[dbo].[Location].[State], [aous].[dbo].[UserCriteria].[Birthdate], [aous].[dbo].[aspnet_Roles].[RoleName]) DEFINE:([Expr1039]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'height' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1040]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'bodyType' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1041]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'hairColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END), [Expr1042]=MIN(CASE WHEN [aous].[dbo].[ProfileItemType].[Name]=N'eyeColor' THEN [aous].[dbo].[ProfileItem].[Value] ELSE NULL END)))
|--Nested Loops(Inner Join)
|--Nested Loops(Inner Join)
| |--Sort(ORDER BY:([aous].[dbo].[UserCriteria].[UserId] ASC, [aous].[dbo].[Location].[City] ASC, [aous].[dbo].[Location].[State] ASC, [aous].[dbo].[UserCriteria].[Birthdate] ASC, [aous].[dbo].[aspnet_Roles].[RoleName] ASC))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserCriteria].[Zipcode])=([Expr1043]), RESIDUAL:([Expr1043]=[aous].[dbo].[UserCriteria].[Zipcode]))
| | |--Hash Match(Inner Join, HASH:([aous].[dbo].[ProfileItemType].[ProfileItemTypeId])=([aous].[dbo].[ProfileItem].[ProfileItemTypeId]))
| | | |--Index Scan(OBJECT:([aous].[dbo].[ProfileItemType].[ProfileTypes]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[ProfileId]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserProfile].[UserId]))
| | | | | |--Hash Match(Inner Join, HASH:([aous].[dbo].[UserProfile].[UserId])=([aous].[dbo].[aspnet_UsersInRoles].[UserId]), RESIDUAL:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[aspnet_UsersInRoles].[UserId]))
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | |--Stream Aggregate(GROUP BY:([aous].[dbo].[UserCriteria].[UserId]))
| | | | | | | | |--Nested Loops(Left Semi Join, WHERE:([aous].[dbo].[UserCriteria].[Zipcode]=[Expr1044]))
| | | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId] < {guid'E3D72D56-731A-410E-BCB1-07A87A312137'} OR [aous].[dbo].[UserCriteria].[UserId] > {guid'E3D72D56-731A-410E-BCB1-07A87A312137'}), WHERE:([aous].[dbo].[UserCriteria].[Male]=(1) AND [aous].[dbo].[UserCriteria].[SeekingMale]=(0)) ORDERED FORWARD)
| | | | | | | | |--Compute Scalar(DEFINE:([Expr1044]=CONVERT_IMPLICIT(nvarchar(5),[aous].[dbo].[ZipcodeProximitySQR].[Zipcode],0)))
| | | | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[ZipcodeProximitySQR].[PK__ZipcodeProximity__5E54FF49]))
| | | | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserProfile].[UserProfileIds]), SEEK:([aous].[dbo].[UserProfile].[UserId]=[aous].[dbo].[UserCriteria].[UserId]) ORDERED FORWARD)
| | | | | | |--Nested Loops(Inner Join, OUTER REFERENCES:([aous].[dbo].[aspnet_Roles].[RoleId]))
| | | | | | |--Clustered Index Scan(OBJECT:([aous].[dbo].[aspnet_Roles].[aspnet_Roles_index1]))
| | | | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_UsersInRoles].[aspnet_UsersInRoles_index]), SEEK:([aous].[dbo].[aspnet_UsersInRoles].[RoleId]=[aous].[dbo].[aspnet_Roles].[RoleId]) ORDERED FORWARD)
| | | | | |--Clustered Index Seek(OBJECT:([aous].[dbo].[UserCriteria].[UserCriteria]), SEEK:([aous].[dbo].[UserCriteria].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | | |--Index Seek(OBJECT:([aous].[dbo].[aspnet_Users].[_dta_index_aspnet_Users_5_37575172__K2_K1_K4_3]), SEEK:([aous].[dbo].[aspnet_Users].[UserId]=[aous].[dbo].[UserProfile].[UserId]) ORDERED FORWARD)
| | | |--Index Seek(OBJECT:([aous].[dbo].[ProfileItem].[_dta_index_ProfileItem_5_1714105147__K2_K1_K3_4]), SEEK:([aous].[dbo].[ProfileItem].[ProfileId]=[aous].[dbo].[UserProfile].[ProfileId]) ORDERED FORWARD)
| | |--Compute Scalar(DEFINE:([Expr1043]=CONVERT_IMPLICIT(nchar(5),[aous].[dbo].[Location].[Zipcode],0)))
| | |--Index Scan(OBJECT:([aous].[dbo].[Location].[CityLocation]))
| |--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileType].[PKProfileTypeProfileTypeId]))
|--Clustered Index Scan(OBJECT:([aous].[dbo].[ProfileTypeItem].[ProfileTypeItem]))
Here is a link to the execution plan from Microsoft SQL Server Management Studio.
http://epi.cc/BasicUserSearch.zip
There are no table scans, but the Hash Match from the inner join is pretty bad.
Can anyone give me a pointer or two?
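One hedged pointer from the plan itself: both hash joins sit on top of CONVERT_IMPLICIT over Zipcode (nchar(5) versus nvarchar(5)), and implicit conversions on join columns tend to force scans and hash matches instead of seeks. Aligning the zipcode type across the tables and the function is worth testing; a sketch with types assumed from the plan (verify nullability and dependencies first):
Code Block
-- Pick one type for Zipcode everywhere it appears in the join
ALTER TABLE [Location] ALTER COLUMN [Zipcode] nvarchar(5)
ALTER TABLE [UserCriteria] ALTER COLUMN [Zipcode] nvarchar(5)
-- ...and have ZipcodeProximitySQR return the same type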
View 1 Replies
Jan 20, 2014
I have to delete a ton of data from a SQL table. I have a unique identifier called the version, and I would like to say "if not in these versions, then delete". I tried the statement below, but learned the hard way that it raises an error; this is the message I got:
Msg 9002, Level 17, State 4, Line 3...
The transaction log for database 'MonthEnds' is full due to 'ACTIVE_TRANSACTION'.
I was reading about TRUNCATE, but I am not sure how I would use it here or how I would set up the statement.
Delete Products
where versions were not in (('48459CED-871F-4971-B888-5083990332BC','D550C8D3-58C7-4C74-841D-1C1675F19AE3','C77C7817-3F04-4145-98D3-37BB1610DB35',
'21FE83FA-476D-4604-80EF-2ED57DEE2C16','F3B50B81-191A-4D71-A406-011127AEFBE1','EFBD48E7-E30F-4047-909E-F14DCAEA4181','BD9CCC41-D696-406B-
'C8BEBFBC-D362-4D0F-A555-B281FC2B3023','EFA64956-C2CF-41FC-8E21-F060597DAFCB','77A8DE56-6F7F-4490-8BED-AA6809B947EF','0F4C1E5F-B689-4DCB-
[code]....
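A hedged sketch of the usual workaround: the single NOT IN delete is one giant transaction, which is exactly what fills the log. Staging the keeper versions and deleting in batches lets each batch commit and the log space recycle (SQL Server 2005+ syntax; #KeepVersions and the column names are placeholders):
Code Block
-- #KeepVersions holds the list of version GUIDs to keep
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) P
    FROM Products AS P
    WHERE NOT EXISTS (SELECT 1 FROM #KeepVersions K WHERE K.Version = P.Version)
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT  -- under SIMPLE recovery this lets log space be reused
END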
View 2 Replies
Feb 8, 2008
Hi,
I have a series of SSIS packages, all of which are ultimately executed by a parent package.
I'm consistently getting "OutOfMemory" errors when working with the packages, which is temporarily solved by closing Visual Studio and re-opening the package(s). This solution is short-lived, however, as the OutOfMemory error recurs quite quickly after re-opening, often after doing nothing more than altering a variable's default value and attempting to save the package.
The average size of the packages in question (.dtsx files) is around 7,000kb with the largest being 12,500kb. The total size of all the solution's packages is ~75,000kb.
The Processes tab in Task Manager shows a Mem Usage counter for devenv.exe *32 of around 20,000 KB when Visual Studio is first opened. However, when a single ~6,000 KB dtsx file is opened this counter jumps to over 300,000 KB, and when the entire solution is opened (when the parent package is executed), the Mem Usage counter for devenv.exe *32 is a massive 800,000+ KB!
Is this normal SSIS behaviour or do I have a major problem? Any tips or suggestions as to how to resolve this issue would be gratefully received.
FYI, "SELECT @@VERSION" gives me "Microsoft SQL Server 2005 - 9.00.3042.00 (X64) Feb 10 2007 00:59:02 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2) "
My Server is Windows Server 2003 R2 Enterprise x64 SP2 with 8GB of RAM.
Thanks in advance.
Leigh.
View 7 Replies
Feb 14, 2005
Hey all,
We are using SQL Server 7 on Win 2k and there are some DTS packages set up which empty some large tables (delete from) and then import some datafiles.
The imported files are about 13 GB and during the process the log file gets to about 10GB and then runs out of disk space.
Is there a trick to empty a table without logging it? (a la LOAD Replace from Null in DB2)?
How can I go about keeping the log file size down during this operation?
I think the DB is set to autocommit; the trunc. log on chkpt. option is on, as is select into/bulk copy (although I'm reasonably sure we aren't availing of bulk copy for the import).
Help? :)
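On the "empty a table without logging it" question: TRUNCATE TABLE is the SQL Server analogue. It logs page deallocations rather than individual rows, so it barely touches the log; the caveats are that the table can't be referenced by foreign keys or be part of replication. A minimal sketch (the table name is a placeholder):
Code Block
-- Replaces the DELETE FROM step before the import
TRUNCATE TABLE dbo.ImportStaging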
View 2 Replies
Dec 4, 2006
Hello,
I am currently working on a project where I have to import a huge amount of data from CSV files into a database.
I don't want duplicate keys in my table, but my CSV file contains them. The line nearer the end of the file contains the more up-to-date information that I have to store.
I have been trying to fix this problem for several weeks, but my algorithm is very slow and blocks all other processes on the server. At the moment I copy into a temp table all records that occur more than once in the CSV file. After that I run through this table line by line, check whether the key already exists in the target table, and then either insert or update.
Does somebody know a better process?
I hope somebody can help me... :(
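A hedged set-based alternative: bulk-load the whole CSV into a staging table with a line number, keep only the last row per key, and then apply it in one update plus one insert instead of row by row (SQL Server 2005+ syntax; all names are placeholders):
Code Block
-- Keep the last occurrence of each key
SELECT Key1, Col1
INTO #Latest
FROM (
    SELECT Key1, Col1,
           ROW_NUMBER() OVER (PARTITION BY Key1 ORDER BY LineNo DESC) AS rn
    FROM dbo.CsvStaging
) AS s
WHERE rn = 1

UPDATE t SET t.Col1 = l.Col1
FROM dbo.Target t JOIN #Latest l ON l.Key1 = t.Key1

INSERT INTO dbo.Target (Key1, Col1)
SELECT l.Key1, l.Col1
FROM #Latest l
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Key1 = l.Key1)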
View 5 Replies
May 21, 2013
So I've stumbled across an audit table on one of our systems that has reached a hearty 180M rows in size.
The table is a heap (no indexes whatsoever).
Each record has a datetime value indicating when it was created.
I need to delete everything that was created prior to the last 6 months; what is my best plan of attack?
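A hedged sketch of one common plan of attack: put a one-off index on the datetime column so each batch can seek, then delete in batches so the log never holds the whole operation (SQL Server 2008+ syntax; names are placeholders). If far more rows go than stay, the alternative is to copy the keepers to a new table and truncate the old one:
Code Block
CREATE INDEX IX_Audit_Created ON dbo.AuditLog (CreatedDate)

DECLARE @cutoff datetime = DATEADD(MONTH, -6, GETDATE())
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.AuditLog WHERE CreatedDate < @cutoff
    IF @@ROWCOUNT = 0 BREAK
END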
View 12 Replies
Apr 10, 2007
Background
SQL Server 2005 Standard 9.0.1399 64bit
Windows 2003 64-bit
8gb RAM
RAID-1 70gb HD 15K SCSI (Log Files, OS)
RAID-10 1.08TB HD 10K SCSI (Data Files)
Runs approximately 800 transactions/second (_Total)
We deliver approximately 70-80 million ad views / day
8 Clustered Windows 2003 32-bit OS IIS Servers running Asp.net 2.0 websites
All 8 servers talking to the one SQL server via a private network (server backbone).
In SQL Server Profiler, I see the following SQL statements with durations of 2000 - 7000:
select top 1 keywordID, keyword, hits, photo, feed from dbo.XXXX where hits > 0 order by hits
and
UPDATE XXXX SET hits=1906342 WHERE keywordID = 7;
The hits number is incremented by one each time that keyword ID is selected.
Sometimes these happen so frequently that the server stops accepting new connections, and I have to restart the SQL server or reboot.
Any ideas on why this is happening?
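One hedged observation on the UPDATE shown: computing the new hits value in the application and writing it back turns every ad view into a read-modify-write race and holds locks across the round trip. An atomic in-place increment avoids both:
Code Block
UPDATE dbo.XXXX SET hits = hits + 1 WHERE keywordID = 7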
Regards,
Joe
View 6 Replies
May 3, 2007
I need some help. I have a massive SQL script, and the problem is that when I tried to put it into the query string box in my SQL reports, it would not take it. The script will run if I break it up, but I think it is too large. Is there a SQL guru out there who can show me how to reduce the size of this script, maybe by using an output parameter to a stored procedure? I just don't know what to do, and I need to produce the report from this script. Below is the entire script.
SELECT 'Prior Year All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with Situation') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithSituationItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Not Completed') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'NotCompletedItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled Prior to service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledPriorToServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Cancelled During Service') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CancelledDuringServiceItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'AwaitingforcompletionItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Pending for review') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like'%T ')) AS 'PendingforreviewItems'
FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID
WHERE (INVOICE_AR.AMOUNT_DUE > 0)AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (year,0,@startate) and DATEADD(year,0,@endate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12))AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%')AND (INVOICE_AR.INVOICE_NO like '%T')
GROUP BY JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE
UNION ALL
SELECT 'Current Year 2007 All ' as 'qtr', COUNT(JOB.JOBID) AS 'transcount', COUNT(DISTINCT JOB.PATIENTID) AS 'patientcount', SUM(JOB.TRANSPORTATION_TCOST) AS 'tcost', SUM(JOB.TRANSPORTATION_DISC_COST) AS 'dtcost', AVG(JOB.TRANSPORTATION_DISC) AS 'avgTDisc', SUM(JOB.TRANSPORTATION_TCOST) + SUM(JOB.TRANSPORTATION_DISC_COST) AS 'TGrossAmtBilled', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(DISTINCT JOB.PATIENTID) AS 'PatAvgT', SUM(JOB.TRANSPORTATION_DISC) AS 'avgPercentDiscT', SUM(JOB.TRANSPORTATION_TCOST) / COUNT(JOB.JOBID) AS 'RefAvgT', JOB.JURISDICTION, PAYER.PAY_GROUPNAME, PAYER.PAY_COMPANY, PAYER.PAY_CITY, PAYER.PAY_STATE, PAYER.PAY_SALES_STAFF_ID, JOB.PATIENTID, JOB.INVOICE_DATE, JOB.JOBOUTCOMEID, JOB.SERVICEOUTCOME, INVOICE_AR.INVOICE_NO, INVOICE_AR.INVOICE_DATE AS Expr1, INVOICE_AR.AMOUNT_DUE, INVOICE_AR.CLAIMNUMBER, PATIENT.LASTNAME, PATIENT.FIRSTNAME, PATIENT.EMPLOYERNAME, JOB_OUTCOME.DESCRIPTION, SERVICE_TYPE.DESCRIPTION, PAT_SERVICES_HISTORY.TRANSPORT_TYPE,
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed Successfully') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startDate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedSuccessfullyItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with complaint') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithComplaintItems', (SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Show') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoShowItems',
(SELECT COUNT(JOB.JOBOUTCOMEID) FROM JOB INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID LEFT OUTER JOIN PAYER ON PAYER.PAYERID = JOB.PAYERID LEFT OUTER JOIN STATES ON JOB.JURISDICTION = STATES.INITIALS LEFT OUTER JOIN PATIENT ON PATIENT.PATIENTID = JOB.PATIENTID LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID LEFT OUTER JOIN SERVICE_TYPE ON SERVICE_TYPE.DESCRIPTION = JOB.SERVICEOUTCOME LEFT OUTER JOIN PAT_SERVICES_HISTORY ON PAT_SERVICES_HISTORY.PATIENTID = JOB.PATIENTID WHERE (JOB_OUTCOME.DESCRIPTION = 'Completed with No Charge') AND (INVOICE_AR.AMOUNT_DUE > 0) AND (INVOICE_AR.INVOICE_DATE BETWEEN DATEADD (@startdate) and DATEADD(@enddate)) AND (MONTH(INVOICE_AR.INVOICE_DATE) in (1,2,3,4,5,6,7,8,9,10,11,12)) AND (PAYER.PAY_GROUPNAME like '%' + @Company + '%') AND (INVOICE_AR.INVOICE_NO like '%T')) AS 'CompletedWithNoChargeItems',
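The script is truncated above, but the repeated pattern is clear: each outcome count re-runs the same seven-table join with only the JOB_OUTCOME.DESCRIPTION filter changed. A hedged sketch of collapsing those correlated subqueries into one pass with conditional aggregation, which also shrinks the script enough to fit in the report's query box (same tables and the script's own parameter names assumed):
Code Block
SELECT
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed Successfully' THEN 1 END) AS CompletedSuccessfullyItems,
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with complaint' THEN 1 END) AS CompletedWithComplaintItems,
    COUNT(CASE WHEN JOB_OUTCOME.DESCRIPTION = 'Completed with No Show' THEN 1 END) AS CompletedWithNoShowItems
    -- ...one line per outcome, instead of one correlated subquery each
FROM JOB
INNER JOIN INVOICE_AR ON JOB.JOBID = INVOICE_AR.JOBID
LEFT OUTER JOIN JOB_OUTCOME ON JOB_OUTCOME.JOB_OUTCOME_ID = JOB.JOBOUTCOMEID
WHERE INVOICE_AR.AMOUNT_DUE > 0
  AND INVOICE_AR.INVOICE_DATE BETWEEN DATEADD(year, 0, @startate) AND DATEADD(year, 0, @endate)
  AND INVOICE_AR.INVOICE_NO LIKE '%T'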
View 8 Replies
Jan 28, 2008
I've got a large MS Sql Server 2000 database that has 15 indexes, with roughly 180 million rows representing 240 GB worth of data. Due to the massive size of the database we are trying to purge it down to a smaller dataset, about 40 million rows, in order to speed up the query performance and to be able to defrag the indexes (which are 30-50% fragmented). To complicate the matter, this table is also a publisher in a transactional replication setup, with one subscriber. Also, the system needs to be up constantly so I'm only allowed about a 3-5 hour period to take an outage a week.
So far I've tested several methods of deleting following all best practices (batch deletes, using indexes in the delete's where clause), and have settled on deleting/committing 500 rows at a time. The problem is that it still takes 3-4 seconds to delete this many rows on an 8 GB RAM, 4-processor machine that is not currently used or replicated.
I'm at a loss for a way to pare down the data with a delete, as the current purge script would take 7 hours a day for about 3 months. Another option I'm considering is to do a truncate and copy the data back over from the replicated database, but again this has its own set of problems, i.e. network latency and slow insert times. Yet another option would be to create a replica of the table on the production db, copy the data to it, then rename the table.
Any one have experience with purging such a massive amount of data? Any help would be greatly appreciated.
View 6 Replies
Sep 12, 2004
1. Use the MSSQL Server Agent service to run the schedule
2. Use a .NET Windows service with timers to call SqlClientConnection
Of the above, which way would be faster and give better performance?
View 2 Replies
Jun 23, 2006
Hello Everyone, I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000.
I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.
I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.
If both databases are identical, I'm assuming the issue is related to some external hardware factor like disk space or memory, or to OS/software issues like service packs or SQL Server configurations.
Here's what I've done to rule out some obvious hardware issues on the prod server: 1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive (there are 55 GB of free space on the disk). 2. Applied the SQL Server SP4 service pack. 3. Defragmented the primary hard drive. 4. Applied all Windows Server 2003 updates.
Here are the prod server's specs: 2x Intel Xeon 2.67 GHz, total physical memory 2 GB (available physical memory 815 MB), Windows Server 2003 SE w/ SP1.
Here are the dev server's specs: 2x Intel Xeon 2.80 GHz, 2 GB DDR2-SDRAM, Windows Server 2003 SE w/ SP1.
I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly! Thanks, Brian T
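Since the plans and indexes match, two things a backup/restore carries over but ongoing production use degrades are statistics freshness and physical fragmentation; a hedged check to run on the production copy (SQL Server 2000 commands; the table name is a placeholder):
Code Block
DBCC SHOWCONTIG ('dbo.MyBigTable') WITH ALL_INDEXES  -- compare scan density between servers
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN       -- rule out stale row estimates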
View 2 Replies
May 28, 2008
Hi everyone,
I have a question about SQL Server 2005. I have written an ASP.Net 2.0 Web Application and it is using SQL Server 2005 as Database.
In the last few days I noticed that the app is down sometimes. To analyze the problem I looked at the activity monitor in SQL Management Studio. I can see there approximately 170 processinfos.
I want to describe the column values of the process infos:
Process-ID: Unique ID and a red down-showing-arrow-icon
User: My User
Database: My Database
Status: sleeping
Command: AWAITING COMMAND
Application: .Net SqlClient Data Provider
When I click Locks by Object, I can see the IDs of the process infos. Again I will show some columns:
Type: DATABASE
Request type: LOCK
Request state: GRANT
Owner type: SHARED_TRANSACTION_WORKSPACE
Database: My Database
So my question is, does this mean that I have locked the db? How are these handled? For example, I have a Windows service which runs checks in the db every 10 seconds, and I can see that each check generates a new process info.
Is this usual, or am I doing something wrong?
Thanks for help, bye
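For what it's worth, "sleeping / AWAITING COMMAND" with a granted shared DATABASE lock is the normal footprint of idle pooled connections, not of blocking. A hedged check on SQL Server 2005 for how long each session has actually been idle:
Code Block
SELECT session_id, login_name, status, last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY last_request_end_time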
View 5 Replies
Jul 9, 2001
When I run a select statement: select 'X' from table1 where c1 = condition, locking on indexes behaves as expected.
However, if I run select 1 from table1 where c1 = condition, locking on indexes goes wild, locking pages and rows on indexes that are not even referenced in the query. Any ideas why?
View 1 Replies
Dec 29, 2000
Hello All,
I'm just migrating from Oracle to SQL Server. Can anybody tell me how I can use row-level locking effectively in SQL Server? If two users are attempting to
modify the same record, how can I deal with it in the back end (SQL)?
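A hedged sketch of the usual SQL Server pattern: read the row WITH (UPDLOCK) inside the transaction that will modify it, so the second user blocks at the read instead of both proceeding to overwrite (table, column, and value are placeholders):
Code Block
DECLARE @id int
SET @id = 42
BEGIN TRAN
-- A second session running the same SELECT waits here until we commit
SELECT Phone FROM Customers WITH (UPDLOCK, ROWLOCK) WHERE CustomerID = @id
UPDATE Customers SET Phone = '555-0100' WHERE CustomerID = @id
COMMIT TRAN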
Thanks in Advance.
Suchee
View 1 Replies
Sep 25, 2000
Currently I'm using NT 4 (SP6a), dual Intel Pentium III 600 processors, and 1 GB RAM.
The problem I'm facing now is DEADLOCK. The lock type in my database is 'DB' type; how can I change it to 'KEY' type or 'TAB' type?
pls help me and thanx in advance... :)
from comel
View 1 Replies
Dec 4, 2000
I have an application in production (SQL 6.5) which causes locking that times out my other processes. I want to capture the time the locking takes place. I have found in BOL that I can get the time a deadlock occurs using trace flag 3605 in SQL 7.0. If I have to use a trace flag, is DBCC TRACEON OK, or is the -T option at startup recommended?
any advice would be appreciated
tia
ram
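A hedged sketch of both routes (flag 1204 writes deadlock participants to the error log, 3605 routes trace output there; the startup form survives restarts, DBCC TRACEON does not):
Code Block
-- Startup parameters on the MSSQLServer service (persistent across restarts):
--   sqlservr -T1204 -T3605
-- Or interactively (scope and persistence vary by version; check BOL):
DBCC TRACEON (1204, 3605)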
View 1 Replies
Oct 30, 2002
I have used DTS in the past to copy information in certain tables in production over the top of those same tables in test. However, the process is now failing. Does DTS require an exclusive lock on the source table, as well as the destination table during the export process? Will shared locks on the table I need to copy prevent DTS from completing the process?
View 4 Replies
Oct 27, 1999
Hi
We are running out of locks while updating a particular table (table name = history, rows = 25,000,000) in SQL Server 6.5.
LE threshold maximum is set to 200.
LE threshold minimum is set to 20.
LE threshold percentage is set to 0.
Locks is set to 0.
I have also included the stored procedure, which we use to update the history table.
As you can see from the first four lines, we ran this SP four times, processing around 6 million rows at a time. It runs out of locks once it is around 5.5 to 6.5 million rows in. Is there a way of locking the table so that this SP can be run just once, effectively processing all 26 million rows in one go?
Any help will be greatly appreciated.
Winston
--declare minihist cursor for (select uin,uan,mailingdate from history(tablock)where rowno between 5635993 and 12000000)
--declare minihist cursor for (select uin,uan,mailingdate from history(tablock)where rowno between 12000001 and 19000000)
declare minihist cursor for (select uin,uan,mailingdate from history(tablock)where rowno > 19000000)
set nocount on
declare @sex char(1)
declare @huin integer
declare @huan integer
declare @hmailingdate char(8)
declare @mailtot integer
declare @mail12m integer
declare @lastday char(8)
open minihist
fetch next from minihist into @huin,@huan,@hmailingdate
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
select @mailtot = 1 select @mail12m = 0
/*** Get the gender ***/
select @sex = gender from name where uin = @huin
/*** Calculate if mailed in the last twelwe months ***/
if (@hmailingdate <> null) and (@hmailingdate > '19980524')
select @mail12m = @mail12m +1
/*** Get info for this uan from address_summary ***/
select @mailtot = (@mailtot+mailed_total), @mail12m = (@mail12m+mailed_12months), @lastday = last_date from address_summary where uan = @huan
/*** Insert a row into address_summary if doesn't exist ***/
IF @@rowcount = 0
Insert into address_summary ( uan, uin,mailed_total,Mailed_12months, last_date,last_gender)
values (@huan,@huin,1, @mail12m, @hmailingdate,@sex)
ELSE
/*** compare recency ***/
if (@hmailingdate < @lastday) or (@hmailingdate is null) /* history record is older */
update address_summary
set mailed_total = @mailtot,
mailed_12months = @mail12m
where uan = @huan
else
update address_summary
set uin = @huin,
mailed_total = @mailtot,
mailed_12months = @mail12m,
last_date = @hmailingdate,
last_gender = @sex
where uan = @huan
end
fetch next from minihist into @huin,@huan,@hmailingdate
end
deallocate minihist
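As a hedged alternative to raising the lock limits: most of this cursor can be expressed as one set-based pass, which takes a single table lock instead of millions of row/page locks. A partial sketch of the aggregation step only (it does not cover the recency-comparison and gender branches, and it assumes the schema shown above):
Code Block
-- Aggregate the whole history range in one statement...
SELECT uan, MIN(uin) AS uin,
       COUNT(*) AS mailed_total,
       SUM(CASE WHEN mailingdate > '19980524' THEN 1 ELSE 0 END) AS mailed_12m,
       MAX(mailingdate) AS last_date
INTO #hist_agg
FROM history (tablock)
WHERE rowno > 19000000
GROUP BY uan

-- ...then fold it into address_summary with one UPDATE (an insert of new
-- uans would follow the same shape)
UPDATE address_summary
SET mailed_total = address_summary.mailed_total + h.mailed_total,
    mailed_12months = address_summary.mailed_12months + h.mailed_12m
FROM address_summary, #hist_agg h
WHERE address_summary.uan = h.uan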
View 1 Replies
Jul 12, 2001
Hi,
We are running SQL 6.5 in production and I'm getting one blocking problem; mostly I kill the process, and whenever I check the SQL error log I see this message:
Error : 17824, Severity: 10, State: 0
Unable to write to ListenOn connection '1433', loginname 'XXXY', hostname 'DT SA'.
OS Error : 64, The specified network name is no longer available.
Error 1608, Severity: 21, State: 0
View 1 Replies
Jul 23, 2004
Hi,
I'm trying to use the pessimistic row locking of SQL to get following result.
When a customer form is opened, the row should be locked for writing.
This lock should be left open until the user closes the customer form.
I cannot use transactions because there can be more than one customer form open in the same app, and in ADO a connection either IS in a transaction or is NOT; nested transactions are not supported.
How can I keep this row locked in SQL Server until I unlock it or the connection is broken (in case of problems on the client machine)?
And how can I see on another machine whether this row (customer) is already locked, so I can open it read-only?
For the moment I'm using extra fields that hold the info of whether the customer is locked and by whom, but that's at application level, not at DB level.
I hope this is clear enough.
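A hedged sketch using application locks, which exist for exactly this "lock must outlive any transaction" case: sp_getapplock with a 'Session' owner holds the lock until it is released or the connection drops, and a second client can test it without blocking (the resource name is just a convention you choose):
Code Block
-- When the customer form opens (connection A):
DECLARE @rc int
EXEC @rc = sp_getapplock @Resource = 'Customer:1042',
     @LockMode = 'Exclusive', @LockOwner = 'Session', @LockTimeout = 0
-- @rc >= 0: we hold the lock; @rc = -1: someone else does, open read-only

-- When the form closes:
EXEC sp_releaseapplock @Resource = 'Customer:1042', @LockOwner = 'Session'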
View 4 Replies
Nov 16, 2004
OK, this may be a brain-dead question, but I can't seem to figure out what I am doing wrong. I have a stored proc which has multiple inserts, updates, and deletes. However, I do not want to commit until the end of the procedure, so near the end, if no error has been returned by a particular insert, update, or delete, I tell it to COMMIT TRAN. My problem is that it seems to run and run and run. I take out the BEGIN TRAN and boom, it runs fast and completes.
But if there is a problem near the end, then those other statements will be committed, and I wish to avoid that. I have an error routine at the end of the SP and an IF statement to GOTO sp_error: if @@ERROR produces a non-zero value. I am sure I am doing something goofy but can't seem to see it; I know it comes down to the BEGIN TRAN. Is it that I have too many uncommitted transactions, or perhaps I am locking something up? I know it's hard to tell without seeing what I am doing, but is there something simple to remember about using explicit transactions that I am forgetting? Any help is appreciated.
Tom
View 12 Replies
Jan 9, 2004
Hello .
I am using SQL Server 2000 in order to create a multi user program that accesses data.
The problem is that multiple users will update and select data at the same time on the same table.
Is there a way to avoid deadlocks?
I heard about two ways: using a temporary table to store data and then writing the data only when the user has finished the update,
and the other is using XML to write the database to an XML file that is stored locally, doing the updates on the file, and then after completion inserting the XML file into the database.
Does anybody know much about these approaches? Do you know where I can find code for this?
is there a better way?
thanks !
and happy new year !
View 5 Replies
Nov 21, 2005
Hi all. Firstly I would like to apologise because I don't actually use SQL or know diddly squat about it. I am a network administrator and have a problem with a user's domain account getting locked out every time he starts his SQLAgent service (we are running a Windows 2003 domain). I know this is a very vague post and I am sorry for that; I am just after some general ideas/information on why this keeps happening. Any help greatly appreciated.
Thanks
Rich.
View 5 Replies