Performance Degradation
Jun 20, 2007
Dear all,
I would like to share with you the following performance issue:
Configuration - server
SQL Server 2005 Workgroup Edition
Windows Server 2003 Small Business
2 CPUs
3.5 GB RAM
RAID 0
boot.ini: /3GB /USERVA=2560
Page file 2-4 GB on each of the two drives
.mdf file ~300 MB
Dedicated SQL Server machine
Configuration - clients
Windows XP
1 GB RAM
Visual Studio 2005 application
9 users
The problem
The users work smoothly for several days while the SQL Server service runs continuously. After a few days we get complaints from the users that the system is slow, especially when they execute specific queries.
What we have done
1. We refined several queries.
2. We monitored several counters, especially those that reveal performance problems; they all returned reasonable values.
3. Defragmented the hard disks.
4. Stopped the services that are not required.
5. Reindex and update statistics every night.
What we found
When the users complain, we see CPU spikes for sqlservr.exe. We tried to find the reason but failed. Page file usage is around 2.1 GB and sqlservr memory is around 1.5 GB.
What we do not understand is why the system is not slow right after we restart the SQL Server service, and why it becomes slow again after several days.
Is there a memory leak?
Does it have to do with continuous tempdb usage?
Is there a problem with some system resource?
Any ideas?
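A minimal diagnostic sketch, assuming SQL Server 2005's standard DMVs; run it when the slowdown reappears and compare the output against a freshly restarted instance:

-- Where is sqlservr's memory going as uptime grows? (top memory clerks)
SELECT TOP 10 type, SUM(single_pages_kb + multi_pages_kb) AS memory_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_kb DESC

-- Is tempdb usage accumulating over the days before the slowdown?
SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage

If the memory clerks and tempdb numbers look the same slow or fast, the gradual slowdown is more likely a plan-cache or statistics issue than a leak.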
Oct 7, 2002
Hello Guys,
I have a question. I want to set up a SQL alert that will monitor a particular counter (e.g. Memory: Pages/sec) and send me an email when it reaches a particular threshold.
My question is: if I set this up, will SQL Server start running Perfmon in the background and degrade the server's performance,
or
will it just read the values from some system table or file?
I don't know whether the perfmon counter values are stored in any system files or not.
Please advise.
Thank you in advance.
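For what it's worth, a SQL Agent performance alert does not launch Perfmon; it polls the engine's own counter data (the same values exposed through master.dbo.sysperfinfo), so the overhead is negligible. The catch is that it can only watch SQL Server objects, not OS counters such as Memory: Pages/sec. A hedged sketch with a made-up alert and operator name:

EXEC msdb.dbo.sp_add_alert
    @name = N'Low page life expectancy',    -- hypothetical alert name
    @performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300'

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Low page life expectancy',
    @operator_name = N'DBA team',           -- assumes this operator already exists
    @notification_method = 1                -- 1 = e-mail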
Feb 21, 2007
We recently implemented merge replication between two SQL Server 2005 instances on the same network, and since we introduced it, performance has degraded considerably on the subscriber end.
1) One thing that should be mentioned is that the flow of changes is unidirectional, from publisher to subscriber (there is one publisher, which is also the distributor, and one subscriber).
2) Updates far outnumber inserts: a single article, say "Article1", gets up to 2,000 updates per day, and I can see that dbo.MSmerge_upd_sp_Article1_GUID is taking most of the CPU time. What should we do?
On the subscriber database, response times are getting slow and I am seeing a large number of lock time-outs on the application end.
Can anyone also suggest server-level settings for avoiding lock time-outs?
Looking for any experienced solutions or suggestions.
Thanks in advance.
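Lock time-out is a per-connection setting rather than a server option, so there is no single server-level knob; a sketch of what is usually checked first (standard SQL Server 2005 DMVs):

-- Who is blocking whom while the time-outs are being reported?
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0

-- The application (or agent) would have to raise its own time-out, e.g.:
SET LOCK_TIMEOUT 10000   -- wait up to 10 seconds before erroring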
Sep 5, 2002
Any general tips and suggestions on this? Say I'm running SQL Server 2000 with a database and later discover that some parts of the application/system only work on 7.0, so I want to downgrade from 2000 back to the old 7.0. Reinstall and then attach/detach the database? Backup and restore? Is 2000 backward compatible with 7.0 in any way?
Regards Peter
Aug 6, 2004
Hi,
another daily problem ...
I have a table with half a million records that my application hits continuously with several UPDATE and SELECT statements (about 5 requests/sec).
After several (4-5) hours performance degrades, but if I update the statistics on this table everything returns to normal.
For now I have created a job that maintains this table by updating its statistics twice a day.
Is this normal? Shouldn't SQL Server update statistics by itself?
Have I chosen the wrong approach, or what else can I do?
Thx
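SQL Server only auto-updates statistics after roughly 500 + 20% of the rows have changed, and a table taking five requests per second can drift a long way inside that threshold, so a scheduled update is a common workaround rather than a sign that something is broken. A small check, with 'MyTable' as a placeholder name:

-- Is auto-update of statistics even enabled for this database?
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsAutoUpdateStatistics') AS auto_update_stats

-- How stale do the statistics on the hot table actually get?
SELECT name AS index_name, STATS_DATE(id, indid) AS stats_last_updated
FROM sysindexes
WHERE id = OBJECT_ID('MyTable') AND indid > 0

-- The scheduled workaround; FULLSCAN gives the best histograms at the cost of a full read:
UPDATE STATISTICS MyTable WITH FULLSCAN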
Jan 13, 2006
(Win2003, SQL Server 2000 SP4)
I have a database of about 5 GB in size. Some queries were taking more than one minute to complete (all of them are stored procedures). Because of that lack of performance, I ran DBCC DBREINDEX on each table, executed the sp_updatestats system stored procedure, and finally executed sp_recompile for each stored procedure in the database.
After all this, queries completed in a matter of seconds instead of minutes. Strangely enough, some hours later (about 6), after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.
I am assuming that the indexes degrade so fast that they require another reindex, but I am not sure.
Any thoughts? How can I prevent this behaviour?
Thanks a lot in advance.
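It may be worth checking whether the indexes really fragment that fast, or whether stale statistics (much cheaper to refresh) are the real trigger. A hedged sketch with a placeholder table name:

-- Measure fragmentation a few hours after the DBCC DBREINDEX run:
DBCC SHOWCONTIG ('YourLargeTable') WITH FAST, ALL_INDEXES

-- If scan density is still high, try refreshing statistics alone and see
-- whether that is enough to restore the fast plans:
EXEC sp_updatestats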
Sep 18, 2007
Hi,
I'm seeing a few deadlocks on my SQL Server 2005 production database and wondered what I can expect if I switch on trace flag 1222. Can this cause more deadlocks to occur? Will SQL Server performance degrade? Is it safe to have this flag set on a production server, or is there another method you would recommend?
Thanks
Martin
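Trace flag 1222 only changes what gets written to the error log when a deadlock has already occurred, so it does not create new deadlocks, and its overhead is generally considered negligible on a production server. A sketch for turning it on without a restart:

DBCC TRACEON (1222, -1)   -- -1 = enable globally for all sessions
DBCC TRACESTATUS (-1)     -- confirm which trace flags are active
-- To keep it across service restarts, add -T1222 as a startup parameter instead.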
Sep 12, 2004
1. Use the SQL Server Agent service to run the schedule, or
2. Use a .NET Windows service with timers that calls in through SqlClientConnection.
Of the two approaches above, which would be faster and give better performance?
Jun 23, 2006
Hello everyone,
I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000.
I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.
I took a look at the query execution plans and found that they were exactly the same in both cases. Then I looked at the indexes and again found no differences in the table indices.
If both databases are identical, I'm assuming the issue is related to some external hardware factor such as disk space or memory, or to OS/software issues such as service packs or SQL Server configuration.
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary drive; there are 55 GB of free space on the disk.
2. Applied SQL Server SP4.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.
Here are the prod server's specs: 2x Intel Xeon 2.67 GHz, 2 GB total physical memory (815 MB available), Windows Server 2003 SE w/ SP1.
Here are the dev server's specs: 2x Intel Xeon 2.80 GHz, 2 GB DDR2 SDRAM, Windows Server 2003 SE w/ SP1.
I'm not sure what else to do; the query performance differs by an order of magnitude and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!
Thanks,
Brian T
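When the plans and indexes match but the timings differ by 20x, the usual suspects are configuration, statistics, and memory pressure rather than the query itself. A hedged comparison sketch to run on both servers ('YourTable' stands in for the tables used by the slow query):

SELECT @@VERSION                                 -- confirm build/SP level really matches
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure                                -- compare memory, parallelism, AWE settings

-- Are the production statistics older than the dev copy's?
SELECT name, STATS_DATE(id, indid) AS stats_last_updated
FROM sysindexes
WHERE id = OBJECT_ID('YourTable') AND indid > 0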
Jul 24, 2007
Hi,
Can we downgrade from SQL Server 2000 Enterprise Edition to SQL Server 2000 Standard Edition without doing an uninstall and reinstall?
Thanks in advance,
Mar 9, 2007
We have the same application installed in a few different environments with similar servers and similar hardware. The only differences are the SQL Server versions and the collations.
Is SQL Server 2005 a lot faster than SQL Server 2000? Could the collation type have a big effect on performance?
ScAndal
Aug 31, 2007
Hi,
I want to insert thousands of records into a SQL Server 2005 database with some manipulation, so I put the insert inside a for loop. Inside the loop I open the connection and close it after use. The sample code is below:

for (int i = 0; i < 1000; i++)
{
    sqlCmd.CommandText = "ProcName";
    sqlCmd.Connection = sqlCon;
    sqlCmd.Connection.Open();
    sqlCmd.ExecuteNonQuery();
    sqlCmd.Connection.Close();
}

My question is: how does this code perform? Will it take time to get the connection and close the connection in every iteration? Or should I open the connection once before the loop and close it at the end of the loop; will that improve performance?
Please clarify these questions for me. Thanks in advance.
Dec 8, 2003
I have the following problem with SQL performance:
the line 'select * from [viewUserLatestFee]' executes instantly (in Query Analyzer), while
the line 'select * from [viewUserLatestFee] where orgID = 1' takes up to 30 seconds for 1,000 rows (still in Query Analyzer).
Can anyone please help? I seem to have run out of ideas.
I have a feeling people might be curious about the view, so here it is:
SELECT dbo.viewUserPosition.id, dbo.viewUserPosition.username, dbo.viewUserPosition.password, dbo.viewUserPosition.title,
dbo.viewUserPosition.firstName, dbo.viewUserPosition.lastName, dbo.viewUserPosition.email, dbo.viewUserPosition.address1,
dbo.viewUserPosition.address2, dbo.viewUserPosition.suburb, dbo.viewUserPosition.postcode, dbo.viewUserPosition.country,
dbo.viewUserPosition.state, dbo.viewUserPosition.mailAddress1, dbo.viewUserPosition.mailAddress2, dbo.viewUserPosition.mailSuburb,
dbo.viewUserPosition.mailPostcode, dbo.viewUserPosition.mailCountry, dbo.viewUserPosition.mailState, dbo.viewUserPosition.birthDate,
dbo.viewUserPosition.joinDate, dbo.viewUserPosition.lastUpdated, dbo.viewUserPosition.orgID, dbo.viewUserPosition.positionID,
dbo.viewLatestPaidFee.feeID, dbo.viewLatestPaidFee.mshipID, dbo.viewLatestPaidFee.name, dbo.viewLatestPaidFee.[desc],
dbo.viewLatestPaidFee.terms, dbo.viewLatestPaidFee.period, dbo.viewLatestPaidFee.periodType, dbo.viewLatestPaidFee.fee,
dbo.viewLatestPaidFee.startDate, dbo.viewLatestPaidFee.endDate, dbo.viewLatestPaidFee.deleted, dbo.viewLatestPaidFee.feePaidID,
dbo.viewLatestPaidFee.paidDate, dbo.viewLatestPaidFee.effectiveDate, dbo.viewLatestPaidFee.approved, dbo.viewLatestPaidFee.optionID,
dbo.viewLatestPaidFee.paidAmount, dbo.viewLatestPaidFee.feePaidEndDate
FROM dbo.viewUserPosition LEFT OUTER JOIN
dbo.viewLatestPaidFee ON dbo.viewUserPosition.id = dbo.viewLatestPaidFee.userID
Here is viewUserPosition:
SELECT dbo.tblUser.id, dbo.tblUser.username, dbo.tblUser.password, dbo.tblUser.title, dbo.tblUser.firstName, dbo.tblUser.lastName, dbo.tblUser.email,
dbo.tblUser.address1, dbo.tblUser.address2, dbo.tblUser.suburb, dbo.tblUser.postcode, dbo.tblUser.country, dbo.tblUser.state,
dbo.tblUser.mailAddress1, dbo.tblUser.mailAddress2, dbo.tblUser.mailSuburb, dbo.tblUser.mailPostcode, dbo.tblUser.mailCountry,
dbo.tblUser.mailState, dbo.tblUser.birthDate, dbo.tblUser.joinDate, dbo.tblUser.lastUpdated, dbo.tblRelPosition.orgID,
dbo.tblRelPosition.positionID
FROM dbo.tblUser INNER JOIN
dbo.tblRelPosition ON dbo.tblUser.id = dbo.tblRelPosition.userID
and viewLatestPaidFee:
SELECT dbo.tblMshipFee.id AS feeID, dbo.tblMshipFee.mshipID, dbo.tblMshipFee.name, dbo.tblMshipFee.[desc], dbo.tblMshipFee.terms,
dbo.tblMshipFee.period, dbo.tblMshipFee.periodType, dbo.tblMshipFee.fee, dbo.tblMshipFee.startDate, dbo.tblMshipFee.endDate,
dbo.tblMshipFee.deleted, fp.id AS feePaidID, fp.paidDate, fp.effectiveDate, fp.approved, fp.optionID, fp.paidAmount, fp.endDate AS feePaidEndDate,
fp.userID
FROM dbo.tblRelMshipFeePaid fp INNER JOIN
dbo.tblMshipFee ON dbo.tblMshipFee.id = fp.feeID AND fp.endDate =
(SELECT MAX(fp2.[endDate])
FROM [dbo].[tblRelMshipFeePaid] fp2
WHERE fp2.[userID] = fp.[userID])
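Reading only the view definitions above, the orgID filter resolves to dbo.tblRelPosition.orgID and the correlated MAX(endDate) subquery hits dbo.tblRelMshipFeePaid once per user, so these two indexes are the obvious candidates to try; the index names are made up and the actual execution plan should be checked first:

CREATE INDEX IX_tblRelPosition_orgID
    ON dbo.tblRelPosition (orgID, userID)

CREATE INDEX IX_tblRelMshipFeePaid_user_endDate
    ON dbo.tblRelMshipFeePaid (userID, endDate DESC)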
Jan 13, 2005
We used a stored proc to pull totals from a database. Everything was fine until the table grew and the query started to time out. So we created a temp table, populated it with a range of data, and then pulled the totals from there. Again, everything was fine until the table grew and it started to time out. Any suggestions?
Jan 17, 2002
Hi,
I have newly joined as a SQL DBA. I want to check physical disk performance; we have RAID 5 with 5+1 disks. I calculated the number of I/Os per disk, but how do we know the actual limit of I/Os per disk?
Thanks
Praveen
May 8, 2001
What's my best bet for getting better performance out of one of my database servers? Currently we have one set of RAID 5 disks partitioned into two drives, which houses everything (system, database, and logs). If the server has two drive slots left, I was thinking of adding two mirrored drives and getting the logs off the main database space (does that make sense?). This is a vendored application, so working with new indexes etc. isn't something I should do without the vendor's involvement. Will what I describe above help?
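Moving the log to its own spindles usually does help, because log writes are sequential and suffer most from sharing heads with random data I/O. A minimal sketch (placeholder database name and paths) of the usual SQL Server 7.0/2000 way of relocating the log once the mirrored pair exists:

EXEC sp_detach_db @dbname = N'MyDatabase'
-- ...copy MyDatabase_log.ldf from the RAID 5 volume to the new mirrored drive...
EXEC sp_attach_db @dbname = N'MyDatabase',
     @filename1 = N'D:\Data\MyDatabase.mdf',       -- data stays on the RAID 5 set
     @filename2 = N'E:\Logs\MyDatabase_log.ldf'    -- log on the new mirror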
Thanks
Mar 31, 2001
hi,
I am using DTS to move data from Oracle to Oracle.
I have used an Oracle stored procedure for the update/insert.
The DTS package calls the stored procedure for each record, and because of this the performance has gone down. How do I increase the speed of the data transfer?
Has anyone done anything similar?
Tushar
Jun 26, 2001
We have SQL Server running on a dual-processor Pentium 500 MHz server. Our database is hit by about 300 users. 200 of those users are doing constant searches through a client table of about 250,000 records, which in turn is linked to a history table containing over 5,000,000 records. This is only the tip of the iceberg; we have many triggers, procedures, updates, etc. going on in the background. The database has over 500 tables.
Keep in mind, these searches that are taking place can involve all kinds of fields: phone number, company name, fax number, first name, last name, status, wildcard searches, etc. So as you can imagine, the database is being hit with all kinds of funky requests to find records. I will be the first to admit that our developers (vendor) are not the best code writers, and we have a tough time getting them to optimize something they do not even understand themselves.
As I write this, our processor utilization is maxing out between 95 and 100 percent. I've done a lot of performance tuning, and all of the problems lie in the searching. We've built, tested, rebuilt, and re-tested each and every index. I even used the Profiler to filter what I could. It has improved, but our database is growing at a rate of 10 MB a day (already close to 3 GB, not that huge). I think I've optimized my indexes as well as I can, considering all the fields and possibilities available to users searching for records.
For a database that has to support all of these different search criteria, what would be a more optimal server? We are looking to purchase something ASAP. I could really use help from someone in a similar situation. It seems odd, to my mind, that a company of 300 people would need to rely on a quad (four-processor) server.
Thanks. JT
May 31, 2000
HI
I have a production database of 700 to 900 MB, 2 GB of RAM, and a 30 GB hard disk.
My production machine is running very slowly. I have checked everything: memory,
pages/sec, cache hit ratio, DBCC DBREINDEX, but performance is still not up to the mark.
If I stop and restart SQL Server, the machine works fine for a few days, but after that
it becomes very slow again. What could be the reason?
If anyone has a solution, please suggest it.
Thanks
Nil
Jan 17, 2000
Hi friends,
My company has an auction web site. It is written in Java and all SQL statements are generated dynamically; no stored procedures are used. With 30 users the site is OK, but with around 300 users it becomes very slow (almost dead), and the developers are saying the database is the bottleneck. Please help me with this problem: how can I investigate and overcome it?
Thanks
dindu
Apr 27, 2000
I am running a SQL Server 7.0 server on a two-processor machine. We are having some performance issues:
one of the processors is always above 90% utilization but the second is barely at 50%.
Will adding another processor help, or are the processes locked to one processor?
The server is a dedicated sql server. nothing else is running on it.
Thanks for any info you can provide.
Pierre
Oct 20, 1999
Hi,
What do I have to do to determine the capacity (transactions/sec) of MS SQL Server 7.0 on a specific hardware configuration?
Thank you,
Sebastian Bologescu
May 5, 2001
We have recently upgraded to SQL Server 7.0 on an NT 4.0/SP6 box which has 4 PIII 700 processors, 1 GB RAM, and 70 GB of disk on RAID 1 and RAID 5. We feel that the application performance is not as good as expected on SQL Server 7 (the application ran smoothly on 6.5 and performance was good).
Is there any option that needs to be set to improve performance? Right now SQL Server 7 is using all four processors, dynamically allocated memory, etc. Any thoughts greatly appreciated.
Thanks in Advance
Jaya
Mar 14, 2002
I'm running MS SQL Server on a 1.4 GHz AMD Athlon processor with 750 MB of RAM and ample disk space. I have a table with 14 columns: 2 datetime, 8 int, and the rest varchar of various sizes less than 13.
I run a Java process on another machine that connects to the database and inserts records. It takes about 6 minutes to insert 100,000 records.
I run the XP performance monitor and only about 25% of the SQL Server machine's CPU is being used. I run top on the Linux box running Java and see about the same. Neither machine is kept busy. Why don't I get better performance? Could my local area network be that slow? How many inserts per minute counts as good performance?
Thanks for your input.
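One thing worth ruling out before blaming the network: if each row is inserted as its own autocommitted statement, the cost is one round trip plus one log flush per row. A hedged illustration of what batching changes (table, columns, and batch size are made up):

BEGIN TRAN
    INSERT INTO dbo.MyTable (col1, col2) VALUES (1, 'a')
    -- ...the rest of a 1,000-row batch, then one log flush at commit...
COMMIT TRAN
-- For pure loads, bcp or BULK INSERT from a flat file is usually faster still.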
Jan 23, 2001
Does anyone know the performance differences between returning data from SQL Server as XML vs. as a recordset? We are about to dive into the FOR XML world full force, but we want to make sure we are not heading for a performance nightmare.
Thanks for any insight on this. I'll try to look for white papers and do some testing in the meantime.
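For reference, the two shapes being compared look like this (Orders is a placeholder table); FOR XML adds server-side tagging work and a larger payload, so measuring with your own row counts is really the only reliable answer:

SELECT CustomerID, OrderDate FROM Orders                -- classic rowset
SELECT CustomerID, OrderDate FROM Orders FOR XML AUTO   -- same data as an XML stream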
Feb 5, 2004
I have the following code in my stored procedure:
DECLARE cursor for table A
WHILE @@FETCH_STATUS = 0
    -- get values from another function based on some business logic
    INSERT into another table B
    (or)
    UPDATE another table B
END
I have to insert/update values in table B one row at a time, so it is taking a long time.
Is there any way to collect the values into temporary storage and then insert/update/move them into table B in one operation?
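If the per-row business logic can be expressed as a function or a join, the whole cursor loop can usually be replaced by one set-based UPDATE plus one INSERT, which is normally far faster. A hedged sketch with made-up table, column, and function names:

-- Update the rows that already exist in table B:
UPDATE b
SET    b.Value = dbo.fnBusinessLogic(a.KeyCol)   -- assumes the logic fits in a UDF
FROM   TableB b
JOIN   TableA a ON a.KeyCol = b.KeyCol

-- Insert the rows that are not in table B yet:
INSERT INTO TableB (KeyCol, Value)
SELECT a.KeyCol, dbo.fnBusinessLogic(a.KeyCol)
FROM   TableA a
WHERE  NOT EXISTS (SELECT 1 FROM TableB b WHERE b.KeyCol = a.KeyCol)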
Apr 4, 2008
1. Where do we see the buffer cache hit ratio? Can we set the buffer cache hit ratio manually? (See the sketch below.)
2. When we look at a query's execution plan for a performance issue, which parameters should we check in order to take action?
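On the first question: the ratio is computed from two counters, so it can be observed but not set manually; it only moves indirectly (more memory, better indexes). A sketch that reads it from the DMV on SQL Server 2005 and later:

SELECT CAST(a.cntr_value AS float) / b.cntr_value * 100.0 AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON  b.counter_name = 'Buffer cache hit ratio base'
  AND b.object_name  = a.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND a.object_name LIKE '%Buffer Manager%'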
Apr 14, 2008
I have a small doubt: if we keep our data files and log files on separate disks, how does this improve database performance?
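The short version: log writes are sequential while data access is largely random, so giving each its own spindles stops the log's write pattern from being broken up by data I/O, and losing one disk no longer takes out both data and log. A sketch with hypothetical drive letters and file paths:

CREATE DATABASE SalesDB
ON      (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf')
LOG ON  (NAME = SalesDB_log,  FILENAME = 'E:\SQLLogs\SalesDB_log.ldf')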
Apr 5, 2006
Hello,
I built a query in SQL Server 2000 but I'm not happy with the performance; it takes about 15 minutes to execute (4 min for the INSERT and 11 min for the UPDATE). The table tbl_total has 3 million records and an index on Contract and Item; the table contracts has 1 million records and a key on Contract and Item.
How can I speed up this query? Is it, for example, possible to put an index on @table (the table variable)? See the sketch after the query.
Thanks in advance!
DECLARE @table TABLE (Contract nvarchar(15), Item nvarchar(12), Change_date datetime)
INSERT INTO @table
SELECT TOT.Contract, TOT.Item, MAX(TOT.Change_date)
FROM tbl_total TOT
WHERE EXISTS (SELECT 'X' FROM contracts CONT
WHERE TOT.Contract = CONT.Contract
AND TOT.Item = CONT.Item)
GROUP BY TOT.Contract, TOT.Item
UPDATE contracts
SET contracts.Change_date = TT.Change_date
FROM contracts INNER JOIN @table TT On
contracts.Contract = TT.Contract AND
contracts.Item = TT.Item
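On the index question: a table variable does not allow CREATE INDEX, but it can get an index implicitly through a PRIMARY KEY (or UNIQUE) constraint in the declaration. Since the UPDATE joins on Contract and Item, a sketch along these lines may help that step:

DECLARE @table TABLE (
    Contract    nvarchar(15),
    Item        nvarchar(12),
    Change_date datetime,
    PRIMARY KEY (Contract, Item)
)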
Dec 4, 2006
Hi
I wanted to find out which is faster in terms of performance:
e.g.
select * from orders where orderRef = '00093'
Or
select * from orders where orderRef like '00093'
I know there is a difference if I use wildcards (% etc.) in the pattern, but I wanted to find out with regard to the queries above.
Jan 20, 2007
For performance, should the primary key index and the table data be in the same filegroup or in different filegroups (on the same or a different drive)?
Thanks,
Andy
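A sketch of the "different filegroup" option for a nonclustered index (names are hypothetical); note that a clustered index always lives with the data, so only nonclustered indexes can be split onto a separate filegroup/drive this way:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON [INDEX_FG]   -- a filegroup whose files sit on a different drive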
Aug 23, 2007
I need help improving the performance of this query:
SELECT
tblSuperClientFile.ClientRefNo,
tblReferral.RefID,
tblRail.RailDescr,
tblReferral.SuperClientVendorID,
tblVendor.VendorName AS Client,
tblReferral.AssignedVendorID,
tblReferral.ReferralDate,
tblSpikeDate.DateCompleted AS PlanRevCompleted,
tblReferral.CloseDate,
tblCloseReason.CloseReason,
tblBankruptcyInfo.BK_Filing_State,
tblBankruptcyInfo.BK_Case_Number
INTO #PlanRev
FROM FNFBSDataMart.dbo.tblSpikeDate tblSpikeDate WITH (NOLOCK)
INNER JOIN #ActiveBK
ON tblSpikeDate.MasterID = #ActiveBK.MasterID
AND tblSpikeDate.FID = 3160
AND tblSpikeDate.DateCompleted <= GetDate()-5
INNER JOIN FNFBSDataMart.dbo.tblReferral tblReferral WITH (NOLOCK)
ON tblReferral.RefID = tblSpikeDate.RefID
AND tblReferral.ReferralDate >= GetDate()-180
AND tblReferral.AssignedVendorID NOT IN (188,1721)
INNER JOIN FNFBSDataMart.dbo.tblBankruptcyInfo tblBankruptcyInfo WITH (NOLOCK)
ON tblReferral.RefID = tblBankruptcyInfo.RefID
AND #ActiveBK.bk_Case_Number = tblBankruptcyInfo.bk_Case_Number
INNER JOIN FNFBSDataMart.dbo.tblSuperClientFile tblSuperClientFile WITH (NOLOCK)
ON tblReferral.ClientFileID = tblSuperClientFile.ClientFileID
AND tblSuperClientFile.SuperClientVendorID IN (1816,125,127,1706,766,1820,137,141,144,145,1593,1808,146,990,1745,149,1215,1854,1867)
INNER JOIN FNFBSDataMart.dbo.tblRail tblRail WITH (NOLOCK)
ON tblReferral.RailID = tblRail.RailID
INNER JOIN FNFBSDataMart.dbo.tblVendor tblVendor WITH (NOLOCK)
ON tblReferral.SuperClientVendorID = tblVendor.VendorID
INNER JOIN FNFBSDataMart.dbo.tlkpState tlkpState WITH (NOLOCK)
ON tblSuperClientFile.StateID = tlkpState.StateID
AND (tblSuperClientFile.SuperClientVendorID <> 1820
OR tlkpState.Abbrev NOT IN ('AZ','AK','CA','HI','ID','NV','OR','TX','UT','WA'))
LEFT OUTER JOIN FNFBSDataMart.dbo.tblCloseReason tblCloseReason WITH (NOLOCK)
ON tblReferral.CloseReaID = tblCloseReason.CloseReaID
Can anyone take a look at it and give me feedback as soon as possible?
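Before reworking the joins, it is usually worth measuring where the reads actually go and giving the temp table the index the joins ask for; a hedged first pass using only names already present in the query:

SET STATISTICS IO ON
SET STATISTICS TIME ON
-- ...run the SELECT INTO #PlanRev above and compare logical reads per table...

-- #ActiveBK is joined on MasterID and bk_Case_Number, so indexing it beforehand
-- is a cheap experiment:
CREATE INDEX IX_ActiveBK ON #ActiveBK (MasterID, bk_Case_Number)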