Indexes Degrade Too Fast

Jan 13, 2006

(Win2003, SQL Server 2000 SP4)

I have a database about 5 GB in size. Some queries were taking more than a minute to complete (all of them are stored procedures). Because of that lack of performance, I ran DBCC DBREINDEX for each table, executed the sp_updatestats system stored procedure, and finally executed the sp_recompile system stored procedure for each sp in my database.

After all this work, queries completed in a matter of seconds instead of minutes. Strangely enough, some hours later (about 6 hrs), after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.

I am assuming that the indexes are degrading so fast that they require another reindex, but I am not sure.

Any thoughts? How can I prevent this behaviour?

Thanks a lot in advance.
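
For reference, a minimal sketch of the kind of maintenance pass being described, with placeholder table and procedure names (a real job would iterate over the system catalog rather than hard-code them):

-- Rebuild all indexes on each user table (table names are placeholders)
DBCC DBREINDEX ('dbo.Orders', '', 90)      -- '' = all indexes, 90 = fill factor
DBCC DBREINDEX ('dbo.OrderLines', '', 90)

-- Refresh statistics across the database
EXEC sp_updatestats

-- Force stored procedures to pick up new plans (procedure name is a placeholder)
EXEC sp_recompile 'dbo.usp_GetOpenOrders'

Note that DBCC DBREINDEX already refreshes the statistics of the rebuilt indexes with the equivalent of a full scan, so the sp_updatestats step mainly benefits column (non-index) statistics.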

View 5 Replies



Degrade 2000 To 7

Sep 5, 2002

Any general tips and suggestions on this? Say I'm running 2000 with a database and later discover that some parts of the application/system only work in 7.0, so I want to downgrade SQL Server from 2000 back to the old 7.0. Reinstall with attach/detach of the db? Backup and restore? Is 2000 backward compatible in any way with 7.0?

Regards Peter

View 2 Replies View Related

Statistics Degrade

Aug 6, 2004

Hi,

another daily problem ...

I have a table with half a million records that my application uses continuously with several UPDATE and SELECT statements (about 5 requests/sec).
After several (4-5) hours performance degrades, but if I update the statistics on this table everything returns to normal.

For now I have created a job that maintains this table by updating its statistics twice a day ...

Is this normal? Shouldn't SQL Server update statistics by itself?
Have I chosen the wrong approach, or ... what can I do?

Thx
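
As an illustration, a minimal sketch of the statistics job being described, plus a check of the auto-statistics options (database and table names are placeholders):

-- Check whether automatic statistics maintenance is enabled for the database
EXEC sp_dboption 'MyDatabase', 'auto create statistics'
EXEC sp_dboption 'MyDatabase', 'auto update statistics'

-- Refresh statistics on the hot table with a full scan
UPDATE STATISTICS dbo.MyHotTable WITH FULLSCAN

Auto-update statistics only fires after roughly 20% of the rows (plus 500) have changed, so a heavily updated table can legitimately need a scheduled UPDATE STATISTICS in between.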

View 1 Replies View Related

Performance Degrade

Jun 20, 2007

Dear all,



I would like to share with you the following performance issue:



Configuration-Server

SQL 2005 workgroup edition

Windows 2003 server Small business

2 cpu

3.5 Gbytes Ram

RAID 0

boot.ini /3Gb userva=2560

Page file 2-4 Gbytes to each drive (two drives)

mdf file ~ 300 Mbytes

SQL dedicated server



Configuration-Clients

Windows XP

1 Gbyte RAM

Visual studio 2005 application

9 users





The problem

The users work smoothly for several days. The SQL service runs continuously. After a few days we get complaints from the users that the system is slow, especially when they execute specific queries.



What we have done

1. We refined several queries.

2. We monitored several counters, especially those that reveal performance problems. They all returned reasonable values.

3. Defragmented the hard disks.

4. Stopped the services that are not required.

5. Reindex and update statistics every night.



What we found

While the users were complaining we observed CPU spikes for sqlservr. We tried to find the reason but failed. The page file usage is around 2.1 GB, and sqlservr memory is around 1.5 GB.



What we do not understand is why the system is not slow right after we restart the sqlservr service, and why it becomes slow again after several days.

Is there a memory leak?

Does it have to do with continuous tempdb usage?

Is there a problem with some system resources?



ANY IDEAS ????
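
One way to see what is burning CPU at the moment of a spike on SQL Server 2005 is the query-stats DMV; a minimal sketch (no object names assumed):

-- Top statements by total CPU since they were cached (SQL Server 2005 and later)
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC

Because these numbers reset when the service restarts, comparing the output on a freshly restarted instance with one that has been up for days can show whether particular plans are degrading over time.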

View 6 Replies View Related

Setting Alerts Degrade Performance?

Oct 7, 2002

Hello Guys,
I have a question. I want to set up a SQL alert that will monitor a particular counter (e.g. Memory: Pages/sec) and send me an email when it reaches a particular threshold.
My question is: if I set this up, will SQL start running perfmon in the background and degrade the server's performance,
OR
will it just read the values from some system file or similar?

I don't know whether the perfmon counter values are stored in any system files or not.

Please advise..

thank you in advance.
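
For what it's worth, SQL Agent performance-condition alerts poll the counters internally (roughly every 20 seconds) rather than launching Perfmon, and they can only watch SQL Server objects, not OS counters such as Memory: Pages/sec. A minimal sketch, with the alert and operator names as placeholders:

-- Alert when Page Life Expectancy drops below 300 seconds
EXEC msdb.dbo.sp_add_alert
    @name = N'Low page life expectancy',
    @performance_condition = N'SQLServer:Buffer Manager|Page life expectancy||<|300',
    @enabled = 1;

-- Route the alert to an existing operator by e-mail
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Low page life expectancy',
    @operator_name = N'DBA Team',
    @notification_method = 1;   -- 1 = e-mail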

View 1 Replies View Related

Does SQL Degrade When DBCC Traceon 1222 Is Switched On?

Sep 18, 2007



Hi,

I'm seeing a few deadlocks on my SQL 2005 production database and wondered what I could expect if I switch on trace flag 1222. Can this cause more deadlocks to occur? Will the performance of SQL degrade? Is it safe to have this flag set on a production server, or is there another method you would recommend?

Thanks
Martin
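
For reference, a minimal sketch of enabling the flag globally and checking it; trace flag 1222 only writes extra detail to the error log when a deadlock actually happens, so its steady-state overhead is generally reported as negligible:

-- Enable deadlock reporting to the error log for all sessions
DBCC TRACEON (1222, -1);

-- Confirm which trace flags are active globally
DBCC TRACESTATUS (-1);

-- Turn it off again if needed
-- DBCC TRACEOFF (1222, -1);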

View 1 Replies View Related

Replication Performance Degrades In Unidirectional Setup And Lock Timeouts (Updates Higher Than Inserts)

Feb 21, 2007

 

We recently implemented merge replication between two SQL Servers (2005) on the same network, and since we introduced replication, performance has degraded considerably on the subscriber end.

1) One thing that should be mentioned is that it is unidirectional: the flow of changes is from publisher towards subscriber (only one publisher and distributor, and one subscriber).

2) Updates are higher than inserts; a single article, say "Article1", gets up to 2000 updates per day, and I am seeing dbo.MSmerge_upd_sp_Article1_GUID taking more CPU time. What should be done?

On the subscriber database, response time is getting slow and I am experiencing a large number of lock timeouts on the application end.

Can anyone also suggest server-level settings for avoiding lock timeouts?

Looking for any experienced solution/suggestion.

Thanks in advance. 
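
One clarification that may help: lock timeout is not a server-wide setting but a per-connection one, so it is normally set from the application session. A minimal sketch, plus a query to see who is blocking whom on SQL 2005 while the timeouts occur:

-- Wait at most 5 seconds for a lock before raising error 1222 (value is in milliseconds)
SET LOCK_TIMEOUT 5000;

-- -1 restores the default of waiting indefinitely
-- SET LOCK_TIMEOUT -1;

-- Sessions currently blocked and their blockers
SELECT session_id, blocking_session_id, wait_type, wait_duration_ms
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;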

View 3 Replies View Related

Removal Of Selected Indexes / Script Index Create For List Of Indexes

Jul 1, 2014

I'm working to improve performance on a database I've inherited, and there are several thousand indexes. I've got a list of ones which should definitely exist within the database, and I'm looking to strip out all the others and start fresh, though this list is still quite large (1000 or so).

Is there a way I can remove all the indexes that are not in my list without too much trouble? I.e. without having to manually go through them all individually. The list is currently in a csv file.

I'm looking to either automate the removal of indexes not in the list, or possibly to generate the Create statements for the indexes on the list and simply remove all indexes and then run these statements.

As an aside, when trying to list all indexes in the database, I've found various scripts to do this, but found they all seem to produce differing results. What is the best script to list all indexes?
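
A minimal sketch of one way to approach this, assuming the CSV has been loaded into a work table called #KeepList(IndexName); it lists every user index and generates DROP INDEX statements for the ones not on the list:

-- All user indexes in the database (SQL 2005+ catalog views)
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name,
       i.name                          AS index_name,
       i.type_desc
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND i.index_id > 0              -- skip heaps
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0;

-- Generate DROP statements for indexes that are not in the keep list
SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
     + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(i.object_id)) + ';'
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND i.index_id > 0
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND i.name NOT IN (SELECT IndexName FROM #KeepList);

Review the generated script before running it; indexes that back unique constraints or are referenced by hints need special handling.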

View 5 Replies View Related

A Question About Clustered Indexes Forcing Rebuild Of Non-clustered Indexes.

Sep 18, 2007

So I'm reading http://www.sql-server-performance.com/tips/clustered_indexes_p2.aspx and I come across this:
When selecting a column to base your clustered index on, try to avoid columns that are frequently updated. Every time that a column used for a clustered index is modified, all of the non-clustered indexes must also be updated, creating additional overhead. [6.5, 7.0, 2000, 2005] Updated 3-5-2004
Does this mean that if I have, say, a table called Item with a clustered index on a column called itemaddeddate, and several non-clustered indexes on that table, then if a record gets modified and its itemaddeddate value changes, ALL my indexes on that table will get rebuilt? Or is it referring to the table structure changing?
If so, does this "pseudocode" example also cause this to occur:
sqlstring="select * from item where itemid=12345"
rs.open sqlstring, etc, etc, etc
rs.Fields("ItemName")="My New Item Name"
rs.Fields("ItemPrice")=1.00
rs.Update
Note I didn't explicitly change the value of rs.fields("ItemAddedDate")...does rs.Fields("ItemAddedDate")=rs.Fields("ItemAddedDate") occur implicitly, which would force the rebuild of all the non-clustered indexes?
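
For context, ADO typically sends rs.Update to the server as an UPDATE whose SET list contains only the fields that were assigned, so a rough sketch of what the example above issues would be:

-- Only the assigned columns appear in the SET list; ItemAddedDate is untouched,
-- so the clustered key does not move and the non-clustered index rows are not rewritten for it
UPDATE dbo.Item
SET ItemName  = 'My New Item Name',
    ItemPrice = 1.00
WHERE ItemID = 12345;

The cost described in the quoted article applies when the clustered key value itself changes (or when a column contained in a given non-clustered index changes); it means the affected index rows are updated, not that the indexes are rebuilt.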

View 4 Replies View Related

Need Lil Fast Help

Jan 17, 2006

My system has two databases - SQL Server 2000 and DB2 - at separate locations. I have a SELECT query which needs to pick up consolidated data from both tables. Also, the schema on DB2 has minor changes compared with the schema on SQL Server 2000.

While searching on Microsoft's site I came across the technique of creating a linked server. Would this be possible to implement in my scenario? Also, in this case, would it be advisable to create a view on the DB2 server that maps the DB2 schema to the SQL Server schema format?

please hurry..
regards,
sameer
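
A minimal sketch of the linked-server idea, purely as an illustration; the provider name, data source, and credentials below are assumptions that depend on which DB2 OLE DB provider is installed:

-- Register the DB2 server as a linked server (provider name is an assumption)
EXEC sp_addlinkedserver
    @server     = N'DB2LINK',
    @srvproduct = N'DB2',
    @provider   = N'DB2OLEDB',
    @datasrc    = N'MyDb2DataSource';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'DB2LINK',
    @useself     = 'FALSE',
    @rmtuser     = N'db2user',
    @rmtpassword = N'db2password';

-- Pass-through query against the remote side
SELECT * FROM OPENQUERY(DB2LINK, 'SELECT COL1, COL2 FROM MYSCHEMA.MYTABLE');

A view on the DB2 side that presents the data in the SQL Server schema shape, as suggested in the question, keeps the OPENQUERY text simple and pushes the mapping work to DB2.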

View 2 Replies View Related

Need Help Fast :(

Aug 6, 2007

One or more files listed in the statement could not be found or could not be initialized. (Microsoft SQL Server, Error: 5009)

I accidentally created a log file on my drive E:, but every time I try to delete the log file it keeps returning the same error.
Can someone please help me delete the log file.
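
A minimal sketch of the usual sequence for removing an extra log file, with placeholder database, file, and path names; error 5009 often means the name in the statement does not match the logical file name, or the physical file has already been deleted from disk:

-- Find the logical names of the database's files
USE MyDatabase;
SELECT name, filename FROM sysfiles;

-- The extra log file can only be removed while it holds no active log;
-- backing up the log usually clears it, then REMOVE FILE deletes the physical file too
BACKUP LOG MyDatabase TO DISK = 'E:\MyDatabase_log.bak';
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Log2;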

View 7 Replies View Related

Best Way To Get Queries Fast

Aug 17, 2007

Hi, I have over a million records in my DB. What is the best way to get results fast in case I need to get the details of an employee named, say, "robert"? If I do it normally it will take long; should I use an index, or is there any other good way? Thanks in advance. Cheers
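
A minimal sketch, with assumed table and column names, of the index that makes this kind of lookup fast:

-- Non-clustered index on the column used in the search
CREATE INDEX IX_Employees_Name ON dbo.Employees (Name);

-- An equality (or leading-prefix LIKE) search can then seek the index
SELECT EmployeeID, Name, Department
FROM dbo.Employees
WHERE Name = 'robert';

A predicate such as LIKE '%robert%' cannot seek this index and will still scan the table.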

View 1 Replies View Related

How To Set Up A 40 GB Db For Fast Recovery

May 28, 2002

Hello everybody.
I have a 40 GB db running mostly transaction processing.
I have set up:
1. full backup 2 times a day (takes 30-40 min)
2. log backup every 15 min
3. custom log shipping
4. We don't want to use clustering.

Once in a while, because of network or other problems, log shipping fails, so I have to restart log shipping all over, starting from restoring the last full backup of my db in standby mode. It takes 2-3 hrs just to do this restore!

1. So I am asking for advice: is there any way I can bring down the time for the restore?
2. Should differential backups be taken?
3. We will not use clustering.

Alex
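
On question 2, a minimal sketch of how differentials shorten the re-initialisation, with placeholder names and paths; the standby side then needs only the last full backup, the latest differential, and the log backups taken after it:

-- On the primary: full backups keep running twice a day, plus periodic differentials
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_full.bak' WITH INIT;
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- On the standby: restore the full, then the latest differential, then resume applying logs
RESTORE DATABASE MyDb FROM DISK = 'D:\Backup\MyDb_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = 'D:\Backup\MyDb_diff.bak' WITH STANDBY = 'D:\Backup\MyDb_undo.dat';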

View 4 Replies View Related

Many DELETE As Fast As Possible

Oct 23, 2006

Hello all! For MS SQL 2000, I have a table with > 100 000 rows that I must clean:

DELETE FROM myTable WHERE Name LIKE 'aser%' AND info IS NULL
DELETE FROM myTable WHERE Name LIKE 'tuyi%' AND Info = 'ok'
DELETE FROM myTable WHERE Name LIKE 'hop%' AND info LIKE 'retro%'
..... about 20 DELETE commands

What is the best way to do it? Thank you
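
One common pattern, sketched here with the same predicates, is to fold the passes into a single DELETE so the table is scanned once; whether it is actually faster depends on the indexes available:

DELETE FROM myTable
WHERE (Name LIKE 'aser%' AND info IS NULL)
   OR (Name LIKE 'tuyi%' AND Info = 'ok')
   OR (Name LIKE 'hop%'  AND info LIKE 'retro%');
-- ...remaining predicate pairs OR'd on in the same way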

View 14 Replies View Related

In Need Of Help Fast Please!!

May 3, 2008

Hi everyone, I'm in deep need of help with a very easy query and have a few questions I want to ask. I use MSN: boy22202@hotmail.com. Please, I want to contact anyone who uses SQL Server 2005 that can help me with it.... thank you

View 4 Replies View Related

FAST Insert ?

Dec 18, 2007

I need to insert data into a temp table in SQL,
I have

CREATE TABLE TMP_X (
doc_name varchar(200)
)


--select * from TMP_X

INSERT into TMP_X
values
(
'...,



but it's saying there isn't a match. I know why: it's trying to insert all the data as one row, but I need the values as separate rows, since I want only one column.
Is there another INSERT-type function?
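
On SQL Server 2005 and earlier a single INSERT cannot list several VALUES rows, but an INSERT ... SELECT with UNION ALL puts each value in its own row; a minimal sketch (the document names are placeholders):

INSERT INTO TMP_X (doc_name)
SELECT 'first document'
UNION ALL
SELECT 'second document'
UNION ALL
SELECT 'third document';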

View 2 Replies View Related

Fast Insert

Feb 14, 2008

If I have a table with one column
and I want to insert a few hundred rows of names,
I can't use the INSERT statement as that does one row at a time.
How can I achieve this?
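
Besides the INSERT ... SELECT ... UNION ALL approach shown in the previous thread, if the names already sit in a text file, BULK INSERT loads them in one statement; a minimal sketch with an assumed table and file path:

CREATE TABLE dbo.Names (
    Name varchar(100) NOT NULL
);

-- One name per line in the file; the path is an assumption
BULK INSERT dbo.Names
FROM 'C:\import\names.txt'
WITH (ROWTERMINATOR = '\n');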

View 5 Replies View Related

SQL Server 2008 :: Logic To Rebuild Only Clustered Indexes / Skipping To Rebuild Non Clustered Indexes In Same Table

Jun 25, 2015

I have a requirement to rebuild only the clustered indexes in the table, ignoring the non-clustered indexes, as those are taken care of by the clustered indexes.

In order to do that, I have pulled the records based on the fragmentation %.

But I am unable to come up with logic to consider only the clustered indexes in the table for rebuilding.

create table #fragmentation
(
FragIndexId BigInt Identity(1,1),
--IDENTITY(int, 1, 1) AS FragIndexId,
DBNAME nvarchar(4000),
TableName nvarchar(4000),

[Code] ....
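
A minimal sketch of the filter in question: in sys.dm_db_index_physical_stats and sys.indexes, the clustered index is always index_id = 1, so restricting to that value skips heaps (0) and non-clustered indexes (2 and above). The 30% threshold below is an assumption:

SELECT DB_NAME(ps.database_id)          AS DBName,
       OBJECT_NAME(ps.object_id)        AS TableName,
       i.name                           AS IndexName,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.index_id = 1                         -- clustered indexes only
  AND ps.avg_fragmentation_in_percent > 30;   -- rebuild threshold (assumption)

-- Each qualifying row can then drive: ALTER INDEX [IndexName] ON [TableName] REBUILD;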

View 5 Replies View Related

Indexes Vs Clustered Indexes

Sep 17, 2006

What is the difference please?

View 1 Replies View Related

Deleted Default DB, Need Help Fast!

Mar 27, 2004

I have stupidly deleted my default DB. That causes Enterprise Manager to be unable to work with my DBs. The default DB I deleted had no function other than being the default DB; I mean it was outdated, and I had other DBs that contained all my important work. They are still running, and I can view a DB-driven site hosted at localhost even though the default DB no longer exists. I am even able to upload new content or add new users, so all my other DBs are fine. I can even see the SQL Server icon in the bottom right corner of my desktop, and it shows the server running.

Now I need to add tables and rework some of my existing tables and stored procedures, but I am not able to do that with Enterprise Manager, due to the lack of a default database.

How do I correct this problem? I have gotten one tip to do the following: EXEC sp_defaultdb 'User', 'DB', but I am not sure what to do with it..... I tried to run it from the command line, putting in my username and the DB I want as default, but nothing happened.

So I need more details; step-by-step guidance will work, as I don't know a whole lot about Enterprise Manager and SQL.

Btw, this is my error in Enterprise Manager:

A connection could not be established to MyComputerVSDOTNET2003

Reason: Cannot open default database. Login failed..

Please verify SQL Server is running and check your SQL Server registration properties and try again.

Pls tell me there is a way to fix this problem.
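
For the record, the tip being referred to is T-SQL, so it is run from Query Analyzer or osql rather than the plain Windows command prompt; a minimal sketch with placeholder login and server names:

-- From a command prompt, connect with osql using Windows authentication
-- osql -E -S YourServer\YourInstance

-- Then point the login at a database that still exists
EXEC sp_defaultdb @loginame = 'MyLogin', @defdb = 'master';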

View 6 Replies View Related

Fast User Interface

Dec 14, 2001

Hello everybody,
please advise: what is the fastest standard method of user interface access to a SQL database? I am looking for fast display of one master record plus related dependent records, plus fast scrolling through master records with display of the dependent records as fast as possible. Perhaps a standard problem with a standard solution? At the current state of matters, I am still much slower than with my old Access 97 database.

thanks for any advice,
Otakar Kverka
Prague

View 1 Replies View Related

TempDB Growing Fast

Sep 10, 2003

I noticed this morning that my tempdb grows very fast. I have 26 GB on my hard drive, and all the space was occupied by tempdb; finally the query failed due to 0 space on the hard drive and no room left for tempdb to grow.
The SELECT query was supposed to bring back about 40000 rows.
I ran the same query on a different server and tempdb there is not growing by even 1 MB.
I checked the tempdb options; Trunc. log on chkpt. is true.

Why is this problem happening?
I have just dbo permission to access all the databases.
Do you have any advice regarding this?
Thanks,
Ravi
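
Two commands that show where the space is going while the query runs, for reference (both work with only dbo/public-level access in most setups):

-- Log space used per database, including tempdb
DBCC SQLPERF (LOGSPACE);

-- Total, reserved, and unallocated space inside tempdb
USE tempdb;
EXEC sp_spaceused;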

View 3 Replies View Related

How To Move Data Fast

Jan 23, 2004

Hello everybody
We need to move table T1 from database A to database B on the same server.

Size of table T1: 15 GB and 40,000,000 rows.

Database B was just created and will act as a warehouse.

Could it be done simply by:
1. creating table T1 in db B, and then
2. setting db B to simple recovery,
3.
insert into B.dbo.T1
select * from A.dbo.T1
4. creating all the indexes on table T1 in db B

Free disk space is 35 GB.

Any idea how to optimize the import?
Thank you
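
A variation worth noting: with database B in simple (or bulk-logged) recovery, SELECT ... INTO is minimally logged on SQL Server 2000, whereas a plain INSERT ... SELECT is fully logged. A sketch of the same plan (index column name is a placeholder):

-- Creates B.dbo.T1 and copies the rows with minimal logging under simple/bulk-logged recovery
SELECT *
INTO B.dbo.T1
FROM A.dbo.T1;

-- Build the indexes afterwards, clustered index first (column name is a placeholder)
-- CREATE CLUSTERED INDEX IX_T1_Key ON B.dbo.T1 (KeyColumn);

Creating the clustered index before the non-clustered ones avoids rebuilding the non-clustered indexes a second time.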

View 5 Replies View Related

The Fast Way To Restore Database

Oct 6, 2005

Hello, everyone:

My database backup files are 3-5 GB. Restoring always takes over 20 minutes. Is there a fast way to restore a big database?

Thanks

ZYT

View 5 Replies View Related

2005 Too Fast Problem!!!

Sep 5, 2007

I have a weird situation I had not expected.....

I insert a record into a table and "later" I update it.
I have two fields to capture time information: Created and LastModified.
My update is very simple: update .... set ..,[LastModifiedDate] = GetDate() where id = @pId.

Now my problem is that I am seeing the created and lastmodified times as the same (in format 2007-09-05 12:38:42.383) !!??!

The record has definitely been updated (other fields are populated).

Can anybody enlighten me?
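
One thing worth checking, sketched below: the datetime type is only accurate to about 3 milliseconds, so an insert and an update that land in the same instant can legitimately show identical values. Comparing the two columns directly makes the gap visible (column names as in the post, the id value is a placeholder):

SELECT id,
       Created,
       LastModifiedDate,
       DATEDIFF(millisecond, Created, LastModifiedDate) AS gap_ms
FROM dbo.MyTable
WHERE id = 12345;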

View 14 Replies View Related

Seeking For The Fast Query

May 11, 2004

Hello, everyone:

There is a big table with several million records. I am developing a query that retrieves the first rows that meet a WHERE condition. Any suggestions for a fast query? Thanks a lot.

ZYT
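
A minimal sketch of the usual pattern, with assumed table and column names: TOP plus an ORDER BY that matches an index lets the scan stop as soon as enough rows are found:

-- Supporting index on the filter and sort columns (names are assumptions)
CREATE INDEX IX_BigTable_Status_CreatedDate ON dbo.BigTable (Status, CreatedDate);

SELECT TOP 100 *
FROM dbo.BigTable
WHERE Status = 'OPEN'
ORDER BY CreatedDate;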

View 2 Replies View Related

Sp Fast Loop Through Resultset

May 16, 2008

Hi,

I have made some stored procedures to check whether a user is involved with a certain record. Basically every stored procedure contains the following logic:

example spCheckClientRelated:
select @res = count(*) from client_role where client_id = @cid and employee_id = @eid

if (@res = 0)
begin
... next select
end
if (@res = 0)
begin
... next select
end
....
return @res
end

So far so good. But the final check in spCheckClientRelated tests whether a user is related to one of the sales projects for that client.

I already have spCheckSalesProjectRelated, which returns 1 or 0, similar to the example above.

So I want to find an efficient method that selects all the sales_project_ids from the sales_project table where client_id = @cid (at the moment I use, of course, select @sid = sales_project_id from sales_project where client_id = @cid).

Then I have to execute spCheckSalesProjectRelated for each @sid and @eid. This, of course, is where my problem lies: I don't know how to do a fast check for every selected @sid until spCheckSalesProjectRelated returns 1.

As you can probably tell from my question, SQL is not really my domain, and I'm certainly not an expert, but I don't mind reading or looking things up, so even a clue or a direction to look in would be most appreciated.

thx in advance.
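
Two ways this final check is commonly written are sketched below, reusing the names from the post. The set-based form avoids calling the procedure per row (the sales_project_role table name is an assumption); the cursor form reuses spCheckSalesProjectRelated, assuming it takes the project and employee ids and reports through its return value, and stops as soon as one project matches:

-- Set-based: relate the employee to the client through any of its sales projects
IF (@res = 0)
BEGIN
    SELECT @res = COUNT(*)
    FROM sales_project sp
    JOIN sales_project_role spr               -- role table name is an assumption
      ON spr.sales_project_id = sp.sales_project_id
    WHERE sp.client_id = @cid
      AND spr.employee_id = @eid
END

-- Cursor-based: call the existing procedure per project until it returns 1
DECLARE @sid int
DECLARE project_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT sales_project_id FROM sales_project WHERE client_id = @cid
OPEN project_cursor
FETCH NEXT FROM project_cursor INTO @sid
WHILE @@FETCH_STATUS = 0 AND @res = 0
BEGIN
    EXEC @res = spCheckSalesProjectRelated @sid, @eid
    FETCH NEXT FROM project_cursor INTO @sid
END
CLOSE project_cursor
DEALLOCATE project_cursor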

View 3 Replies View Related

Form Slow, Then Fast

Oct 16, 2005

I have a very puzzling situation with a database. It's an Access 2000 mdb with a SQL 7 back end, with forms bound using ODBC linked tables. At our remote location (accessed via a T1 line) the time it took to go to a record was very slow. The go-to mechanism was a box where the user typed the index value into a combo box, with very simple code attached:

with me.RecordsetClone
.FindFirst "[Index] = " & me.cboGoTo
If Not .NoMatch Then
Me.Bookmark = .Bookmark
End If
end with

Now, one would say that going to a record is slow because I'm using .FindFirst over a T1 line. And that's what I thought. However, as I was working with the form, commenting out various sections not related to the Go To, I found that the Go To functionality changed, though I didn't modify the code.

Previously, going to a record near the end of the 50,000 record recordset took about 1-2 seconds, but going to a record near the beginning took about 20 seconds. After the form changed, going to any record in the recordset took about 1-2 seconds.

So the question remains: why did it take so long to go to a record near the beginning of the recordset, but not near the end (and the ones in the middle took an amount of time about halfway between the two), and what changed so that now the form is working fine for all records?

I've compared the changed form with the previous copy, and I don't see any differences. I've compared all code in the form module, and I've compared all form properties. The forms are identical as far as I could tell. But something happened as I was commenting/uncommenting code in the form that got rid of the problem with it taking a long time to go to some of the records.

My first thought was that something got recompiled, and now the form is fast. So I went back to the original version, changed some code and recompiled, and also did a compact and repair. But it was still slow. I also tried doing an explicit decompile and then recompiled it. But it was still slow.

So this is very frustrating: the form is now working fine, but I can't see anything that's changed. If I don't see why the form is now fast, then there's no reason to believe that it might not at some point go back to being slow again. And then I'd just have to hope that something changes. It would be good to figure this out.

Any ideas as to what might have changed here to cause the form's Go To to be fast would be appreciated.

Thanks,
Neil

View 8 Replies View Related

Fast Look Up Of Long (n)varchar

Mar 6, 2006

I have a table containing URLs. I want to be able to look up an URL very fast, so I used an nvarchar to store the URL, and put an index on it (maybe naive).

Anyway, I bump into: "The index entry of length 911 bytes for the index 'UQ__URL__1367E606' exceeds the maximum length of 900 bytes."

What's the best way to handle this? I want to do the look up fast. The only thing I could think up was adding an extra column containing a digest for the URL, and looking up all URLs with the same digest *and* having the same value (which could give either 1 or 0 results).

I am new to MS SQL, so I might be describing a silly solution; basically I want to look up URLs to IDs the fastest way possible.

--
John
MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
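
The digest idea described above can be implemented with a computed CHECKSUM column; a minimal sketch with assumed table and column names (CHECKSUM can collide, hence the second comparison on the full URL):

CREATE TABLE dbo.Urls (
    UrlID   int IDENTITY(1,1) PRIMARY KEY,
    Url     nvarchar(2000) NOT NULL,
    UrlHash AS CHECKSUM(Url)              -- computed digest column
);

CREATE INDEX IX_Urls_UrlHash ON dbo.Urls (UrlHash);

-- Look-up: seek on the small hash, then confirm the exact URL
SELECT UrlID
FROM dbo.Urls
WHERE UrlHash = CHECKSUM(N'http://example.com/some/long/path')
  AND Url = N'http://example.com/some/long/path';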

View 3 Replies View Related

Need A Fast Queue Using A Table

Jul 20, 2005

I am trying to implement a very fast queue using SQL Server. The queue table will contain tens of millions of records.

The problem I have is that the more records are completed, the slower it gets. I don't want to remove data from the queue because I use the same table to store results. The queue handles concurrent requests.

The status field will contain the following values:
0 = Waiting
1 = Started
2 = Finished

Any help would be greatly appreciated. Here is a simplified script to demonstrate what has been done.

CREATE TABLE [dbo].[Queue] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[JobID] [int] NOT NULL ,
[Status] [tinyint] NOT NULL
) ON [PRIMARY]
GO

CREATE INDEX [Status] ON [dbo].[Queue]([Status]) ON [PRIMARY]
GO

CREATE PROCEDURE dbo.NextItem
@JobID integer,
@ID integer output
AS
SELECT TOP 1 @ID = [ID]
FROM Queue WITH (READPAST, XLOCK)
WHERE (Status = 0) AND (JobID = @JobID)
RETURN
GO
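
One hedged observation on the script above, which may or may not be the cause of the slowdown: the WHERE clause filters on both Status and JobID, but the index covers Status alone, so every candidate row still needs a bookmark lookup to check JobID and fetch ID. A sketch of a covering alternative:

-- Replace the single-column index with one matching the dequeue predicate
DROP INDEX [Queue].[Status]
GO
CREATE INDEX IX_Queue_Status_JobID ON [dbo].[Queue] ([Status], [JobID], [ID])
GO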

View 6 Replies View Related

Fast Counting Of Records

Jul 20, 2005

I seem to remember reading many moons ago about a function where you can retrieve a count of the last recordset you opened.

For example: I've got a stored procedure that returns a recordset using TOP 10, so I only get the top 10 records. I need to know the record count, but I don't want to reuse the SELECT statement because it's quite complex.

Any ideas? What does @@Count do?

Thanks in advance
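
The function being half-remembered is probably @@ROWCOUNT, which reports how many rows the immediately preceding statement affected; a minimal sketch with an assumed table:

SELECT TOP 10 OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC;

-- Number of rows the SELECT above actually returned (here, at most 10);
-- it must be read immediately, before any other statement runs
SELECT @@ROWCOUNT AS RecordsReturned;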

View 1 Replies View Related

Fast Record Count

Mar 21, 2007

I think I am asking this in the right place.

I'm trying to get a record count from a table with 30 million rows and it takes forever.

Here is my code:

SELECT COUNT(f_id) AS 'ROWCOUNT' FROM tablename

Is there a faster way?

BTW f_id is the primary key and is indexed.

Thanks
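
For a rough but instant figure, the row count SQL Server already tracks in its metadata can be read directly; a sketch for both the 2000-style and 2005-style views (the count is maintained by the engine and can drift slightly):

-- SQL Server 2000 / 2005 compatibility view
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('dbo.tablename') AND indid < 2;

-- SQL Server 2005 and later partition metadata
SELECT SUM(row_count) AS approx_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.tablename')
  AND index_id IN (0, 1);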

View 5 Replies View Related

Templog Is Growing Too Fast

Dec 1, 2007



Templog is growing 1 GB per hour.

I've read some articles about this issue that talk about how to shrink it.

In this case I need to find out what is happening and why.

How can I monitor it?

I know that I sometimes overdo the use of temporary tables in order to make reports faster.

The Tempdb size is normal.

Thanks
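
Two commands that show, respectively, what is consuming tempdb and whether an open transaction is pinning its log; a starting point for the monitoring question above:

-- How tempdb space is split between user objects, internal objects and the version store (SQL 2005 and later)
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Any long-running transaction keeping the tempdb log from truncating
DBCC OPENTRAN ('tempdb');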

View 4 Replies View Related
