Abe writes "i am using SQL server 2000, when i tried to back up the database in SQL server, the backup began failing coz SQL Server could no longer allocate 964kb of contiguous blocks. (the min needed is 1MB)
What causes the memory or blockes to be fragmented?
How do you solve this issue other than rebooting the server?
How do you prevent this from happening or monitor this issue?
I did load testing and made the following observations:
1. The Memory: Pages/sec counter was consistently above the recommended limit of 20.
2. Target Server Memory was always greater than Total Server Memory.
From the above data it looks like memory pressure. But I found that Available Memory was always above 200 MB, and the Buffer Cache Hit Ratio was close to 99.99%. What could be the reason for this behavior?
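One way to watch the Target vs. Total Server Memory counters from inside SQL Server is to query the performance-counter view directly. A minimal sketch, assuming SQL Server 2000 (on 2005 and later the equivalent view is sys.dm_os_performance_counters); the LIKE patterns are used because the counter names vary slightly between versions:

-- Target = how much memory SQL Server wants; Total = how much it currently has
SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name LIKE 'Target Server Memory%'
   OR counter_name LIKE 'Total Server Memory%'

If Target stays above Total for long stretches, SQL Server wants more memory than it is being granted, which is consistent with external memory pressure even when Available Memory looks healthy.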
I have a process (PeopleSoft) in SQL Server that takes a long time to finish. I am looking for a way to find out how fragmented the table is so I can defragment it, or a way to pre-allocate space for a particular table. A couple of users run the process at the same time, but since SQL Server takes table/page locks, it locks and releases only after the job is done. Can I make it use row-level locking by executing the sp_tableoption procedure? I would appreciate all your suggestions.
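For what it's worth: on SQL Server 2000, row-level locking is on by default, and per-index lock granularity is managed with sp_indexoption rather than sp_tableoption. A minimal sketch with hypothetical table and index names:

-- Hypothetical names; check fragmentation of the table and all its indexes first
DBCC SHOWCONTIG ('dbo.PS_JOB') WITH ALL_INDEXES
-- Row locks are allowed by default; this re-enables them if they were disabled
EXEC sp_indexoption 'dbo.PS_JOB.PK_PS_JOB', 'DisallowRowLocks', FALSE

If the engine is still escalating to table locks, that is usually lock escalation under load rather than a per-table setting.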
I created a thread two days ago about a performance problem with non-trusted check constraints and foreign key constraints. We are planning to make them trusted to see if that helps. On the other hand, we see 50+ clustered/non-clustered indexes with >90% fragmentation, but the page counts for all of these indexes are in the 500-900 range. I'm reading in the whitepaper that fragmentation in indexes with fewer than 1,000 pages is not a concern.
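For reference, re-validating the existing data is what makes a constraint trusted again. A minimal sketch, with a hypothetical table name:

-- Hypothetical table name; WITH CHECK re-validates existing rows,
-- which marks the constraints as trusted again
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL
-- Confirm: is_not_trusted should now be 0
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.Orders')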
SQL Server 2000 is running on Windows Server 2003 with 4 GB of memory on the server. The OS was using 2.3 GB and SQL Server was using 1.6 GB (all of its allocation), for a total of approximately 4 GB according to the Idera monitoring software; all memory was allocated between the OS and SQL Server. Then 4 more GB of memory were added, for a total of 8 GB. Now the Idera monitor shows 1.7 GB for the OS and 1.0 GB for SQL Server. The System info shows 8 GB of memory with PAE, so I assume the full 8 GB can now be addressed. Why are fewer resources being used now, with more total memory, especially by SQL Server? I thought about specifying a minimum memory for SQL Server, but I am not convinced that would even work, since this 1 GB limit seems artificial. If it used 1.6 GB before, why would it not use at least that much now?
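One hedged possibility: PAE alone only lets the OS see the extra RAM; SQL Server 2000 will not address memory beyond its normal user-mode limit unless AWE is enabled, which requires Enterprise Edition and the Lock Pages in Memory right. A sketch, assuming Enterprise Edition (the memory figures are placeholders):

-- Assumes SQL Server 2000 Enterprise Edition; 'awe enabled' is ignored on Standard
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'min server memory', 1024   -- MB
EXEC sp_configure 'max server memory', 6144   -- MB; leave headroom for the OS
RECONFIGURE
-- 'awe enabled' takes effect only after a service restart

With AWE on, setting max server memory matters, because SQL Server will otherwise grab nearly everything and not release it.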
I have a bunch of heap tables and their fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
We have a database with a table that contains around 180m records. Each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid daily rebuilds of the indexes, which is what our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
    [EAN] [bigint] NOT NULL,
    [Day] [smalldatetime] NOT NULL,
    [State] [int] NOT NULL,
    [Quantity] [int] NULL,
    [StockValue] [float] NULL,
    CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept that the tables are going to be fragmented and require a daily rebuild (I don't like this one!)
2. Partition the table (see the sketch after this list)
3. Remove the non-clustered indexes and let the clustered index for the primary key do the work.
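If option 2 is considered, a minimal partitioning sketch, assuming SQL Server 2005+ Enterprise Edition and range partitioning on [Day] (the boundary dates and filegroup layout are placeholders, not a recommendation):

-- Hypothetical boundaries; with RANGE RIGHT, new rows land in the newest partition,
-- so older partitions stay static and never need rebuilding
CREATE PARTITION FUNCTION pf_InventoryDay (smalldatetime)
AS RANGE RIGHT FOR VALUES ('20070101', '20080101', '20090101')

CREATE PARTITION SCHEME ps_InventoryDay
AS PARTITION pf_InventoryDay ALL TO ([PRIMARY])

-- The clustered primary key would then be rebuilt on ps_InventoryDay([Day]);
-- that requires [Day] in the key, which it already appears to be.

The point of the design is that daily inserts only fragment the current partition, which is a fraction of the 180m rows.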
Would someone please let me know which is the better way to reduce fragmentation on a Windows NT drive: DBCC INDEXDEFRAG, DBCC DBREINDEX, or dropping and recreating the indexes through SQL?
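For context, a sketch of the two DBCC options with hypothetical names; INDEXDEFRAG works online but only shuffles leaf pages, while DBREINDEX rebuilds from scratch but takes the table offline:

-- Online: compacts and reorders leaf pages in place; can be stopped safely
DBCC INDEXDEFRAG (MyDB, 'dbo.MyTable', 'IX_MyTable_Col1')
-- Offline: drops and rebuilds all indexes on the table; holds locks while it runs
DBCC DBREINDEX ('dbo.MyTable')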
Hello. I have received the following error upon an attempt to browse the cube. All other tabs are functional, including the Calculations tab. We are running Windows Server 2003 SP2 and SQL Server 2005 SP2. Any suggestions would be greatly appreciated!
**EDIT** - Have confirmed SP1 for VS2005 is installed both locally and on the server, also.
Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Microsoft Visual Studio)
------------------------------ Program Location:
at Microsoft.Office.Interop.Owc11.PivotView.get_FieldSets()
at Microsoft.AnalysisServices.Controls.PivotTableFontAdjustor.TransformFonts(Font font)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdatePivotTable(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.UpdateAll(Boolean translate)
at Microsoft.AnalysisServices.Browse.CubeBrowser.InitialUpdate()
at Microsoft.AnalysisServices.Browse.CubeBrowser.SupportFunctionWhichCanFail(FunctionWhichCanFail function)
I've been researching AWE to determine if we should enable this for our environment.
Currently we have a quad core box with 4 GB of RAM (VMware). OS: Windows 2003 Std, SQL Server 2005 Std. The /3GB switch is not set but will be as soon as we can perform maintenance on the server.
I have read mixed feedback on AWE: either it works great or it grinds you to a halt. I would assume that the grinding to a halt is due to not setting the min/max values correctly or not enabling the Lock Pages in Memory setting.
We only have one instance of SQL on the server and this box won't be used for anything else aside from hosting SQL services. We do plan on running SSRS off of this server as well.
1. Will running SSRS and enabling AWE cause me problems? Will I have to reduce the max setting by the SSRS memory usage or will it share and play nice?
2. How do I go about setting the max value? Should it be less than the physical RAM in the box? Right now it's set to the default of 2147483647; even if I don't enable AWE, should this default value be changed?
3. It seems that even at idle SQL Server holds a lot of memory and the page file grows. If I restart the process in the morning, memory usage in Task Manager is at 600 MB or so. By the end of the day it's up around 2 GB. How can I track down what's causing this, and should this even concern me?
4. The Lock Pages in Memory setting worries me. Everything I've read on it warns about serious degradation of the OS and other programs, in some cases to the point where they had to restore the settings on the server before they could bring it back up. What are your thoughts on this?
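On question 2: the default of 2147483647 effectively means unlimited, and it is common to cap max server memory below physical RAM so the OS and SSRS keep headroom. A sketch, assuming roughly 1.5 GB is left free on this 4 GB box (the figure is an assumption, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server below physical RAM; SSRS and the OS use what remains
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;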
I have inherited a database running on SQL 7 SP2. A lot of the tables do not have a clustered index, and some of these tables are highly fragmented. Below is a sample of the DBCC SHOWCONTIG output.
This table has 5 non-null char columns whose total length is 43 characters, yet the table occupies about 900 MB for only about 670,000 rows! SELECT performance on the table is terrible!
Since there is no clustered index, I have to rebuild the table to remove the fragmentation. Is there a way to reorganize the table without recreating it?
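One commonly used approach on SQL 7: creating a clustered index rewrites and compacts the heap, and dropping it afterwards returns the table to a heap, now defragmented. A sketch with hypothetical names:

-- Hypothetical names; building the clustered index rewrites and compacts the heap
CREATE CLUSTERED INDEX IX_Defrag ON MyHeapTable (SomeColumn)
-- Dropping it leaves the compacted pages behind as a heap again
DROP INDEX MyHeapTable.IX_Defrag

You need enough free space for the rebuild (roughly the size of the table), but the table itself is never dropped.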
I have a problem with defragmenting tables. When I ran DBCC SHOWCONTIG for the tables, it showed 94%, 50%, and 33% fragmentation. I rebuilt the indexes on all of those tables and ran DBCC SHOWCONTIG again, but it still shows the same fragmentation percentages. Is there any way to remove fragmentation from all the tables completely? Any help would be appreciated.
What's the best way to find out if disk fragmentation on Windows 2000 Server is affecting SQL Server performance?
If disk fragmentation is shown to be a cause of performance problems, what are the recommendations for a disk defragmentation strategy? E.g. use the Windows 2000 built-in disk defrag utility, or buy a 3rd-party product like DiskKeeper? How much overhead does a product like DiskKeeper add when defragmenting in the background?
How can I measure fragmentation for the whole database? DBCC SHOWCONTIG shows fragmentation for single objects only; I would like to see fragmentation across the entire database.
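One workaround on SQL 2000: run SHOWCONTIG without naming an object, which covers every table in the current database, and ask for the output as a result set you can sort and filter:

-- Reports every index on every table in the current database as one result set
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES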
What functions or tools do you use for managing index fragmentation?
DBCC?
I am working through the MS Press SQL 2005 book and it mentions the sys.dm_db_index_physical_stats function. It then gives an example of code which is very involved.
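A stripped-down version of that kind of query, as a sketch for the current database:

-- Fragmentation and size for every index in the current database, worst first
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;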
Before rebuilding the index:

tablename     indexname              avg_fragmentation_in_percent
Payoff_Quote  PK_Payoff_QuoteDetail  83.3333333333333

ALTER INDEX ALL ON Payoff_Quote REBUILD

After issuing the above statement, the result remains the same.
I have a nice script that will look at the index fragmentation by using the DMV (sys.dm_db_index_physical_stats) and, if it is above a specified threshold, will rebuild or reorganize the index.
What I have noticed is that even after reorging and/or rebuilding the index, the fragmentation percent in the DMV does not change for some indexes. Other indexes are updated just fine.
Is this a result of updating statistics? Why would the fragmentation change for some indexes and not others? Why does it seem that, no matter how much rebuilding or reorganizing is done, the fragmentation percent for some indexes does not change?
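One common explanation: very small indexes are allocated from mixed extents, so their reported fragmentation barely moves no matter how often they are rebuilt. A typical pattern is to skip them and branch on a threshold (the cutoffs below are the usual rules of thumb, not hard limits):

-- Sketch: ignore small indexes, reorganize light fragmentation, rebuild heavy
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            ELSE 'REORGANIZE' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000          -- below this, the number is mostly noise
  AND ips.avg_fragmentation_in_percent > 10;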
We backup a lot of SQL databases. The db admin uses SQL to dump the databases to a *.bak file on a network share, then we pick them up to tape.
The problem is that, for some reason, the backup files are MASSIVELY fragmented, which kills our backup speed to tape (via NetBackup Enterprise). If I defragment the drive (via the built-in tool on Win2K3 or PerfectDisk 8), our speed to tape more than doubles (it can go from 16 MB/sec to 36 MB/sec). However, after the next SQL backup, the drive is completely fragmented once again (around 70%).
Is there any way to improve how these files are written to disk (to keep them relatively in one piece)? For some reason it appears this is a bigger issue with the SQL backup files than any other file type. Thanks.
I have a Windows Server 2012 machine with SQL Server 2012 Enterprise. The RAM size is 22 GB, and sometimes SQL Server takes 95% of the memory. My question: how do I reduce its memory use without killing any processes, since this is a production server with many background processes running? And is there any guide to learning why memory climbs so high and how to reduce it?
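By design SQL Server takes nearly all the RAM it can get and releases it only under pressure, so high usage is normal rather than a leak; the usual fix is to cap max server memory. A sketch, assuming roughly 4 GB should be left for the OS and other processes (the figure is an assumption for a 22 GB box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Takes effect immediately; no restart or process kill needed
EXEC sp_configure 'max server memory (MB)', 18432;
RECONFIGURE;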
How do you defragment SQL database data files? Does running DBCC DBREINDEX or DBCC INDEXDEFRAG minimize fragmentation of both the tables and the indexes, or just the indexes? Thanks in advance.
We have a nightly job that reorganizes everything in one of our prod databases, but the index on one of the tables fragments quickly, showing 90% fragmentation by morning.
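One common mitigation for a hot index: rebuild it with a lower fill factor so inserts have free space before pages start splitting. A sketch with hypothetical names and a placeholder value:

-- Leaves 20% free space per leaf page to absorb inserts between maintenance runs
ALTER INDEX IX_MyHotIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 80);

The trade-off is a larger index and slightly slower scans, in exchange for slower fragmentation growth.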
I have a table with 50% Logical Scan Fragmentation, according to DBCC SHOWCONTIG (myTable). Why, after running DBCC INDEXDEFRAG (myDB, myTable), does it still sit at 50%? Why isn't it lower?
I've inherited a SQL 2005 server with a major problem: it's been badly administered for ages.
It has a 250 GB disk with 3% free space, and the disk is 93% fragmented. There is a SQL data file of 231 GB on that volume that I'm trying to shrink, as it apparently has 92 GB of free space in it. It won't shrink because the fragmentation is too bad: it just hangs for 2 days and then times out. I've tried shrinking the files and the database to no avail. I can't defragment the disk either, as the defragmenter just says it's finished after 2 minutes and doesn't do anything.
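One approach worth trying: shrink in small increments instead of one giant operation, so each step can finish and commit before the next. A sketch with a hypothetical logical file name and step size:

-- Hypothetical logical file name; shrink ~1 GB at a time and repeat
DBCC SHRINKFILE (N'MyDB_Data', 230000);  -- target size in MB
DBCC SHRINKFILE (N'MyDB_Data', 229000);
-- ...continue stepping down toward the desired size

Each small shrink only has to move a limited number of pages, which avoids the multi-day hang.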
Can you defragment system tables? They appear in my DBCC SHOWCONTIG report. Some are worse than others, but if it were a user table I would defragment it. However, the reindex commands don't work on system tables... Any ideas? Thanks, Matt.
Hi, it is natural that indexes fragment over time. But I would like to know how you reindex or defragment when your database table is big and you cannot afford the downtime to rebuild it. Thanks.
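On SQL 2000-era systems the usual answer is DBCC INDEXDEFRAG: it runs online, works in many small transactions, and can be cancelled mid-way without losing the work already done. A sketch with hypothetical names:

-- Online defragmentation; safe to stop and resume on a large table
DBCC INDEXDEFRAG (MyDB, 'dbo.MyBigTable', 'IX_MyBigTable_Main')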