Indexes: An Overview And Maintenance For Performance
Nov 30, 2004
Many people know the importance of creating indexes on SQL Server database tables; well-chosen indexes greatly improve the performance of a database. However, while many people create indexes on their SQL Server tables, far fewer maintain them properly to ensure queries run as efficiently as possible.
Hi, I maintain a 24x7 SQL Server on a cluster with one big table (out of about 900 others) and a few indexes on it. It takes up to 4 hours to rebuild the indexes on this table alone every Sunday morning, and because of that the users complain that the application is "down" and they can't access the data. I have tried every index maintenance scenario I can think of and there is no way to fit it within our 1-hour window. So I'm just wondering how you, as DBAs of 24x7 systems, deal with the same issue (if you have it at all). It would be nice to hear about your experience with this.
Normally we rebuild or reorganize indexes only when it is required. I set up a SQL Agent job using a maintenance plan to run daily and rebuild or reorganize indexes and update statistics, but I do not know whether it only does the work when it is actually needed. Does the plan rebuild only the indexes that require it, or does it fire against every index regardless of whether the work is needed?
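For what it's worth, a common alternative to letting a maintenance plan touch everything is to check fragmentation yourself and only rebuild or reorganize the indexes that need it. A minimal sketch, assuming the commonly cited 5%/30% reorganize/rebuild guidelines and a 100-page minimum; adjust the thresholds for your own environment:

-- Generate REORGANIZE/REBUILD statements only for fragmented indexes.
DECLARE @sql nvarchar(max);
SET @sql = N'';
SELECT @sql = @sql +
    CASE WHEN ps.avg_fragmentation_in_percent > 30
         THEN N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N' REBUILD;'
         ELSE N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N' REORGANIZE;'
    END + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL      -- skip heaps
  AND ps.page_count > 100;    -- ignore tiny indexes
PRINT @sql;   -- inspect first, then EXEC (@sql);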
Hi all, a data load on a database without indexes (only nonclustered indexes on the primary keys) took approx. 45 minutes. A data load on the same database after creating many indexes (with the default fill factor of 0) to improve query execution time took approx. 45 minutes. A data load on the same database after creating many indexes (with a fill factor of 70) to improve load time took approx. 90 minutes.
Can someone see the reason for this or justify this behaviour please?
OK, so I have this EAV system on a server that is old enough for kindergarten. Insanely enough, this company that makes more money than any of your gods cannot buy me a new box.
Before you say "redesign", I need funding allocated for that. See my first statement.
Anywho, I have this page that touches the dreaded Value table and does a clustered index seek on it. Can't search faster than that, right? Well, I am getting some funding for "performance tuning". I am wondering if maybe incorporating some indexed views involving the Value table, producing a smaller clustered index for it to seek, might alleviate some of this. Any thoughts?
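One way to sketch that idea is an indexed view, i.e. a schema-bound view with its own (much smaller) clustered index. The table and column names below are purely illustrative, and this assumes each entity stores at most one row per attribute:

CREATE VIEW dbo.vw_ItemName
WITH SCHEMABINDING
AS
SELECT EntityID, Value
FROM dbo.EntityValue          -- hypothetical EAV value table
WHERE AttributeID = 42;       -- the single attribute this page searches on
GO
-- The unique clustered index materializes the view as a much smaller structure to seek.
CREATE UNIQUE CLUSTERED INDEX IX_vw_ItemName
ON dbo.vw_ItemName (EntityID);
GO
-- On Standard Edition, query the view WITH (NOEXPAND) so the optimizer uses its index.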
I'm trying to improve the loading of some tables with large amounts of data that form part of an ETL. I was going to try removing any indexes before the insert to speed up the process, but I had some questions on whether or not I should include the clustered index (assuming one exists).
I was originally planning on including a step to disable all indexes on the destination table using the following:
ALTER INDEX ALL ON MyTable DISABLE
Once the load had finished I'd simply rebuild all the indexes.
Or should I simply disable only the non-clustered indexes?
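A minimal sketch of that narrower approach: disable only the non-clustered indexes (disabling the clustered index makes the table inaccessible until it is rebuilt), run the load, then rebuild everything. The table name is illustrative:

DECLARE @cmd nvarchar(max);
SET @cmd = N'';
SELECT @cmd = @cmd + N'ALTER INDEX ' + QUOTENAME(name)
            + N' ON dbo.MyTable DISABLE;' + CHAR(13)
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.MyTable')
  AND index_id > 1              -- 0 = heap, 1 = clustered, > 1 = non-clustered
  AND is_disabled = 0;
EXEC (@cmd);

-- ... run the data load here ...

ALTER INDEX ALL ON dbo.MyTable REBUILD;   -- rebuilding also re-enables the disabled indexes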
hello, I've been assigned to do performance tuning on a SQL 2000 database (around 10GB in size, several instances). So far, I see a single RAID 5 array, 4 CPUs (Xeon 700MHz), 4GB RAM. I see the RAID 5 as a bottleneck. I'd set up a RAID 10 and separate the logs, database and OS (Win2K). The one thing that was a bit odd to me was that I was told this place doesn't use indexes. The company is a house builder. They are pretty large. The IT manager isn't a programmer so she couldn't explain to me why no indexes are used. She told me the programmers just don't use indexes. Before I start investing more time on this, I'd really like to learn about why you wouldn't want to use indexes - especially on such a large database! Thanks, Oskar
Dear all, I'm using SQL Server 2005 Standard Edition. I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags. The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags has about 7,500,000 records. The lines marked in baby blue are dynamic (a dynamic WHERE statement) and vary every time this stored procedure is executed; the condition may contain 7 columns, or 10 columns, or 2 columns, etc. Now I want to create non-clustered indexes on the columns used in the WHERE statement, but the DTA suggests different indexing whenever the WHERE statement changes. So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together at once if I'm using 5 conditions, and I can't accumulate all the possible indexes since the WHERE statement always varies from situation to situation. Below is the SP:
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups (Code, NameE, NameA) SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable (TypesCount, CallsType) SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount, RecordedCalls.CallType as CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast ('01 Jan 1910 00:00:00:000' as datetime ) AND cast ( '01 Jan 2210 00:00:00:000' as datetime ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN('2')
AND RecordedCalls.AgentID NOT IN('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN('100','200'))
AND RecordedCalls.IsDeleted='false'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount, CASE('English')
WHEN 'Arabic' THEN #tempLookups.NameA
ELSE #tempLookups.NameE
END AS CallsType FROM
#tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
DROP TABLE #tempLookups
DROP TABLE #tempTable
Thanks all, Tayseer
Any suggestions on how to create efficient indexes?
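One hedged way to approach it: instead of accepting a new DTA suggestion for each WHERE variation, build a single composite non-clustered index on the conditions that are always present and reasonably selective, and cover the rest with INCLUDE columns. The column order below is an assumption to validate against the real workload, not a DTA recommendation:

CREATE NONCLUSTERED INDEX IX_RecordedCalls_Search
ON dbo.RecordedCalls (IsDeleted, CallDate, CallType)
INCLUDE (Duration, ChannelID, ServerID, AgentID);   -- covers the varying filters without widening the key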
Is there a performance limit on the number of indexes per table or database? With filtered indexes there appear to be many more opportunities for more finely defined, and therefore smaller, indexes, resulting in many more indexes on a single table.
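For reference, a filtered index is just a non-clustered index with a WHERE clause, so it contains only the rows that match the predicate. A minimal sketch against a hypothetical Orders table:

CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = N'Open';   -- only open orders are indexed, keeping the index small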
I've done some basic database stuff with Access and ASP, but I have a client who has told me that he is in over his head and wants me to try getting his "mini CMS" site to work. He told me that he set everything up in Dreamweaver using ASP and "MSSQL".
I have never used what he calls "MSSQL" before. Do I need software to create an "MSSQL" database, or is it all coded?
Also, one other question. The whole purpose of this site is to have admin users log in to a page where they can edit the content of a page and upload it. If there is a large block of text, does it all reside in a field in the database, or would it be a link to a text file, kind of like linking to an image, since images don't actually reside in the database?
This is my first project with SSIS and I'm having some problems migrating our current data import to it. Generally: right now we're pulling hourly data (which has a timestamp on it) from a customer's db (which can't be modified, and is big and growing quickly) into our db. To do this we have been using a field in our db to keep track of the last data download time, and then using that value when querying the customer's db to reduce the number of rows returned.
Will a similar system work in SSIS? I'm having trouble pulling the last download date out of the "target" database and using it in the query against the customer's db. Any advice on what I should be looking into to do this? Am I headed in the right direction, or am I just not thinking about this in an SSIS way?
How powerful/common are variables in SSIS, and should I be trying to use them here?
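The T-SQL side of that watermark pattern is small; a hedged sketch, assuming a hypothetical dbo.LoadControl table in the target database and a SourceData table on the customer side. In SSIS, the value would typically be read into a package variable with an Execute SQL Task and passed to the source query as a parameter:

-- In the target database: read the last download time.
DECLARE @LastLoad datetime;
SELECT @LastLoad = LastDownloadTime
FROM dbo.LoadControl
WHERE FeedName = 'CustomerHourly';

-- Against the customer's database: pull only rows newer than the watermark.
SELECT *
FROM dbo.SourceData
WHERE RecordTimestamp > @LastLoad;

-- After a successful load: advance the watermark.
UPDATE dbo.LoadControl
SET LastDownloadTime = GETDATE()
WHERE FeedName = 'CustomerHourly';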
My requirement is to sling a rowset from one place in SQL Server into a table in another place in the most performant way. I want this to be parameterizable: I want to provide just a connection string and some SQL for the source, and a connection string and a table name for the destination. The package should do the rest.
The solution I chose was a SQL Server 2014 SSIS package with source and destination as ADO.NET connections configured from project variables. The package has a script task to bulk copy the data. For performance I disable the non-clustered indexes first.
But this performance precaution causes the bulk copy to time out after delivering the correct rowcount to the destination table. What can I do to avoid this error?
Here's my script code:
// Get hold of the source and a data reader from it.
SqlConnection sqlconnSource = new SqlConnection();
sqlconnSource = (SqlConnection)(Dts.Connections["source"].AcquireConnection(Dts.Transaction) as SqlConnection);
SqlCommand sourcesqlCommand = new SqlCommand(SourceSQL, sqlconnSource);
sourcesqlCommand.CommandTimeout = 1500;
[Code] ....
This takes 128 seconds to put 13 million thin rows into my empty destination table and then throws an exception with this message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
SQL Server 2008 delivers on Microsoft's Data Platform vision by helping your organization manage any data, any place, any time. It enables you to store data from structured, semi-structured, and unstructured documents, such as images and music, directly within the database. SQL Server 2008 delivers a rich set of integrated services that enable you to do more with your data such as query, search, synchronize, report, and analyze. Your data can be stored and accessed in your largest servers within the data center all the way down to desktops and mobile devices, enabling you to have control over your data no matter where it is stored.
SQL Server 2008 enables you to consume your data within custom applications developed using Microsoft .NET and Visual Studio and within your service-oriented architecture (SOA) and business process through Microsoft BizTalk Server while information workers can access data directly in the tools they use every day, such as the 2007 Microsoft Office system. SQL Server 2008 delivers a trusted, productive, and intelligent data platform for all your data needs.
SQL Server 2008 New Features
TRUSTED
SQL Server provides the highest levels of security, reliability, and scalability for your business-critical applications.
Protect valuable information
Transparent Data Encryption
Enable encryption of an entire database, data files, or log files, without the need for application changes. Benefits include the ability to search encrypted data using both range and fuzzy searches, to secure data from unauthorized users, and to encrypt data without any required changes to existing applications.
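A minimal sketch of enabling Transparent Data Encryption; the names and password are illustrative, and the certificate (with its private key) should be backed up before the encrypted database is relied on:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
GO
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE MyDatabase SET ENCRYPTION ON;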
Extensible Key Management
SQL Server 2005 provides a comprehensive solution for encryption and key management. SQL Server 2008 delivers an excellent solution to this growing need by supporting third-party key management and HSM products.
Auditing
Create and manage auditing via DDL, while simplifying compliance by providing more comprehensive data auditing. This enables organizations to answer common questions, such as, "What data was retrieved?"
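A hedged sketch of the DDL involved; the audit name, file path, and audited table are illustrative:

USE master;
CREATE SERVER AUDIT SalesAudit
    TO FILE (FILEPATH = 'D:\SqlAudit\');
ALTER SERVER AUDIT SalesAudit WITH (STATE = ON);
GO
USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION SalesAuditSpec
    FOR SERVER AUDIT SalesAudit
    ADD (SELECT ON dbo.Customers BY dbo)   -- records who retrieved data from this table
    WITH (STATE = ON);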
> Learn more about SQL Server 2008 Security
Ensure business continuity
Enhanced Database Mirroring
SQL Server 2008 builds on SQL Server 2005 by providing a more reliable platform that has enhanced database mirroring, including automatic page repair, improved performance, and enhanced supportability.
Automatic Recovery of Data Pages
SQL Server 2008 enables the principal and mirror machines to recover from 823/824-type data page errors by requesting a fresh copy of the suspect page from the mirroring partner, transparently to end users and applications.
Log Stream Compression
Database mirroring requires data transmissions between the participants of the mirroring implementations. With SQL Server 2008, compression of the outgoing log stream between the participants delivers optimal performance and minimizes the network bandwidth used by database mirroring.
> Learn more about SQL Server 2008 High Availability
Enable predictable response
Resource Governor
Provide a consistent and predictable response to end users with the introduction of Resource Governor, which allows organizations to define resource limits and priorities for different workloads so that concurrent workloads deliver consistent performance to their end users.
Predictable Query Performance
Enable greater query performance stability and predictability by providing functionality to lock down query plans, enabling organizations to promote stable query plans across hardware server replacements, server upgrades, and production deployments.
Data Compression
Enable data to be stored more effectively, and reduce the storage requirements for your data. Data compression also provides significant performance improvements for large I/O bound workloads, like data warehousing.
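A minimal sketch; the table name is illustrative, and sp_estimate_data_compression_savings can be run first to see whether ROW or PAGE compression is worthwhile:

-- Estimate the savings first.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'SalesHistory',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply page compression to the table and all of its partitions.
ALTER TABLE dbo.SalesHistory
REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE);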
Hot Add CPU
Dynamically scale a database on demand by allowing CPU resources to be added to SQL Server 2008 on supported hardware platforms without forcing any downtime on applications. Note that SQL Server already supports the ability to add memory resources online.
> Learn more about SQL Server 2008 Performance and Scale
PRODUCTIVE
To take advantage of new opportunities in today's fast-moving business world, companies need the ability to create and deploy data-driven solutions quickly. SQL Server 2008 reduces time and cost of management and development of applications.
Manage by policies
Policy-Based Management
Policy-Based Management is a policy-based system for managing one or more instances of SQL Server 2008. Use this with SQL Server Management Studio to create policies that manage entities on the server, such as the instance of SQL Server, databases, and other SQL Server objects.
Streamlined Installation
SQL Server 2008 introduces significant improvements to the service life cycle for SQL Server through the re-engineering of the installation, setup, and configuration architecture. These improvements separate the installation of the physical bits on the hardware from the configuration of the SQL Server software, enabling organizations and software partners to provide recommended installation configurations.
Performance Data Collection
Performance tuning and troubleshooting are time-consuming tasks for the administrator. To provide actionable performance insights to administrators, SQL Server 2008 includes more extensive performance data collection, a new centralized data repository for storing performance data, and new tools for reporting and monitoring.
> Learn more about SQL Server 2008 Manageability
Simplify application development
Language Integrated Query (LINQ)
Enable developers to issue queries against data, using a managed programming language, such as C# or VB.NET, instead of SQL statements. Enable seamless, strongly typed, set-oriented queries written in .NET languages to run against ADO.NET (LINQ to SQL), ADO.NET DataSets (LINQ to DataSets), the ADO.NET Entity Framework (LINQ to Entities), and to the Entity Data Service Mapping provider. Use the new LINQ to SQL provider that enables developers to use LINQ directly on SQL Server 2008 tables and columns.
ADO.NET Object Services
The Object Services layer of ADO.NET enables the materialization, change tracking, and persistence of data as CLR objects. Developers using the ADO.NET framework can program against a database, using CLR objects that are managed by ADO.NET. SQL Server 2008 introduces more efficient, optimized support that improves performance and simplifies development.
> Learn about SQL Server 2008 ADO.NET Object Services and LINQ
Store any information
DATE/TIME
SQL Server 2008 introduces new date and time data types:
DATE: A date-only type
TIME: A time-only type
DATETIMEOFFSET: A time-zone-aware datetime type
DATETIME2: A datetime type with larger fractional-seconds precision and year range than the existing DATETIME type
The new data types enable applications to have separate date and time types while providing larger date ranges and user-defined precision for time values.
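A small illustration of the new types (the values are arbitrary):

DECLARE @d  date           = '2008-11-30';
DECLARE @t  time(3)        = '23:59:59.123';
DECLARE @o  datetimeoffset = '2008-11-30 23:59:59 +02:00';
DECLARE @d2 datetime2(7)   = '2008-11-30 23:59:59.1234567';
SELECT @d AS DateOnly, @t AS TimeOnly, @o AS WithOffset, @d2 AS HighPrecision;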
HIERARCHYID
Enable database applications to model tree structures in a more efficient way than currently possible. The new system type HierarchyId can store values that represent nodes in a hierarchy tree. This type is implemented as a CLR UDT and exposes several efficient and useful built-in methods for creating and operating on hierarchy nodes with a flexible programming model.
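A minimal sketch (the table and values are illustrative):

CREATE TABLE dbo.OrgChart
(
    Node hierarchyid PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);
DECLARE @root hierarchyid = hierarchyid::GetRoot();
INSERT INTO dbo.OrgChart VALUES (@root, N'CEO');
INSERT INTO dbo.OrgChart VALUES (@root.GetDescendant(NULL, NULL), N'CTO');   -- first child of the root
SELECT Node.ToString() AS NodePath, Name FROM dbo.OrgChart;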
FILESTREAM Data
Allow large binary data to be stored directly in an NTFS file system, while keeping it an integral part of the database and maintaining transactional consistency. This enables the scale-out of large binary data traditionally managed by the database onto more cost-effective storage outside the database without compromise.
Integrated Full Text Search
Integrated Full-Text Search makes the transition between full-text search and relational data seamless, while enabling users to use full-text indexes to perform high-speed text searches on large text columns.
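A hedged sketch; the table, column, and key index names are illustrative (the table needs a single-column unique index, here PK_Documents, to serve as the full-text key):

CREATE FULLTEXT CATALOG DocCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.Documents (Body)
    KEY INDEX PK_Documents
    WITH CHANGE_TRACKING AUTO;
-- Then query it:
SELECT DocumentID, Title
FROM dbo.Documents
WHERE CONTAINS(Body, N'"index maintenance"');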
Sparse Columns
NULL data consumes no physical space, providing a highly efficient way of managing empty data in a database. For example, sparse columns allow object models that typically have numerous null values to be stored in a SQL Server 2008 database without incurring large space costs.
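A minimal sketch; the column names are illustrative:

CREATE TABLE dbo.Product
(
    ProductID    int IDENTITY(1,1) PRIMARY KEY,
    Name         nvarchar(200) NOT NULL,
    -- Rarely populated attributes: NULLs in SPARSE columns take no storage.
    Voltage      decimal(5,2) SPARSE NULL,
    ScreenSizeIn decimal(4,1) SPARSE NULL
);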
Large User-Defined Types
SQL Server 2008 eliminates the 8-KB limit for User-Defined Types (UDTs), allowing users to dramatically expand the size of their UDTs.
Spatial Data Types
Build spatial capabilities into your applications by using the support for spatial data.
Implement Round Earth solutions with the geography data type. Use latitude and longitude coordinates to define areas on the Earth's surface.
Implement Flat Earth solutions with the geometry data type. Store polygons, points, and lines that are associated with projected planar surfaces and naturally planar data, such as interior spaces.
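A minimal sketch of the geography type (the coordinates are arbitrary; note that well-known text points are longitude first, then latitude):

DECLARE @seattle  geography = geography::STGeomFromText('POINT(-122.335 47.608)', 4326);
DECLARE @portland geography = geography::STGeomFromText('POINT(-122.676 45.523)', 4326);
SELECT @seattle.STDistance(@portland) / 1000.0 AS DistanceKm;   -- STDistance returns meters for SRID 4326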
> Learn more about SQL Server 2008 Programmability and Spatial Data
INTELLIGENT
SQL Server 2008 provides a comprehensive platform, delivering intelligence where your users want it.
Integrate any data
Backup Compression
Keeping disk-based backups online is expensive and time-consuming. With SQL Server 2008 backup compression, less storage is required to keep backups online, and backups run significantly faster since less disk I/O is required.
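A minimal sketch (the database name and path are illustrative):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, INIT;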
Partitioned Table Parallelism
Partitions enable organizations to manage large growing tables more effectively by transparently breaking them into manageable blocks of data. SQL Server 2008 builds on the advances of partitioning in SQL Server 2005 by improving the performance on large partitioned tables.
Star Join Query Optimizations
SQL Server 2008 provides improved query performance for common data warehouse scenarios. Star Join Query optimizations reduce query response time by recognizing data warehouse join patterns.
Grouping Sets
Grouping Sets is an extension to the GROUP BY clause that lets users define multiple groupings in the same query. Grouping Sets produces a single result set that is equivalent to a UNION ALL of differently grouped rows, making aggregation querying and reporting easier and faster.
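A minimal sketch against a hypothetical Sales table:

SELECT Region, Product, SUM(Amount) AS TotalAmount
FROM dbo.Sales
GROUP BY GROUPING SETS
(
    (Region, Product),   -- detail rows
    (Region),            -- subtotal per region
    ()                   -- grand total
);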
Change Data Capture
With Change Data Capture, changes are captured and placed in change tables. It captures complete content of changes, maintains cross-table consistency, and even works across schema changes. This enables organizations to integrate the latest information into the data warehouse.
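A minimal sketch of enabling it (the table name is illustrative, and SQL Server Agent must be running for the capture job to harvest changes):

EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;   -- NULL = no gating role for readers
-- Changes then appear in cdc.dbo_Orders_CT and via the cdc.fn_cdc_get_all_changes_dbo_Orders function.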
MERGE SQL Statement
With the introduction of the MERGE SQL Statement, developers can more effectively handle common data warehousing scenarios, like checking whether a row exists, and then executing an insert or update.
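A minimal sketch of the pattern, assuming hypothetical DimCustomer (target) and CustomerStaging (source) tables:

MERGE dbo.DimCustomer AS target
USING dbo.CustomerStaging AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name,
               target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, City)
    VALUES (source.CustomerID, source.Name, source.City);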
SQL Server Integration Services (SSIS) Pipeline Improvements
Data Integration packages can now scale more effectively, making use of available resources and managing the largest enterprise-scale workloads. The new design improves the scalability of runtime into multiple processors.
SQL Server Integration Services (SSIS) Persistent Lookups
The need to perform lookups is one of the most common ETL operations. This is especially prevalent in data warehousing, where fact records need to use lookups to transform business keys to their corresponding surrogates. SSIS increases the performance of lookups to support the largest tables.
> Learn more about SQL Server 2008 Integration Services and Data Warehousing
Deliver relevant information
Analysis Scale and Performance
SQL Server 2008 drives broader analysis with enhanced analytical capabilities and with more complex computations and aggregations. New cube design tools help users streamline the development of the analysis infrastructure enabling them to build solutions for optimized performance.
Block Computations
Block Computations provides a significant improvement in processing performance enabling users to increase the depth of their hierarchies and complexity of the computations.
Writeback
New MOLAP-enabled writeback capabilities in SQL Server 2008 Analysis Services remove the need to query ROLAP partitions. This provides users with enhanced writeback scenarios from within analytical applications without sacrificing traditional OLAP performance.
> Learn more about SQL Server 2008 Analysis Services
Drive actionable insights
Enterprise Reporting Engine
Reports can easily be delivered throughout the organization, both internally and externally, with simplified deployment and configuration. This enables users to easily create and share reports of any size and complexity.
Internet Report Deployment
Customers and suppliers can effortlessly be reached by deploying reports over the Internet.
Manage Reporting Infrastructure
Increase supportability and the ability to control server behavior with memory management, infrastructure consolidation, and easier configuration through a centralized store and API for all configuration settings.
Report Builder Enhancements
Easily build ad hoc reports and author reports with any structure through Report Designer.
Built-In Forms Authentication
Built-in forms authentication enables users to easily switch between Windows and Forms.
Report Server Application Embedding
Report Server application embedding enables the URLs in reports and subscriptions to point back to front-end applications.
Microsoft Office Integration
SQL Server 2008 provides new Word rendering that enables users to consume reports directly from within Microsoft Office Word. In addition, the existing Excel renderer has been greatly enhanced to accommodate the support of features, like nested data regions, sub-reports, as well as merged cell improvements. This lets users maintain layout fidelity and improves the overall consumption of reports from Microsoft Office applications.
Predictive Analysis
SQL Server Analysis Services continues to deliver advanced data mining technologies. Better Time Series support extends forecasting capabilities. Enhanced Mining Structures deliver more flexibility to perform focused analysis through filtering as well as to deliver complete information in reports beyond the scope of the mining model. New cross-validation enables confirmation of both accuracy and stability for results that you can trust. Furthermore, the new features delivered with SQL Server 2008 Data Mining Add-ins for Office 2007 empower every user in the organization with even more actionable insight at the desktop.
> Learn more about SQL Server 2008 Reporting Services and Data Mining. For more information, please go to http://www.microsoft.com/sqlserver/2008/en/us/default.aspx
I'm working to improve performance on a database I've inherited, and there are several thousand indexes. I've got a list of ones which should definitely exist within the database, and I'm looking to strip out all the others and start fresh, though this list is still quite large (1000 or so).
Is there a way I can remove all the indexes that are not in my list without too much trouble? I.e. without having to manually go through them all individually. The list is currently in a csv file.
I'm looking to either automate the removal of indexes not in the list, or possibly to generate the Create statements for the indexes on the list and simply remove all indexes and then run these statements.
As an aside, when trying to list all indexes in the database, I've found various scripts to do this, but found they all seem to produce differing results. What is the best script to list all indexes?
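A hedged sketch of both halves, assuming the CSV has been loaded into a staging table called #KeepList(IndexName): first list every index from the catalog views, then generate DROP statements for non-primary-key, non-unique-constraint indexes that are not on the keep list (review the output before running any of it):

-- List all indexes on user tables.
SELECT s.name AS SchemaName, t.name AS TableName, i.name AS IndexName,
       i.type_desc, i.is_primary_key, i.is_unique_constraint
FROM sys.indexes i
JOIN sys.tables  t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE i.index_id > 0;   -- exclude heaps

-- Generate DROP statements for indexes not on the keep list.
SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
     + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.indexes i
JOIN sys.tables  t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE i.index_id > 0
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND i.name NOT IN (SELECT IndexName FROM #KeepList);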
I have a question that I hope someone can clear up for me. I have come across a number of different suggestions on DB maintenance, for example reindexing with the following script:
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, ' ', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
My question is, doesn't the maintenance plan have this functionality inherent in it when you create the maintenance jobs to reindex? Is there a benefit to scripting things out vs just using the maintenance plan wizard for this sort of thing and any of the items it covers? I came from an Oracle background where this was a no-brainer but I am a bit confused on the choices with SQL Server.
So I'm reading http://www.sql-server-performance.com/tips/clustered_indexes_p2.aspx and I come across this: "When selecting a column to base your clustered index on, try to avoid columns that are frequently updated. Every time that a column used for a clustered index is modified, all of the non-clustered indexes must also be updated, creating additional overhead. [6.5, 7.0, 2000, 2005] Updated 3-5-2004"
Does this mean that if I have, say, a table called Item with a clustered index on a column in it called ItemAddedDate, and several non-clustered indexes associated with that table, then if a record gets modified and its ItemAddedDate value changes, ALL my indexes on that table will get rebuilt? Or is it referring to the table structure changing? If so, does this "pseudocode" example also cause this to occur:
sqlstring = "select * from item where itemid=12345"
rs.open sqlstring, etc, etc, etc
rs.Fields("ItemName") = "My New Item Name"
rs.Fields("ItemPrice") = 1.00
rs.Update
Note I didn't explicitly change the value of rs.Fields("ItemAddedDate"). Does rs.Fields("ItemAddedDate") = rs.Fields("ItemAddedDate") occur implicitly, which would force the rebuild of all the non-clustered indexes?
I have a requirement to rebuild only the clustered indexes on the table, ignoring the non-clustered indexes, as those are assumed to be taken care of by the clustered indexes.
In order to do that, I have pulled the records based on the fragmentation %.
But I am unable to come up with the logic to consider only the clustered indexes in the table for rebuilding.
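A hedged sketch: limit the fragmentation check to index_id = 1 (the clustered index) and generate the rebuild statements from that; the 30% threshold is an assumption:

DECLARE @sql nvarchar(max);
SET @sql = N'';
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
            + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE i.index_id = 1                                  -- clustered indexes only
  AND ps.avg_fragmentation_in_percent > 30;
PRINT @sql;   -- review first, then EXEC (@sql);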
I am testing some maintenance commands (index rebuild, index reorg, update statistics and DB integrity check) on a SQL Server 2014 database. This is a new non-production vendor database (DB size 500 GB, log size 25 GB) which will eventually be created in production. Currently it is in the full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance tasks will take and the space needed in the log file to complete them. I would like to execute these tasks manually (the first time) to gather the duration and space-required information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GBs. I killed the process and truncated and shrunk the log file back down.
1. Do the index rebuild, index reorg, update statistics and DB integrity check commands all use the log file?
2. Does an index reorg have less impact on the log file than an index rebuild?
3. Should a truncate log and shrink log file be performed after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before the maintenance commands?
I have read and understand that shrinking is not good for the database (could lead to more fragmentation and more data file growth when data is added) and I know about rebuilding indexes when fragmentation is GT 30% and reorganizing indexes when fragmentation is GT 5% and LE 30%.
Since this is a non-production database maybe I should set the recovery model to simple, run the maintenance commands and leave the database in simple recovery model unless the vendor needs it in full recovery model for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
Hello everyone, I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000. I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, which is about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.
I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices. If both databases are identical, I'm assuming that the issue is related to some external hardware factor like disk space or memory, or could it be an OS or software related issue, like service packs or SQL Server configuration?
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.
Here are the prod server's system specs: 2x Intel Xeon 2.67GHz, total physical memory 2GB, available physical memory 815MB, Windows Server 2003 SE w/ SP1.
Here are the dev server's system specs: 2x Intel Xeon 2.80GHz, 2GB DDR2-SDRAM, Windows Server 2003 SE w/ SP1.
I'm not sure what else to do; the query performance differs by an order of magnitude and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly! Thanks, Brian T
I have deleted a database from SQL Enterprise Manager. Does anyone know a way to clear that database from my maintenance plan? I do not wish to just uncheck the deleted database or create a new maintenance plan. Thanks!
My index maintenance job, which was set up through the Enterprise Manager database maintenance plan, fails with the following notice. It ran great for several weeks, then it started failing. Any suggestions?
sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
Hi, anyone who is administering a pretty big database (not less than 30 GB, with the average number of rows in a table around 2M or more), please share your experience with the maintenance of such a db. Especially, I'm interested in:
1) Index maintenance (when and how: just regular DBCC, a maintenance plan, or some script to split the job into parts, and so on).
2) Removing unused space from the db (not major).
The server works 24x7, and it's a transactional environment: SQL 7 SP3 on a cluster.
I run the sp to rebuild all the indexes; it takes about 2-3 hrs to determine the objects with fragmentation less than 80% and actually rebuild them, and during this process the users experience performance problems (especially for updates/inserts). It looks like I need to change the plan or strategy for doing this. Any thoughts appreciated!
I have been given a SQL Server 2000 database to look after which has been set up with a database maintenance plan. The plan is set to back up the complete database and the transaction log. The backups are written to the local disk correctly, but the plan is also set to remove any backup files (both database .BAK and transaction log .TRN) that are over one week old. Complete database .BAK files are written daily and the .TRN files are written every hour. The .BAK files are removed OK automatically, but the .TRN files are not - they are just slowly filling the disk. There does not seem to be anything different between the way the main database and the transaction log are set up in the maintenance plan.
I am looking for opinions on setting up a database maintenance plan. I want to know if it is safe to trust the wizard and let it set up all of the jobs, or if it is better to write your own procedures to handle backups and maintenance as in 6.5. All suggestions and opinions are welcome. Thanks.
For SQL Server 7.0, is it necessary to schedule a database maintenance plan on a regular basis? I know it is necessary for SQL Server 6.5. Thanks. Su Ge
I have a strange thing in one of our Maintenance plans.
On the first tab, where you check which databases you're including in the plan, I have (say my database name is CAT) both a 'CAT' and a 'cat' database listed, and the one chosen is 'cat'. However, my database in all other views shows up in all caps (even when I do an sp_helpdb).
The backups look like they're working, etc., but it just seems weird. If I go to create a new plan, it only gives me the one option, 'CAT', which is really what's there. I'm new, and I'm thinking the database at one time was named 'cat' and that's when the maintenance plan was created. Then it was renamed to 'CAT', and now there are two DBs showing in the old maintenance plan.
What would you do? Create a new plan with "CAT" and just get rid of the old one with the weird 'cat' and 'CAT'?