Well good morning/afternoon to everyone.
It's been a while since I've posted here, and it seems the site is a lot faster now. Good to see. :)
Anyway, I'm working on a current problem here at work: our database is quite slow. I've done some research already and will continue to do so, but I wanted to get some of your opinions.
Right now, I've run the 'DBCC SHOWCONTIG' command, and it reports the following for the first 3 system tables:
According to the DBCC SHOWCONTIG command documentation, there should be no fragmentation at all.
Some questions:
1. Would system performance be severely reduced by the above fragmentation (logical and extent)?
2. Can the 'DBCC INDEXDEFRAG(dbname, tablename, indexname)' command be issued against those system tables without consequences?
3. Is there some other command that can defrag the entire database without having to specify which tables? (See the sketch at the end of this post.)
I have also used the Index Tuning Wizard after a Profiler trace, but that failed with some unknown error.
That's it for now. Please let me know if you have some info I could use to help speed up my database.
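Regarding question 3, a minimal sketch of one way to hit every table without naming them (this is a workaround rather than a single official command; sp_MSforeachtable is an undocumented system procedure, so treat it accordingly):

    -- Rebuild every index on every user table in the current database.
    -- sp_MSforeachtable substitutes each table's name for the '?' marker.
    EXEC sp_MSforeachtable 'PRINT ''Reindexing ?''; DBCC DBREINDEX (''?'')'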
How do I determine which method I should use if I want to optimize the performance of a database? I took the Northwind database to run my example. My query: I want to retrieve the first and last names of the employees that sold between $100,000 and $200,000.

First let me create a function that takes the EmployeeID as the input parameter and returns the employee's first and last name:

    CREATE FUNCTION dbo.GetEmployeeName (@EmployeeID INT)
    RETURNS VARCHAR(100)
    AS
    BEGIN
        DECLARE @NAME VARCHAR(100)
        SELECT @NAME = FirstName + ' ' + LastName
        FROM Employees
        WHERE EmployeeID = @EmployeeID
        RETURN ISNULL(@NAME, '')
    END

My first method to run this:

    SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID) AS Employee,
           SUM(UnitPrice * Quantity) AS Amount
    FROM Orders
    JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
    GROUP BY EmployeeID, dbo.GetEmployeeName(EmployeeID)
    HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000

It runs in 4 seconds. Here are the Statistics IO and Time results:

    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 17 ms, elapsed time = 17 ms.
    (3 row(s) affected)
    Table 'Order Details'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0.
    Table 'Orders'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 3844 ms, elapsed time = 3934 ms.
    SQL Server Execution Times:
       CPU time = 3844 ms, elapsed time = 3935 ms.
    SQL Server Execution Times:
       CPU time = 3844 ms, elapsed time = 3935 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.

Now my second method:

    IF (SELECT OBJECT_ID('tempdb..#temp_Orders')) IS NOT NULL
        DROP TABLE #temp_Orders
    GO
    SELECT EmployeeID, SUM(UnitPrice * Quantity) AS Amount
    INTO #temp_Orders
    FROM Orders
    JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
    GROUP BY EmployeeID
    HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
    GO
    SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID), Amount
    FROM #temp_Orders
    GO

It runs in 0 seconds. Here are the Statistics IO and Time results:

    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    Table '#temp_Orders0000000000F1'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
    Table 'Order Details'. Scan count 830, logical reads 1672, physical reads 0, read-ahead reads 0.
    Table 'Orders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 15 ms, elapsed time = 19 ms.
    (3 row(s) affected)
    SQL Server Execution Times:
       CPU time = 15 ms, elapsed time = 19 ms.
    SQL Server Execution Times:
       CPU time = 15 ms, elapsed time = 20 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.
    (3 row(s) affected)
    Table '#temp_Orders0000000000F1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 3 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.

By the way, why does "SQL Server Execution Times" appear 3 times and not just once?

Summary: The first method is clean, a single SELECT statement, but takes 4 long seconds to execute. Its logical reads are very few compared to the second method. The second method is less clean and uses a temp table, but takes 0 seconds to execute. Its logical reads are way higher than the first method's.

What am I supposed to conclude from this example? Which method should I use over the other, and why? Are both methods fine depending on which I prefer? If I can wait four seconds, is it better to reduce the logical reads in order to cause less blocking on the live tables in a heavily accessed database? Which method should I choose on my own database?

A function like dbo.GetEmployeeName gets called once per returned row, correct? That means that if 1000 records were to be returned, would it be better to dump the 1000 records into a temp table and then call the function to process each record one at a time? Or would the direct approach, without a temp table, cause slower processing and more blocking/deadlocks because I am calling the function per row while reading directly from the live tables?

Thank you
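For comparison, a third variant worth measuring with the same STATISTICS IO/TIME settings: a purely set-based sketch (assuming the standard Northwind schema) that joins Employees directly and avoids calling the scalar function once per row:

    -- Join Employees instead of invoking a scalar UDF per row.
    SELECT o.EmployeeID,
           e.FirstName + ' ' + e.LastName AS Employee,
           SUM(od.UnitPrice * od.Quantity) AS Amount
    FROM Orders AS o
    JOIN [Order Details] AS od ON o.OrderID = od.OrderID
    JOIN Employees AS e ON e.EmployeeID = o.EmployeeID
    GROUP BY o.EmployeeID, e.FirstName, e.LastName
    HAVING SUM(od.UnitPrice * od.Quantity) BETWEEN 100000 AND 200000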
I have a table with around 100 million rows. Every day we have an ETL process in which the table is truncated and reloaded. Will creating a partition on the table increase the insert speed?
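For reference, a minimal sketch of the partitioning DDL in SQL Server 2005 and later; pfByMonth, psByMonth, and dbo.FactLoad are hypothetical names, and whether partitioning actually speeds up a plain truncate-and-reload is something to benchmark rather than assume:

    -- Partition boundaries by month (RANGE RIGHT: each boundary starts a new partition).
    CREATE PARTITION FUNCTION pfByMonth (datetime)
    AS RANGE RIGHT FOR VALUES ('20070101', '20070201', '20070301');

    -- Map every partition to the PRIMARY filegroup, for simplicity.
    CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

    -- Create the table on the partition scheme, partitioned by LoadDate.
    CREATE TABLE dbo.FactLoad (
        LoadDate datetime NOT NULL,
        Amount   money    NULL
    ) ON psByMonth (LoadDate);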
I have a small, tricky problem here... I need the help of all you experts.
Let me explain in detail. I have three tables:

1. Emp table: columns EMPID and DeptID
2. Dept table: columns DeptName and DeptID
3. Team table: columns Date, EmpID1, EmpID2, DeptNo
There is a stored procedure which runs every day and, for EVERY DeptID that exists in the Dept table, selects two employees from the Emp table and puts them in the Team table. Assuming that there are several thousand departments in the Dept table, a tremendous amount of data is entered into the Team table every day.
If I continue to run the stored proc for 1 month, the team table will have lots of rows in it and I have to retain all the records.
The real problem is when I want to retrieve data for an employee (EmpID1 or EmpID2) from the Team table and view the related details, like Date, DeptNo, and EmpID1 or EmpID2 from the Emp table. How do we optimize data retrieval and storage for the Team table? I cannot use partitions, as I have SQL Server 2005 Standard Edition.
Please help me optimize the query and the data retrieval time from the Team table.
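As a starting point, here is a minimal indexing sketch for that lookup; the table and column names follow the post, the index names are made up, and INCLUDE columns are available in SQL Server 2005 Standard Edition:

    -- One nonclustered index per employee column, covering the detail
    -- columns so the lookups below never touch the base table.
    CREATE NONCLUSTERED INDEX IX_Team_EmpID1
        ON dbo.Team (EmpID1) INCLUDE ([Date], DeptNo, EmpID2);
    CREATE NONCLUSTERED INDEX IX_Team_EmpID2
        ON dbo.Team (EmpID2) INCLUDE ([Date], DeptNo, EmpID1);

    DECLARE @EmpID INT;
    SET @EmpID = 42;  -- hypothetical employee to look up

    -- UNION ALL lets each branch seek on its own index, instead of an
    -- OR predicate that may force a full scan.
    SELECT [Date], DeptNo, EmpID1, EmpID2 FROM dbo.Team WHERE EmpID1 = @EmpID
    UNION ALL
    SELECT [Date], DeptNo, EmpID1, EmpID2 FROM dbo.Team WHERE EmpID2 = @EmpID;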
I have several databases on a server (SQL Server 2000 only, no web server installed) and lately, as the company keeps growing, my users complain that the server gets slow. These DBs are well designed and receive optimizations, integrity checks, etc. Because of this, I'm thinking about getting a new server to replace my old ProLiant ML 330, which was bought 4 years ago, but I'm concerned about which server architecture or characteristic can best improve response performance: is it HD speed? Processor speed? Or more RAM? I want to make a good decision, so I'd really appreciate your help...
My company is undertaking a database optimization project: optimizing the schema, the code, etc. I would like to ask, if you guys could help out, the following:
1. What risks are there? What are the pitfalls?
2. My company is hesitant to freeze the database and stop all new development until our vendor (who's restructuring tables and changing database objects) has a stable database for us to obtain; then, and only then, can we continue development on this newer copy. My question: how can we either shorten the database code freeze or work in parallel?
3. Can anyone point me to other sources of information? Another thread? A book? A URL?
Hi,

We use SQL 2000. When defining a database maintenance plan, one of the options is to optimize the database: reorganize data and index pages, remove unused pages, etc. Is there a command to do this manually (or in a job) and not by using a DBM plan?
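For what it's worth, a sketch of the manual equivalents, which can be scheduled as job steps; as far as I know the plan's reorganize option runs DBCC DBREINDEX under the covers, and the table and database names below are placeholders:

    -- Rebuild all indexes on one table, here with an 85 percent fill factor.
    DBCC DBREINDEX ('dbo.Orders', '', 85)

    -- Release unused space, similar to the plan's "remove unused pages"
    -- option (10 = target percent of free space to leave).
    DBCC SHRINKDATABASE (MyDatabase, 10)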
I need to confirm whether we can add space (increase the data file size) for a database that is configured for Always On, similar to what we would do with mirroring, or whether we need to follow a different procedure.
I have a requirement wherein the data files on both the primary and secondary replica got full. If I add space to the primary database, will it automatically get added to the secondary replica or not?
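For reference, adding space is an ordinary ALTER DATABASE run on the primary replica; a minimal sketch with hypothetical database and logical file names (since file growth is a logged operation, it should be redone on the secondary if the file paths are compatible, but that is worth verifying in your environment):

    -- Run on the primary replica.
    ALTER DATABASE MyAGDatabase
    MODIFY FILE (NAME = MyAGDatabase_Data, SIZE = 200GB);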
Our DBAs want to speed up the SQL backup of a database of 4 GB data size onto a network drive on another server. We are using Windows NT 4.0 (SP3) and SQL Server 6.5 (SP4). We know that the "backup buffer size" SQL configuration parameter is one way of doing it. Below are the results of increasing the value:

1. If the value is more than 10, the runnable value is always 10, even after stopping and starting the SQL Server services.
2. The SQL backup took much longer than before when the value was increased from 5 to 10.
How do we do this properly? What could be the problem? And are there any other considerations to meet?
Note that the total RAM of the server is 512 MB, out of which 128 MB is allocated to SQL Server.
Anyone's experience and cooperation will be highly appreciated. Regards, Einas (DBA)
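For reference, a sketch of how such a parameter is inspected and changed with sp_configure (the value 8 is only an example to experiment with, given the observations above; on some releases the option may require 'show advanced options' to be visible):

    -- Show the current configured and run values.
    EXEC sp_configure 'backup buffer size'

    -- Try a new value, then make it take effect.
    EXEC sp_configure 'backup buffer size', 8
    RECONFIGURE WITH OVERRIDE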
I had several databases under SQL Server CE 2.0 (on my Pocket PC) which contained ntext fields. The databases range in size from 50,000 to 700,000 records, and an individual ntext field ranges from 4 bytes to 2 megabytes.
When I recreated my databases under SQL Server 2005 Mobile on my desktop using VS2005 (see my post just under this one), I saw a big difference between the old and new database sizes. This problem was mentioned in one of the posts in this forum, and the reply was to replace ntext data with the nvarchar type.
Since most of my data is longer than 4000 characters (the limit for the nvarchar type), I couldn't use this suggestion. Instead, I changed my ntext type to the image type and used a GetByte conversion.
No change! The size of the new database is still 50% larger than the original. Since the difference is around 300 MB, this is unacceptable.
Now, I either wait for somebody to suggest a new solution (apart from keeping the ntext data in a separate binary file and keeping an index to that file's records in the SQL database) or, most preferably, have
I am doing research on handling high-volume databases (possibly with terabytes of data). Is there any optimization or specialization for queries that deal with such databases?
I am asking this question on behalf of a friend. I have little knowledge of SQL 2005, but my friend is quite knowledgeable, although this is the first time he is dealing with a large database for a client. So here's the story.
His client has a database containing 1.5 million books. He is setting up a website which will enable users to search for books. Searching by ISBN is no problem, as it only takes 1 second. The problem is that searching by Title takes more than 20 seconds, which is unacceptable. My friend has only worked on smaller databases; he just recently thought of implementing indexing and is now looking for other ideas.
Each row contains book details such as Title, Author1, Author2, Author3, Publisher, Publication Date, ISBN, etc.
Can anyone who is more experienced with large databases share some design ideas with me? His client is aiming for 8 seconds or less.
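Two common starting points for title searches, sketched under assumed names (dbo.Books, PK_Books); both features exist in SQL Server 2005, and the snippet is illustrative rather than a tuned design:

    -- A plain index helps prefix searches such as Title LIKE 'SQL%'.
    CREATE NONCLUSTERED INDEX IX_Books_Title ON dbo.Books (Title);

    -- Full-text search handles word lookups anywhere in the title,
    -- which LIKE '%...%' cannot do without scanning.
    CREATE FULLTEXT CATALOG BookCatalog;
    CREATE FULLTEXT INDEX ON dbo.Books (Title) KEY INDEX PK_Books ON BookCatalog;

    -- Word search via the full-text index.
    SELECT Title, Author1, ISBN
    FROM dbo.Books
    WHERE CONTAINS(Title, '"database"');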
Dear All,

We have a procedure that takes 12 minutes to run on the first server, but that same procedure now takes 3 hours to run on the second server using the same data. Does anyone have any suggestions why this is happening and how to make the procedure faster on the second server?

Thanks in advance.
Jeff Magouirk
Hi! I am new to SQL, but trying to teach myself. I have had lots of help from you guys and appreciate it very much. Today, I was trying to do the following:

1. Choose an EEO-1 classification: increase the salaries of all employees that have the selected EEO-1 classification by 10%.
2. Increase all employees' salaries by 5%.
In my table of "Employees", I have a column called Classifications. I want to increase the salaries of the employees with a classification of EEO-1 by 10%, but I have not figured out how to do so.
My second question is how would I increase all employees' salaries by 5%?
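A minimal sketch of both updates, assuming the Employees table has Salary and Classification columns as described (adjust the names to the real schema):

    -- 1. Raise salaries by 10% for one EEO-1 classification.
    UPDATE Employees
    SET Salary = Salary * 1.10
    WHERE Classification = 'EEO-1';

    -- 2. Raise every employee's salary by 5%.
    UPDATE Employees
    SET Salary = Salary * 1.05;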
Hello all, I have what I think is a unique problem, and I'm hoping some of you can help me with it.
I need to create a record number that is incremented by 1 whenever someone adds a new record to the database. For example, records numbered 1, 2, 3 are in the database. When the user adds a new record, SQL takes the last record number, 3 in this case, and adds 1 to it, thus producing 4.
Also, I need the ability to reuse deleted record numbers for new ones. Using the example above, say a user deletes record number 2. Whenever someone adds a new record, SQL would see the missing number and assign the new record that number.
I hope I'm making sense here. Does anyone have any ideas about this? Any articles on the web that someone could point me to?
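One common gap-filling pattern, sketched with assumed names (a Records table with an int RecordNo column); in a multi-user system this must run inside a transaction with suitable locking so two sessions cannot grab the same number:

    -- Find the lowest unused record number (1 if the table is empty).
    -- Caveat: this variant never returns a number below the current
    -- minimum, so a gap at the very start (record 1 deleted) is missed.
    SELECT ISNULL(MIN(r.RecordNo) + 1, 1) AS NextRecordNo
    FROM Records AS r
    WHERE NOT EXISTS (SELECT 1
                      FROM Records AS r2
                      WHERE r2.RecordNo = r.RecordNo + 1);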
Can someone tell me how I can write a stored procedure that will automatically increment the value of the index by 001? I am attempting to build a table for a menu system, and I need to increment the index value of the document by 001 when it is submitted to the database. I also need the script to obtain the last index value of the node before inserting the new value.
All ideas appreciated.
Sincerely,
Tim
If you're interested, this request is based upon an article written by Michael Feranda: http://www.pscode.com/vb/scripts/ShowCode.asp?txtCodeId=7321&lngWId=4
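A minimal sketch of such a procedure, using assumed names (a dbo.Menu table with an int MenuIndex column); the UPDLOCK/HOLDLOCK hints keep two concurrent inserts from reading the same last value:

    CREATE PROCEDURE dbo.InsertMenuItem
        @Title VARCHAR(100)
    AS
    BEGIN
        DECLARE @NextIndex INT

        BEGIN TRANSACTION

        -- Read the current maximum while blocking competing readers.
        SELECT @NextIndex = ISNULL(MAX(MenuIndex), 0) + 1
        FROM dbo.Menu WITH (UPDLOCK, HOLDLOCK)

        INSERT INTO dbo.Menu (MenuIndex, Title)
        VALUES (@NextIndex, @Title)

        COMMIT TRANSACTION
    END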
We are getting the following error on an SSIS package: "54 Diagnostic VirtualSQLName DomainUserName Load Daily 859CF005-CB7F-47D8-8432-AE7C074B343C 1A986F18-343F-4424-ABAB-AC6575187DF3 2007-02-14 10:05:42.000 2007-02-14 10:05:42.000 0 0x Based on the system configuration, the maximum concurrent executables are set to 4. "
Basically, it is saying that we have reached the maximum number of executables. How can we increase this value, and where?
I am really starting to like CTEs. My only qualm is that you can only use them within the first statement after you declare them. I wish they would remain active for the duration of the SQL batch, just like everything else (variables, local temp tables, etc.). Is there any trick so that you can use them multiple times? Maybe define them as a string and use dynamic SQL, or something like that? Hopefully (I have not researched the new version of SQL Server) the scope has increased in SQL Server 2008. Anyway, any help would be appreciated.
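One common workaround is to materialize the CTE's result into a local temp table once and reuse that for the rest of the batch; a minimal sketch with hypothetical table and column names:

    -- The CTE is still single-use, but its result survives in #ActiveOrders.
    WITH ActiveOrders AS (
        SELECT OrderID, CustomerID
        FROM Orders
        WHERE Status = 'A'
    )
    SELECT *
    INTO #ActiveOrders
    FROM ActiveOrders;

    -- Any number of later statements can now reuse the materialized rows.
    SELECT COUNT(*) FROM #ActiveOrders;
    SELECT TOP 10 * FROM #ActiveOrders;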
Hi, I created a view on two huge tables. I tried to run a simple SELECT statement against this view, and it took several hours to get the result. How can I improve the performance of the view? The view should make use of the indexes built on both tables, am I right? Thanks.
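If the view joins or aggregates the same way every query, one option to look into is an indexed view, which materializes the result; it comes with a long list of requirements (SCHEMABINDING, deterministic expressions, specific SET options, edition caveats), so treat this as a sketch against a hypothetical dbo.Sales table:

    -- The view must be schema-bound and deterministic to be indexable.
    CREATE VIEW dbo.vSalesByProduct
    WITH SCHEMABINDING
    AS
    SELECT ProductID,
           COUNT_BIG(*) AS RowCnt,                 -- required in indexed aggregate views
           SUM(ISNULL(Amount, 0)) AS TotalAmount   -- SUM argument must be non-nullable
    FROM dbo.Sales
    GROUP BY ProductID;

    -- Materialize the view with a unique clustered index.
    CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
    ON dbo.vSalesByProduct (ProductID);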
"Cursor provide row-by-row level processing and it will store the result sets in 'TEMPDB' database".
Because of this, or because of using cursors in triggers or stored procedures, will performance go up or come down? I would be thankful for a good explanation of this.
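To make the comparison concrete, here is a sketch (against a hypothetical dbo.Employees table) of the same change done row by row with a cursor and as one set-based statement; the set-based form is usually far cheaper because it avoids per-row fetch overhead (and, for static or keyset cursors, the tempdb worktable the quote refers to):

    -- Row-by-row: one fetch and one UPDATE per employee.
    DECLARE @ID INT;
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT EmployeeID FROM dbo.Employees;
    OPEN c;
    FETCH NEXT FROM c INTO @ID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE dbo.Employees SET Salary = Salary * 1.05 WHERE EmployeeID = @ID;
        FETCH NEXT FROM c INTO @ID;
    END
    CLOSE c;
    DEALLOCATE c;

    -- Set-based equivalent: one statement handles all rows.
    UPDATE dbo.Employees SET Salary = Salary * 1.05;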
I am receiving an error from my ODBC driver: "Maximum number of DBPROCESSes already allocated."
I confirmed that there are 25 connections and that this is the default. This corresponds to error message 10029, SQLEDBPS: the maximum number of simultaneously open DBPROCESS structures has exceeded the current setting. I would like to increase this maximum.
I have found only two ways to do this: one is dbsetmaxprocs using C, and the other is SqlSetMaxProcs using Visual Basic. My problem is that I am interfacing to SQL Server through a third-party tool that does the lower-level programming.
Is there some way that I can increase the maximum number of DB processes for all databases that are part of the SQL Server 7 environment, or can I set this value using a program called from a stored procedure?
Any ideas in this area will be greatly appreciated.