I have a table with 40 columns and it contains 4 million records. I got the results for one year in 40 secs. After tuning, it is returning in 24 secs (what I did was create an index on the ORDER BY fields).
Can you please suggest ways I can increase the performance further?
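For what it's worth, the usual next step beyond an index on the ORDER BY fields is a covering index, so the engine can satisfy the query without touching the base table at all. A minimal sketch, assuming hypothetical table and column names since the post doesn't give the real schema (INCLUDE requires SQL Server 2005 or later):

-- Key matches the ORDER BY; INCLUDE covers the selected columns
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, Amount);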
I'm having an issue with a query I'm running on SQL Server 2005. It's a semi-complex query involving an in-line table function and several left outer joins which are joined onto the results of the function call. Two of the left outer joins are then qualified in a WHERE clause of the form WHERE table.Col IS NOT NULL; the idea is that the final result set contains data that has no match in those two tables.
The problem revolves around a WHERE clause in the function and the last left outer join (i.e., one of the ones qualified in the WHERE clause). When I alter the WHERE clause of the function to further restrict the result set the function returns, the query time shoots up from 1 second to roughly 2-3 minutes. Note that the time the function takes to complete is not affected; the difference in time is purely down to what the query does with the results the function provides. Also note that the change to the WHERE clause produces a subset of the original data; it does not add any more data (it actually restricts the original result set by roughly 1,000 rows).
I can bring the query speed back down again by removing the last left outer join - this join takes one of the columns from the function and joins it to a small table of 924 rows. So it appears that this particular join is the cause of the issue, but only when using the result set generated from the modified function query.
Now, as the thread title alludes, SQL Server 2000 and 2005 handle this differently, or appear to. When I execute this same query on a SQL 2000 machine, there's no apparent time difference, and the data that is returned is as expected. Does anyone have any suggestions as to what might be causing this and how I can fix it? I could simply return the larger result set and use managed code to filter out the rows I don't want; however, I would like to get to the bottom of this, especially if it's going to affect future queries.
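For reference, a minimal sketch of the join pattern described, with hypothetical names since the post doesn't give the schema (written here with IS NULL, which matches the stated goal of keeping only the unmatched rows):

SELECT f.*
FROM dbo.fnGetData(@param) AS f          -- in-line table function
LEFT OUTER JOIN dbo.SmallTable AS s      -- the 924-row table
    ON s.Code = f.Code
WHERE s.Code IS NULL;                    -- keep rows with no match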
Hi, I want to know if anyone has any clue about why this happens. I have a table on SQL Server 7 with 320 thousand rows, and when I execute a SELECT * on it, it takes about 6 seconds to return an answer. But the same table on SQL Server 2005 Enterprise takes about 16 seconds. Is that normal?
Hello everyone, I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server; both are running SQL Server 2000.
I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows - that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second. I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices. If both databases are identical, I'm assuming that the issue is related to some external hardware factor like disk space, memory, etc., or to OS/software issues like service packs or SQL Server configuration.
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.
Here are the prod server's specs: 2x Intel Xeon 2.67 GHz; total physical memory 2 GB, available physical memory 815 MB; Windows Server 2003 SE w/ SP1. Here are the dev server's specs: 2x Intel Xeon 2.80 GHz; 2 GB DDR2-SDRAM; Windows Server 2003 SE w/ SP1.
I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly! Thanks, Brian T
Hello! I have a table with a very simple structure: id | data, where "data" is a varchar(100). This table would contain a lot of rows (~500,000,000), and I want to select all "id" where data = @data. Is it realistic that SQL Server could serve this request on a normal web server within 1 or 2 seconds? Thanks!
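For a single-column equality lookup like this, the answer mostly depends on an index on the filter column: with one, the 1-2 second target is realistic even at 500 million rows; without one, every query scans the whole table. A sketch with a hypothetical table name:

CREATE NONCLUSTERED INDEX IX_BigTable_data
ON dbo.BigTable (data);   -- if id is the clustered key, it rides along automatically

SELECT id FROM dbo.BigTable WHERE data = @data;   -- becomes an index seek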
I have recently upgraded from SQL Server 2000 to SQL Server 2005, and now all my queries run infinitely more slowly.
Here is the scenario: I run an extract of a MS SQL Server database at a client site, then recreate the database on our in-house server - but without indexes etc. Then I run various queries in order to create data files that will be used for importing into a global system. When I was running SQL Server 2000, most of the queries ran in less than 10 seconds each, but under SQL Server 2005 they take 3 minutes or more! Does anybody know of any parameters that I need to adjust to fix this problem?
I am now facing a problem related to a linked server. I created the connection between servers A and B as linked servers. When I execute the following SQL statement on server A,
select * from B.database1.dbo.tableA where id ='12345'
I can get the results within a couple of seconds. But the similar query takes several minutes if I switch the server name in the query from B to A and run it on server B! The tables on servers A and B actually have the same sizes and the same indexes.
Do you have any clues and suggestions on this issue?
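One thing worth checking (a sketch only, reusing the names from the post): whether the filter is being applied remotely. Forcing it with OPENQUERY makes the remote server run the WHERE clause, instead of pulling rows across the link first:

SELECT *
FROM OPENQUERY(A, 'SELECT * FROM database1.dbo.tableA WHERE id = ''12345''');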
The execution time for this query on the DB2 v8.0 DBMS is one second, but when I execute it on SQL Server 2000 it is around 55 seconds. How can I increase the performance on SQL Server?
SELECT ACC_KEY1, ACC_STATUS_LAST
FROM PSSIG.CLNT_ACCOUNTS
INNER JOIN PSSIG.CLNT_CUSTOMERS
    ON PSSIG.CLNT_ACCOUNTS.CSTMR_OID = PSSIG.CLNT_CUSTOMERS.CSTMR_OID
WHERE (PSSIG.CLNT_CUSTOMERS.CSTMR_START_DT >= '1900-1-1 12:00:00')
  AND (PSSIG.CLNT_CUSTOMERS.CSTMR_END_DT <= '2106-12-31 12:00:00')
  AND (PSSIG.CLNT_ACCOUNTS.ACC_KEY1 >= '0000000000000')
  AND (PSSIG.CLNT_ACCOUNTS.ACC_KEY1 <= '9999999999999')
  AND (PSSIG.CLNT_ACCOUNTS.ACC_STATUS_LAST = 5)
  AND ACC_KEY1 > '0'
ORDER BY ACC_KEY1
Note 1: the value 5 exists in most rows, about 999,999 out of every 1,000,000.
Note 2: the number of rows in each table is around 15,000,000.
Note 3: I used the same index structure on both DB2 and SQL Server 2000.
Note 4: I used some other features in DB2 that increase performance, but I have not found alternatives for them in SQL Server 2000:
a. the "cardinality varies at run time" feature
b. included columns in an index, instead of a compound index on the (ACC_KEY1, ACC_STATUS_LAST) columns
Note 5: enabled reverse scan for the index.
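On the SQL Server 2000 side, the closest substitute for DB2's included columns is a compound key that covers the query. A sketch only; whether it helps depends on the plan, since ACC_STATUS_LAST = 5 matches nearly every row anyway:

-- ACC_KEY1 first so ORDER BY ACC_KEY1 can be satisfied without a sort;
-- ACC_STATUS_LAST second only to make the index covering for this query
CREATE NONCLUSTERED INDEX IX_CLNT_ACCOUNTS_KEY1_STATUS
ON PSSIG.CLNT_ACCOUNTS (ACC_KEY1, ACC_STATUS_LAST);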
Hi, I have Microsoft SQL Server 2005 Enterprise installed on Windows Server 2003, and I am developing a web application for 500 clients. So I am interested: will I have any performance issues if I put data for all 500 clients in one 'Articles' table and then filter it on client ID, or should I make 500 'Articles' tables, one per client with a different name, and then change the SqlDataSource for the GridView depending on which client is working on it? I will have, besides the 'Articles' table, another 10 tables, which means 5,500 tables total if I use the second approach; with the first I will have only 11 tables. So I am asking: is it better to have more tables with less data, or fewer tables with more data? And what are the pros and cons of both approaches? Thanks a lot!
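For scale, the single-table design normally holds up fine if the client column is indexed, so every query seeks straight to one client's rows. A minimal sketch, assuming a ClientID column (the post doesn't give the schema):

CREATE NONCLUSTERED INDEX IX_Articles_ClientID
ON dbo.Articles (ClientID);

SELECT * FROM dbo.Articles WHERE ClientID = @ClientID;   -- index seek per client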
I've got SQL Server 2005 Workgroup Edition running, and I have an Access ADP application which connects to it. However, since upgrading to SQL Server 2005 from 2000, the ADP project runs a lot slower. Yet when I install Express on a machine and connect the ADP project to it on that same machine, it runs just fine. We have also rebuilt all the indexes for the database, but that doesn't fix the problem. Could someone please help?
I have one query which calculates a running total. It takes just 6 minutes to run on the production SQL Server 2000 server, but it takes more than 45 minutes to run on QA on a SQL Server 2005 server. The indexes and data are the same on both servers. What other things can we check besides the indexes? Thanks
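The query isn't shown, but a common running-total shape on SQL Server 2000/2005 is a correlated subquery, which re-aggregates for every row (O(n²) work) and is therefore very sensitive to plan differences between versions. A hedged sketch of that pattern with hypothetical names:

SELECT t.TxnID,
       t.Amount,
       (SELECT SUM(t2.Amount)
          FROM dbo.Txns AS t2
         WHERE t2.TxnID <= t.TxnID) AS RunningTotal   -- re-scanned for every row
FROM dbo.Txns AS t
ORDER BY t.TxnID;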
Does using varchar in SQL Server 2005 significantly affect performance on updates?
Why or why not?
I have seen many SQL Server databases with many varchar columns - in databases other than SQL Server, it is often advised not to use varchar because it significantly impacts performance.
I am trying to weigh when to waste space to help performance.
I'm not sure I chose the right forum, so any comments on that are also welcome
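To make the space-versus-performance trade-off concrete, a small sketch with a hypothetical table: char reserves its full width in every row and updates in place, while varchar stores the actual length plus 2 bytes of overhead, and an update that grows the value may force the row to move:

CREATE TABLE dbo.Widths
(
    FixedCode    char(10)    NOT NULL,   -- always 10 bytes, updates in place
    VariableCode varchar(10) NOT NULL    -- 2-byte length overhead; growth can move the row
);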
We recently changed from SQL Server 2000 to SQL Server 2005. In the beginning all went fine, but now we are struggling with a severe performance problem: suddenly SQL Server 2005 reaches its max and is no longer able to work properly - it is extremely slow.
I'm wondering if there are other people / companies / ... sharing this same issue?
A query that was taking 20 seconds and consuming 70% CPU takes only 1 second after setting the Maximum Memory property to 2048 MB - why?
Server:
- OS: Microsoft(R) Windows(R) Server 2003, Enterprise Edition, Version 5.2.3790, Service Pack 1, Build 3790
- 8 GB memory
- Two dual-core AMD Opteron 285 2.6 GHz processors
- Server is not configured for AWE
- Fiber channel connection to EMC Clarion - two LUNs: one for the MDF, one for the LDF
SQL 2005:
- SQL 2005 32-bit Standard Edition, SP1 (version 9.0.2047)
- Three instances installed on the server - only one instance in use
- Binaries and system databases on local mirrored disk
- Database file (MDF) on one EMC LUN - dedicated physical drives
- Log file (LDF) on one EMC LUN - dedicated physical drives
Query in question:
SELECT TOP 10 Address.Address1, Address.Address2, Address.City, Address.County, Address.State, Address.ZIPCode, Address.Country, Client.Name, Quote.Deleted, Client.PrimaryContact, Client.DBA, Client.Type, Quote.Status, Quote.LOB, Client.ClientID, Quote.QuoteID, Quote.PolicyNumber, Quote.EffectiveDate, Quote.ExpirationDate, Quote.Description, Quote.Description2, Quote.DateModified, Quote.DateAccessed, Quote.CurrentPremium, Quote.TransactionDate, Quote.CreationDate, Quote.Producer
FROM ((Client INNER JOIN Address ON Client.ClientID = Address.ClientID) INNER JOIN Quote ON Client.ClientID = Quote.ClientID)
WHERE (Quote.Deleted = 0) AND ((Address.AddressType) = 'Mailing')
ORDER BY Client.Name
With the default maximum memory setting (2,147,483,647 MB), the query runs in 20 seconds and consumes over 70% of the CPU.
After changing maximum memory setting to 2048 MB, query runs in less than 1 second.
The question is: what is the best practice for setting the minimum and maximum memory settings for SQL 2005? And what can be monitored to identify the cause of these types of issues - using Profiler, PerfMon, or another tool?
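For reference, the memory cap described above is set like this (the 2048 value is just the one from this post, not a general recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;   -- the change described above
RECONFIGURE;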
I am looking for a useful SQL Server 2005 performance tuning book. I have been searching for a really good book, as I am going to start a job next month in the financial domain with SQL Server 2005 performance tuning as one of the requirements, so I am looking for a book which can help me do well at my workplace. Any suggestions and links are appreciated in advance.
Does anyone know of any documentation on the performance of partition merge/split? Does the merge or split of a partition cause any locking on the partitioned table? If you were merging or splitting a large volume of data to rebalance your partitioned table, would you potentially lock users out?
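For context, a sketch of the operations in question, using hypothetical partition function/scheme names; note that splitting or merging a range that already contains data physically moves rows, so it is far cheaper to split only empty ranges:

-- Tell the scheme where the new partition's data goes, then add a boundary
ALTER PARTITION SCHEME psOrderDate NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfOrderDate() SPLIT RANGE ('2008-01-01');

-- Remove a boundary, folding two partitions into one
ALTER PARTITION FUNCTION pfOrderDate() MERGE RANGE ('2006-01-01');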
My Performance Counters for SQL Server 2005 are corrupted. How do I repair them ?
Any help would be appreciated. Thanks.
Salyx
Specs: Windows 2003 Standard, AMD x64; SQL Server 2005 x64, 9.00.3042.00, SP2 Standard Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2). This is a new install, so no "upgrade from SQL 2000". This is a production server, so "reboot" is hopefully not part of the suggested repair.
Symptom: Open Performance Monitor. Open Add Counters. Open the "Performance Object" dropdown. Instead of the SQL Server performance counter names, a list of 4-digit numbers appears. Other performance counters, e.g., Processor, work as normal.
Attempted repair 1 - Recovery of system performance counters
Open a command prompt:
CD \Windows\System32
lodctr /R
This failed to restore the full set of performance counters for an unknown reason.
Attempted repair 2 - Recovery from a backup file from a second host
I used the performance counter backup file from a second host which has an identical Windows install. This properly restored the system performance counters, but failed to restore the SQL Server ones. This seems odd, because both systems have - as far as I can tell - the same applications installed.
Open a command prompt:
CD \Windows\System32
REM Load backup file from second host
lodctr /R:c:\PerfStringBackup.INI
Attempted repair 3 - Recover SQL Server-specific counters
Open a command prompt:
CD \Windows\System32
REM Load backup file from second host
lodctr /R:c:\PerfStringBackup.INI
REM Clear and re-load MSSQLServer counters...
unlodctr MSSQLServer
lodctr "/R:C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlctr.ini"
Executing this pretty much wiped most of the performance counters; only a small subset is now available.
More info: SQL Server 2005 and later SP2 were installed under the administrator account. The MSSQLServer service runs under its own Windows account (permission issues??). I get event log entries regarding x86 vs. x64 performance counter libraries; these, however, refer to ASP, not SQL Server. I have 2 (virtually) identical hosts (same install sequence of apps), and the performance counters on the second host work fine. Exctrlst.exe lists the MSSQLSERVER service, but I don't know how to diagnose the details.
We have our SQL databases clustered using MSCS on x64 servers and are planning to apply SP2. During initial tests, we found that around 20-25% of queries perform slower after applying SP2 compared to SP1. I just wanted to know if anyone else has seen the same behavior, and if there are any known patterns/issues with respect to performance for SP2.
I have a SQL Server 2005 database where covering indexes had to be used to improve performance for the heavy amounts of retrievals; however, the inserts into the tables are now very slow, of course. Is there any way to improve the performance of the inserts without taking away the indexes?
Would changing locking or partitioning the index help the inserts?
Other databases use a concept of "freespace" to set up in the beginning - making pre-existing space for inserts - is there anything like this in SQL Server 2005?
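As far as I know, the nearest SQL Server equivalent to that freespace concept is FILLFACTOR, which leaves a percentage of each index page empty when the index is built or rebuilt. A sketch with a hypothetical index name:

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);   -- leave 20% of each page free for future inserts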
We have updated to SQL Server 2005, let's say, in a hurry, without thinking or testing. The databases were attached to the new instance of SQL Server 2005. It looked great when I tested it alone, but then a new day came, and as all the users logged into the system we had a big problem. The response times are very long and users receive timeout errors all the time.
A little background:
The instance of SQL Server 2005 is installed on the same server that 2000 was installed on; 2000 has been uninstalled. It is a Xeon 3.2 GHz with 2 GB RAM and a SCSI RAID. Data and logs are on different spindles.
The application is old ASP code and some parts are not optimized at all. But it worked fine on SQL Server 2000.
What could be the problem?
I really don't want to downgrade to SQL Server 2000.
Wondering if anyone has any experience with SQL Server Express Edition (SSEXP). We're looking at a mobile sales force type model: a local database on a laptop with no real-time network connection. The users would collect data locally, then connect to the network every few days to replicate the data to a central server. So, questions: Has anyone tried anything similar? How stable/mature is SSEXP? Any other thoughts, alternatives, or gotchas anyone can think of?
I installed the SQL Server 2005 SP2 update 2 rollup on my 64-bit server and the performance has tanked! I installed rollup 3 on some of them, but that did not seem to help. I thought it was just a linked server performance issue, but my optimization started running today on one of the "update 2" instances and so far it's been running about 10 hours longer than it normally does. The rollup 3 fixed our stack dumping issues, but we NEED to have this performance thing fixed! I saw that MS came out with update 4 last week - it doesn't say anything about fixing this, though. Has anyone else experienced this? I'm not necessarily expecting anyone to have a fix for this; I just want to know I'm looking in the right place before I call MS.
SQL Server 2005 is installed on a brand new 64-bit server (Windows 2003 x64 Standard Edition, 2.4 GHz AMD Opteron, 2 CPUs, 8.8 GB of RAM). There are barely a few hundred rows of data scattered among a few tables in one database.
SQL Server and SSIS performance grossly degrades overnight, and in the morning everything is slow, including clicking toolbar selections. It takes 3 seconds to execute a simple SELECT statement against an empty table.
It takes 15-20 seconds to execute an SSIS package that would normally take 2-3 seconds.
But once SQL Server is restarted, everything returns to normal and the performance is good all day - and then the next day everything is slow again.
Everything is flowing smoothly for the SQL Server database I have, except one type of retrieval: when the WHERE clause uses a range of data values, the performance is terrible. I cannot anticipate every range. There are indexes on the table to try to help; however, nothing seems to help. Has anyone had a similar problem? Any suggestions to improve performance?
For example, in a SELECT statement we have many tables, and we have a WHERE clause with many conditions combined with AND. Does SQL Server apply the WHERE clause only after fetching everything, or can it dynamically decide which of the tables in the SELECT statement to access, and in what order, based on the WHERE clause predicates? (That is, would SQL Server skip fetching data from tables where the AND conditions in the WHERE clause fail or would logically be fruitless/unrelated?)
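In general, the optimizer does the latter: predicates are pushed down to the individual table accesses and drive the join order, rather than being applied after everything is fetched. A hedged illustration with hypothetical tables:

SELECT c.Name, o.Total
FROM dbo.Customers AS c
INNER JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID
WHERE c.Region = 'West'            -- applied as Customers is read
  AND o.OrderDate >= '20070101';   -- applied as Orders is read, not after the join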
Hello, we currently have our database (MSSQL 2005) on our web server; however, due to increased traffic and business we are now moving our database to its own server. I was wondering if anyone here knew of some good ways to set up/tune Windows Server 2003 and SQL 2005 for best performance. MSSQL will be the only application running on the server, and I want to make sure it is as fast as possible! Thanks in advance!!!
We have recently updated an application from SQL Server CE 2.0 to SQL Server Mobile 2005, and we are seeing a huge decrease in performance. Is this normal? Database queries that used to take 8 or 9 seconds are now around 20 seconds; the database is only about 5 MB, and the two tables in this particular query have 20 rows and 14K rows respectively. The query is basically:
select * from table1 join table2 on table1.myint = table2.myint
myint is the primary key of table2, and I have even created an index on myint for table1. Any ideas?
Hi, we are using the SQL Server 2005 Full-Text Search service. The data is not huge, but it is of the kind where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times this size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned from FTS is large; for some searches it takes a very long time, while full-text queries for less common words, against the same data and catalog, return in a timely manner. The nature of the problem doesn't allow us to return only the top results; we need all the results. So it's not about the size of the data, but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?
If splitting is the only way, can you give me an idea of a statistical/standard limit on the number of search results / catalog size at which FTS still gives good results?
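For context on where the cost goes: the standard lever in SQL Server 2005 FTS is the top_n_by_rank argument of CONTAINSTABLE, which caps how many hits the engine materializes - exactly the lever the all-results requirement rules out. A sketch with a hypothetical table:

SELECT ft.[KEY], ft.RANK
FROM CONTAINSTABLE(dbo.Records, Body, 'common_word', 1000) AS ft;   -- cap at 1000 hits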
I am having major performance issues with Microsoft SQL 2005 x64 Standard Edition's performance on Windows Server 2003 x64. The server has two quad-core CPUs with 8 GB of RAM and runs a 500 GB mirrored SCSI (RAID 1) drive system. The database running on the server is about 11 GB. I've run a defrag several times, which helps a little, but I was hoping I could do something else to increase the performance.
I have also found that the bottleneck in the SSIS package is the backup and restore process of the 11 GB database, which takes about 1 hour each way (the backup takes 1 hour and the restore takes 1 hour) when it should take about 11 minutes. Is there anything I can do to make these processes run faster, or to find out why they are taking so long? Any ideas would be a great help.
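One low-risk diagnostic step (a sketch, hypothetical names): run the backup with STATS so you can see whether throughput is uniformly slow or stalls partway, which helps separate an I/O problem from a blocking one:

BACKUP DATABASE MyDb
TO DISK = 'E:\Backups\MyDb.bak'
WITH INIT, STATS = 10;   -- report progress every 10 percent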
I have designed a fairly simple report in Report Designer that uses an Analysis Services cube as its input, and a basic matrix object as the primary design component. Cube performance, either via the object browser in Analysis Services or via a connection from Excel, is great. It does exactly what you build a cube for, i.e., it builds aggregations you can get to quickly. However, rendering a Report Designer report within Report Manager is a different story.
Even though this report is being rendered from an execution snapshot, it is painfully slow at opening and closing different levels of drill-down. The report does have six levels of drill-down, which I am sure is a factor. If I use a smaller dataset, performance does become more acceptable. However, I don't consider the requirements of my report to be all that extraordinary.
My conclusion thus far is that large datasets with many possible page renderings are not the best candidates for HTML-rendered reports. In this case, Microsoft's cube analysis add-in for Excel might be a better choice. Thoughts and feedback, either on how to improve the rendering speed or on other presentation alternatives that would perform better, would all be welcome.