I am investigating a SQL server performance issue where the system
operates well at times and poorly at others. This SQL server is
connected to a SAN where I believe the issue lies. I have started some
testing using the SQLIOSimx86 utility from Microsoft with the
application stopped. My initial results show that there are quite a
few errors that indicate
"IO requests are outstanding for more than 15 sec."
I am next going to look at the I/O system as a whole (BIOS, driver versions).
Any thoughts? I can't seem to find any documentation regarding this error or whether it is acceptable.
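Alongside SQLIOSim, the server's own I/O statistics can help confirm which files are actually slow. A minimal sketch, assuming this is SQL Server 2005 or later (where the DMV below exists):

```sql
-- Average I/O stall per read and per write, per database file.
-- High avg_write_ms on the SAN-backed files would corroborate
-- the "outstanding for more than 15 sec" warnings.
SELECT DB_NAME(vfs.database_id) AS db,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```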
Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming the issue is related to some external hardware factor like disk space, memory, etc. Or it could be an OS software related issue, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total Physical Memory 2 GB, Available Physical Memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do. The query performance differs by an order of magnitude and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T
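When plans and indexes match but timings differ by an order of magnitude, stale or differing statistics on the slow server are a common culprit. A minimal sketch of how one might check this on SQL Server 2000; `Orders` is a placeholder for the actual table in the query:

```sql
-- Refresh statistics on the slow server for the table in question
-- (Orders is a hypothetical name; substitute your own table).
UPDATE STATISTICS Orders WITH FULLSCAN;

-- Compare when statistics were last updated on both servers.
SELECT 'Orders' AS table_name,
       STATS_DATE(OBJECT_ID('Orders'), 1) AS stats_last_updated;

-- Also worth comparing between servers: actual I/O and timing.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the slow query here, then compare logical reads on prod vs dev
```

If logical reads are similar but elapsed time differs, the bottleneck is likely hardware (disk or the 815 MB of available memory) rather than the plan.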
We are trying to create some alerts in our SQL Server 2014 BI edition. The issue is that after I choose "Type" as "SQL Server performance condition alert", nothing is listed in the "Object" list box. SQL Server event alerts are working; the issue is only with "SQL Server performance condition alert".
I can't find 'SQL Server: SSIS Pipeline' performance object in performance monitor on a 64-bit SQL Server. I see it on a 32-bit. Does anybody know why?
I set up the collector, and specify the Run As as my AD account in the Collector Set - Properties - General screen. My AD account is the local admin of the remote server.
However, the collector does not seem to work. Although the collector set is shown as running, the .blg file stays at 64K. If I open it, there is nothing inside (no counters at the bottom). What did I miss?
Hi, I am interested if anyone else has come across performance problems with the SQL Server linked servers to SQL Server. I suspect that the OLE DB Provider that I am using perhaps has some performance issues when passed parameters.
I have set the dynamic parameters option on, and use collation compatible.
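One well-known pitfall with parameterized queries over linked servers is that a four-part-name query can pull the whole remote table locally before filtering. A minimal sketch of the usual workaround, with `LinkedSrv`, `MyDb`, and `Orders` as placeholder names:

```sql
-- A four-part-name query may fetch the remote table and filter locally:
SELECT * FROM LinkedSrv.MyDb.dbo.Orders WHERE OrderId = 42;

-- OPENQUERY executes the filter on the remote server instead,
-- sending only matching rows across the wire:
SELECT * FROM OPENQUERY(LinkedSrv,
    'SELECT * FROM MyDb.dbo.Orders WHERE OrderId = 42');
```

Capturing what the remote server actually receives (via Profiler on that side) would show whether the provider is shipping predicates or not.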
Does anyone know of a quick checklist I can use (config setup) for the server running my SQL Server to enable best performance? The box is dedicated to SQL Server. The problem is that when running a fairly heavy SP, the box locks up, the CPU hits 100%, and I end up staring at a screen trying to load itself for ages.
We are currently in the process of loading large amounts of data into our database. We are running into a situation where we are getting a message of "Waiting for WRITELOG", and this is slowing our process to a crawl. I do not have a lot of experience on the performance side. I would truly appreciate any help on this matter.
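WRITELOG waits mean sessions are waiting for log records to be flushed to the transaction log disk, which during row-by-row loads usually points at one log flush per tiny implicit transaction. A minimal sketch of two things worth trying, assuming SQL Server 2000-era tooling:

```sql
-- Inspect accumulated wait statistics (SQL Server 2000 syntax)
-- to confirm WRITELOG dominates:
DBCC SQLPERF(WAITSTATS);

-- Batching many inserts into one explicit transaction reduces the
-- number of log flushes from one-per-row to one-per-batch:
BEGIN TRAN;
-- ... a few thousand INSERT statements ...
COMMIT TRAN;
```

Beyond batching, moving the log file to faster dedicated disks, or using a minimally logged path such as BULK INSERT where the recovery model allows it, are the usual next steps.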
Hi guys, I have Windows NT Workstation 4.0 and the SQL Server Client Configuration Utility installed on my computer, so I have remote administration of SQL Server. But because Performance Monitor isn't part of NT Workstation, I can't check the behavior of the cache hit ratio and other counters. Maybe you can give me some idea of how I can do it remotely without Performance Monitor.
My company is contemplating which platform (Oracle or SQL Server) to develop a new imaging application in (large databases, many users). All of our current SQL Server applications are relatively small now (100 MB databases). Can anyone give me some idea of what size databases they are running in SQL Server and number of users concurrently accessing them. What can SQL Server realistically handle?
I have to administer a database which makes almost no use of stored procedures. The C++ frontend makes ad hoc calls to the database (no, this is not my idea of how to do things, but I have no say). Any ideas on the best way to tweak SQL Server to handle this? Thanks
We have an application that uses SQL 2000 server. I am almost certain all the performance issues we are having are due to the SQL server. I really need to confirm this.
Hello Everyone,

Regarding stored procedures and views: I know that stored procedures cause SQL Server to create a cached execution plan. Is the same thing done for views? Also, how bad is the performance hit for a stored procedure that uses one or a few views, as opposed to re-creating the same select statement with the proper joins to the required tables?

I know that there are a bunch of variables that affect this stuff; I'm just trying to get a ballpark idea of how it works.

Thanks,
Frank
We recently created a library module (ASP.NET and SQL Server 2005) where we store files inside the SQL database. We now have several thousand files and the DB is around 25 GB. I think we are starting to see performance problems when trying to select files for download. Files under 10 MB seem to download fine, but over 10 MB we are having problems. I was wondering if someone could point me to a good article that talks about these kinds of performance issues and what I might do to overcome them.
We have recently updated an application from SQL Server CE 2.0 to SQL Server Mobile 2005 and we are seeing a huge decrease in performance. Is this normal? Queries that used to take 8 or 9 seconds are now taking around 20 seconds; the database is only about 5 MB, and the two tables in this particular query have 20 rows and 14K rows respectively. The query is basically:
select * from table1 join table2 on table1.myint = table2.myint
myint is the primary key of table2, and I have even created an index on myint for table1. Any ideas?
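On a device-class engine like SQL Server Mobile, one low-cost mitigation is to stop materializing every column of a 14K-row join. A minimal sketch, where `col1` and `col2` are hypothetical column names standing in for the ones the application actually needs:

```sql
-- Projecting specific columns instead of * reduces the data the
-- device must read and materialize (col1, col2 are placeholders).
SELECT t1.col1, t2.col2
FROM table1 AS t1
JOIN table2 AS t2 ON t1.myint = t2.myint;
```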
I understand that the number of records allowed in a SQL Server table is limited by storage capacity, but at what point will there be a noticeable decline in performance? Assume each row has, say, 10 fields, and the only operations being performed on the table are selects (no inserts, updates, etc.). Will performance begin to drag at 50,000 records? 100,000? 1,000,000? 5,000,000?

I'm trying to work out just how scalable SQL Server is, and perhaps explore some alternatives if tremendous amounts of data will bring down performance. Currently using 8.0, not 2005.

Thanks in advance.
Hi,

I am having a problem with one of my stored procedures in SQL Server 2005. Basically the proc brings back a data set for the ASP.NET front end, but it is running very slowly from .NET. I have run SQL Profiler on the procedure and it's taking around 20 seconds to bring back the data for .NET, whereas if I copy and paste the executed SP from Profiler into Management Studio and run it in a query window, it runs in around 1 second, even if I run DBCC DROPCLEANBUFFERS before I run it. More worryingly, the CPU usage is 40 times higher and the number of reads is 50% higher from .NET.

We have the .NET front end spread over 3 clustered web servers with load balancers, and the SQL db is on a dedicated rig. I am having the same problem on my locally published version of the site as well, so I don't think it's an issue with the web site.

If anyone has got any ideas on this then please let me know, as I am completely stuck. I should mention that the issue has only recently started occurring; it used to be fine, and the rest of the site is fine...

Thanks in advance
Tom
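A frequent cause of "fast in Management Studio, slow from .NET" is that the two sessions run with different SET options (SSMS defaults to SET ARITHABORT ON, while ADO.NET connections leave it OFF), so each gets its own cached plan, and the .NET plan can be a victim of parameter sniffing. A sketch of how one might test this hypothesis, with the proc and parameter names as placeholders:

```sql
-- Reproduce the .NET connection's environment in a query window:
SET ARITHABORT OFF;
EXEC dbo.MyProc @param = 'value';  -- placeholder proc/parameter names

-- If the proc is now slow in SSMS too, a sniffed plan is the likely
-- culprit. One quick (if blunt) test is forcing a fresh plan:
EXEC dbo.MyProc @param = 'value' WITH RECOMPILE;
```

If WITH RECOMPILE fixes it, the longer-term options are OPTIMIZE FOR hints or masking the parameter in a local variable inside the procedure.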
Hi experts, I am facing a performance issue. Here is the scenario. Suppose a table has the columns EmpId, EmpName, DeptId, DeptName, and Sal. Except for Sal, all the remaining columns have the same values (I mean the same data) across rows. Assume there are 4 rows in the table. I wrote a select query to get the data like 1,xx,1,yy,50,500,5000,1000 — here 50, 500, 5000, 1000 are the Sal column values. I wrote a function to return the Sal values as a comma-separated list when the remaining column values are the same. But for a huge number of records (assume 25,000 records), the performance is very bad. So I need a built-in query or some other solution for this problem.
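If the server is SQL Server 2005 or later, the per-row function call can usually be replaced with a set-based FOR XML PATH aggregation, which tends to scale far better than a scalar UDF at 25,000+ rows. A sketch assuming a table `Emp` with the columns named in the post:

```sql
-- Sketch for SQL Server 2005+, assuming Emp(EmpId, EmpName, DeptId,
-- DeptName, Sal). STUFF(..., 1, 1, '') strips the leading comma.
SELECT e.EmpId, e.EmpName, e.DeptId, e.DeptName,
       STUFF((SELECT ',' + CAST(s.Sal AS varchar(20))
              FROM Emp AS s
              WHERE s.EmpId   = e.EmpId
                AND s.EmpName = e.EmpName
                AND s.DeptId  = e.DeptId
                AND s.DeptName = e.DeptName
              FOR XML PATH('')), 1, 1, '') AS Sals
FROM Emp AS e
GROUP BY e.EmpId, e.EmpName, e.DeptId, e.DeptName;
```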
Hello! I have a very simply structured table: id | data, where "data" is a varchar(100). This table would contain a lot of rows (~500,000,000), and I want to select all "id" where data = @data. Is it realistic that SQL Server could serve this request on a normal web server within 1 or 2 seconds? Thanks!
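The row count matters much less than whether the predicate column is indexed: with an index on `data`, the lookup is a B-tree seek whose cost grows only logarithmically with table size; without one, it is a scan of all 500 million rows. A minimal sketch, where `MyTable` is a placeholder name:

```sql
-- A nonclustered index on data turns "WHERE data = @data" into a
-- seek; if id is the clustered key it is carried in the index, so
-- the query below is covered (MyTable is a hypothetical name).
CREATE NONCLUSTERED INDEX IX_MyTable_data ON MyTable (data);

SELECT id FROM MyTable WHERE data = @data;
```

With that index in place, sub-second response is realistic; the main costs become index size on disk and slower inserts.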
I heard that SQL Server 7.0 has problems when the database reaches 50-100 GB, in areas such as backup, transaction logging, and database administration, and that by 100 GB parallel queries are also affected.
Is this true? Where can I get information on this?
ENVIRONMENT: I have SQL Server 6.5 running on a dedicated NT Server. The NT configuration includes dual Pentium 200 MHz processors, 256 MB RAM and a RAID system. The database size is 1 GB, with actual data about 500 MB.
PROBLEM: I have an application which uses lots of joins to get the results. My select query is running too slow even when I run it on the server.
I have updated the statistics and rebuilt all the indexes on the tables used by the query.
Any suggestions on using SQL Trace and tuning the server/database are welcome.
We have got a server with a couple of 731 MHz Pentium III processors + 1 GB memory and about 25-30 users connecting to it. The Processor Queue Length has always been high, with processor usage in the 70%-99% range. Is there any fix for this problem, like increasing buffer sizes and so forth, or should I add more processor power to that server?
We are facing a performance-related problem using SQL Server 2000.
We have one standalone P4 PC (128 MB RAM) and around 30 users accessing SQL Server through the network.
We have written our application in VB6 with SQL Server 2000 as the back end. We have used stored procedures wherever necessary. We have set the cursor location to server-side.
When we start with 5 users it is not slow, but when all the users, say 30, come in, it slows down.
Can someone help us find out what the problem is?
We've got a clustered environment in which we run several SQL Servers. However, one of the platforms, Great Plains, hangs for no reason. Is there a tool out there that we can use to determine what is causing the load?
We are currently exploring the possibility of migrating from MySQL to Microsoft SQL Server. The reason for the move is to make use of the extra functionality.
We have migrated a website which was running on MySQL to SQL Server using the freeTDS library. We have noticed a considerable increase in the time it takes to execute the pages.
One particularly heavy page has gone from 0.16566 sec execution time on MySQL to 1.52923 sec on SQL Server: nearly a 10-fold increase.
I understand that MySQL can be faster than SQL Server, but I would not have expected such a large change.
We are looking to have about 100 sites working with this configuration, and these performance decreases are concerning.
Reading the freeTDS FAQ http://www.freetds.org/faq.html#pending describes a strict one-query-at-a-time limitation and a further limitation in php creating new connections.
Is anyone using PHP and SQL Server together in a demanding environment? Do they work together well?
Is it just a case that we need to optimize the new database?
Is freeTDS the only/right solution for the job?
Any help would be greatly appreciated,
Karl
We are running a Red Hat 8 system with Apache 2, PHP 4.3.9 and FreeTDS 0.62.4. I connect to MSSQL 2000 on a Windows 2000 server.
I'm still new to SQL Server, so some of my lingo/verbiage may be incorrect; please bear with me.
The company I work for relies strictly on ASP and SQL Server for 85% of its daily operations. We have some Access projects and some VB projects as well, but for the majority it's ASP and SQL Server.
Previously we had 2 T1 lines with something like 3 MB apiece and a handful of Dell servers. Our main server is also a Dell, running Windows Server 2003, and is hosted through a reputable company here in town. They have a host of fiber lines running all over, so I know we're getting good throughput. We've actually just upgraded to a DS3, but we're still working out the kinks with that. Anyway, I just want to eliminate that up front - we have great connection speeds.
The problem lies, I believe, in our database design. The company supposedly had a DBA come in and help set up the design some 3 or 4 years ago; however, even with my limited knowledge, I feel like something is just not working right.
Our main table is "Invoices", which is obviously all of our invoices, ever. This table has an identity field "JobID" which is also the clustered index. We have other indexes as well, but it appears they're just scattered about. The table has probably 30-40 fields per row and ONLY 740,000 rows. Tiny in comparison to what I'm told SQL Server can handle.
However, our performance is embarrassing. We've just landed a new client who's going to be bringing us big business, and they're already complaining about the speed of their website. I am just trying to figure out ways to speed things up. SQL is on a dedicated machine, I believe with dual Xeon processors and a couple gigs of RAM, so that should be OK. The Invoices table I spoke of is constantly accessed by all kinds of operations, as it's the heart of what we do. We also have other tables which are joined on this table to make up the reporting we do for clients.
So I guess my question is this: should the clustered index be the identity field, and is that causing us problems? We use this field a lot for accessing a single invoice at a time, and from what I understand this makes it a good clustered index, because the index IS the JobID we're looking for. But when it comes time to do reporting for a client, we're not looking at this field. We just pull the records for that client's number. And we only have 1400 clients at this point. So if we were to make the "ClientID" field the clustered index, it would be much faster to zero in on the group of invoices we wanted, because the ClientID is ALWAYS included in our queries.
But because a "DBA" came in to design this setup, everyone is afraid to change it. I guess it's hard to explain without people sitting here going through the code and looking at the structures of all our tables - but I guess what I need is a guide to easily increasing performance on SQL Server, the proper use of clustered and non-clustered indexes, and how to mix and match those.
Sorry I wrote a book. Ideas? This place has always helped me before, so thanks in advance!
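For what it's worth, re-clustering on ClientID can be scripted so it's easy to test and reverse. A sketch, assuming JobID is currently the clustered primary key; the constraint and index names are placeholders, and since ClientID isn't unique, clustering on (ClientID, JobID) keeps each key unique:

```sql
-- Test on a restored copy first: rebuilding the clustered index on
-- 740,000 wide rows rewrites the entire table.
ALTER TABLE Invoices DROP CONSTRAINT PK_Invoices;  -- placeholder name

-- Cluster on ClientID so a client's invoices are physically adjacent;
-- JobID as a second key column keeps the clustering key unique.
CREATE CLUSTERED INDEX IX_Invoices_ClientID
    ON Invoices (ClientID, JobID);

-- Keep single-invoice lookups by JobID fast with a nonclustered PK:
ALTER TABLE Invoices ADD CONSTRAINT PK_Invoices
    PRIMARY KEY NONCLUSTERED (JobID);
```

Comparing STATISTICS IO for a typical client report before and after on the test copy would show whether the change actually pays off.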
Very recently my SQL Server started performing very poorly. Nothing has been installed or loaded on it that would cause the poor response. Whatever I do, from running a query to opening Windows Explorer, everything is very slow, and queries that used to take a few seconds to run now very often time out.
The CPU usage and memory usage do not indicate anything specific. Is there some way I can pinpoint specifically what database, query, or job is causing the problem?
Please can someone help me; this is VERY urgent. Our production is suffering hugely, and a reload is not an option, as I can't get a backup of the databases because they also time out.
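A quick first step in pinpointing the culprit is to look for blocking and inspect the heaviest sessions, using tools that exist in SQL Server 2000 and later:

```sql
-- List active sessions; the BlkBy column shows which SPID is
-- blocking which, and CPUTime/DiskIO flag the heaviest sessions.
EXEC sp_who2;

-- For a suspicious SPID, see the statement it last submitted
-- (replace 53 with an actual SPID from the sp_who2 output):
DBCC INPUTBUFFER(53);
```

If sp_who2 shows long blocking chains rooted at one SPID, killing or fixing that session often restores responsiveness immediately.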
Hi, I am new to database administration. Actually, I really have no experience whatsoever. My boss has asked me to do some tests on our SQL Server to find bottlenecks or whatever is causing our server to respond so slowly.
Any help would be, ugh, helpful.
I am looking for tools or information that will help me troubleshoot problems with MS SQL Server 2000. I have tried SQL Profiler, but I found myself lost, not knowing what to look for.