Question Regarding Slow Performance And PAGEIOLATCH_SH Waits On SQL 2005
Dec 7, 2007
Hi,
I wondered if anyone can help with the following problem that I'm
experiencing. I'll try to provide as much info as possible, and any
suggestions would be appreciated.
I have just started at an organisation and there seems to be slow
performance, possibly on the SAN, on a 64-bit Itanium dual-core
machine. 4 CPUs are presented to SQL Server, and it has 16 GB of RAM.
I'll start with the configuration of the SAN.
After speaking to the SAN guy: rather than carve the SAN up into
dedicated areas for logs, data, etc., they have gone for the approach
of spreading a vdisk across as many spindles as possible (all 145 of
them). So the area that is presented to SQL Server, according to the
SAN guys, is a VRAID 5 stripe made up of all 145 disks, which are
all 72 GB fibre-channel disks.
This storage is not just made available to SQL Server but also to
other apps that need storage. Having read the manufacturer's best
practice on setting this up, there is a valid argument for doing this.
The bandwidth from the SAN is 2 Gb fibre, with each computer that uses
the SAN having 2 Gb fibre cards.
Clearly, that could act as a bottleneck, but there's nothing that
can be done about it according to the SAN guy.
Needless to say, any changes on the SAN are pretty much out of the
question as far as he's concerned, but I think performance isn't what
it should be for the type of box they have and the SAN it's attached
to.
The second thing I'll explain is the setup of the database in question.
Whoever set it up split the database into 16 files across 4
filegroups, so the table I'm inserting into is in one filegroup
split over 4 files, and the table I'm selecting from is in another
filegroup made up of another 4 files. These all sit on the same
physical volume: the SAN LUN with 145 spindles.
Anyway, when I run a select from a sales table with various GROUP BYs
and insert the results into a blank table with no indexes, it can take
over 2 hours for 200k rows, which I find very slow.
When I look at the sysprocesses table I am getting various waits, as
follows: the process seems to alternate between PAGEIOLATCH and
SOS_SCHEDULER_YIELD a few times per second.
Running the following to get IO stalls gives the following:
SELECT * FROM sys.dm_io_virtual_file_stats(6, 7)
SELECT * FROM sys.dm_io_virtual_file_stats(6, 8)
SELECT * FROM sys.dm_io_virtual_file_stats(6, 9)
SELECT * FROM sys.dm_io_virtual_file_stats(6, 10)
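To turn those raw numbers into something comparable, here is a hedged sketch that averages the stall per I/O for every file of that database (database_id 6 is taken from the queries above; passing NULL returns all files):

SELECT database_id, file_id,
       io_stall_read_ms / NULLIF(num_of_reads, 0) AS avg_read_stall_ms,   -- average stall per read
       io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_stall_ms -- average stall per write
FROM sys.dm_io_virtual_file_stats(6, NULL)

As a rough rule of thumb, sustained averages beyond 10-20 ms per read are usually taken as a sign the I/O path is struggling.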
It worries me that when the process is on the PAGEIOLATCH the wait
can be over 1000 ms. Is it normal for the wait to be this long, and
what would be the best way to prove, one way or another, whether the
configuration of the SAN is causing this kind of performance?
An application times out with tons of SQL Server locks, but there is no trace of any errors in SQL Server's logs. I checked all the indexes and reindexed, but the issue persists. CPU is at 100% at that time and users are unable to do anything. Any ideas on how to fix it?
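When the CPU is pegged like this, one hedged starting point (a sketch, SQL 2005 or later) is to pull the heaviest statements out of the plan cache:

SELECT TOP 10
       qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds, -- average CPU per execution
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC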
Hi, we have shifted one of our data processing jobs to a new instance of SQL 2005. For this a new DB was created in SQL 2005; it is not an upgrade of the SQL 2000 DB.
The data processing application is a VB6 application that runs a batch process to insert about 4.5 million records every day into this SQL 2005 database. I am using an ADO (ver 2.6) Connection object to execute T-SQL insert statements to insert the 4.5 million records.
This same application used to insert the data into the SQL 2000 database and took about 2.5 hours to do so. After moving to SQL 2005 the performance has really deteriorated, and it now takes more than 6 hours.
Is there any configuration change that I need to make in SQL 2005? Please help.
Note: There is also an instance of SQL 2000 running on the same server.
TIA, Tawfiq
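One thing worth ruling out on a freshly created database (a hedged sketch; the database and logical file names are hypothetical): small default autogrowth increments can stall a nightly 4.5-million-row load, so pre-sizing the files or enlarging the growth increment is a cheap test.

-- Run in the new database: shows each file's size and growth settings
EXEC sp_helpfile

-- Enlarge the growth increments (hypothetical names)
ALTER DATABASE MyNewDb MODIFY FILE (NAME = MyNewDb_data, FILEGROWTH = 512MB)
ALTER DATABASE MyNewDb MODIFY FILE (NAME = MyNewDb_log,  FILEGROWTH = 512MB)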
I need to delete data which is older than a year from a table which contains 54 million rows (yes, I know, ridiculous isn't it). I am using the query below:
DELETE tblLineCard WHERE CreateDT <= dateadd(yy,-1,getdate())
It worked fine for the smaller tables, but shortly after starting the above query it starts blocking itself (i.e. it shows SPID 55 (Blocked by 55) under Current Activity in EM), goes into sleeping status, and has a wait type of PAGEIOLATCH_SH. There are NO other queries running. The transaction log keeps growing and Query Analyzer shows the "Executing Query Batch" message.
Any ideas? I have killed the process and restarted, but it's the same scenario each time. Your help would be wonderful. Yours sincerely, SQLJunior...
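A common pattern for keeping the log growth and the blocking under control (a sketch only; the batch size is arbitrary, and SET ROWCOUNT is used because DELETE TOP is not available on SQL Server 2000):

SET ROWCOUNT 10000  -- cap each DELETE at 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE tblLineCard WHERE CreateDT <= DATEADD(yy, -1, GETDATE())
    IF @@ROWCOUNT = 0 BREAK  -- nothing left to delete
END
SET ROWCOUNT 0  -- reset to unlimited

With frequent log backups (or simple recovery) between batches, the log space can be reused instead of growing without bound.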
I just upgraded my SQL 2000 server to SQL 2005. I forked out all that money, and now it takes 4-5 seconds for a webpage to load. You can see for yourself; it's pathetic. When I ran SQL 2000, I was getting instant results on any webpage. I can't find any tool to optimize the tables or databases, and while in use SQL Server eats 100% CPU and 500+ MB of RAM. I can't have this. Can anyone give me some tips as to why SQL 2005 is so slow?
Would be interested in any advice or comment on the issue we are experiencing with SQL 2005.
In order to test some DTS package migrations, we simply created a DTS package on SQL 2000 (using Export) to copy all database objects from one database to another.
Then we restored the source database on SQL 2005 server, migrated the DTS package, and tried to execute it.
The package migrated 'fine', ran part way, and then failed. Since then we have been unable to connect to the server (Database Engine, or Integration Services) using Windows Authentication. (Error 18456, Sev 16, State 11).
We can connect to the Database Engine using SQL Authentication (but not to Integration Services which only allows Windows Authentication).
We have been able to replicate this consistently - every annoying time we've tried it!
(Reinstalling SQL Server does 'fix' the issue ... until you try a similar package again).
I've migrated a DB from SQL 2000 to SQL 2005 and everything works, but I have a "strange" problem: during the night the transactions become very slow (with no specific load), and if I restart the service, or change Degree of Parallelism or Cost Threshold for Parallelism to another value, the problem goes away.
In the server logs (W2003) and the SQL logs I can't see any warning or error...
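For reference, both settings can be changed on the fly with sp_configure (a sketch; the values shown are purely illustrative, not recommendations):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4        -- illustrative value
EXEC sp_configure 'cost threshold for parallelism', 25  -- illustrative value
RECONFIGURE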
I'm querying a small SQL2005 database and finding that the query can sometimes complete in under a second and then 5 minutes later the same query can take 15 minutes to complete.
The query I'm running is very simple:
SELECT TOP 26 * FROM vSearchListOpportunityItem WHERE OpIt_OpportunityId = 2495 ORDER BY Prod_Name, OpIt_OpportunityItemId
The view it pulls data from contains only 1890 rows, and it in turn pulls data from 3 tables with 821, 2560, and 1957 rows. In other words, it's small. I have noticed that if I try to open the smallest of these tables during a 'go slow' period, it also takes around 15 minutes to return the data.
The database was originally on SQL 2000. It is the only database on this powerful quad-core server. SQL Server's CPU usage never goes above 40%, and there is always free memory. No sign of locks.
I can't figure out why such a small database is going so slow with such a simple query. Any ideas?
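During a go-slow spell, a quick hedged check (a sketch using the SQL 2005 DMVs) is to see what the session is actually waiting on - I/O, a lock, or something else:

SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50  -- skip system sessions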
On one PC I installed VS2008 alongside VS2005 and SQL 2005 (before I installed VS2008, everything was fine).
The other is a freshly installed XP machine with only VS2008, SQL 2005, and Office 2007; the latest service packs are installed.
On both machines, when I open a table in SQL 2005 with the 'Open Table' command, the display of the table is very slow. Every row is updated field by field, the grid does not show, and the row and column headers are not 'grey' as normal.
Scrolling is impossible, as the slow screen updates start all over again.
Even a small table with 10 rows and 15 fields behaves this slowly. It's not workable.
I have an application written in VB6 that creates an ADO connection (using the native SQL 2005 client) from the client to SQL Server 2005 on Server 2003 configured as a stand-alone server. The application has worked great on XP for a number of years.
Now I am attempting to deploy on Vista, and using the same code the connection speed CRAWLS; it's at least 10 times slower. It eventually works, but the selects, and stepping through the resulting recordset, proceed at a snail's pace.
What am I missing? It has to be some sort of configuration problem somewhere.
I am running into problems while running a large procedure, and I think it may have something to do with a PAGEIOLATCH_SH wait problem. My server, whose sole purpose is to run this one procedure, is doing plenty of disk I/O, and the CPU's bouncing around, so I assume it's working. But when I look at its process info, it seems to be sleeping a lot of the time on PAGEIOLATCH_SH. No other users are in the DB, so I'm quite confused. I can't find much info on this anywhere, so any insight would be very much appreciated.
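To quantify that impression, here is a hedged sketch against the cumulative wait statistics (totals since the last instance restart):

SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'  -- buffer-page I/O latch waits
ORDER BY wait_time_ms DESC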
We have an installed site with two SQL 2005 servers (Workgroup, SP2). At the publisher we recently (yesterday) bulk inserted about 2 million rows into a table from a third server that's going to be part of the replicating group. However, since importing these records (using 'Import Data' to get the data in and then running a no-change update to make replication aware of them) the performance has become dreadful. The hard drive is swamped with disk writes.
The first symptom that something was wrong was when replication started timing out, even when I turned off all subscriptions except for one - and that one didn't have any of its tables modified by the import. Even putting the agent on a slow profile with extended query timeouts doesn't help.
Any suggestions? Many thanks.
Jim -- http://www.ursaMinorBeta.co.uk
"When one of the local cats, an especially fluffy creature with a vast tail, rushed down the street towards me the other day, mewing delightedly, I have to admit it was probably shouting 'HAZ CHEZBURGR?'" - Bella, ucsm
I need urgent help with something: I've developed and deployed an ASP.NET web site (the data layer works with SQL Server), but after a few minutes of working with some users, the performance slows down and the site stops.
I need to transfer a database from one server to another. I'm using the DTS utility because the servers have different sort orders. Our database size is about 5 GB, which includes about 2500 tables. Using DTS is taking many hours to transfer all objects and data. Is there a better/faster way to do this? Any help would be appreciated. Thank you.
Help! I am new to SQL (6.5) and am designing a small database on a small LAN. Everything seems to work fine until more than one user gets into the database. It was originally an Access database and we upsized some of the information to SQL. I have not upsized all the tables yet, and am not sure if that is causing a problem. Since we are new to SQL we are trying to make all the changes in Access and then upsize everything. If this is the problem, though, I will not be hesitant to upsize.
Since SP2, a number of applications, specifically web ADO access, have slowed very badly, and I seem to be seeing high CPU usage too. Internal jobs such as backups and table defrags are not affected. I'm on a 4-way box with ample hardware; stats, indexes etc. are fine; client access is just slow. I suspect MDAC, but I can't pin anything down and it's driving me crazy!!!!! I've even built a new box with a fresh install, but to no avail. Help???
I'm running the following statements. They have been running for 48 hours and counting... In Performance Monitor, I can see the system is steadily 100% disc-bound. Any idea how to get status or an ETA on these? Any idea how I can speed up such actions? This seems unusually slow. There are no errors in the error log. Do other DBMS systems (such as Oracle) handle such scenarios more quickly or with better status information?
INSERT INTO Domains (Domain)
SELECT DISTINCT Domain FROM Stages
WHERE Domain NOT IN (SELECT Domain FROM Domains)

UPDATE Stages
SET DomainID = (SELECT Domains.[ID] FROM Domains WHERE Domains.Domain = Stages.Domain)
Stages is 173 million records; Domains is 2.4 million records.
This is running on pretty decent hardware: Windows 2000 SP5, SQL Server 2000 Enterprise Edition SP3a, dual 3.06 GHz Xeons with HyperThreading enabled (4 virtual CPUs), 4 GB RAM (the /3GB switch is enabled to give SQL Server 3 GB), a 70 GB SCSI boot drive, and data/transaction log on a RAID 5, 30-spindle, fibre-channel SAN.
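A commonly suggested rewrite for this scale (a hedged sketch, assuming the schema above and an index on Domains.Domain): the NOT IN subquery and the correlated-subquery UPDATE can both be expressed as joins, which the optimizer usually handles better on a 173-million-row table.

-- Insert only domains not already present (anti-join instead of NOT IN)
INSERT INTO Domains (Domain)
SELECT DISTINCT s.Domain
FROM Stages s
WHERE NOT EXISTS (SELECT 1 FROM Domains d WHERE d.Domain = s.Domain)

-- Set DomainID with a join instead of a per-row correlated subquery
UPDATE s
SET DomainID = d.[ID]
FROM Stages s
INNER JOIN Domains d ON d.Domain = s.Domain

Batching the UPDATE (e.g. by ranges of the Stages key) would also keep the transaction log from ballooning.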
Hi - I have MS SBA 2007 beta and Office Outlook 2007 BCM beta. Both use SQL 2005, and both are slow to the point of being frustrating. Outlook 2007 without BCM is fine, and every other application runs fine if no SQL Server apps are present.
I have this running on a 2.8 GHz desktop with 1 GB of memory and on a 1.8 GHz laptop with 1.5 GB of memory. The laptop is far faster.
I've got an INSERT that's selecting data from a linked server and attempting to push 10 million rows into a blank table. Partway through, it fails with:
The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.
There are no other active users. I ran it again and monitored the following DMV to watch the growth of locks for that SPID:
SELECT request_session_id, COUNT(*) AS num_locks -- select *
FROM sys.dm_tran_locks
--WHERE request_session_id = 77
GROUP BY request_session_id
ORDER BY COUNT(*) DESC
The number of locks started small and held for a while at around 4-7 locks, but about 5 minutes in, the number of locks held by that SPID grew dramatically to more than 8 million before erroring again with the same message. Researching, I can't figure out why it's not escalating from row locks to a table lock at the appropriate threshold. The threshold was set to 0 at first (Server Properties > Advanced > Parallelism > Locks); I set it to 5000, and it still didn't seem to work. Rewriting the INSERT to include a WITH (TABLOCK) hint allows it to finish successfully in testing. My problem is that it's coming out of an ETL tool whose source code I can't edit. I need to figure out how to force it to escalate to locking the entire table via table- or server-level settings.
A colleague suggested that installing service packs may take care of it (the client is running SQL Server 2008 R2 RTM), but I haven't found anything online to support that theory.
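For reference, SQL Server 2008 and later expose a per-table escalation setting (a hedged sketch; the table name is hypothetical). Note that TABLE is already the default mode, and escalation can still be skipped when the lock manager is short on memory, so this may confirm the configuration rather than change behaviour:

-- Inspect the current escalation mode
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'TargetTable'  -- hypothetical table name

-- Pin escalation at the table level
ALTER TABLE dbo.TargetTable SET (LOCK_ESCALATION = TABLE)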
I have the same database on two different servers. One for production and one for testing. A view that I use runs in less than 2 seconds on the test system, but takes almost 2 minutes on the production server.
What I have noticed is that on the test server the view will use an index, while the production server ends up scanning a whole table. All indexes are the same on both machines for the tables involved, and I have updated the statistics. I even went through the process of creating a new table, with its indexes, for the table that is being scanned. Both machines have had Service Pack 1 installed.
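Two hedged things worth trying when the same view picks different plans on identical schemas (a sketch; the table name is hypothetical): rebuild the statistics on the scanned table with a full scan rather than the default sample, then flush the cached plans that reference it.

UPDATE STATISTICS dbo.ScannedTable WITH FULLSCAN  -- hypothetical table name
EXEC sp_recompile 'dbo.ScannedTable'              -- forces plans touching the table to recompile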
We are facing a performance-related problem using SQL Server 2000.
We have one stand-alone P4 PC (128 MB RAM) as the server, and around 30 users access SQL Server through the network.
We have written our application in VB6 with SQL Server 2000 as the backend. We have used stored procedures wherever necessary. We have used a server-side cursor location.
When we start with 5 users it is not slow; when all the users, say 30, come in, it slows down.
Can someone help find out what the problem is?
I'm still new to SQL Server, so some of my lingo/verbiage may be incorrect; please bear with me.
The company I work for relies strictly on ASP and SQL Server for 85% of its daily operations. We have some Access projects and some VB projects as well, but for the majority it's ASP and SQL Server.
Previously we had 2 T1 lines at something like 3 Mb apiece and a handful of Dell servers. Our main server is also a Dell, running Windows Server 2003, and is hosted through a reputable company here in town. They have a host of fiber lines running all over, so I know we're getting good throughput. We've actually just upgraded to a DS3, but we're still working out the kinks with that. Anyway, I just want to eliminate that up front - we have great connection speeds.
The problem lies, I believe, in our database design. The company supposedly had a DBA come in and help set up the design some 3 or 4 years ago; however, even with my limited knowledge I feel like something is just not working right.
Our main table is "Invoices", which is obviously all of our invoices, ever. This table has an identity field "JobID" which is also the clustered index. We have other indexes as well, but it appears they're just scattered about. The table has probably 30-40 fields per row and ONLY 740,000 rows - tiny in comparison to what I'm told SQL Server can handle.
However, our performance is embarrassing. We've just landed a new client who's going to be bringing us big business, and they're already complaining about the speed of their website. I am just trying to figure out ways to speed things up. SQL is on a dedicated machine, I believe with dual Xeon processors and a couple of gigs of RAM, so that should be OK. The Invoices table I spoke of is constantly accessed by all kinds of operations, as it's the heart of what we do. We also have other tables which are joined to this table to make up the reporting we do for clients.
So I guess my question is this: should the clustered index be the identity field, and is that causing us problems? We use this field a lot to access a single invoice at a time, and from what I understand that makes it a good clustered index, because the index IS the JobID we're looking for. But when it comes time to do reporting for a client, we're not looking at this field; we just pull the records for that client's number. And we only have 1400 clients at this point. So if we were to make the "ClientID" field the clustered index, it would be much faster to zero in on the group of invoices we want, because the ClientID is ALWAYS included in our queries.
But because a "DBA" came in to design this setup, everyone is afraid to change it. I guess it's hard to explain without people sitting here going through the code and looking at the structures of all our tables, but what I need is a guide to easily increasing performance on SQL Server, and to the proper use of clustered and non-clustered indexes and how to mix and match them.
Sorry I wrote a book. Ideas? This place has always helped me before, so thanks in advance!
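One middle ground that is often suggested for this pattern (a hedged sketch; every column name other than JobID and ClientID is hypothetical): keep the identity clustered index for the single-invoice lookups and add a nonclustered index on ClientID for the reporting queries.

-- JobID stays the clustered key for single-invoice lookups;
-- this nonclustered index serves the per-client reporting.
CREATE NONCLUSTERED INDEX IX_Invoices_ClientID
ON dbo.Invoices (ClientID)
INCLUDE (InvoiceDate, InvoiceTotal)  -- hypothetical reporting columns

INCLUDE requires SQL Server 2005 or later; on SQL 2000 the extra columns would have to go into the index key instead. Re-clustering on ClientID is also defensible, but it is a far more disruptive change for a table this central.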
Hi, I'm executing a stored procedure on my local LAN which executes another one in a loop, and I update a table. The number of loops is about 6300. This operation takes about 25 seconds on my local LAN. Then I try to execute it over a VPN which has an upload speed of 256 kbps. I open Query Analyzer, connect to the remote server (which is much faster than mine), and just write exec mystoredprocname to execute the procedure. The performance is very, very slow: in 7 minutes, 180 loops are completed out of 6300. I really cannot understand this. What is the reason for such slow performance? My ADSL modem displays no activity while the procedure is executed. I use the PRINT statement in MSSQL to display the progress of the operation; I tried commenting it out, but with no difference. I also use SET NOCOUNT ON so as not to return the update counts.
Can someone explain the cause of this? Are there any tricks to improve performance when a slow connection, like ADSL with a static IP, is used? It seems that something wrong is happening here.
I have a bunch of packages that take views and create tables from them. Some of the views are rather complex, but the packages themselves are very simple: drop and re-create a table using the data from a view on the same server. We create a new DB for each year, and this year we've upgraded to a new server with SQL 2005, so our DTS packages on the SQL 2000 server had to be recreated in SSIS on the new server. No problem; as I said, the packages are really simple. But the packages I create in SSIS now take an extremely long time to execute, and I cannot figure out why.
For instance, one DTS package would take approximately 5 minutes to run when the view contained hundreds of thousands of rows and the underlying tables contained millions. But now, even with MUCH smaller tables (since it's the beginning of the year - a new DB), the SSIS package I created on the new server literally takes over an hour. The view the SSIS package uses to create the table takes only about 15 seconds to execute in Management Studio (only about 16,000 rows). How can this possibly take so long?
The new server is virtually the same hardware-wise: 4 x 2400 MHz, 4 GB RAM, Win2k3 Server.
I have a stored procedure that has some problems with slow performance. The procedure has 2 parameters:
@firstname varchar(100)
@lastname varchar(100)
These parameters can have values like 'a%', that is, wildcard searches. The strange thing is that if one of the parameters has the value '%' and the other one 'a%', the performance is very bad. If I substitute the variables with exactly the same values hardcoded in the WHERE clause, the query is very fast. If both variables have some characters preceding the percent sign, the performance is better.

SELECT DISTINCT u.user_id, u.username, u.status_id
FROM statusnames sn, statuses s, users u, users_persons up, persons p, users_roles ur
WHERE p.lastname LIKE @lastname
AND p.firstname LIKE @firstname
AND s.status_id = u.status_id
AND sn.statusname_id = s.statusname_id
AND u.user_id = up.user_id
AND up.person_id = p.person_id

What makes SQL Server behave so differently with variables and hardcoded values?
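With literals the optimizer can estimate how selective each LIKE actually is; with local variables it has to guess, so '%' and 'a%' get the same generic plan. A hedged workaround sketch (the hint is SQL 2005 and later; on SQL 2000 the rough equivalent is CREATE PROCEDURE ... WITH RECOMPILE), reduced here to one table from the query above:

DECLARE @lastname varchar(100)
SET @lastname = 'a%'

SELECT p.person_id
FROM persons p
WHERE p.lastname LIKE @lastname
OPTION (RECOMPILE)  -- the plan is compiled with the actual value of @lastname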
I was transferring more than 100,000 records from a flat file to a SQL table.
It took about 1 hour. Is this the way it is? I used the OLE DB Command.
As the data passes through, I have to insert into several tables: I insert some of the incoming data into one table, then get the key from that table and insert the rest of the data, with the key field from the previous table, into another table.
In this case I felt OLE DB would be best, as we can use a query.
I cannot use an OLE DB Destination, as it has only an error output (I want to insert some of the incoming data and then do a lookup to get the key, but the OLE DB Destination has only an error output).
I cannot use a SQL Server Destination, as the database is SQL Server 2000; it doesn't let me.
How can I increase the performance? Please let me know.
I have successfully upgraded from Windows 2000 / SQL Server 2000 to Windows 2003 / SQL Server 2005. Here is the process:
- Install Windows 2003 / SQL Server 2005 on a new VM instance
- Detach the SQL Server 2000 database -> copy the data to the new server and attach it to SQL Server 2005
- Run scripts to restore user logins
- Change the compatibility level to SQL Server 2005 (90)
Now, opening the 3rd-party application and connecting to the new database server (via IP address), the application connects OK and accesses data without any problem. However, why does the database take more than 5 minutes to connect/open, instead of the 30 seconds it took via the old server?
We are migrating our database(s) from Oracle to SQL. In Oracle we were able to issue a SELECT statement and see all of the locks (blocking and non-blocking) currently in the system. The query also included the process ID of the process we needed to kill in order to get rid of the lock.
We now need to create the same type of query for Microsoft SQL Server 2012. I have seen postings on different sites saying that this info can be obtained using sp_who2 or the SQL Server Management Studio Activity Monitor's Processes tab, but we are looking for a SELECT statement that will give us similar information.
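A hedged starting point built on the standard DMVs (a sketch; the column choice is illustrative, not exhaustive): one query for every lock currently granted or waiting, and one for blocked/blocking session pairs with the statement text. KILL <session_id> then plays the role of the Oracle kill step.

-- All current lock requests, blocking or not
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks

-- Blocked sessions, who blocks them, and what they are running
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, st.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE r.blocking_session_id <> 0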
When working with databases containing myriad huge tables, I am very much tempted to create categorized views on those tables in order to simplify and facilitate data query programming. Some developers I talk to say such views generally slow down query performance. Is this true? Thanks.
I have a table with about 20,000 records, so to minimise performance issues I try to retrieve only the top 100 records. The dataset retrieved can be paged and sorted. Below is my code, but it's taking quite a while even to load just the 100 records. Any suggestions?

Private Sub BindGrid()
    Dim connectionString As String = "MyConnectionString"
    Dim dbConnection As System.Data.IDbConnection = New System.Data.SqlClient.SqlConnection(connectionString)
    Dim queryString As String = "SELECT TOP 100 [Person].* FROM [Person] ORDER BY " & viewstate("sortField").ToString() & " " & viewstate("sortDirection").ToString()
    Dim dbCommand As System.Data.IDbCommand = New System.Data.SqlClient.SqlCommand
    dbCommand.CommandText = queryString
    dbCommand.Connection = dbConnection
    Dim dataAdapter As System.Data.IDbDataAdapter = New System.Data.SqlClient.SqlDataAdapter
    dataAdapter.SelectCommand = dbCommand
    Dim dataSet As System.Data.DataSet = New System.Data.DataSet
    dataAdapter.Fill(dataSet)
    Try
        DataGrid1.DataSource = dataSet
        DataGrid1.DataBind()
    Catch e As Exception
        DataGrid1.CurrentPageIndex = 0
    End Try
End Sub
DECLARE @num int
SET @num = 0
DECLARE @tableVariable table(ColA int, ColB decimal(18,4))
WHILE @num < 1000
BEGIN
    INSERT INTO @tableVariable VALUES (2, 10.56)
    SET @num = @num + 1
END

When this code is run in SQL Server 2000 Query Analyzer it commits in less than 1 second. The same code run as a SQL Server Agent job takes 16 seconds.
Similar behaviour appears if the INSERT statement is replaced with an UPDATE.
A SELECT statement runs equally fast in both contexts.
Has anybody got an idea what might be the reason for the slow execution of INSERT and UPDATE in a job?
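One cheap thing to test (a hedged sketch, not a diagnosis): under autocommit, each of the 1000 INSERTs is its own transaction, so wrapping the loop in a single explicit transaction removes that per-statement commit overhead and makes it easy to see whether the Agent-job context is paying it disproportionately.

DECLARE @num int
SET @num = 0
DECLARE @tableVariable table(ColA int, ColB decimal(18,4))
BEGIN TRAN  -- one commit for all 1000 inserts instead of 1000 autocommits
WHILE @num < 1000
BEGIN
    INSERT INTO @tableVariable VALUES (2, 10.56)
    SET @num = @num + 1
END
COMMIT TRAN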