Drives On Performance Monitor
Jan 20, 2004
Can someone tell me if it is possible to see the drives on the server using Performance Monitor? If so, where are they hiding? I struggled the whole day!
Hi, I'm looking for a way to check the free space left on the hard drives and then, if needed, send an alert when we need to free up some space. I played around with Performance Monitor and realized I could do it that way, but I think you would have to leave Performance Monitor running all the time, and I'm not sure I want to do that. I also read about the xp_fixeddrives proc that displays how much free space is available, but I don't know what to do from there. Does anyone have any recommendations for the best way to do this?
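One approach, sketched below, loads the xp_fixeddrives output into a temporary table and raises an error when free space drops below a threshold; the 1024 MB threshold is an arbitrary example, and the actual alerting mechanism (a SQL Agent job plus an alert or operator notification on the raised error, for instance) is left out:

-- Capture free megabytes per fixed drive
CREATE TABLE #FreeSpace (Drive char(1), MBFree int);

INSERT INTO #FreeSpace (Drive, MBFree)
EXEC master.dbo.xp_fixeddrives;

-- Raise an error that a SQL Agent alert or operator could pick up
IF EXISTS (SELECT 1 FROM #FreeSpace WHERE MBFree < 1024)
    RAISERROR ('Low disk space on at least one fixed drive.', 16, 1);

SELECT Drive, MBFree FROM #FreeSpace ORDER BY Drive;

DROP TABLE #FreeSpace;

Scheduled from SQL Agent, this avoids leaving Performance Monitor running all the time.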
If I'm on a remote machine, meaning a computer not in the WSFC cluster, and I open SSMS 2014, point it to a SQL Instance, and open activity monitor:
1. I get all the panes and charts except % Processor Time.
2. Then, if I authenticate to the cluster's domain by mapping a drive with valid domain credentials, I'm free to add performance counters in Perfmon, but SQL Activity Monitor shuts down with: "The Activity Monitor is unable to execute queries against server SQL-V01INSTANCE1. Activity monitor for this instance will be placed into a paused state. Use the context menu in the overview pane to resume the activity monitor.
Additional information: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) (Mscorlib)"
3. Of course, the Activity monitor can't be resumed via the context menu. Removing counters and closing the perfmon do not work. I dropped the mapped drive and rebooted the machine. That brought back 95% of the information in the Activity monitor.
4. Further experimentation showed that any mapping of drive shares present on the SQL Server to the computer running SSMS cut off functionality of the 'overview' pane in the remote machine's SQL Activity monitor -- the monitor that had been trying to watch the server offering the shares.
Hello,
I have a table with almost 2 million records. There are two columns (Col1 int, Col2 int); the combination of them is unique. When I run a simple select statement like (select * from tblTableName where Col1 = 5) and look at the execution plan, the cost is extremely high, even with PK(col1+col2). I tried to put a clustered index on those columns, but the execution plan still shows "table scan" instead of "index seek". How can I speed this up? I use that table mostly for search.
Thank you for help.
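If the primary key really is the clustered index and its leading column is Col1, a filter on Col1 alone should produce a clustered index seek, so a table scan suggests the index is not actually in place (or the plan being looked at is stale). A minimal sketch of the shape that should seek; the table and column names are taken from the question, the index name is made up, and the statement assumes no clustered index exists yet:

-- Clustered index with Col1 as the leading key column
CREATE UNIQUE CLUSTERED INDEX IX_tblTableName_Col1_Col2
    ON tblTableName (Col1, Col2);

-- With Col1 leading, this filter can seek instead of scanning
SELECT Col1, Col2
FROM tblTableName
WHERE Col1 = 5;

Selecting only the needed columns (rather than *) also lets narrower nonclustered indexes cover the query.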
Dear friends
We have one problem in our existing system and we are expecting some expert comment on it. We have a core banking system with MS SQL Server as the back end and an IIS server. Our system is always very slow at peak transaction times. We are planning to optimize this with a short-term plan, so please give some suggestions that our DBA team can implement in a short time with SQL Server 2000.
Thanks in Advance
Filson
I look after a database which is part of a third-party CRM product. The users of the product complain of intermittent poor performance; the suspicion is that some more senior users are running their own queries (the product allows users to do this). I've been asked by the development team to try to capture the details of long-running queries. I've looked at the events listed in Profiler and can't see one that would be useful. Ideally I want to know who is running which query that is taking longer than x seconds. Any suggestions?
TIA
Laurence
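On SQL Server 2005 and later, one option that avoids a Profiler trace altogether is to poll the dynamic management views for requests that have been running longer than a threshold; the 10-second cutoff below is just an example:

-- Currently executing user requests running longer than 10 seconds,
-- with the login that issued them and the statement text
SELECT s.login_name,
       s.host_name,
       r.session_id,
       r.start_time,
       r.total_elapsed_time / 1000.0 AS elapsed_seconds,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 10000   -- milliseconds
  AND s.is_user_process = 1;

On SQL Server 2000 the equivalent is a server-side trace with a Duration filter (the same events Profiler shows, minus the GUI overhead).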
I have the following code to add 1260 records to a SQL Server CE database on my device:
for (int nGroupNr = 1; nGroupNr <= 35; nGroupNr++)
{
    myAdapter.Insert(null, (byte)nGroupNr, null, 0, nSubGroupColor, false, false);
    this.pbarCurrent.Value++;

    for (int nArtNr = 1; nArtNr <= 35; nArtNr++)
    {
        myAdapter.Insert(nGroupNr, (byte)nArtNr, null, 0, nSubGroupColor, false, false);
        this.pbarCurrent.Value++;
    }
}
myAdapter is a strongly typed DataAdapter generated by Visual Studio.
My database has just one simple table with 8 fields.
This code takes more than 5 minutes to add 1260 records... is that normal, or am I missing something?
How can I optimise this operation?
Development environment:
Visual Studio 2008
.NET CF 3.5
SQL Server CE 3.5
Devices that were tested (.NET CF 3.5 & SQL Server CE 3.5 are installed):
Windows Mobile 5 emulator
Unitech PA500
Acer N300
;with cte as
(
    select rank() over (partition by username order by guid) as rank
    from MyTable
    where siteurl = 'myurl'
      and VisitedDateTime between '2007.02.05' and '2007.09.30'
      and IsFiltered = 0
    group by username, guid
)
select count(rank) from cte where rank = 2
This query is taking 6 seconds to execute. The 'MyTable' table is properly indexed, and 1 million rows are returned by the common table expression. I want to reduce the execution time of this query to a few milliseconds. Please help me.
shipin
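Since the GROUP BY makes each (username, guid) pair unique, the ranks within a username have no ties, so counting rows where rank = 2 is equivalent to counting usernames with at least two distinct guids. A rewrite along those lines (same filters and column names as the posted query) is sometimes cheaper because it avoids materialising the whole ranked set:

select count(*)
from (
    select username
    from MyTable
    where siteurl = 'myurl'
      and VisitedDateTime between '2007.02.05' and '2007.09.30'
      and IsFiltered = 0
    group by username
    having count(distinct guid) >= 2
) as u

Whether this actually reaches milliseconds depends on the indexes; an index leading on siteurl and VisitedDateTime (with username and guid included) would be the first thing to check.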
I am using the query below to create a key from a combination of columns and a '%' separator, and then insert it into a table. My issue is with the performance of this query.
This query returns around 6000 records and takes 11 seconds to return the result. Is there any way I can optimize this query to improve performance?

Select (Item.ItemCode + '%' + Product.Name + '%' + Quantity.ID) as Key1
from Items, Products, Quality

Please advise.
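No join predicates are visible in the fragment, and the three tables are combined with old comma-style joins, which (if there really is no WHERE clause joining them) produces a cross join and is a common cause of this kind of slowness. Note also that the aliases in the posted query (Item, Product, Quantity) do not match the table names in the FROM clause (Items, Products, Quality). A sketch with explicit joins; the join columns ProductID and QualityID are pure assumptions, since the real schema isn't shown:

select i.ItemCode + '%' + p.Name + '%' + cast(q.ID as varchar(10)) as Key1
from Items as i
join Products as p
    on p.ProductID = i.ProductID     -- assumed join column
join Quality as q
    on q.ID = i.QualityID            -- assumed join column

The CAST is there because, if ID is a numeric column, concatenating it directly with character strings would otherwise fail or force an implicit conversion.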
Hello,
In a database here we have a table that is a list of codes. Many other tables in our database have foreign keys to this table (T_TYPE_CODE). The joined column is an integer that is a clustered index on the table.
One of the developers here says that in one of his queries he has to OUTER JOIN to this table 15 times and that the OUTER JOIN is killing performance. He wants to add a record to T_TYPE_CODE that will represent NULL so that any NULL values in the tables that foreign key to this table will use that ID instead of NULL. In this way he could use all INNER JOINs.
To me this seems like a bad idea - NULL is NULL and creating a value to represent NULL will open a whole can of worms.
My question: Is there a performance hit for using OUTER JOINs against this table, considering that the join is on a single column and is a clustered, unique index?
Also, what problems can we expect to run into if we use a code value like that to represent NULL?
Thanks!
-Tom.
Hello everybody,
We have a web site with about 100 connections per hour.
This site is for consultation only: read only.
We use ASP on IIS 4.0 and, as often as possible, stored procedures with ADO objects on the client side (ASP).
We close every object after it is used.
The site works for about 1 hour and then stops because the inetinfo service is down.
The memory and the CPU on the SQL server machine are quiet at maximum.
'user connections' is set to 0 (so up to 32,767 connections are allowed).
'locks' is set to 0 (so lock memory is allocated dynamically).
The machines are very powerful: two dual-processor PIII 700 MHz boxes with 512 MB RAM,
one for the front end, the other for SQL Server.
We think that we can make it work with such hardware.
Thank you very much.
Laurent Diep.
Hi all
We recently upgraded from SQL Server 2000 to SQL Server 2005. Our system was developed using Microsoft Visual Basic.
Since we upgraded to SQL Server 2005 our system has been very slow. I suspect it is because of the new SQL 2005 installation that I made. Does anyone know how to solve this problem so that I can get the performance of our system back up again?
It has been stressing me for a while now.
Please help
Thanks!
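A common first step after an upgrade from SQL Server 2000 to 2005 is to refresh usage counters and statistics, since plans built on stale pre-upgrade statistics are a frequent cause of sudden slowness. A minimal sketch, run in each user database (the database name is a placeholder):

USE YourDatabase;
GO
-- Correct row and page counts carried over from the SQL 2000 catalog
DBCC UPDATEUSAGE (0);
GO
-- Refresh statistics on all tables so the 2005 optimizer has current data
EXEC sp_updatestats;
GO

Rebuilding heavily fragmented indexes and checking the database compatibility level are the usual follow-ups.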
Hi,
In my cube, I used the Time Intelligence wizard to create calculated members such as YearToYearGrowth and YearToYear Growth%. But when I try to pull those two items into RS 2005 at more detailed levels, the performance of running the MDX dataset is terrible, even though my fact table has only 36234 rows of data and I have 6 dimensions (the largest dimension includes 37801 rows).
One thing that I did was remove the ALLMEMBERS keywords in the MDX designer, but that didn't improve the query execution time much. Right now it takes 1.5 minutes to execute my MDX query in RS 2005. Is there a way to improve the performance of the MDX query (generated in RS) in RS 2005?
Thanks!
I have a table with 5 million rows (6 months of data); every month approximately 1 million rows are inserted (not bulk insert). The table is properly indexed. There is a stored procedure that generates a report based on this table (only this table, no joins). The stored procedure does a lot of permutations (lots of temporary tables, GROUP BY ... HAVING, COUNT(), etc.).
It takes 2 minutes to generate the report; I want the report to be generated in under a second.
Should I partition this table? The size of the table is 2 GB. I don't know whether this table is a good candidate for partitioning or not. Please help me.
Thanks and regards
shipin
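If partitioning is tried, monthly partitions on the date column would be the natural layout for data like this (SQL Server 2005 or later, Enterprise Edition in that era). A sketch; the column name InsertedDate, the boundary dates, and the single-filegroup mapping are all assumptions:

-- One boundary per month; new boundaries are added as new months arrive
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2007-05-01', '2007-06-01', '2007-07-01',
                           '2007-08-01', '2007-09-01', '2007-10-01');

-- Map every partition to the same filegroup for simplicity
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- The table (or its clustered index) is then rebuilt ON psMonthly(InsertedDate)

That said, at 2 GB the bigger win is usually inside the stored procedure itself (indexes that match the GROUP BY columns, fewer temporary tables), since partitioning mainly buys manageability and partition elimination rather than faster aggregation.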
Dear Reader
I am trying to design a database. How can I make the best judgement that the indexing (which I am trying to fix during the diagram designing process) is OK? I am able to identify the best candidates for indexing.
Below are the details I want to understand:
Area
ZIP
City
County
District
State/Province
Country
Now I want data retrieval optimization through indexes (you can also suggest another idea).
The entities Area, ..., Country have independent tables. Example:
Area_Table
AreaID (PK)
Area
They have a relationship - one to many - if you go from Country to Area.
There is one more table:
Location_Table
LocationID (PK)
AreaID
ZIPID
CityID
CountyID
DistrictID
State/ProvinceID
CountryID
(LocationID is further related to the Address of the contact.)
The GUI has a single form to enter these details. On a save command, details in all the tables - Area to Country - are inserted (individually), and simultaneously Location_Table is also inserted with the details.
These tables are queried in the following situations:
(1) A GUI user can select an Area, and then the related details of ZIP, ..., up to Country should be loaded automatically (if previously stored by a user entry in the database).
(2) Contacts have to be retrieved on the basis of Area, ZIP, ..., County (necessary groupings are required).
Example: if contacts are queried country-wise, then the display should be
Country1
State1
District1
County1
City1
ZIP1
Area1
Area2
ZIP2
City2
County2
District2
Country2
Please guide.
SuryaPrakash
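For the two retrieval patterns described, one hedged sketch of supporting indexes on Location_Table would be a composite index leading with the broader levels for the country-wise grouping, plus a narrow index on AreaID for the "pick an Area, load the rest" lookup. Column names are taken from the post (State/ProvinceID is written as StateProvinceID because '/' is not legal in an unquoted name); the index names are made up:

-- Supports grouping/sorting Country -> State/Province -> ... -> Area
CREATE NONCLUSTERED INDEX IX_Location_Hierarchy
    ON Location_Table (CountryID, StateProvinceID, DistrictID,
                       CountyID, CityID, ZIPID, AreaID);

-- Supports: user selects an Area, load the related ZIP/City/.../Country row
CREATE NONCLUSTERED INDEX IX_Location_Area
    ON Location_Table (AreaID);

Whether these are the best candidates still depends on the real query shapes and row counts; the usual approach is to design them against the actual SELECTs once the GUI queries are written.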
I have set up transactional replication with everything on one box. Later (two or three weeks later), Replication Monitor shows a red X under my publisher (publication is disconnected). This is SQL 2005.
Does anyone know how to fix this problem?
Thanks,
Frank
(From an exchange originally posted on SQLServer.com, which wasn't resolved...)
To return views tailored to the user, I have a simple users table that holds user IDs, view names, parameter names, and parameter values that are fetched based on SUSER_SNAME(). The UDF is called MyParam, and takes as string arguments, the name of the view in use, and a parameter name. (The view the user sees is really a call to a corresponding table returning UDF, which accepts some parameters corresponding to the user.)
But the performance is very dependent on the nature of the function call. Here are two samples and the numbers reported by (my first use of) the performance monitor:
Call to table returning UDF, using local variables:
declare @orgauth varchar(50)
set @orgauth = dbo.MyParam('DeptAwards', 'OrgAuth')
declare @since datetime
set @since = DATEADD(DAY,-1 * dbo.MyParam('DeptAwards', 'DaysAgo'),CURRENT_TIMESTAMP)
select * from deptAwardsfn(@orgauth,@since)
[187 CPU, 16103 Reads, 187 Duration]
Call to same table returning UDF, using scalar UDFs in parameters:
SELECT *
from deptAwardsFn (
dbo.MyParam('DeptAwards', 'OrgAuth')
,DATEADD(DAY,-1 * dbo.MyParam('DeptAwards', 'DaysAgo'),CURRENT_TIMESTAMP)
)
[20625 CPU, 1709010 Reads, 20632 Duration]
(My BOL documentation claims the CPU is in milliseconds and the Duration is in microseconds -- which I question.) Regardless of the unit of measure, it takes a whole bunch longer in the second case.
My only guess is that T-SQL is deciding that the parameter values (returned by dbo.MyParam) are nondeterministic, and continually reevaluates them somehow or other. (What ever happened to call by value?)
Can anyone shed some light on this strange (to me) behavior?
----- (and later, from me)---
(I have since discovered that the reference to CURRENT_TIMESTAMP in the function argument is the cause, but I suspect that is an error -- it should only capture the value of CURRENT_TIMESTAMP once, when making the function call IMHO.)
I am trying to run a program called NVivo7, which - when I install - also requires installation of SQL Server. Because my C drive is small (and nearly full), I am trying to run NVivo7 off my D drive, though SQL seems to install on C. Is it possible, do you think, to use 2 different drives in this way, or do both the program and the Server need to be on the same one? If so, is there any way to get them both on D?
So I'm not really new, but I've got a question. I've recently been looking into the Western Digital Raptor drives.
As far as performance goes, it's always been my understanding that the speed of the hard drive is just about always the bottleneck of a computer. I'm currently running 2 striped WD 500 GB SATA drives for my SQL Server (dual Xeon 2.8 with 2 GB memory).
I'm thinking of upgrading to 4 WD Raptor (10k RPM) drives, the new 150 GB models. Anyone have an opinion? Do you think I'll get a large performance increase?
The database that I run queries off of now is about 125 million names, about 80 fields wide. So it's rather large, and it usually takes a fair amount of time to get my results back (we're talking anywhere from a minute to half the day).
Do you think the Raptors will slim that down significantly?
We recently installed SQL Server 2005. The server has 3 drives. When I try to restore a database I can only access the C: drive. How do I make the D: and E: drives visible in the "locate folder" window?
Hi,
Can someone help me? I installed SQL 2005 Enterprise Edition on Windows clustered servers. After the installation
I wanted to change the path of my DB logs, but the problem was that I could not see the other drives. I can see only the drive where the DB is located. Are there any special configurations that should be done?
Thanks.
Russell
Ideally we'd like to configure our SQL cluster with the databases on one drive and the logs on another. Is this feasible in a cluster solution? Will it basically just be 2 drives that are failed over instead of 1?
Thanks
Hi ,
We have 4 SQL 7 servers. All are on the network.
serverA: drives c,d
serverB: drives c,e
serverC: drives c,d
serverD: drives c,d,f
Now my question: I want to map all the drives to each other so that I can use the space wherever it is available, because serverA doesn't have space but serverC has a lot of free space.
Can anyone please tell me in detail how we have to map the drives?
Thanks
--Siva
Hi,
I mapped a drive on my SQL Server box. It points to another server in the same domain. When I try to back up or restore a database, I can't see this mapped drive through SQL Server. Even if I type the entire path, SQL Server won't take it. I don't have a clue why it is not working. Can anyone throw some light on this? Your help is greatly appreciated.
Thanks,
Varma
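Mapped drive letters belong to the interactive user's session, not to the account the SQL Server service runs under, which is why the drive never shows up inside SQL Server. Backing up straight to the UNC path usually works, provided the service account has rights on the share; the server, share, and database names below are placeholders:

BACKUP DATABASE YourDatabase
TO DISK = '\\OtherServer\Backups\YourDatabase.bak'
WITH INIT;

RESTORE from the same UNC path works the same way, subject to the same permissions.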
Hi,
Why do we allocate the .mdf and .ldf files on separate drives?
Please give me a proper logical reason behind it.
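The usual reasoning is that the data file (.mdf) sees mostly random reads and writes while the log file (.ldf) is written sequentially, so putting them on separate physical drives keeps the two I/O patterns from competing, and losing the drive that holds one file still leaves the other available for recovery. Separating them is just a matter of pointing the file specifications at different paths; a minimal sketch with placeholder drive letters and names:

CREATE DATABASE SalesDemo
ON PRIMARY
    (NAME = SalesDemo_data, FILENAME = 'D:\SQLData\SalesDemo.mdf')
LOG ON
    (NAME = SalesDemo_log, FILENAME = 'E:\SQLLogs\SalesDemo.ldf');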
My SQL Server is not showing all of my drives on the server. Please help me.
I have a SQL 2000 server that is installed on a dual Xeon server running Win2k. The server has two RAID 5 drives, a C drive and an E drive.
The C drive is currently where the operating system files are stored, as well as the SQL program files. As things stand, there are SQL databases and transaction logs strewn across these two drives with no particular logic.
My question is: with the two drives as they stand, how should I move things around to gain the best performance? For example, should I keep all my data on the E drive and all my transaction logs on the C drive with the OS and the program files?
There are about 10 databases in use. One database runs the configuration for proprietary predictive dialing software. The other databases hold calling information for each campaign we run within the dialing software.
I have enough space on both drives to accommodate the data; it's performance I would like to see a difference in.
SQL Windows Server: 2008 R2
SQL: 2008 R2
Problem: I have the following drives on this server. Is there a query that I can run daily which will show us the percentage full and the remaining storage capacity on these drives?
Logs(L)
Data(M)
Tempdb(T)
I would like to run that query on all of my servers to get a daily report.
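One option on SQL Server 2008 R2 (SP1 or later) is sys.dm_os_volume_stats, which reports capacity and free space for every volume that hosts a database file, so the Logs (L), Data (M) and Tempdb (T) drives above would all show up as long as a data, log, or tempdb file lives on them. A sketch:

-- Capacity and free space per volume hosting any database file
SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes / 1048576                        AS total_mb,
       vs.available_bytes / 1048576                    AS free_mb,
       CAST(100.0 * (vs.total_bytes - vs.available_bytes)
            / vs.total_bytes AS decimal(5, 1))         AS percent_full
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;

For the daily report across all servers, scheduling this in a SQL Agent job that writes to a central table (or running it from a central management server) is the usual approach; xp_fixeddrives is the older fallback if the DMV is not available.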
Hi,
I have a SAN and am configuring a cluster on SQL 2005. I initially created a quorum drive when setting up the cluster and have now added 4 more drives to the physical node, but when I try to install SQL the drives cannot be located.
Do we need to create all the drives when installing the cluster, or what is the way to add the drives later on?
Thanks
Anup
I have a 75 GB hard drive and a 300 GB one. I want to mirror the 75 onto the 300 and use the extra space as data storage. Is this possible if I partition the 300 and then mirror the drives?
Thanks
I'm trying to attach a database that's stored on the D: partition of my disk. The disk has partitions C:, D:, E:, and F:.
But when I choose "Attach" and open the dialog to select the database file, only drives C: and E: are shown.
All other programs list the partitions correctly.
Where have D: and F: gone?
Is this possible? I seem to remember doing it with SQL Server 7 a long time ago. The Microsoft Knowledgebase says it's not possible with 4.2 through 6.5, but nothing about 7.0 and up.