Perf Mon
Mar 15, 1999. On an installation of SQL Server 6.5, when I go to Performance Monitor, there are no SQL counters. What gives? How can this be fixed, and do I have to reinstall (wiping out the data) to do it?
The User Defined Counters object has disappeared from my Performance Monitor.
Any ideas why or how to get it back? Thanks, Mike
Is there any way to automate Performance Monitor from the command line, i.e. feed it some parameters like the server and counters and receive a log file at the end?
Cheers
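If the goal is just to capture SQL Server's own counters on a schedule, one scriptable alternative (a rough sketch, assuming SQL Server 7.0/2000, where the counters are exposed through master.dbo.sysperfinfo; the CounterLog table name is made up) is to sample them from T-SQL and run the sampling statement from a SQL Agent job or a scheduled osql call:

-- One-time setup: somewhere to keep the samples (hypothetical table name).
CREATE TABLE CounterLog (
    sample_time  datetime      NOT NULL DEFAULT (GETDATE()),
    object_name  nvarchar(128) NOT NULL,
    counter_name nvarchar(128) NOT NULL,
    cntr_value   int           NOT NULL
)

-- Take one sample; schedule this via a SQL Agent job or "osql -E -i sample.sql".
INSERT INTO CounterLog (object_name, counter_name, cntr_value)
SELECT RTRIM(object_name), RTRIM(counter_name), cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('User Connections', 'Total Server Memory (KB)')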
I have a SQL 2000 database running a vendor's custom app on a Win2000 SP3 server. The front end is delivered to the client over Citrix. I can't change the app or the client's somewhat poor infrastructure. Users are starting to complain about slowness (we can shadow them, and this seems to be the case). On the SQL box all the standard perf counters look good (buffer cache hit ratio at 100%, queue length always less than 1, memory and disk look good). The server currently has two NICs doing load balancing. The network counter Bytes/sec averages over 100,000. The network counter Output Queue Length shows an average of 4,294,967,251. That seems high, but from reading another SQL perf site this counter doesn't always work. Any other ideas where I might look? Do these numbers look high?
Thanks
CREATE TABLE [Perf] (
    [TransId] INTEGER,
    [FileNo] VARCHAR(80),
    [TimeInSeconds] INTEGER,
    [FileSizeMB] INTEGER,
    [FileName] VARCHAR(255),
    [StartDate] DATETIME
)
OK, here's the problem. I am working on a perf stats report. FileType is the first 9 characters of the FileName field. I need to compare similar file names from this month against last month or earlier, ordered by file size. Maybe this is a very simple query, but currently my mind refuses to work. Seeking F1.
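One possible shape for that query (just a sketch against the Perf table above; the month boundaries and the AVG aggregation are my assumptions) is to derive FileType as the first nine characters of FileName, aggregate each month separately, and join the two result sets on FileType:

-- Current month vs. previous month, matched on the 9-character FileType prefix,
-- largest files first.
SELECT cur.FileType,
       cur.AvgSeconds  AS ThisMonthSeconds,
       prev.AvgSeconds AS LastMonthSeconds,
       cur.MaxSizeMB
FROM  (SELECT LEFT(FileName, 9) AS FileType,
              AVG(TimeInSeconds) AS AvgSeconds,
              MAX(FileSizeMB)    AS MaxSizeMB
       FROM Perf
       WHERE StartDate >= DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
       GROUP BY LEFT(FileName, 9)) AS cur
LEFT JOIN
      (SELECT LEFT(FileName, 9) AS FileType,
              AVG(TimeInSeconds) AS AvgSeconds
       FROM Perf
       WHERE StartDate >= DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0)
         AND StartDate <  DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
       GROUP BY LEFT(FileName, 9)) AS prev
  ON prev.FileType = cur.FileType
ORDER BY cur.MaxSizeMB DESC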
The SQL performance counters do not get installed with MSDE. Does anyone know how to install them?
Any recommendations from DBAs on good books about performance monitoring for SQL 2000?
Howdy
I have a server with the following average readings:
No. of CPUs = 1
% CPU = 2
System\CPU Queue Length = 2 to 4
SQL Server:Buffer Manager\Buffer Cache = 99.85%
RAM in the box = 1 GB
Memory\Pages/sec = 1 to 5
SQL memory in use (using Task Manager) = 250 MB
Max worker threads = 255
Average number of connections = 60
So... all indicators are that the CPU is idling and there is more than enough RAM, but in theory we still have a congested CPU, as the queue length is consistently over 2. The thing is, I need to work out whether the CPU isn't working hard even though the queue is long, or whether we can put extra databases/load on the box.
As max worker threads is greater than the number of connections (255 vs. 60), we could reduce it, since the number of users doesn't seem to vary much. But would this make much difference? If the 255 - 60 = 195 spare worker threads aren't doing anything, they shouldn't put any load on the server, right?
Any thoughts much appreciated.
Cheers,
SG.
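For reference, the setting is easy to inspect (and, if it ever seemed worthwhile, change) with sp_configure; a minimal sketch:

-- 'max worker threads' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'max worker threads'      -- shows configured and running values

-- Example only: lower the cap (the value 128 is illustrative, not a recommendation).
-- EXEC sp_configure 'max worker threads', 128
-- RECONFIGURE

Since worker threads are created on demand rather than up front, the unused headroom between the 60 connections and the 255 cap should not be consuming CPU by itself.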
Hello All,
I have a 64-bit SQL Server running on a 64-bit OS with 12 GB of RAM. The server only hosts our application database (it's a 30 GB database). There are no major processes running on it. Despite this, the average Pages/sec shows a count of 1,500 and the average Page Faults/sec shows 22,000. Is there any specific reason why this is happening? The disk's % Idle Time is 87%. Also, there is no major load on the server. Do let me know if you need any other input.
Thanks in Advance,
Mitesh
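Page Faults/sec includes soft faults, and Pages/sec can be inflated by memory-mapped file activity, so those numbers are not always a sign of real pressure on SQL Server itself. It may help to compare them with SQL Server's own memory counters; a sketch, assuming SQL Server 2005 or later where sys.dm_os_performance_counters is available:

-- Buffer-pool pressure shows up here as a low Page life expectancy and as
-- Total Server Memory pushing against Target Server Memory.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)')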
Is there really any benefit to using the Read Uncommitted isolation level, or NOLOCK hints on retrieval queries, when the default isolation level is just Read Committed (not using COM+)? I'm confused about why Community Server uses this technique, perhaps for perf reasons, but I couldn't see any reason why...
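For reference, the two techniques do the same thing at different scopes: READ UNCOMMITTED changes the isolation level for the session, while NOLOCK applies it to a single table reference. A minimal sketch (the Posts table is just a placeholder):

-- Session-wide: subsequent SELECTs may read uncommitted (dirty) data
-- but take no shared locks and are not blocked by writers.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT PostID, Subject FROM Posts

-- Equivalent effect, scoped to this one table reference.
SELECT PostID, Subject FROM Posts WITH (NOLOCK)

The usual motivation is exactly the perf/blocking one: readers neither take nor wait on shared locks, at the cost of possibly seeing rows that are later rolled back.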
The challenge: I have to extract and convert data between two SQL Server systems: only 4 tables on the source system, 8 tables on the target system. Source tables have between 5,000 and 16,000,000 rows. For most of the tables (for example Customer, which goes into 4 target tables), there will be one row in the target tables for each row in the mapped source-system table, so my 13.5M customer rows will end up as around 40M rows across the 4 target tables. So far, so good. But this is a 24x7 online retail web site, and to keep the migration a clean process, we require the smallest possible duration.
I have made progress on the customer migration and am testing in a test environment (2 dual-core HT processors, 4 GB RAM) with 2.15 million rows. The live environment is likely to be a 4 x dual-core with 8-16 GB of RAM.
I am trying to optimize the extract data flow and have read the SSIS perf tuning doc. I am now trying to put that into practice.
I have a row size of approx. 340 bytes, so based on that and my test environment of 2.15 million rows, I work out around 700 MB of RAM required to buffer the data. That is a factor of 7 greater than the 100 MB max buffer space for a data flow, which, it seems, means I should divide the base MaxBufferRows (10,000) by 7 and go down to about 1,400 rows?
I see a LOT of the following messages in my progress output when running with default settings:
[DTS.Pipeline] Information: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 30 buffers were considered and 30 were locked. Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
The design of the data flow at the moment is:
SOURCE SP ---- MULTICAST ---+--- target table 1
                            +--- target table 2
                            +--- target table 3
                            +--- target table 4
Any thoughts on buffer tweaking, corrections to my assumptions, and other hints/techniques?
*##* *##* *##* *##*
Chaos, Disorder and Panic ... my work is done here!
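On the buffer arithmetic: DefaultBufferMaxRows and DefaultBufferSize are per pipeline buffer, not for the whole data set, so the 700 MB total does not all have to fit in buffer memory at once. A quick back-of-the-envelope check (assuming the stated 340-byte rows and the default cap of 10,000 rows per buffer):

-- Approximate size of one buffer at the default 10,000-row cap:
SELECT (10000 * 340) / 1048576.0 AS approx_buffer_mb   -- roughly 3.2 MB, under the 10 MB default DefaultBufferSize

The low-virtual-memory warning therefore usually points at overall memory pressure on the box (or many buffers held by downstream components) rather than at the rows-per-buffer setting being too large.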
I have an SP that takes several varchar columns, concatenates them all together, and then inserts the result into a text field. I do this with a cursor, which was the quickest way to get it done when it was set up...
However, when I moved the process to a 2005 server (on the same physical server), it drastically slowed down. On 2000 the process took about 7 minutes to handle all 350k+ rows, with the processors hanging around 20-40%. On 2005 it took over 30 minutes (not sure how long it would take in total, because I killed the process) and the processors stay above 98%.
I have rewritten the process to use a while loop instead of the cursor (I wanted to do this anyway), and it had no effect. At this rate (about 1 row a second) it will take forever, and this process runs every day.
Any ideas??
Here is the procedure...
-- Working variables for one row of globalsearch
DECLARE @srch_field varchar(8000)
DECLARE @row int, @productid varchar(25)
DECLARE @title varchar(150), @actors_keyname varchar(1200), @directors_name varchar(400)
DECLARE @genres varchar(700), @theme varchar(1500), @type varchar(1500), @studio_desc varchar(100)
DECLARE @media_format varchar(50), @artist_name varchar(100), @dev_name varchar(100)
DECLARE @flags varchar(256), @starring varchar(256), @esrb varchar(100), @esrb_desc varchar(500)
DECLARE @ptrval varbinary(16), @text varchar(max)

-- Table variable holding the list of product_ids to process
DECLARE @productlist table (product_id varchar(25), IDNUM int identity)
-- Load the working list and seed the loop counter with the row count
INSERT INTO @productlist (product_id)
SELECT product_id
FROM music_load..globalsearch

SELECT @row = @@ROWCOUNT

-- Walk the list backwards, one product at a time
WHILE @row > 0
BEGIN
    SELECT @productid = product_id
    FROM @productlist
    WHERE idnum = @row
    SELECT @title = RTRIM(title),
           @actors_keyname = actors_keyname,
           @directors_name = directors_name,
           @genres = genres,
           @theme = theme,
           @type = type,
           @studio_desc = studio_desc,
           @media_format = media_format,
           @artist_name = artist_name,
           @dev_name = dev_name,
           @flags = flags,
           @starring = starring,
           @esrb = esrb,
           @esrb_desc = esrb_desc
    FROM globalsearch
    WHERE product_id = @productid
    -- Build the concatenated search string, skipping NULL / 'unknown' values
    SET @srch_field = ISNULL(@title, '')
    IF @actors_keyname IS NOT NULL AND @actors_keyname <> 'unknown'
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@actors_keyname)
    IF @directors_name IS NOT NULL AND @directors_name <> 'unknown'
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@directors_name)
    IF @genres IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + LTRIM(RTRIM(REPLACE(@genres, '0', '')))   -- '0' quoted so REPLACE receives a string
    IF @theme IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + LTRIM(RTRIM(REPLACE(@theme, '0', '')))
    IF @type IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + LTRIM(RTRIM(REPLACE(@type, '0', '')))
    IF @studio_desc IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@studio_desc)
    IF @media_format IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@media_format)
    IF @artist_name IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@artist_name)
    IF @dev_name IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@dev_name)
    IF @flags IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@flags)
    IF @starring IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@starring)
    IF @esrb IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@esrb)
    IF @esrb_desc IS NOT NULL
        SET @srch_field = @srch_field + ' ~ ' + RTRIM(@esrb_desc)
    -- Write the concatenated value, then append credits and track with
    -- UPDATETEXT (srch_field is a text-type column)
    UPDATE globalsearch
    SET srch_field = @srch_field
    WHERE product_id = @productid

    SELECT @ptrval = TEXTPTR(srch_field),
           @text = credits
    FROM globalsearch
    WHERE product_id = @productid

    UPDATETEXT globalsearch.srch_field @ptrval NULL NULL @text

    SELECT @ptrval = TEXTPTR(srch_field),
           @text = track
    FROM globalsearch
    WHERE product_id = @productid

    UPDATETEXT globalsearch.srch_field @ptrval NULL NULL @text

    SET @row = @row - 1
END
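One thing that might be worth testing on the 2005 side: if srch_field (and credits/track) can be converted from text to varchar(max), the per-row loop and the UPDATETEXT appends collapse into a single set-based UPDATE, which usually runs far faster than row-at-a-time work. A sketch covering only the first few columns (the rest follow the same CASE pattern); it assumes that column change, and all names come from the procedure above:

UPDATE g
SET srch_field =
      RTRIM(ISNULL(g.title, ''))
    + CASE WHEN g.actors_keyname IS NOT NULL AND g.actors_keyname <> 'unknown'
           THEN ' ~ ' + RTRIM(g.actors_keyname) ELSE '' END
    + CASE WHEN g.directors_name IS NOT NULL AND g.directors_name <> 'unknown'
           THEN ' ~ ' + RTRIM(g.directors_name) ELSE '' END
    + CASE WHEN g.genres IS NOT NULL
           THEN ' ~ ' + LTRIM(RTRIM(REPLACE(g.genres, '0', ''))) ELSE '' END
    -- ... remaining columns (theme, type, studio_desc, media_format, artist_name,
    --     dev_name, flags, starring, esrb, esrb_desc) follow the same pattern ...
    + ISNULL(CONVERT(varchar(max), g.credits), '')
    + ISNULL(CONVERT(varchar(max), g.track), '')
FROM globalsearch AS g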
Hi all,
I'm loading my data warehouse using several SCDs. Some of these SCD loads need to occur in sequence, while others can run at the same time. I'm wondering what the best option is in terms of performance. Here is what I was considering:
1) Create a single package with two sequence containers: one containing the SCD loads that must occur in sequence, the other containing the SCD loads that can run in parallel.
OR
2) Create a separate package for each SCD load, then create a "Master" package that uses Execute Package Task components to call those packages.
The other reason I bring up these different ways of designing a DW load is that the second option is a "cleaner", more organized approach. The first option can get quite messy and large if you have several SCDs and several sequence containers. However, I'm looking for the fastest performance. Any thoughts?
This webcast from our partner, ETI, may be of interest to readers on the forum - we see questions quite often about high-performance interaction with other databases.
Donald
Trying to install SQL Server 2005 Developer Edition on XP Pro SP2.
I get a "Performance Monitor Counter Check Failed" error.
I had to stop the install twice at the point where it verified which components were to be included, because the docs were not going to be installed. I resolved that issue.
Now it won't pass the system configuration check because the registry is not the way it wants it.
I looked up the messages, and the only solution is to hack the registry and risk my system.
This is nuts.
Isn't there some way to restart cleanly without hacking registry keys?
Help.