I have 600 instances on my network and I need to monitor their disk space. What would be the best way to do that?
By disk space I mean each SQL instance should have at least 10% free disk space; if it drops below that, an alert (or something of that sort) should be sent.
Configuring alerts on each machine individually would be a pain.
-- Initialize control mechanism
DECLARE @Drive TINYINT, @SQL VARCHAR(100)
SET @Drive = 97

-- Set up staging area
DECLARE @Drives TABLE (Drive CHAR(1), Info VARCHAR(80))

WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''
    INSERT @Drives (Info) EXEC (@SQL)
    UPDATE @Drives SET Drive = CHAR(@Drive) WHERE Drive IS NULL
    SET @Drive = @Drive + 1
END

-- Show the expected output
SELECT Drive,
       SUM(CASE WHEN Info LIKE 'Total # of bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
       SUM(CASE WHEN Info LIKE 'Total # of free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
       SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %'
                THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT)
                ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM (SELECT Drive, Info FROM @Drives WHERE Info LIKE 'Total # of %') AS d
GROUP BY Drive
ORDER BY Drive
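A rough follow-up, tying the output back to the 10% free-space requirement: capture the final SELECT above into a table and flag anything below the threshold. The @Space table variable and the 10 percent cut-off here are assumptions for illustration, not part of the original script.

-- Capture the aggregated drive figures (the final SELECT above) into @Space, then flag low drives.
DECLARE @Space TABLE (Drive CHAR(1), TotalBytes BIGINT, FreeBytes BIGINT, AvailFreeBytes BIGINT)

-- Here you would run: INSERT @Space (Drive, TotalBytes, FreeBytes, AvailFreeBytes) followed by the final SELECT above

SELECT Drive,
       CAST(100.0 * AvailFreeBytes / TotalBytes AS DECIMAL(5, 2)) AS PctFree
FROM @Space
WHERE TotalBytes > 0
  AND 100.0 * AvailFreeBytes / TotalBytes < 10.0
-- Any rows returned are candidates for an alert, e.g. from a SQL Agent job that mails the list.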
I'd like to find out how people handle monitoring the disk space used and available per database across all of their databases and servers. The information returned by sp_spaceused is what I'm looking for, but it is for one database only. How is everyone managing many databases on many different servers?
Are there scripts or tools available for this? Has anyone written any VB code using SQL-DMO?
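For a single server, one rough way to get file sizes for every database in one pass (rather than running sp_spaceused database by database) is to read master..sysaltfiles, which exists in SQL 2000. This is only a sketch; rolling it out to many servers would still need linked servers, a DMO/VB harness, or similar.

-- Allocated size of every data and log file for every database on this instance.
-- size is stored in 8 KB pages, so dividing by 128.0 converts it to MB.
SELECT DB_NAME(dbid) AS database_name,
       name          AS logical_file_name,
       filename      AS physical_file_name,
       size / 128.0  AS size_mb
FROM master.dbo.sysaltfiles
ORDER BY database_name, logical_file_name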
Hello, I need to set up a Compaq server with a 300 MB database, and I will be adding around 600 records on a daily basis. Can someone help me work out how much disk space I should have for SQL Server, given that I have C: and D: set up?
I have a server with C:, D:, F: and I: drives. All the system files are on the C: drive and all the MDFs and LDFs (model, tempdb, master) are on the F: drive, and now I am running out of space on both the C: and F: drives.
1. Can we add space to the C: and F: drives on the fly? 2. Can I move the system databases (the MDFs and LDFs) to another drive, and if so, how do I do it? This is a production server, so will there be any impact when I do this?
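On the second question, tempdb is the straightforward system database to relocate; a rough sketch is below (the G:\SQLData path is only a placeholder). Moving master is different, since its location is set in the SQL Server startup parameters, and msdb/model have their own version-specific steps, so check Books Online before touching those on a production server.

-- Repoint tempdb's files; the change takes effect after the next SQL Server restart.
-- tempdev/templog are the default logical file names; the path is a placeholder.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'G:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'G:\SQLData\templog.ldf')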
I noticed something strange today. I was running a query in Query Analyzer against a large database (8.8 million records), and the disk space on the C: drive kept dropping and eventually went to 0. Available space on the C: drive is 10 GB. The query did complete. SQL Server and all the databases are on the D: drive. After closing the query results in Query Analyzer the disk space returned. Is this a concern, and is there a way to make it use the D: drive for whatever it is doing?
This is my first attempt at using SQL 2000 and DTS. I am importing an Access database using the DTS wizard. The process fails with a "Not enough space on temporary disk" error. There is definitely enough space on the physical disk, and I don't have any limits on folder sizes either. What "disk" is the error talking about, and how do I give it enough space? The database is relatively small, about 10 MB. I believe the database was created using Access 97. Please help.
We recently moved from v6.5 to v7.0. Now I have the databases and logs set to "autogrow". How can I monitor the disk space to ensure I do not run out of room (or is there a preset limit on how large it can grow)? I can't find anything in Books Online. Do I do this through the NT admin tools or through SQL Server Enterprise Manager, and more importantly, how? Thanks so much for any help... Nancy
I'm trying to save a DTS package and it keeps coming back with "insufficient disk space". I noticed that the msdb database was full, so I manually increased the size. It was set to grow at 1 MB increments, but for some reason it didn't look like it was doing that, so I increased it manually. Right now there is about 355 MB free, so that should be plenty to save a package, but it's still coming back with the same "insufficient disk space to complete operation" error. Any ideas on why, or why it didn't grow on its own? Please help; I can't seem to save any packages.
Our database, SQL Server 7.0 SP1 (NT 4.0 SP5), is growing at a very fast rate despite the fact that we are deleting old records. It doesn't seem to be recovering the disk space for the deleted records. Please let me know if there is a specific setting that can help us recover disk space.
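Deleting rows does not, on its own, return space to the operating system; the files stay at the size they grew to until they are shrunk. A rough sketch of reclaiming the space (the database name is a placeholder):

-- Refresh and report space usage for the current database
EXEC sp_spaceused @updateusage = 'TRUE'

-- Shrink the database files, leaving roughly 10% free space inside them (valid SQL 7.0 syntax)
DBCC SHRINKDATABASE (YourDatabase, 10)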
Hi, I'm new to these forums (and to SQL Server), so please be gentle with me.
I am developing a process to obtain information on all our remote servers/databases and store it in a single local database. I'm after things like database size, last backup date, free drive space and so on: the usual weekly statistics.
I've linked the remote servers to my local one, and have written a few simple procedures (which exist on the local server) to grab backup and file size information from the remote tables. The output is stored locally in tables which we can then query as necessary.
I am having difficulty obtaining the free drive space details. I'm using 'exec <remote_server>.master.dbo.xp_fixeddrives' to get the info, but I cannot store the output in a table on the local server (the target table has columns remote_server_name, date, drive_letter, space_mb).
I wish to avoid creating any objects on the remote servers if at all possible. I really want to pass the remote server name into the procedure, and the output to be inserted into the table.
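One rough way to do this without creating anything on the remote servers is INSERT ... EXEC against the linked server's xp_fixeddrives, then tag the rows locally. The linked server name REMOTE1 and the dbo.RemoteDriveSpace table below are placeholders, and the server name could just as easily be a parameter of the local procedure.

-- xp_fixeddrives returns one row per drive: drive letter and MB free.
CREATE TABLE #drives (drive CHAR(1), mb_free INT)

INSERT INTO #drives (drive, mb_free)
EXEC REMOTE1.master.dbo.xp_fixeddrives

INSERT INTO dbo.RemoteDriveSpace (remote_server_name, [date], drive_letter, space_mb)
SELECT 'REMOTE1', GETDATE(), drive, mb_free
FROM #drives

DROP TABLE #drives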
I think I know the answer to this one already, but would like to check before going back to my management.
Background. In the past 2 weeks, a number of our databases have shot up in size, and are now at 100% utilisation of allocated disk space. My management have asked me to look into what is causing these to fill up so quickly.
Unfortunately there were no snapshots or other information relating to the databases/tables, so I cannot determine which tables have grown and are causing the problems.
I have also looked through the SQL logs and the Event Viewer logs to see if there is anything out of the ordinary, but apart from log/database backups there is nothing of note in there.
I am going to be implementing a solution that I got off another thread, which will give me some database/table history to help me in the future, but for now is there anything else I can do? Or is it a case of me going back to the application guys and getting them to reduce data (as there is no more disk space to give them)?
Any thoughts or advice you can give me would be gratefully received.
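For building that table-level history going forward, one rough option is to snapshot sp_spaceused for every table on a schedule. The history table below and the use of the undocumented sp_MSforeachtable are assumptions for illustration.

-- Hypothetical history table; sp_spaceused 'table' returns name, rows, reserved, data, index_size, unused.
CREATE TABLE dbo.TableSizeHistory (
    capture_date DATETIME DEFAULT GETDATE(),
    table_name   SYSNAME,
    row_count    VARCHAR(20),
    reserved     VARCHAR(20),
    data         VARCHAR(20),
    index_size   VARCHAR(20),
    unused       VARCHAR(20)
)

-- Capture one row per table; run this from a scheduled job and compare captures over time.
INSERT dbo.TableSizeHistory (table_name, row_count, reserved, data, index_size, unused)
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''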
Okay, so I have a dual Xeon SQL 2005 server set up. It has four 10k RPM Raptor drives (160 GB), all striped together. My question is this: I was just doing a lot of index tuning, and my server didn't have enough disk space to create two of the recommended adjustments. I'm curious whether I should just store those two indexes on the 1 TB backup array I have (two 500 GB 7200 RPM drives)?
Or should I back up the database, expand the Raptor array, then restore the database and create the indexes?
I'm not really worried about downtime as long as it doesn't exceed 2 days.
I first ran DBCC INDEXDEFRAG on a table with 1.5 billion rows. Logical fragmentation was at 95%; it went down to 3% with no real effect on disk usage.
DBCC DBREINDEX had previously been bombing undetected.
Now I've run a reindex on this table: a reindex job with fill factor = 100, which ran in 3:05. Free disk went from ~150 GB before the operation to 49 GB, and File4 went from 347 GB to 504 GB.
Why has so much free disk space been consumed by this operation and not released?
One user database's LDF file keeps growing; it has reached 50 GB, and the total D: drive space is 70 GB. There are no SAN drives for D:. I tried to shrink the file but it is not shrinking. I even tried to take a T-log backup, but it throws the error 'not enough disk space'.
This is a production server. Please let me know how to resolve this issue.
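A rough sketch of the usual way out of this corner: find out what is keeping the log from being truncated, back the log up to somewhere that does have space (a UNC path on another server, for example), and only then shrink the file. YourDB, the logical log file name and the backup path below are placeholders, and log_reuse_wait_desc assumes SQL 2005 or later.

-- What is preventing log truncation? (LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, ...)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'YourDB'

-- Back the log up to a drive or share that still has room, then shrink the log file.
BACKUP LOG YourDB TO DISK = '\\OtherServer\Backups\YourDB_log.trn'
DBCC SHRINKFILE (YourDB_log, 1024)   -- target size in MB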
I heard somewhere that using Reporting Services you are able to report on other useful things as well, such as server disk space. How is this possible, and where can I find any tutorials to help me out?
I need to check the space available on a specific disk (D: on a remote server); again, this task will be executed from an SSIS package. If I have less than 60 GB available, I have to delete some files.
How would you guys do this using VB.NET? If I had SQL Server installed on that box, I could achieve this by executing a DOS command... but I don't...
I wanted to know on what basis the disk space allocation for databases is planned. Suppose we plan 60 GB for the data files (MDF) of a given database; what should the space allocation be for the log files (LDF) and for tempdb (both its MDF and LDF files)?
Is there any rule of thumb or defined ratio for this?
We are experiencing an intermittent locking or hanging problem on our SQL Server (at the application level on the clients) and it has gotten worse very recently.
I haven't seen any processes that appear to be locked yet but I noticed that we don't have a lot of space available (the database is 1.4 gigs and we have about 245 megs available).
If this is inadequate disk space, could that be causing an apparent locking problem?
One of the drives that stores the database file is close to running out of space. The chances of getting more space added to this drive any time soon are really low. What other options do I have?
We've got an internal database that replicates with another database server for our website.
Not all tables are replicated, some use merge and the others are snapshot based and published regularly to the public website facing server.
However, there's a lot of data (well, large textual data) that's being transferred and it seems to be generating massive log files that continue to grow and grow.
I'm fairly new to administering a SQL Server box, so I was wondering if anyone can tell me the best way to keep it under control? I've heard it's possible to truncate the logs, effectively deleting any data that has already been processed by the subscribing servers, etc.?
As I said, I'm very much new to this and would really appreciate some guidance, if only to the right part of the SQL Server Books Online :)
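A rough sketch of the usual routine: check how full the logs actually are, and take regular log backups so SQL Server can reuse the space (for a published database the log cannot be truncated past transactions the replication agents have not yet picked up). The database name and backup path below are placeholders.

-- How big is each log and how much of it is in use?
DBCC SQLPERF (LOGSPACE)

-- Regular log backups (e.g. from a SQL Agent job) let the inactive part of the log be reused.
BACKUP LOG YourPublishedDB TO DISK = 'E:\Backups\YourPublishedDB_log.trn'

-- Optionally reclaim file space afterwards; target size is in MB.
DBCC SHRINKFILE (YourPublishedDB_log, 2048)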
Hi, I have a 250 GB database and not much space left on the disk drive. I want to run SQLMAINT to do optimization and integrity checks on this db. My question is: how much working space does SQLMAINT need to perform these tasks? Thanks in advance for your help. F.
I've written an SP which does some complex calculations and at the end dumps data into 2 tables (master & detail). When I run this SP for a smaller number of IDs (employees, i.e. about 13,000 records in the master table and 60,000 records in the detail table) it takes around 3-4 hours, and if I run it for all employees in the database (about 60,000 records in master and 180,000 records in detail) it takes around 10 hours to complete. I'm using a temp table to hold data and then do the calculations, but sometimes when I run the SP tempdb starts growing and reaches up to 25 GB, and the process fails as there is no space left on the disk. Lately I have not been able to run the SP for every employee; I had to end the process after 16 hours. Can anybody guide me on what the possible reasons could be, or where I should look for a solution? My row size in the master table is around 2000 bytes and in the detail table about 300 bytes. Thanks in advance. Subodh
I've been running web synchronisation for over a month. Just today some subscribers received a message warning of an "OS Error": could not retrieve file "dynsnapshotvalidation.tok". Sometimes they also got an error regarding low or no virtual memory. After much investigation it seems that the web server hosting the web sync IIS site had literally no space left on the system C: drive. It also turns out that the \Windows\Temp folder contained over 9 GB of snapshot files.
It seems that when the web server collects snapshot files from the data server to deliver to the subscriber, it creates a temp copy on its own local system. Unfortunately this temp data is not cleaned up, and over time it has jammed up the system and caused this failure.
Is there an answer to this, or does it involve manually checking the server from time to time to clean up the TEMP folder?
I am using the code below to send a notification if any drive's free space falls below a threshold. I am having an issue: the email notification is sent out irrespective of whatever threshold I put in the IF clause. Any help is greatly appreciated. Thanks!
declare @MB_Free int

create table #FreeSpace (Drive char(1), MB_Free int)

insert into #FreeSpace exec xp_fixeddrives

select @MB_Free = MB_Free from #FreeSpace where Drive = 'C'

-- Free space on C drive less than threshold
if @MB_Free < 12000
declare @rc int
exec @rc = master.dbo.xp_smtp_sendmail
    @FROM = N'XX@email.com',
    @TO = N'XX@email.com',
    @priority = N'HIGH',
    @Subject = N'C drive space issue',
    @Message = N'C drive space issue, please check',
    @Server = N'emailserver'

select @MB_Free = MB_Free from #FreeSpace where Drive = 'D'

-- Free space on D drive less than threshold
if @MB_Free < 305048
declare @rc1 int
exec @rc1 = master.dbo.xp_smtp_sendmail
    @FROM = N'Email.com',
    @TO = N'Email.com',
    @priority = N'HIGH',
    @Subject = N'D drive space issue',
    @Message = N'D drive space issue, please check',
    @Server = N'emailserver'

drop table #FreeSpace
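The likely culprit: in T-SQL an IF without BEGIN...END only controls the single statement that follows it. Here that statement is the DECLARE, so the EXEC of xp_smtp_sendmail runs unconditionally. A rough corrected sketch, keeping the original thresholds and placeholder addresses:

declare @MB_Free int, @rc int

create table #FreeSpace (Drive char(1), MB_Free int)
insert into #FreeSpace exec xp_fixeddrives

-- C: drive below threshold?
select @MB_Free = MB_Free from #FreeSpace where Drive = 'C'
if @MB_Free < 12000
begin
    exec @rc = master.dbo.xp_smtp_sendmail
        @FROM = N'XX@email.com', @TO = N'XX@email.com', @priority = N'HIGH',
        @Subject = N'C drive space issue',
        @Message = N'C drive space issue, please check',
        @Server = N'emailserver'
end

-- D: drive below threshold?
select @MB_Free = MB_Free from #FreeSpace where Drive = 'D'
if @MB_Free < 305048
begin
    exec @rc = master.dbo.xp_smtp_sendmail
        @FROM = N'XX@email.com', @TO = N'XX@email.com', @priority = N'HIGH',
        @Subject = N'D drive space issue',
        @Message = N'D drive space issue, please check',
        @Server = N'emailserver'
end

drop table #FreeSpace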