I've got a question about the automatic database growth feature of V7. Here's an example:
I have a 1 GB database that can grow to a max size of 2 GB.
I set the auto grow option to 75%.
The first time the database grows, it will grab 75% of the free space (1 GB).
What happens if the database needs to grow again?
Will the database grow using the remaining free space (25%), or
has the database reached its max size because it can't grow any further?
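For what it's worth, here is a minimal sketch of how the percentage auto-grow arithmetic usually works, using hypothetical names (Sales, Sales_data): the percentage is applied to the file's current size at each growth, and MAXSIZE acts as a hard ceiling. The capping behaviour near the maximum is worth verifying on your own version.

-- Hypothetical database Sales with one data file; check sp_helpfile for real names.
alter database Sales modify file (name = Sales_data, filegrowth = 75%)
go
alter database Sales modify file (name = Sales_data, maxsize = 2GB)
go
-- With a 1 GB file: the first growth adds 75% of 1 GB, taking it to 1.75 GB.
-- The next growth would add 75% of 1.75 GB, but the file can never exceed
-- MAXSIZE, so at most it gets the remaining 0.25 GB; once the file reaches
-- 2 GB, further growth attempts fail with an out-of-space error.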
We use SQL 2000 and our database is configured to grow automatically by 10%. Currently 96% of our database is used. At what point will the database expand - what is the trigger point?
I'm a beginner in SQL Server databases, my problem is this:
I'm making a database for which the frontend is an Access project. The database has several stored procedures, views and user functions (the normal stuff), but only a little data (just the experimental data). Last night I noticed that the file grew from 22 MB to 89 MB. The objects are the same and so is the data; the only difference was that I forgot to put the ADO "MoveNext" method in an event procedure's code that updates various records, so the loop was infinite. Is it possible that SQL statements generated by ADO make the file grow so rapidly? If so, how can I shrink it? I've tried, and the result was only 4%.
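A first step that might help: check whether the growth is in the data file or the log file, since a runaway update loop would mostly have written to the log. A short sketch (the database name is a placeholder; sp_helpfile and DBCC SQLPERF are standard commands):

use MyAccessBackend   -- placeholder database name
go

-- Size of each physical file (the size column is shown in KB).
exec sp_helpfile
go

-- How full the log actually is; a large but mostly empty log can be shrunk
-- with DBCC SHRINKFILE once the inactive part has been truncated.
dbcc sqlperf(logspace)
go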
insert into DB_Growth (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
exec (@l_sql_string)

fetch next from db_name_cursor into @l_db_name
end

close db_name_cursor
deallocate db_name_cursor

select * from DB_Growth with (nolock)

if object_id('DB_Growth') is not null
    drop table DB_Growth

set nocount off
set ansi_warnings on
return
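The lines above are only the tail of the collection script. For context, here is a self-contained sketch of the same pattern; the names DB_Growth, db_name_cursor, @l_db_name and @l_sql_string follow the fragment, everything else (the sysfiles query and the 0x100000 percentage-growth flag) is my assumption about what the original did:

set nocount on
set ansi_warnings off

create table DB_Growth
(
    Database_Name     sysname,
    Logical_File_Name sysname,
    File_Size_MB      int,
    Growth_Factor     varchar(20)
)

declare @l_db_name    sysname,
        @l_sql_string varchar(1000)

declare db_name_cursor cursor fast_forward for
    select name from master.dbo.sysdatabases

open db_name_cursor
fetch next from db_name_cursor into @l_db_name

while @@fetch_status = 0
begin
    -- Build a per-database query against sysfiles; size and growth are in 8 KB pages,
    -- and status flag 0x100000 means the growth value is a percentage.
    select @l_sql_string =
        'select ''' + @l_db_name + ''', name, size / 128, ' +
        'case when status & 0x100000 <> 0 ' +
        'then convert(varchar(10), growth) + ''%'' ' +
        'else convert(varchar(10), growth / 128) + '' MB'' end ' +
        'from [' + @l_db_name + '].dbo.sysfiles'

    insert into DB_Growth (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
    exec (@l_sql_string)

    fetch next from db_name_cursor into @l_db_name
end

close db_name_cursor
deallocate db_name_cursor

select * from DB_Growth with (nolock)

if object_id('DB_Growth') is not null
    drop table DB_Growth

set nocount off
set ansi_warnings on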
I am somewhat confused -- I have a database in production that I restored to a QA environment; upon restore, the size had grown by 200 MB.
Both production and QA are running SQL 2000 -- the only difference is that QA has the latest security hotfixes installed -- version 8.0.0.665 from the KB article at the following link:
I am trying to find a way to calculate my DB growth every day. I did find a script on some site, but it seems to give me the same information as the taskpad, which is not very specific. Basically I would like to know the size of a table in MB (or whatever conversion is possible), so that I will be able to do some forecasting.
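If per-table sizes are what the forecasting needs, one sketch for SQL 2000 is to collect sp_spaceused output for every table; sp_MSforeachtable is undocumented but present, and the temporary table layout simply mirrors what sp_spaceused returns:

-- Collect reserved space per table, then convert the "n KB" strings to MB.
create table #TableSizes
(
    name       sysname,
    [rows]     varchar(20),
    reserved   varchar(20),
    data       varchar(20),
    index_size varchar(20),
    unused     varchar(20)
)

exec sp_MSforeachtable 'insert into #TableSizes exec sp_spaceused ''?'''

select  name,
        cast(replace(reserved, ' KB', '') as int) / 1024.0 as reserved_mb
from    #TableSizes
order by reserved_mb desc

drop table #TableSizes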
Hello, I need to monitor growth in the data file and log file every 15 minutes. Since the mdf and initial file sizes are set to a high value, measuring these values at a 15 minute interval will not show the change in size. My intention is to measure the log file growth, which helps to calculate the disk space and bandwidth required to set up log shipping. We need to set up this infrastructure based on this calculation. Thanks, M A Srinivas
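Since the physical files are preallocated, sampling the space actually used inside the log is probably more useful than sampling file sizes. A sketch: DBCC SQLPERF(LOGSPACE) is a real command, while the history table name and the idea of running this every 15 minutes from a SQL Agent job are assumptions.

-- History table for log usage samples (hypothetical name).
create table dbo.LogSpaceHistory
(
    sample_time        datetime      not null default getdate(),
    database_name      sysname       not null,
    log_size_mb        decimal(15,2) not null,
    log_space_used_pct decimal(5,2)  not null,
    status             int           not null
)
go

-- Job step, scheduled every 15 minutes.
create table #logspace
(
    database_name      sysname,
    log_size_mb        decimal(15,2),
    log_space_used_pct decimal(5,2),
    status             int
)

insert into #logspace
    exec ('dbcc sqlperf(logspace)')

insert into dbo.LogSpaceHistory (database_name, log_size_mb, log_space_used_pct, status)
    select database_name, log_size_mb, log_space_used_pct, status
    from #logspace

drop table #logspace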
My DB grew from 500 MB to 10 GB between 8/1998 and 12/2004, but now it is 16 GB (from 1/2005 to 5/2005). I don't know why the data size is growing so fast (the growth rate has roughly doubled).
I have a client running RMS; since moving to SQL Express his database size has jumped from 2 GB to 4 GB in 8 months. Previously it took 2 years to reach the 2 GB size. Has anyone else experienced this rapid growth of their database?
Suspected Problem: Distribution Database Transaction Log Not Checkpointing
I have a distributor with a distribution database that keeps growing and growing (About 40 GB in 7 days). The database is using the SIMPLE recovery model but the log continues to accumulate data. I have spent time looking at articles such as: "Factors that keep log records alive" (http://msdn2.microsoft.com/en-us/library/ms345414.aspx) and the one thing that stands out is the Checkpoint. I noticed that I can run a manual checkpoint and clear the log. If the log records were still active, the checkpoint would not allow the log to be truncated. This leads me to believe that the server is not properly initiating checkpoints in the Distribution database even though Recovery Model = SIMPLE and the server Recovery Interval = 0.
I found this: "FIX: Automatic checkpoints on some SQL Server 2000 databases do not run as expected" (http://support.microsoft.com/kb/909369/en-us) but I suspect this is a followup to a problem that may have been introduced with SP4 (since SP4 is a requirement for the hotfix). I am running SP3a (Microsoft SQL Server 2000 - 8.00.850) so I don't think that is the issue. I have several other nearly identical servers with the same version and configuration that have properly maintained log files.
SP4 is not a good option for me at this point - the next upgrade will be to SQL 2K5.
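As a quick way to confirm the symptom on the distributor, here is a sketch of the manual check described above. CHECKPOINT, DBCC OPENTRAN and DBCC SQLPERF are standard commands; "distribution" is the default distribution database name and may differ on your server.

use distribution
go

-- How full is the log before the manual checkpoint?
dbcc sqlperf(logspace)
go

-- Nothing should be holding the log if recovery is SIMPLE and all
-- replicated transactions have been delivered to the distributor.
dbcc opentran
go

-- Force a checkpoint; under SIMPLE recovery this should let the
-- inactive portion of the log be reused.
checkpoint
go

-- Compare the "Log Space Used (%)" figure with the first sample.
dbcc sqlperf(logspace)
go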
Hopefully I'm posting in the right area. There is a database that has grown to about 41-42 GB in size in about a 2 month period. The previous database had grown to about 22 GB before it was purged out. I'm running this on SQL 2000, and I've tried running all the DBCC SHRINKFILE and SHRINKDATABASE commands to no avail. In this case, the MDF file is the one that has grown out of control as opposed to the log file (LDF file).
Does anyone have any suggestions on what could be done to control the size?
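Before shrinking again it may help to check whether the MDF actually contains free space or whether the 41-42 GB is genuinely allocated to objects. A sketch for SQL 2000 (the database name, logical file name and target size are placeholders; DBCC SHOWFILESTATS is undocumented but available):

use MyBigDatabase   -- placeholder database name
go

-- Overall picture: reserved vs. unallocated space in the database.
exec sp_spaceused @updateusage = 'true'
go

-- Per-file extent usage (undocumented but handy): total vs. used extents.
dbcc showfilestats
go

-- Only if there really is unallocated space inside the MDF:
-- shrink the data file toward a target size in MB (placeholder name/size).
dbcc shrinkfile (MyBigDatabase_Data, 30000)
go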
I need to monitor my database growth, as a few of my databases are growing rapidly. My client wants a growth list for the databases - a report of database growth for specific databases covering at least one month.
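One low-effort way to get a month of history without having collected anything in advance is to read the full backup sizes recorded in msdb. This is only an approximation and assumes full backups run regularly; msdb.dbo.backupset is a standard table.

-- Approximate database growth over the last month from full backup sizes.
select  database_name,
        convert(varchar(10), backup_start_date, 120)    as backup_date,
        cast(backup_size / 1048576.0 as decimal(15,2))  as backup_size_mb
from    msdb.dbo.backupset
where   type = 'D'                                       -- full database backups
  and   backup_start_date >= dateadd(month, -1, getdate())
order by database_name, backup_start_date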
I currently have a DB that is growing at a rate of 10 GB per month. It is set to 1 MB unrestricted growth and the log file is set to 400 MB restricted growth. I take regular transaction log backups, so the log file stays well under its limit without any issue. This DB's recovery model is set to FULL as it has to be mirrored to a backup site. Any recommendations on how to control the growth? Also, is it advisable to create a new DB with data older than 2 years and transfer that file to an external drive, and if I do this, can I "attach" it back to the main server if and when required?
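On the archive idea: detaching and re-attaching a database is supported, so an archive database moved to external storage can be brought back when needed. A sketch using the SQL 2000/2005-era procedures; the database and file names are placeholders.

-- Detach the archive database so its files can be copied to the external drive.
exec sp_detach_db @dbname = 'SalesArchive'
go

-- ...copy SalesArchive.mdf / SalesArchive_log.ldf to the external drive...

-- Later, attach it back from wherever the files now live.
exec sp_attach_db
    @dbname    = 'SalesArchive',
    @filename1 = 'E:\Archive\SalesArchive.mdf',
    @filename2 = 'E:\Archive\SalesArchive_log.ldf'
go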
I'm currently using SQL Server 2005. I originally set my database to unrestricted auto growth. But today I noticed that the log file has been set to limit its growth to 2,097,152 MB. I have 160 GB of space for my log files, and I just want to maximize the space for logs on my hard drive.
When I try to change the setting back to auto growth it keeps returning to its previous value; it is still set to 2,097,152 MB. What I did was: right click on the database - Properties - Files - click the (...) - set the auto growth option to unrestricted - click OK. But when I check the log file, it is still set to 2,097,152 MB.
Can someone help me change the settings of my database?
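Worth noting: 2,097,152 MB is 2 TB, the maximum size of a log file, so the GUI may simply be displaying the unrestricted case that way rather than reverting the change. If you want to set it explicitly from T-SQL anyway, a sketch with placeholder database and logical file names:

-- Set the log file back to unlimited growth (run sp_helpfile to find the logical name).
alter database MyDatabase
modify file
(
    name = MyDatabase_log,
    maxsize = unlimited
)
go

-- Verify: for log files, max_size of -1 or 268435456 (2 TB in 8 KB pages)
-- both mean growth is not restricted below the 2 TB ceiling.
select name, size, max_size, growth
from MyDatabase.sys.database_files
where type_desc = 'LOG'
go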
Can anyone tell me why my SQL 2000 database has grown approx 15% and my log file 20,000% when I attach it to SQL 2005? I've thousands of databases to upgrade, but with the log file increasing to more than the size of the database, it's going to be a struggle!
It also takes a fair amount of time to attach.
I suspect there is some reindexing going on, as when I try to reattach to SQL 2000 I get index errors?
Is there anything I can do in advance to reduce the database growth?
I know I can truncate the log afterwards, but the peak disk space consumed during my migration may be an issue!
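One thing that can be done in advance, as asked above: shrink the log on the SQL 2000 side before detaching, so the upgrade starts from a small log file. A sketch using SQL 2000-era commands (database and logical log file names are placeholders; WITH TRUNCATE_ONLY was removed in later versions, so this is only for the 2000 side, and it breaks the log backup chain, so take a full backup afterwards):

use MyDatabase
go

-- Discard the inactive log records.
backup log MyDatabase with truncate_only
go

-- Shrink the physical log file down to a small size (target in MB).
dbcc shrinkfile (MyDatabase_log, 50)
go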
I am the only DBA in my company and the client wants to know the growth rate of his SQL Server database, which is in production. How can I get the growth rate per day?
I'm trying to get an understanding of a serious problem I have with a large DB in production. This is going to be obvious to someone (everyone probably) <bg>
I have a table which consists of numerous varchars and ints, but also a Text type field. This table resides in a SQL 2000 database. This DB currently has a data file size of 16 GB and a transaction log size of 17 GB. When I edit the table and increase the size of a varchar field from 50 to 100, these files grow to more than double their size!
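A likely explanation, offered as a guess: widening the column through the table designer in Enterprise Manager typically rebuilds the table (create a new table, copy every row, drop and rename), and that copy is fully logged, which would account for both files ballooning. Issuing the change directly as T-SQL avoids the rebuild, since widening a varchar does not rewrite existing rows. A sketch with placeholder table and column names:

-- Widening a varchar is a metadata-only change when done directly.
-- Keep the column's existing NULL/NOT NULL setting so only the length changes.
alter table dbo.MyTable
    alter column MyComments varchar(100) null
go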
Is there any automated script available to monitor database growth and send mail alerts if any DB has grown by 20%? If not, what is the approach to writing the T-SQL script?
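One possible approach, sketched below under stated assumptions: SQL 2005 or later (sys.master_files), Database Mail configured, and a baseline table that you populate once and refresh after each alert. The table name, mail profile and recipient are placeholders.

-- Baseline of database sizes (hypothetical table).
create table dbo.DbSizeBaseline
(
    database_name sysname       primary key,
    size_mb       decimal(15,2) not null,
    captured_at   datetime      not null default getdate()
)
go

-- Scheduled job step: compare current data+log size against the baseline.
declare @body nvarchar(max)

select @body = isnull(@body + char(13) + char(10), '') +
       c.database_name + ': ' +
       convert(varchar(20), c.size_mb) + ' MB now vs ' +
       convert(varchar(20), b.size_mb) + ' MB baseline'
from
(
    select db_name(database_id) as database_name,
           sum(size) / 128.0    as size_mb          -- size is in 8 KB pages
    from sys.master_files
    group by database_id
) c
join dbo.DbSizeBaseline b on b.database_name = c.database_name
where c.size_mb >= b.size_mb * 1.20                 -- grown by 20% or more

if @body is not null
    exec msdb.dbo.sp_send_dbmail
         @profile_name = 'DBA Mail',                -- placeholder profile
         @recipients   = 'dba@example.com',         -- placeholder address
         @subject      = 'Database growth alert (>= 20%)',
         @body         = @body
go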
Yesterday a user ran a stored procedure which updates a very large transaction table with millions of records; as usual, the user killed the process in the middle of the execution (which resulted in a rollback).
This rollback blocked other scheduled jobs and had been running for more than 8 hours without completing.
As usual, we decided to shut down and restart SQL Server; while restarting, SQL Server went into automatic recovery.
The following are the log recordings:
2001-04-17 12:00:01.86 spid9 Recovery progress on database 'ISH' (7): 48 percent.
2001-04-17 12:00:37.89 spid6 Using 'xpstar.dll' version '1998.11.13' to execute extended stored procedure 'sp_MSgetversion'.
2001-04-17 12:02:36.60 spid9 Recovery progress on database 'ISH' (7): 49 percent.
2001-04-17 12:08:12.79 spid9 Recovery progress on database 'ISH' (7): 49 percent.
2001-04-17 12:11:49.90 spid9 Recovery progress on database 'ISH' (7): 50 percent.
2001-04-17 12:11:50.32 spid9 Recovery progress on database 'ISH' (7): 92 percent.
If you look at the log, the reading jumps from 50% to 92%, yet the automatic recovery process has now been running for another 5 hours (and we are still not able to open the database).
In order to avoid SQL Server going into automatic recovery mode (usually a DBA would update the status column of master..sysdatabases with a magic number to put the database into emergency mode), I followed my own approach: I deleted the record for the user database from sysdatabases and restarted SQL Server, then used my Monday tape backup (complete backup) to recover the database. In the recovery options I used "force restore over existing database", hoping this would reduce the database recovery time because the data files (mdf and ldf) would not need to be created.
But the result: after running six hours of loading data from tape, I am getting the following readings in the log file.
I really do not know what is happening in the background. (If it were Oracle, the DBA would have control over the database; in SQL Server, the database has control over the DBA.)
Does anyone have any idea about SQL Server recovery? The size of the database being recovered is 22 GB and the tape device is an HP SureStore DAT 40e (DDS-4, 4mm tape). Thanks in advance.
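For what it's worth, the same "force restore over existing database" can be run from T-SQL, where STATS gives visible progress during a long tape restore. A sketch; the tape device name is a placeholder, the database name ISH comes from the log above.

-- Restore over the existing database, reporting progress every 10 percent.
restore database ISH
from tape = '\\.\Tape0'          -- placeholder tape device
with replace,                    -- force restore over the existing database
     stats = 10                  -- progress messages every 10%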
Hi,
How can I automate database backups (MSDE server v8.0)? Is there some free tool which can help with this, or can I use a stored procedure?
Plan:
Complete - 1 per week
Differential - 1 per day
Best regards,
Klaudiusz Bryja
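MSDE has no GUI, but the engine supports the same BACKUP statements, so the plan above can be covered by a scheduled task running osql (or by SQL Agent jobs, since MSDE ships with the Agent service). A sketch of the two statements; the database name and backup paths are placeholders.

-- Weekly full backup.
backup database MyAppDb
to disk = 'D:\Backups\MyAppDb_full.bak'
with init
go

-- Daily differential backup, written to its own file.
backup database MyAppDb
to disk = 'D:\Backups\MyAppDb_diff.bak'
with differential, init
go

These can be scheduled from Windows Task Scheduler with something like: osql -E -S .\MyInstance -i weekly_full.sql (the instance name is, again, a placeholder).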
While checking the SQL Server error logs, I noticed that the pubs and msdb databases are automatically being backed up, even though no job is set up to do so... in addition, it's backing up to a directory that I cannot find on our network... does anybody have an idea of what's going on?
The path it's backing up to is: (FILE=1, TYPE=PIPE: {'\\.\pipe\dbasql70dbagent0s0'}).
I am a C++/C# developer and my SQL skills are very limited. I have a database set up for mirroring and I would like to get an e-mail notification whenever an automatic failover occurs. Can anyone show me how to do this using SQL Server 2005? Please provide a T-SQL script sample if possible!
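One way this is commonly done, sketched below: mirroring role changes raise error 1480 ("The mirrored database ... is changing roles"), so a SQL Agent alert on that error number combined with Database Mail can send the e-mail, provided the error is written to the event log on your build. The operator name, address and mail setup are assumptions.

use msdb
go

-- Hypothetical operator that receives the notification.
exec dbo.sp_add_operator
     @name          = 'DBA Team',
     @email_address = 'dba@example.com'
go

-- Alert on error 1480, raised when a mirrored database changes roles
-- (which is what happens on an automatic failover).
exec dbo.sp_add_alert
     @name        = 'Mirroring role change',
     @message_id  = 1480,
     @severity    = 0,
     @include_event_description_in = 1
go

-- E-mail the operator when the alert fires (1 = e-mail).
exec dbo.sp_add_notification
     @alert_name          = 'Mirroring role change',
     @operator_name       = 'DBA Team',
     @notification_method = 1
go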
What I need to do for my project is the following:
A mobile user sends data via GPRS to the SQL Server database. I then need a method to detect when a particular table is being inserted into, and then extract data from the table and construct a report dynamically. What should I do to achieve this goal - e.g. a Windows application, a stored procedure, or a trigger?
PS: the client side is a mobile application developed using MCL. I don't know in which way it will send the data to the SQL Server database, so what I need to do is to monitor new data being inserted.
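For the insert-detection part, here is a minimal sketch of the trigger approach. The incoming table dbo.MobileData, its id column, and the queue table the reporting application polls are all hypothetical names.

-- Queue table the reporting side polls for new work (hypothetical).
create table dbo.MobileDataQueue
(
    queue_id       int identity(1,1) primary key,
    mobile_data_id int      not null,
    inserted_at    datetime not null default getdate(),
    processed      bit      not null default 0
)
go

-- Fires once per INSERT statement and records every new row.
create trigger trg_MobileData_Insert
on dbo.MobileData
after insert
as
begin
    set nocount on

    insert into dbo.MobileDataQueue (mobile_data_id)
    select i.id            -- assumes dbo.MobileData has an int key column named id
    from inserted i
end
go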
2) How do I auto-generate and print a report directly after a record has been inserted into the table? Do I need to import a report web service API? If yes, which one? Or can I use other methods, e.g. a predefined report control view in my Windows application and turning off the pop-up menu while printing the report (I guess)?
What is the recommended size and file growth for a database and log file? We will be storing approx 10,000 records a day. Currently we have the following:
CREATE DATABASE Dummy
ON PRIMARY
(
    NAME = Dummy_data,
    FILENAME = 'D:\...\DATA\Dummy.mdf',
    SIZE = 250MB,
    FILEGROWTH = 25MB
)
LOG ON
(
    NAME = Dummy_log,
    FILENAME = 'D:\...\DATA\Dummy_log.ldf',
    SIZE = 50MB,
    FILEGROWTH = 5MB
);
GO
Does this seem right? We have our transaction logs set to "Truncate Log on Checkpoint" and they still grow over 1GB. Is it possible that one transaction (to a checkpoint) generates this much logged information? Will transaction log backups every 5-10 minutes help me out better or is this just a poorly written application?
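If the answer is to move away from "Truncate Log on Checkpoint" toward frequent log backups, the job step is a single statement. A sketch, scheduled every 5-10 minutes from SQL Agent; the backup path is a placeholder and the database must be in FULL (or BULK_LOGGED) recovery for log backups to be allowed.

-- Frequent log backups keep the log small and give point-in-time recovery.
backup log Dummy
to disk = 'D:\Backups\Dummy_log.trn'   -- placeholder path
with noinit                            -- append each backup to the set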
I am having a problem with the growth of tempdb on my SQL 7.0 box. I have over 300 stored procedures that run (many with GROUP BY and ORDER BY in them). This causes my tempdb to grow to over 30 gigs in size. If I restart the SQL Server services it shrinks back down to the manageable 6 gigs that I expect. Is there a way to have the services restart automatically on a nightly basis, or is there a way to have tempdb deallocate resources once they are used without restarting services? I appreciate any help you can provide. Nathan