Why Is The Database Size Growing Too Fast?

May 31, 2005

My DB grew from 500MB to 10GB between 8/1998 and 12/2004, but it is now 16GB (1/2005 - 5/2005). I don't know why the data size is growing so fast (it has nearly doubled).
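A minimal first check, assuming SQL 2000-era built-in tools only: see whether the growth is data, free space inside the files, or log.

exec sp_spaceused @updateusage = 'true'   -- run inside the database: file size vs. space actually used
dbcc sqlperf(logspace)                    -- log size and percent used for every database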

View 4 Replies



Database Size Growth

Sep 23, 2002

I am somewhat confused: I have a database in production that I restored to a QA environment; upon restore, the size has grown by 200MB.

Both production and QA are running SQL2000 -- the only difference is that QA has the latest security hotfixes installed -- version 8.0.0.665 from KB article at the following link:

Q316333 (http://support.microsoft.com/default.aspx?scid=kb;en-us;Q316333&id=Q316333)

View 3 Replies View Related

Database Size Not Growing Despite Unlimited Growth

Mar 11, 2004

Hi All,

The database size is not increasing automatically, even though I have set it to unlimited growth. Any idea why?

Thanks in advance,


Sedat Duztas

Probil
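Two quick things to rule out first, as a hedged sketch (SQL 2000-era): autogrow is silently blocked by a full disk, and it is a per-file setting, so a file-level MAXSIZE or a growth of 0 overrides what the database dialog shows.

-- Run in the affected database: shows size, maxsize and growth per file.
-- growth = 0 means autogrow is off for that file.
exec sp_helpfile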

View 1 Replies View Related

DB Engine :: Recommended Size And File Growth For A Database And Log File?

Sep 22, 2015

What is the recommended size and file growth for a database and log file? We will be storing approx 10,000 records a day. Currently we have the following:

CREATE DATABASE Dummy
ON 
PRIMARY
( NAME = Dummy_data,
    FILENAME = 'D:\...\DATA\Dummy.mdf',
    SIZE = 250MB,
    FILEGROWTH = 25MB )
LOG ON
( NAME = Dummy_log,
    FILENAME = 'D:\...\DATA\Dummy_log.ldf',
    SIZE = 50MB,
    FILEGROWTH = 5MB ) ;
GO
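A back-of-envelope sizing sketch under the stated load, assuming an average row of around 500 bytes (substitute your real row size): 10,000 rows/day is roughly 5 MB/day, or about 1.8 GB/year, so pre-sizing the data file for a year of growth avoids most autogrow pauses. The logical names below come from the CREATE DATABASE above; the sizes are illustrative.

ALTER DATABASE Dummy
MODIFY FILE (NAME = Dummy_data, SIZE = 2GB, FILEGROWTH = 256MB);

ALTER DATABASE Dummy
MODIFY FILE (NAME = Dummy_log, SIZE = 512MB, FILEGROWTH = 128MB);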

View 3 Replies View Related

File Size Growth

Mar 3, 2008

I have a database whose log file size is 10MB, and it goes into single-user mode automatically. I tried to increase the log file size from 10MB to 50MB, but I actually want it to be only 20MB. I am unable to change it back, since I get the message "cannot decrease the size of the file". Is there another way to decrease the size of the log file?
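A hedged sketch of the usual sequence (database and logical file names are hypothetical): the log only shrinks back to the last active VLF, so empty it first, then shrink to the target size in MB.

BACKUP LOG MyDatabase WITH TRUNCATE_ONLY   -- SQL 2000-era; removed in SQL Server 2008
DBCC SHRINKFILE (MyDatabase_log, 20)       -- target size in MB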

View 2 Replies View Related

SQL 7 Uncontrolled Table Size Growth !!!

Jun 22, 2001

We have a nightly application that when run during SQL Backup caused a single table in a 7GB database to increase to 13GB. Total database size reached 20GB when the disk array ran out of space. Table only contained 661,000 records and should have been less than 100MB.

Recently we have moved from SP1 to SP3.

HELP !!!

View 1 Replies View Related

TempDB Growth And File Size

Aug 10, 2007

We have a problem with the size of the tempdb.mdf file. The tempdb had grown to 25GB and consumed all the available disk space. SQL Server was restarted and tempdb was reset back to the default size. The following day tempdb suddenly increased in size from 200MB to 25GB within a very short space of time. There were a couple of event log entries from sqlserver regarding the lack of disk space. Since then the server has been running without any problems, but the level of free space is virtually zero on the drive with the tempdb.mdf file.

What would cause the tempdb to grow suddenly and to this size?

Can I control the size the tempdb can grow to?

SQL 2005 (x64) sp1
W2K3 R2 SP1
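Since this is SQL 2005, the tempdb DMVs can show what kind of allocation eats the space the next time it happens; a minimal sketch (page counts are 8KB pages):

SELECT SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_mb,         -- temp tables, table variables
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sorts, hashes, spools
       SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;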

View 4 Replies View Related

SQL2K: MDF File Size Growing Fast

Jul 30, 2007

Hello,

I have got another annoying problem. The MDF file size on one of the machines is growing really fast. We zip the MDF/LDF files every day from all the machines in the data entry department. On this particular machine, the MDF file size is growing by about 1GB per day. However, when the file is zipped, the zipped file size is close to that of the zipped files from the other machines.

I have tried doing this:

http://www.sql-server-performance.com/lost_data_sql_server.asp

on it as well, but it didn't solve my problem.

Any ideas as to what it might be, and how to solve it?

Thanks in Advance.
J!
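If the zipped copy matches the other machines, the extra gigabytes are likely empty pages inside the MDF rather than data (empty pages compress to almost nothing). A sketch to confirm and reclaim, with a hypothetical logical file name:

exec sp_spaceused @updateusage = 'true'   -- a large gap between database_size and reserved is unused space
DBCC SHRINKFILE (MyData, TRUNCATEONLY)    -- release only trailing free space back to the OS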

View 11 Replies View Related

SQL 2012 :: MDF File Size Is Growing Fast?

Jul 28, 2014

I have a problem like the following:

On the 24th my MDF size was 10GB; when I checked just now, it had suddenly increased to 30GB.

I need a solution to decrease the size, as well as where I can check for the reasons behind the growth.
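A sketch for SQL 2012 to see which tables own the new space before shrinking anything (the container join below is the commonly used simplification):

-- Top tables by reserved space, in MB (pages are 8KB).
SELECT TOP (20)
       OBJECT_NAME(p.object_id) AS table_name,
       SUM(a.total_pages) * 8 / 1024 AS reserved_mb
FROM sys.partitions p
JOIN sys.allocation_units a
  ON a.container_id = p.partition_id
GROUP BY p.object_id
ORDER BY reserved_mb DESC;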

View 2 Replies View Related

SQL Server Admin 2014 :: 1MB Datafile Growth Size?

Aug 14, 2014

I'm aware of the issues with setting your log file growth size too low (causing too many VLFs, etc.), but I haven't seen much about the data file side of it.

Are there any benchmarks specifically on setting data file growth this low (on databases 1-100GB in size)? Are there circumstances on well-utilized servers where that might be warranted?
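One way to get your own numbers: the default trace records every autogrow event, so you can count how often a 1MB increment actually fires and how long each growth takes. A sketch, assuming the default trace is still enabled:

-- Event classes 92 and 93 are Data File Auto Grow and Log File Auto Grow.
SELECT t.DatabaseName, t.FileName, t.StartTime,
       t.Duration / 1000 AS duration_ms
FROM sys.traces s
CROSS APPLY fn_trace_gettable(s.[path], DEFAULT) AS t
WHERE s.is_default = 1
  AND t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;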

View 3 Replies View Related

OLE DB Destination - Fast Load With Maximum Insert Commit Size

Sep 8, 2006

I'm seeing some strange behavior from the OLE DB Destination when using the "fast load" access mode and setting the "Maximum insert commit size".

When I do not set the "Rows per batch" or the "Maximum insert commit size", the package I'm working with inserts 123,070 rows using a single "insert bulk" statement. The data seems to flow through the pipeline until it gets to the OLE DB Destination and then I see a short pause. I'm assuming the pause is from the "insert bulk" statement handling all of the rows at once.

When I set the "Rows per batch" option but leave the "Maximum insert commit size" alone, I generally see the same behavior -- a single "insert bulk" statement that handles all 123,070. In this case, however, the "insert bulk" statement has a "ROWS_PER_BATCH" option appended to the statement that matches the "Rows per batch" setting. This makes sense. I'm assuming the "insert bulk" then "batches" the rows into multiple insert statements (although I'm unsure of how to confirm this). This version of the "insert bulk" statement appears to run in about the same time as the case above.

When I set the "Maximum insert commit size" option and leave the "Rows per batch" statement alone, I see multiple "insert bulk" statements being executed, each handling the lower of either the value I specify for the "Maximum insert commit size" or the number of rows in a single buffer flowing through the pipeline. In my testing, the number of rows in a buffer was 9,681. So, if I set the "Maximum insert commit size" to 5,000, I see two "insert bulk" statements for each buffer that flows into the OLE DB Destination (one handling 5,000 rows and one handling 4,681 rows). If I set the "Maximum insert commit size" to 10,000, I see a single "insert bulk" statement for each buffer that flows into the OLE DB Destination (handling 9,681 rows).

Now the problem. When I set the "Maximum insert commit size" as described in the last case above, I see LONG pauses between buffers being handled by the OLE DB Destination. For example, I might see one buffer of data flow through (and be handled by one or more "insert bulk" statements based on the "Maximum insert commit size" setting), then see a 2-3 minute pause before the next buffer of data is handled (with its one or more "insert bulk" statements being executed). Then I might see a 4-5 minute pause before the next buffer of data is handled. The pause between the buffers being passed through the OLE DB Destination (and handled via the "insert bulk" statements) is sometimes shorter, sometimes longer.

Using Profiler, I don't see any other activity going on within the database or within SQL Server itself that would explain the pauses between the buffers being handled by the OLE DB Destination and the resulting "insert bulk" statements...

Can anyone explain what is going on here? Is setting the "Maximum insert commit size" a bad idea? What are the differences between it and the "Rows per batch" setting and what are the recommended uses of these two options to try to improve the performance of the insert (particularly when handling millions of rows)?

TIA for any thoughts or information...

Dave Fackler
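For reference, a hedged T-SQL analog of what the destination issues (table and file path are hypothetical): BATCHSIZE maps to "Maximum insert commit size" and is a real commit boundary, while ROWS_PER_BATCH, like "Rows per batch", is only a cardinality hint to the optimizer, which matches the behavior described above.

BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.dat'
WITH (
    BATCHSIZE = 5000,         -- commit every 5,000 rows
    ROWS_PER_BATCH = 123070   -- optimizer hint only, not a commit boundary
);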

View 8 Replies View Related

SQL Server 2008 :: Virtual Log Files And Determining Right Growth Size

Oct 14, 2015

Is there a good starting point for understanding, for a specific database, how many VLFs at most are good to have so that they do not cause long startup or backup times?

Also, is there a calculation I can use to identify the best growth parameter to set up for each database?

I'm seeing the message below in the errorlog and am curious to know what changes (right-sizing/growth) should be made. As of now a log file growth value of 100 MB is set (refer: [URL] ....)

Database BizTalkMsgBoxDb has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.
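A starting point, assuming SQL 2008-era tools: DBCC LOGINFO (undocumented but long-standing) returns one row per VLF, so the row count is the VLF count. Since each log growth larger than 1GB adds only 16 VLFs on this version, shrinking the log and regrowing it in a few large fixed steps keeps the count low. The logical log file name below is hypothetical.

DBCC LOGINFO ('BizTalkMsgBoxDb')   -- one result row per virtual log file

-- After shrinking, regrow in big chunks instead of 100MB steps, e.g.:
ALTER DATABASE BizTalkMsgBoxDb
MODIFY FILE (NAME = BizTalkMsgBoxDb_log, SIZE = 4096MB);
ALTER DATABASE BizTalkMsgBoxDb
MODIFY FILE (NAME = BizTalkMsgBoxDb_log, SIZE = 8192MB);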

View 3 Replies View Related

Growth Of A Database!!??

Sep 17, 2007

Hi everyone,

I'm a beginner with SQL Server databases; my problem is this:

I'm making a database whose frontend is an Access project. The database has several stored procedures, views and user functions (the usual), but only a little experimental data. Last night I noticed that the file grew from 22 MB to 89 MB; the objects are the same and so is the data. The only difference was that in an event procedure meant to update various records I forgot to put in the ADO "MoveNext" method, so the loop was infinite.
Is it possible that SQL statements generated by ADO made the file grow so rapidly?
If so, how can I shrink it? I've tried, and the result was only 4%.

Can you help me!?

Thanks

View 1 Replies View Related

Database Growth

Apr 20, 2007



I would like to know followings:



I want to see every day or weekly Database growth (%) save on table



I have some SP which will give me one time run and see the growth. which is ...



Please advice any other way to find out and save on a location ...



create procedure sp_growth as

set nocount on
set ansi_warnings off

declare @l_db_name sysname
       ,@l_sql_string varchar(1000)

if object_id('DB_Growth') is not null
    drop table DB_Growth

-- One row per database file: current size plus the configured growth factor.
create table DB_Growth (
    Database_Name sysname,
    Logical_File_Name sysname,
    File_Size_MB int,
    Growth_Factor varchar(100))

declare db_name_cursor insensitive cursor
for
select name from master..sysdatabases

open db_name_cursor

fetch next from db_name_cursor into @l_db_name

while (@@fetch_status = 0)
begin
    -- sysfiles.size and .growth are in 8KB pages; status bit 0x100000
    -- means growth is a percentage rather than a page count.
    select @l_sql_string = 'select ''' + @l_db_name + ''', name, '
        + 'ceiling((size * 8192.0)/(1024.0 * 1024.0)), '
        + 'case when status & 0x100000 = 0 '
        + 'then convert(varchar, ceiling((growth * 8192.0)/(1024.0*1024.0))) + '' MB'' '
        + 'else convert(varchar, growth) + '' Percent'' end '
        + 'from [' + @l_db_name + '].dbo.sysfiles'

    insert into DB_Growth (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
    exec (@l_sql_string)

    fetch next from db_name_cursor into @l_db_name
end

close db_name_cursor
deallocate db_name_cursor

select * from DB_Growth with (nolock)

if object_id('DB_Growth') is not null
    drop table DB_Growth

set nocount off
set ansi_warnings on
return

GO
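The proc above drops DB_Growth at the end, so nothing is kept between runs. A sketch of the small change that saves history instead, so a daily or weekly job can append to it:

-- One-time setup: permanent history table with a capture date.
create table dbo.DB_Growth_History (
    capture_date datetime not null default (getdate()),
    Database_Name sysname,
    Logical_File_Name sysname,
    File_Size_MB int,
    Growth_Factor varchar(100)
)

-- Then, inside the loop, append to the history table instead:
-- insert into dbo.DB_Growth_History
--     (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
-- exec (@l_sql_string)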


Thanks
Faiz Farazi
Daudkandi,Comilla, Bangladesh
http://www.databasetimes.net/

View 1 Replies View Related

The Fast Way To Restore Database

Oct 6, 2005

Hello, everyone:

My database backup files are 3-5GB. Restoring always takes over 20 minutes. Is there a fast way to restore a big database?

Thanks

ZYT

View 5 Replies View Related

Automatic Database Growth

Aug 21, 2001

I've got a question about the automatic database growth feature of V7. Here's an example:

I have a 1gb db that can grow to max size of 2gb.
I set the auto grow option to 75%
The first time the db grows it will grab 75% of the free space (1gb)

What happens if the database needs to grow again?

Will the db grow using the remaining free space (25%), or has the database reached its max size because it can't grow any further?

Thanks

View 1 Replies View Related

Tracking Database Growth

Jan 14, 2005

Hi everyone.

I am trying to find a way to calculate my DB growth every day. I did find a script on some site, but it seems to give me the same information as the taskpad, which is not very specific. Basically I would like to know the size of a table in MB (or in whatever conversion is possible), so that I will be able to do some forecasting.

Any help here would be greatly appreciated.
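A per-table sketch for SQL 2000-era servers: sp_spaceused reports one table's size, and the undocumented (but shipped) sp_MSforeachtable runs it for every table, which is enough raw data for forecasting.

exec sp_spaceused 'MyTable'                        -- single table; the name is hypothetical
exec sp_MSforeachtable 'exec sp_spaceused ''?'''   -- every table in the current database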

View 1 Replies View Related

Is There A Way To Find Growth In Log (ldf) And Database(mdf) ?

Sep 22, 2005

Hello, I need to monitor growth in the data file and log file every 15 minutes. Since the MDF and initial file sizes are set to a high value, measuring these values at a 15-minute interval will not show the change in size. My intention is to measure the log file growth, which will help me calculate the disk space and bandwidth required to set up log shipping; we need to size that infrastructure based on this calculation.

Thanks
M A Srinivas
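Since the files are pre-sized, sampling the file size shows nothing, but log space used will move. A minimal sketch that snapshots DBCC SQLPERF(LOGSPACE) into a table every 15 minutes from a scheduled job; diffing consecutive samples gives the growth rate for the log-shipping estimate.

-- One-time setup.
create table dbo.log_usage_history (
    sample_time datetime not null default (getdate()),
    database_name sysname,
    log_size_mb decimal(18, 2),
    log_space_used_pct decimal(18, 2),
    status int
)

-- Run every 15 minutes: the DBCC output is captured via INSERT ... EXEC.
insert dbo.log_usage_history
    (database_name, log_size_mb, log_space_used_pct, status)
exec ('dbcc sqlperf(logspace) with no_infomsgs')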

View 6 Replies View Related

DataBase Recovering Please - Need Fast Reply

Nov 23, 1999

What should I have done? Is there anything that can be done other than restoring from backup? How does one know if the database is really recovering, or is EM just joking? I can wait 2 hours before starting the restore.

I was BCPing 12 million rows into a staging table. I used the '-b' option every 20K rows, which I thought would commit and clear the log in batches. After the process, EM appeared to show the transaction log as empty. Upon inspecting the BCP output file I discovered the message that the BCP did not complete because syslogs was full. I could not do a truncate transaction log or a dump database. I tried a truncate transaction with no_log and it appeared to just hang. I stopped SQL Server thinking I could then dump the transaction log, but could not start SQL Server again. I then stopped the NT Server because 'if all else fails'. SQL Server started, but the user database is marked as recovering.

View 5 Replies View Related

SQL CE Database Doesn't Seem To Update Fast Enough.

Jan 4, 2008

I must be doing something wrong.

Code Block (TREE Form)

ssql.Append("INSERT INTO FINDINGS (Facility) ")
ssql.Append("VALUES ('" & Facility & "')")
Try
Dim NewRow As Integer = dba.ExecuteSQL_Affected(ssql.ToString)
Catch ex As Exception
MsgBox("There was an error saving records.", MsgBoxStyle.Information, "No Key")
Exit Sub
End Try

Assessment.dtblFindings_Initialize()

Code Block (DBAccess class)
Public Function ExecuteSQL_Affected(ByVal sSql As String) As Integer
'//Execute the query like Insert, Update and delete
Dim RowsAffected As Integer
Try

If Conn.State = ConnectionState.Closed Then

Conn.ConnectionString = "Data Source=" & oDBConfig.LocalDBLocation & "\" & oDBConfig.LocalDBName & ";"
Conn.Open()
End If
Dim cmd As New SqlCeCommand(sSql, Conn)
cmd.CommandType = CommandType.Text
RowsAffected = cmd.ExecuteNonQuery()
cmd.Dispose()
Conn.Close()
Return RowsAffected
Catch err As SqlCeException

MsgBox(Utility.ComposeSqlErrorMessage(err))
Catch ComErr As Exception

MsgBox(ComErr.ToString, MsgBoxStyle.Information)
Finally
End Try
End Function

Code Block (Assessment Form)

Public Sub dtblFindings_Initialize()
Dim rdr As SqlCeDataReader
Dim dba As New DBAccess
Dim ssql As StringBuilder = New StringBuilder
ssql.Append("SELECT Facility FROM FINDINGS")
rdr = dba.OpenResultSet(ssql.ToString)
Try

rdr.Read()

While rdr.Read
...

So here is the problem. The normal flow is to initiate the insert by pressing a button. That should go through all the steps, then hit the dtblFindings_Initialize call and rebuild the datatable. However, when it happens for the first time (i.e. the first facility going into the database), the SELECT statement always returns nothing.

If I stop the application and pull the database to the desktop, the row has been inserted. So I feel that I am somehow doing something wrong: not closing something, not initializing something... argh! Please help!

View 1 Replies View Related

Database && File Growth Monitoring

Mar 8, 2004

Can someone point me to examples of database and file growth monitoring?

I specifically want to monitor a number of separate SQL Servers (2000, 7.0).

I want to end up with statistics on any size changes on any of these over time.

Help is greatly appreciated.

thanks

View 5 Replies View Related

Performance Degradation W/Database Growth

Mar 27, 1999

At approximately what db size is sql 6.5 known to degrade in performance?

Also, what is the maximum db size 6.5 can handle?

thanks in advance...

View 1 Replies View Related

Rapid Growth Of SQL Express Database

Oct 30, 2007

I have a client running RMS; since moving to SQL Express his database size has jumped from 2GB to 4GB in 8 months. Previously it took 2 years to reach the 2GB size. Has anyone else experienced this rapid growth of their database?

View 5 Replies View Related

Distribution Database Log File Growth

Jan 11, 2007

SQL Server 2000 | Transactional Replication

Suspected Problem: Distribution Database Transaction Log Not Checkpointing

I have a distributor with a distribution database that keeps growing and growing (About 40 GB in 7 days). The database is using the SIMPLE recovery model but the log continues to accumulate data. I have spent time looking at articles such as: "Factors that keep log records alive" (http://msdn2.microsoft.com/en-us/library/ms345414.aspx) and the one thing that stands out is the Checkpoint. I noticed that I can run a manual checkpoint and clear the log. If the log records were still active, the checkpoint would not allow the log to be truncated. This leads me to believe that the server is not properly initiating checkpoints in the Distribution database even though Recovery Model = SIMPLE and the server Recovery Interval = 0.
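A stopgap sketch while the root cause is open, scheduling exactly the manual step described above:

-- Run from a job in the distribution database; under SIMPLE recovery the
-- checkpoint lets inactive log space be reused.
USE distribution
CHECKPOINT
DBCC SQLPERF (LOGSPACE)   -- verify Log Space Used (%) drops afterwards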

I found this: "FIX: Automatic checkpoints on some SQL Server 2000 databases do not run as expected" (http://support.microsoft.com/kb/909369/en-us) but I suspect this is a followup to a problem that may have been introduced with SP4 (since SP4 is a requirement for the hotfix). I am running SP3a (Microsoft SQL Server 2000 - 8.00.850) so I don't think that is the issue. I have several other nearly identical servers with the same version and configuration that have properly maintained log files.

SP4 is not a good option for me at this point - the next upgrade will be to SQL 2K5.

Any thoughts?

Jeff

View 1 Replies View Related

Large Database Growth Out Of Control

Oct 23, 2007

Hopefully I'm posting in the right area. There is a database that has grown to about 41-42 GB in size in about a 2-month period. The previous database had grown to about 22 GB before it was purged out. I'm running this on SQL 2000, and I've tried running all the DBCC SHRINKFILE and SHRINKDATABASE commands to no avail. In this case, the MDF file is the one that has grown out of control, as opposed to the log file (LDF).

Does anyone have any suggestions on what could be done to control the size?

View 17 Replies View Related

Troubleshooting: My Database Has Started To Grow TOO Fast

Jun 19, 2007

The primary database I'm responsible for has started to grow super fast. Every couple of days it grows by 10% (which matches the db settings), but the recent growth doesn't match the historical growth. It took a couple of months to grow from 7 to 8 GB, yet it has grown to about 24 GB in the last 2 months. Bottom line: trust my assertion that it's growing alarmingly fast.

I need help determining which objects are fueling the growth. If I know the objects, I can probably determine the cause. On the flip side, it might be legitimate data stored very poorly. I'm open to any ideas, but I need to get ahead of this problem in the next week or so, or I'm going to run out of room on the hard drive and it could start to affect my users.
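A sketch for ranking objects by reserved space (sysindexes works on both SQL 2000 and 2005; reserved is in 8KB pages). Run it daily and diff the results to see which objects are fueling the growth.

select object_name(id) as table_name,
       sum(reserved) * 8 / 1024 as reserved_mb
from sysindexes
where indid in (0, 1, 255)   -- heap, clustered index, text/image
group by id
order by reserved_mb desc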

Please send me any ideas you might have.

Thanks,

alex8675

View 5 Replies View Related

Database Design For Fast Client Updates

Mar 29, 2006

I'm trying to work out a database design to make it quicker for my client program to read and display updates to the data set. Currently it reads in the entire data set again after each change, which was acceptable when the data set was small, but now it's large enough to start causing noticeable delays. I've come up with a possible solution but am looking for others' input on its suitability to the problem.

Here is the DDL for one of the tables:

create table epl_packages
(customer varchar(8) not null,    -- \
package_type char not null,       -- primary key
package_no int not null,          -- /
dimensions varchar(50) not null default(0),
weight_kg int not null,
despatch_id int,                  -- filled in on despatch
loaded bit not null default(0),
item_count int not null default(0))

alter table epl_packages
add constraint pk_epl_packages
primary key (customer, package_type, package_no)

My first thought was to add a datetime column to each table to record the time of the last change, but that would only work for inserts and updates. So I figured that a separate table for deletions would make this complete. DDL would be something like:

create table epl_packages
(customer varchar(8) not null,
package_type char not null,
package_no int not null,
dimensions varchar(50) not null default(0),
weight_kg int not null,
despatch_id int,
loaded bit not null default(0),
item_count int not null default(0),
last_update_time datetime default(getdate())   -- new column
)

alter table epl_packages
add constraint pk_epl_packages
primary key (customer, package_type, package_no)

create table epl_packages_deletions
(delete_time datetime,
customer varchar(8) not null,
package_type char not null,
package_no int not null)

And then these triggers on update and delete (insert is handled automatically by the default constraint on last_update_time):

create trigger tr_upd_epl_packages
on epl_packages
for update
as
-- check for primary key change
if (columns_updated() & 1792) > 0   -- first three columns: 256+512+1024
insert epl_packages_deletions
select
getdate(),
customer,
package_type,
package_no
from deleted

update A
set last_update_time = getdate()
from epl_packages A
join inserted B
on A.customer = B.customer and
A.package_type = B.package_type and
A.package_no = B.package_no
go

create trigger tr_del_epl_packages
on epl_packages
for delete
as
insert epl_packages_deletions
select
getdate(),
customer,
package_type,
package_no
from deleted
go

The client program would then do the initial read as follows:

select getdate()

select
customer,
package_type,
package_no,
dimensions,
weight_kg,
despatch_id,
loaded,
item_count
from epl_packages
where
customer = {current customer}
order by
customer,
package_type,
package_no

It would store the output of getdate() to be used in subsequent updates, which would be read from the server as follows:

select getdate()

select
customer,
package_type,
package_no,
dimensions,
weight_kg,
despatch_id,
loaded,
item_count
from epl_packages
where
customer = {current customer} and
last_update_time > {output of getdate() from previous read}
order by
customer,
package_type,
package_no

select
customer,
package_type,
package_no
from epl_packages_deletions
where
customer = {current customer} and
delete_time > {output of getdate() from previous read}

The client program will then apply the deletions and the updated/inserted rows, in that order. This would be done for each table displayed in the client.

Any critical comments on this approach and any improvements that could be made would be much appreciated!

View 4 Replies View Related

SQL Server 2008 :: How To Monitor Database Growth

May 5, 2015

I need to monitor my database growth, as a few of my databases are growing rapidly. My client wants a growth list of my databases: a report of database growth for specific databases covering at least one month.
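If full backups already run nightly on these databases, msdb holds the needed month of size history; a sketch for SQL 2008 (backup_size is in bytes):

SELECT database_name,
       CONVERT(date, backup_start_date) AS backup_day,
       CAST(backup_size / 1048576.0 AS decimal(18, 2)) AS backup_mb
FROM msdb.dbo.backupset
WHERE type = 'D'   -- full database backups only
  AND backup_start_date >= DATEADD(month, -1, GETDATE())
ORDER BY database_name, backup_start_date;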

View 3 Replies View Related

DB Design :: Control Growth Of Database File

Oct 7, 2015

I currently have a DB that is growing at a rate of 10GB per month. It is set to 1MB unrestricted growth, and the log file is set to 400MB restricted growth. I take regular transaction log backups, so the log file is well under control without any issue. This DB's recovery model is set to FULL as it has to be mirrored to a backup site. Any recommendations on how to control the growth? Is it advisable to create a new DB with data older than 2 years and transfer that file to an external drive, and if I do this, can I "attach" it back to the main server if and when required?

View 7 Replies View Related

DB Engine :: How To Find Database Growth Rate

Apr 22, 2015

I want to forecast disk growth for one year. How do I find the database growth rate?

View 4 Replies View Related

Can't Change The Auto Growth Option On My Database

Aug 3, 2007



I'm currently using SQL Server 2005. I previously set my database to unrestricted autogrowth, but today I noticed that the log file has been set to limit its growth to 2,097,152 MB. I have 160GB of space for my log files, and I just want to maximize the space for logs on my hard drive.

When I try to change the setting back to autogrowth, it keeps returning to its previous value; it is still set to 2,097,152 MB. What I did was:
Right-click the database - Properties - Files - click the (...) - set the autogrowth option to unrestricted - click OK.
But when I check the log file, it is still set to 2,097,152 MB.
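For what it's worth, 2,097,152 MB is 2 TB, the engine's upper limit for a log file, so the dialog may simply be displaying "unrestricted" as that ceiling rather than ignoring the change. The T-SQL equivalent of the setting, with a hypothetical logical file name:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, MAXSIZE = UNLIMITED);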


Can someone help me change the settings of my database?

View 6 Replies View Related







Copyright 2005-15 www.BigResource.com, all rights reserved