Log File Size Increases Over And Over
Oct 4, 2004
Hi everybody,
I have a database on a production server with a 3.5 GB data file and a 10 GB log file. This is very strange, isn't it?
The features of this database are:
SQL Server 2000
Recovery Model = Full
Auto Update Statistics = Yes
Torn page detection = Yes
Auto create statistics = Yes
Full database backup taken once daily.
No log backup is taken.
So, I would like to apply some strategy to keep the log file from growing out of control. Can you give me your suggestions?
My free disk space is very low.
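For reference, the usual fix in this situation is to start taking regular transaction log backups (or switch to the Simple recovery model if point-in-time recovery is not needed). A minimal sketch; the database name, backup path, logical log name, and target size are all assumptions, not names from the post:

-- back up the transaction log so the inactive portion can be reused
BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log.trn'
-- then reclaim the disk space once, after the log backup
DBCC SHRINKFILE (MyDB_log, 512)   -- target size in MB; 512 is an arbitrary example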
Thank you all,
View 5 Replies
Sep 5, 2007
We have 2 SQL Server 2k5 servers running the same build, 9.0.2047. When I back up any database from one server and attempt to restore it to the other, the log file generally increases about a hundredfold. It errors out when I try to restore a 100 MB db and it tries to create a 9.8 GB log file. This happens both when I use the GUI to restore and when I restore from a T-SQL script. What am I doing wrong?
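One thing worth checking: a restore recreates each file at the size recorded in the backup, so if the source database's log file was 9.8 GB on disk (even if mostly empty), the restore will try to create it at that size. You can inspect what the backup recorded with something like the following (the path is hypothetical):

-- lists the logical/physical file names and the sizes (in bytes) the restore will create
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDB.bak'

If the recorded log size is huge, shrinking the log on the source server before taking the backup usually brings the restored size back in line.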
Thanks in advance.
View 1 Replies
View Related
Feb 4, 2008
Hi,
I'm in the process of setting up log shipping on SQL Server 2005 Enterprise Edition.
My scenario is like this:
The average size of my t-log is 500 MB, and I'm planning to set log shipping at a 30-minute interval (i.e., the backup, copy, and restore job schedules). But at some parts of the day the t-log suddenly increases to 1.5-2 GB. So I wanted to know: what happens if that 1.5-2 GB t-log file cannot be backed up, copied, and restored within the 30-minute interval? How do I deal with cases where the t-log size increases suddenly? I cannot run it at a 15-minute interval due to some network restrictions at my office.
I have one more doubt, about a setting on the log shipping screen: suppose 'Alert if no restore occurs within' is set to 180 minutes. Does that mean the restore job will keep looking for the copied file in the folder on the secondary for the next 180 minutes and only raise an alert after those 180 minutes? Or will it raise an error if it cannot find a copied file after the first restore job execution?
Thanks in advance for any help.
Regards
Arvind L
View 1 Replies
View Related
Jun 2, 2015
Is this possible: the database is in Simple recovery mode, yet the ldf size keeps increasing?
mdf size: 159 GB (171,383,717,888 bytes)
ldf size: 6.46 GB (6,945,505,280 bytes)
My question is: if the recovery model is Simple, why is so much log being generated?
dbcc sqlperf(logspace) -- output:

Database   Log Size (MB)   Log Space Used (%)   Status
mam        6623.742        0.4305579            0
Is there an issue here, or is this normal?
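Even in Simple recovery the log must hold every active transaction until it commits, so a single large rebuild, bulk load, or long-running transaction can grow the file, and the file then stays at its high-water mark. A quick check on what is preventing log reuse (the database name is taken from the output above):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mam'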
View 9 Replies
View Related
Sep 4, 2007
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
Any help with this process?
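That error is expected: ALTER DATABASE ... MODIFY FILE can only grow a file, not shrink it. The usual route for going smaller is DBCC SHRINKFILE; a sketch using the post's own placeholders:

USE <DBNAME>
DBCC SHRINKFILE (<DBLOGFILENAME>, 2)   -- target size in MB

Note the log can only shrink back to the start of its active portion, so under the Full recovery model a log backup may be needed first.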
View 1 Replies
View Related
Apr 15, 2008
I have a log file that is approximately 50 GB. I backed up just the log, and the file size of the .bak is 192 GB. Why is this? Shouldn't it be closer to 50 GB?
Normally I wouldn't let log grow this much. But we are in process of getting new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to give me back some log space for now but was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GB.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
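One thing you can cross-check is what msdb recorded for the backups; if several backup sets were appended to the same device (the default is NOINIT), the file on disk holds all of them even though each restore sees one set. A sketch, with the database name assumed:

SELECT database_name, backup_start_date, type, backup_size
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB'   -- hypothetical name
  AND type = 'L'                -- 'L' = transaction log backup
ORDER BY backup_start_date DESC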
Any ideas?
Stacy
View 8 Replies
View Related
Jun 15, 2006
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert it to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I'll try to post the full log in a new post.
View 11 Replies
View Related
Jul 25, 2007
I have one db, test, with one .mdf and one .ldf file. The mdf file size is 100 MB. For some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables have moved into the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB free. Please reply.
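Assuming the logical name of the primary data file is test_data (a guess; check with sp_helpfile), a shrink sketch:

USE test
EXEC sp_helpfile                   -- confirm the logical file name first
DBCC SHRINKFILE (test_data, 50)    -- target size in MB; the name is an assumption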
View 1 Replies
View Related
Apr 14, 2015
Here are my scenarios:
We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines and replicate to the master database. It has 3 subscriptions subscribed to the publications on the master db.
1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool) the mdf file had grown to 33 gigs and the ldf to 41 gigs. I went to SQL Server Management Studio, right-clicked, and checked the properties of the local database: overall size is around 84 GB with little free space available.
2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool) the mdf file had grown to 49 gigs and the ldf to 41 gigs. I went to SQL Server Management Studio, right-clicked, and checked the properties of the local database: overall size is around 90 GB with 16 GB free space available.
3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it, and did the initial synchronization using the replmerge tool. The mdf file grew to 49 gigs and the ldf to 41 gigs. I went to SQL Server Management Studio, right-clicked, and checked the properties of the local database: overall size is around 90 GB with 16 GB free space available.
Why is it allocating the space differently? This is affecting our initial replica setup times.
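To compare allocated versus actually used space on each replica (which separates "the file grew" from "the data grew"), a sketch you can run in each local database:

SELECT name,
       size / 128 AS allocated_mb,                     -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files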
View 0 Replies
View Related
Jul 31, 2014
I need to write a process to get a file's size in KB and the record count in the file. I was planning on writing a C# console app that takes the file path and name as parameters; however, should I use a CLR?
I can't put a script in the SSIS package while it's bringing the file down, because it has been deemed that we only use SSIS for file consumption.
View 1 Replies
View Related
Sep 22, 2015
What is the recommended size and file growth for a database and log file? We will be storing approx 10,000 records a day. Currently we have the following:
CREATE DATABASE Dummy
ON PRIMARY
( NAME = Dummy_data,
  FILENAME = 'D:\...\DATA\Dummy.mdf',
  SIZE = 250MB,
  FILEGROWTH = 25MB )
LOG ON
( NAME = Dummy_log,
  FILENAME = 'D:\...\DATA\Dummy_log.ldf',
  SIZE = 50MB,
  FILEGROWTH = 5MB ) ;
GO
View 3 Replies
View Related
Dec 28, 2007
On some tables, when I run DBCC SHOWCONTIG, then DBCC DBREINDEX, then DBCC SHOWCONTIG again, I notice Extent Scan Fragmentation actually increases. Why does this happen? Below are the SHOWCONTIG results after running DBREINDEX three times.
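For reference, the sequence described above looks like this in T-SQL (the table name and fill factor are hypothetical):

DBCC SHOWCONTIG ('dbo.MyTable')
DBCC DBREINDEX ('dbo.MyTable', '', 90)   -- 90 = example fill factor
DBCC SHOWCONTIG ('dbo.MyTable')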
After First DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 47.58%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
After Second DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 20.16%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
After Third DBReindex
- Pages Scanned................................: 986
- Extents Scanned..............................: 124
- Extent Switches..............................: 123
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [124:124]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 67.74%
- Avg. Bytes Free per Page.....................: 91.0
- Avg. Page Density (full).....................: 98.88%
Thanks, Dave
View 6 Replies
View Related
Feb 19, 2013
I have a database whose log file is four times the size of the data file, and it is continuously growing day by day. Recently we faced a disk space issue because of it.
Is there any way to truncate the log file?
What is the impact on the db if I truncate the log file?
Is there any way to prevent this file from continuously growing?
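A hedged sketch of the common approach, assuming the database is in the Full recovery model and named MyDB (the names and path are assumptions): regular log backups let the log space be reused, and a one-off shrink reclaims the disk.

BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn'   -- keeps the recovery chain intact
DBCC SHRINKFILE (MyDB_log, 1024)                       -- one-off; target size in MB

Truncating the log by other means (for example switching briefly to the Simple recovery model) frees the space too, but it breaks point-in-time recovery until the next full backup.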
View 13 Replies
View Related
Dec 5, 2014
Is there a limit on the size of a file stored in the db via FILESTREAM in SQL Server 2008, or does it accept all sizes?
View 1 Replies
View Related
May 22, 2015
I measure PLE on my server and insert the values every minute into a table. Now, when I look at the table, I just don't know how to interpret the following data; I don't understand how it is possible. Is it a SQL Server bug, or something else? How should I interpret the data?
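For reference, PLE is usually sampled with a query along these lines. Note that on NUMA machines the DMV also exposes a per-node counter (Buffer Node) alongside the Buffer Manager total, and per-node values can differ wildly, which often explains confusing numbers:

SELECT [object_name], cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'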
View 10 Replies
View Related
Apr 28, 2008
Hello!
I have a problem. I want to know whether the time needed to create an index increases proportionally with the number of rows. For example: if creating an index on a table with 10,000 rows takes 15 seconds, does creating an index on a table with 20,000 rows take 30 seconds, 40,000 rows 60 seconds, and so on?
Or does it take longer than that, like 10,000 rows 15 seconds, 20,000 rows 40 seconds, 40,000 rows 80 seconds?
thx for your help!!
Filipe
View 4 Replies
View Related
Apr 27, 2008
Hi,
I'm trying to write a script that checks my database file and log sizes (in MB) and inserts them into a table. I need the following columns:
dbid, dbname, compatibility_level, recovery_model, db_size_in_MB, log_size_in_MB.
I tried to write this and got stuck:
select sysdb.database_id,sysdb.name,sysdb.compatibility_level,
sysdb.recovery_model_desc,sysmaster.size from sys.databases sysdb,sys.master_files sysmaster
where sysdb.database_id = sysmaster.database_id
can anyone help me with this script?
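One possible completion of that attempt, using conditional aggregation over sys.master_files (whose size column is in 8 KB pages, hence the division by 128):

select sysdb.database_id as dbid,
       sysdb.name as dbname,
       sysdb.compatibility_level,
       sysdb.recovery_model_desc as recovery_model,
       sum(case when sysmaster.type = 0 then sysmaster.size end) / 128.0 as db_size_in_MB,   -- type 0 = data
       sum(case when sysmaster.type = 1 then sysmaster.size end) / 128.0 as log_size_in_MB   -- type 1 = log
from sys.databases sysdb
join sys.master_files sysmaster
  on sysdb.database_id = sysmaster.database_id
group by sysdb.database_id, sysdb.name, sysdb.compatibility_level, sysdb.recovery_model_desc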
THX
View 13 Replies
View Related
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it helped me start interpreting performance counters and testing several setups to be discussed with our server, storage, and network administrators. This way we have been able to compare the results of different hard disks, LUN vs vmdk, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since about 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), the number of bytes read, and the number of reads (database level) increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally I want to note:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, every 10,000 loops the counters are stored using DMV queries (a sketch follows below)
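For what it's worth, the buffer counters named above can be sampled roughly like this (these are the real counter names exposed by the DMV, though not necessarily the exact queries used in the test); the second query reads the cumulative per-file I/O stalls for the current database:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Free list stalls/sec')

-- cumulative read/write latency per file of the current database
SELECT file_id,
       io_stall_read_ms, num_of_reads,
       io_stall_write_ms, num_of_writes
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL)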
So I wonder why SQL Server starts to execute so many reads in test 1.
View 4 Replies
View Related
Dec 5, 2007
Hi All,
The current/base table would be like below:

Products   Level   Date
N1         b       11/5/2007
N2         p       11/6/2007
N3         p       11/7/2007
N4         p       11/14/2007
N5         b       11/15/2007
N6         p       11/23/2007
Expected result:

Level   <=11/7/2007   <=11/14/2007   <=11/21/2007
b       1             1              2
p       2             3              4
Total   3             4              6
As you can see, the above table has cumulative data.
1. It calculates the number of products submitted up to a particular date, weekly.
2. The date columns should increase dynamically (as the dates in the base table increase) each time the query is executed.
For example, the next date would be 11/28/2007.
I tried something like the following; it gives me the count of 'b' level and 'p' level products by week:
declare @date1 as datetime
select @date1 = '6/30/2007'
-- '<' rather than '!=' so the loop terminates even if max(SDate) is not on a 7-day boundary
while (@date1 < (select max(SDate) from dbo.TrendTable))
begin
    set @date1 = @date1 + 7
    select Level, count(Products)
    from dbo.TrendTable
    where SDate < @date1
    group by Level
end
What I think is required is a pivot that dynamically adds columns as the date range increases.
Please suggest any other way of achieving it.
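A rough sketch of such a dynamic pivot, assuming SQL Server 2005 or later and a table dbo.TrendTable(Products, [Level], SDate) as in the loop above, with the same 6/30/2007 anchor date. The column list is built from the data, so new weeks appear automatically; this is illustrative, not tested against the real table:

declare @cols nvarchar(max), @sql nvarchar(max)

-- build one pivot column per week-end bucket present in the data
select @cols = stuff((
    select ',' + quotename(convert(char(10), WeekEnd, 101))
    from (select distinct dateadd(day,
              7 * (datediff(day, '20070630', SDate) / 7 + 1), '20070630') as WeekEnd
          from dbo.TrendTable) w
    order by WeekEnd
    for xml path('')), 1, 1, '')

set @sql = N'
select [Level], ' + @cols + N'
from (
    select t.[Level],
           convert(char(10), w.WeekEnd, 101) as WeekEnd,
           t.Products
    from dbo.TrendTable t
    join (select distinct dateadd(day,
              7 * (datediff(day, ''20070630'', SDate) / 7 + 1), ''20070630'') as WeekEnd
          from dbo.TrendTable) w
      on t.SDate <= w.WeekEnd   -- cumulative: each row counts in every later week
) src
pivot (count(Products) for WeekEnd in (' + @cols + N')) p'

exec sp_executesql @sql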
Please help!
Thanks & Regards
View 3 Replies
View Related
Apr 3, 2001
How can I find the size of a file or database in KB or MB?
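Two built-in procedures cover this; run them in the database in question:

EXEC sp_spaceused   -- database size and unallocated space
EXEC sp_helpfile    -- per-file sizes in KB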
View 1 Replies
View Related
Apr 4, 2001
Is there a way to get a particular file's or table's size?
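For a single table, sp_spaceused takes the table name (the name here is hypothetical):

EXEC sp_spaceused 'dbo.MyTable'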
View 1 Replies
View Related
Dec 13, 2000
Hi Everybody
First of all I would like to thank everyone for their time and effort on this web page.
I am new to the DBA field and I have some unclear points that I would like anyone to clear up for me.
Why is the transaction log file size not decreasing after truncating the log?
Is there anything I have to do, or is this the normal way?
View 1 Replies
View Related
Dec 6, 2004
Hi
I am currently trying to get file sizes and insert them into a table. The table already has the path to the actual file, so it's just a matter of using that path and getting the size.
Any help is appreciated
View 6 Replies
View Related
May 22, 2007
Hi,
I have an application which saves logs of internet access; the file size can reach more than 150 GB per month.
I don't know to what size I should let the mdf file grow before creating a secondary file.
What is the best size for the mdf?
Thanks in advanced.
View 2 Replies
View Related
Jan 30, 2004
I use bcp to copy out a table, and the file it generated is 966,637 KB. However, when I run sp_spaceused, it shows me the following sizes:

reserved    data        index_size   unused
550752 KB   395272 KB   155464 KB    16 KB

The difference doesn't make sense to me. Can anyone help, please?
View 5 Replies
View Related
Feb 5, 2004
I have one database in which the MDF file size is 120 MB while the LDF file has expanded to 10 GB. How can I reduce the size of the LDF file?
View 4 Replies
View Related
Aug 13, 2007
Hi everyone,
I've been asked to continue an existing project in SSIS, which currently uses XML for logging. I noticed that we now have log files up to 200 MB in size. Is there a way to limit the size of those log files?
Please do keep in mind that I'm still learning SSIS, so keep your answer as simple as possible :-)
Many thanks for any help!
View 5 Replies
View Related
Jun 7, 2008
Hi, my database's .ldf (log) file is very big. How can I decrease its size?
Thanks.
View 2 Replies
View Related
Nov 16, 2005
My DB file is 10 MB but my LOG file (LDF) is 300 MB. Is there a way to reduce or compress it? Thanks.
View 2 Replies
View Related
Dec 14, 2001
Greetings,
I want to increase the size of a transaction log file. Can I do this while the database is live? If not, what do I need to do?
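Growing a log file (unlike shrinking one) can be done while the database is online; a sketch with assumed names and target size:

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 2GB)   -- names and size are assumptions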
Thanks!
Rob
View 2 Replies
View Related
Apr 17, 2002
How is the size of the transaction log file determined? Are there any standards?
I created my database with a 2000 MB data file size and a 200 MB log file size.
Now my data file has reached 11000 MB but the log file size is still 200 MB.
Therefore, when I take the online backup, my log space is totally consumed and I have to cancel my backup.
Please reply ASAP.
regards
Sanjeev Lamba
View 1 Replies
View Related
May 19, 2000
My database log file size increased dramatically, up to 5 GB. My data file size is only around 600 MB. Almost all my hard disk space is occupied by the log file. How do I reduce the log file size? Can anyone give me some tips?
View 2 Replies
View Related
Nov 29, 1999
I created a database with its file size set to grow automatically. Now the database file is 17 MB and its transaction log file is 230 MB. After checking the transaction log file properties I found that only 13 MB is in use and the remaining 217 MB of the 230 MB is free. I want that area in the transaction log freed and the log file reduced to its actual size. Any help will be greatly appreciated.
Thank you in advance.
P.S: It is very urgent.
View 1 Replies
View Related