SQL2K: MDF File Size Growing Fast
Jul 30, 2007
Hello,
I have got another annoying problem. The MDF file size on one of the machines is growing really fast. We zip the mdf/ldf files every day from all the machines in the data entry dept. On this particular machine, the mdf file size is growing by about 1GB per day. However, when the file is zipped, the zipped file size comes out close to the zipped files from the other machines.
I have tried doing this:
http://www.sql-server-performance.com/lost_data_sql_server.asp
on it as well, but it didn't solve my problem.
Any ideas as to what it might be, and how to solve it?
Thanks in Advance.
J!
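A first check worth making here (a hedged sketch, not a confirmed diagnosis): if the MDF compresses down to the same size as the others, much of it may be empty, allocated-but-unused space. sp_spaceused reports how much of the file is actually in use:
Code:
-- Run in the affected database; @updateusage corrects the
-- page counts before reporting
EXEC sp_spaceused @updateusage = N'TRUE';
A large "unallocated space" figure would explain the zip results, and DBCC SHRINKFILE could then reclaim it, though something is presumably allocating and freeing that space each day.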
View 11 Replies
Jul 28, 2014
I have a problem like the following:
On the 24th my MDF size was 10GB; when I checked now, the MDF size had suddenly increased to 30GB.
I need a solution to decrease the size, as well as where I can check for the reasons behind it.
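A starting point (a sketch for SQL Server 2005 and later): compare allocated versus used space per file, to see whether the jump is real data or empty reserved space:
Code:
-- size is reported in 8KB pages, hence / 128 for MB
SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
The default trace also records data-file autogrow events, which can put a timestamp on when the growth happened.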
View 2 Replies
View Related
Oct 15, 2007
Dear All,
I am using the Append to media backup option in the 2000 version. The size of the backup is growing. How can I best create the maintenance plan to clear the history or clear the old backup sets in the BACKUP (.bak) file, but still be able to restore to a point in time from the same physical file?
Is there any side effect of doing that?
Thanks,
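Not a complete answer, but one way to see what is accumulating (path is hypothetical): every append adds another backup set to the same media, and RESTORE HEADERONLY lists them all:
Code:
-- One row per backup set stored in the file
RESTORE HEADERONLY FROM DISK = 'D:\Backups\MyDb.bak';
As far as I know there is no supported way to delete individual sets from inside a .bak file; the usual pattern is to start a fresh file periodically (WITH INIT, or one file per backup with a date in the name) and let the maintenance plan's cleanup remove old files, accepting that point-in-time restore then only reaches as far back as the files you keep.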
View 10 Replies
View Related
Jan 3, 2007
I'm having a problem. When I use a SQL query to make a backup of the database, it works fine. But every time I use it, the backed-up file keeps growing in size. Say I have the file test.bak, whose file size is 450 MB; then I run a new backup to overwrite the existing test.bak file, and it just ends up as 900 MB. If I run it again, it becomes 1350 MB, and so on.
Is there a way to prevent that from happening?
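If the intention is to overwrite test.bak rather than append to it, the likely culprit is that BACKUP appends by default (WITH NOINIT). A hedged sketch, database name hypothetical:
Code:
-- WITH INIT overwrites the existing backup sets on the media
-- instead of appending a new one on every run
BACKUP DATABASE MyDb
TO DISK = 'C:\Backups\test.bak'
WITH INIT;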
View 1 Replies
View Related
Jun 8, 2006
Hello,
I have a SQL Server that generates a reasonable amount of log data in the SQL Event logs. I'm finding it difficult to use this data as the individual log files become rather large, i.e. several hundred MB, before they rotate.
What I would like to achieve: set a constraint like "each log will rotate when it reaches [XX]MB in size, and the files will be kept through [Y] rotations".
I can't seem to find how to do this through Enterprise Manager. I've had a decent look through Google and there seems to be no info on this forum... can anyone point me in the right direction?
Thanks for your time.
Rgds,
Sithlordelvis.
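As far as I know, SQL Server 2000 has no size-based rotation setting in Enterprise Manager; a common workaround is to cycle the log on a schedule (the number of logs kept is configured by right-clicking SQL Server Logs in EM). A sketch:
Code:
-- Closes the current error log and starts a new one; run this
-- from a scheduled SQL Agent job to approximate rotation
EXEC master.dbo.sp_cycle_errorlog;
This rotates on time rather than size, but keeping the schedule tight enough caps the file sizes in practice.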
View 2 Replies
View Related
May 10, 2015
I want to truncate my SharePoint config database and WSS_Logging database logs. The SharePoint_Config database is growing at a pace of ~10GB every week. I have scheduled a weekly full backup. The current .ldf file size is 113GB.
I am using SQL Server 2012 with the AlwaysOn high availability feature. I am not able to set the recovery model from Full to Simple, as it gives me a message that mirroring is running on both servers.
What do I need to do in my case to reduce the log file?
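Since an availability-group database has to stay in full recovery, the usual lever is more frequent log backups, which free the log's internal space for reuse; a sketch with hypothetical names and paths:
Code:
-- Schedule this frequently (e.g. every 15-30 minutes)
BACKUP LOG SharePoint_Config
TO DISK = 'E:\LogBackups\SharePoint_Config.trn';

-- After backups have freed internal space, a one-off shrink
-- reclaims the disk; target size is in MB
USE SharePoint_Config;
DBCC SHRINKFILE (SharePoint_Config_log, 10240);
The weekly full backup on its own never truncates the log, which would account for the 113GB .ldf.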
View 3 Replies
View Related
Sep 10, 2003
I noticed this morning that my tempdb grows very fast. I have 26GB on my hard drive, and all the space was occupied by tempdb; finally the query failed due to 0 space on the hard drive, leaving no space for tempdb to grow.
The SELECT query is supposed to return about 40,000 rows.
I ran this same query on a different server, where tempdb is not growing by even 1 MB.
I checked the tempdb options; "trunc. log on chkpt." is true.
Why is this problem happening?
I have just dbo permission on all the databases.
Do you have any advice regarding this?
Thanks,
Ravi
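On the SQL Server 2000 build in question the tooling is limited, but for reference, on 2005 and later these DMVs show which session is doing the allocating (a diagnostic sketch, not a fix):
Code:
-- tempdb pages allocated per session (SQL Server 2005+)
SELECT session_id,
       SUM(user_objects_alloc_page_count) AS user_object_pages,
       SUM(internal_objects_alloc_page_count) AS internal_object_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_object_pages DESC;
A 26GB tempdb from a 40,000-row SELECT usually points at a huge intermediate result (an accidental cross join, say) or sort/hash spills, which would also explain why the other server is unaffected.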
View 3 Replies
View Related
Dec 1, 2007
Templog is growing 1 GB per hour.
I've read some articles about this issue that talk about how to shrink it.
In this case I need to find out what is happening and why.
How can I monitor it?
I know I sometimes go overboard with temporary objects in order to make reports faster.
The Tempdb size is normal.
Thanks
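One low-cost way to monitor it (a hedged sketch): sample log usage on a schedule and correlate the spikes with what was running at the time:
Code:
-- Reports log size and percent used for every database,
-- including tempdb; capture this periodically into a table
DBCC SQLPERF (LOGSPACE);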
View 4 Replies
View Related
Mar 5, 2014
My database server's memory utilisation has been growing fast for the past week. It stayed around 55% for a week, and now it has reached 70% and is still climbing.
Total OS memory is 32GB, and I set a cap for SQL Server memory at 29GB. Don't know what to do..
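Worth double-checking that the cap actually took effect (a sketch; 29696 is 29GB expressed in MB):
Code:
-- Show advanced options, then inspect/apply the memory cap
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 29696;
RECONFIGURE;
Also note that SQL Server climbing steadily toward its cap is normal behaviour: the buffer pool grows toward the limit and stays there, so 70% and rising is not by itself a problem under a 29GB cap.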
View 9 Replies
View Related
Oct 10, 2006
hi all,
my transaction logs at the subscriber are growing very large,
regardless of my recovery mode. Any help?
using snapshot-push replication
thanks,
joey
View 8 Replies
View Related
Mar 11, 2004
Hi All,
The database size is not increasing automatically, even though I have set it to unlimited growth. Any idea about this?
thanks in advance,
Sedat Duztas
Probil
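One thing to verify (SQL 2000 syntax, hedged): the per-file growth settings, since an unlimited maximum size still needs a non-zero growth increment:
Code:
-- Run in the database concerned; growth = 0 means autogrow is
-- effectively off for that file, maxsize = -1 means unlimited
SELECT name, size, maxsize, growth, status
FROM sysfiles;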
View 1 Replies
View Related
May 31, 2005
My DB size grew from 500MB to 10GB between 8/1998 and 12/2004. But now it is 16GB (from 1/2005 to 5/2005). I don't know why the data size is growing so fast (as if doubling)?
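To see where the new space went, a quick sweep of per-table sizes (sp_MSforeachtable is undocumented, so treat this as a convenience sketch):
Code:
-- Reports rows, reserved, data and index size for every table
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';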
View 4 Replies
View Related
Sep 8, 2006
I'm seeing some strange behavior from the OLE DB Destination when using the "fast load" access mode and setting the "Maximum insert commit size".
When I do not set the "Rows per batch" or the "Maximum insert commit size", the package I'm working with inserts 123,070 rows using a single "insert bulk" statement. The data seems to flow through the pipeline until it gets to the OLE DB Destination and then I see a short pause. I'm assuming the pause is from the "insert bulk" statement handling all of the rows at once.
When I set the "Rows per batch" option but leave the "Maximum insert commit size" alone, I generally see the same behavior -- a single "insert bulk" statement that handles all 123,070 rows. In this case, however, the "insert bulk" statement has a "ROWS_PER_BATCH" option appended to the statement that matches the "Rows per batch" setting. This makes sense. I'm assuming the "insert bulk" then "batches" the rows into multiple insert statements (although I'm unsure of how to confirm this). This version of the "insert bulk" statement appears to run in about the same time as the case above.
When I set the "Maximum insert commit size" option and leave the "Rows per batch" statement alone, I see multiple "insert bulk" statements being executed, each handling the lower of either the value I specify for the "Maximum insert commit size" or the number of rows in a single buffer flowing through the pipeline. In my testing, the number of rows in a buffer was 9,681. So, if I set the "Maximum insert commit size" to 5,000, I see two "insert bulk" statements for each buffer that flows into the OLE DB Destination (one handling 5,000 rows and one handling 4,681 rows). If I set the "Maximum insert commit size" to 10,000, I see a single "insert bulk" statement for each buffer that flows into the OLE DB Destination (handling 9,681 rows).
Now the problem. When I set the "Maximum insert commit size" as described in the last case above, I see LONG pauses between buffers being handled by the OLE DB Destination. For example, I might see one buffer of data flow through (and be handled by one or more "insert bulk" statements based on the "Maximum insert commit size" setting), then see a 2-3 minute pause before the next buffer of data is handled (with its one or more "insert bulk" statements being executed). Then I might see a 4-5 minute pause before the next buffer of data is handled. The pause between the buffers being passed through the OLE DB Destination (and handled via the "insert bulk" statements) is sometimes shorter, sometimes longer.
Using Profiler, I don't see any other activity going on within the database or within SQL Server itself that would explain the pauses between the buffers being handled by the OLE DB Destination and the resulting "insert bulk" statements...
Can anyone explain what is going on here? Is setting the "Maximum insert commit size" a bad idea? What are the differences between it and the "Rows per batch" setting and what are the recommended uses of these two options to try to improve the performance of the insert (particularly when handling millions of rows)?
TIA for any thoughts or information...
Dave Fackler
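For what it's worth, the same two knobs exist on T-SQL BULK INSERT, where the distinction is easier to see in isolation (table and path are hypothetical): BATCHSIZE controls how many rows are committed per transaction, while ROWS_PER_BATCH is only a hint telling the optimizer roughly how many rows the whole load contains:
Code:
-- BATCHSIZE ~ "Maximum insert commit size": rows per committed transaction
-- ROWS_PER_BATCH ~ "Rows per batch": a planning hint, no commit behavior
BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.txt'
WITH (BATCHSIZE = 5000, ROWS_PER_BATCH = 123070, TABLOCK);
That matches the observed behavior: only the commit-size setting changes how many "insert bulk" statements fire, and committing mid-buffer has real costs (index maintenance per commit, for instance) that could plausibly contribute to the pauses.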
View 8 Replies
View Related
Sep 4, 2007
I am trying to resize a database's initial log file from 500M to 2M. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2M, but it doesn't keep the changes.
Any help with this process?
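As far as I can tell, MODIFY FILE with SIZE can only grow a file; shrinking goes through DBCC SHRINKFILE instead (a sketch; substitute the real database and logical log file names):
Code:
USE <DBNAME>;
-- Target size is in MB
DBCC SHRINKFILE (<DBLOGFILENAME>, 2);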
View 1 Replies
View Related
Jun 16, 2014
The TEMPDB transaction log file keeps growing. The database server is new, and the transaction log was presized to 1GB on installation. After installing a number of databases, the log file grew over a day to 38GB. Issuing a manual checkpoint was the only way to free some space to allow it to be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at the log reuse wait desc for tempdb returns "Nothing" and tempdb itself isn't being used very much or growing in size.
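Another angle worth trying (hedged): a long-running transaction in tempdb pins the log even when the database looks idle, and DBCC OPENTRAN names the oldest one:
Code:
-- Reports the oldest active transaction holding up log truncation
DBCC OPENTRAN ('tempdb');
On 2005 and later, version-store use from long-running snapshot-isolation readers is another common reason tempdb keeps filling.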
View 9 Replies
View Related
Aug 7, 2007
Why does a log (.ldf) file keep growing and growing and growing? Is this related to the fact that the scheduled Maintenance keeps failing due to exclusive access problems?
SQL Server 2000 Std.
Thanks!
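The usual first check (a sketch, not a diagnosis, with a hypothetical database name): a database in FULL recovery whose log is never backed up grows its .ldf indefinitely, and failing maintenance jobs would mean exactly that:
Code:
-- Works on SQL Server 2000: shows the recovery model in effect
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery') AS recovery_model;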
View 7 Replies
View Related
Nov 8, 2000
Hi,
my log files are growing like anything. One of my log files is 20GB.
How do I reduce the log file size?
If I run a DBCC command, will it just grow back?
Please tell me how to find the free space and reduce the log sizes.
Even after taking backups, my log file sizes are not reducing.
Thanks!
Kavira
View 2 Replies
View Related
May 16, 2006
I have a database of 22 GB in SQL 2000, and my database is set to the full recovery model. The problem I'm having is that the tran log is growing too fast; this morning it was 24 GB, more than the database size. Can anyone help me keep it at a manageable size?
Thanks in advance!!
View 2 Replies
View Related
Nov 15, 2004
Hi all
I have a DB with 1 data file, 1 log file and 1 index file.
data file is 3 GB but index file is 12 GB.
The index file is growing bigger day by day.
This drags the DB's performance down.
What should I do to stop the index file growing and make it smaller?
Thanks in advance
Thi Nguyen
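Two hedged possibilities: heavy fragmentation or a very low fill factor can balloon an index filegroup well beyond the data size. On SQL 2000 the check and the fix look like this (table name hypothetical):
Code:
-- Report fragmentation for the table's indexes
DBCC SHOWCONTIG ('dbo.BigTable');

-- Rebuild all indexes on the table with a 90% fill factor;
-- '' means every index, 90 leaves 10% free space per page
DBCC DBREINDEX ('dbo.BigTable', '', 90);
It is also worth confirming the 12GB file doesn't simply hold more indexes than the table design needs.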
View 5 Replies
View Related
Nov 8, 2012
My log file was 2x the size of my actual Database which is obviously too large on a DEV box. I know that my data can be easily recovered so I actually do not even want/need a log file.
After doing some investigation I found that I should turn my database into "Simple Recovery Mode" and after this I used a few scripts to truncate my log file. Things at this point looked great!
Unfortunately my log file is still growing even with this 'simple recovery mode'. So how do I stop this craziness from occurring?
I even unchecked the box 'allow autogrowth' on the database! However, I eventually get errors when creating records in the system because it complains about running out of room in the log file.
Code:
The transaction log for database 'ReportingDB' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
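Taking the error message's own advice is probably the fastest route; unchecking autogrowth only caps the file, it does not stop logging:
Code:
-- Shows what is preventing log space reuse
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ReportingDB';
ACTIVE_TRANSACTION, REPLICATION, or LOG_BACKUP each point to a different fix; simple recovery doesn't shrink anything by itself, it only lets log space be reused after checkpoints.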
View 8 Replies
View Related
Jun 28, 2000
There is a sql.txt file on my C: drive which grows very fast when I start SQL Agent... it keeps occupying the free space on the drive. Can anyone please help?
View 4 Replies
View Related
Sep 28, 2004
I started using my database last week, transferring data from Sybase to SQL 2000 Server. Initially the log file size was a few MB, near 20 MB for each company; within 8 days it has reached up to 300 MB, while the data file size is still a few MB, approximately 40MB for each company. Why is the log file growing in such a manner, and how can I manage it?
Thanking You
R.Mall
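One hedged option for a load like this is to run the transfer under the bulk-logged recovery model, so qualifying bulk operations are minimally logged (database name hypothetical):
Code:
-- Minimal logging for qualifying bulk operations during the load
ALTER DATABASE CompanyDb SET RECOVERY BULK_LOGGED;
-- ... run the Sybase transfer ...
ALTER DATABASE CompanyDb SET RECOVERY FULL;
Whether the transfer actually gets minimal logging depends on how it inserts (fast load with a table lock into an empty table, for example); otherwise, regular log backups are what keep a full-recovery log from growing without bound.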
View 2 Replies
View Related
Oct 12, 2007
I am running Microsoft SQL 2005 Express.
The transaction log of the main database has gotten huge and I just noticed it because the disk is nearly out of free disk space.
The database data file is 46MB and the transaction log is 210GB. This happened because autogrowth was on and no limit was set, I suppose.
So my goal is to knock this thing down to size, and set the max size of the transaction file to like 5GB.
I CANNOT do a backup of the transaction log because I do not have enough free space for such an operation. I tried doing:
DUMP TRANSACTION dbName
No luck, got the following error: Backup or restore requires at least one backup device.
Can I just stop the service and manually DELETE the transaction log file?
Thanks for the help.
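Deleting the .ldf by hand risks a database that will not start, so better not. A hedged sequence that needs no free disk (the logical log file name is hypothetical; find the real one with sp_helpfile):
Code:
USE dbName;
-- Breaks the log chain: fine if point-in-time restore is not needed
ALTER DATABASE dbName SET RECOVERY SIMPLE;
-- Flush dirty pages so the inactive log can be reused
CHECKPOINT;
-- Shrink the log file down to ~5GB (value in MB)
DBCC SHRINKFILE (dbName_log, 5120);
The DUMP TRANSACTION error, for what it's worth, just means it was given no backup device to write to.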
View 2 Replies
View Related
Oct 11, 2007
Hello, we are running Microsoft SQL 2005 Express edition (9.0.32).
Recently I just noticed that the database log file of our main database is HUGE. The database data file is only 50MB and the log file is 210GB.
Any idea what is causing this? It seems to be getting bigger with time; in the last 7 days it seems to have grown by 100GB. I noticed the following settings under the database:
Autogrowth: By 15 percent, unrestricted growth
Does that seem right? Thanks.
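Percent-based growth on a log this size is itself part of the problem: 15 percent of 210GB is over 30GB per autogrow event. A fixed increment is the usual recommendation (logical file name hypothetical):
Code:
-- Replace 15% growth with a fixed 512MB increment
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 512MB);
The underlying cause is still most likely a full-recovery database whose log is never backed up, as in the previous post.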
View 13 Replies
View Related
May 23, 2007
I have a 19 gig database that somehow has a 100 gig log file. The DB MUST BE in full recovery mode; I back up the transaction logs EVERY hour and shrink nightly, but for some reason my log file WILL NOT SHRINK.
HELP,
I've used both DBCC SHRINKFILE (xxxxxx) and DBCC SHRINKDATABASE (xxxxx) and these don't seem to work. I have no current backup, and I have no capacity for an additional 100 gigs' worth of backup drive or off-site tape.
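One hedged diagnostic before anything drastic: a shrink can only cut the log back to the last active virtual log file, so if the active VLF sits at the end of the file, SHRINKFILE appears to do nothing:
Code:
-- Run in the database; one row per virtual log file.
-- Status = 2 marks active VLFs; active VLFs near the end
-- of the output block the shrink.
DBCC LOGINFO;
Taking a log backup, generating a little activity, and shrinking again often walks the active VLF back toward the front of the file.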
View 1 Replies
View Related
Feb 11, 2006
Hi there,
I have a data manipulation process written in nested stored procedures that go four levels deep. When I run these procedures individually, they all seem to be fine. Whereas when I run them all together as a nested process (calling one in another as sub-procedures), the log file grows pretty badly, like 25 to 30GB, and finally gets kicked out after running out of disk space. This process runs around 3 hrs on a SQL Server Standard box with a dual processor and 2GB of RAM.
These procedures have a bunch of bulk updates and at least one cursor in each procedure that gets looped through.
I was wondering if anybody has experienced this situation or has any clue as to why this is happening and how to resolve it? I am in pretty bad shape to deliver this product and in need of urgent help.
Any ideas would be greatly appreciated. Thanks in advance
*** Sent via Developersdex http://www.developersdex.com ***
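One pattern that usually tames this kind of log growth (a hedged sketch, not the poster's code; names are hypothetical): commit the bulk updates in chunks, so each batch's log space can be reused between log backups, instead of one giant transaction spanning four nesting levels:
Code:
-- Update in 10,000-row chunks (UPDATE TOP needs SQL 2005+;
-- on SQL 2000 use SET ROWCOUNT 10000 before a plain UPDATE)
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.Staging
    SET processed = 1
    WHERE processed = 0;
    SET @rows = @@ROWCOUNT;
END;
Replacing the per-row cursors with set-based statements, where possible, helps for the same reason.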
View 1 Replies
View Related
Oct 12, 2007
The company for which I work did not have a DBA until I started a few weeks ago. Whoever installed SQL2K used the wrong CD so they have been running Personal Edition on their servers. I have installed a new SQL2K standard instance and have restored everything except the jobs and DTS packages. Can the msdb from the Personal edition be restored to the standard instance?
View 3 Replies
View Related
Apr 15, 2008
I have a log file that is approximately 50 GIG. I backed up just the log, and the file size of the .bak is 192 GIG. Why is this? Shouldn't it be closer to the 50 GIG?
Normally I wouldn't let the log grow this much. But we are in the process of getting a new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to give me back some log space for now, but was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GIG.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
Any ideas?
Stacy
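A hedged guess at the mystery: backups to a backup device append by default (WITH NOINIT), so the 192 GIG file may hold several stacked backup sets even if the media-contents view is only showing the latest. What was actually written is recorded in msdb:
Code:
-- One row per backup of this database; several rows with
-- increasing position = appended sets in a single file
SELECT backup_start_date, type, position,
       backup_size / 1048576 AS size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyDb'   -- hypothetical name
ORDER BY backup_start_date;
A single log backup can't really exceed the log it came from, so appended sets are the plausible explanation.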
View 8 Replies
View Related
Jun 15, 2006
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I'll try to post the full log in a new post.
View 11 Replies
View Related
Mar 15, 2007
Is it possible to downgrade SQL from Enterprise to Standard Edition, or do you have to remove the previous installation (uninstall) and reinstall, meaning you would also have to restore all user databases? Thanks.
View 1 Replies
View Related
Jul 25, 2007
I have one DB, test, with one .mdf and one .ldf file. The .mdf file size is 100MB, and for some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables have moved into the secondary file. Now I want to reduce the first .mdf file from 100MB to 50MB. Is that possible? It's showing 90MB free. Please reply.
View 1 Replies
View Related
Jun 2, 2006
Hi everyone,
We've got almost 250 old DTS packages which simply load data into SQL tables from plain files, or the reverse. Most of them are defined with fixed fields at fixed positions, one after another. We don't want to migrate them using the Import Wizard; on the contrary, we're rebuilding them from the beginning, taking advantage of the SSIS architecture to the full.
And now we're trying to imagine how to migrate that valuable info automatically from SQL Server 2000 to SQL Server 2005 without effort... You know, any program able to move that detailed info to SSIS.
That way we would avoid selecting all these positions again for each file - very tedious, and we're lazy.
I don't see how, except, of course, migrating them directly.
Let me know if you need further explanations or more clarity on that.
View 5 Replies
View Related
Aug 22, 2001
I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's
been going fine.
However, I'm not using my old SQL 6.5 backup scripts, which, when the
backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking
the log size.
My SQL 7 server is set up with a Maintenance Plan which does everything,
including backup, but the log file seems to be growing and growing. I'm
up to 4.5 gigs now, for a database with a data file of 3.5 gigs.
How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?
Thanks,
Todd Wallace
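The SQL 6.5 trick still exists in SQL 7 (it was only removed much later, in SQL Server 2008); a hedged sketch, names hypothetical:
Code:
-- Discards the inactive log without backing it up, then
-- shrinks the physical file (target size in MB)
BACKUP LOG MyDb WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (MyDb_log, 500);
The cleaner long-term fix is to add regular transaction log backups to the maintenance plan, since TRUNCATE_ONLY breaks the log chain and with it point-in-time recovery.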
View 3 Replies
View Related