I am somewhat confused: I have a database in production that I restored to a QA environment, and upon restore its size grew by 200MB.
Both production and QA are running SQL 2000; the only difference is that QA has the latest security hotfixes installed, version 8.0.0.665, from the KB article at the following link:
My DB grew from 500MB to 10GB between 8/1998 and 12/2004, but it has since grown to 16GB (from 1/2005 to 5/2005). I don't understand why the data size is growing so fast (roughly doubling).
We have 300+ databases on one single server. If I need to change the log size to "unlimited" for all of them, is there any way to do so? Please advise. -Julie
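A minimal sketch of one way to script this, assuming SQL Server 2000: read every log file from master..sysaltfiles (status bit 0x40 marks log files) and issue ALTER DATABASE ... MODIFY FILE for each one. Test on a non-production server first.

    DECLARE @db sysname, @file sysname, @sql nvarchar(4000)

    DECLARE log_files CURSOR FOR
        SELECT DB_NAME(dbid), RTRIM(name)
        FROM master..sysaltfiles
        WHERE (status & 0x40) <> 0       -- 0x40 flags a log file

    OPEN log_files
    FETCH NEXT FROM log_files INTO @db, @file
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- UNLIMITED removes the MAXSIZE cap; growth is then bounded only by disk space
        SET @sql = 'ALTER DATABASE [' + @db + '] MODIFY FILE (NAME = '''
                 + @file + ''', MAXSIZE = UNLIMITED)'
        EXEC (@sql)
        FETCH NEXT FROM log_files INTO @db, @file
    END
    CLOSE log_files
    DEALLOCATE log_files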
What is the recommended size and file growth for a database and its log file? We will be storing approximately 10,000 records a day. Currently we have the following:
CREATE DATABASE Dummy
ON PRIMARY
(   NAME = Dummy_data,
    FILENAME = 'D:\...\DATA\Dummy.mdf',
    SIZE = 250MB,
    FILEGROWTH = 25MB )
LOG ON
(   NAME = Dummy_log,
    FILENAME = 'D:\...\DATA\Dummy_log.ldf',
    SIZE = 50MB,
    FILEGROWTH = 5MB );
GO
I am using the "Append to media" backup option in SQL Server 2000, and the size of the backup keeps growing. How can I best create a maintenance plan to clear the history or clear old backup sets out of the BACKUP (.bak) file, while still being able to restore to a point in time from the same physical file?
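There is no supported way to delete individual backup sets from inside an existing .bak file; what you can do is inspect what has accumulated and periodically start a fresh file. A small sketch (the path is hypothetical):

    -- each row returned is one backup set appended to the file;
    -- the Position column is what RESTORE ... WITH FILE = n refers to
    RESTORE HEADERONLY FROM DISK = N'D:\Backups\MyDb.bak'

A common pattern is to overwrite the file on a cycle boundary (BACKUP ... WITH INIT, e.g. weekly) and let the maintenance plan delete the old physical files, which keeps point-in-time restores possible within each cycle.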
I have a database whose log file is 10 MB, and it goes into single-user mode automatically. I tried to increase the log file size from 10 MB to 50 MB, but I actually want it to be only 20 MB. I am unable to change it, since I get the message "cannot decrease the size of the file". Is there another way to decrease the size of the log file?
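ALTER DATABASE cannot make a file smaller than its current size; DBCC SHRINKFILE is the command for that. A minimal sketch, assuming the database is named MyDb and the log's logical name is MyDb_log (check sysfiles for the real name):

    BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn'   -- frees space inside the log first
    USE MyDb
    DBCC SHRINKFILE (MyDb_log, 20)                         -- target size in MB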
I have another annoying problem. The MDF file on one of the machines is growing really fast. We zip the mdf/ldf files every day from all the machines in the data entry department. On this particular machine, the mdf file grows by about 1GB per day; however, when the file is zipped, the zipped file size comes out close to the zipped files from the other machines.
I'm having a problem. When I use a SQL query to back up the database, it works fine, but every time I run it, the backup file keeps growing in size. Say I have a file, test.bak, whose size is 450 MB; when I run a new backup to overwrite the existing test.bak, it ends up at 900 MB. If I run it again, it becomes 1350 MB, and so on.
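This is the default append behaviour (NOINIT): each BACKUP adds another backup set to the same file instead of replacing it. WITH INIT overwrites the existing sets, so the file stays at roughly the size of one backup. A sketch using the file name from the post (the database name and path are assumptions):

    BACKUP DATABASE MyDb TO DISK = N'C:\Backups\test.bak' WITH INIT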
We have a nightly application that, when run during the SQL backup, caused a single table in a 7GB database to grow to 13GB. Total database size reached 20GB when the disk array ran out of space. The table only contained 661,000 records and should have been less than 100MB.
We have a problem with the size of the tempdb.mdf file. The tempdb had grown to 25GB and consumed all the available disk space. SQL Server was restarted and tempdb was reset back to its default size. The following day tempdb suddenly increased from 200MB to 25GB within a very short space of time. There were a couple of event log entries from SQL Server regarding the lack of disk space. Since then the server has been running without any problems, but the level of free space is virtually zero on the drive holding the tempdb.mdf file.
What would cause the tempdb to grow suddenly and to this size?
I want to truncate the logs of my SharePoint config database and the WSS_Logging database. The SharePoint_Config database log is growing at a pace of ~10GB every week. I have scheduled a weekly full backup. The current .ldf file size is 113GB.
I am using SQL Server 2012 with the AlwaysOn high availability feature. I am not able to set the recovery model from Full to Simple, as it gives me a message that mirroring is running on both servers.
What do I need to do in my case to reduce the log file size?
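A database in an AlwaysOn availability group must stay in the FULL recovery model, so switching to SIMPLE is not an option; the usual remedy is frequent transaction log backups (which allow the log to truncate internally), followed by a one-off shrink. A sketch, assuming the logical log file name is SharePoint_Config_log and a hypothetical backup path:

    BACKUP LOG [SharePoint_Config] TO DISK = N'E:\Backups\SharePoint_Config.trn'
    GO
    USE [SharePoint_Config]
    GO
    -- one-off shrink once the log backup has freed internal space; target in MB
    DBCC SHRINKFILE (N'SharePoint_Config_log', 10240)

Scheduling log backups every 15-30 minutes is what stops the .ldf growing back toward 113GB.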
I'm aware of the issues with setting your log file growth increment too low (causing too many VLFs, etc.), but I haven't seen much about the data file side of it.
Are there any benchmarks specifically on setting data file growth that low (on databases 1-100GB in size)? Are there circumstances on well-utilized servers where that might be warranted?
Is there a good starting point for understanding, for a specific database, the maximum number of VLFs that is acceptable before it causes long startup or backup times?
Also, I need some calculation so that I can identify the best growth parameter to set for each database.
I'm seeing the message below in the error log and am curious to know what changes (right-sizing/growth) should be made. As of now, the log file growth increment is set to 100 MB (refer: [URL] ....)
Database BizTalkMsgBoxDb has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.
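You can count the VLFs with DBCC LOGINFO (it returns one row per VLF), and the standard fix is exactly what the message suggests: shrink the log, then regrow it in a small number of large increments. A sketch; the logical log file name and the 8GB target size are assumptions:

    USE BizTalkMsgBoxDb
    GO
    DBCC LOGINFO                                 -- number of rows returned = number of VLFs
    GO
    BACKUP LOG BizTalkMsgBoxDb TO DISK = N'E:\Backups\BizTalkMsgBoxDb.trn'
    DBCC SHRINKFILE (N'BizTalkMsgBoxDb_log', 0)  -- shrink as far as possible
    ALTER DATABASE BizTalkMsgBoxDb
        MODIFY FILE (NAME = N'BizTalkMsgBoxDb_log', SIZE = 8192MB)  -- regrow in one step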
We have a database that's growing pretty fast because of firewall logs, and we need the data available via an ASP.NET application. I don't have much SQL experience beyond installing it and doing some back-end development, so I'm wondering: is there a general rule of thumb for the database size at which you should start breaking it out into smaller segments? If so, what are some good practices?
I have a database that seems to have grown out of control. I have tried deleting tables, but that has not really reduced the size. What could have caused the database to grow this big, and what can I do to reduce its size? I have backed up, truncated the logs, and run the shrink database command, all to no avail. Please help.
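Before shrinking again, it's worth finding out whether the space is in data, indexes, or simply unallocated; stale allocation counts can also make the numbers lie. A sketch using sp_spaceused plus the undocumented sp_MSforeachtable helper:

    EXEC sp_spaceused @updateusage = N'TRUE'          -- whole-database figures, with corrected counts
    EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''  -- per-table reserved/data/index/unused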
Does anyone know at what point SQL Server 7.0 decides to grow the database when the autogrow option is set? Our site just went down for 45 minutes because the growth process was taking too long compared to the rate of incoming data, so the device filled up.
Ray? Craig? You guys seem to know all, so jobs.com appreciates your input...
I have a question about automatic database growth in SQL Server 7. It seems to me that the database will only autogrow when it's getting really full, maybe above 90% full. Even if you manually increase the DB size, the increased size is only kept for a very short time, and then the DB goes back to the smaller size again. If you have any suggestions, I would really appreciate them. Thanks
I have a database that is growing by 40 to 50 megs a day. I understand that the '_WA' objects are statistical information for query performance and not indexes, but does anybody know how much disk space is actually used by these objects? I do have 'auto create statistics' and 'auto update statistics' turned on.
For the first time in my long life as a SQL DBA I am seeing such behaviour: my tempdb database has been growing every damn second since this morning. It has now reached 30GB, while the log file is practically empty (217 MB).
We use SQL 2000 Enterprise on Windows 2000 Advanced Server, running Siebel Call Center (version 7.5) with about 300 users.
From time to time some users obtain, and hold, a huge number of exclusive locks on tempdb extents.
Hi all. I'm currently maintaining 4 servers: 1 for public/customers and 3 for backups, development, etc. I regularly back up the entire SQL database for our public server and restore it on each of the other servers. Lately, however, the database backups have grown (in size) incredibly fast; they've gone from about 200MB to 2+ GB in 2 months. (I wasn't entirely surprised by this at first, since our client traffic has drastically increased as well.) The weird thing, though, is that (on two of the backup servers) when I restore the backup and then use those servers to create a new complete backup, the new backup is only about 200-300 MB in size. My assumption is that there's some kind of setting buried deep inside the SQL configuration allowing it to compress or otherwise alter backups. Does anyone have any ideas/thoughts as to what may be causing this issue? We're using SQL Server 7 on Windows 2000 servers. Thanks in advance. Gregg
I'm a beginner with SQL Server databases, and my problem is this:
I'm building a database whose front end is an Access project. The database has several stored procedures, views, and user functions (the usual), but only a little data (just experimental data). Last night I noticed that the file grew from 22 MB to 89 MB; the objects are the same and so is the data. The only difference was that in an event procedure I forgot to include the ADO "MoveNext" method used to update various records, so the loop was infinite. Is it possible that SQL statements generated by ADO made the file grow so rapidly? If so, how can I shrink it? I've tried, and the result was only 4%.
insert into DB_Growth (Database_Name, Logical_File_Name, File_Size_MB, Growth_Factor)
exec (@l_sql_string)

fetch next from db_name_cursor into @l_db_name
end

close db_name_cursor
deallocate db_name_cursor

select * from DB_Growth with (nolock)

if object_id('DB_Growth') is not null
    drop table DB_Growth

set nocount off
set ansi_warnings on
return
We're a very small company. I'm the “technical” director who has evolved enough skill in a wide variety of tasks (network setup, machine config, email systems, HTML, ASP, databases...), and then one day you notice that parts of the system are getting way more complex and troublesome than layman knowledge can cope with... Well, I think I've got to that point, and I need some outside help to get our system to the next level!
OK, some rough details to start with. We run a small but fast-growing vehicle tracking system that sends back a LOT of data via GPRS to our SQL 2000 Enterprise server, hosted on a dedicated server in London. The physical machine is a P4 3.2GHz dual-core Dell rackmount with 2GB RAM and 2 x 76GB SCSI disks in a RAID 1 array, partitioned into a 15GB C: partition and a 51GB D: partition. The system paging file is set to 1536MB and is on the C: partition. The server is used for everything we do... it runs the SmarterMail email server (only about 5 or 6 domains and a few users, hardly used at all), SQL Server as mentioned, the web server, and the proxy software that receives incoming data from our tracking devices.
There are 9 or 10 active databases on the SQL server. 8 of them take up less than a gigabyte between them and are sparingly used. The main “active” database on the SQL server is the tracking system – and this is big... As our tracking devices send in data every 10 – 30 seconds, the database is hit with hundreds of thousands of events per day. On a weekday, some half a million rows of data are written to the main “events” table on the database. Over 7 days from 26th November to 2nd December, almost exactly 3 million rows of data were written to the events table. We undertake to hold 3 months or so of data “live” for our customers and I periodically archive data off. I’ve been too busy to archive recently and the database is holding data on the events table going back to July 1st. The physical .mdf file is just under 30GB on partition d: at present. The plan is to drop the active data stored to only 1 – 2 months, but this still leaves a 12GB .mdf file.
The worrying thing with this is that this is only 700 or so devices writing to us at present... we aim to have thousands out there soon! We are looking into how we can hugely improve system performance and look to the future. Our hosting company is recommending VMWare virtual servers and SAN storage, but I’m not entirely sure that is the best way forward.
Our non-technical MD thinks the way forward is to have one database per customer, and he can't understand when I tell him I think that's bad, as it will create all the system tables and bits & pieces for EVERY customer, right? Also, it would be a nightmare to add a new column to a table, as I'd have to update every single version of the database too... I want to avoid this unless I'm missing something and this is actually the best way forward?
Someone has mentioned horizontal partitioning to me; I'm not sure what implications that has for coding and table naming. Or is it all one big database spread among separate servers?
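On SQL 2000, horizontal partitioning is usually done with a local partitioned view: one physical table per time slice, each with a CHECK constraint on the partitioning column, unioned behind a view so application code keeps querying a single name. A sketch with hypothetical table and column names:

    CREATE TABLE dbo.events_2005_11 (
        event_id   int      NOT NULL,
        device_id  int      NOT NULL,
        event_time datetime NOT NULL
            CONSTRAINT ck_events_2005_11
            CHECK (event_time >= '20051101' AND event_time < '20051201'),
        CONSTRAINT pk_events_2005_11 PRIMARY KEY (event_time, event_id)
    )

    CREATE TABLE dbo.events_2005_12 (
        event_id   int      NOT NULL,
        device_id  int      NOT NULL,
        event_time datetime NOT NULL
            CONSTRAINT ck_events_2005_12
            CHECK (event_time >= '20051201' AND event_time < '20060101'),
        CONSTRAINT pk_events_2005_12 PRIMARY KEY (event_time, event_id)
    )
    GO

    -- queries filtered on event_time only touch the relevant member table
    CREATE VIEW dbo.events_all AS
        SELECT event_id, device_id, event_time FROM dbo.events_2005_11
        UNION ALL
        SELECT event_id, device_id, event_time FROM dbo.events_2005_12
    GO

Archiving a month then becomes dropping (or moving out) one member table and redefining the view, instead of deleting millions of rows.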
Currently our server is drowning in disk access, and it's only going to get worse... any suggestions or links to online reading would be great, thanks!
I've got a question about the automatic database growth feature of V7. Here's an example:
I have a 1GB db that can grow to a max size of 2GB. I set the auto-grow option to 75%. The first time the db grows, it will grab 75% of the free space (1GB).
What happens if the database needs to grow again?
Will the db grow using the remaining free space (25%) or has the database reached its max size because it can't grow any further?
I am trying to find a way to calculate my DB growth every day. I found a script on a site, but it seems to give me the same information as the Taskpad, which is not very specific. Basically, I would like to know the size of a table in MB (or whatever conversion is possible) so that I can do some forecasting.
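sp_spaceused reports per-table sizes; capturing its output into a dated history table gives you daily numbers to forecast from. A sketch (the growth_history table is hypothetical; sp_spaceused returns sizes as strings such as '1234 KB'):

    CREATE TABLE dbo.growth_history (
        sampled_at  datetime DEFAULT GETDATE(),
        table_name  sysname     NULL,
        row_count   varchar(20) NULL,
        reserved    varchar(20) NULL,   -- e.g. '1234 KB'
        data        varchar(20) NULL,
        index_size  varchar(20) NULL,
        unused      varchar(20) NULL
    )
    GO
    -- sp_MSforeachtable (undocumented) runs sp_spaceused once per table;
    -- INSERT ... EXEC captures every result set it returns
    INSERT dbo.growth_history (table_name, row_count, reserved, data, index_size, unused)
    EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''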
Hello, I need to monitor growth in the data file and log file every 15 minutes. Since the mdf and initial file sizes are set to high values, measuring those values at 15-minute intervals will not show the change in size. My intention is to measure log file growth, which helps to calculate the disk space and bandwidth required to set up log shipping. We need to set up this infrastructure based on this calculation. Thanks, M A Srinivas
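Since the physical files are pre-allocated, the number to sample is the space used inside the log, not the file size. DBCC SQLPERF(LOGSPACE) reports log size (MB) and percent used for every database; capturing it every 15 minutes from a SQL Agent job gives the growth rate needed for the log-shipping sizing. A sketch (the history table name is hypothetical):

    CREATE TABLE dbo.log_space_history (
        sampled_at  datetime DEFAULT GETDATE(),
        db_name     sysname NULL,
        log_size_mb float   NULL,
        pct_used    float   NULL,   -- percentage of the log file currently in use
        status      int     NULL
    )
    GO
    -- schedule this INSERT as a SQL Agent job step running every 15 minutes
    INSERT dbo.log_space_history (db_name, log_size_mb, pct_used, status)
    EXEC ('DBCC SQLPERF(LOGSPACE)')

log_size_mb * pct_used / 100, compared between samples, approximates the log generated in each interval (as long as nothing truncates the log in between).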