Auto-grow Option Isn't Working
Jul 20, 2005
We're using SQL 2000 on Windows 2000 Server, but this is a problem
we've had on one particular database since SQL 7 on NT4.
The database in question is set to autogrow by 10% (currently sitting
at 31 GB total size). However, last week users complained of a
slowdown in performance. When we checked, we found that only 14 MB was
free in the database (we thought it would've grown automatically
before then), and when we manually added an extra 1 GB, performance
picked up.
Does SQL Server wait until all the space is used up (i.e. 0% free)
before autogrowing? Even then, we've never actually had this
database grow automatically - we've always had to add space manually.
Settings on this database, and one that does grow automatically,
appear to be the same (have also checked via sp_helpdb). So where
does the problem lie?
Any help you can give would be greatly appreciated.
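One thing worth checking in the SQL 7/2000 era: a data file whose maxsize is set to 0 cannot grow at all, whatever the growth increment says. A quick sketch against the sysfiles system table (run it in the problem database):
-- maxsize: 0 = no growth allowed, -1 = unrestricted (values are 8 KB pages).
-- growth is a percentage when the 0x100000 status bit is set, otherwise a page count.
SELECT name, size, maxsize, growth, status
FROM sysfiles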
View 1 Replies
Jan 15, 2005
I've been through the forum and read a number of threads on people's DBs not growing, and the answer usually is that they don't have "automatically grow data file" enabled. Unfortunately I do have this on, but when I look at the properties of the database it reports the space available as 0.00 MB. Up until about two weeks ago I was showing approximately 48% space utilization. When I ran an SP to show growth, it told me the database was expanded by 20% yesterday, but SQL Server is still telling me the space available is zero.
The log file is also set for auto growth. The DB is 14.5 GB in size and the drives still have around 92 GB of space.
Has anyone experienced this before? Any ideas? Does anyone know of any SPs that can give me detailed info on internal data file usage compared to stated size (i.e. wasted space in the data file)? Is SQL Server doing something funny in the way it sees the database or the data files individually? Any help is appreciated.
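For the detailed file-level numbers, one option on SQL 2000 is FILEPROPERTY, which reports internally used pages per file; and a stale 0.00 MB figure can sometimes be corrected by recomputing usage with sp_spaceused. A sketch:
-- Allocated vs. internally used space per file; both raw values are 8 KB pages.
SELECT name,
       size / 128.0 AS allocated_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS used_mb
FROM sysfiles

-- Recompute the usage figures the GUI reports (can take a while on a 14.5 GB DB).
EXEC sp_spaceused @updateusage = N'TRUE'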
View 6 Replies
Feb 9, 2000
Hi,
I have a SQL 7 database in which I have set autogrow on. I need some way to be notified when the database does an autogrow. The reason is that if it autogrows once, then if I am notified I can manually expand the DB size without having SQL Server do multiple autogrows. I was looking at setting up an alert but cannot find any message in sysmessages that looks like an informational message for autogrow. Has anyone done this kind of thing?
Thanks
Venkat
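SQL 7 has no documented autogrow message to alert on, so one workaround is a scheduled job that compares the current file size against a stored baseline and raises a logged custom error that an alert is attached to. A rough sketch (the baseline table, logical file name, and message number are all hypothetical):
-- One-time setup: a custom message an alert can be defined against.
EXEC sp_addmessage 50001, 10, 'Data file has grown since the last check.'

-- Scheduled job step, run in the database being watched:
DECLARE @pages int
SELECT @pages = size FROM sysfiles WHERE name = 'mydb_data'   -- hypothetical logical name
IF @pages > (SELECT last_size FROM dbo.FileSizeBaseline)      -- hypothetical baseline table
BEGIN
    RAISERROR (50001, 10, 1) WITH LOG
    UPDATE dbo.FileSizeBaseline SET last_size = @pages
END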
View 2 Replies
Jul 23, 2005
SQL 2000. I thought I would throw this out there for some feedback from others. I'd like to know if you feel using the MS auto-increment field is a good solution these days or should one grow their own?
Thanks,
Me.
View 11 Replies
Jul 20, 2005
I made a database to hold recordings of calls made to our customers. When I made it I set the size of the primary data file to 18 GB. It's been running flawlessly for over 10 months. A few days ago the users were suddenly no longer able to save the recordings to the database. They got an error message to the effect that the timeout had expired. The failure occurred on the .Execute statement of the Command that calls the stored procedure.
I noticed that the data had reached the size allocated for the file. The file was set to auto-grow (5%). However, since I couldn't find anything else wrong, and since the test version of the database (which only has 15 GB of data in an 18 GB-dimensioned file) did not exhibit the same behavior, I decided to try increasing the size of the file with an ALTER DATABASE statement. I increased it to 21 GB. Lo and behold, the problem disappeared.
Here's what I think might be going on: the default timeout for the ADO Command object is 30 seconds... this is probably not long enough for SQL Server to add 900 MB to the data file, therefore the Command timeout expired. So from now on, instead of relying on auto-grow, I'm going to just make sure the data file always has plenty of headroom. FWIW.
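For anyone scripting the same headroom approach, the manual pre-grow described above is a one-liner (database and logical file names below are hypothetical); the alternative would be raising the ADO Command's CommandTimeout property above its 30-second default so a mid-call autogrow has time to finish.
-- Pre-grow the data file well ahead of demand so auto-grow never fires
-- in the middle of a client call.
ALTER DATABASE CallRecordings                          -- hypothetical name
MODIFY FILE (NAME = CallRecordings_Data, SIZE = 21GB)  -- hypothetical logical name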
View 1 Replies
Jul 3, 2001
Hello all!
I have a problem with my database. Till yesterday the Auto Grow option of the database (10%) was working fine, but now there seem to be some problems with it. Finally I had to specify a restricted size for the database, and then it again started to give me some space in the database to write in. Ideally it should have worked automatically, shouldn't it?
There is no problem with space on the drive; I still have some 76 GB of free space there ...
Thanks in advance ...
Anjä
View 1 Replies
Jan 31, 2011
Currently I am working on SSRS 2008 R2. The issue is that it is wrapping long words instead of growing, even though I set the CanGrow property to True.
How do I prevent the word wrapping?
For example, the column will have the word "information" in it. Instead of the column showing:
information
it shows:
informatio
n
The "n" gets wrapped to the next line. Is there a way to prevent this from happening?
View 4 Replies
Feb 22, 2012
Is Auto Shrink a good option where the database is very critical and there is no downtime?
View 4 Replies
Aug 3, 2007
I'm currently using SQL Server 2005. I had previously set my database to unrestricted autogrowth, but today I noticed that the log file has been set to limit its growth to 2,097,152 MB. I have 160 GB of space for my log files, and I just want to maximize the space available for logs on my hard drive.
When I try to change the setting back to autogrowth it keeps returning to its previous value; it is still set to 2,097,152 MB. What I did was:
Right-click on the database - Properties - Files - click the (...) - set the autogrowth option to unrestricted - click OK.
But when I check the log file, it is still set to 2,097,152 MB.
Can someone help me change this setting on my database?
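For what it's worth, 2,097,152 MB (2 TB) is the engine's maximum size for a log file, so that figure is typically what "unrestricted" log growth displays as. If the dialog won't cooperate, the T-SQL equivalent is below (database and logical file names are guesses; check sp_helpfile):
ALTER DATABASE MyDb                                  -- hypothetical name
MODIFY FILE (NAME = MyDb_log, MAXSIZE = UNLIMITED)   -- hypothetical logical name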
View 6 Replies
Dec 28, 2007
Is there any option to auto-fit the cell size of a table in SSRS 2005?
Thanks
View 7 Replies
Mar 19, 2003
I have a ton of recompiles happening from a trigger:
INSERT INTO tableA (col1, col2)
SELECT col1, col1
FROM INSERTED
OPTION (KEEPFIXED PLAN)
I added OPTION (KEEPFIXED PLAN) and it is not working; I am still getting recompiles in the Profiler trace. The table with the trigger on it gets a huge number of inserts, updates and deletes all day long.
Does anyone have any ideas? I know triggers are expensive, but we want to keep this trigger (I don't have the option of rewriting it another way).
Thanks!!
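For reference, a minimal sketch of the trigger as described, with the OPTION clause attached to the INSERT...SELECT statement inside the body (trigger and source table names are hypothetical):
CREATE TRIGGER trg_source_ins ON dbo.sourceTable   -- hypothetical names
FOR INSERT
AS
INSERT INTO tableA (col1, col2)
SELECT col1, col1
FROM INSERTED
OPTION (KEEPFIXED PLAN)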
View 1 Replies
Oct 25, 2011
I am trying to push the install for Report Builder 3.0 and am having an issue with the REPORTSERVERURL option for installing via the command line. I have a batch file that works fine; however, when I launch the app it does not have a report server configured. I have verified I can connect to my report server if I enter it manually.
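For comparison, a silent install passing the property the post names would look something like the line below. The MSI file name and URL are placeholders, and the property name is taken from the post itself rather than verified documentation:
msiexec /i ReportBuilder3.msi /qn REPORTSERVERURL="http://myserver/reportserver"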
View 3 Replies
Dec 18, 2007
Has anyone had experience using Parent/Child packages while enlisting them in transactions?
I tested this on a small sample and thought that I had got it to work, but in my real-world package it does not.
The parent package essentially calls three child packages.
In each child package there are multiple DFT's that import and transform data into SQL Server.
All data must be imported or not at all.
Therefore I created a FELC container into which three Exec child package tasks were placed.
The FELC is set to TransactionOption 'Required' and the Exec child package tasks to 'Supported'.
Unfortunately upon failure of one of the DFT's in the child the data was not rolled back.
So initially we had, in terms of container hierarchy for the TransactionOption property:
Parent package: Supported
FELC for calling child packages: Required
Task execute child package: Supported
Child package: Supported
Tasks: Supported
Looking at this more closely, we thought that we would need:
Parent package: Supported
FELC for calling child packages: Required
Task execute child package: Required
Child package: Required
Tasks: Supported
for it to work.
However, the latter now gives us failures with error messages on the tasks on the child packages.
[Execute SQL Task] Error: Failed to acquire connection "Conn ECARS1CEDImport". Connection may not be configured correctly or you may not have the right permissions on this connection.
Even stranger, the first couple of tasks in the child package complete successfully even though they use the same connection listed in the error.
These tasks also have Event handlers.
View 7 Replies
Aug 21, 2015
I am trying to change a variable value at run time in an SSIS 2012 package using the DTEXECUI utility, but I cannot see any change happening to the variable value in the config file, and data is not getting populated in my table as per the new variable value.
What is the right syntax or method for dynamically changing a variable value, either through DTEXECUI or a DTEXEC command prompt command?
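The dtexec route for this is the /Set option with the variable's package path, as sketched below (the file path and variable name are placeholders). Note that /Set overrides the value for that execution only; it does not write anything back to a configuration file, which may be why no change shows up there.
dtexec /File "C:\packages\MyPackage.dtsx" /Set "\Package.Variables[User::MyVariable].Value";NewValue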
View 2 Replies
Dec 29, 2007
The auto stats are not working.
I have both Auto Update Statistics and Auto Update Statistics Asynchronously set to True
Created a little test table.
USE [TEST]
GO
/****** Object: Table [dbo].[CUSTOMER] Script Date: 12/29/2007 10:42:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[CUSTOMER](
[Customer_Id] [nchar](10) NOT NULL,
[Customer_Name] [nvarchar](1000) NULL,
[Customer_Address] [nvarchar](1000) NULL,
[Customer_Address1] [nchar](1000) NULL,
CONSTRAINT [PK_CUSTOMER] PRIMARY KEY CLUSTERED
(
[Customer_Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
-- Then run an insert script to load 1000 rows
DECLARE @COUNT INT
SET @COUNT = 1
WHILE @COUNT <= 1000
begin
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
I then look at Tables > Statistics; the statistics are empty, so I run UPDATE STATISTICS and see 1000 rows in there.
I then run the insert script again:
DECLARE @COUNT INT
SET @COUNT = 1001
WHILE @COUNT <= 2000
begin
insert into CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
VALUES (@COUNT, '12345678901234567890')
SET @COUNT = @COUNT + 1
END
Looking again at the statistics, they do not show 2000 rows.
If I do SELECT * FROM CUSTOMER WHERE CUSTOMER_ID = '2000' and then go check the statistics, it works.
I was under the impression that when you do an insert, delete, or update, the statistics are updated.
The sys.sysindexes rowmodctr shows the 1000 row modifications.
I checked the conditions under which SQL fires the auto update: if the number of rows in the table is between 6 and 500, stats are updated after 500 modifications.
Also, if rows > 500, the auto update is done when 500 + 20% of the rows have been modified.
So both conditions are met.
Does anyone have any other suggestions about the auto stats?
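One way to watch this without the GUI is STATS_DATE, which reports when each statistics object was last updated; a sketch against the test table above:
SELECT name AS stats_name,
       STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.CUSTOMER')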
View 7 Replies
May 29, 2014
I am running a query on a SQL 2012 server with Resource Governor set up so that my account has the MAXDOP option set to 1.
The query still runs in about 1 minute and the execution plan still considers parallelism.
When I explicitly add OPTION (MAXDOP 1), the query runs in 6 seconds.
How can I tell by querying DMVs whether my query is using parallelism or not?
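One rough check while the query is running in another session: count the worker tasks attached to that session via sys.dm_os_tasks. A sketch (55 stands in for the running query's session_id; more than one task for a single request implies a parallel plan):
SELECT session_id, COUNT(*) AS task_count
FROM sys.dm_os_tasks
WHERE session_id = 55   -- hypothetical session_id of the running query
GROUP BY session_id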
View 2 Replies
Apr 27, 2008
Hi everyone. In my SQL Server Management Studio Express, on start-up it shows the server type option, but greyed out, so that value is fixed to Database Engine. (I'm trying to work on a SQL Server Compact Edition database through SSMS; that's why I'm trying to get this to change.) Also, after I connect I go to Object Explorer, expand the server node, and go to Replication. When I expand Replication, I get the "Local Subscriptions" option, but nothing for Publication. (I want to work on Merge Replication; that's why I desperately need Publication to work.) Am I missing something here? I did not install SQL Server separately; I only have what comes bundled with the Visual Studio 2005 setup.
View 2 Replies
Feb 16, 2014
Since upgrading from SQL Server Management Studio 2008 R2, I've noticed that it no longer autosaves queries that have not been manually saved first. If a file has been manually saved the autorecover files end up in the following directory:
%appdata%\Microsoft\SQL Server Management Studio\11.0\AutoRecover\Dat\Solution1
However, I have ended up in the situation where I have unsaved queries when my computer has crashed and have not been able to recover them.
I have also found references to .sql files stored in temp files in the following directory, but the files here seem to be very haphazardly caught:
%userprofile%\AppData\Local\Temp
View 2 Replies
Feb 10, 2015
So I started a new job recently and have noticed a few strange configurations. Typically I would never mess with the min memory per query and index create memory options, because I just haven't seen any need to. My typical thought is that if it isn't broke... Yet they have been modified on every single server in my environment.
From Books Online:
• This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
• The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
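If the eventual decision is to put them back, both are advanced sp_configure options, and their self-configuring defaults are 0 for index create memory and 1024 KB for min memory per query; a sketch:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'index create memory', 0        -- 0 = self-configuring
EXEC sp_configure 'min memory per query', 1024    -- default, in KB
RECONFIGURE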
View 3 Replies
Mar 24, 2000
Hi,
I know that SQL 7 grows databases dynamically, but I'm wondering how it determines how much to grow it by? I have a couple of databases on our servers that are 3.4 GB but with 1.6 GB space available. So I'm wondering when it determines it needs to grow a database and what it does to determine how much to grow it by.
Thanks,
Mike Gagne
View 2 Replies
Jun 27, 2007
Bear with me - my SQL Server 2005 maintenance knowledge is as good as a newbie's.
I was running a very large transaction over the weekend (say 10 million inserts).
After waiting 3/4 hours for the transaction to complete, I checked the LDF file; it had grown to 100 GB.
After that I discovered that I had the recovery model set to FULL. So I killed the job and changed the recovery model from Full to Simple.
Now I see that the LDF file is not growing in size even though many transactions have completed successfully (still very slow though)...
What am I missing here? I am clueless as to why my LDF is not growing in size.
Any Ideas??
View 4 Replies
Jan 23, 2004
I have an MS SQL Server table with a Job Number field. I need this field to start at a certain number and then auto-increment from there. Is there a way to do this programmatically or within MSDE?
Thanks, Justin.
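An IDENTITY column handles this: its seed argument sets the starting number. A sketch (table and column names are made up); for a table that already exists, DBCC CHECKIDENT can reseed instead:
-- New table: job numbers start at 5000 and increment by 1.
CREATE TABLE dbo.Jobs (
    JobNumber   int IDENTITY(5000, 1) NOT NULL PRIMARY KEY,
    Description varchar(100) NULL
)

-- Existing table: make the next inserted job number 5000.
DBCC CHECKIDENT ('dbo.Jobs', RESEED, 4999)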
View 3 Replies
Jan 13, 1999
OK. Here's a good one.
I wrote a query that caused a HUGE amount of stuff to be written to the transaction log. Since I set the database up before I had enough coffee yesterday, I didn't turn on "Restrict Filegrowth" on the log. So the transaction ran until it filled up the available space on the drive (my local workstation, so it grew to about 6 GB) and then it rolled back. (BTW: Microsoft finally figured out that rolling back a transaction shouldn't be a blocking operation. ISQLW tells you that the transaction failed as soon as it fails, and then releases the connection to you, so you can go on with your life while SQL Server cleans up. Good one!)
OK. So that done, I figured I'd just truncate the transaction log and do the nifty new "DBCC SHRINKFILE()" thing. So I truncate the log and do DBCC SHRINKFILE. Nothing happens. Enterprise Mangler (OOPS Manager. I really mean Manager) shows that only 43 MB of the 6 GB file is in use. DBCC SHRINKFILE reports that the minimum size is 128 pages, the current size is 697,256 pages, and 697,256 pages are used.
Great. So I can't shrink the file.
Step 2: I dump (OOPS, sorry, BACKUP) the database, delete the database, make sure all the files are gone, and then restore the database. It re-creates the 6 GB file, which, by the way takes a very long time. What's funny about that is the query timer in ISQLW reports that the query took 30 minutes, but the return from the restore command shows that it took about 300 and some seconds (about 5 minutes) because the restore command doesn't count the amount of time it took to build the files (I'm guessing). After I figure out that it rebuilt the 6 GB file, I screamed, and started downloading PostGreSQL for my Linux box, and got on to other projects.
This morning I came in and started reading Books Online to figure out what's going on. It says something about "Virtual Log Files" and how a log can't be shrunk past that point. Great. MS basically defines a virtual log file as "the point past which you can't shrink a log". So I have a 5 GB virtual log file, and I can't truncate it, shrink it, or make it go away.
So I have a stroke of genius and decide to build a new log file in the database, and then use the DBCC SHRINKFILE command with the EMPTYFILE option, and then use ALTER DATABASE to remove the file.
Then I get this really cool error that says:
Server: Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.
OK. Last time I checked, a log file doesn't belong to a primary filegroup, so there's something else going on here. Basically, it looks like the first file that gets created is the "Primary" file and can never be removed.
So, new policy, every "first" file in a database is going to be a 2 MB file, with a 2 MB growth limit, so we can remove it later. That's a load of....fertilizer.
It looks like the AutoShrink for logs is just a myth. Auto-Grow seems to work almost too well, though. I'm picturing one of those Access newbies using the Export function in Access to put data into SQL Server on one of our pre-production boxes, and having a 180 GB log that can't be shrunk. That'll be a good time.
The moral of the story: Always set growth restrictions, especially on log files.
The questions:
1. Anybody got any bright ideas on how I can get my disk space back WITHOUT using BCP (or DTS, or similar methods)?
2. Anybody know how a different file can be set as a "PRIMARY" file?
3. Anybody know why MS decided to fill the Transact-SQL help in ISQLW with "You can't get there from here" messages that reference Books Online?
Thankfully, this isn't anywhere in our production system, and if the quality continues this way, it won't ever be in our production system.
chris.
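On question 1: the era's usual sequence was the one below (a sketch; names hypothetical), with the caveat the poster hit head-on: SHRINKFILE can only release space beyond the active virtual log file, so on SQL 7 you sometimes had to run filler transactions to cycle the active VLF back toward the front of the file before the shrink would take.
BACKUP LOG MyDb WITH TRUNCATE_ONLY   -- discard the inactive portion of the log
DBCC SHRINKFILE (MyDb_log, 128)      -- target size in MB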
View 1 Replies
Jan 24, 2003
Hardware:
IBM Netfinity 8500
2 processors Xeon 700
1.5 GB memory
Windows 2000 Server SP2 Build 2195
SQL Server 2000 Standard Edition 8.00.534 SP2
There is only one database (DB) of 16 GB on drive G.
Drive G has 32 GB of free space.
Yesterday we appended tables to the database and the following error appeared in the SQL logs:
2003-01-23 12:26:42.57 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2.
2003-01-23 12:26:42.61 spid101 Error: 1105, Severity: 17, State: 2
2003-01-23 12:26:42.61 spid101 Could not allocate space for object 'ttdssc030104' in database 'MYDATABASE' because the 'PRIMARY' filegroup is full..
2003-01-23 12:26:48.03 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2.
The DB is configured to grow automatically by 100 MB and the transaction log to grow automatically by 10%.
Unrestricted file growth is selected on both.
I tried to expand the DB manually to 20 GB through Enterprise Manager, but it did not work, and this error appears in the SQL log:
"2003-01-23 12:26:48.03 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2."
In Enterprise Manager > Databases > Properties > General, the size of the DB remains 16 GB.
Windows Explorer says MYDATABASE.MDF is 20480 MB.
I deleted the inserted tables and the problem persists.
Thanks in advance,
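It may also be worth retrying the expansion through T-SQL rather than Enterprise Manager, to see whether the same GetOverlappedResult error comes back outside the GUI (the logical file name below is a guess; check sp_helpfile):
ALTER DATABASE MYDATABASE
MODIFY FILE (NAME = MYDATABASE_Data, SIZE = 20480MB)   -- logical name hypothetical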
View 6 Replies
Jan 8, 2004
Hello everyone,
I have a 45 GB DB with:
- automatically grow file by 10%
- full recovery
- log shipping every 5 minutes
- full backup every 24 hrs
The database has grown from 33 GB to 45 GB over a one-year period, with massive inserts done to the database 4-5 times a year (no specific dates).
If I change autogrow to 300 MB or 4%:
1. How would it affect the insert process?
2. How would it affect daily performance?
Thank you
Alex
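Mechanically, switching from percentage growth to a fixed increment is one statement per file (database and logical file names below are hypothetical):
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 300MB)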
View 3 Replies
Oct 1, 2004
What is the best option to set for File Growth?
Is it in megabytes or by percent?
View 3 Replies
Dec 4, 2007
Hi All
I am a bit confused about how data and log files grow in databases. Suppose I turn off autogrow and restrict the max size to some limit, while the size of the data/log file is still below that max size (since file sizes keep changing depending on the activity in the database). In the future, if the data/log file needs to grow, can it grow up to the max size without turning autogrow back on?
regards
View 4 Replies
Oct 9, 2007
Hello,
I'm running a long and heavy query. While it runs, the DB's log file grows to more than 20 GB, and consequently I'm running out of disk space. Is there a way to restrict the log file size without damaging my query?
Thanks.
View 9 Replies
Oct 26, 2006
I've got a little console app that basically pulls back a recordset from our SQL Server 2005, goes through each row in the dataset, and may or may not insert a record into a different table in the database. We use sprocs for every transaction and I close every connection in the application. However, when the application ends, I still see connection pools open in the performance monitor. Same with websites that I know have no traffic or that have been stopped by me in IIS. Last night I showed a total of 6000+ "Current # pooled and nonpooled connections". Should I be worried about what seems to be unending growth in the connection pools? If so, how can I manage this better?
View 2 Replies
Nov 25, 2005
Hi all,
I'm having a problem with one of our DBs because we didn't run the maintenance plan from the beginning. The thing is that the hard drive is out of space and the log files are around 100 GB. We only have 20 MB free. Do you think that is enough space to run the maintenance plan or the shrink command?
Thanks very much!!!
View 1 Replies
Mar 20, 2004
My log file has grown until the disk is full - the log file is 25 GB and I have only 4 GB free.
I can't shrink the log file!
Can I set the log file to null?
I have backed up my data file successfully!
Help!
View 9 Replies
Jun 19, 2007
The primary database I'm responsible for has started to grow super fast. Every couple of days it grows by 10% (which matches the DB settings). But the recent growth doesn't match the historical growth: it took a couple of months to grow from 7 to 8 GB, yet it has grown to about 24 GB in the last 2 months. Bottom line - trust my assertion that it's growing alarmingly fast.
I need help determining what objects are fueling the growth. If I know the objects, I can probably determine the cause. On the flip side, it might be legitimate data stored very poorly. I'm open to any ideas... but I need to get ahead of this problem in the next week or so... or I'm going to run out of room on the hard drive and it could start to affect my users.
Please send me any ideas you might have.
Thanks,
alex8675
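One quick way to find the objects fueling the growth: run sp_spaceused across every table and sort by reserved space. A sketch using the undocumented sp_MSforeachtable:
-- Collect per-table space usage, largest consumers first.
CREATE TABLE #space (
    name       sysname,
    rows       varchar(18),
    reserved   varchar(18),
    data       varchar(18),
    index_size varchar(18),
    unused     varchar(18)
)
INSERT #space EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''
SELECT * FROM #space
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS int) DESC
DROP TABLE #space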
View 5 Replies