Connection Pools Constantly Grow
Oct 26, 2006
I've got a little console app that basically pulls back a recordset from our SQL Server 2005, goes through each row in the dataset, and may or may not insert a record into a different table in the database. We use sprocs for every transaction and I close every connection in the application. However, when the application ends, I still see connection pools open in Performance Monitor. The same goes for websites that I know have no traffic, or that I have stopped in IIS.
Last night I showed a total of 6000+ "Current # pooled and nonpooled connections". Should I be worried about what seems to be unending growth in the connection pools? If so, how can I manage this better?
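Before worrying, it can help to see who actually owns the open connections on the server side. A sketch against the standard SQL Server 2005 DMVs (nothing here is specific to the app in question):

SELECT s.program_name, s.login_name, s.host_name,
       COUNT(*) AS open_connections
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
GROUP BY s.program_name, s.login_name, s.host_name
ORDER BY open_connections DESC;
-- a pool that never shrinks shows up as a steady count for an application that should be idle

If the console app's rows disappear shortly after the process exits, the growth is on the counter side rather than real server connections.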
View 2 Replies
Oct 26, 2006
I know that best practice is to open late, release soon on connections. I recently went through my app and changed all SqlCommand executes to:

Try
    dbConn.Open()
    com.ExecuteNonQuery()
Finally
    dbConn.Close()
End Try

My question is: should I include some kind of fail-safe for my SqlDataAdapter.Fill requests? I know that the connection is closed when the Fill completes, but what if there's an error during the Fill? Should I write something like:

Try
    da.Fill(ds, "Results")
Finally
    If dbConn.State = ConnectionState.Open Then
        dbConn.Close()
    End If
End Try
View 1 Replies
View Related
Jan 15, 2005
I've been through the forum and read a number of threads on people's DBs not growing, and the answer usually is that they don't have "automatically grow data file" enabled. Unfortunately I have this on, but when I look at the properties of the database it reports the space available as 0.00 MB. Up until about two weeks ago I was showing approximately 48% space utilization. When I ran an SP to show growth, it told me the database was expanded by 20% yesterday, but SQL Server still says the space available is zero.
The log file is also set for auto growth. The DB is 14.5 GB in size and the drives still have around 92 GB of space.
Has anyone experienced this before? Any ideas? Does anyone know of any SPs that can give me detailed info on internal data file usage compared to stated size (i.e. wasted space in the data file)? Is SQL Server doing something funny in the way it sees the database or the data files individually? Any help is appreciated.
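On the question of an SP that compares internal usage to stated size: sp_spaceused reports exactly that, and a "0.00 MB available" reading is often just stale usage metadata that can be corrected first. A sketch (run inside the database in question):

DBCC UPDATEUSAGE (0);                      -- 0 = current database; corrects stale page/row counts
EXEC sp_spaceused @updateusage = N'TRUE';  -- reports data size, index size and unallocated space from the corrected counts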
View 6 Replies
View Related
Jul 19, 2007
The company I work for has a server running the following:
Opteron 246x2
2 Gig memory
320 Gig Sata 2 drives
Windows 2003 Standard Edition
SQL Server 2005 Express Edition. The Free one.
There are approx 10 users that connect to the server.
There are two programs which use SQL Server: ACT! 7.0 and Service CEO.
When the computer is rebooted, sqlservr.exe sits at 0% CPU.
Then, when everyone connects, it maxes out at 50% and stays steady there. Service CEO seems to affect it the most.
We were told by the company that we need to buy the full-blown SQL Server to resolve the problem, but I don't think that is the issue.
Questions: Is it common for a server to sit at 50% CPU all the time with SQL Server running?
And if it's not, is there a way to reduce SQL Server's CPU usage?
I am new to SQL Server and have done a lot of research and fixes. I've uninstalled and reinstalled all SQL instances and applied the suggested tweaks. Any fresh ideas would be great, thanks.
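One way to tell whether a steady 50% is one bad workload or just normal load is to ask SQL Server 2005 which statements have consumed the most CPU since startup. A sketch using the standard DMVs (works on Express too):

SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;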
John
View 3 Replies
View Related
Jun 15, 2007
Hi, I've been trying to figure out an error message I'm getting. The exact error message is: "Procedure addservicerequest has no parameters and arguments were supplied." It is raised on a SqlCommand, and what confuses me is that I've already created and added the parameters. Is there any advice to help me out with this problem? Here's the code for reference:

1   Dim sqlparadate As New SqlParameter
2   Dim sqlparacompany As New SqlParameter
3   Dim sqlparalocation As New SqlParameter
4   Dim sqlparacontact As New SqlParameter
5   Dim sqlparaphone As New SqlParameter
6   Dim sqlparadetails As New SqlParameter
7   Dim sqlparaso As New SqlParameter
8   Dim sqlparadrive As New SqlParameter
9   Dim sqlparawork As New SqlParameter
10  Dim sqlparaback As New SqlParameter
11  Dim sqlparaproblem As New SqlParameter
12
13  Dim sqlcon As New SqlConnection
14  Dim sqlinsert As New SqlCommand("addservicerequest", sqlcon)
15
16  'txttest.Text = caldateofservice.SelectedDate
17  If sqlcon.State = Data.ConnectionState.Closed Then
18      sqlcon.ConnectionString = "***"
19      sqlcon.Open()
20  End If
21
22  sqlinsert.CommandType = Data.CommandType.StoredProcedure
23  'sqlinsert.CommandText = "addservicerequest"
24  'sqlinsert.Connection = sqlcon
25
26  With sqlparadate
27      .SqlDbType = Data.SqlDbType.SmallDateTime
28      .Direction = Data.ParameterDirection.Input
29      .ParameterName = "@dateofservice"
30      .Value = caldateofservice.SelectedDate.ToString
31  End With
32  sqlinsert.Parameters.Add(sqlparadate)
33
34  With sqlparacompany
35      .SqlDbType = Data.SqlDbType.NVarChar
36      .Direction = Data.ParameterDirection.Input
37      .ParameterName = "@company"
38      .Value = txtcompany.Text
39  End With
40  sqlinsert.Parameters.Add(sqlparacompany)
41
42  With sqlparalocation
43      .SqlDbType = Data.SqlDbType.NVarChar
44      .Direction = Data.ParameterDirection.Input
45      .ParameterName = "@location"
46      .Value = txtlocation.Text
47  End With
48  sqlinsert.Parameters.Add(sqlparalocation)
49
50  With sqlparacontact
51      .SqlDbType = Data.SqlDbType.NVarChar
52      .Direction = Data.ParameterDirection.Input
53      .ParameterName = "@contactname"
54      .Value = txtcontactname.Text
55  End With
56  sqlinsert.Parameters.Add(sqlparacontact)
57
58  With sqlparaphone
59      .SqlDbType = Data.SqlDbType.NVarChar
60      .Direction = Data.ParameterDirection.Input
61      .ParameterName = "@contactphone"
62      .Value = txtcontactphone.Text
63  End With
64  sqlinsert.Parameters.Add(sqlparaphone)
65
66  With sqlparadetails
67      .SqlDbType = Data.SqlDbType.NVarChar
68      .Direction = Data.ParameterDirection.Input
69      .ParameterName = "@details"
70      If ddldetails.SelectedValue = "Other" Then
71          .Value = txtreason.Text
72      Else
73          .Value = ddldetails.SelectedValue
74      End If
75  End With
76  sqlinsert.Parameters.Add(sqlparadetails)
77
78  With sqlparaso
79      .SqlDbType = Data.SqlDbType.NVarChar
80      .Direction = Data.ParameterDirection.Input
81      .ParameterName = "@serviceorder"
82      .Value = txtserviceorder.Text
83  End With
84  sqlinsert.Parameters.Add(sqlparaso)
85
86  With sqlparadrive
87      .SqlDbType = Data.SqlDbType.NVarChar
88      .Direction = Data.ParameterDirection.Input
89      .ParameterName = "@drivetime"
90      .Value = ddldrivetime.SelectedValue
91  End With
92  sqlinsert.Parameters.Add(sqlparadrive)
93
94  With sqlparawork
95      .SqlDbType = Data.SqlDbType.NVarChar
96      .Direction = Data.ParameterDirection.Input
97      .ParameterName = "@worktime"
98      .Value = ddlworktime.SelectedValue
99  End With
100 sqlinsert.Parameters.Add(sqlparawork)
101
102 With sqlparaback
103     .SqlDbType = Data.SqlDbType.NVarChar
104     .Direction = Data.ParameterDirection.Input
105     .ParameterName = "@backhome"
106     .Value = ddlbackhome.SelectedValue
107 End With
108 sqlinsert.Parameters.Add(sqlparaback)
109
110 With sqlparaproblem
111     .SqlDbType = Data.SqlDbType.NVarChar
112     .Direction = Data.ParameterDirection.Input
113     .ParameterName = "@typeproblem"
114     .Value = ddlproblem.SelectedValue
115 End With
116 sqlinsert.Parameters.Add(sqlparaproblem)
117
118 sqlinsert.ExecuteNonQuery()
119
120 sqlcon.Close()

Specifically, it occurs on line 118. Thanks
View 5 Replies
View Related
Jun 21, 2000
Hello:
In my production environment, all of a sudden, my backups via EM scheduled tasks fail, yet when I examine the error, it says that the DUMP/LOAD was successful. I receive the following error:
'Could not insert a backup or restore history/detail record in msdb.dbo.sysbackuphistory or sysrestorehistory. This may indicate a problem with the msdb ...'
I expanded the msdb database and ran sp_purgehistory (no params) and it still fails. Is this related to the log? Please help ASAP. Thank you in advance.
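If this is a 6.x/7.0-era server, one sketch of the usual check, on the assumption that msdb's own log is full (era-appropriate syntax; msdb has no log backup chain worth preserving):

DBCC SQLPERF (LOGSPACE)            -- how full is each database's log, including msdb?
DUMP TRANSACTION msdb WITH NO_LOG  -- truncate msdb's log if it is the culprit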
View 2 Replies
View Related
Aug 22, 2014
I have a database with a table called RAW. This table receives messages via XML, SMS and various other formats. The data is written into this table at a rate of approximately 50-100 rows per minute.
I have an SP which takes the data written into the raw table and performs various actions: looks up account information, writes to a log table, writes to incident tables, gets GPS information and so on.
The records written into the raw table need to be processed almost instantly, within a minute at most of arriving in the table.
At present there is a SQL Agent job which executes the SP. This consists of a step to execute the SP which, on completion, moves to the next step, a WAITFOR DELAY loop, and then back to step 1.
The trouble is it never actually finishes and runs 24/7. There is no break point for error handling, and occasionally records that arrive in the raw table do not get processed, and the job has to be restarted for it to pick them up again.
I am looking for the best way to handle this process. I thought about a trigger, but the performance impact of using a trigger was too heavy to consider.
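One pattern worth considering is a single job step that loops with its own error handling, so a failed batch is logged and the loop keeps going instead of the job dying. A sketch (the procedure and log table names are placeholders, not the actual objects):

WHILE 1 = 1
BEGIN
    BEGIN TRY
        EXEC dbo.usp_ProcessRawMessages;   -- placeholder for the existing SP
    END TRY
    BEGIN CATCH
        -- log the failure and carry on instead of killing the job
        INSERT INTO dbo.ProcessErrorLog (ErrorTime, ErrorMessage)   -- placeholder table
        VALUES (SYSDATETIME(), ERROR_MESSAGE());
    END CATCH;
    WAITFOR DELAY '00:00:05';   -- poll every 5 seconds to stay well under the one-minute target
END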
View 2 Replies
View Related
May 21, 2007
Hello,
Just when it seemed that everything worked, some weird behaviour came out.
I'll try to summarize the problem without posting the complete code.
Service Broker is set up to have a dialog between two databases on the same SQL Server instance.
The initiator queue has RETENTION=ON and there is an activation SP to handle errors and the target's end-dialog message.
The target queue has RETENTION=OFF, MAX_QUEUE_READERS=1, and an activation SP to receive the message (WAITFOR (RECEIVE TOP (1) ...), TIMEOUT 30000) and do something with it (for example, insert into a DB).
The conversation has a dialog timeout to end the dialog after a while.
The problem is that the message is constantly processed. The process doesn't stop even if I end the dialog after the processing.
N.B. The RECEIVE is inside a transaction that I commit at the end.
Some other information that I found out in the meanwhile:
This was my complete WAITFOR (RECEIVE:
WAITFOR ( RECEIVE top(1) -- just handle one message at a time
@message_type=message_type_id, --the type of message received
@messagetypename=message_type_name,
@message_body=message_body, -- the message contents
@dialog = conversation_handle -- the identifier of the dialog this message was received on
FROM [TargetQueue]
), timeout 1000;
if (@@ROWCOUNT = 0)
BEGIN
COMMIT;
BREAK;
END
If I delete TIMEOUT 1000, everything works as expected ...
Inside the if (@@ROWCOUNT = 0) BEGIN ... END block I also wrote an insert into a table, to see whether the end of the queue was reached, but this insert never occurs (neither with nor without the timeout).
I'm happy that it works, but if this is the solution it makes no sense to me!
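For comparison, the canonical shape of an activated receive loop looks roughly like this (a sketch; the queue name and processing step are placeholders, not the actual code):

DECLARE @dialog UNIQUEIDENTIFIER,
        @msgtype SYSNAME,
        @body VARBINARY(MAX);

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;
    WAITFOR (
        RECEIVE TOP (1)
            @msgtype = message_type_name,
            @body    = message_body,
            @dialog  = conversation_handle
        FROM TargetQueue
    ), TIMEOUT 1000;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT;   -- queue drained; activation fires again when new messages arrive
        BREAK;
    END;

    IF @msgtype = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
        END CONVERSATION @dialog;
    ELSE
    BEGIN
        PRINT 'processing message';   -- placeholder: process the message body here
    END;

    COMMIT;   -- commit per message so a processed message leaves the queue
END;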
Any ideas?
Thank you!
M.B.
View 9 Replies
View Related
Jun 30, 2007
Hello.
Let me describe first my replication setup:
- SQL Server 2005 SP1 (SP2 coming soon)
- Approximately 35 remote users (Salesrep laptop) using Pull Subscriptions
- Merge (Bi-Directional) (8 articles - tables only)
- Merge (Uni-Directional) (5 articles - tables only)
- Transactional (5 articles - tables only)
Users receive data based on their territory #, therefore they receive their customers' sets of data. It happens that customers change from one territory to another, but not frequently. When it happens, so far so good: the data is redirected to the new salesrep using the model we configured (Territory table with SUSER_NAME() to filter the data).
OK, here's my problem. For a while now, I have seen in the Replication Monitor that some users seem to log the same conflicts again and again (merge process). I mean, checking the history for many subscribers, there is always the same number in the "Conflict" column.
As an example:
- Merge completed after processing 18 data change(s) (4 insert(s), 14 update(s), 0 delete(s), 31 conflict(s))
- Merge completed after processing 27 data change(s) (10 insert(s), 17 update(s), 0 delete(s), 31 conflict(s))
- Merge completed after processing 20 data change(s) (5 insert(s), 15 update(s), 0 delete(s), 31 conflict(s))
and so on... (Those are only 3 history entries for a single subscription, but there are many like that, always with the same conflict count, which varies per user.) It appears to me that the same conflicts come up over and over.
The thing is that if I reinitialize a subscription, the conflicts disappear, so I know it is not a process on the server that keeps changing the data; anyway, even if it were, the changes would be applied on the subscription because the server always wins in my setup.
Any idea what I should do with this? Any help would be greatly appreciated.
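To see what the repeated conflicts actually are, the conflict metadata can be queried on the publisher. A sketch (the publication name is a placeholder; per-article conflict table names vary):

-- per-article conflict tables registered for a publication
EXEC sp_helpmergearticleconflicts @publication = N'MyPublication';

-- summary rows for logged conflicts (SQL Server 2005 merge metadata)
SELECT TOP (50) *
FROM dbo.MSmerge_conflicts_info;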
Thanks.
View 3 Replies
View Related
Jul 7, 2015
We have an issue with the version store growing constantly. According to sys.dm_os_performance_counters, "Version Generation rate (KB/s)" is growing, but "Version Cleanup rate (KB/s)" isn't. We use read-committed snapshot isolation.
While DBCC OPENTRAN and sys.dm_exec_requests don't show any long-running transactions, I wrote a query looking at sys.dm_tran_active_snapshot_database_transactions. This shows a number of long-running transactions but, according to sys.dm_exec_sessions, they are all sleeping. The transactions that are running come and go very quickly, as I would expect. Could these sleeping transactions be responsible for preventing the version store from cleaning up?
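A sketch of a query that lines the two views up (standard DMVs, nothing assumed beyond the session join):

SELECT s.session_id,
       s.status,
       s.last_request_end_time,
       t.transaction_id,
       t.elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions AS t
JOIN sys.dm_exec_sessions AS s ON s.session_id = t.session_id
ORDER BY t.elapsed_time_seconds DESC;

Sleeping sessions holding an open transaction are the classic culprit here: the version store can only clean up past the oldest entry in that DMV, so an application that opens a transaction and then idles will pin it indefinitely.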
View 2 Replies
View Related
Nov 21, 2001
The log for our SQL 2000 database shows constant "starting up database dbname" entries. Is there an option that causes the databases to constantly be starting, or is this a new feature of SQL 2000?
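One common cause worth ruling out (an assumption, not a confirmed diagnosis): the autoclose option shuts a database down when the last user disconnects, and every reopen writes a "Starting up database" entry. A sketch ('dbname' is a placeholder):

SELECT DATABASEPROPERTYEX('dbname', 'IsAutoClose');  -- 1 means the option is on
ALTER DATABASE dbname SET AUTO_CLOSE OFF;            -- stops the constant restarts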
View 1 Replies
View Related
Feb 6, 2007
Our report servers constantly get the error below.
What causes this? I know how to fix it - in fact, I've automated the fix - but why does it constantly happen on some servers? I'd like to know what causes it so I can address it at the source, proactively, instead of having to fix it each time.
Reporting Services Error
The report server cannot decrypt the symmetric key used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. Check the documentation for more information. (rsReportServerDisabled) (rsRPCError) Get Online Help
The report server cannot decrypt the symmetric key used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. Check the documentation for more information. (rsReportServerDisabled)
The report server cannot decrypt the symmetric key used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. Check the documentation for more information. (rsReportServerDisabled)
Bad Data. (Exception from HRESULT: 0x800900
Why
View 3 Replies
View Related
Aug 16, 2006
Trying to set up a transform task between a MySQL db (using an ADO.NET connection) and SQL Server.
My query to pull from the MySQL db is something like "select x,y,z from table where last_updated > " + @User::LastUpdated. This command is set up as an expression on the Data Flow Task and is the value for [DataReader Source].[SqlCommand].
I have two questions.
Why does the package attempt a query against the MySQL database all the time?
And why is the query attempting to pull the entire table instead of having any regard for my where clause?
I've even added where last_updated > greatest('2006-08-15', '" + @User::LastUpdated to attempt to give it a where clause even when the parameter isn't set yet.
What is the trick? This is not feasible when pulling from multi-million row tables.
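One workaround sketch, staying with the expression approach described above (the variable name, cast width and date format are assumptions): build the whole SqlCommand as a single string expression so the statement is complete whenever the package validates or runs it, and give the variable a cheap design-time default such as a recent date:

"SELECT x, y, z FROM mytable WHERE last_updated > '" + (DT_WSTR, 30) @[User::LastUpdated] + "'"

Setting DelayValidation on the Data Flow Task is also commonly suggested to stop validation-time queries against the source.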
View 2 Replies
View Related
Mar 8, 2007
Hello,
I'm using SQL Service Broker 2005 with ASP.NET 2.0 in order to use SQL cache dependency.
Everything works fine...
I have only a doubt regarding a query that is constantly executed on my db (I can see it by means of SQL Profiler).
The query is:
exec sp_executesql N'BEGIN CONVERSATION TIMER (''...'') TIMEOUT = 120;
WAITFOR (RECEIVE TOP (1) message_type_name, conversation_handle,
    CAST(message_body AS XML) AS message_body
    FROM [SqlQueryNotificationService-GUID]), TIMEOUT @p2;',
N'@p2 int', @p2 = 60000
The web application is not running from a browser ...
Is this OK, or did I forget to clean/reset something in my web application and/or SQL Server?
Thank you
Marina B.
View 3 Replies
View Related
Mar 24, 2000
Hi,
I know that SQL 7 grows databases dynamically, but I'm wondering how it determines how much to grow it by? I have a couple of databases on our servers that are 3.4 GB but with 1.6 GB space available. So I'm wondering when it determines it needs to grow a database and what it does to determine how much to grow it by.
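The growth increment is stored per file, so it can be inspected directly. A sketch against SQL 7's sysfiles (the database name is a placeholder):

USE MyDatabase
SELECT name,
       size / 128 AS size_mb,    -- size is stored in 8 KB pages
       CASE WHEN status & 0x100000 <> 0
            THEN CONVERT(varchar(12), growth) + ' percent'
            ELSE CONVERT(varchar(12), growth / 128) + ' MB'
       END AS growth_increment
FROM sysfiles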
Thanks,
Mike Gagne
View 2 Replies
View Related
Jun 27, 2007
Bear with me - my SQL Server 2005 maintenance knowledge is that of a newbie.
I was running a very large transaction over the weekend (say 10 million inserts).
After waiting 3-4 hours for the transaction to complete, I checked the LDF file: it had grown to 100 GB.
After that I discovered that I had the recovery model set to FULL. So I killed the job and changed the recovery model from Full to Simple.
Now I see that the LDF file is not growing in size even though many transactions have completed successfully (still very slow though)...
What am I missing here? I am clueless as to why my LDF is not growing in size.
Any Ideas??
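In SIMPLE recovery the log is truncated at each checkpoint, so it stops growing once it has enough room for in-flight transactions; the 100 GB file simply keeps its allocated size until it is shrunk. A sketch to confirm the log is being reused:

DBCC SQLPERF (LOGSPACE);  -- shows each log's total size and the percentage actually in use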
View 4 Replies
View Related
Feb 9, 2000
Hi ,
I have a SQL 7 database in which I have set autogrow on. I need some way to be notified when the database does an autogrow. The reason for this is that if it does an autogrow once, and I am notified, then I can manually expand the DB size without having SQL Server do multiple autogrows. I was looking at setting an alert but cannot find any message in sysmessages that seems to be an informational type for autogrow. Has anyone done this kind of thing?
Thanks
Venkat
View 2 Replies
View Related
Jan 13, 1999
OK. Here's a good one.
I wrote a query that caused a HUGE amount of stuff to be written to the transaction log. Since I set the database up before I had enough coffee yesterday, I didn't turn on a "Restrict Filegrowth" on the log. So the transaction ran until it filled up the available space on the drive (my local workstation, so it grew to about 6 GB) and then it rolled back. (BTW: Microsoft finally figured out that rolling back a transaction shouldn't be a blocking operation. ISQLW tells you that the transaction failed as soon as it fails, and then releases the connection to you, so you can go on with your life while SQL Server cleans up. Good one!)
OK. So that done, I figured I'd just truncate the transaction log and do the nifty new "DBCC SHRINKFILE()" thing. So I truncate the log and do DBCC SHRINKFILE. Nothing happens. Enterprise Mangler (OOPS Manager. I really mean Manager) shows that only 43 MB of the 6 GB file is in use. DBCC SHRINKFILE reports that the minimum size is 128 pages, the current size is 697,256 pages, and 697,256 pages are used.
Great. So I can't shrink the file.
Step 2: I dump (OOPS, sorry, BACKUP) the database, delete the database, make sure all the files are gone, and then restore the database. It re-creates the 6 GB file, which, by the way takes a very long time. What's funny about that is the query timer in ISQLW reports that the query took 30 minutes, but the return from the restore command shows that it took about 300 and some seconds (about 5 minutes) because the restore command doesn't count the amount of time it took to build the files (I'm guessing). After I figure out that it rebuilt the 6 GB file, I screamed, and started downloading PostGreSQL for my Linux box, and got on to other projects.
This morning I came in and started reading Books Online to figure out what's going on. It says something about "Virtual Log Files" and how a log can't be shrunk past that point. Great. MS basically defines a virtual log file as "the point past which you can't shrink a log". So I have a 5 GB virtual log file, and I can't truncate it, shrink it, or make it go away.
So I have a stroke of genius and decide to build a new log file in the database, and then use the DBCC SHRINKFILE command with the EMPTYFILE option, and then use ALTER DATABASE to remove the file.
Then I get this really cool error that says:
Server: Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.
OK. Last time I checked, a log file doesn't belong to a primary filegroup, so there's something else going on here. Basically, it looks like the first file that gets created is the "Primary" file and can never be removed.
So, new policy, every "first" file in a database is going to be a 2 MB file, with a 2 MB growth limit, so we can remove it later. That's a load of....fertilizer.
It looks like the AutoShrink for logs is just a myth. Auto-Grow seems to work almost too well, though. I'm picturing one of those Access newbies using the Export function in Access to put data into SQL Server on one of our pre-production boxes, and having a 180 GB log that can't be shrunk. That'll be a good time.
The moral of the story: Always set growth restrictions, especially on log files.
The questions:
1. Anybody got any bright ideas on how I can get my disk space back WITHOUT using BCP (or DTS, or similar methods)?
2. Anybody know how a different file can be set as a "PRIMARY" file?
3. Anybody know why MS decided to fill the Transact-SQL help in ISQLW with "You can't get there from here" messages that reference Books Online?
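Re question 1: one diagnostic worth adding (undocumented, so treat this as a sketch) is DBCC LOGINFO, which lists the virtual log files and shows which one is still active and therefore blocking the shrink:

USE MyDatabase   -- placeholder name
DBCC LOGINFO     -- one row per VLF; Status = 2 marks the active VLF the shrink cannot move past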
Thankfully, this isn't anywhere in our production system, and if the quality continues this way, it won't ever be in our production system.
chris.
View 1 Replies
View Related
Jan 24, 2003
Hardware:
IBM Netfinity 8500
2 processors Xeon 700
1.5 GB memory
Windows 2000 Server SP2 Build 2195
SQL Server 2000 Standard Edition 8.00.534 SP2
There is only one database (DB), 16 GB, on drive G.
Drive G has 32 GB of free space.
Yesterday we appended tables to the database and the following error appeared in the SQL logs:
2003-01-23 12:26:42.57 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2.
2003-01-23 12:26:42.61 spid101 Error: 1105, Severity: 17, State: 2
2003-01-23 12:26:42.61 spid101 Could not allocate space for object 'ttdssc030104' in database 'MYDATABASE' because the 'PRIMARY' filegroup is full..
2003-01-23 12:26:48.03 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2.
The DB is configured to grow automatically by 100 MB and the transaction log to grow automatically by 10%.
Unrestricted file growth is selected on both.
I tried to expand the DB manually to 20 GB via Enterprise Manager, but it didn't work, and this error appears in the SQL log:
"2003-01-23 12:26:48.03 spid101 fcb::ZeroFile(): GetOverLappedResult() failed with error 2."
In Enterprise Manager - Databases - Properties - General, the size of the DB stays at 16 GB.
Windows Explorer says MYDATABASE.MDF is 20480 MB.
I deleted the tables we inserted and the problem persists.
Thanks in advance,
View 6 Replies
View Related
Jan 8, 2004
Hello everyone,
I have a 45 GB db with:
- automatic file growth by 10%
- full recovery
- log shipping every 5 minutes
- full backup every 24 hrs
The database has grown from 33 GB to 45 GB over a one-year period. A massive insert is done to the database 4-5 times a year (no specific dates).
If I change autogrow to 300 MB or 4%:
1. How would it affect the insert process?
2. How would it affect daily performance?
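For reference, the change itself is a one-liner (database and logical file names are placeholders); a fixed increment avoids the ever-larger growths that a percentage produces on a growing file:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 300MB);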
Thank you
Alex
View 3 Replies
View Related
Oct 1, 2004
What is the best option to set for File Growth?
Is it in megabytes or by percent?
View 3 Replies
View Related
Dec 4, 2007
Hi All
I am a bit confused about how data and log files grow in databases. Suppose I turn off autogrow and restrict the max size to some limit, but the size of the data/log file is less than the max size at some stage (since, as I understand it, the size of the data/log file keeps changing depending on the activity on the database). In future, if the data/log file needs to grow, can it grow up to the max size without turning autogrow back on?
regards
View 4 Replies
View Related
Jul 23, 2005
SQL 2000.
I thought I would throw this out there for some feedback from others. I'd like to know if you feel using the MS auto-increment field is a good solution these days, or should one grow their own?
Thanks,
Me.
View 11 Replies
View Related
Jul 20, 2005
I made a database to hold recordings of calls made to our customers. When I made it I set the size of the primary data file to 18 GB. It's been running flawlessly for over 10 months. A few days ago the users were suddenly no longer able to save the recordings to the database. They got an error message to the effect that the timeout had expired. The failure occurred on the .Execute statement of the Command that calls the stored procedure.
I noticed that the data had reached the size allocated for the file. The file was set to auto-grow (5%). However, since I couldn't find anything else wrong, and since the test version of the database (which only has 15 GB of data in an 18 GB-dimensioned file) did not exhibit the same behavior, I decided to try increasing the size of the file with an ALTER DATABASE statement. I increased it to 21 GB. Lo and behold, the problem disappeared.
Here's what I think might be going on: the default timeout for the ADO Command object is 30 seconds... this is probably not long enough for SQL Server to add 900 MB to the data file, therefore the Command timeout expired. So from now on, instead of relying on auto-grow, I'm going to just make sure the data file always has plenty of headroom. FWIW.
View 1 Replies
View Related
Oct 9, 2007
Hello,
I'm running a long and heavy query. During the run, the log file of the DB grows by more than 20 GB and consequently I'm running out of disk space. Is there a way to restrict the log file size without damaging my query?
Thanks.
View 9 Replies
View Related
Nov 25, 2005
Hi all,
I'm having a problem with one of our databases because we didn't run the maintenance plan from the beginning. The thing is that the hard drive is out of space and the log files are around 100 GB. We only have 20 MB free. Do you think that is enough space to run the maintenance plan or the shrink command?
Thanks very much!!!
View 1 Replies
View Related
Mar 20, 2004
My log file has grown until the disk is full - the log file is 25 GB and I have 4 GB free.
I can't shrink the log file!
Can I set the log file to null??
I have backed up my data file successfully!
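On SQL 2000, the usual emergency sequence is to discard the log records and then shrink the file (a sketch; logical log file name, target size and backup path are placeholders). Note that TRUNCATE_ONLY breaks the log backup chain, so take a full backup immediately afterwards:

BACKUP LOG MyDb WITH TRUNCATE_ONLY;  -- throw away log records without writing them anywhere
DBCC SHRINKFILE (MyDb_Log, 500);     -- shrink the log file to roughly 500 MB
BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb.bak';  -- restart the backup chain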
Help!
View 9 Replies
View Related
Jul 20, 2005
We're using SQL 2000 on Windows 2000 Server, but this is a problem we've had on one particular database since SQL 7 on NT4.
The database in question is set to autogrow by 10% (currently sitting at 31 GB total size). However, last week users complained of a slowdown in performance. When we checked we found that only 14 MB was free in the database (we thought it would've grown automatically before then), and when we added an additional 1 GB manually, performance picked up.
Does SQL Server wait until all the space is used up (i.e. 0% free) before autogrowing? Even at that, we've never actually had the database grow automatically - we've always had to add space manually. Settings on this database, and on one that does grow automatically, appear to be the same (we have also checked via sp_helpdb). So where does the problem lie?
Any help you can give would be greatly appreciated.
View 1 Replies
View Related
Jul 3, 2001
Hello all!
I have a problem with my database. Until yesterday the Auto Grow option of the database (10%) was working fine, but now there seem to be some problems with it. In the end I had to specify a restricted size for the database, and then it again started giving me space in the database to write to. Ideally it should have worked automatically, shouldn't it???
There is no problem with space on the drive; I still have some 76 GB of free space there ...
Thanks in advance ...
Anjä
View 1 Replies
View Related
Jun 19, 2007
The primary database I'm responsible for has started to grow super fast. Every couple of days it grows by 10% (which matches the db settings). But the recent growth doesn't match the historical growth: it took a couple of months to grow from 7 to 8 GB, yet it has grown to about 24 GB in the last 2 months. Bottom line - trust my assertion that it's growing alarmingly fast.
I need help determining which objects are fueling the growth. If I know the objects, I can probably determine the cause. On the flip side, it might be legitimate data stored very poorly. I'm open to any ideas... but I need to get ahead of this problem in the next week or so... or I'm going to run out of room on the hard drive and it could start to affect my users.
Please send me any ideas you might have.
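A quick way to rank tables by size, and to see which ones move between runs, is the undocumented-but-ubiquitous sp_MSforeachtable wrapped around sp_spaceused (a sketch):

CREATE TABLE #sizes (name sysname, [rows] varchar(20), reserved varchar(20),
                     data varchar(20), index_size varchar(20), unused varchar(20));
INSERT INTO #sizes
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';
-- biggest tables first; rerun daily and diff the results to see what is actually growing
SELECT * FROM #sizes ORDER BY CAST(REPLACE(reserved, ' KB', '') AS int) DESC;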
Thanks,
alex8675
View 5 Replies
View Related
Jan 9, 2008
My primary (and only) data file has reached the point where it is auto-growing. I would like to grow this file in one big chunk at an off-peak time. I can't seem to find the code I need to make the file grow when I want it to.
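A sketch of the one-off grow (database, logical file name and target size are placeholders); run it at the off-peak time:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 50GB);  -- grows the file to the stated total size in one operation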
View 1 Replies
View Related
Feb 7, 2006
Hi all,
I have a SQL Server 2000 database that is using the Full recovery model. The database is purely receiving inserts (and plenty of them), with maybe some view/table creation for reporting.
In this state I would expect the log to grow ad infinitum, but it gets to about 32% used and then empties.
The log is not being backed up at all, so am I missing something else?
Cheers,
Dee
View 2 Replies
View Related
Aug 20, 2015
For one of our databases we have an issue where the log file grew rapidly last week, on Friday and Saturday. The database is on SQL Server 2008 R2 with compatibility level 80. Please see the log growth events below.
First we thought index maintenance (reindex and update stats) could have been the reason, but then we checked the schedule of the job that ran on the 16th, which uses the following code:
USE ABC
GO
EXEC sp_MSforeachtable @command1 = "print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I know the above is old-fashioned, but we believe it should not be the major cause here. How can I determine what happened on the 14th and 15th that caused the events to trigger and the log file to jump to 80 and 70 GB on those two days?
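The default trace (on by default in 2008 R2) records every autogrow event, so the 14th and 15th may still be visible if the trace files have not rolled over since then. A sketch:

DECLARE @trc nvarchar(260);
SELECT @trc = path FROM sys.traces WHERE is_default = 1;

SELECT te.name AS event_name, t.DatabaseName, t.FileName,
       t.StartTime, t.Duration / 1000 AS duration_ms
FROM fn_trace_gettable(@trc, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE te.name IN ('Log File Auto Grow', 'Data File Auto Grow')
ORDER BY t.StartTime;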
View 8 Replies
View Related