I have read that MS suggests your tempdb be at least 25% of your largest database. Is there any way to monitor your tempdb to determine if it is filling up or being 100% utilized and needs to be expanded?
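A minimal sketch of one way to watch this, assuming SQL Server 2005 or later (older versions would need sp_spaceused run in tempdb instead):

-- How much of tempdb is free vs. allocated, broken down by consumer type.
SELECT
    SUM(unallocated_extent_page_count) * 8 / 1024       AS free_space_mb,
    SUM(user_object_reserved_page_count) * 8 / 1024     AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8 / 1024   AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

Sampling this on a schedule, or alerting when free_space_mb drops below a threshold, shows whether tempdb is actually approaching full before it starts throwing errors.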
I have a SQL server that has probably 20 or so people connected at a time. In addition to these people, there are five people who run Crystal Reports with some voluminous data, usually coming from views. About every week and a half, one of the report writers gets a message that the master device is full and must be expanded, because tempdb is full. When I expand tempdb, it fixes the problem. My question is, the tempdb is now 800 MB. The sum total of space allocated to all of the other (besides master) devices is 650. Should this be a flag that something is wrong? It just keeps growing and growing. Tempdb is not on its own device; it is still on master. Any advice would be greatly appreciated! Jane Davis
I have a server with 2.5GB of RAM. I am allocating 1GB to SQL Server and am attempting to put a 500MB tempdb in RAM. It would appear that I am hitting some sort of maximum threshold that I am unaware of, as I can set tempdb in RAM up to 320MB. Anything beyond that and I get the following errors on startup, and SQL Server will not start.
98/12/02 08:25:47.03 spid1 Clearing temp db
98/12/02 08:25:47.03 kernel udactivate(IN_RAM): Operating system error 8(Not enough storage is available to process this command.) encountered
98/12/02 08:25:47.03 spid1 Device activation error. The physical filename 'IN_RAM' may be incorrect
98/12/02 08:25:47.03 spid1 crdb_tempdb: Unable to move tempdb into RAM; RAM device doesn't exist, cannot be created, or doesn't have enough space for tempdb
I am certain that I am not hitting a physical RAM limit; the memory checks out and is visible to NT. Please advise. Thanks
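For reference, a sketch of how the 6.5 RAM setting is changed; I am recalling the option name as 'tempdb in ram (MB)', so verify the exact name with sp_configure first. If the server will not start at all, starting it with the -f (minimal configuration) switch should let you get in and turn the option back off before retrying a smaller value:

-- Put tempdb back on disk so the server can start, then retry a smaller value.
EXEC sp_configure 'tempdb in ram (MB)', 0
RECONFIGURE WITH OVERRIDE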
Is there any way by which I can determine whether the tempdb size for the SQL Server is enough or not? (I mean, are there any symptoms, like excessive paging, that can identify this bottleneck?)
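A minimal check that works on older versions as well: run this from tempdb while the heaviest workload is active and watch the unallocated space figure over time; if it keeps trending toward zero, tempdb is undersized.

-- Reports tempdb's total size and how much of it is still unallocated.
USE tempdb;
EXEC sp_spaceused;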
We had a runaway query which grew the size of tempdb to 24000MB. Then someone changed the unrestricted file growth property to restricted growth while the size was 24000MB. Now I cannot reduce the initial size. I have set the property back to unrestricted file growth. I have shrunk tempdb, and available space is almost 24000MB. I have stopped SQL Server. I even deleted the existing tempdb.mdf and tempdb.ldf files. But when SQL Server is restarted, the initial size is set to 24000MB. It will not let me reduce the size. Is there anything short of manipulating the system tables to reduce the size back to 500MB?
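A sketch of the approach Microsoft's KB article on shrinking tempdb describes, as I recall it: start SQL Server in minimal configuration mode (sqlservr -c -f), which brings tempdb up at its default size, then set the size you actually want and restart normally. The logical names tempdev and templog are the defaults; confirm them with sp_helpfile in tempdb.

-- Run while started with -c -f; the new sizes take effect on the next normal start.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 500MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 100MB);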
We currently have a hard drive size of 3.89GB, and 3.3GB is being used by tempdb. I have tried shrinking the database with TRUNCATEONLY, but this is not working. The problem is that the tempdb file is as large as my C: drive. In addition, can this be moved to another directory? For example, can I move the tempdb.mdf and .ldf from C: to E:? Any help would be greatly appreciated.
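A sketch for the move: tempdb files can be pointed at a new location with ALTER DATABASE, and the change takes effect after the SQL Server service is restarted, at which point the old files on C: can be deleted. The paths and logical names below are illustrative; check sp_helpfile for the real logical names.

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
-- Restart the SQL Server service, then remove the old files from C:.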
On my server the C drive is 34GB. Right now the tempdb size is 22GB, which is causing the C drive to be full. How can I reduce it? I don't want to move tempdb to any other drive; I am only looking for a way to reduce its size.
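A sketch using DBCC SHRINKFILE, assuming the default logical file names (verify with sp_helpfile); the targets are in MB. Shrinking tempdb while the server is busy may not release everything, and a service restart recreates it at its configured size, so adjusting the configured size is worth doing as well.

USE tempdb;
DBCC SHRINKFILE (tempdev, 4096);   -- shrink the data file toward 4 GB
DBCC SHRINKFILE (templog, 1024);   -- shrink the log file toward 1 GB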
I have the classic "tempdb-out-of-space" problem. Unfortunately, my server fails to reboot properly, as tempdb is located on the C: drive, which is now completely full. While I understand the changes required to prevent this from happening again, I want to know if it will even reboot if I delete tempdb.mdf and tempdb.ldf. I've read conflicting information on MSDN about the default tempdb file size:
- files are built to the default size (I will be fine)
- files are built to the same size as before (problem)
Which is true for SQL 2005?
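As I understand it, tempdb is recreated at the sizes recorded in master rather than at the installation defaults, so it is worth checking what those are before deleting anything. A quick way to see them on SQL Server 2005:

-- size is in 8 KB pages; this is what tempdb comes back at on startup.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM master.sys.master_files
WHERE database_id = DB_ID('tempdb');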
I have a tempdb that was created at 1GB. I don't know why, but I want to shrink it below the original creation size. Is there a way to shrink this file, or to create a new file and delete the old?
I have tried DBCC SHRINKFILE and DBCC SHRINKDATABASE with no luck.
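A sketch of one thing that sometimes lets the shrink succeed on SQL Server 2005 and later: internal objects cached in tempdb can keep pages pinned, and clearing those caches before retrying SHRINKFILE may free them. Both commands flush server-wide caches, so avoid this on a busy production box. The logical name tempdev and the 512 MB target are assumptions.

DBCC FREEPROCCACHE;                -- drop cached plans
DBCC FREESYSTEMCACHE ('ALL');      -- drop other system caches holding tempdb objects
DBCC SHRINKFILE (tempdev, 512);    -- retry the shrink, target in MB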
Against my better judgement, we are using fixed allocations of tempdb on some of our servers. This is to deal with specific limitations of our applications and hardware configuration that I'm not allowed to discuss in much detail.
The problem I have is that the present plan is to configure the data file at around 18 GB and the log file at around 2 GB. This seems just plain wrong to me, but I haven't been able to find a formal recommendation that gives any relative sizing. I would expect to have about twice as much log as data space, especially for tempdb.
Does anyone know of a formal citation (preferably from Microsoft) that discusses this?
We have a problem with the size of the tempdb.mdf file. The tempdb had grown to 25GB and consumed all the available disk space. SQL Server was restarted and tempdb was reset back to the default size. The following day, tempdb suddenly increased in size from 200MB to 25GB within a very short space of time. There were a couple of event log entries from SQL Server regarding the lack of disk space. Since then the server has been running without any problems, but the level of free space is virtually zero on the drive with the tempdb.mdf file.
What would cause the tempdb to grow suddenly and to this size?
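A sketch (SQL Server 2005 and later) for catching the culprit while the growth is happening: the sessions that have allocated the most tempdb pages are usually the ones running the offending query, often a large sort, hash join, or temp table build.

SELECT TOP (10)
    s.session_id,
    s.login_name,
    su.user_objects_alloc_page_count * 8 / 1024     AS user_objects_mb,
    su.internal_objects_alloc_page_count * 8 / 1024 AS internal_objects_mb
FROM tempdb.sys.dm_db_session_space_usage AS su
JOIN sys.dm_exec_sessions AS s ON s.session_id = su.session_id
ORDER BY su.internal_objects_alloc_page_count DESC;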
I have got SQL 6.5 SP5a with SMS 1.2 SP4 on separate Alpha boxes. I have automated the backups so they are scheduled for after hours. SMS gets backed up first and TEMPDB shortly afterwards. However, since a backlog in SMS MIFs occurred, the TEMPDB backup reports 100,000 pages backed up. When you back it up on its own, it only shows 170+ pages.
The SMS DB is 600MB in size, the log is 210MB, open objects is 5000, and TEMPDB is set to 210MB on its own device.
I am running a query on my production server. It takes hardly 15 minutes. The same query takes more than 3 hours on my test server. The only difference I can see between these two servers is the tempdb size. Does tempdb size affect the performance of a query? Can anyone reply?
My prod server (only a default instance) has tempdb configured with 1024 MB of data and a 200 MB log. When I run DBCC SQLPERF(LOGSPACE) it shows, most of the time, around 45% log space used. There is nothing going on in the instance when I run sp_WhoIsActive or select * from sys.sysprocesses where dbid = 2.
So my questions are: is it normal to see log space around 45% used, how do I find what caused the tempdb log space to grow to 45%, and is there something to do about it?
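A sketch of the first two things worth checking: whether anything is holding the tempdb log open, and how the log usage compares across databases.

DBCC OPENTRAN ('tempdb');   -- reports the oldest active transaction in tempdb, if any
DBCC SQLPERF (LOGSPACE);    -- log size and percent used for every database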
We have installed a SQL Server 2008 R2 SP1 instance, and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files were created with the default size. I would like to presize them.
What are the best values to start with?
U: -> Tempdbdata, holding the tempdb.mdf file
V: -> Tempdblog, holding the templog.ldf file
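A sketch only; the right numbers depend on the workload, so the sizes below are purely illustrative (a common approach is to presize well below the 50 GB drives to leave headroom, and to use fixed autogrowth increments rather than percentages). The logical names tempdev and templog are the defaults; confirm them with sp_helpfile.

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8192MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 512MB);

If more data files are added later, keeping them all the same size helps SQL Server's proportional fill spread allocations evenly.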
I receive Error: 3967, Severity: 17, State: 1 - Insufficient space in tempdb to hold row versions. We have 8 data files for tempdb of 10210 GB size, and 10240 GB is set as the max size.
MS suggests that to calculate the tempdb file size and growth rate, we need to monitor the performance counters Free Space in tempdb (KB) and Version Store Size (KB) in the Transactions object.
Basic formula: [Size of Version Store] = 2 * [Version store data generated per minute] * [Longest running time (minutes) of your transaction]
My disk utilization report says tempdb is full. I think I need to shrink the file.
I am still confused about calculating the size. My performance counters give me the following data:
Free Space in tempdb (KB):        279938496
Version Generation rate (KB/s):    53681040
Version Cleanup rate (KB/s):       53422320
Version Store Size (KB):             258720
Version Store unit count:                22
Version Store unit creation:            774
Version Store unit truncation:          752
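For what it's worth, the same counters can be sampled from T-SQL on SQL Server 2005 and later, which makes it easier to take two readings a known interval apart and turn the cumulative generation and cleanup figures into a per-minute rate for the formula above. As a purely illustrative worked example of that formula (numbers invented): a generation rate of 240 KB/s is 14,400 KB per minute, so a longest transaction of 10 minutes needs roughly 2 * 14,400 * 10 = 288,000 KB, or about 280 MB of version store.

-- Pull the version store counters; object_name is e.g. 'SQLServer:Transactions'
-- or 'MSSQL$<instance>:Transactions', hence the LIKE.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Transactions%'
  AND counter_name IN ('Free Space in tempdb (KB)',
                       'Version Generation rate (KB/s)',
                       'Version Cleanup rate (KB/s)',
                       'Version Store Size (KB)');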
I am trying to resize a database initial log file from 500M to 2M. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2M, but it doesn't keep the changes.
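A sketch of the usual alternative: MODIFY FILE only accepts a size larger than the current one (hence the error), and shrinking below the current size is done with DBCC SHRINKFILE against the log's logical name. The placeholders below match the ones in the question.

USE <DBNAME>;
DBCC SHRINKFILE (<DBLOGFILENAME>, 2);   -- target size in MB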
"tempdb is skipped. You cannot run a query that requires tempdb"?
We're running a .Net web application with a SQL Server 2000 backend, and we get the error intermittently. Restarting the SQL Server service seems to fix it, as it causes tempdb to be rebuilt, but this isn't a long term solution. Any direction or hints would be greatly appreciated. Thanks! - Mike
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
Hi folks, can anyone enlighten me here? I'm trying to use a SPROC which, when supplied with an int, looks up the table and returns certain columns from it. I'm using a SqlCommand; here's my codebehind:

SqlCommand dataSource = new SqlCommand("retrieveData", new SqlConnection(dbConnString));
dataSource.CommandType = CommandType.StoredProcedure;
dataSource.Parameters.AddWithValue("id", poid);
dataSource.Parameters.AddWithValue("title", title).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("creator", creator).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("assignee", assignee).Direction = ParameterDirection.Output;
etc, etc...

And the SPROC:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[retrieveData]
    @id int,
    @title varchar(50) OUTPUT,
    @creator varchar(50) OUTPUT,
    @assignee varchar(50) OUTPUT,
    @contact varchar(50) OUTPUT,
    @deliveryCost numeric(18,2) OUTPUT,
    @totalCost numeric(18,2) OUTPUT,
    @status tinyint OUTPUT,
    @project smallint OUTPUT,
    @supplier smallint OUTPUT,
    @creationDateTime datetime OUTPUT,
    @amendedDateTime datetime OUTPUT,
    @locked bit OUTPUT
AS
    /** SET NOCOUNT ON; **/
    SELECT [title] AS [@title],
           [datetime] AS [@creationDateTime],
           [creator] AS [@creator],
           [assignee] AS [@assignee],
           [supplier] AS [@supplier],
           [contact] AS [@contact],
           [delivery_cost] AS [@deliveryCost],
           [total_cost] AS [@totalCost],
           [amended_timestamp] AS [@amendedDateTime],
           [locked] AS [@locked]
    FROM purchase_orders
    WHERE [id] = @id;

The id being passed in is definitely not null, and is set to a value of an item I know exists. The resulting error is:
Exception Details: System.InvalidOperationException: String[1]: the Size property has an invalid size of 0.

Line 63: retrievePODetails.Connection.Open();
Line 64: retrievePODetails.ExecuteNonQuery();

[InvalidOperationException: String[1]: the Size property has an invalid size of 0.]
System.Data.SqlClient.SqlParameter.Validate(Int32 index) +717091
...

Can anyone see anything I'm missing? Thanks, Ally
Using C#, SQL Server 2005, and ASP.NET 2 in a web app, I've tried removing the size from parameters of type NCHAR, NVARCHAR, and VARCHAR. I'd rather just send a string and let the size of the parameter in the SP truncate any extra chars if need be. I began getting the error below, and eventually realized it happens only with output parameters, as in the code snippet below.

String[3]: the Size property has an invalid size of 0.

par = new SqlParameter("@BusinessEntity", SqlDbType.NVarChar);
par.Direction = ParameterDirection.Output;
cmd.Parameters.Add(par);
cmd.ExecuteNonQuery();

What's the logic behind this? Is there any way around it other than either finding out what the size should be, or assigning a size larger than would ever be needed?
Thanks
Mike Thomas
I have one db, test, with one .mdf and one .ldf file. The .mdf file size is 100 MB. For some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables are now in the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB free. Please reply.
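A sketch of the usual approach, assuming the primary file's logical name is 'test' (check sp_helpfile for the real name); the target is in MB. The primary file cannot be shrunk below the space still used by system objects, but with 90 MB free a 50 MB target should generally be reachable.

USE test;
DBCC SHRINKFILE (test, 50);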
I am getting an error when running a stored procedure using the ExecuteNonQuery method. The stored procedure has an OUTPUT parameter, and ExecuteNonQuery is called through SqlHelper. Error: String[18]: the Size property has an invalid size of 0.
Just wanted to know: what is a general rule of thumb when determining log file space relative to a database's data file? We allow the data file for our database to grow 10%, unlimited. We do not allow our log file to autogrow due to a specific and poorly written process (which we are in a three-month process of removing) that can balloon the log file size. Should it be 10% of the data file, i.e., if the data file size is 800MB the log file should be 8MB? I realize there are a myriad of factors that go into file size, but a general starting point would be nice.
Thanks
Jeff
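There isn't really a single ratio; it depends on the recovery model, index maintenance, and the largest transactions. As a rough starting point, a sketch (SQL Server 2005 and later) that shows how your existing databases' data and log sizes actually compare, which is often more useful than a fixed percentage:

SELECT DB_NAME(database_id) AS database_name,
       SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8 / 1024 AS data_mb,
       SUM(CASE WHEN type_desc = 'LOG'  THEN size END) * 8 / 1024 AS log_mb
FROM sys.master_files
GROUP BY database_id;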
Hi, I use this script that shows me the size of each table and the sum of all the table sizes.
SELECT X.[name],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '') AS [rows],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '') AS [reserved],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '') AS [data],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '') AS [unused]
FROM (SELECT CAST(object_name(id) AS varchar(50)) AS [name],
             SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
             SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
             SUM(CONVERT(bigint, dpages)) * 8 AS data,
             SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
             SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
      FROM sysindexes WITH (NOLOCK)
      WHERE sysindexes.indid IN (0, 1, 255)
        AND sysindexes.id > 100
        AND object_name(sysindexes.id) <> 'dtproperties'
      GROUP BY sysindexes.id WITH ROLLUP) AS X
ORDER BY X.[name]
The problem is that the sum of all the tables is not the same as the size of a full database backup. For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111 MB, but when I do a full backup of that database the size of the full backup is 1.5 GB. Why is that, and where does this size come from?
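One quick cross-check, as a sketch: sp_spaceused reports the whole database, and its database_size figure includes the log file and unallocated space, neither of which shows up in the per-table sum above. Running it with @updateusage also corrects stale page counts in sysindexes, which can make the per-table numbers misleading.

EXEC sp_spaceused @updateusage = 'TRUE';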