We are having problems with our SQL Server 2000 instance. On a daily basis we run out of disk space, and I always have to run DBCC SHRINKDATABASE on tempdb.
Today we started with 160GB of free space and by the end of the day it was gone!
Yes, we do have many jobs running on our SQL Server pulling data in from many sources, but I don't know how to find out which job is causing this problem. I suspect an hourly job that pulls data from Oracle (approximately 10,000 rows each time), but that job has been active since 28 August 2007, and we only started running out of space in the past 5 days. Any suggestions as to what is causing this, or how to diagnose the problem, would be appreciated.
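One way to start narrowing this down, as a sketch for SQL Server 2000 (the polling idea is mine; the system objects are standard): schedule a job every few minutes that records tempdb's size together with the active sessions, then line the timestamps up against the job schedule.

-- Sketch: poll tempdb usage and the likely culprits (SQL Server 2000)
USE tempdb
EXEC sp_spaceused   -- allocated vs. used space in tempdb right now

SELECT spid, status, open_tran, cmd, last_batch, program_name, loginame
FROM master..sysprocesses
WHERE open_tran > 0 OR status = 'runnable'   -- sessions most likely consuming tempdb

-- DBCC INPUTBUFFER(<spid>) then shows the last statement a suspect session ran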
I am looking for information that tells me how fast a DB is growing, in MB and/or percentages, over a given period of time, i.e. weekly, monthly, yearly, etc., either in real numbers or estimates. Does SQL Server 7.0 already store something like this, or do I need to create some code for it?
Or does someone have something like this already coded that they would be willing to share?
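Nothing built into 7.0 keeps this history for you, so here is a sketch of the usual roll-your-own approach (the SizeHistory table name is my own invention): snapshot the file sizes from sysfiles on a schedule and report growth from the accumulated rows.

-- Sketch: weekly size snapshots (SQL Server 7.0); sysfiles.size is in 8 KB pages
CREATE TABLE SizeHistory (
    DBName  sysname,
    LogDate datetime DEFAULT GETDATE(),
    SizeMB  decimal(10,2)
)

-- Scheduled job step, run in each database (or wrapped in sp_MSforeachdb):
INSERT INTO SizeHistory (DBName, SizeMB)
SELECT DB_NAME(), SUM(size) / 128.0
FROM sysfiles

With two snapshots you can compute growth over any window as 100.0 * (new - old) / old.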
In SQL Server 7.0 sp1 (NT 4.0 sp5) I have a server that has a tempdb database that continues to grow. This server contains the database for SMS. Over the weekend, the tempdb had grown so much that it filled up the drive (37GB). I have shrunk it down to a much more reasonable size and put a limit on how large it can grow. I'm noticing today that it is beginning to grow again. Is there a way I can look at the information that is in tempdb right now? I have to think that there are open transactions for some reason that can't commit. I know that tempdb gets cleared out when SQL Server is restarted, but I can't be restarting it this often.
On Microsoft's website, I did find an article about SMS Y2K queries using large amounts of tempdb and failing to complete. The solution in that article (Q234912) is to install SMS SP1, which is already installed.
I haven't been able to find any other useful information yet on this problem. I would appreciate any help you can offer.
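A sketch of what I would look at first on SQL 7 (standard system objects only): which temp objects exist right now, and whether an old open transaction is pinning tempdb.

USE tempdb
SELECT name, crdate FROM sysobjects WHERE type = 'U'   -- temp tables currently allocated

DBCC OPENTRAN (tempdb)   -- oldest active transaction in tempdb, if any

SELECT spid, open_tran, last_batch, program_name
FROM master..sysprocesses
WHERE open_tran > 0      -- sessions holding transactions open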
Hi there. My tempdb is growing from its normal size of 800 MB to 2 GB. I've been shrinking it using DBCC SHRINKFILE / DBCC SHRINKDATABASE. Every time I run the command, it immediately reports that execution completed successfully. However, when I check the disk space, it remains the same, as though no shrinking was done. Can anyone help? Also, it is documented that restarting SQL Server re-creates tempdb from scratch. I've tried that too, and tempdb just won't go back to its normal allocation. My constraint is limited disk space; I would appreciate any good samaritan giving me some help here. Thanks in advance!
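For what it's worth, DBCC SHRINKFILE works against the logical file name from inside tempdb, and it cannot release space that live temp objects or open transactions are still using, which would explain a "successful" shrink that frees nothing. A sketch (tempdev and templog are the default logical names; check sysfiles for yours):

USE tempdb
SELECT name, size / 128 AS SizeMB FROM sysfiles   -- logical names and current sizes

DBCC SHRINKFILE (tempdev, 800)   -- target size in MB
DBCC SHRINKFILE (templog, 100)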
I noticed this morning that my tempdb grows very fast. I have 26 GB on my hard drive, and all of that space was occupied by tempdb; finally the query failed because there was no space left on the drive for tempdb to grow into. The SELECT query is supposed to return about 40,000 rows. I ran the same query on a different server and tempdb there did not grow by even 1 MB. I checked the tempdb options, and "trunc. log on chkpt." is true.
Why is this problem happening? I have only dbo permission on the databases. Do you have any advice? Thanks, Ravi
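Since the identical query behaves itself on the other server, comparing the two execution plans is the cheapest first step; a plan with a large sort, hash, or an accidental cross join will spill into tempdb. A sketch (the query itself is a placeholder), which only needs read permission:

SET SHOWPLAN_TEXT ON
GO
-- paste the problem SELECT here; it is compiled but not executed
GO
SET SHOWPLAN_TEXT OFF
GO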
I have a db application that has been running fine for months. Volumes have been gradually increasing and one day the system locked up.
A stored proc that typically ran in 3 or 4 minutes never returned. The tempdb kept expanding to fill available disk space (100GB). This was the offending statement inside the stored proc:
INSERT INTO cpp (CPPDate, MerchantLink, ReportNumber, FromDate, ToDate,
                 TransThreshold, DayThreshold, CPPType)
SELECT S.CPPDate, S.MerchantLink, S.ReportNumber, S.FromDate, S.ToDate,
       S.OccurThresh, S.DaysThresh, 'D'
FROM #stuff S, Trans T WITH (NOLOCK), Supplier P WITH (NOLOCK)
WHERE T.MerchantNumber IN (SELECT MerchantNumber FROM Merchant
                           WHERE MerchantLink = S.MerchantLink)
  AND T.TranDate >= S.FromDate
  AND T.TranDate <= S.ToDate
  AND T.LoadDate <= @ReportDate2
  AND T.SupplierNumber = P.SupplierNumber
  AND T.IncludeInCpp = 'Y'
  AND P.CountryNumber IN (SELECT CountryNumber FROM REPORTCOMBO
                          WHERE ReportNumber = S.ReportNumber)
GROUP BY CPPDate, MerchantLink, ReportNumber, FromDate, ToDate, OccurThresh, DaysThresh
HAVING COUNT(DISTINCT T.AccountNumber) >= OccurThresh
I realize that a "group by" uses the tempdb, but can't figure out why it would go away rather than returning an error.
I have a workaround in place now: I split this big query into several steps using a cursor (slower and clumsier, but it works). Statistics are updated daily, and I have tried defragmenting and reindexing with no success.
Any thoughts would be appreciated. If you need any more details, please let me know.
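One hedged alternative to the cursor, using only the tables and columns already in the statement: resolve the Merchant lookup into a temp table first and switch to explicit JOIN syntax, so the optimizer aggregates a smaller intermediate set instead of spooling everything into tempdb. A sketch, not tested against this schema:

-- Resolve MerchantLink -> MerchantNumber once, up front
SELECT MerchantNumber, MerchantLink
INTO #merch
FROM Merchant

INSERT INTO cpp (CPPDate, MerchantLink, ReportNumber, FromDate, ToDate,
                 TransThreshold, DayThreshold, CPPType)
SELECT S.CPPDate, S.MerchantLink, S.ReportNumber, S.FromDate, S.ToDate,
       S.OccurThresh, S.DaysThresh, 'D'
FROM #stuff S
JOIN #merch M                 ON M.MerchantLink = S.MerchantLink
JOIN Trans T WITH (NOLOCK)    ON T.MerchantNumber = M.MerchantNumber
                             AND T.TranDate BETWEEN S.FromDate AND S.ToDate
                             AND T.LoadDate <= @ReportDate2
                             AND T.IncludeInCpp = 'Y'
JOIN Supplier P WITH (NOLOCK) ON P.SupplierNumber = T.SupplierNumber
WHERE P.CountryNumber IN (SELECT CountryNumber FROM REPORTCOMBO
                          WHERE ReportNumber = S.ReportNumber)
GROUP BY S.CPPDate, S.MerchantLink, S.ReportNumber, S.FromDate, S.ToDate,
         S.OccurThresh, S.DaysThresh
HAVING COUNT(DISTINCT T.AccountNumber) >= S.OccurThresh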
For the first time in my long life as a SQL DBA, I am seeing behaviour like this: my tempdb database has been growing every second since this morning. It has now reached 30 GB, while the log file is nearly empty (217 MB).
We use SQL Server 2000 Enterprise on Windows 2000 Advanced Server, running Siebel Call Center (version 7.5) with about 300 users.
From time to time some users obtain and hold a huge number of exclusive locks on tempdb extents.
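A sketch for finding the lock holders on SQL 2000, capturing sp_lock's documented result columns into a temp table:

CREATE TABLE #locks (spid int, dbid int, ObjId int, IndId int,
                     Type varchar(4), Resource varchar(16),
                     Mode varchar(8), Status varchar(5))
INSERT #locks EXEC sp_lock

SELECT spid, COUNT(*) AS x_locks
FROM #locks
WHERE dbid = 2 AND Mode LIKE 'X%'   -- exclusive locks in tempdb (dbid 2)
GROUP BY spid
ORDER BY x_locks DESC

DBCC INPUTBUFFER(<spid>) on the worst offenders then shows what they are running.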
Something strange is happening to our SQL Server 2000 database. The tempdb transaction log file continues to grow (quite slowly) for no apparent reason. We have it in simple mode, and I have tried a manual CHECKPOINT command and a manual shrink (of the log file only). There are no unusual SQLs (large or small) going on; a "heavy hitter" would make it grow fast, not 10 MB every 30 minutes or so. This server has been in production for over a year with no similar issues. Has anyone encountered a similar situation?
This started (as far as we can tell) sometime yesterday. It is growing about 200 MB a day and is up to 600 MB now, with all but 8 MB "used". For now we added space, but of course that is not a long-term solution. The two data files (each with a 200 MB initial allocation) have never gone much above 100 MB each used (they are each about 100 MB now and have been that way for several days, though they have shrunk and grown). There are about 30 small to medium sized "regular" databases on this instance.
Any help would be appreciated, especially if there is a way we can fix this without bouncing the engine (I know, wishful thinking...). The OS is Windows 2000 SP3; SQL Server is at 2000 SP3a. Thanks in advance!
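Two quick checks worth running while it grows (SQL 2000, standard commands): whether anything is preventing log truncation, and how fast the log actually fills.

DBCC SQLPERF (LOGSPACE)   -- log size and percent used, one row per database
DBCC OPENTRAN (tempdb)    -- oldest active transaction in tempdb, if any

If OPENTRAN shows a transaction that has been open since yesterday, killing that spid (rather than bouncing the engine) may release the log.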
SQL Server 2008: the report server's temporary database ("ReportServerTempDB") is growing enormously, over 20 GB in 3 days. All our reports are driven by stored procedures that live on a different server from ReportServerTempDB. We have no report subscriptions.
We store no snapshots. All we do is run the reports.
I don't know where to start investigating the issue.
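A hedged place to start: Reporting Services parks session state and cached report executions in ReportServerTempDB even with no subscriptions or snapshots, so find out which tables actually hold the space (sys.dm_db_partition_stats is standard in SQL Server 2008):

USE ReportServerTempDB
SELECT OBJECT_NAME(object_id) AS table_name,
       SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_mb DESC

If the session/chunk tables dominate, the session timeout and caching settings in the report server configuration are the next thing to look at.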
The tempdb transaction log file keeps growing. The database server is new, and the transaction log was pre-sized to 1 GB at installation. After installing a number of databases, the log file grew over a day to 38 GB. Issuing a manual CHECKPOINT was the only way to free some space so it could be shrunk back to a usable size. The usage of the file is still going up.
I am struggling to find what process is causing the log to be used so heavily. Looking at log_reuse_wait_desc for tempdb returns "NOTHING", and tempdb itself isn't being used very much or growing in size.
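A sketch for SQL Server 2005 and later: the transaction DMVs can show which sessions are actually writing to tempdb's log.

SELECT st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions dt
JOIN sys.dm_tran_session_transactions st
     ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = 2   -- tempdb
ORDER BY dt.database_transaction_log_bytes_used DESC

Internal activity such as version-store cleanup may not show a session here, but a user session generating gigabytes of tempdb log will.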
I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's been going fine.
However, I'm not using my old SQL 6.5 backup scripts, which, when the backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking the log size.
My SQL 7 server is set up with a Maintenance Plan which does everything, including backup, but the log file seems to be growing and growing. I'm up to 4.5 gigs now, for a database with a data file of 3.5 gigs.
How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?
I have merge replication set up for 6 SQLCE subscribers. I have noticed that the MSmerge_tombstone table is growing at a fast rate regardless of any changes to the data in the database. It seems to be consistently adding 50 rows of data to the table every 2 minutes. As the table grows, it causes the SQLCE subscribers to fail with the following messages:
ERROR: -2147467259 SQL Server Reconciler failed: Run
ERROR: -2147200925 : Failed to enumerate changes in the filtered articles.
ERROR: 0 : The merge process timed out while executing a query. Reconfigure the QueryTimeout parameter and retry the operation.
I'm sure that this is due to the size of the MSmerge_tombstone.
Should the MSmerge_tombstone table grow at this rate? 36,000 rows every 24hrs!
I understand there is the sp_mergecleanupmetadata stored procedure, but if I use it I have to reinitialise all the subscribers. Does that mean they will have to pull down the whole subscription again?
I have since changed the subscription expiration setting to 8 days instead of "never expires", but we're still getting 50 rows added every 2 minutes.
SQL Server 2000 SP3. Hope someone can shed some light on this for me.
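A sketch to see which article is actually producing the tombstone rows (SQL 2000 merge metadata tables; run in the published database):

SELECT A.name AS article, COUNT(*) AS tombstone_rows
FROM dbo.MSmerge_tombstone T
JOIN dbo.sysmergearticles A ON A.nickname = T.tablenick
GROUP BY A.name
ORDER BY tombstone_rows DESC

Sampling this a few times an hour should show whether the 50-rows-per-2-minutes pattern comes from a single article, which would point at whatever process keeps deleting from that table.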
If you are familiar with Crystal reports or Visual basic, you may be familiar with the Rate and Pmt functions.
I need to duplicate them in SQL Server 7.
Anybody have code for this already? I hate re-inventing the wheel.
More (unnecessary) details: I have a client who has handed me the formula that I need to use for calculating interest rates. Unfortunately, the formula was written in Crystal Reports, so now I need to pick it apart and do the work that CR does automatically. Any help?
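SQL 7 has no user-defined functions, so a stored procedure with an OUTPUT parameter is the usual stand-in. Below is a sketch of Pmt only, using the standard annuity formula that VB's Pmt implements for payments at period end; Rate has no closed form and needs an iterative solver, so it isn't shown. The procedure name usp_Pmt is my own.

-- Payment per period for principal @pv at periodic rate @rate over @nper periods
CREATE PROCEDURE usp_Pmt
    @rate float,          -- periodic rate, e.g. annual rate / 12 for monthly
    @nper int,            -- total number of payments
    @pv   float,          -- present value (loan principal)
    @pmt  float OUTPUT
AS
BEGIN
    IF @rate = 0
        SET @pmt = @pv / @nper    -- zero-interest edge case
    ELSE
        SET @pmt = @pv * @rate / (1 - POWER(1 + @rate, -@nper))
END

For example, 10,000 over 36 months at 8% APR:

DECLARE @p float
EXEC usp_Pmt @rate = 0.0066667, @nper = 36, @pv = 10000, @pmt = @p OUTPUT
SELECT @p   -- approximately 313.36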
"tempdb is skipped. You cannot run a query that requires tempdb"?
We're running a .Net web application with a SQL Server 2000 backend, and we get the error intermittently. Restarting the SQL Server service seems to fix it, as it causes tempdb to be rebuilt, but this isn't a long term solution. Any direction or hints would be greatly appreciated. Thanks! - Mike
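One hedged check before the next restart: this message can appear when tempdb failed to come online during startup, so confirm what state SQL Server thinks tempdb is in, and review the errorlog from the last service start for tempdb errors (disk space or permissions on the tempdb path are the usual suspects).

SELECT DATABASEPROPERTYEX('tempdb', 'Status')   -- expect ONLINE

SELECT name, status FROM master..sysdatabases WHERE name = 'tempdb'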
Hi, I want to store a tax rate in my tables. I set the data type to float; I want 4 decimal places, and the data in the table has 4 decimals, but when I run a query in Query Analyzer it returns 4.4999999999999998E-2 instead of 0.045. How can I fix this?
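That is float behaving as designed: it is an approximate type, and 0.045 has no exact binary representation. A fixed-point decimal column stores exactly four decimal places (the table and column names below are placeholders):

ALTER TABLE Rates ALTER COLUMN TaxRate decimal(9,4)

-- Or, if the column must stay float, round at query time:
SELECT CAST(TaxRate AS decimal(9,4)) FROM Rates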
I have installed a SQL Server diagnostic tool for evaluation. It warns me that, for example, "Procedure Cache hit rate is 15%". Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server finds a required plan in the procedure cache and the total number of times it looks for a plan falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
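If you want to verify the tool's number against SQL Server's own counters, a sketch (SQL 2000's sysperfinfo; ratio counters come as a value/base pair):

SELECT counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Cache Manager%'
  AND counter_name IN ('Cache Hit Ratio', 'Cache Hit Ratio Base')

-- hit rate percent = 100.0 * [Cache Hit Ratio] / [Cache Hit Ratio Base]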
I've got a statistics table that I've been writing to for about two years now. Every Saturday night, a size snapshot (in MB) of each DB file is taken and dumped into this table, and I'm then emailed a copy for that week.
Now, I'm trying to figure out what the fastest growers are. Here's the table DDL:
What I'm trying to figure out is how to query the average monthly and yearly growth percentages per DB on the MDFSize column.
I'm usually pretty good at this sort of thing, but I just can't seem to wrap my head around how to solve this issue. I'm not having a very good math day.
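Since the DDL isn't shown above, here is a sketch against assumed names (columns DBName, SnapshotDate, MDFSize in a table called FileSizeStats): average each month, then self-join each month to the previous one for a growth percentage. Swapping the month arithmetic for plain YEAR() comparisons gives the yearly version.

SELECT cur.DBName, cur.yr, cur.mo,
       100.0 * (cur.avg_mb - prev.avg_mb) / prev.avg_mb AS growth_pct
FROM (SELECT DBName, YEAR(SnapshotDate) AS yr, MONTH(SnapshotDate) AS mo,
             AVG(MDFSize) AS avg_mb
      FROM dbo.FileSizeStats
      GROUP BY DBName, YEAR(SnapshotDate), MONTH(SnapshotDate)) cur
JOIN (SELECT DBName, YEAR(SnapshotDate) AS yr, MONTH(SnapshotDate) AS mo,
             AVG(MDFSize) AS avg_mb
      FROM dbo.FileSizeStats
      GROUP BY DBName, YEAR(SnapshotDate), MONTH(SnapshotDate)) prev
  ON prev.DBName = cur.DBName
 AND prev.yr * 12 + prev.mo = cur.yr * 12 + cur.mo - 1   -- previous calendar month
WHERE prev.avg_mb > 0
ORDER BY cur.DBName, cur.yr, cur.mo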
I need to pick up a tax rate that is stored in a one-record table. I would like to avoid using a CROSS JOIN. Is there a way to SELECT the record and set a variable equal to the tax rate, so I can pick up the rate in another SELECT statement for each record?
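A sketch of exactly that (table and column names are placeholders); with a one-record table the assignment SELECT is safe, and the variable can then be used anywhere later in the batch:

DECLARE @TaxRate decimal(9,4)
SELECT @TaxRate = TaxRate FROM dbo.TaxTable   -- the one-record table

SELECT OrderID, Amount, Amount * @TaxRate AS Tax
FROM dbo.Orders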
I have a 32 bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, is reporting the Procedure Cache to be 384MB and a Hit Rate of 75% on a fairly regular basis. Sometimes the Procedure Cache increases to 495MB and a Hit Rate of 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation, however this is our only cluster.
DBCC PROCCACHE output:
num proc buffs        = 64889
num proc buffs used   = 1135
num proc buffs active = 1135
proc cache size       = 2896
proc cache used       = 364
proc cache active     = 364
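Two things worth noting, hedged. First, on 32-bit SQL Server 2005 the plan cache cannot live in AWE memory, so with 10 GB and AWE enabled the procedure cache is confined to the normal virtual address space; its size is managed automatically and cannot be set directly. Second, a low hit rate is often ad hoc plan churn rather than a too-small cache, which the DMVs can confirm:

-- What is in the plan cache, and is anything being reused? (SQL 2005)
SELECT TOP 20
       cp.usecounts,
       cp.objtype,
       cp.size_in_bytes / 1024 AS size_kb,
       st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
ORDER BY cp.usecounts ASC   -- a sea of single-use plans suggests ad hoc churn

Parameterizing the churny statements (or moving them into stored procedures) usually lifts the hit rate more than any memory knob.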
Hi all, I found that when I trained my data mining models, the model coverage rate is very low (in my case, the training data set has 82 rows, but only 25 cases occur in the models I trained). How can I improve the coverage rate to improve the quality of the models, if that is possible in SQL Server 2005? I am using SQL Server 2005.
We have asynchronous database mirroring on SQL Server 2005 SP2 Enterprise Edition / Windows 2000 Advanced Server. We noticed that the log send rate is quite low (average 1.3 MB/sec) in most cases, whereas "Log Bytes Flushed/sec" is high (1.4 MB/sec); as a result, the log send queue keeps increasing and eventually consumes all the transaction log space. Our disk queue length is always around 0.01, and the principal and mirror servers are on the same local LAN.
I tried a low-end server and a high-end server, and in both cases the log send rate is approximately 1.3 MB/sec (maximum 4 MB/sec).
Is there any limit on the log send rate?
How can we improve the log send rate? Since both servers are on the local LAN, network bandwidth does not seem to be an issue.
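A sketch for watching the send side directly on the principal (SQL 2005 performance counters; cumulative counters need two samples over an interval to yield a per-second rate):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Log Bytes Sent/sec', 'Log Send Queue KB', 'Send/Receive Ack Time')

If 'Send/Receive Ack Time' is high while the bandwidth sits idle, the ceiling is round-trip latency per log block rather than raw throughput.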
I have a procedure that requires picking up the Rate based on Effective Date. This is what I have so far:
SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON  SHIP.ProductID = SHPD.ProductID
    AND SHIP.Shipper   = SHPD.Shipper
    AND Max???(SHIP.EffectiveDate) <= SHPD.ReceivedDate
Because there can be more than one Shipper record, I would somehow need to pick up the maximum EffectiveDate in each case. I realize I cannot use the MAX aggregate in the JOIN, and I'm not sure where to go from here. On the mainframe I used a LOOKUP function that would return the correct EffectiveDate. Help would be appreciated.
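A sketch using a correlated subquery in place of the Max??? idea: per shipment row, pick the single most recent EffectiveDate that falls on or before ReceivedDate, and join on that.

SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON  SHIP.ProductID = SHPD.ProductID
    AND SHIP.Shipper   = SHPD.Shipper
    AND SHIP.EffectiveDate =
        (SELECT MAX(S2.EffectiveDate)
         FROM tblShippers S2
         WHERE S2.ProductID = SHPD.ProductID
           AND S2.Shipper   = SHPD.Shipper
           AND S2.EffectiveDate <= SHPD.ReceivedDate)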
SQL Server 2005, Xeon CPU 3.0 GHz, 2.0 GB memory, RAID.
Two tables: HIS_HTTP_ONLINE_LOG (partitioned) for history data, and REL_HTTP_ONLINE_LOG (not partitioned) for each day's data. They have the same structure:
CREATE TABLE HIS_HTTP_ONLINE_LOG (
    ID         numeric(20,0) NOT NULL,
    USERID     varchar(32)   NOT NULL,
    USERIP     varchar(16)   NOT NULL,
    USERPORT   numeric(10,0) NULL,
    OBJECTIP   varchar(16)   NULL,
    OBJECTPORT numeric(10,0) NULL,
    HTTPURL    varchar(256)  NULL,
    HTTPHOST   varchar(128)  NULL,
    HTTPDNS    varchar(128)  NULL,
    VISITIME   numeric(10,0) NULL,
    STARTIME   datetime      NOT NULL,
    ENDTIME    datetime      NOT NULL
)
.......
SELECT * INTO REL_HTTP_ONLINE_LOG FROM HIS_HTTP_ONLINE_LOG WHERE 1 = 2
There are 5 indexes on HIS_HTTP_ONLINE_LOG; there are no indexes on REL_HTTP_ONLINE_LOG. About 5,000,000 records land in REL_HTTP_ONLINE_LOG every day; at night they are moved into HIS_HTTP_ONLINE_LOG automatically, which keeps the last 90 days of data.
My operations:
1: ALTER DATABASE DB SET RECOVERY SIMPLE
2: EXEC SP_DBOPTION DB, 'select into/bulkcopy', 'TRUE'
3: INSERT INTO HIS_HTTP_ONLINE_LOG SELECT * FROM REL_HTTP_ONLINE_LOG
4: TRUNCATE TABLE REL_HTTP_ONLINE_LOG
Question: why does step 3 cost so much time (about 1 hour), and how can I reduce the transaction logging during this period? Could you give me some suggestions? Thanks!
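For what it's worth: in SQL 2005 an INSERT ... SELECT into an existing table is fully logged regardless of the recovery model or the 'select into/bulkcopy' option, and every row also has to maintain the five indexes on HIS_HTTP_ONLINE_LOG, which is why step 3 crawls and bloats the log. A hedged sketch of one alternative under SIMPLE recovery: move the rows in batches, so each short transaction's log space becomes reusable at the next checkpoint (the batch size is arbitrary, and OUTPUT INTO assumes no triggers or foreign keys on the history table).

SET NOCOUNT ON
WHILE 1 = 1
BEGIN
    DELETE TOP (100000)
    FROM REL_HTTP_ONLINE_LOG
    OUTPUT deleted.* INTO HIS_HTTP_ONLINE_LOG   -- move and remove in one statement

    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT   -- let SIMPLE recovery reclaim the log just written
END

Dropping or disabling the history table's nonclustered indexes before the nightly move and rebuilding them afterwards is the other lever worth testing.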