In my environment, a maintenance plan is configured on one of the servers, and while running DBCC CHECKDB on a database of around 200GB, the log file usage of tempdb keeps increasing and causes the maintenance job to fail.
What can I do to make the maintenance job run successfully? The tempdb database is only 50GB and its recovery model is set to SIMPLE. It cannot be grown because the mount point it resides on is only 50GB.
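Two options that are often suggested for this situation, sketched here with a hypothetical database name: WITH ESTIMATEONLY reports how much tempdb space the full check would need, and WITH PHYSICAL_ONLY runs a lighter-weight check that uses far less tempdb.

DBCC CHECKDB ('MyBigDb') WITH ESTIMATEONLY;   -- estimate tempdb space required
DBCC CHECKDB ('MyBigDb') WITH PHYSICAL_ONLY;  -- lighter check, much less tempdb use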
We see the following message in our error log: "WARNING: Clearing procedure cache to free contiguous memory." It is accompanied by fairly intensive CPU activity. We get this roughly once per working day.
Anyone have any idea why, and what we can do to stop this?
I have some free space available in the database. I tried DBCC SHRINKDATABASE and DBCC SHRINKFILE, but I am not getting the disk space back; the amount of free space in the database sometimes even increases.
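A per-file breakdown like the following (a sketch, assuming SQL Server 2005 or later; run it in the database in question) shows how much of each file is actually free, which should make clear whether the shrink has anything to release:

SELECT name,
       size / 128 AS size_mb,                                      -- size is in 8KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;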
When I try to free a previously registered DLL, the server starts executing the query, it runs for a number of minutes, and nothing happens.
Here is an example:
sp_addextendedproc 'xp_test', 'test.dll'  -- works fine
exec master..xp_test '999'                -- works fine
sp_dropextendedproc 'xp_test'             -- works fine
dbcc test(free)                           -- does not work
I've tested it on:
SQL MSDE SP3 -- at first I was getting an access violation: spid 51 Exception 0xc0000005 EXCEPTION_ACCESS_VIOLATION at 0x00000000
SQL MSDE SP4 -- on this SP I have the same behaviour as on 2005: infinite query execution time
SQL Server 2005 Standard -- infinite query execution time
SQL Server 2005 Standard SP2 -- infinite query execution time
I can't understand this, because it started after changing my machine.
It worked without any problems on the previous one, and I've used this method very often on the same DLL, which hasn't changed.
In chapter 2 of Microsoft Press's "Inside Microsoft SQL Server 2005 - T-SQL Querying", an example showing how to use the dynamic management view sys.dm_exec_query_optimizer_info begins with the following code:
SET NOCOUNT ON;
USE Northwind; -- use your database name here
DBCC FREE PROCCACHE; -- empty the procedure cache
GO
When I copy, paste and execute this query, I get the message:
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'PROCCACHE'
How can I correct the syntax?
PS. The rest of the example seems to run properly even without the DBCC FREE PROCCACHE line.
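Presumably the book meant the single-word form of the command, which does parse:

DBCC FREEPROCCACHE; -- no space between FREE and PROCCACHE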
That is a SqlException I got at a... at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(). Anyone have an idea what THAT means? How do I cause it? How can I work around it?
Has anyone had experience running DBCC in a 24x7 environment? The only time that I can run it is after a server crash. I have had the server lock up when the results page returned. It is almost impossible to go down for more than an hour, because we have international clients. The database is 1.2 GB, but it is in constant use because we run reports from Crystal Info server and through an ASP for client use. I have considered dumping the database to another site, running DBCC, copying it back to the original, and restoring the logs up to the current time. Any suggestions will be greatly appreciated.
One of our databases seems to be looking dodgy as some scheduled jobs are failing, but DBCC CHECKDB is no use since it has been running for over half an hour without giving me any results, just the spinning globe.
How do I find out what is wrong without resorting to backups?
I am trying to reindex a large table and cannot, because there isn't enough room in the primary filegroup. The database consists of one physical file in the primary filegroup, and the table is over 50% of the size of the database. When the table is less than 50% of the size of the database, I do not see this problem.
BTW, the only index on the table is the primary key, which consists of two columns: one an integer and the other a datetime.
It seems as if SQL Server needs 1x the current size of the table to be free in order to reindex. Is this the case?
It is not an option for me to allow the database to autogrow. Is there anything else I can do?
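One workaround I have seen suggested (a sketch only, using SQL 2000-era syntax and hypothetical table and column names) is to rebuild the index with the sort done in tempdb; as far as I understand, this moves the sort space out of the primary filegroup, although the rebuilt index copy itself still needs room there:

CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id, EventDate)     -- the int + datetime key described above
    WITH DROP_EXISTING, SORT_IN_TEMPDB  -- sort runs go to tempdb, not PRIMARY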
SQL 2000 Server, SP4, a database with a 17GB log file. It has been backed up, so all transactions should be validated; now the physical file size needs to be shrunk, because I need the disk space plus I want to speed up the backup process.
http://support.microsoft.com/kb/272318/ tells me what to do, but not where to do it.
So I need to run this code: DBCC SHRINKFILE(pubs_log, 2)
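My understanding is that it is simply run from a query window (Query Analyzer or Management Studio) in the context of the database that owns the log file, along these lines (using the KB article's pubs example names):

USE pubs                       -- switch to the database that owns the log file
GO
DBCC SHRINKFILE (pubs_log, 2)  -- target size in MB
GO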
I have a very strange problem. I have installed MS WS2003 SP2 and MS SQL 9.0.3054 SP2. I have a database dbTraceIT with the data file on drive D and the log file on drive E. If I run the T-SQL command:
use dbTraceIT
go
dbcc checkdb

or an Integration Services package (task: Check Database Integrity Task, developed with MS VS), this command or task generates hard disk errors on drive D. The chkdsk tool reports errors once the hdd index verification has completed. After repairing the hdd errors, if I run the checkdb T-SQL command the situation repeats. Question: is this a bug or something different? Do you see similar disk errors if you use e.g. Integration Services packages (for instance index rebuild or whatever)?
Regards, Dariusz
PS: Steps to reproduce:
1. Find any DB on your MS SQL.
2. Run chkdsk on the hard drive where the DB's data file is stored. Verify that everything is OK.
3. Run the T-SQL command:
   use dbTraceIT
   go
   dbcc checkdb
4. Run chkdsk again. It should show hdd errors.
I'm running a simple DBCC DBREINDEX ('myTable') and I receive the following error:

"Server: Msg 169, Level 15, State 2, Line 2
A column has been specified more than once in the order by list. Columns in the order by list must be unique.
DBCC execution completed. If DBCC printed error messages, contact your system administrator."

I can successfully reindex other tables in this database. I thought that perhaps I had objects in the database that ended up with the same name, but I've pretty much ruled that out. Any suggestions?

Thanks,
John D. Morris
mailto://jmorris_42@hotmail.com
All the recommendations I see in Microsoft docs are to limit the use of Query Notifications (QNs) to notifying connected clients when changes to mostly-static reference or configuration data occur, and to keep the number of overall query forms in play and connected clients to a minimum. Has anyone had any experience with a more integral use of QNs and Service Broker from a web app to notify n web servers (a farm) of updates to data that may be updated concurrently and quite frequently, or with a system where the technique is used extensively with lots of different query forms?
Having just archived quite a bit of data from the main Production DB, I now have around 15% free, reclaimable space sat in the data file.
I'm reluctant to run DBCC SHRINKFILE, as that apparently causes a lot of index fragmentation, which will cause performance issues. How else can the space be given back to the OS?
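One option I have read about (a sketch, with a hypothetical logical file name) is a truncate-only shrink, which releases only the free space at the end of the file and does not move any pages, so it should not fragment indexes; the trade-off is that it may release less than the full 15%:

DBCC SHRINKFILE (MainProdData, TRUNCATEONLY);  -- releases trailing free space only, no page movement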
I use this code in a utility procedure (for performance testing) but it is really slow.
For example, a session with three events is taking 5 seconds to complete this query:
DECLARE @xml xml =
(
    SELECT CAST(xet.target_data AS xml)
    FROM sys.dm_xe_session_targets AS xet
    JOIN sys.dm_xe_sessions AS xe
        ON xe.address = xet.event_session_address
    WHERE xe.name = @name
);
My program is copying several hundred thousand records from an Access DB to a SQL Server 7 DB. It has to do some conversions and lookups along the way. At seemingly random times, a DBCC job gets started up by the system and locks up my program.
Any thoughts as to why it happens? What I can do to detect/prevent it so that my program doesn't lock up?
For reasons beyond the scope of my question: is there a way to run this command within a stored procedure from a low-privileged user login? I can grant the login the "db_ddladmin" role and the proc runs, but I'd rather not give out that level of permission to what is basically a glorified web-access login.
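A minimal sketch of the kind of thing I have in mind, assuming the command is database-scoped (the procedure name, the placeholder DBCC command, and the grantee are all hypothetical); as far as I know, server-scoped commands would need certificate signing instead:

CREATE PROCEDURE dbo.usp_MaintenanceTask
WITH EXECUTE AS OWNER        -- runs under the owner's (e.g. dbo's) permissions
AS
BEGIN
    DBCC UPDATEUSAGE (0);    -- placeholder for the actual command
END;
GO
GRANT EXECUTE ON dbo.usp_MaintenanceTask TO WebLogin;  -- hypothetical low-privileged user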
On one server I'm having an issue with it having a very small procedure cache.
Server has 60GB of RAM assigned to its min and max server memory settings, optimise for ad hoc workloads is disabled.
The procedure cache on the server is currently 2.41MB with only 6 objects inside, all related to the mssqlsystemresource database. I can see plans dropping in for user databases, but as soon as a proc has finished, its plan is removed from the cache.
The buffer cache is around the 17GB mark and free pages around the 42GB mark, so around 60GB is used with a bit in stolen pages, but no proc cache.
All other servers in the environment are reporting over 8GB of proc cache in use, which is healthier.
Using Spotlight to monitor all of this.
What's wrong with this one server that it won't keep plans in cache?
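For what it's worth, a DMV breakdown along these lines confirms what Spotlight is reporting:

-- Plan cache contents by object type, with total size in MB
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;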
We are troubleshooting a performance problem: the test result is slow the first time, but subsequent runs are faster. Logging out of the application and logging back in (connecting on a new database session) did not clear the buffer cache as I thought it would. When does the database clear the buffer cache? Is it not per database session?
I can issue CHECKPOINT and then run DBCC DROPCLEANBUFFERS to clear the buffers. But since we are testing from the application, do we need to run these commands via application code to clear the buffer per database session, or can we run them from a Management Studio session?
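For reference, the sequence is just this; as far as I understand, both commands act on the whole instance (the buffer pool is shared), not per session, so running them once from any sysadmin connection such as Management Studio should be enough:

CHECKPOINT;             -- flush dirty pages to disk so they become clean
DBCC DROPCLEANBUFFERS;  -- evict all clean pages from the buffer pool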
I have a tempdb split into 4 files (5 if you include the log).
Autogrowth is disabled on the mdf/ndf files so that they can be used round robin (1 file per logical CPU).
Is there a way to be alerted when there is x% of free space left?
I know how to check the free space via T-SQL, but I want to be alerted. I could run a SQL job that reports the free space and sends a Database Mail message if it is under x%, but I wondered if there was a built-in (or better) method?
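The job-step approach I have in mind would look roughly like this (a sketch; the 10% threshold, mail profile, and recipient are placeholders):

USE tempdb;
DECLARE @free_pct decimal(5, 2);

-- Percentage of tempdb data-file space that is still unallocated
SELECT @free_pct = 100.0 * SUM(unallocated_extent_page_count)
                         / SUM(total_page_count)
FROM sys.dm_db_file_space_usage;

IF @free_pct < 10
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',              -- hypothetical Database Mail profile
        @recipients   = 'dba@example.com',  -- hypothetical recipient
        @subject      = 'tempdb free space low',
        @body         = 'tempdb free space has dropped below 10%.';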
I wanted to demonstrate something about the CXPACKET wait type. For the purpose of this demo I created a query in the AdventureWorks database that uses a parallel query plan, an extended events session that captures the wait statistics for a single session, and a query that shows the extended events data. I ran it and it worked fine. Then I dropped and recreated the event session (to clear the data), and in a new window I wrote a transaction that updated the table, followed by a WAITFOR statement, so the first query would be blocked for a few seconds, and ran the whole thing again. The SELECT statement was blocked as expected (it ran for 9 seconds instead of the 1 second it took without the blocking), but the wait stats that I got were almost identical to those I got without blocking the query.
-- Query that uses a parallel query plan
WITH MyCTE AS
(
    SELECT TOP 50 * FROM Sales.SalesOrderHeader
)
SELECT TOP 10000 *
FROM Sales.SalesOrderHeader, MyCTE
ORDER BY NEWID();
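The event session was along these lines (a sketch; the session name, the traced session id, and the ring_buffer target are my placeholders for what the demo used):

CREATE EVENT SESSION WaitsForSession ON SERVER
ADD EVENT sqlos.wait_info
(
    WHERE sqlserver.session_id = 53   -- spid of the demo connection
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION WaitsForSession ON SERVER STATE = START;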
Recently, maintenance was done moving some tables from the original filegroup on one drive of our SQL Server 2012 Standard Edition 64-bit to another filegroup created on a separate physical drive. I was expecting the full amount of data moved to the secondary filegroup to show up as unused space in the primary filegroup, but that doesn't seem to be the case. Do I have to do anything after the move to release that space, not to disk, but to the database as unused?
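A per-filegroup breakdown like this (a sketch using the allocation DMVs) should show where the reserved and used pages actually sit after the move:

SELECT fg.name                    AS filegroup_name,
       SUM(au.total_pages) / 128  AS reserved_mb,
       SUM(au.used_pages)  / 128  AS used_mb
FROM sys.allocation_units AS au
JOIN sys.filegroups AS fg
    ON fg.data_space_id = au.data_space_id
GROUP BY fg.name;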
I have about 50 databases that are only accessed once a month and on a predictable schedule. Would it free up resources on the server if they were kept offline and brought online only when needed?
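For context, taking a database offline and bringing it back is one statement each way (a sketch with a hypothetical database name):

ALTER DATABASE SalesArchive01 SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...and on the scheduled day:
ALTER DATABASE SalesArchive01 SET ONLINE;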
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages,
       CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors;

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
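In case it's relevant, this is the kind of query I can run to see which memory clerks are consuming the rest (a sketch; pages_kb assumes SQL Server 2012's DMV layout):

-- Top memory consumers by clerk type
SELECT TOP (10)
       type,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;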
I am doing performance tuning of SPs/queries in a Dev-Test environment.
I found that SQL Server caches the plan between successive executions. So if I test/execute an SP 10 times, after the 1st or 2nd execution SQL Server will pull the plan from the cache rather than compiling it again from scratch. That means I am not getting a true cold-cache measurement.
I found these two commands:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But they say that executing the above commands might interfere with other people executing queries/SPs on this server.
They also say that: Freeing the plan cache causes, for example, a stored procedure to be recompiled instead of reused from the cache. This can cause a sudden, temporary decrease in query performance.
Part of the query was using dynamic SQL executed with the EXEC command.
I replaced that with sp_executesql.
How can I start each SP test run with a fresh/blank cache?
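One narrower approach I have read about (available from SQL Server 2008 onward, as far as I know) is to free only the plan under test rather than the whole cache; a sketch with a hypothetical procedure name:

DECLARE @plan_handle varbinary(64);

-- Find the cached plan for the procedure under test
SELECT @plan_handle = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%usp_MyTestProc%';   -- hypothetical procedure name

IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle);   -- frees just this one plan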
Question: why am I getting 428 pages for which there is no corresponding DB object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
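For reference, the comparison I'm describing is roughly equivalent to this (my reconstruction, not the exact script I ran):

-- Buffer pool pages for this database with no matching allocation unit
SELECT COUNT(*) AS orphan_pages
FROM sys.dm_os_buffer_descriptors AS bd
LEFT JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
WHERE bd.database_id = DB_ID()
  AND au.allocation_unit_id IS NULL;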
I'm getting an alert which states that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers, though I'm not sure how to correctly check this.
I ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'
This gives me a Buffer Cache Hit Ratio cntr_value of 9, though it is constantly dipping between 3 and 3000 and is never steady, and I'm unsure if this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
Which gives me the Page life expectancy of 209061.
Should these values cause concern, and is this a normal Buffer Cache Hit Ratio? It is constantly swinging from high to low from what I can see. These scripts were pulled from another forum, and I'm assuming they show the correct values.
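One thing I have since read, which might explain the odd raw numbers: the 'Buffer cache hit ratio' counter is apparently only meaningful when divided by its companion 'Buffer cache hit ratio base' counter, roughly like this:

SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.[object_name] LIKE '%Buffer Manager%';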
We are using the Cache Transform in our project. While doing the cache transformation, our disk free space goes to 0 MB and the SSIS package execution does not complete even after 3 hours. Initially we have around 34 GB of free space on the C: drive. Our server has 64 GB of RAM. We are caching the data from a table which contains around 21 million records. We changed the BLOBTempStoragePath and BufferTempStoragePath properties of the Data Flow task of the SSIS package in which we are using the Cache Transform.