I'm having an argument with our infrastructure architect, who has just gone and bought lots of SSD drives to use for our tempdb data and log files. Sounds great, doesn't it? There is a catch, though: his plan is to add the disks to the two available slots in each blade in a RAID 0+1 configuration, effectively giving you one usable drive, and to put both the data and log files on that one disk.
I then pointed out that SQL Server best practice is to host the tempdb data and log files on two separate drives to reduce contention. The architect basically said that because this isn't spinning disk, read/write contention between the drives isn't an issue. I don't agree, and wanted to get some opinions from the community. I'm still advising that two separate disks should be used, but someone just went and spent £80k ($150k) on SSDs and doesn't want to back down...
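For reference, the separated layout being argued for is just two file moves (a minimal sketch; the drive letters S: and T: are placeholders, and tempdb file moves only take effect after a service restart):

-- Repoint the tempdb data and log files to separate volumes (placeholder paths).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempLog\templog.ldf');
-- Restart the SQL Server service for the new paths to take effect.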
I am currently investigating a high average write time issue (145 ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb in that:
1. Data files = number of physical cores.
2. Data files and log files are on separate partitions, away from the other databases.
3. Tempdb is presized, and no incremental file growths appear to be happening with any frequency.
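A quick way to verify those points (a sketch using standard catalog views and DMVs):

-- Core count, to compare against the number of tempdb data files.
SELECT cpu_count FROM sys.dm_os_sys_info;
-- File locations, sizes, and growth settings for tempdb.
SELECT name, physical_name, size * 8 / 1024 AS size_mb, growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');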
We have SharePoint 2012 set up on other SQL Servers with tempdb configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why it's confusing. File I/O auditing on the partitions themselves shows that I/O is very fast on the partitions hosting the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excessive use of tempdb with operations taking a long time to resolve.
I proposed that on a new server we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it did not work the same way as it did when you had RAID 5 etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
So we have new servers that are going to be installed with SQL 2012 and I'm debating the wisdom of splitting tempdb with multiple files.
I know it's a myth that performance automatically improves if you split it into a number of files based on processors, but I'm debating the wisdom of putting a file on each of my data / log file drives.
For instance, I have a server with a C: drive (OS), D: drive (Data for system DBs and install of programs - 458 GB), an F: drive for user DB data files (767 GB), and a J: drive for log files (255 GB).
Obviously no files are going on C:. I'm debating whether we should even leave the system DBs on the D: drive, given that on our current 2K8 servers we end up with Memory.dmp files overflowing the D: drives, as well as .cab files and other install/update files that tend to collect on that drive over the years.
But if we leave the system DBs on D:, I'm wondering if adding a second tempdb file to F: and a third to J: will improve query performance or not.
I have a tempdb split into 4 files (5 if you include the log).
Autogrowth is disabled on the mdf/ndf files so that they can be used round robin (1 file per logical CPU).
Is there a way to be alerted when there is x% of free space left?
I know how to check the free space via T-SQL, but I want to be alerted. I could run a SQL job that reports the free space and sends a Database Mail message if it's under x%, but I wondered if there was a built-in (or better) method?
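A minimal sketch of the job-based approach described above, for a scheduled Agent job. The mail profile and recipient are placeholders, the 10% threshold is arbitrary, and total_page_count assumes SQL Server 2012 or later:

-- Alert when tempdb free space falls below a threshold.
DECLARE @free_mb int, @total_mb int;
SELECT @free_mb  = SUM(unallocated_extent_page_count) * 8 / 1024,
       @total_mb = SUM(total_page_count) * 8 / 1024
FROM tempdb.sys.dm_db_file_space_usage;

IF @free_mb * 100.0 / @total_mb < 10  -- the x% threshold
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',          -- placeholder profile
        @recipients   = 'dba@example.com',   -- placeholder recipient
        @subject      = 'tempdb free space below 10%',
        @body         = 'Check tempdb space usage.';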
I was in the process of creating additional TempDB.ndf files, and received an error saying they already exist. I checked the location and it was empty, nothing to see here. So I looked in sys.master_files and there are several tempdb files listed in various locations, all of which come up empty.
So the files are listed as online in sys.master_files, but they do not exist on the server. I restarted SQL services but it did not change anything.
We had someone create an extra data file and log file for tempdb, so we currently have two data files and two log files. Is it possible to delete the newly created data and log files? If I just delete the physical files, I assume they'll get created as soon as SQL Server gets started back up. Any help would be great, since a single data file and log file for tempdb is my goal. Thanks much. Sean
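Deleting the physical files isn't the way to do it; extra files can be dropped with ALTER DATABASE, provided they are empty. A sketch, assuming the hypothetical logical names tempdev2 and templog2 for the extra files (check sys.master_files for the real names):

USE tempdb;
GO
-- Drop the extra data and log files; each must hold no data to be removable.
ALTER DATABASE tempdb REMOVE FILE [tempdev2];
ALTER DATABASE tempdb REMOVE FILE [templog2];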
I'm running a procedure which does INSERT INTO table_name (id, name, ...) SELECT id, name, ... FROM table_name. For some reason the tempdb data file grows up to 200GB. Tempdb is set to expand unrestricted by 10%. How can I prevent that from happening? Thanks.
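If the growth comes from the sorts or version store behind one huge INSERT...SELECT, working in smaller batches keeps each transaction's tempdb footprint bounded. A sketch, assuming hypothetical source_table/target_table names and a unique integer id column:

-- Copy rows in batches so tempdb never has to hold one giant operation.
DECLARE @batch int = 100000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO target_table (id, name)
    SELECT TOP (@batch) s.id, s.name
    FROM source_table AS s
    WHERE NOT EXISTS (SELECT 1 FROM target_table AS t WHERE t.id = s.id);
    SET @rows = @@ROWCOUNT;  -- loop until no rows remain to copy
END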
Is it possible that Data Collection can cause a massive increase in MB/sec to tempdb? I cannot find the connection with tempdb; I did set the cache file, but on the same disk.
Or could it be something different? Over the last two weeks, what I observed was the read/write MB/s to tempdb increasing progressively.
At one point it was about 20 MB/sec.
Afterwards it reset, and again 1 MB/sec...
As for what I checked: the external company which installed SQL Server created one file for tempdb; next weekend, or during a break if that's possible, I would like to make it 8 files.
I also saw that the tempdb mdf was still growing, but usage was just 8-10%.
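One way to watch where that I/O is actually landing (a sketch using the cumulative file-stats DMV; counters are since the last service restart, so sample twice and diff to get a rate):

SELECT mf.name AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read / 1048576 AS mb_read,
       vfs.num_of_bytes_written / 1048576 AS mb_written
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;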
I've stepped into a new environment and have never dealt with multiple data files on user databases, only with tempdb. What would be the best way to get all my data files in sync? I have done this on databases that aren't that big, or aren't off in size by a lot. Here is what I have:
I recently set up a SQL 2012 FCI with a NetApp fileshare to store the data files. The install worked just fine, but I can't run an integrity check for any of my databases. Whenever I try, I get these error messages:
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 5120, Level 16, State 104, Line 1
Unable to open the physical file "path-to-fileshare\MSSQL11.MSSQLSERVER\MSSQL\DATA\model.mdf:MSSQL_DBCC12". Operating system error 1: "1(Incorrect function.)".
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.

The error message suggests SQL had a problem creating the snapshot, but I checked through some NetApp documentation for configuring SMB 3.0 for SQL.
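A possible workaround while the snapshot issue is investigated (a sketch, not a fix for the share itself): run the check with TABLOCK, which makes DBCC use table locks instead of an internal snapshot. The trade-off is that it blocks concurrent writes for the duration; the database name is a placeholder.

-- Skip the internal snapshot; takes short-term locks instead.
DBCC CHECKDB (N'YourDatabase') WITH TABLOCK, NO_INFOMSGS;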
I'm trying to move a log file of a database that is part of an availability group. I have been following steps from the article: [URL]
At first this worked fine for me in a test environment. When I tried it in a production environment the database on the secondary went into "Recovery Pending" state and I can't get it out.
I checked to ensure that the DB is looking in the right place for the log file, and it is. It just doesn't seem to actually use the new file. If I restart the SQL service, the DB comes back up and is fine.
Here are the steps I'm going through and what is happening at each step:
--------------------------------------
:Connect DEVSQL --This is currently PRIMARY
USE [master]
GO
ALTER AVAILABILITY GROUP [DP-AG-DEV]
MODIFY REPLICA ON N'DEVSQL' WITH (SECONDARY_ROLE(ALLOW_CONNECTIONS = NO))
[Code] ....
All is good so far. Both the primary and the secondary have had their logical files changed, which has not taken effect yet because there has been no failover.
--Make SQL10 the PRIMARY
:Connect SQL10
ALTER AVAILABILITY GROUP [DP-AG-DEV] FAILOVER;
GO
SQL10 is now the Primary for this AG. And, as expected, the database [AG-Test] is in "Recovery Pending" because it is now looking for the log file in the new location. I need to move the file to the new location.
:Connect DEVSQL
--Enable XP_CMDSHELL
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'xp_cmdshell', 1
[code].....
This is where the script is failing, returning the error:
Msg 1468, Level 16, State 5, Line 5
The operation cannot be performed on database "AG-Test" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
Msg 5069, Level 16, State 1, Line 5
ALTER DATABASE statement failed.
I cannot get the DB to recognize the log file at its new location.
If I restart the SQL Service, it comes back fine, which seems to indicate to me that it is not a permission problem and confirms that the file is in the right place.
How do I force SQL to look for the log file again without restarting the service?
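One possible way out, offered as a hedged sketch rather than a known fix: take the stuck copy out of the availability group so SQL Server re-opens its files, then rejoin it. SET HADR OFF and SET HADR AVAILABILITY GROUP are documented commands, but whether the rejoin works without a fresh restore chain depends on how long the copy has been out of sync.

:Connect SQL10  -- the replica where [AG-Test] is stuck in Recovery Pending
-- Remove the local copy from the AG; it is left in a restoring state.
ALTER DATABASE [AG-Test] SET HADR OFF;
-- If the copy is recoverable, bring it online so it re-opens its files:
RESTORE DATABASE [AG-Test] WITH RECOVERY;
-- Verify the log file location, then rejoin the AG (this may require
-- restoring a fresh backup chain WITH NORECOVERY first):
ALTER DATABASE [AG-Test] SET HADR AVAILABILITY GROUP = [DP-AG-DEV];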
SELECT
    event_data.value('(event/data/value)[4]', 'bigint') AS cpu_time,
    --database name
    event_data.value('(event/data/value)[5]', 'bigint') AS duration,
    --estimated cost
    --estimated rows
    --nest level
[code]...
Basically, it is a simple T-SQL query that reads the local file for my already set up extended event sessions. But I can't find a way to retrieve the following attributes as part of the T-SQL query:
--database name
--estimated cost
--estimated rows
--nest level
--object name
I am trying to find a BOL or some MS link with the full list of possible values for event_data.value but can't find one.
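One pattern is to reference the fields by @name instead of by position (a sketch; the data and action names below are assumptions that depend on which events and actions the session actually collects, so check them against the session definition, and the file path is a placeholder):

SELECT
    xed.event_data.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name,
    xed.event_data.value('(event/data[@name="estimated_cost"]/value)[1]',  'bigint')        AS estimated_cost,
    xed.event_data.value('(event/data[@name="estimated_rows"]/value)[1]',  'bigint')        AS estimated_rows,
    xed.event_data.value('(event/data[@name="nest_level"]/value)[1]',      'int')           AS nest_level,
    xed.event_data.value('(event/data[@name="object_name"]/value)[1]',     'nvarchar(128)') AS object_name
FROM (
    SELECT CAST(event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, NULL, NULL)
) AS xed;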
I was in the process of migrating a server from one physical box to another. They are identical drive setups, same OS (2003), same SQL install (2005). Our server team did a 'PlateSpin', which copies the drives from one server to another, as long as the files are not in use. I did not reinstall SQL on the new box; I let the 'PlateSpin' tool copy everything over for me. I then stopped the SQL services on the old and new servers and copied over all of the system database (.mdf & .ldf) files. As soon as I started up the services on the new server, it looked great, with one exception: tempdb was only showing one data file. When I queried sys.master_files, it was showing me 8 tempdb files. I tried restarting the services, but I still saw the same, only 1 file. I then tried to re-add tempdb files with the same names, but it would error saying they already existed. In turn, I could add new files with different names and they showed up fine. However, on a restart, they would not show up in the properties of tempdb.
When I queried sys.master_files again, I now had 16 tempdb files listed in the results. I deleted all but the original single file that was recognized out of the sys.master_files table, re-added the additional 7 files with the original names, restarted the service, and then they all appeared.
Hi all, I have a tempdb that consists of 8 data files, tempdb_data_1 to tempdb_data_8, each 8GB. Now how can I drop 7 of them and leave only tempdb_data_1? Can this be done? Thanks a lot.
I'm looking for documentation that supports the placement of tempdb files on the root of a drive, i.e. T:\ instead of T:\tempdb. I am positive this is not a best practice, but when challenged I could not find any documentation that would support that view.
It's been a long time since I've tried this, but I have a SQL Server that needs to be restored (including master) to a server whose drives and corresponding folders match the source server, with the exception of tempdb. When SQL Server initially starts I believe it will fail since it cannot find tempdb. I just don't recall if it fails to startup or if it starts up reporting errors and recreates tempdb in the same location as master. Does anyone recall the steps needed to point SQL Server to the new location of tempdb?
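From memory, hedged accordingly: the instance fails to start if the tempdb path does not exist, and the usual escape hatch is to start with minimal configuration, repoint tempdb, and restart. A sketch with placeholder paths and logical file names:

-- 1) Start the instance with minimal configuration (command prompt, not T-SQL):
--      NET START MSSQLSERVER /f /T3608
-- 2) Connect locally (e.g. sqlcmd) and repoint tempdb to the new location:
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
-- 3) Stop the instance and restart normally; tempdb is recreated at the new paths.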
I have a SQL 2008 R2 instance on a VM where the single .mdf for the tempdb database is located on a high-contention disk. I've managed to get another 60GB disk and thought it would be a good time to move the .mdf and also increase its size and number of files.
The server has 12 cores, and after a bit of reading I've decided that it would be best just to have four files for this database, as the one-file-per-core (minus one) advice seems to be disputed.
-- Move the existing file to the new disk and rename it.
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILENAME = 'E:\SQLData\tempdb0.mdf');

-- Change the size to 1GB.
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = 1048576KB, FILEGROWTH = 5%);

-- Add three new files, all with the same size & growth.
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev1', FILENAME = N'E:\SQLData\tempdb1.mdf', SIZE = 1048576KB, FILEGROWTH = 5%)
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev2', FILENAME = N'E:\SQLData\tempdb2.mdf', SIZE = 1048576KB, FILEGROWTH = 5%)
ALTER DATABASE [tempdb] ADD FILE (NAME = N'tempdev3', FILENAME = N'E:\SQLData\tempdb3.mdf', SIZE = 1048576KB, FILEGROWTH = 5%)
-- Now restart the instance.
Also, what are people's thoughts on percentage growth for tempdb? I've read that it's not recommended, and yet it seems to be the norm.
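A hedged alternative to the 5% growth above: fixed-size increments keep growth events predictable instead of getting ever larger as the files grow. The 256MB figure here is an arbitrary example, not a recommendation from the post:

-- Switch a tempdb file from percentage growth to a fixed 256MB increment.
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILEGROWTH = 262144KB);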
I just found that my tempdb is always full whenever I run a query against a large database. Could any experts here please give me some advice on what the tempdb database is used for and how to determine what files can be deleted from it?
I am looking forward to hearing from you and thanks a lot in advance.
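For what it's worth, a sketch of one way to see what is consuming tempdb (user objects, internal objects such as sorts and hash spills, or the version store); nothing in tempdb should be deleted manually, since the server manages its contents:

SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;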
It works remotely if I run it via the command prompt. But when I add this to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
Error:
(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdev1' cannot be removed because it is not empty.
Note:
=> I restarted SQL Server from SSMS and then ran the same commands mentioned above... and I'm getting the same error.
=> I executed the above commands and restarted the services... no change.
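The error means the file still holds pages; it has to be emptied before it can be removed, and a restart alone does not move those pages. A sketch (in tempdb, EMPTYFILE can still fail until the objects holding pages go away):

USE tempdb;
GO
-- Migrate this file's pages to the remaining tempdb data files.
DBCC SHRINKFILE (N'tempdev1', EMPTYFILE);
GO
-- Once empty, the file can be removed.
ALTER DATABASE tempdb REMOVE FILE [tempdev1];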
I can get a snapshot of tables in tempDB, but I would like to track which procs are causing the load in the tempDB.
I think I can sample and record objects in the tempdb, but I would like to record the proc creating the most tempDB usage, and disk read/writes associated with those procs.
The DMV's give usage in the individual DB's, but what's a good way to correlate procs in the DB's to tempdb usage?
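One pattern for catching the statements behind current tempdb allocations (a sketch: task-level allocation counts, joined back to the running request's text; this only sees work in flight, so it needs to be sampled repeatedly):

SELECT
    tsu.session_id,
    (tsu.user_objects_alloc_page_count
     + tsu.internal_objects_alloc_page_count) * 8 AS tempdb_alloc_kb,
    st.text AS statement_text
FROM sys.dm_db_task_space_usage AS tsu
JOIN sys.dm_exec_requests AS er
  ON er.session_id = tsu.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
ORDER BY tempdb_alloc_kb DESC;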
I have a server which is not running optimally, and I checked the default trace. I have around 600 entries in the default trace which are all Missing Column Statistics, and the database is tempdb. is_auto_create_stats_on and is_auto_update_stats_on are both 1 for tempdb.
Some file names listed could not be created. Check related errors.
[code]
I did not have remote connections enabled yet, so the resolutions I have found that involve sqlcmd or starting in single-user configuration are not working. Is there any way that might allow me to restore the usual tempdb settings, which I think would allow SQL to start again?
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across the primary filegroup when it has multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Add another file called temp to PRIMARY, size 200MB.
5) Shrinkfile('FGTest', EMPTYFILE) so that all data is transferred from FGTest into the temp file.
6) Add another 2 files called DATA2 and DATA3, both 200MB.
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3.
8) Shrinkfile('temp', EMPTYFILE) to move all the data from temp over the 3 files evenly.
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
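A hedged reconstruction of the repro above (this is my own sketch, not the attached script; paths are placeholders and the names and sizes follow the numbered steps):

-- 1) Create the database and a test table on PRIMARY.
CREATE DATABASE [FGTest] ON PRIMARY
    (NAME = 'FGTest', FILENAME = 'C:\SQLData\FGTest.mdf', SIZE = 200MB);
GO
USE [FGTest];
CREATE TABLE dbo.TEST (id int IDENTITY, filler char(8000) DEFAULT 'x');
INSERT INTO dbo.TEST DEFAULT VALUES;
GO 5000   -- ~40MB at one 8KB row per page
-- 2) Add a holding file and empty the original into it (emptying the primary
--    file moves user data; it may warn about system pages it cannot move).
ALTER DATABASE [FGTest] ADD FILE (NAME = 'temp', FILENAME = 'C:\SQLData\FGTest_temp.ndf', SIZE = 200MB);
DBCC SHRINKFILE ('FGTest', EMPTYFILE);
-- 3) Add two more files, then empty the holding file back out.
ALTER DATABASE [FGTest] ADD FILE (NAME = 'DATA2', FILENAME = 'C:\SQLData\FGTest_data2.ndf', SIZE = 200MB);
ALTER DATABASE [FGTest] ADD FILE (NAME = 'DATA3', FILENAME = 'C:\SQLData\FGTest_data3.ndf', SIZE = 200MB);
DBCC SHRINKFILE ('temp', EMPTYFILE);
-- 4) Compare used space per file.
SELECT name, FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;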