How To Move A Log File From 'e' Drive To 'f' Drive....
Nov 9, 2000
I am trying to move a log file from one drive to another.
What I have done is add another log file, so now my log has a file on the 'e' drive and one on the 'f' drive. I now want to remove the file on the 'e' drive, which I have already emptied. When I run the command:
ALTER DATABASE Uniprodruntime
REMOVE FILE m_rk_runtime_log
I get the following error message:
Server: Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.
I have also gone into Enterprise Manager and tried to delete the file, and it does nothing.
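Msg 5020 is expected here: the first log file of a database can never be dropped, even once it is empty. On SQL Server 2000 (this post's era) the usual workaround is to relocate that file rather than remove it, using detach/attach. A rough sketch, where the physical paths are made up and any file that moved must be listed at its new location:
EXEC sp_detach_db 'Uniprodruntime'
-- move the .ldf from e: to f: in Explorer, then reattach; files that did not
-- move are found at the paths recorded in the primary file
EXEC sp_attach_db 'Uniprodruntime',
     'e:\MSSQL\Data\Uniprodruntime.mdf',
     'f:\MSSQL\Data\m_rk_runtime_log.ldf'
Once the original log sits on f:, the extra file that was added for this exercise can be dropped with ALTER DATABASE ... REMOVE FILE under its own logical name, since it is not the first log file and Msg 5020 no longer applies.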
I have several indexes on a filegroup that I would like to move to a different physical drive. I am aware of the sp_detach...sp_attach routine, which allows moving the .mdf and .log files to a different location. How would I go about moving an .ndf file, though?
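Rather than moving the .ndf itself, one option is to add a filegroup with a file on the target drive and rebuild the indexes onto it. A sketch, where the database, filegroup, file, table, and index names are all hypothetical:
ALTER DATABASE MyDb ADD FILEGROUP FG_NewDrive
ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_ix1', FILENAME = 'G:\Data\MyDb_ix1.ndf', SIZE = 500MB)
    TO FILEGROUP FG_NewDrive
-- rebuild each index onto the new filegroup in one step
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)
    WITH DROP_EXISTING
    ON FG_NewDrive
If the goal really is to relocate the existing .ndf as-is, detach/attach (listing the .ndf at its new path) or, on SQL 2005 and later, ALTER DATABASE ... MODIFY FILE plus a short offline window will also do it.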
I have a database data file at almost 2 TB, maxing out a Windows drive; only 16 GB are left. Should I just add another data file on another Windows drive for growth? Or move the current huge data file to a new GPT drive? Or do both: add another data file and move the existing one to its own new GPT drive?
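If downtime matters, adding a second data file on the new drive is the least disruptive choice, since the existing file stays where it is and new allocations spill over to the new file. A sketch with hypothetical names and paths (without TO FILEGROUP the file lands in the PRIMARY filegroup):
ALTER DATABASE MyBigDb
    ADD FILE (
        NAME = 'MyBigDb_data2',
        FILENAME = 'F:\Data\MyBigDb_data2.ndf',
        SIZE = 100GB,
        FILEGROWTH = 10GB
    )
Moving the existing 2 TB file onto its own GPT drive can then happen later during a maintenance window, so doing both is a reasonable plan.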
I am able to run SSIS packages as SQL Server Agent jobs with a Control Flow "File System Task" item if I move a file (test.txt) from a drive (C:) on the server (where the SQL Agent jobs run) to a subdirectory on the same drive. But if I try to move a file on a network drive, the package fails.
I obviously did not search the archives on the right terms, so what is the easiest and fastest way to move a 3 GB database from a nearly full C drive to the nearly empty D drive that should have been used? I could back it up, drop it, recreate it using the D drive, and restore it, but it seems like there should be a way to just move the data file and use it from the new location. I am thinking that detach/attach is the best method, but I would like confirmation or suggestions on how to proceed, or things to be aware of when using this method. -- Mark D Powell --
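Detach/attach works fine for this. The backup route mentioned above also does not require dropping and recreating the database if WITH MOVE is used. A sketch with hypothetical logical file names (list the real ones with RESTORE FILELISTONLY first):
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak'
RESTORE DATABASE MyDb
    FROM DISK = 'D:\Backup\MyDb.bak'
    WITH REPLACE,
         MOVE 'MyDb_Data' TO 'D:\Data\MyDb.mdf',
         MOVE 'MyDb_Log'  TO 'D:\Data\MyDb_log.ldf'
The main thing to watch with detach/attach is that nothing touches the database between the detach and the attach, and that the Windows copy completes before reattaching.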
Is it possible to move a distribution database to another drive without removing replication? I have done some research, but I am getting mixed answers from Google searches.
Hello guys and girls. I have installed SQL Server 2005 Standard Edition and specified that the databases should be created on the K: drive. This is okay, but now I need to move all the transaction log files (.ldf) to the L: drive. I have already changed the default location for the log files to point to the L: drive, and the new databases created after the installation have their transaction log files correctly on the L: drive, but now I need to move the transaction log files for the master, model, temp ... databases. How can this be done? And are there any gotchas?
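For model and msdb the path is simply repointed in the catalog and the .ldf moved while the service is stopped; tempdb works the same way but needs no file copy, since it is rebuilt at every startup. A sketch (the logical name below is the default; verify with sys.master_files):
ALTER DATABASE model
    MODIFY FILE (NAME = modellog, FILENAME = 'L:\SQLLogs\modellog.ldf')
-- master is the exception: its log location comes from the -l startup parameter,
-- changed in SQL Server Configuration Manager, not from ALTER DATABASE
The main gotcha is that model's .ldf really must be at the new path before the restart, because tempdb is created from model and the instance will not start without it.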
I need to move all log files for my SQL 2005 databases to another drive. I don't wish to shrink the files; I need to move the logs to another drive spindle. I did find an article (Article ID: 224071) that describes moving both the database and logs using sp_detach and then sp_attach. What is the best way to move just the logs to another drive on the same server, keeping the databases in their original location? Thanks.
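On SQL 2005 the logs can be moved in place, without detaching and without touching the data files. A sketch with hypothetical names:
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, FILENAME = 'F:\SQLLogs\MyDb_log.ldf')
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE
-- copy the .ldf to F:\SQLLogs\ in Windows, then:
ALTER DATABASE MyDb SET ONLINE
The database is only unavailable for as long as the file copy takes, and the .mdf never moves.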
I haven't found the definitive answer on how or if this can be done without removing replication. I'm thinking ALTER DATABASE modify_file is the way to go. Anybody know if this will work or a better way to go about it?
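For the distribution database specifically, the sequence usually described (not tested here; the logical names and paths below are guesses, so check sys.master_files first) is: stop the replication agents and SQL Server Agent, repoint the files, stop the instance, move the files, and restart.
ALTER DATABASE distribution
    MODIFY FILE (NAME = distribution, FILENAME = 'F:\Data\distribution.mdf')
ALTER DATABASE distribution
    MODIFY FILE (NAME = distribution_log, FILENAME = 'F:\Data\distribution.ldf')
Replication metadata refers to the distribution database by name rather than by file path, which is why MODIFY FILE should be enough without tearing replication down, but trying it on a non-production instance first would be wise.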
I have a 600 gig database that has a mirror. I need to move the databases from local drives to a SAN. Can anyone recommend a document that lists the steps to move both the principal and the mirror to the SAN with no downtime, or minimal downtime?
I have been trying to use OPENROWSET with a shared drive, and even though the share has "full control" permissions granted to "everyone", and the account that SQL runs under has been granted explicit full control permissions, I am unable to open the file, which itself has no security on it.
Can I not use a \\ (UNC) path and only use mapped drives?
Thanks
below works...
SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=C:\5People.xls', [Sheet1$])
below doesn't work...
SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=\\cluster02\FileManager\5People.xls', [Sheet1$])
1: TempDB keeps getting filled. A restart of the server has not fixed it. I shrink it, but the space gets filled again; now I can't even shrink it anymore.
2: TempDB is at the wrong location. Its current location is: C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLPROD6\MSSQL\DATA\tempdb
How do I change its location? The correct location of TempDB should be the TempDB (T:) drive, but it's not there.
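For the location problem, something like the following is the standard fix; tempdev and templog are the default logical names, and because tempdb is recreated at every startup nothing has to be copied, only the service restarted:
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf')
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf')
-- restart the SQL Server service, then confirm:
SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb')
The first problem (tempdb constantly filling) is separate: something on the instance, such as long-running transactions holding the version store or queries spilling to tempdb, is consuming the space, and moving or shrinking the files will not change that.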
Being a very novice SQL Server administrator, I need to ask the experts a question.
How do I go about moving a database from one drive to another? The source drive (C:) is local to the server, but the target drive (E:) is on a Storage Area Network (SAN), although it is still presented as a local drive to the server. I want to move the database from C: to E:. Can someone provide me with instructions?
How do I back up half of the databases on a server to the C drive and the other half to the D drive, and vice versa (first half on D and the other half on C), using only one job and one stored procedure?
Using job scheduling, add two schedules to the job, so the first schedule backs up the first half to C and the second half to D, and the second schedule backs up the first half to D and the second half to C.
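Because both schedules run the same job step with the same text, the procedure itself has to decide which half goes to which drive. A rough sketch (SQL 2008+ syntax; the procedure name, backup paths, and the odd/even-week flip are all assumptions — adjust the flip condition to match however the two schedules are meant to alternate):
CREATE PROCEDURE dbo.usp_BackupHalves
AS
BEGIN
    -- swap the drive assignment on alternate weeks
    DECLARE @flip int = DATEPART(WEEK, GETDATE()) % 2
    DECLARE @sql nvarchar(max) = N''

    ;WITH d AS (
        SELECT name,
               ROW_NUMBER() OVER (ORDER BY name) AS rn,
               COUNT(*) OVER () AS total
        FROM sys.databases
        WHERE database_id > 4 AND state_desc = 'ONLINE'   -- user databases only
    )
    SELECT @sql = @sql + N'BACKUP DATABASE ' + QUOTENAME(name) + N' TO DISK = N'''
                + CASE WHEN (CASE WHEN rn <= total / 2 THEN 0 ELSE 1 END) = @flip
                       THEN N'C:\Backups\' ELSE N'D:\Backups\' END
                + name + N'.bak'';' + NCHAR(13)
    FROM d

    EXEC (@sql)
END
The single Agent job then needs one T-SQL step, EXEC dbo.usp_BackupHalves, with both schedules attached to it.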
I'm trying to move the transaction logs of my databases to a different drive (for fault tolerance). I can create a second transaction log file for each database via Enterprise Manager but I have 2 questions:
1) If two transaction log files exist for a database, which one does it use?
2) How do I force SQL to use the new transaction log file (so I can delete the old one)?
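On 1): SQL Server writes to its log files sequentially, filling one before moving on to the next, so the two files are never striped. On 2): there is no command to force it onto the new file, and the first log file can never be removed (Msg 5020), so for fault tolerance it is usually cleaner to relocate the existing log (detach/attach, or MODIFY FILE on later versions) than to add one file and try to drop the other. For completeness, adding the second log file in T-SQL rather than Enterprise Manager looks like this (hypothetical names):
ALTER DATABASE MyDb
    ADD LOG FILE (NAME = 'MyDb_log2',
                  FILENAME = 'G:\SQLLogs\MyDb_log2.ldf',
                  SIZE = 1GB, FILEGROWTH = 512MB)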
Hi All, I've been trying to find the answer but have been unable to. My question: is it possible to create a SQL Server (2005) database on a USB 2.0 drive? I have a large USB drive that I would like to store my database on instead of my local drive, which is not that big.
I know it is not going to work; I am just wondering if anyone could give any reasons for that. Was it an intentional constraint, or was Compact Edition internally designed in a particular way that makes such usage impossible? It is a pity that it doesn't work that way, as it would be a much easier transitional path for many Visual FoxPro and MS Access applications. In my case I just want a READ-ONLY database, either for multi-user access via a shared drive or stand-alone on a local drive.
As indicated by my stupid question, I am very new to SQL. Our version is 2000, and I'm talking about Enterprise Manager: the database that was created is not showing up in the list of databases, although I can see the file in Explorer.
The problem I'm having is that when I try to attach the database "mailarchive3Q2007_data.mdf", it is also looking for the log file "mailarchive3Q2007_log.ldf". The log file was removed by someone else from our system. I have a backup of the file, but it is too large to restore now (160 gig). When the system was first set up, the recovery model was not set to simple, so the log just grew until it filled up our drive. I no longer have the drive space necessary to restore the log file and shrink it. So what do I do now? I need some kind of "mailarchive3Q2007_log.ldf" file to attach the database in Enterprise Manager.
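If the database was shut down cleanly before the .ldf disappeared, SQL Server can build a brand-new, small log at attach time, so the 160 gig restore is not needed. A sketch, where the path is hypothetical and the database name is assumed from the file name:
EXEC sp_attach_single_file_db
     @dbname   = 'mailarchive3Q2007',
     @physname = 'X:\Data\mailarchive3Q2007_data.mdf'
-- the SQL 2005+ equivalent:
-- CREATE DATABASE mailarchive3Q2007
--     ON (FILENAME = 'X:\Data\mailarchive3Q2007_data.mdf')
--     FOR ATTACH_REBUILD_LOG
If the shutdown was not clean, this will fail, and restoring the backup (on a machine with enough space) or an emergency-mode repair becomes the remaining option.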
I'm hoping that someone can help me. I have a SQL 2005 SSIS package that will run Friday mornings to empty/load a table with data from another database. On Friday evenings I'll need to run another package, but want to make sure the table load completed prior to launch. For this I planned to use a file watcher task, however I cannot for the life of me figure out how to output a 'done' semaphore, from the morning job, to a networked drive.
A file system task will not work because there is not a 'create file' option. I do not have an existing file that I can rename either.
I tried an execute process task running cmd.exe with the following argument:
This fails because UNC paths are not recognized. (The package executes from another server, so I cannot use a local path, nor am I allowed to set up a local share.)
Can someone offer an alternative suggestion? I'm really hoping this is easier than I'm making it.
I have to perform disk maintenance on the current drive, Drive D, which holds the SQL data (.mdf file), and I have added a new drive, Drive E. (Drive C has the program files for SQL Server 2008 R2.) What is the correct process to transfer the SQL data (.mdf file) from Drive D to Drive E and later remove Drive D from the server?
We have an encrypted drive (it can be mounted and dismounted; a third-party tool encrypts the drive path). I wanted to store the secondary file on that encrypted drive path. The secondary file stores confidential information. I separated the table from the primary into the secondary file: per-column encryption is not advisable for that table, so we decided to separate that table and put it on a secondary filegroup. The physical file is stored on the mounted drive path.
I can read and write in that mounted drive path. I can also read and write when the drive is unmounted (which makes me believe the reads and writes are really being done). When the drive is unmounted, the physical secondary file (.ndf) is not visible to any user logging in to the server itself (this is actually the goal of this encrypted drive setup). It is kept virtually somewhere on the machine. To mount it back, a password is needed.
I'm a bit confused; can somebody advise or give their insight on this setup? I believe that when the drive is dismounted, SQL Server stores the transactions in cache until it finds that the drive is mounted back, which would mean those transactions are not committed yet. When the drive is mounted back, I think SQL Server is smart enough to check/know that the drive is physically present and will flush all the pending transactions from the cache to the hard drive.
Is my assumption correct? Is there anything I need to know about transactions, committing, and this flushing of data to the hard drive?
I have an Access database on a shared network drive and have read/write access rights on that shared network drive. When I try to access data through the linked server, it gives me a message box saying I do not have permission to view the data. Also, if I try to use xp_cmdshell to copy the .mdb file to my local drive, it says 'Access denied'.
But when I copy (through the command prompt) the same file to another network drive, or to my local drive where I have full control, the linked server can connect successfully.
The problem is that I cannot have 'full control' permissions on the shared drive where my database resides.
We had some SAN issues, and we don't have the transaction log files for some databases; the drive that was holding these T-log files went missing. How do we bring the databases back?
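With no usable log and no recent backup, the last-resort sequence usually described is emergency-mode repair, which rebuilds the missing log at the possible cost of transactional consistency. A heavily hedged sketch, with a placeholder database name, to run only after copying off the surviving data files:
ALTER DATABASE MyDb SET EMERGENCY
ALTER DATABASE MyDb SET SINGLE_USER
-- REPAIR_ALLOW_DATA_LOSS is what allows the log to be rebuilt here
DBCC CHECKDB ('MyDb', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE MyDb SET MULTI_USER
If any backups (full plus log) exist, restoring them is strongly preferable to this.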
The MDF and LDF files are placed on an SSD drive and the tempdb files are placed on an HDD drive. Snapshot isolation is enabled on the database. When a script is executed to insert a NULL value into a table that has a NOT NULL column, the transaction fails, and then a log undo happens, which fails and takes the database to suspect mode.
But when the MDF and LDF files are placed on the HDD drive, none of this happens. The transaction just fails.
I recently created a program that connects to a Microsoft SQL database that was stored on my computer and it worked fine. As soon as I tried to connect to the same database via a network drive I got an error stating that "The file Y:\Filename.mdf is on a network path that is not supported for database files.". I can't seem to get it to work, if anybody has any ideas what I'm doing wrong I would appreciate your help.
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox that works with file system operations like this, but it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.
An alternate approach would be to run the backup job again after the main backup and change the destination to the alternate location. But I was thinking that another backup job would probably put more overhead on the server than a simple file copy operation. If I do end up taking this approach, I could also use the cleanup task to toss older .bak files in the alternate directory.
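One way to stay inside the native toolset is a second step in the same Agent job, after the backup step, that does the copy. A sketch that assumes xp_cmdshell is enabled and uses made-up paths:
-- T-SQL job step; a CmdExec step running the same copy command works too
EXEC master..xp_cmdshell 'copy /Y "E:\SQLBackups\MyDb_Full.bak" "F:\BackupCopies\"'
On Enterprise Edition there is also BACKUP DATABASE ... MIRROR TO DISK, which writes both copies in a single backup pass and avoids reading the database twice; the cleanup-task idea for the alternate directory works with either approach.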