I have a database -- MDB -- with a datafile for data and the transaction log under the folder d:\mssql\data. Now I want to move the data file from d: to e:, say e:\mssql\data. Can someone let me know if this is possible under SQL Server v7.0 and, if so, how?
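On SQL Server 7.0 this is possible with detach/attach. A minimal sketch, assuming the database is named MDB and the physical file names below (the file names are assumptions):

-- Detach the database (make sure nobody is connected).
EXEC sp_detach_db @dbname = 'MDB';

-- Move d:\mssql\data\MDB.mdf to e:\mssql\data\MDB.mdf at the OS level, then:
EXEC sp_attach_db @dbname = 'MDB',
     @filename1 = 'e:\mssql\data\MDB.mdf',
     @filename2 = 'd:\mssql\data\MDB_log.ldf';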
Hi, is there any way to change the location of the datafile? I need to change the drive from, say, C to D because it is filling up. Is there any way to do this, or do I have to recreate the database from scratch? I have a whole lot of data in the database already.
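On SQL Server 2005 and later you can also move a file without detaching. A minimal sketch; the database and logical file names here are hypothetical:

-- Point the catalog at the new path (takes effect when the database comes back online).
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILENAME = 'd:\data\MyDB.mdf');
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Copy c:\data\MyDB.mdf to d:\data\MyDB.mdf at the OS level, then:
ALTER DATABASE MyDB SET ONLINE;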
I need some clarification about adding a file to a mirrored database on the primary server without downtime and without breaking the mirror.
In our environment we are using mount disks on both servers. On the primary, for example, we have the F drive for data files under mount disk 3; on the mirror server we have the same drive letter, but under mount disk 2.
As far as I know, if the drives are the same we can add the ndf files on the primary and that will be reflected on the mirror. But in the current situation I am confused about mount points with different names.
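As far as I understand it, the redo thread on the mirror replays the ADD FILE using the literal path from the principal, so what matters is whether that exact path resolves on the mirror, not which mount disk backs it; if it does not resolve, the mirroring session is suspended. A minimal sketch (database and file names are hypothetical):

-- Run on the principal; the mirror's redo will try to create the very same path locally.
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'F:\Data\MyDB_Data2.ndf', SIZE = 1024MB)
TO FILEGROUP [PRIMARY];

So if F:\Data exists on both servers, regardless of the mount disks behind it, the add should replay cleanly.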
Hello friends! Which is the more efficient way to use a SQL mdf file in SQL Express: attaching the mdf file to SQL Express (and using Initial Catalog in the connection string), or using AttachDbFilename in the connection string directly? Is there any difference in performance and speed? Thanks a lot.
I'm aware of the issues with setting your log file growth increment too low (causing too many VLFs, etc.), but I haven't seen much about the datafile side of it.
Are there any benchmarks specifically on setting datafile growth that low (on databases 1-100 GB in size)? Are there circumstances on well-utilized servers where that might be warranted?
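I have not seen hard benchmarks either, but you can at least inspect and adjust the increments; tiny fixed growths on a 1-100 GB database mean frequent growth events, each of which stalls writers unless instant file initialization is enabled. A minimal sketch (the database and logical file names are hypothetical):

-- Inspect current size and growth settings for the current database.
SELECT name,
       size/128 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(12)) + ' %'
            ELSE CAST(growth/128 AS varchar(12)) + ' MB'
       END AS growth_setting
FROM sys.database_files;

-- Move a data file to a larger fixed increment.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, FILEGROWTH = 1024MB);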
From BOL, I see these remarks with respect to the MODIFY FILE subcommand (my underline added):
Initializing Files: By default, data and log files are initialized by filling the files with zeros when you perform one of the following operations:
Create a database
Add files to an existing database
Increase the size of an existing file
Restore a database or filegroup
Which leads me to believe that expanding the size of a datafile will also wipe out (my definition of 'initialize') any existing data within that file.
I may be misunderstanding 'initialize', because when I tested it out, I found this wasn't the case - my table data written to the file was still there after a resize.
I need to clarify to what degree I'd be taking a risk by increasing the file size on a datafile which already has data in it.
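For what it's worth, "initialize" in that BOL passage applies only to the newly added region of the file; the pages you already have are not rewritten, which matches your test. A quick way to re-check on a scratch database (names are hypothetical):

-- Grow the file, then confirm existing data is untouched.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, SIZE = 2048MB);
SELECT COUNT(*) FROM dbo.MyTable;   -- same row count as before the resize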
I have an instance with 4 datafiles for tempdb, each set to an initial size of 4 GB and a growth rate of 100 MB. After some time the initial file sizes seem to have changed automatically; they now read 3962, 100, 3688 and 2847 respectively. Is this something done by SQL Server itself? I cannot imagine that it was done manually.
I don't think there was a restart after the initial sizes of 4 GB were set; could this be related to the problem?
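One thing worth checking (assuming SQL Server 2005+): the size SSMS shows is the current size, while the startup size tempdb returns to after a restart is recorded in sys.master_files; comparing the two tells you whether something (a shrink, for example) changed the live files. A minimal sketch:

-- Configured (re-applied at restart) vs. current tempdb file sizes, in MB.
SELECT mf.name,
       mf.size/128 AS configured_mb,
       df.size/128 AS current_mb
FROM master.sys.master_files AS mf
JOIN tempdb.sys.database_files AS df ON df.file_id = mf.file_id
WHERE mf.database_id = DB_ID('tempdb');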
If a database consists of more than one datafile, how does SQL Server use the space in these datafiles? Does it fill up the first one and then move to the next, and so forth, or does it use pages across all the files evenly?
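Within a filegroup SQL Server uses proportional fill: new allocations are spread across all the files in proportion to the free space each one has, so the files tend to fill at a similar rate rather than one after another. You can watch this with a query like the following (a sketch, assuming SQL Server 2005+ for sys.database_files; run in the database in question):

-- Percent full per data file; under proportional fill these stay roughly even.
SELECT name,
       FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size AS percent_used
FROM sys.database_files
WHERE type_desc = 'ROWS';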
alter database bdj add file (name='bdjfg1', filename='d:\db\bdjfg1.ndf') to filegroup bdjfg;
alter database bdj modify file (name='bdjfg1', OFFLINE);
alter database bdj modify file (name='bdjfg1', filename='d:\db\newdest\bdjfg1.ndf');
--Msg 5056, Level 16, State 4, Line 1
--Cannot add, remove, or modify a file in filegroup 'bdjfg' because the filegroup is offline.
alter database bdj modify filegroup bdjfg READWRITE;
--Msg 5056, Level 16, State 3, Line 1
--Cannot add, remove, or modify a file in filegroup 'bdjfg' because the filegroup is offline.
Yes, yes, and I should have read the Cautions section, which says:
"Use this option only when the file is corrupted and can be restored. A file set to OFFLINE can only be set online by restoring the file from backup. For more information about restoring a single file, see RESTORE (Transact-SQL)."
But I do not have a backup of the datafile; I do have the datafile itself!
What can I do to get it online again? The old location would be fine, but a new location would be better (that is the reason for all the trouble: the original drive has not much space left, so I wanted to move the datafile).
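Before trying anything, it may help to confirm what state the server actually records for the file. A minimal sketch, assuming SQL Server 2005+ where sys.master_files exists:

-- What does the catalog say about the file's state? ('OFFLINE' vs. 'ONLINE')
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('bdj');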
If I run the package from BIDS, it works fine. If I run the package inside Management Studio it works when I run it as a package.
It does NOT run when I schedule the job.
Error: 2008-03-12 10:51:56.16 Code: 0xC020200E Source: Data Flow Task Flat File Destination [194] Description: Cannot open the datafile "D:\old_timesheet_repos\TimeSheet\files\date.txt". End Error
Error: 2008-03-12 10:51:56.16 Code: 0xC004701A Source: Data Flow Task DTS.Pipeline Description: component "Flat File Destination" (194) failed the pre-execute phase and returned error code 0xC020200E. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:51:55 AM Finished: 10:51:56 AM Elapsed: 0.344 seconds. The package execution failed. The step failed.,00:00:01,0,0,,,,0
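A common cause of this pattern: from BIDS the package runs under your own account, but as a scheduled job it runs under the SQL Server Agent service account (or a proxy), which may have no rights to that path. One possible fix is to run the step under a proxy mapped to an account that can reach the file; all names below are hypothetical:

-- Credential for a Windows account that can read/write the path.
CREATE CREDENTIAL FileShareCred WITH IDENTITY = 'DOMAIN\svc_ssis', SECRET = 'password-here';

-- Proxy usable by SSIS job steps.
EXEC msdb.dbo.sp_add_proxy @proxy_name = 'SsisFileProxy', @credential_name = 'FileShareCred';
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = 'SsisFileProxy', @subsystem_name = 'SSIS';

Then set the job step's "Run as" to that proxy.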
Hi, I am trying to shrink and remove one datafile, but I always get the following error:
Server: Msg 5042, Level 16, State 1, Line 1 The file 'M1Pdata15' cannot be removed because it is not empty.
use M1K
go
dbcc shrinkfile (M1Pdata15, emptyfile)
go
use master
go
alter database M1K remove file M1Pdata15
go
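One thing to check before retrying: whether the file really still has allocations in it (EMPTYFILE can fail to finish, and some allocations are stubborn). A minimal sketch against the SQL 2000-era catalog:

use M1K
-- Pages still allocated in the file; removal only succeeds at 0 used pages.
SELECT name, size/128 AS size_mb, FILEPROPERTY(name, 'SpaceUsed')/128 AS used_mb
FROM sysfiles
WHERE name = 'M1Pdata15'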
I obviously did not search the archives with the right terms, so: what is the easiest and fastest way to move a 3 GB database from a nearly full C drive to the nearly empty D drive that should have been used? I could back it up, drop it, recreate it using the D drive, and restore it, but it seems like there should be a way to just move the datafile and use it from the new location. I am thinking that detach/attach is the best method, but I would like confirmation or suggestions on how to proceed, or things to be aware of when using this method. -- Mark D Powell --
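Detach/attach is a reasonable fit here. A minimal sketch, assuming SQL Server 2005 (where CREATE DATABASE ... FOR ATTACH supersedes the older sp_attach_db) and hypothetical file names; take a backup first and make sure nothing is connected when you detach:

EXEC sp_detach_db @dbname = 'MyDB';
-- Move the mdf/ldf from C: to D: at the OS level, then:
CREATE DATABASE MyDB
ON (FILENAME = 'D:\data\MyDB.mdf'), (FILENAME = 'D:\data\MyDB_log.ldf')
FOR ATTACH;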
I have written a program that loads a package (SomePackage.dtsx) from the physical drive and executes it. The package does nothing but import data from a csv file into SQL Server 2005. But I can see that the package is failing continuously; I mean the package.Execute() method is returning DTSExecResult.Failure. I investigated the Package.Errors property, which contains the error collection, and found two DTSError objects in the collection.
The first one's description says:
Cannot open the datafile "D:\SOME.csv".
And the latter one's is:
component "SOURCE FLAT FILE COMPONENT" (1) failed the pre-execute phase and returned error code 0xC020200E.
But the most interesting thing is that if I execute the package through the Execute Package Utility (double-clicking the SomePackage.dtsx file) that ships with SQL Server 2005, it executes fine and works as expected. I have checked the permissions on the csv file, and Everyone has full access.
Can anyone help me with this? I will appreciate all kinds of suggestions.
Is it possible to convert a SQL2K datafile to SQL2K5? I have a 2K database that I need to easily convert to 2K5; I appreciate any insight on this issue.
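There is no separate file conversion step: attaching or restoring a SQL 2000 database on a 2005 instance upgrades it in place during recovery (and the upgrade is one-way, so keep the 2000 backup). A minimal sketch with hypothetical names:

-- On the SQL Server 2005 instance; the database is upgraded during recovery.
RESTORE DATABASE MyDB FROM DISK = 'd:\backups\MyDB_sql2000.bak'
WITH MOVE 'MyDB_Data' TO 'd:\data\MyDB.mdf',
     MOVE 'MyDB_Log'  TO 'd:\data\MyDB_log.ldf';

-- Optionally raise the compatibility level afterwards.
EXEC sp_dbcmptlevel 'MyDB', 90;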
In order to set up an automatic way to link a piece of software that uses a .txt database to our SQL database, I need some tips.
I am thinking about the following solution: maybe using a "database extractor" (Access) and converting the result into a .txt file, which can be automatically refreshed using a .bat file that opens the generated .txt file three times a day to refresh the data.
If you have a less complicated solution, please send it to
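If the goal is just to pull a delimited .txt file into SQL Server a few times a day, a BULK INSERT scheduled as a job may be simpler than the Access/.bat chain. A sketch; the table, path, and delimiters are assumptions:

-- Reload a staging table from the exported text file (run three times a day via a scheduled job).
TRUNCATE TABLE dbo.ImportStaging;
BULK INSERT dbo.ImportStaging
FROM 'd:\exports\data.txt'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', FIRSTROW = 2);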
I'm trying to calculate how much unused space I have in one datafile. My main goal is to determine the maximum space I can save by doing a DBCC shrink. Any help is greatly appreciated.
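The unused space in a file is roughly its size minus its SpaceUsed, which is also about what a shrink could reclaim. A minimal sketch (SQL Server 2005+ catalog; run in the database in question):

SELECT name,
       size/128 AS total_mb,
       FILEPROPERTY(name, 'SpaceUsed')/128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed'))/128 AS free_mb
FROM sys.database_files;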
Is there any limit to the maximum size of a datafile or transaction log you can have with SQL Server 2000 on Windows 2000? Also, is there a maximum size that should be adhered to for performance and admin reasons?
I have a DTS package scheduled to run hourly as a job since this past November (Win2000 Server, MSSQL 2000 Standard). It has been running fine, except the last few days SQL Agent shows it attempted to run but fails. Checking the history for the job, I got this error message each time:
==================================================
DTSRun: Executing...
DTSRun OnStart: DTSStep_DTSDataPumpTask_2
DTSRun OnError: DTSStep_DTSDataPumpTask_2, Error = -2147467259 (80004005)
Error string: Error creating datafile mapping: The volume for a file has been externally altered so that the opened file is no longer valid.
Error source: Microsoft Data Transformation Services Flat File Rowset Provider
Help file: DTSFFile.hlp
Help context: 0
Error Detail Records:
Error: 1006 (3EE); Provider Error: 1006 (3EE)
Error string: Error creating datafile mapping: The volume for a file has been externally altered so that the opened file is no longer valid.
Error source: Microsoft Data Transformation Services Flat File Rowset Provider
Help file: DTSFFile.hlp
Help context: 0
DTSRun OnFinish: DTSStep_DTSDataPumpTask_2
DTSRun: Package execution complete.
Process Exit Code 1. The step failed.
==================================================
The key information, I think, is this line: Error creating datafile mapping: The volume for a file has been externally altered so that the opened file is no longer valid.
I don't think it's SQL Agent, as its scheduler is running other jobs (backups) fine. The DTS package runs fine manually. So I suspect dtsrun.exe itself. But where do I go from here?
I've run into this problem working with a VLDB. Six months ago, when I installed the DBMS (Win2k3 x64 + SP2, SQL 2k5 x64 + SP2, 4 dual-core processors and 12 GB RAM), I had 10 disks (actually ten LUNs from a Storage Area Network), each 50 GB. I put tempdb and the transaction log on two separate 50 GB disks and spread the database over 8 datafiles on the other 8 disks; I created each datafile with a size of 50 GB (autogrowth disabled), so my DB had 400 GB of space in its datafiles. After a while the datafiles began to fill, and we decided to add a couple more 50 GB disks, where I placed two new datafiles; now my DB is around 430 GB and I have this strange situation:
The first 8 datafiles are now almost full of data, and obviously they can't grow since they already occupy their whole disks.
The two additional datafiles are relatively empty (about 15 GB each).
As far as I understand, each time SQL Server writes to the database it now writes only to the 2 new datafiles, and I fear that this can affect performance. I'd like to reorganize the space so as to have 10 datafiles, each with 43 GB of data, but I haven't found any instruction/tool able to move data between datafiles.
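There is no direct "move pages from file A to file B" command; the two usual workarounds are shrinking the full files (which relocates pages onto files with free space in the same filegroup) or rebuilding the big clustered indexes so the data is re-allocated under proportional fill. A hedged sketch of the shrink route, with hypothetical names; expect heavy I/O and index fragmentation:

USE MyVLDB;
-- Push data out of an over-full file toward the emptier ones; target size is in MB.
DBCC SHRINKFILE (MyVLDB_Data1, 44000);

-- Afterwards, rebuild the worst-fragmented indexes.
ALTER INDEX ALL ON dbo.BigTable REBUILD;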
I need to move a datafile on my secondary database, which is in standby mode. I have attempted to use the RESTORE command with the MOVE and STANDBY parameters:
use master
RESTORE LOG BWP FROM DISK='L:\trans_bkp\BWP_20071009080001.trn'
WITH MOVE 'BWPDATA3' TO 'N:\BWPDATA3\BWPDATA3.ndf',
STANDBY='L:\TRANS_BKP\BWP_20071009130001.tuf'
But I get the following error messages:
Msg 3174, Level 16, State 1, Line 1 The file 'BWPDATA3' cannot be moved by this RESTORE operation.
Msg 3119, Level 16, State 1, Line 1 Problems were identified while planning for the RESTORE statement. Previous messages provide details.
Msg 3013, Level 16, State 1, Line 1 RESTORE LOG is terminating abnormally.
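That error is expected: RESTORE LOG can only MOVE a file that was created during the interval the log backup covers; relocating an existing file on a standby requires a full RESTORE DATABASE ... WITH MOVE. A hedged sketch (the full-backup path is hypothetical):

RESTORE DATABASE BWP FROM DISK = 'L:\full_bkp\BWP_full.bak'
WITH MOVE 'BWPDATA3' TO 'N:\BWPDATA3\BWPDATA3.ndf',
     STANDBY = 'L:\TRANS_BKP\BWP.tuf';

After that, log restores continue against the new location.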
I am on SQL Server 2005. I have a production database that I log ship to another server, keeping a standby copy of the database. Transaction logs are backed up every 15 minutes on the production database, then copied to the standby server and applied in order to the read-only standby database.
Every month we add a new partition and datafile to the production database. This causes the log shipping process to break, because the read-only standby database doesn't have the new datafile present. I had hoped that the ALTER DATABASE command that creates the datafile would be log shipped. It forces me to do a full db restore every month, which is a major pain.
Has anyone encountered a similar scenario? How can I 'log ship' the addition of a datafile every month and avoid doing a full restore of my standby db?
I should add that this is a home-grown log shipping process; we aren't using the SQL Server built-in log shipping. Here is a typical backup transaction log script that I'm using:
-- using SQL LiteSpeed
exec master..xp_backup_log @database='dbname', @filename='d:\dbbackups\dbname_txlog_<uniqueidentifier>.bak', @init=1
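For what it's worth, the ALTER DATABASE ... ADD FILE is in the log and will replay on the standby; the restore just has to be told where to put the new file, since the primary's path doesn't exist there. RESTORE LOG ... WITH MOVE is allowed precisely for files created within that log interval, so a native-syntax sketch would look like the following (logical name and paths are hypothetical; LiteSpeed's xp_restore_log should have an equivalent option):

RESTORE LOG dbname
FROM DISK = 'd:\dbbackups\dbname_txlog_<uniqueidentifier>.bak'
WITH MOVE 'new_partition_file' TO 'e:\standby_data\new_partition_file.ndf',
     STANDBY = 'd:\dbbackups\dbname.tuf';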
I am relatively new to VB 2005 and SQL Express. I have a question concerning where I can find the SQL database file (*.mdf) after I have installed (published) my VB 2005 database application (the *.mdf file was created by Visual Studio Express). It seems to be nowhere on the disk of the PC it is installed on. So, what happens to the *.mdf file after installation of the application, and in what way can I connect to the *.mdf file after the application the file is part of is published?
I am trying to get a moving total (just like a moving average). It should always sum up the current record plus the previous two records, grouped by EmpId. For example, I am attaching an image of the Excel calculation.
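On SQL Server 2012 or later this is a windowed SUM with a sliding frame. A minimal sketch; the table and column names are hypothetical stand-ins for yours:

-- Moving total of the current row plus the previous two rows, per employee.
SELECT EmpId,
       PayDate,
       Amount,
       SUM(Amount) OVER (PARTITION BY EmpId
                         ORDER BY PayDate
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS MovingTotal
FROM dbo.Payroll
ORDER BY EmpId, PayDate;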