Removing Secondary .ndf File
Aug 16, 2002
I've inherited a database from a SQL7 system and converted it to SQL2000. It has a secondary data file (.ndf) and a secondary log file. Because the server configurations are different, it's no longer necessary to have the secondary files. How do I merge the secondary file data into the primary files and then delete the secondary files?
Thanks,
Al
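One way to approach this (a sketch only; the logical file names below are placeholders, so check the real ones with sp_helpfile first) is to empty each secondary file into the remaining files of the same type and then drop it:

USE MyDb
GO
-- Move the data out of the secondary data file into the other data files, then drop it.
DBCC SHRINKFILE (MyDb_Data2, EMPTYFILE)
GO
ALTER DATABASE MyDb REMOVE FILE MyDb_Data2
GO
-- The secondary log file can only be removed once no active log records live in it;
-- back up the log first if REMOVE FILE complains that the log is in use.
DBCC SHRINKFILE (MyDb_Log2, EMPTYFILE)
GO
ALTER DATABASE MyDb REMOVE FILE MyDb_Log2
GO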
Feb 24, 2003
I'm converting DBs from SQL7 to SQL2K; the previous owner & server issues had created secondary data and/or log files. The size of the databases, and the configuration of the new server, do not warrant those secondary files. In restoring the databases from SQL7 to SQL2K, either by detach/copy/attach or backup/restore, is there a method for deleting the secondary files without, of course, losing any data?!
TIA,
Al
Jan 24, 2015
I have created a new login on the primary server and granted it db_owner permission on the primary database. How do I transfer this login to the secondary server and assign the same permission to the secondary database?
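A hedged sketch of the usual approach: recreate the login on the secondary with the same SID as on the primary, so that the db_owner user inside the restored/log-shipped database maps to it automatically. The login name, password, and SID below are placeholders.

-- On the primary: capture the SID (for a Windows login skip this and simply run
-- CREATE LOGIN [DOMAIN\name] FROM WINDOWS on the secondary).
SELECT name, sid FROM sys.server_principals WHERE name = N'app_login'

-- On the secondary: recreate the login with the same SID.
CREATE LOGIN app_login
WITH PASSWORD = N'UseTheRealPasswordHere!1',
     SID = 0x91AA35D4F5C24E6B8D0E3F2A1B4C5D6E   -- value copied from the primary

Because the secondary database arrives via restore, its database-level permissions travel with it; only the server-level login has to be created by hand.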
Mar 14, 2003
I'm trying to run a set of DBCC commands to empty and then delete a secondary log file; however, no matter how large I make the primary log file it won't empty the secondary file. Any suggestions?
DBCC SHRINKDATABASE (VE, 10)
GO
alter database VE modify file (name= VE_log,size = 1200)
GO
dbcc shrinkfile (VE_log2,EMPTYFILE)
GO
The EMPTYFILE step returns:
Cannot shrink log file 3 (VE_Log2) because all logical log files are in use.
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
------ ------ ----------- ----------- ----------- --------------
17 3 86848 128 86848 128
--alter database VE remove file VE_log2
--GO
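The "all logical log files are in use" message usually means the active part of the log still sits in VE_log2. A hedged sketch of the sequence that normally clears it (the backup path is a placeholder, and the backup/shrink pair may need to be repeated until the active portion of the log wraps back into the primary log file):

BACKUP LOG VE TO DISK = 'D:\Backups\VE_log.trn'
GO
DBCC SHRINKFILE (VE_log2, EMPTYFILE)
GO
ALTER DATABASE VE REMOVE FILE VE_log2
GO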
Jan 16, 2003
I have a SQL Server 2000 database with a primary data file (MDF) and a secondary data file (NDF). I would like to remove the secondary file and only have the MDF. Is this possible?
Thanks, Dave
Mar 11, 2015
I am after T-SQL code which will simply load the next T-log backup file from a network share folder to a warm standby DB on a secondary server. What is needed is a third server (server X) to participate in log shipping (multiple targets).
Primary SERVER (SERVER A)
Secondary SERVER (SERVER B) Log shipped to via GUI.
THIRD SERVER (SERVER X) which will contain the same log shipped db from server A.
This will simply restore the logs from a network share to keep the db up to date.
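A minimal sketch of the restore statement such a job would issue on server X, assuming the share path, file name, and undo-file location shown here (all placeholders); log backups must be applied in sequence, and the database must already be in a restoring or standby state:

RESTORE LOG MyDatabase
FROM DISK = N'\\FileServer\LogShip\MyDatabase_20150311_0915.trn'
WITH STANDBY = N'E:\Standby\MyDatabase_undo.tuf'   -- or WITH NORECOVERY if read access isn't needed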
Jun 22, 2007
I have a database that has been running well for a few years.
It has a single data file.
It has now become very large and is creaking and running slowly at times.
Is it possible to now create a secondary data file, or do I have other options?
Many Thanks
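If a second file (ideally on a separate physical drive) turns out to be the right option, a sketch of the syntax looks like this; the database name, filegroup name, path, and sizes are placeholders, and adding a file by itself does not redistribute existing data:

ALTER DATABASE BigDb ADD FILEGROUP SECONDARY_FG
GO
ALTER DATABASE BigDb
ADD FILE (
    NAME = BigDb_Data2,
    FILENAME = N'F:\SQLData\BigDb_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP SECONDARY_FG
GO

Rebuilding the largest tables or indexes onto the new filegroup is what actually moves I/O there.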
Apr 18, 2007
I have a SQL Server 2000 DB with two data files: the primary data file has the extension .mdf and the secondary file has the extension .ndf (as per Microsoft's recommendation).
When I try to back up the DB and restore it through Enterprise Manager, in the Restore -> Options window I see both files with the same .mdf extension, and when the restore completes, the new database still has the .mdf extension for both files.
Why this behaviour?
* I even tried creating a new test DB with two files; it's still the same behaviour.
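The extension shown in the restore grid appears to be only a generated default name; SQL Server itself does not care whether a data file ends in .mdf or .ndf. If the restore is scripted instead, the MOVE clauses give full control over the physical names (logical names and paths below are placeholders):

RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDb.bak'   -- shows the logical file names

RESTORE DATABASE MyDb_Copy
FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb_Data'  TO N'D:\SQLData\MyDb_Copy.mdf',
     MOVE 'MyDb_Data2' TO N'D:\SQLData\MyDb_Copy_2.ndf',
     MOVE 'MyDb_Log'   TO N'E:\SQLLogs\MyDb_Copy_log.ldf',
     REPLACE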
Jul 6, 2007
We have an encrypted drive (that can be mounted and dismounted, a third party tool to encrypt drive path). I wanted to store the secondary file to that encrypted drive path. The secondary file stores confidential information. I separated the table from the primary to secondary file. Encryption per column is not advisable to do on that table so we decided to separate that table and put it on secondary filegroup. The physical file is stored in the mounted drive path.
I can read and write in that mounted drive path. I can also read and write if the drive is unmounted (which I believe read and write is really being done). When the drive is unmounted, the physical secondary file (.ndf) is not visible to any user logging in the server itself (this is actually the goal why we do this encrypted drive setup thing). It is kept virtually somewhere in the machine. To mount it back, a password is needed.
I'm a bit confused; can somebody advise or give their insight on this setup? I believe that when the drive is dismounted, SQL Server stores the transactions in cache until it finds that the drive is mounted back. This means that those transactions are not committed yet. When the drive is mounted back, I think SQL Server is smart enough to check/know that the drive is physically present and will flush all the pending transactions from the cache to the hard drive.
Is my assumption correct? Is there anything I need to know about transactions, commits, and that data-flushing behaviour on the hard drive?
Thanks in advance....
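Rather than assuming the writes are cached, it may be worth checking what state SQL Server actually puts the file into while the drive is dismounted; a quick sketch (SQL Server 2005 or later):

-- Run in the affected database, once with the encrypted drive mounted and once dismounted,
-- and compare state_desc for the file on the secondary filegroup.
SELECT f.name AS logical_name,
       f.physical_name,
       f.state_desc,
       fg.name AS filegroup_name
FROM sys.database_files AS f
LEFT JOIN sys.filegroups AS fg ON f.data_space_id = fg.data_space_id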
Jul 26, 2004
We are using log shipping and we would like to remove all transferred and applied log files (on the primary box).
We intend to use a trigger like this:
CREATE TRIGGER del_log
ON log_shipping_plan_history
AFTER INSERT
AS
BEGIN
    DECLARE @lastfile nvarchar(256)

    SELECT @lastfile = i.last_file
    FROM log_shipping_plan_history e
    INNER JOIN inserted i ON e.sequence_id = i.sequence_id
    WHERE i.activity = 1

    IF (@lastfile IS NOT NULL)
    BEGIN
        ...
        ... remove the file (using xp_cmdshell, for example)
        ...
    END
END
But the problem is that only the last file transferred and applied will be removed
(sometimes more than one file is applied in one shot ...
see the num_files column in log_shipping_plan_history).
Is there any solution to remove all the files generated before the last one returned by the query?
Any other solutions? (The SQL wizard gives the possibility to remove files after a lapse of time: 1 hour, 1 day...).
I am also looking for the table that contains all the log files (the ones we can see when we try to restore a DB).
Thanks
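A hedged sketch of the time-based cleanup the wizard offers, done by hand: delete transaction-log backups older than one day from the share. The path is a placeholder, xp_cmdshell must be allowed, and forfiles has to be present on the server (before Windows 2003 it shipped in the resource kit).

DECLARE @cmd varchar(500)
SET @cmd = 'forfiles /p "D:\LogShipping" /m *.trn /d -1 /c "cmd /c del @file"'
EXEC master..xp_cmdshell @cmd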
Jul 27, 2015
How do I add a secondary data file to a database that is part of an AlwaysOn Availability Group on SQL Server 2012?
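A hedged sketch: on the primary replica this is an ordinary ALTER DATABASE, but the target path has to exist with the same drive and folder layout on every secondary replica, otherwise redo of the file-add suspends the database there. Names, path, and sizes are placeholders.

ALTER DATABASE MyAgDb
ADD FILE (
    NAME = MyAgDb_Data2,
    FILENAME = N'E:\SQLData\MyAgDb_Data2.ndf',   -- this path must be valid on all replicas
    SIZE = 4GB,
    FILEGROWTH = 512MB
)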
Nov 9, 2015
I added a secondary data file to tempdb yesterday and gave it a wrong location by mistake. If I try to change the location, I now get an error. I think that is because tempdb is in use, and that is why I can't change its secondary file's location. Do I need to take tempdb offline and then change the secondary file's location?
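You can't take tempdb offline, but you don't need to: tempdb is recreated at startup, so the fix (a sketch; the logical file name and target path are placeholders) is to point the file at the right location and restart the instance, then delete the stray file:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev2,                       -- logical name of the misplaced secondary file
             FILENAME = N'T:\TempDB\tempdev2.ndf')  -- correct location
-- Restart the SQL Server service for the change to take effect.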
Jul 14, 2015
I have a database of around 500 GB. Right now the database has only one data file and one log file, and it has only one filegroup; all the indexes and tables are placed in the PRIMARY filegroup. We are going to separate them: the plan is to move all the indexes to a SECONDARY filegroup while all the tables stay in the PRIMARY filegroup. The problem with implementing this is that there are around 600 tables and each table has at least 2 non-clustered indexes, so is there any way to move all the indexes to the SECONDARY filegroup?
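A sketch of one workable pattern (object, index, and filegroup names are placeholders): rebuild each nonclustered index onto the new filegroup with DROP_EXISTING, and use the catalog views to generate those statements for all ~600 tables rather than writing them by hand.

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (DROP_EXISTING = ON)
ON [SECONDARY]

-- List every non-partitioned nonclustered index still sitting on PRIMARY,
-- as the driving set for generating the CREATE ... DROP_EXISTING statements.
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name,
       i.name                          AS index_name
FROM sys.indexes AS i
JOIN sys.filegroups AS fg ON i.data_space_id = fg.data_space_id
WHERE i.type = 2
  AND fg.name = 'PRIMARY'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1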
Sep 23, 2014
I have created a Test SSIS Package within BIDS (VS 2K8, v 9.0.30729.4462 QFE; .NET v 3.5 SP1) that connects to our Test Listener.
There is only one Connection Manager object, an OLE DB Provider for SQL Server.
The ConnectionString lists: Provider=SQLOLEDB.1;Integrated Security=SSPI
The Test Connection within BIDS works.
The Package Control Flow has just one object, an Execute SQL Task that performs an EXEC on an SP that contains only a SELECT (read).
The Package runs within BIDS.
I've placed this Package within a Job on the Primary Node. I've run the job successfully with the 32-bit runtime both on and off. The location of the file on the server happens to be on a share that resides on what is currently the Secondary Node.
When I try to run the exact copy of this Job on the Secondary Node (which has been set up for Read All Connections: Yes), I get an error regardless of the 32-bit runtime option. At this point, the location of the file is on the Secondary Node.
The error is: "Login failed for user 'OurDomain\Agent_Account'".
The Agent is a member of NT Service\SQLServerAgent on both instances, and that account is a member of sysadmin. Adding the Agent account as well, and giving that account sysadmin, makes no difference either.
Why can't I get this to work?
Jun 18, 2015
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:SQLMS SQL ServerMSSQL11.MSSQLSERVERMSSQLDATAMyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.
Starting a few minutes later, the Agent Job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary, and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries only exist for DR purposes - no one runs reports against them or uses them at all. I had a similar problem last weekend on a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
Oct 30, 2015
Today we received an issue on an application database: the internal free space on the DB is 0%. The database is laid out as below:
name    fileid  filename                              filegroup  size         maxsize        growth     usage
XX      1       I:DataMSSQL.1MSSQLDataNew XX.mdf      PRIMARY    68140032 KB  Unlimited      0 KB       data only
XX_log  2       I:DataMSSQL.1MSSQLDataNew XX_log.LDF  NULL       1050112 KB   2147483648 KB  102400 KB  log only
XX_2    3       I:DataMSSQL.1MSSQLDataNew XX_2.ndf    PRIMARY    15458304 KB  Unlimited      0 KB       data only
XX_3    4       I:DataMSSQL.1MSSQLDataNew XX_3.ndf    PRIMARY    13186048 KB  Unlimited      0 KB       data only
XX_4    5       I:DataMSSQL.1MSSQLDataNew XX_4.ndf    PRIMARY    19570688 KB  Unlimited      204800 KB  data only
XX_5    6       I:DataMSSQL.1MSSQLDataNew XX_5.ndf    PRIMARY    19591168 KB  Unlimited      204800 KB  data only
Two of the secondary data files have autogrowth enabled (unrestricted, 200 MB increments), and three of the data files, including the primary, have autogrowth turned OFF. The application users are complaining that there is no internal free space in the DB.
What we fail to understand is: when autogrowth was already turned OFF on three of the data files (one primary and two secondary), why was the application still trying to increase the space on those .mdf and .ndf files? And with autogrowth turned ON on two of the secondary data files, why was the DB not able to expand those files?
What more data do I need so I can submit an analysis of this?
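To quantify the internal free space per file before analysing the growth settings, a sketch like this (run in the affected database) shows used versus allocated space:

SELECT name,
       size / 128                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files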
Nov 6, 2015
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox which works with file system operations like this, but it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.
An alternate approach would be to run the backup job again after the main backup and change the destination to the alternate location. But I was thinking that another backup job would probably incur more overhead on the server than a simple file copy operation. If I do end up taking this approach, I could also use the cleanup task to toss older .bak files in the alternate directory.
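One native option (a sketch, not a tested answer): add an extra CmdExec step to the existing backup job that mirrors new .bak files to the alternate location. The job name, paths, and file mask are placeholders; xcopy /D only copies files that are newer than what is already at the destination.

EXEC msdb.dbo.sp_add_jobstep
     @job_name  = N'Nightly Full Backup',
     @step_name = N'Copy backup to alternate drive',
     @subsystem = N'CmdExec',
     @command   = N'xcopy "D:\Backups\*.bak" "\\AltServer\SQLBackups\" /D /Y /I',
     @on_success_action = 1   -- quit the job reporting success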
Jan 14, 2008
Hello
We have set up log shipping between a primary and secondary DB. The secondary DB is currently in Standby/Read-Only mode, and I cannot take a backup of the secondary DB now.
Shall we disable log shipping, change the DB to multi-user mode, and take the backup? Or is there a different method, without disabling log shipping?
Please advise. Thanks in advance.
Jay
Sep 23, 2014
I'm getting the error below while running EMPTYFILE:
DBCC SHRINKFILE: Page 10:6521 could not be moved because it is a work table page.
Msg 2555, Level 16, State 1, Line 1
Cannot move all contents of file "tempdata" to other places to complete the emptyfile operation.
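A hedged workaround that sometimes unblocks this: work tables belong to internal cached objects, so clearing those caches can release the page; if it does not, the tempdb file usually can only be emptied immediately after an instance restart.

DBCC FREESYSTEMCACHE ('ALL')
DBCC FREEPROCCACHE
GO
DBCC SHRINKFILE (tempdata, EMPTYFILE)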
Oct 5, 2006
Hi There,
I am parsing a directory of flat files and looping through it with a Foreach Loop. Some of the files have lines that contain characters that I would like to remove. In fact, it would be good if I could remove the entire line. Is there a way to do this with a Script Task or some other way?
Thanks for your help.
Jul 17, 2007
I have a file I'm pulling from another type of database into an Excel spreadsheet and then using my dtsx package to import the spreadsheet into my SQL database. The problem I'm having is that one of the fields coming out of the database into the spreadsheet has the thousands separator in it, and I want to use that field as a numeric field without the ",". Right now I have a macro that I run on the spreadsheet to reset the field to straight numbers without commas before importing it, but I would like to configure my Integration Services package to do it automatically.
Any ideas would be appreciated.
Thanks in Advance
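If the conversion is easier to keep in T-SQL than in the package, one hedged alternative is to land the column as text in a staging table and strip the separators during the final insert (table and column names are placeholders):

INSERT INTO dbo.Sales (Amount)
SELECT CAST(REPLACE(AmountText, ',', '') AS int)
FROM dbo.Staging_Sales

Inside the package itself, a Derived Column transformation with a similar REPLACE expression achieves the same thing.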
Jul 6, 2015
I wanted to remove an extra transaction log file that was no longer required, and ran the following against the database...
DBCC Shrinkfile (DB_Name_log2, Emptyfile);
go
alter database [Db_Name]
remove file DB_Name_log2;
go
I got a successful removal message. But if I go into the properties of the database, and click on files, it still shows up. Why is this and how can I get rid of it?
It shows up in sys.master_files as offline.
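A hedged explanation: in the full recovery model a removed log file can linger in sys.master_files in an OFFLINE state until the log has been backed up, after which the deferred removal completes. A sketch to try (the backup path is a placeholder):

BACKUP LOG [Db_Name] TO DISK = N'D:\Backups\Db_Name_log.trn'
GO
SELECT name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('Db_Name')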
Apr 20, 2007
Hi there,
I have a large (420 GB) database that has never had data archived off before. I've taken a backup to a test server and run a script supplied by the product vendor, which has removed a large amount of old data that is no longer required.
I have checked within Enterprise Manager that this data has now gone; however, the actual file itself has not shrunk in size. Is there a further step I need to take to get back the space?
Kind Regards
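Deleting rows frees space inside the file but never shrinks the file itself; reclaiming disk space needs an explicit one-off shrink (a sketch; the logical file name and target size in MB are placeholders), followed by index maintenance, because shrinking fragments indexes heavily:

DBCC SHRINKFILE (MyBigDb_Data, 250000)   -- shrink the data file down to roughly 250 GB
GO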
Apr 30, 2007
I have two databases, db1 and db2. I've made filegroups of db1 using database partitioning, and now I want to attach those filegroups to db2. I made a backup of the filegroups to get around the error 'the filegroup is in use'. But now, while attaching with the sp_attach_single_file_db stored procedure from the path where the backup of db1's filegroups is present, I get the error on db2 that 'db2 already exists'. I have been unable to find any fruitful result after spending days searching. Can anyone please tell me how to deal with this? I am running out of time, so urgent replies are highly appreciated.
Feb 11, 2014
I set up SQL Server 2012 on Windows Server 2012 with the service accounts in the local Administrators group, but now that I'd like to remove the accounts from this group, I'm finding they don't have the appropriate access to the network storage. I'm looking for notes on setting up the per-service SIDs for SQL (SQL Engine, Analysis Services, Reporting Services, and Agent Service) so they can read the Data, Log, and TempDB mount points.
Apr 3, 2008
Hi,
I am currently playing with SQL Express 2005, and I am wondering if there is any way to create secondary keys on a table?
I know you can create compound Primary Keys (not sure that is the correct terminology) and Foreign Keys. However, I am unsure about secondary keys (compound or otherwise).
Could someone tell me if this is possible? I am looking at migrating from an Acucobol Vision file system, which makes heavy use of secondary keys.
Cheers
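SQL Server has no object literally called a secondary key; what Vision calls a secondary key maps to a nonclustered index (unique or not), and indexes can be compound. A sketch with placeholder names:

CREATE NONCLUSTERED INDEX IX_Customer_LastName_FirstName
ON dbo.Customer (LastName, FirstName)        -- compound, duplicates allowed

CREATE UNIQUE NONCLUSTERED INDEX UX_Customer_AccountNo
ON dbo.Customer (AccountNo)                  -- unique secondary key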
May 27, 2015
On my secondary server the database is stuck in a restoring state, and when I checked the AlwaysOn dashboard it says "This secondary database is not joined to the availability group".
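If the database has been restored WITH NORECOVERY and all later log backups have been applied, joining it is a single statement run on the secondary replica (a sketch; database and group names are placeholders):

ALTER DATABASE [MyAgDb] SET HADR AVAILABILITY GROUP = [MyAG]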
May 29, 2008
hi all,
For a data warehouse project, the reporting team needs to know the delta changes to the master database.
One way we were thinking of was to use log shipping and run reports / ETL off the secondary server. But the team needs to know which records got changed, and I was thinking of adding timestamp columns to the necessary tables (only in the secondary database schema); that way we can track the changes.
But from my research, it seems like the secondary database needs to have the same schema as the primary database.
With log shipping, can my secondary DB have a slightly different schema? If so, how do I do it?
If not, how can I accomplish the above scenario without adding new columns (if possible) to the master database and with low overhead?
thanks
lucy
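One hedged alternative, if an upgrade to SQL Server 2008 (Enterprise edition) is an option: Change Data Capture records which rows changed on the primary without altering the table schema, so the reporting side can read the deltas directly instead of diffing a log-shipped copy. Table and schema names below are placeholders.

EXEC sys.sp_cdc_enable_db

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Customer',
     @role_name     = NULL          -- no gating role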
Oct 2, 2006
Hi,
We have SQL Server 2005 configured with mirroring to protect from physical errors. We also have a need for an (out of sync is ok) reporting server and we'd like to reduce our downtime in the event of a logical error.
The primary database is already being backed up (full and t-logs) to a shared network drive.
Can I implement the second half of log shipping (i.e. the stuff you do to the secondary) so that I don't have to change the current backup schedules on the primary server?
Specifically, in the list of sp's below, can I start halfway down at sp_add_log_shipping_secondary_primary (Transact-SQL), without having to run the primary sp's?
sp_add_log_shipping_primary_database (Transact-SQL)
sp_add_jobschedule (Transact-SQL)
sp_add_log_shipping_alert_job (Transact-SQL)
sp_add_log_shipping_secondary_primary (Transact-SQL)
sp_add_log_shipping_secondary_database (Transact-SQL)
sp_add_log_shipping_primary_secondary (Transact-SQL)
Or do I have to ditch my current backup maintenance plans on the primary and start again?
Thanks
Ed
Dec 31, 2006
Hello,
I would like to know how to carry out these two cases in an efficient way:
1. I have two SQL 2005 servers running, one as primary, one as secondary. I would like to synchronize or replicate the transactions in real time.
2. When the primary goes down, I would like my secondary server to take its place with no need for IP changes or application setting changes.
If you have a good architecture on this, please share with me. I am not sure whether my questions are clear enough or not. I also will be reading some articles about these. Thank you in advance.
Regards,
Paing
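For SQL 2005 the usual fit for both points is synchronous database mirroring with a witness; clients avoid IP changes by adding "Failover Partner=SecondaryServer" to their connection strings. A heavily hedged sketch of the core statements (server names, port, and database name are placeholders; the mirror copy must first be restored WITH NORECOVERY from a backup of the principal):

-- Once per instance (principal, mirror, and witness):
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL)

-- On the mirror, point at the principal:
ALTER DATABASE AppDb SET PARTNER = 'TCP://principal.domain.local:5022'
-- On the principal, point at the mirror, then add the witness:
ALTER DATABASE AppDb SET PARTNER = 'TCP://mirror.domain.local:5022'
ALTER DATABASE AppDb SET WITNESS = 'TCP://witness.domain.local:5022'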
Apr 3, 2007
I have created CompanyID as the primary key. How do I create a unique secondary index on CompanyName, to avoid inserting duplicate records with the same company name into the database? I'm creating the database in SQL Server 2005 with ASP.NET C# 2005. I know one way is to write an IF NOT EXISTS query at the time of insert, but I want to know whether there is any other way to make a unique secondary index for CompanyName on the Company table. Thanks.
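A sketch of the index-based route (table and column names are placeholders matching the description): a UNIQUE constraint, which is implemented as a unique index, makes the engine reject duplicates, so the insert code no longer needs an IF NOT EXISTS check.

ALTER TABLE dbo.Company
ADD CONSTRAINT UQ_Company_CompanyName UNIQUE NONCLUSTERED (CompanyName)
-- A duplicate insert now fails with error 2627, which the C# code can catch and report.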
Oct 21, 2003
I have a database with two log files, VE_log and VE_log2 (the split is left over from a legacy system with limited disk space). I want to eliminate the second log.
I've tried SHRINKFILE (EMPTYFILE), which of course reduces the file to the default minimum pages, SHRINKFILE (TRUNCATEONLY), and SHRINKDATABASE (VE), and in every case the ALTER DATABASE REMOVE FILE command fails because the log isn't empty.
Ideas???
Thanks,
Al
Apr 14, 2014
While configuring a SQL Server 2012 cluster with AlwaysOn, why do we need two secondary replicas?
The attached link will describe the actual query. [URL] ....