We have an encrypted drive (it can be mounted and dismounted; a third-party tool encrypts the drive path). I wanted to store the secondary file on that encrypted drive path, because the secondary file stores confidential information. I separated the table out of the primary file and into the secondary file. Column-level encryption is not advisable for that table, so we decided to separate the table and put it in a secondary filegroup, with the physical file stored on the mounted drive path.
I can read and write on that mounted drive path. I can also read and write when the drive is unmounted (and I believe the reads and writes really do happen). When the drive is unmounted, the physical secondary file (.ndf) is not visible to any user logging in to the server itself (this is actually the goal of this encrypted drive setup); it is kept virtually somewhere on the machine, and a password is needed to mount it back.
I'm a bit confused; can somebody advise or give their insight on this setup? I believe that when the drive is dismounted, SQL Server stores the transactions in cache until it finds that the drive is mounted back, which would mean that those transactions are not yet committed. When the drive is mounted back, I think SQL Server is smart enough to know that the drive is physically present again and will flush all the pending transactions from the cache to the hard drive.
Is my assumption correct? Is there anything else I need to know about transactions, commits, and how data gets flushed to the hard drive?
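One hedged way to test this behavior (assuming SQL Server 2005 or later; the table, column, and logical file names below are hypothetical) is to write a row, force a checkpoint so dirty data pages must hit disk, and then inspect the file state:

INSERT INTO dbo.ConfidentialTable (SomeColumn) VALUES ('test row')  -- hypothetical table on the secondary filegroup
CHECKPOINT  -- flushes dirty data pages to disk; if writes to the dismounted drive really fail, an I/O error should surface here
SELECT name, physical_name, state_desc
FROM sys.database_files  -- state_desc shows whether SQL Server still considers each file online

If the checkpoint succeeds and state_desc stays ONLINE with the drive dismounted, the mount tool is likely intercepting the I/O rather than SQL Server holding uncommitted transactions in cache.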
I need to copy a just-created .bak file to another drive after the backup task has completed. I don't see anything in the job toolbox that works with file system operations like this, yet it must be a common need. There are ways to script this or use third-party tools, but I am looking for something native to the SQL Server 2012 SSMS toolset, if possible.
An alternative approach would be to run the backup job again after the main backup, with the destination changed to the alternate location. But I was thinking that another backup job would probably put more overhead on the server than a simple file copy operation. If I do end up taking this approach, I could also use the cleanup task to toss older .bak files in the alternate directory.
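There is no copy-file task in the maintenance plan toolbox, but a second job step can do the copy natively, either as a CmdExec step or, as sketched below, via xp_cmdshell (which must be enabled; the paths are hypothetical):

EXEC master.dbo.xp_cmdshell 'copy /Y "G:\Backups\MyDb.bak" "\\DRServer\Backups\MyDb.bak"'

On Enterprise Edition, BACKUP DATABASE ... MIRROR TO DISK is another option that writes both copies in a single pass.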
I've configured log shipping for DR purposes. I'm concerned that the physical location of the secondary is misreported by SQL Server Management Studio.
Viewing the secondary's location (Management Studio > DB_name > Properties > Files) shows the path of the primary DB (I expected it to show the path of the secondary).
This SQL command shows the correct/actual paths of both the primary and secondary DBs when run on their host servers.
SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files
Is this just cosmetic?
Here is an example of how Management Studio shows the incorrect path for the secondary.
Installed SQL Server 2000 Enterprise trial a week ago on XP Pro. Installed a new Seagate 80 GB HD and used Seagate's utility to copy the old C: drive to the new drive as the new boot drive. All seems to work fine except that, when booting up, SQL Server doesn't start. When I try to start it manually I get the following:
Could not start SQLSERVER service on the local computer. Error 3: the system cannot find the specified path.
I am trying to move a log file from one drive to another.
What I have done is add another file to the log. So now my log has a file on the E: drive and one on the F: drive. I now want to remove the file on the E: drive, and I have already emptied it. When running the command:
ALTER DATABASE Uniprodruntime REMOVE FILE m_rk_runtime_log
I get the following error message:
Server: Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.
I have also gone into Enterprise Manager and tried to delete the file, and it does nothing.
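For context, Msg 5020 indicates that m_rk_runtime_log is the database's first (primary) log file, which can never be removed. On SQL Server 2000 the usual workaround is to relocate the log with detach/attach rather than remove it; a rough sketch with hypothetical paths:

EXEC sp_detach_db 'Uniprodruntime'
-- move the .ldf from the E: drive to the F: drive in the file system, then reattach:
EXEC sp_attach_db 'Uniprodruntime',
    'd:\Data\Uniprodruntime.mdf',      -- hypothetical data file path
    'f:\Logs\m_rk_runtime_log.ldf'     -- the relocated log file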
I have created a new login on the primary server and granted it db_owner permission on the primary DB. How do I transfer this login to the secondary server and assign the same permission on the secondary DB?
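One hedged approach (SQL 2005+ syntax; the login name, password, and SID below are placeholders) is to script the login with its original SID, so the db_owner user in the log-shipped database maps to it without becoming orphaned:

-- on the primary:
SELECT name, sid FROM sys.server_principals WHERE name = N'MyAppLogin'
-- on the secondary, using the SID returned above:
CREATE LOGIN MyAppLogin WITH PASSWORD = N'SamePasswordHere', SID = 0x1234567890ABCDEF

Microsoft's sp_help_revlogin script automates the same idea, including carrying over the password hash.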
I'm trying to run a set of DBCC commands to empty and then delete a secondary log file; however, no matter how large I make the primary log file, it won't empty the secondary file. Any suggestions?
DBCC SHRINKDATABASE (VE, 10)
GO
ALTER DATABASE VE MODIFY FILE (NAME = VE_log, SIZE = 1200)
GO
DBCC SHRINKFILE (VE_log2, EMPTYFILE)
GO

Cannot shrink log file 3 (VE_Log2) because all logical log files are in use.

DbId   FileId CurrentSize MinimumSize UsedPages   EstimatedPages
------ ------ ----------- ----------- ----------- --------------
17     3      86848       128         86848       128
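The "logical log files are in use" message usually means active VLFs still sit in VE_log2. A log backup typically frees them so the shrink can proceed; a hedged sketch (backup path hypothetical, full recovery model assumed, and it may take more than one backup before the active portion wraps back into the primary log file):

BACKUP LOG VE TO DISK = N'D:\Backups\VE_log.trn'
GO
DBCC SHRINKFILE (VE_log2, EMPTYFILE)
GO
ALTER DATABASE VE REMOVE FILE VE_log2
GO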
I've inherited a database from a SQL 7 system and converted it to SQL 2000. It has a secondary data file (.ndf) and a secondary log file. Because the server configurations are different, it's no longer necessary to have the secondary files. How do I merge the secondary file data into the primary files and then delete the secondary files?
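For the data file, a hedged sketch of the usual sequence (logical file names hypothetical): empty the .ndf into the remaining files of its filegroup, then drop it.

DBCC SHRINKFILE (MyDb_Data2, EMPTYFILE)   -- migrates the pages off the secondary data file
GO
ALTER DATABASE MyDb REMOVE FILE MyDb_Data2
GO

The secondary log file can be dropped with the same ALTER DATABASE ... REMOVE FILE once no active log remains in it (a log backup usually clears it).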
I am able to run SSIS packages as SQL Server Agent jobs with a Control Flow "File System Task" if I move a file (test.txt) from a drive (C: on the server where the SQL Agent jobs run) to a subdirectory on the same drive. But if I try to move a file on a network drive, the package fails.
Currently I have a single hard-coded file path to the SSRS config file, which is parsed to provide the Reporting Services web service URL. My question is: how would I run this same query against hundreds of servers that may or may not share the same file path as the hard-coded one?
Is there a way to query the registry to find the location of the config file on any server? It could be on D, E, F, H, etc.
I know I can string together the address followed by "reports" and the named instance if needed, but some instances may not have used the default virtual directory name (Reports).
Am I going about this the hard way? Is there a location where the web service URL exists in a table? I could not locate anything in the Reporting Services database. Basically, I need to inventory all of my Reporting Services URLs.
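One hedged possibility is to read each instance's Reporting Services install path from the registry with the undocumented xp_regread; the key below is a guess for a default SQL 2008 R2 instance, and the instance ID (MSRS10_50.MSSQLSERVER) varies by version and instance name:

DECLARE @rsPath nvarchar(512)
EXEC master.dbo.xp_regread
    @rootkey    = N'HKEY_LOCAL_MACHINE',
    @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Setup',
    @value_name = N'SQLPath',
    @value      = @rsPath OUTPUT
SELECT @rsPath AS ReportServerPath  -- rsreportserver.config normally lives under ...\Reporting Services\ReportServer

As far as I know, the URL reservations live in that config file rather than in any Report Server database table.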
I was wondering if there is a way to use 'Move File' with the File System Task inside a For Each Loop container while setting the destination path variable dynamically.
Currently, this is what I have:
FileDestinationPath variable - set to C:\TestFiles
FileSourcePath variable - set to C:\TestFiles
FileNameAndLocation variable - set to blank
For Each Loop Container – iterates through the folder C:\TestFiles, which contains .txt files with dates in the file name (e.g. Test_09142006.txt), and sets the fully qualified file path in the Variable Mapping FileNameAndLocation.
Script Task (within the For Each Loop, first step) – sets FileDestinationPath to the correct dated folder within C:\TestFiles. For example, if the text files I want to move are for the 14th of September, it takes FileDestinationPath and appends the date folder to the end of it. The text files have a date in the file name (test_09142006.txt), and I pick this apart (from FileNameAndLocation in the For Each Loop) to get the folder date: (Dts.Variables("User::FileDestinationPath").Value = Dts.Variables("User::FileDestinationPath").Value & "\" & Month & "_" & Day & "_" & Year & "\"), which gives me "C:\TestFiles\9_14_2006\".
File System Task (within the For Each Loop, second step) – this is where the action is supposed to occur. I want it to take FileDestinationPath and move the FileNameAndLocation file (from the For Each Loop) into this folder on each run.
Now for my problem. I want this package to run every day, but it has to set the FileDestinationPath variable dynamically according to that day's date. Basically, how do I get this to work, since I can't hard-code the destination path variable from the start? I have the DestinationVariable on the File System Task set to the FileDestinationPath variable, after the Script Task builds it. However, using FileNameAndLocation as the SourceVariable on my File System Task tells me that "Variable "FileNameAndLocation" is used as a source or destination and is empty."
Let me know if I need to clarify further... I may be missing something very simple. Any help would be greatly appreciated!
I am running my package on SQL Server 2012, giving a network path for the flat file destination, and it's working fine. But if I give my local path, it gives me the error "cannot open data file"...
I have a SQL Server 2000 database with a primary data file (MDF) and a secondary data file (NDF). I would like to remove the secondary file and only have the MDF. Is this possible?
I am after T-SQL code which will simply load the next T-log backup file from a network share folder to a warm standby DB on a secondary server. What is needed is a third server (SERVER X) to participate in log shipping (MULTIPLE TARGETS):
Primary server (SERVER A)
Secondary server (SERVER B) - log shipped to via the GUI
Third server (SERVER X) - will contain the same log-shipped DB from SERVER A
This will simply restore the logs from a network share to keep the db up to date.
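For the SERVER X piece, a minimal sketch of the restore step (share, file, and database names hypothetical); the key is restoring WITH NORECOVERY (or STANDBY) so subsequent logs can still be applied:

RESTORE LOG MyDatabase
FROM DISK = N'\\FileServer\LogShipShare\MyDatabase_200901010015.trn'
WITH NORECOVERY

Wrapping this in an Agent job that walks the share's .trn files in name/date order approximates the restore job the log shipping wizard would have generated.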
I have a database that has been running well for a few years. It has a single data file, which has now become very large, and the database is creaking and running slowly at times. Is it possible to now create a secondary data file, or do I have other options? Many thanks.
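Adding a second data file is one option; a hedged sketch with hypothetical names, sizes, and path. Note that a second file mainly helps if it lands on separate physical storage; on the same spindle it buys manageability more than speed.

ALTER DATABASE MyDb
ADD FILE (
    NAME = N'MyDb_Data2',
    FILENAME = N'F:\Data\MyDb_Data2.ndf',
    SIZE = 10240MB,
    FILEGROWTH = 1024MB
)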
I have an issue with a DTS package. We create a zip file and then attach it to emails going out using DTS. The problem is that the attachment, when received, is named using the full path to the file, so it is quite long.
Has anyone seen this before? Is there a way out of this?
I am considering mapping a drive to the share holding the file to be named; that would shorten the name, but the path would still be included.
I am wondering if this is a bug, as I suspect this isn't the default behaviour.
I experienced a weird error while deploying my SSIS package. After running the manifest file, I noticed that one of the configuration files' paths was not updated in the .dtsx file. My solution has 8 packages, and almost every package has 2 configuration files. Except for 1 file, every other config file's path is being updated. Has anybody experienced such a problem?
I have a SQL Server 2000 DB with two data files... the primary data file has the extension .mdf and the secondary file has the extension .ndf (as per Microsoft's recommendation).
When I try to back up the DB and restore it through Enterprise Manager, in the Restore -> Options window I see both files with the same .mdf extension, and when the restore completes, the new database still has the .mdf extension for both files.
why this behaviour?
* I even tried creating a new test DB with two files; it's still the same behaviour.
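For what it's worth, the extension is just part of the physical file name; the engine does not enforce .ndf. A hedged way to control the names in a scripted restore (logical file names hypothetical):

RESTORE DATABASE TestDb
FROM DISK = N'C:\Backups\TestDb.bak'
WITH MOVE N'TestDb_Data'  TO N'C:\Data\TestDb.mdf',
     MOVE N'TestDb_Data2' TO N'C:\Data\TestDb_2.ndf'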
We are using log shipping, and we would like to remove all transferred and applied journals (on the primary box). We intend to use a trigger like this:
CREATE TRIGGER del_log ON log_shipping_plan_history
AFTER INSERT
AS
BEGIN
    DECLARE @lastfile nvarchar(256)
    SELECT @lastfile = i.last_file
    FROM log_shipping_plan_history e
    INNER JOIN inserted i ON e.sequence_id = i.sequence_id
    WHERE i.activity = 1
    IF (@lastfile IS NOT NULL)
    BEGIN
        ... remove the file (using xp_cmdshell, for example) ...
    END
END
But the problem is that only the last file transferred and applied will be removed (sometimes more than 1 file is applied in one shot... see the num_files column in log_shipping_plan_history).
Is there any solution to remove all the files generated before the last one given by the query? Any other solutions? (The SQL wizard gives the possibility to remove files after a lapse of time: 1 hour, 1 day...)
I am also looking for the table that contains all the journal files (the ones we can see when we try to restore a DB).
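The restore dialog reads its file list from the backup history in msdb, so a hedged guess at the underlying query (database name hypothetical):

SELECT bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'MyDb'
  AND bs.type = 'L'               -- 'L' = transaction log backups
ORDER BY bs.backup_finish_date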
I added a secondary data file to tempdb yesterday and gave it the wrong location by mistake. If I try to change the location, I now get an error. I think that is because tempdb is in use, and that is why I can't change its secondary file's location. Do I need to take tempdb offline and then change the secondary file's location?
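There should be no need to take tempdb offline: tempdb file changes only take effect at the next service restart, because tempdb is rebuilt at startup. A hedged sketch (logical name and path are placeholders):

ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev2', FILENAME = N'E:\TempDB\tempdev2.ndf')
-- restart the SQL Server service; tempdb is recreated at the new location and the stray file can be deleted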
For security reasons, the sensitive information must be encrypted, and it should be configurable dynamically (by an ASP.NET application).
So I tried several ways to achieve this:
1. Use SSIS Package Configurations to generate an XML config file. It's a convenient way to generate a config file, but the passwords are not encrypted (or I don't know how to encrypt them).
2. Use Variables and Property Expressions. I can set variables by reading from an external custom XML config file whose content is encrypted (reading and decrypting the custom config file in a Script Task), but on an FTP Connection Manager, the Password property cannot be set via a Property Expression.
Is there any way to store the password encrypted in an external file?
I cannot get the log file path read from the configuration.
If the path in the Connection Manager is invalid, the package throws the error "SSIS logging provider has failed to open the log" instead of reading the path from the config file. What am I doing wrong?
Here is the portion of the config file. Everything else is read from the config file correctly.
I have a database of around 500 GB. Right now the database has only one data file and one log file, and it has only one filegroup; all the indexes and tables are placed in the PRIMARY filegroup. We are going to separate them: the plan is to move all the indexes to a secondary filegroup while all the tables stay in the PRIMARY filegroup. But there will be a problem implementing this, because there are around 600 tables and each table has at least 2 non-clustered indexes. Is there any way to move all the indexes to the secondary filegroup?
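Per index, a rebuild with DROP_EXISTING moves it to the new filegroup in one operation; a hedged sketch (SQL 2005+ syntax; table, index, and filegroup names hypothetical). The ~1,200 statements can be generated by querying sys.indexes, sys.tables, and sys.index_columns rather than written by hand.

CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn
ON dbo.MyTable (MyColumn)
WITH (DROP_EXISTING = ON)
ON [SECONDARY]   -- hypothetical target filegroup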
I have created a Test SSIS Package within BIDS (VS 2K8, v 9.0.30729.4462 QFE; .NET v 3.5 SP1) that connects to our Test Listener.
There is only 1 Connection Manager object, an OLE DB Provider for SQL Server.
The ConnectionString lists: Provider=SQLOLEDB.1;Integrated Security=SSPI
The Test Connection within BIDS works.
The package Control Flow has just 1 object, an Execute SQL Task that performs an EXEC on an SP containing only a SELECT (read).
The Package runs within BIDS.
I've placed this package within a job on the primary node. I've run the job successfully with the 32-bit runtime both on and off. The location of the file on the server happens to be on a share that resides on what is currently the secondary node.
When I try to run the exact copy of this job on the secondary node (which has been set up with Read All Connections: Yes), I get an error regardless of the 32-bit runtime option. At this point, the location of the file is on the secondary node.
The error is: "Login failed for user 'OurDomain\Agent_Account'".
The Agent is a member of NT SERVICE\SQLSERVERAGENT on both instances, and that account is a member of sysadmin. Adding the Agent account as well, and giving that account sysadmin, makes no difference either.
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:\SQL\MS SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName' (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.
Starting a few minutes later, the Agent job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary, and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries exist only for DR purposes; no one runs reports against them or uses them at all. I had a similar problem last weekend with a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
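For reference, a sketch of that re-initialization (paths and names hypothetical):

-- on the primary:
BACKUP DATABASE MyDatabaseName
TO DISK = N'\\FileShare\Backups\MyDatabaseName_copyonly.bak'
WITH COPY_ONLY   -- does not disturb the existing backup chain
-- on the secondary:
RESTORE DATABASE MyDatabaseName
FROM DISK = N'\\FileShare\Backups\MyDatabaseName_copyonly.bak'
WITH NORECOVERY, REPLACE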
Today we received an issue on an application database: the internal free space on the DB is 0%. The DB was designed as below:
name     fileid   filename                                     filegroup   size          maxsize         growth      usage
XX       1        I:\Data\MSSQL.1\MSSQL\Data\New XX.mdf        PRIMARY     68140032 KB   Unlimited       0 KB        data only
XX_log   2        I:\Data\MSSQL.1\MSSQL\Data\New XX_log.LDF    NULL        1050112 KB    2147483648 KB   102400 KB   log only
XX_2     3        I:\Data\MSSQL.1\MSSQL\Data\New XX_2.ndf      PRIMARY     15458304 KB   Unlimited       0 KB        data only
XX_3     4        I:\Data\MSSQL.1\MSSQL\Data\New XX_3.ndf      PRIMARY     13186048 KB   Unlimited       0 KB        data only
XX_4     5        I:\Data\MSSQL.1\MSSQL\Data\New XX_4.ndf      PRIMARY     19570688 KB   Unlimited       204800 KB   data only
XX_5     6        I:\Data\MSSQL.1\MSSQL\Data\New XX_5.ndf      PRIMARY     19591168 KB   Unlimited       204800 KB   data only
2 of the secondary data files (XX_4 and XX_5) have autogrowth enabled, unrestricted, in 200 MB increments, and 3 of the data files, including the primary, have autogrowth turned OFF. The application users are complaining that there is no internal free space on the DB.
What we fail to understand is this: when autogrowth was already turned OFF on 3 of the data files (1 primary and 2 secondary), why was the application trying to increase the space on those .mdf and .ndf files? And when autogrowth is turned ON on 2 of the secondary data files, why was the DB not able to expand those files?
What more data do I need to gather so that I can submit an analysis of this?
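As a starting point for that analysis, per-file size, used space, and growth settings can be pulled straight from the database (SQL 2005+; run in the database in question):

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       growth,
       is_percent_growth,
       max_size
FROM sys.database_files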
We have set up log shipping between a primary and a secondary DB. The secondary DB is currently in Standby/Read-Only mode, and I cannot take a backup of the secondary DB now.
Shall we disable log shipping, change the DB option to multi-user mode, and take the backup? Or is there a different method that doesn't require disabling log shipping?
I have several indexes on a filegroup that I would like to move to a different physical drive. I am aware of the sp_detach_db...sp_attach_db routine, which allows moving the .mdf and .ldf files to a different location. How would I go about moving an .ndf file, though?
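If the instance is SQL 2005 or later, a hedged alternative to detach/attach (logical name and path hypothetical): repoint the catalog while the database is offline, move the file, and bring it back online. Under detach/attach, the .ndf is simply passed to sp_attach_db as one more @filename argument.

ALTER DATABASE MyDb SET OFFLINE
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_Indexes', FILENAME = N'F:\Data\MyDb_Indexes.ndf')
-- move the physical .ndf to F:\Data in the file system, then:
ALTER DATABASE MyDb SET ONLINE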
How do I dynamically change the file path for the source file in DTS? Can I use a global variable? If so, how do I use it in the connection task? Appreciate your help.
I'm trying to move the transaction logs of my databases to a different drive (for fault tolerance). I can create a second transaction log file for each database via Enterprise Manager, but I have 2 questions:
1) If two transaction log files exist for a database, which one does it use?
2) How do I force SQL to use the new transaction log file (so I can delete the old one)?