SQL Server Admin 2014 :: Can I Delete A Data File Or File Group?
Apr 27, 2015
On one server we had file growth, and we had to add a new hard drive and a new file on it. Now we have a new server with a huge hard drive, but all the files remained. Can I consolidate these files into one data file, or not?
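For reference, a minimal sketch of the usual two-step approach, assuming a hypothetical database MyDb with an extra data file named SecondaryData (the primary .mdf itself cannot be removed this way):

-- Move all data out of the extra file into the remaining files in the same filegroup
DBCC SHRINKFILE (N'SecondaryData', EMPTYFILE);
-- Once empty, the file can be dropped from the database
ALTER DATABASE MyDb REMOVE FILE SecondaryData;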
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same file group so that the existing files don't grow too much?
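Pre-allocating is generally fine; a sketch of the statement, with hypothetical names and paths. Note that SQL Server's proportional fill will favour the new, emptier file for new allocations until free space evens out across the filegroup:

ALTER DATABASE MyDb
ADD FILE (
    NAME = N'Data5',                          -- hypothetical logical name
    FILENAME = N'E:\Data\MyDb_Data5.ndf',     -- hypothetical path
    SIZE = 70GB,
    FILEGROWTH = 1GB
) TO FILEGROUP [DataFG];                      -- hypothetical filegroup name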
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary file group, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the non-clustered indexes doesn't rebalance the LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB data, because all the articles say to move to a new file group. I do not want a new file group; I just want to use the primary file group where the data already resides, and redistribute it evenly in the same way that I can with in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP it back in, which I imagine would have the desired effect of distributing the data evenly over the files.
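A rough sketch of that round trip from a command prompt, with placeholder server/database/table names; native mode (-n) preserves varbinary/LOB values, and -E keeps identity values:

bcp MyDb.dbo.DocTable out D:\staging\DocTable.dat -S MyServer -T -n
rem then, in SSMS: TRUNCATE TABLE dbo.DocTable;
bcp MyDb.dbo.DocTable in D:\staging\DocTable.dat -S MyServer -T -n -E -b 10000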
I was running an operation to shrink a data file with EMPTYFILE and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the EMPTYFILE completed (and that the engine isn't going to try to put more data in there before I schedule the removal again, a week or more from now).
How does the engine know not to put any more data in there, and how long does that state last?
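One way to sanity-check it, as a sketch (the file name is a placeholder): compare the allocated size with the pages actually in use; if used space stays near zero over time, no new extents are being allocated in the file.

SELECT name,
       size / 128 AS size_mb,                            -- allocated size
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb  -- pages actually in use
FROM sys.database_files
WHERE name = N'DataFile2';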
I recently installed a standalone version of SQL 2014 Standard on my work computer. I used Access before, but I want to use a SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).
Here is what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily
At first the jobs would fail with an "Access is Denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I made.
Jobs now fail with a "Cannot open data file" error. I tried changing things here and there, but I can't get it to work.
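For reference, a sketch of wiring the credential and proxy up in T-SQL (all names and the secret are placeholders); the job step's "Run as" must then point at the proxy. Also note that mapped drive letters aren't visible to the Agent service, so the package should reference the share by its UNC path:

USE master;
CREATE CREDENTIAL ImportCred WITH IDENTITY = N'DOMAIN\MyUser', SECRET = N'password-here';
USE msdb;
EXEC dbo.sp_add_proxy @proxy_name = N'SSISProxy', @credential_name = N'ImportCred', @enabled = 1;
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SSISProxy', @subsystem_name = N'SSIS';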
My SQL databases in SQL Server 2014 have the status "Suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover the .mdf file.
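If the database really is marked Suspect and no usable backup exists, the usual last resort is emergency-mode repair, which can discard damaged data; a sketch with a placeholder database name:

ALTER DATABASE MyDb SET EMERGENCY;
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);   -- may discard damaged pages
ALTER DATABASE MyDb SET MULTI_USER;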
The log file of a production database has grown from a 4 GB to a 150 GB initial size. Now I want to find out when it grew, by how much, and which transaction was responsible for this.
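Historical autogrow events (when, how much, and which login/application triggered them) can usually be pulled from the default trace while it still holds the history; a sketch:

SELECT t.StartTime, t.DatabaseName, t.FileName,
       t.IntegerData * 8 / 1024.0 AS growth_mb,   -- IntegerData = 8KB pages added
       t.LoginName, t.ApplicationName
FROM sys.traces st
CROSS APPLY sys.fn_trace_gettable(st.path, DEFAULT) t
WHERE st.is_default = 1
  AND t.EventClass IN (92, 93)                    -- 92 = data file, 93 = log file auto grow
ORDER BY t.StartTime DESC;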
I have a database; it is 50 GB with hundreds of columns. I would like to choose a certain column and export the data in it to a .csv or Excel file. How can I do that? I am very new to MSSQL...
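The simplest route is usually the bcp utility from a command prompt; a sketch with placeholder names, exporting one column in character mode with a comma terminator:

bcp "SELECT SomeColumn FROM MyDb.dbo.MyTable" queryout C:\temp\SomeColumn.csv -c -t, -S .\SQLEXPRESS -T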
I had to reinstall my local copy of SQL a few weeks ago, which naturally overwrote the
msdb.dbo.sysmanagement_shared_server_groups_internal and msdb.dbo.sysmanagement_shared_registered_servers_internal tables.
However, I still have the local XML file that SSMS reads, so I can still access the groups; I just get weird errors when trying to re-register my install as the new CMS. How can I rebuild those tables from the XML file, or is there a way to repopulate them?
We have one table where we store all documents in one of the columns, called "Doc", with the varbinary(max) data type.
We want to download those documents from the SQL table to Windows Explorer, and I wrote BCP for this in SQL 2005, and things were fine.
The format file I used there looks like this:

9.0
1
1       SQLBINARY       0       0       ""      1       Doc       ""
Now we are on 2014, and when I try the same code with the same format file, it hangs in the middle. So I changed the version in the format file to 12.0 instead of 9.0, but it is still not working.
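For comparison, a sketch of the kind of bcp call that format file is normally used with (server, database, table, and paths are placeholders):

bcp "SELECT Doc FROM MyDb.dbo.Documents WHERE DocID = 1" queryout D:\docs\doc1.pdf -S MyServer -T -f D:\docs\doc.fmt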
Is there a way to restore all filegroups except one? Example: database A has 10 filegroups, but one of them is defunct, so I can't delete it, and there is no backup to restore it from. Can I create a new DB by restoring the 9 good FGs from database A's backup?
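Yes, a piecemeal (partial) restore can bring selected filegroups back into a new database; a sketch with placeholder file and filegroup names. PRIMARY must always be included in the first restore sequence:

RESTORE DATABASE [A_copy]
    FILEGROUP = N'PRIMARY',
    FILEGROUP = N'FG1'                               -- ...list each of the 9 good filegroups
FROM DISK = N'D:\Backups\A_full.bak'
WITH PARTIAL, NORECOVERY,
     MOVE N'A_data' TO N'D:\Data\A_copy.mdf',        -- one MOVE per file being restored
     MOVE N'A_log'  TO N'D:\Data\A_copy_log.ldf';
-- apply any log backups here, then bring it online:
RESTORE DATABASE [A_copy] WITH RECOVERY;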
I have a Windows 8 PC that I just got, and I installed SQL Express 2014. My buddy has Windows 7 and installed SQL Express on his PC. We created a DB on his PC, did a backup, and copied the backup to my PC. In SSMS I right-click on "Databases" > Restore Database, click Device and the button to find my file. I navigate to the folder where the file shows in File Explorer, but the .bak file does not show in SSMS to restore from. This is probably a Windows thing, but I don't know what to look at.
When I execute DBCC SQLPERF(LOGSPACE); I get the following values.
Database Name    Log Size (MB)    Log Space Used (%)
master           16.17969         13.30275
tempdb           7.429688         61.7245
model            0.7421875        45.78947
msdb             5.554688         25.87904
distribution     2808.93          0.8172179
BANKDB           23438.87         48.20037
WSMIRSDB         109.7422         4.839111
For database BANKDB, Log Space Used (%) is 48.83% and the log size is about 23438.87 MB, whereas the data size of BANKDB is 60 GB. A FULL database and log backup is done once every night. My database is performing slowly now.
Do we need to take log backups more frequently, say once an hour, so that less log space is used? The same query is taking more time to execute than before in the same database; is that because the log file has grown?
I do an index reorganize and rebuild once a week, and stats are updated nightly.
Is it correct that once the log space used increases beyond 10% we need to take a log backup?
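With FULL recovery and only one log backup a night, the log holds a whole day of activity, so more frequent log backups will keep the used percentage down (the file itself won't shrink without DBCC SHRINKFILE). A sketch of what an hourly Agent job would run, with a placeholder path:

BACKUP LOG BANKDB
TO DISK = N'G:\LogBackups\BANKDB_log.trn'
WITH INIT;   -- in a real job, build a unique file name per run instead of INIT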
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states: “Use a single log file in most situations as log files are written sequentially.”
I have installed SQL 2014 (Evaluation Version) on a testing machine. We want to import some Excel files into a database. I manually created one test database and am now trying to import an Excel file. The import completes successfully, but I am not able to see any table created as a result of the import. I tried it 3-4 times and even restarted the SQL services, but no luck.
DECLARE @File_Exists INT
EXEC master.dbo.xp_fileexist '\\serverB\SQL_Backup\file.bak', @File_Exists OUT
PRINT @File_Exists
-- prints 1

And to check a folder you can append "nul", but that doesn't work for a UNC path:

DECLARE @File_Exists INT
EXEC master.dbo.xp_fileexist '\\serverB\SQL_Backup\nul', @File_Exists OUT
PRINT @File_Exists
-- prints 0

If I use xp_subdirs like:

EXEC master.dbo.xp_subdirs '\\serverB\SQL_Backups'

and the folder doesn't exist, I get: Msg 22006, Level 16, State 1, Line 3 - xp_subdirs could not access '\\ServerB\SQL_Backups\*.*': FindFirstFile() returned error 67, 'The network name cannot be found.'

How do I check whether a UNC folder exists before a backup? In my code I want to check that the UNC folder exists before doing the backup; the UNC path is retrieved from another table or from the backup history.
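One trick: xp_fileexist returns three columns (file exists / is a directory / parent directory exists) when you capture its result set rather than the output parameter, and this does work against UNC folders; a sketch:

DECLARE @r TABLE (FileExists int, IsDirectory int, ParentDirExists int);
INSERT INTO @r EXEC master.dbo.xp_fileexist N'\\serverB\SQL_Backups';
SELECT IsDirectory FROM @r;   -- 1 = the UNC folder exists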
I have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully, and running the command manually shows successful completion, but backups are still not removed.
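One thing worth testing manually is the undocumented procedure the cleanup task calls under the covers, xp_delete_file; the parameter list below is my best understanding, so treat it as an assumption, and note the extension must be given without a leading dot (a classic cause of silent non-deletion):

EXEC master.sys.xp_delete_file
    0,                              -- 0 = backup files
    N'E:\Backups\',                 -- folder (placeholder)
    N'bak',                         -- extension, no leading dot
    N'2015-04-20T00:00:00',         -- delete files older than this
    0;                              -- 0 = top folder only, 1 = include subfolders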
I have a full database backup up to the previous day and a transaction log file with today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential backup and no transaction log backup.
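Assuming today's log file can still be backed up as a tail-log backup, the standard sequence looks like this sketch (names and paths are placeholders):

-- If the damaged database is still attached and its .ldf survives, capture the tail of the log first
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_tail.trn' WITH NO_TRUNCATE;
-- Restore the full backup, leaving the database able to accept log restores
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY, REPLACE;
-- Apply the day's log and bring the database online
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_tail.trn' WITH RECOVERY;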
I am planning to delete a login from the SQL logins because he has moved off the project. When I try to delete the login, it throws an error saying "The server principal owns an endpoint and cannot be dropped" (error 15141).
I am facing the same problem on different servers.
Note: the environment is SQL 2012 and SQL 2008, including clustered servers.
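The usual fix is to find the endpoint the login owns and transfer its ownership (for example to sa) before dropping the login; a sketch with a placeholder login name (Hadr_endpoint and Mirroring are typical endpoint names):

-- Which endpoint does the departing login own?
SELECT e.name
FROM sys.endpoints e
JOIN sys.server_principals p ON p.principal_id = e.principal_id
WHERE p.name = N'DOMAIN\DepartedUser';
-- Transfer ownership, then drop the login
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;
DROP LOGIN [DOMAIN\DepartedUser];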
Recently, after turning on a trace, I restarted the SQL services on a box that is configured for automatic-failover availability groups. The AG did not fail over to the other node; the other node was in the Resolving state. When the restarted server came back, the AG went back to that server. I checked the failover property in sys.availability_groups; the failure condition level is set to 1, which means service restarts should initiate a failover.
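For reference, that setting can be inspected and changed like this (the AG name is a placeholder); note that automatic failover also requires synchronous commit and FAILOVER_MODE = AUTOMATIC on both replicas:

SELECT name, failure_condition_level
FROM sys.availability_groups;

ALTER AVAILABILITY GROUP [MyAG] SET (FAILURE_CONDITION_LEVEL = 3);   -- 3 is the default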
What I asked for: Three Windows Server 2012 R2 machines with independent storage running a SQL Server 2014 AlwaysOn Availability Group. DB1 would be the primary, DB2 would be a synchronous replica, and DB3 would be a remote asynchronous replica.
What I was given: a two-node Windows Server 2012 R2 WSFC to run SQL Server 2014 Enterprise with shared storage and a third (remote) Windows Server 2012 R2 machine with independent storage, also with SQL Server 2014 Enterprise, to host an AlwaysOn Availability Groups asynchronous replica.
DB1 and DB2 (as Cluster1) share an E: drive. The remote DB3 has its own E: drive. Initially, DB3’s E: drive was claimed as a cluster resource and I couldn’t even see it. I’ve had several ugly days trying to make this work and have temporarily given up, installing DB3 as a standalone SQL Server that is no longer part of the WSFC and pointing everything towards that (it was originally a third node in the WSFC).
Is it possible to create an AlwaysOn Availability Group with nested clusters (i.e. create the AOAG with Cluster1 and DB3 and somehow ignore the individual nodes that comprise Cluster1)?
I have a requirement to delete all the orphaned users in the databases. The issue I am having is that when a database principal owns a schema in the DB, the user can't be dropped.
How do I transfer the schema to dbo when I am looping over multiple databases? This is what I have so far:
declare @is_read_only nvarchar(200)
select @is_read_only = is_read_only from master.sys.databases where name = 'test' /* This should be a parameter value */
if @is_read_only = 0
begin
    declare @SQL as varchar(200)
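The piece that unblocks DROP USER is transferring any owned schemas to dbo first; a sketch for a single database with a placeholder user name, which the loop could wrap:

DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name) + N' TO dbo;' + CHAR(13)
FROM sys.schemas s
JOIN sys.database_principals p ON p.principal_id = s.principal_id
WHERE p.name = N'orphan_user';     -- placeholder user name
EXEC (@sql);
DROP USER [orphan_user];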
I have a user who is trying to log into the server, but every time he gets an error saying something about the Group Policy denying him access.
This user needs access, and I'm trying to understand how to grant it to him.
I have been looking into how I can access the Group Policy editor, but the farthest I can get is the Local Group Policy Editor. How do I make sure this specific user has access?
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across the primary file group when it contains multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Add another file called temp to PRIMARY, size 200MB
5) SHRINKFILE('FGTest', EMPTYFILE) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3, both 200MB
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3
8) SHRINKFILE('temp', EMPTYFILE) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
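Since the attached script isn't included here, a minimal sketch reconstructing the steps above (file paths are placeholders):

CREATE DATABASE FGTest ON PRIMARY
    (NAME = FGTest, FILENAME = N'C:\Data\FGTest.mdf', SIZE = 200MB);
GO
USE FGTest;
CREATE TABLE dbo.TEST (id int IDENTITY PRIMARY KEY, filler char(8000) NOT NULL DEFAULT 'x');
INSERT INTO dbo.TEST (filler)                     -- ~40MB at one row per 8KB page
SELECT TOP (5000) 'x' FROM sys.all_columns a CROSS JOIN sys.all_columns b;
ALTER DATABASE FGTest ADD FILE (NAME = temp,  FILENAME = N'C:\Data\temp.ndf',  SIZE = 200MB);
DBCC SHRINKFILE (N'FGTest', EMPTYFILE);           -- push the data into temp
ALTER DATABASE FGTest ADD FILE (NAME = DATA2, FILENAME = N'C:\Data\DATA2.ndf', SIZE = 200MB),
                               (NAME = DATA3, FILENAME = N'C:\Data\DATA3.ndf', SIZE = 200MB);
DBCC SHRINKFILE (N'temp', EMPTYFILE);             -- expected even spread; observe the skew
SELECT name, FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb FROM sys.database_files;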
I set up an availability group (only 2 servers, primary and secondary): .21 and .22.
I also defined a listener, IP .23.
1- In the first step I connected to the listener (.23), and in a loop I inserted records into a table:
WHILE 1 = 1
    INSERT INTO Tbl_T1 (f1, f2) VALUES (1, 2)
2- In the second step, I stopped the primary.
- I expected the loop to continue without disconnecting.
3- The loop stopped with this message:
Msg 64, Level 20, State 0, Line 0 A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
4- I executed the script again, and it worked on the new primary.
My questions:
1- Is the listener disconnected while switching between primary and secondary? Or is there data loss during the switch?
2- I did some huge updates on the primary that filled the log file space, and on the last update I got this error:
Msg 9002, Level 17, State 2, Line 27
The transaction log for database 'Your_DB' is full due to 'LOG_BACKUP'.
Is this (filling all the log space) a reason for the primary to fail over, or not?
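On question 2: a full transaction log on its own normally does not trigger an automatic failover; it just blocks further writes until the log can be truncated. In an AG the log also cannot clear until the records have been hardened on the secondary, in addition to needing log backups. A quick check, using 'Your_DB' from the error message (the backup path is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'Your_DB';    -- LOG_BACKUP means: take a log backup; AVAILABILITY_REPLICA means the secondary is behind
BACKUP LOG [Your_DB] TO DISK = N'D:\Backups\Your_DB_log.trn';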