Breaking Up SQL Server Backups Into Smaller Files
Apr 5, 2006
Hey guys,
I'm wondering how most people manage very large backups. What is the best approach to breaking up the backup files if you're restricted to a drive size (450 GB in my case)? In unix, you can pipe the backup to gzip and split; I'm not sure how the same thing could be accomplished in Windows.
Thanks,
-Kilka
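For reference, SQL Server can stripe a single backup across several files natively, which keeps each file under a drive-size limit; a minimal sketch (database name and paths hypothetical), noting that all of the files are needed to restore:

BACKUP DATABASE BigDB
TO DISK = N'E:\Backups\BigDB_1.bak',
   DISK = N'E:\Backups\BigDB_2.bak',
   DISK = N'F:\Backups\BigDB_3.bak'
WITH INIT, STATS = 10;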
View 2 Replies
Jan 10, 2015
I'm performing SQL Server native backups, and Commvault backups are breaking the backup chain. I can work around them if they only perform one backup per day. We have a 3rd party hosting our servers, so I do not have much control over what they do. My concern is being able to get my databases restored from the Commvault backups.
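A hedged way to see exactly what Commvault is taking (database name hypothetical): non-copy-only full backups reset the differential base, and any log backups it takes consume part of your log chain, so the history is worth inspecting.

SELECT database_name, type, is_copy_only, backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB'
ORDER BY backup_start_date DESC;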
View 2 Replies
View Related
Jan 9, 2015
I proposed on a new server that we separate data files, log files, tempDB, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5s, etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
View 2 Replies
View Related
Jul 23, 2005
To all,
How do I back up a database into a number of smaller files? For example, can I fully back up a 10 MB DB into 10 files (1 MB each)? The problem I've met is that the DB backup file is too large, over 4GB, and even WinZip can't compress it (after compressing, around 80% compression is possible).
Thanks!
From Jason (Kusanagihk)
View 11 Replies
View Related
Dec 1, 2014
Until yesterday I had a server running SQL Server 2008 R2 - with all the SQL Server DB files on an attached disk array.
The server died - so I attached the disk array to a new server - and all the DB data files are visible there.
I installed SQL Server 2014 on the new server and am trying to work out how to point it at the existing database files.
I also have backups of the DBs, but they will take ages to copy over and restore, so it would be much easier to just use the DB files. Should I restore the master DB first (easy, as it's small)?
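One possibility, assuming the files survived the move intact (paths and names hypothetical): attach the user databases directly on the 2014 instance, which upgrades them in place. The old master cannot be restored onto a newer version, so it is not a useful starting point here.

CREATE DATABASE MyUserDB
ON (FILENAME = N'E:\SQLData\MyUserDB.mdf'),
   (FILENAME = N'E:\SQLLogs\MyUserDB_log.ldf')
FOR ATTACH;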
View 9 Replies
View Related
Jul 20, 2005
Hi all,
I restored one backup database (7.9 GB mdf) on two different servers. I shrunk them by clicking "Move pages to beginning of file before shrinking". After shrinking, one mdf file was 6.7 GB, and the other was 4.2 GB. I shrunk again and again:
1. the 6.7 GB became 5.9 GB, 5.2 GB, 4.7 GB and 4.2 GB (four times)
2. the 4.2 GB became 4.0 GB (just one more time)
It is weird; I am wondering, will the mdf keep getting smaller and smaller if I continue to shrink them? What is the reason?
Thanks
WJ
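For reference, the Enterprise Manager checkbox corresponds roughly to the plain shrink below (database name hypothetical); each pass moves pages toward the front of the file and releases whatever it managed to free, which is why repeated runs can keep reclaiming a little more:

DBCC SHRINKDATABASE (N'MyDB');               -- move pages to the front, then release free space
DBCC SHRINKDATABASE (N'MyDB', TRUNCATEONLY); -- release trailing free space only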
View 1 Replies
View Related
Jul 20, 2005
Hello all. Before my arrival at my current employer, our consultants physically set up our MSSQL 7 server as follows:
drive c: contains the mssql engine
drive d: contains the transaction log
drive e: contains the data files
No filegroups were set up, and the data files consist of only 1 large physical file. Currently, our data file is >10GB. When I was trained on the physical aspects of SQL Server, I was told to never create physical files > 2048MB each. If I did, I could expect inefficient physical storage of data and slower performance (due to the OS).
Our server has 2 RAID-5 arrays. Drives c: and e: are located on the first array and drive d: on the second. We're running Windows NT 4.0 Server SP6 with NTFS.
Can someone comment on the use of 1 single large data file vs. more smaller data files?
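A hedged sketch of moving toward several smaller files (name, size and path hypothetical): files added to the PRIMARY filegroup are filled proportionally, and DBCC SHRINKFILE with the EMPTYFILE option can later migrate data off the original file if needed.

ALTER DATABASE MyDB
ADD FILE (
    NAME = N'MyDB_data2',
    FILENAME = N'E:\Data\MyDB_data2.ndf',
    SIZE = 2048MB,
    FILEGROWTH = 256MB
);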
View 2 Replies
View Related
Jun 26, 2001
Hello Group,
I have a SQL Server 2000 database that has never been backed up. The SQL Server database can't be logged into due to an SSL Security Error, thus I can't get to the backup utilities within Enterprise Manager.
What data files do I need to back up manually, and what steps do I have to take to back up these files to tape and rebuild the server?
Thanks
Kevin Kraus
View 3 Replies
View Related
Nov 21, 2005
I have a maintenance plan running on my database, in which I told the wizard, on creation, to "remove files older than 4 weeks", and yet it doesn't seem to be doing so; on checking this morning, disk space was getting low due to over 300GB of backups and .trn files dating back to September.
Has anyone had problems with maintenance plans not cleaning up when told?
a
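If this is a SQL Server 2005 maintenance plan, the cleanup task calls an extended procedure under the covers; a hedged sketch of running it directly to test the settings (path and cutoff date hypothetical):

-- arguments: 0 = backup files, folder, extension, cutoff date, 0 = no subfolders
EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups\', N'bak', N'2005-10-24T00:00:00', 0;

A common gotcha with the 2005 cleanup task is specifying the extension as '.bak' instead of 'bak', which silently matches nothing.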
View 4 Replies
View Related
Jan 29, 2008
My scenario -
-Dev server lost all its raid drives (which included user data/log files and ALL backups)
-Raid drives were removed from server, so we could try and recover the data (not sure why they had to remove them)
-Recreating the user DBs isn't an issue, but I REALLY need to retrieve the jobs from the MSDB (a developer was working on a DTS package for the last 2 months, but didn't check it into VSS)
-Since the System DB's (model, master, msdb) were installed on the C drive, I was able to recover their mdf/ldf files.
So we built a new server and recreated the user DBs.
How do I go about getting the jobs data from the msdb database?
I've tried the following -
Started SQL Server with -T3608 and was able to detach the MSDB database, but when I go to reattach using the mdf/ldf files from the other server I get the following error message:
Error 5172: The header for the file 'C:\mssql\data\msdbdata.mdf' is not a valid database file header. The PageAudit property is incorrect.
I've been doing some research and found a lot of great articles on how to RESTORE (from backup - don't have) or rebuild (wipes out the data I'm looking for) the MSDB, but not how to replace it with only the ldf/mdf files....
Any ideas would be greatly appreciated.
Thanks
PS - All backups are now being copied to another server (lesson learned)
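A hedged sketch of an alternative: instead of swapping msdb itself, attach the recovered files as an ordinary user database and copy the job definitions out of it (paths hypothetical; error 5172 may still mean the file itself is damaged, e.g. it was copied while the old service was running):

EXEC sp_attach_db @dbname = N'msdb_recovered',
    @filename1 = N'C:\mssql\data\msdbdata.mdf',
    @filename2 = N'C:\mssql\data\msdblog.ldf';
-- job definitions live in msdb-style tables such as sysjobs/sysjobsteps:
SELECT * FROM msdb_recovered.dbo.sysjobs;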
View 4 Replies
View Related
Jan 8, 2007
Michael writes "We are running SQL and Veritas to backup the databases. Supposedly the SQL agent in Veritas, after a full backup, truncates the log files but for some reason this isn't happening... any ideas?"
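For reference, a full backup by itself does not truncate the log; only a log backup (or an explicit truncate) does. A hedged sketch of truncating manually on SQL Server 2005 and earlier (database name hypothetical; this breaks the log backup chain, so take a full backup afterwards):

BACKUP LOG MyDB WITH TRUNCATE_ONLY;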
View 4 Replies
View Related
Jul 16, 2007
We have a SQL 2K5 10GB database that, as part of the recovery plan, gets a differential backup every six hours. Log file backups occur every hour, and a full backup is done every 24 hours. Over the weekend, the differential backup produced a 55GB backup file which caused us a lot of issues besides disk space usage (log backups couldn't finish, mirroring broke, etc.). This is also the max growth size that the log file is set to. There are no errors in the ERRORLOG, or in the job history. It's as if the backup was successful, which I assume it was, but the file was sparse.
I should mention that our full backup is typically 10GB, log file backups are typically 100 to 500MB, and the diff backup is generally 1GB to 3GB.
Has anyone experienced this issue before?
What causes it?
How do we resolve it?
Thank you in advance for your help,
Greg
View 8 Replies
View Related
Aug 29, 2006
Where in SQL Server 2005 (and 2000, for that matter) do I find the path to the location where backups are placed (.bak files)? Is there SQL I can use to find this out?
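The msdb history tables record where every backup was written, which answers the "is there SQL" part:

SELECT TOP 20 bs.database_name, bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;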
View 6 Replies
View Related
Nov 19, 2007
Hi,
I am cleaning database backup history with "Delete history if 4 months old" for backup files, and "Delete history if 1 week old" for log file backups.
Let's say I have a disaster after 3 months, and I recover the database with data and log backup files. Will I be able to recover with a 3-month-old backup file and just 1 week of log file backups? I am keeping my log backups for only 1 week.
Thanks,
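For what it's worth, those settings prune msdb history only; they do not delete the .bak/.trn files themselves, and a restore needs the files, not the history. To roll forward from a 3-month-old full you would also need every log (or differential) backup taken since it. A minimal sketch of what the history cleanup amounts to:

DECLARE @cutoff datetime;
SET @cutoff = DATEADD(mm, -4, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;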
View 5 Replies
View Related
May 29, 2008
We are using the Transfer SQL Server Objects Task to transfer a large table. The trans log is filling up for this table. Is there a method to split the Data Transfer Task into smaller batches? (Smaller tables are transferring without issue.)
Thanks.
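If the copy can be done in T-SQL instead of the transfer task, a minimal sketch of a keyed batch loop keeps each transaction's log impact bounded (all names hypothetical, assuming an ID key):

DECLARE @batch int;
SET @batch = 50000;
WHILE 1 = 1
BEGIN
    -- copy the next batch of rows that are not yet in the target
    INSERT INTO TargetDB.dbo.BigTable (ID, Col1, Col2)
    SELECT TOP (@batch) s.ID, s.Col1, s.Col2
    FROM SourceDB.dbo.BigTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM TargetDB.dbo.BigTable AS t WHERE t.ID = s.ID)
    ORDER BY s.ID;
    IF @@ROWCOUNT < @batch BREAK;
END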
View 2 Replies
View Related
Aug 4, 2014
We are running SQL Server 2012 on Windows Server 2008. Just wondering what type of actions would break the backup-chain or backup sequence? For instance, if you have tlog backups being taken every 10 minutes and you stop the SQL Server Service for say 30 minutes. Would this action break the backup chain? Or would everything return to normal once the SQL Server Service is restarted?
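For reference, restarting the SQL Server service does not break the chain; log backups resume normally afterwards. Switching the database to SIMPLE recovery is what breaks the log chain. A hedged way to verify a chain (database name hypothetical): in an unbroken chain each log backup's first_lsn equals the previous one's last_lsn.

SELECT backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB' AND type = 'L'
ORDER BY backup_start_date;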
View 7 Replies
View Related
Nov 15, 2015
The space allocated to the Log in question is 180 GB. During this time period I was running TLog backups every 5 minutes, yet the log continued to chew through to 80 GB used, even after the process was complete and a final TLog backup had been taken. It continued to stay very large until the Full backup was complete -- or something else that I'm unaware of completed. Like every other DBA I typically take a TLog backup to shrink the log, but what appeared to be the case here was the Full completed and it released the used log space. All said, will Transaction Log backups not free up the log during Full backups?
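For what it's worth, log backups do keep running during a full backup, but log space the full still needs cannot be reused until it finishes, which matches the behavior described. A hedged way to see what is pinning the log while it happens (database name hypothetical):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';   -- shows ACTIVE_BACKUP_OR_RESTORE during the full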
View 3 Replies
View Related
Mar 27, 2000
I have a situation where I need to migrate data from an older platform to a newer one. The data from the old system(s) will be available on DAT tapes. All database construction on the new system will be identical to the old one in size and schema, except for one table (call it "ARCHIVE").
If the ARCHIVE table on the old system is 210MB, and the ARCHIVE table on the new system has the same attributes but has been expanded to 380MB in size, can I simply restore the dump for the old table into the new ARCHIVE?
Empirically it works (I have done it with apparent success two times) but I seem to recall that backups are done by pages, and I'm concerned that there may be conditions not being met by simply doing the restore the way I'm planning to do it.
Also, are there any tests or checks built into SQL which I can use to check table integrity on the target ARCHIVE table after the restore?
Any help is greatly appreciated.
Best rgds,
Kevin
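SQL Server does ship built-in integrity checks that can be run against the target after the restore (database name hypothetical; the table name is from the post):

DBCC CHECKTABLE ('ARCHIVE');
-- or check the whole database:
DBCC CHECKDB ('MyDB');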
View 4 Replies
View Related
Aug 28, 2000
I have a database in a 2 GB .mdf and a 1 GB .ldf. The backup is much smaller. I need to copy this database to another server which does not have that much free space. Can this be restored to a smaller .mdf and .ldf? How?
Thanks.
Ranjit
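For reference, RESTORE recreates the data and log files at the sizes recorded in the backup, regardless of how small the backup file is. A hedged first step is to see what the backup will try to create (path hypothetical), then shrink the files after restoring:

RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDB.bak';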
View 1 Replies
View Related
Dec 9, 2005
Is it generally or almost always better to have multiple small SPs and functions to return a result set instead of using a single big 1000+ line SP?

I have one SP, for example, that is 1000+ lines, and on early analysis of the SP I see it first has 3 big blocks of code separated by IF statements. Then within each IF block of code I see 3-4 UNIONs. UNIONs mean they are all returning the same columns, so I am guessing these are prime candidates for becoming individual functions or SPs, maybe even dynamic SPs. Obviously I am not showing you the code, but am I right to think this way? This same SP has about 15 JOINs, including some LEFT JOINs and one LEFT JOIN to a (SELECT statement), and almost all the tables referenced by these JOINs have thousands of records, very possibly hundreds of thousands.

The SELECT statement is returning 30-40 columns from a lot of these tables, plus I also see a lot of CASE ELSE statements within the main SELECT statement. The code of each CASE statement is calling a function. As an example, if the CASE is for EmployeeID then a function is being called to get the EmployeeID's FirstName and LastName. If the CASE is for CustomerID then another function is being called to get the Customer Name.

I am thinking to cut this big SP into many smaller SPs and/or functions, and I also plan on using table variable(s) to hold temporary results while I continue processing the records from the table variable with other code logic. Also, I want to leave as the last thing to do the conversion of the "machine result", i.e. EmployeeID or CustomerID, to a "human readable result", i.e. Employee FirstName and LastName, Customer Name.

I am trying to test this on Northwind's Employees table, but Statistics IO, Time and the Execution Plan are something I've only started to use. I am unable to reach a conclusion about which method is better. I'll work on posting another post specifically with details of this test that I am currently doing.

My opinion is that having 1 single SP with 15+ joins causes a lot more locking than if I ran smaller SPs, stored the results into temp table variables and continued processing the remaining code logic.

I would like to know what you think, and whether I am right or wrong on how I want to optimize this SP?

Thank you
View 5 Replies
View Related
Sep 4, 2007
Hello there....
I have a scenario where I am trying to set up multiple database instances for multiple test/development environment(s) for each group where the test/dev environment will contain a copy of what was in the production environment.. The test/dev environment can be refreshed on demand based on the prior night's full backup of the production environment.. This is good for our web developers and for training purposes, as the test environment(s) can be played around with, and will retain data for as long as the developers/testers/trainers need it, and then can be refreshed to the most current data when everyone in the group decides they want it refreshed...
Normally, this works out well...
However I am having a file size issue...
The production database was pre-allocated (a long time ago) to a large file size (probably to reduce external fragmentation).... So even though the backup file is only 5GB, the production database file itself is something like 40GB... I believe the production database already has a maintenance plan on it that rebuilds the indexes each weekend, etc...
Anyhow, the problem is that when I restore the 5GB database back into a newly created database file, the file expands all the way up to 40GB again, even though the backup file is 5GB...
Normally this would be fine, the problem is that I am trying to create multiple environments, and I do not have the disk space on my test/dev server for 40GB (plus another 15GB or so for the transaction log) multiplied by each of my test/dev environments... It would be much nicer if I could get this down to 5GB (or heck, even 10GB), since I know for sure that the total amount of data in the database doesn't exceed 5GB, and I have plenty of space on my disk for 5 (or 10) GB multiplied by each of the environments I want to create...
I have tried DBCC SHRINKDB and I have tried DBCC SHRINKFILE with the truncate after the restore, which seems to work but doesn't....
I have also tried to go into the database properties and change the "initial size", but that doesn't do anything either.
Is there any way to get this file back down to a manageable size after the restore??
Or better yet, is there a special method to restore the database so it won't 'expand' back out to 40GB in the first place??? Perhaps some option to tell the restore process that even though the source database had a 40GB pre-allocation, the database I am restoring into doesn't need to be pre-allocated??
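One thing that may be worth a try, hedged since SHRINKFILE has already been attempted: giving DBCC SHRINKFILE an explicit target size makes it move pages and then release the space, whereas the TRUNCATEONLY option only trims unused space at the end of the file (logical file name and target hypothetical):

DBCC SHRINKFILE (N'MyDB_Data', 6144);  -- target size in MB (~6 GB)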
View 1 Replies
View Related
Jul 19, 2007
Alright. I'm stuck. I admit it!
I have a bunch of names, and each name can have one or more 'roles' (operator, reader, key operator, etc. Just random words really.) attached to it.
Using reporting services, I've managed to get the information I need with relative ease... the only problem is, with 900-some records to display, its current length of 41 pages with just one column going down the left side of each page is not exactly preferred by my superior (can't say I blame him really. Looks kind of odd!)
It looks like this right now:
Name1
Function
Function
Function
Name2
Function
Function
Name3
Function
Function
etc all the way down to page 41
I need it to look something like this:
Name 1 Name 4 Name 7
Function Function Function
Name 2 Name 5 Function
Function Function Name 8
Function Function Function
Name 3 Name 6 Function
Function Function Function
etc. Or some variation of...
I've fiddled around, and adding one extra column to the initial table layout with the same =(!UserName etc) just replicates the data in the second column... not giving me the new stuff.
I'm quite new to reporting services, but none of the tutorials I've seen/done seem to accommodate this... Heeelp!
View 3 Replies
View Related
Oct 24, 2005
I'm having problems handling a very large number of user records - about 100,000 to 150,000. Instead of selecting all of them at a time, how do I select, for example, 1000 of them? (e.g. get no. 1 - no. 1000, then get no. 1001 - no. 2000)
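On SQL Server 2005 or later, ROW_NUMBER() supports this kind of paging; a minimal sketch (table, key and range hypothetical):

WITH numbered AS (
    SELECT ROW_NUMBER() OVER (ORDER BY UserID) AS rn, *
    FROM dbo.Users
)
SELECT *
FROM numbered
WHERE rn BETWEEN 1001 AND 2000;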
View 1 Replies
View Related
Apr 16, 2002
Hi all,
I found a database file and a log file over 2 GB on an MSSQL 2000 server. Actually, they only need around 200 MB. I tried backing up and truncating the database to make the size smaller, but the size does not get smaller. How can I do it?
Simon
View 3 Replies
View Related
Aug 20, 2004
I have a Database A in production with 12GB as data file and 8 GB as log file. How do I restore this db in Development with a smaller log file, say 1GB?
I can't shrink the log file or anything in production. What is the best way to restore in Dev with a smaller log file?
Thanks.
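For reference, a restore cannot create the log smaller than the source's log file, but it can be shrunk immediately afterwards; a hedged sketch (names and paths hypothetical):

RESTORE DATABASE DevDB
FROM DISK = N'D:\Backups\ProdDB.bak'
WITH MOVE N'ProdDB_Data' TO N'E:\Dev\DevDB.mdf',
     MOVE N'ProdDB_Log' TO N'E:\Dev\DevDB_log.ldf',
     REPLACE;
DBCC SHRINKFILE (N'ProdDB_Log', 1024);  -- target ~1 GB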
View 4 Replies
View Related
Sep 3, 2007
Hi gurus,
I've created a linked server (and set up the corresponding schema.ini file) in order to perform bulk-inserts from some CSV text files into SQL tables (from my standpoint the text files are just for reading purposes). The linked server works fine (I can select the data in the files without a problem).
Now the question: is it possible to automatically detect when one or more of those files change, in order to start the import process automatically? Something like having a trigger created on the CSV files. Or is there no easy way to do that, so I have to, say, create a job that periodically checks whether the files have changed programmatically (say, recording each file's timestamp every time it is imported and comparing the recorded value with the current one, or whatever)?
Thanks a lot in advance!
View 1 Replies
View Related
Nov 1, 2004
Hello,
I'm looking at using full-text indexing for tables to query. I have some smaller fields (varchar(50) columns that store names) that I was contemplating using full-text indexing for. I was just curious if it is worth it?
Basically the data that will be there are one-word names, without any spaces or whatnot.
Brian
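For one-word names, a hedged alternative to full-text: a regular index already supports the common searches, since a leading-prefix LIKE can seek it (table and column names hypothetical):

CREATE INDEX IX_People_Name ON dbo.People (Name);

SELECT Name
FROM dbo.People
WHERE Name LIKE N'Smi%';  -- leading-prefix LIKE can use the index

Full-text tends to pay off for word searches inside longer text, not single-word columns.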
View 3 Replies
View Related
May 29, 2014
However, when I start SQL 2012 it loads Management Studio but does not load the smaller splash screen that usually appears asking me to connect to a server. When I try to click any of the menu items at the top of the screen, the system just hangs.
I also have 2012 Service Pack 1 installed. My installations of 2005 and 2008 R2 still work fine. I also tried loading SQL 2014 and had the same issues as with 2012.
View 0 Replies
View Related
Apr 9, 2008
Hi,
I have a backup device file ... ".bck" which has grown pretty large. Is there any way I can reduce its size?
Thanks
David Greenberg
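A likely cause, hedged: backups written to a backup device append a new backup set each time by default (NOINIT), so the file keeps growing. Overwriting instead keeps it at the size of one backup (database and device names hypothetical):

BACKUP DATABASE MyDB TO MyBackupDevice WITH INIT;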
View 4 Replies
View Related
Aug 9, 2000
I currently have a Name column in one of my tables that is in the following format:
Name
Doe, John P
and I need to break this down into three separate fields in a different table:
Last_Name First_Name Middle_Name
Doe John P
Could someone please help me with this issue?
Thanks,
Peter
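A minimal sketch, assuming the format is always 'Last, First Middle' with the middle part optional (source table name hypothetical):

SELECT
    LEFT(x.Name, CHARINDEX(',', x.Name) - 1) AS Last_Name,          -- text before the comma
    LEFT(x.Rest, CHARINDEX(' ', x.Rest + ' ') - 1) AS First_Name,   -- first word after it
    LTRIM(SUBSTRING(x.Rest, CHARINDEX(' ', x.Rest + ' ') + 1, LEN(x.Rest))) AS Middle_Name
FROM (
    SELECT Name,
           LTRIM(SUBSTRING(Name, CHARINDEX(',', Name) + 1, LEN(Name))) AS Rest
    FROM dbo.SourceNames
) AS x;

The trailing space appended in CHARINDEX(' ', x.Rest + ' ') makes names without a middle part return an empty Middle_Name instead of failing.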
View 1 Replies
View Related
Jul 17, 2007
I've got a table that has over 500,000 rows in it. Now I need to convert the whole thing into Excel to import into another application, so I need to break the table into 10 different tables. How can I do that?
Thanks
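A hedged sketch using NTILE to split the rows into 10 roughly equal buckets (table and column names hypothetical, assuming an ID key):

SELECT NTILE(10) OVER (ORDER BY t.ID) AS Bucket, t.*
INTO dbo.BigTable_Staged
FROM dbo.BigTable AS t;

-- then extract each part, for example:
SELECT * INTO dbo.BigTable_Part1 FROM dbo.BigTable_Staged WHERE Bucket = 1;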
View 11 Replies
View Related
Jul 23, 2005
I hope I can get this across clearly.
I have a table that needs to be broken into 3 tables.
Col1 Col2 Col3 Col4 Col5 Col6 Col7
Col1 and Col2 need to go into LookupTable1
Col3 and Col4 into LookupTable2
If Col5 is twice the width.... haha just kidding... so Col5 and Col6 go into LookupTable3
There is a 4th table which is made up of foreign keys which are the PKs of LookupTable1, 2 and 3.
My question is, how do I get the data from the columns of each row and add it to its respective lookup table, and sequentially step through the table to repeat the above step until I've processed each row? A set-based approach is sketched below.
thanks folks
T.B
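A set-based sketch, rather than stepping through row by row (all table and column names hypothetical, assuming the lookup tables have identity keys named ID):

INSERT INTO LookupTable1 (Col1, Col2)
SELECT DISTINCT Col1, Col2 FROM dbo.SourceTable;

INSERT INTO LookupTable2 (Col3, Col4)
SELECT DISTINCT Col3, Col4 FROM dbo.SourceTable;

INSERT INTO LookupTable3 (Col5, Col6)
SELECT DISTINCT Col5, Col6 FROM dbo.SourceTable;

-- build the 4th table by joining each source row back to the keys it produced:
INSERT INTO FactTable (LT1_ID, LT2_ID, LT3_ID)
SELECT l1.ID, l2.ID, l3.ID
FROM dbo.SourceTable AS s
JOIN LookupTable1 AS l1 ON l1.Col1 = s.Col1 AND l1.Col2 = s.Col2
JOIN LookupTable2 AS l2 ON l2.Col3 = s.Col3 AND l2.Col4 = s.Col4
JOIN LookupTable3 AS l3 ON l3.Col5 = s.Col5 AND l3.Col6 = s.Col6;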
View 3 Replies
View Related
Nov 2, 2007
What is the best way to do a page break? I am having trouble. I am building a form-based report and need to always break after the last field.
View 3 Replies
View Related