I just completed a copy-only compressed backup of a database (with the FULL recovery model) on SQL Server 2012, and the resulting backup (the .bak file) is about 1/100th the size of the data and log files. Is the compression in SQL Server 2012 just that good, or did something else happen that I did not catch? Below is the T-SQL to re-create the backup. The data file is 750 MB and the log file is 75 GB, which is 95% used according to DBCC SQLPERF.
Is the compression in SQL Server 2012 simply that good?
BACKUP DATABASE [MYBIGOLEDB] TO DISK = N'Z:\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\MYBIGOLEDB_20150611.bak' WITH COPY_ONLY, NOFORMAT, INIT, NAME = N'MYBIGOLEDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO
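One way to sanity-check whether compression alone explains the size is to compare the raw and compressed sizes recorded in the backup history; a query along these lines (using the database name from the script above) should show both numbers:

SELECT TOP (1)
    backup_size / 1048576.0 AS backup_size_mb,            -- size before compression
    compressed_backup_size / 1048576.0 AS compressed_mb    -- size actually written to disk
FROM msdb.dbo.backupset
WHERE database_name = N'MYBIGOLEDB'
ORDER BY backup_finish_date DESC;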
We have just implemented a SQL 2012 AlwaysOn environment. We have a primary and a secondary server. I am confused about how to set up the backup plans. The application team was happy to tell me that with SQL 2012 AlwaysOn we can offload the backups to the secondary, thus reducing overhead on the primary server.
However, the secondary only supports copy-only full backups. I am unsure how these would be useful in a disaster event, since I could not apply any transaction log backups on top of a copy-only backup. Does this mean I need to run my full backups on the primary server?
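For reference, the kind of copy-only backup I would schedule on the secondary looks roughly like this (the database name and path are placeholders):

BACKUP DATABASE [MyAGDatabase]
TO DISK = N'\\BackupShare\MyAGDatabase_Full_CopyOnly.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;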
When your snapshot is set to be delivered via FTP and compressed into a CAB file, adding a new article to your publication and re-running the snapshot leaves the agent unable to pull the snapshot down, because for some reason it does not think the snapshot is compressed. It fails to find the scripts it needs inside the CAB file despite the CAB file existing in the correct location.
Here is the error.
2007-07-19 09:57:29.855 Snapshot files will be downloaded via ftp
2007-07-19 09:57:29.886 Connecting to ftp site 'SQL3'
2007-07-19 09:57:29.933 The schema script 'empActive_127.sch' could not be propagated to the subscriber.
2007-07-19 09:57:29.933 Category:NULL Source: Merge Replication Provider Number: -2147201001 Message: The schema script 'empActive_127.sch' could not be propagated to the subscriber.
2007-07-19 09:57:29.933 Category:AGENT Source: SQL2SQL2005 Number: 20033 Message: The process could not retrieve file 'SQL3_CCUSA_ATLAS_SYSTEM TABLES/20070719055712/empActive_127.sch' from the FTP site 'SQL3'.
2007-07-19 09:57:29.949 Category:OS Source: Number: 12003 Message: 200 Type set to I. 200 PORT command successful. 550 SQL3_CCUSA_ATLAS_SYSTEM TABLES/20070719055712/empActive_127.sch: The system cannot find the file specified. 550 SQL3_CCUSA_ATLAS_SYSTEM TABLES/20070719055712/empActive_127.sch: The system cannot find
My source data is in an XML file that is stored in a CLOB column in Oracle, and the CLOB column is compressed. I need to migrate the data to SQL Server 2012, uncompressing the XML along the way.
Do I need to define an XML column in SQL Server 2012 for storing the uncompressed CLOB values?
How do I uncompress the CLOB and extract the required data from the XML using SSIS?
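On the SQL Server side, what I have in mind is a staging table along these lines (the table and column names are just examples), with a native XML column to hold the uncompressed document:

CREATE TABLE dbo.StagingXml
(
    SourceId   INT NOT NULL,   -- key carried over from the Oracle source
    XmlPayload XML NULL        -- uncompressed CLOB contents would land here
);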
I'm having difficulties copying a production DB to a new computer using backup files. The production computer had tempdb on the D: drive; the new computer is much smaller and only has a C: drive. I've successfully restored the master DB backup, but now the instance will only start with the -f parameter. I know how to ALTER the database to move tempdb, but I can't get the instance to start while tempdb is pointed at the D: drive.
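For reference, the move I am trying to make once the instance is up under -f is roughly this (the C: path is a placeholder, and I am assuming the default logical file names tempdev and templog):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\SQLData\templog.ldf');
-- restart the instance afterwards so tempdb is recreated in the new location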
Hi, I back up SQL Server 2000 and SQL Server 2005 databases to hard disk using the SQL Server Backup Wizard and maintenance plans. Then I copy the resulting backups to tape using third-party tape backup software, with compression performed by the backup software and hardware. I do not use the SQL Server agent available for the third-party backup software. Is this acceptable, or does the compression performed by the third-party backup system introduce opportunities for database corruption or other negative effects?
On one of our SQL Server 2014 boxes each database has a copy-only full backup made every night, in addition to the maintenance plan schedule of a weekly full backup, daily differential backups, and log backups.
When performing a point-in-time restore in SSMS, the restore file list shows the most recent copy-only backup as the full backup to use, not the most recent full backup from the maintenance plan. I noticed that using SSMS 2008 to start a point-in-time restore on the 2014 box does not have this problem and lists the correct restore file sequence (it ignores the copy-only backups).
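As a workaround, the correct (non-copy-only) full backup can be found with a query along these lines against the backup history (the database name is a placeholder):

SELECT TOP (1) bs.database_name, bs.backup_finish_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'MyDatabase'
  AND bs.type = 'D'            -- full database backups only
  AND bs.is_copy_only = 0      -- exclude the nightly copy-only backups
ORDER BY bs.backup_finish_date DESC;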
I have inherited a new SQL Server 2008 database server and cannot figure out how my user databases are being backed up. This database server is running under a VM.
All the user databases are being backed up nightly per the SQL Server log. The backups are written to a virtual disk and are kicked off by the NT AUTHORITY\SYSTEM user. I cannot see the virtual disk. A restore task does not provide any information about the last backup. I have created a new database, and it is automatically included in the next set of backups.
I have looked at the Windows Event Viewer without any luck. There are no SQL Server maintenance plans or Agent jobs that call a backup. I have also checked the Windows Task Scheduler and cannot find any task that does a backup. Could the backups be called from another server?
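One check that might narrow this down is a query along these lines against the backup history, which shows where each backup is written and the device type used (a device_type of 7, or a GUID-style device name, usually points at a VSS/VM snapshot or third-party tool rather than a plain disk backup):

SELECT TOP (20) bs.database_name, bs.backup_finish_date, bs.type,
       bmf.device_type, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_finish_date DESC;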
I am using Ola Hallengren's scripts to do backups. He uses @CleanupTime to delete backups older than a certain number of hours. My situation is that I do a full backup of a database on Sunday, then a few differentials, and then log backups for the rest of the week. When Sunday rolls around again and my full backup is finished, I would like to delete all the differential and log backups. Is there any way I could accomplish this using Ola's scripts?
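For context, the kind of call I am making now is roughly this (the directory is a placeholder); @CleanupTime only does an age-based delete, which is why I am asking about cleaning up by backup type instead:

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'\\BackupShare\SQL',
    @BackupType = 'FULL',
    @CleanupTime = 168;   -- delete backup files older than 168 hours (one week)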
I have tried doing SQL Server backups to a file share and that has been taking too long, so I've decided to back up locally and then move those backups off the server. For those of you who are doing this, what do you use to get your backups off the server?
I am sure I have seen in the past, in a monitoring tool, that PLE drops to 0 whenever we do a backup. I was doing some reading around this, however, and found something that said backups use a different portion of memory, external to the buffer pool (the min/max memory settings).
Is this correct and how can I tell how much memory will be required for a backup?
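For what it's worth, the counter I am watching during backups is just the standard one exposed through the DMVs, along these lines:

SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';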
So I'm testing some things on our new servers and was trying to restore a database from some striped backup sets. We have 4 files for our backups, and restoring the FULL backups with NORECOVERY worked beautifully via SSMS. But when I tried to restore the differentials (also striped across 4 files), the GUI gave me this error:
Unable to create restore plan due to break in the LSN chain.
While trying to work out where the break happened, I came across an article describing this as an SSMS 2012 bug.
So I followed the advice in that article and attempted a restore via Files and Filegroups, and ended up with the error below:
EDIT: Picture is attached if it is not showing in post.
I was able to restore via T-SQL, but I want to also be able to restore through the GUI.
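The T-SQL that worked for the striped differential looked roughly like this (the database and file names are placeholders):

RESTORE DATABASE [MyDatabase]
FROM DISK = N'D:\Backups\MyDatabase_DIFF_1.bak',
     DISK = N'D:\Backups\MyDatabase_DIFF_2.bak',
     DISK = N'D:\Backups\MyDatabase_DIFF_3.bak',
     DISK = N'D:\Backups\MyDatabase_DIFF_4.bak'
WITH NORECOVERY, STATS = 10;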
We have a SQL 2012 server instance with log shipping set up to another SQL 2012 server to provide a warm standby for a forward-facing application. The databases on the primary server occasionally need to be backed up and restored to a development environment on a completely different server. Is there a way to schedule full backups with log shipping enabled?
Native backups to NAS do not complete. We have been experiencing an issue whereby our native backups hang with a status of SUSPENDED/RUNNABLE. I ran select * from sys.sysprocesses; all of the backup SPIDs show the BACKUPTHREAD command with a PREEMPTIVE_OS_FILEOPS wait.
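The check I ran was essentially this (sys.dm_exec_requests gives the same picture as sys.sysprocesses with a little more detail):

SELECT session_id, command, status, wait_type, wait_time, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';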
This first occurred last Wednesday evening. When I discovered it on Thursday, I attempted to kill the backup jobs; this also hung at 0% completed / 0% time remaining. Backups hung on more than one instance. That evening I attempted to restart the instance, which also failed with something along the lines of: could not start, master file in use.
I then restarted the server--which I really did not want to do--and this cleared it. I was also able to manually kick off maintenance plans (DBCC CHECKDB and full backup) without issue. I was off Friday and the weekend. I came in this morning and found that the maintenance plans (diff/t-log backups) did not complete on some of the instances--in one case, an instance affected now was not affected before. They appear to have hung at their next scheduled kickoff, which was later that night after I went home.
Remembering the "file in use" error, I ran Process Monitor to see if anything unusual had a lock on any files. I saw only SQL Server and Double-Take processes accessing the log files. Being a relatively new DBA, I am unsure where to go next in trying to track down the cause of this issue. This is fairly urgent, as one of the instances that has had this problem both times is our production SharePoint environment.
ENVIRONMENT:
SQL version: Microsoft SQL Server 2012 (SP1) - 11.0.3368.0 (X64) May 22 2013 17:10:44 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
I am planning to implement the following backup scheme for one of our databases, with the FULL recovery model enabled:
1- Create a full backup nightly
2- Create a transaction log backup every 25 minutes
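Roughly, the two scheduled commands would look like this (the database name and paths are placeholders; in practice the file names would include a timestamp):

-- nightly full backup
BACKUP DATABASE [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_Full.bak' WITH COMPRESSION, STATS = 10;
-- transaction log backup every 25 minutes
BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_Log.trn' WITH COMPRESSION;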
As I am taking a full backup every night, I think I can remove the transaction log backups at the time of the full backup, since we can apply transaction log backups over a full backup. My question is regarding the removal of transaction log backups.
- Should I remove all transaction log backups and then execute the full backup?
- Should I execute the full backup and then remove all transaction log backups older than 24 hours?
- Do I have to consider the LSN or related info before deleting any transaction log backup?
Up until a couple of days ago all my maintenance plans were working.
Now they are all failing with the following errors -
Message Executed as user: MHS2sql2008. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 10:07:02 Progress: 2014-05-29 10:07:03.20 Source: {CE851AA2-045F-429A-885E-E60E75A38639} Executing query "DECLARE @Guid UNIQUEIDENTIFIER
[Code] ....
The package execution failed. The step failed. I have now changed the SQL Server Agent service account to use a network account called "sql2008", and stopped and started the Agent service.
The account has access to the folders it is trying to back up to, and it is a full admin on the SQL box and on the box I am trying to back up to.
We had our backups going to the server where the databases reside. I have now modified the backups to go to a file share, but when we try to restore from the file share the restore fails, so we have to copy the backup to a drive on the server and restore from there. Should I be able to restore directly from the file share (using the GUI)? Do I need to change something else to modify the default backup drive?
I am setting up Availability Groups and I want to use the secondary replica to perform the full copy_only backups to reduce the load on the primary replica. But what is the best way to check for successful full backups on Availability Group databases?
Previously I could check the system table msdb.dbo.backupset, but this is not available for copy_only backups. So I wonder how people are monitoring that their full backups have been successful?
Do you just check that the SQL Agent job that runs the backup was successful?
Or do you search the SQL Server Error Log for entries like "Database backed up. Database: xxx" where database xxx is in an Availability Group?
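At the moment the only alternative I can think of is a check along these lines against the error log (xp_readerrorlog is undocumented, so treat this as a rough sketch; the database name is a placeholder):

EXEC master.dbo.xp_readerrorlog 0, 1, N'Database backed up', N'MyAGDatabase';
-- parameters: current log (0), SQL Server error log (1), then two search strings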
Can we back up our clustered databases directly to tape using native backups (without using any third-party tool)? It is a SQL Server 2012 two-node active/passive cluster. One of the databases will be huge, hence I am checking whether we can back up directly from the clustered instance to tape.
Hello, I have a table of compressed data and am looking for an efficient way to expand the data for reporting purposes. The table stores the number of hours a given contractor works, in the following fashion:
cnt  Hours
5    8
2    0
5    8
2    0
Each row represents a run of sequential days on which an employee worked the same number of hours; once the number of hours changes, a new record is created. In this simple example, the first row shows an employee working Monday to Friday (5 days) at 8 hours each day. The second row represents the weekend (2 days), where the employee worked 0 hours.
What I need to do is explode this out to show one record per day. Ideally I'd like to write a function to do this, as I would be linking to another table that has the start and end dates for the contractor, which would let me apply individual dates to each record based on the contractor's start date through to the end date.
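A minimal sketch of the expansion I am after, assuming a hypothetical source table shaped like the example above (dates left out for simplicity), using a numbers/tally approach:

-- assumed source: dbo.ContractorHours (RowId INT IDENTITY, cnt INT, Hours INT)
WITH Numbers AS
(
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects
)
SELECT ch.RowId, n.n AS DayWithinRun, ch.Hours
FROM dbo.ContractorHours AS ch
JOIN Numbers AS n ON n.n <= ch.cnt   -- one output row per day in the run
ORDER BY ch.RowId, n.n;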
I am using SQL 2005 merge replication and have pull subscribers on a low-bandwidth link.
I am compressing the snapshot into an alternate folder; files are not put into the default folder.
When I start a synchronization, I would expect the cab file to be copied to the subscriber and then the files to be extracted locally at the subscriber in order to apply the snapshot
However, what appears to be happening is that the files are being extracted from the cab file on the publisher (in a UNC specified directory) and then copied in their uncompressed form to the subscriber - resulting in an extremely slow snapshot application.
Any ideas what I am doing wrong? I have read about the options for using FTP to transfer snapshot files, but I am not clear whether I have to use FTP in order to transmit a compressed snapshot. I don't want to use FTP unless I need to.
The space allocated to the log in question is 180 GB. During this time period I was running T-log backups every 5 minutes, yet the log continued to grow to 80 GB used, even after the process was complete and a final T-log backup had been taken. It stayed very large until the full backup was complete -- or until something else that I'm unaware of completed. Like every other DBA, I typically take a T-log backup to free up the log, but what appeared to happen here was that the full backup completed and then the used log space was released. All said, will transaction log backups not free up the log during full backups?
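Next time this happens I plan to check what is actually holding the log open with something along these lines (the database name is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';
-- ACTIVE_BACKUP_OR_RESTORE here would indicate the running full backup is holding the log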
I have a server in our central location which is a compressed-snapshot publisher. I have two push subscribers in remote locations on very slow WAN links. I would like the snapshot cabinet file to be uncompressed at the subscriber location rather than the publisher location. Is this possible with push subscribers? I want to manage the pushing of data to the remote subscribers from the publisher location.
I understand the default with push subscriptions is to uncompress the cabinet file at the publisher location.
I already tried this in other SQL forums, but maybe I will have some luck here.
I mainly need to restore database backups from customers. They arrive in all kinds of formats (zip, rar, gz). I'd like to be able to restore directly from the compressed file, because I'm talking about rar files of up to 7 GB, which take a while to uncompress in a separate step.
I've been working for 6 years in R&D environments, but mostly on Linux/Oracle, where this is an easy task using pipes; I haven't found a single web page, post, or even a script to do this with MSSQL. VDI is not really what I'm looking for, and neither is backup software like SQLBackup, LiteSpeed, etc., because I can't force the customer to use those.
Does anybody have any ideas, or maybe even the same problem with a solution?
I am trying to read a 36-byte file that contains compressed data. I create my Flat File data source and SSIS reads it fine UNTIL it hits an x00 in the file. Then it stops reading and I can't get any data after it. There is data after the x00. Here is the entire hex string: C7 C7 CF 6A 00 00 05 02 3D 03 21 01 E0 02 00 00 00 00 00 00 00 00 3D 3C 1E FD 02 C8 00 00 00 AE 41 E3 28 7C
To test, I changed the two x00 in bytes 5 and 6 to x01 and SSIS read until the next x00.
I am trying to use bcp to output data to a compressed (zipped) folder. The bcp command is called from a step in a scheduled job in SQL 2005 (T-SQL), similar to:
.... where Cdata is a compressed (zipped) folder. The scheduled job seems to work without errors, but afterwards there is nothing in the compressed folder. If Cdata is a regular folder everything works fine.
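For reference (the actual command is omitted above), the step is a plain bcp out of a table to a path inside the zipped folder; a purely hypothetical sketch of the shape, with placeholder server, database, and path names, would be:

DECLARE @cmd VARCHAR(1000);
SET @cmd = 'bcp MyDb.dbo.MyTable out "C:\Cdata\MyTable.dat" -c -T -S MYSERVER';
EXEC master..xp_cmdshell @cmd;   -- assumes xp_cmdshell is enabled; the real job may use a CmdExec step instead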
We are using SQL Server 2008 R2. We generate simple reports against prod, and production is running slowly. We need a fresh copy of production at all times. We mainly use one database, so I need a read-only copy of that particular database that stays in sync with production.