SQL Server 2012 :: Spool Backup Command Result To A Text File
Jun 9, 2015
I am running backups on SQL Server Express and I take backups via a batch file executing the T-SQL below. This works perfectly fine; I just need to be able to spool out the backup completion log. What T-SQL command can I use to do that?
My batch file looks like this:
@echo off
sqlcmd -S SERVER01 -E -i H:\master_scripts\TOM_FAM_log.sql
which in turn calls TOM_FAM_log.sql:
DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name
SET @path = 'H:\Backup\TOM_FAM\'
[Code] ....
Usually, if we schedule the backups using a maintenance plan, we can choose reporting options that spool the result of the backup to a text file. What is the T-SQL equivalent of that report-to-text-file option?
That option produces a log file with the details I posted below. I would like to do the same or something similar using T-SQL in my code.
Microsoft(R) Server Maintenance Utility (Unicode) Version 11.0.3000
Report was generated on "SERVER01".
Maintenance Plan: Backup logs
Duration: 00:00:00
Status: Succeeded.
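One way to get a similar log without a maintenance plan is to let sqlcmd capture it: the -o switch writes everything the script outputs, including the BACKUP DATABASE completion messages and any PRINT statements, to a file. A minimal sketch using the same server as above; the log path is an assumption:
Code:
@echo off
rem Write all script output, including BACKUP completion messages, to a log file
sqlcmd -S SERVER01 -E -i H:\master_scripts\TOM_FAM_log.sql -o H:\Backup\TOM_FAM\backup_log.txt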
Hi friends, suppose I want to generate a script that has a "DROP TABLE <table_name>;" line for each table in one of our databases. In Oracle I can do this using the "spool" command in SQL*Plus, which is a command-line utility somewhat like isql or osql (I don't know those two perfectly!). For example:
Code:
spool c:\DropAllTables.sql
select 'DROP TABLE ' || table_name || ';' from user_tables;
spool off
And this will generate the script with all of those DROP statements in it. Would you please clarify whether there is such a thing in SQL Server 2000? Thanks a lot.
You can use OSQL or BCP from a command line. The command line is something like this: osql -UMyUser -PMyPwd -dMyDatabase -Slocal -i query.sql -o ficspool.txt -e
Hi all, I have a table with the list of tables I need to drop. Before dropping those tables I need to disable the FK and PK constraints, so I want to spool out the output of this script:
SELECT 'ALTER TABLE ' + QUOTENAME(c.TABLE_NAME)
     + ' NOCHECK CONSTRAINT ' + QUOTENAME(c.CONSTRAINT_NAME) AS ALTER_SCRIPT
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
Is there a way to store the output of this script in a .sql file so that I could execute it. Any thoughts will help! Thank you!
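Along the lines of the osql answer above, sqlcmd can write the generated statements straight into a .sql file: -h -1 suppresses the column header, -W trims trailing spaces, and SET NOCOUNT ON suppresses the rows-affected footer. Server name and output path here are assumptions:
Code:
sqlcmd -S MyServer -E -h -1 -W -o C:\scripts\disable_fks.sql -Q "SET NOCOUNT ON; SELECT 'ALTER TABLE ' + QUOTENAME(c.TABLE_NAME) + ' NOCHECK CONSTRAINT ' + QUOTENAME(c.CONSTRAINT_NAME) FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'"
The resulting file can then be run back with sqlcmd -i C:\scripts\disable_fks.sql.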
On the SQL Server the Event Viewer shows the same messages and errors every evening between 22:05:00 and 22:08:00. The following information messages are shown for every database:
"I/O is frozen on database <database name>. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup."
"I/O was resumed on database <database name>. No user action is required."
"Database backed up. Database: <database name>, creation date(time): 2003/04/08(09:13:36), pages dumped: 306, first LSN: 44:148:37, last LSN: 44:165:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{A79410F7-4AC5-47CE-9E9B-F91660F1072B}4'}). This is an informational message only. No user action is required."
After the 3 messages the following error message is shown for every database:
"BACKUP failed to complete the command BACKUP LOG <database name>. Check the backup application log for detailed messages."
I have added a Maintenance Plan but these jobs run after 02:00:00 at night.
Where can I find the command or setting that backs up all databases and log files at 22:00 in the evening?
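Those VIRTUAL_DEVICE messages normally come from a VSS-based backup tool (Windows Backup or similar) snapshotting the server at 22:05, so the 22:00 job is probably configured outside SQL Server. If you want a native equivalent, a SQL Server Agent job scheduled at 22:00 can loop over the databases. A minimal sketch, with the backup path assumed (on SQL Server 2000, query master..sysdatabases instead of sys.databases):
Code:
DECLARE @name sysname, @sql nvarchar(max);
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4 AND state_desc = 'ONLINE';  -- skip system databases
OPEN dbs;
FETCH NEXT FROM dbs INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Full backup per database; add a BACKUP LOG statement here for log backups
    SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
             + N' TO DISK = ''D:\Backup\' + @name + N'.bak'' WITH INIT;';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @name;
END
CLOSE dbs;
DEALLOCATE dbs;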
We write a backup to local disk and then run a command to send the file to the TSM server. This is the command I use at a command prompt to do a TSM incremental backup.
Command for an incremental backup of drive letter h:
C:\Program Files\Tivoli\TSM\baclient\dsmc incremental h:
Command for an incremental backup of a mount point:
C:\Program Files\Tivoli\TSM\baclient\dsmc incremental -domain="E:\Backup"
I would like to be able to run this as the last step in my backup processes. This would allow me to send my local backup file to the TSM server to write to tape.
I am looking for either a CmdExec expert who could show me the syntax to run these commands directly or via a batch job. The other option would be to run these commands from a PowerShell job step.
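For the job-step route, msdb.dbo.sp_add_jobstep with the CmdExec subsystem takes the dsmc command verbatim; a hedged sketch (the job and step names are assumptions):
Code:
EXEC msdb.dbo.sp_add_jobstep
    @job_name  = N'Nightly backup',   -- assumed existing job
    @step_name = N'Send backup to TSM',
    @subsystem = N'CmdExec',
    @command   = N'"C:\Program Files\Tivoli\TSM\baclient\dsmc" incremental h:';
Alternatively, a final T-SQL step can shell out with EXEC master..xp_cmdshell (xp_cmdshell must be enabled for that).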
What causes the query optimizer to choose a table spool/lazy spool action in the execution plan? The explanation of "optimize rewinds" makes little sense because my query never comes back to that table. I'm going to have to change the query, but it would be helpful if I knew what I should be trying to avoid. David
It works remotely if I run it via the command prompt, but when I add this to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
I have accidentally taken a backup of a database twice into the same .bak file. Now the file is twice the size. Will it be fine if I restore from this backup, or will I screw up any data?
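Because BACKUP appends by default (NOINIT), the file now simply holds two backup sets; nothing is damaged. You can list the sets and restore a specific one by position; the path and database name here are assumptions:
Code:
-- List the backup sets in the file; note the Position column
RESTORE HEADERONLY FROM DISK = 'D:\Backup\mydb.bak';

-- Restore the second (most recent) set explicitly
RESTORE DATABASE mydb FROM DISK = 'D:\Backup\mydb.bak' WITH FILE = 2, REPLACE;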
I am in need of urgent help. I believe that when you use the command SPOOL filename without specifying an extension, it automatically adds an .lst extension. Can you help me identify whether there is any way to generate a file without an extension? I hope someone answers this post soon; I would really appreciate it!
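As far as I know, SQL*Plus only applies .lst when no extension is given, and an explicit extension always overrides it. I am not aware of a supported no-extension option, so one workaround is to spool with an extension and rename the file afterwards from within SQL*Plus:
Code:
spool c:\output.txt
select table_name from user_tables;
spool off
host rename c:\output.txt output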
How do I import a text file with a list of NI numbers into a new table, with a column listing all the NI numbers? I think I would use the SELECT INTO clause, but I'm not sure how to do this.
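SELECT ... INTO copies rows from an existing table, so for a flat file BULK INSERT into a pre-created table is the usual route. A minimal sketch; the file path and column size are assumptions:
Code:
CREATE TABLE dbo.NINumbers (NINumber varchar(9) NOT NULL);

BULK INSERT dbo.NINumbers
FROM 'C:\data\ni_numbers.txt'
WITH (ROWTERMINATOR = '\n');   -- one NI number per line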
I am looking for a way to convert the following format into a SQL table. The format is BibTeX.
Essentially, a new row in the table would be created for each entry, denoted by an @ sign, and each column is denoted by an =. As you can see from the example data, no one entry contains all the possible columns, and some fields can run over two lines.
To load this I was considering reading the file into a table with one row per line, adding a row number, then adding a column that counts the @ signs in order so each record is grouped, and finally, for each group, looking for the column keywords 'author', 'title', etc. and splitting the data out into its constituent parts using SUBSTRING and CHARINDEX. A sketch of that approach follows the sample entry below.
@Book{hicks2001,
  author = "von Hicks, III, Michael",
  title = "Design of a Carbon Fiber Composite Grid Structure for the GLAST Spacecraft Using a Novel Manufacturing Technique",
  publisher = "Stanford Press",
  year = 2001,
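Here is a sketch of that approach (the staging table and column names are assumptions, and the running SUM OVER needs SQL Server 2012 or later):
Code:
-- Stage the raw file, one line per row, in file order
CREATE TABLE #bib (line_id int IDENTITY(1,1) PRIMARY KEY, line nvarchar(max));

-- Group lines into entries by counting '@' headers seen so far,
-- then split each "key = value" line at its first '='
;WITH grouped AS
(
    SELECT line_id, line,
           SUM(CASE WHEN line LIKE '@%' THEN 1 ELSE 0 END)
               OVER (ORDER BY line_id ROWS UNBOUNDED PRECEDING) AS entry_no
    FROM #bib
)
SELECT entry_no,
       LTRIM(RTRIM(LEFT(line, CHARINDEX('=', line) - 1)))                 AS field_name,
       LTRIM(RTRIM(SUBSTRING(line, CHARINDEX('=', line) + 1, LEN(line)))) AS field_value
FROM grouped
WHERE CHARINDEX('=', line) > 0;
Fields that run over two lines would still need a second pass to stitch the continuation lines onto the preceding key.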
This script will read the contents of a DB backup file, and generate a restore command.
Set the value of parameter @backup_path to point to the backup file, run in Query Analyzer, cut/paste the output into another Query Analyzer window, modify as necessary, and run.
This is just a barebones script to demo how this can be done. Modify as necessary to meet your own needs.
Works in SQL 2000 and 7.0. May work in SQL 2005, but it is not tested.
-- Create Restore Database Command from DB Backup File
select [--Restore--] =
    case when a.Seq = 1
         then @cr + @cr + 'restore database ' + c.DatabaseName + @cr
              + 'from disk =' + @cr + @tab + '''' + @backup_path + '''' + @cr
              + 'with' + @cr
         else ''
    end
    + @tab + 'move ''' + a.LogicalName + ''' to ''' + a.PhysicalName + ''' ,'
    + case when a.Seq = b.Seq
           then @cr + @tab + 'replace, stats = 5 , recovery'
           else ''
      end
from #filelist a
cross join ( select Seq = max(b1.Seq) from #filelist b1 ) b
cross join ( select DatabaseName = max(c1.DatabaseName) from #header c1 ) c
order by a.Seq
go
drop table #header
drop table #filelist
Finding the database size from the backup file: I have a SQL 2012 backup file; is there any way to find the estimated database size from the backup? I tried restoring and got an error saying "no space, need additional xxx bytes". Does this error give the exact space needed to restore?
One more question: one of the backup files is 7.2 GB, and when I try to restore it, it throws an error saying it needs 292 GB of extra space while only 100 GB is available. How does a 392 GB database become a 7.2 GB .bak file?
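RESTORE FILELISTONLY reads the file layout out of the backup without restoring anything; the Size column (bytes per data/log file) sums to roughly the space a restore needs. The gap between 392 GB and 7.2 GB is normal: BACKUP writes only allocated pages (and may be compressed), while RESTORE must recreate the files at their full size.
Code:
-- Shows LogicalName, PhysicalName and Size (bytes) for each file in the backup
RESTORE FILELISTONLY FROM DISK = 'D:\Backup\mydb.bak';   -- path assumed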
I have a SQL 2012 Enterprise server, and I'm using Commvault for my backups. Commvault can restore a .bak file to my server, but apparently it cannot use SQL compression on the file, so what would be a 150 GB .bak backup file is now 600 GB. I have to manually upload these files to an auditing firm on an SFTP server, and the transfer times are now huge.
Is there a way to use something in sql to compress this already existing .bak file down?
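SQL Server cannot compress an existing .bak in place, but since this is Enterprise edition you can take your own compressed backup alongside Commvault's and upload that instead (or zip the existing file with any file-level tool). A sketch, with database name and path assumed:
Code:
BACKUP DATABASE mydb
TO DISK = 'D:\Backup\mydb_compressed.bak'
WITH COMPRESSION, INIT;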
I need to create a new database from a backup file of an already existing database (vehicledb) on this server, so I created a new database named "vehicledb2". But when I try this, the restore says that the file is in use: "The backup set holds a backup of a database other than the existing 'Vehicles2' database. Restore of database 'Vehicledb' failed."
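Restoring a backup of one database over a differently named database needs the files relocated with MOVE and the overwrite allowed with REPLACE. The logical file names below are assumptions; check them first with RESTORE FILELISTONLY:
Code:
RESTORE DATABASE vehicledb2
FROM DISK = 'D:\Backup\vehicledb.bak'
WITH MOVE 'vehicledb'     TO 'D:\Data\vehicledb2.mdf',
     MOVE 'vehicledb_log' TO 'D:\Data\vehicledb2_log.ldf',
     REPLACE;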
I am using an OLE DB Source in my data flow component and want to select rows from the source based on the name I enter at execution time. I have created two variables:
enterName - String, package level (will store the name I enter)
myVar - String, package level (will store the query)
I am assigning this query to the myVar variable, "Select * from db.Users where (UsrName = " + @[User::enterName] + " )"
Now in the OLE DB Source I have selected "SQL command from variable", and in the variable list I pick enterName. When I click OK, I get this error:
Error at Data Flow Task [OLE DB Source [1]]: An OLE DB error has occurred. Error code: 0x80040E0C. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E0C Description: "Command text was not set for the command object.".
Can someone tell me where I am going wrong?
For the myVar variable, I have also set the EvaluateAsExpression property to true.
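Two things stand out. First, the OLE DB Source should point at myVar (the variable that holds the query), not enterName; "Command text was not set" is what you get when the chosen variable does not contain a SQL statement. Second, the expression needs single quotes around the string value, along the lines of:
Code:
"SELECT * FROM db.Users WHERE UsrName = '" + @[User::enterName] + "'"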
I am looking at the file / filegroup level backup and recovery options within SQL Server and I'm struggling with the following concept.
Books Online assures me that it is possible to perform a file restore whilst the database is in the simple recovery model.
So I have set up a database with two separate file groups, a read/write primary and a read only "secondary". Each filegroup has 2 underlying data files.
I have then created a "live" customers table within the primary filegroup and assigned my existing "archive" customers tables to the secondary filegroup.
If I try to perform a file or filegroup level backup within management studio, those options are greyed out. I can only perform a database backup.
If I switch back to the full recovery model, the options are no longer greyed out.
So my question is this: is file-level backup and recovery actually supported in the simple recovery model, do you have to perform this task outside of Management Studio, or (as is likely) am I missing something crucial?
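For what it's worth, the T-SQL path still works where the SSMS dialog greys those options out: under the simple recovery model, file/filegroup backups are restricted to read-only filegroups, and the usual pattern is a partial backup of the read/write portion plus separate backups of the read-only filegroups. A minimal sketch with assumed names:
Code:
-- Primary plus all read/write filegroups
BACKUP DATABASE MyDb READ_WRITE_FILEGROUPS
TO DISK = 'D:\Backup\MyDb_partial.bak';

-- The read-only filegroup, backed up on its own
BACKUP DATABASE MyDb FILEGROUP = 'Secondary'
TO DISK = 'D:\Backup\MyDb_secondary_fg.bak';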
I have a CTE query against a table with 32K rows that runs fine in 2008R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
declare @amts table (claim int, allowed decimal(12,2), copay decimal(12,2),
                     deductible decimal(12,2), coins decimal(12,2));

;with unpaid (claimID) as
(
    select claimID from claim where amt + copay + disct + mm + ded = 0
)
insert @amts
select lineID, sum(rc), sum(copay), sum(deduct),
       case when sum(mm) > 0 and (sum(mm) < sum(mmamt)) then sum(mm) else 0 end
from claimln
where status is null
  and lineID not in (select claimID from unpaid)
group by lineID
it's like there's some massively recursive process going on?
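The 351M "actual number of rows" is usually the spool being replayed rather than true recursion: each rewind of a lazy spool re-reads the spooled rows, and the counts accumulate across every outer row. One hedged rewrite that often removes the spool here is replacing NOT IN with NOT EXISTS (column names kept as in the original):
Code:
-- @amts is declared as in the original query
;with unpaid (claimID) as
(
    select claimID from claim where amt + copay + disct + mm + ded = 0
)
insert @amts
select c.lineID, sum(c.rc), sum(c.copay), sum(c.deduct),
       case when sum(c.mm) > 0 and sum(c.mm) < sum(c.mmamt) then sum(c.mm) else 0 end
from claimln c
where c.status is null
  and not exists (select 1 from unpaid u where u.claimID = c.lineID)
group by c.lineID;
NOT EXISTS also sidesteps the NULL-handling trap NOT IN has when claim.claimID can be NULL.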
How do you add a specific timestamp to a backup? For example, if the backups are going to the same drive location on disk and you want to retain 3 days worth of backups online, how do you add the timestamp to the filename to make each backup unique?
The backups go to F:\MSSQL.1\MSSQL\Backup and the user database backup is named <xyz>_<timestamp>.bak. The user database dumps each night at 9 PM.
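A sketch that builds the timestamp into the file name in T-SQL (database name and path assumed); a cleanup task can then delete .bak files older than three days:
Code:
DECLARE @file nvarchar(260) =
    N'F:\MSSQL.1\MSSQL\Backup\xyz_'
    + CONVERT(char(8), GETDATE(), 112)                           -- yyyymmdd
    + N'_' + REPLACE(CONVERT(char(8), GETDATE(), 108), ':', '')  -- hhmmss
    + N'.bak';

BACKUP DATABASE xyz TO DISK = @file WITH INIT;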
Hi. I am writing a program in C# to migrate data from a Foxpro database to an SQL Server 2005 Express database. The package is being created programmatically. I am creating a separate data flow for each Foxpro table. It seems to be doing it ok but I am getting the following error message at the package validation stage:
Description: An OLE DB error has occurred. Error code: 0x80040E0C.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E0C Description: "Command text was not set for the command object".
.........
Description: "component "OLE DB Destination" (22)" failed validation and returned validation status "VS_ISBROKEN".
This is the first time I am writing such code; there must be something I am not doing correctly, but I can't seem to figure it out. Any help will be highly appreciated. My code is below:
private bool BuildPackage()
{
// Create the package object
oPackage = new Package();
// Create connections for the Foxpro and SQL Server data
MessageBox.Show("The DTS package was not built successfully because of the following error(s):" + sErrorMessage, "Package Builder", MessageBoxButtons.OK, MessageBoxIcon.Information);
I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs with parts like "ResultID=5", which is what the ResultIDSearch and ResultSetIDSearch fields contain. The records with entries like "ResultID=5" are the ones I'm trying to filter for.
The problem I have is that the query takes way too long to run. The #TempLogins table has around 200K records and the #TempSearch table has around 80K records.
select *
from #TempLogins a
where exists ( select 1
               from #TempSearch t1
               where a.detail like '%' + t1.ResultIDSearch + '%'
                  or a.detail like '%' + t1.ResultSetIDSearch + '%' )
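With roughly 200K x 80K wildcard comparisons the work is inherently quadratic, so no rewrite will make it cheap, but splitting the OR into two EXISTS clauses sometimes lets the optimizer build a better plan; a hedged variant:
Code:
select *
from #TempLogins a
where exists ( select 1 from #TempSearch t1
               where a.detail like '%' + t1.ResultIDSearch + '%' )
   or exists ( select 1 from #TempSearch t2
               where a.detail like '%' + t2.ResultSetIDSearch + '%' );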