I have an old backup file that I am pretty sure contains a table that I am looking for. Is there a way to verify what exactly a backup file contains without restoring it to a test server?
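Would something along these lines confirm the contents? RESTORE HEADERONLY / RESTORE FILELISTONLY should at least show which database and files are inside the backup, though not the individual tables (the path below is just a placeholder):

RESTORE HEADERONLY FROM DISK = N'D:\Backups\old_backup.bak'    -- lists the backup sets in the file
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\old_backup.bak'  -- lists the data/log files inside the backup set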
Here's what's going on. I have 2 computers (x & y) running SQL 2000. I backed up a copy of a DB from x and restored it on y. I have a stored proc that runs in under 2 seconds on both x & y when running it through Analyzer, but when I call this stored proc through my C# WinForms app (running on computer z) it takes over 3 minutes on computer x and under 10 seconds on y. This stored proc does have a select clause as part of the where clause, but again it works fine on y. I've checked the indexes and that looks good, and I just did a restore of the database so they should be identical. And I don't think it's a performance issue, because the rest of the app actually runs a bit faster on x. The plans do have differences, specifically a mention of Parallelism in the fast one. Here are the plans:

X (slow):

|--Sort(DISTINCT ORDER BY:([r].[GuestId] ASC, [g].[GuestNote] ASC, [Expr1005] ASC, [g].[Email] ASC, [g].[Phone1] ASC))
     |--Compute Scalar(DEFINE:([Expr1005]=[g].[LastName]+','+[g].[FirstName]))
          |--Filter(WHERE:(If ([Expr1003] IS NULL) then 0 else [Expr1003]>=2))
               |--Nested Loops(Left Outer Join, OUTER REFERENCES:([r].[GuestId]))
                    |--Hash Match(Inner Join, HASH:([g].[GuestId])=([r].[GuestId]), RESIDUAL:([r].[GuestId]=[g].[GuestId]))
                    |    |--Clustered Index Scan(OBJECT:([Restaurant].[dbo].[Guest].[PK_Guest] AS [g]), WHERE:(len(isnull([g].[Email], ''))>6 AND charindex('@', isnull([g].[Email], ''), NULL)>1))
                    |    |--Clustered Index Seek(OBJECT:([Restaurant].[dbo].[Reservations].[PK_Reservations] AS [r]), SEEK:([r].[RestId]=1), WHERE:([r].[Date]<='Jan 1 2005 12:00AM' AND [r].[Date]>='Jan 1 2003 12:00AM') ORDERED FORWARD)
                    |--Hash Match(Cache, HASH:([r].[GuestId]), RESIDUAL:([r].[GuestId]=[r].[GuestId]))
                         |--Compute Scalar(DEFINE:([Expr1003]=Convert([Expr1011])))
                              |--Stream Aggregate(DEFINE:([Expr1011]=Count(*)))
                                   |--Index Spool(SEEK:([r2].[GuestId]=[r].[GuestId]))
                                        |--Clustered Index Scan(OBJECT:([Restaurant].[dbo].[Reservations].[PK_Reservations] AS [r2]))

Y (Fast):

|--Parallelism(Gather Streams)
     |--Sort(DISTINCT ORDER BY:([r].[GuestId] ASC, [g].[GuestNote] ASC, [Expr1005] ASC, [g].[Email] ASC, [g].[Phone1] ASC))
          |--Parallelism(Repartition Streams, PARTITION COLUMNS:([r].[GuestId], [g].[GuestNote], [Expr1005], [g].[Email], [g].[Phone1]))
               |--Compute Scalar(DEFINE:([Expr1005]=[g].[LastName]+','+[g].[FirstName]))
                    |--Filter(WHERE:(If ([Expr1003] IS NULL) then 0 else [Expr1003]>=2))
                         |--Compute Scalar(DEFINE:([Expr1003]=Convert([Expr1013])))
                              |--Hash Match Root(Right Outer Join, HASH:([r2].[GuestId])=([r].[GuestId]), RESIDUAL:([r2].[GuestId]=[r2].[GuestId]) AND ([r2].[GuestId]=[r].[GuestId]) DEFINE:([Expr1013]=COUNT(*)))
                                   |--Parallelism(Repartition Streams, PARTITION COLUMNS:([r2].[GuestId]))
                                   |    |--Clustered Index Scan(OBJECT:([Restaurant].[dbo].[Reservations].[PK_Reservations] AS [r2]))
                                   |--Hash Match Team(Inner Join, HASH:([g].[GuestId])=([r].[GuestId]), RESIDUAL:([r].[GuestId]=[g].[GuestId]))
                                        |--Bitmap(HASH:([g].[GuestId]), DEFINE:([Bitmap1014]))
                                        |    |--Parallelism(Repartition Streams, PARTITION COLUMNS:([g].[GuestId]))
                                        |         |--Clustered Index Scan(OBJECT:([Restaurant].[dbo].[Guest].[PK_Guest] AS [g]), WHERE:(len(isnull([g].[Email], ''))>6 AND charindex('@', isnull([g].[Email], ''), NULL)>1))
                                        |--Parallelism(Repartition Streams, PARTITION COLUMNS:([r].[GuestId]), WHERE:(PROBE([Bitmap1014])=TRUE))
                                             |--Clustered Index Seek(OBJECT:([Restaurant].[dbo].[Reservations].[PK_Reservations] AS [r]), SEEK:([r].[RestId]=1), WHERE:([r].[Date]<='Jan 1 2005 12:00AM' AND [r].[Date]>='Jan 1 2003 12:00AM') ORDERED FORWARD)

Any ideas of what I can check for? Thanks for any help.
I'm trying to find a way to get the SPID of a given job that is running. I am trying to create a script that will give me the SPID, the JOB_ID, and the JOB_NAME. The problem comes in that if I use sysprocesses I have to pull the JOB_ID out of program_name in sysprocesses, convert it into something, and then join it to sysjobs. I have not been successful in that conversion. Any ideas?
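This is roughly what I have been attempting; the SQLAgent program_name format and the undocumented master.dbo.fn_varbintohexstr helper are assumptions I haven't verified:

-- Roughly what I have been attempting (program_name format and
-- master.dbo.fn_varbintohexstr are assumptions):
SELECT p.spid, j.job_id, j.name AS job_name
FROM master.dbo.sysprocesses AS p
JOIN msdb.dbo.sysjobs AS j
  ON UPPER(p.program_name) LIKE
     'SQLAGENT - TSQL JOBSTEP (JOB ' + UPPER(master.dbo.fn_varbintohexstr(CONVERT(varbinary(16), j.job_id))) + '%'
WHERE p.program_name LIKE 'SQLAgent - TSQL JobStep%'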
Hi, I have a users table in my SQL Server database. Now, I am looking to create a table (or multiple tables) to allow users to post their weekly events, meetings, activities, and accomplishments to the database. Each Monday morning, each user will enter their new schedule for that week and the previous week's entry will be archived in the database. My question is: what would make more sense? Should I create one big table that would have the following columns:
- Week Number (the current week number in the year)
- User ID
- Events
- Meetings
- Activities
- Accomplishments
And each user would have one row in the database per week.
OR should I create 4 separate tables named Events, Meetings, Activities, and Accomplishments? Each of these tables would have the following columns: (for instance, the Events table would contain:)
Each time a user adds a new event to their schedule, a new row in the Events table is created. Each time a user adds a new accomplishment, a new row in the Accomplishments table is created. etc., etc.
Which approach seems to make more sense and would be easier to maintain? Also, which one would conserve database space better and result in faster querying?
I have 3 tables: Items, Brands, and Categories. Brands and Categories each consist of a primary key as well as the brand/category name. Items has a bunch of columns as well as two foreign key fields, one for Brands and the other for Categories. I want a SELECT statement that would select each brand (id and name) as well as the associated categories for each brand (I want this info for a menu control). If this were a one-to-many relation, I wouldn't have had any problem, but since this is basically a many-to-many relation, I just can't figure it out.
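Something like this is what I am after, if it helps clarify; the column names are just guesses at my actual schema:

-- Something like this is what I am after (BrandId/BrandName and
-- CategoryId/CategoryName are guesses at the real column names):
SELECT DISTINCT b.BrandId, b.BrandName, c.CategoryId, c.CategoryName
FROM Brands AS b
JOIN Items AS i ON i.BrandId = b.BrandId
JOIN Categories AS c ON c.CategoryId = i.CategoryId
ORDER BY b.BrandName, c.CategoryName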
Using SQL Server 2005 Server Management Studio, I attempted to back up a database, and received this error:
Backup failed: System.Data.SqlClient.SqlError: Backup and file manipulation operations (such as ALTER DATABASE ADD FILE) on a database must be serialized. Reissue the statement after the current backup or file manipulation is completed (Microsoft.SqlServer.Smo)
Program location:
at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv) at Microsoft.SqlServer.Management.SqlManagerUI.BackupPropOptions.OnRunNow(Object sender)
Backup Options were set to:
Back up to the existing media set
Overwrite all existing backup sets
I am fairly new to SQL 2005. Can someone help me get past this issue? What other information do I need to provide?
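Would checking for an already-running backup or file operation like this be the right first step? (This assumes the SQL Server 2005 DMVs; I have not tried it.)

-- Check for an in-progress backup or file operation before reissuing the backup
-- (assumes the SQL Server 2005 DMVs):
SELECT r.session_id, r.command, r.percent_complete, d.name AS database_name
FROM sys.dm_exec_requests AS r
JOIN sys.databases AS d ON d.database_id = r.database_id
WHERE r.command IN ('BACKUP DATABASE', 'BACKUP LOG', 'ALTER DATABASE')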
I have a full backup on a daily schedule, and it is taking up a lot of space on the drive because each file is more than 25 GB. I am using SQL Server 2008 R2, so I'm looking to take the backup with compression instead of an uncompressed backup. What are the impacts of a compressed backup? Are there any problems with a compressed backup when restoring the backup file?
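This is what I have in mind, if it matters (the database name and path are placeholders):

-- What I have in mind (database name and path are placeholders):
BACKUP DATABASE [MyDatabase]
TO DISK = N'E:\Backups\MyDatabase_Full.bak'
WITH COMPRESSION, INIT, STATS = 10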
I need to restore a SQL Server 2005 database from backup. The backup consists of three files, named user.bak0, user.bak1 and user.bak2.
What is the syntax for the RESTORE FILELISTONLY and the RESTORE DATABASE ... in this case?
I usually write: restore filelistonly from disk = 'path and filename.bak', then restore database zy from disk = 'path and filename.bak' with replace, move..... move....
This works, but I cannot use it with a split backup file. The files are much too big to put together into one file.
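Is it something along these lines? (The logical file names and the MOVE targets below are only guesses.)

-- Is it something along these lines? (Logical file names and paths are guesses.)
RESTORE FILELISTONLY
FROM DISK = 'D:\Backups\user.bak0',
     DISK = 'D:\Backups\user.bak1',
     DISK = 'D:\Backups\user.bak2'

RESTORE DATABASE zy
FROM DISK = 'D:\Backups\user.bak0',
     DISK = 'D:\Backups\user.bak1',
     DISK = 'D:\Backups\user.bak2'
WITH REPLACE,
     MOVE 'user_Data' TO 'D:\Data\zy.mdf',
     MOVE 'user_Log'  TO 'D:\Data\zy_log.ldf'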
My customer had a total hard drive failure. After sending it to a drive recovery specialist we were able to recover the LDF log file (MyDB_0.LDF). But the MDF file was completely destroyed (MyDB.MDF). They have a good full backup from a month ago.
1) Installed SQL Server 2012 on a new PC
2) Created a new database of the same name (MyDB) - with the same MDF and LDF file names as the original
3) Took the new database offline
4) Deleted the MDF and LDF files of the new database
5) Put "MyDB_0.LDF" in the place of the LDF file I just deleted
6) Put the database back online
7) After hitting F5 to refresh databases - it shows "MyDB (Recovery Pending)"
8) Tried to do a tail-log backup with this command: BACKUP LOG [MyDB] TO DISK = N'C:\BACKUP\MyDB_TailLog.bak' WITH NO_TRUNCATE
And I get this error...
Msg 3447, Level 16, State 1, Line 3 Could not activate or scan all of the log files for database 'MyDB'.
The sad thing is I know we can get this data back using ApexSQL Log. I can see all the transactions since the last full backup in this program - so the log file is not damaged. But my client doesn't want to pay the $2000 fee for this software. There has to be a way to restore this data without having to purchase a third-party tool.
I was quickly checking one network and browsed through the folders; I found the MSSQL backup folder, nothing strange so far. Within that folder, however, I found several backups and one very huge file (datawarehouse) with the extension FILE. I am wondering what it can be. I have the datawarehouse MDF, of course, and the datawarehouse log, but what is that huge file (1 TB)?
Hi there, after a bad server crash, the only remnants we have of our SQL Server database are the .mdf and .ldf files in the MSSQL7/Data folder. Can we restore this database from either of these files and, if so, what is the procedure? Sorry, but I'm a SQL Server newbie.
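Is attaching the files, something like this, the way to go? (The database name and paths below are placeholders.)

-- Is attaching the surviving files something like this? (Name and paths are placeholders.)
EXEC sp_attach_db @dbname = N'MyDatabase',
     @filename1 = N'C:\MSSQL7\Data\MyDatabase.mdf',
     @filename2 = N'C:\MSSQL7\Data\MyDatabase_log.ldf'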
I dropped a SPROC and was in the process of recreating it when I lost power. Now I don't have the SPROC except in an old backup file.
Is there any way to get this without restoring the DB? If not, can I easily restore to a different DB name and then delete it? My backups are on my live server and I don't want to overwrite my current DB with my backup :)
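Restoring a copy under another name is roughly what I had in mind (the logical file names and paths below are guesses):

-- Restore the old backup as a copy so the live database is untouched
-- (logical file names and paths are guesses):
RESTORE DATABASE MyDB_Copy
FROM DISK = N'D:\Backups\MyDB_old.bak'
WITH MOVE 'MyDB_Data' TO N'D:\Data\MyDB_Copy.mdf',
     MOVE 'MyDB_Log'  TO N'D:\Data\MyDB_Copy_log.ldf',
     RECOVERY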
I right-click my project, choose Add -> New Item -> SQL Database, and I write
SqlConnection cc = new SqlConnection(@"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Data\Data.mdf;Integrated Security=True;User Instance=True");
When I code
cmd.CommandText = @"Backup database Data to disk = 'C:\123.bak'";
I receive the error: Could not locate entry in sysdatabases for database 'Data'. No entry found with that name. Make sure that the name is entered correctly. BACKUP DATABASE is terminating abnormally.
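Could it be that under a user instance the database is registered by the MDF's full file path rather than as 'Data'? Something like this is what I was going to try next (the path below is only a placeholder):

-- With AttachDbFilename/user instances the database may be registered under the
-- MDF's full path rather than 'Data' (the path here is a placeholder):
BACKUP DATABASE [C:\Projects\MyApp\bin\Debug\Data\Data.mdf]
TO DISK = 'C:\123.bak'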
My database's log file is full and I want to back it up. First I created a backup device named aa. Then I used Enterprise Manager to back up the transaction log, but it popped up an error message box. The title is "Microsoft SQL-DMO (ODBC SQL state: 42000)". The content is "write on 'aa' failed, status = 112. See the SQL error log for more details. Backup or Restore operation terminating abnormally." What's the matter?
I am using SQL Server 7 and have about 5 databases. One of them has a data file of about 10 Meg, and most of the others are larger. I do a nightly backup to both a local and mapped drive. On both, the size of the backup file for this database is more than 500 Meg, but the rest appear to be an appropriate size. Does anyone know why this would be happening? The database works fine, it does not get a lot of insert/delete activity and I run DBCC every weekend. If anyone has any ideas I would sure like to hear from them.
Hi, I am new to SQL Server (I am using 7.0) and I have just received a database.bak file from one of our clients, and I want to create a new DB from this backup file. I have already installed MS SQL Server. Can you tell me how to do this?
Anyone have any hints on getting the DELBKUP option to work? I have a maintenance job that backs up the database, also set to delete backups older than 1 day. This doesn't seem to be working.
Is the name of the most recent backup file for each database stored anywhere in SQL2K? I want to execute a SQL Agent job periodically that takes the BAK from database A and restores it over database B (using the T-SQL RESTORE DATABASE statement), but I need to know the exact name of the .BAK file; i.e. I need to know the yyyymmddhhmm value at the end of that file.
Is the name of the last backup file recorded anywhere in SQL2K? I do backups every 2 hours, and want to restore backups from database A to database B once a day using the latest backup. Unfortunately the backup files don't carry the time the backup started (e.g. 1600), but the time it ended (I presume), e.g. 1604.
I want to automate the process; any suggestions on how to find the name of the last backup file?
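Would reading it out of msdb like this be reliable? ('A' stands in for the database name; this assumes backupset/backupmediafamily record the file path.)

-- Find the file of the most recent full backup of database A
-- ('A' is a placeholder; type 'D' = full database backup):
SELECT TOP 1 mf.physical_device_name, bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS mf ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'A'
  AND bs.type = 'D'
ORDER BY bs.backup_finish_date DESC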
Hi, I have a database whose size is 64 GB, and I back it up every day with 1 day of retention. But the maintenance plan doesn't delete the files older than 1 day. This works fine for the other DBs. Do you know what the problem is? Regards.
I tried to use the backup and restore database tasks to restore a backup file, but it does not work. The backup file I tried to restore in SQL Server 2000 is from somewhere else (from my friend) and is saved on a CD-ROM, not one I created before. How can I restore it so I can view it in a SQL Server 2000 database? Can you show me step by step? Thanks for your help.
How can I see the exact size of a SQL Server backup file while the backup is running? The process has been running for more than 2 hours; it is chewing up disk space (it has already taken about 20 GB), but the size of the backup file still shows as 0.
I would like to set up a sort of schedule using FTP to move the database backup file from the production server to the staging server every night. Can someone tell me how to do this, or suggest any other good way to handle it? I appreciate the help!
Hi, this is Chetan Jain, SQL Server DBA. I came across a problem while synchronising production with DR in log shipping. Log shipping was stopped and the message was "Cannot copy the ......TRN. File is corrupted."
Can anybody suggest what should be done if a TRN file is corrupted? Because if I manually restore the logs at DR, I will need every log file to sync Prod with DR. A log file that is corrupted cannot be skipped, nor can it be restored at DR.
Hi, I am using the isp_backup stored procedure to take the daily backup of the database and this works fine, but I have to copy the backup file to a network share folder using the following cmd: EXEC master..xp_cmdshell 'copy D:\filename M:\foldername'. M: is a network folder on a Windows 2003 box, but it does not copy to this folder.
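Would switching to the UNC path, something like below, make a difference? (Paths are placeholders; I assume the SQL Server service account needs rights on the share, and that a drive mapped in my own session may not be visible to xp_cmdshell.)

-- Copy by UNC path instead of a mapped drive letter
-- (paths are placeholders; the service account needs rights on the share):
EXEC master..xp_cmdshell 'copy D:\Backups\MyDB.bak \\FileServer\Backups\'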
I am considering some disaster recovery scenarios.
Let's assume my MDF is gone - the disks are dead.
The LDF is on a different disk channel. Let's assume it's fine.
Can I make a "final" TLog backup from the "good" LDF file?
Maybe copy some earlier MDF file into place - would that enable a TLog backup from the LDF file?
'Coz if I can, then I could use that as a route to a zero-loss recovery - make a final TLog backup, and then restore the whole lot from the last FULL plus all TLog backups thereafter.
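Is the final TLog backup just this, even with the MDF missing? (The database name and path are placeholders.)

-- Final tail-log backup with the data file unavailable
-- (database name and path are placeholders):
BACKUP LOG MyDB
TO DISK = N'E:\Backups\MyDB_Tail.trn'
WITH NO_TRUNCATE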