SQL 2012 :: Restore Labelonly From Disk Running Nonstop
Jun 22, 2015
We're having some issues with where our backups write to, so I've been monitoring performance, and today I noticed that RESTORE LABELONLY FROM DISK has been running almost non-stop for the past few hours.
The account running the query is the SQL Server's service account, and the program is "Microsoft SQL Server".
Every minute or so the SPID changes, which made me think it was related to the transaction log backups: the RESTORE LABELONLY runs for as long as each database's transaction log backup does.
Example: Database A's transaction log backup takes 1 minute, and SPID XX for RESTORE LABELONLY runs for 1 minute.
Database B's transaction log backup then starts, and there is a new SPID for RESTORE LABELONLY.
I hope this makes sense because I normally don't see this restore labelonly running.
I am trying to create a scheduled task that will restore a database from a backup file. I do not store my backups on a backup device, but on a local disk. To restore a DB from a backup device, the following statement works: "Load DBName from BackupDeviceName". Does anyone know what statement to use to restore from a file, if the file is "E:\DBName_db_dump_199909272220"?
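A minimal sketch of the equivalent for a file on disk (assuming the path above and a database named DBName from the post; the WITH options are illustrative, adjust for your environment):
RESTORE DATABASE DBName
FROM DISK = 'E:\DBName_db_dump_199909272220'
WITH REPLACE, RECOVERY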
We've got an internal database that replicates with another database server for our website.
Not all tables are replicated, some use merge and the others are snapshot based and published regularly to the public website facing server.
However, there's a lot of data (well, large textual data) that's being transferred and it seems to be generating massive log files that continue to grow and grow.
I'm fairly new to adminning a SQL Server box, so I was wondering if anyone can tell me the best way to keep it under control. I've heard it's possible to truncate the logs, effectively deleting any data that has already been processed by subscribing servers, etc.?
As I said, I'm very much new to this and would really appreciate some guidance, if only to the right part of the SQL Server Books Online :)
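If it helps, a minimal sketch of the usual approach, with hypothetical database and path names: regular transaction log backups are what let SQL Server mark old log records as reusable, with the replication caveat that log records not yet read by the Log Reader agent cannot be truncated regardless.
-- Back up the log regularly (e.g. via a scheduled job) so log space can be reused
BACKUP LOG WebDB TO DISK = 'E:\Backups\WebDB_log.trn'
-- If the physical file has already grown huge, shrink it once after a log backup
-- (the logical log file name here is hypothetical; check yours with sp_helpfile)
DBCC SHRINKFILE (WebDB_log, 1024)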
If databases on a physical drive [G:] are fragmented, and the drive is extended by adding more hard drives to the array, does it make sense to backup and restore the fragmented databases? The Windows Server should be able to find contiguous space for each database, since it shows 75% free space on the SQL Data drive without any file fragments on it. Or will it restore to the original location, in which case does it make sense to delete the databases and restore them from the backups? Thank you very much!
Hi, I have formatted my server because of a serious problem and I did not back up my database. I have only a physical copy of the disk containing the data on another disk. :( How can I recover my DB? Thank you in advance.
I have a SQL 2008 R2 RTM production instance, in which we run DBCC CHECKDB every weekend to check on the DB. This weekend the SQL job returned the error:
DBCC RESULTS --------------------
<DbccResults>
<Dbcc ID="0" Error="8928" Severity="16" State="1">Object ID 866531312, index ID 1, partition ID 72057602979266560, alloc unit ID 72057603064397824 (type In-row data): Page (1:7650240) could not be processed. See other errors for details.</Dbcc>
<Dbcc ID="1" Error="8939" Severity="16" State="98">Table error: Object ID 866531312, index ID 1, partition ID 72057602979266560, alloc unit ID 72057603064397824 (type In-row data), page (1:7650240). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -6.</Dbcc>
<Dbcc ID="2" Error="8990" Severity="10" State="1">CHECKDB found 0 allocation errors and 2 consistency errors in table 'tblDistpatch' (object ID 866531312).</Dbcc>
We tried to rebuild the indexes on the table tblDistpatch: the non-clustered indexes rebuilt fine, but the clustered index rebuild returned an error:
Error: The statement has been terminated. Msg 829, Level 21, State 1, Line 1
Database ID 3, Page (1:7650240) is marked RestorePending, which may indicate disk corruption. To recover from this state, perform a restore.
In a TEST environment we were able to reproduce this by restoring the latest backup; we then ran DBCC CHECKDB and no errors were found.
My question is: in your experience, is this the best possible way to fix this table? It's only one table, but running this in the production environment would require putting the DB in single-user mode first.
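For what it's worth, since only page (1:7650240) is reported damaged, a page-level restore is one alternative worth weighing against a full restore. This is only a sketch with hypothetical backup file names, and online page restore requires Enterprise Edition (it is offline in Standard):
RESTORE DATABASE YourDb PAGE = '1:7650240'
FROM DISK = 'E:\Backups\YourDb_full.bak'
WITH NORECOVERY
-- apply subsequent log backups, then a final tail-log backup, all WITH NORECOVERY
RESTORE LOG YourDb FROM DISK = 'E:\Backups\YourDb_log.trn' WITH NORECOVERY
RESTORE DATABASE YourDb WITH RECOVERY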
Fellas!! This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story: I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated Proliant computer with 2GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6GB RAM). Nothing else is running on that specific computer.
Once the SQLServer is started, it hits 300MB RAM (the minimum that was set in the configuration of the server - remember, it is dynamically acquired). Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10000 rows which are all of type IMAGE). Once all the data is retrieved, the RAM is at about 400MB. From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a Transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9MB causes the RAM to go up by roughly 20MB, and after commit it goes down 10MB (total gain of roughly 10MB RAM). Eventually the SQLServer process hits its upper limit (1.6GB) and at this point it starts slowing down.
Some performance checks showed me the SQLServer has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much much much slower). We have a development environment running the exact same code (it is the exact same in everything, except for the amount of data stored in the DB). This does not happen there at all.
I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in the RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.
It does not seem to have anything to do with the .NET code. Thank you very much, M Yamo.
Does anyone out there have any ideas on the best way to connect, in Integration Services, to a NonStop SQL database running on a Tandem computer? We have some ODBC drivers that have proven to be problematic and slow to use. The OLE DB products I can find apparently cost around $36K per processor to buy. Thanks.
When I select All Actions>Backup Database and then click the ... button to choose a location on the hard drive, SQL Enterprise Manager Hangs and I have to kill it from Task Manager.
All other areas of EM browse the disk drive without a problem. (DTS, File Groups, etc.)
Every time I try to verify the backup file I get the error:
backup set on file '1' is not valid (translated from Italian).
This is my backup script:
BACKUP DATABASE [ahr_sistema] TO DISK = N'C:\Programmi\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\ahr_sistema\ahr_sistema_backup_1.bak' WITH FORMAT, INIT, NAME = N'ahr_sistema-Completo Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
declare @backupSetId as int
select @backupSetId = position from msdb..backupset where database_name=N'ahr_sistema' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'ahr_sistema' )
if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''ahr_sistema'' could not be found.', 16, 1) end
RESTORE VERIFYONLY FROM DISK = N'C:\Programmi\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\ahr_sistema\ahr_sistema_backup_1.bak' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
I have one data pump in a series that was pumping in too many records. Doing an independent query of the source table, I found there was about 140,000 records. My pump uses a variable for the source query, nothing fancy just a simple SELECT * FROM table WHERE DateField > '4/6/2006 12:00:00AM'. The Destination is local on the SQL Server and is set by a variable, and does a fast load. When I went away and checked in BIDS while it was running (the data flow tab where you can see the record count) it was at 28,000,000 and still going!
Any ideas what could be causing this? As I say there are only 140,000 records and no joins in the query--is this a bug someone has run into before?
Hello all, I am new to SQL 2000. I installed a SQL 2000 database on the C drive, but now I find my C drive space is smaller than before, so I want to move my database (including data and structure) from the C drive to the D drive (which has plenty of space). Is it possible to do this? If it can be done, do I need to change my ASP.NET program source code (e.g. change my Crystal Reports connection string)? Thanks in advance!
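A minimal sketch of one common approach on SQL 2000, detach/attach (database and file names here are hypothetical); the connection string only needs to change if the server or database name changes, not when the files move:
EXEC sp_detach_db 'MyDb'
-- copy MyDb.mdf and MyDb_log.ldf from the C: data folder to D:\SQLData, then:
EXEC sp_attach_db 'MyDb',
    'D:\SQLData\MyDb.mdf',
    'D:\SQLData\MyDb_log.ldf'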
We have a couple of 200GB databases that are recreated each night on a SQL2012 server connected to a disk array. The SQL disk is an auto-tiered combo of 10k and 7k drives in a RAID1 lun, and both data and log files are there.
Recently, some room has opened up on an older array that contains smaller, but many, 15k drives that I could use in a RAID1 config. Being that I'd like to split up the mdf and ldf files, which would you put onto the new (faster) disk?
EDIT: Add'l info: the only current performance issues I see in the SQL Log are FlushCache messages occurring throughout the night, when all activity happens for this DB. Things like this: "FlushCache: cleaned up 388690 bufs with 23474 writes in 409743 ms (avoided 179747 new dirty bufs) for db 47:0"
During an investigation of a disk problem I found that something is writing to disk every 2 minutes. It looks like it is writing to the database's mdf file, at almost 6-8 million bytes/sec for about 1-2 seconds, but every 2 minutes. Is this normal behaviour? Is this synchronous writing to disk from memory?
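That periodic burst pattern is consistent with the automatic checkpoint flushing dirty pages from the buffer pool to the data file. One hedged way to watch for it (a standard DMV; the per-second counters are cumulative, so sample twice and take the difference to get a rate):
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
AND counter_name IN ('Checkpoint pages/sec', 'Lazy writes/sec', 'Page writes/sec')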
Is there any way to check whether the server is having disk latency or I/O issues? I found the below in the SQL error log:
Date: 10/1/2014 8:28:58 AM    Log: SQL Server (Current - 10/1/2014 12:00:00 AM)
Source: spid10s
Message: SQL Server has encountered 8500 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Fin.mdf] in database [Fin] (5). The OS file handle is 0x0000000000001368. The offset of the latest long I/O is: 0x0001104a7da000
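One hedged way to check per-file I/O latency directly from the DMVs (figures are cumulative since the instance last started):
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       -- average stall per I/O in milliseconds; NULLIF guards against divide-by-zero
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC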
I have a windows 2012 cluster environment that consists of two SQL servers nodes with Quorum disk configured as witness.
Manual failover between nodes is working fine; however, the virtual SQL instance is not seeing the Quorum disk.
Moreover the Quorum disk has the same number as another cluster storage disk, is that considered a problem?
When I move the SQL instance from one node to another, should the Quorum disk change ownership to the destination node as well? If it is not changing ownership, what would be the problem?
I had my IT dept. install SQL Server 2005 Enterprise Edition on a new Windows Server 2003 machine. All other machines are running SQL 2000 Server on Windows Server 2000. I am sa and a local admin on all servers. I tried performing copy database, backup and restore, and detach and attach to upgrade the SQL Server 2000 databases to 2005, and all fail. They also installed SQL Server 2005 SP2. I followed all the steps. It's like the servers don't talk to each other, even though they're all on the company's domain. I am wondering if something happened with the install, but the IT dept. insists that everything went fine. It's strange that I can't perform a simple backup and restore it on the new server, but when I click on restore it doesn't let me browse to get the backup file on the 2000 server. I never had a problem in SQL 2000 with backup. Could the installation have been corrupted somehow? It seems fine. I haven't created any new databases because I wanted to move the databases from SQL 2000 first. Can anyone help me get a clue about what the problem is, please?
I am wondering what would be the best disk/RAID setup for a Windows server 2008 R2 OS and SQL Server 2012 database that has heavy read/write. I have the following disks I can use:
4x 15k 146GB
2x 10k 600GB
According to the server build requirements for the application, I need 100GB for the OS and 290GB for the drive containing the SQL mdf. There are no stated requirements for the ldf, but I would like to know whether it should be allocated elsewhere. My plan is RAID 10 on the 15k drives for SQL and RAID 1 on the 10k drives for the OS.
If I return the Average, Minimum, and Maximum values for the counter Physical Disk: Avg. Disk Queue Length, and those values are 10, 0, and 87 respectively, which value do I use to compute the Avg. Disk Queue Length for a 4-disk array (RAID 10): Average, Minimum, or Maximum? The disk (LUN) is on a SAN.
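For what it's worth, the usual rule of thumb (a general guideline, not specific to this thread) is to take the Average and divide by the number of spindles: 10 / 4 disks = 2.5 queued I/Os per disk, slightly above the commonly quoted target of about 2 per spindle. On a SAN LUN, though, queue length is a weak signal; latency counters such as Avg. Disk sec/Read and Avg. Disk sec/Write are generally more meaningful.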
If I run the following command in a Query window it works:
RESTORE DATABASE CIS_Source_Data_Test FROM DISK = 'y:\CIS_Source_Data_backup_2015_06_17_085557_7782407.bak' WITH RECOVERY, REPLACE
If I dynamically put together the command and store it in variable @cmd and then execute it using exec sp_executesql @cmd or exec (@cmd) it does not work. I get the following:
Msg 2745, Level 16, State 2, Procedure CIS_Source_Data_Refresh, Line 92 Process ID 62 has raised user error 50000, severity 20. SQL Server is terminating this process. Msg 50000, Level 20, State 1, Procedure CIS_Source_Data_Refresh, Line 92 RESTORE DATABASE is terminating abnormally. Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
Why won't it work when I try to create and run it dynamically?
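For comparison, a minimal sketch of building the same command dynamically (same path and database name as above); note that sp_executesql requires an NVARCHAR/NCHAR argument and that embedded single quotes must be doubled:
DECLARE @cmd NVARCHAR(MAX)
SET @cmd = N'RESTORE DATABASE CIS_Source_Data_Test '
         + N'FROM DISK = ''y:\CIS_Source_Data_backup_2015_06_17_085557_7782407.bak'' '
         + N'WITH RECOVERY, REPLACE'
EXEC sp_executesql @cmd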
I need to restore a test DB from a production backup, but once it is restored I need all the permissions of the SQL logins and Windows AD accounts in the test DB to remain intact, as they were before.
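A hedged sketch of the remapping step that is usually needed after restoring a production backup onto a test server (the user and login names below are hypothetical): database-level permissions travel inside the backup, but SQL logins from the other server end up as orphaned users whose SIDs no longer match.
-- list database users whose SIDs no longer match a server login
EXEC sp_change_users_login 'Report'
-- re-point an orphaned database user at the matching login on the test server
ALTER USER AppUser WITH LOGIN = AppUser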
-- Initialize Control Mechanism
DECLARE @Drive TINYINT, @SQL VARCHAR(100)
SET @Drive = 97
-- Setup Staging Area
DECLARE @Drives TABLE ( Drive CHAR(1), Info VARCHAR(80) )
WHILE @Drive <= 122
BEGIN
    SET @SQL = 'EXEC XP_CMDSHELL ''fsutil volume diskfree ' + CHAR(@Drive) + ':'''
    INSERT @Drives ( Info ) EXEC (@SQL)
    UPDATE @Drives SET Drive = CHAR(@Drive) WHERE Drive IS NULL
    SET @Drive = @Drive + 1
END
-- Show the expected output
SELECT Drive,
    SUM(CASE WHEN Info LIKE 'Total # of bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS TotalBytes,
    SUM(CASE WHEN Info LIKE 'Total # of free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS FreeBytes,
    SUM(CASE WHEN Info LIKE 'Total # of avail free bytes : %' THEN CAST(REPLACE(SUBSTRING(Info, 32, 48), CHAR(13), '') AS BIGINT) ELSE CAST(0 AS BIGINT) END) AS AvailFreeBytes
FROM ( SELECT Drive, Info FROM @Drives WHERE Info LIKE 'Total # of %' ) AS d
GROUP BY Drive
ORDER BY Drive
I am trying to set up a test cluster and am having an issue. When I try to create the resource of a physical disk, it takes both drive E: and drive Q: and doesn't separate them into two physical disks as resources. This means when I try to associate the quorum disk, it links to the physical disk resource of drives E and Q. Then when I try to install SQL2k5 I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E and Q onto two physical disk resources, so I can specifically associate the quorum with Q and SQL with E, or should I be setting the quorum disk to a majority node set? Thanks in advance.
I have a three-tier system using SQL Server 2000. We are currently experiencing I/O bottlenecks on our SCSI RAID 10 array, which holds the data and the logs in separate partitions.
So my options as I understand it are:
Get Enterprise edition
or
Get another physical raid 10 array and separate the logs and data i.e. data on one array and logs on the other array.
I would like to try the latter but I am totally unsure how much difference this will make or whether it will make any difference at all.
Does anyone know how much performance increase I will get from using two arrays as opposed to one?
Any other advice on this scenario would be greatly appreciated.
In ReplMon, some Log Reader agent jobs display as "not running". Right-clicking on one, the only option is to "Stop" the agent. It seems to me that if it's NOT RUNNING you should be able to Start it, but that option is disabled. Also, looking at the Agent Job Activity Monitor on the distribution server, it says that the job is currently executing.
Running the Replication Agents Checkup job yields nothing and those jobs are still "not running". I can easily Stop/Start the Log Reader in ReplMon or insert a tracer token and then all looks fine. I'm just puzzled by the inconsistency and wondering how I can programmatically check and resolve it.
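One hedged way to check this programmatically is to query the replication metadata in the distribution database (these are the standard Log Reader tables; runstatus codes are 1=Start, 2=Succeed, 3=In progress, 4=Idle, 5=Retry, 6=Fail):
-- latest logged session per Log Reader agent
SELECT a.name AS agent_name, h.runstatus, h.start_time, h.time AS last_logged, h.comments
FROM distribution.dbo.MSlogreader_agents AS a
JOIN distribution.dbo.MSlogreader_history AS h ON h.agent_id = a.id
WHERE h.time = (SELECT MAX(h2.time) FROM distribution.dbo.MSlogreader_history AS h2 WHERE h2.agent_id = a.id)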