SQL Server Admin 2014 :: Running Frequently Transaction Log Backup During Integrity Check DB
Jul 6, 2014
Will running frequent transaction log backups during the integrity check DB / optimization job cause any issue or impact, given that the job's duration will be extended...?
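For reference, a minimal sketch of the kind of log backup that would run alongside the job (database name, path, and schedule are placeholders); log backups are not blocked by DBCC CHECKDB, but both compete for I/O, so each can slow the other down:
-- Hypothetical database and path; scheduled e.g. every 15 minutes by an Agent job
BACKUP LOG [MyDB]
TO DISK = N'X:\Backups\MyDB_log.trn'
WITH CHECKSUM;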
View 2 Replies
Mar 8, 2015
I've been running the Ola Hallengren maintenance script for the last five months without missing a beat. Today I found an error stating that the user database integrity check job failed last night. This is running on SQL Server 2014 BI edition with 64 GB of memory.
I ran DBCC CHECKDB on each database manually and all worked until I tried it on the biggest one, which is about 18 GB. It just keeps running, and I eventually stopped it, so I'm guessing it is memory, but that doesn't make sense considering the server has 64 GB. Max/min server memory is set to 64 GB / 4 GB. Again, this was never an issue until last night. I've been searching all morning but not seeing much on this error: "The operating system returned error 1453".
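A minimal sketch (assuming the default configuration option names) for double-checking the configured memory limits before digging further into error 1453:
-- Values are in MB; 'show advanced options' must be on to see these
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)';
EXEC sp_configure 'max server memory (MB)';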
View 5 Replies
View Related
Dec 23, 2014
I'm working on databases where statistics on some indexes (tables) go stale very quickly. One minute after I update them manually they show a 10-20% change, and five minutes later the change is over 100%. The tables are updated very frequently (multiple times per second).
When I run a query against sys.stats, sys.dm_db_stats_properties and other dynamic views, I see that the statistics were last updated when I did it manually, even though the modification rate has passed the 500 + 20% of rows threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not update them automatically.
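A sketch (table name is a placeholder) for comparing row counts against the modification counter, to see how far past the auto-update threshold the statistics actually are:
-- Shows last update time, row count, and modifications since the last update
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable');  -- hypothetical table name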
View 2 Replies
View Related
Dec 5, 2014
I've recently started working with a public sector organisation that has four clustered SQL instances, with about 80% of their databases mirrored.
Looking at the transaction log, a transaction log backup seems like a good idea, as the log is 4x larger than the data file. But I'm not allowed access to the physical server to check which drive I could write the .trn to. No RDP, no VMware; let's be honest, I'm not even allowed to launch a command line. The server manager also informs me: "We will need to carefully look at database backups if you guys want to start doing these backups on box, as that will break our off box backup routine (it will screw the transaction chain)."
I don't understand how backing up the transaction log could break the "transaction chain".
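If the worry is interfering with an existing off-box log backup chain, one option to be aware of is a copy-only log backup, which does not affect the sequence that other log backups rely on; a minimal sketch with placeholder names:
-- Hypothetical database and path; COPY_ONLY leaves the existing log chain untouched
BACKUP LOG [MirroredDB]
TO DISK = N'X:\Backups\MirroredDB_log_copyonly.trn'
WITH COPY_ONLY;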
View 9 Replies
View Related
Jun 10, 2015
I have a full database backup up to the previous day and a transaction log backup covering today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty applying today's transactions from today's transaction log backup. What are the steps to restore the full database backup and then one day's transaction log backup? Note: there is no differential database backup or any other transaction log backup.
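A sketch of the basic restore sequence (names and paths are placeholders): restore the full backup without recovery, then apply the log backup and recover:
-- 1) Full backup, leaving the database able to accept log backups
RESTORE DATABASE [MyDB]
FROM DISK = N'X:\Backups\MyDB_full.bak'
WITH NORECOVERY, REPLACE;
-- 2) Today's transaction log backup, then bring the database online
RESTORE LOG [MyDB]
FROM DISK = N'X:\Backups\MyDB_log.trn'
WITH RECOVERY;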
View 8 Replies
View Related
Aug 3, 2015
I need to restore a database; here's the scenario:
Data was deleted on Friday evening. The database needs to be restored to Friday afternoon, but some data was also entered on Monday, and that data needs to be kept.
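One common approach (a sketch only; names, paths, and the STOPAT time are placeholders) is to restore a copy of the database to a point just before the delete and then merge the recovered rows back into the live database, so Monday's data is preserved:
-- Restore a side-by-side copy up to Friday afternoon
RESTORE DATABASE [MyDB_Recover]
FROM DISK = N'X:\Backups\MyDB_full.bak'
WITH MOVE 'MyDB' TO N'X:\Data\MyDB_Recover.mdf',      -- hypothetical logical file names
     MOVE 'MyDB_log' TO N'X:\Data\MyDB_Recover.ldf',
     NORECOVERY;
RESTORE LOG [MyDB_Recover]
FROM DISK = N'X:\Backups\MyDB_log.trn'
WITH STOPAT = '2015-07-31 15:00:00', RECOVERY;         -- placeholder Friday-afternoon time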
View 8 Replies
View Related
Apr 7, 2008
Our system is 24/7, and the disk holding the ldf file is 30 GB.
As per the advice given by one DBA, I set the initial log size to 29 GB, restricted the growth, and set autogrowth to 500 MB.
Following advice from another DBA, I do not run a shrink of the ldf on a daily basis.
He then suggested creating a maintenance plan with a database integrity check, which would check the disk space threshold.
I did not understand how the maintenance plan will check the disk space and raise an alert.
Where should I look for the alert from this maintenance plan? Secondly, how can I be warned before the log file fills the disk, rather than only getting a message that the disk is full?
Please suggest the best method: where am I wrong, and how should I handle the situation without a DBA monitoring it?
Our system also has a lot of batches running every minute.
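A minimal sketch of one way to watch log usage yourself (the threshold and schedule are assumptions); this could run from an Agent job that raises an alert or sends mail when usage passes a limit:
-- Log size and percentage used for every database
DBCC SQLPERF(LOGSPACE);
-- Or, inside a specific database, size versus space used for the log file
SELECT name,
       size / 128.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files
WHERE type_desc = 'LOG';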
View 13 Replies
View Related
Oct 29, 2015
The usual way to check if a file exists:
DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\server\BSQL_Backup\file.bak', @File_Exists OUT
PRINT @File_Exists
1
And to check a folder, "nul" can be appended, but it doesn't work for a UNC path:
DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\server\BSQL_Backup\nul', @File_Exists OUT
PRINT @File_Exists
0
If xp_subdirs is used instead, like:
EXEC master.dbo.xp_subdirs '\\server\BSQL_Backups'
and the folder doesn't exist, it fails with: Msg 22006, Level 16, State 1, Line 3 - xp_subdirs could not access '\\Server\BSQL_Backups\*.*': FindFirstFile() returned error 67, 'The network name cannot be found.'
How do I check whether a UNC folder exists before a backup? In my code I want to verify the UNC folder exists before running the backup; the UNC path is retrieved from another table or from the backup history.
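One approach (a sketch; the path is a placeholder) is to capture the full result set of xp_fileexist, which reports separately whether the path is a file, whether it is a directory, and whether the parent directory exists, and then check the directory column for the UNC folder:
-- "File is a Directory" = 1 means the UNC folder exists
DECLARE @Results TABLE (FileExists INT, IsDirectory INT, ParentDirExists INT);
INSERT INTO @Results
EXEC master.dbo.xp_fileexist '\\server\BSQL_Backups';
IF EXISTS (SELECT 1 FROM @Results WHERE IsDirectory = 1)
    PRINT 'UNC folder exists';
ELSE
    PRINT 'UNC folder not found';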
View 6 Replies
View Related
Jun 3, 2015
We carried out an in-place upgrade on our production server on Saturday - going from 2008 R2 to 2014.
We had tested this method out in dev/test and pre-production with only minor post issues to fix.
However, on production we had an issue whereby checkdb was hitting 100% CPU and caused overnight processes to hang. The checkdb statement was terminated and disabled by a colleague at 1 am.
Since then we have restored this database to a dev server and run CHECKDB against it with NO_INFOMSGS and ALL_ERRORMSGS, but it has been running since Monday morning and still hasn't finished!
The database is just over 800 GB, and whilst CHECKDB was crippling the CPU, it was doing very few logical reads. However, sp_whoisactive shows that it has done 56 million reads so far, and this number increases periodically, so it looks like it keeps going back to re-check the database with a deep dive.
Also, on a different environment, we ran check table statements and one of them took over 9 hours for a single table but came back clean (see attachment).
We need to wait for the output but the database is still in use in production and the mess will just get worse if it is indeed corrupted.
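While waiting, a lighter-weight check is sometimes used to get a faster answer (a sketch; the database name is a placeholder); PHYSICAL_ONLY skips the logical checks, so it is not a full substitute for a complete CHECKDB:
-- Faster, physical-structure-only consistency check
DBCC CHECKDB (N'ProdDB') WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS;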
View 5 Replies
View Related
Oct 2, 2014
I have the following setup:
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore, answers like "don't have so many small databases that are frequently dropped and restored" would be somewhat unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for Memory-Optimized Tables (new and wonderful feature of SQL 2014 for sure, but I'm not using it, at least yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase]
SELECT
ISNULL((select top 1 1 from sys.filegroups FG where FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
View 3 Replies
View Related
Apr 12, 2015
I'm looking at installing 2008R2 and 2014 side by side, then using Mirroring to provide HA for the 2008R2 instance and AoHA for the 2014 instance. I'd be using the same two physical servers for both the Mirroring pair and the AoHA pair.
View 2 Replies
View Related
Mar 13, 2014
Is there a formula for calculating how expensive a transaction will be, in terms of disk space used, before it's run? I don't want it accurate to the MB, just rough enough so I can determine how much additional space to assign to a transaction log or SAN volume.
Currently we're reindexing ~25 billion rows, nothing too wide, say 12 columns consisting of 1 varchar(50) and the rest ints, bits and money. Roughly speaking, if I rebuild the clustered index on an int identity (with SORT_IN_TEMPDB), how would I calculate the disk space used?
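A rough sketch (the table name is a placeholder): a rebuild generally needs free space on the order of the index's current size (plus log space, or tempdb space when SORT_IN_TEMPDB is used), so measuring the current size of the clustered index gives a starting estimate:
-- Approximate current size of the clustered index (index_id = 1), in MB
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS used_mb
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.BigTable')   -- hypothetical table
  AND ps.index_id = 1
GROUP BY ps.object_id;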
View 2 Replies
View Related
Jul 18, 2015
I have a database that is part of an AlwaysOn group that is filling up the transaction log drive, even though I have a daily full backup and transaction log backups set for every 2 hours. The backups run from both the primary and secondary replicas, backing up to a shared disk, and I have the backup preference set to the primary.
When I try to shrink the log I get "The transaction log for database 'DB' is full due to 'LOG_BACKUP'". I have to manually back up the transaction log and then shrink it. Why aren't the maintenance plan backups doing this, even though they are "working"?
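A quick sketch for confirming what is holding the log up (the database name in the WHERE clause is a placeholder):
-- Shows why the log cannot be truncated (e.g. LOG_BACKUP, AVAILABILITY_REPLICA)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';   -- hypothetical database name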
View 9 Replies
View Related
Apr 30, 2015
I recently installed a standalone version of SQL Server 2014 Standard on my work computer. I used Access before, but I want to use a SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to pick up this file and import it into the server (it's basically a list of names).
Here what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily.
At first the jobs would fail with "Access is Denied". I added a credential using my network account (which has admin rights on the work computer), and also added a proxy for the credential I made.
Now the jobs fail with a "Cannot open data file" error. I tried changing things here and there, but I can't get it to work.
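For reference, a sketch of the T-SQL behind the credential/proxy setup (the account, secret, and names are placeholders); the proxy has to be granted to the SSIS subsystem and then selected in the job step's "Run as" option, and the account itself needs read access to the share:
-- Hypothetical names and account
CREATE CREDENTIAL [FileShareCred]
WITH IDENTITY = N'DOMAIN\MyAccount', SECRET = N'password-here';
EXEC msdb.dbo.sp_add_proxy
     @proxy_name = N'FileShareProxy',
     @credential_name = N'FileShareCred';
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = N'FileShareProxy',
     @subsystem_name = N'SSIS';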
View 9 Replies
View Related
Sep 21, 2015
I want to print the state of a running query to the output, because my query takes a long time to run, but the PRINT statement does not work correctly!
Note: I cannot use "go" in my query!
/* My Query */
print 'Fetch Data From Table1 is Running...'
insert into @T1
select x,y,z
from Table1
print 'Fetch Data From Table1 Done.'
[Code] .....
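A common workaround sketch (the message text is illustrative): RAISERROR with severity 0 and the NOWAIT option flushes the message to the client immediately instead of waiting for the output buffer, so it can stand in for PRINT as a progress indicator:
-- Severity 0 behaves like an informational message; WITH NOWAIT sends it right away
RAISERROR('Fetch Data From Table1 is Running...', 0, 1) WITH NOWAIT;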
View 1 Replies
View Related
Oct 30, 2015
I have a SQL Server 2008 instance that is running on "LiveServer" our production database (ProdDB) - and we need to upgrade to 2014. In order to do some upgrade testing, I spun up a VM with the same version of SQL server on the test VM (TestServer), did a backup of the production DB from the live server, and restored it to TestServer under a different name (ProdDBUA).
I then installed SQL2014 Upgrade advisor onto TestServer, and ran it, checking all the boxes (reporting services etc..) and it all came back clean - no issues whatsoever - not a single warning even. I'm under the impression that stored procs/functions etc... all reside within the DB, so a backup will include those. Is that correct?
The problem is, I know I have stored Procs, functions and views that use deprecated joins in that LiveServer.ProdDB. What do I need to do/configure/check in order to make sure that the Upgrade Advisor is actually checking through all that T-SQL that has deprecated code? I want to have a list to give to my report writers of procs/functions/views that need to be rewritten prior to the upgrade going live.
If a modification needs to be run first on TestServer.ProdDBUA (a cursor to change a path, etc.), that's fine. The DB is running in compatibility level 90.
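As a rough first pass (a sketch only, not a replacement for Upgrade Advisor), the module definitions can be searched for the old-style outer join operators; the patterns will also hit comments and string literals, so expect false positives:
-- Crude scan of procs/functions/views for legacy *= / =* join syntax
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%*=%'
   OR m.definition LIKE '%=*%';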
View 4 Replies
View Related
Feb 7, 2015
I have this command :
Exec 'update .... insert ..... delete ..... insert ...'
I execute these commands in one execution:
exec ('...')
Do these commands act as a single transaction? If one of them raises an error, do the other commands still run, or are they rolled back?
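If the intent is all-or-nothing behaviour, a sketch of wrapping the call in an explicit transaction with error handling (the statements are placeholders), rather than relying on the default behaviour of EXEC:
-- XACT_ABORT plus TRY/CATCH makes the whole batch atomic
SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC ('update ... insert ... delete ... insert ...');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;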
View 1 Replies
View Related
Jun 6, 2015
Whenever I do a reinitialization, the publisher's table schema is applied at the subscriber.
I don't want all of those changes applied...
How can I skip the schema changes in transactional replication?
View 1 Replies
View Related
May 15, 2015
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
View 3 Replies
View Related
Sep 21, 2015
I'm working on a large scale project that is currently in production. We have a big process that recently changed to use In-Memory Tables with SQL 2014 for performance efficiency.
The Process uses:
51 In-Memory SQL Tables.
50 stored procedures (not natively compiled) that load data (INSERT) from about 150 regular tables and in-memory tables.
300 validations (short, non-native stored procedures) selecting from those in-memory tables (and inserting into an in-memory table that stores any validation errors).
At the end of this process we delete the data relevant to each run from the tables (DELETE FROM ... WHERE).
By the way:
No UPDATE STATISTICS is run on the in-memory tables; when we tested it, it slowed the process down and caused some locking.
We call this process from ADO.Net, running the load stored procedures first and then the validations; each SP uses a different SQL connection. In normal use, everything works fine and takes about 1.5 seconds.
Under a stress test (6 clients x 100 tasks) for 30 minutes, after several minutes we start getting this SQL exception (about 1 exception for every 20 tasks):
41301. A previous transaction that the current transaction took a dependency on has aborted, and the current transaction can no longer commit.
Transactions in Memory-Optimized Tables
The Exception is not clear. We are not using BEGIN TRANSACTION in the process. The SQL Exception occurs in different stored procedures each time.
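Error 41301 is a conflict-style failure that access to memory-optimized tables can hit under concurrency, and the usual guidance is to retry the failed operation; a minimal T-SQL retry sketch (the procedure name and retry count are placeholders):
-- Hypothetical wrapper: retry the validation proc a few times on error 41301
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.usp_RunValidation;   -- hypothetical procedure
        SET @retry = 0;               -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 41301 AND @retry > 1
            SET @retry -= 1;          -- transient dependency failure, try again
        ELSE
            THROW;
    END CATCH;
END;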
View 2 Replies
View Related
Aug 9, 2014
How can I take a backup of the indexes for a particular table?
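A sketch for capturing the index definitions from the catalog views (the table name is a placeholder), so they can be scripted and re-created later:
-- List index names, types, and key columns for one table
SELECT i.name AS index_name,
       i.type_desc,
       c.name AS column_name,
       ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')   -- hypothetical table
  AND i.index_id > 0                           -- skip the heap entry
ORDER BY i.name, ic.key_ordinal;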
View 1 Replies
View Related
Aug 9, 2014
How can I take a backup of the indexes for a particular table?
View 5 Replies
View Related
Sep 24, 2014
Is there a way to back up a SQL database without having to stop users from connecting to it?
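A native full backup is an online operation and does not require disconnecting users; a minimal sketch with placeholder names:
-- Users can stay connected while this runs
BACKUP DATABASE [MyDB]
TO DISK = N'X:\Backups\MyDB_full.bak'
WITH COMPRESSION, CHECKSUM;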
View 5 Replies
View Related
Feb 10, 2015
Look at this scenario
1- I created a backup device
2- I created a full backup maintenance plan and ran it, with the overwrite option, on the backup device
3- I created a differential backup maintenance plan and ran it with the append option on the same backup device file
4- I created a log backup maintenance plan with the append option on the same backup device
When I restored the database from the backup device, I found three backups: the full, the differential, and the log backup.
5- I ran the differential backup maintenance plan again (it is supposed to run every night)
When I went to restore the database again, I found only two backups: the full and the last differential!
What I want to do is take a full backup every week, append a differential backup every day, and append a log backup every hour.
When I ran the last differential backup, it erased the first differential and the log backups. Why is this happening, and how can I implement this scenario and keep all the differential backups on the same backup device?
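For reference, a sketch of the underlying options (the database name is a placeholder, the device name is illustrative): whether a backup overwrites or appends on a device is controlled by INIT versus NOINIT, so a differential that wipes the earlier sets is behaving as if INIT (overwrite) were used:
-- Append to the existing backup sets on the device
BACKUP DATABASE [MyDB] TO [MyBackupDevice] WITH DIFFERENTIAL, NOINIT;
-- Overwrite everything on the device (what appears to be happening)
BACKUP DATABASE [MyDB] TO [MyBackupDevice] WITH DIFFERENTIAL, INIT;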
View 4 Replies
View Related
May 10, 2015
I run the following:
EXECUTE dbo.DatabaseBackup
@Databases = 'F1SB',
@Directory = 'F:\SqlBackup2014',
@BackupType = 'FULL',
@Compress = 'Y',
@Encrypt = 'Y',
[code]...
I cannot see the file created in the directory. The account under which the SQL Server Agent job runs has full privileges on it and is sysadmin. Then I ran the command in SSMS:
BACKUP DATABASE [F1SB] TO DISK = N'F:\SqlBackup2014\<server>\F1SB\FULL\IGS-DB01_F1SB_FULL_20150510_214455.bak' WITH NO_CHECKSUM, COMPRESSION, ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = [serverCertificate])
and I get this error message:
Msg 3013, Level 16, State 1, Line 13
BACKUP DATABASE is terminating abnormally.
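Msg 3013 on its own usually follows a more specific preceding message; one thing worth ruling out (a sketch, using the folder structure assumed above) is that the full subfolder path exists before the manual backup, since the DatabaseBackup procedure creates the folders for you but a hand-written BACKUP statement does not:
-- Create the folder tree the manual backup expects (hypothetical reconstructed path)
EXEC master.dbo.xp_create_subdir N'F:\SqlBackup2014\IGS-DB01\F1SB\FULL';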
View 9 Replies
View Related
Sep 25, 2015
I need a backup script to take all the database backups. We have a maintenance plan, but our database name is 98 characters long, and when we take the backups through the maintenance plan it appends the date and time zone information while storing the backup history, exceeding the 128-character limit, so it does not write the information to msdb.
So we want to take the backups using a script, and it has to create a subfolder for each database. Also, if any database fails it should continue with the others.
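A minimal sketch of such a script (the root path is a placeholder); it loops over the online user databases, creates a subfolder per database, and keeps going if one backup fails:
-- Hypothetical root folder; a failure on one database is reported and skipped
DECLARE @db SYSNAME, @folder NVARCHAR(400), @file NVARCHAR(500);
DECLARE dbs CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4 AND state_desc = 'ONLINE';
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @folder = N'X:\Backups\' + @db;
    SET @file = @folder + N'\' + @db + N'_'
              + REPLACE(REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 120), ':', ''), ' ', '_') + N'.bak';
    BEGIN TRY
        EXEC master.dbo.xp_create_subdir @folder;      -- per-database subfolder
        BACKUP DATABASE @db TO DISK = @file WITH CHECKSUM;
    END TRY
    BEGIN CATCH
        PRINT 'Backup failed for ' + @db + ': ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM dbs INTO @db;
END;
CLOSE dbs;
DEALLOCATE dbs;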
View 6 Replies
View Related
Oct 27, 2015
I've got the query below and have tried several variations, and I still can't seem to find a reliable way to query the server to bring back the last full backup per database. I'm seeing multiple records in the backupset table with type = 'D'. Looking online, type D = Database, which I assumed meant a full database backup. Apparently not. Try running the query below on one of your databases, then check whether the date is actually the last full backup date.
DECLARE @db_name VARCHAR(100)
SELECT @db_name = DB_NAME()
-- Get Backup History for required database
SELECT TOP ( 30 ) s.database_name,
m.physical_device_name,
[Code] ....
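For comparison, a sketch of one way to get just the most recent full backup per database; copy-only backups also show up with type 'D', which may explain some of the extra rows:
-- Latest non-copy-only full backup per database
SELECT bs.database_name,
       MAX(bs.backup_finish_date) AS last_full_backup
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'
  AND bs.is_copy_only = 0
GROUP BY bs.database_name
ORDER BY bs.database_name;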
View 9 Replies
View Related
Mar 30, 2015
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL Server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS
Our new server (2014) has the standard collation Latin1_General_CI_AS
Both collations have CI and AS
From experience I know that databases with different collations can reside next to each other on the same instance.
The only problem could be ('could be'!) heavy use of TempDB, with a high volume of transactions executed in TempDB, combined with choosing the snapshot isolation level...
The application the databases belong to is very static, hardly updated, and queried only a few times per hour (so no TempDB issue, I guess).
Are there any concerns with databases that use different collations running on the same instance?
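A small sketch for seeing where the collations differ and where conflicts typically surface (object names are placeholders); cross-database joins on character columns, or joins against temp tables created with the server collation, are where "cannot resolve collation conflict" errors tend to appear:
-- Compare the server collation with each database's collation
SELECT SERVERPROPERTY('Collation') AS server_collation;
SELECT name, collation_name FROM sys.databases;
-- One way to sidestep a conflict in a mixed-collation join (hypothetical tables/columns)
-- SELECT ... FROM dbo.A AS a JOIN #B AS b
--     ON a.Code = b.Code COLLATE DATABASE_DEFAULT;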
View 5 Replies
View Related
Oct 31, 2014
Is there a way to back up all the stored procedures in a database?
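Besides scripting them out through SSMS (Tasks > Generate Scripts), a quick sketch for pulling every procedure definition into a result set that can be saved off:
-- Dump the definition of every stored procedure in the current database
SELECT s.name AS schema_name,
       p.name AS proc_name,
       m.definition
FROM sys.procedures AS p
JOIN sys.schemas AS s ON s.schema_id = p.schema_id
JOIN sys.sql_modules AS m ON m.object_id = p.object_id
ORDER BY s.name, p.name;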
View 4 Replies
View Related
Nov 12, 2014
I recently had a DBA interview; below is a question they asked me.
"Does an AlwaysOn secondary replica support full backups, and if not, why?"
I know AlwaysOn secondary replicas support only copy-only and transaction log backups; why don't they support regular full backups?
View 9 Replies
View Related
Nov 13, 2014
If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes, or will they be added to the database afterwards?
View 2 Replies
View Related
Dec 12, 2014
With all the new functionality, can 2014 now restore a single table from a standard backup without using any third party tools? I have looked, but can't see this listed as a feature (though that doesn't mean it's not there, maybe I've just missed it).
View 6 Replies
View Related
Jan 6, 2015
I am planning to take one full backup and then transaction log backups every month, as I will be making changes in the database only once a month.
I am aware that, in case of disaster, I need to restore the database with all of the transaction log backups. My plan is to keep transaction log backups for 5 years, and after 5 years I would take another full backup.
Should I take any other precautions, or are there any concerns with this approach?
View 9 Replies
View Related