URGENT !!!! Transaction Log Backup Running for a Long Time
Dec 26, 2002
Hi Guys,
We have a 20 GB database with a heavy transaction volume. The transaction log backup is scheduled every hour
from 3.00 AM to 9.00 PM.
We take a full backup to disk at 9.00 PM and another full backup to tape at 2.00 AM.
During the day, from 6.00 AM onwards, it works fine: each log backup completes within seconds and is approximately 50 to 200 MB.
But the very first transaction log backup at 3.00 AM runs for about 3 hours and is approximately 11 GB, which is almost the size of the full backup. Some DTS packages run during the night, the usual reindexing and integrity checks run as well, and there is no significant user traffic overnight. But I have no idea why the very first transaction log backup in the morning takes so much longer and is this big. Is there any workaround to fix this problem?
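A quick way to confirm this (a sketch, not from the original post; it only reads the backup history SQL Server already keeps in msdb) is to list the size of each recent log backup; if the 3.00 AM backup dwarfs the daytime ones, the overnight reindexing, integrity checks and DTS packages are what is filling the log between the 9.00 PM backup and 3.00 AM:
-- Sketch: size and duration of recent transaction log backups (database name is a placeholder).
SELECT database_name,
       backup_start_date,
       backup_finish_date,
       backup_size / 1048576.0 AS size_mb
FROM msdb.dbo.backupset
WHERE type = 'L'                      -- transaction log backups
  AND database_name = 'MyDatabase'
ORDER BY backup_start_date DESC
If the history confirms it, one low-risk option is an extra log backup scheduled right after the maintenance jobs finish, so the 3.00 AM backup is not picking up the whole night's activity in one go.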
Please advise.
Thanks,
Anu
View 7 Replies
Oct 12, 2000
hi,
my backup job took a backup at 3.00 AM and the backup went well, but the job is still running; it is now 10.00 AM.
I tried to kill the job, but it is still running.
Why is my process still running after the backup has completed?
Can anyone suggest how I should resolve this?
Thanks!
--Ram
View 2 Replies
Feb 7, 2000
Hi,
What do I do if the transaction log is full?
Can I automate the truncation of the transaction log?
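For what it's worth, a minimal sketch (database and path are placeholders): in full recovery the log only gets truncated by a log backup, so scheduling one regularly is the usual way to automate this:
-- Sketch: a regular log backup truncates the inactive portion of the log.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.bak' WITH NOINIT
-- If the log contents are not needed for recovery (this breaks the log backup chain):
-- BACKUP LOG MyDb WITH TRUNCATE_ONLY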
Bindu
View 3 Replies
Oct 24, 2001
Is it safe to back up while the database is running 1,000 transactions/sec? If yes, why should I buy Veritas Backup Exec Server Edition and the Veritas Backup Exec Online Backup Pack?
Michael
View 2 Replies
Jun 23, 2015
If an application is reflecting timeout errors but there are no backups running, the network is fine and the data and log files are within normal parameters, what else could be the cause of the errors?
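Not from the post, but one common cause once backups, network, and file sizes are ruled out is blocking; a hedged sketch for checking it while the timeouts are happening (assumes SQL Server 2005 or later):
-- Sketch: list requests that are currently blocked and what they are waiting on.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0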
View 9 Replies
Mar 30, 2007
Hello,
I'm trying to figure out why my transaction log backup is taking up to an hour to complete. I started off with the full recovery model, with a full database backup every Sunday, differential backups every Tuesday/Thursday, and log backups every 5 minutes. I would have thought that the log backups would execute much quicker because I'm backing them up more often.
Here is my backup statement; I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups] WITH NOINIT , NOUNLOAD , NAME = N'xxxx log backup', SKIP , STATS = 10, NOFORMAT
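The options themselves look reasonable; one thing worth checking (an assumption on my part) is how many backup sets have accumulated on that device, since NOINIT appends a new set every 5 minutes:
-- Sketch: list the backup sets piled up on the log backup device.
RESTORE HEADERONLY FROM [LogFilexxxxBackups]
If thousands of sets come back, it is worth testing whether writing each log backup to its own file, or reinitialising the device periodically with INIT, brings the duration back down.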
View 1 Replies
Jul 6, 2014
Will running frequent transaction log backups during the database integrity check / optimization job cause any issue or impact, given that their duration will extend?
View 2 Replies
Sep 20, 2000
How is it possible that for a 133 MB SQL 7 database, the backup of the database itself takes 2 seconds, but the transaction log backup takes 25 minutes? We are doing a log backup every 10 minutes and appending.
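One quick check (a sketch, not from the post): see how big the log file actually is and how full it is, since the log backup has to read the used portion of the log no matter how small the data is:
-- Sketch: log file size and percent used for every database (available on SQL Server 7).
DBCC SQLPERF(LOGSPACE)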
Thanks.
a
View 2 Replies
Jul 20, 2005
Hi, I have a stored procedure (MS SQL Server 2000) which operates on around 600,000 rows (SELECT, UPDATE, INSERT) and executes in 5 minutes. When I put it inside a SQL transaction it slows down to more than 5 hours (!!). I have to admit that it is not a problem with data locks (besides that procedure, nothing else is executed on the db). It is also not a problem with that exact procedure; other procedures also slow down heavily when wrapped in a SQL transaction. Only very, very seldom does a stored procedure within a transaction execute in a time comparable to its copy without the transaction. I guess it could be an MS SQL Server 2000 configuration/tuning problem. Any ideas? Chris
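A hedged diagnostic sketch (SQL Server 2000 system table; the spid is a placeholder): while the wrapped procedure is running, check from a second connection what it is actually waiting on:
-- Sketch: inspect the wait state of the long-running spid from another connection.
SELECT spid, status, waittype, lastwaittype, waittime, cpu, physical_io, blocked
FROM master..sysprocesses
WHERE spid = 55   -- placeholder: the spid executing the wrapped procedure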
View 1 Replies
May 16, 2007
We moved a 2000 database to another platform by restoring the database. It took a lot longer than I expected. Would it take less time to restore it a second time to the same target database since the allocations are already there?
Thanks
View 1 Replies
May 23, 2007
Hello All,
I have SQL Server 2005 installed on my machine and I am running the following query to insert 1,500 records into a simple table having just one column.
Declare @i int
Set @i=0
While (@i<1500)
Begin
Insert into test2 values (@i)
Set @i=@i+1
End
Here is the table definition:
CREATE TABLE [dbo].[test2](
[i] [int] NULL -- column name assumed; the original post showed only the int type
) ON [PRIMARY]
Now the problem is that on one of my servers this query takes just 500 ms to run, while on my production server and another test server it takes more than 25 seconds.
The same problem occurs with updates. I have checked the configurations of both servers and found them to be the same. There are also no indexes defined on either table. I was wondering what the possible reason for this could be; if anyone has any pointers, that would be really useful.
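One experiment worth running on both servers (a sketch, not a claimed explanation): wrap the loop in a single transaction so the 1,500 inserts share one commit rather than 1,500 autocommits; if the slow server speeds up dramatically, the difference is most likely transaction log write latency on its disks.
-- Sketch: the same loop as a single transaction (one log flush at commit instead of one per insert).
Declare @i int
Set @i = 0
Begin Transaction
While (@i < 1500)
Begin
    Insert into test2 values (@i)
    Set @i = @i + 1
End
Commit Transaction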
Thanks in advance,
Mitesh
View 6 Replies
Feb 13, 2001
Has anybody come across situations where queries take longer to execute the second time? The server is a dedicated SQL Server box with 1 GB of memory.
Thanks in advance.
Praveena
View 2 Replies
Aug 1, 2007
SQL 2005 Standard on a server of decent spec.
The database in question is only about 5 GB on a 450 GB partition.
At the beginning of the month I run:
BACKUP LOG [objectstore] TO DISK = 'D:\Backups\Prod\backup_objectstore.BAK'
WITH NOFORMAT, INIT, NAME = N'objectstore backup'
and then every 10 minutes (within working hours) for the rest of the month I run:
BACKUP LOG [objectstore] TO DISK = 'D:\Backups\Prod\backup_objectstore.BAK'
WITH NOFORMAT, NOINIT, NAME = N'objectstore backup'.
The amount of data that gets backed up is the same throughout the month, and the load on the server as a whole also stays constant throughout the month - NOTHING increases over the month that would affect this server in any way. Yet at the beginning of the month the backup takes 10 seconds, and by the end it gets up to 5-6 minutes.
why?
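One likely factor (an assumption, since the post does not show the device contents): with NOINIT every 10-minute backup appends another backup set to the same .BAK file, so by month-end it holds thousands of sets. A minimal sketch of an alternative that writes each log backup to its own date-stamped file (the file name pattern is made up for illustration):
-- Sketch: build a per-backup file name so each log backup starts a fresh file.
DECLARE @file nvarchar(260)
SET @file = N'D:\Backups\Prod\objectstore_log_'
          + REPLACE(REPLACE(REPLACE(CONVERT(varchar(19), GETDATE(), 120), '-', ''), ':', ''), ' ', '_')
          + N'.trn'

BACKUP LOG [objectstore] TO DISK = @file
WITH INIT, NOFORMAT, NAME = N'objectstore log backup'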
Thanks
Alastair Jones.
"A computer once beat me at chess - but it was no match for me at kick boxing" - Emo Phillips.
View 11 Replies
Mar 13, 2008
Hi All,
My full backups are taking longer than usual on Sundays.
I know this has nothing to do with the SQL Server storage engine or database engine.
I have checked and there are no jobs running at this time, and this happens across all the servers sharing the SAN.
How can I prove that something else is responsible for this behaviour and not SQL Server?
Are there any counters (Perfmon), tools, or sniffers that can tell me what is causing this?
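Perfmon counters such as PhysicalDisk: Avg. Disk sec/Write and SQLServer:Backup Device: Device Throughput Bytes/sec are one angle. Another (a sketch that only uses the backup history in msdb) is to compare backup throughput day by day; if MB/sec drops sharply on Sundays while the sizes stay similar, the slowdown is coming from outside SQL Server:
-- Sketch: throughput of recent full backups, calculated from msdb's backup history.
SELECT database_name,
       backup_start_date,
       CAST(backup_size / 1048576.0 AS decimal(12, 2)) AS size_mb,
       DATEDIFF(second, backup_start_date, backup_finish_date) AS duration_sec,
       CAST(backup_size / 1048576.0
            / NULLIF(DATEDIFF(second, backup_start_date, backup_finish_date), 0)
            AS decimal(12, 2)) AS mb_per_sec
FROM msdb.dbo.backupset
WHERE type = 'D'                      -- full backups
ORDER BY backup_start_date DESC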
please help.
Thanks in advance.
View 1 Replies
May 16, 2008
Hi,
I have a package designed to bring data tables over to SQL Server. There are 9 data flow tasks that run in parallel to bring 9 data tables over. In BIDS, when I execute the package, it runs in about 8 minutes. If I start the scheduled job manually, it also runs in around 8 minutes. But it takes about 30 minutes when it runs at its scheduled time at midnight.
I wonder what I can do to speed up the scheduled job.
Thanks
View 13 Replies
Aug 9, 2015
We are importing XER formats through the wizard to a SQL Server database and it takes up to 35-45 minutes for each import (single project). Is there any option to reduce the time? Are there any other import options which can give us faster results?
View 0 Replies
Nov 15, 2007
Is there a way ( using an included SSIS task rather than coding a script task) to detect whether a package has run longer than a specified period of time?
So I can send an email to operators notifying them that a job is taking longer than usual.
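Not an SSIS-native answer, but one hedged alternative, assuming the package runs as a SQL Agent job: poll msdb from a separate job and alert when anything has been executing longer than a chosen threshold (the 60-minute value below is a placeholder):
-- Sketch: Agent jobs that started more than @threshold_minutes ago and have not finished.
-- Older Agent sessions can leave stale rows; filtering on the latest msdb.dbo.syssessions entry tightens this.
DECLARE @threshold_minutes int
SET @threshold_minutes = 60

SELECT j.name,
       a.start_execution_date,
       DATEDIFF(minute, a.start_execution_date, GETDATE()) AS running_minutes
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL
  AND a.start_execution_date < DATEADD(minute, -@threshold_minutes, GETDATE())
The result could feed sp_send_dbmail or raise an alert, and it keeps the long-running check outside the package itself.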
Thanks
View 12 Replies
Mar 1, 2005
Hi, we have a table with about 400,000 records in it. It is starting to take longer and longer to add a new record. I was thinking of creating another identical table and archiving off most of the records every month (we are now adding about 4,000 records a day). Is this the best thing to do?
I don't know a lot about SQL Server, so any help or suggestions would be great.
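A minimal sketch of the archiving idea, assuming an identically structured archive table already exists and a datetime column (here called created_date, a made-up name) identifies old rows:
-- Sketch only: table and column names are assumptions for illustration.
BEGIN TRANSACTION

INSERT INTO dbo.MyTable_Archive
SELECT *
FROM dbo.MyTable
WHERE created_date < DATEADD(month, -1, GETDATE())

DELETE FROM dbo.MyTable
WHERE created_date < DATEADD(month, -1, GETDATE())

COMMIT TRANSACTION
That said, 400,000 rows is not large for SQL Server, so it is worth checking the table's indexes first; slow inserts at this size are often an indexing or contention issue rather than a pure volume problem.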
View 4 Replies
Jun 15, 2015
We are running a SQL Server 2008R2 64-bit database system on a Windows 2012 R2 64-bit Standard system. I have noticed in recent weeks that our differential backups periodically are taking longer than expected to complete. The usual amount of time is about one hour, but on several occasions, it has taken upwards to five hours. The nights when the job takes longer to complete are on Friday.
I did some checking online, and one possible reason for this issue is my scheduling of the database reindexing on the morning of the differential backup. For example, this past Friday the reindexing occurred at 1:00 AM, with the differential running at 10:00 PM that night. The article that I read suggested the reindexing, which takes several minutes, if that, to complete, should be scheduled to run just before the full backup job.
View 2 Replies
Feb 18, 2015
SQL Server 2008 R2 - 6 GB memory... I attempted a backup on a 500 GB database but it was taking way too long. I checked the resources on the box and saw the CPU at 100%. I checked the SQL Server activity log and saw a hung query (the user was not even logged on) that had multiple threads, so I killed it, and now the CPU utilization is back to normal.
Trouble is, now all of the threads in the activity monitor for the backup show 'suspended' and the backup appears to be not doing anything.
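A hedged sketch for checking what the backup is actually doing on SQL Server 2008 R2, and whether the killed session is still rolling back (the spid in the KILL line is a placeholder):
-- Sketch: is the backup progressing, and what is it waiting on?
SELECT session_id, command, status, wait_type, percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('BACKUP DATABASE', 'BACKUP LOG')

-- Rollback progress of the killed session:
-- KILL 123 WITH STATUSONLY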
View 3 Replies
Nov 12, 2007
Hi,
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect that the datetime of the insert/update would be at the commit, however it seems that the insert/update statement is cached and getdate() is executed at the time of the cache instead of the commit. This causes problems as we select rows based on LAST_UPDATE and a commit may occur later but the earlier insert timestamp is saved to the database and we miss that update.
We would like to know if there is any way to tell SQL Server not to execute the getdate() function until the commit, or any other way to get the commit to create the correct timestamp.
We are using default isolation level. We have tried using getdate(), current_timestamp and even {fn Now()} with the same results. SQL Queries that reproduce the problem are provided below:
/* Different functions to get current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/
/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/
/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp ) /* ROW_VERSION rowversion */
GO
/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/
/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
declare @time datetime
select @time = getdate()
insert into COMMIT_TEST_UPDATE (PKEY,LAST_UPDATE)
select PKEY, getdate()
from inserted
end
GO
create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
declare @time datetime
select @time = getdate()
update COMMIT_TEST_UPDATE
set LAST_UPDATE = getdate()
from COMMIT_TEST_UPDATE, deleted, inserted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO
/* In our application deletes should never occur so we don't log when they get modified, we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
if ( select count(*) from deleted ) > 0
begin
delete COMMIT_TEST_UPDATE
from COMMIT_TEST_UPDATE, deleted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
end
GO
/* Delete any previous inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO
/* What is the current date/time */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the table - we see that the timestamp is 10 seconds older than the commit, in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missing deals (assuming the commit time is larger than some artificial time window).
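One alternative worth considering (a sketch, not the poster's design; it assumes the commented-out ROW_VERSION rowversion column is actually added and, for MIN_ACTIVE_ROWVERSION, a reasonably recent build of SQL Server 2005 or later): pick up changes with a rowversion high-water mark instead of a datetime, so rows belonging to transactions that commit late are never skipped.
-- Sketch: incremental pick-up based on rowversion instead of getdate().
-- @last_sync is the high-water mark persisted from the previous run.
DECLARE @last_sync binary(8)
DECLARE @current_max binary(8)

SET @current_max = MIN_ACTIVE_ROWVERSION()   -- lowest value any still-open transaction could write

SELECT PKEY
FROM dbo.COMMIT_TEST
WHERE ROW_VERSION >= @last_sync
  AND ROW_VERSION <  @current_max

-- Persist @current_max as @last_sync for the next run.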
Regards,
Mark
View 8 Replies
Oct 7, 2015
I have a table called employee_punch_record that we use to store employee time clock punches.
The columns are:
employeeid,
punch_timestamp,
punch_type (In / Out),
closed (bit used as status for open or closed pay periods),
ident
Here are some examples of a record:
bkingery   6    2015-10-06 16:59:04.000   In    0
bkingery   7    2015-10-06 16:59:09.000   Out   0
bkingery   8    2015-10-06 16:59:13.000   In    0
bkingery   9    2015-10-06 18:22:44.000   Out   0
bkingery   10   2015-10-06 18:22:46.000   In    0
bkingery   11   2015-10-06 18:22:48.000   Out   0
bkingery   12   2015-10-06 18:22:51.000   In    0
tfeller    5    2015-10-05 17:00:05.000   In    0
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the punch data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but figure I can do it using a formatting table of some sort. Is there a quick and easy way to move the transaction records into a more horizontally oriented record?
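A hedged starting point (a sketch that assumes punches strictly alternate In/Out per employee, so the Nth In pairs with the Nth Out): collapse the punch rows into one row per In/Out pair, which is much easier to feed into a horizontal time-card layout:
-- Sketch: pair each In punch with the matching Out punch for the same employee.
WITH ordered AS (
    SELECT employeeid,
           punch_timestamp,
           punch_type,
           ROW_NUMBER() OVER (PARTITION BY employeeid, punch_type
                              ORDER BY punch_timestamp) AS pair_no
    FROM dbo.employee_punch_record
    WHERE closed = 0
)
SELECT i.employeeid,
       CAST(i.punch_timestamp AS date) AS work_date,
       i.punch_timestamp AS time_in,
       o.punch_timestamp AS time_out
FROM ordered AS i
LEFT JOIN ordered AS o
       ON o.employeeid = i.employeeid
      AND o.pair_no    = i.pair_no
      AND o.punch_type = 'Out'
WHERE i.punch_type = 'In'
ORDER BY i.employeeid, i.punch_timestamp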
View 0 Replies
Jul 15, 2015
We take a full backup in the early morning and hourly transaction log backups during working hours for one database on the production server. The application team made certain changes to the design of that database in their development server. The backup from the development server was restored to the production server during working hours. After the restoration, should we take a full backup before the next transaction log backup? Would a transaction log backup without a full backup after the restoration of the database be valid?
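Not an authoritative answer, but the usual practice (sketched below with placeholder names) is to start a fresh backup chain immediately after such a restore, since log backups taken before it no longer line up with the restored database:
-- Sketch: re-establish the chain after the restore, then resume the hourly log backups.
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb_postrestore_full.bak'
WITH INIT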
View 5 Replies
Dec 7, 2001
How do we know the list of all the backups that are present in a particular backup device, such as the file number and the time each backup was taken, for the purpose of a restore?
When I run :
RESTORE HEADERONLY FROM BackupDeviceName
it doesn't give that info. Can anyone tell me the exact command to list all the files of a backup device so that I can restore the right one??
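For what it's worth, a sketch of one approach, assuming the backups were taken on the same server so the history is still in msdb; the position value is what goes into RESTORE ... WITH FILE = n:
-- Sketch: backup sets recorded for a given backup device (device name is a placeholder).
SELECT bs.database_name,
       bs.type,                  -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.position               -- use as FILE = <position> in RESTORE
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE bmf.logical_device_name = 'BackupDeviceName'   -- or match physical_device_name to the file path
ORDER BY bs.backup_finish_date

-- Then, for example:
-- RESTORE DATABASE MyDb FROM BackupDeviceName WITH FILE = 3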
Thanks.
Shalini.
View 2 Replies
Feb 4, 2008
The following code is taking longer and longer to run. I am not talking about the gradual increase in size; this job has normally been taking 30-40 minutes, but in the last few days it has gone from 1 hour to 2 hours to 3 hours... Any ideas why this is happening? I cannot see any other jobs running at this time.
declare @filename varchar(255)
set @filename =
(select top 1 physical_device_name
from ****.msdb.dbo.backupset bs, ****.msdb.dbo.backupmediafamily bf
where bs.media_set_id=bf.media_set_id
and database_name = 'Live_PRD'
and backup_start_date>getdate()-1
and type = 'D'
order by backup_start_date desc)
restore database REPORTS_REP
from disk=@filename
with
move 'LIVE_PRD_Data' to 'T:\SOUTH\REPORTS_REP_Data.mdf',
move 'LIVE_PRD_Log' to 'U:\SOUTH\REPORTS_REP_Log.ldf',
move 'LIVE_PRD_Log2' to 'U:\SOUTH\REPORTS_REP_Log2.ldf',
replace, stats=2, recovery
View 5 Replies
Mar 11, 2008
Hello, everyone:
I just heard that, for restore purposes, the full backup and transaction log backups should come from one maintenance plan; otherwise the transaction log backup files cannot be restored after restoring the full backup files.
Is it true? Can anyone offer official documentation?
In my system, full and transaction log backups are from one maintenance plan and restores are doing fine, so I am not sure whether that idea is true or not.
Thanks
ZYT
View 2 Replies
Jun 13, 2007
What is the difference between a differential backup and a transaction log backup?
View 1 Replies
Oct 14, 2007
I neglected to back up the transaction log as part of the process of backing up the database. Now I only have the backup file for the database and no transaction log backup. When I try to restore the database, I get a "tail log missing" error (which I'm assuming means it's looking for the t-log backup?).
Is it possible to restore, or even restore to a new database? I'm only looking to retrieve data from 2 tables within the backup file.
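It should be possible to restore the full backup into a brand-new database alongside the original, which avoids the tail-log check entirely. A hedged sketch (database name, paths, and logical file names are placeholders; RESTORE FILELISTONLY shows the real ones), after which the two tables could be copied out of the restored copy:
-- Sketch: restore the existing full backup as a new database next to the original.
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb_full.bak'

RESTORE DATABASE MyDb_Recovered
FROM DISK = N'D:\Backups\MyDb_full.bak'
WITH MOVE 'MyDb_Data' TO N'D:\Data\MyDb_Recovered.mdf',
     MOVE 'MyDb_Log'  TO N'D:\Data\MyDb_Recovered_log.ldf',
     RECOVERY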
Thanks!
SQL Server 2005 on Windows 2003 Server x64.
View 16 Replies
Jun 25, 2015
I am reading about the RESTORE command to a point in time using logs. I would like to know the earliest point in time a backup image can be recovered to with a T-SQL command before applying a log restore, and what range of log backups is needed during the restore.
My version is 2008 R2.
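A minimal sketch of the restore sequence (names, paths, and the STOPAT time are placeholders); roughly speaking, the earliest point in time that can be recovered is when the full backup finished, and an unbroken chain of log backups is needed from that full backup up to the STOPAT time:
-- Sketch: restore the full backup without recovery, then roll the logs forward to a point in time.
RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb_full.bak'
WITH NORECOVERY, REPLACE

RESTORE LOG MyDb
FROM DISK = N'D:\Backups\MyDb_log_1.trn'
WITH NORECOVERY

RESTORE LOG MyDb
FROM DISK = N'D:\Backups\MyDb_log_2.trn'
WITH STOPAT = N'2015-06-25T10:30:00', RECOVERY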
View 7 Replies
Jun 27, 2001
I run an operating system backup nightly using Veritas Backup Exec (v8); immediately after, I run a SQL-written backup job.
1. Can I use the same tape, appending to the media that Veritas has written to?
2. How do I run the backup from the command line from within Veritas? It has an option to "execute after backup job:".
- How do I run a SQL job from the operating system?
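For the last point, a hedged sketch (server and job names are placeholders): the "execute after backup job" command in Veritas can call osql to start a SQL Agent job:
osql -S MYSERVER -E -Q "EXEC msdb.dbo.sp_start_job @job_name = 'Nightly SQL Backup'"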
Thanks,
AJ
View 4 Replies
Oct 17, 2000
How do I run a DTS job from the command prompt? Any help is appreciated!
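A hedged sketch using the dtsrun utility (server and package names are placeholders); /S names the server, /E uses a trusted connection, and /N is the package name as stored on that server:
dtsrun /S MYSERVER /E /N "MyDtsPackage"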
Thanks.
View 2 Replies
Jul 22, 2002
Hi! I need to clean the transaction log of a local database. I used to do this in SQL Server 6.5 with the statement 'dump transaction {DB} with no_log'. After that I ran 'sp_spaceused @updateusage = 'true'' and the problem was corrected. How can I do this in version 7? Are the statements the same?
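A hedged sketch of the rough SQL Server 7 equivalent (database and logical log file names are placeholders); note that truncating the log like this breaks the log backup chain, so it only makes sense when log backups are not being relied on:
-- Sketch: truncate the log without backing it up, then shrink the log file.
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDb_Log, 100)   -- logical log file name and target size (MB) are assumptions
GO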
View 1 Replies
Oct 3, 2001
For some reason the transaction log backup, which occurs every few hours, has failed. This means my transaction log is filling up. Is it possible to increase the space allocated for the log whilst the server is in use?
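It should be; a hedged sketch (database name, logical file name, and size are placeholders; sp_helpfile lists the real logical name), and ALTER DATABASE ... MODIFY FILE can be run while the database is in use:
-- Sketch: grow the log file online.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, SIZE = 2000MB)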
Thanks in advance for help.
View 2 Replies