Transaction Log Shipping Script Errors
Dec 10, 2007
Greetings:
When I script out my log shipping configuration from the GUI and subsequently drop the log shipping and try to recreate it with the created script, the backup and restore functions do not seem to be working; please see script below. Is there an additional step (or steps) that the SSMS GUI does not output when it creates the script for log shipping? I noticed in the GUI after I run the script that the destination folder for copied files is blank as well.
Example error from backup/restore job - Error: The path is not of a legal form.(mscorlib)
-- Execute the following statements at the Primary to configure Log Shipping
-- for the database [rdevsql2].[SymbolLookUp],
-- The script needs to be run at the Primary in the context of the [msdb] database.
-------------------------------------------------------------------------------------
-- Adding the Log Shipping configuration
-- ****** Begin: Script to be run at Primary: [rdevsql2] ******
DECLARE @LS_BackupJobId AS uniqueidentifier
DECLARE @LS_PrimaryId AS uniqueidentifier
DECLARE @SP_Add_RetCode As int
EXEC @SP_Add_RetCode = master.dbo.sp_add_log_shipping_primary_database
@database = N'SymbolLookUp'
,@backup_directory = N'm:\backups'
,@backup_share = N'\\rdevsql2\m$\backups'
,@backup_job_name = N'LSBackup_SymbolLookUp'
,@backup_retention_period = 60
,@monitor_server = N'RDEVSQL1'
,@monitor_server_security_mode = 1
,@backup_threshold = 60
,@threshold_alert_enabled = 1
,@history_retention_period = 60
,@backup_job_id = @LS_BackupJobId OUTPUT
,@primary_id = @LS_PrimaryId OUTPUT
,@overwrite = 1
,@ignoreremotemonitor = 1
IF (@@ERROR = 0 AND @SP_Add_RetCode = 0)
BEGIN
DECLARE @LS_BackUpScheduleUID As uniqueidentifier
DECLARE @LS_BackUpScheduleID AS int
EXEC msdb.dbo.sp_add_schedule
@schedule_name =N'LSBackupSchedule_rdevsql21'
,@enabled = 1
,@freq_type = 4
,@freq_interval = 1
,@freq_subday_type = 4
,@freq_subday_interval = 1
,@freq_recurrence_factor = 0
,@active_start_date = 20071207
,@active_end_date = 99991231
,@active_start_time = 0
,@active_end_time = 235900
,@schedule_uid = @LS_BackUpScheduleUID OUTPUT
,@schedule_id = @LS_BackUpScheduleID OUTPUT
EXEC msdb.dbo.sp_attach_schedule
@job_id = @LS_BackupJobId
,@schedule_id = @LS_BackUpScheduleID
EXEC msdb.dbo.sp_update_job
@job_id = @LS_BackupJobId
,@enabled = 1
END
EXEC master.dbo.sp_add_log_shipping_primary_secondary
@primary_database = N'SymbolLookUp'
,@secondary_server = N'RDEVSQL1'
,@secondary_database = N'SymbolLookUp'
,@overwrite = 1
-- ****** End: Script to be run at Primary: [rdevsql2] ******
-- ****** Begin: Script to be run at Monitor: [RDEVSQL1] ******
EXEC rdevsql1.msdb.dbo.sp_processlogshippingmonitorprimary
@mode = 1
,@primary_id = N'4d80db8c-e090-4dc0-8af6-d5f5802c4207'
,@primary_server = N'rdevsql2'
,@monitor_server = N'RDEVSQL1'
,@monitor_server_security_mode = 1
,@primary_database = N'SymbolLookUp'
,@backup_threshold = 60
,@threshold_alert = 14420
,@threshold_alert_enabled = 1
,@history_retention_period = 60
-- ****** End: Script to be run at Monitor: [RDEVSQL1] ******
-- Execute the following statements at the Secondary to configure Log Shipping
-- for the database [RDEVSQL1].[SymbolLookUp],
-- the script needs to be run at the Secondary in the context of the [msdb] database.
-------------------------------------------------------------------------------------
-- Adding the Log Shipping configuration
-- ****** Begin: Script to be run at Secondary: [RDEVSQL1] ******
DECLARE @LS_Secondary__CopyJobId AS uniqueidentifier
DECLARE @LS_Secondary__RestoreJobId AS uniqueidentifier
DECLARE @LS_Secondary__SecondaryId AS uniqueidentifier
DECLARE @LS_Add_RetCode As int
EXEC @LS_Add_RetCode = rdevsql1.master.dbo.sp_add_log_shipping_secondary_primary
@primary_server = N'rdevsql2'
,@primary_database = N'SymbolLookUp'
,@backup_source_directory = N'\\rdevsql2\m$\backups'
,@backup_destination_directory = N''
,@copy_job_name = N''
,@restore_job_name = N''
,@file_retention_period = 4320
,@monitor_server = N'RDEVSQL1'
,@monitor_server_security_mode = 1
,@overwrite = 1
,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT
,@secondary_id = @LS_Secondary__SecondaryId OUTPUT
IF (@@ERROR = 0 AND @LS_Add_RetCode = 0)
BEGIN
DECLARE @LS_SecondaryCopyJobScheduleUID As uniqueidentifier
DECLARE @LS_SecondaryCopyJobScheduleID AS int
EXEC rdevsql1.msdb.dbo.sp_add_schedule
@schedule_name =N'DefaultCopyJobSchedule'
,@enabled = 1
,@freq_type = 4
,@freq_interval = 1
,@freq_subday_type = 4
,@freq_subday_interval = 15
,@freq_recurrence_factor = 0
,@active_start_date = 20071207
,@active_end_date = 99991231
,@active_start_time = 0
,@active_end_time = 235900
,@schedule_uid = @LS_SecondaryCopyJobScheduleUID OUTPUT
,@schedule_id = @LS_SecondaryCopyJobScheduleID OUTPUT
EXEC rdevsql1.msdb.dbo.sp_attach_schedule
@job_id = @LS_Secondary__CopyJobId
,@schedule_id = @LS_SecondaryCopyJobScheduleID
DECLARE @LS_SecondaryRestoreJobScheduleUID As uniqueidentifier
DECLARE @LS_SecondaryRestoreJobScheduleID AS int
EXEC rdevsql1.msdb.dbo.sp_add_schedule
@schedule_name =N'DefaultRestoreJobSchedule'
,@enabled = 1
,@freq_type = 4
,@freq_interval = 1
,@freq_subday_type = 4
,@freq_subday_interval = 15
,@freq_recurrence_factor = 0
,@active_start_date = 20071207
,@active_end_date = 99991231
,@active_start_time = 0
,@active_end_time = 235900
,@schedule_uid = @LS_SecondaryRestoreJobScheduleUID OUTPUT
,@schedule_id = @LS_SecondaryRestoreJobScheduleID OUTPUT
EXEC rdevsql1.msdb.dbo.sp_attach_schedule
@job_id = @LS_Secondary__RestoreJobId
,@schedule_id = @LS_SecondaryRestoreJobScheduleID
END
DECLARE @LS_Add_RetCode2 As int
IF (@@ERROR = 0 AND @LS_Add_RetCode = 0)
BEGIN
EXEC @LS_Add_RetCode2 = rdevsql1.master.dbo.sp_add_log_shipping_secondary_database
@secondary_database = N'SymbolLookUp'
,@primary_server = N'rdevsql2'
,@primary_database = N'SymbolLookUp'
,@restore_delay = 0
,@restore_mode = 0
,@disconnect_users = 0
,@restore_threshold = 45
,@threshold_alert_enabled = 1
,@history_retention_period = 60
,@overwrite = 1
END
IF (@@error = 0 AND @LS_Add_RetCode = 0)
BEGIN
EXEC rdevsql1.msdb.dbo.sp_update_job
@job_id = @LS_Secondary__CopyJobId
,@enabled = 1
EXEC rdevsql1.msdb.dbo.sp_update_job
@job_id = @LS_Secondary__RestoreJobId
,@enabled = 1
END
-- ****** End: Script to be run at Secondary: [RDEVSQL1] ******
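A side note for anyone comparing against their own scripted output: in the secondary block above, @backup_destination_directory, @copy_job_name and @restore_job_name come out as empty strings, which matches the blank destination folder in the GUI and is the sort of value that produces "The path is not of a legal form." Below is a hedged sketch of the same call with those three filled in; the folder and job names are illustrative placeholders, not values taken from the original configuration.
-- Hedged sketch only: identical to the call above except the copy destination and
-- job names are supplied. Folder and job names are placeholders.
EXEC @LS_Add_RetCode = rdevsql1.master.dbo.sp_add_log_shipping_secondary_primary
@primary_server = N'rdevsql2'
,@primary_database = N'SymbolLookUp'
,@backup_source_directory = N'\\rdevsql2\m$\backups'
,@backup_destination_directory = N'd:\logship\copy' -- placeholder; must not be blank
,@copy_job_name = N'LSCopy_rdevsql2_SymbolLookUp' -- placeholder; must not be blank
,@restore_job_name = N'LSRestore_rdevsql2_SymbolLookUp' -- placeholder; must not be blank
,@file_retention_period = 4320
,@monitor_server = N'RDEVSQL1'
,@monitor_server_security_mode = 1
,@overwrite = 1
,@copy_job_id = @LS_Secondary__CopyJobId OUTPUT
,@restore_job_id = @LS_Secondary__RestoreJobId OUTPUT
,@secondary_id = @LS_Secondary__SecondaryId OUTPUT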
help is much appreciated,
Derek
View 1 Replies
Dec 19, 2007
I am trying to set up log shipping in SQL 2005.
The transaction log copy job is failing with the following error.
2007-12-19 11:33:33.93 *** Error: Could not retrieve copy settings for secondary ID 'd8d9b7cf-0f36-4446-bdbb-488dfdc1f6fe'.(Microsoft.SqlServer.Management.LogShipping) ***
2007-12-19 11:33:33.95 *** Error: Failed to connect to server SCDSSLSQL2.(Microsoft.SqlServer.ConnectionInfo) ***
2007-12-19 11:33:33.95 *** Error: An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)(.Net SqlClient Data Provider) ***
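If the secondary is reachable at all, the copy settings the first error refers to live in msdb on the secondary; a minimal hedged sketch to confirm they exist and look sane (the GUID is the one from the error message):
-- Run in msdb on the secondary; hedged sketch to inspect the stored copy settings.
SELECT secondary_id, primary_server, primary_database,
backup_source_directory, backup_destination_directory, monitor_server
FROM msdb.dbo.log_shipping_secondary
WHERE secondary_id = 'd8d9b7cf-0f36-4446-bdbb-488dfdc1f6fe';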
Any help would be appreciated.
vkumar
View 3 Replies
Feb 28, 2008
I have 2 SQL 2005 SP2 machines that I am configuring for log shipping. The primary machine has a large database on it that I want duplicated on another machine. I did a full backup and restore onto machine 2. I configured the log shipping per the BOL and white papers. Everything seems to work correctly except for the restore; I get the following error:
The restore operation cannot proceed because the secondary database 'B' is not in NORECOVERY/STANDBY mode.
The jobs are copying the files correctly but I cannot get them restored. I am trying to leave the "B" database readable for reporting, but that is all I need it for. Thanks in advance
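For context, the restore job can only apply logs to a database left in a restoring or standby state; a minimal hedged sketch of re-seeding machine 2 so that "B" stays readable for reporting between restores (file paths are placeholders):
-- Hedged sketch: restore the full backup WITH STANDBY (readable) or NORECOVERY
-- (not readable); never WITH RECOVERY, or the log shipping restores will fail.
RESTORE DATABASE B
FROM DISK = N'\\primaryserver\backups\B_full.bak'
WITH STANDBY = N'D:\MSSQL\Data\B_undo.tuf', REPLACE;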
View 3 Replies
May 6, 2015
I am trying to configure log shipping on the same server with different instances. I am facing the below error: Cannot open backup device. Operating system error 67 (The network name cannot be found). RESTORE FILELIST is terminating abnormally (Microsoft SQL Server, Error 3201).
View 11 Replies
Oct 9, 2007
We have set-up log shipping in both our development and production environments. The difference between the two is that development is using SQL 2005 Developer Edition SP2 and production is using SQL 2005 Enterprise Edition SP2. As well, the production environment runs using 64-bit 3-node failover cluster set-up for the source, whereas the development source server environment is 32-bit and not clustered. Also, our development environment destination/monitor instance is located within the same geographic location mapped to the same domain controller. The production environment destination/monitor instance is located off-site, and although is part of the same domain, uses a different domain controller which is synched-up with the primary domain controller used for the source server and entire development environment. Other than that, both environments run using Windows 2003 Server Enterprise Edition SP1.
Originally, both environments were configured to use Monitor connections "By impersonating the proxy account of the job (usually the SQL Server Agent service account of the server instance where the job runs)". This presented no problems in the development environment, but in the production environment, this results in the following error whenever the source server tries to update the monitor instance with the backup alert status:
Error: 18456, Severity: 14, State: 11.
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. [CLIENT: XXX.XX.XX.XX]
This also results in the log-ship alert job falsely reporting that backup jobs are "out-of-synch", since the source server cannot write log information to the log ship tables on the destination/monitor instance.
Now, according to BOL, when choosing to impersonate the proxy account, this is supposed to default to the SQL Server Agent service account, which in our systems (both development and production) is a Windows domain account with full administrator privileges and a SQL system administrator on both source and destination/monitor instances.
Upon trying to open a case with Microsoft originally when we were running on SQL 2005 SP1, and spending several hours ensuring there were no duplicate SPNs and linked servers were properly configured, they had come to the conclusion that this was the result of a known SP1 issue (http://support.microsoft.com/kb/925843), and would be solved by applying SP2. We are now running SP2 + hotfix (9.0.3152), but are still receiving this error.
The only way I have currently of fixing this issue is by changing the Monitor connection from authenticating via proxy account to using a SQL Server login account which has system admin privileges.
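For reference, this is roughly what that workaround looks like when scripted; a hedged sketch, with placeholder names, of the primary-side call using a SQL login for the monitor connection (sp_add_log_shipping_secondary_primary takes the same @monitor_server_login/@monitor_server_password pair on the secondary):
-- Hedged sketch: monitor connection via SQL authentication instead of the Agent
-- proxy account. Database, share, login and password values are placeholders.
EXEC master.dbo.sp_add_log_shipping_primary_database
@database = N'MyDB'
,@backup_directory = N'm:\backups'
,@backup_share = N'\\primaryserver\backups'
,@backup_job_name = N'LSBackup_MyDB'
,@backup_retention_period = 4320
,@monitor_server = N'MONITORSRV'
,@monitor_server_security_mode = 0 -- 0 = SQL Server authentication
,@monitor_server_login = N'logship_monitor'
,@monitor_server_password = N'**********'
,@backup_threshold = 60
,@threshold_alert_enabled = 1
,@overwrite = 1;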
I have limited knowledge of how security is applied across different domain controllers within the same domain. Any help would be greatly appreciated.
View 4 Replies
Feb 22, 2004
Hi,
We currently have a couple of large databases running in a SQL 2000 SP3 clustered Windows 2000 SP3 environment.
Log Shipping is enabled for both databases shipping to a Standalone SQL 2000 SP3 Windows 2000 SP3 box.
Log Shipping occurs every 15 mins with the Transaction Files on average being no more than 500KB in size. However, every now and then a Transaction Log comes through and it can be as big as 3.52GB.
Not sure why this is happening. Anyone got any ideas?
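One way to pin the spikes down is to pull the biggest log backups out of the backup history and line them up with scheduled jobs (index rebuilds/DBCC DBREINDEX are the usual suspects); a hedged sketch, with the database name as a placeholder:
-- Hedged sketch: the 20 largest transaction log backups for one database.
SELECT TOP 20 backup_start_date, backup_size
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB' AND type = 'L'
ORDER BY backup_size DESC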
Regards
Paul Towler
View 3 Replies
Jul 23, 2005
I am going through a security audit on our servers. We use log shipping for a standby database. One of the questions in the audit has me looking for answers.
"Are the transaction logs that are being shipped to the standby database encrypted?"
I am assuming no. However, I need to know definitively. I have not been able to find an answer in BOL or in Google. If the logs are not encrypted, is there an option where I could send them encrypted, if necessary?
Thanks,
Jennie
View 2 Replies
Jan 18, 2006
Hi guys,
I have a server in a datacenter (SQL 2005 Ent) that collects large quantities of data from our visitors. I need to set up a secondary database in our office (different geographic location) that will serve two purposes: 1, a backup of the database and 2, allow us to perform complex queries on the data.
There is no updating of the data on the secondary server so no changes need to go back to the primary server. A database in standby mode is fine and users on the secondary server can be disconnected when it's being updated.
I have transaction log shipping working well in a staging environment (LAN). My first question: is there any reason why transaction log shipping would not work over a WAN with a VPN connection?
And my second question is can I compress the trn files for transport over the WAN. If I manually compress the files with winzip they compress by 98%. That translates into a huge saving when I am leasing a line to transport these files.
Thanks in advance
Stephen
View 4 Replies
Aug 7, 2015
Log shipping was configured 6 months back. A Transaction log file got corrupted today. How to resolve this?
View 20 Replies
Sep 21, 2007
Hello,
We have log-shipping set up between a source and 3 destination SS 2000 databases. Two of the destination servers actually perform their log restores across the network from the other secondary server. This allows us to only copy the files once from a remote location. All three servers stay caught up within 15 minutes of each other.
Recently, I added a fourth server to this that has SS 2005 SP2 (X64). I wrote a stored procedure that restores log backups from the same single location as the maintenance plan jobs. The problem that I'm experiencing is that this fourth server is not keeping up with the other three. It seems to take longer to restore the same log backups. The destination servers are all on the same domain. This fourth server was previously part of the same maintenance plan configuration as the others prior to rebuilding it for SS 2005 SP2 (X64). During that time, it stayed caught up with the other servers. There is another database on the new server that I am log-shipping to in the same manner and it stays caught up, though, for the most part, the log backups are smaller. There is a file on the fourth server with a ckp extension for the database in question that doesn't seem to exist for the other databases on this server and the other servers.
Any information on this behavior would be appreciated.
View 1 Replies
Apr 23, 2008
I am new to this environment and was asked to ensure that the transaction log shipping for SQL 2005 on W2K3 boxes is working properly. I noticed the db's on the secondary server show "Restoring..." I am not sure if these were set up in No Recovery mode or Standby mode. I have no access to the secondary db's. I get an error message when trying to access them (error 927). Monitoring was not set up initially and, as you may or may not know, can't be turned on after the fact...unless you delete the job and start over.
My question is: is "Restoring..." normal, and what does it indicate?
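"Restoring..." is what SSMS shows for a secondary that was initialized WITH NORECOVERY; a standby secondary shows "Standby / Read-Only" instead. If you ever do get access, a quick check is this hedged sketch:
-- Run on the secondary instance: state_desc = RESTORING means NORECOVERY mode,
-- is_in_standby = 1 means standby (read-only) mode.
SELECT name, state_desc, is_in_standby
FROM sys.databases;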
View 3 Replies
Aug 21, 2007
Hi,
I currently have a 2000 Ent. production server and a stand by server ready for transaction log shipping.
Is it possible to setup transaction log shipping on a live environment without any interruptions?
I'm currently backing up the log every 1 hour, I'd like to increase to 15 minutes.
Any help would greatly be appreciated.
Thanks,
- Gary
View 4 Replies
Feb 4, 2008
Hi,
I'm in the process of setting up log shipping on SQL Server 2005 Enterprise Edition.
My scenario is like this:
The average size of my tlog is 500MB and I'm planning to set the log shipping at a 30-minute interval (i.e. backup job schedule, copy and restore job schedule). But at some part of the day the tlog suddenly increases up to 1.5GB - 2GB. So I wanted to know: what if that 1.5GB-2GB tlog file is unable to get backed up, copied and restored at a 30-minute interval? How do I deal with this kind of issue where the size of the tlog increases suddenly? I cannot do it at a 15-minute interval due to some network restrictions at my office.
I have one more doubt about a setting on the log shipping screen:
Now let us suppose my setting on the log shipping screen 'Alert if no restore occurs within' is set to '180mins'. Does this setting mean that the restore job will keep on looking for the copied file in the folder on the secondary for the next 90 mins and, if it's not able to find any, it will generate an alert after 90 mins? Or will it generate an error if it's not able to find any copied file after the first restore job execution?
in the same way,
Thnx in advance for any help.
Regards
Arvind L
View 1 Replies
Jun 23, 2007
I'm experiencing a weird problem with log shipping in SQL 2005.
I've set up log shipping for a production database between two sites. The standby database is being updated correctly and everything seems to be working as expected but for one detail: the names of the transaction log backups are generated with a UTC timestamp instead of my local timezone.
The data below is extracted from the backup history:
2007-06-23 17:30:00.000 D:\Backup\Databases\mydb\mydb_20070623073000.trn
2007-06-23 17:15:00.000 D:\Backup\Databases\mydb\mydb_20070623071500.trn
2007-06-23 17:00:00.000 D:\Backup\Databases\mydb\mydb_20070623070000.trn
2007-06-23 16:45:00.000 D:\Backup\Databases\mydb\mydb_20070623064500.trn
My timezone here is GMT+10.
Although it's not affecting Log Shipping, it's very confusing as the full backups have a timestamp in the local timezone!
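The log shipping backup job appears to stamp the .trn file names from UTC, which would explain a fixed 10-hour gap; a quick hedged check that the offset matches:
-- Hedged sketch: the difference between these two should equal the gap between
-- the backup history time and the timestamp embedded in the file name.
SELECT GETDATE() AS local_time, GETUTCDATE() AS utc_time,
DATEDIFF(hour, GETUTCDATE(), GETDATE()) AS offset_hours;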
Has anyone seen or experienced something similar to this? Please see my SQL details below:
1 ProductName NULL Microsoft SQL Server
2 ProductVersion 589824 9.00.3042.00
3 Language 1033 English (United States)
4 Platform NULL NT AMD64
5 Comments NULL NT AMD64
6 CompanyName NULL Microsoft Corporation
7 FileDescription NULL SQL Server Windows NT - 64 Bit
8 FileVersion NULL 2005.090.3042.00
9 InternalName NULL SQLSERVR
10 LegalCopyright NULL © Microsoft Corp. All rights reserved.
11 LegalTrademarks NULL Microsoft® is a registered trademark of Microsoft Corporation. Windows(TM) is a trademark of Microsoft Corporation
12 OriginalFilename NULL SQLSERVR.EXE
13 PrivateBuild NULL NULL
14 SpecialBuild 199360512 NULL
15 WindowsVersion 248381957 5.2 (3790)
16 ProcessorCount 4 4
17 ProcessorActiveMask 4 f
18 ProcessorType 8664 NULL
19 PhysicalMemory 4095 4095 (4294037504)
20 Product ID NULL NULL
Thanks,
André
View 3 Replies
Oct 12, 2015
I've got log shipping set up, and everything seems to be working fine, but the log files are not being deleted from the primary server despite configuring log shipping to retain them for 3 days. I see no errors concerning the log shipping, but did not configure a monitor. What process is responsible for deleting the older log backups, and how can I look for errors? I could simply set up a job to delete the older files, but that will only mask the issue.
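For what it's worth, the cleanup of old .trn files is done by the log shipping jobs themselves (the backup job on the primary and the copy job on the secondary), driven by the retention values stored in msdb, so a first check is whether the stored retention matches what was intended; a hedged sketch for the primary side:
-- Hedged sketch: backup_retention_period is in minutes (3 days = 4320).
SELECT primary_database, backup_directory, backup_retention_period, last_backup_date
FROM msdb.dbo.log_shipping_primary_databases;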
View 3 Replies
Jul 16, 2015
Apart from using stored procedures, reports and all this stuff, I want to know a possible way to make sure that the data inside my secondary server read-only database is the same as the data in my primary server database.
So what is the simple way to do this check?
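A simple low-tech check is to compare the last file the primary backed up with the last file the secondary restored; if they keep pace, the read-only copy is as current as log shipping allows. A hedged sketch against the monitor tables (run on the monitor or the secondary):
-- Hedged sketch: copy/restore progress per log shipped database.
SELECT primary_server, primary_database, secondary_database,
last_copied_file, last_copied_date, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;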
View 4 Replies
Jun 18, 2015
I received an alert from one of my two secondary servers (all servers are running 2012 SP1):
File 'E:\SQL\MS SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\MyDatabaseName_DateTime.tuf' is not a valid undo file for database 'MyDatabaseName' (database ID 8). Verify the file path, and specify the correct file.
The detail in the job step shows this additional information:
*** Error: Could not apply log backup file 'MyDatabaseName_DateTime.trn' to secondary database 'MyDatabaseName'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Table error: Page (0:0). Test (m_headerVersion == HEADER_7_0) failed. Values are 0 and 1.
Table error: Page (0:0). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 0 and 0.
Table error: Page (0:0). Test (m_freeData >= PageHeaderOverhead () && m_freeData <= (UINT)PAGESIZE - m_slotCnt * sizeof (Slot)) failed. Values are 0 and 8192.
Starting a few minutes later, the Agent Job named LSRestore_MyServerName_MyDatabaseName fails every time it runs. The generated log backup, copy, and restore jobs run every 15 minutes.
I fixed the immediate problem by running a copy-only full backup on the primary, deleting the database on the secondary and restoring the new backup on the secondary with NORECOVERY. The restore job now succeeds and all seems fine. The secondaries only exists for DR purposes - no one runs reports against them or uses them at all. I had a similar problem last weekend on a different database that is also replicated between the same servers. I've been here for over a year, and these are the first instances of this problem that I've seen. However, I've now seen it twice in a week on the same server.
View 0 Replies
Jan 13, 2014
I was getting this error message from our Cold Fusion application front end when it was trying to execute one of our stored procedures: The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "SERVERNAME" was unable to begin a distributed transaction.
I was a bit puzzled by this because we don't use distributed transactions (at least I don't specifically code them). I did some research online and I found out how to modify the DTC component on the server to have the proper configurations. Then, when trying again we got this error message:
Unable to start a nested transaction for OLE DB provider "SQLNCLI10" for linked server "SERVERNAME". A nested transaction was required because the XACT_ABORT option was set to OFF.
So, I was able to resolve that as well by changing that option in the stored procedure... Now, there are 3 stored procedures - one does inserts, one does updates, and one does deletes. The actions are being done to a view in a database on another server. The view definition uses a linked server.
The error was/is only happening on the INSERT stored procedure. So, I'm a little baffled as to why it only bombs on the insert stored procedure and not the others. They are all coded in the same fashion. Do distributed transactions work differently if it's an insert vs. update or delete? Why is it all of a sudden treating these as distributed transactions when they aren't coded as such?
The code is very simple and looks just like this:
INSERT vw_Name
SELECT bla, bla2, bla3
FROM local table
WHERE bla bla
And again vw_Name would be a table on another server that we have via Linked Server. It is also a SQL Server (but its SQL 2000).
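For background on the second error: a data modification sent through a linked server can be promoted to a distributed transaction, and, as the message says, SQL Server then requires XACT_ABORT to be ON for the insert; a hedged sketch of the insert procedure body with that setting in place (table and column names are the placeholders from above):
-- Hedged sketch: setting XACT_ABORT ON before the remote insert.
SET XACT_ABORT ON; -- required for data modification through the linked server
INSERT vw_Name
SELECT bla, bla2, bla3
FROM dbo.LocalTable -- placeholder for the local table
WHERE bla = 1; -- placeholder predicate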
View 6 Replies
Oct 2, 2014
I have a scenario where a customer is going to be using log shipping to the DR site; however, we need to maintain the normal backup strategy on the current system (i.e. nightly full, every-6-hour differential and hourly transaction log backup). I know how to set up transaction log shipping and fail over to DR, but now the local backup strategy is going to be an issue. I use the [URL] .... maintenance solution currently.
Is it even possible to do regular backups locally keeping data integrity for your backup strategy with Transaction Log Shipping enabled?
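The main conflict is the hourly log backup: a second stream of normal log backups would break the restore chain the log shipping jobs depend on, whereas the nightly full and the differentials can stay as they are. Any extra log backup that is still wanted locally can be taken WITH COPY_ONLY so the chain is untouched; a hedged sketch with placeholder names:
-- Hedged sketch: a copy-only log backup does not truncate the log or break the
-- log shipping restore chain; a copy-only full does not reset the differential base.
BACKUP LOG MyDB TO DISK = N'\\backupshare\MyDB_log_copyonly.trn' WITH COPY_ONLY;
BACKUP DATABASE MyDB TO DISK = N'\\backupshare\MyDB_full_copyonly.bak' WITH COPY_ONLY;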
View 2 Replies
May 31, 2008
Hi!
We have a Microsoft SQL Server 2000 SP3 running database for Microsoft
Navision 3.7
From time to time we encounter problems with the transaction log, especially when running heavy query procedures from Navision. It's actually set up as follows:
File properties:
File growth By percent (10)
Restrict file growth (MB) 10000
OPTIONS:
Recovery model: simple
Settings: Autoupdate statistics, Auto create statistics, Autoshrink
We get the following errors (once every 2-3 months so far):
The log file for database 'ME_Prod' is full. Back up the transaction log for
the database to free up some log space..
in between numerous abovementioned messages I have the following:
Configuration option 'show advanced options' changed from 1 to 1. Run the
RECONFIGURE statement to install..
Could not write a CHECKPOINT record in database ID 9 because the log is out
of space.
Automatic checkpointing is disabled in database 'ME_Prod' because the log is
out of space. It will continue when the database owner successfully
checkpoints the database. Free up some space or extend the database and then
run the CHECKPOINT statement.
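When this happens, a few quick checks that work on SQL 2000 (hedged sketch): how full the log actually is, whether an open transaction is pinning it, and whether the 10000 MB restricted-growth cap has simply been reached and needs raising.
-- Hedged sketch (SQL 2000 syntax); the logical log file name is a placeholder.
DBCC SQLPERF(LOGSPACE)
DBCC OPENTRAN('ME_Prod')
ALTER DATABASE ME_Prod
MODIFY FILE (NAME = ME_Prod_log, MAXSIZE = 20000MB)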
View 10 Replies
Apr 10, 2006
Hello,
I currently have a Transactional Log reader agent failing with the below error:
The process could not execute 'sp_replcmds'
Error: 14151, Severity: 18, State: 1
SQL Server Assertion: File: <logscan.cpp>, line=2223
Failed Assertion = 'm_noOfScAlloc == 0'.
Stack Signature for the dump is 0x24642FE5
Error: 3624, Severity: 20, State: 1.
SQL Server Assertion: File: <logscan.cpp>, line=1985
Failed Assertion = 'startLSN >= m_curLSN'.
Stack Signature for the dump is 0xD7150BD4
Now, I understand that SP4 is supposed to fix a similar issue. SP4 has been installed and the errors keep happening. I do notice that the hot fix mentions different line numbers than the above errors. Does anyone know if this is a new bug? If not can someone explain the fixes to me, thanks,
Tech Drone.
View 3 Replies
May 13, 2007
Hi
I could not find a forum for 'Log Shipping'; that's why I'm posting this question here. I'd appreciate it if someone can provide answers based on their experience.
Can we switch database recovery model when log shipping is turned on ?
We want to switch from Full Recovery to Bulk Logged Recovery to make sure Bulk Insert operations during the after hours load process will have some performance gain.
Is there any possibility of losing data?
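For reference, the switch itself is a one-liner each way, and log shipping keeps working because log backups continue under BULK_LOGGED (switching to SIMPLE is what would break the chain); the trade-off is that a log backup containing bulk-logged operations cannot be restored to a point in time within that backup. A hedged sketch with a placeholder database name:
-- Hedged sketch: switch before the after-hours bulk load, switch back afterwards.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ... run the bulk insert load ...
ALTER DATABASE MyDB SET RECOVERY FULL;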
Thanks
View 1 Replies
Jun 8, 2006
Hi,
I'm sure I am missing something obvious; hopefully someone could point it out. After a log shipping failover, I want to fail back to my initial primary server database; however, my database is marked as loading. How can I mark it as normal?
I did the failover as follows:
I did a log shipping failover between the two servers Sv1 (primary) and Sv2 (secondary) by doing the following
1) Stop the primary database by using sp_change_primary_role (Sv1)
2) Change the 2nd server to primary server by running sp_change_secondary_role (Sv2)
3) Change the monitor role by running sp_change_monitor_role (Sv2)
4) Resolve the logins - (Sv2)
5) Now I want to fail back - I copy the TRN files to Sv1 and use SQL Enterprise Manager to restore the database at a point in time. The task is done; however, the database is still marked as loading (see the sketch below). I could not use sp_dboption.
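A minimal hedged sketch of what usually clears the loading state once the last log has been applied on Sv1 (the database name is a placeholder):
-- Hedged sketch: bring the failed-back database online after the final log restore.
RESTORE DATABASE MyDB WITH RECOVERY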
I appreciate any suggestion.
Thanks in advance
View 5 Replies
May 31, 2008
Hi All
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Qry Analyser.
I've tried both with and without the Set XACT . . ., and also tried with Set Implicit_Transactions off.
set XACT_ABORT ON
Begin distributed Tran
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1 and DONE = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1
COMMIT TRAN
It's got me stumped, so any ideas gratefully received.Thx
View 1 Replies
Feb 22, 2007
I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and based on that it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager. But in my SSIS package it shows me a timeout error.
I have increased and decreased the timeout to catch the error but it is still there; I have tried to set 0 for the CommandTimeout property.
If I use 0 for CommandTimeout then I get: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
Please help me it's very urgent.
View 3 Replies
Feb 6, 2007
I am getting this error: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
Does anybody have an idea?!
View 1 Replies
Dec 22, 2006
I have a sequence container; in my sequence container I have a script task to drop the existing tables. This sequence container is connected to another sequence container. All these are in a For Each Loop container. When I run the package it works fine for the 1st loop, but it gives me an error on the second execution.
Message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
View 8 Replies
Jan 8, 2008
Hi,
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
Thanks in advance.
View 2 Replies
Jul 1, 2015
I recently updated the datatype of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameters the sproc appeared to succeed and returned "1 row(s) affected" in the console. However, the update triggered by the sproc did not actually work.
The table column was a bit which only allows 0 or 1 and the sproc was passing a value of 2 so the table was rejecting this value. However, the sproc did not return an error and appeared to return success. So is there a way to configure the database or sproc to return an error message when this type of error occurs?
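One way to surface this instead of a silent success is to validate the parameter inside the sproc (or wrap the update in TRY...CATCH) and raise an error for out-of-range values; a hedged sketch with placeholder object names. It is also worth noting that converting a nonzero integer to bit normally yields 1 rather than an error, which would fit the silent behaviour seen here.
-- Hedged sketch: reject out-of-range values explicitly so the caller gets an error
-- instead of relying on implicit conversion. All names are placeholders.
CREATE PROCEDURE dbo.usp_SetFlag
@Flag tinyint
AS
BEGIN
IF @Flag NOT IN (0, 1)
BEGIN
RAISERROR('@Flag must be 0 or 1; got %d.', 16, 1, @Flag);
RETURN;
END;
UPDATE dbo.MyTable SET FlagColumn = @Flag WHERE Id = 1;
END;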
View 1 Replies
Jul 31, 2006
I have a parent package that calls child packages inside a For Each container. When I debug/run the parent package (from VS), I get the following error message: Warning: The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
It appears to be failing while executing the child package. However, the logs (via the "progress" tab) for both the parent package and the child package show no errors other than the one listed above (and that shows in the parent package log). The child package appears to validate completely without error (all components are green and no error messages in the log). I turned on SSIS logging to a text file and see nothing in there either.
If I bump up the MaximumErrorCount in the parent package and in the Execute Package Task that calls the child package to 4 (to go one above the error count indicated in the message above), the whole thing executes sucessfully. I don't want to leave the Max Error Count set like this. Is there something I am missing? For example are there errors that do not get logged by default? I get some warnings, do a certain number of warnings equal an error?
Thanks,
Lee
View 5 Replies
Apr 20, 2006
Starwin writes "when I execute DBCC CHECKDB, DBCC CHECKCATALOG
I received the following error.
how to solve it?
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -2093955965, index ID 711, page ID (3:2530). The PageId in the page header = (34443:343146507).
. . . .
. . . .
CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1635188736)' (object ID -1635188736).
CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1600811521)' (object ID -1600811521).
. . . .
. . . .
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -8748568, index ID 50307, page ID (3:2497). The PageId in the page header = (26707:762626875).
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -7615284, index ID 35836, page ID (3:2534). The PageId in the page heade"
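A hedged sketch of the usual next step: get the complete error list first, then restore the affected data from a clean backup if one exists; repair is the last resort because REPAIR_ALLOW_DATA_LOSS can discard pages. The database name is a placeholder.
-- Hedged sketch (SQL 2000 syntax): full error detail without informational noise.
DBCC CHECKDB('MyDB') WITH ALL_ERRORMSGS, NO_INFOMSGS
-- Last resort, after backing up the damaged database:
-- ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
-- DBCC CHECKDB('MyDB', REPAIR_ALLOW_DATA_LOSS)
-- ALTER DATABASE MyDB SET MULTI_USER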
View 1 Replies
Jun 10, 2015
I have a full database backup up to the previous day and a transaction log file of today's transactions. My database has crashed. I have restored the previous day's full backup, but I have faced difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential database backup and no other transaction log backup.
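Assuming the log file in hand is (or can be backed up as) a transaction log backup covering the period since the full backup, the sequence is roughly this hedged sketch (file names are placeholders); the key point is that the full backup must be restored WITH NORECOVERY before the log can be applied:
-- Hedged sketch: restore the full backup without recovering, then apply the log.
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log.trn' WITH RECOVERY;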
View 8 Replies
Jul 11, 2007
I want to list out the pending transactions for transactional replication by publication.
Help needed.
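A hedged sketch using the replication monitor procedure in the distribution database (all parameter values are placeholders, and the distribution database may be named differently):
-- Hedged sketch: pending (undelivered) commands for one publication/subscriber pair.
USE distribution;
EXEC sp_replmonitorsubscriptionpendingcmds
@publisher = N'MyPublisher',
@publisher_db = N'MyPublishedDB',
@publication = N'MyPublication',
@subscriber = N'MySubscriber',
@subscriber_db = N'MySubscriberDB',
@subscription_type = 0; -- 0 = push, 1 = pull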
View 1 Replies