I'm curious what the considerations are for choosing a good transaction retention time. The SQL Server default is 0 to 72 hours. With that setting I found that distribution cleanup was taking upwards of 30 minutes (for a job that by default runs every 10 minutes). I've read that lowering the retention can improve performance, and also that you don't want cleanup running too long because of deadlock issues between it and the log reader. So how short is too short? Since the system this runs on is under heavy use, I'd like to optimize this as much as possible, which makes me think the smaller the retention the better. But is something like 1 or 2 hours too short? What are the possible consequences of such a short retention period?
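For reference, a minimal sketch of adjusting these values, assuming the distribution database has its default name; sp_changedistributiondb accepts min_distretention and max_distretention, both in hours:

-- Run at the distributor; 'distribution' is the default database name.
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value    = N'24';  -- keep transactions at most 24 hours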
I currently have a simple transactional replication setup for a database. My publisher and distributor are on the same box. The subscription is set up using a push agent.
My question is related to recovery of the subscriber.
So let's say replication is set up and working fine, and suddenly we have a failure on the subscriber database. Now I could just reconfigure the subscription, and the subscriber database would be back up and good to go, but the problem is that over time we have made some changes to the subscriber database that were not made on the publisher. For example, the tables have different indexes. Just reconfiguring the subscription would not recover these objects.
So I actually have to restore the subscriber database. I do that and apply transaction logs up to the most recent transaction log backup. Now, consider that my transaction log backups on the subscriber happen every 4 hours, and the most recent transaction log backup I had was from 3 hours ago. So at this point, my subscriber database is 3 hours behind my publisher.
Now, will the distribution agent resend the missing 3 hours of transactions?
In the distribution database properties, there are two settings for transaction retention, "at least" and "but not more than". Currently they are set to 0 and 72 hours respectively. I would assume that if I set the "at least" value to the subscriber's transaction log backup interval, in this case 4 hours, I would be covered, and the distribution agent would indeed re-replicate the transactions that happened since the recovery point 3 hours ago.
I just wanted to verify that this is actually what these settings refer to, and that if I set the "at least" setting to 4 hours, I would be covered.
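As a hedged aside: you can check the current values on the distributor with sp_helpdistributiondb, which reports min_distretention and max_distretention (in hours, matching the "at least" and "but not more than" settings) for the distribution database:

-- Run at the distributor.
EXEC sp_helpdistributiondb @database = N'distribution';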
Hi, I am still not very proficient in SQL Server, so apologies if the question sounds basic. We have a script to clean out old, unwanted data. It basically deletes all rows which are more than 2 weeks old. It deletes data from 33 tables, and the number of rows in each table runs into a few millions. What I see in the script (not written by me :-) ) is that all data is deleted within a single BEGIN TRANSACTION and COMMIT TRANSACTION. I have a background in Informix, where such an action may result in a "long transaction" problem. Does SQL Server have a similar concept? Also, won't it cause performance problems if all rows are kept locked until they are committed? TIA.
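Not an answer to the Informix comparison, but a minimal sketch of the usual workaround: break the purge into small transactions so locks (and log space) are released as you go. The table and column names here are hypothetical; on SQL Server 2000 you would use SET ROWCOUNT instead of DELETE TOP:

-- Delete two-week-old rows in batches rather than one giant transaction.
DECLARE @BatchSize int = 10000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.AuditLog      -- hypothetical table
    WHERE CreatedDate < DATEADD(WEEK, -2, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;                       -- nothing left to delete
END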
Hi all, I would like to perform an
INSERT INTO LINKEDSVR.dbo.xyz.abc SELECT ... FROM dbo.dfg
where LINKEDSVR is a linked server on another machine. Both servers are running SQL Server 2000 and have the DTC running. When I run this batch from Query Analyzer without explicitly using transactions, it works well (takes about 5 sec); however, when I enclose it in BEGIN [DISTRIBUTED] TRAN / COMMIT TRAN, the query runs forever. I also tried using the local server as a linked server (loopback), but that did not work either. Any suggestions? Thanks, Jo
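For what it's worth, a hedged sketch of the pattern usually required here: data modifications against a linked server inside a distributed transaction generally need SET XACT_ABORT ON, and MSDTC must be reachable in both directions. Object names are taken from the question; the column list is elided as in the original:

SET XACT_ABORT ON;  -- required for most linked-server data modifications
BEGIN DISTRIBUTED TRAN;
INSERT INTO LINKEDSVR.dbo.xyz.abc
SELECT ... FROM dbo.dfg;
COMMIT TRAN;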
I just wanted to post a follow-up to a message I posted some months ago about a long-running transaction that was blocking all other users. The link is below: http://groups.google.com/group/comp...649bee2002646a2 Using the new row versioning functionality of SQL 2005 completely solved this problem. Books Online says there is a performance impact, but that the better performance of SQL 2005 in general might offset it; so far this seems to be the case. Just posting it here in case anyone else has the problem. The SQL command I had to execute to get everything working properly was: ALTER DATABASE DBname SET READ_COMMITTED_SNAPSHOT ON;
I seem to be misunderstanding the way transactions work with Service Broker queues. We have developed and deployed a Service Broker application with 5 queues and a Windows service for each queue on multiple servers (3 currently). Due to a last-minute issue, we had to avoid transactions when the services executed a RECEIVE; I am now updating the code base to use transactions and am running into blocking issues. One of the services runs for 90 seconds (spooling to the printer), and all of the servers block on the RECEIVE operation for this queue. I thought that if I was receiving messages from a single conversation, other receives against this queue would not block.
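For comparison, a minimal sketch of a transactional RECEIVE, assuming a hypothetical queue name. A RECEIVE inside a transaction locks that conversation group until commit, but other sessions should still be able to receive from other conversation groups on the same queue; blocking across all servers suggests the messages may all share a single conversation group:

BEGIN TRANSACTION;
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle, message_type_name, message_body
    FROM dbo.PrintQueue      -- hypothetical queue name
), TIMEOUT 5000;
-- ... do the (long-running) work here ...
COMMIT TRANSACTION;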
I'm trying to figure out why my transaction log backup is taking up to an hour to complete. I started off with the full recovery model, a full database backup every Sunday, differential backups every Tuesday/Thursday, and log backups every 5 minutes. I would have thought the log backups would execute much quicker because I'm taking them so often.
Here is my backup statement, I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups] WITH NOINIT , NOUNLOAD , NAME = N'xxxx log backup', SKIP , STATS = 10, NOFORMAT
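One hedged diagnostic sketch: msdb records the size and duration of every backup, so you can check whether the slow log backups are genuinely large (e.g. a huge log generated by index maintenance) or small but slow (pointing at the destination device):

SELECT TOP (20)
    backup_start_date,
    DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_sec,
    backup_size / 1048576.0 AS size_mb
FROM msdb.dbo.backupset
WHERE database_name = N'xxxx'    -- placeholder name from the question
  AND type = 'L'                 -- log backups only
ORDER BY backup_start_date DESC;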
We have a web-based third-party application that has both background processes and user activity requests running in the same database (SQL Server 2005 SP2). The problem is that a background process will start a long-running transaction and hold an exclusive lock on a few rows in a given table (a small table, <100 rows). The web clients need to scan this same table, but when their "select *" statements get to those locked row(s), the web client queries stall waiting for that exclusive lock to be released. This effectively brings the entire web front end to a halt because all clients must hit this table for each user action. I realize that this is the classic lock condition that multiversioning databases like Oracle, PostgreSQL, SQL Server Compact Edition, and other databases do not suffer because they don't use shared read locks like SQL Server. But since we're on SQL Server for this app, what is the way to get around this problem? Modifying the clients to use WITH (NOLOCK) is not an option... there will be major consistency issues unless the clients run in Read Committed or higher. Any ideas? We could tweak this app if needed. Does SQL Server 2008 introduce multiversioning or at least some mechanism to get around this problem? I did not see it mentioned on the Microsoft site, but maybe I missed it. Thanks in advance.
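Not speaking for SQL Server 2008, but note that SQL Server 2005 itself already offers row versioning for readers, so a hedged sketch of a database-level fix (the database name is a placeholder; no client code changes are needed, though it does add tempdb version-store overhead):

-- Readers see the last committed version of a row instead of blocking on writers.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;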
How can I execute a long-running transaction using something similar to the fire-and-forget pattern? I intend to start the execution of a very long stored proc from within IIS. I would like to execute a SQL script that starts the job and returns immediately, so that it doesn't hold an IIS thread.
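One common approach, sketched under the assumption that wrapping the stored proc in a SQL Agent job is acceptable: sp_start_job returns as soon as the job is launched, so the IIS thread is released immediately (the job name is hypothetical):

-- Returns immediately; the job runs asynchronously under SQL Agent.
EXEC msdb.dbo.sp_start_job @job_name = N'Run Long Stored Proc';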
I have members in a database who have paid-through dates, and I am creating retention reports.
I created a cross-tab in Crystal (using SQL) that counts records paid within a given year. I need to create a script that will let me find when members skip payment for a year. Any ideas?
I was thinking of running a count of all paid (Activity) records, but I'm still kind of stuck.
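A hypothetical sketch, assuming a Payments table with MemberID and PaidYear columns (not the actual schema): find member-years where a payment exists but none follows in the next year. The last year of data will always look like a "skip", so the still-open year is filtered out:

SELECT DISTINCT p.MemberID, p.PaidYear + 1 AS SkippedYear
FROM Payments AS p
WHERE NOT EXISTS (
          SELECT 1
          FROM Payments AS n
          WHERE n.MemberID = p.MemberID
            AND n.PaidYear = p.PaidYear + 1
      )
  AND p.PaidYear + 1 < YEAR(GETDATE());  -- ignore the current year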
I have just started in the scary world of SQL Server admin and am trying to unravel the mysteries of backups etc. If I run 'BACKUP DATABASE xxx TO DISK = 'D:\DB_Backups\xxx.bak' WITH RETAINDAYS = 7' each day, each database backup is appended to the same '.bak' file, and the RETAINDAYS protects the backup from being deleted by SQL Server. OK so far. But does anyone understand what criteria are used to decide when to overwrite the older backups? My backup file is getting bigger every day, with no sign of any of the old data being deleted. Do I have to wait for the entire disk to become full before they start to get overwritten? Or should I just not worry and trust that it will do it all correctly? Any ideas would be much appreciated.
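A hedged note on the mechanics: appending WITH NOINIT (the default) never overwrites anything, which is why the file keeps growing. RETAINDAYS only comes into play when you back up WITH INIT, which refuses to overwrite the existing backup sets until the retention period has expired. A sketch:

-- Overwrites the existing backup sets, but only once RETAINDAYS has passed.
BACKUP DATABASE xxx
TO DISK = 'D:\DB_Backups\xxx.bak'
WITH INIT, RETAINDAYS = 7;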
In SQL 2005, a database backup retention setting has been added under Server Properties, on the Database Settings page.
In SQL 2000 we had a comfortable option to set retention based on the maintenance plan, the files, and our space availability. It helped DBAs a lot, but it has been removed in SQL 2005.
Is that server setting the only retention period setting, or do we have to set it in any other tabs?
I want to change the history retention time because the history stores about 1 GB of detail per database per day in msdb.
Some of the log shipped databases have a monitor server option with a setting for retention time, but most of the log shipped databases are not using a monitor server, since that option was unavailable at setup.
So is there a way to change the history retention time?
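If the bloat is backup/restore history in msdb, one hedged option is the built-in sp_delete_backuphistory, which prunes everything older than a given date regardless of any monitor-server retention setting:

-- Keep only the last 14 days of backup history (the cutoff is an example value).
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(DAY, -14, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;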
I am running a couple of SQL 2000 SP3a servers with merge and snapshot replication, one server acting as publisher and distributor and the rest as subscribers. On one of the servers I got the error below and have tried most of the suggestions from MSDN. This server has never crashed before and has no hardware problems; it has been running for a couple of months with no issues. This has not happened on any of the other servers. Any suggestions would be greatly appreciated, as the only resolution I have left is to bring up a new instance, set up replication, and see if that resolves the issue. Stopping and starting the agents doesn't help.
Server: EASTSRV3
DBMS: Microsoft SQL Server
Version: 08.00.0760
user name: dbo
API conformance: 2
SQL conformance: 1
transaction capable: 2
read only: N
identifier quote char: "
non_nullable_columns: 1
owner usage: 31
max table name len: 128
max column name len: 128
need long data len: Y
max columns in table: 1024
max columns in index: 16
max char literal len: 524288
max statement len: 524288
max row size: 524288
[4/18/2005 11:59:27 AM] EASTSRV3.ICASData: {call sp_MSgetversion}
Percent Complete: 2  Connecting to Subscriber 'EASTSRV3'
Percent Complete: 3  Retrieving publication information
Percent Complete: 4  Retrieving subscription information
Percent Complete: 4  The merge process is cleaning up meta data in database 'HO_Master'.
Percent Complete: 4  The merge process cleaned up 0 row(s) in MSmerge_genhistory, 0 row(s) in MSmerge_contents, and 0 row(s) in MSmerge_tombstone.
Percent Complete: 4  The merge process is cleaning up meta data in database 'ICASData'.
The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  Category: NULL  Source: Merge Replication Provider  Number: -2147199467  Message: The merge process could not perform retention-based meta data cleanup in database 'ICASData'.
Percent Complete: 0  Category: COMMAND  Source: Failed Command  Number: 0  Message: {call sp_mergemetadataretentioncleanup(?, ?, ?)}
Percent Complete: 0  Category: SQLSERVER  Source: EASTSRV3  Number: 11  Message: General network error. Check your network documentation.
I want to store data warehouse source tables and files in an Archive schema and then delete / drop them after a specified period of time.
Is there a table property that I can set (I can't find one), or some other mechanism, so that I can easily identify these tables with a script?
If there is no such property or feature within the database engine I will define a metadata table and record it there, but a property or similar that I can set at archive time would be very handy.
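In the absence of a built-in property, one hedged option short of a metadata table is an extended property stamped at archive time; the property name ArchiveDate is an invention for this sketch, not a recognized engine feature:

-- Stamp a table in the Archive schema with its archive date.
EXEC sys.sp_addextendedproperty
     @name = N'ArchiveDate', @value = N'2015-04-01',
     @level0type = N'SCHEMA', @level0name = N'Archive',
     @level1type = N'TABLE',  @level1name = N'SourceTable1';  -- hypothetical table

-- Later, list archived tables and their dates for the cleanup script.
SELECT OBJECT_NAME(major_id) AS table_name, value AS archive_date
FROM sys.extended_properties
WHERE name = N'ArchiveDate';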
We have a retention policy under which payment is made at completion of the year. The policy has now changed from yearly to monthly, with effect from April-15.
If I calculate the pay, the system will generate arrear payments for the employee from April onward, but I have already paid the retention amount for two months, April and May, which I need to deduct; otherwise the amount will be doubled.
I currently use 7 days for the subscription expiration setting on my two merge publications, which causes metadata to be cleaned up every 7 days. Now I need to increase the retention period to 14 days. How can I avoid missing metadata after cleanup? Microsoft ms151188 (http://msdn2.microsoft.com/en-us/library/ms151188.aspx) warns that the publisher may not have enough metadata, which may lead to non-convergence. I want to change this setting without causing any data loss.
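For reference, a minimal sketch of the change itself; the publication name is a placeholder. The usual advice is to make the change while publisher and subscribers are in sync, since metadata already cleaned up under the 7-day setting cannot be brought back:

-- Run at the publisher, in the publication database.
EXEC sp_changemergepublication
     @publication = N'MyMergePub',
     @property    = N'retention',
     @value       = N'14';   -- days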
Following best practice, I take full SQL Server database, differential, and transaction log backups. I have set up a process to back up to local disks and then also copy the files to a centralized set of storage. On a weekly basis the centralized file system is backed up to a tape device. I know I can get data off the tapes, but that process is time consuming, not well tested from my perspective, and I am not in control of the overall process. Can you offer some recommendations from a SQL Server backup retention perspective?
I've got some issues in my production environment, so please help me out. The following is the message I got from Replication Monitor, and I don't know what to do at this point.
The merge process could not perform retention-based meta data cleanup in database 'TT'. (Source: Merge Replication Provider, Error number: -2147199467) Get help: http://help/-2147199467
Transaction (Process ID 73) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. (Source: ply-db-svr1, Error number: 1205) Get help: http://help/1205
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the BEGIN TRAN and COMMIT it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.
I've tried both with and without the SET XACT_ABORT, and also tried with SET IMPLICIT_TRANSACTIONS OFF.
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRAN
UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE TSADMIN.TRANSACTIONMAIN SET REPFLAG = 0 WHERE REPFLAG = 1 AND DONE = 1
UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE TSADMIN.WBENTRY SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE TSADMIN.FIXED SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE TSADMIN.ALTCHARGE SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT SET REPFLAG = 0 WHERE REPFLAG = 1
UPDATE TSADMIN.TSAUDIT SET REPFLAG = 0 WHERE REPFLAG = 1
COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on certain conditions; depending on those, it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it gives me a timeout error.
I have increased and decreased the timeout to isolate the error, but it is still there; I have also tried setting the CommandTimeout property to 0.
If I use 0 for CommandTimeout, then I get: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Does anybody have any ideas?
I have a sequence container, and in my sequence container I have a script task that drops the existing tables. This sequence container is connected to another sequence container, and all of these are inside a For Each Loop container. When I run the package it works fine for the first loop, but it gives me an error on the second execution.
Message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
I am getting the error "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
I have a full database backup up to the previous day and the transaction log file containing today's transactions. My database has crashed. I have restored the previous day's full backup, but I am having difficulty restoring today's transactions from today's transaction log. What are the steps to restore the full database backup and one day's transaction log file? Note: there is no differential database backup and no transaction log backup.
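A hedged sketch of the usual sequence, with hypothetical paths. The key step is backing up the tail of the log (possible as long as the .ldf survived the crash) before restoring over the database, then restoring the full backup and the log in order:

-- 1. Capture today's transactions from the surviving log file.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_tail.trn' WITH NO_TRUNCATE;

-- 2. Restore the full backup, leaving the database able to accept log restores.
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_full.bak' WITH NORECOVERY;

-- 3. Apply the tail-log backup and bring the database online.
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_tail.trn' WITH RECOVERY;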
I'm receiving the below error when trying to implement Execute SQL Task.
"The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION." This error also happens on COMMIT as well and there is a preceding Execute SQL Task with BEGIN TRANSACTION tranname WITH MARK 'tran'
I know I can change the transaction option property from "supported" to "required" however I want to mark the transaction. I was copying the way Import/Export Wizard does it however I'm unable to figure out why it works and why mine doesn't work.
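One hedged guess at the difference: BEGIN TRAN in one Execute SQL Task and COMMIT in another only pair up if both tasks run on the same physical connection (e.g. RetainSameConnection=True on the connection manager); otherwise the COMMIT/ROLLBACK arrives on a fresh connection with no open transaction. Kept in a single task/batch, the marked transaction itself is straightforward:

BEGIN TRANSACTION MyTran WITH MARK 'nightly load';  -- names are placeholders
-- ... DML statements here ...
COMMIT TRANSACTION MyTran;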
I created a calculated measure in a cube, something like this: ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],[Measures].[Transaction Amount]), to get only spend transactions. Now I want to slice this measure by the same hierarchy to find the amount distribution across the different transaction types under the spend category, but the query behaves as if the measure has no relationship to the hierarchy.
You can think of it as the query below:

WITH MEMBER SPEND AS
    ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent].&[SPEND],
     [Measures].[Transaction Amount])
SELECT
    NON EMPTY { SPEND } ON 0,
    NON EMPTY ([TransType].[TransTypeHierarchy].[TransTypeCategoryParent]) ON 1
FROM [CUBE]
How do I make use of BEGIN TRANSACTION and COMMIT TRANSACTION in SSIS?
As I am not able to commit changes due to certain update commands, I want to explicitly write BEGIN and COMMIT statements. But when I use BEGIN and COMMIT in an OLE DB Command stage, it throws an error as follows:
HResult: 0x80004005
Description: "Syntax error or access violation."
It's definitely not a syntax error, as I executed it in SQL Server directly. Also, when I use it in an Execute SQL Task outside the Data Flow container, it doesn't throw any error, but that still doesn't serve my purpose of saving/committing the update changes in the database.