Transaction Log Growing All The Time

Apr 30, 2007

Hi there,



We have a Principal, Mirror and Witness set-up and all is working fine; however, the transaction logs for a few large databases just keep growing over the course of the month until the disk is full. As I understand it (and as I found when I tried), you can't dump the transaction logs while mirroring is configured. Is there any way at all to commit and truncate the logs while mirroring is running, or do I have to manually remove the mirroring each month, dump the transaction logs, and then re-enable it after doing the backup/restore?



The databases in question are about 6GB in data size and the transaction logs can grow to be about 60GB in a month.



Would a normal SQL Server 2005 backup truncate the logs if I configured this? At the moment we use LiteSpeed for SQL Server for nightly backups.



Any advice would be very helpful.



Thanks

Ed
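One approach, as a minimal sketch with placeholder database and file names: mirroring requires the FULL recovery model, so TRUNCATE_ONLY is not allowed, but ordinary log backups are supported while mirroring runs, and they are what marks the inactive portion of the log as reusable. Scheduled as a job, something like:

-- Sketch only: 'BigDB' and the backup path are placeholders.
-- A regular log backup frees the inactive log without touching the mirroring session.
BACKUP LOG [BigDB]
TO DISK = N'E:\Backups\BigDB_log.trn'
WITH NOINIT, STATS = 10
GO
-- Once log backups run on a schedule, the file can be shrunk back
-- toward a sensible working size (target is in MB):
DBCC SHRINKFILE (BigDB_log, 10240)
GO

With log backups running every 15-30 minutes, the log should stabilise at the size needed between two backups rather than a month of activity.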




Transaction Log Growing And Growing

Aug 22, 2001

I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's
been going fine.

However, I'm not using my old SQL 6.5 backup scripts, which, when the
backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking
the log size.

My SQL 7 server is set up with a Maintenance Plan which does everything,
including backup, but the log file seems to be growing and growing. I'm
up to 4.5 gigs now, for a database with a data file of 3.5 gigs.

How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?

Thanks,
Todd Wallace
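A minimal sketch for SQL 7, with placeholder database and logical file names. Note that TRUNCATE_ONLY discards log records without backing them up, so it breaks the log backup chain and should be followed by a full backup:

-- Sketch only: 'MyDB' and 'MyDB_Log' are placeholders.
BACKUP LOG MyDB WITH TRUNCATE_ONLY   -- same effect as DUMP TRANSACTION ... WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDB_Log, 100)      -- then shrink the file; target size in MB
GO
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB.bak'  -- restart the backup chain
GO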


Transaction Log Not Growing

Mar 28, 2000

I wonder if anyone could explain why, when monitoring the transaction log size, it doesn't appear to be growing. I'm using the following code to test logging of image data types. I've got 'Truncate log on Checkpoint' switched off and 'Select into/Bulk copy' also switched off.

Running the following code I would expect to see the transaction log grow and grow and grow... Monitoring it using Perfmon indicates that it isn't in fact logging...

DECLARE @ptrval varbinary(16)

-- Get a text pointer to the pr_info column for one publisher
SELECT @ptrval = TEXTPTR(pr_info)
FROM pub_info pr
INNER JOIN publishers p
    ON p.pub_id = pr.pub_id
   AND p.pub_name = 'New Moon Books'

DECLARE @Loop int
SELECT @Loop = 0

-- Overwrite the text value 10,001 times, requesting logged writes
WHILE @Loop <= 10000
BEGIN
    WRITETEXT pub_info.pr_info @ptrval
    WITH LOG
    'New Moon Books (NMB) '

    SELECT @Loop = @Loop + 1
END


Transaction Log Growing

Jan 24, 2003

Greetings,

My database has a situation where the transaction log is growing out of control, but I have not been able to figure out where the growth is occurring.

Is there a way to monitor the database in order to find out when the tlog is growing? Or, even better, what SQL is being executed that is causing this unreasonable tlog growth?


Thanks a million,

Roger
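One way to see when the log grows on SQL 2000, sketched here with a placeholder table name: sample DBCC SQLPERF(LOGSPACE) into a table from a scheduled job and look for the jumps; a SQL Profiler trace over the same window then shows which statements were active at those times.

-- Sketch only: 'LogSpaceHistory' is a placeholder name.
CREATE TABLE LogSpaceHistory (
    sample_time datetime DEFAULT getdate(),
    db_name     sysname,
    log_size_mb float,
    pct_used    float,
    status      int
)
GO
-- Run this step from a scheduled job, e.g. every 5 minutes:
INSERT INTO LogSpaceHistory (db_name, log_size_mb, pct_used, status)
EXEC ('DBCC SQLPERF(LOGSPACE)')
GO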


Transaction Log - Keeps Growing

Mar 6, 2007

Hi,

I'm getting this SQL server log...

The log file for database 'P5_Nextel' is full. Back up the transaction log for the database to free up some log space

What I'm doing is just resizing the space allocated, but the problem is that my disk is now out of space. How can I prevent this kind of problem without adding a new disk? Is there any other way?

Please advise.

Thanks!


Transaction Log Growing

Feb 29, 2008

I am having problems with the transaction log growing. I set up a maintenance plan to back up every 15 minutes. The recovery model is set to full, and the log is set to autogrow by 10 percent with unrestricted file growth. The space allocated is 2 MB; should this be set higher? It also seems that whenever I uncheck auto shrink, the log grows larger. The command I use to check is:

SELECT *
FROM sysfiles
WHERE name LIKE '%LOG%'
GO

This is the log size (MB) and the log space used (%) that it returns:

1.4921875    38.02356

This has grown 4 percent since yesterday. Are there any good practices for maintaining these log files? Any help would be appreciated.

msr976


Transaction Log Keeps Growing

Aug 24, 2006

Hi,

We have created a SQL Server 2000 database. We observe that the transaction log keeps growing over time, and we are now about to run out of space. We have been periodically shrinking the database; nevertheless the size has increased. I would imagine that a transaction log can be eliminated if we stop the database. Can that be done? Is there a way to completely wipe off the transaction log?

Thanks,
Yash


How To Avoid A Growing Transaction Log

Mar 12, 2003

I have to admit, I'm usually using the MySql database, but in this particular case I have to use MSSQL 2000.

Over to my problem.

I'm building a web-based system (who isn't these days) in which the user types arbitrary information that is published when the user pushes the save button. Nothing new about that.

Here comes the tricky part: when the user wants to edit an existing item, I copy all the information in the database and set the id of the 'edit copy' to the negative value (id 45 becomes id -45 for the edit copy). This is also done for all rows in other tables that are connected to the main item.

This way I get a copy that the user may edit without messing up the published information. When the user is done I either delete all the negative items (cancel) or delete the positive items and update the negative to become positive (save).

So far so good, almost... my problem is that the transaction log grows tremendously.

Is there any other way to accomplish a safe edit that doesn't affect the transaction log as much as my current solution?

Could I be doing something wrong when updating or deleting my items?
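For reference, a sketch of the pattern described above, with hypothetical table and column names; every cloned and deleted row is fully logged, which is where the log growth comes from:

-- Sketch only: 'items' and its columns are hypothetical.
DECLARE @id int
SET @id = 45

-- begin edit: clone the published row under a negated id
INSERT INTO items (id, title, body)
SELECT -id, title, body
FROM items
WHERE id = @id

-- save: remove the published row and promote the edit copy
DELETE FROM items WHERE id = @id
UPDATE items SET id = -id WHERE id = -@id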


Transaction Log Growing Uncontrolled?

Jan 28, 2014

The transaction log of a database grows until it runs out of disk space. If the disk fills up, all databases on the instance may get problems. Because of this I have set a limit on how much it may grow: up to 40 GB, in steps of 100 MB. It reaches its limit a couple of times a week, causing the application to hang.

The database file itself is about 2.3 GB.

The SQL version is 10.50.4276, SQL2008R2 SP2.

The recovery model is FULL.

I have a backup job that runs a FULL backup at midnight, and a backup job that runs a LOG backup every 30 minutes. Both finish with success. However, the transaction log is never truncated and the unused space is never released.

I have checked for long-running jobs; it sometimes says "backup_log", sometimes "active_transaction".

Could a workaround be to set the recovery model to SIMPLE and create a full-backup job that runs every 30 minutes for this database? It is a critical database...
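A quick check worth running on 2008 R2, sketched with a placeholder database name: sys.databases reports exactly what is blocking log truncation at any given moment.

-- Sketch only: 'MyDB' is a placeholder.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB'
-- LOG_BACKUP         -> the log has not been backed up since it last filled
-- ACTIVE_TRANSACTION -> an open transaction is pinning the log;
--                       the log cannot be truncated past it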


Growing Transaction Logs

Jan 14, 2008

Hi.

I know this question has been posted before, but after reading some other posts I'm still a bit confused about what to do.

In my database I have a 3 MB data file, and the transaction log file has grown to 500 MB and keeps growing... Can you please advise what I should do to reduce its size and/or stop it from continuing to grow?

Thanks for your help.


Transaction Logs Growing

Aug 16, 2006

Hello,



What if there are uncommitted transactions in the log, and backing up the database/logs does not truncate the log and it still grows?



Thanks

wade
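A sketch of how to find such a transaction (placeholder database name and SPID): the log can never be truncated past the oldest open transaction, so it has to be committed, rolled back, or its connection killed before a log backup frees anything.

-- Sketch only: 'MyDB' is a placeholder.
DBCC OPENTRAN ('MyDB')    -- reports the oldest active transaction: SPID, start time
GO
DBCC INPUTBUFFER (53)     -- replace 53 with the SPID reported above
GO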


HELLP!! Transaction Log Growing By 1GB Per Week.

Nov 7, 2001

Hi there - I have an SMS server which uses a SQL 7 database. The transaction log on it keeps growing at an incredible rate (about 1 GB per week). The database was successfully backed up yesterday, and I thought that should delete the inactive entries in the log. Can someone tell me some steps to reduce it to a regular size?

thanks for any help

Jonathan McCrea


Transaction Log File 210GB And Growing

Oct 12, 2007

I am running Microsoft SQL 2005 Express.

The transaction log of the main database has gotten huge and I just noticed it because the disk is nearly out of free disk space.

The database data file is 46 MB and the transaction log is 210 GB. This happened because autogrowth was on and no limit was set, I suppose.

So my goal is to knock this thing down to size and set the max size of the transaction log file to something like 5 GB.

I CANNOT do a backup of the transaction log because I do not have enough free space for such an operation. I tried doing:


DUMP TRANSACTION dbName


No luck, got the following error: Backup or restore requires at least one backup device.

Can I just stop the service and manually DELETE the transaction log file?

Thanks for the help.
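Don't delete the LDF; a database detached with a missing log may not reattach. A sketch of the usual route on Express when log backups aren't wanted (database and logical file names are placeholders; the real logical name is in sys.master_files):

-- Sketch only: 'dbName' / 'dbName_log' are placeholders.
ALTER DATABASE dbName SET RECOVERY SIMPLE       -- log now truncates on checkpoint
GO
USE dbName
GO
CHECKPOINT
GO
DBCC SHRINKFILE (dbName_log, 1024)              -- target size in MB
GO
ALTER DATABASE dbName
MODIFY FILE (NAME = dbName_log, MAXSIZE = 5GB)  -- cap future growth
GO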


Transaction Log Size Growing Very Large

Oct 10, 2006

hi all,


my transaction logs at the subscriber are growing very large, regardless of the recovery model. Any help?

We are using snapshot (push) replication.

thanks,

joey




Transaction Log Still Growing When In SIMPLE Recovery Mode

Dec 3, 2007

hi,
we have a SQL Server 2000 database which we set to 'SIMPLE' recovery mode at 6pm (due to nightly large data loads). We revert back to 'FULL' recovery mode at 6am.

My understanding was that in SIMPLE recovery mode the transaction log would not grow, because it would automatically be truncated after a checkpoint. However, this is not the case. I thought perhaps it could be due to a long-running uncommitted transaction, but when I ran 'dbcc opentran', the oldest running transaction didn't last for more than a couple of minutes. I manually ran a 'checkpoint' command as well, in the hope of forcing the transaction log truncation, and repeated this a couple of times to no avail. When I run 'dbcc sqlperf(logspace)', I can still see the transaction log growing.

It is not until I run 'backup log db with truncate_only' that the transaction log gets truncated.
Why does the transaction log not get automatically truncated in SIMPLE recovery mode?

Andrew


Merge-Replication - Transaction Log Is Growing Largely - Help Needed...

Apr 25, 2007

Hi,

We have configured the following on the Publisher server:



1) Merge Replication - Synchronisation to be running in the continuous mode.

2) Merge Replication - Synchronisation in Scheduled mode.



The issue that we are facing here is that the transaction log files of the databases being replicated are growing very large. And we get this error message on the Subscriber:



Error messages:



The Merge Agent failed to retrieve article information for publication 'MCC_Pos_CashlessPub'. Increase the -QueryTimeOut parameter and restart the synchronization. When troubleshooting, use SQL Profiler or restart the agent with a higher value for -HistoryVerboseLevel and check the output log file for errors. Correct any database engine conditions that may be causing internal replication stored procedures to fail. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201017)
Get help: http://help/MSSQL_REPL-2147201017

The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases (Source: MSSQLServer, Error number: 9002)
Get help: http://help/9002



In fact, for the past two days there has been no data movement or any changes to the tables, yet we can still see growth in the transaction log file.



As mentioned in the error description, when we checked the log_reuse_wait_desc column of the sys.databases table, it showed the value "LOG_BACKUP". So we took a database backup 3 times and a transaction log backup 2 times on the subscriber server where the error was thrown. Still the issue persists; there is no change in the transaction log size.



What is the reason behind the growth of the LDF? Is it because of the merge replication configured on the server?



Kindly check and let us know regarding this issue. We are facing a lot of problems because of it. Thanks in advance.



Regards,

Swapna.B.




The Time Dimension Is Growing Much After Every Process Of Cube

Apr 11, 2008

Hi guys,

I made a cube with a time dimension with the hierarchy year/month/date/hour.
The problem is that the dimension is growing too fast. In an older version of MSSQL (2000) the same dimension didn't grow so much.
Any ideas? The table is big (maybe around 1,500,000 rows per month); it now contains around 4,500,000 rows.


Problem With Getdate() In Transaction Takes The Insert Time Instead Of The Commit Time

Nov 12, 2007

Hi,

We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.

If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to be that of the commit; however, it seems that getdate() is evaluated when the statement runs rather than at the commit. This causes problems, because we select rows based on LAST_UPDATE: a commit may occur later, but the earlier insert timestamp is saved to the database and we miss that update.

We would like to know if there is any way to tell SQL Server not to evaluate getdate() until the commit, or any other way to get the commit to create the correct timestamp.

We are using default isolation level. We have tried using getdate(), current_timestamp and even {fn Now()} with the same results. SQL Queries that reproduce the problem are provided below:


/* Different functions to get current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/
/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/
/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp ) /* ROW_VERSION rowversion */
GO
/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/
/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
declare @time datetime
select @time = getdate()

insert into COMMIT_TEST_UPDATE (PKEY,LAST_UPDATE)
select PKEY, getdate()
from inserted
end
GO
create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
declare @time datetime
select @time = getdate()

update COMMIT_TEST_UPDATE
set LAST_UPDATE = getdate()
from COMMIT_TEST_UPDATE, deleted, inserted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO
/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
if ( select count(*) from deleted ) > 0
begin
delete COMMIT_TEST_UPDATE
from COMMIT_TEST_UPDATE, deleted
where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
end
GO
/* Delete any previous inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO
/* What is the current date/time */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the table - we see that the timestamp is 10 seconds older than the commit, in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE


Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missing deals (assuming the commit time is later than some artificial time window).

Regards,

Mark
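One commonly suggested alternative, sketched here with hypothetical names rather than taken from the thread, and assuming the server is at least SQL Server 2005 SP2 (which added MIN_ACTIVE_ROWVERSION): replace the datetime with a rowversion column and bound each read so a row is only picked up once every transaction that could still be writing below that bound has committed.

-- Sketch only: table, column and variable names are hypothetical.
CREATE TABLE dbo.COMMIT_TEST_RV (
    PKEY int PRIMARY KEY,
    ROW_VERSION rowversion            -- assigned automatically on insert/update
)
GO
DECLARE @last binary(8), @high binary(8)
SET @last = 0x0000000000000000        -- persisted from the previous poll
SET @high = MIN_ACTIVE_ROWVERSION()   -- lowest version an open transaction could own

SELECT PKEY, ROW_VERSION
FROM dbo.COMMIT_TEST_RV
WHERE ROW_VERSION >= @last
  AND ROW_VERSION <  @high            -- excludes rows from in-flight transactions

-- Persist @high as the next poll's @last: nothing is skipped and nothing is
-- re-read, even when a transaction commits long after its rows were written.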


SQL Server 2008 :: Displaying Transaction Time Punch Data In A Time Card Form?

Oct 7, 2015

I have a table called employee_punch_record that we use to store employee time clock punches.

The columns are:

employeeid,
punch_timestamp,
punch_type (In / Out),
closed (bit used as status for open or closed pay periods),
ident

Here are some examples of a record:

(employeeid, ident, punch_timestamp, punch_type, closed)

bkingery   6    2015-10-06 16:59:04.000   In    0
bkingery   7    2015-10-06 16:59:09.000   Out   0
bkingery   8    2015-10-06 16:59:13.000   In    0
bkingery   9    2015-10-06 18:22:44.000   Out   0
bkingery   10   2015-10-06 18:22:46.000   In    0
bkingery   11   2015-10-06 18:22:48.000   Out   0
bkingery   12   2015-10-06 18:22:51.000   In    0
tfeller    5    2015-10-05 17:00:05.000   In    0

We are using SQL Server 2008 as our database and Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screenshot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the punch record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screenshot.

I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move the transaction records into a more horizontally oriented record?
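A sketch of one way to do it, using the table and columns from the post and assuming punches strictly alternate In/Out per employee: number the In rows and the Out rows separately by time, then join the n-th In to the n-th Out.

-- Sketch only: assumes alternating In/Out punches per employee.
;WITH p AS (
    SELECT employeeid, punch_timestamp, punch_type,
           ROW_NUMBER() OVER (PARTITION BY employeeid, punch_type
                              ORDER BY punch_timestamp) AS pair_no
    FROM employee_punch_record
    WHERE closed = 0                        -- open pay periods only
)
SELECT i.employeeid,
       CONVERT(date, i.punch_timestamp) AS work_date,
       i.punch_timestamp AS time_in,
       o.punch_timestamp AS time_out        -- NULL while still clocked in
FROM p AS i
LEFT JOIN p AS o
  ON  o.employeeid = i.employeeid
  AND o.pair_no    = i.pair_no
  AND o.punch_type = 'Out'
WHERE i.punch_type = 'In'
ORDER BY i.employeeid, i.punch_timestamp

From there, Access can present one row per In/Out pair, or a crosstab can pivot the pairs per day into the horizontal layout.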


MSmerge_tombstone Growing And Growing

Mar 1, 2004

I have merge replication set up for 6 SQLCE subscribers. I have noticed that the MSmerge_tombstone table is growing at a fast rate, regardless of any changes to the data in the database. It seems to be consistently adding 50 rows of data to the table every 2 minutes. As the table grows, it causes the SQLCE subscribers to fail with the following message:

ERROR: -2147467259
SQL Server Reconciler failed: Run

ERROR: -2147200925
: Failed to enumerate changes in the filtered
articles.

ERROR: 0
: {call sp_MSsetupbelongs
(?,?,?,?,?,0,?,?,1,?,?,?,?,?,?)}

ERROR: 0
: The merge process timed out while executing a
query. Reconfigure the QueryTimeout parameter and retry
the operation.

I'm sure that this is due to the size of the
MSmerge_tombstone.

Should the MSmerge_tombstone table grow at this rate?
36,000 rows every 24hrs!

I understand there is the sp_mergecleanupmetadata stored procedure, but if I use this, does that mean that because I have to reinitialise all the subscribers, they are going to have to pull down the whole subscription again?

I have since changed a setting to make the subscription expiration date 8 days instead of never expires, but we're still getting 50 rows added every 2 minutes.

SQL SERVER 2000 SP3
Hope someone can shed some light on this for me.

Thanks.


Single Sql Connection, Several Transaction At Same Time?

Nov 30, 2006

Hi Friends,
Is it possible to create a single SQL connection and run several transactions at the same time? If so, can you explain?
Thanks.


Problems With Transaction Scope Time Out

Oct 2, 2007

Hi, I'm trying to wrap a transaction around a block of code to make sure every statement is carried out. Here's what I have:

using (TransactionScope ts = new TransactionScope())
{
    ...
    ts.Complete();
}

The code executed in the ... part is a set of SQL commands within my DAL. If the above takes less than 30 seconds, everything is carried out correctly; however, if it takes longer than 30 seconds I receive a SQL timeout error. I tried modifying the code to:

using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required, TimeSpan.FromMinutes(5)))
{
    ...
    ts.Complete();
}

But it did not seem to make a difference. I suppose the easiest option would be to increase the timeout for every SQL command in my DAL, but I feel that the 30-second timeout is sufficient for a single command.
I'd appreciate it if someone could tell me how this should be done correctly. Thanks


Transact SQL :: Calculating Transaction Time

Nov 23, 2015

select TOP 10 rec_id, trans_id, number_id, card_no, message_id, trans_datetime
from [dbo].[trans_log]
order by trans_datetime desc

Sample rows (rec_id, trans_id, number_id, card_no, message_id, trans_datetime):

101, 1, 34343, 99999, 200, 2015-11-23 12:27:25.710
101, 2, 34343, 99999, 210, 2015-11-23 12:27:26.710
102, 3, 43434, 88888, 200, 2015-11-23 12:28:26.714
102, 4, 43434, 88888, 233, 2015-11-23 12:28:27.710

Expected result:

34343, 99999, datediff(ss, '2015-11-23 12:27:25.710', '2015-11-23 12:27:26.710')   -- difference between row 2 and row 1
43434, 88888, datediff(ss, '2015-11-23 12:28:26.714', '2015-11-23 12:28:27.710')   -- difference between row 4 and row 3
-- difference between row 6 and row 5, and so on

In the above query, I always want to find the difference in transaction datetime between the second and first row in a moving window. I have the unique id rec_id to compare the next row with the previous row.
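On SQL Server 2012 or later this is what LAG is for; a minimal sketch against the table from the post:

-- Sketch only: assumes SQL Server 2012+ for LAG.
SELECT rec_id, number_id, card_no,
       DATEDIFF(ss,
                LAG(trans_datetime) OVER (PARTITION BY rec_id
                                          ORDER BY trans_datetime),
                trans_datetime) AS seconds_between   -- NULL on each rec_id's first row
FROM [dbo].[trans_log]
ORDER BY rec_id, trans_datetime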


SQL 2012 :: Read Response Time Transaction Log

Apr 15, 2014

We see poor performance spikes on the drive containing our log file, but only for reads, and they seem to happen when we run a re-index job. Is that a likely cause of the poor read performance on the log file, and what reads are done from a log file in the first place?


Transaction Log Backup Taking A Long Time

Mar 30, 2007

Hello,

I'm trying to figure out why my transaction log backup takes up to an hour to complete. I started off with the full recovery model, a full database backup every Sunday, differential backups every Tuesday/Thursday, and log backups every 5 minutes. I would have thought that the log backups would execute much more quickly because I'm backing the log up so often.

Here is my backup statement; I'm hoping I've got a wrong option that you can point out to me:

BACKUP LOG [xxxx] TO [LogFilexxxxBackups] WITH NOINIT , NOUNLOAD , NAME = N'xxxx log backup', SKIP , STATS = 10, NOFORMAT


Process Or SQL Transaction Takign Memory And Processor Time

Mar 22, 2002

Hi,

While watching through Performance Monitor, the processor time often spikes far above the memory counters.

Could you please tell me how to find out which process is doing that?

Thanks
John Jayaseelan
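A sketch of a first step on that era of SQL Server: sysprocesses carries cumulative CPU per connection, so sorting by it points at the busiest sessions.

-- Sketch only: cpu is cumulative over the connection's lifetime.
SELECT spid, status, loginame, hostname, cpu, physical_io, cmd
FROM master..sysprocesses
ORDER BY cpu DESC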


URGENT !!!! Transaction Log Backup Running Longer Time

Dec 26, 2002

Hi Guys,

We have a 20 GB database with huge transaction volume. The transaction log backup is scheduled every hour from 3:00 AM to 9:00 PM.

We take a full backup to disk at 9:00 PM and again a full backup to tape at 2:00 AM.

It works fine during the day from 6:00 AM, completing within seconds with a size of approx. 50 to 200 MB.

But the very first transaction log backup at 3:00 AM runs for around 3 hours, and its size is approx. 11 GB, which is almost equivalent to the full backup size. There are some DTS packages that run in the night, and as usual reindexing and integrity checks are running, and there is no large user traffic during the night. But I have no idea why the very first transaction log backup in the morning takes so long and is this huge. Is there any workaround to fix this problem?

Please advise.

Thanks,
Anu


Error 8525: Distributed Transaction Completed. Either Enlist This Session In A New Transaction Or The NULL Transaction.

May 31, 2008

Hi All

I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.

If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Query Analyzer.

I've tried both with and without the SET XACT_ABORT, and also tried with SET IMPLICIT_TRANSACTIONS OFF.

set XACT_ABORT ON
Begin distributed Tran
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TRANSACTIONMAIN
set REPFLAG = 0 where REPFLAG = 1 and DONE = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.WBENTRY
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.FIXED
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.ALTCHARGE
set REPFLAG = 0 where REPFLAG = 1
update OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1
update TSADMIN.TSAUDIT
set REPFLAG = 0 where REPFLAG = 1
COMMIT TRAN


It's got me stumped, so any ideas gratefully received. Thanks


SSIS, Distributed Transaction Completed. Either Enlist This Session In A New Transaction Or The NULL Transaction.

Feb 22, 2007

I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table with the same structure.

For reading the data I have written dynamic T-SQL based on some conditions, and based on those it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout error.

I have increased and decreased the timeout to catch the error, but it is still there. I have tried setting the CommandTimeout property to 0.

If I use 0 for CommandTimeout then I get: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.

and

Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.

Please help me; it's very urgent.


Distributed Transaction Completed. Either Enlist This Session In A New Transaction Or The NULL Transaction.

Feb 6, 2007

I am getting this error: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.

Does anybody have an idea?!


Distributed Transaction Completed. Either Enlist This Session In A New Transaction Or The NULL Transaction.

Dec 22, 2006

I have a script task inside a sequence container that drops the existing tables. This sequence container is connected to another sequence container, and all of these sit inside a For Each Loop container. When I run the package it works fine on the first loop, but it gives me an error on the second execution.

The message is like this:

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.


Distributed Transaction Completed. Either Enlist This Session In A New Transaction Or The NULL Transaction. (HELP)

Jan 8, 2008

Hi,

I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."

My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.



Thanks in advance.



At What Rate Is The Db Growing???

Oct 27, 1999

Hello,

Does 7.0 store database growth-rate information?

I am looking for information that tells me how fast a db is growing, in MB and/or percentages, over a given period of time (weekly, monthly, yearly, etc.), either in real numbers or estimates. Does 7.0 already store something like this, or do I need to write some code for it?

Or does someone have something like this already coded that they would be willing to share?

Thank you in advance.
Troy
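SQL 7 doesn't keep a growth history on its own; a sketch of a roll-your-own approach (table name is a placeholder): snapshot file sizes from a scheduled job and diff the samples over whatever period you care about.

-- Sketch only: 'GrowthHistory' is a placeholder; sysfiles.size is in 8 KB pages.
CREATE TABLE GrowthHistory (
    sample_date datetime DEFAULT getdate(),
    file_name   sysname,
    size_mb     numeric(10, 2)
)
GO
-- Run from a scheduled job (e.g. weekly) in each database of interest:
INSERT INTO GrowthHistory (file_name, size_mb)
SELECT RTRIM(name), size / 128.0   -- 128 pages = 1 MB
FROM sysfiles
GO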







