I have a few data flow tasks that all need to execute successfully before I commit the changes, so I use a few nested sequence containers with the parent's TransactionOption set to Required and all of the children set to Supported. This should work, right? Instead I get: "The AcquireConnection method call to the connection manager 'Databasenamehere' failed with error code 0xC0202009." If I switch the parent back to Supported or NotSupported, it executes fine.
I have a package that has many containers that execute in sequence.
If any container fails, the package must not fail, because the subsequent containers still have to run.
However, if containers fail, I do not want to rerun the entire package. Checkpoint restartability only allows you to start from where your package failed; my package will not fail, and in any case I do not want to start from the point of failure but only rerun the containers that failed.
Is this possible? Can one maybe run only certain containers in a package through dtexec or another command-line tool?
There is a bug in SSIS 2005 concerning the way that checkpoint files behave in concert with Sequence containers. It is documented (at length) here:
Is it possible to execute a container regardless of the checkpoint file? (http://forums.microsoft.com/msdn/showpost.aspx?postid=1574262&siteid=1&sb=0&d=1&at=7&ft=11&tf=0&pageid=0)
Is anyone from Microsoft able to give a definitive answer yet as to whether this will be fixed in Katmai? A yes or no answer would be very much appreciated.
I have about 12 sequence containers mapped out to execute separately based on a precedence constraint and an expression. These lie at the same level and order in my sequence, and only one of them will execute, based on the expression. After any of these executes, I have a subsequent sequence container that I'm attempting to execute, so I've set a precedence constraint from all twelve of the prior sequence containers to this single sequence container that I would like to run after any of the 12. My problem is that the package will only honor one of these twelve precedence constraints on the single sequence container at a time. The designer lets me graphically attach the precedence constraint from all 12 to this single sequence container, but when the package runs, it fails to follow through to it. I'm trying to figure out why this is the case and how I can get what I would like to work, to work. Thanks.
I have run into a problem! I'm developing an SSIS package programmatically in C#, but when I create and add a container (ForEachLoop or Sequence), the container is not visible at design time in the Integration Services designer (when I open the .dtsx package afterwards). Does anyone have a solution to this problem? It is only a problem with containers I create myself (it works when I'm adding, e.g., data flow tasks to existing containers).
I have a sequence container (named One) and two sequence containers (named Two and Three) nested inside container One. In containers Two and Three I have Execute SQL tasks that execute stored procedures. Then I have a Send Mail task linked to my sequence container One on a failure constraint.
The failure constraint to the Send Mail task is not working. I want each of the sequence containers (Two and Three) to execute its SQL tasks, and if one fails, to wait for the other to finish before the failure task executes.
I have multiple sequence containers in my package. I only want to have one Send Mail task for the failure/completion of the package. If I put the Send Mail task in the last sequence container and the first sequence fails, the Send Mail task will not be reached and therefore no email will be sent out. Is there a way to have one Send Mail task for all the sequence containers and allow it to send mail regardless of which sequence fails/completes?
I have a DTS package which contains:
- 1 "Execute SQL" task
- 1 "Connection" object
The provider for the connection is SQLOLEDB ("Microsoft OLEDB Provider for SQL Server"), and this works just fine with transactions in ADO etc. The MDAC version is 2.5, and the SQL Server Client Utils version is 7.0 SP2.
The package properties are set as follows:
- "Use transactions" is on
- "Auto commit transaction" is on
- "Read committed" isolation level
The Execute SQL task has the following workflow properties:
- "Join transaction if present" is on
- "Commit transaction on successful..." is on
- "Rollback transaction on failure" is on
- "Execute on main package thread" is on (just in case)
When I execute the package (from the designer, or cmd line), I get the following (most informative) error: "Error Source: Microsoft Data Transformation Services (DTS) Package Error Description: Unspecified error"
If I change the package properties to remove "Use transactions", it executes just fine.
Hi, all. I'm writing test cases in C# for a few methods that make changes in a database. To prevent making permanent changes I used BeginTransaction/Rollback, and everything was good. But this doesn't work if the tested method contains BeginTransaction/Rollback code itself. An error appears in NUnit: System.InvalidOperationException : SqlConnection does not support parallel transactions. Does somebody know how to solve this problem?
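Not sure if it matters, but since one SqlConnection cannot carry two independent transactions, I've been wondering whether the nesting belongs on the server side as a savepoint instead of a second BeginTransaction. A minimal T-SQL sketch of what I mean (table and savepoint names made up):

BEGIN TRANSACTION
    SAVE TRANSACTION BeforeInnerWork  -- a savepoint instead of a second BEGIN TRAN

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1

    -- the "inner rollback": undoes only the work done since the savepoint
    -- and leaves the outer transaction open
    ROLLBACK TRANSACTION BeforeInnerWork
COMMIT TRANSACTION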
Hi there, I have decided to move all my transaction handling from ASP.NET to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back the transactions just like myTransaction.Rollback() in ASP.NET. But what about exceptions? In ASP.NET, I am used to doing the following:

Try
    'execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try

Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than ROLLBACK TRANSACTION? Any help is greatly appreciated.
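For reference, this is the shape of the SQL Server 2000 pattern I've been reading about (there is no TRY/CATCH before 2005): check @@ERROR after each statement and use RAISERROR so the client still receives an exception. Object names here are invented:

CREATE PROCEDURE dbo.DoWork
AS
BEGIN TRANSACTION

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42
IF @@ERROR <> 0
BEGIN
    -- undo everything and surface the failure to the caller;
    -- severity 16 arrives in ADO.NET as a SqlException
    ROLLBACK TRANSACTION
    RAISERROR('Update failed; transaction rolled back.', 16, 1)
    RETURN
END

COMMIT TRANSACTION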
This is my code in VB.NET with a SQL transaction. I am using InsertCommand and UpdateCommand for executing the SQL queries in consecutive transactions, as follows. How can I achieve parallel transactions in SQL?

------------------ start of code ---------------------
Try
    bID = Convert.ToInt32(Session("batchID"))
    strSQL = ""
    strSQL = "Insert into sessiondelayed (batchid, ActualEndDate) values (" & bID & ",'" & Format(d1, "MM/dd/yyyy") & "')"
    sqlCon = New System.Data.SqlClient.SqlConnection(ConfigurationSettings.AppSettings("conString"))
    Dim s1 As String = sqlCon.ConnectionString.ToString
    sqlDaEndDate = New System.Data.SqlClient.SqlDataAdapter("Select * from sessiondelayed", sqlCon)
    dsEndDate = New DataSet
    sqlDaEndDate.Fill(dsEndDate)
    dbcommandBuilder = New SqlClient.SqlCommandBuilder(sqlDaEndDate)
    'sqlCon.BeginTransaction()
    'sqlDaEndDate.InsertCommand.Transaction = tr
    If sqlCon.State = ConnectionState.Closed Then
        sqlCon.Open()
    End If
    sqlDaEndDate.InsertCommand = sqlCon.CreateCommand()
    tr = sqlCon.BeginTransaction(IsolationLevel.ReadCommitted)
    sqlDaEndDate.InsertCommand.Connection = sqlCon
    sqlDaEndDate.InsertCommand.Transaction = tr
    sqlDaEndDate.InsertCommand.CommandText = strSQL
    sqlDaEndDate.InsertCommand.CommandType = CommandType.Text
    sqlDaEndDate.InsertCommand.ExecuteNonQuery()
    tr.Commit()
    sqlDaEndDate.Update(dsEndDate)
    sqlCon.Close()
Catch es As Exception
    Dim s2 As String = es.Message
    If sqlCon.State = ConnectionState.Closed Then
        sqlCon.Open()
    End If
    strSQL = " update SessionDelayed set ActualEndDate= '" & Format(d1, "MM/dd/yyyy") & "' where batchid=" & bID
    sqlDaEndDate.UpdateCommand = sqlCon.CreateCommand()
    tr1 = sqlCon.BeginTransaction(IsolationLevel.ReadCommitted)
    sqlDaEndDate.UpdateCommand.Connection = sqlCon
    sqlDaEndDate.UpdateCommand.Transaction = tr1
    sqlDaEndDate.UpdateCommand.CommandText = strSQL
    sqlDaEndDate.UpdateCommand.CommandType = CommandType.Text
    sqlDaEndDate.UpdateCommand.ExecuteNonQuery()
    tr1.Commit()
    sqlDaEndDate.Update(dsEndDate)
    sqlCon.Close()
End Try
We are using peer-to-peer transactional replication. When using the default agent profile, the replicator stops processing commands when it encounters an error. It appears to keep trying to apply the command that caused the error and essentially hangs on that command. We have changed our agent profiles to "Continue on data consistency errors" - not the kind of option to give you a warm fuzzy feeling but we just don't know what else to do.
Where are the commands that cause errors stored within the replication environment? Can they be individually viewed, edited, and/or deleted? We would like some alternative to "continue on data consistency errors."
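From what I can tell, the pending commands live in the distribution database (MSrepl_transactions/MSrepl_commands), and sp_browsereplcmds is supposed to decode them into readable text. Something like this is what I've been trying, with a made-up sequence number:

USE distribution
GO
-- decode the queued commands for a given log sequence number range
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000001900001926000800000000',
    @xact_seqno_end   = '0x0000001900001926000800000000'

The rows in MSrepl_commands are stored in an internal format, so editing or deleting them by hand is presumably unsupported, which is exactly why we'd like a better answer than "continue on data consistency errors".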
Hi, I am working on VS 2005 with SQL Server 2000. I have used the TransactionScope class. Example reference: http://www.c-sharpcorner.com/UploadFile/mosessaur/TransactionScope04142006103850AM/TransactionScope.aspx The code is given below.

using System.Configuration;
using System.Data.SqlClient;
using System.Transactions;

protected void Page_Load(object sender, EventArgs e)
{
    System.Transactions.TransactionOptions transOption = new System.Transactions.TransactionOptions();
    transOption.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    transOption.Timeout = new TimeSpan(0, 2, 0);
    using (System.Transactions.TransactionScope tranScope = new System.Transactions.TransactionScope(TransactionScopeOption.Required, transOption))
    {
        using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["nwConnString"].ConnectionString))
        {
            int i;
            con.Open();
            SqlCommand cmd = new SqlCommand("update products set unitsinstock=100 where productid=1", con);
            i = cmd.ExecuteNonQuery();
            if (i > 0)
            {
                using (SqlConnection conInner = new SqlConnection(ConfigurationManager.ConnectionStrings["pubsConnString"].ConnectionString))
                {
                    conInner.Open();
                    SqlCommand cmdInner = new SqlCommand("update Salary set sal=5000 where eno=1", conInner);
                    i = cmdInner.ExecuteNonQuery();
                    if (i > 0)
                    {
                        tranScope.Complete(); // this statement commits the executed query
                    }
                }
            }
        }
    } // Dispose TransactionScope object, to commit or rollback transaction.
}

It gives an error like "The partner transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D025)". The databases I have used are the Northwind and pubs databases, which come by default with SQL Server 2000. So, kindly let me know how to proceed further. Thanks in advance, Arun.
I am having a problem getting error rows to redirect between an OLE DB Source and an OLE DB Destination when using transactions. Each time I turn on the transaction control I get an error stating:
"[OLE DB Destination [48]] Error: The input "OLE DB Destination Input" (61) cannot be set to redirect on error using a connection in a transaction."
I get the above error when using MSDTC. I have the data flow inside a Sequence Container with TransactionOption set to Required and IsolationLevel set to Serializable. I have tried all the isolation levels.
I have the error rows piped off to a separate OLE DB Destination. I have also tried using native SQL transactions with Execute SQL tasks to BEGIN, COMMIT, or ROLLBACK the transaction. This does not work either: it looks right when the data flow succeeds, but using Profiler I can see SSIS opens a separate connection for the BEGIN and then another one for the Data Flow task. When I intentionally fail the Data Flow, the ROLLBACK always fails. I made sure I had RetainSameConnection turned on for the connection I was using.
I am speculating that the Data Flow does not know what to roll back: the actual rows that succeeded, or the error rows that are getting piped off.
I am fairly stumped on this one so any help is appreciated.
I'm receiving the below error when trying to implement Execute SQL Task.
"The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION." This error also happens on COMMIT as well and there is a preceding Execute SQL Task with BEGIN TRANSACTION tranname WITH MARK 'tran'
I know I can change the TransactionOption property from "Supported" to "Required"; however, I want to mark the transaction. I was copying the way the Import/Export Wizard does it, but I'm unable to figure out why its version works and mine doesn't.
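For context, this is the shape of what the marked transaction should look like when it runs as one batch on one connection; my working theory is that my two Execute SQL Tasks get different connections, so the COMMIT can't see the BEGIN. The table name is just an example:

-- works when BEGIN and COMMIT run on the same connection/session
-- (one Execute SQL Task, or RetainSameConnection=True on the connection manager);
-- the mark is written to the log for RESTORE ... WITH STOPATMARK
BEGIN TRANSACTION LoadBatch WITH MARK 'nightly load'

    INSERT INTO dbo.StagingOrders (OrderId) VALUES (1)  -- the real work goes here

COMMIT TRANSACTION LoadBatch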
Hi all, can anybody suggest a website where I can find articles on managing transactions with SQL Server? Also a scenario where the transactions take place in an environment involving two different databases, like bank account and credit card transactions (specifically of the two-way kind). Thanks
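To make the scenario concrete, this is the sort of two-way, two-database transfer I mean (database/table names invented). As far as I know, on a single server one transaction can span both databases, so either both updates commit or neither does:

BEGIN TRANSACTION

-- debit the bank account in one database...
UPDATE BankDB.dbo.Accounts
SET Balance = Balance - 500
WHERE AccountId = 1001

-- ...and credit the card in the other, inside the same transaction
UPDATE CardDB.dbo.CreditCards
SET Balance = Balance + 500
WHERE CardId = 2002

COMMIT TRANSACTION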
I have a web application with a shopping cart. How do I stop all the shopping cart transactions from going into the db log? Is this possible? These are only transient data movements that will never be needed for a restore, and they cause log bloat. Or is there a better way to stop log bloat?
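From what I've read so far, SQL Server always writes modifications to the log; the practical lever seems to be the recovery model and log backups, so the log space gets reused instead of growing. A sketch of the two options I'm weighing (database name made up):

-- option 1: SIMPLE recovery; the log is still written, but committed
-- work is cleared at each checkpoint, so the file stops growing
ALTER DATABASE ShopDB SET RECOVERY SIMPLE

-- option 2: stay in FULL recovery and back the log up on a schedule,
-- which also frees the space for reuse
BACKUP LOG ShopDB TO DISK = 'D:\Backups\ShopDB_log.bak'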
How can we change connection properties in a DTS package? You can loop through the connections collection, but the connection ID is not a static one, so you can't rely on that. Is there another way of changing connection properties?
I am currently designing a DTS Package to import data that is processed daily into a large database.
I have to design the package such that if any step fails when importing, I roll back the entire transaction.
I have designed the package with this in mind, checked "join transaction if present" and "rollback transaction on failure" in all of the workflows. I have also made all workflows serialized.
However, when I run the package, it fails on one of the data pumps with the error:
I am replicating (finally!!) and in my publisher's agent history I can see it says xx transactions with xx commands were delivered (xx being the number). Where can I look to see what the transactions or commands are?
Is there a place the system stores this information?
Is there a point to wrapping a single UPDATE or INSERT statement in an explicit TRANSACTION:
BEGIN TRANSACTION
INSERT INTO Table (...) VALUES (...)
COMMIT TRANSACTION
I understand ACID and the concept of transactions. However, I thought they were only necessary for multi-statement operations. I'm maintaining code that does this and am wondering if it is necessary. Does SQL Server guarantee ACID for single statements? Are single UPDATE/INSERT statements prone to race-condition-like effects without explicit transactions?
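To make the question concrete, this throwaway example shows the behavior I believe already holds, i.e. that each statement is its own implicit transaction:

CREATE TABLE #demo (id int PRIMARY KEY, val int CHECK (val >= 0))
INSERT INTO #demo VALUES (1, 10)
INSERT INTO #demo VALUES (2, 1)

-- a single UPDATE touching both rows: row 2 would violate the CHECK
-- constraint, so the whole statement fails...
UPDATE #demo SET val = val - 5

-- ...and row 1 is unchanged as well, i.e. the statement was atomic on its own
SELECT * FROM #demo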
If you run BEGIN TRANSACTION and then run a statement such as an update query, and you see that it affected the number of rows you wanted it to affect, is there a way to look at the actual data that changed before you COMMIT TRANSACTION?
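Something like this is the workflow I have in mind (table and column names invented):

BEGIN TRANSACTION

UPDATE dbo.Prices SET Amount = Amount * 1.10 WHERE Category = 'books'

-- still inside the transaction: this connection sees its own
-- uncommitted changes, so the new values can be inspected here
SELECT * FROM dbo.Prices WHERE Category = 'books'

-- then decide:
-- COMMIT TRANSACTION   -- keep the changes
ROLLBACK TRANSACTION    -- or throw them away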
I have a table with around 240 columns, and one of the columns in the table is InsertTime (DATETIME); I use a GETDATE() function in the stored proc when we insert data into the table. For the same millisecond, 2007-06-27 09:32:58.303, I have around 7600 records in the database. The stored proc is called for each individual record and we don't batch the transactions. Is this possible?
I did some benchmarking on this server and I can insert only approximately 700-800 records/sec into this particular table.
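For what it's worth, here is a quick throwaway test of the effect in question; DATETIME only resolves to about 3.33 ms, so a tight loop stamps many rows with the identical value:

CREATE TABLE #t (id int IDENTITY, inserttime datetime DEFAULT GETDATE())

DECLARE @i int
SET @i = 0
WHILE @i < 10000
BEGIN
    INSERT INTO #t DEFAULT VALUES
    SET @i = @i + 1
END

-- shows how many rows share the exact same datetime value
SELECT inserttime, COUNT(*) AS rows_per_tick
FROM #t
GROUP BY inserttime
ORDER BY rows_per_tick DESC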
I have a small database that I have been testing. I get an error about a transaction deadlock. The code is in stored procedures, and I added transactions to the SPs, but the error happened again. I wrapped the whole SP in just one transaction, and I don't have any index on the tables. When I test by running a program that sends 3 calls at a time, it will get a deadlocked transaction as I send 6 or 9 at a time. I am not sure how it can have a deadlocked transaction after I used transactions (BEGIN and COMMIT) in the SPs. Steve
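In case it helps frame an answer: the pattern I keep seeing suggested is to catch deadlock error 1205 and retry the transaction, since transactions by themselves don't prevent deadlocks. A sketch, assuming SQL Server 2005+ and an invented piece of work:

DECLARE @retries int
SET @retries = 3

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        UPDATE dbo.Widgets SET Qty = Qty - 1 WHERE WidgetId = 7  -- example work
        COMMIT TRANSACTION
        BREAK  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1  -- we were the deadlock victim: try again
        ELSE
        BEGIN
            SET @retries = 0  -- another error, or out of retries: give up
            RAISERROR('Transaction failed after retries.', 16, 1)
        END
    END CATCH
END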
I am working with transactions and use TRY/CATCH to capture errors, and in the event of an error I have to roll back the transaction. How can I perform this? Most of the errors I foresee are either insertion of null values into non-nullable columns or violation of primary keys while inserting duplicates. I started by coding the following way, but apparently it does not roll back; the TRY/CATCH does not seem to work for the above kinds of errors. Can somebody help?
DECLARE @REPORTING_PERIOD VARCHAR(6)
BEGIN TRY
BEGIN TRANSACTION
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SET @REPORTING_PERIOD =(Select REPORT_PERIOD_ID from dbo.T_REPORT_PERIOD where C_FLAG_ACTIVITY=1)
--Step 1
INSERT INTO [dbo].[T_COUNTRIES]
([C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,[FK_REPORTING_PERIOD])
SELECT [C_COUNTRY]
,[LB_COUNTRY]
,[C_REGION]
,@REPORTING_PERIOD
FROM [dbo].[IN_T_COUNTRIES]
--Step 2
INSERT INTO [dbo].[T_FLE]
([FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,[FK_REPORTING_PERIOD])
SELECT Null as [FK_P_FLE]
,[C_COSMOS]
,[LB_FLE]
,[C_PARETO]
,[C_OPCO_SCOPE]
,[C_LEVEL]
,[C_FLE_TYPE]
,[C_ACTIVITY]
,[F_MATERIAL]
,[C_MATERIAL_PRIORITY]
,[C_CALCULATION_METHOD]
,[F_CREDIT_RISK_MATERIALITY]
,[V_PARTICIPATION]
,@REPORTING_PERIOD -- the SELECT must supply one value per column in the insert list above
FROM [dbo].[IN_T_FLE]
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() as ErrorNumber,
ERROR_LINE() as ErrorLine,
ERROR_MESSAGE() as ErrorMessage;
-- Test XACT_STATE for 1 or -1.
-- XACT_STATE = 0 means there is no transaction and
-- a commit or rollback operation would generate an error.
-- Test whether the transaction is uncommittable.
IF (XACT_STATE()) = -1
BEGIN
PRINT
N'The transaction is in an uncommittable state. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
-- Test whether the transaction is active and valid.
IF (XACT_STATE()) = 1
BEGIN
PRINT
N'The transaction is active and valid. ' +
'Rolling back transaction.'
ROLLBACK TRANSACTION;
END;
END CATCH
I've been searching around and haven't found anything that simply states what I want to know.
I want to use a transaction within my CLR stored proc; to do so, I've got System.Transactions referenced, and I can access the current transaction via Transaction.Current.
My questions are:
Will there always be a current transaction?
Do I need to create a new transaction if one doesn't already exist?
I need to push rows from CE to SQL Server 2000 and afterwards delete those rows from the CE database, but only if all rows have been sent to SQL Server 2000.
I think the best approach is to work with transactions. Since I know I can use transactions for this purpose, can anybody give me a link with push transaction examples?
What does "Transactions/sec" counter in SQL 2005 under databases do in terms of performance. My counter shows almost 100% all the time in 4 terrbyte DB in superdome with many CPUs.
My first question: can there be a performance loss if I uncomment the lines about transaction usage? I mean, when I do this I start to get more timeouts.
My problem goes on. When I comment those lines out and run a stress tool, I am getting "column X does not belong to table Y" errors. If those lines are not commented, I am not getting this error, but I get timeout errors frequently. So, my second question: is there something wrong in my query, or is there a bad coding practice I am following? Could someone offer a better and more robust sample for this code block?
By the way, connection pooling is on. And these errors are observed under high loads.