Transactions And Checkpoints Not Working As Expected
May 2, 2007
I have a package that has a container containing multiple DF Tasks.
The container is set to be Transacted, such that should any of the DF tasks fail the data inserted in any of the previous tasks rolls back.
This works as expected.
However, this container is part of a larger package and so I wanted to have a checkpoint on it, so that should any of the tasks within it fail, the package could be restarted from this container.
However, I would expect the functionality to be that on failure the checkpoint would cause the whole container to be restarted (because the container is transacted, all DF task work would be rolled back), so we would expect it to start at task 1 again.
This is not the functionality I see. The package restarts from the failed task within the container every time.
According to the Prof SSIS book, it should start again from the first task, and as explained this makes sense for a transacted container, as that is what you would want to happen.
A previous forum message encountered the same issue it appears:
See SSIS Checkpoints 04 Dec 2006.
This is an extract from it:
"I only experimented a little but my experience was that when I have a transacted container with multiple tasks that are checkpointed, SSIS would try to restart from the task that failed rather than from the first task in the container. The transaction was being rolled back correctly though.
In short, I felt that check points were not aware of transactions.
So, I ended up with this setting and it works for me:
Container is checkpointed and transacted.
Tasks within the container are not checkpointed.
'FailParentOnFailure' property set to True on the tasks.
That way, if a task failed, it would fail the container and a checkpoint would be created at that level. Transaction would be rolled back as usual."
While this makes sense to me, these are not the same property settings that the SSIS book says should work.
Additionally, this didn't work for me either!!
I have tried every combination of FailPackageOnFailure and FailParentOnFailure that makes sense, but every time the package restarts from the failed task within the container.
The transaction is rolled back correctly every time, but it seems the checkpoint that is created is not used correctly when dealing with transactions within containers.
Hello everyone, I have been studying the relationship between SSIS checkpoints and SSIS transactions.
What I want to do is create a package with several tasks, where each task creates a new transaction and at the same time each task is a checkpoint, so that the package restarts from the failed task rather than from the beginning.
The Transaction-Checkpoint solution contains two packages*: CkeckpointsAndTransactions1.dtsx and CkeckpointsAndTransactions2.dtsx
Package CkeckpointsAndTransactions1 contains four tasks, and task three always fails. The package is configured to use checkpoints and each individual task creates a checkpoint. Additionally, each task creates a new transaction. The package has its TransactionOption property set to NotSupported.
In the CkeckpointsAndTransactions1 package something is wrong: when the third task fails and I restart the package, it starts from the beginning. This is wrong!! The package should restart from the failed task.
For the package to work as expected, it is necessary to add a new task between the second and third tasks, and that new task must not have transaction support. This is shown in the CkeckpointsAndTransactions2 package: after a failure, I restart the package and it restarts from the failed task, as expected. But the additional task should not be necessary!!
Does anyone know what is wrong in my packages?? How can I create a package with several tasks, where each task creates a new transaction and at the same time each task is a checkpoint?
*Please download the BIDS solution from hernan93.files-upload.com (Transaction-Checkpoint.zip file)
I am trying here to get a situation going which includes both transactions and checkpoints, to make sure that when something goes wrong a) I don't get data corruption (hence the transactions) and b) I don't have to completely restart my 2-hour run (hence the checkpoints). However, I ran into something of which I cannot tell whether it is intended behaviour or simply a bug.
Here's the deal: I have an SSIS package in which I enable checkpoints (CheckpointUsage: IfExists and SaveCheckpoints: True). I have two dataflows which follow each other (the first dataflow prepares data for the second dataflow to edit). Because I want to make sure that my data is secure, I put a separate transaction on both of the dataflows.
And here my problem arises. If I run my package now and the second dataflow breaks then my checkpoint sends me back to the first dataflow and my initial insert is executed again, which isn't meant to happen (I enabled checkpoints to prevent rerunning items). Somehow my checkpoint does not register the fact that the first dataflow has already been executed and it will execute that one again upon rerun.
However: if I put a random task between the two transacted dataflows (for example an empty script task), it works as intended, just as long as this inserted item doesn't have a transaction; if it does, the problem comes back. Now if I execute the package, my checkpoint shows that the first dataflow has already been executed, so it will not execute that one again and it starts at the second dataflow upon re-execution.
I can work around it (with the empty script task), but I am still wondering why this is happening. I am very interested to hear whether this is really a bug or if it is intended behaviour (and if it is, then why is it intended?).
Do you see anything wrong with this? The first select works and finds rows; the second one does not. I have opened the key, since the first query does find rows.
select *
from [dbo].[dmTable]
WHERE cast(decryptByKey(field) as varchar(50)) = 'Value'
select *
from [dbo].[dmTable]
where field = EncryptByKey(Key_GUID('CLTCadminKey'),'Value')
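If it helps to see why the two forms behave differently: EncryptByKey is not deterministic, so the same plaintext produces different ciphertext on every call, and an equality comparison against a freshly encrypted value will generally not match what is stored. A quick check, assuming the CLTCadminKey key referenced above is the one that is open:

-- Two calls with identical input yield different ciphertext,
-- which is why WHERE field = EncryptByKey(...) finds no rows
SELECT EncryptByKey(Key_GUID('CLTCadminKey'), 'Value') AS ciphertext_1,
       EncryptByKey(Key_GUID('CLTCadminKey'), 'Value') AS ciphertext_2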
In SQL 2005 SP1 - In my transactional replication RMO C# script, I want my snapshot job schedule to run daily at 2:58 AM.
Instead it runs hourly in the 58th minute. The sample code below shows I use the value 025800. That should be interpreted as AM. The frequency type is daily. The frequency interval is 1. There is no subday frequency. Yet the job runs hourly and disregards the specified hour.
Is there something missing in this code? Is this a SQL Server bug?
// Set the required properties for the trans publication snapshot job.
TransPublication tpublication = new TransPublication();
tpublication.ConnectionContext = conn;
tpublication.Name = publicationName;
tpublication.DatabaseName = publicationDbName;
tpublication.SnapshotSchedule.FrequencyType = ScheduleFrequencyType.Daily;
tpublication.SnapshotSchedule.FrequencyInterval = Convert.ToInt32(0x0001);
tpublication.SnapshotSchedule.ActiveStartDate = 20051101;
string newString = "025800";
tpublication.SnapshotSchedule.ActiveStartTime = Convert.ToInt32(newString);
tpublication.Create();
I was hoping that the Trim function inside the update command, cmdUpdate.Parameters.Add("@doc_num", Trim(txtDocNum.Text)), would deal with any leading and trailing spaces, but it does not seem to be doing anything at all. The value from the textbox still arrives in the database table with leading spaces!!
We have an asp application that runs the reportserver URL for the selected report, passing it through parameters. This opens the report viewer and the report runs.
The problem I'm having is that one report is not working as expected. When the report is run from report manager, everything works fine. The links do what they're meant to (they link to other reports, passing through parameters). When the report is run from our asp application with the report viewer, the links fail... they don't pass through the correct values, or sometimes don't pass through a value at all.
Were the report viewer and report manager applications developed separately?
I have a package with an FTP task in the Control Flow. Nothing complicated: it's configured to download a file from FTP with overwrite set to true, and the get and download paths in variables. Once this step completes it goes to a sequence container that does stuff with the file; that part works fine.
The problem is, if I run the package in debug mode using Visual Studio, everything works perfectly (even if run over and over). The problem occurs if I use my "driver" application to try and execute the package. All my driver does is use C# code to create an Application object, set the PackagePassword, call LoadPackage with the path, and then .Execute().
Here is where the strangeness begins. If the file exists in the destination path, then the FTP task fires. If the file does not exist (let's say I manually delete it after it has previously been downloaded using the IDE), the FTP task does not fire and my sequence container fails to use the file, since it is of course missing.
Monitoring the FTP server that I am running locally, I can see commands firing when the file exists, but when the file is deleted it never even tries to connect to my server. Very odd.
Am I missing something? Why would this work from the IDE but never from my driver?
Any help would be appreciated. Please let me know if any additional detail is required.
table1 has search words and table2 has file names, as below, and I want to get the file names from table2 that match all the search words.
table1 (column: searchword)
  row0: Learn more about melons
  row1: %.txt

table2 (column: testname)
  FKOV43C6.EXE
  frusdr.txt
  FRUSDR.TXT
  SPGP_FWPkg_66G.zip
  readme.txt
  README.TXT
  watermelon.exe
  Learn more about melons read me.txt
Here is the script that I have tried............... I hope someone will help me get out of this loop. Thanks in advance.
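For reference, one way to express "matches every search word" is a relational-division style query: count the pattern matches per file and keep only the files whose match count equals the number of search words. A rough sketch, assuming the column names searchword and testname from the tables above and treating each search word as a LIKE pattern (an illustration only, not the missing script):

SELECT t2.testname
FROM table2 AS t2
     INNER JOIN table1 AS t1
         ON t2.testname LIKE '%' + t1.searchword + '%'   -- each search word used as a pattern
GROUP BY t2.testname
HAVING COUNT(*) = (SELECT COUNT(*) FROM table1)           -- must match every search word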
Has anyone had experience of using Parent/Child packages while enlisting them in Transactions. I tested this on a small sample and thought that I had got it to work, but in my real-world package it does not.
The parent package essentially calls three child packages. In each child package there are multiple DFT's that import and transform data into SQL Server. All data must be imported or not at all. Therefore I created a FELC container into which three Exec child package tasks were placed. The FELC is set to Trans Option 'Required' and the Exec child package tasks to supported. Unfortunately upon failure of one of the DFT's in the child the data was not rolled back.
So initially, in terms of container hierarchy, we had the TransactionOption property set as:
Parent package: Supported
  FELC for calling child packages: Required
    Execute child package task: Supported
      Child package: Supported
        Tasks: Supported
Looking at this more closely, we thought that we would need:
Parent package: Supported
  FELC for calling child packages: Required
    Execute child package task: Required
      Child package: Required
        Tasks: Supported
for it to work. However, the latter now gives us failures with error messages on the tasks on the child packages. [Execute SQL Task] Error: Failed to acquire connection "Conn ECARS1CEDImport". Connection may not be configured correctly or you may not have the right permissions on this connection.
Even more strangely, the first couple of tasks in the child package complete successfully, even though they use the same connection listed in the error. These tasks also have event handlers.
I have a package with two sequence containers, each containing two SQL tasks and a data flow task, executed in that order. I want to encapsulate the data flow task in a transaction but not the SQL tasks. I have the TransactionOption property set to 'required' on the data flow tasks and 'supported' on the SQL tasks and the sequence containers. When I run the package I get a distributed transaction error on the first SQL task of the second sequence container:
"[Execute SQL Task] Error: Executing the query "TRUNCATE TABLE DistTransTbl2" failed with the following error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly."
The only way I can get the package to succeed is to set the TransactionOption = 'required' on the sequence containers and 'supported' on all subordinate tasks. This is not what I want, however. Any ideas?
I have this simple full-text search query that works perfectly on my own computer using SQL Server 2005 Express. However, on the production server (shared hosting), when I added the first 50+ rows the full-text search worked perfectly, but as the number of rows increases the full-text search can only see the first 50+ rows, not the new ones. Is there any quick solution for this, or is it just a common mistake for developers of not properly indexing columns? Is there a way to re-index all rows without losing data on the live server?
Search query:
SELECT TOP 50 *
FROM li_Bookmarks
WHERE FREETEXT(Keywords, @Keywords)
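If it is of any use: a SQL Server 2005 full-text index only picks up new rows on its own when change tracking is enabled; otherwise a population has to be started manually, and neither operation touches the table data itself. A hedged sketch, assuming a full-text index already exists on li_Bookmarks:

-- Keep the full-text index updated automatically as rows change
ALTER FULLTEXT INDEX ON dbo.li_Bookmarks SET CHANGE_TRACKING AUTO

-- Or re-read every row once, without losing any table data
ALTER FULLTEXT INDEX ON dbo.li_Bookmarks START FULL POPULATION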
I want to use a checkpoint in an SSIS package and would require some help.
I have a scenario like this
Task A ------ Task B-----------Task C
------- Task B1
Task A has a precedence constraint which determines if either Task B or Task B1 runs. Task B is run if the condition is met and Task B1 if the condition is not met.
I would like Task B1 to be a script task that is used to fail Task A so that when the package is restarted it will start from task A based on the checkpoint.
I have a package that has 4 Script Tasks that are placed sequentially.
I have Task1--> Task2-->Task3-->Task4
The arrows between them are OnCompletion arrows, as opposed to the standard OnSuccess arrows. Even if Task2 fails, it will still execute Tasks 3 and 4.
The catch is that when I run the first time and Task2 fails, all the tasks except Task2 should run, which is fine. But when I rerun it, I want it to realise that Task2 had failed earlier, so it runs just Task2... If both 2 and 4 had failed, then it should run just 2 and 4.
I tried to implement it with checkpoints, but the problem is that if it fails at Task2 it stops at Task2 and does not continue to execute Tasks 3 & 4... When you rerun it, it starts at 2, but like I said I would like 3 & 4 to have completed in the previous run...
I have a package that uses SSIS checkpoints. It works well. However, when I try to set up transactions for some tasks, the checkpoints aren't used.
I read BOL and It states: "If a package is configured to use checkpoints, Integration Services captures the restart point in the checkpoint file. The type of container that fails and the implementation of features such as transactions affect the restart point that is recorded in the checkpoint file."
But how are checkpoints affected by transactions? What relation exists between these two components?
Hi people, I have crashed into this problem. I have a sequence container, and on this container I have set FailPackageOnFailure=true. In this container there are two tasks, the first one preceding the second. Both of these tasks have FailParentOnFailure=true. Both tasks are the same and their purpose is to drop table A.
1) I run the package and it fails, because there is no table to drop. 2) I create the table manually and run the package again. 3) I see that the first task is simply OMITTED and the second task runs.
In general, every time any task in a sequence container fails, the next time it is omitted regardless of its status. How can this be fixed? Thanks
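As a side note on the very first failure (a minimal sketch, assuming the table really is dbo.A): guarding the drop with an existence check lets the task succeed whether or not the table exists, which avoids writing the failure into the checkpoint in the first place.

-- Drop the table only if it exists, so the task succeeds either way
IF OBJECT_ID(N'dbo.A', N'U') IS NOT NULL
    DROP TABLE dbo.A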
I am building a set of packages to load different things, some of which have relationships with the others. Therefore I want them loaded in a certain order. I have built a main package that executes the set of packages to control the flow of the packages.
Now, I want to implement checkpoints. Ultimately, I only want to deal with the main package that controls everything. So I figure the main package needs checkpoints enabled. When packages are nested and checkpoints are on at the top-level package, will the nested package(s) start at the control flow point of failure, or will the entire nested package run? Should checkpoints be implemented within the nested packages as well? Should checkpoints only be implemented within the nested packages? Again, remember that I only want to launch/restart the main package.
I am using checkpoints in my packages, but I am not able to run my packages from where they exactly failed. The scenario is: I have 100 rows at the source system and I had loaded 95 records into the target when, due to some data formatting issues, the load failed at the 96th record. Later I tried to re-execute the package; surprisingly, my package started running from the 1st record (i.e. the start point of the data flow task).
How can I get it to run from where it exactly failed (the 96th record)?? Is it possible using checkpoints, or is there any workaround approach?? Please respond to this post, it would be very helpful for me.
We are currently facing an issue in ensuring restartability of an SSIS package. The scenario is explained below.
Context: The SSIS Package has two Data Flow tasks. The Data Flow task named DFT1 is the predecessor for DFT2 and chained with OnSuccess precedence constraint.
OnPreExecute and OnPostExecute event handlers have been implemented for DFT1. Each task in both event handlers as well as DFT1 and DFT2 have FailPackageOnFailure set to True.
Scenario1: Task in OnPreExecute of DFT1 fails. DFT1 is attempted and succeeded. OnPostExecute of DFT1 was not attempted. DFT2 was not attempted. Checkpoint file was created; however, no entries were made.
When restarted, execution started from first step in Control flow.
Scenario2: Task in OnPostExecute of DFT1 fails. DFT1 and its OnPreExecute Event were executed. DFT2 was not attempted. Checkpoint file was created and entries were made. Entries had DTS:result as 0 for OnPreExecute and DFT1 tasks.
When restarted, DFT2 was executed. OnPostExecute event, which failed during previous execution, was not attempted.
Each task in the package, whether it is in Control flow or as part of an event handler is crucial for seamless execution. But apparently, as explained above, there is no reliability on the event handlers in case of failures. Has anyone encountered similar scenario? Is this behavior as per design of the runtime engine?
I have a master package with a sequence container with around 10 execute package tasks (for child packages), all in parallel. Checkpoints has been enabled in the master package. For the execute package tasks FailParentOnFailure is set to true and for the sequence container FailPackageOnFailure is set to true.
The problem I am facing is as follows. One of the parallel tasks fails, and at the time of failure some of the parallel tasks (say set S1) have completed successfully and a few are still executing (say set S2), which eventually complete successfully. The container fails after all the tasks complete execution and fails the package. When the package is restarted, the task which failed is not executed, but the tasks in set S2 are executed.
If FailPackageOnFailure is set to true, then whatever the FailParentOnFailure value for the execute package task, on restart the failed package is executed, but the tasks in set S2 are also executed.
Please let me know if there is any setting that only the failed task executes on restart.
I have a sequence container in my Package and this sequence has more than one control flow tasks.
Can I create the checkpoints such that only the failed component inside the sequence container runs again and not the other successful components/tasks in the sequence container?
I have an FTP task in my control flow that downloads files from an FTP server. This FTP task is inside a foreach container that loops over an ADO recordset for the file name. The files that the FTP task pulls are huge. If the FTP task fails, I want the FTP task to restart and only download those files that have not been downloaded. Is this possible?
What configurations do I have to make to the foreach container and the FTP task?
Hi there, I have decided to move all my transaction handling from ASP.NET to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back the transactions, just like myTransaction.Rollback() in ASP.NET. But what about exceptions? In ASP.NET, I am used to doing the following:

Try
    'execute commands
    myTransaction.Commit()
Catch ex As Exception
    Response.Write(ex.Message)
    myTransaction.Rollback()
End Try

Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than rollback transaction? Any help is greatly appreciated.
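For comparison, a minimal sketch of how this is often written inside a SQL Server 2000 stored procedure. TRY...CATCH only arrived in SQL Server 2005, so on 2000 the usual pattern is checking @@ERROR after each statement and raising an error back to the client; RAISERROR with severity 16 surfaces in ADO.NET as a catchable exception. Table and column names below are placeholders:

CREATE PROCEDURE dbo.DoWorkInTransaction
AS
BEGIN TRANSACTION

    UPDATE dbo.SomeTable SET SomeColumn = 1    -- placeholder work
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RAISERROR('Update of SomeTable failed.', 16, 1)   -- reaches the client as an exception
        RETURN
    END

COMMIT TRANSACTION
GO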
Hi, I'm trying to set the value of the variable @prvYearMonth through this stored procedure. In Query Analyzer I execute the following code to see the results of my 'CabsSchedule_GetPrevYearMonth' SP, but I only see "The command(s) completed successfully" in the results. What am I missing??
Thanks in advance
CREATE PROCEDURE CabsSchedule_GetPrevYearMonth
(
    @prvYearMonth int OUTPUT
)
AS
BEGIN
    SET @prvYearMonth = (SELECT MAX(YearMonth) FROM CabsSchedule)
END
GO
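In case it is the calling pattern rather than the procedure: an OUTPUT parameter only becomes visible if the caller declares a variable, passes it with the OUTPUT keyword, and then selects it. A quick sketch of that call in Query Analyzer:

DECLARE @result int

-- Pass a local variable with OUTPUT, then select it to see the value
EXEC CabsSchedule_GetPrevYearMonth @prvYearMonth = @result OUTPUT

SELECT @result AS PrevYearMonth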
SELECT @tmpCount returns nothing. The RIGHT(....) function does not render any results. I am expecting '0006'.
I read that the data type must be compatible with varchar. The @cLastBarcode variable was declared as char(25). I have even tried casting the @cLastBarcode char string to type varchar.
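One thing that may be worth ruling out: a char(25) value is padded with trailing spaces to its full length, so RIGHT against it returns those spaces instead of the digits. A small sketch of the behaviour, using a made-up barcode value:

DECLARE @cLastBarcode char(25)
SET @cLastBarcode = 'BC0000006'           -- stored padded out to 25 characters

SELECT RIGHT(@cLastBarcode, 4)            -- returns the trailing padding, not '0006'
SELECT RIGHT(RTRIM(@cLastBarcode), 4)     -- returns '0006'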
I did a trace on a production DB for many hours and got more than 7 million "RPC:Completed" and "SQL:BatchCompleted" trace records. Then I grouped them and obtained only 545 distinct events (just EXECs and SELECTs), and saved them into a new workload file.
To test the workload file, I ran DTA for just 30 minutes over a restored database on a test server, and got the following:
Date: 28-12-2007
Time: 18:29:31
Server: SQL2K5
Database(s) to tune: [DBProd]
Workload file: C:Tempfiltered.trc
Maximum tuning time: 31 Minutes
Time taken for tuning: 31 Minutes
Expected percentage improvement: 20.52
Maximum space for recommendation (MB): 12874
Space used currently (MB): 7534
Space used by recommendation (MB): 8116
Number of events in workload: 545
Number of events tuned: 80
Number of statements tuned: 145
Percent SELECT statements in the tuned set: 77
Percent INSERT statements in the tuned set: 13
Percent UPDATE statements in the tuned set: 8
Number of indexes recommended to be created: 15
Number of statistics recommended to be created: 50
Please note that only 80 of the 545 events were tuned, and a 20% improvement is expected if 15 indexes and 50 statistics are created.
Then I ran the same analysis with an unlimited amount of time... After the whole weekend DTA was still running and I had to stop it. The result was:
Date: 31-12-2007
Time: 10:03:09
Server: SQL2K5
Database(s) to tune: [DBProd]
Workload file: C:Tempfiltered.trc
Maximum tuning time: Unlimited
Time taken for tuning: 2 Days 13 Hours 44 Minutes
Expected percentage improvement: 0.00
Maximum space for recommendation (MB): 12874
Space used currently (MB): 7534
Space used by recommendation (MB): 7534
Number of events in workload: 545
Number of events tuned: 545
Number of statements tuned: 1064
Percent SELECT statements in the tuned set: 71
Percent INSERT statements in the tuned set: 21
Percent DELETE statements in the tuned set: 1
Percent UPDATE statements in the tuned set: 5
This time DTA processed all the events, but no improvement is expected, and no index or statistics creation was recommended!
It does not seem that Tuning Advisor crashed... Usage reports are fine and make sense to me.
What's happening here? It looks like DTA applied the recommendations and iterated, but no new objects were found in the DB.
I guess that the recommendations from the first try, with only 80 events, were invalidated by the remaining events from the long run.
My first foray into the SQL CLR world is a simple function to return the size of a specified file. I created the function in VS2005, where it works as expected. Running the function in SSMS, however, returns a value of zero, regardless of the file it is pointed at.
Here's the class member code:
Public Shared Function GetFileSize(ByVal strTargetFolder As String, ByVal strTargetFile As String) As Long
This always returns zero with no error displayed. Running Profiler was of little help and there's not much in the Event Log. The function returns correct values in VS2005. The assembly is created with UNSAFE because using EXTERNAL_ACCESS resulted in a security error that prevented the assembly from being created, let alone running. Security is, I suspect, at the root of this issue as well, but I'm not sure what or where to look to verify this.
So I'm at a dead end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL 2000 database that contains more than 1,000,000 records in the act_item table. They are run using Query Analyzer, and the Duration and Reads figures come from SQL Profiler.
SQL 2:
DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
UPDATE act_item SET Priority = @Priority WHERE activity_code = @activity_code
Reads: ~160 Duration: 0 ms
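For reference, the parameterised form of the same update via sp_executesql typically looks like the sketch below (the original "SQL 1" is not shown in the post, so this is only an illustration of the pattern being compared):

EXEC sp_executesql
    N'update act_item set Priority = @Priority where activity_code = @Activity_Code',
    N'@Priority int, @Activity_Code char(36)',
    @Priority = 0,
    @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'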
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which include Priority as one of the fields. There are two triggers on the table, neither of which is executed for this SQL statement (there is an IF UPDATE(fieldname) surrounding the code in the trigger). There are no foreign relationships. I checked (using Perfmon) whether a compilation/recompilation was happening; it is not. Any suggestions as to avenues that could be examined would be appreciated.
Hi All, I am kindly seeking help. I have a table (MyTable) which is defined as (Date datetime, ID char(10), and R, P, M, D ... Y all float) and the layout is as follows:
Date     ID  R  P  M  D ... Y
1/1/90   A   1  2  3  4 ... 5
1/2/90   A   2  3  4  5 ... 1
...
2/11/05  A   3  4  5  6 ... 2
1/1/90   B   1  2  3  4 ... 5
1/2/90   B   2  3  4  5 ... 1
...
2/11/05  B   3  4  5  6 ... 2
...
The expected query results look like this (built from the Date, ID and R fields):
Date     A  B
1/1/90   1  1
1/2/90   2  2
...
2/11/05  3  3
The SQL I wrote:
SELECT date,
       A = SUM(CASE WHEN ID = 'A' THEN R ELSE 0 END),
       B = SUM(CASE WHEN ID = 'B' THEN R ELSE 0 END)
FROM MyTable
GROUP BY date
I would also like to get another set of results in the same format, but from the Date, ID and P fields:
Date     A  B
1/1/90   2  2
1/2/90   3  3
...
2/11/05  4  4
SELECT date,
       A = SUM(CASE WHEN ID = 'A' THEN P ELSE 0 END),
       B = SUM(CASE WHEN ID = 'B' THEN P ELSE 0 END)
FROM MyTable
GROUP BY date
The problem with that is, if I have thousands of IDs in MyTable, I have to "hard code" the expression thousands of times, and the same problem applies to the fields/columns. Is there any easier way to do this? I would also like to insert the results into a table/view that is refreshed whenever MyTable gets updated.
Any suggestion/comments are highly appreciated! shiparsons
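For what it may be worth, one common way around hard-coding the CASE branches is to build them dynamically from the distinct ID values and run the result with sp_executesql; a rough sketch along those lines (not tested against the real MyTable, and fixed to the R measure):

DECLARE @cols nvarchar(4000), @sql nvarchar(4000)

-- Build one "X = SUM(CASE ...)" expression per distinct ID value
SELECT @cols = ISNULL(@cols + ', ', '')
             + QUOTENAME(RTRIM(ID)) + ' = SUM(CASE WHEN ID = ''' + RTRIM(ID) + ''' THEN R ELSE 0 END)'
FROM (SELECT DISTINCT ID FROM MyTable) AS ids

SET @sql = 'SELECT date, ' + @cols + ' FROM MyTable GROUP BY date ORDER BY date'
EXEC sp_executesql @sql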
I use the following sproc to populate a table that is used as the base recordset for a report.
For some reason, when the sproc is run from a scheduled job, it doesn't repopulate the table. It does, however, truncate the table. If I run it manually from query analyzer, it works fine.
I've checked all the permissions on all the object touched by the sproc, and everything looks right there. Is there another problem I should be looking for?
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS OFF
GO
setuser N'mcorron'
GO
CREATE PROCEDURE mcorron.CreateDiscOrders
AS
/* Creates table for Orders with disc items Actuate report */
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
TRUNCATE TABLE dbo.rptDiscOrders
INSERT INTO dbo.rptDiscOrders
SELECT *
FROM (
    SELECT ORD.product AS prod_XREF, ORD.ORDER_NUMB, ORD.CustName, ORD.units AS ordunits, INV.Product, INV.Units
    FROM (
        SELECT TOP 100 PERCENT f.PARENT_SITE, f.SITE, dbo.vwCustBillTo.CustName, o.ORDER_NUMB,
               p.Prod_Xref, o.PRODUCT, o.ORDER_TONS * 2000 / m.part_wt AS UNITS
        FROM dbo.Lawn_Orders o
             INNER JOIN dbo.PRODUCT_XREF p ON o.PRODUCT = p.Product
             INNER JOIN dbo.FACILITY_MASTER f ON o.WHSE = f.SITE
             INNER JOIN dbo.Lawn_PartMstr m ON o.PRODUCT = m.part_code
             INNER JOIN dbo.vwCustBillTo ON o.BILLTO = dbo.vwCustBillTo.BillToNum
        WHERE (o.SHIP_DATE < DATEADD(d, 30, GETDATE())) AND prod_xref NOT LIKE 'dead%'
    ) ORD
    INNER JOIN (
        SELECT f.PARENT_SITE, x.Prod_Xref, i.Product, SUM(i.Qty) AS Units
        FROM dbo.Lawn_Inventory i
             INNER JOIN dbo.FACILITY_MASTER f ON i.Whse = f.SITE
             INNER JOIN dbo.PRODUCT_XREF x ON i.Product = x.Product
        WHERE (f.WHSE_TYPE = 'ship')
        GROUP BY f.PARENT_SITE, x.Prod_Xref, i.Product
    ) INV ON ORD.PARENT_SITE = INV.PARENT_SITE AND ORD.Prod_Xref = INV.Prod_Xref
) ordinv
WHERE (Prod_Xref <> Product)
GO
setuser
GO
I have a stored procedure that is averaging a difference between dates in seconds. All of a sudden it started throwing an arithmetic overflow error. After running the query below on the same data, I can see that it is because the DATEDIFF in my procedure, which calculates the difference in seconds, is returning a value greater than 68 years. Looking at the dates in the result table, I don't see how it is coming up with the values in the 'years difference' column.
SELECT createdate, completeddate, DATEDIFF(y, createdate, completeddate) AS 'years difference'
FROM tasks
WHERE (TaskStatusID = 3) AND (createdate < completeddate) AND (DATEDIFF(y, createdate, completeddate) >= 68)
ORDER BY completeddate
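One detail that may explain the numbers: in T-SQL the datepart abbreviation y stands for dayofyear, not year, so DATEDIFF(y, ...) is effectively counting days. A small comparison using the same table and columns as the query above:

SELECT createdate,
       completeddate,
       DATEDIFF(yy, createdate, completeddate) AS years_difference,      -- yy/yyyy = calendar years
       DATEDIFF(y, createdate, completeddate) AS dayofyear_difference    -- y = dayofyear, i.e. a day count
FROM tasks
WHERE TaskStatusID = 3
  AND createdate < completeddate
ORDER BY completeddate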
I have an Execute SQL Task that selects one column value from one row, so General > ResultSet = Single row. Result Set > Result Name = 0 (the first selected value) and Variable Name = User::objectTypeNbr. The task runs successfully, but after it runs the value of User::objectTypeNbr is not changed.
User::objectTypeNbr > Data Type = Int32. When I declared the variable, Value could not be empty, so I set it to 0 arbitrarily, assuming it would be overwritten when assigned a new value by the Execute SQL Task, but it remains 0 after the task runs. What am I missing here?