I'm in a situation where I need to accumulate the values from several columns in each row of a result set and then display that accumulated value at the end of each row.
For example, if I had the following...
------------------------------------------------------
name - car - van - bus - total
------------------------------------------------------
Terry - 55 - 34 - 12 - 101
John - 01 - 23 - 05 - 029
etc etc
------------------------------------------------------
Where the 'total' column is an accumulation of the 'car', 'van', and 'bus' columns. The values for the 'car', 'van', and 'bus' columns are all derived from the COUNT function. So what I want is to somehow generate the values for the 'total' column (car + van + bus).
Now I have tried the following SQL, but to no avail
Code:
SELECT pveh.csign, pveh.name1 AS [Driver], COUNT(pveh.name1) AS Car, COUNT(pveh.name2) AS Van, SUM(COUNT(pveh.name1) + SUM(COUNT(pveh.name2))
FROM planvehicles AS pveh
... and I also tried...
SELECT pveh.csign, pveh.name1 AS [Driver], COUNT(pveh.name1) AS Car, COUNT(pveh.name2) AS Van, SUM(Car + Van)
FROM planvehicles AS pveh
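For what it's worth, neither attempt can work as written: aggregates can't be nested inside SUM(), and column aliases such as Car and Van can't be reused later in the same SELECT list. A minimal sketch of one way to get the row total, assuming the Car and Van values really are per-group counts (table and column names are taken from the post; the GROUP BY is an assumption, since the original attempts omit it):

SELECT pveh.csign,
       pveh.name1 AS [Driver],
       COUNT(pveh.name1) AS Car,
       COUNT(pveh.name2) AS Van,
       COUNT(pveh.name1) + COUNT(pveh.name2) AS Total   -- just add the counts; no outer SUM() needed
FROM planvehicles AS pveh
GROUP BY pveh.csign, pveh.name1;

Repeating the COUNT() expressions (or wrapping the query in a derived table and adding the aliases there) avoids the alias problem.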
I've got a table that contains a column of accumulating uptime data that looks similar to this:
239.13, 239.21, 239.30, 239.38, 239.46, 239.55, 0.35, 0.44, 0.53, 0.60, 0.68, 0.78, 0.85, 0.93
I need to SUM the data up to the point where the data gets reset (the next row is less than the preceding row). Then I start the SUM again until the data gets reset. Thanks in advance for help!
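A hedged sketch of the usual gaps-and-islands approach, assuming there is some column that defines row order (ReadingID and the table name below are hypothetical; the post doesn't name them) and that you're on SQL Server 2012 or later for LAG and windowed SUM:

-- Hypothetical names: dbo.UptimeLog(ReadingID int IDENTITY, Uptime decimal(9,2))
WITH flagged AS (
    SELECT ReadingID,
           Uptime,
           -- start a new segment whenever the value drops below the previous row
           CASE WHEN Uptime < LAG(Uptime) OVER (ORDER BY ReadingID) THEN 1 ELSE 0 END AS IsReset
    FROM dbo.UptimeLog
), grouped AS (
    SELECT ReadingID,
           Uptime,
           SUM(IsReset) OVER (ORDER BY ReadingID ROWS UNBOUNDED PRECEDING) AS SegmentNo
    FROM flagged
)
SELECT SegmentNo,
       SUM(Uptime) AS SegmentTotal   -- or MAX(Uptime) if the column is already a running total
FROM grouped
GROUP BY SegmentNo
ORDER BY SegmentNo;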
I have a table called Daily. It has 5 columns: "Testers", "Activity", "GivenHours", "UsedHours" and "Delta". This table is updated regularly every 12 hours. Now I need to accumulate the data from GivenHours and UsedHours with respect to the testers and insert the accumulated data into a new table called "Weekly". The accumulation should last only for 7 days and then be reset again to accumulate new data for the new week.
The Weekly table should have 5 columns: "Testers", "Activity", "TotalGivenHours", "TotalUsedHours" and "Percentage". Percentage is (TotalUsedHours / TotalGivenHours) * 100.
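A rough sketch of the weekly load, assuming there is a datetime column recording when each Daily row was written (EntryDate below is an assumption; the post doesn't name one) and that the job runs once at the end of each 7-day window:

INSERT INTO Weekly (Testers, Activity, TotalGivenHours, TotalUsedHours, Percentage)
SELECT Testers,
       Activity,
       SUM(GivenHours) AS TotalGivenHours,
       SUM(UsedHours)  AS TotalUsedHours,
       -- Percentage = (TotalUsedHours / TotalGivenHours) * 100, guarding against divide-by-zero
       CASE WHEN SUM(GivenHours) = 0 THEN 0
            ELSE SUM(UsedHours) * 100.0 / SUM(GivenHours)
       END AS Percentage
FROM Daily
WHERE EntryDate >= DATEADD(day, -7, GETDATE())   -- last 7 days only; the "reset" falls out of looking back one week each run
GROUP BY Testers, Activity;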
I am importing data from a CSV to a DB with an SSIS package. Among the things it already does, it has to decide whether the relation between one row and the following one is acceptable. If it is not, the second row is discarded, the next one is taken, and the relation value is calculated again to decide whether to keep that one or not, and so on.
To calculate this value, I need to apply a formula that includes sin(), cos() and acos() functions. I have already written this formula as a scalar-valued function in my SQL Server.
So, my question is: is there a way to call a function (a UDF) within the Expression in a Derived Column data flow item? And if not, how can I use trigonometric functions within the Expression in a Derived Column data flow item?
I hope someone can tell me something about this... I'm falling into despair! :-s
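For reference, a hedged sketch of the kind of scalar-valued function described (the function name, parameters, and great-circle formula below are assumptions; the post only says the formula uses SIN, COS and ACOS):

-- Hypothetical T-SQL scalar UDF built from SIN/COS/ACOS,
-- here a great-circle distance between two lat/long points expressed in radians.
CREATE FUNCTION dbo.ufn_RelationValue
(
    @lat1 float, @lon1 float,
    @lat2 float, @lon2 float
)
RETURNS float
AS
BEGIN
    RETURN 6371.0 * ACOS(
               SIN(@lat1) * SIN(@lat2)
             + COS(@lat1) * COS(@lat2) * COS(@lon2 - @lon1));
END;

Whatever the exact formula, a Derived Column expression can't invoke a T-SQL UDF directly, and the SSIS expression language has no trigonometric functions, so the usual workarounds are a Script Component (re-implementing the math in .NET) or an OLE DB Command that calls the function per row.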
The data I am pulling is correct, I just can't figure out how to order by the last 8 digits, which are my NUMBER column. I tried adding FOR XML AUTO to the last line in my query: From AP_DETAIL_REG where AP_BATCH_ID = 1212 and NUMBER is not null order by NUMBER FOR XML AUTO) as Temp(DATA) where DATA is not null
but there was no change, same error. Output: 1234567890000043321092513 00050020
Select DATA from(
    select '12345678'
         + left('0', 10 - len(cast(CONVERT(int, (INV_AMT * 100)) as varchar))) + cast(CONVERT(int, (INV_AMT * 100)) as varchar)
         + left('0', 2 - len(CAST(MONTH(DATE) as varchar(2)))) + CAST(MONTH(DATE) as varchar(2))
         + left('0', 2 - len(CAST(day(CHECK_DATE) as varchar(2)))) + CAST(day(DATE) as varchar(2))
         + right(cast(year(DATE)
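A hedged guess at the fix, assuming the error is the usual "ORDER BY is invalid in derived tables" complaint: drop the ORDER BY (and FOR XML AUTO) from the inner query and sort the outer query on the trailing 8 characters instead. The inner expression below is a stand-in for the long concatenation shown above, just so the sketch is complete:

SELECT DATA
FROM (SELECT CAST(NUMBER AS varchar(30))       -- stand-in for the concatenated string built above
      FROM AP_DETAIL_REG
      WHERE AP_BATCH_ID = 1212
        AND NUMBER IS NOT NULL) AS Temp(DATA)
WHERE DATA IS NOT NULL
ORDER BY RIGHT(DATA, 8);                       -- sort on the trailing 8-character NUMBER portion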
I have been trying to make a database that counts up and down votes (like eBay ratings or reddit votes). I think (hope) I have got the database design right. I know that you can perform math functions in SQL, but I want to use two COUNT()s from the same table and subtract one (the down votes) from the other (the up votes). I have been learning ASP.NET 2.0 and it's going well, but I really need help with this. I asked a question on this forum before and the answers were great and really helpful. If anyone can help that would be great. Thank you. Jack.
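A minimal sketch of one way to do this in a single query, assuming a hypothetical Votes table with an ItemID column and a VoteType column holding 'up' or 'down':

-- Net score per item: up votes minus down votes
SELECT ItemID,
       SUM(CASE WHEN VoteType = 'up'   THEN 1 ELSE 0 END) AS UpVotes,
       SUM(CASE WHEN VoteType = 'down' THEN 1 ELSE 0 END) AS DownVotes,
       SUM(CASE WHEN VoteType = 'up'   THEN 1 ELSE 0 END)
     - SUM(CASE WHEN VoteType = 'down' THEN 1 ELSE 0 END) AS NetScore
FROM Votes
GROUP BY ItemID;

Conditional SUMs like these avoid running two separate COUNT() queries and subtracting the results in ASP.NET code.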
I'm trying to create an accumulating field based on a set of records. I need to fill in daily amount balances that accumulates on a daily basis. But I can't seem to figure out how to create a total for the daily dates and have it add on additional amounts if needed.
Now I already have a table with the dates created via a stored procedure. I have a set of dates from 5/5/2000 to 5/8/2000. So that results set should look like this:
I'm trying to create a rolling sum that accumulates the amount field for each daily record, and if a new amount is listed, roll that amount into the total. If you have any suggestions about how to perform this rolling total via T-SQL or SSIS, I would greatly appreciate it.
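A hedged sketch of the rolling balance in T-SQL, assuming a hypothetical dbo.DailyAmounts(BalanceDate, Amount) table; on SQL Server 2012 and later a windowed SUM does it directly, and on older versions a correlated subquery gives the same result:

-- SQL Server 2012 and later
SELECT BalanceDate,
       Amount,
       SUM(Amount) OVER (ORDER BY BalanceDate
                         ROWS UNBOUNDED PRECEDING) AS RunningBalance
FROM dbo.DailyAmounts
ORDER BY BalanceDate;

-- Older versions: correlated subquery (slower, but works on SQL Server 2000/2005)
SELECT d.BalanceDate,
       d.Amount,
       (SELECT SUM(d2.Amount)
        FROM dbo.DailyAmounts AS d2
        WHERE d2.BalanceDate <= d.BalanceDate) AS RunningBalance
FROM dbo.DailyAmounts AS d
ORDER BY d.BalanceDate;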
Hi all, I found that my testing server is accumulating snapshot folders in repldata. Every time we refresh the tables (snapshot publication), a new folder is created without removing the old folders. I also found that at most one snapshot folder remains on the production server. Is there any parameter to adjust the retention period of the snapshot folders? Thanks in advance.
I need to know what a table's max row Identity is part way thru a data flow. I can't get it at the beginning of the data flow. I need to either (1) add it to the data buffer part way thru or (2) set it into a package variable and then reference the var in a script component.
I've not found a way to add a database column to the data buffer without doing a lookup for each row (too slow and not appropriate here) or some goofy oledb source and then merge join into the data buffer on a contrived join.
I've read questions about referencing package vars in scripts but I can't get that to work. DTS.Variables("varname").Value isn't recognised when I code it up.
Anyone have an idea or solution for either one of these? If you're going to explain the script code, please include the entire snippet, including the includes, etc.
I am trying to create a procedure which will calculate the total tuition. This process involves 3 tables. The Contract table has the tuition amount, which is always $100 (set price). The Discount table has a discount type and a discount percentage (e.g. 0.3) for each discount type. The ContractDiscount table has the contract number and the discount number to connect the two tables.
I think I need to create a loop since some contracts get more than one discount. I have to calculate it so the result looks like this (see the sketch after the formula):
total_tuition = (tuition - discountPer * tuition) - this has to be applied in a loop, once per discount on the contract
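A hedged, set-based alternative to the loop, assuming the discounts compound one after another (the column names below are guesses at the schema described; EXP(SUM(LOG(...))) is the standard trick for multiplying factors inside an aggregate):

-- tuition * (1 - d1) * (1 - d2) * ... for every discount on the contract
SELECT c.ContractNumber,
       c.Tuition * EXP(SUM(LOG(1.0 - d.DiscountPercentage))) AS TotalTuition
FROM Contract AS c
JOIN ContractDiscount AS cd ON cd.ContractNumber = c.ContractNumber
JOIN Discount         AS d  ON d.DiscountNumber  = cd.DiscountNumber
GROUP BY c.ContractNumber, c.Tuition;

Note that LOG() requires every (1 - DiscountPercentage) to be greater than zero, and contracts with no discounts would need a LEFT JOIN plus an ISNULL around the factor; a WHILE loop or cursor is also perfectly workable if that is easier to read.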
I am adding a new column with the Derived Column data flow transformation and am having a problem with it coming through in a decimal format. The destination column is set as numeric(18, 9), but no matter what I change the data type to in the transformation editor, it only brings the calculation through as a whole number (i.e. it rounds the number up). The destination table shows the zeros after the ".".
What do I need to do to get this coming through as a number with decimals?
Here's the expression I am using: (PROG_CREDITS / (PROG_DURATION / 12)) / 120
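For what it's worth, if PROG_CREDITS and PROG_DURATION are integer columns the likely culprit is integer division, which discards the fraction before the value ever reaches the destination; the same arithmetic pitfall can be shown in T-SQL (this illustrates the math, not the SSIS expression syntax):

SELECT 18 / 12                              AS IntDivision,     -- 1   (fraction discarded)
       18 / 12.0                            AS DecimalDivision, -- 1.5
       CAST(18 AS float) / CAST(12 AS float) AS FloatDivision;  -- 1.5

Casting the operands (not just the final result) to a decimal or float type before dividing is what preserves the fractional part.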
I think I've made a big mess of my work... I've got a plain file which must be loaded into a SQL table. Up to there, no problem; I use a Derived Column because the columns need to be transformed with NULL, RIGHT, LEN, and so on...
But in this last package I've done half of the work using Data Conversion, and I've got five columns which would need to be transformed, but I don't know how I can do such a thing. I can't connect a Derived Column from the Flat File Source task, of course, since its output already goes to the Data Conversion...
Let me know if you need further details.
Previously I thought (silly idea) that the two could be unified, or better, that the Data Conversion task would let me make transformations...
I have a character field that should contain either a number or a space in one of my transformations, but it occasionally comes over with junk in it. What is the best way to ensure that this data is only numeric or a space? Whenever I hit a < or ] or one of those weird symbols, I want to replace it with a space in my target field. It is SQL Server to SQL Server.
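Since both ends are SQL Server, one hedged option is to clean the value in the source query rather than mid-flow; a pattern like this flags anything that isn't a digit or a space (the table and column names are hypothetical):

-- Replace the whole value with a space whenever it contains anything
-- other than digits and spaces.
SELECT CASE WHEN SourceCol LIKE '%[^0-9 ]%' THEN ' '
            ELSE SourceCol
       END AS CleanCol
FROM dbo.SourceTable;

If only the offending characters should be blanked rather than the whole field, a small loop or a script-based replace is usually needed, since T-SQL's REPLACE works on literal strings, not patterns.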
Hi all--I've got a derived column transformation where I am adding a field called Import_Date. I'm telling it to add as a new column and use the function "GetDate()" to populate the field. When I run the package, it returns NULL as the data value for all rows. Any idea why this might be happening?
How do I declare multiple derived columns in the SSIS Derived Column task in one attempt? I have around 150 columns coming from a flat file. I created the required expressions in Excel and now I want to add them in the Derived Column task, but it only allows one expression at a time.
Out of nowhere my derived hierarchies started showing the following message in the MDS UI:
No Level Defined: This Derived Hierarchy is incomplete......
As you can see below, the structure is defined and still intact. This message shows up in both the Explorer & System Admin areas. I'm also able to query the subscription view setup without any issue.
This is with 2012 with no CUs. The same setup is running in another environment without issue.
I would like to write a custom mining function, which takes a string, queries the database, and returns an answer based upon those queries. So the basic function is then:
[MiningFunction("Performs Foo")]
public string Foo(string param)
{
    // process parameters
    // query database
    // calculate answer from query results
    // return query results
}
And is executed from the client using:
SELECT Foo("X Y Z") FROM FooModel
This arrangement is so that resource-intensive calculations are performed server-side.
My question is: what is the preferable method for executing the database query from within the custom mining function?
I have too many DTS packages to migrate to SSIS, and while examining a DTS package in BIDS (converted with the migration utility) I tried to edit the resulting migrated package, which opened the DTS interface with the two connection icons joined by the big fat arrow with a gear on it... not exactly what I had in mind. In other words, it looks like SSIS on the outside, but it's still DTS on the inside. So I stripped out a series of components from a more complex package, hoping that simplifying it would reveal the contents of the old DTS Transformations tab at least partially set up in a Derived Column transformation. Can I get there from here, or must I recreate every stinking definition in a Derived Column manually from the ground up? Thanks very much for your help.
In the Derived Column transformation editor, I have a derived column named FileGroupID. I would like to pass in a value for this column from a variable that I set earlier in the scope. Can someone let me know how to write the expression that does that, and where do I specify that expression? I am thinking it's the Expression field in the Derived Column transformation editor. My main question is how to actually write the expression; what is the syntax to pull in the variable value? Thanks.
W2k3 server, SQL 2005. @@version = Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
I have my first SSIS package almost working, but I'm having an odd problem and can't find any information to help resolve it.
I'm importing from a flat file (csv) to an existing table (append). I've got a Derived Column transformation in the middle to do some data cleanup. It's all working except for one little problem...
One of the transformations is 'REPLACE([Column 3],"^","; ")', output to a new column. (The input file has a field that uses carets as delimiters between an unknown number of items; I'm changing that to semicolons for easier reading.) Not all rows have data in this column, some will have one item, some will have multiple items.
The REPLACE works except that it fills in repeated data for all the blank rows.
Example:
Incoming data is:
1 Smith,Jane^Jones,Jane
2 Brown,John
3
4 Adams,James^Adams,Jim
5
6 White,Debra
Data inserted into the table is:
1 Smith,Jane; Jones,Jane
2 Brown,John
3 Brown,John
4 Adams,James; Adams,Jim
5 Adams,James; Adams,Jim
6 White,Debra
I've tried to use a conditional to skip the empty rows, but I can't get that working at all (I get syntax errors no matter what I put in).
Any suggestions on how to fix this would be most appreciated!
I am using a Derived Column between the Source and Destination controls. The source input column PriceTime is of String data type, but in the Destination it should be a DATETIME column. How do I convert this string to DateTime in the Derived Column control?
I have already tried this in the Derived Column control:
PRICEDATETIME <add as new column> ((DT_DBTIMESTAMP)priceDateTime) database timestamp [DT_DBTIMESTAMP]
But it is still throwing an error about a type cast problem.
I have managed to use the BI Wizard for time intelligence and added YTD and MTD successfully. I notice the values returned are empty, and I think this is due to the fact that all the test data I use is many years old. What's the simplest way to resolve this issue so that I can see that these MDX functions return correct values? Changing the system date on this company laptop is not an option.
I need to know how to use my private function - created as a scalar-valued function in SQL Server 2005 - in a Script Component (used here as a transformation) in a data flow task, to transform a two-digit month into a three-letter month abbreviation:
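For context, a hedged sketch of the kind of scalar-valued function described (the function name and exact output are assumptions based on the two-digit-month to three-letter-month description):

-- Converts '01'..'12' into 'Jan'..'Dec'
CREATE FUNCTION dbo.ufn_MonthAbbrev (@MonthNo char(2))
RETURNS char(3)
AS
BEGIN
    RETURN LEFT(DATENAME(month,
                DATEADD(month, CAST(@MonthNo AS int) - 1, '20000101')), 3);
END;

A Script Component has no built-in way to call a T-SQL UDF per row without opening its own database connection, so the usual options are to re-implement the conversion in the script code (a simple array of abbreviations) or to do it in the source query before the data flow.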
I was playing around with the new SQL 2005 CLR functionality and remembered a discussion that I had with Erland Sommarskog concerning the performance of scalar UDFs some time ago (see "Calling sp_oa* in function" in this newsgroup). In that discussion, Erland made the following comment about UDFs in SQL 2005:

>>The good news is that in SQL 2005, Microsoft has addressed several of these issues, and the cost of a UDF is not as severe there. In fact, for a complex expression, a UDF written in a CLR language may be faster than the corresponding expression using built-in T-SQL functions.<<

I thought I would put this to the test using some of the same SQL as before, but adding a simple scalar CLR UDF into the mix. The test involved querying a simple table with about 300,000 rows. The scenarios are as follows:

(A) Use a simple CASE function to calculate a column
(B) Use a simple CASE function to calculate a column and as a criterion in the WHERE clause
(C) Use a scalar UDF to calculate a column
(D) Use a scalar UDF to calculate a column and as a criterion in the WHERE clause
(E) Use a scalar CLR UDF to calculate a column
(F) Use a scalar CLR UDF to calculate a column and as a criterion in the WHERE clause

A sample of the results is as follows (time in milliseconds):

A: 1563 (295310 row(s) affected)
B: 906 (150003 row(s) affected)
C: 2703 (295310 row(s) affected)
D: 2533 (150003 row(s) affected)
E: 2060 (295310 row(s) affected)
F: 2190 (150003 row(s) affected)

The scalar CLR UDF was significantly faster than the classic scalar UDF, even for this very simple function. Perhaps a more complex function would have shown an even greater difference. Based on this, I must conclude that Erland was right. Of course, it's still faster to stick with basic built-in functions like CASE.

In another test, I decided to run some queries to compare built-in aggregates vs. a couple of simple CLR aggregates as follows:

(G) Calculate averages by group using the built-in AVG aggregate
(H) Calculate averages by group using a CLR aggregate that simulates the built-in AVG aggregate
(I) Calculate a "trimmed" average by group (average excluding highest and lowest values) using built-in aggregates
(J) Calculate a "trimmed" average by group using a CLR aggregate specially designed for this purpose

A sample of the results is as follows (time in milliseconds):

G: 313 (59 row(s) affected)
H: 890 (59 row(s) affected)
I: 216 (59 row(s) affected)
J: 846 (59 row(s) affected)

It seems that the CLR aggregates came with a significant performance penalty over the built-in aggregates. Perhaps they would pay off if I were attempting a very complex type of aggregation. However, at this point I'm going to shy away from using these unless I can't find a way to do the calculation with standard SQL. In a way, I'm happy that basic SQL still seems to be the fastest way to get things done. With the addition of the new CLR functionality, I suspect that MS may be giving us developers enough rope to comfortably hang ourselves if we're not careful.

Bill E.
Hollywood, FL

------------------------------------------------------

-- table TestAssignment, about 300,000 rows
CREATE TABLE [dbo].[TestAssignment](
    [TestAssignmentID] [int] NOT NULL,
    [ProductID] [int] NULL,
    [PercentPassed] [int] NULL,
    CONSTRAINT [PK_TestAssignment] PRIMARY KEY CLUSTERED ([TestAssignmentID] ASC)
)

-- Scalar UDF in SQL
CREATE FUNCTION [dbo].[fnIsEven](@intValue int)
RETURNS bit
AS
BEGIN
    Declare @bitReturnValue bit
    If @intValue % 2 = 0
        Set @bitReturnValue = 1
    Else
        Set @bitReturnValue = 0
    RETURN @bitReturnValue
END

-- Scalar CLR UDF
/*
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction(IsDeterministic=true, IsPrecise=true)]
    public static SqlBoolean IsEven(SqlInt32 value)
    {
        if (value % 2 == 0) { return true; }
        else { return false; }
    }
};
*/

-- Test #1

-- Scenario A - Query with calculated column
SELECT TestAssignmentID,
       CASE WHEN TestAssignmentID % 2 = 0 THEN 1 ELSE 0 END AS CalcColumn
FROM TestAssignment

-- Scenario B - Query with calculated column as criterion
SELECT TestAssignmentID,
       CASE WHEN TestAssignmentID % 2 = 0 THEN 1 ELSE 0 END AS CalcColumn
FROM TestAssignment
WHERE CASE WHEN TestAssignmentID % 2 = 0 THEN 1 ELSE 0 END = 1

-- Scenario C - Query using scalar UDF
SELECT TestAssignmentID, dbo.fnIsEven(TestAssignmentID) AS CalcColumn
FROM TestAssignment

-- Scenario D - Query using scalar UDF as criterion
SELECT TestAssignmentID, dbo.fnIsEven(TestAssignmentID) AS CalcColumn
FROM TestAssignment
WHERE dbo.fnIsEven(TestAssignmentID) = 1

-- Scenario E - Query using CLR scalar UDF
SELECT TestAssignmentID, dbo.fnIsEven_CLR(TestAssignmentID) AS CalcColumn
FROM TestAssignment

-- Scenario F - Query using CLR scalar UDF as criterion
SELECT TestAssignmentID, dbo.fnIsEven_CLR(TestAssignmentID) AS CalcColumn
FROM TestAssignment
WHERE dbo.fnIsEven(TestAssignmentID) = 1

-- CLR Aggregate functions
/*
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.Native)]
public struct Avg
{
    public void Init()
    {
        this.numValues = 0;
        this.totalValue = 0;
    }

    public void Accumulate(SqlDouble Value)
    {
        if (!Value.IsNull)
        {
            this.numValues++;
            this.totalValue += Value;
        }
    }

    public void Merge(Avg Group)
    {
        if (Group.numValues > 0)
        {
            this.numValues += Group.numValues;
            this.totalValue += Group.totalValue;
        }
    }

    public SqlDouble Terminate()
    {
        if (numValues == 0) { return SqlDouble.Null; }
        else { return (this.totalValue / this.numValues); }
    }

    // private accumulators
    private int numValues;
    private SqlDouble totalValue;
}

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.Native)]
public struct TrimmedAvg
{
    public void Init()
    {
        this.numValues = 0;
        this.totalValue = 0;
        this.minValue = SqlDouble.MaxValue;
        this.maxValue = SqlDouble.MinValue;
    }

    public void Accumulate(SqlDouble Value)
    {
        if (!Value.IsNull)
        {
            this.numValues++;
            this.totalValue += Value;
            if (Value < this.minValue) this.minValue = Value;
            if (Value > this.maxValue) this.maxValue = Value;
        }
    }

    public void Merge(TrimmedAvg Group)
    {
        if (Group.numValues > 0)
        {
            this.numValues += Group.numValues;
            this.totalValue += Group.totalValue;
            if (Group.minValue < this.minValue) this.minValue = Group.minValue;
            if (Group.maxValue > this.maxValue) this.maxValue = Group.maxValue;
        }
    }

    public SqlDouble Terminate()
    {
        if (this.numValues < 3)
            return SqlDouble.Null;
        else
        {
            this.numValues -= 2;
            this.totalValue -= this.minValue;
            this.totalValue -= this.maxValue;
            return (this.totalValue / this.numValues);
        }
    }

    // private accumulators
    private int numValues;
    private SqlDouble totalValue;
    private SqlDouble minValue;
    private SqlDouble maxValue;
}
*/

-- Test #2

-- Scenario G - Average Query using built-in aggregate
SELECT ProductID, Avg(Cast(PercentPassed AS float))
FROM TestAssignment
GROUP BY ProductID
ORDER BY ProductID

-- Scenario H - Average Query using CLR aggregate
SELECT ProductID, dbo.Avg_CLR(Cast(PercentPassed AS float)) AS Average
FROM TestAssignment
GROUP BY ProductID
ORDER BY ProductID

-- Scenario I - Trimmed Average Query using built-in aggregates/set operations
SELECT A.ProductID,
       Case
           When B.CountValues < 3 Then Null
           Else Cast(A.Total - B.MaxValue - B.MinValue AS float) / Cast(B.CountValues - 2 As float)
       End AS Average
FROM (SELECT ProductID, Sum(PercentPassed) AS Total
      FROM TestAssignment
      GROUP BY ProductID) A
LEFT JOIN (SELECT ProductID,
                  Max(PercentPassed) AS MaxValue,
                  Min(PercentPassed) AS MinValue,
                  Count(*) AS CountValues
           FROM TestAssignment
           WHERE PercentPassed Is Not Null
           GROUP BY ProductID) B
    ON A.ProductID = B.ProductID
ORDER BY A.ProductID

-- Scenario J - Trimmed Average Query using CLR aggregate
SELECT ProductID, dbo.TrimmedAvg_CLR(Cast(PercentPassed AS real)) AS Average
FROM TestAssignment
GROUP BY ProductID
ORDER BY ProductID
I work on a copy of SQL Server Express on my desktop. After modifying and creating views and user-defined functions, I would like to copy them into the working database. Is there a way to do this programmatically, or must I copy and paste the T-SQL from the existing view into the new database and then save the new view in the working database?
With the function below, I receive this error:

Error: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.

Function:

Public Shared Function DeleteMesssages(ByVal UserID As String, ByVal MessageIDs As List(Of String)) As Boolean
    Dim bSuccess As Boolean
    Dim MyConnection As SqlConnection = GetConnection()
    Dim cmd As New SqlCommand("", MyConnection)
    Dim i As Integer
    Dim fBeginTransCalled As Boolean = False
    'messagetype 1 = internal messages
    Try
        '
        ' Start transaction
        '
        MyConnection.Open()
        cmd.CommandText = "BEGIN TRANSACTION"
        cmd.ExecuteNonQuery()
        fBeginTransCalled = True

        Dim obj As Object
        For i = 0 To MessageIDs.Count - 1
            bSuccess = False
            'delete userid-message reference
            cmd.CommandText = "DELETE FROM tblUsersAndMessages WHERE MessageID=@MessageID AND UserID=@UserID"
            cmd.Parameters.Add(New SqlParameter("@UserID", UserID))
            cmd.Parameters.Add(New SqlParameter("@MessageID", MessageIDs(i).ToString))
            cmd.ExecuteNonQuery()

            'then delete the message itself if no other user has a reference
            cmd.CommandText = "SELECT COUNT(*) FROM tblUsersAndMessages WHERE MessageID=@MessageID1"
            cmd.Parameters.Add(New SqlParameter("@MessageID1", MessageIDs(i).ToString))
            obj = cmd.ExecuteScalar

            If ((Not (obj) Is Nothing) _
                AndAlso ((TypeOf (obj) Is Integer) _
                AndAlso (CType(obj, Integer) > 0))) Then
                'more references exist so do not delete message
            Else
                'this is the only reference to the message so delete it permanently
                cmd.CommandText = "DELETE FROM tblMessages WHERE MessageID=@MessageID2"
                cmd.Parameters.Add(New SqlParameter("@MessageID2", MessageIDs(i).ToString))
                cmd.ExecuteNonQuery()
            End If
        Next i

        '
        ' End transaction
        '
        cmd.CommandText = "COMMIT TRANSACTION"
        cmd.ExecuteNonQuery()
        bSuccess = True
        fBeginTransCalled = False
    Catch ex As Exception
        'LOG ERROR
        GlobalFunctions.ReportError("MessageDAL:DeleteMessages", ex.Message)
    Finally
        If fBeginTransCalled Then
            Try
                cmd = New SqlCommand("ROLLBACK TRANSACTION", MyConnection)
                cmd.ExecuteNonQuery()
            Catch e As System.Exception
            End Try
        End If
        MyConnection.Close()
    End Try

    Return bSuccess
End Function
Hi, I have just run a simple data set through a model to predict a simple true or false value (i.e. binary output). The Lift Chart/Mining Legend in Analysis Services shows three results: Score, Population Correct (%), and Predict Probability (%).
Population Correct, I believe, is the percentage of predictions it got right out of the total number of predictions it tried to make. Is this correct?
However, I can't work out how the other two are derived, in particular the 'Score'. To give a live example, the scores were as follows:
Model            Score   Pop Correct   Pred Probability
Decision Trees   0.83    76.59%        54.28%
Neural Network   0.75    67.63%        50.05%
Ideal Model              100.00%
Can anyone help with this and give a detailed explanation?