Weighted Sum?

Aug 31, 2007

I need to run some statistics on a dataset. I've never used SUM or COUNT(*) before, but I'm pretty sure I need them for this. If this is my table:

GROUP (nvarchar(10))   QTY (tinyint)   TYPE (smallint)   VALUE (decimal(3,2))
g1                     q1              t1                v1
g1                     q1              t2                v2
g2                     q2              t1                v3
g2                     q2              t2                v4
g3                     q3              t1                v5
g3                     q3              t2                v6
g4                     q4              t3                v7

and I need this output
v1 * q1 / (q1 + q2 + q3 + q4) + v3 * q2 / (q1 + q2 + q3 + q4) + v5 * q3 / (q1 + q2 + q3 + q4)
v2 * q1 / (q1 + q2 + q3 + q4) + v4 * q2 / (q1 + q2 + q3 + q4) + v6 * q3 / (q1 + q2 + q3 + q4)
v7 * q4 / (q1 + q2 + q3 + q4)

How can I write this query? I think I have to group by TYPE, but beyond that I don't know how to approach it. Thanks for your help.
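
A minimal sketch of one way to get that shape, assuming the table is called dbo.Stats and that QTY is constant within each GROUP (so the denominator q1 + q2 + q3 + q4 is the sum of one QTY per GROUP):

SELECT s.[TYPE],
       SUM(s.[VALUE] * s.QTY) / t.TotalQty AS WeightedSum
FROM dbo.Stats AS s
CROSS JOIN (
    -- q1 + q2 + q3 + q4: one QTY per GROUP, then summed
    SELECT SUM(QTY) AS TotalQty
    FROM (SELECT [GROUP], MAX(QTY) AS QTY
          FROM dbo.Stats
          GROUP BY [GROUP]) AS g
) AS t
GROUP BY s.[TYPE], t.TotalQty;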




Weighted Search??

Oct 5, 2006

I have to perform a weighted search. I have 2 criteria and each will be weighted out of a total of 100 (e.g. 25/75, 50/50). I am just wondering if there is an easy way to incorporate a weighted value in a SELECT. Sorry if this is a dumb question. Thanks, Kyle
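
A minimal sketch of one way to do it, assuming a hypothetical dbo.Results table holding the two criterion scores; the two weights are declared so they sum to 100:

DECLARE @w1 decimal(5,2), @w2 decimal(5,2);
SET @w1 = 25;   -- weight for criterion 1
SET @w2 = 75;   -- weight for criterion 2

SELECT r.Id,
       (r.Score1 * @w1 + r.Score2 * @w2) / 100.0 AS WeightedScore
FROM dbo.Results AS r
ORDER BY WeightedScore DESC;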


Weighted Average

Nov 4, 2005

Does anyone know if it's possible to calculate weighted averages at the report level, based on 2 items in the report?
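
If the calculation can be pushed down into the report's dataset instead, a weighted average is a one-liner in T-SQL (a sketch, with a hypothetical dbo.ReportData table and hypothetical Value and Weight columns):

SELECT SUM(d.[Value] * d.[Weight]) / NULLIF(SUM(d.[Weight]), 0) AS WeightedAverage
FROM dbo.ReportData AS d;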


Weighted Random Numbers

Apr 11, 2007

Hi there... I've got an interesting one, that I can't seem to get my head around. Maybe some legend out there might be able to give me a hand...

I'm looking for a way to produce a weighted set of random numbers. I'm doing some work for a client at the moment, and they want to issue 3 random "reward cards" to their members at certain times. These are a bit like discount vouchers. The problem is that some cards need to appear more frequently than others. It's a similar problem to baseball cards: you buy a pack, you get mostly common cards, but every now and then you get a rare one.

Here is the table setup:
CREATE TABLE [dbo].[Cards](
[CardID] [uniqueidentifier] NOT NULL CONSTRAINT [DF_Cards_CardID] DEFAULT (newid()),
[CardName] [nvarchar](50) NOT NULL,
[InsertRatio] [float] NULL,
CONSTRAINT [PK_Cards] PRIMARY KEY CLUSTERED
(
[CardID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 1', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 2', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 3', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 4', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 5', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 6', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 7', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 8', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Common 9', NULL) /* Null implies the card is a common card */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Rare 1', 0.02) /* 1:50 ratio */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Rare 2', 0.02) /* 1:50 ratio */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Rare 3', 0.02) /* 1:50 ratio */
INSERT INTO [dbo].[Cards]([CardName],[InsertRatio]) VALUES('Very Rare 1', 0.005) /* 1:200 ratio */



So what I need is a stored proc that I can execute and that returns 3 random rows; within a single run, a card can't be duplicated.

Notice the InsertRatio column? It holds the probability implied by the ratio, e.g. a 1:50 insert ratio equals 0.02. For the common cards, a NULL value indicates the card is common.

Eventually, this table would have about 1000 rows in it, and about 200 of those would have various ratios (e.g. 1:50, 1:200, 1:1000, 1:8000, etc.).

Any ideas or comments?

Thanks in advance.

Leddo
Brisbane, Australia
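
A rough sketch of one possible proc (not a tested solution): it orders by a random key raised to the power 1/weight, which gives a weighted pick with no duplicates in a single run; the commons are assumed to split whatever probability the rare cards leave unused:

CREATE PROCEDURE dbo.GetRandomCards
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @commonWeight float;

    -- Give each common card an equal slice of the probability not used by the rares.
    SELECT @commonWeight =
           (1.0 - ISNULL(SUM(InsertRatio), 0))
           / NULLIF(SUM(CASE WHEN InsertRatio IS NULL THEN 1 ELSE 0 END), 0)
    FROM dbo.Cards;

    -- Weighted sampling without replacement: each row gets a random key
    -- RAND()^(1/weight); the three largest keys win.
    SELECT TOP (3) CardID, CardName
    FROM dbo.Cards
    ORDER BY POWER(RAND(CHECKSUM(NEWID())), 1.0 / ISNULL(InsertRatio, @commonWeight)) DESC;
END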


Weighted Search On Fulltext Index

Aug 20, 2007

SQL Server 2005:
Anyone know how best to rewrite this SQL string to perform a weighted search on my table?
here's my code:
SQL="SELECT RecipeName FROM recipeList WHERE FREETEXT(*,'ISABOUT " & ing01 & " WEIGHT (.1) or " & ing02 & " WEIGHT (.2) or " & ing03 & " WEIGHT (.3) or " & ing04 & " WEIGHT (.4) or " & ing05 & " WEIGHT (.5)')"
The idea is to get a list of recipes most pertinent to the ingredients entered.
Table 'recipeList' contains the ingredients in several columns, so one column might say '5oz of flour', the next '2oz butter', etc.

At the moment it returns the correct recipes but not in the right order.
If I use 'CONTAINS' it returns nothing.
Hope that's not too vague!
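
One possible rewrite to experiment with (a sketch only): CONTAINSTABLE exposes a RANK column that reflects the ISABOUT weights, so results can be ordered by relevance. The ingredient terms stand in for ing01..ing05, and RecipeID stands in for whatever unique key column the full-text index was built on:

SELECT r.RecipeName, k.[RANK]
FROM recipeList AS r
JOIN CONTAINSTABLE(recipeList, *,
     'ISABOUT (flour WEIGHT (.1), butter WEIGHT (.2), sugar WEIGHT (.3), eggs WEIGHT (.4), milk WEIGHT (.5))') AS k
  ON r.RecipeID = k.[KEY]   -- RecipeID: assumed full-text key column
ORDER BY k.[RANK] DESC;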


Script For SQL Agent Weighted Job Duration

Jul 20, 2005

Hi, does anyone know of a script that will give "weighted job duration"? I want to use it to identify which jobs are hogging the CPU. That is, for a given server, list the SQL Agent jobs ordered by (avg job duration in minutes) times (avg number of times the job runs in a given day).
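
A rough sketch of such a script against msdb (a guess at the intent, assuming the standard sysjobs/sysjobhistory schema: run_duration is stored as HHMMSS, and step_id = 0 is the job-outcome row):

SELECT JobName,
       AvgDurationMinutes,
       AvgRunsPerDay,
       AvgDurationMinutes * AvgRunsPerDay AS WeightedDuration
FROM (
    SELECT j.name AS JobName,
           AVG( (h.run_duration / 10000) * 60.0       -- hours   -> minutes
              + (h.run_duration / 100) % 100          -- minutes
              + (h.run_duration % 100) / 60.0 )       -- seconds -> minutes
             AS AvgDurationMinutes,
           COUNT(*) * 1.0 / COUNT(DISTINCT h.run_date) AS AvgRunsPerDay
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE h.step_id = 0
    GROUP BY j.name
) AS x
ORDER BY WeightedDuration DESC;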


Join To Select A 'weighted' Column

Mar 28, 2006

Perhaps it's just brain drain, but I cannot seem to find an efficient query to join two tables (inv and supplier) such that an inv item can have multiple suppliers, and I would like to choose the preferred supplier based on the current 'weight' column.

declare @inv table (item varchar(50), supplierid int)
declare @supplier table (supplierid int, weight int)

set nocount on

insert into @inv values ('item1', 1)
insert into @inv values ('item1', 2)
insert into @inv values ('item2', 2)
insert into @inv values ('item2', 3)

insert into @supplier values (1, 30)
insert into @supplier values (2, 20)
insert into @supplier values (3, 10)

-- the query should return the item and the supplierid associated to the lowest weight
-- item1 -> supplier 2
-- item2 -> supplier 3

select iw.item, ps2.supplierid
from @supplier ps2
join (select item, min(ps.weight) as weight
      from @inv inv
      join @supplier ps on inv.supplierid = ps.supplierid
      group by item) iw on ps2.weight = iw.weight

Is there a better alternative to this?

Thanks in advance,

Mike
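
One possible alternative (a sketch, SQL 2005 and up, run in the same batch as the declarations above): ranking suppliers per item with ROW_NUMBER() avoids the join back on weight, so a tie on weight can no longer return two suppliers for the same item:

select item, supplierid
from (select inv.item, ps.supplierid,
             row_number() over (partition by inv.item order by ps.weight asc) as rn
      from @inv inv
      join @supplier ps on inv.supplierid = ps.supplierid) ranked
where rn = 1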


Prediction Query For A Weighted Clustering Model

Dec 12, 2006

I have a question about writing a prediction query against a clustering model that has the same column added more than once.

Per Jamie, I can accomplish some crude weighting by adding a column to my model multiple times. See this post for an explanation... Now that I have that worked out, I was wondering how my DM query would look. If I have Input_A1, Input_A2, and Input_A3 all sourced from the same column in my structure, do I have to reference all three when writing my prediction query?
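
A hedged DMX sketch of what such a query might look like (the model name and the input value 42 are made up): with a singleton NATURAL PREDICTION JOIN, the one source value is simply aliased once per duplicated model column, which keeps the intended weighting in play:

SELECT Cluster(), ClusterProbability()
FROM [MyClusteringModel]
NATURAL PREDICTION JOIN
(SELECT 42 AS [Input_A1],
        42 AS [Input_A2],
        42 AS [Input_A3]) AS t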


Power Pivot :: Formula For Row Wise Weighted Average

Apr 22, 2015

I have a table in `PowerPivot` which contains the logged data of a traffic control camera mounted on a road. This table holds the velocity and the number of vehicles that passed the camera during a specific time window (e.g. 14:10 - 15:25). Now I want to know how I can get the average velocity of cars for a specific hour and list the results in a separate table with 24 rows (hour 0 - 23), where the second column of each row is the weighted average velocity of that hour.

A sample of my stat_table data is given below:
 
count   vel        hour
-----   --------   ----
133     96.00237   15
117     91.45705   21
81      81.90521   6
2       84.29946   21
4       77.7841    18
1       140.8766   17

[Code] ....

In a separate `PowerPivot` table I have 24 rows and 2 columns (column 1 for hours and column 2 for weighted averages), and when I enter my formula, every row gets updated with the same number. My formula is:

=sumX(FILTER(stat_table, stat_table[hour]=[hour]), stat_table[count] * stat_table[vel])/sumX(FILTER(stat_table, stat_table[hour]=[hour]), stat_table[count])
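
One thing worth trying (a guess only, with the 24-row table assumed to be named Hours): fully qualify the outer [hour] reference so it cannot be re-bound to stat_table inside FILTER's row context, which is one way the condition could become always true and leave every row showing the same overall average:

=SUMX(FILTER(stat_table, stat_table[hour] = Hours[hour]), stat_table[count] * stat_table[vel])
 / SUMX(FILTER(stat_table, stat_table[hour] = Hours[hour]), stat_table[count])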


Minimizing Penalty (weighted Sum Of False Positives Plus False Negatives)

May 25, 2006

I am using Naive Bayes, Decision Trees, and Neural Net (SSAS 2005) to predict which of two states each record belongs to.

How can I enforce a different penalty for a false positive versus a false negative? (I am assuming that, in some sense, the mining algorithms can then minimize the total penalty.)







