Time Dimension According To Different Granularity

May 29, 2008

Hi,
I have an inventory fact table. For the past two weeks the data is at a daily level; beyond that, weekly; and beyond that, monthly. I need to tie it all in to the time dimension, of course, and the problem is: how do I do that at different granularities?
As for the time dimension: in the data mart I have tables dim_date with key column date_id (int), and correspondingly dim_week with week_id (int) and dim_month with month_id (int).

What I've done so far is create a time dimension from the dim_date table (meaning granularity = daily) and simply tie all the inventory (daily, weekly, monthly) in at the day level. Every row has a date_id field, even the weekly and monthly ones; it is simply the end-of-week or end-of-month date. I didn't tie anything to dim_week or dim_month.
Does that make sense? The result is kind of strange. I can't upload an image here, but... well, it seems OK: I get year, week ('GL Week') and then... this is the annoying thing: why am I getting a 'date' column when I only want the data by week or by month? I can't make that column disappear (e.g. when in the time hierarchy I only group by month, a 'day' column will still be there, showing the 4 'end of week' days that come from date_id).
How do I make it go away?

Thanks for reading :) I know it's a bit long.
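One way to make the stray 'day' column go away is to stop binding the weekly and monthly rows to dim_date at all, and instead expose each grain as its own source view bound to the matching table (dim_date, dim_week, dim_month) as a separate measure group in Dimension Usage. A rough T-SQL sketch of the views follows; the grain flag and the week_id/month_id columns on dim_date are assumptions, not something from the original post:

Code Snippet
CREATE VIEW vw_fact_inventory_daily AS
    SELECT date_id, product_id, qty_on_hand
    FROM fact_inventory
    WHERE grain = 'D';                             -- daily rows keep their own date_id
GO
CREATE VIEW vw_fact_inventory_weekly AS
    SELECT d.week_id, f.product_id, f.qty_on_hand
    FROM fact_inventory f
    JOIN dim_date d ON d.date_id = f.date_id       -- weekly rows carry the end-of-week date_id
    WHERE f.grain = 'W';
GO
CREATE VIEW vw_fact_inventory_monthly AS
    SELECT d.month_id, f.product_id, f.qty_on_hand
    FROM fact_inventory f
    JOIN dim_date d ON d.date_id = f.date_id       -- monthly rows carry the end-of-month date_id
    WHERE f.grain = 'M';
GO

With the weekly and monthly measure groups keyed on week_id and month_id and related to dim_week and dim_month, the browser no longer has a day attribute to show for them.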

View 1 Replies



Filtering Out 1 Time Dimension Based On The Selection In Another Time Dimension

May 19, 2008

Hi!

Need some help building a query that does the following :

I have 2 Time Dimensions ; Time (Transdate) and ClosedDate (ClosedDate)

In my report/query, if [Time].CurrentMember = [Time].[YMD].[YMD].[2006].[200610].[20061031] I want to FILTER out all ClosedDate < [ClosedDate].[YMD].[YMD].[2006].[200610].[20061031]

Both Time Dimensions are Year -> Month -> Day and have the same Members.

I have every option available, using calculated Members and/or Measures to do this.

The report I'm creating is Aging of Receivables : Balance / 30 days / 60 days / etc.. But for the Aging, I need to filter like explained above.

Appreciate all help!

Regards,
Stian Bakke
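A minimal MDX sketch of one way to do the filtering, assuming a measure named [Measures].[Amount] and that both hierarchies expose comparable member values at the day level (the level and measure names are assumptions):

Code Snippet
WITH MEMBER [Measures].[Open Balance] AS
    SUM(
        FILTER(
            [ClosedDate].[YMD].[Day].MEMBERS,
            [ClosedDate].[YMD].CURRENTMEMBER.MEMBERVALUE >=
            [Time].[YMD].CURRENTMEMBER.MEMBERVALUE
        ),
        [Measures].[Amount]
    )
SELECT { [Measures].[Open Balance] } ON COLUMNS
FROM [YourCube]
WHERE ( [Time].[YMD].[YMD].[2006].[200610].[20061031] )

The same pattern, offset by 30/60/90 days against the Time member's value, could produce the aging buckets.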

View 3 Replies View Related

Time Dimension

Sep 4, 2007

Hello,

I am trying to make a time dimension in Analysis Services. Is it possible to include the hours of the day in the dimension? Is the day level necessarily the lowest level of the hierarchy?

Thank you
ST

View 3 Replies View Related

Time Dimension

Aug 14, 2007

(This is the Twilight Zone......not so much)

What are the pros and cons of using SSAS to create time dimensions based on a date field in the fact table, as opposed to a stand-alone time dimension table?

I can see many problems with loading a time dimension table. The date is from the same table as most of my fact data. I have a column in my OLTP that sets last change date so I can tell if my fact data is an insert or update but it wouldn't tell me if the date column had been changed, just some column in the table. I'm going to have several thousand sales on any given day so I'll be reading a lot of rows just to put one row into the dimension.

From a SSIS point of view I'd think leaving the date in the fact table would be better.

View 5 Replies View Related

Need Help To Create My Time Dimension

Mar 30, 2004

Hi,

I have a fact table which contains the transaction date, ProductID, QTySold, TotalSaleAmount, etc...

Since I am new to OLAP therefore I need help to now create the table for TIME on which I will be basing my time dimension... I have read a few articles and have gathered that at the end of the exercise my fact table should have a 'timeid' column which will be linked to the same column in the table being used for the time dimension...

I have gone through the tutorial for MS Analysis Services and the FoodMart example, and have some idea about what the structure of this table will be.

My questions are:

1. I need guidance on how to create the table for time. One option is to just copy the table used in the FoodMart example; that might work, but then my concepts will not be clear.

2. The structure of the table to be used for the time dimension is quite clear (I think this part is easy). What I want to understand is how do I POPULATE this table, and with which data? Can someone provide me with scripts, SPs, or whatever to do this... This is the area where I am lost...

3. How will I enter the "TheDate" column in my fact table and link it with my table for time dimension...

Looking forward to someone's help.

BTW, I would like to share a very good article which I recently found in one of the newsgroups. Some of you might appreciate it too:
http://www.sqljunkies.com/Article/D1E44392-592C-40DB-B80D-F20D60951395.scuk
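For question 2, a minimal T-SQL sketch of populating a day-level time table; the table name, column names and date range are assumptions you would adjust:

Code Snippet
CREATE TABLE dbo.tblTime (
    TimeID      int         NOT NULL PRIMARY KEY,   -- e.g. 20040330
    TheDate     datetime    NOT NULL,
    [Year]      int         NOT NULL,
    [Quarter]   int         NOT NULL,
    [Month]     int         NOT NULL,
    [MonthName] varchar(20) NOT NULL,
    [Day]       int         NOT NULL
);

DECLARE @d datetime, @end datetime;
SET @d   = '2000-01-01';
SET @end = '2010-12-31';

WHILE @d <= @end
BEGIN
    INSERT INTO dbo.tblTime (TimeID, TheDate, [Year], [Quarter], [Month], [MonthName], [Day])
    VALUES (YEAR(@d) * 10000 + MONTH(@d) * 100 + DAY(@d),
            @d, YEAR(@d), DATEPART(quarter, @d), MONTH(@d), DATENAME(month, @d), DAY(@d));
    SET @d = DATEADD(day, 1, @d);
END;

For question 3, the fact table then gets the matching key during its load, e.g. TimeID = YEAR(TransactionDate) * 10000 + MONTH(TransactionDate) * 100 + DAY(TransactionDate), and the cube joins tblFact.TimeID to tblTime.TimeID.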

View 3 Replies View Related

Creating A 4-4-5 Time Dimension

Aug 29, 2007

Hello:
Very soon my company will be moving to a 4-4-5 reporting schedule. Basically, this means that the first month of the quarter has 4 weeks, the second has 4 weeks, and the third has 5 weeks. Therefore, for 2007 the dates for Jan, Feb and Mar will be as follows:
Jan: Jan 1 - Jan 27
Feb: Jan 28 - Feb 24
Mar: Feb 25 - Mar 31

Currently, I have an SSIS package creating a record for each day in the Time Dimension. Is there any script out there that will help me build a Fiscal calendar such as the one described above?

I realize that this is not a direct SSIS question but I figured that some of you might have encountered this situation and hence my post.

Thank you!
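A hedged T-SQL sketch of deriving 4-4-5 attributes that the SSIS package could stamp onto the existing day-level records; the fiscal year start date, the 52-week assumption and the table/column names are all assumptions:

Code Snippet
DECLARE @FiscalYearStart datetime, @d datetime,
        @week int, @qtr int, @wkInQtr int, @fiscalMonth int;
SET @FiscalYearStart = '2007-01-01';               -- first day of fiscal 2007
SET @d = @FiscalYearStart;

WHILE @d < DATEADD(week, 52, @FiscalYearStart)     -- one 52-week fiscal year
BEGIN
    SET @week    = DATEDIFF(day, @FiscalYearStart, @d) / 7 + 1;  -- 1..52
    SET @qtr     = (@week - 1) / 13 + 1;                         -- 1..4
    SET @wkInQtr = (@week - 1) % 13 + 1;                         -- 1..13
    SET @fiscalMonth = (@qtr - 1) * 3 +
                       CASE WHEN @wkInQtr <= 4 THEN 1            -- first month: 4 weeks
                            WHEN @wkInQtr <= 8 THEN 2            -- second month: 4 weeks
                            ELSE 3 END;                          -- third month: 5 weeks
    UPDATE dbo.DimTime                                           -- assumed table/column names
    SET    FiscalWeek = @week, FiscalMonth = @fiscalMonth, FiscalQuarter = @qtr
    WHERE  CalendarDate = @d;
    SET @d = DATEADD(day, 1, @d);
END;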

View 4 Replies View Related

Analysis :: Add Dimension To Cube Dimension Without Any Relation In Dimension Usage

Oct 26, 2015

When I add a dimension to the cube without any relation in Dimension Usage to any measure group, my units go down. However, when I remove the dimension from the cube, I get the correct values.

View 4 Replies View Related

Time Dimension Design Question

Dec 20, 2007

I'm trying to design my time dimension and need to add a row to handle null dates in the fact, so that if the date isn't known at ETL time, referential integrity is still preserved. Kimball suggests inserting a record in the time dimension to handle this, with a description like 'Date not available'. However, if users are doing inner joins on the dimension they will obviously be pulling the datetime field... what should be in the datetime field for this particular record?
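A common answer is a surrogate 'unknown date' row keyed to a sentinel value, using an out-of-range but real date such as 1900-01-01 so that inner joins and date arithmetic still work. A hedged sketch (the key, date and column names are assumptions):

Code Snippet
INSERT INTO dbo.DimTime (TimeID, TheDate, DateDescription)
VALUES (-1, '1900-01-01', 'Date not available');

Facts with an unknown date then load with TimeID = -1 instead of NULL.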

View 4 Replies View Related

Sorting The Time Dimension In Browsers

May 20, 2008



Hi All,
I've looked around but can't seem to find an answer for this.
I have a cube that has a fairly large time dimension (going back to 1949) as the data demands this.
When a user is browsing the cube and applies a filter or adds a time hierarchy as a value, it is always sorted from oldest to newest. While the need is there to look at data from 1949, most people want to look at the last few years. The problem is that they have to scroll right down the list to get to these. Is there a way of having the most recent years of the time dimension appear first in these lists to make them more accessible?

I hope that makes sense!

Any help would be appreciated.
Thanks
yayomayn
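One common workaround is to add a sort column that inverts the natural order and point the Year attribute's ordering at it. A hedged sketch (column names are assumptions):

Code Snippet
ALTER TABLE dbo.DimTime ADD YearSortDesc AS (9999 - [Year]);   -- larger years sort first

In the dimension designer, expose YearSortDesc as an attribute related to Year, then set Year's OrderBy to AttributeKey with YearSortDesc as the OrderByAttribute, so browsers list recent years first.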

View 10 Replies View Related

Please Guide - Especially About Time Dimension And Approach In General

Mar 30, 2004

Hi,

I have a table which contains all the transaction details for which I am trying to create a CUBE... The explanation below in brackets is only for clarity about each field. Kindly note that I am using the following table as my fact table. Let's call it tblFact

This table contains fields like transaction date, Product (which is sold), Product Family (to which the Product Belongs), Brand (of the product), Qty sold, Unit Price (at which each unit was sold), and some other fields.

I have created a Product dimension based on tblFact. I don't know if this is a good idea or not. Is it okay, and am I on the right track, or should I base my Product dimension on some other table (say tblProducts) and then in the Cube editor link tblProducts with tblFact on the ProductID field? Please guide.

Now coming to my last question:
Currently I am also creating my Time Dimension based on tblFact. Is this also a wrong approach?
1. Should I instead create the Time Dimension based on a different table (say tblTime) and again link up tblTime and tblFact in the Cube editor?

2. if yes, then how do I create tblTime from tblFact in a manner that it only contains all the transaction dates.

3. Assuming that I should take the tblTime approach, then should this table (tblTime) also contain the ProductID - representing the Product which was sold for each date in tblTime?

I realize that this is a lengthy post, but a reply, and more importantly guidance from experienced users, will be greatly appreciated, because I have recently started learning/playing around on the OLAP side of things and I know that this is the time to get my foundations correct; otherwise I'll end up getting used to some bad practice and will find it difficult to change my approach to cube design later down the road.

So many thanks in advance; I eagerly look forward to a reply.
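On question 2, a hedged T-SQL sketch of building tblTime from the distinct transaction dates in tblFact (the TransactionDate column name is an assumption). On question 3: no, tblTime should not carry ProductID; the time dimension only describes dates, and the fact row already links product and date.

Code Snippet
SELECT DISTINCT
       CAST(CONVERT(char(8), TransactionDate, 112) AS int)            AS TimeID,   -- yyyymmdd key
       CONVERT(datetime, CONVERT(char(8), TransactionDate, 112), 112) AS TheDate,  -- date stripped of time
       YEAR(TransactionDate)  AS [Year],
       MONTH(TransactionDate) AS [Month],
       DAY(TransactionDate)   AS [Day]
INTO   dbo.tblTime
FROM   dbo.tblFact;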

View 10 Replies View Related

The Time Dimension Is Growing Much After Every Process Of Cube

Apr 11, 2008

Hi guys,

I made a cube with a time dimension with the hierarchy year/month/date/hour.
The problem is that the dimension is growing too fast. In the older version of MSSQL (2000) the same dimension didn't grow so much.
Any ideas? The table is big (maybe around 1,500,000 rows per month); it now contains around 4,500,000 rows.

View 19 Replies View Related

Reporting On A Cube -- Time Dimension Issue

Feb 28, 2007

Hello,

I have an Analysis Services Cube that I would like to report on. However, the Time Dimension currently only has four columns, Day of Month, Month(name) , Year, and DateKey (DateTime representation at midnight for every day). Thus when I drag the month attribute onto the report, it is sorted April - August - December - etc. instead of Jan - Feb - Mar. How do I fix this? I remember reading something in the MSDN Library about it but I can't find it again now.

Thomas
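The usual fix is to key (or order) the Month attribute by a month number rather than by its name. A hedged sketch of adding such a column to the time table (names are assumptions); the Month attribute's KeyColumns can then use MonthNumber with the month name as NameColumn, and OrderBy = Key:

Code Snippet
ALTER TABLE dbo.DimTime ADD MonthNumber AS MONTH(DateKey);   -- 1..12, derived from the existing DateKey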

View 5 Replies View Related

Data Mining Against A Cube With Time Dimension

Nov 21, 2006

We have a set of cubes and dimensions, and we're experimenting with data mining against the cubes (primarily for forecasting applications). We have a custom time dimension (which we call calendar), not generated by the BIStudio wizard. The dimension has year/month/day/hour/... attributes. But when I try to add this Calendar dimension to the mining structure as a nested table using BI studio, it only shows the Year attribute, not the others. Other dimensions seem to show all the attributes.

Is there something we've done wrong in defining our time dimension? What determines which attributes show up as available for selection in BI studio?

View 5 Replies View Related

HOW TO MODIFY MDX TO RESTRICT MEMBERS IN TIME DIMENSION

Dec 29, 2006

There are some 55 members in the arrival year level of the time dimension (1995 - 2055).
I am trying to find a way to restrict the number of years returned by this mdx query to the current year - 5. Any help will be appreciated.

WITH MEMBER [Measures].[ParameterCaption]
AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.MEMBER_CAPTION'
MEMBER [Measures].[ParameterValue]
AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.UNIQUENAME'
MEMBER [Measures].[ParameterLevel]
AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.LEVEL.ORDINAL'
SELECT
{
[Measures].[ParameterCaption],
[Measures].[ParameterValue],
[Measures].[ParameterLevel]
} ON COLUMNS,
[TIME DIMENSION].[ARRIVAL YEAR].allmembers ON ROWS
FROM [TOURISM CUBE]

Thanks
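A hedged sketch of one way to keep only the years from the current year minus five onwards using FILTER, assuming the year members' captions are plain four-digit years and the attribute's level carries the same name as the hierarchy:

Code Snippet
WITH MEMBER [Measures].[ParameterCaption]
 AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.MEMBER_CAPTION'
MEMBER [Measures].[ParameterValue]
 AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.UNIQUENAME'
MEMBER [Measures].[ParameterLevel]
 AS '[TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.LEVEL.ORDINAL'
SELECT
{ [Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel] } ON COLUMNS,
FILTER(
    [TIME DIMENSION].[ARRIVAL YEAR].[ARRIVAL YEAR].MEMBERS,
    VBA!CInt([TIME DIMENSION].[ARRIVAL YEAR].CURRENTMEMBER.MEMBER_CAPTION) >= VBA!Year(VBA!Now()) - 5
) ON ROWS
FROM [TOURISM CUBE]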

View 4 Replies View Related

MDX To Compare The Currentmember Of The Time Dimension To Today

May 21, 2008

This is a cross-post from the Office PPS Planning site: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3379691&SiteID=17 I was hoping there may be some additional MDX resources here.

I'm trying to determine if the currentmember of the time dimension is > or < the value of today. I want to use this to change the value of a member that I display on columns from actual (for periods prior to today) to forecast (after today). I have this MDX:


with

member [measures].[PYCY] as IIf(VBA!CInt(Format(VBA!Now(),"MM"))<10, VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())-1)), VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())+0)))

member [measures].[CYNY] as IIf(VBA!CInt(Format(VBA!Now(),"MM"))<10, VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())+0)), VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())+1)))

member [measures].[NYNY2] as IIf(VBA!CInt(Format(VBA!Now(),"MM"))<10, VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())+1)), VBA!CStr(VBA!CInt(VBA!Year(VBA!Now())+2)))


member [measures].[test2] as Format(Now(), "yyyyMM")

member [measures].[test4] as [Time].[Day View].CurrentMember.Member_Key

member [measures].[diff] as DateDiff('m', Format(Now(), "yyyyMM"),[Time].[Day View].CurrentMember.Member_Key)


select

StrToSet('{' + Generate(Descendants(

{strtomember('[Time].[Day View].[year].&[' + [measures].[PYCY] + ']'), strtomember('[Time].[Day View].[year].&[' + [measures].[CYNY] + ']'), strtomember('[Time].[Day View].[year].&[' + [measures].[NYNY2] + ']') }

, [Time].[Day View].[Month]), [Time].[Day View].CurrentMember.UNIQUE_NAME, ", ") + '}')


* {[measures].[test2],[Measures].[test4],[measures].[diff]}



on 0,

[Scenario].[All Members].members

on 1

from [SalesForecasting]


Here is the result for 2 months:

             October FY2007              November FY2007
             test2    test4   diff       test2    test4   diff
All          200805   200610  #Error     200805   200611  #Error
None         200805   200610  #Error     200805   200611  #Error
Actual       200805   200610  #Error     200805   200611  #Error
Forecast     200805   200610  #Error     200805   200611  #Error
Override     200805   200610  #Error     200805   200611  #Error
Plan         200805   200610  #Error     200805   200611  #Error


The error is: #Error Execution of the managed stored procedure DateDiff failed with the following error: Exception has been thrown by the target of an invocation.Argument 'Date1' cannot be converted to type 'Date'..

Why can't DateDiff calculate the difference between 200805 & 200610?

Thanks
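The likely cause is that neither argument is a Date: Format(Now(), "yyyyMM") and Member_Key are both yyyyMM strings ('200805', '200610'), which the managed DateDiff cannot coerce. A hedged sketch of building a real date from the key first (it assumes Member_Key really is a 6-character yyyyMM string):

Code Snippet
MEMBER [measures].[diff] AS
    DateDiff('m',
        VBA!Now(),
        VBA!DateSerial(
            VBA!CInt(VBA!Left([Time].[Day View].CurrentMember.Member_Key, 4)),
            VBA!CInt(VBA!Right([Time].[Day View].CurrentMember.Member_Key, 2)),
            1))

A negative result then means the month is before today (actual) and a positive one means it is after today (forecast).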

View 5 Replies View Related

Slowly Changing Dimension And History Time Stamp

Nov 14, 2007

The following question might sound a bit stupid but I'm not a database expert so hopefully nobody minds me asking it.

Here's what I did:

1. I created an SSIS package that is supposed to import new data into my data warehouse as it becomes available.

2. Since I need to maintain some of the history I use the Slowly Changing Dimension component (with the history flag set on input fields), but I ran into an error condition while running the package. The message basically says that I'm about to create a duplicate record, which is not allowed.



Original Table1:

PK1 field1
PK2 field2
PK3 field3
field4
field5

-----------

Now I enhanced it like this:

Extended Table1:

PK1 field1
PK2 field2
PK3 field3
field4
field5

CreateDate (new)
NewDate (new)
ActiveFlag (new)
_____________________

Now, on some records, the package is supposed to archive history by populating the (new) fields. In order to keep the record unique (primary key constraint), though, do I need to make the (new) fields primary keys as well?

So I guess I'm struggling with a more basic concept;)

I would appreciate if somebody could shed some light on this.

Thanks in advance.
Dirk
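The usual Type 2 answer is: no, don't widen the natural key; instead add a surrogate key as the primary key so several historical versions of the same business key can coexist, and keep the validity columns out of the key. A hedged sketch (names are illustrative):

Code Snippet
CREATE TABLE dbo.Table1_Scd (
    SurrogateKey int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    field1       int NOT NULL,              -- former PK1..PK3 become the business key
    field2       int NOT NULL,
    field3       int NOT NULL,
    field4       varchar(50) NULL,
    field5       varchar(50) NULL,
    CreateDate   datetime NOT NULL,          -- row valid from
    NewDate      datetime NULL,              -- row valid to (NULL = current)
    ActiveFlag   bit NOT NULL
);
-- optional (SQL Server 2008+): at most one active row per business key
CREATE UNIQUE INDEX UX_Table1_Current
    ON dbo.Table1_Scd (field1, field2, field3)
    WHERE ActiveFlag = 1;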

View 3 Replies View Related

Query Builder Shows Time Dimension As Two Dimensions?

Oct 17, 2007

Good afternoon all,

I have created a cube in Analysis Services with a time dimension named Time. The data is only needed at the month level, so the fields in the table that the dimension is based on are [DTE MM] and [DTE YR]. In Visual Studio 2005 or in SQL Server Mgmt Studio I can browse the dimension and I see that Time has the following attributes: Year and Month, both Regular attributes, and Time (the Key attribute). There is also a hierarchy named [Year - Month], and if I browse that it looks good, drilling down from All to Year to Month.

However, when I point my Reporting Services Datasource to the cube, and start Query Builder, I see two dimensions, [DTE MM] and [DTE YR] and no Time dimension.

[DTE MM] has attributes [DTE MM].[Month], [DTE MM].[Time], [DTE MM].[Year], [DTE MM].[Year - Month].

[DTE YR] has attributes [DTE YR].[Month], [DTE YR].[Time], [DTE YR].[Year], [DTE YR].[Year - Month]

This is causing me huge problems in my report. I need to use a date range in the query. To do this, I created Query Parameters and refer to Report Parameters that will be passed in. For example, I use ="[DTE YR].[Year - Month].[Year].&[" + Parameters!StartYear.Value + "].&[" + Parameters!StartMonth.Value +"]" (thanks to Simon Philips) as the query parameter for the Start Year Month, but if I use the [DTE YR].[Year - Month] in my SELECT, the data is not sliced in my query results. That is to say that the Year and Month show correctly, according to the date range, but the Measure is equal for each month and is equal to the total for the cube. To slice the data, I have to use [DTE YR].[Year].[Year] and [DTE MM].[Month].[Month] in my select, but then the data is shown for every month, i.e. the date range is ignored.

What am I doing wrong, or is this a quirk of RS that I can work around?

Thanks,
Kathryn

View 3 Replies View Related

Analysis :: SSAS Time Period Dimension Aggregation

Jul 27, 2015

I have a monthly time period dimension representing average number of students for each month. At the yearly aggregate level I don't want it to sum up the avg number of students from every month because that number is incorrect. I would like it to use the number of students from the most recent month as a roll up. Is that possible to configure in SSAS?
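If the Enterprise semi-additive aggregations are available, setting the measure's AggregateFunction to LastNonEmpty (or LastChild) does exactly this. Otherwise a scope assignment in the cube's calculation script can do it; a hedged MDX sketch with assumed hierarchy and measure names:

Code Snippet
SCOPE ([Date].[Calendar].[Calendar Year].MEMBERS, [Measures].[Avg Students]);
    // show the latest non-empty month instead of the sum of the months
    THIS = TAIL(
             NONEMPTY(
               DESCENDANTS([Date].[Calendar].CURRENTMEMBER, [Date].[Calendar].[Month]),
               [Measures].[Avg Students]),
             1).ITEM(0);
END SCOPE;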

View 2 Replies View Related

Population Of Dimension Table Takes Long Time

May 26, 2008



Hi,

The scenario: data comes from various sources and is staged into a staging database. From this staging database it goes into the data warehouse database. Every day the staging database is truncated and repopulated from the various sources.
I have a dimension table called DimCustomers which consists of around 300,000 rows and has lots of different types of SCD columns. It takes around 4-5 hours to load data from staging into this dimension table. Currently I'm using a For Loop container which uses a stored proc to extract 15,000 rows each time and populate my dimension table. The first couple of loops go quickly, but once the count reaches about half the total it slows down, and hence it takes around 4-5 hours to load the data.

What would be the best approach to populate this kind of dimension table.

Thanks
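A set-based load is usually much faster than looping 15,000 rows at a time. A hedged sketch using MERGE (SQL Server 2008+; table, key and column names are assumptions) for the Type 1 columns; the Type 2 columns would still need an extra step that expires the old row and inserts a new version:

Code Snippet
MERGE dbo.DimCustomers AS tgt
USING staging.Customers AS src
      ON tgt.CustomerBusinessKey = src.CustomerBusinessKey
WHEN MATCHED AND (tgt.Phone <> src.Phone OR tgt.Email <> src.Email)
    THEN UPDATE SET tgt.Phone = src.Phone,
                    tgt.Email = src.Email
WHEN NOT MATCHED BY TARGET
    THEN INSERT (CustomerBusinessKey, Phone, Email)
         VALUES (src.CustomerBusinessKey, src.Phone, src.Email);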

View 7 Replies View Related

Analysis :: How To Change The Member Names Of Time Dimension

Oct 19, 2015

<SQL Server 2008 R2>

I created a Time table using BIDS. I found that the default naming of the time members is too long and redundant.

For example, the wizard generated "Fiscal Calendar 2015", "Fiscal Quarter 1, 2015", etc. However, shorter expressions like "FY2015", "FQ1 2015", etc. would be enough for me.

Is it possible to change the default naming rule, or does SSAS work correctly if I update the Time table values using SQL?
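For a table-bound time dimension, SSAS only reads the name columns when the dimension is processed, so updating the generated table with SQL and reprocessing is a workable approach. A hedged sketch (the column names in the generated table are assumptions to check against your own schema):

Code Snippet
UPDATE dbo.[Time]
SET    Fiscal_Year_Name    = 'FY' + CAST(Fiscal_Year AS varchar(4)),
       Fiscal_Quarter_Name = 'FQ' + CAST(Fiscal_Quarter_Of_Year AS varchar(1))
                             + ' ' + CAST(Fiscal_Year AS varchar(4));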

View 2 Replies View Related

Analysis :: How Time Dimension Aggregate Parent Child In MDX

May 21, 2015

I have a parent-child dimension. In the time dimension I have Year.

Assume,

ParentID  ChildID  Sales  Year
171       171       10    2014
171       172      200    2014
171       173      300    2014
171       172       44    2015

If I pass 2014 and 2015 in the subselect, the data for 171 does not come back in the result. If I pass only 2014 in the subselect, I get the value for 2014 only. If I pass 2015 in the subselect, I don't get any value.

View 2 Replies View Related

Tricky Schema Question - Dimension Can Split And Combine Over Time

Jul 20, 2005

Hi all,

I'm working on the schema for a database that must represent data about stock & bond funds over time. My conundrum is that, for any of several dimension fields, including the fund name itself, the dimension may be represented in different ways over time, and may split or combine from one period to the next.

When querying the database for an arbitrary time period, I need the data to be rolled up to the smallest extent possible so that apples can be compared to apples. For instance, if the North America region becomes 2 regions, USA and Canada, and I query a time period that spans the period in which this split occurred, I should roll up USA and Canada for records in the period that has both, and I should call the result something like "(North America)(USA/Canada)" in the output. The client specifies that the dimension output must represent all dimensions that went into the input.

Of course, I have to account for more complex possibilities as well, e.g. Fund-A splits into Fund-B and Fund-C, then Fund-C merges into Fund-D, producing (Fund-A/Fund-D)(Fund-B/Fund-C/Fund-D)(Fund-B/Fund-D).

I can think of several ways to handle this issue, and they're all extraordinarily complex and ugly. Any suggestions?

Thanks,
- Steve Jorgensen

View 9 Replies View Related

Analysis :: Change Dimension Storage Type As Real-time Rolap But Error Occurs?

Jul 7, 2015

I created a measure group and two dimensions using [AdventureWorksDW2012]. I am trying to change one dimension's storage mode by setting the proactive caching property to Real-Time ROLAP. There is no warning message when deploying and processing, but an error occurs when I query in SQL Server Analysis Services; the error message is below.

Error occurred retrieving child nodes: the current operation was cancelled because another operation in the transaction failed.

View 2 Replies View Related

Index Granularity

Oct 4, 2006

Hi,

I was looking at the new index locking granularity option available in 2005. I did not understand in what case this can be a performance enhancement. Has anybody looked into this?
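For reference, this is the option exposed per index as ALLOW_ROW_LOCKS / ALLOW_PAGE_LOCKS; disallowing the finer locks forces coarser ones, which can reduce locking overhead on scan-heavy or read-mostly indexes at the cost of concurrency. A hedged example with assumed object names:

Code Snippet
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = ON);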

View 5 Replies View Related

Sum At A Lower Level Of Granularity

May 19, 2008

Hi all,
I have a big problem with the design and the queries of a couple of tables. I have to calculate the weighted average of the number of days between an invoice and its payments (installment payments), weighting this number by the invoice amount, and only when the sum of the invoice's payments equals the invoice amount.
I have designed two FactTables to accomplish this:

Example:
data in the first table (FactInvoice) looks like this:

CustomerId  InvoiceDate  InvoiceNumber  InvoiceAmount
1234567     2008-03-10   123            1000.00
1234567     2008-04-10   150            2000.00

and data in the second one (FactPayments) looks as below:


CustomerId  PaymentDate  InvoiceNumber  PaymentsAmount
1234567     2008-03-10   123             500
1234567     2008-03-15   123             500
1234567     2008-04-20   150             800
1234567     2008-04-23   150            1200




Let's suppose that I'm querying the cube for customer 1234567 at the date 2008-05-01. I need to multiply each payment by the number of days elapsed since its invoice, sum these, then divide by the InvoiceAmount (or PaymentsAmount, it is the same): (0 days * 500 € + 5 days * 500 € + 10 days * 800 € + 13 days * 1200 €) / 3000 € = 8.7 days of weighted average to cash an invoice.

How can I do this with Analysis Services? Is it too complicated? Here in southern Italy the problem of customer debt is heavily felt, and totals like this are really important!
Any help or suggestion will be really appreciated.

Marco
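For comparison, a plain T-SQL sketch of the calculation over the two fact tables shown above, counting only invoices whose payments fully cover the invoice amount; the same logic could back an SSAS measure or a named query:

Code Snippet
SELECT  i.CustomerId,
        SUM(DATEDIFF(day, i.InvoiceDate, p.PaymentDate) * p.PaymentsAmount) * 1.0
            / SUM(p.PaymentsAmount) AS WeightedAvgDaysToCash
FROM    FactInvoice  i
JOIN    FactPayments p
        ON  p.CustomerId    = i.CustomerId
        AND p.InvoiceNumber = i.InvoiceNumber
WHERE   i.InvoiceAmount = (SELECT SUM(p2.PaymentsAmount)      -- only fully paid invoices
                           FROM   FactPayments p2
                           WHERE  p2.CustomerId    = i.CustomerId
                           AND    p2.InvoiceNumber = i.InvoiceNumber)
GROUP BY i.CustomerId;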

View 13 Replies View Related

Lock Granularity And Performance Issues

Jun 9, 2008

Hi I have a few things to clear up and I hope i can find some answers in here.
I have an application written in C# using COM+ components
This application is intended to support a few hundred users at the same time, and each user action leads to a few updates on several tables.
The problem is that 2 or more users might do this at the same time, meaning that each user could update rows that "belong" to the other users.
I rely on the appearance of deadlocks to keep the data consistent, so only one of the users should be able to have the action completed; the transactions for the other concurrent users should abort.
Basically the results would be the same no matter which user completes the transaction, so the only issue is to have only one action completed. So far so good; it seems that I have no problems and that the data is consistent.

Now, the reason I'm here is that lately the number of deadlocks has increased considerably, and there should be no reason for this to happen; the situations where users modify each other's data are somewhat rare, yet I see a few hundred deadlocks daily.
So what I think is that SQL might be increasing the lock granularity to table or page instead of using row locks (I am not sure, it's just a wild guess).
From what I've read, SQL starts with a default row lock and may escalate it when necessary.

And now finally the question: can I force SQL to use only row locks? And if yes, are there any risks involved? Is SQL capable of maintaining only this kind of lock and still able to manage the transactions correctly?
And another question: how do deadlocks affect overall performance?
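On the last question: row and page lock usage can be steered per index, but the engine may still escalate to a table lock when a single statement touches many thousands of rows. A hedged sketch with assumed object names (disallowing page locks leaves row or table locks):

Code Snippet
ALTER INDEX ALL ON dbo.Orders
    SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF);
-- SQL Server 2008+ can also switch off statement-level escalation per table:
-- ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);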

View 1 Replies View Related

Budget On Different Granularity In Organization Hierarchy (which Is SCD)

May 27, 2008

Hi everyone

I am struggling with adding budget numbers to a cube. The main reason being that the budget is *not* on the finest granularity (employee) with regard to the organization hierarchy but on a coarser one (team).


The organization hierarchy is a "flat" (not parent-child) hierarchy that looks about like that:

employee -> team -> teamgroup -> region -> country


As mentioned, I now have budget numbers that are defined at the team level (not at the employee level like the "regular" measures). I would now assume that I could put the budget data into its own table and "link" it to the organization through the "team" attribute. I would do that on the "Dimension Usage" tab.

The problem with this approach is that the organization is changing (SCD type 2). This essentially means that by linking to the "team" attribute the aggregation of the budget data on higher levels of the organization hierarchy can be ambiguous (at least that is what I understand).

Example organization table:



Code Snippet
surrKey  busKey  empName  teamId  teamGroupId  regionId  countryId  ...  scdStuff
1        1       Raphael  1       1            1         1          ...
2        2       Jeanne   2       2            1         1          ...
3        3       George   3       3            1         1          ...
4        2       Jeanne   2       3            1         1          ...



That would mean that on some point in time team 2 (consisting of one employee, Jeanne) moved from teamgroup 2 to teamgroup 3. Just for the sake of a simple example.

Now, what am I to do with my budget data in this situation? I cannot link it to the teamId, because teamId = 2 for example cannot specify if the value should be aggregated into teamgroup 2 or teamgroup3...


I have a feeling that this has something to do with the design of the organization table, but I am unsure about what the actual problem is. Any hint, pointer or solution would be appreciated. If the question is unclear, please let me know and I will try to clarify.


Kind regards
scherand

View 3 Replies View Related

Analysis :: What Is The Meaning Of Granularity In SSAS

May 6, 2015

I would like to know the meaning of the granularity of a fact, with some examples.

View 3 Replies View Related

Security Need Thoughts: Ease Of Admin And Granularity

Oct 3, 2005

I like the new gig a lot. Real busy, smart folks and I have been in high demand since 5 minutes after my butt hit the chair. I already have code in production.

Anyhow, we have a security situation on the SQL Servers that I pointed out on my first day. So they want me to roll everything over to Windows Authentication and give the developers and report writers more restricted rights inside SQL Server. They have NT groups for different kinds of users and all of that jazz, and I laid out the typical stuff about using NT groups vs individual accounts and ease of admin vs granularity of control. Well, the boss came back and said he wants ease of admin and granularity of control over security. So, does anyone have any fresh thinking on turning my either/or into an AND?

View 5 Replies View Related

Analysis :: Count Function Taking More Time To Get Count From Parent Child Dimension?

May 25, 2015

Below is my data:

Country  ParentId  CustomerSkId  Sales
A        29097     29097         10
A        29465     29465         30
A        30492     30492         40
[code]....

Expected output:

Country  ParentCount
A        8
B        3
C        3

In my count function, my code looks like this:

Code Snippet
set buyerset as exists(dimcustomer.leval02.allmembers, custoertypeisRetailers, "Sales")
set saleset as (buyerset)
set custdimensionfilter as {custdimensionmemb1, custdimensionmemb2, custdimensionmemb3, custdimensionmemb4}
set finalset as exists(saleset, custdimensionfilter, "Sales")
set ProdIP as dimproduct.dimproduct.prod1
set Othersset as (cyears, ProdIP)
(exists(([FINALSET], Othersset, dimension2.dimension2.item3), [DimCustomerBuyer].[ParentPostalCode].currentmember, "factsales")).count

It takes 12 to 15 minutes to execute.

View 3 Replies View Related

GROUP BY Speed Up Query On Data That Is Already At Lowest Granularity?

Jul 9, 2015

I have just been running a query which I was planning to improve by removing a redundant GROUP BY (there are about 20 columns, and one of the returned columns is atomic, so the "group by" will never manage to group any of the data), but when I modified the query to remove the grouping, this actually seems to slow the query down, and I can't see why this would be the case.

Both queries return the same number of rows (69000), as I expected, and looking at the query plan, then they look nearly identical, other than at the start, there is a "stream aggregate" and "sort" being performed. The estimated data size is 64MB for the non-grouped query (runs in 6 min 41 secs), vs 53MB for the aggregated query (runs in 5 min 31 secs), and the estimated row size is smaller when aggregated.

Can anyone rationalise this? In my mind, the data being pulled is identical, plus there is extra computation for the unnecessary aggregation, so the aggregated query should be unquestionably slower, but the database engine has other ideas; it seems to work more quickly when it has to do unnecessary work :) Perhaps it is something to do with an inefficient query plan for the non-aggregated query? I would have thought looking at the actual execution plans would have made this apparent, but both plans look very similar.

Edit: more information - the "group by" query had two aggregations in it: a count of one of the columns, and an average of another. I changed the count to just "1", and for the average I changed it to the expression inside the average aggregate, since the aggregation effectively does not do anything.

View 2 Replies View Related

Analysis :: Linking Multiple Fact Table At Different Granularity To Same Dimensions

Jul 15, 2015

I am modelling two fact tables, Actuals and Budget, which are at different granularities: Actuals are at day, customer and product subcategory level; Budgets are at month, region and product category level.

Month, Region and Product Category are present in the Date, Region and Product Category dimensions respectively. I have only three dimensions: Customer, Product and Date. Linking those dimensions to the Actuals fact table is not an issue; what is the best way, and what options are there, to link the budget fact table to those three dimensions?

View 2 Replies View Related

Time Dimension With "None" Option

May 6, 2008

I have a fact table that has a couple of different time values. There is a createtime which is my primary time dimension, but there is also an expiration time. Not all records have an expiration time, but I'd like to have this as a dimension, such that the rows that don't have an expiration time could be grouped as none or unknown.

Is it possible to configure this in the ErrorConfiguration section for the dimension? I keep getting errors on processing for missing key values.

View 1 Replies View Related






