I have a wide fact table that I'm feeding to an SSAS cube. I was advised that splitting the measure group into two will improve performance when querying the cube.
I cannot find any documentation that supports this; in fact, I get a blue curved line suggesting that I merge the measure groups, since they have the same dimensionality and granularity.
I guess the best practice is what the blue line states, but without knowing the internals of SSAS I can understand that a smaller measure group may be easier to handle, or to build more specific aggregations for.
When I execute the MDX query below, it returns the correct result without any issue:
SELECT NON EMPTY { [Measures].[Daystorecieve] , [Measures].[PO Recieved], [Measures].[Post Award Milestone PO Analysis Count], [Measures].[Powith80pct Received]
[Code] ....
After the successful execution of the above query, I am trying to filter on [Measures].[Daystorecieve] for values not equal to 0. With a minimal number of dimensions selected, my query executes fine.
Please find the query below:
SELECT NON EMPTY { [Measures].[Daystorecieve] , [Measures].[PO Recieved], [Measures].[Post Award Milestone PO Analysis Count], [Measures].[Powith80pct Received] } ON COLUMNS,
[Code] ....
But when I execute it with the full set of dimensions, it runs for a long time and ends with an out-of-memory exception. Is there any way to apply a where-style condition on my measure, like where [Measures].[Daystorecieve] <> 0?
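Something along these lines is what I have in mind (untested); [SomeDim].[SomeAttr] and [YourCube] are only placeholders for the real row hierarchy and cube name:

SELECT NON EMPTY { [Measures].[Daystorecieve], [Measures].[PO Recieved],
                   [Measures].[Post Award Milestone PO Analysis Count],
                   [Measures].[Powith80pct Received] } ON COLUMNS,
       NON EMPTY FILTER(
           [SomeDim].[SomeAttr].[SomeAttr].MEMBERS,   -- placeholder row set
           [Measures].[Daystorecieve] <> 0            -- keep only rows where the measure is non-zero
       ) ON ROWS
FROM [YourCube]

I guess that with many dimensions crossjoined on rows, the crossjoin would also need to be wrapped in NonEmpty(..., [Measures].[Daystorecieve]) before the FILTER so the set is reduced first; otherwise the filter alone probably won't avoid the memory problem.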
In my cube there are two measures which are used in different calculations. Now I need to show in a report whether there are any months in the data when both, or even just one, of the measures was not updated (value = 0 or NULL).
How should I create the calculated measure for that?
I have tried to plan this in Management Studio but I keep running into a loop of errors.
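Something like the sketch below is what I'm aiming for; [Measures].[Measure A], [Measures].[Measure B], the [Date].[Month] hierarchy and [YourCube] are placeholders for my real objects:

WITH MEMBER [Measures].[Update Missing] AS
    IIF(
        ISEMPTY([Measures].[Measure A]) OR [Measures].[Measure A] = 0 OR
        ISEMPTY([Measures].[Measure B]) OR [Measures].[Measure B] = 0,
        1,      -- at least one of the two measures is 0 or NULL for this month
        NULL    -- both measures have data, so NON EMPTY hides the month
    )
SELECT { [Measures].[Update Missing] } ON COLUMNS,
       NON EMPTY [Date].[Month].[Month].MEMBERS ON ROWS
FROM [YourCube]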
I'm building a system that imports data from several sources, Excel files, text files, Access databases, etc., using DTS. The entire process revolves around MS SQL Server, by the way.
I figured I would create denormalized tables that mirror the Excel and flat files, for example, in structure, import data to those, clean up and remove duplicates there, then break those out into my normalized table structure later.
Now I've finished the importing part (though this is going to happen once a week) and I'm onto breaking up the denormalized tables.
I'm hesitating because I'm not sure I've made the best decisions in terms of process, etc.
I've decided to use cursors to loop over the denormalized tables and use batch insert statements to push data out to the appropriate tables.
Any comments? Suggestions? All is welcome.
I'm specifically interested in hearing back on the way I've set up the intermediate, denormalized tables and how I'm breaking them up using cursors (step 2 of the process below). Still, all comments are welcome. As are suggestions for further reading.
Thanks again...
simplified example (my denormalized tables are 20 - 30 columns wide)
denormalized table:
===================
name, address, city, state, cellphone, homephone
I'm breaking up the denormalized tables like this (*UNTESTED*):
===============================================================
DECLARE @name varchar(100), @address varchar(200), @city varchar(100), @state varchar(50),
        @cellphone varchar(30), @homephone varchar(30), @personID int  -- sizes/types should match the normalized tables
DECLARE myCursor CURSOR FAST_FORWARD FOR
    SELECT name, address, city, state, cellphone, homephone FROM _DNT_myWideTable
OPEN myCursor
-- grab the first row from the wide table
FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone
WHILE @@FETCH_STATUS = 0
BEGIN
    -- create the person first and get the ID with @@IDENTITY
    INSERT INTO tblPerson (name) VALUES (@name)
    SET @personID = @@IDENTITY
    -- use that ID to coordinate inserts across the other tables
    INSERT INTO tblAddress (FK_person, address, city, state, addressType) VALUES (@personID, @address, @city, @state, 'HOME')
    INSERT INTO tblContact (FK_person, data, contactType) VALUES (@personID, @cellphone, 'CELLPHONE')
    INSERT INTO tblContact (FK_person, data, contactType) VALUES (@personID, @homephone, 'HOMEPHONE')
    -- grab the next row from the wide table
    FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone
END
CLOSE myCursor
DEALLOCATE myCursor
The data attached below is from a fact table. When this data is browsed in the cube, the end user is only interested in the value of Measure 1 when it is not equal to zero. Measure 1 is a base measure. How can I suppress the value 0 for Measure 1 in the cube?
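One idea I'm considering is a scoped assignment in the cube's MDX script (untested, and "Measure 1" stands for the actual measure name); turning the zeros into NULL should let NON EMPTY / "hide empty cells" suppress them when browsing:

SCOPE ([Measures].[Measure 1]);
    THIS = IIF ([Measures].[Measure 1] = 0, NULL, [Measures].[Measure 1]);
END SCOPE;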
I'm trying to create a percentile rank function based on the standard WIKI version:
I've seen Brian Knight's article here, but that only deals with percentile.
Where I'm struggling is getting the count of members in a set, using a measure evaluated in the current context on the same hierarchy, as the filter expression. The comparative set is the members belonging to the same geographical location, so I'm associating them through another attribute.
So, the calculated member is as below:
MEMBER [Measures].[RegionPercentileCount] AS
    Count(
        Filter(
            NonEmpty(
                Descendants(
                    Ancestor([Supplier].[NameMap].CurrentMember, [Supplier].[NameMap].[Region]),
                    [Supplier].[NameMap].[Supplier Id]
                ),
                [Measures].[ActiveMeasure]
            ),
            [Measures].[ActiveMeasure] < ([Supplier].[NameMap].CurrentMember, [Measures].[ActiveMeasure])
        )
    )
Using the same measure and context hierarchy, the two sides are always equal, and therefore the count is always zero. It's almost as if I need a nested context for the FILTER which allows me to enumerate the set on the same hierarchy whilst maintaining the external reference.
I'm thinking that perhaps I'm going to have to create another hierarchy and use that as the filter set and reference through StrToMember or similar.
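Very roughly, that second-hierarchy idea would look something like this (untested); it assumes a duplicate hierarchy [Supplier].[NameMap2] built on duplicate attributes (e.g. a second [Supplier Id 2] key), so that iterating it does not overwrite the [NameMap] coordinate and the outer reference survives inside the FILTER:

MEMBER [Measures].[RegionPercentileCount] AS
    Count(
        Filter(
            NonEmpty(
                Descendants(
                    Ancestor(
                        LinkMember([Supplier].[NameMap].CurrentMember, [Supplier].[NameMap2]),
                        [Supplier].[NameMap2].[Region]
                    ),
                    [Supplier].[NameMap2].[Supplier Id 2]
                ),
                [Measures].[ActiveMeasure]
            ),
            -- the iteration happens on [NameMap2], so [NameMap].CurrentMember keeps the outer context
            ([Supplier].[NameMap2].CurrentMember, [Measures].[ActiveMeasure])
                < ([Supplier].[NameMap].CurrentMember, [Measures].[ActiveMeasure])
        )
    )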
I am trying to count a set as a calculated measure. When this set is called directly on the rows axis it returns fast, but when I try to count the set as a calculated measure (so I can slice with another dimension) the query keeps running forever.
The queries are below:
select {} on 0,
nonempty (
    { ([Transaction].[RPC Count].&[1], [Transaction].[Account ID].[Account ID]) },
    { ([Account].[PAYMENTSTATUS].&[0], [Account].[Account ID].[Account ID]) }
) on 1
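For the calculated-measure version I'm trying something roughly like this; [Measures].[Transaction Count] and [YourCube] are stand-ins, and I read that giving NonEmpty an explicit base measure helps it stay in block mode, which is why it is in the second set:

WITH MEMBER [Measures].[RPC Account Count] AS
    Count(
        NonEmpty(
            { ([Transaction].[RPC Count].&[1], [Transaction].[Account ID].[Account ID]) },
            { ([Account].[PAYMENTSTATUS].&[0], [Measures].[Transaction Count]) }   -- explicit base measure (placeholder name)
        )
    )
SELECT [Measures].[RPC Account Count] ON 0
FROM [YourCube]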
Is it possible to filter out a measure only at the intersection of two dimension members? I have a date dimension, a Hospital dimension and a wait time measure.
For Example, is it possible to filter out Wait time for Bayside Hospital for the Month of June 2015?
I want Wait time to continue to be displayed for all other months and roll up into the totals without the filtered value.
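What I have in mind is something like a scoped assignment in the cube's calculation script; the hierarchy names and member keys below are made up, and it assumes the month's leaves match the measure group's grain, since nulling the leaf cells is what keeps the value out of the roll-ups:

SCOPE (
    [Measures].[Wait Time],
    [Hospital].[Hospital].&[Bayside],
    DESCENDANTS([Date].[Calendar].[Month].&[201506], , LEAVES)   -- leaf dates of June 2015
);
    THIS = NULL;
END SCOPE;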
I'm trying to show measures from 2 measures groups in a drillthrough. Obviously, it's not possible with a standard drillthrough action, but I still hope that I can somehow achieve this with the ASSP GetCustomDrillthroughMDX function.
Speaking in AdventureWorks2008R2 terms: Imagine I have a pivot table with Product Categories in filter (say, filtered on Gloves) and "Internet Sales Amount" as measure. From context menu in Excel I can call the drillthrough action which shows me the individual sales records. I would like to show in drillthrough additionally "End of Day Rate" measure from "Exchange Rates" measure group.
One option would be to join FactInternetSales with FactCurrencyRate and make EndOfDayRate a physical measure in the "Internet Sales" measure group. This is a pretty huge overhead for my scenario and I'd like to avoid this.
Another one would be to call something completely external for the drillthrough (for example, an SSRS report).
I have an issue related to SSAS security. We have an SSAS multidimensional cube which needs 3 types of security:
- Access to the entire cube => OK, based upon a role
- Restricted access to one department (= dimension) => OK, based upon a role
- Access to the entire cube, but with dynamic security on 2 measures
Let's say, we have 2 departments (food and non-food). Users within food are allowed to see sales and pieces from the food department, but not from the non-food department.
It is not an option to restrict access to the non-food department, because there are other measures which they do have access to. I tried cell security, but this is very slow and generates multiple empty rows on my selections.
We have an existing cube that we need to update with new measures. The measure groups were added to the cube as linked objects, so when we update the measure group it throws the following exception: "Errors in the OLAP storage engine: The metadata for the statically linked measure group, with the name of 'SalesActual', cannot be verified against the source object."
I am using the following approach to enable dynamic date calculations: [URL] ...
However, some measures I work with are percentages (not numeric values) and demand a different formula.
See the example snippet below (first IIF statement):

/* Calendar */
[Date Calculation].[Calculation].[Calendar Actual versus PY WK %] =
    IIF (
        [measures].Name = "HSP2 %",   -- < this is where I am trying to check if the measure is a percentage type
        0,                            -- < when this is the case then do something different
[Code] ....
I am trying to access the metadata. I am aware of $system.mdschema_measures, but I am unable to utilize this for my case.
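For reference, this is roughly the shape I'm trying to get to in the script; as far as I can tell [measures].Name returns the name of the Measures hierarchy itself, so the check would need to be on [Measures].CurrentMember.Name instead (the 0 and the else branch are still placeholders):

/* Calendar */
[Date Calculation].[Calculation].[Calendar Actual versus PY WK %] =
    IIF(
        [Measures].CurrentMember.Name = "HSP2 %",   -- is the current measure the percentage-type one?
        0,                                          -- placeholder: percentage formula goes here
        [Measures].CurrentMember                    -- placeholder: existing numeric formula goes here
    );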
I can pull a last processed date for the entire model: URL... That works great for showing the last time anything in the entire model was processed, but it is not table specific: updating / processing table A also changes the timestamp on tables B and C. However, if I want to look at just a specific table in the model (build a column for each fact table and a measure to go with it), I find that doing any process operation updates all the =NOW() columns in all the fact tables. If I have a model with 3 fact tables and I do a Process Table using ProcessFull on one of them, all three tables' calculated columns "LastProcessed" = NOW() are updated. URL...
Is there a way to set up a measure in the model, on a per-table level, to show the last processed date for each individual table? LastDataRefresh := MAX ( 'TableA'[LastProcessed] )
For example, our sales table may update three times a day, whereas our warehouse inventory table is only updated nightly. I wanted to let end users see, by adding a measure for each table, when the last process event was for a given table in the model.
One of my models has order data, cost per order/invoice ID and then dimensions on Fiscal Year, category, etc...the usual.
A user wanted to search it for an exact order amount. (They knew, for example, that one of our accounts was not balancing by a single order worth $746.13 and assumed it must be an order that was placed but never marked shipped that slipped through the cracks.)
Now, in the model I have "order amount" as a field and then a measure that sums that.
I could expose that "order amount" field as a label and let them filter on it in Excel (and that works).
However, I haven't had any luck filtering on the actual measure "Total Order Amount". Such as OrderID-> View Filter -> "Total Order Amount" equals 746.13.
I assume this is due to a few things:
The measure calculates at different levels, so filtering on a measure is difficult: you would have to place all the "slicers" and set them first before the measure would "exist" at a level where it could be $746.13. Orders by year would have $746.13 as part of its year sum, but it wouldn't exist as a stand-alone line item; orders for year 2015 might be 2 million.
Orders by category might exist at 500,000, 8,000, 15,146.36, etc... but not $746.13.
So I would need OrderID on there as a column, so the measure could return the value $746.13 for one row, for it to match the filter?
Basically: 1. Why can't it really filter on a measure? 2. Is there a better way to accomplish this other than exposing the actual "order amount" column in the fact table, as it feels like that could cause all kinds of confusion if other users try to slice/filter on it without realizing exactly what it is meant to be?
Developing a Retail cube using SSAS 2012. One of the dimensions is DimCustomer with SCD Type II. Each customer can be a member or a non-member over a period of time. We have StartDt and EndDt to reflect the membership status.
e.g.:
Joe is a member between 06-01-2014 and 31-08-2014
Joe is a non-member between 09-01-2014 and 01-31-2015
Joe is a member between 02-01-2015 and 04-30-2015
Joe is a non-member between 05-01-2015 and 12-31-9999
Without adding a fact row for Joe for each day to reflect the membership status, I want to provide the ability to measure an "Active Customers Count" on a given date. There are 2 million customers in the DimCustomer table.
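The direction I'm considering is the classic "events in progress" pattern; all object names below are made up. It assumes a factless measure group built from the membership rows in DimCustomer, related to two role-playing date dimensions [Start Date] and [End Date] and carrying a [Membership Count] measure, so the active count on a date is "started on or before" minus "ended before":

CREATE MEMBER CURRENTCUBE.[Measures].[Active Customers Count] AS
    -- memberships that started on or before the selected date...
    Sum(
        NULL : LinkMember([Date].[Date].CurrentMember, [Start Date].[Date]),
        [Measures].[Membership Count]
    )
    -
    -- ...minus memberships that had already ended before the selected date
    Sum(
        NULL : LinkMember([Date].[Date].CurrentMember, [End Date].[Date]).PrevMember,
        [Measures].[Membership Count]
    );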
The user wants to be able, using Excel, to apply a filter to all measures in every measure group. I thought I could create a dimension with a single level and two members, let's say "on" and "off", and depending on the selected member, use an IIF statement to decide which formula applies to the calculated measures.
I have serious doubts about the performance of this technique because I am thinking as a .NET developer and not as a cube developer. Maybe it is better to resolve it by scoping the measures, but I cannot figure it out.
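A rough sketch of the scoped version I'm picturing (untested): it assumes a utility dimension [Filter Toggle] with a single [Toggle] attribute containing "On" and "Off" members, unrelated to any measure group and with "Off" as the default member; the condition inside the IIF is only an illustration, not the real filter logic:

SCOPE ([Measures].Members, [Filter Toggle].[Toggle].&[On]);
    THIS = IIF(
        [Measures].CurrentMember = 0,   -- illustrative condition only
        NULL,
        [Measures].CurrentMember
    );
END SCOPE;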
I want to create a new measure that will behave based on the dimensions dropped. For example, if I add only the employee dimension it should aggregate data from #Calls Count, but if I add the product dimension it should display #Product Calls at the product level and #Calls Count at the employee level, as shown in the screenshot.
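Something like the following is what I'm picturing; the [Product].[Product] hierarchy name and the two base measure names are assumptions on my part:

CREATE MEMBER CURRENTCUBE.[Measures].[Calls] AS
    IIF(
        [Product].[Product].CurrentMember IS [Product].[Product].[All],   -- no specific product in context
        [Measures].[Calls Count],      -- employee level: show the overall call count
        [Measures].[Product Calls]     -- product level: show the product call count
    );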
I created an SSAS cube in VS 2008 and have been able to deploy it successfully to the server. While creating the cube I was able to browse the dimensions and all underlying tables, just to make sure they had data. After deploying successfully, when I drag and drop any measure group into the browser it does not display anything.
The only thing I did differently from the straightforward cube-building process was that, when I created those measure groups, the partitions that were created by default were giving me some unknown errors, so I had to delete them in order for the cube to process successfully.
Did that make any difference? I thought partitions were for improving query performance and had nothing to do with cube processing errors.
I have a Fact table that contains several degenerate string values that I have pulled into a Fact Dimension.
When I browse the cube and cut one of the measures by an attribute from the Fact Dimension, I am getting incorrect data.
In other words, when I query the fact table directly via SQL and apply the same filters, I see the data I am expecting to see. But cube browse with same filters yields different results.
How can this happen, since the fact dimension has a 1:1 relationship with the fact table?
I do have the Dimension Usage configured properly.
Is this an aggregation thing? Attribute key thing? What am I missing?
Are Measure Expressions supported in SSAS 2014 Standard Edition? In SSAS 2005, I remember that Measure Expressions were not supported in Standard Edition, only Enterprise Edition.
In the following MDX code, I want to get the aggregate of the measure only for members that also exist at the specified last period (e.g. 01/06/2015). I tried EXISTING and EXISTS, but without any luck.

WITH
MEMBER A AS (b) + (C)
MEMBER [Measures].[Aggregate] AS
    Aggregate(
        [DAYTIME].[Month].&[2013-01-01T00:00:00] : [DAYTIME].[Month].&[2015-06-01T00:00:00],
        [Measures].[D]
    )
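What I'm after is probably closer to restricting the member set first and then aggregating, something like the sketch below; [SomeDim].[SomeAttr] is a placeholder for the hierarchy whose members I'm aggregating over:

WITH MEMBER [Measures].[Aggregate Existing Only] AS
    Sum(
        NonEmpty(
            [SomeDim].[SomeAttr].[SomeAttr].Members,                        -- placeholder member set
            ( [DAYTIME].[Month].&[2015-06-01T00:00:00], [Measures].[D] )    -- must have data in the last month
        ),
        Aggregate(
            [DAYTIME].[Month].&[2013-01-01T00:00:00] : [DAYTIME].[Month].&[2015-06-01T00:00:00],
            [Measures].[D]
        )
    )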