Analysis :: Data Masking For Dimension Attribute Based On User In SSAS 2014 Multidimensional?
Jun 15, 2015
I am trying to implement data masking based on the user login and I'm not sure why it isn't working. I have the dimensions DimBrand, DimProduct and DimUser, and a fact table FactProduct. I need to mask the BrandCode with 'XXXX': in the report all BrandCodes should appear, but some of the codes should be masked if the user does not belong to that group. In the cube I added these three dimensions and the fact table. I created a new dimension, DimBrandMask, where I separated out the codes, with a relationship to the actual DimBrand dimension; in the cube a referenced relationship is set up with the measure group. I created a role with read access.
In the Dimension Data tab of the role I put the MDX below into the allowed set.
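(The idea was to filter the brand-mask attribute by the current Windows login; the sketch below only shows the general shape, and the attribute names and the bridge measure [User Brand Count] are placeholders rather than my exact expression.)

// Hypothetical allowed set: keep only the brand codes the current Windows
// user is mapped to through a user/brand bridge measure group.
// [Measures].[User Brand Count] and the attribute names are assumptions.
NonEmpty(
    [DimBrandMask].[Brand Code].[Brand Code].Members,
    ( StrToMember("[DimUser].[Login].&[" + USERNAME + "]"),
      [Measures].[User Brand Count] )
)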
I have a cube with a fact table and 3 dimensions. One of the dimensions is a Type 2 dimension, and its surrogate key is stored in the fact table. If I query the database, the dimension attributes display correctly; however, the cube always displays the latest dimension attribute values and does not preserve history.
The measures are correct for the time period displayed, but the dimension attributes always show the latest values.
In my SSAS cube I have created a dynamic named set "top 10 e-learnings by language" which consists of a set of tuples. Each tuple has two attributes from the same base dimension "training": attribute 1 is "sprache" (language) and attribute 2 is "training text".
Normally a named set is automatically visible in an Excel pivot table under the dimension you used to create it, but it seems that named sets built from tuples with more than one attribute are placed in a separate "Sets" folder between the measures and dimensions. Additionally, in the SSAS cube browser this named set is not visible at all. Is there any way to tell the named set in which dimension it should appear, or any workaround?
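For reference, the definition is roughly of this shape; the measure name and the DISPLAY_FOLDER value below are assumptions, not my exact script:

CREATE DYNAMIC SET CURRENTCUBE.[Top 10 E-Learnings By Language] AS
    // Tuples of (Sprache, Training Text), ranked by an assumed usage measure.
    TopCount(
        NonEmpty(
            [Training].[Sprache].[Sprache].Members *
            [Training].[Training Text].[Training Text].Members,
            [Measures].[Completion Count] ),
        10,
        [Measures].[Completion Count] ),
    DISPLAY_FOLDER = 'Training';  // as far as I can tell this only names a folder;
                                  // it does not move the set under a dimension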
I have only 1 denormalized table that is being used in an SSAS Tabular model (which is about 3 GB). I am doing a POC to convert it into an SSAS Multidimensional model and explore it.
1st Question) I see that there is no primary key (unique key) in the current denormalized table (the Tabular model didn't require a primary key). But I think the key is mandatory for Multidimensional? Should I generate a composite key myself in a named query based on this table (in the DSV)?
2nd Question) What is the best way to design my Multidimensional Cube/Dimensions based on this single table?
Say I come up with a composite primary key called PK_ID. Should I be splitting up my facts/dimensions in my DSV using named queries similar to the ones below (using the same PK for my dimension tables as well)?
a) FactTable = Select PK_ID, Qty1, Cost1, Price1, Amount1 from Table1
b) StoreDim = Select PK_ID, StoreName, StoreDesc from Table1
c) ItemDim = Select PK_ID, ItemName, ItemDesc from Table1
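In other words, would something like the sketch below be the right direction - making each dimension named query a SELECT DISTINCT over its own attributes and joining back on the business columns? (Column names are taken from the examples above; everything else, including the assumption that StoreName/ItemName are unique enough to act as keys, is just for illustration.)

-- Dimension named queries: one row per distinct member.
StoreDim = SELECT DISTINCT StoreName, StoreDesc FROM Table1
ItemDim  = SELECT DISTINCT ItemName, ItemDesc FROM Table1

-- Fact named query: keep the dimension business columns so the DSV
-- relationships can join FactTable.StoreName to StoreDim.StoreName, etc.
FactTable = SELECT StoreName, ItemName, Qty1, Cost1, Price1, Amount1 FROM Table1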
I have 5 cubes, with a hierarchy defined for each cube. For example: a geography database with 5 continents as cubes and countries as dimensions. Now, when I apply security restrictions on my dimension, for example in the USA dimension, if I want to give access only to the Texas region, then I should be able to see only Texas cities. But I can see all the states under USA even after selecting only the Texas region under the Dimension Data tab inside the Roles section in SSMS. I have tried security at the database, cube and dimension levels, but it is still not working. Is that because of some wrong design of the cubes, or something related to the database design? I can't understand it, because apart from the roles, everything in my cubes and data warehouse is working fine without any defects in the data.
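To be concrete, the allowed set I would expect that selection to correspond to is something like the sketch below (attribute names are placeholders for my actual ones):

// Allowed member set entered on the State attribute in the role's Dimension Data tab.
{ [USA].[State].&[Texas] }

// Cities should then be restricted automatically, provided City has an
// attribute relationship up to State; if not, an explicit set is needed:
Exists( [USA].[City].[City].Members, { [USA].[State].&[Texas] } )

Could the issue be that the City and State attributes relate only to the dimension key (no City-to-State attribute relationship), or that the allowed set sits on a different attribute than the one actually being browsed? The Enable Visual Totals option also seems to control whether the USA totals shrink to just Texas.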
Many dimensions don't have unique members. Instead, the dimension source data has duplicates at the leaf level: it's left up to SSAS to aggregate up to the actual leaf level used in hierarchies.
In every cube I've worked on in the past, a dimension was clearly defined in the source data, with uniqueness already present there: we don't make a dimension out of duplicated, fact-like data. This kind of design seems as weird to me as an unnormalised SQL database.
Here's an example to illustrate what I mean; I'll use the AdventureWorks database.
We have a Geography dimension with a Geography hierarchy. Levels go like this from top to bottom:
Country
State-Province
City
Postal Code
The Geography dimension has a key attribute called Geography Key. It's there in the cube design as a dimension attribute, but it's not in any of the hierarchies, so I can't query it in MDX. But that's fine: it has the same cardinality as the lowest level (Postal Code), because the dimension has some kind of normal design.
In the cube I'm dealing with, it's all messed up. Using the AdventureWorks example above as a parallel, someone made a Geography dimension with source data keyed on [PostalCode, ExactAddress], but only wanted the dimension granularity to be PostalCode.
This makes it very hard to debug why the data in this dimension is incorrect. I can't match up the dimension members in the cube to the source data, because the dimension doesn't actually go down to the real leaf level!
So I have a dimension attribute called ExactAddressKey, but I can't query on it in MDX, because it's not part of any dimension hierarchy. Unfortunately changing any part of this cube design is not possible, so I can't even experiment with settings and see what happens.
How could I get to the leaf level of the imported data? Something like the following:
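(This is only a guess at the syntax; it assumes the ExactAddressKey attribute hierarchy were actually enabled, and the measure name is a placeholder.)

SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    NON EMPTY
        // Crossjoin the declared leaf with the key-like attribute below it.
        [Geography].[Postal Code].[Postal Code].Members *
        [Geography].[ExactAddressKey].[ExactAddressKey].Members
    ON ROWS
FROM [Adventure Works]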
Or does this kind of dimension design result in SSAS discarding all the data that's more granular than the most granular attribute defined in any hierarchy - so that the data actually isn't there to be queried?
I've been working with SSAS for a good few years now but I keep bumping into this problem - my users are trying to build a measure that is based on a calculated attribute and finding it difficult to work out how to write the MDX to do so. Intuitively, they thought a Calculated Member would work, but I don't think a Calculated Member is quite the same thing from my understanding.
So, here's the scenario.
We have a Product Dimension. We have a Measure that is the Number of days the Product took to make, e.g. 5 days. We also have a Product Count measure that counts the number of Products.
The user would like to write a calculated measure that works out the number of products that took <5 days, 5-10 days, 10-15 days etc. It would be easy to write a set of calculated measures for each of these bandings, but the user effectively wants a single dynamic attribute to use in the calculation, in order to automatically distribute these values across the columns in their pivot table.
Is this even possible? I was thinking I could build an attribute on the Product Dimension in the ETL to do this quite easily, but the user wants to be able to change the bandings on the fly by changing the MDX for the attribute, rather than go back to the developer every time.
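The best I can think of in pure MDX is a set of calculated measures that count products by filtering on the days-to-make measure; a rough sketch, assuming measures called [Days To Make] and a [Product].[Product] attribute (placeholder names):

CREATE MEMBER CURRENTCUBE.[Measures].[Products Under 5 Days] AS
    // Count the products whose days-to-make falls in the band.
    Count(
        Filter( [Product].[Product].[Product].Members,
                [Measures].[Days To Make] < 5 ) );

CREATE MEMBER CURRENTCUBE.[Measures].[Products 5 To 10 Days] AS
    Count(
        Filter( [Product].[Product].[Product].Members,
                [Measures].[Days To Make] >= 5 AND
                [Measures].[Days To Make] < 10 ) );

That still gives one calculated measure per band rather than a single dynamic attribute, though; as far as I know, an attribute users can pivot on has to physically exist on the dimension (built in the ETL or as a small utility dimension), and only the measure values can be redefined on the fly in MDX.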
I have 2 dimensions that pull their Facility Name from the same Location dimension. The business users want to change Facility Name in the Material Facilities dimension to “Material Facility Name”, but keep the Facilities dimension attribute the same. What is a good way to go about completing this task?
I have a business requirement to build a tabular data model where I need to hide other agents' information from a given agent, but I still need to show the overall sales of the given product.
For example: if an agent is in the APAC region, he should see APAC region sales and should also be able to see the sales of the same product in other regions, without seeing the region-specific breakdown.
For Agent "Tom" in the APAC region, the numbers will look like this:
APAC_Sales = 100,000
Other_Sales = 500,000
And if "John" is in the NA region, then the numbers will look like this for him:
NA_Sales = 200,000
Other_Sales = 400,000
I wanted to create roles based on the region, so that all the agents belonging to the "APAC" region will have the same view as Tom, and "NA" region agents will have John's view.
Hi all, I was wondering whether it is possible in SSAS 2005 for a calculated member to be based on an (integer) dimension attribute and another (integer) measure (let's say a multiplication operation)?
Is there a trick to doing so, other than stuffing the (integer) dimension attribute back into the fact table as a measure?
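One thing I was considering is reading the attribute value off the current member with MemberValue and multiplying it by the measure - would something like this sketch work? The names below are placeholders:

CREATE MEMBER CURRENTCUBE.[Measures].[Weighted Quantity] AS
    // Typed value of the integer attribute on the current member, times the measure.
    // Only meaningful at or below that attribute's grain; at the All level,
    // CurrentMember is the All member and the result won't be what you expect.
    [Product].[Units Per Case].CurrentMember.MemberValue
        * [Measures].[Order Quantity];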
How do I move dimension attributes from Currency to Geography and vice versa (i.e. I need to change their positions) in SQL Server 2012? I need Currency to be placed on top of Geography, or Geography below Currency.
I have a monthly time period dimension representing average number of students for each month. At the yearly aggregate level I don't want it to sum up the avg number of students from every month because that number is incorrect. I would like it to use the number of students from the most recent month as a roll up. Is that possible to configure in SSAS?
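From what I have read, the built-in route would be setting the measure's AggregateFunction to LastNonEmpty or LastChild (an Enterprise-edition feature, if I remember correctly); otherwise something like the scope assignment below might approximate it, assuming a [Date].[Calendar] hierarchy with Year and Month levels and a measure called [Avg Students] (placeholder names):

SCOPE ( [Measures].[Avg Students], [Date].[Calendar].[Year].Members );
    // Each year shows the value of its most recent month that has data,
    // instead of the sum of the twelve monthly averages.
    THIS = Aggregate(
               Tail(
                   NonEmpty(
                       Descendants( [Date].[Calendar].CurrentMember,
                                    [Date].[Calendar].[Month] ),
                       [Measures].[Avg Students] ),
                   1 ) );
END SCOPE;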
Is it possible to have Analysis Services in both modes, or are they mutually exclusive? I have a machine set up with Multidimensional AS and would like to know whether it's possible to add an instance running in Tabular mode.
For example: I have one dimension named "Name", and under it there are "FirstName" and "LastName" attributes. When I drag the "Name" dimension, "FirstName" is dragged by default, but I want "LastName" to be dragged instead.
I created a Date dimension using SSAS 2008, but when I process the dimension and go to see the result, it is not sorted. What I need is to have the result ordered by year, i.e. Calendrier 2020, Calendrier 2019, ...
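From what I have read, the fix is probably to give the year attribute a numeric key column and set its OrderBy property to Key; to check the ordering from the query side I was going to try something like this (the dimension and attribute names are guesses at my own):

SELECT
    {} ON COLUMNS,
    ORDER(
        [Date].[Calendrier].[Calendrier].Members,
        // Sort by member name descending, so "Calendrier 2020" comes first.
        [Date].[Calendrier].CurrentMember.Name,
        BDESC ) ON ROWS
FROM [MyCube]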
I'm facing an issue while processing OLAP. I have enabled BitLocker for drive encryption, and the OLAP service uses this drive for database storage. OLAP processing is executed through an SSIS package, and I'm getting the error below in the package. When debugging the script, it says the drive is encrypted using BitLocker.
My client requires TDE for all databases; for OLAP we decided to use BitLocker: [URL] ....
SQL Server is installed on the C drive, and the D drive is the storage location for the OLAP DB. When the D drive is locked, OLAP processing fails. When I tried to restart the SQL Server Analysis Services service in Services.msc, it would not start; the service restarted only once the D drive was unlocked. Is there any way we can process OLAP even when the drive is locked?
Error message is given below:
"The following system error occurred: This drive is locked by BitLocker Drive Encryption. You must unlock this drive from Control Panel. "
I have a dimension called Districts, and under it there are 2 attributes, i.e. District ID and Districts. When I drag the "Districts" dimension, District ID comes first in the OLAP grid, but I want Districts to come first. How can we sort the attributes (District ID and Districts) of a dimension?
As you can see, there are two company references in my fact table, and the schema is a snowflake. My customer requirements state that the contract amounts can be aggregated and filtered by ServiceProviderCompany (its city/profession) or by ClientCompany (its city/profession).
The first thing that came to my mind is to duplicate the whole dimension structure (one for service providers, one for clients), but I assume there must be a better way around this?
I have an SSIS and an SSAS project in the same solution. I need to debug the SSIS package even if there is an error or two in the SSAS project. Is there a way to ignore the SSAS project while I debug the SSIS package?
I have a new 2014 SSAS installation. During VS2013 Tabular project creation I get an error: "Cannot connect. Reason: The workspace database on server ***** is not running in tabular mode." I've changed msmdsrv.ini to <DeploymentMode>2</DeploymentMode>, but the server won't start.
I need to resolve this calculation, which I believe is something very common in SSAS environments.
Like many companies, my company has different ways of calculating Sales, and the two I want to focus on are Sales Gross and Sales Net.
At a high level, we calculate Sales Gross as Sales with returns, and Sales Net as Sales without returns.
We have an attribute called Order Type that has various types of orders a user can execute with my company. One of them is Returns. If you return something back to us, we record that as a return line on the sales table. With that, we can calculate that return, breaking data down by Order Type, such as:
Order Type           Line Total
Mail Orders          $    776,655.44
Internet Orders      $  2,211,334.00
Call Center Orders   $ 11,223,344.00
Credit Orders        $    (55,666.00)
Today, to calculate Sales Gross and Sales Net, we are creating two dimensions: DimSalesGross and DimSalesNet.
To calculate Sales Gross, we leave the data in its natural state, not making any changes to mappings.
To calculate Sales Net, we map Credit Orders to Call Center Orders at the ETL level, getting a Net value for sales (Orders - Returns); however, I doubt this is the correct way of doing it.
I would like to have a Line Total Net / Line Total Gross calculation, which would be based on the Order Type value.
Perhaps using a CASE statement in MDX? Is the above possible?
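What I have in mind is something along these lines as cube calculations - assuming the measure is [Line Total] and the Order Type attribute sits on a [Sales Order] dimension (placeholder names, and the Gross/Net definitions may need to be swapped depending on which one should exclude the credits):

CREATE MEMBER CURRENTCUBE.[Measures].[Line Total Gross] AS
    // Every order type except the credit/return lines.
    Aggregate(
        Except( [Sales Order].[Order Type].[Order Type].Members,
                { [Sales Order].[Order Type].&[Credit Orders] } ),
        [Measures].[Line Total] );

CREATE MEMBER CURRENTCUBE.[Measures].[Line Total Net] AS
    // All order types, so the negative credit lines offset the sales.
    ( [Sales Order].[Order Type].[All], [Measures].[Line Total] );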
I am pretty new to MDX and am having trouble getting what I need out of this MDX query. Some business rules:
Gross Amount applies to all clients, whether Type A or Type B; I always want to return Gross Amount. Some clients are Type A, some are Type B, some are both, and some are neither. There are Type A Net Amount and Type B Net Amount values for all clients, but I only want to display the Type A Net value if the client is a Type A client, only the Type B value if the client is Type B, both for both, and neither for neither. I would like to return blank/null, not $0.00, for the values that should not be displayed.
Here's the basic query.
SELECT
    { [Measures].[Gross Amount],
      [Measures].[Type A Net Amount],
      [Measures].[Type B Net Amount] } ON COLUMNS,
    NON EMPTY
    { [Dim Client].[Parent Client Code].[Parent Client Code] *
      [Dim Client].[Child Client Code].[Child Client Code] *
      [Dim Client].[Is Type A].CHILDREN *
      [Dim Client].[Is Type B].CHILDREN } ON ROWS
FROM ClientInfo
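One direction I was considering (just a sketch; the &[True] keys on the type flags are guesses) is to wrap the net amounts in IIF so they return NULL when the client isn't of that type:

WITH
    MEMBER [Measures].[Type A Net Display] AS
        // Show the Type A net only for Type A clients; otherwise leave it blank.
        IIF( [Dim Client].[Is Type A].CurrentMember IS
                 [Dim Client].[Is Type A].&[True],
             [Measures].[Type A Net Amount],
             NULL )
    MEMBER [Measures].[Type B Net Display] AS
        IIF( [Dim Client].[Is Type B].CurrentMember IS
                 [Dim Client].[Is Type B].&[True],
             [Measures].[Type B Net Amount],
             NULL )
SELECT
    { [Measures].[Gross Amount],
      [Measures].[Type A Net Display],
      [Measures].[Type B Net Display] } ON COLUMNS,
    NON EMPTY
    { [Dim Client].[Parent Client Code].[Parent Client Code] *
      [Dim Client].[Child Client Code].[Child Client Code] *
      [Dim Client].[Is Type A].CHILDREN *
      [Dim Client].[Is Type B].CHILDREN } ON ROWS
FROM ClientInfo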
Are measure expressions supported in SSAS 2014 Standard Edition? In SSAS 2005, I remember that measure expressions were not supported in Standard Edition, only Enterprise Edition.
I have been tasked with processing a large tabular cube using SQL AS 2014 (with the latest CUs). The three fact tables, with 1.2 billion rows in each table, have been divided into 30 vertical partitions to aid parallel processing, so around 40 million rows per partition.
Using SQL Profiler to monitor the row counts (IntegerData) of records processed, throughput seems to max out around 2 million rows per minute, then tapers down to about 200k/minute.
The processing is taking over 14 hours and I need to get it lower if possible. The server has 48 cores (2.66 GHz) and over 1 TB of RAM installed, but I never see CPU exceed 20%, with a maximum of 206 threads running on the msmdsrv.exe instance.
Available RAM is always at least 30% (or 300GB).
I have increased the VertiPaq memory MIN/MAX limits to 60%/80%.
I have increased the OLAP processing thread pool, with Min set to 500 and Max set to 1000.
The connection properties have been increased to allow 100 connections; the majority of the processing consumes about 92 connections for the 90 large partition views for the facts.
What can be done to increase the server resource utilization and decrease processing times?
I want to create a new measure that behaves based on the dimension dropped in. For example, if I add only the Employee dimension, it should aggregate data from # Calls Count, but if I add the Product dimension, it should display # Product Calls at the product level and # Calls Count at the employee level, as shown in the screenshot.
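Roughly, the behaviour I'm after is something like the following calculation, where the base measures [# Calls Count] and [# Product Calls] and the [Product].[Product] attribute stand in for the real names:

CREATE MEMBER CURRENTCUBE.[Measures].[Calls] AS
    IIF(
        // If the query is sliced below the All member of Product,
        // show the product-level measure; otherwise the employee-level one.
        [Product].[Product].CurrentMember.Level.Ordinal > 0,
        [Measures].[# Product Calls],
        [Measures].[# Calls Count]
    );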
I need to limit the sessions accessing an SSAS cube to one per user. For example, if a customer uses Excel to check a cube, two or more users cannot use the same session or account.
All I get back is an error message of "Analysis Services Processing Task Error: A Connection cannot be made. Ensure the Server is running". The server is running; I can process the cube by connecting to the AS instance and right-clicking to process it.
I can also process the cube by running the SSIS task inside SSDT. Only when I deploy the SSIS package (in project deployment mode) and then execute it do I get the error message.
SQL Server, SSAS, and SSIS processes are all running under the same account. SSAS is on a separate server from SSIS and SQL if that matters.