I've read that there is a workaround for this issue by customizing error handling at processing time, but I'm not happy having to ignore errors; besides, the cube processing is scheduled, so ignoring errors is not a choice, or at least not a good one.
This is part of my cube where the error is thrown.
DimTime
  PK (int)
  MyMonth (int, e.g. 201501, 201502, 201503, ...)
  other columns

FactBudget
  PK (int)
  Month (int, e.g. 201501, 201502, 201503, ...)
I set the relationship between DimTime and FactBudget using DimTime.MyMonth as the primary key and FactBudget.Month as the foreign key. The cube built without problems; when processing, the error "The attribute key cannot be found when processing" was thrown.
It was thrown because FactBudget has some Month values (201510, 201511, 201512, for example) which DimTime doesn't, so referential integrity is broken.
My actual question: is there a way or pattern to redesign this DWH so that it deploys and processes correctly?
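For what it's worth, one common pattern is to guarantee that the time dimension covers every key the fact table can produce: either generate DimTime rows through the end of each budget year in the ETL, or detect and add missing months before the scheduled process runs. A minimal T-SQL sketch of the check (table and column names taken from the post, dbo schema assumed):

-- fact months that have no matching time dimension member
SELECT DISTINCT f.[Month]
FROM dbo.FactBudget AS f
LEFT JOIN dbo.DimTime AS t
  ON t.MyMonth = f.[Month]
WHERE t.MyMonth IS NULL;

Running this (and inserting stub members for any months it returns) as the last ETL step keeps referential integrity intact, so the scheduled cube process no longer depends on ignoring errors.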
I have designed a cube. It has two fact tables and some dimensions, and the fact tables are related many-to-many.
For example:

FactMain: DataKey (PK), StartDateKey, PostCodeKey, TotalCost
FactBridge: DataKey (FK), ProductKey (FK), Position (primary key on DataKey + ProductKey + Position)
DimProduct: ProductKey (PK), ProductCode
The cube builds successfully and processes successfully. When I try to process the cube from an Agent job, I get the error "Attribute key not found: tablename, value...". I added a job step that runs an Analysis Services Command; I took the command from the cube process script (generated by processing the cube manually and scripting it), and I used ProcessAffectedObjects = "true" in the script. When I checked the tables, the key does exist. Why am I getting this error?
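When the key looks present but processing still complains, two things are worth checking, offered as hedged guesses. First, timing: rows loaded into FactBridge between the dimension processing and the measure group processing can create transient orphans that are gone by the time you look. A T-SQL sketch of the orphan check, using the names from the post (dbo schema assumed), to run at the moment the job fails:

-- bridge keys with no matching dimension member
SELECT DISTINCT b.ProductKey
FROM dbo.FactBridge AS b
LEFT JOIN dbo.DimProduct AS p
  ON p.ProductKey = b.ProductKey
WHERE p.ProductKey IS NULL;

Second, environment and ordering: confirm the Agent job's connection really points at the same SSAS database and relational source as your manual run, and note that ProcessAffectedObjects does not by itself pick up new dimension members, so explicitly processing the dimensions before the measure groups in the job is the safer ordering.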
Can somebody help me with the following error while deploying my cube:
The attribute key cannot be found : dbo_fact_table, Column: datetime Value 25/10/1901 4:18:00 pm...
The year 1901 is included in the time period of the time dimension: the calendar covers the dates 1/1/1753 - 31/12/2007 (time binding). Why does this referential integrity error occur?
Please help me, because it is urgent and I cannot find a solution.
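One hedged reading of the error itself: the dimension is bound at day granularity (time binding), but the fact column carries a time-of-day component (4:18:00 pm), so the key built for the fact row never matches a calendar day member. Stripping the time portion in the data source view named query (or in the loading query) is the usual cure; the column and table names below are taken from the error message:

-- SQL Server 2008 and later
SELECT CAST(f.[datetime] AS date) AS DateKey
FROM dbo.fact_table AS f;

-- SQL Server 2005 and earlier, where the date type does not exist
SELECT DATEADD(dd, DATEDIFF(dd, 0, f.[datetime]), 0) AS DateKey
FROM dbo.fact_table AS f;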
I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS. In order to improve the processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it otherwise. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already deployed cube.
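A hedged explanation that fits these symptoms: a full process of a dimension unprocesses every cube and partition that depends on it, so until the cube itself is reprocessed it cannot be browsed; redeploying from Visual Studio only appears to fix it because deployment kicks off processing again. One way to keep the nightly job self-contained is to process the dimensions and the cube in a single batch, for example via an Analysis Services Execute DDL Task. A minimal XMLA sketch, with placeholder database, dimension, and cube IDs:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <DimensionID>Dim Date</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <Process>
      <Object>
        <DatabaseID>MyOlapDb</DatabaseID>
        <CubeID>MyCube</CubeID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>

The same effect is available in the Processing Task itself by adding both the dimensions and the cube to the one batch.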
I know this is a common OLAP error, but in fact I'm having trouble with it while trying to process a data mining structure. I'm currently working on a website that gets data from its users and analyzes it using SSAS. The thing is, each time we add a new "analysis criterion" (sorry, I'm trying to translate our French BI vocabulary into English...), we have to build a new mining structure, which needs data about users who have actually answered the question associated with this criterion. Sometimes there are thousands, and sometimes only dozens, which is the case for the structure I'm having trouble with.
I got only two hundred tuples in the learning set, so lots of the common criteria weren't filled. I removed those using a stored procedure before feeding the structure, so that I got no column with only null values.
Of course, I know that 200 learning cases is really not enough to build an accurate model, but the purpose was just a proof of concept for machine-driven mining structure building, and that was supposed to work even with so few cases. When I process the mining structure, it fires the following (translated from French):
Errors in the OLAP storage engine: The attribute key cannot be found: Table: _x0032_0_EtudeIphone_Apprentissage, Column: EtudeIphone, Value: le nouvel iPhone (téléphone tactile et musical d'Apple). Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key cannot be found. Attribute Id of the dimension: 20_EtudeIPhone ~MC, Id of the database: ClassificationVDCE, Record: 2.
Why? Too few cases? I have structures based on the same template but associated with other criteria and they work perfectly.
I'm ready to answer any question, and give any detail. Thanks in advance.
Many dimensions don't have unique members. Instead, the dimension source data has duplicates at the leaf level: it's left up to SSAS to aggregate up to the actual leaf level used in hierarchies.
Every cube I've worked on in the past had its dimensions clearly defined in the source data, with uniqueness already present there: we don't make a dimension out of duplicated, fact-like data. This kind of design seems as weird to me as an unnormalised SQL database.
Here's an example to illustrate what I mean; I'll use the AdventureWorks database.
We have a Geography dimension with a Geography hierarchy. Levels go like this from top to bottom:
Country
State-Province
City
Postcode
The Geography dimension has a key attribute called Geography Key. It's there in the cube design as a dimension attribute, but it's not in any of the hierarchies, so I can't query it in MDX. But that's fine: it has the same cardinality as the lowest level (Postcode), because the dimension has a more or less normal design.
In the cube I'm dealing with, it's all messed up. Using the AdventureWorks example above as a parallel, someone made a Geography dimension with source data keyed on [PostalCode, ExactAddress], but only wanted the dimension granularity to be PostalCode.
This makes it very hard to debug why the data in this dimension is incorrect. I can't match up the dimension members in the cube to the source data, because the dimension doesn't actually go down to the real leaf level!
So I have a dimension attribute called ExactAddressKey, but I can't query on it in MDX, because it's not part of any dimension hierarchy. Unfortunately changing any part of this cube design is not possible, so I can't even experiment with settings and see what happens.
How could I get to the leaf level of the imported data? Something like:
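For illustration, a hedged guess at what such a query could look like. Every dimension attribute whose AttributeHierarchyEnabled property is true exposes a one-level attribute hierarchy of the same name, which MDX can query even when it belongs to no user hierarchy, so if ExactAddressKey's hierarchy were enabled, something like this would reach it ([Internet Sales Amount] is just an assumed measure):

SELECT
  [Measures].[Internet Sales Amount] ON 0,
  [Geography].[ExactAddressKey].[ExactAddressKey].Members ON 1
FROM [Adventure Works]

If the attribute hierarchy is disabled, that member space simply does not exist in the cube.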
Or does this kind of dimension design result in SSAS discarding all the data that's more granular than the most granular attribute defined in any hierarchy - so that the data actually isn't there to be queried?
I have a date hierarchy with Year - Qtr - Month - Date. In the query below, if I have a date, month, or year on rows, I want to derive the top member in the hierarchy, that is, the year. If I have a date on rows, e.g.
[Date].[Calendar Hierarchy].[Date].&[20150106], I should get [Date].[Calendar Hierarchy].[Year].&[2015].
How to find the parent?
WITH MEMBER [dt7] AS
  DrillupMember(
    [Date].[Calendar Hierarchy].CurrentMember,
    [Date].[Calendar Hierarchy].[Calendar Year]
  )
SELECT
  {[dt7]} ON 0,
  ([Date].[Calendar Hierarchy].[Month].&[201501]) ON 1
FROM Cube
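A hedged suggestion: what is wanted here is really the ancestor at the year level, and the Ancestor function walks from the current member up to a named level regardless of whether a date, month, or year is on rows. A sketch reusing the hierarchy and level names from the query above:

WITH MEMBER [Measures].[Year Member] AS
  Ancestor(
    [Date].[Calendar Hierarchy].CurrentMember,
    [Date].[Calendar Hierarchy].[Calendar Year]
  ).UniqueName
SELECT
  {[Measures].[Year Member]} ON 0,
  ([Date].[Calendar Hierarchy].[Month].&[201501]) ON 1
FROM Cube

For [Month].&[201501] this returns [Date].[Calendar Hierarchy].[Calendar Year].&[2015], and the same expression works with a date or quarter on rows.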
For example, I have a Date dimension with attributes like Current Day and Current Month. If I run the following, I get exactly what I expect: a list of the days in the current month.
SELECT
  NULL ON 0,
  [Case - Date - PSPT Entry].[Year - Quarter - Month].[Date] ON 1
FROM [Customer Support]
WHERE [Case - Date - PSPT Entry].[Current Month].&[True]
When I run the following, I'm getting a list of the days in the current month *plus the first couple days of the next month*.

WITH SET [Days of Interest] AS
  Filter(
    [Case - Date - PSPT Entry].[Year - Quarter - Month].[Date],
    [Case - Date - PSPT Entry].[Current Month].&[True]
  )
SELECT
  NULL ON 0,
  [Days of Interest] ON 1
FROM [Customer Support]
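A possible explanation, offered as a hedged guess: the WHERE clause applies autoexists between attribute hierarchies of the same dimension, keeping exactly the dates whose Current Month attribute is True, while Filter evaluates its second argument as a cell value for each date, which is a different test and can let extra members through. A sketch that reproduces the WHERE semantics inside a set, using Exists with the names from the queries above:

WITH SET [Days of Interest] AS
  Exists(
    [Case - Date - PSPT Entry].[Year - Quarter - Month].[Date],
    [Case - Date - PSPT Entry].[Current Month].&[True]
  )
SELECT
  NULL ON 0,
  [Days of Interest] ON 1
FROM [Customer Support]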
I have a Tabular model with a situation where I want three alternate attribute hierarchies in one dimension.
Dimension FruitAndVegetables (with 4 columns: Id, Name, Fruit, Vegetable)

Id   Name     Fruit     Vegetable
1    Apple    Apple
2    Onion              Onion
3    Banana   Banana
4    etc.
Now I would like to put Vegetable on rows in a report without getting a blank row (with the sales of all the fruits). I would like to suppress all those fruit records without adding a separate filter to the report; just having the user pick this attribute should do the trick.
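One option, sketched as a guess rather than a definitive fix: the blank row exists because all the fruit sales land on the blank member of Vegetable, so a measure that returns BLANK for that member lets clients that hide empty rows (as Excel pivot tables do by default for cube sources) suppress it without a separate filter. [Sales Amount] is an assumed base measure name:

Vegetable Sales :=
IF (
    HASONEVALUE ( FruitAndVegetables[Vegetable] ),
    IF (
        ISBLANK ( VALUES ( FruitAndVegetables[Vegetable] ) ),
        BLANK (),        -- the blank member: return nothing so the row disappears
        [Sales Amount]
    ),
    [Sales Amount]       -- totals and multi-member selections behave as usual
)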
I have 2 dimensions that pull their Facility Name from the same Location dimension. The business users want to change Facility Name in the Material Facilities dimension to "Material Facility Name", but keep the Facilities dimension attribute the same. What is a good way to go about completing this task?
I've been working with SSAS for a good few years now, but I keep bumping into this problem: my users are trying to build a measure based on a calculated attribute and finding it difficult to work out the MDX to do so. Intuitively, they thought a calculated member would work, but from my understanding a calculated member is not quite the same thing.
So, here's the scenario.
We have a Product Dimension. We have a Measure that is the Number of days the Product took to make, e.g. 5 days. We also have a Product Count measure that counts the number of Products.
The user would like to write a calculated measure that works out the number of products that took <5 days, 5-10 days, 10-15 days, etc. It would be easy to write a set of calculated measures for each of these bandings, but the user effectively wants a single dynamic attribute to use in the calculation, so that the values are automatically distributed across the columns of their pivot table.
Is this even possible? I was thinking I could quite easily build such an attribute on the Product dimension in the ETL, but the user wants to be able to change the bandings on the fly by changing the MDX for the attribute, rather than going back to the developer every time.
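As far as I know, a real attribute can only come from the dimension source (the banding dimension built in ETL is the textbook answer), but the banded counts themselves can be written as plain calculated measures that users can edit, at the cost of getting one measure per band rather than one attribute spread across columns. A sketch with assumed names ([Days To Make] and [Product Count] as the measures, [Product].[Product] as the product attribute):

CREATE MEMBER CURRENTCUBE.[Measures].[Products under 5 days] AS
  Sum(
    Filter(
      [Product].[Product].[Product].Members,
      [Measures].[Days To Make] < 5
    ),
    [Measures].[Product Count]
  );

CREATE MEMBER CURRENTCUBE.[Measures].[Products 5 to 10 days] AS
  Sum(
    Filter(
      [Product].[Product].[Product].Members,
      [Measures].[Days To Make] >= 5 AND [Measures].[Days To Make] < 10
    ),
    [Measures].[Product Count]
  );

Changing a band is then a one-line edit to the boundary values in the cube's MDX script.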
I have a cube with a fact table and 3 dimensions. One of the dimensions is a type 2, and its surrogate key is stored in the fact table. If I query the database, the dimension attributes display correctly; however, the cube always displays the latest dimension attributes and does not preserve the history.
The measures are correct for the time period displayed, but the dimension attributes always show the latest values.
I have a parent-child attribute in my dimension. I am currently displaying the correct ID value, as the business wants, so they can see the rollup of the ID (intOrgNodeID) values. They would also like to see the same rollup of the name (vcharOrgNodeName) for this ID. However, they do not want it concatenated; they want to be able to see them separately. You cannot create two parent-child attributes in one dimension, so I am not sure if there is some simple trick to make this work. It seems like there should be one.
My dimension table looks something like this:

intdimOrgNodeID int (surrogate key)
intOrgNodeID int (actual ID)
intDimParentOrgNodeID
vcharOrgNodeName

In the properties I have set:

KeyColumns = tbldimOrgNode.intDimParentOrgNodeID
NameColumn = tbldimOrgNode.intOrgNodeID
In my SSAS cube I have created a dynamic named set "top 10 e-learnings by language" which consists of a set of tuples. Each tuple has two attributes from the same base dimension "training": attribute 1 is "sprache" (language) and attribute 2 is "training text".
Normally a named set is automatically visible in an Excel pivot table under the dimension you used to create it, but it seems that named sets built from tuples with more than one attribute are placed in a separate folder "Sets" between the measures and the dimensions. Additionally, this named set is not visible at all in the SSAS cube browser. Is there any way to tell the named set in which dimension it should appear, or any workaround?
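One thing that may be worth trying, sketched with an assumed set expression and measure since the post doesn't include the definition: CREATE SET accepts a DISPLAY_FOLDER property. Excel decides on its own where to place sets that span more than one attribute hierarchy, so this may only organize them inside its Sets area rather than move them under a dimension, but it gives at least some control:

CREATE DYNAMIC SET CURRENTCUBE.[top 10 e-learnings by language] AS
  TopCount(
    [Training].[Sprache].[Sprache].Members
      * [Training].[Training Text].[Training Text].Members,
    10,
    [Measures].[Training Count]   -- assumed measure
  ),
DISPLAY_FOLDER = 'Top Lists';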
Our SSAS integration didn't initially use attribute relationships. Now that our system has been running for a few years and we have bigger databases, we think we need to add them to improve performance. So we're in the process of adding them, but we found out that, when attribute relationships are added, the full unique names of our members all go from something like:

[DIM].[HIERARCHY].[LEVEL].&[GRANDPARENT].&[PARENT].&[MEMBER]

to something like:

[DIM].[HIERARCHY].[LEVEL].&[MEMBER]

That looks nice, and SSAS will still accept the longer names, but it returns the short ones in response to 'discovery' requests and in the XMLA responses of MDX queries. This is causing problems in our low-level XMLA-based modules, which assume the long names both in and out. Is there any clean way to use attribute relationships and still have SSAS generate the long member names? We fiddled with the various documented dimension/attribute properties, but to no avail. It also appears that some switches are obsolete.
I have found almost 80 duplicate indexes in my database while tuning it with the query below. How can I delete all the duplicate indexes in one go instead of dropping them individually?
-- Duplicate Index Scripts
-- Original Author: Pinal Dave (C) 2011
-- SQL Server Journey with SQL Authority
WITH MyDuplicate AS (
  SELECT
    Sch.[name] AS SchemaName,
    Obj.[name] AS TableName,
    Idx.[name] AS IndexName,
    INDEX_COL(Sch.[name] + '.' + Obj.[name], Idx.index_id, 1) AS Col1,
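The script is cut off above, but assuming its final SELECT ends up returning one row per redundant index (with the SchemaName, TableName and IndexName columns defined in the CTE), one approach is to wrap that result to generate all the DROP statements instead of dropping them by hand. A hedged sketch:

-- generate one DROP INDEX statement per duplicate; review before running!
SELECT 'DROP INDEX ' + QUOTENAME(IndexName)
     + ' ON ' + QUOTENAME(SchemaName) + '.' + QUOTENAME(TableName) + ';'
FROM MyDuplicate;   -- substitute the script's final SELECT of redundant indexes

Copy the generated statements out, check that only one index of each duplicate pair is listed, and execute them as a single batch.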
I am trying to implement data masking based on user login, and I am not sure why it is not working. I have the dimensions DimBrand, DimProduct and DimUser, and a fact table FactProduct. The requirement is to mask BrandCode with 'XXXX': all brand codes should appear in the report, but some of them are masked when the user does not belong to that group. In the cube I created all three dimensions and the fact table. I also created a new dimension, DimBrandMask, into which I separated the codes, with a relationship to the actual DimBrand dimension; in the cube, a referenced relationship is set up with the measure group. I created a role with read access.
In the Dimension Data tab of the role, I put the MDX below in the allowed set.
Hi. I have an Analysis Services (2005) cube with four dimensions and one fact table (with three partitions - 2006, 2007, 2008) for which I need to create an SSIS package to process. I only want to process one of the three partitions (2008); the previous two years should remain unchanged.
This is what I currently have in the Analysis Services Processing Task under the processing configuration:

- An object for each of the dimensions with "Process Full"
- An object for the 2008 partition with "Process Full"
(Note - Under Process Options, I see only Process Default, Process Full, Unprocess, and Process Data for dimensions and partitions).
Batch settings are:

- Processing order: Sequential
- Transaction mode: All in one transaction
- Dimension errors: Ignore errors
- Process affected objects: Do not process
When I execute the package, the cube loses the 2006 and 2007 data.
I am assuming that I have an issue with the Process Option or the Batch Settings, and I would appreciate any guidance!
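A hedged reading of the settings above: "Process Full" on a dimension rebuilds it and unprocesses every partition that depends on it, so the 2006 and 2007 partitions are emptied, and with "Process affected objects: Do not process" nothing refills them. Processing the dimensions with Process Update instead preserves the dependent partitions. If the Processing Task doesn't expose Process Update in your build, an Analysis Services Execute DDL Task can issue the XMLA directly; a minimal sketch with placeholder IDs:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <DimensionID>My Dimension</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
      <PartitionID>2008</PartitionID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>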
The last node of my workflow in SSIS is an Analysis Services Processing Task, which is supposed to fully reprocess a cube defined in a different project.
In the configuration I found the correct cube and its setup, so I thought I wasn't going to have any problems with it, but it started to complain about user and password information. Since the databases configured themselves when I added them, I thought the same thing would happen with this task.
I do have my own user and password with permissions to reprocess the cube, although I thought Windows authentication would be better than setting up a user and password for the application/task.
I looked through the entire configuration pane and found no information regarding username and password. Where should I set it up: in my SSIS solution or in the cube's solution?
This might be a newbie question, I'm not quite sure...
EDIT: Here is the error message: [Analysis Services Execute DDL Task] Error: The following system error occurred: Logon failure: unknown user name or bad password.
I am trying to process an OLAP 2000 cube inside an SSIS project, using the "Analysis Services Processing Task" object. The Visual Studio project is sitting on the machine where Analysis Services 2000 is running, but I still get an error while establishing a connection to the Analysis server.
Microsoft SQL Server 2005 is also installed on that machine.
The error is: "A connection cannot be made. Ensure that the server is running."
Does anybody have an idea why I get this error?
I have an SSIS package that contains a sequence container with TransactionOption "Required". Within this sequence I placed different AS processing tasks and different SQL tasks. The TransactionOption of these tasks is set to "Supported".
My problem: when a SQL task fails on execution, all executed tasks are rolled back except the AS processing tasks. The expected and necessary behavior is that the AS processing tasks also get rolled back.
Has anyone got a solution or a workaround for this problem?
I am facing an issue with partition processing. I have an SSAS cube with 5 partitions. The partitions are processed through a SQL Server job using SSIS packages; in the packages I used the SSAS process task to do this. Now the problem: the job runs successfully and reports that the step containing the partition processing is fine, but the data is not updated in the partition. Checking the partition properties, they are not updated with a recent date and time.
When I process the partition manually, it succeeds and recent data is reflected, with a recent date and time. The package configuration is done in the job itself.
I'm getting this error while processing one dimension:

OLE DB error: OLE DB or ODBC error: SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books Online.; 42000.

My dimension contains members from two data source tables.
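The message itself points at the fix: the dimension's source query apparently uses OPENROWSET/OPENDATASOURCE (typical when a dimension pulls members from two data sources), and that feature is off by default. If enabling it is acceptable in your security context, a system administrator can run the following, as the error suggests:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

Alternatively, a linked server or staging the second source's table into the main database avoids ad hoc distributed queries entirely.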
In another dimension I get this error:

Errors in the high-level relational engine. The 'dbo_vicidial_Users' table that is required for a join cannot be reached based on the relationships in the data source view. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Staging User', Name of 'DimUsers' was being processed.
I am having problems with the "Duplicate key was ignored" warning message. The problem is that the message seems to happen randomly and cannot be reproduced. If I take the same set of data and run the stored procedure that causes the problem, I don't get the warning message a second or subsequent time. Also, all the SELECT statements have criteria set to remove duplicates before the rows are inserted into the tables.
I have a data feed that pulls data from a DB2 database to a SQL Server 2008 staging table as a flattened set of records. A stored procedure in SQL Server is run to load the data into the destination tables. The data feed is run hourly for new and updated records in DB2 Monday-Friday 09:00-17:00 and then there is a midnight run of all the records going back for the last 12 months.
The data feed was originally sent from DB2 as a CSV file and pulled into SQL Server using SSIS but is now an Informatica workflow that pulls the data directly from DB2.
It is the Informatica workflow that is returning the "Duplicate key was ignored" warning message, and this stops the workflow. The workflow is restarted and the data is always loaded the second time without the warning message. The warning does not happen every time the workflow is run: it can run for a number of days with no warnings and then one will come through. I can see in Profiler that it is SQL Server that returns the "Duplicate key was ignored" warning message, so it is not an issue with Informatica.
I cannot reproduce the problem to get to the root cause of the issue. I would expect that if I ran the same set of data through the stored procedure I would get the warning message every time, but this is not the case; even when I step through the stored procedure I do not get the message. And since the midnight data feed returns the records from the last 12 months, it would by definition include duplicates, yet the warning message appears only randomly and is not consistent.
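For what it's worth, "Duplicate key was ignored" is the warning SQL Server raises when an INSERT hits a unique index created WITH IGNORE_DUP_KEY = ON: the duplicate rows are silently discarded, and the warning fires only when the incoming batch happens to contain a duplicate at that exact moment, which would explain the intermittent, non-reproducible behavior (by the rerun, the first copy is already in the table, so the de-duplicating SELECTs exclude it). A quick check for such indexes in the target database:

-- lists every index that was created WITH IGNORE_DUP_KEY = ON
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
       OBJECT_NAME(i.object_id)        AS TableName,
       i.name                          AS IndexName
FROM sys.indexes AS i
WHERE i.ignore_dup_key = 1;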
We plan to process our SSAS Cube nightly after our data warehouse is loaded (SSIS package) using an SQL Agent Job.
1. What is the best option to automate the processing of our cube?
2. Can this be added to our SQL Agent job?
3. As we will only be adding new dimension and fact records, should we use Process Add?
4. Does the initial load require Process Full?
5. How can we configure a processing option before the automated execution?
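For what it's worth, a common pattern that fits this setup, offered as a sketch rather than the definitive answer: do the initial load with Process Full, use Process Add for nightly loads that only append rows, and run it as an extra job step of type "SQL Server Analysis Services Command" placed after the warehouse load step. The step body is plain XMLA; a minimal full-process command with a placeholder database ID looks like this:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>

Swapping the Type to ProcessAdd (and targeting specific partitions or dimensions) covers the incremental case, so the processing option is configured simply by which XMLA is stored in the job step.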
I'm using an 'Analysis Services Processing Task' as part of an SSIS package to refresh the cube. In the property page, 'LoggingMode' is set to 'Enabled', but there are no records for this task in the sysdtslog90 table, while all the other tasks are logged there. How can I get it to log to the sysdtslog90 table?
We have an Integration Services package that executes a few T-SQL tasks, then processes an Analysis Services database. This has been in production for about three weeks now, and twice the package has failed with this error from the event log:
Event Type: Error
Event Source: MSSQLServerOLAPService
Event Category: (289)
Event ID: 3
Date: 7/11/2007
Time: 1:48:59 AM
User: N/A
Computer:
Description:
OLE DB error: OLE DB or ODBC error: An error has occurred while establishing a connection to the server.
When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server
does not allow remote connections.; 08001;
Communication link failure; 08S01;
TCP Provider: An existing connection was forcibly closed by the remote host.
; 08S01.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
I don't think that this error is accurate because the package and Analysis Services are on the same server.
Also, this does not happen in our development environment. Any help is appreciated.
I am getting the following errors when I process the cube in production:

An error occurred while the dimension, with the ID of 'DIM_PARTICIPANT', Name of 'DIM_PARTICIPANT' was being processed. End Error
An error occurred while the 'PARTICIPANT NAME' attribute of the 'DIM_PARTICIPANT' dimension from the 'XL_GCS_SelfServices' database was being processed. End Error
An error occurred while the dimension, with the ID of 'v d Transaction', Name of 'DIM_TRANSACTION' was being processed. End Error
An error occurred while the 'INVOICE NUMBER' attribute of the 'DIM_TRANSACTION' dimension from the 'XL_GCS_SelfServices' database was being processed. End Error.
But I implemented the same in the Dev and QA servers, and no issues were found there.