Analysis :: Adding Or Updating New Schema Updates Into DB With Regards To SSAS Project?
Jul 22, 2015
I seem to have a really hard time when making schema changes: adding a new fact table, renaming a dimension column, adding a new measure or renaming an existing one. Somehow these things tend to cause problems at one or more stages, either when running the schema wizard or when processing the cube. So I ask: what is your overall strategy for adding or updating schema changes in your DB with regard to the SSAS project?
Another way to ask this: how often do you find yourself deleting the DSV and/or the whole cube and starting over because some schema change led to a cascade of issues that just didn't seem to want to let you correct them?
SQL Server Data Tools within the VS 2010 Shell; SQL Server 2012 SSAS in Tabular mode. I've created a measure and a KPI in my fact table. I've created a perspective named "Summary". When selecting the fields and measures to display in the perspective, the measure is available, but not the KPI (target, status, etc.). When analyzing in Excel with the Summary perspective, the KPI is not available in the field list; when analyzing in Excel with the (default) perspective, the KPI is available. As far as I can tell, there is no way to select a KPI within the perspective window.
I used SSEUtil to add a schema to my database but I am having problems. I used these steps:

SSEUtil -c
> USE "c:\Rich.mdf"
> GO
> !RUN Resume.SQL
// indicates success
> SELECT * FROM SYS.XML_SCHEMA_COLLECTIONS
> GO
// schema not shown in list
> USE master
> GO
> SELECT * FROM SYS.XML_SCHEMA_COLLECTIONS
> GO
// schema is shown in the query

It appears that the schema is not added to the desired database, so when I try to use the schema in Visual Studio, it does not appear when I connect to the Rich.mdf database. Any ideas on what I am doing wrong or why this might be happening?
Thanks, Kevin
We have an SSAS project that we want to auto deploy. I am not sure how to handle the external data sources in the project. This one in particular has a single external data source defined to Microsoft SQL Server.
I would like to be able to change the data source based on the environment. In SSIS projects I can do this by setting up environments in the SSISDB and linking them to project parameters in the SSIS project but SSAS projects don't seem to have a similar mechanism.
How should I handle this? I would like the build/deployment agent to pass server and database information to the data source based on the environment (dev, QA, production).
The only way to automate this that I've discovered is to have an intermediary process execute after the build and update the generated .asdatabase and .configsettings files in the bin folder, replacing the connection string information.
No error is thrown, but the update is not made? Possibly due to the dreaded inline SQL being used? The data structure for these 2 servers is horrific (but that is a story in itself). The current structure (which needs immediate change) is that each employee, all 10 of them, has their own database. If the user ID is 6 or higher they are on server2; if the user ID is 5 or below they are on server1. Then they each have their own table that stores the data. (A story for another day, but it definitely needs an overhaul.) However, here is the SQL: what needs to be altered to work with the current data structure and still get the updates needed? Or we can scratch the update statements entirely if we can get the correct counts needed.
How can I use this MDX script in the calculation part of a cube? Do I simply dump it into the script form, starting with CREATE MEMBER CURRENTCUBE.[Measures].[test]?

SELECT [measures].[abc] ON 0,
       [xyz].[xyz].&[0] : [xyz].[xyz].&[60] ON 1
FROM ( SELECT Tail([month].[month].[month].MEMBERS, 6) ON 0 FROM [cube] )
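For reference, a calculated member in the cube script has to be a single scalar expression, so a full SELECT like this cannot be pasted in as-is. A minimal sketch of an equivalent member, reusing the names from the query and assuming the intent is to aggregate [abc] over the last six months (the Sum aggregation is an assumption):

CREATE MEMBER CURRENTCUBE.[Measures].[test]
AS Sum(
     Tail([month].[month].[month].MEMBERS, 6),   // the last six months, as in the subselect
     [Measures].[abc]
   ),
VISIBLE = 1;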
When I try to connect from Excel to SSAS, I get an error message like:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
If I connect to SSIS, I'm able to connect correctly. Why am I getting this error, and how can I overcome it?
I am doing workload analysis on SSAS Tabular (2012). I have perfmon logs captured and want to run them through PAL. I am looking for a threshold file for SSAS Tabular 2012/2014.
I have a Calculated Member in SSAS that I need to adjust based on what the current member is.
The code is below:
CASE
  WHEN [Measures].[End LIS] = 0 AND "HELP"
  THEN
    CASE
      WHEN [Measures].[Beginning LIS] = 0
        OR [Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS] = 0
      THEN NULL
      ELSE ROUND(
             [Measures].[Disconnects] /
             (([Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS]) / 2)
             * 100, 2)
    END
  ELSE ROUND([Measures].[Disconnects] / [AVERAGELIS] * 100, 2)
END
In English, I need this to translate to: if End LIS is 0 "AND the current member is the current month and current year" THEN carry on.
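A minimal sketch of the kind of condition that could stand in for "HELP", assuming a [Date].[Month] hierarchy whose member names look like "July 2015" (both the hierarchy and the name format are assumptions):

// true only when the current member is the current month of the current year
[Date].[Month].CurrentMember IS
  StrToMember("[Date].[Month].[Month].[" + Format(Now(), "MMMM yyyy") + "]")

Format() and Now() are VBA functions that SSAS exposes to MDX; if the month attribute uses integer keys such as 201507, the key-based equivalent would be StrToMember("[Date].[Month].&[" + Format(Now(), "yyyyMM") + "]").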
I have two SSAS databases with the same number of cubes in them; the cube names are also the same. I want to merge/combine these two into one, so that my reporting application sees a single database with measures/dimensions from both databases. Is this possible? I am doing this to achieve loose coupling between base and regional development work. If the above is not possible, is there any way to achieve minimal dependency between a base and a regional project?
I am facing an issue with partition processing. I have an SSAS cube with 5 partitions. These partitions are processed through a SQL Server job using SSIS packages; in the packages I used the SSAS processing task to do this. Now the problem is, the job runs successfully and shows that the step containing the partition processing is fine too, but the data is not updated in the partition. Checking the partition properties, they are not updated with a recent date and time.
When I process the partition manually, it succeeds and the recent data is reflected, with a recent date and time. Package configuration is done in the job itself.
I got this error from a SQL Agent driven cube process yesterday and am wondering where the log of errors is created...
Executed as user: xservername$. <return xmlns="urn:schemas-microsoft-com:xml-analysis"><results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults"><root xmlns="urn:schemas-microsoft-com:xml-analysis:empty"><Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception"><Warning WarningCode="1092354050" Description="Server: Operation completed with 1042 problems logged." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" /></Messages></root></results></return>.
The step failed. If it's a property I can see related to the server in SSMS, which property is it? I see lots of properties with the word "log" in the name.
Is it possible to have more than one cube under one SSAS database? For example, I have a database "Test" in which the cube "TestCube" exists; is it possible to deploy another cube "TestCube2" under the Test database?
If yes, what is the process to do that? The reason I am asking is that some common dimensions are used in both cubes, and I am not sure of the best way to use the shared dimensions.
I've been working with SSAS for a good few years now, but I keep bumping into this problem: my users are trying to build a measure based on a calculated attribute and are finding it difficult to work out how to write the MDX to do so. Intuitively, they thought a Calculated Member would work, but from my understanding a Calculated Member is not quite the same thing.
So, here's the scenario.
We have a Product Dimension. We have a Measure that is the Number of days the Product took to make, e.g. 5 days. We also have a Product Count measure that counts the number of Products.
The user would like to write a calculated measure that works out the number of products that took <5 days, 5-10 days, 10-15 days, etc. It would be easy to write a set of calculated measures for each of these bandings, but the user effectively wants a single dynamic attribute to use in the calculation, in order to automatically distribute these values across the columns in their pivot table.
Is this even possible? I was thinking I could quite easily build an attribute on the Product dimension in the ETL to do this, but the user wants to be able to change the bandings on the fly by changing the MDX for the attribute, rather than going back to the developer every time.
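For comparison, the static version of one band might look like the sketch below, assuming a [Product].[Product] attribute hierarchy and [Days To Make] as the name of the days measure (both names are assumptions):

CREATE MEMBER CURRENTCUBE.[Measures].[Products Under 5 Days]
AS Sum(
     Filter([Product].[Product].[Product].MEMBERS,   // every product...
            [Measures].[Days To Make] < 5),          // ...that took under 5 days to make
     [Measures].[Product Count]
   ),
VISIBLE = 1;

Each additional band would be another copy with different bounds, which is exactly the maintenance burden described above; the usual alternative is a banding attribute or utility dimension driven by a table of bounds that the user can edit, so the bandings change without editing MDX.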
I need to find the percentile using a cube, so I am using the formula below:

index = ((n - 1) * p / 100) - 1

where n = the count of records in the array and p = the percentile.
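As a worked example with the values used in the query below (n = 23 rows, p = 25): ((23 - 1) * 25 / 100) - 1 = 4.5, and Int(4.5) = 4, so the measure picks item 4 (zero-based) from the ordered set.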
I am using the MDX query below:
WITH
  MEMBER [Measures].[PV] AS 25
  MEMBER [Measures].[CntCT] AS
    Count(NonEmpty([Tb City].[City Name].&[DC], [Measures].[CPT1]))
  MEMBER [Measures].[PVInt25] AS
    Int(((([Measures].[CntCT] - 1) * [Measures].[PV]) / 100) - 1)
  MEMBER [Measures].[PVC] AS
    ( [Measures].[CPT1],
      Order(NonEmpty([Tb City].[City Name].&[DC], [Measures].[CPT1]),
            [Measures].[CPT1], ASC).Item([Measures].[PVInt25]) )
SELECT [Measures].[PVC] ON COLUMNS,
       { [Tb City].[City Name] } ON ROWS
FROM [test];
The problem is in the second member, [Measures].[CntCT]. In it I need to find n, the count of rows in measure CPT1 where the city is DC (City is my dimension). But currently it returns 1, whereas in my test cube the city DC actually has 23 CPT1 rows.
I tried the query below:
SELECT NON EMPTY { [Tb City 1].[City Name].[City Name].&[DC] } ON COLUMNS,
       NON EMPTY { ( [Measures].[Tb Main Count] ) } ON ROWS
FROM [test]
The query above gives me the correct count, i.e. 23, but I need to get that result inside the [CntCT] member of the first MDX query.
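The count comes back as 1 because NonEmpty([Tb City].[City Name].&[DC], ...) is a set containing a single city member, so Count() counts members rather than fact rows. One possible fix, sketched below, is to count members of an attribute at the grain of the fact rows; [Tb Main].[Id] is a hypothetical key attribute assumed to exist on the fact dimension:

MEMBER [Measures].[CntCT] AS
  Count(
    NonEmpty(
      [Tb Main].[Id].[Id].MEMBERS,                        // one member per fact row (assumed attribute)
      ( [Tb City].[City Name].&[DC], [Measures].[CPT1] )  // non-empty for CPT1 in city DC
    )
  )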
I have a requirement where I need to show the maximum value in the grand total, but for the individual dimension members the same measure has to sum.
For example, let's say I have a base measure called Test whose aggregation type is set to SUM.
For this same measure, the grand total should not show the sum; instead it should show the maximum value across the dimension members being analyzed.
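One common pattern is a SCOPE assignment that overrides the All member of the dimension being analyzed; a minimal sketch, assuming a [Product].[Product] hierarchy whose All member is named [All] (both assumptions):

SCOPE( [Measures].[Test], [Product].[Product].[All] );
  // the grand total shows the largest member value instead of the sum
  THIS = Max( [Product].[Product].[Product].MEMBERS, [Measures].[Test] );
END SCOPE;

The catch is that the override applies only to the hierarchy named in the SCOPE, so each dimension users analyze across would need its own assignment.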
I want to implement population data in a sales cube.
The fact table has a customer code, which is a foreign key to the Customer master dimension, which in turn is linked to a census data dimension. The census data dimension holds city-wise population data with foreign keys to zone and state.
I was trying to update an SSAS database on a test server and got the following error when trying to deploy from Visual Studio. I get a similar error in SSMS. I tried to delete the database before deploying, but I get an error when trying to delete. I rebooted the server.
(SQL Server 2005 R2, Windows Server 2008 R2.)
Error 1 File system error: The following file is corrupted: Physical file: \\?\D:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\Heidtman Sales Cube.0.db\Dim Age.0.dim\5.Dim Age.Dim Age.dstore. Logical file . Errors in the metadata manager. An error occurred when loading the Age dimension, from the file, '\\?\D:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\Heidtman Sales Cube.0.db\Dim Age.5.dim.xml'. Errors in the metadata manager. An error occurred when loading the Heidtman DW cube, from the file, '\\?\D:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\Heidtman Sales Cube.0.db\Heidtman DW.7.cub.xml'. 0 0
I need to limit sessions accessing an SSAS cube to one per user. For example, if a customer uses Excel to check a cube, two or more users cannot share the same session or account.
How do I move dimension attributes between Currency and Geography and vice versa (i.e., change their positions) in SQL Server 2012? I need Currency to be placed above Geography, or Geography below Currency.
I am thinking of a possible design where the cube will never go offline.
Usually when I make code changes to my cube, the cube goes offline and I need to do a Full Process again to bring it back.
However, in cases where the cube is extremely critical for the business users, it would be great if I could deliver a solution where the cube never goes down.
I'm trying to do a currency conversion with an MDX statement in my SSAS 2012 cube.
Here is my script:
SCOPE( LEAVES([Entity]) );
  SCOPE( LEAVES([Time]) );
    SCOPE( LEAVES([Currency]) );
      SCOPE( [Account].[Account].[Total ACCOUNT].MEMBERS );
        THIS = ( [Measures].[Value], [Currency].[Currency].[Local] )
             * ( [Measures].[Value], [Account].[Account].[Fx Rate] );
      END SCOPE;
    END SCOPE;
  END SCOPE;
END SCOPE;
The problem is that I want to exclude my local currency from the Currency scope, so that the conversion runs only when a foreign currency (€, $, £) is selected. I tried the following syntax, but it always returns "MDX script is not valid":
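The attempted syntax is not shown, but a common pattern for this is to scope on Except() of the currency members; a minimal sketch reusing the member names from the script above, with [Local] assumed to be the local-currency member:

SCOPE( Except( [Currency].[Currency].[Currency].MEMBERS,
               { [Currency].[Currency].[Local] } ) );   // every currency except the local one
  THIS = ( [Measures].[Value], [Currency].[Currency].[Local] )
       * ( [Measures].[Value], [Account].[Account].[Fx Rate] );
END SCOPE;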