Analysis :: Cube Restoration In Different Storage Location
Aug 6, 2015
I am in the process of automating a cube migration from SSAS 2008 to SSAS 2012.
In this process I am deleting the existing cube databases and restoring them to a different location on the same server.
When I try to execute the restore command, or restore through the UI, I get a weird error message like the one below:
"TITLE: Microsoft SQL Server Management Studio
------------------------------
File system error: The following error occurred while opening the file 'DrivePath3F9D4D128D5E417FA6F2[CUBEDBNamepath].fact.map'.
Server: The current operation was cancelled because another operation in the transaction failed.
(Microsoft.AnalysisServices)"
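For context, the restore command is an XMLA Restore of roughly this shape; the backup path, database name, and storage folder below are placeholders, not the real ones:

<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>\\backupshare\MyCubeDb.abf</File>
  <DatabaseName>MyCubeDb</DatabaseName>
  <AllowOverwrite>true</AllowOverwrite>
  <!-- DbStorageLocation is what points the restored database at the new folder -->
  <DbStorageLocation xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100/100">D:\OLAPData\MyCubeDb</DbStorageLocation>
</Restore>

The Analysis Services service account needs full control on the target folder, and it may also be worth clearing any leftover files from earlier restore attempts there.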
We are running SQL Server 2000 with SP4 on a two-node active/passive Windows 2003 SP1 cluster. We are presented with two 150 GB LUNs and one 600 MB LUN on a particular SAN that does not belong to us. M: and N: are the data drives and Q: is the quorum.
We now have our own SAN and we will be using it for the SQL cluster data storage. The SAN administrator stated that he will present me with two 150 GB LUNs and one 600 MB LUN…pretty much the same configuration.
How will I be able to move all my data and configure the cluster to the new SAN?
I have transferred my DTS packages from SQL Server 2000 to SQL Server 2005 by directly migrating the rows from the sysdtspackages table on SQL Server 2000 to the sysdtspackages table on SQL Server 2005. Now I am able to see all my SQL 2000 DTS packages in SQL Server 2005 Management Studio under Management > Legacy > Data Transformation Services, and I have all the corresponding records in the sysdtspackages table of the msdb database on SQL Server 2005.
Now I have to schedule a job to execute these DTS packages. In the job schedule window, when I select SSIS Package Store as the package source and go to SSIS > MSDB, I am not able to find my DTS packages. Where have they gone, and how can I schedule them?
I can find another table named 'sysdtspackages90' in the msdb database. Do I have to migrate the data from sysdtspackages to sysdtspackages90?
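To see where the packages actually live, a quick check of both tables helps; a minimal sketch:

USE msdb;
-- Legacy DTS packages (what Management > Legacy > Data Transformation Services reads):
SELECT name, id, createdate FROM dbo.sysdtspackages;
-- SSIS packages (what the SSIS Package Store's MSDB folder reads):
SELECT name, id, createdate FROM dbo.sysdtspackages90;

The SSIS > MSDB node only lists rows from sysdtspackages90, which is why migrated DTS packages (rows in sysdtspackages) do not show up there.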
I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS. To reduce processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it. This seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already deployed cube.
We have an existing old system built on SQL Server 2000 DTS packages. The whole application runs on DTS. There are several packages which are called from a master package. Each child package has its own global variables, most of them file-location variables holding the source location of the input data, mainly Excel files. Even though the global variables are there, whenever someone wants to change a file location they have to go into each child package and change the variable themselves. To resolve this, they want a configuration file (INI) or table that stores those variables. My thought is to read from that file/table and update all the packages' global variables through an ActiveX script as the first step of the master package (see the sketch below). That would eliminate the need to change anything in the existing system. But the problem is that management (PM / DBAs / team members) have different views on where to store the configuration data:
1. Some want it in a different database, with one table for this application, so that in future they can add tables for other applications.
2. Some want to store it in a table in the same database as this application.
3. Some want to save it as an INI file.
As I'm the one who's going to actually implement it, they have asked me to research the best solution out there, so I would appreciate help deciding which is a good solution and why.
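To make the options concrete, here is a minimal sketch of the ActiveX first step I have in mind, assuming an INI-style file of name=value lines at a hypothetical path (child packages would still need to receive these values, e.g. through the Execute Package task's global-variable passing):

' First step of the master package (VBScript ActiveX task).
Function Main()
    Dim fso, ts, line, pos
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\Config\FileLocations.ini", 1) ' 1 = ForReading
    Do While Not ts.AtEndOfStream
        line = Trim(ts.ReadLine)
        pos = InStr(line, "=")
        If pos > 0 Then
            ' DTS creates the global variable if it does not exist yet.
            DTSGlobalVariables(Left(line, pos - 1)).Value = Mid(line, pos + 1)
        End If
    Loop
    ts.Close
    Main = DTSTaskExecResult_Success
End Function

Reading from a table instead would just swap the FileSystemObject for an ADO connection; the assignment into DTSGlobalVariables stays the same.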
I have a question about the Storage Design Wizard in Analysis Manager.
We work with different seasons in our reports, and every week we update the data for the seasons in our cubes. But seasons end, and at some point the data for old seasons doesn't change anymore, so I don't think it is necessary to update every season every week (which we currently still do, even for seasons from, say, 2004). It's a waste of time. So my question is: how can I store the data of previous seasons (and still work with them in the reports) while updating only the current season? Can I use the storage design for this?
I've read that there is a workaround for this issue by customizing errors at processing time, but I am not happy having to ignore errors; also, the cube processing is scheduled, so ignoring errors is not a choice, or at least not a good one.
This is the part of my cube where the error is thrown:

DimTime:    PK (int), MyMonth (int, e.g. 201501, 201502, 201503, ...), other columns
FactBudget: PK (int), Month (int, e.g. 201501, 201502, 201503, ...)

I set the relation between DimTime and FactBudget using DimTime.MyMonth as the primary key and FactBudget.Month as the foreign key. The cube built without problems, but when processing, the error "The attribute key cannot be found when processing" was thrown.
It was thrown because FactBudget has some Month values (201510, 201511, 201512, for example) which DimTime doesn't, so the integrity is broken.
My actual question: is there a way or pattern to redesign this DWH to correctly deploy and process?
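A minimal sketch of how to see the break before processing, using the tables above:

-- Months present in the fact table but missing from the time dimension:
SELECT DISTINCT f.[Month]
FROM FactBudget AS f
LEFT JOIN DimTime AS d ON d.MyMonth = f.[Month]
WHERE d.MyMonth IS NULL;

The usual pattern is for the ETL to populate DimTime for every month the budget can reference (or to load dimensions before facts), so the cube processes cleanly without ignoring errors.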
I was talking to my boss today, and our report requests are not very consistent. We always have someone coming back to change something in a report. We were thinking of using something called cube analysis, which would give our employees the raw data to run any standard query themselves. We have folks that want a report one way, but then they change their minds and we end up creating yet another report 4 or 5 times. What are your thoughts about this type of database?
I created a measure group and two dimensions using [AdventureWorksDW2012], then tried to change one dimension's storage mode by setting the proactive caching property to Real-Time ROLAP. There is no warning message when deploying and processing, but an error occurs when I query in SQL Server Analysis Services; the error message is below.
Error occurred retrieving child nodes: the current operation was cancelled because another operation in the transaction failed.
I'm making my first attempt at creating a cube using Analysis Services based on my existing data mart. The data source, views, and dimensions have been defined, but when it comes to deploying the cube, it gives the error "A connection cannot be made. Ensure that the server is running." The deployment target server and database are the same ones where my data mart is. Or maybe I don't know what I'm doing.
Would appreciate any suggestion for my enlightenment. Thanks
I have a (hopefully typical) problem when it comes to cube design. We store millions of product records every year, broken down by month/quarter. Each product can be assigned to various hierarchical classification groups, etc. The data in the OLTP DB occupies roughly 100 GB for a typical year. We're looking at breaking this out into OLAP to provide faster access to the data in various configurations and groupings. This is not a problem, as this is the intended use for Analysis Services.

The problem is that we apply projection factors to the product prices and quantities. This would be OK if it only happened once; however, this happens every quarter (don't ask why). The projection factors change 4 times a year, and they affect all historical product records. This presents a challenge, because to aggregate the data into a useful configuration in the cubes, you throw out the detail data, but this means throwing out the prices and quantities which are needed to apply the projection. So if you have Product A at $10 and Product B at $20, and roll both up into Category X, you'll have $30, but you'll lose the ability to apply a projection factor of .5 to Product A and .78 to Product B. They're rolled up.

I don't want to regenerate the cubes every 3 months. That's absurd. But we can't live without the ability to project the prices/quantities at the product (detail) level. So how can this be achieved when the other cubes are created at a higher level, with less detail and sums of the detail data? My initial guess is that we have to update the product data and then reaggregate all the other data that is built upon it. Is there any other way to apply math to the data on the way out?

Thanks in advance!
Regards,
Zach
How can I use this MDX script in the calculation part of a cube? Do I simply paste it into the script form, starting with CREATE MEMBER CURRENTCUBE.[Measures].[test]?

SELECT [Measures].[abc] ON 0,
       [xyz].[xyz].&[0] : [xyz].[xyz].&[60] ON 1
FROM ( SELECT Tail([month].[month].[month].Members, 6) ON 0 FROM [cube] )
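For what it's worth, the script form does not accept a SELECT statement; only the calculation itself goes in. A minimal sketch of what that could look like, folding the subselect's last-6-months restriction into the expression (the axis layout stays in the client query):

CREATE MEMBER CURRENTCUBE.[Measures].[test] AS
    // Aggregate [abc] over the last 6 months, as the subselect did.
    Aggregate(
        Tail([month].[month].[month].Members, 6),
        [Measures].[abc]
    ),
    VISIBLE = 1;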
The data attached below is from a fact table. When this data is browsed in the cube, the end user is only interested in the value of Measure 1 when it is not equal to zero. Measure 1 is a base measure. How do I suppress the value 0 for Measure 1 in the cube?
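One pattern I know of is to null out the zeros in the cube's calculation script, so that NON EMPTY in client queries hides those cells; a minimal sketch, assuming the measure is named [Measure 1]:

// Replace stored zeros with NULL; the self-reference resolves to the prior pass.
SCOPE ([Measures].[Measure 1]);
    THIS = IIF([Measures].[Measure 1] = 0, NULL, [Measures].[Measure 1]);
END SCOPE;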
I know where the data files are etc, but what is the physical structure of a cube like on disk?
Logically we see the data as a star. Is the physical file akin to a star also? Or is a single file a measure group containing all the required member and measure data - thereby eliminating the need for physical join operators.
I am planning to implement security in my cube. Total sales is 1000:

Customer   Sales
A          100
B          200
C          300
D          400
Total      1000

When I give access to customer A, he should be capable of viewing only A's value of 100. But the problem is that A is also able to see the total sales of 1000. How can I restrict what A sees to 100 in the cube? Where do I have to set the property to fix this issue?
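The setting that controls this sits in the role rather than the cube: on the role's Dimension Data tab (Advanced view), the allowed member set restricts what A can see, and the "Enable Visual Totals" checkbox makes totals recompute over only the allowed members, so A's total becomes 100 instead of 1000. The allowed set would be something like the following (the hierarchy name is a placeholder):

{ [Customer].[Customer].&[A] }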
Is it correct to say that for each cube you can have only one Fact Table? I am having a funny dispute just now.
In my experience I have never built cubes with more than one fact table; if I want to take data from more than one table, I write a view and use it as the fact table... but a cube with two or three fact tables? Never tried it.
I am currently trying to build a distinct count on my cube (MSSQL 2005 Analysis Services). But after I added the distinct count on the field I want and started processing, the following error appears: Errors in the OLAP storage engine: The sort order specified for distinct count records is incorrect.
I have a cube whose XML differs at some points from its design view. For example, for some dimension and its attributes, the source table is different from what the Properties window shows for them.
Is this possible? And how do I read the cube XML, given that there are repeating tags in it? There are two types of dimension tags: one has only attributes and the other has all the properties.
Is it possible to have more than one cube under one SSAS database? For example, I have a database "Test" in which the cube "TestCube" exists; is it possible to deploy another cube "TestCube2" under the Test database?
If yes, what is the process to do that? The reason I am asking is that there are some common dimensions used in both cubes, and I am not sure of the best way to use the shared dimensions.
How can we select the translated text of cube translations, such as measures, dimensions, and dimension attributes? I have the query below, which returns translations of a dimension attribute's members:

WITH
    MEMBER Measures.CategoryCaption        AS Product.Category.CurrentMember.MEMBER_CAPTION
    MEMBER Measures.SpanishCategoryCaption AS Product.Category.CurrentMember.Properties("LCID3082")
    MEMBER Measures.FrenchCategoryCaption  AS Product.Category.CurrentMember.Properties("LCID1036")
SELECT
    { Measures.CategoryCaption, Measures.SpanishCategoryCaption, Measures.FrenchCategoryCaption } ON 0,
    [Product].[Category].MEMBERS ON 1
FROM [Adventure Works]

I am getting translations of the product category members, but how do I get the translated text of "Category" itself?
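As far as I know, captions of the objects themselves (measures, dimensions, hierarchies) are translated by the session locale rather than by a member property, so one option is the metadata schema rowsets, connecting with the wanted locale in the connection string (e.g. Locale Identifier=3082 for Spanish); a sketch:

SELECT CUBE_NAME, HIERARCHY_NAME, HIERARCHY_CAPTION
FROM $SYSTEM.MDSCHEMA_HIERARCHIES
WHERE CUBE_NAME = 'Adventure Works';

HIERARCHY_CAPTION then comes back in the session's language ("Categoría" rather than "Category", if a Spanish translation is defined).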
If there are any AS gurus out there, I could use some help. I've been having problems with a particular AS dimension and cube, and it's driving me crazy! No matter what I do, nothing seems to work.
Anyway, here's what I'm trying to do. I've got a fairly simple dimension. There is a date stored in the dimension that is formatted as an int. The dimension needs to only display AGE, so I cast the int as a date and do a datediff to get AGE. The dimension builds just fine and I get the results I want. My problem is when I add the dimension to the cube. The cube fails to build and I get an error message - "Data source provider error: The column prefix 'MY TABLE' does not match with a table name or alias used in the query." Basically, AS is telling me that the table that is used for the dimension doesn't exist, even though the fact and the dimension are joined properly and I've validated the structure.
I've run a number of queries in QA on the two tables and everything works fine. No funky data issues. I've applied the service packs a few times, but that didn't work either. I've tried making a cube that has just the fact table and the one dimension table, and it still fails.
Basically, I'm out of ideas. Any help that anyone has is greatly appreciated.
I have designed a cube. It has two fact tables and some dimensions, with a many-to-many relationship between the fact tables.
For example
FactMain:   DataKey (PK), StartDateKey, PostCodeKey, TotalCost
FactBridge: DataKey (FK), ProductKey (FK), Position (primary key on DataKey + ProductKey + Position)
DimProduct: ProductKey (PK), ProductCode
The cube builds successfully and processes successfully. But when I process the cube from an agent job, I get the error "Attribute key not found: tablename, value...". I added a job step to run an Analysis Services command, taking the command from the cube process script (generated by processing the cube manually and scripting it), and I used ProcessAffectedObjects = "true" in the script. When I checked the tables, the key does exist. Why am I getting this error?
I want to implement population data in a sales cube.
The fact table has a customer code, which is a foreign key to the customer master dimension, which in turn is linked to a census data dimension. The census data dimension has city-wise population data with foreign keys to zone and state.
I have been trying to get the ValueR column of the following query through MDX, but I am getting ValueW as the output of the MDX instead:
SELECT
    Exp(Log(Sum(MTMROR) + 1)) - 1 AS ValueW,
    Exp(Sum(Log(MTMROR + 1))) - 1 AS ValueR
FROM Temp_Performance
WHERE Rundate IN ('2015-03-01', '2015-03-02')
The MDX written for the above query is:
WITH
    MEMBER [Measures].[LogValuePre]  AS ([Measures].[MTMROR] + 1)
    MEMBER [Measures].[LogValuePre1] AS VBA![LOG]([Measures].[LogValuePre])
[Code] ...
The [MTMROR] measure has the aggregate function Sum. What I gather from this behavior is that MDX aggregates first, and the default aggregation function is Sum. When I look at more granular data by putting the date dimension on rows (un-commenting the date dimension), I get the correct log and exp-log values; it shows the correct value because date is the most granular level in the fact table. When trying to get the data at a less granular level (fund level), the Sum is applied automatically.
If I set AggregateFunction to None in the cube structure, I get null as the output.
How can I apply the log function before the sum function in the [MTMROR] measure?
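The only reliable pattern I know is to take the log before the storage engine aggregates: add a named calculation to the fact table in the DSV, for example LogMTMROR = LOG(MTMROR + 1) (the name is hypothetical), expose it as a new measure [Log MTMROR] with AggregateFunction Sum, and exponentiate the aggregate in a calculated member:

CREATE MEMBER CURRENTCUBE.[Measures].[ValueR] AS
    // exp(sum(log(x + 1))) - 1, with the sum done by the storage engine.
    Exp([Measures].[Log MTMROR]) - 1,
    VISIBLE = 1;

That reproduces the SQL ValueR formula at any granularity, because the Sum now runs over the already-logged rows.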
1. SQL Server Data Warehouse
2. OLAP cube in Analysis Services
My question is: if my SQL Server data warehouse changes (has appended data), will my OLAP cube have the appended data too?
Is it possible for my OLAP cube to always have the appended data whenever my data warehouse changes? If yes, how do I do it without re-deploying and re-processing my Analysis Services project?
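For what it's worth: re-deployment is only needed when the cube structure changes; new rows always need some form of processing. The lightweight option is an incremental ProcessAdd on the measure-group partition, which can run on a schedule without touching the project (all IDs below are placeholders):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyWarehouseAS</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>FactSales</MeasureGroupID>
    <PartitionID>FactSales Partition</PartitionID>
  </Object>
  <Type>ProcessAdd</Type>
</Process>

Note that ProcessAdd appends whatever the partition's source returns, so the source needs to yield only the rows added since the last process (e.g. via a view). Proactive caching on the partitions is the alternative if no scheduled command at all is wanted.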
I have a cube with 2 many-to-many dimensions where a particular MDX query takes about 5 seconds. When I resolve the many-to-many relationships by multiplying out the data in the fact table, the same query takes 21 seconds.
In general do many-to-many dimensions slow down query performance of a cube?
Without the many-to-many dimensions, the fact table of course has far more rows. Could this be the reason for the performance loss?
And how can I tune the query performance of a cube in general?