I have a question about the Storage Design Wizard in Analysis Manager.
We work with different seasons in our reports, and every week we update the season data in our cubes. But seasons end, and at some point the data for old seasons no longer changes, so I don't think it is necessary to update every season every week (which we currently still do, even for seasons from, for example, 2004). It's a waste of time. So my question is: how can I store the data of previous seasons (and still work with it in the reports) while remaining able to update the current season? Can I use the Storage Design Wizard for this?
I am a Windows developer for the IBM Tivoli Storage Manager Server (TSMS) product. Our product installation is built with InstallShield and uses the Windows Installer.
On a new installation of Windows 2003 x64 Storage Server R2, at a customer's site, the TSMS product fails to install. The OS ships with version 3.01.400.3959 of the Windows Installer, and I can find no newer version that will install.
Part of our product is 32-bit (console) and another part is x64 (server). During installation I can see that the install's default directory is being redirected/reset to C:\Program Files (x86)\Tivoli\TSM after it is explicitly set by a custom action to ..Program Files.. . I further observe that our custom actions that write 64-bit registry entries are being refused.
REGSAM samMask = KEY_ALL_ACCESS;
if ( regIsWow64Process() )                 // our helper: running as a 32-bit process on x64?
    samMask = samMask | KEY_WOW64_64KEY;   // request the 64-bit registry view

lStatus = RegCreateKeyEx( hLocalConnectKeyRoot,
                          szSubkey,
                          0L,
                          NULL,
                          REG_OPTION_NON_VOLATILE,
                          samMask,
                          NULL,
                          &hKey,   // phkResult is a PHKEY, so &hKey (assuming hKey is an HKEY)
                          &dw );

The above fails to create the key.
We have tried four versions of our TSMS spanning many changes, but the install behaves the same. This does not happen on any other Windows OS we test on, but we do not test on Windows 2003 Storage Server R2, since it is an OEM product. We did test on Windows Server 2003 R2 x64 and do not see this problem.
Do you have any suggestions on how to tackle this problem? I have full installation traces but can only see that the registry work is being refused. I can't see why.
I am in the process of automating a cube migration from SSMS 2008 to SSMS 2012.
As part of this process I am deleting the existing cube databases and restoring them to a different location on the same server. When I try to execute the restore command, or restore using the UI, I get a weird error message like the one below:
"TITLE: Microsoft SQL Server Management Studio ------------------------------ File system error: The following error occurred while opening the file 'DrivePath3F9D4D128D5E417FA6F2[CUBEDBNamepath].fact.map'. Server: The current operation was cancelled because another operation in the transaction failed. (Microsoft.AnalysisServices)
The fact table gets updated every day with thousands of records, and I have been increasing the aggregations manually. Is there a way to increase the aggregations (design storage) programmatically? My cube gets processed once a day by a VB program, and I want to increase the aggregations before processing the cube through VB.
Hello, our department will create several SSIS packages that will contain connection managers using the IBM or Microsoft OLE DB provider for DB2. In every case, we will have to use a specific user name and password.
We need one point of reference for the password for the multitude of packages, in case the password changes. I'm guessing we would do this with a single XML package configuration that each of these packages could use. If this is the case, how can it be done securely?
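For comparison, the same single point of reference could also live in SSIS's SQL Server configuration type, which stores settings in a database table that can be locked down with ordinary database permissions rather than file ACLs. A rough sketch of the table the package configuration wizard generates (reproduced from memory, so treat the details as approximate):

    CREATE TABLE [SSIS Configurations] (
        ConfigurationFilter nvarchar(255) NOT NULL, -- shared filter name every package points at
        ConfiguredValue     nvarchar(255) NULL,     -- the password (or a full connection string)
        PackagePath         nvarchar(255) NOT NULL, -- property path inside the package
        ConfiguredValueType nvarchar(20)  NOT NULL  -- e.g. 'String'
    );

    -- One row serves all of the packages; when the password changes, update it
    -- here once. 'DB2' is a placeholder connection manager name.
    INSERT INTO [SSIS Configurations]
    VALUES ('DB2Credentials', 'placeholder-password',
            '\Package.Connections[DB2].Properties[Password]', 'String');

Note that the value is stored in clear text either way, so the table's permissions are doing the real security work.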
OK, so Facebook groups can have hundreds of thousands of members. Members can be part of an unlimited number of groups, and a group can have an unlimited number of members.
A comma-delimited string seems absurd. A many-to-many database relationship (sketched below) seems like it won't scale well to the tens and hundreds of thousands of members (especially if you have 1,000-5,000 groups). A table for each group would work, but that's a bit over the top in my opinion. An XML file doesn't seem any better than the above options.
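For concreteness, the many-to-many design I mean is the classic junction table; a minimal T-SQL sketch (all names invented):

    CREATE TABLE Members (
        MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name     nvarchar(100) NOT NULL
    );

    CREATE TABLE Groups (
        GroupID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Name    nvarchar(100) NOT NULL
    );

    -- One narrow row per membership. The composite primary key doubles as the
    -- "members of this group" index; the second index answers "groups of this
    -- member" without a scan.
    CREATE TABLE GroupMembers (
        GroupID  int NOT NULL REFERENCES Groups(GroupID),
        MemberID int NOT NULL REFERENCES Members(MemberID),
        CONSTRAINT PK_GroupMembers PRIMARY KEY (GroupID, MemberID)
    );

    CREATE INDEX IX_GroupMembers_Member ON GroupMembers (MemberID, GroupID);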
I am no database guru, but I can't figure out a scalable method of doing this, be it with or without a database. I need something that can support 10 groups with 20 members each OR 1,000 groups with 100,000 members each.
Any help, suggestions, or kicks in the right direction would be most appreciated.
I've read that there is a workaround for this issue by customizing error handling at processing time, but I am not happy having to ignore errors; also, the cube processing is scheduled, so ignoring errors is not a choice, or at least not a good one.
This is the part of my cube where the error is thrown:

DimTime
  PK (int)
  MyMonth (int; e.g. 201501, 201502, 201503, etc.)
  other columns

FactBudget
  PK (int)
  Month (int; e.g. 201501, 201502, 201503, etc.)
I set the relation between DimTime and FactBudget using DimTime.MyMonth as the primary key and FactBudget.Month as the foreign key. The cube built without problems, but when processing, the error "The attribute key cannot be found" was thrown.
It was thrown because FactBudget has some Month values (201510, 201511, and 201512, for example) that DimTime doesn't have, so the integrity is broken.
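The mismatch is easy to confirm directly against the two tables (a minimal check, using the table and column names above):

    -- Fact months with no matching dimension member: exactly the keys
    -- the "attribute key cannot be found" error complains about.
    SELECT DISTINCT f.[Month]
    FROM FactBudget AS f
    LEFT JOIN DimTime AS d
        ON d.MyMonth = f.[Month]
    WHERE d.MyMonth IS NULL;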
My actual question: is there a way or pattern to redesign this DWH so that it deploys and processes correctly?
I created a measure group and two dimensions using [AdventureWorksDW2012], and I am trying to change one dimension's storage mode by setting its proactive caching property to Real-Time ROLAP. There is no warning message when deploying and processing, but an error occurs when I query SQL Server Analysis Services; see below for the error message.
Error occurred retrieving child nodes: the current operation was cancelled because another operation in the transaction failed.
I have a (hopefully typical) problem when it comes to cube design. We store millions of product records every year, broken down by month/quarter. Each product can be assigned to various hierarchical classification groups, etc. The data in an OLTP DB occupies roughly 100 GB for a typical year.

We're looking at breaking this out into OLAP to provide faster access to the data in various configurations and groupings. This is not a problem, as this is the intended use for Analysis Services.

The problem is that we apply projection factors to the product prices and quantities. This would be OK if it only happened once; however, this happens every quarter (don't ask why). The projection factors change 4 times a year, and they affect all historical product records.

This presents a challenge because to aggregate the data into a useful configuration in the cubes, you throw out the detail data, but that means throwing out the prices and quantities which are needed to apply the projection. So if you have Product A at $10 and Product B at $20, and roll both up into Category X, you'll have $30, but you'll lose the ability to apply a projection factor of .5 to Product A and .78 to Product B. They're rolled up.

I don't want to regenerate the cubes every 3 months. That's absurd. But we can't live without the ability to project the prices/quantities at the product (detail) level. So how can this be achieved when the other cubes are created at a higher level, with less detail and sums of the detail data?

My initial guess is that we have to update the product data and then reaggregate all the other data that is built upon it. Is there any other way to apply math to the data on the way out?

Thanks in advance!
Regards,
Zach
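To make the detail-level math concrete, here is roughly the calculation that has to happen before any roll-up (a minimal T-SQL sketch; the table and column names are invented, and @CurrentQuarter is a placeholder):

    -- Projection factors apply per product, so price and quantity must
    -- survive at product grain before aggregating to category level.
    DECLARE @CurrentQuarter int = 1; -- placeholder

    SELECT
        p.CategoryID,
        SUM(f.Price    * pf.Factor) AS ProjectedPrice,
        SUM(f.Quantity * pf.Factor) AS ProjectedQuantity
    FROM FactProduct AS f
    JOIN Product AS p
        ON p.ProductID = f.ProductID
    JOIN ProjectionFactor AS pf
        ON pf.ProductID = f.ProductID
       AND pf.QuarterID = @CurrentQuarter
    GROUP BY p.CategoryID;

    -- A cube cell already rolled up to Category X ($10 + $20 = $30) cannot
    -- recover the per-product result $10 * 0.5 + $20 * 0.78 = $20.60.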
I have 3 partitions using a year grouping: current year, previous 4 years, and older than 5 years. I have two measure groups, one of which is a distinct count, so I actually have 6 partitions. I also use usage-based optimization to build my aggregations. Should each partition have a separate aggregation design, or should there be one for each measure group?
I have a cube whose XML differs in places from its design view. For example, for some dimension and its attributes, the source table is different from what the Properties window shows for them.
Is this possible? And how do I read the cube XML, given that there are repeating tags in it? There are two types of dimension tags: one has only attributes, and the other has all the properties.
I have to create a cube in Analysis Manager in SQL Server 2000, as I want to see the data according to my organisation levels.
Organisation levels:
1. Business unit
2. Team manager
3. Sales manager
I also created the three users accordingly, and all of them have the Administrator account type in the Windows XP user accounts.
After creating the users, I designed roles to restrict the levels of the dimension I created. But after logging in as each of the users I created, I can still see all levels of data.
So please suggest a solution so that each user sees only the data for the dimension levels to which they are restricted.
I am working on a data warehouse using SQL Server Analysis Manager. I created a cube and it is working fine, but now I have to distribute it to end users. How do I do that, and in how many ways can it be done?
1) Can we make a .cub file?
2) How can we give access to end users without giving them access to the database?
3) How can we host a cube and access it from Excel or other software?
I'm trying to connect to an Oracle database because I want to create a connection for my data source in Analysis Manager. I have a server name, a user ID, and a password. When I try the connection in SQL*Plus it works, but when I try it in Analysis Manager I get an error: "Test connection failed because of an error in initializing provider. Oracle error occurred, but error message could not be retrieved from Oracle."
My provider is the "MS OLE DB Provider for Oracle", since it is an Oracle database I have to connect to.
I give up, how do you start the Analysis Manager? SSAS is installed and running, but I just can't seem to find the button/shortcut/whatever to start the manager.
How do I implement distinct storage tiers with SQL Remote BLOB Storage (RBS)?
I want to use this SQL feature to move files (images, videos, PDF files) from a database to a separate database dedicated to RBS. Then I want to have several storage tiers, where objects are saved and moved according to access frequency. Old data will be archived on cheap storage, but it must always remain accessible when needed.
Description:
- 1st and main tier: new and frequently accessed objects, stored on high-performance storage;
- 2nd tier: older or less frequently accessed objects, moved automatically to a different, inexpensive storage tier;
- in all cases, all objects must be accessible to all users, but access to archived objects (2nd tier) will be much slower.
I use MS SQL Server 2000 and Analysis Services to create a cube. But when I drill down many dimensions in Analysis Manager, it cannot show all levels. If I drill down a few levels it shows data, but when I drill down many levels, so that there are many rows (I tested about 60,000), it cannot show the result. How can I show the full result in Analysis Manager or another tool? Thank you :D
I am developing a Retail cube using SSAS 2012. One of the dimensions is DimCustomer, with SCD type II. Each customer can be a member or a non-member over a period of time. We have StartDt and EndDt to reflect the membership status.
e.g.:
Joe is a member between 06-01-2014 and 31-08-2014
Joe is a non-member between 09-01-2014 and 01-31-2015
Joe is a member between 02-01-2015 and 04-30-2015
Joe is a non-member between 05-01-2015 and 12-31-9999
Without adding a fact row for Joe for each day to reflect the membership status, I want to provide the ability to measure an "Active Customers Count" on a given date. There are 2 million customers in the DimCustomer table.
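For reference, this is the point-in-time count I am after, expressed directly against the dimension (a minimal T-SQL sketch; CustomerBK and IsMember are assumed columns, a business key and a member/non-member flag, since only StartDt and EndDt were described above):

    DECLARE @AsOfDate date = '2015-03-15'; -- hypothetical "given date"

    -- Count distinct customers whose member interval covers @AsOfDate.
    SELECT COUNT(DISTINCT c.CustomerBK) AS ActiveCustomersCount
    FROM DimCustomer AS c
    WHERE c.IsMember = 1
      AND c.StartDt <= @AsOfDate
      AND c.EndDt   >= @AsOfDate;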
Does anyone out there know a way to export data from the results pane in Analysis Manager to Excel, or even to text? I had thought this would be simple and intuitive to accomplish (after all, it is reasonable to assume users will need to get the results of their analysis out), but I have found it to be neither. Your help is much appreciated.
I'm trying to connect to Analysis Services (AS 2000 Enterprise Edition SP4 on Windows Server 2003 EE; the repository is in an Access DB) through Analysis Manager over an HTTPS (HTTP) connection. I don't want to use a domain account.
I need to specify a USER_ID and PASSWORD in Analysis Manager, but when registering a new server there is only a field for an IP address or HTTP address (e.g. http://server), and it doesn't work; it says Unauthorized. I've also tried the Anonymous account and it doesn't work either (the client wasn't able to make a connection)... I would like to use Basic Authentication. The account is a member of OLAP_Administrators... but the problem is: where should I put the USER_ID and PASSWORD?
It works perfectly in Excel 2003... and MSOLAP.asp works fine in IE...
I use SQL Server 2005 Standard Edition on a 64-bit machine. I tried to create an SSIS project with an Analysis Services Processing Task. In the connection manager, the test connection succeeds, but when I try to add the Analysis Services connection manager it gives the following error: "The new connection manager could not be created. Exception from HRESULT: 0xC0010014 (Microsoft.SqlServer.DTSRuntimeWrap)". This is a production server and we cannot reproduce the problem on other servers, so troubleshooting is difficult. The error also appears when connecting to other sources, like MS Excel in the Export task of Enterprise Manager. Cubes can be processed manually without any problem. Please give me a solution.
As you can see, there are two company references in my fact table, and the schema is a snowflake. My customer requirements state that the contracts' amounts can be aggregated/filtered by ServiceProviderCompany and its city/profession, or by ClientCompany and its city/profession.
The first thing that came to mind is to duplicate the whole dimension structure (one for service providers, one for clients), but I suspect there must be a better way around this?
I have an SSAS 2012 tabular instance with SP2. There is a database on the instance with a read role that has Everyone assigned permissions. When configuring the Power BI Analysis Services connector, at the point where you enter the Friendly Name, Description, and Friendly Error Message, clicking Next gives the error "The remote server returned an error (403)." I've tested connecting to the database from Excel on a desktop and it connects fine. I don't use an "onmicrosoft" account, so I don't have that problem to deal with.
We use Power BI Pro with our Office 365. As far as I can tell, that part is working OK, as I pass that stage of the configuration with a message saying it connected to Power BI. The connector is installed on the same server as the tabular instance; it's a Windows 2012 Standard server. The tabular instance runs under a domain account that is the admin account for the instance (this is a dev environment), and that account is what I've used in the connector configuration. It's also a local admin account. There is no gateway installed on the server.
I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS. To improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it otherwise. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done and the cube is processed and deployed, that would be it. My thinking is that the SSIS task should just update the already-deployed cube.
How do I choose the right key column in a "Mining Structure" for Microsoft Analysis Services?
I have this table, "Incoming goods":

Create table Income (
    ID           int not null identity(1, 1),
    [Date]       datetime not null,
    GoodID       int not null,
    PriceDeliver decimal(18, 2) not null,
    PriceSale    decimal(18, 2) not null,
    CONSTRAINT PK_Income PRIMARY KEY CLUSTERED (ID),
    CONSTRAINT FK_IncomeGood foreign key (GoodID) references dbo.Goods ( ID )
)
I'm trying to build a relationship (regression) between “Price Sale” and “Price Deliver” for the goods, but I do not know which column is better to choose as the “key column”: ID or GoodID?
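To make the choice concrete, the two candidate case grains look like this against the table above (the aggregation is only for illustration):

    -- Case = one income row; the case key is the row identifier ID.
    SELECT ID, GoodID, PriceDeliver, PriceSale
    FROM Income;

    -- Case = one good; the case key is GoodID and the rows must be
    -- aggregated to that grain first.
    SELECT GoodID,
           AVG(PriceDeliver) AS AvgPriceDeliver,
           AVG(PriceSale)    AS AvgPriceSale
    FROM Income
    GROUP BY GoodID;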
I am connecting to an SSAS cube from Excel, and I have a date dimension with 4 fields (I have others, but I don't use them for this case). I created 4 fields in order to test all the possible scenarios I could think of:
DateKey:
- Type: System.Integer
- Value: yyyyMMdd

Date:
- Type: System.DateTime

DateStr0:
- Type: System.String
- Value: dd/MM/yyyy (note: I am not using US culture)
- Example: 01/11/2015

DateStr1:
- Type: System.String
- Value: %d/%M/yyyy (note: I am not using US culture)
- Example: 1/11/2015
Filtering on date is working fine:
Initially, in Excel, filtering on date was not working. But after changing the dimension type to Time and setting the DataType to Date, as mentioned in [URL], the filter works fine, as you can see in the picture.

Grouping on date is not working:
I have a hierarchy in my Date dimension and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel, and he wants to use that. Excel's pre-built Group and Ungroup functions appear to be available, as you can see in the following picture:
But when the user clicks 'Group', Excel groups it as if it were a string, and that is the problem. The user wants to group using the pre-built grouping function available in the pivot table.

I also found out that a Power Pivot table does not support this Excel grouping functionality. If I understood correctly, Excel's pre-built grouping needs to do its calculation at run time, and that is not a viable solution if you have millions of rows; so a Power Pivot table does not support it, and you have to use a dimension hierarchy to do the grouping instead. But I am not using a Power Pivot table, I am using a simple pivot table, so I expect the grouping functionality to work.

Then I tried a simple test: I created a simple data source in Excel itself and used it as the source of my pivot table. With that source, grouping works fine. The only difference I can see (when double-clicking the measure value in Excel) is:

1. For the date values of my simple test, Excel considers them a 'Date'.
2. For the date values of my data coming from the cube, Excel considers them 'General'.
2.1. But the value here is the same as it was in the simple test.
2.2. 'Date Filter' works just fine.
2.3. If I just select this cell and unselect it, then Excel changes the type to 'Date', though only for that cell.
2.4. I created 4 different types of fields in my date dimension, thinking that the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.
2.5. This value (the one that can be seen when double-clicking a measure) comes from the 'NameColumn' of the attribute, whose DataType is WChar. I thought that might be the reason for the issue, so I changed it to 'Date', but SSAS does not allow the change, giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.
So I don't know what puzzle piece I am missing:
1. The date filter works, but grouping does not.
2. Excel considers the value a 'General' string.
3. SSAS does not allow changing the 'NameColumn' to Date.
I would like to know the best practice for running Analysis Services in terms of port usage. Is it better to run on a specific port or to have dynamic ports? We have clustered servers that run on the default 2383, but I am not sure what the best approach is for performance with non-clustered servers.
A user was created with limited privileges under the USERS group. Once this user logged in to Report Manager, he was acting like an Admin and Content Manager, even though he had not been given so much as a Browser role.
Why do you think this user is acting like a super user even though he is restricted to a Browser role in Report Manager?
On one of our machines, all of the SQL Server 2000 components except for the main Server component (the SQL Server core) itself were installed (Management tools, etc.) a while ago, and everything was running fine. Now I go and add/install the Server component and then Service Pack 3a.

It seems that Service Manager won't start up (I get an hourglass cursor), and now I find that Enterprise Manager won't run as well. No error messages appeared, and I don't think I saw anything unusual in the log file. However, I can use Enterprise Manager on a different machine and connect to the database (so the database itself seems to be running).

Any suggestions as to what the problem might be and how to fix it? I'd like to see if I can repair this without having to do a reinstall. Thanks. PF
If someone can assist me: every time I load up Enterprise Manager, Service Manager turns off, and Enterprise Manager can't connect to the local database. But every time I turn Service Manager back on and try to connect again, it shuts off again, and around and around we go. Help would be appreciated. Thanks.