SQL 2005 Analysis Service Clustering And Majority Node Set
Apr 4, 2008
Hi,
I am trying to set up a SQL 2005 Analysis Services cluster on our two servers, AS02 and AS04. The server cluster is built on a Majority Node Set (MNS). During the installation of Analysis Services, I don't see any available cluster groups. In Cluster Administrator, the cluster is up and running fine. The MNS cluster has no shared disk, and it has two nodes.
Any thoughts or suggestions? Or is it even possible to build a SQL 2005 Analysis Services cluster with MNS?
I am trying to set up a test cluster and am having an issue. When I try to create a physical disk resource, it takes both drive E: and drive Q: and doesn't separate them into two physical disk resources. This means that when I try to associate the quorum disk, it links to the physical disk resource containing both E: and Q:. Then, when I try to install SQL2k5, I get the warning about installing SQL on the quorum disk. Am I missing something? Is there a way to separate E: and Q: into two physical disk resources so I can specifically associate the quorum with Q: and SQL with E:, or should I be setting the quorum to a majority node set? Thanks in advance.
Our management wants to have two instances of SQL Server 2005 in a clustered environment.
The first instance will serve the existing application and the second instance will serve their new application.
Microsoft suggests that it's not good practice to have multiple instances on the same node unless you have a compelling reason for such a setup. The compelling reason management has is that, some time ago, they used the same instance for both applications. What happened was that the secondary application took all the resources from the primary (main) application, and the server went down (this was in the past).
So, now, in clustering, they want their first instance (which will serve our existing application) to have enough CPU cores and memory so that it can run smoothly, and then have the second instance use the remaining CPU power and memory. That way, if the second instance tries to eat up all the resources, it can only consume what it has been given and cannot take resources from the primary instance.
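A rough T-SQL sketch of how such a cap is usually expressed, with purely hypothetical numbers (a 64 GB / 8-core box split roughly 48 GB / 12 GB); on SQL 2005 the CPU split uses the 'affinity mask' option:

    -- Run against the primary (existing application) instance: hypothetical 48 GB cap.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 49152;
    RECONFIGURE;
    -- Bind this instance to CPUs 0-5 (bitmask 63 = binary 00111111).
    EXEC sp_configure 'affinity mask', 63;
    RECONFIGURE;

    -- Run against the second (new application) instance: hypothetical 12 GB cap, CPUs 6-7.
    -- EXEC sp_configure 'max server memory (MB)', 12288;
    -- EXEC sp_configure 'affinity mask', 192;   -- binary 11000000
    -- RECONFIGURE;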
But my idea here is this:
Initially, don't install the second instance for the new application in the clustering environment. Instead, use the existing servers for the upcoming new application, where production and DR are right now (standalone servers).
What I mean is: run the second application on the old PROD box, with the old DR serving as its DR, until we have made sure that the existing (primary) application is comfortable in the clustering environment in production. By going this way, we are not initially installing two instances on the same node. Once life is good for the first instance, then, based on those results, we can think about the second instance.
I wanted to know what you guys think about this. Does this idea look feasible? Please let me know.
I have created a cube with Analysis Services 2005. I then publish the pivot table (generated in Microsoft Excel 2003) as a web page. When I view the web page on the domain/LAN, everything works properly, but when I view the web page over the Internet I get this error:
The query could not be processed:
* An error was encountered in the transport layer
* The peer prematurely closed the connection.
The web page is published on Internet Information Server; I changed the directory security of the website, but the error persists.
The firewall of the machine is disabled.
Ports 2382 and 2883 are allowed.
The components needed to query the cube are installed on the machine from which the cube is viewed.
The Analysis Services role allows all users.
I have an existing SQL 2005 active/passive cluster. I need to add Analysis Services to it as a new component. I have enough space available on the cluster disks. These are my questions:
1) Do I need a new virtual IP and virtual server name for Analysis Services?
2) Do I need a separate cluster resource group for Analysis Services, with an additional disk added?
I'm new to this forum. Maybe my way of expressing things is not very good, but I hope to be understandable.
I have a SQL Server 2005 database with 90 columns and roughly 185 thousand records. I have to run Microsoft Association Rules on my laptop (Sony Vaio SZ3, Core 2 Duo, 2 GB RAM).
The problem is that the amount of RAM does not seem to be enough (it starts to swap when reading the 240th case).
Because of this, I decided to sample my data by extracting 10 thousand records randomly... it now takes 25 minutes (more or less), but that's still too much...
Does a better way exist? What is the problem: the number of columns or the number of rows?
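If the random extraction itself is part of the cost, a minimal T-SQL sketch of two common sampling approaches (table names are hypothetical):

    -- 1) TABLESAMPLE: very fast because it samples whole pages,
    --    so the returned row count is only approximate.
    SELECT *
    INTO dbo.Cases_Sample
    FROM dbo.Cases TABLESAMPLE (10000 ROWS);

    -- 2) ORDER BY NEWID(): exact row count, but reads and sorts the whole table.
    SELECT TOP (10000) *
    INTO dbo.Cases_Sample2
    FROM dbo.Cases
    ORDER BY NEWID();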
I've installed SSAS 2012 as a failover cluster instance on a 2-node Windows 2012 R2 cluster. All was fine in that regard but I now need to change the instance name so am going about the process of uninstalling and re-installing. I've successfully removed the second node from the configuration using the "Remove cluster node" wizard from the Setup options, but when it comes to removing it from the final node I'm getting the error "the cluster group **** could not be moved from node x to node 'null'". At this point the uninstall fails and the cluster resource remains in cluster administrator. It's obviously done some work/damage because the service won't start now, but it's left in limbo land where I can't do anything with it.
Is there a manual way of removing the instance to overcome this issue (file deletions, registry entry deletions etc.)?
I am connecting to an SSAS cube from Excel, and I have a date dimension with 4 fields (I have others, but I don't use them in this case). I created the 4 fields in order to test all the possible scenarios I could think of:
DateKey: Type: System.Integer - Value: yyyyMMdd
Date: Type: System.DateTime
DateStr0: Type: System.String - Value: dd/MM/yyyy (note: I am not using US culture) - Example: 01/11/2015
DateStr1: Type: System.String - Value: %d/%M/yyyy (note: I am not using US culture) - Example: 1/11/2015
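For reference, a sketch of how such columns could be derived in the date dimension's source view, using hypothetical table/column names and standard CONVERT styles:

    SELECT
        CONVERT(int, CONVERT(char(8), d.CalendarDate, 112))           AS DateKey,   -- 20151101
        d.CalendarDate                                                 AS [Date],    -- datetime
        CONVERT(char(10), d.CalendarDate, 103)                         AS DateStr0,  -- 01/11/2015
        CAST(DAY(d.CalendarDate) AS varchar(2)) + '/'
            + SUBSTRING(CONVERT(char(10), d.CalendarDate, 103), 4, 7)  AS DateStr1   -- 1/11/2015
    FROM dbo.DimDate AS d;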
Filtering on date is working fine:
Initially, filtering on date was not working in Excel. But after changing the dimension type to Time and setting the DataType to Date, as mentioned in [URL], the filter works fine, as you can see in the picture.
Grouping on date is not working:
I have a hierarchy in my Date dimension, and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel, and he wants to use that. The pre-built Group and Ungroup functions of Excel appear to be available, as you can see in the following picture:
But when the user clicks 'Group', Excel groups it as if it were a string, and that is the problem. The user wants to group using the pre-built grouping function available in the pivot table. I also found out that Power Pivot does not support this Excel grouping functionality. If I understood correctly, this pre-built grouping functionality of Excel needs to do the calculation at run time, which is not a viable solution if you have millions of rows. So Power Pivot does not support Excel's pre-built grouping, and you have to use a dimension hierarchy to do the grouping. But I am not using Power Pivot, I am using a simple pivot table, so I expect the grouping functionality to work. I then tried a simple test: I created a simple data source in Excel itself and used it as the source of my pivot table. With that, grouping works fine. The only difference I can see (when double-clicking the measure value in Excel) is:
1. For the date values of my simple test, Excel considers them 'Date'.
2. For the date values of my data coming from the cube, Excel considers them 'General'.
2.1. But the value here is the same as it was in the simple test.
2.2. 'Date Filter' works just fine.
2.3. If I just select the cell and unselect it, Excel changes the type to 'Date', though only for that cell.
2.4. I created 4 different types of fields in my date dimension, thinking the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.
2.5. This value (the one seen when double-clicking the measure) comes from the 'NameColumn' of the attribute, and the DataType defined there is WChar. I thought that might be the cause, so I changed it to 'Date', but SSAS does not allow the change, giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.
So, I don't know what puzzle piece I am missing.
1. The date filter works; Group does not work.
2. Excel considers the value a 'General' string.
3. SSAS does not allow changing 'NameColumn' to Date.
I would like to know the best practice for running Analysis Services in terms of port usage. Is it better to run on a specific port or to use dynamic ports? Our clustered servers run on the default 2383, but I'm not sure what the best approach is for non-clustered servers to get good performance.
In our project, we would like to use the same data source for our Analysis Services database cubes and for our Reporting Services reports.
I created the Analysis Services project first and deployed it successfully. When trying to set up the data source in the report model project, I selected "create a data source based on another object", and then "create a data source based on an Analysis Services project". However, there is no Analysis Services project to select, and no browse button to see where Reporting Services is looking for the Analysis Services project either.
I have tried creating a new Analysis Services project with data source views, cubes, dimensions and all the rest, but I still cannot see the Analysis Services project in the drop-down box to be selected for my report model project data source.
As I am fairly new to Reporting Services, I'm sure I'm missing something, but I couldn't find much information in the help or on the web. Any suggestion would be much appreciated.
I have configured an AlwaysOn HA setup with the following environment and versions, and it works fine from the client side when connecting via the AG listener.
Two-node WSFC setup. Node1 (primary replica), Node2 (secondary replica). Normal quorum (no shared disk).
Server platform: Hyper-V
OS: Windows Server 2008 R2 64-bit with SP1
RDBMS: MS SQL Server 2012 64-bit with SP2
When I tested the failover method between the two nodes, automatic failover did not happen.
I simply shut down Node1 and tried to connect from the client side using the AG listener name, but could not; the cluster name and the AG listener name also did not respond to ping, and the cluster service on Node2 did not start automatically, due to the error below. So how can clients connect, and how do I overcome this issue? Do any modifications need to be made on the WSFC side or the AlwaysOn SQL side to achieve automatic failover?
Cluster network 'Cluster Network 1' is partitioned. Some attached failover cluster nodes cannot communicate with each other over the network. The failover cluster was not able to determine the location of the failure. Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.
The client side could connect again once Node1 was started and the cluster services came back up.
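As a first diagnostic step (assuming SQL Server 2012 as stated): automatic failover requires both replicas to be synchronous-commit with automatic failover, and the databases to be SYNCHRONIZED at the moment the primary disappears. A query along these lines shows how the replicas are actually configured:

    SELECT  ar.replica_server_name,
            ar.availability_mode_desc,      -- should be SYNCHRONOUS_COMMIT on both
            ar.failover_mode_desc,          -- should be AUTOMATIC on both
            ars.role_desc,
            ars.synchronization_health_desc
    FROM sys.availability_replicas AS ar
    JOIN sys.dm_hadr_availability_replica_states AS ars
            ON ar.replica_id = ars.replica_id;

Separately, with only two voting nodes and no witness on Windows Server 2008 R2, shutting down Node1 removes the quorum majority, which would also explain the cluster service stopping on Node2.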
Since some Analysis Services features are only available in the Enterprise edition, I have to upgrade my SQL 2005 server from Standard Edition to Enterprise Edition.
So I uninstalled the original Standard edition of Analysis Services and installed the Enterprise edition. However, Analysis Services still reports as Standard edition after installation.
Is it possible to keep the database engine as Standard edition and install an Enterprise edition of Analysis Services?
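A quick check of what the relational engine reports after the reinstall (the Analysis Services edition is shown separately, in the SSAS instance's server properties in Management Studio):

    SELECT  SERVERPROPERTY('Edition')        AS Edition,
            SERVERPROPERTY('ProductVersion') AS ProductVersion,
            SERVERPROPERTY('ProductLevel')   AS ProductLevel;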
I am just wondering about the discretization function available on the Analysis Services server (which cannot actually discretize data based on user-defined expressions). Isn't it redundant? I mean, users can discretize their data more meaningfully, based on their own expressions, in the database engine of SQL Server 2005.
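To illustrate, a minimal sketch of user-defined discretization done in the database engine (table and column names are hypothetical), as an alternative to the server-side discretization methods:

    SELECT  CustomerKey,
            YearlyIncome,
            CASE WHEN YearlyIncome < 30000 THEN 'Low'
                 WHEN YearlyIncome < 70000 THEN 'Medium'
                 ELSE 'High'
            END                                    AS IncomeBand,     -- business-defined buckets
            NTILE(4) OVER (ORDER BY YearlyIncome)  AS IncomeQuartile  -- equal-count buckets
    FROM dbo.Customer;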
I get this system error in the event viewer every time I start the SQL Server cluster resource:
The Microsoft Clustering Service failed to restore a registry key for resource SQL Server when it was brought online. The error code was 2. Some changes may be lost.
Even though SQL seems to be working fine now, I don't know whether something drastic will happen later.
Searching the Internet for a resolution, I found Microsoft Knowledge Base article 307469 (http://support.microsoft.com/default.aspx?kbid=307469), which requires using the Windows Server 2003 ClusterRecover utility to reset the server cluster checkpoints.
I've not tried it because I don't know whether this is the correct solution or whether it will work.
Can I solve the error without using the above utility? I also cannot afford to reinstall the clustering or SQL Server.
I'm just starting to work with AlwaysOn Availability and WSFC.
I have in my environment (in Azure) a DC, a WSFC, and two SQL instances, so I have 3 nodes in my failover cluster:
WSFC SQL1 SQL2
If I simulate failure by shutting down one of the SQL boxes my Availability group seamlessly fails over to the other SQL instance - which is great.
However, I'm starting to look into the workings of the quorum. My environment has the default settings, and when I shut down both of my SQL servers I expected the cluster itself to go offline, since 2 out of the 3 votes would be lost, but the cluster is still up. Screenshot below, taken when SQL1 and SQL2 are shut down:
Going through the Wizard (but not changing anything) it shows following config:
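One thing worth checking from a SQL node while it is up (assuming SQL Server 2012+): how the instance sees the quorum. If the nodes run Windows Server 2012 or later, dynamic quorum recalculates votes as nodes go down, which may be why the cluster stays up longer than a static 2-out-of-3 count would suggest.

    SELECT cluster_name, quorum_type_desc, quorum_state_desc
    FROM sys.dm_hadr_cluster;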
I have not used log shipping before, and I find myself in a position where I need to reboot the secondary node and then the primary node; I don't actually need to fail over.
Is there anything I need to be aware of? When rebooting the secondary node, I assume the transactions will be held in the primary node's log until the secondary comes back, and things will just carry on once it's back up?
When rebooting the primary node, does nothing need to be done, and will log shipping just start again once it has come back?
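After each reboot, a quick sanity check of the backup/copy/restore chain can confirm that things have caught up; these are the standard log shipping monitor objects in msdb:

    -- Summary report (run on the monitor server, or on each server involved).
    EXEC sp_help_log_shipping_monitor;

    -- Or look at the monitor tables directly.
    SELECT primary_server, primary_database, last_backup_date
    FROM msdb.dbo.log_shipping_monitor_primary;

    SELECT secondary_server, secondary_database, last_copied_date, last_restored_date
    FROM msdb.dbo.log_shipping_monitor_secondary;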
But I'm not sure if I have to install SQL Server first on node 2, then add it to the cluster. Or does adding it to the cluster also install the software?
Is there any method by which I can use stored procedures in Analysis Services? I have some procedures which use temporary tables, and I want to use those procedures' output columns. Is it possible?
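A data source view in Analysis Services works against tables, views, and named queries rather than stored procedure calls, so one common workaround (sketched below with hypothetical names) is to rewrite the procedure's final SELECT as a view or an inline table-valued function and point the DSV at that:

    CREATE FUNCTION dbo.fn_SalesSummary (@Year int)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT  s.ProductKey,
                SUM(s.SalesAmount) AS TotalSales,
                COUNT(*)           AS OrderCount
        FROM dbo.FactSales AS s
        WHERE s.OrderYear = @Year
        GROUP BY s.ProductKey
    );
    GO

    -- A DSV named query can then simply select from it:
    -- SELECT * FROM dbo.fn_SalesSummary(2008);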
I am running XP in the office and already have SQL's Enterprise Manager and Analysis services installed - which I can use to access the SQL databases on our office server.
Question: I have installed the desktop version of SQL on my machine and am trying to register the server in Analysis Services Manager, but no luck.
I was following the steps on pages 15/16 of the attached sheet.
I was following the Microsoft cube example to set up a DSN source connection to the Access database (FoodMart). When I do a test, it works,
but after I run Design Storage and go to process the cube, it gives me the error:
'Test connection failed because of an error in initializing provider. [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified'
One more error: [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed: IM006
I'm contemplating running two availability groups on a two-node WSFC. The WSFC is set up with a file share witness (i.e. no shared storage). Can I safely run one AG with its primary on one node, and the other AG with its primary on the other node? Each AG would have a replica on the other node. This would effectively allow both servers to be in use at the same time. In a failover event, I understand that both workloads would transfer to a single server, so the box needs to be sized appropriately.
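A quick way to confirm where each availability group is currently primary (assuming SQL Server 2012+):

    SELECT  ag.name                 AS availability_group,
            ar.replica_server_name,
            ars.role_desc           -- PRIMARY or SECONDARY
    FROM sys.availability_groups AS ag
    JOIN sys.availability_replicas AS ar
            ON ag.group_id = ar.group_id
    JOIN sys.dm_hadr_availability_replica_states AS ars
            ON ar.replica_id = ars.replica_id;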
We are in the process of building a 3 node SQL Server Cluster (Server 2012/ SQL Server 2012), and we have configured the quorum so that all 3 nodes have a vote (no file share witness as we already have an odd number of nodes).
As I understand it, this should allow the cluster to run as long as 2 of the nodes remain online.
However, the validation report states that 2 node failures would be acceptable and, when we tested this by powering off two of the nodes, the cluster did indeed continue to run on a single node.
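For completeness, a hedged check from any node that is still up, showing each cluster member's vote as SQL Server sees it (SQL Server 2012+; on an instance without the AlwaysOn feature enabled this DMV may return no rows). Windows Server 2012 also introduced dynamic quorum, which removes the votes of nodes as they go down, which is consistent with the cluster continuing to run on a single node after sequential failures.

    SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
    FROM sys.dm_hadr_cluster_members;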
I give up, how do you start the Analysis Manager? SSAS is installed and running, but I just can't seem to find the button/shortcut/whatever to start the manager.
I was facing problems applying SP3 for Analysis Services so I removed it and then re-installed.
I then tried to apply SP3 but I am still getting v8.00.382 for AS (which represents SP1). I have tried a couple of things but it seems that SP3 is just not getting applied (for Analysis Services).
I have installed MS Analysis Services (OLAP) on my laptop.
My user is in the Administrators group; besides that, I tried creating a user with 'OLAP Administrator' privileges and tried to start the MS Analysis Services service.
I am getting the error: unable to start the service on server(); you are not a member of the OLAP Administrators group.
I tried to install Service Pack 3 from Microsoft.
Could you please guide me on how to start this service? Thanks.
I have an AAAA project created in BI. Now I want to change my computer, so I generated the XMLA from the AAAA project in BI and ran it as an Analysis Services XMLA query, but it returns the following error: Executing the query ... Either the user, PCHOMEmax, does not have access to the Analysis Services Project1 database, or the database does not exist.
Execution complete
I tried another way: I right-clicked my database in Analysis Services, chose "Script Database as" --> CREATE To..., and got a new XMLA file. I went to the other computer and ran it, but SQL returned the following error: Executing the query ... Either the user, PCHOMEmax, does not have access to the AAAA database, or the database does not exist.
Execution complete
How can I solve this problem? Thanks.