Analysis :: Looping AMO Objects Versus Using SSAS Built-in Parallel Processing?
Jul 22, 2015
I am creating an SSIS Script Task that will process SSAS dimensions and partitions and, ideally, log the details of each in a table. I'm looking for any info on the benefits or drawbacks of using the built-in SSAS parallel processing as opposed to doing it manually in a multi-threaded "Parallel.ForEach" loop using the .NET AMO library.
In my testing, when I use a Parallel.ForEach loop, I am able to obtain and log information about each object, such as end time and time to process, immediately after each object is processed. This allows me to keep a history of processing time for each object:
public void processDimensions(Server Server, Database Database, ProcessType processType)
{
    Parallel.ForEach(Database.Dimensions.OfType<Microsoft.AnalysisServices.Dimension>(), d =>
    {
        DateTime beginTime = DateTime.Now;
        try
        {
            d.Process(processType);
[code]....
If circumventing the built-in SSAS parallel processing is not best practice I'd like to know in advance before I go too far down that path.
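For illustration only, here is a rough sketch of both approaches: the client-side Parallel.ForEach loop with per-object timing (as in the snippet above, using a hypothetical LogProcessing helper), and the server-side alternative where the AMO commands are captured and sent as one batch so Analysis Services parallelizes the work itself.

// Sketch only: assumes the AMO Server/Database objects are already connected;
// LogProcessing is a hypothetical helper standing in for the logging-table insert.
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AnalysisServices;

public class CubeProcessor
{
    // Option A: client-side parallelism with per-object timing, as described in the post.
    public void ProcessDimensionsInParallel(Database database, ProcessType processType)
    {
        Parallel.ForEach(database.Dimensions.OfType<Dimension>(), d =>
        {
            DateTime beginTime = DateTime.Now;
            try
            {
                d.Process(processType);
                LogProcessing(d.Name, beginTime, DateTime.Now, null);
            }
            catch (Exception ex)
            {
                LogProcessing(d.Name, beginTime, DateTime.Now, ex.Message);
            }
        });
    }

    // Option B: let Analysis Services parallelize. Calls are captured instead of
    // executed, then sent as one batch; only batch-level timing is available here.
    public void ProcessDimensionsAsBatch(Server server, Database database, ProcessType processType)
    {
        server.CaptureXml = true;
        foreach (Dimension d in database.Dimensions)
        {
            d.Process(processType);              // captured, not executed yet
        }
        server.CaptureXml = false;
        server.ExecuteCaptureLog(true, true);    // transactional = true, parallel = true
    }

    private void LogProcessing(string objectName, DateTime begin, DateTime end, string error)
    {
        // Placeholder: write to the logging table here.
        Console.WriteLine($"{objectName}: {(end - begin).TotalSeconds:F1}s {(error ?? "OK")}");
    }
}

With the batch approach the server decides the degree of parallelism, but per-object timings would likely have to come from Profiler/trace events rather than from the loop itself.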
View 2 Replies
Mar 27, 2012
I am trying to create a calculated member for the ParallelPeriod function using SSAS 2012. I have 10 measures for which I need to create a ParallelPeriod.
I can successfully create it for one measure, but when I add multiple measures it breaks. Below is the sample I tried for multiple measures:
sum(ParallelPeriod([Date].[Calendar].[year],1,[Date].[Calendar].currentmember) ,
([Measures].[Revenue],[Measures].[Expenses]))
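For illustration only (measure and hierarchy names are taken from the sample above), the usual workaround is one calculated member per measure, each using a tuple with a single measure, rather than putting two measures from the same hierarchy into one tuple; the pattern is then repeated for each of the 10 measures.

// Sketch only: one member per base measure.
CREATE MEMBER CURRENTCUBE.[Measures].[Revenue PY] AS
    ( ParallelPeriod([Date].[Calendar].[Year], 1, [Date].[Calendar].CurrentMember),
      [Measures].[Revenue] );

CREATE MEMBER CURRENTCUBE.[Measures].[Expenses PY] AS
    ( ParallelPeriod([Date].[Calendar].[Year], 1, [Date].[Calendar].CurrentMember),
      [Measures].[Expenses] );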
View 10 Replies
View Related
Oct 21, 2015
I am facing an issue with partition processing. I have an SSAS cube with 5 partitions. These partitions are processed through a SQL Server job using SSIS packages; in the packages I use the SSAS processing task to do this. Now the problem is that the job runs successfully and shows the partition-processing step as fine, but the data is not updated in the partition. When checking the partition properties, they are not updated with a recent date and time.
When I try to process the partition manually, it succeeds and the recent data is reflected, with a recent date and time. Package configuration is done in the job itself.
View 4 Replies
View Related
Apr 16, 2015
I am working on SQL 2012. We have an SSAS cube built, and on top of it clients use Excel to connect to the cube and view reports. The cube is processed every hour and takes almost 1-2 minutes to process. Whenever the end user/client views or refreshes the Excel report (which uses the cube as its source) while the cube is processing, they get an error that the source is not available.
We have tried our best, but we cannot bring the processing time of the cube down to 3-5 seconds so that end users don't face report refresh issues at the moment of cube processing.
Requirement: in case the end user views the report while the cube is processing on the back end, instead of an error they should see a customized message we can provide, something like "Data is refreshing, please wait".
View 5 Replies
View Related
Oct 18, 2012
How can I automatically update my tabular model in the future when there's an update in my database?
View 4 Replies
View Related
Nov 4, 2015
I get the following error while processing an SSAS tabular model (2014) on a new server. The SSAS service on this server runs under a login which has access to the SQL Server data sources. I tried changing the provider to OLEDB from SQLNCLI11 in the connection string, but that doesn't work either. The error message isn't useful for further debugging.
The cube processing succeeds on a different server. I scripted out the cube DB, ran it on the new server, and am trying a full process, but it fails with the following error.
Error Message:
The operation failed because the source database does not exist, the source table does not exist, or because you do not have access to the data source.
More Details:
OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections.
For more information see SQL Server Books Online.; 08001; SSL Provider: No credentials are available in the security package
; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
A connection could not be made to the data source with the DataSourceID of 'd7a37dae-be87-44e0-a8b2-498069af82c9', Name of 'connection name'.
An error occurred while processing the partition 'XXXX_460f3467-1a99-4dc9-aaf2-bcf3d54a5c4c' in table 'XXXX_460f3467-1a99-4dc9-aaf2-bcf3d54a5c4c'.
The current operation was cancelled because another operation in the transaction failed. The operation failed because the source database does not exist, the source table does not exist, or because you do not have access to the data source.
View 3 Replies
View Related
May 8, 2015
I'm facing an issue while processing OLAP. I have enabled BitLocker for drive encryption, and the OLAP service uses this drive for DB storage. OLAP processing is executed through an SSIS package, and I'm getting the error below in the package. When debugging the script, it says the drive is encrypted using BitLocker.
My client requires TDE for all databases; for OLAP we decided to use BitLocker: [URL] ....
SQL Server is installed on the C drive, and the D drive is the storage location for the OLAP DB. When the D drive is locked, OLAP processing fails. When I tried to restart the SQL Server Analysis Services service in Services.msc, it would not start; the service restarted only when the D drive was unlocked. Is there any way we can process OLAP even when the drive is locked?
Error message is given below:
"The following system error occurred: This drive is locked by BitLocker Drive Encryption. You must unlock this drive from Control Panel. "
View 3 Replies
View Related
Jul 6, 2015
I have been tasked with processing a large tabular cube using SQL AS 2014 (with the latest CUs). The three fact tables, each with 1.2 billion rows, have been divided into 30 vertical partitions to aid parallel processing, so around 40 million rows per partition.
Using SQL Profiler to monitor the row counts (IntegerData) of records processed, throughput seems to max out around 2 million rows per minute, then tapers down to about 200k/minute.
The processing is taking over 14 hours and I need to get it lower if possible. The server has 48 cores (2.66 GHz) and over 1 TB of RAM installed, but I never see CPU exceed 20%, with a maximum of 206 threads running on the instance (msmdsrv.exe).
Available RAM is always at least 30% (or 300 GB).
I have increased the VertiPaq MIN/MAX memory limits to 60%/80%.
I have increased OLAP / Processing / Max Thread Pool to Min 500 and Max 1000.
The connection properties have been increased to allow 100 connections; the majority of the processing consumes about 92 connections for the 90 large partition views for the facts.
What can be done to increase server resource utilization and decrease processing times?
I have increased both
View 5 Replies
View Related
Mar 28, 2006
I know this question has been asked. And the usual answer is don't use cursors or any other looping method. Instead, try to find a solution that uses set-based queries. But this brings up several questions / scenarios:

* I created several stored procedures that take parameters and insert the data into the appropriate tables. This was done for easy access/use from client side apps (i.e. web-based). Proper development tactics say to try and do "code reuse". So, if I already have stored procs that do my logic, should I be writing a second way of handling the data? If I ever need to change the way the data is handled, I now have to make the same change in two (or more) places.

* Different data from the same row needs to be inserted into multiple tables. "Common sense" (maybe "gut instinct" is better) says to handle each row as a "unit". Seems weird to process the entire set for one table, then to process the entire set AGAIN for another table, and then YET AGAIN for a third table, and so on.

* Exception handling. Set-based processing means that if one row fails the entire set fails. Looping through allows you to fail a row but allow everything else to be processed properly. It also allows you to gather statistics. (How many failed, how many worked, how many were skipped, etc.)

?? Good idea ?? The alternative is to create a temporary table (sandbox or workspace type thing), copy the data to there along with "status" or "validation" columns, run through the set many times over looking for any rows that may fail, marking them as such, and then at the end only dealing with those rows which "passed" the testing. Of course, in order for this to work you must know (and duplicate) all constraints so you know what to look for in your testing.
View 13 Replies
View Related
Dec 9, 2005
SQL Server 2000 SP3A. Last week one of our processes started issuing or suffering deadlock detected errors every 15 minutes or so. I have read several articles at MS on the subject. I set a couple of startup parameters related to producing deadlock detection information and ran SQL Profiler. I found the SQL statements being issued by the deadlocked statements. In every deadlock the same UPDATE statement appears, however the data values being searched on are different. The best I can tell from trying to query the actual data, each update hits only one or very few rows. No indexed column is updated, so the indexes should not be the source of conflict.

Looking at the query I noticed that the query does not have an available index, and Query Analyzer shows that the full table scan is being done in parallel.

My question: Does SQL Server change or modify its locking rules when queries are converted to be run using parallel processing? If so, do you have a reference?

Here are the deadlock entries posted to the error log:

SPID=167
ResType:LockOwner Stype:'OR' Mode: IX SPID:63 ECID:0 Ec:(0x65971510) Value:0x3c577e60 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '110', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119323' AND Station = 'H110'

SPID=63
ResType:LockOwner Stype:'OR' Mode: IX SPID:167 ECID:0 Ec:(0x65801510) Value:0x3c27d060 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload set Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '70', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119060' AND Station = 'H070'

I have suggested adding an index to support the query. Any ideas? Thanks -- Mark D Powell --
View 4 Replies
View Related
Oct 7, 2006
Hi,
I am facing some problems while using the FOR loop container to execute 7-10 packages in parallel.
The main package has 7 FOR loop containers, say F1-F7.
Each FOR loop container has 2 tasks:
T1 ==> exec child package C1
T2 ==> exec delay task Delay1.
The idea is to run child packages C1-C7 in parallel, delay for some time, and then run again, since they are in the FOR loop container.
I am facing some problems:
1. The execution of tasks T1-T7 is not guaranteed. This means SSIS picks any 6 of the tasks T1-T7 randomly to start with. 6 is the max it processes, whereas I have more than that. Can I change this setting?
2. It's not guaranteed that if, say, task T1 of FOR loop F1 is executed, the subsequent delay task within that FOR loop will be executed next. Typically what happens is that it starts with T1-T6 (T7 on hold), then executes the delay for T1-T5 and passes control to T7 without going into the delay for T6. This is not the intended execution.
What I want is: execute T1-T7, delay, then execute again.
How do I do this?
View 5 Replies
View Related
Mar 18, 2015
I created a main program which launches two jobs at a time; each job does some processing, and at the end I'm trying to delete those jobs after storing the job details in a custom table I created (a cleanup sub-program).
Out of the two jobs, I am able to store one job's details (job_name, job_id, start_time and end_time) in the custom table and delete that job, but the job that completes last is neither getting captured nor getting deleted from the sysjobs and sysjobhistory tables.
I included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see the cleanup procedure getting called from a debug message, but it neither stores the details nor deletes the job.
When I execute the cleanup program separately, it does store the job details and delete the job.
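For illustration only, here is a minimal sketch of the kind of cleanup step described, assuming a hypothetical dbo.JobRunHistory table and job name (adjust to the actual schema):

-- Sketch only: capture the job's outcome rows, then remove the job.
DECLARE @job_name sysname = N'MyChildJob';   -- hypothetical job name
DECLARE @job_id   uniqueidentifier;

SELECT @job_id = job_id
FROM msdb.dbo.sysjobs
WHERE name = @job_name;

IF @job_id IS NOT NULL
BEGIN
    INSERT INTO dbo.JobRunHistory (job_name, job_id, run_date, run_time, run_duration)
    SELECT @job_name, @job_id, h.run_date, h.run_time, h.run_duration
    FROM msdb.dbo.sysjobhistory AS h
    WHERE h.job_id = @job_id
      AND h.step_id = 0;                      -- step 0 is the job outcome row

    EXEC msdb.dbo.sp_delete_job @job_id = @job_id, @delete_history = 1;
END

One thing worth checking is timing: the job outcome row in sysjobhistory is only written after the last step finishes, so a cleanup step that runs as part of the job itself may be looking for history that does not exist yet.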
View 0 Replies
View Related
May 12, 2008
I want to apply the same query to 250 tables (see below).
It would be very painful to code 250 individual queries.
Is there a way to loop through all 250 tables using "sysobjects WHERE type = 'U'" and apply the code below to each table (the users table would remain constant)?
select co.*
from company co
where not exists(select * from users u
    where (u.users_id = co.rn_create_user) or (u.users_id = co.rn_edit_user)
)
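For illustration, one possible approach is a cursor over sysobjects building dynamic SQL; the sketch below assumes every user table has the rn_create_user / rn_edit_user columns, like the company table above.

-- Sketch only: runs the same NOT EXISTS check against every user table.
DECLARE @table sysname, @sql nvarchar(max);

DECLARE table_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sysobjects WHERE type = 'U';

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT t.* FROM ' + QUOTENAME(@table) + N' AS t
                 WHERE NOT EXISTS (SELECT * FROM users u
                                   WHERE u.users_id = t.rn_create_user
                                      OR u.users_id = t.rn_edit_user);';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM table_cursor INTO @table;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;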
View 11 Replies
View Related
May 13, 2014
I have a cube that we process nightly via an Analysis Services Processing Task in SSIS. In order to improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use the cube otherwise. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already deployed cube.
View 2 Replies
View Related
Apr 12, 2007
I've got an SSRS report that is set up using a data-driven subscription to supply input parameters to the stored procedure that is called to generate the report results.
I was wondering if there is any way to specify the execution processing method (running the reports in parallel or serially). The subscription that we have set up appears to be running all of the reports in parallel which is causing massive load on our servers.
Thanks.
View 3 Replies
View Related
Aug 4, 2006
Hi,
I have a problem with processing my cube. My fact table (with telephone data) contains about 400,000 records... which is increasing rapidly (400,000 records is about 8 months of data)...
I have a few dimensions:
Dimension User: about 200 records
Dimension Line: about 200 records
Dimension Direction: 4 records
Dimension Date: 365 records for each year
Dimension TimeInterval: with 24 records
So far so good... when I process this dimension I have no problem....
However, when I add a dimension (CalledNumber, with exactly 101 records) the processing hangs as soon as it starts...
The SQL performed when processing the cube looks like this:
SELECT field1, field2,... fieldn
FROM table1, table2,.... tablem
WHERE
(table1.id=table2.table1id)
AND
(table2.id=table3.table2id)
...
When I execute the above SQL in Query Analyzer from SQL Server Enterprise Manager, it ALSO hangs...
I am not really surprised by that, because this SQL first creates a huge table of 400,000 x 200 x 200 x 4 x 365 x 24 x 101 records and only afterwards works through the WHERE clauses to filter out the appropriate records.
For me it would be more logical to use the following code to process the cube, but that cannot be changed in Analysis Manager:
SELECT field1, field2,... fieldn
FROM table1
LEFT JOIN table2 ON (table1.id=table2.table1id)
....
LEFT JOIN tablem ON (tablem.id = tablem-1.tablemid)
When I execute the above SQL in Query Analyzer from SQL Server Enterprise Manager, it does NOT hang, but performs the query in about 35 seconds...
But Analysis Manager does not allow me to change the SQL used for processing the cube...
What can I do to add more dimensions to my cube... (It will be more anyway after adding the CalledNumber dimension)??
any suggestions?
PS. forgot to mention: I use Sql Server 2000
View 1 Replies
View Related
Aug 4, 2015
I have made a calculated member for the previous period of a given date range. The previous period is the same date range from the previous year, and I have managed to achieve that with this calculated member:
Create member currentcube.[Measures].[PrevPeriod] as
(ParallelPeriod( [Start Date].[Cal Hierarchy].[Year], 1, [Start Date].[CAL Hierarchy].CurrentMember), [Measures].[Count]);
This member returns the correct result as long as my query uses the time dimension, which makes sense... but I also need to show results sliced by other dimensions in bar charts that do not display the time dimension. For example, I have a dimension with only 3 members called [Region].[Area].[AreaName].
The result set for the bar chart needs to look like this:
[AreaName] | [Count] | [PrevPeriod]
East | 43 | 56
West | 53 | 95
But the [PrevPeriod] only returns values if I include the time dimension. I essentially need to sum the results of the time dimension/AreaName/[PrevPeriod] tuple down to just Areaname/[PrevPeriod] for whatever date range may be involved.
I don't know if this is significant to the issue, but the client tool that generates the bar charts builds the query with the date range as a subcube in the FROM statement. If the [PrevPeriod] is outside of the subcube that is still OK, as long as the time dimension is included in an Axis on the final select statement, so at least I know I am not suffering from the members inside the subcube. I've also found in SSMS that it makes no difference if I make the query a subcube, or put the date range in a where clause instead; I still get NULL for [PrevPeriod] without the dates.
I can't imagine that this is an unusual situation, so I hope I've explained it adequately! What is the recommended technique for summarizing a ParallelPeriod by dimensions without displaying the time/dates?
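For illustration only, one approach that is sometimes used is to aggregate the prior-period tuple over whatever date members are in context, so the time hierarchy never has to appear on an axis. The [Date] level name below is an assumption, and whether EXISTING picks up a sub-select date range depends on the SSAS version and on how the client issues the query, so treat this as a sketch to test rather than a guaranteed fix.

// Sketch only: sums the prior-year value over the date members currently in context.
CREATE MEMBER CURRENTCUBE.[Measures].[PrevPeriod] AS
    SUM(
        EXISTING [Start Date].[Cal Hierarchy].[Date].Members,
        ( ParallelPeriod(
              [Start Date].[Cal Hierarchy].[Year], 1,
              [Start Date].[Cal Hierarchy].CurrentMember ),
          [Measures].[Count] )
    );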
View 6 Replies
View Related
Nov 4, 2009
I am getting the following error when I am processing the cube in SSAS 2008: "Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated. Errors in the OLAP storage engine: An error occurred while the 'Policy Type' attribute of the 'Policy Type' dimension from the 'MyDemo' database was being processed." I verified the data type and column length for the PolicyType column in the dimension as well as in all fact views.
View 9 Replies
View Related
May 13, 2015
I am struggling to calculate the full year in my SSAS cube. Meaning, regardless of what fiscal year hierarchy level I am in, I need a measure aggregating from 01/01 of the current member's year to 12/31 of the current member's year.
I want to replicate it using the Year To Date below:
FY-FQ-FM is the fiscal year-quarter-month hierarchy I am using for built-in time intelligence.
Create Member
CurrentCube.[DimTime].[FY-FQ-FM DimTime Calculations].[Year to Date]
As "NA";
/*Year to Date*/
(
[DimTime].[FY-FQ-FM DimTime Calculations].[Year to Date],
[Code] ....
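As a rough sketch only (the hierarchy, level and measure names below are placeholders to substitute), a full-year member can be built by taking the year-level ancestor of the current time member and aggregating its leaf descendants, so that 01/01 through 12/31 is covered regardless of the level selected:

// Sketch only: placeholder names, not the cube's real objects.
CREATE MEMBER CURRENTCUBE.[Measures].[Full Year Total] AS
    AGGREGATE(
        DESCENDANTS(
            ANCESTOR([DimTime].[Calendar].CurrentMember,
                     [DimTime].[Calendar].[Calendar Year]),
            [DimTime].[Calendar].[Date]),
        [Measures].[Amount]
    );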
View 3 Replies
View Related
Nov 30, 2015
I have developed a cube at my workplace for analyzing current year sales versus previous year sales in a time hierarchy (Year-Quarter-Month) using ParallelPeriod. If we want to see data for particular quarters, i.e. Q1 and Q2, then the total at the year level should also change. Currently, if we choose only 2 quarters in the filter, the current year data changes; however, the data using ParallelPeriod does not change accordingly and still shows the total for the full year.
View 4 Replies
View Related
Jan 27, 2009
Our client wants to report on their trade volume for last year as compared to the current quarter. For simplicity let's pretend they have a report with two key measures:
[Trade Volume - Tons]
[Trade Volume - Tons MTD]
[Trade Volume - Tons] is based on outlook - that is, for any period we are reporting on, the trade volume will be reported as actuals that have been loaded up until the current period, and forecast for the current and future periods.
[Trade Volume - Tons MTD] is based only on actuals - that is for any period we are reporting on the trade volume will be reported as actuals that have been loaded up until and including the current period, and 0 for any future periods.
If Feb09 is our current period, and we are using quarter on the time dimension (where quarter 1=Jan09,Feb09,Mar09) and we have the following data:
Jan09 Trade Volume Actual: 100 Trade Volume Forecast: 150
Feb09 Trade Volume Actual: 50 Trade Volume Forecast: 200
March09 Trade Volume Actual: 75 Trade Volume Forecast: 225
Then
[Trade Volume - Tons]=100+200+225=525
[Trade Volume - Tons MTD]=100+50=150
This is a problem, because the comparison of their current results ([Trade Volume - Tons MTD]) with what they 'forecast' ([Trade Volume - Tons]) is not based on the same period of time - we are comparing the sum of two periods versus three periods. To solve this we changed the reporting period to monthly granularity, and now select Jan09-Feb09 as our range (as opposed to having quarter granularity and selecting Q1 2009 as in the example above).
This works well and produces the expected results:
[Trade Volume - Tons]=100+200=350
[Trade Volume - Tons MTD]=100+50=150
However, this introduces a secondary problem: we are also doing a prior-year calculation on the trade volume, so the users can compare how the actuals compare to the same period last year. To do this we use the following formula for the prior-year calculation:
Prior Year Actuals=([Measures].[Trade Volume - Tons], ParallelPeriod([Time].[544 Hierarchy].[Period Year],1,[Time].[544 Hierarchy].currentmember))
The problem is that as soon as we move from quarter granularity to monthly granularity AND select more than one monthly period, the Prior Year Actuals calculation produces an error: "The MDX function CURRENTMEMBER failed because the coordinate for the 'Period Year' attribute contains a set". So ParallelPeriod does not like it when currentmember is a range (Jan09, Feb09) rather than a single period (Jan09).
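For illustration only, one workaround is to sum the parallel-period tuple over the existing members of the selected range instead of calling ParallelPeriod against a single current member; the [Month] level name below is a placeholder for whatever the monthly level of the 544 Hierarchy is actually called.

// Sketch only: evaluates ParallelPeriod once per month in the selected range.
Prior Year Actuals =
    SUM(
        EXISTING [Time].[544 Hierarchy].[Month].Members,
        ( [Measures].[Trade Volume - Tons],
          ParallelPeriod([Time].[544 Hierarchy].[Period Year], 1,
                         [Time].[544 Hierarchy].CurrentMember) )
    )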
View 8 Replies
View Related
Sep 22, 2015
I have a cube with 2 many-to-many dimensions where a particular MDX query needs about 5 seconds. When I resolve the many-to-many relationships by multiplying the data in the fact table, the query needs 21 seconds.
In general, do many-to-many dimensions slow down query performance of a cube?
Without the many-to-many dimensions the fact table of course has many more rows. Could this be the reason for the performance loss?
How can query performance of a cube be tweaked in general?
View 3 Replies
View Related
Oct 31, 2012
All I get back is an error message of "Analysis Services Processing Task Error: A connection cannot be made. Ensure the server is running." The server is running; I can process the cube by connecting to the AS instance and right-click processing it.
I can also process the cube by running the SSIS task inside SSDT. Only when I deploy the SSIS package (in Project mode) and then execute it do I get the error message.
SQL Server, SSAS, and SSIS processes are all running under the same account. SSAS is on a separate server from SSIS and SQL, if that matters.
View 2 Replies
View Related
Feb 4, 2015
We have our cubes on Server A and the SQL DB resides on Server B (we are on SQL 2014). Over the last couple of days our cubes started failing due to the error below:
OLE DB or ODBC error: Protocol error in TDS stream; HY000; Protocol error in TDS stream; HY000; Protocol error in TDS stream; HY000; Communication link failure; 08S01; TCP Provider: An existing connection was forcibly closed by the remote host.
; 08S01
I have been going through some blogs to understand the error but haven't found anything specific yet.
View 0 Replies
View Related
May 21, 2015
How can I use this MDX script in the calculation part of a cube? Will I simply dump it into the script form, starting with CREATE MEMBER CURRENTCUBE.[Measures].[test]?
select
[measures].[abc] on 0,
[xyz].[xyz].(&0):[xyz].[xyz].(&60) on 1
from
(
select
(tail([month].[month].[month].members,6))on 0
from
[cube])
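For illustration only, one way to express the same "last 6 months" logic in the cube's calculation script is a named set plus a calculated member, rather than pasting the SELECT itself; whether SUM or AGGREGATE is appropriate depends on the measure, and the row axis ([xyz] range) from the query would stay in the client query.

// Sketch only: object names follow the query above.
CREATE DYNAMIC SET CURRENTCUBE.[Last 6 Months] AS
    Tail([month].[month].[month].Members, 6);

CREATE MEMBER CURRENTCUBE.[Measures].[test] AS
    SUM([Last 6 Months], [Measures].[abc]);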
View 3 Replies
View Related
Sep 21, 2015
I have created an SSAS package successfully.
When I try to connect to SSAS from Excel, I get an error message like:
A Connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond.
If I connect to SSIS, I'm able to connect correctly.
Why am I getting this error and how can I overcome it?
View 4 Replies
View Related
Sep 2, 2015
I've seen, twice this year, two different companies either already running or planning to run SSAS and SSRS on the same server.
View 7 Replies
View Related
Aug 7, 2015
I am doing workload analysis on SSAS Tabular (2012). I have perfmon logs captured and want to run them through PAL. I am looking for a threshold file for SSAS Tabular 2012/2014.
View 2 Replies
View Related
Jul 1, 2015
I have a calculated member in SSAS that I need to adjust based on what the current member is.
The code is below
CASE WHEN [Measures].[End LIS] = 0 AND "HELP" THEN
CASE WHEN [Measures].[Beginning LIS] = 0 OR [Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS] = 0 THEN NULL ELSE
ROUND([Measures].[Disconnects]/(([Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS])/2) * 100 ,2) END
ELSE ROUND(([Measures].[Disconnects] / [AVERAGELIS] * 100) ,2)
END
In English, I need this to translate to: if End LIS is 0 AND the current member is the current month and current year, THEN carry on; else ROUND(([Measures].[Disconnects] / [AVERAGELIS] * 100), 2).
I came up with
CASE WHEN [Measures].[End LIS] = 0
AND [Dim Date].[FSCL_YM].[Month Nm].currentmember.membervalue = Format(now(), "yyyy")+"]&["+Format(now(), "M")
THEN
CASE WHEN [Measures].[Beginning LIS] = 0 OR [Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS] = 0 THEN NULL ELSE
ROUND([Measures].[Disconnects]/(([Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS])/2) * 100 ,2) END
ELSE ROUND(([Measures].[Disconnects] / [AVERAGELIS] * 100) ,2)
END
But to no avail.
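As an illustrative sketch only (the attribute hierarchy name and key format follow the attempt above and may need adjusting), one thing to try is comparing members with IS and StrToMember rather than comparing a member value to a formatted string; the inner CASE is left exactly as in the post.

// Sketch only: member comparison via IS / StrToMember.
CASE WHEN [Measures].[End LIS] = 0
      AND [Dim Date].[FSCL_YM].[Month Nm].CurrentMember
          IS StrToMember("[Dim Date].[FSCL_YM].[Month Nm].&[" + Format(Now(), "yyyy")
                         + "]&[" + Format(Now(), "M") + "]")
THEN
    CASE WHEN [Measures].[Beginning LIS] = 0
           OR [Measures].[Beginning LIS] + [Measures].[Beginning LIS] + [Measures].[NETACTIVATIONS] = 0
         THEN NULL
         ELSE ROUND([Measures].[Disconnects]
                    / (([Measures].[Beginning LIS] + [Measures].[Beginning LIS]
                        + [Measures].[NETACTIVATIONS]) / 2) * 100, 2)
    END
ELSE ROUND(([Measures].[Disconnects] / [AVERAGELIS] * 100), 2)
END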
View 4 Replies
View Related
May 7, 2015
I have cubes which are large, and query response time exceeds 1 minute.
So I would like advice on pre-warming the cache. Is there any way to see if the cache is already/still available for the cube?
Unrelated: is there a timeout setting for the SSAS cache?
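For illustration, one simple warm-up step is just to run a representative query right after processing (the cube, measure and hierarchy names below are placeholders); whether later user queries are then answered from cache can be checked in Profiler via the Get Data From Cache storage-engine events.

// Sketch only: placeholder names.
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    [Date].[Calendar].[Month].Members ON ROWS
FROM [MyCube]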
View 3 Replies
View Related
Jul 6, 2015
I have two SSAS databases with the same number of cubes in them; the cube names are also the same. I want to merge/combine these two into one so that my reporting application sees a single database containing measures/dimensions from both databases. Is it possible? I am doing this to achieve loose coupling between base and regional development work. If the above is not possible, is there any way to achieve minimum dependency between a base and a regional project?
View 3 Replies
View Related
Jun 2, 2015
I am using multiple IIF statements in SSAS; it does not produce any output and does not show any error, it simply keeps processing...
View 3 Replies
View Related