Error Processing Cube With Disabled Lowest Level Of Shared Dimension
Jul 23, 2005
Hi,
This is probably a classic scenario: a shared dimension that we
need to use in different cubes, where not all fact tables offer the
same level of detail. The dimension is snowflaked.
The cube that's causing me trouble was designed by marking the lowest
dimension level Disabled and not Visible. This allows me to get rid of
one of the snowflake tables (the one with the lowest level), thus
allowing an INNER JOIN with the remaining table, which has a level of
detail corresponding to the fact table.
When processing the cube, I get a "member with key '[blah]' was found
in the fact table but was not found in the level '[blah]' of the
dimension '[blah]'" error, which seems to indicate that none of my fact foreign
keys exist as primary keys in the dimension table. However, if I then
attempt to query the cube, all the data seems to be there.
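One quick check is whether the fact table's foreign keys really do all resolve against the remaining (mid-level) snowflake table. A sketch with hypothetical table and column names:

    -- fact keys with no matching row at the level the cube now joins to
    SELECT DISTINCT f.MidLevelKey
    FROM dbo.FactSales AS f
    LEFT JOIN dbo.DimMidLevel AS d ON d.MidLevelKey = f.MidLevelKey
    WHERE d.MidLevelKey IS NULL;

If this returns no rows, the mismatch reported during processing is happening elsewhere (for example, in how the disabled level's keys are generated).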
Would anybody be in a position (and willing ;-)) to share his/her own
experience working around a similar issue?
When I add a dimension to the cube without any relation to any measure group in my Dimension Usage, my unit values go down. However, when I remove the dimension from the cube, I get the correct values.
I'm getting this error while processing one dimension: "OLE DB error: OLE DB or ODBC error: SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books Online.; 42000." My dimension contains members from two data source tables.
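As the message suggests, the option can be switched on with sp_configure against the relational SQL Server instance (enabling it has security implications, so check with your DBA first):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- enable OPENROWSET/OPENDATASOURCE ad hoc access
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;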
In another dimension I get this error: "Errors in the high-level relational engine. The 'dbo_vicidial_Users' table that is required for a join cannot be reached based on the relationships in the data source view. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Staging User', Name of 'DimUsers' was being processed."
When I process a new cube I receive the error "Error (211): Unknown dimension member ' 8'; Time: 8/11/00 1:09:45 PM". Any ideas about what this error means?
Errors in the high-level relational engine. The data source view does not contain a definition for the 'WeightRecieved' column in the 'dbo_factPurchases' table or view.
The problem is that WeightReceived IS defined in my DSV, so I don't know what to do about this error.
I am getting the following error when processing the cube in SSAS 2008: "Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated. Errors in the OLAP storage engine: An error occurred while the 'Policy Type' attribute of the 'Policy Type' dimension from the 'MyDemo' database was being processed." I verified the data type and column length for the PolicyType column in the dimension as well as in all fact views.
I am getting this error when trying to open a report created with the cube. We use a Dundas OLAP chart to generate the XML for the report from the base cube or default perspective. We change the XML dynamically to a different perspective for security purposes and then load the report. If the report XML accesses a dimension that is hidden in the assigned perspective, we get the error shown below. Does anyone know how to handle this error?
Exception Details: System.NullReferenceException: Internal error. Dimension level or it's parent is NULL.
This only happens with some of our dimensions. With the other hidden dimensions we get a no-data message on the chart, which is what we want.
I am comparing our dimensions to the Adventure Works cube dimensions, which seem to work (I have not tried all of their dimensions yet). Nothing sticks out to me as a setting that will make the error go away. Does anyone have suggestions?
Other things I tried that didn't help were removing the parent-child hierarchies and the role-based security that uses one of the dimensions that throws the error. I also ran a trace on the cube when the error is thrown: I could not see any errors, and I was able to execute all of the MDX queries without any problems (I copied and pasted the MDX into SSMS).
As an aside, we tried using cube roles and dimension security (hiding measures in the Measures dimension too), but the error that Analysis Services returns (#Error or #Value) causes Dundas not to work. We also looked at UDFs and cell-level security, but both are more complex than what we need for now.
I have an SSAS 2005 database "A" and an SSIS package "P" that does a full process of OLAP database "A". The SSAS server connection string is based on a variable read from an XML configuration file.
It works well in BIDS, but when deployed, the package fails at the step connecting to SSAS with the message "a connection cannot be made, please ensure the server is running".
In the connection string I am using a server name like servera.xx.com. If I change it to the IP address, it works; if I change it to localhost (it happens to be on the same server), it works.
But I need the server-name solution, as the IP may change.
We have our cubes on Server A and the SQL database resides on Server B (we are on SQL 2014). For the last couple of days our cubes have been failing with the error below:
OLE DB or ODBC error: Protocol error in TDS stream; HY000; Protocol error in TDS stream; HY000; Protocol error in TDS stream; HY000; Communication link failure; 08S01; TCP Provider: An existing connection was forcibly closed by the remote host. ; 08S01
I have been going through some blogs to understand the error but haven't found anything specific yet.
I've got a fairly large hierarchy table and I'm trying to put together a query to find the lowest-level descendants of the hierarchy. I think there must be some way to use the "breadth-first" approach described on the MSDN/TechNet pages about the SQL Server hierarchyid type, but I'm not sure how to write the T-SQL to traverse it. I know I can get all the descendants of a parent node like this:
SELECT * FROM AdventureWorks2012.HumanResources.Employee WHERE OrganizationNode.IsDescendantOf(@ParentNode) = 1
However, this query returns all levels for that parent's branch. If I just wanted list of employees that were at the lowest level of the branch(es) for this parent node, how would I do this?
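One common reading of "lowest level" is leaf nodes, i.e., descendants with no children of their own. A sketch of that, filtering the descendant set to rows that are nobody's parent (assumes @ParentNode is already declared, as in the query above):

    SELECT e.*
    FROM AdventureWorks2012.HumanResources.Employee AS e
    WHERE e.OrganizationNode.IsDescendantOf(@ParentNode) = 1
      AND NOT EXISTS (
          -- no row claims e as its immediate parent, so e is a leaf
          SELECT 1
          FROM AdventureWorks2012.HumanResources.Employee AS c
          WHERE c.OrganizationNode.GetAncestor(1) = e.OrganizationNode
      );

If "lowest level" instead means the deepest level reached anywhere in the branch, compare OrganizationNode.GetLevel() against the MAX of GetLevel() over the same descendant set.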
I have four cubes in a single SSAS database, and these cubes should be processed on the following schedule:
Cube 1 - every day
Cube 2 - every week
Cube 3 - every month
Cube 4 - every day
The issue I face is that these cubes share dimensions, so I can't do a FULL process of the shared dimensions, as it would affect the other cubes.
I can expect additions and deletions in my dimension data, but the structure remains the same. It would be great if someone could suggest how to go about processing the dimensions; I am confused by the number of options (ProcessIncremental, ProcessUpdate, etc.) available for processing them.
I will be creating an SSIS package to automate the processing. One more question: say Cube 2 fails during a day and Cube 1 has successfully processed earlier the same day, how do I revert to the old state of Cube 2? Does this mean I need to back up the SSAS database before processing each cube?
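For the scenario described (keys come and go, structure unchanged), ProcessUpdate on the shared dimensions is the usual choice: it picks up inserts, updates, and deletes in the dimension tables without taking the dependent cubes offline (flexible aggregations may be invalidated and need rebuilding afterwards), whereas ProcessIncremental/ProcessAdd only handles new members. A minimal XMLA sketch that a SQL Server Agent "SQL Server Analysis Services Command" job step or an SSIS Analysis Services Execute DDL Task could run (database and dimension IDs are hypothetical):

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process>
          <Object>
            <DatabaseID>MyOlapDb</DatabaseID>
            <DimensionID>Dim Customer</DimensionID>
          </Object>
          <Type>ProcessUpdate</Type>
        </Process>
      </Parallel>
    </Batch>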
I have one report with two charts, and each chart has its own dataset. The two datasets are MDX queries against two different cubes, but some dimensions have the same name.
Now I want two different selectable parameters for the [dim time] dimension: one for the first query on the first cube and a second for the other query on the other cube.
So in the MDX query builder I check both dimensions as parameters, but because both dimensions have the same name, I get only one selectable [dim time] parameter in my report.
I have a cube that can be processed on our development server. However, when it is deployed to the UAT server, cube processing hangs. What I have done so far:
Checking sys.sysprocesses shows 8 threads suspended with waittype = CXPACKET. The threads are all reading the transactional database.
We then read on the web that CXPACKET is related to parallel query processing, so we tried the following:
Select the 'Sequential' option in processing -> same result, CXPACKET.
Select 'Parallel' with the parallel thread count set to 1 -> one thread with an ASYNC IO wait type.
Select 'Parallel' with the parallel thread count set to 2 -> CXPACKET.
Change the number of CPUs SQL Server uses from 8 to 1 -> one thread with an ASYNC IO wait type.
Change the number of CPUs SQL Server uses from 8 to 2 -> CXPACKET.
Change 'Maximum parallel query' in SQL Server to 4 -> CXPACKET.
All trials ended in the hung state.
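For reference, the server-wide parallelism cap tried above maps to an sp_configure option on the relational instance (a diagnostic sketch; the value 1 disables parallel plans entirely and is illustrative, not a recommendation):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- 0 = use all CPUs, 1 = no parallel plans, N = cap at N CPUs per query
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;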
Differences between the development server and the UAT server:
1. CPUs: dev = 2, UAT = 8
2. SQL Server version: dev = 2005 SP1, UAT = 2005 SP2
3. Database size: dev > UAT
Has anyone seen this problem and found a solution? Please share. Thanks.
I have a fact table with a simple integer lookup key into a basic dimension table. However, some of the fact lookup key fields are NULL. I would like the Analysis Services reports to show this NULL category. Instead, Analysis Services discards the NULL entries and the records are completely absent from the reports. What is the best way to achieve this?
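One version-agnostic approach is to materialize an explicit "Unknown" member and map NULL keys to it in the view the cube reads from. A sketch with hypothetical table and column names (the -1 key is an arbitrary convention):

    -- add an explicit Unknown row to the dimension (hypothetical names)
    INSERT INTO dbo.DimCategory (CategoryKey, CategoryName)
    VALUES (-1, 'Unknown');

    -- point the cube at a view that maps NULL lookup keys to the Unknown member
    CREATE VIEW dbo.vwFactSales AS
    SELECT COALESCE(f.CategoryKey, -1) AS CategoryKey,
           f.Amount
    FROM dbo.FactSales AS f;

In SSAS 2005 and later, the dimension's UnknownMember property combined with the key binding's NullProcessing setting can achieve a similar effect without changing the relational layer.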
I am working with several tables, but for now I will mention just four: a fact table (named Usage) and three dimension tables, Periods, Products, and Regions. The fact table contains references to the dimension tables. The Periods table contains two other columns, month and year.
I created a cube containing columns from those four tables. Deployment was successful. Trouble comes when I want to create a mining structure using Time Series containing these columns:
- Period
- Amount (of money)
- Product name
- Region name
When I choose to use the cube (instead of a table) as the source for the mining structure, I'm forced to choose only one dimension (among Periods, Products, and Regions). Whichever dimension I choose, I end up unable to use the Period column as the time-key column, so effectively I cannot use the Time Series algorithm.
(1) Why is this so (why does Visual Studio force us to use only one dimension from the cube)? (2) Why does Visual Studio eliminate the Period column, the column that relates to the time dimension? (3) What is the use of the cube to the mining anyway; is there still any use for it? (4) What is the solution to this kind of problem?
I'm looking for a way to schedule the processing of an MS OLAP Services cube in a SQL Server Agent job. Does anyone have experience with that? Are there any alternatives for scheduling the processing?
I have a problem with processing my cube. My fact table (with telephone data) contains about 400,000 records, and it is increasing rapidly (400,000 records is about 8 months of data). I have a few dimensions:
Dimension User: about 200 records
Dimension Line: about 200 records
Dimension Direction: 4 records
Dimension Date: 365 records for each year
Dimension TimeInterval: 24 records
So far so good: when I process the cube I have no problem. However, when I add one more dimension (CalledNumber, with exactly 101 records), processing hangs as soon as it starts.
The SQL performed when processing the cube looks like this:
SELECT field1, field2, ... fieldn
FROM table1, table2, ... tablem
WHERE (table1.id = table2.table1id)
  AND (table2.id = table3.table2id)
  ...
When I execute the SQL above in Query Analyzer from SQL Server Enterprise Manager, it ALSO hangs...
I am not really surprised by that, because this SQL first builds a huge intermediate result of 400,000 x 200 x 200 x 4 x 365 x 24 x 101 rows and only then works through the WHERE conditions to filter out the appropriate records.
To me it would be more logical to use the following SQL to process the cube, but that cannot be changed in Analysis Manager:
SELECT field1, field2, ... fieldn
FROM table1
LEFT JOIN table2 ON (table1.id = table2.table1id)
...
LEFT JOIN tablem ON (tablem.id = tablem-1.tablemid)
When I execute this SQL in Query Analyzer from SQL Server Enterprise Manager, it does NOT hang; it performs the query in about 35 seconds. But Analysis Manager does not allow me to change the SQL used for processing the cube...
What can I do to add more dimensions to my cube (there will be more anyway after adding the CalledNumber dimension)? Any suggestions?
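If Analysis Manager insists on generating its own SQL, one workaround is to hide the joins behind a view and base the cube's fact table on that view, so the server only ever sees a single object. A sketch with hypothetical table and column names:

    -- hypothetical schema: pre-join fact and dimension tables once, in the database
    CREATE VIEW dbo.vwCallFacts AS
    SELECT f.CallId, f.Duration,
           u.UserName, l.LineNumber, d.DirectionName
    FROM dbo.FactCalls AS f
    INNER JOIN dbo.DimUser AS u      ON u.UserId = f.UserId
    INNER JOIN dbo.DimLine AS l      ON l.LineId = f.LineId
    INNER JOIN dbo.DimDirection AS d ON d.DirectionId = f.DirectionId;

A side benefit: a missing join predicate in the comma-separated style silently produces exactly the Cartesian blow-up described above, and the explicit JOIN ... ON syntax makes that mistake much harder.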
Hi, cube processing takes more time on a new server, while the same cubes take less time on another server. The cubes are processed through a DTS package. Can anybody help find the possible reasons for this? Regards, Naseem
We have an MS OLAP cube that has about 11 partitions, and I have created a prototype package that processes these partitions conditionally, based on expressions fed values from a SQL Server control table. One or more of the partitions seem to fail because all of the data for the various partitions comes from the same huge fact table. Is there a way to control the level of concurrency within the package itself? If not, I am thinking I should make some partitions process only after other partitions have completed successfully. I'd appreciate any help.
I am trying to log the processing time details so that we can identify bottlenecks. My SSIS package has a bunch of OLAP processing tasks. In the Event Handler (onPreExecute and onPostExecute events), I am trying to capture the start and end time for each OLAP processing task by using an "Execute SQL task". In the event handler, I have a conditional expression that checks the following:
@SourceName != @[User::Expression1]
where Expression1 is a variable that contains the value "Execute SQL Task". I thought this expression would be true only for the OLAP processing tasks, which, by the way, never fire the OnPreExecute or OnPostExecute events. What am I doing wrong?
Hi! I have a problem with cube processing. When I process a cube with 5 million rows, it stops responding, and the process on the SQL Server is suspended with the wait type ASYNC_NETWORK_IO. The wait time moves both up and down.
However, if I change the view (the cube gets its data from a view over ONE table) to return only up to 35 rows, it works; at 40 rows it goes into suspended again, and I can let it run for several hours without it finishing. Other cubes in the same database with more rows work fine.
If I just run the query used during processing in Management Studio, it works fine.
I have SP2 on both the AS and DB servers.
The DB server looks like this:
4 GB RAM, 2 x 2.80 GHz dual-core Intel Xeon
The AS server:
16 GB RAM, 2 x 3.00 GHz quad-core Intel Xeon X5450
Any ideas?
Edit: the AS service runs at 13%; in Task Manager it has one processor at 100%. SQL Server Profiler shows no activity on the AS server.
I made a cube with a time dimension with the hierarchy year/month/date/hour. The problem is that the dimension is growing too fast. In the older version of MSSQL (2000) the same dimension didn't grow as much. Any ideas? The table is big (maybe around 1,500,000 rows per month); it now contains around 4,500,000 rows.
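A common cause of a runaway time dimension is keying the lower levels on the raw datetime, so every distinct timestamp becomes its own member and the dimension grows with the fact table instead of with the calendar. Truncating the source column to the hour caps the member count at 24 per day. A sketch, assuming a hypothetical CallDateTime column:

    -- truncate a datetime to the start of its hour, so at most 24 distinct
    -- hour keys exist per day regardless of how many rows arrive
    SELECT DATEADD(hour, DATEDIFF(hour, 0, CallDateTime), 0) AS HourKey
    FROM dbo.FactCalls;  -- hypothetical table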
I have an Analysis Services Cube that I would like to report on. However, the Time Dimension currently only has four columns, Day of Month, Month(name) , Year, and DateKey (DateTime representation at midnight for every day). Thus when I drag the month attribute onto the report, it is sorted April - August - December - etc. instead of Jan - Feb - Mar. How do I fix this? I remember reading something in the MSDN Library about it but I can't find it again now.
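The usual fix is to give the month attribute a numeric key and sort by that key rather than by the name. A sketch of the extra column, assuming a hypothetical dbo.DimTime table holding the DateKey column described above:

    SELECT DateKey,
           DATENAME(month, DateKey) AS MonthName,    -- display name: 'January', ...
           MONTH(DateKey)           AS MonthNumber   -- 1-12, used for sorting
    FROM dbo.DimTime;

In the dimension, bind the month attribute's KeyColumns to MonthNumber (plus Year, if members must be unique per year), its NameColumn to MonthName, and set OrderBy to Key.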
We have a set of cubes and dimensions, and we're experimenting with data mining against the cubes (primarily for forecasting applications). We have a custom time dimension (which we call Calendar), not generated by the BI Studio wizard. The dimension has year/month/day/hour/... attributes. But when I try to add this Calendar dimension to the mining structure as a nested table using BI Studio, it only shows the Year attribute, not the others. Other dimensions seem to show all their attributes.
Is there something we've done wrong in defining our time dimension? What determines which attributes show up as available for selection in BI Studio?
I have designed a cube. It has two fact tables and some dimensions; the fact tables are in a many-to-many relationship.
For example:
FactMain: DataKey (PK), StartDateKey, PostCodeKey, TotalCost
FactBridge: DataKey (FK), ProductKey (FK), Position - primary key on DataKey + ProductKey + Position
DimProduct: ProductKey (PK), ProductCode
The cube builds and processes successfully. When I try to process the cube from an agent job, I get the error "Attribute key not found: tablename, value...". I added a job step that runs an Analysis Services command; I took the command from the cube process script (generated by processing the cube manually and scripting it), and I used ProcessAffectedObjects = "true" in the script. When I checked the tables, the key does exist. Why am I getting this error?
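Since the job may see different data than a later manual check (rows can arrive or disappear between runs), one diagnostic is to run an orphan-key query at the moment the job fires, using the tables named above:

    -- bridge rows whose ProductKey has no matching dimension row at process time
    SELECT DISTINCT b.ProductKey
    FROM dbo.FactBridge AS b
    LEFT JOIN dbo.DimProduct AS p ON p.ProductKey = b.ProductKey
    WHERE p.ProductKey IS NULL;

If this returns rows only while the job runs, the dimension and fact tables are being loaded in the wrong order relative to cube processing.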
When I highlight a few partitions and start processing, the process occasionally stops with a message that the operation has been cancelled, like this:
Response 3: Server: the operation has been cancelled.
Response 4: Server: the operation has been cancelled.
Response 5: Server: the operation has been cancelled.
...etc...
(No further error message details are provided.) SQL Profiler shows that the batch completed (but the row count shown in the process progress log is too small).
Analysis Services Profiler shows no messages at that time at all. It just shows messages from when processing started and from when I restarted it.
The Analysis trace appears to stop when processing stops with "Server: the operation has been cancelled." This is an occasional error; sometimes it occurs within 15-20 minutes of the start of processing. It can fail on the first partition in the process list or on some partition in the middle. Some partitions might run for a few hours and not error out, but sometimes it fails quickly. I was not sure whether this was an issue with the underlying fact data, so I broke the last partition where it failed into 5 smaller partitions; they processed OK separately. I think restarting SQL Server and SSAS helps to process a few more partitions: some that failed with this problem process OK after a restart, but some fail again. (Apologies for re-posting; I wanted to put this under a more specific thread title.)
I am getting the following errors when I process the cube in Production:
An error occurred while the dimension, with the ID of 'DIM_PARTICIPANT', Name of 'DIM_PARTICIPANT' was being processed. End Error
An error occurred while the 'PARTICIPANT NAME' attribute of the 'DIM_PARTICIPANT' dimension from the 'XL_GCS_SelfServices' database was being processed. End Error
An error occurred while the dimension, with the ID of 'v d Transaction', Name of 'DIM_TRANSACTION' was being processed. End Error
An error occurred while the 'INVOICE NUMBER' attribute of the 'DIM_TRANSACTION' dimension from the 'XL_GCS_SelfServices' database was being processed. End Error.
But I implemented the same thing on the Dev and QA servers, and no issues were found there.
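When identical cubes pass in Dev/QA but fail only in Production, the difference is usually in the data rather than the design. Attribute errors like these are often duplicate keys: values the relational engine treats as distinct but SSAS collapses (for example, differing only in trailing spaces or case). A diagnostic sketch, assuming a hypothetical dbo.DIM_PARTICIPANT source table with a PARTICIPANT_NAME column:

    -- find name values that collide once trimmed and case-folded,
    -- which SSAS may report as duplicate or missing attribute keys
    SELECT UPPER(LTRIM(RTRIM(PARTICIPANT_NAME))) AS NormalizedName,
           COUNT(DISTINCT PARTICIPANT_NAME)      AS Variants
    FROM dbo.DIM_PARTICIPANT
    GROUP BY UPPER(LTRIM(RTRIM(PARTICIPANT_NAME)))
    HAVING COUNT(DISTINCT PARTICIPANT_NAME) > 1;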