Analysis :: Cube Performance Versus Many-to-Many Dimensions

Sep 22, 2015

I have a cube with 2 many-to-many dimensions where a specific MDX query takes about 5 seconds. When I resolve the many-to-many relationships by multiplying the data in the fact table, the same query takes 21 seconds.

In general, do many-to-many dimensions slow down the query performance of a cube?

Without the many-to-many dimensions the fact table of course has many more rows. Could this be the reason for the performance loss?

And how does one tune the query performance of a cube in general?




Updating Dimensions And Cube Data

Feb 21, 2007

I need to create a package that updates the dimensions and cube data from a data warehouse on a daily basis. I was going to create a Data Flow that takes the data from the DW source and feeds it into a Process Dimension destination to update the dimensions, then use a Process Partition destination in the same manner to update the cube, but then I came across the Analysis Services Processing Task, which seems to do the job as well. I am a bit confused about which way to go. Any recommendations?


Analysis :: Creating Cube With AMO - Cube Has No Measure Groups?

May 19, 2015

I have problems creating a cube with AMO.

I can add the cube to the database object and fill it with dimensions and a measure group (see the code below).

If I call cube.Update() it says something like "Error in meta data manager. Cube has no measure groups." (I am getting the message in German.)

The error code in Microsoft.AnalysisServices.OperationException.Results.Messages is -1055653629.

I can't find any documentation about this (or any other) error code in the Microsoft documentation.

Here's my Code:

// create the cube and set its language and collation
Cube newCube = database.Cubes.Add("MyCube","MyCube");
newCube.Language = 1031;
newCube.Collation = "Latin1_General_CI_AS";
// link an existing database dimension into the cube
CubeDimension dim = newCube.Dimensions.Add("dim1","dim1","dim1");
// Find returns null if the attribute is not part of this cube dimension
CubeAttribute attrib = dim.Attributes.Find("dim1Attr1");

[code]....
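The error text suggests cube.Update() is being called before any measure group has been added. Below is a minimal, hedged sketch of adding one first; the object and column names are placeholders, and a real cube would also need measure group dimension relationships and a partition before it can be processed:

// assumes: using System.Data.OleDb; plus the newCube object from above
MeasureGroup mg = newCube.MeasureGroups.Add("MyMeasureGroup", "MyMeasureGroup");
mg.StorageMode = StorageMode.Molap;

// a measure group needs at least one measure bound to a fact column
Measure amount = mg.Measures.Add("Amount", "Amount");
amount.AggregateFunction = AggregationFunction.Sum;
amount.Source = new DataItem("FactTable", "Amount", OleDbType.Currency); // placeholder binding

// only now does the metadata validation in Update have a measure group to see
newCube.Update(UpdateOptions.ExpandFull);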


Analysis Services - Dimensions

Oct 23, 2002

In Analysis Services, is there a way to change a private dimension into a shared dimension without having to recreate it?


Analysis :: Cube Needs To Be Deployed From VS After SSIS Analysis Services Processing Task Completes?

May 13, 2014

I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS.  In order to improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task.  The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it otherwise.  This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.

I would expect that once development is done, and the cube is processed and deployed, that is it.  My thinking is that the SSIS task should just update the already deployed cube.


Analysis Services - Dimensions & Measures

Aug 13, 2004

I have a problem where I have three measures in a virtual cube:
"Actual", "Budget" and "Full Year Budget".

The dimensions I have are:
- Account No_ / Name
- Cost Code
- Sub Cost Code
- Time/Dates
- Budget Name

Both "Actual" & "Budget" measures need to be filtered/dimensioned by:
- Account No_ / Name
- Cost Code
- Sub Cost Code
- Time/Dates (exclusive to "Actual", "Budget")

Thus I have put these in one cube.


AND "Full Year Budget" needs to be filtered/dimensioned by:
- Account No_ / Name
- Cost Code
- Sub Cost Code
- Budget Name (exclusive to "Full Year Budget")

Thus I have put this in a second cube…

I then created a virtual cube from the two cubes, thinking that the dimensions created in each original cube would only filter the measures of that cube within the virtual cube. ...BUT all dimension filters in the virtual cube filter all measures in the virtual cube, irrespective of which original cube the dimensions were created with.

please help!
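One workaround is a calculated member that blanks the exclusive measure whenever its unrelated dimension is sliced below the All member, so only the relevant dimensions actually filter it. A hedged sketch; the hierarchy and All-member names are assumptions based on the dimensions listed above:

WITH MEMBER [Measures].[Full Year Budget Filtered] AS
-- show the measure only when the unrelated Time dimension sits at its All member
IIF([Time Dates].CurrentMember IS [Time Dates].[All],
    [Measures].[Full Year Budget],
    NULL)
SELECT {[Measures].[Actual], [Measures].[Budget], [Measures].[Full Year Budget Filtered]} ON 0
FROM [VirtualCube]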


Analysis :: Joining Multiple Dimensions In MDX

Jun 13, 2015

I'm new to MDX, and most of the time I customize existing queries rather than writing new ones. I currently have an MDX query like this:

SELECT
[Measures].[Fees Billed] ON 0,
EXCEPT([Age].[Day Buckets].MEMBERS,
       {[Age].[Day Buckets].[All], [Age].[Day Buckets].&[Unknown]}) ON 1
FROM MyCube
WHERE ([Fiscal Period].[Fiscal Year].&[2015], [Customer].[City].&[Auckland])

This returns the fees billed by age bucket where the customer's city is Auckland. I also have another dimension called [Sales Agent] with a [City] attribute in it, and there is an attribute in [Customer] called [Customer].[Sales Agent].

I am trying to retrieve the same information where the customer's sales agent's city is Auckland rather than the customer's city.

If this were SQL, I would join Customer and SalesAgent on Customer.SalesAgentUno = SalesAgent.SalesAgentUno and bring in the desired data. Is there any way to do this in MDX?
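If the [Sales Agent] dimension is related to the same measure group as the fees, no explicit join is written in MDX; slicing by its [City] attribute restricts the fact rows the same way the relational join would. A hedged sketch, assuming the attribute names above:

SELECT
[Measures].[Fees Billed] ON 0,
EXCEPT([Age].[Day Buckets].MEMBERS,
       {[Age].[Day Buckets].[All], [Age].[Day Buckets].&[Unknown]}) ON 1
FROM MyCube
-- the slicer now goes through the Sales Agent dimension instead of Customer
WHERE ([Fiscal Period].[Fiscal Year].&[2015], [Sales Agent].[City].&[Auckland])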


Analysis :: Show Dimensions Like Columns In MDX

Jun 25, 2015

I need to show the dimensions of my model as columns in the result. I have this query:

with 
member [Measures].[Customer] as [Customers].[Customer].CURRENTMEMBER.Name
member [Measures].[UCs] as [UCs].[UC].CURRENTMEMBER.Name
member [Measures].[Order Type] as [Order Types].[Order Type].CURRENTMEMBER.Name
member [Measures].[UC Dates] as [UC Dates].[UC Date].CURRENTMEMBER.Name

[Code] .....
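A hedged completion of the pattern above: the calculated members surface each dimension member's name as a column, while the dimension members themselves go on rows as a cross join (the cube name and leaf levels are assumptions):

WITH
MEMBER [Measures].[Customer] AS [Customers].[Customer].CURRENTMEMBER.Name
MEMBER [Measures].[UCs] AS [UCs].[UC].CURRENTMEMBER.Name
MEMBER [Measures].[Order Type] AS [Order Types].[Order Type].CURRENTMEMBER.Name
MEMBER [Measures].[UC Dates] AS [UC Dates].[UC Date].CURRENTMEMBER.Name
SELECT
{[Measures].[Customer], [Measures].[UCs],
 [Measures].[Order Type], [Measures].[UC Dates]} ON 0,
-- one row per combination that actually has data
NON EMPTY
  [Customers].[Customer].[Customer].MEMBERS *
  [UCs].[UC].[UC].MEMBERS *
  [Order Types].[Order Type].[Order Type].MEMBERS *
  [UC Dates].[UC Date].[UC Date].MEMBERS ON 1
FROM [MyCube]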


Analysis :: How To Show Measures Dynamically For Different Dimensions

May 4, 2015

Actually I want to do a distinct sum on a measure group; see the sample table below.

XL Measure group
LK   OK   Amount
1    10   100
1    11   100
3    30   250
3    31   250
3    32   250

Two dimensions have relationships to the above measure group: the L dimension, which relates to XL on LK, and the O dimension, which relates to XL on OK. If I drag L dimension attributes, it should show results as below:

LK   LName   Amount
1    A       100
3    C       250

But the results actually come out as below:

LK   LName   Amount
1    A       200
3    C       750

If I drag O dimension attributes along with the L dimension, it should show results as below:

LK   LName   OK   OKName   Amount
1    A       10   XYZ      100
1    A       11   UVW      100
3    C       30   PQR      250
3    C       31   KLM      250
3    C       32   TUV      250

I used the formula Measures.Amount / Measures.Count. This formula does not show correct results when I don't drag any dimensions: it shows 425 for the All member, but it should show 350.

So I changed it to ([L].[LK].CurrentMember, Measures.Amount) / ([L].[LK].CurrentMember, Measures.Count). This worked fine, but performance was so poor that I stopped working on it.

At last I restructured the measure group like this:

LK   OK   LAmount   OAmount
1    10   100       100
1    11   0         100
3    30   300       300
3    31   0         300

I want to show Measures.LAmount when only the L dimension is being queried, and OAmount when both the L and O dimensions are being queried. Is this possible?
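One hedged approach (attribute and cube names are assumptions): a calculated measure that checks whether the O dimension is actually sliced below its All member and picks the matching base measure:

WITH MEMBER [Measures].[Amount Dynamic] AS
-- level ordinal 0 means O is still at its All member, so only L granularity is in play
IIF([O].[OK].CURRENTMEMBER.Level.Ordinal = 0,
    [Measures].[LAmount],
    [Measures].[OAmount])
SELECT [Measures].[Amount Dynamic] ON 0,
       [L].[LK].[LK].MEMBERS ON 1
FROM [MyCube]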


Analysis :: Filter Measure On Intersection Of Dimensions

Jul 24, 2015

SSAS 2008 R2

Is it possible to filter out a measure only at the intersection of two dimension members? I have a date dimension, a Hospital dimension, and a wait time measure.

For Example, is it possible to filter out Wait time for Bayside Hospital for the Month of June 2015?

I want Wait time to continue to be displayed for all other months and roll up into the totals without the filtered value.
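A scoped assignment in the cube's MDX script can blank the measure at just that intersection; assigning at leaf granularity lets the blank roll up, so the totals exclude the filtered value. A hedged sketch in which the hierarchy names and member keys are assumptions:

SCOPE([Measures].[Wait Time],
      [Hospital].[Hospital].&[Bayside],
      -- all leaf dates inside June 2015, so the NULL aggregates upward
      DESCENDANTS([Date].[Calendar].[Month].&[2015]&[6], , LEAVES));
  THIS = NULL;
END SCOPE;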


Analysis :: Parallel Period Sliced By Other Dimensions

Aug 4, 2015

I have made a calculated member for the previous period of a given date range.  The previous period is the same date range from the previous year, and I have managed to achieve that with this calculated member:

Create member currentcube.[Measures].[PrevPeriod] as
(ParallelPeriod( [Start Date].[Cal Hierarchy].[Year], 1, [Start Date].[CAL Hierarchy].CurrentMember), [Measures].[Count]);

This member returns the correct result as long as my query uses the time dimension, which makes sense... but I also need to show results sliced by other dimensions in bar charts that do not display the time dimension.  For example, I have a dimension with only 3 members called [Region].[Area].[AreaName].

The result set for the bar chart needs to look like this:

[AreaName] | [Count] | [PrevPeriod]
East       |   43    |    56
West       |   53    |    95

But the [PrevPeriod] only returns values if I include the time dimension. I essentially need to sum the results of the time dimension/AreaName/[PrevPeriod] tuple down to just AreaName/[PrevPeriod] for whatever date range may be involved.

I don't know if this is significant to the issue, but the client tool that generates the bar charts builds the query with the date range as a subcube in the FROM statement.  If the [PrevPeriod] is outside of the subcube that is still OK, as long as the time dimension is included in an Axis on the final select statement, so at least I know I am not suffering from the members inside the subcube.  I've also found in SSMS that it makes no difference if I make the query a subcube, or put the date range in a where clause instead;  I still get NULL for [PrevPeriod] without the dates.

I can't imagine that this is an unusual situation, so I hope I've explained it adequately! What is the recommended technique for summarizing a ParallelPeriod by dimensions without displaying the time/dates?
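One common pattern (hedged; it assumes a [Date] leaf level under [Cal Hierarchy]) is to aggregate the parallel-period tuple over whatever date members exist in the current context, so the member still resolves when time is only in the subcube or WHERE clause rather than on an axis:

CREATE MEMBER CURRENTCUBE.[Measures].[PrevPeriod] AS
AGGREGATE(
  -- EXISTING keeps only the dates selected in the subcube / slicer
  EXISTING [Start Date].[Cal Hierarchy].[Date].MEMBERS,
  (PARALLELPERIOD([Start Date].[Cal Hierarchy].[Year], 1,
                  [Start Date].[Cal Hierarchy].CURRENTMEMBER),
   [Measures].[Count])
);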


Analysis Services: MDX - Date Ranges With 2 Time Dimensions

Aug 31, 2004

Hi,

I have a Time Dimension which allows me to select a specific YEAR, or YEAR & QUARTER or YEAR & QUARTER & MONTH, or YEAR & QUARTER & MONTH & DAY.

Is there any way that I can have a range of dates?

Is it possible, for example, to have 2 time dimensions which did the following:

a start: Year|Month (>= Year|Month)
...and...
an end: Year|Month (<= Year|Month)

Thus I would be able to select a range of dates/months.

Do you know if it is possible to write this into the dimension?

Thanks,

David
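Rather than two time dimensions, MDX can express a contiguous range on a single time dimension with the colon (range) operator. A hedged sketch; the measure and member names are illustrative:

SELECT [Measures].[Sales] ON 0,
       -- every month from Jan 2004 through Sep 2004, inclusive
       {[Time].[2004].[Q1].[Jan] : [Time].[2004].[Q3].[Sep]} ON 1
FROM [MyCube]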


Analysis :: Add Descriptions To Attributes In Role Playing Dimensions

Jul 28, 2015

I have a requirement to set Description values for our cube dimensions and attributes. 

I've done this for regular dimensions, but I can't seem to find a way for role-playing dimensions. I can set the base dimension descriptions but not the 'clones'. Is this possible?


Analysis :: How To Show All Level Of SSAS Dimensions In Excel

Oct 28, 2010

The All level of my dimensions doesn't show up in the PivotTable Field List. I have reports where I want to show one member of a dimension compared to the total of the dimension (and not the total of the members shown), but I can't select the All level. Is there any way to do this?


Analysis :: Ignore Unrelated Dimensions In Tabular Model

Jul 11, 2013

How do I set the 'Ignore Unrelated Dimensions' property in a Tabular Model?


Analysis :: Unable To Browse Data When Adding Multiple Dimensions?

Jul 5, 2015

I have one dimension and one measure group. I deployed and processed the cube, and I was able to browse the data. Then I added one more dimension, and deployed and reprocessed the cube. Now I am not able to see any values; I am getting something like below.


Analysis :: How To Give Permission To Read Dimensions Members In Hierarchy

Jun 9, 2015

I am using a "Client" dimension that includes a "Holding - Client" hierarchy. I have to make sure that only the appropriate roles may access the appropriate members from this dimension, but I only know which role may access which ClientID; I do not know which HoldingID should be accessible. Also, "Client" is the key attribute of the dimension, with "ClientID" and "HoldingID" together as its key columns. The hierarchy is strict; no client may belong to multiple holdings.

I cannot seem to find the right MDX for the allowed member set. My MDX expression would need to look like this:

[Client].[Holding - Client].[Client].&[*]&[123]

In this example I want to give access for client &123, no matter the holding, so &1&123 and &2&123 would be allowed.

Is this doable?
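There is no wildcard syntax for composite member keys, but if ClientID is also exposed as its own attribute hierarchy, the allowed member set can be written with Exists so every (holding, client) member for that client qualifies. A hedged sketch; the [Client ID] attribute name is an assumption:

Exists(
  [Client].[Holding - Client].[Client].MEMBERS,
  -- keeps only the hierarchy members that correspond to client 123
  {[Client].[Client ID].&[123]}
)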


Analysis :: Linking Multiple Fact Table At Different Granularity To Same Dimensions

Jul 15, 2015

I am modelling two fact tables, Actuals and Budget, which are at different granularities: Actuals are at the day, customer, and product subcategory level; Budgets are at the month, region, and product category level.

Month, Region, and Product Category are present in the Date, Region, and Product Category dimensions respectively. I have only three dimensions: Customer, Product, and Date. Linking those dimensions to the Actuals fact table is not an issue; what is the best way, and what options are there, to link the Budget fact table to those three dimensions?


Analysis :: Fact Tables Appearing As Dimensions In Tabular Model

Jul 6, 2015

I built my first tabular model and see that my fact tables are also appearing as dimensions. In multidimensional mode I could choose which tables are the dimensions. How do I do that in a tabular model?


View Performance Versus SQL Statement

Nov 1, 2000

I have a complex (long) SQL statement inside a stored procedure which feeds several variables from 2 tables. Something like:

SELECT @var1 = table1.col1,  -- col1/col2 are placeholders
       @var2 = table2.col2   -- (etc.)
FROM table1
INNER JOIN table2 ON table1.id = table2.id

Is there any advantage to creating a view for this statement
and selecting from that, even though this resides in a stored procedure?
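For comparison, here is a hedged sketch of the view version (placeholder column names again). A plain, non-indexed view is expanded inline by the optimizer, so by itself it should not change performance relative to the direct join:

CREATE VIEW dbo.v_JoinedTables AS
SELECT t1.id, t1.col1, t2.col2           -- placeholder columns
FROM table1 AS t1
INNER JOIN table2 AS t2 ON t1.id = t2.id
GO

-- inside the stored procedure:
SELECT @var1 = col1, @var2 = col2
FROM dbo.v_JoinedTables
WHERE id = @someId                       -- placeholder predicate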


Int Versus Char Primay Key Performance

Jul 23, 2005

Hi,

My company has a scenario where we would like to change the data type of an existing primary key from an integer to a char, but we are concerned about the performance implications of doing so. The script for the two tables that we need to modify is listed below. Table FR_Sessions contains a column named TransmissionID which is currently an integer. This table contains about 1 million rows of data. Table FR_VTracking also contains TransmissionID as part of its primary key, and it contains about 35 million rows of data. These two tables are frequently joined on TransmissionID (FR_Sessions is the parent). The TransmissionID column is used primarily for joins and is not typically displayed.

We would like to change the TransmissionID data type from int to char(7), and I had a few questions:

1) Would this introduce significant performance degradation? I have read that char keys/indexes are slower than int/numeric.
2) Are there collation options (or any other optimizations) that we could use to minimize the performance hit of the char(7)... if so, which ones?

I am a software architect by trade, not a database guru, so please go easy on me if I overlooked something obvious :)

Any suggestions or information would be greatly appreciated.

Thanks,
Tim

-------------------

CREATE TABLE [FR_Sessions] (
[TransmissionID] [int] IDENTITY (1, 1) NOT NULL ,
[PTUID] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[PortNum] [numeric](6, 0) NOT NULL CONSTRAINT [DF_FR_Sessions_PortNum] DEFAULT (0),
[CloseStatus] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[RecvBytes] [int] NULL ,
[SendBytes] [int] NULL ,
[EndDT] [datetime] NULL CONSTRAINT [DF_FR_Sessions_EndDT] DEFAULT (getutcdate()),
[LocalEndDT] [datetime] NULL ,
[TotalTime] [int] NULL ,
[OffenderID] [numeric](9, 0) NULL ,
[UploadStatus] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL CONSTRAINT [DF_FR_Sessions_UploadStatus] DEFAULT ('N'),
[SchedBatchID] [numeric](18, 0) NULL ,
[SWVersion] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DLST] [bit] NULL ,
[TZO] [smallint] NULL ,
[Processed] [bit] NOT NULL CONSTRAINT [DF_FR_Sessions_Processed] DEFAULT (0),
[CallerID] [varchar] (13) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[PeerIP] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[XtraInfo] [varchar] (1024) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[IdType] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
CONSTRAINT [PK_FR_Sessions] PRIMARY KEY CLUSTERED ([TransmissionID]) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [FR_VTracking] (
[TransmissionID] [int] NOT NULL ,
[FrameNum] [int] NOT NULL ,
[LatDegrees] [float] NOT NULL ,
[LonDegrees] [float] NOT NULL ,
[Altitude] [float] NOT NULL ,
[Velocity] [float] NOT NULL ,
[NumPositions] [smallint] NOT NULL ,
[NavMode] [smallint] NOT NULL ,
[Units] [smallint] NOT NULL ,
[GPSTrackingID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
[dtStamp] [datetime] NULL ,
CONSTRAINT [PK_FR_VTracking] PRIMARY KEY CLUSTERED ([TransmissionID], [FrameNum]) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]


Analysis :: Looping AMO Objects Versus Using SSAS Built-in Parallel Processing?

Jul 22, 2015

I am creating an SSIS Script Task that will be used to process SSAS dimensions and partitions, and ideally log the details of each in a table. I would like any info on the benefits or drawbacks of using the built-in SSAS parallel processing as opposed to doing it manually in a multi-threaded "Parallel.ForEach" loop using the .NET AMO library.

In my testing, when I use a Parallel.foreach loop, I am able to obtain and log information about the object such as end time and time to process immediately after each object is processed.  This allows me to keep a history of processing time for each object:

public void processDimensions(Server server, Database database, ProcessType processType)
{
    // process each dimension on its own thread via the AMO library
    Parallel.ForEach(database.Dimensions.OfType<Microsoft.AnalysisServices.Dimension>(), d =>
    {
        DateTime beginTime = DateTime.Now;  // per-object start time for logging
        try
        {
            d.Process(processType);

[code]....

If circumventing the built-in SSAS parallel processing is not best practice I'd like to know in advance before I go too far down that path.


Query Performance With DateTime Versus Int Condition

Jul 10, 2007

Good day,



The following query performs acceptably (2 seconds against 126,000,000 rows in the main table):

SELECT Count(*)
FROM Message1_2_3
INNER JOIN VDMVDO ON Message1_2_3.VDMVDO_ID = VDMVDO.VDMVDO_ID
INNER JOIN NMEA ON VDMVDO.NMEA_ID = NMEA.NMEA_ID
WHERE NMEA.NMEA_ID BETWEEN 14000000 AND 14086000
  AND VDMVDO.RepeatIndicator = 0
  AND NMEA.SentenceFormatterID = 'VDM'



When we change the first condition from an Int column to a DateTime as in:

NMEA.TimeDate BETWEEN CONVERT(DATETIME, '2007-07-09 8:30:00', 102) AND CONVERT(DATETIME, '2007-07-09 9:30:00', 102)



the query time increases to 14 seconds, even though both columns are indexed and a similar number of rows is found. When the select clause changes from a simple Count to a complex Max expression, response time climbs to over a minute!



Any thoughts on optimizing the DateTime search would be greatly appreciated...
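One thing worth trying (a hedged sketch, not a guaranteed fix): a composite index that leads with the equality predicate and then the datetime range, so the WHERE clause above can be satisfied by a single index seek instead of a wide range scan plus lookups:

-- equality column first, range column second
CREATE NONCLUSTERED INDEX IX_NMEA_Formatter_TimeDate
ON NMEA (SentenceFormatterID, TimeDate)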


Performance Issues On Sql 2005 Versus Sql 2000 - AGAIN!

May 15, 2008

I was hoping I wouldn't be another poster with performance issues after migrating to SQL 2005 from SQL 2000, but here I am.

I am in the process of testing our databases on SQL Server 2005 for migration from SQL Server 2000, and certain portions of code have been affected negatively. I have read through many of the posts here and have tried out most of the recommendations. I will start with the things I've done and then provide the actual SQL.

1) I have rebuilt all indexes (using DBCC DBREINDEX with the table option).
2) Updated the DB engine to the latest hotfix (build 3239) that addresses speed-related fixes.
3) I also ran sp_createstats using the 'fullscan' option to create stats on all columns of all tables (minus indexed columns).
4) Since nothing seemed to work, I even ran UPDATE STATISTICS WITH FULLSCAN on all tables, even though I did not need to, as the rebuild would have created stats. But I was willing to try anything.

I have confirmed that the execution plans are different even though the data on both SQL 2000 and SQL 2005 is identical (I put a copy on 2005). The plans themselves are huge, as the queries are huge. Here is the query.


SELECT InterimView.*, TestView.*
FROM View_LabDataExport_TestFormData_55 TestView
RIGHT OUTER JOIN (
    SELECT ReqView.*, CDView.*
    FROM View_LabDataExport_FormData_55 ReqView
    LEFT OUTER JOIN View_LabDataExport_FormData_CD_55 CDView
        ON (CDView.DB_SubjectID_CD = ReqView.DB_SubjectID)
) InterimView
    ON (InterimView.DB_FormID = TestView.DB_FormID_T AND
        InterimView.DB_LabSampleID = TestView.DB_LabSampleID_T)

The above query takes about 8 secs to run on 2000 and about 1 minute to run on 2005. This is for a small dataset, and on larger datasets this is only going to be more pronounced (as confirmed by other teams in my company that have already migrated). Another point worth mentioning: if I remove TestView.* from the select list, it runs in 5 to 6 seconds. Is there an issue with SQL 2005 and a large number of columns, or anything of that sort? On 2000 the time remains about the same, about 8 seconds, if I remove this from the select list.

Here is the STATISTICS IO output on 2005:


(21234 row(s) affected)

Table 'Worktable'. Scan count 75490, logical reads 3676867, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabTestToReportPanel'. Scan count 476, logical reads 1524, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabReportPanel'. Scan count 0, logical reads 260, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'DiscreteValue'. Scan count 1, logical reads 176106, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabReleasedSampleTest'. Scan count 1, logical reads 2078, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabSample'. Scan count 1360, logical reads 18567, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Form'. Scan count 2302, logical reads 8225, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabTest'. Scan count 1, logical reads 23, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabSampleDef'. Scan count 1, logical reads 10530, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabArea'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Lab'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Location'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Study'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Item'. Scan count 1335, logical reads 32940, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ObjectState'. Scan count 1, logical reads 10972, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Object'. Scan count 0, logical reads 20674, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Subject'. Scan count 0, logical reads 3293, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'FormDef'. Scan count 2, logical reads 70, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PrintedLabSampleLabel'. Scan count 0, logical reads 13144, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'PrintedForm'. Scan count 0, logical reads 4219, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'StudySite'. Scan count 0, logical reads 2756, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'StudyEvent'. Scan count 18, logical reads 40, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'StudyEventDef'. Scan count 0, logical reads 36, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'FormDefToStudyEventDef'. Scan count 1, logical reads 43, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'LabSampleDefToFormDef'. Scan count 1, logical reads 255, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Here is the STATISTICS IO output on 2000:

Table 'LabTestToReportPanel'. Scan count 2123, logical reads 4820, physical reads 44, read-ahead reads 0.
Table 'LabReportPanel'. Scan count 130, logical reads 260, physical reads 0, read-ahead reads 0.
Table 'DiscreteValue'. Scan count 103914, logical reads 208214, physical reads 0, read-ahead reads 0.
Table 'Location'. Scan count 19031, logical reads 38062, physical reads 2, read-ahead reads 0.
Table 'Lab'. Scan count 19031, logical reads 38062, physical reads 0, read-ahead reads 0.
Table 'LabArea'. Scan count 19031, logical reads 38062, physical reads 0, read-ahead reads 0.
Table 'LabSampleDef'. Scan count 24670, logical reads 49340, physical reads 0, read-ahead reads 0.
Table 'LabTest'. Scan count 19406, logical reads 39575, physical reads 0, read-ahead reads 0.
Table 'LabReleasedSampleTest'. Scan count 4289, logical reads 73865, physical reads 1014, read-ahead reads 24.
Table 'Study'. Scan count 4291, logical reads 8582, physical reads 0, read-ahead reads 0.
Table 'LabSample'. Scan count 5647, logical reads 31382, physical reads 308, read-ahead reads 4.
Table 'Form'. Scan count 4291, logical reads 9272, physical reads 2, read-ahead reads 10.
Table 'PrintedLabSampleLabel'. Scan count 4289, logical reads 17097, physical reads 114, read-ahead reads 308.
Table 'ObjectState'. Scan count 6860, logical reads 13760, physical reads 1, read-ahead reads 0.
Table 'Object'. Scan count 6860, logical reads 23559, physical reads 90, read-ahead reads 701.
Table 'PrintedForm'. Scan count 1375, logical reads 4505, physical reads 40, read-ahead reads 16.
Table 'StudySite'. Scan count 1378, logical reads 2756, physical reads 4, read-ahead reads 0.
Table 'Subject'. Scan count 1599, logical reads 3332, physical reads 2, read-ahead reads 0.
Table 'StudyEvent'. Scan count 18, logical reads 52, physical reads 0, read-ahead reads 0.
Table 'StudyEventDef'. Scan count 18, logical reads 54, physical reads 0, read-ahead reads 2.
Table 'FormDefToStudyEventDef'. Scan count 1, logical reads 69, physical reads 0, read-ahead reads 23.
Table 'FormDef'. Scan count 2, logical reads 78, physical reads 1, read-ahead reads 4.
Table 'LabSampleDefToFormDef'. Scan count 1, logical reads 308, physical reads 1, read-ahead reads 306.
Table 'Item'. Scan count 1335, logical reads 36510, physical reads 140, read-ahead reads 1047.

(21234 row(s) affected)

(147 row(s) affected)


One difference between the two is the worktable that 2005 creates versus 2000. I can attach the plans, but they are huge. I will attach them if you ask.

What I was looking for was suggestions on what I could do short of rewriting code, or any suggestions in general.

FYI, this has also been posted on the SQL Server Engine forum.

Thanks


Query Performance Hard Coded Value Versus Table Driven

Nov 6, 2014

I want to be able to return the rows from a table that have been updated since a specific time. My query returns results in less than 1 minute if I hard-code the reference timestamp, but it keeps spinning if I load the reference timestamp from a table. See the examples below (the Reference table has only one row, with the value 2014-09-30 00:00:00.000):

select * from A where ReceiptTS > '2014-09-30 00:00:00.000'

select * from A where ReceiptTS > (select ReferenceTS from Reference)
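A common workaround (a hedged sketch) is to pull the reference value into a variable first, optionally adding OPTION (RECOMPILE) so the optimizer estimates the range from the actual value rather than a guess for the scalar subquery:

DECLARE @ReferenceTS datetime
SELECT @ReferenceTS = ReferenceTS FROM Reference

SELECT * FROM A
WHERE ReceiptTS > @ReferenceTS
OPTION (RECOMPILE)  -- lets the optimizer see the runtime value of @ReferenceTS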


SQL2005 Enterprise Database Engine Performance Features (versus Standard)

Feb 21, 2007

Can anyone comment on the engine performance difference between SQL2005 Enterprise Edition and Standard? I'm talking about generalized performance of the engine, not admin features (parallel index operations) or scaled storage (partitioning).

( http://www.microsoft.com/sql/editions/enterprise/comparison.mspx )

The marketing literature makes note of two things:

- Enterprise can use more than 4 processors
- Enhanced read-ahead and scan ("super scan") (note: I cannot find anything about this 'feature')

One unnoted feature:

- only Enterprise supports 'lock pages in memory'


We are in the process of migrating from SQL2000 to SQL2005 in an OLTP environment. Based on the marketing literature, I would have chosen SQL2005-Standard. But based on our limited testing, we are seeing some strange differences.

Query Performance

With MaxDOP=1 and a large batch query (select top 1500000), SQL2005-Enterprise is twice as fast as SQL2005-Standard.

(Note: this difference persists regardless of lock-pages-in-memory setting)

CPU Utilization

In addition, taskmgr shows that SQL2005-Enterprise uses a single processor at ~90%. While SQL2005-Standard shows a single processor at ~20%.

Lock Behavior

We are also seeing lock-behavior differences. A single DML statement that attempts to modify ~5000 rows will cause Table-locks on SQL2005-Standard but obtain normal row-locks on SQL2005-Enterprise.



These empirical differences make me wonder if the engine codebase is fundamentally different between the two?

Any insight would be appreciated.


Opinion On Cube Analysis

Feb 7, 2008

Hi,

I was talking to my boss today, and our report requests are not very consistent. We always have someone coming back to change something in a report. We were thinking of using something called Cube Analysis, which would give our employees the raw data to run any standard query themselves. We have folks who want a report one way, but then they change their minds, and we end up creating yet another report 4 or 5 times. What are your thoughts about this type of database?


Analysis :: Dimensions Attributes - Drag All Or Some Specific Attributes

May 24, 2015

I'm using a DW built from the Northwind database to build a cube for some analytical tasks. I have already created the cube and now I am "cleaning" the dimensions. I'm having some difficulty understanding the logic of this part. When I created the Data Source View, I only imported the foreign keys that connect the dimensions to the fact table. Do I have to drag the attributes of a dimension from the Data Source View to the attributes tab?

Imagine this:

I have the following dimension:

Dim_Customer:
Customer_ID
Name_Customer
Job_Function
Date_of_Birth
Contact
Address
City
Country

When I create the cube, only Customer_ID appears in the attributes tab. Is that normal?

One more question:

I don't want to create a hierarchy like:

Customer ID -> Name_Customer
Customer ID -> Date_of_Birth
Customer ID -> Address
Customer ID -> City
Customer ID -> Country

My idea is to create the following hierarchy: 

Name_Customer -> Date_of_Birth ->  Address ->  City -> Country

But the first hierarchy that I showed always appears instead. Do you know why that happens?


Analysis Services: Deploy A Cube

Jun 21, 2007

I'm making my first attempt at creating a cube using Analysis Services based on my existing datamart. The data source, views, and dimensions have been defined. But when it comes to deploying the cube, it gives an error saying "A connection cannot be made. Ensure that the server is running." The deploy target server and database are the same ones where my datamart is. Or maybe I don't know what I'm doing.

I would appreciate any suggestion for my enlightenment. Thanks.


Analysis Services Hiearchy Cube

Jun 30, 2007

I set up a hierarchy in the dimensions of my cube; however, when I go and look at it via ProClarity, I can't see the hierarchy there.

I have one table that has:

Category
Subcategory
Partnumber

And I have the hierarchy set from Subcategory to Partnumber.

Then I go into ProClarity and limit the category by "my catname", and I still see all 8000 part numbers in the list.

Any ideas?


Q: Analysis Services Cube Design

Jul 20, 2005

I have a (hopefully typical) problem when it comes to cube design. We store millions of product records every year, broken down by month/quarter. Each product can be assigned to various hierarchical classification groups etc. The data in an OLTP DB occupies roughly 100G for a typical year.

We're looking at breaking this out into OLAP to provide faster access to the data in various configurations and groupings. This is not a problem, as this is the intended use for Analysis Services.

The problem is that we apply projection factors on the product prices and quantities. This would be OK if it only happened once, however this happens every quarter (don't ask why). The projection factors change 4 times a year, and they affect all historical product records.

This presents a challenge because to aggregate the data into a useful configuration in the cubes, you throw out the detail data, but this means throwing out the prices and quantities which are needed to apply the projection.

So if you have Product A at $10 and Product B at $20, and roll both up into Category X, you'll have $30, but you'll lose the ability to apply a projection factor of .5 to Product A and .78 to Product B. They're rolled up.

I don't want to regenerate the cubes every 3 months. That's absurd. But we can't live without the ability to project the prices/quantities at a product (detail) level. So how can this be achieved when the other cubes are created at a higher level, with less detail and sums of the detail data?

My initial guess is that we have to update the product data, and then reaggregate all the other data that is built upon that product data. Is there any other way to apply math to the data on the way out?

Thanks in advance!

Regards,
Zach


Analysis :: Create Cube With 2 Fact Tables

May 6, 2015

I want to create a cube with 2 fact tables. Is it possible? If it is, could you show me how?


Analysis :: Using Calculations In SSAS Cube?

May 21, 2015

How can I use this MDX script in the calculation part of a cube? Will I simply dump it into the script form, starting with CREATE MEMBER CURRENTCUBE.[Measures].[test]?

select
[measures].[abc] on 0,
[xyz].[xyz].&[0] : [xyz].[xyz].&[60] on 1
from
(
select
tail([month].[month].[month].members, 6) on 0
from
[cube]
)
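A SELECT statement cannot be pasted into the cube's calculation script directly; the script only holds definitions such as calculated members. A hedged sketch of an equivalent member that aggregates the same last-6-months set (it assumes [Measures].[abc] and the month hierarchy above):

CREATE MEMBER CURRENTCUBE.[Measures].[test] AS
AGGREGATE(
  TAIL([month].[month].[month].MEMBERS, 6),  -- the last 6 months
  [Measures].[abc]
);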







