I have a bridge table in my current design; the relationship is FactTable -> DimRoom -> BridgeTable -> BedGroup. The fact table is joined to the DimRoom dimension on room key, and the bridge table maps values between the DimRoom and BedGroup dimensions. The scenario I have is that the mapping between room and bed group can change, and I record when it changes in the effective date attribute of the bridge table.
Dimension usage on the cube has a many-to-many relationship through the bridge table. For example, if a fact occurred in room 100, which belonged to Bed Group A in Jan 2015, and the room then changed to Bed Group B in Feb 2015 and another fact happened in the room, I should get a count of 1 for Jan in A and 1 for Feb in B. I get that right when slicing by room, but when I slice by month the cube does not honour the time dimension and displays both. I tried mapping the date from the bridge table to the fact as a regular relationship, but that does not work either.
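For reference, a minimal sketch of the kind of bridge table described above; the column names are hypothetical and only illustrate keeping the effective date of each room-to-bed-group mapping:

-- Hypothetical bridge table between DimRoom and DimBedGroup with an effective date
CREATE TABLE dbo.BridgeRoomBedGroup (
    RoomKey       INT  NOT NULL,  -- references DimRoom
    BedGroupKey   INT  NOT NULL,  -- references DimBedGroup
    EffectiveDate DATE NOT NULL,  -- mapping is valid from this date
    ExpiryDate    DATE NULL,      -- NULL means the mapping is still current
    CONSTRAINT PK_BridgeRoomBedGroup PRIMARY KEY (RoomKey, BedGroupKey, EffectiveDate)
);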
I am connecting to an SSAS cube from Excel, and I have a date dimension with 4 fields (I have others, but I don't use them for this case). I created 4 fields in order to test all the possible scenarios that I could think of:
DateKey: - Type: System.Integer - Value: yyyyMMdd
Date: - Type: System.DateTime
DateStr0: - Type: System.String - Value: dd/MM/yyyy (note: I am not using US culture) - Example: 01/11/2015
DateStr1: - Type: System.String - Value: %d/%M/yyyy (note: I am not using US culture) - Example: 1/11/2015
Filtering on date is working fine:
Initially, filtering on date in Excel was not working. But after changing the dimension type to Time and setting DataType to Date, as mentioned in [URL], the filter is working fine, as you can see in the picture.

Grouping on date is not working:
I have a hierarchy in my Date dimension and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel and wants to use that. The pre-built Group and Ungroup functions of Excel seem to be available, as you can see in the following picture:
But when the user clicks 'Group', Excel groups it as if it were a string, and that is the problem. The user wants to group using the pre-built grouping function available in the pivot table. I also found out that Power Pivot does not support this Excel grouping functionality. If I understood correctly, this pre-built grouping functionality of Excel needs to do its calculation at run time, which is not viable if you have millions of rows, so Power Pivot does not support it and you have to use a dimension hierarchy to do the grouping. But I am not using Power Pivot, I am using a simple pivot table, so I expect the grouping functionality to work. Then I tried a simple test: I created a simple data source in Excel itself and used it as the source of my pivot table, and grouping works fine there. The only difference that I can see (when double-clicking the measure value in Excel) is:

1. For the date values of my simple test, Excel considers them 'Date'.
2. For the date values of my data coming from the cube, Excel considers them 'General'.
2.1. But the value here is the same as it was in the simple test.
2.2. 'Date Filter' works just fine.
2.3. If I just select this cell and unselect it, then Excel changes the type to 'Date' for that cell, though.
2.4. I created 4 different types of fields in my date dimension thinking that the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.
2.5. This value (that can be seen when double-clicking on the measure) comes from the 'Name Column' of the attribute, and the DataType defined is WChar. I thought that might be the reason for the issue, so I changed it to 'Date'. But SSAS does not allow changing it to 'Date', giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.
So I don't know what puzzle piece I am missing.
1. The date filter works, but grouping does not work.
2. Excel considers the value a 'General' string.
3. SSAS does not allow changing the 'NameColumn' to Date.
Is there a solution for getting DB2 for OS/390 data into SQL Server? I don't mean replication or copying data with flat files or an ETL tool, but a kind of integration of the DB2 tables so that they behave like "normal" MSSQL tables.
Oracle has Transparent Gateways for z/OS DB2 and many other non-Oracle databases; is there something similar for SQL Server?
Could it be done with an ODBC client for DB2?
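One option that comes close to Oracle's gateways is a SQL Server linked server over the Microsoft OLE DB Provider for DB2 (or a DB2 ODBC driver through the MSDASQL provider). A rough sketch, assuming the DB2 provider is installed; the linked-server name and connection-string values below are hypothetical:

-- Create a linked server against DB2 for z/OS (values are placeholders)
EXEC sp_addlinkedserver
    @server     = N'DB2_ZOS',
    @srvproduct = N'Microsoft OLE DB Provider for DB2',
    @provider   = N'DB2OLEDB',
    @datasrc    = N'MYDB2',
    @provstr    = N'Network Address=db2host;Network Port=446;Initial Catalog=MYDB2;Package Collection=MSPKG;Default Schema=MYSCHEMA;';

-- DB2 tables can then be queried from T-SQL, e.g. with pass-through queries:
SELECT * FROM OPENQUERY(DB2_ZOS, 'SELECT * FROM MYSCHEMA.MYTABLE');

Four-part names can also work, depending on the provider, though the tables remain remote objects rather than truly "normal" local MSSQL tables.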
I have a setup with a bridge table. There are about 5 different tables on one side of the bridge (all with compatible PK columns), one of which is called 'mobilesub', and one table on the other side called 'allcostcenters'. The bridge table is called 'subaccountcostcenter'.
I can enter data for mobilesub in the bridge table. But then when I try to enter the info into the bridge table for any of the other tables, such as localsub, there is a conflict like this:
INSERT statement conflicted with TABLE FOREIGN KEY constraint 'FK_subaccountcostcenter_mobilesub'. The conflict occurred in database 'test1', table 'mobilesub'. The statement has been terminated.
Is there some rule against using a bridge table that references several different tables, and I'm just not aware of it? Because I've done everything I can to make sure the info from the different tables doesn't conflict. The same error comes up if I do the localsub table first; in that case the foreign key tripping me up is FK_subaccountcostcenter_localsub. So it's not something with the individual tables.
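The symptom is consistent with a single column on the bridge table carrying a foreign key to each of the parent tables at once; every inserted row must then satisfy all of those constraints simultaneously. A sketch of that pattern, with hypothetical column names:

-- Problematic pattern: the same column referenced against several parent tables,
-- so a value must exist in ALL of them for the insert to succeed
CREATE TABLE dbo.subaccountcostcenter (
    subaccountid INT NOT NULL,
    costcenterid INT NOT NULL,
    CONSTRAINT FK_subaccountcostcenter_mobilesub
        FOREIGN KEY (subaccountid) REFERENCES dbo.mobilesub (subaccountid),
    CONSTRAINT FK_subaccountcostcenter_localsub
        FOREIGN KEY (subaccountid) REFERENCES dbo.localsub (subaccountid),
    CONSTRAINT FK_subaccountcostcenter_allcostcenters
        FOREIGN KEY (costcenterid) REFERENCES dbo.allcostcenters (costcenterid)
);

Typical ways around it are a separate nullable FK column per parent table, one bridge table per parent, or a single "subaccount" supertype table that all the sub-account tables reference and that the bridge points at.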
Hey everyone, I am a stored procedures newbie, so I wanted to ask if it is possible to use them with PHP via an ODBC-ODBC bridge connected to an MS SQL Server. I simply use the commands from the PHP ODBC library to connect and execute the SQL commands. I am trying to avoid building dynamic queries, so stored procedures seem to be the answer. I know that there are only certain environments stored procedures can be used within. Thank you for all of your help.
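It is possible: the SQL Server side is an ordinary stored procedure, and PHP's ODBC functions can call it with the ODBC CALL escape sequence. A minimal sketch (procedure, table, and column names are hypothetical):

-- A simple parameterized procedure callable over ODBC
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppresses extra row-count messages that can confuse ODBC clients
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
-- From PHP: $rs = odbc_exec($conn, "{CALL dbo.GetOrdersByCustomer(42)}");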
I have a query that I need to extend but just don't have the knowledge to do so, so I'm hoping someone here could help me.
This is my current query:
SELECT ES.PatientID,
       DATEDIFF(day, EP.DateofBirth, GETDATE()) / 365 AS Age,
       CASE EP.SexCode WHEN 'M' THEN 'MALE' ELSE 'FEMALE' END AS Gender,
       COUNT(H.HRDID) AS HRD,
       ES.FourRegularDrugs,
       ISNULL(ES.FourRegularDrugsNo, 0) AS NoOfDrugs,
       ES.ReadmissSixMonths,
       PCE.[Primary diagnosis]
FROM tblStudyServices ES
INNER JOIN tblPatient EP ON EP.PatientID = ES.PatientID
INNER JOIN PAS.dbo.[PAS Patient Details] PP ON PP.[PAS key] = EP.PatientInternalNumber
INNER JOIN PAS.dbo.[PAS Admission Details] PAD ON PAD.[PAS key 1] = PP.[PAS key]
INNER JOIN PAS.dbo.[PAS Consultant Episode] PCE ON PCE.[PAS key 1] = PAD.[PAS key 1] AND PAD.[PAS key 2] = PCE.[PAS key 2]
LEFT JOIN tblStudyServicesHRD H ON H.ServicesStudyID = ES.ID
WHERE PAD.[Method of Admission] = 'AE'
  AND (PP.[Death Indicator] <> 1 OR PP.[Death Indicator] IS NULL)
  AND (CONVERT(CHAR(10), PAD.[Discharge Date], 103) > DATEADD(year, -1, CONVERT(CHAR(10), GETDATE(), 103)) OR PAD.[Discharge Date] IS NULL)
  AND DATEDIFF(day, EP.DateofBirth, GETDATE()) / 365 >= 18
  AND ES.Consent = 0
The problem arises with the Discharge Date clause, which basically says that the discharge date must be within a year or be null.
A patient may have 3 admissions within that year, but I need to bring back only the latest discharge, as this is the latest admission.
pas key 1   pas key 2   discharge date
1           1           01/Jan/2006
1           2           01/Mar/2006
1           3           01/Apr/2006
So I want to bring back only the record that has the latest discharge date, 01/Apr/2006.
I also need to accommodate the following scenario, where the patient is still in hospital and the discharge date is null; in that case, bring back the record with the null discharge date, as this is the latest admission (a sketch of one approach follows the second example):

pas key 1   pas key 2   discharge date
2           1           01/Jan/2006
2           2           01/Mar/2006
2           3           <NULL>

Any help is greatly appreciated, as I'll be learning something new!
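If you are on SQL Server 2005 or later, one way to keep only the latest admission per patient is ROW_NUMBER(), ordering NULL discharge dates first so a patient still in hospital wins. A sketch using the table and column names from the query above:

-- Latest admission per patient; a NULL discharge date counts as the most recent
;WITH LatestAdmission AS
(
    SELECT PAD.*,
           ROW_NUMBER() OVER
           (
               PARTITION BY PAD.[PAS key 1]
               ORDER BY CASE WHEN PAD.[Discharge Date] IS NULL THEN 1 ELSE 0 END DESC,
                        PAD.[Discharge Date] DESC
           ) AS rn
    FROM PAS.dbo.[PAS Admission Details] AS PAD
)
SELECT *
FROM LatestAdmission
WHERE rn = 1;

In the main query you would then join to this result (filtered to rn = 1) in place of [PAS Admission Details], keeping the existing one-year / NULL discharge-date filter.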
I am trying to find what datatype I can use for a column with variable values like the ones below.
E.g., the column values we get:
10000.10 100 180.34 98203710231.34
From the above example, you can see that some of the values have no decimal part and some do.
Also, we can't say whether the decimal point comes after the 5th digit or the 10th digit. Is there a datatype to capture these values? If not, the last option is to use varchar2.
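Assuming SQL Server, DECIMAL/NUMERIC with a generous precision and scale covers values both with and without a fractional part, so a string type isn't needed (in Oracle, where VARCHAR2 lives, the equivalent would be NUMBER). A sketch; the precision and scale here are illustrative, size them to your data:

-- DECIMAL(38, 10): up to 28 digits before the decimal point and 10 after
CREATE TABLE dbo.SampleValues (
    Amount DECIMAL(38, 10) NULL
);
INSERT INTO dbo.SampleValues (Amount)
VALUES (10000.10), (100), (180.34), (98203710231.34);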
I'm having trouble with cube processing. While processing the cube I'm getting an "Invalid column name MessageType" error.
I expanded the cube, then opened "measure groups", my failing dimension (ServiceRequestDim) and the partition.
In the partition I opened the "Source" attribute so that it now includes my column which was missing. But that didn't solve the issue.
The query used during processing is this:
SELECT DISTINCT [ServiceRequestDim].[MessageType] AS [ServiceRequestDimMessageType0_0] FROM ( Select IsNull(IsDeleted, 0) as IsDeleted, [ServiceRequestDimKey], IsNull([Status_ServiceRequestStatusId], 0) as [Status_ServiceRequestStatusId],[Status],IsNull([TemplateId_ServiceRequestTemplateId], 0) as
[Code] ....
In the nested query which defines ServiceRequestDim, the MessageType attribute is still missing. In my source datamart, ServiceRequestDim does have the "MessageType" column.
So the question is: where do I change the nested query that dimension processing uses so that it reflects the actual columns in my datamart?
I am using the following approach to enable dynamic date calculations: [URL] ...
However, some measures I work with are percentages (not numeric values) and demand a different formula.
See the example snippet below (first IIF statement):
/*Calendar*/
[Date Calculation].[Calculation].[Calendar Actual versus PY WK %] =
IIF (
    [Measures].CurrentMember.Name = "HSP2 %", -- < this is where I am trying to check if the measure is a percentage type
    0, -- < when this is the case then do something different
[Code] ....
I am trying to access the metadata. I am aware of $system.mdschema_measures, but I am unable to utilize it for my case.
I'd like to define my procedure parameter types to match a referenced table column's type, similar to the PL/SQL "table.column%type" notation. That way, when the table column changes, I would not have to change my stored proc. Any suggestion?
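T-SQL has no direct equivalent of %TYPE, but an alias (user-defined) data type gives one shared definition that both the column and the parameter can use. A minimal sketch with hypothetical names:

-- One shared definition for the column and the parameter
CREATE TYPE dbo.CustomerNameType FROM NVARCHAR(100) NOT NULL;
GO
CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY PRIMARY KEY,
    CustomerName dbo.CustomerNameType
);
GO
CREATE PROCEDURE dbo.GetCustomerByName
    @CustomerName dbo.CustomerNameType
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM dbo.Customer
    WHERE CustomerName = @CustomerName;
END

Note the limitation: unlike %TYPE, an alias type cannot simply be altered once it is in use, so changing the underlying definition still means dropping and recreating the dependent objects.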
I created a measure group and two dimensions using [AdventureWorksDW2012], and I'm trying to change one dimension's storage mode by setting its proactive caching property to Real-Time ROLAP. There is no warning message when deploying and processing, but an error occurs when I query in SQL Server Analysis Services; see below for the error message and the screen capture.
Error occurred retrieving child nodes: the current operation was cancelled because another operation in the transaction failed.
I am working on a model where I have a sales fact table. Each fact record has four different customer fields (ship-to, sold-to, payer, and bill-to customer). I have one customer dimension table that joins to the sales fact table four times (once for each of the customer fields above). When viewing the data in Excel, I would like to have four hierarchies (ship-to, sold-to, payer, and bill-to customer) within Customer.
Is there a way to build hierarchies within my Customer dimension based on the same Customer table? What I want is to view the data in Excel and see the Customer dimension. Within Customer, I want four hierarchies.
Is it correct to say that for each cube you can have only one Fact Table? I am having a funny dispute just now.
In my experience I have never built cubes with more than one fact table; if I want to take data from more than one table I write a view and use it as the fact table. But a cube with two or three fact tables? Never tried.
I have a big table with several types of transactions: PO (purchase orders), SO (sales orders), INV (invoices), and so on. I want to create cubes with only one type of transaction each (one cube for PO, ...).
Where and how can I filter the rows I want to use in my cube ?
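One common place to do it is a view (or a DSV named query) per transaction type, each feeding its own cube or measure group. A sketch, assuming a hypothetical transaction table and type column:

-- Only purchase orders feed this cube's fact
CREATE VIEW dbo.vFactPurchaseOrders
AS
SELECT TransactionID, TransactionDate, ProductKey, Quantity, Amount
FROM dbo.Transactions
WHERE TransactionType = 'PO';

The same WHERE clause can alternatively live in the named query of the data source view, or in the partition definition if you keep a single cube with one measure group per transaction type.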
What I want to do is be able to show a measure that SUMS the number of hours logged by anyone who is a SiteManager from the ConstructionSites table.
I wanted to do a SUMX over WorkerTimesheets against HoursLogged, but FILTER against WorkerTimesheets[WorkerID] = ConstructionSites[SiteManagerID] so that only workers who are also a SiteManager would be counted. However, I can't seem to get that to resolve; it always throws an error along the lines that it can't determine context.
Is it possible to design a cube with a dimension "DimEmployee" (SCD Type 2) and query the following: what "title" did the person "xyz" have at time "2015-01-01"? (See the example data in the images.)
I also have a DimTime with day granularity. The DimEmployee is not at day granularity.
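At the relational level the point-in-time lookup is just a range predicate on the SCD Type 2 validity columns. A sketch, assuming the validity columns are named StartDate/EndDate (hypothetical names, adjust to the actual DimEmployee design):

-- Which title did employee 'xyz' have on 2015-01-01?
DECLARE @AsOfDate DATE = '2015-01-01';
SELECT e.EmployeeName, e.Title
FROM dbo.DimEmployee AS e
WHERE e.EmployeeName = 'xyz'
  AND e.StartDate <= @AsOfDate
  AND (e.EndDate > @AsOfDate OR e.EndDate IS NULL);  -- open-ended row = current version

In the cube, the equivalent is usually achieved by relating a fact (or a factless fact table that records employee validity per day) to both DimEmployee and DimTime, since the dimension itself is not at day granularity.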
I have set up a cube running under MSSQL Server 2000 Analysis services and have created reports in Excel 2002 using pivot tables and linking into the cube as an external data source.
The pivot table works fine and I can slice / dice as usual; the only thing that doesn't work is the drill-through into AS. I receive an error message saying "Cannot show detail data for that selection...".
Now I know that the drill-through works, because when I create a report using the Analysis Services Excel Add-In the drill-through works fine. So how do I get it working using the pivot tables?
FYI - the reason I don't want to use the Analysis Services Excel Add-In for all the reports is that I have to deploy this to XX number of users, who won't have admin rights on their machines, etc.
Is there any VB that I could use to perform this drill-through and return the results, or something easier?
I need to extract data from SSAS' cubes into a SQL Server table.
I have already read examples using a linked server (with OPENQUERY), SSIS, etc. However, the result always returns as many columns per dimension as there are levels. I need to extract all members of a dimension in a single column. E.g., when executing the following MDX query in Adventure Works 2014:
select [Measures].[Sales Amount] on columns, Non Empty [Date].[Calendar].members on rows from [Adventure Works]
I would like to get this result (MDX query in SSMS), but with keys displayed instead of names:
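For the relational landing step, a linked server plus OPENQUERY flattens the MDX result into a rowset that SELECT ... INTO can store. The linked-server name below is hypothetical and assumes it was created against the SSAS instance with the MSOLAP provider:

-- Land the flattened cube result in a SQL Server table
SELECT *
INTO dbo.CubeExtract
FROM OPENQUERY(SSAS_LINK,
    'select [Measures].[Sales Amount] on columns,
            Non Empty [Date].[Calendar].members on rows
     from [Adventure Works]');

The flattened rowset will still produce one column per hierarchy level, so getting all members into a single column (and keys rather than names) typically still needs a reshaping step, e.g. an UNPIVOT over the level columns or requesting member properties such as MEMBER_KEY in the MDX.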
I am working with SSAS Tabular. I have a stand-alone table with 60 columns containing 120K records; the table size is 250MB. I am trying to build a tabular report out of it, and it is taking a long time and throwing an exception (screenshot attached).
It might be a cross-join issue; as a workaround I created a dummy measure and used it in the report, but that works for 10-20K records and beyond that throws the same exception. I have 8 GB RAM and 100 GB of free disk space.
I can pull a last processed date for the entire model: URL... That works great for showing the last time anything in the entire model was processed, but it is not table specific: updating / processing table A also changes the timestamp on tables B and C. However, if I want to look at just a specific table in the model (building a column for each fact table and a measure to go with it), I find that doing any process operation updates all the =NOW() columns in all the fact tables. If I have a model with 3 fact tables and I do a process table using ProcessFull on one of them, all three tables' calculated "LastProcessed" =NOW() columns are updated. URL...
Is there a way to set up a measure in the model at a per-table level to show the last processed date for each individual table? LastDataRefresh:= MAX ('TableA'[LastProcessed])
For example: our sales table may update three times a day, whereas our warehouse inventory table is only updated nightly. I wanted to let end users see, via a measure for each table, when the last process event was for a given table in the model.
I have only 1 denormalized table that is being used in an SSAS Tabular model (which is about 3GB). I am doing a POC to convert it into SSAS Multidimensional and explore it.
1st Question) I see that there is no primary key (unique key) in the current denormalized table (the Tabular model didn't require any primary key). But I think for Multidimensional the key is mandatory? Should I generate a composite key myself in a named query based on this table (in the DSV)?
2nd Question) What is the best way to design my Multidimensional Cube/Dimensions based on this single table?
Say I come up with a composite primary key called (PK_ID). Should I be splitting up my facts / dimensions in my DSV using named queries similar to the below (using the same PK for my dimension tables also)?
a) FactTable = Select PK_ID, Qty1, Cost1, Price1, Amount1 from Table1
b) StoreDim = Select PK_ID, StoreName, StoreDese from Table1
c) ItemDim = Select PK_ID, ItemName, ItemDesc from Table1
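One alternative to reusing PK_ID as the key of every dimension is to key each dimension named query on its natural attributes with DISTINCT and let the fact named query carry those natural keys for the DSV relationships. A sketch; the column names are illustrative, loosely following the example above:

-- StoreDim named query: one row per store
SELECT DISTINCT StoreName, StoreDesc FROM dbo.Table1
-- ItemDim named query: one row per item
SELECT DISTINCT ItemName, ItemDesc FROM dbo.Table1
-- FactTable named query: measures plus the natural keys the DSV relationships join on
SELECT StoreName, ItemName, Qty1, Cost1, Price1, Amount1 FROM dbo.Table1

Reusing PK_ID as every dimension key also works, but then each dimension has as many rows as the fact, which defeats much of the point of splitting the single table.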
I have an SSAS 2012 Tabular instance with SP2. There is a database on the instance with a read role that has Everyone assigned permissions. When configuring the Power BI Analysis Services connector, at the point where you enter Friendly Name, Description and Friendly error message, clicking Next gives me the error "The remote server returned an error (403)." I've tested connecting to the database from Excel on a desktop and it connects fine. I don't use an "onmicrosoft" account, so I don't have that problem to deal with.
We use Power BI Pro with our Office 365. As far as I can tell that part is working OK, as I pass that stage of the configuration with a message saying it connected to Power BI. The connector is installed on the same server as Tabular services; it's a Win2012 Standard server. The Tabular instance runs under a domain account that is the admin account for the instance (this is a dev environment), and that account is what I've used in the connector configuration. It's also a local admin account. There is no gateway installed on the server.