DataSet Contents Are Removed When Going To Data Tab (Using MDX -> SSAS)
Mar 6, 2008
Hi,
I recently uninstalled Visual Studio 2005 and the SQL Server 2005 tools from my machine, then installed Visual Studio 2008 and SQL Server 2005 again. This way I can write normal .NET applications with VS 2008 and still develop my reports with BIDS.
I thought everything was working okay until I had to modify a report. When I open the report and preview it, it works fine, but when I go back to the Data tab, it clears out the MDX queries of all the datasets and I get the default blank designer.
I really need this fixed a.s.a.p., otherwise I can't make fixes to the reports.
I find that dynamic data masking does not work when joining tables in SQL Server 2016 CTP 2.1.
For example, I create the following table:
CREATE TABLE [dbo].[HRM_StaffAppointment](
    [StaffID] [nvarchar](11) NOT NULL,
    [ApptSeqNo] [smallint] NOT NULL,
    [ReportingDept] [nvarchar](10) NULL,
    ...
Then I apply masks to StaffID and RankDesc:
ALTER TABLE [dbo].[HRM_StaffAppointment] ALTER COLUMN [StaffID] ADD MASKED WITH (FUNCTION = 'default()')
ALTER TABLE [dbo].[HRM_StaffAppointment] ALTER COLUMN [RankDesc] ADD MASKED WITH (FUNCTION = 'default()')
When User A logs in and queries HRM_StaffAppointment, StaffID and RankDesc are perfectly masked. But User A can get around the masking using another table:
CREATE TABLE [dbo].[staffID]( [staffID] [nvarchar](255) ) ON [PRIMARY]

SELECT a.* FROM dbo.HRM_StaffAppointment AS a LEFT JOIN dbo.staffID AS b ON a.StaffID = b.StaffID
It looks like a security hole to me, or am I doing something wrong?
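For anyone reproducing this, the masking configuration and the permission that governs it can be checked with standard SQL Server 2016 features. A minimal sketch, where UserA stands in for the post's "User A" and everything else is the documented catalog view and permission:

-- Which columns are masked, and with which masking function
SELECT OBJECT_NAME(object_id) AS table_name, name AS column_name, masking_function
FROM sys.masked_columns;

-- Masking applies only to principals without the UNMASK permission;
-- members of db_owner / sysadmin always see unmasked data.
GRANT UNMASK TO UserA;     -- UserA now sees real values everywhere
REVOKE UNMASK FROM UserA;  -- masks apply to UserA again

It is also worth noting that Microsoft documents dynamic data masking as a presentation-layer feature that does not prevent users with ad-hoc query rights from inferring the underlying values, which is essentially what the join above demonstrates.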
Hi all, I have come up against a wall which I cannot get over. I have a SQL db where the date column is set as a varchar (I know, it should have been datetime, but this was done before my time and I've got to work with what is there). The majority of values are in the format dd/mm/yyyy; however, some values contain the word 'various'.

I'm attempting to compare the date chosen on a C# .NET page with the values in the db and also return all the 'various' values as well. I have accomplished casting the varchar to a datetime and then comparing it to the selected date on the .NET page. However, it errors when it comes across a 'various' entry.

Is there any way to carry out a select statement comparing the Start_Date values in the db to the selected date on the .NET page and also pull out all the 'various' entries at the same time without it erroring? I thought about replacing 'various' with a date like '01/01/2010' so it doesn't stumble over the unrecognised format, but am unsure of how to do it.

This is how far I have got: casting the varchar column to datetime and comparing.

SELECT * FROM table1 WHERE Cast(SUBSTRING(Start_Date,4,2) + '/' + SUBSTRING(Start_Date,1,2) + '/' + SUBSTRING(Start_Date,7,4) as datetime) '" + date + "'"

Many thanks in advance!
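One way to keep the 'various' rows and still compare real dates without the conversion error is to guard the CAST with ISDATE inside a CASE expression (the CASE only casts values that ISDATE accepts). A minimal sketch, with @SelectedDate as an assumed parameter supplied from the .NET page; note that ISDATE depends on the session's date settings, so test it against your data:

DECLARE @SelectedDate datetime;          -- value chosen on the .NET page (assumed parameter)
SET @SelectedDate = '20100601';

SELECT *
FROM table1
WHERE Start_Date = 'various'             -- always return the 'various' rows
   OR CASE WHEN ISDATE(SUBSTRING(Start_Date,4,2) + '/' + SUBSTRING(Start_Date,1,2) + '/' + SUBSTRING(Start_Date,7,4)) = 1
           THEN CAST(SUBSTRING(Start_Date,4,2) + '/' + SUBSTRING(Start_Date,1,2) + '/' + SUBSTRING(Start_Date,7,4) AS datetime)
      END = @SelectedDate;

Passing the date as a parameter from C# (rather than concatenating it into the SQL string) also sidesteps format and injection issues.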
There will be three levels of data imposed at the application layer:

Level 1: ParentID = 0 - a top-level item like Geography
Level 2: ParentID = a Level 1 WikiID - a sub-topic like Volcanoes
Level 3: ParentID = a Level 2 WikiID - a bottom topic like Pyroclastic Flows

I need a SQL statement that will produce the output interleaved like this:

Level 1
Level 2
Level 2
Level 2
Level 1
Level 2
Level 2
I built this, but it's wrong and has no ORDER BY or GROUP BY statements (a corrected, ordered version is sketched below):

SELECT * FROM WikiData WHERE ParentID = 0 OR ParentID IN (SELECT * FROM WikiData WHERE ParentID = 0)
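A minimal sketch of one way to get the interleaved Level 1 / Level 2 ordering, assuming WikiData has at least WikiID, ParentID and a Title column (Title is an assumed name):

SELECT w.WikiID, w.ParentID, w.Title
FROM WikiData AS w
WHERE w.ParentID = 0                                                   -- Level 1 rows
   OR w.ParentID IN (SELECT WikiID FROM WikiData WHERE ParentID = 0)   -- Level 2 rows
ORDER BY
    CASE WHEN w.ParentID = 0 THEN w.WikiID ELSE w.ParentID END,        -- keep each Level 1 with its children
    CASE WHEN w.ParentID = 0 THEN 0 ELSE 1 END,                        -- parent row first, then its children
    w.WikiID;

Each Level 1 row is followed immediately by its Level 2 children; Level 3 can be handled the same way with another IN clause (or, on SQL Server 2005 and later, a recursive CTE).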
I have two datasets. One dataset has old data from some other database; the second dataset has the original data from a SQL Server 2005 database. Both have the same fields, with id as the primary key. I want to transfer all the data from the first dataset to the new dataset while retaining the existing data: if the old dataset has the same id (primary key) as the new one, that row should not be transferred, but if the values for that id have changed, the fields should be updated with that data. How can I do that?
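If both sets of rows can be staged as tables in the SQL Server 2005 database, the equivalent set-based logic is the classic update-then-insert pattern. A rough sketch with assumed table and column names (OldData, NewData, id, col1, col2):

-- Update rows whose id already exists in NewData but whose values differ
UPDATE n
SET n.col1 = o.col1,
    n.col2 = o.col2
FROM NewData AS n
JOIN OldData AS o ON o.id = n.id
WHERE n.col1 <> o.col1 OR n.col2 <> o.col2;

-- Insert rows whose id does not yet exist in NewData
INSERT INTO NewData (id, col1, col2)
SELECT o.id, o.col1, o.col2
FROM OldData AS o
WHERE NOT EXISTS (SELECT 1 FROM NewData AS n WHERE n.id = o.id);

If the work has to stay in ADO.NET DataSets on the client, DataSet.Merge performs similar matching on the primary key (existing rows are updated, new rows are added).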
I've got a report that is using a cube as a data source and I can't get the report to show all the data. Only data at the lowest level of the cube is displayed. The problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX. I get the correct results when I run the query.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
I have a cube that tracks sales by sales rep. In this cube, I have dimensions for SalesRep, Product, and Region hierarchies. I also have a Time dimension that provides Fiscal Year, Fiscal Quarter, Fiscal Month, and Calendar Date; the Time dimension also has an attribute showing the first day of the year for any given date (our fiscal year starts on a different date every year).
I have a report that passes StartDate and EndDate parameters back to the cube, and provides sales numbers by Rep, Product, and Region for that given date range. What I would also like is a field that provides YTD sales through the EndDate parameter.
This is a piece of cake for me to implement against the SQL tables, but I am pulling my hair out trying to determine the best way to implement it with a cube. Does anyone have any suggestions?
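One common MDX approach is a calculated member that uses PeriodsToDate anchored at the fiscal-year level, summed up to the EndDate member, so the year boundary comes from the cube rather than a fixed calendar date. A sketch only, with assumed names (a [Time].[Fiscal] hierarchy, [Measures].[Sales Amount], a [Sales] cube) and the report's EndDate parameter passed as a member unique name:

WITH MEMBER [Measures].[YTD Sales] AS
    SUM(
        PERIODSTODATE(
            [Time].[Fiscal].[Fiscal Year],          -- level that defines "year to date"
            STRTOMEMBER(@EndDate, CONSTRAINED)      -- the report's EndDate parameter
        ),
        [Measures].[Sales Amount]
    )
SELECT { [Measures].[Sales Amount], [Measures].[YTD Sales] } ON COLUMNS,
       NON EMPTY [Sales Rep].[Sales Rep].MEMBERS ON ROWS
FROM [Sales]

Because the fiscal year starts on a different date every year, anchoring PeriodsToDate at the Fiscal Year level (rather than at a fixed calendar boundary) is what keeps the YTD window correct.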
Hello: I would like to send data feeds to Sales Reps listing customers with their likelihood of purchasing a product from a certain category. The data will basically contain Customer Name, Category, and Likelihood Score (or Probability Score). When sorted in descending order of probability, a Sales Rep can easily see the customers who are most likely to purchase at the top of the list and can call them.
I figured I would explore the use of SSAS data mining for this, and integrate it within an SSIS data flow (needed to send the data out to the sales reps).
I have not worked on data mining and so am clueless as to where to begin. I have been reading a book on it, but most of the examples thus far show me how the results can be viewed within AS, which is not what I am looking for.
Ideally I would like to run an algorithm against all our product categories for 2007 and store the results to a table. Then use that table as a lookup to get the probability score for specific categories that we are targeting in 2008.
I appreciate any advice from the experienced SSAS folks on this forum.
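Once a model is trained in SSAS, the scores the reps need can be produced with a DMX prediction query, which SSIS can execute (for example with the Data Mining Query transformation in a data flow) and write to a table. A rough sketch only; the model, data source and column names below are all assumptions:

SELECT
    t.CustomerName,
    t.Category,
    PredictProbability([Will Purchase]) AS LikelihoodScore   -- probability of the predicted state
FROM [CustomerCategoryModel]                                  -- assumed mining model
PREDICTION JOIN
    OPENQUERY([SalesDW],                                      -- assumed Analysis Services data source
        'SELECT CustomerName, Category, Recency, Frequency, Monetary
         FROM dbo.CustomerCategoryInput') AS t
ON  [CustomerCategoryModel].[Category]  = t.Category
AND [CustomerCategoryModel].[Recency]   = t.Recency
AND [CustomerCategoryModel].[Frequency] = t.Frequency
AND [CustomerCategoryModel].[Monetary]  = t.Monetary

Sorting the landed table by LikelihoodScore descending then gives each rep their prioritized call list.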
I have a vision of a beautifully interlocked Business Intelligence system in which I load dimensional fact tables through SSIS, build aggregations in SSAS cubes, and quickly and easily generate SSRS reports using the cubes as data marts.
I am now a month into trying to implement this vision and Reality is stomping all over it. Primarily, I am running into issues with the "cube as a data mart" idea. Each cube is taking up an additional gig or two of space, report queries against them seem to take longer than a straight SQL query against my fact tables, and I am running into serious problems trying to pass time-related calculations such as Year-To-Date to my reports.
Has anyone else tried setting up a BI system that uses cubes as the primary drivers for their reports? Have you seen any benefits to doing it that way? Or should I generate most of my reports through the SQL tables?
I'm nearing the point-of-no-return in choosing my final methodology, and any feedback would be greatly appreciated.
I want to implement population data in the sales cube.

The fact table has a customer code, which is a foreign key to the Customer master dimension, which in turn is linked to the census data dimension. The census data dimension has city-wise population data with foreign keys to zone and state.
I'm sure what I am trying to do is very simple - but I just can't seem to figure it out. I have a report based on a SSAS cube (SQL 2005). The report shows sales based on the dates the user selects from the parameter field (the date parameter field comes from a Y-Q-M-D hierarchy). This all works fine.
What I would like to happen is for the members within the last 3 months to be automatically selected so that the report automatically executes for the last 3 months.
Can anyone help or offer any advice? If possible I would like to achieve this using the GUI features so that power users can use the "plug and pray" interface.
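One way to do this while staying in the graphical tools is a small hidden dataset that returns the last three month members and is bound to the date parameter's default values. A sketch only, with assumed names (the [Date].[Y-Q-M-D] hierarchy with a [Month] level, [Measures].[Sales Amount], a [Sales] cube):

WITH
MEMBER [Measures].[ParameterCaption] AS [Date].[Y-Q-M-D].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue]   AS [Date].[Y-Q-M-D].CURRENTMEMBER.UNIQUENAME
SELECT { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
       -- last month that actually has data, plus the two before it
       LASTPERIODS(3,
           TAIL(NONEMPTY([Date].[Y-Q-M-D].[Month].MEMBERS, [Measures].[Sales Amount]), 1).ITEM(0).ITEM(0)
       ) ON ROWS
FROM [Sales]

In the report, the date parameter's default values are then pointed at the ParameterValue field of this dataset, which keeps the whole thing inside the designer for the power users.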
I am wondering, is there any way to select only a portion of a data set to train the mining model? In this case, I mean we don't need to split the dataset in advance; what I want is to be able to select any random portion of a selected dataset to train a mining model. Any advice?
I look forward to hearing from you, and thanks a lot in advance for your advice and help.
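If the training cases come from a relational source, one way to take a random portion without physically splitting the data first is to sample in the source query of the mining structure. A minimal T-SQL sketch, with dbo.TrainingCases as an assumed table name and a 30% sample:

-- Approximate, page-level sample (fast, not exactly 30% of rows)
SELECT *
FROM dbo.TrainingCases TABLESAMPLE (30 PERCENT);

-- Row-level random sample (different rows on each execution, roughly 30%)
SELECT *
FROM dbo.TrainingCases
WHERE ABS(CHECKSUM(NEWID())) % 100 < 30;

From SSAS 2008 onwards a mining structure can also hold out a random test set itself (the HoldoutMaxPercent / HoldoutMaxCases properties), but sampling in the source query gives full control over which portion is used for training.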
I need to extract data from SSAS' cubes into a SQL Server table.
I have already read examples using a linked server (with OPENQUERY), SSIS, etc. However, the result always returns as many columns per dimension as there are levels. I need to extract all members of a dimension in a single column. E.g., when executing the following MDX query against Adventure Works 2014:
SELECT [Measures].[Sales Amount] ON COLUMNS,
       NON EMPTY [Date].[Calendar].MEMBERS ON ROWS
FROM [Adventure Works]
I would like to get this result (MDX query in SSMS), but with keys displayed instead of names:
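One way to get every member into a single column together with its key is to expose the key through a calculated measure using the intrinsic KEY member property, and read the caption from the rows axis. A sketch against Adventure Works:

WITH MEMBER [Measures].[Member Key] AS
    [Date].[Calendar].CURRENTMEMBER.PROPERTIES("KEY")   -- the member's key instead of its caption
SELECT { [Measures].[Member Key], [Measures].[Sales Amount] } ON COLUMNS,
       NON EMPTY [Date].[Calendar].MEMBERS ON ROWS
FROM [Adventure Works]

When this is run through OPENQUERY on a linked server (or an SSIS source), the flattened rowset still adds one column per level for the rows axis, but the [Member Key] column carries the key of whichever member each row represents, so inserting just that column plus the measure gives all members of the hierarchy in a single keyed column.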
I have 5 cubes, and a hierarchy defined for all cubes; for example, a geography database with 5 continents as cubes and countries as dimensions. Now I am applying security restrictions on my dimension: in the USA dimension, if I want to give access only to the Texas region, then I should be able to see only Texas cities. But I can see all the states under USA even after selecting only the Texas region under the Dimension Data tab inside the Roles section in SSMS. I have tried security at the database, cube and dimension level, but it is still not working. Is that because of some wrong design of the cubes, or something related to the database design? I am not able to understand it, because apart from the roles everything in my cubes and data warehouse is working fine without any defect in the data.
I've been doing research on the web trying to find if it's possible to create a Report Model that is based on a Data Source that is NOT SQL Server and NOT SSAS.
Specifically I'd like to be able to build a Report Model on a web service (or multiple web services).
Does anyone have any information relating to this? Based on my research it looks like there's a few interfaces that need to be implemented (such as ISemanticModelGenerator, etc.).
I did find a post relating to using ODBC data sources for a Report Model and the recommended solution was Linked Tables in SQL Server or UDM in SSAS. Both cases don't look feasible for a Web Service-based Data Source.
I'm trying to figure this out. I have a stored procedure that returns the userId if a user exists in my table, and returns 0 otherwise:

Create Procedure spUpdatePasswordByUserId
    @userName varchar(20),
    @password varchar(20)
AS
Begin
    Declare @userId int
    Select @userId = (Select userId from userInfo Where userName = @userName and password = @password)
    if (@userId > 0)
        return @userId
    else
        return 0
End

I created a function called UpdatePasswordByUserId in my dataset with the above stored procedure that returns a scalar value. When I preview the data from the table adapter in my dataset, it spits out the right value. But when I call UpdatePasswordByUserId from an ASP.NET page, it returns null/blank/0:

passport.UserInfoTableAdapters oUserInfo = new UserInfoTableAdapters();
Response.Write("userId: " + oUserInfo.UpdatePasswordByUserId(txtUserName.Text, txtPassword.Text));

Do you guys have any idea why?
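A likely cause is that a scalar TableAdapter function returns the first column of the first row of a result set (ExecuteScalar semantics) rather than the stored procedure's RETURN value. A sketch of a fix, keeping the same logic but selecting the value instead of returning it (verify against your dataset designer settings):

ALTER PROCEDURE spUpdatePasswordByUserId
    @userName varchar(20),
    @password varchar(20)
AS
BEGIN
    DECLARE @userId int;

    SELECT @userId = userId
    FROM userInfo
    WHERE userName = @userName AND password = @password;

    -- Return the value as a one-row result set so the scalar
    -- TableAdapter function can pick it up.
    SELECT ISNULL(@userId, 0) AS userId;
END

Alternatively, the RETURN value can be read in ADO.NET by adding a parameter whose Direction is ParameterDirection.ReturnValue, but the SELECT approach fits the designer-generated scalar function more directly.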
We have an SSAS project that we want to auto deploy. I am not sure how to handle the external data sources in the project. This one in particular has a single external data source defined to Microsoft SQL Server.
I would like to be able to change the data source based on the environment. In SSIS projects I can do this by setting up environments in the SSISDB and linking them to project parameters in the SSIS project but SSAS projects don't seem to have a similar mechanism.
How should I handle this? I would like the build/deployment agent to pass server/database information in to the data source based on the environment (dev, QA, production).
The only way to automate this that I've discovered is to have an intermediary process that executes after the build that updates the generated .asdatabase and .configsettings files in the bin folder replacing the connection string information.
I have an MDX query with a date range in the WHERE clause.
I want to replace it with the current date and the date 7 days before it, i.e. modify the query so that I can fetch data for the last 7 days.
I tried multiple ways using the NOW function, but could not get it correct.
SELECT NON EMPTY { [Measures].[X] } ON COLUMNS,
       NON EMPTY { [PRODUCT].[PRODUCT].[PRODUCT].ALLMEMBERS }
       DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT { [COLOR].[COLORName].&[BLACK], [COLOR].[COLORName].&[BLUE] } ON COLUMNS
       FROM ( SELECT ( [Date].[Calendar].[Calendar Year].&[2015].&[2015].&[3].&[7].&[20150706]
                       : [Date].[Calendar].[Calendar Year].&[2015].&[2015].&[2].&[6].&[20150629] ) ON COLUMNS
              FROM [MYCUBE] ) )
I want to replace the hard-coded date values. I have used the Calendar hierarchy of the Date dimension to find the last 7 days of data.
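One common pattern is to build the two member unique names as strings from the server clock and convert them with StrToMember, using the VBA Now and Format functions that SSAS exposes to MDX. A sketch only; it assumes a date attribute hierarchy whose key is in yyyyMMdd form (written here as [Date].[Date], an assumed name), since composing the composite year/quarter/month keys used in the query above from NOW() is far more awkward:

SELECT NON EMPTY { [Measures].[X] } ON COLUMNS,
       NON EMPTY { [PRODUCT].[PRODUCT].[PRODUCT].ALLMEMBERS }
       DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT { [COLOR].[COLORName].&[BLACK], [COLOR].[COLORName].&[BLUE] } ON COLUMNS
       FROM ( SELECT (
                  -- last 7 days ending today, built from the current date
                  STRTOMEMBER("[Date].[Date].&[" + FORMAT(NOW() - 6, "yyyyMMdd") + "]")
                  : STRTOMEMBER("[Date].[Date].&[" + FORMAT(NOW(), "yyyyMMdd") + "]")
              ) ON COLUMNS
              FROM [MYCUBE] ) )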
Is there any way to copy my data of 2015 to the Planning/Forecasting values of 2016?

My question is based on the fact that we use INFOR ION BI right now, and there we can just add a button in our reports which physically copies the value from one year to the next year based on some other rules in the cube.

Now I need to make this example work with SSAS and Excel PivotTables, but I can't figure out how.

I have absolutely no clue where and how to accomplish it. Do I use Calculations, do I use Actions, do I do it in the data view, the cube, or directly in Excel?
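If the 2016 forecast only needs to be derived from 2015 (rather than stored and edited), one option is a SCOPE assignment in the cube's MDX script that seeds the Forecast scenario from the prior year's actuals. A sketch only, with assumed dimension and member names (a [Scenario] dimension with Actual and Forecast members, a [Date].[Year] attribute):

// Seed 2016 Forecast cells with the corresponding 2015 Actual values
SCOPE ( [Scenario].[Scenario].&[Forecast], [Date].[Year].&[2016] );
    THIS = ( [Scenario].[Scenario].&[Actual], [Date].[Year].&[2015] );
END SCOPE;

If planners must be able to adjust the copied numbers afterwards, that is cube writeback territory (a write-enabled partition plus Excel's what-if analysis), which is a bigger design decision than a calculation or an action.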
*** Product Removed **** is a rendering extension for Microsoft SQL Server 2005 Reporting Services that allows exporting reports in the DOC, RTF and WordprocessingML formats. All RDL report features, including tables, matrices, charts, textboxes, lists and images are converted with the highest degree of precision to Microsoft Word documents.
Read More at: *** URL Removed ****
Download Link: *** URL Removed ***
Post your queries and questions to *** Product Removed Forum: *** URL Removed ***
I have created a tabular model with VS 2013 Ultimate. I deployed the model several times and everything works great with Power Pivot.
One of the dimension tables, "Age", has an integer field that is null. I updated that column in SSMS to have values, processed the table in VS and the "Age" column now has integer values.
I deployed the model again, connected with Power Pivot, and the "Age" column still has no data. I refreshed the Excel spreadsheet several times, started a new one, and deployed several times, and there is still no data in the "Age" column even though it clearly exists in VS.
I opened SSMS on the Analysis Service and queried the table in question and it also shows no data in the Age column even though it is in the data table and the VS model.
I can't find anything different about this column than others that contain data. How can this be happening?
I am facing a very weird issue in our SSAS database. We have several roles with 'Read definition' access. None of these roles are able to see data in the cubes. I have checked these roles over and over and there is no problem with the definition. Each of the roles have been given read access to Data Source, Cube and Dimensions.
The users are able to access the definition and structure of the cube, i.e. they can see the measures & dimensions available, but when they drag measures and dimension attributes in the browser (SSMS) or execute an MDX query, they get null values. The roles with 'Full Control (administrator)' access are able to see all the data without any issues. I have tried the following:

1. Deleted all roles and re-created.
2. Created roles directly on the SSAS DB.
3. Deployed all objects and processed the DB.

Each time only the admins are able to see the data.
I'm having a problem with Excel 2007 DM and SQL 2005 and I hope someone out there has a solution.
Consider the following environment:
Windows XP SP2 or Windows Vista, Excel 2007, Data Mining Add-in, SSAS 2005 (with session mining models enabled, an AdventureWorksDW cube deployed and drill-through actions available).
Now take the following steps:
1. In Excel 2007 set up a connection to SSAS
2. Connect to the cube and create a new pivot table report (drag and drop whatever you like)
3. Right-click on one of the cell values in the data region and either select a drill-through action, or, select Show Details in the context menu
4. Ensure that you have at least 10 detailed records that are generated on a new worksheet page; you should have a time-based column in your detailed records
5. Select the table of detailed data, then select the Analyze tab (within the Table Tools grouping) which appears in the topmost menu above the ribbon
6. Click the Forecast button in the ribbon and choose the field you want to predict, the time-based column (from step 4), and the number of time periods to forecast
7. Finally click OK.
1. Having followed these steps on both WinXP SP2 and Vista, I keep coming across the exception: HResult:0x800A03EC. Any ideas as to why this exception pops up? If I was using a normal table of data (which was not generated from a Show Details or drill-through action), then the Forecast button works fine.
I googled it and thought the localization settings for SSAS 2005 and Excel 2007 needed to be the same (initially they weren't). I've tried removing the auto-filters which appear atop each column in the detailed data table prior to clicking the Forecast button, and, I've also tested for a series of data across a number of time periods with the same result.
Also, a colleague of mine discovered that the column headers that appear by default from a drill-through start with "$[", and, in removing them the Forecast function appears to work.
I would have thought there would be a seamless transition in Excel 2007 between data retrieved from a cube and the DM Add-in features (or at the very least, a more meaningful exception message than the one presented).
Is there something I've missed, or, is there a KB article I haven't come across yet? As I know for a fact that the problem is reproducible, is there a fix to this problem on its way to us? Is there a useful workaround that doesn't require manual intervention?
I have a time dimension which has Date, Week, Month and Year. However, the hierarchy will have only Week, Month and Year. It works great for any Sales measure with AggregateFunction as SUM.
I have created a new measure with AggregateFunction = LastNonEmpty. Also in the backend, I have pushed all the inventory data to last date in every month as inventory is always looked on a monthly basis not on a weekly basis. This measure shows correct data for every last week of the month in the hierarchy. However, Months and Years are displayed as zeros.
I'm baffled by a problem I noticed yesterday. I have a 3-node SQL 2005 x64 ENT cluster which was set up with SSIS and Notification Services at install time. I have patched SQL 2005 with SP1 (initial setup), SP2 and hotfix 3052 over the last two months, and the NT application log shows that SSIS was patched and started up through 6/15. Yesterday I received an email from one of our developers stating SSIS was not installed; upon logging in I confirmed it's no longer there! I re-installed it through Add/Remove Programs and ran through the SP2 and HF 3052 setup again, but upon scanning the application logs I can't find any record of SSIS or Notification Services being uninstalled. The MSI installer log shows the initial install of both packages, and then the re-install today, but there's no record of uninstalling them. Does anyone know where else I can look, or can anyone explain this odd occurrence?
I am in the process of porting an application from Access to SQL, implementing SQL Server 2005 Express. My intention is to implement this database on a full-time server and upgrade to a full-blown version of SQL later. Am I correct in assuming that there is no limit on the number of concurrent connections to SQL Server Express, since it was stated that the "Workload Governor" has been removed? Or does something else control the number of users that can be simultaneously connected to the server?

My reason for asking is that I have 7 machines that need to access the server. I also have 2 databases that need to be accessed from each machine. If there is no limit, I will keep my databases separate. However, if there is a limit, I will most likely merge the tables into 1 db.
When a workstation loses its connection to the server, it can leave an incomplete transaction. SQL Server then removes the transaction. Could anyone guide me on how to set the delay before SQL Server does this? Thanks in advance, John S.
I have a SQL table with a varchar column which contains codes like '080101000'. In my SSIS data flow I have a lookup against this table, and the column with the code is used as an output column for my lookup transformation.

In the advanced editor the output column datatype is DT_WSTR, but when the code contains only numbers, like the code '080101000', the first '0' is removed! It's like the code is at some point transformed to numeric and then inserted in the output column as a string. This is nonsense!!
I am trying to implement data masking based on user login and am not sure why this is not working. I have the dimensions DimBrand, DimProduct and DimUser. I should mask the BrandCode with 'XXXX'; in the report all the BrandCodes should appear, but a few of the codes will be masked if the user does not belong to that group. I have a fact table FactProduct in this. In the cube I created all these 3 dimensions and the fact table. I created a new dimension DimBrandMask and separated the code over there, with a relationship to the actual DimBrand dimension. In the cube a referenced relationship is set up with the measure group. I created a role with read access.
In the Dimension Data tab of the role I put the below MDX in the allowed set.
Note that I created a measure group from the DimUser table, and the measure [Dim User Count] is used in that MDX.
I am expecting some result like below
Brand      BrandCode   Count
Brand1     b1          6
Brand2     XXXXX       5
Brand3     XXXXX       10
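Since the actual allowed-set expression isn't shown in the post, for reference the usual shape of a dynamic allowed set driven by a user-to-brand measure group looks roughly like the following. This is purely illustrative, using the names from the post plus an assumed [Dim User].[Login] attribute holding the Windows login:

NonEmpty(
    [Dim Brand Mask].[Brand Code].[Brand Code].Members,
    ( StrToMember("[Dim User].[Login].&[" + UserName() + "]"),
      [Measures].[Dim User Count] )
)

Dimension security of this kind hides (or reports as unknown) the members outside the set; displaying a literal 'XXXXX' caption for the masked brands typically has to come from the DimBrandMask design itself rather than from the role.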
We have a publisher that got red-crossed (ran into problems). I decided to remove and recreate it. However, after removing it, the publisher still shows in Replication Monitor. What remains has no distribution or log reader agent, only a snapshot agent. When we check the properties of that agent, we get an error message basically saying the job does not exist, which makes sense.

Now it does not show up anywhere except Replication Monitor, and it cannot be removed from Replication Monitor. Can anyone tell us how to clear it from Replication Monitor?
Thank you.
More info: the replication was set up on 2000 (2000 publisher, 2000 distributor and 2000 subscriber) with 2005 Management Studio, and the publication was removed with 2005 Management Studio.
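If the publication metadata is simply orphaned, the usual starting point is the standard replication cleanup procedures, run with care against the old publisher and distributor. A cautious sketch with assumed server and database names:

-- On the publisher: remove leftover replication metadata for the published database
EXEC sp_removedbreplication @dbname = 'YourPublishedDB';

-- On the distributor: drop the publisher registration if nothing else publishes through it
EXEC sp_dropdistpublisher @publisher = 'YourPublisherServer', @no_checks = 1;

Replication Monitor only reflects the publishers registered with it and the metadata on the distributor, so once the stale metadata is gone (or the publisher is removed from the monitor's registered list) the entry should disappear.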