Errors in Table Analysis Tools for the Excel Data Mining Add-In
Mar 2, 2007
I get the error "The 'MINIMUM_DEPENDENCY_PROBABILITY' data mining parameter is not valid..." every time I try to run the Analyze Key Influencers tool. How can I fix it?
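For context, and only as a hedged sketch: under the covers, Analyze Key Influencers builds a Microsoft Naive Bayes model and passes MINIMUM_DEPENDENCY_PROBABILITY as an algorithm parameter, roughly like the DMX below (the model and column names are invented for illustration):

-- Illustrative DMX only; model and column names are made up.
CREATE MINING MODEL [KeyInfluencersExample]
(
    [RowKey]    LONG KEY,
    [Income]    LONG DISCRETIZED,
    [BikeBuyer] LONG DISCRETE PREDICT
)
USING Microsoft_Naive_Bayes (MINIMUM_DEPENDENCY_PROBABILITY = 0.5)

If the server rejects that parameter, it suggests the SSAS instance the add-in is pointed at does not accept it; checking that the add-in and server versions match is a reasonable first step.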
I am connecting to an SSAS cube from Excel, and I have a date dimension with 4 fields (I have others, but I don't use them for this case). I created the 4 fields in order to test every scenario I could think of:
- DateKey — Type: System.Integer; Value: yyyyMMdd
- Date — Type: System.DateTime
- DateStr0 — Type: System.String; Value: dd/MM/yyyy (note: I am not using US culture); Example: 01/11/2015
- DateStr1 — Type: System.String; Value: %d/%M/yyyy (note: I am not using US culture); Example: 1/11/2015
Filtering on date is working fine:
Initially, filtering on date was not working in Excel, but after changing the dimension type to Time and setting the DataType to Date, as mentioned in [URL], the filter works fine, as you can see in the picture.

Grouping on date is not working:
I have a hierarchy in my Date dimension and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel and wants to use that. The pre-built Excel Group and Ungroup functions seem to be available, as you can see in the following picture:
But when the user clicks 'Group', Excel groups the values as if they were strings, and that is the problem. The user wants to group using the pre-built grouping function available in the pivot table. I also found out that Power Pivot does not support this Excel grouping functionality; if I understood correctly, this pre-built grouping needs to do calculations at run time, which is not a viable solution when you have millions of rows. So Power Pivot does not support Excel's pre-built grouping, and there you need a dimension hierarchy to do the grouping. But I am not using Power Pivot, I am using a simple pivot table, so I expect the grouping functionality to work. Then I tried a simple test: I created a small data source in Excel itself and used it as the source of my pivot table, and grouping worked fine. The only difference I can see (when double-clicking the measure value in Excel) is:

1. For the date values of my simple test, Excel considers them 'Date'.
2. For the date values of my data coming from the cube, Excel considers them 'General'.
  2.1. But the value here is the same as it was in the simple test.
  2.2. 'Date Filter' works just fine.
  2.3. If I just select this cell and unselect it, Excel changes the type to 'Date' for that cell, though.
  2.4. I created 4 different types of fields in my date dimension, thinking that the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.
  2.5. This value (the one seen when double-clicking the measure) comes from the attribute's 'Name Column', whose DataType is WChar. I thought that might be the cause and changed it to 'Date', but SSAS does not allow the change, giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.
So I don't know what puzzle piece I am missing:
1. The date filter works, but grouping does not.
2. Excel considers the value a 'General' string.
3. SSAS does not allow changing 'NameColumn' to Date.
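One hedged suggestion, not verified against this model: Excel can only offer true date grouping when the attribute value reaches it typed as a date, so it may be worth binding the attribute's ValueColumn to a genuinely date-typed column while leaving NameColumn as WChar (which SSAS requires anyway). A minimal T-SQL sketch of the underlying dimension table, with illustrative names:

-- Illustrative only; table and column names are assumptions.
CREATE TABLE dbo.DimDate
(
    DateKey  int          NOT NULL PRIMARY KEY, -- yyyyMMdd surrogate key
    FullDate datetime     NOT NULL,             -- candidate for the attribute's ValueColumn
    DateName nvarchar(20) NOT NULL              -- NameColumn stays WChar, as SSAS insists
);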
I have a complex scenario, so first I want to ask about its feasibility. I think it is better if I state the scenario and someone on this forum replies to it. I need to build an application (e.g., a credit card application, loan application, etc.) that requires approval from an expert. What I first want is to apply data mining to this data, so that the next time I enter an application the result (approve or reject) is given by the data mining tool; I gather this is possible using Analysis Services in SQL 2005. But I also want the basis of that decision (I mean the rules, or whatever else, that the data mining created against the data entered). Can anyone help with this? You can also reach me at razi_rais@yahoo.com. It's pretty urgent, so your prompt response is highly appreciated.
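On feasibility, a hedged sketch of both halves of the request: SSAS 2005 data mining can return the predicted outcome together with a probability, and the learned rules can be read back from the model content. All model, column, and value names below are invented for illustration (DMX):

-- Predict the outcome for a new application (singleton query).
SELECT
    Predict([Approval])            AS PredictedOutcome,
    PredictProbability([Approval]) AS Confidence
FROM [CreditApprovalModel]
NATURAL PREDICTION JOIN
(SELECT 35 AS [Age], 52000 AS [Income], 'Employed' AS [EmploymentStatus]) AS t

-- Read back the rules the algorithm derived; for decision trees,
-- NODE_DESCRIPTION carries the split conditions behind each decision.
SELECT NODE_CAPTION, NODE_DESCRIPTION, NODE_SUPPORT
FROM [CreditApprovalModel].CONTENT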
I have an issue with MDS (Master Data Services for the ONE platform). When I try to connect to MDS from the Excel add-in, I get an error - please see the attachment.
We are using Office 2010 on Citrix with the RCO Master Data Excel add-in. I am getting many calls from users saying that the add-in has disappeared from the Excel menu. I can re-enable it by going to File > Options > Add-Ins and managing the COM add-ins, but I have not been able to work out why it becomes disabled. Is there any registry setting I can use to stop it from being disabled?
I recently installed the SQL Server 2005 client and am now attempting to import data from a spreadsheet into an existing table. This works fine with SQL Server 2000, but with the import utility in SQL Server 2005 I am getting data conversion truncation errors that stop the process.
I have set up a cube running under MS SQL Server 2000 Analysis Services and have created reports in Excel 2002 using pivot tables, linking into the cube as an external data source.
The pivot table works fine and I can slice and dice as usual; the only thing that doesn't work is the drill-through into Analysis Services. I receive an error message saying "Cannot show detail data for that selection...".
Now, I know the drill-through works, because when I create a report using the Analysis Services Excel add-in, the drill-through works fine. So how do I get it working using pivot tables?
FYI - the reason I don't want to use the Analysis Services Excel add-in for all the reports is that I have to deploy this to XX number of users, who won't have admin rights on their machines, etc.
Is there any VB I could use to perform this drill-through and return the results, or something easier?
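One possible avenue, sketched here without having verified it against AS 2000 specifically: issue an MDX DRILLTHROUGH statement yourself over the workbook's connection and write the returned rowset to a sheet. The statement wraps a SELECT that resolves to a single cell; the cube and member names below are invented:

DRILLTHROUGH MAXROWS 1000
SELECT FROM [Sales]
WHERE ([Measures].[Sales Amount], [Time].[1997], [Store].[Store 12])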
I am working with SSAS Tabular. I have a stand-alone table with 60 columns containing 120K records; the table size is 250 MB. I am trying to build a tabular report out of it, and it takes a long time and throws an exception (screenshot attached).
It might be a cross-join issue; as a workaround, I created a dummy measure and used it in the report, but that only works for 10-20K records and beyond that throws the same exception. I have 8 GB of RAM and 100 GB of free disk space.
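For reference, the dummy measure in question is just a trivial DAX aggregation, so the report has a measure to evaluate instead of cross-joining bare columns; roughly like this (the table name is illustrative):

Dummy Count := COUNTROWS ( 'MyWideTable' )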
I have a very small SSAS database, around 35 MB. I opened it in 32-bit Excel and started dragging fields into a pivot table, and it started failing with memory errors. The behavior on the SSAS server was that memory grew very fast up to 8 GB (the VM's total memory), and then the error was reported in Excel.
What might be the issue with such a small database? I would understand it with a big database, but not this one.
I have made some data mining model examples in Excel 2007 (not temporary ones!). They were still there after leaving and re-opening Excel, and I have used them several times. Then I wanted to see whether I could also view them via SQL Server Management Studio in Analysis Services, but there was nothing in the DMAddInDB in Analysis Services.
After this, my data mining models disappeared in Excel, and every model I have made since then has disappeared as well.
Perhaps I have destroyed the database. But will this happen every time? Can I share data mining models made with Excel with projects in SQL Server Analysis Services?
We're looking for a SQL Server log analysis tool. Currently we often need to identify which users executed DML in the database, but I can see other uses for a product in this category. I'm looking at ApexSQL Log and Lumigent Log Explorer.
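For the narrower "who executed DML" question, the undocumented fn_dblog function can sometimes answer it without a third-party tool, as long as the records are still in the active portion of the log. A hedged sketch (undocumented and unsupported, so treat the output with care):

-- Read begin-transaction records from the current database's active log.
SELECT
    [Transaction ID],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS LoginName,
    [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT';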
I see we can install SQL Server 2016 CTP2 and Tabular SSAS. However, where are the tools that allow us to connect to the server and work with the tabular models, supporting features such as bidirectional filtering? The existing SSDT BI tools for 2014 don't contain those additional features (editing the relationship filter direction, for example).
A rather dumb question, but I've installed an evaluation copy of SQL Server 2005 on my machine, and a colleague would like the Analysis Services data mining capability on his machine (without the evaluation SQL Server).
Is there a license associated with such an installation when we buy 2005, or does it fall under the client components, which can be installed on any number of users' machines?
I have a database table which has all the inputs, the key, and the result. In Visual Studio, I created a decision tree model with exactly the same fields as in the table. However, Visual Studio automatically adds a space preceding capital letters, so the field names in the data mining model and those in the database table are slightly different, and I cannot use a NATURAL PREDICTION JOIN. Is there any way to tell Visual Studio not to add the spaces to the variable names?
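If the renaming can't be switched off, the usual fallback is to drop NATURAL and map the columns explicitly in the PREDICTION JOIN's ON clause, so the names no longer need to match. A hedged DMX sketch with invented names, where the model columns carry the inserted spaces:

SELECT Predict([My Model].[Loan Result])
FROM [My Model]
PREDICTION JOIN
OPENQUERY([MyDataSource], 'SELECT LoanAmount, CreditScore FROM dbo.Applications') AS t
ON  [My Model].[Loan Amount]  = t.[LoanAmount]
AND [My Model].[Credit Score] = t.[CreditScore]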
I am getting the dropdown of the dimension folder twice, as shown in the screenshot I took from the client tool. SSMS/BIDS also show this. How can I resolve it?
I got this error from a SQL Agent-driven cube process yesterday and am wondering where the log of errors is created...
Executed as user: xservername$.

<return xmlns="urn:schemas-microsoft-com:xml-analysis">
  <results xmlns="http://schemas.microsoft.com/analysisservices/2003/xmla-multipleresults">
    <root xmlns="urn:schemas-microsoft-com:xml-analysis:empty">
      <Messages xmlns="urn:schemas-microsoft-com:xml-analysis:exception">
        <Warning WarningCode="1092354050" Description="Server: Operation completed with 1042 problems logged." Source="Microsoft SQL Server 2008 R2 Analysis Services" HelpFile="" />
      </Messages>
    </root>
  </results>
</return>
The step failed. If it's a server property I can see in SSMS, which property is it? I see lots of entries with the word "log" in the property name.
I recently updated the data type of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameter, it appeared to succeed and returned "1 row(s) affected" in the console. However, the update triggered by the sproc did not actually work.
The table column was a bit, which only allows 0 or 1, and the sproc was passing a value of 2, so the table was rejecting that value. However, the sproc did not return an error and appeared to succeed. Is there a way to configure the database or the sproc to return an error message when this type of error occurs?
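One hedged explanation: converting a non-zero tinyint to bit yields 1 rather than an error, so nothing fails at the column level. If the call should fail loudly instead, validating the parameter inside the procedure is one way to get there; a minimal T-SQL sketch with assumed object names:

CREATE PROCEDURE dbo.SetFlag  -- procedure, table, and column names are illustrative
    @Id   int,
    @Flag tinyint
AS
BEGIN
    IF @Flag NOT IN (0, 1)
    BEGIN
        -- Surface a real error instead of silently coercing the value.
        RAISERROR('Invalid @Flag value %d; expected 0 or 1.', 16, 1, @Flag);
        RETURN;
    END;
    UPDATE dbo.MyTable SET FlagColumn = @Flag WHERE Id = @Id;
END;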
I've read that there is a workaround for this issue by customizing errors at processing time, but I am not happy having to ignore errors; also, the cube process is scheduled, so ignoring errors is not a choice, at least not a good one.
This is the part of my cube where the error is thrown:
DimTime
- PK (int)
- MyMonth (int; examples: 201501, 201502, 201503, ...)
- other columns

FactBudget
- PK (int)
- Month (int; examples: 201501, 201502, 201503, ...)
I set the relation between DimTime and FactBudget using DimTime.MyMonth as the primary key and FactBudget.Month as the foreign key. The cube built without problems, but during processing the error "The attribute key cannot be found when processing..." was thrown.
It was thrown because FactBudget has some Month values (201510, 201511, and 201512, for example) that DimTime does not, so referential integrity is broken.
My actual question: is there a way or pattern to redesign this DWH so that it deploys and processes correctly?
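One common pattern, sketched under assumed table and column names: before processing, either detect the orphan months or insert placeholder rows for them into DimTime so referential integrity holds (this assumes DimTime's PK is an identity; adjust the column list otherwise):

-- Find fact months with no matching dimension row.
SELECT DISTINCT f.[Month]
FROM dbo.FactBudget AS f
LEFT JOIN dbo.DimTime AS d ON d.MyMonth = f.[Month]
WHERE d.MyMonth IS NULL;

-- Or insert placeholder members so processing succeeds.
INSERT INTO dbo.DimTime (MyMonth)
SELECT DISTINCT f.[Month]
FROM dbo.FactBudget AS f
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimTime AS d WHERE d.MyMonth = f.[Month]);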
Whenever I want to create a dimension, I always end up with the errors below:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100"
[Code] ...
Errors and Warnings from Response:
- Internal error: The operation terminated unsuccessfully.
- The following system error occurred:
- Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DB LAB2', Name of 'DB LAB2'.
I have an SSIS and an SSAS project in the same solution. I need to debug the SSIS package regardless of whether there are an error or two in the SSAS project. Is there a way to ignore the SSAS project while I debug the SSIS package?
I have an Excel spreadsheet - the first column is text, the second numbers, the third a mix of the two. If I point an Excel Source at it in my data flow, it imports the first two columns without problems, but not the third: all cells containing text are imported as nulls, while those containing numbers are imported just fine.
Even if the numbers are stored as text, they are converted into numbers at import, and genuine text is still discarded. The column is treated as entirely numeric if there is just one numeric value in it.
I can get around this by creating a .csv or .txt file from the Excel file, but that adds an extra layer of admin to the process, and I'm trying to make it as seamless as possible.
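What seems to be happening: the Jet/ACE provider samples the first rows of each column, picks a majority type, and drops values of the other type. A workaround that sometimes avoids the intermediate file (hedged; the path and sheet name are placeholders) is to query the workbook with IMEX=1, which forces mixed columns through as text:

-- Requires 'Ad Hoc Distributed Queries' to be enabled on the server.
SELECT *
FROM OPENROWSET(
    'Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=C:\Data\MyBook.xls;HDR=YES;IMEX=1',
    'SELECT * FROM [Sheet1$]');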
In both the Import/Export utility and SSIS, "#" becomes "." (pound to period) in column headers; for example, [# of errors] becomes [. of errors]. I truly don't remember having this issue before, but then again I don't remember having to deal with field names built from an assortment of characters strewn about as though someone had recently discovered the symbol picker and decided to employ it to make the spreadsheet a more interesting place. Hence, after a good day wasted (I considered running an ALTER TABLE after the sheets were loaded, and then thought better of it), I seek guidance here. Thanks in advance.
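If repairing the names after import turns out to be the lesser evil after all, sp_rename per mangled column is straightforward; a one-line sketch with an assumed table name:

EXEC sp_rename 'dbo.ImportedSheet.[. of errors]', '# of errors', 'COLUMN';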
I've been trying the Excel 2003 add-in for Analysis Services because, after reading the white paper from Microsoft, it seemed to be a good Analysis Services client. I'm having trouble making it work right; maybe some of you can help:
- Can I insert a formula as the result of applying "cubecellvalue" to a "cubemember" function?
- How do I calculate a variance, meaning the percentage change of sales, for example? Is it possible to divide one "cubecellvalue" by another?
- How do I format the Excel sheet so that the format changes when drilling up and down?
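On the variance question, a hedged sketch: cube cell formulas return ordinary numbers, so one can be divided by another in a plain worksheet formula. This uses the Excel 2007+ built-in syntax (the 2003 add-in's CUBECELLVALUE follows the same idea); the connection and member names are invented:

=CUBEVALUE("MyOlapConn", "[Measures].[Sales]", "[Time].[2003]")
 / CUBEVALUE("MyOlapConn", "[Measures].[Sales]", "[Time].[2002]") - 1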
I thought this would be easy, but so far it has not been. I want to export data from SQL Server to an Excel spreadsheet using a query. I'm using SQL Server ODBC for the source connection and a connection to Excel as my destination. The spreadsheet exists and has column names in the first row. My mappings and query work fine, and I get no warnings before executing, but it will not insert the data into the spreadsheet. Here are the errors I'm getting:

[Destination - TEST$ [28]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E21.
[Destination - TEST$ [28]] Error: Cannot create an OLE DB accessor. Verify that the column metadata is valid.
[DTS.Pipeline] Error: component "Destination - TEST$" (28) failed the pre-execute phase and returned error code 0xC0202025.
TEST$ is the sheet I am trying to add the data to, and I'm using Excel 2003.
My query is simple:

SELECT OrderDate AS Date, VendorName AS Vendor, Item AS Product, TotalCost AS Amount
FROM osv_Ordercaldwecs319

All fields have been converted to varchar, although I started without converting them, so I have tried both ways.
What is causing the errors? Where can I look to find the problem? I'm guessing it's a data conversion problem, but I made everything varchar and there is no formatting on the spreadsheet (although I've tried that as well).
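One hedged lead for error 0x80040E21 at pre-execute: the Excel OLE DB provider expects Unicode strings (DT_WSTR, i.e., nvarchar), while varchar arrives as DT_STR. Converting in the source query (or with a Data Conversion transform) may resolve it; a variant of the query above, with assumed lengths:

SELECT
    CAST(OrderDate  AS nvarchar(30))  AS [Date],
    CAST(VendorName AS nvarchar(255)) AS [Vendor],
    CAST(Item       AS nvarchar(255)) AS [Product],
    CAST(TotalCost  AS nvarchar(30))  AS [Amount]
FROM osv_Ordercaldwecs319;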
I have a package that uses a For Loop to iterate through an unknown number of Excel files and pull their data into a table. However, there will be cases when a file is corrupted or has some other problem, so that either the transformation or the Excel source will fail.
I have it set up so that, for each iteration, if the transform was successful the file is moved to an archive directory, and if it fails the file is moved to a different directory.
But I don't want the package itself to be marked as failed. For the control flow tasks I have set the individual components to FailPackageOnFailure = False, and for the data flow tasks I have set ValidateExternalMetadata = False.
It is no use setting MaxErrorCount higher, because I can't guarantee how many files will be processed and how many might fail.
Could anyone suggest a clean way to trap these errors? Specifically "Cannot acquire connection from Connection Manager", which refers to the Excel connection.
I am trying to write SQL Server 2005 data to Excel. I have problems when the data length exceeds 255 characters. I used a sample destination file with cells containing more than 255 characters where required, so that the Excel destination's external column was recognized as DT_TEXT.
My OLE DB source columns (external and output) are both varchar(1000). It works fine but fails when values exceed 255 characters, with error 0xC0202009 on ProcessInput.
I use SQL Server 2005 Enterprise SP2.
I tried changing the OLE DB source output to text stream, but that does not work at all.
When I try to connect from Excel to SSAS, I get an error message like:

"A connection attempt failed because the connected party did not properly respond after a period of time, or an established connection failed because the connected host has failed to respond."
If I connect to SSIS, I'm able to connect correctly.
Why am I getting this error, and how can I overcome it?