Data Mining Add-Ins Doesn't Work With Non-English Regional Settings
Dec 23, 2006
I tried Data Mining Add-Ins for Office 2007 - CTP December 2006.
Test settings: Windows XP SP2 English with Italian regional settings, Office 2007 English (RTM), SQL Server 2005 Developer (with SP2 CTP Dec06) and Data Mining Add-ins for Office 2007 (CTP Dec06).
If I keep the regional settings in Italian, I get an error like this:
Old format or invalid type library. (Exception from HRESULT: 0x80028018 (TYPE_E_INVDATAREAD))
If I change regional settings to English, the Add-in works.
I found this description as the possible cause of the problem: http://msdn2.microsoft.com/en-us/library/ms178780(vs.80).aspx - if this is the issue, it would be necessary to change the ExcelLocale1033Attribute on the component.
Is there a workaround other than installing the Office 2007 MUI?
Marco Russo
http://www.sqlbi.eu
http://www.sqljunkies.com/weblog/sqlbi
I have been playing around with SSIS for a bit, and I am running into the following problem.
I am trying to import 3 worksheets from an Excel file. The import must be performed on several servers.
I created a config file for this which contains the path to the Excel file, the connection string, and the LocaleIDs of the different tasks.
I have the following tasks:
1. Preparation SQL task (this calls a few SQL statements which clean up certain import tables).
2. Data flow task (this is the actual import from the Excel file to several tables; this is what goes wrong).
3. Execute SQL task (this calls a stored proc which filters the imported data into other tables).
Point 2 is giving problems.
Point 2 will only work when I change my computer's regional settings to English. If I have my settings set to Dutch, it won't work. Even specifying the English LocaleIDs seems to have no effect.
The problem is a decimal field in the database and the difference in decimal separator. When I set my regional settings to English, the values in the Excel file are written as 74.54, which works. With the Dutch regional settings the same value in Excel is displayed as 74,54. I haven't got a clue how to solve this.
Is there a way to tell SSIS not to use the regional settings, but to use the settings specified in the config file? And if this is possible, how can I check that the correct settings are used?
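One approach that might work (a sketch, not something I've verified against your package): have the Excel source deliver the column as Unicode text (DT_WSTR), then add a Derived Column transformation that swaps the comma for a period before casting to numeric. The column name [Amount] below is just a placeholder for your decimal field:

(DT_NUMERIC,18,2)REPLACE([Amount],",",".")

That way the conversion no longer depends on the machine's regional settings at all, which may be more reliable than trying to force a LocaleID from the config file.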
Here's the situation: I have an SSIS package which receives 3 dates as input parameters (3 datetime variables), executes different data flows, and at the end inserts (among some other values) these 3 dates into a custom log table.
The problem comes while inserting the values into the log table. I added an Execute SQL Task that calls a stored procedure which inserts a record into the log table (SqlStatement: exec dbo.InsertIntoLog date1=?, date2=?, date3=?). In the Parameter Mapping section I mapped the parameters to the variables that hold the datetime values. The problem is that the dates are inserted into the log table in American format (mm/dd/yyyy, that's the server setting), while my dates are in European format...
Any ideas how to avoid this? Is there a way to write an expression instead of the SqlStatement in which I'd do some datepart-ing?
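One option (a sketch, assuming your variables are named User::date1 and so on, which I'm guessing from your description) is to put an expression on the task's SqlStatementSource property that builds the statement with the dates in the unambiguous ISO format yyyymmdd, which SQL Server parses the same way regardless of language settings:

"exec dbo.InsertIntoLog @date1 = '"
+ (DT_WSTR,4)YEAR(@[User::date1])
+ RIGHT("0" + (DT_WSTR,2)MONTH(@[User::date1]), 2)
+ RIGHT("0" + (DT_WSTR,2)DAY(@[User::date1]), 2)
+ "'"

(and similarly for the other two dates). Alternatively, if the stored procedure's parameters are declared as datetime rather than varchar, mapping the variables with a date type in Parameter Mapping should pass the values natively, with no string format involved at all.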
Hi. We have a bunch of servers running Windows Server 2000 and 2003, along with many SQL Server (2000 and 2005) databases, in a production environment pulling transactions and then doing warehousing and reporting.
An audit has turned up one production server using the English (US) 'mmddyyyy' system date format, all others using English (Australian) 'ddmmyyyy'.
Is it safe to simply change the regional settings on this English_US server to English_Australian or will it mess up the data in our SQL 2005 databases? I've not been able to get a definitive answer from anyone yet!
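For what it's worth, datetime values in SQL Server are stored in an internal binary format, not as formatted strings, so the regional setting only affects how string literals are parsed and how dates are displayed. A quick way to see that parsing, not storage, is the moving part:

SET DATEFORMAT dmy;
SELECT CAST('13/02/2007' AS datetime);  -- parses fine as 13 February
SET DATEFORMAT mdy;
SELECT CAST('13/02/2007' AS datetime);  -- now fails: there is no month 13

So the risk is not the stored data but any application code on that server that sends dates as ambiguous 'mmddyyyy' strings; those would start being interpreted differently after the change.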
I have created a DSN to extract data from a SQL 2000 server into Excel. This all works fine. Now I have edited the data and would like to import the updated fields into my database.
There are no new fields, just updated information. I have used the import wizard from SQL Enterprise Manager. I have selected to delete the current table and replace it with the one I've created.
This is where the problems begin.
When I finish the wizard I get an error about a column reference constraint conflict, which I think means there are foreign key references to this table, so it cannot simply be deleted and recreated.
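If the foreign keys are the issue, one way around it (a sketch with made-up names, since I don't know your schema) is to import the edited spreadsheet into a staging table and update the real table in place instead of dropping it:

-- Products and Products_Staging are hypothetical names
UPDATE t
SET t.Description = s.Description,
    t.Price = s.Price
FROM dbo.Products AS t
INNER JOIN dbo.Products_Staging AS s ON s.ProductID = t.ProductID

That leaves the table, its constraints, and everything referencing it untouched.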
I am wondering where I can store my mining results in the data mining engine. For example, I got mining results like accuracy charts, decision trees, and other formats of results based on the different mining algorithms I used. Where can I actually store the results for Reporting Services use later? Is it possible to do that in SQL Server 2005?
Thanks a lot for any help and guidance in advance.
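As far as I know, the model content itself lives in the Analysis Services database, but you can pull results out and materialize them relationally for Reporting Services. A hedged sketch, assuming a linked server (here called SSAS_LINK) pointing at Analysis Services, a model called [MyModel], and a destination table you create yourself first:

-- dbo.MiningResults, SSAS_LINK and [MyModel] are all placeholder names
INSERT INTO dbo.MiningResults
SELECT * FROM OPENQUERY(SSAS_LINK,
    'SELECT FLATTENED NODE_CAPTION, NODE_SUPPORT FROM [MyModel].CONTENT');

Reporting Services can also query a mining model directly with DMX through an Analysis Services data source, which avoids the relational copy entirely.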
I have had the same Error 29506 that a lot of people are having when installing SP2 for SQL 2005. I've tried the install as myself (a Domain Admin) and as the local Administrator, cascaded full rights down the entire file system structure, and still no luck.
One thing I'm wondering about: all of my databases and logs are not on C:. They are on LUNs on a NetApp SAN (data is on M: and logs are on L:). Even the system databases (master, model, etc.) are on the LUNs. The error logs referenced permissions to the data directory under the default installation path on C:.
Anyone else have this problem? Got a fix? I really don't want to migrate all of my data back to the local machine, apply the patch, then migrate back. Surely this SP should be able to read the data location from the SQL engine. And surely others have their databases on SANs... I'm at a loss.
It's a well-known issue that one can't use dataset fields in a report header/footer directly. One approach is to create a query-based parameter that basically equals =First(Fields!@FieldName@.Value, "@DataSetName@") and use that parameter value instead. But it doesn't work in my case!
My report displays some entity description and is parametrized with an EntityID parameter. Its header contains the entity name which, following that approach, is queried from the data source through the EntityName report parameter. Here's the important detail: the report is displayed in a ReportViewer control embedded in my application, and the EntityID parameter isn't set by the user in the ReportViewer parameters area. Its default value is changed by the application with the SetReportParameters() web method every time a user wants to view the report, according to the entity the user is exploring in the application. But after the report has been rendered, its header always contains the outdated entity name. Nevertheless, the report body contains the actual data (including the entity name). If I alter the EntityID parameter in ReportViewer or in the web-based Report Manager and refresh the report, the header displays the correct entity name.
This problem is a bit weird but I'm just wondering if anybody else experienced this.
I have a package with file system tasks (copying .dtsx files, actually). Basically the package copies other packages to a pre-defined destination. The thing is, it only works if one of the packages it is configured to copy has some sort of sensitive data (e.g., a connection string with a password); otherwise it reports a success message on execution but doesn't actually do anything. I've checked ForceExecutionResult and it is set to None, for that matter.
Just wondering if anybody else experienced this problem and of course if there's a way to solve it.
Another thing that confuses me: many of the parameter settings for the native algorithms in SQL Server 2005 Data Mining do not seem to significantly improve the results of the mining models when changed. (The exception is the clustering algorithm's cluster-number setting: by setting the number of clusters to 0, the system automatically determines how many clusters to use, which I assume is the best way to mine the model with this method.)
Any good advice on this will be much appreciated.
I am looking forward to hearing from you about this, and thanks a lot in advance.
I have a simple parent that uses an Execute Package Task to call a simple child package. The child package has a Data Flow that I disabled. When running the child package by itself, the data flow task is bypassed. When running the package via the parent the data flow task executes.
The second issue: when you disable the package configurations in the child, the parent doesn't recognize that they're disabled and tries to load the configurations anyway.
It's almost like there are some settings in a child that get ignored by the parent??
Hi guys :( I've searched all the topics here about replication, but now I am wondering: what Windows XP settings do I have to edit? I read something about DCOM settings and changing permissions. What is the RIGHT way to share a file (or whatever), because the schema cannot be accessed for some reason. I need a whole list of things to do, because things are not working out after reading the tutorials on other sites. Please, someone help me.
Basically I need two servers that replicate each other. I realized that the instances have to be named, so I am just totally confused. I'll even let anyone connect remotely to help me out if they want, or explaining will do fine. Please! Thanks guys! Happy holidays!
I would like to know if there is any way to integrate third-party data mining packages with the SQL Server 2005 data mining algorithms, so that we can compare all of them to get the best results when training models.
Hoping someone will have a solution for this error
Errors in the metadata manager. The data type of the '~CaseDetail ~MG-Fact Voic~6' measure must be the same as its source data type. This is because the aggregate function is not set to count or distinct count.
Is the problem that the data type of the column used in the mining structure is Long while the underlying field in the cube has a type of BigInt, or am I barking up the wrong tree?
I'm a beginner with SQL 2012 SSDT & SSMS. I get this error message when I try to deploy my project:
"Error 6 Error (Data mining): KEY SEQUENCE columns are not supported at the case level. The 'Customer Key' column of the 'TK448 Ch09 Cube Clustering' mining structure contains content that is not valid. 0 0" I am finding it hard to locate the content that is not valid. I've been trying to find an answer for this problem but can't seem to find anything. How can I locate the invalid content and change or delete it so that I can deploy this solution?
I have:
- a data mining structure with about 80 columns
- a data mining model using Microsoft_Decision_Trees with 2 prediction columns
I thought I would then explore the possibility of having more than 2 prediction columns, in this case 20.
I get an error message and I can't work out:
a) if this is because there's a limit to the maximum number of prediction columns, and where that maximum is stated;
b) if something else has become corrupted;
c) if there's a known bug, and whether the error message is meaningful or not.
Either way, I'm unable to complete the data mining wizard.
The error message is :Errors in the metadata manager. Either the mining structure with the ID of '[my model Structure]' does not exist in the database with the ID of 'DMAddinsDB', or the user does not have permissions to access the object.
I am using Microsoft_Time_Series and have set HISTORIC_MODEL_GAP to various values (from 1 to 21). I always get this error: Error (Data mining): The 'HISTORIC_MODEL_GAP' data mining parameter is not valid for the 'My Time Series' model.
In the Algorithm Parameters window, this parameter is not listed by default, so I have to add it.
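For reference, in DMX the parameter is normally set together with HISTORIC_MODEL_COUNT when the model is defined; a sketch with hypothetical structure and column names:

ALTER MINING STRUCTURE [My Structure]
ADD MINING MODEL [My Time Series]
(
    [Time Key],
    [Quantity] PREDICT
)
USING Microsoft_Time_Series (HISTORIC_MODEL_COUNT = 2, HISTORIC_MODEL_GAP = 7)

One thing worth checking: if I remember correctly, the advanced algorithm tuning parameters are an Enterprise Edition feature in SQL Server 2005, so on Standard Edition a 'parameter is not valid' error would be expected.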
Implementing the data mining add-in in an academic setting? We need to handle over 150 new students a semester and have their connection to Analysis Services survive for their four years at the college. We are introducing data mining to every freshman business student as a unit within their Intro to Excel class (close to a month of work to give them a sense of what is possible). Other courses later in their curriculum will expand on that introduction.
Once implemented, we would have as many as 900 connections to manage (four years from now). It is possible that multiple sections will be running at the same time, so 40 students may be accessing the data mining tools concurrently.
Is there a way to "bulk establish" the access credentials and establish those databases?
In an SSAS database I have created a data mining structure using the Time Series algorithm. While processing the SSAS DB, the data mining structure takes a long time to process. How can we reduce the processing time?
Hi, I have just run a simple data set through a model to predict a simple true or false value (i.e. binary output). The Lift Chart/Mining Legend in Analysis Services shows three results: Score, Population Correct (%), and Predict Probability (%).
Population Correct, I believe, is the percentage of predictions it got right out of the total number of predictions it tried to make. Is this correct?
However, I can't work out how the other two are derived, in particular the 'Score'. To give a live example, the scores were as follows:

Model            Score   Pop Correct   Pred Probability
Decision Trees   0.83    76.59%        54.28%
Neural Network   0.75    67.63%        50.05%
Ideal Model              100.00%
Can anyone help with this and give a detailed explanation?
I am trying to model data in Analysis Services with the Advanced Create Mining Model function in the Excel add-in. I am having trouble creating an association model that works like the Associate button above the Advanced button.
The format of my data is like this:
OrderID Product
100 Bike
100 Helmet
100 Shoes
200 Helmet
200 basketball
200 Bat
300 Shoes
300 Socks
The Associate button works perfectly since it asks me which column is the transaction ID (OrderID) and which column I am trying to predict (Product). The Advanced Create Mining Model asks me to determine what the columns are...
OrderID = Key, Product = Input + Predict?
When I run the Advanced Create Mining Model associate, I get a browser that gives me no rules, and support for only one-item itemsets (each product, but no combinations of products).
Does anyone know what I have to do to get it to work like the associate button?
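I believe the difference is that the Associate task builds a model with a nested table: the transaction ID is the case key, and the products sit in a nested table keyed on the product column, with the nested table as the predictable part. A flat OrderID=key / Product=Input+Predict layout can't express that, which would explain the one-item itemsets. In DMX terms, what the Associate button builds looks roughly like this (a sketch using the column names from the sample data above):

CREATE MINING MODEL OrderAssociations
(
    OrderID LONG KEY,
    Products TABLE PREDICT
    (
        Product TEXT KEY
    )
)
USING Microsoft_Association_Rules

So in the advanced dialog, the question is whether you can mark Product as a nested (transaction-level) column rather than a case-level input.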
I perform data mining on all products and a specific product category. Do I need to create 2 data source views, one for all products and the other one for the specific product category? Afterward, I run the Data Mining Wizard 2 times to create 2 mining structures. I also need to add the same mining model (e.g. Bayes, Cluster) to each of these mining structures. Is there any simple way to do it?
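As far as I know, SSAS 2005 has no per-model case filter (that arrived in 2008), so if the two analyses really need different case sets, you do need two structures (or two named queries in one data source view). Adding the same models to each structure can at least be scripted in DMX rather than clicked through twice; a sketch with hypothetical structure, model, and column names:

ALTER MINING STRUCTURE [AllProducts]
ADD MINING MODEL [AllProducts_NB]
(
    CustomerKey,
    [Product Category],
    [Bike Buyer] PREDICT
)
USING Microsoft_Naive_Bayes

Run the same statement against the second structure with its name substituted.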
I'm pretty new to DTS, so forgive me if this is basic. I created a simple DTS package to run a query and export it to a text file. I can execute the package fine from my workstation through EM, but when I try to execute the job to run the package I get this error: Error = -2147467259 (80004005) Error string: Error opening datafile: Access is denied.
I think that maybe SQL Agent doesn't have the right permissions to write to that network drive. What should the permissions be?
This is probably very simple, but I can't get past this problem.
I have a report in MS Access that uses info generated by a query. One of the text fields in the query contains either the word 'Select' or the name of a course. The report should display a space if the value is 'Select', or the actual value of the field in any other case. The field can never contain a null value.
I've used: =IIf([optVoc1]="Select","",[optVoc1]) in the text box on the report, but this only returns #Error regardless of the actual content of the field.
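One common cause of a blanket #Error with an otherwise valid IIf is that the text box itself is named optVoc1, so [optVoc1] in its ControlSource refers to the control rather than the underlying query field, creating a circular reference. If that's the case here, renaming the text box (to, say, txtVoc1, a made-up name) and keeping the expression as it is should fix it:

=IIf([optVoc1]="Select","",[optVoc1])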
I created and scheduled a SQL job to run every minute to update a table based on a certain condition, but it doesn't work. The job history says successful every time, but the table doesn't get updated.
However, if I move the statement to Query Analyzer and run it as dba, it works. Thinking that it might have to do with the user the job runs as, I changed the run-as user from self to dba. But the SQL job still won't update my table.
Is there anything about user permissions or security that I can check? Or is there any other possibility?
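Two things I'd check: a job step runs in the database set on the step (master by default), not necessarily the one you were in when testing in Query Analyzer, so fully qualify the table or set the step's database; and it's worth logging the context the job actually runs under. A throwaway diagnostic, assuming you create the log table first (dbo.JobDebugLog is a made-up name):

-- add this to the job step to capture who/where it really executes
INSERT INTO dbo.JobDebugLog (login_name, db_name, run_at)
VALUES (SUSER_SNAME(), DB_NAME(), GETDATE());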
When I run the SELECT it's fine, but I cannot delete... I have done this many times and it has worked... I cannot see the error. What am I missing?
select eqnow.empnumber, eqnow_names.empnumber, eqnow_names.names
--delete
from eqnow
inner join eqnow_names on eqnow.empnumber = eqnow_names.empnumber
where eqnow_names.names is null
I get this error: Server: Msg 156, Level 15, State 1, Line 4: Incorrect syntax near the keyword 'inner'.
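In T-SQL, a DELETE that uses a join has to name the target table between DELETE and FROM; you can't just swap the SELECT list for DELETE. Assuming the rows you want to remove are the ones in eqnow (that part isn't clear from the SELECT alone), this form should work:

DELETE eqnow
FROM eqnow
INNER JOIN eqnow_names ON eqnow.empnumber = eqnow_names.empnumber
WHERE eqnow_names.names IS NULL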
When I try to install the product, I get the following error.
The SQL Server service failed to start. For more information, see the SQL Server Books Online topics, "How to: View SQL Server 2005 Setup Log Files" and "Starting SQL Server Manually."
The log tells me nothing useful. I can't start the thing manually because after clicking cancel on the error message, the installer proceeds to roll back the installation.
This is the autogenerated code from the SelectCommand of my DataAdapter, except the red text. This DataAdapter is used to fill a DataGrid. What I want to do is calculate the total memory (4 slots) per PC. This code makes the sum of all memory of all PCs together. I'm not sure if the GROUP BY clause is needed here. The generated CommandText is:

SELECT PC.ID, PC.Nummer, PC.Netwerknaam, Case_Type.Type AS Case_Type,
       Processor_Type.Type AS Processor_Type,
       Processor_Snelheid.Snelheid AS Processor_Snelheid,
       (SELECT SUM(Memory)
        FROM Memory, PC, RAM
        WHERE RAM.PcID = PC.ID AND RAM.GrootteID = Memory.ID) AS Memory,
       OS.Naam AS OS, OS_SP.Nummer AS OS_SP, Gebruiker.Naam AS Gebruiker_Naam,
       Status.Status, PC.Tagged
FROM (Status RIGHT OUTER JOIN
     ((((((((PC LEFT OUTER JOIN
         (RAM LEFT OUTER JOIN Geheugen ON RAM.GrootteID = Geheugen.ID)
         ON PC.ID = RAM.PcID)
        LEFT OUTER JOIN Case_Type ON PC.Case_TypeID = Case_Type.ID)
       LEFT OUTER JOIN OS_SP ON PC.OS_SpID = OS_SP.ID)
      LEFT OUTER JOIN Gebruiker ON PC.GebruikersID = Gebruiker.ID)
     LEFT OUTER JOIN Processor_Snelheid ON PC.Processor_SnelheidID = Processor_Snelheid.ID)
    LEFT OUTER JOIN Processor_Type ON PC.Processor_TypeID = Processor_Type.ID)
   LEFT OUTER JOIN OS ON PC.OsID = OS.ID)
  LEFT OUTER JOIN Switchbox_Details ON PC.ID = Switchbox_Details.PcID)
 ON Status.ID = PC.StatusID)
GROUP BY PC.ID, PC.Nummer, PC.Netwerknaam, Case_Type.Type, Processor_Type.Type,
         Processor_Snelheid.Snelheid, OS.Naam, OS_SP.Nummer, Gebruiker.Naam,
         Status.Status, PC.Tagged

I would like to know how to calculate the total memory for each separate PC. Hope you can help me.
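The subquery sums across every PC because it re-joins PC (and a Memory table that doesn't appear in the outer FROM clause) instead of correlating with the outer row. Correlating it on the outer PC.ID should give a per-PC total; a sketch assuming the size lives in Geheugen, in a column here called Grootte (adjust to your real column name):

SELECT PC.ID, PC.Nummer,
       (SELECT SUM(G.Grootte)
        FROM RAM AS R
        INNER JOIN Geheugen AS G ON R.GrootteID = G.ID
        WHERE R.PcID = PC.ID) AS TotalMemory
FROM PC

Drop that correlated subquery into your full statement in place of the current one; with the sum computed per row, the GROUP BY should no longer be needed.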