I am just starting to use the OLAP service, and I hit my first problem on the first day.
I set up a system DSN for OLAP in the ODBC sources, using the SQL Server driver to create a data source pointing at a SQL Server instance with the SQL 'sa' login account. Then I went into OLAP Manager and created a database. Next, under the library I set up a data source using the DSN I created earlier, choosing the OLE DB Provider for ODBC on the General tab. After this I tried to create a new cube with the wizard and got an error message saying the SQL login is invalid. I double-checked that it is the right login account, and failed several more times. Then I tried creating the data source directly under the library instead of using the DSN; the first time it failed again with the same message. The second time, I created the data source with NT authentication instead of the SQL login, and everything went fine: I was able to create a cube. But I am still wondering what prevents a SQL login account from being used when creating a data source.
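One hedged way to narrow this down, assuming a .NET toolset is available: open the same system DSN with the SQL login directly, outside OLAP Manager. If this connects, the login itself is fine and the problem lies in how the OLAP data source hands the credentials to the provider. The DSN name, user, and password below are placeholders.

using System;
using System.Data.Odbc;

// Sketch only: verify the SQL login through the same system DSN the OLAP
// data source uses ("OlapSource", "sa", and the password are placeholders).
class DsnLoginTest
{
    static void Main()
    {
        using (OdbcConnection conn = new OdbcConnection("DSN=OlapSource;UID=sa;PWD=YourPassword"))
        {
            conn.Open();
            Console.WriteLine("SQL login authenticated through the DSN.");
        }
    }
}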
I prepared an OLAP cube as the report data source in SSAS 2005. The OLAP cube consists of more than 20 dimensions and several measure groups. I then created subsets/views of the cube using the Perspective feature, limiting each subset to no more than 7 dimensions. How do I reference an OLAP cube perspective as the data source when developing a report in Report Designer? Furthermore, what is the advantage of creating multiple smaller OLAP cubes with fewer dimensions compared with one big OLAP cube with several perspectives attached to it? Thanks.
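For reference, client tools generally address a perspective as if it were a cube in its own right, so in Report Designer it should show up in the cube picker of the MDX query designer, and an MDX query can name it directly in the FROM clause. A minimal ADOMD.NET sketch under that assumption; the server, catalog, measure, and the perspective name "SalesSubset" are all placeholders:

using System;
using Microsoft.AnalysisServices.AdomdClient;

// Hedged sketch: querying a perspective by name, exactly as if it were a cube.
class PerspectiveQuery
{
    static void Main()
    {
        using (AdomdConnection conn = new AdomdConnection(
            "Data Source=localhost;Initial Catalog=MyOlapDb"))
        {
            conn.Open();
            AdomdCommand cmd = new AdomdCommand(
                "SELECT {[Measures].[Sales Amount]} ON COLUMNS FROM [SalesSubset]", conn);
            using (AdomdDataReader reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine(reader[0]);
        }
    }
}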
I've generated some mining models against an OLAP data source (a dimension). However, when I go to generate a lift chart, it seems that the only data source that can be used for input is the data source view (the relational database). Is that right, or is there something I'm missing here?
I was figuring I'd use one slice to train the model, then another slice to test the accuracy of the results. But right now it looks like I can't do that.
I think I found a bug or something. I have created a package like this:
For Loop Container
  - OLE DB source ("SQL command from variable"; the query is an OPENQUERY wrapping an MDX query)
  - Script Component (just ToString("+00,-00") transformations)
  - OLE DB destination
End loop
The MDX query is changed through the variable, which is evaluated as an expression.
When I run the package, the msmdsrv.exe process consumes more and more memory. I don't know whether it's the Analysis Services cache getting bigger or something else, but if it is the cache, how do I clear it in SSIS? (A possible approach is sketched after the query below.)
Just in case, the MDX query goes like this:
select {[Datos Resumen Mensual]}
on columns,
[Cuenta].[Cuenta].&[variable1]:[Cuenta].[Cuenta].&[variable2] on rows
from [Bd Rtd]
WHERE ([Tiempo].[Mes].&[2007-09-01T00:00:00],[Unidad Negocio].[Unidad Negocio].&[1])
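On the cache question: one hedged option, if it really is the Analysis Services cache that is growing, is to issue an XMLA ClearCache between loop iterations, for example from a Script Task. A sketch using AMO's Server.Execute, which sends raw XMLA; the server name and database ID are placeholders:

using Microsoft.AnalysisServices;

// Sketch only: clears the SSAS cache for one database via XMLA.
// "localhost" and the DatabaseID are placeholders.
class ClearSsasCache
{
    static void Main()
    {
        const string xmla = @"
<ClearCache xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine"">
  <Object>
    <DatabaseID>Bd Rtd</DatabaseID>
  </Object>
</ClearCache>";

        Server srv = new Server();
        srv.Connect("localhost");
        srv.Execute(xmla);
        srv.Disconnect();
    }
}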
Our OLAP environment involves an ETL/data warehouse/data mart server and a cube publisher server. We would like to learn how to automate the archival/restore of OLAP databases; we are currently doing it manually through OLAP Manager. Any help would be appreciated. Thanks. James.
-- James E. Bothamley Senior Database Administrator Dave & Buster's, Inc. 2481 Manana Dallas, TX 75220
Work Phone (214) 904-2296 email jbothaml@DaveAndBusters.Com
"Once in a while you can get shown the light in the strangest of places if you look at it right"
Hi. In this code, how can I create a new data source, data source view, mining model, and mining structure so that it all runs dynamically? I get a lot of errors, all about the server and database not being defined in the current code. Should I define the server first?
How can I create the data source, data source view, model, and structure? Please show the code and guide me. 'databasename' and 'srv' are unknown in my code. Do I need to add another Analysis Services reference? Please explain this code: 1) RelationalDataSource dsNew = new RelationalDataSource( datasourceName, Utils.GetSyntacticallyValidID( datasourceName, typeof(RelationalDataSource)));
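Here is a hedged sketch of how those pieces usually fit together with AMO (add a project reference to Microsoft.AnalysisServices.dll, the Analysis Management Objects assembly; that is where the missing 'srv' and 'databasename' come from). Every name below, from the server to the fact table, is a placeholder, and a mining structure would be added to db.MiningStructures with the same create-then-Update pattern:

using System.Data;
using System.Data.SqlClient;
using Microsoft.AnalysisServices;

// Hedged AMO sketch: create a database, a relational data source, and a data
// source view on a local SSAS instance. All names are placeholders.
class AmoSketch
{
    static void Main()
    {
        Server srv = new Server();
        srv.Connect("localhost");                    // the missing 'srv'

        Database db = srv.Databases.Add("MyOlapDb"); // the missing 'databasename'
        db.Update();

        // 1) Relational data source, as in the quoted snippet
        RelationalDataSource ds = new RelationalDataSource(
            "MyDS", Utils.GetSyntacticallyValidID("MyDS", typeof(RelationalDataSource)));
        ds.ConnectionString = "Provider=SQLNCLI;Data Source=localhost;" +
            "Initial Catalog=AdventureWorksDW;Integrated Security=SSPI";
        db.DataSources.Add(ds);

        // 2) Data source view: load the relational schema into a DataSet
        DataSet schema = new DataSet();
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Initial Catalog=AdventureWorksDW;Integrated Security=SSPI"))
        using (SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM dbo.FactSales", conn))
        {
            da.FillSchema(schema, SchemaType.Mapped, "FactSales");
        }

        DataSourceView dsv = new DataSourceView(
            "MyDSV", Utils.GetSyntacticallyValidID("MyDSV", typeof(DataSourceView)));
        dsv.DataSourceID = ds.ID;
        dsv.Schema = schema;
        db.DataSourceViews.Add(dsv);

        db.Update(UpdateOptions.ExpandFull);         // push everything to the server
        srv.Disconnect();
    }
}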
I have set up a new connection as a "connection from data source", but I cannot see how to use this connection to create my Data Flow source. I have tried using an OLE DB connection, but it is painfully slow: loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via a DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source/connection? Would a DSN source be faster?
On the subject of data warehouses, data cubes, and OLAP... I would like to speak frankly about data warehouses, data cubes, and OLAP (online analytical processing). Has it dawned on anyone else that these buzzwords were created by some geek who decided to take a stab at marketing, knowing that for the backwoods manager who knows little of technology, new innovative names for old concepts would help sell their products?
I mean, seriously, what is the story here? In a nutshell, and please stop me if you disagree, but isn't a data warehouse simply a database? Can't you do everything on a conventional database like SQL Server, Oracle, or DB2 that you can do on these new proprietary data warehouse constructs? Who are they trying to fool?
Take a look, for instance, at data cubes. Who hasn't noticed the striking similarity between data cubes and the views used in all the more robust databases? Also, what about OLAP? OLAP is nothing more than a report generator. There's nothing you can do with these million-dollar-price-tagged data warehouse "total solution" packages that I can't do with SQL Server, Oracle, or DB2... for that matter, Microsoft Access.
As an example, some salespeople from Metadata Corporation have the Vice President of I.T. at Healthspring in Nashville sold on their total-solution data repository, which is such a scam. All they had to do was throw a couple of buzzwords at him and they had him hypnotized.
Personally, I feel that these kinds of marketing practices undermine our industry. They help to unravel what little standards or consistency we have. What do you guys think? Stuart
Hi everybody. I want to create a data source and data source view for data mining using C#. I have created the data source and data source view and exported them to an XML file, but when I move to another computer and run that XML file, it returns an error once I run the statements to create and build the mining model. What do I need to change in the XML, or how can I run the XML on another computer successfully? And how should I build the data source and data source view?
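A guess at the cause, for what it's worth: a create script exported on one machine usually embeds that machine's server name and relational connection string, so replaying it unchanged on another computer fails. A hedged sketch that patches the script before executing it on the target instance; the file name and the OLDSQL/NEWSQL server names are placeholders:

using System.IO;
using Microsoft.AnalysisServices;

// Sketch only: patch machine-specific values in an exported XMLA script and
// run it on the new server. All names are placeholders.
class PortScript
{
    static void Main()
    {
        string xmla = File.ReadAllText("CreateMiningObjects.xmla");
        xmla = xmla.Replace("Data Source=OLDSQL", "Data Source=NEWSQL");

        Server srv = new Server();
        srv.Connect("localhost");
        srv.Execute(xmla);
        srv.Disconnect();
    }
}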
Today I was making a few reports. When I tested the reports in Visual Studio they worked great: I got the expected results. But when I deployed the reports to our report server, the problems started. When I click on the directory where my reports are deployed, I see my 4 reports; up to that point everything works. But when I click on a report to view the results, it goes wrong with the error: "Cannot create a connection to data source 'Live'" (Live is the name of our data source).
We are using Windows logons and I am sure that I have all the rights on the server; I gave myself 'sysadmin' rights, so it should work. I have also tried it with all the roles assigned to my account, but it still won't work.
When I modify the data source and point it to another server and database, it works. The data source 'Live' is on an x64 MS SQL server, and the other data source is on an x86 MS SQL server. Maybe that is the problem?
Hello. If I wrote the next eBay (yes I know, yawn, snore) and I had a database with 5 million auction items in it, what would be a really good strategy for getting a search done very quickly? Would it involve something called OLAP and/or "data mining"? The only technology I am familiar with is plain SQL Server databases with stored procedures. I think I'd be guessing correctly to say that this technology simply wouldn't be fast enough *on its own* to do super-fast queries against massive amounts of data. Any insights would be of great interest. Thanks. -Frameworker.
I was trying to load data using an SSIS Data Flow Task: the OLE DB Source was a view, going to an OLE DB Destination (SQL Server). This view returns 420,591 rows in Query Analyzer in 21 seconds; the row length is 925. When I executed the Data Flow Task from SSIS, I had to stop the process after 30 minutes because only 2,000 rows had been retrieved. I modified the view to return TOP 440000 and reran; this time all 420,591 rows were retrieved and written in 22 seconds. Next, I tried TOP 100 PERCENT; again, only 2,000 rows were returned after 30 minutes. TempDB is on a separate SAN RAID group with 200 GB free, the databases are on a separate drive with 200 GB free, the server has 13 GB of memory, and no other processes were executing.
The only way I could populate the table from SSIS was with an Execute SQL Task containing a hard-coded INSERT into the table selecting from the view (35 seconds).
Has anyone else experienced this or a similar issue? Does anyone have a solution or explanation?
I have an OLAP server and would like to use my Chart FX software without having to purchase the OLAP extensions for the server, due to budget constraints (ouch).
I've heard that it is possible (although limited) to attach to an OLAP cube using SQL SELECT statements (not MDX).
Basically, I would like to pull the OLAP data in the relational sense.
Is this possible? If so, are there any good articles on this subject?
I'm new to OLAP and would like to transition slowly.
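One commonly mentioned relational-style route, sketched here with placeholder names throughout: define a SQL Server linked server over the cube (created with sp_addlinkedserver and the MSOLAP OLE DB provider), then query it with OPENQUERY so the result comes back as an ordinary rowset. The inner text is still MDX, but everything outside it is plain SQL, which may be enough for Chart FX; SSAS also accepts a very limited SQL SELECT dialect directly, though it is far more restricted.

using System;
using System.Data.SqlClient;

// Hedged sketch: read a cube through a linked server named OLAP_SRV as a flat
// rowset. The linked server, cube, measure, and hierarchy are placeholders.
class FlattenedCubeQuery
{
    static void Main()
    {
        const string sql = @"
SELECT * FROM OPENQUERY(OLAP_SRV,
    'SELECT {[Measures].[Sales Amount]} ON COLUMNS,
            [Product].[Category].MEMBERS ON ROWS
     FROM [Sales]')";

        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                        Console.Write(reader[i] + "\t");
                    Console.WriteLine();
                }
            }
        }
    }
}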
The drillthrough in my cube works fine except in cases where a dimension member is null. For example, I have the dimension PRODUCT (dim) - PROD_TYPE_CD (name = PROD_TYPE_CD || '-' || PROD_TYPE_NM) - PROD_CD - PROD_NM.
So if the data is like the following, where PROD_TYPE_CD is null: - (name = '-FOOD') - 123 - bread
The drillthrough displays no data no matter whether I select PROD_TYPE_CD, PROD_CD, or PROD_NM in the dimension drop-down, although if I query the database directly I find data that should be displayed in the drillthrough. Any ideas why this is happening and how to solve it?
After reading some threads on this forum about accessing OLAP data from SSIS via an OLE DB Source component (and trying it myself and getting that "pcrsstore.cpp line 325" error), I have worked around the problem by using a Script Task component.
This calls a .NET assembly I wrote that uses ADOMD.NET to get at the data using an MDX query, and then actually posts an event to a Notification Services instance via a stored proc.
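For anyone curious what that assembly boils down to, here is a minimal ADOMD.NET sketch along the lines described; the connection string, MDX, and names are placeholders, and the Notification Services call is reduced to a comment:

using System;
using Microsoft.AnalysisServices.AdomdClient;

// Hedged sketch of the ADOMD.NET piece: run an MDX query and read the result.
class CubeToNotification
{
    static void Main()
    {
        using (AdomdConnection conn = new AdomdConnection(
            "Data Source=localhost;Initial Catalog=MyWarehouse"))
        {
            conn.Open();
            AdomdCommand cmd = new AdomdCommand(
                "SELECT {[Measures].[Sales Amount]} ON COLUMNS FROM [Sales]", conn);
            CellSet cs = cmd.ExecuteCellSet();
            Console.WriteLine(cs.Cells[0].FormattedValue);
            // ...post the event to the Notification Services application
            // database here, e.g. via a stored proc call.
        }
    }
}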
Writing, deploying, and maintaining a .NET assembly is more than I wanted to do, but it was worth the effort since now I know a little about extending SSIS when I need to.
But what I really wanted to do was the following, all from within the core components of SSIS:
1. Import my OLTP data into my relational warehouse (already done using SSIS)
2. Process the cube (already done using SSIS)
3. Access the cube via MDX and post an event to Notification Services
It's this last step that needed some special work because of SSIS's apparent limitation accessing cube data (although posting the event to NS would be easy because SSIS can call a stored proc on the NS application database). So I'm hoping that a future release or service pack of SSIS will give us some components to run MDX and get the OLAP data into the pipeline as if it were from any other source. Thanks for reading, and if anyone has any suggestions on a better way to achieve what I need please let me know! -Larry
I am using a Data Flow Task. The data flow source uses OLE DB Provider for OLAP 9.0 to connect to my SSAS instance; "SQL command" is my access mode, and an MDX query extracts the data. The data flow destination is a SQL Server table. The error said: Data Flow Task: OLE DB Source [579]: The output "OLE DB Source Output" (589) references an external data type that cannot be mapped to a Data Flow task data type. I guess it is an implicit data type conversion problem, but how do I solve it?
I'm working with a PivotTable in Excel 2000 connected to an OLAP server (from a SQL Server 7 installation). The pivot is intended to analyze sales during April 2001. Yesterday I found out that OLAP/Excel returned/displayed inconsistent data: the 'April Total' value is NOT equal to the 'Quarter 2 Total' (I already inspected the underlying database and am sure that there is absolutely NO data for the months after April 2001). The value for 'April Total' is the correct one. I'm not sure whether the problem resides in the OLAP server or in Excel (the pivot) itself. For anyone who would like to help me, I would be glad to supply screenshots (just email me). Please help.
I have a .rdl file that was exported out of ProClarity's Desktop Professional 6.1 using their RS plug-in. I uploaded the file into Report Manager and when I execute it, I get the following error:
An error has occurred during report processing.
Query execution failed for data set 'Three_Month_Funding_Trend'. Unable to recognize the requested property ID 'ReturnCellProperties'.
Does anyone have any idea what this is referring to? Does it have something to do with my configuration, my connection, or the report definition? Other reports, such as DBMS-based reports, work fine.
1. SQL Server data warehouse
2. OLAP cube in Analysis Services
My question is: if my SQL Server data warehouse changes (has data appended), will my OLAP cube get the appended data?
Is it possible for my OLAP cube to always have the appended data when my data warehouse changes? If yes, how do I do that without re-deploying and re-processing my Analysis Services project?
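A hedged sketch of one answer, using AMO with placeholder names: process just the affected measure-group partition with ProcessAdd so newly appended fact rows reach the cube without redeploying the project. Changed dimension members usually need a ProcessUpdate on the affected dimensions as well, and proactive caching is the zero-touch alternative in SSAS 2005.

using Microsoft.AnalysisServices;

// Sketch only: incremental processing of one partition. Server, database,
// cube, measure group, and partition names are all placeholders.
class IncrementalProcess
{
    static void Main()
    {
        Server srv = new Server();
        srv.Connect("localhost");

        Database db = srv.Databases.FindByName("MyOlapDb");
        Cube cube = db.Cubes.FindByName("Sales");
        MeasureGroup mg = cube.MeasureGroups.FindByName("Fact Sales");
        Partition part = mg.Partitions.FindByName("Fact Sales");

        part.Process(ProcessType.ProcessAdd);  // reads only the new fact rows
        srv.Disconnect();
    }
}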
Hello. Can I import an OLTP (relational) DB as a data source into SQL Server Analysis Services 2005 and then use the Cube Wizard and the new Data Source View feature to create the OLAP model? Or do I have to first design an OLAP data warehouse with a star schema and then import that DW as a data source into my Analysis Services project? With SQL Server 2000 OLAP, that would be the way to go, but with SQL Server 2005 it seems as though the wizard and Data Source View features do half the work for you. I have an OLTP DB and am not sure which route I should take! Any suggestions/input would be much appreciated. Thanks in advance. Regards, Russzee
I have created a database and an OLAP cube in Analysis Services using SSAS. In SSAS I used a data source that reads SQL tables to populate the OLAP cube. Now I have added some more data to my SQL tables, and when I try to deploy the cube, the newly added data is not populated into it. So I want to run an SSIS package that will import the data from the SQL tables into this OLAP cube.
Can you please help me with how to write this SSIS package to import data from the SQL tables into the OLAP cube? (Very urgent issue.)
[DTS.Pipeline] Error: "component "Excel Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
and also this:
[Excel Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns. The column "Fiscal Week" needs to be updated in the external metadata column collection. The column "Fiscal Year" needs to be updated in the external metadata column collection. The column "1st level" needs to be added to the external metadata column collection. The column "2nd level" needs to be added to the external metadata column collection. The column "3rd level" needs to be added to the external metadata column collection. The "external metadata column "1st Level" (16745)" needs to be removed from the external metadata column collection. The "external metadata column "3rd Level" (16609)" needs to be removed from the external metadata column collection. The "external metadata column "2nd Level" (16272)" needs to be removed from the external metadata column collection.
I tried going to the data flow -> Excel connection -> Advanced Editor for the Excel source -> Input and Output Properties, and tried to refresh the affected columns. It seems that somehow the 3 columns are not read in from the source file, and also that Fiscal Year and Fiscal Week are not set up properly in my data destination. Has anyone faced such errors before?
RE: XML Data source .. Expression? Variable? Connection? Error: unable to read the XML data.
I want my XML data source to be an expression, as I will be looping through a directory of XML files.
I don't see the expression property or the connection property anywhere.
I tried setting the XMLData property to @[User::filename], but that results in:
Information: 0x40043006 at Load XML Files, DTS.Pipeline: Prepare for Execute phase is beginning.
Error: 0xC02090D0 at Load XML Files, XML Source [108]: The component "XML Source" (108) was unable to read the XML data.
Error: 0xC0047019 at Load XML Files, DTS.Pipeline: component "XML Source" (108) failed the prepare phase and returned error code 0xC02090D0.
Information: 0x4004300B at Load XML Files, DTS.Pipeline: "component "OLE DB Destination" (341)" wrote 0 rows.
Task failed: Load XML Files
Information: 0xC002F30E at Bad, File System Task: File or directory "d:jcpxmlLoadjcp2.xml.bad" was deleted.
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "Package.dtsx" finished: Failure.
The program '[3312] Package.dtsx: DTS' has exited with code 0 (0x0).
I have a question about how to handle structural data model changes in a PowerPivot data source. Suppose I'm developing a star model in SQL Server, and sometimes a data type changes or a field in a table is renamed. It seems to me that PowerPivot does not handle this as gracefully as Analysis Services MD (mostly) does: I received an error because of a wrong field name, and no error at all when a data type changed in PowerPivot. Is this common, or am I doing something wrong here? Does this mean that every time the data model changes, the PowerPivot model should be recreated? Or am I missing the clue here?