I'm working with a PivotTable in Excel 2000 that is connected to an OLAP server (from a SQL Server 7 installation). The pivot is intended to analyze sales during April 2001. Yesterday I found that OLAP/Excel returned inconsistent data: the 'April Total' value is NOT equal to the 'Quarter 2 Total', even though I have already inspected the underlying database and am sure there is absolutely NO data for months after April 2001. The 'April Total' value is the correct one. I'm not sure whether the problem lies in the OLAP server or in Excel (the pivot) itself. For anyone willing to help, I would be glad to supply screenshots (just email me).
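For reference, the check I ran against the relational source looked roughly like the following; the table and column names (FactSales, SalesAmount, OrderDate) are placeholders for illustration, not the real schema:

-- Placeholder schema: FactSales/SalesAmount/OrderDate stand in for the real fact table.
SELECT DATEPART(month, OrderDate) AS SalesMonth,
       SUM(SalesAmount)           AS MonthTotal
FROM   FactSales
WHERE  OrderDate >= '20010401' AND OrderDate < '20010701'
GROUP BY DATEPART(month, OrderDate)
-- Only month 4 comes back, so the 'Quarter 2 Total' should equal the 'April Total'.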
Please help.
I am a newcomer to Microsoft database technology and appear to have come across an issue with the SUM function in SQL Server Mobile Edition.
I am running Visual Studio 2005 and have created two tables, Orders and OrderLines, which are set up in master-detail fashion.
The SQL statement I create in the Query Builder is as follows:
SELECT Orders.OrderNo, Orders.OrderDate, Orders.Priority, Orders.Address,
       SUM(OrderLines.Quantity * OrderLines.QualityReference) AS Total
FROM Orders, OrderLines
WHERE Orders.OrderNo = OrderLines.OrderNo
GROUP BY Orders.OrderNo, Orders.OrderDate, Orders.Priority, Orders.Address
Now, the SUM returns a total over all records in the OrderLines table, not just the records whose OrderNo matches Orders.OrderNo.
Can someone out there please clarify whether it's an issue with my code or a bug in SQL Server Mobile?
Here are a couple of screenshots which will demonstrate what I mean.
Here are the contents of the Orders and OrderLines tables. Each order has only one line item in it.
When performing the summation over the OrderLines table, it produces the SUM of all records in OrderLines rather than grouping according to the GROUP BY clause. See the following screenshot.
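For comparison, here is the same query rewritten with an explicit INNER JOIN (a sketch using the same table and column names); on the full SQL Server engine both forms return one total per order, so if this version also sums every row it would point at the engine rather than the query:

SELECT Orders.OrderNo, Orders.OrderDate, Orders.Priority, Orders.Address,
       SUM(OrderLines.Quantity * OrderLines.QualityReference) AS Total
FROM Orders
INNER JOIN OrderLines ON Orders.OrderNo = OrderLines.OrderNo
GROUP BY Orders.OrderNo, Orders.OrderDate, Orders.Priority, Orders.Address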
When I use the following to get the stored procedure's parameters, the IN variables are returned correctly as "IN", but all the OUT variables are returned as "INOUT" in PARAMETER_MODE instead of "OUT". Is this a bug?
SELECT PARAMETER_MODE, PARAMETER_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.PARAMETERS
WHERE SPECIFIC_NAME = 'SPName_with_IN_OUT_Variables'
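In case it matters, my understanding is that T-SQL OUTPUT parameters also accept input, which might be why the view reports them as INOUT; I'd appreciate confirmation. As an alternative check, here is a sketch against the catalog views (assuming SQL Server 2005 or later, with the same placeholder procedure name):

SELECT p.name,
       TYPE_NAME(p.system_type_id) AS data_type,
       CASE WHEN p.is_output = 1 THEN 'OUTPUT' ELSE 'IN' END AS parameter_mode
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID('SPName_with_IN_OUT_Variables')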
STDEV() gives incorrect values with reasonable input.
I have a table filled with GPS readings. I've got a column LATITUDE (FLOAT) with about 20,000 records between 35.6369018 and 35.639890. (The values agree in the first five digits of precision; what can I say, it's a good GPS.)
Here's what happens when I ask SQL Server ("9.00.1399.06 (IntelX86)") to compute the standard deviation of the latitude:
-- Transact-SQL STDEV function:
SELECT STDEV(LATITUDE)
FROM GPSHISTORY
WHERE STATTIME BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
0
-- Zero. ZERO?! Let's re-implement the standard deviation from its definition using other aggregate functions:
DECLARE @AVERAGE FLOAT;

SELECT @AVERAGE = AVG(LATITUDE)
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;

SELECT SQRT(SUM(SQUARE(LATITUDE - @AVERAGE)) / COUNT(LATITUDE))
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
6.03401924005392E-06
-- That's better. Maybe STDEV is using fixed-point arithmetic?
SELECT STDEV(10 * LATITUDE) / 10
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
4.77267753808509E-06
SELECT STDEV(100 * LATITUDE) / 100
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
1.66904329068838E-05
SELECT STDEV(1000 * LATITUDE) / 1000
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
8.11904280806654E-06
-- The standard deviation should, of course, scale linearly with a constant multiplier, e.g.
DECLARE @AVERAGE FLOAT;

SELECT @AVERAGE = AVG(LATITUDE)
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;

SELECT SQRT(SUM(SQUARE(100 * (LATITUDE - @AVERAGE))) / COUNT(LATITUDE)) / 100
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
6.03401924005389E-06
Standard deviation can be computed in a numerically stable way, although that does require traversing the dataset twice. This calculation is not being done correctly. Incidentally, SQRT(VAR(Latitude....)) gives 4.80354E-4, which is also way off.
I will redefine STDEV with a stored procedure similar to the above, but the algorithm used to compute VAR, STDEV, etc. should be reviewed and fixed.
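My guess at the cause, purely an assumption on my part and not anything documented: a single-pass "sum of squares minus square of the sum" formula loses essentially all of its precision when every value agrees in the first six or so significant digits. A sketch that reproduces the cancellation against the same column:

-- Assumed, not documented: the kind of one-pass formula STDEV might be using internally.
-- With ~20,000 latitudes all near 35.63, the two terms below agree to roughly 15
-- significant digits, so their difference is mostly rounding noise (it can even go negative).
SELECT (COUNT(LATITUDE) * SUM(SQUARE(LATITUDE)) - SQUARE(SUM(LATITUDE)))
       / (COUNT(LATITUDE) * (COUNT(LATITUDE) - 1.0)) AS one_pass_variance
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;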
1. Data Warehouse
2. OLAP cube in Analysis Services
My question is: if my data warehouse changes (has appended data), will my OLAP cube get the appended data?

Is it possible for my OLAP cube to always have the appended data whenever my data warehouse changes? If yes, how do I do it without re-deploying and re-processing my Analysis Services project?
When I double-click on a database in Enterprise Manager, the "Database Space Available" figure (in the Database tab) always shows 0.00 MB even though there is space available. This only happens to one of our databases. The data device is 1.8 GB and the log device is 360 MB, and I've allocated all the space from the devices to the database.

We're using SQL Server 6.5 SP3 on NT 4.0 SP3.

Does anybody know why SQL Server is reporting 0.00 MB when you know there is space available? I need to know the data space available and the log space available so that I can expand them when space runs low. Please help.
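In case it helps diagnose this, here is what I plan to run from the query window instead of Enterprise Manager; this is only a sketch, assuming sp_spaceused and DBCC UPDATEUSAGE behave on 6.5 as they are documented for later versions, and the database name is a placeholder:

USE YourDatabase            -- placeholder name
GO
DBCC UPDATEUSAGE(0)         -- recompute allocation counts in case they have drifted
GO
EXEC sp_spaceused           -- reports database_size and unallocated space
GO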
I have installed the trial Enterprise Edition of SQL Server 2005 and am trying to recreate a DTS package. I am using an iSeries OLE DB provider. In the data flow, the connection to a physical file that holds string data gets translated into binary data. Has anyone gotten a download to work with the correct data?
Our OLAP environment involves an ETL/Data Warehouse/Data Mart server and a cube publisher server. We would like to learn how to automate the archival/restore of OLAP databases. We are currently doing it manually through OLAP Manager. Any help would be appreciated. Thanks. James.
-- James E. Bothamley Senior Database Administrator Dave & Buster's, Inc. 2481 Manana Dallas, TX 75220
Work Phone (214) 904-2296 email jbothaml@DaveAndBusters.Com
"Once in a while you can get shown the light in the strangest of places if you look at it right"
On the subject of data warehouses, data cubes and OLAP: I would like to speak frankly about data warehouses, data cubes and OLAP (on-line analytical processing). Has it dawned on anyone else that these buzzwords were created by some geek who decided to take a stab at marketing, knowing that for the backwoods manager who knows little of technology, new and innovative names for old concepts would help to sell their products?

I mean seriously, what is the story here? In a nutshell, and please stop me if you disagree, but isn't a data warehouse simply a database? Can't you do everything on a conventional database like SQL Server, Oracle or DB2 that you can do on these new proprietary data warehouse constructs? I mean, who are they trying to fool?

Take a look, for instance, at data cubes. Who hasn't noticed the striking similarity between data cubes and the views used in all the more robust databases? Also, what about OLAP? OLAP is nothing more than a report generator. There's nothing you can do with these million-dollar price-tagged data warehouse total-solution packages that I can't do with SQL Server, Oracle or DB2, or for that matter Microsoft Access.

As an example, some salespeople from Metadata Corporation have the Vice President of I.T. for Healthspring in Nashville sold on their total-solution data repository, which is such a scam. All they had to do was throw a couple of buzzwords at him and they have him hypnotized.

Personally, I feel that these kinds of marketing practices undermine our industry. They help to unravel what little standards or consistency we have. What do you guys think?

Stuart
I am starting to use OLAP Services, and I ran into the first problem on the first day of using it.
I set up a system DSN for OLAP under ODBC sources, using the SQL Server driver to create a data source to a SQL Server, with the SQL 'sa' login account. Then I went into OLAP Manager and created a database. Then, in the Library, I set up a data source using the DSN I created before; in the General tab I used the OLE DB Provider for ODBC. After this I tried to create a new cube using the wizard and got an error message saying the SQL login is invalid. I double-checked that it is the right login account, and failed several times. Then I tried creating the data source directly under the Library, not using the DSN I created before. The first time it failed again with the same message. The second time I created the data source, I used NT authentication instead of the SQL login, and everything went fine: I was able to create a cube. But I am still wondering what is preventing the use of a SQL login account when creating a data source.
Hello, if I wrote the next eBay (yes I know, yawn, snore) and I had a database with 5 million auction items in it, what would be a really good strategy to get a search done very quickly? Would it involve something called OLAP and/or "data mining"? The only technology I am familiar with is plain SQL Server databases with stored procedures. I think I'd be guessing correctly to say that this technology simply wouldn't be fast enough on its own to handle super-fast queries against massive amounts of data. Any insights would be of great interest. Thanks. -Frameworker.
I have an OLAP server and would like to use my Chart FX software without having to purchase the OLAP extensions on the server due to budget restraints (ouch).
I've heard that it is possible (although limited) to attach to an OLAP cube using SQL SELECT statements (not MDX).
Basically, I would like to pull the OLAP data in the relational sense.
Is this possible? If so, are there any good articles on this subject?
I'm new to OLAP and would like to transition slowly.
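The closest thing I have found so far is the linked-server route: register the cube's server through the MSOLAP OLE DB provider and query it with OPENQUERY. This is only a sketch I have not yet verified; the server, catalog, cube, dimension and measure names are all placeholders, and the pass-through text is whatever MDX (or the provider's limited SQL dialect) is needed:

-- Placeholder names throughout (OLAP_SRV, MyOlapServer, MyOlapDatabase, the [Sales] cube).
EXEC sp_addlinkedserver
     @server = 'OLAP_SRV',
     @srvproduct = '',
     @provider = 'MSOLAP',
     @datasrc = 'MyOlapServer',
     @catalog = 'MyOlapDatabase';

-- The pass-through query runs on the OLAP server; the outer SELECT sees a flat rowset.
SELECT *
FROM OPENQUERY(OLAP_SRV,
     'SELECT [Measures].[Sales Amount] ON COLUMNS,
             [Product].[Category].MEMBERS ON ROWS
      FROM [Sales]');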
The drillthrough in my cube is working fine except for the cases where the dimension member is null. For example, I have the dimension PRODUCT (dim) with the levels:
- PROD_TYPE_CD (name = PROD_TYPE_CD || '-' || PROD_TYPE_NM)
- PROD_CD
- PROD_NM

So if the data is like the following, where PROD_TYPE_CD is null:
- (name = -FOOD)
- 123
- bread
The drillthrough is not displaying any data no matter whether I select PROD_TYPE_CD, PROD_CD or PROD_NM in the dimension drop-down, although if I query the DB directly I do find data that should display in the drillthrough. Any ideas why this is happening and how to solve it?
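One workaround I am considering, though only a sketch and not a confirmed fix: replace the NULL code in the view or named query the dimension is built on, so the member key is never NULL. 'UNK' below is an arbitrary placeholder, and the statement is written in T-SQL; if the source uses || concatenation (as the name expression above suggests), adjust the syntax accordingly:

SELECT COALESCE(PROD_TYPE_CD, 'UNK')                      AS PROD_TYPE_CD,
       COALESCE(PROD_TYPE_CD, 'UNK') + '-' + PROD_TYPE_NM AS PROD_TYPE_NAME,
       PROD_CD,
       PROD_NM
FROM   PRODUCT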
I prepared an OLAP cube for the report data source in SSAS 2005. The OLAP cube consists of more than 20 dimensions and several measure groups. I then created subsets/views of the OLAP cube using the Perspective feature, limiting each subset to no more than 7 dimensions. How do I reference an OLAP cube perspective as the data source when developing the report in Report Designer? Furthermore, what is the advantage of creating multiple smaller OLAP cubes with fewer dimensions compared to one big OLAP cube with several perspectives/views attached to it? Thanks.
After reading some threads on this forum about accessing OLAP data from SSIS via an OLE DB Source component (and trying it myself and getting that "pcrsstore.cpp line 325" error), I have worked around the problem by using a Script Task component.
This calls a .NET assembly I wrote that uses ADOMD.NET to get at the data using an MDX query, and then actually posts an event to a Notification Services instance via a stored proc.
Writing, deploying, and maintaining a .NET assembly is more than I wanted to do, but it was worth the effort since now I know a little about extending SSIS when I need to.
But what I really wanted to do was the following, all from within the core components of SSIS:
1. Import my OLTP data into my relational warehouse (already done using SSIS).
2. Process the cube (already done using SSIS).
3. Access the cube via MDX and post an event to Notification Services.

It's this last step that needed some special work because of SSIS's apparent limitation in accessing cube data (although posting the event to NS would be easy, because SSIS can call a stored proc on the NS application database). So I'm hoping that a future release or service pack of SSIS will give us some components to run MDX and get the OLAP data into the pipeline as if it were from any other source. Thanks for reading, and if anyone has any suggestions on a better way to achieve what I need, please let me know! -Larry
I am using a Data Flow task, and the data flow source uses OLE DB for OLAP 9.0 to connect to my SSAS instance. SQL command is my access mode, and an MDX query extracts the data. The data flow destination is a SQL Server table. The error said: Data Flow Task: OLE DB Source [579]: The output "OLE DB Source Output" (589) references an external data type that cannot be mapped to a Data Flow task data type. I guess it is an implicit data type conversion problem, but how do I solve it?
I have a .rdl file that was exported out of ProClarity's Desktop Professional 6.1 using their RS plug-in. I uploaded the file into Report Manager and when I execute it, I get the following error:
An error has occurred during report processing.
Query execution failed for data set 'Three_Month_Funding_Trend'. Unable to recognize the requested property ID 'ReturnCellProperties'.
Does anyone have any idea what this is referring to? Does it have something to do with my configuration, connection, or the report definition? Other reports, such as DBMS-based reports, work fine.
I've generated some mining models against an OLAP data source (a dimension). However, when I go to generate a lift chart, it seems that the only data source that can be used for input is the data source view (the relational database). Is that right? Or is there something I'm missing here?
I was figuring I'd use one slice to train the model, then another slice to test the accuracy of the results. But right now it's looking like I can't do that.
My problem is how a stored procedure returns data to the calling environment. If it is field values, then we can use OUT parameters. But in the case where my SP returns a number of rows, how do we capture the data and return it to the calling environment? I know that we can use cursors in Oracle, but how is it done in SQL Server?

Please help me with some small examples. Any help is really appreciated. Thanks, Rams
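From what I have gathered so far, the usual SQL Server pattern seems to be that a procedure returns rows simply by ending with a SELECT, and the caller either reads that result set directly or captures it with INSERT ... EXEC; no cursor or OUT parameter is needed. A minimal sketch with made-up object names (please correct me if this is wrong):

-- Hypothetical procedure: the final SELECT is the result set returned to the caller.
CREATE PROCEDURE dbo.GetEmployeesByDept
    @DeptID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmployeeID, EmployeeName
    FROM   dbo.Employee
    WHERE  DeptID = @DeptID;
END
GO

-- Caller: read the rows directly...
EXEC dbo.GetEmployeesByDept @DeptID = 10;

-- ...or capture them in a table for further processing.
CREATE TABLE #Result (EmployeeID int, EmployeeName nvarchar(100));
INSERT INTO #Result EXEC dbo.GetEmployeesByDept @DeptID = 10;
SELECT * FROM #Result;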
SQL Server on Small Business Server for Windows 2003. A client application updates records in the DB, i.e. the content of a field is updated from "First" to "Second". The client application is closed and opened again, and the DB returns the value "Second". The next morning _SOME_ (not all) of the updates seem to be missing: the DB returns the value "First". The client software has run for years in several thousand installations, with MSDE/SQL Server on Windows 2000 and 2003 servers. This is the first time we have seen this spooky behavior, and it is the first time we are targeting SBS. Could there be some unflushed cache between the client and SQL Server? Poul Petersen
I have a table which has a row that contains a subreport. I'd like to set the visibility of the row to hidden if the subreport returns no data. I've played with a few techniques but can't seem to get it right. Any ideas?
DECLARE @strRole nvarchar(15), @ContactID int

SELECT @strRole = Role, @ContactID = ContactID
FROM Contact
WHERE UserName = @strUserName

SELECT DISTINCT
    Contact.ContactID ID,
    UPPER(Surname + 'gd ' + Forename) AS Description,
    UPPER(Surname + ' ' + Forename + ' ' + ContactReference) AS Description_CR,
    UPPER(ISNULL(ContactReference, '')) ContactReference
FROM Contact
LEFT JOIN AssignedEmployee ON Contact.ContactID = AssignedEmployee.ContactID
WHERE RUEmployee = 1
  AND ( ((@CourseID = 0 AND @blnIsSearch = 1) OR COALESCE(AssignedEmployee.CourseID, 0) = @CourseID) )
  AND (@ProjectID = 0 OR COALESCE(AssignedEmployee.ProjectID, 0) = @ProjectID)
  AND (
        (@strRole = 'MIS_TUTOR'
         AND (AssignedEmployee.ContactID = @ContactID
              OR CourseID IN (SELECT CourseID FROM AssignedEmployee
                              WHERE ContactID = @ContactID AND Lead = 1)
              OR AssignedEmployee.ProjectID IN (SELECT ProjectID FROM AssignedEmployee
                                                WHERE CourseID IS NULL AND Lead = 1 AND ContactID = @ContactID))
        )
        OR (@strRole <> 'MIS_TUTOR')
      )

SET CONCAT_NULL_YIELDS_NULL ON
GO
Now, if I run this stored procedure in Query Analyzer like so...
But if I lift the SQL out of the stored procedure and run it in Query Analyzer like so...
SET CONCAT_NULL_YIELDS_NULL OFF
DECLARE @ProjectID int, @CourseID int, @blnIsSearch int, @strUserName nvarchar(20)
SET @ProjectID = 0
SET @CourseID = 0
SET @blnIsSearch = 1
SET @strUserName = 'K_T'
DECLARE @Role nvarchar(15), @ContactID int

SELECT @Role = Role, @ContactID = ContactID
FROM Contact
WHERE UserName = @strUserName

PRINT @ContactID

SELECT DISTINCT
    Contact.ContactID ID,
    UPPER(Surname + ' ' + Forename) AS Description,
    UPPER(Surname + ' ' + Forename + ' ' + ContactReference) AS Description_CR,
    UPPER(ISNULL(ContactReference, '')) ContactReference
FROM Contact
LEFT JOIN AssignedEmployee ON Contact.ContactID = AssignedEmployee.ContactID
WHERE RUEmployee = 1
  AND ( ((@CourseID = 0 AND @blnIsSearch = 1) OR COALESCE(AssignedEmployee.CourseID, 0) = @CourseID) )
  -- the above line was modified to make sure only employees explicitly assigned to a project are brought back, unless it's a search
  AND (@ProjectID = 0 OR COALESCE(AssignedEmployee.ProjectID, 0) = @ProjectID)
  AND (
        (@Role = 'MIS_TUTOR'
         AND ( (AssignedEmployee.ContactID = @ContactID
                OR CourseID IN (SELECT CourseID FROM AssignedEmployee
                                WHERE ContactID = @ContactID AND Lead = 1)) )
         OR AssignedEmployee.ProjectID IN (SELECT ProjectID FROM AssignedEmployee
                                           WHERE CourseID IS NULL AND Lead = 1 AND ContactID = @ContactID)
        )
        OR (@Role <> 'MIS_TUTOR')
      )
SET CONCAT_NULL_YIELDS_NULL ON
I only get 5 records returned?!
So why do I get a difference when it's the same SQL?
Username 'K_T' is of role 'MIS_TUTOR', therefore @Role = 'MIS_TUTOR'.
Any help in unravelling this mystery is appreciated!
Hello, can I import an OLTP (relational) DB as a data source into SQL Server Analysis Services 2005 and then use the Cube Wizard and the new Data Source View feature to create the OLAP model? Or do I have to first design an OLAP data warehouse with a star schema and then import that DW as a data source into my Analysis Services project? With SQL Server 2000, OLAP would be the way to go, but with SQL Server 2005 it seems as though the wizard and Data Source View features do half the work for you. I have an OLTP DB and am not sure which route I should take! Any suggestions / input would be much appreciated. Thanks in advance. Regards, Russzee
I have created a database and an OLAP cube in Analysis Services using SSAS. In SSAS I have used a data source that uses SQL tables to populate the OLAP cube. Now, when I add some more data to my SQL tables and try to deploy the cube, the newly added data does not get populated into the cube. So I want to run an SSIS package that will import data from the SQL tables into this OLAP cube.

Can you please help me with how to write this SSIS package to import data from the SQL tables into the OLAP cube? (Very urgent issue.)
I have a very weird and frustrating problem at a customer. For the last two months we have been busy developing a report model, so users are able to create their own reports using Report Builder.

Suddenly the 1:n relations seem to be screwed! I have a simple relation between entities: a customer can have multiple (optional) invoices, and an invoice always has exactly one customer.

When I select the customer name, invoice number and #Invoices in Report Builder, I expect there to be only one invoice per invoice number. (When I select this in a query myself, that is indeed the case.) But Report Builder gives 2 as a result. This suddenly happens to all my 1:n roles.

It's not only in the #Invoices field; even when I select the invoice amount, the amount is multiplied by 2.

I opened a backup of my project from one month ago, with the same relation in it, tried to make the same report, and it gave me the right results. But of course I don't want to waste a month's work, so I need to get this model working!
I am migrating from 2000 to 2005 and in the process of rewriting DTS to SSIS. So far so good.
I have an import of a flat file that I need to reproduce; it works fine in DTS but throws an error in SSIS.

The file contains details of products and has a Long Description field that may contain lots of different characters, CR & LF amongst them. Its source and destination columns are of Text type.
The file is tab delimited, text delimited with double quotes, with row delimiters of CRLF and column names on row 1.
The error SSIS throws is "The column delimiter for column "LongDescription" was not found"
I imagine (perhaps wrongly) that this is because of the extra CRLFs, but it worked fine in DTS. It also previews fine, and lets me define the column properties OK in the SSIS package designer.

Any help greatly appreciated. I am trying to avoid the obvious fix of replacing those characters in that field at this point, as the export is currently shared between a live production server and my test 2005 rig.
The test.sql scripts I write to test CLR stored procedures run successfully, but when I want to display the resulting data in the database with a simple "SELECT * from Employee"
I get the result as:
Name Address
---- -------
No rows affected.
(1 row(s) returned)
But the actual row is not displayed, whereas I would expect to see something like:
Name Address
---- -------
John Doe
No rows affected.
(1 row(s) returned)
I have another database project where doing the same thing displays the row information, but there doesn't seem to be much difference between the two projects.
I created a Market Basket data mining model with association rules, which I want to query and show in a report. Everything works fine: when I preview the result in the Reporting Services Data tab, I see some sort of table which I can expand to see the related products.

Unfortunately this result seems to be an ADOMD DataReader, which I cannot place in a table, matrix, or textbox.

If anyone knows how I can make this information available in my report, please let me know.