License Considerations For SQL Data Source Of Reports
Dec 10, 2007
I may be overthinking this, but I want to make sure this is right. If you have a processor license of SQL Server Standard running both the Reporting Services databases and the IIS interface, isn't it true that the underlying licenses of other servers containing your data are irrelevant in the context of serving the reports over the web? For example: Server 1 has SSRS as described above, with a processor license of Standard. Server 2 has a user-licensed copy of SQL Enterprise and serves data to a couple of reports on Server 1. This does not violate a license, correct? Doesn't Server 1 just take one of the CALs from Server 2?
View 1 Replies
Jan 5, 2008
Scenario:
I have 2 identical Analysis Services cubes, Cube 1 and Cube 2. My reports point to Cube 1; when Cube 2 is refreshed, the reports' data source is manually repointed to the updated Cube 2. The following day Cube 1 is refreshed and the reports' data source is again edited and repointed back to Cube 1.
This ensures that if a cube build fails, the reports aren't down.
To modify the data source I connect to Reporting Services using SSMS, go to the Data Sources folder, and manually edit the data source.
I have looked through the ReportServer database in SQL Server but didn't see anything that resembled the connection string for me to edit.
Are the connection strings for the data sources stored in the ReportServer database? Where can I find them?
Here is a Microsoft document that gave me the idea. http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/scoqryas.mspx
It talks about changing the connection strings in the Web UI, but doesn't tell you how or point to any docs that do.
Is there a way to automate this, and is anyone else running a similar setup?
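As far as I know, shared data source definitions are stored encrypted in the ReportServer catalog, so editing them directly in SQL isn't practical; the supported routes are the SOAP API or a scripted rs.exe call. A hedged C# sketch against the SSRS 2005 SOAP endpoint follows: it reads the shared data source definition, swaps the connection string, and writes it back. The report server URL, item path, connection string, and the ReportService2005 web-reference namespace are all placeholders/assumptions, not details from this thread.

// Hedged sketch: repoint a shared data source between Cube 1 and Cube 2.
using System.Net;
using MyTools.ReportService2005; // assumed web-reference namespace

class RepointDataSource
{
    static void Main()
    {
        ReportingService2005 rs = new ReportingService2005();
        rs.Url = "http://reportserver/ReportServer/ReportService2005.asmx"; // placeholder
        rs.Credentials = CredentialCache.DefaultCredentials;

        string dsPath = "/Data Sources/Analytics"; // placeholder item path
        DataSourceDefinition def = rs.GetDataSourceContents(dsPath);

        // Swap the connection string to point at the freshly built cube.
        def.ConnectString = "Data Source=olapserver;Initial Catalog=Cube2"; // placeholder
        rs.SetDataSourceContents(dsPath, def);
    }
}

Scheduled from Windows Task Scheduler, or run as a post-step of the cube build, this would remove the manual SSMS edit.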
I appreciate your help.
Thanks
Marcus
View 3 Replies
View Related
Oct 9, 2008
I have published SSRS reports to SharePoint and used the Reporting Services Report Viewer to present the reports on a page. Everything works fine for the day; however, in the morning when I go to view the page, I get an error message:
The report server cannot process the report. The data source connection information has been deleted. (rsInvalidDataSourceReference)
View 9 Replies
View Related
Oct 13, 2006
I have a VERY legitimate use case for installing SQL Server Workgroup edition on a PC without licensing it: I am making a disk image for MANY PCs, and licensing will be loaded after the imaging is completed.
I have been able to do this with Windows XP Professional, MS Office 2003, and a couple of non-MS programs. The information for the MS software even came from their KB, so I know what I am doing is legit. I just can't find any information on doing this for my SQL app.
Any suggestions??
Respectfully,
Frustrated
"Frustrated"
View 1 Replies
View Related
Mar 13, 2008
Hi,
I am trying to create a drill-through report (RDLC). I have written the following code in the Drillthrough event of the ReportViewer. Whenever I click on the first report, I get an error like:
An error has occurred during report processing.
A data source instance has not been supplied for the data source "DetailDS_get_orderdetail".
The code is:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
//using Microsoft.ApplicationBlocks.Data;
using Microsoft.Reporting.WebForms;
using DAC;
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
ReportViewer1.Visible = false;
}
protected void Button1_Click(object sender, EventArgs e)
{
DAC.clsReportsWoman obj = new clsReportsWoman();
DataSet ds = new DataSet();
ds = obj.get_order();
ReportViewer1.LocalReport.DataSources.Clear();
ReportDataSource reds = new ReportDataSource("DataSet1_get_order", ds.Tables[0]);
ReportViewer1.LocalReport.DataSources.Add(reds);
ReportViewer1.LocalReport.ReportPath = "C:/Documents and Settings/km63096/My Documents/Visual Studio 2005/WebSites/drillthrurep/Report.rdlc";
ReportViewer1.LocalReport.Refresh();
ReportViewer1.Visible = true;
}
protected void ReportViewer1_Drillthrough(object sender, DrillthroughEventArgs e)
{
DAC.clsReportsWoman obj = new clsReportsWoman();
ReportParameterInfoCollection DrillThroughValues =
e.Report.GetParameters();
foreach (ReportParameterInfo d in DrillThroughValues)
{
Label1.Text = d.Values[0].ToString().Trim();
}
LocalReport localreport = (LocalReport)e.Report;
string order_id = Label1.Text;
DataSet ds = new DataSet();
ds = obj.get_orderdetail(order_id);
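// Note: the dataset below is added to the viewer's top-level LocalReport,
// not to the drill-through report obtained from e.Report above.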
ReportViewer1.LocalReport.DataSources.Clear();
ReportDataSource reds = new ReportDataSource("DetailDS_get_orderdetail", ds.Tables[0]);
ReportViewer1.LocalReport.DataSources.Add(reds);
ReportViewer1.LocalReport.ReportPath = Server.MapPath(@"Reportlevel1.rdlc");
ReportViewer1.LocalReport.Refresh();
}
}
The code in the method get_orderdetail(order_id) is:
public DataSet get_orderdetail(string order_id)
{
SqlCommand cmd = new SqlCommand();
DataSet ds = new DataSet();
cmd.Parameters.Add("@order_id", SqlDbType.VarChar, 50);
cmd.Parameters["@order_id"].Value = order_id;
ds = SQLHelper.ExecuteAdapter(cmd, CommandType.StoredProcedure, "dbo.get_orderdetail");
return (ds);
}
Please help me.
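A likely cause, judging only from the error text (an assumption, not a confirmed answer from this thread): in the Drillthrough event the dataset must be supplied to the drill-through report itself, i.e. the LocalReport obtained from e.Report, rather than to ReportViewer1.LocalReport. A minimal sketch of that change, reusing the DataSet ds built in the handler:

// Hedged sketch: bind the dataset to the drill-through report (e.Report),
// not to the viewer's top-level LocalReport.
LocalReport drillReport = (LocalReport)e.Report;
drillReport.DataSources.Clear();
drillReport.DataSources.Add(new ReportDataSource("DetailDS_get_orderdetail", ds.Tables[0]));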
View 1 Replies
View Related
Feb 2, 2008
Hi,
In this code, how can I create a new data source, data source view, mining model, and mining structure so that it all runs dynamically?
I get a lot of errors in this code because the server and database objects don't exist in the current code.
In this code, should I first define the server?
Database dbNew = new Database (databaseName,
Utils.GetSyntacticallyValidID(databaseName, typeof(Database)));
srv.Databases.Add(dbNew);
dbNew.Update(true);
***********************************************************
How can I create the data source, data source view, mining model, and mining structure?
Please show the code for that and guide me.
databaseName and srv are unknown.
Do I need to add another reference for Analysis Services?
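For context, a hedged sketch: in AMO, srv would be a Microsoft.AnalysisServices.Server object, which requires a project reference to Microsoft.AnalysisServices.dll. The server name below is a placeholder.

// Hedged sketch: connect to Analysis Services before creating the database.
using Microsoft.AnalysisServices;

Server srv = new Server();
srv.Connect("Data Source=localhost"); // placeholder server name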
Please explain these code fragments:
************************************************************************
1)
RelationalDataSource dsNew = new RelationalDataSource(
datasourceName,
Utils.GetSyntacticallyValidID(
datasourceName,
typeof(RelationalDataSource)));
db.DataSources.Add(dsNew);
dsNew.ConnectionString = connectionString;
dsNew.Update();
2)
RelationalDataSourceView rdsv;
rdsv = db.DataSourceViews.Add(
datasourceviewName,
Utils.GetSyntacticallyValidID(
datasourceviewName,
typeof(RelationalDataSourceView)));
rdsv.DataSourceID = dsNew.ID;
***************************************************************
3)
OleDbConnection cn = new OleDbConnection(ds.ConnectionString);
OleDbCommand cmd = new OleDbCommand(
"SELECT * FROM [" + tableName + "] WHERE 0=1", cn);
OleDbDataAdapter ad = new OleDbDataAdapter(cmd);
DataSet dss = new DataSet();
ad.FillSchema(dss, SchemaType.Source);
*************************************************************
4)
// Make sure we have the name we thought
dss.Tables[0].TableName = tableName;
// Clone here - the original DataTable already belongs to a DataSet
rdsv.Schema.Tables.Add(dss.Tables[tableName].Clone());
rdsv.Update();
5)
MiningStructure ms = db.MiningStructures.Add(miningstructureName, Utils.GetSyntacticallyValidID(miningstructureName,
typeof(MiningStructure)));
ms.Source = new DataSourceViewBinding(rdsv.ID);
ms.CaseTableName = "Customer";
Add columns:
ScalarMiningStructureColumn smsc;
// From table "Customer" we will add a couple of columns
// CustomerID - key
smsc = new ScalarMiningStructureColumn("Customer ID",
Utils.GetSyntacticallyValidID("Customer ID", typeof(ScalarMiningStructureColumn)));
smsc.IsKey = true;
smsc.Content = "Key";
smsc.KeyColumns.Add("Customer", "customer_id", OleDbType.Integer);
ms.Columns.Add(smsc);
*******************************************
6)
MiningModel mm = ms.MiningModels.Add(miningmodelName,
Utils.GetSyntacticallyValidID(miningmodelName,
typeof(MiningModel)));
mm.Algorithm = "Microsoft_Decision_Trees";
mm.Parameters.Add("COMPLEXITY_PENALTY", 0.3);
MiningModelColumn mc = new MiningModelColumn("Customer ID",
Utils.GetSyntacticallyValidID("CustomerID",
typeof(MiningModelColumn)));
mc.SourceColumnID = ms.Columns["Customer ID"].ID;
mc.Usage = "Key";
mm.Columns.Add(mc);
mm.Update();
Please tell me exactly what I need.
Thanks a lot for your answer.
Please don't move this question, because I don't know where else I should post it.
View 1 Replies
View Related
Mar 2, 2007
I have set up a new connection as a connection from a data source, but I cannot see how to use this connection to create my Data Flow source. I have tried using an OLE DB connection, but this is painfully slow! The process of loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source/connection? Will a DSN source be faster?
Thanks in advance
ADG
View 2 Replies
View Related
Jan 23, 2007
Hi, does anyone know how to do the above without going through the Report Server URL?
Preferably by using a command-line tool such as rs.exe,
or through the backend in the ReportServer DB?
Thanks
Dave
View 2 Replies
View Related
Feb 19, 2008
TITLE: Microsoft Visual Studio
------------------------------
Error at Data Flow Task [XML Source [55]]: There was an error setting up the mapping. The root element of a W3C XML Schema should be <schema> and its namespace should be 'http://www.w3.org/2001/XMLSchema'.
------------------------------
ADDITIONAL INFORMATION:
Pipeline component has returned HRESULT error code 0xC02090CF from a method call. (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------
BUTTONS:
OK
------------------------------
The error pasted above arose when I attempted to create an XML source based on a Crystal Reports XI report exported as XML. Is there any special modification to make to the XML exported from Crystal to allow SSIS to read it properly?
Thanks for your help,
Sid
View 1 Replies
View Related
Apr 19, 2007
I have a table with about 200 million rows of data. I add a couple million rows of data each week to the table in a single load process. The table is used for reporting purposes only and there are never (not intentionally at least) any updates or deletes to the table. The data is always being added to the "end" of the table with the new AsOfDate being the main factor in the clustered index.
My question is this: Since I'm not "inserting" rows that would split pages, should I have my FILLFACTOR for the table set to 100, or am I missing something? I obviously want to save physical hard drive space, but I also don't want to slow down the import process.
BTW, I'm using SQL2000
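A minimal illustration of that setup with invented object names: an append-only clustered index on AsOfDate built with FILLFACTOR = 100. The WITH FILLFACTOR clause works on SQL 2000.

-- Hedged sketch: table and index names are placeholders.
CREATE CLUSTERED INDEX IX_Reporting_AsOfDate
    ON dbo.ReportingFacts (AsOfDate)
    WITH FILLFACTOR = 100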
View 1 Replies
View Related
May 8, 2007
Hi everybody,
I want to create a data source and data source view for data mining using C#.
I have created the data source and data source view and exported them to an XML file, but when I move to another computer and run those XML files, I get an error when I run the statement to create and build the mining model. What do I need to change in the XML, or how can I run the XML successfully on the other computer?
And if I have to rebuild the data source and data source view, how do I do that?
Thanks for your help.
View 1 Replies
View Related
Dec 11, 2007
Today I was making a few reports.
When I tested the reports in Visual Studio, they worked great: I got the expected results.
But when I deployed the reports to our report server, the problems started.
When I click on the directory in which my reports are deployed, I see my 4 reports.
Up to this point everything works correctly.
But when I click on a report to view the results, it goes wrong.
I got an error:
"Cannot create a connection to data source 'Live'"
(Live is the name of our data source).
We are using Windows logons, and I am sure that I have all the rights on the server; I gave myself 'sysadmin' rights, so it should work.
I have also tried it with all the roles assigned to my account, but it still won't work.
When I modify the data source and set it to another server and database, it works.
The data source 'Live' is on an x64 MS SQL server, and the other data source is on an x86 MS SQL server.
Maybe that is the problem?
Can someone tell me what is wrong?
View 1 Replies
View Related
May 20, 2007
Hi there,
I'm trying to build a report for Windows Forms (C#) using .rdlc files.
But every time I run it, I get this error message. I've followed the step-by-step instructions from MSDN... but it doesn't work!
Could anyone help me?
Thanks a lot!
Luis Antonio - Brazil
View 1 Replies
View Related
Dec 13, 2007
I was trying to load data using an SSIS Data Flow Task with an OLE DB source (a view) and an OLE DB destination (SQL Server). This view returns 420,591 rows from Query Analyzer in 21 seconds; row length is 925. When I executed the Data Flow Task from SSIS, I had to stop the process after 30 minutes because only 2,000 rows had been retrieved. I modified the view to return TOP 440,000 and reran; this time all 420,591 rows were retrieved and written in 22 seconds. Next, I tried TOP 100 PERCENT; again, only 2,000 rows were returned after 30 minutes. TempDB is on a separate SAN RAID group with 200 GB free, the databases are on a separate drive with 200 GB free, the server has 13 GB of memory, and no other processes were executing.
The only way I could populate the table was by using an Execute SQL Task and hard-coding an INSERT into the table selecting data from the view (35 seconds) from SSIS.
Has anyone else experienced this or a similar issue? Does anyone have a solution or explanation?
View 13 Replies
View Related
Apr 19, 2007
I would like to know people's thoughts on any special network considerations to take for mirroring and the logic behind them. Is it best to segregate mirroring traffic from other network traffic? Use a VLAN? Dedicate one NIC for mirroring and the other for general network traffic or just aggregate the two and let both types of traffic share the bandwidth?
I haven't seen much in this area from Microsoft's best practices and wanted to know what those who have implemented it have done and why. There are pros and cons for each method: Letting everything share one massive pipe with load balancing vs. trying to segregate traffic in some way so that general network connections etc. do not impact the mirroring capability.
I look forward to hearing from you.
-M-
View 4 Replies
View Related
Sep 14, 2007
We have about 150 SQL servers and basically we're considering the pros and cons of installing SSIS on a central SSIS server - that is responsible for all DTS jobs - as opposed to installing SSIS on the local SQL instance.
On the plus side so far:
1./ Central administration, alerting, change management etc
2./ Possible performance gain on the local instance not having SSIS installed?
On the negative side:
1./ Central point of failure
2./ Possibility that it would need to be clustered...
3./ Compatibility issues may mean having to make the central SSIS server 32-bit?
4./ Possible performance cost of remote SSIS?
5./ With multiple DTS packages running at different times, when would we take the server down for maintenance...?
Would appreciate your thoughts.
View 1 Replies
View Related
Jun 21, 2007
Hi all,
Can a publisher be mirrored? What are the implications, issues, gotchas? Transactional, Merge or Transactional w/ Updating Subscribers is what I'm considering.
Bottom line is I would like to use mirroring, but only one mirror will not suffice.
Thank you in advance.
Ray Nichols
View 1 Replies
View Related
Nov 15, 2006
Hello, I'm developing an application which monitors network packets. The monitoring data is saved into a table. The monitoring table maintains the data for a fixed quantum of time, for example one hour, so every minute, before or after inserting new data, I delete the time-expired data. I worry that the endless delete operations could result in some problems (index growth, etc.).
Is this mechanism safe for the DBMS?
Isn't there a round-robin style table?
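One hedged approach, assuming SQL Server 2005 or later and invented object names: delete the expired rows in small batches, so the transaction log and lock footprint stay manageable between inserts.

-- Hedged sketch: table and column names are placeholders.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.MonitorData
    WHERE CapturedAt < DATEADD(hour, -1, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END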
View 1 Replies
View Related
Apr 12, 2006
I would like to know how, if at all possible, to reconstruct the following trigger so it can handle a multi-row insert when a single INSERT command is used, because the trigger will only be called once. I'm not familiar with cursors and don't know anything about them, and I've read that they're not the best way to go.
CREATE TRIGGER tr_childtable_insert ON childtable INSTEAD OF INSERT
AS
BEGIN
DECLARE @customkey char(16);
DECLARE @nextchild int;
DECLARE @parent int;
DECLARE @date datetime;
SET @date = getdate();
SELECT @parent = parenttable FROM inserted;
SELECT @nextchild=count(*)+1 FROM childtable WHERE parenttable = @parent;
IF (@nextchild >= 9998) return;
SET @customkey = 'type' + convert(char(4),year(@date)) + convert(char(2),month(@date)) + convert(char(2),day(@date)) + convert(char(4),@nextchild + 1);
INSERT INTO childtable (customkey,parent) VALUES (@customkey,@parent);
END
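One hedged way to make this set-based (an illustration, not a tested drop-in replacement): number all inserted rows per parent with ROW_NUMBER(), which requires SQL Server 2005 or later. The original post mixes the column names "parent" and "parenttable"; the sketch assumes "parenttable" throughout, and the trigger name is invented.

CREATE TRIGGER tr_childtable_multirow ON childtable
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @date datetime;
    SET @date = getdate();
    -- One INSERT handles every row in "inserted"; each row gets the
    -- parent's current child count plus its per-parent row number.
    INSERT INTO childtable (customkey, parenttable)
    SELECT 'type' + convert(char(4), year(@date))
                  + convert(char(2), month(@date))
                  + convert(char(2), day(@date))
                  + convert(char(4), x.existing
                        + ROW_NUMBER() OVER (PARTITION BY i.parenttable
                                             ORDER BY (SELECT NULL))),
           i.parenttable
    FROM inserted AS i
    CROSS APPLY (SELECT count(*) AS existing
                 FROM childtable AS c
                 WHERE c.parenttable = i.parenttable) AS x
    WHERE x.existing + 1 < 9998; -- roughly mirrors the original guard
END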
View 7 Replies
View Related
Oct 31, 2006
Can someone point me to a resource on table design considerations for merge replication? I have an ASP.NET/SQL 2005 app that I need to run on disconnected machines, then allow for data sync through merge replication. I assume that the first step is getting my tables indexed in a replication-friendly manner?
Many thanks to anyone who can point me in the right direction!
View 3 Replies
View Related
Feb 13, 2008
In a prior "legacy" life we couldn't imagine 24x7 implementations, because it was important to 1) reorganize databases periodically to remove fragmentation that adversely affected performance and 2) back up databases just in case.
In a 24x7 SQL Server 2005 implementation, at a high level, how are these and other maintenance-related things accomplished with confidence?
I don't think SQL Server cleans up its own page splits unsolicited. Are DBAs totally reliant on logs in full-recovery installations where the db must be up 24x7? What if the devices those logs sit on fail? What if the logs become too large? Is it likely that if you want 24x7 you're looking at Enterprise Edition only?
I'm totally aware of and confident in the sliding-window partitioning thing, but it seems to me there must be more out there in terms of periodic, more frequent maintenance activity.
View 3 Replies
View Related
Apr 26, 2006
Can someone point me to some good articles, or perhaps directly supply some words of wisdom, with regard to wise use of variables within a T-SQL script from the standpoint of conserving memory usage and improving execution cost?
For example:
(1) Is it better to use varchars, nvarchars, etc. defined with minimal lengths to support the needs of the script or is it just as efficient to declare all with a length of say 4,000?
(2) I've seen behavior that leads me to believe that when passing a variable as a parameter in a nested procedure call, if the declared types of the parameter and the variable being passed in don't match (i.e. one is numeric(38,10) and the other is int), then implicit type conversions hurt performance. Is this true and how broadly does it apply?
(3) Does the number of variables declared in a script materially impact performance and/or resource utilization?
(4) Is it more efficient to have a series of variable value assignments in a single SELECT statement versus a series of SET statements? Should I always prefer one to the other? Only within a looping construct? (See the sketch below.)
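A hedged illustration of question (4), with invented variable names: the single SELECT assigns all the variables in one statement, while the SET form takes one statement each.

-- Hedged sketch: both forms assign the same values.
DECLARE @a int, @b int, @c int

-- one statement:
SELECT @a = 1, @b = 2, @c = 3

-- three statements:
SET @a = 1
SET @b = 2
SET @c = 3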
Thanks,
Shadowraven
View 1 Replies
View Related
Mar 1, 2006
Anyone know of a good "free" way to back up web files and SQL Server 2005 Express Database?
I was able to use Windows Server 2003 Backup utility to back up the folder where the Databases were stored, as well as the web files, with no errors.
But I have heard a lot of discussion that you can't just simply back up the SQL Server data files?
I'm wondering how sound the backup I've created is...
Any suggestions?
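One hedged, free option built into SQL Server 2005 Express: a native BACKUP DATABASE produces a consistent backup even while the database is in use, which copying the data files does not guarantee. Database name and path are placeholders.

-- Hedged sketch: names are placeholders; run from sqlcmd or Management Studio Express.
BACKUP DATABASE MyWebDb
TO DISK = 'D:\Backups\MyWebDb.bak'
WITH INIT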
View 1 Replies
View Related
Dec 13, 2007
I need to copy all the data from all the tables in a database to a copy of this database on another server.
What feature of SSIS should I take advantage of to accomplish this?
We have an SLA for 8am; most times the data warehousing jobs complete at 8:05am. Adding an additional process/set of tasks to this package would obviously make matters worse, so I'm trying to update/copy/replicate the data in the fastest manner. Typically we're talking 2 marts (10-20 GB) with 2 large tables (5-10 million records) and 20 marts (0.5-5 GB) with many more smaller tables (~40 tables with record counts ranging from 1 to a million).
Additionally, please indicate if the design/feature you suggest can handle schema changes or new tables/views added to the source database (i.e. pushing schema changes and additions to the target server).
My only idea so far is using the Import Wizard (in Management Studio) to create an SSIS package (to copy all the tables from one server to another), saving it to the server, then executing this package after the job is complete. However, this would not work if the schema of a table changed, or if a table is added. Moreover, I don't think I can edit this package in Visual Studio.
View 3 Replies
View Related
Apr 16, 2008
Hi all, I got this error:
[DTS.Pipeline] Error: "component "Excel Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
and also this:
[Excel Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns. The column "Fiscal Week" needs to be updated in the external metadata column collection. The column "Fiscal Year" needs to be updated in the external metadata column collection. The column "1st level" needs to be added to the external metadata column collection. The column "2nd level" needs to be added to the external metadata column collection. The column "3rd level" needs to be added to the external metadata column collection. The "external metadata column "1st Level" (16745)" needs to be removed from the external metadata column collection. The "external metadata column "3rd Level" (16609)" needs to be removed from the external metadata column collection. The "external metadata column "2nd Level" (16272)" needs to be removed from the external metadata column collection.
I tried going to Data Flow -> Excel connection -> Advanced Editor for Excel Source -> Input and Output Properties and tried to refresh the affected columns.
It seems that somehow the 3 columns are not read in from the source file?
And also, are Fiscal Week and Fiscal Year not set up properly in my data destination?
Has anyone faced such errors before?
Thanks
View 13 Replies
View Related
Feb 23, 2008
RE: XML Data source .. Expression? Variable? Connection? Error: unable to read the XML data.
I want my XML data source to be an expression, as I will be looping through a directory of XML files.
I don't see the expression property or the connection property.
I tried setting the XMLData property to @[User::filename], but that results in:
Information: 0x40043006 at Load XML Files, DTS.Pipeline: Prepare for Execute phase is beginning.
Error: 0xC02090D0 at Load XML Files, XML Source [108]: The component "XML Source" (108) was unable to read the XML data.
Error: 0xC0047019 at Load XML Files, DTS.Pipeline: component "XML Source" (108) failed the prepare phase and returned error code 0xC02090D0.
Information: 0x4004300B at Load XML Files, DTS.Pipeline: "component "OLE DB Destination" (341)" wrote 0 rows.
Task failed: Load XML Files
Information: 0xC002F30E at Bad, File System Task: File or directory "d:jcpxmlLoadjcp2.xml.bad" was deleted.
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "Package.dtsx" finished: Failure.
The program '[3312] Package.dtsx: DTS' has exited with code 0 (0x0).
Thanks for any help or information.
View 3 Replies
View Related
Oct 12, 2015
I have a question about how to handle structural data model changes in a PowerPivot data source. Suppose I'm developing a star model in SQL Server, and sometimes a data type changes or the name of a field in a table changes. It seems to me that PowerPivot does not handle this as gracefully as Analysis Services MD does (mostly). I received an error because of a wrong field name, and no error at all when a data type changed in PowerPivot. Is this common, or am I doing something wrong here? Does this mean that every time the data model changes the PowerPivot model should be recreated? Or am I missing the clue here?
View 6 Replies
View Related
Jan 12, 2015
I was wondering if there is a best-practice minimum set of permissions for creating a SQL login to use when setting up a new shared data source for SSRS Report Manager?
Something along the lines of it being a data reader for the DB, plus permissions to update tempdb?
I would have thought it not advisable for the login to be able to update the main DB...
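A hedged sketch of such a least-privilege login, with placeholder names: read-only access through the db_datareader role. Note (an assumption worth verifying for your workload): ordinary logins can already create temporary objects in tempdb, so no separate tempdb grant is usually needed.

-- Hedged sketch: login, password, and database names are placeholders.
CREATE LOGIN ssrs_reader WITH PASSWORD = 'UseAStrongPasswordHere!1'
GO
USE ReportingDB
GO
CREATE USER ssrs_reader FOR LOGIN ssrs_reader
EXEC sp_addrolemember 'db_datareader', 'ssrs_reader'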
View 1 Replies
View Related
Jul 10, 2015
I have 4 tablixes; 2 of them get data from Server 1 and the other 2 get data from Server 2. I have set the NoRowsMessage "=Data Not Available for the Selected Values" for all 4 tablixes. Now, if data is not available from Server 1, "Data Not Available for the Selected Values" must be shown only once in the output, but it currently appears twice because of the 2 tablixes that had no rows. Similarly, if data is not available from Server 2, it should show "Data Not Available for the Selected Values" only once in my output. If data is not available for any of the tablixes, it should likewise show "Data Not Available for the Selected Values" only once in the report output.
View 3 Replies
View Related
Mar 18, 2007
A DataReader source is using a connection manager to connect to an ODBC system DSN. A query is provided in the SqlCommand property. Data is being truncated in the only string column. The data type in the DataReader output -> external columns shows as Unicode string [DT_WSTR], length 7.
The truncated output in a text file is the first 3 characters, reading left to right. Changing the column order has no effect.
A linked server was created in SQL Server Management Studio to test the ODBC System DSN using the following:
EXEC sp_addlinkedserver
@server = 'server_name',
@srvproduct = '',
@provider = 'MSDASQL',
@datasrc = 'odbc_dsn_name'
Data returned using OPENQUERY does not truncate the string column, indicating that the ODBC driver returns data as expected with SQL 2005, but not with the DataReader.
Any assistance would be appreciated.
Thanks,
View 3 Replies
View Related
Apr 3, 2014
I need to use a newly installed SSIS component inside an SSIS 2012 project, but in SSDT 2010 I cannot see the SSIS Data Flow Items tab for adding a data source/data destination in the Choose Toolbox Items pane.
View 4 Replies
View Related
Sep 10, 2007
Hi i am trying to do a straight forward load from a Flatfile source , i have defined the columns according to the lenghts defined in the Data Dictionary Provided but when i am trying to run the Task i am encounterring this error
The column data for column "Column 20" overflowed the disk I/O buffer.
I tried to add another column, 21, at the end and truncate or leave that column unmapped to the destination, but the same problem occurs for column 21. What should I do to overcome this?
In the case of bad data, how do I clean up the source? Please help me with this.
View 5 Replies
View Related
May 9, 2006
I've got a report that is using a cube as a data source, and I can't get the report to show all the data. Only data at the lowest level of the cube is displayed; the problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX: I get the correct results when I run the query.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
View 7 Replies
View Related