We have existing cubes on our Analysis Server, and we were required to move them to another reporting server, which will use data replicated to it every night. The problem is that the source data is now divided across two reporting database servers. Table names and view names are the same in all the databases. I just want to change the data source so that, instead of pointing to the existing database, it points to the new reporting server. Can you tell me how this can be achieved?
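A minimal sketch of one way to do this with AMO, assuming the edit isn't simpler to make by hand in Management Studio under the database's Data Sources node. The server, database, and data source names below are placeholders, not names from the post:

using Microsoft.AnalysisServices;

class RepointDataSource
{
    static void Main()
    {
        Server srv = new Server();
        srv.Connect("Data Source=MyAnalysisServer");

        Database db = srv.Databases.GetByName("MyCubeDb");
        DataSource ds = db.DataSources.FindByName("MyDataSource");

        // Same table/view names on the new server, so only the
        // connection string needs to change.
        ds.ConnectionString =
            "Provider=SQLNCLI;Data Source=NewReportingServer;" +
            "Initial Catalog=ReportingDb;Integrated Security=SSPI;";
        ds.Update();

        srv.Disconnect();
    }
}

After the change, reprocess the affected cubes so they read from the new server.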
I've got a report that uses a cube as a data source, and I can't get the report to show all the data; only data at the lowest level of the cube is displayed. The problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX: I get the correct results when I run the query on its own.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
I am new to SQL Server 2005 Analysis Services and would like to use OLAP cubes as a data source for building a mining model. However, I would like to use a particular view of the OLAP cube that I have generated as the data source for the mining model. I find that I am not able to save the cube view while browsing the OLAP cube in Business Intelligence Development Studio. Is there a way I can achieve this requirement?
Any ideas regarding this would be really appreciated.
I have created an SSIS package that transfers data from a FoxPro database to an instance of SQL Server 2005 Express. I used the wizard to create the package, but I load and execute the package from a custom application that I have written in C#.
The way the custom application is intended to work is that the user can have the database in any location on the computer; all he has to do is specify the location, and the application programmatically changes the location of the source on the package it has loaded and then executes it. When I initially run the package the first time (using the original path), it works fine and transfers the data. However, every subsequent time I run the application and specify a different path, the database on the SQL Server side gets created as expected but the data is not transferred!
Where am I going wrong? Do I need to save the package after I modify the source, then reload and run it again, or do I need to change something else in the Data Flow to make this work?
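A minimal sketch of the load-modify-execute pattern, assuming the wizard-built package has a source connection manager named "SourceConnection" (the connection name and paths are placeholders, and the user-supplied path is taken from the command line here). Setting DelayValidation avoids the package validating against the old path before the new one is applied:

using Microsoft.SqlServer.Dts.Runtime;

class PackageRunner
{
    static void Main(string[] args)
    {
        Application app = new Application();
        Package pkg = app.LoadPackage(@"C:\Packages\FoxProImport.dtsx", null);

        // Repoint the source at the user-supplied location.
        ConnectionManager cm = pkg.Connections["SourceConnection"];
        cm.DelayValidation = true;
        cm.ConnectionString =
            @"Provider=VFPOLEDB.1;Data Source=" + args[0] + ";";

        DTSExecResult result = pkg.Execute();
        System.Console.WriteLine(result);
    }
}

No save/reload should be needed for an in-memory change like this, so if the old data still flows, it is worth checking whether the path is also stored somewhere else, for example in a package configuration.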
I have an existing project with the data source and report folder, with working reports. I need to create a model so that other department members can use Report Builder, not Business Intelligence Development Studio, to develop their reports. How do I add a data source view and a report model folder to the project so I can then create a view and a model?
I'm importing some data from an Excel file to SQL Server 2005.
I created an Excel data source inside my Data Flow task, but it is assuming that the source column's data type is double-precision float [DT_R8]. It isn't, even though some rows may contain a numeric string in the column's cells.
If I go to the data source's advanced editor and modify the data type property of the column, SSIS complains that the error output and the source output are not of the same data type. If I try to change the error output's data type in the advanced editor I get this error: "Property Value". The detailed error states:
Error at MOVIM 04 [MOVIM 04 [1]]: The data type for "output "Excel Source Error Output" (10)" cannot be modified in the error "output column "Agente Protector" (7662)".
Error at MOVIM 04 [MOVIM 04 [1]]: Failed to set property "DataType" on "output column "Agente Protector" (7662)".
If I let SSIS correct the error by itself, it changes the source column back to double-precision float [DT_R8].
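For what it's worth, the Excel source guesses each column's type from the first few rows, which is usually why a mixed column comes out as DT_R8. A common workaround (a sketch; the file path is a placeholder) is to add IMEX=1 to the connection string so the Jet provider reads mixed-type columns as text:

class ExcelConnection
{
    // IMEX=1 tells the Jet provider to treat mixed-type columns as text,
    // so they arrive as DT_WSTR instead of DT_R8.
    public const string ConnectionString =
        @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MOVIM04.xls;" +
        "Extended Properties=\"Excel 8.0;HDR=YES;IMEX=1\";";
}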
I'm using a shared data source to connect to an Oracle server in my packages. After changing the database user's password in the shared data source, I noticed the package concerned would fail with the following error description.
Can I process a cube without errors while an application is writing data into the source SQL Server database? Do I have to stop that application every time I want to process the cube?
Sorry, I'm new to using Reporting Services and even more inexperienced with cubes.
My situation is as follows: I perform dynamic grouping (the user selects the view via a parameter). Depending on the view selected, I need to change the dimension filter in the dataset. Is this possible?
Hi, is it possible to change the field name of an existing table, I mean via a T-SQL statement? We know that we can alter the data type and width, etc., but I haven't found any info about field name changes, so if it is possible, please help. And is there any T-SQL command to alter multiple columns in a single statement?
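A sketch, assuming the standard sp_rename procedure is acceptable (the table and column names are placeholders). Note that sp_rename handles one object per call, and ALTER TABLE has no form that alters several columns in one statement, so each change has to be issued separately:

using System.Data.SqlClient;

class RenameColumn
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=SSPI;"))
        {
            conn.Open();
            // 'COLUMN' tells sp_rename which kind of object is being renamed.
            new SqlCommand(
                "EXEC sp_rename 'dbo.MyTable.OldName', 'NewName', 'COLUMN';",
                conn).ExecuteNonQuery();
        }
    }
}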
I want to change the color of the SSAS cube calculated member in the form view from a red background to white, or back to the default color settings. This also includes changing the text color back to the default. Which properties do I have to change?
Greetings: I have a SQL 2000 database in which about 1% of the records are in lower case. I need to make them UPPER CASE. Is there a function to determine and change the case of existing records, or will I have to rewrite the records manually?
Thanks, DW
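A sketch, assuming a case-insensitive database collation (the table, column, and collation names are placeholders). The COLLATE clause forces a case-sensitive comparison so only rows that are not already upper case get rewritten:

using System;
using System.Data.SqlClient;

class UpperCaseRecords
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=SSPI;"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                @"UPDATE dbo.MyTable
                     SET MyColumn = UPPER(MyColumn)
                   WHERE MyColumn <> UPPER(MyColumn) COLLATE Latin1_General_CS_AS;",
                conn);
            Console.WriteLine(cmd.ExecuteNonQuery() + " row(s) updated.");
        }
    }
}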
I must increase a column (field) size in an existing database, but without using Enterprise Manager (because we use MSDE on our clients' PCs). The field is part of primary and foreign key constraints, and every constraint has a different auto-generated suffix in each database, for example (PK_something_9e382hjl8), so I don't know how to pick up this name before running the "drop constraint" command. Thank you very much.
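A sketch of one approach, assuming the constraint name can be read from INFORMATION_SCHEMA at run time (which MSDE supports, so no Enterprise Manager is needed). Table, column, and type names are placeholders, and any foreign keys referencing the column would need the same drop-and-recreate treatment first:

using System.Data.SqlClient;

class WidenKeyColumn
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=SSPI;"))
        {
            conn.Open();

            // Look up the auto-generated PK name for this table.
            SqlCommand find = new SqlCommand(
                @"SELECT CONSTRAINT_NAME
                    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
                   WHERE TABLE_NAME = 'MyTable'
                     AND CONSTRAINT_TYPE = 'PRIMARY KEY';", conn);
            string pkName = (string)find.ExecuteScalar();

            // Drop the key, widen the column, and recreate the key.
            string sql =
                "ALTER TABLE dbo.MyTable DROP CONSTRAINT [" + pkName + "]; " +
                "ALTER TABLE dbo.MyTable ALTER COLUMN MyKey varchar(50) NOT NULL; " +
                "ALTER TABLE dbo.MyTable ADD CONSTRAINT [" + pkName + "] " +
                "PRIMARY KEY (MyKey);";
            new SqlCommand(sql, conn).ExecuteNonQuery();
        }
    }
}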
I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined with decimal(13,2) data types, and a large percentage of the values (over 50% in most cases, as much as 99% in some) are 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark those columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table. I see two options:
1.) Three steps for each column in each table in each database (see the sketch after this list):
Step 1: alter the table to allow NULLs in the column.
Step 2: UPDATE table SET column = NULL WHERE column = 0.00.
Step 3: alter the table to mark the column as sparse.
2.)
Step 1: create an entirely new table with sparse column definitions.
Step 2: copy the entire table, transforming 0.00 to NULL for the affected columns via SSIS.
Step 3: drop the original table and rename the new table to the original name.
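A minimal sketch of option 1.) for a single column; loop over your column, table, and database lists the same way. The names are placeholders, SPARSE requires SQL Server 2008 or later, and on a 200+ GB table the UPDATE in step 2 would be worth batching:

using System.Data.SqlClient;

class MakeColumnSparse
{
    static void Main()
    {
        string[] steps =
        {
            // Step 1: allow NULLs in the column.
            "ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 decimal(13,2) NULL;",
            // Step 2: turn the 0.00 values into NULLs.
            "UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;",
            // Step 3: mark the (now mostly NULL) column as sparse.
            "ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;"
        };

        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=Db001;Integrated Security=SSPI;"))
        {
            conn.Open();
            foreach (string step in steps)
            {
                SqlCommand cmd = new SqlCommand(step, conn);
                cmd.CommandTimeout = 0; // these steps can run for a long time
                cmd.ExecuteNonQuery();
            }
        }
    }
}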
Hi, I have a GridView in which I show the attendees for a course (per course). In the same GridView I want to be able (by clicking a button) to show all attendees for all courses, simply by leaving out part of the SQL: WHERE Eventid = @eventid (resulting in all attendees no matter what the course). In other words, how can I alter the SelectCommand of the SqlDataSource? Seems simple, but I can't hack it. Thanks in advance, Lx
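A sketch for the button's click handler, assuming a GridView1 and SqlDataSource1 declared in the page markup (the control IDs and SQL text are placeholders):

using System;
using System.Web.UI.WebControls;

public partial class Attendees : System.Web.UI.Page
{
    protected void ShowAllButton_Click(object sender, EventArgs e)
    {
        // Swap in a SelectCommand without the WHERE clause and rebind.
        SqlDataSource1.SelectCommand = "SELECT * FROM Attendees";
        SqlDataSource1.SelectParameters.Clear(); // @eventid is no longer used
        GridView1.DataBind();
    }
}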
I am building a data warehouse for a customer who has systems located in two different countries.
I need to import the data from four separate databases, which all share the same structure.
To do this I have created 20 packages to import the data from the source databases. What I would like to do is set, at run time, which database the SSIS package should get its data from.
In SQL 2000 this was easy: a global variable was set, and a Dynamic Properties task then used it to set the data source.
How can I achieve the same result in SSIS? The data source is an ODBC connection, with the four ODBC connections having similar names, e.g. ABC_NZ, ABC_AU.
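A sketch of the SSIS 2005 equivalent, which is a property expression on the connection manager, here attached from hosting code. The variable name, connection name, and package path are assumptions, and the "SourceConn" variable must already exist in the package:

using Microsoft.SqlServer.Dts.Runtime;

class RunForZone
{
    static void Main()
    {
        Application app = new Application();
        Package pkg = app.LoadPackage(@"C:\Packages\Import.dtsx", null);

        // Choose the ODBC DSN for this run.
        pkg.Variables["SourceConn"].Value = "Dsn=ABC_NZ;"; // or Dsn=ABC_AU;

        // Bind the connection manager's ConnectionString to the variable,
        // playing the role of the old Dynamic Properties task.
        ConnectionManager cm = pkg.Connections["SourceODBC"];
        cm.Properties["ConnectionString"].SetExpression(cm, "@[User::SourceConn]");

        pkg.Execute();
    }
}

The same expression can also be set once in the designer (connection manager properties, Expressions) and driven by a package configuration instead of code.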
Can I design data flows with an XML source pointing to file-system XML files, but run them against variables? I can see the properties at design time (AccessMode=0,1,2; XMLDataVar="User::XMLData"), but the XML source object doesn't appear to have expressions enabled to change this at runtime, nor do these properties seem to be exposed to configurations.
The scenario:
I have a package that reads through (potentially thousands of) XML files and, having identified the embedded message "type", transforms them via XSLT to an easier XML format (the XML data source can't cope well with multiple namespaces, etc.). It then sends the new file off to a type-specific data flow task that sends its contents to the database.
For performance (and other) reasons I'd rather XSLT into a single shared variable and use this variable as the XML source in the data flows (rather than writing out to the file system [via XSLT] and immediately reading it back in each time [via the data source]).
But designing the data flow task using a variable as xml source is frustrating for a bunch of reasons - not the least being:
(a) The variable needs to be populated with sample XML data at design time, but it can only ever match one data flow at a time (fine at run time, but painful at design time, leading to validation errors popping up all around the place).
Yes, I could have a separate variable for each XML source, but that just makes things more complicated up front and also leaves me with 20+ large, type-specific XML variables floating around at runtime, of which only one is ever being used per import, which just doesn't feel right to me.
(b) Populating the variable with sample XML at design time is painful due to its being a string, which doesn't accept the hard returns etc. that are in the text files, and design-time changes are awkward.
Using a file XML source at design time is infinitely easier, when I get a design time change - I can just amend the XSLT and run the appropriate task to produce a sample XML file to design against.
I am connecting to an MS Access database using ODBC. While developing the package we changed the mappings for this database; the ODBC definition was changed accordingly and the old definitions were deleted. However, SSIS is still using the old ODBC links, even when I delete all existing connections and add a new connection. Somehow the old settings have been saved and are being reused in the DataReader source. If so, where are they saved and how can I change or delete them? Note: I suspect Server Explorer, because every time I add a data connection using the ODBC, the DataReader source starts using the wrong definition (even when Server Explorer uses the correct one).
Hi, I am trying to import data from Oracle RDB into SQL Server 2005 using SSIS. I created an ODBC data source to connect to Oracle and used a DataReader source component with ADO.NET to connect to the ODBC data source.
Under the Component Properties tab, the SQL command looks something like this:
Select ID, ADDRESS, REVISED from ADDRESS
The data types of the source columns are Integer, Varchar(30), and DATE VMS.
Now when I look at the Input and Output Properties window, the external columns have the following data types:
ID - four-byte signed integer [DT_I4]
ADDRESS - Unicode string [DT_WSTR], length = 0
REVISED - database timestamp [DT_DBTIMESTAMP]
The output columns have the following data types:
ID - four-byte signed integer [DT_I4]
ADDRESS - Unicode string [DT_WSTR], length = 0
REVISED - database timestamp [DT_DBTIMESTAMP]
When I tried to change the length of ADDRESS on the output column, I got the following error:
Error at Data Flow Task [DataReader Source [1]]: The data type of output columns on the component "DataReader Source" (1) cannot be changed.
Is this the default length for the Unicode string type? I was not able to load the ADDRESS column, as it gets truncated before I load it into the destination. Even if I use a Derived Column or Data Conversion transformation, ADDRESS is getting truncated before it reaches that transformation.
I have a data flow that reads from a flat file source, goes through one data transformation component to change from Unicode to normal text, and writes the data to a SQL Server table. This has been working fine throughout development using a specific source file as input. I have now manually changed the path and name of the input source file in the connection manager to point to a new file, yet the task continues to process the old text file.
I open the connection manager and check its properties and preview the data and it all looks fine - it is finding the new file. I also edit the flat file source component and preview the data and it shows the data from the new file. I run the data flow by right-clicking and selecting Execute Container and it continually reads the old file and processes it! (I do the right-click thing because this is just one small part of a larger package.)
This has got to be a bug, but just where, I wonder. Has anyone ever seen this before? I'm going to try running the entire package in debug mode next, instead of right-clicking, and see if that's any different. Does anyone have any ideas on how to force a refresh of the necessary internal components to make it read the new file? All the external properties point to the new file, but it's not being read.
Joe
Update:
I ran the entire package and found no difference in execution - it's still reading the wrong input file. I then deleted and recreated the connection manager, again specifying the new file to read. It still continued to read the old file. I deleted the flat file source component and recreated it, specifying the latest connection manager (twice, since I must have pointed it at the wrong one the first time and it read a completely different file). I still have the same problem of it reading the wrong input file. I don't know what else to recreate that would have any effect. Does anyone have any ideas? I need to change the pointer to different files multiple times and have it read several different input files. This has got to work somehow. Any help is appreciated.
I have a scenario where we have to handle dynamically changing source columns.
For example, sometimes the number of columns in the source files will increase or decrease, and new columns can be added in the middle or at the end of the source file.
Hi, in this code, how can I create a new data source, data source view, model, and structure so that it runs dynamically? I get a lot of errors in this code, saying that the server and database don't exist in the current code. Should I define the server first?
How can I create the data source, data source view, model, and structure? Please show the code for this and guide me; databasename and srv are unknown. Do I need to add another reference for Analysis Services? Please explain these lines:
1) RelationalDataSource dsNew = new RelationalDataSource( datasourceName, Utils.GetSyntacticallyValidID( datasourceName, typeof(RelationalDataSource)));
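A minimal sketch of the AMO calls that snippet is reaching for, assuming the Microsoft.AnalysisServices assembly (AMO) is referenced and the database already exists on the server; the server name, database name, and connection string are placeholders:

using Microsoft.AnalysisServices;

class CreateDataSource
{
    static void Main()
    {
        // Connect first, so srv is defined before anything uses it.
        Server srv = new Server();
        srv.Connect("Data Source=localhost");

        Database db = srv.Databases.GetByName("MyOlapDb");

        // The two constructor arguments are the display name and a
        // syntactically valid internal ID derived from it.
        string dsName = "MyRelationalSource";
        RelationalDataSource dsNew = new RelationalDataSource(
            dsName,
            Utils.GetSyntacticallyValidID(dsName, typeof(RelationalDataSource)));
        dsNew.ConnectionString =
            "Provider=SQLNCLI;Data Source=localhost;" +
            "Initial Catalog=SourceDb;Integrated Security=SSPI;";

        db.DataSources.Add(dsNew);
        dsNew.Update(); // push the new data source to the server

        srv.Disconnect();
    }
}

The data source view, mining structure, and model are built the same way: DataSourceView, MiningStructure, and MiningModel objects added to the database and then updated.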
I have set up a new connection as a connection from a data source, but I cannot see how to use this connection to create my data flow source. I have tried using an OLE DB connection, but this is painfully slow! The process of loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via a DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source/connection? Will a DSN source be faster?
Can you help me resolve my problem? I have to build a simple daily backup system. Source: flat file; destination: SQL Server. I want to use the Slowly Changing Dimension component to back up only the new and updated rows from my source (flat file) and put them into SQL Server.
But how can I detect deleted rows in my source?
Any suggestions ?
If it's not clear enough, please ask for more details !
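The SCD component only sees rows that are present in the source, so deletions need a separate step. One common pattern (a sketch; the table and key names are placeholders) is to land the flat file in a staging table first, then delete, or just flag, destination rows whose key no longer appears:

using System.Data.SqlClient;

class DetectDeletedRows
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=SSPI;"))
        {
            conn.Open();
            // Run after the flat file has been loaded into dbo.Staging.
            new SqlCommand(
                @"DELETE d
                    FROM dbo.Destination AS d
                   WHERE NOT EXISTS (SELECT 1
                                       FROM dbo.Staging AS s
                                      WHERE s.BusinessKey = d.BusinessKey);",
                conn).ExecuteNonQuery();
        }
    }
}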
I am in the middle of creating a dynamic SSIS package which will run for multiple zones having different source connections. My source is Oracle. I have 3 DFTs with 3 different source tables. I want to build the package so that the source connection changes dynamically and my single package can run for every zone. I have created a master table which stores each zone's source connection string and zone name. I have 2 different connections, so if any new zones come along in the future, only the new zone's details need to be added to the master table, without opening the package.
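A sketch of the driver loop, assuming a master table dbo.ZoneMaster with a ConnString column and an Oracle connection manager named "OracleSource" in the package (all of these names are placeholders). The same thing can be done inside the package with a Foreach Loop and a property expression on the connection manager:

using System.Data.SqlClient;
using Microsoft.SqlServer.Dts.Runtime;

class RunAllZones
{
    static void Main()
    {
        Application app = new Application();

        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=ConfigDb;Integrated Security=SSPI;"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT ConnString FROM dbo.ZoneMaster;", conn);
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    // One execution per zone: repoint the Oracle connection
                    // manager, then run the package with its 3 DFTs.
                    Package pkg = app.LoadPackage(@"C:\Packages\Zones.dtsx", null);
                    pkg.Connections["OracleSource"].ConnectionString =
                        rdr.GetString(0);
                    pkg.Execute();
                }
            }
        }
    }
}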