I have an archive of an Analysis Services database that was created on a server that is not accessible to me. I also have a copy of the source SQL Server database that it uses as a data source, and I have restored both to my server. I have figured out how to change the data source to point to my server for the fact tables referenced, but I can't figure out how to change the data source for the shared dimensions. I would like to be able to work on this version of the database, but I get errors when I try to browse the dimension data because it can't connect to the original data source. Any ideas?
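For what it's worth: if the restored database is on Analysis Services 2005 or later, every data source it contains can be repointed in a few lines of AMO, which covers the dimensions too. A minimal sketch, with the server, database, and connection-string values as placeholders (if this is an AS 2000 archive, where shared dimensions are separate objects, this won't apply and the edit has to happen in Analysis Manager):

    using Microsoft.AnalysisServices;  // AMO; add a reference to this assembly

    // Sketch: repoint every data source in the restored SSAS database.
    Server srv = new Server();
    srv.Connect("Data Source=localhost");                   // placeholder server
    Database db = srv.Databases.GetByName("MyRestoredDb");  // placeholder database
    foreach (DataSource ds in db.DataSources)
    {
        ds.ConnectionString = "Provider=SQLNCLI;Data Source=localhost;"
                            + "Initial Catalog=MySourceDb;Integrated Security=SSPI";
        ds.Update();  // push the change back to the server
    }
    srv.Disconnect();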
First off, let me just say that I'm a complete newbie to SQL Reporting Services and .NET in general. We have a VB6 application that launches an SSRS 2005 report in a viewer window. This was accomplished by creating a VB.NET "wrapper" window that launches the report and allows it to be previewed, exported, etc. I did not write any of this.
The report uses a shared data source, which points to a specific database. My problem is that users can select which database they want when they launch the VB6 application, so I need to find a way to edit the connection string in the report on the fly to specify which database to use. I have the db name in the "wrapper" application, but I can't figure out how to pass it to the report.
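One pattern that fits this scenario (offered as a sketch, not necessarily how your wrapper was built): convert the report's data source from shared to embedded, give it an expression-based connection string such as ="Data Source=myserver;Initial Catalog=" & Parameters!Database.Value, and have the wrapper pass the database name as an ordinary report parameter. On the viewer side that might look like the following, where the server URL, report path, and the "Database" parameter are all invented names:

    using System;
    using Microsoft.Reporting.WinForms;

    // Sketch: point the viewer at the server report and pass the db name.
    reportViewer1.ProcessingMode = ProcessingMode.Remote;
    reportViewer1.ServerReport.ReportServerUrl = new Uri("http://myserver/reportserver");
    reportViewer1.ServerReport.ReportPath = "/Reports/MyReport";
    reportViewer1.ServerReport.SetParameters(new ReportParameter[] {
        new ReportParameter("Database", selectedDbName)  // value handed over from VB6
    });
    reportViewer1.RefreshReport();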
Hi, I have noticed that the cubes we have here use shared dimensions. For almost all cubes (5-6) there are at least 4-5 common dimensions. According to what I have been preached so far, shared dimensions exist so that you can reuse them. That is not what is practised here. For example, cube1 has somedim1, dim2_c1, dim3_c1... and cube2 has xyzdim1_c2, xyzdim2_c2, dim3_c2...
dim3_c1 and dim3_c2 are the same dimension, one copy per cube. I don't know if I am missing something. Shouldn't they use the same dimension? Could there be any reason for this? Please advise.
I ran into some strange behavior using connections for data sources inside packages in Visual Studio 2005.
Suppose we have a table tab1 in both a development source db and a test source db:
tab1 in the dev db contains 100 records; tab1 in the test db contains 200 records.
I start developing a package using an 'OLE DB data source' task inside a data flow. When I change the data source from dev to test, strange things happen.
The 'OLE DB source' task retrieves only 197 records... but if I open the connection inside the Connection Manager pane, and only after I click the 'Test Connection' button, I get 200 records.
The source db is on a SQL Server 2000 server, and I use the OLE DB Provider for SQL Server.
How can I get the shared data sources I created in Report Manager to be available in VS.NET, so I can assign them to a report? I want to be able to create the shared data source first in Report Manager and then assign it to a report in VS.NET.
Was wondering if there is a best-practice set of minimum permissions for a SQL login used when setting up a new shared data source for SSRS Report Manager?
Something along the lines of making it a data reader for the DB, plus permissions to update tempdb?
I would have thought it inadvisable for the login to be able to update the main db...
I see that a .rds file was created as a result of my defining a shared data source. So are they meant to be shared only across projects within the same solution, across solutions (perhaps with references), system-wide, etc.?
I'm trying to use stored credentials to enable caching. I've created a special windows user account with minimum permissions for just this task, and once set up, it works great (almost).
I can update the shared data source using SQL Management Studio, or directly via Report Manager, to use a set of stored Windows credentials. But I don't seem to be able to do the same via VS.NET 2005: I can only store SQL credentials, which I have no need to enable and no desire to add to my surface area.
The problem is that every time I deploy any report that uses that data source (which is nearly all of them), the data source is re-published, which wipes out the stored credentials, and caching immediately stops.
I've tried messing with the XML directly, with no luck, and changing assorted advanced settings, but nothing seems to stick.
Obviously I can manually update the credentials each time I deploy something, but surely there must be a better way. Is there a way either to set the data source to use stored Windows credentials, or to just plain prevent the deployment of the data source every time a dependent report is published?
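Two tentative suggestions. First, the report project in VS 2005 has an OverwriteDataSources property; setting it to False should stop the shared data source from being re-published on every deploy. Second, the stored Windows credentials can be re-applied programmatically through the SSRS SOAP API rather than by hand. A rough sketch against a ReportingService2005 web-reference proxy, with the item path and account invented:

    // Assumes a web reference to ReportService2005.asmx.
    ReportingService2005 rs = new ReportingService2005();
    rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

    DataSourceDefinition def = rs.GetDataSourceContents("/Data Sources/MyDataSource");
    def.CredentialRetrieval = CredentialRetrievalEnum.Stored;
    def.WindowsCredentials = true;           // stored credentials are a Windows account
    def.UserName = @"MYDOMAIN\ssrs_cache";   // placeholder account
    def.Password = "********";               // placeholder password
    rs.SetDataSourceContents("/Data Sources/MyDataSource", def);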
Hi all, I have several reports using a single shared data source. I want to change, at runtime, the database that the data source uses. Can this be achieved? If not, what are the other solutions? I guess using a non-shared data source for each report may be a solution (is it?), but it is not the best one for me. My goal is to allow users to run the same set of reports, viewed in a ReportViewer control, but using different databases (connection-string dependent).
I created the shared data source and the report in VS 2005. After deploying the report to the report server, trying to access it produces the following error: An error has occurred during report processing.
Cannot create a connection to data source 'kv_testQA'.
ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
I checked whether the data source exists on the report server, and it does. But it still produces the error. What is the issue here?
I'm using a shared data source to connect to an Oracle server in my packages. After changing the database user's password in the shared data source, I noticed the packages concerned would fail with the following description.
In BI Development Studio, when I have to change the data source for the datasets within a report, I have to go to each dataset individually. Is there a quicker way to do this, say, changing the data source for the entire report at once in BI Dev Studio?
I'm getting a bit lost in SSIS. I've got an Excel source file that I'm trying to load into a table, and I keep getting validation errors warning about not being able to convert between Unicode and non-Unicode string data types.
I'm trying to figure out where I have to change this and am frankly confused. It seems SSIS is assigning various columns Unicode/DT_WSTR data types, but I want them to import as regular string types.
On the Data Flow tab in SSIS, I right-click on the source Data Flow component (the Excel file) and select Show Advanced Editor. Then on the last tab, Input and Output Properties, there's a tree view for the Excel output. There are "External Columns" and "Output Columns" containers in the tree view.
I tried setting some of these but they don't seem to "take". Do I need to change the data type for each column under both the External and Output columns?
That seems like a lot of work! And, as I said, I tried setting some, but I still got the same validation errors. So then I went back to this spot (Advanced Editor -> Input and Output Properties tab) and my changes seem to have been lost.
I have an Excel 2013 file with lots of DAX, connected to an Azure database. I'd like to reuse all that work by changing the data source for the PowerPivot model to a different database, an exact (but empty) copy on the same server, but Excel won't let me. In PowerPivot I can change the database connection, the user ID and password, and the connection's name. When opening each table's properties (inside the PowerPivot model), the new connection is used and all old data is removed; but as soon as I refresh using Existing Connections, whether from PowerPivot or from the Excel Data tab, the old connection is used and the old data is reloaded.
If I use Existing Connections from inside PowerPivot, I can see that the new connection is highlighted and has the correct settings, but I think maybe that one is run first and then the old one is run afterwards (or something like that).
On the Excel Data tab, I can see that the old connection is the only one Excel itself seems to know about, but I cannot change anything there as it's read-only.
There must be a way to change this. Even with copy and paste it would take me days to recreate this Excel file from scratch and it would be a serious flaw and reduction of usability for PowerPivot.
SELECT t.Doctor, t.LedgerAmount, t.TransactionDate,
       ISNULL(lg.LedgerGrpDesc, 'No Sales Group') AS LedgerGroup
FROM Transactions t
LEFT OUTER JOIN LedgerGroups lg ON t.LedgerDescription = lg.dbLedgerDesc
My problem is that the data in t.LedgerDescription now sometimes has either leading/trailing white space or, more likely, special chars, so the join against lg.dbLedgerDesc doesn't always work.
I can't change the source of the data to strip out special chars/white space so am stuck on how to deal with it.
I tried using LTRIM & RTRIM in the join condition, but this doesn't seem to have had any effect...
LEFT OUTER JOIN LedgerGroups lg ON LTRIM(RTRIM(t.LedgerDescription)) = lg.dbLedgerDesc
I have got a package whose source is a SQL table with 50 columns, of which we use only 10. Recently one column name changed, and the package now throws an invalid-mapping error. When I opened the source to make the change, I noticed that all the columns are now preselected and the data types have reverted to the defaults (I had changed the data types as per my requirements while developing). So now I have to select only the required columns from the source and redo the data type changes in the Advanced Editor. Is there any option that leaves these settings undisturbed, so that we just need to correct the mapping alone?
At the moment we use SQL Server 2008 R2 Standard with Reporting Services. I want to change the individual (non-shared) schedules for 170 subscriptions without using the web interface.
I tried changing table entries in dbo.Schedule and dbo.Subscriptions, but then the reports did not run. I also know that there are jobs in SQL Server Agent for the schedules. Now I need to understand how the mechanism works that updates the job entries from the database tables. Is there a stored procedure that can be used?
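As far as I know, the Agent jobs (named after the ScheduleID GUIDs) do not contain the report logic themselves; they merely fire an event in the ReportServer database, which is why editing dbo.Schedule alone changes nothing. The supported route for bulk changes is the SSRS SOAP API, e.g. rewriting the schedule XML held in a subscription's MatchData. A tentative sketch against ReportingService2005 (check the exact signatures against your generated proxy; these are written from memory):

    // Assumes a web reference to ReportService2005.asmx.
    ReportingService2005 rs = new ReportingService2005();
    rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

    ExtensionSettings ext; string desc; ActiveState active; string status;
    string eventType; string matchData; ParameterValue[] parms;
    rs.GetSubscriptionProperties(subscriptionID, out ext, out desc, out active,
        out status, out eventType, out matchData, out parms);

    // For timed subscriptions, matchData is ScheduleDefinition XML; edit and push back.
    string newMatchData = matchData.Replace("T02:00:00", "T04:00:00"); // crude example
    rs.SetSubscriptionProperties(subscriptionID, ext, desc, eventType,
        newMatchData, parms);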
Hi all experts here. Do we always have to use the SCD component when loading data into a data warehouse to handle changes to rows? I am looking forward to hearing from you, and thank you very much in advance for your help. With best regards,
I need to create a package that updates the dimensions and cube data from a data warehouse on a daily basis. I was going to create a data flow that takes the data from the DW source, feeds it into a Dimension Processing destination to update the dimensions, and uses a Partition Processing destination in the same manner to update the cube; but then I came across the Analysis Services Processing Task, which seems to do the job as well. I am kind of confused about which way to go. Any recommendations?
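For what it's worth, the Analysis Services Processing Task is usually the simpler choice for a plain daily refresh, while the Dimension/Partition Processing destinations are aimed at pushing pipeline data directly into Analysis Services. If neither quite fits, a third option is a small AMO script; a minimal sketch, with all names below as placeholders:

    using Microsoft.AnalysisServices;

    // Sketch: refresh all dimensions, then fully process one cube.
    Server srv = new Server();
    srv.Connect("Data Source=localhost");
    Database db = srv.Databases.GetByName("MyOlapDb");
    foreach (Dimension dim in db.Dimensions)
        dim.Process(ProcessType.ProcessUpdate);  // pick up member changes
    db.Cubes.FindByName("MyCube").Process(ProcessType.ProcessFull);
    srv.Disconnect();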
I have one dimension and one measure group. I deployed and processed the cube, and I was able to browse the data. Then I added one more dimension, deployed, and reprocessed the cube. Now I am not able to see any values; I am getting something like the below.
Hi, I use lookups to map the surrogate keys of type 1 dimensions to my fact tables in SSIS. But how do I handle a type 2 dimension with ValidFrom and ValidUntil date fields? I do not use an IsCurrent column, because that could cause problems with late-arriving facts.
- In DTS I used a SQL statement like this:

UPDATE SA
SET SA.DimProdRef = Dim.RecordID
FROM SAWarenEingang SA, DimProd Dim
WHERE SA.ProduktNumber = Dim.ProduktNumber
  AND SA.ArtikelkontoBewegungsdatum BETWEEN Dim.ValidFrom AND Dim.ValidUntil
Now in SSIS I want to handle the whole thing in the data flow without using a staging table:
- Using a Lookup: I would have to pass the date column of each fact row into the lookup. That does not work.
- Using Execute SQL in the data flow: this would be very slow, because the statement would be executed for every row in the data flow.
Hi. In this code, how can I create a new data source, data source view, model, and structure so that it runs dynamically? I get a lot of errors with this code; they are about the server and database, which are not defined in the current code. Should I define the server first?
How can I create the data source, data source view, model, and structure in code? Please show me code for that and guide me: databasename and srv are unknown in the snippet. Do I need to add another reference for Analysis Services? Please explain this code:

1) RelationalDataSource dsNew = new RelationalDataSource(
       datasourceName,
       Utils.GetSyntacticallyValidID(datasourceName, typeof(RelationalDataSource)));
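That line only builds the data source object in memory, which is consistent with the errors about srv and databasename: the surrounding setup is missing. A guess at the scaffolding the snippet expects, using the AMO assembly Microsoft.AnalysisServices (which does need to be added as a reference), with the server name, database name, and connection string as placeholders:

    using Microsoft.AnalysisServices;

    Server srv = new Server();
    srv.Connect("Data Source=localhost");                 // placeholder server
    Database db = srv.Databases.GetByName("MyMiningDb");  // placeholder database

    RelationalDataSource dsNew = new RelationalDataSource(
        datasourceName,
        Utils.GetSyntacticallyValidID(datasourceName, typeof(RelationalDataSource)));
    dsNew.ConnectionString = "Provider=SQLNCLI;Data Source=localhost;"
                           + "Initial Catalog=MySourceDb;Integrated Security=SSPI";
    db.DataSources.Add(dsNew);
    dsNew.Update();  // create the data source on the server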
I have set up a new connection as a connection from a data source, but I cannot see how to use this connection to create my data flow source. I have tried using an OLE DB connection, but this is painfully slow! Loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via DSN, takes 45 seconds.
Have I missed something in my set up of the OLE DB source / connection? Will a DSN source be faster?
I am a relative newbie to SSIS. I have been tasked with writing packages to import data from our clients. We have about 100 clients, each with a few different file formats, and none of the clients share a format. We load files from each client every day, and the file name changes each day. I have done all of my development work so far with a constant file name in a flat file connection manager.
Ultimately we will write a VB application that lets the computer operator select the flat file to load and the SSIS package to load it with. I had been planning on accomplishing this through the SSIS command-line interface. Can I specify the flat file to load via a variable that is passed on the command line? Do I need to use a Script Component to take the variable and assign it to the connection manager?
Is there a better way to do this? I have seen glimpses of a VB interface to SSIS. Maybe that is a better way to kick off the packages from a VB app?
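On the command-line idea: dtexec can set a package variable with /SET, and the flat file connection manager's ConnectionString property can be bound to that variable with an expression, so no Script Component is needed just to receive the value. A sketch of kicking this off from the .NET side, with the package path and variable name invented (the in-process alternative glimpsed above would be the Microsoft.SqlServer.Dts.Runtime object model):

    using System.Diagnostics;

    // Sketch: run a package with dtexec, passing the file to load in User::FileName.
    // The connection manager's ConnectionString is assumed to carry the
    // expression @[User::FileName]. Quote the /SET value if the path has spaces.
    string package = @"C:\Packages\LoadClientFile.dtsx";
    string file = @"C:\Inbound\client1.txt";
    ProcessStartInfo psi = new ProcessStartInfo("dtexec",
        "/F \"" + package + "\" /SET \\Package.Variables[User::FileName].Value;" + file);
    psi.UseShellExecute = false;
    Process.Start(psi).WaitForExit();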