I have a scenario where I need to run a scheduled package every 10 minutes. Let me give some examples. Once my ETL process is done, a single row is inserted into my SS_Batch table. This table has two columns, SS_Batch_CD and SS_Create_TS. When the ETL process runs successfully, SS_Batch_CD contains 'C', meaning 'Completed'. Similarly, when the ETL process fails, SS_Batch_CD contains 'F', meaning 'Failed'. And when the ETL process is still running, SS_Batch_CD contains 'P', meaning 'In Progress'. The SS_Create_TS column holds a date such as 2008-04-15.
Note: the SS_Batch table gets this single row, along with the date, only when the ETL job is over (whether the job ran successfully, failed, or is in progress).
Actually, I need to run the package every 10 minutes because I don't know when the cube was last processed. If I can get this last processed cube date, I can check it against the SS_Create_TS and SS_Batch_CD columns in the SS_Batch table. Say the last processed cube date is greater than SS_Create_TS; then I can refresh or process the cube. This is the validation I need to do to achieve my goal.
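For illustration, the lookup side of that validation could be a simple query (run from an Execute SQL Task or one of the script tasks); the comparison against the last processed cube date then happens in the package, as described above:
Code:
-- Latest completed ETL batch from SS_Batch; the package compares this
-- timestamp against the last processed date of the cube.
SELECT MAX(SS_Create_TS) AS LastCompletedBatch
FROM SS_Batch
WHERE SS_Batch_CD = 'C';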
What control flow tasks would we need in SSIS to achieve this scenario? Please give me a brief outline of how to solve this problem.
Hi all, I have some clarifications on stopping my package once the cube is refreshed or processed.
Below are the steps in my package. Let me list the tasks and data flow transformations I have used:
1. Data Flow Task
2. Script Task 1 - code that gets the last processed cube date (a global variable, lastProcessedCube, has been declared for it)
3. Script Task 2 - code that reads the SS_Batch table and assigns the Create_TS date to another global variable, create_ts
4. Analysis Services Processing Task
Between Script Task 2 and the Analysis Services Processing Task, I have set the precedence constraint to 'Expression and Constraint' in the Precedence Constraint Editor, with the expression @lastProcessedCube > @create_ts.
Actually, I need to run the package every 10 minutes, which I can do with a job schedule, but the cube only needs to be refreshed or processed once a day. Is there any way to stop the package once my cube has been processed for that day, and then start it again the next day? Is it possible to do this? Please let me know.
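One way to get that daily stop would be a first step that checks whether the cube has already been processed today, for example as a query run by an Execute SQL Task (the ? placeholder would be mapped to the lastProcessedCube variable; the once-a-day rule is my assumption):
Code:
-- 0 = cube already processed today (skip the rest of the package),
-- 1 = not yet processed today (carry on).
SELECT CASE WHEN DATEDIFF(day, ?, GETDATE()) = 0 THEN 0 ELSE 1 END AS NeedsProcessing;
A precedence constraint on the result (an expression such as @NeedsProcessing == 1) would then let the remaining tasks run only once per day, while the job itself keeps its 10-minute schedule.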
I've created a cube and it processed fine. The CALCULATE command is there. The measure I'm attempting to run in the SSAS/Visual Studio browser is simply a count-of-rows measure. When I drag the measure into the window, it says no rows available. If I click the filter that allows nulls, the only change is that it goes from 'no rows available' to 'NULL'.
I can pull a last processed date for the entire model: URL... That works great for showing the last time anything in the entire model was processed, but it is not table specific: updating / processing table A also changes the timestamp on tables B and C. However, if I want to look at just a specific table in the model (building a column in each fact table and a measure to go with it), I find that any process operation updates all the =NOW() columns in all the fact tables. If I have a model with 3 fact tables and I do a process table using ProcessFull on one of them, the "LastProcessed" =NOW() calculated columns in all three tables are updated. URL...
Is there a way to set up a measure in the model, at a per-table level, to show the last processed date for each individual table? LastDataRefresh := MAX('TableA'[LastProcessed])
For example: our sales table may update three times a day, whereas our warehouse inventory table is only updated nightly. I wanted to let end users see, via a measure for each table, when the last process event was for a given table in the model.
What's the best way to write the key values of records processed by my SSIS 2012 package to the chosen log provider? My SSIS package deactivates widgets as well as thingies. It was just released into production this week and runs daily, and we'd like to keep a close eye on what it's doing for a couple of weeks; by that I mean being able, on any day, to quickly see which thingies and widgets were deactivated that morning. It typically deactivates fewer than 5 widgets and thingies per day.
We could dig through the database to see which were deactivated, but that only works if somebody hasn't manually reactivated them since they were deactivated. We need a log. This is a temporary watch we're doing, so we don't want to write to a table or make any significant package changes, such as adding new tasks. It seems like writing the 5-or-so deactivated thingy and widget key values to the log is the best way to watch this package. What's the most efficient way to do this? I'm hoping to avoid a new loop and script component with Dts.Log calls, but I don't know any other way.
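If the package logs to the SQL Server log provider, the watch itself could be a query against the provider's default table, assuming the key values get written into informational events (the event filter below is illustrative):
Code:
-- Today's informational log entries from the package, e.g. events that
-- carry the deactivated widget/thingy key values in their message text.
SELECT starttime, source, message
FROM dbo.sysssislog
WHERE event = 'OnInformation'
  AND starttime >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
ORDER BY starttime;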
I'm looking at SSIS and SqlBulkCopy as a possible method to quickly insert and process large amounts of data. My current method uses sp_xml_preparedocument and OPENXML to parse an XML document of the data I want to process and insert into the database; however, I'm noticing that SQL Server's performance when parsing the XML document isn't that good.
I found the C# SqlBulkCopy class (new in .NET 2.0) and was thinking it would be faster to use that to load my data into a staging table, and then use SSIS to extract the data from the staging table, process and transform it as necessary, and finally load it into the final destination tables. I was able to create an Integration Services project that selects the data from the staging table, does a bit of processing on one of the fields (by calling a stored procedure), and finally loads the processed data into the final table.
The problem with this is that I need to clean out the rows that were extracted from the staging table, and I'm not sure how to accomplish this (I can't just issue a "delete from staging_table" because there may be new records in the staging table that were not processed). Perhaps I can either delete each record as it is processed, or somehow get the last processed identity value from the staging table and delete all records with an id less than or equal to it; a sketch of that second idea follows.
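The high-water-mark idea might look like this in T-SQL (the identity column name id is made up; in practice the captured value would be handed to the package in a variable):
Code:
-- Capture the high-water mark before the SSIS extract runs.
DECLARE @maxId int;
SELECT @maxId = MAX(id) FROM staging_table;

-- The package then processes only rows WHERE id <= @maxId ...

-- ... and afterwards deletes just those rows, so anything
-- inserted in the meantime survives.
DELETE FROM staging_table
WHERE id <= @maxId;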
Thanks in advance for any help you can provide; maybe there is an easy way to accomplish what I'm trying to do.
Hi, I am trying to filter the data within my report by a date range (FromDate - ToDate), using a cube as the data source.
My issue: I have the filtering working OK, but if I select a date outside the range of the data within my cube (for example, a starting date of 1 Jan 1965 when my data only starts from 15 Jan 1965), then no data is returned.
Within the MDX query, in the STRTOSET function, I am using 'Constrained' around the date parameter (i.e. the start date of the range).
My question: has anyone managed, or is it possible, to use date values outside the range of the data within the cube and still get a correct dataset returned? If so, could you please explain how, with an example?
I am having a problem creating an Integration Services package that executes an MDX query and places the results in a local database.
I am using an OLE DB connection to connect to the cube. However, when I run the package I get the following error:
[OLE DB Source [175]] Error: Cannot create an OLE DB accessor. Verify that the column metadata is valid.
[DTS.Pipeline] Error: component "OLE DB Source" (175) failed the pre-execute phase and returned error code 0xC0202025.
There is a feature called "proactive caching" in Analysis Services. It can provide automatic synchronization with the relational database, with no more explicit "cube processing".
But I cannot get the latest data into the cube even when I set the proactive caching mode to "real time".
Do I need SSIS to process the cube in this case?
Following is the procedure I have gone through:
1. Test the data
1.1 Use BI Dev Studio to browse the cube and confirm no new data is there.
1.2 Process the cube and browse the data, confirming the new data is there.
1.3 Delete the new data from the source database and reprocess the cube, confirming no new data is there.
1.4 Add the new data again.
2. Configure the proactive caching settings of the cube
2.1 Use SQL Server Management Studio to open the cube and open its Properties window.
2.2 Under the "proactive caching" option, select "low-latency MOLAP" (and even "real-time ROLAP" later), then click OK.
3. Configure the proactive caching settings of the partition
3.1 Open the partition's Properties window.
3.2 Under the "proactive caching" option, select "low-latency MOLAP" (and even "real-time ROLAP" later), then click OK.
3.3 On the Notifications tab, select "SQL Server" and specify the tracking table as the "fact table", which is a view that gets data from the real fact table.
4. Wait for a period of time...
5. Test the data again
5.1 Use BI Dev Studio to browse the cube; no new data is there (even when I selected real-time ROLAP later). I even tried the Reconnect and Refresh options on the toolbar.
So my questions are:
1. Did I do the right thing to achieve the goal of "automatic synchronization with the relational database"?
2. Can I monitor the synchronization, for example by monitoring the processing log or viewing the schedule settings and status of the process?
I am currently using SSIS, a SQL Server 2000 database, and Analysis Services 2000 for the cube. I create a new table every day, with a name like day_20080504, day_20080505, etc. I then go into Analysis Services, process the dimensions (incremental), and create a new partition using the old partition as a template.
My first question is how to create a new partition every day using the old partition as a template (almost the same, apart from the underlying database table). My second question: can I do this on Analysis Services 2000, or should I convert my cube to SSAS?
I am trying to access a cube through SSIS and have been unable to set up the SSIS package with the workaround provided here (https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=219068). On pasting the MDX query using the OPENROWSET command into the OLE DB Source Editor, I get a pop-up window with error 0x08000405 and a message saying the syntax used for OPENROWSET is incorrect.
I also tried running this in SQL Server Management Studio, but I get the following error:
OLE DB provider "MSOLAP" for linked server "(null)" returned message "An error was encountered in the transport layer.". OLE DB provider "MSOLAP" for linked server "(null)" returned message "The peer prematurely closed the connection.". Msg 7303, Level 16, State 1, Line 3 Cannot initialize the data source object of OLE DB provider "MSOLAP" for linked server "(null)".
The server where the cube resides is a 64-bit machine and mine is 32-bit; could this be the reason for the issue?
I found an article on the Microsoft support website (http://support.microsoft.com/kb/947512) which describes the possible symptoms and causes of such connectivity issues, but I couldn't find a workaround in it.
Here is the syntax of the query I am using in SSIS and in Query Analyzer.
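My query follows this general OPENROWSET shape (the server, catalog, and MDX below are placeholders rather than my real values):
Code:
SELECT a.*
FROM OPENROWSET(
       'MSOLAP',
       'DATASOURCE=MyOlapServer; Initial Catalog=MyCatalog;',
       'SELECT [Measures].MEMBERS ON COLUMNS FROM [MyCube]'
     ) AS a;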
It runs SSIS packages and stored procedures fine. But when it comes to executing a command that reprocesses an Analysis Services cube, it fails, saying that the cube either does not exist or the account has no rights. The cube does exist. If it's the account, how can I choose a different one, or grant the one being used permission to execute the reprocessing?
How can I connect to a cube (on a SQL Server 2005 server) and get (fact) data to process with SSIS? I looked at the toolbox but see no applicable tool. It would be great if somebody could provide a link to an article or example.
I've been trying to use an Analysis Services 2005 cube as a data source, query it via MDX, and then use the returned data elsewhere in SSIS.
However, I've been unable to get this working and can't find any information on how it can be done. Surely it should be possible, when I can get this working even in Excel?
I've looked in the December edition of BOL with no luck. I have also sent feedback to BOL about this and was told that "it should be possible, since there is a way to send SQL queries to AS." However, the person I was speaking with knew of no one who had actually tried this scenario, and suggested posting here.
Any help as to how to get this done would be greatly appreciated.
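One route I have seen suggested, sketched here with placeholder names, is to create a linked server to the AS instance and wrap the MDX in OPENQUERY, so that an ordinary OLE DB source in SSIS can run plain T-SQL:
Code:
-- Hypothetical linked server pointing at the Analysis Services instance.
EXEC sp_addlinkedserver
     @server     = 'SSAS_LINK',
     @srvproduct = '',
     @provider   = 'MSOLAP',
     @datasrc    = 'MyAsServer',
     @catalog    = 'MyOlapDatabase';

-- The MDX result then comes back as a flattened rowset.
SELECT *
FROM OPENQUERY(SSAS_LINK,
     'SELECT [Measures].MEMBERS ON COLUMNS FROM [MyCube]');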
All I get back is the error message "Analysis Services Processing Task Error: A connection cannot be made. Ensure the server is running." The server is running; I can process the cube by connecting to the AS instance and right-clicking to process it.
I can also process the cube by running the SSIS task inside SSDT. Only when I deploy the SSIS package (in project deployment mode) and then execute it do I get the error message.
SQL Server, SSAS, and SSIS processes are all running under the same account. SSAS is on a separate server from SSIS and SQL if that matters.
I have an SSAS 2005 database "A" and an SSIS package "P" which does a full process of the "A" OLAP database. The SSAS server connection string is based on a variable read from an XML configuration file.
It works well in BIDS, but when I deployed it, the package failed at the step that connects to SSAS, with the message "a connection cannot be made, please ensure the server is running".
In the connection string I am using a server name like servera.xx.com. If I change it to the IP address, it works; if I change it to localhost (the package happens to run on the same server), it works.
But I need the server-name solution, as the IP may change.
I have created a database and an OLAP cube in Analysis Services using SSAS. In SSAS I used a data source that populates the OLAP cube from SQL tables. Now, when I add more data to my SQL tables and try to deploy the cube, the newly added data does not get populated into the cube. So I want to run an SSIS package that will bring the data from the SQL tables into this OLAP cube.
Can you please help me with how to write this SSIS package to load the data from the SQL tables into the OLAP cube? (This is a very urgent issue.)
I am just starting out using CUBEMEMBER/CUBEVALUE formulas in Excel linked to a SQL Server OLAP database, using this method for some custom reports where pivot tables are not suitable. The time dimension values include months, quarters, and years, and CUBEMEMBER formulas like
=CUBEMEMBER("OLAPCUBE","[Time].[Time].[Year].&[2015].&[1].&[1]")
work fine (1st quarter, 1st month, etc.).
Is there a straightforward notation to aggregate months, or do I need to use a plus sign to add a number of CUBEMEMBER formulas together? In other words, is there an easier way to get, say, Jan-to-July 2015 totals than...
Now I have a different constellation: Integration Services runs on one server, in version 2014, and the Analysis Services instance holding the cube database to be processed runs on another server, version 2012. I tried several different combinations of SSIS version and Analysis Management Objects version, and got several errors while running the processing package (e.g. "object reference not set to an instance of an object", "cannot find AnalysisServices.dll").
Is this 2014/2012 combination possible at all? I assume the BIDS version has to be the one for SQL Server 2014, as I want to run SSIS packages on a 2014 server; is that correct? Does it matter at all, and can I also deploy 2012 packages? Which version of Analysis Management Objects do I have to use? I assumed version 11.0 here, because I want to process a 2012 cube. And if it is possible to use the "old" 11.0 version of AMO, do I have to do anything so that it can be found by the SSIS package running on the server (it was built on my local computer, where I have all SQL Server versions from 2005 to 2014 installed in parallel), or do I just have to copy it to the appropriate SQL Server folder?
When I make a call to GetSchemaDataSet with a restriction containing a cube name that has a space in it, the call fails. Following is a sample of the code:
Code:
adoRestriction = new AdomdRestriction("CATALOG_NAME", "Contoso Telecom_Contoso");
adoRestrictions.Add(adoRestriction);
dataSet = conn.GetSchemaDataSet("MDSCHEMA_CUBES", adoRestrictions);
I am running SQL Server 2005 Analysis Services SP2. Is there some way to qualify the cube name in the restriction, or is this just a bug? Thanks.
I have parameters in my report. The user can choose the year, month, and date (3 parameters). Now I want to set default values for the parameters, so that the user sees the report for, say, the current day without selecting the parameters. I tried setting the type of the parameters to DateTime and the default value, for the year for example, to "=Today().Year". But when I execute the report, an error occurs, something like: no valid value for this parameter.
My attributes for the year, month, and date come from an Analysis Services cube with a server time dimension. Does somebody know how to set default values for these parameters?
Another question:
Does somebody know how I can reduce the available values for a parameter? For example, I have a parameter "year" from a server time dimension of a cube. The values available are "Year 2004", "Year 2005", "Year 2006", and "Year 2007", but I want the user to be able to choose only "Year 2006" or "Year 2007", not every year or "All". Another example: the user should only be able to choose a date in the past or today, but not a date in the future.
I have a confusing situation that I CAN resolve with cursors, BUT cursors are far too performance-hungry to be run.
Tables: LogFileIn, LogFileOut.
I want to look in LogFileIn (LFI) for files that have a NULL LogFileOut (the response to the LFI) AND don't have any other value.
A file can have multiple LFOs, depending on how many times that file gets processed.
It is confusing, but bear with me.
Code:
select * from LogFileIn where IdentityFileChar = 'BPRTFILE' and responseLogFileOutId is null
This does pull out a list of files that haven't been processed, BUT the same file might have been processed again and succeeded, so the results I get aren't accurate.
With the cursor, I would take the filename from the LFI and use it to check whether that filename comes up again with an LFO as processed.
Did you get that? And can you help? I'm confusing myself!
This table records every time a file is run, so there might be 20 identical LFIs, all with different LFOs... I'm looking for LFIs that have NULL, and only NULL.
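In set-based terms, I think what I'm after is something like the following (it assumes a FileName column identifies the same file across runs, which is a guess at the schema):
Code:
-- Files whose every run has a NULL response: exclude any file for which
-- some other run produced a LogFileOut.
SELECT lfi.*
FROM LogFileIn AS lfi
WHERE lfi.IdentityFileChar = 'BPRTFILE'
  AND lfi.responseLogFileOutId IS NULL
  AND NOT EXISTS (
        SELECT 1
        FROM LogFileIn AS other
        WHERE other.FileName = lfi.FileName
          AND other.responseLogFileOutId IS NOT NULL
      );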
I have a cube that we process nightly via an Analysis Services Processing Task in SSIS. In order to improve processing time, we elected to use a lot of rigid dimension attributes and to do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already-deployed cube.
In what order are the conditions in a SQL query processed? That is, for select * from table1, table2 where cond1 and cond2 and cond3, which condition will be processed first? (For optimization purposes, should the condition that cuts down the maximum number of rows be placed first or last?)