I am trying to learn the MDX tutorial (http://www.databasejournal.com/features/mssql/article.php/1550061) by William Pearson. It uses the example "Warehouse" cube. I am unable to proceed with the tutorial because I don't have this cube. Are there any links or articles on how to build the "Warehouse" cube used in the tutorial?
I'd like to get a simple and clear explanation of the cube in data mining, and of three notions we encounter a lot: Build, Deploy, and Process.
(1) What is the cube that is created when we deploy a mining solution/project? I wonder what type of cube it is, because although the deploy/process dialog shows that cube, after a successful deployment we still don't see it in the Cubes folder of the project.
(2) Why did SQL Server create that cube, even though we process only one table and use only a case table (without a nested table)?
(3) Can someone explain these 3 concepts with CLEAR differences between them? (A) Build (B) Deploy (C) Process
As far as I know, the stages go in this order: build, then deploy, then process. Also, it seems to me that those operations do not create objects inside a relational database, but instead create objects (binary and text, with the text files usually in XMLA) in the related project's folders and subfolders. Any good explanation is appreciated.
I am new to SQL Server 2005 Analysis Services and would like to use OLAP cubes as a data source for building a mining model. However, I would like to use a particular view of the OLAP cube that I have generated as the data source for the mining model. I find that I am not able to save the cube view while browsing the OLAP cube in Business Intelligence Development Studio. Is there a way I can achieve this requirement?
Any ideas regarding this will be really appreciated.
Whenever I try to build a cube, I get stuck on attribute relationships: either it shows a "yellow" icon in the hierarchy or a "red" underline in the attribute column. I don't know how to rectify those errors.
*Every processing* of a cube with 7 dimensions and 5 measure groups (no table has more than 100,000 rows) hangs while trying to process the measure groups. I have to restart Analysis Services every time. Even worse, every second processing crashes the whole Analysis Services process.
The only time I could successfully deploy a cube was with one measure group only.
How can anyone do any serious work with Analysis Services?
The error messages from the event log are not helpful either.
The description for Event ID ( 22 ) in Source ( MSSQLServerOLAPService ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: Internal error: An unexpected exception occured..
We have an automated build every night. Our solution contains an SSIS project in which each package is encrypted with a password. We invoke the build process from the command line like this:
C:\Programme\Microsoft Visual Studio 8\Common7\IDE\devenv.exe (c:\Development\X3\X3.sln /build Release)' in 'c:\Development\Projects\DailyBuild
During the build process we get an error:
External Program Failed: C:\Programme\Microsoft Visual Studio 8\Common7\IDE\devenv.exe (return code was 1):
We think the reason is that a password must be entered when the SSIS project is built. You may wonder why we need the SSIS packages to be part of our build at all: we hope that the build process also creates the Deployment Utility, if that is configured in dtproject.user. Is that so? Is there any way to create the Deployment Utility in an automated build process? Can a password be provided on the command line?
I am just starting out using CUBEMEMBER/CUBEVALUE formulas in Excel linked to a SQL OLAP database, using this method for some custom reports where pivot tables are not suitable. The time dimension values include Months, Quarters and Years, and CUBEMEMBER formulas like
=CUBEMEMBER("OLAPCUBE","[Time].[Time].[Year].&[2015].&[1].&[1]") work fine - 1st quarter 1st month etc.
Is there a straightforward notation to aggregate months, or do I need to use a plus sign to add a number of CUBEMEMBER formulas together? In other words, is there an easier way to get, say, Jan to July 2015 totals than
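What I'm hoping exists is a single-formula approach, something like this sketch (the measure name and the exact level/member keys are guesses for my cube; the idea is a CUBESET over a member range passed to CUBEVALUE):

=CUBEVALUE("OLAPCUBE",
   CUBESET("OLAPCUBE",
     "[Time].[Time].[Month].&[2015].&[1].&[1]:[Time].[Time].[Month].&[2015].&[3].&[1]",
     "Jan-Jul 2015"),
   "[Measures].[Sales Amount]")

The CUBESET builds the range of months (January through July, July being the first month of Q3 in this key scheme) and CUBEVALUE returns the aggregated total for that set.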
When I make a call to GetSchemaDataSet with a restriction containing a cube name that has a space in it, the call fails. Following is a sample of the code:

adoRestriction = new AdomdRestriction("CATALOG_NAME", "Contoso Telecom_Contoso");
adoRestrictions.Add(adoRestriction);
dataSet = conn.GetSchemaDataSet("MDSCHEMA_CUBES", adoRestrictions);

I am running SQL Server 2005 Analysis Services SP2. Is there some way to qualify the cube name in the restriction, or is this just a bug? Thanks.
This might not be the right place to post this but I'm going crazy on this....
Has anyone seen this error or know what this error is and how to fix it?
Error: A connection cannot be made to the database. Set and test the connection string. Additional Information: Either the user, (me), does not have access to the TFSWarehouse database, or the database does not exist. (Microsoft SQL Server 2005 Analysis Services)
Hi, can anyone list the types of errors that will make @@error = 1? I created a procedure to update a table based on a customer id. Id 7 does not exist in table A, and I am supposed to get 'not valid id', but in this case nothing happens: I always get 'table A updated'. Thanks, Ali
begin tran
update TableA set title = 'manager' where id = 7
if (@@error <> 0)
begin
    rollback tran
    print 'not valid id'
    return
end
ELSE
begin
    commit tran
    print 'table A updated'
end
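A sketch of what I suspect is going on, with @@ROWCOUNT added for comparison (the table name is a placeholder): an UPDATE that matches no rows completes successfully, so @@ERROR stays 0 and only @@ROWCOUNT reveals that nothing changed.

declare @rows int
begin tran
update TableA set title = 'manager' where id = 7
select @rows = @@ROWCOUNT   -- 0 rows affected when id 7 does not exist, but @@ERROR is still 0
if (@@ERROR <> 0 or @rows = 0)
begin
    rollback tran
    print 'not valid id'
end
else
begin
    commit tran
    print 'table A updated'
end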
Hello, this is my first attempt at using SSIS, and I'm creating a data warehouse for my company. I made a huge oversight as I was building my facts and dimensions: while I created a new integer primary key for each table, I destroyed the referential integrity between them. I hadn't planned that part out. When I went to connect the tables at the end, I realized that even though I could create the FKs in each table, they wouldn't mean anything.
So I'm a little confused as to how to maintain referential integrity between the tables during the creation of the new warehouse tables. My only idea right now is to create a new table for each original entity in the relational system -- I'll call it a "LOOKUP" table -- that table would have two columns: the original GUID column and a new integer id column. Then, when I create a new warehouse table, I can do a lookup against the proper LOOKUP table and assign the new PK or FK values so they'll stay consistent across the new warehouse tables, maintaining the references to each other.
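A minimal sketch of what I mean (table and column names are just placeholders for my real entities):

-- Hypothetical mapping table: original GUID -> new integer surrogate key
CREATE TABLE dbo.LOOKUP_Customer (
    CustomerGUID UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    CustomerSK   INT IDENTITY(1,1) NOT NULL UNIQUE
);

-- Populate it once from the source entity
INSERT INTO dbo.LOOKUP_Customer (CustomerGUID)
SELECT CustomerGUID FROM SourceDB.dbo.Customer;

-- When loading a warehouse table, translate GUID foreign keys
-- into the new integer surrogate keys via the mapping table
INSERT INTO dbo.FACT_Order (CustomerSK, OrderDate, Amount)
SELECT lk.CustomerSK, o.OrderDate, o.Amount
FROM SourceDB.dbo.[Order] AS o
JOIN dbo.LOOKUP_Customer AS lk ON lk.CustomerGUID = o.CustomerGUID;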
I am facing a small problem while updating the warehouse using an SSIS package.
Here is a sample scenario. I have a staging table T1 with some important columns such as stateid, ename, etc., and a target table Trg with only two columns, stateid and statecnt. The rule is that each state sends an Excel file containing the state and the employees belonging to that state. The first time I execute the SSIS package, I need to load the stateid and the count of employees for that particular state, for example:
SELECT stateid, COUNT(ename) FROM T1 WHERE stateid = ? GROUP BY stateid
The next time the same state's information is received, we need to update the count on the existing record in the data warehouse rather than creating a brand new record.
I am a little bit confused about implementing this logic. Can anyone help me accomplish this task? What can be done, or what steps are involved in achieving it?
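A minimal sketch of the kind of logic I have in mind, assuming the tables above (an UPDATE for states that already exist, then an INSERT for the ones that don't):

-- Update the count for states that already exist in the target
UPDATE trg
SET    trg.statecnt = src.statecnt
FROM   dbo.Trg AS trg
JOIN  (SELECT stateid, COUNT(ename) AS statecnt
       FROM dbo.T1
       GROUP BY stateid) AS src
       ON src.stateid = trg.stateid;

-- Insert states that are not yet in the target
INSERT INTO dbo.Trg (stateid, statecnt)
SELECT src.stateid, src.statecnt
FROM  (SELECT stateid, COUNT(ename) AS statecnt
       FROM dbo.T1
       GROUP BY stateid) AS src
WHERE NOT EXISTS (SELECT 1 FROM dbo.Trg AS trg WHERE trg.stateid = src.stateid);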
I am going to use Microsoft SQL Server to develop my data warehouse, but one thing confuses me. Since Analysis Services can create a star schema database, do I have to set up a star schema database in advance for the ETLed data? Basically, I am wondering what the relationship is between an ETLed database and the one created through Analysis Services.
Can anyone give me an explanation from the implementation perspective?
I am starting to load a data warehouse for a retention period of 10 years. My database backup plan is as follows -
1. Perform a full backup on Sunday.
2. Perform a differential backup every day, Mon - Sat.
3. Perform transaction log backups every hour on all days.
My recovery model is going to be BULK_LOGGED at all times. I had a few questions/comments on the Maintenance Plan that I would be creating for the backups. My database name is Warehouse.
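For reference, the plan corresponds roughly to these commands (a sketch; the file paths are placeholders):

-- Sunday: full backup
BACKUP DATABASE Warehouse
TO DISK = 'X:\Backups\Warehouse_full.bak' WITH INIT;

-- Mon - Sat: differential backup
BACKUP DATABASE Warehouse
TO DISK = 'X:\Backups\Warehouse_diff.bak' WITH DIFFERENTIAL, INIT;

-- Every hour: transaction log backup
BACKUP LOG Warehouse
TO DISK = 'X:\Backups\Warehouse_log.trn';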
1. Differential backups cannot be created via a Maintenance Plan; only a full backup gets created. Am I correct?
2. I shall be running optimizations and integrity checks prior to the full backup. Is this OK?
3. "Remove files (both .BAK and .TRN) older than" - I am thinking of setting this to 6 days. I want only one full backup at a time on the server. What settings can I use? I think the old backup gets deleted when the new one succeeds. What settings in the Maintenance Plan do I have to use to overwrite the previous backup with the current one?
System Databases -
Should the settings for the System Databases be the same as for my Warehouse database?
The Maintenance Plan takes care of the full backup and the TLOG backups. For differential backups I have to use All Tasks in EM and specify the differential backup job. Correct?
All kinds of backups can occur while the database is active. Meaning, I can have a job loading data into the warehouse while a backup is running simultaneously. Am I correct?
I do not intend to shrink the transaction log at any time; since it gets backed up every hour, I do not expect it to grow very large. If I do have to shrink it, I would change the recovery model to SIMPLE, shrink the log, immediately do a full backup, and then set the model back to BULK_LOGGED. Is this sequence of steps correct?
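The sequence I have in mind, as a sketch (the target size and backup path are placeholders; this mirrors the order described above):

ALTER DATABASE Warehouse SET RECOVERY SIMPLE;
DBCC SHRINKFILE ('Warehouse_Log', 1024);   -- shrink the log file to roughly 1 GB
BACKUP DATABASE Warehouse
TO DISK = 'X:\Backups\Warehouse_full.bak' WITH INIT;   -- full backup right after the shrink
ALTER DATABASE Warehouse SET RECOVERY BULK_LOGGED;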
I am trying to restore my data warehouse from a January 2008 backup under a new name, to recover a table that I accidentally deleted. The restore is taking a long time. Here is the command I am running as sa in QA:
---
RESTORE DATABASE Warehouse_new
FROM DISK = 'H:\MSSQLData\MSSQL\BACKUP\DBBackups\Warehouse\Warehouse_db_200801050600.BAK'
WITH MOVE 'Warehouse_Data' TO 'G:\MSSQLData\MSSQL\Data\Warehouse_New_Data.MDF',
     MOVE 'Warehouse_Log' TO 'H:\MSSQLData\MSSQL\Logs\Warehouse_New_Log.ldf'
----
The Warehouse_New_Data.MDF is 375 GB and the log is 12 GB.
There is still 169 GB of free space on the drive I am restoring to, even after accounting for Warehouse_Data.MDF and Warehouse_New_Data.MDF (each 375 GB).
It's been 4.5 hours and the restore is still running. Backups take about 3.5 hours to complete. Can I do any checks on the restore to see what point it is at? I stopped an earlier restore attempt using EM after it ran for 8 hours with no apparent progress.
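If the instance is SQL Server 2005 or later, one way to check progress (a sketch, not specific to my setup) is the percent_complete column of sys.dm_exec_requests:

SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM   sys.dm_exec_requests
WHERE  command IN ('RESTORE DATABASE', 'BACKUP DATABASE');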
Hello. I was wondering if anyone out there could tell me how they deal with NULL values in a data warehouse? I am looking to implement a warehouse in SQL 2005 and have some fields which will have NULL values, and I would like some further ideas on how to deal with them. At my last job, dealing with Oracle, we were just going to leave the fields NULL, but in SQL Server how would you best recommend cleaning the data? I greatly appreciate your help and look forward to your responses. Thank you.
I'm building a warehouse for our HMIS (healthcare management information system) using SSIS. I'm facing some problems now; could you please help me solve them?
Brief idea about my warehouse: Source: Oracle 9i; Destination: SQL Server 2005; ETL tool: SSIS.
Problems:
How do I refresh or load only the current data into the data warehouse? (Right now I am using a Truncate SQL task to delete the old/entire data for each package, which I really don't want to use in production.) For example, patient admissions data is added every day, so I want to load just the current data into my warehouse. Could you please suggest a good solution for this?
Refresh cycle timings: is there any task available in SSIS for this?
current status:
The first-time load is complete. I set up one Execute SQL control flow task to truncate the existing loaded data in SQL Server 2005, and then a data flow task to load the data from Oracle to SQL Server again.
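What I would like is something more like an incremental load. A minimal sketch of the idea, with hypothetical table/column names (a last-load watermark instead of truncating everything; in the real package the source query would run against the Oracle source):

-- Hypothetical watermark table holding the last successful load time
DECLARE @LastLoad DATETIME;
SELECT @LastLoad = LastLoadDate FROM dbo.ETL_Watermark WHERE TableName = 'PatientAdmissions';

-- Pull only rows added since the last load (source query for the data flow)
SELECT AdmissionID, PatientID, AdmissionDate, WardCode
FROM   SourceStaging.dbo.PatientAdmissions
WHERE  AdmissionDate > @LastLoad;

-- After a successful load, advance the watermark
UPDATE dbo.ETL_Watermark
SET    LastLoadDate = GETDATE()
WHERE  TableName = 'PatientAdmissions';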
Is SSIS a tool for extracting near-real-time data from staging into a data warehouse? Real time in my case can mean loading every 15 minutes, but no more than every 30 minutes. I have a data warehouse whose data refreshes once a day, and that has worked fine. The data that I extract into the warehouse comes from a staging database which is a real-time replica of multiple production databases. Once a day, I have to pause replication on staging for a couple of hours to refresh the data warehouse. That's the only way SSIS can pull the data correctly; if replication is running while SSIS pulls data, it always copies fewer rows than it is supposed to.
I cannot afford to pause replication every 15 minutes just so I can refresh the data warehouse. Does anyone else have this problem? Or is there any best practice for how to do this?
Does anybody have experience with data warehouse backup and recovery? What I want to know is how to back up and recover the databases in a data warehouse, as well as the cubes.
At my office, we've been slowly working on putting together a data warehouse.
We're a financial services company and one of the services we offer is debt collection. As far as reports go, our clients are interested in knowing how much money we collect over time. In particular, they want to know how many payments we've received 5, 10, and 15 months (and so on) after we receive a case. (Obviously, the 5-month payments are also included in the 10- and 15-month calculations.)
When I wrote this report using our transactional database, I was completely new to SQL and the ever-resourceful Patron Saint took pity on me, so you can see a good description of the details at http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=78510
Now that I'm no longer a total newbie at SQL, and having been through a relatively extensive seminar on data warehousing, I've been entrusted with researching certain aspects of data warehouse development (rest easy, though, folks - the real DWH work is not being done by the very inexperienced me, but by an actual professional :) ).
My question:
How would you model this kind of "relative time" in a data warehouse? How would you display the 5-month, 10-month, and 15-month payments in a DWH? I can't really imagine that the kinds of joins necessary to do this in a transactional database would be desirable in a data warehouse.
We have the following:
1.) FACT_Payment: A fact table showing each payment at the most detailed granularity. One attribute of this table is the payment date. Another attribute is a foreign key to the case dimension described below.
2.) DIM_Case: A dimension table showing information on each case, including the case start date.
3.) DIM_Date: A date dimension table.
(For added clarification: The FACT_Payment payment date has to be 5, 10, 15 months etc... after the DIM_Case start date.)
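To make the requirement concrete, this is roughly the derivation involved (a sketch using our table names; key and column names are approximations):

-- Months elapsed between the case start date and each payment,
-- bucketed into the 5 / 10 / 15-month windows the clients ask about
SELECT  c.CaseID,
        SUM(CASE WHEN DATEDIFF(MONTH, c.CaseStartDate, p.PaymentDate) <= 5  THEN p.Amount ELSE 0 END) AS Paid_0_5_Months,
        SUM(CASE WHEN DATEDIFF(MONTH, c.CaseStartDate, p.PaymentDate) <= 10 THEN p.Amount ELSE 0 END) AS Paid_0_10_Months,
        SUM(CASE WHEN DATEDIFF(MONTH, c.CaseStartDate, p.PaymentDate) <= 15 THEN p.Amount ELSE 0 END) AS Paid_0_15_Months
FROM    FACT_Payment AS p
JOIN    DIM_Case     AS c ON c.CaseKey = p.CaseKey
GROUP BY c.CaseID;

One alternative I've been wondering about is computing that month difference once during the ETL and storing it on FACT_Payment as a "months since case start" attribute, so the reports become simple filters rather than joins and date arithmetic.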
Any ideas, comments, experience with something like this?
I'm working on my first data warehouse and I'm not sure how I should name the columns in the database.
The first phase of the data warehouse is to store a bunch of data from one third-party source. The source contains over 100 pieces of data and the business user doesn't even know what some of the fields are, but he wants to store everything. The third party refers to each field with a somewhat cryptic short name and a longer description. The short name isn't always cryptic.
My question is: am I better off naming my columns the same as the source system's short names so that I can easily debug problems later, or should I instead try to shorten the descriptions into something meaningful? On a side note, I'm 100% positive that we'll never populate the tables in question with data from an additional source.
Regarding the code/db from the REAL project that was just released: I have no problem attaching the "REAL Sample V6" database, but the "REAL Warehouse Sample V6" database requires Enterprise edition, because the default copy uses partitioning (i.e. the PT version, although the documentation states that the multi-table (MT) version is the default). I only have the Standard edition of SQL 2005; is there a workaround?
I'm reviewing a data warehouse design schema for a client that is following Kimball's data warehousing principles. One of the first things I noticed was a table of dates with expanded columns giving such information as the year, month, month name, fiscal year, quarter, etc. for each date. They also have a surrogate key (int) for the date value, and the fact tables store the surrogate key rather than the date value itself. They were very surprised when I questioned the purpose of this table, assuring me that Kimball is very strong on the concept of having a date dimension for each fact table. I don't see the purpose of a table containing nothing but derived date formats. I think they will take a bigger performance hit from having to join through the surrogate key than they would from having to convert date values stored in the fact tables. Has anybody else ever seen this before? Does Kimball really advise this?
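For context, the table in question looks roughly like this (a sketch; their actual column list is much longer):

CREATE TABLE dbo.DIM_Date (
    DateKey       INT         NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20080115
    FullDate      DATETIME    NOT NULL,
    CalendarYear  INT         NOT NULL,
    QuarterNumber INT         NOT NULL,
    MonthNumber   INT         NOT NULL,
    MonthName     VARCHAR(20) NOT NULL,
    FiscalYear    INT         NOT NULL
);

-- The fact tables then store DateKey instead of the date itself, e.g.:
-- SELECT SUM(f.Amount) FROM FACT_Sales f
-- JOIN dbo.DIM_Date d ON d.DateKey = f.DateKey
-- WHERE d.FiscalYear = 2008 AND d.QuarterNumber = 1;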
Hi, I would like to know if anyone out there really uses SQL Server 2000 (which edition?) to hold the data for a data warehouse? How much data does it handle efficiently? TIA, Frank
Hello all, I just started a new job this week and they complain about the length of time it takes to load data into their data warehouse, which they do once a month. From what I can gather, they rebuild the indexes before the insert with an 80% fill factor, then insert the data (with the indexes enabled), then rebuild the indexes with a 100% fill factor. Most of my RDBMS experience is with a different product. We would have disabled the indexes and foreign keys, loaded the data, then re-enabled them, moving any records that violated the constraints into an appropriate audit table to be checked afterwards. Can someone share with me what the accepted "best practices" are for loading data efficiently into a data warehouse? Any thoughts would be deeply appreciated. Steve
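For what it's worth, the approach I'm used to looks roughly like this in SQL Server terms (a sketch, assuming SQL Server 2005 or later; index, table and constraint names are placeholders):

-- Disable a nonclustered index and foreign key checking before the load
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE;
ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimDate;

-- ... bulk load the data here ...

-- Re-enable afterwards; rebuilding the index also applies the fill factor
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100);
ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT FK_FactSales_DimDate;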
I have a question about staging design using SSIS. Has anyone come up with an ETL design that would read table names from a generic table and dynamically create the ETL to stage each table?
1. Have a generic table which would hold the table name, description, and whatever else is required.
2. Have a master ETL that would enumerate through that table and stage all the tables listed in it.
This way I wouldn't have to create an ETL that hardcodes the names of 300-500 tables and has the corresponding 300-500 data sources and destinations listed.
Not sure if I am making sense but I hope someone understands the attempt.
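Something like this T-SQL sketch is the general idea; a master package (or a script/Execute SQL task) would enumerate the metadata table and build the staging statements dynamically (all names here are placeholders, and the staging schema is assumed to exist):

-- Hypothetical metadata table driving the staging process
CREATE TABLE dbo.ETL_TableList (
    SourceTableName SYSNAME      NOT NULL,
    Description     VARCHAR(200) NULL
);

-- Build and run one staging statement per listed table
DECLARE @TableName SYSNAME, @Sql NVARCHAR(MAX);

DECLARE table_cursor CURSOR FOR
    SELECT SourceTableName FROM dbo.ETL_TableList;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @TableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'IF OBJECT_ID(''staging.' + @TableName + N''') IS NOT NULL '
             + N'DROP TABLE staging.' + @TableName + N'; '
             + N'SELECT * INTO staging.' + @TableName
             + N' FROM SourceDB.dbo.' + @TableName + N';';
    EXEC sp_executesql @Sql;
    FETCH NEXT FROM table_cursor INTO @TableName;
END
CLOSE table_cursor;
DEALLOCATE table_cursor;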