Bulk Inserts To Data Warehouse - Best Practices?

Jul 20, 2005

Hello all,

I just started a new job this week and they complain about the length of
time it takes to load data into their data warehouse,
which they do once a month.

From what I can gather, they rebuild the indexes before the insert with an
80% Fillfactor, then insert the data (with the
indexes enabled), then rebuild the indexes with a 100% Fillfactor.

Most of my RDBMS experience is with a different product. We would have
disabled the indexes and Foreign Keys, loaded the data, then
re-enabled them, moving any records that violated the constraints into an
appropriate audit table to be checked after.
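For what it's worth, a rough T-SQL sketch of that same approach on SQL Server (2005 syntax; on SQL 2000 you would drop and recreate the indexes instead of disabling them, and all table, index, and constraint names here are made up):

-- Disable nonclustered indexes and foreign keys on the target before the load
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE;
ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimDate;

-- Load the data (BULK INSERT, bcp, or INSERT ... SELECT from staging)
BULK INSERT dbo.FactSales FROM 'D:\loads\factsales.dat' WITH (TABLOCK);

-- Rebuild the indexes and re-validate the constraint afterwards
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD WITH (FILLFACTOR = 100);
ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT FK_FactSales_DimDate;

Rows that would fail the constraint re-check still have to be moved to an audit table first, since WITH CHECK fails if any violating rows remain.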

Can someone share with me what the accepted "best practices" are for loading
data efficiently into a data warehouse?

Any thoughts would be deeply appreciated.

Steve

View 2 Replies



Better Practices Wanted For Cascading Inserts Of Hierarchical Data From Staging Tables

Aug 28, 2007

I apologize if this has been asked, but I can't find a complete answer.

We have a situation with parent/child tables which have an identity column as their PK. We need to be able to insert into the live tables from staging tables. The data in the staging tables are related via a surrogate key.

I have found the OUTPUT clause, but that can only refer to columns of the actual table (since there is no FROM clause in an INSERT). Our current best solution to this problem involves adding bogus "staging" columns to the destination tables, and removing them after we've inserted everything from staging. This is an unattractive solution to say the least.

I'll give an example that mirrors our actual solution, and ask: does anyone have a better solution?
----------




Code Snippet
CREATE TABLE [dbo].[TABLE_A](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_A] PRIMARY KEY ([ID] ASC)
)
GO
CREATE TABLE [dbo].[TABLE_B](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [A_ID] [int] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [STAGING_COLUMN] [bigint] NULL,
    CONSTRAINT [PK_TABLE_B] PRIMARY KEY ([ID] ASC)
)
GO
ALTER TABLE [dbo].[TABLE_B]
    ADD CONSTRAINT [FK_TABLE_A_TABLE_B] FOREIGN KEY([A_ID]) REFERENCES [dbo].[TABLE_A] ([ID])
GO
CREATE TABLE [dbo].[STAGE_TABLE_A](
    [A_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL
)
GO
CREATE TABLE [dbo].[STAGE_TABLE_B](
    [B_Key] [bigint] NOT NULL,
    [DATA] [nchar](10) NOT NULL,
    [A_Key] [bigint] NOT NULL
)
GO


The STAGING_COLUMN columns are the ones that will be added before, and dropped after.






Code Snippet
DECLARE @TABLE_A_MAP TABLE (
    A_ID INT,
    A_Key BIGINT
)

INSERT INTO TABLE_A (DATA, STAGING_COLUMN)
OUTPUT INSERTED.ID, INSERTED.STAGING_COLUMN INTO @TABLE_A_MAP
SELECT DATA, A_Key FROM STAGE_TABLE_A

INSERT INTO TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM STAGE_TABLE_B STB
INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key






This seems to work, but I'd really like another alternative. Even though this is happening when nobody else is using the database, I cringe at the thought of adding and removing columns just to make this work.

Here are a few of my constraints:



The above is a simplification of the actual problem. The actual problem goes about five levels deep (hence the B_Key in STAGE_TABLE_B). At the top level, our largest customer will have 100,000 rows to insert. Each level will average 3 times as many rows as the next higher level, so we're talking about real volumes here.

This has to finish over the course of a weekend.

This has to be delivered to QA this Friday.
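For reference, one alternative that avoids the staging columns entirely, assuming an upgrade to SQL Server 2008 were possible (MERGE does not exist in 2005): the OUTPUT clause of MERGE, unlike that of INSERT, can reference source columns, so the surrogate-key mapping can be captured directly. A sketch against the tables above:

DECLARE @TABLE_A_MAP TABLE (A_ID INT, A_Key BIGINT);

-- MERGE used purely as an insert so that OUTPUT can see STAGE_TABLE_A.A_Key
MERGE dbo.TABLE_A AS T
USING dbo.STAGE_TABLE_A AS S
    ON 1 = 0                       -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (DATA) VALUES (S.DATA)
OUTPUT INSERTED.ID, S.A_Key INTO @TABLE_A_MAP (A_ID, A_Key);

INSERT INTO dbo.TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM dbo.STAGE_TABLE_B STB
INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key;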
Thanks for any help or insight.

View 3 Replies View Related

Bulk Inserts

Mar 3, 2006

I'm trying to perform a bulk insert as shown below. It's problematic b/c it's not updating the identity fields correctly and we're getting dups. I think, but I'm not sure, that On Update Cascade would solve all this, b/c we wouldn't have to concern ourselves with even touching the identity fields, b/c they would be autogenerated. Can someone shed some light?? I'm pretty confused.


CREATE PROCEDURE AddMiamirecords AS

BEGIN TRANSACTION

--USERS
INSERT INTO [Undex_Production].[dbo].[USERS]([LastName], [UserName], [EmailAddress], [Address1], [WorkPhone], [Company], [CompanyWebsite], [pword], [IsAdmin], [IsRestricted],[AdvertiserAccountID])
SELECT dbo.fn_ReplaceTags (convert (varchar (8000),Advertisername)), [AdvertiserEmail], [AdvertiserEmail],[AdvertiserAddress], [AdvertiserPhone], [AdvertiserCompany], [AdvertiserURL], [AccountNumber],'3',0, [AccountNumber]
FROM Miami
WHERE not exists (select * from users Where users.Username = miami.AdvertiserEmail)
AND validAD=1


--PROPERTY
INSERT INTO [Undex_Production].[dbo].[Property]([ListDate],[CommunityName],[TowerName],[PhaseName],[Unit], [Address1], [City], [State], [Zip],[IsActive],[AdPrintId])
SELECT [FirstInsertDate],[PropertyBuilding],[PropertyStreetAddress],PropertyCity + ' ' + PropertyState + ' ' + PropertyZipCode as PhaseName,[PropertyUnitNumber],[PropertyStreetAddress],[PropertyCity], [PropertyState], [PropertyZipCode],'0',[AdPrintId]
FROM [Undex_Production].[dbo].[miami]
WHERE miami.AdvertiserEmail IS NOT NULL
AND validAD=1


--ITEM
INSERT INTO [Undex_Production].[dbo].[ITEM] ([SellerID],[Price],[StartDate],[EndDate], [HomePageFeatured],[Classified],[IsClosed])
SELECT USERS.UserID, miami.PropertyPrice, convert(datetime,miami.FirstInsertDate), dateadd(day, 30, miami.FirstInsertDate)as EndDate, 1, convert (int,AdNumber) as Classified, 0
FROM USERS RIGHT OUTER JOIN
miami ON USERS.UserName = miami.AdvertiserEmail
WHERE validAD=1


--PROPERTYITEM
INSERT INTO [Undex_Production].[dbo].[propertyItem]( [propertyId], [ItemId])
SELECT Property.propertyId, ITEM.ItemID
FROM ITEM RIGHT OUTER JOIN
miami ON ITEM.StartDate = miami.FirstInsertDate AND ITEM.Price = miami.PropertyPrice AND ITEM.Classified = convert(int,miami.AdNumber) LEFT OUTER JOIN
Property ON miami.PropertyUnitNumber = Property.Unit AND miami.PropertyZipCode = Property.Zip AND
miami.PropertyCity = Property.City AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD=1

--CONDOFEATURES
INSERT INTO [Undex_Production].[dbo].[CondoFeatures](PropertyId,[Bedrooms], [Area], [PropertyDescription], [Bathrooms], [NumOfFloors])
SELECT Property.propertyId, [PropertyBedrooms], [PropertySquareFeet], dbo.fn_ReplaceTags (convert (varchar (8000),PropertyDescription)),
[PropertyBathrooms], [PropertyTotalFloors]
FROM miami LEFT OUTER JOIN
Property ON miami.PropertyUnitNumber = Property.Unit AND miami.PropertyZipCode = Property.Zip AND
miami.PropertyCity = Property.City AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD=1

--COMMUNITY FEATURES
INSERT INTO [Undex_Production].[dbo].[CommunityFeatures](PropertyId,[totalFloors],isComplete1)
SELECT Property.propertyId, miami.propertyTotalFloors,'0' as IsComplete
FROM miami LEFT OUTER JOIN
Property ON miami.PropertyUnitNumber = Property.Unit AND miami.PropertyZipCode = Property.Zip AND
miami.PropertyCity = Property.City AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD=1

--UNITDISCLOSURES
INSERT INTO [Undex_Production].[dbo].[UnitDisclosures]([propertyId],[monthcondoasso])
SELECT Property.propertyId, [propertyassocfee]
FROM miami LEFT OUTER JOIN
Property ON miami.PropertyUnitNumber = Property.Unit AND miami.PropertyZipCode = Property.Zip AND
miami.PropertyCity = Property.City AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD=1


--BROKERDEVELOPER
INSERT INTO [Undex_Production].[dbo].[BrokerDeveloper]([IsFSBO],[FSBOName],
[FSBOEmail],[FSBOWebsite],[IsDeveloper],[DeveloperName],[DeveloperWebsite],[IsBroker],[BrokerName],[BrokerageWebsite],
[propertyId],[brokercommission],[isComplete])SELECT
CASE AdvertiserType when 'FSBO' THEN 1 else 0 end,
CASE AdvertiserType when 'FSBO' THEN [AdvertiserName] else NULL end,
CASE AdvertiserType when 'FSBO' THEN [AdvertiserEmail] else NULL end,
CASE AdvertiserType when 'FSBO' THEN [AdvertiserURL] else NULL end,
CASE AdvertiserType when 'Developer' THEN 1 else 0 end,
CASE AdvertiserType when 'Developer' THEN [AdvertiserName] else NULL end,
CASE AdvertiserType when 'Developer' THEN [AdvertiserURL] else NULL end,
CASE AdvertiserType when 'Realtor' THEN 1 when 'Broker' THEN 1 else 0 end,
CASE AdvertiserType when 'Realtor' THEN [AdvertiserName] when 'Broker' THEN [AdvertiserName] else NULL end,
CASE AdvertiserType when 'Realtor' THEN [AdvertiserURL] when 'Broker' THEN [AdvertiserName] else NULL end,
Property.propertyId,[PropertyCommBroker],'0' as IsComplete
FROM miami LEFT OUTER JOIN
Property ON miami.PropertyUnitNumber = Property.Unit AND miami.PropertyZipCode = Property.Zip AND
miami.PropertyCity = Property.City AND miami.PropertyStreetAddress = Property.Address1
WHERE validAD=1
IF @@ERROR <> 0
BEGIN
ROLLBACK TRAN
RETURN
END
COMMIT TRANSACTION
GO
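Not a drop-in fix, but one pattern worth considering (a sketch only, SQL Server 2005+): capture the generated identity values with OUTPUT at insert time, keyed by something unique from the miami table such as AdNumber, and join on that map in the later inserts. Re-joining ITEM and Property back to miami on dates, prices, and addresses will multiply rows whenever those combinations are not unique, which is a likely source of the duplicates. For example, for the ITEM insert:

DECLARE @ItemMap TABLE (ItemID INT, AdNumber INT);

INSERT INTO [Undex_Production].[dbo].[ITEM]
    ([SellerID], [Price], [StartDate], [EndDate], [HomePageFeatured], [Classified], [IsClosed])
OUTPUT INSERTED.ItemID, INSERTED.Classified INTO @ItemMap (ItemID, AdNumber)
SELECT USERS.UserID, miami.PropertyPrice, CONVERT(datetime, miami.FirstInsertDate),
       DATEADD(day, 30, miami.FirstInsertDate), 1, CONVERT(int, miami.AdNumber), 0
FROM USERS RIGHT OUTER JOIN
     miami ON USERS.UserName = miami.AdvertiserEmail
WHERE validAD = 1;

-- later inserts can then join @ItemMap on AdNumber instead of matching on
-- StartDate/Price/Classified, which is where duplicates tend to creep in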

View 2 Replies View Related

Using Triggers With Bulk Inserts

May 4, 2005

Hello All.

I am attempting an

insert into <tablename> select * from...

This is inserting thousands of rows into a table that has a trigger on it. The trigger updates a separate table.

Now my question: will the trigger fire for every record inserted? I need it to fire only once, because the table the trigger updates is becoming locked when the insert statement is executed. I am assuming the table is becoming locked because of the number of times the trigger is being fired. Any help would be greatly appreciated.
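For what it's worth: in SQL Server an AFTER trigger fires once per INSERT statement, not once per row, so an INSERT ... SELECT that adds thousands of rows fires it a single time; the inserted pseudo-table simply contains all of those rows. It only becomes a problem if the trigger is written row-at-a-time. A minimal set-based sketch (table and column names are made up):

CREATE TRIGGER trg_Orders_Insert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- one set-based update covering every row of this insert
    UPDATE s
    SET s.OrderCount = s.OrderCount + i.cnt
    FROM dbo.OrderSummary AS s
    INNER JOIN (SELECT CustomerID, COUNT(*) AS cnt
                FROM inserted
                GROUP BY CustomerID) AS i
            ON i.CustomerID = s.CustomerID;
END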

View 4 Replies View Related

Bulk Inserts Between Two Different DBMS

Sep 21, 2004

Hi folks,
I have a table located in DB2 and I need to have a mirror image of this table on a SQL 2000 database to avoid some server downtime problems.
Right now I have a solution using ADO.NET with Windows Services.

This windows service invokes itself every morning and pulls all the records from this table in DB2 into a dataset. Then I loop through the dataset and insert every record into the SQL 2000 table. This method is working fine (it takes approximately 2 minutes to insert 5000 records). I am just wondering whether there is any way to achieve bulk insertion in this case. Considering the future growth of the table, I don't think the existing solution is either elegant or efficient.

Please let me know if I can achieve the same result using XML, BULK INSERT, or any other mechanism in ADO.NET, and please remember that we are talking about data migration between different DBMS (DB2 to SQL 2000).
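One set-based option that stays inside SQL Server, as a sketch (the linked server name DB2LINK and the table names are made up, and it assumes a DB2 OLE DB or ODBC provider is configured as a linked server on the SQL 2000 box):

-- refresh the mirror table in one statement instead of row-by-row inserts
TRUNCATE TABLE dbo.MirrorTable;

INSERT INTO dbo.MirrorTable WITH (TABLOCK)
SELECT *
FROM OPENQUERY(DB2LINK, 'SELECT * FROM MYSCHEMA.SOURCETABLE');

A DTS data pump task or bcp would be other bulk-style options available on SQL 2000.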

Thanks,
Sai

View 1 Replies View Related

SQL 2012 :: Blocking During Bulk Inserts?

Dec 9, 2014

We are noticing a lot of blocking when performing bulk inserts.

Sometimes the lead blocker(s) are blocking inserts into the same tables, but most of the time it's inserts into other tables. So, while a bulk insert into table X is running, a bulk insert into table Y is blocked by the insert into X.

- We turned off transactions from the application performing the bulk inserts - no change.

- We enabled table locking from the application - no change

- We enabled lock_on_bulk_load on the tables - this seems to have worked; however still see some blocking.

According to the lock_on_bulk_load article, "When you specify table locking for a bulk import operation, a bulk update (BU) lock is taken on the table for the duration of the bulk-import operation." Shouldn't we be seeing this BU lock? Instead, we see nothing but LCK_M_X (exclusive table locks).

Just found out that in order to get a BU lock, no indexes can exist on the table ... sure would be nice if that were in the article!

Also, we saw the exclusive locks happening before we made these changes, meaning it wasn't using the row locks that it should be using by default. I assume we're missing something here with lock escalation, but I am profiling just for lock escalation and it never happens ...

So I guess my pending question at this time is: why, when inserting into table X, do inserts into table Y get blocked?
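If it helps narrow it down, the locks the load sessions actually hold can be checked while the inserts run, e.g. (a sketch):

-- what locks do the bulk-insert sessions hold right now?
SELECT request_session_id,
       resource_type,
       request_mode,       -- BU = bulk update lock; X = exclusive table lock
       request_status,
       OBJECT_NAME(resource_associated_entity_id) AS table_name
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
ORDER BY request_session_id;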

View 3 Replies View Related

T-SQL (SS2K8) :: Bulk Inserts In Batch?

Jun 16, 2015

I have a table with 370 million rows and 50+ columns. I need to change the data type of a column from character to numeric. Here's what it contains:

40 million with numbers I want to keep, the rest I just want to set to null:

4k with alpha characters
55 million other numbers
275 million empty strings

An alter column statement fails not just on the alpha characters but on the empty strings. So I tried a couple things on a test database to get an idea of the time it would take:

An update statement to clear out the non-numeric data is too slow (~1.5 days, batched 10000 at a time). I think I probably should create a new column anyway though, so I'm going to copy the data to a new table since it would be faster than adding a new column to the original table.

An insert ... select ... takes about 12 hours; adding WITH (TABLOCK) didn't seem to have any effect, and I'm not sure how to batch it. Recovery model is simple.

A select ... into ... only takes about 1 hour, but can't be batched.

Using a 3rd party ETL tool takes about 5 hours, batched.

I wanted to batch it to minimize impact on other queries but primarily the logs. Is there any way to do a fast batched bulk transfer within SSMS?
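A plain T-SQL sketch of one way to batch it, assuming the table has a clustered integer key (called ID here; all names are made up). With the simple recovery model, committing one range at a time keeps the log small, although the insert is not minimally logged once the target has indexes:

DECLARE @BatchSize BIGINT = 1000000,
        @LastID    BIGINT = 0,
        @MaxID     BIGINT;

SELECT @MaxID = MAX(ID) FROM dbo.OldTable;

WHILE @LastID < @MaxID
BEGIN
    INSERT INTO dbo.NewTable WITH (TABLOCK) (ID, NumericCol)
    SELECT ID,
           -- keep clean digit-only values, null out alphas and empty strings
           CASE WHEN CharCol <> '' AND CharCol NOT LIKE '%[^0-9]%'
                THEN CAST(CharCol AS BIGINT) ELSE NULL END
    FROM dbo.OldTable
    WHERE ID > @LastID AND ID <= @LastID + @BatchSize;

    SET @LastID = @LastID + @BatchSize;
END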

View 9 Replies View Related

BULK INSERT Inserts 2 Rows

May 15, 2008

Greetings

Got the bulk insert thing going really nicely, thanks to your feedback! But now I notice that the BULK INSERT creates 2 identical rows, while the flat file has the column headers and then the corresponding row, just one row.
Why would it do that? Interestingly, if the data file and format file are residing in a shared folder on SQL 2005 and I run the BULK INSERT from Studio, it only inserts one row. But when I do the BULK INSERT via the user interface it creates 2 rows. What is going on here?
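In case it's relevant: BULK INSERT has no notion of a header row, so the header is normally skipped explicitly; if one code path skips it and the other doesn't, that could account for the extra row. A sketch (names assumed):

BULK INSERT dbo.MyTable
FROM '\\server\share\datafile.txt'
WITH (
    FORMATFILE = '\\server\share\datafile.fmt',
    FIRSTROW = 2    -- skip the header line in the flat file
);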

View 10 Replies View Related

Client Workstation Initiated Bulk Inserts Fail

Sep 24, 2007

I'm setting up a new 2005 server and bulk insert from a client workstation (using windows authentication) is failing with:

Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\FILESERVERNAME\sharedfolder\filename.txt" could not be opened. Operating system error code 5(Access is denied.).


Here's my BULK INSERT statement (though I'm pretty sure there's nothing wrong with it):


BULK INSERT #FIRSTROW FROM '\\FILESERVERNAME\sharedfolder\filename.txt'
WITH (
    DATAFILETYPE = 'char',
    ROWTERMINATOR = '',
    LASTROW = 1
)

If I run the same transact SQL when remote desktopped into the new server (under the same login as that used in the client workstation), it imports the file without errors.

If I use the sa client login from the client workstation (sql server authentication) the bulk insert succeeds.

My old SQL 2000 server lets me bulk insert the file without errors even from my client workstations using windows authentication.

I have followed the instructions on this site: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=928173&SiteID=1
, but still no luck and same error.

I'm pretty sure it is being caused by the increased constraints on bulk insert in 2005. Hoping someone can help. The more specific the better. If you need more info, let me know.

Oh and I've also made sure that the SQL service uses a domain logon account rather than the local system account.

Note that the file server (source file resides there) is a DIFFERENT machine than the 2005 SQL server. If I move the source file to the sql server machine the error goes away (not a preferred solution though).

Thanks!

View 9 Replies View Related

Row Level Security And Integration Services Bulk Inserts Don't Work

Jan 24, 2007

Hi, I followed Microsoft's "Implementing Row- and Cell-Level Security in Classified Databases Using SQL Server 2005".

This works fine when I insert or delete data in a normal script (Management Studio).

My project runs in an SSIS package under different users. I cannot do a bulk insert using an OLE DB destination; I get the following error:

An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabelMarking". This may be caused by a conflicting hint specified for a view.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabelMarking". This may be caused by a conflicting hint specified for a view.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Conflicting locking hints are specified for table "dbo.tblUniqueLabel". This may be caused by a conflicting hint specified for a view.".
Error: 0xC0209029 at Data Flow Task, OLE DB Destination 1 [1741]: The "input "OLE DB Destination Input" (1754)" failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (1754)" specifies failure on error. An error occurred on the specified object of the specified component.

View 6 Replies View Related

SQL Server Admin 2014 :: Change Data Capture(CDC) For Data Warehouse / Reporting?

Aug 12, 2015

I have a requirement to implement CDC for 50+ tables to feed incremental data changes into the warehouse/reporting layer rather than exporting whole tables. The largest table has more than half a billion records.

The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?

Is there any performance impact if we enable CDC on the OLTP db?

Can we make use of the CDC tables in the environment where we do the daily db refresh, so that the queries don't hit the OLTP database?

What is the best way to implement CDC to take incremental changes for reporting?
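For reference, enabling CDC itself is only a couple of calls (a sketch; database, schema, table, and role names are assumptions):

USE MyOLTPDb;
GO
-- enable CDC at the database level, then per table
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;   -- or a gating database role
GO

Changes are then read between two LSNs through the generated function (cdc.fn_cdc_get_all_changes_dbo_Orders for the table above), so the reporting extract touches only the change tables rather than the base OLTP tables.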

View 0 Replies View Related

Learning Data Warehouse/analysis Service/data Mining

May 25, 2006

hi
I am new at this MSSQL 2000 DBA thing and am trying to learn more about Analysis Services/data warehousing/data mining. Can any expert out there recommend some good books or web articles to read? Thanks

View 1 Replies View Related

For New In SSIS. To Send Data From Operational D/base To Data Warehouse

Oct 6, 2006

Hi Dear All!

I am a Crystal Reports developer and I am new to the SSIS environment. I have started to read the Professional SQL Server 2005 IS book. I am really confused by the many tasks to choose from.

I need to develop reports from a data warehouse. But first I have to send the data from the operational database (SQL Server 2000) to the warehouse (SQL Server 2005) monthly - I have a script for retrieving the data. For my package, I chose a Data Flow Task, an Execute SQL Task, and an OLE DB Destination, and it does not work.

Please help me: can I look at similar packages that perform this, to see how they are put together?
Thank you!!

View 5 Replies View Related

Data Warehouse Example?

Apr 8, 1999

hi, Can anyone list the types of error that will make @@error = 1?
I created a procedure to update a table based on a customer id. Id 7 does not exist in table A, so I am supposed to get "not valid id", but in this case nothing happens; I always get "table A updated".
thanks
Ali


begin tran

update a
set title = ' manager '
where id = 7

if (@@error <> 0)
begin
    rollback tran
    print 'not valid id'
    return
end
ELSE
begin
    commit tran
    print 'table A updated'
end
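For comparison, a sketch of the usual pattern: @@ERROR is only non-zero after an actual error (constraint violation, conversion failure, deadlock, and so on). An UPDATE that simply matches no rows is not an error, so the "not valid id" branch has to test @@ROWCOUNT instead:

DECLARE @err INT, @rows INT

BEGIN TRAN

UPDATE a
SET title = ' manager '
WHERE id = 7

-- capture both immediately; any later statement resets them
SELECT @err = @@ERROR, @rows = @@ROWCOUNT

IF @err <> 0 OR @rows = 0
BEGIN
    ROLLBACK TRAN
    PRINT 'not valid id'
    RETURN
END

COMMIT TRAN
PRINT 'table A updated'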

View 3 Replies View Related

SQL 2012 :: Data Modeling Tool For A Data Warehouse?

Oct 19, 2015

I need a recommendation on a data modeling tool that can be used with a data warehouse. My warehouse is running SQL 2012.

Here is my challenge: Most of the tables in the warehouse do not have primary keys and none of the tables have foreign keys on them. However, there are indexes and unique keys/indexes on the tables. I am looking for a tool that lets me create virtual relationships describing how the data is related, so it is visually easier for the ETL developers to write the code.

I have looked at both ER/Studio 11 and ERwin 9.6. Neither of them does it exactly the way I want it to. However, ER/Studio is pretty close.

View 0 Replies View Related

Data Warehousing :: How To Represent Metadata In A Data Warehouse

Sep 24, 2015

I am working on creating a data warehouse. I have made a database which will be the data warehouse and will consist of dimension and fact tables. I know that besides dimension and fact tables a data warehouse should also have metadata; my question is, what should the structure of that metadata be, and what information should it contain?

View 2 Replies View Related

Database For Data Warehouse

Dec 22, 2004

I am going to use Microsoft SQL Server to develop my data warehouse, but one thing confuses me. Since Analysis Services can create a star schema database, do I have to pre-create a star schema database for the ETLed data? Basically, I am wondering what the relationship is between an ETLed database and the one created through Analysis Services.

Can anyone give me an explanation from the implementation perspective?

Many thanks!

View 4 Replies View Related

Help With Restore Data Warehouse

Feb 27, 2008

Hi all,

I am trying to restore my data warehouse from a January 2008 backup under a new name, to recover a table that I accidentally deleted. It is taking a long time for the restore to finish. Here is the command I am running as sa in QA:

---

RESTORE DATABASE Warehouse_new FROM DISK = 'H:\MSSQLData\MSSQL\BACKUP\DBBackups\Warehouse\Warehouse_db_200801050600.BAK'
WITH
MOVE 'Warehouse_Data' TO 'G:\MSSQLData\MSSQL\Data\Warehouse_New_Data.MDF',
MOVE 'Warehouse_Log' TO 'H:\MSSQLData\MSSQL\Logs\Warehouse_New_Log.ldf'

----

The Warehouse_New_Data.MDF is 375 GB and the log is 12 GB.


There is still 169 GB of free space on the drive I am restoring to, after accounting for Warehouse_Data.MDF and Warehouse_New_Data.MDF (375 GB each).

It's been 4.5 hrs and the restore is still running. Backups take about 3.5 hrs to complete. Can I do any checks on the restore to see what point it is at? I stopped an earlier restore attempt using EM after it ran for 8 hours with still no progress.
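Two things that might help: RESTORE accepts STATS = 10, which prints a progress message every 10 percent, and on SQL Server 2005 or later a running restore can be watched from another session (a sketch; on SQL 2000 with QA/EM only the STATS output is available):

SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';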

Please advise.

Thanks.

View 1 Replies View Related

Has Anyone Seen A 25 TB Data Warehouse On SQL Server?

Jul 23, 2005

Has anyone seen a 25 TB data warehouse on SQL Server? I do not know of such a thing, but maybe it exists out there.

View 6 Replies View Related

Data Warehouse Nulls

Mar 15, 2006

Hello.. I was wondering if anyone out there could tell me how they deal with NULL values in a data warehouse? I am looking to implement a warehouse in SQL 2005 and have some fields which will have NULL values, and I would like some further ideas on how to deal with them. At my last job, dealing with Oracle, we were just going to leave the fields NULL, but in SQL how would you best recommend cleaning the data? I greatly appreciate your help and look forward to your responses. Thank you
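One common (Kimball-style) convention is to keep NULLs out of dimension attributes by substituting marker values during the ETL, e.g. (a sketch, names assumed):

-- replace NULL attributes with explicit markers as rows are loaded
INSERT INTO dbo.DimCustomer (CustomerName, Region)
SELECT COALESCE(CustomerName, 'Unknown'),
       COALESCE(Region, 'Not Applicable')
FROM staging.Customer;

NULL measures in fact tables are usually left as NULL so that aggregates ignore them, and a missing dimension reference is usually pointed at a dedicated "Unknown" row in the dimension rather than left NULL.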

View 2 Replies View Related

REFRESH DATA WAREHOUSE

May 21, 2007

I'm making a warehouse for our HMIS (healthcare management information system) using SSIS. I'm facing some problems now; could you please help me solve them?



Brief idea about my Warehouse:
Source: oracle 9i
Destination: Sql server 2005
ETL tool: SSIS



Problems:

How do I refresh or load the current data into the data warehouse? (Right now I'm using a Truncate SQL task to delete the old/entire data in each package; I really don't want to use that in production.) For example: patient admissions data is added every day, so I want to load just the current data into my warehouse.
Could you please suggest a good solution for this?


Refresh Cycle timings: is there any task available in SSIS?


current status:

The first-time load is complete. I set up one Execute SQL Statement control flow task to truncate the existing loaded data in SQL Server 2005, and then a data flow task to load the data from Oracle into SQL Server.
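One common alternative to truncate-and-reload is a high-water-mark (watermark) pattern, assuming the source rows carry an admission or modified date: remember the last date loaded and pull only newer rows on each run. A rough T-SQL sketch of the idea (table and column names are made up; in SSIS the watermark would normally be read into a variable and used in the source query):

DECLARE @LastLoaded DATETIME;

SELECT @LastLoaded = LastLoadDate
FROM dbo.ETL_Watermark
WHERE TableName = 'PatientAdmissions';

INSERT INTO dw.FactPatientAdmission (AdmissionID, PatientKey, AdmissionDate)
SELECT s.AdmissionID, d.PatientKey, s.AdmissionDate
FROM staging.PatientAdmissions AS s
INNER JOIN dw.DimPatient AS d ON d.PatientID = s.PatientID
WHERE s.AdmissionDate > @LastLoaded;

UPDATE dbo.ETL_Watermark
SET LastLoadDate = (SELECT MAX(AdmissionDate) FROM staging.PatientAdmissions)
WHERE TableName = 'PatientAdmissions';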

View 8 Replies View Related

Data Warehouse Forum

Jul 6, 2007

Hi, all,

Thanks for your kind attention.

Just have some enquiries about some issues of data warehouse design, and I would like to hear from any of you about any prestigious forum on data warehousing.

Thanks a lot in advance and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

View 3 Replies View Related

Realtime Data Warehouse

Apr 10, 2008

Is SSIS a tool for extracting realtime data from staging to a data warehouse? Realtime in my case can be loading every 15 minutes, but no more than 30 minutes. I have a data warehouse whose data is refreshed once a day, and that has worked fine. The data that I extract into the warehouse comes from a staging database which is a realtime replication of multiple production databases. Once a day, I have to pause replication on staging for a couple of hours to refresh the data warehouse. That's the only way SSIS can pull the data correctly; if replication is running while SSIS pulls data, it always copies fewer rows than it is supposed to.

I cannot afford to pause replication every 15 minutes just so I can refresh the data warehouse. Does anyone else have this problem? Or any best practice for how to do this?

Thanks,

-Ash

View 2 Replies View Related

Reg: Data WareHouse And SSRS

Apr 25, 2008



Hi,

Can anyone give me some good articles and webcast links for developing SSRS reports against a data warehouse?


Regards
Sithender

View 1 Replies View Related

Sample RFP For A Data Warehouse Project

Jun 26, 2003

Does anyone have a sample RFP for a Data Warehousing project?

My manager hired an outside consultant to draw up a proposal for our company. But it is getting stuck in details and we are way behind schedule.

It would help me greatly if there were an outline of a DW RFP.

View 4 Replies View Related

Data Warehouse Backup/recover

Jun 9, 2004

Hello, everyone:

Does anybody have experience with data warehouse backup and recovery? What I want to know is how to back up and recover the databases in a data warehouse, and the cubes.
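In case it's useful: the relational side is just a normal database backup/restore; the cubes are a separate step, backed up through Analysis Services itself (its own archive/backup facility) rather than with T-SQL, or simply reprocessed from the relational warehouse. A sketch of the relational side (paths and names assumed):

BACKUP DATABASE Warehouse
TO DISK = 'H:\Backups\Warehouse_full.bak'
WITH INIT, STATS = 10;

RESTORE DATABASE Warehouse
FROM DISK = 'H:\Backups\Warehouse_full.bak'
WITH REPLACE, STATS = 10;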

Thanks.

ZYT

View 5 Replies View Related

Relative Time In A Data Warehouse

Feb 28, 2008

At my office, we've been slowly working on putting together a data warehouse.

We're a financial services company and one of the services that we offer is debt collection. As far as reports go, our clients are interested in knowing how much money we collect over time. In particular, they want to know how many payments we've gotten 5, 10, and 15 months (and so on) after we receive a case. (Obviously, the 5-month payments are also included in the 10 and 15-month calculations).

When I wrote this report using our transactional database, I was completely new to SQL and the ever-resourceful Patron Saint took pity on me, so you can see a good description of the details at http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=78510

Now that I'm no longer a total newbie at SQL, and having been through a relatively extensive seminar on data warehousing, I've been entrusted with researching certain aspects of data warehouse development (rest easy, though, folks - the real DWH work is not being done by the very inexperienced me, but by an actual professional :) ).

My question:

how would you model this kind of "relative time" in a data warehouse? How would you display the 5-month, 10-month, and 15-month payments in a DWH? I can't really imagine that the kinds of joins necessary to do this in a transactional database would be desirable in a data warehouse.

We have the following:

1.) FACT_Payment: A fact table showing each payment at the most detailed granularity. One attribute of this table is the payment date. Another attribute is a foreign key to the case dimension described below.

2.) DIM_Case: A dimension table showing information on each case, including the case start date.

3.) DIM_Date: A date dimension table.

(For added clarification: The FACT_Payment payment date has to be 5, 10, 15 months etc... after the DIM_Case start date.)

Any ideas, comments, experience with something like this?
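One way this is often handled is to compute "months since case start" once, during the load (or in a view), and store it alongside the fact row, so reports only filter on a small integer instead of joining through dates. A sketch against the tables described above (key and column names are assumptions):

SELECT c.CaseID,
       DATEDIFF(MONTH, c.CaseStartDate, p.PaymentDate) AS MonthsSinceStart,
       SUM(p.PaymentAmount) AS TotalPaid
FROM dbo.FACT_Payment AS p
INNER JOIN dbo.DIM_Case AS c ON c.CaseKey = p.CaseKey
GROUP BY c.CaseID,
         DATEDIFF(MONTH, c.CaseStartDate, p.PaymentDate);

The cumulative 5/10/15-month figures then reduce to WHERE MonthsSinceStart <= 5 (or 10, or 15); if MonthsSinceStart is persisted on FACT_Payment at ETL time, no date arithmetic is needed at query time at all.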

Thank you.

View 7 Replies View Related

Column Names In Data Warehouse

Mar 12, 2008

Hi all,

I'm working on my first data warehouse and I'm not sure how I should name the columns in the database.

The first phase of the data warehouse is to store a bunch of data from one third-party source. The source contains over 100 pieces of data and the business user doesn't even know what some of the fields are, but he wants to store everything. The third party refers to each field with a somewhat cryptic short name and a longer description. The short name isn't always cryptic.

My question is: am I better off naming my columns the same as the source system's short names so that I can easily debug problems later, or should I instead try to shorten their descriptions into something meaningful? On a side note, I'm 100% positive that we'll never populate the tables in question with data from an additional source.

Thanks!

View 3 Replies View Related

Data Warehouse And A Data Base? Difference?

Jun 19, 2007

I need someone to give me a small briefing on how an enterprise data warehouse (EDW) differs from a regular database.
Currently we have an application that accesses a database with about 18 tables. We also have a data warehouse. For some reason I was thinking that it would be possible to migrate the database into the data warehouse. The reason is that, looking at the schematic design for the data warehouse, there are some data tables that could also be used by our application that uses the DB.
I guess I am confused because I am not sure if a data warehouse is used in the same way as a database?

View 2 Replies View Related

How To Create A Data Warehouse Or Data Mart

Aug 31, 2006

i want to create a data mart from an existing OLTP database, for example Northwind (or i will create an OLTP database). i don't know how to create a data mart from an OLTP database, and i want to learn that step by step. help me??? please!!
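A very small illustration of the idea against Northwind, as a sketch only (the usual step-by-step is: pick a business process, declare the grain, identify the dimensions, identify the facts):

-- a tiny sales mart from Northwind: one dimension and one fact table
SELECT CustomerID, CompanyName, City, Country
INTO dbo.DimCustomer
FROM Northwind.dbo.Customers;

SELECT o.OrderID,
       o.CustomerID,          -- joins to DimCustomer
       o.OrderDate,
       od.ProductID,
       od.Quantity,
       od.UnitPrice * od.Quantity * (1 - od.Discount) AS ExtendedAmount
INTO dbo.FactOrderLine
FROM Northwind.dbo.Orders AS o
INNER JOIN Northwind.dbo.[Order Details] AS od ON od.OrderID = o.OrderID;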

View 5 Replies View Related

Date Table In Data Warehouse? Opinions...

Jun 1, 2004

I'm reviewing a data warehouse design schema for a client that is following Kimball's data warehousing principles. One of the first things I noticed was a table of dates with expanded columns giving such information as the year, month, month name, fiscal year, quarter, etc. for each date. They also have a surrogate key (int) for the date value. The fact tables store the surrogate key rather than the date value itself.
They were very surprised when I questioned the purpose of this table, assuring me that Kimball was very strong on the concept of having a date dimension for each table.
I don't see the purpose of a table containing nothing but derived date formats. I think they will take a bigger performance hit from having to link through the surrogate key than they would suffer from having to convert date values stored in the fact tables.
Has anybody else ever seen this before? Does Kimball really advise this?
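For reference, the kind of table being described is roughly this (a sketch; exact columns vary by shop):

CREATE TABLE dbo.DimDate (
    DateKey      INT         NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20040601
    FullDate     DATETIME    NOT NULL,
    CalendarYear INT         NOT NULL,
    MonthNumber  INT         NOT NULL,
    MonthName    VARCHAR(10) NOT NULL,
    CalendarQtr  INT         NOT NULL,
    FiscalYear   INT         NOT NULL
);

Kimball does in fact recommend an explicit date dimension; the usual argument is that attributes like fiscal periods and holiday flags cannot be derived from the date value alone, while the cost of the extra integer join is small.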

View 14 Replies View Related

Large Data Warehouse Handled Bei SQL Server?

Jul 20, 2005

Hi, I would like to know if anyone out there really uses SQL Server 2000 (which edition?) to hold the data for a data warehouse? How much data does it handle efficiently? TIA, Frank

View 1 Replies View Related

Generic Staging Design Of Data Warehouse

Jan 6, 2006

I have a question about staging design using SSIS. Has anyone come up with an ETL design that reads table names from a generic table and dynamically creates the ETL to stage those tables?

1. Have a generic table which would have table name and description and whatever else that was required.

2. Have a master ETL that would enumerate through that table and stage all of the tables listed in it.

This way I wouldn't have to create an ETL which would hardcode the names of 300-500 tables and have the appropriate 300-500 data sources and targets listed.
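A rough T-SQL sketch of that enumeration idea (the metadata table and the src/stg naming convention are made up; in SSIS the same list would typically feed a Foreach Loop that sets the table name on an Execute SQL Task or data flow via expressions):

DECLARE @TableName SYSNAME, @sql NVARCHAR(MAX);

DECLARE table_cursor CURSOR FOR
    SELECT TableName FROM dbo.ETL_TableList WHERE IsActive = 1;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- stage each source table into a same-named table in the stg schema
    SET @sql = N'TRUNCATE TABLE stg.' + QUOTENAME(@TableName) + N'; '
             + N'INSERT INTO stg.' + QUOTENAME(@TableName)
             + N' SELECT * FROM src.' + QUOTENAME(@TableName) + N';';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM table_cursor INTO @TableName;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;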

Not sure if I am making sense, but I hope someone understands the attempt.

thanks

View 10 Replies View Related






