Hi List, I'm trying to set up an implementation of Project REAL. It works like this: create two system environment variables called REAL_Root_Dir and REAL_Configuration with the values given below. Click Start -> Control Panel -> System, go to the Advanced tab, click the Environment Variables button, then click New in the System variables box.
If the Project REAL files were installed at C:\Microsoft Project REAL, then the variable values will be:
The package OLE DB connections work like this. First, read the environment variable to get the location of the config file. Next, read the config file to get the connection string for the configuration database:

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property" Path="Package.Connections[SQL - Configuration].Properties[ConnectionString]" ValueType="String">
    <ConfiguredValue>Data Source=(local);Initial Catalog=DataWarehouseABC;Provider=SQLNCLI.1;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>

Finally, read the configuration database to get the connection strings for the source and destination databases.
The destination database is called "DataWarehouseABC"; the source database is called "SnapshotABC".
The source database OLE DB connection works 100%; however, with the destination OLE DB connection we get the error below. PS: both source and destination databases are on the same development machine, and both were restored from .bak files taken from another production machine.
Error 1 Error loading LoadGroup_Daily.dtsx: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Login failed for user 'xxxxxx'.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Cannot open database "DataWarehouseABC" requested by the login. The login failed.".
Any ideas on how one OLE DB connection in this package can get this error?
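Both databases were restored from .bak files taken on another machine, so one common cause of a login failing against just one of them is an orphaned database user whose SID no longer matches the server login. A minimal diagnostic sketch in T-SQL, treating the masked login name 'xxxxxx' from the error as a placeholder:

    USE DataWarehouseABC;
    GO
    -- List database users whose SIDs do not match any server login (orphaned users)
    EXEC sp_change_users_login @Action = 'Report';
    GO
    -- Re-map the database user to the matching server login (names are placeholders)
    ALTER USER [xxxxxx] WITH LOGIN = [xxxxxx];
    GO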
Hi, I need to implement/set up a data warehouse/data mart in one of the departments in my company using SQL Server 2005. Does anybody know the steps I need to follow?
It would be much appreciated if anybody could give some links that will help me implement/develop this.
I do have the basic idea, but I may face some difficulties when I start. For example, does SQL Server Reporting Services allow the end user to customize reports based on their needs? If any of you have experience in this field, please reply.
I am working on creating a data warehouse. I have made a database which will be the data warehouse and will consist of dimension and fact tables. I know that, other than dimension and fact tables, a data warehouse should also have metadata. Now my question is: what should the structure of the metadata be, and what information should it hold?
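There is no single standard structure for warehouse metadata; purely as an illustration, a minimal ETL audit/metadata table in T-SQL might look like the sketch below (all names are assumptions):

    CREATE TABLE dbo.EtlLoadAudit
    (
        LoadAuditKey   int IDENTITY(1,1) PRIMARY KEY,
        PackageName    nvarchar(128) NOT NULL,  -- which ETL package performed the load
        SourceSystem   nvarchar(128) NOT NULL,  -- where the data came from
        TargetTable    nvarchar(256) NOT NULL,  -- fact or dimension table loaded
        RowsInserted   int NOT NULL DEFAULT 0,
        RowsUpdated    int NOT NULL DEFAULT 0,
        LoadStartTime  datetime NOT NULL,
        LoadEndTime    datetime NULL,
        LoadStatus     varchar(20) NOT NULL     -- e.g. 'Running', 'Succeeded', 'Failed'
    );

Beyond load auditing, metadata commonly also covers source-to-target column mappings and business definitions of the measures, but the right scope depends on the project.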
We are starting to design a data warehouse for my company. I have done some reading on the concepts and steps involved, but what I am seriously lacking is examples. I'd like to read through some real examples of data warehouses that worked, including the full design diagrams. Can anyone direct me to some good sites for this?
id  uname  punchdate  punchtime
1   A      1/1/2007   7:00am
1   A      1/2/2007   8:00am
1   A      1/4/2007   7:30am
1   A      1/6/2007   7:40am
Let's say I want to get results where punchdate runs from 1/1/2007 to 1/8/2007. How can I get a result like this one?
1   A   1/1/2007   7:00am
1   A   1/2/2007   8:00am
1   A   1/3/2007   <null>
1   A   1/4/2007   7:30am
1   A   1/5/2007   <null>
1   A   1/6/2007   7:40am
1   A   1/7/2007   <null>
1   A   1/8/2007   <null>
That is, listing a row for every date in the range even if there is no punch date and time in the table.
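One common approach is to generate the full date range (with a calendar table or a recursive CTE) and LEFT JOIN the punch data to it. A sketch in T-SQL, assuming the table is named dbo.punches (an assumption) with the columns shown above:

    WITH dates AS
    (
        -- Generate every date from 1/1/2007 through 1/8/2007
        SELECT CAST('2007-01-01' AS datetime) AS punchdate
        UNION ALL
        SELECT DATEADD(day, 1, punchdate)
        FROM dates
        WHERE punchdate < '2007-01-08'
    ),
    users AS
    (
        SELECT DISTINCT id, uname FROM dbo.punches
    )
    SELECT u.id, u.uname, d.punchdate, p.punchtime
    FROM users u
    CROSS JOIN dates d                       -- every user gets every date
    LEFT JOIN dbo.punches p
           ON p.id = u.id
          AND p.punchdate = d.punchdate      -- NULL punchtime where no punch exists
    ORDER BY u.id, d.punchdate;

The CROSS JOIN against the distinct users keeps the id and uname populated on the rows where punchtime comes back NULL, matching the desired output.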
How do you run a stored procedure on PDW via SSIS? I've tried the Execute SQL Task and the Execute T-SQL Statement Task, but in both cases the task runs and completes almost immediately. The task shows success with no errors, but nothing happens in PDW; the PDW admin console does not even register the query. The procedures run fine manually from a SQL Server Object Explorer connection.
Is it possible to write a stored procedure to automate generating statistics on any database, and then use the output to create those statistics on that database?
I ran the Database Engine Tuning Advisor and it suggested indexes along with a lot of statistics in the dev environment. This dev environment is replicated in several other environments, with the data size varying between them. I would like to know if I can create a stored procedure which generates the statistics information for a specific database environment for the query being tuned.
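One hedged approach is to script out the user-created statistics from the tuned dev database so the same CREATE STATISTICS statements can be replayed in the other environments. A sketch using the catalog views:

    SELECT 'CREATE STATISTICS ' + QUOTENAME(s.name)
         + ' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
         + ' ('
         + STUFF((SELECT ', ' + QUOTENAME(c.name)
                  FROM sys.stats_columns sc
                  JOIN sys.columns c
                    ON c.object_id = sc.object_id AND c.column_id = sc.column_id
                  WHERE sc.object_id = s.object_id AND sc.stats_id = s.stats_id
                  ORDER BY sc.stats_column_id
                  FOR XML PATH('')), 1, 2, '')
         + ');' AS create_stats_stmt
    FROM sys.stats s
    JOIN sys.tables t ON t.object_id = s.object_id
    WHERE s.user_created = 1;   -- only statistics created explicitly, not auto-created

Running the generated statements in another environment recreates the statistics there; the optimizer then samples that environment's own data, so the differing data sizes are accounted for automatically.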
I have a large fact table spread across tens of partitions (approx. 1 TB each). I found that the business does not need many of the columns in the table, so as an optimization I decided to get rid of these unneeded columns. What is the efficient way to achieve this? Can I simply drop these columns from the table, or should I use a new table with the reduced structure?
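For what it's worth, DROP COLUMN is a metadata-only change for most data types and does not reclaim the space by itself; rebuilding the table releases it, and on a partitioned table the rebuild can be done one partition at a time. A hedged sketch with assumed names:

    -- Drop the unneeded columns (metadata-only for most data types)
    ALTER TABLE dbo.FactSales DROP COLUMN UnusedCol1, UnusedCol2;

    -- Rebuild to physically reclaim the space, one partition at a time if desired
    ALTER TABLE dbo.FactSales REBUILD PARTITION = 1;

Rebuilding partition by partition keeps each operation bounded, which matters at roughly 1 TB per partition.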
I have a fact table with an ID column as the primary key, on which a clustered index is created. I also have four dimension FKs of data type INTEGER, and finally one aggregated measure in the fact table.
Now, my question is: how can I improve the speed of querying the fact table by creating any of the below indexes?
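The candidate indexes the post refers to are not reproduced above. Purely as an illustration of a typical pattern for this shape of table, a nonclustered index on a dimension foreign key that covers the measure lets aggregation queries filtered by that dimension avoid touching the clustered index (all names are assumptions):

    CREATE NONCLUSTERED INDEX IX_Fact_DimDateKey
    ON dbo.FactTable (DimDateKey)
    INCLUDE (MeasureAmount);   -- covers SUM(MeasureAmount) grouped by this dimension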
Hello, I've created a Report Model Project that can be used by Report Builder to generate ad-hoc reports. I'm trying to create a connection string in my Report Server Project that points to the Report Model Project data source view.
All I can do is create a regular datasource, which bypasses the metadata contained in the Data Source View.
Basically I want my Report Server Project and my Report Builder reports to leverage the same metadata. Is this possible? If so how do I get the connection string?
I have a table that is growing quite a lot each day. By now, I average 300 million records over 2.5 years. Before we received our new interface, the data we received was aggregated and thus not that big. The problem is that the table is so huge that I cannot use the Slowly Changing Dimension component. I was thinking about making a temp table where I load the incremental data before loading it into the final data mart table. Based on this temporary table, I use a script to compare the temp data with the data already in the data mart. However, this requires a comparison of every record (300 million records).
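One way to avoid comparing every column of every row is to store a hash of the change-tracked columns on the target and compare hashes between the staging table and the data mart. A sketch, assuming hypothetical table names and (n)varchar tracked columns:

    -- Compute a row hash over the tracked columns in staging
    UPDATE s
    SET RowHash = HASHBYTES('SHA1', s.Col1 + '|' + s.Col2 + '|' + s.Col3)
    FROM stg.FactIncoming AS s;

    -- Only rows that are new, or whose hash differs, need to be touched
    SELECT s.*
    FROM stg.FactIncoming AS s
    LEFT JOIN dbo.FactMart AS m
           ON m.BusinessKey = s.BusinessKey
    WHERE m.BusinessKey IS NULL        -- new row
       OR m.RowHash <> s.RowHash;      -- changed row

With an index on BusinessKey on both sides, the comparison becomes one indexed join plus a byte comparison per row, rather than a column-by-column comparison of 300 million records.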
Question: Is it feasible to use a star schema dimensional model for an OLTP system that incurs few (750 per day) sales order transactions?
Background: My customer wants to replace an existing OLTP system database because it runs on Oracle and their in-house expertise is in SQL Server. The original database developers that designed the Oracle DB have apparently retired. The Oracle database has been over-normalized, to say the least. The number of sales orders being entered daily is small: about 500-750 per day. These entries are done at the five clerks' convenience, from a paper form, and are very unlikely to ever be entered in quick succession. Nothing else gets regularly entered into this database except for the occasional change to a customer, but new customers are very few and far between.
I've designed a star schema for the replacement database with the Sales Order Header and Sales Order detail table combined into a single 'fact' table, and I've introduced some duplication into dimension tables (like customer) in order to eliminate some of the joins (and confusion) that were built into the original database.
I've never tried this before. Is there any reason this would not or should not work?
ETL packages sometimes fail with a Package Execution Error. Even though we execute the ETL package again from the start, we get the same error; but after restarting the SQL service on the BI server, it works fine. Is this an issue on the developer/code side or on the server side?
I'm having issues with bulk updates in SQL Server. I'm using SAP BODS as the ETL tool and have some 20,000 updates. The target table has approx. 0.5 million records and a clustered index on the id column. I have selected the upsert option in BODS. The same setup is also done for Sybase IQ; IQ has a bulk update option which gives very good performance.
In IQ the same update load finishes in some 9 minutes, where SQL Server takes more than 2 hours for the same, which doesn't seem right. When I look into it, the update is causing the whole package to go slow. Sybase generates a query of the form "if the ID is present then update, else insert". Is there any way to make bulk updates work faster in the SQL Server environment?
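If BODS is issuing the 20,000 upserts row by row, a set-based alternative is to bulk-load the changes into a staging table and apply them in a single MERGE statement. A sketch with assumed table and column names:

    MERGE dbo.TargetTable AS t
    USING stg.Changes AS s
       ON t.id = s.id                  -- the clustered index on id makes this join cheap
    WHEN MATCHED THEN
        UPDATE SET t.col1 = s.col1,
                   t.col2 = s.col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, col1, col2)
        VALUES (s.id, s.col1, s.col2);

One statement touching 20,000 rows avoids 20,000 round trips and per-statement overhead, which is usually where the two-hour runtime goes.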
[care session quarter]
Q1-15
Q2-15
Q3-14
Q3-15
Q4-14
I am using this [care session quarter] column in the GROUP BY clause to achieve this, but with no success. If I use the date column in the SELECT clause and the GROUP BY clause, then it comes out correctly, but it groups by all dates, which is not required.
Ideally I want to show only quarter aggregates. The [Date Dimension] table has the column [care session quarter], which stores all the quarters of the years along with dates for each day, i.e. I have all columns in the [Date Dimension] table as shown below.
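The full column list is not reproduced above. As a sketch of the grouping itself, assuming a hypothetical fact table dbo.FactCareSession joined to [Date Dimension] on a DateKey column, grouping only on the quarter column (and keeping the raw date out of SELECT and GROUP BY) yields one row per quarter:

    SELECT d.[care session quarter],
           COUNT(*) AS SessionCount      -- or SUM of whatever measure is needed
    FROM dbo.FactCareSession AS f
    JOIN dbo.[Date Dimension] AS d
      ON d.DateKey = f.DateKey
    GROUP BY d.[care session quarter]
    ORDER BY d.[care session quarter];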
I am putting together an invoice for my company. I have a text box describing each section of the invoice, followed by a table listing the charges. I am using multiple tables based on what type of charge the client is receiving.
I would like to hide each section if there are no items purchased of that type. I can do this for the table using the expression "=CountRows() < 1", but I do not know how to refer to that table (call it Tablix1 for the sake of discussion) from the text box. I've tried using a ReportItems function as my basis, without success.
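One approach, assuming the table is bound to a dataset named DataSet1 (an assumption), is to point the text box's Hidden property at the same dataset-scoped row count that drives the table's visibility:

    =CountRows("DataSet1") < 1

CountRows accepts a dataset or grouping name as its scope, so the text box can evaluate the row count directly instead of trying to reference Tablix1 through ReportItems, which only exposes rendered textbox values.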
My package connects to an external data provider using an OLE DB driver. The package runs fine in debug mode. When I tried to run the same package from SQL Server Agent, it failed to acquire the connection. The OLE DB provider does not contain much information (connection string, initial catalog, blank user name and password). The same package executes successfully if I run it using dtexec in a BAT file, but if I use dtexec in a SQL Server Agent job step as an operating system command, the job fails reporting "cannot acquire the connection".
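Since the package works interactively and via dtexec but not under Agent, one possible cause is that the Agent service account lacks the permissions the interactive account has. A hedged workaround is to run the job step under a proxy mapped to the account that is known to work (all names and the secret are placeholders):

    -- Create a credential for the Windows account that can run the package
    CREATE CREDENTIAL EtlRunAs WITH IDENTITY = 'DOMAIN\etl_user', SECRET = '********';

    -- Wrap it in an Agent proxy and grant it the CmdExec subsystem (for dtexec steps)
    EXEC msdb.dbo.sp_add_proxy
         @proxy_name = 'EtlProxy', @credential_name = 'EtlRunAs';
    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
         @proxy_name = 'EtlProxy', @subsystem_name = 'CmdExec';

The job step's "Run as" option can then be set to EtlProxy so the dtexec command runs under that identity.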
I am trying to create a sample table in Azure SQL Data Warehouse, but it's giving me a syntax error: Incorrect syntax near the keyword 'CLUSTERED'.
CREATE TABLE [dbo].[FactInternetSales]
( [ProductKey] int NOT NULL
, [OrderDateKey] int NOT NULL
, [CustomerKey] int NOT NULL
, [PromotionKey] int NOT NULL
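The statement above is cut off before the part that usually triggers this error. One common cause is copying a CREATE TABLE from box-product SQL Server that declares a clustered primary key or clustered index inline; in Azure SQL Data Warehouse, the index and distribution choices are instead expressed in a WITH clause after the column list. A hedged complete example (column list shortened, distribution column an assumption):

    CREATE TABLE [dbo].[FactInternetSales]
    (
        [ProductKey]   int NOT NULL,
        [OrderDateKey] int NOT NULL,
        [CustomerKey]  int NOT NULL,
        [PromotionKey] int NOT NULL
    )
    WITH
    (
        CLUSTERED COLUMNSTORE INDEX,         -- typical storage for DW fact tables
        DISTRIBUTION = HASH([ProductKey])    -- distribution column is an assumption
    );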