Jobs Understanding And Analysis
Sep 24, 2007
Hi,
How can I understand a job which already exists as part of a database?
Muralidaran r
Hi,
The following SQL is lifted from one of the Reporting Services / AdventureWorks2000 sample reports. I'm a little slow / baffled about how the inner joins are working, specifically the INNER JOIN Locale and INNER JOIN ProductModel. I'm used to seeing INNER JOIN SomeTable ON Something = Something, but how these joins work is lost on me. Can someone give a quick overview (or point me to a reference) so I can better understand?
Thanks!
SELECT ProductSubCategory.Name AS ProdSubCat, ProductModel.Name AS ProdModel, ProductCategory.Name AS ProdCat, ProductDescription.Description,
ProductPhoto.LargePhoto, Product.Name AS ProdName, Product.ProductNumber, Product.Color, Product.Size, Product.Weight, Product.DealerPrice,
Product.Style, Product.Class, Product.ListPrice
FROM ProductSubCategory
INNER JOIN Locale
INNER JOIN ProductDescriptionXLocale ON Locale.LocaleID = ProductDescriptionXLocale.LocaleID
INNER JOIN ProductDescription ON ProductDescriptionXLocale.ProductDescriptionID = ProductDescription.ProductDescriptionID
INNER JOIN ProductModel
INNER JOIN Product ON ProductModel.ProductModelID = Product.ProductModelID
INNER JOIN ProductModelXProductDescriptionXLocale ON ProductModel.ProductModelID = ProductModelXProductDescriptionXLocale.ProductModelID
ON ProductDescriptionXLocale.LocaleID = ProductModelXProductDescriptionXLocale.LocaleID AND
ProductDescriptionXLocale.ProductDescriptionID = ProductModelXProductDescriptionXLocale.ProductDescriptionID
ON ProductSubCategory.ProductSubCategoryID = Product.ProductSubCategoryID
INNER JOIN ProductCategory ON ProductSubCategory.ProductCategoryID = ProductCategory.ProductCategoryID
LEFT OUTER JOIN ProductPhoto ON Product.ProductPhotoID = ProductPhoto.ProductPhotoID
WHERE (Locale.LocaleID = 'EN')
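In case a worked comparison helps later readers: when an INNER JOIN has no ON right after it, its ON clause simply appears later, after the joins nested inside it have received theirs, so the same query can be rewritten with one ON directly after each JOIN. A hedged sketch of that flattened form (same tables and conditions as above, shortened column list, aliases are mine, ProductCategory and the LEFT JOIN to ProductPhoto omitted):
-- Flattened equivalent of the nested joins above (sketch only)
SELECT P.Name AS ProdName, PM.Name AS ProdModel, PD.Description
FROM Product P
INNER JOIN ProductModel PM
        ON PM.ProductModelID = P.ProductModelID
INNER JOIN ProductModelXProductDescriptionXLocale PMX
        ON PMX.ProductModelID = PM.ProductModelID
INNER JOIN ProductDescriptionXLocale PDX
        ON PDX.LocaleID = PMX.LocaleID
       AND PDX.ProductDescriptionID = PMX.ProductDescriptionID
INNER JOIN Locale L
        ON L.LocaleID = PDX.LocaleID
INNER JOIN ProductDescription PD
        ON PD.ProductDescriptionID = PDX.ProductDescriptionID
INNER JOIN ProductSubCategory PSC
        ON PSC.ProductSubCategoryID = P.ProductSubCategoryID
WHERE L.LocaleID = 'EN'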
Shawn
I'm trying to get the following poll working: http://www.codeproject.com/useritems/Site_Poll_Control.asp
It looks like it's exactly what I was looking for, but it doesn't come with much in the way of instructions. I have the following function:
Public Function CastVote(ByVal PollId As Integer, ByVal Answer As Integer, ByVal MemberId As Integer) As Boolean
    Dim cmd As New SqlCommand("InsertPollResult", New SqlConnection(Connection))
    With cmd.Parameters
        .AddWithValue("@PollId", PollId)
        .AddWithValue("@PollChoice", Answer)
        .AddWithValue("@MemberId", MemberId)
    End With
    Return (SqlExecuteInsertSp(cmd) > 0)
End Function
This calls SqlExecuteInsertSp(cmd), which is:
Public Function SqlExecuteInsertSp(ByVal cmd As SqlCommand) As Integer
    Dim i As Integer
    cmd.CommandType = CommandType.StoredProcedure
    Try
        cmd.Connection.Open()
        i = cmd.ExecuteNonQuery()
    Catch ex As Exception
        ErrorMessage = "ProDBObject.SqlExecuteInsertSp(SqlCommand): " & ex.Message.ToString
    Finally
        cmd.Connection.Close()
    End Try
    Return i
End Function
I can't figure out what this is doing. The best I can figure is it determines if we have a good connection. Is this right? In my code CastVote keeps returning false, and I don't know why. The answer seems to be in the i = cmd.ExecuteNonQuery() line, but I can't figure out what that line is supposed to be doing.
Diane
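A note on the ExecuteNonQuery question above: ExecuteNonQuery runs the command and returns the number of rows affected, so CastVote returns True only when that count is greater than zero. Two common reasons it ends up False: the Catch block swallows an exception and i stays 0, or the stored procedure uses SET NOCOUNT ON, in which case ExecuteNonQuery reports -1 even when the insert succeeds. The real InsertPollResult procedure isn't shown in the post, so the body below is purely an illustrative sketch:
-- Hypothetical sketch of InsertPollResult; table and column names are assumptions.
CREATE PROCEDURE InsertPollResult
    @PollId     int,
    @PollChoice int,
    @MemberId   int
AS
BEGIN
    -- If SET NOCOUNT ON were used here, ExecuteNonQuery would return -1
    -- and the "> 0" test in CastVote would fail even though the row was inserted.
    INSERT INTO PollResults (PollId, PollChoice, MemberId)
    VALUES (@PollId, @PollChoice, @MemberId);
END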
Hi Guys,
I have written quite a big stored procedure which creates a (multi-session) global temporary table and updates it. All the statements are encapsulated in a single transaction which is explicitly declared in the code. What happens is that the server puts a lock on that table (of type Sch-M), thus preventing any type of operation on it (including a simple SELECT).
Now, I want to be able to read that table from within another transaction. Why is it that I cannot use the NOLOCK table hint in the SELECT statement?
Here is some code which reproduces my problem.
Query A:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRAN TR_DEMO;
CREATE TABLE ##TBL1(
Oidx int not null primary key identity(1,1),
Name nvarchar(30) not null,
Type char(1) not null
);
INSERT ##TBL1 (Name,Type) VALUES ('Car','M');
WAITFOR DELAY '00:00:10';
INSERT ##TBL1 (Name,Type) VALUES ('Plane','M');
WAITFOR DELAY '00:00:10';
INSERT ##TBL1 (Name,Type) VALUES('Submarine','M');
WAITFOR DELAY '00:00:10';
DELETE FROM ##TBL1;
DROP TABLE ##TBL1;
COMMIT TRAN TR_DEMO;
Query B:
SELECT TOP 1 * FROM ##TBL1 (NOLOCK) ORDER BY oidx DESC;
Launch query A and then execute query B.
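In case it helps while reproducing this: NOLOCK only skips shared row and page locks; the SELECT still has to acquire a schema-stability (Sch-S) lock, and that is incompatible with the Sch-M lock that CREATE TABLE / DROP TABLE hold until the transaction commits, which is why query B waits. A small diagnostic sketch to watch the blocking from a third session (nothing assumed beyond the standard sys.dm_tran_locks DMV; global temp tables live in tempdb):
-- Run in a third session while query A is still open
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('tempdb');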
Thanks a lot for your help.
My question is: in what situations will @@ERROR be set?
I'd like to do some logic when an error occurs in a particular statement.
The documentation says the @@ERROR value will be set if an error occurs in a statement, control will move to the next statement without exiting the procedure, and the @@ERROR value can be used in that next statement.
But when I execute the procedure below, execution is terminated (when the error occurs) without moving to the next statement. Please help me understand SQL Server's @@ERROR and the situations in which it will be set.
-----------------------------------------------------------------------
CREATE PROCEDURE VALUE_ERROR_TEST
AS
BEGIN
DECLARE @adv_error INT
DECLARE @errno INT
DECLARE @var int
SELECT @var = '101 a'
SELECT @errno = @@ERROR
print @errno
END
go
-----------------------------------------------------------------------
The procedure compiles successfully. When executed, it says:
Server: Msg 245, Level 16, State 1, Procedure VALUE_ERROR_TEST, Line 10
Syntax error converting the varchar value '101 a' to a column of data type int.
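For reference, Msg 245 is a conversion error, and conversion errors of this kind abort the batch: execution stops immediately, so the statement that reads @@ERROR is never reached. @@ERROR is only useful for errors that allow the batch to continue (a constraint violation, for example). On SQL Server 2005 and later, TRY...CATCH traps this class of error; a minimal sketch along those lines (the procedure name is just illustrative):
CREATE PROCEDURE VALUE_ERROR_TEST2
AS
BEGIN
    DECLARE @var int
    BEGIN TRY
        -- The conversion failure is caught instead of aborting the batch
        SELECT @var = '101 a'
    END TRY
    BEGIN CATCH
        PRINT 'Error ' + CAST(ERROR_NUMBER() AS varchar(10)) + ': ' + ERROR_MESSAGE()
    END CATCH
END
GO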
Jake
I am a new learner of SQL. Please can someone give me a very brief overview of what SSIS is and what it can do?
thanks
What's the best resource (book or web article) that explains the joins between database tables?
Thank you!
Please consider the following example.
CREATE TABLE test (
    an_ndx int NOT NULL primary key identity(1,1),
    a_var varchar(48) NOT NULL,
    last_edit_timestamp datetime NOT NULL default CURRENT_TIMESTAMP
);
CREATE TABLE test_history (
    an_ndx int NOT NULL,
    a_var varchar(48) NOT NULL,
    last_edit_timestamp datetime NOT NULL,
    current_edit_timestamp datetime NOT NULL default CURRENT_TIMESTAMP
);
GO
CREATE TRIGGER update_history ON test FOR UPDATE
AS
BEGIN
    INSERT INTO test_history (an_ndx, a_var, last_edit_timestamp)
    SELECT * FROM deleted;
    UPDATE inserted SET last_edit_timestamp = CURRENT_TIMESTAMP;
END;
The question is, does this do what I think it should do? What I intended: an insert into test results in default values for an_ndx and last_edit_timestamp. An update to test results in the original row(s) being copied to test_history, with a default value for current_edit_timestamp, and the value of last_edit_timestamp being updated to the current timestamp. Each record in test_history should have the valid time interval (last_edit_timestamp to current_edit_timestamp) for each value a_var has had for the "object" or "record" identified by an_ndx.
If not, what change(s) are needed to make it do what I want it to do? Will the trigger I defined above behave properly (i.e. as I intended) if more than one record needs to be updated?
Thanks
Ted
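A couple of hedged observations on the trigger above: the inserted and deleted pseudo-tables are read-only inside a trigger, so UPDATE inserted is not allowed, and SELECT * FROM deleted only matches the three-column INSERT by accident of column order. A sketch of a trigger that matches the stated intent (same table names as above, written set-based so multi-row updates work; it assumes the default setting of recursive triggers being off):
-- Illustrative replacement for the trigger above (use ALTER TRIGGER to swap it in)
CREATE TRIGGER update_history ON test FOR UPDATE
AS
BEGIN
    -- Copy the prior version of each updated row into the history table;
    -- current_edit_timestamp picks up its default (the time of this edit).
    INSERT INTO test_history (an_ndx, a_var, last_edit_timestamp)
    SELECT d.an_ndx, d.a_var, d.last_edit_timestamp
    FROM deleted d;

    -- Stamp the live rows by joining back to the base table; the inserted
    -- pseudo-table itself cannot be updated. With RECURSIVE_TRIGGERS off
    -- (the default), this nested update does not re-fire the trigger.
    UPDATE t
    SET last_edit_timestamp = CURRENT_TIMESTAMP
    FROM test t
    INNER JOIN inserted i ON i.an_ndx = t.an_ndx;
END;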
I am using SQL Server Express and Visual Studio 2005. I am new to batches and am trying to understand how they work. I am trying to write a query that creates an assembly and the functions that are contained in it. Here is my query:
USE ProductsDRM
GO
IF NOT EXISTS (SELECT 'True' FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
BEGIN
CREATE ASSEMBLY ComputedColumnFunctions
FROM 'C:\Websites\AssemblyTest\StoredFunctions\StoredFunctions\bin\StoredFunctions.dll'
GO
CREATE FUNCTION fImageFileName
(
@ProductID int,
@ImageSizeCode nvarchar(4000)
)
RETURNS nvarchar(4000)
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].ImageFileName
GO
CREATE FUNCTION fTestInt
(
@ProductID int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt
GO
CREATE FUNCTION fTestInt2
(
@TestInt int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt2
END
ELSE
BEGIN
PRINT 'The assembly named "ComputedColumnFunctions" already exists. No new assembly was created.'
END
GO
I read in a book about SQL Server 2005 about including a test for whether the object (such as an assembly in this case) exists before trying to create it. If I only include the CREATE ASSEMBLY statement and the FROM line below it, and delete everything from the next GO down through the last CREATE FUNCTION (just before the END ELSE), it works fine. If I leave it as is, I get a runtime error on the GO line just after the CREATE ASSEMBLY statement. What am I doing wrong?
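The error is most likely the GO lines: GO is not a T-SQL statement but a batch separator recognised by the client tools, so it cannot appear inside an IF ... BEGIN/END block, and CREATE FUNCTION has to be the first statement in its batch anyway. One common workaround is to keep the existence check and wrap each CREATE in dynamic SQL, so every object still effectively gets its own batch. A hedged sketch of the pattern, shown for just the assembly and one function (the DLL path is the one from the post, with the backslashes the forum appears to have stripped):
USE ProductsDRM
GO
IF NOT EXISTS (SELECT 1 FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
BEGIN
    -- Each EXEC runs its statement as its own batch, which is what GO would otherwise provide.
    EXEC ('CREATE ASSEMBLY ComputedColumnFunctions
           FROM ''C:\Websites\AssemblyTest\StoredFunctions\StoredFunctions\bin\StoredFunctions.dll''');

    EXEC ('CREATE FUNCTION fImageFileName (@ProductID int, @ImageSizeCode nvarchar(4000))
           RETURNS nvarchar(4000)
           AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].ImageFileName');
END
ELSE
BEGIN
    PRINT 'The assembly named "ComputedColumnFunctions" already exists. No new assembly was created.'
END
GO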
I have a situation where I need to distribute a db to 'subscribers' for use during network, and preferred application downtime. Currently, we do this by employing MS-Access. We update our db on the 'subscriber' by sending a text file with the new data using FTP.
When the 'subscriber' opens their local copy of the Access db application, a macro fires off to check for any new file in the ftproot folder, and if one is detected that is newer than the last update, it truncates the existing table and imports the text file using a predefined import specification format. The process works well enough as is. However, we were hoping to move beyond our dependency on MS-Access for a variety of reasons, so we are looking at developing Windows Forms apps using the new ASP.NET v2.0 technology and SQL 2005 and SQL Express.
I need some clarification on how replication works. Does it allow a 'snapshot' db to be created on a subscriber that can be used when the network is down? If not do we have alternatives? For example, I guess we could export/import to the subscriber in some manner.
Hope I've made the case clear enough for some responses. Thanks in advance for your thoughts and help. :)
Please forgive the basic-ness of my question, but I have only been using DTS and SSIS for 2 weeks now.
What I'd like to know is, is my understanding of SSIS package development correct.
In SQL Server 2005, I use BIDS to develop my SSIS packages. During development, I simply store my project on the local C drive.
However, once I am finished writing and testing my projects, I move them to SQL Server using SAVE COPY AS... (for now, I'm only using one server for both development and production).
Then, anytime I need to change or modify the package, I change the local file system copy of the package, then do another COPY AS... to SQL Server.
That is, once the package is on SQL Server, if you need to change it, you can only do so through BIDS, correct?
I just want to know if I'm on the right track here.
What do other people do?
Thanks much
I just learned some basic FTP commands for the first time and was able to transfer over some files from one machine to another. Now I'm trying to to this using the FTP Task.
First of all, I understand that the destination machine has to be set up with something called an FTP Site. And I noticed that when I do a "cd" command in my FTP session, I have visibility only to those directories that are set up as an FTP Site.
Ok, onto the FTP Task. First I created and test connected my FTP Connection Manager. Next, I went into the FTP Task Editor, and I'm in the File Transfer page. Looking at the properties, I guess, this is where I tell what file to move where.
So, in IsRemotePathVariable, I selected "False". When I did this, I was expecting to be able to navigate the various FTP Sites I have set up on my destination computer. However, when I click on RemotePath, I only see "/" (Root). Was I wrong to expect to see the FTP Sites here? What am I doing wrong?
Hi, I'm fairly new to coding with ADO in C++. My previous experience with it was in VB 6.0 and VB.NET. I'm having difficulty deciphering the HRESULTs that come back from it when an error occurs. Up until this point I was using,
DXGetErrorString9(), and DXGetErrorDescription9() to interpret any HRESULT that got sent back to my program, but now that I'm using ADO they always return "Unknown".
Is there a Microsoft provided group of functions that help programmers to better understand ADO's HRESULTs?
Thanks,
Kyle
Can someone help me to understand a stored procedure I am learning about? At line 12 below, the code calls a function named "ttg_sfGroupsByPartyId". I ran the function manually and it returns several rows/records from the query. So I am wondering: does a call to the function return a temporary table? And if so, is the temporary table named PartyId? If so, the logic seems strange to me, because earlier they are using the name PartyId as a variable name that is passed in.
1 ALTER PROCEDURE [dbo].[GetPortalSettings]
2 (
3 @PartyId uniqueidentifier,
4
5 AS
6 SET NOCOUNT ON
7 CREATE TABLE #Groups
8 (PartyId uniqueidentifier)
9
10 /* Cache list of groups user belongs in */
11 INSERT INTO #Groups (PartyId)
12 SELECT PartyId FROM ttg_sfGroupsByPartyId(@PartyId)
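For context on the question: ttg_sfGroupsByPartyId is being used as a table-valued function, so the call on line 12 returns a table-shaped result set rather than a temporary table, and PartyId there is a column of that result, which is a different thing from the @PartyId parameter. A hedged illustration of the pattern with a made-up inline function (names and the GroupMembership table are invented, not the real ttg_sfGroupsByPartyId definition):
-- Purely illustrative inline table-valued function
CREATE FUNCTION dbo.GroupsByPartyId (@PartyId uniqueidentifier)
RETURNS TABLE
AS
RETURN
(
    SELECT gm.PartyId                -- a column named PartyId in the result
    FROM dbo.GroupMembership gm      -- hypothetical membership table
    WHERE gm.MemberPartyId = @PartyId
);
GO
-- The function is queried like a table; no temporary table is created by the call.
INSERT INTO #Groups (PartyId)
SELECT PartyId FROM dbo.GroupsByPartyId(@PartyId);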
Hello List,
I have a couple of questions about SQL Server 2000 and would greatly appreciate it if somebody out there could answer them.
Questions:
I have a SQL 7.0 server dump file and I was wondering if I can directly load this file into SQL Server 2000. If yes, would this count as an upgrade of the SQL 7.0 database to SQL Server 2000, or is this just backward compatibility support in 2000?
The main objective of this is that we are planning to upgrade one of our production SQL 7.0 servers, and I was wondering if we can just take the SQL 7.0 dump file and load it into the SQL 2000 server. If yes, what are the downsides of this? If no, should we upgrade the SQL 7.0 server itself, along with all the databases sitting on it? Please shed some light here.
Many thanks.
Hi guys! Can anybody please explain what lines 4, 6, 8, 13 and 26 do? Thanks in advance!
1 USE [PawnShoppeDB]
2 GO
3 /****** Object: StoredProcedure [dbo].[sp_Customer_AddCustomer] Script Date: 06/14/2007 16:50:07 ******/
4 SET ANSI_NULLS ON
5 GO
6 SET QUOTED_IDENTIFIER ON
7 GO
8 ALTER Procedure [dbo].[sp_Customer_AddCustomer]
9 @Customer_FirstName varchar(50),
10 @Customer_LastName varchar(50),
11 @Customer_MiddleInitial varchar(5),
12 @Customer_Address varchar(100),
13 @Identity int output
14 AS
15 Begin
16 Declare @DateCreated DateTime
17 Set @DateCreated = getDate()
18
19 Insert into RCPS_Customer
20 Values (@Customer_FirstName,
21 @Customer_MiddleInitial,
22 @Customer_LastName,
23 @Customer_Address,
24 '0',@DateCreated)
25
26 Set @identity = Scope_identity()
27 End
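A small hedged sketch of how this procedure would typically be called may also help with lines 13 and 26: @Identity on line 13 is an OUTPUT parameter, and SCOPE_IDENTITY() on line 26 copies the identity value generated by the INSERT into it so the caller can read it back. The parameter values below are made up:
DECLARE @NewCustomerId int

EXEC [dbo].[sp_Customer_AddCustomer]
    @Customer_FirstName     = 'John',
    @Customer_LastName      = 'Doe',
    @Customer_MiddleInitial = 'Q',
    @Customer_Address       = '123 Main St',
    @Identity               = @NewCustomerId OUTPUT  -- receives SCOPE_IDENTITY()

SELECT @NewCustomerId AS NewCustomerId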
There seem to be a slew of options.
Finally figuring out that I can't use Express, I need to temporarily have my own SQL Server 2000 db server running. Hopefully just a couple of months, but who knows how long it'll take to convert over to MySQL.
Anyway, I need this db server to be on one machine. One web server will be accessing it (for one db).
What type of license do I need? Is it really the massively-priced per processor type? Or can I use another?
Yesterday I was working to try to resolve an issue with not using a cursor for a unique type of update.
TG was able to resolve my issue using this query
Declare @Stage Table(StartDate datetime,BenefitInterestID INT PRIMARY KEY Clustered, Amount MONEY, InterestAmount MONEY, Interest DECIMAL(10, 4), ai DECIMAL(10, 4))
Insert Into @Stage(StartDate ,BenefitInterestID, Amount, InterestAmount, Interest , ai )
Select
convert(datetime,'2006-12-01 00:00:00.000',101) as StartDate,1 as BenefitInterestID,1701.00 as amount,79.605 as InterestAmount ,0.1000 as Interest,0.0000 as ai
Union all
select '2007-12-01 00:00:00.000',2,172.80,7.92,0.0500,0
Union all
select '2008-12-01 00:00:00.000',4,0.00,0.00,0.0700,0
Union all
select '2009-12-01 00:00:00.000',5,0.00,0.00,0.0900,0
Union all
select '2010-12-01 00:00:00.000',6,0.00,0.00,0.0200,0
declare @ai decimal(10,4)
,@aiTot decimal(10,4)
,@a money
,@ia money
update s set
@ai = ai = s.interest * (isNull(@aiTot,0) + @a + @ia)
,@a = isNull(@a, 0) + s.amount
,@ia = isNull(@ia, 0) + s.interestamount
,@aiTot = isNull(@aiTot, 0) + @ai
from @stage s
TG pointed out that although this query should always work, there is an underlying issue with this type of update method, because it depends on the order in which SQL Server updates the rows (currently the clustered index is the only way I know of to ensure the order stays intact).
Although it is unlikely, TG pointed out that due to this inherent flaw in the methodology, if future releases of SQL Server were modified to improve update performance, this method could possibly stop working, and there is no 100% guarantee that the index would be used.
My questions are
1. Is there any way to ensure that the style of update I am using uses the clustered index (i.e. "from @stage s WITH (INDEX = ...)")? I do not know the ID of the index since it is a table variable.
2. Is there a way to force the order of the update to be the order of the index? Just because there is an index doesn't ensure that a future release of SQL Server will reference the index from top to bottom. (I know this is unlikely, but I would still like to know if there is a way to ensure the update always occurs from top to bottom of the index.)
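Not a guarantee, just the commonly cited safeguards for this "quirky update" pattern: move the data into a temp table so the clustered index has a known name and can be hinted, and add TABLOCKX plus MAXDOP 1 to keep the update serial. A hedged sketch under those assumptions (the variable initialisation is added here so the first row's arithmetic does not start from NULL; adjust to match the original logic as needed):
-- Sketch only: usual safeguards, not a documented guarantee of ordering
CREATE TABLE #Stage
(
    StartDate         datetime,
    BenefitInterestID int NOT NULL,
    Amount            money,
    InterestAmount    money,
    Interest          decimal(10, 4),
    ai                decimal(10, 4)
);
CREATE UNIQUE CLUSTERED INDEX IX_Stage ON #Stage (BenefitInterestID);

INSERT INTO #Stage (StartDate, BenefitInterestID, Amount, InterestAmount, Interest, ai)
SELECT StartDate, BenefitInterestID, Amount, InterestAmount, Interest, ai
FROM @Stage;

DECLARE @ai decimal(10,4), @aiTot decimal(10,4), @a money, @ia money;
SELECT @aiTot = 0, @a = 0, @ia = 0;

UPDATE s SET
    @ai = ai = s.Interest * (ISNULL(@aiTot, 0) + @a + @ia)
   ,@a  = ISNULL(@a, 0) + s.Amount
   ,@ia = ISNULL(@ia, 0) + s.InterestAmount
   ,@aiTot = ISNULL(@aiTot, 0) + @ai
FROM #Stage s WITH (TABLOCKX, INDEX(IX_Stage))  -- name the clustered index explicitly
OPTION (MAXDOP 1);                              -- keep the update single-threaded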
Hi all, I need someone to break this down for me: what it means and what each part does. Thank you in advance; I would be lost without forums. :-)
$query = "INSERT INTO address_dtl
(srn_id,
box_id,
box_privacy,
box_start,
box_who,
box_what,
box_timestamp,
box_attribute)
VALUES
('$srn_id',
'$wwc_id',
'NNNN',
'" . date('Ymd') . "',
'$who',
'$what','" .
date("Ymd") . "',
'$attribute')";
$success = $conn->Execute($query);
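To make the pieces a bit more concrete: the PHP above just builds one INSERT statement by string concatenation (the date('Ymd') calls splice today's date into it) and then hands that string to the Execute() call. With made-up values standing in for $srn_id, $wwc_id, $who, $what and $attribute, the SQL that actually reaches the server would look roughly like this:
-- Illustrative only; the literal values below are placeholders for the PHP variables,
-- and 20071224 stands in for the value of date('Ymd')
INSERT INTO address_dtl
    (srn_id, box_id, box_privacy, box_start, box_who,
     box_what, box_timestamp, box_attribute)
VALUES
    ('123', '456', 'NNNN', '20071224', 'jsmith',
     'new address', '20071224', 'HOME');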
site: http://www.deweydesigns.com
blog: http://www.deweydesigns.com/blog
myspace:
http://www.myspace.com/esotericstigma
case @namesflag when 3 then 1
else case @namesflag when 2 then names.new else names.old
end
end=1
I understand what it is doing for the second part when @namesflag = 2, but what does end=1 mean?
Can somebody please explain it to me.
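Reading it as a predicate may help: the = 1 compares the value of the whole CASE expression to 1, so the construct is a true/false condition. Ignoring NULL handling, it can be spelled out as the equivalent condition below (a sketch, shown inside a WHERE clause for illustration and assuming names.new and names.old are flag-style columns; the original may sit in a JOIN condition instead):
SELECT *
FROM names
WHERE (@namesflag = 3)                                -- outer CASE returns 1, and 1 = 1
   OR (@namesflag = 2 AND names.new = 1)              -- inner CASE returns names.new
   OR (@namesflag NOT IN (2, 3) AND names.old = 1);   -- inner CASE returns names.old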
Thanks
A few questions:
1) We have numerous fact tables with surrogate keys which reference just one dimensional surrogate key. How does this work?
2) Are the ‘facts’ feeding data TO the ‘dimensions’ (back end warehousing)? Or are the ‘Dimensions’ feeding facts to the ‘facts’ tables for lookups!?
NB: I'm very inexperienced at database design.
I'm really also using this thread to get contacts for future, harder questions!
Thanks kindly
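On question 2 above, a tiny hedged sketch of the usual star-schema direction may help: each fact row carries the surrogate key of a dimension row as a foreign key, so the dimension is loaded first and the fact table references it; at query time the facts join to the dimensions for lookups. Table and column names below are invented for illustration:
-- Dimension: one row per customer, identified by a surrogate key
CREATE TABLE DimCustomer
(
    CustomerKey  int IDENTITY(1,1) PRIMARY KEY,   -- surrogate key
    CustomerName nvarchar(100) NOT NULL
);

-- Fact: each sale references the dimension via its surrogate key
CREATE TABLE FactSales
(
    SalesKey    int IDENTITY(1,1) PRIMARY KEY,
    CustomerKey int NOT NULL REFERENCES DimCustomer (CustomerKey),
    SalesAmount money NOT NULL
);

-- Lookup at query time: facts join to dimensions, not the other way around
SELECT d.CustomerName, SUM(f.SalesAmount) AS TotalSales
FROM FactSales f
JOIN DimCustomer d ON d.CustomerKey = f.CustomerKey
GROUP BY d.CustomerName;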
I'm implementing some database formatting and I need the values within a column to stay within certain limits. Let's say, for example, they shouldn't be < 0 or > 10000, but in the case where I'm inserting values bigger than 10000, I would like MSSQL to "clip" the value to the upper limit (10000 in this case), and the same with the lower limit (zero in this case). Is that possible? Or will SQL just respond with an error when the values go beyond those limits and abort the transaction? Can someone put some light on this please?
Nacho
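Out of the box, a CHECK constraint only rejects out-of-range values (the statement fails), so clipping has to be done explicitly. One way to sketch it is an INSTEAD OF INSERT trigger that clamps the value before it lands in the table; the table and column names below are illustrative, not from the original post:
-- Illustrative table with a value that must stay in [0, 10000]
CREATE TABLE dbo.Readings
(
    ReadingID int IDENTITY(1,1) PRIMARY KEY,
    Value     int NOT NULL
);
GO
CREATE TRIGGER trg_Readings_Clip ON dbo.Readings
INSTEAD OF INSERT
AS
BEGIN
    -- Clamp anything outside the limits instead of raising an error
    INSERT INTO dbo.Readings (Value)
    SELECT CASE
               WHEN i.Value < 0     THEN 0
               WHEN i.Value > 10000 THEN 10000
               ELSE i.Value
           END
    FROM inserted i;
END;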
#1 Anyone seen any good blogs on building solutions and what occurs? BOL is pretty thin in this area. Does it just spin through all the projects in the solution, validate them, copy the packages to their respective bin folder, and add them to the manifest?
When your report datasource is a cube, MDX is generated when you use the design view. In the MDX editor the generated MDX can be viewed. Using parameters, I always get a WHERE clause with code like the following:
IIF( STRTOSET(@OrgLevelname, CONSTRAINED).Count = 1, STRTOSET(@OrgLevelname, CONSTRAINED), [Organisation].[Level 2 name].currentmember )
I'd like to understand what is generated. Is there something I can read on the generated WHERE clause (I do understand the generated SELECT and FROM clauses)? Or can someone shed some light on it?
Why does the MDX need to branch on 'Count = 1'? In what way does the result slice my data when Count = 1 or when Count <> 1?
Thanks,
Henk
I went thru the online tutorial, but I was not able to finish the "Deploying Packages Tutorial" because, for some reason, the sample packages used in the tutorial came up with errors when I added them to my project.
EDIT: See the following post for the trouble I ran into using the sample tutorial. http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3145471&SiteID=1&mode=1
But I read thru the rest and I was able to get the gist of it and I did a simple example on my own.
And here are some questions.
In my example, I have one package that does a simple load from an XL spreadsheet into a database table. The package also writes some kickout rows (bad data) to a flat file. To keep the example simple, I configure just one value in the .dtsConfig file, and that is the server name.
Question:
(1) According to the tutorial, after I've created the deployment bundle I'm supposed to copy this over to the destination computer and run it there. Can't I just run it from my computer and choose where to install the package in the Package Installation Wizard? That's what I did and it worked, and I am able to see the package on the target SQL Server. So I'm wondering why I need to copy the deployment bundle to the target and run it there.
(2) In the Package Installation Wizard, there is a step called "Select Installation Folder", and the description on this page says, "The installer will install SSIS package dependencies in the following folder". Without knowing what this was going to do, I just picked a folder and finished out the wizard. When I check that folder after the wizard completed, I see that the .dtsConfig file got put there (on my local machine). What implication does this have? (I don't know how to schedule a job in SQL Server Agent, so I haven't actually tried running the deployed package.) I'm going to guess the package is not going to run because the config file ended up on my machine.
(3) In my example, I could've also configured the connection string for the XL file and the flat file directory for the kickout data. But since the file names are a part of the connection string and the file names are likely to remain the same, but the directory locations may change, should I handle this with system variables? (Where the variable will contain the directory path only)
Hello,
I have some basic understanding of SSIS and now I am looking to take the next step, especially since I created all my packages with EncryptWithUserKey (I didn't know better). My assumption is that I place an identically named Environment Variable into each of my packages. Then if I move my configuration files, I just need to reassign the value of that variable. Do I have that correct?
For example, I have an XML configuration file sitting in C:\PackConfigs. It is called MyConfig.dtsConfig. How do I create an Environment Variable to point to this configuration file? I assume I create the variable and assign it the value of the path to the file (C:\PackConfigs\MyConfig.dtsConfig). I sort of get that.
But what if I have multiple XML configuration files in my package? And exactly how do I change the variable's value if I move my packages?
I've read some great posts in this forum and at Jamie Thomson's blog http://blogs.conchango.com/jamiethomson/archive/2005/11/02/SSIS_3A00_-Indirect-configurations-ROCK_2100_.aspx
but I haven't been able to get to the nitty-gritty that I think I need.
Thank you for the help.
-Gumbatman
I am having trouble understanding how to correctly use stored procs. Here is what I think the process is. Correct me where I am wrong.
I am developing a read-only program with VB 2005 as a front end and SQL Server back end. The user will enter an ID# for search criteria which will be passed as a parameter to a stored proc. I want to create a stored proc that alters a view by changing the ID# the view filters on. A dataset will be created in the front end from that view.
So in SSMS, I create a new proc like this:
CREATE PROC {blah, blah}
@IDnum
AS
BEGIN
ALTER VIEW {blah, blah}
AS
SELECT {blah, blah}
FROM {blah}
WHERE blah.ID = @IDnum
END
I would run the code to create the proc, then modify it to an ALTER PROC. I've tried this without success. What am I missing? Yes, I am new to this.
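For what it's worth on the approach itself: ALTER VIEW is DDL, so it can't sit directly inside a procedure body like that (it would have to be wrapped in dynamic SQL), and altering a shared view per request doesn't play well with concurrent users anyway. The usual pattern is to skip the view and let the procedure's own SELECT take the parameter. A hedged sketch with placeholder object and column names:
-- Placeholder names; substitute the real table and columns
CREATE PROC dbo.GetRecordsById
    @IDnum int
AS
BEGIN
    SET NOCOUNT ON;

    -- The parameter filters the query directly; no view needs to be altered
    SELECT t.Col1, t.Col2, t.Col3
    FROM dbo.SomeTable t
    WHERE t.ID = @IDnum;
END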
Thanks for the help
I have tried using the linked report with CreateLinkedReport(...) but I do not understand the output.
Initially I have a report with a parameter that the user needs to specify before generating. I use something like:
CreateLinkedReport("test4", "/Report Server",
"/Report Server/xxx/yyy", props)
where "/Report Server/xxx/yyy", is the linked path to that file.
What happens is that this report 'template' has been changed to a 'linked' report with specified parameters, and the new linked report 'test4' is created under /Report Server. And this test4 is like a copy of the original "/Report Server/xxx/yyy".
What I want is the opposite, where the original template stays the same and the newly created test4 is the report that stores the specified values.
Regards
Alu
I have an SSAS 2012 tabular instance with SP2, and there is a database on the instance with a read role that has Everyone assigned permissions. When configuring the Power BI Analysis Services connector, at the point where you enter the Friendly Name, Description and Friendly Error Message, clicking Next gives me the error "The remote server returned an error (403)." I've tested connecting to the database from Excel on a desktop and it connects fine. I don't use an "onmicrosoft" account, so I don't have that problem to deal with.
We use Power BI Pro with our Office 365. As far as I can tell that part is working OK, as I pass that stage of the configuration with a message saying it connected to Power BI. The connector is installed on the same server as Tabular services; it's a Win2012 Standard server. The tabular instance is running under a domain account that is the admin account for the instance (this is a dev environment), and that account is what I've used in the connector configuration. It's also a local admin account. There is no gateway installed on the server.
I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS. In order to improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube because we are unable to browse or use it. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.
I would expect that once development is done, and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already deployed cube.
How do I correctly choose the key column in a "Mining Structure" for Microsoft Analysis Services?
I have table:
"Incoming goods"
Create table Income (
ID int not null identity(1, 1),
[Date] datetime not null,
GoodID int not null,
PriceDeliver decimal(18, 2) not null,
PriceSale decimal(18, 2) not null,
CONSTRAINT PK_Income PRIMARY KEY CLUSTERED (ID),
CONSTRAINT FK_IncomeGood foreign key (GoodID) references dbo.Goods ( ID )
)
I'm trying to build a relationship (regression) between “Price Sale” and “Price Deliver” for each Good. But I do not know which column is better to choose as the “key column”: ID or GoodID?
Is it possible to create an Analysis Services project (*.dwproj) in Visual Studio 2012 Data Tools?
I am using
SSAS: version 2008
Excel: version 2013
I am connecting to SSAS cube from Excel and I have date dimension with 4 fields (I have others but I don't use it for this case). I created 4 fields in order to test all possible scenarios that I could think of:
DateKey:
- Type: System.Integer
- Value: yyyyMMdd
Date:
- Type: System.DateTime
DateStr0:
- Type: System.String
- Value: dd/MM/yyyy (note: I am not using US culture)
- Example: 01/11/2015
DateStr1:
- Type: System.String
- Value: %d/%M/yyyy (note: I am not using US culture)
- Example: 1/11/2015
Filtering on date is working fine:
Initially, in Excel, filtering on date was not working. But after changing the dimension type to Time and setting the DataType to Date, as mentioned in [URL], the filter works fine, as you can see in the picture.
Grouping on date is not working:
I have a hierarchy in my Date dimension and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel, and he wants to use that. The pre-built Group and Ungroup functions of Excel seem to be available, as you can see in the following picture:
But when the user clicks 'Group', Excel groups it as if it were a string, and that is the problem. The user wants to group using the pre-built grouping function available in the Pivot Table. I also found out that Power Pivot tables do not support this Excel grouping functionality; if I understood well, this pre-built grouping needs to do the calculation at run time, which is not viable with millions of rows, so Power Pivot tables do not support it and the dimension hierarchy has to be used for grouping instead. But I am not using a Power Pivot table, I am using a simple Pivot Table, so I would expect the grouping functionality to work fine.
Then I tried a simple test: I created a simple data source in Excel itself and used it as the source of my Pivot Table. There, grouping works fine. The only difference that I can see (when double-clicking the measure value in Excel) is:
1. For the date values of my simple test, Excel considers them 'Date'.
2. For the date values of my data coming from the cube, Excel considers them 'General'.
2.1. But the value here is the same as it was in the simple test.
2.2. 'Date Filter' works just fine.
2.3. If I just select this cell and unselect it, then Excel changes the type to 'Date' for that cell.
2.4. I created 4 different types of fields in my Date dimension, thinking that the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.
2.5. This value (the one seen when double-clicking on the measure) comes from the 'NameColumn' of the attribute, and the DataType defined there is WChar. I thought that might be the reason for the issue and changed it to 'Date', but SSAS does not allow the change, giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.
So I don't know what puzzle piece I am missing.
1. The date filter works; grouping does not work.
2. Excel considers the value a 'General' string.
3. SSAS does not allow changing the 'NameColumn' to Date.