How do I choose the right key column in a "Mining Structure" for Microsoft Analysis Services?
I have this table:
"Incoming goods"
CREATE TABLE Income (
    ID int NOT NULL IDENTITY(1, 1),
    [Date] datetime NOT NULL,
    GoodID int NOT NULL,
    PriceDeliver decimal(18, 2) NOT NULL,
    PriceSale decimal(18, 2) NOT NULL,
    CONSTRAINT PK_Income PRIMARY KEY CLUSTERED (ID),
    CONSTRAINT FK_IncomeGood FOREIGN KEY (GoodID) REFERENCES dbo.Goods (ID)
)
I'm trying to build a relationship (regression) between PriceSale and PriceDeliver, but I don't know which column is better to choose as the key column: ID or GoodID?
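To make the question concrete, this is roughly the mining model I have in mind if ID is the case key. It is just a sketch in DMX; the model name and the algorithm choice are my own assumptions:

CREATE MINING MODEL [IncomePriceRegression]
(
    [ID]           LONG   KEY,                 // one case per Income row
    [PriceDeliver] DOUBLE CONTINUOUS,          // input
    [PriceSale]    DOUBLE CONTINUOUS PREDICT   // value to predict
)
USING Microsoft_Linear_Regression

If GoodID were the key instead, each good would be a single case, which is the part I'm unsure about.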
I'm trying to return the EOY (12/31/14) value for measure Average Balance when I pass in a Date current member. This current member may also be at a Month, Quarter or Year level of the Post Date dimension. I've tried:
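One shape I've been experimenting with uses ClosingPeriod against the year ancestor of the current member; the hierarchy and level names below are placeholders for my real ones:

WITH MEMBER [Measures].[EOY Average Balance] AS
    (
        ClosingPeriod(
            [Post Date].[Calendar].[Date],   -- leaf (date) level, placeholder name
            Ancestor(
                [Post Date].[Calendar].CurrentMember,
                [Post Date].[Calendar].[Year]
            )
        ),
        [Measures].[Average Balance]
    )

The idea is that whatever level the current member is at, Ancestor walks up to its year and ClosingPeriod picks the last date in that year (12/31 for 2014).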
I am new to MDX. Our PeriodsToDate function does not return any value. We have set the Type property of our Date dimension to Time. Actually, CURRENTMEMBER does not return a valid value, so our PeriodsToDate call fails.
WITH MEMBER [Measures].[YTD Actual] AS
    Aggregate (
        PeriodsToDate (
            [DimDate].[CalendarHierarchyDateLevel].[Calendar Year],
            [DimDate].[CalendarHierarchyDateLevel].CURRENTMEMBER
        )
    )
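For context, the member above is then used in a query roughly like this; the cube name and the level on rows are placeholders for our real ones:

SELECT
    { [Measures].[YTD Actual] } ON COLUMNS,
    [DimDate].[CalendarHierarchyDateLevel].[Month].MEMBERS ON ROWS
FROM [OurCube]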
I have managed to use the BI Wizard for time intelligence and added YTD and MTD successfully. I notice the values returned are empty, and I think this is due to the fact that all the test data I use is many years old. What's the simplest way to resolve this issue so that I can see that these MDX functions return correct values? Changing the system date on this company laptop is not an option.
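To sanity-check the wizard's calculations against the old test data, I'm thinking of running an explicit query along these lines, where the cube, measure, hierarchy, and calculation member names are placeholders for whatever the wizard generated in my project:

SELECT
    { [Measures].[Sales Amount] } *
    { [Date].[Calendar Date Calculations].[Current Date],
      [Date].[Calendar Date Calculations].[Year to Date] } ON COLUMNS,
    [Date].[Calendar].[Month].MEMBERS ON ROWS
FROM [MyCube]

That way the date context comes from the explicit members on rows rather than from anything tied to today's date.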
I am trying to create a whole-number DAX calculated column that is derived from a date column. Basically it takes the date from the source data column and outputs it as an integer in YYYYMMDD format, so 01/OCT/2015 would become 20151001. I've been fiddling with DAX, but my problem is that I keep losing the leading zeroes for months and days, so 01/March/2015 becomes 201531, which is not what I want (I need 20150301 in this case).
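One approach I'm considering builds the number arithmetically instead of by string concatenation, which should keep the month and day in fixed positions; 'MyTable'[DateColumn] is a placeholder for my real table and source date column:

DateKey =
    YEAR ( 'MyTable'[DateColumn] ) * 10000
        + MONTH ( 'MyTable'[DateColumn] ) * 100
        + DAY ( 'MyTable'[DateColumn] )

With 01/March/2015 that gives 2015 * 10000 + 3 * 100 + 1 = 20150301.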
I'm working on a social network where I store my friend GUIDs in a table with the following structure:

user1_guid | user2_guid

I am trying to write a query to return a single list of all a user's friends in a single column. Depending on who initiates the friendship, a user's GUID value can be in either of the two columns. Here is the crazy SQL I have come up with to get what I want, but I'm sure there's a better way... Any ideas?

SELECT DISTINCT UserId
FROM espace_ProfileProperty
WHERE (UserId IN (SELECT CAST(REPLACE(CAST(user1_guid AS VarChar(36)) + CAST(user2_guid AS VarChar(36)), @userGuid, '') AS uniqueidentifier) AS UserId
                  FROM espace_UserConnection
                  WHERE (user1_guid = @userGuid) OR (user2_guid = @userGuid)))
  AND (UserId IN (SELECT UserId FROM espace_ProfileProperty))
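For comparison, this is the simpler shape I was hoping for — a sketch against the same table and @userGuid parameter, picking whichever column is not the current user:

SELECT DISTINCT
       CASE WHEN c.user1_guid = @userGuid THEN c.user2_guid
            ELSE c.user1_guid
       END AS UserId
FROM espace_UserConnection c
WHERE c.user1_guid = @userGuid
   OR c.user2_guid = @userGuid

If I still need to restrict the result to users that exist in espace_ProfileProperty, I suppose I could join to it on UserId.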
Hi, using SSIS I am importing the data from a text file into a SQL Server table. After the import, I can't figure out why the text values inside SQL Server have double quotes around them, just as in the text file. For example, the value "Simpsons" appears with the quotes, whereas I want it to appear without them in the SQL Server table. In the connection manager, the flat file connection has a Text Qualifier of <None>.
I'm trying to use DTS to import a space-delimited file. One column uses " as a text qualifier, so I set this in the options. The problem arises when a " shows up between the two text qualifiers: it's seen as a closing qualifier followed by a second qualifier with no end, and I obviously get an error at that point. Anyone have any good advice on how to squash this one?
I have created a package which copies rows from a CSV file to a SQL database. There is a field in the CSV file which contains numeric data, and I am keeping it as numeric in the database too. For example, a column in the CSV named "amount" needs to be transferred into the data table, where the corresponding column is also named "amount", its data type is numeric, and the field can contain NULL values. I am using the double quote (") text qualifier on the CSV file. Now my problem is that some rows in the CSV file contain empty values for the amount column. For example, let's take a look at my CSV file content:
"Name", "Salary"
"Jhon Stuart", "35.66"
"Maria Gree", ""
Notice the second data row of the CSV, where the Salary value has been left as an empty string. My intention is to import this data into the database with the salary value for Maria remaining NULL, but the package generates an error for this row. It says:
There was an error with input column "Salary" (61) on input "OleDB Destination Input (47)" . The column status returned was : The value could not be converted because of potential loss of data.
Can anybody help me with this? What would be the solution? If I modify the row in the CSV file as follows:
"Maria Gree", "0.00"
then it works. But I don't want to fill the field with zero in the DB; I want it to be set to NULL, which makes more sense.
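One idea I'm considering is a Derived Column transformation between the flat file source and the destination that maps an empty Salary to NULL. This is just a sketch, and the DT_NUMERIC precision and scale would have to match my actual table column:

LEN(TRIM(Salary)) == 0 ? NULL(DT_NUMERIC, 18, 2) : (DT_NUMERIC, 18, 2)Salary

The destination mapping would then use this derived column instead of the raw Salary column from the file.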
I'm starting to use SQL 2008 recently, and I'm just having trouble with the following problem:
The following query:
SELECT t_Category.Name AS [Category]
FROM t_Assets, t_Category, t_Priority, t_Location, t_User_Assets
WHERE t_Assets.Asset_ID = t_User_Assets.Asset_ID
  AND t_Category.Category_ID = t_User_Assets.Category_ID
  AND t_Priority.Priority_ID = t_User_Assets.Priority_ID
  AND t_Location.Location_ID = t_User_Assets.Location_ID
Returns this result:
Category
BMS
BMS
Water
BMS
BMS
Air
And the following query:
SELECT COUNT(t_Category.Category_ID) AS AssetQty
FROM t_Assets, t_Category, t_Priority, t_Location, t_User_Assets
WHERE t_Assets.Asset_ID = t_User_Assets.Asset_ID
  AND t_Category.Category_ID = t_User_Assets.Category_ID
  AND t_Priority.Priority_ID = t_User_Assets.Priority_ID
  AND t_Location.Location_ID = t_User_Assets.Location_ID
GROUP BY t_Category.Category_ID
Returns this result:
AssetQty
4
1
1
I need to have both of those results returned, as a single result. Such as:
Category AssetQty
BMS      4
WATER    1
AIR      1
However, I'm not able to, because if I add t_Category.Name to the SELECT clause, it gives me the following error:
Column 't_Category.Name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
And if I try to use Name as part of the COUNT clause, it won't work either, as text is not an acceptable data type for that aggregation.
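For reference, this is the combined shape I'm trying to get to — a sketch that merges the two queries above and adds the name to the GROUP BY:

SELECT t_Category.Name AS [Category],
       COUNT(t_Category.Category_ID) AS AssetQty
FROM t_Assets, t_Category, t_Priority, t_Location, t_User_Assets
WHERE t_Assets.Asset_ID = t_User_Assets.Asset_ID
  AND t_Category.Category_ID = t_User_Assets.Category_ID
  AND t_Priority.Priority_ID = t_User_Assets.Priority_ID
  AND t_Location.Location_ID = t_User_Assets.Location_ID
GROUP BY t_Category.Category_ID, t_Category.Name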
Hi there

I have the following two tables:

mainprofile (profile varchar(20), description)
accprofile (profile varchar(20), acct_type int)

Sample data could be:

mainprofile
-----------
prof1 | profile one
prof2 | profile two
prof3 | profile three

accprofile
----------
prof1 | 0
prof1 | 1
prof1 | 2
prof2 | 0

Now, doing a join between these two tables would return multiple rows, but I would like to know whether it would be possible to return acct_type horizontally in a column of the result set, e.g.

prof1 | profile one | [0,1,2]
prof2 | profile two | [0]

I could probably manage this with cursors, but it would be very resource intensive. Is there a better way?

Regards,
Louis
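P.S. One possibility I've been reading about is building the list with FOR XML PATH — a sketch, assuming SQL Server 2005 or later, with the square brackets added just to match the output shown above:

SELECT m.profile,
       m.description,
       '[' + STUFF((SELECT ',' + CAST(a.acct_type AS varchar(10))
                    FROM accprofile a
                    WHERE a.profile = m.profile
                    FOR XML PATH('')), 1, 1, '') + ']' AS acct_types
FROM mainprofile m

Would that be a reasonable alternative to the cursor approach?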
Hello, I'm using C# to access SQL Server. When I execute an insert command, I want to get the value of the ID column (ID is an identity column in my table definition). Is there any method I can use? Thanks.
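A sketch of the T-SQL side of one common approach: append SCOPE_IDENTITY() to the insert and read the result back with ExecuteScalar. The table and column names here are placeholders:

INSERT INTO dbo.MyTable (Name, CreatedOn)
VALUES (@Name, GETDATE());

-- returns the identity value generated by the insert above, in the same scope
SELECT CAST(SCOPE_IDENTITY() AS int) AS NewID;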
SELECT submitRep1 AS Rep, tc, COUNT(tc) AS TCCount
FROM tbl_CYProcessedSales
WHERE tc NOT LIKE 'T%'
  AND tc NOT LIKE 'R%'
  AND ISNUMERIC(tc) = 0
  AND NOT submitRep1 = ''
  AND submitRep1 = 'along'
GROUP BY submitRep1, tc
Returns a result like this:
ALONG   KL   65
ALONG   KP   35
How can I return the one record that contains the MAX(TCCount)?
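One way I'm thinking of doing it is to order the grouped result by the count and take only the first row, i.e. the same query with TOP 1 and an ORDER BY:

SELECT TOP 1 submitRep1 AS Rep, tc, COUNT(tc) AS TCCount
FROM tbl_CYProcessedSales
WHERE tc NOT LIKE 'T%'
  AND tc NOT LIKE 'R%'
  AND ISNUMERIC(tc) = 0
  AND NOT submitRep1 = ''
  AND submitRep1 = 'along'
GROUP BY submitRep1, tc
ORDER BY COUNT(tc) DESC

Is that the right approach, or is there a cleaner way?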
I'm having trouble with cube processing. While processing the cube I'm getting an "Invalid column name MessageType" error.
I expanded the cube, then opened "measure groups", my failing dimension (ServiceRequestDim), and the partition.
In the partition I opened the "Source" property so that it now includes my column which was missing, but that didn't solve the issue.
If I look at the query used during processing, I get this:
SELECT DISTINCT [ServiceRequestDim].[MessageType] AS [ServiceRequestDimMessageType0_0] FROM ( Select IsNull(IsDeleted, 0) as IsDeleted, [ServiceRequestDimKey], IsNull([Status_ServiceRequestStatusId], 0) as [Status_ServiceRequestStatusId],[Status],IsNull([TemplateId_ServiceRequestTemplateId], 0) as
[Code] ....
In the nested query which defines ServiceRequestDim, the MessageType attribute is still missing, yet in my source datamart the ServiceRequestDim table does have the "MessageType" column.
So the question is: where do I change the nested query that the dimension processing uses, so that it reflects the actual columns in my datamart?
I have created an SSIS package, in my VS2005 solution, that Bulk Inserts a CSV file (see example below):

"100",2006-10-03 00:00:00,"HEX012",1
"101",2006-10-03 00:00:00,"DS00130",1
I have a Bulk Insert Task that uses a Flat File Connection Manager to import my CSV file into my SQL 2005 database. My source CSV file (see example above) has double quotation marks surrounding any text fields. I have set the Flat File Connection Manager's 'Text Qualifier' to double quotation marks. The Bulk Insert works OK, but ignores the text qualifier, so my database table is left with the original quotation marks in every text field. Any help appreciated. Regards, Paul.
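P.S. As a stopgap I'm considering stripping the quotes after the load with something like the following, where dbo.MyTable and TextCol are placeholders for my real table and each affected text column:

UPDATE dbo.MyTable
SET TextCol = REPLACE(TextCol, '"', '')
WHERE TextCol LIKE '%"%';

I'd rather have the qualifier honoured during the load, though.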
I'm exporting the results of a query to a flat .txt file. The problem I'm encountering is that when I export the data and then open the .txt file in Excel, some columns cause line breaks to the next row. The columns that break to a new row are varchar fields where the user has entered text containing double quotes (").
When I export, I'm using row delimiter {CR}{LF}, column delimiter comma, and text qualifier double quote (").
Is there a way to prevent this from happening when I export and open the flat file into Excel?
I tried using replace, but I was getting a syntax error in my query. Here is the query without using replace:
SELECT e.session_date, l.lab_no, i.first_name + ' ' + i.last_name AS Teacher, tt.name,
       d.district_name, s.school_name, t.title,
       a.q1 AS Question1, a.q2 AS Question2, a.q3 AS Question3, a.q4 AS Question4,
       a.q5 AS Question5, a.q6 AS Question6, a.q7 AS Question7, a.q8 AS Question8,
       a.q9 AS Question9, a.q10 AS Question10
FROM evaluation e
LEFT OUTER JOIN training t ON t.id = e.training
LEFT OUTER JOIN lab l ON l.id = e.lab_no
LEFT OUTER JOIN instructor i ON i.id = e.instructor
LEFT OUTER JOIN trainee tt ON tt.id = e.trainee
LEFT OUTER JOIN district d ON d.id = e.district
LEFT OUTER JOIN school s ON s.id = e.school
LEFT OUTER JOIN answers a ON a.id = e.answers
WHERE session_date >= '20070401'
  AND session_date < '20070501'
I would need to use the replace on columns a.q7, a.q8, a.q9, and a.q10
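This is roughly where I think the REPLACE would go — replacing the last four columns in the select list above with expressions like these:

REPLACE(a.q7, '"', '') AS Question7,
REPLACE(a.q8, '"', '') AS Question8,
REPLACE(a.q9, '"', '') AS Question9,
REPLACE(a.q10, '"', '') AS Question10

Is that the correct syntax, or is my error coming from somewhere else?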
I also tried using another delimiter, pipes (|), and that didn't work. Maybe I was attempting it incorrectly?
I've discovered an issue with the text qualifier field in the flat file connection manager when upgrading an SSIS 2005/2008 package from a 32-bit platform to a 64-bit platform running SQL Server 2008 R2 10.5.1600.
The upgrade converts <none> in this field to _x003C_none_x003E, and therefore any package using the flat file connection manager (i.e. import/export, common tasks in SSIS!) will have problems with either its output data or its imported data.
Simply replacing _x003C_none_x003E with <none> fixes the issue, but of course many packages can be affected as a result.
Is there any existing or impending cumulative update for SQL Server 2008 R2 Standard that will fix the problem? Double quote qualifiers are converted to _x0022_, which I assume can also be fixed by replacing them with a double quote.
I am creating reports for an application that, when installed, can have various different table owners/qualifiers depending on how the client created the DB. How can I create standard reports across all the DBs without hardcoding the table name qualifier/owner in the dataset query? The table structure remains the same; only the qualifiers may differ. Any help would be great.
I have this query that returns the largest value in a row, but I also need to know the name of the column that the value comes from. Any help is appreciated in advance.
SELECT clientID,
       (SELECT MAX(incomeValue)
        FROM (SELECT earnings AS incomeValue
              UNION ALL SELECT unemployment
              UNION ALL SELECT pensionRetirement
              UNION ALL SELECT alimony
              UNION ALL SELECT childSupport
              UNION ALL SELECT dividendInterest
              UNION ALL SELECT SS
              UNION ALL SELECT SSI
              UNION ALL SELECT SSDI
              UNION ALL SELECT veteranBenefits
              UNION ALL SELECT FIP
              UNION ALL SELECT workStudy
              UNION ALL SELECT other
              UNION ALL SELECT otherHHWS) AS income) AS MaxIncomeValue
FROM tbl_income
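A sketch of an alternative I'm considering that would also return the column name, using CROSS APPLY with a VALUES list (column names taken from the query above; I'm assuming SQL Server 2008 or later for the VALUES row constructor):

SELECT i.clientID, v.incomeSource, v.incomeValue AS MaxIncomeValue
FROM tbl_income i
CROSS APPLY (
    SELECT TOP 1 incomeSource, incomeValue
    FROM (VALUES ('earnings',          i.earnings),
                 ('unemployment',      i.unemployment),
                 ('pensionRetirement', i.pensionRetirement),
                 ('alimony',           i.alimony),
                 ('childSupport',      i.childSupport),
                 ('dividendInterest',  i.dividendInterest),
                 ('SS',                i.SS),
                 ('SSI',               i.SSI),
                 ('SSDI',              i.SSDI),
                 ('veteranBenefits',   i.veteranBenefits),
                 ('FIP',               i.FIP),
                 ('workStudy',         i.workStudy),
                 ('other',             i.other),
                 ('otherHHWS',         i.otherHHWS)
         ) AS x (incomeSource, incomeValue)
    ORDER BY incomeValue DESC
) v

Would that be a sensible way to get both the maximum value and the column it came from?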