Hi,
I have a problem.
I am writing a stored procedure that contains an "EXECUTE" statement, which runs a dynamic query and retrieves the attributes I want.
When I run the procedure directly it works fine and I get the result,
but I am not able to build the report, because the dataset does not list the attributes.
my procedure is like this :
--------------------------------------------
USE [HOST_BPM_COVLTCP]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[PB_Report_GetProjectAttributes]
    @intProjectId INT
AS
BEGIN
    -- NOTE: in the full procedure these three variables are populated by the
    -- attribute-list-building logic, which is omitted from this excerpt.
    DECLARE @SRC_ATTRIBUTE_COLUMNS VARCHAR(MAX),
            @STR_ATTRIBUTES VARCHAR(MAX),
            @SRC_ATRIBUTE_NAMES VARCHAR(MAX)

    SELECT
        @SRC_ATTRIBUTE_COLUMNS = (CASE WHEN @SRC_ATTRIBUTE_COLUMNS IS NULL THEN '' ELSE ',' + @SRC_ATTRIBUTE_COLUMNS END),
        @STR_ATTRIBUTES = ISNULL(@STR_ATTRIBUTES, ''''' DUMMY_COL'),
        @SRC_ATRIBUTE_NAMES = ISNULL(@SRC_ATRIBUTE_NAMES, ' '''' WHERE 1 <> 1')

    EXEC
    (
    '
    SELECT
        DP.IDX PROJECT_ID, dbo.FindAndReplace(DP.CODE) [Project Code], dbo.FindAndReplace(DP.NAME) [Project Name], dbo.FindAndReplace(DP.LABEL) [Project] ' + @SRC_ATTRIBUTE_COLUMNS + ',
        ISNULL(DP.CREATED_BY, '''') AS CREATED_BY, ISNULL(DP.MODIFIED_BY, '''') AS MODIFIED_BY,
        DP.CREATED_DATE, DP.MODIFIED_DATE
    FROM
        DIM_PROJECT DP
        LEFT JOIN
        (
            SELECT
                ' + @STR_ATTRIBUTES + ', PROJECT_ID
            FROM
                PB_PROJECT_ATTRIBUTE_VALUE
            WHERE
                PROJECT_ID = ' + CAST(@intProjectId AS VARCHAR(20)) + '
            GROUP BY
                PROJECT_ID
        ) SRC
            ON DP.IDX = SRC.PROJECT_ID
    WHERE
        DP.IDX = ' + CAST(@intProjectId AS VARCHAR(20)) + '
    ORDER BY DP.LABEL
    '
    )
END
----------
Actually, this procedure should return the following attributes.
I found out that the data I need for my SQL report is already defined in a dynamic dataset on another web service. Is there a way to use web services to call another web service to get the dataset I need to generate a report? Examples would help if you have any. Thanks for looking.
I have been banging my head against the wall for TWO days. I have gone back and forth with a very patient guy on thescripts.com. You can see the ridiculous thread here
If you have time, at least peruse that so we don't go in circles. Anyway, if you guys can help me solve this, I will be forever grateful!!
Here is the "basic" problem:
Here is an example for TWO different entities in the database.
EntityID | XmlFieldName | Value
1        | City         | Austin
1        | State        | TX
1        | Country      | US
2        | CityName     | Los Angeles
2        | StateCode    | CA
2        | CountryCode  | US
2        | Zip          | 111111
Here is how the two different results should look:
where EntityID = 1: <Address City="Austin" State="TX" Country="US"/>
where EntityID = 2: <Address CityName="Los Angeles" StateCode="CA" CountryCode="US" Zip="111111"/>
Notice how the attribute names (City or CityName, State or StateCode, etc.) are based on the XmlFieldName, and I don't know in advance what the possible values will be. I also don't know how many attributes there will be, and they can differ per entity, depending on how an address has been set up in our application.
Another thing to note: I kind of have this working in a sproc using PIVOT, generating a table with the correct dynamic column names (you can see this on my other thread, posted above). But I REALLY need this to not use dynamic SQL (so I can use it in a function), if possible, and for it to be usable in a SELECT statement, whether via a temp table or otherwise, so that I get a result set back that I can run FOR XML RAW on. If this is confusing, it is because I am delirious. OR is there a way to return a table with dynamically built columns from a sproc?
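One way to get dynamic attribute names without PIVOT or dynamic SQL is to build the XML as a string and cast it at the end. This is only a sketch, and it assumes a table called EntityAttributes with columns EntityID, XmlFieldName and Value (substitute your real names):

-- Build one <Address .../> element per entity by string-aggregating
-- name="value" pairs with FOR XML PATH(''), then casting the result to XML.
-- FOR XML PATH entitizes <, > and &, but a double quote inside a Value
-- would still need extra handling.
SELECT e.EntityID,
       CAST('<Address '
            + (SELECT a.XmlFieldName + '="' + a.Value + '" '
               FROM EntityAttributes a
               WHERE a.EntityID = e.EntityID
               FOR XML PATH(''))
            + '/>' AS XML) AS AddressXml
FROM (SELECT DISTINCT EntityID FROM EntityAttributes) e

Since this is a plain SELECT, it should also be usable inside a view or an inline table-valued function.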
I am working in SQL Server Master Data Services Version 11.0.5058.0 (SP 2).
I have been asked to group all the financial attributes together. When I move one of the attributes up using the arrows, it works well, moving past one attribute at a time. Then I reach a section of attributes where it leapfrogs over 24 attributes.
It appears these 24 attributes are in a subgroup, but there are no attribute groups, and I have removed the subscription view from the entity. If I move one of the 24 attributes within the group, it moves outside of the 24 attributes.
This is under leaf member attributes. There are no collection or consolidated groups.
I'm using a DW built from the Northwind database to create a cube for some analytical tasks. I have already created the cube and am now "cleaning" the dimensions. I'm having some difficulty understanding the logic of this part. When I created the Data Source View, I only imported the foreign keys that connect the dimensions to the fact table. Do I have to drag the attributes of each dimension from the Data Source View to the attributes tab?
Imagine this:
I have the following dimension:
Dim_Customer: Customer_ID, Name_Customer, Job_Function, Date_of_Birth, Contact, Address, City, Country
When I create the cube, only Customer_ID appears in the attributes tab. Is that normal?
One more question:
I don't want to create a hierarchy like:
Customer_ID -> Name_Customer
Customer_ID -> Date_of_Birth
Customer_ID -> Address
Customer_ID -> City
Customer_ID -> Country
My idea is to create the following hierarchy:
Name_Customer -> Date_of_Birth -> Address -> City -> Country
But the first hierarchy I showed is what always appears. Do you know why this happens?
I have a specification table that has some attributes defined:

SpecId - id of the specification
Attribute - attribute of the spec (like Color, HP, etc.)
Value - the value of the attribute

Then I have a car table that actually has information about the cars. The intention is to take each specification and match the cars that satisfy it. If the car has more attributes than the spec, we ignore the extra attributes for the match. But if the car has fewer attributes, we don't even consider the car a match (even if the attributes that are present match). To summarize: the car's attributes should be >= the spec's attributes.
The code I have below is bad because I am joining the same tables twice. In addition, it fails in the condition "the car's attributes should be >= spec's attributes"
INSERT INTO @Specification VALUES ('S1', 'Type', 'Sedan')
INSERT INTO @Specification VALUES ('S1', 'Transmission', 'Auto')
INSERT INTO @Specification VALUES ('S1', 'HP', '220')

INSERT INTO @Specification VALUES ('S2', 'Type', 'SUV')
INSERT INTO @Specification VALUES ('S2', 'Transmission', 'Manual')
INSERT INTO @Specification VALUES ('S2', 'HP', '300')

INSERT INTO @Car VALUES ('Accord', 'Type', 'Sedan')
INSERT INTO @Car VALUES ('Accord', 'Transmission', 'Auto')
INSERT INTO @Car VALUES ('Accord', 'HP', '220')
INSERT INTO @Car VALUES ('Accord', 'Color', 'Black')

INSERT INTO @Car VALUES ('Escape', 'Type', 'SUV')
INSERT INTO @Car VALUES ('Escape', 'Transmission', 'Manual')
INSERT INTO @Car VALUES ('Escape', 'HP', '300')

INSERT INTO @Car VALUES ('Explorer', 'Type', 'SUV')
INSERT INTO @Car VALUES ('Explorer', 'Transmission', 'Manual')
SELECT DISTINCT Spec.SpecId, Car.CarName
FROM @Specification Spec
INNER JOIN @Car Car
    ON Spec.Attribute = Car.Attribute AND Spec.Value = Car.Value
WHERE Spec.SpecId NOT IN
(
    SELECT Spec.SpecId
    FROM @Specification Spec
    LEFT OUTER JOIN @Car Car
        ON Spec.Attribute = Car.Attribute AND Spec.Value = Car.Value
    WHERE Car.CarName IS NULL
)
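For what it's worth, this "car must match every attribute of the spec" requirement is the classic relational-division pattern, and it can be handled with one join plus a HAVING clause. A sketch against the same table variables as above:

-- Join cars to specs on matching attribute/value pairs, then keep only the
-- (spec, car) pairs where the number of matched attributes equals the number
-- of attributes in the spec. Extra car attributes never join, so they are
-- ignored; a car missing any spec attribute fails the count test.
SELECT s.SpecId, c.CarName
FROM @Specification s
INNER JOIN @Car c
    ON c.Attribute = s.Attribute
   AND c.Value = s.Value
GROUP BY s.SpecId, c.CarName
HAVING COUNT(*) = (SELECT COUNT(*)
                   FROM @Specification s2
                   WHERE s2.SpecId = s.SpecId)

With the sample data this returns (S1, Accord) and (S2, Escape); Explorer drops out because it has no HP attribute.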
What is required is to split data of this format into 3 separate datasets:
1. One dataset for DividendRequirement of 100, i.e. select * from tableName where DividendRequirement = 100
2. One dataset for DividendRequirement > 100 i.e. select * from tableName where DividendRequirement > 100
3. One dataset for DividendRequirement < 100 i.e. select * from tableName where DividendRequirement < 100
I know that I can do it with 3 separate stored procedures, using a different operator ('=', '>' and '<') in each one, and that I can combine the 3 stored procedures into 1 using dynamic SQL, passing the operator (or some number that maps to a particular comparison) as a parameter to the stored procedure. What I'm after, though, is a way to avoid dynamic SQL but still keep it as one stored procedure. Possibly some clever use of CASE statements or something along those lines?
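One way to avoid dynamic SQL entirely is to encode the operator as a number and compare it against SIGN(). A minimal sketch, reusing tableName from above (the procedure name is made up):

CREATE PROCEDURE dbo.GetByDividendRequirement
    @Comparison int   -- -1 for < 100, 0 for = 100, 1 for > 100
AS
BEGIN
    SET NOCOUNT ON
    -- SIGN(DividendRequirement - 100) returns -1, 0 or 1, so one equality
    -- test covers all three cases without building SQL strings.
    SELECT *
    FROM tableName
    WHERE SIGN(DividendRequirement - 100) = @Comparison
END

The usual caveat applies: wrapping the column in SIGN() makes the predicate non-sargable, so an index on DividendRequirement won't be used the way it would be with three plain queries.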
We have built an analytics solution in Analysis Services with the following setup:
1) FactTable = customer transactions
2) DimCustomerClassificationTable = for a customer over a trailing twelve-month period, we have pre-calculated a series of segmentations, such as their revenue segment at the Worldwide, Region, SubRegion, or Country level, and their product classification (e.g., what they are mainly buying) at each of those same levels. For each combination of a customer and country over a trailing twelve-month period, we have one dimension table record that holds their segmentation classifications at each geography level.
3) DimTime = allows a user to select a trailing twelve-month time period
4) DimSalesLocation = the hierarchy of sales organization locations (Worldwide, Region, SubRegion, Country) credited with the sale to the customer
Ideally, we would like to create calculated members on the DimCustomerClassification dimension that assume the value of the correct segmentation field based on the user's selection (e.g., if they have selected a country, one calculated member called Revenue Segment would assume the value of Revenue Segment Country; if they selected a SubRegion, it would assume the value of Revenue Segment SubRegion, etc.), but we are not sure if this is possible.
If this isn't possible, what would be the best approach to address this?
so that everything is then driven from a single selected Quarter value, the table report no longer gets populated, as it has hardcoded field values such as
I have a report with multiple datasets, the first of which pulls in data based on user-entered parameters (sales date range and property use codes). Dataset1 pulls property IDs and other sales data from a table (2014_COST) based on the user's parameters. I have set up another table (AUDITS) that I would like to use in dataset6. This table has 3 columns (Property ID, Sales Price and Sales Date). I would like dataset6 to pull the property IDs that are NOT contained in the results from dataset1. In other words, I'd like the results of dataset6 to show me the property IDs that are in the AUDITS table but are not being pulled into dataset1. Both tables are in the same database.
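A sketch of what the dataset6 query could look like, with NOT EXISTS doing the exclusion. The column names (PropertyID, SalesPrice, SalesDate, SaleDate, UseCode) and the parameter names are guesses; the key point is that the subquery must repeat whatever filters dataset1 applies:

-- Rows from AUDITS whose property IDs do not appear in dataset1's results.
SELECT a.PropertyID, a.SalesPrice, a.SalesDate
FROM AUDITS a
WHERE NOT EXISTS
(
    SELECT 1
    FROM [2014_COST] c
    WHERE c.PropertyID = a.PropertyID
      AND c.SaleDate BETWEEN @StartDate AND @EndDate
      AND c.UseCode IN (@UseCodes)
)

In SSRS, @StartDate, @EndDate and @UseCodes would be wired to the same report parameters dataset1 uses (SSRS expands a multi-value parameter inside IN (...)).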
I have a small number of rows in a dataset, Table 1. There is a CLOB on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table1. In short I want to emulate the following:
Table 1: Small table without CLOB, 10 rows. Table 2: Large table with CLOB, 10,000,000 rows
select CLOB from table2 where pk in (select pk from table1)
I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed obviously so it should be a fast look up.
Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.
Hi, I have a stored procedure, attached below. It returns 2 rows in SQL Management Studio when I execute MyStorProc 0, 28. But in my program, which uses ADOHelper, it returns a dataset with Tables.Count = 0. If I comment out the line "If @Status = 0", it returns the rows. Obviously it does not enter the @Status = 0 branch, even when I pass @Status = 0. What am I doing wrong? Any help is appreciated.
ALTER PROCEDURE [dbo].[MyStorProc]
(
@Status smallint,
@RowCount int = NULL,
@FacilityId numeric(10,0) = NULL,
@QueueID numeric (10,0)= NULL,
@VendorId numeric(10, 0) = NULL
)
AS
SET NOCOUNT ON
SET CONCAT_NULL_YIELDS_NULL OFF
If @Status = 0
BEGIN
SELECT ......
END
If @Status = 1
BEGIN
SELECT ......
END
I am using DB keywords as attributes in my table, such as from, to and date. These are enclosed in [] in SQL Server Enterprise Manager. I'm just asking if doing this is a bad idea? These are the most applicable names for these attributes, but I don't want to run into problems further down the line.
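It works as long as the names are bracket-quoted everywhere they appear. A quick illustration with a hypothetical table:

-- Reserved words are legal as column names if you always [bracket] them:
CREATE TABLE dbo.Transfers
(
    [from] varchar(50),
    [to]   varchar(50),
    [date] datetime
)

SELECT [from], [to], [date]
FROM dbo.Transfers
WHERE [date] >= '20040101'

The cost is that every query, and every tool that generates queries, has to remember the brackets, which is why many people rename such columns (from_location, transfer_date, etc.) to be safe.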
I have a report in which I want to look for a certain attribute, and as long as a record contains that attribute, bring all of its other attributes with it. Better with an example: in this report I am specifically looking for the attribute "Alcohol"; if I find it, I want to include all the other attributes that go with that record's primary key, which could include "Drugs", "Arson", "Vandalism", etc. The problem is that when I try to use a parameter or filter, I get the "Alcohol" attribute but not "Drugs", "Arson", "Vandalism", etc. Conversely, with no filter/parameter set, I get everything, even records that do not include "Alcohol".
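One way to express this in the dataset query itself, rather than in a report filter, is to filter on keys instead of rows. A sketch with made-up names (Incidents, IncidentID, Attribute):

-- Keep every attribute row belonging to a record that has at least one
-- 'Alcohol' attribute; the record's other attributes come along with it.
SELECT i.*
FROM Incidents i
WHERE i.IncidentID IN
(
    SELECT IncidentID
    FROM Incidents
    WHERE Attribute = 'Alcohol'
)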
I have two datasets. One dataset has old data from some other database; the second dataset has the original data from a SQL Server 2005 database. Both have the same fields, with id as the primary key. I want to transfer all the data from the first dataset to the new dataset, retaining the previous data; but if the old dataset has the same id (primary key) as the new one, that row should not be transferred. If the values for an id have changed, though, the fields should be updated with that data. How can I do that?
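If the merge can happen on the database side rather than between in-memory datasets, one plausible reading of the requirement maps to the classic update-then-insert pattern. A rough sketch; OldTable, NewTable and SomeColumn are placeholders for your real names:

-- Update rows whose id exists in both tables (in case values changed)...
UPDATE n
SET n.SomeColumn = o.SomeColumn      -- repeat for each field
FROM NewTable n
INNER JOIN OldTable o ON o.id = n.id

-- ...then copy over rows whose id is not in the new table yet.
INSERT INTO NewTable (id, SomeColumn)
SELECT o.id, o.SomeColumn
FROM OldTable o
WHERE NOT EXISTS (SELECT 1 FROM NewTable n WHERE n.id = o.id)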
D2 is a list of data. Each row in D2 has a ClassId. D2 may or may not have all the ClassIds in D1, but all ClassIds in D2 must be in D1.
I want to show fields from D2, group the data by the ClassIds in D1, and show every group as a separate table. If no data in D2 is available for a ClassId, it should show an empty table.
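If the two datasets can be combined into one, a LEFT JOIN from D1 keeps the ClassIds that have no matching D2 rows, so the report can still render an empty table for them. A sketch, treating D1 and D2 as tables and guessing at the column name:

-- Every ClassId from D1 appears at least once; the D2 fields are NULL
-- when a ClassId has no data, which renders as an empty group/table.
SELECT d1.ClassId, d2.*
FROM D1 d1
LEFT JOIN D2 d2 ON d2.ClassId = d1.ClassId

Then group the report (e.g., a list containing the table) on d1.ClassId.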
Bit of a design question, as I'm interested to know if anyone's done anything like this... This is my main table (ish):

Thing(ThingId, Ref)

I then need to be able to give this "Thing" any number of attributes:

Thing1 - Type: Red, Location: London
Thing2 - Type: Blue, Height: 400, Width: 300
Thing3 - Height: 500, Location: Norwich

But I have no idea how to model this in the database. It needs to be done in such a way that I can add a Thing and all its attributes in one database hit, basically (is there a stored procedure you could pass an array into?). My initial thoughts were to have:

Thing(ThingId, Ref)
Attribute(AttributeId, ThingId*, AttributeTypeId*, Value)
AttributeType(AttributeTypeId, Description)

Is that completely mad? It seems like quite a lot of data accesses to enter a Thing. It could be Thing(ThingId, Ref, Type, Location, Height, Width), but then when "Thing - Color: White" comes along the model is stuffed. Any ideas? (Hope that makes sense.)
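On the "stored procedure you could pass an array into" question: SQL Server 2008 and later have table-valued parameters, which let you insert a Thing and all its attributes in a single round trip against exactly the three-table model sketched above. A sketch:

-- A table type describing the attribute list passed in from the client.
CREATE TYPE dbo.AttributeList AS TABLE
(
    AttributeTypeId int NOT NULL,
    Value varchar(200) NOT NULL
)
GO

CREATE PROCEDURE dbo.AddThing
    @Ref varchar(50),
    @Attributes dbo.AttributeList READONLY
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @ThingId int

    INSERT INTO Thing (Ref) VALUES (@Ref)
    SET @ThingId = SCOPE_IDENTITY()

    -- One set-based insert for all of the Thing's attributes.
    INSERT INTO Attribute (ThingId, AttributeTypeId, Value)
    SELECT @ThingId, AttributeTypeId, Value
    FROM @Attributes
END

On versions before 2008, the usual workaround was to pass the attributes as one delimited string (or an XML parameter) and split it inside the procedure.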
In Query Analyzer, what is the command to tell me the attributes of the entities in a table? In Oracle I can use the DESCRIBE command. I know there is a way to do it in Query Analyzer, but I can't remember how. I can also look visually by expanding the table's node, but doing it from the command line in Query Analyzer is sometimes quicker.
For example, say I want to find out about a table named "Employee". What command would I type to list all of the columns/attributes in that table and their data types? Bill
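For reference, the closest equivalents of Oracle's DESCRIBE in Query Analyzer are the system procedures sp_help and sp_columns:

-- Full description: columns, data types, identity, keys, constraints, indexes
EXEC sp_help 'Employee'

-- Column-level details only
EXEC sp_columns 'Employee'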
When I copy tables from one database to another (using the DTS Wizard), I lose my settings: primary keys and default values! Any help would be appreciated. Thanks.
I have a few tables that have a disabled attribute using a BIT datatype. A lot of my queries on the front end look like:
SELECT * FROM TableA WHERE disabled <> 1
There's usually some other constraints on the query (get TOP 10 and greater than a certain date for example). Right now my tables are very small (only a couple thousand rows). I don't anticipate these tables having more than 100,000 rows.
Right now let's say there's only a CLUSTERED INDEX on the date field, and regular INDEXES on the identity field and perhaps some other UNIQUE name in the table.
Unless I am doing ranged queries on the CLUSTERED INDEXED field, I'm going to be performing table scans almost every time, right?
This sort of goes along with another question:
Say you run the following (SQL Server):
CREATE TABLE TestA
(
    [id] INT IDENTITY (1, 1) PRIMARY KEY,
    disabled BIT DEFAULT 0
)
GO
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('1')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('1')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('0')
INSERT INTO TestA (disabled) VALUES ('1')
INSERT INTO TestA (disabled) VALUES ('1')
INSERT INTO TestA (disabled) VALUES ('0')
GO
Since [id] is a PK there will be a CLUSTERED INDEX placed on it. My question is; what does the optimizer do when you perform the following query?
SELECT TOP 3 * FROM TestA WHERE disabled <> '1'
My assumption is that since there's a CLUSTERED INDEX, it will simply iterate through every tuple and check whether disabled is not '1'. If that's correct, then these kinds of boolean fields aren't a big deal when TOP queries are performed along a CLUSTERED INDEX.
So I guess what I am getting at is: are bit attributes a sign of bad design? As tables get larger, will performance degrade significantly? Would a better design be to have a separate table of disabled items (which may result in large NOT IN subqueries)?
Any information on this would be greatly appreciated.
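For what it's worth, on SQL Server 2008 and later there is a middle ground between a scanned bit column and a separate table of disabled items: a filtered index. A sketch, following the TestA example above:

-- Indexes only the enabled rows; at ~100,000 rows this stays small and
-- lets TOP-N queries over active rows avoid a full scan.
CREATE NONCLUSTERED INDEX IX_TestA_Enabled
ON TestA ([id])
WHERE disabled = 0

One catch: the optimizer matches the filter predicate fairly literally, so write the queries as WHERE disabled = 0 rather than WHERE disabled <> '1' to give the index its best chance of being used.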
I have a question about storing the history of particular objects in a database. For example, if I had a table of "People" which had fields "PersonId", "Name", "PhoneNumber", "Height", "Weight", "Proffession" the data in every field stored for each person can change over time, except for the "PersonId", of course, which is why it is included.
I would like to be able to view a person's attributes at any point in time and therefore need to maintain a history. The current approach in place is to archive images of the whole table at certain points in time, which is unacceptable: it misses some changes, is not very accessible, and also stores data that does not change.
My solution would be to create separate tables for each changing attribute, with a corresponding date for each change. For example, for phone numbers, have a table "PeoplePhoneNumbers" with fields "PersonId", "PhoneNumber" and "ChangeDate". A few shortcomings I can see in this approach: firstly, there will be many tables, one for each changing attribute, and there can be far more attributes than those mentioned. Secondly, joins will have to be created between every attribute table to get the original single-table form back, although I don't see this as a very important issue.
I am wondering: is there a more elegant way to structure objects of this changing nature, or is having separate tables for each changing attribute the best solution? I'm sure this is a very common issue. Thanks very much for the help.
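One common alternative to a table per attribute is a single row-versioned history table: every change writes a complete new row stamped with its validity period. A sketch (column types are guesses):

CREATE TABLE PeopleHistory
(
    PersonId    int          NOT NULL,
    Name        varchar(100) NULL,
    PhoneNumber varchar(30)  NULL,
    Height      decimal(5,2) NULL,
    Weight      decimal(5,2) NULL,
    Profession  varchar(50)  NULL,
    ValidFrom   datetime     NOT NULL,
    ValidTo     datetime     NULL,   -- NULL means "current version"
    PRIMARY KEY (PersonId, ValidFrom)
)

-- "As of" queries then need no joins at all:
SELECT *
FROM PeopleHistory
WHERE PersonId = 42
  AND ValidFrom <= '20040601'
  AND (ValidTo > '20040601' OR ValidTo IS NULL)

The trade-off is the one you identified: unchanged values are stored redundantly across versions, in exchange for far fewer tables and no reassembly joins.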
We have an entity such as a DocumentSearchKey that contains attributes about a particular document. A document can have 1-N search keys or attributes. The classic Employee table is a good example of a horizontal listing of attributes (fname, lname, SS#, address, etc.), because the employee entity has a "fixed" number of attributes, so we can add columns across.
For the DocumentSearchKey entity, attributes can be considered search keys, or WHERE-clause values. The DocumentSearchKey entity has a variable number of attributes (docType A has 5 keys, docType B has 15 keys, etc.). For this example, each docType lives inside its own table, so there is no problem with mixing a variable number of attributes inside the same table; i.e., we will assume this table has 20 keys vertically, or 20 columns horizontally, as defined below.
The problem is whether to add 20 columns across, or to add 3 columns and create a non-normalized DB so additional keys can be added at will.
The proposed table contains 3 columns (DocID, KeyID, KeyValue). Of course, 10 keys for 1 million documents create 10 million rows, versus the traditional table, which always has 1 million rows (keys as columns), where some columns contain blanks or nulls.
Which design is better in terms of searching and performance? Books and links are welcome as well; this is a specific question about a production issue.
I apologize ahead of time for the long post...

Background: Working on a CRM-type custom application. The application is for an event management company. The company will provide the application for other organizations to manage their own events. The events include conferences, corporate meetings, sales meetings, etc.

An event planner will define what information is needed for an attendee to register for an event. We will be providing a standard list of attributes for the event planner to select from. This list includes personal information (name, address, phone numbers), air travel information (preferred carriers, departure airports, etc.), hotel information, etc. We've included all of the information available to us from the business's previous experience. As far as the database goes, all of the standard information given to us will be normalized.

The problem is each event may have unique information that needs to be collected that is not part of the standard list of attributes. For example, if McBurgers is planning an event, the event planner may want to collect an attendee's McBurger employee code.

Depending on the uniqueness of the event, there may be up to 200 unique attributes defined for it. This number comes from researching events planned in the last 5 years. The number of attendees for an event ranges from 100 to 10,000. The company expects about 3,000 events per year.

Database design: I've done a fair amount of research and found a couple of options to meet our requirements, more specifically the need for event planners to define custom attributes for an event.

1) Dynamic columns: Add an event-specific custom attributes table. The table would look something like this:

Event_McBurger05
AttendeeID   | McBurgerEmployeeCode | HiredDate  | SomeOtherAttribute
<AttendeeID> | AxEt356              | 01/01/2004 | Other val 2

2) EAV: Add an EAV (entity, attribute, value) table. The table would look something like this:

Event_Attributes
EventCode  | AttendeeID   | Attribute            | Value
McBurger05 | <AttendeeID> | McBurgerEmployeeCode | AxEt356
McBurger05 | <AttendeeID> | HiredDate            | 01/01/2004
McBurger05 | <AttendeeID> | SomeOtherAttribute   | Other val 2

The Value attribute would be a character (probably varchar) datatype.

3) Strongly typed EAV: Have an EAV table for each data type. The tables would look something like this:

Event_CharAttributes
EventCode  | AttendeeID   | Attribute            | CharValue
McBurger05 | <AttendeeID> | McBurgerEmployeeCode | AxEt356
McBurger05 | <AttendeeID> | SomeOtherAttribute   | Other val 2

Event_DateAttributes
EventCode  | AttendeeID   | Attribute | DateValue
McBurger05 | <AttendeeID> | HiredDate | 01/01/2004

There would be one Event_[DataType]Attribute table for each of the datatypes allowed.

Pros/cons:

1) Dynamic columns
Pros:
- Data integrity can be enforced
- Simpler queries for reporting
- Clearer data model for understanding the data stored
Cons:
- The 8k row size limitation must be managed (probably need to add another table if we run out of room)
- Stored procedures for CRUD operations would need to be created dynamically, OR we need to use dynamic SQL in the database or application
- Adding/removing columns on the fly can be very error prone

2) EAV
Pros:
- Static CRUD stored procs
Cons:
- No data integrity
- Complex queries for reporting
- Worse performance than option 1
- Table can get BIG... fast

3) Strongly typed EAV
Pros:
- Static CRUD stored procs
- Better data type integrity than EAV
Cons:
- Complex queries for reporting
- Worse performance than option 1
- Tables can get BIG... fast

If you are still reading this... thank you!

The questions:
- Are there other options besides the 3 described above? Or are these pretty much it, with slight variants?
- Does anyone see any missing pros/cons for any of the options that should be considered?
- Is there a "preferred" method for what I am trying to do?

I suspect this will come down to the lesser of three devils; just trying to figure out which of the three it is. We have prototyped the three options and are leaning towards options 1 and 3. Any comments/suggestions are appreciated. Thx
I have a question about the Microsoft Clustering algorithm. When we train the clustering model, we obtain the clusters based on the model training. So what is the relationship among all the attributes within each cluster? When we summarize the characteristics of each cluster, for example based on the criterion attribute A=X, we get the darker cluster for this criterion; along with this characteristic (A=X), we also get other characteristics. So what is the exact relationship among all these characteristics? It seems they don't have any relationship to each other at all. (A=X does not mean "most likely B=Y given A=X"; what it means is only that, within this cluster, most likely A=X and B=Y, etc., and A=X has the largest population within this cluster.) I therefore can't see why these characteristics are really interesting.
Looking forward to any guidance and advice on this.
The query below (from Adventure Works) displays the sales amount for three products and a custom member, "aggregation", which is the aggregate of these three products, cross-joined with the "Color" attribute.
Code Snippet
WITH MEMBER [Product].[Product Categories].[Subcategory].&[31].[aggregation] AS
    'AGGREGATE({ [Product].[Product Categories].[Product].&[214],
                 [Product].[Product Categories].[Product].&[215],
                 [Product].[Product Categories].[Product].&[220] })'
SELECT { [Date].[Calendar].[All Periods] } ON COLUMNS ,
Can someone please explain to me why I'm getting this result:
All Periods
Sport-100 Helmet, Red Red 39328.1586
Sport-100 Helmet, Black Black 12098.0788
Sport-100 Helmet, Blue Blue 13331.5816
aggregation Black 64757.819
aggregation Blue 64757.819
aggregation Red 64757.819 (note that 64757.819 is the total of the three products)
instead of something like this:
All Periods
Sport-100 Helmet, Red Red 39328.1586
Sport-100 Helmet, Black Black 12098.0788
Sport-100 Helmet, Blue Blue 13331.5816
aggregation Black 12098.0788
aggregation Blue 13331.5816
aggregation Red 39328.1586
and also if anyone knows of a possible way of getting the second type of result?
please note that if I create a custom member that aggregates members of any other level of the Product Category hierarchy, the problem doesn't exist (see code and results below)
Code Snippet

WITH MEMBER [Product].[Product Categories].[Category].&[4].[Aggregation] AS
    'AGGREGATE({ [Product].[Product Categories].[Subcategory].&[31],
                 [Product].[Product Categories].[Subcategory].&[32] })'
SELECT
    { [Date].[Calendar].DEFAULTMEMBER } ON COLUMNS,
    NON EMPTY
    { { [Product].[Product Categories].[Subcategory].&[31],
        [Product].[Product Categories].[Subcategory].&[32],
        [Product].[Product Categories].[Category].&[4].[Aggregation] }
      * { [Product].[Color].[All Products].CHILDREN } } ON ROWS
FROM [Adventure Works]
WHERE ( [Measures].[Reseller Sales Amount] )