Analysis :: Standard IF Or Alternative Approach?

May 7, 2015

I'm trying to understand the difference between a "standard" MDX IIF:

IIF(a > 0, b, c)

and two alternative approaches:

(a > 0) * b + NOT(a > 0) * c

or

(a > 0) * b + (1 - (a > 0)) * c

Are there any differences in the query plans? If yes, how do they impact performance?
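
For illustration, a minimal sketch of both forms side by side as calculated members (the measure names [A], [B], [C] and the cube name are hypothetical, and the arithmetic variant relies on the boolean-to-numeric coercion used in the question):

WITH
MEMBER [Measures].[Via IIF] AS
    IIF([Measures].[A] > 0, [Measures].[B], [Measures].[C])
MEMBER [Measures].[Via Arithmetic] AS
    ([Measures].[A] > 0) * [Measures].[B] + (1 - ([Measures].[A] > 0)) * [Measures].[C]
SELECT {[Measures].[Via IIF], [Measures].[Via Arithmetic]} ON COLUMNS
FROM [MyCube]

Tracing both queries (for example with Profiler's query-plan events) is one way to check whether the engine produces different plans for them.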

View 4 Replies



Analysis :: Proclarity Reports Are Not In Standard Mode

Jun 13, 2012

I have installed ProClarity 6.1 Web Professional on my local system and I am trying to open a report on an Analysis Server where SQL Server 2012 is installed.

I found that reports only open in Professional mode; whenever I try to open one in Standard mode I get the error below for every report.

ERROR: "The selected page could not be opened because the cube could not be found. Please choose a different page."

After getting this error, I uninstalled ProClarity 6.1 Web Professional from my system and installed ProClarity 6.3, but I am still facing the same issue (not able to open reports in Standard mode).

What is the solution for this, and is this a compatibility issue between ProClarity 6.1 and 6.3 and SQL Server 2012?

View 11 Replies View Related

Analysis :: Are Measure Expressions Supported In SSAS 2014 Standard Edition

Nov 12, 2014

Are Measure Expressions supported in SSAS 2014 Standard Edition? In SSAS 2005, I remember that Measure Expressions were not supported in the Standard Edition, only Enterprise Edition.

View 2 Replies View Related

Can I Install An Enterprise Version Analysis Service On A Standard Version Of SQL 2005 Server?

Jul 25, 2006

Hi all,

Since some Analysis Services features are only available in the Enterprise edition, I have to upgrade my SQL 2005 server from Standard edition to Enterprise edition.

So I uninstalled the original Standard version of Analysis Services and installed the Enterprise version. However, Analysis Services is still the Standard version after installation.

Is it possible to keep the database engine as the Standard version and install an Enterprise version of Analysis Services?

Thank you very much

Tony

View 1 Replies View Related

Compare Standard Evaluation To Standard License Version

May 30, 2007

Hi All,

Good Evening.

I need a comparison between the Evaluation copy and the Standard license version of SQL Server 2005 / SSIS.

I'm assigned a task to evaluate SSIS and migrate a project to SSIS.

I have downloaded the SSIS evaluation version and started working with it.

Now I'm being asked questions about the complete functionality of the tool:

- Does it have a bulk load transformation?

- Does the evaluation version contain all the features of the Standard license version?

I need to submit a consolidated report on SSIS in comparison to our current ETL tool's features.

Can you please let me know if there are any significant features not included in the Evaluation copy?

Thanks in advance,

Suresh N

View 4 Replies View Related

How Do SQL 2000 Service Packs Play A Role In Upgrading? I.e. Can SQL 2000 Standard With No SP Be Upgraded To SQL 2005 Standard

Aug 2, 2006

How do SQL 2000 service packs play a role in upgrading? That is, can SQL 2000 Standard with no service packs (SP) be upgraded to SQL 2005 Standard, or does SQL 2000 Standard have to be at a certain service pack level?

View 1 Replies View Related

Best Approach With DTS

Mar 30, 2004

Let me see if I can explain this.

I have the need to pull data from multiple tables in a DB2 system via ODBC and update or insert as needed into tables in a SQL 2000 DB.

Step 1.
The data from the initial parent table will need to be limited to being a set number of days old, which I have in place and working.

Step 2.
The next table's data needs to be limited by the data retrieved in step 1 (I'd like to use the parent table retrieved in step 1, which is now in SQL, rather than doing it on the DB2 side).

Step 3.
The returned rows here need to be limited to key values returned from step 2.

Additional steps apply, but nearly all will be limited by the results of parent tables from the prior step.

What is the best approach to this? I really want to pull table A to SQL, and limit the next child set from table A, which was pulled to SQL in the prior step.

I also need to do updates rather than dropping and creating the needed tables each time: insert if no key exists, update otherwise (see the sketch below).
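
A minimal sketch of that insert-or-update pattern in T-SQL (the table and key names are hypothetical; MERGE did not exist yet in SQL 2000):

-- update rows whose key already exists in the target
UPDATE t
SET t.ColA = s.ColA, t.ColB = s.ColB
FROM dbo.TargetTable t
INNER JOIN dbo.StagingTable s ON s.KeyID = t.KeyID

-- insert rows whose key does not exist yet
INSERT INTO dbo.TargetTable (KeyID, ColA, ColB)
SELECT s.KeyID, s.ColA, s.ColB
FROM dbo.StagingTable s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.KeyID = s.KeyID)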


What is the best approach?

View 2 Replies View Related

Best Way To Approach This

Mar 20, 2006

I've been working on this project, and had it working in MySQL, but it was badly done and couldn't last more than a few hours without growing so large that everything slowed way down. I don't expect anyone to tell me exactly what to do, just please provide an outline of what the best way to approach this in SQL Server 2005 is.

To simplify it, I have one table "Items" and another table "ItemPrices". Items has an id and a name. Each row in ItemPrices has an id for the item, a price and two datestamps (added, last updated).

On average, there are about 15,000 active items, and 50% of them get new prices every couple of minutes, so I'm looking at what seems like a ton of data being constantly imported. There's probably a good way to do this but I only know the bad way :)

So, what I want to be able to do is have maybe a stored procedure (?) that takes the item name and price as parameters. (In MySQL I was using "INSERT ... ON DUPLICATE KEY UPDATE".) As sketched below, it should behave as follows:
A. If it's a new item name, it will add a row to the Items table and a row to ItemPrices
B. If it's an existing item with the same price as the current price (the most recent price for that item in ItemPrices), it will update the "last updated" date field
C. If it's a new price, it will insert a row into ItemPrices for that item
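
A minimal sketch of such a procedure (the column names ItemID, Name, Price, Added and LastUpdated are assumptions about the schema):

CREATE PROCEDURE dbo.UpsertItemPrice
    @Name  nvarchar(100),
    @Price decimal(18, 2)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ItemID int, @CurrentPrice decimal(18, 2), @LastStamp datetime

    SELECT @ItemID = ItemID FROM dbo.Items WHERE Name = @Name

    IF @ItemID IS NULL
    BEGIN
        -- case A: brand new item
        INSERT INTO dbo.Items (Name) VALUES (@Name)
        SET @ItemID = SCOPE_IDENTITY()
        INSERT INTO dbo.ItemPrices (ItemID, Price, Added, LastUpdated)
        VALUES (@ItemID, @Price, GETDATE(), GETDATE())
        RETURN
    END

    -- find the most recent price row for this item
    SELECT TOP 1 @CurrentPrice = Price, @LastStamp = LastUpdated
    FROM dbo.ItemPrices
    WHERE ItemID = @ItemID
    ORDER BY LastUpdated DESC

    IF @CurrentPrice = @Price
        -- case B: unchanged price, just touch the timestamp
        UPDATE dbo.ItemPrices SET LastUpdated = GETDATE()
        WHERE ItemID = @ItemID AND LastUpdated = @LastStamp
    ELSE
        -- case C: new price, record a new history row
        INSERT INTO dbo.ItemPrices (ItemID, Price, Added, LastUpdated)
        VALUES (@ItemID, @Price, GETDATE(), GETDATE())
END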

Also, I want historical pricing data, but if I ever release this, 95% of the users will just be looking at current prices. I need the current prices to be very fast to query. In my MySQL version I was using something like "SELECT ... JOIN ON lastupdated = (SELECT MAX(lastupdated) FROM ItemPrices ...)"; after I had 300k price updates, querying a list of items took about 15 seconds. There's got to be a better way? What should I do to make this faster?
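
Two common ways to make the current-price read fast (suggestions, not the only designs): support the MAX(lastupdated) pattern with a composite index, or maintain a hypothetical IsCurrent bit column (flipped inside the procedure above on every price change) so the hot query becomes a simple indexed join:

CREATE INDEX IX_ItemPrices_Item_Updated ON dbo.ItemPrices (ItemID, LastUpdated DESC)

-- with an IsCurrent flag maintained on every price change:
SELECT i.Name, p.Price
FROM dbo.Items i
JOIN dbo.ItemPrices p ON p.ItemID = i.ItemID
WHERE p.IsCurrent = 1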

Does this make any sense? Hopefully someone can lead me in the right direction. Thank you very much!

View 3 Replies View Related

Appropriate MDX Approach ???

May 24, 2008



Hello all.


I use MDX on a cube which provides data about animal population.

The cube contains the key figure "ANIMALS" that holds the number of animals.

The cube has a dimension "VERSION" which is used to identify the key figure as a target or an actual value (possible values: "actual" and "target").
The cube has another dimension "ZONE" for the population zones.
Possible values for zones: "A", "B", "C" and "D".


Now I want to create an MDX statement that gives me a result row like this:

Actual number of animals (as sum of all 4 zones) in column no. 1,
Target number of animals (as sum of all 4 zones) in column no. 2,
Achieved percentage (as actual number / target number * 100) in column no. 3.


Until here my statement works and it looks like this:


WITH MEMBER [VERSION].[achieved] AS '[VERSION].[actual] / [VERSION].[target] * 100'
SELECT {[VERSION].[actual], [VERSION].[target], [VERSION].[achieved]} on COLUMNS
FROM [$MYCUBE]
WHERE ([Measures].[ANIMALS])


It surely is possible that the achieved value for all zones together is equal to or greater than 100%, while single zones might have an achieved value of less than 100%.

In order to account for this, I would like column no. 4 to display one of these words:
"ok" if none of the single zones has an achieved value smaller than 100%,
"warning" if any of the single zones has an achieved value between 96 and 99%,
"alert" if any of the single zones has an achieved value smaller than 95%.

That means, in traffic-light terms, I want e.g. "yellow" (warning) if the lowest achieved value of the 4 zones is between 96 and 99, and "red" (alert) if the lowest value is smaller than 95.
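
One possible direction for that fourth column (a sketch only: it assumes the zone members are addressable as [ZONE].[ZONE].Members, closes the small gaps in the stated ranges, and reuses the names from the statement above):

WITH
MEMBER [VERSION].[achieved] AS '[VERSION].[actual] / [VERSION].[target] * 100'
MEMBER [VERSION].[status] AS
    'CASE
        WHEN Min([ZONE].[ZONE].Members,
                 [VERSION].[actual] / [VERSION].[target] * 100) < 95 THEN "alert"
        WHEN Min([ZONE].[ZONE].Members,
                 [VERSION].[actual] / [VERSION].[target] * 100) < 100 THEN "warning"
        ELSE "ok"
    END'
SELECT {[VERSION].[actual], [VERSION].[target], [VERSION].[achieved], [VERSION].[status]} ON COLUMNS
FROM [$MYCUBE]
WHERE ([Measures].[ANIMALS])

Min(set, expression) evaluates the expression for each zone member and returns the smallest result, which is exactly the "lowest achieved value of the 4 zones" test described above.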


I am quite new to MDX and have struggled with this for quite a long time.
I would be grateful for a hint on how I have to modify/enhance my MDX statement.


Regards. Peter.

View 1 Replies View Related

Best Approach

Sep 27, 2006

I have what I feel is a simple package I am working to create. I am teaching myself SSIS as I go along.

The source server is SQL 2000 and its database allows NULL values in columns.

The destination server is also SQL 2000, but its database requires a value in each column.

So I do a basic source select of what I want. I next need to read the values and, where a value is null, insert a space, then do some column matching and insert the rows into the destination server.

I believe I should use a Derived Column and an ISNULL expression to accomplish what I want.
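
For what it's worth, the Derived Column expression for that pattern looks like this (SSIS expression language; MyColumn is a placeholder name, with the derived column configured to replace the existing column):

ISNULL(MyColumn) ? " " : MyColumn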

Maybe there is a better way. Suggestion and comment appreciated.

Ryan


View 1 Replies View Related

Help With Using The Right Approach

May 18, 2006

Greetings my friends

I am attempting to solve the following problem using SSIS; actually, I am attempting to convert a SQL Server 2000 DTS package into an SSIS package.

The package does the following:

1) Retrieve the maximum Price_ID (PK) from a PRICE dimension table.

2) Populate a staging table with data coming from a source system where the PRICE_ID > (the Price_ID from above).

3) Update the actual DIM table with the new data held in the staging table.

For this task I want to learn the use of the Lookup component, which I think is appropriate.



My questions are as follows:



If I create a global variable to hold the maximum PRICE_ID (see point 1), how do I get to use the variable in my Data Flow data source?
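
One common way to do this (a sketch; the variable, table and column names are hypothetical): set the OLE DB Source's data access mode to "SQL command", use a ? placeholder, and map it to the package variable on the source's Parameters page:

SELECT Price_ID, ProductKey, Price
FROM dbo.SourcePrices
WHERE Price_ID > ?    -- map '?' to User::MaxPriceID in the Parameters dialog

An alternative is to build the entire statement into a string variable with an expression and use the "SQL command from variable" access mode instead.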



I am totally confused... I don't even know where to start with this.



Your help would be appreciated.



Thanks SQL friends.

View 6 Replies View Related

Best Approach

Jun 21, 2006

I've been banging my head for a while now, and it is sore! :-P

I'm a best practice/Microsoft approach type of person and want to make sure I do things correctly.

I have a database, kind of like a forum.



Obviously executing multiple queries in one "batch" (stored proc) would have an impact on the performance.



Now, I would like to give a more detailed/specific error back to the caller (either by aid of an error code or whatever) in situations like...

"EditReply"

Edit reply takes the threadID, replyID and userID.

Before actually committing the changes, it needs to check:

1) does the user exist in the database? (during the editing of the reply, perhaps the user may have been deleted before running the stored proc, who knows)

2) does the thread exist?

3) does the reply exist?



If the conditions are met, only then will it go ahead and update the database. Now that is 3 queries, and 4 statements overall, to make a change to a field/table (see the sketch below).
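
A minimal sketch of that validation pattern (the table, column and error-code choices are hypothetical):

CREATE PROCEDURE dbo.EditReply
    @ThreadID int,
    @ReplyID  int,
    @UserID   int,
    @Body     nvarchar(4000)
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM dbo.Users WHERE UserID = @UserID)
        RETURN 1   -- user no longer exists
    IF NOT EXISTS (SELECT 1 FROM dbo.Threads WHERE ThreadID = @ThreadID)
        RETURN 2   -- thread not found
    IF NOT EXISTS (SELECT 1 FROM dbo.Replies
                   WHERE ReplyID = @ReplyID AND ThreadID = @ThreadID)
        RETURN 3   -- reply not found

    -- all checks passed: commit the change
    UPDATE dbo.Replies SET Body = @Body WHERE ReplyID = @ReplyID
    RETURN 0       -- success
END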



Obviously if one of the checks returns false, in other words if, say, "does the thread exist" returns 0 (the thread doesn't exist), it will return an error code to the caller, which they will handle in their application. That's all fine, but the question is:



Am I doing this correctly? (no) - how can I improve this? What do I need to think about?

Of course I would like to give a more detailed error back to the caller (by aid of an error code designed into the application overall) instead of just "no, database not updated".



In this situation, am I wrong to assume that database designers use this type of approach?



Please help, I value your feedback and suggestions. I want to improve and think of the right lines of doing these things.

View 12 Replies View Related

Analysis :: Power BI Analysis Services Connector - Remote Server Returned Error

Mar 5, 2015

I have an SSAS 2012 tabular instance with SP2. There is a database on the instance with a read role that has Everyone assigned permissions. When configuring the Power BI Analysis Services connector, at the point where you enter Friendly Name, Description and Friendly error message, clicking Next gives the error "The remote server returned an error (403)." I've tested connecting to the database from Excel on a desktop and it connects fine. I don't use an "onmicrosoft" account, so I don't have that problem to deal with.

We use Power BI Pro with our Office 365. As far as I can tell that part is working OK, as I pass that stage of the configuration with a message saying it connected to Power BI. The connector is installed on the same server as tabular services; it's a Win2012 Standard server. The tabular instance runs under a domain account that is the admin account for the instance (this is a dev environment), and that account is what I've used in the connector configuration. It's also a local admin account. There is no gateway installed on the server.

View 10 Replies View Related

Analysis :: Cube Needs To Be Deployed From VS After SSIS Analysis Services Processing Task Completes?

May 13, 2014

I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS.  In order to improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task.  The issue I am having is that after the task completes, I need to go into Visual Studio to deploy the cube, because we are unable to browse or use it.  This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions, rather than an incremental one.

I would expect that once development is done, and it is processed and deployed, that is it.  My thinking is that the SSIS task should just update the already deployed cube.

View 2 Replies View Related

Analysis :: How To Choose The Right Key Column In Mining Structure For Microsoft Analysis Services

Jun 12, 2015

How do I choose the right key column in a "Mining Structure" for Microsoft Analysis Services?

I have the table "Incoming goods":

CREATE TABLE Income (
    ID int NOT NULL IDENTITY(1, 1),        -- a comma was missing here
    [Date] datetime NOT NULL,
    GoodID int NOT NULL,
    PriceDeliver decimal(18, 2) NOT NULL,
    PriceSale decimal(18, 2) NOT NULL,     -- was misspelled "PriceSalse"
    CONSTRAINT PK_Income PRIMARY KEY CLUSTERED (ID),
    CONSTRAINT FK_IncomeGood FOREIGN KEY (GoodID) REFERENCES dbo.Goods (ID)
)

I'm trying to build a relationship (regression) between PriceSale and PriceDeliver. But I do not know which column is better to choose as the key column: ID or GoodID?

View 2 Replies View Related

Analysis :: Create Analysis Services Project In Visual Studio 2012 Data Tools?

Feb 18, 2013

Is it possible to create an Analysis Services project (*.dwproj) in Visual Studio 2012 Data Tools?

View 5 Replies View Related

Analysis :: Excel 2013 Pivot-table / Grouping On Date That Comes From Analysis Service 2008?

Nov 24, 2015

I am using

SSAS: version 2008
Excel: version 2013

I am connecting to the SSAS cube from Excel, and I have a date dimension with 4 fields (I have others but I don't use them for this case). I created 4 fields in order to test all possible scenarios that I could think of:

DateKey:
    - Type: System.Integer
    - Value: yyyyMMdd
Date:
    - Type: System.DateTime
DateStr0:
    - Type: System.String
    - Value: dd/MM/yyyy (note: I am not using US culture)
    - Example: 01/11/2015  
DateStr1:
    - Type: System.String
    - Value: %d/%M/yyyy (note: I am not using US culture) 
    - Example: 1/11/2015  

Filtering on date is working fine:

Initially, in Excel, filtering on date was not working. But after changing the dimension type to Time, and setting DataType to Date, as mentioned in [URL], the filter is working fine, as you can see in the picture.

Grouping on date is not working:

I have a hierarchy in my Date dimension and I can group based on the hierarchy, no problem. But the user is used to the pre-built grouping function of Excel, and he wants to use that. Excel's pre-built Group and Ungroup functions appear to be available, as you can see in the following picture:

But when the user clicks 'Group', Excel groups it as if it is a string, and that is the problem. The user wants to group using the pre-built grouping function available in the Pivot table. I also found out that Power Pivot tables do not support this Excel grouping functionality. And if I understood well, this pre-built grouping functionality of Excel needs to do calculation at run time, which is not a viable solution if you have millions of rows. So Power Pivot tables do not support the pre-built grouping functionality of Excel, and hence we need to use a dimension hierarchy to do the grouping.

But I am not using a Power Pivot table, I am using a simple Pivot table, so I expect the grouping functionality to work fine. I then tried a simple test: I created a simple data source in Excel itself and used it as the source of my Pivot table. With that, grouping works fine. The only difference that I can see (when double-clicking the measure value in Excel) is that for the date values of my simple test, Excel considers them as 'Date'.

For date values of my data coming from the cube, Excel considers them as 'General'.

    2.1. But the value here is the same as it was in the simple test.

    2.2. 'Date Filter' works just fine.

    2.3. If I just select this cell and unselect it, then Excel changes the type to 'Date', though only for that cell.

    2.4. I have created 4 different types of fields in my date dimension, thinking that the attribute values of my dimension might be the problem, but Excel considers all of them 'General'.

    2.5. This value (that can be seen when double-clicking on the measure) comes from the 'Name Column' of the attribute, and the DataType defined is WChar. I thought that might be the reason for the issue, so I tried changing it to 'Date'. But SSAS does not allow changing it to 'Date', giving the error: The 'Date' data type is not allowed for the 'NameColumn' property; 'WChar' should be used.

So, I don't know what puzzle piece I am missing:

1. Date filter works, group does not work.

2. Excel considers it a 'General' string.

3. SSAS does not allow changing 'NameColumn' to Date.

View 2 Replies View Related

Analysis :: Running Analysis Service In Terms Of Port Usage?

May 30, 2015

I would like to know the best practice for running Analysis Services in terms of port usage. Is it better to run on a specific port or to have dynamic ports? We have clustered servers that run on the default 2383, but I'm not sure what the best way to get performance is with non-clustered servers.

View 2 Replies View Related

SqlBulkInsert ?? Or Better Approach?

Aug 7, 2006

Hello,

I'm doing my best here, but need some help. I have a client that has a company list that they want searched by keyword. This is exported from another program (in Excel), and they want it used and searched on their website.

The bad news is that each keyword is listed with the company separately. So if a company has 5 different keywords, it will be listed in the Excel file 5 times.

The info I have is Name, Address, City, State, Phone, Keywords:

So an example of the Excel file is:

Company A, 123 Main St, Mycity, Mystate, 123-123-1234, Green
Company A, 123 Main St, Mycity, Mystate, 123-123-1234, Furry
Company A, 123 Main St, Mycity, Mystate, 123-123-1234, Large
Company A, 123 Main St, Mycity, Mystate, 123-123-1234, Circular
Company B, 746 Sparrow Ave, Diffcity, Diffstate, 987-987-9876, Blue
Company B, 746 Sparrow Ave, Diffcity, Diffstate, 987-987-9876, Furry
Company B, 746 Sparrow Ave, Diffcity, Diffstate, 987-987-9876, Small

I am able to import this large (4.2 MB) file into a table called fctable.

What I'm trying to do is write SQL scripts or queries that can insert into a Company table and a Keyword table.

I'm trying to write this through ASP.NET 2.0 (so the Excel file is uploaded) and have tried to write my inserts like:

INSERT INTO [Company] ([Name], [address], [City], [State], [Phone]) VALUES (SELECT DISTINCT
Name, Address, City, State, Phone
FROM fctable )

But that doesn't seem to be working.
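
For reference, the VALUES wrapper around a SELECT is not valid T-SQL; a working form of that insert simply drops it (keeping the poster's table and column names):

INSERT INTO [Company] ([Name], [address], [City], [State], [Phone])
SELECT DISTINCT Name, Address, City, State, Phone
FROM fctable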

This is the only way my client can get the info to me, and it will be changed probably twice per month, so I'd hate to have to manipulate an Excel file 24 times a year to import it.

Any suggestions appreciated.

View 2 Replies View Related

SQL OPENXML: Best Approach

Feb 27, 2008

Hello everyone.
I am new to .NET and here is what I have to do.
I need to update a SQL table with data coming from an XML file. I have seen some Microsoft documentation on this (the nice SQL statement that updates and inserts in the same stored procedure) but I don't know what the best approach is for passing my XML file to the stored procedure. The XML contains about 12,000 records, kind of phonebook info (name, email, phone).
What would be the best approach to do this? What objects should I use?
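
A minimal sketch of the OPENXML side of this (element and column names are hypothetical; the client would pass the file contents as the parameter, e.g. via a SqlCommand parameter):

CREATE PROCEDURE dbo.UpsertPhonebook
    @xmlData nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @doc int
    EXEC sp_xml_preparedocument @doc OUTPUT, @xmlData

    -- update existing entries
    UPDATE p
    SET p.Email = x.Email, p.Phone = x.Phone
    FROM dbo.Phonebook p
    JOIN OPENXML(@doc, '/people/person', 2)
         WITH (Name nvarchar(100), Email nvarchar(256), Phone varchar(30)) x
      ON x.Name = p.Name

    -- insert new entries
    INSERT INTO dbo.Phonebook (Name, Email, Phone)
    SELECT x.Name, x.Email, x.Phone
    FROM OPENXML(@doc, '/people/person', 2)
         WITH (Name nvarchar(100), Email nvarchar(256), Phone varchar(30)) x
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Phonebook p WHERE p.Name = x.Name)

    EXEC sp_xml_removedocument @doc
END

On SQL Server 2005 and later, the xml data type with nodes()/value() is the usual alternative to OPENXML.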
 Thanks a million,
Ben
 

View 2 Replies View Related

MS SQL Server 7.0 And Approach

Oct 18, 1999

Environment
NT Server 4.0 w/ SP4
SQL Server 7.0 w/ SP1
Win98 Client w/ Lotus Approach 9.5

I recently added SQL 7.0 to be a back end for my Approach front end. I transferred all the data from a dBase IV file in Approach to SQL. Most of the conversions worked OK. I have two big problems.

1) One particular repeat panel in Approach loses the child records of the master record. If I delete some of the records, more will appear. It's as if there is an imaginary limit on the number of records it can read in the repeat panel. I don't have this problem with any other records and children in repeat panels. I called Lotus and they don't have an answer. This is important because the child records need to be summed up so I can have a running total.

2) I originally configured the clients to use the TCP/IP Netlib with the default port. I couldn't open enough databases, so I changed to Multiprotocol. This allowed certain clients to open more databases, but others can't open additional databases. Also, after the change, NT authentication logins have had problems. I had to change to SQL logins to get all my clients back online. Sometimes the same client can't open more than 10 databases, while other times it will open 15. There is no consistent pattern to when it can and can't open the additional databases.

If anyone knows how to fix either of these problems, I would greatly appreciate any advice. I'm getting tired of my boss yelling at me.

Thanks.

Keith
akumaboyz@aol.com

View 1 Replies View Related

Not Sure How To Approach This, If /then Select?

May 12, 2008

I have a select statement where I need to test two values that are returned and perform a different calculation if either returns null.

Basically,
if TESTA is null and TESTB is not null
    return TESTB

if TESTB is null and TESTA is not null
    return TESTA

ELSE if both are not null
    return (TestA + TestB) / 2

Is this possible to do in a select statement? Thanks in advance.

See select statement below:

Select S.StudentDimKey,
(select R.Score from CLT_StudentAssessmentRawFact R, CLT_LessonPlanDim L where L.LessonPlanKey = 1 and L.LessonPlanDimKey = R.LessonPlanDimKey and StudentDimKey = S.StudentDimKey) as TESTA,
(select R.Score from CLT_StudentAssessmentRawFact R, CLT_LessonPlanDim L where L.LessonPlanKey = 2 and L.LessonPlanDimKey = R.LessonPlanDimKey and StudentDimKey = S.StudentDimKey) as TESTB
from CLT_StudentPlacementFact P, CLT_StudentDim S, CLT_ClassHierarchyDim H, CLT_StudentClassFact C
where P.StudentDimKey = S.StudentDimKey
and H.ClassHierarchyDimKey = C.ClassHierarchyDimKey
and S.StudentDimKey = C.StudentDimKey
and S.StatusCode = 'A'
and S.CurrentRecord = 1
and S.SchoolID = 87577
View 2 Replies View Related

Can A Set Based Approach Help Me Here...

Feb 1, 2007

I'm practicing set-based approaches... and what I'm trying to do is grab each value from a table, scramble it and put it back in the table. I don't want the solution to this, as I'd rather figure it out myself for practice...

The thing I'm stuck on is that I can do this with a cursor, but I want to avoid cursors in future. How would I use a set-based approach to get each value of a table and work with it?

View 4 Replies View Related

Set Approach Instead Of Cursor

Jul 20, 2005

Hi,

I am trying a set approach instead of using a cursor (which works). I am attaching the SQL to create the tables and my procedure, plus a piece of code to execute the procedure. I would like the procedure ReplaceTags to work with 'a' the same as with 'C'.

Thanks in advance.
Hareesh

/* Create Tables */
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE ID = OBJECT_ID(N'GlobalTags') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
    DROP TABLE GlobalTags
GO

CREATE TABLE GlobalTags
(
    Project VARCHAR(50) NULL,
    TagName VARCHAR(50) NULL,
    [Value] VARCHAR(50) NULL
)
GO

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE ID = OBJECT_ID(N'ProductDetails') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
    DROP TABLE ProductDetails
GO

CREATE TABLE ProductDetails
(
    Project VARCHAR(50) NULL,
    KeyName VARCHAR(50) NULL,
    [Value] VARCHAR(50) NULL
)
GO

/* Populate Tables */
TRUNCATE TABLE GlobalTags
INSERT INTO GlobalTags (Project, TagName, Value) VALUES ('ProjectName', 'FirstName', 'John')
INSERT INTO GlobalTags (Project, TagName, Value) VALUES ('ProjectName', 'LastName', 'Doe')
INSERT INTO GlobalTags (Project, TagName, Value) VALUES ('ProjectName', 'PhoneNo', '5248')
INSERT INTO GlobalTags (Project, TagName, Value) VALUES ('ProjectName', 'ZIPCode', '55555')

TRUNCATE TABLE ProductDetails
INSERT INTO ProductDetails (Project, KeyName, Value) VALUES ('ProjectName', 'FirstName', '%FirstName%')
INSERT INTO ProductDetails (Project, KeyName, Value) VALUES ('ProjectName', 'LastName', '%LastName%')
INSERT INTO ProductDetails (Project, KeyName, Value) VALUES ('ProjectName', 'PhoneNo', '%PhoneNo%')
INSERT INTO ProductDetails (Project, KeyName, Value) VALUES ('ProjectName', 'ZIPCode', '%ZIPCode%')

/* Procedure */
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'ReplaceTags')
    DROP PROCEDURE ReplaceTags
GO

CREATE PROCEDURE ReplaceTags
(
    @aProjectName VARCHAR(50),
    @aProcessType CHAR(1)
)
AS
BEGIN
    DECLARE @TagName VARCHAR(50)
    DECLARE @Value VARCHAR(50)

    IF @aProcessType = 'C'
    BEGIN
        DECLARE REPLACE_CURSOR CURSOR FAST_FORWARD READ_ONLY FOR
            SELECT TagName, Value FROM GlobalTags
            WHERE Project = @aProjectName

        OPEN REPLACE_CURSOR
        FETCH NEXT FROM REPLACE_CURSOR INTO @TagName, @Value
        WHILE (@@FETCH_STATUS = 0)
        BEGIN
            UPDATE ProductDetails
            SET Value =
                CASE WHEN CHARINDEX('%' + @TagName + '%', Value, 1) > 0
                     THEN REPLACE(Value, '%' + @TagName + '%', @Value)
                     ELSE Value
                END
            WHERE Project = @aProjectName

            FETCH NEXT FROM REPLACE_CURSOR INTO @TagName, @Value
        END
        CLOSE REPLACE_CURSOR
        DEALLOCATE REPLACE_CURSOR
    END
    ELSE
    BEGIN
        UPDATE ProductDetails
        SET Value =
            CASE WHEN CHARINDEX('%' + GlobalTags.TagName + '%', ProductDetails.Value, 1) > 0
                 THEN REPLACE(ProductDetails.Value, '%' + GlobalTags.TagName + '%', GlobalTags.Value)
                 ELSE ProductDetails.Value
            END
        FROM ProductDetails INNER JOIN GlobalTags
            ON (ProductDetails.Project = GlobalTags.Project)
        WHERE ProductDetails.Project = @aProjectName
    END
END

/* Run Procedure */
EXECUTE ReplaceTags 'ProjectName', 'a'
-- EXECUTE ReplaceTags 'ProjectName', 'C'
SELECT * FROM GlobalTags
SELECT * FROM ProductDetails

View 6 Replies View Related

Should I Use A Trigger Or A Different Approach?

Apr 25, 2008



Let's say I have a dynamic table of 10 to 30 varchar(255) columns whose names I do not know.



Each of these columns is updated with either an 'OK' or an error msg.



Here's the issue. I have a Status column (varchar(MAX)) which I want to be the concatenated sum of all the other columns. I'd like to do this via a trigger because, the way I understand it, there's no way a trigger can skip over another update/trigger and write the wrong 'Status' value.



Example Row 1:

RandomColName1 = "1||Access Denied"

RandomColName2 = "0||OK"

RandomColName3 = "2||ID10T Error at Kbrd"

Status = "[RandomColName1] Access Denied | [RandomColName3] = "ID10T Error at Kbrd"



Example Row 2:

RandomColName1 = "0||OK"

RandomColName2 = "0||OK"

RandomColName3 = "0||OK"

Status = "0||OK"



The rules:

1. There is no chance of someone updating the same column at the same time.

2. There's a good chance that other columns in the same row will be updated at the same time.

3. Sometimes a column can be updated every 5 secs.

4. About 100 to 5000 Rows



I'm a SQL newb and this would be my first trigger to write (lol), but I feel that if it could be done without causing a DEADLOCK then it should be done this way. Unless someone with more experience can shed some light.
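
A minimal sketch of such a trigger, assuming for illustration just three fixed columns and a key column named RowID (with the real unknown column names, the body would have to be generated dynamically):

CREATE TRIGGER trg_BuildStatus ON dbo.StatusRows
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE r
    SET Status =
        CASE
            WHEN i.Col1 LIKE '0||%' AND i.Col2 LIKE '0||%' AND i.Col3 LIKE '0||%'
                THEN '0||OK'
            ELSE LTRIM(
                  CASE WHEN i.Col1 NOT LIKE '0||%'
                       THEN '[Col1] ' + i.Col1 + ' | ' ELSE '' END
                + CASE WHEN i.Col2 NOT LIKE '0||%'
                       THEN '[Col2] ' + i.Col2 + ' | ' ELSE '' END
                + CASE WHEN i.Col3 NOT LIKE '0||%'
                       THEN '[Col3] ' + i.Col3 ELSE '' END)
        END
    FROM dbo.StatusRows r
    JOIN inserted i ON i.RowID = r.RowID;
    -- note: the trigger updating its own table does not re-fire itself
    -- unless the database option RECURSIVE_TRIGGERS is ON (it is OFF by default)
END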



With up to 300 .NET clients over fast and slow links, I don't want to transfer the whole 5000-row / 50-column table back to the app and have it weed things out, especially if the client only needs 100 of the rows. By the same token, I don't want to make individual calls back to the SQL server should the client need 4000 rows.
Thanks for the help!

View 9 Replies View Related

Mining Approach?

Jul 4, 2007

I have a scenario in mind and was wondering if anyone had any suggestions on an approach.



Let's say I have a dataset where I have captured various attributes about blade servers in a rack. I have internal temperature, fan speed, disk reads, CPU temperature and failure events. Each of these is a continuous variable except for the failure event, which has only two states, 'true' or 'false', indicating whether a failure occurred at that point in time.



The table looks like this:

MachineID | Timestamp | Temperature | FanSpeed | Reads | CPUTemp | Failure



At the time of failure, only the MachineID, Timestamp and Failure values will be populated in the table. The failures at this time are not categorized or discriminated from each other in any way, although they will be in the future. We're looking to use a mining algorithm to determine which variable, or combination of variables, is the best predictor of failure. What do you think is the right approach? How might this approach change once the failures do become categorized and differentiated?



Thanks,

Frank

View 4 Replies View Related

Configuration Approach

Aug 13, 2007

Background


I use four database connection strings.

I have about 30 packages that will use one or more of the connection strings.

I store the connection strings in XML configuration files. I know that I can share configuration files across packages. Should I?

Option 1: Have one configuration file with all 4 connection strings. (If I use this approach, will I get errors in the packages that only use 1 of the connection strings?) OR

Option 2: Create four separate configuration files (CnnString1.dtsconfig, CnnString2.dtsconfig, CnnString3.dtsconfig, CnnString4.dtsconfig) and use the appropriate ones for each package, OR

Option 3: Take another approach?

View 5 Replies View Related

Question On An ETL Approach

May 23, 2008

Let me use a simplied example for what I'm trying to do.

Say, I have the following source tables.

T_EmpDept

Emp_Name, Dept_Name
John , IT
Mary , IT
Ted , HR

T_Employee

EmpID, Emp_Name
1 , John
2 , Mary
3 , Ted

T_Dept

DeptID, Dept_Name
1 , IT
2 , HR

Now, I want to populate a target table that looks like:

T_Target

EmpID, DeptID
1 , 1
2 , 1
3 , 2

So, it's basically a lookup by name and translated to the appropriate keys.
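
For reference, the set-based SQL equivalent of the two lookups (useful as a sanity check on the SSIS output):

INSERT INTO T_Target (EmpID, DeptID)
SELECT e.EmpID, d.DeptID
FROM T_EmpDept ed
JOIN T_Employee e ON e.Emp_Name = ed.Emp_Name
JOIN T_Dept d ON d.Dept_Name = ed.Dept_Name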

The way I've done this before is one lookup at a time in serial (one data stream), getting the keys I need. But I want to do them in parallel (split the data stream in two), which I believe involves a Multicast task and some sort of Merge task. I used to shy away from the parallel lookup because I never really understood how to bring the two data streams back together.

So, the question I have is this: are Multicast and Merge the right tasks to use for this? I don't want to go researching something that may not be appropriate.

Lastly, if someone has a link that lays this out for me, I would appreciate it.




View 7 Replies View Related

Best Approach For A Solution

Oct 9, 2007



I don't have much background with SSIS or SQL Server or development for that matter. I was thrust into this, and I apologize if my questions are not up to snuff in any way.

I have built an SSIS package based on the import/export wizard and have customized it by adding a For Each Loop container for processing multiple source data files. It works great so far, but I need to add some advanced functionality to make it more flexible. I'm having some problems finding the best way to enhance the solution, and I'm getting frustrated with variables and expressions, which is what I think I need to use. Hopefully what I need is something that's doable.

The package processes multiple source files using the For Each Loop. The data is stored in a destination table that is dropped and recreated each time. I've also added a couple of CREATE INDEX statements to the source SQL.

I need to make the SQL DROP, CREATE TABLE and CREATE INDEX statements dynamic based on part of the filename string. I'm capturing the filename in the For Each container using Variable Mappings. What I need to do is parse out part of the string from the filename and use it in the SQL statements. An example would be CREATE TABLE XXX_[stringvar], where [stringvar] comes from the variable mapping done in the For Each loop.

My thinking was to set up a new variable within the scope of the SSIS package, enable Evaluate as Expression, and then build an expression for the variable where I parse out the string from the filename variable that already exists. So if the filename was filename.filetype and I needed the last 4 characters from the name, the string value would be whatever characters were in 'name' each time. But there isn't an expression builder editor built into the variable properties expression element, so I'm not clear that I can actually create a variable in SSIS and then have its value be set using an expression. It seems to me that if this were doable, then the expression builder I've seen elsewhere in SSIS would be visible.

Then I was hoping I could edit the SQL behind the DROP and CREATE statements, where I've hard-coded the table name, and insert the variable right in the SQL. My SQL isn't that strong and I don't know how to work with variables, but I was thinking the SQL would look like CREATE TABLE XXX_@[stringvar]... I believe another approach would be to set up another string variable for the entire SQL statement and build the statement string dynamically. But I don't know if this is necessary or better.
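
A sketch of that second approach (all variable names here are hypothetical): give a String variable, say User::CreateSql, the property EvaluateAsExpression = True, set its Expression to splice the parsed filename part into the statement, and point an Execute SQL Task at the variable:

"CREATE TABLE XXX_" + RIGHT(@[User::BaseName], 4) + " (Col1 int, Col2 varchar(50))"

User::BaseName can itself be an expression variable that parses the loop's filename variable, e.g.:

SUBSTRING(@[User::FileName], 1, FINDSTRING(@[User::FileName], ".", 1) - 1)

Variables do support this: EvaluateAsExpression and Expression are set in the variable's Properties window (F4), even though no expression-builder button appears there, which addresses the doubt above.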

Anyway, I hope someone can help and that my original post is clear enough.

Thanks

View 4 Replies View Related

What's The Best Way To Approach This Situation?

Jul 12, 2007

Hello everyone,



I'm creating a database for a new application and I'm currently facing a design problem regarding a business requirement for the membership module.



The Membership module of the application has several business requirements, specified by the client. One of them is the ability to add and remove details about their members. So far, I've created a schema, named Person, which will contain a number of tables responsible for everything related to membership for an individual.



To help you guys understand the design I'm trying to implement, I'll post the fields of two of the tables that belong to the Person schema, as follows:



Person.Base

Id (uniqueidentifier)
UniqueIdTypeId (uniqueidentifier)
UniqueId (uniqueidentifier)
Password (char(88))
PasswordSalt (char(10))
PasswordRecoveryQuestion (nvarchar(256)) [NULLABLE]
PasswordRecoveryAnswer (char(88)) [NULLABLE]
CreationDate (datetime)
AuthenticationWindowStartDate (datetime)
AuthenticationWindowAttemptCount (tinyint)
IsActive (bit)
IsBanned (bit)
IsLocked (bit)
Status (bit)



Person.Emails

PersonId (uniqueidentifier)
EmailAddress (nvarchar(256))
CreationDate (datetime)
ValidationCode (char(10))
ValidationDate (datetime) [NULLABLE]
IsValid (bit)
IsPrimary (bit)
IsRollbackTarget (bit)
Status (bit)



So far so good. This design works great for preserving data integrity. Nonetheless, this is where the problems start. Now, imagine you need to let someone from that company add an item to the user (through the application). Let's say we want to allow the company application manager to add an item to the person called "PreviousEmployer". Such an item would then be used for statistics, and thus would probably need to be indexed.



In order to meet this business requirement I would create some additional tables. Let's get started:



Person.CustomFields

Id (uniqueidentifier)
Name (nvarchar(50))
Description (nvarchar(3000)) [NULLABLE]
Status (bit)

Person.CustomField_Value

FieldId (uniqueidentifier)
Value (nvarchar(450))

This could work just fine if both are indexed (that's why the nvarchar size is set to 450). But I'm guessing this is far from the optimal solution, for many reasons, one of them being the efficiency of the index if the company decides to use this for a flag (true or false [bit]). Another solution-breaking scenario would be if the company wants to add the CV of the person, in which case we would be unable to store a file in this data field.
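
One common variation worth weighing (a sketch only; the PersonId column is an assumption, added to tie a value to its person): split the single Value column into a few typed, individually indexable columns, so flags and numbers are not searched as strings and binary content has somewhere to live:

CREATE TABLE Person.CustomField_Value
(
    FieldId     uniqueidentifier NOT NULL,
    PersonId    uniqueidentifier NOT NULL,
    ValueString nvarchar(450)    NULL,  -- indexed for keyword searches
    ValueNumber decimal(18, 4)   NULL,
    ValueBit    bit              NULL,  -- cheap flag lookups
    ValueBlob   varbinary(max)   NULL   -- e.g. an uploaded CV
)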



How would you guys approach this issue? The bottom line is that the client needs to be able to add pretty much any type of custom field and perform searches against it. So, besides being a dynamic solution, it needs to be efficient.



Best regards and thanks in advance.

View 11 Replies View Related

Need Advice On Approach

Mar 8, 2007

I am new to DTS/SSIS and would like some input on an approach to solve a problem.

I have a solution using SQL Server 2005 and another legacy solution running on Access. Status records are written to a table in SQL Server, and as they are written, I need to write a record to a table in Access. This needs to be as real-time as possible.

I thought about writing a managed stored procedure in C# so I could also access MS Access, but someone pointed me to DTS.

Records are constantly written to the status table and need to remain in there even after they are processed.

Can you recommend an approach or an article to read?

Thanks,


Steve

View 1 Replies View Related

Alternative For DTC

Jul 30, 2007

Hi,
I am using DTC in my code to connect to two different servers on the network through a SQL query, which is unfortunately very slow; can you please guide me to an alternative for this?

Thanks

View 17 Replies View Related

Alternative Way

Jul 20, 2005

SELECT *
FROM organization
WHERE (departmentID = divisionID) AND (divisionID = branchID) AND (branchID = sectionID) AND (sectionID = unitID)

Is there any way I can make this query more simplified without repeating the same column in the where clause?

thanks
/RC

View 3 Replies View Related






