How To Partition/Split The Dataset Into Training And Validation When Running A Decision Tree Model?

Jun 15, 2007



Can I ask how to split the dataset into training and validation when running a decision tree model?
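A rough sketch of one common approach (not from the thread; the table and column names dbo.Cases / CaseID are my own placeholders): flag roughly 70% of the rows as training, train the decision tree only on those, and keep the rest as the validation set.

ALTER TABLE dbo.Cases ADD IsTraining bit NULL;

-- tag ~70% of the rows at random as training cases
UPDATE dbo.Cases
SET IsTraining = CASE WHEN ABS(CHECKSUM(NEWID())) % 100 < 70 THEN 1 ELSE 0 END;

-- training set (feed this to the mining model)
SELECT * FROM dbo.Cases WHERE IsTraining = 1;

-- validation set (score it afterwards with a prediction join)
SELECT * FROM dbo.Cases WHERE IsTraining = 0;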

View 3 Replies



Can I Ask How To Use 'weight' Variable In Decision Tree And How To Use Cross Validation In The Data Mining Procedure?

Jun 15, 2007

Also, when using cross validation, which decision tree should we choose?



Thanks very much
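A sketch of one way to prepare folds for cross validation yourself (all names below are placeholders, not from the thread): assign every case to one of, say, 5 folds at random, train 5 models each time holding one fold out, and compare the accuracy figures. The fold models are usually only used to judge stability; the final model is typically retrained on all of the data rather than picking one of the fold models.

-- one random fold number per case
SELECT CaseID,
       NTILE(5) OVER (ORDER BY NEWID()) AS FoldNo
INTO dbo.CaseFolds
FROM dbo.Cases;

-- training set for the run that holds fold 1 out (repeat for folds 2..5)
SELECT c.*
FROM dbo.Cases c
JOIN dbo.CaseFolds f ON f.CaseID = c.CaseID
WHERE f.FoldNo <> 1;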

View 5 Replies View Related

How To Split The Data Into Training And Validation Sets When Doing Data Mining?

Jun 15, 2007

Could I ask how to split the data into training and validation sets when doing data mining?
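One sketch, with placeholder names (dbo.Cases, integer key CaseID): a deterministic split keyed on the case id, so the same row always lands in the same set every time the queries run.

-- ~70% training set
SELECT * FROM dbo.Cases WHERE ABS(CHECKSUM(CaseID)) % 100 < 70;

-- ~30% validation set
SELECT * FROM dbo.Cases WHERE ABS(CHECKSUM(CaseID)) % 100 >= 70;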



Thanks

View 1 Replies View Related

Possible To Save Up The Progress At Some Point Of Decision Tree Training?

Aug 24, 2007

Dear All,


If I have a decision tree training job that might last for many days or months, is it possible to tell the data mining training program to save its progress at some point? In case the computer hangs or the power fails in the middle, it could then resume the rest of the work from the save point.

Thanks

Tony Chun Tung Siu

View 1 Replies View Related

How To Get The Training Error From The Model?

May 9, 2007

Hi, everyone here.

I am trying to get the training error of the processed model, which can reveal how well the model fits the cases. The training error shows how many cases (from the training set) are classified correctly. The lower the training error is, the better the model fits the training set (maybe overfitted). But I found it hard to get. I saw the lift chart in AS 2005, which I do not quite understand and don't know how to code in my program.



Is there some way of getting the training error or prediction error?



I am now using this awful way to get the training error:



select t.*,CollegeTree.CollegePlans as pred

from collegetree
prediction join
openquery(DSource,'select * from CollegePlans') as t
on CollegeTree.StudentID = t.StudentID and
...
where t.CollegePlans = CollegeTree.CollegePlans;



and then use datareader.ItemCount to get the count of cases that are classified correctly.
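A possible alternative (a sketch only, assuming a linked server named AS_LINK pointing at the Analysis Services database; the linked server name is my own): push the prediction join through OPENQUERY and let T-SQL count the matches, so the training error comes back as a single row.

-- (inside the DMX text below, keep the full ON column mapping from the original query)
SELECT COUNT(*)                                            AS TotalCases,
       SUM(CASE WHEN Actual = Predicted THEN 1 ELSE 0 END) AS CorrectCases,
       1.0 - SUM(CASE WHEN Actual = Predicted THEN 1 ELSE 0 END) * 1.0 / COUNT(*) AS TrainingError
FROM OPENQUERY(AS_LINK, '
    SELECT t.CollegePlans AS Actual, CollegeTree.CollegePlans AS Predicted
    FROM CollegeTree
    PREDICTION JOIN
    OPENQUERY(DSource, ''SELECT * FROM CollegePlans'') AS t
    ON CollegeTree.StudentID = t.StudentID') AS p;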



Keywords: training error, prediction error, data mining, Analysis Services

View 5 Replies View Related

How To Split A Transaction Table To Create Training And Testing Set For AR

Sep 30, 2006

Hi all,

I have a transaction table that has a composite key made up of transaction id and product id. Where multiple products were purchased under the same transaction, the transaction id is repeated.

I would like to split the table randomly into a 70/30 ratio to create training and testing sets respectively, in such a way that it does not split a transaction under which multiple products were purchased (rows with the same transaction id should not get separated).


is it possible? if possible what is the idea?
It would be of great help.
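It is possible; one sketch (assuming the table is dbo.Trans with columns TransactionID and ProductID, both placeholder names): draw the random 70/30 split at the transaction level first, then join the tag back to the detail rows so a transaction never straddles the two sets.

-- one random training/testing tag per distinct transaction
SELECT TransactionID,
       CASE WHEN ABS(CHECKSUM(NEWID())) % 100 < 70 THEN 1 ELSE 0 END AS IsTraining
INTO dbo.TransSplit
FROM (SELECT DISTINCT TransactionID FROM dbo.Trans) AS d;

-- training set: every row of every transaction tagged as training
SELECT t.*
FROM dbo.Trans t
JOIN dbo.TransSplit s ON s.TransactionID = t.TransactionID
WHERE s.IsTraining = 1;

-- testing set
SELECT t.*
FROM dbo.Trans t
JOIN dbo.TransSplit s ON s.TransactionID = t.TransactionID
WHERE s.IsTraining = 0;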


Thanks.


Fakhrul

View 9 Replies View Related

Error When Training Mining Model. Column Name Seems Can't Be Recognized

Jul 11, 2007



Dear All, I have a simple mining structure created by the DMX statement below. Then I tried to insert some data, using MDX to extract the data from OLAP, but I got the following error when I executed the insert statement.



Errors in the high-level relational engine. The 'Customer ID' column in the RELATES clause was not found in the results of the OPENROWSET query.


It seems that the append statement can't really recognize the name of the column which should be Customer ID.



How can I fix this problem?



Thanks

Tony Chun Tung Siu



The source code for create and insert is as below.



create mining model customerMiner
(
customerID long key ,
age long continuous,
orders table
(
orderID long key,
goodsID long discrete predict_only
)
)using [Microsoft_decision_trees] with drillthrough;

insert into mining model customerMiner
(
customerID,
age,
orders
(
skip,
orderID,
goodsID
)
)
shape
{
openQuery([Simple SSAS],
'
select {[Measures].[Customer ID], [Measures].[Age]} on columns,
{[Customer].[Customer].[Customer].members} on rows
from fi
')
}
append
(
{
openQuery([Simple SSAS],
'
select {[Measures].[Customer ID],[Measures].[Order ID2],[Measures].[Goods ID]} on columns,
[Goods].[Order ID2].[Order ID2].members on rows
from fi
')
}
relate [Customer ID] to [Customer ID]
)as orders

View 1 Replies View Related

Question On Large Volume Of Training Dataset

May 10, 2007

Hi, all experts here,

Thanks a lot for your kind attention.

I have a question on training a large volume of data. In this case, the training will take a long while to complete; is there anything we can do to improve that? I know we obviously can't split the training dataset into smaller datasets. What can we do to improve it?

Hope my question is clear for your help.

Thank you very much in advance for your advices and help and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

View 3 Replies View Related

Fuzzy Lookup And Grouping Training Dataset?

Mar 2, 2008


Hi All,

Is there a way the fuzzy lookup or grouping can be trained so that similarities and confidence values rely on previously matched strong links?

For example: I can link 80% of my two datasets using one strong identifier (say phone #) which I trust. My goal then, is to use the probability of matching of the rest of my linking fields (say Name,Address,Gender,DOB) in a "matched by phone number" pair to train a fuzzy lookup task to be done on the unlinked 20% of the datasets.

This "training set" would in theory influence the similarity and confidence values of the fuzzy output since each linking column would carry a different weight or contribution towards a confident match.

Does anyone out there know how to do this in practice in SSIS?

View 1 Replies View Related

Help With Celko's Tree Model

Jul 20, 2005

hello, I have implemented Joe Celko's model to store a hierarchy and it works well and nice. I have a question about SQL. This is the actual table:

member     side   left   right
------------------------------
nancy      L      1      36
andrew     L      4      21
steven     R      5      12
ina        L      6      7
david      R      10     11
margaret   L      13     20
ann        R      14     15
laura      L      18     19
janet      R      24     35
michael    L      25     30
dan        R      26     27
ron        L      28     29
robert     R      33     34

The Side column tells whether a node is the left or the right child; this is a binary hierarchy. I have this problem I have to solve, I'm still banging my head. If given the member 'Nancy', I need to find the left-most (Laura) and right-most (Robert). 'Janet' = left-most is ron, right-most is robert. 'Andrew' = left-most is laura, right-most is David. Hope you get my plan. Could you help me with the SQL?
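A sketch of one way to do it on SQL Server 2005 or later (assuming the table is called Tree with columns member, side, [left], [right]; the names are mine, adjust to the real ones): first derive each node's direct parent from the nested-set intervals, then walk down from the starting member taking only the L-side child at every level; the deepest node reached is the left-most. Swapping 'L' for 'R' in the walk gives the right-most.

WITH edges AS (
    -- direct parent of each node = the tightest interval that encloses it
    SELECT c.member AS child, c.side, c.[left] AS childLeft, p.member AS parent
    FROM Tree c
    JOIN Tree p
      ON p.[left] < c.[left] AND p.[right] > c.[right]
    WHERE p.[left] = (SELECT MAX(x.[left]) FROM Tree x
                      WHERE x.[left] < c.[left] AND x.[right] > c.[right])
),
walk AS (
    SELECT member, [left] FROM Tree WHERE member = 'nancy'   -- starting member
    UNION ALL
    SELECT e.child, e.childLeft
    FROM walk w
    JOIN edges e ON e.parent = w.member AND e.side = 'L'     -- use 'R' for the right-most walk
)
SELECT TOP 1 member AS LeftMost
FROM walk
ORDER BY [left] DESC;    -- the deepest node reached on the walk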

View 3 Replies View Related

SPLIT RANGE Partition Error

Feb 8, 2008



Hello All,

I am using SQL 2005 SP2. I have a table partitioned on date range. I am trying to SWITCH, MERGE and SPLIT partitions.
My SWITCH and MERGE work great. When the SPLIT query is executed, an error 9002 is thrown....

"The transaction log for database is full. To find out why space in log cannot be resued, see log_reuse_wait_desc column in sys.databases."


Below are more details...

- All SWITCH, MERGE and SPLIT are executed in one TRANSACTION.
- After SWITCH and MERGE, I execute a query to set the partition scheme's NEXT USED filegroup to [PRIMARY].
- Finally I execute the SPLIT statement.

Could you please tell me where am I going wrong?
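One thing worth checking (general guidance, not something confirmed in this thread): splitting a range that already contains rows physically moves data and is fully logged, and running the whole SWITCH/MERGE/SPLIT sequence inside a single long transaction keeps the log from being truncated while it runs. The error text itself points at the column to look at; a quick sketch:

-- see what is currently preventing log truncation (database name is a placeholder)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';

The usual pattern is to split only empty partitions (switch the data out first) and to keep the SPLIT out of the big transaction.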

Any help would be appreciated.

Thanks..................

View 2 Replies View Related

Getting The Model's (Decision Tree) PMML

Feb 19, 2007

Hi

Can anyone tell me the steps involved in retrieving a model's (decision tree) PMML and using the model content to develop a web-based interface? I am using SQL Server 2005.
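As far as I can tell (a sketch, not a full walkthrough; the model name [MyTreeModel] is a placeholder), both the PMML and the raw content of a decision tree model can be pulled with DMX queries against the Analysis Services database (from SSMS, or from ADOMD.NET in the web application):

SELECT * FROM [MyTreeModel].PMML;      -- PMML document, for algorithms that support it

SELECT * FROM [MyTreeModel].CONTENT;   -- node-by-node model content (rules, distributions)

The XML returned can then be parsed to drive the web interface.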

Thanks,

Nathan



View 5 Replies View Related

Questions About Regression Tree Model

Oct 21, 2007

I have two questions about the regression tree of Microsoft Decision Trees algorithm.

1. The mining legend window has a column named Histogram showing a bar for each coefficient. What does this bar mean?
2. Since each node of a regression tree corresponds to a linear regression, how can I find the regression coefficient of each node? I mean the coefficient that tells how good the regression is.

Any tip will be greatly appreciated.

View 1 Replies View Related

SSIS Data Mining Model Training Transform (Nested Tables)

Oct 26, 2006

I can't figure out how to put nested tables into the Data Mining Model Training Transform (SSIS). I can do a simple case table, but how do you get those nested tables with DM Training Transformation? Any ideas? Samples?

Thanks in advance,

-Young K

View 3 Replies View Related

I Receive MSG 7707 When Trying To Split A Partition For The Second Time. Why ?

Nov 21, 2006

Hi

I am trying to implement a sliding window on a table in SQL Server 2005, but I am having some problems.
I have two tables, "Letture" and "LettureStorico". The first one receives data every few seconds, some thousands of rows each day. The second is the historical record and should store all the records up to midnight of two days before; that is, if today is November 21st, LettureStorico stores rows up to November 19th 23:59:59.997.

At some time during the morning of each day I want to run a stored procedure that takes the records older than midnight of two days before from "Letture" and switches them as a partition into "LettureStorico".

Here's what I do:

/*----------------------------------------------------------*/
CREATE PARTITION FUNCTION [partizioneLive](datetime) AS RANGE LEFT FOR VALUES (N'2006-11-15 00:00:00')

CREATE PARTITION FUNCTION [partizioneStorico](datetime) AS RANGE LEFT FOR VALUES (N'2006-11-15 00:00:00')

CREATE PARTITION SCHEME [schemapartizioneLive] AS PARTITION [partizioneLive] ALL TO ([PRIMARY])
ALTER PARTITION SCHEME [schemapartizioneLive] NEXT USED [PRIMARY];/*(1)*/

CREATE PARTITION SCHEME [schemapartizioneStorico] AS PARTITION [partizioneStorico] ALL TO ([PRIMARY])
ALTER PARTITION SCHEME [schemapartizioneStorico] NEXT USED [PRIMARY]; /*(1)*/

CREATE TABLE [dbo].[Letture](
[IdLettura] [bigint] IDENTITY(1,1) NOT NULL,
[IdTag] [int] NOT NULL,
[IdGatewayBox] [int] NOT NULL,
[IsEntrata] [bit] NOT NULL,
[Data] [datetime] NOT NULL,
[IsRettifica] [bit] NOT NULL,
CONSTRAINT [PK_Letture] PRIMARY KEY CLUSTERED
(
[Data], [IdLettura] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON schemaPartizioneLive(data)
) ON schemaPartizioneLive(data)

ALTER TABLE [dbo].[Letture] WITH CHECK ADD CONSTRAINT [CK_Letture] CHECK (([Data]>='20061115 00:00'))

CREATE TABLE [dbo].[LettureStorico](
[IdLettura] [bigint] IDENTITY(1,1) NOT NULL,
[IdTag] [int] NOT NULL,
[IdGatewayBox] [int] NOT NULL,
[IsEntrata] [bit] NOT NULL,
[Data] [datetime] NOT NULL,
[IsRettifica] [bit] NOT NULL,
CONSTRAINT [PK_LettureStorico] PRIMARY KEY CLUSTERED
(
[Data], [IdLettura] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON schemaPartizioneStorico(data)
) ON schemaPartizioneStorico(data)

ALTER TABLE [dbo].[LettureStorico] WITH CHECK ADD CONSTRAINT [CK_LettureStorico] CHECK (([Data]<'20061115 00:00'))

/*----------------------------------------------------------*/

Every morning I run a stored procedure that, after dropping the check constraints (I'll recreate them at the end), does the following:

/*-----------------------------------------------------------*/
SET @NewBoundary = dateadd(dd,-1, @dateOfToday)

--this new partition contains the rows i want to switch
ALTER PARTITION FUNCTION PartizioneLive() SPLIT RANGE (@NewBoundary)
--this new partition is empty
ALTER PARTITION FUNCTION PartizioneStorico() SPLIT RANGE (@NewBoundary)

--this works fine, rows are moved
ALTER TABLE Letture SWITCH PARTITION 2 TO LettureStorico PARTITION 2

--these two merges lead to two tables partitioned in two partitions each
ALTER PARTITION FUNCTION PartizioneLive() MERGE RANGE (@OldBoundaryLive)
ALTER PARTITION FUNCTION PartizioneStorico() MERGE RANGE (@OldBoundaryStorico)

/*------------------------------------------------------------*/

Up to this point, everything works as expected.
Now, when I try to run the same stored procedure "a day later" (@NewBoundary moved on by one day), the "ALTER PARTITION FUNCTION PartizioneLive() SPLIT RANGE (@NewBoundary)" statement raises a 7707 error message:
"Msg 7707, Level 16, State 1, Line 1
The associated partition function 'PartizioneLive' generates more partitions than there are file groups mentioned in the scheme 'schemapartizioneLive'."

How is this possible if I used "ALL TO ([PRIMARY])" and specified which filegroup to use next as in (1)? Why does all this succeed the first time (when I have 3 partitions) but not the second (again I have just three partitions, I checked)?
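One thing worth checking (my guess, not confirmed in the thread): the filegroup marked as NEXT USED is consumed by the first SPLIT, so it has to be marked again before every subsequent SPLIT; otherwise the scheme has no next filegroup and error 7707 is raised. A sketch of the daily procedure:

-- mark a next-used filegroup immediately before each split, every day
ALTER PARTITION SCHEME schemapartizioneLive NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION partizioneLive() SPLIT RANGE (@NewBoundary);

ALTER PARTITION SCHEME schemapartizioneStorico NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION partizioneStorico() SPLIT RANGE (@NewBoundary);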

Someone can help me on this, please ?

Many thankx

Wentu

View 3 Replies View Related

Interpreting The Percentage In Decision-tree Model

Sep 15, 2006

Hi,

I used a decision-tree mining model to describe and predict fraud. The table contains 1039 records with 775 distinct values of A-number (the calling party). I used 9 columns in the model. SQL Server reports that only 3 columns are significant in predicting fraud:

- BPN_is_too_short (called party-number is too short)
- Duration_is_zero
- Invalid_area_code

The key column is A-number, and the predicted column is Is_Fraud, whose only values are 0 and 1. There is no record with NULL (missing value) in the Is_Fraud column.

Mining Legend shows in the first split
[-] 625 cases of fraud
[-] 150 cases of non-fraud
[-] 0 cases of missing

In addition to that, Mining Legend shows
[-] 79.69% of fraud
[-] 19.64% of non-fraud
[-] 0.67% Missing

Now when I compare those values, they don't match.
(A) 625/775 is 80.645%, not 79.69%
(B) 150/775 is 19.355%, not 19.64%
(C) 0 cases of NULL (missing value) should imply 0% of missing, not 0.67% of missing

Furthermore, in one node (with the split on duration_is_zero), there are 541 cases of fraud and 0 cases of non-fraud. This implies the node is a leaf node. However, the Mining Legend shows

[D] 514 cases of fraud, 99.35%

[E] 0 cases of non-fraud, 0.33%

[F] 0 cases of missing, 0.33%


My questions
(1) Why don't the values match in cases A through C?
(2) Why don't the values match even in cases D through F, when we have no subtree at all?

I've searched for an explanation by reading about the mathematical reasoning, entropy and the Gini index, but it does not explain the discrepancies between those values and the percentages in the Mining Legend.

Regards,

Bernaridho

View 3 Replies View Related

Transact SQL :: Split Rows By Day / By Datetime And Partition By Columns

Jul 22, 2015

I am trying to split records into days by the start and end datetime.
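A sketch of one way to do the split itself (the table and column names dbo.Events, Id, StartTime, EndTime are my own; assumes SQL Server 2008 or later for the date type): expand each row into one row per calendar day it touches, clipping the first and last segments to the original start and end.

;WITH Days AS (
    SELECT Id, StartTime, EndTime,
           CAST(CAST(StartTime AS date) AS datetime) AS DayStart
    FROM dbo.Events
    UNION ALL
    SELECT Id, StartTime, EndTime, DATEADD(DAY, 1, DayStart)
    FROM Days
    WHERE DATEADD(DAY, 1, DayStart) < EndTime
)
SELECT Id,
       CASE WHEN StartTime > DayStart THEN StartTime ELSE DayStart END AS SegmentStart,
       CASE WHEN EndTime < DATEADD(DAY, 1, DayStart) THEN EndTime
            ELSE DATEADD(DAY, 1, DayStart) END                         AS SegmentEnd
FROM Days
ORDER BY Id, DayStart
OPTION (MAXRECURSION 0);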

I would send an image and data but because I am new to the forum, I am blocked sending images.

"Body text cannot contain images or links until we are able to verify your account"

How can I forward an image?

View 15 Replies View Related

Data Mining Model Viewer Of Decision Tree Out Of Memory Error

May 18, 2007

We've successfully processed a large decision tree model in SQL Server 2005. When I try to view the tree in the mining model viewer, I get the following error:



TITLE: Microsoft Visual Studio
------------------------------

The tree graph cannot be created because of the following error:

'Exception of type 'System.OutOfMemoryException' was thrown.'.

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%u00ae+Visual+Studio%u00ae+2005&ProdVer=8.0.50727.42&EvtSrc=Microsoft.AnalysisServices.Viewers.SR&EvtID=ErrorCreateGraphFailed&LinkId=20476


The link provides no other documentation on the error.



We're using 64-bit SQL on a Dell workstation running XP-64 with 16GB of memory. From my view of things we aren't close to running out of memory. Since the model processed successfully and the error occurs only when viewing the model, is this a problem with Visual Studio and not necessarily Analysis Services?



Thanks in advance.



Nick

View 4 Replies View Related

Error Not Enough Space For Temporal Database When Processing Decision Tree Model

Sep 4, 2007

Hello,
I have a table (in Access) with about 30 fields and 1,700,000 records.
I had created a mining model in AS2005 with only one key (the autonum column called ID)
and other attributes marked as Input and/or predict.
When processing the model, it finishes (after 15 min.) with error 3183:
"Not enough space in temporal disk"
After some searching, I found that it is closely related to the space assigned to tempdb.
I tried to increase the size of tempdb but it was impossible; moreover, it starts
with 8MB and is auto-grown when needed.

I don't know how to solve this issue, or whether it is a question of memory/disk space management (I have 100GB of free disk space).

I tried the same model changing the key (I assigned StudyID as the key); then, with the same data but only 60,000 StudyIDs, it is OK, so the mining model itself is fine (no nested tables, nothing complex enough to expect a memory error)...

Please, can anyone recommend a possible solution for this issue?
Many Thanks.

View 2 Replies View Related

Split 1 Dataset Into 3 Without Dynamic SQL

May 16, 2008

I have some data in the following format

Timestamp Value DividendRequirement
01/01/2008 100 100
01/01/2008 200 90
02/01/2008 123 100
02/01/2008 436 90
03/01/2008 399 100
03/01/2008 5046 90
03/01/2008 45 130
04/01/2008 100 100
04/01/2008 233 90
04/01/2008 12 130

What is required is to split data of this format into 3 separate datasets:

1. One dataset for DividendRequirement of 100,
i.e. select * from tableName where DividendRequirement = 100

2. One dataset for DividendRequirement > 100
i.e. select * from tableName where DividendRequirement > 100

3. One dataset for DividendRequirement < 100
i.e. select * from tableName where DividendRequirement < 100

I know that I can do it with 3 separate stored procedures using a different operator ('=', '>' and '<') in each one, and that I can combine the 3 stored procedures into 1 using dynamic SQL and pass the operator (or some number that maps to a particular identifier) as a parameter to the stored procedure. What I'm after, though, is a way to avoid dynamic SQL but still keep it as one stored procedure. Possibly some clever use of case statements or something along those lines?
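One option (a sketch; the parameter name @Sign is my own): SIGN(DividendRequirement - 100) is -1, 0 or 1 for the three cases, so a single static query can serve all three datasets without dynamic SQL.

CREATE PROCEDURE dbo.GetDividendSet
    @Sign int    -- -1 for "< 100", 0 for "= 100", 1 for "> 100"
AS
BEGIN
    SELECT *
    FROM tableName
    WHERE SIGN(DividendRequirement - 100) = @Sign;
END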

Any ideas?

Thanks

View 5 Replies View Related

Function Does Not Exist For Decision Tree When Running Tutorial

Feb 8, 2007

When I run the Microsoft tutorial for data mining I get this error when I get to the decision tree part.
I get a similar error for clustering in the same tutorial.
However, The Naive Bayes demo seems fine.
The messages said the project was built and deployed without errors.

Does anyone know how to fix the error:

TITLE: Microsoft Visual Studio
------------------------------

The tree graph cannot be created because of the following error:

'Query (1, 6) The '[System].[Microsoft].[AnalysisServices].[System].[DataMining].[DecisionTrees].[GetTreeScores]' function does not exist.'.

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%u00ae+Visual+Studio%u00ae+2005&ProdVer=8.0.50727.762&EvtSrc=Microsoft.AnalysisServices.Viewers.SR&EvtID=ErrorCreateGraphFailed&LinkId=20476

------------------------------
ADDITIONAL INFORMATION:

Query (1, 6) The '[System].[Microsoft].[AnalysisServices].[System].[DataMining].[DecisionTrees].[GetTreeScores]' function does not exist. (Microsoft OLE DB Provider for Analysis Services 2005)

------------------------------
BUTTONS:

OK
------------------------------

View 4 Replies View Related

Is There Any Way To Train A Portion Of A Training Data Set From A Selected Dataset For Data Mining?

Jun 19, 2007

Hi, all experts here,



I am wondering, is there any way to select only a portion of a dataset to train the mining model? In this case, I mean we don't need to split the dataset in advance; what I want is to be able to select any random portion of a selected dataset to train a mining model. Any advice?
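One sketch (the model, data source, table and column names below are placeholders, not anything from the thread): the DMX training statement can take any query as its source, so a random portion can be selected at training time without materialising a split first.

INSERT INTO MINING MODEL [MyModel]
    ([Case ID], [Some Input], [Some Predict])
OPENQUERY([MyDataSource],
    'SELECT CaseID, SomeInput, SomePredict
     FROM dbo.Cases
     WHERE ABS(CHECKSUM(NEWID())) % 100 < 30');   -- roughly a random 30% of the rows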



I am looking forward to hearing from you and thanks a lot in advance for your advices and help.



With best regards,



Yours sincerely,



View 3 Replies View Related

Best Way To Split A Dataset Into Manageable Chunks?

Sep 28, 2007

I have a table that's 25,000,000 records... about 10 fields. I need to export this data to a flat file in no more than 500,000 record chunks. I've tried the following algorithm, adding a flag field called "exported" with default value 0.

do:
- mark random 500,000 records, setting exported = -1
- export everything in that table where exported = -1
- set exported = 1 where exported = -1
loop

This was pretty slow, taking about 10 hours last night to run.

I find myself wanting a sort of split-dataset task in SSIS, being able to split a chunk of records out of a dataset and handle them. Anyone have ideas for me?
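One alternative sketch (the key column name SomeKey is a placeholder): number the rows once and derive a chunk number from the row number, instead of repeatedly tagging random rows.

-- assign every row a chunk number in a single pass (SQL Server 2005+)
SELECT *,
       (ROW_NUMBER() OVER (ORDER BY SomeKey) - 1) / 500000 AS ChunkNo
INTO dbo.BigTable_Chunked
FROM dbo.BigTable;

-- export one chunk at a time (0 through 49 for 25,000,000 rows)
SELECT * FROM dbo.BigTable_Chunked WHERE ChunkNo = 0;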

View 5 Replies View Related

Validation Queries Running Too Long

Aug 30, 2006

I have a table that contains approx 200 thousand records that I need to run validations on. Here's my stored proc:

[code]
CREATE PROCEDURE [dbo].[uspValidateLoadLeads]
@sQuotes char(1) = null, @sProjectId varchar(10) = null, @sErrorText varchar(1000) out
AS BEGIN
DECLARE @ProcName sysname, @Error int, @RC int, @lErrorCode bigint
DECLARE @SQL varchar(8000)

IF @sQuotes = '0'
BEGIN
UPDATE dbo.prProjectDiallingList_staging
SET sPhone = RTrim(LTrim(Convert(varchar(30), Convert(numeric(20, 1), phone))))
END
ELSE
BEGIN
UPDATE dbo.prProjectDiallingList_staging
SET sPhone = phone
END

--2. Remove quotes
UPDATE dbo.prProjectDiallingList_staging
SET sphone = REPLACE(sphone,'"' , '')

--3. Remove decimal, comma, dashes, parenthesis
UPDATE dbo.prProjectDiallingList_staging
SET sphone = replace(replace(replace(replace(replace(replace(sphone,'.',''),',',''),'-',''),' ',''),'(',''),')','')

--4. Update failed Validation column if not 10 digits
UPDATE dbo.prProjectDiallingList_staging
SET sFailedValidation = 'X'
WHERE(Len(RTrim(LTrim(sPhone))) <> 10)

--5. Dedup
UPDATE a
SET a.sFailedValidation = 'X'
FROM dbo.prProjectDiallingList_staging a (nolock)
INNER JOIN dbo.prProjectDiallingList_staging b
ON a.sPhone= b.sPhone
WHERE(a.iList_StagingID > b.iList_StagingID)

--6. Update failed Validation column if not numeric
UPDATE dbo.prProjectDiallingList_staging
SET sFailedValidation = 'X'
WHERE(IsNumeric(RTrim(LTrim(sPhone))) = 0)

--7. Update time zones
UPDATE s
SET s.sTimeZone =z.sTimeZone
FROM dbo.prProjectDiallingList_staging s (nolock)
LEFT OUTER JOIN dbo.prPhoneTimeZone z
ON left(rtrim(ltrim(s.sphone)),3) = z.sPhoneAreaCode

--8. Insert into dialing table only records that have not failed the validation
INSERT dbo.prProjectDiallingList(iPrProjectId, sPhoneNumber, sTimeZone)
SELECT @sProjectId,sPhone, sTimeZone
FROM dbo.prProjectDiallingList_staging
WHERE ISNULL(sFailedValidation,'1') = '1'

UPDATE d
SET d.bProcessReporting = 1
FROM dbo.prProjectDialling d
WHERE d.iPrProjectId = @sProjectId
END
[/code]

When I execute this stored proc it runs for more than 5 minutes. Is there anything I can do to speed it up? Maybe there is a faster way of writing these queries?
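One small idea (a sketch, not a tested rewrite of the procedure): steps 1 through 3 each scan and rewrite the whole staging table; they can be collapsed into a single UPDATE so the table is written once instead of three times.

UPDATE dbo.prProjectDiallingList_staging
SET sPhone = replace(replace(replace(replace(replace(replace(replace(
                 CASE WHEN @sQuotes = '0'
                      THEN RTrim(LTrim(Convert(varchar(30), Convert(numeric(20, 1), phone))))
                      ELSE phone END,
             '"',''),'.',''),',',''),'-',''),' ',''),'(',''),')','');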

Thanks,

Ninel

View 1 Replies View Related

Could You Tell What's Wrong When I Split Table To The Target Partition Table?

Jan 22, 2007

Could you tell me what's wrong when I split the table to the target partition table?

USE TEST

--ADD FILEGROUP---------------------------------------------------------------------
ALTER DATABASE TEST ADD FILEGROUP FG_01
ALTER DATABASE TEST ADD FILEGROUP FG_02
ALTER DATABASE TEST ADD FILEGROUP FG_03

--ADD FILE--------------------------------------------------------------------------
ALTER DATABASE TEST ADD FILE (NAME = DF_01, FILENAME = 'D:\TEST\DF_01.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_01
ALTER DATABASE TEST ADD FILE (NAME = DF_02, FILENAME = 'D:\TEST\DF_02.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_02
ALTER DATABASE TEST ADD FILE (NAME = DF_03, FILENAME = 'D:\TEST\DF_03.ndf', SIZE = 10MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB) TO FILEGROUP FG_03

--CREATE PARTITION FUNCTION---------------------------------------------------------
CREATE PARTITION FUNCTION PF_HIS_HTTP_LOG(datetime)
AS RANGE LEFT FOR VALUES ('20070101 23:59:59.997', '20070102 23:59:59.997')

--CREATE PARTITION SCHEME-----------------------------------------------------------
CREATE PARTITION SCHEME PS_HIS_HTTP_LOG
AS PARTITION PF_HIS_HTTP_LOG TO (FG_01, FG_02, [PRIMARY])

--CREATE PARTITION TABLE------------------------------------------------------------
CREATE TABLE HIS_HTTP_LOG (
    USERID     varchar(32),
    USERIP     varchar(15),
    USERPORT   numeric(5,0),
    OBJECTIP   varchar(15),
    OBJECTPORT numeric(5,0),
    URL        varchar(256),
    HOST       varchar(64),
    DN         varchar(64),
    VISITIME   numeric(5,0),
    STARTIME   datetime,
    ENDTIME    datetime
) ON PS_HIS_HTTP_LOG(STARTIME)

--INSERT DATA, PARTITION 1, 20070101-------------------------------------------------
DECLARE @i int
SET @i = 1
WHILE @i <= 100
BEGIN
    INSERT INTO HIS_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6,
        'http://sina.com.cn', '', 'www.sohu.com', 11,
        CONVERT(datetime, '20070101 13:25:26.100', 121), GETDATE())
    SET @i = @i + 1
END

--INSERT DATA, PARTITION 2, 20070102-------------------------------------------------
SET @i = 1
WHILE @i <= 200
BEGIN
    INSERT INTO HIS_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6,
        'http://sina.com.cn', '', 'www.sohu.com', 11,
        CONVERT(datetime, '20070102 11:25:26.100', 121), GETDATE())
    SET @i = @i + 1
END

--CREATE A TABLE -------------------------------------------------------------------
CREATE TABLE TMP_HTTP_LOG (
    USERID     varchar(32),
    USERIP     varchar(15),
    USERPORT   numeric(5,0),
    OBJECTIP   varchar(15),
    OBJECTPORT numeric(5,0),
    URL        varchar(256),
    HOST       varchar(64),
    DN         varchar(64),
    VISITIME   numeric(5,0),
    STARTIME   datetime,
    ENDTIME    datetime
) ON FG_03

--INSERT DATA TO TMP_HTTP_LOG, 20070103-----------------------------------------------
DECLARE @i int
SET @i = 1
WHILE @i <= 400
BEGIN
    INSERT INTO TMP_HTTP_LOG VALUES (CAST(@i AS varchar(32)), '192.168.1.1', 5, '202.103.1.57', 6,
        'http://sina.com.cn', '', 'www.sohu.com', 11,
        CONVERT(datetime, '20070103 09:25:26.100', 121), GETDATE())
    SET @i = @i + 1
END

--ADD CONSTRAINT--------------------------------------------------------------------
ALTER TABLE TMP_HTTP_LOG WITH CHECK ADD CONSTRAINT CK001
CHECK (STARTIME >= '20070103 00:00:00.000' AND STARTIME <= '20070103 23:59:59.997')

--SPLIT RANGE, SWITCH DATA----------------------------------------------------------
ALTER PARTITION SCHEME PS_HIS_HTTP_LOG NEXT USED FG_03
ALTER PARTITION FUNCTION PF_HIS_HTTP_LOG() SPLIT RANGE ('20070103 23:59:59.997')
ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3

--==================================================================================

Why is there an error in the step "ALTER TABLE TMP_HTTP_LOG SWITCH TO HIS_HTTP_LOG PARTITION 3"?

Error information: message_id 4972, level 16, severity 1
ALTER TABLE SWITCH statement failed. Check constraints or partition function of source table 'TEST.dbo.TMP_HTTP_LOG' allows values that are not allowed by check constraints or partition function on target table 'TEST.dbo.HIS_HTTP_LOG'.

Please tell me why? Check constraints? Thank you very much!

View 1 Replies View Related

Problem:long Running Validation When Pointing OLEDB Source To A View.

Jan 24, 2007

One of our developers has written a view which will execute completely (returns ~38,000 rows) in approx 1 min out of SQLMS (results start at 20 sec and completes by 1:10 consistently).

However, if he adds a data flow task in SSIS, adds an OLEDB Data Source and selects Data Access Mode to "Table or view" and then selects the same view, it is consistently taking over 30 minutes (at which point we've been killing it). I can see the activity in the Activity Monitor, it is doing a SELECT * from that view and is runnable the whole time.

If we modify the view to SELECT TOP 10, it returns in a short time.

Has anyone run into this problem? Any suggestions? It is very problematic, as if the views change we have to hack around this problem.

Thanks for any responses.

Jeff

View 5 Replies View Related

Running Value And NULL In A Dataset

Aug 6, 2007



Hi Everyone,

I have a dataset like below









X15ForecastsCounts   X16ActualsCounts   WeekEnding
38                   18                 7/23/07
38                   14                 7/30/07
38                   35                 8/6/07
37                   NULL               8/13/07
37                   NULL               8/20/07
37                   NULL               8/27/07
37                   NULL               9/3/07
36                   NULL               9/10/07
36                   NULL               9/17/07
22                   NULL               9/24/07



I am plotting this data set as an S-Curve using the Running Value funtion.
The value and data point value is =RunningValue(Fields!X15ForecastsCounts.Value, Sum, "Series"), with similar code for X16ActualsCounts.

Unfortunately I am getting the curve like this http://tinypic.com/view.php?pic=4lyiuko

I don't want my X16ActualsCounts to extend beyond 8/6/07 in this case, and next week I don't want it to extend beyond 8/13/07.

How can I fix this issue??
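One approach I have seen (a sketch, not verified against this report): return Nothing for the data point once the underlying field is NULL, so the cumulative series stops instead of carrying the last running total forward as a flat line.

=IIF(IsNothing(Fields!X16ActualsCounts.Value),
     Nothing,
     RunningValue(Fields!X16ActualsCounts.Value, Sum, "Series"))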


View 3 Replies View Related

SQL 2012 :: Sort Tree Members In Right (tree) Structure?

Apr 6, 2015

I got an assignment: how to make it appear in the right order.

/* DROP TABLE EMP
SELECT * INTO Emp FROM (
SELECT 'A' EmpID, NULL ManID, 'Name' EmpName UNION ALL
SELECT 'MAC' EmpID, 'A' ManID, 'Name__' EmpName UNION ALL
SELECT '1ABA' EmpID, 'MAC' ManID, 'Name____' EmpName UNION ALL
SELECT 'ABB' EmpID, '1ABA' ManID, 'Name______' EmpName UNION ALL
SELECT 'XB' EmpID, 'A' ManID, 'Name__' EmpName UNION ALL
SELECT 'BAC' EmpID, 'XB' ManID, 'Name____' EmpName ) b
*/

[code]....
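A minimal sketch of one way to return the rows in tree order (using the Emp table above; this is my own attempt, not necessarily what the elided code block contains): build a sort path from the root down with a recursive CTE and order by it.

WITH Sorted AS (
    SELECT EmpID, ManID, EmpName,
           CAST(EmpID AS varchar(max)) AS SortPath
    FROM Emp
    WHERE ManID IS NULL                       -- the root
    UNION ALL
    SELECT e.EmpID, e.ManID, e.EmpName,
           t.SortPath + '/' + e.EmpID         -- parent path + own id
    FROM Emp e
    JOIN Sorted t ON e.ManID = t.EmpID
)
SELECT EmpID, ManID, EmpName
FROM Sorted
ORDER BY SortPath;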

View 2 Replies View Related

Problem Running Queries Of A Dataset

Jan 10, 2008

Hello
Could you please help me solve this problem?
I have a stored procedure called subscribe for inserting a new row into the subscriptions table. Then I added a new query (non-query) to my dataset, called 'Subscribequery', for handling the stored procedure.
Now, I want to run the query like this:
 DataSet1TableAdapters.SubscribeQuery C = new DataSet1TableAdapters.SubscribeQuery();
C.Subscribe(Profile.UserName, Convert.ToInt32(Subscriptions.Rows[1].Cells[1].Text));
 
but nothing is added to the table. What can I do?
Should I be looking for something like an Update(dataset) method for my query?
 many thanks in advance

View 2 Replies View Related

Report Model Deployment : The Model ID Of The Submitted Model Must Match That Of The

Dec 5, 2005

Running 2005 Beta 3 Refresh.  When I first deploy, it works fine. Subsequent deployments yield the following error:

View 9 Replies View Related

Power Pivot :: Building A Model Based On Multinational Model With Different Languages?

Oct 19, 2015

I need to develop a language-specific DWH, meaning that descriptions of products are available from a SAP system in multiple languages. English is the most important language and is the standard, but there are also requirements from countries that want product descriptions in their own language.

Productnr Productdesc Language
1            product       EN
1            produkt       DE

One option is to add a column per language for the descriptions, but that is not very elegant. I was thinking of using bridge tables to model this, but then you always have to select a language in a filter (I think).

I'm thinking of a technical solution, such that when a user logs on, the language is determined and a view picks the product table specific to that language. But then I don't have the opportunity to interchange the different language-specific fields in a report (or in my case Power Pivot).

View 2 Replies View Related

Can We Pause Log Shipping, Bring Primary Db To Simple Recovery Model And Then Back To Full Recovery Model?

Apr 25, 2008



We have the following scenario,

We have our production server with a database on which a few DTS packages execute every night. Most of them run Bulk Insert stored procedures.

So we have to set the recovery model of the database to simple for that period of time; otherwise it will blow up our logs.

Is there any way we can set up log shipping between our production and standby servers, but pause it for some time, set the recovery model of the primary db to simple, execute the DTS Bulk Insert jobs, bring it back to the full recovery model, and finally bring back log shipping?

Is it possible, and if so, how can we achieve this?

If not, what could be another DR solution in this scenario?
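For what it's worth (general guidance, not from the thread): switching the primary database to SIMPLE breaks the log backup chain, so log shipping cannot simply resume afterwards without re-initialising. The BULK_LOGGED recovery model keeps the chain intact while still minimally logging the bulk inserts, and log shipping keeps working under it. A sketch (database name and path are placeholders):

ALTER DATABASE MyProdDb SET RECOVERY BULK_LOGGED;
-- run the DTS bulk insert jobs here
ALTER DATABASE MyProdDb SET RECOVERY FULL;
BACKUP LOG MyProdDb TO DISK = 'X:\Backups\MyProdDb_log.trn';   -- keep the log backups in sequence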

Thanks Much
Tejinder

View 6 Replies View Related

SQL Server 2008 :: Populate One Dataset In SSRS Based On Results From Another Dataset Within Same Project?

May 26, 2015

I have a report with multiple datasets, the first of which pulls in data based on user entered parameters (sales date range and property use codes). Dataset1 pulls property id's and other sales data from a table (2014_COST) based on the user's parameters. I have set up another table (AUDITS) that I would like to use in dataset6. This table has 3 columns (Property ID's, Sales Price and Sales Date). I would like for dataset6 to pull the Property ID's that are NOT contained in the results from dataset1. In other words, I'd like the results of dataset6 to show me the property id's that are contained in the AUDITS table but which are not being pulled into dataset1. Both tables are in the same database.

View 0 Replies View Related






