Web Sync And Partitioned Snapshot - Failed From Time To Time. Why?
Oct 17, 2006
Hi!
Well..
We have a working web sync setup with a parameterized filter.
But from time to time, strange errors appear.
Below is the list of errors I got from the ComErrorCollection property of the MergeSynchronizationAgent instance:
ERROR: -2147199433
SOURCE: Merge Replication Provider(Web Sync Server)
TEXT: The Merge Agent was unable to start the SQL Server Agent job to generate the partitioned snapshot for this subscription. Verify that the SQL Server Agent service is running at the Distributor.
ERROR: 22022
SOURCE: HOST3MAIN
TEXT: SQLServerAgent Error: Request to run job dyn_HOST3MAIN-Customers-Main-2__20061014_14 (from User distributor_admin) refused because the job is already running from a request by User distributor_admin
ERROR: 20633
SOURCE: HOST3MAIN
TEXT: Failed to start the dynamic snapshot job. Verify that the SQLServerAgent service is running on the distributor.
ERROR: 20628
SOURCE: HOST3MAIN
TEXT: Failed to generate dynamic snapshot.
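Error 22022 suggests the dynamic snapshot job from a previous request was still running when the Merge Agent asked the Distributor to start it again. As a workaround sketch (not a confirmed fix), you could check whether that job is still executing before starting a sync; the job name below is taken from the error text above:

EXEC msdb.dbo.sp_help_job
    @job_name = N'dyn_HOST3MAIN-Customers-Main-2__20061014_14',
    @job_aspect = N'JOB'
-- current_execution_status = 1 means the job is still running;
-- wait for it to go idle before requesting the partitioned snapshot again.

Run this at the Distributor; retrying the sync once the job finishes should avoid the "already running" refusal.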
I just encountered a problem with snapshot replication or, more specifically, with the Snapshot Agent.
My environment consists of Windows 2000 Server (SP1) and SQL Server 7.0 (SP2).
The issue is that when I create a publication and then start the Snapshot Agent for the initial creation of schema and articles, it fails with the error: "An exception occurred in the Snapshot subsystem. The step failed." The Event Log registers this error with the following message: ------------------- "Step 2 of job --- () has caused an exception in the Snapshot subsystem and has been terminated" ------------------- Step 2 in the REPL-Snapshot job is "Run Agent".
One more detail: this occurs not consistently but from time to time. Some publications succeed in the snapshot process, but others don't. Also, if I manually run the Snapshot Agent a second time after a successful first run, I get the same error.
I am having an interesting problem using SQL replication.
I have a staging database inside my firewall that I am replicating out to my sales force over GPRS. The web synchronization server is in the DMZ and the database is inside the firewall.
When I initiate the sync from the mobile devices, I watch the Replication Monitor. The replication takes about 5-30 seconds to complete on the server, but the operation can take up to 15 minutes to complete on the mobile device. The delay happens over GPRS, Wi-Fi, or even an ActiveSync connection, so it is not the connection speed. I have tried tweaking settings on my IIS box, but to no avail. To make things even more interesting, the problem is intermittent. Has anyone ever seen anything like this?
I am really stuck on this; if anyone has some insight into this problem, any help would be appreciated. I'll try to explain what is happening the best I can:
We have a server running Windows 2000 Advanced Server (SP4) with SQL Server 2000 (SP3) (from now on, Server A). I have a publication on this machine with dynamic filters (changing on HOST_NAME()). The publication is sending the snapshots to another (desktop) machine. The mobile agent is on the same machine as the snapshots.
The mobile application is syncing fine when hitting Server A. The sync is done asynchronously.
Then we have Server B, running Windows Server 2003 (SP1) with SQL Server 2000 (SP4) and the same publication with a dynamic filter; however, the snapshots and the mobile agent are on this server.
The mobile application will sync the first time, but any subsequent syncs will not work. I check the Replication Monitor and it tells me the merge was a success, but the mobile application will not execute the download-table callback; it executes the sync callback 5 times and does not proceed to the download-table callback.
If I change the configuration on the mobile app to point to Server A, the sync works just fine; but if I change it back to Server B, the sync works once and then stops working.
I have various reports which have in the header a textbox with a time/date expression.
e.g. =Format(Now, "HH:mm on dddd, d MMMM yyyy")
These reports have a daily snapshot taken so users can look up historic versions of reports. The problem is that if the PDF renderer is used, the time displayed is the time now, not the time the data was generated. (If I view the snapshot in the browser, the correct historic time and date are shown, though.)
This is at best confusing and at worst could lead to some bad business decisions being made.
Is there any way I can get it to display the actual time and date the snapshot was generated?
I have tried replacing "Now" with "Globals!ExecutionTime" with no effect.
I have also seen a suggestion to move the header textbox into the report body and then add a new textbox to the header pointing at the textbox in the body
e.g. with Expression =ReportItems("LabelToday").Value
But this does not work either; it still displays the current time when viewed in PDF format.
Is there any straightforward way of doing this?
If there is no straightforward way, can I somehow access the HistoryID from within report code so I can look up the correct value myself in the ReportServer database?
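For that last resort, a minimal sketch of the lookup, assuming the standard ReportServer catalog tables (dbo.History and dbo.Catalog) and that you can get the snapshot's HistoryID passed in somehow; verify the schema against your ReportServer version before relying on it:

-- Hypothetical lookup of when a history snapshot was generated.
SELECT h.HistoryID,
       c.Name         AS ReportName,
       h.SnapshotDate -- the time the snapshot was taken
FROM ReportServer.dbo.History AS h
JOIN ReportServer.dbo.Catalog AS c ON c.ItemID = h.ReportID
WHERE h.HistoryID = @HistoryID -- supplied by the caller

Note that querying the ReportServer database directly is unsupported, so treat this as a diagnostic aid rather than something to build the reports on.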
We have transactional replication running with a separate publisher/distributor/subscriber. I want to add a couple of articles to the publication and then initialise them. I have added the articles and run sp_refreshsubscriptions.
I now want to refresh the subscriptions. I have selected not to lock the tables on the Snapshot tab of the publication properties, but whenever I run the Snapshot Agent it locks the application solid! It's odd: as soon as I run the Snapshot Agent, the phones start ringing within minutes. The application is Great Plains, and I have set the Snapshot Agent to run nightly anyway.
Is there any way I can run the Snapshot Agent during working hours to refresh this one article? Once I have successfully done this, I have a number of further articles I want to add, but I can't lock the tables when refreshing the initial snapshot.
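One commonly suggested approach (a sketch only; I have not verified it against Great Plains) is to turn off immediate_sync and allow_anonymous on the publication before adding an article. The Snapshot Agent should then generate a snapshot for the new article only, instead of re-snapshotting and locking every published table. The object names below are placeholders:

-- Placeholders: 'MyPublication', 'MyNewTable'
EXEC sp_changepublication @publication = 'MyPublication',
     @property = 'allow_anonymous', @value = 'false'
EXEC sp_changepublication @publication = 'MyPublication',
     @property = 'immediate_sync', @value = 'false'

EXEC sp_addarticle @publication = 'MyPublication',
     @article = 'MyNewTable', @source_object = 'MyNewTable'

EXEC sp_refreshsubscriptions @publication = 'MyPublication'
-- Now run the Snapshot Agent: it should only snapshot the new article.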
Conversion failed when converting date and/or time from character string.
I am using SQL Server 2008 (the database was designed for SQL Server 2005 and later moved to SQL Server 2008).
Pickup_Time and Actual_Pickup_Time are varchar(5) in the database.
What is wrong with this query?
Query:
SELECT COUNT(Trip_ID) AS OntimePickupCount
FROM MyTABLE
WHERE Start_Dt BETWEEN '01/01/2014' AND '04/30/2014'
  AND (   DATEDIFF(minute, CAST(Pickup_Time AS time), CAST(Actual_Pickup AS time)) BETWEEN 0 AND 15
       OR DATEDIFF(minute, CAST(Actual_Pickup AS time), CAST(Pickup_Time AS time)) BETWEEN 0 AND 15)
  AND Actual_Pickup IS NOT NULL
  AND Actual_Dropoff IS NOT NULL
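Nothing is structurally wrong with the query itself; this error almost always means at least one row holds a varchar value that cannot be cast to time (an empty string, '9999', etc.). SQL Server 2008 has no TRY_CONVERT, so one sketch is to guard each cast with ISDATE inside a CASE (CASE guarantees the cast is only attempted on valid values, which a plain WHERE predicate does not):

SELECT COUNT(Trip_ID) AS OntimePickupCount
FROM MyTABLE
WHERE Start_Dt BETWEEN '20140101' AND '20140430' -- unambiguous date literals
  AND Actual_Pickup IS NOT NULL
  AND Actual_Dropoff IS NOT NULL
  AND ABS(DATEDIFF(minute,
        CASE WHEN ISDATE(Pickup_Time)   = 1 THEN CAST(Pickup_Time   AS time) END,
        CASE WHEN ISDATE(Actual_Pickup) = 1 THEN CAST(Actual_Pickup AS time) END))
      BETWEEN 0 AND 15

Running SELECT * FROM MyTABLE WHERE ISDATE(Pickup_Time) = 0 OR ISDATE(Actual_Pickup) = 0 should show you the offending rows.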
I'm using RMO to set up a MergePullSubscription to a SQL Express database; the server is running SQL Server 2005.
The subscription gets created just fine, and web sync seems to be working: browsing to replisapi.dll?diag in my new virtual directory returns no errors.
When creating the publication on the server (using management studio for this) the snapshot is created, and stored in the snapshot share. Creating the publication seems to go without any issues. When I open the "Configure Web Synchronization" wizard, and point it to my new snapshot share, it ALWAYS says that the snapshot folder appears to be empty, which it isn't.
I get an error when I attempt to sync the databases that I believe is caused by Web Sync not being able to find the snapshot. Here's the output from triggering the sync via RMO.
2007-11-12 14:45:49.225 The upload message to be sent to Publisher 'SandBoxWebApp' is being generated
2007-11-12 14:45:49.265 The merge process is using Exchange ID 'DF8FE565-F6BC-40B3-8AEA-7B6FA5EC225D' for this web synchronization session.
2007-11-12 14:45:49.928 No data needed to be merged.
2007-11-12 14:45:49.940 Request message generated, now making it ready for upload.
2007-11-12 14:45:49.952 Upload request size is 1290 bytes.
2007-11-12 14:45:49.988 Uploaded a total of 1 chunks.
2007-11-12 14:45:50.000 The request message was sent to 'https://SandBoxWebApp/WebSync/replisapi.dll'
2007-11-12 14:45:50.064 Downloaded a total of 3 chunks.
2007-11-12 14:45:50.076 The response message was received from 'https://SandBoxWebApp/WebSync/replisapi.dll' and is being processed.
2007-11-12 14:45:50.109 The process could not connect to Distributor 'SandBoxWebApp'.
A first chance exception of type 'System.Runtime.InteropServices.SEHException' occurred in Microsoft.SqlServer.Replication.dll
A first chance exception of type 'Microsoft.SqlServer.Replication.ComErrorException' occurred in Microsoft.SqlServer.Replication.dll
A first chance exception of type 'Microsoft.SqlServer.Replication.ComErrorException' occurred in Microsoft.SqlServer.Replication.dll
That's it; it always ends with "Cannot connect to Distributor".
I'm using a free cert for testing, hence the non-FQDN name SandBox, but I have added the root CA to my machine, and don't get any cert errors when visiting the ?diag page in my browser.
Any ideas? I've been beating my head against the same wall for 2 days now!
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
Each primary key is unique, but the begindate is non-unique, and I guess that even if I use datetime2 and add nanoseconds, from what I have read there is still a chance of a duplicate datetime, since the date is imported via XML from multiple sources.
Is there a way to keep track, in real time, of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for more than about 5 minutes.
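One way to watch this in real time (a sketch, assuming SQL Server 2005 or later) is to poll sys.dm_exec_requests, which exposes the elapsed time of every executing request:

-- Requests that have been running for more than 5 minutes,
-- together with the statement text they are executing.
SELECT r.session_id,
       r.start_time,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       t.text                      AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 5 * 60 * 1000 -- milliseconds

A SQL Agent job could run this on a short schedule and start the trace whenever it returns rows.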
I am trying to load the previous day's data at 3 AM via an SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the For Loop.
Now, as this job runs at 3 AM and I set the variable to GETDATE() - 1, it excludes the data from 12 AM to 3 AM from the result set, because the variable is set to YYYY-MM-DD 03:00:00.000. I need it set to YYYY-MM-DD 00:00:00.000.
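A common SSIS expression idiom for this (a sketch; use it as the variable's expression) is to cast through DT_DBDATE, which drops the time portion, and then back to DT_DATE:

(DT_DATE)(DT_DBDATE)DATEADD("dd", -1, GETDATE())

This yields yesterday at midnight (YYYY-MM-DD 00:00:00.000) no matter what time the job runs.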
I hope to update a DateTime column value with a Time input parameter. My poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000 and I might have an existing @SendOnDate such as 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
    @QuePoolID int = null,
    @ApptTime time(7),
    @SendOnDate datetime
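A minimal sketch of the combination step (assuming SQL Server 2008 or later, where the date and time types exist): strip the old time-of-day from @SendOnDate by casting to date, then add the new time back:

-- Combine the date part of @SendOnDate with @ApptTime.
-- CAST to date drops the old time-of-day; datetime values can be added.
SET @SendOnDate = CAST(CAST(@SendOnDate AS date) AS datetime)
                + CAST(@ApptTime AS datetime)
-- 2015-10-05 07:00:00.000 + 10:45 -> 2015-10-05 10:45:00.000

The resulting @SendOnDate can then be used in the procedure's UPDATE as usual.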
I am using VS2005 (VB) to develop a Pocket PC WM5.0 program, and I am using SQL CE 3.0. My PPC hardware runs at 400 MHz.
The question is: when the program tries to insert the first record into the .sdf database after each program start, it takes a long time. Does anyone know why, and how I can fix it?
I load the whole database into a DataSet when the program starts, do all the inserts, updates and deletes in this DataSet, and write it back to the database after each action.

cn.Open()
sda = New SqlCeDataAdapter(SQL, cn) ' SQL = "Select * From Table"
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()

I checked sda.Update(); normally it takes about 0.08 s to write one record back to the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create a ONE new record in dataset
4. Fill back to DB
When I take these four steps, the write time is almost 1 s or even more, every time!
Actually, 0.08 s is just the normal case. Sometimes it still takes over 1 s to write back a DataSet with only one inserted record while the program is running. (Even when all inserted records hold exactly the same data, differing only in the integer key.)
However, when I give up the DataSet and use the following code instead:

cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn) ' the INSERT was built beforehand: Insert Into Table Values (... all fields ...)
cmd.ExecuteNonQuery() ' execution step, implied in the original snippet
cn.Close()

I found that the first insert still takes more time, but only about 0.2 s, and a normal insert is around 0.02 s. It is 4 times faster!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert or update the primary table in a transaction, we would expect the datetime of the insert/update to reflect the commit; however, it seems that GETDATE() is evaluated when the insert/update statement runs, not at the commit. This causes problems: we select rows based on LAST_UPDATE, but a transaction may commit later while the earlier statement-time timestamp is what gets saved, so we miss that update.
We would like to know if there is any way to tell SQL Server not to evaluate GETDATE() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried GETDATE(), CURRENT_TIMESTAMP and even {fn Now()}, all with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
drop trigger LOG_COMMIT_TEST_INSERT
drop trigger LOG_COMMIT_TEST_UPDATE
drop trigger LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
create trigger LOG_COMMIT_TEST_INSERT on COMMIT_TEST for INSERT as
begin
    declare @time datetime
    select @time = getdate()

    insert into COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    select PKEY, getdate() from inserted
end
GO

create trigger LOG_COMMIT_TEST_UPDATE on COMMIT_TEST for UPDATE as
begin
    declare @time datetime
    select @time = getdate()

    update COMMIT_TEST_UPDATE
    set LAST_UPDATE = getdate()
    from COMMIT_TEST_UPDATE, deleted, inserted
    where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
end
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
create trigger LOG_COMMIT_TEST_DELETE on COMMIT_TEST for DELETE as
begin
    if (select count(*) from deleted) > 0
    begin
        delete COMMIT_TEST_UPDATE
        from COMMIT_TEST_UPDATE, deleted
        where COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    end
end
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time? */
SELECT GETDATE()
GO

BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO

/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO

/* Select results from the tables - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the insert statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missing deals (assuming the commit time is later than some artificial time window).
I have a publication on a SQL Server 2000 (SP4) server. This publication has dynamic filtering enabled. What I want to do is create an interface which will generate a new dynamic snapshot based on filtering input from users. So far I can create the dynamic snapshot easily enough, and I can see the filtered results on a SQL 2000 subscriber. However, there seems to be no way to configure a mobile database to point to the dynamic snapshot. This is easy to configure in a SQL 2000 subscription - the options are right there on the properties page. Yet it seems the mobile database only points to the required unfiltered snapshot through the IIS proxy.
Is there any way to force the mobile database to use the dynamic snapshot instead?
There's *strange* behaviour with snapshot size during subscription initialization.
So...
The initial snapshot is about 9 MB at the publisher.
After creating the subscription, I noticed that the subscriber downloads about 21 MB.
It seems to me that the snapshot is unpacked on the IIS box, then part of the files are packed again, and only after that is the changed snapshot sent to the subscriber.
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast the result as an integer with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
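Strings like '4:30 pm' convert cleanly to datetime, so a sketch built on the statement above could look like this (assuming the temp table is #myTable and the text column is MovieTime; adjust names to suit):

-- '4:30 pm' -> 16:30 -> '1630'
UPDATE #myTable
SET MovieTime = REPLACE(CONVERT(CHAR(5), CONVERT(datetime, MovieTime), 108), ':', '')
-- MovieTime now holds e.g. '1630'; CAST(MovieTime AS int) yields 1630 when you read it.

The temp table remains available for the life of the session, so the UPDATE can run at any point before the connection closes.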
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screen shot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the transaction punch-record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screen shot.
I am not super strong in SQL, but I figure I can do it using a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record? (A sketch follows below.)
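Without the real table layout this is only a guess, but the usual tool for turning punch rows into one row per employee per day is PIVOT. Every column name below (employee_id, punch_date, punch_type, punch_time) is a hypothetical stand-in for whatever employee_punch_record actually contains:

-- One row per employee per day, punches spread across columns.
SELECT employee_id,
       punch_date,
       [ClockIn], [LunchOut], [LunchIn], [ClockOut] -- hypothetical punch types
FROM (
    SELECT employee_id, punch_date, punch_type, punch_time
    FROM employee_punch_record
) AS src
PIVOT (
    MIN(punch_time)
    FOR punch_type IN ([ClockIn], [LunchOut], [LunchIn], [ClockOut])
) AS p
ORDER BY employee_id, punch_date

Access could then bind the form (or a continuous subform) to a view or pass-through query shaped like this.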
I have a very simple time series model which processes fine without any problem. However, when I run the following query:
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services
------------------------------
Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsdtring">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These values of HISTORIC_MODEL_GAP (1) and HISTORIC_MODEL_COUNT (10) should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
We have problems with our SQL Server Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server AlwaysOn cluster) so users are authenticated down to the database. From time to time, SSRS loses the ability to delegate the user credentials to the database; at that point the Report Server logs contain rejected database connections due to ANONYMOUS LOGON. After restarting SSRS the problem is gone.
I have a table with a few fields, one being datetime_traded. I need to write a query which returns the row whose time is closest (down to the second) to a given date/time. I'm using MS SQL.
Here's what I have so far:
Code:
select *
from TICK_D
where datetime_traded = (
    select min(abs(datediff(second, datetime_traded, Convert(datetime, '2005-05-30:09:31:09'))))
    from TICK_D
)
But I get an error - "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.".
Does anyone know how I could do this? Thanks a lot for any help!
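Two separate problems stand out: the literal '2005-05-30:09:31:09' has a colon between the date and the time (hence the out-of-range conversion error), and the subquery returns a number of seconds rather than a datetime, so the outer WHERE could never match anyway. A sketch that fixes both, assuming you want exactly one row back:

DECLARE @target datetime
SET @target = CONVERT(datetime, '2005-05-30 09:31:09') -- space, not colon, before the time

SELECT TOP 1 *
FROM TICK_D
ORDER BY ABS(DATEDIFF(second, datetime_traded, @target))

On a large table, an index on datetime_traded plus a narrow range filter around @target would keep this from scanning everything.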
OK, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test-bed servers, so obviously my stats are skewed by the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER take this long!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date, and these can even be entered for future career events. The following SQL picks out the latest current career record for each employee, based on career_date being <= GetDate() and the date of entry for that date being the greatest.
From the above we want to return 2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT    a.sAMAccountName AS 'sAMAccountName'
        , a.userPrincipalName AS 'userPrincipalName'
        , 'TRUE' AS 'Modify'
        , RTRIM(e.unique_identifier) AS 'employeeID'
        , RTRIM(e.employee_number) AS 'employeeNumber'
        , RTRIM(e.known_as)
          + CASE WHEN RTRIM(e.surname) IS NOT NULL THEN ' ' + RTRIM(e.surname) ELSE NULL END AS 'displayName'
        , RTRIM(e.known_as) AS 'givenName'
        , RTRIM(e.surname) AS 'sn'
        , RTRIM(c.job_title) AS 'title'
        , RTRIM(c.division) AS 'company'
        , RTRIM(c.department) AS 'department'
        , RTRIM(l.description) AS 'physicalDeliveryOfficeName'
        , RTRIM(REPLACE(am.dn, '\', '')) AS 'manager'
        , t.full_mobile
          + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END AS 'mobile'
        , t.mobile_number AS 'otherMobile'
        , ad.address_ad_country AS 'c'
        , ad.address_ad_address1
          + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
          + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
          + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
          + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END AS 'streetAddress'
        , ad.address_ad_pobox AS 'postOfficeBox'
        , ad.address_ad_city AS 'l'
        , ad.address_ad_County AS 'st'
        , ad.address_ad_postcode AS 'postalCode'
        , RTRIM(ad.address_ad_telephone)
          + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NOT NULL
                 THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
                 ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NULL
                           THEN 'Ext: ' + RTRIM(a.othertelephone)
                           ELSE NULL END
            END AS 'telephoneNumber'
FROM employee e
LEFT JOIN career c
    ON  c.parent_identifier = e.unique_identifier
    AND c.career_date = (
            SELECT MAX(c2.career_date)
            FROM pwa_master.career c2
            WHERE c2.parent_identifier = c.parent_identifier
              AND c2.career_date <= GetDate()
        )
    AND c.datetime_created = (
            SELECT MAX(c3.datetime_created)
            FROM pwa_master.career c3
            WHERE c3.parent_identifier = c.parent_identifier
              AND c3.career_date = c.career_date
        )
LEFT OUTER JOIN AD_Import am ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL OR a.employeeID IS NOT NULL)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view on the table "people". EDIT: I know it's pointing out the obvious, but I'm pulling out the manager's "DN" from AD_Import based on the manager_number and employeeNumber matching.
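One avenue worth trying (a sketch, assuming SQL Server 2005 or later; untested against this schema) is replacing the two correlated subqueries with a single ROW_NUMBER() pass over career, so the table is read once per employee rather than three times:

-- Latest current career row per employee via ROW_NUMBER().
-- Splice the remaining columns and joins from the original query around this.
SELECT e.unique_identifier, c.job_title, c.division, c.department
FROM employee e
LEFT JOIN (
    SELECT c2.*,
           ROW_NUMBER() OVER (
               PARTITION BY c2.parent_identifier
               ORDER BY c2.career_date DESC, c2.datetime_created DESC
           ) AS rn
    FROM pwa_master.career c2
    WHERE c2.career_date <= GetDate()
) c ON c.parent_identifier = e.unique_identifier AND c.rn = 1

An index on career (parent_identifier, career_date, datetime_created) would support both this version and the original.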
I need a formula to calculate the time (let's say in minutes) between two dates/times. The problem is that I have to exclude the time between 6 PM and 6 AM and also the time during the weekend (Saturday and Sunday). I will use this in a couple of reports made in Reporting Services. If anyone has an algorithm that could be modified for this and is willing to share it, I would be very grateful. Many thanks! /Per Lissel
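One brute-force T-SQL sketch (it assumes the countable window is 06:00-18:00 Monday-Friday and an English-language session for DATENAME; it walks the interval one day at a time, which is fine at report scale):

-- Whole minutes between @from and @to that fall on Mon-Fri between 06:00 and 18:00.
CREATE FUNCTION dbo.BusinessMinutes (@from datetime, @to datetime)
RETURNS int
AS
BEGIN
    DECLARE @minutes int, @day datetime, @winStart datetime, @winEnd datetime
    SET @minutes = 0
    SET @day = DATEADD(day, DATEDIFF(day, 0, @from), 0) -- midnight of the start day

    WHILE @day < @to
    BEGIN
        IF DATENAME(weekday, @day) NOT IN ('Saturday', 'Sunday') -- skip weekends
        BEGIN
            SET @winStart = DATEADD(hour, 6, @day)  -- 06:00
            SET @winEnd   = DATEADD(hour, 18, @day) -- 18:00
            -- Clamp the day's window to the requested interval
            IF @from > @winStart SET @winStart = @from
            IF @to   < @winEnd   SET @winEnd   = @to
            IF @winEnd > @winStart
                SET @minutes = @minutes + DATEDIFF(minute, @winStart, @winEnd)
        END
        SET @day = DATEADD(day, 1, @day)
    END
    RETURN @minutes
END

The report dataset could then call SELECT dbo.BusinessMinutes(@StartDate, @EndDate), or the same loop could be ported to VB in the report's Code section.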
I have created several global temp tables to cache some intermediate results, but it seems that after a while those tables are dropped by SQL Server 2005 automatically (I have not restarted the server, and no DROP TABLE statement was ever executed against them). Is this a feature by design? How can I make those global temp tables persist until the next service restart?
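It is by design: a global temp table is dropped as soon as the session that created it ends and no other session still references it. One frequently cited workaround (a sketch; the procedure and table names are placeholders) is to create the table from a startup stored procedure in master, so it exists for the lifetime of the instance:

USE master
GO
CREATE PROCEDURE dbo.CreateGlobalTempCache -- placeholder name
AS
    CREATE TABLE ##IntermediateResults ( -- placeholder table
        id int PRIMARY KEY,
        payload varchar(100)
    )
GO
-- Run the procedure automatically whenever SQL Server starts:
EXEC sp_procoption @ProcName = 'dbo.CreateGlobalTempCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on'

If the data must survive a service restart too, a plain permanent table in a work database is the safer choice.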
I have an identical SQL database on two machines (my machine and a webserver) that links to a database on a third server (S3). When I execute a stored procedure on my machine that accesses a database on S3, it always runs without a problem. However, when I run the same stored procedure on the webserver (via Query Analyzer on my machine, connected to the webserver), the stored procedure runs without a problem for about ten minutes, and then I suddenly receive the error "Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.". If I then connect remotely to the webserver via Remote Desktop Connection and run the stored procedure there, it works, and then I can run it again on the webserver via Query Analyzer on my machine, but the same problem will occur again after about ten minutes. The only difference between my machine and the webserver is that the webserver is running Windows Server 2003, whereas my machine runs Windows XP. All settings in SQL on the two machines are identical, so I would love to know if anyone has a possible solution to this seemingly random problem.
I am reading about the RESTORE command for point-in-time recovery using logs. I would like to know, for a given backup image, the minimum point in time I can recover to with a T-SQL command before applying any log restores, and which log ranges are needed during the restore.
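As a sketch (database and file names below are placeholders): the earliest point you can recover to is the end of the full backup you restore, and msdb's backup history shows the LSN range each log backup covers, which tells you which log backups the STOPAT time falls inside:

-- Which backups cover which LSN ranges (type D = full, L = log)?
SELECT database_name, type, backup_start_date, backup_finish_date,
       first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB' -- placeholder
ORDER BY backup_finish_date

-- Restore the full backup, then roll the logs forward to a point in time:
RESTORE DATABASE MyDB FROM DISK = 'C:\backups\MyDB_full.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'C:\backups\MyDB_log1.trn' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'C:\backups\MyDB_log2.trn'
    WITH STOPAT = '2014-04-30 10:45:00', RECOVERY

The STOPAT time must fall within the log chain you apply; every log backup whose range ends before the target time has to be restored in full first.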
I was working with a Microsoft Time Series (MTS) model on some data when, in the Mining Model Viewer's Decision Tree tab, I realized that the key time variable I had defined was acting like a split variable.
So I ask you: is this possible? Because, to my mind, this should not happen...
Afterwards, I reviewed the Data Mining Tutorial by Seth Paul, Jamie MacLennan, Zhaohui Tang and Scott Oveson, and I found, in the Forecasting part, that the key time variable (Time Index) was acting like a split variable too, for example in M200 Pacific:Quantity and R250 Europe:Quantity.
So, people: is it possible for a key time variable to act as a split variable in an MTS model?