Choosing The Most Frequent
Jul 14, 2006
There must be a way to do this simply. We're running SQL Server 2000. I'm looking for some generic SQL statement that I can apply.
If I have a table with a person column and a location column and multiple records for the same person/location combination, how do I select the person with the location they most frequently visited? Say George visits Mexico 5 times, the Bahamas twice, and Costa Rica once. I would have 8 records in my table for George. The data looks something like this:
George/Mexico
George/Mexico
George/Mexico
George/Mexico
George/Mexico
George/Bahamas
George/Bahamas
George/Costa Rica
Ben/Brazil
Ben/Brazil
Ben/Peru
The results would be:
George/Mexico
Ben/Brazil
Thanks!
Myles
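A sketch of one way to do this on SQL Server 2000, assuming the table is called Visits with Person and Location columns (both names are placeholders); if a person has a tie for their most-visited location, both rows come back:
SELECT v.Person, v.Location, COUNT(*) AS VisitCount
FROM Visits v
GROUP BY v.Person, v.Location
HAVING COUNT(*) >= ALL (SELECT COUNT(*)
                        FROM Visits v2
                        WHERE v2.Person = v.Person
                        GROUP BY v2.Location)
ORDER BY v.Person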
View 4 Replies
Sep 26, 2001
I want to get the top 10 most frequent cpt's for each dssid.
nuclear medicine 12345
54321
64536
87648
98356
13254
76534
87638
24364
98354
urology 63547
98745
etc...
So,
select dssid,cpt,count(*) from enc_vis_cpt group by dssid,cpt
will give me the cpt's and their frequency for each dssid.
dssid cpt count
SPINAL CORD INJURY 9934120
AMB SURGERY EVAL BY NON-MD 622703
PSYCHOSOCIAL REHAB - GROUP 993414
SPINAL CORD INJURY 983419
AMB SURGERY EVAL BY NON-MD 6327031
PSYCHOSOCIAL REHAB - GROUP 9734114
SPINAL CORD INJURY 9934280
AMB SURGERY EVAL BY NON-MD 6227353
PSYCHOSOCIAL REHAB - GROUP 9934524
How do I limit the output to just display the 10 most frequent cpt's for each dssid. Thank you...
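One sketch that works on SQL Server 2000/7 (no ROW_NUMBER available): for each dssid/cpt pair, count how many other cpt's in the same dssid occur more often, and keep only those with fewer than 10 ranked above them. Ties can push a dssid slightly past 10 rows.
SELECT a.dssid, a.cpt, a.freq
FROM (SELECT dssid, cpt, COUNT(*) AS freq
      FROM enc_vis_cpt
      GROUP BY dssid, cpt) AS a
WHERE (SELECT COUNT(*)
       FROM (SELECT dssid, cpt, COUNT(*) AS freq
             FROM enc_vis_cpt
             GROUP BY dssid, cpt) AS b
       WHERE b.dssid = a.dssid
         AND b.freq > a.freq) < 10
ORDER BY a.dssid, a.freq DESC
On SQL Server 2005 or later, ROW_NUMBER() OVER (PARTITION BY dssid ORDER BY COUNT(*) DESC) makes this much shorter.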
View 2 Replies
View Related
Jan 16, 2001
Hello all
I'm not sure if this is anything to be concerned about, but I'd appreciate some input. I've created an alert that invokes a scheduled log backup job if one of the production database logs becomes more than 80% full.
I've noticed that the log gets backed up every couple of minutes (sometimes even more frequently) while a series of scheduled jobs is executing. Otherwise, it backs up the log every hour as scheduled. I've also noticed that some of the jobs are taking longer to complete (they've been running for about 6 months now). Each job truncates the table that it populates with data, so I'm not sure what the cause of the delay is. It doesn't look like there is any fragmentation. Am I missing anything? Thanks
View 1 Replies
View Related
Oct 9, 2006
I have used Base SAS for analysis for a while and it was really great; everything is easy with a simple command. I am sure it's not the same in SQL Server, but I need some help on how to start with the following:
I have a field called call_country and another field called call_minute. Each call will be saved with the destination country and the total number of minutes.
I want to run a query to see what the TOP frequent destinations are, in this format:
United States - Count: 420 - Total Minutes: 12,345
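A sketch of the aggregation, assuming the calls are stored in a table named Calls (placeholder name):
SELECT TOP 10
       call_country,
       COUNT(*)         AS [Count],
       SUM(call_minute) AS Total_Minutes
FROM Calls
GROUP BY call_country
ORDER BY COUNT(*) DESC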
View 2 Replies
View Related
Jul 26, 2004
I have a reviews table where all reviews are submitted. On the main page I want to display the 10 most reviewed products. I have a Product_ID column in this table which identifies the product. How can I write a query that will select the Product_ID of the records with the most frequent Product_IDs?
I came up with something like this:
"Select Top 10 Product_ID, COUNT(*) AS Occurances FROM reviews GROUP BY Product_ID ORDER BY occurances DESC"
But it does not work. It gives "Declaration expected" as the error.
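"Declaration expected" is a VB.NET compiler error, not a SQL Server error, so the problem is most likely where that string sits in the page code (e.g. outside a method) rather than the query itself. Run directly against the database, essentially the same statement should work:
SELECT TOP 10 Product_ID, COUNT(*) AS Occurances
FROM reviews
GROUP BY Product_ID
ORDER BY Occurances DESC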
View 5 Replies
View Related
Apr 28, 2008
Hi MSDN ppl,
I seek your expertise yet one other time.
Scenario:
We have 7 databases mirrored on two servers which are mirroring partners. 3 of the 7 databases are live on server1 and mirrored on server2, and the remaining 4 databases are live on server2 and mirrored on server1. The data is exposed through a .NET Windows application.
The configurations of the servers are as follows.
System: Microsoft Windows Server 2003 R2
Standard x64 Edition
Service Pack 2
Computer: Intel(R) Xeon(R) CPU 5130 @ 2.00 GHz, 32.0 GB of RAM
SQL Version: Microsoft SQL Server 2005 - 9.00.3175.00 (X64) Jun 14 2007 11:45:39
Copyright (c) 1988-2005 Microsoft Corporation Enterprise Evaluation Edition (64-bit)
on Windows NT 5.2 (Build 3790: Service Pack 2)
Problem:
The databases, for no apparent reason, keep randomly failing over to one server quite frequently, at least twice a day. There is no pattern I can make out as to why this is happening.
My Questions:
1. Is it a good practice to divide the databases on each server, the way it is now? Or should all the databases be kept on one server and mirrored on other all the time?
2. From the scenario described above, can you see a reason for the databases failing over so frequently? Could the Windows application which is used to expose the data be responsible for the failovers?
3. What steps can be taken to find out what is causing the databases to fail over? Alternatively, and most importantly, how can this problem of databases failing over randomly be solved?
Thank you,
Little_Birdie
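One place to start on question 3 (a sketch, with YourMirroredDb as a placeholder name): check the mirroring state and the partner timeout on each principal. With a witness in high-safety mode, unexpected automatic failovers are often caused by the default 10-second partner timeout being tripped by brief network or server stalls, and raising it can help.
SELECT DB_NAME(database_id)          AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc,
       mirroring_connection_timeout  AS timeout_seconds
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL
-- If the timeout is still the default 10 seconds, consider raising it:
ALTER DATABASE YourMirroredDb SET PARTNER TIMEOUT 30
The SQL Server error logs on both partners (and the witness) around the time of each failover are the other obvious place to look.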
View 13 Replies
View Related
May 30, 2008
Hi guys, may I know if there is any way to get information about the tables that are used most frequently in the db?
Best Regards,
Hans
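If this is SQL Server 2005 or later, one rough way is the index usage DMV. Note that the counters are reset whenever the instance restarts, so this only reflects activity since the last restart:
SELECT OBJECT_NAME(s.[object_id])                        AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS total_reads,
       SUM(s.user_updates)                               AS total_writes
FROM sys.dm_db_index_usage_stats s
WHERE s.database_id = DB_ID()
GROUP BY s.[object_id]
ORDER BY total_reads DESC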
View 2 Replies
View Related
Jun 5, 2007
What function(s) can be used to find the mode of data? I have a column that is populated with codes and I'd like to summarize the data by the code that occurs the most frequently. Any help is appreciated!!
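T-SQL has no built-in MODE() aggregate, but a GROUP BY plus TOP gives the same answer. A sketch, assuming a table named MyTable with the code in a column named Code (placeholder names):
SELECT TOP 1 Code, COUNT(*) AS Frequency
FROM MyTable
GROUP BY Code
ORDER BY COUNT(*) DESC
Use TOP 1 WITH TIES instead of TOP 1 if you want every code that shares the highest count.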
View 5 Replies
View Related
Mar 11, 2008
How can I make a statement that will return the 10 most frequently occurring values in a column?
I have no idea if that is even possible; if you have an idea on how I could do that I would really appreciate it.
I'm trying to make a page that would show some statistics on a table I have.
I'm also trying to make something that would show the count of the number of records inserted in the last 24 hours, week, month, year, etc. The table has a column called "DateInserted" as SmallDate; right now I can use a WHERE DateInserted > '20080310' to get the count, but it's not dynamic. Is there any way to merge all these results into one row, with each column being a different time period?
I know this is a lot of questions, but I would really appreciate any pointers.
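Two sketches, using MyTable and SomeColumn as placeholder names. The second collapses the date-range counts into a single row with CASE expressions, so the ranges stay relative to the current date:
-- Ten most frequently occurring values in a column
SELECT TOP 10 SomeColumn, COUNT(*) AS Occurrences
FROM MyTable
GROUP BY SomeColumn
ORDER BY COUNT(*) DESC

-- Row counts for several trailing periods, returned as one row
SELECT SUM(CASE WHEN DateInserted >= DATEADD(hour, -24, GETDATE()) THEN 1 ELSE 0 END) AS Last24Hours,
       SUM(CASE WHEN DateInserted >= DATEADD(day, -7, GETDATE())   THEN 1 ELSE 0 END) AS LastWeek,
       SUM(CASE WHEN DateInserted >= DATEADD(month, -1, GETDATE()) THEN 1 ELSE 0 END) AS LastMonth,
       SUM(CASE WHEN DateInserted >= DATEADD(year, -1, GETDATE())  THEN 1 ELSE 0 END) AS LastYear
FROM MyTable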
View 4 Replies
View Related
Jun 30, 2014
we have a handful of developers and each of us is responsible for laying out and creating our own database backends. This often leads to inconsistencies in table and column structures. One obvious situation that comes up often is whether or not the other developers are building history into their primary tables, using history/archive tables, or (usually in the case of helper tables) keeping no historical data at all.
My thought on how to alleviate this a little was to suggest that we all build a IS_DELETED computed column into our tables so that someone else trying to work with their data doesn't have to play the guessing game. In most cases, this column would just be running date comparisons on an Expiration Date and either checking to see if it's in the future (usually 12/31/9999) or NULL.
I have read that computed columns can be a performance hit if used/returned unnecessarily, but is that also the case on fields whose main use would be filtering? It just seems that the calculation the computed column is doing would be necessary for the WHERE anyway, so it seems like a wash, and worth the benefit of not having to decipher someone else's work.
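A minimal sketch of the computed column idea, with dbo.SomeTable and ExpirationDate as placeholder names. One caveat worth knowing: because GETDATE() is non-deterministic, this column cannot be PERSISTED or indexed, so a WHERE IS_DELETED = 0 filter will be evaluated per row rather than via an index on the computed column.
ALTER TABLE dbo.SomeTable
ADD IS_DELETED AS (CASE
                       WHEN ExpirationDate IS NULL
                         OR ExpirationDate > GETDATE() THEN 0
                       ELSE 1
                   END)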
View 0 Replies
View Related
Jun 28, 2007
I am working on a text mining application wherein I need to detect unusual/anomalous sentences in text. Certain sentences, that I know occur very frequently, are given a likelihood of 0.2 by PredictCaseLikelihood. Other sentences that are just as frequent get a much higher likelihood (>0.9). I am using the NORMALIZED option. The only significant difference between these sentences is their length. The one with the lower likelihood has only 2 words in it, whereas the one with the higher likelihood has more than 10 words. The problem is that the shorter sentences end up being interpreted as anomalous, when in fact they aren't. Any suggestions?
View 2 Replies
View Related
Oct 5, 2006
Hi
I am constantly getting this error message in the Application log after installing SQL 2005 last night followed by SP1 (say 5 times a minute). See below:
EventType sql90exception, P1 reportingservicesservice.exe, P2 9.0.2047.0, P3 443f5953, P4 sqldumper_unknown_module.dll, P5 0.0.0.0, P6 00000000, P7 0, P8 00e8ed9d, P9 00000000, P10 NIL.
I have reapplied the service pack to SQL 2005 but it made no difference; it is running on a Windows 2003 server with all the latest MS patches.
Does anyone know the solution or possible solution to this issue?
Thanks
Matt
View 4 Replies
View Related
Feb 21, 2006
Hi,
Log backups are done every 5 minutes,
so the SQL Server log is full of entries like
"Log backed up: Database: Prices, creation date(time):...."
Could the logging of 'Log backed up' entries for the Prices db be disabled?
Thanks
Alex
View 4 Replies
View Related
Jun 12, 2002
Apologies for the way in which I describe the tables and data, I know I'm not using a very proper way to get my point across:
Table A: "tblJobs" Contains the following:
--------------------------------------------
COLUMNS:
1. JobPK (char(35))
2. LocationName (varchar(50))
DATA (csv):
6643C9C9-7618-472F-9859844AA6C0F47B, Jonesport ME
08563708-3830-4507-B3154E9C4D49C6F2, Garden City NY
Table B: "tblJobDates" contains the following data, related to the two rows above):
--------------------------------------------
COLUMNS:
1. JobPK (char(35))
2. DateData (datetime)
3. CRD (datetime, "Created Date" the date and time that the date was entered)
DATA (csv):
6643C9C9-7618-472F-9859844AA6C0F47B, 6/8/2002, 6/10/2002 12:44:58 PM
6643C9C9-7618-472F-9859844AA6C0F47B, 6/17/2002, 4/22/2002 2:07:31 PM
08563708-3830-4507-B3154E9C4D49C6F2, 6/12/2002, 6/7/2002 4:05:06 PM
08563708-3830-4507-B3154E9C4D49C6F2, 6/13/2002, 6/12/2002 11:38:22 AM
tblJobDates serves two purposes: to give us the most recently entered due date for a job, and to serve as a "repository" to track changes to the due date.
Report C: The report I want to generate does NOT provide historical information... it only serves to show the CURRENT due date for each job in the tblJobs table:
--------------------------------------------
COLUMNS:
LocationName
Due Date (alias of DateData)
OUTPUT (csv):
Jonesport ME, 6/8/2002
Garden City NY, 6/13/2002
Note that for Jonesport, an initial due date of 6/17/2002 was entered (based on the CRD). Then someone changed it so that the job was due EARLIER.
Note that for Garden City, an initial due date of 6/12/2002 was entered (based again on the CRD). Then someone changed it so that the job was due LATER.
The "most recently entered due date" is what should be reflected in my report -- just as it does above ("C")
Other Notes:
-- There are other columns of information from both tables that I would like to return, but the above is the most basic form of my request. Most notably, we would need to return the JobPK in report (C).
-- A job should only appear ONCE in report (C), with its "current" due date, regardless of the other due dates that may have been entered for that job.
-- If a job has no due date, it should not appear on the report.
-- Although not shown here, each row in (B) DOES have a unique identifier (DatePK) as well... if that helps in your solution.
-- Note that the job that is "due first" appears at the top of report (C). This allows a person looking at the report to quickly determine which job "gets priority" -- the one on top!
Okay gurus -- how should the query look that would generate the desired output in Report C?
THANKS IN ADVANCE if you even can point me in the right direction!!
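One sketch using the tables as described: for each job, take the tblJobDates row with the latest CRD (the most recently entered due date), then sort so the job due first is on top. This assumes CRD values don't tie within a job; if they can, DatePK would be needed as a tie-breaker. Jobs with no due date drop out via the inner join.
SELECT j.JobPK, j.LocationName, d.DateData AS DueDate
FROM tblJobs j
INNER JOIN tblJobDates d ON d.JobPK = j.JobPK
WHERE d.CRD = (SELECT MAX(d2.CRD)
               FROM tblJobDates d2
               WHERE d2.JobPK = j.JobPK)
ORDER BY d.DateData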
View 1 Replies
View Related
Jul 20, 2005
I need to decide between Standard and Enterprise Edition (cost is a criterion, but it's secondary to performance, and I am not paying for it myself).
The server spec under consideration: Dual Xeon, 1GB RAM, 36GB - RAID 1 (Dell PowerEdge 1850).
Application: Windows 2003 Std Server, ASP.NET, MS SQL Server 2000 based data-driven web application.
Approximately 25 simultaneous clients. Peak activity would probably be 50 transactions/activities per second (2 per second per client). I expect the database size to grow up to 4GB in 1 year.
The application would use only basic OLAP features (if at all)... so feature-set wise I believe that Standard Edition is good enough.
What I am concerned about is when MS documentation says that Standard Edition is for "organizations that do not require the advanced scalability, availability, performance, or analysis features of the SQL Server 2000 Enterprise Edition".
Is there a difference in performance between Std and Ent editions? In terms of the number of transactions per second that can be serviced?
What other criteria should I be aware of before deciding to go one way or the other?
Any ideas?
View 4 Replies
View Related
Aug 18, 2006
Please help me out:
I have some records in a SqlDataSource and want to show them column-wise. Now I do it with a DataList because it's easy, but other options are open.
Every item/record should have a radio button (in a group, so that you can only choose one from all). People advised me to do this with an HTML radio button inside the template.
After the user has selected an item and chooses the next button, I need to know which item the user has chosen.
Furthermore, when the user likes to step back, the same radio button should already be selected.
Please help, this is bothering me for a while,
best regards from The Netherlands,
Gert
View 1 Replies
View Related
Jun 12, 2007
My company has a website that connects to a sql server (on a different box). I am trying to convince them to get sql server 2005. However, I do not know if SQL Server 2005 Workgroup edition is okay for our needs. Can someone please tell me if it is.
Basically, our setup is the following:
The SQL Server will only have one/two clients - the web server
View 7 Replies
View Related
Jul 26, 2007
I have to store some data on a remote server (MS SQL Server 2000). The scenario is like this:
1. The web application runs on a local machine. Users (who input the data) use it through the LAN.
2. The input should be stored in the remote server if the remote connection is OK; otherwise it should be saved in the local server's database (MS SQL 2000).
3. In the application's web.config there is a connection string pointing to the remote server and another (alternate) one pointing to the local server's database.
In a scenario like this I first test the remote connection. If it is not OK then I initialize the local server's connection, like this:
private MyConnection()
{
    try
    {
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForRemote"].ToString());
        connectionSql.Open();
    }
    catch (Exception ex)
    {
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForLocal"].ToString());
    }
    finally
    {
        connectionSql.Close();
    }
    connectionSql2 = new SqlConnection(ConfigurationManager.ConnectionStrings["Temp"].ToString());
}
My problem is that when the remote connection is lost it takes almost 1 minute to store in the local database. How can I make it more time efficient? Thanks....
View 5 Replies
View Related
Apr 15, 2006
Hi ... I have a question on data types in SQL Server 2005 EE.
What is a good data type for email, password, Phone Number and ISBN number?
Thanks!
View 3 Replies
View Related
Jun 8, 2002
Hello,
I have a table with some data in it.
What I want to do is to create a query that returns me randomly
one of the records of the table. Can this be done?
If this is not possible from SQL Server, I have thought of an
alternative way. This is:
I want to return all rows of the table with SELECT *,
but I want the select to return in the first column an
autoincrement number for each row without the need to add
an autoincrement field in the table. E.g.
Table
------
Banana
Tomatoe
Aple
...
...
Orange
Result from select
------------------
1 Banana
2 Tomatoe
3 Aple
. ....
. ....
23 Orange
Can this be done?
At least this way
1) I can travel to the end of the results (from ASP),
2) read the ID of the last row
3) Create a random integer number from 1 to last ID,
4) and finaly select the appropriate random row from that integer.
Can anybody help me please?
Thanks for any help in advance!
Yours, sincerely
Efthymios Kalyviotis
ekalyviotis@comerclub.gr
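For what it's worth, SQL Server (including 2000) can pick a single random row directly by ordering on NEWID(), with no extra identity column or ASP loop; it does shuffle the whole table, so it's best suited to modest table sizes. MyTable is a placeholder name:
SELECT TOP 1 *
FROM MyTable
ORDER BY NEWID()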
View 1 Replies
View Related
Jun 4, 2006
Greetings!
I am purchasing a new/first server and could use some help with the details.
I am purchasing the server with the intent of managing a large database that will be quite extensive and requires a good amount of processing power. I have decided to go with Windows Server 2003 and SQL Server 2000 as the database. Within the next year I hope to have this database directly flowing to a website that I could possibly be hosting as well, plus 2-3 offsite employees logging into the system remotely.
I would say my biggest question is whether to choose the RAID 1 configuration or RAID 5. I want the hard drives to mirror each other. I was thinking of going with three hard drives but I'm not really sure if I would even need that setup. With that, I will just show my current system:
Dell poweredge 1800
3.0 ghz xeon
2 gb memory
sata 1 raid
cerc 6-Channel sata raid controller
160 gb hd x 2
onboard NIC network adapter
I'm going price-savvy on this one, so no UPS, redundant power supplies, or tape backup. Although I am open to any suggestions.
Definitely appreciate any help with this, as I have been hard pressed to find some quality reseller help. They just want to throw the biggest and baddest thing at me.
Thanks!
-Shawn
View 4 Replies
View Related
Jan 20, 2008
Hi All,
I would like to know the experts views on the following I have listed below.
1. Is there any significant performance gain from choosing the native SQL Server driver rather than, for example, OLE DB? I know there are a lot of specific features in the native SQL driver, but I am thinking in terms of performance.
2. Why not develop for a generic database rather than a specific database?
3. Does more generic mean less work when migrating to a different database?
Appreciate your valuable thoughts and any recommendations.
Cheers,
Amal
View 1 Replies
View Related
Dec 12, 2007
I have an SQL as follows
UPDATE TB
SET [Deleted] = 1
WHERE TB.[QuestionId] = @QuestionId
I have an index in this table as follows
CREATE NONCLUSTERED INDEX [IX_AssessedAnswers1] ON [TB]
(
[Id] ASC,
[QuestionId] ASC,
[Deleted] ASC
)WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
Will this index be considered by the query optimiser when locking records? If I created another index with only the QuestionId field, would it boost performance? And how does the optimiser actually choose the right index during an update?
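Since QuestionId is not the leading column of IX_AssessedAnswers1, the optimiser generally cannot seek on that index for a predicate on QuestionId alone, so an index that leads on QuestionId would likely give the UPDATE a seek target. A sketch (the index name is made up):
CREATE NONCLUSTERED INDEX IX_TB_QuestionId
ON [TB] ([QuestionId] ASC)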
View 2 Replies
View Related
Oct 13, 2004
hi,
I need to choose a database based on the following criteria (using .NET app):
1) a light but fully functional database, preferably with support for stored procedures and constraints; fewer than 8,000 transactions a day
2) portable or the database can be export/import very easily
3) reliable and stable
4) least maintenance
I have two db in my mind, Access and MSDE?
Does anyone have some hands-on experience with the above two? Or any other better suggestions?
Any advice is appreciated.
thanks,
bryan
View 1 Replies
View Related
Sep 18, 2004
Hello, I am really dripping wet behind the ears on this and would really appreciate some help. I am setting up my first SQL table and am lost at trying to choose data types for my fields. Basically, all I am doing is setting up a contact form. It is going to ask for phone number, name, address, city, state, zip, etc. I will also have two fields which if I were using an Access db, would be "memo" with say, 500 characters. So in researching SQL data types, I came across the following:
char
Fixed-length non-Unicode character data with a maximum length of 8,000 characters.
varchar
variable-length non-Unicode data with a maximum of 8,000 characters.
text
Variable-length non-Unicode data with a maximum length of 2^31 - 1 (2,147,483,647) characters.
nchar
Fixed-length Unicode data with a maximum length of 4,000 characters.
Can someone shed some light on what I need for simple fields like street, name, city, and more importantly, description? I will also have a "premium" field which should be a "yes" or "no". I am thinking a data type of bit, which is set to 1 or 0? Thanks for any help, I appreciate it so much.
TOm
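A sketch of the kind of definitions commonly used for fields like these; names and sizes are just illustrative guesses, and you would swap varchar for nvarchar if you need Unicode:
CREATE TABLE dbo.Contacts (
    ContactID   int IDENTITY(1,1) PRIMARY KEY,
    Name        varchar(100) NOT NULL,
    Street      varchar(150) NULL,
    City        varchar(100) NULL,
    State       char(2)      NULL,
    Zip         varchar(10)  NULL,          -- varchar keeps leading zeros
    Phone       varchar(20)  NULL,
    Description varchar(500) NULL,          -- the "memo"-style field
    Premium     bit NOT NULL DEFAULT 0      -- 1 = yes, 0 = no
)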
View 1 Replies
View Related
Apr 28, 2008
Hi all
I'm a newbie in SQL Server and please excuse me for this silly question. Could anyone tell me when I should use which of the following types:
Decimal
Float
Real
I've got them mixed up!!! All of them can have a floating point, BUT what's the difference? Some advice please!
Thanks in advance.
Kind Regards.
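Roughly: decimal is an exact type (you choose precision and scale), while float and real are approximate floating-point types (real being the smaller, less precise one), so decimal is the usual choice for money and anything you compare for equality. A tiny sketch of the difference:
DECLARE @d decimal(10, 2), @f float
SET @d = 0.1
SET @f = 0.1

-- The decimal result is exactly 0.30; the float result is only an
-- approximation and may display as 0.30000000000000004.
SELECT @d * 3 AS decimal_result,
       @f * 3 AS float_result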
View 6 Replies
View Related
Jul 20, 2005
Hello group:
I've done alot of reading on this subject somewhat and have found that many people have many different opinions on this subject. My question centers mainly around using a lookup table to enable users to select a pre-defined list of values.
I have developed a practice myself of avoiding AutoNumber type data fields for primary keys where the primary key will be related to a child table. Nevertheless, what do most users do with lookup tables? My thoughts are to create a small key value for each value in the lookup table. For example:
I might have a Carriers table which shows a list of carriers that I might ship an order by. One of the entries may be 'Air Freight - Overnight', or 'Air Freight - 2nd Day Air'. I've seen a few examples where the primary key field for each entry like these would be autonumber, or at least, a numeric value. What I like to do is create my own key, like for 'Air Freight - Overnight', I might use 'AFO' for the key, and for 'Air Freight - 2nd Day Air', I might use 'AF2'. Any thoughts on this? Mine are that even tho the users may never see this value - I, as the developer, will see it and I tend to prefer a key value based on real data that means something other than an auto-incremented number. In referencing the well-known Northwind.mdb database, I noticed their Categories table used a number field value, like 1, 2, 3....etc, but their customers table used values like 'ALFKI' to represent their key values.
What are some other thoughts out there? I'm working with Access currently, but this project is about to move to SQL Server.
James
View 3 Replies
View Related
Jan 12, 2007
Hi everyone
Primary platform is 2005 on 64-bit.
I've got a couple of questions linked to partitionating tables.
- What sort of criteria does the Database Engine follow when you have two NDFs assigned to one filegroup and that filegroup is part of a partition?
What's more: could I force SQL Server to use one of them by default?
I mean, my first partition encompasses 20020101 through 20030101. When I add data for, say, March or June, could I decide that those months belong to NDF1 rather than NDF2?
Let me know if you need further details.
Thanks in advance for your time,
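For what it's worth: a partition scheme only maps partitions to filegroups, and within a filegroup the engine spreads data across its files itself (proportional fill), so the usual way to direct specific ranges to specific files is one file per filegroup and mapping the ranges there. A sketch with made-up names:
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('20020101', '20030101')

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear TO (FG2001, FG2002, FG2003)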
View 4 Replies
View Related
Dec 16, 2007
Lets assume database A is production, B is copy. SQL Server 2005 sp2, SQL CE 3.5
Database A has a variety of transactions against it 24x7
Database B (the copy) is for reporting and as a source of merge replication for SQL CE instances
Merge replication and reporting is used 24x7 as well
I have the following requirements:
Maintain an up to date copy of the production database (need not be up to the minute, could be hourly, even daily update)
Database B is read-only. The merge replication is NOT bi-directional.
Here is the caveat (which I think prohibits using some solutions to this problem):
The production application accomplishes much of it's functionality with in-memory copies of records. I have no control over the production application. When it works against the database, it sort of does a 'withdrawal-deposit' scenario. (to the best of my knowledge it's not using SQL Server transactions) So, for every record it works with, a copy is made out of the database, changes are made in memory, a delete of the database record is done, then the record is re-inserted.
With this kind of behavior in db A, I'm not sure what it would do to log-shipping or transactional replication. I do know that I want to minimize the changes required at the SQL CE instances to keep the sync operation to a minimal cost.
Any suggestions?
View 1 Replies
View Related
Apr 30, 2014
I am choosing a primary key for the database which I am designing.
I have a few tables which contain 5-15 fields; 3-9 of those columns combine to form the uniqueness of a row.
All are un-related tables. Three parent tables connect with 20 non-related child tables.
I believe it would not be a wise choice to use 3 to 9 fields as the primary key. But if I use an auto-increment as a key, will it be of any use, since it might never be used to fetch rows? Then why should I still go with that?
Or is it OK to create a primary key of up to 5 attributes?
View 9 Replies
View Related
Jul 20, 2005
Hi to all
I have to choose a DBMS and a database architecture for an eBay-like website about to be launched.
The company wants to use a web hosting service and not host the database on dedicated servers at the office.
The database will contain web-only information and lots of back end information that is not really needed to be stored on the web host. I'm wondering how to design that part. Should I store all information on the web host only? Mirror that DB every evening on some local DB server to be able to use the data without eating up lots of bandwidth? Separate the database in 2 parts? How to sync and assure integrity then? Having a local DB will also mean the company will have to pay a licence for the DBMS...
What DBMS should I pick considering that the database will have to hold at least 1 million products for sale (eBay like) and all the information that goes with it? I thought any DBMS weaker than SQL Server or Sybase or Oracle will not be enough. What do you think?
Thanks a lot, hope I have made myself clear enough
P.S. I would really like to get lots of different points of view. I think I'll use Sybase after all, so I wonder if that's a good choice. And I still want to know your thoughts about the 2 or 1 DB design (separate Web & Billing information for example, or leave all the info in the hosted database, what techniques to use to keep the integrity and to have the latest information in-house)... Thanks a lot
View 2 Replies
View Related
Nov 30, 2007
Hi--
I am a newbie to datamining, but have nearly a decade of solid database experience with the last 6 years in SQL Server 2000. We are moving our accounting system to SQL Server 2005 and I have been asked to explore the possibilities of mining an inventory table. I'd like to get some opinions prior to spending too much time potentially barking up the wrong tree!
We have an inventory table with approximately 10 million serialized records. Each row contains the serial number of the individual unit and its manufacturer/model designation. We have no control over the assigning of the serial numbers as they come from multiple manufacturers and some of the manufacturers correlate serial numbers to model and some don't.
My thought was to use a cluster model to try to predict the model of a new serial number as it is entered into the database. Is this thought feasible? Is the mining model choice appropriate? If pointed in the right direction, I'm sure that I can run with this.
Thanks in advance-- Jim
View 3 Replies
View Related
Jan 15, 2007
Hi
I am having a query
SELECT Dur1.rootId
FROM DurableEventTab Dur1
WHERE (Dur1.dev_ReferenceClusterRoot = 'iyrwd.52' )
AND (Dur1.dev_Action = 'Order:Ordered')
AND (Dur1.dev_Active = 1) AND (Dur1.dev_PurgeState = 0)
AND (Dur1.dev_PartitionNumber = 0)
This table has a primary key : aribapk11
and indexes on dev_ReferenceClusterRoot,
dev_Action, and dev_PurgeState.
Now when I fire this query,
the query execution plan is actually doing a Clustered Index Scan on the PK aribaPK11. What I was expecting was an index seek on the key defined on dev_ReferenceClusterRoot. Please note that the index seek is the behaviour in SQL Server 2000.
Any idea what is going wrong?
Clustered Index Scan(OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]), WHERE:([typhoon1902].[dbo].[DurableEventTab].[dev_Active] as [Dur1].[dev_Active]=(1.) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PurgeState] as [Dur1].[dev_PurgeState]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PartitionNumber] as [Dur1].[dev_PartitionNumber]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_ReferenceClusterRoot] as [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52' AND [typhoon1902].[dbo].[DurableEventTab].[dev_Action] as [Dur1].[dev_Action]=N'Order:Ordered')) 0 0 Clustered Index Scan Clustered Index Scan OBJECT:([typhoon1902].[dbo].[DurableEventTab].[AribaPK7] AS [Dur1]), WHERE:([typhoon1902].[dbo].[DurableEventTab].[dev_Active] as [Dur1].[dev_Active]=(1.) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PurgeState] as [Dur1].[dev_PurgeState]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_PartitionNumber] as [Dur1].[dev_PartitionNumber]=(0) AND [typhoon1902].[dbo].[DurableEventTab].[dev_ReferenceClusterRoot] as [Dur1].[dev_ReferenceClusterRoot]='iyrwd.52' AND [typhoon1902].[dbo].[DurableEventTab].[dev_Action] as [Dur1].[dev_Action]=N'Order:Ordered') [Dur1].[rootId] 1 0.00386574 0.0002263 71 0.00409204 [Dur1].[rootId] PLAN_ROW 0 1
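One thing worth trying (a sketch, not a diagnosis): give the optimiser a seek target that covers the whole filter, and make sure statistics are current, since with only single-column indexes it may simply be estimating the clustered scan as cheaper. The index and column choices below are assumptions based on the query shown:
CREATE NONCLUSTERED INDEX IX_DurableEventTab_Filter
ON DurableEventTab (dev_ReferenceClusterRoot, dev_Action, dev_Active, dev_PurgeState, dev_PartitionNumber)
INCLUDE (rootId)   -- INCLUDE requires SQL Server 2005 or later

UPDATE STATISTICS DurableEventTab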
View 3 Replies
View Related