SQL Server 2014 :: Eliminating Distinct SORT Operation
Jun 19, 2015
I have an UPDATE statement that joins two tables by SendId. One table, I'll call it T1, has a clustered index on SendId ASC. The other table, T2, also has a clustered index on SendId ASC. All the columns from T2 are used to update T1. The execution plan shows a Clustered Index Scan on T2 and a Clustered Index Seek on T1 going into a Nested Loops inner join. Immediately following is a Distinct Sort on SendId ASC. Why the Distinct Sort if the tables are already ordered by SendId?
I am currently running SQL 2014 in my production environment. Both machines are configured for Windows Failover Cluster Services so that we could take advantage of AlwaysOn Availability Groups.
Currently we are experiencing an issue where our web application is throwing the error, "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." I am pretty sure that the error is being thrown by the application, but I can't prove it. Because the error states it is SQL, the consensus seems to be that SQL is having the issue. I need to know how to troubleshoot it to ensure that SQL is not the problem.
How can I accomplish the above? I need to insert a distinct column from TableA into RecordProcess, and col1+'Tr' into RecordProcessExt. I can do it with a cursor; I don't know any other way.
Also, there are other columns in Table A. I am using sql server 2005.
I have SSIS 2012 Enterprise, using catalog deployment and have more that 50 environment variables for connection to databases across my enterprise.
The problem is that when I go to configure the packages after deployment and pick the proper environment variables, they are not sorted, so I have to browse all entries in order to find the proper entry in the environment variables.
I have a table sent to me from another source over which I have no control of the content (they don't seem to be able to get a distinct instance of persons for some reason - anyway, that is out of my control). This table contains multiple instances of persons; each person has a unique identifier plus a set of dates and a field that indicates the state of that person (can be 'Curr' or 'Ex').
Fields are as follows:-
PersonID - char(10) - the unique identifier for each person
Date1 - DateTime
P_State - char(20)
Sample of data from table1:

PersonID    Date1        P_State
A11324      1998-04-21   Curr
A11324      1999-05-01   Ex
A11324      1998-07-12   Ex
A11324      1998-05-23   Curr
B44321      1999-07-01   Curr
B11111      1999-07-01   Ex
B11111      1998-03-01   Curr
B22222      1999-03-31   Ex
B22222      1998-04-11   Curr
B33333      2004-03-10   Curr
B44443      2002-09-01   Curr
What I need to get is the latest instance of each person and their present p_state. I've tried many ways but can't seem to get a distinct instance of the latest record for each person; can anyone point me in the right direction, please?
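One common approach (SQL Server 2005 and later) is ROW_NUMBER() partitioned by person and ordered by date descending, keeping row 1. A minimal sketch of the idea, run here against SQLite (which supports the same window function) with a subset of the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (PersonID TEXT, Date1 TEXT, P_State TEXT);
INSERT INTO table1 VALUES
 ('A11324','1998-04-21','Curr'), ('A11324','1999-05-01','Ex'),
 ('A11324','1998-07-12','Ex'),   ('A11324','1998-05-23','Curr'),
 ('B11111','1999-07-01','Ex'),   ('B11111','1998-03-01','Curr');
""")

# One row per person: the row with the latest Date1 wins (rn = 1).
rows = conn.execute("""
SELECT PersonID, Date1, P_State
FROM (
  SELECT PersonID, Date1, P_State,
         ROW_NUMBER() OVER (PARTITION BY PersonID ORDER BY Date1 DESC) AS rn
  FROM table1
) t
WHERE rn = 1
ORDER BY PersonID
""").fetchall()
print(rows)
```

The same subquery-plus-`WHERE rn = 1` shape works verbatim in T-SQL against the real table.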
I'm using SQL 2012 express.. and just recently learned how to code.
I wrote a query and keep receiving this error...
Error converting data type varchar to float.
here's the query code
SELECT SUM(cast(lc as float)) FROM [dbo].[LaborCosts] WHERE ppty = 'ga' AND PL = 'allctd ktchn expns' AND ACCT like 'payroll%'
I am trying to sum up the values in column lc, and realized I have unnecessary quotation marks around them. How can I eliminate the quotation marks from the column and query only the numerical values?
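If the stray quotation marks are literally part of the stored strings, one option is to strip them with REPLACE before the cast (in T-SQL: CAST(REPLACE(lc, '"', '') AS float)). A sketch of the idea, using SQLite with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LaborCosts (lc TEXT)")
# One value stored with quotes, one without.
conn.executemany("INSERT INTO LaborCosts VALUES (?)",
                 [('"100.50"',), ('200.25',)])

# Strip the quote characters, then cast and sum.
total = conn.execute("""
SELECT SUM(CAST(REPLACE(lc, '"', '') AS REAL)) FROM LaborCosts
""").fetchone()[0]
print(total)
```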
I'm using SQL Server 2012 Analysis services in Tabular mode and connected to Oracle Database and while importing, I'm getting below error after importing some rows.
OLE DB or ODBC error: Accessor is not a parameter accessor.. The current operation was cancelled because another operation in the transaction failed.
I am trying to set up sorting on a DataGrid in ASP.NET 2.0. I have it working so that when you click on a column header, it sorts by that column; what I would like is for a second click on the same header to sort by that field again, but in the opposite direction. I have it working using the following code in the stored procedure:

CASE WHEN @SortColumn = 'Field1' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, FileName) END DESC,
CASE WHEN @SortColumn = 'Field1' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, FileName) END ASC,
CASE WHEN @SortColumn = 'Field2' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), FileDesc)) END DESC,
CASE WHEN @SortColumn = 'Field2' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), FileDesc)) END ASC,
CASE WHEN @SortColumn = 'VersionNotes' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), VersionNotes)) END DESC,
CASE WHEN @SortColumn = 'VersionNotes' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, CONVERT(varchar(8000), VersionNotes)) END ASC,
CASE WHEN @SortColumn = 'FileDataID' AND @SortOrder = 'DESC' THEN CONVERT(sql_variant, FileDataID) END DESC,
CASE WHEN @SortColumn = 'FileDataID' AND @SortOrder = 'ASC' THEN CONVERT(sql_variant, FileDataID) END ASC

And I've got to tell you, that is ugly code, in my opinion. What I am trying to do is something like this:

CASE WHEN @SortColumn = 'Field1' THEN FileName END,
CASE WHEN @SortColumn = 'FileDataID' THEN FileDataID END,
CASE WHEN @SortColumn = 'Field2' THEN FileDesc
     WHEN @SortColumn = 'VersionNotes' THEN VersionNotes END
CASE WHEN @SortOrder = 'DESC' THEN DESC WHEN @SortOrder = 'ASC' THEN ASC END

and it's not working at all. I get an error saying: Incorrect syntax near the keyword 'case'. When I put a comma after the END on line 5, I get: Incorrect syntax near the keyword 'DESC'. What am I missing here? Thanks in advance for any help. -Madrak
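The root problem is that ASC and DESC are keywords, not values, so a CASE expression cannot emit them; you need one CASE per direction, and the inactive CASE simply returns NULL for every row, contributing nothing to the sort. A condensed sketch of that pattern, demonstrated with SQLite from Python on hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (FileName TEXT, FileDataID INTEGER)")
conn.executemany("INSERT INTO files VALUES (?,?)", [("b", 2), ("a", 1), ("c", 3)])

def fetch(sort_column, sort_order):
    # ASC/DESC cannot come out of a CASE expression. Instead, use one
    # CASE per direction: the inactive one yields NULL for every row,
    # so only the active branch actually orders the result.
    sql = """
    SELECT FileName FROM files
    ORDER BY
      CASE WHEN :col = 'Field1' AND :ord = 'DESC' THEN FileName END DESC,
      CASE WHEN :col = 'Field1' AND :ord = 'ASC'  THEN FileName END ASC
    """
    return [r[0] for r in conn.execute(sql, {"col": sort_column, "ord": sort_order})]

print(fetch("Field1", "ASC"), fetch("Field1", "DESC"))
```

In a production T-SQL procedure, dynamic SQL built with sp_executesql is the other common way to avoid the per-column CASE pairs entirely.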
I have a pretty simple question, but the answer seems to be evading me:
Here's the DDL for the tables in question:
CREATE TABLE [dbo].[Office] (
    [OfficeID] [int] IDENTITY (1, 1) NOT NULL ,
    [ParentOfficeID] [int] NOT NULL ,
    [WebSiteID] [int] NOT NULL ,
    [IsDisplayOnWeb] [bit] NOT NULL ,
    [IsDisplayOnAdmin] [bit] NOT NULL ,
    [OfficeStatus] [char] (1) NOT NULL ,
    [DisplayORD] [smallint] NOT NULL ,
    [OfficeTYPE] [varchar] (10) NOT NULL ,
    [OfficeNM] [varchar] (50) NOT NULL ,
    [OfficeDisplayNM] [varchar] (50) NOT NULL ,
    [OfficeADDR1] [varchar] (50) NOT NULL ,
    [OfficeADDR2] [varchar] (50) NOT NULL ,
    [OfficeCityNM] [varchar] (50) NOT NULL ,
    [OfficeStateCD] [char] (2) NOT NULL ,
    [OfficePostalCD] [varchar] (15) NOT NULL ,
    [OfficeIMG] [varchar] (100) NOT NULL ,
    [OfficeIMGPath] [varchar] (100) NOT NULL ,
    [RegionID] [int] NOT NULL ,
    [OfficeTourURL] [varchar] (255) NULL ,
    [GeoAreaID] [int] NOT NULL ,
    [CreateDT] [datetime] NOT NULL ,
    [UpdateDT] [datetime] NOT NULL ,
    [CreateByID] [varchar] (50) NOT NULL ,
    [UpdateByID] [varchar] (50) NOT NULL ,
    [OfficeBrandedURL] [varchar] (255) NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[OfficeManagement] (
    [OfficeID] [int] NOT NULL ,
    [PersonnelID] [int] NOT NULL ,
    [JobTitleID] [int] NOT NULL ,
    [CreateDT] [datetime] NOT NULL ,
    [CreateByID] [varchar] (50) NOT NULL ,
    [SeqNBR] [int] NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[OfficeMls] (
    [OfficeID] [int] NOT NULL ,
    [SourceID] [int] NOT NULL ,
    [OfficeMlsNBR] [varchar] (20) NOT NULL ,
    [CreateDT] [datetime] NOT NULL ,
    [UpdateDT] [datetime] NOT NULL ,
    [CreateByID] [varchar] (50) NOT NULL ,
    [UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Personnel] (
    [PersonnelID] [int] IDENTITY (1, 1) NOT NULL ,
    [PersonnelDisplayName] [varchar] (100) NOT NULL ,
    [FirstNM] [varchar] (50) NOT NULL ,
    [PreferredFirstNM] [varchar] (50) NOT NULL ,
    [MiddleNM] [varchar] (50) NOT NULL ,
    [LastNM] [varchar] (50) NOT NULL ,
    [PersonalTaxID] [varchar] (9) NOT NULL ,
    [HireDT] [datetime] NOT NULL ,
    [TermDT] [datetime] NOT NULL ,
    [HomePhoneNBR] [varchar] (15) NULL ,
    [HomeADDR1] [varchar] (50) NOT NULL ,
    [HomeADDR2] [varchar] (50) NOT NULL ,
    [HomeCityNM] [varchar] (50) NOT NULL ,
    [HomeStateCD] [char] (2) NOT NULL ,
    [HomePostalCD] [varchar] (15) NOT NULL ,
    [PersonnelLangCSV] [varchar] (500) NOT NULL ,
    [PersonnelSlogan] [varchar] (500) NOT NULL ,
    [BGColor] [varchar] (50) NOT NULL ,
    [IsEAgent] [bit] NOT NULL ,
    [IsArchAgent] [bit] NOT NULL ,
    [IsOptOut] [bit] NOT NULL ,
    [IsDispOnlyPrefFirstNM] [bit] NOT NULL ,
    [IsHideMyListingLink] [bit] NOT NULL ,
    [IsPreviewsSpecialist] [bit] NOT NULL ,
    [AudioFileNM] [varchar] (100) NULL ,
    [iProviderID] [int] NOT NULL ,
    [DRENumber] [varchar] (10) NOT NULL ,
    [AgentBrandedURL] [varchar] (255) NOT NULL ,
    [CreateDT] [datetime] NOT NULL ,
    [UpdateDT] [datetime] NOT NULL ,
    [CreateByID] [varchar] (50) NOT NULL ,
    [UpdateByID] [varchar] (50) NOT NULL ,
    [IsDisplayAwards] [bit] NOT NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[PersonnelMLS] (
    [PersonnelID] [int] NOT NULL ,
    [SourceID] [int] NOT NULL ,
    [AgentMlsNBR] [varchar] (20) NOT NULL ,
    [CreateDT] [datetime] NOT NULL ,
    [UpdateDT] [datetime] NOT NULL ,
    [CreateByID] [varchar] (50) NOT NULL ,
    [UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Office] ADD CONSTRAINT [FK_Office_OfficeProfile] FOREIGN KEY ( [OfficeID] ) REFERENCES [dbo].[OfficeProfile] ( [OfficeID] ) NOT FOR REPLICATION GO
alter table [dbo].[Office] nocheck constraint [FK_Office_OfficeProfile] GO
ALTER TABLE [dbo].[OfficeManagement] ADD CONSTRAINT [FK_OfficeManagement_LookupJobTitle] FOREIGN KEY ( [JobTitleID] ) REFERENCES [dbo].[LookupJobTitle] ( [JobTitleID] ), CONSTRAINT [FK_OfficeManagement_Office] FOREIGN KEY ( [OfficeID] ) REFERENCES [dbo].[Office] ( [OfficeID] ) NOT FOR REPLICATION , CONSTRAINT [FK_OfficeManagement_Personnel] FOREIGN KEY ( [PersonnelID] ) REFERENCES [dbo].[Personnel] ( [PersonnelID] ) ON DELETE CASCADE GO
alter table [dbo].[OfficeManagement] nocheck constraint [FK_OfficeManagement_Office] GO
ALTER TABLE [dbo].[OfficeMls] ADD CONSTRAINT [FK_OfficeMls_Office] FOREIGN KEY ( [OfficeID] ) REFERENCES [dbo].[Office] ( [OfficeID] ) NOT FOR REPLICATION GO
alter table [dbo].[OfficeMls] nocheck constraint [FK_OfficeMls_Office] GO
ALTER TABLE [dbo].[PersonnelMLS] ADD CONSTRAINT [FK_PersonnelMLS_Personnel] FOREIGN KEY ( [PersonnelID] ) REFERENCES [dbo].[Personnel] ( [PersonnelID] ) NOT FOR REPLICATION GO
alter table [dbo].[PersonnelMLS] nocheck constraint [FK_PersonnelMLS_Personnel] GO
Here's the query I'm having trouble with:
SELECT DISTINCT
    Personnel.PersonnelID, Personnel.FirstNM, Personnel.LastNM,
    Office.OfficeNM, Office.OfficeID,
    OfficeMls.SourceID AS OfficeBoard,
    PersonnelMLS.SourceID AS AgentBoard
FROM Personnel
INNER JOIN OfficeManagement ON Personnel.PersonnelID = OfficeManagement.PersonnelID
INNER JOIN Office ON OfficeManagement.OfficeID = Office.OfficeID
INNER JOIN OfficeMls ON Office.OfficeID = OfficeMls.OfficeID
INNER JOIN PersonnelMLS ON Personnel.PersonnelID = PersonnelMLS.PersonnelID
WHERE OfficeMls.SourceID <> PersonnelMLS.SourceID
  AND Office.OfficeNM NOT LIKE ('%admin%')
GROUP BY PersonnelMLS.SourceID, Personnel.PersonnelID, Personnel.FirstNM,
         Personnel.LastNM, Office.OfficeNM, Office.OfficeID, OfficeMls.SourceID
ORDER BY Office.OfficeNM
What I'm trying to retrieve are those agents who have source id's that are not in the Office's domain of valid source id's. Here's a small portion of the results:
PersonnelID  FirstNM         LastNM     OfficeNM          OfficeID  OfficeBoard  AgentBoard
18205        Margaret Peggy  Quattro    Aventura North    650       906          908
18205        Margaret Peggy  Quattro    Aventura North    650       918          908
15503        Susan           Jordan     Blackburn Point   889       920          909
15503        Susan           Jordan     Blackburn Point   889       921          909
15503        Susan           Jordan     Blackburn Point   889       921          920
15279        Sandra          Humphrey   Boca Beach North  890       917          906
15279        Sandra          Humphrey   Boca Beach North  890       906          917
15279        Sandra          Humphrey   Boca Beaches      626       917          906
15279        Sandra          Humphrey   Boca Beaches      626       906          917
13532        Michael         Demcho     Boca Downtown     735       906          917
14133        Maria           Ford       Boca Downtown     735       906          917
19126        Michael         Silverman  Boca Glades Road  736       917          906
18920        Beth            Schwartz   Boca Glades Road  736       906          917
If you take a look at Sandra Humphrey, you'll see she's out of office 626. Office 626 is associated with source ids 917 and 906, and Sandra Humphrey is also associated with those same two source ids, yet she still shows up in the results.
I know this was AWFULLY long winded, but I just wanted to make sure I made myself as clear as possible.
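One way to express "agents holding a board the office does not have" directly is NOT EXISTS: the `<>` join flags a row whenever *any* of the office's boards differs from the agent's board, whereas NOT EXISTS flags it only when *no* office board matches. A much-simplified sketch of the difference (table names from the post, columns trimmed, data made up), run with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE OfficeMls (OfficeID INT, SourceID INT);
CREATE TABLE PersonnelMLS (PersonnelID INT, SourceID INT);
CREATE TABLE OfficeManagement (OfficeID INT, PersonnelID INT);
-- Office 626 carries boards 917 and 906.
INSERT INTO OfficeMls VALUES (626, 917), (626, 906);
-- Agent 15279 only holds boards the office has; agent 99999 holds 555.
INSERT INTO PersonnelMLS VALUES (15279, 917), (15279, 906), (99999, 555);
INSERT INTO OfficeManagement VALUES (626, 15279), (626, 99999);
""")

# Only agents whose board is absent from their office's board list.
rows = conn.execute("""
SELECT DISTINCT pm.PersonnelID, pm.SourceID
FROM PersonnelMLS pm
JOIN OfficeManagement om ON om.PersonnelID = pm.PersonnelID
WHERE NOT EXISTS (
  SELECT 1 FROM OfficeMls o
  WHERE o.OfficeID = om.OfficeID AND o.SourceID = pm.SourceID
)
""").fetchall()
print(rows)
```

With the `<>` join, agent 15279 would appear (917 <> 906 cross-matches); with NOT EXISTS only agent 99999 does.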
How do I eliminate others from viewing one of the 2 databases on our production server? Is there any security setting that prevents all users, including sa and developers, from accessing one of the 2 databases on our server, while the other of the 2 databases can still be accessed? Please advise.
I need to eliminate duplicates in my SQL query. I tried to use DISTINCT and that doesn't seem to work; can anybody please help?
The duplicates are in the #ddtempC table, and I am writing a query to get a country name from the hash table, where the hash table has duplicates.
The hash table contains (THEATER_CODE, COUNTRY_CODE, COUNTRY_NAME), and I am trying to write a condition on THEATER_CODE and COUNTRY_CODE to get COUNTRY_NAME.
THEATER_CODE and COUNTRY_CODE have duplicates. Whenever I do a subquery I get the below error.
Msg 512, Level 16, State 1, Line 1 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
SELECT USER_FIRSTNAME, USER_LASTNAME, user_countryCode,
       USER_COUNTRY = (SELECT DISTINCT RTRIM(LTRIM(COUNTRY_NAME))
                       FROM #ddtempC
                       WHERE RTRIM(LTRIM(COUNTRY_CODE)) = USER_COUNTRYCODE
                         AND RTRIM(LTRIM(THEATER_CODE)) = USER_THEATERCODE)
FROM [user]
WHERE USER_USERNAME IS NOT NULL
  AND User_CreationDate BETWEEN '1/2/2007' AND '4/11/2008'
ORDER BY User_TheaterCode;
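Msg 512 means the correlated subquery can still return several COUNTRY_NAME rows even with DISTINCT (for instance when the names differ slightly). One fix is to collapse the hash table to exactly one row per (THEATER_CODE, COUNTRY_CODE) first and then join to that. A sketch with SQLite and simplified column/table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ddtempC (THEATER_CODE TEXT, COUNTRY_CODE TEXT, COUNTRY_NAME TEXT);
INSERT INTO ddtempC VALUES
  ('AM','US','United States'), ('AM','US','United States'),  -- duplicates
  ('EU','FR','France');
CREATE TABLE user_t (name TEXT, cc TEXT, tc TEXT);
INSERT INTO user_t VALUES ('alice','US','AM'), ('bob','FR','EU');
""")

# Collapse the lookup to one row per code pair, then join -- the join can
# never produce "more than 1 value" for a given user.
rows = conn.execute("""
SELECT u.name, c.COUNTRY_NAME
FROM user_t u
LEFT JOIN (
  SELECT THEATER_CODE, COUNTRY_CODE, MAX(TRIM(COUNTRY_NAME)) AS COUNTRY_NAME
  FROM ddtempC
  GROUP BY THEATER_CODE, COUNTRY_CODE
) c ON c.COUNTRY_CODE = u.cc AND c.THEATER_CODE = u.tc
ORDER BY u.name
""").fetchall()
print(rows)
```

In T-SQL the derived table works the same way; MAX is just an arbitrary but deterministic tie-breaker among the duplicate names.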
I'm in the process of doing a major data clean up and I'm just wondering how I would go about eliminating some redundant data.
The Table Layout
Contracts
CNTRID  CONTRACTNUM  STARTDATE  CUSTOMNUM
0       1234567      091885     A
1       1234567      091885     A
2       1111111      111111     B
3       1234567      081205     A
Equipment
EQUIPID  DEVICENAME  CNTRID  CUSTOMNUM
0        DEVICE1     0       A
1        DEVICE2     2       B
2        DEVICE3     1       A
3        DEVICE4     3       A
You will notice that each customer may have multiple devices. Each device may be tied to a contract, and each contract may have one or more devices tied to it.
In the example above, you will notice in the contracts table the contracts with the IDs 0 and 1.
Fig 1.
CNTRID  CONTRACTNUM  STARTDATE  CUSTOMNUM
0       1234567      091885     A
1       1234567      091885     A
These contracts have the exact same information.
Furthermore, if you look down the table you will notice the contract with the ID 3.
Fig 2.
CNTRID  CONTRACTNUM  STARTDATE  CUSTOMNUM
3       1234567      081205     A
This contract shares the same contract and customer number, but has a different start date.
Now let's take a look at the devices in the equipment table that refer to these records.
EQUIPID  DEVICENAME  CNTRID  CUSTOMNUM
0        DEVICE1     0       A
2        DEVICE3     1       A
3        DEVICE4     3       A
You will notice that DEVICE1 and DEVICE3 refer to the contract records that contain identical data (as shown in Fig 1).
My question is as follows:
How do I eliminate any duplicate records from the contracts table, and update the records in the equipment table with the id of the surviving contract?
Results Should be as follows:
Contracts
CNTRID  CONTRACTNUM  STARTDATE  CUSTOMNUM
0       1234567      091885     A
2       1111111      111111     B
3       1234567      081205     A
Equipment
EQUIPID  DEVICENAME  CNTRID  CUSTOMNUM
0        DEVICE1     0       A
1        DEVICE2     2       B
2        DEVICE3     0       A
3        DEVICE4     3       A
Any help you may provide would be greatly appreciated!
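One common two-step recipe: first repoint every device at the lowest CNTRID within its contract's duplicate group, then delete every contract that is not the lowest id of its group. A sketch of both steps with the post's sample data, run against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Contracts (CNTRID INT, CONTRACTNUM TEXT, STARTDATE TEXT, CUSTOMNUM TEXT);
INSERT INTO Contracts VALUES (0,'1234567','091885','A'), (1,'1234567','091885','A'),
                             (2,'1111111','111111','B'), (3,'1234567','081205','A');
CREATE TABLE Equipment (EQUIPID INT, DEVICENAME TEXT, CNTRID INT, CUSTOMNUM TEXT);
INSERT INTO Equipment VALUES (0,'DEVICE1',0,'A'), (1,'DEVICE2',2,'B'),
                             (2,'DEVICE3',1,'A'), (3,'DEVICE4',3,'A');
""")

conn.executescript("""
-- Step 1: point every device at the surviving (lowest) contract id
-- of the group that matches its current contract's natural key.
UPDATE Equipment
SET CNTRID = (SELECT MIN(c2.CNTRID)
              FROM Contracts c1
              JOIN Contracts c2 ON c2.CONTRACTNUM = c1.CONTRACTNUM
                               AND c2.STARTDATE  = c1.STARTDATE
                               AND c2.CUSTOMNUM  = c1.CUSTOMNUM
              WHERE c1.CNTRID = Equipment.CNTRID);

-- Step 2: delete every contract that is not the lowest id of its group.
DELETE FROM Contracts
WHERE CNTRID NOT IN (SELECT MIN(CNTRID) FROM Contracts
                     GROUP BY CONTRACTNUM, STARTDATE, CUSTOMNUM);
""")

contracts = conn.execute("SELECT CNTRID FROM Contracts ORDER BY CNTRID").fetchall()
equipment = conn.execute("SELECT EQUIPID, CNTRID FROM Equipment ORDER BY EQUIPID").fetchall()
print(contracts, equipment)
```

The order matters: repoint Equipment first, delete duplicates second, ideally inside one transaction.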
I have a SQL statement with two left outer joins which connects 3 tables: Vendors, Tracking & Activity. Even though each is a one-to-many relationship, I am able to join 2 tables (from Vendors to Tracking) without an issue; when I then join Activity, I get a Cartesian product. I suspected that 'DISTINCT' would take care of it.
SELECT DISTINCT CASE WHEN `vendor`.`companyname` IS NULL then 'No Company Assigned' ELSE `vendor`.`companyname` END AS companyNameSQL, `tracking`.`pkgTracking`, CASE
The above query returns all requests that meet at least one criterion. How do I edit my query so that I get requests that meet both criteria and the result set looks like below?
edit: this came out longer than I thought; any comments about anything here are greatly appreciated. Thank you for reading.

My system stores millions of records, each with fields like first name, last name, email address, city, state, zip, along with any number of user defined fields. The application allows users to define message templates with variables. They can then select a template and, for each variable in the template, type in a value or select a field.

The system allows you to query for messages you've sent by specifying criteria for the variables (not the fields). This requirement has made it difficult to normalize my data model at all for speed. What I have is this:

[fieldindex]
id int PK
name nvarchar
type datatype

[recordindex]
id int PK
....

[recordvalues]
recordid int PK
fieldid int PK
value nvarchar

Whenever messages are sent, I store which fields were mapped to what variables for that deployment. So the query with a variable criteria looks like this:

select coalesce(vm.value, rv.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid = sm.deploymentid
left outer join recordvalues rv on rv.recordid = sm.recordid and rv.fieldid = vm.fieldid
where coalesce(vm.value, rv.value) ....

This model works pretty well for searching messages with variable criteria and looking up variable values for a particular message. The big problem I have is that the recordvalues table is HUGE: 1 million records with 50 fields each = 50 million recordvalues rows. The value, two int columns, plus the two indexes I have on the table make it into a beast. Importing data takes forever. Querying the records (with a field criteria) also takes longer than it should. Makes sense; the performance was largely IO bound.

I decided to try and cut into that IO. Looking at a recordvalues table with over 100 million rows in it, there were only about 3 million unique values.

So I split the recordvalues table into two tables:

[recordvalues]
recordid int PK
fieldid int PK
valueid int

[valueindex]
id int PK
value nvarchar (unique)

Now valueindex holds 3 million unique values and recordvalues references them by id. To my surprise this shaved only 500mb off a 4gb database! Importing didn't get any faster either; although it's no longer IO bound, it appears the CPU as the new bottleneck outweighed the IO bottleneck. This is probably because I haven't optimized the queries for the new tables (was hoping it wouldn't be so hard w/o the IO problem).

Is there a better way to accomplish what I'm trying to do (eliminate the redundant data)? Does SQL have built-in constructs to do stuff like this? It seems like maybe I'm trying to duplicate functionality at a high level that may already exist at a lower level. IO is becoming a serious bottleneck. The million record, 50 field csv file is only 500mb. I would've thought that after eliminating all the redundant first name, city, last name, etc. it would be less data and not 8x more!

-Gordon
WITH cte_OrderProjectType AS (
    SELECT A.OrderID,
           MIN(TypeID) AS TypeID,
           MIN(CTType) AS CTType,
           MIN(Area)   AS Area
    FROM tableA A
    INNER JOIN tableB B ON A.PID = B.PID
    LEFT JOIN tableC C ON C.TypeID = B.TypeID
    LEFT JOIN tableD D ON D.AreaID = B.ID
    GROUP BY A.OrderID
)
This query uses min to eliminate duplicates. It takes 1.30 seconds to complete..
Is there any way I can improve the query performance ?
Being one step removed from innumerate, I was wondering whether there was a more elegant way to avoid divide by zero error instead of trudging through a bunch of isnulls.
My intuition tells me that since multiplication looks like repeated addition, that maybe division is repeated subtraction? If that's true is there a way to finesse divide by zero errors by somehow reframing the statement as multiplication instead of division?
The sql statement that is eating my kishkas is
cast(1.0 * ( (ISNULL(a.DNT,0) + ISNULL(a.rex,0) + ISNULL(a.med,0))
           - (ISNULL(b.dnt,0) + ISNULL(b.rex,0) + ISNULL(b.med,0)) )
         / (ISNULL(a.DNT,0) + ISNULL(a.rex,0) + ISNULL(a.med,0))
     as decimal(10,4)) AS TotalLossRatio
Is there a way to nucleate the error by restating the division? My assertion underlying this statement is that the a alias represents a premium paid, so between medical, pharmacy and dental, there MUST BE at least one premium paid, otherwise you wouldn't be here. the b alias is losses, so likewise, between medical, pharmacy and dental, there MUST BE at least one loss (actually, it just occurred to me that maybe there are no losses, but that would be inconceivable, but ill check again)) so that's when it struck me that maybe there's a different way to ask the question that obviates the need to do it by division.
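There is a standard trick that avoids restating the arithmetic at all: NULLIF. NULLIF(x, 0) returns NULL when x is 0, and dividing by NULL yields NULL instead of a divide-by-zero error; in T-SQL you can then wrap the result in ISNULL(..., 0) if you want a number back. A tiny sketch of the behavior, shown here with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULLIF(denom, 0) turns a zero denominator into NULL, so the division
# yields NULL rather than an error (T-SQL raises an error without it).
rows = conn.execute("""
SELECT 10.0 / NULLIF(denom, 0) AS ratio
FROM (SELECT 2 AS denom UNION ALL SELECT 0) d
ORDER BY denom DESC
""").fetchall()
print(rows)
```

Applied to the loss ratio, the denominator becomes NULLIF(ISNULL(a.DNT,0) + ISNULL(a.rex,0) + ISNULL(a.med,0), 0), which protects the one case where no premium exists.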
I am querying several tables and piping the output to an Excel spreadsheet. Several (not all) columns contain repeating data that I'd prefer not to include on the output. I only want the first row in the set to have that data. Is there a way in the query to do this under SQL 2005?
As an example, my query results are as follows (sorry if it does not show correctly):

OWNER BARN ROUTE DESC VEH DIST CASE
BARBAR TRACKING #70328VEH 32832869.941393
BARBAR TRACKING #70328VEH 32832869.941393
BARBAR TRACKING #70328VEH 32832869.941393
DAXDAX TRACKING #9398VEH 39839834.942471
DAXDAX TRACKING #9398VEH 39839834.942471
DAXDAX TRACKING #9398VEH 39839834.942471
TAXTAX TRACKING #2407 40754.391002
TAXTAX TRACKING #2407 40754.391002
TAXTAX TRACKING #2407 40754.391002
I only want the output to be:

OWNER BARN ROUTE DESC VEH DIST CASE
BARBAR TRACKING #70328VEH 32832869.941393
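If the goal is to keep each group's first row and blank the repeating columns on the rest, ROW_NUMBER (available in SQL 2005) makes this straightforward: number the rows within each group, and show the repeating column only where the row number is 1. A sketch with simplified, made-up columns, run against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (owner TEXT, casenum INT)")
conn.executemany("INSERT INTO t VALUES (?,?)", [("BAR", 1), ("BAR", 2), ("DAX", 3)])

# Show the repeating 'owner' value only on the first row of each group.
rows = conn.execute("""
SELECT CASE WHEN rn = 1 THEN owner ELSE '' END AS owner_display, casenum
FROM (SELECT owner, casenum,
             ROW_NUMBER() OVER (PARTITION BY owner ORDER BY casenum) AS rn
      FROM t) x
ORDER BY owner, casenum
""").fetchall()
print(rows)
```

That said, if the output is headed to Excel anyway, suppressing repeats is often easier in the presentation layer than in the query.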
I am new to SQL Server and I am having difficulties writing a SQL script to perform the following: 1) Merge data from two tables A and B. 2) Eliminate duplicates present in table B (conditions for a duplicate: a similar address is found in both tables AND the class type in table A = 1). 3) Merge the data related to the eliminated duplicate records into the new table. I am not sure if we can eliminate records first before merging the two tables. The tables are as follows:
Table A
ID, NAME, Address, city, zip, Class type
123, John, 123 Main, NY, 71690, 1
124, Tom, 100 State, LA, 91070, 0

Table B
ID, NAME, Address, city, zip, Class Type
200, Tim, 123 Main, NY, 71690, 0 (duplicate; satisfies both conditions and is left out of the final table)
124, Jack, 100 State, LA, 91070, 0 (same address, but the second condition is not met)
320, Bob, 344 coast hwy, slc, 807760, 0

Final Table
ID, NAME, Address, city, zip, Class Type
123, John, 123 Main, NY, 71690, 1 (should also show t
124, Tom, 100 State, LA, 91070, 0
124, Jack, 100 State, LA, 91070, 0
320, Bob, 344 coast hwy, slc, 807760, 0
Table D (related to table A: showing all products that are related to table A)
table_A.ID, Products
123, Paper 1
123, paper 2

Table E (related to table B: showing all products that are related to table B)
table_B.ID, Products
200, Paper 3

Final Table
ID, Product
123, Paper 1
123, Paper 2
123, Paper 3 (changing table B's id to table A's)
Would appreciate any help writing script to perform such transformation. Thanks
I'm trying to eliminate the duplicate 'URL' rows in the query:
SELECT ni.[Id], ni.[Abstract], ni.[MostPopular], ni.[URL]
FROM dbo.[NewsCategory] nc WITH (READUNCOMMITTED)
INNER JOIN dbo.[NewsItem] ni WITH (READUNCOMMITTED) ON nc.[Id] = ni.NewsCategoryId
WHERE
    --nc.[ProviderId] = @ProviderId AND
    ni.[URL] IN (
        SELECT DISTINCT URL FROM dbo.NewsItem
        WHERE mostpopular = 1 -- OR mostemailed = 1
    )
ORDER BY ni.[DateStamp] DESC
If you look at this line in the query :
select DISTINCT URL from dbo.NewsItem where mostpopular = 1
If I run this query alone it returns 8 unique rows. I expected that the SELECT ... IN statement would help return a distinct set, but it doesn't: the entire query returns about 20 rows, with duplicates.
The reason why I can't do a distinct in the first set of columns is because the column ni.[Abstract] is TEXT and it says that data type is NOT COMPARABLE.
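Since the legacy TEXT type is not comparable, DISTINCT over the whole row is off the table, but ROW_NUMBER (SQL 2005) can pick exactly one row per URL without ever comparing the Abstract column. A reduced sketch with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE NewsItem (Id INT, Abstract TEXT, URL TEXT, DateStamp TEXT)")
conn.executemany("INSERT INTO NewsItem VALUES (?,?,?,?)",
    [(1, "a", "u1", "2008-01-01"),
     (2, "b", "u1", "2008-01-02"),   # duplicate URL, newer date
     (3, "c", "u2", "2008-01-01")])

# Number the rows per URL; rn = 1 keeps the newest row for each URL,
# and Abstract rides along without ever being compared.
rows = conn.execute("""
SELECT Id, URL FROM (
  SELECT Id, URL,
         ROW_NUMBER() OVER (PARTITION BY URL ORDER BY DateStamp DESC) AS rn
  FROM NewsItem) t
WHERE rn = 1
ORDER BY URL
""").fetchall()
print(rows)
```

In the real query the PARTITION BY URL subquery would wrap the joined result, and the outer SELECT can still return ni.[Abstract].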
Hi, I have a table which contains:

value
-----
a
a
a
b
b
b
c
c
c
Now I need the results to be:
a 1
b 1
c 1
I tried using DISTINCT, but OLE DB returns an error that the syntax is invalid; it doesn't support the DISTINCT keyword. I'm actually reading this table from a file through OLE DB, not from a database. Any ideas? Thanks in advance.
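Where a provider rejects DISTINCT, GROUP BY usually produces the same result and is more widely supported. A sketch of the idea (using SQLite here in place of the OLE DB text driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (value TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("a",)] * 3 + [("b",)] * 3 + [("c",)] * 3)

# GROUP BY collapses the repeats just like DISTINCT would,
# and the literal 1 produces the second column from the post.
rows = conn.execute("SELECT value, 1 FROM t GROUP BY value ORDER BY value").fetchall()
print(rows)
```

If GROUP BY is also unsupported by the driver, the fallback is to read all rows and de-duplicate client-side.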
Am I going about this the right way? I want to find pairs of entities in a table that have some relationship (such as a field being the same), so I

select t1.id, t2.id
from sametable t1
join sametable t2 on t1.id <> t2.id
where t1.fieldx = t2.fieldx ...

The trouble is, this returns each pair twice, e.g.

B C
C B
M N
N M

Is there a way to do this kind of thing and only get each pair once?

Kerry
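The usual fix is to join with < instead of <>: each unordered pair then appears exactly once, with its ids in ascending order. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sametable (id TEXT, fieldx INT)")
conn.executemany("INSERT INTO sametable VALUES (?,?)",
                 [("B", 1), ("C", 1), ("M", 2), ("N", 2)])

# t1.id < t2.id admits (B, C) but rejects the mirror image (C, B).
rows = conn.execute("""
SELECT t1.id, t2.id
FROM sametable t1
JOIN sametable t2 ON t1.id < t2.id AND t1.fieldx = t2.fieldx
ORDER BY t1.id
""").fetchall()
print(rows)
```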
Suppose I have users that can belong to organizations. Organizations are arranged in a tree. Each organization has only one parent organization, but a user may be a member of multiple organizations.

The problem that I'm facing is that both organizations and individual users may have relationships with other entities which are semantically the same. For instance, an individual user can purchase things and so can an organization. An individual user can have business partners and so can an organization. So it seems that I would need a duplicate set of link tables: one that links a user to a purchase and a parallel link table linking an organization to a purchase. If I have N entities with which both users and organizations may have relationships, then I need 2*N link tables. There is nothing wrong with that per se, but it is not elegant to have two different tables for a relationship which is the same in nature, e.g. purchaser->purchaseditem.

One other approach I was thinking of is to create an intermediate entity (say it's called "holder") that will be used to hold references to all the relationships that both an organization and an individual may have. There will be 2 link tables linking organizations to "holder" and users to "holder". Holder will in turn reference the purchases, partners and so on. In this case the number of link tables will be N+2 as opposed to 2*N, but it will have the performance cost of an extra join.

Is there a better way of modelling this notion of 2 different entities that can possess similar relationships with N other entities?
INSERT INTO #LatLong SELECT DISTINCT Latitude, Longitude FROM RGCcache
When I run it I get the following error: "Violation of PRIMARY KEY constraint 'PK__#LatLong__________7CE3D9D4'. Cannot insert duplicate key in object 'dbo.#LatLong'."
I'm not sure how this is failing, as when I try creating another table with 2 decimal columns and repeated values, SELECT DISTINCT only returns distinct pairs of values.
The failure may be related to the fact that RGCcache has about 10 million rows, but I can't see why.
How can I create a row-wise trigger in SQL Server?
CREATE TRIGGER trigName ON tableName for INSERT , UPDATE , DELETE AS ...
For a multi-row delete, I get only one trigger invocation, but I need individual trigger calls for each row. How can I do this in T-SQL? Is there any usage like FOR EACH ROW in Oracle? Is it possible through an INSTEAD OF trigger? Please help!
I need to run a SELECT DISTINCT query across multiple fields, but I need to add another field that is NON-DISTINCT to my record set. Here is my query:

SELECT DISTINCT lastname, firstname, middleinitial, address1, address2, city, state, zip, age, gender
FROM gpresults
WHERE age >= '18'
  AND serviceline not in ('4TH','4E','4W')
  AND financialclass not in ('Z','X')
  AND age not in ('1','2','3','4','5','6','7','8','9','0')
  AND (CAST(ADMITDATE AS DATETIME) >= DATEDIFF(day, 60, GETDATE()))
ORDER BY zip

This query runs perfectly. No problems whatsoever. However, I need to also include another field called "admitdate" that should be treated as NON-DISTINCT. How do I add this in to the query? I've tried this but it doesn't work:

SELECT admitdate
FROM (SELECT DISTINCT lastname, firstname, middleinitial, address1, address2, city, state, zip, age, gender FROM gpresults)
WHERE age >= '18'
  AND serviceline not in ('4TH','4E','4W')
  AND financialclass not in ('Z','X')
  AND age not in ('1','2','3','4','5','6','7','8','9','0')
  AND (CAST(ADMITDATE AS DATETIME) >= DATEDIFF(day, 60, GETDATE()))
ORDER BY zip

This has to be simple but I do not know the syntax to accomplish this. Thanks
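One way is to drop DISTINCT and use GROUP BY on the fields that must be unique, pulling admitdate through an aggregate such as MAX, so each distinct person gets their latest admit date. A cut-down sketch with hypothetical rows, run against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gpresults (lastname TEXT, zip TEXT, admitdate TEXT)")
conn.executemany("INSERT INTO gpresults VALUES (?,?,?)",
    [("smith", "30301", "2008-01-05"),
     ("smith", "30301", "2008-02-10"),   # same person, later admit
     ("jones", "30302", "2008-01-20")])

# GROUP BY plays the role of DISTINCT; MAX carries the extra field along.
rows = conn.execute("""
SELECT lastname, zip, MAX(admitdate) AS admitdate
FROM gpresults
GROUP BY lastname, zip
ORDER BY lastname
""").fetchall()
print(rows)
```

In the real query, every column from the DISTINCT list goes into the GROUP BY, the WHERE clause stays as-is, and only admitdate is aggregated. If each admit should remain its own row instead, DISTINCT with admitdate included already does that.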
Hi all, I'm getting this error when trying to import data from a text file into SQL Server 2000 (Windows Server 2003) using the DTS import wizard. Any ideas what could be causing this? There aren't any restrictions (that I can find) on the file sizes etc. Thanks in advance. Dave
When I highlight a few partitions and start processing, the process occasionally stops with a message that operation has been canceled, like this:
Response 3
Server: the operation has been cancelled.
Response 4
Server: the operation has been cancelled.
Response 5
Server: the operation has been cancelled.
..etc...
(no further error message details are provided) SQL profiler shows that batch was completed (but rowcount shown in process progress log is too small).
Analysis Services profiler shows no messages at that time at all. It just shows messages when it started, and when I restarted the processing.
The Analysis trace appears to stop when processing stops with "Server: the operation has been cancelled."

This is an occasional error and sometimes occurs within 15-20 minutes from starting to process. It could fail on the 1st partition in the process list, or on some partition in the middle. Some partitions might run for a few hours and not error out, but sometimes it fails quickly.

I was not sure if this is an issue with the underlying fact data, so I broke up the last partition where it failed into 5 smaller partitions; they processed OK separately.

I think restarting SQL Server and SSAS helps to process a few more partitions. Some that failed with this problem process OK after a restart, but some fail again. (Apologies for re-posting; I wanted to put this under a more specific thread title.)
Storing a large graph in relational form doesn't let us perform graph operations such as shortest path very efficiently. I'm wondering if storing the graph as objects would be better? How should I design the schema? Thanks!
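Before abandoning the relational form, note that an edge table plus a recursive CTE can compute unweighted shortest paths reasonably well for moderate graphs (SQL Server 2005+ supports recursive CTEs, written without the RECURSIVE keyword). A sketch with a hypothetical 4-node graph, run against SQLite, with a depth cap to guarantee termination:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE edges (src INT, dst INT);
INSERT INTO edges VALUES (1,2), (2,3), (1,3), (3,4);
""")

# Breadth-like expansion from node 1: each recursion step extends every
# known (node, depth) pair by one edge; UNION de-duplicates pairs and the
# depth cap bounds the recursion. MIN(depth) is the shortest path length.
dist = conn.execute("""
WITH RECURSIVE walk(node, depth) AS (
  SELECT 1, 0
  UNION
  SELECT e.dst, w.depth + 1
  FROM walk w JOIN edges e ON e.src = w.node
  WHERE w.depth < 10
)
SELECT MIN(depth) FROM walk WHERE node = 4
""").fetchone()[0]
print(dist)
```

This stays workable into the tens or hundreds of thousands of edges with an index on src; for very large graphs or weighted shortest paths, a dedicated in-memory structure or graph engine does become the better fit.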