T-SQL (SS2K8) :: Eliminating Duplicates While Insert?
Apr 10, 2014
WITH cte_OrderProjectType AS
(
select A.Orderid, min(TypeID) AS TypeID, min(CTType) AS CTType, MIN(Area) AS Area
from tableA A inner join
tableB B ON A.PID = B.PID left join
tableC C ON C.TypeID = B.TypeID LEFT JOIN
tableD D ON D.AreaID = B.ID
group by A.orderid
)
This query uses MIN to eliminate duplicates. It takes 1.30 seconds to complete.
Is there any way I can improve the query performance?
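One alternative worth trying on SQL 2008 is ROW_NUMBER instead of MIN; it is not guaranteed to be faster (the join columns usually need supporting indexes either way), and note that it keeps all columns from one row per order, whereas MIN takes each column's minimum independently. A sketch, qualifying TypeID as C.TypeID (an assumption, since the post leaves it unqualified) and leaving CTType and Area as posted:
WITH cte_OrderProjectType AS
(
    SELECT A.Orderid, C.TypeID, CTType, Area,
           ROW_NUMBER() OVER (PARTITION BY A.Orderid ORDER BY C.TypeID) AS rn
    FROM tableA A
    INNER JOIN tableB B ON A.PID = B.PID
    LEFT JOIN tableC C ON C.TypeID = B.TypeID
    LEFT JOIN tableD D ON D.AreaID = B.ID
)
SELECT Orderid, TypeID, CTType, Area
FROM cte_OrderProjectType
WHERE rn = 1;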
View 9 Replies
Feb 11, 2005
Have a pretty simple question but the answer seems to be evading me:
Here's the DDL for the tables in question:
CREATE TABLE [dbo].[Office] (
[OfficeID] [int] IDENTITY (1, 1) NOT NULL ,
[ParentOfficeID] [int] NOT NULL ,
[WebSiteID] [int] NOT NULL ,
[IsDisplayOnWeb] [bit] NOT NULL ,
[IsDisplayOnAdmin] [bit] NOT NULL ,
[OfficeStatus] [char] (1) NOT NULL ,
[DisplayORD] [smallint] NOT NULL ,
[OfficeTYPE] [varchar] (10) NOT NULL ,
[OfficeNM] [varchar] (50) NOT NULL ,
[OfficeDisplayNM] [varchar] (50) NOT NULL ,
[OfficeADDR1] [varchar] (50) NOT NULL ,
[OfficeADDR2] [varchar] (50) NOT NULL ,
[OfficeCityNM] [varchar] (50) NOT NULL ,
[OfficeStateCD] [char] (2) NOT NULL ,
[OfficePostalCD] [varchar] (15) NOT NULL ,
[OfficeIMG] [varchar] (100) NOT NULL ,
[OfficeIMGPath] [varchar] (100) NOT NULL ,
[RegionID] [int] NOT NULL ,
[OfficeTourURL] [varchar] (255) NULL ,
[GeoAreaID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[OfficeBrandedURL] [varchar] (255) NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OfficeManagement] (
[OfficeID] [int] NOT NULL ,
[PersonnelID] [int] NOT NULL ,
[JobTitleID] [int] NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[SeqNBR] [int] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[OfficeMls] (
[OfficeID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[OfficeMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[Personnel] (
[PersonnelID] [int] IDENTITY (1, 1) NOT NULL ,
[PersonnelDisplayName] [varchar] (100) NOT NULL ,
[FirstNM] [varchar] (50) NOT NULL ,
[PreferredFirstNM] [varchar] (50) NOT NULL ,
[MiddleNM] [varchar] (50) NOT NULL ,
[LastNM] [varchar] (50) NOT NULL ,
[PersonalTaxID] [varchar] (9) NOT NULL ,
[HireDT] [datetime] NOT NULL ,
[TermDT] [datetime] NOT NULL ,
[HomePhoneNBR] [varchar] (15) NULL ,
[HomeADDR1] [varchar] (50) NOT NULL ,
[HomeADDR2] [varchar] (50) NOT NULL ,
[HomeCityNM] [varchar] (50) NOT NULL ,
[HomeStateCD] [char] (2) NOT NULL ,
[HomePostalCD] [varchar] (15) NOT NULL ,
[PersonnelLangCSV] [varchar] (500) NOT NULL ,
[PersonnelSlogan] [varchar] (500) NOT NULL ,
[BGColor] [varchar] (50) NOT NULL ,
[IsEAgent] [bit] NOT NULL ,
[IsArchAgent] [bit] NOT NULL ,
[IsOptOut] [bit] NOT NULL ,
[IsDispOnlyPrefFirstNM] [bit] NOT NULL ,
[IsHideMyListingLink] [bit] NOT NULL ,
[IsPreviewsSpecialist] [bit] NOT NULL ,
[AudioFileNM] [varchar] (100) NULL ,
[iProviderID] [int] NOT NULL ,
[DRENumber] [varchar] (10) NOT NULL ,
[AgentBrandedURL] [varchar] (255) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL ,
[IsDisplayAwards] [bit] NOT NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[PersonnelMLS] (
[PersonnelID] [int] NOT NULL ,
[SourceID] [int] NOT NULL ,
[AgentMlsNBR] [varchar] (20) NOT NULL ,
[CreateDT] [datetime] NOT NULL ,
[UpdateDT] [datetime] NOT NULL ,
[CreateByID] [varchar] (50) NOT NULL ,
[UpdateByID] [varchar] (50) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Office] ADD
CONSTRAINT [FK_Office_OfficeProfile] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[OfficeProfile] (
[OfficeID]
) NOT FOR REPLICATION
GO
alter table [dbo].[Office] nocheck constraint [FK_Office_OfficeProfile]
GO
ALTER TABLE [dbo].[OfficeManagement] ADD
CONSTRAINT [FK_OfficeManagement_LookupJobTitle] FOREIGN KEY
(
[JobTitleID]
) REFERENCES [dbo].[LookupJobTitle] (
[JobTitleID]
),
CONSTRAINT [FK_OfficeManagement_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_OfficeManagement_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) ON DELETE CASCADE
GO
alter table [dbo].[OfficeManagement] nocheck constraint [FK_OfficeManagement_Office]
GO
ALTER TABLE [dbo].[OfficeMls] ADD
CONSTRAINT [FK_OfficeMls_Office] FOREIGN KEY
(
[OfficeID]
) REFERENCES [dbo].[Office] (
[OfficeID]
) NOT FOR REPLICATION
GO
alter table [dbo].[OfficeMls] nocheck constraint [FK_OfficeMls_Office]
GO
ALTER TABLE [dbo].[PersonnelMLS] ADD
CONSTRAINT [FK_PersonnelMLS_Personnel] FOREIGN KEY
(
[PersonnelID]
) REFERENCES [dbo].[Personnel] (
[PersonnelID]
) NOT FOR REPLICATION
GO
alter table [dbo].[PersonnelMLS] nocheck constraint [FK_PersonnelMLS_Personnel]
GO
Here's the query I'm having trouble with:
SELECT distinct Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID AS OfficeBoard,
PersonnelMLS.SourceID AS AgentBoard
FROM Personnel INNER JOIN
OfficeManagement ON
Personnel.PersonnelID = OfficeManagement.PersonnelID
INNER JOIN
Office ON OfficeManagement.OfficeID = Office.OfficeID
INNER JOIN
OfficeMls ON Office.OfficeID = OfficeMls.OfficeID
INNER JOIN
PersonnelMLS ON Personnel.PersonnelID = PersonnelMLS.PersonnelID
where officemls.sourceid <> personnelmls.sourceid
and office.officenm not like ('%admin%')
group by PersonnelMLS.SourceID,
Personnel.PersonnelID,
Personnel.FirstNM,
Personnel.LastNM,
Office.OfficeNM,
Office.OfficeID,
OfficeMls.SourceID
order by office.officenm
What I'm trying to retrieve are those agents who have source id's that are not in the Office's domain of valid source id's. Here's a small portion of the results:
PersonnelID FirstNM LastNM OfficeNM OfficeID OfficeBoard AgentBoard
----------- -------------------------------------------------- -------------------------------------------------- -------------------------------------------------- ----------- ----------- -----------
18205 Margaret Peggy Quattro Aventura North 650 906 908
18205 Margaret Peggy Quattro Aventura North 650 918 908
15503 Susan Jordan Blackburn Point 889 920 909
15503 Susan Jordan Blackburn Point 889 921 909
15503 Susan Jordan Blackburn Point 889 921 920
15279 Sandra Humphrey Boca Beach North 890 917 906
15279 Sandra Humphrey Boca Beach North 890 906 917
15279 Sandra Humphrey Boca Beaches 626 917 906
15279 Sandra Humphrey Boca Beaches 626 906 917
13532 Michael Demcho Boca Downtown 735 906 917
14133 Maria Ford Boca Downtown 735 906 917
19126 Michael Silverman Boca Glades Road 736 917 906
18920 Beth Schwartz Boca Glades Road 736 906 917
If you take a look at Sandra Humphrey, you'll see she's out of office 626. Office 626 is associated with source IDs 906 and 917. Sandra Humphrey is also associated with those two source IDs, but she still shows up in the results.
I know this was AWFULLY long winded, but I just wanted to make sure I made myself as clear as possible.
Any help would be greatly appreciated.
Thanks in advance!
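One way to express "the agent has a source ID that is not among the office's source IDs" directly is NOT EXISTS; the <> comparison in the original produces a row for every non-matching pair, which is why agents like Sandra Humphrey still appear. A sketch against the tables above:
SELECT DISTINCT p.PersonnelID, p.FirstNM, p.LastNM, o.OfficeNM, o.OfficeID, pm.SourceID AS AgentBoard
FROM Personnel p
INNER JOIN OfficeManagement om ON p.PersonnelID = om.PersonnelID
INNER JOIN Office o ON om.OfficeID = o.OfficeID
INNER JOIN PersonnelMLS pm ON p.PersonnelID = pm.PersonnelID
WHERE o.OfficeNM NOT LIKE '%admin%'
  AND NOT EXISTS (SELECT 1
                  FROM OfficeMls oml
                  WHERE oml.OfficeID = o.OfficeID
                    AND oml.SourceID = pm.SourceID)
ORDER BY o.OfficeNM;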
View 8 Replies
View Related
Apr 11, 2008
Hi All,
I need to eliminate duplicates in my SQL query. I tried using DISTINCT and that doesn't seem to work; can anybody please help?
The duplicates are in the #ddtempC table, and I am writing a query to get a country name from the hash table, where the hash table has duplicates.
The hash table contains (THEATER_CODE, COUNTRY_CODE, COUNTRY_NAME).
I am trying to write a condition on THEATER_CODE and COUNTRY_CODE to get COUNTRY_NAME,
and THEATER_CODE and COUNTRY_CODE have duplicates. Whenever I do a subquery I get the error below.
Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
SELECT USER_FIRSTNAME, USER_LASTNAME,
user_countryCode,
USER_COUNTRY = (SELECT DISTINCT RTRIM(LTRIM(COUNTRY_NAME)) FROM #ddtempC WHERE RTRIM(LTRIM(COUNTRY_CODE)) = USER_COUNTRYCODE AND RTRIM(LTRIM(THEATER_CODE)) = USER_THEATERCODE)
FROM [user]
WHERE USER_USERNAME IS NOT NULL AND User_CreationDate BETWEEN '1/2/2007' AND '4/11/2008'
ORDER BY User_TheaterCode;
Thanks in Advance.
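The DISTINCT inside a scalar subquery still errors when more than one distinct COUNTRY_NAME matches; collapsing to a single value with TOP 1 (or MIN) avoids Msg 512. A sketch of the same query with that one change:
SELECT USER_FIRSTNAME, USER_LASTNAME, user_countryCode,
       USER_COUNTRY = (SELECT TOP 1 RTRIM(LTRIM(COUNTRY_NAME))
                       FROM #ddtempC
                       WHERE RTRIM(LTRIM(COUNTRY_CODE)) = USER_COUNTRYCODE
                         AND RTRIM(LTRIM(THEATER_CODE)) = USER_THEATERCODE)
FROM [user]
WHERE USER_USERNAME IS NOT NULL AND User_CreationDate BETWEEN '1/2/2007' AND '4/11/2008'
ORDER BY User_TheaterCode;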
View 3 Replies
View Related
Jul 20, 2005
Am I going about this the right way? I want to find pairs of entities in a table that have some relationship (such as a field being the same), so I
select t1.id, t2.id
from sametable t1
join sametable t2 on t1.id <> t2.id
where t1.fieldx = t2.fieldx ...
The trouble is, this returns each pair twice, e.g.
B C
C B
M N
N M
Is there a way to do this kind of thing and only get each pair once?
Kerry
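One common fix is to join on < instead of <>, so only one ordering of each pair survives; a sketch against the same table:
select t1.id, t2.id
from sametable t1
join sametable t2 on t1.id < t2.id
where t1.fieldx = t2.fieldx;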
View 2 Replies
View Related
Jul 27, 2015
We are trying to do some utilization calculations that need to factor in a given number of holiday hours per month.
I have a date dimension table (dimdate). Has a row for every day of every year (2006-2015)
I have a work entry fact table (timedetail). Has a row for every work entry. Each row has a worked date, and this column has a relationship to dimdate.
Our holidays fluctuate, and we offer floating holidays that our staff get to pick, so we cannot hard-code individual dates in dimdate as holidays. What we have done instead is add a column to our dimdate table called HolidayHoursPerMonth.
This column lists the number of holiday hours available in the month that the individual date falls within, so there are a lot of duplicates. Below is a brief example of dimdate. In the example below, there are 0 holiday hours for the month of June, and there are 8 holiday hours for the month of July.
DateKey MonthNumber HolidayHoursPerMonth
6/29/2015 6 0
6/30/2015 6 0
7/1/2015 7 8
7/2/2015 7 8
I have a pivot table created based on the fact table. I then have various date slicers from the dimension table (i.e. year, month). If I simply drag this column into the pivot table and summarize by MAX, it works when you are sliced on a single month, but breaks if anything but a single month is sliced on.
I am trying to create a measure that calculates the amount of holiday hours based on what's sliced, but only using a single value for each month. For example, July should just be 8, not 8 x the number of days in the month.
Listed below are the hours per month. So if you were to slice on an entire year, the measure should equal 64. If you sliced on Jan, Feb and March, the measure should equal 12. If you were to slice nothing, thus including all 10 years in our dimdate table, the measure should equal 640 (10 years x 64 hours per year).
MonthNumberOfYear HolidayHoursPerMonth
1 8
2 4
3 0
4 0
5 8
6 0
7 8
8 0
9 8
10 4
11 16
12 8
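For what it's worth, the arithmetic being asked for (one HolidayHoursPerMonth value per calendar month in the selected range, then summed) can be sketched in T-SQL against dimdate; in the actual pivot model this would be a measure rather than a query, and @FromDate/@ToDate are hypothetical stand-ins for the slicer selection:
DECLARE @FromDate date, @ToDate date;
SET @FromDate = '2015-01-01';
SET @ToDate   = '2015-03-31';
SELECT SUM(m.HolidayHoursPerMonth) AS HolidayHours   -- 8 + 4 + 0 = 12 for Jan-Mar
FROM (
    SELECT YEAR(DateKey) AS Yr, MonthNumber,
           MAX(HolidayHoursPerMonth) AS HolidayHoursPerMonth
    FROM dimdate
    WHERE DateKey BETWEEN @FromDate AND @ToDate
    GROUP BY YEAR(DateKey), MonthNumber
) m;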
View 3 Replies
View Related
Oct 22, 2014
I have a table with 22 million Business records. I can see that there are duplicates when I group by BusinessName and Address and Phone. I'd like to place only the duplicates into a table, with a ranking, oldest business key gets a ranking of 1.
As a bonus I'd like each group to have a distinct group name (although not necessary, just want to know how to do this)
Later after I run more verifications to make sure these are not referenced elsewhere I'll delete everything with a matchRank > 1 out of the main Business table.
DROP TABLE [dbo].[TestBusiness];
GO
CREATE TABLE [dbo].[TestBusiness](
[Business_pk] INT IDENTITY(1,1) NOT NULL,
[BusinessName] VARCHAR (200) NOT NULL,
[Address] VARCHAR(MAX) NOT NULL,
[code]....
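A sketch of the duplicate extraction, assuming the elided columns include Phone and using ROW_NUMBER so the oldest Business_pk in each (BusinessName, Address, Phone) group gets matchRank 1; the group's lowest key doubles as a distinct group label, and dbo.TestBusinessDuplicates is a hypothetical target table name:
;WITH grps AS
(
    SELECT BusinessName, Address, Phone, MIN(Business_pk) AS GroupKey
    FROM dbo.TestBusiness
    GROUP BY BusinessName, Address, Phone
    HAVING COUNT(*) > 1                    -- only true duplicates
)
SELECT b.*,
       g.GroupKey AS GroupName,            -- distinct label per duplicate group
       ROW_NUMBER() OVER (PARTITION BY g.GroupKey
                          ORDER BY b.Business_pk) AS matchRank   -- oldest key = 1
INTO dbo.TestBusinessDuplicates
FROM dbo.TestBusiness b
INNER JOIN grps g ON g.BusinessName = b.BusinessName
                 AND g.Address = b.Address
                 AND g.Phone = b.Phone;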
View 9 Replies
View Related
Oct 21, 2014
Being one step removed from innumerate, I was wondering whether there is a more elegant way to avoid a divide-by-zero error instead of trudging through a bunch of ISNULLs.
My intuition tells me that since multiplication looks like repeated addition, maybe division is repeated subtraction?
If that's true, is there a way to finesse divide-by-zero errors by somehow reframing the statement as multiplication instead of division?
The sql statement that is eating my kishkas is
cast(1.0*(
(ISNULL(a.DNT,0)+ISNULL(a.rex,0)+ISNULL(a.med,0))-(ISNULL(b.dnt,0)+ISNULL(b.rex,0)+ISNULL(b.med,0))/
ISNULL(a.DNT,0)+ISNULL(a.rex,0)+ISNULL(a.med,0)) as decimal(10,4)) TotalLossRatio
Is there a way to neutralize the error by restating the division? My assertion underlying this statement is that the a alias represents a premium paid, so between medical, pharmacy and dental, there MUST BE at least one premium paid, otherwise you wouldn't be here. The b alias is losses, so likewise, between medical, pharmacy and dental, there MUST BE at least one loss (actually, it just occurred to me that maybe there are no losses, but that would be inconceivable; I'll check again), so that's when it struck me that maybe there's a different way to ask the question that obviates the need to do it by division.
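One way to defuse the divide-by-zero without restating it as multiplication is NULLIF: dividing by NULL yields NULL rather than an error, and an outer ISNULL can turn that into 0 if preferred. A sketch, with parentheses added around the whole denominator (the original appears to be missing them):
CAST(ISNULL(
     1.0 * ( (ISNULL(a.DNT,0) + ISNULL(a.rex,0) + ISNULL(a.med,0))
           - (ISNULL(b.dnt,0) + ISNULL(b.rex,0) + ISNULL(b.med,0)) )
     / NULLIF(ISNULL(a.DNT,0) + ISNULL(a.rex,0) + ISNULL(a.med,0), 0)
     , 0) AS decimal(10,4)) AS TotalLossRatio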
View 6 Replies
View Related
Sep 13, 2014
I have the piece of SQL code below that keeps producing duplicates. How do I resolve this?
isnull((select distinct (SUM(a1.ActualDebit) - SUM(a1.ActualCredit) ) from #MainAccount a1
LEFT OUTER JOIN
#BudgetAccount bb ON aa.AccountID = bb.AccountID AND a1.PeriodStartdate = bb.PeriodStartDate and
a1.DateMonth=bb.DateMonth and a1.Budget = bb.Budget WHERE a1.AccountID = aa.AccountID and
a1.Refdate >= @FROMDATE and a1.Refdate <= @TODATE GROUP BY a1.group1, a1.Group2),0)
As Actual_CurrentMonth,
View 1 Replies
View Related
Aug 12, 2014
I need to join three tables without producing duplicate records.
I have tried and attached the computed results and also the expected results.
IF OBJECT_ID('tempdb..#tmpExam1')IS NOT NULL DROP TABLE #tmpExam1
IF OBJECT_ID('tempdb..#tmpExam2')IS NOT NULL DROP TABLE #tmpExam2
IF OBJECT_ID('tempdb..#tmpExam3')IS NOT NULL DROP TABLE #tmpExam3
[Code]....
View 4 Replies
View Related
Mar 9, 2015
I have this statement:
SELECT top 100 P.LastName ,
P.FirstName,
P.MiddleName,
P.PractitionerTypeID,
P.SuffixID,
A.Linenumber1,
A.LineNumber2,
Z.City,
[code]....
Seems like no matter which join type I choose, I still get duplicates.
View 5 Replies
View Related
May 20, 2015
Assuming I have a table similar to the following:
Auto_ID Account_ID Account_Name Account_Contact Priority
1 3453463 Tire Co Doug 1
2 4363763 Computers Inc Sam 1
3 7857433 Safety First Heather 1
4 2326743 Car Dept Clark 1
5 2342567 Sales Force Amy 1
6 4363763 Computers Inc Jamie 2
7 2326743 Car Dept Jenn 2
I'm trying to delete all duplicate Account_IDs, keeping only the row with the highest priority (in this case the lowest number).
I know the following would delete duplicate Account_IDs:
DELETE FROM staging_account
WHERE auto_id NOT IN
(SELECT MAX(auto_id)
FROM staging_account
GROUP BY account_id)
The problem is this doesn't take into account the priority; in the above example I would want to keep auto_ids 2 and 4 because they have a higher priority (1) than auto_ids 6 and 7 (priority 2).
How can I take priority into account and still remove duplicates in this scenario?
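One way to take priority into account is to number the rows per account ordered by priority and delete everything after the first; the auto_id tie-breaker is an assumption for when two rows share the same priority:
;WITH ranked AS
(
    SELECT auto_id,
           ROW_NUMBER() OVER (PARTITION BY account_id
                              ORDER BY priority ASC, auto_id ASC) AS rn
    FROM staging_account
)
DELETE FROM ranked
WHERE rn > 1;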
View 3 Replies
View Related
Aug 11, 2015
I have a bunch of contacts that I've scored on how well their names match other contacts in the same business. I can programmatically figure out how to parse the results, but I would like to know how to do this via SQL. My problem is that for Business_fk 968976 I have 7 contacts; in the end I should have 4 contacts based on name match. For the business key listed, Gerardo Lopez is in the ContactScore table twice, for Contact keys 7355719 and 57028145. I then have two rows like so:
PossibleBusinessContactMatch_pk BusinessContact_fk Business_fk BusinessContactMatch_fk MatchTypeCode MatchScore MatchRank FirstName LastName Phone Email
------------------------------- ------------------ ----------- ----------------------- ------------- ----------- ----------- -------------------------------------------------- -------------------------------------------------- ---------- --------------------------------------------------------------------------------------------------------------------------------
1772960 57028145 968976 7355719 C 46 1 GERARDO I LOPEZ 8162214000
838834 7355719 968976 57028145 C 50 1 GERARDO
They each reference each other, and 2 is a good case; a more difficult case would have key 1 listed 10 times showing a ContactMatch_fk of 2 - 11, and then Contact_fk 2 listed 10 times with a ContactMatch_fk of 1, 3-11. I know 57028145 maps to 7355719 from the first row in the ContactScore table, so when Contact_fk 7355719 comes up I should be able to skip it and not process that match. Hopefully that makes sense. Anyway, here is the test data:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ContactScore]') AND type in (N'U'))
DROP TABLE [dbo].[ContactScore];
GO
CREATE TABLE [dbo].[ContactScore]
(
[ContactScore_pk]INT NOT NULL,
[Contact_fk]INT NOT NULL,
[code]..
View 9 Replies
View Related
Aug 11, 2014
I have a patient record and emergency contact information. I need to find duplicate phone numbers in the emergency contact table based on relationship type (RelationType) between emergency contact and patient. For example, if a patient is a child and has the mother listed twice with the same number, I need to filter those records. The same would be true if a father were listed; in any case there should be only one father or one mother listed for a patient regardless. The link between patient and emergency contact is person_gu. If two siblings are linked to the same person_gu, there should still be only one emergency contact listed.
Below is the schema structure:
Person_Info: PersonID, Person Info contains everyone (patient, vistor, Emergecy contact) First and last names
Patient_Info: PatientID, table contains patient ID and other information
Patient_PersonRelation: Person_ID, patientID, RelationType
Address: Contains address of all person and patient (key PersonID)
Phone: Contains phone # of everyone (key is personID)
The goal is to find matching phones for the same person based on relationship type (if siblings, then only list one record for the parent, because the matching phones are not duplicates).
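A sketch under stated assumptions (the Phone table is assumed to have a PhoneNumber column; the post does not name it): keep one emergency contact per patient / relationship type / phone number, and treat the rest as the duplicates to filter:
;WITH contacts AS
(
    SELECT r.patientID, r.RelationType, r.Person_ID, ph.PhoneNumber,
           ROW_NUMBER() OVER (PARTITION BY r.patientID, r.RelationType, ph.PhoneNumber
                              ORDER BY r.Person_ID) AS rn
    FROM Patient_PersonRelation r
    INNER JOIN Phone ph ON ph.PersonID = r.Person_ID
)
SELECT *
FROM contacts
WHERE rn > 1;   -- the duplicate mother/father rows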
View 9 Replies
View Related
Oct 16, 2006
Hi Forum, I want to stop inserting data if the value has already been entered in the column. I'm used to .mdb and am new to .mdf. Much thanks, Paul
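Two common options, sketched with hypothetical names (dbo.MyTable / MyColumn), since the real table isn't given: let the database enforce uniqueness, or test for the value before inserting.
-- a) a unique constraint makes the duplicate insert fail outright:
ALTER TABLE dbo.MyTable ADD CONSTRAINT UQ_MyTable_MyColumn UNIQUE (MyColumn);
-- b) or skip the insert when the value is already there:
INSERT INTO dbo.MyTable (MyColumn)
SELECT 'some value'
WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable WHERE MyColumn = 'some value');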
View 14 Replies
View Related
Aug 5, 2014
I managed to transpose rows into columns.
;WITH
ctePreAgg AS
(
select top 500 act_reference "ActivityRef",
row_number() over (partition by act_reference order by act_reference) as rowno,
t3.s_initials "Initials"
from mytablestuff
order by act_reference
[code]...
But what I would love to do next is take each of the above rows and return the initials in one column, with all the nulls and duplicate values removed, separated by a comma ...
ref, initials
Ag-4xYS
Ag-6xYS,BL
Ap-1xKW
At-2x SAS,CW
At-3x SAS,CW
OR the above but using a variable number of columns based on the maximum number of different initials for each row. This is not strictly required, but may be neater for further work on the view.
ref, init1,init2
Ag-4xYS
Ag-6xYS,BL
Ap-1xKW
At-2x SAS,CW
At-3x SAS,CW
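For the single-column, comma-separated version, FOR XML PATH (available since SQL 2005) is the usual approach; a sketch, with the CTE body abbreviated from the post (the t3 alias on mytablestuff is an assumption, since the original is truncated):
;WITH ctePreAgg AS
(
    SELECT act_reference AS ActivityRef, t3.s_initials AS Initials
    FROM mytablestuff t3
)
SELECT p.ActivityRef,
       STUFF((SELECT DISTINCT ',' + p2.Initials
              FROM ctePreAgg p2
              WHERE p2.ActivityRef = p.ActivityRef
                AND p2.Initials IS NOT NULL
              FOR XML PATH('')), 1, 1, '') AS Initials
FROM ctePreAgg p
GROUP BY p.ActivityRef;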
View 6 Replies
View Related
Oct 23, 2014
I'm working on inserting data into a table in a database. The table has two separate triggers, one for insert and one for update (I don't like it this way, but that's how it's been for years). When there is a normal insert, done via a program, it looks like the triggers work fine. When I run an insert manually via a script, the first insert trigger will run, but the update trigger will fail. I narrowed down the issue to a root cause.
The root issue is that both triggers use the same temporary table name. When the second trigger runs, there's an error stating that a few columns don't exist. I went to my test server and test db and changed the update trigger so that its temporary table is different from the insert trigger's temporary table, and the triggers work fine. The weird thing is that if the temporary table already exists, when the second trigger tries to create the temporary table I would expect it to fail and say that it already exists. I'm probably just going to update the trigger tonight and change the temporary table name.
View 1 Replies
View Related
Aug 2, 2007
Hi,
I'm new to SQL Server 2005 SSIS. I'm trying to do something very simple, but I cannot figure it out, PLEASE HELP!
I have a flat file, which I read and then insert the data into a database table; that works fine. The problem is that I don't want to insert duplicate records. For example, if I run the package again, it will append to the table. What I need is that if the package runs again, it checks whether the record already exists, based on two columns (date and hour), and does not insert the record.
Thank you,
Aldo
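One common pattern, sketched in T-SQL with hypothetical names (dbo.StagingTable, dbo.TargetTable, columns LoadDate/LoadHour/Amount): land the flat file in a staging table first, then insert only the rows whose (date, hour) pair is not already in the target. Inside the data flow itself, a Lookup transformation against the destination table can redirect the non-matching rows to the insert for the same effect.
INSERT INTO dbo.TargetTable (LoadDate, LoadHour, Amount)
SELECT s.LoadDate, s.LoadHour, s.Amount
FROM dbo.StagingTable s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.TargetTable t
                  WHERE t.LoadDate = s.LoadDate
                    AND t.LoadHour = s.LoadHour);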
View 1 Replies
View Related
Jan 26, 2015
Is there a query or a way to convert duplicate values in a column to non-duplicates?
View 14 Replies
View Related
Oct 14, 2014
I have a database that holds a column depicting various events and the last time that each event occurred. I query the database and I get a result like this:
Event Update Time
Event A 2014-10-14 00:35:00.000
Event B 2014-10-14 01:30:00.000
Event C 2014-10-14 00:35:00.000
Event L 2100-01-01 00:00:00.000
I would like to create another database to hold the various timestamps for each event. Some of these events happen every 2 minutes, others hourly, daily or monthly. I can query the source every minute. I do not want duplicate entries in the destination when the 'Update Time' has not been updated. Is it better to query the destination for the latest 'Update Time' value and only INSERT the new record when the time has been updated, or just put a UNIQUE constraint on the Event and UpdateTime columns?
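A sketch combining both ideas, with hypothetical names (dbo.EventHistory as the destination, dbo.EventSource as the queried source): the unique constraint acts as a safety net while the insert only adds (Event, UpdateTime) pairs it hasn't seen yet.
ALTER TABLE dbo.EventHistory
  ADD CONSTRAINT UQ_EventHistory_Event_UpdateTime UNIQUE (Event, UpdateTime);

INSERT INTO dbo.EventHistory (Event, UpdateTime)
SELECT s.Event, s.UpdateTime
FROM dbo.EventSource s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.EventHistory h
                  WHERE h.Event = s.Event
                    AND h.UpdateTime = s.UpdateTime);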
View 9 Replies
View Related
Aug 9, 2000
Hi,
I have a table with four columns: id, lastname, firstname, acctname. I have duplicate values in the three columns other than the id column, like:
ID FirstName Lastname Acctname
1 john hopkins jh
2 john hopkins Jh
3 david webb dw
4 david webb dw
5 david webb dw
6 Dan Kennedy DK
I want to eliminate the duplicate rows; the id kept can be any one of them.
Can anyone suggest a query by which I can do this?
Thanks in advance
Mohan
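A sketch of the classic fix (dbo.Names is a hypothetical table name, since the post doesn't give one): keep the lowest id per name/account combination and delete the rest. Under a case-insensitive collation, 'jh' and 'Jh' fall into the same group.
DELETE FROM dbo.Names
WHERE id NOT IN (SELECT MIN(id)
                 FROM dbo.Names
                 GROUP BY lastname, firstname, acctname);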
View 2 Replies
View Related
Jun 26, 2000
How do I prevent others from viewing one of the 2 databases on our production server? Is there any security that keeps all users, including sa and developers, from accessing one of the 2 databases on our server?
The other of the 2 databases can be accessed....
Please advise
Newbie
View 1 Replies
View Related
May 13, 2008
Hey There.
I'm in the process of doing a major data clean up and I'm just wondering how I would go about eliminating some redundant data.
The Table Layout
Contracts
CNTRID CONTRACTNUM STARTDATE CUSTOMNUM
=======================================================
0 1234567 091885 A
1 1234567 091885 A
2 1111111 111111 B
3 1234567 081205 A
Equipment
EQUIPID DEVICENAME CNTRID CUSTOMNUM
=======================================================
0 DEVICE1 0 A
1 DEVICE2 2 B
2 DEVICE3 1 A
3 DEVICE4 3 A
You will notice that each customer may have multiple devices. Each device may be tied to a contract, and each contract may have one or more devices tied to it.
In the example above, you will notice in the contracts table the contracts with the IDs 0 and 1.
Fig 1.
CNTRID CONTRACTNUM STARTDATE CUSTOMNUM
=======================================================
0 1234567 091885 A
1 1234567 091885 A
These contracts have the exact same information.
Furthermore, if you look down the table you will notice the contract with the ID 3.
Fig 2.
CNTRID CONTRACTNUM STARTDATE CUSTOMNUM
=======================================================
3 1234567 081205 A
This contract shares the same contract and customer number, but has a different start date.
Now let's take a look at the devices in the equipment table that refer to these records.
EQUIPID DEVICENAME CNTRID CUSTOMNUM
=======================================================
0 DEVICE1 0 A
2 DEVICE3 1 A
3 DEVICE4 3 A
You will notice that DEVICE1 and DEVICE3 refer to the contract records that contain identical data (as shown in Fig 1).
My question is as follows:
How do I eliminate any duplicate records from the contracts table, and update the records in the equipment table with the id of the left-over contract?
Results Should be as follows:
Contracts
CNTRID CONTRACTNUM STARTDATE CUSTOMNUM
=======================================================
0 1234567 091885 A
2 1111111 111111 B
3 1234567 081205 A
Equipment
EQUIPID DEVICENAME CNTRID CUSTOMNUM
=======================================================
0 DEVICE1 0 A
1 DEVICE2 2 B
2 DEVICE3 0 A
3 DEVICE4 3 A
Any help you may provide would be greatly appreciated!
Thanks
--mike
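A sketch using the table names shown (Contracts, Equipment): first repoint equipment rows at the lowest CNTRID of each duplicate (CONTRACTNUM, STARTDATE, CUSTOMNUM) group, then delete the contract rows that are no longer referenced. Against the sample data this moves DEVICE3 from contract 1 to contract 0 and removes contract 1.
UPDATE e
SET e.CNTRID = keep.KeepID
FROM Equipment e
INNER JOIN Contracts c ON c.CNTRID = e.CNTRID
INNER JOIN (SELECT CONTRACTNUM, STARTDATE, CUSTOMNUM, MIN(CNTRID) AS KeepID
            FROM Contracts
            GROUP BY CONTRACTNUM, STARTDATE, CUSTOMNUM) keep
        ON keep.CONTRACTNUM = c.CONTRACTNUM
       AND keep.STARTDATE = c.STARTDATE
       AND keep.CUSTOMNUM = c.CUSTOMNUM
WHERE e.CNTRID <> keep.KeepID;

DELETE FROM Contracts
WHERE CNTRID NOT IN (SELECT MIN(CNTRID)
                     FROM Contracts
                     GROUP BY CONTRACTNUM, STARTDATE, CUSTOMNUM);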
View 11 Replies
View Related
Jul 29, 2013
I have a SQL statement with two left outer joins which connects 3 tables: Vendors, Tracking & Activity. For whatever reason, even though each is a one-to-many relationship, I am able to join 2 tables (from Vendors to Tracking) without an issue. When I then join Activity, I get a Cartesian product. I suspected that 'DISTINCT' would take care of it:
SELECT DISTINCT CASE
WHEN `vendor`.`companyname` IS NULL then 'No Company Assigned'
ELSE `vendor`.`companyname`
END AS companyNameSQL, `tracking`.`pkgTracking`, CASE
[code]....
View 4 Replies
View Related
Sep 29, 2014
Need to eliminate certain records from my query. The below is a simple query to illustrate my problem
My Query
Select RequestNo,Event_type from Event_log where Event_type in (10,20)
Data
RequestNo Event_type
123456 10
123457 10
123457 20
123458 10
123459 10
123459 20
The above query returns all requests that meet at least one criterion. How do I edit my query so that I get only requests that meet both criteria, and the result set looks like the one below?
Data
RequestNo Event_type
123457 10
123457 20
123459 10
123459 20
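One way is to keep only the requests that have both event types, e.g. with a HAVING clause on the distinct count:
Select RequestNo, Event_type
from Event_log
where Event_type in (10, 20)
  and RequestNo in (select RequestNo
                    from Event_log
                    where Event_type in (10, 20)
                    group by RequestNo
                    having count(distinct Event_type) = 2);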
View 2 Replies
View Related
Jul 20, 2005
edit: this came out longer than I thought; any comments about anything here are greatly appreciated. Thank you for reading.
My system stores millions of records, each with fields like firstname, lastname, email address, city, state, zip, along with any number of user defined fields. The application allows users to define message templates with variables. They can then select a template, and for each variable in the template, type in a value or select a field.
The system allows you to query for messages you've sent by specifying criteria for the variables (not the fields).
This requirement has made it difficult to normalize my data model at all for speed. What I have is this:
[fieldindex]
id int PK
name nvarchar
type datatype
[recordindex]
id int PK
....
[recordvalues]
recordid int PK
fieldid int PK
value nvarchar
Whenever messages are sent, I store which fields were mapped to what variables for that deployment. So the query with a variable criterion looks like this:
select coalesce(vm.value, rv.value)
from sentmessages sm
inner join variablemapping vm on vm.deploymentid=sm.deploymentid
left outer join recordvalues rv on rv.recordid=sm.recordid and rv.fieldid=vm.fieldid
where coalesce(vm.value, rv.value) ....
This model works pretty well for searching messages with variable criteria and looking up variable values for a particular message. The big problem I have is that the recordvalues table is HUGE: 1 million records with 50 fields each = 50 million recordvalues rows. The value column, two int columns, plus the two indexes I have on the table make it into a beast. Importing data takes forever. Querying the records (with a field criterion) also takes longer than it should. Makes sense, the performance was largely IO bound.
I decided to try and cut into that IO. Looking at a recordvalues table with over 100 million rows in it, there were only about 3 million unique values, so I split the recordvalues table into two tables:
[recordvalues]
recordid int PK
fieldid int PK
valueid int
[valueindex]
id int PK
value nvarchar (unique)
Now valueindex holds 3 million unique values and recordvalues references them by id. To my surprise this shaved only 500mb off a 4gb database! Importing didn't get any faster either; although it's no longer IO bound, it appears the cpu as the new bottleneck outweighed the IO bottleneck. This is probably because I haven't optimized the queries for the new tables (I was hoping it wouldn't be so hard w/o the IO problem).
Is there a better way to accomplish what I'm trying to do (eliminate the redundant data)? Does SQL have built-in constructs to do stuff like this? It seems like maybe I'm trying to duplicate functionality at a high level that may already exist at a lower level. IO is becoming a serious bottleneck. The million record, 50 field csv file is only 500mb. I would've thought that after eliminating all the redundant first name, city, last name, etc., it would be less data and not 8x more!
-Gordon
View 5 Replies
View Related
May 22, 2014
For perhaps ten years the company has had a database with some tables (person, country, and so on). Last year, the project manager decided to create a new database with a new design, because before, some information was saved in a single table. Well, the problem is that when we moved all the applications to the new model, we noticed that some ids are not the same!
sample :
table Language old model
-------------------------
id frenchDescription english description
----------------------------------------
3 Anglais English
.....
......
table Language new model
id frenchDescription english description
-----------------------------------------
5 Anglais English
.....
.....
Alright, you can see that in the new model the id must be the same (=> 3).
My question is: how do I modify the id in the table?
View 4 Replies
View Related
Jul 11, 2014
I currently have an AFTER UPDATE trigger that looks at a specific column being updated. It works great, but I now need to also capture the field data when the row is initially created. Do I need to build a separate FOR INSERT trigger to capture the initial column data when inserted, or is it possible to build it into my current AFTER UPDATE trigger?
Here is my current trigger:
CREATE TRIGGER [dbo].[xResponsible_By] ON [dbo].[PARTS_REQUESTOR]
AFTER UPDATE
[code]....
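A single trigger can be registered for both events, so the existing logic does not have to be duplicated; a sketch (the body is a placeholder for the column-capture logic elided in the post):
ALTER TRIGGER [dbo].[xResponsible_By] ON [dbo].[PARTS_REQUESTOR]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- existing capture logic goes here; note that on an INSERT the "deleted"
    -- pseudo-table is empty, so any comparison against it should treat a missing
    -- row as "no previous value".
END;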
View 2 Replies
View Related
Oct 15, 2014
I am having below two tables:
1) TableA : Which contains 5 columns(Column1,..........Column5)
2)TableB : Which contains 10 columns(Column1,..........Column10)
TableB contains millions of rows. Now I want to select all 5 columns from TableA, but if the combination of Column1, Column2, Column3 is present in TableB, then I want to exclude those records. I am doing it as below:
select * from TableA a join TableB b a.column1!=b.column1 and a.column2!=b.column2 and a.column3!=b.column3 )
But the query is taking almost 5 minutes. Is there another approach?
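The != join multiplies rows rather than excluding them; expressing it as "no matching combination exists in TableB" is usually both correct and faster, especially with an index on TableB (Column1, Column2, Column3). A sketch:
SELECT a.*
FROM TableA a
WHERE NOT EXISTS (SELECT 1
                  FROM TableB b
                  WHERE b.Column1 = a.Column1
                    AND b.Column2 = a.Column2
                    AND b.Column3 = a.Column3);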
View 4 Replies
View Related
Feb 9, 2015
I am trying to insert queries into a table such as the following, but am not able to insert it.
create table test (txt Varchar(200))
insert into test values('select *from master.dbo.sysprocesses WHERE status NOT IN('sleeping','background')')
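The insert fails because the single quotes inside the string end the literal early; doubling each embedded quote keeps it as one string:
insert into test values('select * from master.dbo.sysprocesses WHERE status NOT IN(''sleeping'',''background'')')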
View 4 Replies
View Related
May 13, 2008
I am querying several tables and piping the output to an Excel spreadsheet.
Several (not all) columns contain repeating data that I'd prefer not to include on the output. I only want the first row in the set to have that data. Is there a way in the query to do this under SQL 2005?
As an example, my query results are as follows (sorry if it does not show correctly):
OWNERBARN ROUTE DESCVEHDIST CASE
BARBAR TRACKING #70328VEH 32832869.941393
BARBAR TRACKING #70328VEH 32832869.941393
BARBAR TRACKING #70328VEH 32832869.941393
DAXDAX TRACKING #9398VEH 39839834.942471
DAXDAX TRACKING #9398VEH 39839834.942471
DAXDAX TRACKING #9398VEH 39839834.942471
TAXTAX TRACKING #2407 40754.391002
TAXTAX TRACKING #2407 40754.391002
TAXTAX TRACKING #2407 40754.391002
I only want the output to be:
OWNERBARN ROUTE DESCVEHDIST CASE
BARBAR TRACKING #70328VEH 32832869.941393
DAXDAX TRACKING #9398VEH 39839834.942471
TAXTAX TRACKING #2407 40754.391002
Thanks,
Walt
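A sketch under stated assumptions: the column names are read off the sample output (OWNER, BARN, ROUTE DESC, VEH, DIST, CASE) and dbo.RouteResults is a hypothetical stand-in for the existing multi-table query. Since the repeated rows are identical, keeping only the first row of each group produces the desired output on SQL 2005:
SELECT [OWNER], BARN, [ROUTE DESC], VEH, DIST, [CASE]
FROM (SELECT [OWNER], BARN, [ROUTE DESC], VEH, DIST, [CASE],
             ROW_NUMBER() OVER (PARTITION BY [OWNER], BARN, [ROUTE DESC]
                                ORDER BY [CASE]) AS rn
      FROM dbo.RouteResults) q
WHERE rn = 1
ORDER BY [OWNER];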
View 4 Replies
View Related
Feb 11, 2007
I am new to SQL Server and I am having difficulties writing a SQL script to perform the following:
1) Merging data from two tables A and B
2) Eliminate duplicates present in table B (conditions for a duplicate: a similar address is found in both tables AND class type in Table A = 1)
3) Merge data related to the dup (eliminated records) into the new table.
Not sure if we can eliminate records first before merging the two tables. The tables are as follows:
Table A
Fields: ID, NAME, Address, city, zip, Class type
Value:123, John, 123 Main, NY, 71690,1
Value:124, Tom, 100 State, LA, 91070,0
Table B
Field: ID, NAME, Address, city, zip, Class Type
Value:200, Tim, 123 Main, NY, 71690,0 (duplicate; satisfied both conditions and left out in final table)
Value:124, Jack, 100 State, LA, 91070,0 (same condition but second condition is not met)
Value:320,Bob, 344 coast hwy, slc, 807760,0
Final Table:
Field: ID, NAME, Address, city, zip, Class Type
Value:123, John, 123 Main, NY, 71690,1 (should also show t
Value:124, Tom, 100 State, LA, 91070,0
Value:124, Jack, 100 State, LA, 91070,0
Value:320,Bob, 344 coast hwy, slc, 807760,0
Table d:(relate to table A:showing all products that are related to table A)
table_A.ID, Products
123, Paper 1
123, paper 2
Table e:(relate to table B: showing all products that are related to table B)
table_B.ID, Products
200, Paper 3
Final Table:
ID, Product
123, Paper 1
123, Paper 2
123, Paper 3 (changing table b id to table a)
Would appreciate any help writing script to perform such transformation. Thanks
View 5 Replies
View Related
Jul 13, 2007
Hello,
I'm trying to eliminate the duplicate 'URL' rows in the query:
SELECT
ni.[Id],
ni.[Abstract],
ni.[MostPopular],
ni.[URL]
FROM dbo.[NewsCategory] nc WITH (READUNCOMMITTED)
INNER JOIN dbo.[NewsItem] ni WITH (READUNCOMMITTED)
ON nc.[Id] = ni.NewsCategoryId
WHERE
--nc.[ProviderId] = @ProviderId
--AND
ni.[URL] in (
select DISTINCT URL
from dbo.NewsItem
where mostpopular = 1
-- OR mostemailed = 1
)
ORDER BY ni.[DateStamp] DESC
If you look at this line in the query :
select DISTINCT URL
from dbo.NewsItem
where mostpopular = 1
If I run this query alone it will return 8 unique rows. I expected that the SELECT ... IN statement would help return a distinct set, but it doesn't. This entire query returns about 20 rows, with duplicates.
The reason why I can't do a DISTINCT on the first set of columns is that the column ni.[Abstract] is TEXT and it says that the data type is NOT COMPARABLE.
Thanks so much.
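Since the TEXT column is what blocks DISTINCT, one workaround on SQL 2005 is to never compare it at all and instead keep one row per URL with ROW_NUMBER, folding the mostpopular = 1 condition into the WHERE; which row wins within a URL is decided by DateStamp here, an assumption:
;WITH x AS
(
    SELECT ni.[Id], ni.[Abstract], ni.[MostPopular], ni.[URL], ni.[DateStamp],
           ROW_NUMBER() OVER (PARTITION BY ni.[URL] ORDER BY ni.[DateStamp] DESC) AS rn
    FROM dbo.[NewsCategory] nc WITH (READUNCOMMITTED)
    INNER JOIN dbo.[NewsItem] ni WITH (READUNCOMMITTED) ON nc.[Id] = ni.NewsCategoryId
    WHERE ni.mostpopular = 1
)
SELECT [Id], [Abstract], [MostPopular], [URL]
FROM x
WHERE rn = 1
ORDER BY [DateStamp] DESC;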
View 5 Replies
View Related
Jul 20, 2007
Hi, I have a table which contains:
value
-----
a
a
a
b
b
b
c
c
c
Now I need to have the results as:
a 1
b 1
c 1
I tried using DISTINCT, but OLEDB returns an error that the syntax is invalid; it doesn't support the DISTINCT keyword. Actually I read this table from a file through OLEDB, not from a database. Any ideas? Thanks in advance.
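If the provider accepts GROUP BY, grouping gives the same one-row-per-value effect as DISTINCT; [mytable] is a hypothetical stand-in for the file/table being read:
SELECT [value], 1 AS flag
FROM [mytable]
GROUP BY [value];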
View 8 Replies
View Related