Duplicates Again! UNION Join - Remove Records With Column Diff.
Sep 9, 2004
Hello All,
We all were new at one point.... any help is appreciated.
Objective:
Combine two 49,000-row tables and remove records where there is only a one-column difference (keeping the record with the specified column value and removing the one with a blank).
Reason:
I have 2 people going through a list, coding a specific column with a single letter value. They both have different progress on each sheet. Hence I am trying to UNION them and have a result of their combined efforts without duplicates.
My progress/where I'm stuck:
Here is my first query/union:
SELECT * FROM [Eds table]
UNION SELECT * FROM [Vickis table];
As shown above, I have unioned these two tables and my results removed the obvious whole-record duplicates, but since one column differs between them, a union without criteria considers them unique.
An example of the duplicates I must remove is as follows:
142301 - Product 5000 - 150# - S (Keep)
142301 - Product 5000 - 150# - "" <--- Blank (Remove)
I am trying to run another query on my first query results so I don't mess my first query up. Here it is:
SELECT DISTINCT [Prod #], [Prod Name], [Prod Description], [Product Type]
FROM [Combined Tables]
WHERE [Product Type]<>" ";
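For what it's worth, a minimal sketch of one way to collapse those near-duplicates (an assumption on my part that [Prod #], [Prod Name] and [Prod Description] together identify a record, and that any letter code sorts above a blank, so MAX keeps the coded row):
SELECT [Prod #], [Prod Name], [Prod Description],
       MAX([Product Type]) AS [Product Type]
FROM [Combined Tables]
GROUP BY [Prod #], [Prod Name], [Prod Description];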
Please Help! Thank you in advance.
--------------------
5 minutes away from pulling my last one!
BaldNAskewed
View 7 Replies
Jun 22, 2015
I have some duplicate values in my query results, about 200 duplicates out of 30,000 rows. Of these 200 duplicates I want to keep the ones that have the higher value for 'UpdatedBatchID'.
SELECT
IR.Id as 'ID'
, CAST(IR.Priority as varchar) as 'Priority'
, IRSupportGroupDN.DisplayName as 'Support Group'
, DATEADD(MI,DATEDIFF(mi,GETUTCDATE(),GETDATE()),IR.CreatedDate) as 'Created Date'
, DATEADD(MI,DATEDIFF(mi,GETUTCDATE(),GETDATE()),IR.ResolvedDate) as 'Resolved Date'
, SLOConfig.DisplayName as 'SLO'
, DATEADD(MI,DATEDIFF(mi,GETUTCDATE(),GETDATE()),SLOFact.TargetEndDate) as 'SLO Target'
, SLOStatusDN.DisplayName as 'SLO Status'
, SLOMetric.DisplayName as 'SLO Metric'
, SLOFact.UpdatedBatchId as 'UpdatedBatchID'
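The SELECT above is only a fragment, so this is just a sketch of the usual pattern for keeping the row with the highest UpdatedBatchID per incident (QueryResults and the PARTITION BY column are hypothetical stand-ins, not the poster's actual query):
;WITH ranked AS (
    SELECT q.*,
           ROW_NUMBER() OVER (PARTITION BY q.ID ORDER BY q.UpdatedBatchID DESC) AS rn
    FROM (
        -- the poster's full query would go here; QueryResults is a hypothetical stand-in
        SELECT ID, UpdatedBatchID FROM QueryResults
    ) AS q
)
SELECT *
FROM ranked
WHERE rn = 1;   -- keeps only the row with the highest UpdatedBatchID per ID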
View 11 Replies
View Related
Jan 26, 2015
I ran this query to populate a field with random numbers, but it keeps populating with some duplicate records. How can I remove the duplicates?
UPDATE APRFIL
SET ALTATH = CONVERT(int, RAND(CHECKSUM(NEWID())) * 10000);
Below is sample output in which the dupes should not show (a sketch follows the sample). The table already exists and it's SQL 2008.
155957
155957
155968
155974
155976
15599
155990
155997
155997
156005
156008
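A sketch of an alternative that guarantees uniqueness (assuming ALTATH only needs distinct shuffled integers rather than truly random values): number the rows over a random ordering and assign that number.
;WITH numbered AS (
    SELECT ALTATH,
           ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn
    FROM APRFIL
)
UPDATE numbered
SET ALTATH = rn;   -- values 1..N in random order, no duplicates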
View 2 Replies
View Related
Jul 28, 2015
I have a requirement where I want to delete records based on a date column. I have a table which contains columns like machinename and lasthardwarescandate.
I want to delete records based on max(lasthardwarescandate), i.e. the latest one, where the machine name is duplicated since it repeats. So how would I remove the duplicate machine names based on the lasthardwarescandate column? (There are multiple entries for lasthardwarescandate, so I want to fetch only the latest date.)
Note: Duplication should be removed based on the "Last Hardware Scan" date.
Only the latest date should be considered from multiple records for the same system.
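A sketch of one way to do this (MachineScan is a hypothetical table name, and ties on the date break arbitrarily):
;WITH ranked AS (
    SELECT machinename,
           lasthardwarescandate,
           ROW_NUMBER() OVER (PARTITION BY machinename
                              ORDER BY lasthardwarescandate DESC) AS rn
    FROM MachineScan
)
DELETE FROM ranked
WHERE rn > 1;   -- keeps only the latest scan per machine name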
View 4 Replies
View Related
Apr 30, 2008
Hello,
I am doing a UNION (not UNION ALL) of two tables and two columns. The first column is AcctCode VARCHAR(16), and the second column is Revenue FLOAT.
I am getting two rows back, using UNION, where the values in both columns are the same. Basically, it looks like this:
AcctCode Revenue
AM247 300.64
AM247 300.64
There are trailing spaces after the AcctCode, and I have tried RTRIM.
The following is the query I am using:
Select
RTRIM(AcctCode) AS Acctcode,
SUM(ISNULL(ScFee,0)) as Revenue
From cdnbwfin1.txnRptg.dbo.dailySummary
where TxnDate = '4/22/2008'
group by Acctcode
union
Select
RTRIM(AcctCode) As AcctCode,
SUM(ISNULL(ScFee,0)) as Revenue
From bwdbfin1.txnrptg.dbo.tbl_dailySummary
where TxnDate = '4/22/2008'
group by Acctcode
order by acctcode
Why would I get this duplicate if I'm using UNION?
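UNION only removes rows whose values actually compare equal, so a hidden difference (a non-breaking space that RTRIM won't strip, or two FLOAT sums that differ in a low-order bit) will keep both rows. A quick diagnostic sketch (not part of the original query) that exposes such differences, shown against one of the two sources for brevity:
SELECT AcctCode,
       CAST(AcctCode AS VARBINARY(32)) AS AcctCodeBytes,
       Revenue,
       CAST(Revenue AS VARBINARY(8)) AS RevenueBytes
FROM (
    SELECT RTRIM(AcctCode) AS AcctCode, SUM(ISNULL(ScFee, 0)) AS Revenue
    FROM cdnbwfin1.txnRptg.dbo.dailySummary
    WHERE TxnDate = '4/22/2008'
    GROUP BY AcctCode
) AS x
WHERE AcctCode LIKE 'AM247%';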
Thank you for your help!
cdun2
View 9 Replies
View Related
Mar 12, 2008
Hi -
I am a newbie to T-SQL and have an issue that I am not sure is me or SQL.
I want to merge several tables from several databases.
I have created a union statement:
create view myView as
select *, '1' as compId, 'AAA' as SiteID from ClientAAA.dbo.stats
UNION
(select *, '2' as compID, 'ABC' as SiteID from clientABC.dbo.stats
UNION
(select *, '3' as compID, 'ABD' as SiteID from clientABD.dbo.stats
UNION
(select *, '4' as compID, 'ABF' as SiteID from clientABF.dbo.stats
UNION
(select *, '5' as compID, 'AGG' as SiteID from clientAGG.dbo.stats
))))
It's OK until the last statement; then they repeat from the 4th line's stats.
Any help would be great
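For comparison, a sketch of the same view written without the nested parentheses (each branch is a complete SELECT, so no nesting is needed; UNION is kept as in the original, though UNION ALL would be cheaper if the literal compId values already make every branch distinct):
create view myView as
select *, '1' as compId, 'AAA' as SiteID from ClientAAA.dbo.stats
UNION
select *, '2' as compID, 'ABC' as SiteID from clientABC.dbo.stats
UNION
select *, '3' as compID, 'ABD' as SiteID from clientABD.dbo.stats
UNION
select *, '4' as compID, 'ABF' as SiteID from clientABF.dbo.stats
UNION
select *, '5' as compID, 'AGG' as SiteID from clientAGG.dbo.stats;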
View 2 Replies
View Related
Jan 26, 2015
Is there a query or a way to convert duplicate values in a column to non-duplicates?
View 14 Replies
View Related
Jan 9, 2008
I have a query which gives the following output. How can I get output like the OUTPUT block shown below it instead? (A sketch follows the desired output.)
QUERY
COL1  COL2  COL3
A1    AA    GG
A1    BB    HH
A1    CC    JJ
B1    DD    KK
B1    EE    LL
B1    FF    MM
OUTPUT
COL1  COL2  COL3
A1    AA    GG
      BB    HH
      CC    JJ
B1    DD    KK
      EE    LL
      FF    MM
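A sketch of one way to blank the repeated COL1 values (MyQuery is a hypothetical stand-in for the poster's query; needs SQL Server 2005+ for ROW_NUMBER):
SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY COL1 ORDER BY COL2) = 1
            THEN COL1
            ELSE '' END AS COL1,
       COL2,
       COL3
FROM MyQuery;   -- replace with the original query as a derived table or view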
View 5 Replies
View Related
Jan 29, 2008
Hi All
I have the table dbo.OperatingHour. It has many duplicates and I want to remove the duplicates permanently.
The statement below works, but when I open the table there are no changes.
Insert into OperatingHour(Weekdays, Wednesdays, Fridays,Saturdays, [Sundays/Public Holidays])
(SELECT DISTINCT Weekdays, Wednesdays, Fridays,Saturdays, [Sundays/Public Holidays] FROM OperatingHour)
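The INSERT above only appends another copy of the distinct rows back into the same table, so nothing is removed. A sketch of one way to actually replace the contents, assuming the table has no other columns or constraints that need preserving:
SELECT DISTINCT Weekdays, Wednesdays, Fridays, Saturdays, [Sundays/Public Holidays]
INTO #Deduped
FROM OperatingHour;

TRUNCATE TABLE OperatingHour;

INSERT INTO OperatingHour (Weekdays, Wednesdays, Fridays, Saturdays, [Sundays/Public Holidays])
SELECT Weekdays, Wednesdays, Fridays, Saturdays, [Sundays/Public Holidays]
FROM #Deduped;

DROP TABLE #Deduped;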
View 2 Replies
View Related
May 24, 2007
Welcome. How can I alter the following table in order to reduce neighbouring duplicates (symbol, position, quantity, price)?
Nr   Symbol    Position  Quantity  Price               Date
1.   wz9999b   1         1.0       2500.0              2007-05-09 08:09:42.653
2.   wz9999b   2         12.0      2500.0              2007-05-09 08:09:42.653
3.   wz9999b   1         100.0     2590.0              2007-05-10 15:47:04.140
4.   PZ0008VX  1         2280.884  2090.5500000000002  2007-05-16 12:43:12.403
5.   PZ0008VX  1         2280.884  2102.0500000000002  2007-05-16 12:45:27.420
6.   wz9999b   1         0.001     2500.0              2007-05-18 09:47:16.033
7.   wz9999b   1         0.001     2500.0              2007-05-18 09:47:53.270
8.   wz9999b   1         1.0       1.0                 2007-05-22 12:35:07.893
9.   PZ0008VX  1         2280.884  2102.0500000000002  2007-05-24 09:38:26.160
10.  PZ0008VX  1         2280.884  2102.0500000000002  2007-05-24 09:38:38.800
11.  wz9999b   1         0.001     2500.0              2007-05-24 12:35:07.207
12.  wz9999b   1         0.002     2500.0              2007-05-24 12:35:14.987
13.  wz9999b   1         0.001     2500.0              2007-05-24 12:38:07.207
In the result set I would like to get rows number 6 and 10.
Any suggestions??
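A sketch of one way to find the neighbouring duplicates (trades is a hypothetical table name; this flags the later row of each adjacent pair - rows 7 and 10 in the sample - and selecting prev.* instead would keep the other side):
SELECT curr.*
FROM trades AS curr
JOIN trades AS prev
  ON prev.Nr = curr.Nr - 1
 AND prev.Symbol = curr.Symbol
 AND prev.Position = curr.Position
 AND prev.Quantity = curr.Quantity
 AND prev.Price = curr.Price;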
View 2 Replies
View Related
Sep 27, 2006
I have a situation where we get XML files sent daily that need uploading into SQL Server tables, but the source system producing these files sometimes generates duplicate records in the file. The tricky part is that the record isn't entirely duplicated. What I mean is that if I look for duplicates by grouping the key columns, having count(*) > 1, I find which ones are duplicates, but when I inspect the data on these duplicates, the other details in the remaining columns may differ. So our rule is: pick the first record, toss the rest of the duplicates.
Because we don't sort on any columns during the import, the first record kept of the duplicates is arbitrary. Again, we can't tell at this point which of the duplicated records is more correct. Someday down the road, we will do this research.
Now, I need to know the most efficient way to accomplish this in SSIS. If it makes it easier, I could just discard all the duplicates, since the number of them is so small.
If the source were a relational table, I could use a SQL statement to filter the records to remove the duplicates, but since the source is an XML file, I don't know how to filter these out in the pipeline, since the file has to be aggregated to search for dups.
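One workaround, sketched here in T-SQL rather than in the pipeline (an assumption that the XML can first be landed in a staging table, with StagingTable, KeyCol1 and KeyCol2 as hypothetical names for that table and its grouping key columns): rank the rows per key and keep an arbitrary first one.
;WITH ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2 ORDER BY (SELECT NULL)) AS rn
    FROM StagingTable
)
SELECT *
FROM ranked
WHERE rn = 1;   -- an arbitrary "first" record per key, matching the stated rule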
Thanks
Kory
View 5 Replies
View Related
Jan 7, 2012
This SQL is meant to show the changes that will be made, when removing a selected user's email address from a batch.
However, when executed, each row is duplicated, and in the duplication the semi-colon or comma isn't removed. For example, if I wanted to remove the user "sam@mail.com", the table results displayed would be:
Row 1:
BatchID: 50
ParamName:EmailTo
ParamValue: jack@mail.com;sam@mail.com;frank@mail.com
NewParamValue: jack@mail.com;frank@mail.com
Row 2:
BatchID: 50
ParamName:EmailTo
ParamValue: jack@mail.com;sam@mail.com;john@mail.com
NewParamValue: jack@mail.com;;frank@mail.com
Ideally, it should only display each row once, and not have the semicolon error. It seems to be a UNION error, because when I comment out the first and second UNION statements, it runs fine.
-- Delete email address from a.Batch
IF(@EmailAddress IS NOT NULL)
BEGIN
IF(LEN(@EmailAddress) > 0)
BEGIN
IF(@ShowOnly = 1)
[Code] ......
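The posted procedure is truncated, so only the string cleanup can be sketched here (ParamValue and @EmailAddress follow the post; BatchParam is a hypothetical table name; the nested REPLACE handles the address in the middle or at the end, then at the start, so no doubled or stray separator is left behind):
SELECT BatchID,
       ParamName,
       ParamValue,
       REPLACE(REPLACE(ParamValue, ';' + @EmailAddress, ''),
               @EmailAddress + ';', '') AS NewParamValue
FROM BatchParam   -- hypothetical table name; assumes @EmailAddress is already declared
WHERE ParamName = 'EmailTo'
  AND ParamValue LIKE '%' + @EmailAddress + '%';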
View 2 Replies
View Related
Oct 2, 2006
DELETE
FROM tblContacts
WHERE tblContacts.ID IN(
SELECT F.ID
FROM tblContacts AS F
WHERE Exists (
SELECT email, Count(ID)
FROM tblContacts
WHERE tblContacts.email = F.email
GROUP BY tblContacts.email
HAVING Count(tblContacts.ID) > 1
)
)
AND tblContacts.ID NOT IN(
SELECT Min(ID)
FROM tblContacts AS F
WHERE Exists (
SELECT email, Count(ID)
FROM tblContacts
WHERE tblContacts.email = F.email
GROUP BY tblContacts.email
HAVING Count(tblContacts.ID) > 1
)
GROUP BY email
)
I readily admit that I've shamelessly copied 'n pasted this from a tutorial and then taken a stab at tweaking it for my own ends. But I really don't understand what it's doing.
Really, all I want to know is that it will remove records with duplicate email fields. But I could also do with confirming - looking at the "SELECT Min(ID)" bit - does that mean that if it finds a duplicate, it'll delete the latest-added one? And if so, that changing it to remove the earliest-added one is simply a case of changing MIN to MAX?
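For comparison, a sketch of the same intent written with ROW_NUMBER (SQL Server 2005+), which may be easier to reason about; assuming ID is an identity that reflects insertion order, ORDER BY ID ASC keeps the earliest-added row per email and DESC would keep the latest:
;WITH ranked AS (
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY ID ASC) AS rn
    FROM tblContacts
)
DELETE FROM ranked
WHERE rn > 1;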
Thanks :)
View 11 Replies
View Related
Dec 3, 2006
1/ How do we remove the duplicate rows and leave only one row, instead of 2 or 3 rows with the same column values?
2/ The same question, but when all the columns of the row are duplicated except the id field.
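A sketch for case 2 (MyTable, col1 and col2 are hypothetical stand-ins for the table and its non-id columns): partition on everything except id and delete all but one row per group.
;WITH ranked AS (
    SELECT id,
           ROW_NUMBER() OVER (PARTITION BY col1, col2 ORDER BY id) AS rn
    FROM MyTable
)
DELETE FROM ranked
WHERE rn > 1;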
Thanks a lot.
View 3 Replies
View Related
Oct 6, 2015
I am working with a bunch of records that have duplicates on Persid and intPercentID. Where there are duplicates, I want to remove them when I stick the rows into the temp table. I tried a join on the temp table and a NOT EXISTS, but it still inserts; now I am trying a MERGE, but same thing. How can I keep duplicates from being inserted into the temp table? I made a cursor as well, and it works, but it's slow as heck, so I'm trying to find better ways.
Create table #TempStr (STRId int not null Identity(1,1) primary key, Persid int, percentId int, dtCreated datetime, CreatedBy int)
Create table #NewStr (STRId int, Persid int, percentId int, dtCreated datetime, CreatedBy int)
INSERT #TempStr (Persid, percentId, dtCreated, CreatedBy)
select intPersonnelID, intPercentID, dtSubmitted, intSubmittedBy from tblSTR
where intpercentId in (61,62) group by intPercentID, intPersonnelID, dtSubmitted, intSubmittedBy
UNION ALL
[code]....
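Since the rest of the statement is cut off, here is only a sketch of how the duplicates could be filtered before they ever reach the temp table (an assumption that one row per Persid/percentId pair should survive and that the latest dtSubmitted is the one to keep):
INSERT INTO #TempStr (Persid, percentId, dtCreated, CreatedBy)
SELECT intPersonnelID, intPercentID, dtSubmitted, intSubmittedBy
FROM (
    SELECT intPersonnelID, intPercentID, dtSubmitted, intSubmittedBy,
           ROW_NUMBER() OVER (PARTITION BY intPersonnelID, intPercentID
                              ORDER BY dtSubmitted DESC) AS rn
    FROM tblSTR
    WHERE intPercentID IN (61, 62)
) AS s
WHERE s.rn = 1;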
View 3 Replies
View Related
Sep 1, 2015
I have table with columns as ID, DupeID1, DupeID2. ID column is unique. DupeID1 and DupeID2 -- the combination should only be there once. I don't want reverse combination of duplicates, i.e. DupeID2, DupeID1 in the table. How can I delete the reverse duplicates from this table?
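A sketch of one approach (DupePairs is a hypothetical table name; since either member of a reversed pair may be dropped, the row whose IDs are already in ascending order is kept):
DELETE t1
FROM DupePairs AS t1
JOIN DupePairs AS t2
  ON t1.DupeID1 = t2.DupeID2
 AND t1.DupeID2 = t2.DupeID1
WHERE t1.DupeID1 > t1.DupeID2;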
View 10 Replies
View Related
Jul 13, 2015
I have 2 tables below:
Table 1:
Product No Quantity
A 1
B 2
C 3
Table 2:
Product No Grade Quantity
A Good
A Normal
A Bad
B Good
B Bad
C Good
C Normal
C Bad
In Table 2, each Product No is divided by Grade. I want to look up the Quantity from Table 1 into Table 2. For the same Product No, one row gets the value and the other rows get 0 (a sketch follows the expected output). The result for the Quantity column should be like this:
Table 2:
Product No Grade Quantity
A Good 1
A Normal 0
A Bad 0
B Good 2
B Bad 0
C Good 3
C Normal 0
C Bad 0
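A sketch of one way to produce that output (Table1/Table2 as named in the post; which grade row receives the quantity is arbitrary, so the CASE inside the ORDER BY pushes 'Good' first to match the sample):
SELECT t2.[Product No],
       t2.Grade,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY t2.[Product No]
                                    ORDER BY CASE WHEN t2.Grade = 'Good' THEN 0 ELSE 1 END) = 1
            THEN t1.Quantity
            ELSE 0 END AS Quantity
FROM Table2 AS t2
JOIN Table1 AS t1
  ON t1.[Product No] = t2.[Product No];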
View 8 Replies
View Related
Jan 22, 2015
I have a table containing the following data:
LinkingID  ID1     ID2
166202     180659  253178
166202     253178  180659
166334     180380  253179
166334     253179  180380
166342     180380  180659
166342     180659  180380
166582     253179  258643
166582     258643  253179
264052     258642  258643
264052     258643  258642
264502     258643  258663
264502     258643  259562
Within the LinkingID, there are duplicates in ID1 and ID2, but just in opposite columns. I have been trying to figure out a way to remove these set-based. It doesn't matter which duplicate is removed; essentially these are just endpoints and I don't care which side they are on. The solution must recognize the duplicates and not just remove based on every 2nd row.
View 8 Replies
View Related
Aug 11, 2015
I have a bunch of contacts for which I've scored how well their names match other contacts in the same business. I can programmatically figure out how to parse the results, but would like to know how to do this via SQL. My problem is that for Business_fk 968976 I have 7 contacts. In the end I should have 4 contacts based on name match. For the business key listed, Gerardo Lopez is in the ContactScore table twice, for Contact keys 7355719 and 57028145. I then have two rows like so:
PossibleBusinessContactMatch_pk BusinessContact_fk Business_fk BusinessContactMatch_fk MatchTypeCode MatchScore MatchRank FirstName LastName Phone Email
------------------------------- ------------------ ----------- ----------------------- ------------- ----------- ----------- -------------------------------------------------- -------------------------------------------------- ---------- --------------------------------------------------------------------------------------------------------------------------------
1772960 57028145 968976 7355719 C 46 1 GERARDO I LOPEZ 8162214000
838834 7355719 968976 57028145 C 50 1 GERARDO
Each references the other, and 2 is a good case; a more difficult case would have key 1 listed 10 times showing a ContactMatch_fk of 2 - 11, and then Contact_fk 2 listed 10 times with a ContactMatch_fk of 1, 3 - 11. I know 57028145 maps to 7355719 from the first row in the ContactScore table, so when Contact_fk 7355719 comes up I should be able to skip it and not process that match. Hopefully that makes sense. Anyway, here is the test data:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[ContactScore]') AND type in (N'U'))
DROP TABLE [dbo].[ContactScore];
GO
CREATE TABLE [dbo].[ContactScore]
(
[ContactScore_pk] INT NOT NULL,
[Contact_fk] INT NOT NULL,
[code]..
View 9 Replies
View Related
Apr 23, 2015
How can I perform this task with SSIS or Transact-SQL? I have rows with the following data, and I want to keep just the valid one, but I have a lot of combinations of names like the following; they can be animals, things or personal names (a sketch follows the examples):
GABRIEL OBANDO --CORRECT
GABRIEL OVANDO
Gavriel OVANDO
gAbriel OBANDO
GABRIE OBANDO
Gabri OBONDA
MANAGUA --CORRECT
NANAGUA
NAMAGUA
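SSIS Fuzzy Grouping is the usual tool for this kind of near-match clean-up, but as a rough T-SQL sketch of the idea (Names and FullName are hypothetical names; DIFFERENCE scores phonetic similarity from 0 to 4):
SELECT a.FullName, b.FullName AS ProbableDuplicate
FROM Names AS a
JOIN Names AS b
  ON a.FullName < b.FullName                 -- avoids self-pairs and mirrored pairs
 AND DIFFERENCE(a.FullName, b.FullName) >= 3;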
View 5 Replies
View Related
Oct 15, 2006
I'm working through the MS example of "removeDuplicates". I can't seem to figure out how to add a custom property for an input column.
I added the helper method:
private static void AddIsKeyCustomPropertyToInput(IDTSInput90 input, object value)
{
IDTSCustomProperty90 isKey = input.CustomPropertyCollection.New();
isKey.Name = "IsKey";
isKey.Value = value;
}
I call it from:
public override void ProvideComponentProperties()
{
//...
AddIsKeyCustomPropertyToInput(input, false);
//...
}
public override void ReinitializeMetaData()
{
IDTSInput90 input = ComponentMetaData.InputCollection[0];
if (input.CustomPropertyCollection.Count == 0)
{
AddIsKeyCustomPropertyToInput(input, false);
}
// ...
}
However, when I deployed it and added the component to an SSIS package, I can't see the custom property "IsKey" in the input column properties window.
What am I missing? Please help.
View 3 Replies
View Related
Jun 23, 2015
I have created a phone list and am using a union to be able to display a letter category. However, what I would like to do is only show the letter category if there is an employee with a corresponding last name.
For example, if someone does not have a last name starting with "Z", then "Z" should not show up on my report.
SELECT LastName, FirstName, Dept, Phone
UNION ALL
SELECT v.letter,NULL,NULL,NULL,NULL
FROM (VALUES('A'),('B'),('C'),('D'),('E'),('F'),('G'),('H'),('I'),('J'),('K'),('L'),('M'),('N'),('O'),('P'),('Q'),('R'),('S'),('T'),('U'),('V'),('W'),('X'),('Y'),('Z')) AS v(letter)
ORDER BY vchLastName, vchFirstName
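One way to get that effect, sketched here with EXISTS (Employees and LastName are hypothetical names since the real table isn't shown): only emit a letter when at least one last name starts with it, then UNION that onto the name rows as before.
SELECT v.letter
FROM (VALUES ('A'),('B'),('C'),('D'),('E'),('F'),('G'),('H'),('I'),('J'),('K'),('L'),('M'),
             ('N'),('O'),('P'),('Q'),('R'),('S'),('T'),('U'),('V'),('W'),('X'),('Y'),('Z')) AS v(letter)
WHERE EXISTS (SELECT 1
              FROM Employees AS e
              WHERE e.LastName LIKE v.letter + '%');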
View 9 Replies
View Related
Jun 23, 2015
I had an Excel file as input and imported it into a DB table using a data flow in SSIS, but it has duplicates, and I don't want the dupe records.
So I planned like below:
Method 1:
Here one OLEDB Destination gets the good records (without duplicates)
and another OLEDB Destination gets the not-good records (only duplicates)
or
Method 2:
If I add a column (GOOD_RECORD) in the DB table, should I update '1' for the top 1 record (the good record) and '0' for the remaining records (the dupes), and later use the GOOD_RECORD flag,
i.e., select * from DB_TABLE where GOOD_RECORD='1'.
I think Method 2 is advisable for performance/flexibility, but how can I do that update using SSIS (data flow)?
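Method 2's flag can also be set after the load with an Execute SQL task rather than inside the data flow; a sketch (KeyCol is a hypothetical name for whatever column defines a duplicate group):
;WITH ranked AS (
    SELECT GOOD_RECORD,
           ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY (SELECT NULL)) AS rn
    FROM DB_TABLE
)
UPDATE ranked
SET GOOD_RECORD = CASE WHEN rn = 1 THEN '1' ELSE '0' END;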
View 4 Replies
View Related
Jul 23, 2005
Using SQL Server:
Query 1:
SELECT def.lID as IdDefinition,
       TDC_AUneValeur.VALEURDERETOUR as ValeurDeRetour
FROM serveur.Data_tblDEFINITIONTABLEDECODES def,
     serveur.Data_tblTABLEDECODEAUNEVALEUR TDC_AUneValeur
where def.TYPEDETABLEDECODES = 4
  and TDC_AUneValeur.PERIODE_ANNEEFISCALE_ID = 2
  and def.lID *= TDC_AUneValeur.DEFINITIONTABLEDECODES_DEFINITION_ID
Query 2:
SELECT def.lID as IdDefinition,
       TDC_AUneValeur.VALEURDERETOUR as ValeurDeRetour
FROM serveur.Data_tblDEFINITIONTABLEDECODES def LEFT OUTER JOIN
     serveur.Data_tblTABLEDECODEAUNEVALEUR TDC_AUneValeur
  ON def.lID = TDC_AUneValeur.DEFINITIONTABLEDECODES_DEFINITION_ID
where def.TYPEDETABLEDECODES = 4
  and TDC_AUneValeur.PERIODE_ANNEEFISCALE_ID = 2
Query 1 returns:
IdDefinition  ValeurDeRetour
23            null
24            null
25            null
29            36
Query 2 returns:
IdDefinition  ValeurDeRetour
29            36
The first result is the good one. How is it that the second query doesn't return the same result set? I've been told about problems comparing NULL??? What is the solution??? Thanks a lot.
Damien
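The difference comes from where the filter on the outer table sits: in Query 2 the WHERE clause discards the rows whose TDC_AUneValeur columns come back NULL, so the outer join degenerates into an inner join. A sketch that keeps the LEFT JOIN but moves that predicate into the ON clause, which should reproduce Query 1's result:
SELECT def.lID AS IdDefinition,
       TDC_AUneValeur.VALEURDERETOUR AS ValeurDeRetour
FROM serveur.Data_tblDEFINITIONTABLEDECODES AS def
LEFT OUTER JOIN serveur.Data_tblTABLEDECODEAUNEVALEUR AS TDC_AUneValeur
  ON def.lID = TDC_AUneValeur.DEFINITIONTABLEDECODES_DEFINITION_ID
 AND TDC_AUneValeur.PERIODE_ANNEEFISCALE_ID = 2
WHERE def.TYPEDETABLEDECODES = 4;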
View 3 Replies
View Related
Feb 29, 2008
What happens when you add the Ignore Case flag into the mix?
I'm having a hell of a time - I'm dealing with an SCD situation using TableDifference component and I have both existing dimensions and new data coming in, each go through identical Case-Insensitive/Sort with remove duplicates, but I'm getting identical new and deleted records detected - I think because of ordering issues. I'm still trying to whittle the test case down, but I think data from all around the records I'm investigating seems to get sorted in between them, so I'm having trouble getting a small test case built.
I think the mixed case data is the root of the problem, and I think the design is bad, but before I go back to the technical lead, I need to understand enough to show that you cannot take two pipelines sorted and de-duped case-insensitively and then do a case-sensitive table difference operation.
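A tiny illustration of the underlying issue (plain T-SQL, independent of the SSIS components): two values that a case-insensitive collation treats as equal still differ under a case-sensitive comparison, so a case-insensitive de-dup followed by a case-sensitive diff can legitimately report differences.
SELECT CASE WHEN 'ABC' = 'abc' COLLATE Latin1_General_CI_AS THEN 'equal' ELSE 'different' END AS case_insensitive,
       CASE WHEN 'ABC' = 'abc' COLLATE Latin1_General_CS_AS THEN 'equal' ELSE 'different' END AS case_sensitive;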
View 4 Replies
View Related
Dec 10, 2007
Hi Madhu,
My table does not have a primary key, so I created a separate index on each of the tables.
I used the recommended tablediff utility and it works successfully. But it only shows the differences between the records in each table and does not copy rows from source to destination or destination to source. I was expecting database1.dbo.table1 to contain the same records as database2.dbo.table2.
C:\Program Files\Microsoft SQL Server\90\COM>tablediff /sourceserver kashif-pc\sqlexpress /sourcedatabase AB /sourcetable table1 /destinationserver kashif-pc\sqlexpress /destinationdatabase CD /destinationtable table2
Microsoft (R) SQL Server Replication Diff Tool
Copyright (C) 1988-2005 Microsoft Corporation. All rights reserved.
User-specified agent parameter values:
/sourceserver kashif-pc\sqlexpress
/sourcedatabase AB
/sourcetable table1
/destinationserver kashif-pc\sqlexpress
/destinationdatabase CD
/destinationtable table2
Table [AB].[dbo].[table1] on kashif-pc\sqlexpress and Table [CD].[dbo].[table2] on kashif-pc\sqlexpress have 5 differences.
Err Sno
Src. Only 101
Src. Only 102
Dest. Only 103
Dest. Only 104
Dest. Only 105
The requested operation took 0.466767 seconds.
Can you write a short script for my problem, one that compares database1.dbo.table1 with database2.dbo.table2 and copies whichever records are not present, and vice versa?
It means that in the end Database1.dbo.table1 contains 5 records
and Database2.dbo.table2 contains 5 records.
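The tablediff run above only reported the differences, so the copy itself still needs ordinary INSERT statements. A sketch of the two-way fill, assuming Sno (from the diff output) is the key and the remaining columns are listed in place of the comments:
INSERT INTO CD.dbo.table2 (Sno /* , other columns */)
SELECT s.Sno /* , other columns */
FROM AB.dbo.table1 AS s
WHERE NOT EXISTS (SELECT 1 FROM CD.dbo.table2 AS d WHERE d.Sno = s.Sno);

INSERT INTO AB.dbo.table1 (Sno /* , other columns */)
SELECT d.Sno /* , other columns */
FROM CD.dbo.table2 AS d
WHERE NOT EXISTS (SELECT 1 FROM AB.dbo.table1 AS s WHERE s.Sno = d.Sno);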
Regards
Kashif Chotu
View 3 Replies
View Related
Oct 22, 2004
Can some kind person out there please help me, I've been stuck on this for daaaa-y-s.
I have a database that allows users to search for pdf's of technical drawings.
Basically I have one huge table with multiple columns, which the user can search on using any combination of these two columns:
"drawing_series" eg 0100, 0046, 1000
"drawing_number" eg 0076000, 0000123, 0000004
There is also a Revision column(which the user can't see) that goes up by 1 each time a drawing has been modified and resubmitted to the database.
"revision" eg 01, 02, 03, ....... 99
So a search on 0046 series might pull back drawings
0046-0010000-01
0046-0010000-02
0046-0010000-03
0046-0076000-01
0046-0076888-01
0046-0076888-02
The problem is that I only want drawings with the highest revisions returned eg
0046-0010000-03
0046-0076000-01
0046-0076888-02
The code below worked like a charm in the test stages, pulling back a few hundred records, but now that I've uploaded tens of thousands of records to the DB the whole lot dies if the search result pulls back more than a few thousand records.
SELECT * FROM dbo.Drawing_Database
where dbo.Drawing_Database.revision = (select max(revision) from dbo.Drawing_Database self
                                       where self.drawing_series + self.drawing_number =
                                             dbo.Drawing_Database.drawing_series + dbo.Drawing_Database.drawing_number)
  and Drawing_Series like '0046'
order by Drawing_Series, Drawing_Number
There must be a simpler way of doing this, as I can pull out duplicate series + numbers using "HAVING Count(*) > 1" but don't know where to go from there.
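A sketch of one common rewrite that avoids the correlated subquery on concatenated keys: join back to a grouped derived table that carries the max revision per series/number.
SELECT d.*
FROM dbo.Drawing_Database AS d
JOIN (SELECT drawing_series, drawing_number, MAX(revision) AS max_rev
      FROM dbo.Drawing_Database
      GROUP BY drawing_series, drawing_number) AS m
  ON m.drawing_series = d.drawing_series
 AND m.drawing_number = d.drawing_number
 AND m.max_rev = d.revision
WHERE d.drawing_series LIKE '0046'
ORDER BY d.drawing_series, d.drawing_number;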
Help!
TheMaster
View 2 Replies
View Related
Jul 23, 2005
Hi All, I am banging my head against a brick wall over this problem, so any help in the correct direction would be much appreciated!
I have 2 SQL (MS SQL) Server tables, related to a Property and Sales of that property.
A property is uniquely identified by its Roll, Valuation Number and Suffix (not my choosing).
Each property can only appear in the property table once, and can only have 1 assessment - but can have multiple sales (i.e. over the analysis period the same property can sell more than once).
There are approximately 19000 properties relating to about 8000 sales.
When creating a query to list property and most recent sale (if there is any) I end up with something like this -
SELECT [roll], [valuation], [suffix], [sale date]
FROM [property]
LEFT JOIN [sales]
ON [property].[roll] = [sales].[roll] AND
   [property].[valuation] = [sales].[valuation] AND
   [property].[suffix] = [sales].[suffix]
(table names simplified).
I get rows where all the property data is there, but sale date (etc.) is null (as I would expect from a left join), but the problem is - when there is more than 1 sale for a property it pulls out another copy of the property data.
In short, because of that I come out with more records than properties. ie -
roll  valuation  suffix  sale date
12    456789     A       1/1/2003
12    788988     B       NULL
14    123456     A       1/1/2003
14    123456     A       1/1/2004
(Note - the last two are the same property).
I didn't know that the left join can affect both joined tables!
Is there any way around this? Any suggestions/hints in the right direction would be very much appreciated!
THANKS!
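A sketch of one way around it: collapse the sales to one row per property (the latest sale date) before joining, so each property appears once.
SELECT p.[roll], p.[valuation], p.[suffix], s.[sale date]
FROM [property] AS p
LEFT JOIN (SELECT [roll], [valuation], [suffix], MAX([sale date]) AS [sale date]
           FROM [sales]
           GROUP BY [roll], [valuation], [suffix]) AS s
  ON p.[roll] = s.[roll]
 AND p.[valuation] = s.[valuation]
 AND p.[suffix] = s.[suffix];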
View 3 Replies
View Related
Jul 20, 2005
When I run the attached query, I get duplicates when there is a one-to-many relationship between tableA and tableB. The query, tested schema and the result are attached. Sorry for the long post.
Here is the tested schema and data inserts:
create table TestTblA (ShipDate datetime, CPEID varchar(30), phonenum char(14))
go
create table TestTblB (CPEID varchar(30), itemID varchar(30), active char(1))
go
create table TestTblC (itemID varchar(30), descr varchar(50))
go
insert into TestTblA values (getdate(),'TWMUA','(408)-555-1211')
insert into TestTblA values (getdate(),'TWMUA','(408)-555-1212')
insert into TestTblA values (getdate(),'TWMUB','(408)-555-1211')
insert into TestTblA values (getdate(),'TWMUB','(408)-555-1212')
insert into TestTblA values (getdate(),'TWMUB','(408)-555-1213')
insert into TestTblA values (getdate(),'TWMUC','(408)-555-1211')
insert into TestTblA values (getdate(),'TWMUC','(408)-555-1212')
insert into TestTblA values (getdate(),'TWMUC','(408)-555-1213')
insert into TestTblA values (getdate(),'WWEXI','(408)-555-1211')
insert into TestTblA values (getdate(),'WWEXI','(408)-555-1212')
insert into TestTblA values (getdate(),'WWEXI','(408)-555-1211')
insert into TestTblB values ('TWMUA','1000-000043-000','Y')
insert into TestTblB values ('TWMUB','1000-100002-001','Y')
insert into TestTblB values ('TWMUC','1000-200005-000','Y')
insert into TestTblB values ('WWEXI','1000-401001-000','Y')
insert into TestTblB values ('WWEXI','1000-401002-000','Y')
insert into TestTblC values ('1000-000043-000','descrUA')
insert into TestTblC values ('1000-100002-001','descrUB')
insert into TestTblC values ('1000-200005-000','descrUC')
insert into TestTblC values ('1000-401001-000','descrWW')
insert into TestTblC values ('1000-401002-000','descrWW')
Query follows:
SELECT A.ShipDate,
       A.CPEId,
       ItemId = CASE
           WHEN A.CPEId = 'TWMUA' THEN 'New - Single User'
           WHEN A.CPEID = 'TWMUB' THEN 'New - Multi User'
           WHEN A.CPEID = 'TWMUC' THEN 'New - Triple User'
           WHEN B.ITEMID IS NULL THEN 'Unknown'
           WHEN B.ITEMID = ' ' THEN 'Unknown'
           ELSE B.ItemId
       END,
       MODEL_NO = CASE
           WHEN B.ITEMID = '1000-000043-000' THEN rtrim(C.DESCR)
           WHEN B.ITEMID = '1000-100002-001' THEN rtrim(C.DESCR)
           WHEN B.ITEMID = '1000-200005-000' THEN rtrim(C.DESCR)
           WHEN A.CPEId = 'TWMUA' THEN '1100'
           WHEN A.CPEID = 'TWMUB' THEN '1100'
           WHEN A.CPEID = 'TWMUC' THEN '1000SW'
           WHEN C.DESCR IS NULL THEN 'Unknown'
           ELSE 'Unknown'
       END,
       COUNT(A.phonenum)
FROM TestTblA A
LEFT OUTER JOIN TestTblB B ON A.CPEID = B.CPEID AND B.active = 'Y'
LEFT OUTER JOIN TestTblC C ON B.ItemId = C.ITEMID
GROUP BY A.ShipDate, A.CPEId, B.ItemId, C.DESCR
ORDER BY A.ShipDate, A.CPEId, B.ItemId, C.DESCR
The result (output format modified to fit on single lines):
ShipDate    CPEId  ItemId             MODEL_NO  Count
2003-07-18  TWMUA  New - Single User  descrUA   2
2003-07-18  TWMUB  New - Multi User   descrUB   3
2003-07-18  TWMUC  New - Triple User  descrUC   3
2003-07-18  WWEXI  1000-401001-000    NULL      3
2003-07-18  WWEXI  1000-401002-000    NULL      3
** The problem **
I need WWEXI or any similar entry to only show once; it shows twice.
Thanks for your help.
View 3 Replies
View Related
Feb 6, 2014
Got a data set like this:
rowID PersonID Start Date End Date
===== ======== ========== ==========
001 6575556 19/06/2013 09/07/2013
001 6575556 20/06/2013 12/07/2013
001 6575556 21/06/2013 12/07/2013
002 9478522 15/05/2013 18/05/2013
003 7753423 22/08/2013 01/09/2013
Person can have more than one start/end date therefore I get multiple of the same row ID and Person ID when looking at their dates.
I want to display the most recent end date and associated data if there is more than one start/end date for the same person. I decided to do a self-join with a MAX date aggregate, using this against a main select from Table1:
SELECT PersonID,
MAX([End Date]) AS MaxEndDate
FROM Table1
GROUP BY
PersonID
And join it this way:
select RowID,
PersonID,
[End Date]
FROM Table1 INNER JOIN (
SELECT PersonID,
MAX([End Date]) AS MaxEndDate
[Code] ....
When I run the sub-query on its own it gives me the single PersonID and max date, but on self-joining with Table1 I still get the duplicate values.
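The truncated join is probably missing the date condition; a sketch of the full form (joining on both PersonID and the max date, so only the latest row per person survives):
SELECT t.RowID,
       t.PersonID,
       t.[End Date]
FROM Table1 AS t
INNER JOIN (SELECT PersonID, MAX([End Date]) AS MaxEndDate
            FROM Table1
            GROUP BY PersonID) AS m
  ON m.PersonID = t.PersonID
 AND m.MaxEndDate = t.[End Date];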
View 2 Replies
View Related
Sep 2, 2004
I really must be missing something here...
Trying to cross-update 2 tables.
Picture a checkbook reconciliation without common check numbers. The checkbook has unique ids and the bank has transaction ids, but they are different. So the match is on date/payee and amount. So I wrote 2 checks to the same person, on the same day, for the same amount but forgot to enter one in the register.
when i run the update statement:
update b set b.bankid=c.myid
from checks c
join bank b on c.cdate=b.cdate
and c.payee=b.payee
and c.cost=b.cost
Both bank statement records would be updated to my one check record [can't happen]
Also: this will be running on a hundred thousand records per month with potential for duplication/omission on either side.
What's a poor newbie missing??
I'm doing something similar on a lesser volume by running sequential statements through an ASP script but performance is poor. I know SQL can do this, just not how to approach it.
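A sketch of one way to force one-to-one matching (SQL Server 2005+; banktxnid is a hypothetical name for the bank table's own key, which the post doesn't show): number the rows on each side within their date/payee/amount group and pair them by that position, so two identical checks map to two distinct bank rows.
;WITH c AS (
    SELECT myid, cdate, payee, cost,
           ROW_NUMBER() OVER (PARTITION BY cdate, payee, cost ORDER BY myid) AS rn
    FROM checks
),
b AS (
    SELECT banktxnid, cdate, payee, cost,
           ROW_NUMBER() OVER (PARTITION BY cdate, payee, cost ORDER BY banktxnid) AS rn
    FROM bank
)
UPDATE bank
SET bankid = c.myid
FROM bank
JOIN b ON b.banktxnid = bank.banktxnid
JOIN c ON c.cdate = b.cdate
      AND c.payee = b.payee
      AND c.cost = b.cost
      AND c.rn = b.rn;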
Thanks for any guidance
Dale
View 4 Replies
View Related
Jun 8, 2015
I am doing some auditing and I have the query below; how can I get rid of duplicates from it? Any T-SQL to get rid of duplicates...
I am using sp_who2 and SQL Server Audit for auditing all activity on the SQL Server databases, dumping it to the tables Audit_DBAudit and Audit_sp_who2, and from there I am trying to get data which is not repeating/duplicated...
SELECT
A.ProgramName
,a.HostName,[Server_principal_name],[Server_instance_name],[Database_name],[Object_name],F.Statement
FROM Audit_DBAudit as F
Join [Audit_sp_who2] AS a
on LTRIM(RTRIM(F.server_principal_name))=LTRIM(RTRIM(A.Login))
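If the repetition only comes from sp_who2 capturing the same session many times, a sketch of the simplest fix is DISTINCT over the projected columns (a GROUP BY on the same list would do the same):
SELECT DISTINCT
       A.ProgramName,
       A.HostName,
       [Server_principal_name],
       [Server_instance_name],
       [Database_name],
       [Object_name],
       F.Statement
FROM Audit_DBAudit AS F
JOIN [Audit_sp_who2] AS A
  ON LTRIM(RTRIM(F.server_principal_name)) = LTRIM(RTRIM(A.Login));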
View 11 Replies
View Related
Jan 10, 2007
Hi there SQLTEAM
I have a problem, and need your help.
table1 has a single field, for example
pkiTownID
DATA
1
2
3
4
table2 has a single field, for example
pkiTownID
DATA
6
7
8
9
What SQL query should I run to merge or join these 2 tables into 1?
The output that I would like is the following:
DATA
1 6
2 7
3 8
9 9
Is this possible?
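Since the tables share no common key, one option is to pair the rows by position; a sketch (SQL Server 2005+): number each table's rows and join on that number.
;WITH t1 AS (SELECT pkiTownID, ROW_NUMBER() OVER (ORDER BY pkiTownID) AS rn FROM table1),
     t2 AS (SELECT pkiTownID, ROW_NUMBER() OVER (ORDER BY pkiTownID) AS rn FROM table2)
SELECT t1.pkiTownID AS Data1,
       t2.pkiTownID AS Data2
FROM t1
JOIN t2 ON t2.rn = t1.rn;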
View 6 Replies
View Related