Update Table_a Where Records Are Not Matching From Table_b
Jan 18, 2008
Hi Gurus!!!
I have two tables, tbl_a and tbl_b.
Now tbl_a has some records which are not in tbl_b, and I want to update tbl_b with those records from tbl_a.
eg:
tbl_a tbl_b
a a
b b
c c
d d
x
y
z
Now I want to update tbl_b with records 'x', 'y', 'z', and keep the matching records untouched.
Something similar.
How can I do that???
Thanks in advance!!!
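A minimal sketch of one way to do this, assuming each table has a single column (called val here as a placeholder) and that "update tbl_b" means inserting the missing rows:

-- Insert only the rows from tbl_a that have no match in tbl_b;
-- existing matching rows in tbl_b are left untouched.
INSERT INTO tbl_b (val)
SELECT a.val
FROM tbl_a a
WHERE NOT EXISTS (SELECT 1 FROM tbl_b b WHERE b.val = a.val);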
update wce_contact set blank = 'missing' where website in ('www.name1.co.uk','www.name2.co.uk','www.name3.co.uk')
I know this query will set 'blank' to 'missing' when it matches the above websites. However, if I wanted to set blank to 'missing' where mail1date is not null and mail2date is not null (and so on up to mail18date not null), how exactly would I go about this?
I guess it would be a case of adding another bracket somewhere, but I'm unsure?
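If the requirement is simply that all of the mail date columns are populated, the NULL tests can be chained with AND; no extra brackets are needed as long as they all sit at the same level. A sketch (the pattern just repeats through mail18date):

-- Set blank to 'missing' only where every mail date column is populated.
UPDATE wce_contact
SET blank = 'missing'
WHERE mail1date IS NOT NULL
  AND mail2date IS NOT NULL
  AND mail3date IS NOT NULL
  -- ...repeat the same test for mail4date through mail17date...
  AND mail18date IS NOT NULL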
I have a strange request that might not be possible based on the laws of relational databases but I thought I'd give it a try.
I have three tables, which for simplicity I will call A, B and C. Table A contains my master records, Table B contains user details and the final table contains some extra data.
In my initial search, joining A and B, I return 100 records. I then need to search in table C for these 100 records based on a certain criterion. The expected result should return all 100 rows: the ones that match and also the ones that do not. The problem is that not all 100 IDs exist in Table C, so there will not always be a corresponding record. Unfortunately, our users still want to see all 100 records in the output. Is this possible?
As always any help or direction would be appreciated.
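One common approach is a LEFT JOIN to table C, with the extra criterion placed in the ON clause rather than the WHERE clause so the non-matching rows survive. A sketch with placeholder key and column names:

-- All 100 A/B rows are returned; C's columns are NULL where no match exists.
SELECT a.ID, b.UserDetail, c.ExtraData
FROM TableA a
JOIN TableB b ON b.ID = a.ID
LEFT JOIN TableC c ON c.ID = a.ID
   AND c.SomeColumn = 'some criterion'   -- criterion in the ON clause, not the WHERE clause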
Hi, this is a WHERE clause I am using in a search: WHERE (ADDRESS_STREET LIKE '%' + @Search + '%'). I am trying to do a search which returns the closest matching records. For example, if I have a record with 'Denver' as text and I search for 'Denvr' (the spelling error is intended), I will not get the result. How can I create a stored procedure to counter probable spelling errors and return matching results in ranked order? Thanks
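T-SQL's built-in SOUNDEX/DIFFERENCE functions handle some phonetic misspellings and can drive a rough ranked search; anything more sophisticated usually means a Levenshtein-style UDF or Full-Text Search. A rough sketch, with the table name as a placeholder:

-- DIFFERENCE returns 0-4, where 4 means the two values sound most alike.
CREATE PROCEDURE dbo.SearchAddress
    @Search varchar(200)
AS
BEGIN
    SELECT ADDRESS_STREET,
           DIFFERENCE(ADDRESS_STREET, @Search) AS MatchRank
    FROM   dbo.Addresses                    -- placeholder table name
    WHERE  ADDRESS_STREET LIKE '%' + @Search + '%'
        OR DIFFERENCE(ADDRESS_STREET, @Search) >= 3
    ORDER BY MatchRank DESC;
END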
Init SC --- 89
Post NCOA --- 89
Post Supp --- 89
Revised Final State Counts --- 89
Revised Final State Counts --- 94
Since "Revised Final State Counts" appears in both cycles 89 & 94. How can I query the table so that I only get that 1 record?
There is something I don't understand. When I use a join
SELECT r.CHECK_NUMBER, i.orig_file from (AP_INVOICEDOCS i join AP_DETAIL_REG r on r.PAYABLE_ID= i.PAYABLE_ID)
I am getting 76 orig_file records
But when I do
SELECT r.CHECK_NUMBER, i.orig_file from (AP_INVOICEDOCS i right outer join AP_DETAIL_REG r on r.PAYABLE_ID= i.PAYABLE_ID)
I am seeing only 8 records under the i.orig_file column and I am not sure why. What I need is to get all of the AP_INVOICEDOCS along with the matching orig_file records.
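For what it's worth, a RIGHT OUTER JOIN written that way preserves every AP_DETAIL_REG row rather than every AP_INVOICEDOCS row. If the goal is to keep all AP_INVOICEDOCS rows along with any matching detail records, a sketch would be:

-- LEFT JOIN preserves every AP_INVOICEDOCS row; CHECK_NUMBER is NULL
-- where no matching AP_DETAIL_REG row exists.
SELECT r.CHECK_NUMBER, i.orig_file
FROM AP_INVOICEDOCS i
LEFT OUTER JOIN AP_DETAIL_REG r ON r.PAYABLE_ID = i.PAYABLE_ID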
How do I return only the non-matching records from a left join? Currently I am building a traffic management database to learn SQL.
I am checking for all parishes with no associated drivers; currently I only have two of these.
The regular left join
select parish.name, driver.fname from parish left join driver on driver.parish=parish.name
Returns all the names of the parishes and the first name of the associated driver, followed by the matches; however, the two parishes with no matches have null for the first name.
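To keep only the non-matching parishes, the usual pattern is to filter on the NULL side of the left join:

-- Only parishes with no associated driver are returned.
SELECT parish.name
FROM parish
LEFT JOIN driver ON driver.parish = parish.name
WHERE driver.parish IS NULL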
I'm hoping someone can tell me how to construct a stored procedure that deletes all records in tblA not matching the PK in tblB. This gives me the recordset of all records in tblA with no matching records in tblB (ID is the PK in tblB):

SELECT a.ID
FROM dbo.tblB b
RIGHT OUTER JOIN dbo.tblA a ON b.ID = a.ID
WHERE b.ID IS NULL

thanks, lq
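A sketch of the corresponding DELETE, reusing the same outer-join shape (written against tblA with a LEFT JOIN, which is equivalent):

-- Delete every tblA row whose ID has no match in tblB.
DELETE a
FROM dbo.tblA a
LEFT OUTER JOIN dbo.tblB b ON b.ID = a.ID
WHERE b.ID IS NULL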
I have a table (tblA) that records the RecordID, UserID and LastViewedDate (DateTime) of each record opened in tblB, where RecordID is the PK in tblB. I want to construct a query that groups all records in tblA by RecordID, filters by UserID, keeps only the most recent 25 RecordIDs and deletes the rest. This gets me a recordset of all RecordIDs filtered by UserID in tblA, but I can't figure out how to sort it by LastViewedDate DESC and to eliminate those not in the top 25:

SELECT RecordID
FROM dbo.tblA
WHERE (UserID = 1234)
GROUP BY RecordID

Any help is appreciated! lq
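On SQL Server 2005 or later, one sketch is to rank each RecordID for the user by its most recent view and delete everything outside the top 25:

-- Rank RecordIDs for user 1234 by most recent LastViewedDate, newest first,
-- then remove that user's rows for any RecordID ranked below 25.
;WITH Ranked AS (
    SELECT RecordID,
           ROW_NUMBER() OVER (ORDER BY MAX(LastViewedDate) DESC) AS rn
    FROM dbo.tblA
    WHERE UserID = 1234
    GROUP BY RecordID
)
DELETE a
FROM dbo.tblA a
JOIN Ranked r ON r.RecordID = a.RecordID
WHERE a.UserID = 1234
  AND r.rn > 25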
I have a database with thousands of records that contain personal details of customers. Some of these records pertain to the same customer - however, they have been submitted by different people, so they differ slightly in detail.
I've been looking to see if any of the data mining tools provided by Business Intelligence Studio in SQL Server 2005 will enable me to achieve a high degree of accuracy in matching records that pertain to the same customer. From what I can see, these tools seem more suited to making general predictions based on large groupings rather than the kind of precise prediction I am looking for.
So I'd appreciate it if anyone could tell me if there is any way I could use Business Intelligence Studio to match these 'duplicate' records together, or whether I will have to create a more SQL-based solution which attempts to match the customer records using SELECT statements and making assumptions about the data.
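If it does come down to a SQL-based approach, one common starting point is a self-join on a few normalized fields, using DIFFERENCE for near matches on names. The column names below are placeholders and the matching rules are only an illustration:

-- Pair up records that agree on postcode and date of birth and whose
-- surnames sound alike, as candidate duplicates for review.
SELECT c1.CustomerID AS Candidate1, c2.CustomerID AS Candidate2
FROM dbo.Customers c1
JOIN dbo.Customers c2
  ON  c2.CustomerID > c1.CustomerID         -- avoid self-pairs and mirror pairs
  AND c2.Postcode    = c1.Postcode
  AND c2.DateOfBirth = c1.DateOfBirth
  AND DIFFERENCE(c1.LastName, c2.LastName) >= 3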
Problem: I am working on a price comparison system which matches the best prices for a purchase (or an order) from existing purchase data. The order is stored in multiple tables, including order details (the major items purchased, e.g. a PC) and order sub-details (optional items purchased with the major items, e.g. speakers, a backup device, a webcam etc.). There could be a number of major items in an order and each major item could have multiple related sub-items. The other variables that affect the price include trade-ins if any, sales going on at the time of order, number of units, etc.
Now, for any new configuration (major items/related sub-items), the system should be able to return a list of previous purchases made with similar configurations and similar variables (quantities, trade-ins etc.). Even if the same model is not present, similar PCs by the same vendor should be considered, and so on.
Questions: Is this possible using Data mining? If yes, which algorithm is recommended?
Also, can I assign/modify any kind of weights for certain variables (if the same model: 0.6; if the same model is not available but PCs made by the same manufacturer are: 0.3; by other manufacturers: 0.1)?
Basically, without going into too much detail, our company receives databases that are put onto our systems which have been made by other organizations, with no guarantee of what the primary key is or whether there is one at all.
I should probably give my main problem in an example for clarity:
Currently I have a .csv file full of data that needs to be put into, say, TableA. However, I do not know whether TableA has a primary key or whether the file that needs to be inserted into TableA contains duplicate data. I have the importer sorted if you ignore the problem of duplicate data; what I would like is an MS SQL query that does the following (but I cannot figure it out):
Assuming we are reading through the file line-by-line and a check is performed each time:
1. If there is a line in the file with a primary key that matches a primary key in TableA in the database, update that row in the database with the line from the file.
2. If there is no primary key on the table and there is an exact data match between the line in the file and a row in the database, then update it.
3. If neither 1 nor 2 applies, just insert the data.
Obviously the potential lack of a PK here makes things a lot more convoluted.
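For the case where a primary key does exist, here is a rough sketch of the per-line logic, assuming each file line has been loaded into a one-row staging table and that the key column is called KeyCol (both names are placeholders); case 2, where there is no PK, would follow the same IF EXISTS shape but compare every column:

IF EXISTS (SELECT 1
           FROM dbo.TableA a
           JOIN dbo.StagingLine s ON s.KeyCol = a.KeyCol)
BEGIN
    -- Case 1: key match, so overwrite the existing row with the file line.
    UPDATE a
    SET    a.Col1 = s.Col1,
           a.Col2 = s.Col2
    FROM   dbo.TableA a
    JOIN   dbo.StagingLine s ON s.KeyCol = a.KeyCol;
END
ELSE
BEGIN
    -- Case 3: no match, so insert the file line as a new row.
    INSERT INTO dbo.TableA (KeyCol, Col1, Col2)
    SELECT KeyCol, Col1, Col2
    FROM   dbo.StagingLine;
END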
I am doing some analysis on our customer base and their payment profiles. I have generated two profile strings, one for whether the balance of an account has gone up or down and one for the size of the balance in relation to the normal invoice amount for the customer. So (for example) the balance movement string will look like this:
UUUDUUUDUUUD-D00 Where U = Up, D = Down, - = no change and 0 = no change and no balance
I want to analyse these strings in two ways. The first is that I want to find customers with a similar pattern: in the example below, the first and last patterns are the same, just one position out of sync, and should be considered the same.
Movement     Multiple     CountRecords
UUUDUUUDUUUD 123012301230 1175
------------ 000000000000 1163
UDUUUDUUUDUU 301230123012 1082
The second type of analysis is to find customers whose pattern has changed: in the examples above the patterns are repeated and therefore 'normal'; in the records below the patterns have changed, in that the first part does not match the second part.
Movement     Multiple     CountRecords
UUDUUUDUUUUU -----------0 7
UDUUUDUUUUUU ------------ 7
I can't see a good way to approach this without either a cursor or hidden RBAR. The challenge as I see it is that I have to interrogate every string to find out if there is a repeating pattern and, if so, where it starts and how long it is (heuristic, because some strings will start with a repeating pattern and then the pattern may change or deteriorate), and then compare the string for N groups of repeating characters to see if and when it changes. I can't think of an efficient method to do this in SQL because it is not a set-based operation.
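For the second kind of analysis at least, a set-based starting point might be to compare the first half of each string with the second half. This is only a sketch: it assumes a fixed 16-character string with an 8-character repeat, and the table/column names are placeholders:

-- Flag customers whose first 8 characters differ from the next 8,
-- i.e. the pattern appears to have changed between the two halves.
SELECT Movement
FROM dbo.PaymentProfiles
WHERE SUBSTRING(Movement, 1, 8) <> SUBSTRING(Movement, 9, 8)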
I have the 4 rows below in table tblTEST, and I want to be able to transfer the CODE from the MAIN location to the INT location (replacing all existing "A" codes), preceded by an "I".
ID LOC  CODE
-- ----  ----
11 MAIN  B
11 INT   A
22 MAIN  C
22 INT   A
I want the result to be:
ID LOC  CODE
-- ----  ----
11 MAIN  B
11 INT   IB
22 MAIN  C
22 INT   IC
I am stumped as to how to do this - any help or advice would be appreciated.
The only thing I've come up with is:
UPDATE S SET s.code = B.code FROM tbltest B LEFT OUTER JOIN tbltest S ON B.id = S.id WHERE (S.loc = 'INT')
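A sketch that joins each INT row back to the MAIN row for the same ID and prefixes the MAIN code with 'I', which is what the attempt above is missing:

-- Overwrite the INT code with 'I' + the MAIN code for the same ID.
UPDATE s
SET    s.CODE = 'I' + m.CODE
FROM   tblTEST s
JOIN   tblTEST m ON m.ID = s.ID AND m.LOC = 'MAIN'
WHERE  s.LOC = 'INT'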
Folks, using Northwind as an example: the parent table is derived from Categories. I added a new column, E-Mail, and am selecting rows where Category Id <= 3. Here is my data.
Category ID Category Name Category E-mail
1 Beverages Beverages.com
2 Condiments Condiments.com
3 Confections
The child table is derived from Products. I am selecting rows where Category Id <= 3. Here is my sample data.
Category ID Product Name Quantity Per Unit
1 Chang 24 - 12 oz bottles
1 Côte de Blaye 12 - 75 cl bottles
1 Ipoh Coffee 16 - 500 g tins
1 Outback Lager 24 - 355 ml bottles
2 Aniseed Syrup 12 - 550 ml bottles
2 Chef Anton's Gumbo Mix 36 boxes
2 Louisiana Hot Spiced Okra 24 - 8 oz jars
2 Northwoods Cranberry Sauce 12 - 12 oz jars
3 Chocolade 10 pkgs.
3 Gumbär Gummibärchen 100 - 250 g bags
3 Maxilaku 24 - 50 g pkgs.
3 Scottish Longbreads 10 boxes x 8 pieces
3 Sir Rodney's Scones 24 pkgs. x 4 pieces
3 Tarte au sucre 48 pies
I would like to read the 1st Category Id and Category E-Mail from the Categories table (i.e. Category Id = 1) and find that Category Id in the Products table. If there is a match, extract the matching records for that category from both tables (Categories.CategoryID, Products.ProductName, Products.QuantityPerUnit) and e-mail them to the E-Mail address from the parent (Categories) table. If no E-Mail address is listed, do not create an output file; in this instance that is Category Id = 3. Basically I want to select the 1st record from the parent table (here, Categories), search for all matching products in the Products table, create an e-mail and send just those matching records, then repeat the same process for the remaining rows from the Categories table. I am expecting my e-mail output to look like this for Category Id = 1:
2 Northwoods Cranberry Sauce 12 - 12 oz jars
I am not extracting the data for any user interface (i.e. GridView/FormView etc.); I will just create a command button in an ASP.NET 2.0 form to extract the data. My tables are in SQL 2005. I was thinking of reading the Category records in a DataReader and, within the while loop, calling a stored procedure to retrieve the matching records from the Products table; if matching records are found, call the system mail SP to send the e-mail. The drawback is that for every category record (within the while loop) I need to call my SP to get the Products data. Would that be overkill? Ideally I would like to extract my records with one call to an SP. Is there any way I can run a while loop inside the SP and extract the child data based on the parent record? Any help, sample URL or tutorial page will be appreciated. Thanks
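One way to keep this to a single stored procedure call is to put the loop inside the procedure: a cursor (or WHILE loop) over the Categories rows that have an e-mail address, sending each category its matching products through Database Mail's @query parameter. This is only a sketch; the CategoryEmail column name, the database name and the mail profile name are assumptions:

CREATE PROCEDURE dbo.EmailProductsByCategory
AS
BEGIN
    DECLARE @CatID int, @Email nvarchar(255), @Qry nvarchar(max);

    DECLARE cat CURSOR FAST_FORWARD FOR
        SELECT CategoryID, CategoryEmail
        FROM   dbo.Categories
        WHERE  CategoryID <= 3 AND CategoryEmail IS NOT NULL;  -- skip categories with no address

    OPEN cat;
    FETCH NEXT FROM cat INTO @CatID, @Email;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build the product list for this category; sp_send_dbmail runs it
        -- and puts the result set in the body of the message.
        SET @Qry = N'SELECT c.CategoryID, p.ProductName, p.QuantityPerUnit
                     FROM dbo.Categories c
                     JOIN dbo.Products p ON p.CategoryID = c.CategoryID
                     WHERE c.CategoryID = ' + CAST(@CatID AS nvarchar(10));

        EXEC msdb.dbo.sp_send_dbmail
             @profile_name           = N'DefaultProfile',   -- assumed Database Mail profile
             @recipients             = @Email,
             @subject                = N'Matching products',
             @query                  = @Qry,
             @execute_query_database = N'Northwind';         -- assumed database name

        FETCH NEXT FROM cat INTO @CatID, @Email;
    END
    CLOSE cat;
    DEALLOCATE cat;
END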
I have an Excel file which contains some data. I want to load that into a SQL server Table. Here are my conditions :
1. If the table doesn't have any matching records from the Excel file, then my DFT should load the data from that Excel to the Dest Table.
2. If the table has even one or more matching records, then the DFT should not process at all; instead I should send an email to the business stating that there are some matching records and hence the package was not processed.
P.S. If I use a Lookup, I have two outputs, matching and non-matching, which would load the non-matching records into the table while the matching ones can be redirected to a flat/Excel file. But I don't want to do this; I just want to look up between the SQL Server table and the Excel file.
It would be good if there were an additional option in the Lookup, "Fail component on matching records".
create table a (id int, name varchar(10));
create table b (id int, sal int);
insert into a values (1,'John'),(1,'ken'),(2,'paul');
insert into b values (1,400),(1,500);
select * from a cross apply( select max(sal) as sal from b where b.id = a.id)b;
Below is the result for the same:
id  name  sal
1   John  500
1   ken   500
2   paul  NULL
Now I'm not sure why the record with ID 2 is returned when using CROSS APPLY; shouldn't it be excluded with CROSS APPLY and only displayed when using OUTER APPLY?
One thing that I noticed was that if you remove the Aggregate function MAX then the record with ID 2 is not shown in the output. I'm running this query on SQL Server 2012.
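For reference, a scalar aggregate (MAX with no GROUP BY) always returns exactly one row, even over an empty set, so the applied query never comes back empty and CROSS APPLY has something to join to for id 2. Adding a GROUP BY (or removing the aggregate) lets the inner query return zero rows, which CROSS APPLY then eliminates:

-- One row with sal = NULL even though no b rows have id = 2:
select max(sal) as sal from b where b.id = 2;

-- Zero rows, so CROSS APPLY drops id 2 (OUTER APPLY would keep it with NULL):
select id, max(sal) as sal from b where b.id = 2 group by id;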
I need a little help here. I want to transfer ONLY new records AND update any modified records from Oracle into SQL Server using DTS. How should I go about it?
a) How do I use a global variable to get the max date? Where and what DTS task should I use to complete the job - Data Driven Query? Transform Data task? How? Can you give me samples? Perhaps you can email me the demo package as well.
b) So far, what I did was: I have a datemodified field in my Oracle table so that I can compare it with the datelastrun of my DTS package to get new records; records in Oracle having datemodified > Max(datelastrun) are transferred to the SQL Server table.
Now I am stuck as to how I should proceed - how can I transfer these records? Hope you can shed some light. Thank you in advance.
I would like to update Table 1 with the data from Table 2. Here is my problem: let's say that I have two records in Table 2 that have the same policyNumber but two different NewEmpNames; it only takes the first. In other words, a single policyNumber can be moved to a new EmpName, then later on to another NewEmpName, and even again if need be.
I am trying to update a field within one table with the values from another table, with the criterion that another field in each table is equal. What is the correct way to do this? My syntax is all wrong.
I have table1 which has many unique ID numbers and table2 that has many records for each ID. some of the ID numbers in table1 have changed and I have created a translation table (table3) that links the old and new ID numbers.
What I need to do is some sort of update sql statement that updates all the records in table2 changing all the oldID numbers to the new ones using the translation table.
Table1 and table2 are not linked...can anyone help me with the sql statement
example
Table 1
IDNUM NAME
12345 Joe
12346 Mary
12347 David
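A sketch of the update via the translation table, assuming table3's columns are called OldIDNUM and NewIDNUM and table2 stores the ID in a column called IDNUM (adjust the names to the real schema):

-- Rewrite every old ID in table2 to its new value from the translation table.
UPDATE t2
SET    t2.IDNUM = t3.NewIDNUM
FROM   table2 t2
JOIN   table3 t3 ON t3.OldIDNUM = t2.IDNUM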
Good day to all, I am new here so I hope I am doing things correctly.
The company I work for makes coils of shaped wire and works a 6 - 6 shift pattern.
I have a database that is updated from a data collection source (MS Access) at 06:00 every morning. This seems to be working OK; my problem is that most coils fit nicely into the 6 - 6 shift pattern, but now and again some drift over into the next shift. I have written a Crystal report that picks up this data. At the moment the coils are put in the database as: [Coil Start Time], [Coil Finish Time], [Coil Start Weight], [Coil Finish Weight], etc.
I have written (been helped to write) a SQL statement that will do the following:
Step 1: If the coil finish time is greater than the shift end time, then set the coil end time to the shift end time and zero the start and finish weights. Step 2: The original coil record is duplicated and the coil start time set to the start time of the shift, with all other data left alone.
Example of code:
-->>
SELECT [Batch Name], [Batch Start], [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch End]) < 18)
   OR ((DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
       AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18))
UNION ALL
SELECT [Batch Name], [Batch Start],
       DATEADD(hour, 17, DATEADD(minute, 59, CONVERT(char(10), [Batch End], 101))),
       0, 0, [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch Start]) < 18
  AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18)
UNION ALL
SELECT [Batch Name],
       DATEADD(hour, 18, CONVERT(char(10), [Batch Start], 101)),
       [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch Start]) < 18
  AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18)
UNION ALL
SELECT [Batch Name], [Batch Start],
       DATEADD(hour, 5, DATEADD(minute, 59, CONVERT(char(10), [Batch End], 101))),
       0, 0, [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
  AND DATEPART(hour, [Batch End]) >= 6 AND DATEPART(hour, [Batch End]) < 18
UNION ALL
SELECT [Batch Name],
       DATEADD(hour, 6, CONVERT(char(10), [Batch Start], 101)),
       [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
  AND DATEPART(hour, [Batch End]) >= 6 AND DATEPART(hour, [Batch End]) < 18
<<--
I have 2 options now
Option 1: leave this as a SQL view and report from it.
Option 2: insert the updated records into the tblCoilData table so that the data in the table is permanent.
I would prefer option 2, but I am a bit of a nugget when it comes to writing update/insert statements. Could someone please help me with this?
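A rough sketch of option 2, assuming the UNION ALL query above has been saved as a view (the view name is a placeholder, and the original spanning rows would still need to be updated or removed separately so they are not double-counted):

-- Write the split/adjusted rows back into the permanent table.
INSERT INTO dbo.tblCoilData
    ([Batch Name], [Batch Start], [Batch End], [Coil Start Weight],
     [Coil Finish Weight], [Product], [Shift], [Operator ID], [Works Order No])
SELECT [Batch Name], [Batch Start], [Batch End], [Coil Start Weight],
       [Coil Finish Weight], [Product], [Shift], [Operator ID], [Works Order No]
FROM   dbo.vwCoilDataSplit      -- placeholder view wrapping the query above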
Update the taskname as 'projectname' + '_' + 'workname' for any projectid, with projectname and workname coming from the projectid and workid in the task record.
so the task table becomes:
1, project1_work1, 1, 1
2, project1_work2, 1, 2
3, project2_work3, 2, 3
I can get all the records doing this
SELECT p.projectname+ ' ' + w.workname AS 'NEWNAME' FROM task t JOIN work w ON t.workid = w.workid JOIN project p ON t.projectid = p.projectid WHERE projectid = 2
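The same joins can drive the update itself; a sketch using the '_' separator from the requirement:

-- Rewrite every task name as projectname_workname.
UPDATE t
SET    t.taskname = p.projectname + '_' + w.workname
FROM   task t
JOIN   work w    ON w.workid = t.workid
JOIN   project p ON p.projectid = t.projectid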
I have one table of new records (tableA) that may already exist in tableB. I want to insert these records into tableB if they don't already exist, or update any existing ones with new data if they do already exist. A column (Action) in tableA already tells me whether this is an INSERT, UPDATE, or DELETE. I'm able to derive that I can do an insert with

select * into tableB from tableA where Action = 'INSERT'

...and I think I can handle the delete. But I'm stuck on the update. How do I do the update? An ordinary UPDATE statement just won't do unless I use a cursor to cycle through the recordset. I want to avoid a cursor.
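A set-based UPDATE with a join handles all of the 'UPDATE' rows in one statement, with no cursor; the key column (ID) and the value columns are placeholders:

-- Overwrite existing tableB rows with the new values from tableA.
UPDATE b
SET    b.Col1 = a.Col1,
       b.Col2 = a.Col2
FROM   tableB b
JOIN   tableA a ON a.ID = b.ID
WHERE  a.Action = 'UPDATE'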
How do I use DB-Library to update/insert database records without using the SQL language? I want to change the values of the data individually without plugging the new values into SQL and then executing it. The perfect situation for me is to loop through the retrieved records and then edit the values individually until EOF. Thanks, Carlo
My SQL is fairly basic and I was wondering whether it is possible to update the id field of a certain table in all records of that table, and to update the links to those ids in other tables.