SQL Server 2012 :: Update ID In The Table?
Oct 11, 2014
I wrote a query to update an identity column (INT IDENTITY(1,1)) to a specific value, but it is not giving me the result. Is it not possible to update an identity column in a table?
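A hedged aside on the usual answer: an IDENTITY column cannot be updated directly. One common workaround is to re-insert the row with the desired value while IDENTITY_INSERT is on, then remove the old row (table, column and id values below are placeholders):

SET IDENTITY_INSERT dbo.MyTable ON;        -- hypothetical table name
INSERT INTO dbo.MyTable (id, SomeCol)      -- list every column explicitly, including id
SELECT 100, SomeCol                        -- 100 = the new id value you want
FROM   dbo.MyTable
WHERE  id = 5;                             -- 5 = the current id value
DELETE FROM dbo.MyTable WHERE id = 5;
SET IDENTITY_INSERT dbo.MyTable OFF;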
I am trying to use a stored procedure to update a column in a SQL table using the value from a table variable. I am getting errors because my syntax is not correct. I think table aliases are not allowed in UPDATE statements.
This is my statement:
UPDATE [dbo].[sessions_teams] stc
SET stc.[Talks] = fmt.found_talks_type
FROM @Find_Missing_Talks fmt
WHERE stc.sessionid IN (SELECT sessionid FROM @Find_Missing_Talks)
AND stc.coupleid IN (SELECT coupleid FROM @Find_Missing_Talks)
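For reference, a minimal sketch of the alias form T-SQL does accept: make the alias the UPDATE target and declare it in the FROM clause (the join columns are assumed from the WHERE clause above):

UPDATE stc
SET    stc.[Talks] = fmt.found_talks_type
FROM   [dbo].[sessions_teams] AS stc
INNER JOIN @Find_Missing_Talks AS fmt
        ON fmt.sessionid = stc.sessionid
       AND fmt.coupleid  = stc.coupleid;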
To avoid locking/blocking within transaction scope, we are trying to make it a common practice in all our stored procedures to write UPDATE commands with the primary key columns in the WHERE clause. I have the following scenario...
UPDATE [dbo].[TL_CST_Locker_Issuance] SET
[isActive] = 0
WHERE
LockerIssuanceId IN (SELECT LockerIssuanceId
[Code] ...
What is the better approach to follow to avoid locks and gain performance?
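One hedged sketch of that pattern: batch the update on the primary key so each statement touches only a small slice and holds its locks briefly (the filter table name is a placeholder):

WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) L
    SET    L.isActive = 0
    FROM   dbo.TL_CST_Locker_Issuance AS L
    WHERE  L.isActive = 1
      AND  L.LockerIssuanceId IN (SELECT LockerIssuanceId FROM dbo.SomeFilterTable);   -- hypothetical filter source

    IF @@ROWCOUNT = 0 BREAK;
END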
My table payment_details structure is:
payment_id payment_code
1 null
2 null
3 null
4 null
Here payment_id is the primary key, and I need to copy the whole payment_id column into the payment_code column, so I tried the query below:
update payment_details
set payment_code = payment_no
where payment_code is null
but it shows a subquery error.
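For what it is worth, a minimal sketch of what the statement probably needs to be, assuming the intent is simply to copy payment_id into payment_code (note the original references payment_no, which is not a column in the table shown):

UPDATE payment_details
SET    payment_code = payment_id
WHERE  payment_code IS NULL;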
To see where the problem is, I am trying to count rows in the database. First I create a table A with three columns, namely tablename, rowbefore and rowafter, and I insert records into it as below.
INSERT INTO A
SELECT TableName = o.name, '', Rows = max(i.rows) FROM sysobjects o
INNER JOIN sysindexes i ON o.id = i.id
WHERE xtype = 'u' AND OBJECTPROPERTY(o.id,N'IsUserTable') = 1
GROUP BY o.name
ORDER BY o.name
Then I update rowbefore with rowafter as below.
UPDATE A SET rowbefore = rowafter
Now I launch my application, which updates records in the database. Then I try to update rowafter with the new row counts, as below.
UPDATE A
SET rowafter = (SELECT max(sysindexes.rows) FROM sysobjects
INNER JOIN sysindexes ON sysobjects.id = sysindexes.id
WHERE xtype = 'u' AND OBJECTPROPERTY(sysobjects.id,N'IsUserTable') = 1 AND A.tablename = sysobjects.name)
Does this update really update my column rowafter or not?
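As a side note, a minimal sketch of an alternative row-count source that avoids the old sysobjects/sysindexes compatibility views (same idea, newer catalog):

SELECT OBJECT_NAME(ps.object_id) AS TableName,
       SUM(ps.row_count)         AS Rows
FROM   sys.dm_db_partition_stats AS ps
WHERE  ps.index_id IN (0, 1)                               -- heap or clustered index only
  AND  OBJECTPROPERTY(ps.object_id, N'IsUserTable') = 1
GROUP  BY ps.object_id;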
I have some stored procedures that do updates using table aliases, like this:
UPDATE TableAlias
SET ColVal = 1
FROM RealTable AS TableAlias
WHERE TableAlias.ColVal <> 1
This allows me to include joins and stuff in the update (not shown) and make it all more readable for me.
It all works fine, and the SSMS parser says it's fine. But I also have another script which looks at sys.sql_expression_dependencies and sys.objects to find stored procedures with invalid object references (see below), and it's understandably saying that all of the above type stored procedures have invalid references.
SELECT
OBJECT_NAME(DEP.referencing_id) AS referencing_name,
DEP.referenced_entity_name
FROM sys.sql_expression_dependencies AS DEP
WHERE
-- Only validate local references:
(
DEP.referenced_database_name = DB_NAME()
[Code] ....
So I have a couple questions.
1. Is the UPDATE syntax I'm using kosher?
2. Can you recommend any updates to my stored procedure validation script that will better accommodate table aliases like mine?
I want to update one table with the value from a variable using a set-based approach.
So the line
>> select @List = coalesce(@list + ',','') + cast(id as varchar(10)) + cast(feetype as varchar(25))
works fine, but I want to take @list and use it to update my customers table. Here is the full code:
create table customers
(custid int, fee_history varchar(max))
insert into customers
(custid
, fee_history
[code]....
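Since the full code is truncated, here is only a hedged sketch of one set-based way to write the whole history per customer into fee_history, assuming a hypothetical fees source table with custid, id and feetype columns:

UPDATE c
SET    c.fee_history = x.list
FROM   customers AS c
CROSS APPLY (
    SELECT STUFF((
        SELECT ',' + CAST(f.id AS varchar(10)) + CAST(f.feetype AS varchar(25))
        FROM   fees AS f                      -- hypothetical source of the fee rows
        WHERE  f.custid = c.custid
        FOR XML PATH('')), 1, 1, '') AS list
) AS x;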
I am writing a query to update a table variable, and it is throwing an error.
I have declared a table variable with three columns and inserted data into it. I want to update col1 of that table variable where its second column matches a column from a physical table.
update @MYtabvar set @Mytabvar.LatestDate=B.LatestDate from TableB B where @Mytabvar.id=B.ID
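A table variable cannot be referenced by its @name in the SET or WHERE clauses; a minimal sketch of the aliased form that does parse:

UPDATE T
SET    T.LatestDate = B.LatestDate
FROM   @MYtabvar AS T
INNER JOIN TableB AS B ON T.id = B.ID;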
I have a table with 8 columns and I need to update data in multiple columns. The table contains 1 million records. A single UPDATE was taking too long, so I broke it into multiple UPDATE statements and ran them in parallel, each updating a different column.
This approach is working fine but I am getting the deadlock error.
Transaction (Process ID 65) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I tried with various lock hints but no success.
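One hedged alternative to the parallel per-column updates is a single keyed, batched UPDATE that touches all the columns at once, so the writers never cross each other (table, key and column names below are placeholders):

DECLARE @MaxId int = (SELECT MAX(Id) FROM dbo.BigTable);
DECLARE @BatchSize int = 50000, @MinId int = 0;

WHILE @MinId < @MaxId
BEGIN
    UPDATE T
    SET    T.Col1 = T.NewCol1,        -- placeholder column assignments
           T.Col2 = T.NewCol2,
           T.Col3 = T.NewCol3
    FROM   dbo.BigTable AS T
    WHERE  T.Id > @MinId AND T.Id <= @MinId + @BatchSize;

    SET @MinId = @MinId + @BatchSize;
END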
In a stored procedure I dynamically create a temp table by selecting the name of Applications from a regular table. Then I add a date column and add the last 12 months. See attachment.
So far so good. Now I want to update the data in columns by querying another regular table. Normally it would be something like:
UPDATE ##TempTable
SET [columName] = (SELECT SUM(columName)
FROM RegularTable
WHERE FORMAT(RegularTable.Date,'MM/yyyy') = FORMAT(##TempMonths.x,'MM/yyyy'))
However, since I don't know what the name of the columns are at any given time, I need to do this dynamically.
How can I get the column names of a temp table dynamically while doing an UPDATE?
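A hedged sketch of one way to do that: read the runtime column names from tempdb.sys.columns and build the UPDATE statements dynamically (this assumes the month/date column in ##TempTable is called x and that RegularTable has a column named after each application):

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'
UPDATE T
SET ' + QUOTENAME(c.name) + N' = (SELECT SUM(R.' + QUOTENAME(c.name) + N')
    FROM RegularTable AS R
    WHERE FORMAT(R.[Date], ''MM/yyyy'') = FORMAT(T.x, ''MM/yyyy''))
FROM ##TempTable AS T;'
FROM   tempdb.sys.columns AS c
WHERE  c.object_id = OBJECT_ID('tempdb..##TempTable')
  AND  c.name <> 'x';               -- skip the date column itself

EXEC sys.sp_executesql @sql;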
Our system runs a SQL Server 2012 database with a table (table_a) that has over 10M records. Our system receives a data file from an upstream system daily, containing approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1 Create a table (table_b) whose structure is the same as table_a
2 Use BCP to import updated records into table_b
3 Remove outdated data in table_a:
delete a from table_a as a inner join table_b as b on a.key_fields = b.key_fields
4 Append updated or new data into table_a:
insert into table_a select * from table_b
In testing, this solution is very inefficient; step 3 alone costs several hours. How can I improve it?
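One hedged way to speed this up is to avoid the mass delete entirely: index the key, update the rows that already exist, and insert only the genuinely new ones (non-key column names are placeholders):

CREATE INDEX IX_table_b_key ON table_b (key_fields);

UPDATE a
SET    a.col1 = b.col1,         -- repeat for each non-key column
       a.col2 = b.col2
FROM   table_a AS a
INNER JOIN table_b AS b ON a.key_fields = b.key_fields;

INSERT INTO table_a
SELECT b.*
FROM   table_b AS b
WHERE  NOT EXISTS (SELECT 1 FROM table_a AS a WHERE a.key_fields = b.key_fields);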
I have a stored proc that contains an update which utilizes a case statement to populate values in a particular column in a table, based on values found in other columns within the same table. The existing update looks like this (object names and values have been changed to protect the innocent):
UPDATE dbo.target_table
set target_column =
case
when source_column_1 = 'ABC' then 'XYZ'
when source_column_2 = '123' then 'PDQ'
[Code] ....
The powers that be would like to replace this case statement with some sort of table-driven structure, so that the mapping rules defined above can be maintained in the database by the business owner, rather than having it embedded in code and thus requiring developer intervention to perform changes/additions to the rules.
The rules defined in the case statement are in a pre-defined sequence which reflects the order of precedence in which the rules are to be applied (in other words, if a matching value in source_column_1 is found, this trumps a conflicting matching value in source_column_2, etc). A case statement handles this nicely, of course, because the case statement will stop when it finds the first "hit" amongst the WHEN clauses, testing each in the order in which they are coded in the proc logic.
What I'm struggling with is how to replicate this using a lookup table of some sort and joins from the target table to the lookup to replace the above case statement. I'm thinking that I would need a lookup table that has column name/value pairings, with a sequence number on each row that designates the row's placement in the precedence hierarchy. I'd then join to the lookup table somehow based on column names and values and return the match with the lowest sequence number, or something to that effect.
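A hedged sketch of that idea, with assumed names throughout: a rules table keyed by a precedence sequence, and an APPLY that picks the lowest-sequence rule matching each row, which reproduces the first-hit behaviour of the CASE:

CREATE TABLE dbo.target_column_rules (
    rule_seq      int         NOT NULL,   -- precedence: lowest sequence wins
    source_column sysname     NOT NULL,   -- which column the rule tests
    source_value  varchar(50) NOT NULL,
    target_value  varchar(50) NOT NULL
);

UPDATE t
SET    t.target_column = r.target_value
FROM   dbo.target_table AS t
CROSS APPLY (                              -- use OUTER APPLY instead to NULL unmatched rows, like a CASE with no ELSE
    SELECT TOP (1) ru.target_value
    FROM   dbo.target_column_rules AS ru
    WHERE (ru.source_column = 'source_column_1' AND ru.source_value = t.source_column_1)
       OR (ru.source_column = 'source_column_2' AND ru.source_value = t.source_column_2)
    ORDER BY ru.rule_seq
) AS r;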
The objective is to identify orders where an order fee has been applied incorrectly. I have multiple orders per customer, my table contains an orderID and a customerID. Currently if the customer places additional orders before the previous orders have been closed/cancelled, then additional fees are being applied.
Let's say I'm comparing order #1 to order #2. I need to identify these rows where the following is true:-
The CustID is the same.
Order #2 has a more recent order date.
Order #2 has a FeeDate Before the CancelledDate of Order #1 (or Order #1 has no cancellation date).
So in the table the orderID:2835692 of CustID: 24643 has a valid order fee. But all the subsequently placed orders have fees which were applied before the first order was cancelled and so I want to update the FeeInvalid column with a 'Y'. The first fee will always be valid.
I think I understand why the code I am trying doesn't achieve the result I want but I can't figure out how to write it correctly. Below is one example of code I've tried and also code to create the table and insert some test data.
update t1
SET FeeInvalid = 'Y'
FROM MockData t1 Join MockData t2 on t1.CustID = t2.CustID
WHERE t1.CustID = t2.CustID
AND t2.OrderDate > t1.OrderDate
AND t2.FeeDate > t1.CancelledDate
CREATE TABLE [dbo].[MockData](
[OrderID] [float] NULL,
[code]....
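On the chance it helps, a hedged sketch of one reading of those rules: flag an order's fee whenever an earlier order from the same customer was still open (not yet cancelled) when the fee was applied; the earliest order is never flagged because no earlier order exists:

UPDATE cur
SET    cur.FeeInvalid = 'Y'
FROM   MockData AS cur
WHERE  EXISTS (SELECT 1
               FROM   MockData AS prev
               WHERE  prev.CustID = cur.CustID
                 AND  prev.OrderDate < cur.OrderDate
                 AND (prev.CancelledDate IS NULL OR cur.FeeDate < prev.CancelledDate));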
I want to compare two columns in the same table, startdate and enddate, for one clientId. If the clientId has continuous referenceids, with each reference's startdate and enddate running on from the previous one, then I don't need any caseopendate; but if the clientId gets a new referenceid whose startdate is not continuous with its previous referenceid, then I need to set that startdate as the caseopendate.
I have a table containing 5 columns:
caseid
referenceid
startdate
enddate
caseopendate
[code]...
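Since the table script is truncated, here is only a hedged sketch using LAG (available in SQL Server 2012), assuming the rows are ordered by startdate per client, that "continuous" means the new startdate follows straight on from the previous enddate, and that caseid identifies the client (the table name is also an assumption):

;WITH ordered AS (
    SELECT caseid, referenceid, startdate, enddate, caseopendate,
           LAG(enddate) OVER (PARTITION BY caseid ORDER BY startdate) AS prev_enddate
    FROM   dbo.ClientCases               -- hypothetical table name
)
UPDATE o
SET    o.caseopendate = o.startdate
FROM   ordered AS o
WHERE  o.prev_enddate IS NULL                               -- first reference for the client
   OR  DATEDIFF(DAY, o.prev_enddate, o.startdate) > 1;      -- gap: not continuous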
I have a temporary table #Temp with, among others, a column CountryCode and a column Lastname. I would like to change the ü that appears in some names to u (u umlaut to a plain u), but only for those that have the nationality 'Ned'.
My code so far:
Update #Temp
set LastName = replace(Lastname, 'ü', 'ue') WHERE CountryCode = 'Ned'
This code deletes all entries in the column Lastname
I think it must be easy, but I keep staring at the code.
What to do?
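One hedged guess at the cause, with a sketch of a safer form: if the ü literal is passed as a non-Unicode string it can be mangled by the code page, so use N'' literals and limit the update to rows that actually contain the character. Whether the replacement should be u or ue is up to you; the prose above says u while the original code used 'ue'.

UPDATE #Temp
SET    LastName = REPLACE(LastName, N'ü', N'u')
WHERE  CountryCode = 'Ned'
  AND  LastName LIKE N'%ü%';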
I have a table A:
|account | Unmort |
| A |10.000.000 |
and a Table B
|account| Jenis | Nominal | Unmort |Total|
-------------------------------------------
| A | 021 | 200.000| - | - |
| A | 028 | 3.200.000| - | - |
| A | 023 | 7.200.000| - | - |
How do I update it to look like this?
|account| Jenis |Nominal | Unmort |Total |
| A | 021 |200.000 | - |200.000 |
| A | 028 |3.200.000 | 2.800.000 |400.000 |
| A | 023 |7.200.000 | 7.200.000 | 0 |
For this type of account, jenis 021, the Unmort field should be set to 0, and the Total field must not be a minus (negative) value...
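Without knowing the exact business rule this is only a heavily hedged sketch: treat Table A's Unmort as a pool and allocate it to Table B's rows as a running total, largest Nominal first (that ordering is an assumption that happens to reproduce the expected output above), with Total = Nominal minus whatever was allocated so it can never go negative (table names are placeholders):

;WITH alloc AS (
    SELECT b.account, b.Jenis, b.Nominal,
           a.Unmort AS pool,
           SUM(b.Nominal) OVER (PARTITION BY b.account
                                ORDER BY b.Nominal DESC
                                ROWS UNBOUNDED PRECEDING) AS running_nominal
    FROM   TableB AS b
    INNER JOIN TableA AS a ON a.account = b.account
)
UPDATE b
SET    b.Unmort = x.allocated,
       b.Total  = x.Nominal - x.allocated
FROM   TableB AS b
INNER JOIN (
    SELECT account, Jenis, Nominal,
           CASE WHEN running_nominal <= pool THEN Nominal                                        -- fully covered
                WHEN running_nominal - Nominal < pool THEN pool - (running_nominal - Nominal)    -- partial
                ELSE 0 END AS allocated
    FROM alloc
) AS x ON x.account = b.account AND x.Jenis = b.Jenis;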
Is it possible to allow a user to insert and update data in a table but prevent them from performing deletes against that same table? For auditing purposes I need to prevent the end users from being able to delete data.
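Yes; standard table-level permissions cover this. A minimal sketch, assuming a database user or role named AppUser and a table dbo.MyTable:

GRANT SELECT, INSERT, UPDATE ON dbo.MyTable TO AppUser;
DENY  DELETE                 ON dbo.MyTable TO AppUser;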
I have created a table (T1) from a SELECT query result; that SELECT query is parameterised. Now I need to update the table (T1) with the query result every time the procedure runs.
Below is my Query:
ALTER PROCEDURE [dbo].[RPT_Cost_copy]
SELECT MEII.*, SIMM.U_SP2DC, UPPER(SIMM.U_C3C2) AS GRP3,sb.cost, PREV.Z1, PREV.Z3, SB.Z2, SB.Z4,SIMM.U_C3DC1 AS FAM
INTO T1
FROM
(SELECT a.meu, a.mep2, SUM(a.mest) as excst
FROM mei as A WHERE a.myar=@yr and a.mprd=@mth AND LTRIM(A.MCU) <> '' AND LTRIM(A.MRP2) <> ''
[code]....
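Since the rest of the procedure is truncated, only a hedged sketch: SELECT ... INTO fails once T1 already exists, so one common pattern is to drop (or truncate and reload) T1 at the top of the procedure on every run:

IF OBJECT_ID('dbo.T1', 'U') IS NOT NULL
    DROP TABLE dbo.T1;
-- the existing SELECT ... INTO T1 then recreates and repopulates it with the new parameter values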
We are facing a weird scenario in which the snapshot is getting corrupted after inserting/updating a few million records into a table.
SQL Server 2012
windows server 2008 R2
service pack 1
64-bit OS
I run the following statement and it will not update beyond 7 million plus rows, and I have about 38 million to complete. I keep checking updated row counts, and after half a day it is still the same, so I know something is wrong because it was rolling through with no problem when I initiated it. I need to complete this ASAP, which adds to my frustration. (The Acct_Num_CH field is an encrypted field, FYI.)
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
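A hedged guess at the stall plus a sketch of a fix: because every row still satisfies Acct_Num_CH IS NOT NULL after it is updated, the loop can keep revisiting rows it has already touched. Excluding rows that already hold the new value lets each batch make real progress, and UPDATE TOP replaces the deprecated SET ROWCOUNT:

DECLARE @NewVal varchar(64) = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v';

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [dbo].[CC_Info_T]
    SET    [Acct_Num_CH] = @NewVal
    WHERE  [Acct_Num_CH] IS NOT NULL
      AND  [Acct_Num_CH] <> @NewVal;        -- skip rows already converted

    IF @@ROWCOUNT = 0 BREAK;
END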
I am trying to write an UPDATE string for individual PartIDs. I wrote the query below, but it isn't populating the time to test.
SELECT 'UPDATE Parts SET = [TimeToTest]' + ' ' +
Convert(varchar, (select test From [dbo].[db_PartsCats] as c Join Parts As P on P.category = C.CatID Where PartID = 48727))
+ ' ' + 'WHERE PartID = ' + CONVERT(varchar, P.PartID)
From Parts As P
Where FRID = 0 And TimeToTest = 0 and TimeToInstall = 0 and TimeToProgram = 0 And TimeToTrain = 0 And manufacturer = 187
Order By category
My results:
I should get UPDATE Parts SET = [TimeToTest] 0.5000 WHERE PartID = 48871, but I am getting NULLs instead.
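A hedged sketch of the generator with the SET clause turned around (column name on the left) and the subquery correlated on the outer PartID rather than the hard-coded 48727:

SELECT 'UPDATE Parts SET [TimeToTest] = '
       + CONVERT(varchar(20), c.test)
       + ' WHERE PartID = ' + CONVERT(varchar(20), p.PartID)
FROM   Parts AS p
INNER JOIN [dbo].[db_PartsCats] AS c ON p.category = c.CatID
WHERE  p.FRID = 0 AND p.TimeToTest = 0 AND p.TimeToInstall = 0
  AND  p.TimeToProgram = 0 AND p.TimeToTrain = 0 AND p.manufacturer = 187
ORDER BY p.category;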
Create Table #Temp (ID int not null primary Key, Name varchar(25) not null, DOB datetime not null, Sex char(1), Race char(1), Height int, Weight int)
insert #Temp
select 1, 'Kenneth', '1963-02-26 00:00:00.000', 'M','C', 516, 160
union
select 2, 'Kenneth', '1963-02-26 00:00:00.000', NULL,NULL, NULL, NULL
union
select 3, 'William', '1962-06-28 00:00:00.000', 'M','C', 600, 223
[Code] .....
/* Expecting Output */
select 1 as ID, 'Kenneth' as Name, '1963-02-26 00:00:00.000' as DOB, 'M' as Sex,'C' as Race, 516 as Height, 160 as Weight
union
select 2, 'Kenneth', '1963-02-26 00:00:00.000', 'M','C', 516, 160
union
select 3, 'William', '1962-06-28 00:00:00.000', 'M','C', 600, 223
union
[Code] .....
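A hedged sketch of one way to produce that output: copy the missing attributes from the matching Name/DOB row that does have them (this assumes at most one fully populated row per Name/DOB pair):

UPDATE t
SET    t.Sex    = s.Sex,
       t.Race   = s.Race,
       t.Height = s.Height,
       t.Weight = s.Weight
FROM   #Temp AS t
INNER JOIN #Temp AS s
        ON  s.Name = t.Name
        AND s.DOB  = t.DOB
        AND s.ID  <> t.ID
        AND s.Sex IS NOT NULL
WHERE  t.Sex IS NULL;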
I am working on an existing infrastructure and I do not have the liberty to change much right now. I am in a situation where the app issues UPDATE STATISTICS commands quite often, so frequently that sometimes one blocks another. Is there any way I can do something like this:
IF ( update_statistics going on)
dont do anything
else
run update statistics
This is a temporary solution until I fix the bad inline SQL code in the app and use stored procedures.
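A heavily hedged sketch of the interim check, assuming it is acceptable to peek at sys.dm_exec_requests for a statistics update that is already in flight (the table name is a placeholder):

IF NOT EXISTS (SELECT 1
               FROM   sys.dm_exec_requests
               WHERE  command LIKE 'UPDATE STATISTIC%')   -- assumes the command text is reported this way
BEGIN
    UPDATE STATISTICS dbo.SomeTable;                      -- hypothetical table
END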
We have an UPDATE trigger that is failing. This seems like a basic task - we want to write a record to a separate tracking table when our main transaction table is updated for any reason. Our assumption is that we have a reference to the data from the "inserted" record that was just updated. The scenario here is that we are running a batch process which READS several thousand records in our transaction table each evening.
We then mark each individual record as processed on the transaction table and expect that the UPDATE trigger will successfully fire (it is not). The version of our trigger listed below shows our attempt to deal with the fact that TransactionID does NOT exist from "inserted." We also have a version of this trigger that deals with INSERTS - it works flawlessly.
ON [dbo].[FPS_Transaction]
AFTER UPDATE
AS declare @trxId uniqueidentifier;
BEGIN TRY
SET NOCOUNT ON
[code]...
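Since the trigger body is truncated, here is only a minimal sketch of the usual shape, assuming FPS_Transaction has a TransactionID column and a hypothetical tracking table; everything about the updated rows comes from the inserted pseudo-table, and the trigger must handle the whole batch, not a single row:

CREATE TRIGGER [dbo].[trg_FPS_Transaction_Update]
ON [dbo].[FPS_Transaction]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.FPS_Transaction_Tracking (TransactionID, ChangedAt)   -- hypothetical tracking table
    SELECT i.TransactionID, SYSDATETIME()
    FROM   inserted AS i;                                                 -- one row per updated transaction
END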
I am looking for standard SQL code for the two concerns below.
1. I want to drop the column Rowchecksum from all tables whose name starts with ArchiveBbx.
2. I want to update all tables whose name starts with ArchiveBbx.
Example:
Update table Archivebbxfbcc
set Rowchecksum=HASHBYTES('MD5', CAST(CHECKSUM(Col001, Col002, Col003, Col004) AS varchar(max)))
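A hedged sketch of the usual approach for both concerns: build the statements dynamically from sys.tables for every table whose name starts with ArchiveBbx (the UPDATE variant follows the same pattern, with your SET clause in place of the DROP COLUMN):

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' DROP COLUMN Rowchecksum;' + CHAR(10)
FROM   sys.tables  AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE  t.name LIKE 'ArchiveBbx%';

EXEC sys.sp_executesql @sql;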
I open Query Analyzer and in one tab I update a record in a transaction and hold it open:
begin tran
update customers set territory = 'x' where customer = 'A00001'
--rollback tran
In a second tab I attempt to update all records in the table
update customers set carrier = ''
Clearly this fails because of the lock placed by the first script, and this is fine.
However, is there a way to get the second script to ignore the locked rows and just update as many as it can? The obvious answer seemed to be the READPAST hint, as follows…
update customers with (READPAST) set carrier = ''
…but this is still blocked by the original lock. I’ve tried combining it with all sorts of other table hints but all seem to get blocked.
The following does work, ignoring the lock and not returning the data
Select * from customers with (READPAST) where customer = 'A00001'
I’ve tried combining this with the update like so…
update customers
set carrier = ''
from customers with (READPAST)
where customer = 'A00001'
..but this is blocked too.
I'm so desperate I tried moving the update into a cursor and updating one row at a time. Nothing worked. I thought I might be able to do something like this:
If (Select count(*)
from customers with (READPAST)
where customer = 'A00001') > 0
--then perform update
..but this returns a value of 1 even though the following returns no rows.
Select * from customers with (READPAST) where customer = 'A00001'
Look at the following code,
Create table #test
(
id int primary key,
Name varchar(100)
)
insert into #test values (1,'John')
insert into #test values (2,'Walker')
[Code] ....
-- Query 1 :
update #test set name = 'Joney' where id = 1
-- Query 2 :
set rowcount 1
update #test set name = 'Joney' where id = 1
set rowcount 0
1. #test table have primary key & clustered index.
2. Obviously only one row will be available for an id.
3. In query 1, will SQL Server keep looking for matching rows even after it has found one row?
4. Will query 2 really gain any performance? (One way to check both is sketched below.)
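A minimal sketch for checking questions 3 and 4 yourself: compare the I/O of the two forms, and note that UPDATE TOP (1) is the supported replacement for SET ROWCOUNT in any case:

SET STATISTICS IO ON;

UPDATE #test SET Name = 'Joney' WHERE id = 1;            -- plain form: unique clustered index seek
UPDATE TOP (1) #test SET Name = 'Joney' WHERE id = 1;    -- TOP (1) form, instead of SET ROWCOUNT

SET STATISTICS IO OFF;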
I have two tables that can be created with sample data using the DDL at the bottom of this post. What I'm looking to do is update the QtyReceived column in tblPurchaseOrderLineDetail from the Qty column in tblReceivedItems. However, the tricky part that I can't figure out is splitting these quantities out over multiple lines. I should only be allowed to receive up to the QtyOrdered column in tblPurchaseOrderLineDetail.
For a specific example from the sample data we'll look at PurchaseOrderDetailID 28526. From the tblReceivedItems, there are three records with quantities of 48, 48, and 20. From the tblPurchaseOrderLineDetail there are three records of QtyOrdered of 55, 45, and 20. What I would like to happen is fulfill the records in the tblPurchaseOrderLineDetail sequentially (essentially in order of ExpectedDate). So, the QtyReceived would be 55, 45, and 16 for the corresponding records. If there is already a quantity in the QtyReceived column, but it's less than the QtyOrdered column, the quantity needs to be added to the column (not overwritten).
DDL To CREATE Sample Tables and Data:
CREATE TABLE [dbo].[tblReceivedItems](
[ID] [int] IDENTITY(1,1) NOT NULL,
[PurchaseOrderDetailID] [int] NULL,
[Qty] [int] NULL)
SET IDENTITY_INSERT [dbo].[tblReceivedItems] ON
INSERT [dbo].[tblReceivedItems] ([ID], [PurchaseOrderDetailID], [Qty]) VALUES (1, 28191, 48)
[code]....
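With the DDL truncated, here is only a heavily hedged sketch of the running-total allocation: sum what was received per PurchaseOrderDetailID, walk the order lines in ExpectedDate order, and cap each line at its QtyOrdered (an ID key column on tblPurchaseOrderLineDetail is assumed, and folding in any pre-existing QtyReceived would still need to be added):

;WITH recv AS (
    SELECT PurchaseOrderDetailID, SUM(Qty) AS TotalQty
    FROM   dbo.tblReceivedItems
    GROUP  BY PurchaseOrderDetailID
),
lines AS (
    SELECT d.ID, d.PurchaseOrderDetailID, d.QtyOrdered,
           SUM(d.QtyOrdered) OVER (PARTITION BY d.PurchaseOrderDetailID
                                   ORDER BY d.ExpectedDate, d.ID
                                   ROWS UNBOUNDED PRECEDING) AS RunningOrdered
    FROM   dbo.tblPurchaseOrderLineDetail AS d
)
UPDATE d
SET    d.QtyReceived = CASE
           WHEN r.TotalQty >= l.RunningOrdered                THEN l.QtyOrdered                                     -- line fully received
           WHEN r.TotalQty >  l.RunningOrdered - l.QtyOrdered THEN r.TotalQty - (l.RunningOrdered - l.QtyOrdered)   -- partial
           ELSE 0
       END
FROM   dbo.tblPurchaseOrderLineDetail AS d
INNER JOIN lines AS l ON l.ID = d.ID
INNER JOIN recv  AS r ON r.PurchaseOrderDetailID = l.PurchaseOrderDetailID;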
In a t-sql 2012 stored procedure, I would like to know how I can complete the task I am listing below:
In an existing t-sql 2012 stored procedure, there is a table called 'Atrn' that is truncated every night. The Table 'Atrn' has a column called 'ABS' that is populated with incorrect data.
The goal is to place the correct value into the ABS column of the Atrn table while the T-SQL 2012 stored procedure is executing.
**Note: The goal is to fix the problem now since it is a production problem. The entire stored procedure that updates the 'dbo.Atrn' table will be rewritten in the near future.
My plan is to:
1. Create a temp table called #Atrnwork that will contain the columns
Atrnworkid (int) and ABSvalue (a double/float value).
2. The Atrnworkid column in the #Atrnwork table will obtain its value from the key of Atrn (atrnid) by doing a SELECT INTO. At the same time, the value for ABSvalue will be obtained by running some SQL as the SELECT INTO occurs.
3. The main table Atrn will be changed with an update statement that looks something like:
Update Atrn
set ABS = ABSvalue
join Atrn.atrnid = #Atrnwork.Atrnworkid
All in all, can you tell me what a good solution to this problem would be, and/or show some SQL for how to solve the problem listed above?
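A minimal sketch of step 3 with the join written out properly (the rest of the plan reads fine):

UPDATE a
SET    a.ABS = w.ABSvalue
FROM   dbo.Atrn AS a
INNER JOIN #Atrnwork AS w ON w.Atrnworkid = a.atrnid;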
I am writing a trigger to get values into an audit log table when values get updated. Below is the code of my trigger.
CREATE TRIGGER [dbo].[Update_Temp] ON [dbo].[Temptable1] FOR UPDATE
AS
DECLARE @bit INT ,
@field INT ,
@maxfield INT ,
@char INT ,
@fieldname VARCHAR(128) ,
@TableName VARCHAR(128) ,
[Code] ....
The code is working fine when the table has a primary key associated. However, due to some restrictions I will not be able to have a primary key on some tables, and I want to implement the same trigger on those tables too. When there is a primary key, that primary key needs to be inserted into the audit table; if there is no primary key, I want a specific column value to be inserted instead of the primary key value.
For example, I have a student table with student id, name and dob, and no primary key defined for the table. So when I update the name or dob, I need the student id to be inserted into the PK column of the audit table.
I tried modifying the code by checking @pkcols for NULL and, if it is NULL, using that column value as the primary key, but I was not able to get it to work.
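Without the full trigger this is only a guess at the shape of the fallback: the generic audit trigger normally builds @PKCols as a join-condition string from the table's primary key metadata, so when that comes back NULL you can substitute a hand-picked column (student_id is the example column from the post):

-- inside the trigger, after @PKCols has been built from the key metadata:
IF @PKCols IS NULL
    SET @PKCols = ' and i.[student_id] = d.[student_id]';   -- assumed fallback column for tables with no PK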
I am unable to update the data record by record in the scenario below.
Required output:
A patient can be admitted/re-admitted multiple times to the hospital. If a patient is readmitted multiple times after the first visit, the first visit record gets Re-admission = 0 and Index = 1; this visit is called the Index_Admission of that patient. Using this index admission we should calculate the 30-day readmissions.
Current Output:
Calculation: from the index admission's discharge date to the next admit visit date,
1) if the difference is less than 30 days, set readmission = 1 and Index = 0;
otherwise readmission = 0 and Index = 1 should be updated.
Every check should use the latest index admission's discharge date.
To get this result I wrote the logic below, but it is updating readmission = 0 and Index = 1 only after 30 days post-discharge of the first index admission.
UPDATE Readmission
SET Index_AMI = (CASE WHEN DATEDIFF(DD, (SELECT Sub.Max_Index_Dis FROM
(SELECT Patient_ID, MAX(Discharge_Date_Time) Max_Index_Dis FROM Readmission
WHERE Index_AMI = 1 AND FPR.Patient_ID = Patient_ID GROUP BY Patient_ID) Sub)
, FPR.Admit_Date_Time) between 0 and 31 THEN 0 ELSE 1 END),
[Code] ....
Expected Result:
I have a simple update statement (see example below). When it runs, I expect to see the number of records updated in the Results tab. That information shows up in the Messages tab; however, what is displayed in the Results tab is (No column name) 40. I do not know where the 40 is coming from. I have tried restarting SSMS 2012, restarting my computer, and turning NOCOUNT on and off.
"UPDATE TableA
SET Supervisor = 'A123'
WHERE PersonnelNumber = 'B456'"
I have created a dynamic MERGE statement in an SCD2 stored procedure, which inserts the record if there is no match; if the bbxkey matches from the source table to the destination table, it updates the old record to latest version 0 and inserts the new record with latest version 1.
I am getting the error below when I have more than one row with the same bbxkey in my source table. How can I handle this?
BBXkey is nothing but a value I derive by combining 2 columns.
Msg 8672, Level 16, State 1, Line 6
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.
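A hedged sketch of the usual fix for error 8672: de-duplicate the source on BBXkey before the MERGE so no target row can match more than one source row (the source table name and the tie-breaking ORDER BY are placeholders):

;WITH src AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY BBXkey ORDER BY (SELECT NULL)) AS rn   -- pick a real tie-breaker if one exists
    FROM   dbo.SourceTable               -- hypothetical source table
)
SELECT *
INTO   #DedupedSource
FROM   src
WHERE  rn = 1;
-- then point the existing MERGE's USING clause at #DedupedSource instead of the raw source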