How To Replace The Existing Records In The Table From Updates In The Transactional File
Sep 13, 2007
Hi, I am getting a weekly transaction file which has two columns, trans code and trans date, that indicate whether a record was added or changed; in the monthly master file these two fields are blank.
How do I update the rows in the table (which holds the rows from the master file) with the rows coming from the transaction file?
To better explain, here is an example:
Master File
ID Name AGE Salary Transcode TransDate
2 dev 27 2777
Transaction File indicating change
ID Name AGE Salary Transcode TransDate
2 dev 27 24444 C 08072007
The output should be
2 dev 27 24444 C 08072007
replacing the existing row in the table (updating the whole row).
I have 50 columns in my table, and based on those two fields I should replace the rows existing in the table; if the ID does not match an existing row, I should just add it as a new row.
What transformation should I use to replace all the columns of the table row whose ID matches the current record from the trans file, and to add the record as a new row if no matching ID exists?
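Outside of SSIS, the usual pattern for this (before MERGE existed) is an UPDATE for the matching keys followed by an INSERT for the rest. A minimal sketch, assuming the target table is MasterTable and the weekly file has been staged into TransTable (both names are placeholders; the columns are the ones shown above):

-- Replace the whole row for IDs that already exist
UPDATE m
SET    m.Name      = t.Name,
       m.AGE       = t.AGE,
       m.Salary    = t.Salary,
       m.Transcode = t.Transcode,
       m.TransDate = t.TransDate
FROM   MasterTable m
JOIN   TransTable t ON t.ID = m.ID;

-- Add rows whose ID is not in the table yet
INSERT INTO MasterTable (ID, Name, AGE, Salary, Transcode, TransDate)
SELECT t.ID, t.Name, t.AGE, t.Salary, t.Transcode, t.TransDate
FROM   TransTable t
WHERE  NOT EXISTS (SELECT 1 FROM MasterTable m WHERE m.ID = t.ID);

In SSIS terms this is typically a Lookup transformation on ID: matched rows go to an OLE DB Command (or to a staging table followed by the UPDATE above), and unmatched rows go to an OLE DB Destination for the insert.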
I have a situation where deleting old records is blocking updates to the latest records on a highly transactional table, and the application is getting timeout errors.
In detail: I have one table called Tran_table1 in an OLTP database. Tran_table1 is highly transactional; it receives inserts and updates continuously.
While archiving records more than 2 years old from Tran_table1 into Tran_table1_archive in batches (using the DELETE ... OUTPUT INTO clause), any UPDATEs on Tran_table1 get blocked, and the result is timeout errors in the application.
Are there any SQL Server hints to avoid the blocking?
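Hints aside, the usual way to tame this is to delete in small batches inside short transactions, so each batch holds its locks only briefly and escalation to a table lock becomes unlikely. A rough sketch; the DateCreated column is an assumption, and Tran_table1_archive is assumed to have a matching column layout:

DECLARE @Rows int;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRAN;

    -- Archive and delete one small batch; concurrent UPDATEs can
    -- get in between batches because each transaction is short.
    DELETE TOP (1000) FROM Tran_table1
    OUTPUT deleted.* INTO Tran_table1_archive
    WHERE DateCreated < DATEADD(year, -2, GETDATE());

    SET @Rows = @@ROWCOUNT;
    COMMIT TRAN;
END;

The batch size is a tuning knob: below roughly 5,000 rows per statement, SQL Server generally keeps row/page locks rather than escalating to a table lock.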
Hi, in the Web application I am working on, data is read from a SQL Server database. At any time, there are about 15 people browsing the web. The SQL Server database is updated with new information once every hour. The update takes a couple of minutes. The isolation level during the update is set to Serializable so that the front end does not get any incorrect data.

Now, here is my problem. When the web page is being loaded, the server-side ASP.NET code uses several SELECT statements in multiple places. For various design reasons, these SELECT statements cannot be combined into a single statement. As a result, it may happen that during the page load we get some data from before an update and some data from after it. I am wondering if I must use a transactional lock even for the Web application, although technically it is not updating the database.

Also, after playing with various transactional settings, I noticed the following behavior for the readers when a writer enters a transaction:
1. If the reader app has not yet executed the query, the call to query execution blocks until the writer has done its job.
2. If the reader app has already begun executing the query, the call is not blocked and SQL Server provides the needed isolation.

I do not wish to block the readers while the update is going on. Ideally, I would like it to be such that even while the writer is updating, the readers continue to get the old data until the writer commits. However, I did not find any isolation setting that would let me achieve this non-blocking behavior. Am I missing something? Thank you in advance for enlightening me. Pradeep
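If the server is SQL Server 2005 or later, row versioning provides exactly the behavior described: readers keep seeing the last committed data while the writer's transaction is open, with no blocking in either direction. A sketch (the database name is a placeholder; switching READ_COMMITTED_SNAPSHOT on needs a moment with no other active connections):

ALTER DATABASE MyWebDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyWebDb SET READ_COMMITTED_SNAPSHOT ON;

With READ_COMMITTED_SNAPSHOT on, ordinary READ COMMITTED SELECTs read the last committed row versions instead of waiting on the writer's locks. On SQL Server 2000 there is no equivalent non-blocking isolation level.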
I have a table that stores all the processed data from other tables. How can I replace the same data in this table when I do "reprocessing"? It's kind of a combined delete-then-insert operation. I cannot simply insert, as that would create duplicates. Any ideas?
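A delete followed by a re-insert inside one transaction is a reasonable way to do it, keyed on whatever identifies the batch being reprocessed. A minimal sketch; all table and column names are placeholders:

DECLARE @BatchID int;
SET @BatchID = 42;   -- placeholder: whatever identifies the reprocessed batch

BEGIN TRAN;

-- Remove the rows produced by the earlier run for this batch
DELETE FROM ProcessedData
WHERE  BatchID = @BatchID;

-- Re-insert the freshly reprocessed rows
INSERT INTO ProcessedData (BatchID, Col1, Col2)
SELECT @BatchID, Col1, Col2
FROM   SourceData
WHERE  BatchID = @BatchID;

COMMIT TRAN;

Wrapping both statements in a single transaction means readers never see the table with the batch missing.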
Dear All, I am using MS SQL Server Express. I have installed all the tools available from the Express Edition site and created my database on it. I have imported a table from my MS Access database (using an ODBC data source). This table contains 10,000 records. Now I want to append one more Access table (5,500 records), which has the same fields, to the existing table. How do I do this? Can anybody tell me? Thanks and regards, mukesh
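If the second Access table is imported into its own SQL table first (the import wizard can land it as, say, dbo.NewRecords, a placeholder name), the append is then a plain INSERT ... SELECT:

INSERT INTO dbo.ExistingTable
SELECT * FROM dbo.NewRecords;

SELECT * only works because both tables have the same fields in the same order; listing the columns explicitly on both sides is safer.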
I have a primary and a secondary server, both running Windows 2000 SP3 with SQL 2000 SP3. I have set up transactional replication with the primary server as publisher; the secondary server has the distributor and the subscriber DB. I am testing the scenario where my primary server goes down and I have to make updates to the secondary server until the primary comes back up. I am able to update my subscriber database, and the transactions go into the MSreplication_queue table to be pushed back to the primary when it comes back up. When I bring the primary server back up and start the queue agent job, it starts pushing the transactions over and then stops after 4 or 5 transactions with the error "Failed while applying queued message to publisher". I have attached part of the log file for the agent below.
In the SQL Server logs I am getting this message: Replication-Replication Transaction Queue Reader Subsystem: agent Repl Queue Reader failed. Failed while applying queued message to publisher. Error: 14151, Severity: 18, State: 1
I am trying to make a DDL change to an existing column for a transactionally replicated table. I am attempting to effect the ALTER on the Publisher. Below is the statement I am using:
[Begin Code Snippet]
ALTER TABLE Products ALTER COLUMN LegacyProductID varchar(20) NULL
[End Code Snippet]
This is the error message:
Msg 4928, Level 16, State 1, Line 3 Cannot alter column 'LegacyProductID' because it is 'REPLICATED'.
Column properties currently are:
INT, NULL, Not For Replication = False, Primary Key = False
I read this article http://www.replicationanswers.com/AlterSchema2005.asp but still cannot either get the statement to work or identify a reason it isn't working.
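One workaround that is often suggested (an assumption here, not something the post confirms trying) is to drop the article from the publication, run the ALTER, and re-add the article, letting the next snapshot pick it up; subscriptions on that article may need to be dropped first with sp_dropsubscription. The publication name below is a placeholder:

EXEC sp_droparticle
    @publication = N'MyPublication',
    @article = N'Products',
    @force_invalidate_snapshot = 1;

ALTER TABLE Products ALTER COLUMN LegacyProductID varchar(20) NULL;

EXEC sp_addarticle
    @publication = N'MyPublication',
    @article = N'Products',
    @source_object = N'Products';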
I am using a DTS package to output a view to a predetermined Excel file. Currently it just adds the output to the bottom of the current table in Excel, but I would like it to delete the contents of the worksheet before adding the new rows.
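The Jet/Excel driver does not support DELETE against a worksheet, so a common workaround (an assumption for this package, not something the post confirms) is an Execute SQL task on the Excel connection that drops and recreates the sheet's table before the transfer. Sheet and column names below are placeholders:

DROP TABLE [Sheet1$];
CREATE TABLE [Sheet1] (ID int, Name nvarchar(50), Amount float);

DROP TABLE clears the data while leaving the workbook in place, and CREATE TABLE rebuilds the header row so the subsequent append starts from an empty sheet.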
I need to add an existing shared folder to a SQL FileTable. So this is the path; I created a SQL FileTable, and now I need to add the folder to it.
I have an existing publication in SQL 2012 with 2 articles, and then I add 2 more articles. After that, when I generate a snapshot, will the snapshot be generated for the 2 new articles only, or for all 4 articles?
I remember adding 1 new article to an existing publication with 150 articles, and when I generated the snapshot it was generated only for the 1 article. But I don't remember clearly.
Does it behave differently for small and large numbers of articles?
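The deciding factor is the publication's immediate_sync property rather than the article count: with immediate_sync (and allow_anonymous) set to false, a new snapshot covers only the newly added articles; with it set to true, the snapshot is regenerated for all articles. A sketch of turning it off before adding articles (the publication name is a placeholder, and allow_anonymous has to be changed first):

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'allow_anonymous',
    @value = N'false';

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'immediate_sync',
    @value = N'false';

The behavior you remember (a snapshot for just the 1 new article out of 150) matches immediate_sync = false.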
I am trying my first bulk load into an existing SQL table from a CSV text file. The text file's naming is exactly the same as the SQL table's, with the same attributes.
The statement:
BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer] FROM 'c:Baanjedox_dailyjdcom4401.txt' WITH
[code]....
The error message is:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 3 (BP_Country).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
I have checked and re-checked the BP_Country field (the 1st field after the key) and I am not seeing any mismatches.
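For comparison, a CSV bulk load with a header row usually needs the terminators and first data row spelled out; when a conversion error like this persists, loading into a staging table declared with all varchar columns is a quick way to see the offending value. The WITH options below are illustrative, not the ones elided from the post, and the path is a placeholder:

BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM 'C:\data\jdcom4401.txt'   -- placeholder path
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2        -- skip the header row
);

A common cause of "row 2, column 3" mismatches is an off-by-one in the column mapping (for example, an identity or key column in the table that has no counterpart in the file), which a format file can correct.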
Hi all, I have a problem and need some ideas. What I have done: I created a page to upload an Excel file into a SQL Server table, along with some customer info (from the login, day, etc.). This Excel file contains several rows (some of them may be blank) and columns (also some may be blank). The file is stored in an image object. The file will be checked (they want to do it manually, because content is a problem). If they say it is OK, I want to run a program to add a record into an existing table with the request no. (from the first table, where the object is stored) and all the information available from the filled rows (first row is header). I have a column which can be checked to see whether the row contains data or not. Any ideas? I know how to read from and write the contents of the object to a field in the SQL table. Can I use this? Thanks for any idea / code / link.
Is it possible to do an update on multiple records? I have fields CusOrderID and CusCreditAmt in table CusOrder, and fields CusOrderID and CreditAmt in table CusCredits. I would like to update the value of the CusCreditAmt field in CusOrder to equal the CreditAmt field in CusCredits where CusOrderID is the same in both tables.
I tried doing this in a stored procedure based on a View with an inner join between the 2 tables as follows:
Update View1 Set CusCreditAmt = CreditAmt
I received an error that the subquery returned more than one value. Is there any way to do something like this and make it work?
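SQL Server's UPDATE ... FROM syntax does this directly against the two tables, using only the columns named above:

UPDATE co
SET    co.CusCreditAmt = cc.CreditAmt
FROM   CusOrder co
JOIN   CusCredits cc ON cc.CusOrderID = co.CusOrderID;

One caveat: if CusCredits can hold several rows for the same CusOrderID, which the "more than one value" error hints at, an arbitrary one wins; in that case aggregate first (for example, join to a derived table doing SUM(CreditAmt) GROUP BY CusOrderID).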
I am trying to update a small subset of records of a given table (TRValue) using the records contained in ParcelTemp. The difficult part is getting the summation from a child file, TRGreen, for those same parcels contained in ParcelTemp. Instead of updating just a few records, all the records in TRValue are being updated, with the wrong values of course!
Basically, update the records in TRValue where:
Year = P.Year AND Code = 'LG01' AND Parcel = P.Parcel
with the summation of the child records, where the child records needed satisfy:
Year = P.Year AND Parcel = P.Parcel
Code:
UPDATE TRValue
SET    Acres = SumAcres,
       CurrentMarket = SumMarket,
       CurrentTaxable = SumTaxable,
       CurrentTaxAmt = ((SumTaxable * D.CertifiedRate) + 0.50)
FROM   ParcelTemp P
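The query never joins TRValue itself to ParcelTemp, so nothing restricts which rows get updated, which is why every row changes. A sketch of a likely fix, under assumptions: TRGreen's detail columns are named Acres, Market, and Taxable, and CertifiedRate comes from whatever table the alias D referred to (shown as a placeholder join to District):

UPDATE T
SET    Acres          = G.SumAcres,
       CurrentMarket  = G.SumMarket,
       CurrentTaxable = G.SumTaxable,
       CurrentTaxAmt  = (G.SumTaxable * D.CertifiedRate) + 0.50
FROM   TRValue T
JOIN   ParcelTemp P ON P.Year = T.Year AND P.Parcel = T.Parcel
JOIN   (SELECT Year, Parcel,
               SUM(Acres)   AS SumAcres,
               SUM(Market)  AS SumMarket,
               SUM(Taxable) AS SumTaxable
        FROM   TRGreen
        GROUP  BY Year, Parcel) G
       ON G.Year = P.Year AND G.Parcel = P.Parcel
JOIN   District D ON D.Year = T.Year   -- placeholder: the post does not show where D comes from
WHERE  T.Code = 'LG01';

Pre-aggregating TRGreen in a derived table also avoids the double-counting that can occur when detail rows are joined straight into an UPDATE.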
I am having trouble getting an Update call to actually update records. The select statement is a stored procedure which uses inner joins to link to property tables. The update, insert, and delete commands were generated by Visual Studio and only affect the primary table. So, to provide a simple example, I have a customer table with UID, Name, and LanguageID, and a separate table with LanguageID and LanguageDescription. The stored procedure can be used to populate a datagrid with all results (this works). The stored procedure also populates an edit page with one UID (this works). After the edit is completed, I attempt to update the dataset, which only has one row at this time, and that row shows it has been modified. The Update modifies 0 rows and raises no exceptions. Is this because the update, insert, and delete statements do not match up one-to-one with the dataset? If so, what are my choices?
Hi All, I have a table with a column DeletedDate which stores a logical delete of a record. I need to set up transactional replication for reporting purposes such that these deleted records are not replicated to the subscriber. That is, if I see a value in DeletedDate, I don't want that record to be picked up for replication.
At the same time, when someone marks a record for deletion (by putting a date in the DeletedDate column), I want that record to be deleted on the subscriber database. (I could also set up a job to do the deletes on the subscriber, but I'd rather let replication take care of it.)
Can this scenario be implemented in Microsoft SQL 2000? I would appreciate any ideas / thoughts on this matter. Thanks in advance, Aravin Rajendra.
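A static row filter on the article gives exactly this behavior in SQL 2000 transactional replication: rows matching the filter replicate, and when an UPDATE makes a row stop qualifying (DeletedDate gets a value), the subscriber receives a DELETE for it. A sketch using the standard filter procedures; publication, article, and object names are placeholders:

EXEC sp_articlefilter
    @publication   = N'MyPublication',
    @article       = N'MyTable',
    @filter_name   = N'flt_MyTable_NotDeleted',
    @filter_clause = N'DeletedDate IS NULL';

EXEC sp_articleview
    @publication   = N'MyPublication',
    @article       = N'MyTable',
    @view_name     = N'vw_MyTable_NotDeleted',
    @filter_clause = N'DeletedDate IS NULL';

A new snapshot is needed after changing the filter so existing subscribers realign.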
I have triggers in place on a table that do various checks on data input. It is clear that because of these triggers I cannot do updates on multiple records in this table. When I do, I receive the error "subquery returned more than one value." Is there any way to work around this, by temporarily turning off the triggers or by something else?
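Triggers can be disabled around a bulk operation, though the cleaner long-term fix is rewriting them to handle multi-row inserted/deleted sets rather than assuming a single row. A sketch with placeholder names; note that whatever the triggers validate goes unchecked while they are off:

-- Turn the trigger off for the duration of the bulk update
ALTER TABLE MyTable DISABLE TRIGGER trg_MyTable_Check;

UPDATE MyTable SET SomeCol = 1 WHERE OtherCol = 2;   -- the multi-record update

-- Turn it back on
ALTER TABLE MyTable ENABLE TRIGGER trg_MyTable_Check;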
Hi... I have data that I am getting through a DBF file, and I am dumping that data to a SQL server. Then, after scrubbing the data from the SQL server, I put it into the production database. Right now my stored procedure handles a single plan only, but now there may be two or more plans together in the same SQL Server database, which I need to scrub and then update (if that particular plan already exists) or insert (if it doesn't).
This is my sproc:

ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS
SET NOCOUNT ON

DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int

SET @Sproc = OBJECT_NAME(@@ProcId)

SET @Status = 'Updating plan information in Plan table.'
UPDATE Statements..Plan
SET    PlanName = PlanName1,
       Description = PlanName2
FROM   Statements..Plan cp
JOIN   ( SELECT DISTINCT PlanId, PlanName1, PlanName2 FROM Census ) c
       ON cp.CPlanId = c.PlanId
WHERE  cp.ClientId = @ClientId
  AND  ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'')
     OR  IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'
INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM   Census
WHERE  PlanId NOT IN ( SELECT DISTINCT CPlanId
                       FROM Statements..Plan
                       WHERE ClientId = @ClientId
                         AND ClientPlanId IS NOT NULL )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added to Plan.'
END

SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure?
I have a table that is growing quite large each day. By now, I have on average 300 million records over 2.5 years. Before we received our new interface, the data we received was aggregated and thus not that big. The problem is that the table is so huge that I cannot use the Slowly Changing Dimension component. I was thinking about making a temp table where I load the incremental data before I load it into the final data mart table. Based on this temporary table, I would use a script to compare the temp data with the already existing data in the data mart. However, this requires a comparison of every record (300 million records).
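One set-based alternative to a row-by-row script is letting the database compute the delta with EXCEPT, assuming both tables share the same column list and a business key column BK (a placeholder):

-- Rows in the increment that are new or differ from the mart
SELECT s.*
INTO   #delta
FROM   (SELECT * FROM dbo.StagingTable
        EXCEPT
        SELECT * FROM dbo.MartTable) AS s;

-- Existing keys are updates
UPDATE m
SET    m.SomeMeasure = d.SomeMeasure   -- repeat for the remaining columns
FROM   dbo.MartTable m
JOIN   #delta d ON d.BK = m.BK;

-- New keys are inserts
INSERT INTO dbo.MartTable
SELECT d.* FROM #delta d
WHERE  NOT EXISTS (SELECT 1 FROM dbo.MartTable m WHERE m.BK = d.BK);

EXCEPT pushes the comparison into one hash-based operation in the engine instead of a per-record loop; an index on BK in both tables matters a great deal here.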
I have a few tables. I want to identify the RECORDS of a table that were created or modified on a particular date and time. I don't want to write a trigger to capture the add/update events.
Is there any system table, queryable from a stored procedure, that tracks for each individual record the date and time it was last updated or created?
Note: the application was already built without a LastModified date on each table, so we don't want to modify the application or the DB.
I'm using SQL Server 2000 Enterprise Edition. I am an Oracle DBA, and we have some tables in SQL Server 2000 that we need to write out to a flat file. I have a procedure in Oracle to do this for Oracle tables, but how would I do this in SQL Server 2000? I have 10 columns in this table and I only want 3 columns' data to be dumped to the flat file. We are on NT Server 4.0.
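The standard tool for this in SQL Server 2000 is bcp with a queryout query, which lets you pick just the 3 columns; all names below are placeholders. It can run from a command prompt directly, or from T-SQL via xp_cmdshell:

-- Strip the EXEC wrapper to run the inner command from a command prompt
EXEC master..xp_cmdshell
    'bcp "SELECT col1, col2, col3 FROM MyDb.dbo.MyTable" queryout "C:\export\mytable.txt" -c -t, -S MYSERVER -T';

Here -c writes character (text) format, -t, makes the file comma-delimited, -S names the server, and -T uses a trusted connection.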
What I would LIKE to do is noted in the subject line. What I'm finding is that "edit SQL" appears to only be an option if I am creating a table. If I select "append to", the option to edit SQL shades itself as unavailable.
The reason I'd like this is that there is a datum in the flat file that indicates whether that record should be appended to the table noted above. There are other ways of dealing with this "problem", but it would be nice to be able to control it using SQL in the DTS import/export wizard. If the source of my data is an SQL table, I can generate an SQL query to specify what fields to import in an append, to check for existing values, etc.
Is there a way around this? I can reserve a table for data transfers, regularly overwrite it with new data from text file inputs, and use SQL to insert selected fields from that transfer table into other database tables. (From this "transfer" table, data needs to be inserted into four separate tables in our database.) I hope this is clear. If it CAN'T be done this way, it's okay... just a little ugly, with the need to re-create the transfer table.
I am in trouble without having done anything wrong. I restarted my machine while, in parallel, SQL Server was restarting; i.e., I had opened Management Studio.
When it booted up the next time, I saw that my database was gone. It still shows up in the server, but there is no node below it. I then checked the MDF and LDF files and found the MDF file to be 0 KB while the LDF file is 1.73 GB. My database was in full recovery mode, but I did not take any database backup.
Here is the entry from my SQL Server log:
The operating system returned error 38 (Reached the end of the file.) to SQL Server during a read at offset 0000000000000000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\Hello.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
I wonder if there is now any way to recover my database. Can I restore/recover the database from the .ldf file, since it was in the full recovery model? Any help is highly appreciated.
I have a transactional replication environment that creates subscribers on another server as a staging area for an ETL process to a data warehouse application on a 3rd server, which is the report repository. Currently the ETL process runs every 10 minutes, performs its function across approx. 150+ subscriber databases, and consolidates the data into the data warehouse.
I have an SLA of 2 minutes. I'd like to rework the ETL process (which runs as an SSIS job at the moment) to be specific to a single database and fire that one ETL process only when changes have been applied to that subscriber database. Of these 150+ databases, generally only about 8-10 are updating the subscriber at any given time, per Replication Monitor. I'm thinking that if I only have a few transactions to apply to a single DB, the ETL would run in seconds, dynamically, as the subscriber is updated.
The issue is how to fire the ETL process upon completion of updates to the subscriber DB. I'm thinking of using sp_start_job, passing the DBID, to update the warehouse, but I'm unsure whether this is possible, and if so, where to trigger it.
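sp_start_job is callable from a trigger, so one sketch is an AFTER trigger on a replicated table in each subscriber database that starts that database's ETL job. The per-database job naming is an assumption, and the trigger must not be created with NOT FOR REPLICATION or the distribution agent's changes won't fire it:

CREATE TRIGGER trg_FireEtl ON dbo.MyReplicatedTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One ETL job per subscriber database, named by convention (assumption)
    DECLARE @job sysname;
    SET @job = N'ETL_' + DB_NAME();

    -- sp_start_job returns as soon as the job starts, so the replicated
    -- transaction is not held up while the ETL runs.
    EXEC msdb.dbo.sp_start_job @job_name = @job;
END;

Because sp_start_job raises an error if the job is already running, wrapping the EXEC in TRY/CATCH is advisable so that error does not fail the distribution agent's transaction.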
I am writing a program in VC++ using SQL-DMO calls. My problem is that when I transfer (import) a comma-separated text file into SQL Server through the SQL-DMO ImportData method (a method of the BulkCopy object), it is not able to convert the date field in the text file to the corresponding datetime value in SQL Server, whereas the other data types work perfectly.
This is the record I need to convert:
90,MichaelB,Wintriss,Inspection,Paper,11,Job101,1, {ts '2000-12-10 15:54:56.000'},D:public233 and 247233.mcs,
There are about 1000 records like this. The text file was generated by SQL Server when I exported data from SQL Server tables; this file has a lot of records. Now I need to put the records from the text file back into SQL Server tables, and when I pass the text file it has a problem converting the date and time value. I cannot remove the braces and the ts, as in {ts '2000-12-10 15:54:56.000'}; that is how SQL Server generated it,
and this is the date field: {ts '2000-12-10 15:54:56.000'}
Whereas if I export a table from SQL Server in binary mode and then import the file back, it works; when I do it as text, it gives the above error.
Please help me with this; I would be very thankful.
I have one table of new records (tableA) that may already exist in tableB. I want to insert these records into tableB if they don't already exist, or update any existing ones with new data if they do. A column (Action) in tableA already tells me whether this is an INSERT, UPDATE, or DELETE. I'm able to derive that I can do the insert with
insert into tableB select * from tableA where Action = 'INSERT'
...and I think I can handle the delete. But I'm stuck on the update. How do I do the update? An ordinary UPDATE statement just won't do unless I use a cursor to cycle through the recordset. I want to avoid a cursor.
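The cursor can be avoided with SQL Server's UPDATE ... FROM join syntax; the key column ID below is a placeholder, and the column list would be expanded to the real ones:

-- Apply all the updates in one set-based statement
UPDATE b
SET    b.Col1 = a.Col1,
       b.Col2 = a.Col2          -- repeat for the remaining columns
FROM   tableB b
JOIN   tableA a ON a.ID = b.ID
WHERE  a.Action = 'UPDATE';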
I have the following problem: I'm importing records from an Access table into a SQL Server table. I'm using a Lookup to determine if a record already exists in the SQL Server table, in which case I should update the record if it was modified.
I thought of using an OLE DB Destination for new records (done, works fine) and an OLE DB Command transformation to update the modified records. It all works, but the thing that bothers me is that my table has approx. 40 columns, so my OLE DB Command is very long (UPDATE table SET col1 = ?, col2 = ?, col3 = ? ...). The other problem is that I'm always updating all the columns, even if only one column was modified.
Is there a better way to update the existing records?
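One pattern that avoids the per-row OLE DB Command entirely: send the modified rows to a staging table with a fast OLE DB Destination, then run a single set-based UPDATE in an Execute SQL task. A sketch with placeholder names:

-- Executed once per package run, after the data flow fills the staging table
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2          -- repeat for the remaining columns
FROM   dbo.TargetTable t
JOIN   dbo.StagingTable s ON s.ID = t.ID;

TRUNCATE TABLE dbo.StagingTable;

This replaces the 40 parameter markers with one statement, but updating only the columns that actually changed still requires comparing old and new values (or a checksum column) if partial updates matter.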
I have an SSIS package that dumps data from an internal table to a flat file output using standard data flow tasks. The entire table is output; there is no special SQL. Most of the time the records are placed in the output file in the same order as in the internal DB table, but occasionally the order appears to be more random. When that happens, the record order in the internal table is correct; it is just the output that differs.
I can find no properties that seem to affect this. I would appreciate any hints or advice. Has anyone else encountered this problem?
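Row order from a plain table read is never guaranteed; the engine is free to return rows in allocation order, and parallel scans make that visibly nondeterministic. If the file must be ordered, the reliable fix is to set the OLE DB Source to a SQL command with an explicit ORDER BY (the key column is a placeholder):

SELECT *
FROM   dbo.MyTable
ORDER  BY MyKeyColumn;

The data flow preserves the order it receives along a simple path to the flat file destination, so ordering at the source is enough here.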