I'm working on a project that is being built piece by piece. The first part
is in place. I will occasionally need to either change things within a table
(only adding fields) or add stored procedures etc.
What is the best way of making these changes to the production database
without interfering with the existing data? The users are remote and I won't
necessarily have direct access to the server. I'll have to walk the primary
user through the process.
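A common way to handle this (a sketch only; the table, column and procedure names below are made up for illustration) is to send the primary user a single self-contained change script that is safe to re-run, so it only alters the schema and never touches existing rows:

-- Hypothetical change script: safe to run more than once.
-- Adds a nullable column and (re)creates a stored procedure without touching existing data.
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'MiddleName')
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName varchar(50) NULL  -- new columns should be NULL-able or have a DEFAULT
END
GO

IF OBJECT_ID('dbo.usp_GetCustomer') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer
GO

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, FirstName, MiddleName, LastName
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId
GO

Because the new column is nullable, existing rows are unaffected, and the script can be run again without error if the user gets interrupted partway through.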
Hi... I have data that I am getting from a DBF file, and I am dumping that data into a SQL Server staging database. After scrubbing it there, I put it into the production database. Right now my stored procedure handles a single plan only, but there may now be two or more plans together in the same SQL Server database, which I need to scrub and then update if that particular plan already exists, or insert if it doesn't.
This is my sproc:

ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS
SET NOCOUNT ON
DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int
SET @Sproc = OBJECT_NAME(@@ProcId)
SET @Status = 'Updating plan information in Plan table.'

UPDATE Statements..Plan
SET PlanName = PlanName1,
    Description = PlanName2
FROM Statements..Plan cp
JOIN ( SELECT DISTINCT PlanId, PlanName1, PlanName2 FROM Census ) c
    ON cp.CPlanId = c.PlanId
WHERE cp.ClientId = @ClientId
  AND ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'')
     OR IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'

INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM Census
WHERE PlanId NOT IN ( SELECT DISTINCT CPlanId
                      FROM Statements..Plan
                      WHERE ClientId = @ClientId AND ClientPlanId IS NOT NULL )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added to Plan.'
END
SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure?
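One way to reuse the procedure as-is is to drive it from a wrapper that walks the plans sitting in the staging database and calls usp_Import_Plan once per combination. This is only a sketch: it assumes a hypothetical staging table (ImportQueue) that maps each pending plan load to its ClientId, UserId and HistoryId, so adjust the cursor's SELECT to however the plans are actually identified:

-- Hypothetical wrapper: calls the existing single-plan procedure once per pending plan.
DECLARE @ClientId int, @UserId int, @HistoryId int

DECLARE plan_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT ClientId, UserId, HistoryId
    FROM ImportQueue            -- assumed staging table; not part of the original schema
    WHERE Processed = 0

OPEN plan_cursor
FETCH NEXT FROM plan_cursor INTO @ClientId, @UserId, @HistoryId

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.usp_Import_Plan
        @ClientId   = @ClientId,
        @UserId     = @UserId,
        @HistoryId  = @HistoryId,
        @ShowStatus = 0

    FETCH NEXT FROM plan_cursor INTO @ClientId, @UserId, @HistoryId
END

CLOSE plan_cursor
DEALLOCATE plan_cursor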
I have many SQL 2000 DTS packages that I support from my development workstation running v2000 SP4. Packages are altered on the development machine and then go through the normal release mechanism to production via testing servers, etc.
I have recently installed the client tools for SQL Server 2005 on my desktop to evaluate the product. The 2005 DB instance is running on a separate server.
So, I have dev edition of sql 2000 and 2005 client tools (including BI Dev studio etc) on my workstation.
I have recently had to make changes to a 2000 DTS package and used my 2000 Enterprise Manager to do so. No problem: it saved and tested fine on my workstation.
But when I try to release it to another server, or open the package using Enterprise Manager from another machine that does not have SQL 2005 installed, I get an error message 'Unspecified error'. I've seen this before when trying to open packages created in v2000 using v7, or where the service packs are different between machines.
Digging around my workstation and comparing some of the DLLs I know to be required to distribute DTS packages (from RDIST.txt) it seems that some of the SQL 2000 dll files have been updated by my 2005 installation.
E.g. DTSFFILE.DLL on my machine is version 2000.85.1054.0, whilst on any 'clean' 2000 machine it is at version 2000.80.760.0.
Surely it can't be right that SQL 2005 ships newer versions of components for SQL 2000 than are available with the latest SP for the actual product! Especially considering that the installation of 2005 does not even allow you to edit 2000 DTS packages through Management Studio without a 'special' download of the Feature Pack (which, by the way, does not work very well either).
So am I to conclude that you cannot run side-by-side installations of SQL 2000 and 2005 on a single machine and expect 2000 to run as it did previously?!
Hi, I have the following problem: within a VBScript, I use a component (written in C++, I think with ADO) for sending INSERT and UPDATE statements to a SQL Server 2000 database. If I insert 100-120 records in a loop, all works fine. If I insert 1000 records, approximately 150 records are inserted very quickly, then the program pauses for approx. 8-15 minutes, then it proceeds with the next 150 records, pauses for another 8-15 minutes, and so on.
If I use a SQL Server 2005 database instead, all works fine. The same happens with another customer and another program, written in Visual Basic 6.0 with ADO: access to SQL 2000 pauses, while with SQL 2005 everything works fine. It seems to me that this is a problem with some buffer, timeout, or the like. Does anyone have an idea which screw I could turn?
I have a few tables. I want to identify the records in a table that were created or modified on a particular date and time. I don't want to write a trigger to capture the add/update events.
Is there any system table, queryable from a stored procedure, that tracks the date and time at which each individual record was last updated or newly created?
Note: the application was already built without a LastModified date on each table, so we don't want to modify the application or the DB.
I have a backup application that uses SQL as the backend (both 2000 and 2005). The backup application performs a specific function that enters data into the database but does not have any reporting. I am looking for a way to query the DB directly to see if there is any info I can grab, but the problem is I don't know where it's stored. So my questions are:
Is there any way to tell which tables get updated by a certain process? For example, if I run this one action, how could I then tell which tables were affected, or even what data was changed? I tried looking for a logging function that would list this but did not find it. I also tried looking for some type of real-time monitor. I even tried looking for a way to search for records/tables that had been recently updated.
I am new to SQL so I'm not sure I am using the correct terms, but any help would be appreciated. Also, this is SQL 2000 and a test server.
I need to update information for a user and, if the user is classified as a primary (@blnPrimary), then I need to update information for all users within his agency (AgencyUniqueId). The issue is that the second UPDATE to "cdds_User_Profile" always returns a rowcount of 0 (should be 1) even though the values for "@Original_AgencyUniqueId" and "@Original_UserId" are correct. This is just a snippet of the whole procedure. I'm trying to implement similar logic in other parts of the procedure and I'm observing the same behavior there as well. Any help anyone can provide is greatly appreciated.

/*** Update User Profile ***/
UPDATE [cdds_User_Profile]
SET [FirstName] = @FirstName,
    [LastName] = @LastName,
    [Title] = @Title,
    [Phone] = @Phone,
    [AcctType] = @AcctType,
    [AcctStatus] = @AcctStatus,
    [LastUpdatedDate] = GETDATE()
WHERE ( [FirstName] = @Original_FirstName
    AND [LastName] = @Original_LastName
    AND [Title] = @Original_Title
    AND [Phone] = @Original_Phone
    AND [AcctType] = @Original_AcctType
    AND [AcctStatus] = @Original_AcctStatus
    AND [AgencyUniqueId] = @Original_AgencyUniqueId
    AND [UserId] = @Original_UserId );

IF @@ROWCOUNT = 0
BEGIN
    SET @err_message = 'Data has been edited by another user since you began viewing this information.'
    RAISERROR (@err_message, 11, 1)
    ROLLBACK TRANSACTION
    RETURN
END

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

IF @blnPrimary = 1
BEGIN
    IF LOWER(@AcctStatus) <> LOWER(@AgencyAcctStatus)
    /*** Update Users Acct. Status ***/
    /* update all users in same agency profile */
    UPDATE [cdds_User_Profile]
    SET [AcctStatus] = @AcctStatus,
        [LastUpdatedDate] = GETDATE()
    WHERE ( [AgencyUniqueId] = @Original_AgencyUniqueId
        AND [UserId] = @Original_UserId );

    IF @@ROWCOUNT = 0
    BEGIN
        SET @err_message = 'Data for this agency has been edited by another user since you began viewing this information.'
        RAISERROR (@err_message, 11, 1)
        ROLLBACK TRANSACTION
        RETURN
    END

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END
END
I am trying to do selective updates for rows where a column matches a column in another table. I want to do something like this, only 'this' does not work, and nothing else I could think of (I tried joins also) worked. What am I missing? I hope this explanation makes sense.
UPDATE queryresultsmodel SET queryresultsmodel.tableforcedoutdate = getdate() Where Exists (Select tablename from queryresultsmodel q inner join orphanul o on q.tablename = o.name)
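As written, the subquery in the EXISTS never references the outer row being updated, so it is either true for every row or true for none. A sketch of a correlated version, keeping the table and column names from the question:

-- Correlate the subquery with the row being updated (sketch, not tested against the real schema).
UPDATE queryresultsmodel
SET tableforcedoutdate = GETDATE()
WHERE EXISTS ( SELECT 1
               FROM orphanul o
               WHERE o.name = queryresultsmodel.tablename )

-- Equivalent join form:
UPDATE q
SET q.tableforcedoutdate = GETDATE()
FROM queryresultsmodel q
INNER JOIN orphanul o ON q.tablename = o.name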
Has anyone had any problems with single-row updates on a table where you have defined horizontal and vertical partitioning of the data to be replicated? When I execute an UPDATE statement that modifies just one row, the log reader misses the modification and it does not get replicated to the other databases.
If I run the same UPDATE statement against several rows, all the modifications are read by the log reader and the replication task works fine.
Hi, I have a table with 20,000,000 tuples. I have been monitoring the performance of inserts and updates, and it does not convince me at all. The table has 30 columns, and 12 of them are calculated columns. The tests I ran were:
- one insertion with all the columns, computing the calculated columns in the INSERT statement itself;
- one insertion with all the calculated columns computed beforehand in @vars;
- one insertion with just the basic fields, followed by 10 updates.
The result was that the last test was the most performant. What is your opinion?
Hi. We're using SQL Server 2005 Express Edition for maintaining a relational DB for our soon-to-be released .NET app. The problem is that we expect the definitions of the tables in the DB to change occasionally over time as we make updates to the software. Thus, during the installation of a software upgrade, we expect to run a SQL script that grabs the data from a table in its "old" format, restructures the table, and then deposits the data back into the updated table. This seems to require some sort of version stamp on the table definition.
My main question is: What is the conventional way for handling versioning of table definitions?
Another question is: Is there a preferred procedure for handling the updating of the data during the installation of a software update?
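One common convention (a sketch, not the only way) is to keep a one-row version table in the database; each upgrade script checks the current number, migrates the data, and bumps it. The table, column, and migration names below are hypothetical:

-- Hypothetical schema-version bookkeeping.
CREATE TABLE dbo.SchemaVersion ( VersionNumber int NOT NULL );
INSERT INTO dbo.SchemaVersion ( VersionNumber ) VALUES ( 1 );

-- Upgrade script from version 1 to version 2 (run during the software update):
IF ( SELECT VersionNumber FROM dbo.SchemaVersion ) = 1
BEGIN
    -- Example migration: restructure a table while preserving its data.
    SELECT * INTO dbo.Customer_old FROM dbo.Customer;
    DROP TABLE dbo.Customer;

    CREATE TABLE dbo.Customer
    (
        CustomerId int NOT NULL PRIMARY KEY,
        FullName   nvarchar(100) NOT NULL
    );

    INSERT INTO dbo.Customer ( CustomerId, FullName )
    SELECT CustomerId, FirstName + ' ' + LastName
    FROM dbo.Customer_old;

    DROP TABLE dbo.Customer_old;

    UPDATE dbo.SchemaVersion SET VersionNumber = 2;
END

During a software update the installer then just runs the upgrade scripts in order; a script whose starting version does not match is skipped, which also covers how the data is carried across the restructure.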
I have two servers, S1 and S2. Immediately after new data becomes available on S1, I want to perform some actions on S2.
I can use a trigger on S1, but if S2 is down the transaction on S1 will be lost. I could use database replication, but I only need one single table in S1 to report its changes to S2.
Hey, I have a couple of triggers on one of the tables. They fail if I update multiple records at the same time from the stored procedure.
UPDATE table1 SET col1 = 0 WHERE col2 between 10 and 20
Error I am getting is :
Server: Msg 512, Level 16, State 1, Procedure t_thickupdate, Line 12 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated.
What is the best possible way to make it work? Thank you.
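Msg 512 usually means the trigger body compares a column to (or assigns a variable from) a subquery against the inserted or deleted table, which works only when the statement touches a single row; a multi-row UPDATE makes the subquery return more than one value. The usual fix is to make the trigger set-based by joining to inserted instead. A minimal sketch with hypothetical table, key, and column names (the real t_thickupdate trigger is not shown in the question):

-- Set-based trigger sketch: handles one row or many rows per statement.
CREATE TRIGGER t_thickupdate_setbased ON table1
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON

    -- Join the related table to "inserted" on the key instead of using a
    -- scalar subquery such as  SET @x = (SELECT col1 FROM inserted).
    UPDATE t
    SET t.thickness = i.col1            -- hypothetical dependent column
    FROM table2 t                       -- hypothetical related table
    JOIN inserted i ON i.keycol = t.keycol
END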
I am looking for pros and cons for the following scenarios:
When a table has a unique key constraint, is it viable to always do an insert, immediately check the @@ERROR value, and, if @@ERROR indicates a duplicate key exception, perform an update statement instead?
Another possible solution would be to always check if the key exists and then do the insert / update based upon that result. This method will always require two steps.
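For comparison, here are minimal sketches of both patterns against a hypothetical table keyed on Id (error 2627 is the duplicate-key violation raised by a unique constraint):

-- Pattern 1: try the INSERT first, fall back to UPDATE on a duplicate-key error.
INSERT INTO dbo.Widget ( Id, Name ) VALUES ( @Id, @Name )
IF @@ERROR = 2627   -- violation of PRIMARY KEY / UNIQUE constraint
BEGIN
    UPDATE dbo.Widget SET Name = @Name WHERE Id = @Id
END

-- Pattern 2: check for existence first, then branch (two steps every time).
IF EXISTS ( SELECT 1 FROM dbo.Widget WHERE Id = @Id )
    UPDATE dbo.Widget SET Name = @Name WHERE Id = @Id
ELSE
    INSERT INTO dbo.Widget ( Id, Name ) VALUES ( @Id, @Name )

The first pattern costs one statement in the common case but leans on the error being raised; the second never raises an error but always pays for the existence check.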
I was able to catch one update but not multiple updates or batch updates done to the table. I know the updated records reside in the inserted and deleted tables. Without using cursors, how can I read and compare all the rows in these two tables?
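Inside a trigger, inserted and deleted can be joined on the table's primary key, which gives the before and after values for every row touched by the statement, however many there are. A sketch with hypothetical key and column names:

-- Compare old and new values for every row affected by the UPDATE.
SELECT  d.keycol,
        d.amount AS old_amount,
        i.amount AS new_amount
FROM    deleted  d
JOIN    inserted i ON i.keycol = d.keycol
WHERE   ISNULL(d.amount, 0) <> ISNULL(i.amount, 0)   -- only rows whose value actually changed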
The driver table, which keeps track of which datamarts ran and for what date range, gets updated frequently during the ETL run. There can be as many as 250 updates issued against this table in a single second.
This table is a heap, and there are no indexes on it.
During these updates we encounter deadlocks, causing the ETL job to fail.
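With no index at all, every update has to scan the heap, and concurrent scans plus modifications are a classic recipe for deadlocks. One thing worth trying (a sketch only; the column names are assumptions, use whatever the updates actually filter on) is a narrow index, ideally clustered, so each update can seek straight to its row:

-- Hypothetical: index the columns the driver-table updates filter on.
CREATE CLUSTERED INDEX IX_DriverTable_Datamart_RunDate
    ON dbo.DriverTable ( DatamartName, RunDate )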
I am basically trying to update a table which reflects account transactions. Accounts get paid in full, but occasionally balance payments can be reversed, and I want to update the table to show this: I need to show which period the account was previously paid in full. I've created a simplified version of the scenario, and below are a couple of examples of things I've tried that do not work. I understand why they do not work, but I'm struggling to figure out how to update the 'PeriodPrevPaidInFull' field.
create table Trans
(
    AccNo int,
    Transaction_Period_Index int,
    PeriodOpeningBalance money,
    DebtBalance money,
    PeriodPaidInFull int NULL,
    PeriodPrevPaidInFull int NULL
)
Loop through #Temp_1
    - Execute Sproc_ABC passing in #Temp_1.Field_TUV as parameter
    - Store result set of Sproc_ABC into #Temp_2
    - Update #Temp_1 SET #Temp_1.Field_XYZ = #Temp_2.Field_XYZ
End Loop
It appears scary from a performance standpoint, but I'm not sure there's a way around it. I have little experience with loops and cursors in SQL. What would such code look like? And is there a preferable way, assuming I have to call Sproc_ABC with Field_TUV to get the new value for Field_XYZ?
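A sketch of what that loop could look like as a cursor over #Temp_1, under the assumption that Sproc_ABC returns a single-column result set containing Field_XYZ (the data types, and the shape of the procedure's result set, are assumptions here):

-- Sketch: call Sproc_ABC once per row of #Temp_1 and copy the result back.
DECLARE @TUV int, @XYZ int          -- assumed data types

DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Field_TUV FROM #Temp_1

OPEN row_cursor
FETCH NEXT FROM row_cursor INTO @TUV

WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM #Temp_2                  -- reuse the work table on each pass

    INSERT INTO #Temp_2 ( Field_XYZ )    -- assumes the proc's result set maps onto #Temp_2
    EXEC dbo.Sproc_ABC @TUV

    SELECT @XYZ = Field_XYZ FROM #Temp_2

    UPDATE #Temp_1
    SET Field_XYZ = @XYZ
    WHERE Field_TUV = @TUV

    FETCH NEXT FROM row_cursor INTO @TUV
END

CLOSE row_cursor
DEALLOCATE row_cursor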
Hopefully someone can at least point me in the right direction for more research (e.g. correct terminology). My only previous experience was just dumping data into a database using ODBC, and that was some years ago, so it is now mostly forgotten.

I need to write an NT Service/Application (in C/C++) that will be getting data sent to it via SQL Server 2000. The data will arrive in my SQL Server (read-only access) via replication of tables from another remote SQL Server. My application needs to know when new rows are inserted or updated so it can read this data (it needs to be quick/timely, so hopefully no polling) and then interface with other remote proprietary systems.

T.I.A.

PS: If you can recommend appropriate books on SQL Server 2000, that would also be useful.
Hello, can someone point me to getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables. Thanks. - Vish
I posted the question in the SQL forum and got a good SQL statement to work with. However, I want to see if there is a way to do it in SSIS.
Maybe this is a really basic question, but I am having a hard time doing it in SQL Server 2005 SSIS.
I have a flat file that I want to merge with table in SQL server 2005.
1> I have successfully created a data flow task to import data from the flat file into Table X (a new table I created for this package).
Now here is my question. I already have a Table A in the database with the same column structure as Table X (both tables have 20 columns, same names, same design).
I want to merge Table A and Table X and store the result in Table A. However, I don't want to merge blindly; I need to insert a row from Table X into Table A only if the same row does not already exist in Table A (there is no primary key; I am looking at certain fields to see if the rows are the same).
Here is an example:

Table A
--------------
1  test  test1   test2   test3   test4   test5
2  test  test6   test7   test8   test9   test10

Table X
------------
1  test  test1   test2   test99  test4   test5
2  test  test98  test97  test96  test95  test94

Now I want to insert only row 2 of Table X, since there is a match on 4 of the fields in row 1. The new Table A should look like:

NEW Table A'
-----------------
test  test1   test2   test3   test4   test5
test  test6   test7   test8   test9   test10
test  test98  test97  test96  test95  test94
I think I could do this using an Execute SQL task and writing all the code in SQL, but that would be cumbersome and time consuming. Is there a simpler way to achieve this?
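In SSIS itself, one common approach is a Lookup transformation on the comparison columns, with the rows that find no match redirected to an OLE DB Destination that writes into Table A. If the Execute SQL task route turns out to be acceptable after all, the statement itself is short; a sketch assuming the comparison is on hypothetical columns Col1 through Col4:

-- Insert only the rows from TableX that have no match in TableA on the chosen columns.
INSERT INTO dbo.TableA
SELECT x.*
FROM dbo.TableX x
WHERE NOT EXISTS ( SELECT 1
                   FROM dbo.TableA a
                   WHERE a.Col1 = x.Col1     -- hypothetical comparison columns;
                     AND a.Col2 = x.Col2     -- use whichever of the 20 columns
                     AND a.Col3 = x.Col3     -- define a "same" row
                     AND a.Col4 = x.Col4 )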
Hi, I am getting a weekly transaction file which has two columns, trans code and trans date, to indicate whether the record has been changed, added or modified, while the monthly master file contains blanks in these two fields.
How do I update the rows coming from the transaction file into the tables which contain the rows from the master file?
To better explain, here is an example.

Master File
ID  Name  AGE  Salary  Transcode  TransDate
2   dev   27   2777

Transaction File (indicating the change)
ID  Name  AGE  Salary  Transcode  TransDate
2   dev   27   24444   C          08072007

The output should be:

2   dev   27   24444   C          08072007
replacing the existing row in the table (updating the whole row).
I have 50 columns in my table, and based on those two fields I should replace the existing rows in the table; if the ID does not match an existing row, I should just add it as a new row.
What transformation should I use to replace all the columns of the rows with a matching ID with the current record from the trans file, and, if there is no matching ID, just add it as a new row?
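If the weekly file is first staged into a table, the whole-row replace can be done with an UPDATE joined on ID followed by an INSERT of the unmatched IDs. A sketch with hypothetical staging/target table names and only a few of the 50 columns written out:

-- Update whole rows whose ID already exists in the master table.
UPDATE m
SET m.Name      = t.Name,
    m.AGE       = t.AGE,
    m.Salary    = t.Salary,
    m.Transcode = t.Transcode,
    m.TransDate = t.TransDate
    -- ...remaining columns...
FROM dbo.MasterTable m                 -- hypothetical target table
JOIN dbo.TransactionStage t            -- hypothetical staging table loaded from the weekly file
    ON t.ID = m.ID

-- Add transaction rows whose ID is not in the master table yet.
INSERT INTO dbo.MasterTable ( ID, Name, AGE, Salary, Transcode, TransDate /* , ... */ )
SELECT t.ID, t.Name, t.AGE, t.Salary, t.Transcode, t.TransDate /* , ... */
FROM dbo.TransactionStage t
WHERE NOT EXISTS ( SELECT 1 FROM dbo.MasterTable m WHERE m.ID = t.ID )

Inside the data flow, the same logic roughly maps to a Lookup on ID, sending matches to an OLE DB Command that updates the row and non-matches to the destination as inserts.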
I will be getting data in either Excel or Access form on a daily basis. I would like to automate the process of converting this (Excel or Access) data to a table in an existing SQL database. Since this conversion needs to be performed daily, note that I need to update the table that contains data from the day before.
Is it possible to do this, and if it is, can someone tell me how?
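One way to script this (a sketch only; the file path, sheet name, and table names are placeholders, and the server must allow ad hoc access via the Jet OLE DB provider) is to pull the Excel sheet in with OPENROWSET from a scheduled job and then refresh yesterday's rows:

-- Pull today's spreadsheet into a staging table (path and sheet name are placeholders).
SELECT *
INTO   #DailyStage
FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\Imports\daily.xls',
                  'SELECT * FROM [Sheet1$]')

-- Replace yesterday's data in the target table with the new load.
BEGIN TRANSACTION
    DELETE FROM dbo.DailyData          -- hypothetical target table
    INSERT INTO dbo.DailyData SELECT * FROM #DailyStage
COMMIT TRANSACTION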
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
create table #IncomeAndExpenseData
(
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, and others in %/yr.
For itemvalues of itemIDs whose entry unit of measure is not $/mo, a stored procedure performs calculations which convert them into numbers with a unit of measure of $/mo and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to calculate values only for monthitemvalue fields which are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field, a trigger on IncomeAndExpenseData sets monthitemvalue to null so that the stored procedure recalculates the monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
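A sketch of one way to write it, assuming recordID and itemID identify a row: the trigger below nulls monthitemvalue only for rows whose itemvalue actually changed, so the stored procedure's update of monthitemvalue alone no longer fires the reset.

CREATE TRIGGER trg_IncomeAndExpenseData_itemvalue
ON IncomeAndExpenseData
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON

    -- Bail out immediately when itemvalue was not part of the UPDATE statement
    -- (e.g. the stored procedure writing only monthitemvalue).
    IF NOT UPDATE(itemvalue)
        RETURN

    -- Reset monthitemvalue only where the user really changed itemvalue.
    UPDATE d
    SET d.monthitemvalue = NULL
    FROM IncomeAndExpenseData d
    JOIN inserted i  ON i.recordID = d.recordID AND i.itemID = d.itemID
    JOIN deleted  del ON del.recordID = i.recordID AND del.itemID = i.itemID
    WHERE ISNULL(i.itemvalue, 0) <> ISNULL(del.itemvalue, 0)
END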
I need a script that inserts the data from an Excel sheet into a table. If a row already exists it should be left alone, unless it has been edited in the Excel sheet, and so on. This process has to go through a stored procedure... but how?
Hi all, I have a problem updating one table's data from another table's data using SQL Server 2000. I have 2 tables, TableA (PID, SID, MinForms) and TableB (PID, SID, MinForms). I need to update TableA with TableB's data using a single query that I am including in a stored procedure.
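On SQL Server 2000 the usual single-statement form is an UPDATE ... FROM with a join; a sketch assuming rows match on PID and SID:

-- Copy MinForms from TableB onto the matching rows of TableA.
UPDATE a
SET a.MinForms = b.MinForms
FROM TableA a
JOIN TableB b
    ON b.PID = a.PID
   AND b.SID = a.SID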
I have a web form that collects details on books (as an example), and in that form is a checkboxlist that displays an entry for each potential author in the database (as an example).
The user can obviously tick as many authors as they want to represent the authors of the book. The ticked entries become the entries in the BooksToAuthors table, which only has BookID and AuthorID columns.
I have a number of questions:
How do I take what is in the CheckBoxList to the database, and how does this relate to stored procedures?
Do I put the checkbox selections into an array? How do I get these 'many items' to a stored procedure that runs a transaction to insert the book and then the many rows in AuthorsToBooks?
What is being passed? Can you pass an array or something to a stored procedure?
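T-SQL on 2000/2005 cannot take an array parameter directly, so a common workaround is to pass the ticked AuthorIDs as one delimited varchar and split it inside the procedure. A sketch, with illustrative table and procedure names (only BooksToAuthors comes from the question):

-- Hypothetical procedure: inserts the book, then one row per author id
-- passed in as a comma-separated list such as '3,17,42'.
CREATE PROCEDURE dbo.usp_AddBookWithAuthors
    @Title     nvarchar(200),
    @AuthorIds varchar(2000)          -- e.g. '3,17,42' built from the CheckBoxList on the page
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @BookId int, @Pos int, @Id varchar(10)

    BEGIN TRANSACTION

    INSERT INTO dbo.Books ( Title ) VALUES ( @Title )   -- hypothetical Books table
    SET @BookId = SCOPE_IDENTITY()

    -- Walk the comma-separated list.
    SET @AuthorIds = @AuthorIds + ','
    WHILE LEN(@AuthorIds) > 0
    BEGIN
        SET @Pos = CHARINDEX(',', @AuthorIds)
        SET @Id  = LEFT(@AuthorIds, @Pos - 1)

        IF LEN(@Id) > 0
            INSERT INTO dbo.BooksToAuthors ( BookID, AuthorID )
            VALUES ( @BookId, CAST(@Id AS int) )

        SET @AuthorIds = SUBSTRING(@AuthorIds, @Pos + 1, LEN(@AuthorIds))
    END

    COMMIT TRANSACTION
END

On the web-form side, the code-behind would loop over the CheckBoxList's selected items and join their values with commas to build @AuthorIds before calling the procedure.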