I have a backup application that uses SQL Server as its backend (both 2000 and 2005). The backup application performs a specific function that enters data into the database but does not have any reporting. I am looking for a way to query the DB directly to see if there is any info I can grab, but the problem is I don't know where it's stored. So my questions are:
Is there any way to tell what tables get updated by a certain process? For example, if I run this one action, how could I then tell what tables were affected, or even what data was changed? I tried looking for a logging function that would list this but did not find one. I also tried looking for some type of real-time monitor, and even for a way to search for records/tables that had been recently updated.
I am new to SQL, so I'm not sure I am using the correct terms, but any help would be appreciated. Also, this is SQL Server 2000 and a test server.
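On SQL Server 2000 the usual answer is to run a SQL Profiler trace filtered to the application's login or SPID and watch which statements it issues. If you can repeat the test against the 2005 instance, the hedged sketch below uses the sys.dm_db_index_usage_stats DMV (2005 and later only) to list which user tables were written to most recently since the last service restart; the database name is a placeholder.

-- SQL Server 2005+ only: last write time per user table in the current database.
USE BackupAppDb;   -- hypothetical database name
SELECT  OBJECT_NAME(us.object_id) AS table_name,
        MAX(us.last_user_update)  AS last_write
FROM    sys.dm_db_index_usage_stats AS us
WHERE   us.database_id = DB_ID()
  AND   OBJECTPROPERTY(us.object_id, 'IsUserTable') = 1
GROUP BY us.object_id
ORDER BY last_write DESC;

Run the backup action, re-run the query, and whichever tables move to the top of the list are the ones the process touched.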
Hi... I have data that I am getting from a DBF file, and I am dumping that data into a SQL Server staging database. After scrubbing it there, I take the data and put it into the production database. Right now my stored procedure handles a single plan only, but there may now be two or more plans together in the same staging database which I need to scrub, and then update the plans that already exist or insert them if they don't.
this is my sproc...

ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS
SET NOCOUNT ON

DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int

SET @Sproc = OBJECT_NAME(@@ProcId)

SET @Status = 'Updating plan information in Plan table.'
UPDATE Statements..Plan
SET    PlanName = PlanName1,
       Description = PlanName2
FROM   Statements..Plan cp
JOIN   ( SELECT DISTINCT PlanId, PlanName1, PlanName2 FROM Census ) c
       ON cp.CPlanId = c.PlanId
WHERE  cp.ClientId = @ClientId
  AND  ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'')
     OR  IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'
INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM   Census
WHERE  PlanId NOT IN ( SELECT DISTINCT CPlanId
                       FROM Statements..Plan
                       WHERE ClientId = @ClientId
                         AND ClientPlanId IS NOT NULL )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added Plan.'
END

SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure?
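Both the UPDATE and the INSERT in the procedure are already set-based, so every plan in Census for the @ClientId you pass in is handled in a single call. If the staging database now holds plans for more than one client, one hedged option (a sketch; it assumes Census carries a ClientId column, which is not shown in the post, and the @HistoryId value is a placeholder) is to loop over the distinct clients and call the procedure once per client:

DECLARE @ClientId int;
DECLARE client_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT ClientId FROM Census;   -- assumed column
OPEN client_cur;
FETCH NEXT FROM client_cur INTO @ClientId;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.usp_Import_Plan @ClientId = @ClientId, @UserId = NULL, @HistoryId = 0;  -- placeholder @HistoryId
    FETCH NEXT FROM client_cur INTO @ClientId;
END
CLOSE client_cur;
DEALLOCATE client_cur;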
I have installed the Adobe iFilter 11 64-bit and set the path to its bin folder. I still cannot find any text from the PDF files. I suspect I am missing something trivial, because I don't find much when I Bing for this, so it must not be a common problem. Here is the code.
-- Adobe iFilter 11 64 bit is installed
-- The Path variable is set to the bin folder for the Adobe iFilter.
-- SQL Developer version 64 bit on both Windows 7 and Windows 8.
USE master;
GO
DROP DATABASE FileTableStudy;
GO
CREATE DATABASE FileTableStudy ON PRIMARY
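Not part of the original script, but a step that is often missing with the Adobe iFilter is telling SQL Server to load operating-system filters and to skip signature verification, then restarting the full-text component; the sketch below is a hedged guess at what is missing here rather than a confirmed fix.

-- Register OS-level iFilters (including the Adobe PDF filter) with full-text search.
EXEC sp_fulltext_service 'load_os_resources', 1;
EXEC sp_fulltext_service 'verify_signature', 0;
GO
-- Check that the .pdf extension is now mapped to a filter.
SELECT document_type, path
FROM   sys.fulltext_document_types
WHERE  document_type = '.pdf';

After changing these settings, restart the SQL Server (or at least the full-text daemon host) service and rebuild the full-text catalog so existing PDFs get re-indexed.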
I have a table that contains words that will be used to search another table on which a full-text index has been created on the searchable columns. I'm basically trying to run something like this:
SELECT t1.col1, t2.col3 FROM tbl1 t1, tbl2 t2 WHERE CONTAINS (t1.col1, t2.col1)
I know this won't work, but is there a way to join these two tables so the words (t2.col1) can be passed as search conditions? There is no common key between the tables, so a normal join won't work. I'm trying to find a way to pass the search words from one table to another.
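CONTAINS requires its search condition to be a literal or a variable, not a column, so the two tables can't be joined directly on it. One hedged workaround (a sketch; the table and column names are taken from the post, everything else is an assumption) is to walk the keyword table with a cursor and run the full-text query once per word:

DECLARE @word nvarchar(200);
DECLARE @results TABLE (col1 nvarchar(400), keyword nvarchar(200));

DECLARE word_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT col1 FROM tbl2;                  -- the table holding the search words
OPEN word_cur;
FETCH NEXT FROM word_cur INTO @word;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO @results (col1, keyword)
    SELECT t1.col1, @word
    FROM   tbl1 t1
    WHERE  CONTAINS(t1.col1, @word);        -- a variable search condition is allowed
    FETCH NEXT FROM word_cur INTO @word;
END
CLOSE word_cur;
DEALLOCATE word_cur;

SELECT * FROM @results;

If a keyword can contain spaces or reserved words, wrap it in double quotes before passing it to CONTAINS (SET @word = '"' + @word + '"').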
I'm working on a project that is being built piece by piece. The first part is in place. I will occasionally need to either change things within a table (only adding fields) or add stored procedures, etc. What is the best way of making these changes to the production database without interfering with the existing data? The users are remote and I won't necessarily have direct access to the server; I'll have to walk the primary user through the process. TIA -- Jake
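One common approach (a sketch only; the table, column and procedure names are placeholders) is to ship a single re-runnable script the remote user can execute, where every change first checks whether it has already been applied, so existing data is never touched and running the script twice does no harm:

-- Add a column only if it is not already there (safe to run more than once).
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Orders' AND COLUMN_NAME = 'ShippedDate')
BEGIN
    ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;
END
GO
-- Drop and recreate a stored procedure so the script always leaves the latest version.
IF OBJECT_ID('dbo.usp_GetOrders') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetOrders;
GO
CREATE PROCEDURE dbo.usp_GetOrders
AS
    SELECT OrderId, ShippedDate FROM dbo.Orders;
GO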
I need to update information for a user, and if the user is classified as a primary (@blnPrimary) then I need to update information for all users within his agency (AgencyUniqueId). The issue is that the second UPDATE to cdds_User_Profile always returns a rowcount of 0 (it should be 1), even though the values for @Original_AgencyUniqueId and @Original_UserId are correct. This is just a snippet of the whole procedure. I'm trying to implement similar logic in other parts of the procedure and I'm observing the same behavior there as well. Any help anyone can provide is greatly appreciated.

/*** Update User Profile ***/
UPDATE [cdds_User_Profile]
SET [FirstName] = @FirstName,
    [LastName] = @LastName,
    [Title] = @Title,
    [Phone] = @Phone,
    [AcctType] = @AcctType,
    [AcctStatus] = @AcctStatus,
    [LastUpdatedDate] = GETDATE()
WHERE ([FirstName] = @Original_FirstName
   AND [LastName] = @Original_LastName
   AND [Title] = @Original_Title
   AND [Phone] = @Original_Phone
   AND [AcctType] = @Original_AcctType
   AND [AcctStatus] = @Original_AcctStatus
   AND [AgencyUniqueId] = @Original_AgencyUniqueId
   AND [UserId] = @Original_UserId);

IF @@ROWCOUNT = 0
BEGIN
    SET @err_message = 'Data has been edited by another user since you began viewing this information.'
    RAISERROR (@err_message, 11, 1)
    ROLLBACK TRANSACTION
    RETURN
END

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END

IF @blnPrimary = 1
BEGIN
    IF LOWER(@AcctStatus) <> LOWER(@AgencyAcctStatus)
    /*** Update Users Acct. Status ***/
    /* update all users in same agency profile */
    UPDATE [cdds_User_Profile]
    SET [AcctStatus] = @AcctStatus,
        [LastUpdatedDate] = GETDATE()
    WHERE ([AgencyUniqueId] = @Original_AgencyUniqueId
       AND [UserId] = @Original_UserId);

    IF @@ROWCOUNT = 0
    BEGIN
        SET @err_message = 'Data for this agency has been edited by another user since you began viewing this information.'
        RAISERROR (@err_message, 11, 1)
        ROLLBACK TRANSACTION
        RETURN
    END

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END
END
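Not a diagnosis of the exact cause, but one defensive pattern worth trying (a sketch; the variable names are placeholders) is to capture @@ROWCOUNT and @@ERROR into local variables in a single statement immediately after each UPDATE, so no intervening IF/SET logic can reset them before they are tested:

DECLARE @rc int, @err int;

UPDATE [cdds_User_Profile]
SET    [AcctStatus] = @AcctStatus,
       [LastUpdatedDate] = GETDATE()
WHERE  [AgencyUniqueId] = @Original_AgencyUniqueId
  AND  [UserId] = @Original_UserId;

-- Capture both values at once, before anything else can overwrite them.
SELECT @rc = @@ROWCOUNT, @err = @@ERROR;

IF @err <> 0
BEGIN
    ROLLBACK TRANSACTION;
    RETURN;
END
IF @rc = 0
BEGIN
    RAISERROR ('Data for this agency has been edited by another user since you began viewing this information.', 11, 1);
    ROLLBACK TRANSACTION;
    RETURN;
END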
I am trying to do selective updates for rows where a column matches a column in another table. I want to do something like this, only 'this' does not work, and nothing else I could think of (I tried joins also) worked. What am I missing? I hope this explanation makes sense.
UPDATE queryresultsmodel SET queryresultsmodel.tableforcedoutdate = getdate() Where Exists (Select tablename from queryresultsmodel q inner join orphanul o on q.tablename = o.name)
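The subquery above never references the row being updated, so the EXISTS test gives the same answer for every row in queryresultsmodel. A hedged rewrite (a sketch that keeps the table and column names from the post) correlates the subquery with the outer table:

UPDATE q
SET    q.tableforcedoutdate = GETDATE()
FROM   queryresultsmodel AS q
WHERE  EXISTS (SELECT 1
               FROM   orphanul AS o
               WHERE  o.name = q.tablename);

Only rows whose tablename appears in orphanul.name get the new date; everything else is left alone.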
Has anyone had any problems with single-row updates on a table for which you have defined horizontal and vertical partitioning of the data to be replicated? When I execute an UPDATE statement that modifies just one row, the log reader misses the modification and it does not get replicated to the other databases.
If I run the same UPDATE statement against several rows, then all the modifications are picked up by the log reader and the replication task completes fine.
Hi, I have a table with 20,000,000 rows. I have been monitoring the performance of inserts and updates, but the results don't convince me at all. The table has 30 columns, 12 of which are calculated columns. The tests I ran were:
1. One insert with all the columns, computing the calculated columns inside the INSERT statement itself.
2. One insert with all the calculated columns precomputed into @variables.
3. One insert with just the basic fields, followed by 10 updates.
The result was that the last test performed best. What is your opinion?
Hi. We're using SQL Server 2005 Express Edition to maintain a relational DB for our soon-to-be-released .NET app. The problem is that we expect the definitions of the tables in the DB to occasionally change over time as we make updates to the software. Thus, during the installation of a software upgrade, we expect to run a SQL script that grabs the data from a table in its "old" format, restructures the table, and then deposits the data back into the updated table. This seems to require some sort of version stamp on the table definition.
My main question is: What is the conventional way for handling versioning of table definitions?
Another question is: Is there a preferred procedure for handling the updating of the data during the installation of a software update?
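A common convention (a sketch; the table and column names are placeholders) is a one-row-per-version table in the database itself, with each upgrade script checking the current version before applying its changes and recording the new version when it finishes:

-- Hypothetical version-tracking table, created once by the installer.
CREATE TABLE dbo.SchemaVersion (
    VersionNumber int      NOT NULL PRIMARY KEY,
    AppliedOn     datetime NOT NULL DEFAULT GETDATE()
);
GO
-- Upgrade script for version 2: runs only if version 2 has not been applied yet.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber >= 2)
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;   -- example change
    INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES (2);
END
GO

For the data itself, the usual procedure is the one you describe: the installer runs the numbered scripts in order, and each script moves the existing data into the new shape (for simple column additions, ALTER TABLE preserves the data and no copy is needed).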
I have two servers, S1 and S2. Immediately after new data arrives on S1 I want to perform some actions on S2.
I can use a trigger on S1, but if S2 is down the transaction on S1 will be lost. I could use database replication, but I only need one single table on S1 to report changes to S2.
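One hedged option (a sketch; every object name here is made up) is to have the trigger write to a durable queue table on S1 and let a SQL Agent job drain that queue to S2, so nothing is lost while S2 is unreachable:

-- On S1: a durable queue of changes, populated by the trigger.
CREATE TABLE dbo.ChangeQueue (
    QueueId   int IDENTITY(1,1) PRIMARY KEY,
    KeyValue  int      NOT NULL,
    ChangedOn datetime NOT NULL DEFAULT GETDATE(),
    Processed bit      NOT NULL DEFAULT 0
);
GO
CREATE TRIGGER trg_SourceTable_Queue ON dbo.SourceTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ChangeQueue (KeyValue)
    SELECT i.Id FROM inserted AS i;   -- assumes the table has an Id key column
END
GO

A SQL Agent job on S1 then pushes the rows WHERE Processed = 0 to S2 (for example over a linked server) and marks them Processed = 1; if S2 is down, the rows simply wait in the queue until the next run.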
Hey, I have a couple of triggers on one of the tables. They fail if I update multiple records at the same time from the stored procedure.
UPDATE table1 SET col1 = 0 WHERE col2 between 10 and 20
The error I am getting is:
Server: Msg 512, Level 16, State 1, Procedure t_thickupdate, Line 12 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated.
What is the best possible way to make it work? Thank you.
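That error usually means the trigger assumes exactly one row in the inserted table, for example SET @value = (SELECT col FROM inserted), which breaks as soon as a statement touches several rows. A hedged rewrite (a sketch; the column names are placeholders because the trigger body isn't shown) handles any number of rows by joining to inserted instead of selecting it into a variable:

ALTER TRIGGER t_thickupdate ON table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Set-based: one statement covers every updated row, however many there are.
    UPDATE t
    SET    t.some_dependent_col = i.col1              -- placeholder logic
    FROM   dbo.table1 AS t
    JOIN   inserted   AS i ON i.pk_col = t.pk_col;    -- pk_col is a placeholder key column
END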
I am looking for pros and cons for the following scenarios:
When a table has a unique key constraint, is it viable to always do an insert, immediately check @@ERROR, and if @@ERROR indicates a duplicate-key violation, perform an update statement instead?
Another possible solution would be to always check whether the key exists and then do the insert or update based on that result. This method always requires two steps.
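For comparison, here are both patterns sketched against a hypothetical dbo.Customer table with a unique key on CustomerCode (on SQL Server 2000/2005, with no MERGE, these are the two usual shapes):

DECLARE @Code varchar(20), @Name varchar(100);
SELECT @Code = 'C001', @Name = 'Example Customer';

-- Pattern 1: insert first, fall back to an update on a duplicate-key error (error 2627).
INSERT INTO dbo.Customer (CustomerCode, CustomerName)
VALUES (@Code, @Name);
IF @@ERROR = 2627
BEGIN
    UPDATE dbo.Customer SET CustomerName = @Name WHERE CustomerCode = @Code;
END

-- Pattern 2: check for existence first, then branch.
IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerCode = @Code)
    UPDATE dbo.Customer SET CustomerName = @Name WHERE CustomerCode = @Code;
ELSE
    INSERT INTO dbo.Customer (CustomerCode, CustomerName) VALUES (@Code, @Name);

Pattern 1 is cheaper when most rows are new but raises (and relies on) a constraint error for every duplicate; pattern 2 reads before it writes, so under concurrency it still needs the unique constraint as a safety net.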
I was able to catch a single update but not multiple or batch updates done to the table. I know the updated records reside in the inserted and deleted tables. Without using cursors, how can I read and compare all the rows in these two tables?
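A hedged sketch, assuming the table has a key column (called pk_col here) whose value doesn't change during the update: join inserted to deleted on that key and compare the columns of interest in one set-based statement inside the trigger:

-- Inside an AFTER UPDATE trigger: every modified row, old and new values side by side.
SELECT  d.pk_col,
        d.some_col AS old_value,
        i.some_col AS new_value
FROM    deleted  AS d
JOIN    inserted AS i ON i.pk_col = d.pk_col
WHERE   ISNULL(d.some_col, '') <> ISNULL(i.some_col, '');

The same join can drive an UPDATE or an INSERT into an audit table, so the trigger stays correct whether the statement changed one row or thousands.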
The driver table, which keeps track of which datamarts ran and for what date range, gets updated frequently during the ETL run. There can be as many as 250 updates issued against this table in a single second.
This table is a heap, and there are no indexes on it.
During these updates we encounter deadlocks, causing the ETL job to fail.
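With no index, every one of those updates has to scan the whole heap and take locks along the way, which makes concurrent updates very deadlock-prone. A hedged first step (a sketch; the key columns are guesses based on the description and should be replaced with whatever the ETL's UPDATE statements filter on) is to give the updates a direct path to the rows they touch:

-- Assumed key columns; adjust to the columns the ETL actually filters on.
CREATE CLUSTERED INDEX IX_DriverTable_Datamart_Range
    ON dbo.DriverTable (DatamartName, RangeStartDate);

If deadlocks persist after the scans are gone, the remaining contention is usually down to update order across sessions rather than the missing index.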
I am basically trying to update a table which reflects account transactions. Accounts get paid in full, but occasionally balance payments can be reversed, and I want to update the table to show this: I need to show which period the account was previously paid in full. I've created a simplified version of the scenario, and below are a couple of examples of things I've tried that do not work. I understand why they do not work, but I'm struggling to figure out how to update the 'PeriodPrevPaidInFull' field.
create table Trans (
    AccNo int,
    Transaction_Period_Index int,
    PeriodOpeningBalance money,
    DebtBalance money,
    PeriodPaidInFull int NULL,
    PeriodPrevPaidInFull int NULL,
Loop through #Temp_1
    - Execute Sproc_ABC, passing in #Temp_1.Field_TUV as a parameter
    - Store the result set of Sproc_ABC into #Temp_2
    - Update #Temp_1 SET #Temp_1.Field_XYZ = #Temp_2.Field_XYZ
End Loop
It appears scary from a performance standpoint, but I'm not sure there's a way around it. I have little experience with loops and cursors in SQL. What would such code look like? And is there a preferable way (assuming I have to call Sproc_ABC with Field_TUV to get the new value for Field_XYZ)?
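A hedged sketch of that loop, assuming Sproc_ABC returns a single-row result set containing a Field_XYZ column and that #Temp_2 has been created with matching columns (the data types here are guesses):

DECLARE @tuv int;   -- assumed type of Field_TUV

DECLARE tuv_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT Field_TUV FROM #Temp_1;
OPEN tuv_cur;
FETCH NEXT FROM tuv_cur INTO @tuv;
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM #Temp_2;                       -- reuse the staging table for each row
    INSERT INTO #Temp_2 EXEC Sproc_ABC @tuv;   -- capture the procedure's result set

    UPDATE t1
    SET    t1.Field_XYZ = t2.Field_XYZ
    FROM   #Temp_1 t1
    CROSS JOIN #Temp_2 t2                      -- one row in #Temp_2 per iteration
    WHERE  t1.Field_TUV = @tuv;

    FETCH NEXT FROM tuv_cur INTO @tuv;
END
CLOSE tuv_cur;
DEALLOCATE tuv_cur;

If the logic inside Sproc_ABC can be expressed as a function or a joinable query, a single set-based UPDATE would avoid the loop entirely; when the procedure must be called per value, this cursor shape is the standard workaround.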
Hopefully someone can at least point me in the right direction for more research (e.g. the correct terminology). My only previous experience was just dumping data into a database using ODBC, and that was some years ago, so it is now mostly forgotten. I need to write an NT service/application (in C/C++) that will be getting data sent to it via SQL Server 2000. The data will arrive in my SQL Server (read-only access) via replication of tables from another remote SQL Server. My application needs to know when new rows are inserted or updated so it can read this data (it needs to be quick/timely, so hopefully no polling) and then interface with other remote proprietary systems. T.I.A. PS: If you can recommend appropriate books on SQL Server 2000, that would also be useful.
Hello, can someone point me to a way of getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables. Thanks. - Vish
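SQL Server 2000 doesn't keep per-table modification counters you can query directly, so one hedged approach (a sketch; all of the object names are made up, and a trigger is needed on each table you want to measure) is a small statistics table maintained by a lightweight trigger:

CREATE TABLE dbo.TableActivityStats (
    TableName   sysname NOT NULL PRIMARY KEY,
    InsertCount int     NOT NULL DEFAULT 0,
    UpdateCount int     NOT NULL DEFAULT 0
);
INSERT INTO dbo.TableActivityStats (TableName) VALUES ('Orders');   -- seed one row per monitored table
GO
CREATE TRIGGER trg_Orders_Activity ON dbo.Orders   -- repeat per monitored table
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rows int, @isUpdate bit;
    SELECT @rows = COUNT(*) FROM inserted;
    SELECT @isUpdate = CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 1 ELSE 0 END;

    UPDATE dbo.TableActivityStats
    SET    InsertCount = InsertCount + CASE WHEN @isUpdate = 0 THEN @rows ELSE 0 END,
           UpdateCount = UpdateCount + CASE WHEN @isUpdate = 1 THEN @rows ELSE 0 END
    WHERE  TableName = 'Orders';
END
GO

On SQL Server 2005 and later, sys.dm_db_index_usage_stats (the user_updates column) gives write counts since the last service restart without any triggers.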
I posted the question in the SQL forum and got a good SQL statement to work with. However, I want to see if there is a way to do it in SSIS.
Maybe this is a really basic question, but I am having a hard time doing it in SQL Server 2005 SSIS.
I have a flat file that I want to merge with a table in SQL Server 2005.
1> I have successfully created a data flow task to import data from the flat file into Table X (a new table I created for this package).
Now here is my question. I already have a Table A in the database with the same column structure as Table X (both tables have 20 columns, same names, same design).
I want to merge Table A and Table X and store the data in Table A. However, I don't want to merge blindly; I need to insert a row into Table A only if the same row does not already exist in Table A (there is no primary key; I am looking at certain fields to see whether the rows are the same).
Here is an example:

Table A
--------------
1  test  test1   test2   test3   test4   test5
2  test  test6   test7   test8   test9   test10

Table X
------------
1  test  test1   test2   test99  test4   test5
2  test  test98  test97  test96  test95  test94

Now, I want to insert only row 2 of Table X, since there is a match on 4 of the fields in row 1. The new Table A should look like:

NEW Table A'
-----------------
test  test1   test2   test3   test4   test5
test  test6   test7   test8   test9   test10
test  test98  test97  test96  test95  test94
I think I could do this using an Execute SQL task and write all the code in SQL, but that would be cumbersome and time consuming. Is there a simpler way to achieve this?
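Within the data flow, the usual tool for this is a Lookup transformation placed between the source and the Table A destination: match on the fields that define "the same row" and send only the rows from the Lookup's no-match output to the OLE DB Destination for Table A. If a single SQL step turns out to be acceptable after all, a hedged sketch of the equivalent statement (the column names are placeholders for the fields you compare on) is:

INSERT INTO dbo.TableA (Col1, Col2, Col3, Col4, Col5, Col6)
SELECT x.Col1, x.Col2, x.Col3, x.Col4, x.Col5, x.Col6
FROM   dbo.TableX AS x
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.TableA AS a
                   WHERE  a.Col1 = x.Col1    -- the fields that define a duplicate row
                     AND  a.Col2 = x.Col2
                     AND  a.Col3 = x.Col3
                     AND  a.Col4 = x.Col4);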
Hi, I am getting a weekly transaction file which has two columns, trans code and trans date, to indicate whether the record has been changed, added or modified; the monthly master file contains blanks in these two fields.
How do I update the rows coming from the transaction file into the tables that contain the rows from the master file?
To explain better, here is an example.

Master File

ID  Name  AGE  Salary  Transcode  TransDate
2   dev   27   2777

Transaction File (indicating a change)

ID  Name  AGE  Salary  Transcode  TransDate
2   dev   27   24444   C          08072007
The output should be:

2   dev   27   24444   C   08072007

replacing the existing row in the table (updating the whole row).
I have 50 columns in my table, and based on these two fields I should replace the rows existing in the table; if the ID does not exist, just add the record as a new row.
What transformation should I use to replace all the columns of a matching ID in the table with the current record from the trans file, and to add the record as a new row when no matching ID exists?
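In an SSIS data flow the usual shape is a Lookup transformation on ID: matched rows go to an OLE DB Command (or a staging table plus an UPDATE) and unmatched rows go straight to an OLE DB Destination for the insert; the Slowly Changing Dimension wizard builds the same pattern for you. A hedged staging-table sketch of the equivalent SQL (assuming the transaction file has first been loaded into a staging table; all names are made up) would be:

-- Replace the whole row for IDs that already exist in the master table.
UPDATE m
SET    m.Name      = s.Name,
       m.AGE       = s.AGE,
       m.Salary    = s.Salary,
       m.Transcode = s.Transcode,
       m.TransDate = s.TransDate
       -- ...repeat for the remaining columns of the 50...
FROM   dbo.MasterTable AS m
JOIN   dbo.Staging_TransFile AS s ON s.ID = m.ID;

-- Insert transaction rows whose ID is not in the master table yet.
INSERT INTO dbo.MasterTable (ID, Name, AGE, Salary, Transcode, TransDate /*, ... */)
SELECT s.ID, s.Name, s.AGE, s.Salary, s.Transcode, s.TransDate /*, ... */
FROM   dbo.Staging_TransFile AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.MasterTable AS m WHERE m.ID = s.ID);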
I will be getting data in either Excel or Access form on a daily basis. I would like to automate the process of converting this (Excel or Access) data into a table in an existing SQL database. Since this conversion needs to be performed daily, note that I also need to update the table that contains the data from the day before.
Is it possible to do this, and if so, can someone tell me how?
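One hedged way (a sketch; the file path, sheet name and table names are placeholders, the server must have 'Ad Hoc Distributed Queries' enabled, and the provider name depends on what is installed, e.g. the ACE provider on newer 64-bit machines) is a scheduled SQL Agent job that pulls the spreadsheet with OPENROWSET into a staging table and then merges it into the permanent table:

-- Load today's spreadsheet into a fresh staging table.
IF OBJECT_ID('dbo.Staging_Daily') IS NOT NULL
    DROP TABLE dbo.Staging_Daily;

SELECT *
INTO   dbo.Staging_Daily
FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'Excel 8.0;Database=C:\Imports\daily.xls',
                  'SELECT * FROM [Sheet1$]');

-- Then update existing rows and insert new ones from dbo.Staging_Daily into the
-- permanent table, keyed on whatever uniquely identifies a row in the daily feed.

The same load step can instead be built as an SSIS/DTS package with an Excel source, which is easier to maintain if the file layout changes.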
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
create table #IncomeAndExpenseData (
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well, which holds item information, including the itemID and the entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, and others in %/yr.
For itemvalues of itemIDs whose entry unit of measure is not $/mo, a stored procedure performs calculations that convert them into numbers with a unit of measure of $/mo and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields that are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field there is a trigger on IncomeAndExpenseData which sets the monthitemvalue to null so the stored procedure recalculates the monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
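A hedged sketch of that trigger (assuming recordID plus itemID identify a row, which isn't stated in the post): react only when the itemvalue column appears in the UPDATE statement's SET list, and only for rows where its value actually changed. The stored procedure only sets monthitemvalue, so UPDATE(itemvalue) is false for its writes and they no longer clear the recalculated values.

CREATE TRIGGER trg_IncomeAndExpenseData_ItemValueChanged
ON IncomeAndExpenseData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(itemvalue)          -- true only when itemvalue is in the SET list
    BEGIN
        UPDATE d
        SET    d.monthitemvalue = NULL
        FROM   IncomeAndExpenseData AS d
        JOIN   inserted AS i ON i.recordID = d.recordID AND i.itemID = d.itemID
        JOIN   deleted  AS x ON x.recordID = i.recordID AND x.itemID = i.itemID
        WHERE  ISNULL(i.itemvalue, -999999) <> ISNULL(x.itemvalue, -999999);  -- value really changed
    END
END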
I need a script that inserts the data from an Excel sheet into a table. If a row already exists it should be left alone, unless it has been edited in the Excel sheet, and so on. This process has to go through a stored procedure... but how?
Hello, I have a SQL database with a table. This table has two fields: [Title] and [Description]. In my web site I have a text box where the user enters some keywords, for example:
1 > book english
2 > book AND english
3 > book OR english
Is it possible to create a SQL procedure which finds which records have the words "book" and "english" in the fields [Title] or [Description]? And is it possible to somehow also honor the words "AND" and "OR"? I never did something like this; I just would like to create a simple search on my web site which looks into my documents table. Thank you very much, Miguel
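One hedged way to do this (a sketch; it assumes a full-text index exists on the documents table over Title and Description, the table name is a placeholder, and the web code is responsible for turning the text-box input into a valid search condition such as '"book" AND "english"' or '"book" OR "english"') is to pass the whole condition string to CONTAINS:

CREATE PROCEDURE dbo.usp_SearchDocuments
    @SearchCondition nvarchar(200)   -- e.g. '"book" AND "english"'
AS
BEGIN
    SET NOCOUNT ON;
    SELECT Title, Description
    FROM   dbo.Documents
    WHERE  CONTAINS((Title, Description), @SearchCondition);
END

Without full-text indexing, the fallback is to split the keywords in the application (or with a string-splitting routine) and build a WHERE clause of LIKE '%keyword%' terms combined with AND/OR, which is simpler to set up but much slower on large tables.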