DB Engine :: Table Transaction History
Oct 22, 2015
I have a table and I would like to know which queries were executed against that table in the past 5 days. Is that possible?
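One possible starting point (a rough sketch, not a complete answer) is the plan cache: it only shows queries whose plans are still cached, so it is not a guaranteed 5-day history, but it is often enough to see what has been hitting a table recently. The table name dbo.MyTable below is a placeholder.

-- Sketch: queries still in the plan cache that mention the table and ran in the last 5 days.
-- The plan cache is not a full history; a server-side trace or Extended Events session
-- would be needed for a guaranteed record.
SELECT  st.text AS query_text,
        qs.last_execution_time,
        qs.execution_count
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE '%MyTable%'        -- coarse text filter on the table name
  AND   qs.last_execution_time >= DATEADD(DAY, -5, GETDATE())
ORDER BY qs.last_execution_time DESC;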
I am unable to see old history for my SQL Server Agent jobs. I have set the job history size limit, but it still does not show the old history.
I was asked to determine the last time 2 databases were accessed.
Are the .trc files an accurate way to determine the database access history?
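As a cross-check against the trace files, the index usage DMV records the last read and write per index (a sketch only; note the DMV is cleared on every SQL Server restart, so it cannot see further back than the current uptime).

-- Sketch: most recent read/write activity per database, from index usage statistics.
-- Cleared on restart, so it only covers activity since the instance last started.
SELECT  DB_NAME(database_id)  AS database_name,
        MAX(last_user_seek)   AS last_seek,
        MAX(last_user_scan)   AS last_scan,
        MAX(last_user_lookup) AS last_lookup,
        MAX(last_user_update) AS last_update
FROM    sys.dm_db_index_usage_stats
WHERE   database_id > 4                  -- skip the system databases
GROUP BY DB_NAME(database_id);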
One of my SQL developers observed that the size of the parameter 'Parameter_XYZ' in a certain stored procedure had been changed from 25 to 255 during some production fixes; however, it now looks like someone has changed it back to 25 instead of 255.
DECLARE @Parameter_XYZ varchar(25);
Can we figure out in which sprint/drop the stored procedure was changed and Parameter_XYZ was set back to 25? Will any log recovery mechanism capture such details?
Can we get the stored procedure text between different alterations?
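Two quick places to look (a sketch, with a placeholder procedure name; this shows when and by whom the object was altered, but not its old text): sys.objects records the last modification date, and the default trace, if the event is still within its rollover window, records Object:Altered events with the login that issued them.

-- Sketch: when the procedure was last altered.
SELECT name, create_date, modify_date
FROM   sys.objects
WHERE  name = 'MyStoredProcedure';         -- placeholder name

-- Sketch: who altered it, from the default trace (EventClass 164 = Object:Altered).
-- The default trace rolls over, so older changes may already be gone.
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT  t.StartTime, t.DatabaseName, t.ObjectName,
        t.LoginName, t.HostName, t.ApplicationName
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE   t.EventClass = 164
  AND   t.ObjectName = 'MyStoredProcedure'  -- placeholder name
ORDER BY t.StartTime DESC;

Recovering the previous text of the procedure generally requires a source-control copy or an old backup; the transaction log does not expose it in a readily readable form.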
Hello, I browsed the internet for an answer, but all I can find is mention of a third-party tool called Log Explorer that costs a grand for a single license ... no thank you. Basically, what I want to do is be able to open the .ldf file that is created to log transactions when you first create a database. If there is another method to view transaction logs, then please mention it. I tried DBCC LOG('database'), which did not provide me with much information that I can use.
Thanks in advance,
Sharp_At_C
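One alternative worth knowing about (undocumented and unsupported, so treat this as a sketch rather than a recommendation) is the fn_dblog() table-valued function, which returns the active portion of the log as rows. The output is still cryptic, but it exposes more columns than DBCC LOG and can be filtered with ordinary WHERE clauses.

-- Sketch: browse the active log of the current database with the undocumented fn_dblog().
USE MyDatabase;                            -- placeholder database name
SELECT  [Current LSN],
        [Operation],
        [Context],
        [Transaction ID],
        [AllocUnitName]
FROM    fn_dblog(NULL, NULL)               -- NULL, NULL = the whole active log
WHERE   [Operation] IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS');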
I have a requirement to delete 1 million records from a table that holds 10 million rows and is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
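The usual pattern for this (a sketch; the table name dbo.BigTable and the ArchiveFlag predicate are placeholders for whatever identifies the 1 million rows) is to delete in small batches, so each statement holds locks only briefly and log backups can run between batches.

-- Sketch: batched delete so concurrent queries are not blocked for long.
SET NOCOUNT ON;
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.BigTable
    WHERE ArchiveFlag = 1;            -- placeholder predicate for the rows to remove

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:01';         -- brief pause so other sessions get through
END;

Batch size and delay are tuning knobs; smaller batches mean less blocking but a longer overall run.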
I am using AlwaysOn on my SQL 2012 databases and Ola Hallengren's scripts for backing up the databases. Full and diff database backups work fine, but the log is not getting backed up. The tran log backup job doesn't error out either. I'm trying to figure out what I may be missing.
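Two things worth checking (a sketch only; 'MyAgDatabase' is a placeholder): the database must be in FULL (or BULK_LOGGED) recovery for log backups at all, and on an Availability Group the backup scripts typically skip databases on replicas that are not the preferred backup replica, which produces exactly this "no error, no backup" behaviour.

-- Sketch: recovery model check.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = 'MyAgDatabase';          -- placeholder name

-- Sketch: is this replica the AG's preferred backup replica? (1 = yes)
SELECT sys.fn_hadr_backup_is_preferred_replica('MyAgDatabase') AS is_preferred_here;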
I am having an issue where the transaction log backup fails and throws the following error. I have never seen this corruption error before, so is there any solution for it? This error is from my log file:
Failed:(-1073548784) Executing the query "BACKUP LOG [Xe] TO DISK = N'D:XeXeXe_backup_201507230922.trn' WITH NOFORMAT, NOINIT, NAME = N'Xe_backup_20150723092224', SKIP, REWIND, NOUNLOAD, STATS = 10
" failed with the following error: "BACKUP detected corruption in the database log. Check the errorlog for more information. BACKUP LOG is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.This is from SQL JOB error"-
Executed as user: SqlAdmin. ...ion 9.00.5324.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 9:22:20 AM Progress: 2015-07-23 09:22:24.08 Source: {297F9C99-05AE-47BD-AA70-3E25DDD78CAB}
Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp".: 100% complete End Progress Progress: 2015-07-23 09:22:24.91 Source: Back Up Database (Transaction Log)
[code]....
I am actually just looking for some supporting documentation on some facets of SQL Server. As far as I have always known, when anyone does a READ from a SQL Server database (SELECT * FROM <TABLE>), SQL Server does not create a log record, since no data or database structure is being modified. A colleague is under the impression that READs are logged operations.
Today I've put into production a big database accessed by 200 concurrent users; this database has READ_COMMITTED_SNAPSHOT set to ON. I know that RCSI set to ON is very aggressive on tempdb, so I'm monitoring it. I've noticed that the transaction log space usage (%) on tempdb is slowly but steadily increasing; in the last 24 hours it has gone from 99% space free to 37% space free. Is that normal? The tempdb log is 35 GB in size.
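To see the raw numbers and what, if anything, is preventing the tempdb log from being truncated, two quick checks are sketched below (the DMV at the end requires SQL Server 2012 or later); a long-running open transaction is a common culprit.

-- Sketch: log size and % used for every database, and the reuse-wait reason for tempdb.
DBCC SQLPERF (LOGSPACE);

SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'tempdb';

-- Same figures as a DMV (SQL Server 2012+):
USE tempdb;
SELECT total_log_size_in_bytes,
       used_log_space_in_bytes,
       used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;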
I have 4 servers, 2 for each application environment (Dev & Prod):
DEV 1 & DEV 2 are standalone servers
Prod 1 & Prod 2 are Windows Clustered Servers.
From one application to the other we do distributed transactions. Dev 1 - Dev 2 or Dev 2 - Dev 1 can start DTC and it works fine, but the issue comes with Prod 1 - Prod 2 or Prod 2 - Prod 1. I get the error message: OLE DB provider "SQLNCLI" for linked server "xyz" returned message "No transaction is active.".
Msg 7391, Level 16, State 2, Line 3
The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "xyz" was unable to begin a distributed transaction.
I have tested Dev 1 - Prod 1, Dev 1 - Prod 2, Dev 2 - Prod 1, and Dev 2 - Prod 2; everything works fine, only the production servers are causing the issue.
I enabled all the settings needed for DTC on the clustered MSDTC service, but no luck.
I have a database design question. There are lots of roads to Rome, they say, and I want to hear what you think of this one.
The government supplies wheelchairs (and the like) to people who need them. They stay in the possession of the (local) government and are distributed by a company X.
So we have Tools (wheelchairs) and Users (of wheelchairs). The life cycle of a wheelchair is that more than one user will use it over time.
I want to keep track of which users used an instance of a wheelchair.
Now there's a developer who likes to put this in one table (the chair and its user), in a way like this:
UID, WheelChairId, UserId, OwnerId, SerialNumber, BeginDateTime, EndDateTime, SomeOtherColumns
The UID is unique; the WheelChairId is a GUID which is unique per wheelchair, but can have multiple records in the table with no overlap.
If one of the column values is changed, a new record is created for the same wheelchair with a new begin date (the closed record gets an end date). So history is built automatically. By using the right queries I can see which users used the chair in which period of time, but also changed ownerships and other changes in the other columns over time.
Is this a good or a common practice? Why use it, or stay away from it?
Henri
~~~~
There's no place like 127.0.0.1
Hi there!
I'm working on an application designed like this:
There's a table "DailyTransations" (DT) containing daily transactions...
Then there's an archive table "TransationsArchive" (TA) with the exact same structure.
When a record is inserted into DT, it is also inserted into TA (via a trigger), and the reporting is done against TA.
Now, for performance reasons, we delete from DT the records older than 2 days, since they are not needed for processing.
First, what do you think of that implementation?
We thought about using partitions based on the transaction date and completely eliminating TA, but it seems that once a record is assigned to a partition, it is not moved automatically...
What is the common solution for this need?
Thanks
Frantz
Hi all,
This is more of a design issue for a History table.
Suppose I have a transaction table and, based on the transactions, I want to keep a history of them. Do I need to define a primary key and a foreign key for the history table?
Regards,
General Problem
I am running a website of crossword puzzle and Sudoku games. The website is designed as follows:
There are 20-30 games online each day.
Every registered user can play and submit a game to win scores.
For each game, every registered user can get the score ONLY one time; i.e., no score will be calculated if the user has finished the game before.
To avoid wasting time on a game finished before, the user is notified with a hint message on the page when entering an already finished game.
The current solution is:
3 tables are designed for the functions mentioned above.
Table A: UserTable -- storing user information, userid
Table B: GameList --storing all the game information.
Related fields:
GameID primary key
FinishedTimes recording how many times the game has been finished
Table C: FinishHistory --storing who and when finished the game
Related fields:
GameID ID of the game
UserID ID of the user
FinishedDate the time when the game was finished
PS: Fields listed above are only related ones, not the complete structure.
Each time the user enters, the program reads Table B (GameList), listing all the available games and the number of times each game has been finished. The user can then choose a desired game to play.
When the user clicks the link and enters a page showing the detailed content of the game, the program reads Table C (FinishHistory) to check whether the user has finished this game before. If yes, a hint message is shown on the page.
When the user finishes the game and submits it, the program again reads Table C (FinishHistory) to check whether the user has finished this game before. If yes, a hint message is shown on the page. If no, the user gets the score.
Existing Problems:
With the increase in games and users, the size of Table C (FinishHistory) grows rapidly. Each time a game is loaded, Table C is read to check, and when a game is submitted, Table C is read to check again. So it is only a question of time before Table C becomes a bottleneck.
Does anyone here have any good suggestions to change or reinvent the structure or design to avoid this bottleneck?
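Before redesigning anything, it may be worth making sure the two checks are cheap single-row lookups rather than scans. A sketch is below, using the table and column names described above; with a narrow index on (UserID, GameID), the "already finished?" test stays an index seek no matter how large FinishHistory grows, and FinishedTimes on GameList can carry the per-game counter so the history table never has to be aggregated on page load.

-- Sketch: make the existence check a seek.
CREATE INDEX IX_FinishHistory_User_Game
    ON FinishHistory (UserID, GameID);

-- Sketch: the check itself; EXISTS stops at the first matching row.
DECLARE @UserID INT = 42, @GameID INT = 7;     -- placeholder values

IF EXISTS (SELECT 1
           FROM   FinishHistory
           WHERE  UserID = @UserID
             AND  GameID = @GameID)
    PRINT 'Already finished - show hint';
ELSE
    PRINT 'Not finished - award score';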
I have a history table of Employee data.
id | EmpNo | EmpName | MobileNo | Email | EmpSSS | UpdateDate | UpdateUser
I have to make a stored procedure that will show the history and changes made for a given EmpNo, with the UpdateDate and UpdateUser, and indicate which field was modified. Ex.: the employee's mobile number was changed from '134151235' to '23523657'.
Result must be:
EmpNo | UpdateDate | UpdateUser | Field changed | Change from | change to
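One way to produce that shape (a sketch, assuming the history table is named EmployeeHistory and SQL Server 2005 or later; extend the UNION ALL for the remaining columns) is to pair each history row with the previous row for the same EmpNo and report each differing column on its own line.

-- Sketch: compare each history row with the previous one per EmpNo and unpivot the changes.
;WITH Ordered AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY EmpNo ORDER BY UpdateDate) AS rn
    FROM   EmployeeHistory                     -- placeholder table name
)
SELECT  cur.EmpNo, cur.UpdateDate, cur.UpdateUser,
        chg.FieldChanged, chg.ChangeFrom, chg.ChangeTo
FROM    Ordered AS cur
JOIN    Ordered AS prv
          ON prv.EmpNo = cur.EmpNo
         AND prv.rn    = cur.rn - 1
CROSS APPLY
(
    SELECT 'EmpName' AS FieldChanged, prv.EmpName AS ChangeFrom, cur.EmpName AS ChangeTo
    WHERE  ISNULL(prv.EmpName, '')  <> ISNULL(cur.EmpName, '')
    UNION ALL
    SELECT 'MobileNo', prv.MobileNo, cur.MobileNo
    WHERE  ISNULL(prv.MobileNo, '') <> ISNULL(cur.MobileNo, '')
    UNION ALL
    SELECT 'Email', prv.Email, cur.Email
    WHERE  ISNULL(prv.Email, '')    <> ISNULL(cur.Email, '')
) AS chg;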
How do I get the products for which the SalesPerson changed? Here is the table with data.
I have an order history table that tracks who worked on an order:
ID | ColA    | ColB | ColC
1  | Process | 1234 |
2  | Work    | 7666 |
3  | Return  | 6789 |
4  | Work    | Null | Role1
5  | Return  | 6538 |
I want a query to return the most recent ColB or ColC where ColA or ColB is not null. In this case, row 4.
I have an order history table that tracks who worked on an order.
Another example
ID | ColA    | ColB | ColC
1  | Work    | 1234 |
2  | Process | 7666 |
3  | Return  | 6789 |
I want a query to return the most recent ColB or ColC where ColB or ColC is not null. In this case, row 2.
I work for a college and have recently been working on our enquiries and applications process (getting it onto our big enrollment db rather than standalone). It has all been going well but now they have asked for a report of students where it has taken more than x days or weeks to progress to the next stage code.
The stage codes basically follow something like: application, guidance interview, programme area interview, conditional/unconditional offer... although they could skip a stage code.
Any ideas how to do this, bearing in mind I can't guarantee they go through every stage? So really I need to look in the history table and find records more than x days apart where one is the next progression date of the other. Hope I explained that OK.
The history table is similar to:
StudentID CourseCode StageDate InputDate Stage
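One way to approach it (a sketch only; StageHistory is a placeholder name for the history table described above, and the threshold is a variable) is to number the stage rows per student and course by date, join each row to the next one, and keep the pairs whose gap exceeds the limit. Because the join is on "next row by date" rather than on specific stage codes, skipped stages do not matter.

-- Sketch: students whose progression to the next recorded stage took more than @days days.
DECLARE @days INT = 14;                          -- placeholder threshold

;WITH Ordered AS
(
    SELECT StudentID, CourseCode, Stage, StageDate,
           ROW_NUMBER() OVER (PARTITION BY StudentID, CourseCode
                              ORDER BY StageDate) AS rn
    FROM   StageHistory                          -- placeholder table name
)
SELECT  cur.StudentID, cur.CourseCode,
        cur.Stage     AS FromStage, nxt.Stage     AS ToStage,
        cur.StageDate AS FromDate,  nxt.StageDate AS ToDate,
        DATEDIFF(DAY, cur.StageDate, nxt.StageDate) AS DaysTaken
FROM    Ordered AS cur
JOIN    Ordered AS nxt
          ON nxt.StudentID  = cur.StudentID
         AND nxt.CourseCode = cur.CourseCode
         AND nxt.rn         = cur.rn + 1
WHERE   DATEDIFF(DAY, cur.StageDate, nxt.StageDate) > @days;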
Hello,
I am using SQL Server 2005 and am having trouble making a history table as mentioned in my earlier thread:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=102811
This is the table "People" I have created:
|PersonId (PK)|DateFrom (PK)|DateTo|PersonName|Other Attributes....
Each change to a person's attributes results in a new row with the same PersonId as the row with the old attributes, and the date from which the new attributes are valid (DateFrom). So, as shown above, the primary key is a combination of PersonId and DateFrom, as a change to a person's attributes should never happen twice at the same time.
My problem is: when I want to create a new person, how do I get a new unique id? Ideally I want a new incremented id, so that all people's ids are in sequential order.
As always, thanks for the help!
I've recently finished an application for a small company with perhaps two hundred employees. Each employee was set up in a Users table in the database, against which application logins were processed.
For just about every other table in the database, other than pure lookup tables, we created columns to indicate the user who created the entry, and the user who last modified the entry. This was done using FK references back to the Users table. Each table contains two references back to the Users table, and there are over 150 tables now that follow this scheme. At first I was not concerned, other than the fact that it makes a visual picture of the data model look very confusing (almost every table has a pair of links back to the Users table), until I encountered an issue where I could no longer delete from the Users table. Upon surpassing 253 FK references to Users, I can no longer delete users, as the Query Optimizer can't complete the query.
Now, all of that so far is really not a big deal. Deleting users was never my intent anyway. The only real question I have is whether this is the standard way of maintaining history for table records. Have others used this method? Is there a better way?
Hi guys,
Actually, I have set up a disaster recovery plan for my database. I am taking a full backup once a week. I don't know why, when I right-click on the job and try the View History option to check when the last backup was taken, it shows nothing, but when I check the actual location the backup was taken there. I don't know why it's not writing any info to the view history table, or whether it is cleared once a week and I can't see it.
Can anyone tell me about this?
I have a table of users including: UserName, Password (computed column), FirstName, LastName, Address and other details...
I have to keep the 10 most recent passwords, so I created another table "ut_Password" (Table 2).
This table contains the following columns: Username, Password, and Password_Date.
I searched a lot but could not find something similar; in my opinion I need a stored procedure for it.
- 10 rows max for password history in Table 2
- When the user changes the password, it needs to be unique and must not appear among the last 10 passwords
- Each user can have a maximum of 10 rows in the password history table
- The oldest password is deleted and replaced with the new password, which is entered with the correct date (FIFO method, first in first out).
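A possible shape for that procedure is sketched below. It assumes the ut_Password columns listed above, that the password value passed in is already in the same (hashed/computed) form that is stored, and a placeholder procedure name; updating the main users table itself is left out.

-- Sketch: enforce "not in the last 10 passwords" and keep at most 10 history rows (FIFO).
CREATE PROCEDURE dbo.usp_ChangePassword        -- placeholder name
    @Username    VARCHAR(50),
    @NewPassword VARCHAR(128)
AS
BEGIN
    SET NOCOUNT ON;

    -- Reject the change if the new password matches any stored history row
    IF EXISTS (SELECT 1
               FROM   ut_Password
               WHERE  Username = @Username
                 AND  Password = @NewPassword)
    BEGIN
        RAISERROR('Password was used recently; choose a different one.', 16, 1);
        RETURN;
    END;

    -- Record the new password
    INSERT INTO ut_Password (Username, Password, Password_Date)
    VALUES (@Username, @NewPassword, GETDATE());

    -- FIFO: keep only the 10 most recent rows for this user
    DELETE h
    FROM   ut_Password AS h
    WHERE  h.Username = @Username
      AND  h.Password_Date NOT IN (SELECT TOP (10) Password_Date
                                   FROM   ut_Password
                                   WHERE  Username = @Username
                                   ORDER BY Password_Date DESC);
END;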
The project has 2 tables: process (parent) and processchild (child).
The project workflow records any changes to these tables as history.
I want to find all the processes that are in status = saved (1) where the processchild is at status = started (1).
Here is an example.
Process table
PK, processid, status , other data
1, 1, 1,...
2, 1, 2,...
3, 2, 1,...
4, 3, 1,...
ProcessChild table
PK, processid, processchildid, status, other data
1, 1, 1, 1,..
2, 1, 1, 2,..
3, 1, 2, 1,...
4, 1, 2, 2,...
5, 2, 1, 1,..
6, 2, 1, 2,...
7, 2, 2, 1,...
8, 3, 1, 1,..
I want to find all the processes where processchildid = 2 and processchild.status = 1.
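A sketch for that last question, using the table and column names above and assuming that the "current" status of a child is the row with the highest PK for that processid/processchildid (since these tables store history rows):

-- Sketch: processes whose child 2 is currently in status 1.
;WITH LatestChild AS
(
    SELECT pc.*,
           ROW_NUMBER() OVER (PARTITION BY pc.processid, pc.processchildid
                              ORDER BY pc.PK DESC) AS rn
    FROM   ProcessChild AS pc
)
SELECT DISTINCT p.processid
FROM   Process AS p
JOIN   LatestChild AS c
         ON c.processid = p.processid
WHERE  c.rn = 1
  AND  c.processchildid = 2
  AND  c.status = 1;

The same ranking trick applied to the Process table gives its current status if the "saved" condition from the first question also has to be checked.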
I have a history table with the following values
CREATE TABLE History (SnapShotDate DATETIME, UID VARCHAR(10), DUEDATE DATETIME)
INSERT INTO History VALUES ('03-23-2015','PT-01','2015-04-22')
INSERT INTO History VALUES ('03-30-2015','PT-01','2015-04-20')
INSERT INTO History VALUES ('04-06-2015','PT-01','2015-06-30')
[Code] ....
I need an output in the format below: the most recent changed value for any given UID, as in the following result.
OUTPUT
UID PreviousDueDate CurrentDueDate
----------------------------------------
PT-01 2015-04-20 2015-06-30
PT-02 2015-04-22 2015-04-22
PT-03 2015-04-18 2015-04-22
PT-04 2015-04-22 2015-04-18
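One interpretation (a sketch against the History table created above): number the snapshots per UID by SnapShotDate descending, then pair the latest snapshot with the one immediately before it.

-- Sketch: current due date and the due date from the previous snapshot, per UID.
;WITH Ranked AS
(
    SELECT UID, DueDate, SnapShotDate,
           ROW_NUMBER() OVER (PARTITION BY UID ORDER BY SnapShotDate DESC) AS rn
    FROM   History
)
SELECT  cur.UID,
        prv.DueDate AS PreviousDueDate,
        cur.DueDate AS CurrentDueDate
FROM    Ranked AS cur
LEFT JOIN Ranked AS prv
          ON prv.UID = cur.UID
         AND prv.rn  = cur.rn + 1
WHERE   cur.rn = 1;

If "most recent changed value" should instead skip snapshots where the due date did not change, rows with an unchanged DueDate would need to be filtered out before the ranking.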
Hi All,
I have a table that holds status history records for cases. In this table is a status field with the values opened, assigned, or complete. Each case can be assigned a number of times before it is complete, and can be reassigned. I need to run a query that will get each case that is still assigned and not yet complete. I wrote a stored procedure with a cursor over each case that gets the last status history record for the case and puts it into a temp table to return to the user, but it is hurting performance, as there are 0.5 million records here. Does anyone know of a better way of doing this?
Thanks in advance : )
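The cursor can usually be replaced with a single set-based statement (a sketch; the table name CaseStatusHistory and the columns CaseID, Status, StatusDate are placeholders for the real schema): rank the history rows per case by date, keep the latest one, and filter on its status.

-- Sketch: the current (latest) status row per case, restricted to cases still assigned.
;WITH Latest AS
(
    SELECT CaseID, Status, StatusDate,
           ROW_NUMBER() OVER (PARTITION BY CaseID ORDER BY StatusDate DESC) AS rn
    FROM   CaseStatusHistory                -- placeholder table name
)
SELECT  CaseID, Status, StatusDate
FROM    Latest
WHERE   rn = 1
  AND   Status = 'assigned';

With an index on (CaseID, StatusDate) that includes Status, this typically performs far better than a per-case cursor over half a million rows.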
Hi everyone,
I have a big table which contains approx. 31,524,044 rows. The structure of the table looks like this:
date ID A B C D
1/1/65 X Null Null Null Null
1/4/65 X 1 2 3 4
...
2/25/05 X 2 3 4 5
1/1/65 Y Null Null Null Null
1/4/65 Y Null Null Null Null
...
2/25/05 Y 2 3 4 5
...
The number of distinct IDs is approx. 3,200 and each one has daily historical A, B, C, and D values going back 40 years. Going forward I need to update the daily information for the 3,200 ids. Currently, running queries against this table is OK. I am thinking that as time passes the table will become huge, since it stores historical information, and it will probably take a "long long long" time to run queries against it. Any suggestions or comments? What is the best/better solution? Or is it not a problem at all?
Thank you everyone for the help.
shiparsons
Given the following data how do I make a SQL query that returns only 1 row per product?
The returned rows need to consist of only currently active products (that is, WHERE DateEffective <= { fn NOW() }).
The twist: sometimes a product will have duplicate DateEffective records. In that case, only return the most recently created record, because that's the most current data that exists for the product. RowTimeStamp is when the record was created.
Example Data:
HistoryID ProductID Name Color DateEffective RowTimeStamp
(auto-number PK)
1 1 Wheel Red 2/1/2008 2/1/2008
2 1 Wheel Blue 3/5/2008 3/1/2008
3 1 Wheel Orange 3/5/2008 3/2/2008
4 1 Wheel Black 1/1/2010 3/3/2008
5 2 Knob Blue 3/2/2008 3/2/2008
6 2 Knob Green 3/3/2008 3/3/2008
Query should return:
3 1 Wheel Orange 3/5/2008 3/2/2008
5 2 Knob Green 3/3/2008 3/3/2008
The query I've created fails on the twist part. I have to allow duplicate DateEffective to keep a history of changes.
Can anyone help?
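A sketch (the table name ProductHistory is a placeholder; the column names are the ones shown above): rank each product's currently effective rows by DateEffective and break ties on RowTimeStamp, then keep the top row per product. This handles the duplicate-DateEffective twist without losing the history rows.

-- Sketch: one row per product - the latest effective row, ties broken by creation time.
;WITH Ranked AS
(
    SELECT HistoryID, ProductID, Name, Color, DateEffective, RowTimeStamp,
           ROW_NUMBER() OVER (PARTITION BY ProductID
                              ORDER BY DateEffective DESC, RowTimeStamp DESC) AS rn
    FROM   ProductHistory                    -- placeholder table name
    WHERE  DateEffective <= GETDATE()        -- only currently active rows
)
SELECT  HistoryID, ProductID, Name, Color, DateEffective, RowTimeStamp
FROM    Ranked
WHERE   rn = 1;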
Let's say you have a Users table, which simply contains a collection of people. Each person has attributes like their password, first and last name, username, etc. It would be useful to track changes to that table over time, so that if an entry is changed, you can simply look at a history table and see which changes occured, and when.
An INSERT/UPDATE trigger works well for this sort of thing, where the trigger inserts all of the 'INSERTED' values into a history table that has basically the same table definition as the Users table. But it gets tricky when considering Deletes.
If my History table entries reference back to the User in the Users table, this means that if I ever want to delete the user, I need to delete all their History first. This means I can't keep records of user deletions, which is significant. The other approach is not to have a foreign key reference in the History table, so that if a user is deleted, I can still keep my History about that user. Including deletes.
I have been timid about doing it this way, since I felt it broke the idea of a well structured database. Is there a better way to approach this? Is there a standard way to set up triggered history to track changes, including deletions, from a table?
Thanks,
-Dan
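The approach described above, with no foreign key from the history table back to Users, is a common and workable one; a sketch is below (all table and column names other than Users are placeholders, and only a few columns are shown). The trigger covers inserts, updates, and deletes, so deletions are preserved in history even after the Users row is gone.

-- Sketch: history table without an FK to Users, populated by a single trigger.
CREATE TABLE dbo.UsersHistory
(
    HistoryId  INT IDENTITY(1,1) PRIMARY KEY,
    UserId     INT          NOT NULL,        -- intentionally no FK back to Users
    UserName   NVARCHAR(50) NULL,
    FirstName  NVARCHAR(50) NULL,
    LastName   NVARCHAR(50) NULL,
    ChangeType CHAR(1)      NOT NULL,        -- 'I', 'U' or 'D'
    ChangedAt  DATETIME     NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER dbo.trg_Users_History
ON dbo.Users
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- New values for inserts and updates
    INSERT INTO dbo.UsersHistory (UserId, UserName, FirstName, LastName, ChangeType)
    SELECT i.UserId, i.UserName, i.FirstName, i.LastName,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM   inserted AS i;

    -- Last known values for pure deletes
    INSERT INTO dbo.UsersHistory (UserId, UserName, FirstName, LastName, ChangeType)
    SELECT d.UserId, d.UserName, d.FirstName, d.LastName, 'D'
    FROM   deleted AS d
    WHERE  NOT EXISTS (SELECT 1 FROM inserted);
END;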
How can I easily identify who dropped a table?
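If the drop happened recently, the default trace (the same data behind the SSMS "Schema Changes History" report) usually still has it, including the login, host, and application that issued the DROP. A sketch (the trace rolls over, so older drops may already have aged out):

-- Sketch: recent object drops from the default trace (EventClass 47 = Object:Deleted).
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT  t.StartTime, t.DatabaseName, t.ObjectName,
        t.LoginName, t.HostName, t.ApplicationName
FROM    sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE   t.EventClass = 47
  AND   t.ObjectType = 8277                 -- user table
ORDER BY t.StartTime DESC;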
The space allocated to the log in question is 180 GB. During this time period I was running TLog backups every 5 minutes, yet the log continued to chew through to 80 GB used, even after the process was complete and a final TLog backup had been taken. It continued to stay very large until the full backup was complete, or something else that I'm unaware of completed. Like every other DBA I typically take a TLog backup to shrink the log, but what appeared to be the case here was that the full backup completed and released the used log space. All said, will transaction log backups not free up the log during full backups?
Hi,
My scenario:
I have a master securities table which has 7 fields. As a part of the daily process, I am uploading flat files into database tables. The flat files contain the master (static) security data as well as the analytics (transaction) data. I need to:
1) separate the master (static) data from the flat files,
2) check whether that data is present in the master table; if not, insert that data into the master table,
3) if the data is present, move the existing record to a history table and then update the main master table.
All the 7 fields need to be checked to uniquely identify a single record in the master table.
How can this be done? Can we use a combination of data flow items, or should we write a SQL procedure to do all this?
Thanks in advance for your help.
Regards,
$wapnil
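Either route works; if the flat file is first bulk-loaded into a staging table, the T-SQL side is roughly the sketch below (all object names are placeholders, and a single SecurityId column stands in for the full set of 7 identifying fields for brevity). In SSIS the same logic maps to a Lookup against the master table plus a conditional split into the insert and update/history paths.

-- Sketch: archive changed master rows, update them, then insert brand-new ones.
BEGIN TRANSACTION;

-- 1) Existing securities whose incoming values differ: copy the current row to history
INSERT INTO dbo.SecurityMasterHistory (SecurityId, Field1, Field2, ArchivedAt)
SELECT m.SecurityId, m.Field1, m.Field2, GETDATE()
FROM   dbo.SecurityMaster AS m
JOIN   dbo.SecurityStage  AS s ON s.SecurityId = m.SecurityId
WHERE  m.Field1 <> s.Field1 OR m.Field2 <> s.Field2;

-- 2) ...then bring the master row up to date
UPDATE m
SET    m.Field1 = s.Field1,
       m.Field2 = s.Field2
FROM   dbo.SecurityMaster AS m
JOIN   dbo.SecurityStage  AS s ON s.SecurityId = m.SecurityId
WHERE  m.Field1 <> s.Field1 OR m.Field2 <> s.Field2;

-- 3) Securities not yet in the master table: insert them
INSERT INTO dbo.SecurityMaster (SecurityId, Field1, Field2)
SELECT s.SecurityId, s.Field1, s.Field2
FROM   dbo.SecurityStage AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.SecurityMaster AS m
                   WHERE  m.SecurityId = s.SecurityId);

COMMIT;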