If only col2 or col3 are updated, there is no need to log those changes to the audit table; we can achieve this with UPDATE(col2) or UPDATE(col3) checks. But I have to log the changes if any of the other columns (col1, col4, or col5) changed along with col2 or col3.
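A minimal sketch of that logic in an AFTER UPDATE trigger, assuming a base table dbo.MyTable and an audit table dbo.MyTable_Audit (both placeholder names):

CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Log only when col1, col4 or col5 is among the updated columns;
    -- updates touching just col2/col3 fall through without logging.
    IF UPDATE(col1) OR UPDATE(col4) OR UPDATE(col5)
    BEGIN
        INSERT INTO dbo.MyTable_Audit (col1, col2, col3, col4, col5, AuditDate)
        SELECT i.col1, i.col2, i.col3, i.col4, i.col5, GETDATE()
        FROM inserted i;
    END
END

One caveat: UPDATE(colX) is true whenever the column appears in the SET clause, even if the value did not actually change; compare inserted against deleted if you need value-level detection.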
...
Is there any RDBMS feature that, whenever I do a DML operation on any table, tells me how many rows were inserted/updated/deleted in that particular table? I don't want to write a trigger to get that information, because the table count is nearly 142.
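One trigger-free option (SQL 2005 and later) is the index operational-stats DMV, which counts leaf-level row operations per table since the last service restart. A sketch, with the caveats that the counters are approximate, are not persisted across restarts, and deletes may first show up as ghost records:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.leaf_insert_count) AS rows_inserted,
       SUM(s.leaf_update_count) AS rows_updated,
       SUM(s.leaf_delete_count + s.leaf_ghost_count) AS rows_deleted
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE i.index_id IN (0, 1)   -- heap or clustered index = base-table rows
GROUP BY s.object_id
ORDER BY table_name;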
I have a table, and the data in this table (for no rhyme or reason) is being deleted somehow. I'm looking for suggestions on how to audit this table and find out who or what process could be deleting my data.
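If adding a trigger to the table is acceptable, a quick sketch that records who and what issued each delete (the table and log names are placeholders):

CREATE TABLE dbo.MyTable_DeleteLog (
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    DeletedAt datetime      NOT NULL DEFAULT GETDATE(),
    LoginName sysname       NOT NULL DEFAULT SUSER_SNAME(),
    HostName  nvarchar(128) NULL     DEFAULT HOST_NAME(),
    AppName   nvarchar(128) NULL     DEFAULT APP_NAME(),
    RowsGone  int           NOT NULL
);
GO
CREATE TRIGGER trg_MyTable_AuditDelete ON dbo.MyTable
AFTER DELETE
AS
    INSERT INTO dbo.MyTable_DeleteLog (RowsGone)
    SELECT COUNT(*) FROM deleted;

HOST_NAME() and APP_NAME() usually narrow the offending process down quickly; a server-side trace filtered on the table is the no-schema-change alternative.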
Hello, I am more of a reporting person; recently I was asked to create stored procedures for an upcoming ASP.NET application. We have a problem that we are facing, and any suggestion would be very helpful.
The problem is that we have about 8 different tables, each with 10 to 15 columns. The front-end application has pages with Save, Update, and Delete buttons which insert, update, and delete against each of the 8 tables.
They want a way to update an audit table which stores information like:

Date | User | Table | Column name | Previous value | New value

So for each row that is updated in those 8 tables, each changed column gets the above fields written as a row of audit data.
Initially we thought about triggers, but that would be something like 60 triggers... Is there a better or other way of handling this?
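One way to keep it manageable: a single shared audit table, plus one trigger per table whose body is generated from sys.columns with dynamic SQL rather than written by hand. A sketch of the audit table and of one generated UPDATE-trigger body, assuming a hypothetical dbo.Orders table keyed on OrderID:

CREATE TABLE dbo.AuditLog (
    AuditDate  datetime DEFAULT GETDATE(),
    UserName   sysname  DEFAULT SUSER_SNAME(),
    TableName  sysname,
    ColumnName sysname,
    OldValue   varchar(255),
    NewValue   varchar(255)
);
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- One INSERT...SELECT per audited column, emitted by the generator:
    INSERT INTO dbo.AuditLog (TableName, ColumnName, OldValue, NewValue)
    SELECT 'Orders', 'Status',
           CONVERT(varchar(255), d.Status), CONVERT(varchar(255), i.Status)
    FROM inserted i
    JOIN deleted  d ON d.OrderID = i.OrderID
    WHERE ISNULL(CONVERT(varchar(255), i.Status), '')
       <> ISNULL(CONVERT(varchar(255), d.Status), '');
END

The same generator covers INSERT and DELETE triggers by treating the missing side as NULL, and a schema change then means regenerating, not rewriting.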
I am using the SCD Wizard and it is working nicely. Can someone point me to an article/tutorial that would explain how you could create an "audit trail" on the items that may have been changed (Type 1 and Type 2)?
Basically, what I want to be able to do is run a query that tells me what data may have changed. I figured I would have to create an auditkey field in my table which would then link the key to the change detail?
I have more than 1100 tables in my database. Data is inserted into the database tables from an existing application. I need to track, module-wise, which tables data is being inserted into or updated in.
One way is to write triggers for each table (1100 tables with auditing), but in my case it is not feasible to write them like that.
Is there any other solution to find the updated/inserted tables when data is changed from the application, without making bulk changes?
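If the server is SQL 2008 or later, Change Tracking does exactly this without triggers; enabling it per table is still 1100 statements, but they can be generated from sys.tables. A sketch (database and table names are placeholders, and each tracked table needs a primary key):

ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.SomeTable ENABLE CHANGE_TRACKING;

-- Later: which rows changed since version 0, and how
SELECT ct.SYS_CHANGE_OPERATION,   -- I / U / D
       ct.SYS_CHANGE_VERSION,
       ct.ID                      -- the table's PK column(s)
FROM CHANGETABLE(CHANGES dbo.SomeTable, 0) AS ct;

On 2005, a cheaper approximation is polling sys.dm_db_index_usage_stats, whose last_user_update column tells you when each table was last written to.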
I'm a VB programmer creating apps to write/edit/delete data to a SQL Server 2000 database. For HIPAA requirements, I need to track all changes to all the tables in our database. I'm looking for the easiest and cheapest solution. I have no experience writing SQL Server 2000 triggers and stored procedures. I have found the following applications which might do what I need to do:

Upscene: MSSQL Log Manager - Price $125 - http://www.upscene.com
Krell Software: OmniAudit - Price $399 - http://www.krell-software.com
Apex SQL Software: Apex SQL Audit - Price $599 - http://www.apexsql.com
LogPI: LogPI - Price $825 - http://www.logpi.com
Lumigent: Entegra for SQL Server - Price ??? - http://www.lumigent.com

Any comments or suggestions appreciated.

Gregory S. Moy
Information Processing Consultant
EpiSense Research Program
Department of Ophthalmology & Visual Sciences
University of Wisconsin - Madison
Hello, I'm creating an audit table and associated triggers to be able to capture any updates and deletes from various tables in the database. I know how to capture the records that have been updated or deleted, but is there any way that I can cycle through a changed record, look at the old vs new values, and capture only the values that have changed?

To give you a better idea of what I'm trying to do: instead of creating a copy of the original table (some tables have many fields) and creating a whole record if a single bit field has been changed, I'd like to only capture the change in a single audit table that will have the following fields:

AuditID int IDENTITY(1,1)
TableName varchar(100)
FieldName varchar(100)
OldValue varchar(255)
NewValue varchar(255)
AuditDate datetime DEFAULT(GetDate())

Any direction would be greatly appreciated. Thanks!
Rick
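One pattern that matches that audit-table shape, sketched against a hypothetical dbo.Customers table keyed on CustomerID with columns Name and Active (dbo.AuditTable stands in for your audit table): cast every audited column to varchar(255), UNPIVOT both the inserted and deleted pseudo-tables into (key, FieldName, FieldValue) rows, and keep only the pairs that differ. Requires SQL 2005+, and note UNPIVOT silently drops NULL values, so NULL-to-value changes need extra handling:

CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    WITH NewVals AS (
        SELECT CustomerID, FieldName, FieldValue
        FROM (SELECT CustomerID,
                     CONVERT(varchar(255), Name)   AS Name,
                     CONVERT(varchar(255), Active) AS Active
              FROM inserted) s
        UNPIVOT (FieldValue FOR FieldName IN (Name, Active)) u
    ), OldVals AS (
        SELECT CustomerID, FieldName, FieldValue
        FROM (SELECT CustomerID,
                     CONVERT(varchar(255), Name)   AS Name,
                     CONVERT(varchar(255), Active) AS Active
              FROM deleted) s
        UNPIVOT (FieldValue FOR FieldName IN (Name, Active)) u
    )
    INSERT INTO dbo.AuditTable (TableName, FieldName, OldValue, NewValue)
    SELECT 'Customers', n.FieldName, o.FieldValue, n.FieldValue
    FROM NewVals n
    JOIN OldVals o ON o.CustomerID = n.CustomerID
                  AND o.FieldName  = n.FieldName
    WHERE n.FieldValue <> o.FieldValue;
END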
Hi all, I am not overly familiar with SQL; I am a VB programmer. I simply need to achieve the following within Enterprise Manager.
I have 2 tables with different designs and different numbers of rows. I simply need to check whether the contents of a column in the first table are in a column in the second table: just a simple table/column to table/column check for the same data content.
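With placeholder names Table1.ColA and Table2.ColB, the check is a pair of EXISTS queries:

-- values in Table1.ColA that DO appear in Table2.ColB
SELECT DISTINCT t1.ColA
FROM dbo.Table1 t1
WHERE EXISTS (SELECT 1 FROM dbo.Table2 t2 WHERE t2.ColB = t1.ColA);

-- values in Table1.ColA with NO match in Table2.ColB
SELECT DISTINCT t1.ColA
FROM dbo.Table1 t1
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table2 t2 WHERE t2.ColB = t1.ColA);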
Easy Peasy for you guys, any help would be appreciated.
We have several applications that work with product catalog data. Data is entered and maintained, searched, and reported on. We're using CSLA business objects to create our biz objects, our front-end apps are ASP.NET pages and web services, and SQL 2k5 is our database. Currently all data access is done in factory methods in our business objects using SQL stored procedures and UDFs.
We want to start storing auditing and statistics data on our product searches. In SQL 2k we were using SQL Profiler to capture data and storing the information in tables, but it really wasn't very flexible and was difficult to maintain. What we want to do is: every time someone submits a search, we store the criteria and the results; every time someone edits a product, we save the old record. This will allow us to provide historical and statistical reporting to our users.
In our old system the search results table was at about 3 million records. And since we've moved to a web based application we're hoping to save this information asynchronously so our search results or postbacks are not held up by saving this audit data. We were talking about writing logic into our biz objects code but it all seemed a bit slow and difficult to do asynchronously. Then I read a couple posts suggesting Service Broker.
Now we're considering either writing triggers on our tables or adding code to our factory stored procedures to send messages to Service Broker that would save the data into our audit tables but not hold up our business processes. We would be saving to the same database on the same server, but different tables.
Does Service Broker seem like it could be the right tool for this job? There looks to be a bit of a learning curve, and before I jump in I'm looking for some advice or direction.
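For what it's worth, a minimal same-database Service Broker sketch (all object names invented; the database also needs ALTER DATABASE ... SET ENABLE_BROKER once):

CREATE MESSAGE TYPE AuditMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT AuditContract (AuditMessage SENT BY INITIATOR);
CREATE QUEUE AuditSendQueue;
CREATE QUEUE AuditWriteQueue;
CREATE SERVICE AuditSender ON QUEUE AuditSendQueue (AuditContract);
CREATE SERVICE AuditWriter ON QUEUE AuditWriteQueue (AuditContract);
GO
-- Inside a factory proc or trigger: queue the payload and return immediately.
DECLARE @h uniqueidentifier, @payload xml;
SET @payload = (SELECT 'widgets' AS Criteria, 42 AS ResultCount FOR XML PATH('Search'));
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE AuditSender
    TO SERVICE 'AuditWriter'
    ON CONTRACT AuditContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE AuditMessage (@payload);

An activation procedure attached to AuditWriteQueue then RECEIVEs the messages and inserts into the audit tables on its own transaction, so the user's search or postback never waits on the audit write.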
I am trying to import this year's worth of failed logins and the last successful login for each user out of the logs using master.dbo.xp_readerrorlog. The script essentially loops through the linked servers I have on my DBA box and reaches out for the log data. It works, but here is the error I am getting on most of our production servers:
OLE DB provider "SQLNCLI11" for linked server "AWSCADENCEDB01" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Line 17 The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "AWSCADENCEDB01" was unable to begin a distributed transaction.
I know how to enable distributed transactions on the servers that error out, but if it is not needed for anything other than my audit script, I doubt the business will approve me turning on distributed transactions at those locations (so I am not even going to ask).
I am attempting to set up a single audit .rdl with the information I want to review quarterly.
CREATE PROC [dbo].[Import_Login_Data]
AS
IF EXISTS ( SELECT 1
            FROM master.sys.servers
            WHERE is_linked = 1
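One way to sidestep MSDTC entirely, assuming the loop builds a statement per linked server: keep the INSERT local and make the remote side a plain pass-through read via OPENQUERY, which does not promote to a distributed transaction. A hypothetical sketch (the target column list and the parameters of xp_readerrorlog vary a bit by version, so treat this as a shape, not gospel):

INSERT INTO dbo.FailedLogins (LogDate, ProcessInfo, LogText)
SELECT *
FROM OPENQUERY(AWSCADENCEDB01,
    'EXEC master.dbo.xp_readerrorlog 0, 1, N''Login failed''');

OPENQUERY needs the remote call to return a single result set; if metadata discovery complains on newer versions, running the EXEC remotely into a remote staging table and SELECTing from that is the fallback.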
If the source has a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data through temporary storage. If the table is big, such an approach can cause issues.
An example of the script is below; in the source project I added the columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
    [MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data, and whether the PK is clustered or nonclustered.
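In my experience this particular rebuild usually comes from the new columns sitting in the middle of the column list; publishing with the IgnoreColumnOrder property set (passed as /p:IgnoreColumnOrder=true on the SqlPackage.exe command line, or set in the publish profile) makes SSDT append them instead, and the incremental script collapses to a plain ALTER, roughly (data types assumed from the surrounding columns):

ALTER TABLE [dbo].[MyTable]
    ADD [MyColumn_LINE_1] CHAR (3) NULL,
        [MyColumn_LINE_5] CHAR (3) NULL;

Columns added NOT NULL still need a default, or the rebuild comes back for tables with data.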
I have a table partitioning requirement. We have 10 years of data in a table of upwards of 30 billion rows on a 2005 server, and we are upgrading it to 2014. We have to keep 7 years of data. There are no keys on the table and no date column. Since it's a huge amount of data with many users, it slows down processing. We are thinking of partitioning the 7 years quarterly, but as I said there is no date column on the table; we have to use a reference table to get the date. Is there a way I can do the partitioning without adding a date column to the table? Also, will partitioning make queries faster?
I have thought of three ways to do it: 1. leave it as it is; 2. partition the 7 years on one server; 3. partition 3 years on server1 and 4 years on server2 (for those 4 years, is a snapshot better?).
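Note the partitioning column has to be physically in the table (a computed column cannot look up the reference table), so the realistic options are adding a date/quarter key during the 2014 migration, or partitioning on an existing ever-increasing key whose per-quarter boundary values you derive once from the reference table. A sketch of the latter, with invented names and boundaries:

CREATE PARTITION FUNCTION pfQuarter (bigint)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000 /* ...one per quarter, 28 in all */);

CREATE PARTITION SCHEME psQuarter
AS PARTITION pfQuarter ALL TO ([PRIMARY]);

-- Rebuild the table onto the scheme (creates the clustered index partitioned):
CREATE CLUSTERED INDEX cix_BigTable ON dbo.BigTable (ID) ON psQuarter (ID);

As for speed: partitioning mostly buys maintenance (switching or truncating old quarters instead of deleting billions of rows); queries only get faster when they filter on the partitioning column so partitions can be eliminated.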
I am studying indexes and keys. I have a table where a fixed width of data is loaded into the first column, which is then parsed in a view based on the data types within the fixed-width specification.
Example: column A contains the whole fixed-width string (name, phone, house, cost of house, zipcode, county, state, country), which a view will later split; column B is the source filename of the data load (varchar(256)).
a. Would there be a benefit to adding a clustered or nonclustered index (if so, which, and a pointer on why)?
b. Is there a benefit to making one of these two columns a primary key (millions of records), or to adding a 3rd new column as a PK?
c. The view parses the data in column A, so it ends up looking more like name, phone, house, cost of house, zipcode, county, state, country, each in its own column.
- Are there any pros/cons of adding indexes (if so, which) to the view instead of the table, or to both, once the data is parsed?
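A common shape for this kind of staging table, as a sketch (all names hypothetical): cluster on a narrow identity rather than on the wide varchar, and index the filename column only if you filter by it.

-- Narrow, ever-increasing clustered key keeps inserts cheap:
ALTER TABLE dbo.RawLoad ADD RowID bigint IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.RawLoad ADD CONSTRAINT PK_RawLoad PRIMARY KEY CLUSTERED (RowID);

-- Only worth it if queries filter on the source file:
CREATE NONCLUSTERED INDEX IX_RawLoad_SourceFile ON dbo.RawLoad (SourceFile);

Indexing the view is a separate decision: it requires SCHEMABINDING and a unique clustered index, materializes the parsed columns (faster reads), and taxes every load (slower writes).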
I am trying to create a table that holds info about a user, with the usual columns for firstName, lastName, etc. No problem creating the table or its columns, but how can I "restrict" the values of the State column in the 'users' table so that it only accepts values from the 'states' table?
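A foreign key constraint is the standard answer; sketched with an assumed stateCode primary key on the states table:

CREATE TABLE dbo.states (
    stateCode char(2)     NOT NULL PRIMARY KEY,
    stateName varchar(50) NOT NULL
);

CREATE TABLE dbo.users (
    userID    int IDENTITY(1,1) PRIMARY KEY,
    firstName varchar(50) NOT NULL,
    lastName  varchar(50) NOT NULL,
    state     char(2)     NOT NULL
        CONSTRAINT FK_users_states REFERENCES dbo.states (stateCode)
);

Any INSERT or UPDATE on users.state that doesn't match a stateCode in states is then rejected by the engine.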
Hi, I am having a problem bulk-updating a SQL Server table that has an identity column from a DataTable (which has no identity column) using SqlBulkCopy. I tried several approaches, but it shows no error, nor is the table getting updated; yet the identity value seems to increase every time. Thanks. Varun
Hi all, I have agentID in the product table. Now I have added an agentID column to the transaction table, and I want to copy all agentIDs from the product table to the transaction table based on the order_id in both tables. Can you show me an example? Thanks, Betty
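Assuming order_id identifies the matching rows, T-SQL's UPDATE...FROM form does it (transaction is a reserved word, hence the brackets):

UPDATE t
SET t.agentID = p.agentID
FROM dbo.[transaction] t
JOIN dbo.product p ON p.order_id = t.order_id;

If order_id is not unique in product, decide first which product row should win, or the result is arbitrary.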
I created a PDF that contains information from a few different tables using VB.Net (i.e., name, date, etc.). Once the PDF is created I store it in a new SQL table (Form_001) in the varbinary(max) column. My issue now is to populate the other columns in Form_001 with the data from the other tables. Is it possible to populate the columns, even though the names are different, with data from the other tables?
Okay, after I got everything imported, I found that a few thousand columns had "shifted" on me. So now I am trying to shift them over to where they need to be.
I did this:

INSERT INTO dbo.TABLENAME (COLUMN_NAME_TO_BE_POPULATED)
SELECT COLUMN_NAME_OF_INFO_TO_BE_MOVED
FROM dbo.TABLENAME
WHERE MFG = 'MANUFACTURER_NAME'
AND PN LIKE '8888888888'
GO
I populated the PN column with 8888888888 to use as a reference point, and the MFG column already was populated with the correct name, so I was using those as my unique reference points.
Other columns that have the same MFG name are correct, so I have to use two unique identifiers to specify the actual data that needs to be moved.
It inserted NULL into the whole table after I ran it in Query Analyzer, and it appeared to disregard the WHERE clause altogether. Any ideas on what I am doing wrong here?
Is there a way to move the data from one column to the next one over by specifying other WHERE criteria?
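The catch is that INSERT always creates new rows, so the statement above appended a few thousand fresh rows whose other columns are NULL; the WHERE clause wasn't ignored, it just picked which values got copied into those new rows. Shifting values inside existing rows is an UPDATE, reusing the same two reference points:

UPDATE dbo.TABLENAME
SET COLUMN_NAME_TO_BE_POPULATED = COLUMN_NAME_OF_INFO_TO_BE_MOVED
WHERE MFG = 'MANUFACTURER_NAME'
  AND PN LIKE '8888888888';

The same pattern answers the follow-up: any WHERE criteria that pin down the rows will do, and you can SET several columns in one statement to shift a whole block over.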
I need to find a table and column name from some given data. I know what data i want to edit, I just need to know where it is located and the database is too big to manually go through. It is Microsoft sql server 2000. Any help is appreciated.
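On SQL 2000 you can walk INFORMATION_SCHEMA.COLUMNS with dynamic SQL; a rough sketch for a string value (it scans everything, so run it off-hours, and note it checks views too unless you also join to INFORMATION_SCHEMA.TABLES):

DECLARE @search varchar(100)
SET @search = 'value to find'
DECLARE @tbl sysname, @col sysname, @sql nvarchar(4000)
DECLARE c CURSOR FOR
    SELECT TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE IN ('char','varchar','nchar','nvarchar')
OPEN c
FETCH NEXT FROM c INTO @tbl, @col
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Print table.column wherever the value is found
    SET @sql = N'IF EXISTS (SELECT 1 FROM [' + @tbl + N'] WHERE ['
             + @col + N'] = ''' + @search + N''') PRINT ''' + @tbl + N'.' + @col + N''''
    EXEC sp_executesql @sql
    FETCH NEXT FROM c INTO @tbl, @col
END
CLOSE c
DEALLOCATE c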
I have a table where the designer allowed time reporting for 8 different types of activity on each timecard, then stored the hours by activity in 8 separate columns in the database. Example:
Table A column headers: Employee | Date | Phase1Hours | Phase2Hours | ...
Data: Fred Jones | 7/15/13 | 3 | 3 | ...
The problem is there is no way based on this structure to get an employee's hours for the day in columnar form.
To get this data into columnar form I have used Select queries with Union All, for instance:
SELECT Employee, Date, Phase1 FROM TableA
UNION ALL
SELECT Employee, Date, Phase2 FROM TableA
, etc.
The query runs slowly (I am guessing because the same table is repeatedly being scanned).
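A single-scan alternative on SQL 2005+ is UNPIVOT, which reads the table once instead of once per phase (column names taken from the example above, table name assumed):

SELECT Employee, [Date], PhaseName, Hours
FROM dbo.TableA
UNPIVOT (Hours FOR PhaseName IN
    (Phase1Hours, Phase2Hours /* ...through Phase8Hours */)) AS u;

UNPIVOT also drops NULL entries, which for hours-per-phase is usually exactly what you want.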
What would be the most efficient method of constructing a SQL statement to retrieve certain records into (this gets tricky) rows of a table with 3 columns (date, desc)?
How do I modify/change data that is in a SQL table column? Here is what I need/have:
I have 2 tables, Table 1 and Table 2. Within those 2 tables there is one column that has the same heading as well as the same data within (I will call it the serial_number column). There is also a 2nd column that is labeled the same in both tables. I would like to be able to update the values in the 2nd column of Table 1 with the values from column 3 of Table 2, using the serial_number column as my matching reference between the 2; the data is not in the same order in the 2 tables.
More or less, I need to set the value of Table 1 / column 2 to match Table 2 / column 3 where the serial_numbers are the same in both tables' serial_number columns. The data being changed is a smallint...
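Row order doesn't matter to a join, so with placeholder object names the usual shape is:

UPDATE t1
SET t1.col2 = t2.col3
FROM dbo.Table1 t1
JOIN dbo.Table2 t2 ON t2.serial_number = t1.serial_number;

One thing to verify first: serial_number should be unique in Table2, otherwise which of the duplicate rows supplies col3 is arbitrary.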
I do an insert of the XML into the table and that works fine, but how do I split the tags out to different tables? I have tried SSIS with an XML Source to an OLE DB destination, but since the XML file contains different groups, that does not work.
The XML is created by InfoPath, and it seems like the groups are created when the components belong to different sections.
Table1 (id int, xmltag xml)
I am starting to be desperate, I really need some help solving this one way or another.
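For the splitting itself, the XML nodes()/value() methods work once the XML is in the table; a sketch against made-up element names, since the actual InfoPath schema isn't shown:

-- Suppose xmltag holds: <myFields><section1><item name="a" qty="1"/>...</section1>...</myFields>
INSERT INTO dbo.Section1Items (SourceID, ItemName, Qty)
SELECT t.id,
       x.n.value('@name', 'varchar(50)'),
       x.n.value('@qty',  'int')
FROM dbo.Table1 t
CROSS APPLY t.xmltag.nodes('/myFields/section1/item') AS x(n);
-- Repeat with a different path and target table per section/group.

Because each section gets its own path expression, the different groups that trip up the SSIS XML Source stop being a problem.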