I'm fairly new to triggers and stored procedures, and so far I haven't had to pass any variables or even call a stored procedure from a trigger. The problem I have is passing a 'callid' variable from an insert trigger to a stored procedure.
The SP is to delete any rows that already exist that contain the passed 'callid' and then insert the new inserted data (using variables again hopefully).
Is this at all possible, and if so, what is the syntax for passing the variable from the trigger and then reading it into the stored procedure?
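For reference, a minimal sketch of the pattern in question, assuming a hypothetical source table dbo.Calls, a summary table dbo.CallSummary, and a procedure dbo.usp_RefreshCall (none of these names are from the original post):

CREATE PROCEDURE dbo.usp_RefreshCall
    @callid int,
    @callinfo varchar(100)
AS
BEGIN
    -- delete any rows that already exist for this callid, then insert the new values
    DELETE FROM dbo.CallSummary WHERE callid = @callid
    INSERT INTO dbo.CallSummary (callid, callinfo) VALUES (@callid, @callinfo)
END
GO

CREATE TRIGGER trg_Calls_insert ON dbo.Calls
FOR INSERT
AS
BEGIN
    -- read the values from the inserted pseudo-table into local variables and pass them on
    -- (this assumes single-row inserts; multi-row inserts need set-based logic or a cursor)
    DECLARE @callid int, @callinfo varchar(100)
    SELECT @callid = callid, @callinfo = callinfo FROM inserted
    EXEC dbo.usp_RefreshCall @callid, @callinfo
END
GO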
I posted about this issue previously but had no success, so I thought I'd explain everything properly this time in a new post. Thanks.
I have created a stored procedure which passes variables to the SSIS package and then executes the package. The two variables inside the SSIS package are @FileName and @ConnectionPath. As you can see from the stored procedure below, xp_cmdshell is used to execute the package. If only the first variable is used in the package and the @ConnectionPath variable is hardcoded inside the package, then the package runs fine. The problem is with this particular call, as you can see below, because @ConnectionPath is included.
Error: 2007-08-08 08:46:01.29 Code: 0xC0202009 Source: BaseProfiles2 Connection manager "CounterpartyExposure" Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft OLE DB Provider for ODBC Drivers" Hresult: 0x80004005 Description: "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified". End Error
If only the generated output is run in Query Analyzer, then the error is:
The identifier that starts with 'Data Source=gblond088sjyMSQL_curves_DEV1, 2025;Initial Catalog=CounterpartyExposure;Provider=SQLNCLI.1;Integrated Security=SSPI' is too long. Maximum length is 128.
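For what it's worth, the "identifier ... is too long" message usually means a double-quoted string is being parsed as an identifier (QUOTED_IDENTIFIER is ON by default), so the generated command text probably wraps the connection string in double quotes at the T-SQL level rather than inside the command string. A hedged sketch only; the real procedure is not shown above, and the package path and variable names here are invented, so the exact quoting may need adjusting for dtexec:

-- placeholder values; the connection string is the one quoted in the error above
DECLARE @FileName varchar(260), @ConnectionPath varchar(500), @cmd varchar(2000)
SET @FileName = 'MyDataFile.xls'
SET @ConnectionPath = 'Data Source=gblond088sjyMSQL_curves_DEV1, 2025;Initial Catalog=CounterpartyExposure;Provider=SQLNCLI.1;Integrated Security=SSPI'
-- keep any double quotes inside the single-quoted T-SQL string, so they reach dtexec
-- instead of being parsed by SQL Server as delimited identifiers
SET @cmd = 'dtexec /F "C:\Packages\BaseProfiles2.dtsx"'
         + ' /SET "\Package.Variables[User::FileName].Properties[Value];' + @FileName + '"'
         + ' /SET "\Package.Variables[User::ConnectionPath].Properties[Value];' + @ConnectionPath + '"'
EXEC master.dbo.xp_cmdshell @cmd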
I am trying to pass variables to a statement that grabs data from a table in one database and puts it into the same table in another database, based on an evaluation like a WHERE clause, but it's not working. What am I doing wrong?
Here is the code:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[ClientComment]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
Declare @BDFR varchar(20), @BDTO varchar(20), @EQID varchar(20), @TABLEDESC varchar(20), @DBO varchar(20)
set @TABLEDESC = 'ClientComment'
set @DBO = '.dbo.'
set @BDFR = 'Commander' + @DBO + @TABLEDESC
set @BDTO = 'Test_Commander' + @DBO + @TABLEDESC
set @EQID = '80_300_113'
insert into @BDTO select from @BDFR where Eqid = @EQID
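A minimal sketch of the dynamic SQL this needs: table names cannot be supplied through variables in a plain INSERT ... SELECT, so the whole statement has to be built as a string and EXECuted. The sketch also widens the variables (since 'Commander.dbo.ClientComment' does not fit in varchar(20)) and adds the missing * to the SELECT:

Declare @BDFR varchar(100), @BDTO varchar(100), @EQID varchar(20),
        @TABLEDESC varchar(20), @DBO varchar(10), @SQL varchar(500)
set @TABLEDESC = 'ClientComment'
set @DBO = '.dbo.'
set @BDFR = 'Commander' + @DBO + @TABLEDESC
set @BDTO = 'Test_Commander' + @DBO + @TABLEDESC
set @EQID = '80_300_113'
-- build the statement as a string because table names cannot be variables directly
set @SQL = 'insert into ' + @BDTO + ' select * from ' + @BDFR
         + ' where Eqid = ''' + @EQID + ''''
exec (@SQL)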
I have a project that uses GUIDs throughout and I'm completely stumped.
1) I create a "batch" GUID to batch the records I'm about to process.
2) I call a web service on a remote machine, and reserve the batch records by inserting the batch GUID into a string ---works fine
3) I call another web service that returns the rows I just reserved as XML objects and inserts them into a string variable.
4) I need to use the "batch" GUID variable, which is typed as a string (DT_WSTR), as an added column, so in a Data Flow Task I do the following:
a) use the XML string variable as the source of an XML Source Task -- works (now that I'm passing custom objects and not a dataset -- curious as to why I can't consume a dataset, but that's a different question)
This is where things get tricky:
I've tried to add the BatchGUID as a derived column, as a datatype of DT_WSTR (unicode string) and convert it to uniqueidentifier in the Data Conversion task --- error, cannot convert unicode to uniqueidentifier (I know I can in C# and SQL Server so why not here).
I've tried to CAST the BatchGUID as a uniqueidentifier and pass that to the datasource -- again conversion error.
I've tried using the type Object and Casting to anything and that doesn't work either.
I've tried to pass the unicode all the way to the SQL Server Destination -- and insert into UniqueIdentifier field... again no go.
All help would be appreciated, at this point I can't see any way of using a UniqueIdentifier as a key field, and maintaining it through the package... Is this a bug?
Oh and if you want to have some real fun, try returning the type of UniqueIdentifier as an output parameter using a ADO.NET connection.
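One hedged workaround, given that the conversion itself is fine on the SQL Server side: keep the batch GUID as a DT_WSTR column all the way through the pipeline and let SQL Server do the conversion at insert time, for example with an OLE DB Command in place of the SQL Server Destination. Table and column names below are invented:

-- OLE DB Command statement; the two ? markers map to the pipeline's BatchGUID column and the data column
INSERT INTO dbo.BatchRecords (BatchId, Payload)
VALUES (CONVERT(uniqueidentifier, ?), ?)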
UpdateCommand="UPDATE PsnQualifications SET quantity=@quantity WHERE rowstatus=1 and qualId=@original_qualId"
DeleteCommand="UPDATE PsnQualifications SET rowStatus=0, lastUpdateOn=getdate(), lastUpdateBy=@createdBy WHERE rowstatus=1 and qualId=@original_qualId"
--- Not sure if this should be moved to webforms forum, or if it belongs here ---
Alright, I have been dealing with this issue for a few days now, and have found a few solutions but they all seem to throw different errors so I figured I'd ask here.
What I am trying to do is have a webform where users enter data, have that data passed across forms, and then have it displayed and inserted into a database on another form. The first form has an asp:RangeValidator control that is built dynamically, so I cannot simply take off the tags and use the old style.
Eventually the user will be directed to a paypal form, and upon successful completion be redirected to the page with the insert command within it, but for now, passing it to a second page for review, then inserting it will work.
I am not sure how to accomplish this; a tutorial or a code example would be great! I have thought about panels, creating public objects, etc., but all the solutions I have found have one issue or another when I attempt to create them.
I want to pass command-line values into a stored procedure run through osql. The command-line values are the values to pass to the insert stored procedure, which writes them to a table.
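A minimal sketch of the kind of command line this usually looks like, assuming a hypothetical procedure dbo.usp_InsertValues; osql does no substitution itself, so the values are simply concatenated into the -Q query text by the calling batch file:

REM hypothetical batch file: %1 and %2 are the command-line values passed in
osql -E -S MyServer -d MyDatabase -Q "EXEC dbo.usp_InsertValues @Value1 = '%1', @Value2 = '%2'"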
Hi, I want to pass the query given below into a variable:
"if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[ <POS_MONTH>]]') and OBJECTPROPERTY(id, N'IsUserTable') = 1) DROP TABLE [dbo].[ <POS_DATE>] GO
SELECT * INTO [dbo].[<POS_DATE>] ] FROM SG_POS_Template WHERE 1 = 0; GO" where [<POS_DATE>] is a parameter whose value will be assigned dynamically... can anybody please help me out?
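A hedged sketch of how this can be built and executed with the table name in a variable (the sample value is invented). Note that GO is a batch separator understood by the client tools, not by T-SQL, so it cannot appear inside the string; run the two statements as separate EXECs instead:

DECLARE @POS_DATE sysname, @sql nvarchar(1000)
SET @POS_DATE = 'POS_20070801'   -- assigned dynamically elsewhere; placeholder value
SET @sql = N'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].[' + @POS_DATE + N']'') and OBJECTPROPERTY(id, N''IsUserTable'') = 1) DROP TABLE [dbo].[' + @POS_DATE + N']'
EXEC (@sql)
SET @sql = N'SELECT * INTO [dbo].[' + @POS_DATE + N'] FROM SG_POS_Template WHERE 1 = 0'
EXEC (@sql)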
The application developers came up with an interesting requirement that I'd like some help with.
They want to be able to somehow pass a parameter to the trigger that is fired when you do something to a table. The parameter would control what logic is executed within the trigger.
The two solutions that I proposed, but which they found unacceptable, were:
1. Have the parameter as a column within the table itself.
2. Don't use UPDATE, DELETE, and INSERT directly, but instead use a stored procedure.
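One more option that sometimes satisfies this kind of requirement (a hedged sketch only; whether it is acceptable to the developers is another matter) is to smuggle the parameter in through CONTEXT_INFO, which the trigger can read back. Table name and flag value below are placeholders:

-- caller: tag the session before issuing the DML
DECLARE @ctx varbinary(128)
SET @ctx = CAST('SKIP_AUDIT' AS varbinary(128))
SET CONTEXT_INFO @ctx
UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE Id = 42

-- inside the trigger: read the tag back and branch on it
-- (CONTEXT_INFO() requires SQL Server 2005; on SQL Server 2000 the same value can be read
--  from the context_info column in master.dbo.sysprocesses WHERE spid = @@SPID)
IF SUBSTRING(CONTEXT_INFO(), 1, 10) = CAST('SKIP_AUDIT' AS varbinary(10))
BEGIN
    -- alternate logic for calls that set the flag
    RETURN
END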
Can somebody please help me with compiling my insert trigger below? I am fairly new to SQL Server 2000 and I am having trouble using variables in insert triggers. The trigger that I am creating will basically update another table based on a certain criterion that is not specified below. I am hoping to first get my trigger to work and then apply the criterion for when to fire afterwards. I just need help with being able to store values in my declared variables for insert into another table. Thanks in advance for everyone's help.
Use database_testing
IF EXISTS (SELECT name FROM sysobjects WHERE type = 'TR' AND name = 'Trigger_Name') DROP TRIGGER Trigger_Name GO
CREATE TRIGGER Trigger_Name ON [trigger_table] FOR INSERT AS Declare @resource_id int = inserted.resource @type = varchar(100) = inserted.type @date_logged (datetime) = inserted.creation_date @created varchar(100) = inserted.username
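A hedged sketch of how the variable assignments can be written so the trigger compiles: values come out of the inserted pseudo-table via SELECT rather than direct references, and in SQL Server 2000 DECLARE cannot assign a value on the same line. The target table and its columns below are placeholders; also note that inserted can contain more than one row, so a fully set-based INSERT ... SELECT from inserted is usually the safer pattern:

CREATE TRIGGER Trigger_Name ON [trigger_table]
FOR INSERT
AS
BEGIN
    DECLARE @resource_id int,
            @type varchar(100),
            @date_logged datetime,
            @created varchar(100)

    -- pull the values out of the inserted pseudo-table (single-row assumption)
    SELECT @resource_id = resource,
           @type        = type,
           @date_logged = creation_date,
           @created     = username
    FROM inserted

    -- example use of the variables; other_table and its columns are placeholders
    INSERT INTO other_table (resource_id, type, date_logged, created_by)
    VALUES (@resource_id, @type, @date_logged, @created)
END
GO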
Friends, I would just like to know why SQL Server doesn't allow us to define a text data type local variable while creating a trigger. I tried creating a text variable in a trigger as a local variable and it raises the error: "Implicit conversion from data type text to nvarchar is not allowed. Use the CONVERT function to run this query". For this I have to use the CONVERT function in MS SQL Server. - Thanks, Bhavin Vyas
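For reference, a minimal sketch of the workaround mentioned above, assuming a hypothetical table dbo.Notes with a text column Body: since text/ntext/image cannot be declared as local variables, declare a (n)varchar and CONVERT into it instead.

CREATE TRIGGER trg_Notes_ins ON dbo.Notes
FOR INSERT
AS
BEGIN
    -- text cannot be a local variable, so copy into a varchar
    -- (8000 is the SQL Server 2000 limit; varchar(max) is available from SQL Server 2005 onwards)
    DECLARE @body varchar(8000)
    SELECT @body = CONVERT(varchar(8000), Body) FROM inserted
    -- ... work with @body here ...
END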
I am trying to get the contents of Excel files dynamically and dump them into a SQL database using SSIS. Through a WMI Event Watcher I can detect when one or more Excel files are dropped into a particular folder, and using a ForEach Loop Container I am able to take all the filenames and pass them through variables. But at the same time, in the Data Flow, I have to pass each sheet of an Excel file to the Excel Source component and export the data to my SQL database using an OLE DB Destination.
For that I need to get the name of each sheet in an Excel file and pass it to the Excel Source component through variables. But when I set the Data Access Mode to "Table name or view name variable" and provide the variable name there, it gives the error message "A destination table name has not been provided".
At the same time, since I am not able to provide a static filename (as I am passing it through variables), when I try to map the columns in the OLE DB Destination it does not allow me to map the columns.
All of this should happen at run time using variables in SSIS; I don't want to hard-code any filenames or sheet names. If anyone has a solution, please share it with me.
I'm new to SSIS, but have been programming in SQL and ASP.Net for several years. In Visual Studio 2005 Team Edition I've created an SSIS that imports data from a flat file into the database. The original process worked, but did not check the creation date of the import file. I've been asked to add logic that will check that date and verify that it's more recent than a value stored in the database before the import process executes.
Here are the task steps.
[Execute SQL Task] - Run a stored procedure that checks to see if the import is running. If so, stop execution. Otherwise, proceed to the next step.
[Execute SQL Task] - Log an entry to a table indicating that the import has started.
[Script Task] - Get the create date for the current flat file via the reference provided in the file connection manager. Assign that date to a global value (FileCreateDate) and pass it to the next step. This works.
[Execute SQL Task] - Compare this file date with the last file create date in the database. This is where the process breaks. This step depends on 2 variables defined at a global level. The first is FileCreateDate, which gets set in step 3. The second is a global variable named IsNewFile. That variable needs to be set in this step based on what the stored procedure this step calls finds out on the database. Precedence constraints direct behavior to the next proper node according to the TRUE/FALSE setting of IsNewFile.
If IsNewFile is FALSE, direct the process to a step that enters a log entry to a table and conclude execution of the SSIS.
If IsNewFile is TRUE, proceed with the import. There are 5 other subsequent steps that follow this decision, but since those work they are not relevant to this post. Here is the stored procedure that Step 4 is calling. You can see that I experimented with using and not using the OUTPUT option. I really don't care if it returns the value as an OUTPUT or as a field in a recordset. All I care about is getting that value back from the stored procedure so this node in the decision tree can point the flow in the correct direction.
The SSIS package passes the FileCreateDate parameter to this procedure, which then compares that parameter with the date saved in tbl_ImportFileCreateDate.
If the date is newer (or if there is no date), it updates the field in that table and returns a TRUE IsNewFile bit value in a recordset.
Otherwise it returns a FALSE value in the IsNewFile column.
SELECT @CreateDateInTable = FileCreateDate FROM tbl_ImportFileCreateDate WHERE ProcessName = @ProcessName
IF EXISTS (SELECT ProcessName FROM tbl_ImportFileCreateDate WHERE ProcessName = @ProcessName)
BEGIN
-- The process exists in tbl_ImportFileCreateDate. Compare the create dates.
IF (@FileCreateDate > @CreateDateInTable)
BEGIN
-- This is a newer file date. Update the table and set @IsNewFile to TRUE.
UPDATE tbl_ImportFileCreateDate
SET FileCreateDate = @FileCreateDate
WHERE ProcessName = @ProcessName
SET @IsNewFile = 1
END
ELSE
BEGIN
-- The file date is the same or older.
SET @IsNewFile = 0
END
END
ELSE
BEGIN
-- This is a new process for tbl_ImportFileCreateDate. Add a record to that table and set @IsNewFile to TRUE.
INSERT INTO tbl_ImportFileCreateDate (ProcessName, FileCreateDate)
VALUES (@ProcessName, @FileCreateDate)
SET @IsNewFile = 1
END
SELECT @IsNewFile
The relevant global variables in the package are defined as follows:
Name : Scope : Data Type : Value
FileCreateDate : (Package Name) : DateTime : 1/1/2000
IsNewFile : (Package Name) : Boolean : False
Setting the properties in the "Execute SQL Task Editor" has been the difficult part of this. Here are the settings.
General
Name = Compare Last File Create Date
Description = Compares the create date of the current file with a value in tbl_ImportFileCreateDate.
TimeOut = 0
CodePage = 1252
ResultSet = None
ConnectionType = OLE DB
Connection = MyServerDataBase
SQLSourceType = Direct input
IsQueryStoredProcedure = False
BypassPrepare = True
I tried several SQL statements, suspecting it's a syntax issue. All of these failed, but with different error messages. These are the 2 most recent attempts, based on posts I was able to locate:
SQLStatement = exec ? = dbo.p_CheckImportFileCreateDate 'GL Account Import', ?, ? output
SQLStatement = exec p_CheckImportFileCreateDate 'GL Account Import', ?, ? output
Parameter Mapping
Variable Name = User::FileCreateDate, Direction = Input, DataType = DATE, Parameter Name = 0, Parameter Size = -1
Variable Name = User::IsNewFile, Direction = Output, DataType = BYTE, Parameter Name = 1, Parameter Size = -1
Result Set is empty. Expressions is empty.
When I run this in debug mode with this SQL statement ... exec ? = dbo.p_CheckImportFileCreateDate 'GL Account Import', ?, ? output ... the following error message appears.
SSIS package "MyPackage.dtsx" starting. Information: 0x4004300A at Import data from flat file to tbl_GLImport, DTS.Pipeline: Validation phase is beginning.
Error: 0xC002F210 at Compare Last File Create Date, Execute SQL Task: Executing the query "exec ? = dbo.p_CheckImportFileCreateDate 'GL Account Import', ?, ? output" failed with the following error: "No value given for one or more required parameters.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Task failed: Compare Last File Create Date
Warning: 0x80019002 at GLImport: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "MyPackage.dtsx" finished: Failure.
When the above is run tbl_ImportFileCreateDate does not get updated, so it's failing at some point when calling the procedure.
When I run this in debug mode with this SQL statement ... exec p_CheckImportFileCreateDate 'GL Account Import', ?, ? output ... the tbl_ImportFileCreateDate table gets updated. So I know that data piece is working, but then it fails with the following message.
SSIS package "MyPackage.dtsx" starting. Information: 0x4004300A at Import data from flat file to tbl_GLImport, DTS.Pipeline: Validation phase is beginning.
Error: 0xC001F009 at GLImport: The type of the value being assigned to variable "User::IsNewFile" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.
Error: 0xC002F210 at Compare Last File Create Date, Execute SQL Task: Executing the query "exec p_CheckImportFileCreateDate 'GL Account Import', ?, ? output" failed with the following error: "The type of the value being assigned to variable "User::IsNewFile" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. ". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. Task failed: Compare Last File Create Date
Warning: 0x80019002 at GLImport: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "MyPackage.dtsx" finished: Failure.
The IsNewFile global variable is scoped at the package level and has a Boolean data type, and the Output parameter in the stored procedure is defined as a Bit. So what gives?
The "Possible Failure Reasons" message is so generic that it's been useless to me. And I've been unable to find any examples online that explain how to do what I'm attempting. This would seem to be a very common task. My suspicion is that one or more of the settings in that Execute SQL Task node is bad. Or that there is some cryptic, undocumented reason that this is failing.
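Two hedged observations rather than a confirmed fix. First, the statement exec ? = dbo.p_CheckImportFileCreateDate 'GL Account Import', ?, ? output contains three parameter markers (the return value plus two ?), while the Parameter Mapping only defines two, which lines up with the "No value given for one or more required parameters" error; the second statement drops the return-value marker, so the counts match and the call gets far enough to update the table. Second, since the procedure already ends with SELECT @IsNewFile, one way to sidestep the Boolean/bit output-parameter clash is to stop mapping an output parameter and read the flag back as a single-row result set, something like:

-- hedged sketch of an alternative SQLStatement for the Execute SQL Task:
-- ResultSet = "Single row", Result Set mapping: Result Name 0 -> User::IsNewFile,
-- and only the input parameter (User::FileCreateDate) left in Parameter Mapping
DECLARE @IsNewFile bit
EXEC dbo.p_CheckImportFileCreateDate 'GL Account Import', ?, @IsNewFile OUTPUT
SELECT @IsNewFile AS IsNewFile

If the result-set assignment still complains about the Boolean type, a common fallback is to make User::IsNewFile an Int32 (returning 1 or 0) and test @IsNewFile == 1 in the precedence-constraint expressions instead of TRUE/FALSE.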
I have an Execute SQL Task that returns a Full Rowset from a SQL Server table and assigns it to a variable objRecs. I connect that to a foreach container with an ADO enumerator using objRecs variable and Rows in first table mode. I defined variables and mapped them to the columns.
I tested this by placing a Script task inside the foreach container and displaying the variables in a messagebox.
Now, for each row, I want to write a record to an MS Access table and then update a column back in the original SQL Server table from which I retrieved the data in the Execute SQL Task (I have the primary key). If I drop a Data Flow Task inside my foreach container, how do I pass the variables as input to an OLE DB Destination on the Data Flow?
Also, how would I update the original source table where source.id = objRecs.id?
Thank you for your assistance. I have spent the day trying to figure this out (and thought it would be simple), but I am just not getting SSIS. Sorry if this has been covered.
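One hedged alternative for the write-back part: since the foreach loop already holds the values in package variables, a second Execute SQL Task inside the loop can do the per-row work with a parameterized statement (OLE DB connections use ? markers), so no Data Flow is needed for the SQL Server side; the MS Access insert can be done the same way through its own connection manager. A sketch with invented column names:

-- Execute SQL Task inside the foreach container; map the User:: variables to the ? markers in order
UPDATE dbo.SourceTable
SET ProcessedFlag = 1,      -- placeholder column to update
    ProcessedDate = GETDATE()
WHERE id = ?                -- mapped to the variable carrying objRecs' id column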
I am trying to update one table when records are inserted in another table.
I have added the following trigger to the table “ProdTr” and every time a record is added I want to update the field “Qty3” in the table “ActInf” with a value from the inserted record.
My problem appears to be that I am unable to fill the variables with values, and I cannot understand why it isn't working. My code is:
ALTER trigger [dbo].[antall_liter] on [dbo].[ProdTr] for insert as begin declare @liter as decimal(28,6)
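A hedged sketch of a set-based version; the ProdTr column names and the join key to ActInf are guesses, since they are not shown above. Assigning scalar variables from inserted only works for single-row inserts, which may be why the variable appears to stay empty:

ALTER trigger [dbo].[antall_liter] on [dbo].[ProdTr]
for insert
as
begin
    -- update Qty3 on every ActInf row matched by the newly inserted ProdTr rows;
    -- ProdNo and Liter are placeholder column names
    update a
    set a.Qty3 = a.Qty3 + i.Liter
    from dbo.ActInf a
        inner join inserted i on i.ProdNo = a.ProdNo
end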
We already use Oracle DataStage Server with the following query statements for the source and the lookup; there is parameter mapping in the SQL statement. How can I achieve the following query statements in SSIS?
Query 1: (source View Query) SELECT V_RDP_GOLD_PRICE.GDR_PRODUCT_ID, V_RDP_GOLD_PRICE.ASSET_TYPE, V_RDP_GOLD_PRICE.PREFERENCE_SEQ, V_RDP_GOLD_PRICE.RDP_PRICE_SOURCE, TO_CHAR(V_RDP_GOLD_PRICE.PRICE_DATE_TIME,'YYYY-MM-DD HH24:MI:SS'), TO_CHAR(V_RDP_GOLD_PRICE.REPORT_DATE,'YYYY-MM-DD HH24:MI:SS'), V_RDP_GOLD_PRICE.SOURCE_SYSTEM_ID FROM V_RDP_GOLD_PRICE V_RDP_GOLD_PRICE WHERE REPORT_DATE = (select max(report_date) from V_RDP_GOLD_PRICE where source_system_id = 'RM' )
Query 2: (look up )
SELECT GDR_PRODUCT_ID, TO_CHAR(MAX(PRICE_DATE_TIME),'YYYY-MM-DD HH24:MI:SS') , TO_CHAR(REPORT_DATE,'YYYY-MM-DD HH24:MI:SS') FROM V_RDP_GOLD_PRICE where GDR_PRODUCT_ID = :1 and report_date = TO_DATE(:2,'YYYY-MM-DD HH24:MI:SS') AND PRICE_DATE_TIME BETWEEN TO_DATE(:2,'YYYY-MM-DD HH24:MI:SS') - 7) AND TO_DATE(:2,'YYYY-MM-DD HH24:MI:SS') GROUP BY GDR_PRODUCT_ID, TO_CHAR(REPORT_DATE,'YYYY-MM-DD HH24:MI:SS')
Please, can anyone give a sample control flow and show how to pass the parameters?
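Not a full answer, but as a hedged starting point: in SSIS the OLE DB parameter style is ? rather than Oracle's :1/:2 bind variables, and a per-row parameterized query is typically done either with a Lookup in partial-cache mode (custom query with parameters) or with an OLE DB Command transform. The markers are positional, so the value that Query 2 reuses as :2 has to be mapped once per occurrence. A sketch of the lookup query rewritten with ? markers, mapped to the GDR_PRODUCT_ID and REPORT_DATE pipeline columns:

SELECT GDR_PRODUCT_ID,
       TO_CHAR(MAX(PRICE_DATE_TIME),'YYYY-MM-DD HH24:MI:SS'),
       TO_CHAR(REPORT_DATE,'YYYY-MM-DD HH24:MI:SS')
FROM   V_RDP_GOLD_PRICE
WHERE  GDR_PRODUCT_ID = ?
  AND  report_date = TO_DATE(?,'YYYY-MM-DD HH24:MI:SS')
  AND  PRICE_DATE_TIME BETWEEN (TO_DATE(?,'YYYY-MM-DD HH24:MI:SS') - 7)
                           AND TO_DATE(?,'YYYY-MM-DD HH24:MI:SS')
GROUP BY GDR_PRODUCT_ID, TO_CHAR(REPORT_DATE,'YYYY-MM-DD HH24:MI:SS')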
I am not comfortable with DTS 2000, but I need to execute an encapsulated DTS 2000 package from an SSIS package. The real problem is when I need to pass SSIS variables to the DTS 2000 package. The DTS 2000 package has 3 global variables that I can identify on "Execute DTS 2000 Package Task Editor - Inner Variables". I believe the SSIS variables must be mapped on "Execute DTS 2000 Package Task Editor - Outer Variables". How can I associate the SSIS variables (Outer Variables) with the Inner Variables? Many thanks.
I'm new to this whole SQL Server 2005 thing as well as database design and I've read up on various ways I can integrate business constraints into my database. I'm not sure which way applies to me, but I could use a helping hand in the right direction.
A quick explanation of the various tables I'm dealing with:
WBS - the Work Breakdown Structure, for example: A - Widget 1, AA - Widget 1 Subsystem 1, etc.
Impacts - the Risk or Opportunity impacts for the weights of a part/assembly. (See "Assemblies have Impacts" below.)
Allocations - the review of the product in question, say Widget 1, in terms of various weight totals, including all parts. Example - September allocation, Initial Demo allocation, etc. Mostly used for weight history and trending.
Parts - there are hundreds of Parts, which will eventually lead to thousands. Each part has a WBS element. [Seems redundant, but parts are managed in-house, and WBS elements are cross-company and issued by the Government.]
Parts have Allocations - for weight history and trending (see Allocations). For example, Nut 17 can have a September 1st allocation, a September 5th allocation, etc.
Assemblies - Parts are assemblies by themselves and can belong to multiple assemblies. There can be multiple parts on a product, say an unmanned ground vehicle (UGV), and those parts can belong to a higher "assembly" [for example, there can be 3 Nut 17's (lower assembly) on Widget 1 Subsystem 2 (higher assembly) and 4 more on Widget 1 Subsystem 5, etc.]. What I'm concerned about is ensuring that the weight roll-ups are accurate for all of the assemblies.
Assemblies have Impacts - there is a risk and opportunity impact setup modeled into this design to allow a risk or opportunity to be marked at a per-assembly level. That's all this table represents.
A part is allocated a weight and then assigned to an assembly. The Assemblies table holds this hierarchical information - the lower assembly and the higher one, both of which are Parts entries in the [Parts have Allocations] table.
Therefore, to ensure proper weight roll-ups in the [Parts have Allocations] table on a per-part basis, I would like to check for any inserts, updates, or deletes on both the [Parts have Allocations] table and the [Assemblies] table and then re-calculate the weight roll-up for every assembly. Now, I'm not sure if this is a huge performance hog, but I do need to keep all the information as up-to-date and as accurate as possible. As such, I'm not sure which method is even correct, although it seems an AFTER DML trigger is in order (from what I've gathered thus far). Keep in mind, this trigger needs to go through and check every WBS or Part, then go through and check all of its associated assemblies, and then ensure the weights are correct by re-summing the weights listed.
If you need the design or create script (table layout), please let me know.
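To make the trigger idea concrete, here is a very rough sketch of the AFTER trigger shape described above. Apart from the [Parts have Allocations] and Assemblies table names, every column name here (PartId, HigherAssemblyId, LowerAssemblyId, Weight, RolledUpWeight) is a placeholder, the roll-up rule would need to follow the real schema, and a matching trigger would be needed on [Parts have Allocations] as well:

CREATE TRIGGER trg_Assemblies_RollUp
ON dbo.Assemblies
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON
    -- recalculate the rolled-up weight for every higher assembly touched by this statement
    UPDATE pa
    SET pa.RolledUpWeight = (SELECT SUM(child.Weight)
                             FROM dbo.Assemblies a
                                  INNER JOIN dbo.[Parts have Allocations] child
                                      ON child.PartId = a.LowerAssemblyId
                             WHERE a.HigherAssemblyId = pa.PartId)
    FROM dbo.[Parts have Allocations] pa
    WHERE pa.PartId IN (SELECT HigherAssemblyId FROM inserted
                        UNION
                        SELECT HigherAssemblyId FROM deleted)
END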
Are there any limitations or gotchas to updating the same table which fired a trigger from within the trigger? Some example code below. Hmmm... this example seems to be working fine, so it must be something with my specific schema/code. We're working on running a SQL trace, but if anybody has any input, fire away. Thanks!

create table x (Id int, Account varchar(25), Info int)
GO
insert into x values (1, 'Smith', 15);
insert into x values (2, 'SmithX', 25);

/* Update trigger tu_x for table x */
create trigger tu_x
on x
for update
as
begin
    declare @TriggerRowCount int
    set @TriggerRowCount = @@ROWCOUNT
    if (@TriggerRowCount = 0)
        return
    if (@TriggerRowCount > 1)
    begin
        raiserror('tu_x: @@ROWCOUNT[%d] Trigger does not handle @@ROWCOUNT > 1 !', 17, 127, @TriggerRowCount) with seterror, nowait
        return
    end
    update x
    set Account = left(i.Account, 24) + 'X',
        Info = i.Info
    from deleted, inserted i
    where x.Account = left(deleted.Account, 24) + 'X'
end

update x set Account = 'Blair', Info = 999 where Account = 'Smith'
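On the "shouldn't the trigger already know which row invoked it" question: there is no special syntax for that; a trigger always sees the affected rows through the inserted and deleted pseudo-tables, and the usual pattern is a join back to the base table on its key so only those rows are touched. A minimal sketch against the example table above (whether this fits the Account + 'X' logic is a separate question, and note that an UPDATE trigger updating its own table only re-fires itself if the database's recursive triggers option is on, which it is not by default):

create trigger tu_x on x
for update
as
begin
    -- join on the key column so only the rows from this statement are updated
    update x
    set Account = left(i.Account, 24) + 'X',
        Info = i.Info
    from x
        inner join inserted i on x.Id = i.Id
end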
This audit trigger is generic (i.e. not table-specific): attach it to any table and it should work. Be sure to create the 'Audit' table first, though.
The following code writes audit entries to a table called 'Audit' with the columns: 'ActionType' //varchar, 'TableName' //varchar, 'PK' //varchar, 'FieldName' //varchar, 'OldValue' //varchar, 'NewValue' //varchar, 'ChangeDateTime' //datetime, 'ChangedBy' //varchar
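For reference, a possible shape for that Audit table, inferred from the column list above (the varchar lengths are guesses and may need widening, especially for OldValue and NewValue):

CREATE TABLE dbo.Audit
(
    ActionType      varchar(1)    NULL,   -- 'I', 'U' or 'D'
    TableName       varchar(128)  NULL,
    PK              varchar(1000) NULL,
    FieldName       varchar(128)  NULL,
    OldValue        varchar(4000) NULL,
    NewValue        varchar(4000) NULL,
    ChangeDateTime  datetime      NULL,
    ChangedBy       varchar(128)  NULL
)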
using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class Triggers
{
    //A Generic Trigger for Insert, Update and Delete Actions on any Table
    [Microsoft.SqlServer.Server.SqlTrigger(Name = "AuditTrigger", Event = "FOR INSERT, UPDATE, DELETE")]
    public static void AuditTrigger()
    {
        SqlTriggerContext tcontext = SqlContext.TriggerContext; //Trigger Context
        string TName;     //Where we store the Altered Table's Name
        string User;      //Where we will store the Database Username
        DataRow iRow;     //DataRow to hold the inserted values
        DataRow dRow;     //DataRow to hold the deleted/overwritten values
        DataRow aRow;     //Audit DataRow to build our Audit entry with
        string PKString;  //Will temporarily store the Primary Key Column Names and Values here
        using (SqlConnection conn = new SqlConnection("context connection=true")) //Our Connection
        {
            conn.Open(); //Open the Connection
            //Build the AuditAdapter and matching Table
            SqlDataAdapter AuditAdapter = new SqlDataAdapter("SELECT * FROM Audit WHERE 1=0", conn);
            DataTable AuditTable = new DataTable();
            AuditAdapter.FillSchema(AuditTable, SchemaType.Source);
            SqlCommandBuilder AuditCommandBuilder = new SqlCommandBuilder(AuditAdapter); //Populates the Insert command for us
            //Get the inserted values
            SqlDataAdapter Loader = new SqlDataAdapter("SELECT * from INSERTED", conn);
            DataTable inserted = new DataTable();
            Loader.Fill(inserted);
            //Get the deleted and/or overwritten values
            Loader.SelectCommand.CommandText = "SELECT * from DELETED";
            DataTable deleted = new DataTable();
            Loader.Fill(deleted);
            //Retrieve the Name of the Table that currently has a lock from the executing command (i.e. the one that caused this trigger to fire)
            SqlCommand cmd = new SqlCommand("SELECT object_name(resource_associated_entity_id) FROM sys.dm_tran_locks WHERE request_session_id = @@spid and resource_type = 'OBJECT'", conn);
            TName = cmd.ExecuteScalar().ToString();
            //Retrieve the UserName of the current Database User
            SqlCommand curUserCommand = new SqlCommand("SELECT system_user", conn);
            User = curUserCommand.ExecuteScalar().ToString();
            //Adapted the following command from a T-SQL audit trigger by Nigel Rivett
            //http://www.nigelrivett.net/AuditTrailTrigger.html
            SqlDataAdapter PKTableAdapter = new SqlDataAdapter(@"SELECT c.COLUMN_NAME from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c where pk.TABLE_NAME = '" + TName + @"' and CONSTRAINT_TYPE = 'PRIMARY KEY' and c.TABLE_NAME = pk.TABLE_NAME and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME", conn);
            DataTable PKTable = new DataTable();
            PKTableAdapter.Fill(PKTable);
            switch (tcontext.TriggerAction) //Switch on the Action occurring on the Table
            {
                case TriggerAction.Update:
                    iRow = inserted.Rows[0]; //Get the inserted values in row form
                    dRow = deleted.Rows[0];  //Get the overwritten values in row form
                    PKString = PKStringBuilder(PKTable, iRow); //Get the Primary Keys and their values as a string
                    foreach (DataColumn column in inserted.Columns) //Walk through all possible Table Columns
                    {
                        if (!iRow[column.Ordinal].Equals(dRow[column.Ordinal])) //If value changed
                        {
                            //Build an Audit Entry
                            aRow = AuditTable.NewRow();
                            aRow["ActionType"] = "U"; //U for Update
                            aRow["TableName"] = TName;
                            aRow["PK"] = PKString;
                            aRow["FieldName"] = column.ColumnName;
                            aRow["OldValue"] = dRow[column.Ordinal].ToString();
                            aRow["NewValue"] = iRow[column.Ordinal].ToString();
                            aRow["ChangeDateTime"] = DateTime.Now.ToString();
                            aRow["ChangedBy"] = User;
                            AuditTable.Rows.InsertAt(aRow, 0); //Insert the entry
                        }
                    }
                    break;
                case TriggerAction.Insert:
                    iRow = inserted.Rows[0];
                    PKString = PKStringBuilder(PKTable, iRow);
                    foreach (DataColumn column in inserted.Columns)
                    {
                        //Build an Audit Entry
                        aRow = AuditTable.NewRow();
                        aRow["ActionType"] = "I"; //I for Insert
                        aRow["TableName"] = TName;
                        aRow["PK"] = PKString;
                        aRow["FieldName"] = column.ColumnName;
                        aRow["OldValue"] = null;
                        aRow["NewValue"] = iRow[column.Ordinal].ToString();
                        aRow["ChangeDateTime"] = DateTime.Now.ToString();
                        aRow["ChangedBy"] = User;
                        AuditTable.Rows.InsertAt(aRow, 0); //Insert the Entry
                    }
                    break;
                case TriggerAction.Delete:
                    dRow = deleted.Rows[0];
                    PKString = PKStringBuilder(PKTable, dRow);
                    foreach (DataColumn column in inserted.Columns)
                    {
                        //Build an Audit Entry
                        aRow = AuditTable.NewRow();
                        aRow["ActionType"] = "D"; //D for Delete
                        aRow["TableName"] = TName;
                        aRow["PK"] = PKString;
                        aRow["FieldName"] = column.ColumnName;
                        aRow["OldValue"] = dRow[column.Ordinal].ToString();
                        aRow["NewValue"] = null;
                        aRow["ChangeDateTime"] = DateTime.Now.ToString();
                        aRow["ChangedBy"] = User;
                        AuditTable.Rows.InsertAt(aRow, 0); //Insert the Entry
                    }
                    break;
                default:
                    //Do Nothing
                    break;
            }
            AuditAdapter.Update(AuditTable); //Write all Audit Entries back to AuditTable
            conn.Close(); //Close the Connection
        }
    }
    //Helper function that takes a Table of the Primary Key Column Names and the modified row's Values
    //and builds a string of the form "<PKColumn1Name=Value1>,<PKColumn2Name=Value2>,..."
    public static string PKStringBuilder(DataTable primaryKeysTable, DataRow valuesDataRow)
    {
        string temp = String.Empty;
        foreach (DataRow kColumn in primaryKeysTable.Rows) //for all Primary Keys of the Table that is being changed
        {
            temp = String.Concat(temp, String.Concat("<", kColumn[0].ToString(), "=", valuesDataRow[kColumn[0].ToString()].ToString(), ">,"));
        }
        return temp;
    }
}
The trick was getting the Table Name and the Primary Key Columns. I hope this code is found useful.
I want to be able to create a trigger that updates table 2 when a row is inserted into table 1. However, I'm not sure how to increment the ID in table 2 or how to update only the row that has been inserted.
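A hedged sketch of the usual shape (table and column names are placeholders): joining to the inserted pseudo-table limits the work to the row(s) just inserted, and if table 2's ID column is an IDENTITY it increments by itself. If the intent is to update existing rows in table 2 rather than insert, the same join to inserted applies to an UPDATE.

CREATE TRIGGER trg_table1_insert ON dbo.table1
FOR INSERT
AS
BEGIN
    -- one table2 row per newly inserted table1 row; table2.ID is assumed to be an IDENTITY column
    INSERT INTO dbo.table2 (table1_id, created_on)
    SELECT i.id, GETDATE()
    FROM inserted i
END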
I want to be able to create a trigger so that when a row is inserted into table A by a specific user then the ID will appear in table B. Is it possible to find out the login id of the user inserting a row?
I believe the trigger should look something like this:
create trigger test_trigger on a for insert as insert into b(ID)
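For the login question: SUSER_SNAME() (or SYSTEM_USER) returns the login of the session performing the insert, so the trigger can filter on it. A hedged completion of the snippet above, with a placeholder login name:

create trigger test_trigger on a
for insert
as
begin
    -- copy the ID across only when the insert was performed by the specific login
    insert into b (ID)
    select i.ID
    from inserted i
    where suser_sname() = 'DOMAIN\SpecificUser'
end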
When a row gets modified and it invokes a trigger, we would like to be able to update the row that was modified inside the trigger. This is (basically) how we are doing it now:

CREATE TRIGGER trTBL ON TBL
FOR UPDATE, INSERT, DELETE
as
update TBL
set fld = 'value'
from inserted, TBL
where inserted.id = TBL.id
....

This works fine but it seems like it could be optimized. Clearly we are having to scan the entire table again to update the row. But shouldn't the trigger already know which row invoked it? Do we have to scan the table again for this row, or is there some syntax that allows us to update the row that invoked the trigger? If not, why? It seems like this would be a fairly common task. Thanks.
Hi, I need to disable a trigger on SQL Server and I am not able to do it either with Query Analyzer or with Enterprise Manager. I remember this job was quite simple using TOAD in Oracle. I want to disable it, not delete it, so I can re-enable it when needed without re-running the creation script. Thanks a lot to everybody.
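For reference, a trigger can be disabled and re-enabled without dropping it; the table and trigger names below are placeholders:

-- disable the trigger
ALTER TABLE dbo.MyTable DISABLE TRIGGER trg_MyTrigger
-- re-enable it later without re-running the creation script
ALTER TABLE dbo.MyTable ENABLE TRIGGER trg_MyTrigger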
Hi, I am not sure if this is the right forum to post this question. I run an update statement like "UPDATE mytable SET status = 'S'" in SQL Server 2005 Management Studio. When I run "SELECT * FROM mytable", for a few seconds all status values are 'S'; after a few seconds they all turn to 'H'. This is the behaviour you would expect with an update trigger on the table, but I don't see any triggers under this table. What else would cause the database to change my update automatically? Could there be any other place I should look for an update trigger on this table? Thanks.
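One hedged way to double-check for triggers on the table from T-SQL rather than the object tree (a disabled or differently-owned trigger can be easy to miss):

-- lists all triggers on the table
EXEC sp_helptrigger 'dbo.mytable'

-- SQL Server 2005 catalog view, including the disabled flag
SELECT t.name, t.is_disabled
FROM sys.triggers t
WHERE t.parent_id = OBJECT_ID('dbo.mytable')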