I have a pretty massive conditional trigger. If there is another way of going about this, please let me know. I'm populating a temp table with records and, based on many conditions, transforming that data into another table in a corrected format. The conditions I use reference the final table in many ways, and the trigger seems to become slower and slower as the final table grows.
I have a trigger that updates values in a table when col2 is updated by a stored procedure.
I'd like to make this a conditional trigger to prevent negative values from being inserted into the table. I'm not familiar enough with trigger syntax to get this to work, and I've searched for a reference with no luck.
Here's the current trigger code:
Code:
FOR INSERT, UPDATE AS UPDATE table1 SET col4= col1 - col2 - col3
I need to evaluate (col1 - col2 - col3) to see if it's less than zero. If it is, I want to set col4 = 0.
This is what I've tried, but it doesn't work. Any help is appreciated:
Code:
FOR INSERT, UPDATE AS select col1, col2, col3 from table1 if (col1 - col2 - col3) < 0 UPDATE table1 SET col4 = 0 else UPDATE table1 SET col4= col1 - col2 - col3
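A sketch of one way to get the conditional behaviour; this assumes table1 has a key column (called id here, which is a guess) so the update can be limited to the rows just changed, and uses a CASE expression to clamp the value at zero:

CREATE TRIGGER trg_table1_col4 ON table1
FOR INSERT, UPDATE
AS
UPDATE t
SET col4 = CASE
               WHEN t.col1 - t.col2 - t.col3 < 0 THEN 0
               ELSE t.col1 - t.col2 - t.col3
           END
FROM table1 t
JOIN inserted i ON i.id = t.id   -- hypothetical key column; limits the update to the affected rows

If col4 is always derived from the other three columns, a computed column on table1 might remove the need for the trigger entirely.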
I have a situation where I'd like to fire a trigger only when certain conditions are met for data that's inserted into its parent table.
Here is what I'm working with -
The PropertyScenarioIndex table currently contains this data:

PropertyID  ScenarioType    ScenarioID
1732        financing       1
1732        financing       2
1732        financing       3
1732        income_expense  1
1732        income_expense  2
The PropertyScenarioIndex table has a trigger on it (the one that needs to be made conditional) whose purpose is to automatically insert rows into another table called IncomeAndExpenseData to make entries easier for the user in that table. Here is the trigger:
Code Snippet
CREATE TRIGGER addstandardIncomeAndExpenserows ON PropertyScenarioIndex
AFTER INSERT
AS
insert into IncomeAndExpenseData
select properytID = t1.propertyID,
       scenarioID = t1.scenarioID,
       itemID = t2.itemID,
       itemvalue = null,
       monthitemvalue = null
from PropertyScenarioIndex t1
cross join IncomeAndExpenseCodes t2
where t1.propertyID+t1.scenarioID in (select propertyID+scenarioID from Inserted)
and t2.standarditem = 1

With the trigger as it is above, when the user tries to enter the following into PropertyScenarioIndex:

PropertyID  ScenarioType    ScenarioID
1732        income_expense  3

an error message appears that says: Violation of PRIMARY KEY constraint 'PK_IncomeExpenseData'. Cannot insert duplicate key in object 'IncomeExpenseData'.
Is there some way to include code in the trigger that makes it fire only when the data inserted into the PropertyScenarioIndex table has a ScenarioType of 'income_expense'?
Note: in the where statement of the trigger above I have also tried this:
Code Snippet
where t1.propertyID+t1.scenarioID in (select propertyID+scenarioID from Inserted where scenariotype = 'income_expense')

but got the same error message as noted above.
Thanks, Knot
P.S. The primary key columns in the PropertyScenarioIndex table are PropertyID, ScenarioType and ScenarioID, and the primary key columns in the IncomeExpenseData table are PropertyID, ScenarioID and ItemID.
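A sketch of how the trigger could be limited to income_expense rows and guarded against duplicates (table and column names are taken from the post; the NOT EXISTS check and driving the query from Inserted instead of re-joining the base table are my additions):

CREATE TRIGGER addstandardIncomeAndExpenserows ON PropertyScenarioIndex
AFTER INSERT
AS
insert into IncomeAndExpenseData (propertyID, scenarioID, itemID, itemvalue, monthitemvalue)
select i.propertyID, i.scenarioID, t2.itemID, null, null
from Inserted i
cross join IncomeAndExpenseCodes t2
where i.ScenarioType = 'income_expense'
  and t2.standarditem = 1
  and not exists (select 1
                  from IncomeAndExpenseData d
                  where d.propertyID = i.propertyID
                    and d.scenarioID = i.scenarioID
                    and d.itemID = t2.itemID)

Driving the select from Inserted also sidesteps the likely cause of the duplicate-key error: the propertyID+scenarioID concatenation in the original WHERE can match an existing financing row with the same arithmetic sum as well as the new income_expense row, so the same (PropertyID, ScenarioID, ItemID) combination is generated more than once in a single insert.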
Consider the following two functionally identical example queries:

Query 1:

DECLARE @Name VARCHAR(32)
SET @Name = 'Bob'
SELECT * FROM Employees
WHERE [Name] = CASE WHEN @Name IS NULL THEN [Name] ELSE @Name END

Query 2:

SELECT * FROM Employees WHERE [Name] = 'Bob'

I would expect SQL Server to construct an identical QEP under the hood for these two queries, and that they would require essentially the same amount of time to execute. However, Query 1 takes much longer to run on my indexed table of ~300,000 rows. By "longer", I mean that Query 1 takes about two seconds, while Query 2 returns almost instantly. Is there a way to implement a conditional WHERE clause without suffering this performance hit? I want to avoid using the IF...THEN method because I frequently require several optional parameters in the WHERE clause. Thanks! Jared
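For reference, the pattern behind Query 1 (a "catch-all" predicate) usually forces a scan because the optimizer must build one plan that works for any parameter value. One common workaround is sketched below: the IS NULL form combined with a statement-level recompile (SQL Server 2005 and later); the other is building the WHERE clause dynamically with sp_executesql so that only the filters actually supplied are compiled.

-- Sketch: one optional parameter shown, the same pattern repeats for each
SELECT *
FROM Employees
WHERE (@Name IS NULL OR [Name] = @Name)
OPTION (RECOMPILE)   -- lets the optimizer plan for the actual parameter values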
On the database that I am maintaining we are having some data integrity issues between our Logon table and another sub table that stores the LogonId.
The best solution would be to put in a foreign key, but that is going to require a lot of work and a lot of code changes for the entire system. This is what we plan to do, but this is not a quick fix. We need something that can be implemented quickly.
The easiest and quickest fix is to check this sub table to see if the LogonId is in the sub table and the row is marked as Active or Working. If it is then we will abort the deletion and raise an error. Otherwise the delete should happen normally.
Is aborting the deletion as simple as :
Code:
Delete From deleted Where LogonId = @myId
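Not quite; the deleted pseudo-table is read-only, and deleting from it has no effect on the base table. Rolling back inside the trigger is the usual way to abort. A minimal sketch, assuming the sub table is named LogonDetail with a Status column (both names are guesses):

CREATE TRIGGER trg_Logon_Delete ON Logon
FOR DELETE
AS
IF EXISTS (SELECT 1
           FROM LogonDetail s                         -- hypothetical sub table name
           JOIN deleted d ON d.LogonId = s.LogonId
           WHERE s.Status IN ('Active', 'Working'))   -- hypothetical status column
BEGIN
    RAISERROR ('LogonId is still Active or Working in the sub table; delete aborted.', 16, 1)
    ROLLBACK TRANSACTION
    RETURN
END
-- otherwise do nothing and the delete completes normally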
Is there any way to create a "conditional insert trigger"? My scenario is this: when a user adds an email address to the database, I want to check whether the email address is like '%@acme-holdings%'; if it is, change the value to 'Not allowed', otherwise leave it alone and go ahead with inserting the original email address.
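One way is an INSTEAD OF INSERT trigger that rewrites the value before the real insert happens. A sketch, where the table name Subscribers and its column list are placeholders:

CREATE TRIGGER trg_Subscribers_Email ON Subscribers   -- placeholder table name
INSTEAD OF INSERT
AS
INSERT INTO Subscribers (EmailAddress)                -- list the real column set here
SELECT CASE
           WHEN i.EmailAddress LIKE '%@acme-holdings%' THEN 'Not allowed'
           ELSE i.EmailAddress
       END
FROM inserted i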
I need some help here in creating a conditional update trigger. The purpose of this trigger would be to check whether a contact already exists in the database on an insert, and to update only the fields that are null.
So How would I compare each field from the CONTACTS Table against my INSERTED Table?
Inserted.FirstName (COMPARE) Contacts.Firstname
Inserted.LastName (COMPARE) Contacts.LastName
Inserted.Email (COMPARE) Contacts.Email
I will be using the email address as the check for a duplicate record, and if a duplicate is found, instead of disallowing the insert I want to compare against the existing record and update any fields that are NULL in Contacts with the values from Inserted.
I have no idea on how to compare all of the fields.
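A sketch of one way to do the NULL-only merge, keyed on Email as described. COALESCE keeps the existing value when it is not NULL and takes the inserted value otherwise; only the three columns from the post are shown:

CREATE TRIGGER trg_Contacts_Merge ON Contacts
INSTEAD OF INSERT
AS
-- Fill in NULL columns on contacts that already exist (matched on Email)
UPDATE c
SET c.FirstName = COALESCE(c.FirstName, i.FirstName),
    c.LastName  = COALESCE(c.LastName,  i.LastName)
FROM Contacts c
JOIN inserted i ON i.Email = c.Email

-- Insert only the contacts whose Email is not already present
INSERT INTO Contacts (FirstName, LastName, Email)
SELECT i.FirstName, i.LastName, i.Email
FROM inserted i
WHERE NOT EXISTS (SELECT 1 FROM Contacts c WHERE c.Email = i.Email)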
On a very busy SQL 2000 Enterprise Edition server (8 GB memory, 6 CPUs, SP3), I could not install an update trigger unless all the application connections were dropped, which is not something we could do on a server that runs 24 hours a day.
I then tried to run a query as follows:
if exists (select A.*
           from rfABC A
           inner join remoteSvr.XYZDB.dbo.vwIP L
               on A.Address = L.address and A.metro <> L.metro)
begin
    print ' ---- Yes metroID <> LAMetro, start job exec.... -----'

    insert into tempCatchMetroIDGPRS
    select rfABC.*, metro, getdate()
    from rfABC
    inner join remoteSvr.XYZDB.dbo.vwIP L
        on rfABC.Address = L.address and rfABC.metro <> L.metro

    update A
    set A.metro = L.metro
    from rfABC A
    inner join remoteSvr.XYZDB.dbo.vwIP L
        on A.Address = L.address and A.metro <> L.metro
end
else
begin
    print ' ---- no metroID <> LAMetro, skip job exec.... -----'
end
------------------------------ This query hung and could not execute. When I took off the if ... else condition, it ran in about 0 seconds. I wondered whether a 'busy' linked server (which updates the IP addresses continuously) could cause the above issue...
I have a multi-part question regarding trigger performance. First of all, is there a performance gain from issuing the following within a trigger:

SELECT PrimaryKeyColumn FROM INSERTED

as opposed to:

SELECT * FROM INSERTED

Secondly, what is the optimum way to determine which action fired an AFTER trigger that is defined for INSERT, UPDATE, DELETE? Is it something similar to:

IF NOT EXISTS(SELECT * FROM DELETED)
    --sql block for insert
ELSE IF NOT EXISTS(SELECT * FROM INSERTED)
    --sql block for delete
ELSE
    --sql block for update

or is there a superior method? Lastly, is checking @@ROWCOUNT at the beginning of an AFTER trigger the best way to differentiate between multiple rows being affected or just a single row? Can using this @@ROWCOUNT test fail? Are there any situations where it would return erroneous results? I realize I'm being somewhat nitpicky on these matters but any feedback would be greatly appreciated!
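On the first question, selecting just the key column from inserted is unlikely to change performance much on its own; the trigger pseudo-tables are materialized either way. The EXISTS test in the post is the standard action check; a compact sketch that also returns early when no rows were affected:

-- Sketch of the usual action test inside a FOR INSERT, UPDATE, DELETE trigger
IF NOT EXISTS (SELECT * FROM inserted) AND NOT EXISTS (SELECT * FROM deleted)
    RETURN                         -- the triggering statement affected zero rows
IF EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted)
    PRINT 'update'                 -- rows in both pseudo-tables
ELSE IF EXISTS (SELECT * FROM inserted)
    PRINT 'insert'
ELSE
    PRINT 'delete'

@@ROWCOUNT is only trustworthy if it is captured in the very first statement of the trigger, before anything else resets it; counting rows in inserted/deleted is the more defensive way to distinguish single-row from multi-row operations.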
I have a very complex trigger on a table, and the performance has always been poor. I always attributed this to the large number of operations being performed. However, we recently upgraded to SQL 7.0, and I was finally able to watch what was really happening with the Query Analyzer's graphical execution plan. Much to my surprise, nearly all of the cost of the update is in simply selecting a few values from the INSERTED table.
Here is a simple update statement that does nothing, but causes the trigger to fire:
update BillHeader set Status = Status where pkBillHeader = -2135366980
The following statement is contained in the trigger:
According to the query analyzer, this is 98% of the total cost of the query!
Is this correct? Is the query analyzer just being fooled, and something else is really eating up all of this time?
If this is correct, is there any other way to get this data? I could get just the primary key (pkBillHeader) from the INSERTED table, then look up the rest of the values from the "real" table, but I don't see how this would help.
Is there a way to execute a subscribed report based on certain criteria?
For example, send the report to users only when the report contains data; if the query executed by the report returns no data, then don't send the report to the users.
My current situation is that users say this should not happen: since no pertinent information is contained in the report, why would they receive an email with blank data in it?
I have the following code in the color property of a textbox. However, when I run my report all of the values in this column display in green regardless of their value.
Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues with the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plan and I found that they were the exact same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory, etc. Or could it be OS software related issues, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs: 2x Intel Xeon 2.67 GHz, total physical memory 2 GB (available physical memory 815 MB), Windows Server 2003 SE w/ SP1.

Here are the dev server's system specs: 2x Intel Xeon 2.80 GHz, 2 GB DDR2-SDRAM, Windows Server 2003 SE w/ SP1.

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks,
Brian T
I'm new to this whole SQL Server 2005 thing as well as database design and I've read up on various ways I can integrate business constraints into my database. I'm not sure which way applies to me, but I could use a helping hand in the right direction.
A quick explanation of the various tables I'm dealing with:

WBS - the Work Breakdown Structure, for example: A - Widget 1, AA - Widget 1 Subsystem 1, and etc.
Impacts - the Risk or Opportunity impacts for the weights of a part/assembly. (See Assemblies have Impacts below)
Allocations - the review of the product in question, say Widget 1, in terms of various weight totals, including all parts. Example - September allocation, Initial Demo allocation, etc. Mostly used for weight history and trending.
Parts - There are hundreds of Parts which will eventually lead to thousands. Each part has a WBS element. [Seems redundant, but parts are managed in-house, and WBS elements are cross-company and issued by the Government]
Parts have Allocations - For weight history and trending (see Allocations). Example, Nut 17 can have a September 1st allocation, a September 5th allocation, etc.
Assemblies - Parts are assemblies by themselves and can belong to multiple assemblies. Now, there can be multiple parts on a product, say, an unmanned ground vehicle (UGV), and so those parts can belong to a higher "assembly" [For example, there can be 3 Nut 17's (lower assembly) on Widget 1 Subsystem 2 (higher assembly) and 4 more on Widget 1 Subsystem 5, etc.]. What I'm concerned about is ensuring that the weight roll-ups are accurate for all of the assemblies.
Assemblies have Impacts - There is a risk and opportunity impact setup modeled into this design to allow for a risk or opportunity to be marked on a per-assembly level. That's all this table represents.
A part is allocated a weight and then assigned to an assembly. The Assemblies table holds this hierarchical information - the lower assembly and the higher one, both of which are Parts entries in the [Parts have Allocations] table.
Therefore, to ensure proper weight roll ups in the [Parts have Allocations] table on a per-part basis, I would like to check for any inserts, updates, or deletes on both the [Parts have Allocations] table and the [Assemblies] table and then re-calculate the weight roll up for every assembly. Now, I'm not sure if this is a huge performance hog, but I do need to keep all the information as up-to-date and as accurate as possible. As such, I'm not sure which method is even correct, although it seems an AFTER DML trigger is in order (from what I've gathered thus far). Keep in mind, this trigger needs to go through and check every WBS or Part, then go through all of its associated assemblies, and ensure the weights are correct by re-summing the weights listed.
If you need the design or create script (table layout), please let me know.
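As a very rough sketch of the AFTER DML approach (every table and column name below is an assumption, since I don't have the real schema), the trigger collects the parts whose allocations changed and re-sums the weight for each assembly that contains them; an equivalent trigger would sit on the Assemblies table as well:

CREATE TRIGGER trg_PartsHaveAllocations_Rollup
ON PartsHaveAllocations                        -- hypothetical name for [Parts have Allocations]
AFTER INSERT, UPDATE, DELETE
AS
SET NOCOUNT ON

-- Recompute the stored roll-up for every higher assembly containing a part whose allocation changed.
-- Assemblies(HigherPartID, LowerPartID, Quantity), RolledUpWeight and AllocatedWeight are assumed names.
UPDATE parent
SET parent.RolledUpWeight = (SELECT SUM(child.AllocatedWeight * a.Quantity)
                             FROM Assemblies a
                             JOIN PartsHaveAllocations child ON child.PartID = a.LowerPartID
                             WHERE a.HigherPartID = parent.PartID)
FROM PartsHaveAllocations parent
WHERE parent.PartID IN (SELECT a2.HigherPartID
                        FROM Assemblies a2
                        WHERE a2.LowerPartID IN (SELECT PartID FROM inserted
                                                 UNION
                                                 SELECT PartID FROM deleted))

Note that this only refreshes the immediate parent; a multi-level structure would still need either a loop that walks up the hierarchy or a periodic full recompute, and if the volume of changes is high, a scheduled recalculation job may be kinder to performance than doing it synchronously in the trigger.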
Are there any limitations or gotchas to updating the same table which fired a trigger from within the trigger? Some example code below. Hmmm.... This example seems to be working fine so it must be something with my specific schema/code. We're working on running a SQL trace but if anybody has any input, fire away. Thanks!

create table x (Id int, Account varchar(25), Info int)
GO
insert into x values ( 1, 'Smith', 15);
insert into x values ( 2, 'SmithX', 25);

/* Update trigger tu_x for table x */
create trigger tu_x
on x
for update
as
begin
    declare @TriggerRowCount int
    set @TriggerRowCount = @@ROWCOUNT
    if ( @TriggerRowCount = 0 )
        return
    if ( @TriggerRowCount > 1 )
    begin
        raiserror( 'tu_x: @@ROWCOUNT[%d] Trigger does not handle @@ROWCOUNT > 1 !', 17, 127, @TriggerRowCount) with seterror, nowait
        return
    end
    update x
    set Account = left( i.Account, 24) + 'X',
        Info = i.Info
    from deleted, inserted i
    where x.Account = left( deleted.Account, 24) + 'X'
end

update x set Account = 'Blair', Info = 999 where Account = 'Smith'
This audit trigger is generic (i.e. not table-specific): attach it to any table and it should work. Be sure to create the 'Audit' table first, though.
The following code writes audit entries to a table called 'Audit' with these columns:

ActionType      varchar
TableName       varchar
PK              varchar
FieldName       varchar
OldValue        varchar
NewValue        varchar
ChangeDateTime  datetime
ChangedBy       varchar
using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class Triggers
{
    //A generic trigger for Insert, Update and Delete actions on any table
    [Microsoft.SqlServer.Server.SqlTrigger(Name = "AuditTrigger", Event = "FOR INSERT, UPDATE, DELETE")]
    public static void AuditTrigger()
    {
        SqlTriggerContext tcontext = SqlContext.TriggerContext; //Trigger context
        string TName;    //Where we store the altered table's name
        string User;     //Where we will store the database username
        DataRow iRow;    //DataRow to hold the inserted values
        DataRow dRow;    //DataRow to hold the deleted/overwritten values
        DataRow aRow;    //Audit DataRow to build our audit entry with
        string PKString; //Will temporarily store the primary key column names and values here

        using (SqlConnection conn = new SqlConnection("context connection=true")) //Our connection
        {
            conn.Open(); //Open the connection

            //Build the AuditAdapter and matching table
            SqlDataAdapter AuditAdapter = new SqlDataAdapter("SELECT * FROM Audit WHERE 1=0", conn);
            DataTable AuditTable = new DataTable();
            AuditAdapter.FillSchema(AuditTable, SchemaType.Source);
            SqlCommandBuilder AuditCommandBuilder = new SqlCommandBuilder(AuditAdapter); //Populates the Insert command for us

            //Get the inserted values
            SqlDataAdapter Loader = new SqlDataAdapter("SELECT * from INSERTED", conn);
            DataTable inserted = new DataTable();
            Loader.Fill(inserted);

            //Get the deleted and/or overwritten values
            Loader.SelectCommand.CommandText = "SELECT * from DELETED";
            DataTable deleted = new DataTable();
            Loader.Fill(deleted);

            //Retrieve the name of the table that currently has a lock from the executing command
            //(i.e. the one that caused this trigger to fire)
            SqlCommand cmd = new SqlCommand("SELECT object_name(resource_associated_entity_id) FROM sys.dm_tran_locks WHERE request_session_id = @@spid and resource_type = 'OBJECT'", conn);
            TName = cmd.ExecuteScalar().ToString();

            //Retrieve the username of the current database user
            SqlCommand curUserCommand = new SqlCommand("SELECT system_user", conn);
            User = curUserCommand.ExecuteScalar().ToString();

            //Adapted the following command from a T-SQL audit trigger by Nigel Rivett
            //http://www.nigelrivett.net/AuditTrailTrigger.html
            SqlDataAdapter PKTableAdapter = new SqlDataAdapter(@"SELECT c.COLUMN_NAME
                from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
                where pk.TABLE_NAME = '" + TName + @"'
                and CONSTRAINT_TYPE = 'PRIMARY KEY'
                and c.TABLE_NAME = pk.TABLE_NAME
                and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME", conn);
            DataTable PKTable = new DataTable();
            PKTableAdapter.Fill(PKTable);
            switch (tcontext.TriggerAction) //Switch on the action occurring on the table
            {
                case TriggerAction.Update:
                    iRow = inserted.Rows[0]; //Get the inserted values in row form
                    dRow = deleted.Rows[0];  //Get the overwritten values in row form
                    PKString = PKStringBuilder(PKTable, iRow); //Get the primary keys and their values as a string
                    foreach (DataColumn column in inserted.Columns) //Walk through all possible table columns
                    {
                        if (!iRow[column.Ordinal].Equals(dRow[column.Ordinal])) //If value changed
                        {
                            //Build an audit entry
                            aRow = AuditTable.NewRow();
                            aRow["ActionType"] = "U"; //U for Update
                            aRow["TableName"] = TName;
                            aRow["PK"] = PKString;
                            aRow["FieldName"] = column.ColumnName;
                            aRow["OldValue"] = dRow[column.Ordinal].ToString();
                            aRow["NewValue"] = iRow[column.Ordinal].ToString();
                            aRow["ChangeDateTime"] = DateTime.Now.ToString();
                            aRow["ChangedBy"] = User;
                            AuditTable.Rows.InsertAt(aRow, 0); //Insert the entry
                        }
                    }
                    break;
                case TriggerAction.Insert:
                    iRow = inserted.Rows[0];
                    PKString = PKStringBuilder(PKTable, iRow);
                    foreach (DataColumn column in inserted.Columns)
                    {
                        //Build an audit entry
                        aRow = AuditTable.NewRow();
                        aRow["ActionType"] = "I"; //I for Insert
                        aRow["TableName"] = TName;
                        aRow["PK"] = PKString;
                        aRow["FieldName"] = column.ColumnName;
                        aRow["OldValue"] = null;
                        aRow["NewValue"] = iRow[column.Ordinal].ToString();
                        aRow["ChangeDateTime"] = DateTime.Now.ToString();
                        aRow["ChangedBy"] = User;
                        AuditTable.Rows.InsertAt(aRow, 0); //Insert the entry
                    }
                    break;
                case TriggerAction.Delete:
                    dRow = deleted.Rows[0];
                    PKString = PKStringBuilder(PKTable, dRow);
                    foreach (DataColumn column in inserted.Columns)
                    {
                        //Build an audit entry
                        aRow = AuditTable.NewRow();
                        aRow["ActionType"] = "D"; //D for Delete
                        aRow["TableName"] = TName;
                        aRow["PK"] = PKString;
                        aRow["FieldName"] = column.ColumnName;
                        aRow["OldValue"] = dRow[column.Ordinal].ToString();
                        aRow["NewValue"] = null;
                        aRow["ChangeDateTime"] = DateTime.Now.ToString();
                        aRow["ChangedBy"] = User;
                        AuditTable.Rows.InsertAt(aRow, 0); //Insert the entry
                    }
                    break;
                default:
                    //Do nothing
                    break;
            }

            AuditAdapter.Update(AuditTable); //Write all audit entries back to the Audit table
            conn.Close(); //Close the connection
        }
    }
    //Helper function that takes a table of the primary key column names and the modified row's values
    //and builds a string of the form "<PKColumn1Name=Value1>,<PKColumn2Name=Value2>,..."
    public static string PKStringBuilder(DataTable primaryKeysTable, DataRow valuesDataRow)
    {
        string temp = String.Empty;
        foreach (DataRow kColumn in primaryKeysTable.Rows) //For all primary keys of the table that is being changed
        {
            temp = String.Concat(temp, String.Concat("<", kColumn[0].ToString(), "=", valuesDataRow[kColumn[0].ToString()].ToString(), ">,"));
        }
        return temp;
    }
}
The trick was getting the Table Name and the Primary Key Columns. I hope this code is found useful.
I want to be able to create a trigger that updates table 2 when a row is inserted into table 1. However I'm not sure how to increment the ID in table 2 or how to update only the row that has been inserted.
I want to be able to create a trigger so that when a row is inserted into table A by a specific user then the ID will appear in table B. Is it possible to find out the login id of the user inserting a row?
I believe the trigger should look something like this:
create trigger test_trigger on a for insert as insert into b(ID)
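A sketch of one way to finish it; SUSER_SNAME() returns the login of the session that performed the insert, and the IF limits the action to one specific user (the login name is a placeholder, and table a is assumed to have an ID column):

create trigger test_trigger on a
for insert
as
if SUSER_SNAME() = 'DOMAIN\SpecificUser'      -- placeholder login name
begin
    insert into b (ID)
    select i.ID                                -- assumes table a exposes the ID to copy
    from inserted i
end

If table b's own key simply needs to auto-increment, making it an IDENTITY column and leaving it out of the INSERT list is usually easier than computing the next value inside the trigger.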
When a row gets modified and it invokes a trigger, we would like to be able to update the row that was modified inside the trigger. This is (basically) how we are doing it now:

CREATE TRIGGER trTBL ON TBL
FOR UPDATE, INSERT, DELETE
as
update TBL
set fld = 'value'
from inserted, TBL
where inserted.id = TBL.id
....

This works fine but it seems like it could be optimized. Clearly we are having to scan the entire table again to update the row. But shouldn't the trigger already know which row invoked it? Do we have to scan the table again for this row, or is there some syntax that allows us to update the row that invoked the trigger? If not, why? It seems like this would be a fairly common task. Thanks.
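As far as I know a trigger is not handed a row pointer; joining to the inserted pseudo-table is the supported way to reach the affected rows, and with an index on the join column the "scan" becomes a seek. A trimmed sketch of the same trigger written that way:

CREATE TRIGGER trTBL ON TBL
FOR UPDATE, INSERT
AS
UPDATE t
SET t.fld = 'value'
FROM TBL t
JOIN inserted i ON i.id = t.id   -- an index (ideally the primary key) on TBL.id makes this a seek

For a DELETE there is no surviving row to update anyway, so the trigger probably only needs the UPDATE and INSERT events.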
Hi, I need to disable a trigger on SQL Server and I'm not able to do it either from Query Analyzer or from Enterprise Manager. (I remember this was quite simple using TOAD on Oracle.) I'm interested in disabling it without deleting it, so that I can re-enable it when needed without re-running the creation script. Thanks a lot to everybody.
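Triggers can be disabled and re-enabled in place without dropping them; for example (table and trigger names are placeholders):

-- Works on SQL Server 2000 and later
ALTER TABLE dbo.MyTable DISABLE TRIGGER trMyTrigger
ALTER TABLE dbo.MyTable ENABLE TRIGGER trMyTrigger

-- SQL Server 2005 and later also accept this form
DISABLE TRIGGER trMyTrigger ON dbo.MyTable
ENABLE TRIGGER trMyTrigger ON dbo.MyTable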
Hi, I am not sure if this is the right forum to post this question. I run an update statement like "Update mytable set status='S'" in SQL Server 2005 Management Studio. When I then run "select * from mytable", for a few seconds all status values are 'S'; after a few seconds they all turn to 'H'. This is the behaviour you would expect when the table has an update trigger, but I don't see any triggers under this table. What else would cause the database to automatically change my update? Is there any other place I should look for an update trigger on this table? Thanks,
Hi all, in .NET I've created an application that allows creation of triggers, and I also want to allow the deletion of triggers. The trigger name is kept in a table, and upon deleting the record I want to use that field value to drop the trigger.
I have the following Trigger
the error is at
DROP TRIGGER @DeleteTrigger
I'm guessing it doesn't like the trigger name being a variable instead of a static name. How do I get around this?
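Correct, DROP TRIGGER does not accept a variable for the object name, so the statement has to be built as a string and executed dynamically. A sketch, with QUOTENAME guarding against unusual characters in the stored name:

DECLARE @DeleteTrigger sysname, @sql nvarchar(4000)
-- @DeleteTrigger is assumed to already hold the trigger name read from your table
SET @sql = N'DROP TRIGGER ' + QUOTENAME(@DeleteTrigger)
EXEC sp_executesql @sql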
I want to know how I can create conditional FROM WHERE clauses like below ..
SELECT X,X,X FROM CASE @intAltSQL > 0 Then Blah Blah Blah END CASE @intAltSQL = 0 Then Blah END WHERE CASE @intAltSQL > 0 Then Blah Blah Blah END CASE @intAltSQL = 0 Then Blah END
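CASE is an expression, not a statement, so it cannot switch table names or whole predicates in and out of a query. When the FROM clause itself has to change, the usual choices are two separate SELECTs inside IF/ELSE, or dynamic SQL along these lines (the table names and filters are placeholders):

DECLARE @sql nvarchar(4000)
IF @intAltSQL > 0
    SET @sql = N'SELECT Col1, Col2, Col3 FROM TableA WHERE Col1 > 0'   -- placeholder query
ELSE
    SET @sql = N'SELECT Col1, Col2, Col3 FROM TableB WHERE Col2 = 0'   -- placeholder query
EXEC sp_executesql @sql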
I have a trigger set on TABLE1 so that any update to this column sets off the trigger to write to the AUDIT log table. It works fine otherwise, but not the very first time, when table1 has null in the column. If I comment out
and i.req_fname <> d.req_fname
from the where clause then it works fine the first time too. It seems like the null value of the column is messing things up.
Any thoughts?
Here is my t-sql
Insert into dbo.AUDIT (audit_req, audit_new_value, audit_field, audit_user)
select i.req_guid, i.req_fname, 'req_fname', IsNull(i.req_last_update_user,@default_user) as username from inserted i, deleted d
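The '<>' test is never true when one side is NULL, which is why the very first update (old value NULL) writes nothing to AUDIT. A sketch of a NULL-safe version of that predicate; I'm assuming req_guid is the key that pairs inserted rows with deleted rows:

select i.req_guid, i.req_fname, 'req_fname',
       IsNull(i.req_last_update_user, @default_user) as username
from inserted i
join deleted d on d.req_guid = i.req_guid
where i.req_fname <> d.req_fname
   or (i.req_fname is null and d.req_fname is not null)
   or (i.req_fname is not null and d.req_fname is null)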
I'm faced with the following design issue: on my site there are different profiles: a city profile, a restaurant profile and a user profile. In my DB:

City profiles are stored in tbCities: cityID int PK, shortname nvarchar(50), forumID int FK, (...)
Restaurant profiles are stored in tbRests: restID int PK, shortname nvarchar(50), forumID int FK, (...)
User profiles are stored in tbUsers: userID int PK, shortname nvarchar(50), forumID int FK, (...)

As you can see, a single ID value (for cityID, restID or userID) might occur in multiple tables (e.g. ID 12 may exist in tbRests and in tbUsers). Each of these profile owners can start a forum on their profile. forumID in each of the above tables is an FK to the PK in tbForums: forumID int, forumname nvarchar(50), (...)

Now imagine the following: a site visitor searches ALL forums. Say he finds the following forums:

ForumID  Forumname
1        your opinion on politics
2        is there life in space?
3        who should be the next president of the USA?

A user may want to click on the forum name to go to the profile the forum belongs to. And then there's a problem, because I don't know in which table I should look for the forumID, OR I would have to scan all tables (tbCities, tbRests and tbUsers) for that specific forumID, which is time-consuming and I don't want that! So if a user clicks on forumID 2 (is there life in space?) I want to do a conditional inner join for the table containing that forumID (which may be tbCities, tbRests or tbUsers):

select t.shortname
FROM tablecontainingforumID t
INNER JOIN tbForums f ON t.ForumID = f.ForumID
where f.ForumID = 2

I hope my problem is clear. Any suggestions are welcome (I'm even willing to change my DB design if that would increase effectiveness).
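One way to avoid a conditional join altogether is to expose the three profile tables through a single UNION ALL (as a view, or inline as below) that carries a type marker, so the forum's owner is found with one ordinary join. Sketch only:

SELECT p.profiletype, p.shortname
FROM tbForums f
JOIN (SELECT 'city' AS profiletype, cityID AS profileID, shortname, forumID FROM tbCities
      UNION ALL
      SELECT 'restaurant', restID, shortname, forumID FROM tbRests
      UNION ALL
      SELECT 'user', userID, shortname, forumID FROM tbUsers) p
  ON p.forumID = f.forumID
WHERE f.forumID = 2

Alternatively, storing an owner-type column (and owner ID) on tbForums itself would tell you which table to join without touching the other two.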
I've encountered a T-SQL problem related to IF conditional processing. The following script executes an insert statement depending on whether the column 'ReportTitle' exists in the table ReportPreferences. However, it gets executed even when the ReportTitle column is not present. Could anyone offer some advice?

IF (Coalesce(Col_length('ReportPreferences','ReportTitle'), 0) > 0)
Begin
    INSERT INTO dbo.Defaults
    SELECT FinancialPlannerID, ReportTitle
    FROM dbo.ReportPreferences
end
GO
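The statement inside the IF is checked when the batch is compiled, and because ReportPreferences already exists its column list is validated at that point, regardless of how the IF evaluates. The usual workaround is to put the statement in dynamic SQL so it is only compiled when the branch actually runs:

IF (Coalesce(Col_length('ReportPreferences','ReportTitle'), 0) > 0)
BEGIN
    EXEC sp_executesql
        N'INSERT INTO dbo.Defaults
          SELECT FinancialPlannerID, ReportTitle
          FROM dbo.ReportPreferences'
END
GO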
I have a stored procedure that performs a search function with params:

@username nvarchar(50)
@country nvarchar(50)

and like 10 more. A user may provide values for these params optionally. So when the @username var is left blank, there should be no filtering on the username field (every row should be selected regardless of the username). Currently my statement is:

select username, country from myUsers
where username = @username and country = @country

With this statement, when a user provides no value for username, the username field is compared to '', which of course returns nothing. What can I do to solve this? Thanks!
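Since the unused parameters arrive as blank strings rather than NULLs, one common pattern is to make each predicate a no-op when its parameter is blank; the caveat about plan quality from the earlier catch-all post applies, so OPTION (RECOMPILE) or dynamic SQL may be needed on large tables. Sketch:

select username, country
from myUsers
where (@username = '' or username = @username)
  and (@country  = '' or country  = @country)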