How Do I Exclude Data From One Table Based On Data From Another?
Feb 28, 2008
I have a table called MasterSkillList which is a list of skills and attributes, eg: [Appraise, INT], [Bluff, CHA] etc
I have a table called Classes, which is a list of all classes available (and some details which are irrelevant), eg: [Fighter], [Assassin] etc.
I also have a table called ClassSkills which holds a list of classes and their applicable skills, eg: [Assassin, Bluff], [Assassin, Open Lock], [Fighter, Appraise], [Fighter, Bluff] etc.
What I have is a gridview which shows all my classes from the Classes table. I want to be able to select a class on that gridview and create a checkbox list of all available skills that are NOT already associated with that class. E.g. assassin has Bluff and Open Lock, so those two skills shouldn't appear on my checkbox list. So I want to show all the skills from the master skills list, excluding all the skills the selected class already has.
Alternatively, it would be better if there were a way to display all the skills in existence on my checkbox list, with the ones that class already has checked. Any suggestions?
Here's the query I have:
SELECT MasterSkillsList.Skill
FROM ClassSkills INNER JOIN
MasterSkillsList ON ClassSkills.Skill = MasterSkillsList.Skill
WHERE (MasterSkillsList.Skill <> ClassSkill.Skill)
Edit:
I just added the following SQL query, but when I run it I get no results, even though it should show everything except 2 skills. Have I written it wrong?
SELECT Skill
FROM MasterSkillsList
WHERE (NOT EXISTS
(SELECT Skill
FROM ClassSkills
WHERE (ClassName = @ClassName)))
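For what it's worth, a minimal sketch of a corrected version (column names taken from the query above): the subquery is not correlated to the outer row, so whenever the class has any skills at all, NOT EXISTS is false for every row of MasterSkillsList, which is why nothing comes back. Correlating it on Skill gives:

SELECT MasterSkillsList.Skill
FROM MasterSkillsList
WHERE NOT EXISTS
   (SELECT 1
    FROM ClassSkills
    WHERE ClassSkills.ClassName = @ClassName
      AND ClassSkills.Skill = MasterSkillsList.Skill)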
I'm not sure if this could be done, but if anyone has any insight on how to do this please let me know...
Currently, I have a table that has a field of Categories. I recently created a Category table in which each category has its own ID. I need to replace the data that was in my original table with the new IDs based on the actual category name... Is there any way of doing this without manual data entry?
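A minimal sketch of the kind of set-based update that avoids manual entry. The table and column names here are assumptions (dbo.OriginalTable with a Category text field, dbo.Category with CategoryID and CategoryName), since the real names aren't shown:

UPDATE t
SET    t.Category = c.CategoryID   -- implicit conversion if Category is still a text column
FROM   dbo.OriginalTable AS t
JOIN   dbo.Category AS c
       ON c.CategoryName = t.Category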
Select AVG(AVG_Back), AVG(AVG_Yield) FROM tblUser WHERE Date Between '3/1/2008' AND '3/31/2008'
I want to limit the AVG_Back field to exclude all values of 0. So only average AVG_Back if the value > 0. What is the best way to accomplish this? I can't just put it in the where clause or the AVG_Yield will be excluded too.
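One common approach is to turn the zeros into NULLs for that column only, since AVG ignores NULLs. A sketch against the query above (table and column names as posted):

SELECT AVG(CASE WHEN AVG_Back > 0 THEN AVG_Back END) AS Avg_Back,
       AVG(AVG_Yield) AS Avg_Yield
FROM tblUser
WHERE [Date] BETWEEN '3/1/2008' AND '3/31/2008'

The CASE has no ELSE, so rows with AVG_Back <= 0 contribute NULL to the first average while AVG_Yield is still averaged over every row in the date range.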
I have a column that is VARCHAR(30). This column is supposed to contain values that "look" like a date in the form mm/dd/yyyy; however, because it is a free-text character field, oftentimes data other than a date is entered - "text" -
How do I return only the data that is in the format mm/dd/yyyy?
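A minimal sketch of one way to filter it (the table name dbo.MyTable and column name DateText are assumptions): a LIKE pattern checks the mm/dd/yyyy shape, and ISDATE() throws out impossible values such as 13/45/2007.

SELECT DateText
FROM   dbo.MyTable
WHERE  DateText LIKE '[0-1][0-9]/[0-3][0-9]/[1-2][0-9][0-9][0-9]'
  AND  ISDATE(DateText) = 1   -- note: ISDATE depends on the session's language/DATEFORMAT settings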
I have a cube that is showing measures that don't exist. Let me give an example. This example will include 3 dimensions, product, location, and time. The fact table measure will be sales.
Here are the distinct values if you were to write a SQL query against the dimensional model that feeds the cube:
Product   Location   Time   Sales
A         X          1/04   200
B         Y          1/04   100
A         X          2/04   300
In the cube, if you were to look at product by location for just 2/04, you would see:
Product   Location   Sales
A         All Loc    300
A         X          300
A         Y
B         All Loc
B         X
B         Y
How do you get rid of the zeros, or the combinations that don't exist?
Hey,
First, sorry if this post appears twice; I can not find my recently sent post, so I am trying to post it once again. I'm out of ideas, so I thought I would search for help here again :(
I'm trying to prepare a view for an external app. In normal cases this is very easy, but what if the view structure should be dynamic?!
Here is my point (I will simplify the examples). I have a table:

create table t_data (
id bigint identity (1,1) not null,
val varchar(10) not null,
data varchar(100) not null,
constraint [PK_t_data] primary key clustered (id) with fillfactor = 90 on [PRIMARY] )
go
insert into t_data (val, data) values ('1111111111','1234567890abcdefghijklmnoprstuvwxyz1234567890abcdefghijklmnoprstuvwxyz67890abcdefghijklmnoprstuvwxyz')
insert into t_data (val, data) values ('2222222222','1234567890abcdefghijklmnoprstuvwxyz1234567890abcdefghijklmnoprstuvwxyz12345abcdefghijklmnoprstuvwxyz')
insert into t_data (val, data) values ('3333333333','12345abcdefghijklmnoprstuvwxyz1234567890abcdefghijklmnoprstuvwxyz1234567890abcdefghijklmnoprstuvwxyz')
insert into t_data (val, data) values ('4444444444','67890abcdefghijklmnoprstuvwxyz1234567890abcdefg12345hijklmnoprstuvwxyz67890abcdefghijklmnoprstuvwxyz')
insert into t_data (val, data) values ('5555555555','1230abcdefghijklmnoprst12345uvwxyz1234567890abcdefghijklmnoprstuvwxyz67890abcdefghijklmnoprstuvwxyz')
go
create table t_dataVal (
id bigint identity (1,1) not null,
val varchar(10) not null,
fill varchar(4) not null,
constraint [PK_t_dataVal] primary key clustered (id) with fillfactor = 90 on [PRIMARY] )
go
insert into t_dataVal (val, fill) values ('1111111111','AAAA')
insert into t_dataVal (val, fill) values ('2222222222','KKKK')
insert into t_dataVal (val, fill) values ('3333333333','DDDD')
insert into t_dataVal (val, fill) values ('4444444444','ZZZZ')
insert into t_dataVal (val, fill) values ('5555555555','CCCC')
go
create table t_conf (
id bigint identity (1,1) not null,
start int not null,
length int not null,
description varchar(20) not null,
constraint [PK_t_conf] primary key clustered (id) with fillfactor = 90 on [PRIMARY] )
go
insert into t_conf (start, length, description) values (1,10,'value_1')
insert into t_conf (start, length, description) values (11,3,'value_2')
insert into t_conf (start, length, description) values (55,15,'value_3')
insert into t_conf (start, length, description) values (33,2,'value_4')
insert into t_conf (start, length, description) values (88,1,'value_5')
insert into t_conf (start, length, description) values (56,7,'value_6')
go

Now here is the issue: table t_conf contains data which can be modified by the user; the user sets the appropriate values. There should be a view which returns:
- as headers (column names), whatever is defined in the description column of t_conf (for example: value_1, value_2 ... value_6)
- as values, substrings of all data from t_data, cut using the start and length values for the appropriate description from t_conf
- the first two columns of the view should be the val and fill columns of the t_dataVal table
So the effect should be like this:

val          fill   value_1            value_2   value_3   value_4   value_5   value_6
1111111111   AAAA   1234567890abc....
2222222222   KKKK   1234567890abc....
3333333333   DDDD   12345abcdefgh....
4444444444   ZZZZ   67890abcdefgh....
5555555555   CCCC   1230abcdefghi....

Of course, for all the other value_x columns the appropriate substrings should be shown. Sounds simple, hm? Well, I have been trying to do this since yesterday evening and can not :(
In real life the view/function might be called a lot. The table t_data might have around 4000 records, but the data string is longer (around 3000 characters). The application might access a UDF which returns a table, and that is what I was focusing on. I was trying to create a local temp table in the function and insert values into it using a cursor over t_conf. Unfortunately, everything I get is just a vertical representation of the data, and I need it horizontal :(
The other problem in a function is that I can not use exec() (well known), so I can not even create a table dynamically, using the description values from t_conf as column names and the length values from that table as the field sizes.
Sorry that the description is maybe not exact for my problem, but this is because I'm not even sure which way to go :(
Any help will be appreciated! Thank You - Matik
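Since EXEC() is not allowed inside a function, one hedged sketch (an assumption, not necessarily what the external app can live with) is to build or rebuild the view with dynamic SQL in a stored procedure whenever t_conf changes, concatenating the column list from t_conf:

DECLARE @cols nvarchar(max), @sql nvarchar(max)

-- one SUBSTRING(...) AS <description> expression per row of t_conf
SELECT @cols = ISNULL(@cols + ', ', '')
     + 'SUBSTRING(d.data, ' + CAST(start AS varchar(10)) + ', '
     + CAST(length AS varchar(10)) + ') AS ' + QUOTENAME(description)
FROM t_conf
ORDER BY id

SET @sql = 'CREATE VIEW v_data AS
SELECT v.val, v.fill, ' + @cols + '
FROM t_data AS d JOIN t_dataVal AS v ON v.val = d.val'

EXEC (@sql)

The view name v_data is made up; the external app then queries the view as usual, and the column set follows whatever the user has configured in t_conf at the time the procedure last ran.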
I am trying to write a stored procedure that will select information from a SQL table based on a specific time. For example, I have a name field and a time field, and I want to return just the names that were created within a specific time frame, e.g. between 3pm and 4pm. Any thoughts?
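A minimal sketch, assuming a hypothetical table dbo.Records with a Name column and a CreatedAt datetime column, and whole-hour boundaries:

CREATE PROCEDURE dbo.GetNamesByHour
    @StartHour int,   -- e.g. 15 for 3pm
    @EndHour   int    -- e.g. 16 for 4pm
AS
BEGIN
    SELECT Name
    FROM   dbo.Records
    WHERE  DATEPART(hour, CreatedAt) >= @StartHour
      AND  DATEPART(hour, CreatedAt) <  @EndHour
END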
I have a Users table that I use for membership. I am using username varchar(30) as the primary key for this table since username will always be unique.
The question I have is regarding how SQL Server actually stores data:
I see that when I add users, they are always stored alphabetically sorted on username.
I was expecting that all users will appear on the users table in the order they were added.
Example: I have 3 users (john, jonah, wilson). Now I add a 4th user with username = 'bob'.
If I execute select * from users, it returns (bob, john, jonah, wilson). Notice that bob has become the first row of the table.
My question: is SQL Server moving the 3 older rows to make room for 'bob', and is it also rebuilding part of the index due to this new username 'bob'?
If this is the case, then it will have a big impact if I have 100K users and I add one user that becomes the first row. In that case 99,999 rows will have to move.
Bottom line: inserts and deletes will be very expensive.
I know SQL Server keeps data physically sorted on the PK, but I am concerned here since rows are losing the order in which they were inserted.
Hi, I am having a problem auditing the column data in tables. My requirement is to write a trigger which is capable of auditing columns that will be added in the future as well, without using dynamic SQL. Is there any way to do so? I feel it would be possible if I could get the column data based on ordinal position. Can anybody suggest something? My setup is like this: I have a base_table to be audited. I have an Audit_spec table which contains the name of the table and the columns to be audited, and an Audit table which actually captures the table name, column name, old value, and new value. I have to audit only those columns listed in Audit_spec. If a schema change happens to base_table (like a new column being added) and I want that column to be audited, I should be able to handle the newly added column without any changes to my trigger code.
I have a table dbo.Sales that contains all sales records. There is a column in that table called ItemNumber that I'd like to match with ItemNumber in a flat file and update the ItemCost based on the ItemCost column in the flat file.
So while there will be many sales records for each ItemNumber, I need to loop through and update the ItemCost in that sales record based on the corresponding ItemCost in the flat file. Does this make sense? I really need this for court and I can't figure out how to do it. I took a SQL course about 7 years ago but have forgotten everything.
There will be many sales records for each ItemNumber in the database table. I need to update each one with correct cost based on the item number and cost mapping from flat file.
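No row-by-row loop is needed if the flat file is first loaded into a staging table (for example with BULK INSERT or the SSIS/Import wizard); a single join-based UPDATE then touches every matching sales record. A sketch, where the staging table name dbo.ItemCost_Staging(ItemNumber, ItemCost) is an assumption:

UPDATE s
SET    s.ItemCost = st.ItemCost
FROM   dbo.Sales AS s
JOIN   dbo.ItemCost_Staging AS st
       ON st.ItemNumber = s.ItemNumber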
I am putting together an invoice for my company. I have a text box describing each section of the invoice, followed by a table to list out the charges. I am using multiple tables based on what type of charge the client is receiving.
I would like to hide each section if there are no items purchased of that type. I can do this with the table using the expression "=CountRows() < 1", but I do not know how to refer to that table (call it Tablix1 for the sake of discussion) for the text box. I've tried using a ReportItems function as my basis, without success.
I'm trying to load data from old SQL server 2000 to new SQL server 2014. I need to do a checksum to check if all the source data is loaded in the target database(SQL server 2014). I've created the insert statement for the same which works. I need to use checksum to make sure all the source rows are loaded in the target table. I haven't done checksum before.
Here is my insert statement:
INSERT INTO [Test].[dbo].[Order_tab] ([rec_id] ,[date_loaded] ,[Name1] ,[Name2] ,[Address1] ,[Address2]
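For the comparison itself, a hedged sketch using CHECKSUM_AGG with BINARY_CHECKSUM, assuming a linked server named OLD2000 pointing at the SQL Server 2000 source and the same column layout on both sides. Matching row counts and aggregate checksums strongly suggest (though do not absolutely guarantee, since checksums can collide) that the load is complete:

-- target (SQL Server 2014)
SELECT COUNT(*) AS row_cnt, CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS tbl_checksum
FROM [Test].[dbo].[Order_tab]

-- source (SQL Server 2000, via the assumed linked server)
SELECT COUNT(*) AS row_cnt, CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS tbl_checksum
FROM OLD2000.Test.dbo.Order_tab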
FirstName | LastName    | DateofBirth
--------------------------------------
Thomas    | Alva Edison | 10-10-2015
Benjamin  | Franklin    | 10-10-2015
Thomas    | More        | 11-10-2015
Thomas    | Jefferson   | 12-10-2015
Suppose today's date is 09-10-2015 (in dd-MM-yyyy format). I want to perform a query in such a way that I get the data from the table above WHERE DateofBirth is tomorrow, so I would get the following result:
FirstName | LastName    | DateofBirth
--------------------------------------
Thomas    | Alva Edison | 10-10-2015
Benjamin  | Franklin    | 10-10-2015
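A minimal sketch, assuming a hypothetical table name dbo.People and that DateofBirth is stored as a date/datetime column rather than text:

SELECT FirstName, LastName, DateofBirth
FROM   dbo.People
WHERE  CAST(DateofBirth AS date) = CAST(DATEADD(day, 1, GETDATE()) AS date)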
I have a results table that was created from many different sources in SSIS. I have done calculations and created derived columns in it. I am trying to figure out if there is a way to remove duplicate rows from this table without first writing it to a temp sql table and then parsing through it to remove them.
Each row has a matching key in a column; I would like to remove the duplicate rows, keeping specific columns in the resulting row based on the data in this key field.
I want to write a query that will cycle through the results and if it comes across another record that has a matching Table.ID I want to exclude that row from the result set.
I am not all too familiar with how to use either a Case or If..Else Statement within a Sql statement that would accomplish this.
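If a plain T-SQL pass over the results table is acceptable instead of a Case or If..Else construct, a sketch with ROW_NUMBER, assuming the table is named dbo.Results and the key column is ID; it returns one row per ID and excludes the rest:

SELECT *
FROM (
    SELECT r.*,
           -- the ORDER BY decides which duplicate survives; replace ID with a meaningful column
           ROW_NUMBER() OVER (PARTITION BY r.ID ORDER BY r.ID) AS rn
    FROM   dbo.Results AS r
) AS x
WHERE x.rn = 1

The same ranking can drive a DELETE (WHERE rn > 1 through a CTE) if the duplicates should be removed from the table itself rather than just filtered out of the result set.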
I need to create a function that replaces the data in a column with an 'X' based on the LEN of the data in the column. I created one that does a replacement, but it fills the column based on the max data length, and not the current length of the string or integer. An example of what I'm trying to accomplish.
Original data in a varchar(30) column:
thisisavalue
thisisanothervalue
thisisanothervalueagain
shortval
replaced with:
xxxxxxxxxx
xxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxx
xxxxxxx
My current function is replacing the data like this:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
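A minimal sketch of such a function (the function name is made up), sizing REPLICATE by the LEN of the actual value rather than the column's declared width, so one 'x' is produced per character:

CREATE FUNCTION dbo.MaskValue (@value varchar(30))
RETURNS varchar(30)
AS
BEGIN
    -- LEN ignores trailing spaces, which is usually what is wanted here
    RETURN REPLICATE('x', LEN(@value))
END

-- example usage: UPDATE dbo.MyTable SET MyColumn = dbo.MaskValue(MyColumn)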
I have 4 Tablix controls; 2 of them get data from Server 1 and the other 2 get data from Server 2. I have set the NoRowsMessage "=Data Not Available for the Selected Values" for all 4 Tablix. Now, if data is not available from Server 1, I must show "Data Not Available for the Selected Values" only once in the output, but at the moment it appears twice because both of those Tablix have no rows. Similarly, if data is not available from Server 2, it should show "Data Not Available for the Selected Values" only once in my output. If data is not available from all the Tablix, the report output should still show "Data Not Available for the Selected Values" only once.
I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source may not exist in SQL.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same in SQL.
Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.
I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.
The result of this query would be exactly what I need to import. How can I do this in SSIS? I couldn't figure out how to join tables in different data sources yet.
In fact I cannot write a stored procedure to do that, since one of the sources is in a data source that is not SQL Server. I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do. Another possibility is to use the Merge Join, but due to the sorting I believe its performance would be terrible!
I'm in the middle of developing a database for a hospital that measures its audits, in-house operations, and finance. What we currently have and do every day is collect data from a large database that is real time with patient data, progress, information, etc., and dump it into a data warehouse that runs on TSI/Eclipsys. We run reports using a number of programs and dump them into Excel sheets that have charts, reports, etc. The database I'm developing won't come solely from the TSI/Eclipsys source, but this is the only source that's updated regularly. I don't want to keep it in sync with TSI/Eclipsys for fear that every day when it updates, data may be lost, not read, or worse won't be up to date if there is a problem. My question is: is it possible to run a query from SQL Server 2005 that will take that data upon request, using the reporting features of SQL Server 2005? i.e. if I need to run a report on measure B in department 12 from Jan 1-Feb 1, instead of being in sync, can I just write queries to take that information rather than double the data and take up twice the space and trouble? FYI, these data types rarely change in the TSI/Eclipsys data warehouse. This sure was a long question and I didn't intend it to be. Thanks for listening and hope to hear back.
Based on a table like below I have created a report so that I can compare number of items in the main warehouse (LOCATION1) and the outlets (LOCATION2 and LOCATION3).
Now the issue starts when I add a parameter to my report for the user to choose which outlets (LOCATIONs) he wants in the equation. I know how to make a column disappear based on a parameter value, but how do I take it out of the equation? At the moment, when the user selects only LOCATION2 and not LOCATION3, the data is not filtered correctly:
Ideally I would like the user to select arbitrary outlets (the warehouse would be static on the report), compare one or more of them, and only show records that are 0 in the outlets.
Here's something that I need to do. Might be pretty simple for you guys. :)
I have a table of Employees. All the employees work in some department, so I have a table of Departments too. The Employee table consists of details like EmpID, FirstName, LastName, SAP etc. The Dept table consists of TeamID, TeamNo. Now, I have another table called Emp_Team. This table basically maps the employees to the departments by taking EmpID and TeamID. There's one more column in this table, which is a date. This date is required because when a person resigns (say today) he won't feature in the headcount for July 08, but till June 08 he was there, and this is how I maintain my history. E.g. all the employees in the Emp_Team table have the date 01/06/2008 for this month, so in the future if I query for the employees who worked in June I will get this list. Now, I want to copy all this data into the same table again and leave out any people who have resigned. Their resignation status is in the Employee table, where you have their last working date as well. So, when I add all this data with the date 01/07/2008 I want to exclude any employees whose last working date is before that.
Can this be done or I have to change my design? In case it can be - How?
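A hedged sketch of the copy with the existing design. The column names [Date] in Emp_Team and LastWorkingDate in Employee are assumptions (treated as NULL while the person is still employed), and the date literals use the unambiguous yyyymmdd form:

INSERT INTO Emp_Team (EmpID, TeamID, [Date])
SELECT et.EmpID, et.TeamID, '20080701'          -- new month's snapshot date
FROM   Emp_Team AS et
JOIN   Employee AS e ON e.EmpID = et.EmpID
WHERE  et.[Date] = '20080601'                   -- copy forward from last month's snapshot
  AND (e.LastWorkingDate IS NULL OR e.LastWorkingDate >= '20080701')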
I would like to transfer selected data from an ODBC-based table to an OLEDB-based table. However, there isn't a data flow source on the Data Flow design screen to accommodate such an action. Please help!
I've got some Excel files controlled by a vendor which change frequently. The only thing that does not change is the header name of each column.
So my question is: is there any way to create a new table based on the selected Excel file, including the column names, in SSIS? So that I can use the DataReader as a source to select the columns I am interested in and start the integration.
I am attempting to create a stored procedure that will launch at report runtime to summarize data in a table into a table that will reflect period data using an array type field. I know how to execute one line but I am not sure how to run the script so that it not only summarizes the data below but also creates and drops the table.
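As a hedged sketch only (the table and column names below are made up, since the real structures aren't shown), a procedure that drops, recreates, and repopulates a period summary table at report runtime might look like this:

CREATE PROCEDURE dbo.BuildPeriodSummary
AS
BEGIN
    IF OBJECT_ID('dbo.PeriodSummary', 'U') IS NOT NULL
        DROP TABLE dbo.PeriodSummary

    -- one row per calendar month, summarizing the assumed detail table
    SELECT DATEADD(month, DATEDIFF(month, 0, d.TransDate), 0) AS PeriodStart,
           SUM(d.Amount) AS TotalAmount
    INTO   dbo.PeriodSummary
    FROM   dbo.Detail AS d
    GROUP BY DATEADD(month, DATEDIFF(month, 0, d.TransDate), 0)
END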