I have a table 'stores' with 3 columns (storeid, article, doc), and a second table 'allstores' with 3 columns (storeid (always 'ALL'), article, doc). The 'stores' table's storeid column holds an actual store id, and each store can have multiple article and doc rows. The 'allstores' table has 'ALL' as the storeid for every article and doc combination; it acts as the master lookup table of all possible article/doc combinations. The 'stores' table holds the actual article and doc rows per storeid.
What I want to pull is every article/doc combination that exists in the 'allstores' table but does not exist in the 'stores' table, per storeid. So if a combination exists in 'allstores' and store 50 does not use it but store 51 does, I want output showing storeid 50 and the combination that is missing for that store. Here is an example:
'allstores'               'stores'
storeid  doc   article    storeid  doc   article
ALL      0010  001        101      0010  001
ALL      0010  002        101      0010  002
ALL      0011  001        102      0011  002
ALL      0011  002
So I want the query to pull the combinations from 'allstores' that do not exist in 'stores', which in this simplified view would be the 3rd record, "ALL 0011 001".
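One way to get this, sketched below under the assumption that the columns are named exactly storeid, doc and article: pair every distinct storeid from 'stores' with every row of 'allstores', then keep the pairs that have no matching row in 'stores'.

SELECT s.storeid, a.doc, a.article
FROM (SELECT DISTINCT storeid FROM stores) AS s
CROSS JOIN allstores AS a
WHERE NOT EXISTS (SELECT 1
                  FROM stores AS st
                  WHERE st.storeid = s.storeid
                    AND st.doc     = a.doc
                    AND st.article = a.article)
ORDER BY s.storeid, a.doc, a.article;

With the sample data above this returns (101, 0011, 001), (101, 0011, 002), (102, 0010, 001), (102, 0010, 002) and (102, 0011, 001), i.e. every combination each store is missing.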
I have databases that have zero keys defined: no foreign key constraints, no unique constraints. I cannot assume that an IDENTITY column is the Id. The end goal is to be able to output that [Table 1].[TableTwoId] = [Table 2].[Id], but I cannot assume that all linkage in the database will be as simple as "if the field name contains a table name + Id then it is the Id in that table."
I have currently written a script that cycles through all columns and starts identifying keys in individual tables based on the distinctness of the values in each column (i.e. Column1 is 99% distinct, so it is the unique value). It will also combine columns if the linkage is a combination of one or more columns from Table 1 matching the same number of columns in Table 2. This takes a long time and is somewhat unreliable, as IDENTITY columns may seem to relate to one another when they don't.
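For what it's worth, the per-column distinctness measurement can be done in one set-based pass per table with generated SQL rather than looping column by column. A rough sketch, assuming approximate ratios are good enough and skipping types that COUNT(DISTINCT) can't handle:

DECLARE @sql nvarchar(max) = N'';

-- Generate one SELECT per table/column pair and glue them together with UNION ALL
SELECT @sql = @sql
    + N'SELECT ' + QUOTENAME(t.name, '''') + N' AS table_name, '
    + QUOTENAME(c.name, '''') + N' AS column_name, '
    + N'COUNT(DISTINCT ' + QUOTENAME(c.name) + N') * 1.0 / NULLIF(COUNT(*), 0) AS distinct_ratio '
    + N'FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' UNION ALL '
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name NOT IN ('text', 'ntext', 'image', 'xml', 'geometry', 'geography');

SET @sql = LEFT(@sql, LEN(@sql) - 10);   -- strip the trailing ' UNION ALL'
EXEC sys.sp_executesql @sql;

It still reads every table, but once per table instead of once per column. The IDENTITY false positives remain: a high distinct_ratio only says a column is a candidate key, not that it is the target of another table's reference.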
I have a table 'applicants' with a unique applicant id, and another table 'reviews' which has a unique id and Emplid and contains a general program name like Math, and may then contain 3 additional rows for specific programs like Calculus, Algebra, Geometry, etc.
There may or may not be a record in reviews for each applicant id, and there can be one or more records in reviews depending on the level of review (General or Specific).
All the general reviews have "Math" as Program_code, but if there are more reviews they can have Program_code values like "Cal", "Abr", "Geo".
I want to join the tables so I can get all the records from both tables, but only the reviews records where Program_code is "Math". How can I do that?
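A minimal sketch, assuming applicants.applicant_id matches reviews.Emplid (the join column is an assumption, since it isn't stated): put the Program_code filter in the ON clause so applicants without a "Math" review are still returned.

SELECT a.*, r.*
FROM applicants AS a
LEFT JOIN reviews AS r
       ON r.Emplid = a.applicant_id
      AND r.Program_code = 'Math';

If you only want applicants that actually have a "Math" review, change the LEFT JOIN to an INNER JOIN; putting the filter in a WHERE clause instead of the ON clause would silently turn the LEFT JOIN into an inner join.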
I have this T-SQL code which gets some table stats on one database at a time. I was wondering how I would get it to loop through all databases so it pulls the stats from all tables in all databases. Here is my code:
Select object_schema_name(UStat.object_id) + '.' + object_name(UStat.object_id) As [Object Name]
      ,Case When Sum(User_Updates + User_Seeks + User_Scans + User_Lookups) = 0 Then Null
            Else Cast(Sum(User_Seeks + User_Scans + User_Lookups) As Decimal)
                 / Cast(Sum(User_Updates + User_Seeks + User_Scans + User_Lookups) As Decimal)
       End As [Read Ratio]
From sys.dm_db_index_usage_stats UStat
Group By UStat.object_id
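One way to run it in every database, sketched below and assuming one result set per database is acceptable: loop over sys.databases and execute the statement as dynamic SQL with a USE prefix, so that OBJECT_NAME/OBJECT_SCHEMA_NAME resolve in the right database. Note that sys.dm_db_index_usage_stats is server-wide, so the inner query also filters on database_id.

DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND database_id > 4;                       -- skip system databases (assumption: only user DBs wanted)

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        SELECT DB_NAME() AS [Database Name]
              ,OBJECT_SCHEMA_NAME(UStat.object_id) + ''.'' + OBJECT_NAME(UStat.object_id) AS [Object Name]
              ,CASE WHEN SUM(User_Updates + User_Seeks + User_Scans + User_Lookups) = 0 THEN NULL
                    ELSE CAST(SUM(User_Seeks + User_Scans + User_Lookups) AS decimal)
                         / CAST(SUM(User_Updates + User_Seeks + User_Scans + User_Lookups) AS decimal)
               END AS [Read Ratio]
        FROM sys.dm_db_index_usage_stats UStat
        WHERE UStat.database_id = DB_ID()        -- the DMV is server-wide, so limit it to the current DB
        GROUP BY UStat.object_id;';

    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cur INTO @db;
END

CLOSE db_cur;
DEALLOCATE db_cur;

If you want a single combined result set, insert each iteration's output into a temp table instead of returning it directly.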
I have 3 tables: Customer, Sales Cost Charge and Sales Price. I have joined the Customer table to the Sales Price table with a left outer join into a new table.
I now need to join the data in the new table to Sales Cost Charge. However, note that there is data in the Sales Price table that is not in the Sales Cost Charge table, and data in the Sales Cost Charge table that is not in the Sales Price table, but I need to get all the data. E.g. if our application shows 15 records, the Sales Price table may have 7 records and the Sales Cost Charge table 8, which makes 15 records.
I am struggling to match the records. I have also tried a left outer join to the Sales Cost Charge table, but then I only get the 7 records that are in the Sales Price table. See code below.
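Whatever the existing join looks like, since rows can exist on either side, a FULL OUTER JOIN keeps both sets. A minimal sketch, assuming the new (Customer + Sales Price) table is called CustomerPrice and that the two tables share CustomerID/ItemID key columns; those names are placeholders, since the actual columns aren't shown.

SELECT COALESCE(cp.CustomerID, scc.CustomerID) AS CustomerID
      ,COALESCE(cp.ItemID, scc.ItemID)         AS ItemID
      ,cp.SalesPrice
      ,scc.SalesCostCharge
FROM CustomerPrice AS cp
FULL OUTER JOIN SalesCostCharge AS scc
       ON scc.CustomerID = cp.CustomerID
      AND scc.ItemID     = cp.ItemID;

COALESCE is used in the SELECT list because the key columns are NULL on whichever side has no match, which is exactly how you end up with 7 + 8 = 15 rows.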
I have a table with the list of all TableNames in the database. I would like to query that table and find any tables used in any stored procedure in that DB.
Select *
from dbo.MyTableList
where Table_Name in (Select Name
                     From sys.procedures
                     Where OBJECT_DEFINITION(object_id) LIKE '%MY_TABLE_NAME%'
                     Order by name)
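The query above compares table names to procedure names, so it won't find usages (and the ORDER BY inside the IN subquery will raise an error anyway). One way to do it, sketched below and assuming dbo.MyTableList has a Table_Name column as shown: check each table name against every procedure's definition.

SELECT DISTINCT
       tl.Table_Name
      ,p.name AS Procedure_Name
FROM dbo.MyTableList AS tl
JOIN sys.procedures AS p
  ON OBJECT_DEFINITION(p.object_id) LIKE '%' + tl.Table_Name + '%'
ORDER BY tl.Table_Name, p.name;

A plain LIKE match can return false positives when one table name is a substring of another; sys.sql_expression_dependencies gives more precise results where the references can be resolved.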
How do I find the names of the tables owned by a particular user in SQL Server, and how do I display the distinct object types owned by that user?
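A minimal sketch, assuming "owned" means the object's principal_id or, failing that, the owning schema's principal; the user name 'some_user' is a placeholder.

-- Tables owned by the user (directly or via their schema)
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE USER_NAME(COALESCE(t.principal_id, s.principal_id)) = 'some_user';

-- Distinct object types owned by the user
SELECT DISTINCT o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE USER_NAME(COALESCE(o.principal_id, s.principal_id)) = 'some_user';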
I am trying to pull in columns from multiple tables but am getting an error when I run the code:
Msg 4104, Level 16, State 1, Line 1
The multi-part identifier "a.BOC" could not be bound.
I am guessing that my syntax is completely off.
SELECT b.[PBCat]
      ,c.[VISN]      --- I am trying to pull in the column [VISN] from the table [DIMSTA]. Current status: failure
      ,a.[Station]
      ,a.[Facility]
      ,a.[CC]
      ,a.[Office]
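The "multi-part identifier could not be bound" error usually means the alias in question (here "a") is never defined in the FROM clause, or the column doesn't exist on the table that alias points at. A sketch of the shape the statement needs; the table names and join columns below are assumptions, since the FROM clause wasn't posted.

SELECT b.[PBCat]
      ,c.[VISN]
      ,a.[Station]
      ,a.[Facility]
      ,a.[CC]
      ,a.[Office]
      ,a.[BOC]                                        -- the column the error complains about
FROM dbo.[FactTable]    AS a                          -- hypothetical base table
JOIN dbo.[PBCatLookup]  AS b ON b.[CC] = a.[CC]       -- hypothetical lookup
JOIN dbo.[DIMSTA]       AS c ON c.[Station] = a.[Station];

Every alias used in the SELECT list (a, b, c) has to appear in the FROM/JOIN clauses, and the referenced column (e.g. [BOC]) has to exist on that aliased table.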
I have two tables, tabA (cola1, cola2, cola3) and tabB (colb1, colb2, colb3, colb4), which I need to join on all 3 columns of table A.
Of the 3 columns in tabA, some can be NULL; in that case I want to check the join condition on the remaining columns only, so it's a conditional join. Let me rephrase what I am trying to achieve: I need to check whether a column in the join condition is NULL in my first table (tabA). If so, I need to check the join condition on the other two columns; if the 2nd column is also NULL, I need to check the join condition on the third column only.
What I am trying to do is shown below. It's working, but it is very slow when one of the tables is huge. Can I optimize it or rewrite it in a better way?
--- First create two tables
Create table tabA (cola1 nvarchar(100), cola2 nvarchar(100), cola3 nvarchar(100))
Insert into tabA values (NULL, 'A1', 'A2')
Select * from tabA
Create table tabB (colb1 nvarchar(100), colb2 nvarchar(100), colb3 nvarchar(100), colb4 nvarchar(100))   -- column types assumed to match tabA
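A sketch of one way to express the conditional join, assuming colb1/colb2/colb3 are the counterparts of cola1/cola2/cola3 (the original join isn't shown, so that mapping is a guess). Splitting the NULL patterns into separate branches is usually what speeds this up, because each branch is a plain equality join instead of an OR/ISNULL expression the optimizer can't seek on.

-- No NULLs in tabA: join on all three columns
SELECT a.*, b.*
FROM tabA AS a
JOIN tabB AS b
  ON a.cola1 = b.colb1 AND a.cola2 = b.colb2 AND a.cola3 = b.colb3
WHERE a.cola1 IS NOT NULL

UNION ALL

-- cola1 is NULL: join on the remaining two columns
SELECT a.*, b.*
FROM tabA AS a
JOIN tabB AS b
  ON a.cola2 = b.colb2 AND a.cola3 = b.colb3
WHERE a.cola1 IS NULL AND a.cola2 IS NOT NULL

UNION ALL

-- cola1 and cola2 are NULL: join on the third column only
SELECT a.*, b.*
FROM tabA AS a
JOIN tabB AS b
  ON a.cola3 = b.colb3
WHERE a.cola1 IS NULL AND a.cola2 IS NULL;

With indexes on tabB(colb1, colb2, colb3), tabB(colb2, colb3) and tabB(colb3), each branch can seek instead of scanning the big table per row.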
I have two dynamic pivot tables that I need to join. The problem I'm running into is that at execution, one is ~7500 characters and the other is ~7000 characters.
I can't set them both up as CTEs and query them; the statement gets truncated.
I can't use a temp table because those get dropped when the query finishes.
I can't use a real table because the insert statement gets truncated.
Do I have any other good options, or am I in Spacklesville?
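In practice the ~7500/7000-character limit usually comes from building the dynamic SQL in varchar(8000)/nvarchar(4000) variables, or from judging the result by PRINT, which truncates its display but not execution. A sketch of one way through it, where @pivot1/@pivot2 are stand-ins for the two generated pivots (all names here are placeholders): keep everything in nvarchar(max), and stage one side in a local temp table created in the outer batch, which the dynamic SQL can see and which survives until the batch ends.

DECLARE @pivot1 nvarchar(max), @pivot2 nvarchar(max), @sql nvarchar(max);

-- Stand-ins for the two generated pivot statements; in practice these come from the
-- existing dynamic-pivot builders and just need to be built as nvarchar(max) end to end.
SET @pivot1 = CAST(N'SELECT 1 AS [key], 10 AS val1' AS nvarchar(max));
SET @pivot2 = CAST(N'SELECT 1 AS [key], 20 AS val2' AS nvarchar(max));

-- Created in the outer batch, so it is visible inside sp_executesql and is NOT dropped
-- when the dynamic batch finishes (a temp table created inside the dynamic SQL would be).
CREATE TABLE #pivot1_results ([key] int, val1 int);   -- columns must match the first pivot's output

SET @sql = N'INSERT INTO #pivot1_results ([key], val1) ' + @pivot1 + N';';
EXEC sys.sp_executesql @sql;

SET @sql = N'SELECT p1.[key], p1.val1, p2.val2
FROM #pivot1_results AS p1
JOIN (' + @pivot2 + N') AS p2 ON p2.[key] = p1.[key];';
EXEC sys.sp_executesql @sql;

DROP TABLE #pivot1_results;

A global temp table (##name) is another option if the two statements have to run in separate sessions.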
I am having problems joining these two tables and returning the correct values. The issue is that I have a work order table and a revenue table. I only want to return the sum of the revenue when the revenue comes after the work order date. That is simple enough, but it gets tricky when there are multiple work orders for the same ID. For those instances, we only want the sum of the revenue if it falls after the work order date and before any other work order date. So ID 312187014 should only have the 9/5 revenue from below, not the 7/7 or 8/6 revenue, because the 8/7 work order date comes after those revenue dates and will therefore have no revenue tied to it, since there is a 9/3 work order that ties to the 9/5 revenue. Additionally, the 412100368 ID has a 7/7 work order that ties to the 7/26 revenue, and its 8/7 work order ties to the 8/23 and 9/20 revenue.
--===== Create the test tables
CREATE TABLE #workorder
(
    Id     varchar(20),
    wo     varchar(10),
    wodate datetime,
    amount float
)
GO
CREATE TABLE #revenue
(
    Id      varchar(20),   -- columns assumed from the description above
    revdate datetime,
    revenue float
)
GO
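A sketch of one way to bucket the revenue, using the assumed #revenue columns above: give each work order a window that runs from its own date to the next work order date for the same Id (open-ended for the latest one), then sum the revenue that falls inside that window.

;WITH wo_windows AS
(
    SELECT w.Id,
           w.wo,
           w.wodate,
           LEAD(w.wodate) OVER (PARTITION BY w.Id ORDER BY w.wodate) AS next_wodate
    FROM #workorder AS w
)
SELECT ww.Id,
       ww.wo,
       ww.wodate,
       SUM(r.revenue) AS revenue_after_wo
FROM wo_windows AS ww
LEFT JOIN #revenue AS r
       ON r.Id = ww.Id
      AND r.revdate >= ww.wodate
      AND (ww.next_wodate IS NULL OR r.revdate < ww.next_wodate)
GROUP BY ww.Id, ww.wo, ww.wodate;

LEAD needs SQL Server 2012 or later; on older versions the next work order date can be fetched with an OUTER APPLY (SELECT TOP 1 ... ORDER BY wodate) instead.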
I need to search my database for stored procedures in which an UPDATE statement's WHERE clause uses a column other than the table's primary key.
If the employee table has empId as its primary key and an UPDATE query uses empName in the WHERE clause to update an employee record, then that SP should be listed. There are hundreds of tables with their primary keys and thousands of SPs in the database. How can I find the SPs whose WHERE clause uses some column other than the primary key?
Any other hint or query to identify such statements that lock tables would also help; so far I have only found the above few queries that are not using the primary key in the WHERE clause.
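Nothing in the catalog exposes which columns appear in a WHERE clause, so anything short of actually parsing the procedure text is a heuristic. A rough first pass is sketched below; it assumes single-column primary keys and a plain "UPDATE <table>" spelling in the procedure text, and it will both miss cases and flag false positives, so treat it only as a shortlist for manual review.

SELECT DISTINCT
       p.name AS procedure_name,
       t.name AS updated_table,
       c.name AS pk_column
FROM sys.procedures AS p
CROSS JOIN sys.tables AS t
JOIN sys.indexes       AS i  ON i.object_id = t.object_id AND i.is_primary_key = 1
JOIN sys.index_columns AS ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns       AS c  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE OBJECT_DEFINITION(p.object_id) LIKE '%UPDATE%' + t.name + '%'   -- procedure updates this table (crudely matched)
  AND OBJECT_DEFINITION(p.object_id) NOT LIKE '%' + c.name + '%';     -- and never mentions its PK column at all

For anything more precise, parsing the definitions outside T-SQL (for example with the ScriptDom parser) is the realistic route.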
I have a database of 900+ tables with around 3000 SPs and views. I manually reviewed a few tables, found that the tables are not referenced with foreign keys, and applied a few. There are lots of tables, and lots of SPs using them in join statements. Is there any way I can get each table's missing references, any DMV or other manual script that tells me about this?
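There is no DMV that knows which foreign keys should exist, but the catalog can at least shortlist the tables that take part in no FK at all and show which code references each table; deciding the actual relationships still needs a human (or a column-matching script). A sketch, with dbo.SomeTable as a placeholder name:

-- Tables that are not on either side of any foreign key
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (SELECT 1
                  FROM sys.foreign_keys AS fk
                  WHERE fk.parent_object_id = t.object_id
                     OR fk.referenced_object_id = t.object_id)
ORDER BY s.name, t.name;

-- Procedures/views/functions that reference a given table (shows where the joins live)
SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id)        AS referencing_object
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id = OBJECT_ID('dbo.SomeTable');   -- placeholder table name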
I am trying to create a data dictionary for a huge application which has around 300 tables in the database. When I perform an operation in the application, some tables are updated. Can you help me find out how we can find the last updated tables in the database?
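One low-effort way, sketched below: sys.dm_db_index_usage_stats records the last time each index (and therefore each table) was modified. Keep in mind the counters reset when the instance restarts, so this only reflects activity since the last restart.

SELECT OBJECT_SCHEMA_NAME(us.object_id) AS schema_name,
       OBJECT_NAME(us.object_id)        AS table_name,
       MAX(us.last_user_update)         AS last_user_update
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
  AND us.last_user_update IS NOT NULL
GROUP BY us.object_id
ORDER BY last_user_update DESC;

Run this in the application's database, then perform the operation in the application and run it again to see which tables moved to the top.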
I have a database which gets its daily feed from an FTP file. Is there any way I can figure out the empty tables in the database which do not get any data from the feed?
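A sketch of one way to list the tables that currently hold zero rows, using the partition row counts the engine maintains (accurate enough for an "is it empty?" check):

SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.dm_db_partition_stats AS ps
     ON ps.object_id = t.object_id
    AND ps.index_id IN (0, 1)        -- heap or clustered index only
GROUP BY s.name, t.name
HAVING SUM(ps.row_count) = 0
ORDER BY s.name, t.name;

Running it before and after a feed load (or comparing against the last_user_update approach above) shows which tables the feed never touches.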
I have 2 tables that I need to merge; let me explain. I have a range of product IDs that have a product grouping of *, meaning all product groups. So I have a table with products and one with around 100 groups.
ProdID   ProdGrp
------   -------
11       *
12       *
ProdGrp   ProdGrpDesc
-------   ------------
A         Prod Group A
B         Prod Group B
C         Prod Group C
I need a table which looks like the below but I have no joining mechanism
ProdID   ProdGrp
------   -------
11       A
11       B
11       C
12       A
12       B
12       C
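Since a '*' row means "every group", there is no column to join on; a CROSS JOIN produces every product/group pairing. A minimal sketch, assuming the tables are called Products and ProductGroups (the table names are placeholders):

SELECT p.ProdID, g.ProdGrp
FROM Products AS p
CROSS JOIN ProductGroups AS g
WHERE p.ProdGrp = '*'

UNION ALL

-- keep any rows that already name a specific group, if such rows exist
SELECT p.ProdID, p.ProdGrp
FROM Products AS p
WHERE p.ProdGrp <> '*';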
I have two databases with different collation sequences; call them A (SQL_Latin1_General_CI_AS) and B (Latin1_General_CI_AS). Now I need to join between the two (including through temp tables, with the server collation being Latin1_General_CI_AS). In order to get rid of the errors when trying to do so, I changed all my statements in the WHERE and ON clauses to use an explicit COLLATE clause.
If I were to change the collation of the character-typed columns in database A to the one used in database B, would it make any difference in terms of performance, or would it just be a useless exercise? I ask because many of those columns are part of a primary or foreign key that would need to be dropped and then recreated after changing the collation, and I'd prefer to save myself the effort of writing scripts to do so if the answer is no.
BTW, I tried to change the database's collation, but that leaves the collation of the existing columns unchanged.
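For reference, this is the shape of the workaround in the join itself; the table and column names below are placeholders. Forcing a collation on a column inside the ON/WHERE clause can prevent index seeks on that column, which is usually the main performance cost of leaving the collations mismatched.

SELECT a.CustomerCode, b.CustomerName
FROM A.dbo.Customers AS a
JOIN B.dbo.Customers AS b
  ON a.CustomerCode COLLATE Latin1_General_CI_AS = b.CustomerCode;

If the columns in A were altered to match B's collation, the COLLATE clauses (and the conversions they imply) would disappear and the joins could use indexes normally; whether that is worth the key drop/recreate effort depends on how hot these cross-database joins are.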
I have created some dynamic SQL to check a temporary table that is created on the fly for any columns that contain data. If a column does, its name is added to the dynamic SQL; if not, it is excluded. This looks like:
If (select sum(Case when [Sat] is null then 0 else 1 end) from #TABLE) >= 1
begin
    set @OIL_BULK = @OIL_BULK + '[Sat]' + ','
END
However, I am currently running this on over 230 columns and large tables (1.3 million rows), and it is quite slow. How can I dynamically create a SQL script that only selects the columns in the table where there is data, in a speedier manner? Unfortunately it has to be on the fly because the temporary table is created on the fly.
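The per-column IF runs a separate scan of the 1.3M-row table for each of the 230+ columns, which is what makes it slow. A sketch of a single-pass alternative, reading the column list from tempdb.sys.columns; the @cols parameter name is a placeholder, and text/image/xml columns would need to be filtered out because COUNT() does not accept them.

DECLARE @sql nvarchar(max), @OIL_BULK nvarchar(max) = N'';

-- Build one statement that scans #TABLE a single time: COUNT(col) ignores NULLs,
-- so each CASE appends the quoted column name only when that column has data.
SELECT @sql =
    N'SELECT @cols = @cols'
    + (SELECT N' + CASE WHEN COUNT(' + QUOTENAME(c.name) + N') > 0 THEN '
              + QUOTENAME(QUOTENAME(c.name) + N',', '''') + N' ELSE N'''' END'
       FROM tempdb.sys.columns AS c
       WHERE c.object_id = OBJECT_ID('tempdb..#TABLE')
       FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)')
    + N' FROM #TABLE;';

EXEC sys.sp_executesql @sql, N'@cols nvarchar(max) OUTPUT', @cols = @OIL_BULK OUTPUT;

-- @OIL_BULK now holds something like N'[Sat],[Sun],' after a single scan of #TABLE,
-- ready to be spliced into the rest of the dynamic SQL (trim the trailing comma as before).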
Given one table, Table1, with columns Key1 (int), Key2 (int), and Type (varchar)...
I would like to get the rows where Type is 'TypeA' and Key2 is NULL that do NOT have a corresponding row in the table where Type is 'TypeB' and that row's Key2 equals the first row's Key1.
I would like to return only the row where Key1 = 4, because that row meets the criteria of Type = 'TypeA' / Key2 = NULL and does not have a corresponding row with Type = 'TypeB' whose Key2 equals its Key1.
I have tried this and it doesn't work...
SELECT t1.Key1, t1.Key2, t1.Type
FROM Table1 t1
WHERE t1.Key2 IS NULL
  AND t1.Type LIKE 'TypeA'
  AND t1.Key1 NOT IN (SELECT Key1
                      FROM Table1 t2
                      WHERE t1.Key1 = t2.Key2
                        AND t1.Key1 <> t2.Key1
                        AND t2.Type LIKE 'TypeB')
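A sketch of the same intent using NOT EXISTS, which avoids both the NOT IN/NULL pitfalls and the mixed correlation in the subquery above (column and table names are taken from the question):

SELECT t1.Key1, t1.Key2, t1.Type
FROM Table1 AS t1
WHERE t1.Type = 'TypeA'
  AND t1.Key2 IS NULL
  AND NOT EXISTS (SELECT 1
                  FROM Table1 AS t2
                  WHERE t2.Type = 'TypeB'
                    AND t2.Key2 = t1.Key1);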