SQL Server 2012 :: Using CTE To Aggregate Separate Tables?
Aug 20, 2014
I am trying to tie together tables that show quantities of a product committed to an order and quantities on hand by a location.
My end result should look like the below example.
Item Location QtyOnHandByLocation SumQtyCommitTotal
Prod1 NJ 10 10
Prod1 NY 10 0
Prod1 FL 0 0
Prod1 PA 0 0
So I can see I have 10 items in NJ On Hand and Committed to an order. 10 available in NY but not on an order. Then the other two locations have no quantities.
Below is the CTE, but it produces inaccurate results. I've tried running it several different ways by playing with the grouping but have had no luck thus far.
--create the temp table
Create table #SalesLine
(
      No           varchar(50) not null
    , LocationCode varchar(50) not null
    , QtyCommit    int         not null
)
create table #ItemLedgerEntry
[code]....
I am close to the desired results but can't find a way.
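A minimal sketch of one way to combine the two temp tables, assuming #ItemLedgerEntry carries No, LocationCode, and Quantity columns (the quantity column name is guessed, since the DDL was cut off above): aggregate each table in its own CTE, then join on item and location. Depending on whether every item/location pair exists in the ledger, a FULL JOIN may be needed instead of the LEFT JOIN.

;WITH OnHand AS (
    SELECT No, LocationCode, SUM(Quantity) AS QtyOnHandByLocation
    FROM #ItemLedgerEntry
    GROUP BY No, LocationCode
),
CommittedQty AS (
    SELECT No, LocationCode, SUM(QtyCommit) AS SumQtyCommitTotal
    FROM #SalesLine
    GROUP BY No, LocationCode
)
SELECT o.No            AS Item,
       o.LocationCode  AS Location,
       o.QtyOnHandByLocation,
       ISNULL(c.SumQtyCommitTotal, 0) AS SumQtyCommitTotal
FROM OnHand o
LEFT JOIN CommittedQty c
       ON c.No = o.No
      AND c.LocationCode = o.LocationCode;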
We have a large data warehouse, about 50 TB in size. The tables are placed in filegroups based on schema, so facts, dimensions, and raw data each sit on separate filegroups. I am wondering whether it would make sense to separate the large fact tables that have billions of rows so that they reside on filegroups of their own.
Hi,

As I wrote my message the solution came to me, so I thought I would post anyway for others to see in case it was useful. Here is some sample DDL for this question:

CREATE TABLE Source (my_value INT NOT NULL)
GO
INSERT INTO Source VALUES (1)
INSERT INTO Source VALUES (2)
INSERT INTO Source VALUES (3)
GO
CREATE TABLE Destination (value_type VARCHAR(10) NOT NULL, value INT)
GO

I would like to fill the destination with a row for the COUNT, SUM, MIN, and MAX. My own problem is of course much more complex than this, but this is the basic stumbling block for me now. So, the rows that I would expect to see in Destination are:

value_type value
---------- -----
COUNT      3
SUM        6
MIN        1
MAX        3

The solution that I came up with was to add a Value_Types table:

CREATE TABLE Value_Types (value_type VARCHAR(10) NOT NULL)
GO
INSERT INTO Value_Types VALUES ('COUNT')
INSERT INTO Value_Types VALUES ('SUM')
INSERT INTO Value_Types VALUES ('MAX')
INSERT INTO Value_Types VALUES ('MIN')
GO

Now the SQL is pretty simple:

SELECT V.value_type,
       CASE V.value_type
           WHEN 'COUNT' THEN COUNT(*)
           WHEN 'SUM'   THEN SUM(S.my_value)
           WHEN 'MAX'   THEN MAX(S.my_value)
           WHEN 'MIN'   THEN MIN(S.my_value)
       END
FROM Source S
INNER JOIN Value_Types V ON 1 = 1
GROUP BY V.value_type   -- needed so the non-aggregated value_type column is valid in the select list

-Tom.

P.S. - I know that I did not include primary or foreign keys in my DDL. I'll leave it as an exercise to the reader to figure those out. I think the code adequately explains the concept.
I am trying to add two separate columns from separate tables, i.e. column1 should be added to column2 when a row is inserted, and I want to use a trigger, but I don't know the syntax to use...
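A rough sketch of the trigger syntax only, with hypothetical tables TableA (id, col1) and TableB (id, col2, total) standing in for the real schema; the table, column, and linking-key names are assumptions, not the poster's actual objects.

CREATE TRIGGER trg_AddColumns
ON TableB
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- add col1 from TableA to col2 of the newly inserted TableB rows
    UPDATE b
    SET b.total = a.col1 + i.col2
    FROM TableB b
    JOIN inserted i ON i.id = b.id
    JOIN TableA  a ON a.id = i.id;
END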
I'm trying to write a query that returns the last 30 days of data and sums the amount by day. However, I need to do it for every year, not just the current one (I need to go back as far as 2000).
declare @t table (id int identity(1,1), dt datetime, amt MONEY)

insert into @t (dt, amt)
select '2014-11-30 23:39:35.717' as dt, 66 as amt UNION ALL
select '2014-11-30 23:29:16.747' as dt, 5 as amt UNION ALL
select '2014-11-22 23:25:33.780' as dt, 62 as amt UNION ALL
[Code] ....
--expected output
select '2014-11-30' AS dt, 71 AS Amt UNION ALL
select '2014-11-22' AS dt, 62 AS Amt UNION ALL
select '2014-11-20' AS dt, 66 AS Amt UNION ALL
select '2014-11-18' AS dt, 102 AS Amt UNION ALL
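One possible approach, sketched against the @t table above: keep each row if it falls in the 30 days leading up to today's month/day within that row's own year, then sum per calendar day. DATEFROMPARTS requires SQL Server 2012+, and anchoring the window on today's month/day is an assumption about what "last 30 days for every year" means here.

SELECT CAST(dt AS date) AS dt,
       SUM(amt)         AS Amt
FROM @t
WHERE dt >  DATEADD(DAY, -30, DATEFROMPARTS(YEAR(dt), MONTH(GETDATE()), DAY(GETDATE())))
  AND dt <= DATEFROMPARTS(YEAR(dt), MONTH(GETDATE()), DAY(GETDATE()))
GROUP BY CAST(dt AS date)
ORDER BY CAST(dt AS date) DESC;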
I am trying to create an aggregate table where the value is a rolling sum. Type a on date 1 is the sum of the values in the main table. Type a on date 2 is the sum of the values for type a on date 1 and date 2. Is this possible? I have been trying an UPDATE in T-SQL with SUM(CASE WHEN date <= date ...) statements but can't get it to run.
create table main_table (type nvarchar(10), date int, datavalues int);
insert into main_table values ('a', '1', 3);
insert into main_table values ('b', '1', 4);
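Since this is SQL Server 2012, a windowed SUM can produce the running total per type directly, without a self-joining UPDATE; a minimal sketch against main_table above:

SELECT type,
       date,
       SUM(datavalues) OVER (PARTITION BY type
                             ORDER BY date
                             ROWS UNBOUNDED PRECEDING) AS rolling_sum
FROM main_table;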
declare @LastDate datetime

SELECT @LastDate = max([LastUpdate])
FROM [exhibitor].[dbo].[blgBelongs]
WHERE (([Table1] = @module1 OR [Table2] = @module2)
    or ([Table2] = @module1 OR [Table1] = @module2)
   AND Exists (SELECT [Table1], [Table1ID]
               FROM [exhibitor].[dbo].[blgBelongs]
               WHERE table2 = 30 and table2ID = @dmn_ID))
Before I see @LastDate , I see this warning
Warning: Null value is eliminated by an aggregate or other SET operation.
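The warning appears because some LastUpdate values are NULL when MAX runs over them. One common way to make it go away without SET ANSI_WARNINGS OFF is to exclude the NULL rows explicitly; a minimal sketch, keeping the rest of the original predicate unchanged:

SELECT @LastDate = MAX([LastUpdate])
FROM [exhibitor].[dbo].[blgBelongs]
WHERE [LastUpdate] IS NOT NULL
  -- AND ... the original conditions on Table1/Table2 and the EXISTS go here unchanged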
In a t-sql 2012 query that will be updated in a stored procedure I am getting the following warning message:
Warning: Null value is eliminated by an aggregate or other SET operation. I would like to get rid of this warning message without just turning off warning messages.
I would like to change the sql so that it does not occur. The following is my new sql that generates the warning:
case when (coalesce(a.status, ae.status) = 'A')
      and (IsNull(ae.excuse, 'U') = 'U')
      and (IsNull(ae.code, 'DRC') = 'DRC')
     then sum(DATEDIFF(minute, pm.startTime, pm.endTime)
              - coalesce(pm.lunchTime, 0)
              - coalesce(a.presentMinutes, 0))
     else 0
end as DRCMinutes,
I'm trying to write a query to select various columns from 3 tables. In the WHERE clause I use a set of conditions, but the most important condition is that I only want to see results where ph.ProdHeaderDossierCode contains at least 25 lines of processed hours. I tried this with GROUP BY and HAVING, but I constantly get error messages on all the other columns that I want to see: "is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause". How can I make this so I can see all the information I need?
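One way to avoid the GROUP BY restriction is to compute the per-dossier line count with a window function in a CTE (or derived table) and then filter on it, so every column stays selectable. A minimal sketch, with ProcessedHours as a hypothetical name for the processed-hours table; the other two tables and their columns would be joined back in where indicated:

WITH counted AS (
    SELECT ph.*,
           COUNT(*) OVER (PARTITION BY ph.ProdHeaderDossierCode) AS LineCount
    FROM ProcessedHours ph        -- table name assumed
)
SELECT *                           -- add the joins to the other two tables and pick columns here
FROM counted
WHERE LineCount >= 25;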
I have a specific variation on the standard 'Column Invalid' question: I have this query that works fine:
SELECT vd.Question , csq.Q# , csq.Q_Sort , csq.Q_SubSort , AVG(CAST(vd.Response AS FLOAT)) AS AvgC , vd.RType
[Code] ....
When I add this second average column like this:
SELECT vd.Question , csq.Q# , csq.Q_Sort , csq.Q_SubSort , AVG(CAST(vd.Response AS FLOAT)) AS AvgC ,
[Code] ....
I get the error: Column 'dbo.vwData.Response' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Clearly things are in the right place before the change, so I can only assume that the OVER clause is my problem. Is this just not possible?
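If the goal is a second average over a different grouping, one workaround is to do the GROUP BY in a CTE and apply the windowed AVG on top of it, so the OVER clause never has to live inside the grouped query. A rough sketch built from the fragments above; the view/table names, join condition, and PARTITION BY column are assumptions:

WITH grouped AS (
    SELECT vd.Question,
           csq.[Q#],
           csq.Q_Sort,
           csq.Q_SubSort,
           AVG(CAST(vd.Response AS FLOAT)) AS AvgC
    FROM dbo.vwData vd
    JOIN dbo.csq csq ON csq.[Q#] = vd.[Q#]        -- join condition assumed
    GROUP BY vd.Question, csq.[Q#], csq.Q_Sort, csq.Q_SubSort
)
SELECT g.*,
       AVG(g.AvgC) OVER (PARTITION BY g.Q_Sort) AS AvgBySort   -- partition column assumed
FROM grouped g;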
I have two queries that give me the total sales amount for the current year, and the last year.
SELECT SUM([Sales (LCY)])
FROM [$Cust_ Ledger Entry] cle
LEFT OUTER JOIN dw.dim.FiscalDate fd ON fd.CalendarDate = cle.[Posting Date]
WHERE [Customer No_] = '10135'
  AND fd.CalendarYear = '2013'
[Code] ....
I would like to learn how to make this a single query that ends up with two columns and their summed totals, like it shows on the attached image.
This is my query without the columns I need:
SELECT c.CustomerNumber
     , c.Name
     , c.ChainName
     , c.PaymentTermsCode
     , cle.CreditLimit AS 'CreditLimit'
     , SUM(cle.Amount) AS 'Amount'
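A sketch of the conditional-aggregation approach, folding the two year filters into one pass. It is assembled from the fragments above, so the customer table name, the join back to the ledger, and the exact grouping columns are assumptions:

SELECT c.CustomerNumber,
       c.Name,
       c.ChainName,
       c.PaymentTermsCode,
       cle.CreditLimit AS CreditLimit,
       SUM(CASE WHEN fd.CalendarYear = '2014' THEN cle.Amount ELSE 0 END) AS AmountCurrentYear,
       SUM(CASE WHEN fd.CalendarYear = '2013' THEN cle.Amount ELSE 0 END) AS AmountLastYear
FROM Customer c                                               -- table name assumed
JOIN [$Cust_ Ledger Entry] cle ON cle.[Customer No_] = c.CustomerNumber
LEFT JOIN dw.dim.FiscalDate fd ON fd.CalendarDate = cle.[Posting Date]
WHERE fd.CalendarYear IN ('2013', '2014')
GROUP BY c.CustomerNumber, c.Name, c.ChainName, c.PaymentTermsCode, cle.CreditLimit;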
I am new to T-SQL and triggers. Any help will be appreciated.
I am trying to change this code to insert firstname, surname (taken from the employee table on DB A) into firstname, surname on the customer table of DB B, but also create cust_id on the customer table of DB B. Currently I am getting all rows of customer.cust_id filled with the same data whenever new data is inserted into firstname, surname of the employee table.
Create trigger gen_cust_id ON employee
for insert
AS
Update customer
SET cust_id = (
    SELECT Replicate('0', (4 - DATALENGTH(CONVERT(varchar(10), i.id))))
           + Convert(varchar(10), i.id)
           + Substring(i.lastname, 1, 3)
           + Substring(i.firstname, 1, 1)
    from employee C
    INNER JOIN inserted i on i.id = c.id)
from employee C
INNER JOIN inserted i on i.id = c.id
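The UPDATE above has no correlation between customer and the inserted employee rows, so every customer row receives the same generated value. A hedged sketch of one possible fix, assuming customer has a column (called employee_id here, purely as a placeholder) that ties each customer row to employee.id; if the customer row does not exist yet at insert time, an INSERT INTO customer would be used instead of the UPDATE:

CREATE TRIGGER gen_cust_id ON employee
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE cu
    SET cust_id = REPLICATE('0', 4 - DATALENGTH(CONVERT(varchar(10), i.id)))
                + CONVERT(varchar(10), i.id)
                + SUBSTRING(i.lastname, 1, 3)
                + SUBSTRING(i.firstname, 1, 1)
    FROM customer cu
    JOIN inserted i ON i.id = cu.employee_id;   -- linking column assumed
END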
How would I write a single SQL statement that counts how many bookIDs are listed for each customerID and how many magazineIDs are listed for each customerID, and have it return one table that looks like this:
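One way to get both counts in a single statement, sketched with hypothetical Customers, CustomerBooks(customerID, bookID), and CustomerMagazines(customerID, magazineID) tables; correlated subqueries avoid the row multiplication that joining both detail tables at once would cause:

SELECT c.customerID,
       (SELECT COUNT(*) FROM CustomerBooks     b WHERE b.customerID = c.customerID) AS BookCount,
       (SELECT COUNT(*) FROM CustomerMagazines m WHERE m.customerID = c.customerID) AS MagazineCount
FROM Customers c;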
I am using the following select statement to get the row count from SQL linked server table.
SELECT Count(*) FROM OPENQUERY (CMSPROD, 'Select * From MHDLIB.MHSERV0P')
MHDLIB is the library name in the IBM DB2 database. The above query gives me only the row count of table MHSERV0P. However, I need to get the names, row counts, and sizes of all tables that exist in the MHDLIB library. Is it possible at all?
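If the linked server is DB2 for i, something along these lines may work, but the catalog view and column names (QSYS2.SYSTABLES here) vary by DB2 platform and version, so treat this strictly as an assumption to verify against the target system:

SELECT *
FROM OPENQUERY(CMSPROD,
    'SELECT TABLE_SCHEMA, TABLE_NAME
     FROM QSYS2.SYSTABLES
     WHERE TABLE_SCHEMA = ''MHDLIB''');

Row counts and sizes usually come from a separate statistics catalog view on the DB2 side rather than SYSTABLES itself; which one depends on the DB2 version.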
We have the below query that pulls benefit IDs for employees, but it shows each benefit on a separate row. We would like to have just one row per employee, with columns for each of the benefits.
I am trying to take the results of a query and re-orient them into separate columns.
select distinct
    W_SUMMARYDETAILS.FACILITY_ID,
    W_SUMMARYDETAILS.REPORTING_YEAR,        -- 2011 - 2014, I want these years broken out into columns for each year
    W_SUMMARYDETAILS.FACILITY_NAME,
    W_DEF_SUMMARYDETAILS.REPORTING_PERIOD   -- 2011 - 2013, I want these years broken out into columns for each year
From W_SUMMARYDETAILS
full outer join W_DEF_SUMMARYDETAILS
    on W_SUMMARYDETAILS.FACILITY_ID = W_DEF_SUMMARYDETAILS.FACILITY_ID
   and W_SUMMARYDETAILS.REPORTING_YEAR = W_DEF_SUMMARYDETAILS.REPORTING_PERIOD
As of now the query puts all the years into a single column -- one for DEF_SUMMARY and another for SUMMARY.
I am looking to create 7 additional columns for all the individual years in the results instead of just two columns.
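One way to break the years into their own columns is conditional aggregation over the joined result; a rough sketch built on the query above, where the MAX(CASE ...) pattern and the exact list of year columns are assumptions about the desired output:

SELECT COALESCE(s.FACILITY_ID, d.FACILITY_ID) AS FACILITY_ID,
       MAX(s.FACILITY_NAME) AS FACILITY_NAME,
       MAX(CASE WHEN s.REPORTING_YEAR   = 2011 THEN s.REPORTING_YEAR   END) AS Summary2011,
       MAX(CASE WHEN s.REPORTING_YEAR   = 2012 THEN s.REPORTING_YEAR   END) AS Summary2012,
       -- ... repeat through 2014, and likewise for REPORTING_PERIOD 2011 - 2013
       MAX(CASE WHEN d.REPORTING_PERIOD = 2011 THEN d.REPORTING_PERIOD END) AS Def2011
FROM W_SUMMARYDETAILS s
FULL OUTER JOIN W_DEF_SUMMARYDETAILS d
       ON d.FACILITY_ID = s.FACILITY_ID
      AND d.REPORTING_PERIOD = s.REPORTING_YEAR
GROUP BY COALESCE(s.FACILITY_ID, d.FACILITY_ID);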
We have some tables that we have spread across two databases. The segregation isn’t essential, but the entities involved were disparate enough that we thought it made sense. However, our client app regularly & frequently requires information that can only be answered by queries to tables in both databases. It has been suggested that segregating the tables as we have introduces a performance hit. At this stage, it would be relatively easy to re-combine the tables into one DB.
I've read a few docs and I have some doubts about the implementation I'll make.
I will set up an FCI between ServerA and ServerB, so my databases will be running on ServerA or on ServerB. They will be replicated using AlwaysOn to ServerC.
Because I'll use a different VLAN for AlwaysOn replication, I am having some doubts about how I can set the endpoints...
For example.
I'll have the following IPs configured for AlwaysOn replication on ServerA, ServerB and ServerC: 10.10.240.1 / 10.10.240.2 / 10.10.240.3.
Database teste will be running on ServerA, so the endpoint configured will be 10.10.240.1, but when the database fails over to ServerB (using FCI) the IP configured there will be 10.10.240.2... so AlwaysOn replication will stop working.
I thought about associating a virtual IP with the AlwaysOn cluster resource, but Microsoft doesn't recommend altering the AlwaysOn cluster resource.
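One option sometimes used in FCI + availability group setups is to let the mirroring endpoint listen on all IPs rather than a single node address, so it keeps working whichever node owns the instance; routing over the dedicated VLAN is then handled at the network level. A hedged sketch (the endpoint name and port are just examples, to be validated against your own design):

CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (ROLE = ALL, ENCRYPTION = REQUIRED ALGORITHM AES);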
Hello, I am working with a database that, among other things, uses multipart keys as the unique indexes, and they are not consistent from, say, one table where a parent record resides to another table which contains related child records.

For example, I am working with two tables right now: one that contains content, which I'll call Contents, and another which contains usage information about the contents (number of views, a rating, and comments given by a customer), which I'll call ContentsUsage.

The system that manages the data for these tables has a versioning system by which, when a content item is added (first time), a "unique" id (guid) and a version number of 1 are created along with the rest of the data items in the Contents table, and likewise in the ContentsUsage table (essentially a one-to-one mapping on the like-named fields in that table). Now, each time a given record in the Contents table is updated, a new version with the same guid is created in the Contents and ContentsUsage tables. So on one side I have:

ContentGUID > AAAA, Version > 1
ContentGUID > AAAA, Version > 2

And in the other table (ContentsUsage):

ContentGUID > AAAA, Version > 1
ContentGUID > AAAA, Version > 2

While both of these tables have a quasi-unique record (row_id) of type char and stored as a guid, neither obviously is the same in the two tables, and having reviewed the database columns for these tables I find that the official unique keys for these tables are different: table 1, Contents, combines ContentGUID and Version as the composite/multi-key index, while table ContentsUsage uses the RowGUID as its unique index.

Contents
RowGUID (unique key)
ContentGUID
Version
Views
Rating
Comments
....

RowGUID
ContentGUID (unique key)
Version (unique key)
Description
....

Bearing this in mind, I am unable of course to link the two tables directly by using just the ContentGUID, and have to combine the additional Version to, I believe, obtain the actual "unique" record in question.

The question is, in terms of writing queries, what would the most efficient query be? What would be the best way to join the two in a query? And are there any pitfalls with the current design that you can see with the way this database (or specifically these tables) is defined? It's something I inherited, so fire away at will on the critique. Having my druthers, I would have designed these tables using a unique key of type int that was autogenerated by the database.

Any advice, thoughts or comments would be helpful. Thanks, P.
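Given the versioning scheme described, the join most likely has to use both columns; a minimal sketch (which usage columns live in which table is an assumption, since the column listings above are ambiguous):

SELECT c.ContentGUID,
       c.Version,
       u.Views,        -- usage columns assumed to live in ContentsUsage
       u.Rating,
       u.Comments
FROM Contents c
JOIN ContentsUsage u
       ON u.ContentGUID = c.ContentGUID
      AND u.Version     = c.Version;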
I've read that if particular tables are frequently queried together through a join then these tables should be placed on different devices on different physical disks. What does this mean exactly and how would you configure this? Is this a common practice in high-performance real-world environments (or should it be)?
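In SQL Server terms this is usually done with filegroups whose files live on different physical drives, and then creating each table (really its clustered index) on its own filegroup. A minimal sketch, with the database, file paths, and table names as placeholders:

ALTER DATABASE MyDb ADD FILEGROUP FG_Orders;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_Orders', FILENAME = 'E:\Data\MyDb_Orders.ndf', SIZE = 1GB)
    TO FILEGROUP FG_Orders;

ALTER DATABASE MyDb ADD FILEGROUP FG_OrderLines;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_OrderLines', FILENAME = 'F:\Data\MyDb_OrderLines.ndf', SIZE = 1GB)
    TO FILEGROUP FG_OrderLines;

-- each frequently-joined table sits on its own filegroup / spindle
CREATE TABLE dbo.Orders     (OrderID int NOT NULL PRIMARY KEY) ON FG_Orders;
CREATE TABLE dbo.OrderLines (OrderLineID int NOT NULL PRIMARY KEY, OrderID int NOT NULL) ON FG_OrderLines;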
I would like to separate one table's records into two or three tables. I only know how to use DTS, trying to import and export again and again, which is troublesome.
Could you give me some suggestions? For example, using a cursor to write into the new tables. I have tried SQL Server Books Online, but it did not help me solve this problem. One table, separated into two or three tables. Can you write a detailed example for me? Thanks a lot.
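A set-based INSERT ... SELECT usually avoids both DTS round-trips and cursors entirely; a minimal sketch, where SourceTable, the two target tables, the column names, and the splitting condition are all hypothetical:

-- copy one group of columns/rows into the first table
INSERT INTO TargetTable1 (col1, col2)
SELECT col1, col2
FROM SourceTable
WHERE category = 'A';   -- splitting condition assumed

-- and another group into the second table
INSERT INTO TargetTable2 (col1, col3)
SELECT col1, col3
FROM SourceTable
WHERE category = 'B';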
Hi, we are building an application for an online system for people to place ads for selling various used items like cars, electronics, houses, books, etc. If someone is selling a car then he can fill out headline, year, make, model, mileage, transmission, condition, color, price, description, contact, etc. Similarly, if someone is selling a digital camera he will fill out headline, memory, zoom, megapixel, maker, model, color, battery, description, etc.

Option 1: I can have a main table to hold the common attributes of all different types of ads (headline, images, contact, price, color, condition, description) + 1 table to store string values of all ads (car: maker, model; house: square feet; camera: memory, megapixel; etc.) + 1 table to store the droplist select values (car: transmission, door, seat, etc.; house: year_built).
Pros: single table for all ads; unique IDs for all ads; easy to extend, as new attributes can be dropped in easily.
Cons: lots of physical reads of the 2nd and 3rd tables from the join; 10 times the physical reads compared to option 2 when reading 5000 records.

Option 2: have a different set of tables for each ad type. Car will have its own main table + 1 table to store multiselect list box values. Similarly, housing will have its own set of tables.
Pros: 10% fewer physical reads than option 1.
Cons: hard to add new attributes; we have to modify the main table by adding one column. Queries will go to different tables based on the category.

Do you have any suggestions on which way to go? Thanks
I have a report that's created each day as a flat textfile. Because I came from the Access world, I created a macro that imports it with a schema that gives meaningful names to the various columns, and then uses a query to massage some of the data for me (deletes the first blank row and does a couple of calculations). Then I use DTS to import the Access query as a table.

The textfile has a column called "File_num", and among several others, a column called "Serial_num". (The file numbers represent shipments, and sometimes there is more than one serial number in the shipment, etc., so there is a separate line for every serial number.)

Naturally, I would like to split this info into two tables: one that does not contain the serial numbers and has a primary key on the "File_num" column, and another table that would contain just the "File_num" and "Serial_num" columns. That way I could relate them later... but most importantly, it will give me a table where I can use the "File_num" as my primary key.

What would be the best way to import these two tables from one source textfile? The other thing that gives me problems is that the textfile has no column names, and the first row is always blank.

I'm very new to SQL and DTS and would appreciate any direction.

Thanks,
Larry

- - - - - - - - - - - - - - - - - -
"Forget it, Jake. It's Chinatown."
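Once the flat file has landed in a staging table (via DTS or any bulk load), the split can be two plain INSERT ... SELECT statements; a sketch assuming a staging table called Staging and target table names that are placeholders:

-- parent table: one row per file number
INSERT INTO Shipments (File_num)              -- table name is a placeholder
SELECT DISTINCT File_num
FROM Staging
WHERE File_num IS NOT NULL;

-- child table: every file/serial pair
INSERT INTO ShipmentSerials (File_num, Serial_num)
SELECT File_num, Serial_num
FROM Staging
WHERE Serial_num IS NOT NULL;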
I have 1 table that is just a list of feeds. A, B, C, D etc (15 rows in total) and each feed has other information attached to it such as Full name, location etc etc. I am only interested in the Feed column.
Each feed then has a corresponding data table which contains a list of records. E.g. Feed A data is contained in TableA, Feed B data is contained in TableB, and so on.
Basically what I need is a combined table that shows the list of Feeds in the 1st Column ( So A, B, C, D…..) and then a second column which counts the records from each separate data table corresponding to that feed.
So the end result would look something like this:
Feed------No of Records
A----------4 (from TableA)
B----------7 (from TableB)
C----------8 (from TableC)
D----------1 (from TableD)
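With only 15 feed tables, a UNION ALL of counts joined back to the feeds list is the simplest static approach (dynamic SQL driven by the feeds table would avoid hand-listing them); a sketch where the Feeds table name and the Feed column name are assumptions:

SELECT f.Feed, c.NoOfRecords
FROM Feeds f
JOIN (
    SELECT 'A' AS Feed, COUNT(*) AS NoOfRecords FROM TableA
    UNION ALL
    SELECT 'B', COUNT(*) FROM TableB
    UNION ALL
    SELECT 'C', COUNT(*) FROM TableC
    UNION ALL
    SELECT 'D', COUNT(*) FROM TableD
    -- ... one branch per feed table
) c ON c.Feed = f.Feed;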
In database DB1, I have table DB1.dbo.Suppliers1. This table has an ID column of type INT named SUPPL1_ID
In database DB2, I have table DB2.dbo.Suppliers2. This table has an ID column of type INT named SUPPL2_ID. I would like to update DB2.dbo.Suppliers2 based on values from DB1.dbo.Suppliers1, joining on SUPPL1_ID = SUPPL2_ID.
How can I do this in SSIS?
Assumptions:
linked servers are not an option, as I want the SSIS package to be portable and not dependent on server environments. TIA.
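If both databases sit on the same instance, the update itself can be a single statement with three-part names; in SSIS it could run from an Execute SQL Task, with the database names supplied through package variables/expressions to keep the package portable. A sketch, where SomeColumn stands in for whichever columns actually need updating:

UPDATE s2
SET s2.SomeColumn = s1.SomeColumn             -- columns to copy are assumptions
FROM DB2.dbo.Suppliers2 s2
JOIN DB1.dbo.Suppliers1 s1
       ON s1.SUPPL1_ID = s2.SUPPL2_ID;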
The request is to merge, join, CASE, union, or otherwise combine up to four unique columns, each in a separate table, into a new combined table (a matrix) of the results from those tables.