I have two tables: one with sales and another with payments against those sales. A payment may not match the exact amount of a sale, so I have to use the FIFO method to apply payments. The payment month must be >= the sale month.
How can I write a query to do this? Example data is below, and a sketch follows it.
Table 1
Sales   Sale Date   Sale Amt
1       Jun-14      1200
2       Oct-14      2400
3       Dec-14      600
4       Feb-15      12000
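A possible starting point, assuming hypothetical tables Sales(SaleID, SaleDate, SaleAmt) and Payments(PaymentID, PaymentDate, PaymentAmt) and SQL Server 2012 or later: running totals on each side turn FIFO allocation into an interval-overlap calculation. The payment-month >= sale-month rule is not enforced in this sketch and would need extra handling.

-- Minimal FIFO allocation sketch; table and column names are assumptions.
;WITH s AS (
    SELECT SaleID, SaleDate, SaleAmt,
           SUM(SaleAmt) OVER (ORDER BY SaleDate, SaleID
                              ROWS UNBOUNDED PRECEDING) AS SaleRunning
    FROM dbo.Sales
),
p AS (
    SELECT PaymentID, PaymentDate, PaymentAmt,
           SUM(PaymentAmt) OVER (ORDER BY PaymentDate, PaymentID
                                 ROWS UNBOUNDED PRECEDING) AS PayRunning
    FROM dbo.Payments
)
SELECT s.SaleID, p.PaymentID, x.AppliedAmt
FROM s
CROSS JOIN p
CROSS APPLY (SELECT
        CASE WHEN s.SaleRunning < p.PayRunning THEN s.SaleRunning ELSE p.PayRunning END
      - CASE WHEN s.SaleRunning - s.SaleAmt > p.PayRunning - p.PaymentAmt
             THEN s.SaleRunning - s.SaleAmt ELSE p.PayRunning - p.PaymentAmt END
        AS AppliedAmt) AS x
WHERE x.AppliedAmt > 0            -- this payment covers (part of) this sale
ORDER BY s.SaleID, p.PaymentID;

Each sale occupies the running-total interval (SaleRunning - SaleAmt, SaleRunning] and each payment the interval (PayRunning - PaymentAmt, PayRunning]; the overlap of the two intervals is the amount of that payment applied to that sale.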
I would like to match two sets of data. I have set up a view that contains a group of customers and their details. I want to view this data, but also find these customers in another table by matching their surname and date of birth, then retrieve the information stored on these customers in the second table.
Does anyone have any suggestions how I would go about doing this?
Thanks in advance, Humate
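As a rough sketch (the table and column names here are assumptions, not the actual schema): join the customer view to the second table on surname and date of birth, using a LEFT JOIN if customers without a match should still appear.

-- Hypothetical names: dbo.CustomerView(Surname, DateOfBirth, ...) and dbo.SecondTable.
SELECT v.*,
       t.SomeColumn              -- whatever extra detail is stored in the second table
FROM dbo.CustomerView AS v
LEFT JOIN dbo.SecondTable AS t
       ON t.Surname     = v.Surname
      AND t.DateOfBirth = v.DateOfBirth;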
I've been matching some incoming contacts to existing contacts in a database and need to update, insert, or rematch based on the rules below (a rough sketch of two of these rules follows the table script).
If name, phone, and email are all provided for both, then use the highest Source. So if the Contact has a Source value of 5 and the SourceContact has a ContactSource of 1, update Contact with SourceContact.
If name, phone, and email are all provided and the Source is the same, then update to the new values.
If name and phone are on the SourceContact and name and email are on the Contact, then reset Contact_fk to -1; it should not have matched, but should be an insert.
If name and email are on the SourceContact and name and phone are on the Contact, then reset Contact_fk to -1; it should not have matched, but should be an insert.
If name and phone are on the SourceContact and name and/or phone is blank in the Contact, then update.
If name and email are on the SourceContact and name and/or email is blank in the Contact, then update.
If the phone numbers can be different, just update the record to SourceContact if it has the same or a higher ContactSource.
If the email addresses do not match, then set the SourceContact's Contact_fk to -1; it was computed incorrectly. Both have a non-blank email.
If Contact_fk is 0, then it is a new contact; just insert it.
IF OBJECT_ID('dbo.SourceContact', 'U') IS NOT NULL
    DROP TABLE [dbo].[SourceContact];
CREATE TABLE [dbo].[SourceContact]
(
    [SourceContact_pk]  INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_SourceContact PRIMARY KEY CLUSTERED,
    [ContactLastName]   VARCHAR(30) NOT NULL CONSTRAINT DF_SourceContact_ContactLastName DEFAULT (''),
    [ContactFirstName]  VARCHAR(30) NOT NULL CONSTRAINT DF_SourceContact_ContactFirstName DEFAULT (''),
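The rules above generally translate into one UPDATE per condition plus a final INSERT. The sketch below shows the pattern for two of them; the column names on the Contact side (LastName, FirstName, Phone, Email, Source) and the Phone, Email, ContactSource, and Contact_fk columns on SourceContact are assumptions, since the script above is truncated.

-- Sketch only: name/phone on SourceContact but name/email on Contact means the
-- earlier match was wrong, so flag the SourceContact row for insert instead.
UPDATE sc
SET    sc.Contact_fk = -1
FROM   dbo.SourceContact AS sc
JOIN   dbo.Contact       AS c  ON c.Contact_pk = sc.Contact_fk
WHERE  sc.Phone <> '' AND sc.Email = ''
  AND  c.Email  <> '' AND c.Phone  = '';

-- New contacts (Contact_fk = 0) and rematched rows (Contact_fk = -1) become inserts.
INSERT INTO dbo.Contact (LastName, FirstName, Phone, Email, Source)
SELECT ContactLastName, ContactFirstName, Phone, Email, ContactSource
FROM   dbo.SourceContact
WHERE  Contact_fk IN (0, -1);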
Given an ID (column B), I need to find which IDs have identical data. That is, given 200, I want the desired result to be 100. The idea is that the system sees that id=200 has 5 records with the indicated data in cols C and D. It should then find any other ids with the exact same data for those columns. Note, in this case, both 200 and 100 have (30:1, 30:2, 30:3, 40:4, 40:5) so they match. 300 and 400 should NOT be returned. Any bright ideas out there? Thanks!

DECLARE @a TABLE(A int, B int, C int, D int)
DECLARE @b TABLE(A int, B int, C int, D int)
INSERT INTO @a (A, B, C, D) VALUES (1, 100, 30, 1)
INSERT INTO @a (A, B, C, D) VALUES (2, 100, 30, 2)
INSERT INTO @a (A, B, C, D) VALUES (3, 100, 30, 3)
INSERT INTO @a (A, B, C, D) VALUES (4, 100, 40, 4)
INSERT INTO @a (A, B, C, D) VALUES (5, 100, 40, 5)
INSERT INTO @a (A, B, C, D) VALUES (6, 200, 30, 1)
INSERT INTO @a (A, B, C, D) VALUES (7, 200, 30, 2)
INSERT INTO @a (A, B, C, D) VALUES (8, 200, 30, 3)
INSERT INTO @a (A, B, C, D) VALUES (9, 200, 40, 4)
INSERT INTO @a (A, B, C, D) VALUES (10, 200, 40, 5)
INSERT INTO @a (A, B, C, D) VALUES (11, 300, 30, 1)
INSERT INTO @a (A, B, C, D) VALUES (12, 300, 30, 2)
INSERT INTO @a (A, B, C, D) VALUES (13, 300, 40, 3)
INSERT INTO @a (A, B, C, D) VALUES (14, 400, 40, 4)
INSERT INTO @a (A, B, C, D) VALUES (15, 400, 40, 5)
SELECT * FROM @a
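One way to express "exact same set of (C, D) pairs", assuming those two columns are all that has to match: two correlated EXCEPT checks, one in each direction, run in the same batch as the table variable above.

DECLARE @target int = 200;

SELECT o.B
FROM (SELECT DISTINCT B FROM @a WHERE B <> @target) AS o
WHERE NOT EXISTS (SELECT C, D FROM @a WHERE B = @target
                  EXCEPT
                  SELECT C, D FROM @a WHERE B = o.B)       -- nothing in 200's set missing from o
  AND NOT EXISTS (SELECT C, D FROM @a WHERE B = o.B
                  EXCEPT
                  SELECT C, D FROM @a WHERE B = @target);  -- nothing extra in o's set

With the sample data this returns only 100; 300 and 400 fail one of the two checks.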
I'm working on a join between two tables where I only want one row returned. I'm matching on two columns between the two tables, and one of those columns in the target table can be null; I still want only one record back.
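One hedged way to do this (the table and column names are placeholders): OUTER APPLY with TOP (1), ordered so that an exact match on both columns wins over a row where the second column is NULL.

SELECT s.*, t.*
FROM dbo.SourceRows AS s
OUTER APPLY (
    SELECT TOP (1) x.*
    FROM dbo.TargetTable AS x
    WHERE x.Col1 = s.Col1
      AND (x.Col2 = s.Col2 OR x.Col2 IS NULL)
    ORDER BY CASE WHEN x.Col2 = s.Col2 THEN 0 ELSE 1 END   -- prefer the exact match
) AS t;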
In an outer join, I would like to add the outer columns that don't exist in the right table to each order number. Currently the columns that don't exist in the right table only appear once for the entire set. How can I go about adding PCity and PState to each order group, so that PCity and PState would be added as null rows to each group of orders?
if OBJECT_ID('tempdb..#left_table') is not null drop table #left_table;
if OBJECT_ID('tempdb..#right_table') is not null drop table #right_table;
create table #left_table
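If the intent is simply to carry PCity and PState onto every row of an order group, one sketch (the column names are assumptions, since the temp-table script above is truncated) is to spread the left table's values across each order partition with a windowed MAX; rows with no match still show NULL.

SELECT r.OrderNumber,
       r.OrderLineValue,                                           -- placeholder detail column
       MAX(l.PCity)  OVER (PARTITION BY r.OrderNumber) AS PCity,   -- repeated per order group
       MAX(l.PState) OVER (PARTITION BY r.OrderNumber) AS PState
FROM #right_table AS r
LEFT JOIN #left_table AS l
       ON l.OrderNumber = r.OrderNumber;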
I am calling a stored procedure called GetCommonItemCount within another stored procedure called CheckBoxAvailability. The first stored procedure should return a count to the second stored procedure, and based on that count some logic will be executed.
I have 2 problems with this:
1. The result is not coming back from the first stored procedure, so the variable @Cnt is always 0 although it should be 18. 2. At the end I need to see only the result from the second stored procedure in the output, while right now I am seeing multiple record sets.
I have attached the scripts as well; the line I described in step 1 is
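A common pattern for this, sketched with a placeholder body: return the count through an OUTPUT parameter (or a RETURN value) instead of a result set, and use SET NOCOUNT ON so the inner procedure does not add extra record sets to the output.

-- Sketch: the COUNT source is a placeholder; only the parameter wiring matters here.
CREATE PROCEDURE dbo.GetCommonItemCount
    @Cnt int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @Cnt = COUNT(*)
    FROM dbo.CommonItems;              -- hypothetical source of the count
END
GO

-- Inside dbo.CheckBoxAvailability:
DECLARE @Cnt int;
EXEC dbo.GetCommonItemCount @Cnt = @Cnt OUTPUT;

IF @Cnt > 0
    SELECT 'Available' AS BoxStatus;   -- placeholder for the real logic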
I have a patient record and emergency contact information. I need to find duplicate phone numbers in the emergency contact table based on relationship type (RelationType) between emergency contact and patient. For example, if the patient is a child and has the mother listed twice with the same number, I need to filter these records. The same would be true if a father were listed; in any case there should be one father or one mother listed for the patient. The link between patient and emergency contact is person_gu. If two siblings are linked to the same person_gu, there should still be only one emergency contact listed.
Below is the schema structure:
Person_Info: PersonID; contains everyone (patient, visitor, emergency contact) with first and last names
Patient_Info: PatientID; contains the patient ID and other information
Patient_PersonRelation: Person_ID, PatientID, RelationType
Address: contains the address of every person and patient (key PersonID)
Phone: contains the phone # of everyone (key is PersonID)
The goal is to find matching phones for the same person based on relationship type (if siblings, then only list one record for the parent, because the matching phones are not duplicates).
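A hedged sketch of the de-duplication, assuming Phone has a PhoneNumber column and that Patient_PersonRelation links the emergency contact's PersonID to the patient: number the rows per patient / relation type / phone number and keep only the first.

;WITH ranked AS (
    SELECT ppr.PatientID,
           ppr.RelationType,
           ph.PhoneNumber,                          -- assumed column name
           ppr.Person_ID,
           ROW_NUMBER() OVER (PARTITION BY ppr.PatientID, ppr.RelationType, ph.PhoneNumber
                              ORDER BY ppr.Person_ID) AS rn
    FROM dbo.Patient_PersonRelation AS ppr
    JOIN dbo.Phone AS ph
      ON ph.PersonID = ppr.Person_ID
)
SELECT PatientID, RelationType, PhoneNumber, Person_ID
FROM ranked
WHERE rn = 1;        -- rows with rn > 1 are the duplicate mother/father entries to filter out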
I'm currently working on a BI architecture for a customer and am considering proposing the Power BI data catalog as a data distribution layer. The customer will use Power BI, but also has other BI tools.
Are data sets in the data catalog available to clients other than Power Query? E.g. are there OData feed endpoints available? If not, what would be the best way to give other tools access to the data?
Hi, I was wondering how it is possible to join three data sets from different data flows into one txt file. Let me explain a little more:
I have 3 data flows. Each of them connects to SQL Server and, via a SQL command, brings data into SSIS.
Each SQL command is different, so each data set has different columns (they don't have the same format); the number of columns also differs between them.
What I need is to join the three data sets into one txt file. How can I do this? Is it possible to join them, with different data set formats, into one txt file?
Is this the best way to join different data? Would it be better to use as many OLE DB Sources as needed instead of different data flows? Thanks for your help!
Hi, I'm currently trying to retrieve results from a large dataset; there are over 45,000 records and I need to use them all to perform counts, etc. I have set up views, but my page is still returned slowly. Is there anything I can do to speed this up? Thanks, Gemma
I am trying to query one table and get two different time periods of data: I am summing monthly totals to provide a running year total, but I also need last month's total in a separate column. This is what I have so far, but the subquery makes me group it, which produces duplicate grouping.

DECLARE @LASTPD AS INT
SET @LASTPD = (SELECT MAX(LASTPERIOD) FROM TABLE)
SELECT NAME, POST_PD AS [MONTH], SUM(CHARGE_AMOUNT) AS MONTHLY_$, LASTMONTH.LAST_MONTH,
    (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH
     FROM TABLE
     INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME
     WHERE POST_PD = @LASTPD AND TABLE2.NUM = 539
     GROUP BY NAME) AS LASTMONTH
INTO #TEMP_SA
FROM TABLE
INNER JOIN TABLE2 ON TABLE2.NAME = TABLE.NAME,
    (SELECT SUM(CHARGE_AMOUNT) AS LAST_MONTH FROM TABLE
     WHERE TABLE2.NUM = 539
     GROUP BY NAME, POST_PD
ORDER BY NAME, POST_PD
SELECT NAME, LAST_MONTH, CAST(SUM(MONTHLY_$) AS DECIMAL(20,2)) AS YEARLY_$
FROM #TEMP_SA
GROUP BY NAME
ORDER BY NAME
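One way to avoid the second grouping level, keeping the table and column names from the post (whether they match the real schema is an assumption): compute last month's figure with conditional aggregation inside the same GROUP BY, so no correlated subquery is needed.

DECLARE @LASTPD int;
SELECT @LASTPD = MAX(LASTPERIOD) FROM [TABLE];

SELECT  t2.NAME,
        CAST(SUM(t.CHARGE_AMOUNT) AS decimal(20, 2))                        AS YEARLY_$,
        SUM(CASE WHEN t.POST_PD = @LASTPD THEN t.CHARGE_AMOUNT ELSE 0 END)  AS LAST_MONTH
FROM    [TABLE] AS t
INNER JOIN TABLE2 AS t2 ON t2.NAME = t.NAME
WHERE   t2.NUM = 539
GROUP BY t2.NAME
ORDER BY t2.NAME;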
I have the following situation. One set of data has 274 rows (set2) and another has 264 (set1). Both data sets are similar in structure, and the values for both of them were extracted from the same parent table. Hope this info can substitute for DDL. I need to find the "gap" rows between these two sets. I attempted to run a query like

select count(*)
from set2
where not exists
(select *
from set1)

but it did not yield what I desired. What else should I try? TIA.
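Two hedged options: if both sets expose the same column list, EXCEPT gives the gap rows directly; otherwise the NOT EXISTS needs a correlation on the key column (KeyCol below is a placeholder name).

-- Whole-row difference (column lists and types must line up).
SELECT * FROM set2
EXCEPT
SELECT * FROM set1;

-- Or correlate on the key; the original NOT EXISTS had no correlation, so it
-- filtered either every row or none.
SELECT s2.*
FROM set2 AS s2
WHERE NOT EXISTS (SELECT 1 FROM set1 AS s1 WHERE s1.KeyCol = s2.KeyCol);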
I've seen that sometimes it is better to split the table into a test dataset and a training dataset, and I'd appreciate it if anyone can explain why this is...
Is there a way to put more than one data set in a list? I have a report that has three data sets with three tables. Now I want to show each report by region, one per page, so you can view the same information for each region separately instead of all together. Is there a way to do this where I don't have to go back into my code and find a way to link everything together so it's in one data set?
I have designed a contact manager with a Data Grid control bound to a Data Set.
When the application closes, data from the Data Set is written to an XML file, and when the application opens, data from the XML file is loaded into the Data Set and shown in the Data Grid control.
Contacts in my application can exceed 1,000, so is a Data Set capable of handling that much data efficiently in memory?
I have a task in which I have to match some attributes (required for creating a new database) against an existing database: are these attributes present in the existing database, and if so, how many are and how many are not? Please do reply.
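A sketch of one way to check this, treating the attributes as column names (the attribute values below are purely hypothetical): compare them against INFORMATION_SCHEMA.COLUMNS in the existing database.

DECLARE @required TABLE (ColumnName sysname);
INSERT INTO @required (ColumnName)
VALUES ('CustomerID'), ('FirstName'), ('BirthDate');   -- hypothetical attribute list

-- Which attributes exist anywhere in the current database, and which don't.
SELECT r.ColumnName,
       CASE WHEN c.COLUMN_NAME IS NULL THEN 'Missing' ELSE 'Present' END AS Status
FROM @required AS r
LEFT JOIN (SELECT DISTINCT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS) AS c
       ON c.COLUMN_NAME = r.ColumnName;

-- Counts of present vs. missing attributes.
SELECT COUNT(c.COLUMN_NAME)             AS PresentCount,
       COUNT(*) - COUNT(c.COLUMN_NAME)  AS MissingCount
FROM @required AS r
LEFT JOIN (SELECT DISTINCT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS) AS c
       ON c.COLUMN_NAME = r.ColumnName;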
Basically, without going into too much detail, our company gets databases arriving and put onto our systems that have been made by other organizations, with no guarantee of what the primary key is or whether there is one at all.
I should probably give my main problem in an example for clarity:
Currently I have a .csv file full of data that needs to be put into, say, TableA. However, I do not know if TableA has a primary key or not, or if the file that needs to be inserted into TableA contains duplicate data. I have the importer sorted if you ignore the problem of duplicate data; what I would like is an MS SQL query that does the following (but I cannot figure it out):
Assuming we are reading through the file line-by-line and a check is performed each time:
1. If there is a line with a primary key in the file that matches a primary key in TableA in the database, update that row in the database with the line from the file.
2. If there is no primary key on the table and there is an exact data match between the line in the file and a row in the database, then update it.
3. If neither 1 nor 2 succeeds, just insert the data.
Obviously the potential lack of a PK here makes things a lot more convoluted.
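A hedged sketch of the branching, assuming the .csv has already been bulk-loaded into a staging table (#Staging, KeyCol, Col1, and Col2 are all placeholder names): check for a primary key with OBJECTPROPERTY and pick the matching strategy accordingly.

-- 1) Key-based upsert when TableA has a primary key.
IF OBJECTPROPERTY(OBJECT_ID('dbo.TableA'), 'TableHasPrimaryKey') = 1
BEGIN
    MERGE dbo.TableA AS tgt
    USING #Staging   AS src
       ON tgt.KeyCol = src.KeyCol
    WHEN MATCHED THEN
        UPDATE SET tgt.Col1 = src.Col1, tgt.Col2 = src.Col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol, Col1, Col2) VALUES (src.KeyCol, src.Col1, src.Col2);
END
ELSE
BEGIN
    -- 2)/3) No primary key: an exact-match "update" changes nothing, so it is
    -- enough to insert only the staged rows that don't already exist verbatim.
    INSERT INTO dbo.TableA (KeyCol, Col1, Col2)
    SELECT x.KeyCol, x.Col1, x.Col2
    FROM ( SELECT KeyCol, Col1, Col2 FROM #Staging
           EXCEPT
           SELECT KeyCol, Col1, Col2 FROM dbo.TableA ) AS x;
END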
Hello, I am using existing code which I am trying to convert from using MS Access to SQL Server 2005. The data set works fine with the MS Access database; however, when executing with SQL Server 2005 as the data source, it generates the following error: "..The data types ntext and nvarchar are incompatible in the equal to operator..." on this line: count = adapter.Update(dataset); I'm not sure what I should look for, since data sets are new to me. Where should I check to fix this problem? I have noticed that the table has two columns with nvarchar...
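The usual cause is an ntext column being compared with =, which ntext does not support. Two hedged options (dbo.MyTable and Notes are placeholder names for the offending table and column): convert the column to nvarchar(max), the recommended replacement for the deprecated ntext type, or cast at query time.

-- Option 1: change the legacy column type once.
ALTER TABLE dbo.MyTable ALTER COLUMN Notes nvarchar(max) NULL;

-- Option 2: cast in the comparison instead.
DECLARE @SomeValue nvarchar(max) = N'example';
SELECT *
FROM dbo.MyTable
WHERE CAST(Notes AS nvarchar(max)) = @SomeValue;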
I have two queries that generate two different datasets. One is a count of members and the other is a count of admits. I need to generate a calculated field from the two data sets called admits per 1000, which is essentially the count of admits / count of members * 12000. I was able to calculate admits per 1000 easily in Excel; however, I need some insight on how to do it in SQL.
Below are my queries from the two datasets.
MemberMonths dataset: Select factMembership.BusinessUnitCode, EffectiveCCYYMM, ISNULL(count(Distinct MemberId),0) As MemberCount From factMembership
[Code] ....
Admits dataset:
SELECT Factadmissions.BusinessUnitCode, factAdmissions.AdmitCCYYMM, ISNULL(Count(AdmitNum),0)As [Count of Admits] FROM factAdmissions
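A hedged sketch that joins the two aggregates on business unit and month (it assumes EffectiveCCYYMM and AdmitCCYYMM are comparable month keys) and then derives the rate with the formula from the post.

;WITH MemberMonths AS (
    SELECT BusinessUnitCode, EffectiveCCYYMM,
           ISNULL(COUNT(DISTINCT MemberId), 0) AS MemberCount
    FROM factMembership
    GROUP BY BusinessUnitCode, EffectiveCCYYMM
),
Admits AS (
    SELECT BusinessUnitCode, AdmitCCYYMM,
           ISNULL(COUNT(AdmitNum), 0) AS AdmitCount
    FROM factAdmissions
    GROUP BY BusinessUnitCode, AdmitCCYYMM
)
SELECT m.BusinessUnitCode,
       m.EffectiveCCYYMM,
       m.MemberCount,
       ISNULL(a.AdmitCount, 0) AS [Count of Admits],
       CASE WHEN m.MemberCount = 0 THEN 0
            ELSE ISNULL(a.AdmitCount, 0) * 12000.0 / m.MemberCount
       END AS AdmitsPer1000
FROM MemberMonths AS m
LEFT JOIN Admits AS a
       ON a.BusinessUnitCode = m.BusinessUnitCode
      AND a.AdmitCCYYMM      = m.EffectiveCCYYMM;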
What I need to be able to do is select, based on a day, the total value of open orders. I have tried to do this in the database, but it becomes fixed and quite cumbersome (this is a simplified example; in reality I have line information and line component information). I am not hugely skilled with MDX and SSAS, but I know there are some semi-additive functions; I want somebody to be able to pick a day and get the total value of only the open orders.
I was wondering if anyone has ever written a chart with multiple datasets.
I need to be able to show sales dollars inflow by order date on one line, and on the other, sales dollars delivered by delivery date. So all the sections of the chart (Values, Category Groups, and Series Groups) will come from 2 different datasets.
I have tried, but it will not allow aggregates in the series groups.