SQL 2012 :: How To Target Transactional Values With Aggregate Amount
Apr 27, 2015
I am trying to exclude, from an aggregation, records whose assessed value has been waived. For example:
Here is my table:
CREATE TABLE #temptable (ReportingMonth Varchar(6), Fee_Code Varchar(20), Fee_Transaction_Amount Decimal(12,2), Fee_Transaction_Date Datetime, Fee_Transaction_Type Char)
INSERT INTO #temptable (ReportingMonth, Fee_Code, Fee_Transaction_Amount, Fee_Transaction_Date, Fee_Transaction_Type)
SELECT 'Jan-13', 'ONE TIME DRAFT FEE', '20', '01/24/2013', 'A'
UNION ALL SELECT 'Feb-13', 'LATE CHARGE', '33.6', '02/19/2013', 'A'
UNION ALL SELECT 'Mar-13', 'LATE CHARGE', '37.01', '03/18/2013', 'A'
[code]....
Here is the data mapping description:
Reporting Month = Month - Year
Fee Code = Fee Description Name
Fee Transaction Amount = Fee Amount
Fee Transaction Date = When Fee Amount was Applied
Fee Transaction Type = "A" = Assessed Fee; "W" = Waived Fee; "P" = Paid Fee
I've also included an image with the beginning data set, what I want to identify highlighted in red, and what my final data set should look like after those 4 records are removed.
Here are the logic requirements: In the attachment, what I need the logic to do is essentially identify the $20 One Time Draft Fee from the first instance, using the MIN transaction date. Since $80 was waived for this fee code (One Time Draft Fee), I would expect the first 4 records (highlighted in red) to be identified as the target; as you can see in the attachment, the second data set has those 4 highlighted items removed. That should be my final output.
I am trying to loop through and remove the waived amounts from the assessed amounts, then tie that back to remove those records from my base data.
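Here is a minimal set-based sketch (no loop needed), assuming the waived total per fee code should knock out the earliest assessed fees first: a running total over the assessed rows is compared against the waived total, and rows fully covered by the waived amount are excluded.

;WITH waived AS (
    -- total amount waived per fee code
    SELECT Fee_Code, SUM(Fee_Transaction_Amount) AS WaivedTotal
    FROM #temptable
    WHERE Fee_Transaction_Type = 'W'
    GROUP BY Fee_Code
), assessed AS (
    -- running total of assessed fees, earliest first
    SELECT t.*,
           SUM(t.Fee_Transaction_Amount) OVER (PARTITION BY t.Fee_Code
                                               ORDER BY t.Fee_Transaction_Date
                                               ROWS UNBOUNDED PRECEDING) AS RunningAssessed
    FROM #temptable t
    WHERE t.Fee_Transaction_Type = 'A'
)
SELECT a.ReportingMonth, a.Fee_Code, a.Fee_Transaction_Amount,
       a.Fee_Transaction_Date, a.Fee_Transaction_Type
FROM assessed a
LEFT JOIN waived w ON w.Fee_Code = a.Fee_Code
WHERE a.RunningAssessed > ISNULL(w.WaivedTotal, 0);  -- keep only rows not covered by the waived amount

This assumes the waived total exactly covers whole assessed fees; partial coverage would need extra handling.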
Hi. I tried posting this query in microsoft.public.sqlserver.programming but got no response. I am new to replication, but I am trying to set up transactional replication of tables from one database to another in MSSQL 2000 (SP2). My target tables have primary keys defined. Under publication properties I go to the Snapshot tab and, for each table, I clear the check boxes that say "Drop the existing table and re-create it" and "Clustered indexes", so nothing is checked on this page for any table. Whenever the subscription is reinitialized, it drops the primary keys on my target tables and replaces them with a unique clustered index on the column that used to be the primary key. Is this normal? Is there any way to stop it from doing this? I don't plan to send the snapshot more than once and will let transactional replication take over for keeping my source and target in sync, but if I ever have to reinitialize the subscription, it would seem that I (or someone) will have to take a second step of manually dropping these clustered indexes and recreating the primary keys on the target tables. Thanks in advance. -- Dick Christoph
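For what it's worth, this behavior is usually governed by the article's schema_option bitmask: unless primary key replication is requested, the snapshot scripts a plain clustered index in place of the PK. A hedged sketch, with hypothetical publication/article names; 0x80 is the documented "replicate primary key constraints" bit, and the combined value shown is an example only that should be OR'd with your current settings:

EXEC sp_changearticle
     @publication = N'MyPub',                 -- hypothetical name
     @article     = N'MyTable',               -- hypothetical name
     @property    = N'schema_option',
     @value       = N'0x0000000000000083',    -- example only: existing options plus 0x80 (replicate PK constraints)
     @force_invalidate_snapshot = 1;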
I want to aggregate the readings to monthly values; I want to display the reading value for Oct 2013, Nov 2013, and so on. My question is simple and I have tried to follow the etiquette.
IF OBJECT_ID('TempDB..#mytable','U') IS NOT NULL DROP TABLE #mytable
--===== Create the test table
CREATE TABLE #mytable ( meterID INT, Readingdate DATETIME, reading REAL )  -- no PK on meterID: meters repeat across readings
--===== Setup any special required conditions, especially where dates are concerned
SET DATEFORMAT DMY
INSERT INTO #mytable (meterID, Readingdate, reading)
SELECT 4, 'Oct 17 2013 12:00AM', 5.1709 UNION ALL
SELECT 4, 'Oct 17 2013 12:15AM', 5.5319 UNION ALL
SELECT 4, 'Nov 17 2013 12:00AM', 5.5793 UNION ALL
SELECT 4, 'Nov 17 2013 2:00PM', 5.2471 UNION ALL
SELECT 5, 'Nov 17 2013 12:00AM', 5.1177 UNION ALL
SELECT 5, 'Nov 17 2013 2:00PM', 5.5510 UNION ALL
SELECT 5, 'Dec 17 2013 3:00PM', 5.5128 UNION ALL
SELECT 5, 'Dec 17 2013 4:00PM', 5.5758
Output should display as
MeterId Period Reading
MeterId   Period   Reading
4         Oct 13   10.20
4         Nov 13   10.40
5         Oct 13   10.20
5         Nov 13   10.40
4         Dec 13   11.15
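A minimal sketch of the grouping, assuming a straight SUM per meter per calendar month is what's wanted:

SELECT meterID,
       CONVERT(CHAR(7), Readingdate, 120) AS Period,  -- yyyy-MM
       SUM(reading) AS Reading
FROM #mytable
GROUP BY meterID, CONVERT(CHAR(7), Readingdate, 120)
ORDER BY meterID, Period;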
Hi, I am copying records in a table. The source table and the target table are the same. I need the value of the id field from both the source row and the target row. Is there a way to do this with one query?
I tried the following, but it doesn't seem to work:
INSERT tableOne (value1, value2, value3) OUTPUT source.id, inserted.id SELECT value1, value2, value3 FROM tableOne AS source WHERE ID = @number
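The OUTPUT clause of a plain INSERT can only reference the inserted row, not the source; only MERGE can expose source columns in OUTPUT. A sketch of the usual workaround, assuming id is an IDENTITY column:

MERGE tableOne AS target
USING (SELECT id, value1, value2, value3
       FROM tableOne
       WHERE id = @number) AS source
ON 1 = 0  -- never matches, so every source row is treated as new and inserted
WHEN NOT MATCHED THEN
    INSERT (value1, value2, value3)
    VALUES (source.value1, source.value2, source.value3)
OUTPUT source.id AS source_id, inserted.id AS target_id;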
I need to run a query to get the following result (by carrier and for each calc_date, calculate the percentage of all individuals who have rcf greater than 0.73):
carrier, calc_date, count of individuals with rcf > 0.73, count of all individuals, percentage of individuals with rcf greater than 0.73.
does anyone have an idea of how to achieve that result?
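One way is conditional aggregation, sketched here against an assumed table name (your real table and column names will differ):

SELECT carrier,
       calc_date,
       SUM(CASE WHEN rcf > 0.73 THEN 1 ELSE 0 END) AS ind_over_073,
       COUNT(*) AS ind_total,
       100.0 * SUM(CASE WHEN rcf > 0.73 THEN 1 ELSE 0 END) / COUNT(*) AS pct_over_073
FROM individual_rcf   -- hypothetical table name
GROUP BY carrier, calc_date;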
Void Start Date: When a property becomes empty or vacant
Let Date: When the property is filled in again
I have the sample data below and like to show the void loss per month basis as below:
1) Allocate the amount from voidloss column between months based on voiddays:
for example, for propcode 3698 the amount 13,612.56 needs to be divided between September and October based on the VoidDays; i.e., of the 39 void days, 25 were in September and 14 in October, hence 8726 will be allocated to September and 4886.56 to October.
2) After allocating the amount sum the amount by controlgroup and total the voiddays per month. It will be great if we can divide the voiddays between months and sum them by controlgroup as well.
So in the end result we should have
ControlGroup   Month       Year   VoidLoss   VoidDaysInMonth
106            September   2014   8726       25
106            October     2014   4886.56    14
106            December    2014   2940       7
Declare @voidloss Table
( History_IND INT
 ,PropCode VARCHAR(10)
 ,VoidCategory VARCHAR(10)
 ,ControlGroup VARCHAR(10)
 ,VoidStartDate DATE
 ,LetDate DATE      -- assumed from the description above (definition was cut off)
 ,VoidDays INT      -- assumed
 ,VoidLoss MONEY    -- assumed
);
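A sketch of the month-splitting, assuming the LetDate and VoidLoss columns completed above. A recursive CTE generates one row per calendar month touched by the void period, and the loss is apportioned by days; check the day-count boundary convention against your own figures. (DATEFROMPARTS is SQL 2012+; add OPTION (MAXRECURSION 0) if a void can span more than 100 months.)

;WITH months AS (
    SELECT PropCode, ControlGroup, VoidLoss, VoidStartDate, LetDate,
           DATEFROMPARTS(YEAR(VoidStartDate), MONTH(VoidStartDate), 1) AS MonthStart
    FROM @voidloss
    UNION ALL
    SELECT PropCode, ControlGroup, VoidLoss, VoidStartDate, LetDate,
           DATEADD(MONTH, 1, MonthStart)
    FROM months
    WHERE DATEADD(MONTH, 1, MonthStart) <= LetDate
)
SELECT ControlGroup,
       DATENAME(MONTH, MonthStart) AS [Month],
       YEAR(MonthStart) AS [Year],
       SUM(VoidLoss * d.DaysInMonth / NULLIF(DATEDIFF(DAY, VoidStartDate, LetDate), 0)) AS VoidLoss,
       SUM(d.DaysInMonth) AS VoidDaysInMonth
FROM months
CROSS APPLY (SELECT DATEDIFF(DAY,
                    CASE WHEN VoidStartDate > MonthStart THEN VoidStartDate ELSE MonthStart END,
                    CASE WHEN LetDate < DATEADD(MONTH, 1, MonthStart) THEN LetDate ELSE DATEADD(MONTH, 1, MonthStart) END)
             AS DaysInMonth) AS d
GROUP BY ControlGroup, DATENAME(MONTH, MonthStart), YEAR(MonthStart);

With the sample figures this yields 13,612.56 * 25/39 = 8726 for September and 13,612.56 * 14/39 = 4886.56 for October.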
I have table A, which has an accountid, df_date1, and df_date2. It is a demographic table with one record per account. I have a table B that I need to use to populate the df_date1 field in table A. Table B is normalized and has an accountid and a df_date1 field, but may have several records per accountid; I need the max(date) from this table. I wanted to do an update statement like below:
update A set df_date1 = max(df_date1) from b where a.account_id = b.account_id
I get the error message Server: Msg 157, Level 15, State 1, Line 3 An aggregate may not appear in the set list of an UPDATE statement.
Is there another way to do this with a subselect and update?
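Yes: aggregate in a derived table first, then join to it. A sketch:

UPDATE a
SET    a.df_date1 = b.max_date
FROM   A AS a
JOIN  (SELECT account_id, MAX(df_date1) AS max_date
       FROM   B
       GROUP BY account_id) AS b
       ON b.account_id = a.account_id;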
I have a SSIS package that simply moves data from a SQL database A to another SQL database B. I have updated (increased) the size of a nvarchar column on both A and B. I am wondering if there is a way to somehow "refresh" the SSIS package so I don't have to rebuild and redeploy it. The error I get now is a truncation error: "Text was truncated or one or more characters had no match in the target code page".
Hi everyone, I am facing a problem printing reports from the browser and also when I export to PDF: blank pages appear when a report column receives a large amount of text (around 2,500 characters) in a column value. If the report receives an acceptable amount of data it prints properly, i.e. no blank pages at all. I have maintained all the properties, such as margins + body size < page size. Can anyone help me with this issue?
In the full recovery model, if I run a transaction that inserts 10MB of data into a table, then 10MB of data is written to the data file. Does this mean the log file will grow by exactly 10MB as well?
I understand that all transactions are logged to the log file to enable rollback and point in time recovery, but what is actually physically stored in the log file for this transactions record? Is it the text of the command from the transaction or the actual physical data from that transaction?
I ask because say if I have two drives, one with 5MB/s write speed for the log file and one with 10MB/s write speed for the data file, if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5MB/s by the log file drive, or is SQL server not going to try and log all 10 MB each second to the log file?
I just joined a bank-related IT department where the existing IT team has been working on a new application for a year now, with the database designed on SQL Server 2012. The currency-related amount columns' datatypes differ between tables: in some it is decimal, in others something else. I want to suggest the Money datatype for the currency-related columns. My question is: is Money the correct choice, or should I ignore this?
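One common argument for decimal over money is intermediate rounding: money keeps only four decimal places at every step of a calculation, so errors compound. A quick illustration:

DECLARE @m MONEY         = 1,
        @d DECIMAL(19,4) = 1;
SELECT @m / 3 * 3 AS money_result,    -- 0.9999 (1/3 is rounded to 0.3333 before multiplying)
       @d / 3 * 3 AS decimal_result;  -- 0.999999999999999 (decimal keeps far more intermediate digits)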
The actual schema I'm working against is proprietary and also adds more complication to the problem I'm trying to solve. So to solve this problem, I created a mock schema that is hopefully representative. See below for the mock schema, test data, my initial attempts at the query and the expected results.
-- greatly simplified schema that makes as much sense as the real schema
CREATE TABLE main (
    keyvalue INT NOT NULL PRIMARY KEY,
    otherdata VARCHAR(10)
);
CREATE TABLE dates (
    datekeyvalue INT NOT NULL,
    keyvalue INT NOT NULL,
    datevalue DATE NULL,
    PRIMARY KEY (datekeyvalue, keyvalue)
);
CREATE TABLE payments (
    datekeyvalue INT NOT NULL,
    keyvalue INT NOT NULL,
    paymentvalue INT NULL,
    PRIMARY KEY (datekeyvalue, keyvalue)
);
[Code] ....
Desired results:
SELECT 1 AS keyvalue, 'first row' AS otherdata, '2015-09-25' AS nextdate, 30 AS next_payment UNION ALL
SELECT 2, 'second row', '2015-10-11', 150 UNION ALL
SELECT 3, 'third row', NULL, NULL
I know I'm doing something wrong in the last query, and I believe another sub-query is needed?
Let me answer a few questions in advance:
Q: This schema looks horrible! A: You don't know the half of it. I cleaned it up considerably for this question.
Q: Why is this schema designed like this? A: Because it's a 3rd-party mainframe file dump being passed off as a relational database. And, no, I can't change it.
Q: I hope this isn't a frequently-run query against a large, high-activity database in which performance is mission-critical. A: Yes, it is, and I left out the part where both the date and the amount are actually characters and have to pass through TRY_CONVERT (because I know how to do that part).
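Without the real schema, a hedged sketch of the shape I'd expect the answer to take: OUTER APPLY picks each key's earliest qualifying date and its matching payment (the "next" cutoff of today's date is an assumption):

SELECT m.keyvalue,
       m.otherdata,
       x.datevalue    AS nextdate,
       x.paymentvalue AS next_payment
FROM main AS m
OUTER APPLY (
    SELECT TOP (1) d.datevalue, p.paymentvalue
    FROM dates AS d
    JOIN payments AS p
      ON  p.keyvalue     = d.keyvalue
      AND p.datekeyvalue = d.datekeyvalue
    WHERE d.keyvalue  = m.keyvalue
      AND d.datevalue >= CAST(GETDATE() AS DATE)  -- assumed definition of "next"
    ORDER BY d.datevalue
) AS x;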
Hi all! In a statement I want to find the IDENTITY-column value for the row that has the smallest value. I have tried this, but for the result I also want to know the row_id for each. Can this be solved in a neat way, without using temporary tables?

CREATE TABLE some_table (
    row_id INTEGER NOT NULL IDENTITY(1,1) PRIMARY KEY,
    row_value INTEGER,
    row_name VARCHAR(30)
)
GO
/* DROP TABLE some_table */
INSERT INTO some_table (row_name, row_value) VALUES ('Alice', 0)
INSERT INTO some_table (row_name, row_value) VALUES ('Alice', 1)
INSERT INTO some_table (row_name, row_value) VALUES ('Alice', 2)
INSERT INTO some_table (row_name, row_value) VALUES ('Alice', 3)
INSERT INTO some_table (row_name, row_value) VALUES ('Bob', 2)
INSERT INTO some_table (row_name, row_value) VALUES ('Bob', 3)
INSERT INTO some_table (row_name, row_value) VALUES ('Bob', 5)
INSERT INTO some_table (row_name, row_value) VALUES ('Celine', 4)
INSERT INTO some_table (row_name, row_value) VALUES ('Celine', 5)
INSERT INTO some_table (row_name, row_value) VALUES ('Celine', 6)

SELECT MIN(row_value), row_name FROM some_table GROUP BY row_name
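One neat way without temp tables is a self-join against the grouped minimums (ROW_NUMBER would also work on SQL 2005+); a sketch:

SELECT t.row_id, t.row_value, t.row_name
FROM some_table AS t
JOIN (SELECT row_name, MIN(row_value) AS min_value
      FROM some_table
      GROUP BY row_name) AS m
  ON  m.row_name  = t.row_name
  AND m.min_value = t.row_value;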
I have the table below and would like to create a view to show the number of days the property was vacant or void, and the rent loss per month. The explanation below describes the required output.
For example, we have a property (house/unit/apartment) and the tenant vacates on 06/09/2014. Let's say we fill the property back on 15/10/2014. From this we know the property was empty or void for 39 days. Now we need to calculate the rent loss. Based on the market rent of the property we can get this. Let's say the market rent for our property is $349/pw. So the rent loss for 39 days is 349/7*39 = $1,944.43.
Now the tricky part, and what I'm trying to achieve: since the property was void or empty across 2 months, I want to know how many days the property was empty in the first month and the rent loss in that month, and how many days it was empty in the second month and the rent loss incurred in that month. Most properties are filled in the same month; only in a few cases is the property empty across two months.
As shown below we are splitting the period 06/09/2014 - 15/10/2014 and then calculating the void days and rent loss per month
Period                    No of Void Days   Rent Loss
06/09/2014 - 30/09/2014   24                349/7*24 = 1196.57
01/10/2014 - 15/10/2014   15                349/7*15 = 747.85
I have uploaded a screenshot of how the result on this link: [URL] ....
Declare @void Table
( PropCode VARCHAR(10)
 ,VoidStartDate date
 ,LetDate date
 ,Market_Rent Money
);
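A sketch of the view body against the table above: a recursive CTE generates one row per calendar month touched by the void period, then the days and weekly rent are applied per span. The boundary day-count may need a +/- 1 tweak to match the 24/15 split exactly. (DATEFROMPARTS/EOMONTH are SQL 2012+.)

;WITH months AS (
    SELECT PropCode, Market_Rent, VoidStartDate, LetDate,
           DATEFROMPARTS(YEAR(VoidStartDate), MONTH(VoidStartDate), 1) AS MonthStart
    FROM @void
    UNION ALL
    SELECT PropCode, Market_Rent, VoidStartDate, LetDate, DATEADD(MONTH, 1, MonthStart)
    FROM months
    WHERE DATEADD(MONTH, 1, MonthStart) <= LetDate
)
SELECT PropCode,
       s.SpanStart, s.SpanEnd,
       DATEDIFF(DAY, s.SpanStart, s.SpanEnd) AS VoidDays,
       CAST(Market_Rent / 7.0 * DATEDIFF(DAY, s.SpanStart, s.SpanEnd) AS DECIMAL(10,2)) AS RentLoss
FROM months
CROSS APPLY (SELECT CASE WHEN VoidStartDate > MonthStart THEN VoidStartDate ELSE MonthStart END AS SpanStart,
                    CASE WHEN LetDate < EOMONTH(MonthStart) THEN LetDate ELSE EOMONTH(MonthStart) END AS SpanEnd) AS s;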
I am using the below script and I am getting data at 15-minute intervals. I would like to aggregate this data to hourly, so instead of readings for 2014-01-01 00:15:00.000 and 2014-01-01 00:30:00.000 I want all the data aggregated under 2014-01-01 00:00:00.000, then under the next hour, and so on. How should I tweak this query to sum the interval values and display them?
SELECT r.MeterId, r.ReadingDate, r.Reading
FROM MeterReading AS r
JOIN MeterDetail AS d ON r.MeterId = d.MeterId
JOIN Building AS b ON d.BuildingId = b.BuildingId
WHERE b.BuildingName LIKE '%182%'
  AND r.ReadingDate BETWEEN '2014-01-01' AND '2014-01-10'
ORDER BY r.MeterId
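A sketch of the tweak: floor the timestamp to the hour and group on it (use AVG instead of SUM if the readings are gauge values rather than interval consumption):

SELECT r.MeterId,
       DATEADD(HOUR, DATEDIFF(HOUR, 0, r.ReadingDate), 0) AS ReadingHour,  -- timestamp floored to the hour
       SUM(r.Reading) AS Reading
FROM MeterReading AS r
JOIN MeterDetail AS d ON r.MeterId = d.MeterId
JOIN Building AS b ON d.BuildingId = b.BuildingId
WHERE b.BuildingName LIKE '%182%'
  AND r.ReadingDate BETWEEN '2014-01-01' AND '2014-01-10'
GROUP BY r.MeterId, DATEADD(HOUR, DATEDIFF(HOUR, 0, r.ReadingDate), 0)
ORDER BY r.MeterId, ReadingHour;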
I'd like to create a SQL view to divide an amount of 300,000 across 12 months, starting from July 2014 through June 2015, as shown below:
Amount   Month       Year
25,000   July        2014
25,000   August      2014
25,000   September   2014
25,000   October     2014
25,000   November    2014
25,000   December    2014
25,000   January     2015
25,000   February    2015
. . . .
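A sketch of one way to generate the rows; here the twelve month offsets come from a VALUES list, and the total and start date are parameters you would bake into the view:

DECLARE @total DECIMAL(12,2) = 300000,
        @start DATE          = '2014-07-01';

SELECT CAST(@total / 12 AS DECIMAL(12,2)) AS Amount,   -- 25,000.00
       DATENAME(MONTH, x.MonthStart)      AS [Month],
       YEAR(x.MonthStart)                 AS [Year]
FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11)) AS n(i)
CROSS APPLY (SELECT DATEADD(MONTH, n.i, @start) AS MonthStart) AS x
ORDER BY x.MonthStart;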
We are planning to set up HA using either AAG or FCI. On my production environment (P), I have transactional replication configured for 2 of the 3 databases. The data gets replicated to another server (C1) hosted in the cloud, with a local distributor at P.
If I configure AAG/FCI, how do I handle failover? I want to set up P as the AAG primary with two replicas, S1 and S2. P->S1 will be synchronous while P->S2 will be asynchronous. In case P goes down, how will the replication fail over to S1? In case both P and S1 go down, how do I fail over to S2 with the replication?
I am using SSRS 2014. I'm using a matrix instead of a tablix because it allows me to have dynamic columns. In the example I'm showing, two of the columns use the Sum function to get the total counts per practice. The third column contains percentages, so I averaged it for the total, but that value is inaccurate compared to the value I would get if I divided the two totals that are sums of the counts. Is there a way for me to specify that I want the total-counts numerator divided by the total-counts denominator?
Here's an example report output with the percentage column averaged (inaccurate):
PCP          numerator   denominator   percentage
John Smith   66          104           63.46
Tom Jones    4           36            11.11
. . .
Jane Doe     1           1             100
Total        708         1005          72.3
So the 72.3 value is from Avg(metricvalue)
I would like to do this instead: % total = 708/1005, which equals 70.4 - a significant difference.
The metricvalue column is the value behind every number above (because it's a matrix).
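In SSRS the usual fix is to override the total cell with an expression that divides the summed counts rather than averaging the precomputed percentages. A sketch with hypothetical field names (the real ones depend on how the matrix pivots metricvalue):

=Sum(IIF(Fields!metric.Value = "numerator", Fields!metricvalue.Value, 0))
 / Sum(IIF(Fields!metric.Value = "denominator", Fields!metricvalue.Value, 0)) * 100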
So, Microsoft decided to deprecate transactional replication with updatable subscriptions. In that case, you have 2 options (if I am correct): pay for Enterprise (if you are not already on it) and use peer-to-peer, or use bidirectional transactional replication, which is basically setting up transactional from db1 to db2 and also transactional from db2 to db1.
The issue I see in both cases is conflict resolution. With updatable subscriptions, you could specify how to handle the conflict. With either of these 2 options (from what I can tell) you cannot allow the engine to handle this for you.
Any thoughts? Seems like a slap in the face to those who have been using MS for years, and a damn good reason for companies that rely on updatable subscriptions not to upgrade to 2012.
I am trying to tie together tables that show quantities of a product committed to an order and quantities on hand by a location.
My end result should look like the below example.
Item    Location   QtyOnHandByLocation   SumQtyCommitTotal
Prod1   NJ         10                    10
Prod1   NY         10                    0
Prod1   FL         0                     0
Prod1   PA         0                     0
So I can see I have 10 items in NJ on hand and committed to an order, and 10 available in NY but not on an order. The other two locations have no quantities.
Below is the CTE but it produces inaccurate results. I've tried running it several different ways by playing with the grouping but have no luck thus far.
--create the temp tables
Create table #SalesLine
( No VARCHAR(50) NOT NULL
 ,LocationCode VARCHAR(50) NOT NULL
 ,QtyCommit INT NOT NULL
)
create table #ItemLedgerEntry
[code]....
I am close to the desired results but can't find a way.
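A sketch of the usual fix for the row fan-out: aggregate each side to Item/Location first, then join the two summaries (the #ItemLedgerEntry column names are guesses, since its definition was cut off above):

;WITH onhand AS (
    SELECT No, LocationCode, SUM(Qty) AS QtyOnHandByLocation   -- assumed columns
    FROM #ItemLedgerEntry
    GROUP BY No, LocationCode
), committedqty AS (
    SELECT No, LocationCode, SUM(QtyCommit) AS SumQtyCommitTotal
    FROM #SalesLine
    GROUP BY No, LocationCode
)
SELECT COALESCE(o.No, c.No)                     AS Item,
       COALESCE(o.LocationCode, c.LocationCode) AS Location,
       ISNULL(o.QtyOnHandByLocation, 0)         AS QtyOnHandByLocation,
       ISNULL(c.SumQtyCommitTotal, 0)           AS SumQtyCommitTotal
FROM onhand AS o
FULL OUTER JOIN committedqty AS c
  ON  c.No           = o.No
  AND c.LocationCode = o.LocationCode;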
I'm trying to write a query that returns the last 30 days' data and sums the amount by day. However, I need to do it for every year, not just the current one (I need to go back as far as 2000).
declare @t table (id int identity(1,1), dt datetime, amt MONEY)
insert into @t (dt, amt)
select '2014-11-30 23:39:35.717' as dt, 66 as amt UNION ALL
select '2014-11-30 23:29:16.747' as dt, 5 as amt UNION ALL
select '2014-11-22 23:25:33.780' as dt, 62 as amt UNION ALL
[Code] ....
--expected output
select '2014-11-30' AS dt, 71 AS Amt UNION ALL
select '2014-11-22' AS dt, 62 AS Amt UNION ALL
select '2014-11-20' AS dt, 66 AS Amt UNION ALL
select '2014-11-18' AS dt, 102 AS Amt UNION ALL
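A sketch, reading "last 30 days" as the 30 days leading up to each year's most recent transaction in the data (adjust the anchor if it should be today's month/day instead):

;WITH flagged AS (
    SELECT CAST(dt AS DATE) AS d,
           amt,
           MAX(dt) OVER (PARTITION BY YEAR(dt)) AS year_max  -- latest transaction in that year
    FROM @t
)
SELECT d AS dt, SUM(amt) AS Amt
FROM flagged
WHERE d > CAST(DATEADD(DAY, -30, year_max) AS DATE)
GROUP BY d
ORDER BY d DESC;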
I built a number of publications on a SQL Server 2005 box to replicate to a SQL Server 2012 subscriber. All the publications except one are fine. During the snapshot phase of schema script generation I get Script Failed for Table 'dbo.MediaDisplayLibraryFileData'. From the Replication monitor for the Snapshot Agent on the Publication I get, "Column FileData in object MediaDisplayLibraryFileData contains type VarBinaryMax, which is not supported in the target server version, SQL Server 2000." This message makes no sense since the target server version is 2012. I have even checked that the compatibility level was set to 110 before I started the process of setting up replication. How do I resolve this error?
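That message usually points at a publication compatibility setting rather than the real subscriber version: if the publication's backward compatibility level is still SQL Server 2000, varbinary(max) cannot be scripted. A hedged check/fix on the 2005 publisher (publication name is hypothetical):

EXEC sp_helppublication @publication = N'MyPublication';  -- inspect the backward_comp_level column

EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'publication_compatibility_level',
     @value       = N'90RTM';  -- SQL Server 2005; may require generating a new snapshot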
I have found some articles with no publication in our transactional replication.
For example, running this:
select p.publication, a.publication_id, a.article
from dbo.MSArticles as a
left outer join dbo.MSpublications as p
  on a.publication_id = p.publication_id
shows this:
publication         publication_id   article
NULL                1                org_Community
NULL                3                org_Community
Purchasing to EDW   5                org_Community
NULL                1                org_Division
NULL                3                org_Division
Purchasing to EDW   5                org_Division
How can I get rid of the articles that are not part of a publication?
I can't use sp_droparticle because it requires a publication which these articles do not have.
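With no publication row to hand to sp_droparticle, about the only route left is editing the distribution metadata directly. This is unsupported, so take a backup of the distribution database first; a sketch mirroring the query above:

-- UNSUPPORTED: removes article rows whose publication no longer exists
DELETE a
FROM dbo.MSarticles AS a
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MSpublications AS p
                  WHERE p.publication_id = a.publication_id);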
I am using SQL 2012 SE and implementing transactional replication. I need to insert the rows from publisher database tables to new tables, drop the old tables and rename the new tables with the old table names.
For example:
Publisher database tables that are being replicated:
Table1 Table2 Table3
and I am going to create new tables (Table1_new, Table2_new, Table3_new) in the publisher database.
Drop the constraints from the old tables and then drop the tables (does this require the articles to be removed from replication?):
Table1 Table2 Table3
Rename
Table1_new to Table1 Table2_new to Table2 Table3_new to Table3
Does this require replication to set up from scratch or add the three articles only to replication? Is there a way this can be done without pausing or reinitializing replication or without removing articles and adding them back?
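From what I understand you don't need to rebuild the whole publication, but the three articles do have to come out and go back in around the swap, and the affected subscriptions need to be re-added. A hedged sketch for one table (names hypothetical, parameters abbreviated):

EXEC sp_dropsubscription @publication = N'MyPub', @article = N'Table1', @subscriber = N'all';
EXEC sp_droparticle      @publication = N'MyPub', @article = N'Table1';

-- ... do the insert / drop / rename swap here ...

EXEC sp_addarticle       @publication = N'MyPub', @article = N'Table1', @source_object = N'Table1';
EXEC sp_addsubscription  @publication = N'MyPub', @article = N'Table1',
                         @subscriber = N'MySubscriber', @destination_db = N'MyDB';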
I am trying to create an aggregate table where the value is a rolling sum: type 'a' on date 1 is the sum of the values in the main table; type 'a' on date 2 is the sum of the values for type 'a' on dates 1 and 2. Is this possible? I have been trying UPDATE T-SQL with SUM(CASE WHEN date <= date ...) statements but can't get it to run.
create table main_table (type nvarchar(10), date int, datavalues int);
insert into main_table values ('a', 1, 3);
insert into main_table values ('b', 1, 4);
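A windowed SUM (SQL 2012) gives the rolling total directly, no UPDATE loop needed; a sketch that first collapses to one row per type/date, then accumulates:

;WITH daily AS (
    SELECT [type], [date], SUM(datavalues) AS day_total
    FROM main_table
    GROUP BY [type], [date]
)
SELECT [type], [date],
       SUM(day_total) OVER (PARTITION BY [type]
                            ORDER BY [date]
                            ROWS UNBOUNDED PRECEDING) AS rolling_total
FROM daily;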
I have an existing publication in sql 2012 with 2 articles, and then I add 2 more articles. After that when I generate a snapshot, will the snapshot be generated for 2 new articles only or for all 4 articles?
I remember adding 1 new articles to one existing publication with 150 articles and when I generated snapshot, it was generated only for 1 article. But I don't remember clearly.
Does it behave differently for small and large number of articles?
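In my experience the deciding factor is not the article count but the publication's immediate_sync (and allow_anonymous) setting: when both are off, the snapshot covers only the newly added articles; when immediate_sync is on, a full snapshot of all articles is generated. A quick check (publication name hypothetical):

EXEC sp_helppublication @publication = N'MyPublication';  -- look at the immediate_sync column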
We have a database which is (a subset of tables are) replicated to another via transactional replication. Whilst most changes made at the published database reach the subscriber within a matter of seconds, we have a SQL Agent job which performs a calculation in the published database and then immediately exports data from the subscriber using log shipping. The result is that the calculated changes do not make it through to the exported transaction logs in time.
Is there a way to manually "refresh" the subscriber databases using T-SQL?
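There is no single "refresh" command, since the distribution agent pushes continuously; what can be done in T-SQL is to gate the export until replication has caught up, e.g. by posting a tracer token at the publisher after the calculation and waiting for it to reach the subscriber before taking the log backup. A sketch (publication name hypothetical):

-- at the publisher, in the published database
EXEC sys.sp_posttracertoken @publication = N'MyPublication';
-- then poll the distribution database (MStracer_tokens / MStracer_history)
-- until the token shows a subscriber arrival time, and only then run the export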