SQL Server 2008 :: Update Incoming Rows Based On Percentage Calculation
Mar 13, 2015
I have the below records coming in from source files:
ProdName Amount TranType
P1 100 A
P1 100 S
P2 200 A
P2 205 S
Where the ProdName is the same and the Amount is equal to (or within +/- 5% of) the other row's Amount, I have to update the TranType column to IN and OUT respectively, as shown in the table below.
I am okay with using two different tables if needed, i.e. the records come into one table and then I can reference that table to load the values into another.
ProdName Amount TranType
P1 100 IN
P1 100 OUT
P2 200 IN
P2 205 OUT
The incoming records can arrive in any order; the matching rows need not be consecutive.
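Here is a minimal sketch of one way to pair the rows and flip TranType. It assumes the incoming rows sit in a staging table I've called dbo.SourceTran(ProdName, Amount, TranType) with one 'A' and one 'S' row per product; the table name and the exact tolerance rule are assumptions:

UPDATE t
SET    TranType = CASE t.TranType WHEN 'A' THEN 'IN' ELSE 'OUT' END
FROM   dbo.SourceTran AS t
WHERE  t.TranType IN ('A', 'S')
  AND  EXISTS (SELECT 1
               FROM   dbo.SourceTran AS m
               WHERE  m.ProdName = t.ProdName
                 AND  m.TranType IN ('A', 'S')
                 AND  m.TranType <> t.TranType
                 -- within +/- 5% of the current row's amount (adjust if the tolerance
                 -- should be measured against the other leg instead)
                 AND  ABS(m.Amount - t.Amount) <= 0.05 * t.Amount);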
This is on SQL Server 2008. Please find below a detailed description and a sample of the data I am working on.
Requirements:
1. Where 'Channel' is not equal to "Omnibus", find sets of one purchase and one redemption whose 'Trans Description' values are "Purchase" and "Redemption", that match on 'System', 'Account TA Number', 'Product Name', and 'Settled Date', and where the 'Trade Amount' of the purchase and redemption are within 5% of each other; then display those sets of records.
2. If deemed wash trades, allow the user to update the purchase and redemption pair: change the purchase's 'Trans Description' from "Purchase" to "Exchange In" and the redemption's from "Redemption" to "Exchange Out".
System Channel Dealer Name Firm Name Product Cusip Product Name Product Share Class Trade ID Settled Date Account TA Number Trans Description Trade Amount
SCHWABPORTAL US - ASG MILLIMAN MILLIMAN 64128K777 Strategic Income Fund A 29806259 30-Jan-15 000BY00F2RW Redemption $ 25,68,458.15
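For the wash-trade requirement itself, the same pairing idea extends to the extra join keys. This is only a sketch; the table name dbo.Trades is an assumption, and the bracketed column names come from the header row above:

UPDATE t
SET    [Trans Description] = CASE [Trans Description]
                                  WHEN 'Purchase'   THEN 'Exchange In'
                                  WHEN 'Redemption' THEN 'Exchange Out'
                             END
FROM   dbo.Trades AS t
WHERE  t.Channel <> 'Omnibus'
  AND  t.[Trans Description] IN ('Purchase', 'Redemption')
  AND  EXISTS (SELECT 1
               FROM   dbo.Trades AS m
               WHERE  m.[System]            = t.[System]
                 AND  m.[Account TA Number] = t.[Account TA Number]
                 AND  m.[Product Name]      = t.[Product Name]
                 AND  m.[Settled Date]      = t.[Settled Date]
                 AND  m.[Trans Description] IN ('Purchase', 'Redemption')
                 AND  m.[Trans Description] <> t.[Trans Description]
                 -- purchase and redemption amounts within 5% of each other
                 AND  ABS(m.[Trade Amount] - t.[Trade Amount]) <= 0.05 * t.[Trade Amount]);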
I'm trying to get a calculation based on count(*) to format as a decimal value or percentage.
I keep getting 0s for the solution_rejected_percent column. How can I format this like 0.50 (for 50%)?
SELECT mi.id,
       COUNT(*) AS cnt,
       COUNT(*) + 1 AS cntplusone,
       CAST(COUNT(*) / (COUNT(*) + 1) AS numeric(10,2)) AS solution_rejected_percent
FROM   metric_instance mi
       INNER JOIN incident i ON i.number = mi.id
WHERE  mi.definition = 'Solution Rejected'
  AND  i.state = 'Closed'
GROUP  BY mi.id
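The zeros come from integer division: COUNT(*) and COUNT(*) + 1 are both integers, so the division truncates to 0 before the CAST ever runs. Casting one operand before dividing fixes it; this is the same query with only that change:

SELECT mi.id,
       COUNT(*) AS cnt,
       COUNT(*) + 1 AS cntplusone,
       CAST(COUNT(*) AS numeric(10,2)) / (COUNT(*) + 1) AS solution_rejected_percent
FROM   metric_instance mi
       INNER JOIN incident i ON i.number = mi.id
WHERE  mi.definition = 'Solution Rejected'
  AND  i.state = 'Closed'
GROUP  BY mi.id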
I have a table like the following (with much more data, but the concept is the same) with dates and actions for people: a column called Action holds the 'begin' and 'end' markers along with their dates.
(I attached a picture because I could not figure out how to Format it)
begin Date end Date Name
begin 2014-10-15 end 2014-10-31 phil
begin 2014-09-18 end 2014-09-30 phil
begin 2014-08-21 end 2014-08-23 John
I need the query to be like this. The idea is to have the query grab only the next 'end', not all ends, which is what my attempts have done: I get not just the closest end to the begin date, but ALL ends for the same person.
I Need it to look like this:
begin Date end Date Name
begin 2014-10-15 end 2014-10-31 phil
begin 2014-09-18 end 2014-09-30 phil
begin 2014-08-21 end 2014-08-23 John
There can be different people, so the query needs to return the begin and end rows for each person in sequential order. I can't figure out how to select only the 'next' end; my query always gets every 'end' value that has a 'begin'.
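One way to grab only the next 'end' is OUTER APPLY with TOP (1). This is a sketch that assumes the raw rows live in a table I've called dbo.Actions(Name, Action, ActionDate), with one row per 'begin' or 'end' event:

SELECT b.ActionDate AS BeginDate,
       e.ActionDate AS EndDate,
       b.Name
FROM   dbo.Actions AS b
       OUTER APPLY (SELECT TOP (1) a.ActionDate        -- closest end on/after this begin
                    FROM   dbo.Actions AS a
                    WHERE  a.Name = b.Name
                      AND  a.Action = 'end'
                      AND  a.ActionDate >= b.ActionDate
                    ORDER  BY a.ActionDate) AS e
WHERE  b.Action = 'begin'
ORDER  BY b.Name, b.ActionDate;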
I am deciding whether to use a CTE or this simpler, faster approach that hijacks a system table.
SELECT s.ORDER_NUMBER, s.PRODUCT_ID, 1 AS QTY, s.VALUE / s.QTY AS VALUE
FROM   @SPLITROW s
       INNER JOIN master.dbo.spt_values t
               ON t.type = 'P'
              AND t.number BETWEEN 1 AND s.QTY
I just wanted to know if it's okay to use system tables in a production environment and whether there are any pitfalls in using them.
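spt_values is undocumented and unsupported, so Microsoft is free to change it between versions or patches. If that is a concern, a numbers set can be generated inline instead; this sketch assumes QTY never exceeds the TOP value:

;WITH Tally AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS number
    FROM   sys.all_columns AS c1
           CROSS JOIN sys.all_columns AS c2
)
SELECT s.ORDER_NUMBER, s.PRODUCT_ID, 1 AS QTY, s.VALUE / s.QTY AS VALUE
FROM   @SPLITROW AS s
       INNER JOIN Tally AS t
               ON t.number BETWEEN 1 AND s.QTY;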
Part 1: When there is a ~ (tilde) with any value after it, the value goes into a new row, duplicating the other columns (like the facility in the attached screenshot), with a new column holding the sequence number.
Part 2: When there is a ^ (caret), it starts a new column, irrespective of whether a value is present or not.
CREATE TABLE [dbo].[Equipment](
    [EQU] [VARCHAR](50) NOT NULL,
    [Notes] [TEXT] NULL,
    [Facility] [VARCHAR](50) NULL)

INSERT INTO [dbo].[Equipment] ([EQU], [Notes], [Facility])
SELECT '1001','BET I^BOBBETT,DAN^1.0^REGULAR^22.09^22.090~BET II^^^REGULAR^23.56^0~','USA' UNION
SELECT '998','BET I^JONES, ALANA^0.50^REGULAR^22.09^11.0450~BET II^^^REGULAR^23.56^0~','Canada' UNION
SELECT '55','BET I^SLADE,ADAM F.^1.5^REGULAR^27.65^41.475~','USA'

SELECT * FROM dbo.Equipment
I created the table in Excel and attached the screenshot for a clear picture of what is required. I use Text to Columns in Excel to achieve this; I'm not sure if there is anything similar in SQL.
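SQL Server 2008 has no built-in splitter, but the tilde split into rows can be done with an XML trick. This is only a rough sketch: it assumes the Notes values never contain characters that are illegal in XML, and each resulting Piece would still need to be split on ^ (with CHARINDEX/SUBSTRING or the same XML approach) to get the individual columns:

;WITH Pieces AS (
    SELECT e.EQU,
           e.Facility,
           -- sequence within each Notes value; document order is assumed here
           ROW_NUMBER() OVER (PARTITION BY e.EQU ORDER BY (SELECT NULL)) AS Seq,
           LTRIM(RTRIM(n.x.value('.', 'varchar(500)')))                  AS Piece
    FROM   dbo.Equipment AS e
           CROSS APPLY (SELECT CAST('<r>' + REPLACE(CAST(e.Notes AS varchar(max)), '~', '</r><r>') + '</r>' AS xml) AS doc) AS d
           CROSS APPLY d.doc.nodes('/r') AS n(x)
)
SELECT EQU, Facility, Seq, Piece
FROM   Pieces
WHERE  Piece <> ''        -- drop the empty piece the trailing ~ produces
ORDER  BY EQU, Seq;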
We are doing a review of a SQL 2008 server. We can identify the linked servers defined on the server.
However, is there a sure way to identify incoming DB links to the database server? I know successful connections are easy to identify, but I wish to know all possible incoming DB links to the server.
Will the DB logs help in identifying all attempted and successful DB link connections?
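There is no catalog of "incoming" linked servers; connections are only visible while they exist, or when captured with a login trigger, a server-side trace, or SQL Audit. Two quick starting points, as a sketch:

-- outgoing: linked servers defined on this instance
SELECT name, product, data_source
FROM   sys.servers
WHERE  is_linked = 1;

-- incoming: who is connected right now, and from where
SELECT s.session_id, s.login_name, s.host_name, s.program_name, c.client_net_address
FROM   sys.dm_exec_sessions AS s
       JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE  s.is_user_process = 1;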
I have a table where I would like to update the document number row for 3k rows. The problem I have is that the documents come in sets of two (version 1 and 2) but both have different numbers. Picture it like this below:
DOCNUM: 4445787 Version 1
DOCNUM: 4445790 Version 2
It should be the same docnum (i.e. 4445787 Version 1, 4445787 Version 2).
The challenge is how we can assign the docnum used for version 1 to version 2 as well. Basically, in SQL we need a way to:
1. Find a way to distinguish the pair of documents in the target db that are the same even though they have different docnums.
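As a purely hypothetical sketch of the second half (the pairing key in point 1 is still the open question), assume some column or combination of columns, called MatchKey below, identifies each version-1/version-2 pair; the version-1 docnum can then be copied across like this:

UPDATE v2
SET    v2.DOCNUM = v1.DOCNUM
FROM   dbo.Documents AS v2
       JOIN dbo.Documents AS v1
         ON v1.MatchKey = v2.MatchKey     -- hypothetical pairing key
        AND v1.Version  = 1
WHERE  v2.Version = 2
  AND  v2.DOCNUM <> v1.DOCNUM;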
The objective is to identify orders where an order fee has been applied incorrectly. I have multiple orders per customer, my table contains an orderID and a customerID. Currently if the customer places additional orders before the previous orders have been closed/cancelled, then additional fees are being applied.
Let's say I'm comparing order #1 to order #2. I need to identify these rows where the following is true:-
The CustID is the same.
Order #2 has a more recent order date.
Order #2 has a FeeDate Before the CancelledDate of Order #1 (or Order #1 has no cancellation date).
So in the table the orderID:2835692 of CustID: 24643 has a valid order fee. But all the subsequently placed orders have fees which were applied before the first order was cancelled and so I want to update the FeeInvalid column with a 'Y'. The first fee will always be valid.
I think I understand why the code I am trying doesn't achieve the result I want but I can't figure out how to write it correctly. Below is one example of code I've tried and also code to create the table and insert some test data.
UPDATE t1
SET    FeeInvalid = 'Y'
FROM   MockData t1
       JOIN MockData t2 ON t1.CustID = t2.CustID
WHERE  t1.CustID = t2.CustID
  AND  t2.OrderDate > t1.OrderDate
  AND  t2.FeeDate > t1.CancelledDate

CREATE TABLE [dbo].[MockData](
    [OrderID] [float] NULL,
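A sketch of the corrected direction: flag the later order (t2) when its fee date falls before the earlier order (t1) was cancelled, or when t1 was never cancelled. Because only the later order of each pair is updated, the first fee is never flagged:

UPDATE t2
SET    t2.FeeInvalid = 'Y'
FROM   dbo.MockData AS t2
       JOIN dbo.MockData AS t1
         ON t1.CustID = t2.CustID
        AND t1.OrderDate < t2.OrderDate
WHERE  (t2.FeeDate < t1.CancelledDate OR t1.CancelledDate IS NULL);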
I have a dataset named Diagnosis_Code_Analysis which has as part of the SQL query: COUNT(DISTINCT IntOOS) AS Cases which is a simple case count. In my tabular report I have a column with the following expression: =Sum(Fields!Cases.Value) and this works fine.
Now, my next column I want to see the percentage of the whole for each case and I am using the following expression: =Sum(Fields!Cases.Value)/Sum(Fields!Cases.Value, "Diagnosis_Code_Analysis")
My results are always 100% for each case (line item). I have tried changing the format to P and even 0.00% but it still reports 100%.
Can someone give me a better method to produce this percentage?
I have the following problem with ROUND. When calculating each value by a percentage using ROUND, the sum of the rounded per-row results does not equal the rounded percentage of the sum of the values.
IF OBJECT_ID('Tempdb..#Redondeo') IS NOT NULL DROP TABLE #Redondeo

CREATE TABLE #Redondeo (Orden int IDENTITY(1,1), Valores money)

INSERT INTO #Redondeo
SELECT 71374.24 UNION
SELECT 16455.92 UNION
SELECT 56454.20 UNION
SELECT 9495.18
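One common fix is to round each row and then push the leftover difference onto one row (usually the last), so the detail adds back to the rounded total. A sketch against #Redondeo; the 16% rate is just an illustrative value:

DECLARE @pct decimal(9,4) = 0.16;

;WITH Calc AS (
    SELECT Orden, Valores, ROUND(Valores * @pct, 2) AS Redondeado
    FROM   #Redondeo
)
SELECT c.Orden,
       c.Valores,
       c.Redondeado
       + CASE WHEN c.Orden = (SELECT MAX(Orden) FROM Calc)
              -- last row absorbs the rounding residual
              THEN ROUND((SELECT SUM(Valores) FROM Calc) * @pct, 2)
                   - (SELECT SUM(Redondeado) FROM Calc)
              ELSE 0
         END AS Ajustado
FROM   Calc AS c;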
The interest rate has been stored in a comments column along with other information (e.g. "mike's student loan is 5% and car payment is $150"). I need to extract the 5% using CONTAINS. Why CONTAINS? Because it's a 1.7M-row dataset and I'm searching for four specific interest rate values (e.g. 3%, 9%, 12% and 15%).
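Full-text word breakers normally strip the % sign, so CONTAINS(comments, '"5%"') ends up searching for just "5". With only four target rates, plain LIKE (escaping % inside brackets) may be simpler. This is a sketch with assumed table/column names (dbo.Loans, Comments); note a pattern like '%3[%]%' would also match "13%" if such values exist:

SELECT l.*,
       CASE
            WHEN l.Comments LIKE '%3[%]%'  THEN '3%'
            WHEN l.Comments LIKE '%9[%]%'  THEN '9%'
            WHEN l.Comments LIKE '%12[%]%' THEN '12%'
            WHEN l.Comments LIKE '%15[%]%' THEN '15%'
       END AS InterestRate
FROM   dbo.Loans AS l
WHERE  l.Comments LIKE '%3[%]%'
   OR  l.Comments LIKE '%9[%]%'
   OR  l.Comments LIKE '%12[%]%'
   OR  l.Comments LIKE '%15[%]%';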
I have tableX with columns colA, colB, colC, colD and there are 2256 rows in the tableX.
I would like to find out the percentages of colA, colB, colC, colD that hold data (where it is not an empty string or NULL value).
So out of 2256 rows in the table, the user has stored data in colA 1987 times, colB 2250 times, colC 2256 times and colD 17 times.
So the report would say:
colA: 88.07% colB: 99.73% colC: 100% colD: 0.75%
We have an application that has a bunch of fields that we believe are not being used and would like to remove them, but we need to prove this by looking at the data.
I know I could run a query, one at a time and change the column name, but this would take a long time as there are a lot of columns in this table. I am hoping there is some way to do this in one query.
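A single pass can produce all four percentages at once; NULLIF turns empty strings into NULL so COUNT skips them, and multiplying by 100.0 first avoids integer division. A sketch:

SELECT COUNT(NULLIF(colA, '')) * 100.0 / COUNT(*) AS colA_pct,
       COUNT(NULLIF(colB, '')) * 100.0 / COUNT(*) AS colB_pct,
       COUNT(NULLIF(colC, '')) * 100.0 / COUNT(*) AS colC_pct,
       COUNT(NULLIF(colD, '')) * 100.0 / COUNT(*) AS colD_pct
FROM   dbo.tableX;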
INSERT INTO MAIN VALUES ('1000', '1/1/2014',3000,1000,700,1500)
INSERT INTO MAIN VALUES ('1000', '3/5/2014',1000,2000,650,200)
INSERT INTO MAIN VALUES ('1000', '5/10/2014',500,5000,375,125)
INSERT INTO MAIN VALUES ('1000', '11/20/2014',100,2000,400,300)
INSERT INTO MAIN VALUES ('1000', '8/20/2014',100,3500,675,1300)
Ok, I'm not quite sure how to approach this one. This is a VB.NET console app in which I want to capture each row and throw it into a table. The reason being, they want a report on what was processed, which I'll be able to do easily in Reporting Services 2005 once this crap is in a table where it should be. 1) What should I use to do this, a dataset? I want to use stored procedures also, not inline SQL. The function here takes an incoming file and splits it up into separate files. I want to insert each row that is successfully split.

Public Sub ProcessFiles(ByVal sIncomingfile As String, ByVal sOutputDirectory As String)
    If sIncomingfile <> "" And sOutputDirectory <> "" Then
        Dim f As New Security.Permissions.FileIOPermission(Security.Permissions.PermissionState.None)
        f.AllLocalFiles = Security.Permissions.FileIOPermissionAccess.Read
        Dim file As New IO.FileInfo(sIncomingfile)
        Dim filefs As IO.FileStream = Nothing
        If file.Exists Then
            Try
                filefs = New IO.FileStream(file.FullName, IO.FileMode.Open) 'Place: 1
            Catch ex As Exception
                SendEmail("Incoming .mnt or .naf Filename Invalid or not found", "Place: 1")
                Application.Exit()
            End Try
        End If
        Dim reader As New IO.StreamReader(filefs)
        Dim counter As Integer = 0
        Dim CurrentFS As IO.FileStream
        Dim CurrentWriter As IO.StreamWriter
        Dim extension As String = IO.Path.GetExtension(file.FullName)
        If extension = ".mnt" Then
            While Not reader.Peek < 0
                Dim Line As String = reader.ReadLine
                If IsNumeric(Line.Substring(0, 1)) Then
                    Dim Parts() As String = Line.Split(" "c) ' split row into parts
                    If Parts(0).Length = 8 Then ' if first part is 8 then know we hit another header so cut and then write to file
                        counter += 1
                        If Not CurrentWriter Is Nothing Then CurrentWriter.Flush() : CurrentWriter.Close()
                        CurrentFS = New IO.FileStream(IO.Path.Combine(IO.Path.GetDirectoryName(sOutputDirectory), Line.Substring(59, 4) & "[" & counter.ToString & "]" & Now.ToString("MM-dd-yyyy") & IO.Path.GetExtension(file.FullName)), IO.FileMode.Create)
                        CurrentWriter = New IO.StreamWriter(CurrentFS)
                    End If
                    If Not CurrentWriter Is Nothing Then CurrentWriter.WriteLine(Line)
                End If
            End While
            If Not CurrentWriter Is Nothing Then CurrentWriter.Flush() : CurrentWriter.Close()
            MoveFilesFTP(sOutputDirectory, "mnt")
        ElseIf extension = ".naf" Then
            While Not reader.Peek < 0
                Dim Line As String = reader.ReadLine
                If Not IsNumeric(Line.Substring(0, 1)) Then ' if first part is not a number, then we know it's a header so split the file
                    counter += 1
                    If Not CurrentWriter Is Nothing Then CurrentWriter.Flush() : CurrentWriter.Close()
                    CurrentFS = New IO.FileStream(IO.Path.Combine(IO.Path.GetDirectoryName(sOutputDirectory), Line.Substring(6, 4) & "[" & counter.ToString & "]" & Now.ToString("MM-dd-yyyy") & IO.Path.GetExtension(file.FullName)), IO.FileMode.Create)
                    CurrentWriter = New IO.StreamWriter(CurrentFS)
                End If
                If Not CurrentWriter Is Nothing Then
                    CurrentWriter.WriteLine(Line)
                End If
            End While
            If Not CurrentWriter Is Nothing Then CurrentWriter.Flush() : CurrentWriter.Close()
            MoveFilesFTP(sOutputDirectory, "naf")
        End If
    Else 'input file not valid
        SendEmail("Incoming .mnt or .naf Filename Invalid", "Place: 1")
    End If
End Sub
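On the database side, one option is a small log table plus a stored procedure that the VB.NET loop calls once per successfully split row (via an ADO.NET SqlCommand with a StoredProcedure command type). This is only a sketch; the table, procedure, and column names are assumptions:

CREATE TABLE dbo.ProcessedRowLog (
    LogID       int IDENTITY(1,1) PRIMARY KEY,
    SourceFile  varchar(260) NOT NULL,
    OutputFile  varchar(260) NOT NULL,
    RowText     varchar(max) NOT NULL,
    ProcessedAt datetime     NOT NULL DEFAULT (GETDATE())
);
GO
CREATE PROCEDURE dbo.LogProcessedRow
    @SourceFile varchar(260),
    @OutputFile varchar(260),
    @RowText    varchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ProcessedRowLog (SourceFile, OutputFile, RowText)
    VALUES (@SourceFile, @OutputFile, @RowText);
END;
GO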
I am trying to get a date diff in a named calculation using an inner join, and I am getting:
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
I have checked online and have been able to get some of my queries right by using a WHERE condition in place of the inner join, but I have not been able to work around the DATEDIFF one:
(SELECT DATEDIFF(day, TDate, [DID])
 FROM   [dv].[dbo].[Dail] AS D
        INNER JOIN [dv].[dbo].[ANT] AS A
                ON a.AID = d.AID)
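A sketch of one way around the error: compute the DATEDIFF as a column of a joined query rather than as a scalar subquery, so multiple Dail rows per AID no longer have to collapse to a single value. I am assuming both TDate and DID live in Dail; qualify them differently if not:

SELECT a.AID,
       d.DID,
       DATEDIFF(DAY, d.TDate, d.[DID]) AS DaysBetween
FROM   [dv].[dbo].[ANT] AS a
       INNER JOIN [dv].[dbo].[Dail] AS d
               ON a.AID = d.AID;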
I've been asked to find the Mode of the loads on trips we run. So I run a count on the [loads] (without grouping, in an analytical function) column, then sort on the highest counts. The problem is when there is a tie. In that case, I need to find the mean of all the tied values. This means that if there is no mode at all, it would simply take the mean of all the values.
Here is sample data:
CREATE TABLE test6 ( trip char(1) NULL, date int NULL, loads int NULL
[Code] ...
And here is what I'd like to see:
CREATE TABLE test7 ( trip char(1) NULL, loads int NULL )
insert into test7 (trip, loads) values ('A', 15)
insert into test7 (trip, loads) values ('B', 17)
insert into test7 (trip, loads) values ('C', 11)
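A sketch of the mode-with-ties logic against the sample table: rank each trip's load values by frequency, then average whatever is ranked first. A single clear mode comes back unchanged; tied values (including the no-mode case, where every frequency ties) come back as their mean:

;WITH Freq AS (
    SELECT trip,
           loads,
           RANK() OVER (PARTITION BY trip ORDER BY COUNT(*) DESC) AS rnk
    FROM   test6
    GROUP  BY trip, loads
)
SELECT trip,
       AVG(CAST(loads AS decimal(10,2))) AS loads
FROM   Freq
WHERE  rnk = 1
GROUP  BY trip;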
I run the following statement and it will not update beyond 7 million plus rows and I have about 38 million to complete. I keep checking updated row counts and after 1/2 day it's still the same so I know something is wrong because it was rolling through no problem when I initiated it. I need to complete ASAP so it's adding to my frustration. The 'Acct_Num_CH' field is an encrypted field (fyi).
SET ROWCOUNT 10000

UPDATE [dbo].[CC_Info_T]
SET    [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE  [Acct_Num_CH] IS NOT NULL

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 10000
    UPDATE [dbo].[CC_Info_T]
    SET    [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE  [Acct_Num_CH] IS NOT NULL
END

SET ROWCOUNT 0
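The likely reason it never advances: every row is being set to the same constant, and the WHERE clause (IS NOT NULL) still matches rows that were already updated, so each batch can keep re-touching the same 10,000 rows indefinitely. A sketch of a version that excludes already-converted rows, using UPDATE TOP instead of the deprecated SET ROWCOUNT:

DECLARE @batch int = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) [dbo].[CC_Info_T]
    SET    [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE  [Acct_Num_CH] IS NOT NULL
      AND  [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v';   -- skip rows already converted

    IF @@ROWCOUNT = 0 BREAK;
END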
I am dealing with around 14,000 rows which need to be put into the SQL destination, but what I see is that the order of the rows in the destination is not the same as in the source.
Hopefully a nice easy one to begin with. I have an access database which I use to build my web pages - I use sql to extract products which meet certain criteria then the pages are automatically generated. I can extract the data without any problem. An example I use is shown below:
=================================================== SELECT * FROM products WHERE products.group = 'LCD-HDTV' AND status = 'LIVE' AND tradeprice > 0 AND description3 > '' ORDER BY diagonal, tradeprice, description3 ===================================================
I now want to get it to work out a discount of 1% on the products shown using the tradeprice field. I've tried many times using the SELECT and SUM functions but am not getting the syntax correct. Any help would be very much appreciated.
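A sketch, reusing the query above: the discount is just an expression in the SELECT list (SUM is only needed when totalling across rows), and multiplying by 0.99 gives the price after a 1% discount.

===================================================
SELECT *, tradeprice * 0.99 AS discountedprice
FROM products
WHERE products.group = 'LCD-HDTV' AND status = 'LIVE' AND tradeprice > 0 AND description3 > ''
ORDER BY diagonal, tradeprice, description3
===================================================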
I have a small requirement in my project. I need to display the results of the WHERE clause based on the percentage/ranking of an exact match.
I mean the result set should be displayed based on percentage match.
For example i have the below table.
create table test ( id int identity(1,1) primary key, ename varchar(10) )
insert into test(ename) select 'REG'
insert into test(ename) select 'xyz'
insert into test(ename) select 'abc'
insert into test(ename) select 'Reg'
insert into test(ename) select 'Regsxysn'
insert into test(ename) select 'psReg'
I need the output to be something similar to the below:
REG
Reg
Regsxysn
psReg
I have tried full-text indexing but I couldn't get the required output.
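Without full-text, a simple CASE-based rank (exact match first, then "starts with", then "contains") gives the ordering shown above. A sketch; the @search value is just the example term:

DECLARE @search varchar(10) = 'REG';

SELECT ename
FROM   test
WHERE  ename LIKE '%' + @search + '%'
ORDER  BY CASE
              WHEN ename = @search          THEN 1   -- exact (case-insensitive on a default collation)
              WHEN ename LIKE @search + '%' THEN 2   -- starts with
              ELSE 3                                 -- contains
          END,
          LEN(ename);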
I need to resolve this calculation, which I believe is something very common in SSAS environments.
Like many companies, my company has different ways of calculating sales, and the two I want to focus on are Sales Gross and Sales Net.
At a high level, we calculate Sales Gross as Sales with returns, and Sales Net as Sales without returns.
We have an attribute called Order Type that has various types of orders a user can execute with my company. One of them is Returns. If you return something back to us, we record that as a return line on the sales table. With that, we can calculate that return, breaking data down by Order Type, such as:
Order Type Line Total
Mail Orders $ 776,655.44
Internet Orders $ 2,211,334.00
Call Center Orders $ 11,223,344.00
Credit Orders $ (55,666.00)
Today, to calculate Sales Gross and Sales Net, we are creating two dimensions: DimSalesGross and DimSalesNet.
To calculate Sales Gross, we leave the data at the natural state, not making any changes to mappings.
To calculate Sales Net, we map Credit Orders to Call Center Orders at the ETL level, getting a net value for sales (Orders - Returns); however, I doubt this is the correct way of doing it.
I would like to have a Line Total Net / Line Total Gross calculation, which would be based on the Order Type value.
Perhaps using a CASE statement in MDX? Is the above possible?
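An MDX calculated member is indeed one option; another is to keep the base fact untouched and express the remapping in a view (or in the DSV) so that Gross and Net come from the same rows. This is only a relational-level sketch of that idea, with assumed table and column names, and it relies on Credit Orders carrying negative amounts as in the sample:

-- Gross: natural grouping, Credit Orders shown as their own (negative) line
SELECT OrderType, SUM(LineTotal) AS LineTotalGross
FROM   dbo.FactSales
GROUP  BY OrderType;

-- Net: fold Credit Orders (returns) into Call Center Orders, as the ETL does today
SELECT CASE WHEN OrderType = 'Credit Orders' THEN 'Call Center Orders' ELSE OrderType END AS OrderType,
       SUM(LineTotal) AS LineTotalNet
FROM   dbo.FactSales
GROUP  BY CASE WHEN OrderType = 'Credit Orders' THEN 'Call Center Orders' ELSE OrderType END;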
I have a table with one or more nullable fields. When an INSERT or UPDATE leaves one or more of these fields NULL, either explicitly or implicitly, is there a way I can set them to non-null values without interfering with the INSERT or UPDATE as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded to a real date, preferably the value of GETDATE() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left null or explicitly set to NULL. The same would apply for any UPDATE where DateModified is not specified or explicitly set to NULL; I would want to change it so that DateModified is not null on any UPDATE.
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
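A DEFAULT constraint covers the implicit case (the column simply omitted, as in the second INSERT); for the explicit-NULL case, a lightweight AFTER trigger can patch just that column without replacing the whole statement the way an INSTEAD OF trigger would. This is a sketch that assumes ID is the key and that DateModified exists as described (direct trigger recursion is off by default, so the self-update does not re-fire):

ALTER TABLE dbo.MYTABLE
    ADD CONSTRAINT DF_MYTABLE_DateAdded DEFAULT (GETDATE()) FOR DateAdded;
GO
CREATE TRIGGER trg_MYTABLE_DateAdded ON dbo.MYTABLE
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- only rows that arrived with NULL get patched; everything else is untouched
    UPDATE t
    SET    t.DateAdded = GETDATE()
    FROM   dbo.MYTABLE AS t
           JOIN inserted AS i ON i.ID = t.ID
    WHERE  t.DateAdded IS NULL;
END;
GO
CREATE TRIGGER trg_MYTABLE_DateModified ON dbo.MYTABLE
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp every updated row so DateModified is never left NULL
    UPDATE t
    SET    t.DateModified = GETDATE()
    FROM   dbo.MYTABLE AS t
           JOIN inserted AS i ON i.ID = t.ID;
END;
GO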
and I need to create a session temp table (e.g. ##output) that translates the calculation (NewAmt - OldAmt) into categories such as
"decrease -201 to -500" "decrease -1 to -200" "no change" "increase 1 to 200" "increase 201 to 500"
so that my final output would look like this:
ID NewPer NewAmt OldPer OldAmt Change ChangeCategory
334 1/07/08 200 22/01/08 200 0 no change
2396 1/07/08 4000 10/12/07 3600 400 increase 201 to 500
7650 1/07/08 1100 07/07/06 1200 -100 decrease -1 to -200
...
I understand how to add the "Change" column to my temp output table, but am struggling with the ChangeCategory column - can someone point me in the right direction?
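The ChangeCategory column can come straight from a CASE over the same difference. A sketch; the source table name is an assumption:

SELECT ID,
       NewPer, NewAmt, OldPer, OldAmt,
       NewAmt - OldAmt AS Change,
       CASE
            WHEN NewAmt - OldAmt BETWEEN -500 AND -201 THEN 'decrease -201 to -500'
            WHEN NewAmt - OldAmt BETWEEN -200 AND -1   THEN 'decrease -1 to -200'
            WHEN NewAmt - OldAmt = 0                   THEN 'no change'
            WHEN NewAmt - OldAmt BETWEEN 1 AND 200     THEN 'increase 1 to 200'
            WHEN NewAmt - OldAmt BETWEEN 201 AND 500   THEN 'increase 201 to 500'
       END AS ChangeCategory
INTO   ##output
FROM   dbo.SourceData;   -- assumed source table name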
I am about to rebuild all my indexes on a database that is very heavily fragmented. Looking at the report, it seems that 80% or more of the tables are 90%+ fragmented.
My understanding is that the fill factor value is used for performance reasons. Our shiny new backend SAN is 100% SSD. If solid state can provide sub-millisecond response times, is fill factor still necessary at the cost of the additional space being used?
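Fill factor is about page splits on writes rather than read latency, so SSD does not make it irrelevant by itself; it mostly shrinks the cost of reading whatever fragmentation does occur. On write-heavy indexes a modest fill factor can still reduce splits, at the price of more space. A rebuild sketch; the table name and the value 90 are only illustrative:

ALTER INDEX ALL ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 90);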