I'm trying to create a query to return Open, Close, Max and Min Price for each 1 minute interval. Source data has two fields - Price, and Datestamp at 5 second intervals.
I can calculate the Max and Min (below) and set the datestamp to the middle of the interval, but get stuck on how to also return the Open and Close price for each interval.
SELECT MAX(price) AS MaxPrice, MIN(price) AS MinPrice,
       DATEADD(ss, 30, DATEADD(n, DATEDIFF(n, '1/1/2006', DateStamp), '1/1/2006')) AS DateStamp
FROM MasterData
GROUP BY DATEDIFF(n, '1/1/2006',DateStamp)
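For the Open and Close, one hedged sketch (SQL Server 2005 or later; MasterData and its columns come from the query above, while the CTE and alias names are mine) is to number the rows in each minute bucket from both ends with ROW_NUMBER() and pick the first and last:

WITH ordered AS (
    SELECT price,
           DATEDIFF(n, '1/1/2006', DateStamp) AS minute_bucket,
           ROW_NUMBER() OVER (PARTITION BY DATEDIFF(n, '1/1/2006', DateStamp) ORDER BY DateStamp ASC)  AS rn_open,
           ROW_NUMBER() OVER (PARTITION BY DATEDIFF(n, '1/1/2006', DateStamp) ORDER BY DateStamp DESC) AS rn_close
    FROM MasterData
)
SELECT MAX(price) AS MaxPrice,
       MIN(price) AS MinPrice,
       MAX(CASE WHEN rn_open  = 1 THEN price END) AS OpenPrice,  -- earliest tick in the minute
       MAX(CASE WHEN rn_close = 1 THEN price END) AS ClosePrice, -- latest tick in the minute
       DATEADD(ss, 30, DATEADD(n, minute_bucket, '1/1/2006'))    AS DateStamp
FROM ordered
GROUP BY minute_bucket;

The MAX(CASE ...) wrappers just lift the single flagged row per group past the GROUP BY; any aggregate would do, since only one row per bucket has rn = 1.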
I have a situation where I have a table with over a billion records that needs to be scrubbed. The table has a datetime field. I have been deleting rows from the table using the script below, which basically produces delete statements by date for records older than 90 days.
But now each day's row count is over 30 million rows, and it takes forever to delete by date, and the transaction log becomes humongous.
So I would like to scrub it in 5-minute intervals instead of daily for records older than 90 days. Even in 5-minute intervals the record count tends to be around a million. This will keep the delete slice small enough not to create a gigantic transaction log.
declare @startdate datetime
declare @enddate datetime
set @startdate = getdate() - 480
set @enddate = getdate() - 90

WHILE (@startdate < @enddate)
BEGIN
    print 'delete from vending where DetectedDate < ''' + CONVERT(varchar(10), @startdate, 101) + ''''
    set @startdate = @startdate + 1
END
I am hoping to modify the script above to produce a script with statements like this for a window between last 90 and 120 days:
delete from vending where DetectedDate < '6/15/2015 8:25:00 PM'
go
delete from vending where DetectedDate < '6/15/2015 8:30:00 PM'
go
delete from vending where DetectedDate < '6/15/2015 8:35:00 PM'
go
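A hedged sketch of that modification, stepping in 5-minute increments rather than whole days (the table and column names come from the script above; the 120-day lower bound is an assumption to adjust):

declare @startdate datetime
declare @enddate datetime
set @startdate = getdate() - 120   -- assumed lower bound of the scrub window
set @enddate = getdate() - 90

WHILE (@startdate < @enddate)
BEGIN
    set @startdate = DATEADD(minute, 5, @startdate)
    print 'delete from vending where DetectedDate < ''' + CONVERT(varchar(25), @startdate, 120) + ''''
    print 'go'
END

Style 120 keeps the time of day in the literal, so each generated statement deletes exactly one 5-minute slice beyond the previous one.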
I have two fields, ID and LogData, where LogData is a 96-character string of numbers representing 15-minute intervals from midnight to midnight.
I need to convert these 96 characters to a full 1440 characters, which means taking each of the 96 characters one by one and turning 1 character into 15.
I had a VB macro to do the conversion, but now it's broken and I can't fix it. Getting it done in SQL would solve a lot of problems.
I then work from the 1440 characters and do log analysis, like total time doing a specific activity, but my query depends on having all 1440 characters.
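A hedged sketch in T-SQL, assuming the data lives in a hypothetical table dbo.LogTable(ID, LogData); a 96-row numbers table walks the string, REPLICATE stretches each character, and FOR XML PATH('') glues the pieces back together:

SELECT t.ID,
       (SELECT REPLICATE(SUBSTRING(t.LogData, n.n, 1), 15)   -- 1 character becomes 15
        FROM (SELECT TOP (96) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
              FROM sys.all_objects) AS n
        ORDER BY n.n
        FOR XML PATH('')) AS LogData1440
FROM dbo.LogTable AS t;

Since LogData is all digits there is nothing for the XML concatenation to escape, so the result is a clean 1440-character string.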
Hello. Probably a very simple problem, but I'm stumped. I have a table which gives the start time and end time of an employee's work day. I want to create a view which contains a row of data for each 5-minute period worked. Please help.
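A hedged sketch, assuming a hypothetical table dbo.WorkDay(EmpID, StartTime, EndTime); a numbers table joined against each shift generates one row per 5-minute slice:

CREATE VIEW dbo.FiveMinutePeriods
AS
SELECT w.EmpID,
       DATEADD(minute, 5 * n.n, w.StartTime)       AS PeriodStart,
       DATEADD(minute, 5 * (n.n + 1), w.StartTime) AS PeriodEnd
FROM dbo.WorkDay AS w
JOIN (SELECT TOP (288) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
      FROM sys.all_objects) AS n   -- 288 five-minute slots cover 24 hours
  ON DATEADD(minute, 5 * n.n, w.StartTime) < w.EndTime;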
I have collected perfmon data that comes in every 15 seconds. I need to run a query that will only return rows that are 5 minutes apart, starting at a specific date/time.
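A hedged sketch, assuming a hypothetical table dbo.PerfmonData with a CounterDateTime column: bucket the rows into 5-minute (300-second) windows measured from the anchor and keep the first row of each bucket, which tolerates samples that don't land on exact 5-minute marks:

DECLARE @anchor datetime;
SET @anchor = '2015-04-09 09:00:00';   -- the specific starting date/time

WITH buckets AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY DATEDIFF(second, @anchor, CounterDateTime) / 300
                              ORDER BY CounterDateTime) AS rn
    FROM dbo.PerfmonData
    WHERE CounterDateTime >= @anchor
)
SELECT *
FROM buckets
WHERE rn = 1;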
I have an employee who received an initial starting bonus of $50k. This value will be static from day 1 to day n and will never change. I want to see the initial starting bonus at the employee level, but any level above the employee dimension will need to aggregate the starting bonus. Is there an easy way to do this?
If I just look at the data from an employee's perspective, I can do this by making the measure a Min, Max, or Avg aggregate function. But if, for instance, I want to view the data from the perspective of departments, it would need to sum() the data instead (which min/max/avg don't do).
If I make the starting bonus a member property of the employee, and a calculated measure off the member property, it aggregates the data when it shouldn't.
What I am trying to do is count persons in two buckets, "non-recidivists" and "recidivists", based on how many bkg_nbr they have per STATE_NBR. If they have more than 1 bkg_nbr per STATE_NBR, put them in the "recidivists" bucket. If they only have a 1-to-1 relationship, put them in the "non-recidivists" bucket.
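A hedged sketch, assuming a hypothetical bookings table with PERSON_ID, STATE_NBR and BKG_NBR columns; classify each person/state pair first, then count per bucket:

SELECT bucket,
       COUNT(*) AS persons
FROM (SELECT PERSON_ID,
             STATE_NBR,
             CASE WHEN COUNT(DISTINCT BKG_NBR) > 1
                  THEN 'recidivists'
                  ELSE 'non-recidivists'
             END AS bucket
      FROM bookings
      GROUP BY PERSON_ID, STATE_NBR) AS classified
GROUP BY bucket;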
How can I select data within a financial year? The financial year begins in July (7) and ends in June (6).
Let's say I want all data from the beginning of the financial year (7) to January (1), so I would select all data between 7 and 1 (a 6-month period).
And let's say I want all data from the beginning of the financial year (7) to October (10), so I would just select all data between 7 and 10 (a 3-month period).
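A hedged sketch: map each calendar month onto a fiscal month number (July = 1, ..., June = 12) and filter on that; the table and column names here are hypothetical:

DECLARE @fy_month_to int;
SET @fy_month_to = 7;   -- January is fiscal month 7 under a July start

SELECT *
FROM dbo.SalesData      -- hypothetical table with a txn_date column
WHERE ((MONTH(txn_date) + 5) % 12) + 1 <= @fy_month_to;

The expression ((m + 5) % 12) + 1 sends July to 1 and June to 12, so a single <= comparison works even when the range wraps past December.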
Table Name: EmployeeDetails
Columns: EmpID - Date - WorkedHours
For each day I get details of number of hours worked by each employee in this table.
Now my HR wants a report with such columns
empid - Week - Month - Qtr
So, Week will have the sum of hours worked by the employee in that week, Month will have the sum of hours worked in that month, and Qtr will have the sum of hours worked in that quarter.
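A hedged sketch using window aggregates (SQL Server 2005 or later), keeping the table and columns from above; the DATEPART partitions are the moving parts:

SELECT DISTINCT
       EmpID,
       DATEPART(week, [Date]) AS Wk,
       SUM(WorkedHours) OVER (PARTITION BY EmpID, YEAR([Date]), DATEPART(week, [Date]))    AS WeekHours,
       SUM(WorkedHours) OVER (PARTITION BY EmpID, YEAR([Date]), MONTH([Date]))             AS MonthHours,
       SUM(WorkedHours) OVER (PARTITION BY EmpID, YEAR([Date]), DATEPART(quarter, [Date])) AS QtrHours
FROM dbo.EmployeeDetails;

One caveat: a week that straddles a month boundary will show up on two rows (one per month slice), so decide how HR wants those edge weeks attributed before shipping the report.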
I want to build a data import process with SSIS, sourcing Hyperion Financial Management. To my knowledge there was a Star Integration Server (from Star Analytics, acquired by IBM in Feb 2013) that did the extraction job and could be used in SSIS.
As this product is no longer available, how can this be done?
I have a SQL table filled with millions of records, and the data is at one-minute intervals. It looks like:
RowDateTime Meter 1 Meter 2 Meter 3
25/05/2006 02:49:00 1220 450 489
25/05/2006 02:50:00 1223 470 500
25/05/2006 02:51:00 1227 490 511
25/05/2006 02:52:00 1230 510 522
25/05/2006 02:53:00 1233.5 530 533
25/05/2006 02:54:00 1236.9 550 544
25/05/2006 02:55:00 1240.3 570 555
25/05/2006 02:56:00 1243.7 590 566
I want to query the table above and convert the data to hourly, aggregating Meter 1, Meter 2 and Meter 3 to the hourly average. I want to import all the hourly data into a new table that will look like:
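A hedged sketch (the source and target table names are assumptions; swap AVG for SUM if totals are wanted instead): truncate each timestamp to the hour with the DATEADD/DATEDIFF idiom and aggregate:

SELECT DATEADD(hour, DATEDIFF(hour, 0, RowDateTime), 0) AS HourStart,
       AVG([Meter 1]) AS Meter1,
       AVG([Meter 2]) AS Meter2,
       AVG([Meter 3]) AS Meter3
INTO dbo.HourlyMeterData            -- creates the new table; use INSERT...SELECT if it already exists
FROM dbo.MeterData                  -- hypothetical source table name
GROUP BY DATEDIFF(hour, 0, RowDateTime);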
Just wondering what is the best way to ensure that we only return data when two datetime fields, compared to each other, fall within the same minute.
As in the following should return the data:
'2015-04-09 09:00:20' compared to '2015-04-09 09:00:50'
And the following should not return the data:
'2015-04-09 09:01:20' compared to '2015-04-09 09:00:50'
The problem is that I'm merging data from three different result sets, which all have data for every minute; however, the timestamps can differ by seconds or milliseconds.
So, I'm only interested to return the data when the two fields that I'm comparing are equal within a minute. I need to ignore seconds and milliseconds.
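A hedged sketch of the comparison (the result-set names and the stamp column are hypothetical): truncate both sides to the minute with the DATEADD/DATEDIFF idiom, then compare for equality:

SELECT a.*
FROM resultset_a AS a
JOIN resultset_b AS b
  ON DATEADD(minute, DATEDIFF(minute, 0, a.stamp), 0)
   = DATEADD(minute, DATEDIFF(minute, 0, b.stamp), 0);

Note this matches the examples above because it compares clock minutes, not a rolling 60-second distance: '09:00:20' and '09:00:50' both truncate to '09:00', while '09:01:20' truncates to '09:01' and fails the join.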
I need some help aggregating values in a single table, where neither a simple Sum() nor a simple First() function will do... Would like to do Sum(First()) but that's not allowed!
Sample dataset (select * from cs_view):
Gender | Program | Student | Class_Section | Heads | Credits
------ | ------- | ------- | ------------- | ----- | -------
Female | English | Elena   | Phys 101-b    | 1     | 4
Female | English | Elena   | Hist 101-c    | 1     | 4
Female | English | Elena   | Engl 101-a    | 1     | 4
Female | English | Elena   | Engl 105-b    | 1     | 4
Male   | History | Rich    | Phys 105-a    | 1     | 4
Male   | History | Rich    | Engl 101-c    | 1     | 4
Male   | History | Rich    | Hist 101-b    | 1     | 4
Male   | History | Jacob   | Phys 101-a    | 1     | 4
Male   | History | Jacob   | Hist 101-b    | 1     | 4
Male   | History | Jacob   | Engl 101-c    | 1     | 4
Male   | History | Jacob   | Phys L-101-a  | 1     | 0
Dataset has one row per student enrollment in class section. No trouble summing credits by student or by program (or gender). HOWEVER, aggregate head-count should add each student only once.
Desired table:

Gender | Program | Heads | Credits
------ | ------- | ----- | -------
Female | English | 1     | 16
Male   | History | 2     | 24
       |         | 3     | 40

If I add a third grouping level, that is, add a student-level grouping to the desired table, First(Fields!Heads.Value) will return the correct student-level head count; however, I don't know how to sum up the student-level group header rows ('subtotal' rows) to aggregate head count by gender or by program.
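One hedged alternative is to sidestep the SSRS aggregate limitation and pre-aggregate in the dataset query itself, since cs_view is already available; COUNT(DISTINCT Student) counts each student exactly once:

SELECT Gender,
       Program,
       COUNT(DISTINCT Student) AS Heads,
       SUM(Credits)            AS Credits
FROM cs_view
GROUP BY Gender, Program;

Against the sample data this returns Female/English with 1 head and 16 credits, and Male/History with 2 heads and 24 credits, matching the desired table.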
I am using the following query (which works fine):
select min(timex) as start_date ,end_date ,entityid ,entityname ,locationid
[code]....
However, I would like to avoid using the delta (it takes effort to calculate and populate it); instead, I am wondering if there is any way to calculate it as part of running the query.
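Since the full query is elided above, here is only a hedged sketch of the general idea: use LAG() (SQL Server 2012 or later) to flag a location change on the fly instead of a precomputed delta. The base table name is hypothetical; the columns follow the fragment above:

SELECT entityid,
       entityname,
       locationid,
       timex,
       CASE WHEN LAG(locationid) OVER (PARTITION BY entityid ORDER BY timex) = locationid
            THEN 0
            ELSE 1   -- location changed (the first row per entity also lands here)
       END AS delta
FROM location_history;   -- hypothetical base table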
Problem 2
I have the following table which shows the location of different people at 1 hour intervals
I'm facing a big problem in my current report development task... however, what I need to do is (in my opinion) so basic that there has to be a way to do this...
What it's all about:
My customer needs a report in which different measures are shown in rows of a matrix table. Columns are reserved for the months, simulating a sort of calendar layout.
However, the matrix has one additional group in its rows, and the whole matrix is contained within a list.
A possible situation could be the following:
Coca Cola
Jan Feb Mar Apr May June July Aug Sep Oct Nov Dec TOT %
The problem is those percentages in red. They are calculated on the total amount of BRU/NET without distinguishing the first group in the table (PM or IN). If I set the value for the textbox to the following expression:
=(sum(Fields!value.Value)/sum(Fields!value.Value,"Level2"))*100
where Level2 is the detail group specified in the list (in our case Coca Cola or Pepsi Cola), Reporting Services sums the value field without making the distinction of which type of value it is dealing with...
My question is now rather obvious: is this calculation possible, and if yes, how? I've thought about writing a custom method in Reporting Services (no assembly), but I have no idea how I can 1) access the dataset, 2) specify the name of the measure for which the calculation needs to be done, and 3) pass the current detail of the list item, to be able to apply the necessary filters on the dataset to perform the aggregation...
I have what I thought would be a simple problem, maybe my approach is all wrong!
Say you have customers, registered in a year/month group. They can be active in another year/month group.
So we get a simple table with registration year/month down the side and activity along the top.
This shows "of all the people p who register in month m, how many are active in month a?"
So assuming that all people registered in month m are also active in month m, the max(activity) for that slice (i.e. the max value for any active month for the registered month) should be the value we get if we collapse the active-month group to active-year.
Let me try to draw it :D
Registered/Active | 2006: Jan Feb Mar ... | 2007: Jan Feb Mar ... | ...
2006 Jan          | 20  10  10   20       | 5   3   1   5         | 20
2006 Feb          | 0   25  15   25       | 8   5   5   8         | 25
2006 Mar          | 0   0   40   40       | 11  1   6   11        | 40
2006 ...          | 20  35  65   65       | 24  9   12  24        | 85
2007 Jan          | 0   0   0    0        | 50  34  44  34        | 50
2007 Feb          | 0   0   0    0        | 0   45  40  41        | 45
2007 ...          | 20  35  65   65       | 50  79  44  75        | 95
.....             | 20  35  65   65       | 74  88  56  99        | 180
I think. The ...'s represent the grouping band total.
I've built a cube which has all the measures and dimensions I want; unfortunately the numbers are off: the product price is aggregated rather than simply being fetched from the underlying OLE DB destination file in the data warehouse. So if product x costs 10 USD and has been purchased 3 times in a certain month by a customer, the cube shows a product price of 30 USD. I've switched the measure's usage from SUM to no aggregations, but I still don't get the simple value list without aggregation. Why is this happening?
The actual schema I'm working against is proprietary and also adds more complication to the problem I'm trying to solve. So to solve this problem, I created a mock schema that is hopefully representative. See below for the mock schema, test data, my initial attempts at the query and the expected results.
-- greatly simplified schema that makes as much sense as the real schema
CREATE TABLE main (
    keyvalue INT NOT NULL PRIMARY KEY,
    otherdata VARCHAR(10)
);
CREATE TABLE dates (
    datekeyvalue INT NOT NULL,
    keyvalue INT NOT NULL,
    datevalue DATE NULL,
    PRIMARY KEY (datekeyvalue, keyvalue)
);
CREATE TABLE payments (
    datekeyvalue INT NOT NULL,
    keyvalue INT NOT NULL,
    paymentvalue INT NULL,
    PRIMARY KEY (datekeyvalue, keyvalue)
);
[Code] ....
Desired results:
SELECT 1 AS keyvalue, 'first row' AS otherdata, '2015-09-25' AS nextdate, 30 AS next_payment
UNION ALL
SELECT 2, 'second row', '2015-10-11', 150
UNION ALL
SELECT 3, 'third row', NULL, NULL
I know I'm doing something wrong in the last query and I believe another sub-query is needed?
Let me answer a few questions in advance:
Q: This schema looks horrible! A: You don't know the half of it. I cleaned it up considerably for this question.
Q: Why is this schema designed like this? A: Because it's a 3rd-party mainframe file dump being passed off as a relational database. And, no, I can't change it.
Q: I hope this isn't a frequently-run query against a large, high-activity database in which performance is mission-critical. A: Yes, it is, and I left out the part where both the date and the amount are actually characters and have to pass through TRY_CONVERT (because I know how to do that part).
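For what it's worth, a hedged sketch of one way to reach those desired results; it assumes "next" means the earliest datevalue on or after today, which is an assumption since the defining rule wasn't shown:

SELECT m.keyvalue,
       m.otherdata,
       nd.datevalue   AS nextdate,
       p.paymentvalue AS next_payment
FROM main AS m
OUTER APPLY (SELECT TOP (1) d.datekeyvalue, d.datevalue
             FROM dates AS d
             WHERE d.keyvalue = m.keyvalue
               AND d.datevalue >= CAST(GETDATE() AS date)   -- assumed definition of "next"
             ORDER BY d.datevalue) AS nd
LEFT JOIN payments AS p
       ON p.keyvalue = m.keyvalue
      AND p.datekeyvalue = nd.datekeyvalue;

OUTER APPLY plays the role of the extra correlated sub-query: it keeps main rows with no qualifying date (returning NULLs, as in the third desired row) and carries datekeyvalue along so the matching payment can be joined without a second scan.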
Is there some interval function for dates? I want to group my data in intervals of 15 minutes and 30 minutes. Is there such a function in T-SQL?
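Older versions of T-SQL have no dedicated bucketing function, but the usual DATEADD/DATEDIFF idiom gets the same effect; a hedged sketch with hypothetical table and column names:

-- group into 15-minute buckets (replace 15 with 30 for half-hour buckets)
SELECT DATEADD(minute, (DATEDIFF(minute, 0, event_time) / 15) * 15, 0) AS bucket_start,
       COUNT(*) AS row_count
FROM dbo.Events
GROUP BY (DATEDIFF(minute, 0, event_time) / 15) * 15;

The integer division truncates each timestamp down to its bucket boundary, and DATEADD turns the bucket number back into a datetime for display.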
I am working on a line chart. The variable on the X-axis is DateTime. The requirement is to have the values displayed at an interval of 4 hours (or maybe 5 or 6 hours; basically at regular intervals). I am using a list control with subreports to show around 10 reports.
How can I do this? The Major and Minor intervals only help in separating which samples will be displayed, but we cannot configure them for regular intervals. Please help.
Does anyone have financial functions to be run in SQL Server 2000? For example, future value, interest rate, payments, and so on. Or where can I find them on the Internet?
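SQL Server 2000 ships with no built-in financial functions, but the simple ones are easy to write as user-defined functions; a hedged sketch for future value (the function name is mine):

-- FV = PV * (1 + r)^n, compounded once per period
CREATE FUNCTION dbo.FutureValue (@pv float, @rate float, @periods int)
RETURNS float
AS
BEGIN
    RETURN @pv * POWER(1 + @rate, @periods)
END

For example, SELECT dbo.FutureValue(1000, 0.05, 10) returns roughly 1628.89. Payment functions follow the same pattern from the standard annuity formulas, though solving for an interest rate needs an iterative loop.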
I'm running into 'CLS-compliance' problems trying to use characters such as "+" and "-" in reporting fields. For example, Standard and Poor's ratings use these characters to indicate a notch between main letter rating categories. I have reports that need to display distributions of these various ratings in column charts. However, the X-axis labels refuse to print because of these characters!
Any suggestions? This is an awfully common type of labelling in this business!
I want to monitor any suspicious financial transaction which takes place in a bank through electronic transfer. There are three tables: Customers, Account and transaction_type. How can I write SQL to report the following:
Detect an outbound electronic transfer that is unusually high compared to a set threshold. For each customer, generate alerts if any outbound electronic transfer exceeds the threshold.
Detect electronic transfers that are high compared to a set threshold. For each customer, generate alerts if any set of the last 5 outbound electronic transfers exceeds the set threshold.
Detect electronic transfers that are high compared to the historical behavior of the customer. For each customer, generate alerts if any set of the last 5 electronic transfers exceeds (the average of all sets of 5 outbound electronic transfers + 2 standard deviation points).
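A hedged sketch covering the threshold and historical-behavior rules; since the real column layout isn't given, a hypothetical dbo.Transactions(customer_id, txn_type, amount, txn_date) table stands in for the join across the three tables:

DECLARE @threshold money;
SET @threshold = 10000;   -- assumed alert threshold

WITH outbound AS (
    SELECT customer_id, amount, txn_date,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY txn_date DESC) AS rn,
           AVG(amount)   OVER (PARTITION BY customer_id) AS hist_avg,
           STDEV(amount) OVER (PARTITION BY customer_id) AS hist_sd
    FROM dbo.Transactions
    WHERE txn_type = 'ELECTRONIC_OUT'
)
SELECT customer_id, amount, txn_date,
       CASE WHEN amount > @threshold THEN 'over threshold'
            ELSE 'over historical norm (avg + 2 std dev)'
       END AS alert
FROM outbound
WHERE rn <= 5   -- look at each customer's last 5 outbound transfers
  AND (amount > @threshold OR amount > hist_avg + 2 * hist_sd);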
Ok, I know that there is a very smart programmer out there who can resolve my issue.
I am trying to calculate time worked by 15 minute intervals.
Example:
Emp 1 started work at 13:00:00 and worked 183 minutes
Emp 2 started work at 17:15:00 and worked 150 minutes
Emp 3 started work at 08:30:00 and worked 17 minutes
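A hedged sketch, assuming the data sits in a hypothetical table dbo.Shifts(EmpID, StartTime datetime, MinutesWorked int); a zero-based numbers table explodes each shift into 15-minute buckets, with a partial final bucket:

SELECT s.EmpID,
       DATEADD(minute, 15 * n.n, s.StartTime) AS BucketStart,
       CASE WHEN s.MinutesWorked - 15 * n.n >= 15
            THEN 15
            ELSE s.MinutesWorked - 15 * n.n
       END AS MinutesInBucket
FROM dbo.Shifts AS s
JOIN (SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
      FROM sys.all_objects) AS n
  ON 15 * n.n < s.MinutesWorked;

For Emp 1 this yields twelve full 15-minute buckets from 13:00:00 onward plus a final 3-minute bucket starting at 16:00:00, accounting for the 183 minutes.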
I have a client that collects data from a manufacturing facility at one-minute intervals. I already have SQL statements to convert the 1-minute data to other timeframes (e.g. 30 min, 60 min, daily). However, now the client wants to look at data converted to irregular time intervals. For example, instead of looking at the first, second, third, etc. 60 minutes of a day, they wish to see data grouped irregularly: first 30 minutes, next 1 hr & 45 min, next 2 hours, next 1 hr & 30 min, etc. These irregular intervals could change; they may later want to look at the first hr, next 2 1/2 hrs, next 1/2 hour, etc., or whatever strikes their fancy.
So far, all I've come up with is run one query for each desired time session and then do a join on all the resulting tables. Anybody have a better idea on how to do this?
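One hedged alternative: drive the grouping from a small interval table and do a single range join, so a change of mind only means changing rows in that table rather than rewriting queries (all names here are hypothetical):

CREATE TABLE dbo.ReportIntervals (
    IntervalID int      NOT NULL PRIMARY KEY,
    StartTime  datetime NOT NULL,
    EndTime    datetime NOT NULL    -- exclusive upper bound
);

SELECT i.IntervalID,
       i.StartTime,
       i.EndTime,
       SUM(r.reading_value) AS total_value   -- or AVG, per measure
FROM dbo.ReportIntervals AS i
JOIN dbo.MinuteData AS r                     -- the existing 1-minute data
  ON r.reading_time >= i.StartTime
 AND r.reading_time <  i.EndTime
GROUP BY i.IntervalID, i.StartTime, i.EndTime;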