Procedure Intermediate New Old Total avg
Proc1 6 0 0 6 2.000000
Proc2 74 13 0 87 29.000000
Proc3 29 0 0 29 9.666666
Proc4 16 0 0 16 5.333333
And I want to calculate the average dynamically rather than dividing the total by the number of columns every time; I want to divide only by the number of non-zero values per row across Intermediate, New and Old. So row 1's average would just be itself (6 / 1 = 6), row 2 would be (74 + 13) / 2 = 43.5, and so on.
;with cte as
(
select
f.[Procedure],
SUM(case when f.old_new = 'O' then f.Value else 0 end) "Intermediate",
SUM(case when f.old_new = 'Y' then f.Value else 0 end) "New",
SUM(case when f.old_new = 'N' then f.Value else 0 end) "Old"
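One way to get the "divide by the number of non-zero values" behaviour is to count the non-zero columns with CASE expressions and use NULLIF so an all-zero row divides by NULL instead of raising an error. This is only a sketch: the post's CTE is cut off before its FROM clause, so it assumes the CTE ends up returning one row per [Procedure] with the Intermediate, New and Old columns shown above.

select
    [Procedure],
    Intermediate + [New] + Old AS Total,
    (Intermediate + [New] + Old) * 1.0
        / nullif(  case when Intermediate <> 0 then 1 else 0 end
                 + case when [New]        <> 0 then 1 else 0 end
                 + case when Old          <> 0 then 1 else 0 end, 0) AS AvgNonZero   -- NULLIF avoids divide-by-zero
from cte;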
Hi, I am trying to add a staggered running total and average to a query returning quarterly CPI data. I need to add 4 quarterly data points together to calculate a moving 12-month sum (YrCPI), and then, to complicate things, calculate a moving average of the 12-month figure (AvgYrCPI).

Given the sample data:

CREATE TABLE [dbo].[QtrInflation] (
    [Qtr] [smalldatetime] NOT NULL,
    [CPI] [decimal](8, 4) NOT NULL
) ON [PRIMARY]
GO
INSERT INTO QtrInflation (Qtr, CPI)
SELECT '1960-03-01', 0.7500 UNION
SELECT '1960-06-01', 1.4800 UNION
SELECT '1960-09-01', 1.4600 UNION
SELECT '1960-12-01', 0.7200 UNION
SELECT '1961-03-01', 0.7100 UNION
SELECT '1961-06-01', 0.7100 UNION
SELECT '1961-09-01', -0.7000 UNION
SELECT '1961-12-01', 0.0000 UNION
SELECT '1962-03-01', 0.0000 UNION
SELECT '1962-06-01', 0.0000 UNION
SELECT '1962-09-01', 0.0000 UNION
SELECT '1962-12-01', 0.0000 UNION
SELECT '1963-03-01', 0.0000 UNION
SELECT '1963-06-01', 0.0000 UNION
SELECT '1963-09-01', 0.7100 UNION
SELECT '1963-12-01', 0.0000 UNION
SELECT '1964-03-01', 0.7000 UNION
SELECT '1964-06-01', 0.7000 UNION
SELECT '1964-09-01', 1.3900 UNION
SELECT '1964-12-01', 0.6800 UNION
SELECT '1965-03-01', 0.6800 UNION
SELECT '1965-06-01', 1.3500 UNION
SELECT '1965-09-01', 0.6700 UNION
SELECT '1965-12-01', 1.3200

I am trying to return the following results:

Qtr       CPI    YrCPI   AvgYrCPI
--------  -----  -----   --------
1-Jun-60   1.48
1-Sep-60   1.46
1-Dec-60   0.72
1-Mar-61   0.71   4.37
1-Jun-61   0.71   3.60
1-Sep-61  -0.70   1.44
1-Dec-61   0.00   0.72       2.53
1-Mar-62   0.00   0.01       1.44
1-Jun-62   0.00  -0.70       0.37
1-Sep-62   0.00   0.00       0.01
1-Dec-62   0.00   0.00      -0.17
1-Mar-63   0.00   0.00      -0.18
1-Jun-63   0.00   0.00       0.00
1-Sep-63   0.71   0.71       0.18
1-Dec-63   0.00   0.71       0.36
1-Mar-64   0.70   1.41       0.71
1-Jun-64   0.70   2.11       1.24
1-Sep-64   1.39   2.79       1.76
1-Dec-64   0.68   3.47       2.45
1-Mar-65   0.68   3.45       2.96
1-Jun-65   1.35   4.10       3.45
1-Sep-65   0.67   3.38       3.60
1-Dec-65   1.32   4.02       3.74

Note, 4 data points are required to calculate a moving sum of CPI (YrCPI), and 4 calculated YrCPI figures are required to calculate the annual average of YrCPI (AvgYrCPI), giving a staggered effect to the first 7 results.

This sad effort is about as far as I've got:

SELECT I.Qtr, I.CPI, SUM(S.CPI) AS YrCPI
FROM QtrInflation I
JOIN (SELECT TOP 4 Qtr, CPI
      FROM QtrInflation) S
  ON S.Qtr <= I.Qtr
GROUP BY I.Qtr, I.CPI
ORDER BY I.Qtr ASC

Can anyone suggest how to achieve this result without having to resort to cursors?

Thanks,
Stephen
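A set-based sketch (not a definitive answer): a self-join that pulls in the current quarter plus the three preceding ones gives the 12-month sum, and a CASE on the row count blanks out quarters that do not yet have four data points. It will not reproduce the sample output exactly (the sample output starts one quarter later), but it shows the pattern; wrapping this result in a derived table and applying the same four-row trick to YrCPI, or using AVG(...) OVER (ORDER BY Qtr ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) on SQL Server 2012 or later, then gives AvgYrCPI.

-- Sketch only: moving 12-month sum via a self-join on the sample table above.
SELECT  i.Qtr,
        i.CPI,
        CASE WHEN COUNT(s.Qtr) = 4 THEN SUM(s.CPI) END AS YrCPI   -- stays blank until 4 quarters exist
FROM    QtrInflation AS i
JOIN    QtrInflation AS s
          ON s.Qtr <= i.Qtr
         AND s.Qtr >  DATEADD(MONTH, -12, i.Qtr)
GROUP BY i.Qtr, i.CPI
ORDER BY i.Qtr;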
Hello, I am very new to SQL and just getting to learn this stuff. To make this question easier I will scale down the fields dramatically.
I have about 8,000 records (close to 2,000 per year for the last 4 years) and I would like to create a query that will create a table on my SQL server. I need to group the data by two items, the Year and the Name, and average several records. However, one field needs its own calculation.
Here are my field names:
- [year]: 4 choices (2007, 2006, 2005, 2004)
- [name]
- [rush_no]: integer
- [rush_net]: integer
- [YPC]: decimal; this field needs to be calculated as [rush_net] divided by [rush_no]
I also need to create the same table that will "total/sum" the same records.
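A minimal sketch of the averaging query, assuming a source table named dbo.RushingStats with the fields listed above (the source and target table names are made up). SELECT ... INTO creates the new table on the server, and NULLIF guards the YPC division against a player with zero carries. The "total/sum" version is the same statement with SUM in place of AVG and a different target table name.

-- Sketch only: assumed source table dbo.RushingStats([year], [name], rush_no, rush_net).
SELECT  [year],
        [name],
        AVG(1.0 * rush_no)                              AS avg_rush_no,
        AVG(1.0 * rush_net)                             AS avg_rush_net,
        SUM(rush_net) * 1.0 / NULLIF(SUM(rush_no), 0)   AS YPC      -- rush_net / rush_no as a decimal
INTO    dbo.RushingAverages                                         -- creates the new table
FROM    dbo.RushingStats
GROUP BY [year], [name];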
I am trying to get a moving total (just like a moving average). It should sum the current record plus the previous two records, grouped by EmpId. For example, I am attaching an image of the Excel calculation.
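A minimal sketch of that moving total with a window function (SQL Server 2012 or later), assuming a made-up table dbo.EmpAmounts(EmpId, TxnDate, Amount); ROWS BETWEEN 2 PRECEDING AND CURRENT ROW covers the current row plus the previous two within each EmpId.

-- Sketch only: assumed table and column names.
SELECT  EmpId,
        TxnDate,
        Amount,
        SUM(Amount) OVER (PARTITION BY EmpId
                          ORDER BY TxnDate
                          ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS MovingTotal
FROM    dbo.EmpAmounts;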
I have created a cube with 1 fact table and a few dimensions, including dimDate.
I need to create a calculated member for variance.
Variance = SUM([Measures].[Amt]) from the start of the financial year (2015-04-01) up to the current date, minus SUM([Measures].[Amt]) for the same period last year (2014-04-01 up to the current date last year).
Calculation of an average using DAX's AVERAGE and AVERAGEX. This is the manual calculation in the DW, using SQL. In the tabular project (where I've noticed that these 4 percentages are in themselves strange), I first noticed that I would have to divide by 100 to get the same values as in the DW, so I used AVERAGEX.

The results were, respectively: 701,68; 2120,60...; -669,441; and finally **-694,74** for Avg_FMPdollar. I can't understand the difference from the SQL calculation, since the calculations are similar to the other ones. After that I tried:

test := SUM([_FMPdollar]) / COUNTROWS('Fct Sales'), and the value was EQUAL to SQL: -672,17.
test2 := AVERAGE('Fct Sales'[_Frontend Margin Percent ACY]), and here, without dividing by 100 at the end, -696,74...

So AVERAGE and AVERAGEX behave differently from SUM divided by COUNTROWS, and even more strangely, test2 doesn't need the division by 100 to be similar to the AVERAGEX result.

I even calculated the number of blanks and the number of zeros in each column, in case the difference was in the denominator (i.e. a division by a different number of rows), but they are equal for each row.
I have a temp_max column and a temp_min column with data for every day for 60 years. I want the average temp for jan of yr1 through yr60, averaged... I.E. the avg temp for Jan of yr1 is 20 and the avg temp for Jan of yr2 is 30, then the overall average is 25. The complexity lies within calculating a daily average by month, THEN a yearly average by month, in one statement. ?confused?
Here's the original query:

accept platformId CHAR format a6 prompt 'Enter Platform Id (capital letters in ''): '

SELECT name, country_cd
FROM   weather_station
WHERE  platformId = &&platformId;

SELECT to_char(datetime, 'MM') AS MO,
       max(temp_max) AS max_T,
       round(avg((temp_max + temp_min) / 2), 2) AS avg_T,
       min(temp_min) AS min_temp,
       count(unique(to_char(datetime, 'yyyy'))) AS TOTAL_YEARS
FROM   daily
WHERE  platformId = &&platformId
GROUP BY to_char(datetime, 'MM')
ORDER BY to_char(datetime, 'MM');
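A sketch of the two-level average (daily means rolled up to a monthly figure per year, then those monthly figures averaged across the years), written in the same dialect as the query above; the table and column names all come from the post.

-- Sketch only: inner query = average temperature per month per year,
-- outer query = average of those monthly figures across all years.
SELECT mo,
       round(avg(monthly_avg), 2) AS avg_T_across_years
FROM (
    SELECT to_char(datetime, 'MM')   AS mo,
           to_char(datetime, 'YYYY') AS yr,
           avg((temp_max + temp_min) / 2) AS monthly_avg
    FROM   daily
    WHERE  platformId = &&platformId
    GROUP BY to_char(datetime, 'MM'), to_char(datetime, 'YYYY')
) m
GROUP BY mo
ORDER BY mo;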
Does anyone know how I can determine the number of page writes that have been performed during a set period of time? I need to figure out the data churn in that time period.
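One way to measure write volume over a window (a sketch, and not necessarily the exact counter you are after): sys.dm_io_virtual_file_stats reports cumulative file-level writes since the instance started, so snapshot it at the start and the end of the period and take the difference. Note this counts writes to the database files rather than buffer-pool "page writes" specifically.

-- Sketch only: capture a snapshot now, repeat into #io_end at the end of the
-- period, then join the two temp tables on db and subtract.
SELECT  DB_NAME(database_id)       AS db,
        SUM(num_of_writes)         AS writes,
        SUM(num_of_bytes_written)  AS bytes_written
INTO    #io_start
FROM    sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id;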
Very new to SQL and trying to get this query to run. I need to sum the total trips and total values as separate columns by day to insert them into another table.....
My code is as follows:
Insert Into [dbo].[CombinedTripTotalsDaily] ( Year, Month, Week, DayNo, Day, Trip_Date,
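For what it's worth, a stripped-down sketch of the shape of such an INSERT ... SELECT, assuming a made-up source table dbo.Trips(Trip_Date, TripCount, TripValue) and using placeholder names for the summed target columns (the real column list above is longer than shown here):

-- Sketch only: daily totals inserted into the combined table.
INSERT INTO dbo.CombinedTripTotalsDaily (Trip_Date, Total_Trips, Total_Value)
SELECT  CAST(t.Trip_Date AS date) AS Trip_Date,
        SUM(t.TripCount)          AS Total_Trips,
        SUM(t.TripValue)          AS Total_Value
FROM    dbo.Trips AS t
GROUP BY CAST(t.Trip_Date AS date);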
I haven't a clue how to accomplish this. All the data is in one table. The data is stored by registration date and includes county and number of students broken out by grade. Any help appreciated!
Rob
I have a table that writes daily sales each night, but it adds the day's sales to the cumulative total for the month. I need to pull the difference of today's cumulative total less yesterday's. So when my total for today is 30,000 and yesterday's is 28,800, my sales for today would be 1,200. I want to write this to a new field but I just can't seem to get the net sales for the day. Here is some sample data. For daily sales for 6-24 I want to see 2,000, for 6-25 3,000, 6-26 3,500, and 6-27 3,500. I'm thinking a CASE WHEN but can't seem to get it right.
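A sketch using LAG (SQL Server 2012 or later), assuming a made-up table dbo.DailyCumulative(SalesDate, CumTotal): the previous day's cumulative figure is subtracted from today's within the same month, and the first day of each month falls back to 0, so its net sales equal the cumulative total itself.

-- Sketch only: assumed table and column names.
SELECT  SalesDate,
        CumTotal,
        CumTotal - LAG(CumTotal, 1, 0)
                   OVER (PARTITION BY YEAR(SalesDate), MONTH(SalesDate)
                         ORDER BY SalesDate) AS NetSalesForDay
FROM    dbo.DailyCumulative;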
I have created a Report using Visual studio-the report displays a subreport within it.
On the Subjective Report I have 12 values for each month of the year.
For the first month the value is =Sum(Fields!Month_1.Value), and I have named this text box 'SubRepM1'. The name of the subreport is 'subreport1'.
On my Main Report, again I have 12 values for each month of the year. For the first month the value is =Sum(Fields!Month_1.Value)*-1, and I have named this text box 'MainRepM1'. The name of the main report is 'GMSHA Budget Adjustment Differentials'.
The report displays both of the subreport and main report values but I now need to total these values together for each month in order to produce a grand total.
I have tried using the following to add the totals for Month 1 together, =subreport1.Report.SubRepM1 + MainRepM1, but this does not work and I get the following error message: 'The value expression for the text box 'textbox18' contains an error: [BC30451] Name 'subreport1' is not declared'.
I feel that it should be a simple matter of adding the two sets of values together, but I'm having major problems trying to get these totals to work.
I'm importing from an Excel spreadsheet into an SQL Table using DTS.
Two things:
1. I don't want anything imported into Column 1 of the table as this has been designed to automatically increment (creating a primary key). I haven't worked out how to do this.
2. How can I get the DTS package to ignore the top 2 rows of the Excel spreadsheet (header information)?
An answer to one will be great but both will be tremendous!!
So I have this query that is ignoring my date filter and for the life of me I can't figure out why. Was hoping some guru could explain it to me. Here goes:
This query ignores my date filter:
SELECT  rcv.Name AS MachineName,
        r.JobId,
        j.Name AS JobName,
        r.CreateTime AS JCreateTime,
        rsv.Name AS JResultStatus,
        rpv.Path + rpv.Name AS ResourcePool,
        rcvv.ResourceConfigurationVal AS Dimension
FROM dbo.Result_View AS r
INNER JOIN dbo.ResourceConfiguration_View AS rcv ON r.ResourceConfigurationId = rcv.Id
INNER JOIN dbo.Job_View AS j ON r.JobId = j.Id
INNER JOIN dbo.ResultStatus_View AS rsv ON r.ResultStatusId = rsv.Id
INNER JOIN dbo.Resource_View AS rv ON rcv.ResourceId = rv.Id
INNER JOIN dbo.ResourcePool_View AS rpv ON rv.ResourcePoolId = rpv.Id
RIGHT JOIN dbo.ResourceConfigurationValue_View AS rcvv ON rv.LatestResourceConfigurationId = rcvv.ResourceConfigurationId
LEFT JOIN dbo.Dimension_View AS d ON rcvv.DimensionId = d.Id
WHERE (r.CreateTime > DATEADD(DAY, -15, GETDATE())) AND (rcv.Name LIKE 'PNP%') AND (d.Id = 859) OR (d.Id = 860)
If I comment out the last two joins and the associated select/filters, all of a sudden the date filter works again. From everything I have read, the joins are supposed to be processed BEFORE the filters are applied in the virtual table. My DB goes back a number of years and contains millions of records. Without the date filter, the query takes a very, very long time to run.
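The joins are not the culprit here; it is operator precedence in the WHERE clause. AND binds more tightly than OR, so the clause is evaluated as (date AND name AND d.Id = 859) OR (d.Id = 860), which lets every d.Id = 860 row bypass the date filter entirely. A sketch of the corrected predicate (grouping the OR, or using IN, keeps the date filter in force for both ids):

-- Sketch only: the rest of the query stays the same.
WHERE r.CreateTime > DATEADD(DAY, -15, GETDATE())
  AND rcv.Name LIKE 'PNP%'
  AND d.Id IN (859, 860)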
I have just transferred my site to a new server with SBS R2 Premium, so the site's database changed from SQL 2000 to SQL 2005. I find that searches are now returning results in random order, even though they use a view with an ORDER BY clause to force the order I want. I find that the results are unordered when I test the view with Management Studio, so the issue is unrelated to my VB/ASP.NET code. Using my SQL update tool (SQL Compare, from Redgate) I find that there are no differences in the views, or the underlying tables.

Using Management Studio to test a number of views, I find that I have a general problem with all views. For example, one of the simpler views is simply a selection of fields from one table, with an ORDER BY clause on the table's primary key:

SELECT TOP (100) PERCENT GDQid, GDQUser, GDQGED, GDQOption, gdqTotalLines, GDQTotalIndi, GDQRestart, GDQCheckpointMessage, GDQStarted, GDQFinished, gdqCheckpointRecordCountr
FROM dbo.GEDQueue
ORDER BY GDQid DESC

If I right-click the view (from Management Studio's Object Explorer pane), select Design from the menu to show the view's design, and then click the Execute SQL icon, the view's results are displayed perfectly, in descending order of GDQid. However, if I select "Open View" the view's results are displayed out of order. When I do this with the SQL 2000 database, both Design/Execute and Open View correctly display the data in the correct order.

Is there something that I should check in the SQL 2005 installation - some option that has been set incorrectly?

Regards,
Robert Barnes
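For what it's worth, SQL Server 2005 does not honour an ORDER BY inside a view definition; the TOP (100) PERCENT ... ORDER BY pattern that appeared to work in SQL 2000 is optimised away, so the only guaranteed ordering comes from the query that reads the view. A minimal sketch:

-- Sketch only: order the results in the outer query, not inside the view.
SELECT GDQid, GDQUser, GDQGED, GDQOption, gdqTotalLines, GDQTotalIndi,
       GDQRestart, GDQCheckpointMessage, GDQStarted, GDQFinished,
       gdqCheckpointRecordCountr
FROM   dbo.GEDQueue          -- or FROM the view itself
ORDER BY GDQid DESC;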
I am using the SqlDataSource to access the dB from my page. Basically this is what I do with it:

ds.SelectParameters.Clear();
ds.SelectParameters.Add("Id", TypeCode.Int32, id.ToString());

DataSourceSelectArguments dssa = new DataSourceSelectArguments();
dssa.MaximumRows = 1;
dssa.AddSupportedCapabilities(DataSourceCapabilities.Page);

DataView dv = (DataView)ds.Select(dssa);
if (dv.Count > 0)
{
    // collect the information
    string title = (string)dv[index].Row.ItemArray[0];
}

And the SelectCommand attribute of the SqlDataSource is set in design mode to "SELECT * from vw_Items ORDER BY Category". So, since I am trying to retrieve just the item with the given Id, I was expecting just one record, but when I step through I see that the data view has a count of 9 (all records in the table)!!! What am I doing wrong here? Why can't it return just one, as per the select statement which, after adding the parameter, should be something like "SELECT * FROM vw_Items WHERE ID = 5 ORDER BY Category"?
If you have a table called Person that has ID, Age, Gender and Name, and you have a stored procedure:

CREATE PROCEDURE [dbo].[GetName]
    @Age TINYINT,
    @Gender TINYINT
AS
BEGIN
    SELECT Name FROM Person WHERE Age = @Age AND Gender = @Gender
END

Sometimes you want to send @Age as NULL to get all ages (in other words, you are not concerned with the age), or @Gender as NULL to get all genders. A way to do that is to build your query string dynamically and execute it with the EXEC command. However, this is a bad solution because you lose most of the SP benefits and end up writing longer and more complicated procedures, especially when you have a lot of parameters!

Another solution that I thought of is having your SELECT like this:

SELECT Name FROM Person WHERE Age = ISNULL(@Age, Age) AND Gender = ISNULL(@Gender, Gender)

However, I noticed that when the value of the column is NULL, you get false results, obviously because you are comparing a value with NULL using the = operator when you should be using the IS operator, so this method did not work. I am wondering if anyone has a good solution for this.

Adam Tibi
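A common pattern that keeps the procedure static, treats a NULL parameter as "no filter", and does not trip over NULL column values is to test the parameter for NULL explicitly. This is only a sketch; OPTION (RECOMPILE), available on later versions, is a frequent companion so the optimizer plans for the actual parameter values.

-- Sketch only: NULL parameter = ignore that filter; NULL column values are unaffected.
SELECT Name
FROM   Person
WHERE  (@Age    IS NULL OR Age    = @Age)
  AND  (@Gender IS NULL OR Gender = @Gender)
OPTION (RECOMPILE);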
In our schools we have a number of East-European, Turkish, Scandinavian, ... students. Their names contain "special" characters, like Ö, Ü, Ø, ... Our users want to be able to search for student names without having to enter those special characters. Most often they don't know the exact spelling of the names and they get "no match found" messages as a result.
They want to have persons with the name Ösgür, Osgueld, ... in the result set after entering "osgu" in the search field.
What is the best way to do this? I was thinking about using another collation near the LIKE, but I don't know if that would work and how it should be done. The Database collation is Latin1_General_CI_AS.
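A minimal sketch of the COLLATE idea, with made-up table and column names: forcing an accent-insensitive collation on the comparison lets 'osgu' match 'Ösgür' without changing the Latin1_General_CI_AS database collation.

-- Sketch only: Latin1_General_CI_AI is the accent-insensitive counterpart
-- of the Latin1_General_CI_AS database collation.
SELECT *
FROM   dbo.Students
WHERE  LastName COLLATE Latin1_General_CI_AI LIKE N'%osgu%';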
Hi, I am using SQL Server 2000. In the database I have one column named Address which contains the full address of the customer. While searching I want to ignore starting numeric or alphanumeric values. Kindly guide me on how I can ignore numeric or alphanumeric values while searching the data.
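One possible approach (a sketch only, with a made-up table name and search parameter, in SQL Server 2000 syntax): when the address starts with a digit, drop everything up to the first space so a leading house number such as '12' or '12A' is ignored before matching.

-- Sketch only: dbo.Customers and @SearchText are assumptions.
DECLARE @SearchText varchar(100)
SET @SearchText = 'main street'

SELECT Address
FROM   dbo.Customers
WHERE  CASE WHEN Address LIKE '[0-9]%' AND CHARINDEX(' ', Address) > 0
            THEN LTRIM(SUBSTRING(Address, CHARINDEX(' ', Address) + 1, LEN(Address)))
            ELSE Address
       END LIKE '%' + @SearchText + '%'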
Hi, I'm using the following SQL script to return a list of part number and the order is not what I expect. Perhaps this is a collation problem but I have no idea where to look to modify that.
Thanks in advance,
John

select part
from transactions T
where (T.transdate between '20070701' and '20070705')
  and (T.transtype = 'ISSU' or T.transtype = 'RTRN')
order by part
Here is the beginning of the Transactions table create script
CREATE TABLE [Transactions] (
    [RecNo] [int] IDENTITY (1,1) NOT NULL,
    [Part] [nvarchar] (30) NOT NULL,
    [TransDate] [nvarchar] (8) NOT NULL,
    [TransType] [nvarchar] (4) NOT NULL,
    [FromLoc] [nvarchar] (10),
The 'Part' column is an alphanumeric field. The problem I am having is that the ORDER BY seems to ignore the hyphen character '-' when the returned rows are ordered by Part (which can contain hyphens in any position).
Here is an example of what I get.
130909N9
130909N9
130909N9
1-480698-0   * These two should not be here
1-480699-0   *
15-423
164-07700
164-07700
164-07700
1683
I was expecting this (and I get this in an older database).
068-03000
068-03000
06A19956
074-03200
077-367-0
08DU08
1-480698-0   * These should be here earlier in the data
1-480699-0
100-364072
100-364072
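This is indeed collation behaviour: Windows collations such as Latin1_General_* use word-sort rules that treat the hyphen as a mostly ignorable character, while the older database was probably using a SQL_ collation (or a binary one) that sorts strictly character by character. A sketch of forcing the stricter comparison just for the ordering, without touching the column definition:

-- Sketch only: a binary collation makes the hyphen significant in the sort.
select part
from transactions T
where (T.transdate between '20070701' and '20070705')
  and (T.transtype = 'ISSU' or T.transtype = 'RTRN')
order by part COLLATE Latin1_General_BIN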
This may be a silly question out of ignorance, but I'm working in an environment where, for the first time, I am a DBA for a server without being a local admin on the Windows server, so apologies if it is.
Anyway, I have set up some traces that are writing to a file stored in a directory I have full control on. However, the trace files themselves are not inheriting the folder permissions, even though the folder is configured to do so. The only logins with access are the Local System account, the SQL Server service account local group, and the local administrators group.
Is this something that SQL Server does on its own, or would this have to be something the system admins have set up with group policy or something? If it's SQL Server, is there some configuration setting I can change to get it to stop doing that, so I can stop bugging the admins to give me access every time I run a trace?
SQL Version: 2008 (not R2). Problem: my SELECT statement seems to be unaffected by some of the conditions in the WHERE clause. For instance:

JCCD.Mth >= cutoffs.FiscalYear_FirstMonth   (value '20130101')
AND JCCD.Mth <= @WIPMonthCurrent            (value '20130101')
AND LTRIM(RTRIM(JCCD.Job)) = '71-'

(see output and code below)

SQL Code:

declare @WIPMonthCurrent date = '20130101'

SELECT JCCD.JCCo, JCCD.Job, JCCD.Mth, JCCD.Source, sum(JCCD.ActualCost) AS CostToDate
I am looking to delete a single row from a relatively small table. Unfortunately, there is a foreign key relationship between this table and a much much larger table. The checking of this foreign key when I am deleting this row seems to significantly impact the performance of the operation. Previously there was an index on this larger table that helped this query run. This index has been dropped to improve the performance of a more frequently executed operation.
Is there a way I can use a hint or something to stop SQL checking this foreign key when deleting the row? I am certain that there are no associated rows in the larger table.
I have read elsewhere that I could disable the foreign key, perform the delete, then enable the foreign key. This delete statement is not a one off process and could happen in the normal operation of the application so I don't really know what the implications of doing this are.
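For reference, the disable/re-enable route looks like the sketch below (all object names are made up). The main caveat is the re-enable step: WITH CHECK CHECK CONSTRAINT re-validates every existing row in the large table, which is itself expensive and takes a schema modification lock, and if you re-enable without WITH CHECK the constraint is left untrusted. That makes it a poor fit for something that runs during normal application operation.

-- Sketch only: assumed table and constraint names.
ALTER TABLE dbo.BigTable NOCHECK CONSTRAINT FK_BigTable_SmallTable;

DELETE FROM dbo.SmallTable WHERE Id = 42;   -- the single row being removed

ALTER TABLE dbo.BigTable WITH CHECK CHECK CONSTRAINT FK_BigTable_SmallTable;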
I would like to populate a grid with data from 2 different tables. Table1: [PK]id(int), name(nvarchar), areaID(int) Table2: [PK][FK]areaID(int), areaDescription(nvarchar)
My current query is: SELECT Table1.id, Table1.name, Table2.areaDescription FROM Table1 INNER JOIN Table2 ON Table1.areaID = Table2.areaID
However, sometimes the areaID in Table1 will only be populated at a later stage and will therefore be NULL in Table1. Table2 is used as a lookup table when inserting into Table1. This query therefore omits any records in Table1 which do not have an areaID. I would like to view ALL records (the ones without an areaID as well), so they can be populated in the grid, selected for update on the web forms because they are incomplete, and subsequently assigned an areaID.
Any help with this query would be much appreciated...
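A minimal sketch: switching the INNER JOIN to a LEFT JOIN keeps the Table1 rows whose areaID is still NULL; their areaDescription simply comes back as NULL and can be shown as blank in the grid.

-- Sketch only: same query with an outer join.
SELECT Table1.id, Table1.name, Table2.areaDescription
FROM   Table1
LEFT JOIN Table2 ON Table1.areaID = Table2.areaID;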
I"m trying to use a BULK INSERT command to insert data into a table from a file. There is a UNIQUE Index that is being violated and the BULK INSERT fails.
I do not want to drop or disable the index, however, i also do not want to load 'duplicate' records so i keep the CHECK_CONSTRAINTS parameter.
Is there a way to have the duplicate records outputed to the ERRORFILE ?
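One common workaround (a sketch only, with made-up names): bulk load into a staging table that has no unique index, insert only the rows that are not already in the target, and keep the rejects in staging for inspection instead of relying on ERRORFILE, which targets rows that fail to parse rather than constraint violations.

-- Sketch only: assumed file path, tables and key column.
BULK INSERT dbo.Staging
FROM 'C:\data\load.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

INSERT INTO dbo.Target (KeyCol, OtherCol)
SELECT s.KeyCol, s.OtherCol
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.KeyCol = s.KeyCol);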
Would like to know if it is possible to calculate the duration between Datetime start and end dates ignoring all overlaps? E.g.:
1) StartTime 10:00:00, EndTime 11:00:00, Duration: 01:00:00
2) StartTime 10:30:00, EndTime 11:15:00, Duration: 00:45:00
The total duration should be 01:15:00 and not 01:45:00.
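A sketch of the classic "merge overlapping intervals" pattern (SQL Server 2012 or later), assuming a made-up table dbo.Sessions(StartTime, EndTime): a row starts a new island when its StartTime is later than every EndTime seen so far, and summing each island's span gives the total duration without double-counting overlaps. For the example above it yields one island from 10:00:00 to 11:15:00, i.e. 75 minutes.

-- Sketch only: assumed table and column names.
WITH Ordered AS (
    SELECT  StartTime,
            EndTime,
            MAX(EndTime) OVER (ORDER BY StartTime, EndTime
                               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS PrevMaxEnd
    FROM    dbo.Sessions
), Grouped AS (
    SELECT  StartTime,
            EndTime,
            SUM(CASE WHEN PrevMaxEnd IS NULL OR StartTime > PrevMaxEnd THEN 1 ELSE 0 END)
                OVER (ORDER BY StartTime, EndTime ROWS UNBOUNDED PRECEDING) AS IslandId
    FROM    Ordered
), Islands AS (
    SELECT  IslandId, MIN(StartTime) AS IslandStart, MAX(EndTime) AS IslandEnd
    FROM    Grouped
    GROUP BY IslandId
)
SELECT  SUM(DATEDIFF(MINUTE, IslandStart, IslandEnd)) AS TotalMinutes
FROM    Islands;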
I'm pretty new at this, writing SQL and Reporting Services. I created a report with a date parameter. I need the report to ignore the timestamp. My @StartDate is fine because its timestamp is 12:00:00 AM, but my @EndDate also has this timestamp. I need to pull all the data up to the end date the user enters without taking the timestamp into consideration.
If someone can help me out with, I would greatly appreciate it.
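A minimal sketch of the usual fix, assuming a made-up column name: compare against "less than the day after @EndDate", which keeps every row stamped at any time on the end date itself regardless of its time part.

-- Sketch only: OrderDate is a placeholder for the report's datetime column.
WHERE OrderDate >= @StartDate
  AND OrderDate <  DATEADD(DAY, 1, @EndDate)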
Can you please tell me how to configure the Lookup transformation so that it ignores all the NULL values? I want to configure a Lookup component for the column "Col1" as follows:
- All NULL values of Col1 should not be considered for the lookup; they should be passed to the downstream component as valid rows.
- All NOT NULL values of Col1 should be processed by the Lookup component.
- If there is no matching value for a NOT NULL value of Col1, it should be directed to the error output.