I've got a statistics table that I've been writing to for about 2 years now. Every Saturday night, a size snapshot (in MB) of each DB file is taken and dumped into this table, and I'm then emailed a copy for that week.
Now I'm trying to figure out what the fastest growers are. Here's the table DDL.
What I'm trying to figure out is how to query the average monthly and yearly growth percentages per DB on the MDFSize column.
I'm usually pretty good at this sort of thing, but I just can't seem to wrap my head around how to solve this issue. I'm not having a very good math day.
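Without the DDL I can only sketch it, but assuming hypothetical names DbSizeStats, DBName, and SnapshotDate alongside the MDFSize column mentioned above, plus a version with LAG (SQL Server 2012+), the monthly figure can be computed by collapsing the weekly snapshots to one row per month and comparing each month to the one before:

WITH Monthly AS (
    SELECT DBName,
           DATEADD(month, DATEDIFF(month, 0, SnapshotDate), 0) AS MonthStart,
           MAX(MDFSize) AS MDFSizeMB        -- largest snapshot within the month
    FROM DbSizeStats                        -- hypothetical table name
    GROUP BY DBName, DATEADD(month, DATEDIFF(month, 0, SnapshotDate), 0)
)
SELECT DBName,
       AVG(GrowthPct) AS AvgMonthlyGrowthPct
FROM (SELECT DBName,
             (MDFSizeMB - LAG(MDFSizeMB) OVER (PARTITION BY DBName ORDER BY MonthStart))
             * 100.0
             / NULLIF(LAG(MDFSizeMB) OVER (PARTITION BY DBName ORDER BY MonthStart), 0)
                 AS GrowthPct
      FROM Monthly) AS g
GROUP BY DBName;

Swapping month for year in the DATEADD/DATEDIFF pair gives the yearly version.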
I am the only DBA in my company, and a client wants to know the growth rate of his SQL Server database, which is in production. How can I get the growth rate per day?
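If nightly full backups are already running, one low-effort sketch is to read the backup history that msdb keeps, since backup size closely tracks database size; this assumes full backups (type 'D') are taken at least daily:

SELECT database_name,
       CONVERT(char(10), backup_start_date, 121) AS BackupDay,
       MAX(backup_size) / 1048576.0 AS SizeMB    -- backup_size is in bytes
FROM msdb.dbo.backupset
WHERE type = 'D'
GROUP BY database_name, CONVERT(char(10), backup_start_date, 121)
ORDER BY database_name, BackupDay;

Comparing consecutive days then gives the growth per day.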
I have a table Product2, shown in the attachment at the bottom. Now I want to create a column "Purchasing rate" over Product and Region, like this. I tried some code, but it still gives me an error.
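Without the attachment I can only guess at the shape, but if "Purchasing rate" means each row's share of the total per Product and Region, a windowed SUM is the usual approach; this sketch assumes a Purchases column (a placeholder) and SQL Server 2005+:

SELECT Product, Region, Purchases,
       Purchases * 100.0
       / NULLIF(SUM(Purchases) OVER (PARTITION BY Product, Region), 0)
           AS [Purchasing rate]               -- row's share of its group total
FROM Product2;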
I want to be able to display the percent of growth for databases on the server. My query has the file sizes, names, and dates; it's run monthly. How do I create a field with the growth % from one month to the next, or from one date entry to the next, so it appears like this:
2006      % growth    2007      % growth    2008
23000                 3400                  20000
Keep in mind, this is how it is displayed in Reporting Services with a matrix (crosstab).
Here's my query:

Code Snippet
SELECT RC_STAT.dbo.Tbl_Database_Statistics.AutoId,
       RC_STAT.dbo.Tbl_Database_Statistics.Server_Description,
       RC_STAT.dbo.Tbl_Database_Statistics.Database_Size_Datetime,
       RC_STAT.dbo.Tbl_Database_Statistics.Database_Description,
       RC_STAT.dbo.Tbl_Database_Statistics.Database_Size_Mb,
       RC_STAT.dbo.Tbl_Database_Statistics.Database_Location,
       RC_STAT.dbo.Tbl_Database_Statistics.Database_File_Name,
       DbSTAT.MonthlyDBTotal
FROM RC_STAT.dbo.Tbl_Database_Statistics
INNER JOIN (SELECT Database_Size_Datetime, Server_Description, Database_Description,
                   SUM(Database_Size_Mb) AS MonthlyDBTotal
            FROM RC_STAT.dbo.Tbl_Database_Statistics AS Tbl_Database_Statistics_1
            GROUP BY Database_Size_Datetime, Server_Description, Database_Description) AS DbSTAT
    ON DbSTAT.Server_Description = RC_STAT.dbo.Tbl_Database_Statistics.Server_Description
   AND DbSTAT.Database_Description = RC_STAT.dbo.Tbl_Database_Statistics.Database_Description
   AND DbSTAT.Database_Size_Datetime = RC_STAT.dbo.Tbl_Database_Statistics.Database_Size_Datetime
This code displays dates, file name, and file size for four separate dates: 11/20/2007, 11/30/2007, 12/30/2007 and 01/31/2007. I'm trying to show the percentage growth from date to date (i.e. the 11/20/2007 to 11/30/2007 percentage growth).
Is there a way I can get the previous date's file size for each entry, so I have a variable for the calculation? Or can I calculate it within this code (i.e. database_size_mb / (database_size_mb at database_size_datetime - 1) * 100, or whatever the formula for percentage growth is)?
Code Snippet
SELECT Database_Size_Datetime, Database_file_name, Database_Size_Mb
FROM RC_STAT.dbo.Tbl_Database_Statistics AS Tbl_Database_Statistics_1
GROUP BY Database_Size_Datetime, Database_file_name, Database_Size_Mb
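A sketch that works on SQL Server 2005 (which lacks LAG) is OUTER APPLY: for each row, fetch the same file's size at the immediately preceding snapshot date, then derive the growth percent from it:

SELECT cur.Database_Size_Datetime,
       cur.Database_File_Name,
       cur.Database_Size_Mb,
       prev.Database_Size_Mb AS Prev_Size_Mb,
       (cur.Database_Size_Mb - prev.Database_Size_Mb) * 100.0
           / NULLIF(prev.Database_Size_Mb, 0) AS Growth_Pct
FROM RC_STAT.dbo.Tbl_Database_Statistics AS cur
OUTER APPLY (SELECT TOP 1 p.Database_Size_Mb
             FROM RC_STAT.dbo.Tbl_Database_Statistics AS p
             WHERE p.Database_File_Name = cur.Database_File_Name
               AND p.Database_Size_Datetime < cur.Database_Size_Datetime
             ORDER BY p.Database_Size_Datetime DESC) AS prev;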
It seems to be subtracting the value of Field3 from the total of the 3 fields at some points instead of giving me the average. Some of the results are here:
Count  Field1  Field2  Field3  Total  Average
=====  ======  ======  ======  =====  =======
    2    NULL       6    NULL      6        6
   11    NULL      28      13     41       29
   10    NULL      33    NULL     33       33
    3    NULL      12    NULL     12       12
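The usual cause is that an expression like (Field1 + Field2 + Field3) / 3 propagates NULLs or divides by the wrong count. A sketch that averages only the populated fields in each row (the table name is a placeholder):

SELECT Field1, Field2, Field3,
       (COALESCE(Field1, 0) + COALESCE(Field2, 0) + COALESCE(Field3, 0)) * 1.0
       / NULLIF(CASE WHEN Field1 IS NULL THEN 0 ELSE 1 END
              + CASE WHEN Field2 IS NULL THEN 0 ELSE 1 END
              + CASE WHEN Field3 IS NULL THEN 0 ELSE 1 END, 0) AS Average
FROM SomeTable;   -- hypothetical table name

For the second result row this gives (28 + 13) / 2 = 20.5 rather than 29.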
I want to calculate the average row size of a record, and based on this I want to add some more columns to an existing table. Here is my table structure:

CREATE TABLE patient_procedure
(
    proc_id int IDENTITY(1,1) CONSTRAINT proc_id_pri_key PRIMARY KEY,
    patient_id int NULL,
    surgeon_name varchar(40) NOT NULL,
    proc_name varchar(20),
    part_name varchar(30),
    wth_contrast int,
    wthout_contrast int,
    wth_wthout_contrast int,
    xray_part varchar(60),
    arth_area varchar(30),
    others varchar(30),
    cpt varchar(20),
    procedure_date smalldatetime NOT NULL,
    mraloperrun varchar(20),
    CONSTRAINT patientid_foreign_key FOREIGN KEY (patient_id)
        REFERENCES dbo.patient_information (Patient_id)
)

Now I have a requirement to add two more procedures with different columns; the columns' overall size is 195 bytes. I could put those two procedures in separate tables, but I don't want to do that because of front-end requirements. The problem is that when the user enters information for these two procedures, the remaining fields will store NULL values, and I know that a NULL value still occupies a minimum of 1 byte in its column. Please suggest whether I should include these columns in the above table, and whether adding them will decrease performance. Waiting for valuable suggestions.
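Rather than estimating, the current average row size can be measured directly with DATALENGTH; a sketch over the variable-length columns (DATALENGTH returns NULL for NULL values, hence the ISNULL wrappers — NULL varchars add nothing, while the fixed-width int and smalldatetime columns always occupy their declared bytes on top of this):

SELECT AVG(ISNULL(DATALENGTH(surgeon_name), 0)
         + ISNULL(DATALENGTH(proc_name), 0)
         + ISNULL(DATALENGTH(part_name), 0)
         + ISNULL(DATALENGTH(xray_part), 0)
         + ISNULL(DATALENGTH(arth_area), 0)
         + ISNULL(DATALENGTH(others), 0)
         + ISNULL(DATALENGTH(cpt), 0)
         + ISNULL(DATALENGTH(mraloperrun), 0)) AS AvgVariableBytesPerRow
FROM patient_procedure;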
An example of 4 rows in my table would be like this:

priceSold   itemsSold
$1400       80
$1500       85
$1560       82
$1700       81
To calculate the average of the price sold relative to the number of items sold, I just have to do SELECT AVG(priceSold * itemsSold).
But sometimes I just want the average price of the first 100 sold items, so how can I make my query use just the first 100 sold items?
In math, for the first 100 it would be:

    average = ((1400 * 80) + (1500 * 20)) / 100

But if I wanted the first 200 it would be:

    average = ((1400 * 80) + (1500 * 85) + (1560 * 35)) / 200

And if I wanted the first 300:

    average = ((1400 * 80) + (1500 * 85) + (1560 * 82) + (1700 * 53)) / 300
But of course the number I want will always be a variable that is less than the total number of products sold. So how do I write this query so that the number of items sold is a variable, and the query takes rows from the table depending on how many items were sold?
I hope I didn't write my explanation too confusingly and that I can get some help from you guys. Thanks a lot for the help, and bye!
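A sketch using a running total, assuming SQL Server 2012+ for the window frame and a hypothetical SaleID column that defines the order of the sales; the CASE trims the row that crosses the limit, matching the hand calculations above (e.g. only 35 of the third row's 82 items count toward the first 200):

DECLARE @N int = 200;    -- the variable number of items

WITH ordered AS (
    SELECT priceSold, itemsSold,
           SUM(itemsSold) OVER (ORDER BY SaleID
                                ROWS UNBOUNDED PRECEDING) AS runningItems
    FROM Sales                        -- hypothetical table name
)
SELECT SUM(priceSold *
           CASE WHEN runningItems <= @N THEN itemsSold
                ELSE itemsSold - (runningItems - @N) END) * 1.0 / @N AS AvgPrice
FROM ordered
WHERE runningItems - itemsSold < @N;  -- keep rows that start before the limit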
Hi, how do I calculate average lead time? I have a table named "iCalls_Calls" which has 2 columns (start_Date and Closed_Date). I need to calculate the average lead time based on the columns from this table, and I have a query that I need to add this calculation to.
Code:
SELECT B.USER_DIVISION, B.USER_DEPARTMENT, COUNT(*)
FROM iCalls_Calls A
INNER JOIN iCalls_Users B ON A.REQUESTOR = B.USER_ID
GROUP BY USER_DIVISION, USER_DEPARTMENT
Can anyone send me the correct query to calculate the average time? Thanks.
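A sketch that folds the average into the existing query, assuming the lead time is Closed_Date minus start_Date measured in days:

SELECT B.USER_DIVISION,
       B.USER_DEPARTMENT,
       COUNT(*) AS CallCount,
       AVG(DATEDIFF(day, A.start_Date, A.Closed_Date)) AS AvgLeadTimeDays
FROM iCalls_Calls A
INNER JOIN iCalls_Users B ON A.REQUESTOR = B.USER_ID
GROUP BY B.USER_DIVISION, B.USER_DEPARTMENT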
What is the piece of SQL which looks at the average date difference for each enquiry and then sums it all up to give an overall average number of days it takes?
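A sketch with hypothetical table and column names, averaging the per-enquiry day counts in one pass:

SELECT AVG(DATEDIFF(day, EnquiryDate, ResolvedDate)) AS OverallAvgDays
FROM Enquiries                        -- hypothetical names
WHERE ResolvedDate IS NOT NULL;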
I need to calculate a rolling 3-month average cost from the two datasets below; that is, the 3-month average of Dataset 1 / Dataset 2.
Dataset 1:
SELECT (factAdmissions.ContractCode + '-' + factAdmissions.BenefitPlanCode) AS [Contract Code],
       factAdmissions.AdmitCCYYMM,
       ISNULL(SUM(AmountPaid), 0) AS [Amount Paid]
FROM factAdmissions
[Code] ....
Dataset 2:

SELECT (factMembership.ContractCode + '-' + factMembership.BenefitPlanCode) AS Product,
       EffectiveCCYYMM,
       ISNULL(COUNT(DISTINCT MemberId), 0) AS MemberCount
FROM factMembership
WHERE EffectiveCCYYMM >= '200701'
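A sketch, assuming the two queries above are wrapped as CTEs named Paid and Members (with their GROUP BY clauses completed), that months are contiguous per contract, and SQL Server 2012+ for the ROWS frame:

WITH Combined AS (
    SELECT p.[Contract Code],
           p.AdmitCCYYMM,
           p.[Amount Paid] * 1.0 / NULLIF(m.MemberCount, 0) AS CostPerMember
    FROM Paid p
    INNER JOIN Members m
        ON m.Product = p.[Contract Code]
       AND m.EffectiveCCYYMM = p.AdmitCCYYMM
)
SELECT [Contract Code],
       AdmitCCYYMM,
       AVG(CostPerMember) OVER (PARTITION BY [Contract Code]
                                ORDER BY AdmitCCYYMM
                                ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
           AS Rolling3MonthAvg
FROM Combined;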
Calculation of an average using DAX's AVERAGE and AVERAGEX. This is the manual calculation in the DW, using SQL. In the tabular project (where I've noticed that these 4 percentages are in themselves strange), I first noticed that I would have to divide by 100 to get the same values as in the DW, so I used AVERAGEX.
The results were, respectively: 701,68; 2120,60...; -669,441; and finally **-694,74** for Avg_FMPdollar. I can't understand the difference from the SQL calculation, since the calculations are similar to the other ones. After that I tried:
test := SUM([_FMPdollar]) / COUNTROWS('Fct Sales'), and the value was EQUAL to SQL: -672,17. test2 := AVERAGE('Fct Sales'[_Frontend Margin Percent ACY]), and here, without dividing by 100 at the end: -696,74...
So AVERAGE and AVERAGEX behave differently from SUM divided by COUNTROWS, and, even stranger, test2 doesn't need the division by 100 to be similar to the AVERAGEX result.
I even calculated the number of blanks and the number of zeros in each column, in case there was a difference in the denominator (that is, a division by a different number of rows), but they are equal in each row.
I have a temp_max column and a temp_min column with data for every day for 60 years. I want the average temp for January of year 1 through year 60, averaged; i.e., if the avg temp for Jan of year 1 is 20 and the avg temp for Jan of year 2 is 30, then the overall average is 25. The complexity lies in calculating a daily average by month, THEN a yearly average by month, in one statement. Confused?
Here's the original query:

accept platformId CHAR format a6 prompt 'Enter Platform Id (capital letters in ''): '
SELECT name, country_cd
FROM weather_station
WHERE platformId = &&platformId;

SELECT to_char(datetime, 'MM') AS MO,
       MAX(temp_max) AS max_T,
       ROUND(AVG((temp_max + temp_min) / 2), 2) AS avg_T,
       MIN(temp_min) AS min_T,
       COUNT(DISTINCT to_char(datetime, 'yyyy')) AS TOTAL_YEARS
FROM daily
WHERE platformId = &&platformId
GROUP BY to_char(datetime, 'MM')
ORDER BY to_char(datetime, 'MM');
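The two-level average can be done by nesting: the inner query averages per month per year, and the outer one averages those monthly values across the 60 years. A sketch in the same Oracle dialect as the query above:

SELECT mo,
       AVG(avg_t) AS overall_avg_t,
       COUNT(*)   AS years_counted
FROM (SELECT to_char(datetime, 'MM')   AS mo,
             to_char(datetime, 'YYYY') AS yr,
             AVG((temp_max + temp_min) / 2) AS avg_t
      FROM daily
      WHERE platformId = &&platformId
      GROUP BY to_char(datetime, 'MM'), to_char(datetime, 'YYYY'))
GROUP BY mo
ORDER BY mo;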
If you are familiar with Crystal reports or Visual basic, you may be familiar with the Rate and Pmt functions.
I need to duplicate them in SQL Server 7.
Anybody have code for this already? I hate re-inventing the wheel.
More (unnecessary) details: I have a client who has handed me the formula that I need to use for calculating interest rates. Unfortunately, the formula was written in Crystal Reports, so now I need to pick it apart and do the work that CR does automatically. Any help?
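Pmt at least has a closed-form annuity formula that translates directly to T-SQL; here is a minimal sketch, assuming payments at the end of each period, with @r the rate per period, @n the number of periods, and @pv the present value. Rate has no closed form, so it would need an iterative solve (e.g. Newton's method) in a loop or UDF:

DECLARE @r float, @n int, @pv float
SELECT @r = 0.06 / 12, @n = 360, @pv = 100000   -- example inputs

-- standard annuity payment: pv * r / (1 - (1 + r)^-n)
SELECT @pv * @r / (1 - POWER(1 + @r, -@n)) AS Pmt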
I am looking for information that tells me how fast a DB is growing, in MB and/or percentages, over a given period of time (i.e. weekly, monthly, yearly, etc.), either in real numbers or estimates. Does 7.0 already store something like this, or do I need to create some code for this?
Or does someone have something like this already coded that they would be willing to share?
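7.0 does not keep a size history on its own, but a sketch of a scheduled job that snapshots sysfiles into a history table gives you the raw numbers to diff (sysfiles reports size in 8 KB pages, hence the division by 128.0):

-- one-time setup
CREATE TABLE DbSizeHistory (
    SampleDate datetime NOT NULL,
    FileName   sysname  NOT NULL,
    SizeMB     decimal(12, 2) NOT NULL
)

-- run on a schedule in each database of interest
INSERT INTO DbSizeHistory (SampleDate, FileName, SizeMB)
SELECT GETDATE(), name, size / 128.0
FROM sysfiles

Weekly or monthly growth then falls out of comparing successive samples.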
Hi, I want to store a tax rate in my tables. I set the data type to float; I want 4 decimal places, and the data in the table has 4 decimals, but when I run a query in Query Analyzer it returns 4.4999999999999998E-2 instead of 0.045. How can I fix this?
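Float is a binary approximation, so 0.045 cannot be stored exactly; an exact numeric type fixes it. A sketch with placeholder table and column names:

-- store the rate exactly
ALTER TABLE Rates ALTER COLUMN TaxRate decimal(5, 4)

-- or, if the column must stay float, round at query time
SELECT CAST(TaxRate AS decimal(5, 4)) AS TaxRate
FROM Rates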
I have installed a SQL Server diagnostic tool for evaluation. It prompts and warns me that the Procedure Cache hit rate is, for example, 15%. Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server finds a plan in the procedure cache and the number of times it looks for a plan in the procedure cache falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
I need to pick up a tax rate that is stored in a one-record table, and I would like to avoid using a CROSS JOIN. Is there a way to SELECT the record and set a variable equal to the tax rate, so I can pick up the rate in another SELECT statement for each record?
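A sketch, assuming the one-record table is called TaxRateTable and the detail table Orders (both placeholders):

DECLARE @TaxRate decimal(5, 4)
SELECT @TaxRate = TaxRate FROM TaxRateTable   -- single-row lookup, no join needed

SELECT OrderID, Amount, Amount * @TaxRate AS Tax
FROM Orders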
We are having problems with our SQL server 2000. The problem is that on a daily basis we run out of disk space and I always have to run shrinkdatabase on tempdb. Today we started with 160GB of free space and by the end of the day it was gone!
Yes, we do have many jobs running on our SQL Server pulling data in from many sources, but I don't know how to find out which job is causing this problem. I have a suspicion that it could be a job that runs hourly and pulls data from Oracle (approximately 10000 rows each time), but that job has been active since the 28th of August 2007, and we only started running out of space in the past 5 days. Any suggestions would be appreciated as to what is causing this or how to diagnose the problem.
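SQL Server 2000 itself has no per-session tempdb accounting, so there Profiler traces timed around the job schedule are the main option; if the instance were on 2005 or later, the space-usage DMVs would pin the culprit session down directly. A sketch of that 2005+ query, for reference:

SELECT session_id,
       user_objects_alloc_page_count * 8 / 1024     AS UserObjectsMB,
       internal_objects_alloc_page_count * 8 / 1024 AS InternalObjectsMB
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;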
I have a 32 bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, is reporting the Procedure Cache to be 384MB and a Hit Rate of 75% on a fairly regular basis. Sometimes the Procedure Cache increases to 495MB and a Hit Rate of 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation, however this is our only cluster.
DBCC PROCCACHE
num proc buffs        = 64889
num proc buffs used   = 1135
num proc buffs active = 1135
proc cache size       = 2896
proc cache used       = 364
proc cache active     = 364
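On 2005 the plan cache can also be inspected directly; a sketch totaling cached plan size by object type:

SELECT objtype,
       COUNT(*) AS Plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS SizeMB
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY SizeMB DESC;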
Hi all. I found that when I trained my data mining models, the model coverage rate was very low (in my case, the training data set has 82 rows, but only 25 cases occur in the models I trained). How can I improve the coverage rate to improve the quality of the models (if that is possible in SQL Server 2005)? I am using SQL Server 2005.
We have asynchronous database mirroring on SQL Server 2005 SP2 Enterprise Edition/Windows 2000 Advanced Server. We noticed that the log send rate is quite low (average 1.3 MB/sec) in most cases, whereas "Log Bytes Flushed/sec" is high (1.4 MB/sec); as a result, the log send queue keeps increasing and finally takes all the transaction log space. Our disk queue length is always in the range of 0.01, and the principal and mirror servers are on a local LAN.
I tried a low-end server and a high-end server, and in both cases the log send rate is approx. 1.3 MB/sec (maximum 4 MB/sec).
Is there any limitation on the log send rate?
How can we improve the log send rate? Since both servers are on a local LAN, network bandwidth does not seem to be an issue.
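To rule the send and redo paths in or out, the mirroring performance counters on both partners are worth sampling over time; a sketch reading them through the DMV on SQL Server 2005 (counter names matched with LIKE as a safeguard):

SELECT RTRIM(instance_name) AS database_name,
       RTRIM(counter_name)  AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND (counter_name LIKE 'Log Send Queue%'
       OR counter_name LIKE 'Log Bytes Sent%'
       OR counter_name LIKE 'Redo Queue%')
ORDER BY database_name, counter_name;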
I have a procedure that requires picking up the Rate based on Effective Date. This is what I have so far:
SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON SHIP.ProductID = SHPD.ProductID
   AND SHIP.Shipper = SHPD.Shipper
   AND Max???(SHIP.EffectiveDate) <= SHPD.ReceivedDate
Because there can be more than one Shipper record, I would somehow need to pick up the maximum EffectiveDate in each case. I realize I cannot use the MAX aggregate in the JOIN, and I'm not sure where to go from here. On the mainframe I used a LOOKUP function that would return the correct EffectiveDate. Help would be appreciated.
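A sketch that replaces the MAX placeholder with a correlated subquery, so each shipment row joins only to the shipper rate whose EffectiveDate is the latest one on or before the ReceivedDate:

SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON SHIP.ProductID = SHPD.ProductID
   AND SHIP.Shipper = SHPD.Shipper
   AND SHIP.EffectiveDate = (SELECT MAX(S2.EffectiveDate)
                             FROM tblShippers S2
                             WHERE S2.ProductID = SHPD.ProductID
                               AND S2.Shipper = SHPD.Shipper
                               AND S2.EffectiveDate <= SHPD.ReceivedDate)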