If you are familiar with Crystal Reports or Visual Basic, you may know the Rate and Pmt functions.
I need to duplicate them in SQL Server 7.
Does anybody have code for this already? I hate re-inventing the wheel.
More (unnecessary) details:
I have a client who has handed me the formula I need to use for calculating interest rates. Unfortunately, the formula was written in Crystal Reports, so now I need to pick it apart and do the work that CR does automatically. Any help?
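For what it's worth, Pmt is just the standard end-of-period annuity formula, so a minimal sketch as a SQL Server 7 stored procedure (scalar UDFs only arrived in SQL 2000; the procedure and parameter names here are mine) might look like the below. Rate has no closed form, so duplicating it would mean iterating this same formula with something like Newton's method.

-- Pmt sketch: @rate = per-period interest rate, @nper = number of
-- periods, @pv = present value. Returns the period payment
-- (negative for an outflow, as in VB/Crystal).
CREATE PROCEDURE usp_Pmt
    @rate float, @nper int, @pv float, @pmt float OUTPUT
AS
IF @rate = 0
    SET @pmt = -@pv / @nper   -- zero-interest case
ELSE
    SET @pmt = -(@rate * @pv) / (1.0 - POWER(1.0 + @rate, -@nper))

-- Usage: DECLARE @p float  EXEC usp_Pmt 0.005, 360, 200000, @p OUTPUT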
I am looking for information that tells me how fast a database is growing, in MB and/or percentages, over a given period of time (weekly, monthly, yearly, etc.), either in real numbers or estimates. Does 7.0 already store something like this, or do I need to write some code for it?
Or does someone have something like this already coded that they would be willing to share?
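As far as I know 7.0 keeps no size history for you, but a scheduled snapshot is easy to roll yourself. A sketch with hypothetical names, run weekly from a scheduled job (sysfiles reports size in 8 KB pages); growth is then just the difference between snapshots:

CREATE TABLE DbSizeHistory (
    CaptureDate datetime NOT NULL DEFAULT GETDATE(),
    DbName      sysname NOT NULL,
    SizeMB      decimal(12,2) NOT NULL
)

-- Run in each database you want tracked:
INSERT INTO DbSizeHistory (DbName, SizeMB)
SELECT DB_NAME(), SUM(size) * 8.0 / 1024.0
FROM sysfiles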
Hi, I want to store a tax rate in my tables. I set the data type to float; I want 4 decimal places, and the data in the table has 4 decimals, but when I run a query in Query Analyzer it returns 4.4999999999999998E-2 instead of 0.045. How can I fix this?
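float is an approximate (binary) type, so 0.045 has no exact representation in it; an exact numeric type fixes this. A sketch with a hypothetical table name:

ALTER TABLE OrderHeader ALTER COLUMN TaxRate decimal(5,4)
-- decimal(5,4) holds exactly four decimal places: 0.045 stays 0.0450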
I have installed a SQL Server diagnostic tool for evaluation. It warns me that the "Procedure Cache hit rate" is, for example, 15%. Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server finds a required plan in the procedure cache and the number of times it looks for one falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
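You can cross-check the tool's figure yourself. On SQL 2005 and later the counter pair lives in sys.dm_os_performance_counters (on 2000 the same columns are in master..sysperfinfo, under the Cache Manager object rather than Plan Cache); ratio counters must be divided by their Base counter:

SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS plan_cache_hit_pct
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
    ON  b.object_name   = a.object_name
    AND b.instance_name = a.instance_name
WHERE a.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'
  AND a.object_name LIKE '%Plan Cache%'
  AND a.instance_name = '_Total'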
I've got a statistics table that I've been writing to for about two years now. Every Saturday night, a size snapshot (in MB) of each DB file is taken and dumped into this table. I'm then emailed a copy for that week.
Now I'm trying to figure out what the fastest growers are. Here's the table DDL:
What I'm trying to figure out is how to query the average monthly and yearly growth percentages per DB on the MDFSize column.
I'm usually pretty good at this sort of thing, but I just can't seem to wrap my head around how to solve this issue. I'm not having a very good math day.
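A sketch of the monthly version, assuming columns along the lines of DBName, SnapshotDate and MDFSize (adjust to the real DDL): bucket the snapshots by month, take each month's last size, and average the month-over-month change.

SELECT cur.DBName,
       AVG((cur.EndSize - prev.EndSize) * 100.0 / NULLIF(prev.EndSize, 0)) AS AvgMonthlyGrowthPct
FROM (SELECT DBName,
             DATEDIFF(month, 0, SnapshotDate) AS MonthNo,
             MAX(MDFSize) AS EndSize   -- MAX approximates month-end, since files grow
      FROM dbo.DbStats
      GROUP BY DBName, DATEDIFF(month, 0, SnapshotDate)) cur
JOIN (SELECT DBName,
             DATEDIFF(month, 0, SnapshotDate) AS MonthNo,
             MAX(MDFSize) AS EndSize
      FROM dbo.DbStats
      GROUP BY DBName, DATEDIFF(month, 0, SnapshotDate)) prev
    ON  prev.DBName  = cur.DBName
    AND prev.MonthNo = cur.MonthNo - 1
GROUP BY cur.DBName
-- swap DATEDIFF(month, ...) for DATEDIFF(year, ...) for the yearly figure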
I need to pick up a tax rate that is stored in a one-record table. I would like to avoid using a CROSS JOIN. Is there a way to SELECT the record and set a variable equal to the tax rate, so I can pick up the rate in another SELECT statement on each record?
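A variable does exactly that. A sketch with hypothetical names: a SELECT against the one-row table assigns it, and later statements just reference it.

DECLARE @TaxRate decimal(9,4)

SELECT @TaxRate = TaxRate
FROM dbo.TaxControl        -- the one-record table

SELECT InvoiceID, Amount, Amount * @TaxRate AS TaxAmount
FROM dbo.Invoice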
We are having problems with our SQL Server 2000. The problem is that on a daily basis we run out of disk space, and I always have to run DBCC SHRINKDATABASE on tempdb. Today we started with 160GB of free space and by the end of the day it was gone!
Yes, we do have many jobs running on our SQL Server pulling data in from many sources. But I don't know how to find out which job is causing this problem. I suspect it could be a job that runs hourly and pulls data from Oracle (approximately 10,000 rows each time), but that job has been active since 28 August 2007, and we only started running out of space in the past 5 days. Any suggestions would be appreciated as to what is causing this or how to diagnose the problem.
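SQL 2000 has no per-session tempdb accounting, but two quick checks can narrow it down. A sketch: look for a long-running open transaction pinning tempdb, and poll tempdb's file sizes around each job's schedule to see which run lines up with the growth.

-- Is something holding tempdb space open?
DBCC OPENTRAN ('tempdb')

-- Poll this while jobs run and note when the jump happens:
SELECT name, size * 8.0 / 1024.0 AS SizeMB
FROM tempdb..sysfiles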
I have a 32-bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, regularly reports the Procedure Cache to be 384MB with a Hit Rate of 75%. Sometimes the Procedure Cache increases to 495MB and the Hit Rate to 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation; however, this is our only cluster.
DBCC PROCCACHE

num proc buffs        = 64889
num proc buffs used   = 1135
num proc buffs active = 1135
proc cache size       = 2896
proc cache used       = 364
proc cache active     = 364
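For what it's worth, on 2005 the plan cache size is not directly configurable; it is carved out of the buffer pool by an internal formula, and on a 32-bit AWE build the plan cache cannot use AWE-mapped memory at all, which caps it well below your 10GB. You can at least see what is filling it; single-use ad hoc plans are a common culprit behind a low hit rate:

SELECT objtype,
       COUNT(*) AS plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC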
Hi all, I found that when I trained my data mining models, the model coverage rate is very low (in my case, the training data set has 82 rows, but only 25 cases occur in the models I trained). How can I improve the coverage rate to improve the quality of the models (if that is possible in SQL Server 2005)? I am using SQL Server 2005.
We have asynchronous database mirroring on SQL Server 2005 SP2 Enterprise Edition/Windows 2000 Advanced Server. We noticed that the log send rate is quite low (average 1.3 MB/sec) in most cases, whereas "Log Bytes Flushed/sec" is high (1.4 MB/sec); as a result, the log send queue keeps increasing and eventually takes all the transaction log space. Our disk queue length is always in the range of 0.01, and the principal and mirror servers are on the local LAN.
I tried a low-end server and a high-end server, and in both cases the log send rate is approximately 1.3 MB/sec (maximum 4 MB/sec).
Is there any limitation on the log send rate?
How can we improve the log send rate? Since both servers are on the local LAN, network bandwidth does not seem to be an issue.
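I'm not aware of a documented hard cap on the send rate, but you can watch what mirroring itself reports. The counters below (SQLServer:Database Mirroring object, one instance per mirrored database) show whether the principal is sending as fast as the log is being generated:

SELECT instance_name AS database_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Log Bytes Sent/sec', 'Log Send Queue KB',
                       'Log Bytes Received/sec')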
I have a procedure that requires picking up the Rate based on Effective Date. This is what I have so far:
SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON  SHIP.ProductID = SHPD.ProductID
    AND SHIP.Shipper   = SHPD.Shipper
    AND Max???(SHIP.EffectiveDate) <= SHPD.ReceivedDate
Because there can be more than one Shipper record, I somehow need to pick up the maximum EffectiveDate in each case. I realize I cannot use the MAX aggregate in the JOIN, and I'm not sure where to go from here. On the mainframe I used a LOOKUP function that would return the correct EffectiveDate. Help would be appreciated.
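One way that avoids an aggregate in the JOIN itself: keep the join on product/shipper, and let a correlated subquery pick the latest EffectiveDate at or before each receipt. A sketch against the names above:

SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
    ON  SHIP.ProductID = SHPD.ProductID
    AND SHIP.Shipper   = SHPD.Shipper
    AND SHIP.EffectiveDate = (SELECT MAX(S2.EffectiveDate)
                              FROM tblShippers S2
                              WHERE S2.ProductID = SHPD.ProductID
                                AND S2.Shipper   = SHPD.Shipper
                                AND S2.EffectiveDate <= SHPD.ReceivedDate)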
SQL Server 2005, XEON CPU 3.0 GHz, 2.0 GB memory, RAID.
Two tables: HIS_HTTP_ONLINE_LOG (partitioned), for history data, and REL_HTTP_ONLINE_LOG (not partitioned), for each day's data. They have the same structure:

CREATE TABLE HIS_HTTP_ONLINE_LOG (
    ID numeric(20,0) NOT NULL,
    USERID varchar(32) NOT NULL,
    USERIP varchar(16) NOT NULL,
    USERPORT numeric(10,0) NULL,
    OBJECTIP varchar(16) NULL,
    OBJECTPORT numeric(10,0) NULL,
    HTTPURL varchar(256) NULL,
    HTTPHOST varchar(128) NULL,
    HTTPDNS varchar(128) NULL,
    VISITIME numeric(10,0) NULL,
    STARTIME datetime NOT NULL,
    ENDTIME datetime NOT NULL
)
.......
SELECT * INTO REL_HTTP_ONLINE_LOG FROM HIS_HTTP_ONLINE_LOG WHERE 1=2

There are 5 indexes on HIS_HTTP_ONLINE_LOG and no index on REL_HTTP_ONLINE_LOG. About 5,000,000 records land in REL_HTTP_ONLINE_LOG every day; at night they are moved into HIS_HTTP_ONLINE_LOG automatically, which keeps the last 90 days of data.
My operations:
1: ALTER DATABASE DB SET RECOVERY SIMPLE
2: EXEC SP_DBOPTION DB, 'select into/bulkcopy', 'TRUE'
3: INSERT INTO HIS_HTTP_ONLINE_LOG SELECT * FROM REL_HTTP_ONLINE_LOG
4: TRUNCATE TABLE REL_HTTP_ONLINE_LOG
Ask: why does step 3 cost so much time (about 1 hour), and how can I reduce the transaction log generated in this period? Could you give me some suggestions? Thanks!
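Step 3 maintains all 5 indexes on HIS row by row and runs as one huge transaction, which is why it crawls and the log balloons; as far as I know, on 2005 a plain INSERT...SELECT is always fully logged, and the old 'select into/bulkcopy' option no longer does anything (the recovery model governs it). One hedged workaround is to move the rows in batches, so each transaction stays small and the SIMPLE-recovery log can truncate between them; this sketch assumes ID is unique:

DECLARE @moved int
WHILE 1 = 1
BEGIN
    BEGIN TRAN

    INSERT INTO HIS_HTTP_ONLINE_LOG
    SELECT TOP 100000 * FROM REL_HTTP_ONLINE_LOG ORDER BY ID
    SET @moved = @@ROWCOUNT

    -- same 100000 rows: REL is unchanged by the insert above
    DELETE REL_HTTP_ONLINE_LOG
    WHERE ID IN (SELECT TOP 100000 ID FROM REL_HTTP_ONLINE_LOG ORDER BY ID)

    COMMIT

    IF @moved = 0 BREAK   -- nothing left to move
END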
I am the only DBA in my company, and a client wants to know the growth rate of his SQL Server database, which is in production. How can I get the growth rate per day?
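One low-effort source, if you take regular full backups: msdb records the size of every backup, which tracks data size closely enough for a daily trend. A sketch; the day-over-day difference in backup_mb is the growth per day:

SELECT database_name,
       CONVERT(char(10), backup_start_date, 120) AS backup_day,
       MAX(backup_size) / 1048576.0 AS backup_mb
FROM msdb..backupset
WHERE type = 'D'   -- full database backups
GROUP BY database_name, CONVERT(char(10), backup_start_date, 120)
ORDER BY database_name, backup_day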
When sizing products we use predefined size groups from which the users can choose any or all of the sizes. For example, if a size group consisted of sizes (6,8,10) they could use all sizes (6,8,10), or just (6,8), or just (10) if required. Similarly, if a group consisted of (S,M,L,XL) they could choose to buy only (S,L). They cannot choose across groups, so they would not be able to choose (6,S).
Once the required sizing is determined, they then assign size mixes to the sizes to denote how much of the buy will be in each size. So, for example, if we had 3 sizes (6,8,10) with the associated mixes (25%,25%,50%), that would mean we would buy 25% of size 6, 25% of size 8 and 50% of size 10. All size mixes must add up to 100% in total.
The users do analysis to determine what sizes they wish to buy and how much of it.
We also have a franchise portion of the business that has some predefined size mixes. They use the same base size groups as above, but the rule is that they can only use sizes that the particular product is being bought in.
So if the assigned franchise mix is S (50%), M (50%) and the main mix is S (100%), then the franchise mix can only include the S size.
We would then eliminate those sizes from the franchise mix, and to ensure that the franchise mix still adds up to 100% we pro-rate the remaining franchise mix up to give a new mix. To do this I divide one by the total of the remaining size mixes to get a ratio, and then multiply the mixes by this factor.
In the case above the franchise would not be able to use the M size and would only use the S. This would be:
-Total of the remaining mixes, in this case only size S for simplicity: 1 / 0.5 = 2
-Multiply the original mix by this factor: 0.5 * 2 = 1
Size S would now be 100% instead of 50%.
The issue I'm having is that on occasion some of the totals add up to 100.01%, because another requirement is that mixes are held to 4 decimal places (0.1015 would represent 10.15% in Excel).
Here is a shortened version of the code with some test data:
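One way to keep the total at exactly 100% is to round each pro-rated mix to 4 decimal places and then absorb any leftover rounding residue into the largest mix. A sketch with made-up test data (the real schema may differ):

DECLARE @mix TABLE (Size varchar(4), Mix decimal(9,4))
INSERT INTO @mix VALUES ('S',  0.2500)
INSERT INTO @mix VALUES ('M',  0.2500)
INSERT INTO @mix VALUES ('L',  0.2500)
INSERT INTO @mix VALUES ('XL', 0.2500)

-- suppose XL is not bought in the main mix: eliminate it, then pro-rate
DELETE FROM @mix WHERE Size = 'XL'

DECLARE @total decimal(19,8)
SELECT @total = SUM(Mix) FROM @mix
UPDATE @mix SET Mix = ROUND(Mix / @total, 4)   -- each becomes 0.3333

-- the rounded mixes now sum to 0.9999; push the residue into the
-- largest mix so the total is exactly 1.0000
SELECT @total = SUM(Mix) FROM @mix
UPDATE @mix SET Mix = Mix + (1.0000 - @total)
WHERE Size = (SELECT TOP 1 Size FROM @mix ORDER BY Mix DESC, Size)

SELECT Size, Mix FROM @mix   -- S 0.3333, M 0.3333, L 0.3334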
One employee has two pay rates for two different jobs:
Job A: Rate $10.00
Job B: Rate $15.00
I will be updating their record so that they have only one job going forward, Job C, and I need Job C's rate to equal the HIGHER of the two existing rates.
I have a SELECT statement to find the higher rate; however, I am not sure how to apply that rate as the new job's rate. Here's what I used to find the highest rate for one single person:
SELECT MAX(rate), EmployeeID
FROM JobsTable
INNER JOIN IDTable ON JobsID2 = IDID2
WHERE JobCode IN ('JOBA','JOBB') AND EmployeeID = '12345'
GROUP BY EmployeeID
(this returns the employee ID from one table, and the highest rate from Jobs A and B from another table)
I can get the update to add Job C -- how can I get it to assign the result of the above query as the rate for Job C?
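You can fold that query into an UPDATE...FROM, so each employee's Job C row takes that employee's own maximum. A sketch against the tables in the question:

UPDATE J
SET J.rate = mx.MaxRate
FROM JobsTable J
INNER JOIN IDTable I ON J.JobsID2 = I.IDID2
INNER JOIN (SELECT I2.EmployeeID, MAX(J2.rate) AS MaxRate
            FROM JobsTable J2
            INNER JOIN IDTable I2 ON J2.JobsID2 = I2.IDID2
            WHERE J2.JobCode IN ('JOBA', 'JOBB')
            GROUP BY I2.EmployeeID) mx
    ON mx.EmployeeID = I.EmployeeID
WHERE J.JobCode = 'JOBC'   -- the new job row you added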
Below is the query in which I want to retrieve another column (exchange rate) from a particular date via the subquery.
The PurchaseOrderDet table holds the records related to a purchase order, but each row in PurchaseOrderDet needs information from the same table for all rows on a particular date.
How can I achieve this query without using RANK() and similar functions?

SELECT Supplier.Uniid AS SupplierID, Supplier.Name,
       PurchaseOrder.Uniid AS PurchaseID, PurchaseOrder.OrderNo, PurchaseOrder.FormDate,
       StockItem.Uniid AS StockID,
       StockItem.StockCode + N' - ' + StockItem.Description1 AS StockItem,
       PurchaseOrderDet.ExchangeRate, PurchaseOrderDet.UnitPrice, PurchaseOrderDet.Discount,
       SUM(PurchaseOrderDet.OrderQty) AS SumOfOrderQty,
I have a table Product2, shown in the attachment at the bottom. Now I want to create a column "Purchasing rate" over Product and Region, like this. I tried some code, but it still gives me an error.
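Hard to be sure without the attachment, but "a rate over Product and Region" sounds like a windowed aggregate (2005 or later): each row's share of its Product/Region total. A sketch with an assumed Quantity column:

SELECT Product, Region, Quantity,
       Quantity * 1.0 / SUM(Quantity) OVER (PARTITION BY Product, Region)
           AS [Purchasing rate]
FROM Product2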
We have several 2005 servers with "Maximum server memory" set to 2147483647 MB (the maximum), which I believe is the default at installation time. I am told this effectively means "use all the memory there is, including paging." Well, this is nuts, but the servers seem to work fine with this setting no matter how much physical memory they have.
One of our 2005 servers recently started paging like crazy, so I reduced "Maximum server memory" to 6000 and the paging disappeared (the server has 8 gig of physical memory) and the server appears happy.
I cannot explain why only this one server has this paging issue and the others do not. Should I be setting "Maximum server memory" on all my servers? Are there other considerations that might cause the server to eat up all the memory? As far as I know, no other applications run on this box.
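Capping "Maximum server memory" on every server is generally the safer setup, since it leaves headroom for the OS and anything else on the box. The change itself is just sp_configure, sized per machine; for example, the value used on the paging server:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 6000  -- ~2GB of the 8GB left for the OS
RECONFIGURE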
Our monitoring tool shows that our production system periodically experiences a high paging rate, up to 800 memory pages/sec. How can we find out which particular queries, stored procedures, or processes initiate this?
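Pages/sec is an OS-wide counter, so first confirm in Task Manager or Perfmon that it is the SQL Server process doing the paging. If the instance is 2005 or later, a sketch like this, ranking cached statements by physical I/O, is a reasonable proxy for the culprits:

SELECT TOP 20
       qs.total_physical_reads,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_physical_reads DESC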