Hi all
I have a FLOAT field in my table with the following values:
1.63
7.42
35.71
0.58
everything is OK, BUT when I select data from this table in the
QUERY ANALYZER environment with (SELECT * FROM mytable), Query Analyzer shows me the following values:
Hi there, I am trying to export a SQL table containing float fields to a CSV file. Since it's a batch process with 200 tables, I am using BCP to automate the process.
The problem: all other fields export fine except the fields with FLOAT values. When I open the CSV, a few of the float values show a rounding error.
e.g. if the value in the SQL table is 0.29, the CSV file has the corresponding value as 0.2899999999...
A few float values export fine, but most of them come out with recurring '9's at the end, for no apparent reason. I've tried to find out why it happens in some places and not others, but I couldn't see any pattern.
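One workaround that often helps, sketched here with placeholder names (dbo.MyTable, FloatCol): export through a view that casts the FLOAT column to DECIMAL with whatever scale the data actually needs, so BCP writes the rounded decimal text instead of the binary float's full-precision expansion.

CREATE VIEW dbo.MyTable_Export
AS
SELECT OtherCol,
       CAST(FloatCol AS DECIMAL(18, 2)) AS FloatCol   -- pick the scale your data needs
FROM dbo.MyTable

Then point BCP at the view instead of the base table:

bcp "SELECT * FROM MyDB.dbo.MyTable_Export" queryout MyTable.csv -c -t, -T -S MyServer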
I am using SQL 2000 in a kind of electronic wallet way. Users put money onto an account and spend it on various services on the system. The cost of those services is deducted from the value in their wallet, and everybody's happy. However, some very strange things have been happening to my transactions; seemingly at random.
Some transactions (such as purchasing time on the Internet) are returning values such as 0.10000000000000001 instead of 0.1. This minute difference affects the user's wallet balance because the rogue digit is subtracted from their account. So instead of a balance of, say, 3.4 they have 3.39999999999999999.
"So what?", I hear you say. Well the problem comes when it's time to give them a refund. They walk over to a kiosk and the machine tells them they have 3.40 remaining in their account (it's nicely rounding up the value), but when they click Refund, it tells them they have insufficient funds to complete the refund! (Note: the refund amount is being compared with the wallet balance.) If I go into the database via Query Analyzer it tells me their balance is 3.3999999etc, but in Enterprise Manager the value is 3.4. If I try to manipulate the data in any way it is treated as 3.4. However, if I add 0.000000000000001 then QA reads the value as 3.4 and the customer can get their refund.
My questions are these. One, how the hell do I stop this from happening? I only need the two decimal places. Taking the value in a query and rounding it up / chopping off the remaining decimal places hasn't worked; it always picks up the value as 3.4 in a query. Two, why on Earth is this happening??? Has anyone experienced this problem before?
Thanks in advance to anyone that's read this far down.
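The usual cure for this, as a minimal sketch: hold money in DECIMAL (or MONEY) rather than FLOAT. DECIMAL stores exact base-10 values, so the balance and the refund amount compare exactly. The difference is easy to see:

SELECT CAST(3.5 AS FLOAT) - CAST(0.1 AS FLOAT) AS FloatBalance,                    -- 3.3999999999999999
       CAST(3.5 AS DECIMAL(10,2)) - CAST(0.1 AS DECIMAL(10,2)) AS DecimalBalance   -- 3.40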
I need to round UP values, but they should never be rounded down; below is my expected output in the RoundVal column.
SELECT 89 AS Val, 100 AS RoundVal
UNION ALL SELECT 329, 1000
UNION ALL SELECT 6329, 10000
UNION ALL SELECT 43299, 100000
UNION ALL SELECT 155329, 1000000
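One possible approach, assuming the values are positive integers: use the base-10 logarithm to find the next power of ten. (Caveat: exact powers of ten, e.g. 100, stay as they are under this formula.)

SELECT Val,
       CAST(POWER(CAST(10 AS FLOAT), CEILING(LOG10(Val))) AS INT) AS RoundVal
FROM (SELECT 89 AS Val UNION ALL SELECT 329 UNION ALL SELECT 6329
      UNION ALL SELECT 43299 UNION ALL SELECT 155329) AS t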
Hi
The scenario: the price of products is determined by size. I have a Prices table that contains 3 columns: Width, Length and Price. Users input their own width and length values as inWidth and inLength. It is unlikely that these values will exactly match existing lengths and widths in the Prices table. I need to take these user-input values and round them up to the nearest values found in the Prices table to pull the correct price. What is the most efficient way of achieving this? Thanks for your time. C# novice!
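If the Prices table holds one row per stocked size, one way is to pick the smallest stored size that covers both input dimensions. This is a sketch using the column names from the question, assuming "round up" means exactly that:

SELECT TOP 1 Price
FROM Prices
WHERE Width >= @inWidth
  AND [Length] >= @inLength
ORDER BY Width, [Length]

An index on (Width, Length) keeps this cheap; whether the right rule is "smallest covering size" or rounding each dimension up independently depends on how the price grid is meant to work.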
I am using SSRS 2014. I'm using a matrix instead of a tablix because it allows me to have dynamic columns. In the example I'm showing, two of the columns use the Sum function to get the total counts per practice. The third column contains percentages, so I averaged for the total, but the value is inaccurate compared to what I would get if I divided the two totals that are sums of the counts. Is there a way for me to specify that I want the total to be the total numerator divided by the total denominator?
Here's an example report output with the percentage column averaged (inaccurate):
PCP         numerator  denominator  percentage
John Smith      66         104          63.46
Tom Jones        4          36          11.11
. . .
Jane Doe         1           1         100
Total          708        1005          72.3
So the 72.3 value is from Avg(metricvalue)
I would like to do this instead: % total = 708/1005, which is about 70.4, a significant difference.
The metricvalue column is what supplies the value for every number above (because it's a matrix).
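In SSRS the usual fix is not to aggregate the percentage at all, but to recompute it in the total cell from the two sums. A sketch of such an expression, where Fields!metric and Fields!metricvalue are guesses at how the dataset distinguishes the measures:

=Sum(IIf(Fields!metric.Value = "numerator", Fields!metricvalue.Value, 0))
 / Sum(IIf(Fields!metric.Value = "denominator", Fields!metricvalue.Value, 0)) * 100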
How can I remove -1.#IND values from a float column in SQL Server 2000 (8.00.2039 SP4)?
So far I've tried:
- Using Query Analyzer, setting the value to a proper float value. I receive the error "A floating point exception occurred".
- Using Query Analyzer, deleting the record with the invalid float. I receive the error "A floating point exception occurred".
- Using Enterprise Manager, setting the value to a proper float value. I receive the error "A floating point exception occurred".
- Using Enterprise Manager, deleting the record with the invalid float. I receive the error "A floating point exception occurred".
- Previously I was able to convert the column to nvarchar, but now this fails because the table is being replicated.
I don't really care what happens to the records, I just need them to go away.
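One workaround sometimes suggested, as an untested sketch (MyTable and FloatCol are placeholders): match the rows by their raw bytes instead of their float value, so the server never has to evaluate the NaN as a number. -1.#IND is the x86 "indefinite" NaN, whose 8-byte pattern is 0xFFF8000000000000.

DELETE FROM MyTable
WHERE CAST(FloatCol AS BINARY(8)) = 0xFFF8000000000000

Since the table is replicated, the delete may still need to be run at the publisher.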
In SQL Server, for a field of datatype FLOAT(8), if I insert a value like 2.62 it saves it as 2.6200000000000001, and likewise for other values. But in the frontend I can see the right values.
STDEV() gives incorrect values with reasonable input.
I have a table filled with GPS readings. I've got a column LATITUDE (FLOAT) with about 20,000 records between 35.6369018 and 35.639890. (Same value to the first 5 digits of precision; what can I say, it's a good GPS.)
Here's what happens when I ask SQL Server ("9.00.1399.06 (IntelX86)") to compute the standard deviation of the latitude:
// Transact-SQL StdDev function:
SELECT STDEV(LATITUDE) FROM GPSHISTORY WHERE STATTIME BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062' AND GPSDEVICEID = 0x004A08BC04050000;
0
// Zero. ZERO??!?!!
// Let's re-implement Std Dev from the definition using other aggregate functions:
DECLARE @AVERAGE FLOAT;
SELECT @AVERAGE = AVG(LATITUDE)
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;

SELECT SQRT(SUM(SQUARE(LATITUDE - @AVERAGE))/COUNT(LATITUDE))
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
6.03401924005392E-06
// That's better. Maybe STDEV is using fixed point arithmetic?!?
SELECT STDEV(10 * LATITUDE)/10 FROM GPSHISTORY WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062' AND GPSDEVICEID = 0x004A08BC04050000;
4.77267753808509E-06
SELECT STDEV(100 * LATITUDE)/100 FROM GPSHISTORY WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062' AND GPSDEVICEID = 0x004A08BC04050000;
1.66904329068838E-05
SELECT STDEV(1000 * LATITUDE)/1000 FROM GPSHISTORY WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062' AND GPSDEVICEID = 0x004A08BC04050000;
8.11904280806654E-06
// The standard deviation should, of course, be linear, e.g.
DECLARE @AVERAGE FLOAT;
SELECT @AVERAGE = AVG(LATITUDE)
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;

SELECT SQRT(SUM(SQUARE(100*(LATITUDE - @AVERAGE)))/COUNT(LATITUDE))/100
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;
6.03401924005389E-06
// Std Dev is a numerically stable computation, although it does require traversing the dataset twice.
//
// This calculation is not being done correctly.
//
// Incidentally, SQRT(VAR(Latitude....)) gives 4.80354E-4, which is also way off.
I will redefine STDEV as a stored procedure similar to the above, but the algorithm used to compute VAR, STDEV, etc. should be reviewed and fixed.
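In the meantime, a workaround that keeps the built-in aggregate usable: the symptom is consistent with catastrophic cancellation in a one-pass sum-of-squares formula (the mean is around 35.64 while the spread is around 1e-6). The standard deviation is unchanged by subtracting a constant, so centering the data first removes nearly all of the cancellation:

DECLARE @AVERAGE FLOAT;
SELECT @AVERAGE = AVG(LATITUDE)
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;

SELECT STDEV(LATITUDE - @AVERAGE)   -- shift-invariant, so the same standard deviation
FROM GPSHISTORY
WHERE GPSDATE BETWEEN '2007-10-23 11:21:00.859' AND '2007-10-23 17:00:00.062'
  AND GPSDEVICEID = 0x004A08BC04050000;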
I am importing a couple of SAS datasets into SQL Server 2008 for a project. The dates are in a float format; they show up as DT_R8 in SSIS. How can I convert these values to SQL Server datetime? I have tried dozens of methods I found online with no success; I keep getting 'Arithmetic overflow error converting expression to data type datetime.' errors.
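For reference, SAS stores dates as days since 1960-01-01 and datetimes as seconds since 1960-01-01, so the conversion depends on which kind of value the column holds (@sas_value below is a placeholder for the DT_R8 column):

DECLARE @sas_value FLOAT
SET @sas_value = 366    -- example value

-- if the column holds SAS date values (days since 1960-01-01):
SELECT DATEADD(DAY, CAST(@sas_value AS INT), '19600101')      -- 1961-01-01

-- if the column holds SAS datetime values (seconds since 1960-01-01):
SELECT DATEADD(SECOND, CAST(@sas_value AS INT), '19600101')   -- 1960-01-01 00:06:06

Feeding a seconds-since-1960 value (a number in the billions) into the DAY version overflows DATETIME, which matches the error quoted above.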
I can't take full credit for this. I want to share the credit with Jeff Moden, who did the important research for this calculation here.
All I did was adapt some old code according to the mantissa finding Jeff made, and optimize it a little.
Some test code:

DECLARE @SomeNumber FLOAT,
        @BinFloat BINARY(8)

SELECT @SomeNumber = -185.6125,
       @BinFloat = CAST(@SomeNumber AS BINARY(8))

SELECT @SomeNumber AS [Original],
       CAST(@SomeNumber AS BINARY(8)) AS [Binary],
       dbo.fnBinaryFloat2Float(CAST(@SomeNumber AS BINARY(8))) AS [Converted],
       @SomeNumber - dbo.fnBinaryFloat2Float(CAST(@SomeNumber AS BINARY(8))) AS [Error]
And here is the code for the function (with the variable initializations that the original post dropped, without which every loop is skipped and the function returns NULL):

CREATE FUNCTION dbo.fnBinaryFloat2Float
(
    @BinaryFloat BINARY(8)
)
RETURNS FLOAT
AS
BEGIN
    DECLARE @Part TINYINT,       -- current byte of the 8-byte float
            @PartValue TINYINT,  -- value of that byte
            @Mask TINYINT,       -- bit mask walked down through each byte
            @Mantissa FLOAT,
            @Exponent SMALLINT,
            @Bit TINYINT,        -- mantissa bit number (1 = most significant)
            @Ln2 FLOAT,
            @BigValue BIGINT

    -- The mantissa starts at 1 (the implicit leading bit of a normalized
    -- IEEE 754 double) and the 11-bit biased exponent is read from byte 1
    -- (low 7 bits) and the high nibble of byte 2.
    SELECT  @Part = 1,
            @Bit = 1,
            @Mantissa = 1,
            @Ln2 = LOG(2.0),
            @BigValue = CAST(@BinaryFloat AS BIGINT),
            @Exponent = (CAST(SUBSTRING(@BinaryFloat, 1, 1) AS TINYINT) & 127) * 16
                      + CAST(SUBSTRING(@BinaryFloat, 2, 1) AS TINYINT) / 16

    -- Walk bytes 2 to 8, which hold the 52 mantissa bits; byte 2 contributes
    -- only its low nibble, hence the smaller starting mask.
    WHILE @Part < 8
        BEGIN
            SELECT  @Part = @Part + 1,
                    @PartValue = CAST(SUBSTRING(@BinaryFloat, @Part, 1) AS TINYINT),
                    @Mask = CASE WHEN @Part = 2 THEN 8 ELSE 128 END

            WHILE @Mask > 0
                BEGIN
                    -- Each set bit n adds 2^-n to the mantissa: EXP(-n * LN(2)) = 2^-n
                    IF @PartValue & @Mask > 0
                        SET @Mantissa = @Mantissa + EXP(-@Bit * @Ln2)

                    SELECT  @Bit = @Bit + 1,
                            @Mask = @Mask / 2
                END
        END

    RETURN SIGN(@BigValue) * @Mantissa * POWER(CAST(2 AS FLOAT), @Exponent - 1023)
END

Thanks again Jeff!
Does anyone out there know if you can take a user like domain1\Username and, without dropping and recreating it, change it to domain2\username? I am using SQL 7.
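There is no in-place rename that I know of on SQL 7; a hedged sketch of the usual workaround for Windows logins is to revoke the old login and grant the new one (any database users mapped to the old SID then need to be re-linked or recreated):

EXEC sp_revokelogin 'domain1\Username'
EXEC sp_grantlogin  'domain2\username'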
So, I kinda taught myself SQL on the net in a couple of weeks... I've been practicing commands and whatnot on a database at a company I work with. Here's what I need some help with:
I'm creating a batch file that connects to the SQL server and runs a .sql script which checks whether a store's data has come in or not, and if it hasn't, launches the job to process that store. Now, after it launches the job, the only step of the job is to process what it can before it fails or completes, and most of the time it fails after 1-10 minutes. A big variance. So my question is: how do I make the script wait for the job to finish before proceeding to the next command, which checks a different store and launches another script?
If you want the script, I'll mod out some of the actual names for security purposes and post it for you guys. Any help is appreciated; I've searched and hacked Google for an answer but found nothing, so I decided to post this. Thanks!
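One rough way to wait, assuming SQL Server 2005 or later and rights to read msdb (the job name is a placeholder): after starting the job, poll msdb.dbo.sysjobactivity until the run that started gets a stop time.

DECLARE @done BIT
SET @done = 0
WHILE @done = 0
BEGIN
    WAITFOR DELAY '00:00:30'    -- poll every 30 seconds
    IF NOT EXISTS (SELECT 1
                   FROM msdb.dbo.sysjobactivity a
                   JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id
                   WHERE j.name = 'Process Store 42'    -- placeholder job name
                     AND a.start_execution_date IS NOT NULL
                     AND a.stop_execution_date IS NULL)
        SET @done = 1
END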
Hi everybody. I want to know what RDBMS is used in the real world for keeping people's accounts in banks (institutions for saving and borrowing money). Do they use SQL Server, Oracle, or something else?
Whilst digging about in the system tables I noticed two curious things. When creating an application that returned all tables, columns and fields, I had to use sys.systables, and when returning the information from this view I received two identical columns: 1. myColumn of type sysname and 2. myColumn of type varchar. The point being: what is this column for, and why is it there? It was quite annoying to encounter when returning the dataset in my app. My other point is: while checking out what I could do with sysname, I noted the xml type has a capacity of -1, whereas all other data types have a capacity between 2 and 8000. Could someone clarify this for me, as it seems a single xml document could be inserted up to 2 GB? Regards.
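On the -1: in the SQL 2005 catalog views a max_length of -1 marks the "unlimited" types (xml and the MAX variants of varchar/nvarchar/varbinary), each of which can hold up to 2 GB; every other type reports its size in bytes. A quick way to see it:

SELECT name, max_length
FROM sys.types
WHERE name IN ('xml', 'sysname', 'varchar')   -- xml reports -1, i.e. up to 2 GB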
So I have a person who is adamant in telling me that SQL Server does not run on Windows XP.
Now, I have already done all the research on this (i.e. the SQL Server 2000 product page / requirements) and know the answer, but they insist on asking the question, so here it is.....
'Will SQL Server run on Windows XP'
A simple YES or NO will suffice; however, if you want to explain the answer (if it requires one ;) ), please feel free.
Can someone try and explain to me how a SQL query can run fine and return data, yet when I try to run it in dtswizard, the "preview" view shows the data but actually running it returns zero rows?
This is on SQL 2005 Express, and since I can't get dtsrun or dtexec to work, I'm using AutoIt to simulate my actually stepping through the process. Very kludgy, but "when all you've got is a hammer...."
Dear Group

Something that I have always found quite difficult is finding a bug in a script, e.g. in a stored procedure, as often it will indicate the error in a completely different line from where the error is actually located. Take the following script:

ALTER PROCEDURE [dbo].[fra_UpdateCompany]
    @CompanyID int, @CompanyName varchar(50), @Status int, @TelNo varchar(50),
    @FaxNo varchar(50), @Email varchar(50), @Web varchar(50),
    @OfficeType int, @Comment varchar(512)
AS
DECLARE @CommentOrg varchar(512)
IF (LEN(@CompanyName) < 1) BEGIN SET @CompanyName = NULL END
IF (LEN(@TelNo) < 1) BEGIN SET @TelNo = NULL END
IF (LEN(@FaxNo) < 1) BEGIN SET @FaxNo = NULL END
IF (LEN(@Email) < 1) BEGIN SET @Email = NULL END
IF (LEN(@Web) < 1) BEGIN SET @Web = NULL END
IF (LEN(@Comment) < 1) BEGIN SET @Comment = NULL END
SET @CommentOrg = (SELECT Comment FROM fra_company WHERE CompanyID = @CompanyID)
SET @Comment = (@Comment + '' + @CommentOrg)
-- UPDATE COMPANY
UPDATE fra_company SET CompanyName = @CompanyName, Status = @Status,
    TelNo = @TelNo, FaxNo = @FaxNo, Email = @Email, Web = @Web,
    OfficeType = @OfficeType, Comment = LTRIM(@Comment)
WHERE CompanyID = @CompanyID

It throws an error in line 17, "IF (LEN(@TelNo) < 1) BEGIN SET @TelNo = NULL END", when the error is actually in line 23, "SET @CommentOrg = (SELECT Comment from fra_company WHERE CompanyID = @CompanyID)". Error message:

Msg 137, Level 15, State 2, Procedure fra_UpdateCompany, Line 17
Must declare the scalar variable "@ContactID".

Why is that? I had hoped that it had improved from SQL 7.0 to 2005, but it's still the same vague thing.

Thanks for sharing your expertise and wisdom on this,
Martin
Dear group,

If someone could give me an idea what is going on in one of our databases, this would really, really be helpful. We have two tables with around 2 / 3 million rows. These tables have no key and no ID. (This major design flaw will be overcome in some later version of the application software working on this DB, but right now I have to live with it.)

Now for the funny bit:

1) I open one window in the Query Analyzer and write some code like: BEGIN TRANSACTION INSERT INTO TABLE COMMIT
2) In another window I write "SELECT COUNT(*) FROM TABLE"

If I perform the insert and then select count(*), the row count is incremented by two, whereas the insert statement said "1 row(s) modified".

DBCC gives no errors. DBCC gives the number of rows as 2 million; SELECT COUNT(*) on the same table gives 3 million rows. Exporting the data, truncating the table and re-importing the data gives no result; right now the DTS status is 203 and the machine is "thinking".

Is there any possibility to check the "integrity" of the table? This problem is on the production machine, but right now I am working on a copy which was propagated with the backup / restore mechanism.

Any hint would be very helpful.

Thanks and greetings
Uli
Hi,

I have been running some queries against a table in my database and have noted an odd (at least it seems odd to me) performance issue. The table has approximately 5 million rows and includes the following columns:

DocID (INTEGER, PRIMARY KEY, CLUSTERED)
IsRecord (INTEGER, NONCLUSTERED)
Title (VARCHAR(255), NONCLUSTERED)

If I issue the following query:

SELECT DocID, IsRecord FROM DocTable WHERE Title LIKE '%process%'

it takes about 23 seconds to return the 481 hits. The execution plan shows a non-clustered index scan being performed on the Title index (returning 481 rows) and a non-clustered index scan on the IsRecord index (returning 4.9 million rows). These are then merged in a hash match/inner join operation. The Title index scan has an estimated row size of 41 and an I/O cost of 9.82 (cost is 27%). The IsRecord index scan has an estimated row size of 33 and an I/O cost of 6.32 (cost is 21%). The hash match accounts for a further 52% of the cost, with the SELECT at the head of the plan listed as 0% cost.

If I issue the following query:

SELECT DocID, Title FROM DocTable WHERE Title LIKE '%process%'

it takes about 12 seconds to return the 481 hits and consists solely of a non-clustered index scan of the Title index. Again the Title index scan has an estimated row size of 41 and an I/O cost of 9.82, and its cost is listed as 78%. The SELECT at the head of the plan is attributed the other 22% of the cost.

All this is fine. However, when I issue the following query:

SELECT DocID, Title, IsRecord FROM DocTable WHERE Title LIKE '%process%'

it takes 1 minute 50 seconds to run. The execution plan shows that a clustered index scan is occurring, and this accounts for 96% of the cost. The estimated row size is 463 and the I/O cost is 111.

What on earth is going on here? I can understand the need to scan the Title index because of the wildcards, but why on earth would the query perform a scan of the clustered (primary key) index? And what is going on with the row size and I/O cost? All the indexes and statistics are up to date, so I am at a complete loss to explain what is happening here. Can anyone explain why the 3rd query is so much slower (and possibly suggest a way to improve the performance)?

Thanks
Paul Mateer
Meridio Limited
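For the record, one likely fix for the third query: widen the Title index so it covers IsRecord as well, and the clustered scan is never needed (DocID rides along for free as the clustering key). On SQL 2005 or later that's an INCLUDE; on SQL 2000 a composite index on (Title, IsRecord) achieves much the same. The index name here is made up:

CREATE NONCLUSTERED INDEX IX_DocTable_Title_IsRecord
ON DocTable (Title)
INCLUDE (IsRecord)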
Looking at BOL's temp tables help, I discover that a local temp table (which I want to only have life within my stored proc) SHOULD be visible to all (child) stored procs called by the papa stored proc.
However, the following code works just peachy when I use a GLOBAL temp table (i.e., ##MyTempTbl) but fails when I use a local temp table (i.e., #MyTempTable). Through trial and error, and careful weeding efforts, I know that the error I get on the local version is coming from the xp_sendmail call. The error I get is: ODBC error 208 (42S02) Invalid object name '#MyTempTbl'.
Here is the code that works:

SET NOCOUNT ON

CREATE TABLE ##MyTempTbl (SeqNo int identity, MyWords varchar(1000))
INSERT ##MyTempTbl values ('Put your long message here.')
INSERT ##MyTempTbl values ('Put your second long message here.')
INSERT ##MyTempTbl values ('put your really, really LONG message (yeah, every guy says his message is the longest...whatever!')
DECLARE @cmd varchar(256)
DECLARE @LargestEventSize int
DECLARE @Width int, @Msg varchar(128)
SELECT @LargestEventSize = Max(Len(MyWords)) FROM ##MyTempTbl

SET @cmd = 'SELECT Cast(MyWords AS varchar(' + CONVERT(varchar(5), @LargestEventSize) + ')) FROM ##MyTempTbl order by SeqNo'
SET @Width = @LargestEventSize + 1
SET @Msg = 'Here is the junk you asked about' + CHAR(13) + '----------------------------'
EXECUTE Master.dbo.xp_sendmail 'YoMama@WhoKnows.com',
    @query = @cmd,
    @no_header = 'TRUE',
    @width = @Width,
    @dbuse = 'MyDB',
    @subject = 'none of your darn business',
    @message = @Msg
DROP TABLE ##MyTempTbl
The only thing I change to make it fail is the table name, change it from ##MyTempTbl to #MyTempTbl, and it dashes the email hopes of the stored procedure upon the jagged rocks of electronic despair.
Any insight anyone? Or is BOL just full of...well..."stuff"?
I have the following code that retrieves the current value of the item price; however it always rounds up. If I manually enter a return value like so:

return (decimal)12.47;

it returns the correct value. However, if I set it with an expression like this:

return (decimal)arParam[1].Value;

it rounds the number up. How can I get it to not round up when inserting a value based on an expression?

public decimal GetCreditPrice(string CustomerSecurityKey)
{
    try
    {
        System.Data.SqlClient.SqlParameter prmCrnt;
        System.Data.SqlClient.SqlParameter[] arParam = new System.Data.SqlClient.SqlParameter[2];

        prmCrnt = new System.Data.SqlClient.SqlParameter("@CustomerSecurityKey", SqlDbType.VarChar, 25);
        prmCrnt.Value = CustomerSecurityKey;
        arParam[0] = prmCrnt;

        prmCrnt = new System.Data.SqlClient.SqlParameter("@Price", SqlDbType.Decimal);
        // Likely culprit: a SqlDbType.Decimal parameter defaults to Scale = 0,
        // so the output value comes back rounded to a whole number. Setting
        // prmCrnt.Precision and prmCrnt.Scale to match the stored procedure's
        // DECIMAL definition (e.g. 18 and 2) should preserve the cents.
        prmCrnt.Direction = ParameterDirection.Output;
        arParam[1] = prmCrnt;

        SqlHelper.ExecuteNonQuery(stConnection, CommandType.StoredProcedure, "GetCreditPrice", arParam);

        return (decimal)arParam[1].Value;
I have the following statement and I want to round the final value (gbkmut.bdr_hfl) to two decimal places.
begin
    UPDATE gbkmut
    SET gbkmut.bdr_hfl = gbkmut.bdr_hfl - (SELECT SUM(inserted.bdr_hfl)
                                           FROM inserted
                                           WHERE inserted.freefield3 = 'Rebate')
    WHERE reknr = ' 1040'
end
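If a two-decimal result is all that's needed, wrapping the whole expression in ROUND should do it (same table and column names as above). Note that if bdr_hfl is FLOAT rather than DECIMAL, the stored result can still pick up binary representation noise after the ROUND:

begin
    UPDATE gbkmut
    SET gbkmut.bdr_hfl = ROUND(gbkmut.bdr_hfl - (SELECT SUM(inserted.bdr_hfl)
                                                 FROM inserted
                                                 WHERE inserted.freefield3 = 'Rebate'), 2)
    WHERE reknr = ' 1040'
end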
Looking for a way to round numbers to a specified number of significant digits. The ROUND function rounds to a specific decimal place but does not take into account the level of significance of the remaining numbers. (i.e. ROUND(7.12435,2)=7.12000) The type of function I need would round the number in the following manner: SigFigRound(7.12435,3)=7.12 or SigFigRound(7.12345,1)=7.
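A sketch of such a function: FLOOR(LOG10(ABS(x))) gives the position of the leading digit, so shifting ROUND's decimal-places argument by that amount rounds to significant figures instead (zero is special-cased because LOG10 is undefined there):

CREATE FUNCTION dbo.SigFigRound
(
    @Value FLOAT,
    @SigFigs INT
)
RETURNS FLOAT
AS
BEGIN
    IF @Value = 0 RETURN 0    -- LOG10 is undefined at zero

    RETURN ROUND(@Value, @SigFigs - 1 - CAST(FLOOR(LOG10(ABS(@Value))) AS INT))
END

Usage: SELECT dbo.SigFigRound(7.12435, 3) returns 7.12, and SELECT dbo.SigFigRound(7.12345, 1) returns 7.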
I have rounding problems when editing or inserting a new record in float-type fields. E.g. I have a cursor running an aggregate SQL statement with a calculated field SUM(DFactor*Cost). DFactor takes the values -1 and 1, and the values of Cost in the table have 2 digits. I get these values in a variable, e.g. @FCost, then I round it: @FCost = ROUND(@FCost, 2). When I insert this value into a new record, again I'm using ROUND(@FCost, 2). However, in a lot of records many extra digits are stored. I have the same problem when trying to insert values from MS Access by ODBC. Although I'm using CLng(FCost*100)/100 in order to have 2 digits, a lot of extra decimal values are created. What is the best practice in order to solve this problem? Regards, Manolis
I have just converted some Access VBA code to a sproc. I'm finding that for some reason the rounding is different. E.g. ROUND(17 * 97995 / 1000, 2), where 1665.915 is the value before rounding:
SQL sproc: 1665.91 (rounds down)
ADP VBA: 1665.92 (rounds up)
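A plausible explanation, with a sketch to test it: if any operand is FLOAT, 1665.915 is actually stored as a value a hair below 1665.915, so ROUND(..., 2) legitimately gives 1665.91; with exact DECIMAL arithmetic the intermediate value is exactly 1665.915 and rounds up. (VBA's Round also uses banker's rounding, which can differ on exact .5 cases, so it's worth checking which effect is in play.)

SELECT ROUND(CAST(17 AS FLOAT) * 97995 / 1000, 2) AS float_version,             -- 1665.91
       ROUND(CAST(17 AS DECIMAL(10,4)) * 97995 / 1000, 2) AS decimal_version    -- 1665.92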