What is the most effective way for float and double comparison?

What would be the most efficient way to compare two double or two float values?
Simply doing this is not correct:

bool CompareDoubles1 (double A, double B)
{
    return A == B;
}

But something like:

bool CompareDoubles2 (double A, double B)
{
    double diff = A - B;
    return (diff < EPSILON) && (-diff < EPSILON);
}

seems to waste processing. Does anyone know a smarter float comparer?
Be extremely careful using any of the other suggestions. It all depends on context.

I have spent a long time tracing bugs in a system that presumed a==b if |a-b|<epsilon. The underlying problems were:

- The implicit presumption in an algorithm that if a==b and b==c then a==c.
- Using the same epsilon for a line measured in inches and a line measured in mils (.001 inch). That is a==b but 1000a!=1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPs.)
- The use of the same epsilon for both the cosine of angles and the length of lines!
- Using such a compare function to sort items in a collection. (In this case using the built-in C++ operator == for doubles produced correct results.)

Like I said: it all depends on context and the expected size of a and b.

By the way, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function, but only if the expected values are less than 1. (This is in response to @cdv's answer...)
Additionally, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases), your arithmetic will be correct. For example 4.0/2.0 will be the same as 1.0+1.0. This holds as long as you do not do things that result in fractions (4.0/3.0) and do not go outside the range of an int.
The comparison with an epsilon value is what most people do (even in game programming).

You should change your implementation a little though:

bool AreSame(double a, double b)
{
    return fabs(a - b) < EPSILON;
}

Edit: Christer has added a stack of great info on this topic in a recent blog post. Enjoy.
I found that the Google C++ Testing Framework contains a nice cross-platform template-based implementation of AlmostEqual2sComplement which works on both doubles and floats. Given that it is released under the BSD license, using it in your own code should be no problem, as long as you retain the license. I extracted the below code from http://code.google.com/p/googletest/source/browse/trunk/include/gtest/internal/gtest-internal.h and added the license on top.

Be sure to #define GTEST_OS_WINDOWS to some value (or to change the code where it's used to something that fits your codebase -- it's BSD licensed after all).

Usage example:

double left  = // something
double right = // something
const FloatingPoint<double> lhs(left), rhs(right);

if (lhs.AlmostEquals(rhs)) {
    // they're equal!
}

Here's the code:
// Copyright 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
//
// The Google C++ Testing Framework (Google Test)

// This template class serves as a compile-time function from size to
// type.  It maps a size in bytes to a primitive type with that
// size. e.g.
//
//   TypeWithSize<4>::UInt
//
// is typedef-ed to be unsigned int (unsigned integer made up of 4
// bytes).
//
// Such functionality should belong to STL, but I cannot find it
// there.
//
// Google Test uses this class in the implementation of floating-point
// comparison.
//
// For now it only handles UInt (unsigned int) as that's all Google Test
// needs.  Other types can be easily added in the future if need
// arises.
template <size_t size>
class TypeWithSize {
 public:
  // This prevents the user from using TypeWithSize<N> with incorrect
  // values of N.
  typedef void UInt;
};

// The specialization for size 4.
template <>
class TypeWithSize<4> {
 public:
  // unsigned int has size 4 in both gcc and MSVC.
  //
  // As base/basictypes.h doesn't compile on Windows, we cannot use
  // uint32, uint64, and etc here.
  typedef int Int;
  typedef unsigned int UInt;
};

// The specialization for size 8.
template <>
class TypeWithSize<8> {
 public:
#if GTEST_OS_WINDOWS
  typedef __int64 Int;
  typedef unsigned __int64 UInt;
#else
  typedef long long Int;            // NOLINT
  typedef unsigned long long UInt;  // NOLINT
#endif  // GTEST_OS_WINDOWS
};

// This template class represents an IEEE floating-point number
// (either single-precision or double-precision, depending on the
// template parameters).
//
// The purpose of this class is to do more sophisticated number
// comparison.  (Due to round-off error, etc, it's very unlikely that
// two floating-points will be equal exactly.  Hence a naive
// comparison by the == operation often doesn't work.)
//
// Format of IEEE floating-point:
//
//   The most-significant bit being the leftmost, an IEEE
//   floating-point looks like
//
//     sign_bit exponent_bits fraction_bits
//
//   Here, sign_bit is a single bit that designates the sign of the
//   number.
//
//   For float, there are 8 exponent bits and 23 fraction bits.
//
//   For double, there are 11 exponent bits and 52 fraction bits.
//
//   More details can be found at
//   http://en.wikipedia.org/wiki/IEEE_floating-point_standard.
//
// Template parameter:
//
//   RawType: the raw floating-point type (either float or double)
template <typename RawType>
class FloatingPoint {
 public:
  // Defines the unsigned integer type that has the same size as the
  // floating point number.
  typedef typename TypeWithSize<sizeof(RawType)>::UInt Bits;

  // Constants.

  // # of bits in a number.
  static const size_t kBitCount = 8*sizeof(RawType);

  // # of fraction bits in a number.
  static const size_t kFractionBitCount =
    std::numeric_limits<RawType>::digits - 1;

  // # of exponent bits in a number.
  static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount;

  // The mask for the sign bit.
  static const Bits kSignBitMask = static_cast<Bits>(1) << (kBitCount - 1);

  // The mask for the fraction bits.
  static const Bits kFractionBitMask =
    ~static_cast<Bits>(0) >> (kExponentBitCount + 1);

  // The mask for the exponent bits.
  static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask);

  // How many ULP's (Units in the Last Place) we want to tolerate when
  // comparing two numbers.  The larger the value, the more error we
  // allow.  A 0 value means that two numbers must be exactly the same
  // to be considered equal.
  //
  // The maximum error of a single floating-point operation is 0.5
  // units in the last place.  On Intel CPU's, all floating-point
  // calculations are done with 80-bit precision, while double has 64
  // bits.  Therefore, 4 should be enough for ordinary use.
  //
  // See the following article for more details on ULP:
  // http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm.
  static const size_t kMaxUlps = 4;

  // Constructs a FloatingPoint from a raw floating-point number.
  //
  // On an Intel CPU, passing a non-normalized NAN (Not a Number)
  // around may change its bits, although the new value is guaranteed
  // to be also a NAN.  Therefore, don't expect this constructor to
  // preserve the bits in x when x is a NAN.
  explicit FloatingPoint(const RawType& x) { u_.value_ = x; }

  // Static methods

  // Reinterprets a bit pattern as a floating-point number.
  //
  // This function is needed to test the AlmostEquals() method.
  static RawType ReinterpretBits(const Bits bits) {
    FloatingPoint fp(0);
    fp.u_.bits_ = bits;
    return fp.u_.value_;
  }

  // Returns the floating-point number that represent positive infinity.
  static RawType Infinity() {
    return ReinterpretBits(kExponentBitMask);
  }

  // Non-static methods

  // Returns the bits that represents this number.
  const Bits &bits() const { return u_.bits_; }

  // Returns the exponent bits of this number.
  Bits exponent_bits() const { return kExponentBitMask & u_.bits_; }

  // Returns the fraction bits of this number.
  Bits fraction_bits() const { return kFractionBitMask & u_.bits_; }

  // Returns the sign bit of this number.
  Bits sign_bit() const { return kSignBitMask & u_.bits_; }

  // Returns true iff this is NAN (not a number).
  bool is_nan() const {
    // It's a NAN if the exponent bits are all ones and the fraction
    // bits are not entirely zeros.
    return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);
  }

  // Returns true iff this number is at most kMaxUlps ULP's away from
  // rhs.  In particular, this function:
  //
  //   - returns false if either number is (or both are) NAN.
  //   - treats really large numbers as almost equal to infinity.
  //   - thinks +0.0 and -0.0 are 0 DLP's apart.
  bool AlmostEquals(const FloatingPoint& rhs) const {
    // The IEEE standard says that any comparison operation involving
    // a NAN must return false.
    if (is_nan() || rhs.is_nan()) return false;

    return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)
        <= kMaxUlps;
  }

 private:
  // The data type used to store the actual floating-point number.
  union FloatingPointUnion {
    RawType value_;  // The raw floating-point number.
    Bits bits_;      // The bits that represent the number.
  };

  // Converts an integer from the sign-and-magnitude representation to
  // the biased representation.  More precisely, let N be 2 to the
  // power of (kBitCount - 1), an integer x is represented by the
  // unsigned number x + N.
  //
  // For instance,
  //
  //   -N + 1 (the most negative number representable using
  //          sign-and-magnitude) is represented by 1;
  //   0      is represented by N; and
  //   N - 1  (the biggest number representable using
  //          sign-and-magnitude) is represented by 2N - 1.
  //
  // Read http://en.wikipedia.org/wiki/Signed_number_representations
  // for more details on signed number representations.
  static Bits SignAndMagnitudeToBiased(const Bits &sam) {
    if (kSignBitMask & sam) {
      // sam represents a negative number.
      return ~sam + 1;
    } else {
      // sam represents a positive number.
      return kSignBitMask | sam;
    }
  }

  // Given two numbers in the sign-and-magnitude representation,
  // returns the distance between them as an unsigned number.
  static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1,
                                                     const Bits &sam2) {
    const Bits biased1 = SignAndMagnitudeToBiased(sam1);
    const Bits biased2 = SignAndMagnitudeToBiased(sam2);
    return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1);
  }

  FloatingPointUnion u_;
};
Edit: This post is 4 years old. It's probably still valid, and the code is fine, but some people have found improvements. Best to get the latest version of AlmostEquals right from the Google Test source code, rather than the one I pasted here.
Comparing floating point numbers depends on the context. Since even changing the order of operations can produce different results, it is important to know how "equal" you want the numbers to be.

Comparing floating point numbers by Bruce Dawson is a good place to start when looking at floating point comparison.

The following definitions are from The Art of Computer Programming by Knuth:

bool approximatelyEqual(float a, float b, float epsilon)
{
    return fabs(a - b) <= ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool essentiallyEqual(float a, float b, float epsilon)
{
    return fabs(a - b) <= ( (fabs(a) > fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool definitelyGreaterThan(float a, float b, float epsilon)
{
    return (a - b) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}

bool definitelyLessThan(float a, float b, float epsilon)
{
    return (b - a) > ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon);
}
Of course, selecting epsilon depends on the context and determines how equal you want the numbers to be.

Another approach to comparing floating point numbers is to look at the ULP (units in the last place) of the numbers. While not dealing specifically with comparisons, the paper What Every Computer Scientist Should Know About Floating Point Numbers is a good resource for understanding how floating point works and what the pitfalls are, including what ULP is.

For a more in-depth approach, see Comparing Floating Point Numbers. Here is the code snippet from that link:

// Usable AlmostEqual function
bool AlmostEqual2sComplement(float A, float B, int maxUlps)
{
    // Make sure maxUlps is non-negative and small enough that the
    // default NAN won't compare as equal to anything.
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);

    int aInt = *(int*)&A;
    // Make aInt lexicographically ordered as a twos-complement int
    if (aInt < 0)
        aInt = 0x80000000 - aInt;

    // Make bInt lexicographically ordered as a twos-complement int
    int bInt = *(int*)&B;
    if (bInt < 0)
        bInt = 0x80000000 - bInt;

    int intDiff = abs(aInt - bInt);
    if (intDiff <= maxUlps)
        return true;

    return false;
}
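One caveat with the snippet above: the *(int*)&A cast violates C++ strict-aliasing rules (the code predates common enforcement of them). A sketch of the same ULP idea that moves the bits with memcpy instead, which is well-defined; the function name AlmostEqualUlps is mine, not from the article:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Same algorithm as AlmostEqual2sComplement, but the float's bits are
// copied into an integer with memcpy rather than a pointer cast.
bool AlmostEqualUlps(float A, float B, int maxUlps) {
    // Keep maxUlps small enough that NANs won't compare equal to anything.
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);

    std::int32_t aInt, bInt;
    std::memcpy(&aInt, &A, sizeof(A));
    std::memcpy(&bInt, &B, sizeof(B));

    // Remap negative floats so the bit patterns are lexicographically
    // ordered as two's-complement integers.
    if (aInt < 0) aInt = 0x80000000 - aInt;
    if (bInt < 0) bInt = 0x80000000 - bInt;

    return std::abs(aInt - bInt) <= maxUlps;
}
```

Usage: AlmostEqualUlps(1.0f, std::nextafter(1.0f, 2.0f), 4) is true (one ULP apart), while AlmostEqualUlps(1.0f, 1.1f, 4) is false.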
The portable way to get epsilon in C++ is

#include <limits>
std::numeric_limits<double>::epsilon()

Then the comparison function becomes

#include <cmath>
#include <limits>

bool AreSame(double a, double b)
{
    return std::fabs(a - b) < std::numeric_limits<double>::epsilon();
}
Realizing this is an old thread, but this article is one of the most straightforward ones I have found on comparing floating point numbers, and if you want to explore more it has more detailed references as well; its main site covers the complete range of issues in dealing with floating point numbers: The Floating-Point Guide: Comparison.

We can find a somewhat more practical article in Floating-point tolerances revisited, which notes that there is an absolute tolerance test, which in C++ boils down to:

bool absoluteToleranceCompare(double x, double y)
{
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon();
}

and a relative tolerance test:

bool relativeToleranceCompare(double x, double y)
{
    double maxXY = std::max( std::fabs(x), std::fabs(y) );
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon() * maxXY;
}

The article notes that the absolute test fails when x and y are large, and the relative test fails when they are small. Assuming the absolute and relative tolerances are the same, the combined test would look like this:

bool combinedToleranceCompare(double x, double y)
{
    double maxXYOne = std::max( { 1.0, std::fabs(x), std::fabs(y) } );
    return std::fabs(x - y) <= std::numeric_limits<double>::epsilon() * maxXYOne;
}
The code you wrote is bugged:

return (diff < EPSILON) && (-diff > EPSILON);

The correct code would be:

return (diff < EPSILON) && (diff > -EPSILON);

(...and yes, this is different)

I wonder whether fabs wouldn't make you lose lazy evaluation in some cases. I'd say it depends on the compiler. You might want to try both. If they are equivalent on average, take the implementation with fabs.

If you have some info on which of the two floats is more likely to be bigger than the other, you can play on the order of the comparison to take better advantage of the lazy evaluation.

Finally, you might get better results by inlining this function. Not likely to improve much though...

Edit: OJ, thanks for correcting your code. I erased my comment accordingly.
fabs(a - b) < EPSILON;

This is fine if:

- the order of magnitude of your inputs doesn't change much
- very small numbers of opposite signs can be treated as equal

But otherwise it will lead you into trouble. Double precision numbers have a resolution of about 16 decimal places. If the two numbers you are comparing are larger in magnitude than EPSILON*1.0E16, then you might as well be saying:

return a == b;

I will examine a different approach that assumes you need to worry about the first issue and that the second is fine for your application. A solution would be something like:

#define VERYSMALL  (1.0E-150)
#define EPSILON    (1.0E-8)
bool AreSame(double a, double b)
{
    double absDiff = fabs(a - b);
    if (absDiff < VERYSMALL)
    {
        return true;
    }

    double maxAbs  = max(fabs(a), fabs(b));
    return (absDiff / maxAbs) < EPSILON;
}
As others have pointed out, using a fixed-exponent epsilon (such as 0.0000001) will be useless for values far away from the epsilon value. For example, if your two values are 10000.000977 and 10000, then there are NO 32-bit floating-point values between these two numbers -- 10000 and 10000.000977 are as close as you can possibly get without being bit-for-bit identical. Here, an epsilon of less than 0.0009 is meaningless; you might as well use the straight equality operator.

Likewise, as the two values approach epsilon in size, the relative error grows to 100%.

Thus, trying to mix a fixed-point number such as 0.00001 with floating-point values (where the exponent is arbitrary) is a pointless exercise. This will only ever work if you can be assured that the operand values lie within a narrow domain (that is, close to some specific exponent), and if you properly select an epsilon value for that particular test. If you pull a number out of the air ("Hey! 0.00001 is small, so that must be good!"), you are doomed to numerical errors. I have spent plenty of time debugging bad numerical code where some poor schmuck tossed in random epsilon values to make yet another test case work.

If you do numerical programming of any kind and believe you need to reach for fixed-point epsilons, read the article on comparing floating-point numbers.
Comparison of floating point numbers

General-purpose comparison of floating-point numbers is generally meaningless. How to compare really depends on the problem at hand. In many problems, numbers are sufficiently discretized to allow comparing them within a given tolerance. Unfortunately, there are just as many problems where such a trick doesn't really work. For one example, consider working with a Heaviside (step) function of a number in question (digital stock options come to mind) when your observations are very close to the barrier. Performing tolerance-based comparison wouldn't do much good, as it would effectively shift the issue from the original barrier to two new ones. Again, there is no general-purpose solution for such problems, and the particular solution might require going as far as changing the numerical method in order to achieve stability.
I ended up spending quite some time going through the material in this great thread. I doubt everyone wants to spend so much time, so I will highlight a summary of what I learned and of the solution I implemented.

Quick Summary

- Float comparison has two problems: limited precision, and the fact that the meaning of "approximately zero" depends on context (see next point).
- Is 1E-8 approximately the same as 1E-16? If you are looking at noisy sensor data then probably yes, but if you are doing molecular simulation then maybe not! Bottom line: you always need to think of the tolerance value in the context of a specific function call, rather than making it a generic app-wide hard-coded constant.
- For general library functions, it's still good to have a parameter with a default tolerance. A typical choice is numeric_limits::epsilon(), which is the same as FLT_EPSILON in float.h. This is however problematic, because an epsilon that works for comparing values like 1.0 does not work for values like 1E9; FLT_EPSILON is defined relative to 1.0.
- The obvious implementation to check whether a number is within tolerance is fabs(a-b) <= epsilon, but this doesn't work because the default epsilon is defined relative to 1.0. We need to scale epsilon up or down in terms of a and b.
- There are two solutions to this problem: either you set epsilon proportional to max(a,b), or you get the next representable numbers around a and then see if b falls into that range. The former is called the "relative" method and the latter the ULP method.
- Both methods actually fail anyway when comparing with 0. In this case, the application must supply the correct tolerance.

Utility functions implementation (C++11)
// Implements the relative method - do not use for comparing with zero.
// Use this most of the time; tolerance needs to be meaningful in your context.
template<typename TReal>
static bool isApproximatelyEqual(TReal a, TReal b,
    TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = std::fabs(a - b);
    if (diff <= tolerance)
        return true;

    if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;

    return false;
}

// Supply a tolerance that is meaningful in your context.
// For example, the default tolerance may not work if you are comparing double with float.
template<typename TReal>
static bool isApproximatelyZero(TReal a,
    TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    if (std::fabs(a) <= tolerance)
        return true;

    return false;
}

// Use this when you want to be on the safe side.
// For example, don't start the rover unless the signal is above 1.
template<typename TReal>
static bool isDefinitelyLessThan(TReal a, TReal b,
    TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = a - b;
    if (diff < tolerance)
        return true;

    if (diff < std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;

    return false;
}

template<typename TReal>
static bool isDefinitelyGreaterThan(TReal a, TReal b,
    TReal tolerance = std::numeric_limits<TReal>::epsilon())
{
    TReal diff = a - b;
    if (diff > tolerance)
        return true;

    if (diff > std::fmax(std::fabs(a), std::fabs(b)) * tolerance)
        return true;

    return false;
}

// Implements the ULP method.
// Use this when you are only concerned about floating point precision issues.
// For example, if you want to see if a is 1.0 by checking if it's within
// 10 closest representable floating point numbers around 1.0.
template<typename TReal>
static bool isWithinPrecisionInterval(TReal a, TReal b, unsigned int interval_size = 1)
{
    TReal min_a = a - (a - std::nextafter(a, std::numeric_limits<TReal>::lowest())) * interval_size;
    TReal max_a = a + (std::nextafter(a, std::numeric_limits<TReal>::max()) - a) * interval_size;

    return min_a <= b && max_a >= b;
}
Unfortunately, even your "wasteful" code is incorrect. EPSILON is the smallest value that could be added to 1.0 and change its value. The value 1.0 is very important — larger numbers do not change when added to EPSILON. Now, you can scale this value to the numbers you are comparing to tell whether they are different or not. The correct expression for comparing two doubles is:
if (fabs(a - b) <= DBL_EPSILON * fmax(fabs(a), fabs(b))) { // ... }
This is at a minimum. In general, though, you would want to account for noise in your calculations and ignore a few of the least significant bits, so a more realistic comparison would look like:
if (fabs(a - b) <= 16 * DBL_EPSILON * fmax(fabs(a), fabs(b))) { // ... }
If comparison performance is very important to you and you know the range of your values, then you should use fixed-point numbers instead.
My class based on previously posted answers. Very similar to Google's code but I use a bias which pushes all NaN values above 0xFF000000. That allows a faster check for NaN.
This code is meant to demonstrate the concept, not be a general solution. Google's code already shows how to compute all the platform specific values and I didn't want to duplicate all that. I've done limited testing on this code.
typedef unsigned int U32;

//  Float          Memory      Bias (unsigned)
//  -----          ------      ---------------
//   NaN           0xFFFFFFFF  0xFF800001
//   NaN           0xFF800001  0xFFFFFFFF
//  -Infinity      0xFF800000  0x00000000 ---
//  -3.40282e+038  0xFF7FFFFF  0x00000001    |
//  -1.40130e-045  0x80000001  0x7F7FFFFF    |
//  -0.0           0x80000000  0x7F800000    |--- Valid <= 0xFF000000.
//   0.0           0x00000000  0x7F800000    |    NaN > 0xFF000000
//   1.40130e-045  0x00000001  0x7F800001    |
//   3.40282e+038  0x7F7FFFFF  0xFEFFFFFF    |
//   Infinity      0x7F800000  0xFF000000 ---
//   NaN           0x7F800001  0xFF000001
//   NaN           0x7FFFFFFF  0xFF7FFFFF
//
//   Either value of NaN returns false.
//   -Infinity and +Infinity are not "close".
//   -0 and +0 are equal.
//
class CompareFloat
{
public:
    union
    {
        float m_f32;
        U32   m_u32;
    };

    static bool IsClose( float A, float B, U32 unitsDelta = 4 )
    {
        U32 a = GetBiased( A );
        U32 b = GetBiased( B );

        if ( (a > 0xFF000000) || (b > 0xFF000000) )
        {
            return( false );
        }
        return( (static_cast<U32>(abs( a - b ))) < unitsDelta );
    }

protected:
    static U32 GetBiased( float f )
    {
        U32 r = ((CompareFloat*)&f)->m_u32;

        if ( r & 0x80000000 )
        {
            return( ~r - 0x007FFFFF );
        }
        return( r + 0x7F800000 );
    }
};
I'd be very wary of any of these answers that involve floating point subtraction (e.g., fabs(a-b) < epsilon). First, floating point numbers become more sparse at greater magnitudes, and at high enough magnitudes, where the spacing is greater than epsilon, you might as well just be doing a == b. Second, subtracting two very close floating point numbers (as these will tend to be, given that you're looking for near equality) is exactly how you get catastrophic cancellation.
While not portable, I think grom's answer does the best job of avoiding these issues.
There are actually cases in numerical software where you want to check whether two floating point numbers are exactly equal. I posted this on a similar question
https://stackoverflow.com/a/10973098/1447411
So you can not say that "CompareDoubles1" is wrong in general.
It depends on how precise you want the comparison to be. If you want to compare for exactly the same number, then just go with ==. (You almost never want to do this unless you actually want exactly the same number.) On any decent platform you can also do the following:
double diff = a - b;
return fabs(diff) < EPSILON;
as fabs
tends to be pretty fast. By pretty fast I mean it is basically a bitwise AND, so it better be fast.
And integer tricks for comparing doubles and floats are nice but tend to make it more difficult for the various CPU pipelines to handle effectively. And it's definitely not faster on certain in-order architectures these days due to using the stack as a temporary storage area for values that are being used frequently. (Load-hit-store for those who care.)
In terms of the scale of quantities:
If epsilon is a small fraction of the magnitude of the quantity (i.e. a relative value) in some certain physical sense, and the A and B types are comparable in the same sense, then I think the following is quite correct:
#include <limits>
#include <iomanip>
#include <iostream>
#include <cmath>
#include <cstdlib>
#include <cassert>

template< typename A, typename B >
inline
bool close_enough(A const & a, B const & b,
                  typename std::common_type< A, B >::type const & epsilon)
{
    using std::isless;
    assert(isless(0, epsilon)); // epsilon is a part of the whole quantity
    assert(isless(epsilon, 1));
    using std::abs;
    auto const delta = abs(a - b);
    auto const x = abs(a);
    auto const y = abs(b);
    // comparable generally and |a - b| < eps * (|a| + |b|) / 2
    return isless(epsilon * y, x) && isless(epsilon * x, y) && isless((delta + delta) / (x + y), epsilon);
}

int main()
{
    std::cout << std::boolalpha << close_enough(0.9, 1.0, 0.1) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0, 1.1, 0.1) << std::endl;
    std::cout << std::boolalpha << close_enough(1.1, 1.2, 0.01) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0001, 1.0002, 0.01) << std::endl;
    std::cout << std::boolalpha << close_enough(1.0, 0.01, 0.1) << std::endl;
    return EXIT_SUCCESS;
}
/// Testing whether two doubles are almost equal. We consider two doubles
/// equal if the difference is within the range [0, epsilon).
///
/// epsilon: a positive number (supposed to be small)
///
/// If either x or y is 0, then we are comparing the absolute difference
/// to epsilon.
/// If both x and y are non-zero, then we are comparing the relative
/// difference to epsilon.
bool almost_equal(double x, double y, double epsilon)
{
    double diff = x - y;
    if (x != 0 && y != 0) {
        diff = diff / y;
    }

    if (diff < epsilon && -1.0 * diff < epsilon) {
        return true;
    }
    return false;
}
I used this function for my small project and it works, but note the following:
Double precision error can create a surprise for you. Let's say epsilon = 1.0e-6; then 1.0 and 1.000001 should NOT be considered equal according to the above code, but on my machine the function considers them to be equal. This is because 1.000001 cannot be precisely represented in binary format; it is stored as roughly 1.0000009xxx. I tested it with 1.0 and 1.0000011, and this time I got the expected result.
I use this code:

bool AlmostEqual(double v1, double v2)
{
    return (std::fabs(v1 - v2) < std::fabs(std::min(v1, v2)) * std::numeric_limits<double>::epsilon());
}
My way may not be correct, but useful:

Convert both floats to strings and then do a string compare

bool IsFlaotEqual(float a, float b, int decimal)
{
    TCHAR form[50] = _T("");
    _stprintf(form, _T("%%.%df"), decimal);

    TCHAR a1[30] = _T(""), a2[30] = _T("");
    _stprintf(a1, form, a);
    _stprintf(a2, form, b);

    if( _tcscmp(a1, a2) == 0 )
        return true;

    return false;
}

Operator overloading can also be done.
You cannot compare two double values with a fixed EPSILON. Depending on the value of the double, EPSILON varies.

A better double comparison would be:
bool same(double a, double b)
{
    return std::nextafter(a, std::numeric_limits<double>::lowest()) <= b
        && std::nextafter(a, std::numeric_limits<double>::max()) >= b;
}
In a more generic way:
template <typename T>
bool compareNumber(const T& a, const T& b)
{
    return std::abs(a - b) < std::numeric_limits<T>::epsilon();
}
Why not perform bitwise XOR? Two floating point numbers are equal if their corresponding bits are equal. I think the decision to place the exponent bits before the mantissa was made to speed up comparison of two floats. I think many answers here are missing the point of epsilon comparison. The epsilon value only depends on the precision to which the floating point numbers are being compared. For example, after doing some arithmetic with floats you get two numbers: 2.5642943554342 and 2.5642943554345. They are not equal, but for the solution only 3 decimal digits matter, so then they are equal: 2.564 and 2.564. In this case you choose epsilon equal to 0.001. Epsilon comparison is also possible with bitwise XOR. Correct me if I am wrong.