I ran some tests with C++ hypot() and Java Math.hypot. Both appear to be significantly slower than sqrt(a*a + b*b). Is that because of higher precision? What methods are there for a hypot function to compute the hypotenuse? Surprisingly, I could not find any indication of poor performance in the documentation.
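For context, a minimal micro-benchmark of the kind that shows the gap might look like the sketch below (the class name, iteration count, and random inputs are arbitrary choices of this illustration; a harness such as JMH would give more trustworthy numbers):

// Rough micro-benchmark sketch: compares Math.hypot against Math.sqrt(a*a + b*b)
// on the same random inputs. The accumulator keeps the JIT from eliminating the work.
public class HypotBench {
    public static void main(String[] args) {
        final int n = 2_000_000;
        double[] a = new double[n], b = new double[n];
        java.util.Random rng = new java.util.Random(42);
        for (int i = 0; i < n; i++) { a[i] = rng.nextDouble(); b[i] = rng.nextDouble(); }

        double sink = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) sink += Math.hypot(a[i], b[i]);
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) sink += Math.sqrt(a[i]*a[i] + b[i]*b[i]);
        long t2 = System.nanoTime();

        System.out.printf("hypot: %.1f ns/call, sqrt: %.1f ns/call (sink=%f)%n",
                          (t1 - t0) / (double) n, (t2 - t1) / (double) n, sink);
    }
}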
Answer 0 (score: 26)
It is not a simple sqrt function. You should check this link for the implementation of the algorithm: http://www.koders.com/c/fid7D3C8841ADC384A5F8DE0D081C88331E3909BF3A.aspx

It has a while loop that checks for convergence:
  /* Slower but safer algorithm due to Moler and Morrison. Never
     produces any intermediate result greater than roughly the
     larger of X and Y. Should converge to machine-precision
     accuracy in 3 iterations. */
  /* In the surrounding code at the link, abig = max(|x|, |y|),
     asmall = min(|x|, |y|), and ratio = asmall / abig. */
  double r = ratio*ratio, t, s, p = abig, q = asmall;

  do {
      t = 4. + r;
      if (t == 4.)
          break;
      s = r / t;
      p += 2. * s * p;
      q *= s;
      r = (q / p) * (q / p);
  } while (1);
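For illustration, a self-contained Java sketch of the same Moler-Morrison iteration might look as follows (variable names mirror the excerpt; abig and asmall are taken to be the larger and smaller magnitudes, and ratio their quotient, which is what the surrounding code at the link supplies):

// Sketch of the Moler-Morrison hypot iteration (illustrative, not the library source).
// p converges to hypot(x, y); every intermediate stays near max(|x|, |y|).
static double molerMorrisonHypot(double x, double y) {
    double p = Math.max(Math.abs(x), Math.abs(y));   // "abig" in the excerpt
    double q = Math.min(Math.abs(x), Math.abs(y));   // "asmall" in the excerpt
    if (p == 0.0) return 0.0;                        // avoid 0/0 for hypot(0, 0)
    double ratio = q / p;
    double r = ratio * ratio;
    while (true) {
        double t = 4.0 + r;
        if (t == 4.0) break;       // correction has become negligible
        double s = r / t;
        p += 2.0 * s * p;
        q *= s;
        r = (q / p) * (q / p);
    }
    return p;
}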
EDIT (update from J.M.):
Answer 1 (score: 4)
Here is a faster implementation, whose results are also closer to java.lang.Math.hypot. (Note: for Delorie's implementation, handling of NaN and +/-Infinity inputs would need to be added.)
private static final double TWO_POW_450 = Double.longBitsToDouble(0x5C10000000000000L);
private static final double TWO_POW_N450 = Double.longBitsToDouble(0x23D0000000000000L);
private static final double TWO_POW_750 = Double.longBitsToDouble(0x6ED0000000000000L);
private static final double TWO_POW_N750 = Double.longBitsToDouble(0x1110000000000000L);

public static double hypot(double x, double y) {
    x = Math.abs(x);
    y = Math.abs(y);
    if (y < x) {
        double a = x;
        x = y;
        y = a;
    } else if (!(y >= x)) { // Testing if we have some NaN.
        if ((x == Double.POSITIVE_INFINITY) || (y == Double.POSITIVE_INFINITY)) {
            return Double.POSITIVE_INFINITY;
        } else {
            return Double.NaN;
        }
    }
    if (y - x == y) { // x too small to subtract from y
        return y;
    } else {
        double factor;
        if (x > TWO_POW_450) { // 2^450 < x < y
            x *= TWO_POW_N750;
            y *= TWO_POW_N750;
            factor = TWO_POW_750;
        } else if (y < TWO_POW_N450) { // x < y < 2^-450
            x *= TWO_POW_750;
            y *= TWO_POW_750;
            factor = TWO_POW_N750;
        } else {
            factor = 1.0;
        }
        return factor * Math.sqrt(x*x + y*y);
    }
}
Answer 2 (score: 2)
http://steve.hollasch.net/cgindex/math/pythag-root.txt
shows a faster implementation that uses sqrt(); it converges quadratically, versus cubically for Moler & Morrison, and has roughly the same overflow characteristics.
Answer 3 (score: 2)
I found Math.hypot() to be very slow. I found that I could write a fast Java version using the same algorithm, producing identical results. It can be used as a drop-in replacement.
/**
 * <b>hypot</b>
 * @param x
 * @param y
 * @return sqrt(x*x + y*y) without intermediate overflow or underflow.
 * @Note {@link Math#hypot} is unnecessarily slow. This returns the identical result to
 * Math.hypot with reasonable run times (~40 nsec vs. 800 nsec).
 * <p>The logic for computing z is copied from "Freely Distributable Math Library"
 * fdlibm's e_hypot.c. This minimizes rounding error to provide 1 ulp accuracy.
 */
public static double hypot(double x, double y) {
    if (Double.isInfinite(x) || Double.isInfinite(y)) return Double.POSITIVE_INFINITY;
    if (Double.isNaN(x) || Double.isNaN(y)) return Double.NaN;

    x = Math.abs(x);
    y = Math.abs(y);
    if (x < y) {
        double d = x;
        x = y;
        y = d;
    }

    int xi = Math.getExponent(x);
    int yi = Math.getExponent(y);
    if (xi > yi + 27) return x;

    int bias = 0;
    if (xi > 510 || xi < -511) {
        bias = xi;
        x = Math.scalb(x, -bias);
        y = Math.scalb(y, -bias);
    }

    // translated from "Freely Distributable Math Library" e_hypot.c to minimize rounding errors
    double z = 0;
    if (x > 2*y) {
        double x1 = Double.longBitsToDouble(Double.doubleToLongBits(x) & 0xffffffff00000000L);
        double x2 = x - x1;
        z = Math.sqrt(x1*x1 + (y*y + x2*(x + x1)));
    } else {
        double t = 2 * x;
        double t1 = Double.longBitsToDouble(Double.doubleToLongBits(t) & 0xffffffff00000000L);
        double t2 = t - t1;
        double y1 = Double.longBitsToDouble(Double.doubleToLongBits(y) & 0xffffffff00000000L);
        double y2 = y - y1;
        double x_y = x - y;
        z = Math.sqrt(t1*y1 + (x_y*x_y + (t1*y2 + t2*y))); // Note: 2*x*y <= x*x + y*y
    }

    if (bias == 0) {
        return z;
    } else {
        return Math.scalb(z, bias);
    }
}
Answer 4 (score: 2)
Compared to sqrt(a*a+b*b), hypot() incurs overhead to avoid overflow and underflow in intermediate computations. This involves scaling operations. In older implementations the scaling may use division, which is itself a fairly slow operation. Even where the scaling is accomplished by direct exponent manipulation, it can be fairly slow on older processor architectures because transferring data between the integer ALUs and the floating-point units is fairly slow (for example, it may involve a round trip through memory). Data-driven branches are also common to the various scaling approaches; these are hard to predict.
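As a small illustration of why that scaling is needed (a sketch of mine, not part of the original answer): the naive formula overflows in the intermediate squares even when the true result is representable, whereas pre-scaling by an exact power of two derived from the larger operand avoids it, at the cost of extra operations and branches:

// Sketch: why hypot() needs scaling. The naive formula overflows in the squares,
// while pre-scaling by an exact power of two (cheap, but branchy/bit-twiddly) does not.
public class ScalingDemo {
    public static void main(String[] args) {
        double a = 1e200, b = 1e200;
        double naive = Math.sqrt(a*a + b*b);   // a*a overflows -> Infinity
        // scale chosen from the exponent of the larger operand, undone afterwards
        int e = Math.getExponent(Math.max(Math.abs(a), Math.abs(b)));
        double scaled = Math.scalb(Math.sqrt(Math.scalb(a, -e) * Math.scalb(a, -e)
                                           + Math.scalb(b, -e) * Math.scalb(b, -e)), e);
        System.out.println(naive);               // Infinity
        System.out.println(scaled);              // ~1.4142135623730951E200
        System.out.println(Math.hypot(a, b));    // reference
    }
}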
A frequent goal of math library designers is to achieve faithful rounding for simple math functions such as hypot(), that is, a maximum error of less than 1 ulp. This improvement in accuracy over a naive solution means that intermediate computations have to be performed with higher than native precision. A classical approach is to split the operands into "high" and "low" parts and simulate extended precision; this adds the overhead of the splits and increases the number of floating-point operations. Finally, the ISO C99 specification of hypot() (later adopted by the C++ standard) prescribes the handling of NaNs and infinities, which does not fall out naturally from the straightforward computation.
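A sketch of that high/low splitting technique (my own illustration; the fdlibm-derived Java code in answer 3 uses the same bit-masking idea): zeroing the low 32 bits of the representation leaves a head whose square is exact, and the remainder is folded back in separately:

// Sketch of the "high/low split" trick used to simulate extra precision.
// Masking the low 32 bits leaves a head with few enough mantissa bits that
// head*head is exact; the remainder carries the low-order contribution of x*x.
static double[] splitSquare(double x) {
    double hi = Double.longBitsToDouble(Double.doubleToLongBits(x) & 0xffffffff00000000L);
    double lo = x - hi;                  // exact: hi agrees with x's leading bits
    double head = hi * hi;               // exact product (hi has <= 21 mantissa bits)
    double tail = lo * (x + hi);         // equals x*x - hi*hi, up to one rounding
    return new double[] { head, tail };  // head + tail represents x*x with extra precision
}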
A representative example of older best practice is __ieee754_hypot() from FDLIBM. Although it claims a maximum error of 1 ulp, a quick test against an arbitrary-precision library shows that it does not actually achieve that goal, as errors of up to 1.15 ulp can be observed.
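The kind of accuracy test meant here can be sketched as follows (my own illustration, using BigDecimal as the higher-precision reference rather than a dedicated arbitrary-precision library; BigDecimal.sqrt requires Java 9+, and the sample inputs are arbitrary):

import java.math.BigDecimal;
import java.math.MathContext;

// Measures the error, in ulps, of a hypot result against a high-precision reference
// (sketch; a real test sweeps billions of argument pairs).
public class UlpError {
    static double ulpError(double computed, double x, double y) {
        MathContext mc = new MathContext(60);
        BigDecimal exact = new BigDecimal(x).multiply(new BigDecimal(x))
                .add(new BigDecimal(y).multiply(new BigDecimal(y)))
                .sqrt(mc);
        BigDecimal err = new BigDecimal(computed).subtract(exact).abs();
        return err.divide(new BigDecimal(Math.ulp(exact.doubleValue())), mc).doubleValue();
    }

    public static void main(String[] args) {
        double x = 3e-300, y = 4e-300;
        System.out.println(ulpError(Math.hypot(x, y), x, y));        // small
        System.out.println(ulpError(Math.sqrt(x*x + y*y), x, y));    // huge: squares underflow
    }
}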
Two advances in processor architecture since this question was asked have made more efficient hypot() implementations possible. One is the introduction of the fused multiply-add (FMA) operation, which internally uses the full product for the addition and applies only a single rounding at the end. This often reduces the need to simulate higher precision in intermediate computations, and combining two operations into a single instruction also improves performance. The other architectural advance is allowing both floating-point and integer operations to work on the same set of registers, which makes bit manipulation of floating-point operands cheap.
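A small sketch of the property being exploited (my own illustration, using Math.fma, available since Java 9): because an FMA rounds only once, it can recover exactly the rounding error of a product, which is what lets an FMA-based hypot() retain the low-order bits of the squares without an explicit operand split:

public class FmaDemo {
    public static void main(String[] args) {
        // FMA rounds x*y + c only once, so it can recover a product's rounding error:
        // with hi = x*y (rounded) and lo = fma(x, y, -hi), x*y == hi + lo exactly.
        double x = 1.0 + Math.ulp(1.0);   // 1 + 2^-52
        double hi = x * x;                // rounded product: 1 + 2^-51
        double lo = Math.fma(x, x, -hi);  // exact remainder: 2^-104
        System.out.println(hi + " + " + lo);
    }
}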
As a consequence, in modern high-performance math libraries hypot() typically has about half the throughput of sqrt(), as can be seen, for example, in the performance data Intel publishes for MKL.
For one's own experiments it is best to start with a single-precision implementation of hypot(), because this allows testing across a larger portion of the total combination of all possible argument values. It also allows exploring trade-offs between speed and accuracy when using the low-precision reciprocal square root approximations provided by many modern processor architectures. Exemplary code for one such exploration is shown below.
Note that the requirements of ISO C99 (and, by extension, C++) for the NaN handling of fmin() and fmax() do not always match the behavior of floating-point min/max machine instructions, which can make these functions slower than expected. Since the code below handles NaNs separately, it should be possible to use such machine instructions directly. The code should be compiled with full optimization but with strict adherence to the IEEE-754 standard.
Based on tests with 50 billion random argument pairs, the maximum errors observed with the code variants below are: (USE_BEEBE == 1): 1.51 ulp; (USE_BEEBE == 0 && USE_SQRT == 1): 1.02 ulp; (USE_BEEBE == 0 && USE_SQRT == 0 && USE_2ND_ITERATION == 0): 2.77 ulp; (USE_BEEBE == 0 && USE_SQRT == 0 && USE_2ND_ITERATION == 1): 1.02 ulp.
#include <stdint.h>
#include <string.h>
#include <float.h>
#include <math.h>
#include "immintrin.h"

uint32_t __float_as_uint32 (float a);
float __uint32_as_float (uint32_t a);
float sse_rsqrtf (float a);

float my_hypotf (float a, float b)
{
    float fa, fb, mn, mx, res, r, t;

    fa = fabsf (a);
    fb = fabsf (b);
    mx = fmaxf (fa, fb);
    mn = fminf (fa, fb);

#if USE_BEEBE
    /* Nelson H. F. Beebe, "The Mathematical Function Computation Handbook."
       Springer 2017
    */
    float s, c;
    r = mn / mx;
    t = fmaf (r, r, 1.0f);
    s = sqrtf (t);
    c = fmaf (-s, s, t) / (s + s);
    res = fmaf (mx, s, mx * c);
    if (mx == 0) res = mx; // fixup hypot(0,0)
#else // USE_BEEBE
    float scale_in, scale_out, s, v, w;
    uint32_t expo;
    /* compute scale factors */
    expo = __float_as_uint32 (mx) & 0xfc000000;
    scale_in = __uint32_as_float (0x7e000000 - expo);
    scale_out = __uint32_as_float (expo + 0x01000000);
    /* scale mx towards unity */
    mn = mn * scale_in;
    mx = mx * scale_in;
    /* mx in [2**-23, 2**6) */
    s = fmaf (mx, mx, mn * mn); // 0.75 ulp
#if USE_SQRT
    w = sqrtf (s);
#else // USE_SQRT
    r = sse_rsqrtf (s);
    /* use A. Schoenhage's coupled iteration for the square root */
    v = r * 0.5f;
    w = r * s;
    w = fmaf (fmaf (w, -w, s), v, w);
#if USE_2ND_ITERATION
    v = fmaf (fmaf (r, -w, 1), v, v);
    w = fmaf (fmaf (w, -w, s), v, w);
#endif // USE_2ND_ITERATION
    if (mx == 0) w = mx; // fixup hypot(0,0)
#endif // USE_SQRT
    /* reverse previous scaling */
    res = w * scale_out;
#endif // USE_BEEBE

    /* check for special cases: infinity and NaN */
    t = a + b;
    if (!(fabsf (t) <= INFINITY)) res = t; // isnan(t)
    if (mx == INFINITY) res = mx; // isinf(mx)
    return res;
}

uint32_t __float_as_uint32 (float a)
{
    uint32_t r;
    memcpy (&r, &a, sizeof r);
    return r;
}

float __uint32_as_float (uint32_t a)
{
    float r;
    memcpy (&r, &a, sizeof r);
    return r;
}

float sse_rsqrtf (float a)
{
    __m128 b, t;
    float res;
    int old_mxcsr;

    old_mxcsr = _mm_getcsr();
    _MM_SET_FLUSH_ZERO_MODE (_MM_FLUSH_ZERO_OFF);
    _MM_SET_ROUNDING_MODE (_MM_ROUND_NEAREST);
    b = _mm_set_ss (a);
    t = _mm_rsqrt_ss (b);
    _mm_store_ss (&res, t);
    _mm_setcsr (old_mxcsr);
    return res;
}
A simple FMA-based double-precision implementation of hypot() looks like this:
uint64_t __double_as_uint64 (double a)
{
    uint64_t r;
    memcpy (&r, &a, sizeof r);
    return r;
}

double __uint64_as_double (uint64_t a)
{
    double r;
    memcpy (&r, &a, sizeof r);
    return r;
}

double my_hypot (double a, double b)
{
    double fa, fb, mn, mx, scale_in, scale_out, r, s, t;
    uint64_t expo;

    fa = fabs (a);
    fb = fabs (b);
    mx = fmax (fa, fb);
    mn = fmin (fa, fb);

    /* compute scale factors */
    expo = __double_as_uint64 (mx) & 0xff80000000000000ULL;
    scale_in = __uint64_as_double (0x7fc0000000000000ULL - expo);
    scale_out = __uint64_as_double (expo + 0x0020000000000000ULL);

    /* scale mx towards unity */
    mn = mn * scale_in;
    mx = mx * scale_in;

    /* mx in [2**-52, 2**6) */
    s = fma (mx, mx, mn * mn); // 0.75 ulp
    r = sqrt (s);

    /* reverse previous scaling */
    r = r * scale_out;

    /* check for special cases: infinity and NaN */
    t = a + b;
    if (!(fabs (t) <= INFINITY)) r = t; // isnan(t)
    if (mx == INFINITY) r = mx; // isinf(mx)
    return r;
}