Is a higher `scale` in `BigDecimal#divide()` faster?

Asked: 2013-12-19 00:01:26

Tags: java performance bigdecimal

I came up with a question that was originally going to be a self-answered, Q&A-style post.

The original question:

To what extent does a higher scale in `BigDecimal#divide()` affect performance?

So I put together this SSCCE:

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.concurrent.TimeUnit;

public class Test {

    public static void main(String[] args) {
        int[] scales = new int[] {1, 10, 50, 100, 500, 1000, 5000, 100000, 1000000};
        for (int scale : scales) {
            // Time a single division of 1/7 at the given scale.
            long start = System.nanoTime();
            BigDecimal.ONE.divide(BigDecimal.valueOf(7), scale, RoundingMode.HALF_UP);
            long end = System.nanoTime();
            long elapsed = end - start;
            // Break the elapsed nanoseconds into minutes, seconds, millis and leftover nanos.
            String elapsed_str = String.format("%d mins, %d secs, %d millis, %d nanos", 
                TimeUnit.NANOSECONDS.toMinutes(elapsed),
                TimeUnit.NANOSECONDS.toSeconds(elapsed) - TimeUnit.MINUTES.toSeconds(TimeUnit.NANOSECONDS.toMinutes(elapsed)),
                TimeUnit.NANOSECONDS.toMillis(elapsed) - TimeUnit.SECONDS.toMillis(TimeUnit.NANOSECONDS.toSeconds(elapsed)),
                elapsed - TimeUnit.MILLISECONDS.toNanos(TimeUnit.NANOSECONDS.toMillis(elapsed))
            );
            System.out.println("Time for scale = " + scale + ": " + elapsed_str);
        }
    }
}

Which produces this output:

Time for scale = 1: 0 mins, 0 secs, 2 millis, 883903 nanos
Time for scale = 10: 0 mins, 0 secs, 0 millis, 13995 nanos
Time for scale = 50: 0 mins, 0 secs, 1 millis, 138727 nanos
Time for scale = 100: 0 mins, 0 secs, 0 millis, 645636 nanos
Time for scale = 500: 0 mins, 0 secs, 1 millis, 250220 nanos
Time for scale = 1000: 0 mins, 0 secs, 4 millis, 38957 nanos
Time for scale = 5000: 0 mins, 0 secs, 15 millis, 66549 nanos
Time for scale = 100000: 0 mins, 0 secs, 500 millis, 873987 nanos
Time for scale = 1000000: 0 mins, 50 secs, 183 millis, 686684 nanos

As the scale increases by orders of magnitude, performance degrades sharply. But the lines that had me scratching my head were these:

Time for scale = 1: 0 mins, 0 secs, 2 millis, 883903 nanos
Time for scale = 10: 0 mins, 0 secs, 0 millis, 13995 nanos
Time for scale = 50: 0 mins, 0 secs, 1 millis, 138727 nanos
Time for scale = 100: 0 mins, 0 secs, 0 millis, 645636 nanos
Time for scale = 500: 0 mins, 0 secs, 1 millis, 250220 nanos

A scale of 10 seems to be optimal for `BigDecimal#divide()`? And a scale of 100 is faster than 50? I thought this might just be an anomaly, so I ran it again (this time omitting the two highest scales, because I didn't want to wait 50 seconds :)) and these were the results:

Time for scale = 1: 0 mins, 0 secs, 3 millis, 440903 nanos
Time for scale = 10: 0 mins, 0 secs, 0 millis, 10263 nanos
Time for scale = 50: 0 mins, 0 secs, 0 millis, 833169 nanos
Time for scale = 100: 0 mins, 0 secs, 0 millis, 487492 nanos
Time for scale = 500: 0 mins, 0 secs, 0 millis, 802846 nanos
Time for scale = 1000: 0 mins, 0 secs, 2 millis, 475715 nanos
Time for scale = 5000: 0 mins, 0 secs, 16 millis, 646117 nanos

Again, a scale of 10 is much faster, and 100 is again faster than 50.

I tried it again and again, and a scale of 100 was always faster than 50, while a scale of 1 was consistently slower than everything below 1000.

Does anyone have an explanation?

1 answer:

Answer 0 (score: 0):

Java code is optimized dynamically, but the first time you run it, it has to be loaded. To avoid the results being skewed by the code being re-compiled while it runs, I suggest:

  • doing the longest runs first, not last;
  • running the test for at least 2 seconds;
  • ignoring the first of at least 3 to 5 runs (in your case there is only one run).

I would keep the scales simple so that you can compare all the results. In your case, I would perform each division at least 1000 times and print the average time in microseconds.
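A minimal sketch of the kind of benchmark described above, assuming the same 1/7 division as the original SSCCE: it makes a few throw-away warm-up passes so the JIT can compile the hot path before measuring, then times each scale over many iterations and prints the average in microseconds. The class and method names are illustrative, not from the original post, and the two largest scales are dropped to keep the run short.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideBenchmark {

    public static void main(String[] args) {
        int[] scales = {1, 10, 50, 100, 500, 1000};
        int iterations = 1000;

        // Warm-up passes: run everything first and discard the timings,
        // so the hot code has been JIT-compiled before we measure.
        for (int pass = 0; pass < 3; pass++) {
            for (int scale : scales) {
                averageNanos(scale, iterations);
            }
        }

        // Measured pass: average over many iterations and report microseconds.
        for (int scale : scales) {
            long avg = averageNanos(scale, iterations);
            System.out.printf("scale = %d: %.1f micros on average%n", scale, avg / 1000.0);
        }
    }

    // Average time in nanoseconds of one divide at the given scale.
    private static long averageNanos(int scale, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            BigDecimal.ONE.divide(BigDecimal.valueOf(7), scale, RoundingMode.HALF_UP);
        }
        return (System.nanoTime() - start) / iterations;
    }
}

Running the warm-up over every scale before any measured pass follows the answer's advice to put the longest work before the runs you actually compare.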