How to optimize this short factorial function in Scala? (creating 50000 BigInts)

Time: 2011-10-23 00:14:25

Tags: function scala optimization lazy-evaluation factorial

I compared the Scala version

(BigInt(1) to BigInt(50000)).reduce(_ * _)

to the Python version

reduce(lambda x,y: x*y, range(1,50000))

and it turned out that the Scala version takes about 10 times longer than the Python version.

I guess a big difference is that Python can use its native long type instead of creating a new BigInt object for each number. But is there a workaround in Scala?

4 Answers:

Answer 0 (score: 16):

The fact that your Scala code creates 50,000 BigInt objects is unlikely to make much of a difference here. The bigger issue is the multiplication algorithm: Python's long uses Karatsuba multiplication, while Java's BigInteger (which Scala's BigInt just wraps) does not.
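
For intuition, here is a minimal, purely illustrative Karatsuba sketch over Scala's BigInt (assuming non-negative inputs); it is not how Python implements it, and real libraries only switch to Karatsuba above a size threshold and with far more tuning:

// Illustrative only: split each operand at half its bit length and
// recombine using three recursive multiplications instead of four.
def karatsuba(x: BigInt, y: BigInt): BigInt = {
  val n = math.max(x.bitLength, y.bitLength)
  if (n <= 64) x * y                    // small operands: plain multiplication is faster
  else {
    val m    = n / 2
    val mask = (BigInt(1) << m) - 1
    val (x1, x0) = (x >> m, x & mask)   // x = x1 * 2^m + x0
    val (y1, y0) = (y >> m, y & mask)   // y = y1 * 2^m + y0
    val z2 = karatsuba(x1, y1)
    val z0 = karatsuba(x0, y0)
    val z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    (z2 << (2 * m)) + (z1 << m) + z0
  }
}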

The easiest workaround is probably to switch to a better arbitrary-precision math library, like JScience:

import org.jscience.mathematics.number.LargeInteger

(1 to 50000).foldLeft(LargeInteger.ONE)(_ times _)

This is faster than the Python solution on my machine.


Update: In response to Luigi Plingi's answer, I wrote some quick benchmarking code using Caliper, which gives the following results on my (quad core) machine:

              benchmark   ms linear runtime
         BigIntFoldLeft 4774 ==============================
             BigIntFold 4739 =============================
           BigIntReduce 4769 =============================
      BigIntFoldLeftPar 4642 =============================
          BigIntFoldPar  500 ===
        BigIntReducePar  499 ===
   LargeIntegerFoldLeft 3042 ===================
       LargeIntegerFold 3003 ==================
     LargeIntegerReduce 3018 ==================
LargeIntegerFoldLeftPar 3038 ===================
    LargeIntegerFoldPar  246 =
  LargeIntegerReducePar  260 =

I don't see the difference between reduce and fold that he does, but the moral is clear: if you can use Scala 2.9's parallel collections, they give you a huge improvement, but switching to LargeInteger helps as well.

Answer 1 (score: 9):

Python on my machine:

import time

def func():
  start = time.clock()
  reduce(lambda x, y: x * y, range(1, 50000))
  end = time.clock()
  t = (end - start) * 1000
  print t

gives 1219 ms

Scala:

def timed[T](f: => T) = {
  val t0 = System.currentTimeMillis
  val r = f
  val t1 = System.currentTimeMillis
  println("Took: "+(t1 - t0)+" ms")
  r
}

timed { (BigInt(1) to BigInt(50000)).reduce(_ * _) }
4251 ms

timed { (BigInt(1) to BigInt(50000)).fold(BigInt(1))(_ * _) }
4224 ms

timed { (BigInt(1) to BigInt(50000)).par.reduce(_ * _) }
2083 ms

timed { (BigInt(1) to BigInt(50000)).par.fold(BigInt(1))(_ * _) }
689 ms

// using org.jscience.mathematics.number.LargeInteger from Travis's answer
timed { val a = (1 to 50000).foldLeft(LargeInteger.ONE)(_ times _) }
3327 ms

timed { val a = (1 to 50000).map(LargeInteger.valueOf(_)).par.fold(
                                          LargeInteger.ONE)(_ times _) }
361 ms

The 689 ms and 361 ms figures are after a few warmup runs. Both started at around 1000 ms, but seem to warm up by different amounts: the parallel collections warm up significantly more than the non-parallel ones, whose times did not drop much from their first runs.

Adding .par (meaning, using parallel collections) seemed to speed up fold more than reduce. I only have 2 cores, but a greater number of cores should see a bigger performance gain.
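
The speedup will depend on how many cores the JVM sees; as a quick check:

Runtime.getRuntime.availableProcessors  // reports logical processors; the timings above are from a 2-core machine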

So, experimentally, the way to optimize this function is

a) use fold rather than reduce

b) use parallel collections

Update: Inspired by the observation that breaking the computation into smaller chunks helps, I managed to get the following to run in 215 ms on my machine, which is a 40% improvement over the standard parallelized algorithm. (Using BigInt, it takes 615 ms.) Also, it does not use parallel collections, yet somehow uses 90% CPU (unlike BigInt).

import org.jscience.mathematics.number.LargeInteger

def fact(n: Int) = {
  // Repeatedly pair up the sequence and multiply element-wise, halving its
  // length on each pass, so the factors being multiplied stay similar in size.
  def loop(seq: Seq[LargeInteger]): LargeInteger = seq.length match {
    case 0 => throw new IllegalArgumentException
    case 1 => seq.head
    case _ => loop {
      val (a, b) = seq.splitAt(seq.length / 2)
      a.zipAll(b, LargeInteger.ONE, LargeInteger.ONE).map(i => i._1 times i._2)
    }
  }
  loop((1 to n).map(LargeInteger.valueOf(_)).toIndexedSeq)
}
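
For reference, it can be run through the same timed helper defined earlier in this answer:

timed { fact(50000) }  // around 215 ms here after warmup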

Answer 2 (score: 1):

Another trick here might be to try both reduceLeft and reduceRight to see which is fastest. On your example, I get a much faster execution with reduceRight:

scala> timed { (BigInt(1) to BigInt(50000)).reduceLeft(_ * _) }
Took: 4605 ms

scala> timed { (BigInt(1) to BigInt(50000)).reduceRight(_ * _) }
Took: 2004 ms

The same difference shows up between foldLeft and foldRight. I guess it matters which side of the tree you start reducing from :)
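
To make the association order explicit, a tiny illustration (not a benchmark; the effect only matters once the intermediate BigInts get large):

List(BigInt(1), BigInt(2), BigInt(3), BigInt(4)).reduceLeft(_ * _)
// evaluates as ((1 * 2) * 3) * 4

List(BigInt(1), BigInt(2), BigInt(3), BigInt(4)).reduceRight(_ * _)
// evaluates as 1 * (2 * (3 * 4))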

Answer 3 (score: 0):

The most efficient way to calculate a factorial in Scala is to use a divide-and-conquer strategy:

def fact(n: Int): BigInt = rangeProduct(1, n)

private def rangeProduct(n1: Long, n2: Long): BigInt = n2 - n1 match {
  case 0 => BigInt(n1)
  case 1 => BigInt(n1 * n2)
  case 2 => BigInt(n1 * (n1 + 1)) * n2
  case 3 => BigInt(n1 * (n1 + 1)) * ((n2 - 1) * n2)
  case _ => 
    val nm = (n1 + n2) >> 1
    rangeProduct(n1, nm) * rangeProduct(nm + 1, n2)
}
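
As a quick sanity check, the result can be compared against the straightforward product for a small n:

assert(fact(20) == (BigInt(1) to BigInt(20)).product)  // 20! = 2432902008176640000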

Also, for faster results, use the latest version of the JDK and the following JVM options:

-server -XX:+TieredCompilation

Below are the results on an Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz (max 3.50GHz), 12 GB DDR3-1333 RAM, Windows 7 SP1, Oracle JDK 1.8.0_25-b18 64-bit:

(BigInt(1) to BigInt(100000)).product took: 3,806 ms with 26.4 % of CPU usage
(BigInt(1) to BigInt(100000)).reduce(_ * _) took: 3,728 ms with 25.4 % of CPU usage
(BigInt(1) to BigInt(100000)).reduceLeft(_ * _) took: 3,510 ms with 25.1 % of CPU usage
(BigInt(1) to BigInt(100000)).reduceRight(_ * _) took: 4,056 ms with 25.5 % of CPU usage
(BigInt(1) to BigInt(100000)).fold(BigInt(1))(_ * _) took: 3,697 ms with 25.5 % of CPU usage
(BigInt(1) to BigInt(100000)).par.product took: 406 ms with 66.3 % of CPU usage
(BigInt(1) to BigInt(100000)).par.reduce(_ * _) took: 296 ms with 71.1 % of CPU usage
(BigInt(1) to BigInt(100000)).par.reduceLeft(_ * _) took: 3,495 ms with 25.3 % of CPU usage
(BigInt(1) to BigInt(100000)).par.reduceRight(_ * _) took: 3,900 ms with 25.5 % of CPU usage
(BigInt(1) to BigInt(100000)).par.fold(BigInt(1))(_ * _) took: 327 ms with 56.1 % of CPU usage
fact(100000) took: 203 ms with 28.3 % of CPU usage

BTW, to improve the efficiency of factorial calculation for numbers greater than 20000, use the following implementation of the Schönhage-Strassen algorithm, or wait until it is merged into JDK 9 so that Scala can use it.