Why does this LR code run so slowly on Spark?

Asked: 2013-12-25 03:31:28

Tags: scala hadoop machine-learning apache-spark

Because MLlib does not support sparse input, I am running the following code, which handles a sparse input format, on a Spark cluster. The setup is:

  1. 5 nodes, each with 8 cores (while the code runs, every CPU on every node is at 100%, about 98% of it in user mode).
  2. Input: 10,000,000+ instances with 600,000+ dimensions, stored on HDFS.
  3. The code is:

    import java.util.Random
    import scala.collection.mutable.HashMap
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD
    import org.apache.spark.util.Vector
    import org.apache.spark.broadcast.Broadcast
    
    object SparseLR {
      val labelNum = 1
      val dimNum = 632918 
      val iteration = 10
      val alpha = 0.1
      val lambda = 0.1
      val rand = new Random(42)
      var w = Vector(dimNum, _ => rand.nextDouble)  // dense weight vector, randomly initialized
    
      class SparserVector {
        var elements = new HashMap[Int, Double]
    
        def insert(index: Int, value: Double){
          elements += index -> value
        }
    
        // expand into a dense Vector of length dimNum, scaled by the given factor
        def *(scale: Double): Vector = {
          var x = new Array[Double](dimNum)
          elements.keySet.foreach(k => x(k) = scale * elements.get(k).get)
          Vector(x)
        }
      }
      case class DataPoint(x: SparserVector, y: Int)
    
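      // parse a line of the form "<label>\t<index1>:<value1>\t<index2>:<value2>..."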
      def parsePoint(line: String): DataPoint = {
        var features = new SparserVector
        val fields = line.split("\t")
        //println("fields:" + fields(0))
        val y = fields(0).toInt
        fields.filter(_.contains(":")).foreach( f => {
          val feature = f.split(":")
          features.insert(feature(0).toInt, feature(1).toDouble)
        })
        return DataPoint(features, y)
      }
    
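      // per-sample gradient of the loss; h is the sigmoid of y * (w·x)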
      def gradient(p: DataPoint, w: Broadcast[Vector]) : Vector = {
        def h(w: Broadcast[Vector], x: SparserVector): Double = {
          val wb = w.value
          val features = x.elements
          val s = features.keySet.map(k => features.get(k).get * wb(k)).reduce(_ + _)
          1 / (1 + Math.exp(-p.y * s))
        }
        p.x * (-(1 - p.y * h(w, p.x)))
      }
    
      def train(sc: SparkContext, dataPoints: RDD[DataPoint]) {
        //val sampleNum = dataPoints.count
        val sampleNum = 11680250
    
        for(i <- 0 until iteration) {
          val wb = sc.broadcast(w)
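          // sum the per-sample gradients, add the L2 term, and divide by the sample count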
          val g = (dataPoints.map(p => gradient(p, wb)).reduce(_ + _) + lambda * wb.value) / sampleNum
          w -= alpha * g
    
          println("iteration " + i + ": g = " + g)
        }
      }
    
      def main(args : Array[String]): Unit = {
        System.setProperty("spark.executor.memory", "15g")
        System.setProperty("spark.default.parallelism", "32");
        val sc = new SparkContext("spark://xxx:12036", "LR", "/xxx/spark", List("xxx_2.9.3-1.0.jar"))
        val lines = sc.textFile("hdfs:xxx/xxx.txt", 32)
    
        val trainset = lines.map(parsePoint _).cache()
    
        train(sc, trainset)
      }
    }
    

Can anyone help me? Thanks!

1 Answer:

Answer 0 (score: 4):

It is hard to give you a definitive answer. Maybe this would be a better fit for the Code Review Stack Exchange site?

A few things are immediately apparent, though:

Your gradient function looks inefficient. When you want to do something for each key/value pair of a map, it is much more efficient to write

    for ((k, v) <- map) {
      ...
    }

rather than

    for (k <- map.keySet) {
      val value = map.get(k).get
      ...
    }

Also, for performance-critical code like this, it is better to replace the reduce with accumulation into a mutable value. The rewritten gradient function would then be:

    def gradient(p: DataPoint, w: Broadcast[Vector]): Vector = {
      def h(w: Broadcast[Vector], x: SparserVector): Double = {
        val wb = w.value
        val features = x.elements
        var s = 0.0
        for ((k, v) <- features)
          s += v * wb(k)
        1 / (1 + Math.exp(-p.y * s))
      }
      p.x * (-(1 - p.y * h(w, p.x)))
    }

Now, if you want to squeeze out more performance, you will have to change SparseVector to use an array of indices and an array of values instead of a Map[Int, Double]. The reason is that in a Map, keys and values are boxed as objects with considerable overhead, while an Array[Int] or Array[Double] is just one compact chunk of memory.

(For convenience, it is probably best to define a builder that uses a SortedMap[Int, Double] and converts it into the two arrays when building is finished; a sketch of such a builder follows the class below.)

    class SparseVector(val indices: Array[Int], val values: Array[Double]) {
      require(indices.length == values.length)

      def *(scale: Double): Vector = {
        val x = new Array[Double](dimNum)
        var i = 0
        while (i < indices.length) {
          x(indices(i)) = scale * values(i)
          i += 1
        }
        Vector(x)
      }
    }
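
A minimal, untested sketch of the builder idea mentioned above, assuming the two-array SparseVector just shown (the SparseVectorBuilder name and its insert/build methods are illustrative, not part of the original answer):

    import scala.collection.immutable.SortedMap

    class SparseVectorBuilder {
      // a SortedMap keeps the entries ordered, so the arrays come out sorted by index
      private var entries = SortedMap.empty[Int, Double]

      def insert(index: Int, value: Double) {
        entries += index -> value
      }

      // convert to the compact two-array representation once building is finished
      def build(): SparseVector =
        new SparseVector(entries.keys.toArray, entries.values.toArray)
    }

parsePoint could then fill one builder per input line and call build() at the end, so each data point ends up in the compact array form.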

Note that the code samples above are untested, but I think you will get the idea.