Do I need to persist RDDs that are updated over and over?

Asked: 2019-02-24 12:40:42

Tags: scala apache-spark hadoop rdd

I am working on a Spark program that needs to keep updating some RDDs in a loop:

var totalRandomPath: RDD[String] = null
for (iter <- 0 until config.numWalks) {
  // initial paths: one tab-separated string of node ids per walker
  var randomPath: RDD[String] = examples.map { case (nodeId, clickNode) =>
    clickNode.path.mkString("\t")
  }

  for (walkCount <- 0 until config.walkLength) {
    randomPath = edge2attr.join(randomPath.mapPartitions { iter =>
      iter.map { pathBuffer =>
        val paths: Array[String] = pathBuffer.split("\t")

        // key = the last two node ids of the path, i.e. the edge the walk is currently on
        (paths.slice(paths.size - 2, paths.size).mkString(""), pathBuffer)
      }
    }).mapPartitions { iter =>
      iter.map { case (edge, (attr, pathBuffer)) =>
        try {
          if (pathBuffer != null && pathBuffer.nonEmpty && attr.dstNeighbors != null && attr.dstNeighbors.nonEmpty) {
            // draw the index of the next node from the alias tables (attr.J, attr.q)
            // and append the chosen neighbor to the path
            val nextNodeIndex: PartitionID = GraphOps.drawAlias(attr.J, attr.q)
            val nextNodeId: VertexId = attr.dstNeighbors(nextNodeIndex)
            s"$pathBuffer\t$nextNodeId"
          } else {
            pathBuffer // keep the path unchanged when there is no neighbor to extend it with
          }
        } catch {
          case e: Exception => throw new RuntimeException(e) // wrap the cause, not just its message, to preserve the stack trace
        }
      }.filter(_ != null)
    }
  }

  if (totalRandomPath != null) {
    totalRandomPath = totalRandomPath.union(randomPath)
  } else {
    totalRandomPath = randomPath
  }
}

In this program, the RDDs totalRandomPath and randomPath are updated over and over through many transformations (join, mapPartitions). The program ends with a collect action.

So, do I need to persist those continually updated RDDs (totalRandomPath, randomPath) to speed up my Spark program?
I also noticed that the program runs fast on a single machine, but slows down when run on a three-node cluster. Why does that happen?
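
As an aside, one way to see how deep the lineage of these RDDs has grown is RDD.toDebugString, which prints the dependency chain. The snippet below is only an illustrative sketch, not part of the original program:

// Sketch: inspect the lineage depth of the iteratively rebuilt RDDs.
// Each walk step adds a join plus two mapPartitions to randomPath's lineage,
// and each outer iteration adds a union to totalRandomPath's lineage.
println(randomPath.toDebugString)
println(totalRandomPath.toDebugString)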

1 Answer:

Answer 0 (score: 0)

Yes, you need to persist the updated RDD and unpersist the old one that is no longer needed:

var totalRandomPath: RDD[String] = spark.sparkContext.parallelize(List.empty[String]).cache()
for (iter <- 0 until config.numWalks) {

  // existing logic

  val tempRDD = totalRandomPath.union(randomPath).cache()
  tempRDD.foreach { _ => }    // trigger the caching of tempRDD immediately
  totalRandomPath.unpersist() // unpersist the old RDD, which is no longer needed
  totalRandomPath = tempRDD   // point totalRandomPath at the updated RDD
}
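
Because every union also lengthens the lineage of totalRandomPath, for a large config.numWalks it can additionally help to truncate the lineage with checkpointing. Below is a minimal sketch of that variant; the checkpoint directory and the every-10-iterations interval are arbitrary assumptions, not part of the answer above:

spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints") // assumed directory; use reliable storage (e.g. HDFS) on a cluster

var totalRandomPath: RDD[String] = spark.sparkContext.parallelize(List.empty[String]).cache()
for (iter <- 0 until config.numWalks) {

  // existing logic

  val tempRDD = totalRandomPath.union(randomPath).cache()
  if (iter % 10 == 9) tempRDD.checkpoint() // mark for checkpointing before the first action on tempRDD
  tempRDD.count()                          // materializes the cache (and the checkpoint, if marked)
  totalRandomPath.unpersist()              // drop the old RDD, which is no longer needed
  totalRandomPath = tempRDD                // point totalRandomPath at the updated RDD
}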