Spark GraphX program does not use CPU or memory

Time: 2017-01-30 16:06:45

Tags: apache-spark spark-graphx

I have a function that takes the neighbors of a node (which I pass in through a broadcast variable) together with the node's own id, and computes the closeness centrality of that node. I map every vertex of the graph with this function. But when I open the task manager, the CPU is barely utilized at all, as if the job were not running in parallel, and the same goes for memory. The function should execute for every node in parallel, and the data is large and takes a long time to finish, yet the job seems to use almost no resources. Thanks in advance. To load the graph I use val graph = GraphLoader.edgeListFile(sc, path).cache

import scala.collection.mutable
import scala.collection.mutable.ListBuffer

import org.apache.spark.SparkContext
import org.apache.spark.graphx._

// CollectNeighbors and FibonacciHeap are project-specific helpers; import them
// from wherever they are defined in your codebase.
object ClosenessCentrality {

  case class Vertex(id: VertexId)

  def run(graph: Graph[Int, Float], sc: SparkContext): Unit = {
    // Have to reverse the edges and make the graph undirected because it is bipartite
    val neighbors = CollectNeighbors.collectWeightedNeighbors(graph).collectAsMap()
    val bNeighbors = sc.broadcast(neighbors)

    // Compute the closeness value of every vertex; count() forces the lazy RDD to evaluate.
    val result = graph.vertices.map(f => shortestPaths(f._1, bNeighbors.value))
    //result.coalesce(1)
    result.count()

  }

  // Dijkstra's algorithm over the broadcast adjacency map; returns the sum of
  // shortest-path distances from `source` to all other vertices.
  def shortestPaths(source: VertexId, neighbors: Map[VertexId, Map[VertexId, Float]]): Double = {
    val predecessors = new mutable.HashMap[VertexId, ListBuffer[VertexId]]()
    val distances = new mutable.HashMap[VertexId, Double]()
    val q = new FibonacciHeap[Vertex]
    val nodes = new mutable.HashMap[VertexId, FibonacciHeap.Node[Vertex]]()

    distances.put(source, 0)

    // Initialise every vertex with an "unreached" distance (the original used
    // Int.MaxValue; PositiveInfinity avoids mixing an Int sentinel into Double maths),
    // an empty predecessor list, and a heap handle for later decreaseKey calls.
    for (w <- neighbors) {
      if (w._1 != source)
        distances.put(w._1, Double.PositiveInfinity)

      predecessors.put(w._1, ListBuffer[VertexId]())
      val node = q.insert(Vertex(w._1), distances(w._1))
      nodes.put(w._1, node)
    }

    while (!q.isEmpty) {
      val u = q.minNode
      val node = u.data.id
      q.removeMin()
      // Relax every edge leaving the current node.
      for (w <- neighbors(node).keys) {
        val alt = distances(node) + neighbors(node)(w)
        if (alt < distances(w)) {
          distances(w) = alt
          predecessors(w) += node
          q.decreaseKey(nodes(w), alt)
        }
      }
    }
    distances.values.sum
  }
}
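For context, a minimal driver sketch of how this could be wired up (path is a placeholder, and the mapEdges call is only there to turn the Int edge attributes produced by edgeListFile into the Float weights that run expects):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader

val sc = new SparkContext(new SparkConf().setAppName("closeness-centrality"))
val graph = GraphLoader.edgeListFile(sc, path).cache()
ClosenessCentrality.run(graph.mapEdges(_.attr.toFloat), sc)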

1 Answer:

Answer 0 (score: 1)

To give at least a partial answer to your original question: I suspect your RDD has only a single partition and is therefore processed on a single core.
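A quick way to check (a sketch; getNumPartitions exists on RDDs since Spark 1.6, on older versions use partitions.length):

// How many partitions do the graph's underlying RDDs actually have?
println(graph.vertices.getNumPartitions)
println(graph.edges.getNumPartitions)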

The GraphLoader.edgeListFile method has a parameter that lets you specify the desired minimum number of partitions. In addition, you can use repartition to get more partitions.
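For example (a sketch; the parameter is named numEdgePartitions in current GraphX releases, minEdgePartitions in very old ones, and 16 is an arbitrary illustrative value):

// Ask GraphLoader for at least 16 edge partitions up front
val graph = GraphLoader.edgeListFile(sc, path, numEdgePartitions = 16).cache()

// or spread an existing RDD over more partitions after the fact (this shuffles)
val vertices16 = graph.vertices.repartition(16)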

You mentioned coalesce, but by default that only reduces the number of partitions; see this question: Spark Coalesce More Partitions
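To illustrate the difference (a sketch over an arbitrary RDD named rdd):

val fewer = rdd.coalesce(1)                  // merges partitions, no shuffle; can only shrink
val more  = rdd.coalesce(64, shuffle = true) // with shuffle = true it can also grow
val same  = rdd.repartition(64)              // shorthand for coalesce(64, shuffle = true)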