Apache Spark running only one task on a single executor

Date: 2016-10-01 22:40:26

Tags: scala apache-spark apache-spark-sql rdd partitioning

I have a Spark job that reads from a database, performs a filter, a union and 2 joins, and finally writes the result back to the database.

However, the last stage runs only a single task on just one executor, out of 50 executors. I have tried increasing the number of partitions and using a hash partitioner, but with no luck.

After several hours of googling, it seems my data may be skewed, but I don't know how to fix it.

Any suggestions?
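
A minimal per-key count I could use to confirm the suspected skew (assuming content_hash_id is the dominant join key, which is only a guess) would be:

    // Hypothetical skew check on the loaded table: count rows per join key
    // and see whether a single key dominates.
    similarityDs.groupBy("content_hash_id")
                .count()
                .orderBy($"count".desc)
                .show(20)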

Specs:

  • Standalone cluster
  • 120 cores
  • 400G memory

Executors:

  • 30 executors (4 cores/executor)
  • 13G per executor
  • 4G driver memory

Code snippet:

    ...

  def main(args: Array[String]) {

    ....

    import sparkSession.implicits._


    val similarityDs = sparkSession.read.format("jdbc").options(opts).load
    similarityDs.createOrReplaceTempView("locator_clusters")

    val ClassifierDs = sparkSession.sql("select * " +
                                        "from locator_clusters where " +
                                        "relative_score >= 0.9 and " +
                                        "((content_hash_id is not NULL or content_hash_id <> '') " +
                                        "or (ref_hash_id is not NULL or ref_hash_id <> ''))").as[Hash].cache()



    def nnHash(tag: String) = (tag.hashCode & 0x7FFFFF).toLong


    val contentHashes = ClassifierDs.map(locator => (nnHash(locator.app_hash_id), Member(locator.app_hash_id,locator.app_hash_id, 0, 0, 0))).toDF("id", "member").dropDuplicates().alias("ch").as[IdMember]
    val similarHashes = ClassifierDs.map(locator => (nnHash(locator.content_hash_id), Member(locator.app_hash_id, locator.content_hash_id, 0, 0, 0))).toDF("id", "member").dropDuplicates().alias("sh").as[IdMember]


    val missingContentHashes = similarHashes.join(contentHashes, similarHashes("id") === contentHashes("id"), "right_outer").select("ch.*").toDF("id", "member").as[IdMember]

    val locatorHashesRdd = similarHashes.union(missingContentHashes).cache()

    val vertices = locatorHashesRdd.map{ case row: IdMember=> (row.id, row.member) }.cache()

    val toHashId = udf(nnHash(_:String))

    val edgesDf =  ClassifierDs.select(toHashId($"app_hash_id"), toHashId($"content_hash_id"), $"raw_score", $"relative_score").cache()

    val edges = edgesDf.map(e => Edge(e.getLong(0), e.getLong(1), (e.getDouble(2), e.getDouble(2)))).cache()


    val graph = Graph(vertices.rdd, edges.rdd).cache()

    val sc = sparkSession.sparkContext

    val ccVertices =  graph.connectedComponents.vertices.cache()


    val ccByClusters = vertices.rdd.join(ccVertices).map({
                          case (id, (hash, compId)) => (compId, hash.content_hash_id, hash.raw_score, hash.relative_score, hash.size)
                      }).toDF("id", "content_hash_id", "raw_score", "relative_score", "size").alias("cc")


    val verticesDf  = vertices.map(x => (x._1, x._2.app_hash_id, x._2.content_hash_id, x._2.raw_score, x._2.relative_score, x._2.size))
                              .toDF("id", "app_hash_id", "content_hash_id", "raw_score", "relative_score", "size").alias("v")

    val superClusters = verticesDf.join(ccByClusters, "id")
                                  .select($"v.app_hash_id", $"v.app_hash_id", $"cc.content_hash_id", $"cc.raw_score", $"cc.relative_score", $"cc.size")
                                  .toDF("ref_hash_id", "app_hash_id", "content_hash_id", "raw_score", "relative_score", "size")



    val prop = new Properties()
    prop.setProperty("user", M_DB_USER)
    prop.setProperty("password", M_DB_PASSWORD)
    prop.setProperty("driver", "org.postgresql.Driver")


    superClusters.write
                 .mode(SaveMode.Append)
                 .jdbc(s"jdbc:postgresql://$M_DB_HOST:$M_DB_PORT/$M_DATABASE", MERGED_TABLE, prop)


    sparkSession.stop()

Screenshot showing the single running executor: (screenshot)

Stderr from the executor:

16/10/01 18:53:42 INFO ShuffleBlockFetcherIterator: Getting 409 non-empty blocks out of 2000 blocks
16/10/01 18:53:42 INFO ShuffleBlockFetcherIterator: Started 59 remote fetches in 5 ms
16/10/01 18:53:42 INFO ShuffleBlockFetcherIterator: Getting 2000 non-empty blocks out of 2000 blocks
16/10/01 18:53:42 INFO ShuffleBlockFetcherIterator: Started 59 remote fetches in 9 ms
16/10/01 18:53:43 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 896.0 MB to disk (1  time so far)
16/10/01 18:53:46 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 896.0 MB to disk (2  times so far)
16/10/01 18:53:48 INFO Executor: Finished task 1906.0 in stage 769.0 (TID 260306). 3119 bytes result sent to driver
16/10/01 18:53:51 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (3  times so far)
16/10/01 18:53:57 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (4  times so far)
16/10/01 18:54:03 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (5  times so far)
16/10/01 18:54:09 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (6  times so far)
16/10/01 18:54:15 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (7  times so far)
16/10/01 18:54:21 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (8  times so far)
16/10/01 18:54:27 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (9  times so far)
16/10/01 18:54:33 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (10  times so far)
16/10/01 18:54:39 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (11  times so far)
16/10/01 18:54:44 INFO UnsafeExternalSorter: Thread 123 spilling sort data of 1792.0 MB to disk (12  times so far)

1 Answer:

Answer 0 (score: 3):

If data skew is indeed the problem, and all keys hash to a single partition, then you can try either a full Cartesian product or a broadcast join with pre-filtered data. Consider the following example:

val left = spark.range(1L, 100000L).select(lit(1L), rand(1)).toDF("k", "v")

left.select(countDistinct($"k")).show
// +-----------------+
// |count(DISTINCT k)|
// +-----------------+
// |                1|
// +-----------------+

Any attempt to join using data like this will lead to severe data skew. Now let's say there is another table like this:

val right = spark.range(1L, 100000L).select(
  (rand(3) * 1000).cast("bigint"), rand(1)
).toDF("k", "v")

right.select(countDistinct($"k")).show
// +-----------------+
// |count(DISTINCT k)|
// +-----------------+
// |             1000|
// +-----------------+

Given data like the above, we can try two approaches:

  • If we expect the number of records in right corresponding to the keys in left to be small, we can use a broadcast join:

    type KeyType = Long
    val keys = left.select($"k").distinct.as[KeyType].collect
    
    val rightFiltered = broadcast(right.where($"k".isin(keys: _*)))
    left.join(broadcast(rightFiltered), Seq("k"))
    
  • Otherwise we can perform a crossJoin followed by a filter, either with an explicit crossJoin, or with spark.sql.crossJoin.enabled set and an ordinary join:

    left.as("left")
      .crossJoin(rightFiltered.as("right"))
      .where($"left.k" === $"right.k")
    

    spark.conf.set("spark.sql.crossJoin.enabled", true)
    
    left.as("left")
      .join(rightFiltered.as("right"))
      .where($"left.k" === $"right.k")
    

If you have a mix of rare and common keys, you can separate the computation by performing a standard join on the rare keys and using one of the methods shown above for the common keys, for example as sketched below.
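
A minimal sketch of that split (assuming left and right frames with a key column k, as in the example above; the count threshold is arbitrary and would have to be tuned):

    // Count occurrences of each key on the skewed (left) side.
    val keyCounts = left.groupBy($"k").count()

    // Keys above an arbitrary threshold are treated as "common" (hot) keys.
    val commonKeys = keyCounts.where($"count" > 10000).select($"k")

    // Rare keys: a standard shuffle join behaves well.
    val rareJoined = left.join(broadcast(commonKeys), Seq("k"), "left_anti")
                         .join(right, Seq("k"))

    // Common keys: pre-filter both sides with the hot-key list and
    // broadcast the filtered right side.
    val commonLeft   = left.join(broadcast(commonKeys), Seq("k"), "left_semi")
    val commonRight  = right.join(broadcast(commonKeys), Seq("k"), "left_semi")
    val commonJoined = commonLeft.join(broadcast(commonRight), Seq("k"))

    // Combine both parts.
    val result = rareJoined.union(commonJoined)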

Another possible issue is the jdbc format. If you don't provide predicates, or a partitioning column together with bounds and the number of partitions, then all of the data is loaded by a single executor.
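
For example, a partitioned read would look roughly like the sketch below (the column name, bounds and partition count are placeholders that have to match the source table):

    val similarityDs = sparkSession.read
      .format("jdbc")
      .options(opts)
      .option("partitionColumn", "id")  // placeholder: a roughly uniformly distributed numeric column
      .option("lowerBound", "1")        // placeholder: minimum value of that column
      .option("upperBound", "1000000")  // placeholder: maximum value of that column
      .option("numPartitions", "120")   // e.g. one partition per core in the cluster
      .load()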