Spark SQL "Futures timed out after 300 seconds" when filtering

Asked: 2017-04-27 19:02:17

Tags: apache-spark-sql

I am hitting an exception while doing some seemingly simple Spark SQL filtering:

    // Build a temp view holding the eventIds to exclude
    someOtherDF
      .filter(/*somecondition*/)
      .select($"eventId")
      .createOrReplaceTempView("myTempTable")

    // Keep only the records whose eventId is not in the temp view
    records
      .filter("eventId NOT IN (SELECT eventId FROM myTempTable)")

Any idea how to solve this?

Notes:

  • someOtherDF contains between ~1M and 5M rows after filtering, and eventId is a GUID.
  • records contains between 40M and 50M rows.

Error:

Stacktrace:

org.apache.spark.SparkException: Exception thrown in awaitResult:
        at org.apache.spark.util.ThreadUtils$.awaitResultInForkJoinSafely(ThreadUtils.scala:215)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:124)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:124)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:123)
        at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec.doExecute(BroadcastNestedLoopJoinExec.scala:343)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
        at ...
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at org.apache.spark.util.ThreadUtils$.awaitResultInForkJoinSafely(ThreadUtils.scala:212)
    ... 84 more
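
For context, the "Caused by" line points at Spark's broadcast machinery: the NOT IN subquery is planned as a BroadcastNestedLoopJoinExec (visible in the stack trace), and building the broadcast side is bounded by spark.sql.broadcastTimeout, which defaults to 300 seconds. A minimal sketch of raising that timeout, assuming a SparkSession named spark is in scope (this only masks the symptom; the fix actually adopted is the join rewrite in the answer below):

    // Sketch only: give the broadcast exchange more time instead of
    // rewriting the query. The value is in seconds; the default is 300.
    spark.conf.set("spark.sql.broadcastTimeout", "1200")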

1 Answer:

Answer 0 (score: 0)

Using parts of the following:

1) How to exclude rows that don't join with another table?
2) Spark Duplicate columns in dataframe after join

I was able to solve my problem with a left outer join:

    val leftColKey = records("eventId")
    val rightColKey = someOtherDF("eventId")
    val toAppend: DataFrame = records
      .join(someOtherDF, leftColKey === rightColKey, "left_outer")
      .filter(rightColKey.isNull) // Keep rows without a match in 'someOtherDF'. See (1)
      .drop(rightColKey) // Needed to discard duplicate column. See (2)
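
As a hypothetical sanity check (not part of the original answer), printing the physical plan should now show an equi-join such as SortMergeJoin rather than the BroadcastNestedLoopJoin from the stack trace:

    // Hypothetical check: inspect the physical plan of the rewritten query.
    toAppend.explain()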

Performance is very good and the "Futures timed out" problem no longer occurs, presumably because the explicit equi-join lets Spark plan a shuffle-based join instead of broadcasting the subquery result for a nested-loop join.

Edit:

As a colleague pointed out to me, the "leftanti" join type is more efficient.
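
A minimal sketch of that variant, assuming the same records and someOtherDF DataFrames as above: a left anti join keeps only the left-side rows that have no match on the right and returns only left-side columns, so both the isNull filter and the drop become unnecessary.

    // Left anti join: keep rows of 'records' whose eventId has no match
    // in 'someOtherDF'; only columns from 'records' are returned.
    val toAppend: DataFrame = records
      .join(someOtherDF, records("eventId") === someOtherDF("eventId"), "leftanti")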