Timeout of 3600 seconds: Spark worker communicating with the driver in heartbeater

Time: 2018-01-12 03:31:10

Tags: apache-spark

I haven't configured any timeout values and am using the default settings. Where is this 3600-second timeout configured, and how do I fix it?

Error message:

18/01/10 13:51:44 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [3600 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
    at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:62)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:58)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
    at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:738)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply$mcV$sp(Executor.scala:767)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:767)
    at org.apache.spark.executor.Executor$$anon$2$$anonfun$run$1.apply(Executor.scala:767)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1948)
    at org.apache.spark.executor.Executor$$anon$2.run(Executor.scala:767)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [3600 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    ... 14 more

3 answers:

Answer 0 (score: 2)

The error message itself tells you:

    This timeout is controlled by spark.executor.heartbeatInterval

So the first thing to try is increasing this value. That can be done in several ways, for example raising it to 10000 seconds:

  • When using spark-submit, simply add the flag:

    --conf spark.executor.heartbeatInterval=10000s
    
  • You can add a line to spark-defaults.conf:

    spark.executor.heartbeatInterval 10000s
    
  • When creating a new SparkSession in your program, add the config parameter (Scala):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder
      .config("spark.executor.heartbeatInterval", "10000s")
      .getOrCreate()
    

If that doesn't help, it's also a good idea to try increasing the value of spark.network.timeout; it is another common source of problems with these kinds of timeouts. A sketch combining both settings follows below.
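
For example, here is a minimal spark-submit invocation that raises both values together; the class name, jar, and the concrete numbers are placeholders, not recommendations, and the heartbeat interval is deliberately kept well below the network timeout:

    spark-submit \
      --conf spark.executor.heartbeatInterval=60s \
      --conf spark.network.timeout=800s \
      --class com.example.MyApp \
      my-app.jar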

Answer 1 (score: -1)

As stated in the exception:

    This timeout is controlled by spark.executor.heartbeatInterval

So you can use spark.executor.heartbeatInterval to set the interval; according to the documentation, the default value is 10 seconds.

The Spark documentation for spark.executor.heartbeatInterval says:

    Interval between each executor's heartbeats to the driver. Heartbeats let
    the driver know that the executor is still alive and update it with metrics
    for in-progress tasks. spark.executor.heartbeatInterval should be
    significantly less than spark.network.timeout

For more details, check the Spark documentation here.
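
As a sketch of that constraint in code (the app name, master, and the interval/timeout values below are illustrative assumptions, not recommendations):

    import org.apache.spark.sql.SparkSession

    // Minimal sketch: keep the heartbeat interval well below the network
    // timeout, per the documentation quoted above.
    val spark = SparkSession.builder()
      .appName("heartbeat-config-sketch")
      .master("local[*]")
      .config("spark.executor.heartbeatInterval", "60s") // heartbeat every 60s
      .config("spark.network.timeout", "600s")           // stays well above the interval
      .getOrCreate()

    // Read back the effective values to confirm the relationship holds.
    println(spark.conf.get("spark.executor.heartbeatInterval"))
    println(spark.conf.get("spark.network.timeout"))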

Answer 2 (score: -1)

import org.apache.spark.sql.SparkSession

// Note: per the documentation quoted above, spark.executor.heartbeatInterval
// is expected to be significantly less than spark.network.timeout.
val spark = SparkSession.builder().appName("SQL_DataFrame")
  .master("local")
  .config("spark.network.timeout", "600s")
  .config("spark.executor.heartbeatInterval", "10000s")
  .getOrCreate()

Tested. It solved the problem.