Spark "Executor heartbeat timed out"

Asked: 2016-09-04 13:52:27

Tags: apache-spark

I have a simple, reproducible Spark error. (Spark 2.0 + Amazon EMR 5.0, FYI)

import pyspark
from pyspark.sql import SQLContext


def row_parse_function(line):
    # Custom row parsing function. Details omitted.
    return pyspark.sql.types.Row(...)


if __name__ == "__main__":
    spark_context = build_spark_context("max value bug isolation")
    spark_sql_context = SQLContext(spark_context)

    full_source_path = "s3a://my-bucket/ten_gb_data_file.txt.gz"

    # Tried changing partition parameter to no effect.
    raw_rdd = spark_context.textFile(full_source_path, 5000)
    row_rdd = raw_rdd.map(row_parse_function).filter(bool)
    data_frame = spark_sql_context.createDataFrame(row_rdd, AttribPixelMergedStructType)
    # Tried removing and changing this repartition call to no effect.
    data_frame.repartition(5000)
    # Removing this cache call makes this small sample work.
    data_frame.cache()
    data_frame_count = data_frame.count()

This fails with:

ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 169068 ms
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)

I understand that a heartbeat timeout error usually means the worker died, typically because it ran out of memory. How can I fix this?

1 Answer:

Answer 0 (score: 0):

You can increase the executor and network timeouts. Also, since it looks like you don't have enough memory to cache the data, persist it with a disk-backed storage level (MEMORY_AND_DISK_SER) so that whatever doesn't fit in memory is written to disk instead of staying memory-only.
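A rough sketch of that persist change against the asker's code (note that PySpark does not expose the *_SER levels separately, since Python objects are always serialized with pickle, so StorageLevel.MEMORY_AND_DISK is the closest equivalent here):

from pyspark import StorageLevel

# Instead of data_frame.cache() (memory-first), use an explicit disk-backed
# storage level so partitions that do not fit in executor memory spill to
# disk rather than pressuring the executor.
data_frame.persist(StorageLevel.MEMORY_AND_DISK)
data_frame_count = data_frame.count()

For the timeouts, you can pass the settings to spark-submit: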

--conf spark.network.timeout=10000000 --conf spark.executor.heartbeatInterval=10000000 --conf spark.driver.maxResultSize=4g
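The same settings can also be set from the driver script when the SparkContext is built; a minimal sketch mirroring the flags above (the app name comes from the question; note that the Spark docs advise keeping spark.executor.heartbeatInterval well below spark.network.timeout):

from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# Programmatic equivalent of the spark-submit --conf flags above.
conf = (SparkConf()
        .setAppName("max value bug isolation")
        .set("spark.network.timeout", "10000000")
        .set("spark.executor.heartbeatInterval", "10000000")
        .set("spark.driver.maxResultSize", "4g"))
spark_context = SparkContext(conf=conf)
spark_sql_context = SQLContext(spark_context)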