Executor heartbeat timed out after 125009 ms when running a Spark job from a Dataproc cluster

Date: 2020-06-11 19:22:16

Tags: shell apache-spark google-cloud-platform pyspark google-cloud-dataproc

Below is how I create my Dataproc cluster. When defining the properties I account for network timeouts by setting spark.network.timeout to 3600s, yet the executor heartbeat still timed out after 125009 ms. Why does this happen, and how can I avoid it?

The error I get is the one in the title: "Executor heartbeat timed out after 125009 ms". Here are the properties and the cluster-creation command:

default_parallelism=512

PROPERTIES="\
spark:spark.executor.cores=2,\
spark:spark.executor.memory=8g,\
spark:spark.executor.memoryOverhead=2g,\
spark:spark.driver.memory=6g,\
spark:spark.driver.maxResultSize=6g,\
spark:spark.kryoserializer.buffer=128m,\
spark:spark.kryoserializer.buffer.max=1024m,\
spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,\
spark:spark.default.parallelism=${default_parallelism},\
spark:spark.rdd.compress=true,\
spark:spark.network.timeout=3600s,\
spark:spark.rpc.message.maxSize=256,\
spark:spark.io.compression.codec=snappy,\
spark:spark.shuffle.service.enabled=true,\
spark:spark.sql.shuffle.partitions=256,\
spark:spark.sql.files.ignoreCorruptFiles=true,\
yarn:yarn.nodemanager.resource.cpu-vcores=8,\
yarn:yarn.scheduler.minimum-allocation-vcores=2,\
yarn:yarn.scheduler.maximum-allocation-vcores=4,\
yarn:yarn.nodemanager.vmem-check-enabled=false,\
capacity-scheduler:yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator"

gcloud beta dataproc clusters create $CLUSTER_NAME  \
    --zone $ZONE \
    --region $REGION \
    --master-machine-type n1-standard-4 \
    --master-boot-disk-size 500 \
    --worker-machine-type n1-standard-4 \
    --worker-boot-disk-size 500 \
    --num-workers 3 \
    --bucket $GCS_BUCKET \
    --image-version 1.4-ubuntu18 \
    --optional-components=ANACONDA,JUPYTER \
    --subnet=default \
    --enable-component-gateway \
    --properties "${PROPERTIES}" \
    --scopes 'https://www.googleapis.com/auth/cloud-platform'
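
Note that 125009 ms is very close to Spark's default spark.network.timeout of 120s, which suggests the custom properties may never have reached the cluster. A minimal way to check, assuming the standard describe output layout (the --format projection here is an assumption, not taken from the question):

# Dump the properties Dataproc recorded for the cluster; look for
# spark:spark.network.timeout=3600s in the output.
gcloud dataproc clusters describe "$CLUSTER_NAME" \
    --region "$REGION" \
    --format="yaml(config.softwareConfig.properties)"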

1 Answer:

Answer 0 (score: 4)

You should set spark.executor.heartbeatInterval. Its default value is 10s, and the Spark documentation notes that it should be significantly less than spark.network.timeout.

https://spark.apache.org/docs/latest/configuration.html
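
As a minimal sketch of how this could be applied to the setup above (the 60s value is an illustrative choice, not taken from the answer), the property could be appended to the PROPERTIES string:

spark:spark.executor.heartbeatInterval=60s,\

or passed per job at submit time (your_job.py is a placeholder name):

# Submit a PySpark job with an explicit heartbeat interval; job-level
# --properties take plain Spark keys without the "spark:" prefix used
# at cluster-creation time.
gcloud dataproc jobs submit pyspark your_job.py \
    --cluster "$CLUSTER_NAME" \
    --region "$REGION" \
    --properties spark.executor.heartbeatInterval=60s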