Spark always skips one job

Date: 2017-11-09 12:00:49

Tags: hadoop apache-spark spark-streaming yarn hortonworks-data-platform

We are running a Spark Streaming job with the repartition count set to 20. I am reading data from a Kafka topic that has only 1 partition, so I use repartition to get more parallelism across the executors and to control the message rate.

The Spark UI always shows 1 skipped job. I have tried changing the repartition count to 40, 15, and so on, but it always shows 1 skipped job.
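
For context, the method shown below appears to be a VoidFunction passed to foreachRDD on a direct Kafka stream. As a minimal sketch (not taken from the original job), such a stream is typically created with the Kafka 0-10 integration roughly like this; the broker list, topic name, and consumer group are placeholders:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class KafkaStreamSetup {

        // Builds a direct stream over a single-partition topic; parallelism is
        // added later by repartitioning inside foreachRDD, as in the snippet below.
        static JavaInputDStream<ConsumerRecord<String, byte[]>> createStream(JavaStreamingContext jssc) {
            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "broker1:9092");           // placeholder broker list
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", ByteArrayDeserializer.class);
            kafkaParams.put("group.id", "placeholder-group");               // placeholder consumer group
            kafkaParams.put("enable.auto.commit", false);                   // offsets are handled manually

            return KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, byte[]>Subscribe(Arrays.asList("placeholder-topic"), kafkaParams));
        }
    }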

Below is the repartition code snippet:

    @Override
    public void call(JavaRDD<ConsumerRecord<String, byte[]>> consumerStreamRdd) throws Exception {
        // Capture the Kafka offset ranges before the shuffle; repartition() loses the offset information.
        OffsetRange[] offsetRanges = ((HasOffsetRanges) consumerStreamRdd.rdd()).offsetRanges();
        // Spread the single Kafka partition across 20 partitions for more executor parallelism.
        JavaRDD<String> jsonRdd = consumerStreamRdd.repartition(20).map(new Function<ConsumerRecord<String, byte[]>, String>() {

            private static final long serialVersionUID = 1L;

            @Override
            public String call(ConsumerRecord<String, byte[]> kafkaRecord) throws Exception {
                // Placeholder body: the original snippet is truncated here; the
                // record value is presumably converted to a JSON string.
                return new String(kafkaRecord.value(), java.nio.charset.StandardCharsets.UTF_8);
            }
        });
        // ... further processing of jsonRdd follows in the original job.
    }
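
The snippet above captures offsetRanges before the shuffle, but the rest of the method is cut off in the question. With the Kafka 0-10 direct stream, the usual pattern is to commit those offsets back once the batch has been processed. A minimal sketch, assuming the original JavaInputDStream handle (called directStream here, not shown in the question) is still in scope:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.kafka010.CanCommitOffsets;
    import org.apache.spark.streaming.kafka010.OffsetRange;

    public class OffsetCommitter {
        // Commits the offsets captured from the original (pre-repartition) RDD
        // back to Kafka once the batch has been fully processed.
        static void commit(JavaInputDStream<ConsumerRecord<String, byte[]>> directStream,
                           OffsetRange[] offsetRanges) {
            ((CanCommitOffsets) directStream.inputDStream()).commitAsync(offsetRanges);
        }
    }

With enable.auto.commit left disabled, this keeps the committed offsets in step with what the job has actually processed.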

I have the following questions:

  1. Is there any impact from this, such as data loss?

  2. How can I avoid these skipped jobs?

  3. Spark Skipped Jobs

    Below is the Spark configuration:

    #!/bin/bash
    
    export SPARK_MAJOR_VERSION=2
    
    # Minimum TODOs on a per job basis:
    # 1. define name, application jar path, main class, queue and log4j-yarn.properties path
    # 2. remove properties not applicable to your Spark version (Spark 1.x vs. Spark 2.x)
    # 3. tweak num_executors, executor_memory (+ overhead), and backpressure settings
    
    # the two most important settings:
    num_executors=4
    executor_memory=16g
    
    # 3-5 cores per executor is a good default balancing HDFS client throughput vs. JVM overhead
    # see http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
    executor_cores=2
    
    # backpressure
    receiver_min_rate=1
    receiver_max_rate=10
    receiver_initial_rate=10
    
    /usr/hdp/2.6.1.0-129/spark2/bin/spark-submit --master yarn --deploy-mode cluster \
      --name production \
      --class com.Data \
      --driver-memory 16g \
      --num-executors ${num_executors} --executor-cores ${executor_cores} --executor-memory ${executor_memory} \
      --files log4j-yarn-warid-br1-ccn-data.properties \
      --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j-yarn-warid-br1-ccn-data.properties" \
      --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-yarn-warid-br1-ccn-data.properties" \
      --conf spark.serializer=org.apache.spark.serializer.KryoSerializer `# Kryo Serializer is much faster than the default Java Serializer` \
      --conf spark.kryoserializer.buffer.max=1g \
      --conf spark.locality.wait=30 \
      --conf spark.task.maxFailures=8 `# Increase max task failures before failing job (Default: 4)` \
      --conf spark.ui.killEnabled=true `# Allow killing of stages and corresponding jobs from the Spark UI` \
      --conf spark.logConf=true `# Log Spark Configuration in driver log for troubleshooting` \
    `# SPARK STREAMING CONFIGURATION` \
      --conf spark.scheduler.mode=FAIR \
      --conf spark.default.parallelism=32 \
      --conf spark.streaming.blockInterval=75 `# [Optional] Tweak to balance data processing parallelism vs. task scheduling overhead (Default: 200ms)` \
      --conf spark.streaming.receiver.writeAheadLog.enable=true `# Prevent data loss on driver recovery` \
      --conf spark.streaming.backpressure.enabled=false \
      --conf spark.streaming.kafka.maxRatePerPartition=${receiver_max_rate} `# [Spark 1.x]: Corresponding max rate setting for Direct Kafka Streaming (Default: not set)` \
    `# YARN CONFIGURATION` \
      --conf spark.yarn.driver.memoryOverhead=10240 `# [Optional] Set if --driver-memory < 5GB` \
      --conf spark.yarn.executor.memoryOverhead=10240 `# [Optional] Set if --executor-memory < 10GB` \
      --conf spark.yarn.maxAppAttempts=4 `# Increase max application master attempts (needs to be <= yarn.resourcemanager.am.max-attempts in YARN, which defaults to 2) (Default: yarn.resourcemanager.am.max-attempts)` \
      --conf spark.yarn.am.attemptFailuresValidityInterval=1h `# Attempt counter considers only the last hour (Default: (none))` \
      --conf spark.yarn.max.executor.failures=$((8 * ${num_executors})) `# Increase max executor failures (Default: max(numExecutors * 2, 3))` \
      --conf spark.yarn.executor.failuresValidityInterval=1h `# Executor failure counter considers only the last hour` \
      --conf spark.task.maxFailures=8 \
      --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:ConcGCThreads=20 -XX:MaxGCPauseMillis=800" \
      --conf spark.speculation=false \
    /home/runscripts/production.jar
    

0 Answers:

No answers yet.