I am trying to optimize a long-running job in Spark, and it seems that no matter how many executors the cluster runs with, the job still takes about 3 hours to complete.
I checked the Ganglia UI and found that there are idle executors (!). What am I doing wrong?
Cluster setup:
Spark configuration:
spark.executor.memory: 70G
spark.executor.cores: 70
spark.rdd.compress: false
spark.io.compression.codec: org.apache.spark.io.SnappyCompressionCodec
spark.io.compression.snappy.blockSize: 32768
spark.serializer: org.apache.spark.serializer.KryoSerializer
spark.kryo.referenceTracking: true
spark.kryo.registrationRequired: false
spark.hadoop.validateOutputSpecs: false
spark.memory.fraction: 0.7
spark.memory.storageFraction: 0.5
spark.scheduler.allocation.file: /home/hadoop/fairscheduler.xml
spark.scheduler.mode: FAIR
spark.cleaner.referenceTracking.blocking: false
spark.cleaner.periodicGC.interval: 3min
spark.task.cpus: 2
spark.executor.instances: 4
spark.yarn.executor.memoryOverhead: 45000
spark.default.parallelism: 64
spark.sql.shuffle.partitions: 64
spark.speculation: false
spark.speculation.multiplier: 5
spark.speculation.quantile: 0.80
spark.speculation.interval: 1000ms
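For scale, here is the back-of-the-envelope task-slot arithmetic implied by the settings above (plain arithmetic based on Spark's rule that each running task occupies spark.task.cpus cores; the numbers are copied from my configuration):

# Task-slot arithmetic for the configuration listed above.
executor_instances = 4   # spark.executor.instances
executor_cores = 70      # spark.executor.cores
task_cpus = 2            # spark.task.cpus

# Each executor can run floor(executor_cores / task_cpus) tasks concurrently.
slots_per_executor = executor_cores // task_cpus            # 35
total_task_slots = executor_instances * slots_per_executor  # 140

# spark.default.parallelism and spark.sql.shuffle.partitions are both 64,
# so a typical stage produces only 64 tasks for those 140 slots.
tasks_per_stage = 64
print(f"task slots: {total_task_slots}, tasks per stage: {tasks_per_stage}")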
Also, I tried repartitioning the dataframe, but that did not help:
dataframe = load_raw_data(reader, id).repartition(64)
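To be concrete about what I mean by repartitioning, here is a sketch of the variants of that call (the partition count 256 and the column name "some_key" are placeholder examples; load_raw_data, reader, and id are from my own code):

# Variants of the repartition call above (illustrative sketch;
# load_raw_data, reader, and id come from my own code).
dataframe = load_raw_data(reader, id)

# A higher partition count, so one stage can fill more task slots:
dataframe = dataframe.repartition(256)

# Or repartition by a column to spread skewed keys across partitions
# ("some_key" is a placeholder for one of my columns):
dataframe = dataframe.repartition(64, "some_key")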