Spark SQL: Why two jobs for one query?

Date: 2016-06-10 19:37:56

Tags: apache-spark apache-spark-sql unsafe parquet

Experiment

I tried the following snippet in the Spark shell.

Spark version: 1.6.1

    val soDF = sqlContext.read.parquet("/batchPoC/saleOrder")   // This has 45 files
    soDF.registerTempTable("so")
    sqlContext.sql("select dpHour, count(*) as cnt from so group by dpHour order by cnt").write.parquet("/out/")

The Physical Plan is:

    == Physical Plan ==
    Sort [cnt#59L ASC], true, 0
    +- ConvertToUnsafe
       +- Exchange rangepartitioning(cnt#59L ASC,200), None
          +- ConvertToSafe
             +- TungstenAggregate(key=[dpHour#38], functions=[(count(1),mode=Final,isDistinct=false)], output=[dpHour#38,cnt#59L])
                +- TungstenExchange hashpartitioning(dpHour#38,200), None
                   +- TungstenAggregate(key=[dpHour#38], functions=[(count(1),mode=Partial,isDistinct=false)], output=[dpHour#38,count#63L])
                      +- Scan ParquetRelation[dpHour#38] InputPaths: hdfs://hdfsNode:8020/batchPoC/saleOrder

For this query, I got two jobs: Job 9 and Job 10.

[image: the two jobs in the Spark UI]
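The same plan can also be printed without triggering the write, which makes it easier to compare against what the UI shows. A minimal sketch, assuming the Spark-shell session above with the so temp table registered (aggDF is just an illustrative name):

    // Print the parsed/analyzed/optimized logical plans and the physical plan.
    // explain only prints the plan; it does not run the query itself.
    val aggDF = sqlContext.sql(
      "select dpHour, count(*) as cnt from so group by dpHour order by cnt")
    aggDF.explain(true)   // extended = true includes the logical plans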

For Job 9, the DAG is:

[image: DAG visualization for Job 9]

For Job 10, the DAG is:

[image: DAG visualization for Job 10]

Observations

  1. Apparently, there are two jobs for one query.
  2. Stage-16 (marked as Stage-14 in Job 9) is skipped in Job 10.
  3. Stage-15's last RDD[48] is the same as Stage-17's last RDD[49]. How? I saw in the logs that after Stage-15's execution, RDD[48] is registered as RDD[49].
  4. Stage-17 is shown in the driver-logs but was never executed on the Executors. The driver-logs show tasks being started, but when I looked at the Yarn container's logs, there was no evidence of any task from Stage-17 being received.
  5. Logs supporting these observations (driver-logs only; I lost the executor logs due to a later crash). It can be seen that before Stage-17 starts, RDD[49] is registered:

    16/06/10 22:11:22 INFO TaskSetManager: Finished task 196.0 in stage 15.0 (TID 1121) in 21 ms on slave-1 (199/200)
    16/06/10 22:11:22 INFO TaskSetManager: Finished task 198.0 in stage 15.0 (TID 1123) in 20 ms on slave-1 (200/200)
    16/06/10 22:11:22 INFO YarnScheduler: Removed TaskSet 15.0, whose tasks have all completed, from pool
    16/06/10 22:11:22 INFO DAGScheduler: ResultStage 15 (parquet at <console>:26) finished in 0.505 s
    16/06/10 22:11:22 INFO DAGScheduler: Job 9 finished: parquet at <console>:26, took 5.054011 s
    16/06/10 22:11:22 INFO ParquetRelation: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
    16/06/10 22:11:22 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
    16/06/10 22:11:22 INFO DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
    16/06/10 22:11:22 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
    16/06/10 22:11:22 INFO SparkContext: Starting job: parquet at <console>:26
    16/06/10 22:11:22 INFO DAGScheduler: Registering RDD 49 (parquet at <console>:26)
    16/06/10 22:11:22 INFO DAGScheduler: Got job 10 (parquet at <console>:26) with 25 output partitions
    16/06/10 22:11:22 INFO DAGScheduler: Final stage: ResultStage 18 (parquet at <console>:26)
    16/06/10 22:11:22 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 17)
    16/06/10 22:11:22 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 17)
    16/06/10 22:11:22 INFO DAGScheduler: Submitting ShuffleMapStage 17 (MapPartitionsRDD[49] at parquet at <console>:26), which has no missing parents
    16/06/10 22:11:22 INFO MemoryStore: Block broadcast_25 stored as values in memory (estimated size 17.4 KB, free 512.3 KB)
    16/06/10 22:11:22 INFO MemoryStore: Block broadcast_25_piece0 stored as bytes in memory (estimated size 8.9 KB, free 521.2 KB)
    16/06/10 22:11:22 INFO BlockManagerInfo: Added broadcast_25_piece0 in memory on 172.16.20.57:44944 (size: 8.9 KB, free: 517.3 MB)
    16/06/10 22:11:22 INFO SparkContext: Created broadcast 25 from broadcast at DAGScheduler.scala:1006
    16/06/10 22:11:22 INFO DAGScheduler: Submitting 200 missing tasks from ShuffleMapStage 17 (MapPartitionsRDD[49] at parquet at <console>:26)
    16/06/10 22:11:22 INFO YarnScheduler: Adding task set 17.0 with 200 tasks
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 0.0 in stage 17.0 (TID 1125, slave-1, partition 0,NODE_LOCAL, 1988 bytes)
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 1.0 in stage 17.0 (TID 1126, slave-2, partition 1,NODE_LOCAL, 1988 bytes)
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 2.0 in stage 17.0 (TID 1127, slave-1, partition 2,NODE_LOCAL, 1988 bytes)
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 3.0 in stage 17.0 (TID 1128, slave-2, partition 3,NODE_LOCAL, 1988 bytes)
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 4.0 in stage 17.0 (TID 1129, slave-1, partition 4,NODE_LOCAL, 1988 bytes)
    16/06/10 22:11:23 INFO TaskSetManager: Starting task 5.0 in stage 17.0 (TID 1130, slave-2, partition 5,NODE_LOCAL, 1988 bytes)

Questions

  1. Why two Jobs? What is the intention behind breaking a DAG into two Jobs?
  2. Job 10's DAG looks complete for the query execution. Is there anything specific that Job 9 is doing?
  3. Why is Stage-17 not skipped? It looks like dummy tasks are created; do they have any purpose?
  4. Later, I tried another, rather simpler query. Unexpectedly, it created 3 Jobs.

    sqlContext.sql("select dpHour from so order by dphour").write.parquet("/out2/")

1 Answer:

Answer 0 (score: 7):

When you use the high-level DataFrame/Dataset APIs, you leave it to Spark to determine the execution plan, including the job/stage chunking. This depends on many factors, such as execution parallelism and cached/persisted data structures. In future versions of Spark, as the optimizer grows more sophisticated, you may see even more jobs per query, for example when some data sources are sampled to parameterize cost-based execution optimization.
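If you want to see exactly how your Spark version chunks a particular query into jobs and stages, one option is to register a scheduler listener before running it. A rough sketch against the Spark 1.6 listener API (jobLogger is just an illustrative name):

    import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

    // Print every job the DAGScheduler starts, together with the stages it contains.
    val jobLogger = new SparkListener {
      override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
        val stageIds = jobStart.stageInfos.map(_.stageId).mkString(", ")
        println(s"Job ${jobStart.jobId} started with ${jobStart.stageInfos.size} stage(s): $stageIds")
      }
    }
    sc.addSparkListener(jobLogger)

Run the query afterwards and the listener output can be lined up against the jobs shown in the UI.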

For example, I have frequently (but not always) seen a write generate a job separate from the processing that involves shuffles.
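One way to verify this for a particular action is to tag it with a job group and then ask the status tracker how many jobs were scheduled under that group. A sketch, assuming the so temp table from the question; the group name and output path are arbitrary placeholders:

    // Jobs submitted from this thread inherit the job group, so the status
    // tracker can report how many jobs the single write below produced.
    sc.setJobGroup("one-sql-write", "jobs triggered by a single SQL write")
    sqlContext.sql("select dpHour, count(*) as cnt from so group by dpHour order by cnt")
      .write.parquet("/tmp/out-jobcount")
    val jobIds = sc.statusTracker.getJobIdsForGroup("one-sql-write")
    println(s"Jobs scheduled for this write: ${jobIds.length}")
    sc.clearJobGroup()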

Bottom line: if you are using the high-level APIs, it rarely pays to dig into the specific chunking unless you have to do extremely detailed optimization on very large data volumes. Job startup costs are extremely low compared to processing and output.

If, on the other hand, you are curious about Spark internals, read the optimizer code and engage with the Spark developer mailing list.
