Why is my partitioned Parquet data slower than the non-partitioned data?

Time: 2018-04-11 13:42:31

Tags: apache-spark parquet

My understanding is that if I partition my data on a column, querying it should be faster. But when I try it, it turns out to be slower instead. Why?

I have a dataframe that I try writing both with a yearmonth partition and without one.

So I have one dataset partitioned by creation_yearmonth:

# shuffle by the partition column first, so each yearmonth directory
# should end up with a single part file
questionsCleanedDf.repartition("creation_yearmonth") \
    .write.partitionBy('creation_yearmonth') \
    .parquet('wasb://.../parquet/questions.parquet')

And I have another one that is not partitioned:

questionsCleanedDf \
    .write \
    .parquet('wasb://.../parquet/questions_nopartition.parquet')
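
For context, partitionBy writes one subdirectory per distinct value of the column, so the two outputs should look roughly like this (a layout sketch; the actual part-file names and the set of yearmonth values will differ):

    questions.parquet/
        creation_yearmonth=201606/
            part-00000-....snappy.parquet
        creation_yearmonth=201607/
            part-00000-....snappy.parquet
        ...

    questions_nopartition.parquet/
        part-00000-....snappy.parquet
        part-00001-....snappy.parquet
        ...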

Then I try creating a dataframe from each of these two Parquet outputs and running the same query:

# partitioned dataset
questionsDf = spark.read.parquet('wasb://.../parquet/questions.parquet')

# unpartitioned dataset (presumably loaded in a separate run, since the same name is reused)
questionsDf = spark.read.parquet('wasb://.../parquet/questions_nopartition.parquet')

The query:

spark.sql("""
    SELECT * FROM questions
    WHERE creation_yearmonth = 201606
""")

The unpartitioned one seems to be consistently faster or about the same (~2-3 s), while the partitioned one is slower (~3-4 s).

I tried running an explain.

For the partitioned dataset:

== Physical Plan ==
*FileScan parquet [id#6404,title#6405,tags#6406,owner_user_id#6407,accepted_answer_id#6408,view_count#6409,answer_count#6410,comment_count#6411,creation_date#6412,favorite_count#6413,creation_yearmonth#6414] Batched: false, Format: Parquet, Location: InMemoryFileIndex[wasb://data@cs4225.blob.core.windows.net/parquet/questions.parquet], PartitionCount: 1, PartitionFilters: [isnotnull(creation_yearmonth#6414), (creation_yearmonth#6414 = 201606)], PushedFilters: [], ReadSchema: struct<id:int,title:string,tags:array<string>,owner_user_id:int,accepted_answer_id:int,view_count...

It shows PartitionCount: 1. Since in this case it can go directly to the one matching partition, shouldn't it be faster?
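
One way to check whether small files are adding overhead (a sketch that goes through Spark's internal JVM gateway to reach the Hadoop FileSystem API, so treat it as an assumption rather than a stable interface; the wasb://... placeholder is the one from the question):

    # List the part files inside the single matched partition directory;
    # many tiny files mean per-file open costs even when only one
    # directory is scanned.
    hadoop_conf = spark._jsc.hadoopConfiguration()
    Path = spark._jvm.org.apache.hadoop.fs.Path
    part_dir = Path('wasb://.../parquet/questions.parquet/creation_yearmonth=201606')
    fs = part_dir.getFileSystem(hadoop_conf)
    for status in fs.listStatus(part_dir):
        print(status.getPath().getName(), status.getLen())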

For the non-partitioned one:

== Physical Plan ==
*Project [id#6440, title#6441, tags#6442, owner_user_id#6443, accepted_answer_id#6444, view_count#6445, answer_count#6446, comment_count#6447, creation_date#6448, favorite_count#6449, creation_yearmonth#6450]
+- *Filter (isnotnull(creation_yearmonth#6450) && (creation_yearmonth#6450 = 201606))
   +- *FileScan parquet [id#6440,title#6441,tags#6442,owner_user_id#6443,accepted_answer_id#6444,view_count#6445,answer_count#6446,comment_count#6447,creation_date#6448,favorite_count#6449,creation_yearmonth#6450] Batched: false, Format: Parquet, Location: InMemoryFileIndex[wasb://data@cs4225.blob.core.windows.net/parquet/questions_nopartition.parquet], PartitionFilters: [], PushedFilters: [IsNotNull(creation_yearmonth), EqualTo(creation_yearmonth,201606)], ReadSchema: struct<id:int,title:string,tags:array<string>,owner_user_id:int,accepted_answer_id:int,view_count...

Also very surprising: the dataset originally has dates as strings, so I need to run a query like this:

spark.sql("""
    SELECT * FROM questions
    WHERE CAST(creation_date AS date) BETWEEN '2017-06-01' AND '2017-07-01'
""").show(20, False)

I expected this to be even slower, but it turns out it performs the best, ~1-2 s. Why is that? I would have thought that in this case it has to cast every row?

The explain output here:

== Physical Plan ==
*Project [id#6521, title#6522, tags#6523, owner_user_id#6524, accepted_answer_id#6525, view_count#6526, answer_count#6527, comment_count#6528, creation_date#6529, favorite_count#6530]
+- *Filter ((isnotnull(creation_date#6529) && (cast(cast(creation_date#6529 as date) as string) >= 2017-06-01)) && (cast(cast(creation_date#6529 as date) as string) <= 2017-07-01))
   +- *FileScan parquet [id#6521,title#6522,tags#6523,owner_user_id#6524,accepted_answer_id#6525,view_count#6526,answer_count#6527,comment_count#6528,creation_date#6529,favorite_count#6530] Batched: false, Format: Parquet, Location: InMemoryFileIndex[wasb://data@cs4225.blob.core.windows.net/filtered/questions.parquet], PartitionFilters: [], PushedFilters: [IsNotNull(creation_date)], ReadSchema: struct<id:string,title:string,tags:array<string>,owner_user_id:string,accepted_answer_id:string,v...
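
(Note on the plan above: only IsNotNull is pushed down to Parquet; the range comparison runs per row after a double cast. A hedged sketch of storing the column as a real DATE, with questions_typed.parquet as a hypothetical output path, which would avoid the per-row cast and, on Spark versions that support date predicate pushdown, let the range filter reach the Parquet reader as well:)

    from pyspark.sql.functions import col, to_date

    # Store creation_date as a DATE instead of a string so the BETWEEN
    # bounds compare natively at query time.
    typedDf = questionsCleanedDf.withColumn('creation_date',
                                            to_date(col('creation_date')))
    typedDf.write.parquet('wasb://.../parquet/questions_typed.parquet')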

1 Answer:

Answer 0 (score: 1):

Over-partitioning can actually hurt performance:

  If a column has only a few rows matching each value, the number of directories to process can become a limiting factor, and the data files in each directory may be too small to take advantage of the Hadoop mechanism of transferring data in multi-megabyte blocks.

The excerpt is taken from the documentation of a different Hadoop component, Impala, but the argument it makes should hold for all components of the Hadoop stack.

I think that regardless of the partitioning scheme, the advantages of partitioning will not be apparent until the table grows well beyond 900 MB.
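
Following that reasoning, a coarser scheme helps keep each data file closer to the multi-megabyte sizes the excerpt refers to. A sketch (creation_year is a derived column and questions_by_year.parquet a hypothetical path): partitioning by year instead of yearmonth cuts the directory count roughly twelvefold:

    from pyspark.sql.functions import col, to_date, year

    # Derive the year and partition on it; about 12x fewer directories
    # than yearmonth, so larger files per directory.
    questionsCleanedDf \
        .withColumn('creation_year', year(to_date(col('creation_date')))) \
        .repartition('creation_year') \
        .write.partitionBy('creation_year') \
        .parquet('wasb://.../parquet/questions_by_year.parquet')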