Does changing the partition size affect the output of a query?

Asked: 2019-01-18 06:07:41

Tags: apache-spark

I am working through some examples from a book on Spark. In these examples, I read some data from .csv files:

val staticDataFrame = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/data/retail-data/by-day/*.csv")

Then I create a SQL view:

staticDataFrame.createOrReplaceTempView("retail_data")
val staticSchema = staticDataFrame.schema

Then I run the query:

import org.apache.spark.sql.functions.{window, column, desc, col}
staticDataFrame
  .selectExpr(
    "CustomerId",
    "(UnitPrice * Quantity) as total_cost",
    "InvoiceDate")
  .groupBy(
    col("CustomerId"), window(col("InvoiceDate"), "1 day"))
  .sum("total_cost")
  .show(5)

I get the following output:

+----------+--------------------+-----------------+
|CustomerId|              window|  sum(total_cost)|
+----------+--------------------+-----------------+
|   16057.0|[2011-12-05 00:00...|            -37.6|
|   14126.0|[2011-11-29 00:00...|643.6300000000001|
|   13500.0|[2011-11-16 00:00...|497.9700000000001|
|   17160.0|[2011-11-08 00:00...|516.8499999999999|
|   15608.0|[2011-11-11 00:00...|            122.4|
+----------+--------------------+-----------------+

Then I change the number of shuffle partitions and run the same query again, but I get different output:

scala> spark.conf.set("spark.sql.shuffle.partitions","5");

scala> staticDataFrame.
     | selectExpr(
     | "CustomerId",
     | "(UnitPrice * Quantity) as total_cost",
     | "InvoiceDate").
     | groupBy(
     | col("CustomerId"),window(col("InvoiceDate"),"1 day")).
     | sum("total_cost").
     | show(5)


+----------+--------------------+------------------+
|CustomerId|              window|   sum(total_cost)|
+----------+--------------------+------------------+
|   14075.0|[2011-12-05 00:00...|316.78000000000003|
|   18180.0|[2011-12-05 00:00...|            310.73|
|   15358.0|[2011-12-05 00:00...| 830.0600000000003|
|   15392.0|[2011-12-05 00:00...|304.40999999999997|
|   15290.0|[2011-12-05 00:00...|263.02000000000004|
+----------+--------------------+------------------+
only showing top 5 rows

Is this expected behavior? Should the output be the same in both cases?

1 Answer:

Answer (score: 1)

How many records do you have in your DataFrame? It doesn't matter.

I believe this is expected behavior: you are only showing 5 records, and without an explicit ordering show(5) simply displays whichever rows happen to come back first, which depends on how the data is partitioned. So the second query returns a different set of rows after the shuffle-partition setting changes.

Try sorting on some column and then taking the top 5 results; that should give you the same output before and after the repartitioning, as sketched below.
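For example, a minimal sketch of that suggestion against your query (assuming the default aggregate column name sum(total_cost) and sorting descending just to pick the largest daily totals):

import org.apache.spark.sql.functions.{window, col, desc}

staticDataFrame
  .selectExpr(
    "CustomerId",
    "(UnitPrice * Quantity) as total_cost",
    "InvoiceDate")
  .groupBy(col("CustomerId"), window(col("InvoiceDate"), "1 day"))
  .sum("total_cost")
  // an explicit ordering makes the displayed top 5 rows deterministic,
  // regardless of spark.sql.shuffle.partitions
  .orderBy(desc("sum(total_cost)"))
  .show(5)

With the ordering in place, changing spark.sql.shuffle.partitions only affects the parallelism of the shuffle, not which rows show(5) displays.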

Thanks