pyspark: count distinct over a window

Date: 2017-08-24 19:04:28

Tags: count pyspark window-functions distinct-values

I just tried to run countDistinct over a window and got this error:

AnalysisException: u'Distinct window functions are not supported: count(distinct color#1926)

Is there a way to count distinct values over a window in pyspark?

Here is some example code:

from pyspark.sql import Window
from pyspark.sql import functions as F

#function to calculate number of seconds from number of days
days = lambda i: i * 86400

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00", "orange"),
                    (13, "2017-03-15T12:27:18+00:00", "red"),
                    (25, "2017-03-18T11:27:18+00:00", "red")],
                    ["dollars", "timestampGMT", "color"])

df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))

df = df.withColumn('distinct_color_count_over_the_last_week', F.countDistinct("color").over(w))

df.show()

Here is the output I would like to see:

+-------+--------------------+------+---------------------------------------+
|dollars|        timestampGMT| color|distinct_color_count_over_the_last_week|
+-------+--------------------+------+---------------------------------------+
|     17|2017-03-10 15:27:...|orange|                                      1|
|     13|2017-03-15 12:27:...|   red|                                      2|
|     25|2017-03-18 11:27:...|   red|                                      1|
+-------+--------------------+------+---------------------------------------+

3 answers:

Answer 0 (score: 29)

I found that I can use a combination of the collect_set and size functions to mimic the behavior of countDistinct over a window:

from pyspark.sql import Window
from pyspark.sql import functions as F

#function to calculate number of seconds from number of days
days = lambda i: i * 86400

#create some test data
df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00", "orange"),
                    (13, "2017-03-15T12:27:18+00:00", "red"),
                    (25, "2017-03-18T11:27:18+00:00", "red")],
                    ["dollars", "timestampGMT", "color"])

#convert string timestamp to timestamp type             
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))

#use collect_set and size functions to perform countDistinct over a window
df = df.withColumn('distinct_color_count_over_the_last_week', F.size(F.collect_set("color").over(w)))

df.show()

This produces the distinct count of colors over the previous week of records:

+-------+--------------------+------+---------------------------------------+
|dollars|        timestampGMT| color|distinct_color_count_over_the_last_week|
+-------+--------------------+------+---------------------------------------+
|     17|2017-03-10 15:27:...|orange|                                      1|
|     13|2017-03-15 12:27:...|   red|                                      2|
|     25|2017-03-18 11:27:...|   red|                                      1|
+-------+--------------------+------+---------------------------------------+

Answer 1 (score: 4)

@Bob Swain's answer is nice and works! Since Spark version 2.1, Spark has offered approx_count_distinct, an equivalent of the countDistinct function, which is more efficient and, most importantly, supports counting over a window.

Here is the replacement code:

#approx_count_distinct supports a window
df = df.withColumn('distinct_color_count_over_the_last_week', F.approx_count_distinct("color").over(w))

For columns with small cardinality, the result should be the same as countDistinct. When the dataset grows a lot, you should consider adjusting the parameter rsd, the maximum estimation error allowed, which lets you tune the accuracy/performance trade-off.

Answer 2 (score: -1)

The other answers are outdated. I just ran into this problem, and the simplest solution is to use size(collect_set()): that is, build an array column of the unique items in the window, then count how many items are in the resulting set. Replace

df = df.withColumn('distinct_color_count_over_the_last_week', F.countDistinct("color").over(w))

with

df = df.withColumn('distinct_color_count_over_the_last_week', F.size(F.collect_set("color").over(w)))

It is important to note that if your entries contain nulls, they will not be added to the resulting set and therefore will not be counted. If that is not what you want, consider coalescing your entry column (color in this case) to some default value that you are sure will never appear in your data, perhaps an empty string or really any dummy text.