Top N values altered in spark-sql

Date: 2019-03-25 12:46:00

Tags: apache-spark apache-spark-sql

I need to display the top five states and cities based on the total number of reviews (review_count in the original schema shown below). A description of my DataFrame (loaded from a JSON file) is given below.

+-------------+--------------------+-------+
|     col_name|           data_type|comment|
+-------------+--------------------+-------+
|   attributes|struct<Accepts Cr...|   null|
|         city|              string|   null|
|neighborhoods|       array<string>|   null|
|         open|             boolean|   null|
| review_count|              bigint|   null|
|        stars|              double|   null|
|        state|              string|   null|
|         type|              string|   null|
+-------------+--------------------+-------+

I tried sorting for this, but it did not work. I eventually learned about window functions here.

In the code I wrote, the review_count values are not exactly the same as the values in the JSON file.

The code I tried is:

val topcity=spark.sql("select city,state,review_count,RANK() OVER (ORDER BY review_count desc ) AS RANKING from yelp").show(5)

Below is the output I got:

+-------------+-----+------------+-------+
|         city|state|review_count|RANKING|
+-------------+-----+------------+-------+
|   Pittsburgh|   PA|           3|      1|
|     Carnegie|   PA|           3|      2|
|     Carnegie|   PA|           3|      3|
|     Carnegie|   PA|           3|      4|
|   Pittsburgh|   PA|           3|      5|
+-------------+-----+------------+-------+

So my review count is just a constant value of 3. My questions are:

  1. Why is the review count always 3?
  2. What changes should I make to get the exact top 5 values of review_count?

1 answer:

Answer 0: (score: 1)

The implementation below assumes that you are looking for the total number of reviews for each state/city combination (hopefully I understood that correctly):

First, we generate some dummy data with:

cities_data = [
            ["Alameda", "California", 1],
            ["Alameda", "California", 3],
            ["Berkeley", "California", 2],
            ["Beverly Hills", "California", 2],
            ["Beverly Hills", "California", 3],
            ["Hollywood", "California", 4],
            ["Miami", "Florida", 3],
            ["Miami", "Florida", 2],
            ["Orlando", "Florida", 1],
            ["Cocoa Beach", "Florida", 1]]

cols = ["city", "state", "review_count"]
df = spark.createDataFrame(cities_data, cols)
df.show(10, False)

This prints:

+-------------+----------+------------+
|city         |state     |review_count|
+-------------+----------+------------+
|Alameda      |California|1           |
|Alameda      |California|3           |
|Berkeley     |California|2           |
|Beverly Hills|California|2           |
|Beverly Hills|California|3           |
|Hollywood    |California|4           |
|Miami        |Florida   |3           |
|Miami        |Florida   |2           |
|Orlando      |Florida   |1           |
|Cocoa Beach  |Florida   |1           |
+-------------+----------+------------+

Group the data by state/city to get the sum of review_count per combination. This is in pyspark, but changing it to Scala should be easy (a Scala sketch follows the expected output below):

df = df.groupBy("state", "city") \
        .agg(F.sum("review_count").alias("reviews_count")) \
        .orderBy(F.desc("reviews_count")) \
        .limit(5)

This should be the output for the case above:

+----------+-------------+-------------+
|state     |city         |reviews_count|
+----------+-------------+-------------+
|California|Beverly Hills|5            |
|Florida   |Miami        |5            |
|California|Alameda      |4            |
|California|Hollywood    |4            |
|California|Berkeley     |2            |
+----------+-------------+-------------+
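
For completeness, here is a rough Scala sketch of the same aggregation (untested against your data; it assumes `df` is the DataFrame you loaded from the JSON file, e.g. via spark.read.json):

import org.apache.spark.sql.functions._

// Sum review_count per (state, city) and keep the five largest totals.
val topCities = df
  .groupBy("state", "city")
  .agg(sum("review_count").alias("reviews_count"))
  .orderBy(desc("reviews_count"))
  .limit(5)

topCities.show(false)

The same idea can also be written in plain Spark SQL against your registered `yelp` view, e.g. `select state, city, sum(review_count) as reviews_count from yelp group by state, city order by reviews_count desc limit 5`.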