PySpark RDD - converting rankings to JSON

Asked: 2018-08-09 19:36:22

Tags: json apache-spark pyspark apache-spark-sql

I have a Hive query that returns data like this:

Date,Name,Score1,Score2,Avg_Score
1/1/2018,A,10,20,15
1/1/2018,B,20,20,20
1/1/2018,C,15,10,12.5
1/1/2018,D,11,12,11.5
1/1/2018,E,21,29,25
1/1/2018,F,10,21,15.5

I load it into an RDD using hive_context.sql(my_query).rdd. My end goal is to convert this into JSON, ranked by Avg_Score in descending order, like this:

Scores = [
    {
        "Date": "1/1/2018",
        "Name": "A",
        "Avg_Score": 15,
        "Rank": 4
    },
    {
        "Date": "1/1/2018",
        "Name": "B",
        "Avg_Score": 20,
        "Rank": 2
    }
]

As a first step towards getting the rank, I tried to implement this approach, but I ran into errors like AttributeError: 'RDD' object has no attribute 'withColumn'.

How can I do this?

1 Answer:

Answer 0 (score: 1)

That's because you are working at the RDD level. If you want to use the DataFrame API, you have to stay with a Dataset (or DataFrame). As mentioned in the comments on your question, you can remove the early .rdd conversion, do the ranking with the DataFrame API, and only use asDict at the very end to get the final result.

# Recreate the sample data as a DataFrame (in your case, the result of hive_context.sql(my_query))
df = sc.parallelize([
  ("1/1/2018","A",10,20,15.0),
  ("1/1/2018","B",20,20,20.0),
  ("1/1/2018","C",15,10,12.5),
  ("1/1/2018","D",11,12,11.5),
  ("1/1/2018","E",21,29,25.0),
  ("1/1/2018","F",10,21,15.5)]).toDF(["Date","Name","Score1","Score2","Avg_Score"])

from pyspark.sql import Window
import pyspark.sql.functions as psf

# Order the whole dataset by Avg_Score, highest first
w = Window.orderBy(psf.desc("Avg_Score"))

rddDict = (df
  .withColumn("rank", psf.dense_rank().over(w))  # rank computed at the DataFrame level
  .drop("Score1", "Score2")
  .rdd                                           # only now drop down to an RDD
  .map(lambda row: row.asDict()))                # one dict per row

Result:

>>> rddDict.take(1)
[{'Date': u'1/1/2018', 'Avg_Score': 25.0, 'Name': u'E', 'rank': 1}]
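To get the exact JSON output shown in the question, one option is to collect the dictionaries on the driver and serialize them with the standard json module. A minimal sketch, assuming the rddDict defined above and a result small enough to fit in driver memory:

import json

# Bring the ranked rows to the driver and serialize them as a JSON array
Scores = rddDict.collect()
print(json.dumps(Scores, indent=4))

Alternatively, df.toJSON() produces an RDD of JSON strings (one per row) directly, without going through Python dictionaries on the driver.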

Note, however, the warning you get when using a window function without a partition:

18/08/13 11:44:32 WARN window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
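If you only need ranks within each Date (which is what the sample data suggests), adding a partition to the window keeps the computation distributed and avoids this warning. A sketch under that assumption:

# Rank rows per date instead of globally; each date is processed in its own partition
w = Window.partitionBy("Date").orderBy(psf.desc("Avg_Score"))

rddDict = (df
  .withColumn("rank", psf.dense_rank().over(w))
  .drop("Score1", "Score2")
  .rdd
  .map(lambda row: row.asDict()))

With a single date in the sample this yields the same ranks, but the work no longer funnels through one partition.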