How to get a list of values for each key using aggregateByKey?

Time: 2019-04-06 02:25:14

Tags: apache-spark pyspark

Suppose we have an RDD whose elements each look like:

(studentName, course, grade):

("Joseph", "Maths", 83), ("Joseph", "Physics", 74), ("Joseph", "Chemistry", 91), ("Joseph", "Biology", 82), 
  ("Jimmy", "Maths", 69), ("Jimmy", "Physics", 62), ("Jimmy", "Chemistry", 97), ("Jimmy", "Biology", 80), 
  ("Tina", "Maths", 78), ("Tina", "Physics", 73), ("Tina", "Chemistry", 68)

My goal is to use aggregateByKey to get another RDD of the form (studentName, [(course, grade)]):

("Joseph", [("Maths", 83),("Physics", 74), ("Chemistry", 91), ("Biology", 82)]) 
  ("Jimmy", [("Maths", 69), ("Physics", 62), ("Chemistry", 97), ("Biology", 80)])
  ("Tina", [("Maths", 78), ("Physics", 73), ("Chemistry", 68)])

I tried the following:

zero_val = []

def seq_op(accumulator, element):
    if element not in accumulator:
        return element
    return accumulator

# Combiner operation: merge the partition-wise accumulators
def comb_op(accumulator1, accumulator2):
    return accumulator1 + accumulator2

student_list_rdd = studentRDD.map(lambda u: (u[0], (u[1], u[2]))) \
                             .aggregateByKey(zero_val, seq_op, comb_op)

But instead I got the following result:

("Joseph", ("Maths", 83,"Physics", 74, "Chemistry", 91, "Biology", 82) 
      ("Jimmy", ("Maths", 69, "Physics", 62, "Chemistry", 97, "Biology", 80)
      ("Tina", ("Maths", 78, "Physics", 73, "Chemistry", 68)

Any hints on how to get the desired output would be appreciated.

Also, how would we do this if we had a PySpark DataFrame with three columns: <student, course, grade>?

1 Answer:

Answer 0 (score: 1)

You don't need aggregateByKey here; groupBy should work. Just groupBy the first value, then transform each group by dropping the first value from each tuple:

rdd.groupBy(lambda x: x[0]).mapValues(lambda g: [x[1:] for x in g]).collect()

# [('Jimmy', [('Maths', 69), ('Physics', 62), ('Chemistry', 97), ('Biology', 80)]), 
#  ('Tina', [('Maths', 78), ('Physics', 73), ('Chemistry', 68)]), 
#  ('Joseph', [('Maths', 83), ('Physics', 74), ('Chemistry', 91), ('Biology', 82)])]
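
If you specifically want aggregateByKey, the bug in the attempt above is that seq_op returns the bare element instead of appending it to the accumulator list, so each partition's partial result ends up as a plain tuple that comb_op then concatenates, producing the flattened output shown. A minimal corrected sketch (assuming the same studentRDD of (name, course, grade) tuples as above):

zero_val = []

# Sequence op: fold one (course, grade) pair into this partition's list
def seq_op(accumulator, element):
    if element not in accumulator:
        return accumulator + [element]
    return accumulator

# Combiner op: concatenate the lists built on different partitions
def comb_op(accumulator1, accumulator2):
    return accumulator1 + accumulator2

student_list_rdd = (studentRDD
                    .map(lambda u: (u[0], (u[1], u[2])))
                    .aggregateByKey(zero_val, seq_op, comb_op))

student_list_rdd.collect()
# e.g. [('Joseph', [('Maths', 83), ('Physics', 74), ('Chemistry', 91), ('Biology', 82)]), ...]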
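
For the DataFrame sub-question: one common approach (a sketch, not part of the original answer) is to group by the student column and collect the remaining columns into a list of structs with collect_list. Assuming a DataFrame df with columns student, course, grade:

from pyspark.sql import functions as F

# Collect (course, grade) structs per student; note that collect_list
# does not guarantee any particular ordering of the collected rows
result = (df.groupBy("student")
            .agg(F.collect_list(F.struct("course", "grade")).alias("grades")))

result.show(truncate=False)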