Creating composite keys in Spark

Time: 2015-10-28 23:44:53

Tags: apache-spark pyspark rdd

I'm working on a basic collaborative filtering algorithm in Spark and I'm stuck on an RDD transformation. My input RDD looks like this:


[" John"," a"," 5"],[" John"," b" ," 3"],[" John"," c"," 2"],[" Mark", " a"," 3"] ["马克"," b"," 4"] [&#34 ;露西"," b"," 2"] ["露西"," c"," 5&#34 ]

In each RDD element, the first value is the user, the second value is the product name ("a", "b", or "c"), and the third value is its rating.

I want to transform the input RDD by grouping by user name and then combining by product, so that my final result RDD would be:


[(" a"," b"),(" 5"," 2")] [(" a"," b"),(" 3"," 4")] [(" a"," C&#34),(" 5"" 2&#34)]

In the result above, since John and Mark both have ratings for a and b, I have two RDD elements with (a, b) as the key and their ratings as the values. Only John has ratings for both a and c, so I have just one RDD element with (a, c) as the key.

1 Answer:

Answer 0 (score: 1):

You can do something like the following:

val keyedElems = rdd1.map { case (a, b, c) => (a, (b, c)) }
val groupedCombinations = keyedElems.groupByKey().flatMapValues(_.toList.combinations(2))
val productScoreCombinations = groupedCombinations.mapValues { case (elems: List[(String, String)]) => ((elems(0)._1, elems(1)._1), (elems(0)._2, elems(1)._2)) }.values   

What we are doing here is keying your input dataset by user, generating an iterable list of (product, rating) pairs by grouping on that key, generating the 2-combinations of each list, flattening so that each combination gets its own record, and finally rearranging the elements so that the products and the ratings each end up in their own tuple.
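To make the combinations step concrete, here is a quick plain-Scala illustration (using John's grouped (product, rating) values as an assumed example) of what .toList.combinations(2) produces before the final reordering:

scala> List(("a", "5"), ("b", "3"), ("c", "2")).combinations(2).toList
res0: List[List[(String, String)]] = List(List((a,5), (b,3)), List((a,5), (c,2)), List((b,3), (c,2)))

Each inner list becomes its own record after flatMapValues, which is why John contributes three rows to the final output.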

When running this locally in Spark, I see the following:

scala> val rdd1 = sc.parallelize(Array(("John", "a", "5"),("John", "b", "3"),("John", "c", "2"),("Mark", "a", "3"),("Mark", "b", "4"),("Lucy", "b", "2"),("Lucy", "c", "5")))
rdd1: org.apache.spark.rdd.RDD[(String, String, String)] = ParallelCollectionRDD[0] at parallelize at <console>:21

scala> val rdd2 = rdd1.map { case (a, b, c) => (a, (b, c)) }
rdd2: org.apache.spark.rdd.RDD[(String, (String, String))] = MapPartitionsRDD[1] at map at <console>:23

scala> val rdd3 = rdd2.groupByKey().flatMapValues(_.toList.combinations(2))
rdd3: org.apache.spark.rdd.RDD[(String, List[(String, String)])] = MapPartitionsRDD[3] at flatMapValues at <console>:25

scala> val rdd4 = rdd3.mapValues { case (elems: List[(String, String)]) => ((elems(0)._1, elems(1)._1), (elems(0)._2, elems(1)._2)) }.values
rdd4: org.apache.spark.rdd.RDD[((String, String), (String, String))] = MapPartitionsRDD[7] at values at <console>:27

scala> rdd4.foreach(println)
...
((a,b),(3,4))
((b,c),(2,5))
((a,b),(5,3))
((a,c),(5,2))
((b,c),(3,2))

You can run a simple filter over this to find all rows that involve product "a".
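For example, a minimal sketch of such a filter on rdd4 from the session above (the predicate and the variable name rddA are assumptions, taking "contains product a" to mean that either product in the key is "a"):

scala> val rddA = rdd4.filter { case ((p1, p2), _) => p1 == "a" || p2 == "a" }

This should keep only the ((a,b),...) and ((a,c),...) records shown above.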

(Edit:)

I missed that you had tagged this as pyspark, so I've updated the answer with a Python solution below (mostly a direct mapping of the Scala above):

import itertools

# Key each record by user: (user, (product, rating))
keyedElems = input.map(lambda x: (x[0], (x[1], x[2])))
# Group by user, then expand each user's ratings into 2-combinations of (product, rating) pairs
groupedCombinations = keyedElems.groupByKey().flatMapValues(lambda arr: itertools.combinations(arr, 2))
# Rearrange each combination into ((product1, product2), (rating1, rating2)) and drop the user key
productScoreCombinations = groupedCombinations.mapValues(lambda elems: ((elems[0][0], elems[1][0]), (elems[0][1], elems[1][1]))).map(lambda x: x[1])

When I run the above code, I see the following in pyspark:

>>> input = sc.parallelize([("John", "a", "5"),("John", "b", "3"),("John", "c", "2"),("Mark", "a", "3"),("Mark", "b", "4"),("Lucy", "b", "2"),("Lucy", "c", "5")])
...
>>> productScoreCombinations.take(6)
...
[(('b', 'c'), ('2', '5')), (('a', 'b'), ('5', '3')), (('a', 'c'), ('5', '2')), (('b', 'c'), ('3', '2')), (('a', 'b'), ('3', '4'))]