I'm trying to figure out how to aggregate data from a dataset and then add the result back to the original dataset, using Apache Spark. I have tried two solutions that I'm not satisfied with, and I'm wondering whether there is a more scalable and efficient solution that I'm not seeing.
Here is a very simplified sample of the input and expected output data:
Input:
A list of customers, and for each customer, the list of items they purchased.
(John, [toast, butter])
(Jane, [toast, jelly])
Output:
A list of customers, and for each customer, the list of items they purchased, where each item is paired with the number of customers who purchased it.
(John, [(toast, 2), (butter, 1)])
(Jane, [(toast, 2), (jelly, 1)])
Here are the solutions I have tried so far, with the steps and the output data at each step.
Solution #1:
Start with a pair rdd:
(John, [toast, butter])
(Jane, [toast, jelly])
flatMapToPair:
(toast, John)
(butter, John)
(toast, Jane)
(jelly, Jane)
aggregateByKey:
(toast, [John, Jane])
(butter, [John])
(jelly, [Jane])
flatMapToPair: (using the size of the list of customers)
(John, [(toast, 2), (butter, 1)])
(Jane, [(toast, 2), (jelly, 1)])
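For reference, a minimal Scala sketch of these steps (the name purchases is hypothetical, assuming an RDD[(String, Seq[String])] of customers and their purchased items):

val purchases = sc.parallelize(Seq(
  ("John", Seq("toast", "butter")),
  ("Jane", Seq("toast", "jelly"))
))

// (item, customer) pairs
val itemToCustomer = purchases.flatMap { case (customer, items) =>
  items.map(item => (item, customer))
}

// (item, [customers]) - the full list of customers per item is materialized here
val itemToCustomers = itemToCustomer
  .aggregateByKey(List.empty[String])((acc, c) => c :: acc, _ ::: _)

// (customer, [(item, count)]) using the size of each customer list
val result = itemToCustomers
  .flatMap { case (item, customers) => customers.map(c => (c, (item, customers.size))) }
  .aggregateByKey(List.empty[(String, Int)])((acc, v) => v :: acc, _ ::: _)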
While this works for small datasets, it is a bad idea for larger ones, because at some point you are holding a huge list of customers for each product, which may not fit in the executors' memory.
Solution #2:
Start with a pair rdd:
(John, [toast, butter])
(Jane, [toast, jelly])
flatMapToPair:
(toast, John)
(butter, John)
(toast, Jane)
(jelly, Jane)
aggregateByKey: (counting customers without creating a list)
(toast, 2)
(butter, 1)
(jelly, 1)
join: (using the two previous results)
(toast, (John, 2))
(butter, (John, 1))
(toast, (Jane, 2))
(jelly, (Jane, 1))
mapToPair:
(John, (toast, 2))
(John, (butter, 1))
(Jane, (toast, 2))
(Jane, (jelly, 1))
aggregateByKey:
(John, [(toast, 2), (butter, 1)])
(Jane, [(toast, 2), (jelly, 1)])
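And a minimal Scala sketch of Solution #2 (again using the hypothetical purchases RDD from the sketch above):

// (item, customer) pairs
val itemToCustomer = purchases.flatMap { case (customer, items) =>
  items.map(item => (item, customer))
}

// (item, count) - counting customers without building a list
val itemCounts = itemToCustomer.aggregateByKey(0)((acc, _) => acc + 1, _ + _)

// join, reshape to (customer, (item, count)), then collect per customer
val result = itemToCustomer
  .join(itemCounts)
  .map { case (item, (customer, count)) => (customer, (item, count)) }
  .aggregateByKey(List.empty[(String, Int)])((acc, v) => v :: acc, _ ::: _)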
This solution should work, but I feel there should be some other solution that does not involve joining RDDs.
Is there a more scalable/efficient/better "Solution #3" to this problem?
Answer 0 (score: 1)
I think another way to do this is to use GraphX.
Here is working code (Scala 2.11.12, Spark 2.3.0):
import org.apache.spark.graphx._
import org.apache.spark.sql.SparkSession

object Main {
  private val ss = SparkSession.builder().appName("").master("local[*]").getOrCreate()
  private val sc = ss.sparkContext

  def main(args: Array[String]): Unit = {
    sc.setLogLevel("ERROR")

    // Class for vertex values
    case class Value(name: String, names: List[String], count: Int)
    // Message that is sent from one vertex to another
    case class Message(names: List[String], count: Int)

    // Simulate input data
    val allData = sc.parallelize(Seq(
      ("John", Seq("toast", "butter")),
      ("Jane", Seq("toast", "jelly"))
    ))

    // Create vertices
    // Goods and people names - all will become vertices
    val vertices = allData.flatMap(pair =>
      pair._2 // Take all goods bought
        .union(Seq(pair._1)) // add the customer's name
        .map(v => (v.hashCode.toLong, Value(v, List[String](), 0)))) // (id, Value)
    // Hash codes are used because GraphX requires vertex IDs to be Long

    // Create edges: Person --> Bought goods
    val edges = allData
      .flatMap(pair =>
        pair._2 // Take all goods
          .map(goods => Edge[Int](pair._1.hashCode().toLong, goods.hashCode.toLong, 0))) // create pairs of (person, bought_good)

    // Create graph from edges and vertices
    val graph = Graph(vertices, edges)

    // Initial message that is sent to all vertices at the start
    val initialMsg = Message(List[String](), 0)

    // How a vertex should process a received message
    def onMsgReceive(vertexId: VertexId, value: Value, msg: Message): Value = {
      if (msg == initialMsg) value // Just ignore the initial message
      else Value(value.name, msg.names, msg.count) // The received message already contains all our results
    }

    // How vertices should send messages
    def sendMsg(triplet: EdgeTriplet[Value, Int]): Iterator[(VertexId, Message)] = {
      // Each vertex sends only one message, with its own name and 1
      Iterator((triplet.dstId, Message(List[String](triplet.srcAttr.name), 1)))
    }

    // How incoming messages to one vertex should be merged
    def mergeMsg(msg1: Message, msg2: Message): Message = {
      // On the goods vertices, the messages from the people who bought them are merged
      // The final message contains the names of all people who bought this good and their count
      Message(msg1.names ::: msg2.names, msg1.count + msg2.count)
    }

    // Kick off the Pregel computation
    val res = graph
      .pregel(initialMsg, Int.MaxValue, EdgeDirection.Out)(onMsgReceive, sendMsg, mergeMsg)

    val values = res.vertices
      .filter(v => v._2.count != 0) // Filter out people - they have no incoming edges, so their count stays 0
      .map(pair => pair._2) // Also remove IDs

    values // (good, (list of names, count))
      .flatMap(v => v.names.map(n => (n, (v.name, v.count)))) // transform to (name, (good, count))
      .aggregateByKey(List[(String, Int)]())((acc, v) => v :: acc, (acc1, acc2) => acc1 ::: acc2) // aggregate by names
      .collect().foreach(println) // Print the result
  }
}
There may be better ways to do this with the same approach, but still - the result:
=======================================
(Jane,List((jelly,1), (toast,2)))
(John,List((butter,1), (toast,2)))
The second example is the one I mentioned in the comments.
import org.apache.spark.graphx._
import org.apache.spark.sql.SparkSession

object Main {
  private val ss = SparkSession.builder().appName("").master("local[*]").getOrCreate()
  private val sc = ss.sparkContext

  def main(args: Array[String]): Unit = {
    sc.setLogLevel("ERROR")

    // An entity and how many times it was bought
    case class Entity(name: String, bought: Int)
    // Class for vertex values
    case class Value(name: Entity, names: List[Entity])
    // Message that is sent from one vertex to another
    case class Message(items: List[Entity])

    // Simulate input data
    val allData = sc.parallelize(Seq(
      ("John", Seq("toast", "butter")),
      ("Jane", Seq("toast", "jelly"))
    ))

    // First calculate how many times each entity was bought
    val counts = allData
      .flatMap(pair => pair._2.map(v => (v, 1))) // flatten all bought items
      .reduceByKey(_ + _) // count occurrences
      .map(v => Entity(v._1, v._2)) // create entities

    // Create vertices
    // Goods and people names - all will become vertices
    val vertices = allData
      .map(pair => Entity(pair._1, 0)) // People are also entities - but with 0, since they were not bought :)
      .union(counts)
      .map(v => (v.name.hashCode.toLong, Value(Entity(v.name, v.bought), List[Entity]()))) // (key, value)
    // Hash codes are used because GraphX requires vertex IDs to be Long

    // Create edges: Entity --> Person
    val edges = allData
      .flatMap(pair =>
        pair._2 // Take all goods
          .map(goods => Edge[Int](goods.hashCode.toLong, pair._1.hashCode().toLong, 0)))

    // Create graph from edges and vertices
    val graph = Graph(vertices, edges)

    // Initial message that is sent to all vertices at the start
    val initialMsg = Message(List[Entity](Entity("", 0)))

    // How a vertex should process a received message
    def onMsgReceive(vertexId: VertexId, value: Value, msg: Message): Value = {
      if (msg == initialMsg) value // Just ignore the initial message
      else Value(value.name, msg.items) // The received message already contains all results
    }

    // How vertices should send messages
    def sendMsg(triplet: EdgeTriplet[Value, Int]): Iterator[(VertexId, Message)] = {
      // Each vertex sends only one message, with its own entity
      Iterator((triplet.dstId, Message(List[Entity](triplet.srcAttr.name))))
    }

    // How incoming messages to one vertex should be merged
    def mergeMsg(msg1: Message, msg2: Message): Message = {
      // On the person vertices, the messages from the goods they bought are merged
      // The final message contains all goods this person bought, together with their counts
      Message(msg1.items ::: msg2.items)
    }

    // Kick off the Pregel computation
    val res = graph
      .pregel(initialMsg, Int.MaxValue, EdgeDirection.Out)(onMsgReceive, sendMsg, mergeMsg)

    res
      .vertices
      .filter(vertex => vertex._2.names.nonEmpty) // Keep only person vertices - only they receive messages
      .map(vertex => (vertex._2.name.name, vertex._2.names)) // Remove vertex IDs
      .collect() // Collect the results
      .foreach(println) // and print them
  }
}
Answer 1 (score: 1)
Here is a dataframe way for you to try. If you already have the pair rdds, calling toDF with the column names should give you a dataframe:
val df = pairedRDD.toDF("key", "value")
which should be
+----+---------------+
|key |value |
+----+---------------+
|John|[toast, butter]|
|Jane|[toast, jelly] |
+----+---------------+
Now you just explode, groupBy and aggregate for the count, then explode again, groupBy and aggregate to get the original dataset back with the counts:
import org.apache.spark.sql.functions._

df.withColumn("value", explode(col("value")))
  .groupBy("value").agg(count("value").as("count"), collect_list("key").as("key"))
  .withColumn("key", explode(col("key")))
  .groupBy("key").agg(collect_list(struct("value", "count")).as("value"))
which should give you
+----+-----------------------+
|key |value |
+----+-----------------------+
|John|[[toast,2], [butter,1]]|
|Jane|[[jelly,1], [toast,2]] |
+----+-----------------------+
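Running this snippet end to end assumes a SparkSession (called spark below) and its implicits for toDF; a minimal setup sketch, with hypothetical names:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("item-counts").master("local[*]").getOrCreate()
import spark.implicits._ // needed for toDF on an RDD of tuples

val pairedRDD = spark.sparkContext.parallelize(Seq(
  ("John", Seq("toast", "butter")),
  ("Jane", Seq("toast", "jelly"))
))
val df = pairedRDD.toDF("key", "value")
// apply the explode/groupBy pipeline above, then .show(false) to print the tables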
You can process it further as a dataframe, or convert back with .rdd and use the rdd api.