I assume Spark joins are inner joins by default. Which operation do I use to find the elements that don't match in a join?
E.g. here I'm trying to get rid of edges with zero length:
val zeroLengthNodePairs: RDD[(Node, Node)] = edges.filter(_.distance == 0)
  .map { e =>
    val List(remove, keep) = List(e.startNode, e.endNode).sortBy(_.id)
    remove -> keep
  }.distinct()
val edgesByEndNode: RDD[(Node, Edge)] = edges.map(e => e.endNode -> e)
val edgesByStartNode: RDD[(Node, Edge)] = edges.map(e => e.startNode -> e)
edgesByEndNode.join(zeroLengthNodePairs).map { case (remove, (edge, keep)) =>
  assert(edge.endNode.point == keep.point)
  edge.copy(endNode = keep)
} ++
edgesByStartNode.join(zeroLengthNodePairs).map { case (remove, (edge, keep)) =>
  assert(edge.startNode.point == keep.point)
  edge.copy(startNode = keep)
} ++
???
In place of the ???, how do I add the modified RDD[Edge] back into the original edges without the edges that were modified?
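(For reference, Node and Edge aren't defined above; the snippet assumes something like these hypothetical shapes, inferred from how they're used:)

// Hypothetical case classes; the real definitions aren't shown in the question.
case class Node(id: Long, point: (Double, Double))
case class Edge(id: Long, startNode: Node, endNode: Node, distance: Double)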
Edit: I think I should use an outer join. Is there a better way?
Answer 0 (score: 2)
Doh. It seems there are methods in the API docs that aren't in the main Spark documentation. That would have helped!
I ended up using subtractByKey:
val modified: RDD[(Long, Edge)] = edgesByEndNode.join(zeroLengthNodePairs).map { case (remove, (edge, keep)) =>
  assert(edge.endNode.point == keep.point)
  edge.id -> edge.copy(endNode = keep)
} ++ edgesByStartNode.join(zeroLengthNodePairs).map { case (remove, (edge, keep)) =>
  assert(edge.startNode.point == keep.point)
  edge.id -> edge.copy(startNode = keep)
}
(edges.map(e => e.id -> e).subtractByKey(modified) ++ modified).values
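For anyone else, a minimal sketch of what subtractByKey does, on toy data (assuming a local SparkContext named sc):

// subtractByKey keeps only the pairs whose key does NOT appear in the other RDD;
// the other RDD's values are ignored.
val left = sc.parallelize(Seq(1 -> "a", 2 -> "b", 3 -> "c"))
val right = sc.parallelize(Seq(2 -> "x", 4 -> "y"))
left.subtractByKey(right).collect() // Array((1,a), (3,c)), order not guaranteed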
Answer 1 (score: 1)
Instead of a join, you can do a fullOuterJoin and filter out the values that are None on either side.
From the docs:
def fullOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Option[V], Option[W]))]
Perform a full outer join of this and other. For each element (k, v) in this, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for w in other, or the pair (k, (Some(v), None)) if no elements in other have key k. Similarly, for each element (k, w) in other, the resulting RDD will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k. Hash-partitions the resulting RDD into the given number of partitions.
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions
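Applied to the question, a rough sketch of that idea for one side (untested; the startNode side would be handled the same way, reusing edgesByEndNode and zeroLengthNodePairs from above):

// fullOuterJoin wraps both sides in Option, so unmatched rows survive the join.
val rewritten: RDD[Edge] = edgesByEndNode.fullOuterJoin(zeroLengthNodePairs).flatMap {
  case (_, (Some(edge), Some(keep))) => Some(edge.copy(endNode = keep)) // matched: rewrite the end node
  case (_, (Some(edge), None))       => Some(edge)                      // unmatched edge: keep as-is
  case (_, (None, _))                => None                            // node pair with no matching edge
}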