Matching data using Spark Java RDD

Time: 2016-03-01 12:23:10

Tags: apache-spark rdd

In my recent big data project, I need to use Spark.

The first requirement is as follows.

We have two sets of data from different data sources, say one from a flat file and the other from HDFS.

The data sets may or may not have columns in common, but we do have mapping rules at hand, for example:

function1(data1.columnA) == function2(data2.columnB)
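To make that concrete, the functions might be simple normalizations that land both columns in a common key space. The pair below is purely illustrative (hypothetical names and rules, not my actual ones):

// Hypothetical mapping functions: both sides normalize to the same key.
static String function1(String columnA) {
    return columnA.trim().toLowerCase();        // e.g. "  ACME " -> "acme"
}

static String function2(String columnB) {
    return columnB.replaceFirst("^cust-", "");  // e.g. "cust-acme" -> "acme"
}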

I tried to implement this by running a foreach over one RDD inside another, but that is not allowed in Spark:

org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
    at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$sc(RDD.scala:87)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:910)
    at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:332)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
    at com.pramod.engine.DataMatchingEngine.lambda$execute$4e658232$1(DataMatchingEngine.java:44)
    at com.pramod.engine.DataMatchingEngine$$Lambda$9/1172080526.call(Unknown Source)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:332)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
    at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$32.apply(RDD.scala:912)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
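Reduced to a minimal Java example, the pattern that triggers this looks roughly like the following (class and RDD names are illustrative, not my actual code):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class NestedRddExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("nested-rdd").setMaster("local[*]"));

        JavaRDD<Integer> rdd1 = sc.parallelize(Arrays.asList(1, 2, 3));
        JavaRDD<Integer> rdd2 = sc.parallelize(Arrays.asList(10, 20, 30));

        // Invalid: rdd2 is captured inside a transformation of rdd1.
        // The closure runs on executors, which have no SparkContext,
        // so this fails at runtime with the SPARK-5063 SparkException above.
        rdd1.foreach(x ->
                rdd2.foreach(y -> System.out.println(x + " ~ " + y)));

        sc.stop();
    }
}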

Please help me find the best way to achieve this.

1 Answer:

Answer 0 (score: 1)

It sounds like you have two RDDs, let's call them A and B, that need to be joined, but the IDs need some modification before the join can be done. Assuming that is correct, then...

// The data to be processed. How you load it and 
// what it looks like is not important.
case class Item (id : Int)

val A = sc.parallelize(Seq(Item(1), Item(2)))
val B = sc.parallelize(Seq(Item(10), Item(20)))

// We then map each data set to `(key, value)` pairs. The mapping
// rules here are key = id * 100 for A and key = id * 10 for B, so
// Item(1) in A and Item(10) in B both end up with the key 100.
val aWithKey = A.map(x => (x.id * 100, x))
val bWithKey = B.map(x => (x.id * 10, x))

// We can now join the two data sets.
aWithKey.join(bWithKey).collect
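Since the question is about the Java RDD API, here is the same approach translated to Java, as a sketch under the same assumptions (Item becomes a small serializable bean, and the key derivation stands in for your real mapping rules):

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class JoinByDerivedKey {
    // Item must be serializable so Spark can ship it to executors.
    public static class Item implements Serializable {
        public final int id;
        public Item(int id) { this.id = id; }
        @Override public String toString() { return "Item(" + id + ")"; }
    }

    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("join-by-key").setMaster("local[*]"));

        JavaRDD<Item> a = sc.parallelize(Arrays.asList(new Item(1), new Item(2)));
        JavaRDD<Item> b = sc.parallelize(Arrays.asList(new Item(10), new Item(20)));

        // Apply the mapping rules to derive a common key on each side.
        JavaPairRDD<Integer, Item> aWithKey = a.mapToPair(x -> new Tuple2<>(x.id * 100, x));
        JavaPairRDD<Integer, Item> bWithKey = b.mapToPair(x -> new Tuple2<>(x.id * 10, x));

        // join keeps only records whose derived keys match.
        List<Tuple2<Integer, Tuple2<Item, Item>>> matched =
                aWithKey.join(bWithKey).collect();
        matched.forEach(System.out::println);

        sc.stop();
    }
}

Here keys 100 and 200 survive the join, pairing Item(1) with Item(10) and Item(2) with Item(20), which is exactly the matching your rules describe.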