Spark join operation based on two columns

Time: 2014-04-21 05:28:15

Tags: scala apache-spark

I am trying to join two datasets based on two columns. It works when I join on a single column, but fails with the following error:

    :29: error: value join is not a member of org.apache.spark.rdd.RDD[(String, String, (String, String, String, String, Double))]
           val finalFact = fact.join(dimensionWithSK).map { case(nk1, nk2, ((parts1, parts2, parts3, parts4, amount), (sk, prop1, prop2, prop3, prop4))) => (sk, amount) }

Code:

    import org.apache.spark.rdd.RDD

    // Zip every element of the RDD with a global index: first collect the
    // size of each partition, then derive each partition's (start, end)
    // index range, and finally zip each partition against its own range.
    def zipWithIndex[T](rdd: RDD[T]) = {
      val partitionSizes = rdd.mapPartitions(p => Iterator(p.length)).collect

      // Cumulative (start, end) offsets, one entry per partition.
      val ranges = partitionSizes.foldLeft(List((0, 0))) { case (accList, count) =>
        val start = accList.head._2
        val end = start + count
        (start, end) :: accList
      }.reverse.tail.toArray

      rdd.mapPartitionsWithIndex( (index, partition) => {
          val start = ranges(index)._1
          val end = ranges(index)._2
          val indexes = Iterator.range(start, end)
          partition.zip(indexes)
      })
    }

    val dimension = sc.
      textFile("dimension.txt").
      map{ line => 
        val parts = line.split("\t")
        (parts(0),parts(1),parts(2),parts(3),parts(4),parts(5))
      }

    val dimensionWithSK =
      zipWithIndex(dimension).map {
        case ((nk1, nk2, prop3, prop4, prop5, prop6), idx) =>
          (nk1, nk2, (prop3, prop4, prop5, prop6, idx + nextSurrogateKey))
      }

    val fact = sc.
      textFile("fact.txt").
      map { line =>
        val parts = line.split("\t")
        // we need to output (Naturalkey, (FactId, Amount)) in
        // order to be able to join with the dimension data.
        (parts(0),parts(1), (parts(2),parts(3), parts(4),parts(5),parts(6).toDouble))
      }  

    val finalFact = fact.join(dimensionWithSK).map { case(nk1,nk2, ((parts1,parts2,parts3,parts4,amount), (sk, prop1,prop2,prop3,prop4))) => (sk,amount) }  

Requesting help from anyone here. Thanks, Sridhar

3 answers:

Answer 0 (score: 4)

If you look at the signature of join, you'll see that it is defined on RDDs of pairs:

    def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]

You have triples instead. I think you are trying to join on the first two elements of each tuple, so you need to map each triple to a pair whose first element is itself a pair holding the triple's first two elements, e.g. for any types V1 and V2:

    val left: RDD[(String, String, V1)] = ??? // some rdd

    val right: RDD[(String, String, V2)] = ??? // some rdd

    left.map {
      case (key1, key2, value) => ((key1, key2), value)
    }
    .join(
      right.map {
        case (key1, key2, value) => ((key1, key2), value)
      })

This will give you an RDD of the form RDD[((String, String), (V1, V2))].
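Applied to the question's code, here is a minimal sketch (assuming the fact and dimensionWithSK shapes shown above; note that in dimensionWithSK the surrogate key is the last element of the value tuple, not the first):

    // Re-key both RDDs on the composite natural key (nk1, nk2) so that the
    // pair-RDD join becomes available.
    val factByKey = fact.map {
      case (nk1, nk2, measures) => ((nk1, nk2), measures)
    }
    val dimByKey = dimensionWithSK.map {
      case (nk1, nk2, props) => ((nk1, nk2), props)
    }

    val finalFact = factByKey.join(dimByKey).map {
      case ((nk1, nk2), ((parts1, parts2, parts3, parts4, amount),
                         (prop3, prop4, prop5, prop6, sk))) => (sk, amount)
    }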

Answer 1 (score: 0)

    val emp = sc.
      textFile("emp.txt").
      map { line =>
        val parts = line.split("\t")
        // key on the composite (parts(0), parts(2)) natural key so that
        // the two RDDs can be joined on both columns at once.
        ((parts(0), parts(2)), parts(1))
      }

    val emp_new = sc.
      textFile("emp_new.txt").
      map { line =>
        val parts = line.split("\t")
        // same composite key as emp above.
        ((parts(0), parts(2)), parts(1))
      }

    val finalemp =
      emp_new.join(emp).
      map { case ((nk1, nk2), (parts1, val1)) => (nk1, parts1, val1) }
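This works for the same reason as in the answer above: both RDDs are keyed up front on the composite (parts(0), parts(2)) pair, so join is defined on them and matches rows on both columns at once.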

Answer 2 (score: 0)

rdd1 schema: field1, field2, field3, fieldX, ...

rdd2 schema: field1, field2, field3, fieldY, ...

    val joinResult = rdd1.join(rdd2, Seq("field1", "field2", "field3"), "outer")

joinResult schema: field1, field2, field3, fieldX, fieldY, ...
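Note that this Seq-of-column-names join signature comes from the DataFrame API, not from plain RDDs, so the inputs have to be converted first. A minimal sketch, assuming a SparkSession named spark and illustrative column names matching the schemas above:

    import spark.implicits._

    // Name the tuple columns so the DataFrame join can match on them;
    // the three join columns appear only once in the result.
    val df1 = rdd1.toDF("field1", "field2", "field3", "fieldX")
    val df2 = rdd2.toDF("field1", "field2", "field3", "fieldY")

    val joinResult = df1.join(df2, Seq("field1", "field2", "field3"), "outer")
    joinResult.printSchema() // field1, field2, field3, fieldX, fieldY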