Multiple filter conditions in Spark's filter method

Date: 2017-11-13 12:33:52

Tags: scala apache-spark

How do I write multiple cases with the filter() method in Spark using Scala? My RDD comes from a cogroup:

(1,(CompactBuffer(1,john,23),CompactBuffer(1,john,24))).filter(x => (x._2._1 != x._2._2)) // values not equal
(2,(CompactBuffer(),CompactBuffer(2,Arun,24))).filter(x => (x._2._1 == null)) // first value of the second tuple is null
(3,(CompactBuffer(3,kumar,25),CompactBuffer())).filter(x => (x._2._2 == null)) // second value of the second tuple is null
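
Note that cogroup on pair RDDs produces an RDD[(K, (Iterable[V], Iterable[W]))]: a key missing from one side shows up as an empty Iterable (a CompactBuffer at runtime), not as null, so the null checks above are only pseudocode. A minimal sketch of the element type, assuming source_primary_key and destination_primary_key are pair RDDs with Int keys and (Int, String, Int) records as in the examples:

import org.apache.spark.rdd.RDD

// Both sides of the value pair are Iterables; an absent key yields an
// empty buffer, never null.
val grouped: RDD[(Int, (Iterable[(Int, String, Int)], Iterable[(Int, String, Int)]))] =
  source_primary_key.cogroup(destination_primary_key)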


val a = source_primary_key.cogroup(destination_primary_key).filter(x => (x._2._1 != x._2._2))

val c = a.map { y =>
  val key = y._1
  val value = y._2

  // srcs, destt and the four result arrays are vars declared outside this closure.
  // Note: appending to driver-side vars inside map does not work on a distributed
  // RDD -- each executor mutates its own copy of the closure's variables.
  srcs = value._1.mkString(",")
  destt = value._2.mkString(",")

  if (!srcs.equalsIgnoreCase(destt)) {
    srcmis :+= srcs
    destmis :+= destt
  }
  if (srcs == "") {
    extraindest :+= destt
  }
  if (destt == "") {
    extrainsrc :+= srcs
  }
}

How can I store the records matching each condition in 3 different Array[String]s?

I tried the above, but it looks naive. Is there an efficient way to do this?

2 Answers:

Answer 0 (score: 1)

For testing purposes I created the following RDDs:

val source_primary_key = sc.parallelize(Seq((1,(1,"john",23)),(3,(3,"kumar",25))))
val destination_primary_key = sc.parallelize(Seq((1,(1,"john",24)),(2,(2,"arun",24))))

Then I cogrouped them just as you did:

val coGrouped = source_primary_key.cogroup(destination_primary_key)
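
For reference, collecting coGrouped on this test data (ordering may vary) gives one record per key, with an empty buffer on whichever side the key is missing from:

// coGrouped.collect() on the test RDDs above:
// (1, (CompactBuffer((1,john,23)),  CompactBuffer((1,john,24))))  // key on both sides
// (2, (CompactBuffer(),             CompactBuffer((2,arun,24))))  // destination only
// (3, (CompactBuffer((3,kumar,25)), CompactBuffer()))             // source only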

Now comes the step of filtering the cogrouped RDD into three separate RDDs:

val a = coGrouped.filter(x => !x._2._1.isEmpty && !x._2._2.isEmpty) // key present on both sides
val b = coGrouped.filter(x => x._2._1.isEmpty && !x._2._2.isEmpty)  // key only in destination
val c = coGrouped.filter(x => !x._2._1.isEmpty && x._2._2.isEmpty)  // key only in source
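
Since the goal was three Array[String]s, each filtered RDD can then be flattened and collected to the driver. A minimal sketch, where the mkString formatting and the extra value-mismatch filter on a are my assumptions, and collect is only safe when the results fit in driver memory:

// Keys present on both sides whose values actually differ
val mismatched = a.filter { case (_, (src, dst)) => src.toList != dst.toList }
val srcmis: Array[String]  = mismatched.map { case (_, (src, _)) => src.mkString(",") }.collect()
val destmis: Array[String] = mismatched.map { case (_, (_, dst)) => dst.mkString(",") }.collect()
// Keys present only in the destination / only in the source
val extraInDest: Array[String] = b.map { case (_, (_, dst)) => dst.mkString(",") }.collect()
val extraInSrc: Array[String]  = c.map { case (_, (src, _)) => src.mkString(",") }.collect()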

I hope the answer is helpful.

Answer 1 (score: -2)

You can use collect on the RDD and then toList. Example:

(1,(CompactBuffer(1,john,23),CompactBuffer(1,john,24))).filter(x => (x._2._1 != x._2._2)).collect().toList
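
As written, this snippet calls filter on a tuple literal, which does not compile; presumably the intent was to filter the cogrouped RDD itself and only then bring the result to the driver. A hedged sketch using the test RDDs from the previous answer:

val mismatchList = source_primary_key.cogroup(destination_primary_key)
  .filter { case (_, (src, dst)) => src.toList != dst.toList }
  .collect()
  .toList

Keep in mind that collect materializes the entire result on the driver, so this is only appropriate for small outputs.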