Partially/fully matching values in one RDD against values in another RDD

Asked: 2017-08-28 04:57:48

Tags: scala apache-spark apache-spark-sql pattern-matching

I have two RDDs. The first RDD has records of the form:
RDD1 = (1, 2017-2-13,"ABX-3354 gsfette"
        2, 2017-3-18,"TYET-3423 asdsad"
        3, 2017-2-09,"TYET-3423 rewriu"
        4, 2017-2-13,"ABX-3354 42324"
        5, 2017-4-01,"TYET-3423 aerr")

and the second RDD has records of the form:
RDD2 = ('mfr1',"ABX-3354")
       ('mfr2',"TYET-3423")

For each value in RDD2, I need to find all the records in RDD1 whose 3rd column matches the 2nd column of RDD2, and get the count of those matches.

For this example, the end result would be:

ABX-3354  2
TYET-3423 3

What is the best way to do this?

2 Answers:

Answer 0 (score: 3)

Here is how you can get the result:

val RDD1 = spark.sparkContext.parallelize(Seq(
  (1, "2017-2-13", "ABX-3354 gsfette"),
  (2, "2017-3-18", "TYET-3423 asdsad"),
  (3, "2017-2-09", "TYET-3423 rewriu"),
  (4, "2017-2-13", "ABX-3354 42324"),
  (5, "2017-4-01", "TYET-3423 aerr")
))

val RDD2 = spark.sparkContext.parallelize(Seq(
  ("mfr1","ABX-3354"),
  ("mfr2","TYET-3423")
))

RDD1.map(r => {
  // key each RDD1 record by the code prefix (first token of the 3rd field)
  (r._3.split(" ")(0), (r._1, r._2, r._3))
})
  .join(RDD2.map(r => (r._2, r._1)))    // key RDD2 by the code so the keys line up
  .groupBy(_._1)                        // group the joined pairs by code
  .map(r => (r._1, r._2.toSeq.size))    // count records per code
  .foreach(println)

Output:

(TYET-3423,3)
(ABX-3354,2)
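
If only the per-code counts are needed, a slightly leaner variant (a sketch added here, not part of the original answer) counts keys directly after the join instead of grouping the joined records:

RDD1
  .map(r => (r._3.split(" ")(0), r))    // key RDD1 rows by the code prefix
  .join(RDD2.map(_.swap))               // keep only rows whose prefix appears in RDD2
  .countByKey()                         // Map[String, Long], collected on the driver
  .foreach(println)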

Hope this helps!

Answer 1 (score: 3)

I am posting a couple of solutions using Spark SQL, with more focus on exact pattern matching of the search string within the given text.

1: Using crossJoin

import spark.implicits._

val df1 = Seq(
  (1, "2017-2-13", "ABX-3354 gsfette"),
  (2, "2017-3-18", "TYET-3423 asdsad"),
  (3, "2017-2-09", "TYET-3423 rewriu"),
  (4, "2017-2-13", "ABX-335442324"), //changed from "ABX-3354 42324"
  (5, "2017-4-01", "aerrTYET-3423") //changed from "TYET-3423 aerr"
).toDF("id", "dt", "txt")

val df2 = Seq(
  ("mfr1", "ABX-3354"),
  ("mfr2", "TYET-3423")
).toDF("col1", "key")

// match function for filter: does the txt column contain the key column's value?
import org.apache.spark.sql.Row
def matcher(row: Row): Boolean = row.getAs[String]("txt")
  .contains(row.getAs[String]("key"))

val join = df1.crossJoin(df2)

import org.apache.spark.sql.functions.count

val result = join.filter(matcher _)
  .groupBy("key")
  .agg(count("txt").as("count"))

2: Using a broadcast variable

import spark.implicits._

val df1 = Seq(
  (1, "2017-2-13", "ABX-3354 gsfette"),
  (2, "2017-3-18", "TYET-3423 asdsad"),
  (3, "2017-2-09", "TYET-3423 rewriu"),
  (4, "2017-2-13", "ABX-3354 42324"),
  (5, "2017-4-01", "aerrTYET-3423"),
  (6, "2017-4-01", "aerrYET-3423")
).toDF("id", "dt", "pattern")

//small dataset to broadcast
val df2 = Seq(
  ("mfr1", "ABX-3354"),
  ("mfr2", "TYET-3423")
).map(_._2) // keep only the codes (the 2nd element of each pair)

//Lookup to use in UDF
val lookup = spark.sparkContext.broadcast(df2)

// UDF: return the first broadcast code contained in the text, or null
import org.apache.spark.sql.functions._
val matcher = udf((txt: String) => {
  val matches: Seq[String] = lookup.value.filter(txt.contains(_))
  if (matches.size > 0) matches.head else null
})

val result = df1.withColumn("match", matcher($"pattern"))
  .filter($"match".isNotNull) // not interested in non matching records
  .groupBy("match")
  .agg(count("pattern").as("count"))

Both solutions produce the same counts (the grouping column is named key in the crossJoin version and match in the broadcast version):

result.show()

+---------+-----+
|      key|count|
+---------+-----+
|TYET-3423|    3|
| ABX-3354|    2|
+---------+-----+
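
As a side note (a sketch added here, not part of the original answer; df2Keys is a name introduced for illustration), the broadcast-plus-UDF idea can also be expressed as an explicit broadcast join on a containment condition, keeping the matching logic in plain Spark SQL expressions:

import org.apache.spark.sql.functions.{broadcast, count}

// df1 here is the second example's DataFrame with the "pattern" column
val df2Keys = Seq(("mfr1", "ABX-3354"), ("mfr2", "TYET-3423")).toDF("col1", "key")

val result2 = df1.join(broadcast(df2Keys), $"pattern".contains($"key"))
  .groupBy("key")
  .agg(count("pattern").as("count"))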