I want to perform a lookup between a Map[String, List[scala.util.matching.Regex]] and a dataframe column. If any regex in a List[scala.util.matching.Regex] matches the dataframe column value, the lookup should return that entry's key from the map:

Map[String, List[scala.util.matching.Regex]] = Map(m1 -> List(rule1, rule2), m2 -> List(rule3), m3 -> List(rule6))

I want to iterate over the regex lists and match them against the dataframe column value. It would be better if the regex matching could run in parallel rather than sequentially.

dataframe:
+------------------------+
|desc |
+------------------------+
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule2|
|STRING MATCHES SSS rule2|
|STRING MATCHES SSS rule3|
|STRING MATCHES SSS rule3|
|STRING MATCHES SSS rule6|
+------------------------+
Expected output:
+-------------------+------------------------+
|merchant |desc |
+-------------------+------------------------+
|m1 |STRING MATCHES SSS rule1|
|m1 |STRING MATCHES SSS rule1|
|m1 |STRING MATCHES SSS rule1|
|m1 |STRING MATCHES SSS rule2|
|m1 |STRING MATCHES SSS rule2|
|m2 |STRING MATCHES SSS rule3|
|m2 |STRING MATCHES SSS rule3|
|m3 |STRING MATCHES SSS rule6|
+-------------------+------------------------+
Answer 0 (score: 1)

Here is another way, based on the DataFrame map function and a predefined rule set rules:
import spark.implicits._
import scala.util.matching.Regex

val df = Seq(
  ("STRING MATCHES SSS rule1"),
  ("STRING MATCHES SSS rule1"),
  ("STRING MATCHES SSS rule1"),
  ("STRING MATCHES SSS rule2"),
  ("STRING MATCHES SSS rule2"),
  ("STRING MATCHES SSS rule3"),
  ("STRING MATCHES SSS rule3"),
  ("STRING MATCHES SSS rule6"),
  ("STRING MATCHES SSS ruleXXX")
).toDF("desc")

val rules = Map(
  "m1" -> List("rule1".r, "rule2".r),
  "m2" -> List("rule3".r),
  "m3" -> List("rule6".r)
)

df.map { r =>
  val desc = r.getString(0)
  // Find the first map entry whose regex list contains a match for desc.
  val merchant = rules.find(_._2.exists(_.findFirstIn(desc).isDefined)) match {
    case Some((m: String, _)) => m
    case None => null
  }
  (merchant, desc)
}.toDF("merchant", "desc").show(false)
Output:
+--------+--------------------------+
|merchant|desc |
+--------+--------------------------+
|m1 |STRING MATCHES SSS rule1 |
|m1 |STRING MATCHES SSS rule1 |
|m1 |STRING MATCHES SSS rule1 |
|m1 |STRING MATCHES SSS rule2 |
|m1 |STRING MATCHES SSS rule2 |
|m2 |STRING MATCHES SSS rule3 |
|m2 |STRING MATCHES SSS rule3 |
|m3 |STRING MATCHES SSS rule6 |
|null |STRING MATCHES SSS ruleXXX|
+--------+--------------------------+
Explanation:

rules.find(...) finds the first key/value pair in rules
_._2.exists(...) whose value list contains a regex
_.findFirstIn(desc).isDefined that matches desc
case Some((m : String, _)) => m and extracts the key from that pair
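To see the lookup chain in isolation, here is a minimal check in a plain Scala REPL (no Spark required), reusing the same rules map as above:

val rules = Map(
  "m1" -> List("rule1".r, "rule2".r),
  "m2" -> List("rule3".r),
  "m3" -> List("rule6".r)
)
// The match expression in the answer keeps only the key ("m1") from this pair.
rules.find(_._2.exists(_.findFirstIn("STRING MATCHES SSS rule2").isDefined))
// => Some((m1,List(rule1, rule2)))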
PS: I am not sure what you mean by regex matching that "can be done in parallel rather than sequentially", because the map function in the solution above already executes in parallel; the level of parallelism depends on the chosen number of partitions. Adding extra parallelism inside the map function, for instance in the form of threads (or Scala Futures), would certainly complicate the code without improving performance: if you create a large number of threads, you are more likely to create a CPU bottleneck than to speed the program up. Spark is an efficient distributed system, and there is no need to look for alternatives for parallel execution.
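If you do want to raise the degree of parallelism, the Spark-idiomatic knob is the partition count, not threads. A minimal sketch reusing df and rules from above (the value 8 is only an illustrative partition count, not a recommendation):

// Repartitioning before the map lets more tasks run concurrently.
// 8 is just an example value; tune it to the cores available on your cluster.
df.repartition(8)
  .map { r =>
    val desc = r.getString(0)
    val merchant = rules.find(_._2.exists(_.findFirstIn(desc).isDefined))
      .map(_._1)
      .getOrElse(null)
    (merchant, desc)
  }
  .toDF("merchant", "desc")
  .show(false)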
Answer 1 (score: 0)

You can declare a UDF like the one below; it runs in parallel across partitions and is fast. This is only a reference based on my understanding; you can use it as a starting point and design your UDF accordingly.
scala> import org.apache.spark.sql.functions.{col, udf}
scala> import org.apache.spark.sql.expressions.UserDefinedFunction

scala> def RuleCheck: UserDefinedFunction = udf((colmn: String) => {
     |   // Each key labels a rule group; the values are lists of regex pattern strings.
     |   val Rule: Map[String, List[String]] = Map(
     |     "Number"   -> List("[0-9]"),
     |     "Statment" -> List("[a-zA-Z]"),
     |     "Fruit"    -> List("apple", "banana", "orange"),
     |     "Country"  -> List("India", "US", "UK"))
     |   // Collect every key whose regex list has at least one match in the column value.
     |   val out = scala.collection.mutable.Set[String]()
     |   Rule.foreach { case (key, listRgx) =>
     |     listRgx.foreach { x =>
     |       if (x.r.findFirstMatchIn(colmn).isDefined) out += key
     |     }
     |   }
     |   out.mkString(",")
     | })
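For reference, the test dataframe used below can be constructed like this (the ids and comment strings are reconstructed from the show() output; in spark-shell, toDF is available via the auto-imported spark.implicits._):

scala> val df = Seq(
     |   (1, "I have 3 apples"),
     |   (2, "I like banana and I am from India"),
     |   (3, "I am from US"),
     |   (4, "1932409243"),
     |   (5, "I like orange"),
     |   (6, "#%@#$@#%@#$")
     | ).toDF("id", "comment")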
scala> df.show()
+---+--------------------+
| id| comment|
+---+--------------------+
| 1| I have 3 apples|
| 2|I like banana and...|
| 3| I am from US|
| 4| 1932409243|
| 5| I like orange|
| 6| #%@#$@#%@#$|
+---+--------------------+
scala> df.withColumn("Key", RuleCheck(col("comment"))).show(false)
+---+---------------------------------+----------------------+
|id |comment |Key |
+---+---------------------------------+----------------------+
|1 |I have 3 apples |Number,Fruit,Statment |
|2 |I like banana and I am from India|Country,Fruit,Statment|
|3 |I am from US |Country,Statment |
|4 |1932409243 |Number |
|5 |I like orange |Fruit,Statment |
|6 |#%@#$@#%@#$ | |
+---+---------------------------------+----------------------+
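Note that this UDF returns every matching key joined with commas, whereas the question's expected output has a single merchant per row. A hypothetical variant (the name firstRuleKey is illustrative, and the rule map is adapted to the question's rules) that returns only the first matching key:

scala> def firstRuleKey: UserDefinedFunction = udf((colmn: String) => {
     |   val Rule: Map[String, List[String]] = Map(
     |     "m1" -> List("rule1", "rule2"),
     |     "m2" -> List("rule3"),
     |     "m3" -> List("rule6"))
     |   // collectFirst stops at the first rule group containing a matching regex.
     |   Rule.collectFirst {
     |     case (key, patterns) if patterns.exists(_.r.findFirstMatchIn(colmn).isDefined) => key
     |   }.orNull
     | })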