Case-sensitive Spark SQL filtering on a column based on a pattern

Date: 2018-09-19 22:08:09

Tags: sql regex scala apache-spark apache-spark-sql

How can I apply a Spark SQL filter as a case-sensitive filter on a column, based on a pattern?

For example, I have the pattern:

'Aaaa AA'

My column contains data like this:

adaz
LssA ss 
Leds ST 
Pear QA 
Lear QA

With respect to letter casing, I want to retrieve the rows that have the pattern 'Aaaa AA'. That means the desired rows would be 'Leds ST', 'Pear QA', and 'Lear QA'.

"Aaaa AA" => 'Leds ST' , 'Pear QA', 'Lear QA'
"AaaA aa" => 'LssA ss'
"aaaa" => 'adaz'

How can I achieve this result with Spark SQL? Or can we write a regex-based SQL query to produce it?
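To make the requirement concrete: each uppercase letter in the pattern stands for any uppercase letter, each lowercase letter for any lowercase letter, and anything else (such as the space) must match literally. A minimal Scala sketch of this pattern-to-regex idea, which the answers below build on (the helper name patternToRegex is made up here):

def patternToRegex(pattern: String): String =
  "^" + pattern.map {
    case c if c.isUpper => "[A-Z]"                                   // any uppercase letter
    case c if c.isLower => "[a-z]"                                   // any lowercase letter
    case c              => java.util.regex.Pattern.quote(c.toString) // literal character
  }.mkString + "$"

patternToRegex("Aaaa AA")  // => ^[A-Z][a-z][a-z][a-z]\Q \E[A-Z][A-Z]$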

3 Answers:

Answer 0 (score: 3):

We can use the Spark SQL function translate() to create a grouping column for your strings.

Using PySpark:

A sample dataframe for testing:

from pyspark.sql.types import StringType

df = spark.createDataFrame(["adaz", "LssA ss", "Leds ST", "Pear QA","Lear QA"], StringType())

The actual transformation:

from pyspark.sql.functions import translate, collect_list, col
import string

lowercases = string.ascii_lowercase
uppercases = string.ascii_uppercase
length_alphabet = len(uppercases)

# Every uppercase letter becomes '1', every lowercase letter becomes '0';
# characters not listed in `old` (e.g. spaces) pass through unchanged.
ones = "1" * length_alphabet
zeroes = "0" * length_alphabet

old = uppercases + lowercases
new = ones + zeroes

df.withColumn("group", translate(df.value, old, new)) \
  .groupBy(col("group")).agg(collect_list(df.value).alias("strings")) \
  .show(truncate=False)

Result:

+-------+---------------------------+
|group  |strings                    |
+-------+---------------------------+
|1000 11|[Leds ST, Pear QA, Lear QA]|
|0000   |[adaz]                     |
|1001 00|[LssA ss]                  |
+-------+---------------------------+

Using Spark with Scala:

import org.apache.spark.sql.functions.{translate, col, collect_list}
import spark.implicits._  // in spark-shell; enables the $"value" column syntax

val lower = 'a' to 'z'
val upper = 'A' to 'Z'
val length_alphabet = upper.size

val lowercases = lower.mkString("")
val uppercases = upper.mkString("")

val ones = "1" * length_alphabet
val zeroes = "0" * length_alphabet

// `new` is a reserved word in Scala, hence the name `news`
val old = uppercases + lowercases
val news = ones + zeroes

df.withColumn("group", translate($"value", old, news))
  .groupBy(col("group")).agg(collect_list($"value").alias("strings"))
  .show(truncate = false)
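To answer the original question of retrieving the rows for one specific pattern, the same translate() idea can be turned into a filter by computing the pattern's own case signature and comparing it against the group column. A minimal sketch, reusing df, old and news from above:

import org.apache.spark.sql.functions.lit

// Case signature of the pattern, computed on the driver: "1000 11"
val pattern = "Aaaa AA"
val patternGroup = pattern.map {
  case c if c.isUpper => '1'
  case c if c.isLower => '0'
  case c              => c
}

// Keep only the rows whose signature equals the pattern's signature
df.filter(translate($"value", old, news) === lit(patternGroup)).show(false)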

Answer 1 (score: 2):

With regexp_extract:

import org.apache.spark.sql.functions.{regexp_extract, lit}
import spark.implicits._

val df = List(
  "adaz",
  "LssA ss",
  "Leds ST",
  "Pear QA",
  "Lear QA"
).toDF("value")

// regexp_extract returns "" when the anchored regex does not match,
// so comparing against the empty string keeps only matching rows.
df.filter(regexp_extract($"value", "^[A-Z][a-z]{3} [A-Z]{2}$", 0) =!= lit("")).show(false)

Output:

+-------+
|value  |
+-------+
|Leds ST|
|Pear QA|
|Lear QA|
+-------+
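Since only the boolean outcome of the match matters here, the same filter can be written more directly with Column.rlike; a small sketch under the same setup:

// rlike returns true when the regex matches; the ^ and $ anchors keep
// the match exact rather than a substring match.
df.filter($"value".rlike("^[A-Z][a-z]{3} [A-Z]{2}$")).show(false)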

Answer 2 (score: 0):

Building on @pasha701's answer:

scala> val df=List(
     | "adaz",
     | "LssA ss",
     |   "Leds ST",
     |   "Pear QA",
     |   "Lear QA"
     | ).toDF("value")
df: org.apache.spark.sql.DataFrame = [value: string]

scala> val df2= df.withColumn("reg1", regexp_extract($"value","^[A-Z][a-z]{3} [A-Z]{2}$",0)=!=lit("")).withColumn("reg2",regexp_extract($"value","^[a-z]{4}$",0)=!=lit("")).withColumn("reg3", regexp_extract($"value","^[A-Z][a-z]{2}[A-Z] [a-z]{2}$",0)=!=lit(""))
df2: org.apache.spark.sql.DataFrame = [value: string, reg1: boolean ... 2 more fields]

scala> val df3=df2.withColumn("reg_patt", when('reg1,"1000 11").when('reg2,"0000").when('reg3,"1001 00").otherwise("9"))
df3: org.apache.spark.sql.DataFrame = [value: string, reg1: boolean ... 3 more fields]

scala> df3.groupBy("reg_patt").agg(collect_list('value) as "newval").show(false)
+--------+---------------------------+
|reg_patt|newval                     |
+--------+---------------------------+
|1000 11 |[Leds ST, Pear QA, Lear QA]|
|0000    |[adaz]                     |
|1001 00 |[LssA ss]                  |
+--------+---------------------------+


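As a follow-up to this answer: the hardcoded regex-to-label mapping (and the "9" fallback) can be avoided by deriving the case signature directly from each string, for example with a small UDF. A sketch, with the made-up name casePattern:

import org.apache.spark.sql.functions.{udf, collect_list}

// '1' for uppercase, '0' for lowercase, other characters unchanged,
// so every string produces its own grouping key.
val casePattern = udf { (s: String) =>
  s.map {
    case c if c.isUpper => '1'
    case c if c.isLower => '0'
    case c              => c
  }
}

df.groupBy(casePattern($"value").as("reg_patt"))
  .agg(collect_list($"value").as("newval"))
  .show(false)

This reproduces the grouping from answer 0's translate() approach without enumerating the patterns up front.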