Scala - Spark SQL Row pattern matching on a struct

Asked: 2018-04-19 05:34:04

Tags: scala apache-spark struct pattern-matching case-class

I am trying to pattern match inside a DataFrame map function, matching each Row against a Row pattern that contains nested case classes. The DataFrame is the result of a join and has the schema shown below: a few primitive-typed columns plus two composite (struct) columns:

case class MyList(values: Seq[Integer])
case class MyItem(key1: String, key2: String, field1: Integer, group1: MyList, group2: MyList, field2: Integer)
val myLine1 = new MyItem ("MyKey01", "MyKey02", 1, new MyList(Seq(1)), new MyList(Seq(2)), 2)
val myLine2 = new MyItem ("YourKey01", "YourKey02", 2, new MyList(Seq(2,3)), new MyList(Seq(4,5)), 20)
val dfRaw = Seq(myLine1, myLine2).toDF
dfRaw.printSchema
dfRaw.show
val df2 = dfRaw.map(r => r match {
    case Row(key1: String, key2: String, field1: Integer, group1: MyList, group2: MyList, field2: Integer) => "Matched"
    case _ => "Un matched"
})
df2.show

My problem is that after that map function, all I get is "Un matched":

root
 |-- key1: string (nullable = true)
 |-- key2: string (nullable = true)
 |-- field1: integer (nullable = true)
 |-- group1: struct (nullable = true)
 |    |-- values: array (nullable = true)
 |    |    |-- element: integer (containsNull = true)
 |-- group2: struct (nullable = true)
 |    |-- values: array (nullable = true)
 |    |    |-- element: integer (containsNull = true)
 |-- field2: integer (nullable = true)
+---------+---------+------+--------------------+--------------------+------+
|     key1|     key2|field1|              group1|              group2|field2|
+---------+---------+------+--------------------+--------------------+------+
|  MyKey01|  MyKey02|     1|   [WrappedArray(1)]|   [WrappedArray(2)]|     2|
|YourKey01|YourKey02|     2|[WrappedArray(2, 3)]|[WrappedArray(4, 5)]|    20|
+---------+---------+------+--------------------+--------------------+------+
df2: org.apache.spark.sql.Dataset[String] = [value: string]
+----------+
|     value|
+----------+
|Un matched|
|Un matched|
+----------+

It works if I ignore those two struct columns in the case branch, replacing `group1: MyList, group2: MyList` with `_, _`:

case Row(key1: String, key2: String, field1: Integer, _, _, field2: Integer) => "Matched"

How can I pattern match on the case classes? Thanks!

1 Answer:

Answer 0 (score: 0)

Struct columns are handled in Spark as org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema, not as your case classes, so you have to define your match case as:

import org.apache.spark.sql.catalyst.expressions._
val df2 = dfRaw.map(r => r match {
    case Row(key1: String, key2: String, field1: Integer, group1: GenericRowWithSchema, group2: GenericRowWithSchema, field2: Integer) => "Matched"
    case _ => "Un matched"
})

Alternatively, you can define the match case with wildcards (`_`) for those columns, since at runtime those values are GenericRowWithSchema instances anyway. For the same reason, an untyped binding behaves like a wildcard, so defining the case as below also works:

case Row(key1: String, key2: String, field1: Integer, group1, group2, field2: Integer) => "Matched"
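As a side note, the underlying Scala behavior can be seen without Spark at all. The sketch below (a hypothetical `describe` helper, not part of the question's code) shows that a typed pattern such as `x: MyList` only matches when the value's runtime class conforms; since Spark hands you a generic row object for a struct column rather than a `MyList` instance, that branch never fires:

```scala
// Minimal sketch, plain Scala (no Spark): typed patterns match on the
// runtime class. Spark stores struct columns as GenericRowWithSchema,
// not as your case class, so a `group1: MyList` pattern cannot match.
case class MyList(values: Seq[Integer])

def describe(x: Any): String = x match {
  case _: MyList => "Matched MyList"   // fires only for real MyList instances
  case _         => "Un matched"       // anything else, e.g. a generic row
}

// The case class instance matches; a stand-in for Spark's generic row does not.
println(describe(MyList(Seq(1))))  // Matched MyList
println(describe(Seq(1): Any))     // Un matched
```

The same logic explains why the untyped bindings (`group1`, `group2`) and wildcards (`_`) in the answer both work: with no type ascription, the pattern places no constraint on the runtime class.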