Converting a schema from string to Array[StructType] using Spark Scala

Asked: 2019-09-28 09:26:02

Tags: scala apache-spark apache-spark-sql

I have sample data as shown below, and I need to convert the columns (ABS, ALT) from string to Array[StructType] using Spark Scala code. Any help would be much appreciated.

With the help of a UDF I was able to convert from string to ArrayType, but I need some help converting these two columns (ABS, ALT) from string to Array[StructType].

VIN         TT  MSG_TYPE  ABS                          ALT
MSGXXXXXXXX 1   SIGL      [{"E":1569XXXXXXX,"V":0.0}]  [{"E":156957XXXXXX,"V":0.0}]

df.currentSchema 
root
|-- VIN: string (nullable = true)
|-- TT: long (nullable = true)
|-- MSG_TYPE: string (nullable = true)
|-- ABS: string (nullable = true)
|-- ALT: string (nullable = true)

df.expectedSchema:

|-- VIN: string (nullable = true)
|-- TT: long (nullable = true)
|-- MSG_TYPE: string (nullable = true)
|-- ABS: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- E: long (nullable = true)
|    |    |-- V: long (nullable = true)
|-- ALT: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- E: long (nullable = true)
|    |    |-- V: double (nullable = true)

2 Answers

Answer 0 (score: 1)

You can use a udf that parses the JSON and converts it into an array of structs.

First, define a function that parses the JSON (based on this answer):

import scala.util.parsing.json.JSON

case class Data(E: String, V: Double)

// Typed extractors that narrow the Any values returned by JSON.parseFull
class CC[T] extends Serializable { def unapply(a: Any): Option[T] = Some(a.asInstanceOf[T]) }
object M extends CC[Map[String, Any]]
object L extends CC[List[Any]]
object D extends CC[Double]

def toStruct(in: String): Array[Data] = {
  if (in == null || in.isEmpty) return Array.empty[Data]
  val result = for {
    Some(L(list)) <- List(JSON.parseFull(in))
    M(data) <- list
    // JSON.parseFull returns all JSON numbers as Double
    D(e) = data("E")
    D(v) = data("V")
  } yield Data(e.toLong.toString, v)
  result.toArray
}
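To see what those one-line extractors actually do, here is a minimal self-contained sketch (plain Scala, no Spark or JSON library; the `parsed` value mimics what `JSON.parseFull` would return for one record):

```scala
// Typed extractors that narrow Any, in the style used by toStruct above
class CC[T] extends Serializable { def unapply(a: Any): Option[T] = Some(a.asInstanceOf[T]) }
object M extends CC[Map[String, Any]]
object D extends CC[Double]

// Mimics JSON.parseFull output for one record: JSON numbers come back as Double
val parsed: Any = Map("E" -> 1569.0, "V" -> 0.0)

val M(record) = parsed  // narrow Any to Map[String, Any]
val D(e) = record("E")  // narrow Any to Double
val D(v) = record("V")
```

Note that the `asInstanceOf` inside `unapply` means a mismatched type surfaces as a `ClassCastException` rather than a failed match, so this pattern is only safe when the JSON shape is known in advance.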

This function returns an array of Data objects, which already have the correct structure. Now we use it to define a udf:

val ts: String => Array[Data] = toStruct(_)
import org.apache.spark.sql.functions.udf
val toStructUdf = udf(ts)

Finally, we call the udf (for example, in a select statement):

val df = ...
val newdf = df.select('VIN, 'TT, 'MSG_TYPE, toStructUdf('ABS).as("ABS"), toStructUdf('ALT).as("ALT"))
newdf.printSchema()

Output:

root
 |-- VIN: string (nullable = true)
 |-- TT: string (nullable = true)
 |-- MSG_TYPE: string (nullable = true)
 |-- ABS: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- E: string (nullable = true)
 |    |    |-- V: double (nullable = false)
 |-- ALT: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- E: string (nullable = true)
 |    |    |-- V: double (nullable = false)

Answer 1 (score: 1)

The following will also work:

import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.{ArrayType, DoubleType, LongType, StructField, StructType}

val schema = ArrayType(StructType(Seq(StructField("E", LongType), StructField("V", DoubleType))))

val final_df = df.withColumn("ABS", from_json($"ABS", schema)).withColumn("ALT", from_json($"ALT", schema))

final_df.printSchema:

root
 |-- VIN: string (nullable = true)
 |-- TT: string (nullable = true)
 |-- MSG_TYPE: string (nullable = true)
 |-- ABS: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- E: long (nullable = true)
 |    |    |-- V: double (nullable = false)
 |-- ALT: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- E: long (nullable = true)
 |    |    |-- V: double (nullable = false)
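As a side note (a sketch, assuming Spark 2.4+, where `DataType.fromDDL` is available), the same schema can also be written as a DDL string, which `from_json` accepts directly through its `(Column, String, Map)` overload:

```scala
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.DataType

// DDL form of array<struct<E: long, V: double>>; "bigint" is the DDL name for LongType
val ddl = "array<struct<E:bigint,V:double>>"
val schema = DataType.fromDDL(ddl)

// Equivalent to the StructType-based version above, e.g.:
// df.withColumn("ABS", from_json($"ABS", ddl, Map.empty[String, String]))
```

This keeps the schema definition compact, at the cost of losing compile-time checking of the field types.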