Spark 2.0.1: split a JSON array column into ArrayType(StringType)

Date: 2016-12-20 06:16:06

Tags: scala, apache-spark

I have a DataFrame like this:

root
 |-- sum_id: long (nullable = true)
 |-- json: string (nullable = true)

+-------+------------------------------+
|sum_id |json                          |
+-------+------------------------------+
|8124455|[{"itemId":11},{"itemId":12}] |
|8124457|[{"itemId":53}]               |
|8124458|[{"itemId":11},{"itemId":33}] |
+-------+------------------------------+

I want to explode it with Scala into the following:

root
 |-- sum_id: long (nullable = true)
 |-- itemId: integer (nullable = true)

+-------+--------+
|sum_id |itemId  |
+-------+--------+
|8124455|11      |
|8124455|12      |
|8124457|53      |
|8124458|11      |
|8124458|33      |
+-------+--------+

What I have tried:

  1. Using get_json_object, but the column is an array of JSON objects, so I think it should first be exploded into individual objects. But how?

  2. Casting the json column from StringType to ArrayType(StringType), but I get a data type mismatch exception (see the sketch after this list).

  3. Please guide me on how to solve this.
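
For reference, a minimal sketch of the failing cast from item 2, assuming the DataFrame above is named df; Spark rejects a direct string-to-array cast:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{ArrayType, StringType}

// Throws an AnalysisException: "... due to data type mismatch:
// cannot cast StringType to ArrayType(StringType,true)"
df.withColumn("json", col("json").cast(ArrayType(StringType)))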

2 Answers:

Answer 0 (score: 0)

The code below will do exactly what you need:

import org.apache.spark.sql.functions.{col, explode, get_json_object, udf}

// get_json_object(..., "$[*].itemId") extracts the itemIds as a JSON-array
// string (e.g. "[11,12]", or just "53" for a single element); the UDF strips
// the brackets and splits on commas.
val toItemArr = udf((jsonArrStr: String) => {
  jsonArrStr.replace("[", "").replace("]", "").split(",")
})

inputDataFrame
  .withColumn("itemId", explode(toItemArr(get_json_object(col("json"), "$[*].itemId"))))
  .drop("json")
  .show


+-------+------+
|sum_id |itemId|
+-------+------+
|8124455|    11|
|8124455|    12|
|8124457|    53|
|8124458|    11|
|8124458|    33|
+-------+------+
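
Note that the UDF above yields itemId as a string column, while the question asks for an integer; a cast at the end lines the schema up (a small follow-up to the snippet above):

inputDataFrame
  .withColumn("itemId", explode(toItemArr(get_json_object(col("json"), "$[*].itemId"))))
  .withColumn("itemId", col("itemId").cast("int"))
  .drop("json")
  .show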

Answer 1 (score: 0)

If you are working with JSON, then this is probably the best approach.

Take a look:

import org.apache.spark._
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.DeserializationFeature

// Assumes a spark-shell session, where sc and the toDF implicits are in scope.
val df = sc.parallelize(Seq(
  (8124455, """[{"itemId":11},{"itemId":12}]"""),
  (8124457, """[{"itemId":53}]"""),
  (8124458, """[{"itemId":11},{"itemId":33}]""")
)).toDF("sum_id", "json")

val result = df.rdd.mapPartitions(records => {
  // One mapper per partition: an ObjectMapper is expensive to create and not serializable.
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  mapper.registerModule(DefaultScalaModule)
  val values = records.flatMap(record => {
    try {
      // Parse the JSON array into List[Map[String, Int]] and keep only the values.
      val itemIds = mapper.readValue[List[Map[String, Int]]](record.getString(1)).flatMap(_.values)
      Some((record.getInt(0), itemIds))
    } catch {
      case e: Exception => None // skip rows with malformed JSON
    }
  })
  // Flatten each (sum_id, List(itemIds)) pair into one (sum_id, itemId) row per item.
  values.flatMap { case (sumId, itemIds) => itemIds.map(itemId => (sumId, itemId)) }
}, preservesPartitioning = true)

result.toDF("sum_id", "itemId").show()
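
As a design note, the same Jackson parsing can stay entirely in the DataFrame API by wrapping it in a UDF. A sketch under the same imports as above; building a mapper per call is wasteful and is kept here only for brevity:

import org.apache.spark.sql.functions.{col, explode, udf}

val parseItemIds = udf((json: String) => {
  // Wasteful: constructs a mapper per row; acceptable for a sketch.
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule)
  mapper.readValue[List[Map[String, Int]]](json).flatMap(_.values)
})

df.withColumn("itemId", explode(parseItemIds(col("json"))))
  .drop("json")
  .show()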