How can I parse JSON in Spark with fasterxml, without using Spark SQL?

Asked: 2016-05-19 12:18:40

Tags: json scala apache-spark

What I have so far:

import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.DeserializationFeature

case class Person(name: String, lovesPandas: Boolean)

val mapper = new ObjectMapper()

val input = sc.textFile("files/pandainfo.json")
val result = input.flatMap(record => {
    try{
        Some(mapper.readValue(record, classOf[Person]))
    } catch {
        case e: Exception => None
    }
})
result.collect

But the result is Array() (with no errors). The file is https://github.com/databricks/learning-spark/blob/master/files/pandainfo.json. How do I proceed from here?
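One detail worth spelling out: the try/catch turns every parse failure into None, so an empty Array() most likely means every record failed to deserialize and the errors were silently swallowed. Below is a minimal debugging sketch (assuming the same sc, input and Person as above) that keeps the error message instead of dropping it:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Build the mapper inside the closure so it never has to be serialized,
// and register DefaultScalaModule so Jackson can bind to the Scala case class.
val debug = input.map { record =>
  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)
  try {
    Right(mapper.readValue(record, classOf[Person]))
  } catch {
    case e: Exception => Left(record + " -> " + e.getMessage) // keep the failure instead of dropping it
  }
}
debug.collect.foreach(println) // shows which lines failed and why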

After consulting "Spark: broadcasting jackson ObjectMapper", I tried:

import org.apache.spark._
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.databind.DeserializationFeature

case class Person(name: String, lovesPandas: Boolean)

val input = """{"name":"Sparky The Bear", "lovesPandas":true}"""
val result = input.flatMap(record => {
    try{
        val mapper = new ObjectMapper()
        mapper.registerModule(DefaultScalaModule)
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
        Some(mapper.readValue(record, classOf[Person]))
    } catch {
        case e: Exception => None
    }
})
result.collect

and got:

Name: Compile Error
Message: <console>:34: error: overloaded method value readValue with alternatives:
  [T](x$1: Array[Byte], x$2: com.fasterxml.jackson.databind.JavaType)T <and>
  [T](x$1: Array[Byte], x$2: com.fasterxml.jackson.core.type.TypeReference[_])T <and>
  [T](x$1: Array[Byte], x$2: Class[T])T <and>
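The compile error itself comes from the fact that input here is a plain String rather than an RDD: flatMap on a String iterates over its characters, so record is a Char, which matches none of the readValue overloads. A minimal sketch of the same attempt with the string wrapped in a one-element RDD (assuming sc and Person are in scope as above):

import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Parallelize the single JSON string so flatMap sees one String record,
// not individual characters.
val records = sc.parallelize(Seq("""{"name":"Sparky The Bear", "lovesPandas":true}"""))

val parsed = records.flatMap { record =>
  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)
  mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
  try {
    Some(mapper.readValue(record, classOf[Person]))
  } catch {
    case e: Exception => None
  }
}
parsed.collect // should yield Array(Person(Sparky The Bear,true))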

2 Answers:

Answer 0 (score: 1)

Answer 1 (score: 0)

Instead of sc.textFile("path\to\json"), you can try this (I'm writing it in Java because I don't know Scala, but the API is the same):

SQLContext sqlContext = new SQLContext(sc);
DataFrame dfFromJson = sqlContext.read().json("path\to\json\file.json");

Spark will read your JSON file and convert it into a DataFrame.
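Since the question is tagged scala, here is a rough Scala equivalent (a sketch assuming Spark 1.x, where SQLContext is the entry point, and a file with one JSON object per line):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// Spark infers the schema from the JSON records.
val dfFromJson = sqlContext.read.json("path/to/json/file.json")
dfFromJson.printSchema()
dfFromJson.show()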

If your JSON file has nested structure, you can use

org.apache.spark.sql.functions.explode(e: Column): Column

For an example, see my answer here.
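For illustration, a minimal sketch of explode, assuming a hypothetical nested array column named friends (not part of the original file):

import org.apache.spark.sql.functions.explode

// Hypothetical input record: {"name":"Holden","friends":["Sparky","Panda"]}
// explode produces one output row per element of the array column.
val flattened = dfFromJson.select(dfFromJson("name"), explode(dfFromJson("friends")).as("friend"))
flattened.show()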

Hope this helps.