Why do I get the error "Unable to find encoder for type stored in a Dataset" when encoding JSON with a case class?

Asked: 2016-01-11 06:46:14

Tags: scala apache-spark apache-spark-dataset apache-spark-encoders

I have written the following Spark job:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val ctx = new org.apache.spark.sql.SQLContext(sc)
    import ctx.implicits._

    case class Person(age: Long, city: String, id: String, lname: String, name: String, sex: String)
    case class Person2(name: String, age: Long, city: String)

    val persons = ctx.read.json("/tmp/persons.json").as[Person]
    persons.printSchema()
  }
}

Running the main function in my IDE, I get two errors:

Error:(15, 67) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._  Support for serializing other types will be added in future releases.
    val persons = ctx.read.json("/tmp/persons.json").as[Person]
                                                                  ^

Error:(15, 67) not enough arguments for method as: (implicit evidence$1: org.apache.spark.sql.Encoder[Person])org.apache.spark.sql.Dataset[Person].
Unspecified value parameter evidence$1.
    val persons = ctx.read.json("/tmp/persons.json").as[Person]
                                                                  ^

But in the Spark shell I can run this job without any errors. What is the problem?

3 Answers:

Answer 0 (score: 31)

The error message says that the Encoder cannot handle the Person case class:

Error:(15, 67) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._  Support for serializing other types will be added in future releases.

Move the declaration of the case class outside the scope of SimpleApp, as in the sketch below.
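For illustration, here is a minimal sketch of the fixed program: the question's code with the case classes moved to the top level, otherwise unchanged. A case class declared inside a method is an inner class, and the implicit Encoder derivation brought in by the implicits import cannot resolve it; at the top level the derivation works.

import org.apache.spark.{SparkConf, SparkContext}

// Top-level declarations: Spark can now derive implicit Encoders for these
case class Person(age: Long, city: String, id: String, lname: String, name: String, sex: String)
case class Person2(name: String, age: Long, city: String)

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val ctx = new org.apache.spark.sql.SQLContext(sc)
    import ctx.implicits._

    val persons = ctx.read.json("/tmp/persons.json").as[Person]
    persons.printSchema()
  }
}

This also explains why the Spark shell works: code typed into the shell is not compiled inside a method body, so the case class is visible to the encoder derivation.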

Answer 1 (score: 2)

You will get the same error if you import both sqlContext.implicits._ and spark.implicits._ in SimpleApp (the order does not matter): the two objects provide duplicate implicit encoders, and the compiler cannot choose between them.

Removing one or the other is the solution:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .getOrCreate()

val sqlContext = spark.sqlContext
import sqlContext.implicits._ // sqlContext OR spark implicits: keep only one of these
//import spark.implicits._    // sqlContext OR spark implicits

case class Person(age: Long, city: String)
val persons = spark.read.json("/tmp/persons.json").as[Person]

Tested with Spark 2.1.0.

Interestingly, if you import the implicits of the same object twice, there is no problem, as the snippet below shows.
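A minimal sketch of that observation, assuming the spark session and the Person case class from the snippet above: a repeated import of the same object is simply redundant, whereas two distinct objects each contribute their own copy of the implicit encoders and clash.

import spark.implicits._
import spark.implicits._ // same object twice: redundant but harmless

val ds = Seq(Person(30L, "Paris")).toDS() // the encoder still resolves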

Answer 2 (score: 2)

@Milad Khajavi

Define the Person case class outside of object SimpleApp. Also, add import sqlContext.implicits._ inside the main() function.