I am trying to create a Dataset from an RDD y of this shape:

y: RDD[(MyObj1, scala.Iterable[MyObj2])]

so I explicitly created the encoder:
implicit def tuple2[A1, A2](
    implicit e1: Encoder[A1],
    e2: Encoder[A2]
): Encoder[(A1, A2)] = Encoders.tuple[A1, A2](e1, e2)

// Create the Dataset
val z = spark.createDataset(y)(tuple2[MyObj1, Iterable[MyObj2]])
The code compiles without errors, but when I try to run it I get this error:
Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for scala.Iterable[org.bean.input.MyObj2]
- field (class: "scala.collection.Iterable", name: "_2")
- root class: "scala.Tuple2"
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:625)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$10.apply(ScalaReflection.scala:619)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$10.apply(ScalaReflection.scala:607)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:607)
at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:438)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:71)
at org.apache.spark.sql.Encoders$.product(Encoders.scala:275)
at org.apache.spark.sql.LowPrioritySQLImplicits$class.newProductEncoder(SQLImplicits.scala:233)
at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:33)
Some details about my objects (MyObj1 & MyObj2):
- MyObj1:

case class MyObj1(
    id: String,
    `type`: String // backticks are required: `type` is a reserved word in Scala
)
- MyObj2:

trait MyObj2 {
    val o_state: Option[String]
    val n_state: Option[String]
    val ch_inf: MyObj1
    val state_updated: MyObj3
}
Please help.
Answer 0 (score: 1)
Spark does not provide an Encoder for Iterable, so unless you want to fall back to Encoders.kryo or Encoders.javaSerialization, this cannot be done. The closest subclass of Iterable for which Spark does provide Encoders is Seq, so that is probably what you should use here. Otherwise see How to store custom objects in Dataset?
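A minimal sketch of the Encoders.kryo fallback mentioned above, assuming the MyObj1/MyObj2 definitions from the question. Note that kryo serializes the whole tuple into a single binary column, so the Dataset loses its columnar structure:

import org.apache.spark.sql.{Encoder, Encoders}

// Kryo handles arbitrary serializable types, including traits like MyObj2,
// but the resulting Dataset has one opaque binary `value` column.
implicit val tupleEnc: Encoder[(MyObj1, Iterable[MyObj2])] =
  Encoders.kryo[(MyObj1, Iterable[MyObj2])]

val z = spark.createDataset(y) // picks up the implicit kryo encoder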
Answer 1 (score: 1)
Try changing the declaration to val y: RDD[(MyObj1, Seq[MyObj2])] and it will work. I verified this with my own classes:

case class Key(key: String)
case class Value(value: Int)

For:
val y: RDD[(Key, Seq[Value])] = sc.parallelize(Map(
  Key("A") -> List(Value(1), Value(2)),
  Key("B") -> List(Value(3), Value(4), Value(5))
).toSeq)
val z = sparkSession.createDataset(y)
z.show()
I got:
+---+---------------+
| _1| _2|
+---+---------------+
|[A]| [[1], [2]]|
|[B]|[[3], [4], [5]]|
+---+---------------+
If I change it to Iterable instead, I get the exception.
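If the RDD had been built with Iterable values instead, a minimal sketch (reusing the hypothetical Key/Value classes above) of converting before creating the Dataset:

import sparkSession.implicits._ // derives the encoder for (Key, Seq[Value])

// Hypothetical input with Iterable values, as in the question:
val yIter: org.apache.spark.rdd.RDD[(Key, Iterable[Value])] =
  sc.parallelize(Seq(Key("A") -> Iterable(Value(1), Value(2))))

// Convert each Iterable to a Seq so the built-in encoders apply:
val z2 = sparkSession.createDataset(yIter.mapValues(_.toSeq))
z2.show()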