ClassCastException when deserializing with Java native readObject from a Spark driver

Date: 2018-02-12 11:30:56

Tags: java scala apache-spark serialization google-cloud-dataproc

I have two Spark jobs, A and B, such that A must run before B. The output of A must be readable from:

  • Spark job B
  • a standalone Scala program outside of any Spark environment (no Spark dependency)

I am currently using Java native serialization with Scala case classes.

From Spark job A:

val model = ALSFactorizerModel(...)

context.writeSerializable(resultOutputPath, model)

with the serialization method:

def writeSerializable[T <: Serializable](path: String, obj: T): Unit = {
  val writer: OutputStream = ... // Google Cloud Storage dependent
  val oos: ObjectOutputStream = new ObjectOutputStream(writer)
  oos.writeObject(obj)
  oos.close()
  writer.close()
}

From Spark job B, or from any standalone non-Spark Scala code:

val lastFactorizerModel: ALSFactorizerModel = context
                     .readSerializable[ALSFactorizerModel](ALSFactorizer.resultOutputPath)

with the deserialization method:

def readSerializable[T <: Serializable](path: String): T = {
  val is: InputStream = ... // Google Cloud Storage dependent
  val ois = new ObjectInputStream(is)
  val model: T = ois
    .readObject()
    .asInstanceOf[T]
  ois.close()
  is.close()

  model
}
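
For reference, here is a minimal sketch of what the elided, GCS-dependent stream creation could look like, assuming the Hadoop FileSystem API with the GCS connector; the openForWrite/openForRead helper names are made up for illustration and stand in for whatever SimpleGCSFileSystem actually does:

import java.io.{InputStream, OutputStream}
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helpers: open streams on a gs://... path via the Hadoop FileSystem API.
def openForWrite(path: String): OutputStream =
  FileSystem.get(URI.create(path), new Configuration()).create(new Path(path))

def openForRead(path: String): InputStream =
  FileSystem.get(URI.create(path), new Configuration()).open(new Path(path))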

The (nested) case classes:

ALSFactorizerModel:

package mycompany.algo.als.common.io.model.factorizer

import mycompany.data.item.ItemStore

@SerialVersionUID(1L)
final case class ALSFactorizerModel(
  knownItems:       Array[ALSFeaturedKnownItem],
  unknownItems:     Array[ALSFeaturedUnknownItem],
  rank:             Int,
  modelTS:          Long,
  itemRepositoryTS: Long,
  stores:           Seq[ItemStore]
)

ItemStore:

package mycompany.data.item

@SerialVersionUID(1L)
final case class ItemStore(
  id:     String,
  tenant: String,
  name:   String,
  index:  Int
)

Output:

  • from the standalone non-Spark Scala program => OK
  • from Spark job B run locally on my dev machine (standalone local Spark node) => OK
  • from Spark job B run on the (Dataproc) Spark cluster => fails with the following exception:

The exception:

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field mycompany.algo.als.common.io.model.factorizer.ALSFactorizerModel.stores of type scala.collection.Seq in instance of mycompany.algo.als.common.io.model.factorizer.ALSFactorizerModel
  at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
  at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
  at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2251)
  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
  at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
  at mycompany.fs.gcs.SimpleGCSFileSystem.readSerializable(SimpleGCSFileSystem.scala:71)
  at mycompany.algo.als.batch.strategy.ALSClusterer$.run(ALSClusterer.scala:38)
  at mycompany.batch.SinglePredictorEbapBatch$$anonfun$3.apply(SinglePredictorEbapBatch.scala:55)
  at mycompany.batch.SinglePredictorEbapBatch$$anonfun$3.apply(SinglePredictorEbapBatch.scala:55)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
  at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
  at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
  at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
  at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
  at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Am I missing something? Should I configure Dataproc / Spark to support Java serialization for this code?

I submit the job with --jars <path to my fatjar> and I have never had any other problems before. The Spark dependencies are not included in this jar; their scope is Provided.

Scala version: 2.11.8, Spark version: 2.0.2, SBT version: 0.13.13

Thanks for your help.

1 answer:

Answer 0 (score: 0):

Replacing stores: Seq[ItemStore] with stores: Array[ItemStore] solved our problem; see the updated case class below.
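
Concretely, the model becomes (only the stores field changes):

@SerialVersionUID(1L)
final case class ALSFactorizerModel(
  knownItems:       Array[ALSFeaturedKnownItem],
  unknownItems:     Array[ALSFeaturedUnknownItem],
  rank:             Int,
  modelTS:          Long,
  itemRepositoryTS: Long,
  stores:           Array[ItemStore] // was: Seq[ItemStore]
)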

Alternatively, we could have used a different class loader for the serialization/deserialization operations.
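
For the class-loader route, a minimal sketch (the class name here is made up for illustration): an ObjectInputStream subclass that resolves classes through the current thread's context class loader, which on a Spark driver includes the jars passed via --jars, rather than the JVM's application class loader:

import java.io.{InputStream, ObjectInputStream, ObjectStreamClass}

class ContextClassLoaderObjectInputStream(in: InputStream) extends ObjectInputStream(in) {
  // Try the context class loader first; fall back to the default resolution.
  override def resolveClass(desc: ObjectStreamClass): Class[_] =
    try Class.forName(desc.getName, false, Thread.currentThread().getContextClassLoader)
    catch { case _: ClassNotFoundException => super.resolveClass(desc) }
}

Substituting this for the plain ObjectInputStream in readSerializable should let scala.collection.immutable.List$SerializationProxy resolve its target classes against the same loader that loaded ALSFactorizerModel.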

Hope this helps.