Spark: java.io.NotSerializableException

Date: 2016-09-22 15:16:10

Tags: scala serialization apache-spark spark-streaming

I want to pass path to saveAsTextFile, which runs inside a function in Spark Streaming. However, I get a java.io.NotSerializableException. Normally in similar cases I use a skeleton object, but in this particular case I don't know how to solve the problem. Could someone please help me?

import java.util
import java.util.Properties
import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
import com.lambdaworks.jacks.JacksMapper
import org.sedis._
import redis.clients.jedis._
import com.typesafe.config.ConfigFactory
import kafka.consumer.{Consumer, ConsumerConfig}
import kafka.utils.Logging
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

class KafkaTestConsumer(val zkQuorum: String,
                        val group: String,
                        val topicMessages: String,
                        val path: String) extends Logging
{

// ...
// DStream[String]
dstream.foreachRDD { rdd =>
   // rdd -> RDD[String], each String is a JSON
   // Parsing each JSON
   // splitted -> RDD[Map[String,Any]]
   val splitted = rdd.map(line => Utils.parseJSON(line)) 
   // ...
   splitted.saveAsTextFile(path)
}

}

object Utils {

  def parseJSON[T](json: String): Map[String,Any] = {
    val mapper = new ObjectMapper() with ScalaObjectMapper
    mapper.registerModule(DefaultScalaModule)
    mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
    mapper.readValue[Map[String,Any]](json)
  }
}

The whole stack trace:

  

16/09/22 17:03:28 ERROR Utils: Exception encountered
java.io.NotSerializableException: org.consumer.kafka.KafkaTestConsumer
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$writeObject$1.apply$mcV$sp(DStreamGraph.scala:180)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$writeObject$1.apply(DStreamGraph.scala:175)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$writeObject$1.apply(DStreamGraph.scala:175)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1205)
    at org.apache.spark.streaming.DStreamGraph.writeObject(DStreamGraph.scala:175)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializableWithWriteObjectMethod(SerializationDebugger.scala:230)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:189)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:108)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:206)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:108)
    at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:67)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
    at org.apache.spark.streaming.StreamingContext.validate(StreamingContext.scala:560)
    at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:601)
    at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:600)
    at org.consumer.kafka.KafkaDecisionsConsumer.run(KafkaTestConsumer.scala:136)
    at org.consumer.ServiceRunner$.main(QueuingServiceRunner.scala:20)
    at org.consumer.ServiceRunner.main(QueuingServiceRunner.scala)

2 Answers:

Answer 0 (score: 0)

The problem is that you are using the RDD action saveAsTextFile inside the DStream operation foreachRDD, which runs on the workers. That is why, when the worker tries to execute splitted.saveAsTextFile(path), you get the serialization error: saveAsTextFile is an RDD action. You can do it like this instead:

dstream
   // each String in the stream is a JSON document
   // parse each JSON -> DStream[Map[String,Any]]
   .map(line => Utils.parseJSON(line))
   // save at the DStream level instead of inside foreachRDD
   .saveAsTextFiles(path)
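Note that at the DStream level the output operation is saveAsTextFiles (plural): it writes one output directory per batch interval, named from the given path prefix and the batch time, whereas the RDD-level saveAsTextFile in the question writes a single directory each time it is called inside foreachRDD.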

Answer 1 (score: 0)

I ran into the same problem with Spark 2.3.0, where I had not removed the checkpoint statement. I solved it by doing two things:

Running the command:

chmod 777 checkpoint_directory

and implementing the Serializable interface for the class the error is thrown for.

In your case, you need to make the class below implement Serializable. Hopefully that solves it.

org.consumer.kafka.KafkaTestConsumer
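
For reference, a minimal sketch of what this answer suggests, assuming the constructor parameters, imports and body stay exactly as in the question and only the Serializable marker is added:

class KafkaTestConsumer(val zkQuorum: String,
                        val group: String,
                        val topicMessages: String,
                        val path: String)
  extends Logging
  with Serializable  // mark the class serializable so Spark can serialize anything that captures it
{
  // ... same DStream logic as in the question ...
}

Whether this alone removes the exception also depends on every non-transient field of the class being serializable itself; the String constructor fields shown here are, but any other fields the real class holds would need the same treatment.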