Caused by: java.io.NotSerializableException: org.apache.kafka.clients.producer.KafkaProducer

Time: 2019-09-18 07:03:37

Tags: java apache-kafka kafka-producer-api

I am trying to send Java String messages with a Kafka producer. The String messages are extracted from a Java Spark JavaPairDStream.

JavaPairDStream<String, String> processedJavaPairStream = inputStream
        .mapToPair(record -> new Tuple2<>(record.key(), record.value()))
        .mapValues(message -> message.replace('>', '#'));

String outTopics = "outputTopic";
String broker = "localhost:9092";

Properties properties = new Properties();
properties.put("bootstrap.servers", broker);
properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<String, String>(properties);
processedJavaPairStream.foreachRDD(rdd -> rdd.foreach(tuple2 -> {

    ProducerRecord<String, String> message = new ProducerRecord<String, String>(outTopics, tuple2._1, tuple2._2);

    System.out.println(message.key() + " : " + message.value()); //(1)

    producer.send(message).get(); //(2)
}));

Line (1) prints the message strings correctly. However, when I send those messages with the Kafka producer at line (2), it throws the exception below:

Caused by: java.io.NotSerializableException: org.apache.kafka.clients.producer.KafkaProducer
Serialization stack:
    - object not serializable (class: org.apache.kafka.clients.producer.KafkaProducer, value: org.apache.kafka.clients.producer.KafkaProducer@10f6530d)
    - element of array (index: 1)
    - array (class [Ljava.lang.Object;, size 2)
    - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
    - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class com.aaa.StreamingKafkaDataWithSpark.streaming.SimpleDStreamExample, functionalInterfaceMethod=org/apache/spark/api/java/function/VoidFunction.call:(Ljava/lang/Object;)V, implementation=invokeStatic com/aaa/StreamingKafkaDataWithSpark/streaming/SimpleDStreamExample.lambda$3:(Ljava/lang/String;Lorg/apache/kafka/clients/producer/Producer;Lscala/Tuple2;)V, instantiatedMethodType=(Lscala/Tuple2;)V, numCaptured=2])
    - writeReplace data (class: java.lang.invoke.SerializedLambda)
    - object (class com.aaa.StreamingKafkaDataWithSpark.streaming.SimpleDStreamExample$$Lambda$941/0x000000010077e440, com.aaa.StreamingKafkaDataWithSpark.streaming.SimpleDStreamExample$$Lambda$941/0x000000010077e440@7bb180e1)
    - element of array (index: 0)
    - array (class [Ljava.lang.Object;, size 1)
    - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
    - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=interface org.apache.spark.api.java.JavaRDDLike, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic org/apache/spark/api/java/JavaRDDLike.$anonfun$foreach$1$adapted:(Lorg/apache/spark/api/java/function/VoidFunction;Ljava/lang/Object;)Ljava/lang/Object;, instantiatedMethodType=(Ljava/lang/Object;)Ljava/lang/Object;, numCaptured=1])
    - writeReplace data (class: java.lang.invoke.SerializedLambda)
    - object (class org.apache.spark.api.java.JavaRDDLike$$Lambda$942/0x000000010077c840, org.apache.spark.api.java.JavaRDDLike$$Lambda$942/0x000000010077c840@7fb1cd32)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:400)
    ... 30 more
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:403)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:393)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
    at org.apache.spark.rdd.RDD.$anonfun$foreach$1(RDD.scala:926)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.foreach(RDD.scala:925)
    at org.apache.spark.api.java.JavaRDDLike.foreach(JavaRDDLike.scala:351)
    at org.apache.spark.api.java.JavaRDDLike.foreach$(JavaRDDLike.scala:350)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:45)
    at com.aaa.StreamingKafkaDataWithSpark.streaming.SimpleDStreamExample.lambda$2(SimpleDStreamExample.java:72)
    at org.apache.spark.streaming.api.java.JavaDStreamLike.$anonfun$foreachRDD$1(JavaDStreamLike.scala:272)
    at org.apache.spark.streaming.api.java.JavaDStreamLike.$anonfun$foreachRDD$1$adapted(JavaDStreamLike.scala:272)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2(DStream.scala:628)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2$adapted(DStream.scala:628)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$2(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$1(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.$anonfun$run$1(JobScheduler.scala:257)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:257)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

I can't understand this exception. With line (1) I confirmed that the Kafka producer message is of type <String, String>. So why does line (2) throw this exception? Am I missing a step?

1 answer:

Answer 0 (score: 1)

You need to create the producer for each RDD.

RDDs are distributed across multiple executors, and the Producer object cannot be serialized to be shared between them.
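
For example, here is a minimal sketch that builds the producer inside the executor-side closure instead of on the driver (using foreachPartition so one producer is opened per partition rather than per record; broker, outTopics and processedJavaPairStream are the names from your question):

processedJavaPairStream.foreachRDD(rdd -> rdd.foreachPartition(partition -> {
    // Build the producer here, inside the closure that runs on the executor,
    // so the non-serializable KafkaProducer is never captured from the driver.
    Properties props = new Properties();
    props.put("bootstrap.servers", broker);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

    try (Producer<String, String> producer = new KafkaProducer<>(props)) {
        while (partition.hasNext()) {
            Tuple2<String, String> tuple2 = partition.next();
            producer.send(new ProducerRecord<>(outTopics, tuple2._1, tuple2._2));
        }
    } // try-with-resources flushes and closes the producer for this partition
}));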


Alternatively, look at the documentation of Structured Streaming: you can simply write the stream out to a topic. There is no need to create and send the records yourself.

stream.writeStream().format("kafka")... 
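
As a rough sketch of that approach (assuming a streaming Dataset named df whose key and value columns hold the strings you want to send; the broker and topic are the ones from the question, and the checkpoint path is just a placeholder):

df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream()
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "outputTopic")
  .option("checkpointLocation", "/tmp/checkpoint") // placeholder path
  .start();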

Note that if the goal is only to map one topic to another, the Kafka Streams API is far simpler and has less overhead than Spark.
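
For illustration, a minimal Kafka Streams sketch that performs the same '>' to '#' replacement while copying one topic to another (the application id and the source topic name "inputTopic" are made up for the example):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topic-mapper"); // illustrative id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("inputTopic") // assumed source topic
       .mapValues(message -> message.replace('>', '#'))
       .to("outputTopic");

new KafkaStreams(builder.build(), props).start();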