java.lang.IllegalArgumentException: Invalid lambda deserialization

Date: 2017-11-13 00:07:34

Tags: java apache-spark apache-kafka spark-streaming

I am trying to run a Spark Streaming job with Kafka, but I run into a problem when I execute my class from Eclipse.

In my main class "JavaDirectKafkaWordCount.class" I create my JavaInputDStream with my Kafka params, and I am trying to count the words read from the Kafka topic:

    JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.Subscribe(topicsSet, kafkaParams));

    // Get the lines, split them into words, count the words and print
    JavaDStream<String> lines = messages.map(ConsumerRecord::value);
    JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(SPACE.split(x)).iterator());
    JavaPairDStream<String, Integer> wordCounts = words.mapToPair(s -> new Tuple2<>(s, 1))
        .reduceByKey((i1, i2) -> i1 + i2);
    lines.print();
    // Start the computation
    jssc.start();
    jssc.awaitTermination();
  }
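
For context, here is a minimal sketch of the setup the snippet above assumes. The names jssc, topicsSet, kafkaParams and SPACE come from the snippet itself; the broker address, topic and group id are placeholders:

    // imports assumed: org.apache.spark.SparkConf, org.apache.spark.streaming.*,
    // org.apache.kafka.clients.consumer.ConsumerConfig,
    // org.apache.kafka.common.serialization.StringDeserializer
    SparkConf sparkConf = new SparkConf().setAppName("JavaDirectKafkaWordCount");
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(2));

    // Pattern used by flatMap above to split each line into words
    Pattern SPACE = Pattern.compile(" ");

    // Topic(s) to subscribe to and the Kafka consumer configuration
    Set<String> topicsSet = new HashSet<>(Arrays.asList("my-topic"));
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    kafkaParams.put(ConsumerConfig.GROUP_ID_CONFIG, "word-count-group");
    kafkaParams.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    kafkaParams.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);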

I get this error:

17/11/13 00:20:33 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: unexpected exception type
at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1582)
at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1154)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.lang.invoke.SerializedLambda.readResolve(SerializedLambda.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1148)
Caused by: java.lang.IllegalArgumentException: Invalid lambda deserialization
at start.JavaDirectKafkaWordCount.$deserializeLambda$(JavaDirectKafkaWordCount.java:1)
... 37 more

How can I fix this problem?

2 Answers:

Answer 0 (score: 3)

Change this line:

    JavaDStream<String> lines = messages.map(ConsumerRecord::value);

to:

    JavaDStream<String> lines = messages.map(x -> x.value());
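
If the explicit lambda still fails to deserialize (lambda serialization can misbehave when the class is compiled with Eclipse's own compiler rather than javac), a sketch of an alternative that avoids serialized lambdas entirely is an anonymous implementation of Spark's Function interface; this is not part of the original answer:

    // org.apache.spark.api.java.function.Function is Serializable, so no
    // $deserializeLambda$ lookup is needed on the executors
    JavaDStream<String> lines = messages.map(
        new Function<ConsumerRecord<String, String>, String>() {
            @Override
            public String call(ConsumerRecord<String, String> record) {
                return record.value();
            }
        });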

Answer 1 (score: 1)

Build an uber jar that contains all dependencies. Below is a pom.xml for Spark 2.2.0; if you use a different version, change the spark.version property accordingly.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.venk.exercise</groupId>
    <artifactId>test_exercise</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <properties>
        <spark.version>2.2.0</spark.version>
        <spark.kafka.version>2.2.0</spark.kafka.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>0.10.1.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
            </plugin>
        </plugins>

    </build>
</project>

After changing the pom.xml, run the command mvn clean compile assembly:single, then submit the job with the following command:

bin/spark-submit  --class edu.hw.test.SparkStreamingKafkaConsumer jar/test_exercise-0.0.1-SNAPSHOT-jar-with-dependencies.jar <application-arguments>
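
(Note: the class edu.hw.test.SparkStreamingKafkaConsumer and the jar path above are the answerer's own example values; for the code in the question the main class would be start.JavaDirectKafkaWordCount, as shown in the stack trace.)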