NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>

Asked: 2017-03-27 15:35:31

Tags: spark-streaming kafka-consumer-api

I am trying the Spark Streaming example program (DirectKafkaWordCount). After compiling it successfully with sbt, I submit it with the following command:


spark-submit --class org.apache.spark.examples.streaming.DirectKafkaWordCount --jars /home/hadoop/jars/kafka_2.11-0.10.2.0.jar,/home/hadoop/jars/spark-streaming_2.11-2.1.0.jar,/home/hadoop/jars/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/home/hadoop/jars/kafka-clients-0.10.0.0.jar,/home/hadoop/jars/metrics-core-2.2.0.jar --master local[2] ../streamkafkaprog_2.11-1.0.jar localhost:9092 MyTopic
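
For reference, the driver is essentially the stock DirectKafkaWordCount example that ships with Spark; a rough sketch of it follows (my copy lives in DitectKafkaProg.scala, as the stack trace below shows, but the logic should match):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object DirectKafkaWordCount {
      def main(args: Array[String]): Unit = {
        val Array(brokers, topics) = args  // e.g. "localhost:9092" and "MyTopic"
        val sparkConf = new SparkConf().setAppName("DirectKafkaWordCount")
        val ssc = new StreamingContext(sparkConf, Seconds(2))

        // Direct (receiver-less) stream over the Kafka 0.8 consumer API
        val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
        val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics.split(",").toSet)

        // Classic word count over the message values
        val words = messages.map(_._2).flatMap(_.split(" "))
        words.map(x => (x, 1L)).reduceByKey(_ + _).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }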

But when I run it, I get:


17/03/28 02:15:35 INFO NettyBlockTransferService: Server created on 127.0.0.1:57453
17/03/28 02:15:35 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/03/28 02:15:35 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 57453, None)
17/03/28 02:15:35 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:57453 with 413.9 MB RAM, BlockManagerId(driver, 127.0.0.1, 57453, None)
17/03/28 02:15:35 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 57453, None)
17/03/28 02:15:35 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 127.0.0.1, 57453, None)
17/03/28 02:15:36 INFO VerifiableProperties: Verifying properties
17/03/28 02:15:36 INFO VerifiableProperties: Property group.id is overridden to
17/03/28 02:15:36 INFO VerifiableProperties: Property zookeeper.connect is overridden to
17/03/28 02:15:36 INFO SimpleConsumer: Reconnect due to error:
java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;Ljava/nio/ByteBuffer;)V
        at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41)
        at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44)
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:112)
        at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85)
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
        at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:133)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:132)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:365)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:361)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
        at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:361)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:132)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:119)
        at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
        at org.apache.spark.examples.streaming.DirectKafkaWordCount$.main(DitectKafkaProg.scala:60)
        at org.apache.spark.examples.streaming.DirectKafkaWordCount.main(DitectKafkaProg.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;Ljava/nio/ByteBuffer;)V
        at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41)
        at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44)
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:112)
        at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98)
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
        at kafka.consumer.SimpleConsumer.send(SimpleConsumer.scala:111)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:133)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$getPartitionMetadata$1.apply(KafkaCluster.scala:132)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:365)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:361)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
        at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:361)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:132)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:119)
        at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
        at org.apache.spark.examples.streaming.DirectKafkaWordCount$.main(DitectKafkaProg.scala:60)
        at org.apache.spark.examples.streaming.DirectKafkaWordCount.main(DitectKafkaProg.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/03/28 02:15:36 INFO SparkContext: Invoking stop() from shutdown hook
17/03/28 02:15:36 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
17/03/28 02:15:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/28 02:15:36 INFO MemoryStore: MemoryStore cleared
17/03/28 02:15:36 INFO BlockManager: BlockManager stopped
17/03/28 02:15:36 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/28 02:15:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/28 02:15:36 INFO SparkContext: Successfully stopped SparkContext
17/03/28 02:15:36 INFO ShutdownHookManager: Shutdown hook called
17/03/28 02:15:36 INFO ShutdownHookManager: Deleting directory /tmp/spark-5f636ca8-a2f8-4185-a2d0-a31757d2e721

I am using Spark 2.1.0, kafka_2.11-0.10.0.0, spark-streaming-kafka-0-8_2.11:2.1.0, and Scala 2.11.7.
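
For completeness, a build.sbt matching this setup would look roughly like the sketch below (the actual build file is not reproduced here, so treat the exact contents as an assumption; the artifact name and versions come from the submit command and the version list above):

    name := "streamkafkaprog"      // produces streamkafkaprog_2.11-1.0.jar
    version := "1.0"
    scalaVersion := "2.11.7"

    libraryDependencies ++= Seq(
      // "provided" is an assumption: spark-submit supplies Spark itself at runtime
      "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
      // Kafka 0.8 direct-stream integration used by DirectKafkaWordCount
      "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.1.0"
    )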

Please help me. Thanks.

0 Answers:

No answers