Consuming Kafka messages with Spark

Asked: 2016-05-02 15:35:25

Tags: apache-spark apache-kafka

I am using the HDP 2.3 sandbox and consuming Kafka messages by running a spark-submit job. I am producing some messages to Kafka as follows:

kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic webevent

kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic test --new-producer < myfile.txt

Now I need to consume those messages from a Spark job, like this:

./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar 192.168.255.150:2181 webevent 10

where 2181 is the ZooKeeper port. I get the error below (please guide me on how to consume these messages from Kafka):

16/05/02 15:21:30 INFO SparkContext: Running Spark version 1.3.1
16/05/02 15:21:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/02 15:21:31 INFO SecurityManager: Changing view acls to: root
16/05/02 15:21:31 INFO SecurityManager: Changing modify acls to: root
16/05/02 15:21:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/05/02 15:21:31 INFO Slf4jLogger: Slf4jLogger started
16/05/02 15:21:31 INFO Remoting: Starting remoting
16/05/02 15:21:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@sandbox.hortonworks.com:53950]
16/05/02 15:21:32 INFO Utils: Successfully started service 'sparkDriver' on port 53950.
16/05/02 15:21:32 INFO SparkEnv: Registering MapOutputTracker
16/05/02 15:21:32 INFO SparkEnv: Registering BlockManagerMaster
16/05/02 15:21:32 INFO DiskBlockManager: Created local directory at /tmp/spark-c70b08b9-41a3-42c8-9d83-bc4258e299c6/blockmgr-c2d86de6-34a7-497c-8018-d3437a100e87
16/05/02 15:21:32 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
16/05/02 15:21:32 INFO HttpFileServer: HTTP File server directory is /tmp/spark-a8f7ade9-292c-42c4-9e54-43b3b3495b0c/httpd-65d36d04-1e2a-4e69-8d20-295465100070
16/05/02 15:21:32 INFO HttpServer: Starting HTTP Server
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SocketConnector@0.0.0.0:37014
16/05/02 15:21:32 INFO Utils: Successfully started service 'HTTP file server' on port 37014.
16/05/02 15:21:32 INFO SparkEnv: Registering OutputCommitCoordinator
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/05/02 15:21:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/05/02 15:21:32 INFO SparkUI: Started SparkUI at http://sandbox.hortonworks.com:4040
16/05/02 15:21:33 INFO SparkContext: Added JAR file:/usr/hdp/2.3.0.0-2130/spark/lib/spark-examples-1.4.1-hadoop2.4.0.jar at http://192.168.255.150:37014/jars/spark-examples-1.4.1-hadoop2.4.0.jar with timestamp 1462202493866
16/05/02 15:21:34 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@192.168.255.150:7077/user/Master...
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160502152134-0000
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor added: app-20160502152134-0000/0 on worker-20160502150437-sandbox.hortonworks.com-36920 (sandbox.hortonworks.com:36920) with 1 cores
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160502152134-0000/0 on hostPort sandbox.hortonworks.com:36920 with 1 cores, 512.0 MB RAM
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now RUNNING
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now LOADING
16/05/02 15:21:34 INFO NettyBlockTransferService: Server created on 43440
16/05/02 15:21:34 INFO BlockManagerMaster: Trying to register BlockManager
16/05/02 15:21:34 INFO BlockManagerMasterActor: Registering block manager sandbox.hortonworks.com:43440 with 265.4 MB RAM, BlockManagerId(<driver>, sandbox.hortonworks.com, 43440)
16/05/02 15:21:34 INFO BlockManagerMaster: Registered BlockManager
16/05/02 15:21:35 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/05/02 15:21:35 INFO VerifiableProperties: Verifying properties
16/05/02 15:21:35 INFO VerifiableProperties: Property group.id is overridden to
16/05/02 15:21:35 INFO VerifiableProperties: Property zookeeper.connect is overridden to
16/05/02 15:21:35 INFO SimpleConsumer: Reconnect due to socket error: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
Error: application failed with exception
org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
        at scala.util.Either.fold(Either.scala:97)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:415)
        at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)
        at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
        at org.apache.spark.examples.streaming.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:71)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
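
For reference, the failing call is KafkaUtils.createDirectStream at line 71 of JavaDirectKafkaWordCount. Based on the streaming examples that ship with Spark 1.3/1.4, the relevant code looks roughly like the sketch below (my reconstruction, not the exact source). Note that the first program argument ends up in metadata.broker.list, so the direct stream expects a Kafka broker host:port, not a ZooKeeper address:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class DirectKafkaSketch {
    public static void main(String[] args) throws Exception {
        String brokers = args[0];  // Kafka broker list, e.g. "sandbox.hortonworks.com:6667" -- NOT ZooKeeper's 2181
        String topics = args[1];   // e.g. "webevent"

        SparkConf sparkConf = new SparkConf().setAppName("JavaDirectKafkaWordCount");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(2));

        Set<String> topicsSet = new HashSet<String>(Arrays.asList(topics.split(",")));
        Map<String, String> kafkaParams = new HashMap<String, String>();
        // createDirectStream connects to the Kafka brokers directly using this list;
        // pointing it at a ZooKeeper address would explain the EOFException above.
        kafkaParams.put("metadata.broker.list", brokers);

        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParams, topicsSet);

        messages.print();
        jssc.start();
        jssc.awaitTermination();
    }
}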

When I use this instead:

./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar  192.168.255.150:6667 webevent 10

where 6667 is the Kafka broker port (the one the messages are produced to), I get this error:

16/05/02 15:27:26 INFO SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException
Error: application failed with exception
org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
        at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
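
It may also be worth checking that the broker at 6667 is reachable and knows about the topic at all. The sketch below is a hypothetical probe using the Kafka 0.8 SimpleConsumer API (the same SimpleConsumer the log mentions); the class name, client id, and timeouts are my own placeholders:

import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class BrokerProbe {
    public static void main(String[] args) {
        // Hypothetical probe: ask the broker directly for metadata about "webevent".
        SimpleConsumer consumer =
                new SimpleConsumer("sandbox.hortonworks.com", 6667, 10000, 64 * 1024, "probe");
        try {
            TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList("webevent"));
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata metadata : response.topicsMetadata()) {
                System.out.println("topic=" + metadata.topic()
                        + " partitions=" + metadata.partitionsMetadata().size()
                        + " errorCode=" + metadata.errorCode());
            }
        } finally {
            consumer.close();
        }
    }
}

If this probe fails with the same ClosedChannelException, the problem is connectivity or the broker's advertised host/port rather than anything in the Spark job itself.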

I don't know whether this helps, but I also tried:

./bin/spark-submit --class consumer.kafka.client.Consumer --master spark://192.168.255.150:7077 --executor-memory 1G lib/kafka-spark-consumer-1.0.6.jar 10

0 Answers