Running a Kafka stream in Jupyter using a JAR artifact

Posted: 2018-11-18 14:27:30

Tags: python apache-spark pyspark apache-kafka spark-streaming

I have a simple Python script that uses pyspark to stream messages from Kafka, and I'm running it from Jupyter.

I'm getting an error saying Spark Streaming's Kafka libraries not found in class path (more details below). I already applied the solution suggested by @tshilidzi-mudau in a previous post (and confirmed it against the docs) to avoid this problem. What can I do to fix the error?

Following the suggestion in the error message, I downloaded the JAR of the artifact, stored it in $SPARK_HOME/jars, and referenced it in the code.

Here is the code:

from __future__ import print_function  # must precede all other imports
import os
import sys

from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

if __name__ == "__main__":

    os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars spark-streaming-kafka-0-10-assembly_2.10-2.2.2.jar pyspark-shell'  # the trailing "pyspark-shell" token is essential

    #conf = SparkConf().setAppName("Kafka-Spark").setMaster("spark://127.0.0.1:7077")
    conf = SparkConf().setAppName("Kafka-Spark")
    #sc = SparkContext(appName="KafkaSpark")

    try:
        sc.stop()  # stop any SparkContext left over from an earlier cell
    except Exception:
        pass

    sc = SparkContext(conf=conf)
    ssc = StreamingContext(sc, 1)  # 1-second batch interval
    topics = {'spark-kafka': 1}  # topic name -> number of receiver threads
    # Note: createStream expects the ZooKeeper quorum (typically host:2181),
    # not the Kafka broker, as its second argument.
    kafkaStream = KafkaUtils.createStream(ssc, 'localhost:9092', "name", topics)  # tried with localhost:2181 too

    print("kafkastream=",kafkaStream)
    sc.stop()
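
Note that the traceback further down actually comes from a variant of this cell that uses the direct (receiver-less) API rather than createStream. Reconstructed as a self-contained sketch, assuming the same jar bootstrap as above (the broker placeholder is kept from the original traceback), it would look roughly like this:

import os
from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# Same jar bootstrap as above; must be set before the SparkContext starts.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars spark-streaming-kafka-0-10-assembly_2.10-2.2.2.jar pyspark-shell'

sc = SparkContext(conf=SparkConf().setAppName("Kafka-Spark"))
ssc = StreamingContext(sc, 1)
broker = "<my_broker_ip>"  # placeholder kept from the original traceback
# Direct API: reads from the broker directly, no ZooKeeper-based receiver.
directKafkaStream = KafkaUtils.createDirectStream(
    ssc, ["test1"], {"metadata.broker.list": broker})
directKafkaStream.pprint()
ssc.start()
ssc.awaitTermination()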

And here is the error:

  Spark Streaming's Kafka libraries not found in class path. Try one of the following.

  1. Include the Kafka library and its dependencies with in the
     spark-submit command as

     $ bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8:2.2.2 ...

  2. Download the JAR of the artifact from Maven Central http://search.maven.org/,
     Group Id = org.apache.spark, Artifact Id = spark-streaming-kafka-0-8-assembly, Version = 2.2.2.
     Then, include the jar in the spark-submit command as

     $ bin/spark-submit --jars <spark-streaming-kafka-0-8-assembly.jar> ...

TypeError                                 Traceback (most recent call last)
<ipython-input-9-34de7dbdfc7c> in <module>()
     13 ssc = StreamingContext(sc,1)
     14 broker = "<my_broker_ip>"
---> 15 directKafkaStream = KafkaUtils.createDirectStream(ssc, ["test1"], {"metadata.broker.list": broker})
     16 directKafkaStream.pprint()
     17 ssc.start()

/opt/spark/python/pyspark/streaming/kafka.pyc in createDirectStream(ssc, topics, kafkaParams, fromOffsets, keyDecoder, valueDecoder, messageHandler)
    120             return messageHandler(m)
    121 
--> 122         helper = KafkaUtils._get_helper(ssc._sc)
    123 
    124         jfromOffsets = dict([(k._jTopicAndPartition(helper),

/opt/spark/python/pyspark/streaming/kafka.pyc in _get_helper(sc)
    193     def _get_helper(sc):
    194         try:
--> 195             return sc._jvm.org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper()
    196         except TypeError as e:
    197             if str(e) == "'JavaPackage' object is not callable":

TypeError: 'JavaPackage' object is not callable
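
For reference, the first remedy in the error message (--packages) can also be driven from a notebook, as long as PYSPARK_SUBMIT_ARGS is set before the very first SparkContext of the kernel is created. A minimal sketch; the _2.11 Scala suffix in the Maven coordinate is an assumption about the local Spark 2.2.2 build and may need adjusting:

import os

# Assumption: Spark 2.2.2 built against Scala 2.11 (the default download).
# This must run before any SparkContext is created in this kernel, because
# the JVM classpath is fixed once the gateway starts.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.2 '
    'pyspark-shell'
)

from pyspark import SparkConf, SparkContext
sc = SparkContext(conf=SparkConf().setAppName("Kafka-Spark"))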

0 Answers