Pyspark Structured Streaming locally with Kafka in Jupyter

Date: 2018-10-31 05:27:46

Tags: apache-spark pyspark apache-kafka jupyter-notebook

After looking at other answers, I still can't figure this out.

I am able to send and receive messages from the notebook using KafkaProducer and KafkaConsumer.

    import json
    from kafka import KafkaProducer, KafkaConsumer  # kafka-python

    producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'],
                             value_serializer=lambda m: json.dumps(m).encode('ascii'))
    consumer = KafkaConsumer('hr', bootstrap_servers=['127.0.0.1:9092'], group_id='abc')
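The producer's `value_serializer` and a matching consumer-side deserializer can be sanity-checked as plain functions, without a running broker. A minimal sketch using only the standard library (the payload shown is a hypothetical example):

```python
import json

# The producer encodes each message as ASCII JSON bytes; a matching
# consumer-side value_deserializer reverses it. Shown standalone here so
# the round trip can be verified before touching Kafka.
value_serializer = lambda m: json.dumps(m).encode('ascii')
value_deserializer = lambda b: json.loads(b.decode('ascii'))

msg = {"dept": "hr", "action": "hire"}   # hypothetical payload
wire = value_serializer(msg)             # what would land on the 'hr' topic
print(wire)                              # b'{"dept": "hr", "action": "hire"}'
print(value_deserializer(wire) == msg)   # True
```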

I tried to connect to the stream through a Spark context and a Spark session.

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils

    sc = SparkContext("local[*]", "stream")
    ssc = StreamingContext(sc, 1)

which gives me this error:

    Spark Streaming's Kafka libraries not found in class path. Try one
    of the following.

    1. Include the Kafka library and its dependencies with in the
       spark-submit command as

       $ bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8:2.3.2 ...

It seems I need to add the JAR to my spark-submit command:

    !/usr/local/bin/spark-submit   --master local[*]  /usr/local/Cellar/apache-spark/2.3.0/libexec/jars/spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar pyspark-shell

which returns:

    Error: No main class set in JAR; please specify one with --class
    Run with --help for usage help or --verbose for debug output

What class do I use? How do I get Pyspark to connect to the consumer?

1 Answer:

Answer 0 (score: 0)

The command you have is trying to run spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar itself, and to find pyspark-shell inside it as a Java main class.

As the first error says, you are missing --packages after spark-submit, meaning you would do

    spark-submit --packages ... someApp.jar com.example.YourClass
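Inside a notebook there is no spark-submit command line to edit, so one common workaround is to put the --packages flag into the PYSPARK_SUBMIT_ARGS environment variable before pyspark is first imported. A minimal sketch; the exact Maven coordinate below is an assumption matching the asker's Spark 2.3.x / Scala 2.11 setup:

```python
import os

# Must be set *before* any pyspark import, and must end with "pyspark-shell"
# so the JVM gateway launches as a shell rather than as an application.
# The coordinate below is an assumption for Spark 2.3.x built against Scala 2.11.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.2 "
    "pyspark-shell"
)

# With that in place, the original notebook code proceeds as before:
# from pyspark import SparkContext
# from pyspark.streaming import StreamingContext
# from pyspark.streaming.kafka import KafkaUtils
```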

For example, if you are just working locally in Jupyter, you may want to try Kafka-Python rather than PySpark... less overhead, and no Java dependencies.
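If you do go the kafka-python route, the consumer only needs the inverse of the producer's `value_serializer`. A hedged sketch; the consumer construction is left commented out because it blocks waiting on a broker at 127.0.0.1:9092:

```python
import json

def decode(raw: bytes) -> dict:
    """Inverse of the producer's json.dumps(...).encode('ascii')."""
    return json.loads(raw.decode('ascii'))

# With a broker running, the kafka-python consumer loop would look like
# this (commented out since it needs the broker to be reachable):
#
# from kafka import KafkaConsumer
# consumer = KafkaConsumer('hr',
#                          bootstrap_servers=['127.0.0.1:9092'],
#                          group_id='abc',
#                          value_deserializer=decode)
# for record in consumer:
#     print(record.topic, record.offset, record.value)

print(decode(b'{"dept": "hr"}'))   # {'dept': 'hr'}
```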