Streaming from Kafka directly in PySpark

Date: 2015-11-15 19:44:32

Tags: apache-spark apache-kafka pyspark spark-streaming

Goal

My goal is to get a simple Spark Streaming example working that uses the direct approach to interface with Kafka, but I can't get past a specific error.

The ideal result is two open console windows: in one I type sentences, and the other shows a "live" word count of all sentences typed so far.

Console 1

the cat likes the bacon

my cat ate the bacon

Console 2

Time: ..

[("the", 2), ("cat", 1), ("likes", 1), ("bacon", 1)]

Time: ..

[("the", 3), ("cat", 2), ("likes", 1), ("bacon", 2), ("my", 1), ("ate", 1)]


Steps taken

Download and unpack

kafka_2.10-0.8.2.0
spark-1.5.2-bin-hadoop2.6

Start the ZooKeeper and Kafka servers in separate screen sessions.

screen -S zk
bin/zookeeper-server-start.sh config/zookeeper.properties

Detach the screen with "Ctrl-a" "d".

screen -S kafka
bin/kafka-server-start.sh config/server.properties

Detach again with "Ctrl-a" "d".
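
Kafka 0.8 brokers auto-create topics by default, but if that is disabled in server.properties, the test topic can be created up front (a sketch using the kafka-topics.sh tool bundled with this Kafka release; adjust the ZooKeeper address if yours differs):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test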

Start a Kafka producer

Use a separate console window and type words into it to simulate a stream.

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Start PySpark

Use the spark-streaming-kafka package.

bin/pyspark --packages org.apache.spark:spark-streaming-kafka_2.10:1.5.2

Run a simple word count

Based on the example in the docs.

from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

ssc = StreamingContext(sc, 2)
topic = "test"
brokers = "localhost:9092"
kvs = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers})
lines = kvs.map(lambda x: x[1])
counts = lines.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a+b)
counts.pprint()
ssc.start()
ssc.awaitTermination()


Error

Typing words into the Kafka producer console produces results once, but then the following error appears a single time and no further results are produced (although the "Time" sections continue to be printed).

Time: 2015-11-15 18:39:52
-------------------------------------------

15/11/15 18:42:57 ERROR PythonRDD: Error while sending iterator
java.net.SocketTimeoutException: Accept timed out
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
        at java.net.ServerSocket.implAccept(ServerSocket.java:530)
        at java.net.ServerSocket.accept(ServerSocket.java:498)
        at org.apache.spark.api.python.PythonRDD$$anon$2.run(PythonRDD.scala:645)
Traceback (most recent call last):
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/streaming/util.py", line 62, in call
    r = self.func(t, *rdds)
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/streaming/dstream.py", line 171, in takeAndPrint
    taken = rdd.take(num + 1)
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/rdd.py", line 1299, in take
    res = self.context.runJob(self, takeUpToNumLeft, p)
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/context.py", line 917, in runJob
    return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/rdd.py", line 142, in _load_from_socket
    for item in serializer.load_stream(rf):
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/serializers.py", line 139, in load_stream
    yield self._read_with_length(stream)
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/serializers.py", line 156, in _read_with_length
    length = read_int(stream)
  File "/vagrant/install_files/spark-1.5.2-bin-hadoop2.6/python/pyspark/serializers.py", line 543, in read_int
    length = stream.read(4)
  File "/usr/lib/python2.7/socket.py", line 380, in read
    data = self._sock.recv(left)
error: [Errno 104] Connection reset by peer

Any help or suggestions are much appreciated.

2 Answers:

Answer 0 (score: 0)

Try running it with spark-submit:

spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.5.1 your_python_file_name.py

You can set other parameters as well (--deploy-mode, etc.).
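
For reference, a minimal standalone script along these lines might look like the sketch below (the file name kafka_wordcount.py is only an example; unlike the pyspark shell, a script run through spark-submit has to create its own SparkContext):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# Build the contexts ourselves -- spark-submit does not provide an `sc` variable.
sc = SparkContext(appName="KafkaDirectWordCount")
ssc = StreamingContext(sc, 2)

# Same direct-approach word count as in the question.
kvs = KafkaUtils.createDirectStream(ssc, ["test"], {"metadata.broker.list": "localhost:9092"})
counts = kvs.map(lambda x: x[1]) \
    .flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.pprint()

ssc.start()
ssc.awaitTermination()

It can then be submitted with the package version that matches the Spark build used in the question:

spark-submit --packages org.apache.spark:spark-streaming-kafka_2.10:1.5.2 kafka_wordcount.py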

Answer 1 (score: 0)

After creating the DStream, we should use foreachRDD to iterate over its RDDs.

from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

ssc = StreamingContext(sc, 2)
topic = "test"
brokers = "localhost:9092"
kvs = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers})

# The handler must be defined before it is passed to foreachRDD.
def handler(message):
    records = message.collect()
    for record in records:
        pass  # data processing, whatever you want

kvs.foreachRDD(handler)

# Start the streaming computation, as in the question's example.
ssc.start()
ssc.awaitTermination()
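
Note that message.collect() pulls every record of the batch back to the driver, which is fine for a small test stream like this one but will not scale to larger topics; for heavier workloads, keep the processing in RDD transformations rather than collecting to the driver.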