Spark socket text stream is empty

Asked: 2019-10-31 08:49:04

Tags: python apache-spark spark-structured-streaming socketserver dstream

I am following Spark's streaming guide. Instead of using nc -lk 9999, I created my own simple Python server, shown below. As you can see from the code, it randomly generates letters from a to z.

import socketserver
import time
from random import choice

class AlphaTCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        print('AlphaTCPHandler')
        alphabets = list('abcdefghijklmnopqrstuvwxyz')

        try:
            while True:
                s = f'{choice(alphabets)}'
                b = bytes(s, 'utf-8')
                self.request.sendall(b)
                time.sleep(1)
        except BrokenPipeError:
            print('broken pipe detected')

if __name__ == '__main__':
    host = '0.0.0.0'
    port = 301  # note: ports below 1024 usually require elevated privileges on Unix-like systems

    server = socketserver.TCPServer((host, port), AlphaTCPHandler)
    print(f'server starting {host}:{port}')
    server.serve_forever()
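
One caveat worth noting: socketserver.TCPServer is synchronous, and because handle() above loops forever, only one client is ever served; a second connection (for example Spark's receiver, while the test client below is still attached) would wait indefinitely. A threaded variant, sketched minimally here reusing the AlphaTCPHandler above, serves both at once:

import socketserver

# Each connection is handled in its own thread, so the test client
# and Spark's receiver can be fed at the same time.
class ThreadedAlphaServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True  # avoid "Address already in use" on quick restarts

if __name__ == '__main__':
    host = '0.0.0.0'
    port = 301

    server = ThreadedAlphaServer((host, port), AlphaTCPHandler)
    print(f'server starting {host}:{port}')
    server.serve_forever()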

I tested the server with the following client code.

import socket
import sys
import time

HOST, PORT = 'localhost', 301
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

try:
    sock.connect((HOST, PORT))
    print('socket opened')

    while True:    
        received = str(sock.recv(1024), 'utf-8')
        if len(received.strip()) > 0:
            print(f'{received}')
        time.sleep(1)
finally:
    sock.close()
    print('socket closed')
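
This client, however, prints whatever raw chunks arrive, whether or not they end in a newline, so it cannot detect a framing problem. A line-oriented client, a minimal sketch using socket.makefile and closer to how Spark consumes the stream, blocks until a complete line arrives:

import socket

HOST, PORT = 'localhost', 301

# makefile('r') wraps the socket in a buffered text reader; each
# loop iteration blocks until a full '\n'-terminated line is received.
with socket.create_connection((HOST, PORT)) as sock:
    with sock.makefile('r', encoding='utf-8') as stream:
        for line in stream:
            print(line.rstrip('\n'))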

However, my Spark streaming code does not seem to receive any data, and it prints nothing. The code is below.

from pyspark.streaming import StreamingContext
from time import sleep

ssc = StreamingContext(sc, 1)  # `sc` is the SparkContext provided by the pyspark shell / notebook
ssc.checkpoint('/tmp')

lines = ssc.socketTextStream('0.0.0.0', 301)
words = lines.flatMap(lambda s: s.split(' '))
pairs = words.map(lambda word: (word, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)

counts.pprint()

ssc.start()
sleep(5)
ssc.stop(stopSparkContext=False, stopGraceFully=True)
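
The snippet above assumes `sc` already exists, as it does in the pyspark shell or a notebook. For a standalone script, a minimal sketch of the missing setup (the app name is illustrative) would be:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Use at least two local cores: the socket receiver permanently
# occupies one, and batch processing needs another.
sc = SparkContext('local[2]', 'SocketWordCount')
ssc = StreamingContext(sc, 1)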

All I see in the output is the repeating empty-batch pattern below.

-------------------------------------------
Time: 2019-10-31 08:38:22
-------------------------------------------

-------------------------------------------
Time: 2019-10-31 08:38:23
-------------------------------------------

-------------------------------------------
Time: 2019-10-31 08:38:24
-------------------------------------------

Any ideas on what I am doing wrong?

1 Answer:

Answer 0 (score: 1)

Your streaming code is working fine. It is your server that is feeding it the wrong thing: there is no line separator after each letter, so Spark sees a single ever-growing line and just keeps waiting for that line to end, which never happens. (socketTextStream interprets the incoming bytes as UTF-8 text delimited by newlines.) Modify your server to send a newline after each letter:

while True:
    s = f'{choice(alphabets)}\n'  # <-- newline inserted here
    b = bytes(s, 'utf-8')
    self.request.sendall(b)
    time.sleep(1)
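
As an aside, since the streaming job splits each line on spaces and reduces by key, you can give those stages more to do by sending several space-separated letters per line; a hypothetical variation of the same loop:

while True:
    # e.g. sends "a k q\n" -- three space-separated "words" per line
    s = ' '.join(choice(alphabets) for _ in range(3)) + '\n'
    b = bytes(s, 'utf-8')
    self.request.sendall(b)
    time.sleep(1)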

Result:

-------------------------------------------
Time: 2019-10-31 12:09:26
-------------------------------------------
('t', 1)

-------------------------------------------
Time: 2019-10-31 12:09:27
-------------------------------------------
('t', 1)

-------------------------------------------
Time: 2019-10-31 12:09:28
-------------------------------------------
('x', 1)