Understanding foreach with PySpark Structured Streaming

Asked: 2019-02-22 18:30:13

Tags: python apache-spark pyspark

I am trying to figure out how to apply foreach to the word count example in PySpark, because in my use case I need to be able to write to more than one sink (see the foreachBatch sketch at the end). However, the foreach writer class never seems to be executed, and no files are ever created.

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split
import os
import uuid
import tempfile


spark = SparkSession.builder.appName('Struct-stream').getOrCreate()

# Read a stream of lines from a local socket source.
lines = spark \
        .readStream \
        .format('socket') \
        .option('host', 'localhost') \
        .option('port', 9999) \
        .load()

# Split each line into words and count occurrences per word.
words = lines.select(
   explode(
       split(lines.value, " ")
   ).alias("word")
)

wordCounts = words.groupBy("word").count()

# Scratch directories where the writer drops marker files
open_dir = tempfile.mkdtemp()
process_dir = tempfile.mkdtemp()

class Writer:

    # Marker directories captured as class attributes so they are
    # pickled along with the writer and shipped to the executors.
    open_dir = open_dir
    process_dir = process_dir

    def open(self, partition_id, epoch_id):
        # Called once per partition per epoch; drop a marker file so
        # the call is observable. Returning True tells Spark to go on
        # and call process() for each row in the partition.
        with open(os.path.join(self.open_dir, str(uuid.uuid4())), 'w') as f:
            f.write("%s\n" % str({'partition_id': partition_id, 'epoch': epoch_id}))
        return True

    def process(self, row):
        # Called once per row; drop a marker file per processed row.
        with open(os.path.join(self.process_dir, str(uuid.uuid4())), 'w') as f:
            f.write("%s\n" % str({'value': 'text'}))


query = wordCounts \
        .writeStream \
        .foreach(Writer()) \
        .outputMode('complete') \
        .format('console') \
        .start()


query.awaitTermination()

I am trying to understand why no files get written, and why the Writer class never actually seems to be executed.
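One detail of the DataStreamWriter API that may explain this: both foreach() and format() set the query's sink, so the .format('console') call chained after .foreach(Writer()) in the snippet above presumably replaces the foreach sink with the console sink before start() runs. A minimal sketch with foreach as the only sink, reusing the Writer class and the wordCounts frame from above:

# foreach() is now the only sink; no .format('console') follows
# it to displace the foreach writer.
query = wordCounts \
        .writeStream \
        .foreach(Writer()) \
        .outputMode('complete') \
        .start()

query.awaitTermination()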

0 Answers:

No answers yet.
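On the multi-sink requirement mentioned in the question: as of Spark 2.4, the foreachBatch sink hands each micro-batch to a callback as an ordinary DataFrame, which the regular batch writer can then send to several sinks in turn. A minimal sketch reusing the wordCounts frame from the question; the function name write_batch and the two output paths are placeholders:

def write_batch(batch_df, epoch_id):
    # batch_df is a plain (non-streaming) DataFrame here, so the
    # batch writer API works; persist it so each sink does not
    # recompute the batch from scratch.
    batch_df.persist()
    batch_df.write.mode('overwrite').parquet('/tmp/wordcounts/parquet')  # placeholder path
    batch_df.write.mode('overwrite').json('/tmp/wordcounts/json')        # placeholder path
    batch_df.unpersist()

query = wordCounts \
        .writeStream \
        .foreachBatch(write_batch) \
        .outputMode('complete') \
        .start()

query.awaitTermination()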