DStream saves empty files in Spark Streaming using PySpark

Asked: 2018-09-21 12:04:14

Tags: pyspark spark-streaming

Please bear with this question. I am trying to save streaming data into HDFS using PySpark. The output files are being created on HDFS successfully, but they are empty. Below is the simple code I am using.

Please help me resolve this issue.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Create a local StreamingContext with two working threads and a batch interval of 2 seconds
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 2)

# Create a DStream that will connect to hostname:port, like localhost:9999
linesDStream = ssc.socketTextStream("localhost", 9999)

# Split each line into words
wordsDStream = linesDStream.flatMap(lambda line: line.split(" "))

# Count each word in each batch
pairsDStream = wordsDStream.map(lambda word: (word, 1))
wordCountsDStream = pairsDStream.reduceByKey(lambda x, y: x + y)

# Save the content of each batch into HDFS and print it to the console
wordCountsDStream.saveAsTextFiles("/home/cloudera/stream_Output/file")
wordCountsDStream.pprint()

# Start the computation
ssc.start()
# Wait for the computation to terminate
ssc.awaitTermination()

Using the Cloudera QuickStart VM with Spark version 1.6.2.
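
For reference, the socketTextStream receiver above only sees data while something is writing lines to localhost:9999 (the Spark Streaming guide drives this example with nc -lk 9999), and saveAsTextFiles writes output for every batch interval, including batches that received no input. Below is a minimal sketch of a test sender; the host, port, and sample text are assumptions made purely for illustration, not part of the original setup.

# Sketch of a test input source for the example above: a tiny TCP server on
# localhost:9999 that the socketTextStream receiver can connect to. Host,
# port, and the sample line are illustrative assumptions; running
# "nc -lk 9999" in a terminal and typing lines works just as well.
import socket
import time

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("localhost", 9999))
server.listen(1)

conn, _ = server.accept()  # Spark's receiver connects here after ssc.start()
try:
    while True:
        # Send one line of sample text roughly every second
        conn.sendall(b"hello world hello spark\n")
        time.sleep(1)
finally:
    conn.close()
    server.close()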

0 Answers:

No answers yet.