Spark Streaming cumulative word count

Time: 2014-07-16 03:40:45

Tags: scala distributed apache-spark spark-streaming

This is a Spark Streaming program written in Scala. It counts the words arriving from a socket every 1 second. The result is a per-batch word count, e.g. the word count from second 0 to 1, then the word count from second 1 to 2. But I wonder whether there is some way to change this program so that the word count accumulates, i.e. the word count from second 0 up to now.

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

val sparkConf = new SparkConf().setAppName("NetworkWordCount")
val ssc = new StreamingContext(sparkConf, Seconds(1))

// Create a socket stream on target ip:port and count the
// words in input stream of \n delimited text (eg. generated by 'nc')
// Note: a non-replicated storage level is fine only when running locally.
// Replication is necessary in a distributed scenario for fault tolerance.
val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
val words = lines.flatMap(_.split(" "))
val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()

2 Answers:

Answer 0 (score: 9):

You can use a StateDStream. There is an example of a stateful word count among Spark's examples:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StatefulNetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: StatefulNetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    // StreamingExamples is a logging helper shipped with Spark's examples package
    StreamingExamples.setStreamingLogLevels()

    val updateFunc = (values: Seq[Int], state: Option[Int]) => {
      val currentCount = values.foldLeft(0)(_ + _)

      val previousCount = state.getOrElse(0)

      Some(currentCount + previousCount)
    }

    val sparkConf = new SparkConf().setAppName("StatefulNetworkWordCount")
    // Create the context with a 1 second batch size
    val ssc = new StreamingContext(sparkConf, Seconds(1))
    ssc.checkpoint(".")

    // Create a NetworkInputDStream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    val lines = ssc.socketTextStream(args(0), args(1).toInt)
    val words = lines.flatMap(_.split(" "))
    val wordDstream = words.map(x => (x, 1))

    // Update the cumulative count using updateStateByKey
    // This will give a DStream made of state (which is the cumulative count of the words)
    val stateDstream = wordDstream.updateStateByKey[Int](updateFunc)
    stateDstream.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

The way it works is that for each batch you get a Seq[T], which is then used to update an Option[T] that acts like an accumulator. The reason it is an Option is that on the first batch it will be None and will stay that way unless it gets updated. In this example the count is an Int; if you are dealing with a lot of data you may want a Long or a BigInt instead.
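
As a minimal sketch of that suggestion (a hypothetical variant, not from the original example; updateFuncLong and stateDstreamLong are illustrative names), the same update function can be written with a Long state:

val updateFuncLong = (values: Seq[Int], state: Option[Long]) => {
  // Sum this batch's counts and add them to the running total (0L on the first batch)
  Some(values.sum + state.getOrElse(0L))
}

// Plugged in the same way as before, with Long as the state type:
val stateDstreamLong = wordDstream.updateStateByKey[Long](updateFuncLong)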

Answer 1 (score: 0):

I have a very simple answer, and it is just a few lines of code. You will find it in most Spark books. Remember that I used localhost and port 9999.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="PythonStreamingNetworkWordCount")
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream("localhost", 9999)
counts = lines.flatMap(lambda line: line.split(" "))\
                     .map(lambda word: (word, 1))\
                     .reduceByKey(lambda a, b: a+b)
counts.pprint()
ssc.start()
ssc.awaitTermination()

To stop it, you can simply use

ssc.stop()
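
Note that, unless told otherwise, stop() also shuts down the underlying SparkContext; in PySpark you can call ssc.stop(stopSparkContext=False) if you want to keep the SparkContext alive for further work.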

This is very basic code, but it helps build a basic understanding of Spark Streaming, and of DStreams more specifically.

To feed input to localhost, type the following in a terminal (a Mac terminal, in my case):

nc -l 9999

It will then listen to everything you type into that session, and the words will be counted.
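
For example, typing hello world hello into the nc session should make the streaming job print something along these lines (the timestamp header below is illustrative):

-------------------------------------------
Time: 2014-07-16 03:40:46
-------------------------------------------
('hello', 2)
('world', 1)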

Hope this helps.