Cumulative count using Spark Structured Streaming

Asked: 2019-10-15 14:14:15

Tags: pyspark spark-streaming spark-structured-streaming

I want to compute a cumulative count of the values in a DataFrame column over a moving window covering the past 1 hour. I can get the expected output with a (non-streaming) PySpark window function using rangeBetween, but I want to process the data in real time, so I'm trying Spark Structured Streaming so that the desired output is produced whenever a new record/transaction enters the system.

The data looks like this:

time,col
2019-04-27 01:00:00,A
2019-04-27 00:01:00,A
2019-04-27 00:05:00,B
2019-04-27 01:01:00,A
2019-04-27 00:08:00,B
2019-04-27 00:03:00,A
2019-04-27 03:03:00,A

Using PySpark (non-streaming):

from pyspark.sql.window import Window
from pyspark.sql.functions import unix_timestamp, count

df = sqlContext.read.format("csv") \
    .options(header='true', inferschema='false', delimiter=',') \
    .load(r'/datalocation')

# Convert the timestamp to epoch seconds so rangeBetween can use a numeric range
df = df.withColumn("numddate", unix_timestamp(df.time, "yyyy-MM-dd HH:mm:ss"))

# For each value of "col", count the rows in the preceding hour (excluding the current row)
w1 = Window.partitionBy("col").orderBy("numddate").rangeBetween(-3600, -1)
df = df.withColumn("B_cumulative_count", count("col").over(w1))

+-------------------+---+----------+------------------+
|               time|col|  numddate|B_cumulative_count|
+-------------------+---+----------+------------------+
|2019-04-27 00:05:00|  B|1556348700|                 0|
|2019-04-27 00:08:00|  B|1556348880|                 1|
|2019-04-27 00:01:00|  A|1556348460|                 0|
|2019-04-27 00:03:00|  A|1556348580|                 1|
|2019-04-27 01:00:00|  A|1556352000|                 2|
|2019-04-27 01:01:00|  A|1556352060|                 3|
|2019-04-27 03:03:00|  A|1556359380|                 0|
+-------------------+---+----------+------------------+

(This is the output I need, and the code above produces it.)

With Structured Streaming, this is what I'm trying:

from pyspark.sql.types import StructType, StructField, TimestampType, StringType
from pyspark.sql.functions import window

userSchema = StructType([
    StructField("time", TimestampType()),
    StructField("col", StringType())
])

# Read the CSV files as a streaming source
lines2 = spark \
    .readStream \
    .format('csv') \
    .schema(userSchema) \
    .csv("/datalocation")

# Count rows per 1-hour tumbling window and per value of "col"
windowedCounts = lines2.groupBy(
    window(lines2.time, "1 hour"),
    lines2.col
).count()

windowedCounts.writeStream \
    .format("memory") \
    .outputMode("complete") \
    .queryName("test2") \
    .option("truncate", "false") \
    .start()

spark.table("test2").show(truncate=False)

Streaming output:
+------------------------------------------+---+-----+
|window                                    |col|count|
+------------------------------------------+---+-----+
|[2019-04-27 03:00:00, 2019-04-27 04:00:00]|A  |1    |
|[2019-04-27 00:00:00, 2019-04-27 01:00:00]|A  |2    |
|[2019-04-27 01:00:00, 2019-04-27 02:00:00]|A  |2    |
|[2019-04-27 00:00:00, 2019-04-27 01:00:00]|B  |2    |
+------------------------------------------+---+-----+

How can I replicate this using Spark Structured Streaming?

1 Answer:

Answer 0 (score: 0)

You can group by a sliding window and count within each group.

Word count example with Structured Streaming:

import java.sql.Timestamp
import org.apache.spark.sql.functions._
import spark.implicits._

// Read lines from a socket source, attaching an ingestion timestamp to each line
val lines = spark.readStream
  .format("socket")
  .option("host", host)
  .option("port", port)
  .option("includeTimestamp", true)
  .load()

// Split the lines into words, retaining timestamps
val words = lines.as[(String, Timestamp)].flatMap(line =>
  line._1.split(" ").map(word => (word, line._2))
).toDF("word", "timestamp")

val windowDuration = "10 seconds"
val slideDuration = "5 seconds"

// Group the data by window and word and compute the count of each group
val windowedCounts = words.groupBy(
  window($"timestamp", windowDuration, slideDuration), $"word"
).count().orderBy("window")

// Start running the query that prints the windowed word counts to the console
val query = windowedCounts.writeStream
  .outputMode("complete")
  .format("console")
  .option("truncate", "false")
  .start()

query.awaitTermination()
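
Since the question is tagged pyspark, here is a rough PySpark sketch of the same sliding-window count. This is only an illustrative equivalent of the Scala example above, not the original answer's code; the socket source with "localhost"/9999 and the console sink are assumptions chosen for the sketch.

from pyspark.sql.functions import window, split, explode, col

# Read lines from a socket source, attaching an ingestion timestamp (assumed host/port)
lines = spark.readStream \
    .format("socket") \
    .option("host", "localhost") \
    .option("port", 9999) \
    .option("includeTimestamp", "true") \
    .load()

# Split each line into words, keeping the timestamp on every word
words = lines.select(
    explode(split(col("value"), " ")).alias("word"),
    col("timestamp")
)

# Count words per 10-second window sliding every 5 seconds
windowedCounts = words.groupBy(
    window(col("timestamp"), "10 seconds", "5 seconds"),
    col("word")
).count().orderBy("window")

# Print the windowed counts to the console as the stream progresses
query = windowedCounts.writeStream \
    .outputMode("complete") \
    .format("console") \
    .option("truncate", "false") \
    .start()

query.awaitTermination()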