PySpark: change a column's value before using groupby on that column

Asked: 2017-02-27 18:43:16

Tags: apache-spark pyspark apache-spark-sql spark-streaming pyspark-sql

I have this JSON data, and I want to group the 'timestamp' column by hour while summing the values in columns 'a' and 'b'.

{"a":1 , "b":1, "timestamp":"2017-01-26T01:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T01:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T02:14:55.719214Z"}
{"a":1 , "b":1,"timestamp":"2017-01-26T03:14:55.719214Z"}

This is the final output I want:

{"a":2 , "b":2, "timestamp":"2017-01-26T01:00:00"}
{"a":1 , "b":1,"timestamp":"2017-01-26T02:00:00"}
{"a":1 , "b":1,"timestamp":"2017-01-26T03:00:00"}

This is what I have written so far:

from pyspark.sql import functions as f

df = spark.read.json(inputfile)
df2 = df.groupby("timestamp").agg(f.sum(df["a"]), f.sum(df["b"]))

But how should I change the 'timestamp' column before using the groupby function? Thanks in advance!

2 Answers:

Answer 0 (score: 1)

I think this is one way to achieve it:

df2 = (df.withColumn("timestamp", df["timestamp"].substr(1, 13))
         .groupby("timestamp")
         .agg(f.sum("a"), f.sum("b")))

Is there a better solution to get the timestamp in the desired format?
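One possible alternative, offered here only as a sketch and not part of the original answer, is to format the timestamp directly down to the hour with `f.date_format`, which also produces the `...T01:00:00`-style labels from the question. It assumes `df` is the DataFrame read from the JSON input above and that the timestamp strings cast cleanly to a timestamp type.

    from pyspark.sql import functions as f

    # Sketch (assumed setup): truncate each timestamp to its hour as a formatted
    # string, then group and sum on that string.
    hourly = (df.withColumn("hour",
                            f.date_format(f.col("timestamp").cast("timestamp"),
                                          "yyyy-MM-dd'T'HH:00:00"))
                .groupby("hour")
                .agg(f.sum("a").alias("a"), f.sum("b").alias("b")))
    hourly.show(truncate=False)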

Answer 1 (score: 1)

from pyspark.sql import functions as f

df = spark.read.load(path='file:///home/zht/PycharmProjects/test/disk_file', format='json')
# parse the string timestamps into a proper timestamp column
df = df.withColumn('ts', f.to_utc_timestamp(df['timestamp'], 'EST'))
# bucket the rows into fixed one-hour windows
win = f.window(df['ts'], windowDuration='1 hour')
df = df.groupBy(win).agg(f.sum(df['a']).alias('sumA'), f.sum(df['b']).alias('sumB'))
# the window column is a struct; keep its start as the hour label
res = df.select(df['window']['start'].alias('start_time'), df['sumA'], df['sumB'])
res.show(truncate=False)

# output:
+---------------------+----+----+                                               
|start_time           |sumA|sumB|
+---------------------+----+----+
|2017-01-26 15:00:00.0|1   |1   |
|2017-01-26 16:00:00.0|1   |1   |
|2017-01-26 14:00:00.0|2   |2   |
+---------------------+----+----+

f.window is more flexible.
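For example, the same call also supports sliding windows. The following is a minimal sketch, not from the original answer, assuming the DataFrame `df` with columns 'ts', 'a', and 'b' built above; passing `slideDuration` makes the one-hour buckets overlap and advance every 30 minutes.

    from pyspark.sql import functions as f

    # Sketch: overlapping one-hour windows that start every 30 minutes
    sliding = f.window(df['ts'], windowDuration='1 hour', slideDuration='30 minutes')
    res = (df.groupBy(sliding)
             .agg(f.sum('a').alias('sumA'), f.sum('b').alias('sumB'))
             .select(f.col('window')['start'].alias('start_time'), 'sumA', 'sumB'))
    res.show(truncate=False)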