Spark: how to reduce a DataFrame so a date-typed column has one row per day

Date: 2018-11-13 18:41:10

Tags: apache-spark apache-zeppelin

I am working with a DataFrame that looks like this:

-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 |    2  |
| 2014-12-01 03:54:00 |    3  |
| 2014-12-01 04:54:00 |    4  |
| 2014-12-01 05:54:00 |    5  |
| 2014-12-02 02:54:00 |    6  |
| 2014-12-02 02:54:00 |    7  |
| 2014-12-03 02:54:00 |    8  |
-------------------------------

The number of samples per day is fairly random.

I want to keep only one sample per day, for example:

-------------------------------
| time                | value |
-------------------------------
| 2014-12-01 02:54:00 |    2  |
| 2014-12-02 02:54:00 |    6  |
| 2014-12-03 02:54:00 |    8  |
-------------------------------

I don't care which sample I get for each day, but I want to make sure I get exactly one, so that no day is repeated in the "time" column.

2 Answers:

Answer 0 (score: 2):

You can first create a date column, then call dropDuplicates on that date column. The example below uses pyspark; if you use scala or java, the syntax should be similar:

import pyspark.sql.functions as f

# Derive a date-only column, keep one row per date, then drop the helper column
df.withColumn('date', f.to_date('time', 'yyyy-MM-dd HH:mm:ss')) \
  .dropDuplicates(['date']).drop('date').show()
+-------------------+-----+
|               time|value|
+-------------------+-----+
|2014-12-02 02:54:00|    6|
|2014-12-03 02:54:00|    8|
|2014-12-01 02:54:00|    2|
+-------------------+-----+
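The dropDuplicates approach above can be sketched without Spark: derive a date key from each timestamp and keep only the first row seen for each key. This is a minimal pure-Python illustration of the same idea; the sample data and the `dedupe_by_day` helper are made up for the sketch, and which row survives here follows insertion order, whereas Spark's `dropDuplicates` makes no ordering guarantee:

```python
from datetime import datetime

rows = [
    ("2014-12-01 02:54:00", 2), ("2014-12-01 03:54:00", 3),
    ("2014-12-01 04:54:00", 4), ("2014-12-01 05:54:00", 5),
    ("2014-12-02 02:54:00", 6), ("2014-12-02 02:54:00", 7),
    ("2014-12-03 02:54:00", 8),
]

def dedupe_by_day(rows):
    # Mirrors withColumn('date', ...) + dropDuplicates(['date']) + drop('date'):
    # derive a date key, keep the first row per key, return rows without the key.
    seen = {}
    for time, value in rows:
        day = datetime.strptime(time, "%Y-%m-%d %H:%M:%S").date()
        seen.setdefault(day, (time, value))
    return list(seen.values())

print(dedupe_by_day(rows))  # one row per calendar day
```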

Answer 1 (score: 1):

You can use a window function: generate a row_number partitioned by the date part of the timestamp, then filter on row_number = 1.

Check this out:

// Chain the casts so they are not discarded, then register a temp view
val df = Seq(("2014-12-01 02:54:00","2"),("2014-12-01 03:54:00","3"),("2014-12-01 04:54:00","4"),("2014-12-01 05:54:00","5"),("2014-12-02 02:54:00","6"),("2014-12-02 02:54:00","7"),("2014-12-03 02:54:00","8"))
  .toDF("time","value")
  .withColumn("time", 'time.cast("timestamp"))
  .withColumn("value", 'value.cast("int"))
df.createOrReplaceTempView("timetab")
spark.sql(
  """ with order_ts as ( select time, value, row_number() over(partition by date_format(time,"yyyyMMdd") order by value) as rn from timetab)
    select time, value from order_ts where rn = 1
  """).show(false)

Output:

+-------------------+-----+
|time               |value|
+-------------------+-----+
|2014-12-02 02:54:00|6    |
|2014-12-01 02:54:00|2    |
|2014-12-03 02:54:00|8    |
+-------------------+-----+
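The row_number-over-partition logic above can likewise be sketched in plain Python: group the rows by day, order each group by value, and keep the rank-1 row. A hedged illustration only; the `first_per_day` helper and the sorted output order are choices of this sketch, not part of the answer (the Spark SQL output above is unordered):

```python
from collections import defaultdict

rows = [
    ("2014-12-01 02:54:00", 2), ("2014-12-01 03:54:00", 3),
    ("2014-12-01 04:54:00", 4), ("2014-12-01 05:54:00", 5),
    ("2014-12-02 02:54:00", 6), ("2014-12-02 02:54:00", 7),
    ("2014-12-03 02:54:00", 8),
]

def first_per_day(rows):
    # partition by the yyyy-MM-dd prefix of the timestamp
    groups = defaultdict(list)
    for time, value in rows:
        groups[time[:10]].append((time, value))
    # within each partition, "order by value" and take rn = 1
    return [min(groups[day], key=lambda r: r[1]) for day in sorted(groups)]

print(first_per_day(rows))
```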