Pyspark - selecting distinct values in a column after groupBy and orderBy

Posted: 2018-06-09 08:27:58

Tags: python pyspark pyspark-sql

So my table looks like this:

+-------------------+-------+----------+------------+
|            trip_id|line_id|  ef_ar_ts|     station|
+-------------------+-------+----------+------------+
|80:06____:17401:000|  17401|         0|Schaffhausen|
|80:06____:17402:000|  17402|1505278458|Schaffhausen|
|80:06____:17403:000|  17403|         0|Schaffhausen|
|80:06____:17406:000|  17406|1505282110|Schaffhausen|
|80:06____:17409:000|  17409|         0|Schaffhausen|
|80:06____:17410:000|  17410|1505285757|Schaffhausen|
|80:06____:17411:000|  17411|         0|Schaffhausen|
|80:06____:17416:000|  17416|1505292890|Schaffhausen|
|80:06____:17417:000|  17417|         0|Schaffhausen|
|80:06____:17418:000|  17418|1505296501|Schaffhausen|
|80:06____:17419:000|  17419|         0|Schaffhausen|
|80:06____:17420:000|  17420|1505300253|Schaffhausen|
|80:06____:17421:000|  17421|         0|Schaffhausen|
|80:06____:17422:000|  17422|1505303814|Schaffhausen|
|80:06____:17423:000|  17423|         0|Schaffhausen|
|80:06____:17425:000|  17425|         0|Schaffhausen|
|80:06____:17426:000|  17426|1505307355|Schaffhausen|
|80:06____:17427:000|  17427|         0|Schaffhausen|
|80:06____:17428:000|  17428|1505310983|Schaffhausen|
|80:06____:17429:000|  17429|         0|Schaffhausen|
+-------------------+-------+----------+------------+

This is a dataset of trains, and what I want to do is:

Group the trains by line_id so that all trips belonging to the same line are together; sort each group by ef_ar_ts; then take the stations in that order, as one list per line_id. That way my stations are ordered and I can reconstruct the whole line.
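Outside Spark, the intended transformation can be sketched in plain Python — the rows below are hypothetical, just mirroring the table's columns:

```python
from collections import defaultdict

# Hypothetical rows: (trip_id, line_id, ef_ar_ts, station)
rows = [
    ("t1", 17401, 120, "B"),
    ("t2", 17401, 100, "A"),
    ("t3", 17402, 50, "C"),
]

# Group by line_id, sort each group by ef_ar_ts, keep stations in that order.
groups = defaultdict(list)
for trip_id, line_id, ts, station in rows:
    groups[line_id].append((ts, station))

lines = {lid: [st for _, st in sorted(pairs)] for lid, pairs in groups.items()}
print(lines)  # {17401: ['A', 'B'], 17402: ['C']}
```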

What I have tried so far:

from pyspark.sql import functions
from pyspark.sql.functions import udf

@functions.udf
def keepline(df):
    """Keep lines split."""
    firstline = data1.first().trip_id
    dftemp = df.where(data1.trip_id == firstline)
    data1 = data1.fillna({'ef_ar_ts': 0})
    dftemp = dftemp.orderBy('ef_ar_ts')
    return mylist

data2 = data1.select('*').groupby(data1.line_id).agg(udfmyfunc)

Any help? Thanks in advance!

1 answer:

Answer 0: (score: 1)

We can group by line_id, collect the ef_ar_ts and station columns as lists, and use a UDF to sort the two collections together. Hope this helps.

Since your dataframe has the same value in every station row, I have added two dummy station rows for reference:

+-------------------+-------+----------+-------------+
|            trip_id|line_id|  ef_ar_ts|      station|
+-------------------+-------+----------+-------------+
|80:06____:17401:000|  17401|         0| Schaffhausen|
|80:06____:17402:000|  17402|1505278458| Schaffhausen|
|80:06____:17403:000|  17403|         0| Schaffhausen|
......................................................
......................................................
|80:06____:17427:000|  17427|         0| Schaffhausen|
|80:06____:17428:000|  17428|1505310983| Schaffhausen|
|80:06____:17429:000|  17429|         0| Schaffhausen|
|80:06____:17429:000|  17401|1505278478|dummystation1|
|80:06____:17429:000|  17429|1505307355|dummystation2|
+-------------------+-------+----------+-------------+

## group and collect for each line id ##
from pyspark.sql import functions as F

df1 = df.groupby('line_id').agg(
    F.collect_list('ef_ar_ts').alias('ef_ar_ts'),
    F.collect_list('station').alias('station'))
df1.show(truncate=False)

+-------+---------------+-----------------------------+
|line_id|ef_ar_ts       |station                      |
+-------+---------------+-----------------------------+
|17419  |[0]            |[Schaffhausen]               |
|17420  |[1505300253]   |[Schaffhausen]               |
|17403  |[0]            |[Schaffhausen]               |
|17406  |[1505282110]   |[Schaffhausen]               |
|17428  |[1505310983]   |[Schaffhausen]               |
|17421  |[0]            |[Schaffhausen]               |
|17427  |[0]            |[Schaffhausen]               |
|17411  |[0]            |[Schaffhausen]               |
|17416  |[1505292890]   |[Schaffhausen]               |
|17429  |[0, 1505307355]|[Schaffhausen, dummystation2]|
|17401  |[0, 1505278478]|[Schaffhausen, dummystation1]|
|17423  |[0]            |[Schaffhausen]               |
|17417  |[0]            |[Schaffhausen]               |
|17402  |[1505278458]   |[Schaffhausen]               |
|17418  |[1505296501]   |[Schaffhausen]               |
|17425  |[0]            |[Schaffhausen]               |
|17409  |[0]            |[Schaffhausen]               |
|17422  |[1505303814]   |[Schaffhausen]               |
|17426  |[1505307355]   |[Schaffhausen]               |
|17410  |[1505285757]   |[Schaffhausen]               |
+-------+---------------+-----------------------------+
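As a sanity check outside Spark, this collect step can be mirrored in plain Python — a rough sketch with made-up rows, not the pyspark API itself:

```python
from collections import defaultdict

# Made-up (line_id, ef_ar_ts, station) rows mirroring the dataframe above.
rows = [
    (17429, 0, "Schaffhausen"),
    (17429, 1505307355, "dummystation2"),
    (17401, 0, "Schaffhausen"),
    (17401, 1505278478, "dummystation1"),
]

# Equivalent of groupby('line_id') + collect_list on both columns.
collected = defaultdict(lambda: ([], []))
for line_id, ts, station in rows:
    collected[line_id][0].append(ts)
    collected[line_id][1].append(station)

print(collected[17429])  # ([0, 1505307355], ['Schaffhausen', 'dummystation2'])
```

One caveat worth knowing: Spark does not guarantee that two separate collect_list calls preserve matching row order across the lists, so collecting a single struct of both columns is often the safer variant.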

## a UDF to merge both collections and sort them ##
from operator import itemgetter
udf1 = F.udf(lambda x, y: [st[1] for st in sorted(zip(x, y), key=itemgetter(0))])
df1.select('line_id',udf1('ef_ar_ts','station').alias('stations')).show(truncate=False)

+-------+-----------------------------+
|line_id|stations                     |
+-------+-----------------------------+
|17419  |[Schaffhausen]               |
|17420  |[Schaffhausen]               |
|17403  |[Schaffhausen]               |
|17406  |[Schaffhausen]               |
|17428  |[Schaffhausen]               |
|17421  |[Schaffhausen]               |
|17427  |[Schaffhausen]               |
|17411  |[Schaffhausen]               |
|17416  |[Schaffhausen]               |
|17429  |[Schaffhausen, dummystation2]|
|17401  |[Schaffhausen, dummystation1]|
|17423  |[Schaffhausen]               |
|17417  |[Schaffhausen]               |
|17402  |[Schaffhausen]               |
|17418  |[Schaffhausen]               |
|17425  |[Schaffhausen]               |
|17409  |[Schaffhausen]               |
|17422  |[Schaffhausen]               |
|17426  |[Schaffhausen]               |
|17410  |[Schaffhausen]               |
+-------+-----------------------------+
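The core of that lambda is just Python's sorted-zip idiom, which can be checked on its own (the sample values here are hypothetical):

```python
from operator import itemgetter

ts = [1505278478, 0]
stations = ["dummystation1", "Schaffhausen"]

# Pair timestamps with stations, sort by timestamp, keep only the station names.
ordered = [st for _, st in sorted(zip(ts, stations), key=itemgetter(0))]
print(ordered)  # ['Schaffhausen', 'dummystation1']
```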