I have a DataFrame with the columns ["id", "timestamp", "x", "y"]:
+---+----------+---+---+
| id| timestamp| x| y|
+---+----------+---+---+
| 0|1443489380|100| 1|
| 0|1443489390|200| 0|
| 0|1443489400|300| 0|
| 1|1443489410|400| 1|
| 1|1443489550|100| 1|
| 2|1443489560|600| 0|
| 2|1443489570|200| 0|
| 2|1443489580|700| 1|
+---+----------+---+---+
I defined the following Window:
from pyspark.sql import Window
w = Window.partitionBy("id").orderBy("timestamp")
I want to extract only the first and the last row within each window w. How can I do that?
Answer 0 (score: 0)
If you want the first and last values on the same row, one way is to use pyspark.sql.functions.first():
from pyspark.sql import Window
from pyspark.sql.functions import col, first

w1 = Window.partitionBy("id").orderBy("timestamp")              # ascending: first() gives the earliest row
w2 = Window.partitionBy("id").orderBy(col("timestamp").desc())  # descending: first() gives the latest row

df.select(
    "id",
    *([first(c).over(w1).alias("first_" + c) for c in df.columns if c != "id"] +
      [first(c).over(w2).alias("last_" + c) for c in df.columns if c != "id"])
)\
    .distinct()\
    .show()
#+---+---------------+-------+-------+--------------+------+------+
#| id|first_timestamp|first_x|first_y|last_timestamp|last_x|last_y|
#+---+---------------+-------+-------+--------------+------+------+
#| 0| 1443489380| 100| 1| 1443489400| 300| 0|
#| 1| 1443489410| 400| 1| 1443489550| 100| 1|
#| 2| 1443489560| 600| 0| 1443489580| 700| 1|
#+---+---------------+-------+-------+--------------+------+------+
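An alternative sketch, not from the original answer: on Spark 2.1+ you can use a single ascending window with an explicit unbounded frame, so that first() and last() both see the entire partition on every row:

from pyspark.sql import Window
from pyspark.sql.functions import first, last

# Hypothetical variant: one window whose frame spans the whole partition,
# so first() returns the earliest row and last() the latest row of each id.
w = (Window.partitionBy("id")
     .orderBy("timestamp")
     .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))

df.select(
    "id",
    *([first(c).over(w).alias("first_" + c) for c in df.columns if c != "id"] +
      [last(c).over(w).alias("last_" + c) for c in df.columns if c != "id"])
).distinct().show()

This avoids the second, descending window and should produce the same result as above.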