Pyspark: Renaming dictionary keys in a DataFrame column

Date: 2016-05-25 17:20:56

Tags: python dictionary apache-spark dataframe pyspark

After some processing I end up with a DataFrame that has a dictionary inside one of its columns. Now I want to change the keys of that dictionary: from "_1" to "product_id" and from "_2" to "timestamp".

Here is the processing code:

# Group (product_id, timestamp_gmt) pairs per user, then rename the
# auto-generated top-level columns.
df1 = data.select("user_id", "product_id", "timestamp_gmt").rdd\
    .map(lambda x: (x[0], (x[1], x[2]))).groupByKey()\
    .map(lambda x: (x[0], list(x[1]))).toDF()\
    .withColumnRenamed('_1', 'user_id')\
    .withColumnRenamed('_2', 'purchase_info')

The result looks like this:
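Because toDF() was called on bare tuples, Spark auto-named the nested fields _1 and _2; the two withColumnRenamed calls only fix the top-level columns. A sketch of the inferred schema (types are illustrative, assuming integer ids and string timestamps):

df1.printSchema()
# root
#  |-- user_id: long (nullable = true)
#  |-- purchase_info: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- _1: long (nullable = true)
#  |    |    |-- _2: string (nullable = true)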

1 Answer:

Answer 0 (score: 3)

Spark 2.0+

Use collect_list with struct:

from pyspark.sql.functions import collect_list, struct, col

df = sc.parallelize([
    (1, 100, "2012-01-01 00:00:00"),
    (1, 200, "2016-04-04 00:00:01")
]).toDF(["user_id","product_id","timestamp_gmt"])

# struct() keeps the original column names as the struct field names,
# so there is nothing to rename afterwards.
pi = (collect_list(struct(col("product_id"), col("timestamp_gmt")))
    .alias("purchase_info"))

df.groupBy("user_id").agg(pi)
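A quick usage sketch to confirm the field names (the result binding is introduced here for illustration):

result = df.groupBy("user_id").agg(pi)
result.printSchema()
# purchase_info is now array<struct<product_id, timestamp_gmt>>,
# so the nested fields are addressable by name with no renaming step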

Spark < 2.0

Use Rows:

(df
    .select("user_id", struct(col("product_id"), col("timestamp_gmt")))
    .rdd.groupByKey()
    .mapValues(list)  # materialize grouped values so toDF can infer the schema
    .toDF(["user_id", "purchase_info"]))

This is arguably more elegant, but it should have a similar effect to replacing the function you pass to map with:

lambda x: (x[0], Row(product_id=x[1], timestamp_gmt=x[2]))  # requires: from pyspark.sql import Row
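For completeness, a minimal sketch of the question's pipeline with that lambda swapped in (assuming the same df as above). The Row keyword arguments become the struct field names, so the renaming problem never arises:

from pyspark.sql import Row

df1 = (df
    .rdd.map(lambda x: (x[0], Row(product_id=x[1], timestamp_gmt=x[2])))
    .groupByKey()
    .map(lambda x: (x[0], list(x[1])))  # materialize the grouped iterable
    .toDF(["user_id", "purchase_info"]))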

On a side note, these are not dictionaries (MapType) but structs (StructType).
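The practical consequence is that struct fields are accessed by name rather than by dictionary lookup. A short usage sketch, assuming the result DataFrame from the Spark 2.0 example above:

from pyspark.sql.functions import explode

# Flatten purchase_info back into one row per purchase and pull the
# struct fields out by name.
(result
    .select("user_id", explode("purchase_info").alias("p"))
    .select("user_id", "p.product_id", "p.timestamp_gmt"))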