Transforming a dataframe: combining multiple rows into a single vector column, in order

Time: 2017-08-03 15:57:29

Tags: python apache-spark pyspark spark-dataframe

I am using Spark 2.1.1 with DataFrames. This is my input dataframe:

+----+---------+---------+-------+
| key|parameter|reference| subkey|
+----+---------+---------+-------+
|key1|       45|       10|subkey1|
|key1|       45|       20|subkey2|
|key2|       70|       40|subkey2|
|key2|       70|       30|subkey1|
+----+---------+---------+-------+

I need to transform the dataframe into the following:

result data (by pandas):
+-----+-----------+
|label|   features|
+-----+-----------+
|   45|[10.0,20.0]|
|   70|[30.0,40.0]|
+-----+-----------+

I can do the transformation with the help of pandas:

from pyspark.ml.linalg import Vectors

def convert_to_flat_by_pandas(df):
    # Convert the Spark DataFrame to pandas and group the rows manually.
    pandas_data_frame = df.toPandas()
    all_keys = pandas_data_frame['key'].unique()

    flat_values = []
    for key in all_keys:
        # Take all rows of one key and order them by subkey.
        key_rows = pandas_data_frame.loc[pandas_data_frame['key'] == key]
        key_rows = key_rows.sort_values(by=['subkey'])

        # All rows of a key share the same parameter value.
        parameter_values = key_rows['parameter']
        parameter_value = parameter_values.iloc[0]

        key_reference_value = [reference_value for reference_value in key_rows['reference']]

        flat_values.append((parameter_value, key_reference_value))

    # Build a Spark DataFrame with a dense vector column (assumes a SparkSession named `spark`).
    loaded_data = [(label, Vectors.dense(features)) for (label, features) in flat_values]
    spark_df = spark.createDataFrame(loaded_data, ["label", "features"])

    return spark_df
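
A minimal usage sketch for the function above (the SparkSession setup and the sample rows are assumptions added for illustration; they mirror the input table shown earlier):

from pyspark.sql import SparkSession

# Hypothetical driver code: build the sample input and run the pandas-based conversion.
spark = SparkSession.builder.appName("df_flat_sample").getOrCreate()
df = spark.createDataFrame(
    [("key1", 45, 10, "subkey1"),
     ("key1", 45, 20, "subkey2"),
     ("key2", 70, 40, "subkey2"),
     ("key2", 70, 30, "subkey1")],
    ["key", "parameter", "reference", "subkey"])

convert_to_flat_by_pandas(df).show()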

It seems I need to use groupBy, but I don't understand how to sort a group (several rows) and convert it into a single row.

Working sample source (pandas-based): https://github.com/constructor-igor/TechSugar/blob/master/pythonSamples/pysparkSamples/df_flat.py

With the help of the 2 answers, I got 2 possible solutions:

UPD1: Solution #1

from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import col, first

def convert_to_flat_by_sparkpy(df):
    # Collect the distinct subkeys; they become the pivoted column names.
    subkeys = df.select("subkey").dropDuplicates().collect()
    subkeys = [s[0] for s in subkeys]
    print('subkeys: ', subkeys)
    # Pivot to wide format (one column per subkey), then assemble those columns into a vector.
    assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")
    spark_df = assembler.transform(df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference"))))
    spark_df = spark_df.withColumnRenamed("parameter", "label")
    spark_df = spark_df.select("label", "features")
    return spark_df

UPD1: Solution #2

from pyspark.sql.functions import col, collect_list, first

def convert_to_flat_by_sparkpy_v2(df):
    # Order by subkey, then collect the references of each key into a list.
    spark_df = df.orderBy("subkey")
    spark_df = spark_df.groupBy("key").agg(first(col("parameter")).alias("label"),
                                           collect_list("reference").alias("features"))
    spark_df = spark_df.select("label", "features")
    return spark_df
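
A possible caveat with Solution #2: Spark does not guarantee that collect_list preserves the row order established by the earlier orderBy once the data is shuffled for the groupBy. Below is a minimal sketch of an order-robust variant (the function name convert_to_flat_by_sparkpy_v3 is hypothetical; like Solution #2, it produces an array column rather than an ML vector):

from pyspark.sql.functions import col, collect_list, first, sort_array, struct

def convert_to_flat_by_sparkpy_v3(df):
    # Collect (subkey, reference) pairs per key and sort the array;
    # sort_array orders the structs by their first field, i.e. by subkey.
    grouped = df.groupBy("key").agg(
        first(col("parameter")).alias("label"),
        sort_array(collect_list(struct("subkey", "reference"))).alias("pairs"))
    # Extracting a field from an array of structs yields an array of that field's values.
    return grouped.select("label", col("pairs.reference").alias("features"))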

2 Answers:

Answer 0 (score: 2)

You can use the groupBy and collect_list functions to get the output:

import org.apache.spark.sql.functions._

// Group by parameter and collect the references into a list, then rename the column.
val df1 = df.groupBy("parameter").agg(collect_list("reference").alias("features"))
val df2 = df1.withColumnRenamed("parameter", "label")

Output:

+---------+--------+
|parameter|features|
+---------+--------+
|       45|[10, 20]|
|       70|[40, 30]|
+---------+--------+

Hope this helps!
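
Since the question is tagged pyspark, a minimal PySpark sketch of the same approach might look like this (assuming the question's DataFrame is named df):

from pyspark.sql.functions import collect_list

# Group by parameter, collect the references into a list, and rename the column.
df1 = df.groupBy("parameter").agg(collect_list("reference").alias("features"))
df2 = df1.withColumnRenamed("parameter", "label")
df2.show()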

Answer 1 (score: 1)

For the limited sample data you provided, you can convert the dataframe to a wide format with the subkeys as column headers, and then use VectorAssembler to collect them as features:

from pyspark.sql.functions import first, col
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler().setInputCols(["subkey1", "subkey2"]).setOutputCol("features")

assembler.transform(
    df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference")))
).show()
+----+---------+-------+-------+-----------+
| key|parameter|subkey1|subkey2|   features|
+----+---------+-------+-------+-----------+
|key1|       45|     10|     20|[10.0,20.0]|
|key2|       70|     30|     40|[30.0,40.0]|
+----+---------+-------+-------+-----------+

Update for dynamic subkeys:

Suppose you have a dataframe like this:

df.show()
+----+---------+---------+-------+    
| key|parameter|reference| subkey|
+----+---------+---------+-------+
|key1|       45|       10|subkey1|
|key1|       45|       20|subkey2|
|key2|       70|       40|subkey2|
|key2|       70|       30|subkey1|
|key2|       70|       70|subkey3|
+----+---------+---------+-------+

First collect all the unique subkeys, then create the assembler with them:

subkeys = df.select("subkey").dropDuplicates().rdd.map(lambda r: r[0]).collect()
assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")

assembler.transform(    
    df.groupBy("key", "parameter").pivot("subkey").agg(first(col("reference"))).na.fill(0)
).show()
+----+---------+-------+-------+-------+----------------+
| key|parameter|subkey1|subkey2|subkey3|        features|
+----+---------+-------+-------+-------+----------------+
|key1|       45|     10|     20|      0| [20.0,10.0,0.0]|
|key2|       70|     30|     40|     70|[40.0,30.0,70.0]|
+----+---------+-------+-------+-------+----------------+
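
Note that the order of the vector elements follows the order in which the subkeys were collected, which is why key1's features read [20.0,10.0,0.0] above rather than [10.0,20.0,0.0]. A small sketch that sorts the subkeys first, so the assembler's input column order is deterministic (same df and imports as above assumed):

# Sort the distinct subkeys so the VectorAssembler input column order is stable.
subkeys = sorted(r[0] for r in df.select("subkey").dropDuplicates().collect())
assembler = VectorAssembler().setInputCols(subkeys).setOutputCol("features")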