Duplicating rows in a PySpark DataFrame based on the value in another column

Asked: 2017-01-05 16:11:09

Tags: dataframe duplicates pyspark

My dataframe looks like this:

ID    NumRecords
123   2
456   1
789   3

I want to create a new dataframe that concatenates the two columns and duplicates each row based on the value in NumRecords.

So the output should be:

ID_New
123-1
123-2
456-1
789-1
789-2
789-3

I was looking into the explode function, but based on the examples I've seen, it seems to only take a constant.

2 answers:

Answer 0 (score: 0)

I had a similar problem; this code will duplicate the rows based on the value in the NumRecords column:

from pyspark.sql import Row


def duplicate_function(row):
    data = []  # list of rows to return
    to_duplicate = int(row["NumRecords"])

    for i in range(to_duplicate):
        row_dict = row.asDict()  # convert the Spark Row object to a Python dictionary
        row_dict["SERIAL_NO"] = str(i + 1)  # 1-based serial number for this copy
        new_row = Row(**row_dict)  # create a new Spark Row from the dictionary
        data.append(new_row)  # add this Row to the list

    return data  # return the final list of duplicated rows


# create the final dataset based on the value in the NumRecords column;
# toDF() infers the schema here, since each row now carries an extra
# SERIAL_NO field that is not part of df_input.schema
df_flatmap = df_input.rdd.flatMap(duplicate_function).toDF()
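
For reference, a minimal end-to-end run on the sample data from the question. The df_input construction and the final concat_ws step are my additions, not part of the answer above, and an existing SparkSession named spark is assumed:

from pyspark.sql.functions import concat_ws

df_input = spark.createDataFrame(
    [(123, 2), (456, 1), (789, 3)],
    ["ID", "NumRecords"],
)

df_flatmap = df_input.rdd.flatMap(duplicate_function).toDF()

# concatenate ID and SERIAL_NO to get the ID_New format from the question
df_result = df_flatmap.withColumn("ID_New", concat_ws("-", "ID", "SERIAL_NO"))
df_result.select("ID_New").show()
# 123-1, 123-2, 456-1, 789-1, 789-2, 789-3 (row order may vary)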

Answer 1 (score: -1)

You can use a udf:

from pyspark.sql.functions import udf, explode, concat_ws
from pyspark.sql.types import ArrayType, StringType

# build an array ["1", "2", ..., NumRecords] for each row
range_ = udf(lambda x: [str(y) for y in range(1, x + 1)], ArrayType(StringType()))

df = (df
      .withColumn("records", range_("NumRecords"))  # array of record numbers
      .withColumn("record", explode("records"))     # one output row per array element
      .withColumn("ID_New", concat_ws("-", "ID", "record")))