Unpacking a list of tuples in a PySpark DataFrame

Asked: 2018-01-25 15:36:57

Tags: list pyspark tuples spark-dataframe

I want to unpack a list of tuples stored in a column of a PySpark DataFrame.

Say we have a column like [(blue, 0.5), (red, 0.1), (green, 0.7)]. I want to split it into two columns, the first containing [blue, red, green] and the second [0.5, 0.1, 0.7]:

+-----+-------------------------------------------+
|Topic|  Tokens                                   |
+-----+-------------------------------------------+
|    1|  ('blue', 0.5),('red', 0.1),('green', 0.7)|
|    2|  ('red', 0.9),('cyan', 0.5),('white', 0.4)|
+-----+-------------------------------------------+

It can be created with the following code:

df = sqlCtx.createDataFrame(
    [
        (1, [('blue', 0.5), ('red', 0.1), ('green', 0.7)]),
        (2, [('red', 0.9), ('cyan', 0.5), ('white', 0.4)])
    ],
    ('Topic', 'Tokens')
)
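
Note that each row's tuples are wrapped in a list; without the brackets Spark cannot infer a single array column. As a quick sanity check (a sketch; the same schema appears in the second answer below):

df.printSchema()
# root
#  |-- Topic: long (nullable = true)
#  |-- Tokens: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- _1: string (nullable = true)
#  |    |    |-- _2: double (nullable = true)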

And the output should look like this:

+-----+--------------------------+-----------------+
|Topic|  Tokens                  | Weights         |
+-----+--------------------------+-----------------+
|    1|  ['blue', 'red', 'green']| [0.5, 0.1, 0.7] |
|    2|  ['red', 'cyan', 'white']| [0.9, 0.5, 0.4] |
+-----+--------------------------+-----------------+

2 Answers:

Answer 0 (Score: 1)

You can achieve this with simple indexing inside a udf():

from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, StringType, FloatType

# create the dataframe
df = sqlCtx.createDataFrame(
    [
        (1, [('blue', 0.5),('red', 0.1),('green', 0.7)]),
        (2, [('red', 0.9),('cyan', 0.5),('white', 0.4)])
    ],
    ('Topic', 'Tokens')
)

def get_colors(l):
    # first element of each tuple: the color name
    return [x[0] for x in l]

def get_weights(l):
    # second element of each tuple: the weight
    return [x[1] for x in l]

# make udfs from the above functions - Note the return types
get_colors_udf = udf(get_colors, ArrayType(StringType()))
get_weights_udf = udf(get_weights, ArrayType(FloatType()))

# use withColumn and apply the udfs
df.withColumn('Weights', get_weights_udf(col('Tokens')))\
    .withColumn('Tokens', get_colors_udf(col('Tokens')))\
    .select(['Topic', 'Tokens', 'Weights'])\
    .show()

Output:

+-----+------------------+---------------+
|Topic|            Tokens|        Weights|
+-----+------------------+---------------+
|    1|[blue, red, green]|[0.5, 0.1, 0.7]|
|    2|[red, cyan, white]|[0.9, 0.5, 0.4]|
+-----+------------------+---------------+
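
Equivalently, the two helper functions can be defined inline as lambdas. A minimal sketch, behaviorally identical to the named functions above:

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType, FloatType

# same udfs as above, written as one-liners
get_colors_udf = udf(lambda l: [x[0] for x in l], ArrayType(StringType()))
get_weights_udf = udf(lambda l: [x[1] for x in l], ArrayType(FloatType()))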

Answer 1 (Score: 1)

If the schema of the DataFrame looks like this:

 root
  |-- Topic: long (nullable = true)
  |-- Tokens: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- _1: string (nullable = true)
  |    |    |-- _2: double (nullable = true)

then you can simply select:

from pyspark.sql.functions import col

df.select(
    col("Topic"),
    col("Tokens._1").alias("Tokens"), col("Tokens._2").alias("weights")
).show()
# +-----+------------------+---------------+       
# |Topic|            Tokens|        weights|
# +-----+------------------+---------------+
# |    1|[blue, red, green]|[0.5, 0.1, 0.7]|
# |    2|[red, cyan, white]|[0.9, 0.5, 0.4]|
# +-----+------------------+---------------+

And to generalize:

cols = [
    col("Tokens.{}".format(n)) for n in 
    df.schema["Tokens"].dataType.elementType.names]

df.select("Topic", *cols)
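
If you also want friendlier column names than the struct fields _1 and _2, you can alias each field as you select it. A minimal sketch, assuming the two-field struct above (the names list is illustrative):

# pair each struct field with a readable name (assumed order: token, weight)
new_names = ['Tokens', 'Weights']
cols = [
    col("Tokens.{}".format(field)).alias(name)
    for field, name in zip(
        df.schema["Tokens"].dataType.elementType.names, new_names)
]
df.select("Topic", *cols).show()

Since this stays in native Column expressions, it also avoids the Python serialization overhead of the udf approach in the first answer.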

See Querying Spark SQL DataFrame with complex types for reference.