PySpark: create new columns from an existing column that contains a list of values

Asked: 2019-08-22 08:30:14

Tags: python, pyspark

I have a DataFrame like this:

from pyspark.sql import SparkSession
from pyspark.sql import Row

spark = SparkSession.builder \
    .appName('DataFrame') \
    .master('local[*]') \
    .getOrCreate()

df = spark.createDataFrame([Row(a=1, b='', c=['0', '1'], d='foo'),
                            Row(a=2, b='', c=['0', '1'], d='bar'),
                            Row(a=3, b='', c=['0', '1'], d='foo')])

+---+---+------+---+
|  a|  b|     c|  d|
+---+---+------+---+
|  1|   |[0, 1]|foo|
|  2|   |[0, 1]|bar|
|  3|   |[0, 1]|foo|
+---+---+------+---+

I want to create a column "e" containing the first element of column "c" and a column "f" containing its second element, so it looks like this:

|a  |b  |c     |d  |e  |f  |
+---+---+------+---+---+---+
|1  |   |[0, 1]|foo|0  |1  |
|2  |   |[0, 1]|bar|0  |1  |
|3  |   |[0, 1]|foo|0  |1  |
+---+---+------+---+---+---+

1 Answer:

Answer 0 (score: 1)

df = spark.createDataFrame([Row(a=1, b='', c=['0', '1'], d='foo'),
                            Row(a=2, b='', c=['0', '1'], d='bar'),
                            Row(a=3, b='', c=['0', '1'], d='foo')])

# Index the array column by position to pull out its elements
df2 = df.withColumn('e', df['c'][0]).withColumn('f', df['c'][1])
df2.show()

+---+---+------+---+---+---+
|a  |b  |c     |d  |e  |f  |
+---+---+------+---+---+---+
|1  |   |[0, 1]|foo|0  |1  |
|2  |   |[0, 1]|bar|0  |1  |
|3  |   |[0, 1]|foo|0  |1  |
+---+---+------+---+---+---+