PySpark - How to use a row value from one column to access another column that has the same name as that value

Asked: 2018-01-24 22:40:02

Tags: apache-spark pyspark apache-spark-sql pyspark-sql apache-spark-1.6

I have a PySpark DataFrame:

+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|
+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1|
|  1|  2| 43|  8| 10| 20| 43| e1|
|  2|  3| 15|  0|  1| 23|  7| b1|
|  3|  4|  2|  6| 11|  5|  8| d1|
|  4|  5|  6|  7|  2|  8|  1| f1|
+---+---+---+---+---+---+---+---+

I ultimately want to create another column "out" whose value is based on the "ref" column. For example, in the first row the ref column holds b1, so in "out" I want to see that row's value of the "b1" column, i.e. 23. This is the expected output:

+---+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|out|
+---+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1| 23|
|  1|  2| 43|  8| 10| 20| 43| e1| 20|
|  2|  3| 15|  0|  1| 23|  7| b1| 15|
|  3|  4|  2|  6| 11|  5|  8| d1| 11|
|  4|  5|  6|  7|  2|  8|  1| f1|  1|
+---+---+---+---+---+---+---+---+---+

Please advise how to derive the "out" column. I am using Spark 1.6. Thanks.

2 Answers:

Answer 0 (score: 3):

Regardless of the version, you can convert to an RDD, map over it, and convert back to a DataFrame:

df = spark.createDataFrame(
    [(0, 1, 23, 4, 8, 9, 5, "b1"), (1, 2, 43, 8, 10, 20, 43, "e1")], 
    ("id", "a1", "b1", "c1", "d1", "e1", "f1", "ref")
)

df.rdd.map(lambda row: row + (row[row.ref], )).toDF(df.columns + ["out"])
+---+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|out|
+---+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1| 23|
|  1|  2| 43|  8| 10| 20| 43| e1| 20|
+---+---+---+---+---+---+---+---+---+
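
This works because a PySpark Row can be looked up by field name and also behaves like a tuple, so each record can be extended in place. A minimal sketch against the df above (the variable name first is just for illustration):

first = df.first()
first["ref"]                   # 'b1' - Rows support lookup by field name
first[first.ref]               # 23   - so the ref value can index the same Row
first + (first[first.ref], )   # Rows are tuples, so appending the value yields the new record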

You can also keep the schema:

from pyspark.sql.types import LongType, StructField

spark.createDataFrame(
    df.rdd.map(lambda row: row + (row[row.ref], )), 
    df.schema.add(StructField("out", LongType())))
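
A quick way to confirm that the extra field ends up with the intended type; a minimal sketch that builds the extended schema explicitly with StructType (the names out_schema and result are just for illustration):

from pyspark.sql.types import LongType, StructField, StructType

out_schema = StructType(df.schema.fields + [StructField("out", LongType())])
result = spark.createDataFrame(
    df.rdd.map(lambda row: row + (row[row.ref], )),
    out_schema
)
result.printSchema()  # the last field should read: out: long (nullable = true)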

Working with DataFrames only, you can compose complex Columns. In 1.6:

from pyspark.sql.functions import array, col, udf
from pyspark.sql.types import  LongType, MapType, StringType

data_cols = [x for x in df.columns if x not in {"id", "ref"}]

# Literal map from column name to index
name_to_index = udf(
    lambda: {x: i for i, x in enumerate(data_cols)},
    MapType(StringType(), LongType())
)()

# Array of data
data_array = array(*[col(c) for c in data_cols])
df.withColumn("out", data_array[name_to_index[col("ref")]])
+---+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|out|
+---+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1| 23|
|  1|  2| 43|  8| 10| 20| 43| e1| 20|
+---+---+---+---+---+---+---+---+---+
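
Note that the trailing () calls the zero-argument UDF immediately, so name_to_index is effectively a constant map column. For reference, the mapping it carries is the same as this plain Python dict (sketch):

name_to_index_plain = {x: i for i, x in enumerate(data_cols)}
# {'a1': 0, 'b1': 1, 'c1': 2, 'd1': 3, 'e1': 4, 'f1': 5}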

In 2.x you can skip the intermediate objects:

from pyspark.sql.functions import create_map, lit, col
from itertools import chain

# Map from column name to column value
name_to_value = create_map(*chain.from_iterable(
    (lit(c), col(c)) for c in data_cols
))

df.withColumn("out", name_to_value[col("ref")])
+---+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|out|
+---+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1| 23|
|  1|  2| 43|  8| 10| 20| 43| e1| 20|
+---+---+---+---+---+---+---+---+---+
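
The chain.from_iterable call simply interleaves column-name literals with the matching columns, so the create_map above is equivalent to spelling the pairs out by hand (sketch, assuming the same data_cols):

name_to_value_explicit = create_map(
    lit("a1"), col("a1"), lit("b1"), col("b1"), lit("c1"), col("c1"),
    lit("d1"), col("d1"), lit("e1"), col("e1"), lit("f1"), col("f1")
)
df.withColumn("out", name_to_value_explicit[col("ref")])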

Finally, you can use when:

from pyspark.sql.functions import col, lit, when
from functools import reduce

out = reduce(
    lambda acc, x: when(col("ref") == x, col(x)).otherwise(acc), 
    data_cols,
    lit(None)
)

df.withColumn("out", out)
+---+---+---+---+---+---+---+---+---+
| id| a1| b1| c1| d1| e1| f1|ref|out|
+---+---+---+---+---+---+---+---+---+
|  0|  1| 23|  4|  8|  9|  5| b1| 23|
|  1|  2| 43|  8| 10| 20| 43| e1| 20|
+---+---+---+---+---+---+---+---+---+
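
For intuition, the reduce unrolls into a nested CASE WHEN expression. Restricted to the first two data columns it would read as follows (hand-expanded sketch, for illustration only):

# Equivalent hand-written expression for data_cols == ["a1", "b1"]:
out_two_cols = when(col("ref") == "b1", col("b1")).otherwise(
    when(col("ref") == "a1", col("a1")).otherwise(lit(None))
)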

Answer 1 (score: 0):

The OP asked for a Python solution; I am answering the same question in Spark Scala 2.x for reference. Hope it helps someone.

scala> val df = Seq((0, 1, 23, 4, 8, 9, 5, "b1"), (1, 2, 43, 8, 10, 20, 43, "e1"), (2,  3, 15,  0,  1, 23,  7, "b1"),(3,  4,  2,  6, 11,  5,  8, "d1"),(4,  5,  6,  7,  2,  8,  1, "f1")).toDF("id", "a1", "b1", "c1", "d1", "e1", "f1", "ref")
df: org.apache.spark.sql.DataFrame = [id: int, a1: int ... 6 more fields]

scala> df.show(false)
+---+---+---+---+---+---+---+---+
|id |a1 |b1 |c1 |d1 |e1 |f1 |ref|
+---+---+---+---+---+---+---+---+
|0  |1  |23 |4  |8  |9  |5  |b1 |
|1  |2  |43 |8  |10 |20 |43 |e1 |
|2  |3  |15 |0  |1  |23 |7  |b1 |
|3  |4  |2  |6  |11 |5  |8  |d1 |
|4  |5  |6  |7  |2  |8  |1  |f1 |
+---+---+---+---+---+---+---+---+


scala> val colx = df.columns.filter(x=>x!="ref").filter(x=>x!="id")
colx: Array[String] = Array(a1, b1, c1, d1, e1, f1)

scala> val colm = colx.map( x=> when(col("ref")===lit(x),col(x)) )
colm: Array[org.apache.spark.sql.Column] = Array(CASE WHEN (ref = a1) THEN a1 END, CASE WHEN (ref = b1) THEN b1 END, CASE WHEN (ref = c1) THEN c1 END, CASE WHEN (ref = d1) THEN d1 END, CASE WHEN (ref = e1) THEN e1 END, CASE WHEN (ref = f1) THEN f1 END)

scala> df.select(col("*"),concat_ws("",array(colm:_*)).as("res1")).show(false)
+---+---+---+---+---+---+---+---+----+
|id |a1 |b1 |c1 |d1 |e1 |f1 |ref|res1|
+---+---+---+---+---+---+---+---+----+
|0  |1  |23 |4  |8  |9  |5  |b1 |23  |
|1  |2  |43 |8  |10 |20 |43 |e1 |20  |
|2  |3  |15 |0  |1  |23 |7  |b1 |15  |
|3  |4  |2  |6  |11 |5  |8  |d1 |11  |
|4  |5  |6  |7  |2  |8  |1  |f1 |1   |
+---+---+---+---+---+---+---+---+----+


scala>