PySpark: renaming a df's columns based on the mapping defined in another df

Asked: 2017-07-29 10:30:13

Tags: apache-spark pyspark apache-spark-sql

I have two Spark dataframes loaded from csv:

mapping_fields (the df with the name mapping):

new_name old_name
A        aa
B        bb
C        cc

df (the data df):

aa bb cc dd
1  2  3  43
12 21 4  37

which I want to transform into:

A  B  C  D
1  2  3
12 21 4

Since dd has no mapping in the original table, column D should contain only null values.

How can I do this without converting mapping_fields to a dictionary and checking the mapped names one by one? (That would mean I have to collect mapping_fields and check it on the driver, which contradicts my use case of processing the whole dataset in a distributed way.)

Thanks!

2 Answers:

Answer 0 (score: 1):

Borrowing melt from here, you can:

from pyspark.sql import functions as f

mapping_fields = spark.createDataFrame(
    [("A", "aa"), ("B", "bb"), ("C", "cc")],
    ("new_name", "old_name"))
df = spark.createDataFrame(
    [(1, 2, 3, 43), (12, 21, 4, 37)],
    ("aa", "bb", "cc", "dd"))

# Melt to long format, tagging each row with an id so the table can be rebuilt
(melt(df.withColumn("id", f.monotonically_increasing_id()),
      id_vars=["id"], value_vars=df.columns, var_name="old_name")
    # Left join keeps unmatched columns, with new_name left as null
    .join(mapping_fields, ["old_name"], "left_outer")
    # Null out the values of columns that have no mapping
    .withColumn("value", f.when(f.col("new_name").isNotNull(), f.col("value")))
    # Fall back to the upper-cased old name where no mapping exists
    .withColumn("new_name", f.coalesce("new_name", f.upper(f.col("old_name"))))
    # Pivot back to wide format, one column per new name
    .groupBy("id")
    .pivot("new_name")
    .agg(f.first("value"))
    .drop("id")
    .show())

+---+---+---+----+
|  A|  B|  C|  DD|
+---+---+---+----+
|  1|  2|  3|null|
| 12| 21|  4|null|
+---+---+---+----+
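
The melt used above is not a built-in DataFrame method; it comes from the linked answer. For completeness, a minimal sketch of such a helper (this exact implementation is an assumption, since the linked code is not reproduced here):

from pyspark.sql import functions as f

def melt(df, id_vars, value_vars, var_name="variable", value_name="value"):
    # Build one struct per value column: (column name, column value)
    vars_and_vals = f.array(*(
        f.struct(f.lit(c).alias(var_name), f.col(c).alias(value_name))
        for c in value_vars))
    # Explode into long format: one row per (id, column name, value)
    tmp = df.withColumn("_vars_and_vals", f.explode(vars_and_vals))
    cols = id_vars + [
        f.col("_vars_and_vals")[x].alias(x) for x in [var_name, value_name]]
    return tmp.select(*cols)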

But nothing in your description gives a reason to do it this way. Since the number of columns is fairly limited, I would rather:

# The mapping is small, so collecting it to the driver as a dict is cheap
mapping = dict(
    mapping_fields
        .filter(f.col("old_name").isin(df.columns))
        .select("old_name", "new_name").collect())

# Rename mapped columns; replace unmapped ones with typed null columns
df.select([
    (f.lit(None).cast(t) if c not in mapping else f.col(c)).alias(mapping.get(c, c.upper()))
    for (c, t) in df.dtypes]).show()

+---+---+---+----+
|  A|  B|  C|  DD|
+---+---+---+----+
|  1|  2|  3|null|
| 12| 21|  4|null|
+---+---+---+----+

At the end of the day, you should use distributed processing when it offers a performance or scalability improvement. Here it would do the opposite and make your code overly complicated.

To ignore the non-matching columns instead:

# The inner join silently drops columns that have no mapping
(melt(df.withColumn("id", f.monotonically_increasing_id()),
      id_vars=["id"], value_vars=df.columns, var_name="old_name")
    .join(mapping_fields, ["old_name"])
    .groupBy("id")
    .pivot("new_name")
    .agg(f.first("value"))
    .drop("id")
    .show())

# Local-dict equivalent: select only the columns present in the mapping
df.select([
    f.col(c).alias(mapping.get(c))
    for (c, t) in df.dtypes if c in mapping])

Answer 1 (score: 0):

I tried a simple for loop; hope this helps as well.

from pyspark.sql import functions as F
l1 = [('A','aa'),('B','bb'),('C','cc')]
l2 = [(1,2,3,43),(12,21,4,37)]

df1 = spark.createDataFrame(l1,['new_name','old_name'])
df2 = spark.createDataFrame(l2,['aa','bb','cc','dd'])

>>> df1.show()
+--------+--------+
|new_name|old_name|
+--------+--------+
|       A|      aa|
|       B|      bb|
|       C|      cc|
+--------+--------+
>>> df2.show()
+---+---+---+---+
| aa| bb| cc| dd|
+---+---+---+---+
|  1|  2|  3| 43|
| 12| 21|  4| 37|
+---+---+---+---+

When you need the missing columns kept with null values:

>>> cols = df2.columns

>>> for i in cols:
...     val = df1.where(df1['old_name'] == i).first()
...     if val is not None:
...         df2 = df2.withColumnRenamed(i, val['new_name'])
...     else:
...         df2 = df2.withColumn(i, F.lit(None))
>>> df2.show()
+---+---+---+----+
|  A|  B|  C|  dd|
+---+---+---+----+
|  1|  2|  3|null|
| 12| 21|  4|null|
+---+---+---+----+

When we only need the mapped columns, change the else branch:

else:
    df2 = df2.drop(i)

>>> df2.show()
+---+---+---+
|  A|  B|  C|
+---+---+---+
|  1|  2|  3|
| 12| 21|  4|
+---+---+---+

Note that this overwrites the original df2 variable.
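
If you would rather leave df2 untouched, a small variant of the same loop (a sketch, reusing df1 and df2 from above) reassigns a separate variable; each withColumnRenamed/withColumn call returns a new dataframe, so the original binding survives:

>>> result = df2  # the name df2 keeps pointing at the dataframe as loaded
>>> for i in df2.columns:
...     val = df1.where(df1['old_name'] == i).first()
...     if val is not None:
...         result = result.withColumnRenamed(i, val['new_name'])
...     else:
...         result = result.withColumn(i, F.lit(None))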