I need to use PySpark to rename the columns of a DataFrame df based on another DataFrame df_col.
df
+----+---+----+----+
|code| id|name|work|
+----+---+----+----+
| ASD|101|John| DEV|
| klj|102| ben|prod|
+----+---+----+----+
df_col
+-----------+-----------+
|col_current|col_updated|
+-----------+-----------+
| id| Row_id|
| name| Name|
| code| Row_code|
| Work| Work_Code|
+-----------+-----------+
If a column of df matches a value in col_current, that column should be renamed to the corresponding col_updated. For example: since df.id matches the col_current value id, the column id should be renamed to Row_id.
Expected output:
Row_id,Name,Row_code,Work_code
101,John,ASD,DEV
102,ben,klj,prod
Note: I want this process to be dynamic.
Answer (score: 4)
Just collect df_col as a dictionary:
df = spark.createDataFrame(
    [("ASD", "101", "John", "DEV"), ("klj", "102", "ben", "prod")],
    ("code", "id", "name", "work")
)

df_col = spark.createDataFrame(
    [("id", "Row_id"), ("name", "Name"), ("code", "Row_code"), ("Work", "Work_Code")],
    ("col_current", "col_updated")
)

# Collect the two-column mapping DataFrame into a plain dict {col_current: col_updated}
name_dict = df_col.rdd.collectAsMap()
and use select with a list comprehension:
df.select([df[c].alias(name_dict.get(c, c)) for c in df.columns]).printSchema()
# root
# |-- Row_code: string (nullable = true)
# |-- Row_id: string (nullable = true)
# |-- Name: string (nullable = true)
# |-- work: string (nullable = true)
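To inspect the data and not just the schema, the same select can be assigned and shown; a minimal follow-up sketch (the name df_renamed is just illustrative):
# Materialize the renamed DataFrame and display its rows
df_renamed = df.select([df[c].alias(name_dict.get(c, c)) for c in df.columns])
df_renamed.show()
# +--------+------+----+----+
# |Row_code|Row_id|Name|work|
# +--------+------+----+----+
# |     ASD|   101|John| DEV|
# |     klj|   102| ben|prod|
# +--------+------+----+----+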
where name_dict is a standard Python dictionary:
{'Work': 'Work_Code', 'code': 'Row_code', 'id': 'Row_id', 'name': 'Name'}
name_dict.get(c, c) returns the new name for a given current name, or the current name itself if there is no match:
name_dict.get("code", "code")
# 'Row_code'
name_dict.get("work", "work") # Case sensitive
# 'work'
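If the Work / work mismatch should also be handled, one option is a case-insensitive lookup; a minimal sketch, assuming the lower-cased keys do not collide (name_dict_ci is just an illustrative name):
# Case-insensitive variant (sketch): lower-case the mapping keys, then look
# up each df column by its lower-cased name.
name_dict_ci = {k.lower(): v for k, v in name_dict.items()}
df.select([df[c].alias(name_dict_ci.get(c.lower(), c)) for c in df.columns]).printSchema()
# root
# |-- Row_code: string (nullable = true)
# |-- Row_id: string (nullable = true)
# |-- Name: string (nullable = true)
# |-- Work_Code: string (nullable = true)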
Finally, alias simply renames the column (df[c]) to the name returned by name_dict.get.
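For completeness, the same rename can also be expressed without alias by passing the new names to toDF; this is an equivalent sketch under the same assumptions, not part of the original answer:
# Equivalent alternative (sketch): toDF takes the full list of new column
# names positionally, so the same name_dict lookup can be reused.
df.toDF(*[name_dict.get(c, c) for c in df.columns]).printSchema()
# root
# |-- Row_code: string (nullable = true)
# |-- Row_id: string (nullable = true)
# |-- Name: string (nullable = true)
# |-- work: string (nullable = true)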