Splitting a list column into multiple columns in the same PySpark dataframe

Asked: 2018-04-04 12:21:45

Tags: pyspark apache-spark-sql spark-dataframe

I have the following dataframe, which contains 2 columns:

the first column holds column names;

the second column holds lists of values.

+--------------------+--------------------+
|              Column|            Quantile|
+--------------------+--------------------+
|                rent|[4000.0, 4500.0, ...|
|     is_rent_changed|[0.0, 0.0, 0.0, 0...|
|               phone|[7.022372888E9, 7...|
|          Area_house|[1000.0, 1000.0, ...|
|       bedroom_count|[1.0, 1.0, 1.0, 1...|
|      bathroom_count|[1.0, 1.0, 1.0, 1...|
|    maintenance_cost|[0.0, 0.0, 0.0, 0...|
|            latitude|[12.8217605, 12.8...|
|            Max_rent|[9000.0, 10000.0,...|
|                Beds|[2.0, 2.0, 2.0, 2...|
|                Area|[1000.0, 1000.0, ...|
|            Avg_Rent|[3500.0, 4000.0, ...|
|      deposit_amount|[0.0, 0.0, 0.0, 0...|
|          commission|[0.0, 0.0, 0.0, 0...|
|        monthly_rent|[0.0, 0.0, 0.0, 0...|
|is_min_rent_guara...|[0.0, 0.0, 0.0, 0...|
|min_guarantee_amount|[0.0, 0.0, 0.0, 0...|
|min_guarantee_dur...|[1.0, 1.0, 1.0, 1...|
|        furnish_cost|[0.0, 0.0, 0.0, 0...|
|  owner_furnish_part|[0.0, 0.0, 0.0, 0...|
+--------------------+--------------------+

How can I split the second column into multiple columns while keeping the same dataset?

I can access the values as follows:

univar_df10.select("Column", univar_df10.Quantile[0], univar_df10.Quantile[1], univar_df10.Quantile[2]).show()

+--------------------+-------------+-------------+------------+
|              Column|  Quantile[0]|  Quantile[1]| Quantile[2]|
+--------------------+-------------+-------------+------------+
|                rent|       4000.0|       4500.0|      5000.0|
|     is_rent_changed|          0.0|          0.0|         0.0|
|               phone|7.022372888E9|7.042022842E9|7.07333021E9|
|          Area_house|       1000.0|       1000.0|      1000.0|
|       bedroom_count|          1.0|          1.0|         1.0|
|      bathroom_count|          1.0|          1.0|         1.0|
|    maintenance_cost|          0.0|          0.0|         0.0|
|            latitude|   12.8217605|   12.8490502|   12.863517|
|            Max_rent|       9000.0|      10000.0|     11500.0|
|                Beds|          2.0|          2.0|         2.0|
|                Area|       1000.0|       1000.0|      1000.0|
|            Avg_Rent|       3500.0|       4000.0|      4125.0|
|      deposit_amount|          0.0|          0.0|         0.0|
|          commission|          0.0|          0.0|         0.0|
|        monthly_rent|          0.0|          0.0|         0.0|
|is_min_rent_guara...|          0.0|          0.0|         0.0|
|min_guarantee_amount|          0.0|          0.0|         0.0|
|min_guarantee_dur...|          1.0|          1.0|         1.0|
|        furnish_cost|          0.0|          0.0|         0.0|
|  owner_furnish_part|          0.0|          0.0|         0.0|
+--------------------+-------------+-------------+------------+
only showing top 20 rows

I want my new dataframe to have the lists in the second column split into multiple columns, like in the dataset above. Thanks in advance.

2 Answers:

Answer 0 (score: 4):

Assuming (since your question was flagged for closing as unclear) that the issue is that the lists in the Quantile column have a length that makes building the corresponding command by hand inconvenient, here is a solution that uses list concatenation and a list comprehension as the argument to select:

spark.version
# u'2.2.1'

# make some toy data
from pyspark.sql import Row
df = spark.createDataFrame([Row([0,45,63,0,0,0,0]),
                            Row([0,0,0,85,0,69,0]),
                            Row([0,89,56,0,0,0,0])],
                            ['features'])

df.show()
# result:
+-----------------------+
|features               |
+-----------------------+
|[0, 45, 63, 0, 0, 0, 0]|
|[0, 0, 0, 85, 0, 69, 0]|
|[0, 89, 56, 0, 0, 0, 0]|
+-----------------------+

# get the length of your lists, if you don't know it already (here is 7):
length = len(df.select('features').take(1)[0][0])
length
# 7

df.select([df.features] + [df.features[i] for i in range(length)]).show()
# result:
+--------------------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+
|            features|features[0]|features[1]|features[2]|features[3]|features[4]|features[5]|features[6]|  
+--------------------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+
|[0, 45, 63, 0, 0,...|          0|         45|         63|          0|          0|          0|          0| 
|[0, 0, 0, 85, 0, ...|          0|          0|          0|         85|          0|         69|          0|
|[0, 89, 56, 0, 0,...|          0|         89|         56|          0|          0|          0|          0|
+--------------------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+

So, in your case,

univar_df10.select([univar_df10.Column] + [univar_df10.Quantile[i] for i in range(length)])

should do the job, after first computing length as

length = len(univar_df10.select('Quantile').take(1)[0][0])
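
As a side note, the auto-generated column names (Quantile[0], Quantile[1], ...) can be awkward to work with downstream. A small variation of the above renames them on the fly with alias (the Quantile_0-style names here are just an illustration):

# select, renaming each extracted item (names are illustrative)
univar_df10.select(
    [univar_df10.Column] +
    [univar_df10.Quantile[i].alias('Quantile_{}'.format(i)) for i in range(length)]
).show()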

Answer 1 (score: 1):

Here is pseudocode for doing this in Scala:

import org.apache.spark.sql.functions.split
import org.apache.spark.sql.functions.col

// Create the target column names.
val quantileColumns = Seq("quantile1", "quantile2", "quantile3")

// Get the number of columns.
val numberOfColumns = quantileColumns.size

// Build a list of columns: split the string and alias each item.
val columnList = for (i <- 0 until numberOfColumns) yield split(col("Quantile"), ",").getItem(i).alias(quantileColumns(i))

// Just perform the select operation.
df.select(columnList: _*)

// If you want to add or drop columns, use withColumn and drop on df.
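
For completeness, a rough PySpark equivalent of this Scala sketch might look as follows; like the split above, it assumes Quantile is a comma-separated string rather than an array column:

from pyspark.sql.functions import split, col

# target column names (illustrative)
quantile_columns = ["quantile1", "quantile2", "quantile3"]

# split the string column and alias each extracted item
column_list = [split(col("Quantile"), ",").getItem(i).alias(name)
               for i, name in enumerate(quantile_columns)]

df.select(*column_list)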