Add a new column with user-defined values to a dataframe (pyspark)

Time: 2018-01-23 06:45:03

Tags: python pyspark pyspark-sql

Array A1 takes three different sets of values, returned from some function -

A1 = [1,2,3,4]
A1 = [5,6,7,8]
A1 = [1,3,4,1]

My dataframe, to which I want to add a new column holding these array values -

+---+---+-----+
| x1| x2|   x3|
+---+---+-----+
|  1|  A|  3.0|
|  2|  B|-23.0|
|  3|  C| -4.0|
+---+---+-----+

I tried this (assuming 'df' is my dataframe) -

from pyspark.sql.functions import array, lit

for i in range(0, 3):
    # A1 is recomputed by the function on each iteration
    df = df.withColumn("x4", array(lit(A1[0]), lit(A1[1]), lit(A1[2]), lit(A1[3])))

But the problem with this code is that the column ends up holding only the last value of array 'A1', like this -

+---+---+-----+---------+
| x1| x2|   x3|       x4|
+---+---+-----+---------+
|  1|  A|  3.0|[1,3,4,1]|
|  2|  B|-23.0|[1,3,4,1]|
|  3|  C| -4.0|[1,3,4,1]|
+---+---+-----+---------+

But I want this -

+---+---+-----+---------+
| x1| x2|   x3|       x4|
+---+---+-----+---------+
|  1|  A|  3.0|[1,2,3,4]|
|  2|  B|-23.0|[5,6,7,8]|
|  3|  C| -4.0|[1,3,4,1]|
+---+---+-----+---------+

Do I need to add something extra to my code?

4 Answers:

Answer 0 (score: 1)

How about this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('test').getOrCreate()
df = spark.createDataFrame(data=[(1,'A',3),(2,'B',-23),(3,'C',-4)], schema=['x1','x2','x3'])
df.show()

+---+---+---+
| x1| x2| x3|
+---+---+---+
|  1|  A|  3|
|  2|  B|-23|
|  3|  C| -4|
+---+---+---+

# map each row number (1-based) to the array that should go in that row
mydict = {1: [1,2,3,4], 2: [5,6,7,8], 3: [1,3,4,1]}

def addExtraColumn(df, mydict):
    names = df.schema.names
    count = 1
    mylst = []
    # collect the rows to the driver and rebuild each one as a plain list
    for row in df.rdd.collect():
        RW = row.asDict()
        rowLst = []
        for name in names:
            rowLst.append(RW[name])
        # append the array that belongs to this row
        rowLst.append(mydict[count])
        count = count + 1
        mylst.append(rowLst)
    return mylst

newlst = addExtraColumn(df, mydict)

df1 = spark.sparkContext.parallelize(newlst).toDF(['x1','x2','x3','x4'])

df1.show()

+---+---+---+------------+
| x1| x2| x3|          x4|
+---+---+---+------------+
|  1|  A|  3|[1, 2, 3, 4]|
|  2|  B|-23|[5, 6, 7, 8]|
|  3|  C| -4|[1, 3, 4, 1]|
+---+---+---+------------+

Answer 1 (score: 0)

Looking at your code, I think the values of A1 depend on at least one of the columns x1, x2 or x3.

If that is the case, you cannot define the new column from A1 directly; instead, use a function that takes the columns needed to compute A1 as its arguments.

This is just a guess, but perhaps all you need is a dictionary, A = {1:[1,2,3,4], 2:[5,6,7,8], 3:[1,3,4,1]}, and to use it inside a UDF in your withColumn.
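
A minimal sketch of that idea, assuming the arrays can be looked up by the value of x1 (the dictionary A, the UDF name lookup_a1 and the choice of x1 as the key are assumptions, not something this answer spells out):

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, IntegerType

A = {1: [1, 2, 3, 4], 2: [5, 6, 7, 8], 3: [1, 3, 4, 1]}

# look up the array for each row by its x1 value; keys missing from A become null
lookup_a1 = udf(lambda key: A.get(key), ArrayType(IntegerType()))

df = df.withColumn("x4", lookup_a1(df["x1"]))
df.show()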

Answer 2 (score: 0)

So, after racking my brain over this, I found that it cannot be done with pyspark's withColumn alone, because that creates a column with the same value in every row. I also could not use a UDF, because my new column does not depend on any existing column of the dataframe.

So I did something like this - assuming you get the different values of array A1 inside a for loop (in my case that is the scenario):

f_array = []
for i in range(0, 3):
    # A1 holds a different array on each iteration
    f_array.append((i, A1))

# Creating a new df for my array.

df1 = spark.createDataFrame(data = f_array, schema = ["id", "x4"])
df1.show()

+---+---------+
| id|       x4|
+---+---------+
|  0|[1,2,3,4]|
|  1|[5,6,7,8]|
|  2|[1,3,4,1]|
+---+---------+
# None of df's columns matches df1, so add an extra column named `id` to df, like the `id` in `df1`. It is used to join the two dataframes.

from pyspark.sql.functions import monotonically_increasing_id

df = df.withColumn('id', monotonically_increasing_id())
df.show()

+---+---+---+-----+
| id| x1| x2|   x3|
+---+---+---+-----+
|  0|  1|  A|  3.0|
|  1|  2|  B|-23.0|
|  2|  3|  C| -4.0|
+---+---+---+-----+

# Now join the two dataframes on the common column `id`.

df = df.join(df1, df.id == df1.id).drop(df.id).drop(df1.id)
df.show()

+---+---+-----+------------+
| x1| x2|   x3|          x4|
+---+---+-----+------------+
|  1|  A|  3.0|[1, 2, 3, 4]|
|  2|  B|-23.0|[5, 6, 7, 8]|
|  3|  C| -4.0|[1, 3, 4, 1]|
+---+---+-----+------------+

Answer 3 (score: -1)

This works:

from pyspark.sql import SparkSession
import pandas as pd

spark = SparkSession.builder.appName('test').getOrCreate()

df = spark.createDataFrame(data=[(1,'A',3),(2,'B',-23),(3,'C',-4)], schema=['x1','x2','x3'])
df.show()

+---+---+---+
| x1| x2| x3|
+---+---+---+
|  1|  A|  3|
|  2|  B|-23|
|  3|  C| -4|
+---+---+---+

Convert df to a list

mylst = df.toPandas().values.tolist()

Create a dictionary

mydict = {1:[1,2,3,4] , 2:[5,6,7,8], 3:[1,3,4,1]}

Append the corresponding dictionary element to each row list

count = 1
for x in mylst:
    x.append(mydict[count])
    count = count + 1

Convert the appended list back to a dataframe

sc = spark.sparkContext
df1 = sc.parallelize(mylst).toDF(['x1','x2','x3','x4'])
df1.show()

+---+---+---+------------+
| x1| x2| x3|          x4|
+---+---+---+------------+
|  1|  A|  3|[1, 2, 3, 4]|
|  2|  B|-23|[5, 6, 7, 8]|
|  3|  C| -4|[1, 3, 4, 1]|
+---+---+---+------------+