Calculate mean and max values with VectorAssembler

Time: 2017-07-17 16:34:13

Tags: python pyspark

I am working with a DataFrame like this:

from pyspark.mllib.linalg import Vectors
from pyspark.ml.feature import VectorAssembler


from pyspark.sql.types import *

schema = StructType([
    StructField("ClientId", IntegerType(), True),
    StructField("m_ant21", IntegerType(), True),
    StructField("m_ant22", IntegerType(), True),
    StructField("m_ant23", IntegerType(), True),
    StructField("m_ant24", IntegerType(), True)
])

df = sqlContext.createDataFrame(
    data=[(0, 5, 5, 4, 0),
          (1, 23, 13, 17, 99),
          (2, 0, 0, 0, 1),
          (3, 0, 4, 1, 0),
          (4, 2, 1, 30, 10),
          (5, 0, 0, 0, 0)],
    schema=schema)

I need to calculate the mean and the maximum of each row, using the columns "m_ant21", "m_ant22", "m_ant23" and "m_ant24".

I tried using VectorAssembler:

assembler = VectorAssembler(
    inputCols=["m_ant21", "m_ant22", "m_ant23","m_ant24"],
    outputCol="muestra")
output = assembler.transform(df)
output.show()

Now I create a function to compute the mean; its input is a DenseVector called "dv":

from pyspark.sql.functions import udf

dv = output.collect()[0].asDict()['muestra']

def mi_media(dv):
    return float(sum(dv) / dv.size)

udf_media = udf(mi_media, DoubleType())
output1 = output.withColumn("mediaVec", udf_media(output.muestra))
output1.show()

And the same for the maximum:

def mi_Max(dv):
    return float(max(dv))

udf_max = udf(mi_Max, DoubleType())
output2 = output.withColumn("maxVec", udf_max(output.muestra))
output2.show()

The problem is that output1.show() and output2.show() both fail with errors. It simply does not work, and I do not know what is going on in the code. What am I doing wrong? Please help me.

3 Answers:

Answer 0 (score: 1)

I tried this; check it out:

from pyspark.sql import functions as F

df.show()
+--------+-------+-------+-------+-------+
|ClientId|m_ant21|m_ant22|m_ant23|m_ant24|
+--------+-------+-------+-------+-------+
|       0|      5|      5|      4|      0|
|       1|     23|     13|     17|     99|
|       2|      0|      0|      0|      1|
|       3|      0|      4|      1|      0|
|       4|      2|      1|     30|     10|
|       5|      0|      0|      0|      0|
+--------+-------+-------+-------+-------+

# row-wise mean of the m_ant* columns
df1 = df.withColumn('mean', sum(df[c] for c in df.columns[1:]) / len(df.columns[1:]))
# row-wise max; coalesce replaces nulls with 0 first
df1 = df1.withColumn('max', F.greatest(*[F.coalesce(df[c], F.lit(0)) for c in df.columns[1:]]))

df1.show()

+--------+-------+-------+-------+-------+-----+---+
|ClientId|m_ant21|m_ant22|m_ant23|m_ant24| mean|max|
+--------+-------+-------+-------+-------+-----+---+
|       0|      5|      5|      4|      0|  3.5|  5|
|       1|     23|     13|     17|     99| 38.0| 99|
|       2|      0|      0|      0|      1| 0.25|  1|
|       3|      0|      4|      1|      0| 1.25|  4|
|       4|      2|      1|     30|     10|10.75| 30|
|       5|      0|      0|      0|      0|  0.0|  0|
+--------+-------+-------+-------+-------+-----+---+
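
For symmetry, the null handling used for the max can also be applied to the mean. A minimal sketch, assuming (my assumption, not part of the original answer) that nulls should be counted as 0; the column name mean_nullsafe is hypothetical:

cols = df.columns[1:]
# coalesce each column to 0 before summing, so a null does not null out the row's mean
df2 = df1.withColumn('mean_nullsafe',
                     sum(F.coalesce(df[c], F.lit(0)) for c in cols) / len(cols))
df2.show()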

Answer 1 (score: 0)

It is possible to do this with the DenseVector, but using the RDD API:

output2 = output.rdd.map(lambda x: (x.ClientId, 
                                   x.m_ant21, 
                                   x.m_ant22,
                                   x.m_ant23,
                                   x.m_ant24,
                                   x.muestra, 
                                   float(max(x.muestra))))
output2 = spark.createDataFrame(output2)
output2.show()

This gives:

+---+---+---+---+---+--------------------+----+
| _1| _2| _3| _4| _5|                  _6|  _7|
+---+---+---+---+---+--------------------+----+
|  0|  5|  5|  4|  0|   [5.0,5.0,4.0,0.0]| 5.0|
|  1| 23| 13| 17| 99|[23.0,13.0,17.0,9...|99.0|
|  2|  0|  0|  0|  1|       (4,[3],[1.0])| 1.0|
|  3|  0|  4|  1|  0|   [0.0,4.0,1.0,0.0]| 4.0|
|  4|  2|  1| 30| 10| [2.0,1.0,30.0,10.0]|30.0|
|  5|  0|  0|  0|  0|           (4,[],[])| 0.0|
+---+---+---+---+---+--------------------+----+

Now all that remains is to rename the columns, for example with the withColumnRenamed function, as in the sketch below. The mean case is handled the same way (see the sketch at the end of this answer).
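
For example, a sketch of that renaming step (the target names here are just one sensible choice, not part of the original answer):

# rename _1.._7 back to meaningful names
new_names = ['ClientId', 'm_ant21', 'm_ant22', 'm_ant23', 'm_ant24', 'muestra', 'maxVec']
for old, new in zip(output2.columns, new_names):
    output2 = output2.withColumnRenamed(old, new)
output2.show()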

It is also possible to do this with the SparseVector, but in that case you have to access the vector's internal values attribute:

# .values exposes only the explicitly stored (non-zero) entries of the SparseVector
output2 = output.rdd.map(lambda x: (x.ClientId,
                                    x.m_ant21,
                                    x.m_ant22,
                                    x.m_ant23,
                                    x.m_ant24,
                                    x.muestra,
                                    float(max(x.muestra.values))))
output2 = spark.createDataFrame(output2)

This approach is better when df has many columns and the maximum cannot be computed before the VectorAssembler stage.
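
For completeness, a sketch of the analogous mean computation in the same RDD style (output_mean is a hypothetical name; toArray() works for both dense and sparse vectors):

output_mean = output.rdd.map(lambda x: (x.ClientId,
                                        x.m_ant21,
                                        x.m_ant22,
                                        x.m_ant23,
                                        x.m_ant24,
                                        x.muestra,
                                        # mean over all components of the vector
                                        float(x.muestra.toArray().mean())))
output_mean = spark.createDataFrame(output_mean)
output_mean.show()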

Answer 2 (score: 0)

I managed to find a solution to this problem:

import pyspark.sql.functions as f
import pyspark.sql.types as t

# toArray() gives a numpy array; wrap the numpy scalars in float() so the
# returned values match the declared DoubleType
min_of_vector = f.udf(lambda vec: float(vec.toArray().min()), t.DoubleType())
max_of_vector = f.udf(lambda vec: float(vec.toArray().max()), t.DoubleType())
mean_of_vector = f.udf(lambda vec: float(vec.toArray().mean()), t.DoubleType())

final = output.withColumn('min', min_of_vector('muestra')) \
              .withColumn('max', max_of_vector('muestra')) \
              .withColumn('mean', mean_of_vector('muestra'))
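
Note that toArray() returns a numpy array, so min(), max() and mean() produce numpy scalars; the float() wrapper above keeps them compatible with the declared DoubleType. A quick, illustrative usage check:

final.select('ClientId', 'min', 'max', 'mean').show()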