I am trying to do the following:
+-----+-------------------------+----------+-------------------------------------------+
|label|features |prediction|probability |
+-----+-------------------------+----------+-------------------------------------------+
|0.0 |(3,[],[]) |0 |[0.9999999999999979,2.093996169658831E-15] |
|1.0 |(3,[0,1,2],[0.1,0.1,0.1])|0 |[0.999999999999999,9.891337521299582E-16] |
|2.0 |(3,[0,1,2],[0.2,0.2,0.2])|0 |[0.9999999999999979,2.0939961696578572E-15]|
|3.0 |(3,[0,1,2],[9.0,9.0,9.0])|1 |[2.093996169659668E-15,0.9999999999999979] |
|4.0 |(3,[0,1,2],[9.1,9.1,9.1])|1 |[9.89133752128275E-16,0.999999999999999] |
|5.0 |(3,[0,1,2],[9.2,9.2,9.2])|1 |[2.0939961696605603E-15,0.9999999999999979]|
+-----+-------------------------+----------+-------------------------------------------+
Convert the above dataframe so that it gains two additional columns, prob1 and prob2, each holding the corresponding value shown in the probability column.
I found similar questions, one in PySpark and another in Scala. I do not know how to translate the PySpark code, and I got an error with the Scala code.
PySpark code:
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

split1_udf = udf(lambda value: value[0].item(), FloatType())
split2_udf = udf(lambda value: value[1].item(), FloatType())
output2 = randomforestoutput.select(split1_udf('probability').alias('c1'), split2_udf('probability').alias('c2'))
Or to append these columns to the original dataframe:
randomforestoutput.withColumn('c1', split1_udf('probability')).withColumn('c2', split2_udf('probability'))
Scala code:
import org.apache.spark.sql.functions.udf
val getPOne = udf((v: org.apache.spark.mllib.linalg.Vector) => v(1))
model.transform(testDf).select(getPOne($"probability"))
I get the following error when running the Scala code:
scala> predictions.select(getPOne(col("probability"))).show(false)
org.apache.spark.sql.AnalysisException: cannot resolve 'UDF(probability)' due to data type mismatch: argument 1 requires vector type, however, '`probability`' is of vector type.;;
'Project [UDF(probability#39) AS UDF(probability)#135]
+- Project [label#0, features#1, prediction#34, UDF(features#1) AS probability#39]
+- Project [label#0, features#1, UDF(features#1) AS prediction#34]
+- Relation[label#0,features#1] libsvm
I am currently using Scala 2.11.11 and Spark 2.1.1.
Answer 0 (score: 6)
I understand from your question that you are trying to split the probability column into two columns, prob1 and prob2. If that is the case, simple withColumn calls that index into the array will solve your problem:
predictions
.withColumn("prob1", $"probability"(0))
.withColumn("prob2", $"probability"(1))
.drop("probability")
Edited
I created a temporary dataframe to match your column:
import spark.implicits._  // needed for toDF on a local Seq
val predictions = Seq(Array(1.0,2.0), Array(2.0939961696605603E-15,0.9999999999999979), Array(Double.NaN,Double.NaN)).toDF("probability")
+--------------------------------------------+
|probability |
+--------------------------------------------+
|[1.0, 2.0] |
|[2.0939961696605603E-15, 0.9999999999999979]|
|[NaN, NaN] |
+--------------------------------------------+
Applying the withColumn calls above gives the following result:
+----------------------+------------------+
|prob1 |prob2 |
+----------------------+------------------+
|1.0 |2.0 |
|2.0939961696605603E-15|0.9999999999999979|
|NaN |NaN |
+----------------------+------------------+
Schema mismatch edit
Since your probability column has a Vector schema rather than the ArrayType schema that the solution above assumes, that solution does not work in your case. Use the following one instead.
You have to create udf functions that return the expected values. Note that the udf must take the new-style org.apache.spark.ml.linalg.Vector, not org.apache.spark.mllib.linalg.Vector as in your original code: ML pipelines in Spark 2.x produce ml vectors, and the mismatch is exactly what triggers the confusing "requires vector type, however ... is of vector type" AnalysisException above.
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf

val first = udf((v: Vector) => v.toArray(0))
val second = udf((v: Vector) => v.toArray(1))
predictions
  .withColumn("prob1", first($"probability"))
  .withColumn("prob2", second($"probability"))
  .drop("probability")
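Both UDFs do nothing more than index into the vector's backing array. As a minimal plain-Scala sketch of that extraction logic (no Spark session required; values taken from the table above):

```scala
// The probability vector holds the two class probabilities;
// prob1 and prob2 are simply its components at positions 0 and 1.
val probability = Array(2.0939961696605603e-15, 0.9999999999999979)
val prob1 = probability(0) // probability of class 0
val prob2 = probability(1) // probability of class 1
println(s"prob1=$prob1 prob2=$prob2")
```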
I hope this gets you the desired result.