Converting an array to a DenseVector in a Spark DataFrame using Java

Asked: 2018-10-22 10:35:34

Tags: java apache-spark dataframe apache-spark-sql user-defined-functions

I am running Spark 2.3. I want to convert the features column, which is of ArrayType, in the following DataFrame into a DenseVector. I am using Spark with Java.

+---+--------------------+
| id|            features|
+---+--------------------+
|  0|[4.191401, -1.793...|
| 10|[-0.5674514, -1.3...|
| 20|[0.735613, -0.026...|
| 30|[-0.030161237, 0....|
| 40|[-0.038345724, -0...|
+---+--------------------+

root
 |-- id: integer (nullable = false)
 |-- features: array (nullable = true)
 |    |-- element: float (containsNull = false)
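
For reference, a minimal sketch (my own placeholder values and session setup, not from the original question) that builds a DataFrame with this schema:

import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

SparkSession spark = SparkSession.builder()
    .appName("ArrayToVector").master("local[*]").getOrCreate();

// id: non-nullable int; features: nullable array of non-null floats
StructType schema = new StructType()
    .add("id", DataTypes.IntegerType, false)
    .add("features", DataTypes.createArrayType(DataTypes.FloatType, false), true);

Dataset<Row> df3 = spark.createDataFrame(Arrays.asList(
    RowFactory.create(0, Arrays.asList(4.191401f, -1.793f)),
    RowFactory.create(10, Arrays.asList(-0.5674514f, -1.3f))
), schema);
df3.printSchema();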

I wrote the following UDF, but it does not seem to work:

private static UDF1 toVector = new UDF1<Float[], Vector>() {

    private static final long serialVersionUID = 1L;

    @Override
    public Vector call(Float[] t1) throws Exception {
        double[] DoubleArray = new double[t1.length];
        for (int i = 0; i < t1.length; i++) {
            DoubleArray[i] = (double) t1[i];
        }
        Vector vector = (org.apache.spark.mllib.linalg.Vector) Vectors.dense(DoubleArray);
        return vector;
    }
};

I want to extract the features as a vector so that I can then cluster them.

I am also registering the UDF and then calling it as follows:

spark.udf().register("toVector", (UserDefinedAggregateFunction) toVector);
df3 = df3.withColumn("featuresnew", callUDF("toVector", df3.col("features")));
df3.show();

When I run this snippet, I get the following error:

ReadProcessData$1 cannot be cast to org.apache.spark.sql.expressions.UserDefinedAggregateFunction

1 Answer:

Answer 0 (score: 2):

The problem lies in how you register the udf in Spark. You should not use UserDefinedAggregateFunction, which is not a udf but a udaf intended for aggregations. Instead, what you should do is:

spark.udf().register("toVector", toVector, new VectorUDT());

Then, to use the registered function, call:

df3.withColumn("featuresnew", callUDF("toVector", df3.col("features")));

The udf itself should be adjusted slightly, as follows:

import java.util.List;
import org.apache.spark.ml.linalg.Vector;   // the ml (DataFrame-based) API, matching the VectorUDT used above
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.api.java.UDF1;
import scala.collection.Seq;

UDF1<Seq<Float>, Vector> toVector = new UDF1<Seq<Float>, Vector>() {

  @Override
  public Vector call(Seq<Float> t1) throws Exception {
    // Spark passes an array column into a Java UDF as a Scala Seq, not a Java array.
    List<Float> list = scala.collection.JavaConversions.seqAsJavaList(t1);
    double[] doubleArray = new double[list.size()];
    for (int i = 0; i < list.size(); i++) {
      doubleArray[i] = list.get(i);
    }
    return Vectors.dense(doubleArray);
  }
};
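
Putting the pieces together, here is a sketch of the whole flow, ending with clustering since that is the stated goal. The use of KMeans and its parameters (k=2, fixed seed) are my own placeholders, not part of the original answer:

import static org.apache.spark.sql.functions.callUDF;
import org.apache.spark.ml.clustering.KMeans;
import org.apache.spark.ml.clustering.KMeansModel;
import org.apache.spark.ml.linalg.VectorUDT;

spark.udf().register("toVector", toVector, new VectorUDT());

Dataset<Row> withVectors =
    df3.withColumn("featuresnew", callUDF("toVector", df3.col("features")));

// KMeans expects a Vector-typed column, which featuresnew now is.
KMeans kmeans = new KMeans().setFeaturesCol("featuresnew").setK(2).setSeed(1L);
KMeansModel model = kmeans.fit(withVectors);
model.transform(withVectors).show();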

Note that in Spark 2.3+ you can create a Scala-style udf that can be invoked directly. From this answer:

UserDefinedFunction toVector = udf(
  (Seq<Float> array) -> /* udf code or method to call */, new VectorUDT()
);

df3.withColumn("featuresnew", toVector.apply(col("features")));
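
For completeness, one possible body for that lambda, mirroring the UDF1 above (a sketch only; the variable names are illustrative):

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.udf;
import java.util.List;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.expressions.UserDefinedFunction;
import scala.collection.Seq;

UserDefinedFunction toVector = udf(
  (Seq<Float> array) -> {
    // Bridge the Scala Seq to a Java List, then widen each Float to double.
    List<Float> values = scala.collection.JavaConversions.seqAsJavaList(array);
    double[] doubles = new double[values.size()];
    for (int i = 0; i < doubles.length; i++) {
      doubles[i] = values.get(i);
    }
    return Vectors.dense(doubles);
  },
  new VectorUDT()
);

df3 = df3.withColumn("featuresnew", toVector.apply(col("features")));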