Spark (Java): transformSchema() in a custom Transformer

Date: 2016-10-26 16:19:34

Tags: java apache-spark spark-dataframe pipeline apache-spark-ml

I want to use a custom transformer together with StandardScaler:

VectorizerTransformer vectorizerTransformer = new VectorizerTransformer(field.getName());
pipelineStages.add(vectorizerTransformer);

StandardScaler scaler = new StandardScaler()
        .setInputCol(vectorizerTransformer.getOutputColumn())
        .setOutputCol(field.getName() + "_norm")
        .setWithStd(true)
        .setWithMean(true);
pipelineStages.add(scaler);

However, when I run:

PipelineModel pipelineModel = pipeline.fit(dframe);

I get an exception:

Exception in thread "main" java.lang.IllegalArgumentException: Field "trans_vector" does not exist.
at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:228)
at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:228)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:59)
at org.apache.spark.sql.types.StructType.apply(StructType.scala:227)
at org.apache.spark.ml.util.SchemaUtils$.checkColumnType(SchemaUtils.scala:40)
at org.apache.spark.ml.feature.StandardScalerParams$class.validateAndTransformSchema(StandardScaler.scala:68)
at org.apache.spark.ml.feature.StandardScaler.validateAndTransformSchema(StandardScaler.scala:88)
at org.apache.spark.ml.feature.StandardScaler.transformSchema(StandardScaler.scala:124)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:180)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:70)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:132)
at org.sparkexample.PipelineExample.main(PipelineExample.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The field name in the error is the output field of my VectorizerTransformer.

In VectorizerTransformer I have this code:

@Override
public StructType transformSchema(StructType arg0) {
    return arg0;
}

I believe the problem lies here, so I need to return something else there, but what exactly? All I do is add a new field to the DataFrame:

trans -> trans_vector

1 Answer:

Answer 0 (score: 3)

@Override
public StructType transformSchema(StructType structType) {
    // Declare the output column, so downstream stages such as
    // StandardScaler can validate their input column against it.
    return structType.add(getOutputColumn(), new VectorUDT(), true);
}

That's it.
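To see why this fix is needed: as the foldLeft in the stack trace shows, Pipeline.fit first threads the schema through every stage's transformSchema before touching any data, so a stage that does not declare its output column breaks validation in the stage after it. The following dependency-free Java sketch mimics that mechanism with hypothetical toy classes (this is not the real Spark API; a real schema is a StructType, not a list of names):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of Spark ML's schema propagation (NOT the real Spark API).
public class SchemaPropagationDemo {

    // Stand-in for StructType: here a schema is just a list of column names.
    public interface Stage {
        List<String> transformSchema(List<String> schema);
    }

    // A transformer that declares its output column -- the fix from the answer.
    public static Stage vectorizer(String outputCol) {
        return schema -> {
            List<String> out = new ArrayList<>(schema);
            out.add(outputCol); // add the new column to the schema
            return out;
        };
    }

    // Like StandardScaler: first validates that its input column exists.
    public static Stage scaler(String inputCol, String outputCol) {
        return schema -> {
            if (!schema.contains(inputCol)) {
                throw new IllegalArgumentException(
                        "Field \"" + inputCol + "\" does not exist.");
            }
            List<String> out = new ArrayList<>(schema);
            out.add(outputCol);
            return out;
        };
    }

    // Mimics Pipeline.transformSchema: fold the schema through all stages in order.
    public static List<String> fitSchema(List<String> schema, List<Stage> stages) {
        for (Stage s : stages) {
            schema = s.transformSchema(schema);
        }
        return schema;
    }

    public static void main(String[] args) {
        List<Stage> stages = Arrays.asList(
                vectorizer("trans_vector"),
                scaler("trans_vector", "trans_vector_norm"));
        // prints [trans, trans_vector, trans_vector_norm]
        System.out.println(fitSchema(new ArrayList<>(Arrays.asList("trans")), stages));
    }
}
```

If vectorizer() returned the schema unchanged (as the question's transformSchema did), scaler() would throw the same "Field \"trans_vector\" does not exist." error before any data is processed.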

Note: I used http://supunsetunga.blogspot.ru/2016/05/custom-transformers-for-spark.html as the basis for the Java transformer code.