Creating a composite Transformer in Spark

Date: 2017-06-01 09:05:07

Tags: apache-spark composition transformer

I am using an NGram Transformer followed by a CountVectorizerModel.

I need to be able to create a composite Transformer so that I can reuse it later.

I was able to achieve this by building a List<Transformer> and looping over all of its elements, but I would like to know whether it is possible to create a Transformer out of other Transformers.

1 Answer:

Answer 0 (score: 2)

This is actually quite simple: you just need to use the Pipeline API to create a pipeline:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.feature.CountVectorizer;
import org.apache.spark.ml.feature.NGram;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

List<Row> data = Arrays.asList(
        RowFactory.create(0, "Hi I heard about Spark"),
        RowFactory.create(1, "I wish Java could use case classes"),
        RowFactory.create(2, "Logistic,regression,models,are,neat")
);

StructType schema = new StructType(new StructField[]{
        new StructField("id", DataTypes.IntegerType, false, Metadata.empty()),
        new StructField("sentence", DataTypes.StringType, false, Metadata.empty())
});

// Build the training DataFrame (assumes an existing SparkSession named `spark`).
Dataset<Row> sentenceDataFrame = spark.createDataFrame(data, schema);

Now let's define our pipeline (a tokenizer, an n-gram transformer, and a count vectorizer):

// The input column must match the schema defined above ("sentence").
Tokenizer tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words");

NGram ngramTransformer = new NGram().setN(2).setInputCol("words").setOutputCol("ngrams");

CountVectorizer countVectorizer = new CountVectorizer()
  .setInputCol("ngrams")
  .setOutputCol("feature")
  .setVocabSize(3)
  .setMinDF(2);

We can now create the pipeline and train it:

Pipeline pipeline = new Pipeline()
            .setStages(new PipelineStage[]{tokenizer, ngramTransformer, countVectorizer});

// Fit the pipeline to training documents.
PipelineModel model = pipeline.fit(sentenceDataFrame);
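Once fitted, the model applies all three stages in order with a single call. A minimal sketch, assuming the `model` and `sentenceDataFrame` variables defined above:

```java
// Transform the data: tokenizer -> n-gram -> count vectorizer, in one call.
Dataset<Row> transformed = model.transform(sentenceDataFrame);

// Inspect the generated bigrams and their count vectors.
transformed.select("ngrams", "feature").show(false);
```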

I hope this helps.
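Since the original goal was reusing the composite transformer later, note that a fitted PipelineModel can also be persisted to disk and reloaded in another application. A minimal sketch, assuming the `model` variable from above (the path is illustrative):

```java
// Save the fitted pipeline, including all stages and their learned parameters.
model.write().overwrite().save("/tmp/ngram-pipeline-model");

// Later, possibly in a different application:
PipelineModel sameModel = PipelineModel.load("/tmp/ngram-pipeline-model");
// sameModel.transform(...) behaves exactly like the original model.
```

An unfitted Pipeline can be saved the same way with `pipeline.write().save(...)` and reloaded with `Pipeline.load(...)`.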