Accessing the estimator in a Spark pipeline

Date: 2016-11-11 23:09:20

Tags: apache-spark, pipeline

Similar to Is it possible to access estimator attributes in spark.ml pipelines?, I would like to access the estimator, e.g. the last element in the pipeline.

The approach mentioned there no longer seems to work in Spark 2.0.1. How does it work now?

Edit

Maybe I should explain this in more detail. Here is my estimator + vector assembler:

val numRound = 20
val numWorkers = 4
val xgbBaseParams = Map(
    "max_depth" -> 10,
    "eta" -> 0.1,
    "seed" -> 50,
    "silent" -> 1,
    "objective" -> "binary:logistic"
  )

val xgbEstimator = new XGBoostEstimator(xgbBaseParams)
    .setFeaturesCol("features")
    .setLabelCol("label")

val vectorAssembler = new VectorAssembler()
    .setInputCols(train.columns
      .filter(!_.contains("label")))
    .setOutputCol("features")

val simplePipeParams = new ParamGridBuilder()
  .addGrid(xgbEstimator.round, Array(numRound))
  .addGrid(xgbEstimator.nWorkers, Array(numWorkers))
  .build()

val simplePipe = new Pipeline()
  .setStages(Array(vectorAssembler, xgbEstimator))

val numberOfFolds = 2
val cv = new CrossValidator()
  .setEstimator(simplePipe)
  .setEvaluator(new BinaryClassificationEvaluator()
    .setLabelCol("label")
    .setRawPredictionCol("prediction"))
  .setEstimatorParamMaps(simplePipeParams)
  .setNumFolds(numberOfFolds)
  .setSeed(gSeed)

val cvModel = cv.fit(train)
val trainPerformance = cvModel.transform(train)
val testPerformance = cvModel.transform(test)
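As a reminder of what `setNumFolds(2)` asks for, the k-fold assignment can be pictured in plain Scala. This is an illustration only, not Spark's actual implementation (which splits distributed data by random sampling):

```scala
object KFold {
  // Each element gets a fold id i % k; fold i serves once as the validation set.
  def folds[A](data: Seq[A], k: Int): Seq[(Seq[A], Seq[A])] = {
    val tagged = data.zipWithIndex.map { case (x, i) => (x, i % k) }
    (0 until k).map { fold =>
      val (valid, train) = tagged.partition(_._2 == fold)
      (train.map(_._1), valid.map(_._1))
    }
  }

  def main(args: Array[String]): Unit = {
    // With numberOfFolds = 2, each half is validated exactly once.
    folds(Seq("a", "b", "c", "d"), 2).foreach(println)
  }
}
```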

Now I want to perform custom scoring, e.g. with a cutoff != 0.5. This is possible if I can get hold of the model:

val realModel = cvModel.bestModel.asInstanceOf[XGBoostClassificationModel]

But this step does not compile. Thanks to your suggestion, I can obtain the model:

val pipelineModel: Option[PipelineModel] = cvModel.bestModel match {
  case p: PipelineModel => Some(p)
  case _ => None
}

val realModel: Option[XGBoostClassificationModel] = pipelineModel
  .flatMap {
    _.stages.collect { case t: XGBoostClassificationModel => t }
      .headOption
  }

// TODO write it nicer
val measureResults = realModel.map { rm =>
  for (
    thresholds <- Array(Array(0.2, 0.8), Array(0.3, 0.7), Array(0.4, 0.6),
      Array(0.6, 0.4), Array(0.7, 0.3), Array(0.8, 0.2))
  ) {
    rm.setThresholds(thresholds)

    val predResult = rm.transform(test)
      .select("label", "probabilities", "prediction")
      .as[LabelledEvaluation]
    println(s"cutoff was ${thresholds.mkString(", ")}")
    calculateEvaluation(R, predResult)
  }
}
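For reference, the thresholds passed to `setThresholds` follow Spark's `ProbabilisticClassificationModel` convention: the predicted class is the index i that maximizes probability(i) / thresholds(i). A minimal plain-Scala sketch of that rule (illustrative, not Spark code):

```scala
object CutoffScoring {
  // Sketch of ProbabilisticClassificationModel.setThresholds semantics:
  // predict the class whose probability/threshold ratio is largest.
  def predict(probabilities: Seq[Double], thresholds: Seq[Double]): Int = {
    require(probabilities.length == thresholds.length, "one threshold per class")
    val scaled = probabilities.zip(thresholds).map { case (p, t) => p / t }
    scaled.indexOf(scaled.max)
  }

  def main(args: Array[String]): Unit = {
    // Default 0.5/0.5 thresholds vs. a stricter cutoff for the positive class.
    println(predict(Seq(0.3, 0.7), Seq(0.5, 0.5)))
    println(predict(Seq(0.3, 0.7), Seq(0.2, 0.8)))
  }
}
```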

However, the problem is that

val predResult = rm.transform(test)

will fail, because test does not contain the features column produced by the vectorAssembler. That column is only created when the full pipeline is run.

So I decided to create a second pipeline:

val scoringPipe = new Pipeline()
  .setStages(Array(vectorAssembler, rm))
val predResult = scoringPipe.fit(train).transform(test)

But this seems a bit clumsy. Do you have a better/nicer idea?
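One direction worth exploring: VectorAssembler is a Transformer, so its transform needs no fitting, and the stages could in principle be chained directly, e.g. rm.transform(vectorAssembler.transform(test)), with no second fit(). The composition idea, sketched in plain Scala with maps standing in for DataFrames (all names below are made up for illustration):

```scala
object TransformChain {
  // Sketch: fitted pipeline stages behave like Row => Row functions,
  // so a scoring chain can be composed without another fit().
  type Row = Map[String, Double] // column name -> value, a stand-in for a DataFrame row

  def chain(stages: Seq[Row => Row]): Row => Row =
    stages.reduce(_ andThen _)

  // Hypothetical stand-in for VectorAssembler: adds a "features" column.
  val assemble: Row => Row = row => row + ("features" -> row.values.sum)
  // Hypothetical stand-in for the fitted classifier with a custom cutoff.
  val score: Row => Row =
    row => row + ("prediction" -> (if (row("features") > 1.0) 1.0 else 0.0))

  def main(args: Array[String]): Unit = {
    val scoringChain = chain(Seq(assemble, score))
    println(scoringChain(Map("f1" -> 0.4, "f2" -> 0.9)))
  }
}
```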

1 Answer:

Answer 0 (score: 2)

Nothing has changed in Spark 2.0.0; the same approach still works. Example Pipeline:

import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

// Prepare training documents from a list of (id, text, label) tuples.
val training = spark.createDataFrame(Seq(
  (0L, "a b c d e spark", 1.0),
  (1L, "b d", 0.0),
  (2L, "spark f g h", 1.0),
  (3L, "hadoop mapreduce", 0.0)
)).toDF("id", "text", "label")

// Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")
val hashingTF = new HashingTF()
  .setNumFeatures(1000)
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")
val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.01)
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))

// Fit the pipeline to training documents.
val model = pipeline.fit(training)

The model:

val logRegModel = model.stages.last
  .asInstanceOf[org.apache.spark.ml.classification.LogisticRegressionModel]
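If an unconditional asInstanceOf on stages.last feels unsafe, the collect-with-type-pattern approach from the question generalizes. The shape of that extraction, illustrated in plain Scala with a made-up Stage hierarchy (no Spark dependency; the types below are stand-ins, not Spark's):

```scala
object StageExtraction {
  // Made-up stand-ins for Spark's Transformer subtypes.
  sealed trait Stage
  case class Assembler(outputCol: String) extends Stage
  case class ClassifierModel(numRound: Int) extends Stage

  // Mirrors: pipelineModel.stages.collect { case m: SomeModel => m }.headOption
  // collect keeps only stages matching the type pattern; headOption avoids a cast.
  def firstClassifier(stages: Array[Stage]): Option[ClassifierModel] =
    stages.collect { case m: ClassifierModel => m }.headOption

  def main(args: Array[String]): Unit = {
    val stages: Array[Stage] = Array(Assembler("features"), ClassifierModel(20))
    println(firstClassifier(stages))
  }
}
```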