Spark 1.5.1, MLlib Random Forest Probabilities

Date: 2015-10-28 20:54:53

Tags: scala apache-spark random-forest apache-spark-mllib

I am using Spark 1.5.1 with MLlib. I built a random forest model with MLlib and am now using that model for prediction. I can find the predicted class (0.0 or 1.0) with the .predict function, but I cannot find a function that retrieves the probabilities (see the attached screenshot). I thought the Spark 1.5.1 random forest provided probabilities; am I missing something here?

[screenshot omitted]

2 Answers:

Answer 0 (score: 6)

Unfortunately, that feature is not available in the older Spark MLlib 1.5.1.

However, you can find it in the more recent Pipeline API of Spark MLlib 2.x, via:

RandomForestClassifier

Note: this example is taken from the official Spark MLlib documentation, ML - Random forest classifier.

Here is the complete example, with explanations of some of the output columns below:

  import org.apache.spark.ml.Pipeline
  import org.apache.spark.ml.classification.RandomForestClassifier
  import org.apache.spark.ml.feature.{IndexToString, StringIndexer, VectorIndexer}
  import org.apache.spark.mllib.util.MLUtils

  // Load and parse the data file, converting it to a DataFrame.
  val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt").toDF

  // Index labels, adding metadata to the label column.
  // Fit on whole dataset to include all labels in index.
  val labelIndexer = new StringIndexer()
    .setInputCol("label")
    .setOutputCol("indexedLabel")
    .fit(data)

  // Automatically identify categorical features, and index them.
  // Set maxCategories so features with > 4 distinct values are treated as continuous.
  val featureIndexer = new VectorIndexer()
    .setInputCol("features")
    .setOutputCol("indexedFeatures")
    .setMaxCategories(4)
    .fit(data)

  // Split the data into training and test sets (30% held out for testing).
  val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))

  // Train a RandomForest model.
  val rf = new RandomForestClassifier()
    .setLabelCol(labelIndexer.getOutputCol)
    .setFeaturesCol(featureIndexer.getOutputCol)
    .setNumTrees(10)

  // Convert indexed labels back to original labels.
  val labelConverter = new IndexToString()
    .setInputCol("prediction")
    .setOutputCol("predictedLabel")
    .setLabels(labelIndexer.labels)

  // Chain indexers and forest in a Pipeline.
  val pipeline = new Pipeline()
    .setStages(Array(labelIndexer, featureIndexer, rf, labelConverter))

  // Fit model. This also runs the indexers.
  val model = pipeline.fit(trainingData)

  // Make predictions.
  val predictions = model.transform(testData)
  // predictions: org.apache.spark.sql.DataFrame = [label: double, features: vector,
  //   indexedLabel: double, indexedFeatures: vector, rawPrediction: vector,
  //   probability: vector, prediction: double, predictedLabel: string]

  predictions.show(10)
  // +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
  // |label|            features|indexedLabel|     indexedFeatures|rawPrediction|probability|prediction|predictedLabel|
  // +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
  // |  0.0|(692,[124,125,126...|         1.0|(692,[124,125,126...|   [0.0,10.0]|  [0.0,1.0]|       1.0|           0.0|
  // |  0.0|(692,[124,125,126...|         1.0|(692,[124,125,126...|    [1.0,9.0]|  [0.1,0.9]|       1.0|           0.0|
  // |  0.0|(692,[129,130,131...|         1.0|(692,[129,130,131...|    [1.0,9.0]|  [0.1,0.9]|       1.0|           0.0|
  // |  0.0|(692,[154,155,156...|         1.0|(692,[154,155,156...|    [1.0,9.0]|  [0.1,0.9]|       1.0|           0.0|
  // |  0.0|(692,[154,155,156...|         1.0|(692,[154,155,156...|    [1.0,9.0]|  [0.1,0.9]|       1.0|           0.0|
  // |  0.0|(692,[181,182,183...|         1.0|(692,[181,182,183...|    [1.0,9.0]|  [0.1,0.9]|       1.0|           0.0|
  // |  1.0|(692,[99,100,101,...|         0.0|(692,[99,100,101,...|    [4.0,6.0]|  [0.4,0.6]|       1.0|           0.0|
  // |  1.0|(692,[123,124,125...|         0.0|(692,[123,124,125...|   [10.0,0.0]|  [1.0,0.0]|       0.0|           1.0|
  // |  1.0|(692,[124,125,126...|         0.0|(692,[124,125,126...|   [10.0,0.0]|  [1.0,0.0]|       0.0|           1.0|
  // |  1.0|(692,[125,126,127...|         0.0|(692,[125,126,127...|   [10.0,0.0]|  [1.0,0.0]|       0.0|           1.0|
  // +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
  // only showing top 10 rows

Explanations of some of the output columns:

  • predictionCol holds the predicted label.
  • rawPredictionCol holds a Vector of length # classes, containing the counts of training instance labels at the tree node that makes the prediction (available for classification only).
  • probabilityCol holds the probability Vector of length # classes, equal to rawPrediction normalized to a multinomial distribution (available for classification only).
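If you want the probability of one particular class as a plain Double column, a minimal sketch (assuming the `predictions` DataFrame from the example above, a running Spark 2.x session, and a column name `p_class1` chosen here for illustration) is to unpack the `probability` vector with a UDF:

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Index of the class of interest within the probability vector.
val classIdx = 1

// UDF that extracts one entry of the ML Vector in the probability column.
val probAt = udf((v: Vector) => v(classIdx))

predictions
  .withColumn("p_class1", probAt(col("probability")))
  .select("predictedLabel", "probability", "p_class1")
  .show(5)
```

Note that in Spark 2.x the Pipeline API uses `org.apache.spark.ml.linalg.Vector`, not the older `org.apache.spark.mllib.linalg.Vector`; mixing the two in a UDF is a common source of runtime errors.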

Answer 1 (score: 4)

You cannot get the classification probabilities directly, but it is relatively easy to compute them yourself. A RandomForest is an ensemble of trees, and its output probability for a class is the number of trees voting for that class divided by the total number of trees.
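As a toy illustration of that vote-fraction idea (the votes here are made up, not taken from any real model): with 10 trees of which 7 predict class 1.0, the estimated probability of class 1.0 is 0.7.

```scala
// Hypothetical per-tree predictions for a single point (10 trees, binary labels).
val votes = Seq(1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0)

// Fraction of trees voting 1.0 = estimated P(class = 1.0).
val pPositive = votes.count(_ == 1.0).toDouble / votes.length
// pPositive = 0.7
```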

Since RandomForestModel in MLlib gives you the trained trees, this is easy to do. The following code computes the probability for a binary classification problem; its generalization to multiclass classification is straightforward.

  import org.apache.spark.mllib.regression.LabeledPoint
  import org.apache.spark.mllib.tree.model.RandomForestModel
  import org.apache.spark.rdd.RDD

  // For each point, returns the fraction of trees that predict class 1.0,
  // i.e. the estimated probability of the positive class.
  def predict(points: RDD[LabeledPoint], model: RandomForestModel): RDD[Double] = {
    val numTrees = model.trees.length
    val trees = points.sparkContext.broadcast(model.trees)
    points.map { point =>
      trees.value
        .map(_.predict(point.features))
        .sum / numTrees
    }
  }

For the multiclass case, you only need to replace the inner map with .map(_.predict(point.features) -> 1.0), reduce by key instead of summing, and finally take the class with the maximum count.
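The multiclass generalization described above can be sketched as follows (a sketch only, assuming the same RDD[LabeledPoint] and RandomForestModel as in the binary example; the name `predictProbabilities` is mine). It counts each tree's vote per class and normalizes by the number of trees, yielding a class-to-probability map per point; taking the key with the maximum value recovers the hard prediction.

```scala
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.rdd.RDD

// For each point, returns a map from class label to estimated probability
// (the fraction of trees voting for that class).
def predictProbabilities(points: RDD[LabeledPoint],
                         model: RandomForestModel): RDD[Map[Double, Double]] = {
  val numTrees = model.trees.length
  val trees = points.sparkContext.broadcast(model.trees)
  points.map { point =>
    trees.value
      .map(_.predict(point.features))          // one vote per tree
      .groupBy(identity)                       // group votes by class label
      .mapValues(_.length.toDouble / numTrees) // vote fraction = probability
      .toMap
  }
}

// Hard prediction for one point: the class with the largest vote fraction.
// predictProbabilities(points, model).map(_.maxBy(_._2)._1)
```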