I obtained predictions from spark.ml.classification.LogisticRegressionModel.predict. Many rows have a prediction column of 1.0 while the probability column shows 0.04. model.getThreshold is 0.5, so I assumed the model classifies everything above the 0.5 probability threshold as 1.0. How should I interpret a result with a prediction of 1.0 and a probability of 0.04?
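For binary logistic regression in Spark ML, the prediction is derived by comparing the probability of class 1.0 against the threshold. A minimal sketch of how to inspect that relationship (the names model, predictions, and p1 are hypothetical, assuming a fitted LogisticRegressionModel and the DataFrame returned by its transform):

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Probability of the positive class (index 1 of the probability vector).
val probOfOne = udf((v: Vector) => v(1))

predictions
  .withColumn("p1", probOfOne(col("probability")))
  .select("p1", "prediction")
  .show(false)
// With the default threshold of 0.5, a row should come out as prediction 1.0
// when p1 exceeds model.getThreshold.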
Answer (score: 4):
The probability column produced by LogisticRegression contains a vector with as many entries as there are classes; each index holds the probability of the corresponding class. Here is an example with two classes:
// Assumes a SparkSession named `spark` is in scope (as in spark-shell);
// the implicits are needed for toDF().
import org.apache.spark.ml.feature.VectorAssembler
import spark.implicits._

case class Person(label: Double, age: Double, height: Double, weight: Double)

val df = List(Person(0.0, 15, 175, 67),
              Person(0.0, 30, 190, 100),
              Person(1.0, 40, 155, 57),
              Person(1.0, 50, 160, 56),
              Person(0.0, 15, 170, 56),
              Person(1.0, 80, 180, 88)).toDF()

// Assemble the numeric columns into a single feature vector,
// then keep only the label and features columns.
val assembler = new VectorAssembler()
  .setInputCols(Array("age", "height", "weight"))
  .setOutputCol("features")

val df2 = assembler.transform(df).select("label", "features")
df2.show
+-----+------------------+
|label| features|
+-----+------------------+
| 0.0| [15.0,175.0,67.0]|
| 0.0|[30.0,190.0,100.0]|
| 1.0| [40.0,155.0,57.0]|
| 1.0| [50.0,160.0,56.0]|
| 0.0| [15.0,170.0,56.0]|
| 1.0| [80.0,180.0,88.0]|
+-----+------------------+
import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)

// Split the data, fit the model on the training part, and score the rest.
val Array(testing, training) = df2.randomSplit(Array(0.7, 0.3))
val model = lr.fit(training)
val predictions = model.transform(testing)
predictions.select("probability", "prediction").show(false)
+----------------------------------------+----------+
|probability |prediction|
+----------------------------------------+----------+
|[0.7487950501224138,0.2512049498775863] |0.0 |
|[0.6458452667523259,0.35415473324767416]|0.0 |
|[0.3888393314864866,0.6111606685135134] |1.0 |
+----------------------------------------+----------+
Here are the per-class probabilities computed by the algorithm along with the final prediction. The class with the highest probability is the one that gets predicted.
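To make that rule concrete, here is a small sketch (not part of the original answer) that recomputes the prediction as the argmax of the probability vector and puts it next to the prediction column; with the default 0.5 threshold on two classes the two columns should agree:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Index of the largest entry in the probability vector, as a Double label.
val argmax = udf((v: Vector) => v.argmax.toDouble)

predictions
  .withColumn("manualPrediction", argmax(col("probability")))
  .select("probability", "prediction", "manualPrediction")
  .show(false)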