How to get probabilities from a PySpark One-vs-Rest classifier

Asked: 2018-12-14 22:28:43

Tags: python apache-spark pyspark prediction

PySpark's One-vs-Rest classifier does not seem to provide probabilities. Is there a way to get them?

My code is attached below. I also fit a standard multiclass classifier for comparison.

from pyspark.ml.classification import LogisticRegression, OneVsRest
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# load data file.
inputData = spark.read.format("libsvm") \
    .load("/data/mllib/sample_multiclass_classification_data.txt")

(train, test) = inputData.randomSplit([0.8, 0.2])

# instantiate the base classifier.
lr = LogisticRegression(maxIter=10, tol=1E-6, fitIntercept=True)

# instantiate the One Vs Rest Classifier.
ovr = OneVsRest(classifier=lr)


# train the multiclass model.
ovrModel = ovr.fit(train)
lrm = lr.fit(train)

# score the model on test data.
predictions = ovrModel.transform(test)
predictions2 = lrm.transform(test)

predictions.show(6)
predictions2.show(6)
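
A quick way to see the difference (assuming the code above has been run) is to compare the schemas of the two outputs: the plain LogisticRegression model adds rawPrediction and probability columns, while the OneVsRest output only carries a prediction column.

# compare the columns each model produces; the OneVsRest output should lack
# the "rawPrediction" and "probability" columns that LogisticRegression adds
predictions.printSchema()
predictions2.printSchema()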

1 Answer:

Answer 0 (score: 1):

I don't think you can access the probability (confidence) vector, because OneVsRestModel only keeps the class with the highest confidence and drops the confidence vector. To test this, you could copy the class, modify it, and remove the .drop(accColName) call. The relevant lines from OneVsRestModel._transform are:

http://spark.apache.org/docs/2.0.1/api/python/_modules/pyspark/ml/classification.html

# output the index of the classifier with highest confidence as prediction
labelUDF = udf(
    lambda predictions: float(max(enumerate(predictions), key=operator.itemgetter(1))[0]),
    DoubleType())

# output label and label metadata as prediction
return aggregatedDataset.withColumn(
    self.getPredictionCol(), labelUDF(aggregatedDataset[accColName])).drop(accColName)
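
Rather than patching the Spark source, a rough workaround in the same spirit is to score the test set with each fitted binary sub-model yourself (they are exposed as ovrModel.models, one per class) and collect each one's positive-class probability. This is only a minimal sketch, assuming ovrModel and test from the question's code; the proba_0, proba_1, ... column names are made up for illustration, and the values are confidences from independent binary classifiers, not a normalized multiclass distribution.

from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

# extract P(class = 1) from a binary model's probability vector
pos_prob = udf(lambda v: float(v[1]), DoubleType())

scored = test
for i, model in enumerate(ovrModel.models):
    # each sub-model is a binary LogisticRegressionModel for class i; pull out
    # its positive-class probability, then drop its output columns so the next
    # sub-model's transform does not collide with them
    scored = (model.transform(scored)
              .withColumn("proba_%d" % i, pos_prob("probability"))
              .drop("rawPrediction").drop("probability").drop("prediction"))

scored.select([c for c in scored.columns if c.startswith("proba_")]).show(6)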