How do I get the predicted probabilities of a classification model?

Asked: 2019-03-28 10:58:04

Tags: python scikit-learn

I am experimenting with different classification models for a binary dependent variable (occupied / not occupied). The models I am interested in are logistic regression, a decision tree, and Gaussian Naive Bayes.

My input data is a csv file with a datetime index (e.g. 2019-01-07 14:00), three feature columns ("R", "P", "C", containing numerical values), and a dependent-variable column ("value", containing binary values).

Training the models is not a problem, everything works fine. All of them give me their predictions as binary values (which of course is what the final result should be), but I would also like to see the predicted probabilities that made them decide for one of the two values. Is there a way to get these values?

I have already tried all the classification visualizers that come with the yellowbrick package (ClassBalance, ROCAUC, ClassificationReport, ClassPredictionError). However, none of them gives me a chart that shows the probabilities the model calculated for the dataset.

import pandas as pd
import numpy as np

# use the timestamp column as the datetime index so that only the numeric
# feature columns remain in the design matrix
data = pd.read_csv('testrooms_data.csv', parse_dates=['timestamp'], index_col='timestamp')


from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

## split the dataset into training and test sets
X = data.drop("value", axis=1) # X contains all the features
y = data["value"] # y contains only the label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.5, random_state = 1)

###model training
###Logistic Regression###
clf_lr = LogisticRegression()

# fit the LogisticRegression classifier on the training data
clf_lr.fit(X_train, y_train)
#predict on the unseen data
pred_lr = clf_lr.predict(X_test)

###Decision Tree###

from sklearn.tree import DecisionTreeClassifier

clf_dt = DecisionTreeClassifier()
pred_dt = clf_dt.fit(X_train, y_train).predict(X_test)

###Bayes###
from sklearn.naive_bayes import GaussianNB

bayes = GaussianNB()
pred_bayes = bayes.fit(X_train, y_train).predict(X_test)


###visualization for e.g. LogReg
from yellowbrick.classifier import ClassificationReport
from yellowbrick.classifier import ClassPredictionError
from yellowbrick.classifier import ROCAUC

# classification report
visualizer = ClassificationReport(clf_lr, support=True)

visualizer.fit(X_train, y_train)  # Fit the visualizer and the model
visualizer.score(X_test, y_test)  # Evaluate the model on the test data
g = visualizer.poof()             # Draw/show/poof the data

# class prediction error
visualizer2 = ClassPredictionError(LogisticRegression())

visualizer2.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer2.score(X_test, y_test) # Evaluate the model on the test data
g2 = visualizer2.poof() # Draw visualization

# ROC AUC
visualizer3 = ROCAUC(LogisticRegression())

visualizer3.fit(X_train, y_train)  # Fit the training data to the visualizer
visualizer3.score(X_test, y_test)  # Evaluate the model on the test data
g3 = visualizer3.poof()             # Draw/show/poof the data


It would be ideal to have an array similar to pred_lr that contains the probabilities calculated for each row of the csv file. Is that possible? If so, how can I do it?

1 Answer:

Answer 0 (score: 2)

Most (if not all) sklearn estimators have a method for getting the probabilities behind the classification, either as plain probabilities or as log probabilities.

For example, if you have a Naive Bayes classifier and you want to get the probabilities instead of the classification itself, you could do it like this (I used the same naming as in your code):

from sklearn.naive_bayes import GaussianNB

bayes = GaussianNB()
pred_bayes = bayes.fit(X_train, y_train).predict(X_test)

# probability of each class for every sample, shape (n_samples, n_classes)
bayes.predict_proba(X_test)
# the same probabilities on a log scale
bayes.predict_log_proba(X_test)
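
The same call works for the other models in your code as well, since both LogisticRegression and DecisionTreeClassifier expose predict_proba. As a minimal sketch, assuming the variables from your snippet (clf_lr and X_test), you can collect the probability of each class for every test row into a DataFrame that keeps the original row index:

import pandas as pd

# probability of each class for every row of X_test, shape (n_samples, 2)
proba_lr = clf_lr.predict_proba(X_test)

# one column per class label, indexed like the rows of the csv
proba_df = pd.DataFrame(proba_lr, columns=clf_lr.classes_, index=X_test.index)
print(proba_df.head())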

Hope this helps.
