Force_plot for a multiclass probability explainer

Date: 2021-04-22 10:17:37

Tags: python shap xgbclassifier

I'm running into an error with the Python SHAP library. Creating a force plot from log-odds works without problems, but I cannot create one from probabilities. The goal is to obtain base_values and shap_values that sum to the predicted probability.

This works:

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xgboost as xgb
import sklearn
import shap

X, y = shap.datasets.iris()
X_display, y_display = shap.datasets.iris(display=True)

X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size = 0.2, random_state = 42)

#fit xgboost model
params = {
    'objective': "multi:softprob",
    'eval_metric': "mlogloss",
    'num_class': 3
}

xgb_fit = xgb.train(
    params=params,
    dtrain=xgb.DMatrix(data=X_train, label=y_train),
)

#create shap values and perform tests
explainer = shap.TreeExplainer(xgb_fit)
shap_values = explainer.shap_values(X_train)
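
As a point of reference, the additivity property already holds here in log-odds space: for each class, the expected value plus the per-row sum of SHAP values should reproduce the booster's raw margin output. A minimal check along these lines (continuing with the objects defined above; the comparison itself is illustrative, not part of the original post):

# raw per-class margins (log-odds) for the training data
margins = xgb_fit.predict(xgb.DMatrix(X_train), output_margin=True)

# for a multiclass XGBoost model, shap_values is a list with one array per
# class and expected_value holds one base value per class
for c in range(3):
    reconstructed = explainer.expected_value[c] + shap_values[c].sum(axis=1)
    print(np.allclose(reconstructed, margins[:, c], atol=1e-4))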

This does not work:

explainer = shap.TreeExplainer(
    model=xgb_fit,
    data=X_train,
    feature_perturbation="interventional",
    model_output="probability",
)

(screenshot of the resulting error traceback)

Packages used:

matplotlib 3.4.1

numpy 1.20.2

pandas 1.2.4

scikit-learn 0.24.1

shap 0.39.0

xgboost 1.4.1

1 answer:

Answer 0 (score: 1)

To see how the raw scores of a multiclass classifier add up in probability space, try KernelExplainer:

from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from shap import datasets, KernelExplainer, force_plot, initjs
from scipy.special import softmax, expit

initjs()

X, y = datasets.iris()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
clf = XGBClassifier(random_state=42, 
                    eval_metric="mlogloss", 
                    use_label_encoder=False)
clf.fit(X_train, y_train)
ke = KernelExplainer(clf.predict_proba, data=X_train)
shap_values = ke.shap_values(X_test)

force_plot(ke.expected_value[1], shap_values[1][0], feature_names=X.columns)

(force plot for class 1, first test sample)
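
In that call, ke.expected_value[1] and shap_values[1][0] select class 1 and the first test row. As a usage sketch (not from the original answer), any other class or row can be plotted the same way, and passing the corresponding feature row makes the plot show the actual feature values:

# class 2, first test sample, with feature values displayed
force_plot(
    ke.expected_value[2],
    shap_values[2][0],
    features=X_test.iloc[0, :],
    feature_names=X.columns,
)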

Sanity check:

  1. Expected result (up to rounding error):
clf.predict_proba(X_test[:1])
#array([[0.0031177 , 0.9867134 , 0.01016894]], dtype=float32)
  2. Base values:
clf.predict_proba(X_train).mean(0)
#array([0.3339472 , 0.34133017, 0.32472247], dtype=float32)

(or, if you prefer, np.unique(y_train, return_counts=True)[1] / len(y_train))
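
To connect this back to the original goal, the base value plus a row's SHAP values should reproduce that row's predicted probability for each class. A small illustrative check (my addition, using the objects from the answer above):

# reconstruct the predicted probabilities of the first test row from
# base value + SHAP contributions, one class at a time
for c in range(3):
    print(ke.expected_value[c] + shap_values[c][0].sum())

# should match clf.predict_proba(X_test[:1]) up to KernelExplainer's
# sampling/rounding error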