SHAP ValueError: Dimension 1 in both shapes must be equal, but are 2 and 1. Shapes are [?,2] and [?,1]

Date: 2021-02-21 19:05:29

Tags: python tensorflow valueerror dimensions shap

Based on a previously trained feed-forward network, I am trying to use SHAP to get feature importances. I followed all the steps described in the documentation, but I still get the following error:

ValueError: Dimension 1 in both shapes must be equal, but are 2 and 1. Shapes are [?,2] and [?,1]

The following code is a minimal reproducible example that raises the same error.

import pandas as pd
from numpy.random import randint
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Dropout, Activation
from keras.optimizers import Adam
import shap

# Train_x data creation
train_x = pd.DataFrame({
    'v1': randint(2, 20, 1489),
    'v2': randint(50, 200, 1489),
    'v3': randint(30, 90, 1489),
    'v4': randint(100, 150, 1489)
})

# Train_y data creation
train_y = randint(0, 2, 1489)

# One-hot encoding as I use categorical cross-entropy
train_y = to_categorical(train_y, num_classes=2)

# Start construction of a DNN Sequential model.
model = Sequential()

# First input layer with a dropout and batch normalization layer following
model.add(Dense(256, input_dim=train_x.shape[1]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(rate=0.2))

# Output layer
model.add(Dense(2))
model.add(Activation('softmax'))

# Use the Adam optimizer
optimizer = Adam(lr=0.001)

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

model.summary()

# Fit model
hist = model.fit(train_x, train_y, epochs=100, batch_size=128, shuffle=False, verbose=2)

# SHAP calculation
explainer = shap.DeepExplainer(model, train_x)
shap_values = explainer.shap_values(train_x[:500].values)

My input has shape (None, 4), and the model ends with a softmax activation over 2 neurons, since I use it for binary classification. The train_x data in the code snippet above is a Pandas DataFrame of shape (1489, 4).
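To sanity-check the shapes involved, a quick sketch (only restating what is described above):

print(model.input_shape)   # (None, 4)
print(model.output_shape)  # (None, 2)
print(train_x.shape)       # (1489, 4)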

I have tried changing the shape of train_x, but I get similar errors. Any help would be greatly appreciated.

1 Answer:

Answer 0 (score: 0)

See the following working example of binary classification with TF:

import pandas as pd
from numpy.random import randint
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout, Activation
from tensorflow.keras.optimizers import Adam
import shap
import tensorflow
print(shap.__version__, "\n", tensorflow.__version__)

# Train_x data creation
train_x = pd.DataFrame({
    'v1': randint(2, 20, 1489),
    'v2': randint(50, 200, 1489),
    'v3': randint(30, 90, 1489),
    'v4': randint(100, 150, 1489)
})

# Train_y data creation
train_y = randint(0, 2, 1489)

# One-hot encoding as I use categorical cross-entropy
train_y = to_categorical(train_y, num_classes=2)

# Start construction of a DNN Sequential model.
model = Sequential()

# First input layer with a dropout and batch normalization layer following
model.add(Dense(256, input_dim=train_x.shape[1]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(rate=0.2))

# Output layer
model.add(Dense(2))
model.add(Activation('softmax'))

# Use the Adam optimizer
optimizer = Adam(lr=0.001)

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

# model.summary()

# Fit model
hist = model.fit(train_x, train_y, epochs=100, batch_size=128, shuffle=False, verbose=0)

# SHAP calculation
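# Older SHAP versions have no gradient handler for TF's "AddV2" op, so
# DeepExplainer fails on graphs that contain it; registering a passthrough
# handler lets gradients flow through the op unchanged.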
shap.explainers._deep.deep_tf.op_handlers["AddV2"] = shap.explainers._deep.deep_tf.passthrough
explainer = shap.DeepExplainer(model, train_x)

shap_values = explainer.shap_values(train_x[:500].values)

# shap_values is a list with one array per output class; plot class 1
shap.summary_plot(shap_values[1])

Output:

0.38.2
2.2.0

[Image: SHAP summary plot]

A few things to note:

  1. Package versions (I believe tf should be below 2.4; if downgrading is not an option, see the sketch after this list)
  2. Adding "AddV2" (see the discussion here)
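If pinning TensorFlow below 2.4 is not an option, one possible workaround (my assumption, not part of the original answer) is shap.GradientExplainer, which approximates SHAP values via expected gradients and does not depend on DeepExplainer's op handlers:

# Hedged sketch: GradientExplainer as an alternative to DeepExplainer.
# It uses train_x as background data, so the unsupported "AddV2" op
# never has to be handled explicitly.
explainer = shap.GradientExplainer(model, train_x.values)
shap_values = explainer.shap_values(train_x[:500].values)
shap.summary_plot(shap_values[1])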