I'm using a Keras neural network to identify the category that a piece of data belongs to.
self.model.compile(loss='categorical_crossentropy',
                   optimizer=keras.optimizers.Adam(lr=0.001, decay=0.0001),
                   metrics=[categorical_accuracy])
The fit function:
history = self.model.fit(self.X,
                         {'output': self.Y},
                         validation_split=0.3,
                         epochs=400,
                         batch_size=32
                         )
I'm interested in finding out which labels are misclassified during the validation step. It seems like a good way to learn what's happening under the hood.
Answer 0: (score: 0)
You can use model.predict_classes(validation_data) to get the predicted classes for the validation data, and compare those predictions with the actual labels to find out where the model went wrong. Something like this:
import numpy as np
predictions = model.predict_classes(validation_data)
wrong = np.where(predictions != Y_validation)  # Y_validation must be integer class ids, not one-hot
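Note that the question uses validation_split rather than an explicit validation set, so validation_data and Y_validation are not directly available; Keras takes the validation split from the end of the arrays passed to fit (before shuffling). A minimal sketch of reconstructing them, assuming the asker's self.X / self.Y and one-hot encoded labels:

split_at = int(len(self.X) * (1 - 0.3))                       # validation_split=0.3 takes the last 30%
X_val, Y_val = self.X[split_at:], self.Y[split_at:]

pred_classes = np.argmax(self.model.predict(X_val), axis=1)   # predicted class ids
true_classes = np.argmax(Y_val, axis=1)                       # one-hot -> integer class ids

wrong_idx = np.where(pred_classes != true_classes)[0]
print(wrong_idx)                                              # misclassified validation samples
print(list(zip(true_classes[wrong_idx], pred_classes[wrong_idx])))  # (true, predicted) pairs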
Answer 1: (score: 0)
If you're interested in looking "under the hood", I suggest using

model.predict(validation_data_x)

to look at the scores for each class, for every observation in the validation set. This should shed some light on which categories the model isn't very good at classifying. The way to predict the final class is
scores = model.predict(validation_data_x)
preds = np.argmax(scores, axis=1)
Make sure you use the correct axis for np.argmax (axis=1 assumes one observation per row, with the class scores along the second axis). Then compare preds against the true classes.
Also, to check the overall accuracy on that dataset, use
model.evaluate(x=validation_data_x, y=validation_data_y)
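To see not only how often the model is wrong but which classes it confuses with which, a confusion matrix is a natural next step. A rough sketch, assuming the same validation_data_x / validation_data_y as above with one-hot labels (scikit-learn is an extra dependency, not part of the original answer):

import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

scores = model.predict(validation_data_x)
preds = np.argmax(scores, axis=1)             # predicted class per observation
truth = np.argmax(validation_data_y, axis=1)  # one-hot labels -> integer class ids

print(confusion_matrix(truth, preds))         # rows = true class, columns = predicted class
print(classification_report(truth, preds))    # per-class precision / recall / F1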
Answer 2: (score: 0)
I ended up creating a metric that shows the "worst performing class id + score" at every iteration. The idea came from link:

import tensorflow as tf
import numpy as np

class MaxIoU(object):
    def __init__(self, num_classes):
        super().__init__()
        self.num_classes = num_classes

    def max_iou(self, y_true, y_pred):
        # Wraps the np_max_iou method and uses it as a TensorFlow op.
        # Takes numpy arrays as its arguments and returns numpy arrays as
        # its outputs.
        return tf.py_func(self.np_max_iou, [y_true, y_pred], tf.float32)

    def np_max_iou(self, y_true, y_pred):
        # Compute the confusion matrix to get the number of true positives,
        # false positives, and false negatives
        # Convert predictions and target from categorical to integer format
        target = np.argmax(y_true, axis=-1).ravel()
        predicted = np.argmax(y_pred, axis=-1).ravel()
        # Trick from torchnet for bincounting 2 arrays together
        # https://github.com/pytorch/tnt/blob/master/torchnet/meter/confusionmeter.py
        x = predicted + self.num_classes * target
        bincount_2d = np.bincount(x.astype(np.int32), minlength=self.num_classes**2)
        assert bincount_2d.size == self.num_classes**2
        conf = bincount_2d.reshape((self.num_classes, self.num_classes))
        # Compute the per-class score from the confusion matrix
        true_positive = np.diag(conf)
        false_positive = np.sum(conf, 0) - true_positive
        false_negative = np.sum(conf, 1) - true_positive
        # Just in case we get a division by 0, ignore/hide the error and set the value to 0
        with np.errstate(divide='ignore', invalid='ignore'):
            iou = false_positive / (true_positive + false_positive + false_negative)
        iou[np.isnan(iou)] = 0
        # Pack the worst class id and its score into a single scalar:
        # integer part = class id, fractional part = score
        return np.max(iou).astype(np.float32) + np.argmax(iou).astype(np.float32)
Usage:
custom_metric = MaxIoU(len(catagories))
self.model.compile(loss='categorical_crossentropy',
                   optimizer=keras.optimizers.Adam(lr=0.001, decay=0.0001),
                   metrics=[categorical_accuracy, custom_metric.max_iou])
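Because the metric packs the worst class id and its score into a single float, the value Keras logs has to be unpacked again. A small sketch of doing that, assuming the metric is recorded under the key 'max_iou' in history.history (Keras names metrics after the function):

# e.g. a logged value of 7.42 means class id 7 with a score of 0.42
last_value = history.history['max_iou'][-1]
worst_class_id = int(last_value)             # integer part: worst performing class id
worst_score = last_value - worst_class_id    # fractional part: its score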