How do I output per-class accuracy in Keras?

Time: 2017-08-29 04:35:19

Tags: machine-learning neural-network keras conv-neural-network

Caffe can print not only the overall accuracy but also the per-class accuracy.

In the Keras log there is only the overall accuracy, and I'm having a hard time computing the per-class accuracies.

Epoch 168/200
0s - loss: 0.0495 - acc: 0.9818 - val_loss: 0.0519 - val_acc: 0.9796
Epoch 169/200
0s - loss: 0.0519 - acc: 0.9796 - val_loss: 0.0496 - val_acc: 0.9815
Epoch 170/200
0s - loss: 0.0496 - acc: 0.9815 - val_loss: 0.0514 - val_acc: 0.9801

Does anyone know how to output per-class accuracy in Keras?

3 Answers:

Answer 0: (score: 13)

Precision & recall are more useful measures for multi-class classification (see definitions). Following the Keras MNIST CNN example (10-class classification), you can get the per-class metrics with classification_report from sklearn.metrics:

from sklearn.metrics import classification_report
import numpy as np

Y_test = np.argmax(y_test, axis=1) # Convert one-hot to index
y_pred = model.predict_classes(x_test)
print(classification_report(Y_test, y_pred))
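Note: model.predict_classes is specific to the Sequential API and has been removed in newer Keras/TensorFlow releases; if it is not available in your version, an equivalent (assuming the same model and x_test) is to take the argmax of the predicted probabilities:

y_pred = np.argmax(model.predict(x_test), axis=1)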

The result looks like this:

         precision    recall  f1-score   support

      0       0.99      1.00      1.00       980
      1       0.99      0.99      0.99      1135
      2       1.00      0.99      0.99      1032
      3       0.99      0.99      0.99      1010
      4       0.98      1.00      0.99       982
      5       0.99      0.99      0.99       892
      6       1.00      0.99      0.99       958
      7       0.97      1.00      0.99      1028
      8       0.99      0.99      0.99       974
      9       0.99      0.98      0.99      1009

avg / total   0.99      0.99      0.99     10000
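If you literally want the per-class accuracy (the fraction of samples of each true class that are classified correctly), that is exactly the recall column above. As a minimal sketch, assuming the same Y_test and y_pred arrays as in the snippet, it can also be computed directly from the confusion matrix:

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(Y_test, y_pred)
# diagonal = correctly classified samples per class, row sum = total samples per true class
per_class_acc = cm.diagonal() / cm.sum(axis=1)
for label, acc in enumerate(per_class_acc):
    print("class %d accuracy: %.4f" % (label, acc))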

Answer 1: (score: 3)

You are probably looking for a callback approach, which you can easily add to your model.fit() call.

For example, you can define your own class using the keras.callbacks.Callback interface. I recommend using the on_epoch_end() function, since it formats nicely in your training summary if you decide to print with that verbosity setting. Note that this particular code block is set up for 3 classes, but you can of course change it to whatever number you need.

import numpy as np
import tensorflow as tf

# your class labels
classes = ["class_1", "class_2", "class_3"]

class AccuracyCallback(tf.keras.callbacks.Callback):

    def __init__(self, test_data):
        super().__init__()
        self.test_data = test_data
        self.class_history = []   # required by the per-class bookkeeping below

    def on_epoch_end(self, epoch, logs=None):
        x_data, y_data = self.test_data

        correct = 0
        incorrect = 0

        x_result = self.model.predict(x_data, verbose=0)

        x_numpy = []

        for i in classes:
            self.class_history.append([])

        class_correct = [0] * len(classes)
        class_incorrect = [0] * len(classes)

        for i in range(len(x_data)):
            x = x_data[i]
            y = y_data[i]

            res = x_result[i]

            actual_label = np.argmax(y)
            pred_label = np.argmax(res)

            if(pred_label == actual_label):
                x_numpy.append(["cor:", str(y), str(res), str(pred_label)])     
                class_correct[actual_label] += 1   
                correct += 1
            else:
                x_numpy.append(["inc:", str(y), str(res), str(pred_label)])
                class_incorrect[actual_label] += 1
                incorrect += 1

        print("\tCorrect: %d" %(correct))
        print("\tIncorrect: %d" %(incorrect))

        for i in range(len(classes)):
            tot = float(class_correct[i] + class_incorrect[i])
            class_acc = -1
            if (tot > 0):
                class_acc = float(class_correct[i]) / tot

            print("\t%s: %.3f" %(classes[i],class_acc)) 

        acc = float(correct) / float(correct + incorrect)  

        print("\tCurrent Network Accuracy: %.3f" %(acc))

You will then want to configure the new callback for your model fit. Assuming your validation data (val_data) is a tuple pair, you can use the following:

accuracy_callback = AccuracyCallback(val_data)

# you can use the history if desired
history = model.fit( x=_, y=_, verbose=1, 
           epochs=_, shuffle=_, validation_data = val_data,
           callbacks=[accuracy_callback], batch_size=_
         )

Note that _ indicates values that are likely to change depending on your configuration.
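A leaner variant of the same idea is also possible (a sketch, not part of the original answer): assuming your validation labels are one-hot encoded, the callback can delegate the per-class reporting to classification_report from the first answer. ReportCallback and val_data are hypothetical names here:

import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

class ReportCallback(tf.keras.callbacks.Callback):

    def __init__(self, val_data):
        super().__init__()
        self.val_data = val_data  # assumed to be an (x_val, y_val) tuple with one-hot labels

    def on_epoch_end(self, epoch, logs=None):
        x_val, y_val = self.val_data
        y_pred = np.argmax(self.model.predict(x_val, verbose=0), axis=1)
        y_true = np.argmax(y_val, axis=1)
        print(classification_report(y_true, y_pred))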

Answer 2: (score: 0)

For per-class training accuracy: run the code below on the training dataset, after (and/or before) training on it.


For raw per-class validation accuracy:

import numpy as np

def per_class_accuracy(y_preds, y_true, class_labels):
    # mean accuracy over the samples whose true label equals class_label
    return [np.mean([
        (y_true[pred_idx] == np.round(y_pred)) for pred_idx, y_pred in enumerate(y_preds)
        if y_true[pred_idx] == int(class_label)
    ]) for class_label in class_labels]

def update_val_history():
    # append the accuracies gathered since the last update, then reset the buffer
    [val_history[class_label].append(np.mean(np.asarray(temp_hist).T[class_idx]))
        for class_idx, class_label in enumerate(class_labels)]
    temp_hist.clear()

Example:

class_labels = ['0','1','2','3']
val_history = {class_label: [] for class_label in class_labels}
temp_hist = []

y_true   = np.asarray([0,0,0,0, 1,1,1,1, 2,2,2,2, 3,3,3,3])
y_preds1 = np.asarray([0,3,3,3, 1,1,0,0, 2,2,2,0, 3,3,3,3])
y_preds2 = np.asarray([0,0,3,3, 0,1,0,0, 2,2,2,2, 0,0,0,0])

# in practice: y_preds1 = model.predict(x1)
temp_hist.append(per_class_accuracy(y_preds1, y_true, class_labels))
update_val_history()

# in practice: y_preds2 = model.predict(x2)
temp_hist.append(per_class_accuracy(y_preds2, y_true, class_labels))
update_val_history()

print(val_history)

>> {
'0':[0.25,0.50],
'1':[0.50,0.25],
'2':[0.75,1.00],
'3':[1.00,0.00]
}