Loading a keras model with custom_metrics and a custom loss

Date: 2020-11-12 02:36:25

Tags: python tensorflow keras

I created a Keras model by subclassing keras.Model. I also use a custom loss (focal loss), a custom metric (subclassing keras.metrics.Metric), and learning rate decay. I trained the model and saved it with tf.keras.callbacks.ModelCheckpoint(model_path).

When I try to load the model, I get the error: ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model()

After looking into the error I learned about passing custom_objects, but after reading up and trying several things I still cannot load the model. Could someone tell me the right way to do this? My code is below:

import tensorflow as tf
from tensorflow import keras


def get_metrics():
    train_accuracy = tf.keras.metrics.CategoricalAccuracy(name="train_accuracy")
    val_accuracy = tf.keras.metrics.CategoricalAccuracy(name="val_accuracy")
    confusion_matrix = ConfusionMatrixMetric(20)
    return confusion_matrix, train_accuracy, val_accuracy


def loss_fn(labels, logits):
    # focal loss; gamma and alpha are hyperparameters assumed to be defined elsewhere
    # in the original script (typical values are gamma=2, alpha=0.25)
    epsilon = 1e-9
    model_out = tf.nn.softmax(logits, axis=-1) + epsilon
    ce = - (tf.math.log(model_out) * labels)
    weight = labels * tf.math.pow(1 - model_out, gamma)
    fl = alpha * weight * ce
    loss = tf.reduce_max(fl, axis=-1)
    return loss


def get_optimizer(steps_per_epoch, finetune=False):
    lr = 0.001
    if finetune:
        lr = 0.00001
    lr_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
        [steps_per_epoch * 10], [lr, lr / 10], name=None
    )
    opt_op = tf.keras.optimizers.Adam(learning_rate=lr_fn)
    return opt_op
        
class MyModel(keras.Model):
    def compile(self, optimizer, loss_fn, metric_fn):
        super(MyModel, self).compile()
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.confusion_matrix, self.train_accuracy, self.val_accuracy = metric_fn()

    def train_step(self, train_data):
        X, y = train_data
        with tf.GradientTape() as tape:
            logits = self(X, training=True)
            loss = self.loss_fn(y, logits)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # compute metrics, keeping a running average; predictions come from the logits
        y_pred = tf.nn.softmax(logits, axis=-1)

        self.train_accuracy.update_state(y, y_pred)
        self.confusion_matrix.update_state(y, y_pred)

        update_dict = {"train_accuracy": self.train_accuracy.result()}
        if 'confusion_matrix_metric' in self.metrics_names:
            self.metrics[0].add_results(update_dict)
        return update_dict
    
class ConfusionMatrixMetric(tf.keras.metrics.Metric):
    def __init__(self, num_classes, **kwargs):
        super(ConfusionMatrixMetric, self).__init__(name='confusion_matrix_metric', **kwargs)  # handles base args (e.g., dtype)
        self.num_classes = num_classes
        self.total_cm = self.add_weight("total", shape=(num_classes, num_classes), initializer="zeros")

    def reset_states(self):
        for s in self.variables:
            s.assign(tf.zeros(shape=s.shape))

    def update_state(self, y_true, y_pred, sample_weight=None):
        self.total_cm.assign_add(self.confusion_matrix(y_true, y_pred))
        return self.total_cm

    def result(self):
        return self.process_confusion_matrix()

    def confusion_matrix(self, y_true, y_pred):
        y_pred = tf.math.argmax(y_pred, 1)
        cm = tf.math.confusion_matrix(y_true, y_pred, dtype=tf.float32, num_classes=self.num_classes)
        return cm

    def process_confusion_matrix(self):
        cm = self.total_cm
        diag_part = tf.linalg.diag_part(cm)
        # accuracy = tf.math.reduce_sum(diag_part) / (tf.math.reduce_sum(cm) + tf.constant(1e-15))
        precision = diag_part / (tf.math.reduce_sum(cm, 0) + tf.constant(1e-15))
        recall = diag_part / (tf.math.reduce_sum(cm, 1) + tf.constant(1e-15))
        f1 = 2 * precision * recall / (precision + recall + tf.constant(1e-15))
        return f1

    def add_results(self, output):
        results = self.result()
        for i in range(self.num_classes):
            output['F1_{}'.format(i)] = results[i]

if __name__ == "__main__":
    model_path = 'model/my_custom_model/'
    create_folder(model_path)
    callbacks = [tf.keras.callbacks.ModelCheckpoint(model_path)]
    # train
    model = MyModel(inputs, outputs)
    model.summary()
    opt_op = get_optimizer(100)

    model.compile(optimizer=opt_op,
                  loss_fn=loss_fn,
                  metric_fn=get_metrics)

    model.fit(train_data_gen(),
              epochs=10,
              callbacks=callbacks)

    tf.keras.models.load_model(model_path)

Sorry for the long code dump, I just wanted to make sure that everything I did is correct and understandable.

1 Answer:

Answer 0 (score: 1)

As the error message suggests, if you want to subclass, use, and load a custom metric, you need to implement the get_config method.

You have already built your metric correctly by subclassing tf.keras.metrics.Metric; you just need to add get_config and have it return your constructor arguments (as far as I can see, the only one is num_classes):

def get_config(self):
    base_config = super().get_config()
    return {**base_config, "num_classes": self.num_classes}
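For completeness, here is a minimal sketch (my own assumption about how the pieces fit together, not code from the question) of the metric with get_config added. Because your __init__ hard-codes the metric name when calling the parent constructor, it is also worth letting the name come in through kwargs, so that the default from_config, which calls cls(**config), does not pass name twice:

class ConfusionMatrixMetric(tf.keras.metrics.Metric):
    def __init__(self, num_classes, **kwargs):
        # use the saved name if from_config supplies one, otherwise fall back to the default
        kwargs.setdefault('name', 'confusion_matrix_metric')
        super(ConfusionMatrixMetric, self).__init__(**kwargs)
        self.num_classes = num_classes
        self.total_cm = self.add_weight("total", shape=(num_classes, num_classes), initializer="zeros")

    def get_config(self):
        # the base config already contains name and dtype; add our own constructor argument
        base_config = super().get_config()
        return {**base_config, "num_classes": self.num_classes}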

Also, when you load the model, you have to pass your custom metric:

tf.keras.models.load_model(model_path, custom_objects={"ConfusionMatrixMetric": ConfusionMatrixMetric})
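If other custom pieces are needed at load time as well (for example the focal loss), they can go into the same custom_objects dictionary. The keys below mirror the names in the question and are an assumption about what your saved model actually requires:

loaded_model = tf.keras.models.load_model(
    model_path,
    custom_objects={
        "ConfusionMatrixMetric": ConfusionMatrixMetric,  # the custom metric class
        "loss_fn": loss_fn,                              # the focal loss used at training time
    },
)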

But note the following (from the book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron, 2nd edition):

The Keras API currently only specifies how to use subclassing to define layers, models, callbacks, and regularizers. If you build other components (such as losses, metrics, initializers, or constraints) using subclassing, they may not be portable to other Keras implementations. It is likely that the Keras API will eventually be updated to specify subclassing for all these components as well.
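As a possible workaround (my own suggestion, not part of the original answer): if full-model serialization of the subclassed components proves brittle, saving and restoring only the weights avoids the custom-object machinery entirely, at the cost of rebuilding and recompiling the model in code. The checkpoint path below is arbitrary:

# save only the weights after training
model.save_weights('model/my_custom_model_weights/ckpt')

# later: rebuild the same architecture and restore the weights
new_model = MyModel(inputs, outputs)
new_model.compile(optimizer=get_optimizer(100),
                  loss_fn=loss_fn,
                  metric_fn=get_metrics)
new_model.load_weights('model/my_custom_model_weights/ckpt')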