Custom accuracy/loss for each output in a Keras multiple-output model

Date: 2018-08-02 23:18:47

Tags: python tensorflow keras

I'm trying to define custom loss and accuracy functions for each output of a two-output neural network model in Keras. Let's call the two outputs A and B.

My objectives are:

  1. Give the accuracy/loss functions the name of one of the outputs, so that they can be reported on the same TensorBoard graphs as the corresponding outputs of older/existing models I have lying around. So, for example, the accuracy and loss for output A of this two-output network should be viewable on the same TensorBoard graph as output A of some older models I have. More specifically, those older models all output `A_output_acc`, `val_A_output_acc`, `A_output_loss` and `val_A_output_loss`. So I want the corresponding metric readouts for the A output of this new model to have those same names, so that they can be viewed/compared on the same TensorBoard graphs.
  2. Make the accuracy/loss functions easily configurable, so that I can swap in a different loss/accuracy for each output on a whim without hard-coding it.

I have a Modeler class that builds and compiles the network. The relevant code follows.

class Modeler(BaseModeler):
  def __init__(self, loss=None, accuracy=None, ...):
    """
    Returns compiled keras model.  

    """
    self.loss = loss
    self.accuracy = accuracy
    model = self.build()

    ...

    model.compile(
        loss={ # we are explicit here and name the outputs even though in this case it's not necessary
            "A_output": self.A_output_loss(),#loss,
            "B_output": self.B_output_loss()#loss
        },
        optimizer=optimus,
        metrics= { # we need to tie each output to a specific list of metrics
            "A_output": [self.A_output_acc()],
                            # self.A_output_loss()], # redundant since it's already reported via `loss` param,
                                                        # ends up showing up as `A_output_loss_1` since keras
                                                        # already reports `A_output_loss` via loss param
            "B_output": [self.B_output_acc()]
                            # self.B_output_loss()]  # redundant since it's already reported via `loss` param
                                                        # ends up showing up as `B_output_loss_1` since keras
                                                        # already reports `B_output_loss` via loss param
        })

    self._model = model


  def A_output_acc(self):
    """
    Allows us to output custom train/test accuracy metrics under the desired names, e.g. 'A_output_acc' and
    'val_A_output_acc' respectively, so that they may be plotted on the same tensorboard graph as the accuracies
    from other models that share the same outputs.

    :return:    accuracy metric
    """

    acc = None
    if self.accuracy == TypedAccuracies.BINARY:
        def acc(y_true, y_pred):
            return self.binary_accuracy(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.DICE:
        def acc(y_true, y_pred):
            return self.dice_coef(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.JACARD:
        def acc(y_true, y_pred):
            return self.jacard_coef(y_true, y_pred)
    else:
        logger.error('undefined accuracy specified: {}'.format(self.accuracy))

    return acc


  def A_output_loss(self):
    """
    Allows us to output custom train/test loss metrics under the desired names, e.g. 'A_output_loss' and
    'val_A_output_loss' respectively, so that they may be plotted on the same tensorboard graph as the losses
    from other models that share the same outputs.

    :return:    loss metric
    """

    loss = None
    if self.loss == TypedLosses.BINARY_CROSSENTROPY:
        def loss(y_true, y_pred):
            return self.binary_crossentropy(y_true, y_pred)
    elif self.loss == TypedLosses.DICE:
        def loss(y_true, y_pred):
            return self.dice_coef_loss(y_true, y_pred)
    elif self.loss == TypedLosses.JACARD:
        def loss(y_true, y_pred):
            return self.jacard_coef_loss(y_true, y_pred)
    else:
        logger.error('undefined loss specified: {}'.format(self.loss))

    return loss


  def B_output_acc(self):
    """
    Allows us to output custom train/test accuracy metrics under the desired names, e.g. 'B_output_acc' and
    'val_B_output_acc' respectively, so that they may be plotted on the same tensorboard graph as the accuracies
    from other models that share the same outputs.

    :return:    accuracy metric
    """

    acc = None
    if self.accuracy == TypedAccuracies.BINARY:
        def acc(y_true, y_pred):
            return self.binary_accuracy(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.DICE:
        def acc(y_true, y_pred):
            return self.dice_coef(y_true, y_pred)
    elif self.accuracy == TypedAccuracies.JACARD:
        def acc(y_true, y_pred):
            return self.jacard_coef(y_true, y_pred)
    else:
        logger.error('undefined accuracy specified: {}'.format(self.accuracy))

    return acc


  def B_output_loss(self):
    """
    Allows us to output custom train/test loss metrics under the desired names, e.g. 'B_output_loss' and
    'val_B_output_loss' respectively, so that they may be plotted on the same tensorboard graph as the losses
    from other models that share the same outputs.

    :return:    loss metric
    """

    loss = None
    if self.loss == TypedLosses.BINARY_CROSSENTROPY:
        def loss(y_true, y_pred):
            return self.binary_crossentropy(y_true, y_pred)
    elif self.loss == TypedLosses.DICE:
        def loss(y_true, y_pred):
            return self.dice_coef_loss(y_true, y_pred)
    elif self.loss == TypedLosses.JACARD:
        def loss(y_true, y_pred):
            return self.jacard_coef_loss(y_true, y_pred)
    else:
        logger.error('undefined loss specified: {}'.format(self.loss))

    return loss


  def load_model(self, model_path=None):
    """
    Returns built model from model_path assuming using the default architecture.

    :param model_path:   str, path to model file
    :return:             defined model with weights loaded
    """

    custom_objects = {'A_output_acc': self.A_output_acc(),
                      'A_output_loss': self.A_output_loss(),
                      'B_output_acc': self.B_output_acc(),
                      'B_output_loss': self.B_output_loss()}
    self.model = load_model(filepath=model_path, custom_objects=custom_objects)
    return self


  def build(self, stuff...):
    """
    Returns model architecture.  Instead of just one task, it performs two: A and B.

    :return:            model
    """

    ...

    A_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="A_output")(up_conv_224)

    B_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="B_output")(up_conv_224)

    model = Model(inputs=[input], outputs=[A_conv_final, B_conv_final], name="my_model")
    return model
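As a side note on objective 2, the four near-identical if/elif chains above could collapse into one factory driven by a lookup table. This is only a sketch under assumed names (`binary_accuracy` here is a stand-in for the real metric, and the table keys stand in for the `TypedAccuracies` members):

```python
# Hypothetical refactor sketch: one factory driven by a lookup table replaces
# the four near-identical if/elif chains. The metric below is a stand-in.

def binary_accuracy(y_true, y_pred):
    # stand-in for the real Keras metric implementation
    return float(round(y_pred) == y_true)

ACCURACY_TABLE = {
    "BINARY": binary_accuracy,
    # "DICE": dice_coef, "JACARD": jacard_coef, ...
}

def make_acc(kind, table=ACCURACY_TABLE):
    """Return a closure named 'acc' so Keras still logs '<output>_acc'."""
    try:
        base = table[kind]
    except KeyError:
        raise ValueError("undefined accuracy specified: {}".format(kind))

    def acc(y_true, y_pred):
        return base(y_true, y_pred)

    return acc
```

With this pattern, `A_output_acc()` and `B_output_acc()` would reduce to `return make_acc(self.accuracy)`, and swapping in a new metric becomes a one-line table change; the same pattern would apply to the losses.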

Training works fine. However, when I later load the model for inference using the `load_model()` function above, Keras complains that it doesn't know the custom metrics I supplied:

    ValueError: Unknown loss function:loss

What seems to be happening is that Keras appends the name of the function created inside each of the custom metric functions above (the `def loss(...)` and `def acc(...)`) to the dictionary key given in the `metrics` argument of the `model.compile()` call. So, for example, the key is `A_output`, and we called the custom accuracy function `A_output_acc()` for it, which returned a function named `acc`. The result is `A_output` + `acc` = `A_output_acc`. This means I can't give these returned `acc`/`loss` functions any other name, because that would mess up the reporting/graphs. All well and good, but I can't figure out how to write the load function with a correctly defined `custom_objects` argument (or how to define/name my custom metric functions) so that Keras knows which custom accuracy/loss functions to load for each output head.
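The concatenation described above can be illustrated in plain Python, no Keras required. The factory returns a closure whose `__name__` is `acc`, and Keras builds the reported metric name roughly as `"<output layer name>_<function __name__>"` (the names here mirror the question's code and are only illustrative):

```python
# Illustration of the naming behavior: the factory returns a closure named
# 'acc'; Keras combines the output-layer name with the function's __name__.

def A_output_acc():
    def acc(y_true, y_pred):
        return 0.0  # placeholder body for illustration
    return acc

metric_fn = A_output_acc()
log_key = "A_output" + "_" + metric_fn.__name__  # what shows up in the logs
```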

More to the point, it seems that `load_model()` wants a `custom_objects` dictionary of the following form (which doesn't work, for the obvious reason of duplicate keys):

custom_objects = {'acc': self.A_output_acc(),
                  'loss': self.A_output_loss(),
                  'acc': self.B_output_acc(),
                  'loss': self.B_output_loss()}

Instead of:

custom_objects = {'A_output_acc': self.A_output_acc(),
                  'A_output_loss': self.A_output_loss(),
                  'B_output_acc': self.B_output_acc(),
                  'B_output_loss': self.B_output_loss()}

Any insights or workarounds?

Thanks!

EDIT:

I've confirmed that the reasoning above about the concatenation of keys/function names is correct for the `metrics` argument of the `model.compile()` call. However, for the `loss` argument of `model.compile()`, Keras just concatenates the key with the word `loss` for reporting, yet it expects the custom loss function to appear in the `custom_objects` argument of `load_model()` under the function's own name, i.e. `loss` in this case...

1 Answer:

Answer 0 (score: 0)

Remove the `()` at the end of the losses and metrics and it should work. It would look like this:

loss={ 
       "A_output": self.A_output_loss,
       "B_output": self.B_output_loss
      }
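One possible workaround, a sketch rather than part of the answer above: since Keras records a custom loss under the returned function's `__name__` and looks that name up in `custom_objects` at load time, each factory could rename its closure before returning it. For the losses this should leave the TensorBoard names alone, because Keras derives the reported `<output>_loss` name from the layer, not from the function's `__name__`:

```python
# Workaround sketch (an assumption, not the accepted answer): rename the
# closure so the saved model records it under 'A_output_loss', which then
# matches the key already used in custom_objects in load_model().

def A_output_loss():
    def loss(y_true, y_pred):
        return 0.0  # placeholder; the real body dispatches on self.loss
    loss.__name__ = "A_output_loss"  # now matches the custom_objects key
    return loss

fn = A_output_loss()
```

Note this only helps the losses; for the metrics, `__name__` also drives the displayed name (per the concatenation described in the question), so renaming `acc` the same way would break objective 1.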