Change the loss function dynamically during Keras training, without recompiling the optimizer

Time: 2019-05-04 03:22:54

Tags: python keras callback epoch loss

Is it possible to set model.loss in a callback without re-compiling model.compile(...) afterwards (since then the optimizer states are reset), and just recompiling model.loss, for example:

class NewCallback(Callback):

    def __init__(self):
        super(NewCallback, self).__init__()

    def on_epoch_end(self, epoch, logs={}):
        self.model.loss = [loss_wrapper(t_change, current_epoch=epoch)]
        self.model.compile_only_loss()  # is there a version or hack of
                                        # model.compile(...) like this?

To expand more on the previous examples from Stack Overflow:

To achieve a loss function which depends on the epoch number, like (as in this stackoverflow question):

def loss_wrapper(t_change, current_epoch):
    def custom_loss(y_true, y_pred):
        c_epoch = K.get_value(current_epoch)
        if c_epoch < t_change:
            # compute loss_1
        else:
            # compute loss_2
    return custom_loss

where "current_epoch" is a Keras variable updated with a callback:

current_epoch = K.variable(0.)
model.compile(optimizer=opt, loss=loss_wrapper(5, current_epoch),
              metrics=...)

class NewCallback(Callback):
    def __init__(self, current_epoch):
        super(NewCallback, self).__init__()
        self.current_epoch = current_epoch

    def on_epoch_end(self, epoch, logs={}):
        K.set_value(self.current_epoch, epoch)
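
The pieces above would be wired together at fit time along these lines (a sketch; x_train and y_train are hypothetical data):

model.fit(x_train, y_train, epochs=10,
          callbacks=[NewCallback(current_epoch)])

Note, however, that custom_loss is traced only once, at compile time, so the plain Python if sees whatever value current_epoch holds at that moment; later K.set_value calls do not re-evaluate the branch. This is presumably why the switch has to be expressed in backend ops.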

Essentially, one can turn the python code into compositions of backend functions for the loss to work as follows:

def loss_wrapper(t_change, current_epoch):
    def custom_loss(y_true, y_pred):
        # compute loss_1 and loss_2
        bool_case_1 = K.less(current_epoch, t_change)
        num_case_1 = K.cast(bool_case_1, "float32")
        loss = num_case_1 * loss_1 + (1 - num_case_1) * loss_2
        return loss
    return custom_loss
This works.
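
For reference, a self-contained sketch of this working variant, with mean squared error and mean absolute error as hypothetical stand-ins for loss_1 and loss_2, and a toy model and data that are purely illustrative:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback
from keras import backend as K

def loss_wrapper(t_change, current_epoch):
    def custom_loss(y_true, y_pred):
        # Hypothetical stand-ins: MSE before t_change, MAE afterwards.
        loss_1 = K.mean(K.square(y_true - y_pred), axis=-1)
        loss_2 = K.mean(K.abs(y_true - y_pred), axis=-1)
        # Both losses remain in the graph; the backend variable selects
        # between them at run time, so no recompilation is needed.
        bool_case_1 = K.less(current_epoch, t_change)
        num_case_1 = K.cast(bool_case_1, "float32")
        return num_case_1 * loss_1 + (1 - num_case_1) * loss_2
    return custom_loss

class NewCallback(Callback):
    def __init__(self, current_epoch):
        super(NewCallback, self).__init__()
        self.current_epoch = current_epoch

    def on_epoch_end(self, epoch, logs=None):
        # Push the epoch number into the backend variable read by the loss.
        K.set_value(self.current_epoch, epoch)

# Toy data and model, purely for illustration.
x = np.random.rand(128, 4)
y = np.random.rand(128, 1)
model = Sequential([Dense(8, activation="relu", input_shape=(4,)),
                    Dense(1)])

current_epoch = K.variable(0.)
model.compile(optimizer="adam", loss=loss_wrapper(5, current_epoch))
model.fit(x, y, epochs=10, verbose=0,
          callbacks=[NewCallback(current_epoch)])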

I am not happy with these hacks, and wonder: is it possible to set model.loss in a callback without re-compiling model.compile(...) afterwards (since then the optimizer states are reset), and just recompiling model.loss?

0 Answers:

There are no answers