How can I use the model's inputs in the loss function?

Date: 2020-07-02 07:24:41

Tags: python tensorflow keras tensorflow2.0

I am trying to use a custom loss function that depends on some arguments which the model does not have.

The model has two inputs (mel_specs and pred_inp) and expects a labels tensor for training:

def to_keras_example(example):
    # Preparing inputs
    return (mel_specs, pred_inp), labels

# A tf.data.Dataset passed to model.fit(train_data, ...)
train_data = load_dataset(fp, 'train').map(to_keras_example).repeat()

Inside the loss function I need to compute the lengths of mel_specs and pred_inp. This means my loss would look like this:

def rnnt_loss_wrapper(y_true, y_pred, mel_specs_inputs_):
    input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
    label_lengths = get_padded_length(y_true)
    return rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=input_lengths,
        label_lengths=label_lengths
    )
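
get_padded_length is not shown in the question. A minimal sketch of such a helper, assuming that zero marks the padded positions (both the helper's behaviour and the padding convention are assumptions):

import tensorflow as tf

def get_padded_length(t):
    # Hypothetical helper: count the non-zero entries along the time axis
    # for each batch element, assuming zero is the padding value.
    return tf.math.count_nonzero(t, axis=1, dtype=tf.int32)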

However, no matter which approach I choose, I run into problems.


Option 1) Setting the loss function in model.compile()

If I wrap the loss function such that it returns a function which expects y_true and y_pred, like this:

def rnnt_loss_wrapper(mel_specs_inputs_):
    def inner_(y_true, y_pred):
        input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
        label_lengths = get_padded_length(y_true)
        return rnnt_loss(
            acts=y_pred,
            labels=tf.cast(y_true, dtype=tf.int32),
            input_lengths=input_lengths,
            label_lengths=label_lengths
        )
    return inner_

model = create_model(hparams)
model.compile(
    optimizer=optimizer,
    loss=rnnt_loss_wrapper(model.inputs[0])
)

After calling model.fit() I get the following _SymbolicException:

tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [...]

Option 2) Using model.add_loss()

The documentation of add_loss() states:

[Adds a..] loss tensor(s), potentially dependent on layer inputs.
..
Arguments:
  losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses
    may also be zero-argument callables which create a loss tensor.
  inputs: Ignored when executing eagerly. If anything ...

So I tried to do the following:

def rnnt_loss_wrapper(y_true, y_pred, mel_specs_inputs_):
    input_lengths = get_padded_length(mel_specs_inputs_[:, :, 0])
    label_lengths = get_padded_length(y_true)
    return rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=input_lengths,
        label_lengths=label_lengths
    )

model = create_model(hparams)
model.add_loss(
    rnnt_loss_wrapper(
        y_true=model.inputs[2],
        y_pred=model.outputs[0],
        mel_specs_inputs_=model.inputs[0],
    ),
    inputs=True
)
model.compile(
    optimizer=optimizer
)

However, calling model.fit() throws a ValueError:

ValueError: No gradients provided for any variable: [...]

Is either of the above options supposed to work at all?

2 Answers:

Answer 0 (score: 0)

I have used the add_loss method as follows:

import tensorflow as tf
from tensorflow.keras import layers, Model

def custom_loss(y_true, y_pred, input_):
    # Custom loss function that also depends on one of the model inputs
    y_estim = input_[..., 0] * y_pred
    shape = tf.cast(tf.shape(y_true)[1], dtype='float32')
    return tf.reduce_mean(1 / shape * tf.reduce_sum(tf.pow(y_true - y_estim, 2), axis=1))


mix_input = layers.Input(shape=(301, 257, 4))  # input 1
ref_input = layers.Input(shape=(301, 257, 1))  # input 2
target = layers.Input(shape=(301, 257))        # output target

# smss is the output tensor of the network built on top of the two inputs
# (its definition is not shown here)
smss_model = Model(inputs=[mix_input, ref_input], outputs=smss)  # my model that accepts the two inputs

model = Model(inputs=[mix_input, ref_input, target], outputs=smss)  # used only for training, with the target as an additional input
model.add_loss(custom_loss(target, smss, mix_input))  # attach the custom loss via add_loss
model.summary()

model.compile(loss=None, optimizer='sgd')
model.fit([mix, ref, y], epochs=1, batch_size=1, verbose=1)  # mix, ref, y are the training arrays

Even though I have used this approach and it works, I am still looking for another way that does not involve creating this extra training model.
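
One possible alternative, sketched here under the assumption of TensorFlow 2.2+ and a dataset yielding ((mix, ref), target) batches, is to override train_step in a Model subclass so the loss can see the raw inputs without a separate training model (the class name and wiring are made up for illustration):

import tensorflow as tf

class SeparationTrainer(tf.keras.Model):
    # Hypothetical wrapper around the two-input smss_model defined above.
    def __init__(self, inner_model, **kwargs):
        super().__init__(**kwargs)
        self.inner_model = inner_model

    def call(self, inputs, training=False):
        return self.inner_model(inputs, training=training)

    def train_step(self, data):
        # Expects batches of the form ((mix, ref), target)
        (mix, ref), target = data
        with tf.GradientTape() as tape:
            pred = self([mix, ref], training=True)
            loss = custom_loss(target, pred, mix)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}

trainer = SeparationTrainer(smss_model)
trainer.compile(optimizer='sgd')
trainer.fit(dataset, epochs=1)  # dataset yielding ((mix, ref), target) batches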

Answer 1 (score: -2)

Would using a lambda function work? (https://www.w3schools.com/python/python_lambda.asp)

loss = lambda x1, x2: rnnt_loss(x1, x2, acts, labels, input_lengths,
                                label_lengths, blank_label=0)

This way, the loss function would be a function that accepts the parameters x1 and x2, while rnnt_loss would also have access to acts, labels, input_lengths, and label_lengths.
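
For this to match what Keras actually passes to a loss, x1 and x2 would have to play the roles of y_true and y_pred, with the remaining values captured from the enclosing scope. A minimal sketch of that closure form, assuming input_lengths and label_lengths are fixed tensors known before compilation (which is not the case in the question, where they depend on the batch):

import tensorflow as tf

def make_rnnt_loss(input_lengths, label_lengths):
    # Closure capturing the extra arguments; Keras only supplies (y_true, y_pred).
    return lambda y_true, y_pred: rnnt_loss(
        acts=y_pred,
        labels=tf.cast(y_true, dtype=tf.int32),
        input_lengths=input_lengths,
        label_lengths=label_lengths,
    )

model.compile(optimizer=optimizer, loss=make_rnnt_loss(input_lengths, label_lengths))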