Custom loss function - Keras

Date: 2019-12-02 18:31:32

Tags: python tensorflow keras autoencoder

I am trying to implement a hybrid model in which one part is a variational autoencoder and the other part takes the latent space and makes predictions about properties of the input. I would like to train the two parts jointly. Here are my models:

# build encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = Dense(intermediate_dim1, activation='relu')(inputs)
x1 = Dense(intermediate_dim2, activation='relu')(x)
x2 = Dense(intermediate_dim3, activation='relu')(x1)
z_mean = Dense(latent_dim, name='z_mean')(x2)
z_log_var = Dense(latent_dim, name='z_log_var')(x2)

# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])

# instantiate encoder model
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
# build decoder model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling1')
x1 = Dense(intermediate_dim3, activation='relu')(latent_inputs)
x2 = Dense(intermediate_dim2, activation='relu')(x1)
x3 = Dense(intermediate_dim1, activation='relu')(x2)
outputs = Dense(2*original_dim+1, activation='sigmoid')(x3)

# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
#build property predictor model
latent_inputs = Input(shape=(latent_dim,), name='z_sampling2')
x1 = Dense(64, activation='relu')(latent_inputs)
x2 = Dense(128, activation='relu')(x1)
outputs = Dense(property_dim, activation='sigmoid')(x2)

predModel = Model(latent_inputs, outputs, name='predictor')
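The `sampling` helper passed to the `Lambda` layer above is not shown in the post. It implements the reparameterization trick, z = mu + sigma * eps with eps ~ N(0, I). A minimal numpy sketch of the same computation (the function name and arguments here are illustrative, not from the original code):

```python
import numpy as np

def sampling_np(z_mean, z_log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Mirrors what the Keras `sampling` Lambda would compute symbolically.
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(z_mean.shape)
    # sigma = exp(0.5 * log(sigma^2)), so very negative z_log_var -> z ~= z_mean
    return z_mean + np.exp(0.5 * z_log_var) * eps
```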

This is the full model, which takes the encoder's input and produces both the reconstruction and the predictor output:

#build full model
vaeOutputs = decoder(encoder(inputs)[2])
predOutputs = predModel(encoder(inputs)[0])
vaePred = Model(inputs, [vaeOutputs,predOutputs], name='vae_fullimage')
vaePred.summary()

Now I am having trouble defining the loss function and training the model.

Here is my attempt:

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    help_ = "Load h5 model trained weights"
    parser.add_argument("-w", "--weights", help=help_)
    help_ = "Use mse loss instead of binary cross entropy (default)"
    parser.add_argument("-m",
                        "--mse",
                        help=help_, action='store_true')
    #args = parser.parse_args()
    args = parser.parse_known_args()[0]
    models = (encoder, decoder)
    def custom_loss(y_true, y_pred):
        kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
        kl_loss = K.sum(kl_loss, axis=-1)
        kl_loss *= -0.5

        reconstruction_loss = binary_crossentropy(y_true[0], y_pred[0])
        reconstruction_loss *= original_dim

        prediction_loss = K.square(y_pred[1] - y_true[1])

        total_loss = K.mean(prediction_loss, axis=-1) + K.mean(reconstruction_loss) + K.mean(kl_loss)
        return total_loss

    optimizer = keras.optimizers.Adam(learning_rate=0.001)
    vaePred.compile(optimizer, custom_loss)
    vaePred.summary()

    if args.weights:
        vaePred.load_weights(args.weights)
    else:
        # train the autoencoder
        history = vaePred.fit(x=x_train, y=[x_train, property_train],
                epochs=epochs,
                callbacks=callbacks,
                batch_size=batch_size,
                validation_data=(x_test, [x_test,property_test]))
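One pitfall worth noting: Keras calls a loss function separately for each model output, so inside custom_loss the expression y_true[0] slices along the batch axis rather than selecting the reconstruction target, and the Keras-idiomatic approach would be to pass one loss per output to compile (e.g. loss=[recon_kl_loss, 'mse']). The arithmetic the three terms are meant to compute can at least be checked numerically; here is a rough numpy sketch (function and variable names are illustrative):

```python
import numpy as np

def vae_total_loss(x_true, x_recon, p_true, p_pred, z_mean, z_log_var, original_dim):
    """Sum of the three terms the custom loss tries to combine:
    scaled binary cross-entropy reconstruction, KL divergence, prediction MSE."""
    # KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions
    kl = -0.5 * np.sum(1.0 + z_log_var - z_mean**2 - np.exp(z_log_var), axis=-1)
    # binary cross-entropy averaged over features, then scaled by original_dim
    x_recon = np.clip(x_recon, 1e-7, 1.0 - 1e-7)
    bce = -np.mean(x_true * np.log(x_recon) + (1.0 - x_true) * np.log(1.0 - x_recon),
                   axis=-1)
    recon = original_dim * bce
    # mean squared error of the property prediction
    pred = np.mean((p_pred - p_true)**2, axis=-1)
    return np.mean(pred) + np.mean(recon) + np.mean(kl)
```

With z_mean = 0 and z_log_var = 0 the KL term vanishes, and with a perfect prediction the MSE term vanishes, so only the scaled reconstruction term remains.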

1 Answer:

Answer 0 (score: 0)

It appears that you are training an autoencoder (AE), a generative model that tries to predict its own input. For a perfect AE, the output should equal the input. Therefore, you should change y_true to be the input.

Change:

prediction_loss = mse(y_true, predOutputs)

to:

prediction_loss = mse(inputs, predOutputs)

Note: I have not run or tested any of this code. It appears to be based on the Keras example code.