Strange error when training a VAE with Keras

Time: 2020-04-30 12:09:08

Tags: tensorflow machine-learning keras autoencoder

I am trying to train a VAE on face images, and I get an error as soon as I call the model.fit() method. I have not been able to find a solution to this problem.

The error I get is:

ValueError: Cannot create an execution function which is comprised of elements from multiple graphs.

Encoder:

    def build_encoder(self):
        global K
        K.clear_session()

        conv_filters = [32, 64, 64, 64]
        conv_kernel_size = [3, 3, 3, 3]
        conv_strides = [2, 2, 2, 2]

        n_layers = len(conv_filters)

        x = self.encoder_input

        for i in range(n_layers):
            x = Conv2D(filters=conv_filters[i],
                       kernel_size=conv_kernel_size[i],
                       strides=conv_strides[i],
                       padding='same',
                       name='encoder_conv_' + str(i)
                       )(x)
            if self.use_batch_norm:
                x = BatchNormalization()(x)

            x = LeakyReLU()(x)

            if self.use_dropout:
                x = Dropout(rate=0.25)(x)

        self.shape_before_flattening = K.int_shape(x)[1:]

        x = Flatten()(x)

        self.mean_layer = Dense(self.encoder_output_dim, name='mu')(x)
        self.sd_layer = Dense(self.encoder_output_dim, name='log_var')(x)

        def sampling(args):
            mean_mu, log_var = args
            epsilon = K.random_normal(shape=K.shape(mean_mu), mean=0., stddev=1.)
            return mean_mu + K.exp(log_var / 2) * epsilon


        encoder_output = Lambda(sampling, name='encoder_output')([self.mean_layer, self.sd_layer])

        return Model(self.encoder_input, encoder_output, name="VAE_Encoder")

Decoder:

    def build_decoder(self):
        conv_filters = [64, 64, 32, 3]
        conv_kernel_size = [3, 3, 3, 3]
        conv_strides = [2, 2, 2, 2]

        n_layers = len(conv_filters)

        decoder_input = self.decoder_input

        x = Dense(np.prod(self.shape_before_flattening))(decoder_input)
        x = Reshape(self.shape_before_flattening)(x)

        for i in range(n_layers):
            x = Conv2DTranspose(filters=conv_filters[i],
                                kernel_size=conv_kernel_size[i],
                                strides=conv_strides[i],
                                padding='same',
                                name='decoder_conv_' + str(i)
                                )(x)
            if i < n_layers - 1:
                x = LeakyReLU()(x)
            else:
                x = Activation('sigmoid')(x)

        self.decoder_output = x

        return Model(decoder_input, self.decoder_output, name="VAE_Decoder")

Combined model:

    def build_autoencoder(self):
        self.encoder = self.build_encoder()
        self.decoder = self.build_decoder()


        self.autoencoder = Model(self.encoder_input, self.decoder(self.encoder(self.encoder_input)),
                                 name="Variational_Auto_Encoder")

        self.autoencoder.compile(optimizer=self.adam_optimizer, loss=self.total_loss,
                                 metrics=[self.r_loss, self.kl_loss],
                                 experimental_run_tf_function=False)
        self.autoencoder.summary()

        if os.path.exists(self.model_name + ".h5"):
            self.autoencoder.load_weights(self.model_name + ".h5")  # Loading pre-trained weights

        return self.autoencoder
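
The custom losses passed to compile() (total_loss, r_loss, kl_loss) are not shown above; they follow the usual VAE recipe of a weighted reconstruction error plus the KL divergence computed from the mu and log_var layers, roughly along the lines of this sketch (the 1000x reconstruction weight is just a placeholder, not the actual value used):

    def r_loss(self, y_true, y_pred):
        # per-image mean squared error over height, width and channels
        return 1000 * K.mean(K.square(y_true - y_pred), axis=[1, 2, 3])

    def kl_loss(self, y_true, y_pred):
        # KL divergence between N(mu, exp(log_var)) and the standard normal prior
        return -0.5 * K.sum(
            1 + self.sd_layer - K.square(self.mean_layer) - K.exp(self.sd_layer),
            axis=1)

    def total_loss(self, y_true, y_pred):
        return self.r_loss(y_true, y_pred) + self.kl_loss(y_true, y_pred)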

Training:

    def train(self):

        filenames = np.array(glob.glob(os.path.join(self.data_dir, '*/*.jpg')))
        NUM_IMAGES = len(filenames)
        print("Total number of images : " + str(NUM_IMAGES))

        data_flow = ImageDataGenerator(rescale=1. / 255).flow_from_directory(self.data_dir,
                                                                             target_size=self.input_shape[:2],
                                                                             batch_size=self.batch_size,
                                                                             shuffle=True,
                                                                             class_mode='input',
                                                                             subset='training'
                                                                             )

        self.autoencoder.fit_generator(data_flow,
                                       shuffle=True,
                                       epochs=self.epochs,
                                       initial_epoch=0,
                                       steps_per_epoch=NUM_IMAGES // self.batch_size
                                       # callbacks=[self.checkpoint_callback]
                                       )

        self.autoencoder.save_weights(self.save_dir + self.model_name + ".h5")

I know this may not be the best way to ask a question here, but I really don't know how to fix it. I hope you can tell me what I am doing wrong. ☺

1 Answer:

Answer 0 (score: 0)

This line is what causes the problem:

K.clear_session()

Removing it fixes the error.
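
The most likely reason: self.encoder_input (and presumably self.decoder_input and the optimizer) are created before build_encoder() runs, so calling K.clear_session() inside it wipes the graph those tensors already belong to. Everything built afterwards lands on a fresh graph, and Keras then cannot assemble a single execution function from tensors that live on two different graphs, which is exactly what the error message says. If you do want to clear stale state between runs, do it once before anything is constructed. A minimal sketch, assuming a constructor roughly like this (the input shape and latent dimension are placeholders):

    from tensorflow.keras import backend as K
    from tensorflow.keras.layers import Input

    class VAE:
        def __init__(self, input_shape=(128, 128, 3), encoder_output_dim=200):
            # Clear stale graphs/sessions once, BEFORE any tensor or layer exists,
            # so the encoder, decoder and combined model all share one graph.
            K.clear_session()

            self.encoder_input = Input(shape=input_shape, name='encoder_input')
            self.decoder_input = Input(shape=(encoder_output_dim,), name='decoder_input')
            self.encoder_output_dim = encoder_output_dim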