Confused by Keras autoencoder behavior?

Asked: 2017-09-26 09:39:41

Tags: keras, autoencoder

I trained an autoencoder in Keras and saved the model as two separate models: the encoder and the decoder.

I loaded both successfully and then recreated the full autoencoder with:

ae_v = decoder(encoder(ae_in))

autoencoder = Model(inputs=ae_in, outputs=ae_v)
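
For context, calling a Model instance on a tensor re-applies all of its layers to that tensor, so the composite shares its weights with the two saved halves. A minimal sketch of the same pattern on toy models (all names here are hypothetical):

from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(8,))
enc = Model(inp, Dense(2)(inp))    # toy stand-in for the saved encoder
lat = Input(shape=(2,))
dec = Model(lat, Dense(8)(lat))    # toy stand-in for the saved decoder

x = Input(shape=(8,))
composite = Model(x, dec(enc(x)))  # re-applies both models' layers to x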

Now, when I call:

autoencoder.predict(sample)

the model runs fine.

However, I was under the impression that this is equivalent to:

a = encoder.predict(sample)

b = decoder.predict(a)

But when I do this, the model produces completely different results. That is, for exactly the same sample, the final result (b) differs from autoencoder.predict(sample). As far as I understand it, shouldn't these be identical?
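
To make "completely different" concrete, this is how the two paths can be compared numerically (a minimal sketch; sample is assumed to be a batch matching the encoder's input shape):

import numpy as np

# Path 1: one continuous forward pass through the recombined model.
full = autoencoder.predict(sample)

# Path 2: two separate forward passes through the saved halves.
a = encoder.predict(sample)
b = decoder.predict(a)

# If the two paths were equivalent, this would be True up to float noise.
print(np.allclose(full, b, atol=1e-5))
print(np.max(np.abs(full - b)))  # size of the actual discrepancy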

The whole point of the second version is that I deliberately take the output from the encoder and feed it to the decoder myself, rather than doing a single continuous forward pass. I want to be able to do this so that I can make small modifications to the encoder's output before passing it on to the decoder, as sketched below.
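
For example, the kind of latent edit I have in mind looks roughly like this (a sketch; the perturbation itself is just a placeholder):

z = encoder.predict(sample)       # shape: (batch, CHOKE_DIM_2, CHOKE_DIM_1)
z_mod = z.copy()
z_mod[:, 0, :] += 0.1             # placeholder tweak to the first latent row
reconstruction = decoder.predict(z_mod)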

Am I missing something obvious?

Edit: showing how the network is built

from keras.layers import (Input, Conv2D, BatchNormalization, Dropout, Flatten,
                          Dense, Concatenate, Reshape, Lambda, UpSampling2D, Add)
from keras.models import Model

x_2 = Input(shape=(HEIGHT, WIDTH, CHANNELS))
x_3 = Input(shape=(CHOKE_DIM_2, CHOKE_DIM_1))

# Encoder: three conv blocks (conv -> batch norm -> dropout), then flatten.
en_1 = Conv2D(32, (4, 4), strides=(2, 2), activation='relu', kernel_initializer=INIT, padding='same')(x_2)
en_bn_1 = BatchNormalization(momentum=momentum)(en_1)
en_drop_1 = Dropout(dropout)(en_bn_1)

en_2 = Conv2D(64, (2, 2), strides=(2, 2), activation='relu', kernel_initializer=INIT, padding='same')(en_drop_1)
en_bn_2 = BatchNormalization(momentum=momentum)(en_2)
en_drop_2 = Dropout(dropout)(en_bn_2)

en_3 = Conv2D(64, (2, 2), activation='relu', kernel_initializer=INIT, padding='same')(en_drop_2)
en_bn_3 = BatchNormalization(momentum=momentum)(en_3)
en_drop_3 = Dropout(dropout)(en_bn_3)
en_flat = Flatten()(en_drop_3)


# Bottleneck: one Dense head per latent row, concatenated and reshaped
# to (CHOKE_DIM_2, CHOKE_DIM_1).
chokepoint, d_mid, d_out = [], [], []
for i in range(CHOKE_DIM_2):
    a = Dense(CHOKE_DIM_1, kernel_initializer=INIT)(en_flat)
    chokepoint.append(a)
en_out_1 = Concatenate(axis=1)(chokepoint)
en_out_2 = BatchNormalization(momentum=momentum)(en_out_1)
en_out = Reshape((CHOKE_DIM_2, CHOKE_DIM_1))(en_out_2)

# Decoder: one Dense stack per latent row; the per-row reconstructions are summed.
for i in range(CHOKE_DIM_2):
    # i=i binds the current loop value: a bare closure would capture i by
    # reference, so every Lambda would slice the same (final) row if the
    # decoder's layers were ever re-applied to a new tensor.
    x_tmp = Lambda(lambda x, i=i: x[:, i, 0:32])(x_3)
    x_tmp.trainable = False  # note: x_tmp is a tensor, so this flag has no effect
    de_1 = Dense(CHOKE_DIM_1 * 2, activation='relu', kernel_initializer=INIT)(x_tmp)
    de_bn_1 = BatchNormalization(momentum=momentum)(de_1)
    de_drop_1 = Dropout(dropout)(de_bn_1)

    de_2 = Dense(CHOKE_DIM_1 * 4, activation='relu', kernel_initializer=INIT)(de_drop_1)
    de_bn_2 = BatchNormalization(momentum=momentum)(de_2)
    de_drop_2 = Dropout(dropout)(de_bn_2)

    de_3 = Dense(CHOKE_DIM_1 * 8, activation='relu', kernel_initializer=INIT)(de_drop_2)
    de_bn_3 = BatchNormalization(momentum=momentum)(de_3)
    de_drop_3 = Dropout(dropout)(de_bn_3)

    de_4 = Dense(30 * 40 * 3, activation='relu', kernel_initializer=INIT)(de_drop_3)
    r = Reshape((30, 40, 3))(de_4)
    de_out = UpSampling2D()(r)

    d_mid.extend([x_tmp, de_1, de_bn_1, de_drop_1, de_2, de_bn_2, de_drop_2,
                  de_3, de_bn_3, de_drop_3, de_4, r])
    d_out.append(de_out)
de_true_out = Add()(d_out)

encoder = Model(inputs=x_2, outputs=en_out)
decoder = Model(inputs=x_3, outputs=de_true_out)
encoder.load_weights(LOADPATH + "encoder_weights.h5")
decoder.load_weights(LOADPATH + "decoder_weights.h5")

ae_v = decoder(encoder(x_2))
autoencoder = Model(inputs=x_2, outputs=ae_v)
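
Aside: the i=i default argument in the Lambda above is load-bearing. Python closures capture loop variables by reference, so a bare lambda x: x[:, i, 0:32] would see the final value of i whenever the decoder's layers are re-applied to a new tensor, as in decoder(encoder(x_2)). A quick plain-Python illustration:

funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])        # [2, 2, 2] -- every closure sees the final i

funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])        # [0, 1, 2] -- each default binds its own i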

0 Answers:

No answers yet.