I wrote the following code and am trying to predict an image from a variational autoencoder model:
Encoder:
input_img = Input(shape=(28, 28, 3))
x = Conv2D(32, 3,
           padding='same',
           activation='relu')(input_img)
x = Conv2D(64, 3,
           padding='same',
           activation='relu',
           strides=(2, 2))(x)
x = Conv2D(64, 3,
           padding='same',
           activation='relu')(x)
x = Conv2D(64, 3,
           padding='same',
           activation='relu')(x)
x = Flatten()(x)
x = Dense(16, activation='relu')(x)
# Two outputs, latent mean and (log)variance
z_mu = Dense(latent_dim)(x)
z_log_sigma = Dense(latent_dim)(x)
encoder = Model(inputs=input_img, outputs=x)
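(For completeness: the sampled z and shape_before_flattening used by the decoder below are defined elsewhere in my code, roughly along these lines. This is only a sketch in the spirit of the standard Keras VAE example, not necessarily my exact code:)
# sketch: shape_before_flattening is captured with K.int_shape(x) just before
# the Flatten layer above, and z is drawn from z_mu / z_log_sigma with the
# usual reparameterization trick (Lambda comes from keras.layers)
def sampling(args):
    z_mu, z_log_sigma = args
    epsilon = K.random_normal(shape=(K.shape(z_mu)[0], latent_dim),
                              mean=0., stddev=1.)
    return z_mu + K.exp(z_log_sigma) * epsilon

z = Lambda(sampling)([z_mu, z_log_sigma])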
Decoder:
# decoder takes the latent distribution sample as input
decoder_input = Input(K.int_shape(z)[1:])
# Expand to 14 * 14 * 64 = 12544 units before reshaping
x = Dense(np.prod(shape_before_flattening[1:]),
          activation='relu')(decoder_input)
# reshape to the feature-map shape the encoder had before flattening
x = Reshape(shape_before_flattening[1:])(x)
# use Conv2DTranspose to reverse the conv layers
x = Conv2DTranspose(32, 3,
                    padding='same',
                    activation='relu',
                    strides=(2, 2))(x)
x = Conv2D(3, 3,
           padding='same',
           activation='sigmoid')(x)
# decoder model
decoder = Model(decoder_input, x)
# apply the decoder to the sample from the latent distribution
z_decoded = decoder(z)
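(The custom variational layer that appears in the full model summary further down adds the VAE loss (reconstruction + KL divergence) and makes the whole thing trainable; again only a sketch in the spirit of the standard Keras VAE example, not necessarily my exact code:)
# sketch of the loss layer and training model (assumes `import keras`
# plus the tensors defined above: input_img, z_mu, z_log_sigma, z_decoded)
class CustomVariationalLayer(keras.layers.Layer):
    def vae_loss(self, x, z_decoded):
        x = K.flatten(x)
        z_decoded = K.flatten(z_decoded)
        xent_loss = keras.metrics.binary_crossentropy(x, z_decoded)
        kl_loss = -5e-4 * K.mean(
            1 + z_log_sigma - K.square(z_mu) - K.exp(z_log_sigma), axis=-1)
        return K.mean(xent_loss + kl_loss)

    def call(self, inputs):
        x, z_decoded = inputs
        # register the VAE loss and pass the original input through
        self.add_loss(self.vae_loss(x, z_decoded), inputs=inputs)
        return x

y = CustomVariationalLayer()([input_img, z_decoded])
vae = Model(input_img, y)
vae.compile(optimizer='rmsprop', loss=None)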
The encoder looks like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_13 (InputLayer)        (None, 28, 28, 3)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 32)        896
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 64)        18496
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 14, 14, 64)        36928
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 14, 14, 64)        36928
_________________________________________________________________
flatten_1 (Flatten)          (None, 12544)             0
_________________________________________________________________
dense_10 (Dense)             (None, 16)                200720
=================================================================
Total params: 293,968
Trainable params: 293,968
Non-trainable params: 0
and, similarly, the decoder:
Layer (type)                 Output Shape              Param #
=================================================================
input_15 (InputLayer)        (None, 2)                 0
_________________________________________________________________
dense_14 (Dense)             (None, 12544)             37632
_________________________________________________________________
reshape_3 (Reshape)          (None, 14, 14, 64)        0
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 28, 28, 32)        18464
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 28, 28, 3)         867
=================================================================
Total params: 56,963
Trainable params: 56,963
Non-trainable params: 0
_________________________________________________________________
It runs fine. This is the complete model:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_13 (InputLayer)           (None, 28, 28, 3)    0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 28, 28, 32)   896         input_13[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 14, 14, 64)   18496       conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 14, 14, 64)   36928       conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 14, 14, 64)   36928       conv2d_3[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 12544)        0           conv2d_4[0][0]
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 16)           200720      flatten_1[0][0]
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 2)            34          dense_10[0][0]
__________________________________________________________________________________________________
dense_12 (Dense)                (None, 2)            34          dense_10[0][0]
__________________________________________________________________________________________________
lambda_5 (Lambda)               (None, 2)            0           dense_11[0][0]
                                                                 dense_12[0][0]
__________________________________________________________________________________________________
model_16 (Model)                (None, 28, 28, 3)    56963       lambda_5[0][0]
__________________________________________________________________________________________________
custom_variational_layer_3 (Cus [(None, 28, 28, 3),  0           input_13[0][0]
                                                                 model_16[1][0]
==================================================================================================
Total params: 350,999
Trainable params: 350,999
Non-trainable params: 0
__________________________________________________________________________________________________
The problem arises when I try to create an image based on an existing one. This displays an image from the training set:
rnd_file = np.random.choice(files)
file_id = os.path.basename(rnd_file)
img = imread(rnd_file)
plt.imshow(img)
plt.show()
Then I feed the image into the encoder to get its latent representation:
z = encoder.predict(img)
Now that I have the latent representation, I decode it back into an image:
decoder.predict(z)
This produces the following error:
ValueError: Error when checking input: expected input_15 to have shape (2,) but got array with shape (16,)
z looks like this:
[0. 0. 0. 0. 0. 0.03668813
0.10211123 0.08731555 0. 0.01327576 0. 0.
0. 0. 0.03561973 0.02009114]
The output of the encoder is (None, 16), which matches my z, and it works as a model. How can I fix this? Thanks in advance.
Answer 0 (score: 0)
Some code is missing to know exactly what you are trying to achieve, but there are at least two problems. First, your encoder is built with outputs = x, so it returns the 16-dimensional activation of the last Dense layer instead of the 2-dimensional latent code (z_mu, or the sampled z) that the decoder's input_15 expects. Second, predicting on a single image does not give you a batch of shape (None, 16) but a single vector of shape (16,); you need to add a dimension, for example:
z = encoder.predict(img[np.newaxis, :])
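Putting the two together, something along these lines should work (a sketch; I am assuming you want the latent mean z_mu as the code for an image, and that img is preprocessed the same way as your training data):
# Rebuild the encoder so that it outputs the 2-D latent mean instead of
# the 16-D intermediate Dense activation
encoder = Model(inputs=input_img, outputs=z_mu)

# Add a batch dimension before predicting on a single image
z = encoder.predict(img[np.newaxis, :])   # shape (1, 2)

# Decode back to an image; [0] drops the batch dimension again
reconstructed = decoder.predict(z)[0]
plt.imshow(reconstructed)
plt.show()
If you would rather decode a stochastic sample, build the encoder on the Lambda output z instead of z_mu.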
Answer 1 (score: 0)
The error message tells me that it is expecting a tuple of length 2.
For example, in this introductory article:
https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
they do this:
output_tokens, h, c = decoder_model.predict(
    [target_seq] + states_value)
Your code only passes target_seq but not states_value, which, as far as I can tell, is why you get that error.