keras-tensorflow CAE dimension mismatch

Date: 2017-09-20 11:00:50

Tags: tensorflow keras conv-neural-network autoencoder

I'm basically following this guide to build a convolutional autoencoder with a tensorflow backend. The main difference from the guide is that my data consists of 257x257 grayscale images. The following code:

import os
import sys

import numpy as np
from scipy import misc
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

TRAIN_FOLDER = 'data/OIRDS_gray/'
EPOCHS = 10
SHAPE = (257, 257, 1)
FILELIST = os.listdir(TRAIN_FOLDER)

def loadTrainData():
    train_data = []
    for fn in FILELIST:
        img = misc.imread(TRAIN_FOLDER + fn)
        img = np.reshape(img, (len(img[0, :]), len(img[:, 0]), SHAPE[2]))
        if img.shape != SHAPE:
            print "image shape mismatch!"
            print "Expected: "
            print SHAPE
            print "but got:"
            print img.shape
            sys.exit()
        train_data.append(img)
    train_data = np.array(train_data)
    train_data = train_data.astype('float32') / 255
    return np.array(train_data)

def createModel():
    input_img = Input(shape=SHAPE)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)

    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    return Model(input_img, decoded)

x_train = loadTrainData()
autoencoder = createModel()
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
print x_train.shape
autoencoder.summary()

# Run the network
autoencoder.fit(x_train, x_train, epochs=EPOCHS, batch_size=128, shuffle=True)

gives me the error:

ValueError: Error when checking target: expected conv2d_7 to have shape (None, 260, 260, 1) but got array with shape (859, 257, 257, 1)

As you can see, this is not the standard problem with theano/tensorflow backend dim ordering, but something else. I checked that my data is what it should be with:

print x_train.shape

(859, 257, 257, 1)

and I also ran:

autoencoder.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 257, 257, 1)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 257, 257, 16)      160
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 129, 129, 16)      0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 129, 129, 8)       1160
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 65, 65, 8)         0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 65, 65, 8)         584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 33, 33, 8)         0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 33, 33, 8)         584
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 66, 66, 8)         0
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 66, 66, 8)         584
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 132, 132, 8)       0
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 132, 132, 16)      1168
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 264, 264, 16)      0
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 264, 264, 1)       145
=================================================================
Total params: 4,385
Trainable params: 4,385
Non-trainable params: 0
_________________________________________________________________

Now I'm not sure where the problem is, but it does look like something goes wrong around conv2d_6 (Param # too high). I do know how a CAE works in principle, but I'm not yet familiar with the exact technical details, and I tried to fix this by messing with the deconvolution padding (using valid instead of same). The closest I got to matching dims was (None, 260, 260, 1), and I got there by blindly trying different combinations of padding on the deconvolution side, which isn't really a smart way to solve a problem...

At this point I'm at a loss; any help would be appreciated.

1 answer:

Answer 0 (score: 1)

Since the input and output data are the same, the final output shape should be the same as the input shape.

The last convolutional layer should have shape (None, 257, 257, 1).

The problem is happening because you have an odd number (257) as the image size.

When you apply MaxPooling, it should divide the size by 2, so it has to choose between rounding up or down (it's rounding up: see 129, which comes from 257/2 = 128.5).

Later, when you apply UpSampling, the model has no idea the current dimensions were rounded; it simply doubles the value. Happening in sequence, this adds 7 pixels to the final result.
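You can trace that rounding with a few lines of plain Python (a sketch of the arithmetic only, not of Keras itself):

```python
import math

# Trace the spatial size through the model above.
size = 257
for _ in range(3):               # three MaxPooling2D((2, 2), padding='same') layers
    size = math.ceil(size / 2)   # 'same' pooling rounds odd sizes up: 257 -> 129 -> 65 -> 33
for _ in range(3):               # three UpSampling2D((2, 2)) layers
    size = size * 2              # upsampling blindly doubles: 33 -> 66 -> 132 -> 264
print(size)                      # 264, i.e. 7 pixels more than the original 257
```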

You can try either cropping the results or padding the input.

I usually work with images of compatible sizes. If you have 3 MaxPooling layers, your size should be a multiple of 2³. The answer is 264.

Padding the input data directly:

x_train = numpy.lib.pad(x_train,((0,0),(3,4),(3,4),(0,0)),mode='constant')

This requires SHAPE=(264,264,1).
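The pad widths can be sanity-checked with a small dummy array standing in for the real training data (the real array has 859 samples; two are used here to keep it light, and np.pad is the modern spelling of numpy.lib.pad):

```python
import numpy as np

x_train = np.zeros((2, 257, 257, 1), dtype='float32')  # dummy stand-in for the data
# 3 pixels before and 4 after on each spatial axis: 3 + 257 + 4 = 264
x_train = np.pad(x_train, ((0, 0), (3, 4), (3, 4), (0, 0)), mode='constant')
print(x_train.shape)  # (2, 264, 264, 1)
```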

Padding inside the model:

import keras.backend as K
from keras.layers import Input, Lambda

input_img = Input(shape=SHAPE)
x = Lambda(lambda x: K.spatial_2d_padding(x, padding=((3, 4), (3, 4))), output_shape=(264,264,1))(input_img)

Cropping the results:

This is necessary in any case where you don't change the actual data (the numpy array) directly.

decoded = Lambda(lambda x: x[:,3:-4,3:-4,:], output_shape=SHAPE)(x)
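The crop can be verified the same way, with a dummy array standing in for the decoder output:

```python
import numpy as np

decoded_out = np.zeros((1, 264, 264, 1))  # stand-in for one decoded 264x264 image
cropped = decoded_out[:, 3:-4, 3:-4, :]   # drop 3 pixels at the start, 4 at the end
print(cropped.shape)                      # (1, 257, 257, 1)
```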