Convolutional autoencoder only learning on 1 channel

Asked: 2018-08-05 22:37:18

Tags: python tensorflow machine-learning deep-learning autoencoder

My data has shape (100, 2, 2). My code is learning on channel [:, :, 0], but not on channel [:, :, 1].

The relevant parts of my code are:

Setup:

self.encoder_input = tf.placeholder(tf.float32, input_shape, name='x')
self.regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
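
For reference, `input_shape` is not shown in the question; given the (100, 2, 2) samples and the resize targets used later in the decoder, it is presumably the NHWC shape below (my assumption):

input_shape = [None, 100, 2, 2]  # batch x 100 (height) x 2 (width) x 2 (channels)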

Encoder:

with tf.variable_scope("encoder"):
   conv1 = tf.layers.conv2d(self.encoder_input, filters=32, kernel_size=(2, 2),
   activation=tf.nn.relu, padding='same', kernel_regularizer=self.regularizer)
   mp1 = tf.layers.max_pooling2d(conv1, pool_size=(4, 1), strides=(4, 1))
   conv2 = tf.layers.conv2d(mp1, filters=64, kernel_size=(2, 2),
   activation=None, padding='same', kernel_regularizer=self.regularizer)
   return conv2
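
Assuming the input placeholder really is (None, 100, 2, 2), the shapes flowing through the encoder work out to:

# encoder_input:                      (None, 100, 2, 2)
# conv1  ('same' padding, stride 1):  (None, 100, 2, 32)
# mp1    (pool 4x1, stride 4x1):      (None, 25, 2, 32)
# conv2  ('same' padding, stride 1):  (None, 25, 2, 64)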

conv2 is then fed into the decoder:

def _construct_decoder(self, encoded):
    with tf.variable_scope("decoder"):
        upsample1 = tf.image.resize_images(encoded, size=(50, 2), method=tf.image.ResizeMethod.BILINEAR)
        conv4 = tf.layers.conv2d(inputs=upsample1, filters=32, kernel_size=(2, 2), padding='same',
                                 activation=tf.nn.relu, kernel_regularizer=self.regularizer)
        upsample2 = tf.image.resize_images(conv4, size=(100, 2), method=tf.image.ResizeMethod.BILINEAR)
        conv5 = tf.layers.conv2d(inputs=upsample2, filters=2, kernel_size=(2, 2), padding='same',
                                 activation=None, kernel_regularizer=self.regularizer)
        self.decoder = conv5
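
Under the same assumption, the decoder's resizes and final 2-filter convolution bring the tensor back to the input shape, which is what the MSE loss below compares against:

# encoded (conv2):              (None, 25, 2, 64)
# upsample1 (resize to 50x2):   (None, 50, 2, 64)
# conv4:                        (None, 50, 2, 32)
# upsample2 (resize to 100x2):  (None, 100, 2, 32)
# conv5 (filters=2):            (None, 100, 2, 2)   # same shape as encoder_input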

My loss is as follows:

base_loss = tf.losses.mean_squared_error(labels=self.encoder_input, predictions=self.decoder)
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
loss = tf.add_n([base_loss] + reg_losses, name="loss")

cost = tf.reduce_mean(loss)
tf.summary.scalar('cost', cost)
optimizer = tf.train.AdamOptimizer(self.lr)

grads = optimizer.compute_gradients(cost)
# Update the weights with respect to the gradients
optimizer = optimizer.apply_gradients(grads)
# Save the gradients with tf.summary.histogram
for grad, var in grads:
    tf.summary.histogram("{}-grad".format(var.name), grad)
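
For completeness, a minimal training loop around this graph might look like the sketch below; the `model` object, the `train_op` attribute (the apply_gradients op from above), the summary writer path, and the `batches` iterator are my assumptions and are not part of the question:

merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./logs', sess.graph)
    for step, batch in enumerate(batches):  # each batch: (N, 100, 2, 2)
        _, summary = sess.run([model.train_op, merged],
                              feed_dict={model.encoder_input: batch})
        writer.add_summary(summary, step)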

I know it is not learning on the second channel because I plotted the max, min, standard deviation, etc. of the difference between the actual and predicted values for each channel. I am not sure why it learns on the first channel but not the second one. Does anyone have any ideas?
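
That per-channel check can be reproduced with something like the following sketch, assuming `preds` and `targets` are numpy arrays of shape (N, 100, 2, 2) holding the reconstructions and the original inputs:

import numpy as np

diff = preds - targets  # (N, 100, 2, 2); last axis is the channel
for c in range(diff.shape[-1]):
    d = diff[..., c]
    print("channel %d: max=%.4f  min=%.4f  std=%.4f" % (c, d.max(), d.min(), d.std()))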

0 Answers:

There are no answers yet.