I am max-pooling a tensor, but instead of the built-in tf.nn.max_pool() I am using tf.reduce_max(). It gives this error:
ValueError: Shape must be rank 4 but is rank 5 for 'conv2_1/Conv2D' (op: 'Conv2D') with input shapes: [1,?,1,224,64], [3,3,64,128].
Here is the code:
with tf.name_scope('conv1_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(self.conv1_1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                         trainable=True, name='biases')
    out = tf.nn.bias_add(conv, biases)
    self.conv1_2 = tf.nn.relu(out, name=scope)
    self.parameters += [kernel, biases]

self.pool1 = tf.reduce_max(self.conv1_2, reduction_indices=[1], keep_dims=True)

# conv2_1
with tf.name_scope('conv2_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    sess = tf.InteractiveSession()
    tf.Print(self.pool1, [self.pool1], message="hellow fatima")
    conv = tf.nn.conv2d([self.pool1], kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    out = tf.nn.bias_add(conv, biases)
    self.conv2_1 = tf.nn.relu(out, name=scope)
    self.parameters += [kernel, biases]
Answer (score: 1)
You can try removing the [] wrapped around the first argument passed to tf.nn.conv2d. Wrapping self.pool1 in a list adds an extra leading dimension, turning the rank-4 tensor produced by tf.reduce_max(..., keep_dims=True) into the rank-5 input [1, ?, 1, 224, 64] the error complains about. So instead of

conv = tf.nn.conv2d([self.pool1], kernel, [1, 1, 1, 1], padding='SAME')

you should try:

conv = tf.nn.conv2d(self.pool1, kernel, [1, 1, 1, 1], padding='SAME')
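The rank jump can be reproduced without TensorFlow. Here is a small NumPy sketch (illustrative, not from the original post; the shape is chosen to match the error message) showing how wrapping an array in a list adds a leading axis:

```python
import numpy as np

# A rank-4 tensor shaped like [batch, height, width, channels],
# as tf.reduce_max(..., keep_dims=True) would produce.
pool1 = np.zeros((1, 1, 224, 64), dtype=np.float32)

print(pool1.ndim)               # 4 - the rank conv2d expects
print(np.array([pool1]).ndim)   # 5 - the extra [] adds a leading axis
print(np.array([pool1]).shape)  # (1, 1, 1, 224, 64) - matches the error
```

The same thing happens inside tf.nn.conv2d: the list [self.pool1] is converted to a tensor with one more dimension, which is why the op reports rank 5 instead of rank 4.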