Here is what I get for the weights and biases in TensorBoard.
Answer 0: (score: 0)
It seems that batch_norm no longer exists, but there is batch_normalization. Is the following a correct implementation in my case?
h_conv1 = tf.nn.relu(tf.nn.conv2d(input_layer, conv_weights_1, strides=[1, 4, 4, 1], padding="SAME") + conv_biases_1)

# batch normalization: per-channel moments over batch, height, width
bn_mean, bn_variance = tf.nn.moments(h_conv1, [0, 1, 2])
bn_scale = tf.Variable(tf.ones([32]))    # gamma, one per output channel
bn_offset = tf.Variable(tf.zeros([32]))  # beta, one per output channel
bn_epsilon = 1e-3
bn_conv1 = tf.nn.batch_normalization(h_conv1, bn_mean, bn_variance, bn_offset, bn_scale, bn_epsilon)

# previously: h_pool1 = max_pool_2x2(h_conv1)
h_pool1 = max_pool_2x2(bn_conv1)
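As a sanity check on the snippet above: `tf.nn.batch_normalization` computes `scale * (x - mean) / sqrt(variance + epsilon) + offset`, with the moments taken per channel over axes `[0, 1, 2]` for NHWC activations. A minimal NumPy sketch of the same arithmetic (shapes and the `batch_normalize` helper are illustrative, not part of the original code):

```python
import numpy as np

def batch_normalize(x, scale, offset, epsilon=1e-3):
    # Per-channel moments over batch, height, width (axes 0, 1, 2),
    # mirroring tf.nn.moments(h_conv1, [0, 1, 2]).
    mean = x.mean(axis=(0, 1, 2))
    variance = x.var(axis=(0, 1, 2))
    # Same formula tf.nn.batch_normalization applies.
    return scale * (x - mean) / np.sqrt(variance + epsilon) + offset

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 5, 5, 32))  # NHWC activations, 32 channels
scale = np.ones(32)                 # gamma initialized to ones
offset = np.zeros(32)               # beta initialized to zeros
y = batch_normalize(x, scale, offset)
# With unit scale and zero offset, each channel of y is standardized:
print(np.abs(y.mean(axis=(0, 1, 2))).max())  # close to 0
```

With `scale=1` and `offset=0` the output has (approximately) zero mean and unit variance per channel, which is what the learnable `bn_scale`/`bn_offset` variables then rescale and shift during training.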