CNN features extracted with TensorFlow are mostly zero

Asked: 2017-04-05 12:17:01

Tags: tensorflow deep-learning feature-extraction

I trained a CNN model with TensorFlow. Afterwards, I extracted and saved the features from the fc1 layer, but I found that most of the feature values are zero.

My model is shown below; I use the h_fc1 layer as the feature extractor. Training and testing seem fine, but I don't understand why the extracted features are mostly zero. Is this normal, or did I make a mistake somewhere? I doubt that such a sparse feature vector can represent the input image well. Any advice or hints would be appreciated, thanks.

def get_model(x):
    # First convolutional layer -- maps one grayscale image to 32 feature maps.
    W_conv1 = weight_variable([3, 3, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

    # Second convolutional layer -- maps 32 feature maps to 64.
    W_conv2 = weight_variable([3, 3, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    # Third convolutional layer -- maps 64 feature maps to 128.
    W_conv3 = weight_variable([3, 3, 64, 128])
    b_conv3 = bias_variable([128])
    h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
    h_pool3 = max_pool_2x2(h_conv3)

    # Fourth convolutional layer -- maps 128 feature maps to 256.
    W_conv4 = weight_variable([3, 3, 128, 256])
    b_conv4 = bias_variable([256])
    h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
    h_pool4 = max_pool_2x2(h_conv4)

    # Fully connected layer 1 -- after four 2x2 poolings the feature maps
    # are 4x4x256; map this to 1024 features.
    W_fc1 = weight_variable([4 * 4 * 256, 1024])
    b_fc1 = bias_variable([1024])

    h_pool4_flat = tf.reshape(h_pool4, [-1, 4 * 4 * 256])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1)

    # Dropout -- controls the complexity of the model and prevents
    # co-adaptation of features.
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Fully connected layer 2 -- maps the 1024 features to 512.
    W_fc2 = weight_variable([1024, 512])
    b_fc2 = bias_variable([512])
    h_fc2 = tf.nn.relu(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
    h_fc2_drop = tf.nn.dropout(h_fc2, keep_prob)

    # Output layer -- maps the 512 features to the class logits.
    W_fc3 = weight_variable([512, FLAGS.nClasses])
    b_fc3 = bias_variable([FLAGS.nClasses])

    y_conv = tf.matmul(h_fc2_drop, W_fc3) + b_fc3
    # h_fc1 is returned as the extracted feature tensor.
    return y_conv, keep_prob, h_fc1
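
The helper functions used above (weight_variable, bias_variable, conv2d, max_pool_2x2) are not shown in the post; the following is only a minimal sketch of what they presumably look like, assuming the usual TensorFlow (1.x) MNIST-tutorial style:

import tensorflow as tf

def weight_variable(shape):
    # Truncated-normal initialisation with a small stddev.
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    # Slightly positive bias to reduce "dead" ReLU units at initialisation.
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # Stride-1 convolution with SAME padding keeps the spatial size.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves the height and width.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')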

1 Answer:

Answer 0 (score: 0)

Yes, you are right, this is most likely caused by the ReLU: every unit whose pre-activation is negative is clamped to exactly zero. However, I would still recommend using the features taken after the ReLU.

That said, I think it would be a nice experiment to use both and see which one works better. Good luck ;)
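
For example (a sketch only, not from the original post; sess, x and image_batch are placeholder names), get_model could return the fc1 activations both before and after the ReLU, and the sparsity of each can be compared at extraction time with dropout disabled (keep_prob=1.0):

# Inside get_model, keep the pre-activation tensor as well:
h_fc1_pre = tf.matmul(h_pool4_flat, W_fc1) + b_fc1   # dense, may be negative
h_fc1 = tf.nn.relu(h_fc1_pre)                        # zero wherever the pre-activation < 0
# ...and return both:  return y_conv, keep_prob, h_fc1, h_fc1_pre

# At feature-extraction time, evaluate both tensors on the same batch:
import numpy as np
feat_relu, feat_pre = sess.run([h_fc1, h_fc1_pre],
                               feed_dict={x: image_batch, keep_prob: 1.0})
print('zeros after ReLU:  %.1f%%' % (100.0 * np.mean(feat_relu == 0)))
print('zeros before ReLU: %.1f%%' % (100.0 * np.mean(feat_pre == 0)))

If many values are zero after the ReLU but almost none are zero before it, the sparsity comes from the activation function rather than from a training problem.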
