Can gradCAM differ when all conditions are the same except the batch_size?

Time: 2018-11-18 09:22:15

Tags: keras deep-learning conv-neural-network visualize

I am using a CNN architecture.

I then applied gradCAM to it via keras-vis.

I noticed something strange: when I change only the batch_size of the input images, the result is different (the same batch_size always gives the same result).

I don't understand why this happens.

In the function visualize_cam_with_losses, can "grads" differ even though the model and the input image are the same? (A minimal way to check this directly is sketched after the code.)

penultimate_output = penultimate_layer.output
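# (With wrt_tensor set, a single minimize() call with max_iter=1 amounts to one
# forward/backward pass: it evaluates the losses and returns their gradients
# with respect to penultimate_output, rather than running a real optimization.)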
opt = Optimizer(input_tensor, losses, wrt_tensor=penultimate_output,
                norm_grads=False)
_, grads, penultimate_output_value = opt.minimize(seed_input, max_iter=1,
                                                  grad_modifier=grad_modifier,
                                                  verbose=False)

# For numerical stability. Very small grad values along with small penultimate_output_value can cause
# w * penultimate_output_value to zero out, even for reasonable fp precision of float32.
grads = grads / (np.max(grads) + K.epsilon())

# Average pooling across all feature maps.
# This captures the importance of feature map (channel) idx to the output.
channel_idx = 1 if K.image_data_format() == 'channels_first' else -1
other_axis = np.delete(np.arange(len(grads.shape)), channel_idx)
weights = np.mean(grads, axis=tuple(other_axis))
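
# Note: together with the ReLU further down, the loop below is equivalent to
# the vectorized (channels-last) form
#   heatmap = np.maximum(
#       np.tensordot(penultimate_output_value[0], weights, axes=([-1], [0])), 0)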

# Generate heatmap by computing weight * output over feature maps
output_dims = utils.get_img_shape(penultimate_output)[2:]
heatmap = np.zeros(shape=output_dims, dtype=K.floatx())
for i, w in enumerate(weights):
    if channel_idx == -1:
        heatmap += w * penultimate_output_value[0, ..., i]
    else:
        heatmap += w * penultimate_output_value[0, i, ...]

# ReLU thresholding to exclude pattern mismatch information (negative gradients).
heatmap = np.maximum(heatmap, 0)

# The penultimate feature map size is definitely smaller than input image.
input_dims = utils.get_img_shape(input_tensor)[2:]
heatmap = imresize(heatmap, input_dims, interp='bicubic', mode='F')

# Normalize and create heatmap.
heatmap = utils.normalize(heatmap)
return heatmap, np.uint8(cm.jet(heatmap)[..., :3] * 255)
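
For reference, here is a minimal sketch of how one could check whether the raw gradients themselves depend on the batch size, independent of the keras-vis pipeline. This is not from the original post: the layer name 'block5_conv3', the class index 0, and the helper grads_for_batch are placeholders for illustration.

import numpy as np
from keras import backend as K

def grads_for_batch(model, img, batch_size):
    # Repeat the same image batch_size times and return d(score)/d(feature map)
    # for the first sample only.
    batch = np.repeat(img[np.newaxis], batch_size, axis=0)
    penultimate = model.get_layer('block5_conv3').output  # placeholder layer name
    score = model.output[:, 0]                            # placeholder class index
    grad_fn = K.function([model.input],
                         K.gradients(K.sum(score), [penultimate]))
    return grad_fn([batch])[0][0]

# g1 = grads_for_batch(model, img, batch_size=1)
# g8 = grads_for_batch(model, img, batch_size=8)
# print(np.abs(g1 - g8).max())  # a nonzero difference reproduces the issue

Comparing the gradients directly isolates the question above from the rest of the Grad-CAM computation (normalization, pooling, resizing).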

0 Answers:

No answers yet.