I'm trying to visualize the important regions for a classification task with a CNN.
I'm using VGG16 plus my own top layers (a global average pooling layer and a dense layer).
After compiling and fitting the model, I try to run Grad-CAM on a new image:
model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(100, 100, 3))
model = models.Sequential()
model.add(model_vgg16_conv)
model.add(Lambda(global_average_pooling, output_shape=global_average_pooling_shape))
model.add(Dense(4, activation='softmax', kernel_initializer='uniform'))
After that I execute:
image = cv2.imread("data/example_images/test.jpg")
# Resize to 100x100
image = resize(image,(100,100),anti_aliasing=True, mode='constant')
# Because it's a grey scale image extend the dimensions
image = np.repeat(image.reshape(1,100, 100, 1), 3, axis=3)
class_weights = model.get_layer("dense_1").get_weights()[0]
final_conv_layer = model.get_layer("vgg16").get_layer("block5_conv3")
input1 = model.get_layer("vgg16").layers[0].input
output1 = model.get_layer("dense_1").output
get_output = K.function([input1], [final_conv_layer.output, output1])
Running

[conv_outputs, predictions] = get_output([image])

results in the following error:

InvalidArgumentError: You must feed a value for placeholder tensor 'vgg16_input' with dtype float and shape [?,100,100,3]
	 [[{{node vgg16_input}}]]
	 [[dense_1/Softmax/_233]]

Additional information
My global average pooling function and its shape function:
def global_average_pooling(x):
return K.mean(x, axis = (2, 3))
def global_average_pooling_shape(input_shape):
return input_shape[0:2]
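Note that with the default channels_last data format the vgg16 block outputs (batch, 3, 3, 512), so averaging over axes (2, 3) reduces the width and channel axes rather than the two spatial axes; that is why lambda_1 shows output shape (None, 3) in the summary. A minimal numpy sketch of the same reduction:

```python
import numpy as np

# Same shape as the vgg16 block's output in the summary: (batch, 3, 3, 512).
x = np.random.rand(1, 3, 3, 512)

# Equivalent of K.mean(x, axis=(2, 3)) above: with channels_last data this
# averages over the width and channel axes, leaving shape (batch, 3).
pooled = x.mean(axis=(2, 3))
print(pooled.shape)  # (1, 3)

# Pooling over the spatial axes instead would keep all 512 feature maps,
# which is what class activation mapping normally expects:
spatial_pooled = x.mean(axis=(1, 2))
print(spatial_pooled.shape)  # (1, 512)
```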
Model summary:
Layer (type) Output Shape Param #
=================================================================
vgg16 (Model) (None, 3, 3, 512) 14714688
_________________________________________________________________
lambda_1 (Lambda) (None, 3) 0
_________________________________________________________________
dense_1 (Dense) (None, 4) 16
=================================================================
Total params: 14,714,704
Trainable params: 16
Non-trainable params: 14,714,688
I'm new to Grad-CAM, and I'm not sure whether I'm simply overlooking something or misunderstanding the whole concept.
Answer 0: (score: 2)
With Sequential, layers are added via the add() method. In this case, because a Model object was added directly, the model ends up with two inputs: one through Sequential and one through model_vgg16_conv.
>>> layer = model.layers[0]
>>> layer.get_input_at(0)
<tf.Tensor 'input_1:0' shape=(?, ?, ?, 3) dtype=float32>
>>> layer.get_input_at(1)
<tf.Tensor 'vgg16_input:0' shape=(?, ?, ?, 3) dtype=float32>
Because K.function was given only the first input, the error complains that the input 'vgg16_input' is missing. This works:
get_output = K.function([input1] + [model.input], [final_conv_layer.output, output1])
[conv_outputs, predictions] = get_output([image, image])
But in this case the functional API can be used instead:
model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(100, 100, 3))
gavg = Lambda(global_average_pooling, output_shape=global_average_pooling_shape)(model_vgg16_conv.output)
output = Dense(4, activation='softmax', kernel_initializer='uniform')(gavg)
model_f = Model(model_vgg16_conv.input, output)
final_conv_layer = model_f.get_layer("block5_conv3")
get_output = K.function([model_f.input], [final_conv_layer.output, model_f.output])
[conv_outputs, predictions] = get_output([image])
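Once get_output runs, the heatmap itself still has to be built from conv_outputs and the dense-layer weights. A minimal numpy sketch of that step, assuming the pooling averages over the spatial axes so that class_weights has shape (512, 4); the variable values and the target_class index are illustrative, not from the original post:

```python
import numpy as np

# Illustrative stand-ins for the real outputs:
# conv_outputs from block5_conv3 has shape (1, 3, 3, 512),
# class_weights from the dense layer has shape (512, 4).
conv_outputs = np.random.rand(1, 3, 3, 512).astype(np.float32)
class_weights = np.random.rand(512, 4).astype(np.float32)

target_class = 2  # e.g. np.argmax(predictions)

# CAM: weight each of the 512 feature maps by the class's dense weight
# and sum over the channel axis.
cam = np.einsum("hwc,c->hw", conv_outputs[0], class_weights[:, target_class])

# Normalize to [0, 1] before overlaying on the input image.
cam -= cam.min()
cam /= cam.max() + 1e-8
print(cam.shape)  # (3, 3)
```

The resulting (3, 3) map would then be upsampled to 100x100 (e.g. with cv2.resize) and blended over the input image.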