I am trying to follow this example with my own model, which looks like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 150, 150, 3)       0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0
_________________________________________________________________
sequential_1 (Sequential)    (None, 1)                 2097665
=================================================================
But I am getting this error:
AttributeError: Layer sequential_2 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use get_output_at(node_index) instead.
I have no idea where to start. After some searching, I think it has to do with my last layer being a Sequential layer rather than a Dense layer, which is what the VGG16 model in the example ends with.
The model itself is the cats-vs-dogs model from the Keras fine-tuning example.
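For context, a model like this is typically put together along the lines of the Keras fine-tuning example. The following is only a minimal sketch, not my exact training code; the 256-unit hidden layer is an assumption, chosen because it matches the 2,097,665 parameters reported for sequential_1 in the summary above:

from keras.applications import VGG16
from keras.models import Model, Sequential
from keras.layers import Flatten, Dense

# VGG16 convolutional base plus a small Sequential classifier on top.
base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

top = Sequential()                        # shows up as `sequential_1` in the summary
top.add(Flatten(input_shape=base.output_shape[1:]))
top.add(Dense(256, activation='relu'))    # assumed size: 8192*256 + 256 + 257 = 2,097,665 params
top.add(Dense(1, activation='sigmoid'))

model = Model(inputs=base.input, outputs=top(base.output))
model.summary()                           # the whole top appears as one Sequential layer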
I would greatly appreciate any help or ideas on how to proceed from here!
Edit: in case it helps to see the code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from keras.models import load_model
from keras import activations
from vis.utils import utils
from vis.visualization import visualize_cam, overlay

model = load_model('final_finetuned_model.h5')

# Swap the final activation for a linear one and rebuild the model.
layer_idx = utils.find_layer_idx(model, 'sequential_1')
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

plt.rcParams['figure.figsize'] = (18, 6)
img1 = utils.load_img('test1/cat/5.jpg', target_size=(150, 150))
img2 = utils.load_img('test1/cat/6.jpg', target_size=(150, 150))

for modifier in [None, 'guided', 'relu']:
    plt.figure()
    f, ax = plt.subplots(1, 2)
    plt.suptitle("vanilla" if modifier is None else modifier)
    for i, img in enumerate([img1, img2]):
        # 20 is the imagenet index corresponding to `ouzel`
        grads = visualize_cam(model, layer_idx, filter_indices=20,
                              seed_input=img, backprop_modifier=modifier)
        # Let's overlay the heatmap onto the original image.
        jet_heatmap = np.uint8(cm.jet(grads)[..., :3] * 255)
        ax[i].imshow(overlay(jet_heatmap, img))
plt.show()
Answer 0 (score: 2)
I had a similar error with a very similar network that had two output nodes: dense_1_1/Relu:0 and sequential_2/dense_1/Relu:0. My fix was to go into keras-vis's losses.py and change layer_output = self.layer.output to layer_output = self.layer.get_output_at(-1). This is more of a workaround than a real solution: when there is only one output node, taking the last one with [-1] is equivalent anyway, and when there are two, the last node is what worked for me. But it should get you going. You can also try layer_output = self.layer.get_output_at(0), or another node index if there are more.
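To make the "multiple inbound nodes" situation concrete, here is a minimal, self-contained sketch (a toy model, not the one from the question) showing how calling the same Sequential on two inputs creates two output nodes, and how get_output_at picks one of them:

from keras.layers import Input, Dense
from keras.models import Sequential

# A small Sequential used as a layer inside a functional graph.
inner = Sequential([Dense(1, activation='sigmoid', input_shape=(8,))])

x1 = Input(shape=(8,))
x2 = Input(shape=(8,))
y1 = inner(x1)  # first call  -> inbound node 0
y2 = inner(x2)  # second call -> inbound node 1

# inner.output would now raise the same AttributeError, because the layer
# has more than one inbound node and `output` is ambiguous.
print(inner.get_output_at(0))   # output tensor of the first call
print(inner.get_output_at(-1))  # output tensor of the most recent call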
There is a related open issue here.