I have some Python code that uses Keras. I'm not posting the code because it's rather long, and the problem doesn't seem to be related to the code itself.
Here is the error I'm getting:
File "h5py\h5a.pyx", line 77, in h5py.h5a.open (D:\Build\h5py\h5py-2.7.0\h5py\h5a.c:2350)
KeyError: "Can't open attribute (Can't locate attribute: 'nb_layers')"
What could the problem be? Is it related to Keras? How can I fix it?
Edit 1
The error seems to be related to this part of the code:
# load VGG16 weights
f = h5py.File(weights_path)
for k in range(f.attrs['nb_layers']):
    if k >= len(model.layers):
        break
    g = f['layer_{}'.format(k)]
    weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
    model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')
Thanks.
Answer 0 (Score: 1)
Use the weights file vgg16_weights_th_dim_ordering_th_kernels.h5 from https://github.com/fchollet/deep-learning-models/releases. That file is in the Keras 2 format.
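With a Keras-2-format file, the manual h5py loop from the question should no longer be needed. A minimal sketch (assuming the locally built model matches the architecture stored in the file, and that the downloaded file sits next to the script; the path here is illustrative):
# load the Keras-2-format weights directly; Keras parses the HDF5 layout itself
weights_path = 'vgg16_weights_th_dim_ordering_th_kernels.h5'
model.load_weights(weights_path)
print('Model loaded.')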
Answer 1 (Score: 0)
Apparently "nb_layers" refers to the number of layers, so you can use an alternative. In this case:
f = h5py.File(filename, 'r')
nb_layers = len(f.attrs["layer_names"])
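Building on that, the loop from the question can be adapted to the newer attribute layout. A rough sketch (assuming the file carries the layer_names / weight_names attributes that Keras-saved HDF5 files use, and that the local model's layer order matches the file):
import h5py

f = h5py.File(filename, 'r')
layer_names = f.attrs['layer_names']               # one entry per saved layer
for k, name in enumerate(layer_names):
    if k >= len(model.layers):
        break
    g = f[name]                                    # group with this layer's weights
    weight_names = g.attrs['weight_names']         # e.g. kernel and bias datasets
    weights = [g[wn][...] for wn in weight_names]  # read each dataset into memory
    model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')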
Answer 2 (Score: 0)
I had the same problem. I solved it by adding this line to build the VGG16 network where I needed it.
from keras import applications
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, Dropout

Vmodel = applications.VGG16(weights='imagenet', include_top=False, input_shape=(3, img_width, img_height))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=Vmodel.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)
# add the model on top of the convolutional base
# model.add(top_model)
model = Model(inputs=Vmodel.input, outputs=top_model(Vmodel.output))
So basically, instead of creating your own VGG16 conv net and loading the VGG16 weights into it, I created the VGG16 model from keras.applications and then added the top layers to it. I hope this works for you.
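If the end goal is fine-tuning, as the comments in the snippet suggest, a rough follow-up sketch (the number of frozen layers and the optimizer settings here are illustrative assumptions, not part of the original answer):
from keras import optimizers

# freeze the convolutional base up to the last block so only the new
# classifier (and optionally the last conv block) gets trained
for layer in model.layers[:15]:
    layer.trainable = False

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])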