load_weights Keras model error

Date: 2016-12-29 08:16:10

Tags: python tensorflow keras conv-neural-network

I'm very new to CNNs, and this is my first time using Keras, TensorFlow, etc. I'm having a problem with the load_weights function. I have trained a CNN (cifar100), and now I want to test it by loading its weights and running an evaluation.

Here is the stack trace of the error I get:

Traceback (most recent call last):

  File "<ipython-input-17-247d6312ea1b>", line 1, in <module>
    runfile('/home/nikola/Desktop/cifar100-Version2.py', wdir='/home/nikola/Desktop')

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)

  File "/home/nikola/Desktop/cifar100-Version2.py", line 80, in <module>
    model.load_weights('cifar100_best_accuracy.hdf5')

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2520, in load_weights
    self.load_weights_from_hdf5_group(f)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2605, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1045, in batch_set_value
    assign_op = x.assign(assign_placeholder)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 575, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
    use_locking=use_locking, name=name)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2242, in create_op
    set_shapes_for_outputs(ret)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs
    shapes = shape_func(op)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)

  File "/home/nikola/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 32 for 'Assign_11' (op: 'Assign') with input shapes: [3,3,3,32], [32,3,3,3].

I'm trying to extend the Keras cifar10 example code to cifar100. I managed to train it, but I also want to evaluate it; evaluation would tell me whether my model is any good and what score it achieves.

Here is my code:

from __future__ import print_function
from keras.datasets import cifar100
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils, generic_utils
from six.moves import range

#import numpy as np
#import matplotlib.pyplot as plt

batch_size = 32
nb_classes = 100

classes = [...100 classes...]

test_only = True
save_weights = True

nb_epoch = 200
data_augmentation = True

# input image dimensions
img_rows, img_cols = 32, 32
# The CIFAR100 images are RGB.
img_channels = 3

# The data, shuffled and split between train and test sets:
(X_train, y_train), (X_test, y_test) = cifar100.load_data()
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# Convert class vectors to binary class matrices.
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])




if test_only:
    model.load_weights('cifar100_best_accuracy.hdf5')

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(X_train, Y_train,
              batch_size=batch_size,
              nb_epoch=nb_epoch,
              validation_data=(X_test, Y_test),
              shuffle=True)
    score = model.evaluate(X_test, Y_test, batch_size=batch_size)
    print('Test score:', score)
else:
    print('Using real-time data augmentation.')
    # This will do preprocessing and realtime data augmentation:
    datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images

    # Compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied).
    datagen.fit(X_train)


    model_check_point = ModelCheckpoint('cifar100_best_accuracy.hdf5', monitor='acc', verbose=0, save_best_only=True, save_weights_only=False, mode='auto')


    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(X_train, Y_train,
                        batch_size=batch_size),
                        samples_per_epoch=X_train.shape[0],
                        nb_epoch=nb_epoch,
                        callbacks=[model_check_point],
                        validation_data=(X_test, Y_test))

1 Answer:

Answer 0 (score: 0)

You are saving the whole model, then loading that file as weights, and after that you are training your model again.

First, fix the script so that it saves only the weights, then load those and check whether the problem still occurs.
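
A rough sketch of that suggestion, reusing the model, datagen, data arrays and file name already defined in your script (the save_weights_only=True flag is the key change; the explicit else branch for the test-only path is my assumption about how you want evaluation to work):

if not test_only:
    # Save only the best weights, so the file matches what load_weights expects later.
    model_check_point = ModelCheckpoint('cifar100_best_accuracy.hdf5',
                                        monitor='acc',
                                        save_best_only=True,
                                        save_weights_only=True,
                                        mode='auto')
    model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
                        samples_per_epoch=X_train.shape[0],
                        nb_epoch=nb_epoch,
                        callbacks=[model_check_point],
                        validation_data=(X_test, Y_test))
else:
    # Test-only path: load the saved weights and evaluate, without training again.
    # (X_test should already be cast to float32 and divided by 255, as in your script.)
    model.load_weights('cifar100_best_accuracy.hdf5')
    score = model.evaluate(X_test, Y_test, batch_size=batch_size)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])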