Keras problem when loading the saved best weights

Time: 2019-08-08 18:05:40

Tags: python keras

I am cross-validating a Keras model. For each split, I save the weights of the best model and reload them to evaluate its performance. However, this sometimes fails with:

ValueError: You are trying to load a weight file containing 13 layers into a model with 11 layers.

I used to build every layer directly inside the model function; today I factored the repeated block of layers out into a helper function and simply call it to create each block. That may be where the problem comes from (see the quick check after the model code below).

from keras.layers import (Input, Conv2D, BatchNormalization, ELU, MaxPooling2D,
                          GlobalAveragePooling2D, Dense, Dropout, concatenate)
from keras.models import Model
from keras.regularizers import l2

def basic_block(x, num_conv, size_kernel, size_pooling):
    # Conv -> BatchNorm -> ELU -> MaxPool, repeated at every level
    x = Conv2D(num_conv, size_kernel, padding='same')(x)
    x = BatchNormalization(axis=-1)(x)
    x = ELU()(x)
    x = MaxPooling2D(pool_size=size_pooling)(x)
    return x

def multi_level():
    melgram_input_CNN = Input(shape=(96, 235, 1))
    x = BatchNormalization(axis=-1, name='bn_0_freq')(melgram_input_CNN)
    x = basic_block(x, 128, 3, (2, 2))
    low_layer = basic_block(x, 64, 3, (2, 4))
    mid_layer = basic_block(low_layer, 64, 3, (2, 4))
    high_layer = basic_block(mid_layer, 128, 3, (2, 4))
    low_layer = GlobalAveragePooling2D()(low_layer)
    mid_layer = GlobalAveragePooling2D()(mid_layer)
    high_layer = GlobalAveragePooling2D()(high_layer)
    multi = concatenate([low_layer, mid_layer, high_layer])
    out = Dense(128, activation='elu')(multi)
    out = Dropout(0.5)(out)
    out = Dense(10, activation='softmax', activity_regularizer=l2(0.01))(out)
    model = Model(melgram_input_CNN, out)
    return model
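
To rule the refactor in or out, the quick check below (a minimal sketch that only relies on the multi_level() defined above; the weighted_signature helper and the m1/m2 names are just for this check) builds the model twice and compares the weight-bearing layers. Two calls should always give the same layer classes and weight shapes:

# Two independent builds of multi_level() should have identical
# weight-bearing layers (same layer classes, same weight array shapes).
def weighted_signature(model):
    return [(type(l).__name__, [w.shape for w in l.get_weights()])
            for l in model.layers if l.weights]

m1, m2 = multi_level(), multi_level()
print(len(weighted_signature(m1)))                       # expected to match the "11" in the error
print(weighted_signature(m1) == weighted_signature(m2))  # expected: True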
from keras.callbacks import ModelCheckpoint

# 10-fold CV loop; the train/validation/test arrays (X1_train, y_train,
# X1_val, y_val, ...) are prepared from train_index / test_index before this
for train_index, test_index in kf.split(X1, genre_list):
    model = multi_level()
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    filepath = "weights.best.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
                                 save_best_only=True, mode='min')
    model.fit(
        X1_train,
        y_train,
        validation_data=(X1_val, y_val),
        callbacks=[checkpoint],
    )
    # rebuild the model and reload the best weights for evaluation
    model = multi_level()
    model.load_weights("weights.best.hdf5")
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    # then do the test
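
To see where the 13-vs-11 mismatch comes from, it may also help to compare the layer names stored in weights.best.hdf5 with the weight-bearing layers of a freshly built model. This is a sketch based on my understanding of the Keras 2.x HDF5 layout (weights sit either at the top level or under a model_weights group, with layer_names and weight_names attributes), so treat those details as assumptions:

import h5py

model = multi_level()
print("weight-bearing layers in the model:",
      len([l for l in model.layers if l.weights]))

with h5py.File("weights.best.hdf5", "r") as f:
    # ModelCheckpoint saves the full model by default, so the weights may be
    # nested under "model_weights" (assumption about the file layout)
    g = f["model_weights"] if "layer_names" not in f.attrs and "model_weights" in f else f
    names = [n.decode("utf8") if isinstance(n, bytes) else n
             for n in g.attrs["layer_names"]]
    # keep only the saved layers that actually contain weights, which seems to
    # be what load_weights counts in the error message
    saved = [n for n in names if len(g[n].attrs.get("weight_names", [])) > 0]

print("weight-bearing layers in the file:", len(saved))
print(saved)

If the two counts already disagree for a file written in the same run, the architecture really changed between saving and reloading; if they only disagree because an old weights.best.hdf5 from a previous version of the network is still on disk, deleting or renaming the checkpoint file between runs would be worth a try.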

0 Answers:

No answers yet