VGGNet is not learning during fine-tuning.
I trained a 16-layer VGGNet model on ECG data. Afterwards, I designed a new model that takes the conv_base of that VGGNet and builds fully connected layers on top of it. The new model does not learn at all: it shows the same accuracy and loss in every epoch. Later, I designed the complete new_model (a variant of VGGNet) from scratch using the Keras library, but that model also fails to improve during training. What could be the reason? Whichever model I train (and these models worked fine before), I get 89.02% accuracy.
The model summary is:
Layer (type) Output Shape Param # Connected to
input_1 (InputLayer) (None, 1201, 1) 0
input_2 (InputLayer) (None, 401, 1) 0
sequential_1 (Sequential) (None, 2560) 8176064 input_1[0][0]
sequential_2 (Sequential) (None, 25088) 49664 input_2[0][0]
concatenate_1 (Concatenate) (None, 27648) 0 sequential_1[1][0], sequential_2[1][0]
dense_1 (Dense) (None, 1024) 28312576 concatenate_1[0][0]
dropout_1 (Dropout) (None, 1024) 0 dense_1[0][0]
dense_2 (Dense) (None, 512) 524800 dropout_1[0][0]
dropout_2 (Dropout) (None, 512) 0 dense_2[0][0]
dense_3 (Dense) (None, 256) 131328 dropout_2[0][0]
dropout_3 (Dropout) (None, 256) 0 dense_3[0][0]
dense_4 (Dense) (None, 64) 16448 dropout_3[0][0]
dense_5 (Dense) (None, 2) 130 dense_4[0][0]
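As a sanity check that the dense head is wired as intended, the Param # column of the summary can be verified by hand: a Dense layer has inputs × units + units (bias) parameters. A few lines of plain Python confirm every entry above:

```python
# Verify the Param # entries in the summary.
# Dense parameter count = n_inputs * units + units (bias term).
def dense_params(n_in, units):
    return n_in * units + units

concat_width = 2560 + 25088                    # sequential_1 + sequential_2 outputs

assert concat_width == 27648                   # concatenate_1 output width
assert dense_params(27648, 1024) == 28312576   # dense_1
assert dense_params(1024, 512) == 524800       # dense_2
assert dense_params(512, 256) == 131328        # dense_3
assert dense_params(256, 64) == 16448          # dense_4
assert dense_params(64, 2) == 130              # dense_5
print("all Dense parameter counts match the summary")
```

So the head itself is consistent; the counts give no hint of a wiring mistake in the dense layers.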
Training code:
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping  # EarlyStopping import was missing

checkpointer = ModelCheckpoint(filepath='modifiedVGGBasic.bestweights.hdf5',
                               verbose=1, monitor='val_acc', mode='max',
                               save_best_only=True)
earlystop = EarlyStopping(monitor='val_acc', min_delta=0.001, patience=50,
                          verbose=2, mode='max', restore_best_weights=True)
ecg_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
result = ecg_model.fit([xt1r, xt2r], yt, validation_data=([xv1r, xv2r], yv),
                       batch_size=128, class_weight=class_weights,
                       epochs=150, verbose=2, callbacks=[earlystop, checkpointer])
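The fit call passes `class_weight=class_weights`, whose construction is not shown. For reference, a minimal sketch of one common "balanced" weighting for two classes; the per-class counts here are hypothetical stand-ins (chosen only to sum to the 53819 training samples), so substitute your own counts from `yt`:

```python
# Hypothetical per-class counts for the 53819 training samples; in practice,
# obtain them from the one-hot labels, e.g. yt.sum(axis=0).
n_class0, n_class1 = 47910, 5909    # assumed majority / minority split
n_samples = n_class0 + n_class1     # 53819
n_classes = 2

# "balanced" heuristic (as in scikit-learn): n_samples / (n_classes * count_c)
class_weights = {
    0: n_samples / (n_classes * n_class0),
    1: n_samples / (n_classes * n_class1),
}
print(class_weights)
```

With counts this skewed, the minority class would weigh roughly eight times more per sample in the loss, which is what should stop the model from collapsing onto the majority class.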
The output for the first two epochs is shown below. The accuracy stays at 89.02% across all epochs and the model does not learn.
Train on 53819 samples, validate on 13455 samples
Epoch 1/150 - 916s - loss: 1.7631 - acc: 0.8866 - val_loss: 1.7705 - val_acc: 0.8902
Epoch 00001: val_acc improved from -inf to 0.89015, saving model to modifiedVGGBasic.bestweights.hdf5
Epoch 2/150 - 888s - loss: 1.7703 - acc: 0.8902 - val_loss: 1.7705 - val_acc: 0.8902
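Context for reading this log: loss and accuracy frozen at the same value on both training and validation data usually mean the network is outputting a single class, so 89.02% would simply be the majority-class fraction of the labels. A minimal sketch of that check (the count of 11977 majority samples out of 13455 is an assumption chosen to match val_acc = 0.89015; count your real yv instead):

```python
# Assumed label distribution of the 13455 validation samples; in practice,
# count the one-hot labels directly, e.g. yv.sum(axis=0).
counts = [13455 - 11977, 11977]   # [assumed minority, assumed majority]

majority_fraction = max(counts) / sum(counts)
print(f"majority-class baseline accuracy: {majority_fraction:.4f}")
```

If this baseline equals the stuck val_acc, the model has collapsed to predicting one class rather than learning anything from the inputs.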