I am working on a project in which a CNN model should predict the quality (mean squared error) of image patches. The images are frames of a video. I split the dataset 80%-20% into (X_train, y_train) and (X_test, y_test). X_* contains the 48x48 patches and y_* contains the ground-truth quality scores.
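Roughly, the data is set up like this (a minimal sketch only; the real loading code is longer, and the placeholder arrays and the use of train_test_split here are just for illustration):

# Sketch of the data setup (assumed shapes; the real loading code is omitted).
import numpy as np
from sklearn.model_selection import train_test_split

# patches: (N, 48, 48, 3) float array of patch pixels
# scores:  (N,) float array of ground-truth quality scores per patch
patches = np.random.rand(1000, 48, 48, 3).astype('float32')  # placeholder data
scores = np.random.rand(1000).astype('float32')               # placeholder targets

X_train, X_test, y_train, y_test = train_test_split(
    patches, scores, test_size=0.2, random_state=42)          # 80% / 20% split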
The CNN model is as follows:
print('CNN Model loading...')
from keras import optimizers
from keras.models import Sequential
from keras.layers.convolutional import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers import Activation, Dense, Dropout, Flatten
from keras import backend as K
from keras.callbacks import LearningRateScheduler
print('CNN Model working...')
model3=Sequential()
model3.add(Conv2D(filters=32, kernel_size=(3, 3), padding="same",
                  input_shape=X_train.shape[1:], activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Conv2D(filters=64, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Conv2D(filters=128, kernel_size=(3, 3), padding="same", activation='relu'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Flatten())
model3.add(Dense(512, activation='relu'))
#model3.add(Dense(512, activation='relu'))
model3.add(Dense(1, activation='relu'))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model3.compile(loss='mse', optimizer=sgd, metrics=['accuracy'])
# Multiply the learning rate by 0.9 every second epoch
def scheduler(epoch):
    if epoch % 2 == 0 and epoch != 0:
        lr = K.get_value(model3.optimizer.lr)
        K.set_value(model3.optimizer.lr, lr*.9)
        print("lr changed to {}".format(lr*.9))
    return K.get_value(model3.optimizer.lr)
lr_decay = LearningRateScheduler(scheduler)
model3_fit = model3.fit(X_train, y_train, validation_data=(X_test, y_test),
                        epochs=10, verbose=1, batch_size=100,
                        callbacks=[lr_decay])
print('CNN Model done...')
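For reference, this is how I read out the predicted quality scores on the test patches afterwards (nothing beyond standard predict/evaluate calls on the model defined above):

# Predict quality scores for the 48x48 test patches and check the MSE by hand.
pred_scores = model3.predict(X_test, batch_size=100).ravel()  # shape (num_patches,)
test_mse = ((pred_scores - y_test) ** 2).mean()
print('Test MSE:', test_mse)

# Keras' own evaluation (returns [loss, accuracy] with the compile settings above).
print(model3.evaluate(X_test, y_test, batch_size=100, verbose=0))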
The problem is that this network reports an accuracy of 0.0000e+00 plus something and a loss around 2753, which is about as bad as it gets. I have also tried different parameters for the CNN, but it keeps giving me this worst-case performance.
My actual goal is to obtain the quality scores with VGG16. Initially it performed poorly, which is why I am first trying to get good results with a simpler CNN model.
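For context, this is roughly the VGG16 setup I have in mind (a sketch only, assuming 3-channel 48x48 inputs and ImageNet weights, not the exact code I ran):

# Sketch only: VGG16 backbone with a small regression head for the quality score.
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

base = VGG16(weights='imagenet', include_top=False,
             input_shape=(48, 48, 3))         # assumes 3-channel patches
for layer in base.layers:
    layer.trainable = False                   # start by training only the head

x = Flatten()(base.output)
x = Dense(512, activation='relu')(x)
out = Dense(1, activation='relu')(x)          # single quality-score output

vgg_model = Model(inputs=base.input, outputs=out)
vgg_model.compile(loss='mse', optimizer=sgd, metrics=['accuracy'])  # same SGD as above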
I would appreciate any help.
Here is the output for the 10 epochs:
CNN Model loading...
CNN Model working...
Train on 195840 samples, validate on 48960 samples
Epoch 1/10
195840/195840 [==============================] - 146s 746us/step - loss: 2652.0550 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 2/10
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 3/10
lr changed to 0.008999999798834325
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 4/10
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 5/10
lr changed to 0.008099999651312828
195840/195840 [==============================] - 145s 742us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 6/10
195840/195840 [==============================] - 146s 746us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 7/10
lr changed to 0.007289999350905419
195840/195840 [==============================] - 146s 745us/step - loss: 2589.2725 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 8/10
195840/195840 [==============================] - 146s 747us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 9/10
lr changed to 0.006560999248176813
195840/195840 [==============================] - 148s 757us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
Epoch 10/10
195840/195840 [==============================] - 149s 759us/step - loss: 2589.2724 - acc: 0.0000e+00 - val_loss: 2589.8413 - val_acc: 0.0000e+00
I have also looked at the results after changing the value of 'lr' in the optimizer, but it makes little difference to the performance.
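For example, a run with a smaller starting rate (illustrative value only, I am not listing every combination I tried) behaves the same way:

# Illustrative only: other learning rates give essentially the same flat loss curve.
sgd = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model3.compile(loss='mse', optimizer=sgd, metrics=['accuracy'])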