I am working with two different datasets, each containing 1200 images. The first dataset has 4 classes and the second has 6 classes.
It is a simple image classification problem, but during training the validation accuracy stays at exactly the same value at every epoch, for both datasets.
I have resized all images in both datasets to 100x100 using ImageMagick.
I don't know where I am going wrong. Thanks in advance.
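For reference, a minimal sketch of how an array with the shape loaded below, (N, 3, 100, 100), could be assembled from the resized images and saved to image-data.npy / image-class.npy; the directory layout and the use of PIL here are assumptions, not something stated in the original post:

import os
import numpy as np
from PIL import Image

data_dir = 'images'  # assumed layout: one sub-directory per class, images already 100x100
class_names = sorted(os.listdir(data_dir))

images, labels = [], []
for label, name in enumerate(class_names):
    class_dir = os.path.join(data_dir, name)
    for fname in sorted(os.listdir(class_dir)):
        img = Image.open(os.path.join(class_dir, fname)).convert('RGB')
        # Transpose from (100, 100, 3) to channels-first (3, 100, 100),
        # matching the Theano image ordering used by the model below.
        images.append(np.asarray(img, dtype='float32').transpose(2, 0, 1))
        labels.append(label)

np.save('image-data.npy', np.array(images, dtype='float32'))
np.save('image-class.npy', np.array(labels))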
Terminal output:
Using Theano backend.
Couldn't import dot_parser, loading of dot files will not be possible.
X_train shape: (880, 3, 100, 100)
880 train samples
220 test samples
train:
0 418
3 179
2 174
1 109
dtype: int64
test:
0 98
3 55
2 43
1 24
dtype: int64
Train on 880 samples, validate on 220 samples
Epoch 1/5
880/880 [==============================] - 582s - loss: 1.3444 - acc: 0.4500 - val_loss: 1.2752 - val_acc: 0.4455
Epoch 2/5
880/880 [==============================] - 540s - loss: 1.2624 - acc: 0.4750 - val_loss: 1.2802 - val_acc: 0.4455
Epoch 3/5
880/880 [==============================] - 540s - loss: 1.2637 - acc: 0.4750 - val_loss: 1.2712 - val_acc: 0.4455
Epoch 4/5
880/880 [==============================] - 538s - loss: 1.2484 - acc: 0.4750 - val_loss: 1.2623 - val_acc: 0.4455
Epoch 5/5
880/880 [==============================] - 537s - loss: 1.2375 - acc: 0.4750 - val_loss: 1.2486 - val_acc: 0.4455
prediction on test data:
In [26]: model.predict_classes(X_test)
220/220 [==============================] - 37s
Out[26]:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Code:
from __future__ import print_function
# (several of these imports are unused in the script below)
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, Reshape
from keras.layers.convolutional import Convolution2D, MaxPooling2D, Convolution1D, MaxPooling1D
from keras.optimizers import SGD
from keras.utils import np_utils, generic_utils
import numpy as np
from sklearn.cross_validation import train_test_split
import pandas as pd

# Training configuration
batch_size = 30
nb_classes = 4
nb_epoch = 10
img_rows, img_cols = 100, 100
img_channels = 3

# Load the preprocessed image arrays and their integer class labels
X = np.load('image-data.npy')
y = np.load('image-class.npy')

# the data, shuffled and split between train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=100)
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print("train:\n ", pd.value_counts(y_train))
print("test:\n", pd.value_counts(y_test))

# Convert integer class labels to one-hot vectors
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

# Two convolutional blocks followed by a dense classifier
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

# Compile with SGD and train, validating on the held-out test split
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, shuffle=True,
          show_accuracy=True, validation_data=(X_test, Y_test))

# Predicted class indices on the test set
out = model.predict_classes(X_test)
Answer 0 (score: 0):
The problem is with the optimizer. As can be seen, you are using SGD as the optimizer, which often trains poorly on CNNs like this one. Try the adam or nadam optimizer, or a tanh activation, instead.
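A minimal sketch of that change, assuming the same Theano-era Keras API used in the question (the model definition and the fit call stay exactly as posted); only the optimizer passed to compile is swapped:

from keras.optimizers import Adam

# Replace the SGD optimizer with Adam; the rest of the script is unchanged.
adam = Adam(lr=0.001)  # Keras' default learning rate for Adam
model.compile(loss='categorical_crossentropy', optimizer=adam)
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, shuffle=True,
          show_accuracy=True, validation_data=(X_test, Y_test))

Nadam can be swapped in the same way (from keras.optimizers import Nadam) if the installed Keras version provides it.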