Why doesn't this Keras network "learn"?

Time: 2018-07-05 09:19:51

Tags: tensorflow keras deep-learning

I'm trying to build a convolutional neural network to classify cats and dogs (something very basic, since I want to learn). One of the approaches I'm trying is to use 2 output neurons, one per class (instead of using just 1 and mapping 0 -> cat and 1 -> dog). But for some reason the network does not learn. Can anyone help me?

Here is the model:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop,Adam
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

optimizer = Adam(lr=1e-4)
objective = 'categorical_crossentropy'


def classifier():

    model = Sequential()
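
    # `train` is assumed to be the preprocessed image array, defined elsewhere
    # in the post; its shape determines the input_shape below.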

    model.add(Conv2D(64, 3, padding='same',input_shape=train.shape[1:],activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))


    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(2))
    model.add(Activation('softmax'))

    print("Compiling model...")
    model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
    return model

print("Creating model:")
model = classifier()
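
A quick way to sanity-check the architecture (a sketch added for reference, assuming the model builds without errors) is to print each layer's output shape and parameter count:

model.summary()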

Here is the main loop:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

epochs = 5000
batch_size = 16

class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.val_losses = []

    def on_epoch_end(self, epoch, logs={}):  # first argument is the epoch index
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))

early_stopping = EarlyStopping(monitor='val_loss', patience=4, verbose=1, mode='min')        


def run():

    history = LossHistory()
    print("running model...")
    model.fit(train, labels, batch_size=batch_size, epochs=epochs,
              validation_split=0.10, verbose=2, shuffle=True, callbacks=[history, early_stopping])

    print("making predictions on test set...")
    predictions = model.predict(test, verbose=0)
    return predictions, history

predictions, history = run()

loss = history.losses
val_loss = history.val_losses
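
For reference, a minimal sketch (assuming matplotlib is installed) of plotting the recorded losses to see whether the network is learning at all:

import matplotlib.pyplot as plt

plt.plot(loss, label='training loss')
plt.plot(val_loss, label='validation loss')
plt.xlabel('epoch')
plt.legend()
plt.show()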

Here is a sample of the input labels:

array([[1, 0],
       [0, 1],
       [1, 0],
       ..., 
       [0, 1],
       [0, 1],
       [0, 1]])
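
(One-hot labels of this form can be produced from integer class indices with np_utils.to_categorical; a minimal sketch, where raw_labels is a hypothetical array of 0s and 1s:)

from keras.utils import np_utils

# raw_labels is a hypothetical placeholder: 0 -> cat, 1 -> dog
labels = np_utils.to_categorical(raw_labels, num_classes=2)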

PS: don't mind the input format, since the same input works for a binary classifier.

Thanks everyone, any help will be greatly appreciated.

1 answer:

Answer 0: (score: 0)

The rate argument of your dropout layers is too large. Dropout layers serve as a regularization technique in deep neural networks and help overcome overfitting. The rate argument specifies what fraction of the previous layer's activations to drop during training; a rate of 0.5 means dropping 50% of the previous layer's activations. Although a large rate can sometimes work, it can also hinder how fast (or whether) the network learns. You should therefore be careful when choosing the rate argument of a dropout layer.
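
For example, a minimal sketch of the fully-connected head with a smaller rate (0.2 here is an arbitrary illustrative value, not a tuned one):

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))  # drop 20% of the previous layer's activations instead of 50%

model.add(Dense(2))
model.add(Activation('softmax'))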