How do I change my multi-class training to binary classification in Keras?

Asked: 2019-11-14 22:55:06

Tags: python keras deep-learning classification multiclass-classification

I am new to deep learning and Keras. With the simple training code below I classify 10 classes. Now I want to reuse this code and turn it into a binary classifier, so that it only tells me whether an image is my object or not.

I tried changing the activation from softmax to sigmoid and also updated the loss to loss='binary_crossentropy'. Are those changes enough, or do I need to change anything else?

I get the following error message:

  File "train.py", line 94, in <module>
    shuffle=True, callbacks=callbacks_list)
  File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1732, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_generator.py", line 260, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python3.5/dist-packages/keras/callbacks/callbacks.py", line 152, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/usr/local/lib/python3.5/dist-packages/keras/callbacks/callbacks.py", line 702, in on_epoch_end
    filepath = self.filepath.format(epoch=epoch + 1, **logs)
KeyError: 'acc'

Here is my simple training code for multi-class classification:

# Imports assumed from the usage below (not shown in the original question)
import os
from keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, Flatten
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint

#==========================
HEIGHT = 300
WIDTH = 300
TRAIN_DIR = "data"
BATCH_SIZE = 8 #8
steps_per_epoch = 1000 #1000
NUM_EPOCHS = 50 #50
lr= 0.00001
#==========================
FC_LAYERS = [1024, 1024]
dropout = 0.5

def build_finetune_model(base_model, dropout, fc_layers, num_classes):
    for layer in base_model.layers:
        layer.trainable = False

    x = base_model.output
    x = Flatten()(x)
    for fc in fc_layers:
        # New FC layer, random init
        x = Dense(fc, activation='relu')(x) 
        x = Dropout(dropout)(x)

    # New softmax layer
    predictions = Dense(num_classes, activation='softmax')(x) 
    finetune_model = Model(inputs=base_model.input, outputs=predictions)
    return finetune_model

train_datagen =  ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(TRAIN_DIR, 
                                                    target_size=(HEIGHT, WIDTH), 
                                                    batch_size=BATCH_SIZE)
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(HEIGHT, WIDTH, 3))

root=TRAIN_DIR
class_list = [ item for item in os.listdir(root) if os.path.isdir(os.path.join(root, item)) ]
print (class_list)

FC_LAYERS = [1024, 1024]
dropout = 0.5

finetune_model = build_finetune_model(base_model, dropout=dropout, fc_layers=FC_LAYERS, num_classes=len(class_list))
adam = Adam(lr=0.00001)
finetune_model.compile(adam, loss='categorical_crossentropy', metrics=['accuracy'])
filepath="./checkpoints/" + "MobileNetV2_{epoch:02d}_{acc:.2f}" +"_model_weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor=["acc"], verbose=1, mode='max', save_weights_only=True)
callbacks_list = [checkpoint]

history = finetune_model.fit_generator(train_generator, epochs=NUM_EPOCHS, workers=8, 
                                       steps_per_epoch=steps_per_epoch, 
                                       shuffle=True, callbacks=callbacks_list)

1 Answer:

Answer (score: 2):

Well, you should have only one output node, since it will give the probability of the class (and 1 minus that value is then the probability of not being the class).

Changing the activation to sigmoid is correct, since you want a probability for that single class rather than a joint distribution over several classes.

Since you are dealing with binary target data, binary cross-entropy is the appropriate loss.
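
As a minimal sketch, reusing the names from the question's code (x, base_model, train_datagen, TRAIN_DIR, HEIGHT, WIDTH, BATCH_SIZE), the binary head and compile step could look like the snippet below. Note that flow_from_directory defaults to one-hot categorical labels, so class_mode='binary' is needed so the generator's labels match the single sigmoid output:

# One sigmoid unit instead of a softmax over num_classes
predictions = Dense(1, activation='sigmoid')(x)
finetune_model = Model(inputs=base_model.input, outputs=predictions)

# Binary cross-entropy pairs with the single sigmoid output
finetune_model.compile(Adam(lr=0.00001), loss='binary_crossentropy', metrics=['accuracy'])

# The generator must yield 0/1 labels instead of one-hot vectors
train_generator = train_datagen.flow_from_directory(TRAIN_DIR,
                                                    target_size=(HEIGHT, WIDTH),
                                                    batch_size=BATCH_SIZE,
                                                    class_mode='binary')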

Finally, the error looks like it comes from the key you pass to monitor. Try replacing it with monitor="val_accuracy".
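
For illustration, a hedged sketch of a checkpoint setup whose keys are consistent: monitor expects a single string, and the {...} placeholders in filepath must be names that actually appear in the training logs. "val_accuracy" is only logged when validation data is passed to fit_generator; without it, recent Keras versions log the metric as "accuracy", which is why the original {acc:.2f} placeholder raises KeyError: 'acc'.

# Placeholder key and monitor key both match the logged metric name
filepath = "./checkpoints/MobileNetV2_{epoch:02d}_{accuracy:.2f}_model_weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor='accuracy', verbose=1,
                             mode='max', save_weights_only=True)
callbacks_list = [checkpoint]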