Volatile GPU-Util remains low even with workers set to 12

Date: 2018-12-12 07:59:39

Tags: keras

This is part of my code.

from keras.preprocessing.image import ImageDataGenerator  # import needed for the generators below

batch_size = 512

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)

# this is a generator that will read pictures found in
# subfolers of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
    '/home/archimedes/abs/data/scanVsColor/colorVsScan/train',  # this is the target directory
    target_size=(150, 150),  # all images will be resized to 150x150
    batch_size=batch_size,
    class_mode='binary')  # since we use binary_crossentropy loss, we need binary labels

# this is a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
    '/home/archimedes/abs/data/scanVsColor/colorVsScan/test',
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    workers=12,
    use_multiprocessing=True,
    validation_steps=800 // batch_size)
model.save_weights('pigminator.h5')  # always save your weights after or during training
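For reference, with batch_size = 512 the integer divisions above work out to steps_per_epoch = 2000 // 512 = 3 and validation_steps = 800 // 512 = 1, so each epoch draws at most 3 × 512 = 1536 training images from the generator.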

I have added workers=12 and use_multiprocessing=True, but when I train the model the Volatile GPU-Util reported by nvidia-smi stays at 0 even though the script consumes GPU memory. Any suggestions on how to increase GPU utilization would be very helpful. Also, given that the batch size is set to 512 and yet the code does not crash, I suspect that training is actually running on the CPU even though the script occupies GPU memory.
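One way to test that suspicion is to ask the TensorFlow backend which devices it actually sees and where it places ops. Below is a minimal diagnostic sketch for the TF 1.x backend that Keras used at the time, assuming the tensorflow-gpu package is what is installed:

from tensorflow.python.client import device_lib
import tensorflow as tf

# List every device visible to TensorFlow. If no "/device:GPU:0" entry
# appears here, Keras will silently train on the CPU even while CUDA
# initialization keeps some GPU memory allocated.
print(device_lib.list_local_devices())

# Alternatively, create a session that logs the device each op is placed
# on, so any CPU fallback is visible in the console output.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Running watch -n 1 nvidia-smi in a separate terminal also shows the Volatile GPU-Util column updating live during training.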

Thanks in advance

0 Answers:

No answers