VGG16 training takes much longer than expected

Date: 2019-06-12 21:57:17

Tags: tensorflow keras google-colaboratory

Using transfer learning, I am trying to train VGG16 in Keras on Google Colab. Here is the code from the notebook (note: outputs are shown as comments):

    from keras.layers import Dense, Flatten
    from keras.applications import vgg16
    from keras.preprocessing.image import ImageDataGenerator
    from keras.models import Model

    base_model = vgg16.VGG16(include_top=False, weights='imagenet',
                             input_shape=(224, 224, 3))
    for layer in base_model.layers:
      layer.trainable = False
    base_model.summary()

    # Total params: 14,714,688
    # Trainable params: 0
    # Non-trainable params: 14,714,688

    x = base_model.output
    x = Flatten(name='flatten')(x)
    x = Dense(10, activation='softmax', name='predictions')(x)
    model = Model(inputs=base_model.input, outputs=x)
    model.summary()

    # Total params: 14,965,578
    # Trainable params: 250,890
    # Non-trainable params: 14,714,688

    model.compile(optimizer='adam', 
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])


    train_datagen = ImageDataGenerator(
        rescale=1./255,
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        fill_mode='nearest',
    )
    validation_datagen = ImageDataGenerator(
        rescale=1./255,
    )

    train_generator = train_datagen.flow_from_directory(
            '/content/drive/My Drive/Colab Notebooks/domat/solo-dataset/train/', 
            target_size=(224, 224),
            batch_size=32,
            class_mode='categorical',
    )
    validation_generator = validation_datagen.flow_from_directory(
            '/content/drive/My Drive/Colab Notebooks/domat/solo-dataset/validation/',
            target_size=(224, 224),
            batch_size=32,
            class_mode='categorical',
    )

    # Found 11614 images belonging to 10 classes.
    # Found 2884 images belonging to 10 classes.

    # check if GPU is running
    import tensorflow as tf
    device_name = tf.test.gpu_device_name()
    if device_name != '/device:GPU:0':
      raise SystemError('GPU device not found')
    print('Found GPU at: {}'.format(device_name))

    # Found GPU at: /device:GPU:0

    t_steps = 11614 // 32
    v_steps = 2884 // 32
    history = model.fit_generator(train_generator, 
                                  epochs=500, 
                                  steps_per_epoch=t_steps, 
                                  validation_data=validation_generator,
                                  validation_steps=v_steps,
                                 )

    # Epoch 1/500
    #   8/362 [..............................] - ETA: 41:02 - loss: 2.9058 - acc: 0.2383

So for some reason a single epoch takes about 40 minutes, and I really don't understand why it is so slow.
Previously, with a different setup (more fully connected layers added on top), each epoch finished in about 3 minutes, although it clearly overfit, since there were 14 million trainable parameters and the dataset was much smaller.
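
One quick way to tell whether the data pipeline, rather than the GPU, is the bottleneck is to time the generator on its own. A minimal diagnostic sketch (not from the original notebook), reusing the `train_generator` defined above:

    import time

    # Pull a few batches from the generator with no model involved. If each
    # batch takes on the order of seconds, epoch time is dominated by reading
    # images from Google Drive rather than by computation on the GPU.
    n_batches = 10
    t0 = time.time()
    for _ in range(n_batches):
        next(train_generator)
    print('{:.2f} s per batch'.format((time.time() - t0) / n_batches))

With 362 steps per epoch, anything around 6–7 seconds per batch accounts for the observed ~40 minutes on its own; copying the dataset to the Colab VM's local disk first is a common workaround.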

Does anyone have any ideas on how to fix this? I have tried a million things, but it is just way too slow. I can't even get back to the original configuration to check what I was doing before, when each epoch took about 3 minutes to finish.
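
Since every layer of the base model is frozen, one commonly suggested speed-up (a sketch under that assumption, not part of the original post) is to run the convolutional base over the data once, cache the resulting features, and train only the ~250K-parameter head on the cached arrays. The trade-off is losing per-epoch augmentation, because each image passes through the base a single time:

    from keras import Sequential
    from keras.layers import Dense, Flatten
    from keras.utils import to_categorical

    # Rescale-only generator with shuffle=False so features and labels align.
    plain_gen = validation_datagen.flow_from_directory(
            '/content/drive/My Drive/Colab Notebooks/domat/solo-dataset/train/',
            target_size=(224, 224),
            batch_size=32,
            class_mode='categorical',
            shuffle=False,
    )

    # One pass of the frozen base over the data; 7x7x512 float32 features for
    # ~11.6k images is roughly 1.2 GB, which fits in Colab's RAM.
    train_feats = base_model.predict_generator(plain_gen, steps=t_steps, verbose=1)
    train_labels = to_categorical(plain_gen.classes[:train_feats.shape[0]], 10)

    # Train only the small head; each epoch now updates 250,890 parameters
    # and never touches Google Drive.
    head = Sequential([
        Flatten(input_shape=train_feats.shape[1:]),
        Dense(10, activation='softmax'),
    ])
    head.compile(optimizer='adam',
                 loss='categorical_crossentropy',
                 metrics=['accuracy'])
    head.fit(train_feats, train_labels, epochs=50, batch_size=32)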

1 Answer:

Answer 0 (score: -1)

Set your environment to GPU mode in the Colab settings.
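
In Colab this is Runtime → Change runtime type → Hardware accelerator → GPU. The notebook above already confirms TensorFlow sees a GPU, so a complementary check is to watch utilization while training; a GPU sitting near 0% usually points to an input-pipeline bottleneck:

    # Run in a Colab cell; lists the attached GPU and its current utilization.
    !nvidia-smi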