Training on the GPU is very slow

Asked: 2020-09-04 16:31:41

Tags: python tensorflow keras deep-learning

I have a dataset of 570,000 images, split 90%/5%/5% into train, validation, and test sets.

I started training a model using transfer learning with MobileNetV2.

The data is loaded as follows:

from tensorflow.keras.preprocessing import image_dataset_from_directory

train_dataset = image_dataset_from_directory(
    directory=TRAIN_DIR,
    labels="inferred",
    label_mode="categorical",
    class_names=["0", "10", "5"],
    image_size=SIZE,
    seed=SEED,
    subset=None,
    interpolation="bilinear",
    follow_links=False,
)

The model:

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten
from tensorflow.keras.models import Model

baseModel = MobileNetV2(include_top=False,
                        input_shape=INPUT_SHAPE,
                        weights='imagenet')

headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(3, activation="softmax")(headModel)
# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
    layer.trainable = False

Model summary:

Total params: 2,915,395
Trainable params: 657,411
Non-trainable params: 2,257,984

The Nvidia K80 I am using is in use, per nvidia-smi:

jupyter@tensorflow-4-vm:~$ nvidia-smi
Fri Sep  4 16:23:01 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01    Driver Version: 418.87.01    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   55C    P0    58W / 149W |  10871MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      8129      C   /opt/conda/bin/python                      10858MiB |
+-----------------------------------------------------------------------------+

The metrics and training configuration:

from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.metrics import (AUC, BinaryAccuracy, FalseNegatives,
                                      FalsePositives, Precision, Recall,
                                      TrueNegatives, TruePositives)
from tensorflow.keras.optimizers import Adam

METRICS = [
    TruePositives(name='tp'),
    FalsePositives(name='fp'),
    TrueNegatives(name='tn'),
    FalseNegatives(name='fn'),
    BinaryAccuracy(name='accuracy'),
    Precision(name='precision'),
    Recall(name='recall'),
    AUC(name='auc'),
]

model.compile(optimizer=Adam(learning_rate=0.0001), 
              loss="categorical_crossentropy",
              metrics=METRICS)

CALLBACKS = [
    ReduceLROnPlateau(verbose=1),
    ModelCheckpoint(
        '/home/jupyter/checkpoint/model.{epoch:02d}-{val_loss:.2f}.hdf5',
        verbose=1),
]
history = model.fit(train_dataset,
                    epochs=50,
                    verbose=1,
                    batch_size=32,
                    callbacks=CALLBACKS,
                    validation_data=validation_dataset)

But training a single epoch is extremely slow! What could be causing this?

# Batch size = 32

Epoch 1/50
   17/16229 [..............................] - ETA: 196:20:59 - loss: 1.2727 - tp: 169.0000 - fp: 211.0000 - tn: 877.0000 - fn: 375.0000 - accuracy: 0.6409 - precision: 0.4447 - recall: 0.3107 - auc: 0.5755

2 Answers:

Answer 0 (score: 1):

I think data loading may be the problem. If you are loading every file over the network, that alone can account for the slowness. The best approach is to copy the data to local storage and then train. If that is not possible, try loading the data with TFRecords (you can see how to use them here: https://www.tensorflow.org/tutorials/load_data/tfrecord). Also, make sure the storage and the VM are in the same region.
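To illustrate the TFRecord suggestion, here is a minimal sketch of an input pipeline, assuming each record holds an `image` feature with encoded JPEG bytes and an integer `label` feature; the feature names and the `train-*.tfrecord` file pattern are illustrative, not from the question:

import tensorflow as tf

# Assumed feature layout; adjust to match however the records were written.
FEATURES = {
    "image": tf.io.FixedLenFeature([], tf.string),  # encoded JPEG bytes
    "label": tf.io.FixedLenFeature([], tf.int64),   # integer class id
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_jpeg(example["image"], channels=3)
    image = tf.image.resize(image, SIZE)           # SIZE as defined in the question
    label = tf.one_hot(example["label"], depth=3)  # 3 classes, one-hot for categorical loss
    return image, label

train_dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("train-*.tfrecord"))
    .map(parse_example, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.experimental.AUTOTUNE)
)

Reading a handful of large TFRecord files sequentially avoids the per-file round trips that make loading hundreds of thousands of small images over the network slow.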

Answer 1 (score: 0):

Loading the dataset directly onto the VM instance solved the problem:

gcloud compute scp /Users/yudhiesh/Desktop/frames_split.zip jupyter@tensorflow-5-vm:~

Then unzip the archive into the home directory of the VM instance, for example as shown below.
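A minimal example of the unzip step, assuming the `unzip` utility is installed on the VM:

jupyter@tensorflow-5-vm:~$ unzip frames_split.zip -d ~/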

Model training now takes less than an hour per epoch.