ResNet model training takes too long

Date: 2020-09-25 16:42:21

Tags: python tensorflow machine-learning keras deep-learning

I am following this tutorial to learn transfer learning on a model. As you can see, a single epoch there averages about 1 second.

Epoch 1/100
1080/1080 [==============================] - 10s 10ms/step - loss: 3.6862 - acc: 0.2000
Epoch 2/100
1080/1080 [==============================] - 1s 1ms/step - loss: 3.0746 - acc: 0.2574
Epoch 3/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.6839 - acc: 0.3185
Epoch 4/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.3929 - acc: 0.3583
Epoch 5/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.1382 - acc: 0.3870
Epoch 6/100
1080/1080 [==============================] - 1s 1ms/step - loss: 1.7810 - acc: 0.4593

But when I follow almost the same code for a CIFAR model, a single epoch takes about an hour to run.

Train on 50000 samples
 3744/50000 [=>............................] - ETA: 43:38 - loss: 3.3223 - acc: 0.1760

My code is

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras import Model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0

y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

base_model = ResNet50(weights= None, include_top=False, input_shape= (32,32,3))

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.4)(x)
predictions = Dense(10 , activation= 'softmax')(x)
model = Model(inputs = base_model.input, outputs = predictions)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

hist = model.fit(x_train, y_train)

Note that I am using the CIFAR-10 dataset for this model. Is there something wrong with my code or data? How can I improve this? One epoch taking an hour is far too long. I also have an NVIDIA MX-110 2GB GPU, which TensorFlow is using.
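For reference, `to_categorical` simply one-hot encodes the integer class labels before training. A minimal plain-Python sketch of that transformation (the `to_one_hot` helper below is hypothetical, written only to illustrate what the Keras utility does):

```python
def to_one_hot(labels, num_classes):
    """One-hot encode integer class labels, as tf.keras.utils.to_categorical does."""
    return [[1.0 if i == y else 0.0 for i in range(num_classes)] for y in labels]

print(to_one_hot([0, 2, 1], 3))
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```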

2 Answers:

Answer 0 (score: 1)

It looks like your data is not being batched. As a result, each forward pass of the model sees only one training example, which is very inefficient.

Try setting a batch size in your model.fit() call:

hist = model.fit(x_train, y_train, batch_size=16, epochs=num_epochs, 
                 validation_data=(x_test, y_test), shuffle=True)

Tune the batch size to the largest value that fits in your GPU's memory - try a few different values before settling on one.
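One simple way to find that maximum is to double the batch size until a training step fails, then keep the last size that worked. A minimal sketch of the search, where `fits` is a hypothetical callback you would implement yourself (e.g. by running a single training step for that batch size inside a try/except that catches an out-of-memory error):

```python
def largest_batch_size(fits, start=1, max_size=1024):
    """Double the batch size until `fits` reports failure; return the last size that worked."""
    best, size = None, start
    while size <= max_size and fits(size):
        best = size
        size *= 2
    return best

# Hypothetical example: pretend anything up to 48 samples fits in GPU memory.
print(largest_batch_size(lambda b: b <= 48))  # -> 32
```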

Answer 1 (score: 1)

I copied and ran your code, but to get it to run I had to make the changes below:

import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras import Model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print (len(x_train))
x_train = x_train / 255.0
x_test = x_test / 255.0

y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)

base_model = ResNet50(weights= None, include_top=False, input_shape= (32,32,3))

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.4)(x)
predictions = Dense(10 , activation= 'softmax')(x)
model = Model(inputs = base_model.input, outputs = predictions)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

hist = model.fit(x_train, y_train)
# the result for 2 epochs is shown below
50000
Epoch 1/2
1563/1563 [==============================] - 58s 37ms/step - loss: 2.8654 - acc: 0.2537
Epoch 2/2
1563/1563 [==============================] - 51s 33ms/step - loss: 2.5331 - acc: 0.2748

Per the model.fit documentation, if no batch size is specified it defaults to 32, so 50,000 samples / 32 = 1563 steps per epoch. For some reason the batch size in your code is defaulting to 1; I do not know why. If you set batch_size=50, each epoch will take 1000 steps. To speed things up further, I would set weights="imagenet" and freeze the layers in the base model:
for layer in base_model.layers:
    layer.trainable = False
# if you set batch_size=50, weights="imagenet" with the base model frozen you get:
50000
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94773248/94765736 [==============================] - 5s 0us/step
Epoch 1/2
1000/1000 [==============================] - 16s 16ms/step - loss: 2.5101 - acc: 0.1487
Epoch 2/2
1000/1000 [==============================] - 10s 10ms/step - loss: 2.1159 - acc: 0.2249
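The step counts in the logs above follow directly from the batch size: steps per epoch is the number of samples divided by the batch size, rounded up. A quick check in pure Python (no TensorFlow needed):

```python
import math

samples = 50_000  # CIFAR-10 training set size
for batch_size in (32, 50):
    print(batch_size, math.ceil(samples / batch_size))
# 32 -> 1563 steps, 50 -> 1000 steps
```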