Why am I getting a GPU out-of-memory error here?

Date: 2019-07-05 09:44:06

Tags: python tensorflow keras

I am very new to deep learning and am trying to build a cat/dog classifier with Keras. The model was taking far too long to train on my laptop, so I decided to train it on my desktop, which has a GTX 750 Ti (2 GB). I am using Keras with the tensorflow-gpu backend, but I get an OOM error every time, even after reducing the batch size to 1. How can I control how much memory is allocated to the GPU here?

CODE

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten, Dropout
images = ImageDataGenerator()
train = images.flow_from_directory('./dataset', class_mode='binary', target_size=(200, 200), batch_size=64)

model = Sequential()

model.add(Conv2D(32, (3, 3), padding='same', input_shape=(200,200,3), activation='relu'))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
            optimizer='adam',
            metrics=['accuracy'])

model.fit_generator(train, steps_per_epoch=len(train.filenames) // train.batch_size, epochs=100)

model.save_weights('model.h5')

Here is the model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 200, 200, 32)      896       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 200, 200, 32)      9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 100, 100, 32)      0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 100, 100, 64)      18496     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 100, 100, 64)      36928     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 50, 50, 64)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 50, 50, 128)       73856     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 50, 50, 128)       147584    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 25, 25, 128)       0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 25, 25, 256)       295168    
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 25, 25, 256)       590080    
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 12, 12, 256)       0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 36864)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 256)               9437440   
_________________________________________________________________
dropout_1 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 256)               65792     
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 257       
_________________________________________________________________
activation_1 (Activation)    (None, 1)                 0         
=================================================================
Total params: 10,675,745
Trainable params: 10,675,745
Non-trainable params: 0
_________________________________________________________________

1 Answer:

Answer 0 (score: 1)

Generally, when an OOM error occurs, it is because the batch_size is too large or your VRAM is too small.
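As a side note on the question "how can I control how much memory is allocated to the GPU": with Keras on the tensorflow-gpu 1.x backend, TensorFlow reserves almost all of the GPU memory up front by default. A minimal sketch of how to make it allocate on demand instead (this only changes how memory is reserved; it cannot create VRAM the card does not have):

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                        # allocate GPU memory on demand
# config.gpu_options.per_process_gpu_memory_fraction = 0.9    # or cap the usable fraction instead
K.set_session(tf.Session(config=config))

Run this before building the model; if the model and batch genuinely need more than 2 GB, the OOM error will still occur.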

In your case, the GPU simply runs out of memory because the VRAM is too small. For a neural network with roughly 10,000,000 parameters, 2 GB of video memory is very little.
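To see roughly where the memory goes, you can add up the activation sizes from the model summary above. A back-of-the-envelope sketch, assuming float32 (4-byte) activations and ignoring gradients, Adam's optimizer state and cuDNN workspace:

# Output shapes taken from the model summary above (height, width, channels)
shapes = [
    (200, 200, 32), (200, 200, 32), (100, 100, 32),   # conv1, conv2, pool1
    (100, 100, 64), (100, 100, 64), (50, 50, 64),     # conv3, conv4, pool2
    (50, 50, 128), (50, 50, 128), (25, 25, 128),      # conv5, conv6, pool3
    (25, 25, 256), (25, 25, 256), (12, 12, 256),      # conv7, conv8, pool4
]
act_bytes_per_image = sum(h * w * c for h, w, c in shapes) * 4
param_bytes = 10_675_745 * 4
print(act_bytes_per_image / 2**20)        # ~20.6 MiB of activations per image
print(64 * act_bytes_per_image / 2**20)   # ~1318 MiB of activations for a batch of 64
print(param_bytes / 2**20)                # ~40.7 MiB for the weights alone

On top of that come gradients, Adam's moment estimates and TensorFlow/cuDNN overhead, plus whatever the display itself is using, so a batch of 64 does not fit in 2 GB.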

For computer-vision tasks, most neural networks need at least 6 GB of VRAM to train comfortably.

The real solution is to use a graphics card with more memory.