GPU runs out of memory even when reading a small dataset? Using a "Quadro M1000M 4GB GPU"

Posted: 2018-02-08 05:41:05

Tags: python-3.x tensorflow deep-learning gpu theano

Resource exhausted: OOM when allocating tensor with shape[256,128,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

I am trying to learn deep learning concepts from the fast.ai course using VGG. Even when I try to read a small dataset of just 4 images, I get the error shown above. Here is the link to the vgg16 file I am using: https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16.py

The path in the code below points to the sample data, which contains only 4-5 images.

path = "data/dogscats/sample/"

import vgg16
from vgg16 import Vgg16

batch_size = 4
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)  # needed: finetune() and fit() below use `batches`
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

1 Answer:

Answer 0 (score: 1)

I solved this problem. The TensorFlow backend was using a large amount of GPU memory, so I switched the Keras backend to Theano, which resolved the issue; I don't think it is related to VGG itself. The switch is made in the keras.json file inside the .keras folder, by changing the backend to theano, as sketched below.
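
As a rough illustration of the change the answer describes (not taken from the original post), the ~/.keras/keras.json file might look like the following after the switch. The exact keys depend on your Keras version; "image_dim_ordering": "th" is the setting the Keras 1.x era fast.ai notebooks expect for Theano-style channel ordering, and these values are assumptions rather than the poster's actual file.

{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}

After saving the file, restarting Python and running import keras should print a message like "Using Theano backend.", which confirms the backend switch took effect.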