Setting Theano Flags for Keras Using the GPU

Asked: 2016-07-21 01:54:53

Tags: python theano keras theano-cuda

I am trying to train a model in Keras with the Theano backend and use my GPU. Whether I run 100 images or 20K, I get an error from Theano (see below) saying I don't have enough memory, yet my GPU has 6GB. I used the THEANO_FLAGS from the Keras docs: "THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py". The problem is that I also set the cnmem variable with this one from Stack Overflow (echo -e "\n[global]\nfloatX=float32\ndevice=gpu0\n[lib]\ncnmem=0\n" >> ~/.theanorc), so if you use the flags from Keras, will it still use the cnmem you set with the Stack Overflow one?
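For reference, here is a minimal sketch of how I understand the two settings to interact: THEANO_FLAGS set in the environment should take precedence over ~/.theanorc, so the cnmem setting can be folded into the same flag string (or set via os.environ before Theano is imported). The lib.cnmem=0.8 value below is only an illustrative placeholder, not the value from my actual setup:

    import os

    # Both settings end up in the same Theano config, so they can be combined
    # into one THEANO_FLAGS string; it must be set before Theano is imported.
    # As far as I understand, THEANO_FLAGS overrides ~/.theanorc, and
    # lib.cnmem=0.8 here is only an illustrative value.
    os.environ['THEANO_FLAGS'] = 'device=gpu,floatX=float32,lib.cnmem=0.8'

    import theano  # picks up the flags above at import time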

I have set cnmem to 0.83 (the highest it can go before erroring out right away) and to 0, and in neither case does it get the 822 MB it needs, even though I have 6GB of video memory. I'm sure I'm making some simple mistake, but I can't find any information that helps.

I have CUDA installed on Ubuntu 14.04, and I was just running the Keras MNIST example with "THEANO_FLAGS=device=gpu,floatX=float32 python mnist_transfer_cnn.py".
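To see which settings actually took effect (i.e. whether the command-line flag or ~/.theanorc won), printing Theano's resolved config is the kind of check I had in mind; the attribute names below follow the flag names as I understand them and may need adjusting:

    import theano

    # Print the values Theano resolved after merging THEANO_FLAGS and
    # ~/.theanorc, to see which source actually won for device and cnmem.
    print('device    :', theano.config.device)
    print('floatX    :', theano.config.floatX)
    print('lib.cnmem :', theano.config.lib.cnmem)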

MemoryError: Error allocating 822083584 bytes of device memory (out of  memory).
Apply node that caused the error: GpuElemwise{add,no_inplace} (GpuDnnConv{algo='small', inplace=True}.0, GpuReshape{4}.0)
Toposort index: 375
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, False, True, True))]
Inputs shapes: [(64, 64, 224, 224), (1, 64, 1, 1)]
Inputs strides: [(3211264, 50176, 224, 1), (0, 1, 0, 0)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuElemwise{Composite{(i0 * (i1 + Abs(i1)))},no_inplace}(CudaNdarrayConstant{[[[[ 0.5]]]]}, GpuElemwise{add,no_inplace}.0), GpuElemwise{Composite{((i0 * i1) + (i0 * i1 * sgn(i2)))}}[(0, 1)](CudaNdarrayConstant{[[[[ 0.5]]]]}, GpuDnnPoolGrad{mode='max'}.0, GpuElemwise{add,no_inplace}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
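For what it's worth, the failed allocation matches exactly one float32 tensor of the first input shape in the traceback, (64, 64, 224, 224), as this back-of-the-envelope arithmetic shows (the batch size of 64 is taken from that shape):

    # Rough check: the failed allocation is exactly one float32 tensor of the
    # first input shape in the traceback, (64, 64, 224, 224).
    batch, channels, height, width = 64, 64, 224, 224
    nbytes = batch * channels * height * width * 4   # 4 bytes per float32
    print(nbytes)                # 822083584, the exact figure in the error
    print(nbytes / 1024.0 ** 2)  # 784.0 MB for this single intermediate result

Halving the batch size would halve that single allocation, though that is only an observation about the arithmetic, not a diagnosis.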

0 Answers:

No answers yet.