I am testing whether Theano uses the GPU, using the script provided in the tutorial for that purpose:

# Start gpu_test.py
# From http://deeplearning.net/software/theano/tutorial/using_gpu.html#using-gpu
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
# End gpu_test.py
If I specify floatX=float32, it runs on the GPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float32' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(Gp
Looping 1000 times took 1.458473 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the gpu
If I do not specify floatX=float32, it runs on the CPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.086261 seconds
Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
1.62323285]
Used the cpu
If I specify floatX=float64, it runs on the CPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float64' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.148040 seconds
Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
1.62323285]
Used the cpu
Why does the floatX flag affect whether Theano uses the GPU?
I checked my setup with pip freeze, import platform; platform.architecture(), nvidia-smi, nvcc --version, lsb_release -a, and uname -i. I read the documentation on floatX, but it did not help. It only says:
config.floatX
String value: 'float64' or 'float32'
Default: 'float64'
This sets the default dtype returned by tensor.matrix(), tensor.vector(), and similar functions. It also sets the default Theano bit width for arguments passed as Python floats.
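The dtype side of this can be seen with plain NumPy, without Theano at all: numpy.random.rand always samples in float64, and the old GPU backend can only compute with float32, so unless the data is downcast (which setting config.floatX to 'float32' makes the script's asarray call do) the shared variable stays float64 and the graph runs on the CPU. A minimal sketch:

```python
import numpy

rng = numpy.random.RandomState(22)

# numpy.random.rand always produces double-precision samples.
a = rng.rand(4)
print(a.dtype)  # float64

# With floatX=float32, the script's numpy.asarray(..., config.floatX)
# call downcasts to single precision, the only dtype the old
# GPU backend can compute with.
b = numpy.asarray(a, dtype='float32')
print(b.dtype)  # float32
```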
Answer 0 (score: 2)
From http://deeplearning.net/software/theano/tutorial/using_gpu.html#gpuarray-backend I read that float64 computations can be performed on the GPU, but you have to install libgpuarray from source.
I managed to install it (see this script); I used virtualenv, so you do not even need sudo.
Once it is installed, you can use the old backend with the config flag device=gpu and the new backend with device=cuda.
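Assuming libgpuarray is installed, the two backends can be selected like this (a sketch; gpu2/cuda2 reuse the device index from the question, and only the new backend honors floatX=float64 on the GPU):

```shell
# Old CUDA backend: only float32 graphs are run on the GPU.
THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float32' python gpu_test.py

# New libgpuarray backend: float64 graphs can run on the GPU too.
THEANO_FLAGS='mode=FAST_RUN,device=cuda2,floatX=float64' python gpu_test.py
```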
The new backend can perform 64-bit computations, but for me it behaves differently: some operations stopped working. ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law :)
Answer 1 (score: 1)
As far as I know, this is because they have not yet implemented float64 for the GPU.
From http://deeplearning.net/software/theano/tutorial/using_gpu.html:
Only computations with float32 data types can be accelerated. Better support for float64 is expected in upcoming hardware, but float64 computations are still relatively slow (Jan 2010).