I tried to follow the instructions in Easy Installation of an Optimized Theano on Current Ubuntu, but it doesn't work: whenever I run a Theano script so that it uses the GPU, it gives me the error message:
CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)
More specifically, following the instructions on the linked web page, I performed the following steps:
# Install Theano
sudo apt-get install python-numpy python-scipy python-dev python-pip python-nose g++ libopenblas-dev git
sudo pip install Theano
# Install Nvidia drivers and CUDA
sudo apt-get install nvidia-current
sudo apt-get install nvidia-cuda-toolkit
Then I rebooted and tried running:
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python gpu_test.py # gpu_test.py comes from http://deeplearning.net/software/theano/tutorial/using_gpu.html
But I got:
f@f-Aurora-R4:~$ THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,cuda.root=/usr/lib/nvidia-cuda-toolkit' python gpu_test.py
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 2.199992 seconds
Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761  1.62323284]
Used the cpu
Answer (score: 6)
(I tested the following on Ubuntu 14.04.4 LTS x64 and Kubuntu 14.04.4 LTS x64; I guess it should work for most Ubuntu variants.)
The instructions on the official website are outdated. Instead, you can use the following ones (assuming a freshly installed Kubuntu 14.04 LTS x64):
# Install Theano
sudo apt-get install python-numpy python-scipy python-dev python-pip python-nose g++ libopenblas-dev git
sudo pip install Theano
# Install Nvidia drivers, CUDA and CUDA toolkit, following some instructions from http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb # Got the link at https://developer.nvidia.com/cuda-downloads
sudo dpkg -i cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda
sudo reboot
At this point, running nvidia-smi should work, but running nvcc will not work yet.
# Execute in console, or (add in ~/.bash_profile then run "source ~/.bash_profile"):
export PATH=/usr/local/cuda-7.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH
At this point, both nvidia-smi and nvcc should work.
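A quick sanity check (the exact versions reported will depend on your driver and toolkit):
nvidia-smi   # should list your GPU(s) and the installed driver version
nvcc -V      # should print the CUDA compiler version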
To test whether Theano is able to use the GPU:
Copy-paste the following into gpu_test.py:
# Start gpu_test.py
# From http://deeplearning.net/software/theano/tutorial/using_gpu.html#using-gpu
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
# End gpu_test.py
and run it:
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32' python gpu_test.py
It should return:
f@f-Aurora-R4:~$ THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32' python gpu_test.py
Using gpu device 0: GeForce GTX 690
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.658292 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the gpu
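As a side note, if you don't want to prefix every command with THEANO_FLAGS, Theano also reads its flags from ~/.theanorc. A minimal sketch of an equivalent configuration (adjust as needed, and beware that this overwrites any existing ~/.theanorc):
cat > ~/.theanorc <<'EOF'
[global]
mode = FAST_RUN
device = gpu
floatX = float32
EOF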
To check which CUDA version you have:
nvcc -V
Example:
username@server:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17
To add cuDNN (instructions from http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html):
tar -xvf cudnn-7.0-linux-x64-v3.0-prod.tgz
Option 1: copy the *.h files to CUDA_ROOT/include and the *.so* files to CUDA_ROOT/lib64 (by default, CUDA_ROOT is /usr/local/cuda on Linux):
sudo cp cuda/lib64/* /usr/local/cuda/lib64/
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
Option 2:
export LD_LIBRARY_PATH=/home/user/path_to_CUDNN_folder/lib64:$LD_LIBRARY_PATH
export CPATH=/home/user/path_to_CUDNN_folder/include:$CPATH
export LIBRARY_PATH=/home/user/path_to_CUDNN_folder/lib64:$LIBRARY_PATH
By default, Theano detects whether it can use cuDNN. If it can, it will use it; if it cannot, Theano's optimizations simply will not introduce cuDNN ops, so Theano will still work as long as the user did not introduce them manually.
To get an error if Theano cannot use cuDNN, use this Theano flag: optimizer_including=cudnn.
Example:
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,optimizer_including=cudnn' python gpu_test.py
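To quickly check whether Theano can actually see cuDNN, something along these lines should work (a sketch that assumes the old theano.sandbox.cuda backend used above, where dnn.dnn_available() reports cuDNN availability):
THEANO_FLAGS='device=gpu,floatX=float32' python -c "from theano.sandbox.cuda import dnn; print(dnn.dnn_available())"  # should print True if cuDNN is usable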
To check which cuDNN version you have:
cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
The CNMeM library is "a simple library to help the Deep Learning frameworks manage CUDA memory."
# Build CNMeM without the unit tests
git clone https://github.com/NVIDIA/cnmem.git cnmem
cd cnmem
mkdir build
cd build
sudo apt-get install -y cmake
cmake ..
make
# Copy files to proper location
sudo cp ../include/cnmem.h /usr/local/cuda/include
sudo cp *.so /usr/local/cuda/lib64/
cd ../..
To use it with Theano, you need to add the lib.cnmem flag. For example:
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=0.8,optimizer_including=cudnn' python gpu_test.py
The first output of the script should then be:
Using gpu device 0: GeForce GTX TITAN X (CNMeM is enabled with initial size: 80.0% of memory, cuDNN 5005)
lib.cnmem=0.8 means that it can use up to 80% of the GPU's memory.
The speedup depends on many factors, such as the shapes and the model itself; it ranges from 0 to 2x.
If you do not change the Theano flag allow_gc, you can expect around a 20% speedup on the GPU. In some cases (small models), we saw a 50% speedup.
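If you do want to experiment with disabling Theano's GPU garbage collector, allow_gc can also be passed through THEANO_FLAGS; for illustration only (the default is allow_gc=True):
THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=0.8,allow_gc=False' python gpu_test.py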
As a side note, you can run Theano on multiple CPU cores with the OMP_NUM_THREADS=[number_of_cpu_cores] flag. Example:
OMP_NUM_THREADS=4 python gpu_test.py
The script theano/misc/check_blas.py outputs information regarding which BLAS is used:
cd [theano_git_directory]
OMP_NUM_THREADS=4 python theano/misc/check_blas.py
To run Theano's test suite:
nosetests theano
or
sudo pip install nose-parameterized
# Then, in a Python shell:
import theano
theano.test()
Common issues: