I am trying to get TensorFlow running on the GPU of my MSI Windows 10 machine with an NVIDIA GeForce GTX 960M. I think I have already tried every tip I could find on the internet about this topic, without success, so my question is: can you give me any additional hints that would help me reach my goal of running TensorFlow on the GPU?
More specifically:
I downloaded and installed CUDA Toolkit 8.0 (the file cuda_8.0.61_win10.exe and the patch cuda_8.0.61.2_windows.exe). I ran both with the default options. Then, to check that the installation succeeded, I compiled deviceQuery from the CUDA Samples and ran it successfully. See the output below:
<pre>
C:\ProgramData\NVIDIA Corporation\CUDA Samples\v8.0\bin\win64\Debug>deviceQuery.exe
deviceQuery.exe Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 960M"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 2048 MBytes (2147483648 bytes)
( 5) Multiprocessors, (128) CUDA Cores/MP: 640 CUDA Cores
GPU Max Clock rate: 1176 MHz (1.18 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 960M
Result = PASS
</pre>
...so this looks fine, at least to me... Then I downloaded and unzipped cuDNN v5.1. I also added the path to the library's DLL file to the PATH system variable, and I checked that my graphics card is on the list of compatible devices, which it is.
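To make sure the PATH change actually took effect, a quick check I can run (just a sketch of my own; I am assuming the cuDNN 5.1 Windows archive ships the DLL as cudnn64_5.dll) is to try loading the DLL directly from Python:
<pre>
# Try to load the cuDNN DLL from the directories on PATH; if this fails,
# TensorFlow will not be able to find it either.
import ctypes

try:
    ctypes.WinDLL("cudnn64_5.dll")
    print("cudnn64_5.dll loaded - PATH entry works")
except OSError as err:
    print("could not load cudnn64_5.dll:", err)
</pre>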
Then I installed TensorFlow using the following command:
<pre>
pip install tensorflow-gpu
</pre>
The installation produced no error messages. The last one was:
Successfully installed tensorflow-1.3.0 tensorflow-gpu-1.3.0
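To see whether this TensorFlow build registers the GPU at all, independently of my own program, one simple check (a sketch using TensorFlow's own device_lib helper) is to list the local devices:
<pre>
# Print every device TensorFlow can see; if only the CPU entry appears,
# the CUDA/cuDNN runtime was not loaded when TensorFlow was imported.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)
</pre>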
The program is:
<pre>
import tensorflow as tf

device_name = "/gpu:0"  # ...it works fine with "/cpu:0"; it doesn't with "/gpu:0"

with tf.device(device_name):
    ran_matrix = tf.random_uniform(shape=(1, 1), minval=0, maxval=1)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    result = sess.run(ran_matrix)
    print(result)
</pre>
...and the result was, unfortunately, the one shown in the screenshot below. I ran the program from PyCharm.
The most important error message is:
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'random_uniform/sub': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
[[Node: random_uniform/sub = Sub[T=DT_FLOAT, _device="/device:GPU:0"](random_uniform/max, random_uniform/min)]]
I also tried running the same program on the CPU instead of the GPU. To do that I changed the line above to device_name = "/cpu:0"
...and it worked fine...
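A related variant (only a sketch using standard TensorFlow 1.x session options, not something that solves the underlying problem) is to enable soft placement, so the session falls back to the CPU and logs where the op actually lands instead of raising the error above:
<pre>
# allow_soft_placement lets TensorFlow fall back to the CPU when /gpu:0 is
# not registered; log_device_placement prints the device each op ends up on.
import tensorflow as tf

with tf.device("/gpu:0"):
    ran_matrix = tf.random_uniform(shape=(1, 1), minval=0, maxval=1)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(ran_matrix))
</pre>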
I searched the internet for hints about what might be wrong here, but I could not find any concrete answer (most of the discussions concern problems under Ubuntu; I am on Windows 10 and cannot change that).
Where should I start troubleshooting?
Answer 0 (score: 1)
I just solved this problem by reinstalling tensorflow-gpu and all of its dependency libraries (I had tried the same thing about a month earlier, but at that time it did not help; now it finally works fine :-)). Some of the dependency libraries certainly had newer versions, but I cannot say which of them may have been the root cause of the problem.
Answer 1 (score: -1)
Check this: https://github.com/tensorflow/tensorflow/issues/12416
I had the same problem after upgrading tf from 1.2 to 1.3, and fixed it by updating to cuDNN v6.0.
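For context: as far as I know, the prebuilt TensorFlow 1.3 Windows wheel was built against cuDNN 6, so it looks for cudnn64_6.dll rather than the cudnn64_5.dll that cuDNN 5.1 provides; when that DLL is missing, the GPU device is simply not registered and only the CPU shows up. A small sketch to check which cuDNN DLL is reachable on PATH (the DLL names are my assumption based on the cuDNN archives):
<pre>
# Check which cuDNN generation can be loaded from the directories on PATH.
import ctypes

for dll in ("cudnn64_6.dll", "cudnn64_5.dll"):
    try:
        ctypes.WinDLL(dll)
        print(dll, "found")
    except OSError:
        print(dll, "NOT found")
</pre>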