When I run my Keras script, I get the following output:
Using TensorFlow backend.
2017-06-14 17:40:44.621761: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.1 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621783: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621791: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621795: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.721911: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:40:44.722288: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.69GiB
2017-06-14 17:40:44.722302: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:40:44.722307: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:40:44.722312: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
What does this mean? Am I using the GPU or the CPU version of TensorFlow?
Before installing Keras I was using the GPU version of TensorFlow.
Also, sudo pip3 list shows tensorflow-gpu (1.1.0) and not anything like tensorflow-cpu.
Running the command mentioned in [this stackoverflow question] gives the following:
The TensorFlow library wasn't compiled to use SSE4.1 instructions,
but these are available on your machine and could speed up CPU
computations.
2017-06-14 17:53:31.424793: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use SSE4.2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424803: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424812: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use AVX2 instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424820: W
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow
library wasn't compiled to use FMA instructions, but these are
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.540959: I
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful
NUMA node read from SysFS had negative value (-1), but there must be
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:53:31.541359: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0
with properties:
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 128.12MiB
2017-06-14 17:53:31.541407: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-06-14 17:53:31.541420: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-06-14 17:53:31.541441: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M,
pci bus id: 0000:0a:00.0)
2017-06-14 17:53:31.547902: E
tensorflow/stream_executor/cuda/cuda_driver.cc:893] failed to
allocate 128.12M (134348800 bytes) from device:
CUDA_ERROR_OUT_OF_MEMORY
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
2017-06-14 17:53:31.549482: I
tensorflow/core/common_runtime/direct_session.cc:257] Device
mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce
GTX 850M, pci bus id: 0000:0a:00.0
Answer 0 (score: 67)
You are using the GPU version. You can list the available TensorFlow devices with the following (also check this question):
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
In your case both the CPU and the GPU are available; if you were using the CPU version of TensorFlow, the GPU would not be listed. Since you do not set the TensorFlow device explicitly (with tf.device("..")), TensorFlow will automatically choose your GPU!
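For example, a minimal sketch of explicit device placement using the TF 1.x Session API (the device strings "/gpu:0" and "/cpu:0" are the standard names; the constants below are only placeholders for illustration):
import tensorflow as tf

# Pin these ops to the first GPU explicitly; without the with-block,
# TensorFlow would place them on the GPU automatically anyway.
with tf.device("/gpu:0"):
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='x')
    y = tf.matmul(x, x)

# allow_soft_placement lets TensorFlow fall back to the CPU for any op
# that has no GPU kernel.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(y))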
In addition, your sudo pip3 list output clearly shows that you are using tensorflow-gpu. If you had the CPU-only TensorFlow package, the name would be something like tensorflow (1.1.0).
Check this question for information about the warnings.
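If you only want to silence those CPU feature warnings (rather than rebuild TensorFlow from source with those instruction sets enabled), one general option is to raise TensorFlow's native log level through the TF_CPP_MIN_LOG_LEVEL environment variable before TensorFlow is imported; a rough sketch:
import os

# 0 = show everything, 1 = hide INFO, 2 = hide INFO and WARNING,
# 3 = hide everything except FATAL
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf  # must be imported after the variable is set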
Answer 1 (score: 20)
A lot of things have to go right for Keras to use the GPU. Keras (as well as TF and PyTorch) silently falls back to the CPU, which is usually not what I want.
I make a lot of changes on my dev box: dual boot, multiple environments, and so on. In Jupyter it is also easy to attach the wrong kernel, one that may not be configured for the GPU.
To cut down on the confusion, on my laptop I run a small verification check that I like to put in a cell near the top of the notebook:
# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
# confirm Keras sees the GPU
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
Answer 2 (score: 3)
To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True:
import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
You should see the following output:
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22. 28.]
[ 49. 64.]]
For more details, see the link Using GPU with tensorflow.