Using tf.train.Saver with a GPU

Time: 2016-07-03 19:24:56

Tags: python tensorflow

I'm not even sure how to approach this, or what to search for, but when I run some code on the GPU I get an InvalidArgumentError thrown as soon as I use a tf.train.Saver object to keep track of variable state. When I comment out the Saver instantiation, or switch to CPU:0, the code runs fine.

  File "entrypoint.py", line 496, in <module>
    online_mvrcca_multipie_test3()
  File "entrypoint.py", line 490, in online_mvrcca_multipie_test3
    gs_res = gridsearch_optimizer_cb(parameter_ranges,exp_f_handle);
  File "/homes/sj16/LPLUSS/deps/sjpy_utils/exptools/parameter_search.py", line 48, in gridsearch_optimizer_async
    f_handle(parameter_instance);
  File "entrypoint.py", line 487, in <lambda>
    {}\
  File "/homes/sj16/LPLUSS/deps/pyena/src/sessions.py", line 115, in submit_to_local_session
    run_metadata_ptr)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'save/Const': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices:
Identity: CPU
Const: CPU
         [[Node: save/Const = Const[dtype=DT_STRING, value=Tensor<type: string shape: [] values: model>, _device="/device:GPU:0"]()]]
Caused by op u'save/Const', defined at:
  File "entrypoint.py", line 496, in <module>
    online_mvrcca_multipie_test3()
  File "entrypoint.py", line 490, in online_mvrcca_multipie_test3
    gs_res = gridsearch_optimizer_cb(parameter_ranges,exp_f_handle);
  File "/homes/sj16/LPLUSS/deps/sjpy_utils/exptools/parameter_search.py", line 48, in gridsearch_optimizer_async
    f_handle(parameter_instance);
  File "entrypoint.py", line 487, in <lambda>
    {}\
  File "/homes/sj16/LPLUSS/deps/pyena/src/sessions.py", line 115, in submit_to_local_session
    worker_result=worker_task(*worker_args);
  File "/homes/sj16/LPLUSS/src/experiments/matrix_reconstruction/online/mvrcca_online/image_exp/experiment_workers.py", line 41, in batch_mv_recon_test_mc7
    saver = tf.train.Saver()   #Here is the offending call to Saver(), having set up the graph
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 845, in __init__
    restore_sequentially=restore_sequentially)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 504, in build
    filename_tensor = constant_op.constant("model")
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/constant_op.py", line 166, in constant
    attrs={"value": tensor_value, "dtype": dtype_value}, name=name).outputs[0]
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
    self._traceback = _extract_stack()

What I mean is: it looks like TF can't save a tf.constant to the checkpoint file if you're in GPU mode? Because there is no GPU implementation of a "kernel" (not sure what that means in this context) to execute the save/Const node (save the constant?).

Which is a bit strange... not being able to save and restore named constants...

Also, I never use tf.constant() myself, but I'm guessing a Const node gets created whenever tf.convert_to_tensor is called with a numeric/numpy value?
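That guess is easy to check on its own; here is a small illustrative sketch (graph-mode TensorFlow, separate from the failing code above) showing that a numpy value run through tf.convert_to_tensor does become a Const node:

import numpy as np
import tensorflow as tf

# Passing a numpy array (or a plain Python number) where a tensor is
# expected goes through tf.convert_to_tensor, which adds a Const node
# to the graph.
t = tf.convert_to_tensor(np.ones((2, 2), dtype=np.float32))
print(t.op.type)   # prints: Const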

----------- EDIT showing a minimal example -----

Environment:

CUDA 7.5.18 with a Tesla K40c; Ubuntu 14.04; GPU TensorFlow 0.9.0rc0, using a Python 2.7 miniconda environment

import os,math
import operator as op
import tensorflow as tf


with tf.device('/gpu:0'):
    tf_session=tf.Session()

    exp_model_dir= os.path.join(os.path.expanduser("~"),'tf_scratchpad/saver_failure_dense_only')

    if not os.path.isdir(exp_model_dir):
        os.mkdir(exp_model_dir)

    ranklim=10
    dense_widths=[64,ranklim,64, 128]

    # input to the network

    input_data = tf.placeholder(tf.float32, [1,128], name='input_data')

    current_input = input_data

    for layer_i, n_output in enumerate(dense_widths[0:]):

        n_input = int(current_input.get_shape()[1])
        W = tf.Variable(
            tf.random_uniform([n_input, n_output],
                              -1.0 / math.sqrt(n_input),
                              1.0 / math.sqrt(n_input)))
        b = tf.Variable(tf.zeros([n_output]))

        output = tf.nn.relu(tf.matmul(current_input, W) + b)
        current_input = output

    # reconstruction through the network
    y = current_input
    cost = tf.reduce_sum(tf.square(y - input_data))

    train_writer = tf.train.SummaryWriter(os.path.join(exp_model_dir,'train'),
                                          tf_session.graph)


    optimizer = tf.train.GradientDescentOptimizer(0.0075).minimize(cost)

    saver = tf.train.Saver()   # the offending call: the Saver is created inside the /gpu:0 device scope

    tf_session.run(tf.initialize_all_variables())  

Produces

I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla K40c
major: 3 minor: 5 memoryClockRate (GHz) 0.745
pciBusID 0000:05:00.0
Total memory: 11.25GiB
Free memory: 11.15GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2a95d80
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: Quadro K600
major: 3 minor: 0 memoryClockRate (GHz) 0.8755
pciBusID 0000:04:00.0
Total memory: 1023.31MiB
Free memory: 425.00MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1:   N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40c, pci bus id: 0000:05:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K600, pci bus id: 0000:04:00.0) with Cuda multiprocessor count: 1. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
Traceback (most recent call last):
  File "tfcrash.py", line 48, in <module>
    tf_session.run(tf.initialize_all_variables())
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'save/Const': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices:
Identity: CPU
Const: CPU
   [[Node: save/Const = Const[dtype=DT_STRING, value=Tensor<type: string shape: [] values: model>, _device="/device:GPU:0"]()]]
Caused by op u'save/Const', defined at:
  File "tfcrash.py", line 46, in <module>
    saver = tf.train.Saver()
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 845, in __init__
    restore_sequentially=restore_sequentially)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 504, in build
    filename_tensor = constant_op.constant("model")
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/constant_op.py", line 166, in constant
    attrs={"value": tensor_value, "dtype": dtype_value}, name=name).outputs[0]
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/homes/sj16/miniconda/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
    self._traceback = _extract_stack()

The error is actually raised in initialize_all_variables(), but the call to tf.train.Saver() gets the blame in the traceback. Commenting out the Saver() call, or using '/cpu:0', prevents the exception.

2 Answers:

Answer 0: (score: 1)

Basically, tf.train.Saver() should not fall under with tf.device('/gpu:0').

Every op in TensorFlow has its own device assignment, and the saver ops should always be assigned to the CPU.
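To see why, here is a tiny self-contained sketch (assuming a GPU build of TensorFlow with at least one visible GPU) that reproduces the same class of error with a plain string constant, which is what save/Const is under the hood (the Saver builds its filename tensor with constant_op.constant("model"), as the traceback above shows):

import tensorflow as tf

# String tensors have no GPU kernels in TensorFlow, which is exactly what
# the Saver's internal save/Const filename tensor runs into under /gpu:0.
with tf.device('/gpu:0'):
    filename = tf.constant("model")   # a string Const, just like save/Const

sess = tf.Session()   # strict placement: allow_soft_placement defaults to False
sess.run(filename)    # raises InvalidArgumentError: no supported GPU kernel for a DT_STRING Const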

Answer 1: (score: 0)

Cannot assign a device to node 'save/Const': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.

You can move saver = tf.train.Saver() out of the with tf.device('/gpu:0'): block; its ops will then be placed on /cpu:0, which fixes it.
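A sketch of that change, applied to a trimmed-down version of the minimal example above (illustrative only: a single 128-wide dense layer instead of the full stack, and no SummaryWriter):

import math
import tensorflow as tf

with tf.device('/gpu:0'):
    tf_session = tf.Session()

    input_data = tf.placeholder(tf.float32, [1, 128], name='input_data')
    W = tf.Variable(tf.random_uniform([128, 128],
                                      -1.0 / math.sqrt(128),
                                      1.0 / math.sqrt(128)))
    b = tf.Variable(tf.zeros([128]))
    y = tf.nn.relu(tf.matmul(input_data, W) + b)

    cost = tf.reduce_sum(tf.square(y - input_data))
    optimizer = tf.train.GradientDescentOptimizer(0.0075).minimize(cost)

# Created outside the device block: the Saver's CPU-only ops (such as the
# string-typed save/Const node) can now be placed on /cpu:0 by default.
saver = tf.train.Saver()

tf_session.run(tf.initialize_all_variables())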