Error installing TensorFlow for Serving on Ubuntu

Date: 2017-01-30 08:56:43

Tags: ubuntu tensorflow tensorflow-serving

I am installing TensorFlow Serving, which requires building TensorFlow on Ubuntu. I ran ./configure in the TensorFlow root directory. Here is the output:

Please specify the location of python. [Default is /usr/bin/python]: 
Please specify optimization flags to use during compilation [Default is -march=native]:        
Do you wish to use jemalloc as the malloc implementation? [Y/n] y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] y
Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y
XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] y
OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 
Please specify the location where CUDA  toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 
Please specify the location where cuDNN  library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: 
Invalid C++ compiler path.  cannot be found
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: /usr/bin/g++
Please specify which C compiler should be used as the host C compiler. [Default is ]: /usr/bin/gcc
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: 
.................................................................
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.........
ERROR: package contains errors: tensorflow/stream_executor.
ERROR: error loading package 'tensorflow/stream_executor': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 813
        _create_cuda_repository(repository_ctx)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 727, in _create_cuda_repository
        _get_cuda_config(repository_ctx)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 584, in _get_cuda_config
        _cudnn_version(repository_ctx, cudnn_install_base..., ...)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 295, in _cudnn_version
        _find_cuda_define(repository_ctx, cudnn_install_base..., ...)
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 270, in _find_cuda_define
        auto_configure_fail("Cannot find cudnn.h at %s" % st...))
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 93, in auto_configure_fail
        fail("
%sAuto-Configuration Error:%s ...))

Auto-Configuration Error: Cannot find cudnn.h at /usr/lib/x86_64-linux-gnu/include/cudnn.h
.

There is no folder named /usr/lib/x86_64-linux-gnu/include. I have libcudnn.so in /usr/lib/x86_64-linux-gnu/ and cudnn.h in /usr/include. I don't know how the configure script builds that path, but it cannot find cuDNN, even though I have successfully installed Caffe, whose CMakeLists.txt finds the CUDA and cuDNN installation paths without any trouble. How can I fix this?
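For reference, the layout described above can be checked directly, and a commonly suggested workaround (an assumption here, not something confirmed by the answer below) is to symlink the files into the CUDA toolkit tree so that configure only has to deal with a single prefix:

# Confirm where the cuDNN header and library actually live
ls -l /usr/include/cudnn.h
ls -l /usr/lib/x86_64-linux-gnu/libcudnn.so*

# Common workaround: link both into the CUDA toolkit tree so one prefix
# (/usr/local/cuda) satisfies ./configure. Assumes /usr/local/cuda exists.
sudo ln -s /usr/include/cudnn.h /usr/local/cuda/include/cudnn.h
sudo ln -s /usr/lib/x86_64-linux-gnu/libcudnn.so* /usr/local/cuda/lib64/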

1 Answer:

Answer 0 (score: 0)

Assuming you do have cuDNN installed, find out where your CUDA toolkit is installed using:

which nvcc

In my case it returned /usr/local/cuda-6.5/bin/nvcc

So cudnn.h is located in /usr/local/cuda-6.5/include (if cuDNN is installed).
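A quick way to check this (a sketch only; /usr/local/cuda-6.5 is the answer's example prefix and will differ per machine):

# Locate the CUDA toolkit prefix from the nvcc path
which nvcc                          # e.g. /usr/local/cuda-6.5/bin/nvcc
# Verify that the cuDNN header sits under that prefix
ls /usr/local/cuda-6.5/include/cudnn.h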

When configuring TensorFlow, it will ask you:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Here you must explicitly specify the location of cuDNN; in my case it is /usr/local/cuda-6.5/include/
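Concretely, at that prompt you would type the path instead of just pressing Enter, along these lines (using the answer's path; on many setups the toolkit prefix /usr/local/cuda-6.5 is given instead, and either way the directory you enter must be one under which configure can actually locate cudnn.h):

Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-6.5/include/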