TensorFlow Serving: problem with the "Serving Inception Model" tutorial - tensorflow.python.framework.errors_impl.InvalidArgumentError

Asked: 2018-10-11 23:34:40

Tags: python docker tensorflow tensorflow-serving

I am following the tutorial "Serving Inception Model with TensorFlow Serving and Kubernetes", but it fails at the "Export Inception model" step. Here are the details:

The first part, "Build TensorFlow Serving Inception model exporter", runs correctly:

tools/bazel_in_docker.sh -d tensorflow/serving:latest-devel-gpu bazel build -c opt tensorflow_serving/example:inception_saved_model

But the following command fails:

tools/bazel_in_docker.sh -d tensorflow/serving:latest-devel-gpu bazel-bin/tensorflow_serving/example/inception_saved_model --checkpoint_dir=inception-v3 --output_dir=models/inception

with a large amount of output:

== Pulling docker image: tensorflow/serving:latest-devel-gpu latest-devel-gpu: Pulling from tensorflow/serving
Digest: sha256:540e4bb6f0587c7ed17c08c05604a3e0e8bfa6b363f4f11ec524a4ae1e02b980
Status: Image is up to date for tensorflow/serving:latest-devel-gpu
== Running cmd: sh -c 'cd /home/USER/serving; TEST_TMPDIR=.cache bazel-bin/tensorflow_serving/example/inception_saved_model --checkpoint_dir=/home/USER/serving/../tmp/inception-v3 --output_dir=models/inception'
Traceback (most recent call last):
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/tf_serving/tensorflow_serving/example/inception_saved_model.py", line 203, in <module>
    tf.app.run()
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/tf_serving/tensorflow_serving/example/inception_saved_model.py", line 199, in main
    export()
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/tf_serving/tensorflow_serving/example/inception_saved_model.py", line 93, in export
    table = tf.contrib.lookup.index_to_string_table_from_tensor(class_tensor)
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/python/util/lazy_loader.py", line 53, in __getattr__
    module = self._load()
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/python/util/lazy_loader.py", line 42, in _load
    module = importlib.import_module(self.__name__)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/__init__.py", line 40, in <module>
    from tensorflow.contrib import distribute
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/distribute/__init__.py", line 34, in <module>
    from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
    from tensorflow.contrib.tpu.python.ops import tpu_ops
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/tpu/__init__.py", line 69, in <module>
    from tensorflow.contrib.tpu.python.ops.tpu_ops import *
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in <module>
    resource_loader.get_path_to_datafile("_tpu_ops.so"))
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File "/home/USER/serving/bazel-bin/tensorflow_serving/example/inception_saved_model.runfiles/org_tensorflow/tensorflow/python/framework/load_library.py", line 60, in load_op_library
    lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name: 
An op that loads optimization parameters into HBM for embedding. Must be
preceded by a ConfigureTPUEmbeddingHost op that sets up the correct
embedding table configuration. For example, this op is used to install
parameters that are loaded from a checkpoint before a training loop is
executed.

I pulled the latest-devel-gpu image of tensorflow/serving, so it seems odd that the error comes from a TPU-related module.
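For what it's worth, judging from the traceback the failure does not seem specific to the exporter: any access to tf.contrib triggers the lazy import of tensorflow.contrib, which pulls in contrib.tpu and ends at the load_op_library("_tpu_ops.so") call that raises the error. A minimal check I would expect to reproduce it inside the same container (this snippet is my own sketch, not part of the tutorial, and assumes the image's Python 2.7 interpreter):

import tensorflow as tf

print(tf.__version__)
# Touching tf.contrib forces the lazy loader to import tensorflow.contrib,
# which imports contrib.distribute -> contrib.tpu and finally calls
# load_op_library("_tpu_ops.so"), the call failing in the traceback above.
tf.contrib.lookup

If that import alone fails the same way inside the container, the problem would be with the contrib/TPU op library shipped in the image rather than with inception_saved_model.py itself.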

Any suggestions on how to handle this?

Thanks

0 Answers:

There are no answers yet.