Servable not found for request: when loading an older version of the model

Date: 2017-12-14 06:22:55

Tags: grpc tensorflow-serving

I have a model, let's say mymodel, and two different datasets: setA and setB.

After training separately (on my local machine) on setA and setB, TensorFlow Serving created two different directories, 100 and 200, for setA and setB respectively.

Hosting the model in Docker:

root@ccb58054cae5:/# ls /serving/model/
100 200
root@ccb58054cae5:/# bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mymodel --model_base_path=/serving/model &> log &

Now, when I run inference against setB, I successfully get a response, because by default TensorFlow Serving loads 200, which it considers the latest model.

Now I want to query setA, so I need to specify in my code which version of the hosted model to access, which would be 100.

In terms of code: request.model_spec.version.value = 100

For completeness, here is the other relevant client code:

host, port = FLAGS.server.split(':')
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'mymodel'
request.model_spec.signature_name = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
request.model_spec.version.value = 100

I learned about request.model_spec.version.value = 100 from here. But I had no luck, and I got:

Traceback (most recent call last):
  File "C:\Program Files\Anaconda3\lib\site-packages\grpc\beta\_client_adaptations.py", line 193, in _blocking_unary_unary
    credentials=_credentials(protocol_options))
  File "C:\Program Files\Anaconda3\lib\site-packages\grpc\_channel.py", line 492, in __call__
    return _end_unary_response_blocking(state, call, False, deadline)
  File "C:\Program Files\Anaconda3\lib\site-packages\grpc\_channel.py", line 440, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.NOT_FOUND, Servable not found for request: Specific(mymodel, 100))>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\app.py", line 1988, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\app.py", line 1641, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Program Files\Anaconda3\lib\site-packages\flask_cors\extension.py", line 188, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\app.py", line 1544, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\_compat.py", line 33, in reraise
    raise value
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Program Files\Anaconda3\lib\site-packages\flask\app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "mymodel.py", line 63, in json_test
    response =  main.main(ser, query = que)
  File "D:\mymodel_temp\temp\main.py", line 23, in main
    return json_for_inference(model.inference(query), query, service_id)
  File "D:\mymodel_temp\temp\src\utils.py", line 30, in wrapper
    outputs = function(self, *args, **kwargs)
  File "D:\mymodel_temp\temp\src\model.py", line 324, in inference
    result = stub.Predict(request, 10.0) # 10 seconds
  File "C:\Program Files\Anaconda3\lib\site-packages\grpc\beta\_client_adaptations.py", line 309, in __call__
    self._request_serializer, self._response_deserializer)
  File "C:\Program Files\Anaconda3\lib\site-packages\grpc\beta\_client_adaptations.py", line 195, in _blocking_unary_unary
    raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.NOT_FOUND, details="Servable not found for request: Specific(mymodel, 100)")

1 answer:

Answer 0 (score: 3)

You are getting the Servable not found for request: Specific(mymodel, 100) error because no model with that specific version number has been loaded.

root@ccb58054cae5:/# bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mymodel --model_base_path=/serving/model &> log &

When you run the above command in Docker to serve the model, it loads the model according to the version policy mentioned here in the TensorFlow Serving source code. By default, only one version of the model is loaded, and that is the latest version (the one with the higher version number).

If you want to load multiple versions or multiple models, you need to add the --model_config_file flag. Also remove the --model_name and --model_base_path flags, since we will specify them in the model config file.

So now your command will look like this:

root@ccb58054cae5:/# bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_config_file=mymodel.conf &> log &

The model config file has the following format:

model_config_list: {
    config: {
        name: "mymodel",
        base_path: "/serving/model",
        model_platform: "tensorflow",
        model_version_policy: { all: {} }
    }
}

So instead of latest (the default), you can set the version policy to all or specific. By choosing all, every version of the model is loaded as a servable, and you can then easily access a particular version with request.model_spec.version.value = 100 in your client code. If you want to load multiple models, you can do that in the same config file by appending a model config entry for each of them.
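
For illustration, here is a sketch of the same config file using the specific policy instead of all, with a second model entry appended. This is only an example of the layout: the second model's name and base path (othermodel, /serving/othermodel) are hypothetical.

model_config_list: {
    config: {
        name: "mymodel",
        base_path: "/serving/model",
        model_platform: "tensorflow",
        model_version_policy: { specific: { versions: 100 } }
    },
    config: {
        name: "othermodel",
        base_path: "/serving/othermodel",
        model_platform: "tensorflow"
    }
}

With specific, only the listed version numbers are loaded, so in this sketch only version 100 of mymodel would be servable, while othermodel would fall back to the default latest policy.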