I have deployed a TensorFlow Serving server that serves multiple models.
The client code is in client.py and looks like the following; I call its predict function.
# client.py
from grpc.beta import implementations
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# host and port of the TensorFlow Serving server are defined elsewhere.
channel = implementations.insecure_channel(host, port)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
request = predict_pb2.PredictRequest()

def predict(data, shape, model_name, signature_name="predict"):
    request.model_spec.name = model_name
    request.model_spec.signature_name = signature_name
    request.inputs['image'].CopyFrom(tf.contrib.util.make_tensor_proto(data, shape=shape))
    result = stub.Predict(request, 10.0)  # 10-second timeout
    return result.outputs['prediction'].float_val[0]
I have about 100 clients with the same configuration. Below is example code that calls the predict function:
from client import predict

while True:
    print(predict(data, shape, model_name))
    # time.sleep(...) for a while
At first, when I run the client code, I receive responses correctly. But after a few hours the client crashes with the error:

_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Socket closed)
I tried modifying the client code to:
def predict(data, shape, model_name, signature_name="predict"):
    # Open a new channel and stub for every call.
    channel = implementations.insecure_channel(host, port)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = model_name
    request.model_spec.signature_name = signature_name
    request.inputs['image'].CopyFrom(tf.contrib.util.make_tensor_proto(data, shape=shape))
    result = stub.Predict(request, 10.0)
    return result.outputs['prediction'].float_val[0]
That is, every call to the predict function establishes a new connection to the TensorFlow Serving server. But this code fails in the same way as before. So how should I handle this?
Answer (score: 1):
In the end, I added channel.close() before the return, and it works fine.
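For clarity, here is a minimal sketch of what that fix looks like, assuming the per-call-connection version of predict above (host and port defined as before). It only illustrates where the channel.close() call goes, following the answer, and is not a verified implementation:

def predict(data, shape, model_name, signature_name="predict"):
    # Open a fresh channel and stub for this call.
    channel = implementations.insecure_channel(host, port)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = model_name
    request.model_spec.signature_name = signature_name
    request.inputs['image'].CopyFrom(tf.contrib.util.make_tensor_proto(data, shape=shape))
    result = stub.Predict(request, 10.0)  # 10-second timeout
    prediction = result.outputs['prediction'].float_val[0]
    # Close the channel before returning so the socket is released right away.
    channel.close()
    return prediction

The idea, as the answer suggests, is that without the explicit close each call can leave its channel and socket open until garbage collection, which with many clients polling in a loop presumably builds up until requests start failing with StatusCode.UNAVAILABLE, Socket closed.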