I'm trying to understand how to handle a gRPC API with bidirectional streaming (using the Python API).
Say I have the following simple service definition:
syntax = "proto3";
package simple;

service TestService {
  rpc Translate(stream Msg) returns (stream Msg) {}
}

message Msg {
  string msg = 1;
}
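The Python modules used below (translate_pb2 and translate_pb2_grpc) can be generated with grpc_tools; the file name translate.proto is only an assumption here, chosen to match those module names:

# Minimal generation sketch (assumes the proto above is saved as translate.proto,
# matching the translate_pb2 / translate_pb2_grpc imports used below).
from grpc_tools import protoc

protoc.main([
    'grpc_tools.protoc',
    '-I.',
    '--python_out=.',
    '--grpc_python_out=.',
    'translate.proto',
])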
Say the messages to be sent from the client are produced asynchronously (as a result of the user selecting some UI elements).
The Python stub generated for the client will contain the method Translate, which accepts a generator function and returns an iterator.
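With messages known up front, calling the stub is straightforward, for example (a minimal sketch; the channel address and messages are just placeholders):

# Straightforward case: a generator over messages known in advance.
import grpc
from translate_pb2 import Msg
from translate_pb2_grpc import TestServiceStub

def known_messages():
    for text in ("hello", "world"):   # messages fixed ahead of time
        yield Msg(msg=text)

with grpc.insecure_channel('localhost:50054') as channel:
    stub = TestServiceStub(channel)
    for response in stub.Translate(known_messages()):
        print(response.msg)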
What's not clear to me is how I would write that generator function when it has to return messages created by the user. Sleeping on a thread while waiting for messages doesn't sound like the best solution.
Answer 0 (score: 2)
It's a bit clunky right now, but you can accomplish your use case as follows:
#!/usr/bin/env python
from __future__ import print_function

import time
import random
import collections
import threading

from concurrent import futures
from concurrent.futures import ThreadPoolExecutor

import grpc

from translate_pb2 import Msg
from translate_pb2_grpc import TestServiceStub
from translate_pb2_grpc import TestServiceServicer
from translate_pb2_grpc import add_TestServiceServicer_to_server


def translate_next(msg):
    return ''.join(reversed(msg))


class Translator(TestServiceServicer):
    """Server side: replies to each incoming message with its reversal."""

    def Translate(self, request_iterator, context):
        for req in request_iterator:
            print("Translating message: {}".format(req.msg))
            yield Msg(msg=translate_next(req.msg))


class TranslatorClient(object):
    """Request iterator fed from other threads, correlating requests with responses."""

    def __init__(self):
        self._stop_event = threading.Event()
        self._request_condition = threading.Condition()
        self._response_condition = threading.Condition()
        self._requests = collections.deque()
        self._last_request = None
        self._expected_responses = collections.deque()
        self._responses = {}

    def _next(self):
        # Block until a request has been queued (or close() is called).
        with self._request_condition:
            while not self._requests and not self._stop_event.is_set():
                self._request_condition.wait()
            if len(self._requests) > 0:
                return self._requests.popleft()
            else:
                raise StopIteration()

    def next(self):
        return self._next()

    def __next__(self):
        return self._next()

    def add_response(self, response):
        # Responses are assumed to arrive in the same order the requests were sent.
        with self._response_condition:
            request = self._expected_responses.popleft()
            self._responses[request] = response
            self._response_condition.notify_all()

    def add_request(self, request):
        with self._request_condition:
            self._requests.append(request)
            with self._response_condition:
                self._expected_responses.append(request.msg)
            self._request_condition.notify()

    def close(self):
        self._stop_event.set()
        with self._request_condition:
            self._request_condition.notify()

    def translate(self, to_translate):
        # Queue the request, then wait until its response has been recorded.
        self.add_request(to_translate)
        with self._response_condition:
            while to_translate.msg not in self._responses:
                self._response_condition.wait()
            return self._responses[to_translate.msg]


def _run_client(address, translator_client):
    with grpc.insecure_channel(address) as channel:
        stub = TestServiceStub(channel)
        responses = stub.Translate(translator_client)
        for resp in responses:
            translator_client.add_response(resp)


def main():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    add_TestServiceServicer_to_server(Translator(), server)
    server.add_insecure_port('[::]:50054')
    server.start()

    translator_client = TranslatorClient()
    client_thread = threading.Thread(
        target=_run_client, args=('localhost:50054', translator_client))
    client_thread.start()

    def _translate(to_translate):
        return translator_client.translate(Msg(msg=to_translate)).msg

    translator_pool = futures.ThreadPoolExecutor(max_workers=4)
    to_translate = ("hello", "goodbye", "I", "don't", "know", "why",)
    translations = translator_pool.map(_translate, to_translate)
    print("Translations: {}".format(list(zip(to_translate, translations))))

    translator_client.close()
    client_thread.join()
    server.stop(None)


if __name__ == "__main__":
    main()
The basic idea is to have an object called TranslatorClient running on a separate thread, correlating requests with responses. It expects that responses will come back in the same order in which the requests were sent. It also implements the iterator interface, so you can pass it directly to the call to the Translate method on the stub.
We spin up a thread running _run_client, which hands the TranslatorClient to the stub as the request iterator and feeds each response it receives back into it with add_response.
The main function I've included here is really just a strawman, since I don't have the details of your UI code. I'm running _translate in a ThreadPoolExecutor to demonstrate that even though translator_client.translate is synchronous, it yields, allowing you to have multiple in-flight requests at once.
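As a rough sketch of how this might hook into a UI (the names on_user_selected and show_result are made up, not part of the answer's code): each user action submits the blocking translate() call to a worker pool so it never runs on the UI thread.

# Hypothetical UI wiring; translator_client is the TranslatorClient created above.
from concurrent import futures
from translate_pb2 import Msg

ui_pool = futures.ThreadPoolExecutor(max_workers=4)

def on_user_selected(text, show_result):
    # Run the blocking translate() off the UI thread and deliver the result
    # to the (placeholder) show_result callback when it completes.
    future = ui_pool.submit(translator_client.translate, Msg(msg=text))
    future.add_done_callback(lambda f: show_result(f.result().msg))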
We recognize that this is a lot of code to write for such a simple use case. Ultimately, the answer will be asyncio support, which we have plans for in the not-too-distant future. But for the moment, this sort of solution should keep you going whether you're running Python 2 or Python 3.
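For reference, asyncio support has since shipped as grpc.aio in newer grpcio releases. A minimal sketch of an equivalent client using it (assuming the same generated translate_pb2 / translate_pb2_grpc modules) might look something like this:

# Sketch only: relies on the grpc.aio API available in newer grpcio releases.
import asyncio
import grpc

from translate_pb2 import Msg
from translate_pb2_grpc import TestServiceStub

async def run(address='localhost:50054'):
    async with grpc.aio.insecure_channel(address) as channel:
        stub = TestServiceStub(channel)
        call = stub.Translate()                  # stream-stream call object
        for text in ("hello", "goodbye"):
            await call.write(Msg(msg=text))      # send whenever a message is ready
            response = await call.read()         # read the corresponding reply
            print(response.msg)
        await call.done_writing()

if __name__ == "__main__":
    asyncio.run(run())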