Abandoning Tornado futures

Date: 2017-08-01 11:47:51

Tags: python tornado

I'm considering a possible use case for a fan-out proxy in Tornado that queries multiple backend servers, where the proxy does not wait for all responses before returning.

Is there a problem with using a WaitIterator but not continuing to wait on the remaining futures once a useful reply has been received?

Perhaps the results of the other futures are never cleaned up? Perhaps a callback could be added to any remaining futures to discard their results?
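The "add a callback to discard the result" idea can be sketched with the stdlib's concurrent.futures.Future, which exposes the same add_done_callback interface as Tornado's futures (the `discard` helper is hypothetical, purely for illustration):

```python
from concurrent.futures import Future

observed = []

def discard(fut):
    # Retrieve the result or exception so the future counts as observed;
    # here we record it, but a real proxy could simply drop it.
    exc = fut.exception()
    observed.append(exc if exc is not None else fut.result())

f1, f2 = Future(), Future()
for f in (f1, f2):
    f.add_done_callback(discard)

f1.set_result(42)                     # callback fires on completion
f2.set_exception(ValueError("late"))  # exception is retrieved, not left dangling
```

Because the callback retrieves each outcome, a late exception is never left unobserved.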

#!./venv/bin/python

from tornado import gen
from tornado import httpclient
from tornado import ioloop
from tornado import web
import json


class MainHandler(web.RequestHandler):
    @gen.coroutine
    def get(self):
        r1 = httpclient.HTTPRequest(
            url="http://apihost1.localdomain/api/object/thing",
            connect_timeout=4.0,
            request_timeout=4.0,
        )
        r2 = httpclient.HTTPRequest(
            url="http://apihost2.localdomain/api/object/thing",
            connect_timeout=4.0,
            request_timeout=4.0,
        )
        http = httpclient.AsyncHTTPClient()
        wait = gen.WaitIterator(
            r1=http.fetch(r1),
            r2=http.fetch(r2)
        )
        while not wait.done():
            try:
                reply = yield wait.next()
            except Exception as e:
                print("Error {} from {}".format(e, wait.current_future))
            else:
                print("Result {} received from {} at {}".format(
                    reply, wait.current_future,
                    wait.current_index))
                if reply.code == 200:
                    result = json.loads(reply.body)
                    self.write(json.dumps(dict(result, backend=wait.current_index)))
                    return


def make_app():
    return web.Application([
        (r'/', MainHandler)
    ])


if __name__ == '__main__':
    app = make_app()
    app.listen(8888)
    ioloop.IOLoop.current().start()

1 Answer:

Answer 0 (score: 0)

So I looked at the source of WaitIterator.

It tracks the futures it was given by adding a callback to each; when one fires, the iterator either queues the result or (if you have already called next()) fulfills the future it handed you.

Since the future you wait on is only created by calling .next(), it appears you can exit the while not wait.done() loop without leaving any futures without observers.

Reference counting should keep the WaitIterator instance alive until all the futures have fired their callbacks, after which it is reclaimed.
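The fan-out-and-return-first pattern the question describes can also be sketched with stdlib asyncio; the `first_success` helper and `backend` coroutines below are hypothetical stand-ins for the Tornado WaitIterator loop and the HTTP fetches:

```python
import asyncio

async def first_success(coros):
    # Race all backends; return the first successful result and cancel
    # the rest so no future is left with an unobserved exception.
    tasks = [asyncio.ensure_future(c) for c in coros]
    try:
        for fut in asyncio.as_completed(tasks):
            try:
                return await fut
            except Exception:
                continue  # this backend failed; wait for the next one
        raise RuntimeError("all backends failed")
    finally:
        for t in tasks:
            t.cancel()

async def backend(delay, value, fail=False):
    # Stand-in for http.fetch() against one backend.
    await asyncio.sleep(delay)
    if fail:
        raise RuntimeError("backend down")
    return value

result = asyncio.run(first_success([
    backend(0.01, "apihost1", fail=True),
    backend(0.02, "apihost2"),
]))
```

Awaiting each completed future retrieves its exception, and cancelling the stragglers means nothing is left to trigger an "exception was never retrieved" report.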

  

Update 2017/08/02:
Further testing with a subclassed WaitIterator with extra logging confirmed: yes, the iterator is cleaned up once all the futures have returned, but if any of those futures returned an exception, it is logged as unobserved:

     

ERROR:tornado.application:Future exception was never retrieved: HTTPError: HTTP 599: Timeout while connecting

     

To sum up and answer my own question: draining the WaitIterator is not necessary from a cleanup perspective, but it may be worthwhile from a logging perspective.

If you want to be sure, handing the wait iterator off to a new future that finishes consuming it and adds observers may be enough. For example:

import logging

log = logging.getLogger(__name__)


@gen.coroutine
def complete_wait_iterator(wait):
    rounds = 0
    while not wait.done():
        rounds += 1
        try:
            reply = yield wait.next()
        except Exception as e:
            print("Not needed Error {} from {}".format(e, wait.current_future))
        else:
            print("Not needed result {} received from {} at {}".format(
                reply, wait.current_future,
                wait.current_index))
    log.info('completer finished after {n} rounds'.format(n=rounds))


class MainHandler(web.RequestHandler):
    @gen.coroutine
    def get(self):
        r1 = httpclient.HTTPRequest(
            url="http://apihost1.localdomain/api/object/thing",
            connect_timeout=4.0,
            request_timeout=4.0,
        )
        r2 = httpclient.HTTPRequest(
            url="http://apihost2.localdomain/api/object/thing",
            connect_timeout=4.0,
            request_timeout=4.0,
        )
        http = httpclient.AsyncHTTPClient()
        wait = gen.WaitIterator(
            r1=http.fetch(r1),
            r2=http.fetch(r2)
        )
        while not wait.done():
            try:
                reply = yield wait.next()
            except Exception as e:
                print("Error {} from {}".format(e, wait.current_future))
            else:
                print("Result {} received from {} at {}".format(
                    reply, wait.current_future,
                    wait.current_index))
                if reply.code == 200:
                    result = json.loads(reply.body)
                    self.write(json.dumps(dict(result, backend=wait.current_index)))
                    consumer = complete_wait_iterator(wait)
                    consumer.add_done_callback(lambda f: f.exception())
                    return
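For comparison, the observe-and-discard trick at the end (add_done_callback(lambda f: f.exception())) has a direct stdlib asyncio analogue; the `fire_and_forget` helper below is hypothetical, not a Tornado or asyncio API:

```python
import asyncio
import logging

log = logging.getLogger("fanout")

def fire_and_forget(coro):
    # Schedule coro in the background and attach an observer so any
    # exception is retrieved (and logged) rather than reported as
    # "exception was never retrieved" when the task is collected.
    task = asyncio.ensure_future(coro)

    def _observe(fut):
        if not fut.cancelled() and fut.exception() is not None:
            log.warning("background task failed: %r", fut.exception())

    task.add_done_callback(_observe)
    return task

async def flaky():
    raise RuntimeError("backend timeout")

async def main():
    task = fire_and_forget(flaky())
    await asyncio.sleep(0)   # give the background task a chance to run
    return task.exception()  # the observer retrieves it as well

exc = asyncio.run(main())
```

The done callback plays the same role as the consumer coroutine above: something always observes the outcome, so abandoned requests fail quietly into the log instead of tripping the unobserved-exception report.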