How to break time.sleep() in Python concurrent.futures

Time: 2016-07-19 14:38:39

Tags: python concurrency concurrent.futures

I am playing around with concurrent.futures.

Currently my future calls time.sleep(secs).

It seems that Future.cancel() does less than I thought.

If the future is already executing, time.sleep() does not get cancelled by it.

The same goes for the timeout parameter of wait(); it does not cancel my time.sleep().

How can I cancel a time.sleep() that is executing inside concurrent.futures?

For testing I use ThreadPoolExecutor.

3 Answers:

Answer 0 (score: 4)

If you submit a function to a ThreadPoolExecutor, the executor will run the function in a thread and store its return value in the Future object. Since the number of concurrent threads is limited, you do have the option to cancel the pending execution of a future, but once control in the worker thread has been passed to the callable, there is no way to halt the execution.

Consider this code:

import concurrent.futures as f
import time

T = f.ThreadPoolExecutor(1)  # run at most one function concurrently

def block5():
    time.sleep(5)
    return 1

q = T.submit(block5)
m = T.submit(block5)

print(q.cancel())  # False: q is already running, so it cannot be cancelled
print(m.cancel())  # True: q blocks the only thread, so m is still queued
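Run under Python 3, this prints False for q (already executing, so cancellation fails) and True for m (still queued, so cancellation succeeds).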

In general, whenever you want something to be cancellable, you are yourself responsible for making sure that it is.

There are some ready-made options, though. For example, consider using asyncio; they also have an example using sleep. The approach circumvents the problem by, whenever any potentially blocking operation is to be called, instead returning control to a control loop running in the outermost context, together with a note that execution should be continued whenever the result is available - or, in your case, after n seconds have elapsed.
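To illustrate that idea, here is a minimal sketch (not part of the original answer; the name block5 simply mirrors the code above). Unlike time.sleep(), asyncio.sleep() yields control back to the event loop, so the task can actually be cancelled mid-sleep:

import asyncio

async def block5():
    await asyncio.sleep(5)  # yields to the event loop instead of blocking a thread
    return 1

async def main():
    task = asyncio.ensure_future(block5())
    await asyncio.sleep(1)  # give the task a moment to start
    task.cancel()           # raises CancelledError inside the pending sleep
    try:
        await task
    except asyncio.CancelledError:
        print('sleep was cancelled')

loop = asyncio.get_event_loop()
loop.run_until_complete(main())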

Answer 1 (score: 1)

I don't know much about concurrent.futures, but you can use this logic to break out of the waiting. Use a loop instead of a single time.sleep() or wait():

from time import sleep

for i in range(secs):  # sleep in one-second slices instead of one long sleep
    sleep(1)

An interrupt or a break can then be used to come out of the loop between slices.
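A tidier variant of the same idea (a sketch, not from the answer itself) is to block on a threading.Event instead of counting seconds: Event.wait(timeout) behaves like a sleep but returns immediately once the event is set from another thread:

import concurrent.futures
import threading

stop = threading.Event()

def interruptible_sleep(secs):
    # wait() returns True as soon as stop.set() is called,
    # or False after secs seconds have fully elapsed
    if stop.wait(timeout=secs):
        return 'interrupted'
    return 'slept full time'

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = executor.submit(interruptible_sleep, 60)
stop.set()              # "break" the sleep from the main thread
print(future.result())  # -> 'interrupted'
executor.shutdown()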

Answer 2 (score: 0)

As written at the link, you can use a with statement to ensure threads are cleaned up promptly, like in the example below:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
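One caveat worth adding (not part of the original answer): leaving the with block calls executor.shutdown(wait=True), which waits for running work to finish rather than interrupting it. Since Python 3.9, shutdown() also accepts cancel_futures=True, which cancels futures still sitting in the queue - but it still cannot break a time.sleep() that is already running, as this sketch shows:

import concurrent.futures
import time

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
running = executor.submit(time.sleep, 10)  # starts immediately
queued = executor.submit(time.sleep, 10)   # waits for the single worker

# Python 3.9+: cancel whatever is still queued; the running sleep continues
executor.shutdown(wait=False, cancel_futures=True)
print(running.cancelled())  # False - already executing
print(queued.cancelled())   # True - was still pending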