Let's say I have a bunch of links to download, and each link may take a different amount of time to download. I'm only allowed to download using 3 connections at most. Now, I want to make sure I do this efficiently using asyncio.
Here's what I'm trying to achieve: at any point in time, try to make sure that I have at least 3 downloads running.
Connection 1: 1---------7---9---
Connection 2: 2---4----6-----
Connection 3: 3-----5---8-----
The numbers denote the download links, while the hyphens denote waiting for download.
Here is the code I'm using right now:
from random import randint
import asyncio

count = 0


async def download(code, permit_download, no_concurrent, downloading_event):
    global count
    downloading_event.set()
    wait_time = randint(1, 3)
    print('downloading {} will take {} second(s)'.format(code, wait_time))
    await asyncio.sleep(wait_time)  # I/O, context will switch to main function
    print('downloaded {}'.format(code))
    count -= 1
    if count < no_concurrent and not permit_download.is_set():
        permit_download.set()


async def main(loop):
    global count
    permit_download = asyncio.Event()
    permit_download.set()
    downloading_event = asyncio.Event()
    no_concurrent = 3
    i = 0
    while i < 9:
        if permit_download.is_set():
            count += 1
            if count >= no_concurrent:
                permit_download.clear()
            loop.create_task(download(i, permit_download, no_concurrent, downloading_event))
            await downloading_event.wait()  # To force context to switch to download function
            downloading_event.clear()
            i += 1
        else:
            await permit_download.wait()
    await asyncio.sleep(9)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main(loop))
    finally:
        loop.close()
The output is as expected:
downloading 0 will take 2 second(s)
downloading 1 will take 3 second(s)
downloading 2 will take 1 second(s)
downloaded 2
downloading 3 will take 2 second(s)
downloaded 0
downloading 4 will take 3 second(s)
downloaded 1
downloaded 3
downloading 5 will take 2 second(s)
downloading 6 will take 2 second(s)
downloaded 5
downloaded 6
downloaded 4
downloading 7 will take 1 second(s)
downloading 8 will take 1 second(s)
downloaded 7
downloaded 8
But here are my questions:
At the moment, I'm simply waiting 9 seconds to keep the main function running until the downloads complete. Is there an efficient way of waiting for the last download to complete before exiting the main function? (I know there's asyncio.wait, but I'd need to store all the task references for it to work.)
What's a good library that takes care of this kind of task? I know javascript has a lot of async libraries, but what about Python?
Edit: 2. What's a good library that takes care of common async patterns? (Something like https://www.npmjs.com/package/async)
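For question 1, a minimal sketch of the idea (the answers below give fuller solutions) is to wait on whatever tasks are still pending on the loop instead of sleeping for a fixed time, for example with asyncio.all_tasks (Python 3.7+):

import asyncio


async def main():
    # ... schedule the download tasks as before ...
    # Instead of a fixed asyncio.sleep(9), wait for every task that is
    # still pending on this loop, excluding main() itself.
    pending = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    if pending:
        await asyncio.wait(pending)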
Answer 0 (score: 18)
I used Mikhail's answer and ended up with this little gem:
async def gather_with_concurrency(n, *tasks):
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))
which you would use instead of a normal gather:
await gather_with_concurrency(100, *my_coroutines)
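As a self-contained sketch, here is how this might be wired up with the simulated download coroutine from the question (the limit of 3 and the 9 codes are just the question's placeholders):

import asyncio
from random import randint


async def gather_with_concurrency(n, *tasks):
    semaphore = asyncio.Semaphore(n)

    async def sem_task(task):
        async with semaphore:
            return await task

    return await asyncio.gather(*(sem_task(task) for task in tasks))


async def download(code):
    wait_time = randint(1, 3)
    print('downloading {} will take {} second(s)'.format(code, wait_time))
    await asyncio.sleep(wait_time)
    print('downloaded {}'.format(code))


async def main():
    # At most 3 downloads run at any time; the rest wait on the semaphore.
    await gather_with_concurrency(3, *(download(i) for i in range(9)))


asyncio.run(main())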
Answer 1 (score: 17)
If I'm not mistaken, you're searching for asyncio.Semaphore. Usage example:
import asyncio
from random import randint


async def download(code):
    wait_time = randint(1, 3)
    print('downloading {} will take {} second(s)'.format(code, wait_time))
    await asyncio.sleep(wait_time)  # I/O, context will switch to main function
    print('downloaded {}'.format(code))


sem = asyncio.Semaphore(3)


async def safe_download(i):
    async with sem:  # semaphore limits num of simultaneous downloads
        return await download(i)


async def main():
    tasks = [
        asyncio.ensure_future(safe_download(i))  # creating task starts coroutine
        for i
        in range(9)
    ]
    await asyncio.gather(*tasks)  # await moment all downloads done


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()
Output:
downloading 0 will take 3 second(s)
downloading 1 will take 3 second(s)
downloading 2 will take 1 second(s)
downloaded 2
downloading 3 will take 3 second(s)
downloaded 1
downloaded 0
downloading 4 will take 2 second(s)
downloading 5 will take 1 second(s)
downloaded 5
downloaded 3
downloading 6 will take 3 second(s)
downloading 7 will take 1 second(s)
downloaded 4
downloading 8 will take 2 second(s)
downloaded 7
downloaded 8
downloaded 6
An example of async downloading with aiohttp can be found here.
Answer 2 (score: 10)
You basically need a fixed-size pool of download tasks. asyncio doesn't provide one out of the box, but it's easy to create one: simply keep a set of tasks and don't let it grow past the limit. Although the question states your reluctance to go down that route, the code ends up much more elegant:
async def download(code):
    wait_time = randint(1, 3)
    print('downloading {} will take {} second(s)'.format(code, wait_time))
    await asyncio.sleep(wait_time)  # I/O, context will switch to main function
    print('downloaded {}'.format(code))


async def main(loop):
    no_concurrent = 3
    dltasks = set()
    i = 0
    while i < 9:
        if len(dltasks) >= no_concurrent:
            # Wait for some download to finish before adding a new one
            _done, dltasks = await asyncio.wait(
                dltasks, return_when=asyncio.FIRST_COMPLETED)
        dltasks.add(loop.create_task(download(i)))
        i += 1
    # Wait for the remaining downloads to finish
    await asyncio.wait(dltasks)
Another approach is to create a fixed number of coroutines that do the downloading, much like a fixed-size thread pool, and feed them work using an asyncio.Queue. This removes the need to manually limit the number of downloads, which is automatically limited by the number of coroutines invoking download():
# download() defined as above

async def download_from(q):
    while True:
        code = await q.get()
        if code is None:
            # pass on the word that we're done, and exit
            await q.put(None)
            break
        await download(code)


async def main(loop):
    q = asyncio.Queue()
    dltasks = [loop.create_task(download_from(q)) for _ in range(3)]
    i = 0
    while i < 9:
        await q.put(i)
        i += 1
    # Inform the consumers there is no more work.
    await q.put(None)
    await asyncio.wait(dltasks)
As for your other question, the obvious choice would be aiohttp.
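For illustration, a minimal sketch of what the downloads could look like with aiohttp, reusing the queue-based worker pattern above (the URL list is a made-up placeholder, and aiohttp is a third-party install):

import asyncio
import aiohttp

URLS = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']


async def download(session, url):
    # Fetch one URL and return its body as bytes.
    async with session.get(url) as resp:
        return await resp.read()


async def download_from(q, session):
    while True:
        url = await q.get()
        if url is None:
            await q.put(None)  # pass the sentinel on so the other workers exit too
            break
        await download(session, url)


async def main():
    async with aiohttp.ClientSession() as session:
        q = asyncio.Queue()
        workers = [asyncio.create_task(download_from(q, session)) for _ in range(3)]
        for url in URLS:
            await q.put(url)
        await q.put(None)  # no more work
        await asyncio.wait(workers)


asyncio.run(main())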
Answer 3 (score: 4)
The asyncio-pool library does exactly what you need.
https://pypi.org/project/asyncio-pool/
LIST_OF_URLS = ("http://www.google.com", "......")
pool = AioPool(size=3)
await pool.map(your_download_coroutine, LIST_OF_URLS)
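A slightly fuller sketch of the same idea, assuming the import path shown in the asyncio-pool documentation and a stand-in download coroutine:

import asyncio
from asyncio_pool import AioPool  # pip install asyncio-pool

LIST_OF_URLS = ("http://www.google.com", "http://www.example.com")


async def your_download_coroutine(url):
    # Stand-in for the real download.
    await asyncio.sleep(1)
    return url


async def main():
    pool = AioPool(size=3)  # at most 3 downloads run concurrently
    results = await pool.map(your_download_coroutine, LIST_OF_URLS)
    print(results)


asyncio.run(main())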
Answer 4 (score: 2)
Using a semaphore, you can also create a decorator to wrap the function:
import asyncio
from functools import wraps


def request_concurrency_limit_decorator(limit=3):
    # Bind the default event loop
    sem = asyncio.Semaphore(limit)

    def executor(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            async with sem:
                return await func(*args, **kwargs)

        return wrapper

    return executor
Then add the decorator to the source download function. You can now call the download function the same way as before, but with the semaphore limiting the concurrency:
@request_concurrency_limit_decorator(limit=...)
async def download(...):
    ...
Note that when the decorator function is executed, the Semaphore it creates is bound to the default event loop, so you cannot call asyncio.run to create a new loop. Instead, call asyncio.get_event_loop().run_until_complete() to use the default event loop.
See also: asyncio.Semaphore RuntimeError: Task got Future attached to a different loop
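Following that note, a sketch of a complete driver that keeps the semaphore and the tasks on the same (default) event loop, reusing the simulated download from the question:

import asyncio
from functools import wraps
from random import randint


def request_concurrency_limit_decorator(limit=3):
    # The Semaphore is created at decoration time (see the note above).
    sem = asyncio.Semaphore(limit)

    def executor(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            async with sem:
                return await func(*args, **kwargs)

        return wrapper

    return executor


@request_concurrency_limit_decorator(limit=3)
async def download(code):
    wait_time = randint(1, 3)
    print('downloading {} will take {} second(s)'.format(code, wait_time))
    await asyncio.sleep(wait_time)
    print('downloaded {}'.format(code))


async def main():
    await asyncio.gather(*(download(i) for i in range(9)))


# Run on the default event loop (not asyncio.run) so the Semaphore and the tasks
# stay on the same loop; on Python 3.10+, where the Semaphore binds to a loop
# lazily, asyncio.run(main()) works as well.
asyncio.get_event_loop().run_until_complete(main())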
Answer 5 (score: 1)
Small update: creating the loop is no longer necessary. I tweaked the code below; it's just a slight cleanup.
# download(code) is the same

async def main():
    no_concurrent = 3
    dltasks = set()
    for i in range(9):
        if len(dltasks) >= no_concurrent:
            # Wait for some download to finish before adding a new one
            _done, dltasks = await asyncio.wait(dltasks, return_when=asyncio.FIRST_COMPLETED)
        dltasks.add(asyncio.create_task(download(i)))
    # Wait for the remaining downloads to finish
    await asyncio.wait(dltasks)


if __name__ == '__main__':
    asyncio.run(main())
Answer 6 (score: 1)
If your tasks come from a generator, there may be more of them than can be held in memory at once. The classic asyncio.Semaphore context-manager pattern races all of the tasks into memory at the same time.
I don't like the asyncio.Queue pattern either. You can prevent it from preloading all the tasks into memory (by setting maxsize=1), but it still requires boilerplate to define, start up, and shut down the worker coroutines (which consume from the queue), and you have to make sure a worker doesn't fail when a task raises an exception. It doesn't feel pythonic, as if you were implementing your own multiprocessing.pool.
Instead, here is an alternative:
sem = asyncio.Semaphore(n := 5)  # specify maximum concurrency

async def task_wrapper(args):
    try:
        await my_task(*args)
    finally:
        sem.release()

for args in my_generator:  # may yield too many to list
    await sem.acquire()
    asyncio.create_task(task_wrapper(args))

# wait for all tasks to complete
for i in range(n):
    await sem.acquire()
This pauses the generator whenever there are enough active tasks, and lets the event loop clean up finished tasks. Note that for older Python versions, replace create_task with ensure_future.
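Since the snippet above uses top-level await, here is a sketch of how it might be wrapped in a coroutine and driven with asyncio.run (my_task and the argument generator are stand-ins):

import asyncio


async def my_task(code):
    # Stand-in for the real work.
    await asyncio.sleep(1)
    print('done', code)


def my_generator():
    # Stand-in for a generator that may yield too many items to hold in memory.
    for i in range(9):
        yield (i,)


async def main():
    n = 3  # maximum concurrency
    sem = asyncio.Semaphore(n)

    async def task_wrapper(args):
        try:
            await my_task(*args)
        finally:
            sem.release()

    for args in my_generator():
        await sem.acquire()              # pause the generator when n tasks are active
        asyncio.create_task(task_wrapper(args))

    # Wait for all tasks to complete by reacquiring every slot.
    for _ in range(n):
        await sem.acquire()


asyncio.run(main())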