I wrote a benchmarking utility that queries a REST endpoint in batches. It does so in three ways:
1. sequentially, with requests
2. concurrently, with each requests call wrapped in loop.run_in_executor()
3. concurrently, with aiohttp
Here are the results for different batch sizes:
batch_size = 16
concur_times seq_times concur_aiohttp_times
count 50.000000 50.000000 50.000000
mean 0.123786 0.235883 0.087843
std 0.009733 0.018039 0.029977
min 0.108682 0.210515 0.071560
25% 0.118666 0.222436 0.075565
50% 0.121978 0.231876 0.080050
75% 0.125740 0.242939 0.086345
max 0.169194 0.283809 0.267874
batch_size = 4
concur_times seq_times concur_aiohttp_times
count 50.000000 50.000000 50.000000
mean 0.080764 0.091276 0.052807
std 0.008342 0.016509 0.033814
min 0.069541 0.078517 0.041993
25% 0.076142 0.082242 0.044563
50% 0.079046 0.085540 0.045735
75% 0.081645 0.092659 0.049428
max 0.111622 0.170785 0.281397
As the results show, the aiohttp routine consistently parallelizes far better. Worse, with the small batch size (4), the second method, which uses loop.run_in_executor (the "concur_times" column), manages to be only about 1/9 faster than the sequential method.
Why is that? Is there a problem with my code? I have included it below.
I tried swapping the network I/O for time.sleep and asyncio.sleep, and that produced the expected result: methods 2 and 3 were equally fast, while method 1 was batch_size times slower.
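That control experiment looked roughly like this (a minimal sketch reconstructed from the description; DELAY, fake_request, and fake_request_async are illustrative names, not part of the benchmark code below):

import asyncio
import time

DELAY = 0.1  # stand-in for one request's network time

def fake_request(n):
    time.sleep(DELAY)  # blocking sleep in place of a requests call

async def fake_request_async(n):
    await asyncio.sleep(DELAY)  # non-blocking sleep in place of an aiohttp call

async def concur_dl_fake(todos_to_get):
    # method 2: blocking version wrapped in the default executor
    loop = asyncio.get_running_loop()
    await asyncio.wait([loop.run_in_executor(None, fake_request, n)
                        for n in todos_to_get])

async def concur_dl_async_fake(todos_to_get):
    # method 3: native coroutines scheduled directly
    await asyncio.wait([asyncio.ensure_future(fake_request_async(n))
                        for n in todos_to_get])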
The code:
import asyncio
import requests
from cytoolz.curried import *  # curry, pipe, map, do
import pandas as pd
from timeit import default_timer as now

url = 'https://jsonplaceholder.typicode.com/todos/'

def dl_todo_with_requests(session, n):
    # blocking download of a single todo via requests
    response = session.get(url + str(n))
    assert(response.status_code == 200)
    text = response.text
    return text
dl_todo_with_requests = curry(dl_todo_with_requests)

def seq_dl(todos_to_get):
    # method 1: download sequentially
    with requests.Session() as session:
        todos = pipe(
            todos_to_get,
            map(dl_todo_with_requests(session)),
            list)
    return todos
get_todos_from_futures = lambda futures: \
    pipe(futures,
         map(lambda fut: fut.result()),
         list)

async def concur_dl(todos_to_get):
    # method 2: wrap each blocking request in the default executor
    loop = asyncio.get_running_loop()
    with requests.Session() as session:
        completed_futures, _pending = await \
            pipe(
                todos_to_get,
                map(lambda n:
                    loop.run_in_executor(
                        None,
                        lambda: dl_todo_with_requests(session, n))),
                list,
                asyncio.wait)
        todos = get_todos_from_futures(completed_futures)
    return todos
import aiohttp

async def concur_dl_aiohttp(todos_to_get):
    # method 3: download with aiohttp's native coroutines
    async def dl(session, todo):
        async with session.get(url + str(todo)) as resp:
            assert(resp.status == 200)
            return await resp.text()  # await the body rather than returning a bare coroutine
    dl = curry(dl)
    async with aiohttp.ClientSession() as session:
        unexecuted = pipe(
            todos_to_get,
            map(dl(session)),
            list)
        completed_futures, _pending = await asyncio.wait(unexecuted)
        todos = get_todos_from_futures(completed_futures)
    return todos
def check_todos_received(todos):
    assert(len(todos) == len(todos_to_get))
    todo_has_content = lambda todo: len(todo) > len('{}')
    assert(all(map(todo_has_content, todos)))
    return True

def measure_it(f):
    start = now()
    f()
    elapsed = now() - start
    return elapsed

inspect = lambda f, it: map(do(f), it)  # lazily apply f for its side effect, passing items through
inspect = curry(inspect)

def bench(n_iters=50, batch_size=4):
    todos_to_get = range(1, batch_size + 1)
    seq_dl(todos_to_get)  # heat caches, if any
    # note: nothing forces these lazy pipes, so the checks never actually run
    measure_seq = lambda: pipe(
        seq_dl(todos_to_get),
        inspect(check_todos_received))
    measure_concur = lambda: pipe(
        asyncio.run(concur_dl(todos_to_get)),
        inspect(check_todos_received))
    measure_concur_aiohttp = lambda: pipe(
        asyncio.run(concur_dl_aiohttp(todos_to_get)),
        inspect(check_todos_received))
    do_the_bench = lambda dl_f, title: \
        pipe(range(n_iters),
             inspect(lambda n:
                 print("doing %s/%s %s batch download"
                       % (n + 1, n_iters, title))),
             map(lambda _: measure_it(dl_f)),
             list)
    concur_times = do_the_bench(measure_concur, 'concurrent')
    concur_aiohttp_times = do_the_bench(measure_concur_aiohttp, 'concurrent_aiohttp')
    seq_times = do_the_bench(measure_seq, 'sequential')
    return dict(
        concur_times=concur_times,
        seq_times=seq_times,
        concur_aiohttp_times=concur_aiohttp_times)
The benchmark is run as bench(n_iters=50, batch_size=4), and the output is passed through lambda output: pandas.DataFrame(output).describe() to produce the tables above.
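Put concretely, the driver amounts to the following two lines (a restatement of the above; the variable name results is mine):

results = bench(n_iters=50, batch_size=4)
print(pd.DataFrame(results).describe())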
Answer 0 (score: 2)
asyncio's run_in_executor uses a ThreadPoolExecutor as its default executor, which runs jobs on Python threads. It is therefore also subject to the GIL, as described in this thread.
In your case, only one thread with an asynchronous job is running at any given time, which is why aiohttp shows better performance.
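One way to probe whether the executor, rather than aiohttp itself, is the limiting factor is to pass an explicitly sized pool instead of None. This is a sketch against the question's code (it reuses requests and the curried dl_todo_with_requests; the one-thread-per-request max_workers choice is my assumption, not part of the answer):

import asyncio
from concurrent.futures import ThreadPoolExecutor

async def concur_dl_sized(todos_to_get):
    loop = asyncio.get_running_loop()
    # one worker thread per request, so no task ever waits for a free thread
    with ThreadPoolExecutor(max_workers=len(todos_to_get)) as pool, \
            requests.Session() as session:
        done, _pending = await asyncio.wait(
            [loop.run_in_executor(pool, dl_todo_with_requests(session), n)
             for n in todos_to_get])
    return [fut.result() for fut in done]

If this variant matches the aiohttp numbers, the default pool size was the bottleneck; if it does not, the cost lies in the threads themselves.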