I have a list of 100 IDs, and I need to do a lookup for each one. Each lookup takes about 3 seconds to run. Here is the sequential code that does it:
ids = [102225077, 102225085, 102225090, 102225097, 102225105, ...]
for id in ids:
    run_updates(id)
I want to run ten (10) of these at a time, using either gevent or multiprocessing. How would I do this? Here is what I tried with gevent, but it was slow:
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in xrange(0, len(l), n):
        yield l[i:i+n]
ids = [102225077, 102225085, 102225090, 102225097, 102225105, ...]
if __name__ == '__main__':
    for list_of_ids in list(chunks(ids, 10)):
        jobs = [gevent.spawn(run_updates(id)) for id in list_of_ids]
        gevent.joinall(jobs, timeout=200)
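One likely reason this attempt was slow: gevent.spawn(run_updates(id)) calls run_updates immediately and hands its return value to spawn, so the loop is still effectively sequential. spawn needs the function and its argument passed separately. A minimal corrected sketch (assuming run_updates does blocking network I/O, which also needs gevent's monkey patching to become cooperative):

import gevent
from gevent import monkey
monkey.patch_all()  # make blocking socket I/O yield to other greenlets

if __name__ == '__main__':
    for list_of_ids in chunks(ids, 10):
        # pass the function and its argument separately -- do not call it here
        jobs = [gevent.spawn(run_updates, id) for id in list_of_ids]
        gevent.joinall(jobs, timeout=200)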
What is the correct way to split up the ids list and run 10 at a time? I'm open to using either multiprocessing or gevent (I'm not very familiar with either).
Running the 100 IDs sequentially takes 364 seconds.
Using multiprocessing with 5 workers takes about 207 seconds for the 100 IDs:
from multiprocessing import Pool

pool = Pool(processes=5)
pool.map(run_updates, list_of_apple_ids)
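Since the goal is ten concurrent lookups, a self-contained sketch of the same approach with a 10-worker pool looks like this (run_updates and the ID list here are stand-ins):

from multiprocessing import Pool

ids = [102225077, 102225085, 102225090, 102225097, 102225105]  # real list goes here

def run_updates(id):
    pass  # the ~3 second lookup goes here

if __name__ == '__main__':
    # Pool.map spreads ids across 10 worker processes and keeps every
    # worker busy until the list is exhausted -- no manual chunking needed.
    pool = Pool(processes=10)
    pool.map(run_updates, ids)
    pool.close()
    pool.join()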
Using gevent takes somewhere in between:
jobs = [gevent.spawn(run_updates, apple_id) for apple_id in list_of_apple_ids]
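gevent also ships a pool that caps concurrency at a fixed size, which matches the 10-at-a-time requirement without manual chunking. A sketch, again assuming run_updates does monkey-patched network I/O:

from gevent import monkey
monkey.patch_all()  # must run before any sockets are created

from gevent.pool import Pool

if __name__ == '__main__':
    # At most 10 greenlets run at once; as soon as one lookup finishes,
    # the pool starts the next ID instead of waiting for a batch of 10.
    pool = Pool(10)
    pool.map(run_updates, ids)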
Is there any way to get better performance than Pool.map? I have a fairly decent machine and a fast internet connection; it should be able to finish much faster...
Answer 0 (score: 0)
Check out the grequests library. You could do something like this:
import grequests

for list_of_ids in list(chunks(ids, 10)):
    urls = [''.join(('http://www.example.com/id?=', str(id))) for id in list_of_ids]
    requests = (grequests.get(url) for url in urls)
    responses = grequests.map(requests)
    for response in responses:
        print response.content
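grequests.map also accepts a size argument that caps how many requests are in flight at once, so the manual chunking can be dropped. A sketch using the same placeholder URL scheme as above:

urls = [''.join(('http://www.example.com/id?=', str(id))) for id in ids]
# size=10 keeps at most 10 requests running concurrently
responses = grequests.map((grequests.get(url) for url in urls), size=10)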
I know this somewhat breaks your model, since your requests are wrapped up in the run_updates method, but I think it's still worth exploring.
Answer 1 (score: 0)
from multiprocessing import Process
from random import random

ids = [random() for _ in range(100)]  # make some fake ids, whatever

def do_thing(arg):
    print arg  # Here's where you'd do the lookup

if __name__ == '__main__':
    while ids:
        curs, ids = ids[:10], ids[10:]  # peel off the next batch of 10
        procs = [Process(target=do_thing, args=(c,)) for c in curs]
        for proc in procs:
            proc.start()  # start(), not run(): run() would execute in this process
        for proc in procs:
            proc.join()  # wait for the whole batch before starting the next
That's roughly how I'd do it, I guess.
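One caveat with the batch-of-10 approach: each batch waits for its slowest lookup before the next batch starts, so processes sit idle at the end of every batch. That is part of why a fixed-size Pool, which hands a worker a new ID as soon as it finishes the previous one, tends to complete sooner.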