I am new to Python multiprocessing. I don't quite understand the difference between Pool and Process. Can someone suggest which one I should use for my needs?
I have thousands of HTTP GET requests to send. After sending each one and getting the response, I want to store the response (a simple int) into a (shared) dict. My final goal is to write all the data in the dict to a file.
This is not CPU-intensive at all. My goal is to speed up sending the HTTP GET requests, since there are so many of them. The requests are all isolated and do not depend on each other.
Should I use Pool or Process in this case?
Thanks!
---- The following code was added on 8/28 ----
I wrote it using multiprocessing. The main challenges I face are:
1) GET requests sometimes fail. I have to set up 3 retries to minimize the need to re-run my code / all the requests. I only want to retry the ones that failed. Can I achieve this with asynchronous HTTP requests without using Pool?
2) I want to check the response value of each request, and handle exceptions.
The code below is a simplified version of my actual code. It works fine, but I wonder whether it is the most efficient way of doing things. Can anyone offer any suggestions? Thanks a lot!
import time
import requests
from multiprocessing import Pool

def get_data(endpoint, get_params):
    response = requests.get(endpoint, params=get_params)
    if response.status_code != 200:
        raise Exception("bad response for " + str(get_params))
    return response.json()

def get_currency_data(endpoint, currency, date):
    get_params = {'currency': currency,
                  'date': date
                  }
    for attempt in range(3):
        try:
            output = get_data(endpoint, get_params)
            # additional return value check
            # ......
            return output['value']
        except:
            time.sleep(1)  # I found that sleeping for 1s almost always makes the retry succeed
    return 'error'

def get_all_data(currencies, dates):
    # I have many dates, but not too many currencies
    # (endpoint is defined elsewhere in the real code)
    for currency in currencies:
        results = []
        pool = Pool(processes=20)
        for date in dates:
            results.append(pool.apply_async(get_currency_data, args=(endpoint, currency, date)))
        output = [p.get() for p in results]
        pool.close()
        pool.join()
        time.sleep(10)  # Unfortunately I have to give the server some time to rest. I found it helps to reduce failures. I didn't write the server; this is not something I can control.
Answer 0 (score: 2)
Neither. Use asynchronous programming. Consider the following code, pulled directly from that article (credit goes to Paweł Miech):
#!/usr/local/bin/python3.5
import asyncio
from aiohttp import ClientSession

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def run(r):
    url = "http://localhost:8080/{}"
    tasks = []

    # Fetch all responses within one Client session,
    # keep connection alive for all requests.
    async with ClientSession() as session:
        for i in range(r):
            task = asyncio.ensure_future(fetch(url.format(i), session))
            tasks.append(task)

        responses = await asyncio.gather(*tasks)
        # you now have all response bodies in this variable
        print(responses)

def print_responses(result):
    print(result)

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(4))
loop.run_until_complete(future)
Instead of the code as given, just build an array of URLs, loop over that array, and send each one to fetch (see the sketch below).
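Here is a minimal sketch of that adaptation, which also folds in the retry-only-the-failures behaviour asked about in the question. It assumes aiohttp is installed; the URL list, the 3-retry count, and returning 'error' after the retries are exhausted are placeholders mirroring the question's code, not part of the original answer.

import asyncio
from aiohttp import ClientSession

async def fetch(url, session, retries=3):
    # retry a failed GET a few times, instead of rerunning everything
    for attempt in range(retries):
        try:
            async with session.get(url) as response:
                if response.status == 200:
                    return await response.json()
        except Exception:
            pass
        await asyncio.sleep(1)  # brief pause before retrying, as in the question's code
    return 'error'

async def run(urls):
    async with ClientSession() as session:
        tasks = [asyncio.ensure_future(fetch(u, session)) for u in urls]
        return await asyncio.gather(*tasks)

urls = ['http://localhost:8080/0', 'http://localhost:8080/1']  # placeholder URLs
loop = asyncio.get_event_loop()
results = loop.run_until_complete(run(urls))
print(dict(zip(urls, results)))  # responses keyed by URL, ready to write to a file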
Per @roganjosh's comment below, requests_futures is a super simple way to accomplish this.
from requests_futures.sessions import FuturesSession
sess = FuturesSession()
urls = ['http://google.com', 'https://stackoverflow.com']
responses = {url: sess.get(url) for url in urls}
contents = {url: future.result().content
            for url, future in responses.items()
            if future.result().status_code == 200}
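Since item 1 in the question asks for retrying only the requests that failed, here is a rough sketch of one way to do that with FuturesSession. It is not part of the original answer; the retry count, the status-code check, and the helper name fetch_with_retries are assumptions.

from requests_futures.sessions import FuturesSession

def fetch_with_retries(urls, retries=3):
    sess = FuturesSession()
    results = {}
    pending = list(urls)
    for attempt in range(retries):
        if not pending:
            break
        futures = {url: sess.get(url) for url in pending}
        failed = []
        for url, future in futures.items():
            try:
                resp = future.result()
                if resp.status_code == 200:
                    results[url] = resp.content
                else:
                    failed.append(url)
            except Exception:
                failed.append(url)
        pending = failed  # only the failed URLs are resubmitted on the next pass
    return results, pending  # pending holds URLs that never succeeded

ok, still_failed = fetch_with_retries(['http://google.com', 'https://stackoverflow.com'])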
You can also use grequests, which supports Python 2.7, to perform asynchronous URL calls.
import grequests
urls = ['http://google.com', 'http://stackoverflow.com']
responses = grequests.map(grequests.get(u) for u in urls)
print([len(r.content) for r in responses])
# [10475, 250785]
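If you go the grequests route, its map() also accepts a size argument to cap how many requests are in flight at once and an exception_handler callback for requests that raise, which lines up with the question's need to go easy on the server and handle errors. A small sketch (the handler name and the cap of 20 are just placeholders):

import grequests

def on_exception(request, exception):
    # called for requests that raise (e.g. connection errors); since it returns
    # nothing, the results list gets None in that request's slot
    print('request failed:', request.url, exception)

urls = ['http://google.com', 'http://stackoverflow.com']
responses = grequests.map((grequests.get(u) for u in urls),
                          size=20,               # at most 20 requests in flight at once
                          exception_handler=on_exception)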
If you want to do this with multiprocessing, you can. Disclaimer: doing so incurs a lot of overhead, and it won't be as efficient as asynchronous programming... but it is possible.
It's actually pretty straightforward: you map the URLs through the HTTP GET function:
import requests
urls = ['http://google.com', 'http://stackoverflow.com']
from multiprocessing import Pool
pool = Pool(8)
responses = pool.map(requests.get, urls)
The size of the pool will be the number of simultaneous GET requests. Sizing it up should improve your network efficiency, but it adds communication and forking overhead on the local machine.
Again, I don't recommend this, but it certainly is possible, and if you have enough cores it's probably faster than making the calls synchronously.
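To make the Pool version tolerate the occasional bad response (item 2 in the question), you can wrap requests.get in a small worker that catches errors, so one failure doesn't blow up the whole map() call. A rough sketch, not from the original answer, with placeholder URLs and a made-up worker name fetch_one:

import requests
from multiprocessing import Pool

def fetch_one(url):
    # runs in a separate process; catch failures so a single bad URL
    # doesn't raise inside pool.map and lose all the other results
    try:
        resp = requests.get(url, timeout=10)
        return url, resp.status_code, len(resp.content)
    except requests.RequestException as exc:
        return url, None, str(exc)

if __name__ == '__main__':
    urls = ['http://google.com', 'http://stackoverflow.com']
    with Pool(8) as pool:
        for url, status, info in pool.map(fetch_one, urls):
            print(url, status, info)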