How can I use multiprocessing to loop over a large list of URLs?

Time: 2018-01-31 21:07:42

Tags: python multithreading multiprocessing python-multiprocessing

Question: check a list of more than 1000 URLs and get each URL's return code (status_code).

I have a working script, but it is very slow.

I figure there must be a better, more Pythonic (nicer) way to do this, where I can spawn 10 or 20 threads to check the URLs and collect the responses, e.g.:

200 -> www.yahoo.com
404 -> www.badurl.com
...

Input file: Url10.txt

www.example.com
www.yahoo.com
www.testsite.com

...

import requests

with open("url10.txt") as f:
    urls = f.read().splitlines()

print(urls)
for url in urls:
    url = 'http://' + url   # Add http:// to each url (there has to be a better way to do this)
    try:
        resp = requests.get(url, timeout=1)
        print(len(resp.content), '->', resp.status_code, '->', resp.url)
    except Exception as e:
        print("Error", url)

Challenge: speed it up with multiprocessing.

Using multiprocessing

But it doesn't work. I get the following error (note: I'm not sure whether I've implemented this correctly):

AttributeError: Can't get attribute 'checkurl' on <module '__main__' (built-in)>


import requests
from multiprocessing import Pool

with open("url10.txt") as f:
    urls = f.read().splitlines()

def checkurlconnection(url):

    for url in urls:
        url =  'http://'+url
        try:
            resp = requests.get(url, timeout=1)
            print(len(resp.content), '->', resp.status_code, '->', resp.url)
        except Exception as e:
            print("Error", url)

if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)

2 Answers:

Answer 0: (score: 2)

In this case your task is I/O-bound rather than processor-bound: it takes the websites far longer to reply than it takes your CPU to make one pass through the script (not counting the TCP requests). This means you won't get any speedup from doing this task in parallel, which is what multiprocessing does. What you want is multithreading. The way to get it is the lightly documented, arguably badly named multiprocessing.dummy:

import requests
from multiprocessing.dummy import Pool as ThreadPool

urls = ['https://www.python.org',
        'https://www.python.org/about/']

def get_status(url):
    r = requests.get(url)
    return r.status_code

if __name__ == "__main__":
    pool = ThreadPool(4)                  # make the pool of worker threads
    results = pool.map(get_status, urls)  # fetch each url in its own thread
    pool.close()                          # no more tasks will be submitted
    pool.join()                           # wait for all workers to finish
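
For what it's worth, the same thread-pool pattern can also be written with the standard library's concurrent.futures, which is the more commonly recommended interface these days; a self-contained minimal sketch:

import requests
from concurrent.futures import ThreadPoolExecutor

urls = ['https://www.python.org',
        'https://www.python.org/about/']

def get_status(url):
    r = requests.get(url)
    return r.status_code

if __name__ == "__main__":
    # map() preserves input order and blocks until every result is in
    with ThreadPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(get_status, urls))
    print(results)  # e.g. [200, 200]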

See here for an example of multiprocessing vs. multithreading in Python.

Answer 1: (score: 0)

In checkurlconnection, the parameter url is immediately shadowed by the for loop variable, so every worker ignores its argument and walks the global urls list, which is not what you want. Note that Pool.map already hands the function one URL per call, so the inner loop should simply be dropped and the function should operate on a single url (renaming the parameter to urls would instead make the loop iterate over the characters of one URL string):

import requests
from multiprocessing import Pool

with open("url10.txt") as f:
    urls = f.read().splitlines()

def checkurlconnection(url):
    # Pool.map passes one URL per call, so no inner loop is needed.
    url = 'http://' + url
    try:
        resp = requests.get(url, timeout=1)
        print(len(resp.content), '->', resp.status_code, '->', resp.url)
    except Exception:
        print("Error", url)

if __name__ == "__main__":
    p = Pool(processes=4)
    result = p.map(checkurlconnection, urls)
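
If you would rather collect the status codes in the parent process than print from the workers, the worker can return a (url, status) pair instead; a minimal sketch along the same lines (returning None for failed requests is an illustrative choice, not part of the original answer):

import requests
from multiprocessing import Pool

def check(url):
    # Return (url, status_code), or (url, None) if the request failed.
    try:
        resp = requests.get('http://' + url, timeout=1)
        return (url, resp.status_code)
    except requests.RequestException:
        return (url, None)

if __name__ == "__main__":
    with open("url10.txt") as f:
        urls = f.read().splitlines()
    with Pool(processes=4) as p:
        results = p.map(check, urls)
    for url, status in results:
        print(status, '->', url)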