How to check whether web pages are alive with Python multiprocessing

Date: 2011-08-05 22:38:32

Tags: python http

I have a list of URLs (about 25k) and I am trying to check whether they are alive (return a 200 response). I want to use Python's multiprocessing library to run these checks in parallel. I wrote the following (based largely on the example in the Python docs), but it seems to run very slowly. Is there any way to make this script faster?

    import urllib2
    import time
    import random

    from multiprocessing import Process, Queue, current_process, freeze_support

    class HeadRequest(urllib2.Request):
        def get_method(self):
            return "HEAD"
    #
    # Function run by worker processes
    #

    def worker(input, output):
        for args in iter(input.get, 'STOP'):
            result = alive(args) 
            output.put(result)

    #
    # Functions referenced by tasks
    #

    def alive(x):
        x = x.strip()
        try:
            return x, ":", urllib2.urlopen(HeadRequest(x)).getcode()
        except urllib2.HTTPError as e:
            return x, ":", e.code
        except:
            return x, ": Error"

    #
    # Main routine: queue the tasks, start the workers, collect the results
    #

    def check():
        NUMBER_OF_PROCESSES = 500
        text_file = open("url.txt", "r")
        TASKS1 = text_file.readlines()

        # Create queues
        task_queue = Queue()
        done_queue = Queue()

        # Submit tasks
        for task in TASKS1:
            task_queue.put(task)

        # Start worker processes
        for i in range(NUMBER_OF_PROCESSES):
            Process(target=worker, args=(task_queue, done_queue)).start()

        # Get and print results
        for i in range(len(TASKS1)):
            print done_queue.get()

        # Tell child processes to stop
        for i in range(NUMBER_OF_PROCESSES):
            task_queue.put('STOP')


    if __name__ == '__main__':
        freeze_support()
        check()

Any help is appreciated.

1 Answer:

Answer 0 (score: 1)

There is an easier way to do this:

http://scrapy.org/

Scrapy gives you a web-crawling framework for Python: you feed it a list of URLs to fetch (in your case it doesn't need to follow links), and it automatically scales to multiple crawlers within the process/thread limits you give it, so you don't have to work out the details of multiprocess communication and scaling yourself.

http://doc.scrapy.org/topics/scrapyd.html#topics-scrapyd

The only thing left for your own code is analyzing the results.
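
For illustration, a minimal sketch of such a spider, written against a recent Scrapy release (the class and setting names differed in the 0.x versions that were current when this was answered). The spider name, file name, and concurrency value are assumptions; url.txt is the input file from the question:

    # check_spider.py -- a sketch, not the answer's original code.
    import scrapy

    class AliveSpider(scrapy.Spider):
        name = "alive"

        custom_settings = {
            # Deliver non-200 responses to the callback instead of dropping them.
            "HTTPERROR_ALLOW_ALL": True,
            # Tune concurrency here instead of spawning 500 processes.
            "CONCURRENT_REQUESTS": 100,
            "RETRY_ENABLED": False,
        }

        def start_requests(self):
            with open("url.txt") as f:
                for url in (line.strip() for line in f):
                    if url:
                        # A HEAD request is enough to read the status code.
                        yield scrapy.Request(url, method="HEAD",
                                             callback=self.parse,
                                             errback=self.on_error)

        def parse(self, response):
            # One item per URL: its final URL and HTTP status code.
            yield {"url": response.url, "status": response.status}

        def on_error(self, failure):
            # DNS failures, timeouts, refused connections, etc.
            self.logger.info("%s : Error", failure.request.url)

Running it with something like "scrapy runspider check_spider.py -o results.csv" writes one line per URL, and Scrapy handles the connection management, timeouts, and concurrency that the multiprocessing version manages by hand.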