Python multithreaded web crawler

Time: 2012-05-29 13:53:41

Tags: python multithreading thread-safety web-crawler

Hello! I am trying to write a web crawler in Python, and I want to use Python multithreading. Even after reading the previously recommended papers and tutorials, I still have a problem. My code is here (the whole source code is here):

import hashlib
import threading
import Queue

g_URLsDict = {}  # global url-seen table (presumably a dict in the full source)

class Crawler(threading.Thread):

    global g_URLsDict 
    varLock = threading.Lock()
    count = 0

    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.url = self.queue.get()

    def run(self):
        while 1:
            print self.getName()+" started" 
            self.page = getPage(self.url)
            self.parsedPage = getParsedPage(self.page, fix=True)
            self.urls = getLinksFromParsedPage(self.parsedPage)

            for url in self.urls:

                self.fp = hashlib.sha1(url).hexdigest()

                #url-seen check
                Crawler.varLock.acquire() #lock for global variable g_URLs
                if self.fp in g_URLsDict:
                    Crawler.varLock.release() #releasing lock
                else:
                    #print url+" does not exist"
                    Crawler.count +=1
                    print "total links: %d"%len(g_URLsDict)
                    print self.fp
                    g_URLsDict[self.fp] = url
                    Crawler.varLock.release() #releasing lock
                    self.queue.put(url)

                    print self.getName()+ " %d"%self.queue.qsize()
                    self.queue.task_done()
            #self.queue.task_done()
        #self.queue.task_done()


print g_URLsDict
queue = Queue.Queue()
queue.put("http://www.ertir.com")

for i in range(5):
    t = Crawler(queue)
    t.setDaemon(True)
    t.start()

queue.join()

It does not work at all; it gives no results after Thread-1, and it executes differently each run, sometimes producing this error:

Exception in thread Thread-2 (most likely raised during interpreter shutdown):

How can I fix this? Also, I don't think this is any more efficient than just using a loop.

I tried to fix run():

def run(self):
    while 1:
        print self.getName()+" started" 
        self.page = getPage(self.url)
        self.parsedPage = getParsedPage(self.page, fix=True)
        self.urls = getLinksFromParsedPage(self.parsedPage)

        for url in self.urls:

            self.fp = hashlib.sha1(url).hexdigest()

            #url-seen check
            Crawler.varLock.acquire() #lock for global variable g_URLs
            if self.fp in g_URLsDict:
                Crawler.varLock.release() #releasing lock
            else:
                #print url+" does not exist"
                print self.fp
                g_URLsDict[self.fp] = url
                Crawler.varLock.release() #releasing lock
                self.queue.put(url)

                print self.getName()+ " %d"%self.queue.qsize()
                #self.queue.task_done()
        #self.queue.task_done()
    self.queue.task_done()

I have tried calling task_done() in different places; can anyone explain the difference?

1 Answer:

Answer 0 (score: 3):

You only call self.url = self.queue.get() once, when the thread is initialized. If you want to pick up new URLs for further processing, you need to fetch them from the queue again inside the while loop.

Try replacing self.page = getPage(self.url) with self.page = getPage(self.queue.get()). Note that the get function blocks indefinitely. You will probably want to time out after a while and add some way for your background threads to exit gracefully on request (which would eliminate the Exception you saw).

There are some good examples on effbot.org that use get() in the way I described above.
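As a rough illustration of that idea (a minimal sketch, not your code: the stop_event name and the 10-second timeout are just assumptions for the example), a worker loop can use a timed get() and check a shared flag so the main thread can ask it to exit cleanly:

import Queue
import threading

stop_event = threading.Event()  # the main thread sets this when it wants workers to exit

def run(self):
    while not stop_event.is_set():
        try:
            url = self.queue.get(True, 10)  # block for at most 10 seconds instead of forever
        except Queue.Empty:
            continue  # nothing arrived yet; loop around and re-check stop_event
        self.page = getPage(url)
        # ... parse the page and enqueue new links as before ...
        self.queue.task_done()  # matched with the successful get() above

The main thread can then call stop_event.set() and join each worker instead of relying on daemon threads being killed at interpreter shutdown, which is what produces the "most likely raised during interpreter shutdown" message.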

Edit - answer to the initial comments:

Take a look at the docs for task_done(); for every call to get() (that does not time out) you should make one call to task_done(), which tells any blocking call to join() that everything on that queue has now been processed. Each call to get() will block (sleep) while it waits for a new URL to be posted to the queue.
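To make the pairing concrete (a standalone sketch, unrelated to the crawler code): every item taken with get() is matched by exactly one task_done(), and queue.join() in the main thread returns only once every enqueued item has been marked done:

import Queue
import threading

q = Queue.Queue()

def worker():
    while 1:
        item = q.get()      # blocks until an item is available
        print "processing %d" % item
        q.task_done()       # one task_done() per successful get()

t = threading.Thread(target=worker)
t.setDaemon(True)
t.start()

for i in range(3):
    q.put(i)

q.join()  # returns only after task_done() has been called for every item put()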

Edit2 - try this alternative run function:

def run(self):
    while 1:
        print self.getName()+" started"
        url = self.queue.get() # <-- note that we're blocking here to wait for a url from the queue
        self.page = getPage(url)
        self.parsedPage = getParsedPage(self.page, fix=True)
        self.urls = getLinksFromParsedPage(self.parsedPage)

        for url in self.urls:

            self.fp = hashlib.sha1(url).hexdigest()

            #url-seen check
            Crawler.varLock.acquire() #lock for global variable g_URLs
            if self.fp in g_URLsDict:
                Crawler.varLock.release() #releasing lock
            else:
                #print url+" does not exist"
                Crawler.count +=1
                print "total links: %d"%len(g_URLsDict)
                print self.fp
                g_URLsDict[self.fp] = url
                Crawler.varLock.release() #releasing lock
                self.queue.put(url)

                print self.getName()+ " %d"%self.queue.qsize()

        self.queue.task_done() # <-- We've processed the url this thread pulled off the queue so indicate we're done with it.