Recursive link crawling with Scrapy

Date: 2013-09-25 10:00:29

Tags: python, recursion, scrapy

It starts from a URL on the web (for example http://python.org), fetches the page at that URL, and parses all the links on that page into a link repository. Next, it fetches the content of any URL from the repository it just built, parses the links in that new content into the repository, and keeps doing this for every link in the repository until it is stopped or a given number of links has been fetched.

How can I do this with Python and Scrapy? I am able to extract all the links from a single web page, but how do I do it recursively?
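For reference, the kind of recursive crawl described above maps directly onto Scrapy's CrawlSpider. Below is a minimal sketch assuming a recent Scrapy version; the spider name, the page limit and the output fields are illustrative choices, not part of the question.

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class RecursiveSpider(CrawlSpider):
    name = "recursive"
    start_urls = ["http://python.org"]

    # Stop after a given number of fetched pages instead of crawling forever.
    custom_settings = {"CLOSESPIDER_PAGECOUNT": 100}

    # An empty LinkExtractor matches every link; follow=True tells the
    # spider to keep extracting and following links from each visited page.
    rules = (Rule(LinkExtractor(), callback="parse_page", follow=True),)

    def parse_page(self, response):
        # Record the links found on this page; the Rule above takes care of
        # following them recursively, and Scrapy de-duplicates requests.
        for href in response.css("a::attr(href)").getall():
            yield {"page": response.url, "link": response.urljoin(href)}

Saved as, say, recursive_spider.py, it can be run with scrapy runspider recursive_spider.py -o links.json and stops once CLOSESPIDER_PAGECOUNT pages have been downloaded.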

2 answers:

Answer 0 (score: 1)

A few comments:

  • You don't need Scrapy for such a simple task. urllib (or Requests) and an HTML parser (Beautiful Soup, etc.) can do the job.
  • I don't remember where I heard it, but I think it is best to crawl using a BFS algorithm: it makes it easy to avoid circular references (a minimal sketch follows right after this list).
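A minimal BFS skeleton, for illustration only (fetch_links is a hypothetical helper that downloads a page and returns the links it contains; it is not part of the code below):

from collections import deque

def bfs_crawl(start_url, fetch_links, max_pages=100):
    """Breadth-first crawl: a deque holds the frontier and a set of
    visited URLs prevents circular references."""
    visited = set()
    queue = deque([start_url])
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        for link in fetch_links(url):
            if link not in visited:
                queue.append(link)
    return visited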

Below is a simple implementation: it does not resolve internal (relative) links (only absolute hyperlinks are kept), it has no error handling at all (403, 404, pages without links, ...), and it is painfully slow (the multiprocessing module could help a lot here). The code is Python 2 and uses the old BeautifulSoup 3 API.

import BeautifulSoup
import urllib2
import itertools
import random


class Crawler(object):
    """docstring for Crawler"""

    def __init__(self):

        self.soup = None                                        # BeautifulSoup object
        self.current_page   = "http://www.python.org/"          # Current page's address
        self.links          = set()                             # Set of every link fetched so far
        self.visited_links  = set()                             # Set of links already crawled

        self.counter = 0 # Simple counter for debug purpose

    def open(self):

        # Open the url
        print self.counter, ":", self.current_page
        res = urllib2.urlopen(self.current_page)
        html_code = res.read()
        self.visited_links.add(self.current_page)

        # Parse the page and collect every link on it
        self.soup = BeautifulSoup.BeautifulSoup(html_code)

        page_links = []
        try:
            page_links = itertools.ifilter(  # Only keep absolute links
                # "href and ..." guards against <a> tags without an href attribute
                lambda href: href and 'http://' in href,
                (a.get('href') for a in self.soup.findAll('a')))
        except Exception:  # Magnificent exception handling
            pass



        # Update links 
        self.links = self.links.union( set(page_links) ) 



        # Choose a random url from the non-visited set (if any remain)
        not_visited = self.links.difference(self.visited_links)
        if not_visited:
            self.current_page = random.sample(not_visited, 1)[0]
        self.counter += 1


    def run(self):

        # Crawl 3 webpages (or stop early once every fetched url has been visited)
        while len(self.visited_links) < 3:
            self.open()
            if self.links == self.visited_links:
                break

        for link in self.links:
            print link



if __name__ == '__main__':

    C = Crawler()
    C.run()

Output:

In [48]: run BFScrawler.py
0 : http://www.python.org/
1 : http://twistedmatrix.com/trac/
2 : http://www.flowroute.com/
http://www.egenix.com/files/python/mxODBC.html
http://wiki.python.org/moin/PyQt
http://wiki.python.org/moin/DatabaseProgramming/
http://wiki.python.org/moin/CgiScripts
http://wiki.python.org/moin/WebProgramming
http://trac.edgewall.org/
http://www.facebook.com/flowroute
http://www.flowroute.com/
http://www.opensource.org/licenses/mit-license.php
http://roundup.sourceforge.net/
http://www.zope.org/
http://www.linkedin.com/company/flowroute
http://wiki.python.org/moin/TkInter
http://pypi.python.org/pypi
http://pycon.org/#calendar
http://dyn.com/
http://www.google.com/calendar/ical/j7gov1cmnqr9tvg14k621j7t5c%40group.calendar.google.com/public/basic.ics
http://www.pygame.org/news.html
http://www.turbogears.org/
http://www.openbookproject.net/pybiblio/
http://wiki.python.org/moin/IntegratedDevelopmentEnvironments
http://support.flowroute.com/forums
http://www.pentangle.net/python/handbook/
http://dreamhost.com/?q=twisted
http://www.vrplumber.com/py3d.py
http://sourceforge.net/projects/mysql-python
http://wiki.python.org/moin/GuiProgramming
http://software-carpentry.org/
http://www.google.com/calendar/ical/3haig2m9msslkpf2tn1h56nn9g%40group.calendar.google.com/public/basic.ics
http://wiki.python.org/moin/WxPython
http://wiki.python.org/moin/PythonXml
http://www.pytennessee.org/
http://labs.twistedmatrix.com/
http://www.found.no/
http://www.prnewswire.com/news-releases/voip-innovator-flowroute-relocates-to-seattle-190011751.html
http://www.timparkin.co.uk/
http://docs.python.org/howto/sockets.html
http://blog.python.org/
http://docs.python.org/devguide/
http://www.djangoproject.com/
http://buildbot.net/trac
http://docs.python.org/3/
http://www.prnewswire.com/news-releases/flowroute-joins-voxbones-inum-network-for-global-voip-calling-197319371.html
http://www.psfmember.org
http://docs.python.org/2/
http://wiki.python.org/moin/Languages
http://sip-trunking.tmcnet.com/topics/enterprise-voip/articles/341902-grandstream-ip-voice-solutions-receive-flowroute-certification.htm
http://www.twitter.com/flowroute
http://wiki.python.org/moin/NumericAndScientific
http://www.google.com/calendar/ical/b6v58qvojllt0i6ql654r1vh00%40group.calendar.google.com/public/basic.ics
http://freecode.com/projects/pykyra
http://www.xs4all.com/
http://blog.flowroute.com
http://wiki.python.org/moin/PyGtk
http://twistedmatrix.com/trac/
http://wiki.python.org/moin/
http://wiki.python.org/moin/Python2orPython3
http://stackoverflow.com/questions/tagged/twisted
http://www.pycon.org/
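As noted above, this implementation keeps only absolute links. Relative links could be resolved against the page's URL before being added to the queue; here is a minimal sketch using urljoin from the standard library (shown with Python 3's urllib.parse; under Python 2, as in the code above, the same function lives in the urlparse module):

from urllib.parse import urljoin

def absolutize(base_url, hrefs):
    """Resolve every href (relative or absolute) against base_url."""
    return {urljoin(base_url, href) for href in hrefs if href}

In Crawler.open, the result of such a helper could then be merged into self.links instead of discarding everything that does not contain 'http://'.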

Answer 1 (score: 0)

Here is the main crawl method that recursively scrapes links from web pages. The method crawls a URL and puts every newly found URL into a buffer; multiple threads then wait to pop URLs from this global buffer and call the same crawl method on them. (The globals CRAWLED_URLS, VISITED_URLS and CRAWL_BUFFER, the urlcon alias, and the helper methods scrap and checkmax are defined elsewhere in the project linked at the end of this answer.)

def crawl(self,urlObj):
    '''Main function to crawl URL's '''

    try:
        if ((urlObj.valid) and (urlObj.url not in CRAWLED_URLS.keys())):
            rsp = urlcon.urlopen(urlObj.url,timeout=2)
            hCode = rsp.read()
            soup = BeautifulSoup(hCode)
            links = self.scrap(soup)
            boolStatus = self.checkmax()
            if boolStatus:
                CRAWLED_URLS.setdefault(urlObj.url,"True")
            else:
                return
            for eachLink in links:
                if eachLink not in VISITED_URLS:
                    parsedURL = urlparse(eachLink)
                    if parsedURL.scheme and "javascript" in parsedURL.scheme:
                        #print("***************Javascript found in scheme " + str(eachLink) + "**************")
                        continue
                    '''Handle internal URLs '''
                    try:
                        if not parsedURL.scheme and not parsedURL.netloc:
                            #print("No scheme and host found for "  + str(eachLink))
                            newURL = urlunparse(parsedURL._replace(**{"scheme":urlObj.scheme,"netloc":urlObj.netloc}))
                            eachLink = newURL
                        elif not parsedURL.scheme :
                            #print("Scheme not found for " + str(eachLink))
                            newURL = urlunparse(parsedURL._replace(**{"scheme":urlObj.scheme}))
                            eachLink = newURL
                        if eachLink not in VISITED_URLS: #Check again for internal URL's
                            #print(" Found child link " + eachLink)
                            CRAWL_BUFFER.append(eachLink)
                            with self._lock:
                                self.count += 1
                                #print(" Count is =================> " + str(self.count))
                            boolStatus = self.checkmax()
                            if boolStatus:
                                VISITED_URLS.setdefault(eachLink, "True")
                            else:
                                return
                    except TypeError:
                        print("Type error occurred")
        else:
            print("URL already present in visited " + str(urlObj.url))
    except socket.timeout as e:
        print("**************** Socket timeout occurred *******************")
    except URLError as e:
        if isinstance(e.reason, ConnectionRefusedError):
            print("**************** Connection refused error occurred *******************")
        elif isinstance(e.reason, socket.timeout):
            print("**************** Socket timed out error occurred ***************")
        elif isinstance(e.reason, OSError):
            print("**************** OS error occurred *************")
        elif isinstance(e, HTTPError):
            print("**************** HTTP error occurred *************")
        else:
            print("**************** URL error occurred ***************")
    except Exception as e:
        print("Unknown exception occurred while fetching HTML code: " + str(e))
        traceback.print_exc()
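For context, here is a rough, generic sketch of the consumer loop described above: several threads popping URLs off a shared buffer and calling a crawl function on each. This illustrates the pattern only; it is not code from the linked repository, and it glosses over how raw URL strings are turned back into the URL objects that crawl expects.

import threading

def worker(buffer, crawl_fn, lock):
    """Consume URLs from the shared buffer until it is drained."""
    while True:
        with lock:
            if not buffer:
                break          # buffer momentarily empty; a fuller version would wait for producers
            url = buffer.pop()
        crawl_fn(url)

def run_workers(buffer, crawl_fn, n_threads=4):
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(buffer, crawl_fn, lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()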

The complete source code and an explanation are available at https://github.com/tarunbansal/crawler