How to use Scrapy to crawl the links on every page of a website

Date: 2015-06-01 18:49:27

Tags: web web-crawler scrapy extract

I am learning Scrapy and I am trying to extract all the links that contain "http://lattes.cnpq.br/" followed by a sequence of numbers from a site such as http://www.ppgcc.ufv.br/, but I do not know which pages of the site contain that information. For example, this page:

http://www.ppgcc.ufv.br/?page_id=697

contains the links I want, which look like this:

(http://lattes.cnpq.br/asequenceofnumber)

What should I do? I tried to use rules, but I do not know how to write the regular expression correctly. Thanks.

Edit 1 ----

I need to search for that type of link on every page of the main site (ppgcc.ufv.br). My goal is to get all the lattes.cnpq.br/<numbers> links, but I do not know which pages they are on. My first attempt was:

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["ppgcc.ufv.br"]
    start_urls = (
        'http://www.ppgcc.ufv.br/',
    )
    rules = [Rule(SgmlLinkExtractor(allow=[r'.*']), follow=True),
             Rule(SgmlLinkExtractor(allow=[r'@href']), callback='parse')]

    def parse(self, response):
        filename = str(random.randint(1, 9999))
        open(filename, 'wb').write(response.body)
        # I'm trying to understand how to use rules correctly

Now I am using this simple code:

class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = [".ppgcc.ufv.br"]
    start_urls = (
        'http://www.ppgcc.ufv.br/',
    )
    rules = [Rule(SgmlLinkExtractor(allow=[r'.*']), follow=True),
            Rule(SgmlLinkExtractor(allow=[r'@href']), callback='parse_links')]
    def parse_links(self, response):
        filename = "Lattes.txt"
        arquivo = open(filename, 'wb')
        extractor = LinkExtractor(allow=r'lattes\.cnpq\.br/\d+')
        for link in extractor.extract_links(response):
            url = link.url
            arquivo.writelines("%s\n" % url)                
            print url

Edit 2 ----

Running the spider gives me the log below:

C:\Python27\Scripts\tutorial3>scrapy crawl example
2015-06-02 08:08:18-0300 [scrapy] INFO: Scrapy 0.24.6 started (bot: tutorial3)
2015-06-02 08:08:18-0300 [scrapy] INFO: Optional features available: ssl, http11
2015-06-02 08:08:18-0300 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial3.spiders', 'SPIDER_MODULES': ['tutorial3
.spiders'], 'BOT_NAME': 'tutorial3'}
2015-06-02 08:08:19-0300 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState

2015-06-02 08:08:19-0300 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMidd
leware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMidd
leware, ChunkedTransferMiddleware, DownloaderStats
2015-06-02 08:08:19-0300 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLe
ngthMiddleware, DepthMiddleware
2015-06-02 08:08:19-0300 [scrapy] INFO: Enabled item pipelines:
2015-06-02 08:08:19-0300 [example] INFO: Spider opened
2015-06-02 08:08:19-0300 [example] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-06-02 08:08:19-0300 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-06-02 08:08:19-0300 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-06-02 08:08:19-0300 [example] DEBUG: Crawled (200) <GET http://www.ppgcc.ufv.br/> (referer: None)
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.cgu.gov.br': <GET http://www.cgu.gov.br/acessoainformacao
gov/>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.brasil.gov.br': <GET http://www.brasil.gov.br/>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.ppgcc.ufv.br': <GET http://www.ppgcc.ufv.br/>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.ufv.br': <GET http://www.ufv.br/>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.dpi.ufv.br': <GET http://www.dpi.ufv.br/>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.portal.ufv.br': <GET http://www.portal.ufv.br/?page_id=84
>
2015-06-02 08:08:19-0300 [example] DEBUG: Filtered offsite request to 'www.wordpress.org': <GET http://www.wordpress.org/>
2015-06-02 08:08:19-0300 [example] INFO: Closing spider (finished)
2015-06-02 08:08:19-0300 [example] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 215,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 18296,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 6, 2, 11, 8, 19, 912000),
         'log_count/DEBUG': 10,
         'log_count/INFO': 7,
         'offsite/domains': 7,
         'offsite/filtered': 42,
         'request_depth_max': 1,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2015, 6, 2, 11, 8, 19, 528000)}
2015-06-02 08:08:19-0300 [example] INFO: Spider closed (finished)

I was looking at the source code of the site and there are many more page links that the crawl did not fetch, so maybe my rules are incorrect.

1 Answer:

Answer 0 (score: 2):

So, a few things first:

1) the rules attribute only works if you extend the CrawlSpider class; it will not work if you extend the simpler scrapy.Spider.

2) if you go the rules + CrawlSpider route, you should not override the default parse callback, because the default implementation is what actually calls the rules, so you want to use another name for your callback.
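
A minimal sketch of points 1) and 2) together (a sketch only; the imports follow the Scrapy 0.24 contrib layout shown in your log, and the rule pattern is just a placeholder):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

class ExampleSpider(CrawlSpider):          # point 1: CrawlSpider, not scrapy.Spider
    name = "example"
    allowed_domains = ["ppgcc.ufv.br"]     # no leading dot; your log shows www.ppgcc.ufv.br itself being filtered as offsite
    start_urls = ['http://www.ppgcc.ufv.br/']

    # point 2: follow every internal link, but hand pages to a callback that is NOT named 'parse'
    rules = [Rule(LinkExtractor(allow=r'.*'), callback='parse_links', follow=True)]

    def parse_links(self, response):
        pass  # link extraction goes here, see point 3 below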

3) to actually extract the links you want, you can use a LinkExtractor inside your callback to scrape the links from the page:

from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider

class MySpider(CrawlSpider):
    ...

    def parse_links(self, response):
        extractor = LinkExtractor(allow=r'lattes\.cnpq\.br/\d+')
        for link in extractor.extract_links(response):
            item = LattesItem()
            item['url'] = link.url
            yield item  # yield the item so Scrapy collects it
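
LattesItem is not defined in the snippet above; a minimal assumption would be an Item with a single url field, for example in items.py:

import scrapy

class LattesItem(scrapy.Item):
    # assumed definition (not shown in the original answer): one field per extracted Lattes URL
    url = scrapy.Field()

The yielded items can then be written out with Scrapy's built-in feed export, e.g. scrapy crawl example -o lattes.json -t json, instead of opening files by hand inside the callback.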

I hope it helps.