Scrapy spider not following links, and errors

Asked: 2017-03-29 02:43:51

Tags: python web-scraping scrapy scrapy-spider

I'm trying to write my first web crawler/data extractor using Scrapy, but I can't get it to follow links. I'm also getting this error:

ERROR: Spider error processing <GET https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles>

I know the spider is scanning the page at least once, because I'm able to extract information from the a tags and h1 elements.

Does anyone know how to make it follow the links on the page and get rid of the error?

import scrapy
from scrapy.linkextractors import LinkExtractor
from wikiCrawler.items import WikicrawlerItem
from scrapy.spiders import Rule


class WikispyderSpider(scrapy.Spider):
    name = "wikiSpyder"

    allowed_domains = ['https://en.wikipedia.org/']

    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse"),
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        items = []
        links = LinkExtractor(canonicalize=True, unique=True).extract_links(response)
        for link in links:
            item = WikicrawlerItem()
            item['url_from'] = response.url
            item['url_to'] = link.url
            items.append(item)
            print(items)
        return items

1 Answer:

Answer 0 (score: 1)

If you want to use link extractors with rules, you need to use the special CrawlSpider class:

from scrapy.spiders import CrawlSpider

class WikispyderSpider(CrawlSpider):
    # ...
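
One caveat worth adding from the Scrapy docs: when writing CrawlSpider rules, avoid naming the callback parse, because CrawlSpider uses the parse method itself to implement its link-following logic. The rule in the question does exactly that (callback="parse"), so even after switching the base class the callback needs a different name, e.g. the parse_link used below.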

Here is a simple spider that follows the links from your start URL and prints out the page titles:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class WikispyderSpider(CrawlSpider):
    name = "wikiSpyder"

    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse_link"),
    )

    def parse_link(self, response):
        print(response.xpath("//title/text()").extract_first())
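
If you also want to produce the url_from / url_to records from the question instead of just printing titles, the callback can yield items. Here is a minimal sketch, assuming the WikicrawlerItem from the question defines url_from and url_to fields:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from wikiCrawler.items import WikicrawlerItem  # assumed to define url_from / url_to fields


class WikispyderSpider(CrawlSpider):
    name = "wikiSpyder"

    # Bare domain only -- a full URL here (as in the question) won't match
    # any host, so the offsite middleware filters out every followed request.
    allowed_domains = ['en.wikipedia.org']
    start_urls = ['https://en.wikipedia.org/wiki/Wikipedia:Unusual_articles']

    rules = (
        Rule(LinkExtractor(canonicalize=True, unique=True), follow=True, callback="parse_link"),
    )

    def parse_link(self, response):
        # Record every outgoing link on the page as an item.
        for link in LinkExtractor(canonicalize=True, unique=True).extract_links(response):
            item = WikicrawlerItem()
            item['url_from'] = response.url
            item['url_to'] = link.url
            yield item

Note there is no custom start_requests here: CrawlSpider generates the initial requests itself and routes the responses through its rules. You can run the spider and export the items with:

scrapy crawl wikiSpyder -o links.json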