Scrapy crawl with next page not working

Time: 2016-11-25 07:45:40

Tags: python scrapy web-crawler

I want to crawl a website, but it doesn't work when the spider tries to crawl the next page. Here is the spider code. Where is the mistake? Please tell me, thank you very much.

import scrapy
from crawlAll.items import CrawlallItem

class ToutiaoEssayJokeSpider(scrapy.Spider):
    name = "duanzi"
    allowed_domains = ["http://duanziwang.com"]
    start_urls = ['http://duanziwang.com/category/duanzi/page/1']

    def parse(self, response):
        for sel in response.xpath("//article[@class='excerpt excerpt-nothumbnail']"):
            item = CrawlallItem()
            item['Title'] = sel.xpath("//header/h2/a/text()").extract_first()
            item['Text'] = sel.xpath("//p[@class='note']/text()").extract_first()
            item['Views'] = sel.xpath("//p[1]/span[@class='muted'][2]/text()").extract_first()
            item['Time'] = sel.xpath("//p[1]/span[@class='muted'][1]/text()").extract_first()
            yield item
        next_page = response.xpath("//ul/li[@class='next-page']/a/@href").extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

I used print(next_page) to test whether the next_page value is truthy, and it is: it prints a link like http://duanziwang.com/category/duanzi/page/2. So what is wrong with my code?

1 Answer:

Answer 0 (score: 1)

The problem is your allowed_domains parameter. It should not contain the scheme (http); in general it is best to keep only the domain name plus the top-level domain, i.e. domain.com.

If you run the spider and watch the logs, you will see:

[scrapy] DEBUG: Filtered offsite request to 'duanziwang.com': <GET http://duanziwang.com/category/duanzi/page/2>

So try:

    allowed_domains = ["duanziwang.com"]
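For context, here is a minimal sketch of the corrected spider head, reusing the names from the question unchanged; only allowed_domains differs:

import scrapy
from crawlAll.items import CrawlallItem

class ToutiaoEssayJokeSpider(scrapy.Spider):
    name = "duanzi"
    # Bare domain, no scheme: Scrapy's offsite middleware matches request
    # hostnames against these values, and a value like
    # "http://duanziwang.com" never matches, so follow-up requests get
    # filtered as offsite (exactly the DEBUG line shown above).
    allowed_domains = ["duanziwang.com"]
    start_urls = ['http://duanziwang.com/category/duanzi/page/1']

With that change the request for page 2 passes the offsite filter and parse is called again for each following page.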