Scrapy spider won't move on to the next page

Asked: 2018-11-03 15:28:41

Tags: python scrapy

I am building a scraper with Scrapy for the Swedish e-commerce site Blocket.se. It scrapes the first page as intended, but never moves on to the next page.

The expression for the next-page URL,

response.xpath(u'//a[contains(text(), "Nästa")]/@href').extract()

outputs an "incomplete" link when I try it in the Scrapy shell:

?q=cykel&cg=0&w=1&st=s&c=&ca=11&l=0&md=th&o=2

Does the link have to be "complete" for it to work, like this?

https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&l=0&md=th&o=2
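For reference, such a query-only reference does not need to be completed by hand: Python's standard-library `urljoin` (which Scrapy's `response.urljoin` wraps) resolves it against the page URL, keeping the scheme, host, and path. A quick sketch using the URLs above:

```python
from urllib.parse import urljoin

# The page the spider is currently on:
base = "https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&is=1&l=0&md=th"
# The "incomplete" href the XPath returns:
href = "?q=cykel&cg=0&w=1&st=s&c=&ca=11&l=0&md=th&o=2"

# A "?query"-only reference keeps the base URL's scheme, host, and path:
print(urljoin(base, href))
# https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&l=0&md=th&o=2
```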

Start URL: https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&is=1&l=0&md=th

Full code:

import scrapy

class BlocketSpider(scrapy.Spider):
    name = "blocket"
    start_urls = ["https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&is=1&l=0&md=th"]

    def parse(self, response):
        urls = response.css("h1.media-heading > a::attr(href)").extract()
        for url in urls:
            url = response.urljoin(url)
            yield scrapy.Request(url=url, callback=self.parse_details)


        #follow pagination links
        next_page_url = response.xpath(u'//a[contains(text(), "Nästa")]/@href').extract()
        if next_page_url:
            next_page_url = response.urljoin(next_page_url)
            yield scrapy.Request(url=next_page_url, callback=self.parse)

    def parse_details(self, response):
        yield {
            "Objekt": response.css("h1.h3::text").extract(),
            "Säljare": response.css("li.mrl > strong > a::text").extract(),
            "Uppladdad": response.css("li.mrl > time::text").extract(),
            "Pris": response.css("div.h3::text").extract(),
            "Område": response.css("span.area_label::text").extract(),
            "Bild-URL": response.css("div.item > img::attr(src)").extract(),
        }
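A likely reason the code above never reaches the next page: `.extract()` returns a list of strings, even when there is only one match, so `response.urljoin(next_page_url)` is handed a list rather than a string and raises a TypeError. A minimal standard-library sketch of the difference (the `hrefs` list stands in for the selector result):

```python
from urllib.parse import urljoin

base = "https://www.blocket.se/stockholm?q=cykel&cg=0&w=1&st=s&c=&ca=11&is=1&l=0&md=th"
hrefs = ["?q=cykel&cg=0&w=1&st=s&c=&ca=11&l=0&md=th&o=2"]  # what .extract() yields

try:
    urljoin(base, hrefs)  # a list is not a valid URL reference
except TypeError:
    print("joining a list fails")

# Take the first match (what .extract_first() does) before joining:
print(urljoin(base, hrefs[0]))
```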

1 Answer:

Answer 0 (score: 0)

Yes, Scrapy usually needs complete URLs. But you can keep using urljoin(), or use the response.follow() method:

# extract_first() returns a single string (or None), not a list
next_page_url = response.xpath(u'//a[contains(text(), "Nästa")]/@href').extract_first()
if next_page_url:
    # response.follow() resolves relative URLs against the current page
    yield response.follow(url=next_page_url, callback=self.parse)

More information in the Scrapy Tutorial.
