Scrapy: how to use a scraped item as a variable in a dynamic URL

Asked: 2019-04-24 08:09:16

Tags: python web-scraping scrapy

I want to start scraping at the last page of the pagination and work from the highest page number down to the lowest:

https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/page-

The last page (currently page 2267) is dynamic, so I need to scrape that number first to know where the pagination ends; the paginated URLs should then go page-2267, page-2266, and so on.

Here is what I have done:

import scrapy
from dateutil import parser  # assumed import; the original snippet uses parser.parse() without showing it


class TeslamotorsclubSpider(scrapy.Spider):
    name = 'teslamotorsclub'
    allowed_domains = ['teslamotorsclub.com']
    start_urls = ['https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/']

    def parse(self, response):
        last_page = response.xpath('//div[@class = "PageNav"]/@data-last').extract_first()
        for item in response.css("[id^='fc-post-']"):
            datime = item.css("a.datePermalink span::attr(title)").get()
            message = item.css('div.messageContent blockquote').extract()
            datime = parser.parse(datime)
            yield {"last_page": last_page, "message": message, "datatime": datime}

        # NOTE: TeslamotorsclubSpider.last_page is never assigned before this line,
        # so reading it here fails on the first page
        next_page = 'https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/page-' + str(TeslamotorsclubSpider.last_page)
        print(next_page)
        TeslamotorsclubSpider.last_page = int(TeslamotorsclubSpider.last_page)
        TeslamotorsclubSpider.last_page -= 1
        yield response.follow(next_page, callback=self.parse)

I need to scrape the items from the highest page down to the lowest. Please help me, thank you.

3 Answers:

Answer 0 (score: 1)

The pages have a handy link[rel=next] element. So you can restructure your code this way: parse a page, request the next one, parse that page, request the next one, and so on.

def parse(self, response):
    for item in response.css("[id^='fc-post-']"):
        datime = item.css("a.datePermalink span::attr(title)").get()
        message = item.css('div.messageContent blockquote').extract()
        datime = parser.parse(datime)
        yield {"message":message,"datatime":datime}

    next_page = response.css('link[rel=next]::attr(href)').get()
    if next_page:
        yield response.follow(next_page, self.parse)   

UPD: here is code that scrapes the data from the last page back to the first:

import scrapy
from dateutil import parser  # assumed import for parser.parse() below


class TeslamotorsclubSpider(scrapy.Spider):
    name = 'teslamotorsclub'
    allowed_domains = ['teslamotorsclub.com']
    start_urls = ['https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/']
    next_page = 'https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/page-{}'

    def parse(self, response):
        last_page = response.xpath('//div[@class = "PageNav"]/@data-last').get()
        if last_page and int(last_page):
            # iterate from last page down to first
            for i in range(int(last_page), 0, -1):
                url = self.next_page.format(i)
                yield scrapy.Request(url, self.parse_page)

    def parse_page(self, response):
        # parse data on the page; the last page number is the same for every post here
        last_page = response.xpath('//div[@class = "PageNav"]/@data-last').get()
        for item in response.css("[id^='fc-post-']"):
            datime = item.css("a.datePermalink span::attr(title)").get()
            message = item.css('div.messageContent blockquote').extract()
            datime = parser.parse(datime)
            yield {"last_page": last_page, "message": message, "datatime": datime}

Answer 1 (score: 0)

I solved it with the following algorithm:

Start from the first page.

url = url_page1

xpath_next_page = "//div[@class='pageNavLinkGroup']//a[@class='text' and contains(text(), 'Next')]"

Load the page, do your work, and at the end check whether that XPath is present in the HTML; if it is, increment the page number (page += 1) and load the next page.
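A minimal sketch of that approach, assuming the thread URL from the question as the start page and reusing the post selector from the question (the spider name here is made up for the sketch):

import scrapy


class TmcForwardSpider(scrapy.Spider):
    name = 'teslamotorsclub_forward'  # hypothetical name for this sketch
    start_urls = ['https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/']

    # the 'Next' link XPath from the answer above
    xpath_next_page = "//div[@class='pageNavLinkGroup']//a[@class='text' and contains(text(), 'Next')]"

    def parse(self, response):
        # do your work on the current page
        for item in response.css("[id^='fc-post-']"):
            yield {"message": item.css('div.messageContent blockquote').extract()}

        # if the 'Next' link appears in the HTML, follow it (page += 1)
        next_href = response.xpath(self.xpath_next_page + '/@href').get()
        if next_href:
            yield response.follow(next_href, callback=self.parse)

This walks the thread forward page by page, in contrast to the last-to-first order asked for in the question.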

Answer 2 (score: 0)

If you want to go from the last page to the first, try the following:

import scrapy


class TeslamotorsclubSpider(scrapy.Spider):
    name = 'teslamotorsclub'
    start_urls = ['https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/']
    page_start = 'https://teslamotorsclub.com/tmc/threads/tesla-tsla-the-investment-world-the-2019-investors-roundtable.139047/page-{}'
    cbool = False  # becomes True once we have found the last page and jumped to it

    def parse(self, response):
        if not self.cbool:
            last_page = response.xpath('//div[@class = "PageNav"]/@data-last').extract_first()
            self.cbool = True
            yield response.follow(self.page_start.format(int(last_page)), callback=self.parse)

        else:
            for item in response.css("[id^='fc-post-']"):
                message = item.css('div.messageContent blockquote::text').extract()
                yield {"message":message} 

            prev_page = response.css("[class='PageNav'] a:contains('Prev')::attr('href')").get()
            yield {"prev_page":prev_page} #Check it whether it is working
            if prev_page:
                yield response.follow(prev_page, callback=self.parse)