Scrapy scraper doesn't scrape past page 1

Asked: 2017-05-20 17:29:16

Tags: python python-3.x web-scraping scrapy scrapy-spider

I am following the Scrapy tutorial here. I believe my code matches the tutorial's, but my scraper only crawls the first page, logs the following message about my first Request to the next page, and then finishes. Did I put the second yield statement in the wrong place?

DEBUG: Filtered offsite request to 'newyork.craigslist.org': <GET https://newyork.craigslist.org/search/egr?s=120>

2017-05-20 18:21:31 [scrapy.core.engine] INFO: Closing spider (finished)

Here is my code:

import scrapy
from scrapy import Request


class JobsSpider(scrapy.Spider):
    name = "jobs"
    allowed_domains = ["https://newyork.craigslist.org/search/egr"]
    start_urls = ['https://newyork.craigslist.org/search/egr/']

    def parse(self, response):
        jobs = response.xpath('//p[@class="result-info"]')

        for job in jobs:
            title = job.xpath('a/text()').extract_first()
            address = job.xpath('span[@class="result-meta"]/span[@class="result-hood"]/text()').extract_first("")[2:-1]
            relative_url = job.xpath('a/@href').extract_first("")
            absolute_url = response.urljoin(relative_url)

            yield {'URL': absolute_url, 'Title': title, 'Address': address}

        # scrape all pages
        next_page_relative_url = response.xpath('//a[@class="button next"]/@href').extract_first()
        next_page_absolute_url = response.urljoin(next_page_relative_url)

        yield Request(next_page_absolute_url, callback=self.parse)

1 Answer:

Answer 0 (score: 1)

OK, so I figured it out. I had to change this line:

allowed_domains = ["https://newyork.craigslist.org/search/egr"]

to this:

allowed_domains = ["newyork.craigslist.org"]

Now it works.
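
The reason: Scrapy's offsite filter compares each request's hostname against the strings in allowed_domains, so a full URL (with scheme and path) never matches anything, and every pagination request was dropped as "offsite". For completeness, here is the full spider with that one-line fix. This is a minimal sketch based on the question's code; the if next_page guard is my addition so the last results page, which has no "next" button, does not yield a Request for None:

import scrapy
from scrapy import Request


class JobsSpider(scrapy.Spider):
    name = "jobs"
    # Hostname only: the offsite filter matches request hosts against
    # these strings, so a full URL here would never match.
    allowed_domains = ["newyork.craigslist.org"]
    start_urls = ['https://newyork.craigslist.org/search/egr/']

    def parse(self, response):
        for job in response.xpath('//p[@class="result-info"]'):
            title = job.xpath('a/text()').extract_first()
            address = job.xpath('span[@class="result-meta"]/span[@class="result-hood"]/text()').extract_first("")[2:-1]
            relative_url = job.xpath('a/@href').extract_first("")
            yield {'URL': response.urljoin(relative_url), 'Title': title, 'Address': address}

        # Follow pagination; the guard (an assumption, not in the original
        # code) skips the final page, which has no "next" button.
        next_page = response.xpath('//a[@class="button next"]/@href').extract_first()
        if next_page:
            yield Request(response.urljoin(next_page), callback=self.parse)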
