yield scrapy.Request() fails to crawl the next page

Date: 2018-03-26 11:51:24

Tags: python scrapy web-crawler

The same code works on a different website, but it does not work on this one!

Website: http://quotes.toscrape.com/

It runs without any errors and successfully crawls 8 pages (or however many pages count is set to):

    import scrapy

    count = 8

    class QuotesSpiderSpider(scrapy.Spider):
        name = 'quotes_spider'
        allowed_domains = ['quotes.toscrape.com']
        start_urls = ['http://quotes.toscrape.com/']

        def parse(self, response):
            quotes = response.xpath('//*[@class="quote"]')

            for quote in quotes:
                text = quote.xpath('.//*[@class="text"]/text()').extract_first()
                author = quote.xpath('.//*[@class="author"]/text()').extract_first()

                yield {
                    'Text': text,
                    'Author': author
                }

            global count
            count = count - 1
            if count > 0:
                next_page = response.xpath('//*[@class="next"]/a/@href').extract_first()
                absolute_next_page = response.urljoin(next_page)
                yield scrapy.Request(absolute_next_page)
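
(For reference, the spider above can be run with e.g. scrapy crawl quotes_spider -o quotes.json, which writes the scraped Text/Author items to a JSON file.)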

But it only scrapes the first page of this website:

Website: https://www.goodreads.com/list/show/7

    # -*- coding: utf-8 -*-
    import scrapy

    count = 5

    class BooksSpider(scrapy.Spider):
        name = 'books'
        allowed_domains = ["goodreads.com/list/show/7"]
        start_urls = ["https://goodreads.com/list/show/7/"]

        def parse(self, response):
            books = response.xpath('//tr/td[3]')

            for book in books:
                bookTitle = book.xpath('.//*[@class="bookTitle"]/span/text()').extract_first()
                authorName = book.xpath('.//*[@class="authorName"]/span/text()').extract_first()

                yield {
                    'BookTitle': bookTitle,
                    'AuthorName': authorName
                }

            global count
            count = count - 1

            if count > 0:
                next_page_url = response.xpath('//*[@class="pagination"]/a[@class="next_page"]/@href').extract_first()
                absolute_next_page_url = response.urljoin(next_page_url)
                yield scrapy.Request(url=absolute_next_page_url)

I want to scrape either a limited number of pages or all of the pages of this second website.

1 Answer:

Answer 0 (score: 1):

In allowed_domains you are using a domain that contains a path:

allowed_domains = ["goodreads.com/list/show/7"]

It should be:

allowed_domains = ["goodreads.com"]