Empty variable inside a class instance, even though it was explicitly set

Date: 2019-06-10 13:00:30

Tags: python python-2.7 scrapy

When I run the following code:

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    search_url = ''

    def start_requests(self):
        print ('self.search_url is currently: ' + self.search_url)
        yield scrapy.Request(url=self.search_url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

test_spider = QuotesSpider()
test_spider.search_url='http://quotes.toscrape.com/page/1/'

process.crawl(test_spider)
process.start() # the script will block here until the crawling is finished

I get the following error:

self.search_url is currently:
...
   ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url:
...

It seems that inside start_requests, self.search_url is an empty string, even though I explicitly set it to a value before the crawl starts. I can't figure out why.

1 answer:

Answer 0 (score: 1)

The clean way would be to use the constructor __init__() (a sketch of that approach follows the example below), but an even simpler way, and perhaps quicker for what you want, is to move the definition of search_url inside the class. For example:

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):

    name = "quotes"
    search_url = 'http://quotes.toscrape.com/page/1/'

    def start_requests(self):
        print ('search_url is currently: ' + self.search_url)
        yield scrapy.Request(url=self.search_url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

test_spider = QuotesSpider()

process.crawl(test_spider)
process.start()
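
For reference, here is a minimal sketch of the __init__() approach mentioned above. Note that process.crawl() is documented to take a spider class rather than an instance: Scrapy builds the running spider itself, which is why attributes set on a hand-made instance such as test_spider never reach the spider that actually crawls. Any extra keyword arguments passed to process.crawl() are forwarded to the spider's constructor:

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def __init__(self, search_url='', *args, **kwargs):
        # Keyword arguments given to process.crawl() end up here.
        super(QuotesSpider, self).__init__(*args, **kwargs)
        self.search_url = search_url

    def start_requests(self):
        yield scrapy.Request(url=self.search_url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

# Pass the spider *class*; Scrapy instantiates it and forwards the kwargs.
process.crawl(QuotesSpider, search_url='http://quotes.toscrape.com/page/1/')
process.start()

Here search_url is made a constructor argument purely for illustration; the class-attribute version shown above works just as well when the URL is fixed.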