ValueError: Missing scheme in request url

Time: 2019-06-10 12:40:44

Tags: python python-2.7 scrapy

When I run the following code:

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    search_url = ''

    def start_requests(self):
        yield scrapy.Request(url=self.search_url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

test_spider = QuotesSpider()
test_spider.search_url='http://quotes.toscrape.com/page/1/'

process.crawl(test_spider)
process.start() # the script will block here until the crawling is finished

I get the following error:

2019-06-10 08:33:01 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 127, in _next
_request
    request = next(slot.start_requests)
  File "quotes_spider.py", line 10, in start_requests
    yield scrapy.Request(url=self.search_url, callback=self.parse)
  File "C:\Python27\lib\site-packages\scrapy\http\request\__init__.py", line 25,
 in __init__
    self._set_url(url)
  File "C:\Python27\lib\site-packages\scrapy\http\request\__init__.py", line 62,
 in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url:
2019-06-10 08:33:01 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-10 08:33:01 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 10, 12, 33, 1, 539000),
 'log_count/ERROR': 1,
 'log_count/INFO': 9,
 'start_time': datetime.datetime(2019, 6, 10, 12, 33, 1, 534000)}
2019-06-10 08:33:01 [scrapy.core.engine] INFO: Spider closed (finished)

When this line executes:

yield scrapy.Request(url=self.search_url, callback=self.parse)

self.search_url appears to be empty, even though I explicitly set it to a value before the function is called. I can't figure out why.

1 Answer:

Answer 0 (score: 1)

This works for me:

process.crawl(test_spider, search_url="http://quotes.toscrape.com/page/1/")
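Some context on why this works, as a minimal sketch assuming Scrapy 1.x on Python 2.7 as in the question: CrawlerProcess builds its own spider instance internally (via the spider's from_crawler/constructor), so an attribute set on a manually created instance like test_spider never reaches the instance that actually crawls, and the class-level default search_url = '' is used instead. Keyword arguments passed to process.crawl() are forwarded to the spider's constructor, and scrapy.Spider.__init__ stores them as instance attributes, so search_url is populated by the time start_requests() runs. The sketch below passes the spider class rather than an instance, which is the pattern used in the Scrapy docs; the rest mirrors the code from the question.

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    search_url = ''  # class-level default, overridden by the crawl() keyword argument

    def start_requests(self):
        # self.search_url is set by Spider.__init__ from the keyword argument passed to crawl()
        yield scrapy.Request(url=self.search_url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

# Pass the spider class (not an instance); search_url is forwarded to the spider constructor
process.crawl(QuotesSpider, search_url="http://quotes.toscrape.com/page/1/")
process.start()  # blocks here until the crawling is finished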