Problem running the crawl command due to path

Date: 2019-06-21 12:53:26

Tags: python-3.x path scrapy web-crawler

I am following an online tutorial on running a simple crawler from the command prompt. When I try to run the crawler, I get (what I believe is) a series of path-related errors. My PATH is already set up; when I open a command prompt and type "python", everything works fine.

Here is the Python code for my Scrapy spider:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['quotes.toscrape.com']

    def parse(self, response):
        h1_tag = response.xpath('//h1/a/text()').extract_first()
        tags = response.xpath('//*[@class="tag-item"]/a/text()').extract()

        yield {'H1 tag': h1_tag, 'Tags': tags}

When I run it in PyCharm, it runs without any errors and exits with code 0.

Here is the line I use in the command prompt to run the crawler: C:\Users\Kev\Desktop\quotes_spider>scrapy crawl quotes

Here is the error I get from the command prompt:

2019-06-21 08:34:10 [scrapy.core.engine] INFO: Spider opened
2019-06-21 08:34:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-06-21 08:34:10 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-06-21 08:34:10 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\core\engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\spiders\__init__.py", line 83, in start_requests
    yield Request(url, dont_filter=True)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
    self._set_url(url)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: quotes.toscrape.com
2019-06-21 08:34:10 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-21 08:34:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 21, 12, 34, 10, 194671),
 'log_count/ERROR': 1,
 'log_count/INFO': 9,
 'start_time': datetime.datetime(2019, 6, 21, 12, 34, 10, 185685)}
2019-06-21 08:34:10 [scrapy.core.engine] INFO: Spider closed (finished)

What I am basically expecting to get is a "Crawled (200)" response.

Because of the error messages, I assume this is somehow related to the path, but since the path is already set up I don't see how that could be.

Any help would be greatly appreciated, thanks!

1 answer:

Answer 0 (score: 1):

Change start_urls from:

start_urls = ['quotes.toscrape.com']

to:

start_urls = ['http://www.quotes.toscrape.com']

It should start with http(s)://. Only allowed_domains must not have http(s)://.
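For reference, a minimal sketch of the corrected spider (keeping the rest of the question's code unchanged, and using the URL from the answer above) might look like this:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    # allowed_domains takes bare domain names, without a scheme
    allowed_domains = ['quotes.toscrape.com']
    # start_urls must be full URLs, including the http(s):// scheme
    start_urls = ['http://www.quotes.toscrape.com']

    def parse(self, response):
        # Extract the site heading and the list of top tags
        h1_tag = response.xpath('//h1/a/text()').extract_first()
        tags = response.xpath('//*[@class="tag-item"]/a/text()').extract()

        yield {'H1 tag': h1_tag, 'Tags': tags}

With that change, running scrapy crawl quotes from the project directory should produce the expected "Crawled (200)" line instead of the ValueError.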