String formatting in a list comprehension

Asked: 2017-11-29 20:31:01

Tags: python python-3.x scrapy scrapy-spider

I'm working on a web scraper, and I've stumbled on some odd behavior when using string placeholders in a list comprehension (this is the code from my PyCharm project):

# -*- coding: utf-8 -*-
from arms_transfers.items import ArmsTransferItem
import itertools
import pycountry
import scrapy
import urllib3


class UnrocaSpider(scrapy.Spider):
    name = 'unroca'
    allowed_domains = ['unroca.org']

    country_names = [country.official_name if hasattr(country, 'official_name')
                     else country.name for country in list(pycountry.countries)]
    country_names = [name.lower().replace(' ', '-') for name in country_names]

    base_url = 'https://www.unroca.org/{}/report/{}/'
    url_param_tuples = list(itertools.product(country_names, range(2010, 2017)))
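    # the NameError in the traceback below is raised by this next line: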
    start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]

Here is the error:

Traceback (most recent call last):
  File "anaconda3/envs/scraper/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/cmdline.py", line 148, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 243, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 134, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 61, in from_settings
    return cls(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 25, in __init__
    self._load_all_spiders()
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
    for module in walk_modules(name):
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "anaconda3/envs/scraper/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 9, in <module>
    class UnrocaSpider(scrapy.Spider):
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 19, in UnrocaSpider
    start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 19, in <listcomp>
    start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
NameError: name 'base_url' is not defined

The strange thing is that when I run this in a Jupyter notebook:

import pycountry
import itertools

country_names = [country.official_name if hasattr(country, 'official_name')
                     else country.name for country in list(pycountry.countries)]
country_names = [name.lower().replace(' ', '-') for name in country_names]

base_url = 'https://www.unroca.org/{}/report/{}/'
url_param_tuples = list(itertools.product(country_names, range(2010, 2017)))
start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]

it works exactly as I expected the PyCharm project to:

['https://www.unroca.org/aruba/report/2010/',
 'https://www.unroca.org/aruba/report/2011/',
 'https://www.unroca.org/aruba/report/2012/',
 'https://www.unroca.org/aruba/report/2013/',
 'https://www.unroca.org/aruba/report/2014/',
 'https://www.unroca.org/aruba/report/2015/',
 'https://www.unroca.org/aruba/report/2016/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2010/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2011/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2012/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2013/',...]

The PyCharm project and the Jupyter notebook use the same conda environment and the same Python 3.6.3 interpreter. Can anyone shed some light on what might explain the difference in behavior?
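For what it's worth, the failure doesn't seem to be PyCharm- or Scrapy-specific: the same pattern breaks for me in a bare class body (a minimal sketch; Demo and params are made-up names):

class Demo:
    base_url = 'https://www.unroca.org/{}/report/{}/'
    params = [('aruba', 2010), ('aruba', 2011)]
    # params is found, but base_url raises:
    # NameError: name 'base_url' is not defined
    urls = [base_url.format(country, year) for country, year in params]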

1 Answer:

Answer 0 (score: 1)

To answer my own question: if you need to generate your own list of start URLs for a scrapy.Spider subclass, you should override scrapy.Spider.start_requests(self). The underlying cause of the error is that in Python 3 a class body does not act as an enclosing scope for comprehensions defined inside it: the outermost iterable (url_param_tuples) is evaluated in the class namespace, but every other name in the comprehension, including base_url, is looked up in the comprehension's own scope and then in the module globals, hence the NameError. In the notebook the same code runs at module level, where base_url is a global, so the lookup succeeds. Moving the work into a method avoids the problem entirely. In my case, that looks like this:

import itertools

import pycountry
import scrapy


class UnrocaSpider(scrapy.Spider):
    name = 'unroca'
    allowed_domains = ['unroca.org']

    def start_requests(self):
        # base_url is now an ordinary local variable, so the
        # comprehensions below can see it without any trouble
        country_names = [country.official_name if hasattr(country, 'official_name')
                         else country.name for country in list(pycountry.countries)]
        country_names = [name.lower().replace(' ', '-') for name in country_names]

        base_url = 'https://www.unroca.org/{}/report/{}/'
        url_param_tuples = list(itertools.product(country_names, range(2010, 2017)))
        start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
        for url in start_urls:
            yield scrapy.Request(url, self.parse)
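
If you would rather keep start_urls as a class attribute, another option (just a sketch of the same idea; build_start_urls is an illustrative name, not part of my project) is to build the list in a module-level helper, where the comprehension can see base_url, and assign the result in the class body:

import itertools

import pycountry
import scrapy


def build_start_urls():
    # at function scope, the comprehension can close over base_url normally
    names = [getattr(country, 'official_name', country.name)
             for country in pycountry.countries]
    names = [name.lower().replace(' ', '-') for name in names]
    base_url = 'https://www.unroca.org/{}/report/{}/'
    return [base_url.format(country, year)
            for country, year in itertools.product(names, range(2010, 2017))]


class UnrocaSpider(scrapy.Spider):
    name = 'unroca'
    allowed_domains = ['unroca.org']
    # a plain assignment works; only comprehensions and generator
    # expressions inside the class body hit the scoping issue
    start_urls = build_start_urls()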