Scrapy ModuleNotFoundError: No module named 'example' when running via CrawlerProcess

Date: 2020-04-03 14:43:52

Tags: python scrapy

I have a spider that uses a pipeline. The pipeline works if I run the spider from the command line (obviously modified not to use CrawlerProcess), but when I use CrawlerProcess the module 'example' cannot be found. Please help, this is driving me crazy.

    Traceback (most recent call last):
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
        result = g.send(result)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\crawler.py", line 80, in crawl
        self.engine = self._create_engine()
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\crawler.py", line 105, in _create_engine
        return ExecutionEngine(self, lambda _: self.stop())
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\core\engine.py", line 70, in __init__
        self.scraper = Scraper(crawler)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__
        self.itemproc = itemproc_cls.from_crawler(crawler)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler
        return cls.from_settings(crawler.settings, crawler)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\middleware.py", line 34, in from_settings
        mwcls = load_object(clspath)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
        mod = import_module(module)
      File "e:\Users\Chris\Anaconda3\envs\ScrapyNew\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'example'

My spider code in GatherLaundry.py, where I call it and set up the pipelines, is:-

    process = CrawlerProcess({
        "ITEM_PIPELINES": {
            'scrapy.pipelines.images.ImagesPipeline': 1,
            'example.pipelines.ExamplePipeline': 200,
        },
        "IMAGES_STORE": 'e:\\bub\\laundry',
        "IMAGES_THUMBS": {
            'thumb': (100, 100),
            'small': (470, 470),
        },
        # specifies exported fields and order
        'FEED_EXPORT_FIELDS': ["attribute_set_code", "product_type", "categories", "product_websites",
                               "product_online", "tax_class_name", "visibility", "sku_source", "sku",
                               "media_gallery", "image", "small_image", "thumbnail", "manufacturer",
                               "name", "height", "width", "depth", "weight", "source", "manufacturers_url",
                               "laundry_sku_type", "capacity", "capacity_drying", "color", "connectivity",
                               "dryer_type", "energy_consumption", "water_consumption", "energy_rating",
                               "features", "install_type", "motor", "noise_level_spin", "noise_level_wash",
                               "noise_level_dry", "programs", "quick_wash_capacity_kg", "quick_wash_time_mins",
                               "spin_efficiency", "spin_speed", "price"
                               ],
        'FEED_URI': 'xyx_laundry_items.csv',
        'FEED_FORMAT': 'csv',
    })

    process.crawl(SamsungLaundrySpider)
    process.start()
    create_item_file("xyx_laundry_items.csv", "xyx_out_laundry_items.csv")
    create_image_file("xyx_laundry_items.csv", "xyx_out_laundry_images.csv")

The directory structure is:-

    <folder> bub_bots
          scrapy.cfg
          <folder> example
                __init__.py
                items.py
                log.txt
                middlewares.py
                pipelines.py
                settings.py
                <folder> spiders
                      GatherLaundry.py

I run it from the command prompt in the spiders directory:-

    python GatherLaundry.py

I tried adding it to PYTHONPATH, but it made no difference.

I think it's because when I run it via CrawlerProcess it doesn't read the scrapy.cfg file. Can anyone give me some guidance?
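A fix that often works for this kind of import failure (a sketch, assuming the layout shown above: the project root bub_bots holds scrapy.cfg and the example package, and the spider script sits two directory levels below it) is to put the project root on sys.path and point Scrapy at the project's settings module before the CrawlerProcess is created:

```python
import os
import sys


def add_project_root_to_path(spider_file):
    """Put the Scrapy project root on sys.path so 'example.pipelines'
    (and 'example.settings') can be imported.

    Assumes the layout above: the spider script lives in
    bub_bots/example/spiders/, so the root, the folder containing
    scrapy.cfg, is two directory levels up from the script.
    """
    root = os.path.abspath(
        os.path.join(os.path.dirname(spider_file), os.pardir, os.pardir)
    )
    if root not in sys.path:
        sys.path.insert(0, root)
    # Point Scrapy at the project settings module, which is what the
    # `scrapy crawl` command does for you by reading scrapy.cfg.
    os.environ.setdefault("SCRAPY_SETTINGS_MODULE", "example.settings")
    return root


# At the top of GatherLaundry.py, before any 'example.*' import:
#     add_project_root_to_path(__file__)
# With the path fixed, the process can also load the real project
# settings instead of a hand-rolled dict:
#     from scrapy.utils.project import get_project_settings
#     process = CrawlerProcess(get_project_settings())
```

With that call at the top of GatherLaundry.py, the string 'example.pipelines.ExamplePipeline' in ITEM_PIPELINES should resolve no matter which directory the script is launched from, because Python adds the script's own directory (spiders), not the project root, to sys.path by default.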

    Scrapy 1.6.0
    Python 3.7.3
    anaconda Command line client (version 1.7.2)

Thanks in advance

Chris

0 Answers:

No answers