I am working on a Scrapy project in Python 3 and deploying the spiders to Scrapinghub. I am also using Google Cloud Storage to store the scraped files, as described in the official documentation here.
The spiders run perfectly fine when I run them locally, and they deploy to Scrapinghub without any errors. I am using scrapy:1.4-py3 as the Scrapinghub stack. When the spiders run there, I get the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1386, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 77, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 102, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings
    mw = mwcls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/media.py", line 68, in from_crawler
    pipe = cls.from_settings(crawler.settings)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/images.py", line 95, in from_settings
    return cls(store_uri, settings=settings)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/images.py", line 52, in __init__
    download_func=download_func)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/files.py", line 234, in __init__
    self.store = self._get_store(store_uri)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/files.py", line 269, in _get_store
    store_cls = self.STORE_SCHEMES[scheme]
KeyError: 'gs'
PS: the 'gs' scheme comes from the path used to store the files, i.e.
    'IMAGES_STORE': 'gs://<bucket-name>/'
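For context, here is a minimal sketch of the relevant settings, assuming the standard ImagesPipeline; the bucket name and project ID are placeholders:

    # settings.py -- sketch; <bucket-name> and the project ID are placeholders
    ITEM_PIPELINES = {
        'scrapy.pipelines.images.ImagesPipeline': 1,
    }
    IMAGES_STORE = 'gs://<bucket-name>/'
    # Only read by Scrapy versions that ship GCS support (1.5+)
    GCS_PROJECT_ID = '<your-gcp-project-id>'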
I have researched this error but have not found any solution. Any help would be greatly appreciated.
Answer 0 (score: 5)
Google Cloud Storage support is a new feature in Scrapy 1.5, so you need to use the scrapy:1.5-py3 stack in Scrapy Cloud.
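In Scrapy Cloud the stack is selected in the project's scrapinghub.yml. A minimal sketch (the project ID 12345 is a placeholder):

    # scrapinghub.yml -- sketch; replace 12345 with your project ID
    project: 12345
    stack: scrapy:1.5-py3

After changing the stack, redeploy with shub deploy. Note that the GCS store also needs the google-cloud-storage package available in the stack, which you can add via your project's requirements file.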