ImportError: Error loading object 'scrapy.contrib.memusage.MemoryUsage': No module named uu

Asked: 2015-11-16 11:48:09

Tags: python scrapy

I have been using Scrapy on my Mac for over six months, and today it suddenly stopped working with an error I cannot resolve.

I run it with "scrapy crawl" as usual. I also checked my Twisted installation, which works fine.

Here is the full stack trace of the problem:

2015-11-16 17:07:00+0530 [scrapy] INFO: Scrapy 0.24.6 started (bot: scrapybot)
2015-11-16 17:07:00+0530 [scrapy] INFO: Optional features available: ssl, http11
2015-11-16 17:07:00+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Crawler.spiders', 'SPIDER_MODULES': ['Crawler.spiders', 'Crawler.availability_spiders'], 'COOKIES_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0'}
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 143, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
func(*a, **kw)
File "/Library/Python/2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "/Library/Python/2.7/site-packages/scrapy/commands/crawl.py", line 60, in run
self.crawler_process.start()
File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 92, in start
if self.start_crawling():
File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 124, in start_crawling
return self._start_crawler() is not None
File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 139, in _start_crawler
crawler.configure()
File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 46, in configure
self.extensions = ExtensionManager.from_crawler(self)
File "/Library/Python/2.7/site-packages/scrapy/middleware.py", line 50, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/Library/Python/2.7/site-packages/scrapy/middleware.py", line 29, in from_settings
mwcls = load_object(clspath)
File "/Library/Python/2.7/site-packages/scrapy/utils/misc.py", line 42, in load_object
raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'scrapy.contrib.memusage.MemoryUsage': No module named uu

If anyone has any idea why this is happening, please help.
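One thing worth noting about the traceback above: `uu` is part of the Python 2 standard library, so its failure to import points at a broken or mixed-up interpreter installation rather than a Scrapy bug. A minimal diagnostic sketch (the module names are simply the ones implicated in this question's tracebacks) to see which modules the active interpreter can actually import:

```python
import importlib

def check_modules(names):
    """Try importing each module and report whether it loads cleanly.

    A standard-library module such as uu failing to import usually means
    the interpreter's stdlib is broken or shadowed by another install,
    not that a third-party package is missing.
    """
    results = {}
    for name in names:
        try:
            importlib.import_module(name)
            results[name] = "OK"
        except ImportError as exc:
            results[name] = "MISSING: %s" % exc
    return results

# Modules implicated in the tracebacks in this question
for name, status in sorted(check_modules(["uu", "sqlite3", "_sqlite3"]).items()):
    print("%-10s %s" % (name, status))
```

Running this under the same interpreter that Scrapy uses (check `head -1 /usr/local/bin/scrapy` for its shebang) shows immediately whether the stdlib itself is damaged.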

Edit: after upgrading Scrapy the original problem went away, but I am now hitting a different one:

2015-11-16 19:57:21 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2015-11-16 19:57:21 [scrapy] INFO: Optional features available: ssl, http11
2015-11-16 19:57:21 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Crawler.spiders', 'SPIDER_MODULES': ['Crawler.spiders', 'Crawler.availability_spiders'], 'COOKIES_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0'}
2015-11-16 19:57:21 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'cannot import name pyopenssl'.  Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied.  Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.

2015-11-16 19:57:21 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
Unhandled error in Deferred:
2015-11-16 19:57:21 [twisted] CRITICAL: Unhandled error in Deferred:


Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 71, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 83, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 64, in __init__
    self.scheduler_cls = load_object(self.settings['SCHEDULER'])
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/scheduler.py", line 6, in <module>
    from queuelib import PriorityQueue
  File "/Users/pravesh/Library/Python/2.7/lib/python/site-packages/queuelib/__init__.py", line 1, in <module>
    from queuelib.queue import FifoDiskQueue, LifoDiskQueue
  File "/Users/pravesh/Library/Python/2.7/lib/python/site-packages/queuelib/queue.py", line 5, in <module>
    import sqlite3
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
    from dbapi2 import *
  File "/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sqlite3/dbapi2.py", line 28, in <module>
    from _sqlite3 import *
exceptions.ImportError: No module named _sqlite3
2015-11-16 19:57:21 [twisted] CRITICAL:
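`_sqlite3` is the C extension module behind the `sqlite3` package, and it is built when Python itself is compiled. Given the `/usr/local/Cellar` paths in the traceback this is a Homebrew Python, so a sketch of how one might check for the extension and rebuild it (assuming Homebrew is the package manager in use; adjust `PY` to whichever interpreter Scrapy runs under):

```shell
PY=${PY:-python}   # the interpreter Scrapy runs under; adjust if needed

# Check whether the C extension behind sqlite3 is importable at all
if "$PY" -c "import _sqlite3" 2>/dev/null; then
    echo "sqlite3 extension present"
else
    echo "sqlite3 extension MISSING"
    # On a Homebrew Python, reinstalling usually rebuilds the extension
    # against the SQLite libraries (uncomment to run):
    # brew reinstall python
    # Then re-run this check.
fi
```

Note also that the traceback mixes two site-packages trees (scrapy under /usr/local/lib, queuelib under ~/Library/Python); packages split across two interpreters is a common cause of errors like this, and reinstalling everything into a single interpreter avoids it.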

0 Answers:

There are no answers yet