Running Scrapy fails with: No module named _util

Date: 2018-05-14 06:36:54

Tags: python python-2.7 scrapy

I have installed Scrapy and can import it in Python; everything looks fine. But when I try an example from http://scrapy-chs.readthedocs.io/zh_CN/0.24/intro/tutorial.html, it raises an error.

I ran the crawl, and then I got:

2018-05-14 14:24:16 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-05-14 14:24:16 [scrapy.utils.log] INFO: Versions: lxml 3.2.1.0, libxml2 2.9.1, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 18.4.0, Python 2.7.5 (default, Nov 20 2015, 02:00:19) - [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)], pyOpenSSL 0.13.1 (OpenSSL 1.0.1e-fips 11 Feb 2013), cryptography 0.8.2, Platform Linux-3.10.0-327.el7.x86_64-x86_64-with-centos-7.2.1511-Core
2018-05-14 14:24:16 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'tutorial'}
Traceback (most recent call last):
  File "/disk1/wulixin/install/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 90, in _run_print_help
    func(*a, **kw)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/cmdline.py", line 157, in _run_command
    cmd.run(args, opts)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 170, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 198, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 203, in _create_crawler
    return Crawler(spidercls, self.settings)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/crawler.py", line 55, in __init__
    self.extensions = ExtensionManager.from_crawler(self)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/extensions/memusage.py", line 16, in <module>
    from scrapy.mail import MailSender
  File "/disk1/wulixin/install/lib/python2.7/site-packages/scrapy/mail.py", line 25, in <module>
    from twisted.internet import defer, reactor, ssl
  File "/disk1/wulixin/install/lib64/python2.7/site-packages/twisted/internet/ssl.py", line 230, in <module>
    from twisted.internet._sslverify import (
  File "/disk1/wulixin/install/lib64/python2.7/site-packages/twisted/internet/_sslverify.py", line 15, in <module>
    from OpenSSL._util import lib as pyOpenSSLlib
ImportError: No module named _util


5 Answers:

Answer 0 (score: 12)

You need to upgrade pyopenssl:

sudo pip install pyopenssl --user --upgrade
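The upgrade works because the question's startup log shows pyOpenSSL 0.13.1, while Twisted 18.4.0 imports `OpenSSL._util`, a module that only exists in the newer, cryptography-based pyOpenSSL releases. A minimal sketch of that version check; the 0.14 threshold is an assumption about when the rewrite landed, not something stated in the answer:

```python
def parse_version(v):
    """Turn a dotted version string like '0.13.1' into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))


def needs_upgrade(installed, minimum="0.14"):
    """True if the installed pyOpenSSL predates the release assumed to add _util.

    The '0.14' minimum is an assumption based on pyOpenSSL's rewrite on top
    of the cryptography package; adjust if your Twisted needs a newer floor.
    """
    return parse_version(installed) < parse_version(minimum)


# The version reported in the question's startup log:
print(needs_upgrade("0.13.1"))   # → True: old enough to trigger the ImportError
print(needs_upgrade("17.5.0"))   # → False: a modern release imports cleanly
```

Note that `sudo` and `--user` in the command above pull in opposite directions (`--user` installs into the invoking user's home); if the upgrade does not take effect, try one or the other, not both.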

Answer 1 (score: 1)

If you are using the latest version of Twisted, you can try downgrading Twisted:

pip install Twisted==16.4.1
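The incompatibility surfaces at the exact line shown in the traceback (`from OpenSSL._util import lib as pyOpenSSLlib` inside Twisted's `_sslverify.py`). A hedged way to probe for it up front, instead of deep inside a crawl, is to attempt the same import yourself:

```python
# Reproduce the import that Twisted 18.4.0 performs at startup. On a
# pyOpenSSL too old to have _util this raises ImportError, exactly as in
# the question's traceback. (It also fails if pyOpenSSL is absent, so a
# False here means "fix the pyOpenSSL install", not necessarily "too old".)
try:
    from OpenSSL._util import lib as pyOpenSSLlib  # noqa: F401
    compatible = True
except ImportError:
    compatible = False

print("pyOpenSSL/Twisted pair looks compatible:", compatible)
```

Downgrading Twisted merely sidesteps this import; upgrading pyOpenSSL (Answer 0) fixes the root cause.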

Answer 2 (score: 0)

You have not activated the Scrapy environment:

$ source activate ScrapyEnvironment
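Whether the intended environment is actually the one running Scrapy can be confirmed from inside Python itself (`ScrapyEnvironment` above is just the answerer's example name); a small sketch:

```python
import sys

# The interpreter binary actually executing; in an activated virtualenv or
# conda environment this path points inside the environment directory.
print(sys.executable)

# The installation prefix that site-packages (and therefore Scrapy's
# dependencies such as pyOpenSSL and Twisted) are loaded from.
print(sys.prefix)
```

If `sys.prefix` points at the system Python rather than the environment, the shell is picking up the wrong interpreter and any upgrades went to the wrong place.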

Answer 3 (score: 0)

For some Mac OS X users this may not solve the problem, because the older version of pyOpenSSL is never uninstalled. The workaround is to download the .gz source archive from PyPI, extract it, navigate into the directory, and run the following command:

sudo python setup.py install

Then pyOpenSSL updates correctly.

Answer 4 (score: 0)

This is a version issue; updating the relevant package resolves it.