I've just started learning Scrapy, so I'm following the Scrapy documentation. I just wrote the first spider mentioned on that site.
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)
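For reference, the parse callback derives each output filename from the second-to-last path segment of the URL. A minimal standalone sketch of that split, using plain strings instead of a real Scrapy response object:

```python
# Sketch: how response.url.split("/")[-2] picks the output filename.
start_urls = [
    "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
    "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
]

# Splitting on "/" leaves a trailing empty string (the URLs end with "/"),
# so index -2 is the last real path segment.
filenames = [url.split("/")[-2] for url in start_urls]
print(filenames)  # -> ['Books', 'Resources']
```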
After running the command scrapy crawl dmoz in the project's root directory, it shows the following error.
2015-06-07 21:53:06+0530 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)
2015-06-07 21:53:06+0530 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/usr/lib/python2.7/dist-packages/scrapy/command.py", line 34, in crawler
    self._crawler.configure()
  File "/usr/lib/python2.7/dist-packages/scrapy/crawler.py", line 36, in configure
    self.spiders = spman_cls.from_crawler(self)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 37, in from_crawler
    return cls.from_settings(crawler.settings)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 33, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 23, in __init__
    for module in walk_modules(name):
  File "/usr/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 65, in walk_modules
    submod = __import__(fullpath, {}, {}, [''])
  File "/home/avinash/tutorial/tutorial/spiders/dmoz_spider.py", line 3, in <module>
    class DmozSpider(scrapy.Spider):
AttributeError: 'module' object has no attribute 'Spider'
Answer 0 (score: 6)
You are using an old Scrapy release (0.14.4) with the latest documentation.
Solution: upgrade to the latest version of Scrapy, or read the old docs that match your installed version.
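Answer 0's point can be sketched as a version check. The exact release that introduced the scrapy.Spider shortcut is an assumption here (the 0.22 cutoff is hedged); what is certain from the traceback is that 0.14.4 does not have it, and the class old tutorials used was scrapy.spider.BaseSpider:

```python
# Sketch: decide which spider base class a given Scrapy release exposes.
# The 0.22 cutoff is an assumption; in a real project you would check
# scrapy.__version__ against the docs for your installed release.
def spider_base_class(version):
    major, minor = (int(part) for part in version.split(".")[:2])
    if (major, minor) >= (0, 22):
        return "scrapy.Spider"         # modern shortcut used by current docs
    return "scrapy.spider.BaseSpider"  # what old releases like 0.14.4 provide

print(spider_base_class("0.14.4"))  # -> scrapy.spider.BaseSpider
```

This is why the tutorial code fails exactly at the class statement: on 0.14.4 the scrapy module simply has no Spider attribute to look up.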
Answer 1 (score: 0)
If you get a "Python.h missing" error, install the Python header files (reference):

sudo apt-get install python-dev
Answer 2 (score: -2)
Maybe try:

from scrapy import Spider

Merely importing the module is not enough if you want to use its class.
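For what it's worth, importing a module normally does make its attributes reachable via dotted access, which is why the version mismatch is the more likely culprit here. A quick stdlib illustration, using json as a stand-in:

```python
# Importing a module exposes its attributes via dotted access; a
# "'module' object has no attribute" error usually means the attribute
# genuinely does not exist in the installed version of the library.
import json

assert hasattr(json, "loads")      # dotted access works after "import json"
print(json.loads('{"ok": true}'))  # -> {'ok': True}
```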