The following code:
class SiteSpider(BaseSpider):
    name = "some_site.com"
    allowed_domains = ["some_site.com"]
    start_urls = [
        "some_site.com/something/another/PRODUCT-CATEGORY1_10652_-1__85667",
    ]

    rules = (
        Rule(SgmlLinkExtractor(allow=('some_site.com/something/another/PRODUCT-CATEGORY_(.*)', ))),
        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('some_site.com/something/another/PRODUCT-DETAIL(.*)', )), callback="parse_item"),
    )

    def parse_item(self, response):
        .... parse stuff
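As a side note, the `allow` arguments above are ordinary regular expressions, so they can be sanity-checked with Python's `re` module before being wired into rules. The URLs below are hypothetical examples mirroring the shapes in the question:

```python
import re

# The two allow patterns from the rules above (they are regexes, not literal strings)
CATEGORY_RE = re.compile(r'some_site.com/something/another/PRODUCT-CATEGORY_(.*)')
DETAIL_RE = re.compile(r'some_site.com/something/another/PRODUCT-DETAIL(.*)')

# Hypothetical URLs of the shape the spider would encounter
category_url = "http://some_site.com/something/another/PRODUCT-CATEGORY_10652_-1__85667"
detail_url = "http://some_site.com/something/another/PRODUCT-DETAIL_12345"

# re.search matches the pattern anywhere in the URL, much as a link extractor does
print(bool(CATEGORY_RE.search(category_url)))  # True
print(bool(DETAIL_RE.search(detail_url)))      # True
print(bool(DETAIL_RE.search(category_url)))    # False
```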
raises the following error:
Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 1174, in mainLoop
    self.runUntilCurrent()
  File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 796, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 318, in callback
    self._startRunCallbacks(result)
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 424, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python2.6/dist-packages/twisted/internet/defer.py", line 441, in _runCallbacks
    self.result = callback(self.result, *args, **kw)
  File "/usr/lib/pymodules/python2.6/scrapy/spider.py", line 62, in parse
    raise NotImplementedError
exceptions.NotImplementedError:
When I change the callback to "parse" and rename the function to parse, I get no errors, but nothing is scraped. I then changed the name to "parse_items", thinking I might be overriding the parse method by accident. Maybe I'm setting up the link extractor wrong?
What I want to do is parse each ITEM link on the CATEGORY page. Am I going about this completely wrong?
Answer 0 (score: 9)
I needed to change BaseSpider to CrawlSpider. Thanks, Scrapy users!
http://groups.google.com/group/scrapy-users/browse_thread/thread/4adaba51f7bcd0af#
Hi Bob,
Maybe it would work if you changed from BaseSpider to CrawlSpider? BaseSpider doesn't seem to implement Rule; see:
http://doc.scrapy.org/topics/spiders.html?highlight=rule#scrapy.contr ...
-M
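To see why swapping the base class fixes it: BaseSpider's `parse` is an abstract stub that raises `NotImplementedError`, while CrawlSpider supplies its own `parse` that walks the rules and dispatches matching links to their callbacks. The following is a minimal pure-Python mock of that mechanism, not actual Scrapy code:

```python
# Minimal mock (NOT real Scrapy classes) of why BaseSpider raises
# NotImplementedError while CrawlSpider works with rules.

class MockBaseSpider:
    def parse(self, response):
        # BaseSpider's parse is an abstract stub: subclasses must override it
        raise NotImplementedError

class MockCrawlSpider(MockBaseSpider):
    rules = ()  # subclasses fill this in

    def parse(self, response):
        # CrawlSpider implements parse itself: it walks the rules and
        # hands matching responses to each rule's callback
        results = []
        for pattern, callback in self.rules:
            if pattern in response:
                results.append(callback(response))
        return results

class SiteSpider(MockCrawlSpider):
    def __init__(self):
        self.rules = (("PRODUCT-DETAIL", self.parse_item),)

    def parse_item(self, response):
        return "scraped: " + response

spider = SiteSpider()
print(spider.parse("some_site.com/x/PRODUCT-DETAIL_1"))
```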
Answer 1 (score: 4)
By default, Scrapy looks for a parse method in the spider class. Your spider has no parse method; instead of parse you defined parse_item. If you rename parse_item to parse, the problem is solved. Alternatively, you can override the parse method in your spider with the body of parse_item.
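The rename works because the framework invokes the method named `parse` by convention. A tiny hypothetical mini-framework (again, not real Scrapy code) makes the failure mode concrete:

```python
# Hypothetical mini-framework: the engine always calls spider.parse(),
# mirroring how Scrapy's default callback is looked up by name.

class Engine:
    def handle(self, spider, response):
        # the default callback is always named "parse"
        return spider.parse(response)

class WrongSpider:
    # defines parse_item but no parse: the engine's call fails
    def parse_item(self, response):
        return "item from " + response

class FixedSpider:
    # same body, renamed to parse: the engine finds it
    def parse(self, response):
        return "item from " + response

engine = Engine()
print(engine.handle(FixedSpider(), "page1"))  # item from page1
```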