I set up a CrawlSpider to follow certain links and scrape a news magazine, where the links to each issue follow this URL scheme:
http://example.com/YYYY/DDDD/index.htm, where YYYY is the year and DDDD is the three- or four-digit issue number.
I only want issues 928 and up, and I have the rules below. I don't have any problem connecting to the site, crawling links, or extracting items (so I didn't include the rest of my code). The spider seems determined to follow non-allowed links: it tries to scrape issues 377, 398, and so on, and follows the "culture.htm" and "feature.htm" links. This throws a lot of errors and, while not hugely important, it means a lot of data cleanup. Any suggestions as to what's going wrong?
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class crawlerNameSpider(CrawlSpider):
    name = 'crawler'
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/issues.htm"]

    rules = (
        Rule(SgmlLinkExtractor(allow = ('\d\d\d\d/(92[8-9]|9[3-9][0-9]|\d\d\d\d)/index\.htm', )), follow = True),
        Rule(SgmlLinkExtractor(allow = ('fr[0-9].htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('eg[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('ec[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('op[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('sc[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('re[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('in[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(deny = ('culture.htm', )), ),
        Rule(SgmlLinkExtractor(deny = ('feature.htm', )), ),
    )
Edit: I fixed this with a simpler regex matching the years 2009, 2010, and 2011, but I'm still curious why the rules above don't work, if anyone has any suggestions.
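For reference, the year-based rule described in the edit would look something like the sketch below; the exact pattern is a guess, since the edit doesn't show the actual code:

# A guess at the simpler year-based regex mentioned in the edit above;
# the exact pattern is an assumption, not the asker's actual code.
Rule(SgmlLinkExtractor(allow = ('(2009|2010|2011)/\d+/index\.htm', )), follow = True),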
Answer (score: 8):
You need to pass the deny arguments to the SgmlLinkExtractor that collects the links to follow. And you don't need to create so many Rules if they all call the same parse_item function. I would write your code as:
rules = (
    Rule(SgmlLinkExtractor(
            allow = ('\d\d\d\d/(92[8-9]|9[3-9][0-9]|\d\d\d\d)/index\.htm', ),
            deny = ('culture\.htm', 'feature\.htm'),
        ),
        follow = True,
    ),
    Rule(SgmlLinkExtractor(
            allow = (
                'fr[0-9].htm',
                'eg[0-9]*.htm',
                'ec[0-9]*.htm',
                'op[0-9]*.htm',
                'sc[0-9]*.htm',
                're[0-9]*.htm',
                'in[0-9]*.htm',
            )
        ),
        callback = 'parse_item',
    ),
)
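That also explains why your original rules misbehave: an extractor that has only a deny pattern still matches every other link on the page, and because those two Rules have no callback, follow defaults to True, so they end up following almost everything, including the old issue indexes like 377 and 398. Putting the deny patterns on the extractor that does the following, as above, closes that hole.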
And if those are the real URL patterns used in your parse_item rules, it can be simplified to:
Rule(SgmlLinkExtractor(
        allow = ('(fr|eg|ec|op|sc|re|in)[0-9]*\.htm', ),
    ),
    callback = 'parse_item',
),
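Putting it all together, here is a minimal sketch of the whole spider with the consolidated rules. The imports match the old SgmlLinkExtractor API used in this question, and the parse_item stub is hypothetical, included only so the snippet is self-contained:

# Minimal, self-contained sketch combining the rules above.
# The parse_item body is a hypothetical stub, not the asker's real code.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class crawlerNameSpider(CrawlSpider):
    name = 'crawler'
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/issues.htm"]

    rules = (
        # Follow issue indexes for 928 and up; deny the stray pages in the
        # same extractor that does the following.
        Rule(SgmlLinkExtractor(
                allow = ('\d\d\d\d/(92[8-9]|9[3-9][0-9]|\d\d\d\d)/index\.htm', ),
                deny = ('culture\.htm', 'feature\.htm'),
            ),
            follow = True,
        ),
        # One rule covers all seven article-page patterns.
        Rule(SgmlLinkExtractor(
                allow = ('(fr|eg|ec|op|sc|re|in)[0-9]*\.htm', ),
            ),
            callback = 'parse_item',
        ),
    )

    def parse_item(self, response):
        # Hypothetical stub: extract whatever fields your items need here.
        self.log('Scraping article page %s' % response.url)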