I have some rules that I fetch dynamically from a database and add to my spider:
self.name = exSettings['site']
self.allowed_domains = [exSettings['root']]
self.start_urls = ['http://' + exSettings['root']]

self.rules = [Rule(SgmlLinkExtractor(allow=(exSettings['root'] + '$',)), follow=True)]
denyRules = []

for rule in exSettings['settings']:
    linkRegex = rule['link_regex']

    if rule['link_type'] == 'property_url':
        propertyRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True, callback='parseProperty')
        self.rules.insert(0, propertyRule)
        self.listingEx.append({'link_regex': linkRegex, 'extraction': rule['extraction']})
    elif rule['link_type'] == 'project_url':
        projectRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True)  # not set to crawl yet due to a conflict if the same links appear for both
        self.rules.insert(0, projectRule)
    elif rule['link_type'] == 'favorable_url':
        favorableRule = Rule(SgmlLinkExtractor(allow=(linkRegex,)), follow=True)
        self.rules.append(favorableRule)
    elif rule['link_type'] == 'ignore_url':
        denyRules.append(linkRegex)

# somehow all URLs get ignored if allow is empty and this is put as the first rule
d = Rule(SgmlLinkExtractor(allow=('testingonly',), deny=tuple(denyRules)), follow=True)
# self.rules.insert(0, d)  # I have tried both positions but got the same results
self.rules.append(d)
I have the following rules in my database:
link_regex: /listing/\d+/.+ (property_url)
link_regex: /project-listings/.+ (favorable_url)
link_regex: singapore-property-listing/ (favorable_url)
link_regex: /mrt/ (ignore_url)
And I saw this in my log:
http://www.propertyguru.com.sg/singapore-property-listing/property-for-sale/mrt/125/ang-mo-kio-mrt-station> (referer: http://www.propertyguru.com.sg/listing/8277630/for-sale-thomson-grand-6-star-development-)
Isn't /mrt/ supposed to be denied? Why was the link above still crawled?
Answer (score: 2)
As far as I know, the deny patterns must live in the same SgmlLinkExtractor that holds the allow patterns.

In your case, you created an SgmlLinkExtractor that allows favorable_url ('singapore-property-listing/'). But this extractor doesn't have any deny patterns, so it also extracts the /mrt/ links.
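To see this concretely, here is a minimal sketch using Python's re module (the URL comes from the log above; the sketch assumes the extractor applies its patterns with a regex search over the URL, which matches the behavior described):

import re

# URL from the log above
url = ('http://www.propertyguru.com.sg/singapore-property-listing/'
       'property-for-sale/mrt/125/ang-mo-kio-mrt-station')

# The favorable_url allow pattern matches, so that extractor extracts the link...
print(bool(re.search(r'singapore-property-listing/', url)))  # True

# ...and the ignore_url pattern matches too, but it is never consulted,
# because it lives in a different SgmlLinkExtractor.
print(bool(re.search(r'/mrt/', url)))  # True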
To fix this, you should add the deny patterns to the corresponding SgmlLinkExtractor. See also this related question.
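For example, a sketch of the loop from your question with the ignore_url patterns collected in a first pass and then passed to every extractor (only the branches relevant to this question are shown, and exSettings is assumed to have the shape used above):

# First pass: collect all ignore_url patterns before building any rule
denyRules = [r['link_regex'] for r in exSettings['settings']
             if r['link_type'] == 'ignore_url']
deny = tuple(denyRules)

# Second pass: give every SgmlLinkExtractor the same deny patterns
for rule in exSettings['settings']:
    linkRegex = rule['link_regex']
    if rule['link_type'] == 'property_url':
        self.rules.insert(0, Rule(
            SgmlLinkExtractor(allow=(linkRegex,), deny=deny),
            follow=True, callback='parseProperty'))
    elif rule['link_type'] == 'favorable_url':
        self.rules.append(Rule(
            SgmlLinkExtractor(allow=(linkRegex,), deny=deny),
            follow=True))

This way the /mrt/ pattern is checked by the same extractor that allows 'singapore-property-listing/', so those links are dropped before being followed.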
Maybe there are ways to define global deny patterns, but I haven't seen them.