I'm using Scrapy to crawl a site and fetch all of its pages, but my current rules still let through unwanted URLs, such as comment links like "http://www.example.com/some-article/comment-page-1", alongside the main post URLs. What rules can I add to exclude these unwanted items? Here is my current code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+']), follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+']), callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something
        pass
Answer 0 (score: 2)
SgmlLinkExtractor has an optional parameter called deny: a rule matches only if the allow regular expression matches and the deny regular expression does not.
Example from the docs:
rules = (
    # Extract links matching 'category.php' (but not matching 'subsection.php')
    # and follow links from them (since no callback means follow=True by default).
    Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

    # Extract links matching 'item.php' and parse them with the spider's method parse_item
    Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
)
Perhaps you could simply check whether the URL contains the word comment?
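For example, here is a minimal sketch of your spider with a deny pattern added to both rules; the regex comment-page-\d+ is an assumption based on the sample URL in your question, so adjust it to whatever your site's comment URLs actually look like:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']
    # The deny regex is an assumption from the sample comment URL in the
    # question; any link matching a deny pattern is skipped even when it
    # also matches an allow pattern.
    rules = [
        Rule(SgmlLinkExtractor(allow=[r'/\d+'], deny=[r'comment-page-\d+']),
             follow=True),
        Rule(SgmlLinkExtractor(allow=[r'\d+'], deny=[r'comment-page-\d+']),
             callback='parse_item'),
    ]

    def parse_item(self, response):
        # do something
        pass

Note that in current Scrapy versions SgmlLinkExtractor is deprecated in favor of scrapy.linkextractors.LinkExtractor, which accepts the same allow and deny arguments, so the same approach carries over if you upgrade.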