I have a web crawler that scrapes news articles from web pages. I know how to use XPathSelector to extract certain pieces of information from elements on a page, but I can't figure out how to store the URL of the page that was just scraped.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class spidey(CrawlSpider):
    name = 'spidey'
    start_urls = ['http://nytimes.com']  # URLs from which the spider will start crawling

    rules = [
        # r'page/\d+' : regular expression for http://nytimes.com/page/X URLs
        Rule(SgmlLinkExtractor(allow=[r'page/\d+']), follow=True),
        # r'\d{4}/\d{2}/\w+' : regular expression for http://nytimes.com/YYYY/MM/title URLs
        Rule(SgmlLinkExtractor(allow=[r'\d{4}/\d{2}/\w+']), callback='parse_articles'),
    ]
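For context, the item used in the callback below is assumed to be defined roughly like this (a minimal sketch; the actual SpideyItem definition isn't shown here):

from scrapy.item import Item, Field

class SpideyItem(Item):
    # field meant to hold the URL of the scraped article page
    link = Field()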
I want to store every link that matches these rules. What do I need to add to parse_articles so that the link is stored in my item?
def parse_articles(self, response):
    item = SpideyItem()
    item['link'] = ???
    return item
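A minimal sketch of one possible approach, assuming the standard Scrapy Response object passed to the callback: its url attribute holds the address of the page currently being parsed.

def parse_articles(self, response):
    item = SpideyItem()
    # response.url is the URL of the page this callback was invoked for
    item['link'] = response.url
    return item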