I am currently working through the basics of web scraping with Scrapy, and I have run into a specific problem where items are being duplicated rather than extended.
The first page I scrape has a series of links that I need to follow in order to scrape additional data. These links are stored as item['link'].
My problem is that when I iterate over these links, issuing a request nested inside the loop, the results are not appended to the original item instance but are returned as new instances instead.
The results therefore look something like this:
{'date': [u'29 June 2015', u'15 September 2015'],
'desc': [u'Audit Committee - 29 June 2015',
u'Audit Committee - 15 September 2015'],
'link': [u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-29-June-2015',
u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-15-September-2015'],
'pdf_url': 'http://www.antrimandnewtownabbey.gov.uk/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-15-September-2015',
'title': [u'2015']}
{'date': [u'29 June 2015', u'15 September 2015'],
'desc': [u'Audit Committee - 29 June 2015',
u'Audit Committee - 15 September 2015'],
'link': [u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-29-June-2015',
u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-15-September-2015'],
'pdf_url': 'http://www.antrimandnewtownabbey.gov.uk/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-29-June-2015',
'title': [u'2015']}
I want them contained in the same object, like this:
{'date': [u'29 June 2015', u'15 September 2015'],
'desc': [u'Audit Committee - 29 June 2015',
u'Audit Committee - 15 September 2015'],
'link': [u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-29-June-2015',
u'/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-15-September-2015'],
'pdf_url': [u'http://www.antrimandnewtownabbey.gov.uk/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-29-June-2015',
u'http://www.antrimandnewtownabbey.gov.uk/Council/Council-and-Committee-Minutes/Audit-Committee/2015/Audit-Committee-15-September-2015'],
'title': [u'2015']}
Here is my current implementation (based largely on the Scrapy tutorial):
def parse(self, response):
    for sel in response.xpath('//div[@class="lower-col-right"]'):
        item = CouncilExtractorItem()
        item['title'] = sel.xpath('header[@class="intro user-content font-set clearfix"]/h1/text()').extract()
        item['link'] = sel.xpath('div[@class="user-content"]/section[@class="listing-item"]/a/@href').extract()
        item['desc'] = sel.xpath('div[@class="user-content"]/section[@class="listing-item"]/a/h2/text()').extract()
        item['date'] = sel.xpath('div[@class="user-content"]/section[@class="listing-item"]/span/text()').extract()
        for url in item['link']:
            full_url = response.urljoin(url)
            request = scrapy.Request(full_url, callback=self.parse_page2)
            request.meta['item'] = item
            yield request

def parse_page2(self, response):
    item = response.meta['item']
    item['pdf_url'] = response.url
    return item
Answer 0 (score: 1)
You need to make the inner XPath expressions context-specific by adding a leading dot:
for sel in response.xpath('//div[@class="lower-col-right"]'):
    item = CouncilExtractorItem()
    item['title'] = sel.xpath('.//header[@class="intro user-content font-set clearfix"]/h1/text()').extract()
    item['link'] = sel.xpath('.//div[@class="user-content"]/section[@class="listing-item"]/a/@href').extract()
    item['desc'] = sel.xpath('.//div[@class="user-content"]/section[@class="listing-item"]/a/h2/text()').extract()
    item['date'] = sel.xpath('.//div[@class="user-content"]/section[@class="listing-item"]/span/text()').extract()
    # ...
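For context, the leading dot is what anchors the expression to the current selector: an XPath that begins with // always searches the entire document, regardless of which selector it is called on, while .// searches only the descendants of that selector. A quick illustration of the call pattern (the selectors here just reuse the ones from the question):

    sel = response.xpath('//div[@class="lower-col-right"]')[0]
    sel.xpath('//section[@class="listing-item"]/a/@href')   # matches document-wide, same result for every sel
    sel.xpath('.//section[@class="listing-item"]/a/@href')  # matches only inside this div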
Answer 1 (score: 1)
The problem is the combination of the following two blocks of code:
for url in item['link']:
    full_url = response.urljoin(url)
    request = scrapy.Request(full_url, callback=self.parse_page2)
    request.meta['item'] = item
    yield request
and
def parse_page2(self, response):
    item = response.meta['item']
    item['pdf_url'] = response.url
    return item
You are creating a new request for every URL, with item attached as its meta element; in the callback you then replace that item's pdf_url field and yield the item. The end result: for each URL, you get a new duplicate item, each with a different pdf_url field.
As it stands, Scrapy has no way of knowing what you intend to do with the item. You need to change your code to: A) keep track of all the URLs, and only yield the item once every one of them has been processed, and B) append to item['pdf_url'] instead of overwriting it, as in the sketch below.
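A minimal sketch of that approach, assuming a shared counter dictionary passed through request.meta (the pending key and the list-valued pdf_url field are illustrative, not part of the original code):

    def parse(self, response):
        for sel in response.xpath('//div[@class="lower-col-right"]'):
            item = CouncilExtractorItem()
            # ... populate title, link, desc and date as before ...
            item['pdf_url'] = []  # B) collect URLs in a list instead of overwriting
            urls = [response.urljoin(url) for url in item['link']]
            pending = {'count': len(urls)}  # one counter shared by all requests for this item
            for full_url in urls:
                request = scrapy.Request(full_url, callback=self.parse_page2)
                request.meta['item'] = item
                request.meta['pending'] = pending
                yield request

    def parse_page2(self, response):
        item = response.meta['item']
        pending = response.meta['pending']
        item['pdf_url'].append(response.url)  # append rather than replace
        pending['count'] -= 1
        if pending['count'] == 0:  # A) yield only once the last response has arrived
            return item

Because every request carries references to the same item and pending objects, the order in which the responses arrive does not matter. Note, though, that if any request fails the counter never reaches zero and the item is never yielded, so a real spider would also want an errback that decrements the counter.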