I'm trying to scrape the underlying data in the table on this page: https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries
What I want to do is follow the underlying link in each row and capture:
Here is what I have, but it doesn't seem to work; I keep getting NotImplementedError('{}.parse callback is not defined'.format(self.__class__.__name__)). I believe the XPaths I defined are fine, so I'm not sure what I'm missing.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class UNSCItem(scrapy.Item):
    name = scrapy.Field()
    uid = scrapy.Field()
    link = scrapy.Field()
    reason = scrapy.Field()
    add_info = scrapy.Field()

class UNSC(scrapy.Spider):
    name = "UNSC"
    start_urls = [
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=0',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=1',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=2',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=3',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=4',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=5',
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page=6',
    ]

    rules = Rule(LinkExtractor(allow=('/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries/',)), callback='data_extract')

    def data_extract(self, response):
        item = UNSCItem()
        name = response.xpath('//*[@id="content"]/article/div[3]/div//text()').extract()
        uid = response.xpath('//*[@id="content"]/article/div[2]/div/div//text()').extract()
        reason = response.xpath('//*[@id="content"]/article/div[6]/div[2]/div//text()').extract()
        add_info = response.xpath('//*[@id="content"]/article/div[7]//text()').extract()
        related = response.xpath('//*[@id="content"]/article/div[8]/div[2]//text()').extract()
        yield item
Answer 0 (score: 1)
Try the approach below. It should fetch all the ids and the corresponding names from all seven pages (page=0 through page=6). I suppose you can manage the rest of the fields yourself.
Just run it as is:
import scrapy

class UNSC(scrapy.Spider):
    name = "UNSC"
    start_urls = [
        'https://www.un.org/sc/suborg/en/sanctions/1267/aq_sanctions_list/summaries?type=All&page={}'.format(page)
        for page in range(0, 7)
    ]

    def parse(self, response):
        for item in response.xpath('//*[contains(@class,"views-table")]//tbody//tr'):
            idnum = item.xpath('.//*[contains(@class,"views-field-field-reference-number")]/text()').extract()[-1].strip()
            name = item.xpath('.//*[contains(@class,"views-field-title")]//span[@dir="ltr"]/text()').extract()[-1].strip()
            yield {'ID': idnum, 'Name': name}